Apple introduced its take on generative AI, Apple Intelligence, at WWDC 2024 with the goal of offering features that are more personal than the competition, and critically, more private, too. But that’s a tricky needle to thread, not only because the processing demands of some requests mean that information needs to leave the security of your iPhone 15 Pro or Pro Max, but also because Apple’s first third-party AI partner could log information, too.
Luckily, Apple has an approach that should prevent most worst-case privacy violations, one that leverages the structure of Apple Intelligence itself, the company’s powerful custom silicon, and the terms of Apple’s deals with third-party partners. Here’s what Apple currently has planned for Apple Intelligence’s privacy features and how they’ll work when they launch in beta this fall.

How does Apple Intelligence work?
Apple has been leveraging local AI processing on the iPhone for years to improve photos, make Siri suggestions, and more, powered by the custom Neural Engine (what other companies call a neural processing unit) in the company’s A-series and M-series chips. The bundle of features Apple has dubbed Apple Intelligence introduces new large language and diffusion models that run alongside what Apple was doing before.
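For a rough sense of what that local processing looks like from a developer’s seat, here’s a minimal Core ML sketch. Apple hasn’t published the code behind Apple Intelligence, and the model name below is hypothetical; this only illustrates how an app keeps inference pinned to the device’s CPU and Neural Engine.

```swift
import CoreML
import Foundation

// A minimal sketch of on-device inference with Core ML, the kind of local
// processing the Neural Engine accelerates. "SummarizerModel.mlmodelc" is a
// hypothetical compiled model, not anything Apple ships with Apple Intelligence.
func loadLocalModel() throws -> MLModel {
    let config = MLModelConfiguration()
    // Keep inference on the CPU and Neural Engine so the request
    // never has to leave the device.
    config.computeUnits = .cpuAndNeuralEngine

    let url = URL(fileURLWithPath: "SummarizerModel.mlmodelc")
    return try MLModel(contentsOf: url, configuration: config)
}
```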
Using the knowledge stored locally on your iPhone, iPad, or Mac about your life, like messages, emails, and calendar events, Apple is able to feed those new models the information necessary to process a request, all locally on your device. For more complex requests, Apple relies on new servers it calls Private Cloud Compute to access larger models running on Apple Silicon. And for requests that require a large amount of outside knowledge, Apple can also send information to ChatGPT to get answers.

Based on what you ask Siri for, whether it’s information about upcoming calendar events or an image of a dog riding a bicycle, Apple Intelligence determines what information needs to be shared and whether the request can stay on your phone or needs to travel to Private Cloud Compute or ChatGPT to be completed. Along the way, the goal is for everything to stay nearly as secure as it would have been before Apple Intelligence.
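Apple hasn’t shared its actual orchestration code, but the tiered routing it describes boils down to something like this Swift sketch. Every name here is illustrative, not a real Apple API.

```swift
// Hypothetical sketch of Apple Intelligence's tiered request routing.
// None of these types exist in Apple's SDKs; they only illustrate the flow.
enum Destination {
    case onDevice             // default: local models on the Neural Engine
    case privateCloudCompute  // larger Apple models on Apple Silicon servers
    case chatGPT              // third-party model, gated behind explicit consent
}

struct AIRequest {
    let needsOutsideKnowledge: Bool  // e.g. facts the device can't know
    let exceedsLocalCapacity: Bool   // too complex for on-device models
}

func route(_ request: AIRequest, userApprovedChatGPT: Bool) -> Destination {
    if request.needsOutsideKnowledge && userApprovedChatGPT {
        return .chatGPT
    }
    if request.exceedsLocalCapacity {
        return .privateCloudCompute
    }
    return .onDevice
}
```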
How does Private Cloud Compute keep Apple Intelligence secure?
It’s not clear how many of Apple Intelligence’s skills will require extra help from servers, but seemingly enough of them that Apple branded the server architecture used to complete them. Private Cloud Compute is Apple’s attempt to extend the privacy and security of the company’s hardware to the server rack. As Apple’s Senior Vice President of Software Engineering Craig Federighi shared in a Q&A held after the company’s keynote, “It’s essential that you know no one – not Apple, not anyone else, can access the information used to process your request.”
Private Cloud Compute runs on Apple Silicon, reportedly the M2 Ultra, and is designed to be impermanent and easily inspectable. Apple claims that after a request is processed, any data used is deleted. The IP address of the device making the request is also masked, not unlike the iCloud Private Relay feature offered as part of iCloud+. According to Apple’s press release, “independent experts can inspect the [Private Cloud Compute] code that runs on Apple Silicon servers to verify privacy,” and Private Cloud Compute “cryptographically ensures” that Apple devices don’t talk to a server unless “its software has been publicly logged for inspection.”
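That “publicly logged” guarantee works like a transparency log: the device refuses to talk to a server unless the server’s software measurement appears in a public record researchers can audit. Here’s a hypothetical Swift sketch of that check; the real Private Cloud Compute protocol is far more involved, and these types are purely illustrative.

```swift
import Foundation

// Hypothetical sketch of the attestation check described above.
// Not Apple's actual protocol; it only illustrates refusing servers
// whose software hasn't been publicly logged for inspection.
struct ServerAttestation {
    let softwareMeasurement: Data  // hash identifying the server's software image
}

struct TransparencyLog {
    let publishedMeasurements: Set<Data>  // entries researchers can inspect

    func contains(_ measurement: Data) -> Bool {
        publishedMeasurements.contains(measurement)
    }
}

func shouldSendRequest(to server: ServerAttestation,
                       against log: TransparencyLog) -> Bool {
    // Only talk to servers whose software appears in the public log.
    log.contains(server.softwareMeasurement)
}
```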

If it all works as designed, these features combine to ensure that Private Cloud Compute never retains or learns any information it doesn’t need, and never even connects to your device in the first place unless its software has been publicly verified.
Can Apple Intelligence maintain privacy if it connects to ChatGPT?
The wrinkle in all of Apple’s private AI plans is that it’s partnered with OpenAI, a company notoriously hungry for data to train its models on. Apple Intelligence can send requests to ChatGPT if they require outside knowledge, or if users specifically want text or images generated by OpenAI models. As a first layer of protection, you have to agree to send your request and the necessary information to ChatGPT before Apple Intelligence does anything. Beyond that, Apple says that IP addresses are again obscured, and OpenAI will not store any requests made. The exception: if you have a ChatGPT Plus account and choose to connect it to Apple Intelligence, you’re subject to OpenAI’s usual privacy policy.
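In code terms, the consent gate Apple describes might look like this hypothetical Swift sketch. The names are illustrative, not Apple’s or OpenAI’s actual APIs.

```swift
// Hypothetical sketch of the per-request consent gate described above.
enum PrivacyTerms {
    case appleNegotiated  // IP obscured, requests not stored by OpenAI
    case openAIStandard   // applies when a ChatGPT Plus account is linked
}

/// Returns the terms a forwarded request falls under, or nil if nothing is sent.
func forwardToChatGPT(userConsentedToThisRequest: Bool,
                      hasLinkedPlusAccount: Bool) -> PrivacyTerms? {
    // First layer of protection: nothing leaves the device without approval.
    guard userConsentedToThisRequest else { return nil }
    return hasLinkedPlusAccount ? .openAIStandard : .appleNegotiated
}
```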
Apple Intelligence’s security features are good until proven otherwise
Regardless of where you land on Apple’s new AI features, the company has taken a thoughtful approach to privacy in an inherently less secure cloud-based software environment. The design of Apple Intelligence and Private Cloud Compute really does seem like it could prevent the worst from happening with the private information these models need to see in order to be helpful.
It’s also all offered at no additional cost to iPhone, iPad, and Mac owners, something few other AI companies can say at this point. Now that the details are public, it’s in the hands of security researchers to determine whether Apple’s plan holds up and is actually as secure as the company says. Ultimately, Apple would love for more of Apple Intelligence to live on your devices rather than in the cloud, but until that happens, this will have to do.
