"Apple Smart": Focus on small models
Unlike other tech giants chasing "bigger is better" model parameter counts, Apple is focused on weaving AI seamlessly into the operating system to improve the user experience. Its models are small, but its advantage is breadth: enough purpose-built models to cover a wide range of user needs.
At its WWDC keynote early Tuesday morning, Apple recast AI to mean Apple Intelligence rather than Artificial Intelligence, in effect redefining the term. The much-discussed partnership with OpenAI amounts to little more than a ChatGPT interface, which users can choose to use or ignore.
In a sense, Apple has indeed redefined AI: unlike other tech giants pursuing "bigger is better" in model parameters, Apple is more concerned with integrating AI seamlessly into the operating system to optimize the user experience.
Customized Small Models to Ensure User Experience
According to what Apple revealed at WWDC, the key to Apple Intelligence lies in a collection of small models, each built for a specific task inside the operating system.
As Craig Federighi, Apple's Senior Vice President of Software Engineering, noted at WWDC, many of Apple's AI models can run entirely on devices powered by A17+ or M-series chips, eliminating the risk of sending personal data to remote servers.
These models are trained on customized datasets; they have relatively few parameters, limited capability, and none of the versatility of large models such as GPT or Claude. For built-in functions like summarizing or rewriting text, for example, Apple expects the output of such requests to be fairly simple.
This is a deliberate trade-off between speed and compute: most of Apple's in-house models can run on the phone itself, with no cloud server required, which keeps response times short.
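To make the division of labor concrete, here is a minimal sketch of how task-specific small models might be organized. The `TextTask` and `OnDeviceModel` names, and the parameter counts, are hypothetical illustrations under the assumptions above, not Apple's actual API.

```swift
import Foundation

// Hypothetical illustration: each built-in text feature maps to its own
// small, purpose-trained model rather than one general-purpose LLM.
enum TextTask {
    case summarize
    case rewrite(tone: String)
    case proofread
}

// Hypothetical stand-in for a compact model that fits in device memory
// and runs locally on an A17+/M-series chip.
struct OnDeviceModel {
    let name: String
    let parameterCount: Int   // small by design, e.g. a few billion rather than hundreds of billions

    func run(_ input: String) -> String {
        // Real inference would happen here; this sketch just echoes the input.
        return "[\(name)] processed \(input.count) characters"
    }
}

// One model per task keeps each model small while still covering many features.
func model(for task: TextTask) -> OnDeviceModel {
    switch task {
    case .summarize: return OnDeviceModel(name: "summarizer",  parameterCount: 3_000_000_000)
    case .rewrite:   return OnDeviceModel(name: "rewriter",    parameterCount: 3_000_000_000)
    case .proofread: return OnDeviceModel(name: "proofreader", parameterCount: 3_000_000_000)
    }
}
```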
Although the individual models are small, Apple Intelligence makes up for it in numbers. Because Apple has pre-trained many models for different tasks, the system can meet a wide range of user demands, from handling photos and text to complex operations that span multiple apps.
Privacy Comes First
For more complex requests, however, Apple may hand off to third-party models such as OpenAI's ChatGPT or Google's Gemini. In those cases the system asks the user whether they are willing to share the relevant information externally; if no prompt appears, the request is being handled by Apple's in-house models.
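A rough sketch of that consent gate is below. The `Handler` type and the `askUserToShareExternally` prompt are invented for illustration; they assume only what the paragraph above describes and do not reflect Apple's real implementation.

```swift
import Foundation

// Hypothetical illustration of the consent gate: third-party models are only
// reached after an explicit prompt; otherwise the request stays with
// Apple's in-house models and no prompt is shown.
enum Handler {
    case appleInHouse          // no prompt needed
    case thirdParty(String)    // e.g. "ChatGPT"; requires user consent
}

func route(request: String, isComplex: Bool, askUserToShareExternally: (String) -> Bool) -> String {
    let handler: Handler = isComplex ? .thirdParty("ChatGPT") : .appleInHouse

    switch handler {
    case .appleInHouse:
        return "Handled by Apple's own models: \(request)"
    case .thirdParty(let provider):
        // The system must ask before any data leaves Apple's stack.
        guard askUserToShareExternally(provider) else {
            return "User declined; request not sent to \(provider)"
        }
        return "Forwarded to \(provider) with user consent"
    }
}
```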
This mechanism helps ease user privacy concerns; privacy has long been a strategic priority for Apple.
As Wall Street News has previously reported, when a more powerful, cloud-based large model is needed, Apple processes the request on servers built around its own chips. Federighi emphasized that these servers run security tooling written in Swift, and that Apple's AI "only sends the relevant data needed to complete the task" to them, without granting full access to the device's context. Apple has also stated that this data is neither stored for later server access nor used to further train Apple's server-based models.
Apple has not spelled out which operations require cloud processing, partly because the answer may change at any time: tasks that need cloud compute today may run locally on the device in the future. Nor is on-device processing always the first choice; speed is only one of several factors the system weighs when deciding whether to call out to the cloud.
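Since Apple has not published its actual criteria, the sketch below is a hedged guess at what such a multi-factor routing heuristic could look like; the factor names, thresholds, and the `chooseTarget` function are assumptions made purely for illustration.

```swift
import Foundation

// Hypothetical routing heuristic: speed is only one input among several.
struct RequestProfile {
    let estimatedOnDeviceLatency: TimeInterval   // seconds
    let requiredContextSize: Int                 // tokens of context needed
    let onDeviceModelCanHandle: Bool             // does a local model cover this task?
    let batteryLevel: Double                     // 0.0 ... 1.0
}

enum ExecutionTarget {
    case onDevice
    case privateCloudCompute   // the Apple-silicon servers Federighi described
}

func chooseTarget(for profile: RequestProfile) -> ExecutionTarget {
    // If no local model covers the task, the cloud is the only option.
    guard profile.onDeviceModelCanHandle else { return .privateCloudCompute }

    // Even when a local model exists, a very large context or slow local
    // inference on a low battery can tip the decision toward the cloud.
    if profile.requiredContextSize > 8_000 { return .privateCloudCompute }
    if profile.estimatedOnDeviceLatency > 2.0 && profile.batteryLevel < 0.2 {
        return .privateCloudCompute
    }
    return .onDevice
}
```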
Some requests, however, will always run on the device. The most notable is the on-device AI image generator, Image Playground, which stores the complete diffusion model locally and can generate images in three different styles.
Even in its current early testing phase, Image Playground is impressively fast, usually producing an image in just a few seconds, though the results, of course, do not match those of larger models.
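For completeness, here is a hypothetical sketch of a three-style image request of the kind Image Playground handles. The `PlaygroundStyle` enum and `generateImage` function are illustrative placeholders rather than the shipping API; the style names follow the Animation, Illustration, and Sketch options Apple showed at WWDC.

```swift
import Foundation

// Hypothetical sketch of an on-device image request. Because the diffusion
// model lives entirely on the device, no prompt data leaves the phone.
enum PlaygroundStyle: String {
    case animation, illustration, sketch   // the three styles shown at WWDC
}

struct ImageRequest {
    let prompt: String
    let style: PlaygroundStyle
}

// Placeholder for local diffusion inference; a real call would return image
// data after a few seconds of on-device generation.
func generateImage(_ request: ImageRequest) -> String {
    return "Generated a \(request.style.rawValue) image for: \(request.prompt)"
}

// Example usage:
let result = generateImage(ImageRequest(prompt: "a cat surfing", style: .illustration))
print(result)
```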