Wallstreetcn
2023.08.06 06:23

Job Postings "Leak the Secret": Apple Is Developing Large AI Models to Run on Mobile Devices Instead of the Cloud

On the surface, Apple appears "restrained" about AI, but its strong demand for AI talent cannot be hidden. Requirements revealed in multiple job postings since April expose Apple's long-running research direction: compressing existing large models so they run more efficiently on mobile devices.

In the current wave of artificial intelligence, AI has become a buzzword for every technology company. Apple, however, seems to be an exception: on last week's earnings call for calendar Q2 (Apple's fiscal Q3 2023), the company remained "restrained" on the subject of AI.

But this apparent restraint has not dampened Apple's demand for AI talent. The company may already have begun expanding its talent pool to develop large AI models for mobile devices.

On August 6th, the Financial Times reported that, judging from job postings Apple published between April and July this year, the company is conducting a "large-scale, long-term research project that is expected to impact Apple and its products in the future."

In mid-July, an Apple job posting stated that the company is looking for a senior software engineer to "implement compression and acceleration of large language models in Apple's inference engine," enabling them to run on mobile devices rather than in the cloud.

On July 28th, another Apple job posting said the company hopes to "bring the most advanced foundation models to the phone in your pocket and enable a new generation of machine-learning-based features in a privacy-preserving way."

The Financial Times' analysis suggests that these recruitment requirements reveal Apple's long-running research direction: compressing existing language models so they run more efficiently on mobile devices rather than in the cloud.
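Apple has not disclosed which compression techniques it uses, but the most common starting point for shrinking a model to fit on a phone is post-training quantization: storing weights in 8-bit integers instead of 32-bit floats. The sketch below is purely illustrative (a toy weight matrix, not Apple's method) and shows the 4x size reduction and the small approximation error involved:

```python
import numpy as np

# Toy weight matrix standing in for one layer of a language model
# (illustrative only -- Apple's actual compression methods are not public).
rng = np.random.default_rng(0)
weights_fp32 = rng.standard_normal((1024, 1024)).astype(np.float32)

# Symmetric post-training quantization: map float32 weights to int8
# with a single per-tensor scale factor.
scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.round(weights_fp32 / scale).astype(np.int8)

# At inference time the weights are dequantized (or the matmul is done
# directly in int8), approximating the original values.
weights_dequant = weights_int8.astype(np.float32) * scale

print(f"fp32 size: {weights_fp32.nbytes / 1e6:.1f} MB")
print(f"int8 size: {weights_int8.nbytes / 1e6:.1f} MB")
print(f"max abs error: {np.abs(weights_fp32 - weights_dequant).max():.4f}")
```

Production systems typically quantize per-channel rather than per-tensor, and pair quantization with pruning or distillation, but the memory arithmetic is the same: 4 bytes per weight drops to 1.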

A French AI entrepreneur who recently left a large technology company told the Financial Times that Apple wants to expand its AI talent pool in Paris and is recruiting more aggressively there than other large technology companies:

Apple currently has a small AI research lab in Paris, has recently poached many researchers from Meta, and plans to expand the team further.

Has Apple already started developing large-scale AI models for mobile devices?

During last week's earnings call, Cook mentioned artificial intelligence as sparingly as ever, a contrast that makes Apple stand out against peers such as Microsoft and Alphabet.

When pressed by analysts, Cook said that Apple has been working on generative AI and other models for many years:

We consider artificial intelligence and machine learning core foundational technologies. They are embedded in virtually every product we make and are an integral part of them.

They are absolutely critical to us.

We have been researching artificial intelligence and machine learning, including generative AI, for many years.

Cook added that Apple will continue to use these technologies responsibly to advance its products, and that Apple tends to announce them only when the products launch.

According to two insiders, as early as 2020 Apple spent nearly $200 million to acquire Xnor, a Seattle-based AI startup, beating out other major companies such as Microsoft, Amazon, and Intel. Xnor's main business was researching how to run large AI models on mobile devices.

In mid-July, Wallstreetcn reported that Apple is developing its own generative AI tools. Last year, Apple created a framework called Ajax to build large language models, aiming to unify machine learning development across the company. Using Ajax, Apple has developed a chatbot service similar to ChatGPT, internally referred to as "Apple GPT."

The Battle of Large Models on Mobile Devices

Before Apple's recruitment "leak," Meta had also set its sights on large models for mobile devices.

On July 19th, the latest announcement from Qualcomm and Meta revealed that, starting in 2024, Llama 2 will be able to run on flagship smartphones and PCs:

Customers, partners, and developers can build intelligent virtual assistants, productivity applications, content creation tools, entertainment, and other use cases. These AI capabilities can run even without a network connection, including in airplane mode.

In 2024, running Llama 2 or similar generative AI models on smartphones, PCs, VR/AR headsets, and cars will help developers save on cloud costs and provide users with a more private, reliable, and personalized experience.

Durga Malladi, Qualcomm's Senior Vice President and General Manager of Edge Cloud Computing Solutions, said that to effectively bring generative AI to the mainstream market, AI needs to run both in the cloud and on edge devices such as smartphones, laptops, cars, and IoT devices.

Qualcomm said that, compared with running in the cloud, running Llama 2 and other large language models on edge devices such as smartphones has many advantages: it is more cost-effective, performs better, works offline, and enables more personalized and secure AI services.