Amazon's new large-scale model, Olympus, is revealed! With 2 trillion parameters, it could leapfrog GPT-4.
Amazon's "secret weapon" is likely to debut in December. Is the moment coming when it challenges OpenAI?
Amazon, which has kept a low profile in the "battle of the models," is now preparing its secret weapon.
On November 8th, it was reported that Amazon is training its second large-scale language model, codenamed "Olympus," which is likely to be launched in December this year.
This large-scale language model, codenamed Olympus, reportedly has 2 trillion (2,000B) parameters, exceeding the reported parameter count of GPT-4 (previously said to be around 1 trillion).
Amazon plans to integrate "Olympus" into its online retail store and Echo devices, and to use it to provide new features on the AWS platform.
Media analysts argue that, as the largest cloud service provider, Amazon's years of infrastructure building and technological accumulation give it a huge competitive advantage. The emergence of "Olympus" would send a clear signal: in the AI era, Amazon intends to develop its own LLM rather than depend on others for key technology.
It is reported that, owing to technical problems and the sudden rise of ChatGPT, Amazon postponed its large language model "Titan" last year; executives generally felt that ChatGPT was far superior to Titan.
Since then, Amazon has been quietly catching up, trying to narrow the gap as quickly as possible.
In April of this year, Amazon Web Services (AWS) announced the launch of its Bedrock generative AI service and its own large language model "Titan," but neither caused much of a stir.
At its major product launch event in September, Amazon announced that the new version of Alexa will be enhanced with generative AI. Like other generative AI assistants, Alexa will help users draft emails and handle everyday tasks.
Dave Limp, Senior Vice President of Amazon Devices and Services, said the new Alexa voice assistant will eventually be offered as a subscription separate from Amazon Prime.
Amazon's $4 billion investment in Anthropic, OpenAI's biggest rival
In Amazon's AI strategy, beyond the yet-to-be-released "Olympus," its $4 billion investment in October in Anthropic, OpenAI's biggest rival, has drawn particular attention.
Following the investment, Anthropic's models have become part of AWS's offering: customers can access Claude through Amazon Bedrock, allowing AWS to provide state-of-the-art large language model services.
Amazon's developers and engineers will be able to build with Anthropic's models through Bedrock and integrate them into their own businesses, lending strong support to Amazon's AI development. In a joint statement, the two companies said Amazon will hold a minority stake in Anthropic; the valuation has not been disclosed.
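For a sense of what "accessing Claude through Amazon Bedrock" looks like in practice, here is a minimal sketch using boto3's `bedrock-runtime` client. The request shape follows Bedrock's Anthropic Messages API; the exact model ID (`"anthropic.claude-v2"`) and version string are assumptions for illustration, and a real call requires AWS credentials plus model access granted in the Bedrock console.

```python
# Hedged sketch: calling an Anthropic Claude model hosted on Amazon Bedrock.
# The model ID and version string below are illustrative assumptions.
import json


def build_claude_request(prompt: str, max_tokens: int = 256) -> str:
    """Build the JSON request body Bedrock expects for Anthropic's Messages API."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",  # assumed version tag
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })


def invoke_claude(prompt: str) -> str:
    """Send the prompt to Bedrock and return the model's text reply.

    Requires AWS credentials and Bedrock model access; not exercised here.
    """
    import boto3  # deferred so building requests needs no AWS setup
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId="anthropic.claude-v2",  # assumed model ID
        body=build_claude_request(prompt),
    )
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]
```

The request body is built separately from the network call, so the same payload can be reused with other Bedrock-hosted models by swapping the `modelId`.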
At the same time, Amazon's partnership with Anthropic serves two goals: to "spend money to win customers" and to accelerate the development of its in-house AI chips.
On one hand, most AI applications rely on Nvidia's expensive chips, but AWS has introduced its own accelerator chips, Trainium and Inferentia, which can cut the cost of training models and running inference. Anthropic says it will use AWS chips to build and train its models.
On the other hand, AWS prefers to build its own products rather than rely on technology or businesses acquired from other companies. Amazon says its engineers, including those outside AWS, will be able to use Anthropic's models.
Amazon executives have said that generative AI is still in its early stages, with over 100,000 customers currently using AWS's machine learning services. Bedrock, which offers customers both Amazon's own models and third-party models such as Claude, undoubtedly gives AWS a competitive advantage.