Microsoft: From "Software Giant" to "AI Empire"
Microsoft, founded by Bill Gates and Paul Allen in 1975, has long been known as the "Software Giant" for its dominance of the software industry. In recent years, however, the company has undergone a significant transformation and now positions itself as an "AI Empire".
With the rise of artificial intelligence (AI), Microsoft has been actively investing in and developing AI capabilities. The company has made significant progress in areas such as machine learning, natural language processing, and computer vision, enabling it to provide advanced AI solutions to a range of industries.
A centerpiece of this effort is the Azure cloud platform, which offers a wide range of AI services and tools, including Azure Machine Learning, Azure Cognitive Services, and Azure Bot Service. These services let developers and businesses incorporate AI capabilities into their applications and processes with relatively little effort.
Beyond the cloud platform, Microsoft has also focused on AI-powered products and services. Its virtual assistant, Cortana, uses AI to provide personalized assistance, while its research division, Microsoft Research, conducts cutting-edge work in areas such as deep learning and reinforcement learning.
Microsoft has also collaborated with other companies and organizations to promote the development and adoption of AI, forming partnerships with leading AI research institutions such as OpenAI and the Allen Institute for Artificial Intelligence to advance the field and address its ethical implications. Through these efforts, Microsoft is not only expanding its influence in the AI industry but transforming itself into an "AI Empire".
The company's vision is to empower individuals and organizations with AI technology, enabling them to achieve more and drive innovation in the digital age.
In conclusion, Microsoft's journey from "Software Giant" to "AI Empire" reflects its commitment to embracing and harnessing the power of AI. With its extensive AI capabilities and strategic partnerships, Microsoft is well positioned to lead the AI revolution and shape the future of technology.
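To illustrate how the Azure AI services mentioned above are surfaced to developers, here is a minimal sketch of constructing a chat-completions request against the Azure OpenAI service using only the Python standard library. The endpoint, deployment name, and api-version are placeholder assumptions, not values from this article; the request is built but not sent, so no credentials are needed to follow along.

```python
import json
import urllib.request

# Hypothetical values -- substitute your own Azure OpenAI resource,
# deployment name, API version, and key before actually sending.
ENDPOINT = "https://my-resource.openai.azure.com"  # assumption
DEPLOYMENT = "gpt-4"                               # assumption
API_VERSION = "2024-02-01"                         # assumption
API_KEY = "<your-api-key>"

def build_chat_request(prompt: str) -> urllib.request.Request:
    # Builds (but does not send) a chat-completions request; with a valid
    # key, urllib.request.urlopen(req) would execute it.
    url = (f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}"
           f"/chat/completions?api-version={API_VERSION}")
    body = json.dumps({
        "messages": [{"role": "user", "content": prompt}]
    }).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"api-key": API_KEY, "Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Summarize Ignite 2023 in one sentence.")
print(req.full_url)
```

The same pattern (resource endpoint, deployment path, api-version query parameter, api-key header) applies across Azure OpenAI operations; the official SDKs wrap exactly this shape.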
Microsoft's strategy at Ignite 2023 clearly indicates its focus on leading the AI revolution and leveraging its platform heritage to innovate in both hardware and software, in order to maintain its industry leadership.
Microsoft has been developing AI models since 2009 and invested in OpenAI in 2019, gradually growing into a giant of the AI era.
For Microsoft and its ecosystem, this year's Ignite conference was dazzling. Microsoft announced over 100 new AI-centered products and features spanning cloud computing infrastructure, Model as a Service (MaaS), data platforms, and the Copilot AI assistant, showcasing an end-to-end AI vision.
However, there is one detail that should be noted: traditionally, Microsoft's Ignite conference has mainly focused on infrastructure and operations, while the Build conference is primarily for developers.
But in this year's Ignite conference, AI content for developers and machine learning engineers took center stage.
This makes the conference not just an event for developers and IT professionals, but a groundbreaking moment for the entire Microsoft ecosystem.
In the keynote speech, Microsoft CEO Satya Nadella explicitly stated that Microsoft aspires to be a significant force in the AI ecosystem. From developing their own AI acceleration chips to launching the Copilot marketplace, Microsoft has set a long-term strategy for artificial intelligence.
Azure → New AI Operating System; Copilot → New Applications
Microsoft is trying to replicate its glory in the era of software operating systems.
When it comes to building system platforms, Microsoft has deep experience: from the early Windows platform and the development of OLE and COM, to .NET and Visual Studio driving web services in the early 2000s, to the successful launch of the Azure platform over the past decade.
Now, Microsoft hopes to recreate this miracle through artificial intelligence, creating a thriving ecosystem that brings together developers, independent software vendors (ISVs), system integrators, enterprises, and consumers.
This time, Azure becomes the new operating system, the new Windows, providing the runtime and platform services, while Copilots, Microsoft's name for its AI assistants, become the new applications. Foundation models such as GPT-4 form the core of this new operating system.
Just as it did with Visual Studio, Microsoft has invested in a set of developer tools in the form of AI Studio and Copilot Studio. This stack closely mirrors Windows, .NET, and Visual Studio, which dominated the developer field for decades. In just a few months, Microsoft has delivered multiple products embedded with artificial intelligence, from the new Bing to Microsoft 365 to the Windows operating system.
The speed at which Microsoft embraces generative AI is astonishing, and it demonstrates a sense of urgency: the company is committed to becoming a pioneer in artificial intelligence, making AI capabilities more customer-centric.
Satya Nadella may not want Microsoft to miss the next wave in technology, just as the company missed out on search and mobile in the past.
In-house CPUs, GPUs, and DPUs
In the past, CPUs dictated the rules of software architecture and influenced its development. Now, GPUs are also influencing the development of artificial intelligence, and Microsoft wants to directly grasp this key aspect.
At this year's Ignite conference, Microsoft unveiled its first custom CPU, Azure Cobalt, and its first AI accelerator chip, Azure Maia.
Microsoft Azure Cobalt is a cloud-native chip based on the Arm architecture, optimized for performance, power, and cost-effectiveness on general workloads. Nadella stated that the CPU is already powering Microsoft Teams, Azure Communication Services, and parts of Azure SQL, and will be made available to customers next year.
Microsoft Azure Maia is an AI accelerator chip designed for cloud-based training and inference of AI workloads such as OpenAI models, Bing, GitHub Copilot, and ChatGPT. This chip is manufactured using 5-nanometer technology and has 105 billion transistors.
Azure Maia 100 | Image: Microsoft
Microsoft's own DPU, Azure Boost, is also now available. Earlier this year, Microsoft acquired DPU company Fungible to improve the efficiency of Azure data centers. With Azure Boost, software functions such as virtualization, network management, storage management, and security are offloaded to dedicated hardware, allowing the CPU to allocate more cycles to workloads rather than system management. This offloading significantly improves the performance of cloud infrastructure.
Nadella stated that "silicon diversity is a key factor in our ability to support the world's most powerful foundational models and all AI workloads." At this year's Ignite conference, Microsoft not only released two in-house chips but also added the latest AI-optimized chips from industry partners, including the AMD Instinct MI300X and the NVIDIA H100 and H200. On top of this infrastructure, Microsoft also provides foundation models with parameter counts ranging from billions to trillions, to meet developers' differing cost, latency, and performance requirements when building AI applications.
In-house and Open-source Foundation Models
Although Azure is still the preferred platform for enterprises to use OpenAI models, Microsoft is also investing in building its own large-scale models.
As previously reported by Wall Street News, Microsoft is developing its own "small models".
Compared to traditional large language models, Microsoft's Phi-1.5 and Phi-2 are lightweight small language models that require far fewer resources. Phi-1.5 has 1.3 billion parameters and Phi-2 has 2.7 billion, much smaller than Llama 2, which ranges from 7 billion to 70 billion parameters.
These small models are therefore well suited to being embedded in Windows to provide a local Copilot experience without round trips to the cloud. Microsoft has also released extensions for Visual Studio Code that let developers fine-tune these models in the cloud and deploy them locally for offline inference.
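A rough back-of-envelope calculation shows why these parameter counts matter for local deployment. The parameter counts below come from the text; the bytes-per-parameter figures are common weight precisions, not Microsoft-published numbers, and the estimate covers raw weights only (no KV cache or activations):

```python
# Approximate memory needed just to hold model weights, which is why
# ~1-3B parameter models like Phi-1.5/Phi-2 can plausibly run on a
# laptop while a 70B model cannot.
BYTES_PER_PARAM = {"fp16": 2, "int8": 1, "int4": 0.5}

models = {          # parameters, in billions (from the article)
    "Phi-1.5": 1.3,
    "Phi-2": 2.7,
    "Llama 2 7B": 7.0,
    "Llama 2 70B": 70.0,
}

def weight_gb(params_billion: float, precision: str) -> float:
    """Approximate GB of memory for the raw weights alone."""
    return params_billion * 1e9 * BYTES_PER_PARAM[precision] / 1e9

for name, b in models.items():
    print(f"{name:12s} fp16: {weight_gb(b, 'fp16'):6.1f} GB   "
          f"int4: {weight_gb(b, 'int4'):6.1f} GB")
```

By this estimate, Phi-2 needs roughly 5 GB at fp16 and under 2 GB quantized to int4, comfortably within consumer hardware, whereas Llama 2 70B needs on the order of 140 GB at fp16.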
In addition, Microsoft Research has developed the Florence model, which lets users analyze and understand images, videos, and language. Microsoft's model platform, Azure ML, now also supports other open-source foundation models, including Llama, Code Llama, Mistral 7B, Stable Diffusion, Whisper V3, BLIP, CLIP, Falcon, and NVIDIA Nemotron, giving customers a broad and comprehensive selection of foundation models.