Demand Explosion: NVIDIA Chips Out of Stock (Morgan Stanley TMT Conference, translated from Chinese)
NVIDIA's CFO answers various questions from Morgan Stanley analysts.
Morgan Stanley's Brief Meeting with NVIDIA
Participants:
NVIDIA Corporation presenting at Morgan Stanley
Colette M. Kress, Executive Vice President and CFO
Joseph Lawrence Moore, Executive Director, Morgan Stanley
Welcome back, I'm Joe Moore. It's a pleasure to have NVIDIA's CFO, Colette Kress, join us. Before we begin, let me quickly go over the research disclosure.
For important disclosures, please visit the Morgan Stanley research disclosure website at morganstanley.com. If you have any questions, please contact your Morgan Stanley sales representative.
Colette, before we start, I joked that I don't know how you manage your life now, because I know how much of my life is taken up by NVIDIA. And there are 30 people like me, while there is only one of you, and you also have a CFO job to do. So you must be very busy. I truly appreciate you being here.
Joseph Lawrence Moore, Morgan Stanley
Perhaps we can get started. Look, you were here last year too, and we were discussing the importance of ChatGPT to the ecosystem. Your data center revenue was $4 billion then; now you are close to $20 billion every quarter. I won't hold you to the exact number, but it's close to $20 billion. How did you achieve that in a year? I mean, when you were sitting here a year ago, did you ever imagine you might need to reach $20 billion within four quarters? Was it a case of Jensen saying, "Hey, we have to be ready in case we need to reach $20 billion in four quarters"? How did you do it? How did you execute and meet this demand?
Colette M. Kress, Executive Vice President and CFO
So, I also need to make a very brief opening statement. As a reminder, this presentation contains forward-looking statements, and I advise investors to read the reports we have submitted to the SEC to obtain information related to the risks and uncertainties facing our business.
Let's review the past year. Yes, it has been a busy year. But when we were talking on this stage a year ago, the perspective was different. Generative AI was still in its very early stages; people were discussing what it was and what they were doing with ChatGPT, and it was all just beginning.
From our perspective, it was already an important area. We knew OpenAI well; we had partnered with them for years on the work they were doing. You could say it started with deep learning and using GPUs for inference, and it has now become these very important large language models. We have been working in this field all along.
But things have indeed changed, because of the breadth of global interest. And when I say global, I mean every country, every company, every consumer, every CEO in the world now has a real understanding of what AI can do, whether from a monetization perspective or from a company's perspective of efficiency and productivity.
But taking a step back, you have to understand that our company's overall goal has been to focus on accelerated computing for more than ten years, probably more than 15. Our overall mission has been to help people understand a transformation that is coming: with Moore's Law coming to an end, new platforms to drive accelerated computing will be necessary and may last for decades.
AI happens to be the great killer application that makes the adoption of accelerated computing possible from the start. So we are working every day to expand our platform, our systems, our software, everything we can do for the future data center. We are very excited about this. We would call it a turning point, and generative AI is a significant part of that work.
Joseph Lawrence Moore, Morgan Stanley
Great. I want to get to the demand side, but maybe first we could talk about how you have grown so rapidly. I mean, I am increasingly talking to investors outside the semiconductor industry about NVIDIA. When you consider growing a business of this scale and complexity, using specialized multi-chip packaging processes and so on, growing 4 to 5 times is truly remarkable. We have seen companies grow 40% or 50% in similar situations and still be constrained by supply. How did you do it? How did you increase production so aggressively? And I'll touch on a supply chain concern: people worry about the point at which supply catches up with demand. How do you see that? I mean, you are ultimately trying to catch up with demand.
Colette M. Kress, Executive Vice President and CFO
Exactly. The increase in supply has to come from many different places; it's not just one thing that allowed us to do this. For years we have talked about resilience and redundancy in the supply chain, something all businesses have to consider going forward. We simply scaled much faster than we had imagined, but we still leveraged a lot of the infrastructure we had been building.
First of all, remember that many of our suppliers and partners have been working with us for decades. So who called whom first? In fact, both did: how can we help, and how can we help you expand?
Beyond the current capacity of existing suppliers and their expansion plans, we are also bringing on new suppliers, ones with a fresh set of capabilities to build many of the products we are working on. Our focus is on manufacturing cycle time: breaking it down to see what we can do to improve it so we can get inventory to market faster. Over the past year we have made significant progress. Each quarter our goal has been to increase the supply we deliver to customers, and we have achieved that; this year will be the same. So we are very satisfied with this work.
Joseph Lawrence Moore, Morgan Stanley
Thank you. When you look at the recent environment, do you feel close to meeting demand? People look very closely at delivery lead times to answer this question. But there are also issues such as power supply and rack space that your customers are dealing with; in many cases those things have not kept up with the supply you deliver. It seems that end demand for GPUs is still very strong and unmet. So, how do you view your position in meeting this demand?
Colette M. Kress, Executive Vice President and CFO
Yes. When you think about demand, about a year ago we laid out the demand picture. It's not about saying I need this on a particular day; people are simply queuing up for what they want over the foreseeable future, and we are working through a large portion of it. But remember, this is largely based on a very important product we brought to market, the H100. The H100 has been a real success for the platform architecture we created.
But please remember, we have new products about to hit the market, which will enter the next phase of supply and demand management. I may talk more about what we see later on. But the focus is on maintaining the alignment of supply and demand while also assisting our customers. As we bring these new products to the market, understanding the requirements for building these products and constructing data centers will be crucial.
When you step back, suppose you lease or build a data center but have not yet fully configured it. It can take a year to set up everything inside and be ready. The planning process is lengthy, and you also have to consider: what changes come with the launch of new products? How should I think about power? How should I think about the overall configuration of the data center? Our top customers and data center builders have been working for years to prepare these data centers for the transition to accelerated computing and to keep all of this aligned. So we are in a good position.
There is also a lot of locked-up power, power committed to inefficient data center builds that may need to be completely redone. That will be the first thing they want to address. Longer term, how will power procurement for future data centers change? That will be a real need. But all of this is already visible in our work with many of our top clients.
Joseph Lawrence Moore, Morgan Stanley
Jensen made some comments during the conference call, specifically about one of your mega clients extending the depreciation cycle of traditional servers. You suggested, to some extent, that you are displacing some of that demand, or that people are letting legacy server environments age. Is that right? Or do people still want to upgrade those servers but need to prioritize this first? How do you view the competition with traditional server workloads now? Is it just a budget issue, or is it also a functional issue?
Colette M. Kress, Executive Vice President and CFO
Yes, that's a great question. It's hard to pinpoint exactly how it will evolve; there could be many factors. One thing we see in the market is that for roughly the past 10 to 20 years, annual data center CapEx has been around $250 billion and relatively stable. Last year, however, it actually increased, for the first time in a long while. Where is the focus? Of course it's on accelerated computing, but you also see asset lives being extended, letting existing x86 servers stay in service without necessarily being upgraded. When deploying capital, every company asks: what return on investment will I achieve? They prioritize the most important projects, and the most important project right now is staying competitive in AI. Every company will infuse AI into its businesses, so AI has become a significant part of their capital expenditure.
The question is whether they will keep upgrading investments that don't offer a high return. Probably not. Those will be the first things left behind, possibly replaced by more efficient solutions such as accelerated computing and AI. I think you will see this play out as we move forward.
Joseph Lawrence Moore, Morgan Stanley
That's very helpful. Thank you. You mentioned on the conference call that when new products launch, you may not initially be able to meet all demand. Obviously you have officially announced the H200, but there are other products you have not announced yet. Jensen has hinted in some news articles that the next generation of products will also be in short supply. Can you talk about that? How do you already know people will want products that have not even been announced, to the point where they are already prepared to buy?
Colette M. Kress, Executive Vice President and CFO
Yes. An important part of how we have brought architectures to market over the decades is our collaboration with many key customers, relationships we have built over ten years or more. The launch of a new architecture is not a surprise to them, because we are always learning their needs and incorporating them into the architecture. Second, they have a good understanding of the specifications and samples of many products, even in the early stages before the products reach market.
We also have a good understanding of their anticipated needs and what level of product they are looking for, which was very helpful when we started building the new architecture. That is what Jensen's statement referred to: demand may exceed our supply, or at least look quite tight. So the question we face once again is how to meet the demand in front of us.
Joseph Lawrence Moore, Morgan Stanley
Great. I know you will soon say more about these new products, but from all the marketing you have done, and from what we hear from customers, these products look really good. How do you view the transition? Is there a risk of demand for the older products pausing as the new ones launch? When you see truly eye-catching products like the B100 and B200, do you feel you know how all the H100 demand plays out, or is it a transitional phase?
Colette M. Kress, Executive Vice President and CFO
Yes. This partly reflects the work we have done to accelerate our architecture cadence from roughly a 2-year cycle to more like a 1-year cycle. But even within an architecture, we now have the ability to launch other key products that can address demand in the market. The H200 is an example; it complements the H100.
What we see time and again is that once customers are on a certain architecture and committed to it, they have already qualified it in terms of systems, software, and security. That qualification is an important part of their process, and that continuity will drive an important demand cycle for many. Right now everyone is thinking about the H100; there are many people in this room, and many in this city, who have not yet gotten their hands on an H100. So even as we introduce new products, H100 availability remains important for many, whether they have established clusters they want to add to or have not started yet. When you consider new-product availability, potential supply constraints, and qualification periods, the H100 will remain, for many, the best product available in the market even as the next great thing arrives.
Joseph Lawrence Moore, Morgan Stanley
Great. Following up on your comments about the number of unserved customers. You mentioned sovereign spenders and non-traditional enterprise software spenders. Obviously, if you are a sovereign nation, you would want to have your own hardware.
Colette M. Kress, Executive Vice President and CFO
Certainly, there are unique demands in many areas we have yet to tap. We have strong connections in key sectors such as healthcare, finance, automotive, and manufacturing, and we see growing adoption of generative AI across enterprise software companies globally. So there is still a lot of interest from many U.S. domestic companies. We also discussed the unique perspective of sovereign AI. OpenAI and ChatGPT are U.S.-based, built on American language and culture, American slang included. Many other countries want their own versions, tailored to their culture and language, so work on large language models in those regions is crucial and is driven at the level of national sovereignty. You then see companies interested in integrating those large language models into their enterprise applications. So this flow of interest is a significant part of the story. We have a substantial pipeline and are collaborating with these regions, because they have not yet built out the computing capability we often have in the U.S. So we will continue to build for these new enterprises and sovereign nations, as well as products we hope will meet the expectations of our Chinese customers.
Joseph Lawrence Moore, Morgan Stanley
Understood. On China, perhaps you can elaborate on how this will unfold. There is currently a threshold where, if you fall below it, in fact, below 50%, you still need a license. So you can operate within that range, but it may feel like running a car race at 30 miles per hour, somewhat limited. Do you have the ability to build further up within that range, into something the government still views as non-threatening?
Colette M. Kress, Executive Vice President and CFO
Yes. The new export controls cover not only performance but also performance density, and the combination of the two really challenged us to create suitable products for China. Chinese customers are certainly interested in working with NVIDIA; we have worked with them for decades, and equally important is their respect for our software and platform. So we have prepared products for them that stay within the performance review, products that are not viewed as a threat from a government perspective and that still let customers use our software.
We have also worked with the government. We have worked with the U.S. government to make sure they understand our upcoming product launches. That matters not only for us but for our Chinese customers, who are eager to know as well. Why? Because they are looking at long-term use, and they want assurance that the U.S. government is aware of what we are doing there.
Looking ahead, we cannot be certain of any future changes from the U.S. government; we can only abide by the rules we have today. Will the performance allowed in China improve? It's still uncertain. As we prepare to launch new products with higher performance, will that become a consideration for export controls? We have not received any signals yet.
Joseph Lawrence Moore, Morgan Stanley
Alright. On the point from the earnings call that about 40% of data center revenue is tied to inference: that is actually the question I've been asked most since the quarter. It's a significant figure, since we are still relatively early in the use of these models; if the growth trajectory of inference is that clear and strong, it's quite a bullish signal. But how do you know? When I speak with people at the cloud companies, they note that the same GPU is used for both, and they aren't sure how much of the workload is inference. You said on the call it's an estimate, but you believe a conservative one. So perhaps you could elaborate on that 40%.
Colette M. Kress, Executive Vice President and CFO
Yes, this was a good exercise for us. I don't want anyone falling off their chair here. What we actually did was help people understand that we know our largest systems, we know the work we are doing and the collaboration between engineering teams and the companies we work with. So we went to the engineers, studied the projects we run with many customers, and were able to categorize their use cases.
We know we are early in generative AI; many people are still at the stage of building large language models, while some have already shifted to copilots and to monetization. But one of the most important inference workloads is recommendation engines, right? Recommendation engines power the phone of everyone in this room, whether it's news, things you purchase, restaurant suggestions, or anything else. This is a crucial part of marketing that is being redesigned around privacy, and at that point recommendation engines become even more critical.
Search is another very important workload. I understand the excitement about generative AI, but search is still a very important workload. What we are focused on is the future of inference: not the inference of the past 30 years, not simple binary-style responses, but working over massive data with millisecond response requirements. We know this will be crucial as it comes to market. And generative AI is only just beginning; we have a massive recommendation engine and search foundation, which means inference should grow as we progress.
Joseph Lawrence Moore, Morgan Stanley
So, when we hear companies, especially cloud companies, at this conference and elsewhere talking about reducing cost per query as a primary priority, does that lead people toward NVIDIA or away from NVIDIA? How should we think about this dynamic? Because using GPUs for inference is expensive, but evidently very efficient.
Colette M. Kress, Executive Vice President and CFO
You need to break this question down. The cost they need to consider is not just the cost of the systems we provide; they need to think about their total cost of ownership (TCO), everything they are paying for.
For example, coming out of the pandemic, one of the areas people scrutinized most was their electricity bill: understanding where their power was going when, in reality, nothing useful was running.
From the perspective of the system, engineering effort, software, and power, NVIDIA provides the most efficient inference solution. That is why people are turning to it.
It achieves excellent response rates, and it gets you that response in the shortest time; power consumption spikes but comes back down quickly. So you have to look at the overall cost and value of the entire system and everything that goes into the equation. You can't just look at the price of the card. That's not the right equation.
Joseph Lawrence Moore, Morgan Stanley
Yes, that's very helpful. Thank you. So, networking: I believe it's now at a $13 billion annualized run rate. Not a total surprise, since you cited something similar last quarter, I think a $10 billion run rate. But it's still a striking number, given that some of your semiconductor competitors are excited about billion-dollar opportunities in networking. It seems you dominate networking as well as processors when it comes to AI. Could you talk about that number and your ongoing visibility into it?
Colette M. Kress, Executive Vice President and CFO
Yes, it has been amazing work. The Mellanox team, together with NVIDIA, added a crucial piece to data center computing: networking. We can and do have the best systems and processors for accelerated computing, but if you don't have the best network, you undercut the success you get from the compute. So we have worked together for years, and since the acquisition we have been building on our strengths in both networking and processors.
When integrating the network into our work, the focus is mainly on traffic patterns and speed. Traffic patterns within the data center are crucial to the results you can achieve, especially in the inference phase, and you have to consider traffic in every direction, north-south and east-west.
Our InfiniBand platform has become the gold standard for many AI and accelerated computing clusters. Therefore, we often sell our data center computing structures together. This is a continuation of what we have been doing. We also have some great new products about to be launched, including Spectrum-X. Spectrum-X focuses on Ethernet, which is also a critical standard for many enterprises. So now we will have the same advantages as InfiniBand in the Ethernet phase. We are excited about all these developments.
Joseph Lawrence Moore, Morgan Stanley
Excellent. So, to the networking skeptics: there have been concerns that networking sales are driven by shortages and bundling, but from what we have seen that doesn't appear to be the case; there is unmet demand in networking just as elsewhere.
Colette M. Kress, Executive Vice President and CFO
All good things run into challenges at some point when you are creating products like these; they are obviously not ordinary commodities. So there may at times be high demand for our optical cables. But we believe we have addressed most of those issues, working with our suppliers and partners.
Joseph Lawrence Moore from Morgan Stanley
Alright. You mentioned that software and services have reached $1 billion annualized for the first time. Could you elaborate on that? What are its main components?
Colette M. Kress, Executive Vice President and CFO
Yes. We are pleased to have exited the year at an annualized $1 billion level for software, services, and SaaS. The main component you will see there is NVIDIA AI Enterprise, which can be sold standalone or bundled with many of our key platforms; it is crucial for enterprise use. There is also the work we have done from a SaaS perspective with DGX Cloud, and the services we provide helping people build models, including with NeMo and BioNeMo, along with the system support we offer them.
We have a large portfolio of software to choose from, and this will be a significant growth area going forward: we will sell, for example, automotive software along with the underlying infrastructure, and Omniverse at scale is also a big deal. As we continue to develop the enterprise business and the importance of AI Enterprise, and as generative AI develops, companies need licensed software to ensure that, A, it stays up to date with new features, and B, it remains secure. As AI keeps evolving we will see new things emerge, and people need the development layer, like CUDA, to keep up with the latest and greatest in AI. It is also where they get cuDNN for deep neural networks, along with NeMo, BioNeMo, SDKs, and APIs, a complete end-to-end stack fully engineered with our data center stack.
Joseph Lawrence Moore, Morgan Stanley
Great. On the topic of services, I get many questions about DGX Cloud, where you essentially partner with the hyperscale cloud providers to offer a cloud-based service. My questions are: first, how ambitious are you in this business? And second, are some of these hyperscale providers concerned that you might compete with them?
Colette M. Kress, Executive Vice President and CFO
Yes. I believe it is right to think of the cloud service providers (CSPs) as our partners. They are building computing capability, focused on providing deep expertise in compute for a multi-tenant environment. When they talk software with enterprises, it is a great opportunity for them to bring in NVIDIA to help with the software piece. So it's a win-win: the CSPs have built the compute, and we offer the holistic software solutions on top. Both the CSP and the client are pleased to have NVIDIA selling and establishing the software relationship. That is one way it works.
The other approach is for clients to work with NVIDIA directly on software, services, and solutions, whether they are building LLMs or developing applications on top of their LLMs. But bear in mind that we have already procured the computing capacity from the CSPs, so the CSP still gets the business; we are simply working with the client directly on our platform. Both ways work; both are good solutions for them.
Joseph Lawrence Moore, Morgan Stanley
Great, that's very helpful. Perhaps you could touch on competition, since I seem to get a press release every week about custom silicon at the hyperscale providers. There are many startups, and AMD and Intel seem to be getting some revenue traction with their merchant products. How do you view all of this? How much of your focus is on competitors versus on what NVIDIA itself is doing?
Colette M. Kress, Executive Vice President and CFO
Our focus is different from many companies that concentrate on silicon or a specific chip for certain workloads. Step back and you'll see our vision is that of a platform company, one that can provide data center computing in any form you may need. Creating a platform is different from creating chips; our focus is to ensure that at every level of the data center we can provide all the different components, whether compute infrastructure, network infrastructure, or the memory portion. Our goal is to provide a complete supercomputer.
That is our business, and it comes with an end-to-end software stack. As AI continues to evolve, we will keep seeing new things, and people need the development layer, like CUDA, to keep up with the latest and greatest in AI. It's also where they get cuDNN for deep neural networks, and NeMo, BioNeMo, SDKs, and APIs, a complete end-to-end stack fully engineered with our data center stack. So this is very different from creating chips.
We will occasionally see a simple chip win a slot. No problem. But remember, customers have to keep total cost of ownership in mind.
Joseph Lawrence Moore, Morgan Stanley
Great. I have another data center question, but first a financial question, and then I'll open it up to the audience. On your gross margin comments: obviously gross margin is very good in the near term, but you expect it to come down in the second half of the year. Could you talk about that? Is it more conservatism, not wanting to be held to very high gross margin targets, or are there things you must pursue that will lower the gross margin?
Colette M. Kress, Executive Vice President and CFO
Yes. We finished the fourth quarter with gross margins above 70%, and our guidance for the first quarter is at a similar level. We managed the H100 ramp well, working with additional suppliers to improve gross margins along the way. But as we move forward, we will introduce more and different types of products, so we will return to where we were before the H100, to the mid-70% levels for some of this mix. I think that is a good place to be, though it all depends on what the internal mix turns out to be. That's the plan: it's just a different mix, along with our enthusiasm for the H100, which, being so far along, we have really been able to bring to maturity and improve overall manufacturing costs.
Joseph Lawrence Moore, Morgan Stanley
Great. Let me see if the audience has any questions.
Unknown Analyst
We are big fans of Jensen and of NVIDIA leading the AI revolution. I have two questions. The first is about the long-term outlook. For example, some of your competitors, AMD and TSMC, have commented on the long-term market, citing a figure of around $400 billion by 2027.
Joseph Lawrence Moore, Morgan Stanley
By '27, yes.
Unknown Analyst
Question one: if you reverse-engineer the figure from TSMC's comments, it's around $300 billion. So how do you view the upside potential versus those long-term figures?
Question two: you mentioned that you are accelerating product innovation from roughly a 2.5-year cadence to about a 1-year cadence. I think that's a big deal; it will be tough for competitors to catch up. Can you talk about that, B100 after H100? Is there a possibility of further acceleration in the future?
Colette M. Kress, Executive Vice President and CFO
Sure. There are many different numbers out there for the size of the accelerator market, whether for the current year or the future. Let's step back and look at our perspective on the scale of the market. We have talked about the installed base in data centers today and our focus on truly being a data center computing company, on transforming data centers to accelerated computing and AI.
That is the market in front of us; that is the TAM (total addressable market) we are going after. So if you think about the roughly $1 trillion installed base in data centers and our ability to transform all of it to accelerated computing, the opportunity for us is $1 trillion. But if you consider what AI on accelerated computing can also do, from an efficiency and monetization perspective, it may be even larger than the existing installed base. You will be able to handle data you couldn't handle before; you will be able to find solutions more effectively, using AI to find many of them.
So you might see that grow by another $1 trillion. I think on a recent trip Jensen said you might see something close to $2 trillion, not just $1 trillion. So we look at it differently: we are not just an accelerator company, we are an end-to-end accelerated computing company for the data center. We do believe the market is much larger.
Unknown Analyst
Jensen mentioned multimodal inference on the previous earnings call. What progress do you see in multimodal inference?
Like the example we just saw, going from text to video: how do you view that as a lever for inference demand growth? As investors, we care about the incremental drivers of demand growth from here. That's the first question. The second is about government spending: do you see inference in it, or just training?
Colette M. Kress, Executive Vice President and CFO
Great, let's talk about how we think about the inference opportunity in front of us; that's a great question. As we mentioned, inference is about 40% today, and we see it as a driver of future growth. The areas we focus on go beyond standard data: video is one of the most important areas we see, along with recommendation engines. There are also potential uses in biology, and across areas like text to speech and text to video. So yes, this is our focus for the future. The second question was, was there a second question?
Unknown Analyst
Regarding government spending in this area, is it for training, with maybe inference later? Or is it...
Colette M. Kress, Executive Vice President and CFO
Governments are increasing their spending in this area. Initially their focus is on training, training in their own language and their own cultural domain, so building a large model for their country will be the main area of focus. Some may even start at the enterprise level; outside the United States there is government funding assistance and frequent collaboration with businesses, so it's a combination of the two. But after the training phase, working on applications and solutions for their region is also a top priority for them.