2024.06.17 12:27

Huang Renxun's latest dialogue: Future internet traffic will decrease significantly, and there will be more real-time generation in computing

In the future, internet traffic will decrease significantly, and computing will rely more on real-time generation. Huang Renxun emphasizes that generative AI is growing exponentially and that enterprises must adapt quickly to take advantage of it. Open-source and closed-source AI models will coexist, and enterprises should draw on the strengths of each to advance the development and application of AI. Finally, AI development must account for energy efficiency and sustainability, reducing energy consumption to deliver more environmentally friendly intelligent solutions.

Key Points:

  1. Huang Renxun emphasized that generative AI is growing exponentially, and companies need to quickly adapt and utilize this technology rather than wait and risk falling behind the pace of technological development.

  2. Huang Renxun believes that open-source and closed-source AI models will coexist, and companies need to leverage their respective strengths to promote the development and application of AI technology.

  3. Huang Renxun proposed that AI development must account for energy efficiency and sustainability: by optimizing the use of computing resources and emphasizing the inference and generative capabilities of AI models, energy consumption can be reduced, enabling more environmentally friendly intelligent solutions.

  4. With the continuous accumulation of data and the continuous advancement of intelligent technology, customer service will become a key area for companies to achieve intelligent transformation.

According to foreign media reports, at the recent 2024 Databricks Data + AI Summit, Huang Renxun, the founder and CEO of NVIDIA, had a fascinating conversation with Ali Ghodsi, the co-founder and CEO of Databricks. Their dialogue showcased the importance and development trends of artificial intelligence and data processing technologies in modern enterprises, emphasizing the crucial role of technological innovation, data processing capabilities, and energy efficiency in driving corporate transformation and industry development.

In the dialogue, Huang Renxun looked ahead to the future of data processing and generative artificial intelligence. He pointed out that the business data of each company is like an untapped gold mine, containing immense value, but extracting profound insights and intelligence from it has always been a challenging task.

Huang Renxun also mentioned that open-source models like Llama and DBRX are driving the transformation of companies into AI companies, activating a global AI movement, promoting technological development, and fostering corporate innovation. Through the collaboration between NVIDIA and Databricks, the two companies will leverage their expertise in accelerating computing and generative artificial intelligence to bring unprecedented benefits to users.

The following is a transcript of the dialogue:

Host: I am very excited to introduce our next guest, an outstanding figure who needs no introduction - the one and only "rock star" CEO of NVIDIA, Huang Renxun. Please come on stage. Thank you very much for being here! I would like to start with NVIDIA's remarkable performance. Your company's market value has reached 3 trillion US dollars. Did you imagine five years ago that the world would evolve so rapidly and arrive at such a remarkable scene today?

Huang Renxun: Of course! I anticipated this from the beginning.

Host: Truly admirable. Can you provide some advice for the CEOs in the audience on how to achieve their goals?

Huang Renxun: Whatever you decide to do, my advice is not to get involved in the development of graphics processors (GPUs).

Host: I will inform the team that we do not intend to venture into this area. Today, we have spent a lot of time delving into the profound significance of data intelligence. Companies hold vast amounts of proprietary data, which is crucial for building customized artificial intelligence models, and the deep exploration and application of this data are essential. Have you also noticed this trend in the industry? Do you think we should increase investment in this area? Have you gathered voices and insights from the industry on this question?

Huang Renxun: Every company is sitting on a gold mine of rich business data. If your company provides a range of services or products, customers are satisfied with them, and they give valuable feedback, then you have accumulated a wealth of valuable data. This data may involve customer information, market dynamics, or supply chain management. We have been collecting this data for a long time and have accumulated a great deal of it, but only now are we truly beginning to extract valuable insights from it, and even higher-level intelligence.

Currently, we are passionate about this. We use this data in chip design, defect databases, the creation of new products and services, and supply chain management. This is the first time we have adopted an engineering process that starts from data processing and refined analysis: building learning models, deploying them, and connecting them back into the data-collection flywheel to obtain more data. This approach has helped propel our company into the ranks of the world's largest companies, thanks in large part to our extensive use of artificial intelligence, which has enabled many remarkable accomplishments. I believe every company is undergoing such a transformation, so I think we are in an extraordinary era, and its starting point is data: the accumulation and effective utilization of data.

Harmonious Coexistence of Open Source and Closed Source

Host: This is truly amazing, thank you very much. Currently, the debate between the closed-source model and the open-source model is gradually heating up. Can the open-source model catch up? Can the two coexist? Or will they eventually be dominated by a single closed-source giant? What is your view on the entire open-source ecosystem? What role does it play in the development of large language models? How will it develop in the future?

Huang Renxun: We need cutting-edge models, especially frontier models that broaden our horizons. The work of OpenAI and Google in this area is crucial: they not only push the technological boundaries but also help us explore new possibilities. However, looking at the events of this year, the most important ones are closely tied to open source, such as Llama 2, Llama 3, Mistral, and the DBRX project carried out by the Databricks team. DBRX is indeed a very cool achievement. Its coolness lies in energizing every enterprise, making it possible for any company to transform into an artificial intelligence company. You must have noticed this too; we have seen this trend globally. We recently packaged Llama 3 as an inference microservice, and it is now available for download. You can visit Hugging Face, and of course Databricks, which is now adopted by hundreds of companies worldwide. This fully demonstrates that open source has unleashed the potential of every company, giving them the opportunity to be part of the artificial intelligence field. At NVIDIA, we extensively use open-source models and combine them with our own data and expertise to fine-tune and train them. Without open source, there would not be this global movement encouraging every company to transition to artificial intelligence. I believe this is undoubtedly a significant development.

Host: Indeed, this is a remarkable development. Open-source and closed-source models will coexist, and we do need both modes. The NIM framework you mentioned, NIMs, is exactly what we are focusing on. I am excited to announce here that we will integrate DBRX into NIMs and provide it as a service on the Databricks platform. In fact, all new models we develop in the future will also follow this approach. We are very optimistic about the prospects of NIMs.

Huang Renxun: Creating a large language model API is a real technical challenge. Although these models may not seem massive at the moment, they are still computationally very complex and involve numerous dependencies. To address this, we developed the NVIDIA Inference Microservice (NIM), which integrates and optimizes all the necessary dependencies. NVIDIA has a team of professional engineers dedicated to this field, encapsulating complex technology into easy-to-use microservices. Users can use the service directly on the Databricks platform, or download it and personalize it as needed. NVIDIA's NeMo microservices provide this flexibility, ensuring everything can run in any cloud or on-premises environment, truly making artificial intelligence capabilities ubiquitous.
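NIM containers expose an OpenAI-style chat completion HTTP API, so using a locally deployed microservice reduces to an ordinary JSON request. A minimal Python sketch, in which the endpoint URL and model name are assumptions for illustration rather than details from the dialogue:

```python
import json
import urllib.request

# Hypothetical local NIM deployment; host, port, and model name are assumptions.
NIM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat completion payload for a local NIM endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def query_nim(payload: dict, url: str = NIM_URL) -> dict:
    """POST the payload to the microservice and return the parsed JSON reply."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

payload = build_chat_request("meta/llama3-8b-instruct", "Summarize our Q2 sales data.")
# query_nim(payload) would return the model's completion (requires a running NIM).
```

The same payload works against a hosted endpoint by changing only the URL, which is the portability point made in the dialogue.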

Host: This is truly an admirable technology. The ability to deploy and run locally is particularly outstanding, as it means we are no longer entirely dependent on cloud services, which is a huge step forward. In our interactions with customers, we have found that they are focusing on developing internal expertise to customize models and gain a competitive edge. What are your thoughts on this phenomenon?

Huang Renxun: I believe the future trend is, as we are witnessing today, that we are able to tokenize almost all types of information and data. We can extract their structure, understand their implications, and learn their representations, whether it's sound, language, images, videos, chemical substances, proteins, or even robot motion control or driving operations. As cloud data centers are producing these tokens, we are essentially creating some unprecedented unique products. For the first time, we have tools known as artificial intelligence supercomputers that produce tokens in factories designed specifically for this purpose, enabling us to mass-produce intelligence on a large scale, which is a completely new technology. This is one of the reasons why I firmly believe we are at the beginning of a new industrial revolution, one that is not about producing electricity but about producing intelligence.
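The idea of tokenizing every modality can be made concrete with a toy example: whatever the source data, it is mapped to a sequence of integers that a model can learn from. A minimal byte-level sketch in Python (real systems use learned vocabularies such as byte-pair encoding; nothing here is NVIDIA's actual pipeline):

```python
def byte_tokenize(data: bytes) -> list[int]:
    """Map arbitrary binary data to integer tokens (one token per byte).

    Real pipelines use learned vocabularies (e.g. byte-pair encoding) that
    merge frequent byte sequences, but the principle is the same: text,
    audio, images, or sensor readings all become sequences of integers.
    """
    return list(data)

def byte_detokenize(tokens: list[int]) -> bytes:
    """Invert the mapping: tokens back to the original bytes."""
    return bytes(tokens)

text_tokens = byte_tokenize("hello".encode("utf-8"))
print(text_tokens)  # [104, 101, 108, 108, 111]
assert byte_detokenize(text_tokens) == b"hello"
```

Because the token stream is lossless here, any modality that can be serialized to bytes fits the same interface; learned tokenizers trade that losslessness for shorter sequences.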

Of course, every company is, at its core, about intelligence in specific domains. When it comes to data, data processing, artificial intelligence, and their infrastructure, few companies have a deeper understanding than Databricks. We each focus on our expertise in a specific field, whether that is financial services, healthcare, or elsewhere. Ultimately, all of us will become manufacturers of intelligence.

If you are to become a manufacturer of intelligence today, you will build up artificial intelligence capabilities, which we call the AI factory. Every company must start this process. We are doing this, and you will too. We observe that companies of every size are moving in this direction, so in the future all of us will be involved. You will start with data from your specific field, stored somewhere in Databricks. You will process that data, refine it, extract intelligence from it, and then feed it back into the flywheel. You will have an AI factory.

Fusion of Accelerated Computing and Generative AI

Host: This is truly an amazing achievement, and I have full confidence in it. We are passionate about this, especially data processing. Databricks processes an extremely large amount of data, about 40 trillion bytes per day.

Huang Renxun: Data processing is undoubtedly one of the largest computing demands on Earth right now. In fact, almost every company is doing this work.

Host: Indeed, data processing is highly parallel: the same operations are performed repeatedly over enormous datasets, which makes it an ideal target for acceleration. I am looking forward to our collaboration in bringing GPU acceleration to data processing. We are committed to progress in core data processing as revolutionary as what has happened with AI models. We are excited to work with you to optimize our Photon engine with GPU acceleration, stepping into a new era of applying GPUs to core data processing. Currently, these massive workloads have to run on CPUs, and we hope they can also run efficiently on NVIDIA GPUs.

Huang Renxun: By the way, this is a major announcement: NVIDIA and Databricks are joining forces around the two key trends in today's computing field - accelerated computing and generative artificial intelligence - bringing our expertise in these areas to every user. Although accelerating data processing is technically challenging, we have invested five years of relentless effort and finally developed libraries that can significantly enhance Photon's performance. This is the result of our long-term work, and now we will accelerate Photon to make data processing faster, more cost-effective, and, most importantly, significantly less energy-intensive.

Host: This is indeed a profound development, and it makes perfect sense. Despite the complexity and special cases in data processing, its high degree of parallelism means we do not actually need general-purpose computing: we are performing highly repetitive operations over massive datasets rather than unique data. I am excited about this technology, because it can disrupt the status quo, greatly improve performance, reduce costs, and undoubtedly bring about amazing changes.

Huang Renxun: When we can quickly process massive amounts of data, researchers may wake up one morning and say on a whim, "Let's collect all the data on the Internet to train a huge model, because this is no longer a time-consuming task." Without the development of accelerated computing, people would not even consider such ideas, because they would be too costly and time-consuming. But now it is possible: we can process unprecedented amounts of data at lower cost and higher efficiency. This will inspire endless innovative thinking, such as "Let's use all of the company's data to train our super artificial intelligence." Such days are coming.
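The pattern both speakers describe, the same operation repeated across a massive dataset, is exactly what vectorized and GPU-style execution exploits. A small illustration in Python using NumPy on CPU (the column names and discount rule are invented for the example; GPU dataframe libraries expose the same style):

```python
import numpy as np

# Toy illustration of data parallelism: identical arithmetic applied to every
# element of a large column, the access pattern that makes data processing
# such a good fit for accelerators.
rng = np.random.default_rng(0)
prices = rng.uniform(1.0, 100.0, size=1_000_000)
quantities = rng.integers(1, 10, size=1_000_000)

# One vectorized expression replaces a million-iteration Python loop.
revenue = prices * quantities
discounted = np.where(revenue > 250.0, revenue * 0.9, revenue)

# Equivalent scalar loop (slow) on a small slice, to show they agree.
loop_result = [p * q * (0.9 if p * q > 250.0 else 1.0)
               for p, q in zip(prices[:5], quantities[:5])]
assert np.allclose(discounted[:5], loop_result)
print(f"total discounted revenue: {discounted.sum():,.2f}")
```

Because every element is processed independently, the same expression scales from a CPU's SIMD lanes to thousands of GPU threads without changing the code's structure.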

Opening a New Chapter in Intelligent Services

Host: Indeed, processing the entire Internet's data was once a concept found only in science fiction. We thought it was impossible until hardware and infrastructure advanced far enough to make such specialized processing feasible. Today it has become a reality, with everyone participating. Let's turn to another topic. The flourishing of generative artificial intelligence is truly remarkable. Initially, many companies started with chatbots, focusing on developing and customizing chatbots on their own data. Now, however, we see people gradually expanding into more cutting-edge application scenarios. Looking ahead, which new applications of artificial intelligence excite you the most?

Huang Renxun: Among all potential impacts, customer service may be the most profound area. For every company present here, customer service involves expenditures amounting to trillions of dollars, spanning every industry and every company. The application of chatbots in customer service is important not only for its automation capabilities but also for its contribution to the data flywheel. Companies need to capture conversations, incorporate customer interactions into their data systems, which will undoubtedly generate a large amount of data.

Currently, the growth rate of data is approximately tenfold every five years. Given the drive of customer service, I expect the future growth rate of data to reach a hundredfold every five years. We will integrate all elements into the data flywheel, which will collect more data, refine deeper insights, extract more precise intelligent information, provide better services, and even achieve proactive prevention and resolution before issues arise, similar to preventive maintenance. We will achieve proactive customer support, which will further drive data generation and the rotation of the flywheel. Therefore, I believe customer service will be the key for most companies to achieve super acceleration, especially considering the amount of data it will collect.
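The growth rates quoted here compound quickly, and a short calculation makes the gap concrete (the 10x and 100x per-five-years figures come from the dialogue; the 15-year horizon is chosen only for illustration):

```python
def growth_over(years: float, factor_per_5y: float) -> float:
    """Total growth multiple after `years`, given a fixed multiple every 5 years."""
    return factor_per_5y ** (years / 5)

# Historical rate quoted in the dialogue: ~10x every five years.
# Projected rate with customer-service data flywheels: ~100x every five years.
for label, factor in [("10x / 5y", 10.0), ("100x / 5y", 100.0)]:
    print(f"{label}: after 15 years, data grows {growth_over(15, factor):,.0f}x")
# 10x/5y compounds to 1,000x over 15 years; 100x/5y compounds to 1,000,000x.
```

Three doubling periods at each rate already separate the two scenarios by a factor of a thousand, which is why the flywheel argument matters.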

We have already tokenized nearly everything digitally, and I am excited about our progress in fields such as chemistry, proteins, carbon-capture materials, enzymes, and innovative batteries. We have also used generative artificial intelligence to improve the accuracy of regional weather forecasts, a task that previously required the computing power of supercomputers. Logistics, insurance, and our ability to protect people from harm will all be enhanced as a result.

Furthermore, generative artificial intelligence shows great potential in physics, biology, 3D graphics, digital twins, the construction of video-game virtual worlds, and more. If your company has not yet ventured into generative artificial intelligence, it may simply be because you have not paid enough attention to it. In fact, it has penetrated every industry.

Host: I completely agree with your point. The application of artificial intelligence will undoubtedly spread across various fields, which is not only reasonable but also full of infinite possibilities, making people look forward to it. Faced with these emerging frontier areas, our demand for data is increasing. What are your thoughts on how to help companies achieve more sustainable development in artificial intelligence?

Huang Renxun: Sustainability can be considered from multiple perspectives, especially in relation to energy. It is worth noting that artificial intelligence itself is not picky about the location of its "learning". There is no need to set up artificial intelligence training data centers in densely populated areas where the power grid is already under pressure. On the contrary, we can place them in regions with sufficient and evenly distributed energy. Global energy resources are abundant, and the key lies in how to distribute and utilize them reasonably. Therefore, I believe this is our first opportunity to capture and utilize surplus energy, convert it into the power of artificial intelligence models, and ultimately feed back these intelligent achievements to society to serve our actual needs.

Another important perspective is that the core of artificial intelligence lies not only in the training of models but also in its reasoning and generative capabilities. The ultimate goal of training models is to apply them. When we focus on the long-term benefits of artificial intelligence, taking the example of using artificial intelligence for weather forecasting as I mentioned earlier, we no longer need to simulate physical laws from scratch every time, but can generate forecast results through artificial intelligence. This approach not only shortens the prediction time, improves prediction accuracy, but also achieves a thousandfold reduction in energy consumption.
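The shift Huang describes, from re-running a physics simulation every time to generating answers with a trained model, can be sketched as fitting a cheap surrogate to an expensive function. This toy Python example stands in for the idea only; real weather surrogates are large neural networks, and the function and polynomial here are invented for illustration:

```python
import math
import numpy as np

def expensive_simulation(x: float) -> float:
    """Stand-in for a costly physics solver (here just an analytic function)."""
    return math.sin(x) + 0.1 * x

# "Training": run the expensive solver once over sample points...
xs = np.linspace(0.0, 6.0, 200)
ys = np.array([expensive_simulation(x) for x in xs])

# ...and fit a cheap surrogate (a polynomial here; weather models use deep nets).
surrogate = np.poly1d(np.polyfit(xs, ys, deg=9))

# Inference: the surrogate answers new queries without re-running the solver.
x_query = 2.5
assert abs(surrogate(x_query) - expensive_simulation(x_query)) < 1e-2
```

The up-front training cost is paid once; every later query is a cheap polynomial evaluation, which is the energy argument in the dialogue in miniature.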

Furthermore, the downstream benefits of artificial intelligence show up in other ways, such as training a model once to design more efficient mobile-phone chips, saving energy for every user. I believe that over time, artificial intelligence will demonstrate its energy-saving potential.

Finally, regarding generative artificial intelligence: today's computing experience is mostly based on retrieval. Every tap on our phones, although it seems to consume little energy, actually activates APIs around the world, retrieves information across the Internet, and assembles small pieces of information from different data centers to present to us through recommendation systems. In the future, as small language models running on devices become more contextual and generative, Internet traffic will decrease significantly and more computation will be generated in real time, leading to substantial energy savings and a fundamental transformation of the computing model.

In this way, we can not only save a large amount of energy but also obtain answers more efficiently. This will completely change our way of computing, allowing us to ask questions faster, get answers, and spark more interesting questions. This future of collaboration with artificial intelligence will be a new era full of hints and inspiration.

Host: Yes, the future is very exciting. Alright, my final question is, how can we help customers, that is, all of you present here, to take action starting today? What is the best way?

Huang Renxun: As I mentioned before, I believe Databricks' move from data processing to data governance, then to data storage, and further up the stack to extracting intelligence from data is very visionary. I couldn't remember her name, but there is no doubt that the performance of "Ms. Cookie" was excellent. Is it Casey? Please don't let her be poached by other companies. Her presentation backstage was truly impressive. I was deeply drawn in by it; although there were many opportunities to chat backstage, I preferred to focus on watching her present. Her mastery of the data intelligence platform and her presentation skills deserve our high praise and respect. I think this platform is amazing, making it easier for people to manage data, extract information, and process data. Data organization is still a very important part of model training. People talk about training models, but before you train a model, you must figure out which data is correct. That involves data quality, data format, and data preparation. So I think the way to start is to come to Databricks and use the Databricks Data Intelligence Platform. Am I right?

Host: Absolutely correct.

Huang Renxun: Indeed, no one would object to naming the platform DIP, short for Data Intelligence Platform. The name is both catchy and meaningful, and I greatly appreciate it. It is as good a name as NIMs; both are impressive. You can use both at the same time without having to choose. Get NIMs plus DIP - I fully agree with this combination; it is a wise strategy.

Whatever you plan to do, the key is to start immediately. You must actively participate and board this fast-moving train. Remember, generative artificial intelligence is growing at an exponential rate; you should not just watch and wait. Exponential trends develop astonishingly fast, and within a few years laggards will be left far behind. So join this technological revolution now, and as the technology advances you will learn and grow with it. That is how we act.

This is a process that should not be learned through observation. You cannot master it just by reading; real learning comes from hands-on practice. Just as we do, immerse yourself fully in it.

Host: Thank you very much. This is valuable advice. Thank you for everything you have done over an unforgettable decade of collaboration. We have always been excellent partners, and we look forward to welcoming the next brilliant decade together.