
"Shortages will eventually lead to surpluses"! a16z Anderson's 2026 outlook: AI chips will see a capacity explosion and price collapse

"If there is a shortage of something, history has proven that it will eventually be mass-produced until it becomes abundant—AI chips and computing power will be no exception." The co-founder of a16z predicts that AI is a technological transformation grander than the internet and is currently in its early stages. He pointed out that the cost of intelligence is collapsing at a speed exceeding Moore's Law, and the shortage of infrastructure will ultimately lead to overcapacity, which will benefit the application layer
Key Takeaways:
Magnitude of AI Technology: AI is a technological revolution larger than the internet, comparable to electricity, microprocessors, and steam engines, and is still in the "very early" stage.
Cost "Deflation": The per unit cost of intelligence is decreasing at a rate far exceeding Moore's Law, which will lead to explosive growth in demand.
"Shortage Leads to Surplus": Following the historical pattern of "shortage leads to surplus," the large-scale construction of GPUs and data centers will ultimately lead to oversupply, further driving down AI costs.
Market Structure: The future AI market will resemble the structure of the computer industry: a few "god-level models" (similar to supercomputers) at the top, with a vast number of low-cost "small models" proliferating at the edges.
US-China Competition: This is a duel between two giants. China (e.g., DeepSeek, Kimi) is impressively catching up in speed, open-source strategies, and self-developed chips, prompting a shift in US federal regulatory attitudes towards supporting innovation. He stated, "Basically, AI is only being built in the US and China. Other parts of the world either can't produce it or don't want to."
Business Model: AI applications are shifting from "pay-per-token" to "value-based pricing"; startups are no longer just "wrappers" but are integrating backward to build their own models.
Democratization of AI: Whether in text, video, or music, the world's most advanced AI technologies (like ChatGPT, Sora, Suno) have broken barriers, allowing anyone to directly use and verify these "previously expensive" top technologies in real-time.
Public Panic and Embrace of AI: Polls show that the public feels panic over "AI replacement," but actual behavior indicates that people are wildly adopting AI.
Europe's Lagging AI Situation: The EU, unable to lead in innovation, has turned to pursuing "regulatory leadership" (e.g., the EU AI Act). Andreessen believes this approach has nearly stifled local AI development, even leading Apple and Meta to decline to launch their latest features in Europe.

"If something is in short supply, history has shown that it will ultimately be mass-produced until it becomes surplus—AI chips and computing power will be no exception."
In the latest special episode of The a16z Show, a16z co-founder and Silicon Valley venture capital pioneer Marc Andreessen delved deeply into the future landscape of artificial intelligence, the US-China technology competition, and the return issues most concerning the capital markets. As a seasoned investor who has experienced the internet cycle, Andreessen candidly stated: "(AI) is the largest-scale technological revolution I have experienced in my lifetime... not only larger than the internet, but its magnitude is comparable to electricity and microprocessors."
“The primary reason for surplus is actually shortage”
Despite market disagreements over AI revenue growth and cash burn rates, Andreessen argued from an investor's perspective that many current concerns are misplaced. He believes the core issue is the "extreme deflation" of the cost of intelligence.
"The rate at which AI prices are falling is faster than Moore's Law," Andreessen emphasized in the interview. "The unit costs of all AI inputs are collapsing. The result is 'hyper deflation' of unit costs, which will drive demand growth beyond expectations."
Regarding the GPU and infrastructure bottlenecks that investors are generally concerned about, Andreessen offered a judgment based on historical cycles: "In any market with commodity attributes, the primary reason for surplus is actually shortage... Because of the shortage, you will see hundreds of billions or even trillions of dollars poured into the ground to build capacity. In the next decade, the unit costs of AI companies will drop like a stone."
US-China Competition: The Shock from DeepSeek
In the interview, Andreessen offered an unusually detailed assessment of the competitive pressure from China, mentioning in particular the rise of models like DeepSeek and Kimi. He acknowledged that China's progress in open-source models has surprised both Washington and Silicon Valley.
"The release of DeepSeek was a 'supernova moment,'" Andreessen said. "It not only performs excellently but comes from a hedge fund rather than a large tech giant, which was completely unexpected." He pointed out that Chinese companies' open-source strategy has effectively set off a global price competition, which may cause US policymakers to rethink the direction of regulation.
"In Washington, whether it's the Democrats or Republicans, there is now little interest in doing anything that might prevent us from beating China," Andreessen revealed, noting that the once-worrying risk of stringent federal regulation has significantly decreased, and that the current fight is mainly at the state level (such as California's SB 1047).

The Shift of AI Pricing Power: From “Pay-per-use” to “Value Pricing”
In terms of business models, Andreessen observed a key shift. While cloud giants are happy to sell computing power "tokens by the drink," startups are exploring more defensible models.
"If you can significantly enhance the productivity of doctors, lawyers, or programmers, can you take a share of that increased value?" Andreessen believes high pricing is often beneficial to customers because it supports better R&D: "AI startups are more creative in pricing than SaaS companies."
Moreover, in response to the criticism that "startups are just wrappers around large models (GPT wrappers)," Andreessen offered a strong rebuttal. He pointed out that leading application companies like Cursor are integrating backward, "actually building their own AI models," because they possess the deepest domain knowledge.
Closed-source Large Models vs. Open-source Small Models
Regarding whether closed-source large models or open-source small models will win in the future, Andreessen believes this is not a zero-sum game, but rather a clearly layered "pyramid of intelligence." He uses hiring as a metaphor: "Some tasks require a string theory PhD with an IQ of 160 (large models), but the vast majority of economic activity in the world only needs competent people with an IQ of 120 (small models)."
He predicts that the industry structure will resemble that of the computer industry:
"You will have a very small number of 'God models' equivalent to supercomputers running in huge data centers; then there will be layers of small models, ultimately extending to embedded systems." Anderson concluded, "The smartest models will always be at the top, but the most numerous will be those small models on the margins."

Full Translation of Marc Andreessen's Interview:
Background Introduction:
Marc Andreessen's 2026 Outlook: AI Timeline, US-China Competition, and AI Pricing
January 8, 2026 The a16z Show
Marc Andreessen 00:00
This wave of new AI companies is achieving revenue growth, and I mean real customer revenue, real demand being converted into cash appearing in bank accounts, and the speed of takeoff is absolutely unprecedented. We see companies growing much faster. But I am very skeptical that the product forms people are using today will look the same five or ten years from now. I believe products will become more refined from now on. So I think we still have a long way to go.
Andreessen 00:20
These are trillion-dollar questions, not answers. But once someone proves that something is feasible, it doesn't seem difficult for others to catch up, even those with far fewer resources. When a company faces fundamental open questions about its strategy or economics, that often becomes a big problem. Companies need to answer these questions, and if they get the answers wrong, they are really in trouble.
Andreessen 00:40
In the venture capital field, we can bet on multiple strategies at the same time. We are actively investing in every strategy we can identify, as long as we believe it has a reasonable chance of success. If you want to understand people, there are basically two ways to know what they are doing and thinking. One is to ask them, and the other is to observe them. You often see this in many areas of human activity, including many different aspects of politics and society, where the answers you get when you ask people are very different from the answers you get when you observe them. If you conduct a survey or a poll, say of American voters' views on AI, the results read as if they are all in extreme panic. It's like, oh my god, this is terrifying. This is terrible. This will take away all jobs and ruin everything. But if you observe their revealed preferences (their actual behavior), they are all using AI.
Host (The a16z Show) 01:22
Many friends sent in questions in advance. I have organized these questions into several different sections for today's AMA (Ask Me Anything) with Marc this morning. So we thought we would cover four main themes: AI and market dynamics, policy and regulation, everything about a16z, and finally, we have an interesting comprehensive segment that we call the "sandbox," if we have time. So let's start with the biggest question: we are at the center of the AI revolution. Marc, what inning do you think we are in right now? What excites you the most?
Andreessen 01:51
First of all, I want to say this is the biggest technological revolution of my lifetime. You know, I hope to see more transformations like this in the next 30 years. But this is indeed the biggest one. Just in terms of magnitude, it is clearly larger than the internet, comparable to things like microprocessors, the steam engine, and electricity. So this is a truly massive transformation.
Andreessen 02:13
The reason it is so massive may already be obvious to everyone, but I will quickly go through it. If you go back to the 1930s, there is a great book called "Rise of the Machines" that talks about this process.
Andreessen 02:26
If you trace back to the 1930s, there was actually a debate among the inventors of the computer. The debate was about whether they should build computers in the image of what was then called an "adding machine" or "calculator," which you can imagine as essentially a cash register. IBM was actually the successor to the National Cash Register Company in the United States. Of course, that was the path the industry ultimately chose: to build these super-literal mathematical machines that could perform billions of mathematical operations per second but, of course, had no ability to interact with humans in the way that humans like. They could not understand human voice, language, and so on. That is the computer industry we have had for the past 80 years, which has created all the wealth and financial returns of the past 80 years, spanning every generation from mainframes to smartphones. But back in the 30s, they already had a rough understanding of the basic structure of the human brain. They had a theory of human cognition, and they actually had a theory of neural networks. In fact, the first academic paper on neural networks was published in 1943, which is over 80 years ago, which is astonishing. You can go read an interview with, or watch a segment on YouTube featuring, the two authors, McCulloch and Pitts. I think you can see McCulloch in an interview from around 1946, back in the ancient days of television. It is a truly amazing interview, because he was shirtless for some reason at his seaside villa, talking about a future in which computers would be built on brain models through neural networks. That was the path not taken. What essentially happened was that computers were built in the image of adding machines, while neural networks basically did not happen.
Andreessen 04:22
But the idea of neural networks continued to be explored by a small group of people in academia and cutting-edge research, originally called cybernetics, later known as artificial intelligence, and it has essentially persisted for 80 years. Essentially, it didn't work; it was basically decades of over-optimism followed by disappointment. When I was in college in the 80s, Silicon Valley had a famous boom and bust cycle in AI. It looks small by modern standards, but it was a big deal at the time. By 1989, when I entered the computer science department in college, AI was considered a niche field, and everyone thought it would never come to fruition. But scientists deserve credit for sticking with it, building a huge repository of concepts and ideas. Then we basically all witnessed the emergence of the ChatGPT moment. Suddenly, everything materialized, like, oh my god, it turns out it is feasible. So, that’s the moment we are in now.
Andreessen 05:18
And very importantly, that was less than three years ago, right? That was the summer of 2022, or Christmas of 2022. So we have actually only entered this wave for three years, and this is actually an 80-year revolution that is finally able to deliver on the promises seen by those on the "other path"—the human cognitive model path—from the very beginning.
Andreessen 05:43
And the great news about this technology is that it has somehow become extremely democratized. The best AI in the world can be accessed through products like ChatGPT, Grok, Gemini, or any of those you can use directly, and you can see how they work. The same goes for video; you can see the latest technology in the video field through Sora. You can see music; you can see Suno and Udio, and so on. So we are basically witnessing all of this happening.
Andreessen 06:09
Now Silicon Valley is responding with this incredible wave of enthusiasm. And very crucially, this touches on the magic of Silicon Valley, which is that Silicon Valley has long ceased to be a place that manufactures "silicon" (chips); that industry moved out of California long ago and eventually out of the United States, although we are now trying to bring it back. But the great advantage of Silicon Valley in its 80 years of existence is its ability to recycle talent from previous technological waves and attract talent from new technological waves, then inspire an entirely new generation of talent to join this project. So Silicon Valley has a cyclical model that can reallocate capital and talent, build enthusiasm, establish critical mass, build funding support, build human capital, and build all the enthusiasm to embrace each new wave of technology
Andreessen 06:57
This is what is happening with AI applications. I think the biggest thing I can say is: I am amazed. I feel like I am basically surprised every day by what I see. And we are in a fortunate position to see it from two angles. One is that we can track the underlying science and research work very closely. So I can say that every day I see a new AI research paper that completely astonishes me, whether it's some new capability, or some new discovery, or some new advancement that I never anticipated, and I just marvel, wow, I can't believe this is real.
Andreessen 07:30
And then on the other side, of course, we are seeing the emergence of all these new products and new startups. I want to say that we often see things there that similarly drop my jaw to the floor. So it feels like a massive transformation. I think it will indeed come in fits and starts, and these processes are chaotic. You know, this is an industry that often tends to get out over its skis and overpromise. So there will definitely be moments when people feel, wow, this is not as good as people thought, or, wow, this is too expensive, the economics don't add up, and so on.
Andreessen 08:05
But beyond that, I just want to say that these capabilities are indeed magical. By the way, I think this is the experience consumers have when using it, and I think this is also the experience most businesses have when piloting and considering adoption. And then this translates into the underlying data. I mean, we are just seeing this wave of new AI companies achieving revenue growth, like actual customer revenue, actual demand translating into real money appearing in bank accounts. This is absolutely an unprecedented rate of takeoff. We see companies growing much faster. The key leading AI companies, and those with real breakthroughs and highly attractive products, are definitely experiencing revenue growth faster than any wave I've seen before.
Andreessen 08:46
So from all of this, it feels like we are definitely still in the early stages. It's hard to imagine that we have peaked. It feels like everything is still developing. Frankly, for me, it feels like the products themselves are still super early. I am very skeptical that the forms of products people are using today will look the same five or ten years from now. I think products will become more refined from now on, so I think we still have a long way to go.
Host 09:11
Since we are on this topic. One of the biggest criticisms is that, yes, the revenue is huge, but spending seems to be growing in sync as well. So what are people overlooking in this discussion?
Andreessen 09:21
Yes, let's start with the business model. You're right. This industry basically has two core business models: consumer business models and what is called enterprise or infrastructure business models. Look, in terms of consumers, we live in a very interesting world where the internet has existed and has been fully deployed, right? So let me give you an example, sometimes people ask: Is AI like the internet revolution? Well, it's somewhat similar, but unlike the internet, we had to "build" the internet back then. We had to actually establish the network, and that involved laying down a lot of fiber optics underground, a lot of mobile base stations, and a lot of shipments of smartphones, tablets, and laptops to get people online. It was an incredible physical engineering project. By the way, people forget how long this took, right?
Andreessen 10:10
The internet itself was invented in the 1960s and 1970s. The consumer internet was a new phenomenon in the early 1990s. However, it wasn't until the 2000s, if you remember, that we really got broadband into homes. It really only started to roll out after the internet bubble burst, which is quite astonishing. Then it wasn't until around 2010 that we had mobile broadband. People actually forget that the original iPhone was released in 2007. It didn't have broadband. It was on a narrowband 2G network. It didn't have high-speed data, not at all. So it wasn't until about 15 years ago that we really had mobile broadband. So the internet is a massive undertaking, but the internet has been built, and smartphones have become widespread. So the key point now is that there are 5 billion people on Earth using some version of mobile broadband internet. Right?
Andreessen 11:12
And smartphones around the world are priced as low as $10. We have amazing projects like Jio in India that are bringing the remaining population on Earth online. So we're talking about 5 billion, 6 billion people. The reason I'm saying all this is that consumer AI products can essentially be deployed to all these people at the speed they want to adopt, right? So the internet is the carrier wave that allows AI to spread to a broad global population at lightning speed. This is a potential new technology diffusion speed that has never been possible before. Like what? You can't download electricity, right? You can't download indoor plumbing, you can't download a television, but you can download AI.
Andreessen 11:48
That's what we're seeing, the growth rate of consumer AI killer applications is astonishing. And their monetization capability is very good. Just to emphasize again, as I mentioned before, overall, monetization is very good, by the way, including at higher price points.
Andreessen 12:05
One thing I like to observe within the AI wave is that I think AI companies are more creative in pricing than SaaS companies and consumer internet companies. For example, it has become quite common for consumer AI to reach monthly price tiers of $200 or $300, which I think is very positive. Because I think many companies have limited their opportunities by pricing too low. I think AI companies are more willing to push this, which is great.
Andreessen 12:30
In summary, I think this is a reason to remain reasonably optimistic about the scale of consumer revenue we are talking about.
Andreessen 12:38
Then in terms of businesses, the question is basically: How much is intelligence worth? Right? If you have the ability to inject more intelligence into your business, if you can do even the most mundane things, like improving your customer service ratings, increasing upsells, or reducing customer churn, or if you can run marketing campaigns more effectively, all of these are directly related to AI, and these are the direct business returns that people have already seen. Then, if you have the opportunity to inject AI into new products, suddenly your car can talk to you, everything in the world lights up and starts to become very smart. How much is that worth? Once again, if you just look at it, you'll find that, wow, the revenue growth of leading AI infrastructure companies is incredibly fast. That kind of pull is really huge. So, this once again feels like this incredible product-market fit, and the core business model is actually very interesting. The core business model is basically "tokens by the drink," which is how many intelligent tokens you can buy for each dollar.
Andreessen 13:43
Oh, by the way, another interesting thing is that if you look at what is happening with the price of AI, its rate of decline is faster than Moore's Law. I can elaborate, but basically, all inputs for AI are collapsing in cost on a unit basis. The result is extreme deflation of unit costs. And because demand is elastic, that deflation drives demand growth that more than makes up for the falling prices. So, even in that regard, we feel like we are just beginning to figure out how expensive or cheap this thing will actually become. I mean, there is no doubt that the price of "tokens by the drink" will start to decline significantly from here. That will only drive what I believe will be huge demand. Then everything in the cost structure will be optimized, right?
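(Editor's note: to make the "falling faster than Moore's Law" comparison concrete, here is a minimal sketch of the arithmetic. The decline rates below are illustrative placeholders, not figures cited in the interview.)

```python
# Illustrative comparison of annual cost-decline rates (hypothetical numbers, not from the interview).
# Moore's Law is commonly paraphrased as cost-per-transistor halving roughly every two years;
# per-token prices for a fixed capability tier have at times fallen on the order of 10x per year.

def annual_decline(remaining_fraction: float, years: float) -> float:
    """Fraction of the price lost per year, given what fraction of it remains after `years`."""
    return 1 - remaining_fraction ** (1 / years)

moore_style = annual_decline(0.5, 2.0)   # halves every ~2 years -> roughly 29% cheaper per year
token_style = annual_decline(0.1, 1.0)   # ~10x cheaper per year -> roughly 90% cheaper per year

print(f"Moore's-Law-style decline: {moore_style:.0%} per year")
print(f"AI-token-style decline:    {token_style:.0%} per year")
```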
Andreessen 15:21
So, when we talk about chips or whatever, the unit input cost of building AI will now be subject to the laws of supply and demand, right? But in any market with commodity attributes, the primary cause of surplus is shortage. And the primary cause of shortage is surplus, right? So, when you run into GPU shortages, or chip shortages, or data center space shortages, look at the history of how people respond to demand for building things: if something is in short supply and can be physically replicated, it will be replicated. So there will be a huge construction boom. I mean, there may be hundreds of billions or even trillions of dollars being poured into the ground right now to build these things. So the unit costs for AI companies will plummet like a stone over the next decade.
Andreessen 15:52
So yes, the economic issues are certainly very real. Of course, there are microeconomic issues surrounding all these businesses. But at least here, I think the macro forces are very strong. Given the potential value of this technology to consumers and business users, and the many ways people are actively exploring how to use it in their lives and businesses, I really find it hard to see how it wouldn't grow significantly and generate huge revenue.
Host 15:52
Yes, in fact, I think it was about two or three weeks ago that AWS said the GPUs they have been using have had their useful life extended to even over 7 years. So the useful life of the GPUs they are running is being extended, which allows them to optimize better than in previous cycles. Is that the correct way to think about it?
Andreessen 16:12
Yes, that's right. This is a very important question and observation. By the way, this also raises another question of different theories, basically the battle between large models and small models. Much of the construction of data centers is centered around hosting, training, and serving large models, and the reasons are obvious. But at the same time, a small model revolution is also happening.
Andreessen 16:37
If you just track the capabilities of cutting-edge models over time (you can get these charts from various research institutions), you will find that within 6 or 12 months there will be a small model with the same capabilities. So there is a catch-up dynamic happening, where the capabilities of large models are compressed into a smaller size, and therefore delivered at lower cost and higher speed. Let me give you a recent example, something that happened just in the past two weeks. Again, this is the kind of thing that is shocking.
Andreessen 17:07
There is a Chinese company, I forget the name of the company (note: referring to Moonshot AI), but it is the company that produces a model called Kimi, which is one of China's leading open-source models. The new version of Kimi is a reasoning model that, at least according to current benchmarks, basically replicates the reasoning capabilities of GPT-5 (note: the original may refer to o1 or similar advanced reasoning capabilities; Andreessen uses GPT-5 to refer to the next generation of capabilities). And a reasoning model at the GPT-5 level is a significant advance over GPT-4. Of course, the development and serving costs of GPT-5 are extremely high. Then suddenly, whether it's six months later or whenever, you have an open-source model called Kimi. And, I don't know exactly what they have achieved, but I think it has been compressed to run on a MacBook, or maybe two MacBooks, right? So suddenly, if you are a business, you want a reasoning model with GPT-5-level capabilities.
Andreessen 17:52
But you don't want to pay GPT-5 costs, or you don't want it hosted and you want to run it locally: you can do that. This is another breakthrough. It's just another Tuesday, another huge advancement, like, oh my gosh. Of course, what will OpenAI do? Obviously, they will develop GPT-6, right?
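(Editor's note: a minimal sketch of what "run it locally" can look like in practice, using the open-source llama-cpp-python bindings and a quantized open-weight checkpoint. The model file name is a placeholder, and this is not a description of how Kimi is actually packaged or served.)

```python
# Run an open-weight model locally instead of calling a hosted API (illustrative sketch).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/open-reasoning-model.Q4_K_M.gguf",  # hypothetical local GGUF file
    n_ctx=8192,       # context window to allocate
    n_gpu_layers=-1,  # offload all layers to a local GPU if one is available
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "In two sentences, why are the unit costs of AI falling?"}],
    max_tokens=200,
)
print(result["choices"][0]["message"]["content"])
```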
Andreessen 18:12
So there is a ladder effect happening, and the whole industry is moving forward. Large models are becoming more capable, while small models are catching up behind them. And small models offer a completely different deployment method at a very low price. So, yes, I think we will see what happens.
Andreessen 18:32
There are some very smart people in this industry who believe that ultimately everything will just run on large models. Because obviously, large models are always the smartest. So you will always want the smartest thing. Because if you can use the smartest thing, why would you use something less smart?
Andreessen 18:45
The counterargument is that there are a lot of tasks in the economy and the world that don’t require an Einstein; you know, a person with an IQ of 120 is just fine. You don’t need a string theory PhD with an IQ of 160; you just need a competent person, and that’s good enough. And, as we discussed earlier, I tend to think that the structure of the AI industry will be very similar to the structure of the computer industry, meaning you will have a very small number of things that are basically equivalent to supercomputers, which are these giant “God models” running in huge data centers. Then, I’m not completely sure about this, but my working hypothesis is that what will happen next is this cascading downwards, meaning smaller models, ultimately down to very small models, running in embedded systems, on single chips inside every physical object in the world.
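(Editor's note: one way to picture the "pyramid" described here is a simple router that sends most requests to a cheap small model and escalates only the hard ones to a frontier model. This is a hypothetical sketch; the difficulty heuristic and the two model stubs stand in for real services.)

```python
# Minimal sketch of tiered model routing: cheap small model by default, frontier model for hard tasks.
from typing import Callable

def small_model(prompt: str) -> str:        # stand-in for a local or edge "IQ 120" model
    return f"[small model] {prompt[:40]}..."

def frontier_model(prompt: str) -> str:     # stand-in for a large hosted "God model"
    return f"[frontier model] {prompt[:40]}..."

def difficulty(prompt: str) -> float:
    """Crude placeholder for a real difficulty or uncertainty estimate."""
    hard_markers = ("prove", "derive", "multi-step", "legal analysis")
    return 0.9 if any(m in prompt.lower() for m in hard_markers) else 0.2

def route(prompt: str, threshold: float = 0.5) -> str:
    model: Callable[[str], str] = frontier_model if difficulty(prompt) > threshold else small_model
    return model(prompt)

print(route("Summarize this support ticket in two sentences."))      # handled at the edge
print(route("Derive the pricing formula and prove it is optimal."))  # escalated to the top of the pyramid
```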
Andreessen 19:33
That is, the smartest models will always be at the top, but by sheer count the majority will be those smaller models spread out at the edges. Right? That's exactly what happened with microchips. That's what happened with computers; they turned into microchips, and that's also what happened with operating systems and many other things we built in software. So, I tend to think that this is what will happen. As for chips, again, if you look at the entire history of the chip industry, shortages turn into surpluses.
Andreessen 20:00
You will get—just like any time a new chip category emerges with huge profit pools, someone leads for a while and captures the profits corresponding to what we call robust market share. But over time, what happens, right, is that this attracts competition. Of course, this is happening. So Nvidia is actually a great company, fully deserving of their position, fully deserving of the profits they generate. But they are too valuable now, generating too much profit, which is the best signal ever for other companies in the chip industry to figure out how to advance their AI chip status.
Andreessen 20:32
By the way, this is already happening, right? So you have other big companies like AMD catching up to them. And very importantly, you have hyperscalers making their own chips. So, many big tech companies are making their own chips, and of course, the Chinese are also making their own chips. So, within five years, AI chips are likely to become cheap and abundant, at least compared to the situation today, which I think will again be very beneficial for the economics of the kinds of companies we invest in.
Host 21:02
Moreover, startups are also beginning to work on new chip designs.
Andreessen 21:06
Yes, that's another matter; you have these disruptive startups. In fact, when it comes to chips, we are not major investors in chips because that's more of a big-company game, but the fact that AI runs on what are called GPUs is a bit of a historical accident; GPU stands for Graphics Processing Unit. Basically, for those who haven't tracked this, there are essentially two types of chips that enabled the birth of the personal computer. There's the so-called CPU, the Central Processing Unit, the classic example being the Intel x86 chip. That's the brain of the computer. Then there's another type called the GPU, or Graphics Processing Unit. That's the second chip in every PC, responsible for all the graphics processing. This includes gaming, CAD/CAM, or anything else that involves a lot of visual effects, like Photoshop. So the classic architecture of a personal computer is a CPU plus a GPU. By the way, it's the same for smartphones. But this has become blurred over time. Many CPUs now have GPU functionality built in, and in fact, many GPUs now also have CPU functionality built in.
Andreessen 22:05
So this boundary has blurred over time, but that's the classic classification. And that classification means that while Intel monopolized CPUs for a long time, in the GPU market Nvidia fought a 30-year GPU war and emerged as the winner, the best company in the field, but this was once an extremely competitive graphics processor market. In fact, the profit margins weren't that high, and the market wasn't that big either. And then it turned out that there are two other forms of computing that are very valuable and happen to be massively parallel in operation, which is very well suited to the GPU architecture. These two hugely lucrative additional applications were cryptocurrency, which started about 15 years ago, and then AI, which started about four years ago. So, Nvidia, I would say, very cleverly built an architecture that is very well suited to this. But there was also a bit of a twist of fate: it turned out that, with AI as the killer application, the GPU happened to be the best existing architecture to devote to it.
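(Editor's note: the reason AI maps so well onto GPUs is that its core operation, matrix multiplication, is massively parallel: every output element can be computed independently. This toy comparison of pure-Python loops against a single vectorized call only hints at the gap; on a GPU the same structure is spread across thousands of cores.)

```python
# Toy illustration: the matrix multiply at the heart of AI workloads is embarrassingly parallel.
import time
import numpy as np

n = 128
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.perf_counter()
slow = [[sum(a[i, k] * b[k, j] for k in range(n)) for j in range(n)] for i in range(n)]
t_loops = time.perf_counter() - t0

t0 = time.perf_counter()
fast = a @ b  # one call, parallelized inside the BLAS library
t_vec = time.perf_counter() - t0

print(f"naive Python loops: {t_loops:.2f} s, vectorized: {t_vec:.4f} s")
```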
Andreessen 23:08
I'm saying this to point out that if you were to design AI chips from scratch today, you wouldn't build a complete GPU. You would build dedicated AI chips that would be more tailored and specific to AI, which I think would be more economically efficient. And, Jen, as you mentioned, there are startups that are actually building entirely new types of chips specifically for AI. We'll see what happens there. Starting a new chip company from scratch is difficult. Perhaps one or more of these startups can succeed on their own, and some are doing quite well. Of course, it's also possible that if they are acquired by big companies, those companies have the ability to scale them. So we'll see how this unfolds. By the way, we will also see that the Koreans will definitely get involved, the Japanese will get involved, and then the Chinese will also participate in a major way. You know, they are building their own local chip ecosystem. So, there will be many choices for AI chips in the future. That will be a huge battle, a major battle we are watching very closely, and we need to make sure our companies can take advantage of it.
Host 24:16
Since we are talking about international topics, we mentioned Kimi earlier, and it seems that some of the best open-source models today come from China. Should this be a concern for people? How do you think and discuss this topic with people in Washington? I know you were just there last week. How much concern is there for American companies, especially seeing China doing some unnatural things in the solar market and automotive market? Are they somewhat flooding the ecosystem so that they can ultimately gain market share and increasingly own this ecosystem?
Andreessen 24:51
Yes, there are a few things. First, you want to start these discussions by saying, look, there is a fierce debate going on in the U.S. and around the world. It's like, how hostile should we view them? This is very tempting, by the way. Very tempting.
Andreessen 27:27
Then there is the AI issue. The AI issue is an economic issue, but it is also a geopolitical issue, namely: AI is basically only being built in the U.S. and China. The rest of the world either can't produce it or doesn't want to produce it, and we can talk about that. So basically, it's the U.S. versus China. And AI will spread worldwide. So, will it be American AI or Chinese AI that spreads worldwide?
Andreessen 27:54
And I want to say that even across party lines in Washington, this is how they view the issue. The Chinese are in this game. So, China is definitely in the game. In terms of software, DeepSeek was kind of the starting gun that set off the software race.
Andreessen 28:12
Now you have companies like DeepSeek, which is an AI model from a Chinese hedge fund. That is a bit surprising. Then Tongyi Qianwen is Alibaba's model. Kimi comes from another startup called Moonshot. Then there are Tencent, Baidu, and ByteDance, which are all major companies doing a lot of work in the AI field. So, there are probably three to six major AI companies, and then there are a lot of startups. So, they are in the software race. They are working hard to catch up in chips. They are not there yet, but they are working extremely hard to catch up. And then everything that follows is basically AI in physical form, namely robotics, right? So the global technology, economy, and robotics race is unfolding, and China is a bit ahead in robotics because they are ahead in many of the components that go into robots. I want to say that Washington is watching this very closely.
Andreessen 29:54
This year's "supernova moment" is the release of DeepSeek, which is surprising in many ways. One is how good it is. Again, along this line, it takes the capabilities we run in cloud-based large models and compresses them into a smaller, quite capable version that you can run on a small amount of local hardware. So there's that. Then it's surprising that it was released as open source. And then it's surprising that it actually comes from a hedge fund. So it's not from a large R&D, university research lab. It's not from a large tech company. It comes from a hedge fund, which, as far as we know, is basically a bit of a special case where you have a very successful quantitative hedge fund with all these super geniuses, and the founder of that hedge fund basically decided to build AI.
Andreessen 30:53
When DeepSeek was released, it wasn't from a national-champion tech company. It kind of came out of left field (that is, unexpectedly), and by the way, this is very encouraging for the field, because it may mean that a relative unknown can create something like this, which may mean that perhaps you don't need all these super-genius researchers. Maybe merely smart kids can create this stuff, and I think that's the direction things are heading.
Andreessen 31:23
So this opened up, I don't know if "imitation" is the right word, but it feels like the success of DeepSeek, and specifically its success as an open-source release from China, opened up a trend of China releasing these open-source models.
Andreessen 31:37
Look, the cynics in Washington would say, yes, it's like they're dumping, right? They are clearly dumping. They are trying to, you know, they see an opportunity in the West to build this training industry. They are trying to commoditize it at the starting line. That might make some sense. The Chinese industrial economy does have a history of subsidizing production, which leads to selling products at prices below cost in certain cases. But I think this is also almost an overly cynical view because it's like, wow, they are really in the race, whether it's open source, closed source, whatever, they are actually really in the race.
Andreessen 32:11
We've talked about this in LP calls before; we've been having these policy battles in Washington for two years. Two years ago, there was a pretty significant push within the U.S. government to basically restrict or outright ban a lot of AI. If one country is the only player in town, it's easy to have that conversation. If you're actually racing against China, that's another thing. So I think the policy landscape in Washington has actually improved dramatically because now people realize this is actually a two-horse race, not a one-horse race.
Host 32:45
Of course. Yes, actually on this point, I want to jump to the policy and regulatory part, because the current trajectory of states creating 50 different sets of AI laws seems like a recipe for disaster, and in fact it leaves us with one hand tied behind our back in the AI race. Is there any plan for this? Do people realize that this would be disastrous for progress and development? Where do most people stand on this topic today?
Andreessen 33:16
Yes, it's a bit complicated. To rewind, two years ago I was very concerned that there would be truly destructive legislation on AI at the federal level. At that time, we were very deeply involved, and we've talked about this in the past. The good news is that I think that risk is very low today. In Washington, whether it's Democrats or Republicans, there is almost no sentiment that really wants to stifle it. Essentially, there is little interest in doing anything that might prevent us from beating China. So, on the federal side, things are much better now. There will be issues and tensions in the system, but I think things look quite good. Jen, as you pointed out, this has shifted a lot of attention to the states. What is basically happening is that under our federal system, states can legislate on many things. And as always, it is a combination of both: many well-meaning people are trying to figure out what to do at the state level, and there are also many opportunists, because AI is a hot topic. So if you are an ambitious state legislator in some state, and you want to run for governor and then president, you want to latch onto that heat. So there are political motivations for doing state-level things. Sitting here today, we are tracking about 12 bills across 50 states. By the way, it's not just blue states; there are also red states. I have spent about five years complaining about Democratic politicians threatening to do something to technology, but there are also many Republicans; the Republican Party is not a monolith on this issue. In various states there are quite a few Republican officials with what I would call misguided or badly advised views, trying to propose bad bills.
Andreessen 34:54
It's a bit strange, because the federal government does have the authority to regulate interstate commerce, and AI, as a technology, is inherently interstate. No AI company operates only in California, or only in Colorado or Texas. Among all technologies, AI is clearly something of national scope. It is clear that the federal government should be the regulator, not the states. But the federal government needs to assert its power and needs to step in.
Andreessen 35:25
There have actually been attempts at this. There was an attempt to add a moratorium on state-level AI regulation, essentially reserving the federal government's right to regulate AI and preventing states from advancing these bills. I think that was part of the negotiations around the so-called "big, beautiful bill." That was the deal behind it, and that deal fell apart at the last minute, so the moratorium did not happen. To be fair, critics of the moratorium argued that it might have gone too far. In the end it couldn't gather enough support to pass, and it arguably did go too far in limiting states from doing certain kinds of regulation that they really should be able to do. So it just couldn't take off. We are currently having very active discussions in Washington about what to do next. The administration is very supportive of the idea that the federal government should be responsible for this, as it is essentially a 50-state issue and a matter of national importance. And I would say that most members of Congress on both sides somewhat understand this. So we need to find a way to make it happen, but I believe it will happen. Some state-level legislation is quite crazy. Colorado passed a very stringent regulatory bill last year, which faced strong opposition from the local startup ecosystems in Denver and Boulder. In fact, they are now trying to reverse that bill, perhaps a year later.
Host 36:51
Some of the nuances, such as algorithmic discrimination and how it gets defined: what are the extreme versions they proposed?
Andreessen 36:59
Yes, the really harsh one is the one we are fighting against in California, called SB 1047. It basically mimics the so-called EU AI Act. This is the backdrop for everything in the U.S.: the EU passed this thing called the AI Act, I don't know, about two years ago, and it has effectively stifled AI development in Europe to a large extent. It was so harsh that even large American companies like Apple and Meta did not launch leading AI features in their products in Europe. That's how harsh the bill is. This is a typical European approach; they have this viewpoint, and they actually say: if we can't be the leaders in innovation, at least we can be the leaders in regulation. Then they passed this incredible, self-destructive thing. And then years went by, and they were like, oh my god, what have we done? So they are experiencing their version of regret.
Andreessen 38:00
By the way, when I talk about Europe, I tend to be very pessimistic about the whole thing. I will tell you, my most pessimistic friends about Europe are those European entrepreneurs who moved to the U.S., and they are absolutely furious about what is happening in Europe on this issue. But even so, the situation in Europe is so bad that they have shot themselves in the foot so hard that the EU now has a process trying to roll it back, trying to repeal GDPR. In short, for those tracking Europe, Mario Draghi, I think the former Prime Minister of Italy, did something called the Draghi Report about a year ago, a report on European competitiveness. He somewhat detailed all the ways Europe is hindering itself, part of which is excessive regulation in areas like AI. So they are trying to reverse it or make gestures. We'll see what happens.
Andreessen 38:45
Okay. Amid all this, California inexplicably decided to basically copy the EU AI Act and try to apply it to California, which might strike you as completely crazy, to which I would say, yes, welcome to California. This is basically the political dynamic in Sacramento (the state capital). They are a bit crazy. Had it passed, it would have completely stifled AI development in California. Fortunately, although it did pass both houses of the legislature, our governor vetoed it at the last moment. Jen, as you said, it would have done a lot of things with devastatingly bad impacts. But one of the things it would have done is assign downstream liability to open-source developers. So, we talked about the open-source situation in China. Okay, now you can have American companies doing open-source AI. By the way, you will also have American scholars and independent individuals developing open source on evenings and weekends, which is a key way all this technology spreads. And this law would assign any downstream liability for the misuse of open source back to the original open-source developers.
Andreessen 39:48
So, you are an independent developer, or a scholar, or a startup, and you develop and release an AI model. The AI model works fine. That release is great. Five years later, it gets embedded in a nuclear power plant, and then the nuclear power plant has a meltdown, and then someone says, oh, that’s the AI’s fault. The legal liability for the nuclear meltdown or any other real-world event that happens years later would be retroactively assigned to that open-source developer. Of course, this is completely insane. This would completely stifle open-source. This would completely stifle startups doing open-source. This would completely stifle academic research, just like it would completely kill anything in the field.
Andreessen 40:24
So, this is the level of playing with fire that those state-level politicians are flirting with. As I said, I think the good news is that the federal government understands this. I doubt this will get resolved, but it really needs to be resolved, because it makes no sense for us, as a country, to have states operating in such a suicidal manner. So that's what we are doing.
Andreessen 40:45
This is what we call the "Little Tech Agenda." We are extremely focused on the freedom to innovate for startups. We are not trying to litigate many other issues. We operate in a completely bipartisan manner, and we have broad support on both sides of the aisle. So this is a real bipartisan effort, very policy-driven, and I think it aligns very well with the broader interests of the country. So that's what we are doing. Then we get another question, some actually from LPs, but many from employees, which is: okay, why us? Right? As with any of these policy issues, there is always a collective action problem, a tragedy of the commons: in theory, every venture capital firm and every tech company should weigh in on this issue. What actually happens is that most people don't speak up at all. So to some extent, the responsibility for fighting these things falls on certain people. Ben and I basically concluded that the stakes are too high here. If we are going to be industry leaders, we have to take responsibility for our own fate, for better or worse. I think that's the cost of doing business as leaders in the field.
Host 41:51
Before we conclude the AI topic, I want to return to a question that was submitted. Do you think usage-based or utility-based pricing is the right way to price AI? Compared to charging by seat?
Andreessen 42:02
This is an excellent question. It's a huge one on my list of "trillion-dollar questions," and the answer to it will drive trillions of dollars in market value. Yes, usage-based pricing, if you think about it from the perspective of startups and venture capital, is actually quite astonishing. I don't talk about this much in public because I don't want it to stop, right? I think it's actually quite amazing: you have these large tech companies with incredible R&D capabilities building these large AI models, an incredible new form of intelligence. And then it turns out they are already in a war: a cloud war, right? AWS vs. Azure vs. Google Cloud, and all the other cloud efforts. In another parallel universe, they would basically be keeping all this magical AI secret and private, using it only within their own businesses, or using it to compete with more companies across more categories. But in reality, what they are doing is spreading their magical new technology through their cloud businesses, which is a business with incredible scale effects, and there is this intense competition among vendors, and prices are dropping very quickly. So you have the most magical new technology in the world, and it is basically being offered by these companies as a cloud service to everyone on Earth, just a click away, and for a relatively small amount of money. And it's usage-based, which is great for startups, because it means you can easily get started, right? For startups building AI applications, there are basically no fixed costs. They don't need huge fixed costs because they can directly access OpenAI or Anthropic or Google or Microsoft or any other cloud-based "pay-per-use" intelligence token service and just get started. So from the perspective of startups, this is a wonderful thing: the most magical things in the world can be purchased on a pay-per-use basis. It's absolutely amazing.
Andreessen 44:09
And that model is working. Those companies are very happy, they are growing very quickly, they are happily reporting huge cloud revenue growth, they are satisfied with their profit margins, and so on. So yes, I think generally speaking this works, and those businesses may become even larger.
Andreessen 44:27
But back to the question: that doesn't mean the best pricing model for every application is pay-per-use. In fact, I think the opposite is true. You know, we spend a lot of time researching this. In fact, we have dedicated pricing experts at the firm. We spend a lot of time researching pricing with our companies because it is a magical art and science that many companies do not pay enough attention to. So we spend a lot of time with our companies on this issue.
Andreessen 44:50
Of course, one core principle of pricing is that if you can avoid it, do not price based on cost; you want to price based on value, right? Just like you want a price from which you can gain a certain percentage of business value, especially when you are selling to enterprises, you want to price as a percentage of the business value you provide.
Andreessen 45:06
So you do have some AI startups charging per use for certain things they are doing, but you also have many other companies exploring different pricing models. Some are just copies of SaaS pricing models, but you also have other companies that are exploring pricing models, for example, if AI can actually do the work of programmers, or if AI can do the work of doctors, nurses, radiologists, lawyers, paralegals, or teachers, right? Or whatever. Basically, can you price based on value? Can you capture a certain percentage of the value of work that would have required a person to complete? Or by the way, similarly, can you price based on marginal productivity? So if you can significantly increase the productivity of a human doctor because you provide them with AI, can you price as a certain percentage of that productivity increase from the symbiotic relationship between humans and AI?
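(Editor's note: a back-of-the-envelope sketch of the contrast being drawn here, with made-up numbers: billing per token versus capturing a share of the productivity value created. None of these figures come from the interview.)

```python
# Hypothetical comparison: pay-per-token billing vs. value-based pricing (illustrative numbers only).
tokens_per_month = 50_000_000
price_per_million_tokens = 3.00      # $/1M tokens, hypothetical per-use rate

users = 40
hours_saved_per_user = 20            # hypothetical productivity gain per user per month
hourly_value = 150.00                # e.g. a professional's loaded hourly rate, in dollars
value_share = 0.15                   # vendor captures 15% of the value created

usage_revenue = tokens_per_month / 1_000_000 * price_per_million_tokens
value_created = users * hours_saved_per_user * hourly_value
value_revenue = value_created * value_share

print(f"pay-per-token revenue: ${usage_revenue:>9,.0f} / month")
print(f"value created:         ${value_created:>9,.0f} / month")
print(f"value-based revenue:   ${value_revenue:>9,.0f} / month")
```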
Andreessen 45:58
So what I think we are seeing in the startup space is a lot of experimentation with these pricing models. I think it’s worth emphasizing that this is very healthy.
Andreessen 46:07
I once gave a little talk on this: high prices are really underrated. High prices are often a favor to customers. This is actually very interesting.
Andreessen 46:14
The naive view on pricing is that lower prices are better for customers. The more nuanced view is that high prices are often beneficial to customers, because higher prices mean suppliers can make products better, faster, right? A supplier with higher prices and higher profit margins can invest more in R&D and actually make the product better. Most buyers are not just looking for the cheapest price; they want something that really works well. Customers will never say this, and it will never show up in surveys, but high prices can actually be a gift to customers, because they allow the supplier to get better, to make the product better, and ultimately to benefit the customer.
Andreessen 46:49
So I am very encouraged by the degree to which AI entrepreneurs are willing to experiment with these. We will see how the results turn out. But at least so far, I feel good about the attitude of the industry.
Host 47:01
Awesome. Actually, when you were speaking, I had about 10 follow-up questions, but I want to go back to one trillion-dollar question you raised at that point: will open source or closed source win? Do you feel we already have a verdict in this debate, or where do you put it?
Andreessen 47:17
No, I think it's still open. I think it's still very open. Closed-source models are continuously getting better. By the way, if you have a rough understanding of the people working in large labs, that is, those working on large proprietary models, they usually tell you that progress is continuing at a very fast pace. There is often this concern in the market that maybe the capabilities of these models are peaking. Indeed, there are some areas where people are struggling to overcome challenges, but those working in large labs are like, oh no, we have 100 new ideas. We have a lot of new ideas. We have many new ways of doing things. We may need to find new ways to scale, but we have a lot of ideas on how to do it. We know many ways to make these things better. Basically, we are constantly making new discoveries.
Andreessen 48:01
So I want to say, overall, everyone working in the big labs is quite optimistic. So I think large models will continue to get better, and soon.
Andreessen 48:11
And overall, open-source models are also continuing to get better. As I said, there’s probably another big release like Kimi every month, and it’s like, wow, that’s amazing. Wow, they really compressed it and achieved that capability in a very small form factor. So that’s a fact. Then, maybe just to mention a third point, another real benefit of open source is that it’s something easy to learn, right? So if you are a computer science professor wanting to teach CS or AI courses, or if you are a computer science student wanting to learn it, or if you are just an ordinary engineer at a regular company wanting to learn this new thing. Or just someone in a basement with a startup idea.
Andreessen 48:59
The existence of these cutting-edge open-source models is amazing, because that's the education you need. These open-source models actually show you how to do everything, right? So this also leads to a rapid expansion of knowledge about how to build AI, which matters a lot compared to a counterfactual world where everything is basically locked up inside two or three big companies. So open source is also spreading knowledge, and that knowledge is producing a lot of newcomers. As you see today, AI researchers are extremely scarce. Today's AI researchers are paid more than professional athletes, right? That's the supply-demand imbalance; there aren't enough hands. But here too, the shortage is creating the surplus. There are a lot of smart people in the world quickly mastering how to build these things. Some of the best AI talent in the world is probably only 22, 23, or 24 years old. By definition, they cannot have been in this field for very long; they cannot be lifelong experts, right? They had to pick it all up in the past four or five years. And if they could do that, then in the future more people will be able to do it. So the dissemination of expertise in this technology is happening very quickly now.
Anderson 50:15
So, yes, as I said, I think this is still an open game. By the way, the long-term answer is likely to be a combination of both. As I said, if you believe in my pyramid industry structure, then there will definitely be a huge business belonging to whatever is the smartest thing, almost regardless of how much it costs. But there will also be a huge market for ubiquitous, low-cost small models, which is what we are seeing.
Host 50:39
Another question you raised back then was who would win between the existing incumbents and startups. At the time, I think the incumbents' attitude towards AI was mixed, and I believe that has fundamentally changed over the past two years. And, for example, thriving startups are now increasingly migrating into the incumbent category themselves. On that point, how would you answer the question today, and what is your assessment of the current state of the world?
Anderson 51:08
So, look, the big companies are definitely trying to play this game. Google is trying, Meta is trying, Amazon, Microsoft. There are a number of such companies that are aggressively involved. Then you have the companies we call the "new giants," like Anthropic and OpenAI. But even in the past two years, you have seen brand new companies emerge that almost instantly become giants. You could say xAI is one of them.
Anderson 51:33
Mistral, by the way, is a great exception to the European point I made earlier. Mistral should do very well: a continental European AI champion, the exception that proves the rule. But now there is a batch of such companies doing well and becoming new giants.
Anderson 51:52
Of course, there are also a lot of startups, by the way. Then there are actual foundation-model startups, right? So we funded Ilya Sutskever when he left OpenAI to start a new foundation-model company (Safe Superintelligence). We funded Mira Murati when she also left OpenAI. We funded Fei-Fei Li from Stanford to start a spatial intelligence model company (World Labs).
Anderson 52:07
So there are new attempts, all very early, but with real promise of quickly establishing new giants. All of this is happening. And then, on top of that, there is a huge explosion of AI application companies, right? Basically, those are the companies that take the technology and deploy it in specific fields, whether that is law, medicine, education, creativity, or whatever. Again, the complexity here is escalating very quickly, which is astonishing.
Anderson 52:42
Let me briefly talk about application companies, such as Cursor. They acquire core AI capabilities, which they purchase on a per-use basis from Anthropic, OpenAI, or Google, and then they build a code editor, which we used to call an IDE (Integrated Development Environment), or basically a software creation system. So they are building an AI coding system on top of Anthropic, OpenAI, or other large models. The criticism from the industry towards these companies is, oh, those are so-called "GPT Wrappers." That has a somewhat derogatory connotation, and the basic idea is that, well, they aren't actually doing anything of lasting value because the entire focus of what they are doing is showcasing AI, but that AI isn't theirs; the showcased AI comes from others.
Anderson 53:31
So the claim is that these "wrapper" products ultimately won't have lasting value. The reality is quite the opposite. Leading AI application companies, like Cursor, first of all, find that they are not just using a single AI model. In fact, as these products become more sophisticated, they end up using many different kinds of models, each tailored to a specific aspect of how the product works. So they might start with just one model, but ultimately use a dozen or so models. Then over time, it could be 50 or 100 different models used for different aspects of the product.
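As a rough illustration of the multi-model pattern he is describing, here is a minimal Python sketch of an application layer that routes each task type to a different model backend. Everything in it, the task names, backends, and costs, is a hypothetical placeholder; it is not Cursor's architecture or any vendor's real API, just the shape of the idea.

```python
# Minimal sketch of per-task model routing inside an AI application.
# All backends here are stand-ins; a real product would wrap vendor SDKs
# or a self-hosted open-source model behind the same interface.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ModelBackend:
    name: str
    cost_per_call: float           # hypothetical unit cost, for illustration only
    call: Callable[[str], str]     # takes a prompt, returns a completion

def frontier_model(prompt: str) -> str:
    # Placeholder for an expensive hosted frontier model.
    return f"[frontier answer to: {prompt[:40]}...]"

def small_fast_model(prompt: str) -> str:
    # Placeholder for a cheap, low-latency small model (hosted or self-hosted).
    return f"[small-model answer to: {prompt[:40]}...]"

# The routing table is the whole point: each product surface gets the model
# that fits it, and rows can be added or swapped without touching the app.
ROUTES: Dict[str, ModelBackend] = {
    "refactor":     ModelBackend("frontier-api", 0.010, frontier_model),
    "autocomplete": ModelBackend("small-fast",   0.001, small_fast_model),
    "commit_msg":   ModelBackend("small-fast",   0.001, small_fast_model),
}

def route(task: str, prompt: str) -> str:
    """Send the prompt to whichever backend is registered for this task type."""
    backend = ROUTES.get(task, ROUTES["refactor"])
    return backend.call(prompt)

if __name__ == "__main__":
    print(route("refactor", "Make this function async"))
    print(route("autocomplete", "for i in range("))
```

The point is only the routing table: a product might launch with one row and grow to dozens over time, with individual rows swapped from hosted APIs to in-house or open-source models as the economics change.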
Anderson 54:02
Secondly, they end up building a lot of their own models. Many leading application companies are actually integrating backwards and building their own AI models, because they have the deepest understanding of their own domains and can build the models best suited for them. By the way, there is also open-source AI, and they can pick up and run open-source models. So if they don't like the economics of purchasing intelligence from the cloud providers, they can take one of these open-source models and run it themselves, and these companies are doing that as well. So the best AI application companies are actually fully mature deep-tech companies that are building their own AI. So, I think all of...
Host 54:43
That brings it back to the models... Marc, but when you think about god models versus small models, as you described them, those companies you just mentioned, how would you categorize them...
Anderson 54:49
Some of them, I mean, we should let them announce what they are doing in due time. But some of them are also now doing large model development.
Anderson 54:57
Again, this is also part of the learning of the past two years, which is very interesting. Two or three years ago, you would definitely say, wow, OpenAI is far ahead, and probably no one can catch up.
Anderson 55:12
And then it's like, okay, Anthropic has caught up. But you know, they came from OpenAI, so they had all the secrets anyway; they knew how to do it. So, okay, they've caught up, but surely no one else can catch up after that.
Anderson 55:20
But shortly after that, a batch of other companies caught up very quickly. xAI might be the best example: Elon's company xAI, with Grok as its consumer-facing product.
Anderson 55:34
xAI has basically caught up to the state-of-the-art levels of OpenAI and Anthropic from scratch in less than 12 months, right? Again, this somewhat contradicts any notion of a permanent lead, that any existing giant can basically lock up the entire market. If you can catch up like that.
Anderson 55:50
As we discussed, the Chinese piece is brand new within the past year, right? The DeepSeek moment was probably January or February of this year, right? So less than 12 months ago. Now you have about four Chinese companies that have actually caught up. So it's like, okay. Again, these are trillion-dollar questions rather than answers, but it's like, wow, okay. This is one of those things where once someone proves it's feasible, others seem able to catch up, even those with far fewer resources. So, I don't know what this will bring. Maybe it makes you a bit more skeptical about the long-term economics of the big players. On the other hand, maybe it makes you more optimistic about the entrepreneurial ecosystem. It should definitely make you more optimistic about startup application companies, right, being able to do interesting things, which is why we're so excited about it.
Anderson 56:52
So yes, I think these are real-time dynamics, and I think more time needs to pass before we know the exact answers. I should say this: sometimes it scares people when I say these are open questions. When a company faces fundamentally open strategic or economic questions, that is often a big problem, because a company needs to have a strategy, and the strategy needs to be very specific. Companies must make very specific, concrete choices about where to deploy investment dollars and people, and the strategy must be logically coherent; otherwise, the company collapses into chaos. So companies need to answer these questions, and if they get the answers wrong, they are really in trouble.
Anderson 57:32
In venture capital, we have our own issues, but one huge advantage we have is that we don't have to... we can bet on multiple strategies at the same time, right? And we are doing that. So we are betting on big models and small models, proprietary models and open-source models, right? And foundation models and applications, right? And consumer and enterprise.
Anderson 57:51
So the portfolio approach, in essence, is that we are actively investing in every strategy we can identify, as long as we believe it has a reasonable chance of success, even when it contradicts another strategy we are investing in. Part of the reason is that the world is messy, and many things may work; many of these questions won't have clean yes-or-no answers, and I think the answer to many of them is simply "both." The other part is that if one strategy doesn't work, we are not so much hedging against it as making sure alternative strategies are represented in the portfolio, so we have multiple ways to win. In short, that is the goal. That is why we take this approach in the field, and that is why, when I say there are these big open questions, I have a big smile on my face: I think it actually works to our advantage.
Host 58:39
This is also a good time to transition to the a16z question, as we have received some, and there are some we sent in advance. So I will start with a broad topic. Is there something you and Ben (Ben Horowitz) "disagree but commit" on?
Anderson 58:55
Basically, on "disagree and commit"... you know, mostly we agree. I mean, we are like an old married couple, so we bicker constantly. But we... yes, the sparks died out long ago; we've been arguing for decades. So, yes, I mean, look, we debate everything. We argue about everything. That said, one thing that makes our partnership work is that we do tend to arrive at the same conclusions. Each of us is willing to be persuaded by the other, so most of the time we end up in the same place.
Anderson 59:32
So I would say, sitting here today, there is nothing specific where I am thinking, I can't believe I am putting up with this crazy thing he is doing that I really disagree with but feel I have to commit to, and I don't think the reverse is true either. So we don't have that kind of thing.
Anderson 59:51
Frankly, I would say the biggest thing, the thing I discuss with him the most, since someone asked the question, is not the most important thing we are doing, but it is indeed a topic. The thing I go back and forth with him on the most, and maybe it's that I'm always second-guessing myself or never sure what position I should take, is basically the firm's public posture: our presence in the world, including public statements, controversies, and how we voice and express our views on things. I would say there is a tension there that is real, perhaps obvious, but very important.
Anderson 01:00:32
Generally speaking, the more we are out there, the more outspoken and controversial we are, the better it is for business, in the sense that entrepreneurs like it and founders want to work with that. This is very clear at the moment: founders want to work with people who are basically brave, who take controversial stances and express things clearly. They want that for many reasons. One is that it is a display of courage, which they appreciate. Another is that it tells them who we are before they even meet us. This has proven to be an incredible competitive advantage.
Anderson 01:01:07
Long-term LPs will know that this is why we adopted a very aggressive marketing strategy from the very beginning. It has worked out completely. The whole idea is that if we can broadcast our message, we can be very clear about what we believe, even to the point of being controversial, and the best founders in the world will know about us before they ever walk in, right? They will know about us and our beliefs before they meet us, unlike the rest of the venture capital world, which, at least at that time, basically kept everything quiet, so founders had no idea who those people were or what they believed.
Anderson 01:01:35
This has been very effective, and it continues to be very effective. By the way, this is universally true. On the other hand, being publicly visible and controversial also has externalities in many ways. I would say we are working very hard to thread the needle. So we are not shying away from doing a lot of external work. Erik Torenberg and the team he has built, which we have talked with you about in the past, are ready to go. We are going to double down on basically being the leaders in articulating the important business issues, specifically the issues people need to be able to understand. This has proven to be very effective. By the way, a significant portion of our public communication is actually aimed at Washington. Because, again, if you are a policymaker in Washington sitting 3,000 miles away, and all your information sources are East Coast newspapers that hate Silicon Valley, that is bad. So our ability to broadcast informed views on technology means that we often meet people in DC who say, yes, most of what I know about this topic I learned from you, because I listen to the podcast, I read the articles, I watch the YouTube channel. So we will continue to do this, and overall we are on the front lines in this regard. But yes, he and I do have some back and forth on exactly how much, and how often, to touch "third rail" (politically sensitive) topics. I would say we are still trying to strike that balance.
Host 01:03:03
As Elizabeth Taylor said, as long as you spell our names right, it is usually a good thing, especially when it comes to "little tech." I think that question also speaks to your relationship with Ben, more than 30 years of collaboration, to the point where the two of you have become one person in people's minds; some people just say "Marc Andreessen Horowitz," as if Marc and Ben have merged into a single person. That is the result of over 30 years of working together. Okay, it has been two years since you restructured around AI and launched the "American Dynamism (AD)" fund. What do you think you did best? In hindsight, what did you underestimate or miss in that decision-making process?
Anderson 01:03:51
No. I mean, look, we make a lot of mistakes, but I think those were the right decisions. AI, as I said, underpins the entire venture thesis, and the venture thesis we have had from the beginning, which many firms before us also held, is very much correct.
Anderson 01:04:05
I think this thesis is also true: venture capital money is made when fundamental architectural changes occur, when there is a fundamental shift in the technology landscape. This has basically always been true for venture capital. The reason is that if you have a fundamental technological change, you get a period of creation during which aggressive people can start new companies and have the opportunity to enter and win categories before the large companies can react. Without a fundamental technological change, it is difficult for startups to succeed, because the large companies will ultimately do everything themselves. So certain firms really do thrive on these transitional waves.
Anderson 01:04:46
So there is always this question. I mean, I would say the best venture capital firms in history are the ones most actively able to navigate from one wave to the next, right? Look, I was a beneficiary of this when I came to Silicon Valley in 1994. In 1994, there was no such thing as an "internet venture capital firm." It simply did not exist. But there was a group of venture firms at the time, and our venture firm, Kleiner Perkins, said, oh, this is a new architecture, this is a new technological change. It seemed completely crazy; everyone said you won't make any money, these kids are crazy. But they decided to bet; they were willing to invest. By the way, KP not only invested in Netscape in the 90s but also in Amazon and then Google. They also invested in @Home, which basically enabled home broadband. They invested in a whole series of companies. And that was a venture firm that got its start in the 1970s around the so-called microcomputer, three technology generations earlier, and it navigated from one wave to the next. Sequoia is the same; basically any successful venture firm that has been operating for 30, 40, or 50 years is like that. So I think in this business you have to jump on the new thing.
Anderson 01:08:15
Perhaps a related question is, what do we feel we are missing now? I think the answer is, really, nothing... I don't think we are missing a vertical right now. I don't feel there is a specific vertical at the moment where we're saying, oh, we just need a new unit or a new fund. I don't see that right now. I think it is more that the verticals already in front of us are executing very well and becoming the best partners for portfolio companies.
Host 01:08:44
Yes, and in fact, on AD, there has been a lot of discussion about AI taking jobs. Ironically, physical-world jobs in the AD sectors have never been more in demand, tied to energy and obviously to data center construction and so on. So it seems the pendulum is also swinging, from a societal standpoint, toward acceleration. You mentioned the importance of society being prepared to adopt technology. Have you seen that preparedness accelerate recently? What are your thoughts on how to actually increase it, just to make sure adoption keeps pace with the speed at which the technology is actually being deployed?
Anderson 01:09:27
So, look, we've talked about this before, and this pattern goes back a very long time. Look, if you go back 300 years, there has always been this recurring panic and fear caused by new technologies. Even if you go back 500 years to the advent of the printing press, which basically coincided with the rise of Protestantism and completely changed things, there has always been ongoing panic.
Anderson 01:09:55
There have been multiple waves of automation panic over the past 200 years. The foundational panic of Marxism was largely based on the fear of job elimination through the application of automation. Today, you hear many of the same arguments about AI concentrating all wealth in the hands of a few, while others become poor and miserable.
Anderson 01:10:14
This is basically what Marx used to say, and by the way, I think it was wrong then and is wrong now. We can discuss that. But even in the 1960s, there was a whole panic around AI replacing all jobs. It has long been forgotten, but it was a big deal during the Johnson administration. You read today's "AI pause" letters, you know, the one that just came out a few weeks ago, where Prince Harry was the headline name, talking about how AI would ruin everything. Then go back to 1964: a group of leaders from academia, science, and public affairs formed what was called the "Triple Revolution" committee. If you Google "Triple Revolution committee Johnson White House," you'll find it. It was a very similar declaration: we need to stop the commercialization of this technology today, or we will ruin everything. Even in the past 20 years, there was a major panic around outsourcing in the 2000s, the idea that it would take away all the jobs. Then, strangely enough, it was robots in the 2010s, which is amazing because robots in the 2010s couldn't even work properly; to some extent they still can't. But there was panic around that. Now it is a kind of AI panic. So I would say, look, the way I describe it is that we in Silicon Valley have always hoped that the work we do is meaningful. Frankly, most of the time, we hear people telling us that everything we do is stupid and won't work. That is the default position, and then at some point that flips into a panic about how it will ruin everything.
Anderson 01:11:51
It's easy to sit here and be cynical about this, especially when you see the pattern repeat over time. But my point is that we really do need to respect it and be very aware of it. Basically, we are the dog that caught the bus: we always wanted to do meaningful things, and now we are doing meaningful things, and other people in society genuinely care about these things. We need to think very carefully about all of this and do a good job, which is not just about building the technology but also about explaining it. Look, I think we have a real obligation to explain ourselves and to engage on these issues, including how to measure the progress.
Anderson 01:12:27
This is a classic social science question: if you want to understand people, there are basically two ways to understand what they are doing and thinking. One is to ask them, and the other is to observe them. Every sociologist will tell you this. You can ask people, right, and the way you do that is through surveys, focus groups, and polls, and you learn what they say they think. Or you can observe them, what is called "revealed preferences," and just watch behavior. What you often see, across many areas of human activity, including many different aspects of politics and society and cultural change over time, is that the answers you get when you ask people are very different from the answers you get when you observe them. There are a bunch of theoretical explanations for why that is. Marxists claim people have false consciousness. The explanation I believe, to some extent, is simply that people have opinions on all kinds of things, and when given a context to express themselves, they tend to express themselves in a very intense way; but if you just observe their behavior, they are usually much calmer, more measured, and more rational. This comes back to what is happening with AI now: if you run a survey or poll on American voters' views of AI, the results look like everyone is in extreme panic. It's like, oh my god, this is terrifying, this is awful, this will take away all the jobs, it will ruin everything, the whole narrative. But if you look at revealed preferences, they are all using AI. They are downloading the apps, they are using ChatGPT at work. You know, you see this online all the time.
Anderson 01:14:04
It's like: I had an argument with my boyfriend or girlfriend and I don't understand what happened, so I copy and paste the text exchange into ChatGPT and ask it to explain what my partner is thinking and tell me how to respond so he or she won't be mad at me, right? Or: I have this skin condition, and the doctor... I take a picture and feed it in, and I finally understand my health situation. Or I use it at work: I have to get this report ready by Monday morning and I don't have time, and ChatGPT really saved my life. So in their daily lives, I would say, just look at the data: people are not only using this technology, they love it, they are passionate about it, and they are adopting it as quickly as possible. So I tend to think we will see this public discussion go back and forth for a while, because there is this divergence between what people say and what they do. But I do believe the "what people do" side is clearly the one that ultimately wins. By the way, I think this technology will be just like every other technology: it will spread very widely, it will scare everyone, and then 20 years from now everyone will say, oh, thank goodness we have it; wouldn't life be miserable without it? Or people will reach that conclusion five years from now, or a year from now. So I am very optimistic about where this ultimately lands; there will just be turbulence along the way.
Host 01:15:22
I'm laughing because I witnessed this in the wild just late last week. I was on a plane. The person next to me was talking to his ChatGPT, and I could see he was saying: help me draft a complaint letter to United Airlines about this flight delay. I thought to myself, sir, you're on the flight right now. At least wait until it's over. That was very funny. I'm sure he had a great email. Okay, I'm going to switch gears and ask a few interesting questions that were sent in. This is meant to be a lightning round. So what's something recently that you've changed your mind about? Bonus points if it was someone younger than you who changed it.
Anderson 01:15:59
I mean, this happens basically every day. It's just the norm, on almost anything you can imagine. I'm not good at pulling up specific examples, so I don't have one on hand. But like I said, there is always someone showing up, either something someone wrote or something someone said, and yes, very frequently it's younger people. I'd say it is a daily experience.
Host 01:16:24
That's a good way to stay young. Are you planning to be cryogenically frozen?
Anderson 01:16:32
No. The track record of current cryonics technology isn't great, and the stories are a bit scary. But, you know, we'll give it some time.
Host 01:16:48
How do you stay grounded when your influence itself might distort the reality around you?
Anderson 01:16:53
Yes, so I would say the good news is... One is, look, this concern is real, and it's hard for me to talk about, with my Midwestern kind of... we Midwesterners are either very humble or very good at pretending to be humble. It's hard to talk about, but it does require some introspection. But yes, I mean, look, the "reality distortion field" is definitely real. By the way, the reality distortion field has one very big advantage, which is the ability to get people to do what you want them to do. So there is another side to it. But in terms of having an actually accurate understanding of what is happening, yes, that is a concern.
Anderson 01:17:30
I think I would say two things. One is, you know, my partners are very straightforward, including Ben, who will tell me very directly when I'm wrong. But more generally, we are constantly exposed to reality. This is also, again, you mentioned staying young, keeping the hairline or whatever: we run these experiments, you know, because we make these decisions, to invest or not to invest, we work with these companies and do all these things, and reality intervenes quickly. In this business, delusions don't last long, because things either work or they don't. You have these long, detailed discussions about this theory and that theory, and then reality just slaps you in the face, like, you idiot, right? It's the ultimate frustration of this business, and it's also very motivating: the frequency with which you think you've applied excellent analysis, you invest or pass based on that analysis, and it turns out your analysis was completely wrong, right? You just completely overestimated your epistemological ability to analyze these things. You just, you know, basically got it wrong.
Anderson 01:18:35
For example, I always ask: is any given activity we engage in adding value, or is it actually subtracting value? I think that applies to everything in this business, and it applies to my own contributions as well. So there's that. And then maybe the last point is, I have the entire internet standing by to tell me I'm an idiot. That's not a bad thing, and it does so regularly.
Host 01:19:04
On the investment decision-making you just mentioned, my favorite quote, I think you said this in your Cheeky Pint interview, is that when you invest in a company and it doesn't work out, at least it goes bankrupt and that's the end of it, right? But if a company you passed on does well, it does extremely well, and you'll hear about it every day for the rest of your life.
Anderson 01:19:24
For the next 30 years. Reality slaps you in the face: you fool, you had it. It was right there in your office. All you had to do was say yes. By the way, this is the thing: every great VC has this story; it's the story VCs tell each other. Every great VC basically has this history of, oh my god, I had it, it was in my office, that thing was in my office, and I said no. If only I had said yes at the time. So, yes, it's hard... The Wall Street Journal and CNBC remind you every day that you made a huge mistake. That's good. It's very good for keeping the humility factor.
Host 01:20:02
Very humbling. Helps you stay grounded at all times. Last question, if the opportunity arises, do you plan to go to Mars?
Anderson 01:20:10
Probably not.
Host 01:20:16
My Zoom background isn't sending out positive signals. This is...
Anderson 01:20:21
I don't even want to leave California. I hardly want to leave my house. So, yes, I don't... maybe, maybe through VR, and then we'll see what happens. I mean, look, that said, I think Elon will make it happen. So, I don't know exactly, and I don't want to predict, this isn't a prediction, but I wouldn't be surprised if there are routine trips back and forth in ten years. So yes, this could actually become a real question. By the way, I do know a lot of people who want to go.
Host 01:20:53
Including myself.
Anderson 01:20:54
You are? Oh, that's great.
Host 01:20:55
International flights have already prepared me for a six-month journey to Mars. So I'll be checking in.
