Karpathy's ten-thousand-word in-depth interview: anxious amid "AI psychosis," he says all verifiable fields will ultimately belong to machines

Wallstreetcn
2026.03.21 07:17

On the podcast "No Priors," Karpathy discussed the autonomous capabilities of AI agents, arguing that verifiable domains will gradually be taken over by machines. He admitted to feeling anxious, saying he is experiencing a kind of "AI psychosis," and revealed that OpenAI researchers are automating their own work. He described a distributed AI research network that could potentially surpass traditional laboratories, emphasizing that this era is rewriting the rules.

What happens to the role of human engineers when AI agents can autonomously design experiments, run code, and optimize models—even working continuously while you sleep? In all unverifiable fields, humans still hold sway; all verifiable fields either already belong to machines or soon will.

This is the latest conversation between Andrej Karpathy and host Sarah Guo on the podcast "No Priors," lasting over an hour, packed with high-density information, making it perfect for weekend reading.

In this in-depth dialogue, Andrej Karpathy candidly shared his "AI psychosis," revealing the AutoResearch project that would make leading laboratories blush. He acknowledged that OpenAI researchers are actively automating themselves and painted a picture of a distributed AI research network akin to blockchain, which may one day surpass cutting-edge labs with tens of thousands of GPUs in certain fields, providing the most honest cognitive map for this era that is rewriting all the rules.

Here are the details.


"AI Psychosis"—A Flip Starting from December 2025

This conversation began with a sense of candid disorientation.

Sarah Guo recalled a day when she walked into the office and saw Karpathy intensely focused on his screen. She asked him what he was busy with, and he looked up and said something that stuck with her: "The word 'code' is no longer accurate; I am now 'conveying my will' to my agent for sixteen hours straight."

This was not a rhetorical flourish from a tech talk. It was the most accurate description of his current state.

"I feel like I've been in a continuous state of AI psychosis," Karpathy said, in a tone hard to place between excitement and anxiety, "because the things you can achieve as an individual have unlocked tremendously."

He pinpointed the starting point of this change to last December. Before that, his ratio of writing code to delegating to agents was about 80/20; after December, this ratio completely flipped to 20/80—and he believes even that 20 is overly conservative.

"I don't think I've written a line of code myself since December," he said, "this is an extremely huge change. I mentioned this to my parents, but I feel an ordinary person cannot grasp what has actually happened or how drastic it is."

"If you randomly find a software engineer now and see what they are doing at their desks, their default workflow for building software has fundamentally changed since December."

Sarah Guo mentioned that at her investment firm, Conviction, the engineering team no longer writes code by hand either. Everyone wears a microphone and whispers to their agents all day. "I initially thought they were crazy," she said, "but now I completely accept it—I was just slow to realize: oh, this is the right way, you just got there ahead of time."

Karpathy vividly described the dilemma: "When you think about frameworks like Cursor or Codex, it's not a single conversation but many. How do you manage them all at once? How do you allocate tasks to them? What are these agent tools, these 'claws'?"

He sees many people on X doing various things, each seeming like a good idea, and he feels anxious about not being at the forefront. "I'm in this kind of psychosis because this field is fundamentally unexplored."


Where is the ceiling? "It's all a matter of skill"

Sarah Guo asked a question that many people have in their minds: What is your limit right now?

Karpathy's answer was surprisingly optimistic, yet carried an unsettling pressure: "I think it's everywhere. Even if certain things don't succeed, I believe it's largely a matter of skill—not a lack of ability, but that you haven't found a way to connect the existing tools."

He cited the example of Peter Steinberger, author of the OpenClaw project. In Peter's famous photo, he sits in front of a monitor filled with conversations from dozens of Codex agents. Once correctly prompted, each conversation takes about twenty minutes to complete a task. So Peter's way of working has become: launch dozens of code repositories simultaneously, weave back and forth between them, continuously assigning new tasks and reviewing the agents' work as needed.

"It's no longer 'this is a line of code, this is a new function,' but 'this is a new feature, delegate it to agent one; this is another feature that won't interfere, give it to agent two,'" Karpathy said, "You're manipulating your software repository with macro actions."

The underlying logic driving all this is a new obsession he calls "token throughput."

"When agents are working while you're waiting, the obvious thing is: I can do more work. If I can acquire more tokens, I should be adding more tasks in parallel," he said, "If you don't feel constrained by how much money you can spend, then you are the bottleneck for maximizing capability in the system."

He traced this feeling back to his experience during his PhD: at that time, they would feel uneasy if the GPU wasn't fully utilized, as it meant computational power was wasted. "But now, it's not about computational power; it's about tokens. How much token throughput do you control?"

Sarah Guo laughed, saying that some engineers she knows have already started "trying not to sleep when there are still subscription credits left."

This anxiety itself is the best footnote for a leap in capability.


What does it mean to master programming agents?

If you spent an entire year practicing with programming agents for sixteen hours a day, what would "mastery" look like? Karpathy's response starts from a single conversation and gradually expands upwards: "I think everyone's interest is in 'moving up.' So it's not a single conversation, but how multiple agents collaborate and form teams; people are trying to figure out what that looks like."

In this context, he mentioned a class of entities he calls "Claws," represented by OpenClaw—this is something that elevates persistence to a whole new level: it keeps looping, it has its own little sandbox and its own memory system, and it can do various things on your behalf without you having to watch it.

His praise for OpenClaw's author, Peter Steinberger, is specific and thoughtful: "He is innovating in about five different directions simultaneously and integrating them together." This includes a document referred to as the "soul document," in which Peter has crafted a genuinely engaging personality; a memory system more complex than those of similar tools; and a single entry point via WhatsApp that connects all the automation functions.

"I actually think Claude has quite a good personality; it feels like a teammate, it gets excited with you," he said, "while Codex is very dry and mechanical. It accomplishes a function, but it doesn't seem to care what you're building, as if to say, 'Oh, I did it, okay'—that's a problem."

He also mentioned Claude's precision in "psychological calibration": "When I give it a less mature idea, it doesn't respond particularly enthusiastically; but when it's a really good idea, it seems to reward me more. So I find myself trying to earn its praise, which is really strange, but I think personality is indeed important."

His proudest "Claw" experiment was building a complete smart home system for his own house—he named this system "Dobby the elf claw."

The process went like this: he told the agent that he had Sonos speakers installed at home and asked it to look for them. The agent immediately performed an IP scan of the local network, located the Sonos system, found it was unprotected by a password, logged in directly, did some web searches, found the API endpoint, and then asked, "Do you want to give it a try?"

"I said, okay, can you play some music in the study? And then the music started playing; I couldn't believe it," Karpathy said, his voice barely hiding childlike excitement, "I only typed three prompts! I just entered 'Can you find my Sonos?' and suddenly it was playing music."

Dobby later took over the entire house: lights, HVAC, pool, spa, and even the security system—when someone approached, it would send a message via WhatsApp with a picture from the external camera, saying, "A FedEx truck just pulled in; you might want to check, you have mail." "I used to need six completely different apps to manage these," he said, "but now I don't need those apps anymore. Dobby controls everything with natural language, which is wonderful."


The Second-Order Effect of Software — Apps Will Disappear, APIs Will Take Over

The example of home automation, in Karpathy's view, is a microcosm of a larger story.

Sarah Guo asked: Does this mean that people actually don't need so much software? Karpathy answered directly: "Yes, these smart home device apps shouldn't exist. They should just be APIs, and the agents should directly call these APIs."

His logic is: LLMs can drive tools, can perform very complex tool calls, and can do any combination of operations that a single app cannot accomplish. "So in a sense, this points to a possibility that a large number of customized exclusive apps shouldn't exist because the agents will break them down and turn everything into public API endpoints, and the agents are the intelligent glue that calls all these components."

He gave the example of a treadmill: the treadmill has an app, and he wants to record his aerobic training, but he doesn't want to open a web interface and go through the entire process. "All of this should just be open APIs, and this is precisely the trend towards 'agent-first.'"
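To make the "agents as intelligent glue over open APIs" idea concrete, here is a toy sketch. Everything in it is invented for illustration: the device functions, the tool registry, and the keyword routing all stand in for what a real agent would do with LLM tool-calling over actual HTTP APIs.

```python
# Toy sketch: instead of one closed app per device, each device exposes a
# plain API, and the agent routes a natural-language intent to the right
# call. All names here are hypothetical stand-ins, not real device APIs.

def sonos_play(room: str, query: str) -> str:
    # Stand-in for hitting a speaker's open API endpoint.
    return f"playing '{query}' in {room}"

def lights_set(room: str, level: int) -> str:
    # Stand-in for a lighting system's open API endpoint.
    return f"{room} lights at {level}%"

# The "public API endpoints" the agent can glue together.
TOOLS = {
    "play_music": sonos_play,
    "set_lights": lights_set,
}

def agent(intent: str) -> str:
    # Stand-in for the LLM step mapping natural language to a tool call;
    # a real agent would choose tools and arguments itself.
    if "music" in intent:
        return TOOLS["play_music"]("study", "jazz")
    if "lights" in intent:
        return TOOLS["set_lights"]("study", 30)
    return "no matching tool"

print(agent("can you play some music in the study?"))
```

The point of the sketch is the shape, not the routing: once every device is an API, one agent replaces six apps.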

The key shift is: the users of software are no longer humans, but agents acting on behalf of humans.

Of course, some may argue: we still need "vibe coding" to accomplish all of this, which ordinary people cannot do. Karpathy's attitude towards this is: yes, it is needed now, but this is only temporary.

"I think what I just talked about should be free in one, two, or three years, and it won't require any programming at all," he said. "It will be so trivial, so taken for granted, that even open-source models can do it. You should be able to easily translate the intentions of a person with little technical skill into these systems." He paused and added: "Today this requires some effort, and not many people can do it, but the threshold will come down."


AutoResearch — Kicking Human Researchers Out of the Loop

If home automation is just a small toy for Karpathy, then AutoResearch is the core project he has been truly obsessed with during this time — a system that attempts to use AI to improve AI and completely remove humans from the research loop.

"I mentioned in a tweet that to get the most out of existing tools, you must remove yourself as the bottleneck," he explained. "You can't always be waiting to prompt the next thing. You need to take yourself out of the loop. You have to arrange things so they operate completely autonomously, maximizing your token throughput. That's the goal."

His starting point is his open-source project—a small training framework for training models at the scale of GPT-2. He had spent a lot of time fine-tuning this model in traditional ways, relying on his twenty years of research intuition, conducting hyperparameter searches, and performing ablation experiments over and over again.

"I am a researcher, I've been doing this for about twenty years, and I have considerable confidence in the fact that 'oh, I've trained this model thousands of times,'" he said. "I've done a bunch of experiments, done hyperparameter tuning, done everything, and I think it has been tuned quite well."

Then, he let AutoResearch run overnight.

The next morning, the tuning results brought back by AutoResearch surprised him: it discovered the value embedding weight decay that he had overlooked, as well as the inadequately tuned beta parameters of the Adam optimizer— and there was an interaction between these two things; tuning one required the other to change as well.

"I shouldn't be the one doing these hyperparameter searches," he said. "There are objective criteria; you just need to set it up and let it run forever."
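A minimal sketch of the kind of unattended joint hyperparameter search described above. `train_and_eval` is a toy stand-in for "train the small GPT and return validation loss" (it is an assumption, not Karpathy's actual code); its fake objective couples the two knobs, mirroring the weight-decay / Adam-beta interaction he mentions.

```python
import random

def train_and_eval(weight_decay: float, beta2: float) -> float:
    # HYPOTHETICAL stand-in for an expensive training run. The toy
    # objective is minimized when the two settings move together,
    # imitating an interaction between weight decay and Adam's beta2.
    return (weight_decay * 100 - (1 - beta2) * 10) ** 2 + (beta2 - 0.97) ** 2

def random_search(trials: int, seed: int = 0):
    # Objective criterion + loop: exactly the kind of search a human
    # researcher should not be running by hand.
    rng = random.Random(seed)
    best_loss, best_cfg = float("inf"), None
    for _ in range(trials):
        wd = 10 ** rng.uniform(-4, -1)    # log-uniform weight decay
        beta2 = rng.uniform(0.9, 0.999)   # Adam beta2
        loss = train_and_eval(wd, beta2)
        if loss < best_loss:
            best_loss, best_cfg = loss, {"weight_decay": wd, "beta2": beta2}
    return best_loss, best_cfg

loss, cfg = random_search(trials=500)
print(loss, cfg)
```

With an objective metric in place, the loop can simply "run forever" overnight, which is the whole point of the anecdote.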

This was just "single-threaded" AutoResearch. What really excited him was thinking about scaling this up: those cutting-edge labs with tens of thousands of GPUs are essentially doing the same thing—just on a larger scale, and (in his view) there are still too many people intervening.

"The most interesting projects, which may also be what the cutting-edge labs are doing, are experimenting with small models, making them as autonomous as possible, removing researchers from the loop," he said. "They are too—how should I put it—overconfident, no, not confident, but overly interventionist. They shouldn't be touching these; the whole thing should be rewritten."

He painted an ideal picture: a queue of ideas from all arXiv papers and GitHub repositories; an automatic scientist that proposes ideas based on this information and feeds them into the queue; researchers can also contribute ideas, but they just enter the same queue; then a batch of workers continuously pulls tasks from the queue, tries them out, and if effective, puts them into feature branches, with occasional monitoring to merge them into the main branch.

"Remove humans from all processes as much as possible, automate everything, and achieve the highest possible token throughput—this requires rethinking all abstractions; everything needs to be reshuffled."
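The queue-of-ideas picture above can be sketched as a toy single-process loop. Everything here is a stand-in: `evaluate` fakes "run the experiment and measure validation loss," and "branches" are plain dicts rather than git branches; a real system would dispatch training jobs to a fleet of workers.

```python
import queue

# Toy sketch of the idea-queue architecture: proposers fill a queue,
# workers pull and test ideas, winners land in feature branches, and an
# occasional monitoring pass merges them into main. The idea names and
# their effects are invented for illustration.

ideas = queue.Queue()
for idea in [
    {"name": "tune-weight-decay", "delta": -0.03},  # toy: improves loss
    {"name": "double-batch-size", "delta": +0.01},  # toy: hurts loss
    {"name": "rotary-embeddings", "delta": -0.05},  # toy: improves loss
]:
    ideas.put(idea)

main_branch = {"val_loss": 3.00, "merged": []}
feature_branches = []

def evaluate(baseline: float, idea: dict) -> float:
    # Stand-in for "train the model with this change, measure val loss".
    return baseline + idea["delta"]

# Worker loop: pull tasks, try them, keep what works.
while not ideas.empty():
    idea = ideas.get()
    loss = evaluate(main_branch["val_loss"], idea)
    if loss < main_branch["val_loss"]:
        feature_branches.append({"idea": idea["name"], "val_loss": loss})

# Occasional monitoring pass: merge the winners into main.
for branch in feature_branches:
    main_branch["merged"].append(branch["idea"])
    main_branch["val_loss"] = min(main_branch["val_loss"], branch["val_loss"])

print(main_branch)
```

Human researchers, in this picture, are just one more producer feeding the same queue.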

Then Sarah Guo asked a question that made the entire conversation particularly recursive: "So, when will this program.md (the configuration document he uses to describe how AutoResearch works) be written by the model, and written better than you can write it?"

Karpathy laughed: "The program.md is a pathetic attempt I wrote in Markdown, describing how the automatic researcher should work: do this first, then do that, try these ideas, look at the architecture, look at the optimizer... Yes, you certainly want some sort of meta-level automatic research loop." He then pushed this idea toward a more complete form: every research organization can be described as a program.md—a set of Markdown files describing all the roles and how they connect with each other. Some organizations have more morning stand-ups, some have fewer; some are adventurous, some are conservative. And once you have it as code, you can fine-tune it. "100%, there's a meta-level here."
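For flavor, here is a hypothetical fragment of what such a program.md might look like. It is entirely invented for illustration; Karpathy did not share his actual file, and the role names and loop are assumptions.

```markdown
# program.md — automatic researcher (illustrative sketch)

## Roles
- **Proposer**: scans arXiv/GitHub summaries, appends ideas to the queue.
- **Worker** (xN): pops an idea, writes the experiment, runs it on a
  small model, records validation loss versus baseline.
- **Merger**: reviews feature branches nightly; merges anything that wins.

## Loop
1. Proposer keeps the queue at >= 20 pending ideas.
2. Each Worker: pop idea -> branch -> run -> commit results.
3. Merger: merge improvements into main; archive the rest.
```

The meta-level he gestures at is exactly this: once the organization is a file, a model can rewrite the file.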


Relevant Skills in the AI Era—Principle of Verifiability

Amidst all these waves, what skills still count?

Karpathy first delineated the applicable boundaries of the AutoResearch paradigm: "This is extremely suitable for anything with objective metrics that are easy to assess. For example, writing more efficient kernel code for CUDA—you have inefficient code, and you want behavior that is exactly the same but much faster, this is a perfect fit."

"But if you can't evaluate, you can't do AutoResearch, that's the first warning."

The second warning is more practical: current systems, on the whole, still fall apart at the seams. If you try to push too far, the whole effort can end up net-negative.

He described the strange feeling of collaborating with current AI: "I simultaneously feel like I'm working with an extremely intelligent PhD student who has a whole career of system-level experience, and with a ten-year-old child. It's really strange, because in humans these two states are far more coupled; you would never encounter this combination."

He called this "jaggedness"—the model is either on its training track, moving faster than light; or it deviates from the track, falling into the "unverifiable realm," where suddenly everything starts to drift aimlessly.

This insight peaked during their discussion on reinforcement learning. He gave a brilliant example:

"You ask today's most advanced model to tell a joke—do you know what answer you'll get? It's that joke."

"Which joke?" Sarah Guo asked.

"I feel like ChatGPT only has three jokes," Karpathy said, "the model's favorite answer is: Why don't scientists trust atoms? Because they make everything up. Three or four years ago, you would have gotten this joke, and today you still get this joke."

He explained the logic behind it: even if the model has made tremendous progress in agent tasks, able to operate for hours and move mountains for you, when you ask it to tell a joke, you get a silly joke from five years ago. "Because that's not within the optimization range of reinforcement learning, not in the improvement domain, it just stagnated there."

Sarah Guo pressed: does this mean we haven't seen cross-domain generalization—code intelligence hasn't automatically enhanced joke intelligence?

"I think there's some decoupling; some things are verifiable, some are not, some have been optimized in the lab, some haven't," Karpathy said, "'smarter code can automatically produce better jokes'—I don't think that's happening."


Species Differentiation of Models — From Single Culture to Ecological Diversity

This sense of disparity naturally raises a deeper question: Is it really correct for all laboratories to pursue a single large model of "general intelligence for all domains"?

Sarah Guo proposed an idea she calls a "blasphemous question": If the sense of disparity continues to exist, should models be split? Should the intelligence of different domains be decoupled?

Karpathy stated that he indeed expects to see more "species differentiation" in the future.

"The animal kingdom is extremely diverse in terms of brains, with various niches; some animals have overdeveloped visual cortices or other parts," he said. "I think we should expect to see more intelligent species differentiation — you don't need an all-knowing oracle; you specialize it and then apply it to specific tasks."

The benefits are obvious: for the specific tasks you truly care about, you can achieve more efficient latency or throughput while retaining core cognitive abilities. He mentioned some models specifically targeting the mathematical formalization proof system Lean as early examples of this meaningful differentiation.

But he also admitted that there hasn't been much practical species differentiation observed so far: "What we see is a kind of model monoculture, with obvious pressure to 'create a good code model and then merge it back into the main model.'"

He believes one reason for this situation is that "the science of manipulating brains has not fully developed" — for example, how to fine-tune without losing capabilities is still an evolving science.

"Accessing weights is much more complex than accessing context windows because you are fundamentally changing the entire model, potentially altering its intelligence."


"Folding Proteins at Home" — The Decentralized Concept of Internet Computing Power

The natural extension of AutoResearch is a grander, more sci-fi concept: scaling it from a single thread to the entire internet.

The key insight is that AutoResearch has an extremely valuable asymmetry — "discovery" is extremely expensive, but "verification" is extremely cheap. A person may need to try ten thousand ideas to find that effective submission, but to verify whether the solution they provided is effective, you only need to run the training yourself, which is very easy.

This characteristic makes AutoResearch very suitable for being opened to a pool of untrusted internet workers.

"My design starts to look a bit like blockchain," Karpathy said, "not blocks, but commits, which can stack on top of each other and contain changes that improve the code. Proof of work is basically doing a lot of experiments to find effective commits, which is hard; and the reward is currently just a ranking on the leaderboard, with no monetary reward."

He cited the pioneering experiences of Folding@home and SETI@home: "Finding low-energy protein configurations is extremely difficult, but if someone finds a configuration they claim is low-energy, verifying it is very easy because you can directly use it." "Many things have this characteristic - difficult to propose, easy to verify."
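The "difficult to propose, easy to verify" asymmetry can be shown with a toy sketch. `objective` stands in for "train the model with this submitted commit and measure the metric" (an assumption for illustration, not a real system): discovery brute-forces thousands of candidates, while verification re-runs the objective exactly once.

```python
# Toy illustration of the propose-hard / verify-cheap asymmetry that
# makes AutoResearch open-able to untrusted workers. All functions and
# thresholds are invented for this sketch.

def objective(x: float) -> float:
    # Pretend this is an expensive training run; here it is instant.
    return abs(x - 0.123456)

def discover(threshold: float) -> float:
    # Expensive side (the untrusted worker's job): scan many candidates.
    step = 1e-5
    x = 0.0
    while x < 1.0:
        if objective(x) < threshold:
            return x
        x += step
    raise RuntimeError("no candidate found")

def verify(claimed_x: float, threshold: float) -> bool:
    # Cheap side (your job): one evaluation accepts or rejects the claim.
    return objective(claimed_x) < threshold

x = discover(threshold=1e-4)      # thousands of evaluations
print(verify(x, threshold=1e-4))  # a single evaluation
```

Because the verifier never has to trust the worker, only re-check the claim, the worker pool can be the whole internet.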

He pushed this idea to its logically most astonishing endpoint:

"A group of intelligent agents on the internet could collaborate to improve LLMs and might even surpass cutting-edge laboratories in certain aspects. Perhaps this is possible: cutting-edge laboratories have vast amounts of credible computing power, but the Earth is larger and has vast amounts of untrustworthy computing power. If you arrange the system properly, maybe the internet community can indeed find better solutions."

He then outlined a grander vision: different organizations or individuals could contribute computing power for specific research directions they care about. "Perhaps you care about a certain type of cancer; you don't just donate money to an organization, you can actually purchase computing power and then join the AutoResearch track of that project. If everything is repackaged into AutoResearch, then computing power becomes what you contribute to this pool."


Employment Market Data Analysis - The Great Unbundling in the Digital Realm

Karpathy recently released a visual analysis of labor statistics employment data that resonated with many people - although his original intention was merely to satisfy his curiosity.

"Everyone is seriously thinking about the impact of AI on the job market," he said, "I just wanted to see what the job market looks like, where various roles are, how many people are in different professions, and then think about these AIs and how they might evolve - will they be tools, or alternative tools for these professions?"

He used a poetic framework to describe this change: AI is the third type of "manipulator" of digital information, the first two being computers and humans. "Relative to all the information that has been digitized, our collective thinking cycles are still far from enough, so with the introduction of AI there will be a lot of rewiring, a lot of activity bubbling up, and I believe this will generate a lot of demand in the digital realm."

He did not shy away from a disturbing conclusion: "In the long run it is clear, even for AI research itself. OpenAI, Anthropic, and the other laboratories each employ about a thousand researchers, and these researchers are basically glorified AutoResearch operators—they are actively automating themselves; that is what they are all trying to do."

"I walked around OpenAI at that time and told them, 'Do you realize that if we succeed, we will all be unemployed?' It's like we are just building this automation for Sam or the board, and then we all get pushed out."

However, he surprisingly held an optimistic view for the short term. He introduced the "Jevons paradox": when something becomes cheaper, demand often increases rather than decreases.

"The reason software does not have more demand is simply because it is scarce and too expensive; if the threshold is lowered, the demand for software will actually increase." He cited the classic case of ATMs and bank tellers: the emergence of ATMs made it possible for banks to open more branches, which actually increased the number of tellers. "So I have a cautiously optimistic attitude towards software engineering—software is amazing, you are no longer forced to use arbitrary tools with various flaws; code is now ephemeral, it can change, it can be modified, and I think there will be a lot of activity in the digital space to rewire everything."

But he is filled with uncertainty about long-term predictions and honestly admits, "I'm not a professional in this area; that's the job of economists."


The Dilemma of Independent Researchers—Between the System and Outside

Sarah Guo asked a question that many people want to know: "Why not go to a cutting-edge lab and do this AutoResearch work on a larger scale with more computing power and colleagues?"

Karpathy's answer was filled with self-reflective honesty, revealing the real trade-offs he faced in choosing an independent path.

He acknowledged that there is real value in working outside of cutting-edge labs. First, you are not under the pressure of those organizations—there are things you cannot say, and things the organization wants you to say. "No one will twist your arm, but you feel the pressure of 'What should I say'—if you don't, there will be strange looks and odd conversations. Outside of cutting-edge labs, I feel more consistent in my stance on humanity because I am not bound by those pressures; I can say whatever I want."

But he also acknowledged the costs of staying outside the lab: "My judgment will inevitably start to drift because I am not part of those 'things that are coming.' My understanding of how these systems actually work under the hood will become opaque, and I won't understand how they will evolve. This worries me."

There is a deeper structural contradiction, he said: "You have huge financial incentives tied to these cutting-edge labs, and these AIs will dramatically change humanity and society, while you are basically here building this technology and benefiting from it, tightly allied through financial means—this has been a core dilemma since OpenAI was founded, and it remains unresolved."

His conclusion is: the ideal state might be to go back and forth. "Work in a lab for a while, do really good work, then come out, and maybe go back again. I joined a cutting-edge lab, now I'm outside, and maybe in the future I will want to join again; that's how I see it."


Open Source vs. Closed—"We happen to be in a good position, albeit by chance"

On the issue of open source versus closed models, Karpathy's stance is clear and historical.

He described the current situation: closed models are leading, but the gap between open source models and closed frontiers is narrowing. "At first, the gap was large, then it got to 18 months, and now it has converged—maybe lagging by about six to eight months."

He used operating systems as an analogy: "In the operating system field, you have closed systems like Windows and macOS, which are very large software projects, just like LLMs are going to become; then there is Linux, which is actually a very successful project running on the vast majority of computers because the industry has always felt the need for a public open platform, something that everyone feels safe using." "I believe the same thing is true now."

"I hope for an open public intelligence platform, a public workspace that the entire industry can use, even if it is not at the forefront of capability, this is quite a good power balance for the industry."

He gave an unexpected evaluation of the current landscape: "I think we are basically accidentally in a position that can be said to be good and optimal. Although it is accidental, we do happen to be in a good place."


Robots and "Digital-Physical" Interfaces — Atoms are a Million Times Harder than Bits

Karpathy, who comes from an autonomous driving background, has an unusually calm view of the robotics field.

"My perspective is influenced by what I saw in autonomous driving; I think autonomous driving is the first application of robotics," he said. "Ten years ago, there were a lot of startups, and I felt that most of them did not last long, requiring a lot of capital and a lot of time."

His conclusion: the robotics field will lag behind the digital field because "atoms are a million times harder than bits," and manipulating the physical world is much more expensive than flipping digital information.

But he outlined an evolutionary trajectory that he believes will inevitably occur: first, a massive "unbundling" of the digital space, where a large amount of inefficiently processed digital information will be reprocessed with a hundredfold efficiency; then, a demand for "digital-physical interfaces"—sensors that let AI perceive the world, and actuators that let AI act on it.

He gave a specific example: he visited a company called Periodic, founded by a friend, which does materials science AutoResearch. "In that case, intelligent sensors are actually quite expensive laboratory equipment, and biology is the same."

He also thought of a more interesting possibility: "The moment I look forward to is when I can give a task in the physical world, I can tag a price on it, and then tell the agent, 'You figure out how to do it, go get the data.' I'm actually a bit surprised we don't have enough information markets yet. If you're in a war, why isn't there a process where taking a photo or video from somewhere is worth $10? Someone should be able to pay for that — no human will look at it; it will be the agent trying to guess market trends."

He likened this space to the book "Daemon" — in which an AI ultimately manipulates humans like puppets, with humans serving as both its actuators and sensors. "I think collective society will be reshaped in some way to serve what will collectively happen across the industry — there will be more automation, it has certain demands, and humans will serve those demands."

In his vision, the opportunities in the physical world may address market sizes that are even larger than the digital space, but the difficulty of achieving them is proportionally higher. "Opportunities follow that trajectory: first digital, then interfaces, and maybe some physical things; their moment will come, and when they do, it will be huge."


microGPT and the Endgame of Education — I Am Now Explaining to Agents, Not to Humans

At the end of this conversation, Karpathy mentioned a seemingly trivial project that actually reveals a profound shift: microGPT.

"I have had a fascination for about ten to twenty years, which is to distill LLMs to their essence," he said, "I have a series of projects along this line, such as nanoGPT, makemore, micrograd, etc. I believe microGPT is my latest advancement in distilling it to pure essence."

The core insight is: training neural networks, especially LLMs, involves a lot of code, but all this code is actually "complexity brought by efficiency" — if you don't need it to run fast and only care about the algorithm itself, that algorithm is actually just 200 lines of Python, very simple and readable, including comments.

He broke down the composition of these 200 lines: a dataset, a neural network architecture of about 50 lines, a forward pass, a small autograd engine for calculating gradients (about 100 lines), and an Adam optimizer (about 10 lines). "Putting all this into a training loop is just 200 lines."
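To give a flavor of the smallest of those pieces, here is a micrograd-style autograd sketch in a few dozen lines. This is in the spirit of Karpathy's earlier micrograd project, not the actual microGPT code; the `Value` class and the example expression are illustrative assumptions.

```python
# Miniature flavor of a scalar autograd engine: a Value records how it
# was computed, and backward() walks the graph in reverse topological
# order, applying the chain rule. Illustrative only, not microGPT itself.

class Value:
    def __init__(self, data, children=()):
        self.data = data
        self.grad = 0.0
        self._children = children
        self._grad_fn = None  # closure that distributes self.grad

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def grad_fn():
            # d(a+b)/da = 1, d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
        out._grad_fn = grad_fn
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def grad_fn():
            # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._grad_fn = grad_fn
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        topo, seen = [], set()
        def build(v):
            if v not in seen:
                seen.add(v)
                for c in v._children:
                    build(c)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            if v._grad_fn:
                v._grad_fn()

# d(a*b + a)/da = b + 1 = 4,  d(a*b + a)/db = a = 2
a, b = Value(2.0), Value(3.0)
loss = a * b + a
loss.backward()
print(a.grad, b.grad)
```

The full version Karpathy describes adds a dataset, a ~50-line architecture, and a ~10-line Adam optimizer around exactly this kind of core.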

Then, he made a decision that revealed the essence of education is changing: he did not make an explanatory video, nor did he write a detailed guide.

"People can have their agents explain it in various ways, and the agents explain it better than I do," he said, "I am no longer explaining things to people; I am explaining things to agents. If I can explain it clearly to the agent, then the agent can become a router, and it can truly explain to humans in their own language, with infinite patience, tailored to their level of understanding."

He described it in terms of "skills": a way to guide the agent on how to teach something. "Maybe I can design a skill for microGPT that describes the process I envision the agent should take you through — if you are interested in understanding this codebase, do it step by step. I can script the course a bit, as a skill."

There is an irony he himself had to admit: he once had an agent try to write microGPT—telling it to distill the neural network to its simplest form—but the agent couldn't do it.

"microGPT is the endpoint of my obsession—those 200 lines. I have thought about this for a long time, I have been obsessed with it for a long time, this is the solution, believe me, it can't be simpler. This is my added value; the agent just can't come up with it, but it fully understands why it is done this way."

His conclusion: "My contribution to this is these few bits, but everything else, the education that follows, is no longer my domain. Perhaps education will change in these ways; you must inject the few bits that you feel strongly about—about the curriculum, about better ways to explain, or something similar."

Sarah Guo added: "What the agents cannot do is now your job; what they can do, they will soon do better than you. So you should strategically consider where you actually spend your time."

Karpathy agreed, but admitted to an inescapable sense of competition: "I still think I might explain things slightly better than the agents, but the models are improving so rapidly that this feels, in some sense, like a losing battle."


Epilogue: The verifiable belongs to machines, while the unverifiable still belongs to humans

The core tension of this conversation has always been a dual "addiction": the fascination with the capabilities of tools and the anxiety over the uncertain boundaries of that capability.

Karpathy used the term "AI psychosis" to describe his state, but upon closer listening, this state is not fundamentally different from what those at the center of every truly disruptive productivity revolution in human history have felt—only faster, more recursive, and with a ceiling that no one can currently see.

The ultimate framework he provided may be the most memorable line from this interview:

All unverifiable domains still belong to humans; all verifiable domains either already belong to machines or will soon belong to them.

As for where you stand—his advice is to think honestly about it.


source: No Priors Podcast | Host: Sarah Guo | Guest: Andrej Karpathy, Andrej Karpathy on Code Agents, AutoResearch, and the Loopy Era of AI

--end--

This article is sourced from: AI Cambrian