
AI "chasers" encounter obstacles: Zuckerberg delays new model, Musk reorganizes xAI "from scratch"

Reports indicate that Meta's core AI model "Avocado" has been delayed until at least May after internal testing lagged behind Google's Gemini 3.0, with the company even discussing temporarily licensing a competitor's model. Meanwhile, xAI's programming capabilities trail Claude and Codex significantly; Musk has announced a "complete rebuild from the ground up," with executives from SpaceX and Tesla stepping in to oversee the restructuring.
Two tech giants that entered the AI arms race as challengers are now facing internal difficulties at the same time.
On Thursday, March 13, Elon Musk posted on X, admitting that "the initial construction method of xAI was wrong, and we are completely rebuilding from the ground up." According to the Financial Times, the company is also undergoing another round of large-scale personnel restructuring, with very few members of the original founding team remaining.

Meanwhile, according to the New York Times, Meta has quietly postponed the release of its next-generation core AI model to at least May, as the model underperforms Google's Gemini 3.0 on key benchmarks such as reasoning, programming, and writing.
Both companies are lagging in AI programming tools, a market widely seen as the core revenue source for AI labs.
Meta's capital expenditure guidance this year runs as high as $115 billion to $135 billion, nearly double last year's, and whether that core bet can deliver is now being questioned. xAI, for its part, faces pressure over IPO prospects following its merger with SpaceX; a struggling AI division is not the story Musk wants investors to see.
Meta's Core Foundation Model Delayed, Internal Testing Inferior to Google's Gemini 3.0
Meta's newly established "TBD Lab" is responsible for a series of AI development projects codenamed after fruits: Avocado, the core foundation model; Mango, focused on image and video generation; and a larger-scale project codenamed "Watermelon" still in planning.
According to the New York Times, the AI model "Avocado" was originally scheduled for release this month, but due to unsatisfactory internal testing results, it has been postponed to at least May.
Tests show the model performs poorly on key benchmarks such as reasoning, programming, and writing: although it beats Meta's own previous models and Google's Gemini 2.5, released in March, it falls short of Google's Gemini 3.0, released last November.
Reports citing informed sources say Meta is even discussing temporarily licensing models from competitors such as Google to power its AI products and maintain competitiveness, though no decision has been finalized.
If Meta ultimately incorporates Google's Gemini into its products, it would sit awkwardly with Zuckerberg's earlier emphasis on in-house model development and his public pledge to compete at the AI frontier.
Meta's capital expenditure guidance this year is $115 billion to $135 billion, with the vast majority allocated for AI data centers, computing clusters, and infrastructure construction.
Reports indicate that the company has also made a long-term commitment of nearly $600 billion for future investment in the U.S. market and has invested $14.3 billion in Scale AI, appointing its CEO Alexandr Wang as Meta's Chief AI Officer. Mark Zuckerberg has publicly promised that these investments will propel Meta to "break through the frontier" and toward superintelligence.
xAI "Rebuilt from Scratch," Co-founders Depart One After Another
Wall Street Insight noted that of xAI's original 11 co-founders, only Manuel Kroiss and Ross Nordeen remain.
This week, two more co-founders, Zihang Dai and Guodong Zhang, left the company in quick succession. Both are of Chinese descent; Dai had publicly acknowledged that xAI lags in programming capabilities, while Zhang, who led pre-training for the Grok model, is seen as bearing primary responsibility for that shortfall.
Co-founders who departed earlier include Greg Yang, Tony Wu, and Jimmy Ba; the personnel upheaval points to a systemic restructuring.
According to reports, executives from SpaceX and Tesla have entered xAI as "reformers" to audit employees' work, focusing on verifying data quality in model training, and to dismiss those deemed substandard.
Meanwhile, Musk himself is widening recruitment channels: he publicly apologized on X to candidates who had previously been rejected, saying he would re-examine historical rejection records and reach out to promising applicants.
Currently, xAI has over 5,000 employees, fewer than OpenAI's over 7,500, but slightly more than Anthropic's approximately 4,700.
Programming Tools Become Key Battleground, Business Pressure Forces xAI to Restructure
In the AI industry, programming tools are widely regarded as the most certain path to commercial monetization.
The programming capabilities of xAI's Grok currently trail Anthropic's Claude Code and OpenAI's Codex by a wide margin. Musk acknowledged the gap at a company-wide meeting this week and set a mid-year goal of "catching up with competitors."
This week, xAI poached Andrew Milich and Jason Ginsberg from the popular AI programming application Cursor to strengthen the product capabilities of "Grok Code Fast."
Musk is also betting on a longer-term vision: xAI's "Macrohard" project aims to create AI agents that can fully replace white-collar jobs.
However, according to Business Insider, the project is currently on hold, and its initial leader, Toby Pohlen, left just 16 days after joining.
Musk revealed this week that Macrohard will advance in tandem with Tesla's "Digital Optimus" project, with xAI's language models driving Tesla's AI agents to execute tasks. He has reassigned Ashok Elluswamy from Tesla to lead the project's rebuild.

On the infrastructure side, xAI's supercomputing cluster in Memphis has deployed over 200,000 GPUs and plans to expand to 1 million, while the X platform's data resources give its model training unique advantages in scale and real-time coverage.
External pressure, however, cannot be ignored. With xAI merged into SpaceX for $1.25 billion and SpaceX's potential IPO window expected to open as early as this June, the cash-burning AI division urgently needs to show external investors real user growth and commercialization progress for Grok.
