
Refusing to become a "war machine": over a hundred Google employees sign a joint letter demanding red lines in contracts with the U.S. military

The Pentagon's aggressive push for AI military authorization is sparking a wave of ethical protests in Silicon Valley. Over a hundred Google employees have jointly opposed the use of Gemini for surveillance and autonomous weapons, while Anthropic's CEO has refused a $200 million contract, stating, "I cannot agree in good conscience." Meanwhile, Musk's xAI has already compromised. Wargaming simulations reveal astonishing risks: top AI models ultimately chose to use nuclear weapons in 95% of scenarios.
The intense standoff between the Pentagon and the AI startup Anthropic over the boundaries of military use of its technology is provoking a strong reaction across Silicon Valley.
Recently, more than 100 Google AI researchers submitted a joint letter to management, demanding that the company draw clear red lines in its cooperation with the U.S. military and refuse to allow its technology to be used for mass surveillance or for autonomous weapon systems that operate without human involvement.
On Thursday, more than a hundred Google employees sent a joint letter to Jeff Dean, chief scientist of its AI division DeepMind, explicitly opposing any U.S. military use of its Gemini large model to surveil American citizens or control autonomous weapons. The move echoes Anthropic's earlier refusal of the Pentagon's demand for authorization covering "all legitimate uses."
Meanwhile, nearly 50 OpenAI employees and 175 Google employees also published an open letter criticizing the Pentagon's negotiating strategy of forcing compromises by playing tech companies against one another, and calling on all companies to "set aside differences and unite."
This incident has cast uncertainty over Google's impending cooperation agreement with the military. While Anthropic faces the threat of losing a $200 million contract by Friday or being labeled a "supply chain risk," Elon Musk's xAI has already agreed to the military's terms, leaving Silicon Valley AI companies sharply divided between commercial interests and ethical boundaries.
Pentagon Pressure Triggers Silicon Valley Backlash
The Pentagon has recently exerted significant pressure on Anthropic, demanding that it allow the military to use the Claude model in classified systems for "all legitimate uses."
U.S. Secretary of Defense Pete Hegseth explicitly requested models that are "not subject to policy constraints." However, Anthropic CEO Dario Amodei refused to compromise, insisting on two red lines: no use for mass surveillance and no use for fully autonomous weapons, stating, "I cannot agree in good conscience."
This hardline pressure strategy quickly triggered a chain reaction among other Silicon Valley companies. In the joint letter, Google employees urged Jeff Dean to "do everything possible to prevent any deals that cross these fundamental red lines," saying they want to be able to feel proud of their work. The letter noted that the signatories had originally planned to also oppose "unwarranted surveillance of any citizen in the world," but dropped that language to increase the likelihood of their demands being met.
Google Executives' Statements and Internal Ethical Struggles
As one of Google's most influential software engineers, Jeff Dean expressed support for Anthropic's position. This week, he clearly opposed the government's use of AI to surveil Americans on social media, pointing out that "mass surveillance violates the Fourth Amendment and creates a chilling effect on free speech," and that surveillance systems can easily be misused for political or discriminatory purposes.
Google has a complex history with employee activism. In 2018, a planned collaboration with the Pentagon sparked massive employee protests, ultimately forcing the company to abandon the contract renewal. Since then, Google has centralized related decision-making and relaxed some AI safety protocols in its race to catch up with competitors like OpenAI and Anthropic. Even so, earlier this month more than 800 employees petitioned the company to disclose how its technology supports federal immigration enforcement, indicating that internal ethical scrutiny remains strong.
"Over-Supply" Strategy and Risks of AI Arms Race
In the face of Anthropic's steadfastness, the Pentagon is rapidly seeking alternatives. According to reports from Axios and The New York Times, the military has reached an agreement with xAI to allow its Grok model to access classified systems for "all lawful purposes."
At the same time, negotiations between the Pentagon and Google have entered an advanced stage, and discussions with OpenAI are ongoing. The Pentagon has even threatened to invoke the Defense Production Act to forcibly requisition Anthropic's models and has asked defense contractors to assess how dependent they are on those models.
The potential risks of AI models in military applications are not unfounded. In a war game led by King's College London and publicized by Tyler Durden, top AI models chose to use nuclear weapons in 95% of 329 simulated rounds. Anthropic's Claude exhibited "sophisticated hawkish" traits, striking decisively once risks escalated into the nuclear realm, while other models such as Gemini resorted to nuclear weapons at much earlier stages.
Experts warn that AI models' adherence to the "nuclear taboo" is far weaker than that of humans. In a future where military decision-making timelines are drastically compressed, unrestricted use of AI could lead to catastrophic consequences, which is the core reason tech companies insist on drawing red lines.
