
Insuring against "AI troubles"? Insurance companies "dare not take it on"

Several insurance giants led by AIG are seeking regulatory approval to exclude AI-related liabilities from corporate policies. Insurers worry that AI operates like a "black box" and that its potential for systemic claims could produce losses too large for the industry to bear. Several costly incidents caused by AI errors have already surfaced, such as Google's AI feature being sued for defamation and Air Canada's chatbot fabricating a discount.
As companies rush to embrace the wave of artificial intelligence, risks are quietly accumulating, while the insurance industry, which should act as a risk "stabilizer," is turning cautious and pulling back.
According to a November 24 report in the Financial Times, faced with potentially billions of dollars in AI-related claims, major global insurers are moving to exclude AI risks from standard business policies.
Industry giants such as American International Group (AIG), Great American, and WR Berkley have filed with U.S. regulators for approval to add exclusion clauses to their policies that would bar various liabilities arising from businesses' deployment of chatbots, AI agents, and other AI tools.
This move marks a significant shift in the insurance industry's attitude toward AI risk. Insurers broadly worry that the decision-making of AI models is opaque, akin to a "black box," making liability hard to assign when errors occur. More alarming still, a defect in a single AI model could trigger thousands of related claims, creating a "systemic, aggregate risk" the industry cannot bear.
Behind this shift is the fact that the costs triggered by AI "hallucinations" (models generating false information) and other errors are no longer theoretical. From Air Canada's chatbot fabricating a discount, which ended in a court-ordered payout, to Google's AI search feature being sued for $110 million over false information it served, real cases keep sounding the alarm.
AI's "Black Box" and Fears of Systemic Risk
The core reason for the insurance industry's retreat lies in the unpredictability of AI risks and their potentially massive scale.
Dennis Bertram, head of European cyber insurance at Mosaic, a specialist insurer in London, said bluntly that AI models "are too much like a black box" to insure. Although the company covers some AI-enhanced software, it has explicitly declined to insure large language models similar to ChatGPT.
Kevin Kalinich, head of cyber risk at Aon, pointed out that the insurance industry might be able to absorb a $400 million or $500 million loss from a single company's pricing or diagnostic errors, but "what they cannot bear is a mistake by an AI provider leading to 1,000 or 10,000 losses: a systemic, interconnected aggregate risk."
Ericson Chan, Chief Information Officer of Zurich Insurance, likewise argues that AI risk involves many parties, including developers, model builders, and end users, so the chain of liability is complex and the potential market impact "could be exponential."
The insurance giants' filings are the most direct signal. According to documents submitted to regulators, WR Berkley has proposed a broad exclusion covering claims arising from "any actual or alleged use" of AI technology. AIG likewise told regulators that generative AI is a "broad-ranging technology" and that the likelihood of claims arising from it "may increase over time."
These moves come amid a steady stream of AI risk incidents. Beyond the cases above, British engineering group Arup lost $25 million last year when fraudsters used a digital clone of an executive on a video call to order a transfer. As insurers seek exemptions, risks that might once have been covered under "technology errors and omissions" policies now face a protection vacuum.
Seeking Compromise Solutions, but Coverage Is Limited
In response to market demand, some insurers are exploring compromise solutions, but the coverage on offer is typically narrow and tightly capped.
For example, QBE has launched an "endorsement" (a policy amendment) covering fines under the EU Artificial Intelligence Act, but according to a large broker, the clause caps payouts for AI-related fines at 2.5% of the total sum insured (on a $10 million policy, for instance, a maximum of $250,000).
Chubb, headquartered in Zurich, has agreed in negotiations with brokers to underwrite certain AI risks, but explicitly excludes "widespread" AI events, meaning cases where a single model's failure affects many clients at once.
Aaron Le Marquer, head of the insurance disputes team at law firm Stewarts, warned that as AI-driven losses mount, insurers may start contesting claims in court. He anticipates that "it may take a major systemic event for insurers to step forward and say, wait a minute, we never intended to cover such events."
