The Underwriters Are Writing the Rules No Legislature Passed
In 1666, the Great Fire of London burned most of the medieval city to ash. The property losses were so catastrophic that the existing system of fire brigades—church bells and bucket lines—collapsed under the weight of what it was asked to do. What followed was not a government response. It was an insurance response. Nicholas Barbon and his competitors created the first fire insurance companies. Then they did something quietly revolutionary: they began funding their own fire brigades because insuring a risk they could not mitigate was mathematically suicidal. The fire brigades that protected London were not built by Parliament. They were built by underwriters who needed their bets to win.
Something structurally identical is unfolding in artificial intelligence.
No regulator passed a safety standard for AI agents. No legislature defined the liability boundaries for hallucination-driven harm. But a market for AI liability insurance has emerged anyway—and the companies writing those policies are, by necessity, building the safety architecture that no government has yet been able to deliver.
The Problem Silence Created
For years after generative AI entered enterprise workflows, the insurance industry did something unusual: it added exclusions to existing policies rather than creating new coverage. Cyber policies, errors-and-omissions policies, general liability—all of them sprouted carve-outs for AI-generated harm.
The logic was straightforward. Existing policies were designed for human error, mechanical failure, and known risk distributions. They were not priced for a system that can hallucinate a regulatory filing, defame a person at scale, or execute a flawed autonomous action without a human in the loop.
That exclusion wave created a vacuum. Enterprises deploying AI found themselves suddenly uninsured for the very technology they were betting strategic roadmaps on. Procurement stalled. Legal teams put guardrails on deployment. The gap between AI capability and deployable AI capability widened into a problem the industry could not ignore.
Into that vacuum stepped the affirmative coverage product—insurance written specifically for AI risk, priced on evidence rather than analogy, and structured around requirements that changed how AI systems are built.
How the Market Arranged Itself
Three interlocking products define the current architecture.
Performance warranties. Armilla Insurance, backed by Lloyd's of London market carriers including Chaucer Group and Axis Capital, launched "Armilla Guaranteed" in 2025—a performance warranty that triggers financial compensation if a deployed model's KPIs drop below verified thresholds. This is not generic liability. It is a measurable claim on model behavior (sketched in code after the three products below). MKIII, an AI-powered loan decisioning platform, became an early case study: its model was validated by Armilla's audit process and wrapped in an embedded performance warranty backed by A-rated insurers (Armilla, Communications of the ACM).
Vendor-side protection. Google Cloud expanded its AI Risk Protection Program in 2025, adding insurers Beazley and Chubb to underwrite affirmative coverage for customers using Google's AI services. This is not incidental. Hyperscalers are using insurance-backed indemnification as a competitive wedge—if your API comes with guaranteed coverage, you remove the procurement friction that kills enterprise deals. Microsoft, Amazon, and others are moving in the same direction (NBC News).
Agent certification. The most structurally revealing product is the Artificial Intelligence Underwriting Company's AIUC-1, launched in July 2025 as the world's first certification standard for AI agents. It covers six pillars: security, safety, reliability, data and privacy, accountability, and societal risk. AIUC assembled a 50-member consortium including Anthropic, Google, and Meta to maintain and evolve the standard, and the company's policies cover up to $50 million in losses from AI agent hallucinations, IP infringement, and data leakage (AIUC).
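Armilla has not published the internal mechanics of its warranty, but the trigger promised in the first product above is simple to sketch: compare monitored KPIs against thresholds verified at underwriting time, and flag a warranty event when performance falls below them. The Python below is a hypothetical illustration; the KPI name, floor, and payout figure are invented for the example, not drawn from any actual contract.

```python
from dataclasses import dataclass

@dataclass
class WarrantyTerm:
    """One verified KPI floor, fixed at underwriting time.
    Hypothetical structure, not Armilla's actual policy schema."""
    kpi: str           # e.g. "loan_approval_accuracy"
    floor: float       # minimum acceptable value, verified in the audit
    payout_usd: float  # compensation owed if the floor is breached

def check_warranty(terms: list[WarrantyTerm],
                   observed: dict[str, float]) -> list[tuple[str, float]]:
    """Return (kpi, payout) for every monitored KPI below its verified floor."""
    return [(t.kpi, t.payout_usd)
            for t in terms
            if t.kpi in observed and observed[t.kpi] < t.floor]

# A loan-decisioning model warrantied at 95% accuracy, observed at 93%:
terms = [WarrantyTerm("loan_approval_accuracy", floor=0.95, payout_usd=250_000.0)]
print(check_warranty(terms, {"loan_approval_accuracy": 0.93}))
# -> [('loan_approval_accuracy', 250000.0)]
```

The substance of the product is not the payout logic, which is trivial, but the word "verified": the floor only means something because an auditor measured the model before the policy was written.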
The Governance Mechanism Hidden Inside the Pricing
Here is what most analyses miss: these products are not primarily about risk transfer. They are about using underwriting as a safety enforcement mechanism.
Insurers cannot price AI risk by accident. They need audit trails, test suites, behavioral benchmarks, and deployment monitoring. They need to see the system before they will cover it. That requirement—simple, mechanical, non-negotiable—forces companies to build safety infrastructure they would otherwise defer until regulators arrive.
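What "audit trail" means in practice is mundane: a tamper-evident record of every model decision that an actuary can later sample. No insurer has published a required schema; the sketch below shows one plausible minimal record, with illustrative field names that are this article's invention.

```python
import hashlib
import json
import time

def audit_record(model_version: str, prompt: str, output: str,
                 human_reviewed: bool) -> str:
    """One decision record an auditor could later sample.
    Field names are illustrative, not any insurer's actual schema."""
    record = {
        "ts": time.time(),                 # when the decision happened
        "model_version": model_version,    # which weights were live
        # Hashes let an auditor verify the log against retained data
        # without the log itself storing raw user content.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "human_reviewed": human_reviewed,  # was a human in the loop?
    }
    return json.dumps(record)

# Emitted at every model call and appended to a write-once log:
print(audit_record("v2.3.1", "Summarize this filing...",
                   "The filing states...", False))
```

Records like this give an underwriter the raw material for pricing: which model versions were live, at what volume, and how often a human actually reviewed the output.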
Michael von Gablenz, who heads Munich Re's AI insurance division, has explicitly compared this to the auto industry's path to seat belt adoption: insurance created the economic incentive, and regulation formalized it later. Rajiv Dattani at AIUC goes further, comparing the model to Benjamin Franklin's Philadelphia Contributionship, which required buildings to meet safety standards as a condition of coverage and thereby created de facto building codes two centuries before modern fire codes existed (Munich Re).
The sequence is the same. Insurers demand proof of safety as a condition of coverage. Companies build the safety systems to get coverage. Coverage becomes a market requirement. The safety standard exists before the regulator writes it down.
Where the Market Breaks
This is not a story about insurance neatly solving every AI safety problem. The market has clear fault lines.
The bifurcation is already visible. Well-understood risks—hallucinations in bounded domains, IP infringement in content generation, data leakage in specific workflows—are insurable because actuaries can price them. What isn't insurable yet: catastrophic autonomous agent failures, cascading multi-agent system errors, and the tail risks of systems that modify their own behavior. Several major carriers are actively retreating from AI coverage as the scale of potential multimillion-dollar claims becomes clearer (NRC, WTW).
The parallel to cyber insurance in the 1990s is instructive but imperfect. Cyber insurance took twenty years to mature from a niche endorsement into a standalone multibillion-dollar market. AI insurance is compressing that timeline because the deployment velocity is orders of magnitude faster and the harm surface is broader. But compression does not eliminate structural risk.
There is also a second-order problem that few are naming: when insurance becomes the de facto safety standard, the companies that cannot afford premiums or cannot pass audits get priced out of the safety layer entirely. They deploy without coverage, without audits, and without the governance that insurance requires. The most dangerous AI systems may be the ones nobody will insure.
The Unasked Question
The emergence of AI liability insurance is not evidence that the problem is solved. It is evidence that the problem has been financialized—and financialized problems get solved in proportion to the profit available from solving them.
The underwriters are not building AI safety because they believe in safety as an abstract good. They are building it because uninsurable AI cannot scale, and unscaled AI does not generate premiums. The incentive is mercantile, not moral. Which, historically speaking, has been the more reliable driver of systemic change.
The question that matters is not whether insurance will replace regulation. It is whether the safety infrastructure that insurance demands will be robust enough to survive the moment when regulators finally arrive and decide that the market's answer was not enough.
Building codes born from fire insurance protected buildings for centuries. The question for AI is whether the codes being written by today's underwriters will outlast the generation of models that prompted their creation.