Insurers are growing uneasy about AI

Despite their interest in AI, many insurers are worried about its risks: opacity, a lack of historical data, and potentially uncontrolled exposure. A deep dive into why the insurance sector is raising red flags.

Original text by Patrice Bernard (LinkedIn)

Despite the extra caution that seems embedded in their DNA, insurers are just as drawn to the opportunities of artificial intelligence as any other organization. Yet, according to a Financial Times article (relayed by TechCrunch, among others), they appear extremely reluctant to cover the risks that come with it.

A few major industry players in the United States have reportedly approached their regulators preemptively to obtain formal permission to broadly exclude the consequences of AI use from their commercial policies. Even though an AIG representative says the company is not specifically considering applying such clauses (for now), the move clearly reveals the nervousness triggered by technologies with such far-reaching implications.

In a way, the reaction is understandable. As another executive interviewed by the journalists explains, the utterly opaque “black box” behavior of these tools makes it extremely difficult to assess the dangers to which they may expose policyholders. Add to this sense of the unknown and lack of control the youth of the field (ChatGPT is barely three years old), which leaves little historical data to draw on, and the difficulty of measuring the risk becomes obvious.

Meanwhile, stories of missteps and other errors committed by intelligent agents are spreading quickly around the world, such as the imaginary promotion invented by Air Canada's chatbot, and the cost of putting things right is starting to look alarming. The fears are all the more justified because the impact can hit every imaginable domain: from financial guarantees to liability, reputational damage, and legal disputes.

But what perhaps worries insurers the most is the potential scale of their exposure. Indeed, the systemic nature of AI-related anomalies changes everything. This is no longer about compensating for an exceptional industrial accident, but potentially about covering the thousands — even millions — of relatively minor incidents triggered by a malfunctioning critical system whose “reasoning” is more or less uncontrollable and therefore difficult to correct.

And the problem could get even worse if victims themselves start using AI to industrialize their claims submissions (and even, potentially, the creation of disputes whenever a loophole is spotted). For insurers, the challenge would then be to quantify not only the uncertainties of the models deployed within companies, which legal explainability requirements should help with, but also the full extent of the damage those models may cause, in terms of both who is affected and the unit cost of each incident.