aiSure™
More AI Opportunity. Less AI Risk.
Reap the benefits of AI
While protecting against the downside
Across industries and across use cases, aiSure™ has you covered.
For AI Vendors - For Corporations using AI
Contractual Liabilities
Legal Liabilities
Financial Losses
The Risk
Selling innovation can be challenging when customers lack trust in AI. Providing a performance warranty for the AI model's accuracy de-risks the purchase decision, but it creates a liability on the balance sheet of the tech company.
The Risk
Selling or using AI models can create new and accumulated liabilities. These include:
- Discrimination
- IP infringement
- Hallucinations
- Regulatory fines
The Risk
Rolling out AI models to automate critical business processes has huge upside potential but comes with uncertainty. AI errors and GenAI hallucinations can lead to ill-informed decisions, business interruption or other forms of lost revenue.
aiSure™ has got you covered for critical AI risks
aiSure™ for providers
aiSure™ for AI deployers
Real AI risk transfer at work.
Explore our aiSure™ Knowledge Center
Frequently Asked Questions
What is AI Insurance?
Which AI technology risks can be covered with insurance?
AI risks come in various types, including but not limited to:
- Underperformance of the models
- Hallucination, misleading content and false information
- Bias and fairness risks, leading to discrimination
- Privacy infringement following the leakage of private or sensitive information
- Intellectual property violations by models trained on copyright-protected material
- Users and customers exposed to harmful AI-generated content
Munich Re has been helping our clients with insurance for AI products and services since 2018, and we are happy to discuss with you how to choose the right insurance for your AI solutions and potential exposures.
How can insurance safeguard AI performance reliability and ROI?
AI is probabilistic – it makes decisions based on the probability of a range of outcomes. At the same time, AI is systematic – meaning it will continue to make the same mistakes until they are identified and corrected.
For businesses using or selling AI, the potential consequences of underperformance can include reduced profitability, serious financial losses, business interruptions, customer dissatisfaction and attrition, increased expenses and reputational damage, as well as legal liabilities and the resulting costs. The type of loss depends on the use case and can include financial costs for system malfunctions and breakdowns, business interruption costs, legal liabilities, extra expenses and costs for accidents.
The risk of underperformance cannot be fully avoided by technical means. At the same time, not leveraging AI effectively could lead to a competitive disadvantage for a company. By indemnifying AI vendors and their corporate users against excessive losses, aiSure fills a risk management gap, protecting AI innovations with insurance.
What is Munich Re’s aiSure™ offering?
aiSure™ is a suite of comprehensive coverages for AI systems, designed to address a wide range of AI-related risks for AI providers and corporate adopters caused by AI performance errors, including:
- Contractual liabilities
- Own damages/financial losses
- Legal liabilities
In an emerging market like AI, insurance solutions should adapt to evolving requirements. As the pioneering insurer in the field of AI, we pride ourselves on delivering tailored insurance solutions for AI product providers.
We welcome other coverage requests and are happy to explore designing a bespoke solution for your business. To learn more on how to protect your AI technology with insurance, please reach out to us.
Which industries can benefit from aiSure™?
Since issuing our first AI policy for an anti-fraud model in 2018, Munich Re has written aiSure™ insurance solutions for AI companies across a diverse range of industries, including:
- Agriculture
- Banking
- Climate and earthquake forecasting
- Cyber security
- Insurance
- Retail
We are confident that we can provide significant risk transfer capacity across any industry use case where the accuracy and reliability of AI performance is critical to financial results. Please reach out to us to discuss your particular use case and requirements.
Does Munich Re insure all types of machine learning models?
Do you also offer insurance for GenAI and large language models?
How does Munich Re guarantee the performance of AI systems?
Munich Re follows a proven risk assessment process for ensuring performance reliability in AI systems through insurance:
- The first step is to evaluate the model development pipelines and identify potential risk scenarios (unrepresentative training data, data drift, updating and monitoring processes, etc.). A thorough, quantitative analysis of data inputs and outputs is essential for gaining an understanding of AI liability and performance.
- The second step is to derive a valid risk estimator. Based on historical performance data, we can estimate the future likelihood of model underperformance. Please refer to our whitepaper.
We are actively contributing to statistical and machine learning research in the area of uncertainty quantification, which is critical to protecting AI systems with insurance. We have also developed methods that enable us to automatically price the model performance risk of AI in real time. Please refer to our research papers "An In-Depth Examination of Risk Assessment in Multi-Class Classification Algorithms" and "Distribution-free risk assessment of regression-based machine learning algorithms", and follow our LinkedIn feeds for future updates on our research on how to insure AI products and technologies.
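The second step above – estimating the likelihood of future underperformance from historical performance data – can be illustrated with a simple distribution-free bound. The sketch below uses Hoeffding's inequality with made-up numbers; it illustrates the general idea only and is not Munich Re's actual risk assessment or pricing model.

```python
import math

def error_rate_upper_bound(errors: int, n: int, delta: float = 0.05) -> float:
    """Distribution-free (Hoeffding) upper confidence bound on the true
    error rate, given `errors` mistakes in `n` historical predictions.
    The bound holds with probability >= 1 - delta."""
    p_hat = errors / n
    return min(1.0, p_hat + math.sqrt(math.log(1.0 / delta) / (2.0 * n)))

def expected_loss_per_prediction(errors: int, n: int,
                                 avg_loss_per_error: float,
                                 delta: float = 0.05) -> float:
    """Conservative expected loss per prediction: bound the error
    probability from above and multiply by the average loss severity."""
    return error_rate_upper_bound(errors, n, delta) * avg_loss_per_error

# Illustrative numbers: 12 errors in 10,000 scored transactions,
# $5,000 average loss per missed event.
print(f"error-rate bound: {error_rate_upper_bound(12, 10_000):.4f}")
print(f"loss per prediction: ${expected_loss_per_prediction(12, 10_000, 5_000):.2f}")
```

The point of the upper bound, rather than the raw empirical error rate, is that it stays conservative when the historical sample is small: with less data the bound widens, reflecting greater uncertainty about future performance.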
How can AI system performance be measured?
The performance of AI relates to the uncertainty of error. No model can ever be perfect, even with access to all information, the best available data science team, and state-of-the-art governance policies and processes in place. No matter how good the performance of an AI model is, there will be instances where it makes mistakes.
For example, a GenAI model might hallucinate and provide factually wrong answers to user queries. A financial fraud detection model might fail to detect costly fraud events. A predictive maintenance model might fail to recognise an equipment breakdown event.
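One common way to quantify such mistakes is a simple error-rate metric computed over labelled outcomes. The snippet below is a generic illustration with made-up data, not a Munich Re tool: it computes the miss rate (false-negative rate) of a hypothetical fraud detector, i.e. the share of true fraud events the model failed to flag.

```python
def miss_rate(y_true, y_pred):
    """False-negative rate of a binary detector: the share of true
    positive events (e.g. fraud, 1 = positive) the model failed to flag."""
    positives = [p for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return 0.0
    return sum(1 for p in positives if p == 0) / len(positives)

# Hypothetical fraud labels (1 = fraud) vs. model flags
y_true = [0, 1, 0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1]
print(miss_rate(y_true, y_pred))  # one of four fraud cases missed -> 0.25
```

Tracked over time, a metric like this is exactly the kind of historical performance data from which future underperformance risk can be estimated.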
What types of liabilities and losses can AI underperformance cause?
An AI model is built to make inferences under conditions of uncertainty, which means that even if you design it to the most exacting standards, it will make errors. Indeed, in the famous formulation of UCLA Professor John Villasenor, “The laws of statistics ensure that – even if AI does the right thing nearly all the time – there will be instances where it fails.” In all such cases, AI and GenAI models make mistakes, which can cause financial losses and create liabilities, such as:
- Property damage and bodily injury
- Compliance fines and penalties
- Privacy violations
- Data leaks
- Intellectual property infringement
- Pure financial losses
- Discrimination
AI insurance is designed to mitigate the financial impact of AI model error uncertainty. The more you use and depend on AI, the more costly the losses can be.
Why should I insure my AI model?
Does AI insurance help build my customers’ trust?
Among the benefits of insurance for companies developing AI solutions is its ability to alleviate customer concerns about AI reliability. Potential clients may be skeptical about the real-world performance of AI technology. Thus AI providers often struggle to address potential clients’ concerns regarding the uncertainty of AI predictions, leading to lengthy proof-of-concept deployments or extended due diligence processes for each client.
Munich Re’s aiSure™, which covers AI system failures, instils confidence and trust in the solution. This enhanced reliability not only boosts customer satisfaction and retention but also strengthens the provider’s reputation and brand image. Ultimately, offering an AI system performance guarantee with clear, well-defined and significant compensation for model errors and their consequences can significantly shorten sales cycles by eliminating the need for extended proof-of-concept phases – why perform lengthy POCs when the outcome is insured? Guaranteeing the performance of AI puts AI users at ease knowing that potential financial losses from relying on the AI’s performance are covered. With aiSure™, AI providers can inspire trust in their solutions.
Does AI insurance accelerate corporate AI adoption?
Beyond traditional liability coverage, what types of AI Insurance do companies need?
Traditional insurance policies are generally made to cover traditional perils. AI, in its different forms (from random forests to Generative AI), with its different uses (from chatbots to medical instruments) and its different risks (prediction errors to discrimination) will test the limits of traditional insurance in the years to come.
At Munich Re, we are aware of the limitations of traditional policies. This is why we offer tailored insurance solutions with flexible limits and payouts, specifically designed for AI risks. We cover all types of damages required, enable different payout triggers, emphasise low coverage thresholds, and require no legal liability element. Our AI insurance solutions therefore address an insurance gap and create legal certainty and peace of mind for AI users in a constantly changing environment.
Understanding AI liability, intellectual property and lawsuits
Can GenAI models’ outputs infringe on IP rights?
Yes, there are a few reasons that GenAI models can create “substantially similar” images to their training data:
- Training data diversity: Generative AI models, especially those used for image generation, are trained on vast datasets that can include billions of images and texts. If many images in the model training dataset share similar features, the model may learn these common patterns and produce similar outputs.
- Model overfitting: For large models with billions of parameters and iterative training steps, the model may memorize specific details of the training data, generating outputs that closely resemble some training examples.
- Detailed prompts: When using text-to-image models, users can create highly detailed prompts that specify particular styles or compositions. If these prompts are similar to descriptions of images (such as artworks) used in model training, the generated images may resemble those images. The prompt similarity might not be intentional and may simply be the product of chance.
If a court finds an AI-generated image, audio clip, or text to be “substantially similar” to an original work, the creator of the output can be found to have infringed someone’s copyright, leading to statutory damages of up to $30,000 per infringement, in addition to high legal defense costs and further damages.
Can GenAI model providers be held liable for IP Infringement?
GenAI providers could be held liable for IP infringement in two ways:
- Copyright infringement in training data: AI providers can be (and are being) sued for utilizing copyrighted artists’ work to train their Generative AI models.
- Secondary liability due to copyright infringement of the GenAI’s output: If GenAI providers know that their GenAI models provide IP-infringing content, profit from it and do nothing to stop it, they could be assigned secondary liability by the courts.
Can a GenAI user be held liable for IP infringement of an AI-generated image?
If the image created is substantially similar to an existing, protected image, and the image is subsequently published, the rights holder of the original image could sue the user in court for IP infringement, resulting in significant statutory damages.
As a wide variety and quantity of content on the internet has been scraped to be used as training data for GenAI models, one must assume that many protected images are part of the GenAI’s training dataset. It is practically impossible for a GenAI user to check whether an image created using GenAI is similar to a protected one that is part of the training data.
Does Munich Re offer IP insurance for AI?
Yes. Munich Re’s aiSure™ IP Liability policy protects both GenAI providers and GenAI users from lawsuits for IP infringement by “substantially similar” GenAI output, and covers legal costs that stem from an alleged IP infringement by the created image.
We are happy to talk to you about the potential risks, your existing coverage and how we can help.
Have there been lawsuits related to AI bias and discrimination?
Who is held liable if my AI creates fake news or misleading information?
AI-generated content can result in defamation lawsuits if it produces false statements that harm someone’s reputation.
AI itself cannot be held liable. However, both the developers as well as the users of the AI could face legal consequences if the AI creates and disseminates fake news or misleading information. In understanding liability in AI product development, it is critical to note that who ends up bearing the liability is currently not clearly established in case law and remains an area of legal uncertainty.
How can I demonstrate the quality and accuracy of my AI model?
Insurance matters for AI technology providers because it lets you prove the quality of your AI model and assure your customers that their AI tool will perform as expected. With aiSure™ – Contractual Liabilities, you can guarantee the performance of your AI tool, and if the AI fails to deliver as promised, we back your performance guarantee and compensate your customers for the losses incurred.
As an example: aiSure™ allows you to guarantee that your fraud detection model will catch all fraudulent transactions. If your AI fails to catch a fraud event, we provide a payout amounting to the losses incurred. This insurance-backed performance guarantee increases trust in your AI solution and shortens sales cycles, while our strong balance sheet carries the underperformance risk.
How can my company adopt AI while minimizing P&L risk?
Even with the best AI governance process in place, you cannot adopt AI without residual performance risk and, depending on the use case, residual discrimination, IP infringement, data reconstruction, and other risks.
We enable corporate AI adoption by insuring the performance of your own AI models (e.g. self-built, purchased, or fine-tuned) with aiSure™ – Own Damages, supporting you in implementing AI solutions for critical operational tasks, such as in manufacturing or agriculture.
Take the case of an automotive manufacturer turning to AI for the final quality control before distributing cars to their sales locations. aiSure™ enables the manufacturer to use AI in quality control in manufacturing without bearing the financial losses which might come with performance risk. Insuring the performance of the AI model protects the manufacturer against distributing sub‑par cars due to the error rate of their AI drifting beyond the desired threshold.
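An error-rate drift check of the kind described above can be sketched as a rolling-window monitor. The window size, threshold, and simulated error pattern below are illustrative only, not actual aiSure™ policy terms.

```python
from collections import deque

class ErrorRateMonitor:
    """Rolling-window error-rate monitor: flags when the observed error
    rate of a quality-control model drifts beyond an agreed threshold."""

    def __init__(self, window: int = 1000, threshold: float = 0.02):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def record(self, model_was_wrong: bool) -> bool:
        """Record one inspection outcome; return True if the rolling
        error rate now exceeds the threshold (a potential claim event)."""
        self.window.append(1 if model_was_wrong else 0)
        return sum(self.window) / len(self.window) > self.threshold

# Simulate 100 inspections where every 10th model verdict is wrong.
monitor = ErrorRateMonitor(window=100, threshold=0.05)
alerts = [monitor.record(i % 10 == 0) for i in range(100)]
print(alerts[-1])  # rolling error rate of 10% exceeds the 5% threshold -> True
```

In practice such a monitor would feed a claims process: once the agreed error-rate threshold is breached, the financial consequences of the drift are borne by the insurer rather than the manufacturer.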
When your models underperform, you know that their financial downside is covered by us. Our aiSure™ insurance solution enables worry-free implementation of AI models for vital parts of your operations. For more information on best practices for insuring AI systems and applications and the role of insurance in complementing an overall risk mitigation and governance process, download the whitepaper.
How can I protect myself from lawsuits arising from my AI?
With aiSure™ - General Liability, you can protect yourself against damages and financial losses arising from lawsuits alleging that AI-made decisions were biased and discriminated against protected groups, or alleging other liabilities arising from AI use and creation.
As an example: aiSure™ protects you against lawsuits for alleged discrimination against protected groups when, for instance, you use black-box AI to screen job applications or prioritize patient intake in a healthcare setting. This insurance solution promotes the equitable and responsible use of AI and shields you from expensive and far‑reaching lawsuits alleging disparate impact discrimination.
Contact the team