
aiSure™
More AI Opportunity. Less AI Risk


    Reap the benefits of AI

    While protecting against the downside


    Across industries, across use cases, aiSure™ has got you covered.

    For AI Vendors - For Corporations using AI

    Contractual Liabilities

    Legal Liabilities

    Financial Losses

    The Risk

    Selling innovation can be challenging when customers lack trust in AI. Providing a performance warranty for the AI model’s accuracy de-risks the purchase decision, but it creates a liability on the balance sheet of the tech company.

    The Risk

    Selling or using AI models can create new and accumulated liabilities. These include:

    • Discrimination
    • IP infringement
    • Hallucinations
    • Regulatory fines

    The Risk

    Rolling out AI models to automate critical business processes has huge upside potential but comes with uncertainty. AI errors and GenAI hallucinations can lead to ill-informed decisions, business interruption or other forms of lost revenue. 

    aiSure™ has got you covered for critical AI risks

    aiSure™ for providers

    aiSure™ for AI deployers

    AI innovators can de-risk new clients’ AI investment decisions by guaranteeing ROI. aiSure™-backed performance warranties enable you to indemnify your clients for their financial losses or legal liabilities directly related to AI errors.
    For corporations deploying AI at scale, protect against the unexpected downsides of the technology. aiSure™ covers multiple models and loss scenarios arising from AI errors, including lost revenue, business interruption and legal damages.

    Real AI risk transfer at work.

    Barker
    AI-Powered Alternative-Asset Valuations with a Performance Cover
    Luxury assets are often used as collateral for loans. However, pricing is often inaccurate, leading to lower loan-to-values and smaller profits compared to other asset classes. Barker’s AI solution revolutionizes luxury asset pricing, giving lenders confidence in non-bankable asset-backed loans. Barker also warrants its valuations, paying the difference if they are wrong. Munich Re covers Barker's warranty, providing additional security: by indemnifying Barker for its liabilities under the warranty, it gives lenders and banks confidence in their decisions.
    Download Case Study
    Qumata
    AI solution for medical-free underwriting with a performance guarantee

    Qumata’s AI solution uses digital data to calculate the risk of fraud and an applicant’s health status.

    This output is used to triage customers between medical and non-medical underwriting journeys, thereby reducing dropouts – because fewer physical medicals are required. Combined with Munich Re’s product guarantee, insurers can use this solution to significantly increase sales without having to worry about an increase in claims due to a different underwriting methodology.

    Download Case Study
    BforeAI
    Predictive cybersecurity with a performance guarantee
    BforeAI’s preventative behavioural attack identification software enables organisations to identify possible security breaches earlier and gives them an edge in dealing with security threats. BforeAI’s PreCrime™ Intelligence is a predictive attack-intelligence domain list that serves as a pre‑emptive shield against security threats: its AI predicts dangerous domains before they launch attacks.
    Download Case Study
    SeismicAI
    Earthquake Early Warning system with a performance guarantee
    Traditional methods fall short in terms of accurately and rapidly detecting and alerting on earthquakes, their location and severity. SeismicAI's Earthquake Early Warning system can determine an earthquake's location, magnitude and impacts on various sites with much higher accuracy and speed than traditional single-sensor seismic networks. SeismicAI's real-time array seismology provides invaluable time to help minimise loss of life and safeguard vital assets by implementing preventive measures in commercial facilities or critical institutions such as schools and hospitals.
    Download Case Study
    FUGU
    Fraud prevention with a performance guarantee
    FUGU’s multi-layer anti-fraud solution allows online sellers to validate the most suspicious transactions while reducing false declines. It effectively detects friendly fraud attempts, resulting in minimum exposure to future chargebacks. FUGU’s self-learning algorithms collect data at various points of the transaction life cycle and allow a broader and more data-based risk assessment analysis.
    Download Case Study
    Growers Edge
    Innovative Crop Plans with warranty backing for farmers
    Growers Edge builds industry-leading fintech and data solutions to empower agriculture retailers and input manufacturers. Adopting innovative ag technologies like new seed, chemicals or earth-conscious practices has been intimidating and costly, with no financial backstop for farmers. Leveraging their comprehensive analytics capabilities, Growers Edge partners with retailers, offering warranty-backed crop plans that help farmers adopt new technologies and practices, leading to secured yield.
    Download Case Study
    Deep Instinct
    True prevention and protection against ransomware

    Deep Instinct provides a deep learning cybersecurity framework to identify and prevent ransomware and other malware before they even execute.

    The prevention-first approach detects attacks in milliseconds, faster than the fastest known ransomware is able to encrypt and cause harm. Preventing unknown threats is far more effective than other endpoint protection or detection-and-response solutions.

    Download Case Study


    Explore our aiSure™ Knowledge Center

    Discover our AI podcasts, whitepapers, articles, and more...

    Frequently Asked Questions

    AI insurance is a new category of business risk coverage. Unlike traditional casualty policies protecting against catastrophic loss, or traditional liability policies covering third-party damages claims, AI insurance – pioneered by Munich Re in 2018 – is triggered by unexpected errors in the ongoing performance of AI models in daily business use. This makes it suited to covering high-frequency, low-severity events, which in the aggregate can create significant economic loss or class-action exposures that existing forms of insurance simply don’t address.

    AI risks come in various types, including but not limited to:

    • Underperformance of the models
    • Hallucination, misleading content and false information
    • Bias and fairness risks, leading to discrimination
    • Privacy infringement following the leakage of private or sensitive information
    • Intellectual property violations by models trained on copyright-protected material
    • Users and customers exposed to harmful AI-generated content

    Munich Re has been helping our clients with insurance for AI products and services since 2018 and we are happy to discuss with you how to choose the right insurance for your AI solutions and potential exposures.

    AI is probabilistic – it makes decisions based on the probability of a range of outcomes. At the same time, AI is systematic – meaning it will continue to make the same mistakes until they are identified and corrected.
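    As a minimal illustration of this systematic behaviour (a hypothetical sketch with made-up numbers, not an actual aiSure™ model), a fraud classifier with a fixed decision threshold will miss the same borderline case on every run until the model itself is corrected:

```python
# Hypothetical sketch: a probabilistic classifier with a fixed threshold
# makes the *same* mistake on every run until the model is corrected.

def classify(score: float, threshold: float = 0.5) -> bool:
    """Flag a transaction as fraud when the model's score exceeds the threshold."""
    return score > threshold

# Illustrative model scores for five transactions; the last two are actually fraudulent.
scores = [0.10, 0.30, 0.40, 0.45, 0.90]
actual_fraud = [False, False, False, True, True]

for run in range(3):  # repeated runs reproduce the identical error
    predictions = [classify(s) for s in scores]
    missed = [i for i, (p, a) in enumerate(zip(predictions, actual_fraud)) if a and not p]
    print(f"run {run}: missed fraud at indices {missed}")  # always [3]
```

    The fraud at index 3 scores just below the threshold, so it is missed deterministically on every run – the error is probabilistic in origin but systematic in effect.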

    For businesses using or selling AI, the potential consequences of underperformance can include reduced profitability, serious financial losses, business interruptions, customer dissatisfaction and attrition, increased expenses and reputational damage, as well as legal liabilities and the resulting costs. The type of loss depends on the use case and can include financial costs for system malfunctions and breakdowns, business interruption costs, legal liabilities, extra expenses and costs for accidents.

    The risk of underperformance cannot be fully avoided by technical means. At the same time, not leveraging AI effectively could lead to a competitive disadvantage for a company. By indemnifying AI vendors and their corporate users against excessive losses, aiSure fills a risk management gap, protecting AI innovations with insurance.

    aiSure™ is a suite of comprehensive coverages for AI systems, designed to address a wide range of AI-related risks for AI providers and corporate adopters caused by AI performance errors, including:

    1. Contractual liabilities;
    2. Own damages and financial losses; and
    3. Legal liabilities.

    In an emerging market like AI, insurance solutions should adapt to evolving requirements. As the pioneering insurer in the field of AI, we pride ourselves on delivering tailored insurance solutions for AI product providers.

    We welcome other coverage requests and are happy to explore designing a bespoke solution for your business. To learn more on how to protect your AI technology with insurance, please reach out to us.

    Since issuing our first AI policy for an anti-fraud model in 2018, Munich Re has written aiSure™ insurance solutions for AI companies across a diverse range of industries.

    We are confident that we can provide significant risk transfer capacity across any industry use case where the accuracy and reliability of AI performance is critical to financial results. Please reach out to us to discuss your particular use case and requirements.

    We define artificial intelligence (“AI”) broadly as any form of statistical machine learning method based on data. This definition includes machine learning methods, deep learning models, reinforcement learning models, ensemble models, and others. In fact, aiSure™ is model-agnostic, so any type of model, including GenAI, is insurable. The quality of the model and its performance stability determine the premium, enabling us to provide a wide variety of insurance options for AI and machine learning firms. For more information on our model-agnostic risk management and assessment approach, click here.
    Yes. We offer tailored liability coverage that protects against risks specifically inherent to GenAI models, such as hallucinations and copyright infringement. For more information on how we insure GenAI models, click here.

    Munich Re follows a proven risk assessment process for ensuring performance reliability in AI systems through insurance:

    • The first step is to evaluate the model development pipelines and identify potential risk scenarios (unrepresentative training data, data drift, updating and monitoring processes, etc.). A thorough, quantitative analysis of data inputs and outputs is essential for gaining an understanding of AI liability and performance.
    • The second step is to derive a valid risk estimator. Based on historical performance data, we can estimate the future likelihood of model underperformance. Please refer to our whitepaper.
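    The second step can be sketched as follows. This is a deliberately simplified, hypothetical illustration – a binomial error model with a normal-approximation upper confidence bound – and not Munich Re's actual risk estimator or pricing methodology:

```python
# Hypothetical sketch: estimating the likelihood of future model underperformance
# from historical performance data, using a simple binomial error model with a
# normal-approximation upper confidence bound (NOT an actual aiSure(TM) estimator).
import math

def underperformance_estimate(errors: int, predictions: int, z: float = 1.96):
    """Point estimate and approximate 95% upper bound for the model's error rate."""
    p_hat = errors / predictions
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / predictions)
    return p_hat, min(1.0, p_hat + half_width)

# Illustrative figures: 12 errors observed across 10,000 historical predictions.
point, upper = underperformance_estimate(12, 10_000)
print(f"observed error rate: {point:.4%}, approx. 95% upper bound: {upper:.4%}")
```

    An insurer would price against the conservative upper bound rather than the raw observed rate, since the historical sample may understate the true error probability.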

    We are actively contributing to statistical and machine learning research in the area of uncertainty quantification critical to protecting AI systems with insurance. We have also developed methods that enable us to automatically price the model performance risk of AI in real time. Please refer to our research papers [An In-Depth Examination of Risk Assessment in Multi-Class Classification Algorithms and Distribution-free risk assessment of regression-based machine learning algorithms] and follow our LinkedIn feeds for future updates on our research on how to insure AI products and technologies.

    The performance of AI relates to the uncertainty of error. No model can ever be perfect, even with access to all information, the best available data science team, and state-of-the-art governance policies and processes in place. No matter how good the performance of an AI model is, there will be instances where it makes mistakes.

    For example, a GenAI model might hallucinate and provide factually wrong answers to user queries. A financial fraud detection model might fail to detect costly fraud events. A predictive maintenance model might fail to recognise an equipment breakdown event.

    An AI model is built to make inferences under conditions of uncertainty, which means that even if you design it to the most exacting standards, it will make errors. Indeed, in the famous formulation of UCLA Professor John Villasenor, “The laws of statistics ensure that – even if AI does the right thing nearly all the time – there will be instances where it fails.” In all such cases, AI and GenAI models make mistakes, which can cause financial losses and create liabilities, such as:

    • Property damage and bodily injury
    • Compliance fines and penalties
    • Privacy violations
    • Data leaks
    • Intellectual property infringement
    • Pure financial losses
    • Discrimination

    AI insurance is designed to mitigate the financial impact of AI model error uncertainty. The more you use and depend on AI, the more costly the losses can be. 

    Insurance can help AI providers increase their customers’ trust in the performance of the AI model and reduce sales cycle time. For businesses integrating AI technologies, AI insurance accelerates confident AI adoption by providing a financial safety net.

    Among the benefits of insurance for companies developing AI solutions is its ability to alleviate customer concerns about AI reliability. Potential clients may be skeptical about the real-world performance of AI technology. Thus, AI providers often struggle to address potential clients’ concerns regarding the uncertainty of AI predictions, leading to lengthy proof-of-concept deployments or extended due diligence processes for each client.

    Munich Re’s aiSure™, which covers AI system failures, instils confidence and trust in the solution. This enhanced reliability not only boosts customer satisfaction and retention but also strengthens the provider’s reputation and brand image. Ultimately, offering an AI system performance guarantee with clear, well-defined and significant compensation for model errors and their consequences can significantly shorten sales cycles by eliminating the need for extended proof-of-concept phases – why perform lengthy POCs when the outcome is insured? Guaranteeing the performance of AI puts AI users at ease knowing that potential financial losses from relying on the AI’s performance are covered. With aiSure™, AI providers can inspire trust in their solutions.

    Companies strive to leverage AI in order to automate tasks, optimise efficiency and boost productivity. Despite these benefits, companies face challenges in transitioning AI initiatives from innovation labs to operations. Executive management is rightfully concerned about the financial risks associated with entrusting operations to AI models, including the potential for accruing materially significant compliance fines under new regulatory frameworks. Transferring the risk of AI underperformance to an insurer provides peace of mind to company boards and investors, assuring them that model performance issues will not lead to financial events that could impact stock performance or pose reputational risks while ensuring compliance for AI technologies through insurance.

    Traditional insurance policies are generally made to cover traditional perils. AI, in its different forms (from random forests to Generative AI), with its different uses (from chatbots to medical instruments) and its different risks (prediction errors to discrimination) will test the limits of traditional insurance in the years to come.

    At Munich Re, we are aware of the limitations of traditional policies. This is why we offer tailored insurance solutions with flexible limits and payouts, specifically designed for AI risks. We cover all types of damages required, enable different payout triggers, emphasise low coverage thresholds, and require no legal liability element. Our AI insurance solutions therefore address an insurance gap and create legal certainty and peace of mind for AI users in a constantly changing environment.

    Lawsuits have involved the unauthorized use of copyrighted images or text as training material. If an AI plagiarizes content, the creator, developer, or user of the AI could be sued for copyright infringement, as they could all be held responsible for the AI's actions.

    Yes. There are a few reasons why GenAI models can create “substantially similar” images to their training data:

    1. Training data diversity: Generative AI models, especially those used for image generation, are trained on vast datasets that can include billions of images and texts. If many images in the training dataset share similar features, the model may learn these common patterns and produce similar outputs.
    2. Model overfitting: for large models with billions of parameters and iterative training steps, it is possible that the model memorizes specific details of the training data, generating outputs that closely resemble some training examples.
    3. Detailed prompts: when using text-to-image models, users can create highly detailed prompts that specify particular styles or compositions. If these prompts are similar to descriptions of images (such as artworks) used in model training, the generated images may resemble those images. The prompt similarity might not be intentional by a user and may simply be the product of chance.

    If an AI-generated image, audio clip, or text is found by the courts to be “substantially similar” to an original work, the creator of the AI-generated content can be found to have infringed someone’s copyright, leading to statutory damages of up to $30,000 per infringement, in addition to high legal defense costs and further damages.

    GenAI providers could be held liable for IP infringement in two ways:

    1. Copyright infringement in training data: AI providers can be (and are being) sued for utilizing copyrighted artists’ work to train their Generative AI models.
    2. Secondary liability due to copyright infringement of the GenAI’s output: If GenAI providers know that their GenAI models provide IP-infringing content, profit from it and do nothing to stop it, they could be assigned secondary liability by the courts.

    If the image created is substantially similar to an existing, protected image, and the image is subsequently published, the creator of the original image could sue the user in court for IP infringement, resulting in significant statutory damages.

    As a wide variety and quantity of content on the internet has been scraped for use as training data for GenAI models, one must assume that a lot of protected images are part of a GenAI’s dataset. It is an impossible task for a GenAI user to check whether an image created using GenAI is similar to a protected one that is part of the training data.

    Yes. Munich Re’s aiSure™ IP Liability policy protects both GenAI providers and GenAI users from lawsuits alleging IP infringement by “substantially similar” GenAI output, and covers legal costs that stem from an alleged IP infringement by the created image.

    We are happy to talk to you about the potential risks, your existing coverage and how we can help.

    Yes, there have been lawsuits related to AI bias and discrimination, particularly in hiring practices and lending. One notable case of AI bias and discrimination is the EEOC's September 2023 settlement with a tutoring provider. The company used AI in hiring decisions, which led to discriminatory practices against certain groups. The EEOC ordered the company to pay $365,000 as part of the settlement.

    AI-generated content can result in defamation lawsuits if it produces false statements that harm someone’s reputation.

    AI itself cannot be held liable. However, both the developers as well as the users of the AI could face legal consequences if the AI creates and disseminates fake news or misleading information. In understanding liability in AI product development, it is critical to note that who ends up bearing the liability is currently not clearly established in case law and remains an area of legal uncertainty.

    The importance of insurance for AI technology providers derives from the ability to prove the quality of your AI model and assure your customers that their AI tool will perform as expected. With aiSure™ - Contractual Liabilities, you can guarantee the performance of your AI tool; if the AI fails to deliver as promised, we back your performance guarantee and compensate your customers for the losses incurred.

    As an example, aiSure™ allows you to guarantee that your fraud detection model will catch all fraudulent transactions. If your AI fails to catch a fraud event, we provide a payout amounting to the losses incurred. This insurance-backed performance guarantee increases trust in your AI solution and shortens sales cycles, while our strong balance sheet carries the underperformance risk.

    Even with the best AI governance process in place, you cannot adopt AI without residual performance risk and, depending on the use case, residual discrimination, IP infringement, data reconstruction, and other risks.

    We enable corporations adopting AI by insuring the performance of your own AI models (e.g. self-built, purchased, or fine-tuned) with aiSure™ - Own Damages, supporting you in implementing AI solutions for critical operational tasks, such as in manufacturing or agriculture.

    Take the case of an automotive manufacturer turning to AI for the final quality control before distributing cars to their sales locations. aiSure™ enables the manufacturer to use AI in quality control in manufacturing without bearing the financial losses which might come with performance risk. Insuring the performance of the AI model protects the manufacturer against distributing sub‑par cars due to the error rate of their AI drifting beyond the desired threshold.

    When your models underperform, you know that their financial downside is covered by us. Our aiSure™  insurance solution enables worry-free implementation of AI models for vital parts of your operations.  For more information on best practices for insuring AI systems and applications and the role of insurance in complementing an overall risk mitigation and governance process, download the whitepaper.

    With aiSure™ - General Liability, you can protect yourself against damages and financial losses arising from lawsuits alleging that AI-made decisions were biased and discriminated against protected groups, or alleging other liabilities arising from AI use and creation.

    As an example: aiSure™ protects you against lawsuits for alleged discrimination against protected groups when, for instance, you use black-box AI to screen job applications or prioritize patient intake in a healthcare setting. This insurance solution promotes the equitable and responsible use of AI and shields you from expensive and far‑reaching lawsuits alleging disparate impact discrimination.

    Let's discuss how aiSure can help shoulder your AI risk
