Patrick Greene, VP of Data

Responsible AI in Insurance

The critical path to successful governance


    In the second of his series of articles on AI in insurance, Patrick Greene, VP of Data at Munich Re Automation Solutions, explores the importance of responsible AI.

    Responsible AI is the practice of designing, developing, and deploying AI systems with good intentions.

    It is managed through regulation, but also through corporate and personal responsibility. It is about the whole ecosystem from start to finish, considering questions such as how AI models are employed and how they are used in a fair and ethical way. In the area of responsible AI, there are several frameworks and guidelines in place today that help to build responsible AI into internal governance.

    The Monetary Authority of Singapore has a particularly apt acronym: FEAT, which stands for Fairness, Ethics, Accountability and Transparency. It is important that insurers have the capability to deliver on these principles in a way that makes sense.

    July 2024 ebook: AI transformation in insurance underwriting

    Improve risk prediction accuracy, gauge intent, identify cross-selling opportunities and delight users.

    Fairness

    This is the practice of ensuring that any decision made by an AI system, or indeed any data-driven system, would give the same outcome as a fair human decision-maker would.

    Fairness is also about being able to justify the decision that has been made. There are two types of bias we try to detect in an AI model: explicit and implicit bias. Explicit bias can, for example, filter based on gender, ethnicity, or age, whereas implicit bias is bias built into data sets that is not apparent to the human observer. This could be because the historical data already carries implicit biases relating to protected classes or, for example, because the exclusion of certain variables caused the model to train towards a particular outcome.
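To make the idea of detecting implicit bias more concrete, one simple statistical check is the "disparate impact" ratio: comparing approval rates between groups defined by a protected attribute. The sketch below is purely illustrative; the data, group labels, and the 0.8 threshold (the common "four-fifths" rule of thumb) are assumptions, not anything prescribed in this article.

```python
# Minimal sketch of a disparate-impact check on model decisions.
# The data and the 0.8 threshold are illustrative assumptions.

def approval_rate(decisions):
    """Fraction of positive (approve) outcomes."""
    return sum(decisions) / len(decisions)

def disparate_impact(decisions_group_a, decisions_group_b):
    """Ratio of approval rates between two groups; values well
    below 1.0 suggest the model disadvantages group A."""
    return approval_rate(decisions_group_a) / approval_rate(decisions_group_b)

# Hypothetical model outputs (1 = approve, 0 = decline) for two groups.
group_a = [1, 0, 0, 1, 0, 0, 0, 1]   # approval rate 0.375
group_b = [1, 1, 0, 1, 1, 0, 1, 1]   # approval rate 0.75

ratio = disparate_impact(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50
if ratio < 0.8:  # "four-fifths" rule of thumb
    print("potential implicit bias: investigate features and training data")
```

A check like this will not explain *why* a disparity exists, but it flags models whose outcomes warrant a closer look at the training data and feature set.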

    Ethical

    Models should be ethical and should be held to the same standards as any human decision would be.

    Accountability

    Humans should be accountable for the decisions that are made by a predictive model.

    Humans should also be in control of the final output and decisions that are made by the model they are using, and they should be able to explain those decisions to the data processor or consumer in any application. This is particularly pertinent within the highly regulated insurance industry.

    Transparency

    When we talk about responsible AI, the key aspect is transparency. This is a subject that insurers need to be aware of when a model is making decisions related to their customers. 

    There should also be auditability and accountability in terms of being able to determine why a model made a particular decision about a customer. 

    Within that framework, explainable AI is key to being able to do this. Transparency and explainable AI are also key components of insurers' responsibilities under existing regulatory frameworks, and we'll dive into that in the next article.
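As an illustration of one widely used model-agnostic explainability technique (an example of our choosing, not something prescribed by the article), permutation importance measures how much a model's accuracy drops when a single input feature is shuffled; a large drop indicates the model leans heavily on that feature. The toy underwriting model and data below are illustrative assumptions.

```python
import random

# Sketch of permutation feature importance. The toy model and
# data are illustrative assumptions, not from the article.

def toy_model(row):
    """Hypothetical underwriting rule: approve younger non-smokers."""
    age, smoker = row
    return 1 if (age < 50 and smoker == 0) else 0

def accuracy(model, rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(model, rows, labels, feature_idx, seed=0):
    """Accuracy drop when one feature column is shuffled across rows."""
    rng = random.Random(seed)
    base = accuracy(model, rows, labels)
    column = [r[feature_idx] for r in rows]
    rng.shuffle(column)
    shuffled = [list(r) for r in rows]
    for r, value in zip(shuffled, column):
        r[feature_idx] = value
    return base - accuracy(model, shuffled, labels)

rows = [(30, 0), (60, 0), (45, 1), (25, 0), (55, 1), (40, 0)]
labels = [toy_model(r) for r in rows]  # labels match the model exactly

for i, name in enumerate(["age", "smoker"]):
    drop = permutation_importance(toy_model, rows, labels, i)
    print(f"{name}: accuracy drop when shuffled = {drop:.2f}")
```

Outputs like these give a human reviewer a concrete, auditable answer to "which factors drove this model's decisions", which is exactly the kind of evidence transparency and accountability demand.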
