Patrick Green, VP of Data

Explainable AI

Accuracy, bias detection, and compliance


    In the third of his series of articles on AI in insurance, Patrick Green, VP of Data at Munich Re Automation Solutions, examines the world of explainable AI and compliance.

    Explainable AI comprises the tools and processes required to meet your obligations under the responsible AI framework you have put in place. What does this mean? It is about giving transparency to the human owner of the overall system, in this case the insurer, into the decisions made by its predictive models.

    Predictive models tend to be ‘black boxes’: systems that take hundreds of data points or, in data science terms, ‘features’, which interact with each other in ways too complex for a human to follow.

    Explainable AI provides transparent audit logs which enable underwriters to interpret why a model has made a particular decision.

    This is important for two main reasons.


    Accuracy

    The first is around the accuracy of the model. 

    This is very much dependent on the historical data the model has been trained on. If your underwriting philosophy changes at some point after that data was gathered, the model may start to drift from its original intention and give decisions it would not ordinarily have given, out of line with your current philosophy.

    So, it is important to have tools in place to be able to monitor model drift.

    With these you can compare, using for example a set of random holdouts, a dataset that has been scored (or decisioned) by a human against the same dataset scored by the machine.

    The deviation between these two sets of decisions should not go beyond your acceptable limits, particularly when you consider that in life insurance underwriting, long-term risk is a key element of the insurer's appetite.
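    As a minimal sketch of such a drift check, assuming decisions are categorical labels and using hypothetical file and column names, the holdout comparison might look like this in Python:

    ```python
    import pandas as pd

    # Hypothetical inputs: the same random holdout cases, decisioned
    # once by human underwriters and once by the model
    human = pd.read_csv("holdout_human_decisions.csv")   # columns: case_id, decision
    model = pd.read_csv("holdout_model_decisions.csv")   # columns: case_id, decision

    merged = human.merge(model, on="case_id", suffixes=("_human", "_model"))

    # Share of holdout cases where the model disagrees with the human decision
    disagreement_rate = (merged["decision_human"] != merged["decision_model"]).mean()

    # Alert when drift exceeds the insurer's acceptable limit (threshold is illustrative)
    ACCEPTABLE_DISAGREEMENT = 0.05
    if disagreement_rate > ACCEPTABLE_DISAGREEMENT:
        print(f"Possible model drift: {disagreement_rate:.1%} disagreement on the holdout")
    ```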

    Bias detection

    The second aspect where explainable AI is important is around bias detection. 

    Bias can be part of the underlying data set or arise from implicit decisions made within the model. We can use explainable AI to chart the distribution of protected classes and compare how they would have been scored by the model against how they would score when decisioned by a human.

    These decisions should not differ from each other in meaningful ways. It is good practice to keep the discrepancy between human decisions and model decisions as low as possible using machine learning operations (MLOps) techniques.
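    A minimal sketch of such a comparison, again with hypothetical column names and an illustrative tolerance, could chart acceptance rates per protected class under both decisioning paths:

    ```python
    import pandas as pd

    # Hypothetical holdout data: one row per case, with the protected attribute
    # and the decision reached by each path
    df = pd.read_csv("holdout_decisions.csv")  # columns: gender, decision_human, decision_model

    # Acceptance rate per protected class, under human and model decisioning
    by_group = df.groupby("gender").agg(
        human_accept=("decision_human", lambda s: (s == "accept").mean()),
        model_accept=("decision_model", lambda s: (s == "accept").mean()),
    )
    by_group["gap"] = (by_group["model_accept"] - by_group["human_accept"]).abs()

    # Flag groups where the model diverges from human practice beyond a chosen tolerance
    print(by_group[by_group["gap"] > 0.02])
    ```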

    MLOps

    Machine learning operations (MLOps) is about continuously retraining the model on new data while keeping random holdout data separate, so that you always have a baseline that is scored by a human.
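    One simple way to maintain that human-scored baseline, sketched here with hypothetical file names and an illustrative holdout fraction, is to route a random slice of incoming cases to human underwriters and keep it out of retraining:

    ```python
    import pandas as pd

    # Hypothetical incoming applications
    applications = pd.read_csv("new_applications.csv")

    HOLDOUT_FRACTION = 0.05  # illustrative; set to suit your monitoring needs
    holdout = applications.sample(frac=HOLDOUT_FRACTION, random_state=42)
    automated = applications.drop(holdout.index)

    # The holdout is decisioned by humans and excluded from retraining,
    # giving a continuously refreshed human-scored baseline
    holdout.to_csv("human_review_queue.csv", index=False)
    automated.to_csv("model_scoring_queue.csv", index=False)
    ```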

    As we have discussed previously, models are difficult to understand: what has happened at the algorithm level?

    Data scientists and engineers who build and train the model do not necessarily understand why the algorithm made a given decision. Techniques like SHAP or LIME can be used to determine the importance of individual features in a model's decision.

    If we take the specific example of underwriting, feature importance can be determined: you can see how ‘smoker’ status or ‘gender’, for example, has affected the overall decision of the model, alongside other features such as medical disclosures or data coming from third-party data sources like Jones.
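    As a minimal sketch, assuming a trained tree-based risk-scoring model and entirely hypothetical feature data, SHAP values for such a model could be computed like this:

    ```python
    import pandas as pd
    import shap
    from sklearn.ensemble import RandomForestRegressor

    # Hypothetical underwriting features and risk scores for a toy model
    X = pd.DataFrame({
        "smoker": [1, 0, 0, 1, 0],
        "gender": [0, 1, 0, 1, 1],
        "medical_disclosures": [2, 0, 1, 3, 0],
        "age": [45, 30, 52, 61, 38],
    })
    y = [0.8, 0.2, 0.4, 0.9, 0.1]  # illustrative risk scores
    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

    # SHAP attributes each prediction to individual features, showing, for example,
    # how much 'smoker' status pushed a case towards a higher risk score
    shap_values = shap.TreeExplainer(model).shap_values(X)
    shap.summary_plot(shap_values, X)  # global view of feature importance
    ```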

    Compliance

    The final part of explainability in AI is that it makes models interpretable and allows you to remain compliant; we will look at that in our next article.
