AI For Underwriters
A challenge yet an opportunity
In this article, Patrick Greene, VP of Data at Munich Re Automation Solutions, discusses the four common challenges that face insurance underwriters in the AI space.
The artificial intelligence (AI) space today is exciting.
We have lots of terminology and technology coming into the workplace and into our daily lives. Among other things, we have automated cars, facial analytics, and the advent of large language models (LLMs) and co-pilot style applications to help us create code and other day-to-day content.
The insurance industry is no different, particularly within underwriting, where document summarisation, fraud detection, and the user experience can all be improved using AI or machine learning tools, generally alongside existing rules to augment the automation model.
Additionally, AI can be used at various stages of the whole point-of-sale process, for example, facial analytics can be used to detect things like body mass index (BMI) up front.
Elsewhere, in the automated or manual underwriting processes, underwriters can use predictive models, and natural language processing (NLP) in particular, to accurately summarise documentation, such as the Attending Physician Statement (APS) in the US, into electronic health records.
The challenges facing insurers in predictive analytics are no different from those in other industries. There has been massive growth in AI over the last decade, and productionising these predictive models into business-as-usual flows is not always straightforward.
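To make the summarisation idea concrete, here is a minimal sketch of frequency-based extractive summarisation. This is a toy, not the NLP tooling referenced above: real APS-to-EHR pipelines use trained models, and the sample clinical note and function names here are invented for illustration.

```python
import re
from collections import Counter

def summarise(text: str, max_sentences: int = 2) -> str:
    """Pick the highest-scoring sentences by word frequency.

    A toy extractive summariser: sentences whose words occur most often
    in the document are assumed to carry its main points.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    # Score each sentence by the total corpus frequency of its words.
    scored = sorted(
        enumerate(sentences),
        key=lambda s: -sum(freq[w] for w in re.findall(r"[a-z']+", s[1].lower())),
    )
    top = sorted(i for i, _ in scored[:max_sentences])  # keep original order
    return " ".join(sentences[i] for i in top)

# Invented example of a short physician's note.
aps_note = (
    "Patient reports mild hypertension. Blood pressure is controlled "
    "with medication. Patient exercises regularly. Blood pressure "
    "readings are stable."
)
print(summarise(aps_note))
```

The design choice worth noting is that extractive approaches like this only select existing sentences, which keeps the output auditable, whereas the LLM-based summarisation discussed above generates new text and needs stronger governance.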
So, what are these challenges?
Primarily they fall into the four common areas of:
access to data,
access to the correct tools,
access to the right talent,
regulation.
Data
The main challenge in relation to data is silos. For the underwriter at application phase, we have point-of-sale systems, auto-underwriting systems, policy management systems and claims systems.
We do not have systems that manage the whole lifecycle of the policy, so these systems typically hold siloed data.
They use different IDs, different tokens, and different data formats, and the data is often not easy to access. This means underwriters cannot view data across the entire lifecycle.
This is a huge challenge for organisations as a business process, but it also provides challenges in being able to train those data models for predictive purposes.
Tools and skills
There are many new and exciting tools becoming available at an amazing pace and it can be difficult for insurers to upskill their staff to use these tools.
For example, how do you use large language models or a tool like ChatGPT?
Because of the upsurge in data and AI across the whole market, talent fluent in data science techniques is difficult for insurers to attract.
On the tools side in particular, there are no drag-and-drop style tools, or tools that are easily used by underwriters or other business functions. They tend to be very technical in nature, and difficult for an underwriter to understand.
On the skills side, within an organisation the business and domain knowledge usually lie with the underwriter, the technical knowledge with data engineering, and the data science knowledge with the data science team. Pulling all of those together is challenging for any organisation, and insurers are no different.
Regulation
One of the main challenges for insurers is regulation.
Insurance is a heavily regulated industry that is increasingly under the scrutiny of regulators, and in the realm of AI there are various binding and non-binding regulations across the globe.
For example, in North America the National Association of Insurance Commissioners (NAIC) has adopted its Artificial Intelligence Guiding Principles, non-binding guidance that focuses on protecting the individual applying for insurance. Within the European Union, the EU Artificial Intelligence Act primarily categorises AI systems into different risk classifications, with obligations on how each may be used, in order to protect the consumer.
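The EU Act's risk-classification idea can be sketched as a simple lookup. The four tier names match the Act's structure, but the mapping of example use cases to tiers below is illustrative only, not a legal interpretation, and the function name is invented.

```python
# The EU AI Act defines four risk tiers, from banned to largely unregulated.
RISK_TIERS = ["unacceptable", "high", "limited", "minimal"]

# Illustrative mapping only; real classification requires legal analysis.
EXAMPLE_USE_CASES = {
    "social scoring": "unacceptable",     # banned outright
    "insurance risk assessment": "high",  # strict obligations apply
    "customer chatbot": "limited",        # transparency duties
    "spam filtering": "minimal",          # few additional obligations
}

def classify(use_case: str) -> str:
    """Return the (illustrative) risk tier for a named use case."""
    return EXAMPLE_USE_CASES.get(use_case, "unclassified")

print(classify("insurance risk assessment"))
```

The practical point for underwriting teams is that the obligations attach to the tier, so knowing which tier a given model falls into is the first governance question to answer.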
Governance
Another key topic within AI is security and governance.
Data protection is paramount and a key requirement of any regulatory body.
Protecting personally identifiable information (PII), and ensuring that a four-eyes principle is applied when building, creating, or interacting with any case-related data, is key to meeting your responsible AI obligations, a topic we will dive into in the next article.
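As a final sketch, PII protection of the kind described above often starts with masking fields before a model or analyst ever sees a case. The field names, masking rules, and sample record here are invented; production systems use dedicated tokenisation services rather than a function like this.

```python
import re

def mask_pii(record: dict) -> dict:
    """Mask fields that commonly hold PII before downstream use.

    Illustrative only: which fields count as PII, and how they are
    masked or tokenised, is a policy decision, not a code decision.
    """
    masked = dict(record)
    for field in ("name", "email", "date_of_birth"):
        if field in masked:
            masked[field] = "***"
    # Redact anything shaped like a US SSN in free-text notes.
    if "notes" in masked:
        masked["notes"] = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[ID]", masked["notes"])
    return masked

# Invented case record.
case = {
    "name": "A. Smith",
    "date_of_birth": "1980-04-02",
    "notes": "Applicant SSN 123-45-6789 verified.",
}
print(mask_pii(case))
```

Note that masking is deliberately applied to a copy, so the original record survives for the audited, four-eyes workflows that do need the raw data.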