Generative artificial intelligence is considered one of the most important technological breakthroughs of the last few decades. Fabian Winter, Chief Data Officer for the Munich Re Group, sees great opportunities for insurers – if they explore the possibilities of the new technology and understand its risks.
What do you think generative AI is – hype or a revolution?
Fabian Winter: With generative AI, we are observing for the first time that AI can have not only an incremental but a disruptive influence on many processes and business models. With the launch of ChatGPT, large parts of society – not just experts – have been able to interact directly with artificial intelligence for the first time. And the results of ChatGPT and other models are stunning: pictures, text, programming code, music, 3D objects – there is a wealth of new opportunities, and in many cases the quality of the output is already at a high level. And the models are continuously getting better thanks to the still very high levels of investment.
What are the challenges with AI-driven tools?
Generative AI – much more than traditional AI – offers opportunities and risks that we have to weigh against each other. It can help us make information accessible much more quickly and easily and thus improve the efficiency and quality of many processes. Compared to traditional AI, this even holds true for more complex and creative tasks, such as programming or the creation of demanding graphical works. In reinsurance business steering, we assume that this will, among other things, lead to decision support for our operative business functions, e.g. in underwriting.
Regardless of the technology, the quality of the results always depends on the quality of the data and processes used. Unlike “traditional” AI models, which are trained on data prepared by the respective experts, publicly available generative AI models are trained on vast amounts of publicly available data. We no longer have control over the training data. This means that while generative AI models can provide access to a lot of external unstructured data, there are also uncertainties with respect to the quality of the outcomes when using these models. On the other hand, the combination of such models with our own data presents a challenge for the protection of our intellectual property. That is why we should continue to be fundamentally guided by ethical considerations and quality requirements in our digital development. This also includes educating the people who use generative AI about best practices and potential pitfalls.
Where do insurers stand in dealing with generative AI?
We are at the beginning. As in all other industries, the first applications and products already exist. Exploring the new possibilities of generative AI – but also understanding and managing its risks in the long term – is a worthwhile task, and one of considerable importance for our policyholders. I would urge everyone in our industry to define potential use cases for their business – but at the end of the day, a lot of additional questions need to be answered to implement them successfully. For example, some models are particularly suitable for understanding medical text (like Med-PaLM) or generating programming code (like Code Llama). So it is important to choose the right model, or even to use a combination of models that need to be orchestrated. Another important question is whether and how standard models need to be adjusted for a specific context, such as a line of business or a specific task. There are various techniques to improve outcomes, each involving a different level of effort – from asking smart questions (“prompt engineering”) to “retraining” models on insurance-specific data, i.e. adjusting the parameters of the algorithms. And, of course, how can we make sure that our intellectual property and data are protected, e.g. by hosting our own encapsulated models within proprietary infrastructure? Answering all the technical, talent and protection questions can be quite challenging. Based on the experience and expertise that Munich Re has built up in the AI domain, we can support our clients on their journey to maximise the impact of their generative AI use cases.
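To make the lightest-touch of these adaptation techniques concrete, the short Python sketch below illustrates what “prompt engineering” can look like in practice: a raw case description is wrapped in insurance-specific instructions before it is sent to whichever model is being orchestrated. The function names, prompt wording and the generate() placeholder are purely illustrative assumptions, not a description of an actual Munich Re implementation.

```python
# Illustrative sketch only: "prompt engineering" as the lightest-touch adaptation
# technique mentioned above. generate() is a hypothetical placeholder for whatever
# public, fine-tuned or self-hosted model endpoint is actually used.
from typing import Callable


def build_underwriting_prompt(case_summary: str) -> str:
    """Wrap a raw case description in insurance-specific instructions so that a
    general-purpose model answers in the context an underwriter needs."""
    return (
        "You are assisting a reinsurance underwriter.\n"
        "Summarise the key risk drivers in the following case and flag any "
        "information that is missing for a quotation.\n\n"
        f"Case description:\n{case_summary}"
    )


def assess_case(case_summary: str, generate: Callable[[str], str]) -> str:
    # The model call is injected, so the same prompt logic can be pointed at
    # different models (or a combination of them) without changing this code.
    return generate(build_underwriting_prompt(case_summary))
```

In this sketch, only the instructions around the input change; the underlying model and its parameters stay untouched, which is what distinguishes prompt engineering from the more effort-intensive retraining on insurance-specific data mentioned above.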
Where is the AI journey going at Munich Re?
The large generative AI tools available to the general public, while promising, are of limited use to Munich Re. Because of the highly sensitive data that we handle, we need to ensure that the knowledge generated from these data is carefully protected. Consequently, we need to explore the possibilities of a self-contained (“encapsulated”) language model landscape, which we are designing not only specifically for the tasks at Munich Re, but also to fulfil all of our very strict intellectual property requirements. When using AI, our primary goal is to offer demand-oriented insurance solutions, for example to make it easier and quicker for clients to assess risks or settle claims, or to insure new types of risks. This is not restricted to generative AI; “traditional” AI can and will also continue to provide value for insurers.
Munich Re also assumes the performance risk of AI-based models through innovative insurance products such as aiSure™. More than a hundred experts at Munich Re are working intensively on AI – including an increasing number with a focus on generative AI. We are constantly exploring new areas of activity by combining our insurance knowledge with AI knowledge. Our goal is to push the boundaries of insurability and strengthen our clients’ resilience.