Digital solutions podcast
Episode 2: Creating a robust strategy for AI in insurance

    Creating a robust strategy for AI in insurance

    In this episode of the Digital Solutions podcast, host Adnan Haque discusses how to create a robust AI strategy in the insurance industry with guests Dae Won Kim and Stephen Tse, both machine learning engineers at Munich Re.

    They explore the transformative role of generative AI (gen AI) and predictive modeling, examining how gen AI has increased accessibility and awareness of AI’s capabilities, especially since the release of tools like ChatGPT.

    The discussion covers challenges and benefits of integrating AI in insurance, from compliance and data security to productivity and competitive advantage. Adnan, Dae Won, and Stephen also tackle the nuances of choosing between building in-house AI models versus leveraging off-the-shelf gen AI products and emphasize the importance of responsible governance and risk management in AI applications.

    Listen below to learn more.

    Podcast host:
    Adnan Haque, Vice President, Integrated Analytics, Munich Re North America Life

    Guests:
    Dae Won Kim, Staff Machine Learning Engineer, Integrated Analytics, Munich Re North America Life

    Stephen Tse, Machine Learning Engineer, Integrated Analytics, Munich Re North America Life

    Adnan Haque:
    Hello and welcome to our Digital Solutions podcast. I'm your host, Adnan Haque, and I oversee risk assessment as a service, which is our alitheia product. I have two guests here today, Dae Won Kim and Stephen Tse. Dae Won, why don't you introduce yourself?

    Dae Won Kim:
    Hi, I'm Dae Won Kim. I'm a staff data scientist and machine learning engineer on the team. I focus a lot on new technology and R&D involving new technology. It could be unstructured data, it could be new data sources, it could be new data technology. So I've worked a lot on gen AI technology. I've worked a lot on unstructured text, sometimes images. That's my specialty.

    Adnan Haque:
    Stephen?

    Stephen Tse:
    Hey, I'm Stephen Tse. I'm a machine learning engineer for the team. I've been working on the EHR summarizer product as well as alitheia, mostly on predictive modeling. I've also led some of our LLM use cases where we experiment with potential applications as well as our machine learning infrastructure.

    Adnan Haque:
    Perfect. So today, we're here to talk a little bit about how to set a robust strategy for AI in insurance. Fortunately, the two of you are actively building out a lot of the different tools and actively engaging with a lot of what's out there in the market, both within our industry and in other industries. So, why don't we start with AI? Everyone uses the term, but I think not a lot of people mean the same thing when they use it. What are we talking about when we say AI?

    Stephen Tse:
    So I think that's a really good place to start, because it is a term that's used a lot and means different things. Most generally, when I think of AI, it's whenever you're using some kind of algorithm or some kind of predictive model to do a complex task that you would normally have a person do.

    Adnan Haque:
    And I guess colloquially, what do most people think of, even within the industry, when they think of AI?

    Dae Won Kim:
    You know, that's an interesting question, because sometimes I think that's where some of the negative connotations come from: a lot of the focus is on automation. A lot of the thought is, hey, these are processes being built to replace things that humans used to do. And I think that's where the focus sometimes sits when the term is used colloquially. I would add that while automation is a big focus in AI, a lot of times it's more about enablement. It's about adding value in places people didn't know value could be added, and therefore making the process a lot better.

    Adnan Haque:
    I think almost every other meeting I've been in for the last nine months or so, maybe over a year, has had some reference to gen AI. Especially ever since ChatGPT has been out, we've had some conversations about gen AI. So, what really drives the hype behind gen AI? Why does it dominate so much of the conversation?

    Dae Won Kim:
    I think before, AI was really being used in ways that were less tangible. I mean, people knew about it. People understood it to a certain extent, but it was like YouTube algorithms, like which videos are being recommended to you. It wasn't real. It wasn't human-like. That's not the way people think. But now it's sort of crossed the comfort zone, or the uncanny valley, where people say, "Oh wow, things that we uniquely associated with human qualities and human actions are now being replicated to a visible extent." You can see images that look real, you can hear voices that sound real, and you see text that you would have thought was not possible. So it's that uncanny valley that's really capturing people's attention. And it's really, really easy to see that these things are no longer uniquely human.

    Stephen Tse:
    I tend to agree. I think a large part of that is also accessibility. It's not just the understanding: any high schooler can use ChatGPT for their essays. Anyone can generate these images by just typing in a prompt, really interact with the tool, and see the effects right then and there. This is actually a very powerful tool, and it's not just engineers and technical people who have been working with these things for several years now. It's people in upper management who are making business decisions. They interact with this tool, and their first thought is, well, how do I leverage this for my business?

    Adnan Haque:
    That's a really interesting premise. And I agree, accessibility is dramatically different. Dae Won, to your point, when I think about my first use cases of ChatGPT, they were all around writing poetry. And just a couple of years ago, we used to say that of all the things AI would address, creative work like poetry and art was very, very far down on the list. So it's interesting how that's really changed our perception of how it can impact work.

    Over the last decade, companies have spent a lot of time working out how predictive models, and a lot of the different components you led with, Dae Won, fit into their overarching strategy. So, with the introduction of these new tools over the last year or so, how has that conversation shifted in terms of thinking about a robust AI strategy?

    Dae Won Kim:
    That's a really good question. I think the conversation has really shifted because there's a lot of interest. It was similar early on in the predictive analytics saga, where people were talking about compliance and about how we can have faith and trust in these systems. And I think back then it was a lot easier because you could show scores. It was basically a scoring system, and you could say, "Okay, this score is 90% correct or 95% correct." Now you have a more unstructured form, where the output is actually text, or an image, or something even less structured. It can be more nuanced. There's a lot more concern, especially in industry, about compliance, about governance, about leakage of IP or ideas, about memorization of sensitive data, et cetera. That conversation has become a lot more concerned, to a certain extent, but also more excited.

    Adnan Haque:
    You mentioned a lot of elements in that answer. So something like leakage of IP, what does that mean? Why is that a concern?

    Dae Won Kim:
    It's been noticed by practitioners and companies alike that a lot of these new technologies are essentially large memory machines. Oftentimes, that means things can be memorized or stored in places you would not intend or would not like. A lot of these technologies are also proprietary, so there's that whole boundary of "Hey, is this thing that I don't want memorized being memorized by someone else? How do we stop that? What's the issue there?" I think those are the more fascinating questions, and they arise mostly because of the way these models work.

    Stephen Tse:
    A large portion of that is also just the uncertainty, right? At the end of the day, it's kind of a black box. So if something is in there somewhere, being memorized, we don't really have good ways of telling at the moment. And because we know it's possible, you can never really be sure. That uncertainty, that risk, can turn a lot of people off.

    Adnan Haque:
    As you think about the relevance of this technology, there have been some who have said, "Hey, some of these new tools can potentially replace a lot of the existing predictive models and a lot of the existing machine learning work that's happened to date." So what would you say to that?

    Stephen Tse:
    I would say it's very unlikely that you would be able to replace everything you've already built. I also think it's very unlikely that, going forward, you would use this in place of your predictive models. There are a lot of potential risks and downsides, which we can talk about, to using an LLM as a predictive model.

    Adnan Haque:
    Dive a little bit deeper as to why. Why do we feel that, in the ultimate future, we won't be replacing these predictive models with some technology that interfaces with gen AI?

    Dae Won Kim:
    I don't want to be too strong and say it will never happen. The analogy I make isn't really about whether LLMs are good or bad at this. I draw on the analogy we generally make with humans about specialists versus generalists. A lot of the technology being built is generalist: very good at adapting to various tasks, but ultimately... it's like a new grad. When you have a new grad hire, they have a lot of potential. They're very malleable. They have lots of good skills, but are they going to be immediately as good as an engineer who's honed their skills for 10 years? Are they going to be as good as a plumber who's worked at their craft for 15 years? I don't think that will actually happen.

    So a lot of the predictive models we've built are, in some ways, the plumbers and the architects and the engineers who've specialized in their craft, to extend the analogy. The LLM has a great start. It's a great generalist. It can be molded into the things we like really easily, and it's very flexible. That's great, but it will not replace predictive models, at least not any time soon.

    Stephen Tse:
    I like that analogy. I think the conclusion you reach at the end, that it will not replace predictive models, is interesting in the context of that analogy. Because you say, "Oh, well, this is the new grad. It doesn't have as much experience." And the question being posited a lot now is, will this new grad go on in a couple of years to replace all of these specialists? You said no at the end, and I agree with that. I think the reasoning is also all the potential downsides and pitfalls that LLMs can run into, like hallucinations, potential for bias, and lack of explainability. The analogy implies, "Hey, this generalist could overtake them," but I don't think that's very likely.

    Adnan Haque:
    So, are there elements of this technology that make us think about this differently within the insurance context? Right? What this technology means for ads looks very different than what it means for writing life insurance. Why don't we talk a little bit about that?

    Dae Won Kim:
    There are two parts, one of which is specific to life sciences and healthcare. Obviously, there are a lot of regulations and a lot of sensitivity when it comes to data: about privacy, about making sure that sensitive medical information doesn't get memorized by some machine owned by another company or entity we don't know about. But that part is general to healthcare.

    For insurance, it's interesting that in the past a lot of the data was sequestered in each company. Each company had its own life data, mortality data, and experience data. There are some hurdles to be jumped because companies are protective of that IP, or they're traditionally averse to sharing it and potentially interfacing it with new technology.

    Adnan Haque:
    As you started working with this technology, where have the lines started and stopped between kind of these predictive models that are in place and gen AI? Then talk us through some of the controls also that are put in place, because you mentioned a lot of pitfalls. How do we mitigate some of those aspects?

    Stephen Tse:
    One of the big pitfalls is the risk of hallucination, which comes up especially when you're doing predictive modeling: the model gives you something completely out of left field, not related to the input data at all. This is an active problem in the community; whenever LLMs come up, there are a lot of papers detailing how you can try to identify hallucinations. For us, the answer is good monitoring. We train separate models to look at the inputs and the outputs and ask, does this actually make sense? Is this a reasonable conclusion that could have been drawn from this data, or is there something here that wasn't in the data? Because it would be a massive problem if you were decisioning life insurance like that and not trying to catch these issues.
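
    As a minimal sketch of the kind of monitoring Stephen describes, one simple guard is to check that every value a model extracts actually appears in the source document; the note, field names, and function below are invented for illustration, not the team's actual pipeline:

```python
# Minimal sketch: flag extracted values with no support in the source
# text. Everything here is hypothetical, not an actual production check.
import re

def is_grounded(source_text: str, extracted: dict) -> dict:
    """Mark each extracted value as supported (True) or unsupported
    (False); unsupported values are candidate hallucinations and should
    go to human review rather than into any decisioning step."""
    normalized = re.sub(r"\s+", " ", source_text.lower())
    return {field: str(value).lower() in normalized
            for field, value in extracted.items()}

# The A1c value is supported by the note below; the BMI is not.
note = "Patient reports no tobacco use. Last A1c 6.1 in March."
print(is_grounded(note, {"a1c": "6.1", "bmi": "31.2"}))
# {'a1c': True, 'bmi': False}
```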

    Dae Won Kim:
    I agree with that. To expand on the hallucination part, it's similar to what we talked about earlier in the conversation. A lot of the outputs are now more nuanced and a lot more unstructured. Before, when a predictive model generated a score, it was much easier to vet than a paragraph is. What is the validity of a paragraph, really? Is it that it makes sense? Is it that it sounds true? Is it that absolutely every single sentence is true? Even for a human, I think that's hard.

    What's really important to emphasize is that these new technologies are optimized for understandability and cohesiveness: the sentences sound real. They solve for sounding real, not for being real. I think that's where you need to make sure that whatever is interfacing with this technology has guardrails and safety measures, and that when it's making decisions, it's not making them solely on some unstructured output.

    Stephen Tse:
    As Dae Won mentioned, there's a lot of risk and complication when you have an unstructured output. So applying the LLM in a way that gets you a structured output is a trick you can use to simplify that, while also making it easier to monitor and to apply these guardrails.
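
    A minimal sketch of that structured-output trick, assuming a hypothetical call_llm() helper in place of a real LLM client; the schema and field names are invented:

```python
# Minimal sketch: push the LLM toward JSON, then validate it against a
# simple schema before anything downstream sees it.
import json

SCHEMA = {"smoker": bool, "a1c": (int, float)}  # invented example fields

def extract_structured(note: str) -> dict:
    prompt = ("Return ONLY a JSON object with keys 'smoker' (boolean) and "
              "'a1c' (number), based strictly on this note:\n" + note)
    raw = call_llm(prompt)   # hypothetical LLM call
    data = json.loads(raw)   # non-JSON output fails here outright
    for key, expected in SCHEMA.items():
        if not isinstance(data.get(key), expected):
            raise ValueError(f"Guardrail tripped on field {key!r}")
    return data              # structured, type-checked, easy to monitor
```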

    Adnan Haque:
    So basically, what you describe is: something like a mortality score that uses a lot of our older techniques, we don't see integrating LLMs in the short term. But for something like parsing information out of unstructured text, there's a big use case for gen AI.

    Dae Won Kim:
    In some ways, I feel like we sound very, very skeptical, and that's not necessarily the case. We have some healthy skepticism, but the justification is more that we don't yet understand when these hallucinations or other adverse behaviors occur. I think this is true generally for all new technologies, right? So that's why we advocate: if you have a system or a process, for the decision-making components, the ones that have more lasting and final consequences, let's not rush to integrate gen AI tech there. I think it has a lot of potential for extractive capabilities. And extraction is, I'm not going to say harmless, but definitely less dangerous than integrating directly into a decisioning component.

    Adnan Haque:
    There's that interesting story of an airline whose gen AI chatbot issued a refund that wasn't consistent with existing policy. So what you mention makes a lot of sense in terms of adding those controls. When I think about the work we're doing, or the work you just mentioned, Dae Won and Stephen: for a lot of people who are looking to invest in this technology and in projects around it, what's in the back of their mind is, "Okay, how much of what I'm doing right now might soon be available in something that already exists, something I can buy off the shelf? Is it worth investing all these resources?" A good example is companies that focused a lot on, say, PDF summarization, and then Adobe released a tool that does PDF summarization. That in and of itself addressed a lot of smaller companies' business cases.

    So, how do you pick the right use cases to pursue knowing that a lot of these larger technology companies are investing heavily in integrating this technology into their products?

    Dae Won Kim:
    That's a tricky question, because document summarization is in many ways a huge task, and kudos to Adobe for really making breakthroughs there. Part of the challenge with gen AI isn't making an effective model. Usually gen AI outputs are somewhat good, as in good enough to be useful, but not so good that you can automate with them or use them completely out of the box or off the shelf. In fact, a lot of times you have your own use cases and your own guardrails. For insurance, there's health data risk, like HIPAA compliance. There's regulatory risk. There's mortality risk. It's very specific to the use case you're working with. So you have to mold it through a series of processes, guardrails, rules, filtering, data ingestion pipelines, et cetera.

    Always be on the lookout for a tool out there that can close that specific gap, but ultimately, you have to carve out that process, that framework, yourself.

    Adnan Haque:
    That makes sense, but I guess the other part of that question was: how do you know that what you're investing in, or what you're going to work on over the next six months, isn't going to be part of Microsoft in two years? So what do I do? How do I know which problems to solve using gen AI?

    Stephen Tse:
    It's a very hard question. You're asking something very general, and it's hard to answer in a way that anyone listening could definitely apply to their use case. At Munich Re, for our solutions, we have access to very specific data sources, so it's very unlikely that a lot of what we're building is going to show up in Microsoft in a few months, but that's specific to us. I think you really have to evaluate what performance benchmarks are acceptable for you. Like Dae Won mentioned, if you do it yourself with your own specialized data source, you'll probably get better performance, assuming you have enough data and everything's done properly, than these out-of-the-box solutions that you could potentially be paying more for.

    You really have to evaluate all of these factors and decide for yourself: is it worth it to build this, or is the maybe slightly easier but potentially worse solution good enough to meet the standards of your business use case?

    Adnan Haque:
    I think that's a good point. It's somewhat an extension of the question that we always ask of do we build or buy, for anything, any technology project, any modeling project, really any project at all. That's definitely a fair point.

    Dae Won Kim:
    The one thing I would add, though, is that when you're investing in technology, there are two things. One, you're investing in the development of the technology itself so that it works, but you're also investing in the expertise of learning a new thing, building a new thing, adding value on top of things that are already available out there, even if it's not gen AI. The way we make APIs, the way we make software applications, the way we use different programming languages: it all builds on things that already exist. So even if a new thing appears that can replace a lot of what you're building, you've still learned a lot about how to develop really good applications on top of things that already exist. The benefit of that intangible investment should still be considered, and that investment is not going to zero just because something you've been building for six months has suddenly been replaced by something from Google.

    Stephen Tse:
    That's a really good point, especially because even if you don't succeed, those learnings are important, and they carry over to the next thing you try. The companies behind these out-of-the-box solutions have gone through the same problems. They've hit the same hurdles as you, and they've figured out solutions to them. But you're not buying those learnings when you buy the product; you're just buying the product, and they retain that knowledge. Their next product is going to build off of it. When we're talking about AI strategy long term, at least attempting some of these things actually has a lot of value, because the knowledge you gain snowballs and lets you tackle harder problems as you go. Whereas if you're always buying and buying, you're going to fall further and further behind.

    Adnan Haque:
    You both have referenced expertise. And when I think about expertise, if I'm a company that hasn't invested at all in gen AI and I want to start entering that space, what does that look like in terms of bringing in talent or a team or people to start working on these problems?

    Dae Won Kim:
    That's a difficult question, because sometimes it becomes a chicken-and-egg thing. Do you invest in engineers who can build the infrastructure, or do you bring in architects who draw the building, influence the stakeholders, and really persuade people that the use case is there? It's difficult, but I think you start with a small team of data scientists and engineers.

    There's a plethora of problems out there that cannot be solved by generalist solutions, by these increasingly skilled, smarter and smarter new grads being trained by the likes of Google and Adobe and the other large tech companies. At the end of the day, there's always a problem that's very specific to your company, one that you're the best situated to solve. Right? And I think starting small, starting with the smallest investment, and really snowballing that return on investment is key.

    Adnan Haque:
    Why don't both of you walk me through some case studies of how we're actually using this type of technology and where we've been able to get comfortable using it?

    Stephen Tse:
    You actually alluded to one of the use cases earlier, which is using these models to structure unstructured information. Going back to EHRs: a lot of them hold a plethora of valuable information, but in a very unstructured form. Think notes written by a physician about a visit; trying to get model features out of that can be very difficult. You can sometimes identify where some of these features sit in the text using a regular expression or a simpler model, but that approach struggles at scale, because your model probably wants multiple features from this text. You could train a model to pull out and structure each piece of information individually, but that doesn't scale to 30 features; now you're creating 30 labeled datasets. This is one of the areas where we've had the most success.
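
    A minimal sketch of the scaling contrast Stephen describes: the per-feature route needs a handcrafted pattern, or a labeled dataset, for every field, while the LLM route asks for all fields in one pass. The pattern, field names, and call_llm() helper are invented:

```python
# Minimal sketch: per-feature extraction versus one multi-field pass.
import json
import re

# Per-feature route: one handcrafted pattern (or trained model and
# labeled dataset) for each of the 30 features you might want.
A1C_PATTERN = re.compile(r"\bA1c\s*(?:of|was|:)?\s*(\d+(?:\.\d+)?)", re.I)

def extract_a1c(note: str):
    match = A1C_PATTERN.search(note)
    return float(match.group(1)) if match else None

# Multi-field route: one prompt returning a single JSON object.
FIELDS = ["a1c", "smoker", "bmi", "blood_pressure"]

def extract_all(note: str) -> dict:
    prompt = (f"From the clinical note below, return ONLY a JSON object "
              f"with keys {FIELDS}; use null for absent fields.\n\n{note}")
    return json.loads(call_llm(prompt))  # hypothetical LLM call
```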

    Dae Won Kim:
    To add to that: I don't know if anyone listening has experience with data science competitions or challenges, but there used to be a very set formula. You have a dataset that looks like a table, it goes into a model, and out comes a score. That used to be the formula, right? But in the real world, you quickly learn that the table looks very ugly, and part of the problem is even getting to that table. The most successful and most impactful use cases we've had are where the inputs are variable. If you expect certain values but they're not capitalized, and you're only getting lowercase data, you realize, "Oh my, all the code I built doesn't actually work for all these cases."

    More concretely, different medical data sources can have different formats. They can be worded differently. Maybe some have periods and some don't. So robustly handling different input sources is where this is most impactful.
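
    A minimal sketch of the input-variability problem Dae Won describes, with invented examples in which three sources spell the same value three ways:

```python
# Minimal sketch: collapse case, punctuation, and spacing variants from
# different data sources into one canonical key.
import re

def normalize(raw: str) -> str:
    return re.sub(r"[^a-z0-9]+", " ", raw.lower()).strip()

sources = ["Type 2 Diabetes.", "type 2 diabetes", "TYPE-2 DIABETES"]
assert len({normalize(s) for s in sources}) == 1  # all collapse to one key
```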

    Adnan Haque:
    Those two case studies are very specific and, I think, involve deeper work with a data science or engineering team. Talk to me a little bit about the smaller, or potentially bigger, efficiencies to be had through using things like Copilot or other products.

    Stephen Tse:
    The productivity boost for the individual engineer is very promising. I've used it and really enjoyed it. I think it helps. There was one time when I was handed someone else's code, and I figured out there was a bug in one really long line. There was a key error of some kind, so I figured, what if I just delete this line, see what the tool fills in, and try to run it? And that worked. That was potentially a two- or three-hour save for me.

    Dae Won Kim:
    I think the other scenario where productivity gains can be made, and this touches on automation a little bit, is this: in the past, when we asked domain experts, an underwriter or an actuary, to give their expertise on a specific set of data, they had to come in, start from zero, and begin labeling the rows. The approach we've taken now is that gen AI and AI technology can pre-fill these labels for review, and rejecting or accepting gen AI outputs is a lot faster. From a productivity standpoint it's a lot faster, but it also means that domain experts now have more of a voice in the data science process. It's no longer entirely dependent on someone coding it up for you or turning it into a model for you. The domain experts can directly start making more impact. I think that's as much a productivity gain as faster processes are.
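
    A minimal sketch of the pre-fill-and-review loop Dae Won describes; every name here is invented for illustration:

```python
# Minimal sketch: the model proposes labels up front, and the domain
# expert only accepts or rejects, instead of labeling from scratch.
def prefill(rows, suggest):
    return [(row, suggest(row)) for row in rows]

def expert_review(prefilled, accept):
    return [(row, label) for row, label in prefilled if accept(row, label)]

# Example with trivial stand-ins for the model and the expert:
labeled = expert_review(prefill(["note 1", "note 2"], lambda r: "smoker"),
                        lambda row, label: True)
```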

    Adnan Haque:
    Definitely a fair point. One of the things that's come up as we look at this technology and in the first two case studies you mentioned is there's a lot of potential to build really interesting things with smaller datasets than before, right? So the table you mentioned before had to be very, very large to build something like a mortality score. Has something like this technology started eroding maybe the edge you would have from having a large dataset?

    Dae Won Kim:
    It depends. If a lot of your data depends on things that are readily understandable by a human, or by gen AI, I think the edge will erode. But an LLM will not know mortality. It will not know the specifics of a specific industry, so that portion of the data is still very much relevant, and having lots and lots of it still creates a huge advantage. But if your dataset is mostly text, and human understanding of that text is the essential part of it, then yes, absolutely, the edge will erode.

    Stephen Tse:
    I tend to agree. There are situations where it probably will. But in the example with mortality, where we're using it more to augment than to actually predict mortality, I think it's actually a benefit. It allows us to do more with the data, to experiment faster, and to expand the edge we get from having the data.

    Dae Won Kim:
    I would say, though, that maybe it's a rightful shift from the past, where what data you had and how much of it you had determined a lot of your advantage. Now it's more about, "Hey, you have this data that's somewhat unique to you, which gives you a time advantage and an expertise advantage over the others." How quickly you turn that into a product, and the return on that investment, is crucial. It's no longer just having the data; it's how fast you deliver on it.

    Adnan Haque:
    The use cases you listed, Stephen, are elements we've looked at in the context of alitheia and of our EHR summarizer. What's a specific use case in alitheia?

    Stephen Tse:
    One that Dae Won and I have both worked on recently is actually using it for mapping. So this would be mapping between different underwriter rule books, so that we can onboard carriers faster and match up their questions with our questions and disclosures.

    Dae Won Kim:
    To add onto that: something we want to do is, when carriers change their rule books, or when there are significant changes on either side, ours or the carrier's, automatically generate the discrepancies and the overlap in risk appetites, and automate that whole process. If there's a change in the client manual, we shouldn't have to have someone read through the language, all 50 sentences of it, to make sure it overlaps properly. I think that can be done by something like gen AI technology.
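
    A minimal sketch of the mapping idea, assuming a hypothetical embed() helper that turns a question into a vector; any sentence-embedding model could fill that role:

```python
# Minimal sketch: match each carrier rule-book question to our closest
# question by cosine similarity of embeddings.
import numpy as np

def best_match(carrier_question: str, our_questions: list) -> str:
    q = np.asarray(embed(carrier_question))        # hypothetical embedding call
    cands = np.array([embed(s) for s in our_questions])
    scores = cands @ q / (np.linalg.norm(cands, axis=1) * np.linalg.norm(q))
    return our_questions[int(np.argmax(scores))]   # highest-similarity question
```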

    Adnan Haque:
    As you talk through case studies, I always think about price and impact. So what does this mean in terms of impact?

    Dae Won Kim:
    Gen AI technology can result in real time savings. A lot of times, when we do these mappings, or these changes, alterations, updates, and builds, it takes a lot of time to review; a domain expert has to come in, read through everything, and vet it. All of that can be shortened significantly. We've actually seen reductions of 50% or more in review time, sometimes significantly more than 50%. There's huge potential for efficiency gains.

    Adnan Haque:
    None of this becomes real without governance. Let's talk a little bit about governance and some of the regulatory aspects of adopting this technology. Dae Won, I know you led off with a little bit of information on the governance aspect.

    Dae Won Kim:
    The challenge of governance is really knowing where to draw the line between flexibility and being conservative and risk-averse. Defining your appetite for specific stages of development is key. Internally at Munich Re, the way we've thought about this is that exploration and experimentation are very separate zones from deployment and productionization. Of course, when you're productionizing, you need to make sure you're not leaking data to the wrong places, that your processes are legitimate, and that their outputs are valid. But when you're experimenting, I don't think you should be as stringent, because that can really slow you down and stop you from looking at the places that could hold value.

    Stephen Tse:
    I tend to agree with all of that. One other thing to add is bias testing. That's an increasingly large concern, especially within insurance. Be aware of how the LLM you're using was trained. Even if you're doing some kind of fine-tuning, a lot of these models are trained on large portions of data pulled straight from the internet. There's a lot of sexist or racist content in there, and you don't want that propagated through your model, especially in the industry we're in.
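
    One minimal form of the bias testing Stephen mentions is a counterfactual probe that scores the same text with only a demographic term swapped; the template and scoring function below are invented:

```python
# Minimal sketch: score identical inputs that differ only in the group
# term; large gaps between groups are a red flag worth investigating.
def bias_probe(score_fn, template: str, groups: list) -> dict:
    return {g: score_fn(template.format(group=g)) for g in groups}

# Example usage with a hypothetical scoring model:
# bias_probe(model.score, "Applicant is a {group} nonsmoker, age 45.",
#            ["male", "female"])
```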

    Dae Won Kim:
    Something that just occurred to me: from a cost and investment perspective, you also need to make sure you're not building gen AI components for the sake of building gen AI components. I've committed this mistake many times, where I see something cool and new come out and think, "Oh, I definitely want to use that for my modeling," when the most boring model would do just fine, or a spell checker that's worked for decades would work just fine. I don't need an LLM to check spelling for me. So know when it's a good idea to go heavy, and I say "heavy" because some of the GPT-4 technologies, some of the OpenAI technologies, some of the more proprietary technologies, can be costly to use, train, and tune. Really knowing when to dip in and when not to is also part of that strategy.

    Stephen Tse:
    That's a good point, because for a lot of the people making these business decisions, their first introduction to this kind of predictive modeling is generative AI and LLMs. So the first thing they think of is using that. As you said, the simpler model is often good enough; you can use something way simpler and get similar, if not better, results.

    Dae Won Kim:
    In fact, Stephen, in general, always asks me to make my models and my ideas simpler. So that point is very, very valid.

    Adnan Haque:
    As one of the people who signs off on the cloud spend, I can definitely see that it can be expensive. Last podcast, we added a question that was a little bit outside of the topic of the day. The question we have for today, Stephen, maybe you can start: if a dog wore pants, would it be on all four legs or just the hind legs?

    Stephen Tse:
    When you wear pants, are they on all four of your limbs?

    Adnan Haque:
    I guess no.

    Stephen Tse:
    Yeah. So I would say two.

    Adnan Haque:
    All right, that checks out. Dae Won?

    Dae Won Kim:
    I don't know. I think that's a very restrictive definition of pants. Pants are where the legs are. As you know, the question mentioned hind and front legs. I believe pants cover the legs, and so they should be on all fours. The real struggle for me isn't the legs, it's really, does the waistline go across the torso in a horizontal fashion? I think it's more of a fashion and an aesthetic answer in my mind than it is about technicality.

    Adnan Haque:
    All right. Both very valid points and valid answers. Before we bring this to a close, any closing remarks that you'd like the audience to take away?

    Stephen Tse:
    Sure. Generative AI has a lot of interesting applications, especially in our work. But to leverage it, you need to do things properly. The main things for us, at least what we've figured out so far, are recognizing when it's applicable, being cognizant of issues like hallucinations and bias, and having a solution in mind from the beginning for how you'll handle them.

    Dae Won Kim:
    Those are all really valid points. Going back to some of the questions we touched on in this conversation: when talking about gen AI, and AI in general, it's best to separate the parts that are more hype and eye-catching from the ones that are more relevant to the business. In terms of business decisions, I don't think this is a new situation. You should treat it like any other new technology that comes into play. The decision of where to use relatively untested technology shouldn't be very different from technology to technology.

    That said, you can think about how the hype, the intangibles, and the flashy parts apply more directly to your business. But again, the governance parts are very similar. You need to have interdisciplinary conversations, be transparent, and know when to stop and when to start. All of those things apply.

    Adnan Haque:
    Both of you have reiterated to me in the past that gen AI is just one tool in the overarching toolkit. It's about remembering that when you have a hammer, especially a really shiny hammer, everything looks like a nail, and taking a step back to make sure we're using the appropriate tool. So with that, thank you so much, Dae Won. Thank you so much, Stephen.

    Dae Won Kim:
    Thank you so much.

    Stephen Tse:
    Yeah, thanks for having us.

    Adnan Haque:
    And thank you for everyone listening. I appreciate you taking the time to listen to the podcast.

