
Executive Conversations

Scaling AI in Insurance: A Conversation with Zurich's Christian Westermann


“More than a transformation, I see a revolution. Responsible AI is paramount.”


Christian Westermann is shaping the application of artificial intelligence (AI) in insurance, from improving decision-making to streamlining processes. As group head of AI at Zurich Insurance, he oversees the growth and scaling of AI across the company’s insurance value chain globally, guided by a robust Responsible AI framework meant to safeguard the company from potential AI risks.

Tanja Brettel, an expert in our Financial Services practice, chatted with Westermann about how the technology is revolutionizing insurance, why a responsible approach to AI is paramount, and what role the EU AI Act plays in his organization’s AI journey.

Q: How do you envision generative AI transforming the insurance industry in the next three to five years?

Westermann: Generative AI will not change the industry’s innate role in providing coverage for losses. However, more than a transformation, I see a revolution in how it will enable us to provide better customer service and make tasks like underwriting or claims management more efficient.

The insurance industry is very language- and picture-driven, with a lot of unstructured data. For example, large claims historically required loss adjusters on the ground to write down what happened and take pictures. Extracting information from such reports was a partially manual process. With generative AI, you can now do that at scale, across country borders. For global companies like us, this is a huge benefit. It improves insights into losses and, ultimately, helps us better understand our customers.

Q: What are the most significant AI-related risks for insurance companies, including the newer risks of generative AI?

Westermann: Very little has changed with basic AI-related risks: You must ensure that your models are reliable, that you address bias, that your solution is robust and explainable, and that you are transparent and accountable when using AI.

But generative AI introduces new risks, such as misinformation and deepfakes, which are making fraud more professional and cyberattacks more sophisticated, with AI probing weaknesses in a network and finding a strategy to penetrate it. Also, we all tend to rely on the same suppliers, so there’s a concentration and lock-in risk. If we are all based on the same third-party models, we all have the same dependency on those vendors, and we could unintentionally end up with the same biases.

Q: How do you ensure AI is responsibly implemented? How do you balance realizing value and mitigating risk?

Westermann: In 2019, we made a data commitment to our customers to be transparent about how we use their information. Building on this, in 2022, we rolled out our Responsible AI framework, which has become integral to how we use AI. Having this in place allowed us to immediately use generative AI during its breakout moment in 2023.

The framework includes an AI risk and impact assessment. Last year, we also introduced additional technical controls through a global gateway, which supports us in operating generative AI models.

We strike a balance between realizing value and mitigating risk. We assess all use cases, while also aiming to make our assessment tools very user-friendly. That way, employees are comfortable evaluating AI-related risks and can better focus on value creation.

Q: What does the EU AI Act mean for insurance carriers? How do you make sure you’re following the rules? Does it hold the European insurance industry back from innovating?

Westermann: The EU AI Act has outlined what types of AI are considered high, medium, or low risk across industries, including insurance. For example, the use of AI in underwriting for life and health insurance is, according to this new regulation, high risk. In addition, the AI Act uses a rather broad definition of AI. As such, insurance carriers need to assess whether and how they are affected by this regulation.

As a global player, we are monitoring regulation across different jurisdictions, and we update our AI assessment tools accordingly. I believe that for insurance carriers that operate in different markets, it is easier to use the same tools globally, as this simplifies AI solution design and rollout across multiple countries.

I believe the EU AI Act is not materially slowing down our innovation. We don’t limit ourselves in use-case ideation, and we do our assessments as part of the solution design. However, the Act might deter companies in Europe that build AI platforms or develop AI models. We observe that many of the large tech players in AI are located outside of Europe, for example, in the US.

Q: How do you ensure that employees handle generative AI tools responsibly?

Westermann: First, we make generative AI tools available for employees to explore in their day-to-day activities. For example, our internal generative AI chatbot, ZuriChat, is similar to ChatGPT but is run in our own environment, which is safe and protected. We prioritize training and education across all levels, including how to handle the technology responsibly.

We also keep an eye on so-called low-code or no-code generative AI tools. While these make building use cases and workflows easy, solutions must still adhere to emerging AI regulation. Therefore, it’s important that these tools are rolled out hand in hand with training and upskilling on responsible AI practices.

Q: Most generative AI use cases in insurance are internal, but increasingly, companies are exploring customer-facing tools. How should insurance companies think about building customer trust in these tools?

Westermann: Customer trust always starts with transparency, and this ties back to our data commitment from 2019. The moment you use AI, you should disclose it to the customer.

It’s also important to remember that AI has been used for many years in customer interactions, including the efficient handling of claims to the customer’s benefit. In addition, trust in these tools is driven by providing meaningful services that are considered a value-add. Generative AI can significantly contribute to customer satisfaction. For example, if my car broke down and I was stranded in the middle of the night, I could reach a voice bot that could offer immediate support, potentially with access to my policy. Or if I, as a customer, wanted to better understand my complex life policy, generative AI could facilitate that. These scenarios demonstrate generative AI’s benefit to the customer, encouraging people to embrace it more readily.

Q: Reflecting on your experience so far, what is your most significant learning about integrating AI?

Westermann: With generative AI, we have seen that we can demonstrate a solution’s value more quickly. Because we use foundation models, which are pre-trained, use-case implementation takes less time than in the past. However, going from the introduction of a new generative AI use case to a full rollout across the entire organization is still a lengthy process.

I have not yet seen a significant acceleration. The limiting factor is not related to technology or trust—but to people. People still need time to adapt to new processes and technologies. When I saw that ChatGPT was the fastest-growing application in history, I initially thought that organizations would also benefit from a faster change culture. But that was likely too optimistic.

