Why Insurance Providers Need Explainable AI to Mitigate Risk

November 18, 2021

Advances in Big Data and Artificial Intelligence (AI) have led to promising digital acceleration, widespread automation, and the rapid development of machine learning systems capable of performing a significant number of manual tasks in a matter of milliseconds. As a result, insurance providers can offer customers hyper-personalized experiences and more efficient processes.

However, with the introduction of new technology, insurance carriers face an inherent challenge in ensuring they meet regulatory obligations. According to the National Association of Insurance Commissioners ("NAIC"), "The fundamental reason for government insurance regulation is to protect American consumers." With new technology, especially AI, understanding, and being able to explain, how the technology works is vital to ensuring compliance with regulations and protecting consumers. As a result, insurance companies will need to use Explainable AI (XAI) to develop new methodologies that explain and interpret machine learning models.

Researchers Doshi-Velez and Kim define Explainable AI as "the ability to explain or to present in understandable terms to a human." Humans must understand how machines "think" to fully harness their capabilities, remove machine-led biases, and predict future patterns. AI refines its predictive power with every task completed, and as a result of these continuous iterations, humans lose the ability to explain the AI's decision-making. Its actions become increasingly challenging to understand and even harder to predict.

Because AI continuously modifies its own decision-making, there is an inherent risk that it will produce unintended consequences or results, such as biased outcomes. As such, insurance companies must be able to explain to regulators how their AI works and why particular decisions were made in order to demonstrate compliance with applicable regulations.
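One widely used way to produce such explanations is model-agnostic feature attribution. The snippet below is a minimal sketch, assuming scikit-learn and a synthetic stand-in for a claims or underwriting dataset (the feature names are hypothetical); it uses permutation importance to rank which inputs most influence a model's decisions, the kind of evidence an insurer could surface when explaining a model to a regulator.

```python
# Minimal sketch: model-agnostic feature attribution with permutation importance.
# The dataset is synthetic and the feature names are hypothetical placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical inputs to an underwriting or claims model
feature_names = ["age", "prior_claims", "policy_tenure", "vehicle_value", "credit_score"]

X, y = make_classification(n_samples=2000, n_features=5, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops;
# larger drops indicate features the model relies on more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {importance:.3f}")
```

Because this kind of check treats the model as a black box, the same approach can be applied regardless of which algorithm or vendor produced the model.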

Guiding principles for Explainable AI

In the wake of COVID-19, regulators within the various Departments of Insurance started closely monitoring insurers' use of AI and data models. Insurance regulators have vowed to monitor companies' use of consumer and non-consumer data, their use of artificial intelligence, and the availability of insurance products for racial minorities. The NAIC went on to say, "The public wants two things from insurance regulators. They want solvent insurers who are financially able to make good on the promises they have made, and they want insurers to treat policyholders and claimants fairly." Thus, an insurer's ability to explain clearly and in plain language how a new AI application treats the public fairly will be at the crux of regulatory activity arising from the states.

The NAIC unanimously adopted guiding principles that insurers should use to comply with regulations. These principles outline five fundamental tenets that insurers must follow:

  1. Fair and Ethical
  2. Accountable
  3. Compliant
  4. Transparent
  5. Secure/Safe/Robust

These principles apply to the creation and use of AI. However, to adhere to these principles and demonstrate that they embrace them, insurers must clearly understand and be able to explain how their AI operates.

Significant risks associated with the use of Big Data and AI

Because the fundamental purpose of insurance regulation is to protect the consumer, insurers whose use of Big Data and AI could impact customers face the challenge of demonstrating compliance with existing regulations, which still apply to machine learning and AI.

Insurers who fail to implement XAI or develop a governance model for AI/ML are subject to risks related to non-compliance, bias, and security, including:

  • Bias in data sets used to make claims or underwriting decisions
  • Non-compliance with state, federal, or other regulations
  • Security and cyber vulnerabilities when AI drives toward a specific outcome without validation of how it arrives at that outcome

AI can discriminate based on social, racial, and gender data points

Unexamined AI models are not only risky but can be dangerous: without human direction, they can operate with social, racial, and gender biases. In one example of bias in AI, researchers discovered that women are less likely than men to be shown ads for high-paid jobs on Google. Google's AI algorithm uses personal information, browsing history, and internet activity to generate search results for users. The unconscious machine determined that men historically sought higher-paying jobs and perpetuated this trend, a harmful response to the data.

Countless regulations expressly prohibit discriminatory actions in insurance against protected classes, such as race or gender. The challenge with AI is that, at a certain point, a human can no longer predict its outcomes. This creates tremendous risk and a challenging problem when trying to prove to a regulator that the algorithm behind your underwriting or claims process is not inherently biased and therefore in violation of anti-discrimination laws. Such bias could lead to negative underwriting outcomes or even unfair claim practices.
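One way to start building that proof is to run simple disparate-impact checks on model decisions before and after deployment. The snippet below is a minimal sketch using hypothetical data and a hypothetical internal threshold; it compares approval rates across a protected attribute (a demographic parity check), which is only one of several fairness tests a governance process might require.

```python
# Minimal sketch of a disparate-impact check on model decisions.
# The data and the threshold are hypothetical, not taken from any regulation.
import pandas as pd

decisions = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M", "F", "M"],
    "approved": [1,    1,   0,   1,   0,   1,   1,   1],
})

# Demographic parity: compare approval rates across the protected attribute
approval_rates = decisions.groupby("gender")["approved"].mean()
parity_gap = approval_rates.max() - approval_rates.min()

print(approval_rates)
print(f"Demographic parity gap: {parity_gap:.2f}")

THRESHOLD = 0.10  # hypothetical internal review threshold
if parity_gap > THRESHOLD:
    print("Gap exceeds threshold: escalate for bias review before deployment.")
```

Which metric and threshold are appropriate is a legal and actuarial judgment for the insurer and its regulators; the code only makes the comparison visible and repeatable.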

AI can lack transparency

In 2020, a Dutch court ruled that an automated surveillance system using AI to detect welfare fraud violated the European Convention on Human Rights and ordered the government to cease using it immediately. The Dutch government had used the model to calculate risk by gathering housing, employment, personal debt, and benefits record data. The system used the data to identify individuals who might pose a higher risk of committing benefit or tax fraud. However, the court found the method "insufficiently clear, verifiable … and controllable" due to a lack of transparency. Such a lack of transparency can put insurers at risk with both their consumers and their regulators.

In another example, InsurTech Lemonade received backlash from the public after tweeting about how its facial recognition technology allows it to save money by denying insurance claims. Caitlin Seeley George, campaign director of digital rights advocacy group Fight for the Future, said, "It's incredibly callous to celebrate how your company saves money by not paying out claims (in some cases to people who are probably having the worst day of their lives)." The incident demonstrated both the general public's fundamental lack of understanding of AI and the insurtech's inability to explain how its AI operated in relation to claims. The company touted an outcome without being able to explain how its AI worked, leaving consumers upset about what it meant for them, and it could not effectively assure them that claims would not be inaccurately denied because of AI bias or discriminatory practices.

Notable examples of unexplainable Big Data and AI

As digital transformation kicks into high gear, insurers continue to deploy more complex machine learning applications to keep up with fast-moving InsurTechs and digital-first competitors. Third-party machine learning applications remove manual tasks, rapidly comb through raw data sets, and support insurers in their mission to deliver hyper-personalized customer journeys. However, with all the returns AI applications offer, additional risks and regulatory challenges must be addressed. Specifically, insurers must be confident that their own and their third parties' use of external data sources complies with all applicable insurance laws and regulations.

How to develop a model governance framework

As XAI represents the next evolution of AI, organizations should develop a governance framework to mitigate the risks posed by new AI technology. The development and operation of a model governance framework are well established in the risk management world, and organizational leaders can leverage existing practices to kickstart a process that works for their organization. Because every organization is susceptible to unique risks, insurers must tailor the framework to their specific risks.

To be effective, the framework must be used by insurers, must add regulatory value by deepening insight into risks, and must be verifiable by regulators. Insurers can start crafting a unique model governance framework by taking the following five actions (a brief illustrative sketch follows the list):

  1. Agree on the anatomy of the AI model (i.e., key components)
  2. Create consensus on key risks associated with Big Data and AI models
  3. Draft baseline requirements around each element for users to build on
  4. Draft procedures for Department of Insurance examiners
  5. Use and amend over time
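
As an illustration, the sketch below shows how the first four actions might be captured as a structured, per-model governance record that can be amended over time (action five). The field names, model name, and example values are hypothetical assumptions, not a prescribed NAIC format.

```python
# Minimal sketch of a per-model governance record; all names and values
# below are illustrative assumptions, not a required or standard schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelGovernanceRecord:
    model_name: str
    components: list[str]              # 1. agreed anatomy of the AI model
    key_risks: list[str]               # 2. consensus risks (bias, compliance, security)
    baseline_requirements: list[str]   # 3. baseline requirements around each element
    examiner_procedures: list[str]     # 4. procedures for insurance examiners
    revisions: list[str] = field(default_factory=list)  # 5. amendments over time

record = ModelGovernanceRecord(
    model_name="auto_claims_triage_v1",
    components=["training data sources", "feature pipeline", "scoring model", "decision thresholds"],
    key_risks=["bias in historical claims data", "state anti-discrimination rules", "third-party data security"],
    baseline_requirements=["documented data lineage", "annual fairness testing", "human review of adverse decisions"],
    examiner_procedures=["provide feature-importance reports", "demonstrate override and appeal process"],
)
record.revisions.append(f"{date.today()}: initial version approved by model risk committee")
print(record)
```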

Learn more about the changing regulatory requirements in the article No Time Off for Insurance Compliance.

Bill Sun

As Director of Compliance and Internal Controls for NTT DATA's licensed third-party administrator, Bill Sun brings his expertise as a licensed attorney and comprehensive background in compliance to help our clients meet and understand their regulatory obligations. His passion for compliance helps drive organizational effectiveness within the life insurance regulatory environment. Bill has helped build a comprehensive compliance and internal controls organization for the third-party administrator that our clients recognize as a key market differentiator.
