New AI Regulation: How Will It Affect Insurance Risk Assessment in the AI Age?

The integration of AI into the insurance industry brings various challenges for risk assessment: ethical considerations, liability and accountability, bias and discrimination risks, data and privacy concerns, and technical obstacles. The proposed Artificial Intelligence Act (AIA) is intended to govern the use of AI across many industries, including insurance. Responsible AI development and collaboration among stakeholders are needed to produce effective regulations that align with societal values and goals.

An article by Lam Dieu Hien Phan

As AI has become mainstream, insurance companies worldwide have swiftly adopted these systems across their various processes and operations. Their application entails risks and threats that demand caution and consideration from insurers, and the introduction of the AI Act (“AIA”) to govern AI carries significant implications and challenges for the insurance sector. This post presents how AI is used for risk assessment in the insurance industry (1.) and the specific issues that arise from it (2.), followed by an overview of the potential regulatory tools that could be enacted through the AI Act (3.). Finally, I will assess whether the AIA could solve the existing problems (4.).

  1. The use of AI to evaluate risks in the insurance business

Long before “AI” became a buzzword, insurers were already feeding big data into risk assessment algorithms to support decision-making. With AI, these core functions of the insurance business, evaluating risks and making decisions, have been partially or entirely automated. This concerns in particular the insurance product (“insurance policy”): the legal contract between the insurer and the contract owner (“policyholder”). Under it, the policyholder pays a fee (“premium”) to the insurer in return for coverage of the subjects of the policy (“the insured”, such as the life, health, or assets of a person or entity). Data processing using AI impacts all three policy phases:

  • Pre-policy: To start with, insurers collect and assess customers’ information and risk before issuing policies (“underwriting”). The insurer creates a risk pool of policyholders with similar coverage and risk, and accordingly sets conditions and a price for each insured based on their risk level (“premium setting”). AI can improve this process through machine learning algorithms that analyse huge amounts of data, increasing operational efficiency, managing approval rates, and reducing application errors (a toy illustration follows this list).
  • Policy phase: AI tools can access and analyse claims data for claims assessment and deliver results as well as decisions. This allows the insurer, for example, to simplify claims processing. When the policyholder claims coverage for an insured event (an accident or a death), AI tools can also detect fraud by studying the data and finding patterns.
  • Post-policy: AI risk assessment can be used to profile customers, improve products and services, spot trends and patterns, suggest new products or services, or feed results back into the pre-policy phase for a new round of risk assessment.
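
To make the pre-policy step more concrete, here is a minimal, purely illustrative sketch of how a machine-learning classifier might score applicant risk and translate it into a premium. It is not any insurer’s actual model: every feature, figure, and the quote_premium helper are hypothetical.

```python
# Illustrative only: a toy risk-scoring model for underwriting.
# All features, data, and pricing figures below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: [age, BMI, smoker (0/1)] per past applicant,
# with label 1 = a claim occurred during the policy term, 0 = no claim.
X = np.array([
    [25, 22.0, 0],
    [40, 27.5, 1],
    [33, 24.0, 0],
    [58, 30.1, 1],
    [47, 26.3, 0],
    [52, 29.8, 1],
])
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression()
model.fit(X, y)

def quote_premium(applicant, base_premium=300.0):
    """Turn the predicted claim probability into a price ('premium setting')."""
    risk = model.predict_proba([applicant])[0, 1]  # P(claim) for this applicant
    return base_premium * (1 + risk)               # simple risk loading

print(quote_premium([35, 23.5, 0]))  # lower-risk applicant -> lower premium
print(quote_premium([55, 31.0, 1]))  # higher-risk applicant -> higher premium
```

In practice an insurer would train on far richer data, but the mechanism is the same: the model’s risk estimate directly drives who is approved and at what price, which is why the later sections worry about its errors and biases.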
  2. Issues arising from the utilization of AI

The implementation of AI in insurance risk assessment gives rise to various significant challenges encompassing ethical considerations, liability and accountability, bias and discrimination, data and privacy concerns, and technical issues.

  • Ethical considerations. AI can consider and assess many insurance situations, but only with the logic and reasoning techniques it was designed for. In situations that require complex decision-making and human judgement, such as matters of life and death, humans remain better suited than AI to make the right choice. For example, an AI tool may decide that a policyholder’s claim is fraudulent and refuse to pay for surgery even though the claim is genuine; the insured could then die or suffer serious health harm for lack of proper and timely treatment.
  • Liability and accountability. AI algorithms raise the question of liability and accountability. If algorithms are viewed as tools for engaging in wrongdoing, the person responsible for the system and for decision-making with those tools can be held liable, as current regulations appear to cover that scenario. If algorithms act as independent decision-makers without human intervention (a black box), current regulations do not seem to address this, which raises the question of who is responsible when AI makes a bad or illegal decision.
  • Bias and discrimination: Creating assumed or predicted information about individuals can lead to inaccurate predictions regarding their behaviour, health, or insurance risk. This raises concerns about fairness and potential violations of basic human rights, specifically the right to fair treatment, and it can further disadvantage vulnerable individuals. Insurers rely on AI for risk assessment, premium setting, and the rejection of high-risk customers; where that reliance rests on assumed or predicted information, the results can be unreliable and unfair. Using health data to assess risk and profile clients for life insurance, for instance, could exclude certain individuals or impose less favourable terms based on their medical conditions. Some risk differentiation is necessary, but too much could make insurance unreasonably expensive for some customers. AI may therefore lead to social discrimination (a simplified fairness check is sketched after this list).
  • Data and privacy considerations: In the context of data protection, the use of AI falls under the regulatory framework of the GDPR, particularly its rules on profiling and automated decision-making. Article 22(1) GDPR gives individuals the right not to be subject to decisions based solely on automated processing that produce legal effects on them or similarly significantly affect them. Furthermore, AI used for risk assessment and customer profiling may breach people’s rights to privacy and private life if insurers lack adequate safeguards and human oversight.
  • Technical issues: Implementing AI in the insurance industry poses technical challenges, including the potential for uncontrollable AI systems, such as:
    – misalignment risks: AI systems developing their own objectives and seeking to expand their reach in order to achieve them,
    – misuse risks: AI systems used unethically or maliciously,
    – accident risks: AI systems causing unintended harms that could have been avoided,
    – structural risks: AI systems changing the environment and incentives in subtle ways.
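
As a concrete illustration of the bias concern above, the sketch below computes a simplified “disparate impact” ratio: each group’s approval rate divided by that of the most-favoured group, flagged against the 0.8 threshold of the US four-fifths rule. The groups, decisions, and threshold are hypothetical, and a real fairness audit is considerably more involved; this only shows the shape such a check could take.

```python
# Illustrative only: a simplified disparate-impact check on approval decisions.
# The groups, outcomes, and the 0.8 threshold (the US "four-fifths rule") are
# used purely as an example of how such an audit could look.
from collections import defaultdict

# Hypothetical (group, approved?) pairs from an AI underwriting system.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved  # True counts as 1

rates = {g: approvals[g] / totals[g] for g in totals}
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "OK" if ratio >= 0.8 else "potential disparate impact"
    print(f"{group}: approval rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```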
  3. A brief overview of the AIA proposal

Given the preceding concerns, it is not surprising that the draft AIA seeks to regulate the use of AI in the insurance industry. The AIA’s risk-based approach requires businesses to determine the risk category of their AI systems and, among other things, carry out risk and operational assessments. The draft proposes to classify certain insurance AI systems as high-risk. This applies only to life and health insurance and to the three main activities that rely on risk assessment: premium setting, underwriting, and claims assessment (Annex III, No. 5(d)). The reasoning is that AI systems designed to make, or substantially influence, decisions about individuals’ eligibility for health and life insurance may have a significant impact on their livelihoods and violate their fundamental rights, for example by limiting access to health care or perpetuating discrimination based on personal characteristics (Rec. 37).

  4. Could the AIA adequately solve the problems?

It is too soon to say whether the AIA can fix these issues; more optimistically, it is a work in progress. The insurance industry deals with some of the most critical aspects of human life (life, death, valuable assets) and has a lengthy history of redlining and biased treatment in pricing, claims investigation, and payment. If AI is not used cautiously, it could perpetuate and worsen these problems. That is the rationale behind the European Commission’s consideration of adding life and health insurance to the high-risk list to promote safe AI.

However, this could be overkill for the business. Insurance is already a highly regulated and supervised industry worldwide. Being added to the AI high-risk list would entail further obligations, such as human oversight, transparency, cybersecurity, risk management, data quality, monitoring, and reporting. Placing the entire burden of ensuring AI behaves well on insurers would be both challenging and unfair. For example, if insurance companies are compelled to increase their spending on systems because of the new law, and the resulting costs are passed on to customers as higher premiums, policyholders end up paying more for the same products or services. From this perspective, the AIA does not seem to benefit general consumers to a great extent.

Are all these changes genuinely aimed at benefiting consumers? For instance, when AI systems seem unfair, how can an average user challenge an insurance company for discriminatory practices?

Suggestions and Conclusions

Despite the ethical and legal concerns surrounding AI in insurance, navigating the transition to a world with advanced AI systems is inevitable and necessary. To prepare for future changes in AI regulation, insurance companies should focus on responsible AI development and use. This includes measures such as oversight and management of AI systems, seeking guidance from policymakers, and harmonising standards for risk management. The introduction of the AIA is a necessary move, but cooperation among legislators, business leaders, and public-policy leaders is needed to develop effective regulations that align with our society’s values and goals.

Published under licence CC BY-NC-ND. 

This blogpost was written by Hien Phan.

    Hien Phan, a legal tech expert, holds an LL.M. in International Law from Bangor University and a Master's in IT Law from the University of Tartu. Passionate about Innovation, AI governance, Privacy and Data Protection, she brings her expertise to AB Inbev, where she currently works.
