Is Your Company’s Insurance Prepared for the Challenges of AI?
As artificial intelligence (AI) rapidly transforms business operations, it’s crucial for companies to assess their insurance coverage in light of the new risks that AI introduces. While AI holds immense potential, it also carries the possibility of unforeseen defects, errors, and misuse. If these occur, a company could face legal claims and substantial financial liabilities, and without adequate insurance, it may be left to absorb those losses on its own.
The Broad Scope of AI and Insurance Challenges
The field of AI is vast and constantly evolving, encompassing areas such as machine learning, deep learning, robotics, natural language processing, and expert systems. The broad scope of AI makes it challenging to find the right insurance policy because different types of AI pose unique risks and require specific considerations for insurance coverage. These considerations typically fall into the following categories:
Data Privacy Risks
AI technologies can be exploited by cybercriminals to generate sophisticated deepfakes, steal sensitive information, or scrape data from a wide range of sources, increasing the risk of privacy invasion. Furthermore, if these technologies are not adequately secured, unauthorized access or malfunctions can expose sensitive information, potentially resulting in severe legal consequences and fines. Class-action lawsuits alleging privacy violations have already been filed against major tech companies such as Alphabet and OpenAI.
Intellectual Property Infringement Risks
Generative AI products, such as ChatGPT, can create potential legal liabilities if they use copyrighted material, trade secrets, artwork, or writings without permission. This can lead to copyright infringement claims and potential legal battles for businesses employing the technology.
Physical Harm and Property Damage
As seen in the film “The Terminator,” AI systems can cause physical harm to users or damage property if they malfunction. This risk is especially relevant for companies that release robotic products designed to interact with consumers.
Impermissible Bias and Professional Errors
AI systems can be prone to bias, which can lead to discrimination and other forms of unfair treatment. This can subject companies to lawsuits and regulatory scrutiny. In 2023, the Equal Employment Opportunity Commission (EEOC) settled its first AI hiring discrimination lawsuit, highlighting the potential for AI to perpetuate bias. Additionally, companies and individuals relying on AI for business decisions or professional advice face increased legal liability if those decisions or advice are later proven to be erroneous.
Insurance Policies to Consider
Several insurance policies can potentially cover the risks associated with AI, including:
- Cyber Insurance: Cyber insurance policies may cover a wide range of cyber incidents involving AI, such as data breaches, ransomware attacks, and media liability.
- Intellectual Property Policies: Specific intellectual property policies are available to address intellectual property infringement issues.
- Errors and Omissions (E&O) Insurance: E&O insurance may offer coverage for negligence, errors, or omissions made while providing professional services and advice that rely on generative AI.
- Commercial General Liability (CGL) and Product Liability Insurance: CGL and product liability insurance are crucial for companies planning to release robotic products that interact with consumers, as these policies can respond to claims alleging bodily injury or property damage.
- Directors and Officers (D&O) and Employment Practices Liability (EPL) Insurance: These policies can protect company executives against certain claims arising from AI-related decisions.
Navigating the Uncertainties of AI Coverage
Given the novelty of AI technology, most existing insurance policies do not explicitly address AI-related issues, leaving some uncertainty over whether AI-related risks are covered. So far, however, courts have generally applied standard contract interpretation principles. For instance, the Seventh Circuit Court of Appeals in Citizens Ins. Co. of Am. v. Wynndalco Enterprises, LLC determined that standard contract interpretation applied to a business owner’s insurance policy, regardless of the use of facial recognition technology. If courts adopt broad interpretations of insurance coverage to include AI-related claims, insurance companies could respond by adding AI-related exclusions, creating lower sub-limits, or charging higher premiums.
Separate, AI-specific insurance policies may become common in the future, akin to the rise of cyber insurance, but for now they are not the norm. Given this uncertainty, businesses should adopt a two-pronged approach. First, they should identify the potential risks associated with their use of AI systems. Second, they should review their current insurance policies to determine coverage availability and limitations.
