As artificial intelligence (AI) continues to transform the insurance industry, bringing unprecedented innovation, efficiency, and enhanced customer service, insurers and regulators alike must grapple with how to harness its potential while safeguarding consumer data and maintaining trust.
A 2025 survey conducted by the National Association of Insurance Commissioners (NAIC) across 16 states revealed that 84% of health insurers are already utilizing AI and machine learning (ML) in some capacity, underscoring the urgent need for structured AI governance systems to ensure compliance with current and forthcoming regulations.
AI systems will only continue to grow and evolve. It is therefore imperative that insurers establish a robust data governance framework to oversee these systems and manage risk effectively and responsibly.
Regulatory Concerns: The NAIC’s AI Governance Framework
The NAIC has taken a proactive stance, issuing governance principles and model bulletins to guide the ethical and effective use of AI in insurance operations. While these principles are not legally binding, nearly 25 states have adopted the NAIC’s model bulletin, signaling a shift toward enforceable standards. States including California, Colorado, New York and Texas have also enacted their own distinct AI regulations for insurers.
The NAIC’s model bulletin on AI governance outlines a comprehensive framework built on five core principles:
- Transparency: Insurers must ensure that AI systems are explainable and that decision-making processes are understandable to regulators and consumers.
- Accountability: Clear lines of responsibility must be established for AI-driven decisions, especially when outcomes affect policyholders.
- Fairness and Equity: AI must be designed to avoid discriminatory outcomes, with mechanisms in place to detect and mitigate bias.
- Privacy and Data Protection: Robust safeguards are required to protect sensitive consumer data used in AI models.
- Safety and Reliability: AI systems should be rigorously tested to ensure consistent and safe performance.
These principles are not merely aspirational; they lay the groundwork for future regulatory initiatives and work together to protect consumers and maintain industry integrity.
Industry-specific Considerations
Whether deployed by auto, home or life insurers, AI applications span underwriting, claims processing, fraud detection, and customer engagement, including chatbots and 24/7 virtual assistants. However, each use case introduces unique risks that must be managed carefully.
While AI can expedite claims processing, it must not compromise fairness or due process, especially in denial scenarios. AI models must be designed to avoid proxy discrimination, where seemingly neutral variables correlate with protected characteristics. This requires oversight to ensure that predictive models are not embedded with algorithmic biases that could lead to unintentional discrimination in underwriting or claims adjustment.
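To make the proxy-discrimination point concrete, here is a minimal sketch of one common screening step: checking how strongly each rating variable correlates with a protected attribute. The variable names and the review threshold are hypothetical, and real fairness programs use more rigorous statistical tests than a simple correlation.

```python
# Hypothetical sketch: flag rating variables that may act as proxies for a
# protected characteristic. Field names and the 0.5 threshold are illustrative.

def proxy_correlation(feature, protected):
    """Pearson-style correlation between a numeric rating variable
    and a binary (0/1) protected attribute."""
    n = len(feature)
    mean_f = sum(feature) / n
    mean_p = sum(protected) / n
    cov = sum((f - mean_f) * (p - mean_p) for f, p in zip(feature, protected)) / n
    var_f = sum((f - mean_f) ** 2 for f in feature) / n
    var_p = sum((p - mean_p) ** 2 for p in protected) / n
    return cov / (var_f ** 0.5 * var_p ** 0.5)

def flag_proxies(features, protected, threshold=0.5):
    """Return the names of variables whose correlation with the protected
    attribute exceeds the (illustrative) review threshold."""
    return [name for name, values in features.items()
            if abs(proxy_correlation(values, protected)) >= threshold]
```

A flagged variable is not automatically discriminatory; the point is to route it to human review before it is used in underwriting or claims models.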
A robust AI governance framework should prioritize the comprehensive management of algorithms and predictive models through meticulous inventory, documentation, interpretability and auditability. Such measures are crucial to maintaining transparency and accountability.
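As an illustration of what such an inventory might capture, the sketch below defines a minimal model-register entry with an append-only review log. The field names are assumptions about what a governance register could track, not a prescribed standard.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Illustrative model-inventory entry; fields shown are hypothetical examples
# of the documentation and auditability attributes a register might hold.

@dataclass
class ModelInventoryEntry:
    model_id: str
    owner: str                    # accountable business owner
    purpose: str                  # e.g., "auto underwriting risk score"
    data_sources: list
    third_party_vendor: Optional[str] = None
    last_validated: Optional[date] = None
    interpretability_notes: str = ""
    audit_log: list = field(default_factory=list)

    def record_review(self, reviewer: str, finding: str) -> None:
        """Append an auditable review event to the entry's log."""
        self.audit_log.append((date.today().isoformat(), reviewer, finding))
```

Keeping reviews in an append-only log is one simple way to make governance activity visible to internal audit and regulators alike.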
Additionally, many insurers rely on external AI providers, especially for pricing strategies. According to the NAIC’s Third-Party Data Models Task Force, state regulators agree insurers should retain full responsibility for the data and models they use, regardless of whether they are internally developed or provided by a third party. Therefore, it is critical to conduct thorough due diligence and establish contractual safeguards to ensure compliance with governance standards and further mitigate risks.
Building Efficient and Reliable AI Frameworks
Different types of insurance carriers employ AI in diverse ways, which means their AI governance frameworks might also vary from those outlined by the NAIC. Nevertheless, to meet regulatory expectations and operational goals, insurers need to invest in AI frameworks that are:
- Auditable: Systems should include logging and documentation to support internal audits and regulatory reviews.
- Bias-resistant: Regular testing for model drift and bias is essential. Many insurers now conduct equity audits and integrate human oversight into their AI decision-making processes.
- Secure and Compliant: Data governance must align with privacy laws, such as HIPAA and emerging state-level AI regulations.
- Scalable and Modular: AI architectures should be flexible enough to adapt to evolving business needs and regulatory changes.
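As one illustration of the bias testing described above, the sketch below monitors favorable-outcome rates across two groups using a disparate-impact ratio. The 0.8 threshold borrows the "four-fifths" heuristic from U.S. employment guidance as an illustrative cutoff, not an insurance regulatory standard.

```python
# Hedged sketch of a periodic bias check on model outcomes (1 = favorable,
# e.g., a claim approved). Group labels and threshold are illustrative.

def disparate_impact_ratio(outcomes_a, outcomes_b):
    """Ratio of favorable-outcome rates between two groups (always <= 1)."""
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

def needs_human_review(outcomes_a, outcomes_b, threshold=0.8):
    """Flag the model for human oversight when the ratio falls below threshold."""
    return disparate_impact_ratio(outcomes_a, outcomes_b) < threshold
```

Run on a schedule, a check like this can catch drift-induced disparities between audits and trigger the human oversight many insurers now build into their decision pipelines.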
AI Investment and Implementation Considerations
Cherry Bekaert’s 2025 CFO Survey found that a quarter of all finance leaders ranked AI integration among their top three concerns, with that number rising to 30% in the healthcare industry. The survey also noted that hesitation typically stems from uncertainty about how and where to implement AI and automation.
If your organization has not yet formally adopted AI or is looking to explore additional capabilities, there are several factors to consider and prepare for when planning an investment and implementation; 69% of CFOs said they will find more efficient ways to work over the next year through AI and automation.
As with any IT framework or technology investment, it is important to budget enough time, money and effort from start to finish, a need only amplified by AI’s novelty and rapid evolution. Project scopes often omit integration, data and compliance costs, leading teams to miscalculate the total cost of ownership, which should include development, maintenance, training, vendor management, startup and other hidden expenses.
Alternative implementation strategies should also be considered, such as scenario-based budgeting and development approval gates tied to delivering expected business value rather than to technical milestones. Moreover, accounting for insurance-specific regulations and other industry nuances reinforces the need to work alongside trusted advisors and vendors who understand the sector.
Strategic Recommendations for AI Governance in Insurance
To stay ahead, insurers should adopt a strategic approach to AI governance that not only fulfills compliance requirements but also fosters innovation and positions them as leaders in ethical AI deployment within the industry. These strategies include:
- Establishing AI Governance Committees: Cross-functional teams with legal, IT and operational expertise can oversee AI strategy, compliance and risk management, bringing diverse perspectives to bear on potential challenges.
- Investing in Explainable AI (XAI): Tools that demystify model behavior, by helping stakeholders and consumers understand the reasoning behind AI decisions, are essential for transparency and trust.
- Engaging With Regulators and Industry Advisors: Proactive dialogue with state insurance departments, participation in industry forums, and staying informed about regulatory trends can help shape practical and forward-looking AI policies.
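To illustrate what explainability can look like in the simplest case, the sketch below reports per-feature contributions for an additive risk score. The weights and feature names are hypothetical; real XAI tooling (SHAP-style attributions, for example) generalizes this idea to complex models.

```python
# Minimal sketch of explainability for an additive pricing score: in a
# linear model, each feature's contribution to the score can be reported
# directly. Weights and feature names below are hypothetical.

WEIGHTS = {"driver_age_factor": -1.2, "prior_claims": 3.5, "vehicle_value": 0.8}
BASE_SCORE = 10.0

def explain_score(applicant: dict) -> dict:
    """Return each feature's contribution to the risk score."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    contributions["base"] = BASE_SCORE
    return contributions

def risk_score(applicant: dict) -> float:
    """The score is simply the sum of its explainable parts."""
    return sum(explain_score(applicant).values())
```

Because the score decomposes exactly into its contributions, a regulator or consumer can see which factors drove a given decision, which is the transparency goal XAI tools pursue for more complex models.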
AI offers transformative potential for the insurance industry, but its deployment must be guided by robust governance, ethical principles and regulatory alignment. By embedding these values into their AI strategies, insurers can not only mitigate risk but also build a more resilient and customer-centric future.
Let Us Guide You Forward
It’s crucial to enlist a trusted advisor with extensive experience in the insurance industry and a deep understanding of the complex regulatory and risk challenges that insurance companies face. Cherry Bekaert’s Insurance practice understands the intricacies of insurance standards and helps you maintain financial integrity while adding value to your organization.
Our team of Risk Advisory and Cybersecurity professionals can assist your insurance organization in effectively complying with incoming AI regulations. Through the implementation of robust governance frameworks, AI risk management controls and internal audit functions, insurers can better mitigate risks associated with AI systems, promoting fairness, transparency and accountability within the industry.