Contributor:
Lauren Ross, Senior Manager | Cybersecurity Services
In the second episode of the AI Compliance series, host Lauren Ross is joined by Steve Ursillo, Partner in Cybersecurity at Cherry Bekaert, and Morgan Hague, Senior Manager at Meditology Services. Listen in as they explore the evolving landscape of artificial intelligence (AI) regulations, including the impact of the European Union (EU) AI Act and U.S. executive orders, and how organizations can proactively prepare for regulatory uncertainty.
The episode also covers what enterprises should look for when evaluating AI vendors, the changing role of procurement in assessing AI risk, and the most overlooked risks in AI systems today. Finally, they examine how compliance frameworks can help organizations mitigate reputational harm in the event of AI failures.
Tune in to learn more about:
- The impact of emerging regulations on global AI strategies
- How organizations can prepare for regulatory uncertainty and evolving compliance requirements
- Key compliance criteria and certifications enterprises should look for from AI vendors
- Overlooked risks in AI systems, from bias and privacy to shadow AI and automation bias
- How compliance frameworks and due diligence can help mitigate reputational damage from AI failures
LAUREN ROSS: Welcome back to the Risk and Cybersecurity podcast. I am Lauren Ross, a senior manager in Cherry Bekaert's Cybersecurity practice.
LAUREN ROSS: Today I am joined once again by Steve Ursillo, a partner in our Cybersecurity group, and Morgan Hague, a senior manager at Meditology Services.
LAUREN ROSS: Meditology Services is a top-ranked provider of information risk management, cybersecurity, privacy, and regulatory compliance consulting services exclusively for healthcare organizations.
LAUREN ROSS: Today, on the second part of our three-part series, we are discussing the drivers of AI compliance and how to navigate them. Steve and Morgan, welcome back.
LAUREN ROSS: One of the first drivers that comes to mind is regulatory pressure. Morgan, I want to start with you.
LAUREN ROSS: What impact do emerging regulations like the EU AI Act or the U.S. Executive Order have on global AI deployment strategies?
MORGAN HAGUE: It is really interesting. There has been a divide between the level of regulatory movement we have seen on the domestic front and how involved the European Union has been.
MORGAN HAGUE: Organizations must be prepared for regulatory shifts. The European Union has thrown down the gauntlet on AI, while other countries have largely allowed organizations to figure it out on their own.
MORGAN HAGUE: We have the EU AI Act. While it introduced many requirements, enforcement has continued to be pushed back.
MORGAN HAGUE: There is speculation that this is due to influence from large American companies like Meta. In the short term, if you are doing business in the American market, there does not seem to be any significant regulatory action specific to AI.
MORGAN HAGUE: This will likely be the same story for at least a couple of years because there is a big incentive on the geopolitical side to have American companies at the forefront of that conversation.
MORGAN HAGUE: Even in the EU, the AI Act has significant detail around the development of models, prohibited use cases, and data sourcing. However, the security language is fairly light.
MORGAN HAGUE: There are some general obligations you have to meet, but it is mostly focused on use cases and how you are sourcing your data. Again, enforcement has been delayed several times.
MORGAN HAGUE: As with any evolving regulation, such as what we colloquially call HIPAA 2.0 regarding the updates to the HIPAA Security Rule, organizations should start preparing now.
MORGAN HAGUE: If you operate in the European Union, you should be aligned with the AI Act and aware of what will need to change when it goes into full effect.
MORGAN HAGUE: In the U.S., executive orders do not hold the same weight, but it is always better to look at the signals and adjust as needed.
MORGAN HAGUE: There is significant investment in the AI space. If you go the wrong way and your model gets flagged or you receive a significant fine, that is not good for anyone.
LAUREN ROSS: You mentioned being prepared, and there have been many changes leading to a great deal of uncertainty.
LAUREN ROSS: Steve, how would you recommend companies prepare for this regulatory uncertainty, and what kind of things can they start doing now?
STEVE URSILLO: Many organizations already have programs in effect to look at both contractual and regulatory obligations. If not, they should, because this need is much broader than AI.
STEVE URSILLO: Given the fast-tracking of regulatory requirements and international influence, it is important to have a sound structure that allows you to be adaptive and agile.
STEVE URSILLO: An AI governance model should be structured so that accountability is clear, with executive management and leadership oversight.
STEVE URSILLO: A cross-functional AI risk and ethics committee should be established to help govern and drive decisions. These teams stay on top of different regulatory requirements.
STEVE URSILLO: This involves in-house staff monitoring current legislation or others tracking emerging issues that might require a longer runway.
STEVE URSILLO: When you empower a committee, you must make sure they have the tools to interject if there is high risk, allowing them to pause deployments and adapt quickly.
STEVE URSILLO: This includes identifying upcoming regulatory requirements, assigning ownership, and following typical risk management protocols.
STEVE URSILLO: Our last podcast discussed frameworks to identify risks across your AI stack. Assessments should define the likelihood, impact, and regulatory relevance to bring in proper control mitigation and reporting.
STEVE URSILLO: A major challenge organizations have is understanding the register of all their different AI models. This inventory must include data sources and any issues with lawful basis or ownership.
STEVE URSILLO: You cannot properly deal with risk mitigation or regulatory effects if you do not know the systems and data you have, where it is processed, and any sovereignty issues.
STEVE URSILLO: Governance also involves ongoing monitoring and triggers for deeper reassessments. This includes revisiting new models, data, or regulations, as well as the potential impact of incidents.
STEVE URSILLO: Regulatory obligations are becoming stricter regarding notification times. You must have the right capabilities built into your information security and cyber response programs to pivot based on AI use.
STEVE URSILLO: This culminates in a modular compliance approach using different frameworks, proper risk management measures, and oversight to stay on top of ongoing regulatory implications.
LAUREN ROSS: Beyond regulatory pressures, another component to consider is enterprise and vendor expectations.
LAUREN ROSS: Steve, what are the top compliance criteria enterprises look for when they are evaluating AI vendors?
STEVE URSILLO: We are seeing many more targeted questions now. Eighteen months ago, the questions were as simple as asking if you were using AI.
STEVE URSILLO: Now, there is a much more granular look based on risk levels and expectations from new frameworks and regulations. Organizations are dealing with this from the third-party risk management front.
STEVE URSILLO: They are looking at risk mitigation and asking how they can know if vendors are doing the things prescribed in ISO 42001 or the NIST AI Risk Management Framework.
STEVE URSILLO: Mature organizations now use a prescriptive runbook to understand where data is coming from, how it will be used in training, and what the boundaries and data flows are.
STEVE URSILLO: They want to know what safeguards are in place, such as protection against prompt injection. Prompts are the new zero trust; you must filter and govern them within guardrails to avoid injection.
STEVE URSILLO: Depending on how autonomous a system is, you must rely on a level of least privilege. Different aspects of that autonomy need to be monitored, safeguarded, and transparent.
STEVE URSILLO: Understanding how this affects your business and the nature of your transactions is very important.
STEVE URSILLO: Organizations look for certifications like ISO 27001 for security, ISO 42001 for AI governance, or SOC 2 reports designed to take AI systems and risks into account.
STEVE URSILLO: There must be strong proof of responsible AI. This includes transparency regarding testing for bias and fairness, minimizing drift, and AI red teaming.
STEVE URSILLO: Mature organizations are using AI-targeted red teaming where products evaluate attacks against the organization's policies. Continuous monitoring can also detect adversarial attacks or prompt injection.
STEVE URSILLO: Enterprises may also look for model cards to get a better understanding of exactly what the model does, the clear lines of data utilization, and ongoing maintenance.
STEVE URSILLO: This should not be a one-time annual audit, but a continuous governance approach. There is always a balance in making sure you are getting a certain level of depth and transparency.
STEVE URSILLO: Do not just check a box because a vendor has an ISO certification or a SOC report. You must understand the risk to your organization.
STEVE URSILLO: Understand the concepts of shared responsibilities and hold these vendors accountable for what they need to do to keep up their end of the bargain.
LAUREN ROSS: Morgan, what are you seeing? How are procurement teams evolving to assess this AI risk as part of their vendor onboarding?
MORGAN HAGUE: Responsible procurement and cross-functional teams are starting to educate themselves on how AI works and what a model actually means.
MORGAN HAGUE: Many people do not realize that the majority of AI capabilities are built on models from a very small selection of frontier labs. Most people use Claude or OpenAI for summarization or analysis.
MORGAN HAGUE: Very few organizations are developing internal models purely based on their own logic. If you do not know that, you cannot ask the right questions about security and data responsibility.
MORGAN HAGUE: Simply giving a green light without understanding these components can introduce a significant amount of risk. Third parties are currently the largest risk vector, and AI complicates that.
MORGAN HAGUE: We have seen a varied approach. Small teams might just update questionnaires to include six or seven specific questions around the AI model to evaluate whether a vendor is thinking about these areas.
MORGAN HAGUE: Beyond that, adding validation is key. This means requiring a certification or an attestation to prove the vendor takes AI security and data seriously.
MORGAN HAGUE: Ultimately, the best way to protect yourself as an organization is through contracts. This includes updating business associate agreements (BAAs), contracts, or service-level agreements (SLAs) to specifically codify the requirements around AI use.
MORGAN HAGUE: Some organizations initially put outright prohibitions on using their data in AI models, but the tone has shifted as they want to leverage AI-enabled service providers.
MORGAN HAGUE: You must put specific considerations in place, just as you would around your data subjects and privacy. If a model is compromised and data is exposed, you are still on the hook even if it belongs to your third-party provider.
MORGAN HAGUE: Many organizations have started moving into a cross-collaborative environment. It is no longer just a GRC team checking things off during procurement.
MORGAN HAGUE: If you have data leaders or a data science function, tapping them to look over what you are seeing from your vendor is critically important.
MORGAN HAGUE: Instead of diving head-first into these solutions, give yourself a proof-of-concept (POC) period where you can test these things out and see how the organization actually operates.
LAUREN ROSS: You both have given some great examples of the strategies being deployed to address AI-related risks.
LAUREN ROSS: Steve, what do you think are some of the most overlooked risks in AI systems today?
STEVE URSILLO: We could do an hour just on those components. Breaking risk out into different areas helps drive some of the focal points.
STEVE URSILLO: Regulatory risks include the auditability, explainability, and transparency of processing. Data protection is also impacted, such as PHI for healthcare, card processing for PCI, or CMMC for the defense supply chain.
STEVE URSILLO: You must ensure those risks are not introduced by the way your AI is handled. If decision-making involves bias or discrimination, you could be in a bad spot with applicable laws.
STEVE URSILLO: Technical risks include cyberattacks against AI models that can lead to data leakage. Adversarial attacks may involve prompt injection or model theft to skew a model's behavior.
STEVE URSILLO: As proof-of-concept use cases become apparent, you will see more of these attacks in the threat landscape. Using plugins or add-ins also creates dependencies across the AI supply chain.
STEVE URSILLO: You must safeguard across your model and anything brought in from a third party or interoperability through API calls.
STEVE URSILLO: Operational risks could have an impact on decision-making and process integrity. Attackers are using AI for more sophisticated phishing, vishing, or deepfakes.
STEVE URSILLO: Employee awareness is a vital safeguard to ensure assets are not improperly reallocated or stolen.
STEVE URSILLO: There is also a huge automation bias where folks start to rely on the AI more than they should. You must have safeguards and monitoring to validate the level of trust you put into the system.
STEVE URSILLO: Shadow AI is another real concern. While an organization has vetted systems, an employee might subscribe to a cloud AI to do their work, which opens up significant exposure.
STEVE URSILLO: You need systems that can detect the inappropriate use of AI behind the scenes. Additionally, if you are not monitoring the integrity and output of these models, model drift can occur.
STEVE URSILLO: Inaccurate information can continuously feed bad behavior to the model, and if that is not safeguarded, things can go way off the rails.
MORGAN HAGUE: I love the callout about people being overly dependent on these systems. As defenders in an organization, it is something very important to consider.
STEVE URSILLO: When you think about the social aspects, it goes further than just the enterprise. There is a responsibility for sites providing services for children or any generative AI that could be used inappropriately.
LAUREN ROSS: Let's say there is an AI failure. Morgan, how can compliance frameworks help mitigate the reputational damage associated with that?
MORGAN HAGUE: Reputational harm is a little bit different with AI. First and foremost, due diligence is a major player.
MORGAN HAGUE: Leveraging an existing compliance framework, whether that is NIST CSF, HITRUST, or ISO, ensures people are committed to a common language of security and privacy controls.
MORGAN HAGUE: Having a documented paper trail showing you leveraged these frameworks goes a long way with investors and business partners. If the Office for Civil Rights (OCR) sees that due diligence, it is a significantly different conversation.
MORGAN HAGUE: Signaling transparency to your clients, investors, and patients also goes a long way. For example, Anthropic appears to be very transparent regarding its security and privacy commitments.
MORGAN HAGUE: If you can signal your commitment to security by aligning with a framework like ISO or HITRUST, you demonstrate transparency to the market.
MORGAN HAGUE: If something bad happens, that transparency shapes perception. Instead of people thinking you were not taking care of what you needed to, it shows there was only so much you could control.
MORGAN HAGUE: That transparency can be the difference between surviving an event and closing up shop.
MORGAN HAGUE: Leveraging a compliance framework tied to an attestation or certification, like ISO or HITRUST, also helps. These allow people to check your work and make negotiations and assurance much easier.
LAUREN ROSS: Thank you, Morgan and Steve, for joining us again. Thank you to everyone listening to the Risk and Cybersecurity podcast.
LAUREN ROSS: Don't forget to subscribe and join us next time as we close out this series with the differences in managing internal versus vendor AI.