AI Security and Risk: Side-by-side Comparison of AI Compliance and Risk Frameworks
The rapid rise of AI is reshaping security and compliance, but what do the leading frameworks actually say about managing AI-related risks? In this expert-led panel, Tevora’s AI and compliance specialists break down key AI security and risk management frameworks, including ISO 42001, HITRUST AI Framework, and NIST AI Risk Management Framework. This session explores the latest AI-specific compliance requirements, highlights key differences and commonalities, and provides actionable steps for organizations looking to stay ahead of evolving regulations.
Key Takeaways:
- Specific requirements outlined by ISO 42001, HITRUST AI Framework, and NIST AI Risk Management Framework
- Differences and commonalities between these new standards
- Steps your organization should take to comply
If your organization is navigating AI risk and compliance, this discussion is a must-watch.
Hi everyone. We’re going to give everyone a few minutes to hop on before we get started. For those that have just hopped on, we’re giving everyone a second to join before we begin. Thanks for your patience.
Okay, perfect. I think we’re probably ready to get started. Firstly, thank you everyone for joining us today. We are super excited to share a conversation on a very important topic: navigating AI security and risk. My name is Charlotte Densham. I’m a senior manager here at Tevora, and I’ll be moderating this session today. Definitely looking forward to getting started. AI is a hot topic; there’s no denying that the proliferation of artificial intelligence has changed the way we understand security and risk in our organizations. That’s why many of you are here today to talk about it. Over the past few years, several compliance and risk frameworks have popped up to address these new threats and privacy concerns. So today we’ll be diving into three specific AI frameworks, but before we get started, I’d like to ask my colleagues and our three panelists here to give some brief introductions. Bhavin, I’ll pass it over to you to get started.
Thank you, Charlotte. Pleasure to meet everyone. My name is Bhavin Patel. I’m a manager here at Tevora. I oversee the ISO practice that encompasses artificial intelligence, information security, business continuity, disaster recovery and privacy. I’m happy to be here and talk about some of the ISO requirements for artificial intelligence. I’ll pass it over to Justin to introduce himself.
Hey everybody. My name is Justin Graham. I oversee the federal and healthcare practices at Tevora. These are compliance-related services such as HITRUST and HIPAA, and on the federal side, FedRAMP, StateRAMP and CMMC. I’m also serving in a vCISO capacity. Anir, I’ll go ahead and pass it to you.
Thank you, Justin. Everyone, thank you all for being here. My name is Anir Desai. I’m a senior manager over our Strategic Services tower here at Tevora. My team is primarily responsible for enterprise risk assessments, privacy engagements, third-party risk engagements, AI risk management, AI risk assessments and whatnot. Perfect, Charlotte, I’ll pass it back to you.
Thank you so much. I’m just going to go over a few housekeeping items here before we get started. Please know that if you have any questions for the panelists, feel free to add them into the Q and A box you should see at the bottom of the screen. We’ll do our best to respond to those live at the end of the presentations. However, if we’re not able to get through everything, please feel free to send those questions over to [email protected] and we’ll make sure to get responses for all of those. In addition, throughout our talk today, we will also be asking a few poll questions to better understand your situation and the ways AI has been impacting your business and your role. Hopefully, this is a great way for you to learn more about your peers. Please keep an eye out for those polls and participate. We’re actually going to kick it off and bring up the first poll for today. So again, feel free to join in on this poll. The first is around AI usage, really looking at what your biggest concern is for the use of AI, whether you’re using it internally within your organization, and then how you’d rate your current AI security posture. We’ve used the CMMI levels here: level one is really that initial stage, just getting started, through to level five, where you’ve really optimized and built out a great AI program. We’ll give everyone a few seconds to answer those and then we’ll share the results with the audience here.
Perfect. I would say we’ll go ahead and close the poll so we can head into some of the conversations today; you should be able to see the results there. As we go into the conversations, hopefully we’ll be able to address a lot of the concerns you raised, and of course, those frameworks should help you build out a program to really get to that optimized level. To kick things off, I’m going to pass it over to Bhavin, who will be covering the ISO 42001 framework.
Thank you, Charlotte. Before I start presenting on ISO 42001, one thing I want everyone to keep in mind is that there are going to be three separate topics we’re discussing. I’m going to talk about ISO, Justin’s going to talk about HITRUST, and Anir’s going to talk about NIST. But there are some overlaps and correlations and some differences. When we’re talking about each of the topics, focus on the content; at the end of the presentation, we’ll circle back on some of those differences and similarities. I think that’s really the heart of what we’re trying to achieve with this webinar, so keep that in mind. I know it might seem like a lot of content in the first part, but in the latter half we’ll circle back and put everything together for you. That being said, starting off with ISO 42001: I’m not sure if people have heard of it or not, but it’s been a hot topic over the last year. ISO 42001 was published in December 2023, so last year was the first year where a lot of organizations and companies started talking about it. If you look at Q4 of 2024, a lot of organizations, from Google to Facebook to Amazon, started publishing that they got ISO 42001 certified. So, what is it? Well, ISO 42001 is an international standard. International meaning that if you have clients in Europe or India or wherever, the processes are exactly the same. It’s recognized throughout the globe, similar to other ISO standards that you might have seen in the past. The ISO 42001 standard is focused on the artificial intelligence management system; you’ll hear AI Management System, or AIMS, for short. Essentially, it’s about creating a governance process around how you manage and use AI tools and technologies within the organization. There are also some technical aspects to it that we’ll dive into a little bit. So it’s not just a governance piece, but also a technical implementation of controls as well. The main purpose of the certification is so you can get a certificate at the end of the process. That certificate is crucial for your stakeholders, especially when you’re talking about your customers or the vendors you’re partnering with, similar to security certifications like ISO 27001 or SOC 2 reports that they ask for. A lot of your customers and clients are going to be asking for 42001, and that’s really the main reason to get the certification. It’s also a differentiator. If you think about it, say you have two competitors trying to pursue a client or a deal. Well, if you have ISO 42001 and the other doesn’t, you’re more than likely to have a better chance of getting that deal. That’s the high-level overview of ISO 42001. Now let’s talk about the audience and applicability. When we’re talking about the audience, who does it apply to? One of the beautiful things about ISO is that it’s comprehensive. It’s applicable to any organization of any shape or size. You could be a startup, or you could be a multi-billion-dollar organization. You could be in any industry; it’s not industry specific. It could be FinTech, it could be healthcare, it could be financial institutions. The requirements are exactly the same, but that does lead to some complications: implementing a control for a multi-billion-dollar company might look a little bit different than for a startup. One of the things that ISO does do is help you tailor that.
For example, the requirement might be the same, but how it impacts your company and organization might be a little bit different. You can change the applicability to meet your industry and your size, but the requirements and the processes are exactly the same. That’s one of the beautiful things about the certification. We’ll talk a little bit on the next slide about some of the processes. But one thing to note before we get into that is that the certification applies to any organization that uses any AI technology. You could just be using publicly available AI tools and technologies like ChatGPT or Copilot, etc., or you could be developing your own AI tools and technology. The framework tailors for any of those sub-cases as well. It’s not specific to any one kind of organization. You’ll see this is a little bit different when Justin talks about HITRUST; the applicability there is going to be a little bit different as well. Now, jumping into the actual process of achieving 42001 on the next slide: if you have already achieved 27001 or seen any other certification process, the process is super similar. You always want to start off with the readiness assessment. The readiness assessment is a gauge of where you currently are at a point in time, and where your gaps are against the requirements. Once you’ve done the readiness assessment, identify the gaps you want to resolve, prioritize them and fix them. Then once that’s done, the next piece of it is the actual risk assessment. The risk assessment is going to be critical. Anir is going to touch a little bit more on this later, but the risk assessment is the foundation of ISO 42001, as it was in 27001, so organizations are going to have to do a robust risk assessment and impact assessment. Once you’ve completed your risk assessment and impact assessment, the next stage is to do your audits, both the internal and external audit. Those are going to have to be done by accredited organizations that have the right expertise to do it. Then once you complete your external audit, you get your certification issuance, similar to previous ISO certifications. Your certificate is valid for three years, and you do have to do annual assessments against it just to maintain the certification. One of the questions that a lot of people ask is, how long does it take to get certified? Anywhere from six to 18 months is generally what we see, depending on the maturity of the organization. Now that we’ve talked about that, as you can see on the screen, let’s talk a little bit about the structure. The structure of 42001 is really similar to 27001. You have your management clauses; these are mandatory core governance requirements. You have to have these in place. Then you have your Annex A; these are your control objectives that are more policy, procedure and technical in nature, related to AI tools and AI technologies. These are going to be based on your risk assessment. So, earlier I mentioned risk assessments: based on some of the risks you identify, you’re going to have to pull these requirements into scope to help mitigate some of those risks. One of the things that ISO is notorious for is that it tells you to do something in the requirements, but it doesn’t provide guidance. Well, they kind of solved that with Annex B. If you look at Annex B, it’s the guidance on how to implement a control, and it gives you recommendations of what to consider.
If they’re saying, hey, you need to create an AI policy, what are some of the things you should consider as part of that? There are also Annex C and D. These are more informational in nature, not mandatory for the standard, but they are available to help you understand how the AIMS is used across different sectors and domains. That’s the structure at a high level. Now we’ll dive into some of the actual requirements. If you see here, on the left-hand side are your core management clauses that we spoke about a little bit earlier, and on the right-hand side are those technical Annex A requirements. On the left-hand side, clauses four through ten mirror the 27001 security framework. All of it is really understanding who and what your AI tools and technology are: understanding the context of your organization; who cares about AI within your company, both internally and externally, from a legal and contractual point of view, et cetera; from a leadership point of view, who’s responsible for oversight and leading all the initiatives, from planning those initiatives, to supporting those initiatives, to making sure they’re operational, to evaluating those initiatives; and then ISO’s core fundamental, improvement. All of this is really the governance piece that creates a framework that any AI tool or technology within the organization should be following. That’s going to be the meat of it, where a lot of organizations are going to have a lot of work to do. Every organization is going to create this framework. On the right-hand side are some of the technical requirements. I’m not going to go into every single technical requirement, but you can see here you have technical items spanning from creating policies and procedures, to software development, to data protection. If you’re developing an AI tool or technology, how do you make sure that the data that’s being used or trained on is protected and secure, etc.? From a software development point of view, are developers allowed to use AI tools and technologies to help write code? Everyone is using Copilot to help write code; what risks does that introduce, those types of things. And then even from a third-party and customer relationship standpoint, how are you handling incidents? If you are sharing data with a customer or vendor and something happens from an incident point of view, who’s responsible for it? What do you do? How do you handle it? Other aspects here are going to be intellectual property rights. If an AI tool is creating new data, generating new data, is it the property of your organization? Is it the property of a different organization? All of those are different things that organizations will have to consider as part of their risk assessment, and then implement these controls to help mitigate some of those risks. On the next slide, I’ll pick out four or five different things that we’re really seeing as the main factors across organizations. The first thing we’re seeing is obviously the artificial intelligence management system. Any organization that wants to meet the ISO 42001 standard will need to create an AIMS policy. The AIMS policy, as I mentioned, is the core governance process in terms of who’s responsible, what’s your scope, what are the leadership requirements, how do you approach new initiatives, how do you comply with those initiatives, etc. Every organization is going to need to create an AIMS policy, standard and framework.
Organizations are going to have to do a risk assessment and impact assessment. This is going to be similar to, or an add-on to, your information security and privacy risk assessments, but the focus of this is going to be more on AI assets, AI tools and AI risk. As a part of that, you’re going to have to do asset discovery for your AI tools and technologies. That’s going to be a critical step a lot of organizations are going to have to go through. From a data security and data protection point of view, organizations are going to have to start identifying the different types of data and their classifications, and identifying any DLP (data loss prevention) solutions to make sure they’re protecting data per those classifications. Then you’re going to have to create policies and procedures for what users can do with the data. If you’re using a ChatGPT or a Copilot, what type of data can be used on those versus what type of data shouldn’t be used on them? If you’re training models on certain types of data sets, then how do you actually train them? What data can be trained on, those types of things. These are the core fundamental requirements from ISO 42001 that I wanted to focus on. There are a lot of other items, but I really just wanted to focus on these. Now I’ll pass it over to Justin to talk a little bit about some of the HITRUST aspects. Justin, over to you.
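To make that acceptable-use point concrete, here is a minimal sketch, in Python, of how a data-classification gate for AI tools might be encoded. Everything in it (the classification levels, the tool names, the per-tool ceilings) is a hypothetical illustration, not language from ISO 42001:

```python
# Hypothetical sketch of a data-classification gate for AI tool usage.
from enum import Enum

class Classification(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

# Per-tool ceilings: the highest classification a tool is approved to receive.
# Tool names and levels are illustrative assumptions.
AI_TOOL_POLICY = {
    "chatgpt-consumer": Classification.PUBLIC,      # public web tool
    "copilot-enterprise": Classification.INTERNAL,  # enterprise agreement in place
    "internal-llm": Classification.CONFIDENTIAL,    # self-hosted model
}

def may_send_to_ai_tool(tool: str, data_class: Classification) -> bool:
    """Return True if data of this classification may be sent to the tool."""
    ceiling = AI_TOOL_POLICY.get(tool)
    if ceiling is None:
        return False  # unapproved tools are denied by default
    return data_class.value <= ceiling.value

# Example: restricted data may not go to a consumer chatbot.
assert not may_send_to_ai_tool("chatgpt-consumer", Classification.RESTRICTED)
assert may_send_to_ai_tool("internal-llm", Classification.CONFIDENTIAL)
```

The design choice worth noting is the default deny: a tool that isn’t on the approved list gets no data at all, which mirrors the inventory-first approach the panel returns to later.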
Cool. Thanks, Bhavin. All right, so the HITRUST AI security certification. It’s one of the first AI certifications on the market, and it was developed with input from AI industry experts; a lot of thought went into this. It is a framework, and you have to really understand the HITRUST CSF framework, as it’s built on literally dozens of other security frameworks, and they tailor the HITRUST CSF around the most relevant security controls, really hand-chosen from this larger subset of security frameworks like NIST and ISO. This is a security assessment; I think that’s very important to note. With the HITRUST assessment process, it is possible to do a HITRUST AI risk assessment as well, but that does not end in a certification report on the risk side. The certification effort is security-only. In its simplest terms, the AI security assessment offers very prescriptive and relevant AI security controls, a means to assess those controls, the ability to inherit security controls from AI solution providers, and very reliable reporting that can be shared with your internal or external stakeholders. As to the why, what we are really seeing in the market is that organizations are going down this route for two reasons, primarily. One is listed here: it’s a market differentiator. We want to distinguish our AI product from the rest on the market, to show that we have implemented security controls around this AI technology to protect it. And likewise, this framework itself is built on all these other ones. Bhavin mentioned ISO 42001; that’s a big piece of it, but it’s going beyond that, looking at other frameworks and really pulling those in as well. It’s a great way to stand up, do an assessment, and see from a security lens how your AI product is really standing up. A couple of key features are mentioned on this slide. A comprehensive control set: again, it’s up to 44 controls, specifically designed to address security around these AI platforms. I’m going to go into a little bit more detail about how these are tailored; these controls are tailored to the specific AI technology that you have deployed. A rigorous assurance mechanism: HITRUST has a very robust quality assurance program, and every certification report gets reviewed by HITRUST before they will issue it. Proactive threat adaptation is by design and ingrained into this framework: the framework evolves over time, actually on a quarterly basis. AI is a rapidly evolving technology, and the AI-specific threats are also evolving; in order to keep up, they do a really good job of keeping the framework very relevant, and they’re very proactive about it.
Next slide: really, who is this for? This certification is for providers of AI technologies. It’s important to note that HITRUST certifies systems, not organizations. This certification is for AI platform providers and AI product providers. At the simplest level, your system has to use an AI model to qualify for the certification. So, if you’re just using, let’s say, an AI tool to take meeting notes or something like that, that’s not something that’s really certifiable. It has to be a deployed AI system to be eligible for this certification. The assessment process: this is not a standalone assessment. You can’t go to HITRUST and just say, hey, I just want to get my system assessed and get this new AI security assessment; that’s very important. It must be combined with a traditional HITRUST certification effort. I don’t want to go into too much detail here, but they’ve extended their portfolio. There are three certification types; I like to think of them as small, medium, large, with the e1 being the more foundational certification and the r2 being the gold standard. This is an add-on to a traditional HITRUST certification; there are a lot of questions around that. It adds an additional 44 requirements right now to the assessment, and it’s tailored very specifically around your AI system. There’s an example here of some AI tailoring questions that are asked: what type of AI models are used? Based on whether it’s rule-based AI, predictive AI or gen AI, there might be different security controls you would have to assess against. It’s very adaptive to your specific environment. And you’ll see some other examples here: was covered or confidential data used to train the models? The answer to that will drive what requirements are in place. This is a very tailored assessment to your specific implementation of your AI technology. Just a couple of examples of what this touches on, and again, it is very specific around security. You’re going to see some examples within the framework of having an AI policy and governance program in place. It’s going to touch on change management: how are we managing changes to our AI models? How are we pen testing our AI models? AI model training data protections, access controls specifically for training data and testing data, validating AI models. This goes well above and beyond the traditional kinds of security assessments, and it is very specific to AI-related requirements. That’s just a really high-level overview of the new HITRUST AI security certification. If you have any questions, please drop them in the chat. But with that, Anir, I think we’ll pass it to you.
Perfect. Sounds good. Thank you, Justin. All right, so getting started with the NIST AI RMF. Most of you may be familiar with the traditional NIST RMF. NIST released the AI RMF in January 2023 as a voluntary framework to help organizations identify, measure and mitigate AI-related risks. Unlike what Justin and Bhavin covered, this RMF is not to be confused with a compliance framework; it’s a risk-based approach. So unlike ISO 42001 and HITRUST AI, the NIST AI RMF doesn’t really enforce strict controls, but provides guidance for managing AI risk responsibly. It’s aimed at AI stakeholders: developers, users, executives; anyone that’s involved in AI decision making can benefit from this framework. The core focus of this framework is to build trustworthy, responsible and resilient AI systems by assessing and mitigating risks across the AI life cycle. It aligns with some of the ongoing global efforts that you may already know of, such as the EU AI Act, the US executive order on AI, and then any sector-specific regulations; think healthcare, finance, etc. So now we’ve covered, from a high level, what the NIST AI RMF is. Let’s jump into why it is important. So why is it important, right? Many organizations are using many types of AI, whether directly or through a third party, fourth party, fifth party, you name it, and there are multiple types of AI threats and risks. As complex and evolving as it gets, issues such as bias, explainability, security vulnerabilities and compliance challenges emerge within the specific industry that you’re operating in. Now, you combine that with the regulatory momentum (the EU AI Act, the executive order, China’s AI regulations) and it can become a lot to consume, right? But the regulations, alongside the RMF, are pushing towards proactive risk management, so there is a bright light on the horizon there for mitigating some of the operational risks. AI failures can lead to reputational damage, regulatory fines and operational disruptions. Examples of this can be bias in hiring algorithms, faulty financial models, and then one of the rising ones I’ve seen is AI-driven customer service. I’m sure some attendees here have listened to some of the AI-based customer service bots that are out there now, helping organizations take that AI approach in customer service, doing intake and managing customer relations. Lastly, understanding the urgency: most of us know this, but the accelerated speed of technology, AI or otherwise, has an impact on security and privacy compliance. I know one of the poll results showed data privacy being one of the main topic areas folks are concerned about. AI-driven data processing introduces new risks; think about regulations like GDPR, HIPAA and CCPA. The impact is cross-functional, affecting legal teams all the way down to product-level teams, and having that translation model in place really becomes key. And that’s where the RMF becomes a little bit more important. Perfect. How does the NIST AI RMF work? If you’re familiar with the normal NIST RMF, there are four core functions of the AI RMF: govern, map, measure and manage.
Govern establishes accountability and oversight for AI systems, much like the new NIST CSF govern function, focusing on things such as policies, roles and responsibilities, and how AI risk management fits into the overall risk management program for an organization. Map focuses on identifying risks and their impacts, such as security, bias and reliability, and the specific use cases, identifying some of the AI-related risks. The third one here, measure, focuses on how you address and monitor AI risks, such as the severity and the model behavior; you can take a quantitative method or a qualitative method. And then lastly, manage: implementing mitigation strategies and having continuous monitoring in place for some of these AI-related risks that are rising up. It’s designed to be flexible and scalable. It’s not a compliance-related requirement; it’s truly foundational to build your AI program off of. It can be adapted to different industries, different levels of AI maturity, and different layers of the regulatory landscape as well. And lastly, I want to mention that it is interoperable with existing frameworks, so it can be integrated with ISO, it can be integrated with HITRUST AI, it can be integrated with NIST 800-53, and then the SOC 2 AI controls as well. Now jumping into how and why the RMF may apply to organizations: who should really care about this? Companies deploying AI in high-risk areas: finance, healthcare, HR, security, you name it. Organizations preparing for AI regulations, such as the EU AI Act, the FTC guidance on AI, the upcoming US AI rules that are on the horizon here, and then the various state-level AI rules that are out there as well. And then lastly, CISOs, risk managers and compliance teams, as AI introduces new security and privacy related risks. Traditional frameworks don’t necessarily cover how to manage these risks; rather, they cover how to meet this requirement. So that’s where the RMF applies to organizations. It helps improve AI governance and the accountability aspect of things, reducing compliance-related risk and building trust. For example, if you’re procuring AI models from vendors, the AI RMF helps assess the risk that is imposed on your organization from that model. It also helps with aligning to the other AI regulations, helping in the cross-functional decision-making aspect of security, compliance, ethics and business, forming a holistic AI strategy, and providing a roadmap for how you can meet some of the regulations. So what should you do now, now that you know some of the ISO-related requirements, HITRUST AI, and the NIST AI RMF? The first thing that Tevora has helped organizations do is assess their AI risk management maturity: identifying AI systems in use; assessing their risk exposure and severity across privacy, bias and legal; and then evaluating the vendor AI models as well. What controls are in place for third-party AI providers? Helping your organization feel confident if you are leveraging some of the third-party AI providers that are out there, and establishing AI governance and oversight, so having clear insight into who owns AI risk (many organizations currently don’t have that ownership), defining AI policies around things such as acceptable use, data privacy and security, and then assigning AI risk officers or AI governance committees.
They can be a functional form of the committees that you may already have, such as your risk committees or your risk officers, but now carving out a separate space for AI as things evolve and mature. And then lastly, mapping AI-related risk to existing areas: if you’re ISO certified, consider ISO 42001 for AI governance; if you’re HITRUST, implement HITRUST AI; and if you’re a NIST CSF shop, integrate the AI RMF into your security practices as well. Now, tying it all together, this is my favorite graphic here to showcase, side by side, what Justin covered, what Bhavin covered, and what I covered on the RMF side. There’s the focus, whether it’s mandatory, the industry scope, the structure, the risk focus, and ultimately who should use it. Covering the RMF side: it’s flexible and non-prescriptive, and knowing that is going to be very key compared to ISO and HITRUST, which are control-based frameworks. Is it mandatory? No, it’s voluntary, but highly recommended, because as you start integrating AI, there are going to be risks that pop up. And then ultimately the scope: any organization using AI. It’s risk based; you can map, measure, manage and govern. The structure is AI risk management, and trustworthiness ties into the risk focus here, similar to ISO’s AI governance and compliance, and the AI security, privacy and risk controls for HITRUST AI. Then lastly, who should use it? Organizations prioritizing AI risk management and best practices, similar to ISO and HITRUST for those seeking structured AI governance. Then ultimately, in the middle there, you see where the certification piece is applicable.
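As a concrete illustration of the four functions just described, here is a hedged sketch of one way an organization might track a single AI system against govern, map, measure and manage. The function names come from the AI RMF itself; the record structure, fields and example values are invented for illustration:

```python
# Illustrative only: one way to track a single AI system against the four
# AI RMF functions. The function names come from the framework; the record
# structure, fields and example values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AiRmfRecord:
    system: str
    govern: dict = field(default_factory=dict)   # accountability and policy
    map: dict = field(default_factory=dict)      # context and identified risks
    measure: dict = field(default_factory=dict)  # how each risk is monitored
    manage: dict = field(default_factory=dict)   # mitigations in place

record = AiRmfRecord(
    system="resume-screening-model",
    govern={"risk_owner": "AI governance committee",
            "policy": "AI acceptable use policy"},
    map={"risks": ["hiring bias", "PII exposure in training data"]},
    measure={"hiring bias": "quarterly disparate-impact testing"},
    manage={"hiring bias": "human review of all automated rejections"},
)
```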
One thing to note on this slide is that it’s really an integrated process. Before you can do HITRUST certifications or ISO certifications, you kind of have to go through your risk assessment, using the NIST AI risk management framework or any other framework that’s out there. What we’re seeing with a lot of our organizations is that they’re consolidating or unifying a lot of these activities together. If your organization already has HITRUST or ISO, it eases the conversation as you pursue all of these; you’re like, hey, ISO applies to me, HITRUST applies to me, but I don’t have a risk assessment. We’re seeing a lot of our organizations unify a lot of these items. ISO and NIST are industry agnostic, so any scope applies; HITRUST might apply in certain cases or not. So just keep that in mind as we’re having these conversations: a lot of organizations are not just picking one, they’re doing all of it. There are new regulations coming as well, and I think as those new regulations or new frameworks come, organizations will start to have those conversations too. Just keep in mind, these are the three that we focused on, but there are also multiple that are coming: new laws, new regulations, new frameworks.
I was going to bring up one of the questions we had: is this one certification or multiple different frameworks? Hopefully this table shows you the different frameworks and, as Bhavin mentioned, how they can be integrated, but they are separate, different frameworks: HITRUST and ISO being compliance based, NIST being that risk-based framework. I’ll lead us off here in terms of what Tevora has seen, and then I’ll pass it to Justin and Bhavin to chime in. My team has successfully helped organizations focus on security controls first; mapping them to frameworks should come after. This helps with ownership and accountability for AI risk, and keeps organizations away from some of that blind trust in third-party AI models. You can start with an AI-based risk assessment to tee this up, and then truly identify your gaps and the controls that apply to you. Rather than approaching multiple frameworks individually (there is going to be overlap depending on scope), developing, or having Tevora create, a singular security controls list has helped organizations get the full picture in one take, rather than jumping around and seeing for yourself where the overlap may be: having a common set of controls that apply to your organization from ISO and from HITRUST, and then having the RMF be the glue that brings it all together. There’s also the EU AI Act; there’s a lot of language and a lot of text in each of these, so having the translation between the security controls and what they truly mean and how they impact your organization is going to be key. And then, conducting an AI business impact analysis: know and prepare for your worst-case scenario. How is this introduction of AI truly going to impact my organization? What can it break? And what is my resiliency to something breaking? That’s definitely something we’ve seen as of late, even at a very early stage: we’re approaching this, we’re thinking about this, what is the impact on the business, my objectives, and some of the critical business operations? Think product teams, think application-related teams: how is this going to impact their day to day, how is this going to fit into their model, and what are the risks it’s going to introduce to their functional areas? Justin, Bhavin, anything to add in here?
Yeah, one thing I’m also seeing, and it kind of ties in with the impact assessment and the risk assessment, is asset discovery. One of the biggest things that you have to do before you do a risk assessment or impact assessment is figure out: what are your assets? Where are they? What type of data do you have? All of those things. It’s asset management and data classification and data discovery. A lot of organizations right now are considering how to do that. Are there any tools or technologies out there that can help with that? How do we actually approach it? So, one of the things that organizations need to start having conversations about is: where is your asset management system? What are you using? Does that cover the scope of all of your technology? A lot of times, what we’re noticing is that asset management is just covering physical hardware or virtual hardware; it’s not covering data and information. As AI technology grows and develops, the data that’s used within AI is going to be really critical, and that’s an asset. You’re going to need to identify where that data is, how it’s used, and how it’s protected. That’s going to be a longer initiative. We’re seeing organizations approach this in different ways, whether it’s using automated tools or manual processes, but a lot of organizations right now are having that conversation: before we do anything AI related, let’s actually try to identify some of our assets. That’s the biggest conversation that we’re seeing as well, and it will feed into all of the other processes that we talked about in terms of developing a security program and having a risk assessment and impact assessment, etc. I just wanted to throw that out there; that’s another hot topic we’re seeing pretty often.
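A minimal sketch of the kind of inventory record Bhavin describes, one that treats datasets as first-class assets alongside models and tools, might look like the following. The schema and example values are assumptions for illustration, not a standard:

```python
# Hypothetical schema for an AI asset inventory entry that treats data
# as an asset alongside models and tools.
from dataclasses import dataclass

@dataclass
class AiAsset:
    name: str
    kind: str                 # "model", "tool", or "dataset"
    owner: str                # accountable team or person
    data_classification: str  # ties into the DLP discussion earlier
    location: str             # where the asset or its data lives
    third_party: bool         # provided or hosted by a vendor?

inventory = [
    AiAsset("support-chatbot", "model", "customer-success",
            "confidential", "vendor-hosted", third_party=True),
    AiAsset("chatbot-training-transcripts", "dataset", "customer-success",
            "confidential", "internal data lake", third_party=False),
]

# A simple discovery check: flag models whose owning team has no tracked data.
dataset_owners = {a.owner for a in inventory if a.kind == "dataset"}
untracked = [a.name for a in inventory
             if a.kind == "model" and a.owner not in dataset_owners]
```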
I’ll add one thing to that, Bhavin; it’s kind of related. Looking at the poll questions that went out, there are a lot of organizations out there just starting down this path. And I think you really nailed it there with identifying where the AI technologies are in use in your organization. Really, one of those initial steps is setting up that AI governance: what is our policy on the implementation of AI? For an organization that’s just starting down that path, those two items are very important.
Just to add on to what you’re saying, Justin: a lot of organizations are thinking of AI as a separate tool or technology. AI has been around for decades; it’s just the advent of new chips and hardware that allows us to do things we previously couldn’t. That being said, one of the things I want everyone to keep in mind is that AI tools and technologies are technology at the heart of it. All of the security concepts and programs that organizations have created still apply. All of your privacy concepts and programs still apply, right? So think of your security and privacy programs as extending to cover the additional AI systems. Don’t think of it as a completely siloed activity. Think of it as a holistic triangle, where you have security, you have privacy, you have AI within that as a piece of it, and then you have availability. All of those things that you historically have covered will still apply to your AI systems; the way that looks and feels is just a little bit different. I know Justin was talking about threat testing; you’re still going to have to do penetration testing. The risks and the issues you find might be a little bit different, but from a security, privacy and availability point of view, all of those same concepts you already have will be a good starting point for you. A lot of companies think, I don’t know what my AI tools or technologies are, but all of your existing security and privacy programs are the starting fundamentals for all of those AI tool and technology controls that you want.
I see we have a question that I actually want to answer live here within our Q and A. The question is: do you recommend a company implementing a standalone AI management tool for use cases, inventorying and risk tracking, or integrating these into existing enterprise tools? From a risk perspective, I think it truly depends on how mature your risk management tool, or your risk management program, is. If you have a specialized AI risk management case that exceeds the capabilities of the existing enterprise tools, then a standalone may make sense. But the standalone also comes with challenges, such as potential duplication of risk tracking between the enterprise function and the standalone AI function, and then resource constraints as well; managing two separate areas can play a huge factor. Now, if you want to integrate, I think that’s a great option, but you would have to have a well-established risk management program to do so. You have to have tooling; you can use ServiceNow, OneTrust, you name it. If you’ve already been using some of these, then aligning the AI risk management workflows into the broader workflows makes sense. But there too, you would need to have AI-based capabilities within those tools, such as tracking and whatnot, that can help establish the integration.
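As a rough illustration of the integration option Anir describes, extending an existing risk register entry with AI-specific fields rather than standing up a second tool, consider this sketch. All field names are hypothetical; platforms like ServiceNow or OneTrust have their own schemas and APIs:

```python
# Sketch of folding AI risk into an existing enterprise risk register by
# extending the record, avoiding a duplicate standalone tracker.
enterprise_risk = {
    "id": "RISK-2041",
    "title": "Vendor chatbot may expose customer PII",
    "owner": "ciso-office",
    "likelihood": 3,  # 1-5 scale, illustrative
    "impact": 4,
}

# AI-specific extension fields: one register, a few extra columns.
ai_extension = {
    "ai_system": "support-chatbot",
    "ai_risk_type": "data privacy",
    "model_provider": "third-party",
    "rmf_function": "manage",  # where this sits in govern/map/measure/manage
}

enterprise_risk.update(ai_extension)
```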
All right, anything else that you want to add in here, Justin, before we move on? I think I’m good. Thanks, everyone, for attending. We’ll take questions and answers at this point in time.
Perfect, and before we jump into the questions, we do have one last poll for the audience, just to continue building that understanding of the impact across organizations. Feel free to provide responses there. The first is: have you already updated your information security policies to address AI security and privacy risks? A lot of what we’ve talked about today is really building out that program. Then secondly: have you seen changes in your customer assurance questionnaires and requests in regard to AI? Justin and Bhavin went through a lot of the compliance frameworks for artificial intelligence, so we’re wondering whether your customers are bringing that up as part of their security questionnaires as well. We’ll give everyone a few seconds to answer, and then we’ll jump into the questions.
Okay, let’s go ahead and close the poll here, and we can take a look at those results.
Perfect. It’s great to see that 57% have already kicked off and started updating their information security policies. We know it’s a heavily evolving area, and we’ve received a lot of interest in it, so it’s good to know people are pushing towards updating. And a good response that 56% have said they are seeing AI come up in their customer assurance questionnaires as well. Thank you, everyone, for participating. All right, so in terms of the Q and A here, the first question we have is, let me pull it up here: are there other AI-related frameworks that have been introduced? If so, what are they?
Yeah, I can start off, and Justin, Anir, feel free to jump in. One of the things to keep in mind is that all of this new AI compliance and security is a hot topic right now. Every day, every week, every month, there’s a new framework or piece of legislation coming out. In terms of what’s public and available right now, HITRUST and ISO are the main ones, but there are other frameworks coming. If you’re familiar with the Cloud Security Alliance and CSA STAR, they’re actually in the process of finalizing their AI controls matrix. That’s for customers that have AI tools and cloud service providers; they’re in the process of going through peer review and getting that finalized, and it should be published in the next few months. There are also a lot of other ISO frameworks and standards being created. ISO 42005 is one that’s under development right now; it’s specifically focused on developing your AI impact assessment. Anir talked about the importance of doing an impact assessment; well, a lot of organizations don’t even know where to start with that. ISO 42005 is going to be guidance in terms of how to implement those impact assessments and processes. Beyond that, there are various other standards coming out as well. We’re seeing Europe, specifically Germany: if you have a SOC 2 and a C5, there’s an AIC4 that’s also out there for organizations that provide AI initiatives in Germany; it’s an add-on to the existing C5 and the SOC 2. The threat landscape is always changing. There are new frameworks and new standards that are always evolving. There are new regulations; Anir talked about the EU AI Act, but there’s also the Colorado AI Act in the US that goes into effect in 2026, and you have California acts coming out, those types of things. I would say the biggest thing here is, don’t worry too much about what frameworks there are currently. Keep an eye on the news, because things are changing every day, every week, every month, and you’ll see things coming up. And we’ll continue to do presentations and webinars. If you ever have questions or just want to know what information is currently available, reach out to any of us or look at some of the content we have on our website. But that’s what I have. Anir, Justin, anything else to add on your side?
Yeah, I’d say: have common controls. There’s going to be overlap; know that there’s going to be overlap. There’s SOC 2 with AI-specific controls, there’s HITRUST with the HITRUST AI framework, and whatnot. There is going to be overlap. Have your set of controls, similar to privacy, right? When some of these state-level privacy-related requirements came out, a lot of organizations took the strictest one, aligned the organization and its processes around that, and documented it. Now, when Colorado came out with theirs, or Texas or Maine or whatnot, it’s easy: okay, is this asking for essentially the same thing? Then that way you can showcase compliance with it. Know your common controls, know your overlaps, and how your environment may fit with those overlaps.
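A hedged sketch of what that common-controls approach can look like in practice: one controls list, with each control mapped to the frameworks it helps satisfy. The control IDs, wording and mappings below are invented for illustration, not an official crosswalk:

```python
# Illustrative common-controls list; IDs and mappings are hypothetical.
COMMON_CONTROLS = [
    {
        "id": "CC-01",
        "control": "Maintain an approved AI policy covering acceptable use",
        "maps_to": {
            "ISO 42001": ["AIMS policy clauses"],
            "HITRUST AI": ["AI policy and governance requirements"],
            "NIST AI RMF": ["Govern"],
        },
    },
    {
        "id": "CC-02",
        "control": "Inventory AI systems and the data used to train or prompt them",
        "maps_to": {
            "ISO 42001": ["Risk and impact assessment inputs"],
            "HITRUST AI": ["Training data protection requirements"],
            "NIST AI RMF": ["Map"],
        },
    },
]

def controls_for(framework: str) -> list[str]:
    """List common control IDs that contribute evidence toward a framework."""
    return [c["id"] for c in COMMON_CONTROLS if framework in c["maps_to"]]
```

The point of the structure is the one Anir makes: evidence is gathered once per control, and each new framework or state law becomes another entry in `maps_to` rather than a separate program.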
Perfect. And then we did have a couple of questions asking which certification would be best. One asked: what’s best for the internal use of AI tools, Glean, ChatGPT, etc.? Another asked about a SaaS company leveraging AI. So which certifications would you recommend there?
Can you ask the question again, Charlotte? No problem, yeah. They were just asking which certification you feel would be best for their company: one if they’re using AI tools, so Glean, ChatGPT, etc., and another asking what would be best for a SaaS company leveraging AI. Got you.
Yeah. So I think for the first one, ISO as a base framework makes sense, because HITRUST, as Justin mentioned earlier, is more for when you have an AI model and a platform that you’re providing. So HITRUST might not make sense necessarily if you’re just using AI tools or technologies. But as you use third-party tools like Copilot, etc., there’s still risk. ISO creates that fundamental framework for risk; obviously, with ISO comes the risk assessment, and then the NIST RMF that we talked about as well. I would say that’s a good starting point for organizations that are just using AI tools or technologies. For the second one, if they’re providing a SaaS product or an offering, it’s kind of up to you. If you’re in the healthcare space and already doing HITRUST, then it’s a good add-on; that’s definitely good for the type of data that you manage and how you’re handling it. If you want to have an international presence, or customers that are going to ask for it, you can go down the ISO path. At the end of it, these are just the two that are available right now that we’re seeing; there are new ones that are going to be releasing. I do want to go back to what Anir was saying: create a common controls framework. Don’t worry too much about actual compliance frameworks right now. See what AI tools and technologies you’re using and what some of the associated risk factors are, and create your common controls. Then, once you’ve created that, as your customers and vendors start asking for compliance certifications, you’ll just have to go through that process. If you already have an AI policy and you already have threat testing done, and then your customer says, hey, I want ISO, you can pursue the ISO process, because you’ve already gone through all of those processes historically. You can use ISO or HITRUST or NIST as a framework to start off with, but definitely try to develop that common-control, risk-based approach, because as things develop, you’re going to be adding on additional frameworks and additional legislation and laws, and it’s just going to get more complicated. At that point, you’re going to have to do a common controls mapping anyway, so why not just start with that to begin with? Hopefully that answers the question.
We have another one here: with everyone racing to make AI part of their offering, do you have recommendations on dealing with vendors using AI?
Yeah, I’ll chime in on this one, since my team primarily takes on vendor assessments. We’ve filled out vendor assessments for our clients, and we’re also sometimes on the receiving end of vendor questionnaires as well. My recommendation would be to demand AI transparency. Require the vendors to disclose whether they’re using AI in their product or service, and then explain how their AI is making decisions. Is there human oversight? Then lastly, I would say, review their AI governance. Do they follow a standard like the NIST AI RMF, ISO, SOC 2 AI or HITRUST, whatever it may be? Do they follow a reputable framework and align their AI program to that framework? I would assess that first, have them explain it to you within a questionnaire, and have them showcase transparency and accountability there, especially if the vendor is using an AI model.
I would add real quick to that as well: you have to look beyond just security and risk, which we’ve been talking about, and really dive into privacy and legal issues too. You really have to hit the whole gauntlet of AI issues when you’re interacting with your third parties and trying to get assurance.
Exactly, yeah. I think folks have seen the questionnaire model: you click yes, and then 10 other questions pop up; you click no, they go away. So, make sure that when they do click yes, you’re asking the right questions. Obtaining the right information from your vendor that’s using AI is going to be key for you. Make sure that those questions are not just yes-or-no questions, but allow for explainability within the questions and within the answers. And make sure those vendors are monitored, maybe on a more frequent basis because of the AI usage and just the landscape right now and how it’s going, versus some of the less critical vendors that are maybe monitored on a biannual basis.
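The branching-questionnaire pattern Anir describes can be sketched simply: a gating AI question that, when answered yes, unlocks follow-ups requiring free-text explanations rather than bare yes/no answers. The question wording here is illustrative only:

```python
# Sketch of a branching vendor questionnaire; question text is illustrative.
GATE = "Does your product or service use AI or machine learning?"

# Each follow-up carries an answer type: free text forces explainability.
FOLLOW_UPS = [
    ("How does the AI make decisions, and is there human oversight?", "text"),
    ("What data is used to train or prompt the model?", "text"),
    ("Which AI framework do you align to (NIST AI RMF, ISO 42001, etc.)?", "text"),
    ("Is customer data shared with third or fourth parties?", "yes_no_explain"),
]

def questions_for(vendor_uses_ai: bool) -> list[tuple[str, str]]:
    """Return the follow-up questions a vendor must answer after the gate."""
    return FOLLOW_UPS if vendor_uses_ai else []
```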
And then we have a question here. You mentioned a little bit about asking your vendors how they’re making those decisions with their AI. Someone’s asked: can we trace all the decisions made by AI, with predictable models versus AI that learns, with unpredictable models? Have you seen companies implement any controls around that?
I don’t know, Bhavin or Justin, but at least for how to trace the decisions, a few things come to mind: AI types like decision trees come to mind, rule-based expert systems come to mind, audit logs come to mind. I don’t know, is there anything else that you guys can think of?
Yeah, typically DLP solutions, right? Going back to the asset management system and tool: identifying your assets, the data, the type of classification, and implementing DLP on top of it. Now the DLP tool will recognize, this is sensitive data, let’s not send it, or it’s been sent here, so you’re able to trace it. So essentially you’re going through a whole data flow diagram. Organizations do this already, but this is going to be more robust. Now you’re not just going to consider what is in your realm; you have to consider what is in your vendor’s realm. And not only your vendor’s realm: going back to what you were saying, when you look at your vendors, look at the contracts. Are they sending data? Are they taking your data and sending it to a third party or fourth party or fifth party? You’re going to have to really go through a data flow diagram and a mapping from where your sensitive data is to where it’s going, from point A to point Z, wherever that might be. So it’s not one right answer; I think it’s multiple answers, in the sense that you have to do a data flow diagram, you’re going to have to implement asset management tools, and, as Anir mentioned, there should be some logging and monitoring. Where is the data tracked? What type of data do you care about, those types of things. But that’s the additional add-on to what Anir was saying.
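To illustrate the audit-log idea raised above, here is a minimal sketch of a traceable per-decision record. The fields are assumptions; a real implementation would also need retention, integrity and access controls around these logs:

```python
# Minimal sketch of an AI decision audit-log record; fields are hypothetical.
import json
import time

def log_ai_decision(model: str, model_version: str, inputs_ref: str,
                    decision: str, confidence: float) -> str:
    """Serialize one traceable decision record (append to durable storage)."""
    record = {
        "timestamp": time.time(),
        "model": model,
        "model_version": model_version,  # pin the version for reproducibility
        "inputs_ref": inputs_ref,        # pointer to stored inputs, not raw PII
        "decision": decision,
        "confidence": confidence,
    }
    return json.dumps(record)

entry = log_ai_decision("loan-screener", "2.3.1", "case-8842",
                        "refer-to-human", 0.62)
```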
It really goes back to testing as well. How unpredictable could this thing be? I mentioned the AI penetration testing earlier; that’s some wild stuff, the way that you really go in and pen test AI and try to get it to be unpredictable, just to see the results of that. That’s a little beyond tracing, more testing, but I feel like it’s related.
Perfect. And we just have one last question here before we’ll need to wrap up: are these frameworks pretty stable, or do you expect to see further changes to them in the future?
I can take that one. There are a couple of ways to look at it. I think, first, you have to understand AI is rapidly evolving. It’s a rapidly evolving technology, and these AI threats are evolving as well. It’s a very unique environment we’re in, and from a security and risk perspective, a static framework just really isn’t going to cut it anymore. I know earlier I broke down the HITRUST AI certification; I think what HITRUST is doing with the CSF framework at that foundational level, making it threat adaptive, really speaks to this. The expectation really is that this framework is going to evolve over time based on the threat landscape. And really, ISO and NIST are doing some of these same things; last year, the NIST AI RMF was updated to include a profile for generative AI. Charlotte, I would say we should all be expecting changes to these frameworks over time, primarily to keep pace with that changing threat landscape, specifically related to these AI tools and technologies. Absolutely.
Thank you so much. You hit the nail on the head; this is a quickly changing area and space, so hopefully this was really useful for everyone. As Bhavin mentioned, we’ll continue holding these webinars and providing resources. But for now, I just want to say thank you everyone for joining, and thank you to our panelists. If you do have any additional questions that we weren’t able to address today, please feel free to send them to [email protected]. Okay, well, thank you everyone. Enjoy the rest of your day.