The New Cybersecurity Frontier – What America’s AI Action Plan Means for Information Security

The artificial intelligence revolution has arrived at a critical juncture where innovation and security must advance in lockstep. In July 2025, the White House unveiled “Winning the Race: America’s AI Action Plan,” a comprehensive strategy that fundamentally reshapes how the United States approaches AI development while addressing the mounting cybersecurity challenges that accompany this technological transformation. 

The AI Action Plan represents more than just policy guidance; it establishes a new paradigm for how America will secure its technological future while maintaining global competitiveness. Built on three foundational pillars (accelerating innovation, building AI infrastructure, and leading international diplomacy and security), this plan directly addresses the cybersecurity imperatives that will define the next decade of digital transformation.

Why Now? The Acceleration of AI Adoption and Associated Risks 

The timing of America’s AI Action Plan reflects the exponential growth in AI adoption across industries and the corresponding escalation of security threats. The data reveals a dramatic transformation in how organizations integrate artificial intelligence into their operations, creating both unprecedented opportunities and significant vulnerabilities.  

AI adoption has experienced remarkable growth over the past eight years, fundamentally altering the technological landscape: 

Year    AI Adoption Rate (%)
2017    20
2018    47
2019    58
2020    50
2021    56
2022    50
2023    55
2024    72
2025    78

This adoption trajectory illustrates several critical trends. The initial surge from 2017 to 2019 reflected early enterprise experimentation with AI technologies. The temporary decline in 2020 corresponded with pandemic-related disruptions and budget constraints, followed by steady recovery as organizations recognized AI’s essential role in digital transformation. The sharp acceleration from 2023 to 2025, jumping from 55% to 78%, coincides with the mainstream adoption of generative AI technologies and large language models. 

Of course, this rapid adoption has created a perfect storm of new cybersecurity challenges. The surge in AI usage has outpaced the development of corresponding security frameworks, leaving organizations vulnerable to new attack vectors. According to Menlo Security, browser-based phishing attacks have increased by 140% compared to 2023, with cybercriminals creating nearly one million new phishing sites each month, a 700% increase since 2020. This exponential growth in threats directly correlates with the expanding AI attack surface as more organizations integrate these technologies without adequate security controls.

The human factor compounds these risks significantly. With more than half of surveyed organizations experiencing data leakage from employee AI usage, it’s clear that the democratization of AI tools has created unintended security gaps. Employees, eager to implement AI for productivity gains, often bypass traditional security protocols or inadvertently expose sensitive information through AI platforms that lack enterprise-grade security controls. 
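
One practical control for this leakage problem is screening prompts before they ever leave the enterprise boundary. The following is a minimal sketch, assuming simple regex patterns and a hypothetical screen_prompt gate; a production deployment would use a full DLP engine tuned to the organization’s data classification scheme.

```python
import re

# Illustrative patterns only (assumptions for this sketch); real detection
# rules would come from the organization's data classification program.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_\-]{16,}\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings) for a prompt bound for an external AI tool."""
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]
    return (not findings, findings)

allowed, findings = screen_prompt("Summarize account 4111 1111 1111 1111")
if not allowed:
    print(f"Blocked before submission: matched {findings}")
```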

Secure-by-Design Mandate: Embedding Security from the Ground Up 

America’s AI Action Plan establishes a comprehensive framework for secure-by-design AI technologies and applications, recognizing that AI systems are inherently susceptible to adversarial inputs such as data poisoning and privacy attacks that can compromise their performance and reliability. The plan emphasizes that the U.S. Government has a fundamental responsibility to ensure that AI systems it relies on, particularly for national security applications, are protected against spurious or malicious inputs. 

The secure-by-design approach outlined in the plan requires that all AI usage in safety-critical or homeland security applications must employ robust and resilient AI systems that are specifically instrumented to detect performance shifts and alert operators to potential malicious activities. This includes implementing comprehensive monitoring for data poisoning attacks and adversarial example attacks that could compromise system integrity. 
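
The plan does not prescribe a specific detection mechanism, but a performance-shift monitor of the kind it describes might look like the sketch below. The rolling z-test on prediction confidence, the window size, and the alert threshold are illustrative assumptions; a real deployment would combine several drift statistics and route alerts into existing operator workflows.

```python
import statistics
from collections import deque

class PerformanceShiftMonitor:
    """Minimal sketch: flag when a model's rolling confidence scores drift
    from a trusted baseline, one symptom of poisoning or adversarial input."""

    def __init__(self, baseline_scores, window=500, z_threshold=3.0):
        self.baseline_mean = statistics.mean(baseline_scores)
        # Guard against a zero-variance baseline in this toy setting.
        self.baseline_stdev = max(statistics.stdev(baseline_scores), 1e-9)
        self.window = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, score: float) -> bool:
        """Record one prediction confidence; return True when operators
        should be alerted to a sustained shift."""
        self.window.append(score)
        if len(self.window) < self.window.maxlen:
            return False  # not enough recent data to judge
        window_mean = statistics.mean(self.window)
        # Standard error of the window mean under the baseline distribution.
        stderr = self.baseline_stdev / (len(self.window) ** 0.5)
        return abs(window_mean - self.baseline_mean) / stderr > self.z_threshold
```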

The plan’s secure-by-design mandate encompasses several critical implementation areas. The Department of Defense (DoD), in collaboration with NIST at the Department of Commerce and the Office of the Director of National Intelligence, will continue refining the DoD’s Responsible AI and Generative AI Frameworks, Roadmaps, and Toolkits. These frameworks provide foundational guidance for implementing security controls throughout the AI development lifecycle. 

A particularly significant development is the requirement for the Office of the Director of National Intelligence to publish an Intelligence Community Standard on AI Assurance under Intelligence Community Directive 505 on Artificial Intelligence. This standard will establish specific security requirements for AI systems used in intelligence operations, ensuring that the most sensitive government AI applications maintain the highest levels of security and resilience. 

The secure-by-design approach also addresses the unique challenges of AI systems in critical infrastructure environments. As AI systems advance in coding and software engineering capabilities, their utility as tools of both cyber offense and defense will expand dramatically. The plan recognizes that maintaining a robust defensive posture will be especially important for owners of critical infrastructure, many of whom operate with limited financial resources but must defend against increasingly sophisticated AI-enabled attacks. 

For information security professionals, the secure-by-design mandate creates new requirements for understanding and implementing AI-specific security controls. This includes developing expertise in detecting and mitigating adversarial attacks against AI models, implementing secure AI development practices, and ensuring that AI systems can maintain their security posture even when faced with sophisticated nation-state level threats. 

AI-ISAC & Threat Intelligence: Building Collective Defense 

A cornerstone of the AI Action Plan’s cybersecurity strategy is the establishment of an AI Information Sharing and Analysis Center (AI-ISAC), which will be led by the Department of Homeland Security in collaboration with the Center for AI Standards and Innovation (CAISI) at the Department of Commerce (formerly known as the Artificial Intelligence Safety Institute, or AISI) and the Office of the National Cyber Director. This initiative represents a critical evolution in how the United States approaches collective cybersecurity defense specifically tailored to the unique challenges of artificial intelligence systems. 

The AI-ISAC responds to the same recognition noted above: as AI systems grow more capable in coding and software engineering, they become more potent tools of both cyber offense and defense. The center will serve as the primary hub for promoting the sharing of AI-security threat information and intelligence across U.S. critical infrastructure sectors, filling a crucial gap in the current cybersecurity information sharing ecosystem.

The establishment of the AI-ISAC reflects the plan’s understanding that AI use in cyber and critical infrastructure environments exposes those AI systems to adversarial threats that traditional cybersecurity frameworks may not adequately address. The center will focus specifically on threats unique to AI systems, including data poisoning attacks, adversarial example attacks, and other forms of AI-specific malicious activities that could compromise system performance or integrity. 

Under the AI Action Plan, the Department of Homeland Security will also issue and maintain comprehensive guidance to private sector entities on remediating and responding to AI-specific vulnerabilities and threats. This guidance will complement the threat intelligence sharing functions of the AI-ISAC by providing actionable recommendations for organizations seeking to protect their AI systems from emerging threats. 

The plan emphasizes the importance of collaborative and consolidated sharing of known AI vulnerabilities from within Federal agencies to the private sector as appropriate. This process will make use of existing cyber vulnerability sharing mechanisms while extending them to address the unique characteristics of AI system vulnerabilities. Unlike traditional software vulnerabilities, AI system weaknesses often involve complex interactions between training data, model architecture, and deployment environments that require specialized analysis and communication protocols. 

For cybersecurity professionals, the AI-ISAC represents both a valuable resource and a new operational requirement. Organizations will need to establish processes for contributing to and consuming AI-specific threat intelligence, integrating ISAC feeds into their security operations centers, and participating in collaborative defense initiatives focused on AI systems. This requires developing new capabilities in AI threat analysis and establishing relationships with other ISAC members across industries. 
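
As a concrete starting point, most existing ISACs distribute STIX objects over TAXII feeds, and it is reasonable to expect the AI-ISAC to do the same. The sketch below polls a hypothetical TAXII 2.1 collection; the URL, token handling, and feed name are placeholders, since the AI-ISAC has not yet published an interface.

```python
import requests

# Hypothetical endpoint: no AI-ISAC feed URL exists yet. TAXII 2.1 servers
# return STIX objects in an envelope of the form {"objects": [...]}.
FEED_URL = "https://taxii.example-ai-isac.org/api/collections/ai-threats/objects/"
ACCEPT = "application/taxii+json;version=2.1"

def fetch_ai_threat_indicators(api_token: str) -> list[dict]:
    """Pull STIX indicator objects from the assumed AI-ISAC TAXII feed."""
    response = requests.get(
        FEED_URL,
        headers={"Accept": ACCEPT, "Authorization": f"Bearer {api_token}"},
        timeout=30,
    )
    response.raise_for_status()
    envelope = response.json()
    # Keep only indicators so they can be routed into SIEM watchlists.
    return [obj for obj in envelope.get("objects", [])
            if obj.get("type") == "indicator"]
```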

The threat intelligence framework will also support the broader goal of ensuring that critical infrastructure providers can stay ahead of emerging AI-enabled threats. Given that many critical infrastructure owners operate with limited financial resources, the AI-ISAC will serve as a force multiplier, enabling smaller organizations to benefit from the collective intelligence and defensive capabilities of the broader community. 

Datacenter and Supply Chain Standards: Securing AI Infrastructure 

The AI Action Plan recognizes that securing artificial intelligence requires comprehensive protection of the entire AI infrastructure ecosystem, with particular emphasis on high-security datacenters for military and intelligence community usage. The plan acknowledges that because AI systems are particularly well-suited to processing raw intelligence data and because of the vastly expanded capabilities AI systems could have in the future, AI will inevitably be used with some of the U.S. government’s most sensitive data. 

The plan establishes that data centers where these critical AI models are deployed must be resistant to attacks by the most determined and capable nation-state actors. This represents a significant elevation in security requirements beyond traditional datacenter protections, reflecting the strategic importance of AI systems in national security operations. 

Under the AI Action Plan, new technical standards for high-security AI datacenters will be created through a collaborative effort led by the Department of Defense, the Intelligence Community, the National Security Council, and NIST at the Department of Commerce, including the Center for AI Standards and Innovation (CAISI). This multi-agency approach ensures that security standards address both technical requirements and operational security considerations across different government sectors. 

The plan also mandates advancing agency adoption of classified compute environments to support scalable and secure AI workloads. This requirement addresses the unique challenges of running AI systems that process classified information while maintaining the computational scale necessary for advanced AI operations. These environments must balance security requirements with the performance demands of modern AI systems. 

Infrastructure security extends beyond datacenters to encompass the broader AI supply chain. The plan includes specific provisions to maintain security guardrails that prohibit adversaries from inserting sensitive inputs into AI infrastructure. This includes ensuring that the domestic AI computing stack is built on American products and that the infrastructure supporting AI development, such as energy and telecommunications systems, remains free from foreign adversary information and communications technology and services (ICTS), including both software and relevant hardware components. 

The plan addresses the critical intersection of AI infrastructure and national security by requiring that all AI-related infrastructure development maintain robust security controls. This includes implementing secure development practices for AI systems, establishing secure deployment pipelines, and ensuring that AI infrastructure can withstand sophisticated attacks while maintaining operational capability. 

For information security professionals, these infrastructure requirements create new challenges in securing AI systems at scale. Organizations must develop expertise in protecting AI workloads in high-security environments, implementing security controls that don’t impede AI performance, and ensuring that AI infrastructure meets the stringent security requirements necessary for national security applications. This includes understanding the unique security considerations of AI accelerator hardware, specialized AI networking requirements, and the security implications of large-scale AI model deployment.

Incident Response & Regulatory Sandboxes: Adaptive Security in Practice 

The AI Action Plan introduces a comprehensive framework for AI incident response that acknowledges the unique challenges posed by AI system failures and the need for specialized response capabilities. The plan recognizes that the proliferation of AI technologies requires prudent planning to ensure that if systems fail, the impacts to critical services or infrastructure are minimized, and response is immediate. 

Central to this approach is the development and incorporation of AI Incident Response actions into existing incident response doctrine and best practices for both the public and private sectors. This represents a significant evolution from traditional cybersecurity incident response, which may not adequately address the complex failure modes and attack vectors specific to AI systems. 

Under the AI Action Plan, NIST at the Department of Commerce, including the Center for AI Standards and Innovation (CAISI), will partner with the AI and cybersecurity industries to ensure AI is included in the establishment of standards, response frameworks, best practices, and technical capabilities of incident response teams. This includes developing specialized “flyaway kits” and other technical capabilities specifically designed for AI incident response scenarios. 

A critical component of the incident response framework is the modification of the Cybersecurity and Infrastructure Security Agency’s (CISA) Cybersecurity Incident & Vulnerability Response Playbooks to incorporate considerations for AI systems. These updated playbooks will include requirements for Chief Information Security Officers to consult with Chief AI Officers, Senior Agency Officials for Privacy, CAISI via NIST at the Department of Commerce, and other agency officials as appropriate during AI-related incidents. 

This multi-stakeholder consultation requirement reflects the complex nature of AI incidents, which often involve not just technical security considerations but also privacy implications, AI safety concerns, and potential impacts on AI system performance and reliability. The integration of Chief AI Officers into incident response procedures ensures that technical AI expertise is available during critical response activities. 

The plan also establishes a framework for responsible sharing of AI vulnerability information as part of ongoing efforts to strengthen the nation’s cybersecurity. Led by the Department of Defense, Department of Homeland Security, and Office of the Director of National Intelligence, in coordination with OSTP, NSC, OMB, and the Office of the National Cyber Director, this initiative will encourage the sharing of AI vulnerability information while protecting sensitive details about AI capabilities and national security implications. 

The incident response framework addresses the unique attribution challenges associated with AI-related security events. Unlike traditional cybersecurity incidents, AI system anomalies may result from malicious attacks, flawed training data, adversarial inputs, or inherent limitations in the AI model itself. Response teams must be equipped to distinguish between these different causes and implement appropriate remediation strategies for each scenario. 

For cybersecurity professionals, the AI incident response framework creates new requirements for developing specialized skills and capabilities. This includes understanding AI system architectures, recognizing signs of AI-specific attacks such as data poisoning or adversarial examples, and implementing response procedures that can restore AI system integrity without losing valuable training data or model capabilities. Organizations will need to update their incident response plans, train their response teams on AI-specific scenarios, and establish relationships with AI experts who can provide technical guidance during complex incidents. 
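
One way to operationalize that distinction is a first-pass triage routine that maps observable signals to the failure categories above. Everything in this sketch (the signals, the thresholds, and the recommended actions) is an illustrative assumption rather than an official playbook.

```python
from dataclasses import dataclass

@dataclass
class AIIncidentSignals:
    """Signals a response team might collect; all fields are assumptions."""
    training_data_hash_matches: bool  # does training data match its signed hash?
    input_anomaly_rate: float         # fraction of recent inputs flagged anomalous
    accuracy_drop: float              # drop vs. pre-incident benchmark, 0.0-1.0
    known_model_limitation: bool      # failure mode documented in the model card?

def triage(signals: AIIncidentSignals) -> str:
    """Rough first-pass categorization of an AI anomaly's likely cause."""
    if not signals.training_data_hash_matches:
        return "suspected data poisoning: restore from a trusted checkpoint"
    if signals.input_anomaly_rate > 0.05:
        return "suspected adversarial inputs: quarantine traffic, preserve samples"
    if signals.known_model_limitation:
        return "inherent model limitation: route to AI engineering, not security"
    if signals.accuracy_drop > 0.10:
        return "unexplained degradation: escalate to joint CISO/Chief AI Officer review"
    return "no AI-specific cause identified: follow the standard IR playbook"
```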

Practical Checklist: Implementing AI Action Plan Security Measures 

For information security professionals seeking to align their organizations with the AI Action Plan’s cybersecurity requirements, the following practical checklist provides actionable steps for immediate implementation: 

  • Conduct AI Security Assessment: Perform comprehensive inventory of all AI systems, tools, and applications currently in use across the organization, including shadow AI deployments by employees. 
  • Develop AI Security Policies: Create specific policies governing employee use of AI tools, data sharing with AI platforms, and approval processes for new AI implementations.  
  • Implement Secure-by-Design Practices: Establish security requirements for all new AI projects, including threat modeling, security architecture review, and penetration testing specifically for AI systems. 
  • Establish AI Incident Response Procedures: Update incident response plans to include AI-specific scenarios, train response teams on AI threat indicators, and develop procedures for investigating AI-related security events. 
  • Join AI-ISAC: Participate in the AI Information Sharing and Analysis Center (AI-ISAC) to receive threat intelligence and contribute to collective defense efforts. 
  • Secure AI Development Environments: Implement isolated development environments for AI projects, establish secure coding practices for AI development, and require security validation for all AI models before production deployment. 
  • Monitor AI System Behavior: Deploy monitoring systems capable of detecting anomalous AI behavior, unusual data access patterns, and signs of adversarial attacks against AI models. 
  • Validate AI Supply Chain: Implement vetting procedures for AI vendors, require security assessments for third-party AI services, and maintain detailed inventories of AI software components and dependencies (a minimal verification sketch follows this list).
  • Train Security Personnel: Provide AI security training for cybersecurity staff, including understanding of AI-specific threats, attack vectors, and mitigation strategies. 
  • Establish Data Protection Controls: Implement data classification systems for AI training data, establish access controls for sensitive datasets, and ensure compliance with data protection regulations in AI contexts. 
  • Create AI Governance Framework: Establish cross-functional teams responsible for AI security oversight, define roles and responsibilities for AI risk management, and implement regular security reviews for AI systems. 
  • Prepare for Regulatory Compliance: Stay informed about emerging AI security regulations, participate in regulatory sandbox programs where appropriate, and maintain documentation demonstrating compliance with AI security standards. 
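
To make the supply chain item concrete, here is a minimal artifact-verification sketch. The manifest format, file names, and helper functions are assumptions for illustration; a production pipeline would typically pair hashes with cryptographic signatures from the vendor.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a model artifact through SHA-256 (artifacts are often large)."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_ai_inventory(manifest_path: Path) -> list[str]:
    """Compare deployed artifacts against a reviewed manifest that maps
    file names to expected SHA-256 digests; return any mismatches."""
    manifest = json.loads(manifest_path.read_text())
    failures = []
    for artifact, expected in manifest.items():
        actual = sha256_of(manifest_path.parent / artifact)
        if actual != expected:
            failures.append(f"{artifact}: expected {expected}, got {actual}")
    return failures
```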

Conclusion: Securing America’s AI Future 

America’s AI Action Plan represents a watershed moment in the evolution of cybersecurity strategy, acknowledging that the future of national security and economic competitiveness depends on our ability to innovate safely in the artificial intelligence domain. The plan’s comprehensive approach, spanning secure-by-design mandates, infrastructure protection, threat intelligence sharing, and adaptive regulatory frameworks, provides a roadmap for navigating the complex security challenges of the AI era. 

The statistics driving this initiative paint a clear picture of urgency. According to Metomic, 68% of organizations have already experienced data leakage from employee AI usage, and more than 600 incidents of AI-enabled fraud have been documented. The cybersecurity community cannot afford to treat AI security as a future concern. The time for action is now, and the AI Action Plan provides the framework for that action.

For information security professionals, this plan represents both challenges and opportunities. The challenge lies in developing new skills, implementing novel security controls, and adapting existing frameworks to address AI-specific risks. The opportunity lies in shaping the future of cybersecurity, contributing to collective defense efforts through the AI-ISAC, and positioning organizations for success in an AI-driven economy. 

The success of America’s AI Action Plan will ultimately depend on the cybersecurity community’s ability to translate policy into practice. This requires not just technical implementation but also cultural change: fostering a security-first mindset in AI development, promoting collaboration across sectors, and maintaining vigilance against evolving threats.

As we stand at the threshold of an AI-powered future, the choices we make today about security will determine whether artificial intelligence becomes a force for prosperity and progress or a source of vulnerability and risk. The AI Action Plan provides the foundation for making the right choices, but execution remains in the hands of cybersecurity leaders across the nation. 

The new cybersecurity frontier is here. The question is not whether we will face AI-related security challenges, but whether we will be prepared to meet them with the comprehensive, collaborative, and adaptive approach that America’s AI Action Plan envisions. 

Ready to Secure Your AI Future? We Can Help. 

Implementing the AI Action Plan’s cybersecurity requirements doesn’t have to be overwhelming. Tevora’s team of cybersecurity experts specializes in helping organizations navigate the complex landscape of AI security, from secure-by-design implementation to AI-ISAC participation and incident response planning. 

How Tevora Supports Your AI Security Journey 

AI Security Assessment & Strategy: We conduct comprehensive evaluations of your current AI systems and develop tailored security strategies aligned with the AI Action Plan’s requirements. Our assessments identify vulnerabilities, reveal shadow AI usage, and expose gaps in your current security posture.

Secure-by-Design Implementation: Our experts guide you through implementing secure-by-design principles for your AI systems, ensuring protection against data poisoning, adversarial attacks, and other AI-specific threats from the ground up. 

AI-ISAC Integration & Threat Intelligence: We help you establish processes for participating in the AI Information Sharing and Analysis Center, integrating AI threat intelligence into your security operations, and contributing to collective defense efforts. 

AI Incident Response Planning: Our team develops AI-specific incident response procedures, trains your security teams on AI threat scenarios, and ensures you’re prepared for the unique challenges of AI-related security events. 

Regulatory Compliance & Documentation: We assist with maintaining compliance documentation, preparing for emerging regulatory requirements, and ensuring your AI security measures meet federal standards and industry best practices.

Don’t let the complexity of AI security hold back your innovation. Contact us today to schedule a consultation and discover how Tevora can help your organization implement the AI Action Plan’s cybersecurity framework effectively and efficiently.

About the Author

Bill Kachersky is an information security analyst at Tevora.
