AI: Markers Along the Inflection Path

20 MAR 2025




Innovation and Policy Updates, and Implications for the Physical Security Industry[1]

In December 2024, my Crisis24 colleague Nick Hill and I forecast that Agentic AI would become the most prominent theme for Generative AI (GenAI) in the security industry in 2025.[2]

Thus far, we appear to have been right.  Given the pace of change, this blog provides an update in three key areas:

  • Latest AI advancements
  • Latest AI policy developments
  • Impacts of the above in physical security

AI Advancements in Q1 2025

The pace of AI innovation continues to accelerate, with significant LLM advancements from foundation labs like OpenAI, Google, Anthropic, Meta, and Perplexity. OpenAI released GPT-4.5, improving factual accuracy and multimodal capabilities,[3] while Google’s Gemini model demonstrated superior reasoning skills[4] in AI-driven decision-making. Similarly, Anthropic’s Claude 3.7 Sonnet showcased enhanced reasoning,[5] and Claude Code improved agentic capabilities, allowing autonomous execution of multi-step tasks.

New AI Entrants: China’s DeepSeek (R1 release, January 2025) and Manus AI (March 2025) have emerged as competitive threats, developing highly efficient AI models at lower cost. DeepSeek’s R1 model rivals GPT-4 at a fraction of the compute cost,[6] while Manus AI specializes in autonomous task execution. Initial assessments appear thematically similar to Anthropic’s Claude releases in both use cases and capabilities. That said, some new models raise concerns about AI-driven cyber and surveillance risks.

Privacy, Cybersecurity, and Geopolitics

The rise of new AI players has sparked discussions about AI espionage and cybersecurity risks. US policy experts warn that Chinese models, including DeepSeek[7] and Manus,[8] could be subject to government access requests under China’s cybersecurity laws, posing heightened risks for Western corporate users. More generally, there are concerns that these models could be used for automated cyberattacks, deepfake propaganda, and large-scale surveillance operations.

In response, the US and allied nations are considering restricting access to foreign AI models in sensitive industries, much like previous restrictions on Huawei technology.[9] Corporate security and procurement leaders should evaluate AI purchasing decisions carefully, ensuring that AI-powered security tools align with data protection and national security policies.

Policy Updates for AI and Implications for the Physical Security Industry

The Evolving AI Policy Landscape

AI governance has become a top priority for governments and corporations, with major developments in the last quarter shaping the regulatory environment. The Paris AI Action Summit in February 2025 marked a shift from previous AI safety discussions to a focus on AI adoption, inclusivity, and economic impact. France led the initiative, securing €109 billion in AI investments, but the US and UK notably abstained from signing the final declaration, signaling concerns over multilateral constraints on AI innovation. The European Union continued to push forward its AI Act, aiming to balance innovation with strict regulation, while the United States, under the new administration, has favored a light-touch regulatory approach to maintain global AI leadership.

At the United Nations, AI governance discussions are gaining traction. The UN’s Global Digital Compact and proposed International AI Scientific Panel aim to bring a structured, global perspective to AI governance, addressing data sovereignty, security risks, and equitable AI access. Meanwhile, China has expanded its domestic AI regulatory framework, requiring state approval for generative AI models and raising concerns about government control over AI development and deployment.

The above international context in 2025 comes on the heels of growing US domestic legislation in 2024. We provide a few examples below. Note that some experts forecast that the new US administration will allow more flexibility in implementation (e.g., voluntary best practices rather than prescriptive rules), particularly where national security or critical infrastructure is less of a concern.

  • In 2024, the FTC held its first AI-focused summit and warned that it will enforce consumer protection laws against biased or privacy-invasive AI systems.[10]
  • The Department of Homeland Security (DHS) released a landmark framework in November 2024 aimed at the safe and secure deployment of AI in critical sectors.[11]
  • In 2024, at least 45 states introduced AI-related bills, and 31 states and territories enacted AI-related laws or resolutions.[12]
  • The National Institute of Standards and Technology (NIST) published an AI Risk Management Framework to guide organizations in developing trustworthy, ethical AI systems.[13]

Corporate AI Policy and Buyer Implications

Private sector AI governance has also evolved. OpenAI, Google, and Microsoft have implemented voluntary AI safety commitments, including rigorous red-teaming for risks, watermarking AI-generated content, and external audits of AI capabilities. However, technology firms have simultaneously lobbied against overregulation, arguing that excessive constraints hamper innovation and undermine competitiveness.

For corporate security and risk management teams, these policy shifts carry direct implications. The EU AI Act, for instance, mandates transparency and risk assessments for AI-driven surveillance and decision-making systems, impacting CSOs and GSOCs deploying AI-enhanced security operations. The more relaxed regulatory stance of the US may spur faster AI adoption but raises concerns over inconsistent global compliance frameworks.

Policy Forecasting: The Next 12-24 Months

  • Diverging AI Regulatory Blocs: The US is expected to maintain a pro-innovation, market-driven AI strategy, while the EU refines its AI Act’s implementation, hoping to avoid stifling business adoption. China will likely expand state control over AI deployment and push for international adoption of its own AI governance models.
  • International Collaboration on AI Security: Global discussions may lead to voluntary safety standards for frontier AI models, particularly for cyber and national security risks. The UN’s push for AI capacity-building programs may see broader adoption among developing nations.
  • Impact on Physical Security: AI policy shifts will affect how AI-powered security solutions are adopted globally, including risk-classification requirements under the EU AI Act and outright prohibitions on some models. CSOs and their procurement teams should monitor regulatory developments, ensure AI solutions comply with evolving legal frameworks, and adapt security strategies to align with international AI governance norms.

Practical Applications for CSOs and GSOCs

Of course, the above developments in innovation and policy aren’t occurring in a vacuum. Early 2025 has brought seismic strategic and tactical shifts in the business and security environment: the role of the US internationally; the role of alliances and the future of NATO; China-Taiwan tensions; tariffs and protectionism; new phases in Ukraine, Israel, Lebanon, and Russia; and the LA fires.

In this dynamic context, recent AI advancements present transformational opportunities for corporate security operations:

  • AI-Powered Threat Intelligence: AI can now process large-scale security data feeds, detect anomalies, and provide real-time risk alerts based on LLM-powered threat assessment.     
  • Autonomous Security Response: Agentic AI capabilities will soon enable automated security workflows, such as camera-based threat recognition, smart access control, and AI-assisted crisis communication.
  • Enhanced Surveillance & Situational Awareness: Advanced multimodal AI can analyze video, audio, and text inputs simultaneously, improving incident detection and response speed in GSOCs.
  • AI-Powered Cybersecurity Defense: AI-driven security platforms can identify sophisticated cyber threats, flagging unusual behavior patterns before breaches occur.
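As a toy illustration of the anomaly-detection idea above, a GSOC feed monitor might flag unusual spikes in an event stream with a simple statistical rule. This is a minimal sketch, not a production tool; the function name, feed values, and threshold are all hypothetical, and real systems would use far richer models:

```python
from statistics import mean, stdev

def flag_anomalies(event_counts, threshold=2.5):
    """Flag positions whose event count deviates more than `threshold`
    standard deviations from the series mean (a simple z-score rule)."""
    mu = mean(event_counts)
    sigma = stdev(event_counts)
    return [
        (i, count)
        for i, count in enumerate(event_counts)
        if sigma > 0 and abs(count - mu) / sigma > threshold
    ]

# Hourly badge-denial counts from a hypothetical access-control feed:
feed = [4, 5, 3, 6, 4, 5, 4, 80, 5, 4]
alerts = flag_anomalies(feed)  # the spike at hour 7 stands out
```

In practice, an LLM layer would sit on top of detection like this, summarizing flagged events and drafting the risk alert for analyst review.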

Challenges: Despite its transformational potential, AI in security still comes with vulnerabilities. Wise security leaders, and our industry more generally, must address each in turn:

  • Reliability & Hallucinations:  LLMs like GPT-4.5 have improved but still generate false positives or misinterpret security data.
  • Data Privacy & Compliance Risks: Using AI models from unregulated sources may violate GDPR, CCPA, or national cybersecurity laws, requiring strict governance over AI data flows.
  • AI Over-Reliance & Automation Bias: Security teams must maintain human oversight to prevent misplaced trust in AI-driven decision-making.
  • Adversarial AI Exploits: Threat actors can use AI to bypass security systems through AI-generated cyberattacks or adversarial input manipulation.

The Future of AI in Security

The security industry is at a pivotal moment: AI offers powerful new capabilities, but these must be deployed responsibly and securely. Security practitioners, and our industry more broadly, must balance AI-driven automation with human intelligence, ensure compliance with evolving AI regulations, and closely monitor geopolitical risks tied to AI adoption. The coming 12-24 months, potentially including early steps toward artificial general intelligence (AGI), will see AI security tools become more autonomous, making proactive security planning essential for risk management leaders.

At Crisis24, we remain at the forefront of AI’s evolution, helping organizations unlock its full potential. Let us guide you in integrating these groundbreaking systems into your security and risk management frameworks. 

 

[1] Several GenAI models were used in the development of this blog, including directing several models to argue “against” each other. We advise transparency in the use of GenAI and seek to exemplify that here.

[2] Our December 2024 prediction: The Inflection Point: Agentic AI in the Evolution of Security and Risk Management

[3] GPT-4.5 performance: GPT 4.5 Released: Here are the Benchmarks; Everything You Need to Know About OpenAI’s GPT-4.5

[4] Gemini performance: Introducing Gemini: Our Largest and Most Capable AI Model

[5] Claude 3.7 performance: Anthropic’s New ‘Hybrid Reasoning’ AI Model is its Smartest Yet

[6] In this footnote, we add nuance to a common trope that goes something like: “DeepSeek has upended assumptions about the multi-billion-dollar investment required to build these models.” In reality, DeepSeek is allegedly transferring expertise from larger foundation models to its own system using knowledge distillation: training a smaller model on high-quality, synthetic data generated by established models, thus bypassing the need for massive training from scratch. DeepSeek then refines this distilled knowledge through a multi-stage reinforcement learning process. This combination lets DeepSeek achieve performance comparable to top US models at a fraction of the cost, but this performance would not be possible without the existence of the US foundation models.
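The distillation mechanism described in the footnote above can be sketched in a few lines: the student model is trained to match the teacher’s softened output distribution rather than hard labels. This is a toy illustration of the standard distillation objective with made-up logits, not DeepSeek’s actual pipeline:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution; a higher
    temperature softens the distribution, exposing the teacher's
    relative preferences among classes."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened
    distribution -- the core objective of knowledge distillation."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(ti * math.log(si) for ti, si in zip(t, s))
```

The loss is smallest when the student reproduces the teacher’s distribution, which is how a smaller model inherits behavior from a larger one without repeating the full training run.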

[7] Why DeepSeek is Sparking Debates Over National Security, Just Like TikTok; for other concerns about R1, please see the following: vulnerability to malicious prompts: DeepSeek’s Safety Guardrails Failed Every Test Researchers Threw at Its AI Chatbot; susceptibility to jailbreak techniques: DeepSeek R1 Exposed – Security Flaws in China’s AI Model; exposure of sensitive data: Wiz Research Uncovers Exposed DeepSeek Database Leaking Sensitive Information, Including Chat History; chain-of-thought reasoning exploitation: Exploiting DeepSeek-R1: Breaking Down Chain of Thought Security

[8] Manus AI: glowing reviews: China’s New AI Agent Manus Calls its Own Shots; China-related security concerns: AI Agent Manus Sparks Debate on Ethics, Security and Oversight

[9] An example of policy restrictions on DeepSeek: US Mulling a Ban on Chinese App DeepSeek From Government Devices, Source Says

[10] FTC Gears Up for AI Enforcement: No Brakes in Sight

[11] Groundbreaking Framework for the Safe and Secure Deployment of AI in Critical Infrastructure Unveiled by Department of Homeland Security

[12] Artificial Intelligence 2024 Legislation

[13] NIST AI Risk Management Framework (AI RMF 1.0)
