Jody Russell, Senior Solutions Engineer at Ambient.ai, sets out the frameworks for AI-native security.
The race to adopt AI in physical security is well underway, but much of the industry is still running on sand. As enterprise buyers, C-suites and policymakers pursue modernization, a critical misstep is attempting to deploy AI over outdated infrastructure that was never designed to support layered threat detection, context or dynamic inference.
Bolt-on analytics, patchwork integrations and surface-level enhancements like slick large language model wrappers may appear progressive, but they often reinforce the same silos, inefficiencies and noise that AI is supposed to solve.
What’s needed now is a ground-up transformation: foundational AI-native infrastructure that sees, understands and acts across the entire security ecosystem.
In recent decades, security operations have grown exponentially more complex. Today’s security operations center (SOC) teams confront a relentless stream of sensor data, access events, alarm noise and video feeds – all demanding rapid interpretation and response.
Yet many of the core systems in place remain rooted in architectures developed during the IP camera and video management system (VMS) boom of the early and mid-2000s.
Layering modern AI on top of last-generation thinking is a formula for stagnation, not transformation.
The cloud revolution offers a telling analogy. A decade ago, many enterprises tried to “go cloud” by lifting and shifting legacy applications into hosted environments.
These efforts rarely delivered the benefits of scale, flexibility or resilience. True value emerged only when systems were re-architected with cloud-native principles: modular, distributed, API-driven platforms designed for dynamic conditions.
The same lesson now applies to AI in physical security. Without AI-native design from the start, systems become bottlenecks rather than breakthroughs.
So what defines AI-native infrastructure in this context? It begins with perception. Foundational AI vision platforms do not simply attach analytics to a video feed; they reconstruct visual environments in real time using computer vision models trained on millions of behavioral patterns, object interactions and spatiotemporal dynamics.
These systems detect not just presence but also posture, movement, carried objects, behavioral shifts and contextual correlations from access control, Internet of Things (IoT) and even audio data.
AI-native platforms are engineered to ingest and fuse multimodal signals into coherent threat narratives.
Bolt-on analytics operate in isolation – flagging a motion event here, a door held open there, then relying on human operators to manually connect the dots.
The result is more alerts and less clarity, more tools and less integration, more cost and lower return.
Foundational architecture changes this equation. The AVS-01 alarm scoring standard, introduced in a whitepaper by The Monitoring Association, underscores the shift.
It prioritizes threats based on corroborated inputs such as forced entry alerts supported by sequential sensor evidence and video verification.
Intelligence, in this model, emerges not from any single alert but from the entirety of signals. Only AI-native platforms can interpret those relationships in real time and escalate threats based on behavioral context.
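The corroboration principle behind this kind of alarm scoring can be sketched in a few lines of Python. This is a toy illustration of the general idea – scoring an alarm higher when independent signal types agree within a short window – not an implementation of the AVS-01 standard; the `Signal` type, the weights and the 60-second window are all invented for this example:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    kind: str          # e.g. "forced_entry", "motion", "video_person"
    timestamp: float   # seconds since epoch
    confidence: float  # detector confidence, 0..1

def corroborated_score(signals, window_s=60.0):
    """Score an alarm by how many distinct signal types corroborate it
    within a short time window -- a single uncorroborated event scores low."""
    if not signals:
        return 0.0
    anchor = max(signals, key=lambda s: s.confidence)
    nearby = [s for s in signals
              if abs(s.timestamp - anchor.timestamp) <= window_s]
    kinds = {s.kind for s in nearby}
    # Base score from the strongest detection, boosted per corroborating modality.
    score = anchor.confidence * min(1.0, 0.5 + 0.25 * (len(kinds) - 1))
    return round(score, 3)
```

Under this scheme a lone motion event scores half its detector confidence, while the same event corroborated by forced-entry and video-verification signals scores at full confidence – a crude stand-in for the multi-signal escalation described above.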
Ambient.ai exemplifies this approach – not by adding AI to existing workflows, but by building the platform entirely around it.
Designed to act as a real-time security operator, Ambient understands behavior, interprets context and makes decisions based on threat probability.
It embodies what Gartner refers to in adjacent sectors as an “intelligent operations layer”: a system that doesn’t just observe but reasons and responds.
This evolution is anchored in five essential tenets for any modern AI-powered security platform.
AI-native platforms can be supercharged when deployed in a hybrid cloud architecture. In sensitive sectors like healthcare and critical infrastructure, there is understandable caution around full cloud deployment.
Hybrid models combining edge-based video ingestion with cloud-delivered AI modeling, workflow orchestration and policy management strike the right balance: localized control with global intelligence.
Because this model mirrors proven IT patterns, security teams can apply DevOps practices such as telemetry, continuous feedback loops and agile iteration to physical security. The result? Dependable, accessible, scalable and affordable AI.
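The edge/cloud division of labor described above can be sketched as a minimal pipeline: the edge node filters raw activity locally and forwards only candidate events, while the central tier applies heavier scoring and policy. Every class, method and threshold here is hypothetical, chosen only to make the split concrete:

```python
class EdgeNode:
    """Ingests video locally; forwards only candidate events upstream."""
    def __init__(self, motion_threshold=0.3):
        self.motion_threshold = motion_threshold

    def ingest(self, frame_activity):
        # frame_activity: a 0..1 motion estimate computed on-premises.
        # Returns True when the event is worth sending to the cloud tier.
        return frame_activity >= self.motion_threshold

class CloudScorer:
    """Runs heavier models and policy centrally; returns a decision."""
    def score(self, event):
        # Placeholder for a real model; here, a simple threshold on activity.
        return "escalate" if event["activity"] > 0.7 else "monitor"

edge, cloud = EdgeNode(), CloudScorer()
decisions = [cloud.score({"activity": a})
             for a in (0.2, 0.5, 0.9) if edge.ingest(a)]
```

The design point is that low-activity frames never leave the premises, which is what makes the localized-control, global-intelligence balance workable in practice.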
The data backs this up. The Urban Institute and Salt Lake City Police Department’s 2002 analysis of alarm activity found that 98% to 99% of alarms were false positives.
When a verified response policy was implemented, false alarms dropped by over 90%, saving the city more than $500,000 annually. AI-native systems take this even further.
By incorporating behavior scoring, multimodal sensor fusion and spatiotemporal modeling, organizations across utilities, transit and enterprise campuses have reported false alert reductions exceeding 80%.
Beyond savings, the human impact is profound. A 2014 study by Harvard Medical School found that human operators miss up to 95% of screen activity after just 22 minutes of continuous CCTV monitoring.
The tendency to miss rare events, known as the prevalence effect, exacerbates this in low-incident environments, compounding the broader issue of attention decrement during prolonged surveillance tasks.
Ironically, the more successful your deterrent strategy, the harder it becomes for operators to maintain vigilance. The only sustainable solution is to shift this burden to machines and let humans focus on judgment and intervention.
The near-term future of physical security isn’t full autonomy – it’s Agentic AI pairing: intelligent software agents that act independently when confidence is high and defer to humans when contextual nuance is needed.
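That pairing reduces to a simple routing policy: act autonomously above a confidence threshold, discard obvious noise below a floor, and escalate everything in between to a human. The sketch below is an illustrative policy with invented threshold values, not any vendor’s actual decision logic:

```python
def route_event(threat_score, act_threshold=0.85, dismiss_threshold=0.20):
    """Illustrative agentic-pairing policy: act autonomously only at high
    confidence, auto-dismiss obvious noise, and escalate the ambiguous
    middle band to a human operator."""
    if threat_score >= act_threshold:
        return "autonomous_response"   # e.g. lock doors, dispatch alert
    if threat_score <= dismiss_threshold:
        return "auto_dismiss"          # filtered out, logged only
    return "human_review"              # nuanced case: defer to operator
```

The width of the middle band is the key design choice: narrowing it raises autonomy but also raises the cost of any misjudged threshold.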
Organizations that invest in AI-native infrastructure today are positioning themselves to confidently deploy Agentic AI tomorrow.
Those clinging to bolt-on fixes will remain trapped in alert fatigue and vendor churn.
Buyer beware: the Gartner 2024 Hype Cycle places Generative AI at the very peak of inflated expectations, promising disruptive change but not yet delivering consistent operational value.
Many of today’s so-called AI Agents in physical security are little more than GenAI wrappers: conversational interfaces built on large language models, often lacking integration, contextual awareness or reliable decision-making.
In contrast, computer vision has decades of proven deployments and real-time inference behind it. Relying solely on generative models as the operational core of Agentic AI is premature until the technology matures beyond novelty and delivers sustained productivity.
Security organizations now face an important strategic decision. Leadership should be asking: Are we building infrastructure that can evolve into true Agentic AI operations, or are we reinforcing complexity on top of chaos?
One path leads to scalability, speed and insight. The other leads to burnout, blind spots and operational drag.
The NIST framework for cloud computing provides a valuable reference point. During the early stages of digital transformation, it gave organizations a clear set of principles to guide their “Cloud First” strategies by offering clear definitions, expectations and design standards to ensure scalable, future-ready infrastructure investments.
AI-powered physical security needs a similar foundation. As the market floods with point solutions and overpromised capabilities, a widely accepted framework, comparable to NIST’s cloud model, would help buyers assess maturity, alignment and interoperability across vendors.
It would also establish baseline expectations for what AI-native security platforms should deliver: automation, integration, contextual intelligence, adaptability and transparency.
By applying the spirit of the NIST model to physical security, enterprises can avoid vendor lock-in, reduce wasted spend and ensure their investments are guided by long-term operational strategy, not short-term hype.
Organizations that embrace this alignment and apply AI-native thinking to physical security will operate faster, scale smarter and build systems that learn and evolve in tandem with their environments.
The shift to AI-powered security operations requires a new foundation, one purpose-built to translate information into intelligence, and intelligence into action.
AI-native platforms, built from the ground up for perception, context and correlation, are what enable the SOC to evolve from a reactive command center into a proactive source of operational intelligence.
Progress hinges not on how much data is collected, but on how meaningfully it’s understood.
The organizations that invest in AI-powered security systems focused on context and human-machine collaboration will be positioned to prevent, not just react.
This article was originally published in the July edition of Security Journal Americas.