Detecting high-risk scenarios before they escalate: This is one of the core motivations behind the development of artificial intelligence (AI) for security applications. With AI in the quiver, operators deploying surveillance solutions can move beyond mere monitoring and leverage every video frame and piece of data available to identify threats and inform emergency response. AI is still an emerging technology, but its capabilities are already helping to minimise risk, maximise crime prevention and save lives.
In the past, video footage was archived for a short time before being overwritten. Today, branches of AI such as video analytics, machine learning and deep learning make use of the high volumes of data generated by IoT ecosystems to distinguish meaningful patterns in data sets, which are then translated into insights that bolster crime deterrence strategies around the world. This technology takes a more holistic view of data, connecting individual data points to describe what is happening, so that high-risk situations can be identified quickly before they escalate.
The overall market for real-time video analytics, in particular, was estimated at US$3.2 billion worldwide in 2018 and is expected to grow to US$9 billion by 2023, while the market for AI itself is expected to reach US$208.49 billion by 2025, according to Brandessence Market Research. All that to say, AI is no longer just a buzzword or a trend. It is becoming an integral component of our ever-growing datasphere.
Contrary to popular belief, however, AI is not the exclusive property of development powerhouses like Google, Amazon or Apple, which largely use AI to optimise speech and image recognition as well as content curation. Growing physical security concerns have also been a catalyst for steady growth in AI.
The Los Angeles Times described AI as critical “in a time when the threat of a mass shooting is ever-present.” Six of the ten deadliest mass shootings in US history have happened in the last ten years, according to BBC News, not to mention the 180 school shootings in that same period, which claimed 356 victims. For this reason, schools are among the early adopters of AI, making up an estimated US$450 million portion of the market in 2018, according to IHS Markit.
“What we’re really looking for are those things that help us to identify things either before they occur or maybe right as they occur so that we can react a little faster,” Paul Hildreth, Emergency Operations Coordinator for Fulton County Schools in Atlanta, told the Los Angeles Times in an interview in September 2019.
Behavioural analytics, a subset of AI, has emerged as one of the tools to do just that. It brings together emerging computer hardware, deep learning and the proliferation of data that makes up today’s datasphere to recognise hazardous situations based on the detection of certain human postures. This might include a cashier’s raised arms, for example, or an individual crouching near an ATM. Behavioural analytics can also be used to ensure workplace safety, for example by tracking whether employees hold the handrails when using the stairs, or by sending man-down alerts. Some software can even detect a potential gunman in real time, transmitting instantaneous alerts to first responders that minimise the risks to students, employees and facilities alike.
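To make the posture-detection idea concrete, here is a deliberately simplified sketch. The keypoint names and the geometric rule below are illustrative assumptions, not any vendor's actual algorithm; production systems infer postures with trained deep-learning models rather than hand-written rules.

```python
# Hypothetical sketch: flagging a "raised arms" posture from 2D pose keypoints
# (the kind a pose-estimation model might output per person, per frame).

def arms_raised(keypoints):
    """Return True if both wrists sit above the shoulders.

    Image coordinates grow downward, so a smaller y means higher in frame.
    `keypoints` maps joint names to (x, y) pixel positions.
    """
    return (keypoints["left_wrist"][1] < keypoints["left_shoulder"][1]
            and keypoints["right_wrist"][1] < keypoints["right_shoulder"][1])

# Example frame: both wrists above the shoulders -> posture flagged.
frame = {
    "left_shoulder": (100.0, 200.0), "right_shoulder": (160.0, 200.0),
    "left_wrist": (90.0, 120.0), "right_wrist": (170.0, 115.0),
}
print(arms_raised(frame))  # True
```

In a real deployment this check would be one of many posture classifiers running per frame, with the positive result feeding an alerting pipeline rather than a print statement.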
The adoption of behavioural analytics will only grow in the future. In the meantime, it has increased awareness of the value of AI-fortified surveillance systems and their benefits to enterprises across several industries.
AI-based video analytics at large are also creating efficiencies and offering non-security-related insights for businesses. In the retail market, for example, store owners using surveillance cameras with analytics can spot shoplifters and alert security personnel to intervene in real time. In-store analytics can also measure hotspots, visitor flow, dwell time and product display activity. Smart cities, too, are leveraging networks of intelligent sensors to capture data, organise system responses to incidents as they unfold, and improve processes like traffic flow.
Police in New York, New Orleans and Atlanta now use cameras equipped with video analytics to improve investigations. In Hartford, Conn., a police network of 500 cameras includes some AI-enhanced units that can search hours of video to find people wearing certain clothes or use license-plate recognition to identify places where a suspicious vehicle was last seen. These units can also issue loitering alarms, detect discarded objects, and flag people or objects that enter a pre-defined field. These deployments represent some of the early adopters of video analytics in surveillance applications in the US.
Though the breadth of its capabilities seems to grow every day, one of the more impressive facets of AI analytics lies “under the hood.” The driving model underlying the development of video analytics is a network of artificial neurons that learn to recognise patterns in the same way a human or animal brain and nervous system does. In this sense, cameras behave like the retina and data networks process information like the brain.
The first thing that makes all this possible is graphics processing units (GPUs). These enable parallel processing, as opposed to serial processing, allowing multiple computations to occur at the same time rather than one after another. In other words, GPUs allow a computer to multi-task, much in the same way billions of interconnected neurons allow the brain to do the same. GPUs also allow for scalability: as “Big Data” continues to get bigger, AI-powered systems should have no problem keeping up, because users can add GPUs to servers to accommodate the increased data analytics workload.
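The serial-versus-parallel distinction can be sketched in a few lines. This is purely illustrative: a GPU runs thousands of hardware threads over pixel data, whereas the thread pool below only mimics the shape of the idea, dispatching per-frame work concurrently instead of one frame at a time. The `analyse_frame` stand-in is an invented placeholder for real analytics work.

```python
from concurrent.futures import ThreadPoolExecutor

def analyse_frame(frame_id):
    # Stand-in for per-frame analytics work (e.g., running object detection).
    return frame_id * frame_id

frames = list(range(8))

# Serial processing: one frame at a time, in order.
serial_results = [analyse_frame(f) for f in frames]

# Parallel processing: frames dispatched concurrently across workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel_results = list(pool.map(analyse_frame, frames))

print(serial_results == parallel_results)  # True: same answers, different scheduling
```

The point is that the per-frame computations are independent, so they can be spread across as many processing units as are available; adding GPUs to a server scales the same way adding workers to the pool does here.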
Another important contributing factor to video analytics is data. For a neural network to “learn”—to identify objects in a still frame, such as weapons or postures that indicate disconcerting behaviour—deep learning must first take place. This involves processing large amounts of data in order to identify anomalies and learn the common characteristics they share. The more data a system digests, the more refined its identification becomes.
Neural networks used in video surveillance today typically analyse still frames. They recognise “what is.” As neural networks continue to evolve, they will one day be able to recognise “what is happening,” which will be a significant advancement in truly being able to understand behaviour.
As AI analytics put surveillance solutions on the front lines of crime detection, the data storage and technologies powering these solutions must operate at the highest level. Neural networks can meet a facility’s needs by learning from video material obtained on-site, but none of that learning can take place if recording throughput is not highly reliable. Moreover, none of those deep learning insights will benefit an organisation if video frames are dropped due to low-performing storage systems.
In order for intelligent surveillance systems with AI analytics to function optimally, edge-to-cloud storage infrastructure must evolve. To accommodate such an influx of video and metadata from the surveillance AI, a new architecture that leverages both edge and cloud computing is needed. Storage leaders today refer to this configuration as IT 4.0. Deploying AI-enabled NVRs and appliances at the edge allows initial analysis to take place on-site, nearest where the data was first captured, reducing latency and improving efficiency. Thus, sports and athletics department personnel at a university could receive immediate notification if an unauthorised individual, detected by an outdoor camera, walks into a football stadium after hours.
With IT 4.0 architecture, after basic processing takes place at the edge, video and data are then transferred to a centralised environment for long-term retention and deep learning. Continuing with the education example, a university operating a public or private cloud could aggregate video and data from all surveillance systems deployed across the various departments on campus. With this holistic picture, school directors could identify foot traffic patterns on campus and other insights to aid in operations planning.
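The edge-to-cloud split described above can be sketched as two cooperating components: the edge decides locally which detections matter (low latency), and the cloud aggregates everything forwarded to it (long-term retention and learning). The class and field names here are invented for illustration; real deployments use NVR firmware and cloud storage services, not in-process Python objects.

```python
class EdgeNVR:
    """Runs initial analytics on-site, forwarding only flagged events."""
    def __init__(self, site):
        self.site = site

    def process(self, detections):
        # Keep latency low: decide locally which detections warrant an alert.
        return [{"site": self.site, "event": d} for d in detections if d["alert"]]

class CloudArchive:
    """Aggregates events from every site for retention and deep learning."""
    def __init__(self):
        self.events = []

    def ingest(self, events):
        self.events.extend(events)

cloud = CloudArchive()
stadium = EdgeNVR("stadium")
cloud.ingest(stadium.process([
    {"alert": True, "label": "person_after_hours"},
    {"alert": False, "label": "vehicle_daytime"},
]))
print(len(cloud.events))  # 1: only the flagged event reached the cloud
```

Filtering at the edge is what keeps the bandwidth and storage demands on the central environment manageable while still giving it the campus-wide picture.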
Building storage to accommodate standard surveillance systems is one thing. However, building storage to support Big Data applications that utilise dozens of high-definition cameras and process AI events simultaneously is quite another. Drilling down to the storage components, it is critical for customers to consider the hard drives powering their appliances and servers. These hard drives must “write” large quantities of data, as footage is transmitted from the edge to the cloud and “read” that same data in real time, in order to detect, identify and deliver intelligent insights.
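A back-of-the-envelope calculation shows why drive throughput matters at this scale. The camera count and per-camera bitrate below are assumptions chosen for illustration; actual figures depend on resolution, frame rate and codec.

```python
def required_write_mbps(cameras, mbps_per_camera):
    """Aggregate sustained write rate (megabits/s) the recorder's drives must absorb."""
    return cameras * mbps_per_camera

def daily_storage_tb(cameras, mbps_per_camera):
    """Terabytes written per day at that rate (1 TB = 8e6 megabits)."""
    seconds_per_day = 24 * 60 * 60
    return required_write_mbps(cameras, mbps_per_camera) * seconds_per_day / 8e6

# Hypothetical deployment: 64 high-definition cameras at 8 Mb/s each.
print(required_write_mbps(64, 8.0))          # 512.0 Mb/s sustained writes
print(round(daily_storage_tb(64, 8.0), 2))   # 5.53 TB recorded per day
```

And this is the write load alone: the same drives must simultaneously serve reads for real-time analytics, which is why sustained mixed-workload performance, not just capacity, is the figure to check.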
As a best practice, system integrators should swap out standard hard drives, which are designed to operate only about 40 hours a week, for surveillance-optimised hard drives built for 24/7 workloads. Select hard drives with built-in health monitoring software so that any issues that could lead to data loss are identified prior to failure. Also consider subscribing to data recovery services for additional peace of mind for customers.
Intelligent systems enhanced by AI and other specialty software like behavioural analytics provide tools to identify everyday crime, such as shoplifting, workplace violence and perimeter intrusion, and equip decision-makers with information to improve real-time response. These smart solutions are designed to save lives; but more than anything, they are deployed to keep staff, employees, students and residents safe. For a world that experiences devastating headlines every day of the week, the feeling of safety can go a long way.
Co-written by Jason Bonoan, Global Product Marketing Manager at Seagate Technology, and Alan Ataev, Chief Executive Officer at AxxonSoft