Despite the desire to leverage onboard processing, much work will be required to make these dreams a reality, writes Matt Powell, Managing Director of ISS (Intelligent Security Systems).
For as long as IP technology has dominated the video surveillance landscape, there has been a simultaneous effort to push more and more capabilities to the edge.
However, the zeal with which some are advocating for the industry to increasingly move to “serverless” environments is simply not matched by the technological capabilities of today’s cameras.
In theory, the ability to move from server-based to onboard processing would provide several significant benefits to end users, with infrastructure savings being chief among them.
If organizations could deploy cameras without expanding their existing network backbone or installing customized servers, they would realize substantial cost savings on all future video surveillance deployments. Under current technology paradigms, however, those monetary savings would quickly be negated by performance challenges.
There has been heated debate over whether Moore’s Law – the idea that the number of transistors on computer chips doubles roughly every two years, yielding ever more powerful processors – still holds. Yet even at the current pace of innovation, the ability to process high-resolution images inside the camera itself, alongside data-intensive applications like analytics and even basic searching and archiving of footage, is still years away from becoming a realistic possibility.
So, there is a delicate balance that needs to be achieved by businesses that want to take advantage of the tools provided by modern video surveillance offerings while also reducing their hardware infrastructure.
Here are a few of the current limitations of edge processing and the challenges awaiting both integrators and end users looking to shift to more decentralized video surveillance systems.
With the bulk of security technology developments today being focused on how businesses can start to take advantage of AI solutions in greater numbers, the current state of onboard processing means end users opting for an edge-focused approach would have few analytic tools at their disposal.
Outside of running basic functions, such as virtual tripwires, there is precious little processing power within most cameras today to run any type of complex analytics along the lines of facial recognition, license plate recognition (LPR) or object detection and identification.
Failing to account for the growth in use cases and applications for analytics means that you would not be able to effectively futureproof your system.
Will a new generation of embedded chip technology one day address the processing requirements of having onboard AI and other capabilities?
Perhaps, but will organizations be able to hold off on taking advantage of analytics until these chipsets are ready? That is highly unlikely.
According to a recent research report published by Markets and Markets, the global video analytics market was valued at just over $7 billion in 2022 and is projected to grow at a compound annual growth rate (CAGR) of 23.4% over the next five years, resulting in a projected total market size of $20.3 billion by 2027.
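The cited projection can be sanity-checked with simple compound-growth arithmetic. A minimal sketch, assuming a 2022 base value of roughly $7.1 billion to match the "just over $7 billion" figure:

```python
# Compound annual growth rate (CAGR) projection for the video analytics
# market, using the figures cited from the Markets and Markets report.
# The 2022 base of $7.1B is an assumption to match "just over $7 billion".
base_2022 = 7.1   # market size in $ billions, 2022 (assumed)
cagr = 0.234      # 23.4% compound annual growth rate
years = 5         # 2022 -> 2027

# Standard compound growth: future = base * (1 + rate) ** years
projected_2027 = base_2022 * (1 + cagr) ** years
print(f"Projected 2027 market size: ${projected_2027:.1f}B")  # ~ $20.3B
```

The result lines up with the report's $20.3 billion figure for 2027, confirming the numbers are internally consistent.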
As an ever-increasing number of businesses start to realize the value of analytics, there will be downward pressure on all organizations to begin utilizing these technologies and those businesses lacking the necessary surveillance infrastructure will find themselves struggling to keep pace.
We therefore face the challenge of customers simply buying the wrong product for their applications.
Not all analytics are built the same, but many in the industry still lack the requisite knowledge to differentiate between the basic capabilities that were introduced to the market over a decade ago and the new, AI-enabled solutions being deployed today.
Virtual tripwires are not the same as advanced, neural network-trained analytics that solve challenging business problems, such as logistics management and worker safety.
This could result in many end users purchasing “AI cameras” that do not have the processing power to meet the needs of the business, giving both manufacturers and integrators a bad name.
Despite how crucial storage is to video surveillance operations, it remains an afterthought for too many organizations when evaluating their security infrastructure.
Though there may not be as many dedicated storage vendors as there were in the mid-to-late 2000s, storage is still one of the primary factors dictating video surveillance system design, and it must be considered by anyone looking to make the shift to an edge-based processing environment.
Without either an on-premises appliance or cloud-based recording solution in place, attempting to both store footage and process images and other applications in tandem would be a recipe for disaster.
Of course, the evolution of microSD cards means that businesses can and do store video footage on the cameras themselves in certain circumstances, but these capabilities have not yet reached a point where they can be relied upon 24 hours a day, seven days a week, as most security environments require.
Typically, when video is being stored on a microSD card, it is done so for a designated period, say 30 days, until it can be offloaded somewhere else internally for longer-term storage and/or use.
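A back-of-the-envelope calculation shows why a 30-day retention window strains onboard storage. This sketch assumes an average stream bitrate of 4 Mbps, which is only illustrative; real bitrates vary widely with resolution, codec, frame rate and scene activity:

```python
# Rough storage estimate for onboard (microSD) video retention.
# The 4 Mbps average bitrate is an assumption for illustration only.
bitrate_mbps = 4      # assumed average stream bitrate (Mbps)
retention_days = 30   # retention window cited above

seconds = retention_days * 24 * 60 * 60          # seconds in the window
storage_gb = bitrate_mbps * seconds / 8 / 1000   # Mb -> MB -> GB (decimal)
print(f"{storage_gb:.0f} GB for {retention_days} days at {bitrate_mbps} Mbps")
```

Even at this modest assumed bitrate, a single camera needs roughly 1.3 TB for 30 days, already beyond today's highest-capacity microSD cards, which is why onboard recording is typically paired with periodic offloading.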
This is especially prevalent in mobile and wireless environments where dedicated, centralized management of video from each fleet vehicle or remote facility is simply not feasible.
However, exclusively using onboard storage in traditional brick-and-mortar environments is nearly unheard of and would place an extraordinary performance burden on these devices which were not originally designed to be used in this fashion.
Recording a day or two’s worth of video onto a microSD card and then moving it to another location, as is the case with many school and transit vehicles, is much less demanding than trying to both record video and run other applications on top of it for real-time situational awareness.
Commonly, the argument for “the edge” is a reduction in hardware, but when it comes to recording, the customer must still have a device onsite.
For many integrators, providing a customer with the most capable cameras plus an appliance for recording and analytics processing will remain the best option for customer satisfaction and integrator bottom lines in a world where direct selling models and limited hardware are cutting into their margins.
In addition to the tech limitations of current edge processing solutions, the industry must also ask itself if the perceived cost benefits of moving to an edge-based surveillance model will really pan out in the long run.
On one hand, less physical hardware equates to less required infrastructure, but what about when something breaks internally?
Rather than going to a server closet to switch out a drive, you’re now going to be calling an integrator or perhaps even the professional services team of a vendor to roll a truck and put a technician on a ladder to replace the defective part or possibly the entire camera.
While this may be only a nuisance in a small, quick-service restaurant (QSR) environment with four cameras, in enterprise-type deployments with large numbers of cameras, this can quickly become quite expensive.
Aside from being both costly and time-consuming, this also impacts the ability of an end user to scale their system as needed.
Trying to individually monitor and address issues like these as they arise would quickly become unwieldy for those with high camera counts. Let’s face it – the more complex a camera, the more likely it is to have some maintenance challenge over its lifetime.
That’s not even considering the fact that many of the innovations that have made edge capabilities viable, such as high-capacity microSD cards, were also not designed for long-term use and would need to be updated on a per-camera basis over the life of the system.
Lastly, we come to the most important customer of all – the customer we already have. With over 70 million cameras installed in the US alone and, by some estimates, over a billion installed globally, the move to the edge creates a significant cost challenge for those who have already purchased and installed a camera system.
While every camera manufacturer wants to sell a new camera and every integrator wants to install a new camera, we must take into account that, for the vast majority of existing installations, a server can bring them into the world of AI right now, today.
Will all the current obstacles to edge processing be solved at some point in the future? More than likely they will, but trying to simply wish them into existence won’t make the current technologies any more capable of handling these challenges.
In the meantime, be sure that the infrastructure you have in place meets the demands of the application, as on-prem or cloud processing is still needed to get the job done in most cases today.
Matt Powell is Managing Director for North America at ISS (Intelligent Security Systems), a pioneer and leader in the development of video intelligence and data awareness solutions. He has over two decades of experience in security and transportation technologies having formerly served as Principal-Infrastructure Markets at systems integrator Convergint and as a developer of transportation market strategies for Videolarm and Moog prior to that.
This article was originally published in the August edition of Security Journal Americas.