International Security Journal speaks with esteemed organisational psychologist, Dr Craig Donald, about the use of AI in security and the upcoming Global MSC Security online event.
What impact is AI having on security?
I think one of the problems I am having is defining AI. If we talk about things like motion detection and sensitive zones in terms of areas that will trigger a response, I think it’s quite extensive. If we talk about behavioural analysis, however – such as moving towards a perimeter fence, moving up and down a fence or being in an area of an airport where you are not supposed to be – I think it’s got a lot of application opportunities, especially when an organisation is trying to protect very large areas that are supposed to be under surveillance.
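The "sensitive zone" idea mentioned here – an area that triggers a response when someone enters it – reduces to a simple geometric test. A minimal sketch, with entirely invented coordinates:

```python
# Illustrative "sensitive zone" check: the alarm zone is a rectangle in
# image coordinates, and a detection triggers only when a detected
# position falls inside it. All values here are invented for the example.

ZONE = {"x_min": 100, "x_max": 300, "y_min": 50, "y_max": 200}

def in_sensitive_zone(x: float, y: float, zone=ZONE) -> bool:
    """True if the detected position lies inside the alarm zone."""
    return (zone["x_min"] <= x <= zone["x_max"]
            and zone["y_min"] <= y <= zone["y_max"])

print(in_sensitive_zone(150, 100))  # inside the zone → True
print(in_sensitive_zone(50, 100))   # outside the zone → False
```

Real systems use arbitrary polygons and tripwires rather than rectangles, but the principle – geometry first, alert second – is the same.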
If we have a look at more specific applications around particular behaviour, I think a lot of it is still in trial. To some extent, security and operations are, in a sense, a bit of a guinea pig for experimentation around the use of AI.
How can AI impact the roles of camera operators?
AI is ideal for detection purposes and in situations where operators are too tied up to do things themselves, such as when there is too much area and too many cameras to cover. Where it can set an alarm and notify the operator, that is fantastic.
I’ve got examples where number plate recognition is being linked to crime databases. For example, if a vehicle linked to known suspects enters an area, you can pick up on the suspect vehicle and respond very quickly. Similarly, facial recognition could alert you to a person of interest who you might then follow more carefully. This is definitely a good utility for operators who just haven’t got the time, the expertise or the scope of coverage. At the same time, whilst it can enhance, it can also detract; as soon as you start feeding information into a control room, you have to question whether that information is of high quality. How reliable is it?
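The linkage described here – a plate read checked against a database of vehicles of interest – can be sketched in a few lines. The plates and watchlist below are invented examples, not from any real system:

```python
# Illustrative sketch: checking ANPR plate reads against a watchlist.
# The plates and reasons here are invented, not real data.

WATCHLIST = {
    "AB12CDE": "vehicle linked to known suspects",
    "XY99ZZZ": "vehicle reported stolen",
}

def check_plate(plate_read: str):
    """Return an alert string if the plate is on the watchlist, else None."""
    plate = plate_read.replace(" ", "").upper()  # normalise the OCR output
    reason = WATCHLIST.get(plate)
    if reason:
        return f"ALERT: {plate} - {reason}"
    return None

# A matched plate raises an alert for the operator; others pass silently.
print(check_plate("ab12 cde"))  # → ALERT: AB12CDE - vehicle linked to known suspects
print(check_plate("GH34JKL"))  # → None
```

The quality question raised above lives in the `plate_read` input: if the OCR misreads a character, the lookup silently misses, which is exactly why the reliability of what reaches the control room matters.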
What will your presentation focus on at the Global MSC Security online event?
We are looking at behaviour and AI recognition through cameras. I want to convey that behaviour is a complex thing to understand. Examining social behaviour is not easy for people, never mind machines, and there are a load of different cultural and environmental contexts that you need to assess.
Fortunately, we have a set of crime behaviour criteria that we can look at which are fairly generic across different places and cultures – the more we can pick up those crime behaviours, the more effective we can be in detecting people who are going to commit a crime or are in the process of committing one.
This is one of the problems with AI, however – AI doesn’t recognise behaviours, it recognises pixel movements. It is programmed with an algorithm and it reinforces itself in certain ways. You need to be able to get the AI to recognise those kinds of situations and, at this time, that is done by security people, who we hope have the knowledge to do it, but that is not always the case. One of the big claims people make is that AI learns by itself. There are lots of situations where self-learning goes entirely the wrong way. Often when people sell you technology, or want to introduce it, they say, “you can train this,” even though they are not the ones spending the time on the training. They give it to you and then you have the obligation to do the training, which can be very time consuming.
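The point that AI “recognises pixel movements” rather than behaviour can be illustrated with the simplest form of motion detection – frame differencing, which flags changed pixels without any notion of what caused the change. A minimal sketch using plain lists of intensities in place of real video frames:

```python
# Minimal frame-differencing sketch: "motion" is just pixels whose
# intensity changed by more than a threshold between two frames.
# There is no notion of behaviour anywhere in the computation.

def motion_pixels(prev_frame, curr_frame, threshold=10):
    """Count pixels whose intensity changed by more than `threshold`."""
    changed = 0
    for prev_row, curr_row in zip(prev_frame, curr_frame):
        for p, c in zip(prev_row, curr_row):
            if abs(p - c) > threshold:
                changed += 1
    return changed

frame_a = [[0, 0, 0], [0, 0, 0]]
frame_b = [[0, 0, 0], [0, 200, 190]]  # two pixels change sharply

# Two pixels exceed the threshold, so an alarm rule might fire here -
# whether it was an intruder or a cat, the algorithm cannot say.
print(motion_pixels(frame_a, frame_b))  # → 2
```

Everything an operator would call “behaviour” has to be layered on top of counts like this by whoever configures and trains the system – which is the obligation described above.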
How has crime behaviour analysis evolved in recent years?
I think there’s been a really big push for behaviour analysis in the aviation industry – for obvious concerns around people hijacking or causing other problems – and it has been relatively successful in a number of places where people have been identified. I think the criteria for it have also been addressed in a lot more ways.
Police have had profilers for a number of years and police officers are supposed to be good at getting a feel for who is showing behaviour that might indicate criminal intent. However, if you don’t have exposure to these environments, or to cases of crime, then you are not so familiar with what goes on.
Initiatives such as the development of the International Association of Behaviour Detection and Analysis as a professional home for behaviour detection officers are also a great step towards making an impact in the area.
How do you see the future of the sector developing over the next five years?
I believe there’s going to be a lot of interaction between AI manufacturers and users but not all of it will be smooth.
I think AI is usually good at one aspect. There’s AI looking at micro facial expressions in customs control, for example. Although it can’t tell you about other aspects of what is going on, it’s really good at picking up micro facial expressions because the camera is looking for them. However, if you have a camera at your supermarket, that camera is not capable of identifying the same details unless it’s extremely close to your face. There might be some marriage of different technologies so you can start relating things together more, which I think is going to be an essential part of where AI needs to move forward.
You need multiple indicators about a situation to get a good perspective and I think a lot of AI tends to be more specialised in one area, rather than a generic application. That’s because the people generating the algorithm are good at doing that particular thing.
There’s a lot of talk about situational awareness now and how technology can improve situational awareness, but it can also distract from it. A person in a control room has to be aware of what is going on across a whole range of inputs. How you integrate the technology into ongoing operations is going to provide some interesting human factors insights.
The Global MSC Security ‘Developing Smart Surveillance Operators’ Special Online Event is free-to-attend and takes place on 16 March at 13:00 (GMT). Registration is open now at: www.globalmsc.net/seminars