
Transform Video Surveillance into Digital Retail Insights

Retail tech

In a world beset by health and safety concerns, video surveillance has gained new value. For example, automated check-ins can reduce interpersonal contact—and provide a friendly reminder to mask up.

But video is still a challenge to manage. To learn how retailers can simplify matters, technology power couple Sarah-Jayne and Dean Gratton spoke with experts from Digital Barriers, a leader in AI-powered and IoT-connected safety and security systems. Here’s what they discovered about intelligent cameras and video-as-a-service.

Video Over Cellular Networks

Dean Gratton: So, Graham, you’re new to the team.

Graham Herries: I’m Graham Herries, SVP of Engineering for Digital Barriers.

Sarah-Jayne Gratton: Zak, great to have you here. Tell us, what does Digital Barriers do, and what’s your role there?

Zak Doffman: I’m one of the founders of the business and the chief executive. Digital Barriers specializes in edge intelligent video solutions. And what that means is that we specialize in getting video back from wherever it’s captured to wherever it’s needed in absolutely real time, over primarily wireless networks. We do lots of analysis on the edge as well, so the video that we return is the video that’s actually required.

We started our life in the military and intelligence space, and then about two years ago we pivoted the business, so that we could take the same technologies into the broader commercial and critical infrastructure worlds.

Dean: What was your experience like when you first started with the military compared to today where you’ve got 5G technology?

Zak: It’s a great question. The core technology that we use to do very low bandwidth, zero latency video streaming is proprietary, and it was invented 15, 20 years ago. It’s kind of long in the tooth, as they say. It was able to stream live video over 2G, so way before any of the broadband wireless networks that we see today.

Every time a new wireless technology arrives, the available bandwidth grows, and the desire for live video grows with it. Each time there has been a change in network protocols, we’ve seen a huge surge in our growth, and I think we’ll see the same thing with 5G.

What we’ve got is a technology that is very happy on 5G but then will move down to LTE, to 4G, or to 3G as required, so that the quality of service that a customer actually gets in the real world is always good.

Dean: That really does echo, “If it ain’t broke, don’t fix it.” I did have a question maybe a few weeks ago where someone said, “Well, I’ve got a fixed infrastructure which is wired. It’s working. Shall I upgrade it to a 5G service?” I just wondered, “Why would you do that?” Because if it’s working, why change it?

Zak: Again, you’ve hit on another problem, which is that we’re in a hybrid world of fixed and wireless connections. Customers don’t really care whether a connection is coming over wireless or fixed. They just want it to work, and they want it to be the same.

What we have is the ability to run the same kind of analytics regardless of the bearer. So, if I’m a customer and I’ve got 10,000 video streams, two-thirds of them might be fixed and a third on wireless. But I want the same dashboard, I want the same analytics. I want to be able to manage all those video streams in the same way.

The fact that it’s a different technology streaming video from a vehicle or a body-worn camera than from a CCTV point in an open public space is irrelevant to the customer; they don’t care, they just want it to work.

Video Privacy and Security

Sarah-Jayne: How do you ensure the privacy and security of the videos that are transmitted over public airwaves, for example?

Zak: We do this in two ways. The first thing is, when people talk about secure transmission of video, often what they mean is they’ve just put a VPN around it. In essence, they’ve built a secure tunnel, and they’ve piped the video through that.

The problem with that is it adds quite a significant overhead to the amount of data that you’re pushing. It could be a 20, 25% overhead. So if you were constrained for bandwidth before, you simply make the problem worse.

What we do is control the streaming technology, the codec. We’re able to build encryption into that codec, with only a 1 or 2% overhead for making sure that it’s encrypted.

The second thing is we’re end-to-end, so we’re encrypted at both sides. Although we’ll decrypt and decode when the video lands in its secure location, we can ensure that that video isn’t compromised. We can wipe endpoints. We can watermark video. We can do everything to ensure that that video is exactly what’s captured, and we can tell you when it was captured.

Edge AI

Sarah-Jayne: Yeah. So, guys, tell me, what is edge AI and why does it matter?

Zak: Our USP is the combination of the ability to get live video back where it’s needed, when it’s needed, but also to analyze what’s taking place at the edge. One way to reduce the amount of video that you’re streaming, and to ensure that the stuff that lands back on somebody’s screen is important, is to ensure that you’re analyzing it.

What edge AI means simply is that you’re running AI-based analytics on the edge, as opposed to trying to pipe all of that video back to the cloud and run all of your analytics in the cloud, which traditionally is how these things have worked.

What we’re seeing at the moment is almost like two camps. We’re seeing increasing amounts of AI capability within silicon: AI devices that are, in essence, pre-programmed to conduct certain levels of analysis. Or we’re seeing these huge cloud players that are able to run all kinds of different business and security analytics, but on huge volumes of video data in the cloud. That means you’ve pretty much got a fixed cable from the camera back to the cloud, and you’re piping everything back.

What we do is what we call hybrid analytics. We’re able to mix and match what we do at the edge and what we do at the center, and that means, for example, if you’ve got a more capable edge processor, you can do more at the edge, and if you haven’t, then you push more data back to the cloud. That makes it very efficient for a customer.

What’s also important is that we can use the cloud to provide backup, as needed, to what’s taking place at the edge. A great way to think about this is that what we try to do at the edge is narrow down the bit of the haystack where the needle may be, while what we do in the cloud is find the needle.

We do that by, in essence, sending certain events back to the cloud where we think there may be something that fits whatever’s being looked for, and then we run much more efficient and effective and powerful analytics in the cloud, where we have unlimited processing, to determine if that’s a false alert or that’s a real detection. And all that happens sub-second.

By the time the customer gets an alert, we’ve done all that. They don’t know that that’s what’s taken place, but that makes our analytics much more accurate.
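
To make that hybrid pattern concrete, here is a minimal Python sketch of the flow Zak describes: a cheap filter at the edge discards most frames, candidate events go to a heavier cloud-side check, and only confirmed detections reach the customer. The function names, threshold, and data structure are illustrative assumptions, not Digital Barriers’ actual implementation.

from dataclasses import dataclass

@dataclass
class Frame:
    camera_id: str
    timestamp: float
    pixels: bytes  # raw or encoded image data

def edge_filter(frame: Frame) -> float:
    """Cheap on-device analysis: score how likely the frame contains something of interest.
    Placeholder for a lightweight DNN running at the edge."""
    return 0.1

def cloud_verify(frame: Frame) -> bool:
    """Heavier cloud-side model that confirms or rejects a candidate event.
    Placeholder for the 'find the needle' step run with effectively unlimited processing."""
    return False

def alert_customer(frame: Frame) -> None:
    """Deliver a confirmed alert to the customer's dashboard (placeholder)."""
    print(f"ALERT: camera {frame.camera_id} at {frame.timestamp}")

EDGE_THRESHOLD = 0.5  # only candidates above this score leave the edge

def process(frame: Frame) -> None:
    # Narrow down the haystack at the edge: most frames never leave the device.
    if edge_filter(frame) < EDGE_THRESHOLD:
        return
    # Find the needle in the cloud, then alert only on real detections.
    if cloud_verify(frame):
        alert_customer(frame)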

Dean: So what does AI actually mean? Is it about the predictive analytics? Is it about the data crunching?

Graham: Yeah. As you highlighted, deep learning neural networks are a subset of the broad term AI. We’ve been investing in deep neural networks, DNNs, for quite a few years now. Luckily, I’ve got a very specialist team who’ve been doing this for 15, 20 years, including multiple people with PhDs in video analytics.

One of the things we’re increasingly seeing, due to advancements in chipsets, frameworks, and libraries, is the ability to do DNNs at the edge a lot more cost-effectively.

Post-Pandemic Surveillance Needs

Sarah-Jayne: How do you think retail and hospitality applications for video surveillance have changed in the wake of the pandemic?

Zak: It’s a great question. I think things were changing anyway. There’s a lot of talk about the virtualization of video storage and using the cloud as a back-end, rather than having complex on-premises solutions. I think everyone was heading in that direction.

Even before the pandemic, you were starting to see much lighter-weight, cost-effective, easier-to-deploy video-surveillance-as-a-service applications hitting the hospitality and the retail sectors. Now, the pandemic’s completely changed the relationship between those sectors and their customers, and the responsibilities that they have.

There are all kinds of rules and regulations in different countries around the world about how many people can enter a particular location, whether they have to be wearing face coverings, and whether they have to be a certain distance apart. There are lots of premises that are still closed or have different opening hours, and that leads to a different level of security requirement.

What we pride ourselves on is that we have a platform which enables us to build new capabilities quickly. A great example of that is what we’re doing at the moment to address customer needs around the latest impact on those sectors: mask detection and people counting.

If you’re running a store or a hospitality facility and there’s only a certain number of people allowed in at a time and you need to make sure that those people know they’re supposed to be wearing face coverings, that’s quite an onus to put on your staff, to have to confront people every day of the week.

What we’re able to do is use technology. We can let people know if their location is full, or, if they’re not wearing a mask or a face covering, that they should put one on. It’s not confrontational; it’s an advisory notice. In that way we’re able to take some of the sting and the confrontation out of it and make life easier for the people running those sectors.
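
As a rough illustration of that advisory flow, the logic can be as simple as the Python sketch below. The occupancy limit, event names, and display function are hypothetical; a real deployment would be driven by the camera analytics and local rules.

MAX_OCCUPANCY = 20  # illustrative store limit, set by local rules

occupancy = 0

def show_advisory(message: str) -> None:
    """Placeholder for a door-side display or audio announcement."""
    print(message)

def handle_event(event: str) -> None:
    """React to events emitted by the entrance camera analytics (hypothetical event names)."""
    global occupancy
    if event == "person_entered":
        occupancy += 1
        if occupancy > MAX_OCCUPANCY:
            show_advisory("The store is currently at capacity. Please wait outside.")
    elif event == "person_exited":
        occupancy = max(0, occupancy - 1)
    elif event == "no_face_covering":
        show_advisory("Please remember to put on a face covering.")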

As we move into the next year, I think we’re starting to see certain trends again that technology will reflect. We’re looking at things like contact-free identity assurance. We’re already seeing some of the technologies, and this is back to the facial recognition point: we’re all used to going to e-passport gates and using our faces, instead of handing over a piece of paper to a border officer, to go through one of those kiosks.

Facial Recognition Regulations

Dean: Zak, how do you overcome those public concerns with facial recognition?

Zak: It’s difficult because there’s a kind of regulatory vacuum in most countries at the moment, and where we are seeing regulation, it may itself be an overstep. If you look here, say in the U.K. and in the U.S., traditionally there haven’t been any rules or regulations, any limitations, and what that’s allowed is the industry and its customers, in essence, to overstep because there’s no guidance.

I think what we need is regulation. We need to say, look, there are some absolutely clear use cases for things like facial recognition in a security environment. But if you try to use the technology to identify shoplifters or to stop somebody who’s kicked out of a bar from getting back in, you’ve got no public consensus. Most people think that’s an overstep.

So, I think it falls to the regulators, the government, the lawmakers, to actually set some limits and say, “Look, we start with the obvious stuff, where the use cases are clearly not contentious, and then we have to decide where the line is, what we’re prepared to do.”

Dean: How accurate is facial recognition? For example, when I go out and about in public, I tend to wear a hat, and I’ve got this fuzzy face. How accurate is the technology today to overcome those subtleties?

Zak: The normal rule of thumb with face rec is that the best technologies will recognize somebody if a person who knows them would recognize them. If one of your friends passed you on the street and they were a bit disguised, you’d probably recognize them. If they were too disguised, you wouldn’t. The technologies are broadly the same. But clearly there’s no limitation to people you know: the technology can recognize an unlimited set of subjects.

Facial recognition is all about maths and data quality. Those are the two things to keep in mind: the quality of the images or the video against which you’re comparing people, and then the quality of the video that you’re capturing at the scene, which depends on lighting, environmental conditions, and the positioning of the camera.

If that’s all very good, you have a very good chance of 99.99% recognition accuracy. The more you compromise that, the harder you make it: you might have a surveillance photograph that’s very poor, or you might be operating in a shadowy environment, in bad weather, in bad light. So that’s the first consideration.

The second is about maths. If I put tens of thousands of people on a watch list and put a camera in a very busy place, and tens of thousands of people walk past, every person is being compared to tens of thousands of people. You’re into the hundreds of billions of calculations. So, even a 0.001% error rate, that’s a lot of people who are actually going to be misidentified.
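
A quick back-of-the-envelope calculation shows why that maths matters. The numbers below are illustrative assumptions, not figures from Digital Barriers, but they show how even a tiny per-comparison error rate multiplies into a large number of misidentifications at a busy site.

watch_list_size = 50_000       # identities on the watch list (illustrative)
passers_by_per_day = 100_000   # people walking past the camera each day (illustrative)
false_match_rate = 1e-5        # 0.001% chance that any single comparison is wrong

comparisons = watch_list_size * passers_by_per_day        # 5,000,000,000 comparisons
expected_false_matches = comparisons * false_match_rate   # roughly 50,000 per day

print(f"{comparisons:,} comparisons -> about {expected_false_matches:,.0f} false matches per day")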

You factor that into how you do it, and because we specialize in operating in difficult environments, we’ve got all kinds of tools and tricks to make it much more accurate than competing technology. We’ll help our customers understand whether there are constraints in the quality of imagery or the quality of the environment to factor in, and we’ll help segment watch lists so that the system is as accurate as possible.

It’s all about outcomes. Where we’ve worked with law enforcement agencies, they’re hugely positive about the impact that the technology’s had to help them pick very bad guys off the street. Our technology has tended only to be used for the really bad guys, serious criminals, dangerous individuals, terrorists, threats to national security.

Video Surveillance as a Service

Sarah-Jayne: Digital Barriers, you offer video surveillance as a service. Can you tell us a bit more about what that means?

Zak: VSaaS is the biggest shakeup to the video surveillance and security industry probably ever, realistically, certainly since IP video became mainstream. What it is, it’s taking all of the complexity, all of that hardware, all of that cabling out of the equation, and it’s putting everything into a virtual cloud environment. It’s giving you complete flexibility at your endpoints.

You can take a camera, configure it through the cloud so that it’s set up on your cloud back-end, and, in essence, rent it as a service. It means that you can push analytics out to those edge endpoints, whether they’re CCTV cameras or body-worn cameras or vehicles, and manage the whole thing virtually, as a service, cost-effectively. The real impediment to these huge video schemes in the past has been the servers, the cabling, the redundancy, and the power management. All of that is taken out of the equation.

Right now, VSaaS is still tiny compared to the whole market. We’ve seen it in the home market, with the likes of Amazon and Google and others getting into the game, providing the cameras that many of us have at home now, which are clearly linked to a cloud back-end. We’re starting to see that model make it into the commercial environment, and that’s what we’re providing.

We work with partners like Vodafone to provide a kind of a hosted video surveillance solution, video security solution, and we think that that is the future of the industry. If you look at the analysis, the VSaaS market is going to completely disrupt the video surveillance and security market over the next 10 years, and it will have the same impact that IP had on analog video 10, 15 years ago.

Sarah-Jayne: Everything’s heading toward cloud, isn’t it? Everything’s moving to the cloud.

Zak: I think for this, video’s hard to manage. Video’s a really badly behaved data type. It’s hard to move around. It’s hard to store. It’s hard to search. It’s hard to retrieve. So you want to use common tools and techniques and somebody else’s scalable back-end to manage it, and then run really sophisticated analytics to limit what you stream and what you store, because what you don’t want is petabytes of unneeded video that you’re never going to watch, clogging up somebody else’s cloud service and costing you a lot of money.

The whole thing is not just about taking a server and putting it in the cloud; it’s a re-architecture. It’s about what people actually want.

What’s really interesting here is, and this has been driven by the California tech giants, traditionally, in video security, you captured everything, you stored everything that happened for weeks or months, or even years, and it was there just in case you needed it.

What we’re seeing now is that requirement to get video to where it’s needed, when it’s needed, and then you store the important stuff. We can obviously do either, but actually that second shift really lends itself to a kind of a cloud VSaaS model.

Dean: You talked about video size, Graham. Video is quite large, and now with 4K and 8K it’s going to be huge. Are there techniques to compress that video content down to manageable sizes across the cloud?

Graham: Actually, I would say that for most of the surveillance video market, the thought of streaming 4K video to the cloud for processing and analysis is an almost unimaginable feat. It’s one of the reasons why our ability to deploy our analytics and encoder capability at the edge, and just trigger on events, is one of the powerful features for dealing with high-resolution video.

The more resolution you have, the more ability you have to actually discern objects and understand their features using the DNNs: Is this a person, and is this person wearing a red jumper or a green jumper? Is this a car? A blue car or a green car? What’s this number plate? Is it easily discernible? All these things you preferably want to do at the edge, because otherwise your CPU cost and data charges, especially at 4K resolution, are going to be enormous.
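
As a sketch of that edge-triggered approach, the Python below runs detection on a downscaled copy of each frame and only encodes and ships the full-resolution imagery when an event fires, rather than streaming 4K continuously. OpenCV handles capture; the detector and uploader are hypothetical placeholders.

import cv2

def detect_event(small_frame) -> bool:
    """Placeholder for an edge DNN (person, vehicle, or number-plate detection)."""
    return False

def upload_clip(jpeg_bytes: bytes) -> None:
    """Placeholder for sending the triggered high-resolution frame or clip to the cloud."""
    pass

cap = cv2.VideoCapture(0)  # a 4K camera in practice; the default webcam for this sketch
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Run analytics on a downscaled copy, which is cheap enough for the edge device.
    small = cv2.resize(frame, (640, 360))
    if detect_event(small):
        # Only now pay the bandwidth cost of the full-resolution imagery.
        ok, jpeg = cv2.imencode(".jpg", frame)
        if ok:
            upload_clip(jpeg.tobytes())
cap.release()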

Cutting-Edge Technology

Sarah-Jayne: I understand that Intel actually came to you guys.

Zak: We’ve always used Intel technology through the life of DB, but two or three years ago Intel knocked on the door and said they were running an analysis, looking for innovative AI startups and growing businesses in the U.K. We were one of the top five they’d found, and they were really interested in working out if there was a partnership opportunity, an opportunity to work together. That was great to hear, and clearly we told our board at the earliest opportunity because it sounded cool.

But actually, more importantly, Intel have followed up and have been true to their word. They really do give us time and attention, and it’s been amazingly helpful to us in terms of that relationship. I think Graham can talk about some of what’s actually taken place on the ground, but it has been great.

Sarah-Jayne: Graham, can you give us some examples of how Intel have helped you deliver your capabilities?

Graham: It’s been really exciting, actually, because the technology that Intel have been delivering, especially around AI and their OpenVINO framework, has gone through an almost exponential increase in capability and performance over the last couple of years. If I look back to where we were two or three years ago, everything was very custom and very bespoke. We had to knife-and-fork a solution onto a hardware platform, be it Intel or the competition.

But with the power of the tools now, it’s very much switched to an “OpenVINO-first” approach, because their hardware acceleration, as well as the flexibility of their library, is just worlds away from where it was. In some respects we can’t thank them enough, because, as Zak says, it’s enabled us to have frameworks in place so that we can retrain for new situational video analytics methods really quickly and start to analyze with them.
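
For readers unfamiliar with it, an “OpenVINO-first” inference path can be as simple as the sketch below: read a model, compile it for a target device, and run inference. The model file and input shape are placeholders, and this illustrates the general OpenVINO Runtime workflow under those assumptions rather than Digital Barriers’ actual pipeline.

import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("person_detector.xml")            # hypothetical IR model file
compiled = core.compile_model(model, device_name="CPU")   # or "GPU", depending on the hardware

# Dummy input with an assumed NCHW shape; a real pipeline feeds preprocessed video frames here.
frame = np.zeros((1, 3, 416, 416), dtype=np.float32)
results = compiled([frame])[compiled.output(0)]
print(results.shape)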

One of our key areas of know-how has been around training and data, because data, and data quality in particular, is really fundamental to AI. Without good data quality, the results from a DNN solution will be quite poor. We’ve invested a lot in that, and being able to leverage it with a really powerful framework has been great.

We’re really excited to see the new hardware coming out of Intel as well, because, as we’ve talked about with AI at the edge and hybrid analytics, we see such a great opportunity for even greater neural processing using the kind of neural computation approach that Intel have got at the edge. It could be an enormous game changer.

Realizing the Benefits of Advanced Technology

Sarah-Jayne: Is there anything you’d like to talk about that we haven’t covered?

Zak: I think what’s interesting now is that the imperatives we were seeing around the shift to better edge technology, and the VSaaS we’ve talked about, are all being accelerated. We’ve already seen levels of disruption over the last few months as technology has started to find its way onto the frontline. I think we’ll see more.

We haven’t talked about the use of our mobile technology in triage. Frontline medical workers can send video back to more senior doctors elsewhere over a secure network. We haven’t talked about body-worn cameras as we see workers in potentially hostile environments, as we see different requirements placed on the police, and as we see the implications for retail, where retail staff are being thrust onto the frontline.

I think wherever we look right now in the security world, we’re seeing disruption, and obviously that lends itself to businesses that can move quickly and be flexible.

Graham: We’ve taken an approach of remotely holding our customers’ hands. It’s just not enough anymore to have good tech. It’s great to have phenomenal USPs, but you’ve really got to have customer empathy, and that’s really important to how we deliver everything at the moment.

Dean: You touched upon that, Graham, and I think that’s right. I also get frustrated with people who over-inflate technology’s capabilities. How do you control that?

Graham: That’s a really good question. Can you control it? I’m not convinced you can because, as you say, everybody wants to be an early adopter. I think what you have to do is ensure you can actually deploy something.

So, again, I come back to what I just talked about, and it’s about how a customer can deploy a solution that actually adds value to their business.

This is not about how we apply tech just for the sake of applying tech. This is about customer use cases and operational use cases, and we really need to understand what’s required and then deliver a solution and deliver the tech to support that, rather than deliver the tech for the sake of the tech.

Related Content

To learn more about video surveillance management, listen to our podcast on Retail Tech Chat Episode 6: Safety, Security, & In-Store Intelligence.

About the Author

Kenton Williston is an Editorial Consultant to insight.tech and previously served as the Editor-in-Chief of the publication as well as the editor of its predecessor publication, the Embedded Innovator magazine. Kenton received his B.S. in Electrical Engineering in 2000 and has been writing about embedded computing and IoT ever since.
