
Safety, Security, & In-Store Intelligence

Retail Tech Chat

In a world beset by health and safety concerns, video surveillance has gained new value. For example, automated check-ins can reduce interpersonal contact—and provide a friendly reminder to mask up.

But video is still a challenge to manage. Find out how you can simplify matters in this conversation between technology power couple Sarah-Jayne and Dean Gratton, and experts from Digital Barriers, a leader in AI-powered and IoT-connected safety and security systems.

You will hear:

  • How video-as-a-service can cut costs even as you add capabilities
  • What the latest intelligent cameras can do for your store
  • How you can use cellular networks to eliminate the need for new infrastructure

Available on Apple Podcasts, Spotify, SoundCloud, and iHeartRadio, the Retail Tech Chat is a limited-run podcast focused on the recovery of the retail and hospitality sectors. Subscribe now so you don’t miss an episode!

Related Content

To learn more about video surveillance management, read Transform Video Surveillance into Digital Retail Insights. For the latest innovations from Digital Barriers, follow them on Twitter at @DigitalBarriers.

Listen to Retail Tech Chat Episode 1: AI Innovations for the Customer Experience

Listen to Retail Tech Chat Episode 2: Touchless & RFID for Safer Stores

Listen to Retail Tech Chat Episode 3: Digitizing the In-Store Experience

Listen to Retail Tech Chat Episode 4: Accelerating Digital Transformation

Listen to Retail Tech Chat Episode 5: New Roles for Digital Signage

Transcript

Dean Gratton: Welcome to the Retail Tech Chat, sponsored by Intel. I'm Dean Gratton.

Sarah-Jayne Gratton: And I'm Sarah-Jayne Gratton.

Dean Gratton: Together we explore the world of technology and the ways it is reshaping our lives.

Sarah-Jayne Gratton: And in this podcast series we want to take you on a journey into retail innovation with Intel and its partners.

Dean Gratton: So today we are talking to Zak Doffman and Graham Herries from Digital Barriers, a leader in AI-powered and IoT-connected safety and security systems.

So, Graham, you're new to the team. What do you do?

Graham Herries: Yeah. So I'm Graham Herries. I'm SVP Engineering for Digital Barriers, which is actually a newly created role, and I had a really interesting interview experience, actually.

Sarah-Jayne Gratton: Okay. Zak, great to have you here. So tell us, what does Digital Barriers do, and what's your role there?

Zak Doffman: Okay. So Digital Barriers specializes in edge intelligent video solutions, and what that means is that we specialize in getting video back from wherever it's captured to wherever it's needed in absolutely real time, over primarily wireless networks. We do lots of analysis on the edge as well, so the video that we return is the video that's actually required.

We started our life in the military and intelligence space doing kind of spooky, high-end stuff for very hard-to-reach customers, and then about two years ago we adapted, kind of pivoted the business so that we could take the same technologies into the broader commercial and critical infrastructure worlds, which are clearly much bigger but have a different set of requirements, and it's kind of on that basis that the relationship with Intel is so important, as I'm sure we'll come to later.

I'm one of the founders of the business and the Chief Executive.

Sarah-Jayne Gratton: Fantastic stuff.

Dean Gratton: You talked about video streaming. What was your experience like when you first started with the military compared to today where you've got 5G technology? I'm not sure if all areas across the U.K. have got 5G. Can you offer any comparison between your experience then and now?

Zak Doffman: It's a great question. So the core technology that we use to do very low-bandwidth, zero-latency video streaming is proprietary, and it was invented 15, 20 years ago. So it's kind of long in the tooth, as they say. It was able to stream live video over 2G, so way before any of the broadband wireless networks that we see today.

Every time there's a new wireless technology, the question gets asked of us, "What would be the impact? 3G, 4G, now 5G, will that have a detrimental impact on the business?" What's actually happened is that every time there's a new capability, as networks become more broadband, if you like, the requirements and the desire for live video just become greater, and that kind of swamps the capability to provide it in a ubiquitous fashion. So each time there has been a change in network protocols, we've actually seen a huge surge in our growth, and I think we'll see the same thing with 5G.

Now everybody expects to be able to get live video from wherever they are and send it to wherever it needs to be, and that clearly is very difficult to do even over highly broadband networks, and we can make that happen.

Dean Gratton: That's the thing with 5G. Some time ago, with 4G LTE or 4G Advanced, I think the ambition was to create true wireless broadband, but I don't think we ever really saw that. Now with 5G, I think we actually can. Having said that, where we live at the moment, our backhaul is 4G. So we're talking to you over a 4G network, and that's our broadband service.

Sarah-Jayne Gratton: That's right.

Zak Doffman: 5G is just misunderstood, and I think there is an expectation that you could just sit there and watch as many live TV channels as you want in glorious 4K, over an unlimited 5G network, and it doesn't work that way, as you know.

So 5G's all about pushing huge quantities of data very quickly to where it's needed and then kind of cutting the connection and moving onto something else. It isn't actually designed for real-time live video streaming continuously. It's not how the network works. You're right, there's an asymmetry as well, and often the upload might be on LTE and the download on 5G. Even beyond that, when you talk about critical infrastructure, blue-light services, the military, they're not going 5G anytime soon.

So I think what it's doing is it's creating an expectation that those capabilities will be met and those requirements will be met, but that can't be delivered against. What we've got is a technology that is very happy on 5G but then will move down to LTE, to 4G, or to 3G as required, so that the quality of service that a customer actually gets in the real world is always good.

Sarah-Jayne Gratton: Yeah, fantastic.

Dean Gratton: That really does echo, "If it ain't broke, don't fix it." I did have a question maybe a few weeks ago where someone said, "Well, I've got a fixed infrastructure, which is wired. It's working. Shall I upgrade it to a 5G service?" I just wondered, "Why would you do that?" Because if it's working, why change it?

Zak Doffman: Again, you've hit on another problem as well, which is we're in a hybrid world of fixed and mobile connections, wireless connections, and what customers want is, they don't really care whether a connection is coming over wireless or fixed. They just want it to work and they want it to be the same.

So what we're seeing, and what we have is the ability to run the same kind of analytics regardless of the bearer. So if I'm a customer and I've got 10,000 video streams, two-thirds of them might be fixed and a third on wireless, but I want the same dashboard, I want the same analytics. I want to be able to manage all those video streams in the same way. The fact that it's a different technology streaming video from a vehicle or a body-worn camera than from a CCTV point in an open public space is irrelevant to the customer. They don't care, they just want it to work.

Sarah-Jayne Gratton: Yeah. It's what I always say. To use a sort of plumbing analogy, I say, as a consumer, we turn on our taps and we want water. We don't want to know where it's coming from; we just want consistency. Wherever it's coming from, we just want the water.

Zak Doffman: Exactly right.

Dean Gratton: [crosstalk]

Zak Doffman: It's really interesting. You asked about our experience going back from 2G to where we are today, and I can probably talk about this now because enough time has passed. Our heritage is in the military, and that's where these technologies came from 10 years ago. The fact is that when the British military was going out into theater, it was using commercial cellular back in the 2G, early 3G days, just normal commercial cellular, to stream live video back to where it was required. At the time, other countries were having to put up these huge military networks to get the same video back, and our guys were just using this commercial cellular stuff.

So we understand the need: it has to work. When you press that big red button, you need to make sure that video gets back to where it's needed. It can't fail. It needs to be secure, it needs to be resilient, and that, in essence, is the heritage of the company.

Sarah-Jayne Gratton: How do you ensure the privacy and security of the videos that are transmitted over public airwaves, for example?

Zak Doffman: We do this in two ways. The first thing is, when people talk about secure transmission of video, often what they mean is they've just put a VPN around it. So, in essence, they've built a secure tunnel and they've piped the video through that. The problem with that is it adds quite a significant overhead to the amount of data that you're pushing. It could be a 20, 25 percent overhead. So if you were constrained for bandwidth before, you simply make the problem worse.

What we do instead, because we own the stream, we control the streaming technology, the codec, is build encryption into that codec. So there's essentially no overhead, only a 1 or 2 percent overhead, in terms of making sure that it's encrypted.
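To make the bandwidth point concrete, here is a quick back-of-envelope comparison. The overhead percentages are the ones quoted above; the stream size is a hypothetical example, not a figure from Digital Barriers.

```python
# Illustrative numbers only: the overhead percentages are as quoted above,
# the stream size is a hypothetical low-bandwidth surveillance stream.
stream_kbps = 256

vpn_total = stream_kbps * (1 + 0.25)     # ~20-25% VPN encapsulation overhead
codec_total = stream_kbps * (1 + 0.02)   # ~1-2% in-codec encryption overhead

print(f"VPN-tunneled stream: {vpn_total:.0f} kbps")    # 320 kbps
print(f"In-codec encrypted:  {codec_total:.0f} kbps")  # 261 kbps
```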

The second thing we do is we're clearly end-to-end, so we're encrypted at both ends. So although we'll decrypt and decode when the video lands in its secure location, we can ensure that that video isn't compromised. We can wipe endpoints. We can watermark video. We can do everything to ensure that that video is exactly what was captured, and we can tell you when it was captured.

But the other point, of course, is privacy. We've operated in incredibly hostile environments where there are lots of sniffers out there trying to intercept videos, detect us on a network and sniff us out. Again, because we're a proprietary technology, we don't look like video. The way these technologies work is, they understand how video comes across on a network, how it spikes, what its profile is, and they try and pull that video down, and then they can set about trying to decrypt it. We don't look like video on those networks. We look more like VoIP traffic. So we just pass by undetected.

So, in combination, we do this for the U.S. military, for the federal agencies, for the Ministry of Defence over here in the U.K., and we're trusted to provide highly secure, private, uncompromised video streaming in, as I say, very hostile environments.

Dean Gratton: So you mentioned network sniffers. So with the actual data going over the network, no one can really identify the packets being transmitted?

Zak Doffman: You wouldn't be able to distinguish it from other chatter on the network. It wouldn't look like a video stream. If I was out on a surveillance operation and I was using standard technology to stream video over a wireless network, it would be very obvious; you'd be able to see it immediately. Video looks very specific on a network. It's got a specific spiky profile. It's how the codecs work. We don't. We have a very flat profile.

So one of the reasons that we're able to push video over such low-bandwidth networks is that we, in essence, flatten the spikes. We ensure that we never exceed the amount of bandwidth that's available; we control it. That, as I say, keeps the profile different and low, so it doesn't present in the same way. But it also means that as networks dynamically adjust, as the amount of bandwidth changes, as contention builds up and bandwidth reduces, we're able to, in absolute real time, reduce the amount of data so that the customer, who's sitting there trying to watch a particular scene or a particular event, doesn't have that disrupted.
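A minimal sketch of that kind of rate control, with hypothetical names and numbers; this is an illustration of the general idea, not Digital Barriers' actual algorithm.

```python
# Keep the encoder's send rate pinned below measured network capacity so the
# stream stays flat, never spikes, and degrades smoothly as contention builds.
def target_bitrate(available_kbps: float, headroom: float = 0.8,
                   floor_kbps: float = 32.0) -> float:
    """Send below current capacity, but never below a minimally usable floor."""
    return max(floor_kbps, available_kbps * headroom)

# As measured capacity drops, the send rate follows it down in real time.
for capacity in (2000.0, 800.0, 150.0, 60.0):   # hypothetical kbps samples
    print(f"capacity {capacity:6.0f} kbps -> send at {target_bitrate(capacity):6.0f} kbps")
```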

Sarah-Jayne Gratton: Yeah. So, guys, tell me, what is edge AI and why does it matter?

Zak Doffman: So our USP is the combination of the ability to get live video back to where it's needed, when it's needed, and the ability to analyze what's taking place at the edge, the scene that the camera can see, if you like, and to draw conclusions, inferences, from it. Because you only want certain amounts of video, and one way to reduce the amount of video that you're streaming, and to ensure that the stuff that lands back on somebody's screen is important, is to ensure that you're analyzing it and you're detecting certain events, or that a car or person that you're looking for has actually turned up.

So what edge AI means simply is that you're running AI-based analytics on the edge, as opposed to trying to pipe all of that video back to the cloud and run all of your analytics in the cloud, which traditionally is how these things have worked.

What we're seeing at the moment is almost like two camps. On one side, we're seeing an increasing amount of AI capability within silicon, AI devices that are, in essence, pre-programmed to conduct certain levels of analysis. On the other, we're seeing these huge cloud players that are able to run all kinds of different business and security analytics on huge volumes of video data in the cloud, but that means you've pretty much got a fixed cable from the camera back to the cloud and you're piping everything back.

What we do is what we call hybrid analytics. So we're able to mix and match what we do at the edge and what we do at the center, and that means, for example, if you've got a more capable edge process, you can do more at the edge, and if you haven't, then you push more data back to the cloud. So you make it very efficient for a customer.

What's also important is that we can use the cloud to provide, as needed, backup, if you like, to what's taking place at the edge. A great way to think about this is that what we try to do at the edge is narrow down the bit of the haystack where the needle may be, and what we do in the cloud is find the needle. We do that by, in essence, sending certain events back to the cloud where we think there may be something that fits whatever's being looked for, and then we run much more efficient and effective and powerful analytics in the cloud, where we have unlimited processing, to determine if that's a false alert or a real detection, and all that happens sub-second. So by the time the customer gets an alert, we've done all that. They don't know that that's what's taken place, but that makes our analytics much more accurate.
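A minimal sketch of that hybrid edge-filter-plus-cloud-verify flow. The detector, verifier, and thresholds here are hypothetical stand-ins, not Digital Barriers' APIs.

```python
from dataclasses import dataclass

@dataclass
class Event:
    snippet: bytes      # a frame or short clip, not the full stream
    edge_score: float   # the cheap edge model's confidence

def edge_filter(frame: bytes, score: float, threshold: float = 0.4) -> Event | None:
    """Always-on, low-cost check at the camera: forward only plausible events."""
    return Event(frame, score) if score >= threshold else None

def cloud_verify(event: Event, rescore, threshold: float = 0.9) -> bool:
    """Heavier cloud-side model confirms a real detection or dismisses a false alert."""
    return rescore(event.snippet) >= threshold

# Toy usage: the edge forwards a borderline event; the cloud dismisses it.
event = edge_filter(b"frame-bytes", score=0.55)
if event is not None:
    alert = cloud_verify(event, rescore=lambda s: 0.30)  # hypothetical heavy model
    print("real detection" if alert else "false alert suppressed")
```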

Sarah-Jayne Gratton: Oh, yeah.

Dean Gratton: The classic example I've heard about edge is the fire alarm. If a sensor detects maybe smoke or whatnot in a building, do you wait to go to the cloud to inform the people in the cloud that something's going on here, or do you actually trigger instantly at the edge that there is a fire alarm, where you send the signal to the fire department, for example, and sound the alarm in the building? That's the classic use case of edge computing.

But you guys, you mentioned predictive analytics, which, for me, takes in deep learning or machine learning, which are subsets of AI. So from an edge-AI point of view, what does AI actually mean? Is it about the predictive analytics? Is it about the data crunching?

Sarah-Jayne Gratton: Or is it the relationship between…

Zak Doffman: Graham, why don't…

Sarah-Jayne Gratton: ...the edge and the cloud?

Zak Doffman: A good one for Graham to answer.

Graham Herries: Yeah. As you highlighted, deep learning neural networks are a subset of the kind of broad term AI, and we've been investing in deep neural networks, DNNs, for quite a few years now. Luckily I've got a very specialist team who've been doing this for 15, 20 years, with multiple people holding PhDs in video analytics.

One of the things we're seeing increasingly, due to the advancements in chipsets and frameworks and libraries, is the ability to do DNNs at the edge a lot more cost-effectively, because, let's face it, we're trying to produce commercial products, moving away from that very high-spec military equipment into something more commercially available, which has a much more aggressive price point.

Sarah-Jayne Gratton: Yeah. Talking about price point and moving away from the military, how do you think retail and hospitality applications for video surveillance have changed in the wake of the pandemic?

Zak Doffman: It's a great question. I think things were changing anyway. There's a lot of talk about the virtualization of video storage and using the cloud as a back-end, rather than having complex, on-premises solutions. So I think everyone was heading in that direction.

So even before the pandemic you were starting to see much lighter-weight, cost-effective, easier-to-deploy video-surveillance-as-a-service applications hitting the hospitality and the retail sectors. What the pandemic's done is, in essence, completely change, I guess, the relationship between those sectors and their customers, and the responsibilities that they have. As ever, technology doesn't solve the problem, but it helps customers address the problem and try to do so effectively.

So, for example, there's some really obvious ones, aren't there? So there are all kinds of rules and regulations in different countries around the world about how many people can enter a particular location and do they have to be wearing face coverings, and do they have to be a certain distance apart. There's lots of premises that are still closed or have different opening hours, and that leads to a different level of security requirement.

In essence, what we're able to do is put efficient technology in place. Graham talked about AI and the level of expertise within the team, and what we pride ourselves on is that we have a platform which enables us to build new capabilities quickly. A great example of that is what we're doing at the moment, addressing customer needs around the latest, I guess, impact on those sectors: mask detection and people counting.

So if you're running a store or a hospitality facility and there's only a certain number of people allowed in at a time, and you need to make sure that those people know they're supposed to be wearing face coverings, that's quite an onus to put on your staff, to have to confront people every day of the week. What we're able to do is use technology. We can just let people know if the location is full, and we can let them know, if they're not wearing a mask or a face covering, that that's the regulation and they should put one on. It's not confrontational; it's an advisory notice, which just says, "You're required to cover your face to enter these premises." In that way we're hopefully able to take some of the sting and the confrontation out of it and make life easier for the people running those sectors.
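A minimal sketch of the people-counting and mask-advisory logic just described, assuming hypothetical detector outputs; the function and message text are illustrative, not Digital Barriers' product behavior.

```python
# Issue advisory notices instead of asking staff to confront customers.
def door_advisories(people_inside: int, capacity: int,
                    entrant_has_mask: bool) -> list[str]:
    notices = []
    if people_inside >= capacity:
        notices.append("This location is currently full. Please wait outside.")
    if not entrant_has_mask:
        notices.append("You are required to cover your face to enter these premises.")
    return notices

# e.g. a detector reports a full 50-person store and an unmasked entrant:
for msg in door_advisories(people_inside=50, capacity=50, entrant_has_mask=False):
    print(msg)
```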

As we move into the next year, I think we're starting to see certain trends again that technology will reflect. We're looking at things like contact-free identity assurance. I think we're already seeing some of the technologies, and this is back to the facial recognition point, where we're all used to going to e-passport gates and using our faces instead of handing over a piece of paper to a border officer, and then we can go through one of those kiosks. I think we'll see the same elsewhere, whether that's at a reception desk, entering a gym or a leisure center, or turning up to an appointment where you're expected: they know who you are, and just by looking at your face they can recognize it's you, all done in a very secure, private environment. You've provided a photo and they're just checking that it's the person they're expecting to see.

I think some of those are going to stick. It's clearly convenient; it's easy. It can be done in a completely compliant way with all of the various regulations around the world. But the imperative to do this quickly now is that people don't particularly want to be typing into an iPad or signing a book or handing over pieces of paper. It feels like there's a better way to do things these days.

Sarah-Jayne Gratton: I guess, rather contentiously as well, with the 10:00 P.M. imposed closures of pubs, clubs and venues, I guess that the technology could be used to spot those people who aren't necessarily toeing the line.

Dean Gratton: And breaches.

Sarah-Jayne Gratton: Yeah.

Zak Doffman: This is an interesting point, isn't it? We've seen the backlash against facial recognition in a law-enforcement environment through the processes that have taken place in various countries around the world over the last six months or so. I think it's the same here. With something like facial recognition, there's obviously a balance to be struck, a place where the broad public consensus would be that it's appropriate and proportionate and it helps the security or the law enforcement agencies or the government get the job done.

I think the point you're making about curfews and early closing of pubs and restaurants is the right one, because it's clear in the U.K. that the consensus isn't right. You feel that there's a disconnect between what the public expect and what's being done, and that's a difficult place for technology to exist…

Sarah-Jayne Gratton: It is.

Zak Doffman: ... because it obviously is going to be taken badly. We, as technologists, need to be conscious of that, because you can very easily fuel the backlash, which in many ways has got out of hand. There are some very sensible questions that have been raised, but technology is just there to do what it's told, if you like. It itself isn't at fault.

Dean Gratton: Zak, how do you overcome those public concerns with facial recognition? I understand we need it, but how do we overcome those concerns?

Zak Doffman: It's difficult because there's a vacuum, a kind of regulatory vacuum, in most countries at the moment, and where we are seeing regulation, it's often in reaction to an overstep. If you look here, say in the U.K. and in the U.S., traditionally there haven't been any rules or regulations, any limitations, and what that's allowed is the industry and its customers, in essence, to overstep because there's no guidance.

So I think what we need is regulation. We need to say, look, there are some absolutely clear use cases for things like facial recognition in a security environment. The example I always give is counter-terrorism, which is where our facial recognition was born. It was designed originally to serve that marketplace. If you know that you've got a dangerous cell of individuals operating in a city location, here in London or New York or somewhere like that, and your imperative is to find them before they're able to do serious harm to the public, then if you're able to use facial recognition to try and spot them entering a railway station or an airport, the broad public consensus would be that that's a sensible thing to do.

But, conversely, if you try to use the technology to identify shoplifters, or to stop somebody who's maybe been kicked out of a bar on a Saturday night for being rowdy from getting back into the bar, you've got no public consensus. Most people think that's an overstep.

So I think it falls to the regulators, the government, the lawmakers, to actually set some limits and say, "Look, we start with the obvious stuff, where the use cases are clearly not contentious, and then we have to decide where the line is, what we're prepared to do." I think as you go down to low-level crime, it shouldn't be used wherever it can be used. Just because you can, doesn't mean you should, as they say.

So I think there always have to be limitations on powerful technologies, particularly where biometrics are concerned. But it would be, in my view, criminal to throw the baby out with the bathwater, as they say, and just say, "Okay, even for the very obvious use cases, we're not going to do that either," because I think that puts the public in harm's way unnecessarily. I think it's actually harder, but much more sensible, to put regulations in place.

Sarah-Jayne Gratton: How far away do you think we are from seeing those regulations put in place?

Zak Doffman: I think it varies. Obviously we've seen in the U.S. a number of states and cities that…

Dean Gratton: Are we going to go to a police state Britain?

Zak Doffman: Well, no. My customers are police forces, and the conversations we have are completely sensible. The challenge they have is that there's no clarity or guidance currently in terms of what they should or shouldn't be doing. We saw that in the U.K. with law cases against... challenging the use of facial recognition, and there was clearly a vacuum, and it was unclear what the rules and regulations were.

But in the main, you're talking about organizations that are charged with keeping us safe from harm and taking bad guys off the street, and they see technology as an aid to do that. If they're guided, as they are with things like the use of force and other things, then they clearly need to follow those regulations. Where there are no regulations, clearly their imperative is to keep the public safe. So they're going to do what they can.

But the conversations we have are incredibly sensible and I think there's a frustration, which is easier for me to say than for them to say, in terms of that lack of clarity coming from upstairs. I do think we'll see that because I think the genie is out of the bottle, if you like, with facial recognition, and I don't think it's going to get put back in.

I think there are huge oversteps in places like China, which is on kind of a roll; it has no limitations in terms of what it's willing to do. We absolutely don't want to go anywhere near that. I think we, in the West, need to decide what it is we obviously want to do, and then just, in essence, rule out the more trivial, noncompliant uses of the technology.

The thing I'd add to that is, I think... your question was around how you get the public's acceptance. I think we do a lot of demonstrations and media events with facial recognition, and we're always very keen to show members of the public how it works. I think when they see the safeguards that are built in and how accurate it is, they relax because it's seen as a little bit of a bogeyman technology: People don't see it, they just hear about it or read about it. I think if you show them, it takes some of the mystery away from it.

But the other thing that's happening is, it's clear that we're using it every day. We're using it to unlock our phones. We're using it at airports, if and when we go to airports these days. I think you'll see a lot more of that. Access to airport lounges, when you're going into potentially VIP areas in hospitality locations, potentially check-ins. So I think you'll see opt-in identity assurance just because it's easy and convenient and normalized by the phone manufacturers and others, and I think that as well will take some of the mystery out and will generate a lot of public acceptance.

Dean Gratton: How accurate is facial recognition? I use Windows Hello, for example, which tends to be quite accurate. But when I go out and about in public, I tend to wear a fedora or I've got a hat on, and I've got this fuzzy face, I've got this beard and whatnot. How accurate is the technology today at overcoming those subtleties?

Sarah-Jayne Gratton: Yeah. With a hat and a mask, for example?

Dean Gratton: And a facial mask, for that matter.

Zak Doffman: The craziest question I ever got asked was from an intelligence operator in a Western country, who asked me whether we could detect people dressed up like clowns, and I said, "No, we can't. If they're dressed up like a clown, you wouldn't recognize them and neither would we." So I think there's a line, right? If you clearly disguise yourself, if you've got your hat pulled over your eyes and maybe a bandana around your face, it's going to be very hard, isn't it?

Sarah-Jayne Gratton: Mm-hmm (affirmative).

Zak Doffman: The normal rule of thumb with face rec is that the best technologies will recognize somebody if somebody who knows that person would recognize them. So if one of your friends passed you on the street and they were a bit disguised, you'd probably recognize them; if they were too disguised, you wouldn't, and the technologies are broadly the same. But clearly there's no limitation to people you know. It can recognize an unlimited set of subjects.

Facial recognition is all about maths and data quality. Those are the two things to keep in mind: the quality of the images or the video against which you're comparing people, and then the quality of the video that you're capturing at the scene, which is based on lighting and environmentals and the positioning of the camera. If that's all very good, you have a very good chance of your 99.99 percent recognition. The more you compromise that, if you have poor captured imagery, so you might have a surveillance photograph that's very poor, or if you're operating in a shadowy environment, in bad weather, in bad light, then you make it harder. So that's the first consideration.

The second is about the maths. If I put tens of thousands of people on a watch list and put a camera in a very busy place, and tens of thousands of people walk past, every person is being compared to tens of thousands of people. You're into the hundreds of billions of calculations. So even with a 0.001 percent error rate, that's a lot of people who are actually going to be misidentified. So you factor that into how you do it, and because we specialize in operating in difficult environments, we've got all kinds of tools and tricks to make it much more accurate than competing technology. We'll help our customers understand whether there are constraints in the quality of the imagery or the quality of the environment to factor in, and we'll help segment watch lists so that the system can be as accurate as possible.
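Working through that watch-list maths with illustrative numbers: the error rate is the one quoted above, while the crowd and watch-list sizes are assumptions chosen to match "tens of thousands."

```python
# Even a tiny per-comparison error rate produces many false matches once the
# watch list and the crowd are both large.
watchlist = 50_000          # "tens of thousands" of subjects (assumed)
passersby = 50_000          # "tens of thousands" walk past (assumed)

comparisons = watchlist * passersby          # 2.5 billion comparisons
false_match_rate = 1e-5                      # a 0.001 percent error rate

expected_false_matches = comparisons * false_match_rate
print(f"{comparisons:,} comparisons -> ~{expected_false_matches:,.0f} false matches")
# 2,500,000,000 comparisons -> ~25,000 false matches
```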

It's all about outcomes. Where we've worked with law enforcement agencies, they're hugely positive about the impact that the technology's had to help them pick very bad guys off the street. Our technology has tended only to be used for the really bad guys, serious criminals, dangerous individuals, terrorists, threats to national security, where…

Dean Gratton: [crosstalk] …is a pair of glasses and a Superman suit. No one can ever detect that.

Zak Doffman: Well, as they say, the fact that you're walking around looking like that might be a cause to tap you on the shoulder anyway!

Dean Gratton: I'd be arrested.

Sarah-Jayne Gratton: Absolutely. Absolutely. Digital Barriers, you offer video surveillance as a service. Can you tell us a bit more about what that means?

Zak Doffman: So VSaaS is the biggest shakeup to the video surveillance and security industry probably ever, realistically, certainly since IP video became mainstream. What it is, it's taking all of the complexity, all of that hardware, all of that cabling out of the equation, and it's putting everything into a virtual cloud environment. It's giving you complete flexibility of your endpoint.

So you can take a camera and, in essence, configure it through the cloud; it's then set up on your cloud back-end, but, in essence, you're renting it as a service. It means that you can push analytics out to those edge endpoints, whether they're CCTV cameras or body-worn cameras or vehicles, and manage the whole thing virtually, as a service, cost-effectively. Because the real impediment to these huge, large video schemes in the past has been the servers, the cabling, the redundancy and the power management. All of that, if you like, is taken out of the equation.

Right now VSaaS is still tiny compared to the whole market. We've seen it in the home market, with the likes of Amazon and Google and others getting into the game in terms of providing the cameras that many of us use at home now, which are clearly linked to a cloud back-end, and we're starting to see that make it into the commercial environment, and that's what we're providing.

So we work with partners like Vodafone to provide a kind of a hosted video surveillance solution, video security solution, and we think that that is the future of the industry. If you look at the analysis, the VSaaS market is going to completely disrupt the video surveillance and security market over the next 10 years, and it will have the same impact that IP had on analog video 10, 15 years ago.

Sarah-Jayne Gratton: Everything's heading towards cloud, isn't it? Everything's moving to the cloud.

Zak Doffman: I think so, for this, because video's hard to manage. Video's a really badly behaved data type. It's hard to move around. It's hard to store. It's hard to search. It's hard to retrieve. So if you can use common tools and techniques and somebody else's scalable back-end to manage it, and if you can run really sophisticated analytics to limit what you stream and what you store, that's a big win, because what you don't want is petabytes of unneeded video that you're never going to watch, clogging up somebody else's cloud service and costing you a lot of money. So the whole thing is not just about taking a server and putting it in the cloud; it's a re-architecture. It's around what people actually want.

What's really interesting here, and this has been driven by the California tech giants, is that traditionally, in video security, you captured everything, you stored everything that happened for weeks or months, or even years, and it was there just in case you needed it. What we're seeing now is the requirement to get video to where it's needed, when it's needed, and then to store the important stuff. We can obviously do either, but that second shift really lends itself to a kind of cloud VSaaS model.

Dean Gratton: Actually, you talked about the size of video, because it is quite large, and now it's 4K and 8K. It's going to be huge. Are there techniques to compress that video content down to manageable sizes across the cloud?

Graham Herries: Actually, I would say that largely, for the surveillance video market, the thought of streaming 4K video to the cloud for processing and analysis is an almost unimaginable feat. It's one of the reasons why our ability to deploy our analytics and encoder capability at the edge, to just trigger on events, is one of the powerful features for dealing with high-resolution video, because actually…

Dean Gratton: Yeah, [unintelligible] quality.

Graham Herries: ... the more resolution you have, the more ability you have to actually discern objects and understand the features, using the DNNs: Is this a person, and is this person wearing a red jumper or a green jumper? Is this a car? A blue car or a green car? What's this number plate? Is it easily discernible? All these things you preferably want to do at the edge, because otherwise your CPU cost and data charges, especially at 4K resolution, are going to be enormous.

Dean Gratton: It's almost clichéd. We see on the news, for example, "Have you seen this person doing this untoward thing in this scenario?" and it's very grainy, the video. So why not 4K and 8K streaming across? If there are better techniques to deliver that quality and to actually store that quality, surely that would be better for your service.

Zak Doffman: Yeah, but even with 5G, even with these unlimited cloud back-ends, simply storing all of that 4K, or even 8K video, is just not realistic, and not necessary. There's an adage in surveillance that if you have too much data, you don't have any data at all, you have too much information to manage.

So, to Graham's point, if you can provide some intelligent overlay, some analysis, either in real time or done after the fact, around the metadata, around the events that you're looking for, then what you actually have is what you need, not the 90 percent of video that you'll never watch. I can't remember the exact numbers, but the stats around the amount of video that's stored but never watched are embarrassing when you think about just the volume of data and storage hardware that's required to look after all of that.
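A back-of-envelope illustration of why "store everything in 4K" doesn't scale; the bitrate and camera counts below are assumptions for the sake of arithmetic, not figures from the conversation.

```python
# Illustrative numbers only.
bitrate_mbps = 15            # an assumed 4K surveillance stream bitrate
cameras = 1_000
days = 30

bytes_per_day = bitrate_mbps * 1e6 / 8 * 86_400      # per camera
total_tb = bytes_per_day * cameras * days / 1e12

print(f"~{bytes_per_day / 1e9:.0f} GB per camera per day")        # ~162 GB
print(f"~{total_tb:,.0f} TB for {cameras} cameras over {days} days")  # ~4,860 TB
```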

So I think what we're talking about is two things in tandem…

Sarah-Jayne Gratton: What your AI is doing... sorry to interrupt, but just making sense of this for myself, and I do think it's incredible... is, as you say, and I love it, getting closer to finding that needle in the haystack at the edge, and then that's what customers are going to get: the important stuff.

Zak Doffman: Let me paint a scenario, which I think will hopefully bring this alive. So let's say, for example, I'm looking at the perimeter of an airport and all I want to know is if somebody is climbing over the fence, that's what I care about. So I've got a camera and the camera will alert. It will alert its back-end if it thinks somebody may be climbing over the fence. But there's bad weather, there's wind, the trees are blowing around, there's shadows, the lighting changes.

So the technology available on that camera at the edge is limited. It takes what it thinks is an alert and sends maybe a small piece of that alert, a frame, a piece of a frame, a snippet of video, to the cloud, and it says, "I think I've seen something," and the cloud then goes, "Yeah, you're right. This is a person climbing over the fence or cutting through the fence." Or it says, "Actually, it's a shadow," because we can analyze it on a different level.

But in the event that it's a real issue, then obviously that event is now captured. It's stored in the cloud. Somebody's been alerted; they can then look at the live stream. That's what people need. What they don't need is the 23 hours and 45 minutes of video of that bit of fence captured that day and stored forever. They don't need that; it's irrelevant, and no one's ever going to go back to it.

Now, what's interesting, and where there's an exception to this, and, again, something that we do, is that we manage the storage of video at the edge and in the cloud, and we can sync between the two. Why that's important is that often you don't have enough bandwidth to get all of the video back to the cloud. So what you can get is a compressed version of what we've seen, and that is enough. It's situational: it tells your customer, it tells your operator, that there's an issue they need to deal with. But it might be that that level of compressed video doesn't have details of the face or the license plate or something else. So you've got a rolling storage available at the edge as well, and you can go back and pull a bit of a frame, a bit of video, or all of the video if you want to, so you don't lose any of that detail.

But seven or 30 days later, whatever you've chosen to do, if you haven't seen anything that's interesting enough to follow up, the chances are there's nothing there. We can provide this level of flexibility to very large-scale video programs, which is a game changer for customers in terms of how effective it is, how flexible it is, and how much they can save from a cost perspective.

Dean Gratton: So you're talking about motion-sensitive recording. So you're not filming 24 hours a day, at least.

Zak Doffman: You can film it. No, I think the point is, I can put a video camera next to that fence and I can record it 24/7, and I can store that video on a rolling basis for, say, 30 days on the edge. I don't move that video around. What I move around are events that happen, every time I see something that is an issue, something that the analytics has been told to look for, but if it's ever needed, you've got that video for a period of time at the edge. It's not clogging up any cloud. It's not transmitted over any network. You only get it if you need it.
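A minimal sketch of that rolling edge storage idea: a fixed-window ring buffer at the camera, with only analytics-flagged events crossing the network and full-detail footage retrievable on demand while it is still retained. The class and method names are hypothetical.

```python
from collections import deque

class EdgeRecorder:
    def __init__(self, retention_chunks: int):
        # Ring buffer: once full, the oldest footage falls off automatically,
        # giving, e.g., a rolling 30 days of local storage at the edge.
        self.buffer: deque[tuple[float, bytes]] = deque(maxlen=retention_chunks)

    def record(self, timestamp: float, chunk: bytes, is_event: bool) -> bytes | None:
        self.buffer.append((timestamp, chunk))
        return chunk if is_event else None   # only events ever leave the camera

    def retrieve(self, start: float, end: float) -> list[bytes]:
        """Fetch full-detail footage after the fact, if still in the window."""
        return [c for t, c in self.buffer if start <= t <= end]
```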

So the point is, we can design a flexible scheme that meets a customer's requirements, but you have to take into account cost, effectiveness, efficiency. It's no longer just a case of having servers in a control room and storing everything just in case, which has been the traditional view. This is an opportunity for a complete rethink about what people want and what is going to be most effective for them.

Sarah-Jayne Gratton: So what capabilities they need. It's fascinating stuff. And I understand, which is so fascinating, that Intel actually came to you guys. So I really want to hear more about this.

Zak Doffman: Yeah, look, clearly we've always used Intel technology through the life of DB, but two or three years ago Intel knocked on the door and said they were running an analysis, looking for innovative AI startups or growing businesses in the U.K., and we were one of the top five they'd found. They were really interested in working out if there was a partnership opportunity, an opportunity to work together, which was great to hear, and clearly we told our board that at the earliest opportunity because it sounded cool.

But actually, more importantly, Intel have followed up and have been true to their word. They really do give us time and attention, and it's been amazingly helpful to us in terms of that relationship. I think Graham can talk about some of what's actually taken place on the ground, but it has been great.

Sarah-Jayne Gratton: Yeah, fantastic. Graham, can you give us some examples of how Intel have helped you deliver your capabilities?

Graham Herries: Yeah. So it's been really exciting, actually, because the technology that Intel have been delivering, especially around AI and their OpenVINO framework, has gone through an almost exponential increase in capability and performance over the last couple of years, so much so that if I look back to where we were two or three years ago, everything was very custom, very bespoke; we had to knife-and-fork a solution onto a hardware platform, be it Intel or be it the competition.

But with the power of the tools now, it's very much switched to an "Okay, let's look at OpenVINO first" approach, because their hardware acceleration, as well as the flexibility of their library, is just worlds away from where it was, and in some respects we can't thank them enough for doing that because, as Zak says, it's enabled us to have frameworks in place so that we can retrain for new situational video analytics methods really quickly and start to analyze them.
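For readers who want a feel for the framework Graham mentions, here is a minimal inference sketch using OpenVINO's openvino.runtime Python API (from releases newer than this conversation); the model file, device choice, and input shape are hypothetical placeholders, not Digital Barriers' models.

```python
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("person-detection.xml")           # hypothetical IR model
compiled = core.compile_model(model, device_name="CPU")   # or another target device

frame = np.zeros((1, 3, 320, 544), dtype=np.float32)      # dummy NCHW input frame
results = compiled([frame])                               # run inference

detections = results[compiled.output(0)]
print(detections.shape)   # raw detections to threshold and post-process downstream
```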

One of our key areas of know-how has been around training and data, because data, and data quality in particular, is really fundamental to AI. Without good data quality, your results in a DNN solution will be quite poor. So we've invested a lot in that, and now being able to leverage that with a really powerful framework has been great.

We're really excited to see the new hardware coming out of Intel as well, because, as we've talked about with AI at the edge and hybrid analytics, we see such a great opportunity for even greater neural processing using the kind of neural computation approach that Intel have got at the edge. It could be an enormous game changer.

Dean Gratton: It's so refreshing to hear, Graham, how companies like yours understand the value in data and, more importantly, how to use it. Are you really maximizing your new currency?

Graham Herries: I think we maximize it very effectively, to be honest. It's…

Dean Gratton: [crosstalk, laughter]

Graham Herries: We really maximize it, and it's ours.

Sarah-Jayne Gratton: That's great. That's the best kind of currency.

Dean Gratton: I have some terrible news now. We're coming to the end of our session. Would you like to share anything else with us?

Sarah-Jayne Gratton: Oh, yeah. Is there anything you'd like to talk about that we haven't covered?

Zak Doffman: I think what's interesting at the moment is we've all had a climactic six months, a crazy first seven months of this calendar year, and that has thrown up all kinds of challenges, whether it's workforces working from home, limitations on people's ability to travel, or the changes to the sectors we talked about on this call, hospitality and retail, which have been impacted, along with travel, more than others.

I think what's interesting now is that the imperatives we were seeing, around the shift to better edge technology and the VSaaS we've talked about, are all being accelerated. I think we've already seen levels of disruption over the last few months as technology has started to find its way into the frontline. I think we'll see more.

We haven't talked on this call about the use of our mobile technology in triage, so frontline medical workers can send video back to more senior doctors elsewhere over a secure network. We haven't talked about body-worn cameras, as we see lone workers in potentially hostile environments, as we see different requirements placed on the police, and as we see the implications for retail, where retail staff are being thrust into the frontline. I think wherever we look right now in the security world, we're seeing disruption, and obviously that lends itself to businesses that can move quickly and be flexible.

Sarah-Jayne Gratton: Yeah. That's some great examples.

Graham Herries: It was one of those rare occasions where you're sat down talking to the boss and he's saying, "Okay, we've got an office in the U.K. We've got an office in France. We do some AI. Lots of software management combined with some hardware. And we want to upscale the organization." I'm literally just sat there going, "Okay. PhD in AI, tick. Worked in France for five years, tick. Speak French, tick. Been a software development manager in the U.K. and France, tick." It was just one of those amazing experiences where you're literally ticking the virtual checkboxes as the person who turns out to be your boss rolls down his list of what he's looking for.

Sarah-Jayne Gratton: Well, guys, unless there's anything else you want to touch upon, and we still have time if you do, if you want to go over anything else…

Graham Herries: Do you know, I've got one thing I'd like to talk about. I think, just as technologists, it's really easy to get consumed by the quality of the tech, and especially at the moment, it's been really essential to empathize with the problems our customers are facing. We know it's unprecedented times: no staff on site, no usual procurement, complex supply chains, risk of adoption, security, GDPR. But if we take a step back, actually, for the first time, certainly in my career, I've got exactly the same problems as my customers. Every customer's slightly confused, they want help, and it's just really important to empathize.

We've taken an approach of remotely holding our customers' hands. It's just not enough anymore to have good tech. It's great to have phenomenal USPs, but you've got to really have customer empathy, and that's really important to how we deliver everything at the moment.

Dean Gratton: You touched upon that, Graham, and I think that's right. I also get frustrated with technology. Not technology as such; I get frustrated with the people who are developing or creating the technology, who often over-inflate the technology's capabilities. A good example of that is artificial intelligence at the moment. I really get annoyed. I started developing Bluetooth products a long time ago. In '99 I was working with Bluetooth technology and it was over-hyped, over-inflated, it could do all this X, Y and Z, and it was nowhere near ready.

I was developing software against a specification that had not been ratified, version 0.9, and I still see the whole cycle today, whether it's IoT, whether it's the Industrial Internet of Things or whether it's artificial intelligence. Everything's over-inflated, exaggerated beyond its capabilities. How do you control that?

Sarah-Jayne Gratton: That's a question!

Graham Herries: That's a really good question. Can you control it? I'm not convinced you can because, as you say, everybody wants to be an early adopter. I think what you have to do is ensure you can actually deploy something. So, again, I kind of come back to that spiel I just talked about, and it's about how can a customer deploy a solution to actually add value to their business? Because, let's face it, this is not about how we apply tech just for the sake of applying tech. This is about customer use cases and operational use cases, and we really need to understand what's required and then deliver a solution and deliver the tech to support that, rather than deliver the tech for the sake of the tech.

Dean Gratton: Perfect.

Sarah-Jayne Gratton: Yeah. I know that Intel are really helping you guys do this. They're helping you meet these expectations and deliver what you need. So it's a great partnership.

Dean Gratton: I think what you just said was perfect. I think that actually helps solve the problem, the question I asked. First let's look at the problem and how we solve it, rather than, "Oh, we've got Bluetooth technology. Oh, we've got artificial intelligence. What can we do with that?" No, let's first look at the problem, how do we solve it, and look at what's around us that could help us solve that problem.

Sarah-Jayne Gratton: Mm-hmm (affirmative).

Graham Herries: Indeed. Indeed. You're right, Intel have been fantastic. One of the best partnerships I've ever seen in my career. Just so open-minded. You're not having a sales pitch rammed down your throat. It's more, "How can we enable you?" and that's been fantastic for me, even just these last nine months since joining.

Sarah-Jayne Gratton: That's wonderful to hear. Well, we're big fans of them, as you know. Slightly biased, but in a good way.

Dean Gratton: Thank you both.

Graham Herries: No, indeed, it's been really great. Thank you.

Sarah-Jayne Gratton: That's it. Thank you so much for tuning in to this episode.

Dean Gratton: If you've enjoyed this podcast, you can find out more about retail innovation at insight.tech.

Sarah-Jayne Gratton: On behalf of Intel, this has been Sarah-Jayne.

Dean Gratton: And Dean Gratton. Until next time.

Sarah-Jayne Gratton: Until next time.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

About the Host

Kenton Williston is an Editorial Consultant to insight.tech and previously served as the Editor-in-Chief of the publication as well as the editor of its predecessor publication, the Embedded Innovator magazine. Kenton received his B.S. in Electrical Engineering in 2000 and has been writing about embedded computing and IoT ever since.
