Did you know the average bare-metal physical security hardware system uses only 60% of available CPU resources and 45% of memory? As more industries transform their video camera systems into intelligent solutions capable of providing valuable insights, they could be underutilizing their resources.
Fortunately, by adopting a virtual architecture, businesses can optimize their system resources by consolidating them within a virtual environment. This approach not only maximizes resource utilization but also reduces reliance on physical hardware, sometimes even by up to 50%.
In this episode, we delve into the profound impact of virtualization on the video analytics field. We explore strategies to overcome implementation challenges and discuss why businesses should seriously consider making this transition sooner rather than later.
Our Guest: BCD
Our guest this episode is Darren Giacomini, Director of Business Development at BCD, a video storage solution provider. Darren has more than 16 years of experience in the IT field, specifically designing, implementing, and troubleshooting LAN and WAN infrastructure. Prior to his current role, he was the Director of Networking for BCD and Avaya, and a Senior Network Systems Engineer for Pelco.
Darren answers our questions about:
- (2:24) The evolution of physical security systems
- (4:06) Challenges with obtaining valuable video data analytics
- (7:44) The role of virtualization in providing valuable insights
- (12:20) How to successfully make the move toward virtualization
- (15:17) Lessons learned from others in the industry
- (17:42) Partnerships backing the move toward virtualization
- (20:50) The future of virtualization for businesses
To learn more about virtualization within the video analytics space, read The Future of Video Analytics Is Virtualized. For the latest innovations from BCD, follow them on Twitter at @BCDvideo and on LinkedIn.
Christina Cardoza: Hello, and welcome to the IoT Chat where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re talking about virtualization in the video space with Darren Giacomini from BCD. But before we get started, let’s get to know our guest a bit more. Darren, welcome to the IoT Chat. What can you tell us about yourself and what you do at BCD and what BCD is?
Darren Giacomini: So, BCD is a manufacturer of servers and storage and networking equipment for the physical security industry. And we specialize in taking what you might consider like a standard-production Dell server, and really kind of putting the modifications into it to actually make it adapt to the physical-security market.
So, despite the fact that you can get servers and you can get storage and workstations and whatnot from multiple locations, unfortunately most of those are more often than not catered to the IT environment—meaning that in an IT world you have a lot of data that’s stored in a repository or data center somewhere. And while that may be a high-availability application, or that may be something that you want to make sure that the data’s always available, it’s also a complete reverse paradigm—meaning that the data’s going outward to a select few individuals that request it at any given time.
So when you build your hardware specifically for that, the things that ship from the major manufacturers are set up for that type of environment. Unfortunately, physical security, you have hundreds and thousands of cameras and IoT devices simultaneously bringing data inward. What we do is we specialize in redesigning and re-implementing those particular devices to make sure that they’re optimized for that type of application.
Christina Cardoza: Great. And I’m sure this is where a lot of the talk around the virtualization conversation is going to come in, but I wanted to start off the conversation talking a little bit about—you mentioned it, when you have physical security you have all this data coming in from all of these different camera points, and I think the use of cameras has just evolved so much more, even beyond the security aspects for businesses. So I’m curious to see what are the trends you’re seeing or what is the importance that BCD really sees organizations and businesses utilizing and benefiting from video camera systems and these analytics today.
Darren Giacomini: Well, they’re getting—they’re becoming incredibly powerful. People are building analytics in to not only count objects but look for objects that have been left behind. We also see a lot of things where they’re starting to use these cameras to set up IoT devices at the edge in smart cities, where they can actually look at parking stalls and parking lots and determine what lots or what spaces are open or empty.
And now you’ve got smart cities where you can open up your smartphone instead of driving around endlessly looking for a place to park your car. It will say, “Oh, by the way, there are two spots 300 feet up on the right.” Or “No, you need to go down one level in the parking garage, and there’s five here.”
So, cameras can pull a lot of that analytical data and a lot of that metadata in to be analyzed. There’s a lot of applications we’re seeing for cameras specifically in IoT devices that reside at the edge of networks.
Christina Cardoza: Yeah, absolutely. And going back to that point you made in your intro, where you have all of this data coming and going into different places, and now it’s becoming even more important for businesses to be able to extract value from the data from what the video cameras are capturing in real time and make sure they can understand what are the false positives versus what’s actually actionable or they need to react to, I think that just amplifies the data—so much more of it going to different places and being able to get to the people and businesses faster. So, what—can you talk a little bit more about that data challenge you introduced with the company in the beginning? What are the problems that businesses are facing with their camera systems, especially now as their use cases grow beyond just physical security?
Darren Giacomini: Well, it’s twofold. I mean, number one, when you start talking about hundreds if not thousands of cameras, you’re seeing a trend where people are starting to expand their retention periods—retention periods meaning how long you have to keep the video at full frame rate. And in some cases you may be talking about seven days; some people keep it for thirty days; some people, in some of these correctional facilities, want to keep it for two years. Think evidentiary value.
When we start talking about holding that high quality of video for that time frame, you’re talking about an enormous amount of storage—petabytes and petabytes of storage—and analytics come into play with that. When you look at smart cities and other things that may have not a hundred but thousands of cameras put throughout the city, and if you’re having to store all the data from all those cameras all the time for extended periods of 12 months or 24 months or even longer, it can become not only incredibly expensive but difficult to maintain. You are going to have drive failures. You are going to have things that happen in trying to retain that data.
So a lot of what you’re seeing in the movement with what we’re seeing with 5G networking and IoT devices at the edge, people are trying to push the decision-making out to the edge to determine what’s important for video and what’s not. A little known fact is in most cases you’ll see that maybe only 5% to 10% of the video that’s recorded is ever used. So the rest of that is just being stored places where we have these large volumes of data that we have to sift through, and analytics and whatnot can help us find things quicker.
For instance, I can run a search over a two-year period of data that says I want all white trucks that went this direction during this time frame on any day. And you can pull that video back and look at it based upon an analytical search. But the idea of doing that at the edge for 5G is, if we can determine what’s important and what’s not important, then we don’t have to store everything. So I think analytics is going to play a huge, huge, huge part in trying to scale back the massive amount of data and resources that we’re currently seeing.
And I think the approach is going to change. Today it’s: keep everything, everything has to be kept for two years. And we’ve seen throughout the years people have changed the rules a little bit. Everything has to be kept at 30 frames or 60 frames per second, 1080p, for maybe six months. And then we can drop that frame rate down and go down to 15. But what we can’t do is drop it below the thresholds for evidentiary value in municipal courts: if you bring video into a court system and it’s used as evidence, it has to meet a certain threshold for municipalities—meaning certain frame rates and certain resolutions at which you can actually identify the people that you’re looking at.
So we have big changes on the horizon, and I believe that analytics are going to play a big part in helping us get some of these massive amounts of data under control.
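To put rough numbers on the retention math Darren describes, here is an illustrative back-of-envelope calculation. Every figure in it—camera count, bitrate, retention window, and the "only 5% to 10% of video is ever used" ratio he cites—is an assumption for illustration, not a BCD specification.

```python
# Back-of-envelope storage sizing for a large camera deployment.
# All figures are illustrative assumptions, not vendor specs.

def storage_petabytes(cameras, mbps_per_camera, retention_days):
    """Raw storage needed to keep every camera's stream at full
    quality for the whole retention window, in petabytes
    (1 PB = 1e15 bytes)."""
    seconds = retention_days * 24 * 3600
    total_bytes = cameras * (mbps_per_camera * 1e6 / 8) * seconds
    return total_bytes / 1e15

# A city-scale deployment: 2,000 cameras at ~4 Mbps, kept two years.
keep_everything = storage_petabytes(2000, 4, 730)

# If edge analytics decide only ~10% of footage is worth keeping
# long term (the figure cited above), the footprint shrinks to match.
keep_important = keep_everything * 0.10

print(f"Keep everything: {keep_everything:.1f} PB")
print(f"Keep ~10%:       {keep_important:.1f} PB")
```

Even at these modest per-camera bitrates, "keep everything for two years" lands in the tens of petabytes, which is why edge-side filtering changes the economics so dramatically.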
Christina Cardoza: Yeah, and you make a great point that it’s not just the real-time analytics that businesses are getting. They want to store this data so that they can—the historical data—so they can predict patterns or see how they did year over year and make—gain more insights outside of just the real-time analytics that they can get. Which can, like you mentioned, cause a storage issue that can be expensive to store all that data, and then it can be slow to be able to access that data.
So I’m curious, we mentioned that we’d be talking about virtualization in the beginning of this podcast. I’m curious what’s the role of virtualization in easing some of this congestion—some of this storage issue in this video data–analytics space today.
Darren Giacomini: It comes down to—when you look at, in particular, virtualization, it’s utilization of your resources. So, in a typical physical security environment you’re going to have cameras that reside at the edge, or IoT devices or sensors or things that are bringing data back. You’re going to have a network infrastructure—whether that be backhaul wireless, or whether that be hardwired network infrastructure—that’s going to bring that back to a centralized or decentralized point of recording where we store it for a certain time frame.
And then in some cases you may have second-tier storage behind that, where you have a primary high-performance storage tier, and then you have a secondary, lower-performance storage tier. But, regardless, you have to get that data back to those storage tiers. And then at the same time you have people who are responding in real time, people who are actually watching these cameras, that are pulling them up and need to see a particular event. You’ve got a car accident, you’ve got road blockage, you’ve got this and that. You need to be able to pull that video up in real time and actually see what’s going on there. And, in any of those, that requires taking that data and either bringing it directly from the camera, or redirecting it through the server out to the workstation—all of that utilizes resources.
But if we analyze specifically the most important segment in there, and what people are most concerned about, it’s where we store the video. And when we store the video, these are nothing more than servers. Maybe they’re backended by a SAN or a NAS for more retention on the backend, but they’re servers, and servers have finite resources. You have CPU, you have memory, you have network resources, you have things in there that drive the horsepower, the efficiency of that particular device. And, on average, only about 55% to 60% of the CPU cycles are used on any given archiver.
So when you’re doing a bare-metal server approach—when you’re buying a server and you’re putting it in there and you’re not virtualizing—you may be leaving 40%, 45% of the CPU cycles and cores that are allocated to that server unutilized. And it has nothing to do with the server’s capability itself. It may have to do with the fact that you’re running on Windows Server 2019 or Linux or whatever you’re running on, and you can only load one instance of that software application on there. And if that instance that can run on that operating system only utilizes 60% under max capacity, the other 40% becomes unused. There’s really nothing you can do with it.
So virtualization allows us to add in basically an abstraction layer like VMware ESXi, Hyper-V, or Nutanix—one of the many platforms that are out there. And it allows you to put that abstraction layer in and take the hardware and make it a pool of resources for that particular device. So now you virtualize those machines—what would be a bare-metal server as an archiver, you virtualize that into a flat-file structure, and you virtualize maybe the directory servers into a flat-file structure, or you virtualize whatever other entity for access control or whatever else is in there. You virtualize that into a flat-file structure that can be stored somewhere in a common share point.
And when you do that you have the ability to create more than one instance on that machine. So instead of running one instance of Windows Server 2019, maybe I run five, and I divide the resources up amongst that machine, and I can take that CPU and memory that wouldn’t traditionally be utilized and actually more effectively utilize it and get more production out of what I bought.
So naturally you’d think for a company like BCD, where we sell high-performance servers into this market, that would be something we don’t want to happen, but it’s going to happen regardless. Virtualization has been in the IT field for a very long time. It’s penetrated in this side of the market; there’s nothing you can do about it. You have to embrace the fact that people are wanting to do more with less with respect to footprint, and the power footprint and the cooling footprint and everything that goes along with that naturally makes these projects work more efficiently when you virtualize.
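The consolidation Darren describes is essentially a bin-packing problem: several workloads, each using only a fraction of a host, packed onto fewer hosts behind a hypervisor. Here is a minimal sketch of that math; the 35%-per-workload figure and the 5% hypervisor overhead are assumptions chosen for illustration.

```python
# Illustrative consolidation math: bare-metal archivers that each use
# only a fraction of a host's CPU, versus packing several VM instances
# onto shared hosts. Utilization figures are assumptions.
import math

def servers_needed_bare_metal(workloads):
    # One OS instance per box: each workload gets its own server,
    # no matter how little of the box it actually uses.
    return len(workloads)

def servers_needed_virtualized(workloads, host_capacity=1.0, overhead=0.05):
    # Pool the hardware: workloads are expressed as fractions of one
    # host's capacity, and we reserve a little headroom for the
    # hypervisor itself.
    usable = host_capacity - overhead
    return math.ceil(sum(workloads) / usable)

# Eight archiver workloads, each drawing ~35% of a host's CPU.
workloads = [0.35] * 8

print(servers_needed_bare_metal(workloads))    # 8 boxes
print(servers_needed_virtualized(workloads))   # 3 boxes
```

With these assumed numbers, eight bare-metal boxes collapse to three virtualized hosts—the same eight-versus-three comparison Darren returns to later when he talks about bidding.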
Christina Cardoza: So the benefits seem obvious, being able to utilize this virtual architecture and maximize all of the system’s resources. The one thing that I’m curious about is, obviously with any move or changes that businesses have to make, there’s always a knowledge constraint, or there are technology constraints, or there’s just not a clear path on how they do this.
So I’m curious, from your perspective, how can businesses successfully move to virtualization, or how set up or ready would you say they are to make this move for those who have been relying on just bare metal and some of these traditional approaches you mentioned?
Darren Giacomini: I think we’re embarking on a journey that’s very similar to what we already went through when we saw the shift from analog to digital or IP. And in essence what happened is there used to be analog matrix bays with analog cameras, and physical matrix switchers that would switch the views for you to look at these cameras. And when I first came into this market that’s what was predominantly out there, and I worked for Pelco at the time; it was all matrix-based switching, everybody was utilizing those, and IP digital was starting to emerge in the market, and we could see that it was coming like a freight train.
The fact of the matter is things were going to go IP digital; the days of a matrix-switching bay were just not efficient anymore. They were very good at what they did, but IP digital was catching up to them. And so during that time frame of seven, eight, nine years I did a lot of training and discussing with integrators about skill sets and how they were going to have to modify and change. And now almost everything we push out into the market is digital.
I think we’re going to have a lot of the same growing pains, where you’re going to have people that have a particular skill set today that they’re used to, and they’re going to have to modify their approach. If they don’t modify their approach, where they’re really going to feel it is in their pocketbook.
Because the fact of the matter is, if you’re quoting eight servers and I can do the same thing with three, I’m going to outbid you—I don’t care how much VMware costs. Three servers versus eight—even if we’re close or even if I’m a little bit more, when I go back to the interested parties or the people who are making the decisions specifically on the financial side, and I tell them, “Okay, take a look at the total cost of ownership of running eight servers versus three. Okay, this is going to be more efficient, this is going to take up less real estate in your data center, this is going to take less power to keep it cool.” And all these things come into play.
And I think there’s going to be a little bit of a struggle, like we saw in the movement from analog to digital, and you are going to have some people that unfortunately drop out of the market because they don’t want to make that transition. But I don’t think it’s going to be as big. I think everybody’s used to working with Windows now; everybody’s used to working with servers. It’s just that next step of learning that, instead of plugging into the front of a server, I’m going to get into a webpage that’s going to represent that server, and that’s the virtual machine that I’m going to work off of. It will be an educational experience, and people are going to have to take their employees and send them out probably for some training to come up to speed on that.
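The total-cost-of-ownership argument Darren makes in the bid scenario above can be sketched in code. Every dollar figure, wattage, and cooling factor below is a made-up placeholder to show the shape of the comparison, not real pricing for any vendor.

```python
# A hedged sketch of the eight-versus-three TCO comparison: hardware,
# power and cooling over the ownership period, plus hypervisor
# licensing on the virtualized side. All inputs are placeholders.

def tco(servers, server_cost, watts, years=5,
        kwh_rate=0.12, cooling_factor=1.5, license_per_host=0.0):
    hardware = servers * server_cost
    # Energy: raw draw times a cooling multiplier, over the full period.
    kwh = servers * watts * cooling_factor * 24 * 365 * years / 1000
    energy = kwh * kwh_rate
    licenses = servers * license_per_host
    return hardware + energy + licenses

# Eight cheaper bare-metal boxes vs. three beefier virtualized hosts
# that each carry a hypervisor license.
bare_metal = tco(8, server_cost=12_000, watts=600)
virtualized = tco(3, server_cost=18_000, watts=700,
                  license_per_host=6_000)

print(f"Bare metal:  ${bare_metal:,.0f}")
print(f"Virtualized: ${virtualized:,.0f}")
```

Even with pricier hosts and licensing added in, the three-server column wins on hardware, power, and rack space under these assumptions—which is Darren's point: the hypervisor cost is not what decides the bid.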
Christina Cardoza: So how is BCD helping with this educational portion of it, or helping businesses make this transition? Do you have any—we talked about use cases in industries in the beginning—but do you have any particular examples or customers that you can share with us how you guys went in and helped them make the transition, and what the benefits they saw were from moving to virtualization?
Darren Giacomini: Yeah. I’m not going to call out the particular city, but there’s a city on the East Coast where literally their entire municipality is running off of what was our REVOLV platform, which was a hybrid hyper-converged approach based on virtualization that also gives you high availability—to where if a particular server failed, the VMs move and migrate and they continue to operate. And that literally cut down on the things that they had to do, like getting people up at 2:00 or 3:00 in the morning to deal with a server that lost a power supply.
You know, when you build a high-availability infrastructure like that, and you virtualize, you see the benefit that it is to those particular customers—meaning that you’ve built this virtualized environment where virtual machine mobility between nodes is possible, so that if I unplug a node all the VMs that ran on it are going to move to the other nodes in three to five minutes and start running again. That’s a very powerful thing. Because the fact of the matter is, in the time frame that that happens, they couldn’t even pick up a phone and call their integrator, and the chances are their integrator is not going to roll a truck there until tomorrow. So what do you do in the meantime? You scramble to figure out whether you have to put people out on the street to patrol, or whether you have to watch things.
There’s some huge benefits that we brought to that particular municipality, in the fact that we gave them a much, much smaller footprint than what they had before—20, 25 servers versus I think 4 or 5 to run everything in a virtualized environment. And then we gave them that high-availability aspect. That means that in the middle of the night when things go wrong you’re not scrambling to send somebody out to site; it’s going to do an automated recovery-feature set for you.
Christina Cardoza: Wow. So it sounds like you guys were really able to come in and make some of these improvements. One thing that keeps coming to mind—and I should mention, the IoT chat and insight.tech as a whole, we are sponsored by Intel—but when we’re talking about all of the storage and memory and compute, and being able to get access to data and real-time analytics and gain value, I know Intel and Intel processors are behind a lot of this, making it happen and making it easier for businesses. And I know BCD has partnered with Intel before, so I’m just curious what their role is, or how the technology, the Intel technology, plays into making some of this happen, as well as if there are any other partners that you work with in this whole move to virtualization.
Darren Giacomini: Sure, we do. In virtualization we work with Nutanix; we work with VMware; we obviously work with Microsoft Windows and Hyper-V. Those are all staples and companies that we work with. But, for the most part, when you start to think about partnerships, the Intel side has been very instrumental in the things that we’ve done. In my particular role at the company I work with the professional services team, but I also do new-product development. So, emerging technologies and new products that we develop typically come through my office to actually engineer them, validate them, and make sure that they’re up to spec.
And that’s, I guess—well, I didn’t really hit on it—that’s one of the things that sets BCD up as a differentiator. We don’t just build servers; we really do holistic, end-to-end solution sets. We have expertise in operating systems, networking, SAN storage, infrastructure, all the VMS platforms. So when those integrators find themselves in a situation where they can’t find a very simple solution to something, or they can’t figure something out, we’re there to back those integrators up—whether through professional services or giving them assistance to point them in the right direction.
And that also plays a part in this virtualization that we’re headed towards. Today most of the industry’s not ready for virtualization, yet my team will actually build and ship a turnkey solution for them—already put together with a topology—and all they really have to do is plug it in and turn it on.
But that’s where the partnership with Intel comes in. For me to go out and put a request in to get 40 or 100 gig NIC cards to see how far I can push that, that’s going to get shot down before it even reaches the upper-management levels or executive levels in most cases, because they’re not cheap. And also, the fact is it’s an unproven technology. As VMware ESXi brought in the 40 and 100 gig support, we need to figure out what the choking point is. Is it the actual box itself? Is it the NIC card? Is it the software? What are its upper bounds? What is it capable of doing? And Intel has been really good about providing resources like these network cards and other resources that we need to do analytics and things of that nature to help push the envelope and figure out exactly what we could do on their platforms. And it’s been instrumental specifically to what my team does.
Christina Cardoza: Yeah, that’s great to hear. And I think it’s important to note: that turnkey solution, that businesses don’t have to have all the expertise or knowledge in-house. You know, there’s partners out there like yourselves that can really make this easy for them and really make it foolproof for them.
You mentioned some of the other services that you guys are hitting within the company, and we’ve been talking about the IoT—all of this is moving to the edge and all of these IoT networks then become created. So, I’m wondering, beyond the physical aspects, where do you see virtualization going, or where else can virtualization help organizations?
Darren Giacomini: Across the board it’s really about utilizing your resources more efficiently. And there are some companies out there that just have so much money they don’t care and they’ll—it’s easier for them to do pizza-box servers or do bare-metal solutions, and they don’t want to get involved with the complexity. But there are certain things you can’t do with that as well.
When you start talking about virtualization, you start talking about the ability to create recovery points and snapshots. And we partner with a company called Tiger Technology that, in my opinion, is absolutely outstanding at looking at next-generation hybrid-cloud storage—meaning that you’re going to have an on-prem presence, you’re not going to push your cameras right to the cloud, but you have the ability to get the hooks into NTFS or the actual file structure inside of Windows and make it an extension of that platform—meaning at any given time I can take backups, or I can take multiple instances of backups and push those out to the cloud for recovery. Or the same can be done locally. But you can’t really do that type of deal in a bare-metal environment.
In bare metal, what were we doing? We had tape carousels, we were running these just atrocious backups that would take forever to back up things to tape, and then storing that offsite, back in the day. Fact of the matter is, VMware gives you the ability to do snapshots, and most virtual platforms do. If I can take a snapshot and I can create a repository of snapshots, when something goes wrong… And, you know, one of the things that’s always high on my list is talking about cybersecurity initiatives. What is your disaster recovery plan? What is your business-continuity plan if you get hit? And the fact of the matter is everybody is going to get hit. I don’t care how careful you are, there are zero-day instances out there that are going to hit you at some point.
And most things that we’re seeing today are ransomware. They’re going to crypto-lock your critical resources in your company and say, “How valuable is it to you? And how much Bitcoin do you have? Send us this much, and we’ll send you the key to unlock it.” You don’t find yourself in that scenario when you’re running a virtualized environment and taking regular snapshots. You can actually say, “This is an acceptable loss. Roll the snapshot back one week; let’s take the one-week loss rather than paying out the crypto. Go in, make sure we’re back up and operational and we’re not crypto-locked, check for it, and move forward.”
And to do that in a bare-metal environment is just not realistic. I mean you could, but it—the simplicity of being able to take those snapshots and roll back, and how quickly you can roll back to a different version of a virtual machine is unparalleled in the bare-metal market.
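The rollback decision Darren describes—pick the newest snapshot that predates the compromise, and accept that window of data loss rather than paying the ransom—can be sketched in a few lines. The snapshot repository and timestamps here are hypothetical; real platforms such as vSphere or Hyper-V expose snapshots through their own management APIs.

```python
# A minimal sketch of choosing a recovery point after a ransomware
# hit: the newest snapshot taken before the compromise, as long as
# the implied data loss stays within an acceptable window.
from datetime import datetime, timedelta

def rollback_target(snapshots, compromised_at, max_loss=timedelta(days=7)):
    """Newest snapshot timestamp that predates the compromise,
    provided the loss it implies is acceptable; otherwise None."""
    clean = [s for s in snapshots if s < compromised_at]
    if not clean:
        return None
    best = max(clean)
    return best if compromised_at - best <= max_loss else None

# Hypothetical daily snapshots over two weeks; breach found today
# but traced back two days.
now = datetime(2023, 6, 15, 3, 0)
snaps = [now - timedelta(days=d, hours=3) for d in range(1, 15)]
target = rollback_target(snaps, compromised_at=now - timedelta(days=2))

print(f"Roll back to: {target}")  # newest snapshot before the breach
```

The interesting policy knob is `max_loss`: it encodes in advance what "an acceptable loss" means, so the 2:00 a.m. decision is mechanical rather than improvised.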
Christina Cardoza: Yeah, absolutely. And it’s important to continue to remain reliable, and just the brand standards that you have, to keep that up to be able to roll back and not really have everything shut down, or not really have continued issues. So I think that that’s also something important to note.
We’re nearing the end of our conversation, but before we go I just wanted to throw it back to you one last time, sort of an open-ended question. Is there anything else about this topic you think our listeners should know, or are there any final thoughts or key takeaways you want to leave listeners with today?
Darren Giacomini: It just really comes down to, if you’re an integrator out there, if you own an integration company, you deal in the physical security market—you can ill afford to ignore the fact that virtualization is coming, because today you’re only using a fraction of the resources on those servers. The fact of the matter is, there are people who are figuring this out, and when they’re going into these bids and they’re giving a more functional solution, a more highly available solution, and they’re $60,000 cheaper than you, it’s going to be very, very difficult to actually go in and be competitive.
And there are going to be stragglers in the market, and there are going to be places that are not going to virtualize for a very, very, very, very long time. But the large majority of the market, I predict in the next three to five years is going to hit mainstream virtualization, and you either need to get on board with that or you’re going to find yourself in a situation where it’s going to be very, very difficult to be competitive in the market.
Christina Cardoza: Absolutely. And I like that point, the three-to-five year prediction, because I think that businesses and organizations—they’re going to have to make the move, or the transition to virtualization is going to come sooner than they think. And it just sounds like it’s going to benefit them, and this is going to help their businesses succeed moving forward. So, with that, I just want to thank you, Darren, for joining the IoT chat today, and for the insightful conversation. And thank you to our listeners for tuning in today. Until next time, this has been the IoT Chat.
The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.
This transcript was edited by Erin Noble, copy editor.