
COMPUTER VISION

AI Video Analytics Empower Communities: With Videonetics

Srivikraman Murahari

There is a common misconception that AI will become an intrusive part of our everyday lives—but it’s actually quite the opposite. The reality is that AI has the potential to enhance everyday life in many ways that we may not even notice. For example, AI video analytics can be used in smart cities to monitor traffic flow, identify hazards, and help emergency responders quickly respond to incidents.

Of course, there are still concerns around security and privacy. But many companies are committed to implementing AI in a way that prioritizes user security.

In this podcast, we talk about the opportunities AI video analytics provide to communities, real-world uses of AI video analytics in smart cities, and how to successfully deploy these systems in a safe and secure way.

Listen Here

Apple Podcasts      Spotify      Google Podcasts      Amazon Music

Our Guest: Videonetics

Our guest this episode is Srivikraman Murahari, Vice President of Products and Strategic Alliances at Videonetics, a video management, video analytics, and traffic management provider. In his current role, Srivikraman leads the company’s product strategy and roadmap as well as manages alliances with technology and ecosystem partners around the world. Prior to joining Videonetics, Srivikraman worked at Huawei for 20 years in various roles such as Head of Consumer Software, Associate Vice President, and Senior Product Manager.

Podcast Topics

Srivikraman answers our questions about:

  • (3:06) Challenges AI video analytics address in smart cities
  • (5:40) Implementing AI solutions that balance citizen privacy and well-being
  • (7:29) Developing and deploying solutions citywide
  • (9:09) Technology infrastructure that goes into successful deployments
  • (11:38) Creating end-to-end solutions with ecosystem partners
  • (14:25) Additional opportunities and use cases for AI video analytics

Related Content

To learn more about AI video analytics in smart cities, read Giving the Green Light to AI Video Analytics. To learn more about what Videonetics is doing in the AI video analytics space, follow them on Twitter and LinkedIn.

Transcript

Christina Cardoza: Hello, and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re talking about the power of AI video analytics in our everyday lives. Joining us today we have Vikram Murahari from Videonetics. So, before we jump into the conversation let’s get to know our guest more. Vikram, welcome to the IoT Chat. What can you tell us about yourself and Videonetics?

Vikram Murahari: Thank you, thank you, Christina. I’m the Vice President of Products and Strategic Alliances at Videonetics. I also handle the standards body. So, Videonetics is primarily—you know, we are developing a video-computing platform, primarily focusing on video-management solutions. Video analytics is one of our key areas, and we have about 100-plus smart cities running our intelligent traffic-management system, and about 80-plus airports and more than 100 enterprises running our video-analytics solution. In fact, about 200K cameras are monitored through our platform, and also about 20K lanes, I would say.

One of our main, key USPs is that we have our own deep-learning platform called Deeper Look, and we have about 100 video-analytics applications developed on this platform. The applications cover a wide range of analytics, including people, crowd, vehicle, transport, women’s safety, and retail, and we cover a lot of verticals, including smart city, enterprise, national highway, retail, finance, and so on. For smart city we are number one in India, and we are also compliant with ONVIF standards—that is, the open network video interface standards. So that’s a short introduction to myself and Videonetics.

Christina Cardoza: Yeah, that sounds great. And that’s exactly what we want to get into today, this idea of smart cities. AI and video analytics, like you mentioned—these technologies are only getting smarter, and now they’re being applied in ways that we never would have thought of, and sometimes in ways that we don’t even notice, especially in the context of a smart city.

Citizen life is changing every day, and I know there are a lot of efforts to improve communities and everyday life with these technologies. So can you talk a little bit about the challenges government officials or city planners have seen that can be addressed with AI video analytics in a city context?

Vikram Murahari: See, the major challenge the government officials see is citizens’ non-compliance with traffic rules, okay? That puts a lot of stress and pressure on the government officials in streamlining the traffic situation and smoothing the traffic flow. It also causes a lot of fatalities and accidents and things like that, you know, creating safety issues for citizens.

So our Videonetics application, as I said, is deployed in 100-plus smart cities as our intelligent traffic-management solution, and it has very powerful analytics capabilities on traffic. And throughout all this we provide smart visualization for the government officials, which gives a lot of insights for taking further action. So from this experience of deploying this analytics application in 100-plus smart cities, I can confidently say that the traffic flow is smoothed and streamlined, and more awareness is created among citizens to adhere to the traffic rules. That has been our experience, Christina.

Christina Cardoza: Yeah, absolutely. And I think everybody can agree: no one likes to sit in traffic. I’ve seen these video analytics help city planners decide where to install traffic lights and how to structure roads, making it smoother to get where you’re trying to go and easing up some of that congestion we’re talking about. I’ve also seen AI detect when people are crossing the road, alerting vehicles or pedestrians that it’s not safe to cross yet—really, just safeguarding and protecting the wellbeing of citizens all around.

One thing I’m curious about—because sometimes this information and this data are being collected without us even knowing it’s happening—is privacy. I know a big concern for citizens is how that data is being used. So, how can we implement these types of solutions that focus on the wellbeing of citizens while also balancing their right to privacy?

Vikram Murahari: Yeah, that’s a good question. I would say we have to enforce responsible and collaborative AI. When I say “collaborative AI,” the government officials, the independent software vendors like us, and the citizens should know what is happening and how the data is getting used. We should have a very transparent data policy. The second thing I would say is to use minimized, anonymized data. That means don’t store more data than you need, and the data should be anonymized. Everything is objects; we don’t have any people data with us.

So, moving on, we have very, very strict security standards. That means complying with the international security standards and having very strict security in our protocols and the way we handle the data. Be transparent, and then ensure data safety and compliance with the international standards. I think those are my suggestions, and that’s how we handle it.
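
To make the “everything is objects” idea concrete, here is a minimal sketch of what storing only anonymized, object-level event metadata (rather than identifiable imagery) could look like. This is an editorial illustration, not Videonetics’ actual schema; the field names and values are hypothetical.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AnonymizedEvent:
    """Object-level event record: no frames, no identities, only metadata."""
    event_type: str    # e.g., "wrong_way_driving" (hypothetical label)
    object_class: str  # e.g., "vehicle" or "person" (class label only)
    camera_id: str     # opaque camera identifier
    timestamp: str     # ISO-8601 timestamp in UTC
    bbox: tuple        # normalized (x, y, w, h) within the frame

def record_event(event_type: str, object_class: str, camera_id: str, bbox: tuple) -> str:
    """Serialize an event without persisting any raw video or personal data."""
    event = AnonymizedEvent(
        event_type=event_type,
        object_class=object_class,
        camera_id=camera_id,
        timestamp=datetime.now(timezone.utc).isoformat(),
        bbox=bbox,
    )
    return json.dumps(asdict(event))

# Example: only the event description is kept, never the underlying footage.
print(record_event("wrong_way_driving", "vehicle", "cam-042", (0.31, 0.55, 0.12, 0.08)))
```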

Christina Cardoza: Yeah, absolutely. That sounds like a good setup and best practices for dealing with privacy when using these AI solutions. You mentioned in your introduction that you have been deployed in and helped a number of different smart cities. I’m wondering if you can expand a little bit on those use cases—what your experience has been developing these solutions citywide and what the results have been. Do you have any specific examples of how you came in and either utilized existing infrastructure or implemented new infrastructure and technology?

Vikram Murahari: Yeah, sure. So, as I already mentioned, we have deployed our platform in about 100-plus smart cities, and it has helped to smooth the traffic, streamline the traffic, and ensure the safety of citizens. I can talk about a case study in one of the premier cities in India.

So, there are about 400 cameras monitoring the traffic and another 700 cameras in the pipeline. So I’m talking about 1,100 cameras monitoring the city traffic, ensuring lane discipline and catching wrong-way and one-way violations. It has eased the administrators’ operations and smoothed the traffic flow. And from a technology perspective, more and more cities are adopting cloud for compute as well as storage. We have several projects running where the compute is in the cloud and the storage is in the cloud.

Christina Cardoza: Wow, that’s great. You said in one smart city you’re working with about 400 different cameras, so I’m sure there’s a lot of data and analytics coming in, which can be overwhelming for the officials and planners looking at all of this. I’m curious, how were those cameras implemented?

Were these cameras the smart cities already had throughout their streets, and now you’re able to add intelligence on top of them to get this data? And what sort of AI algorithms or other technology are you using to gather all of that data, make sure performance stays high quality, and get the right information in real time where it matters most, so you’re not getting a whole bunch of false positives and you know exactly what the problem is and how to address it?

Vikram Murahari: So, as far as the cameras are concerned, we have collaborations with all the leading camera vendors around the world. Then for each project, Videonetics—along with the system integrator, the partner who is involved in the project—decides on the most suitable camera. And then the analytics happen on the edge. For the edge we extensively use the Intel platform—the Intel® Core i5, i7, and i9 series, and the latest-generation chipsets, 11th to 13th Gen. So we extensively use the Intel platform on the edge, and then in certain scenarios we use the cloud for storage.

So, coming to your question of how to do it efficiently: our R&D is continuously putting in effort. We have a dedicated, continuous effort on how to optimize the compute. We have traveled a long way, I can say, from the time we started. Now we are at, let’s say, 20x to 30x improved computing efficiency. We are looking at how to use far fewer frames of the video to detect the event instead of processing the entire video. And we are looking at collaboration with partners, using their latest technologies, platforms, and solutions to optimize performance and computing power. That’s how it goes.
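
For context, processing only a subset of frames is a common way to cut compute for event detection. Below is a minimal sketch using OpenCV; it is a generic illustration rather than Videonetics’ pipeline, and the stride and the detector function are placeholders.

```python
import cv2

def detect_events(video_path: str, stride: int = 10) -> None:
    """Run a detector on every Nth frame instead of processing the full video."""
    cap = cv2.VideoCapture(video_path)
    frame_idx = 0
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break  # end of stream
            if frame_idx % stride == 0:
                # Only a fraction of frames reaches the (more expensive) model.
                run_detector(frame)
            frame_idx += 1
    finally:
        cap.release()

def run_detector(frame) -> None:
    # Hypothetical stand-in: a real deployment would run an object-detection
    # model here (for example, one compiled for edge inference).
    pass
```

The trade-off is latency versus compute: a larger stride lowers the processing load but can miss very short events, so the sampling rate has to match how long the events of interest remain visible.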

Christina Cardoza: Yeah. That’s great to hear that you’re working with so many different partners. Every time I think about implementing these technologies, especially on a large scale, like a city, I think a theme we always talk about is better together. You know, this is not something that one organization can do alone. Really leveraging the expertise and the technology from others and putting it together to create a whole end-to-end system. I should mention that the IoT Chat and insight.tech as a whole, we are sponsored and owned by Intel.

But I’m curious what additional benefits or values—how did you and Intel come together to create this partnership and really form this end-to-end solution that you can use within smart cities and some other use cases that I want to get into in a second?

Vikram Murahari: Yeah, see, talking about the partnership with Intel, it has been great, very exciting, because we are focusing more on, or traveling more in the direction of, analytics on the edge. That’s the same direction Intel is promoting: more analytics on the edge, more analytics on the CPU. So that is the direction we are traveling in, and Intel is our best, top partner there; the direction matches for both organizations.

Secondly, we have used Intel’s OpenVINO platform, the OpenVINO deep-learning toolkit. It can optimize the models using techniques such as post-training optimization and neural-network compression. These things reduce the total cost of ownership for the customer because the computing efficiency is enhanced.
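
For readers unfamiliar with these techniques, the sketch below shows what post-training quantization with OpenVINO and the Neural Network Compression Framework (NNCF) can look like in general. It is a generic illustration, not Videonetics’ code; the model path, input shape, and calibration data are placeholders.

```python
import numpy as np
import openvino as ov
import nncf  # Neural Network Compression Framework

core = ov.Core()
model = core.read_model("model.xml")  # placeholder path to an OpenVINO IR model

# Calibration data: a small, representative sample of preprocessed frames.
# Here we use random tensors as stand-ins; real data would come from the deployment.
calibration_frames = [np.random.rand(1, 3, 640, 640).astype(np.float32) for _ in range(300)]
calibration_dataset = nncf.Dataset(calibration_frames)

# Post-training quantization: compress weights and activations to INT8 without retraining.
quantized_model = nncf.quantize(model, calibration_dataset)

# Save the optimized model and compile it for CPU (edge) inference.
ov.save_model(quantized_model, "model_int8.xml")
compiled = core.compile_model(quantized_model, "CPU")
```

The practical effect is that the same hardware can serve more camera streams per node, which is where the cost-of-ownership reduction mentioned above comes from.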

And one of the best things to talk about with Intel is their DevCloud platform, which is always available for us to benchmark and test our latest models. As we speak, our models are being benchmarked on the 11th to 13th Gen Intel chipsets. In fact, I’m very happy to announce that we won the Outstanding Growth ISV Partner Award from Intel for 2023, for enabling Intel to win more customers and outshine competitors, as well as enabling them to onboard more partners. So it has been a very long, successful journey with Intel for us.

Christina Cardoza: Wow, congratulations. And I’m sure that just helps the ecosystem as a whole, being able to work with more and different partners, and I can definitely understand why edge is such a big component here when we’re dealing with all these video cameras and data that you’re talking about. You know, the edge is really where you’re going to get that fast performance, low latency, and real-time data that you need with some of these.

One thing that interests me is we’re talking about traffic detection and detection of all of these things—the underlying AI capabilities that go into this, like object detection—and you mentioned you’re using OpenVINO to optimize some of these algorithms for the smart city use case. But they can be applied, I think, to a number of different use cases. Can you talk about what other use cases or AI advantages you see, maybe outside of smart cities, that we can look forward to?

Vikram Murahari: Yeah. Outside of smart cities we are in quite a good number of verticals. The biggest space outside of smart city is aviation and airport security, and then enterprises such as oil and gas, and thermal. In these enterprises and in airport security we are helping more than 80 airports with analytics such as quickly detecting fire and smoke, which is pretty dangerous for all three industries I spoke about, and detecting an object moving in an area where it is not supposed to be, or a person falling.

These kinds of video-analytics applications are quite a hit and create a lot of value for the enterprises. Besides that, we support a lot of industries, and one of the most widely used use cases is PPE detection for workers, for their safety. In retail we have heat maps, which give retail owners insights to help them understand their selling patterns. And then we are also in other areas such as mass transport, not just the city but also mass railways. Most of the railways in the country are using our Videonetics deep-learning platform.

So we are in quite a good number of verticals; I can also mention banking, finance, and things like that. And another interesting area is forensic search, which we also support and which is very useful for investigations.

Christina Cardoza: Yeah, absolutely. It’s great to see all of these different ways that AI can be used in the smart city, but then also outside of the smart city.

I know we are close to the end of our time together, but before we go I just want to throw it back to you real quick, Vikram, to see if there are any final thoughts or key takeaways you want to leave our listeners with today—about moving toward video analytics or moving toward the edge, or how they can successfully approach AI video analytics solutions.

Vikram Murahari: My key takeaways would be: adopt data and technology, including responsible and collaborative technology and responsible and collaborative AI, to increase the vigilance of governance, increase the operational efficiency of enterprises, enhance the safety of people, and go beyond security. Video and IoT are an excellent combination, with lots of use cases that will enrich the quality of human lives.

In traveling this journey there could be some challenges such as, for example, field of view. A camera’s field of view is restricted, but we are exploring innovative methods such as sending drones to difficult places to capture video. And in intelligent traffic-management systems—in the city I talked about—all the police have body-worn cameras. So we are looking at innovative ways to reach difficult areas.

And regarding the computing and how we should travel, as I already explained, we have to continuously invest in optimizing the computing power, we have to be open with our APIs, and we have to show a lot of openness so that our platforms are easily interoperable with other third-party vendors. That is also quite important.

And finally, again, I repeat: ensure responsible and collaborative AI, and take the administrators and citizens into confidence. I think those are my key takeaways.

Christina Cardoza: Yes, some great final thoughts. One thing that I love that you said is that it’s not just about the technology; it’s about solving real-world problems. So it starts with the problems and the use cases that we need technology to solve. And then we have the technology there, working with third-party operators and making sure that you fit within this ecosystem, which I think is really important, so that when you are moving toward these types of solutions you’re not locked in, and you have lots of different choices and partners you can rely on.

So, thank you for the conversation, Vikram. It’s been great talking to you, and I look forward to seeing what else Videonetics does in this space. And thank you to our listeners for joining us today. Until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

About the Author

Christina Cardoza is an Editorial Director for insight.tech. Previously, she was the News Editor of the software development magazine SD Times and IT operations online publication ITOps Times. She received her bachelor’s degree in journalism from Stony Brook University, and has been writing about software development and technology throughout her entire career.
