From busy city centers to suburban front doors, video cameras are widely used to protect people and property. This is nothing new. In fact, the first use of video for city safety was almost 60 years ago when the London police deployed cameras in Trafalgar Square during a visit by the Thai royal family.
Since then, London has become one of the most monitored cities in the world. And Londoners appear to be just fine with this. In one poll, 84 percent of people surveyed indicated they are comfortable with video cameras on the streets and in public settings. To most citizens, the technology represents personal safety and prevention of harmful events.
Video cameras are widely deployed in the hope of enabling cities and first responders to act on crime, emergencies, traffic problems, and more. But installing thousands of cameras across a city, even with expensive command-and-control centers, is no guarantee that events will be detected. There are simply too many feeds for safety personnel to watch, and too much data to analyze for immediate action.
Legacy citywide surveillance systems can enable after-the-fact response, but they can’t initiate preemptive action or handle a situation in progress.
Today’s advanced AI-based systems are taking safety applications to a new level of sophistication. High-performance edge devices bring more intelligence and functionality to existing systems to offer multi-sensory awareness.
Sensors advance surveillance applications beyond vision, to include detection of sounds, smells, and other environmental factors. And real-time analytics from edge-to-cloud, combined with deep learning, enable automated response.
Combining AI and Multi-Sensory Analysis
iOmniscient Corporation provides safety and security well beyond image recognition and motion-based video analysis. Its IQ Smart City Solution goes a step further with its patented automated-response capability. Acting simultaneously on diverse data, the system detects an incident the moment it occurs, automatically locates the nearest appropriate responder, and directs rapid action.
Here’s one example of how the solution can automatically carry out a chain of events: A car accident is detected by the system’s Traffic Management capability. The system notes that the vehicle is also on fire. The nearest police vehicle is located and informed about the accident with a video of the event and detailed information on the location.
The nearest fire brigade and ambulance are also informed with similar information. As a result, response times for street incidents such as this have in some cities fallen from an average of 25 minutes to less than 5 minutes.
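The dispatch logic described above can be illustrated with a minimal sketch. This is not iOmniscient's implementation; the responder names, coordinates, and the haversine-based nearest-unit selection are all hypothetical, chosen only to show how a system might pick the closest appropriate responder to an incident.

```python
import math
from dataclasses import dataclass

@dataclass
class Responder:
    name: str
    kind: str          # e.g. "police", "fire", "ambulance"
    lat: float
    lon: float

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_responder(responders, kind, lat, lon):
    """Pick the closest responder of the required kind to the incident."""
    candidates = [r for r in responders if r.kind == kind]
    return min(candidates,
               key=lambda r: distance_km(r.lat, r.lon, lat, lon),
               default=None)

# Hypothetical units; a real system would track live GPS positions.
responders = [
    Responder("Patrol 7", "police", 51.508, -0.128),
    Responder("Patrol 12", "police", 51.520, -0.105),
    Responder("Station 3", "fire", 51.515, -0.141),
]
unit = nearest_responder(responders, "police", 51.509, -0.126)
print(unit.name)  # Patrol 7
```

In practice the notification sent to the chosen unit would also carry the event video and location details, as the article describes.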
The heart of the system is a broad set of AI-based software building blocks, combined to provide safety and surveillance solutions for specific applications.
The solution can understand the behavior of vehicles and people, and notably can protect privacy. All identifying features, such as faces and license plate numbers, can be removed. These features can be selectively restored for authorized safety personnel when needed, for example, in the case of a lost child.
What’s more, the recognition system can provide accurate results with minimal resolution, even in very crowded scenarios. “Traditional solutions require somewhere between 60 to 100 pixels to accurately recognize someone’s face,” said Ivy Li, co-founder and Managing Director of iOmniscient. “Our algorithms can achieve high accuracy with as few as 12 to 22 pixels.”
The solution combines rules-based heuristics with deep learning algorithms, based on what is most suitable for the environment.
“Deep learning can be very accurate, but it has certain disadvantages. First of all, you have to do the learning and that takes a lot of compute power and time,” explained Dr. Rustom Kanga. “Heuristics, on the other hand, is a very light computing process that operates in much the way a human thinks. We use our own blend of both these and other techniques in combination to generate very accurate and cost-effective results.”
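The blended approach Dr. Kanga describes can be sketched as a two-stage pipeline: a cheap heuristic filters frames, and the expensive deep-learning model runs only on frames the heuristic flags. This is a minimal illustration, not iOmniscient's actual pipeline; the threshold value and the stub standing in for the neural network are assumptions.

```python
def motion_heuristic(frame_diff_ratio, threshold=0.05):
    """Cheap rule: flag a frame only if enough pixels changed."""
    return frame_diff_ratio > threshold

def deep_classifier(frame):
    """Stand-in for an expensive neural-network inference call.

    A real system would run an optimized model (e.g. via OpenVINO) here.
    """
    return "vehicle" if frame.get("has_vehicle") else "background"

def analyze(frames):
    """Run the heavy model only on frames the heuristic flags."""
    results = []
    for frame in frames:
        if motion_heuristic(frame["diff"]):
            results.append(deep_classifier(frame))
        else:
            results.append("skipped")   # heuristic saved an inference
    return results

frames = [
    {"diff": 0.01, "has_vehicle": False},  # static scene
    {"diff": 0.20, "has_vehicle": True},   # motion: run the model
    {"diff": 0.03, "has_vehicle": True},   # below threshold: skipped
]
print(analyze(frames))  # ['skipped', 'vehicle', 'skipped']
```

The design point is the one the quote makes: the heuristic is computationally light and filters out most frames, so the costly deep-learning stage runs only where it is likely to matter.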
Analytics at the Edge and in the Cloud
The software can operate in a centralized, distributed, or hybrid architecture, as shown in Figure 1.
Geared to meet the requirements of multiple stakeholders, reporting can also be centralized or decentralized. For example, a police officer on the street may need specific, actionable information, while operational personnel require a system-wide view.
“We empower reporting by converting the sensor data into text on a real-time basis,” said Dr. Kanga. “For example, we may capture images of a man in a blue shirt placing a package under a bench and then walking away. The system translates and converts each of these activities into a text format. Once in text, it can be analyzed in any big data engine.”
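The event-to-text conversion Dr. Kanga describes can be sketched as follows. This is a hypothetical illustration, not iOmniscient's format; the field names and record layout are assumptions meant only to show how structured detections become plain-text records that any big data engine can ingest.

```python
def event_to_text(event):
    """Render a structured detection event as a plain-text record."""
    return ("{time} camera={cam} subject='{subject}' "
            "action='{action}' object='{object}'").format(**event)

# Hypothetical detections matching the package-under-a-bench example.
events = [
    {"time": "2023-05-01T10:04:12", "cam": "CAM-17",
     "subject": "man in blue shirt", "action": "places",
     "object": "package under bench"},
    {"time": "2023-05-01T10:04:40", "cam": "CAM-17",
     "subject": "man in blue shirt", "action": "walks away",
     "object": "package under bench"},
]
lines = [event_to_text(e) for e in events]
print(lines[0])
```

Once activities are flattened to text like this, they can be indexed, searched, and correlated with other events in standard analytics tooling, which is the point of the conversion.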
Based on Intel® technology, the system can turn any IP camera into an intelligent one, providing the ability to process advanced video analytics at the edge.
Intel is a key part of the solution architecture—from a NUC located at the camera, to a high-powered server in the cloud. “We use Intel processors because they have very good core technology, which is upward and downward compatible, reliable, robust, and scalable,” said Dr. Kanga. “And among our other tools for building algorithms, we use OpenVINO™ for deep learning.”
Municipal planners, first responders, and safety managers recognize they need more effective ways to protect people, places, and things. Companies such as iOmniscient are bringing innovative IoT and AI technologies to cities worldwide.
The result: real-time incident detection and situation analysis, enabling timely response and resolution.
About the Author: Georganne Benesch