
EDGE COMPUTING

Empowering Traffic Operators with AI Video Analytics


In 1999, a transport truck traveling through the 7.2-mile-long Mont Blanc Tunnel between France and Italy caught fire, killing 39 people trapped inside. While the tunnel was fitted with video surveillance cameras, traffic operators were not alerted to the problem until drivers started calling in reports, and by then the damage was already done.

Unfortunately, this problem has continued over the years as European nations experienced an alarming increase in road deaths, especially in tunnels.

That’s why in 2004, European Union member countries decided to issue minimum road safety requirements for tunnels over 500 meters long, known as Directive 2004/54/EC. Part of the requirements included installation of safety cameras. The idea was that with potentially dozens or even hundreds of cameras in a single tunnel, officials could monitor things like wrong-way drivers, smoke/fire, stopped vehicles, or pedestrians on the road.

But, of course, any good transportation system manager will tell you that streaming traffic footage from camera endpoints generates too much data for human operators to analyze manually. And Directive 2004/54/EC created a reality where multiple streams per camera and multiple cameras per tunnel across most of a continent would need to be analyzed by someone. Or perhaps, something.

Transit authorities need to automate camera footage analysis as much as possible. They need AI video analytics to monitor roadways for potential safety events at the edge.

Traffic Cam Monitoring in Real Life

To give you an idea of Directive 2004/54/EC’s scale, let’s look at a single roadway. The Boulevard Périphérique Nord de Lyon (BPNL) is a 10-kilometer toll road in Lyon, France that connects to three major highways. It consists of four tunnels that span a total of 6 kilometers, two viaducts, and no fewer than 200 traffic cameras.

The BPNL is operated by the Société D’exploitation Du Boulevard Périphérique Nord De Lyon (SE BPNL), whose 50 employees are responsible for managing the toll booths, maintaining the road infrastructure, and monitoring camera feeds for incidents that could present hazards or disrupt traffic flows.


It’s easy to see why this won’t work without automation. If every SE BPNL employee monitored camera footage around the clock, they’d still have to watch four camera feeds simultaneously. Instead of human monitors, the company tried computer vision camera monitoring software based on traditional image processing algorithms. But even these struggled to identify people, objects, and events with enough accuracy to avoid overwhelming operators with false-positive alarms.

“This kind of technology can understand that a blob of pixels is moving, but it is not able to identify a blob of pixels as a pedestrian. It may understand that a blob is the shape of a person, for example, or moving at the speed of a person, but it cannot identify the object as a person,” says Renato Clerici, Co-Founder and CTO of Sprinx Technologies, a mobility video analytics company.
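To illustrate the limitation Clerici describes, here is a minimal sketch, assuming a recorded tunnel feed and OpenCV’s stock background subtractor, of the kind of motion-based blob detection traditional monitoring systems rely on. The video file name and blob-size threshold are made up for illustration; the point is that the code can report that something moved, but not what it is.

```python
# Minimal sketch of traditional motion-based "blob" detection (illustrative only).
# It flags moving regions of pixels but has no notion of what those regions are.
# The video path and area threshold below are assumptions, not production values.
import cv2

MIN_BLOB_AREA = 500  # assumed pixel-area cutoff for "something big enough moved"

cap = cv2.VideoCapture("tunnel_feed.mp4")          # hypothetical recorded feed
subtractor = cv2.createBackgroundSubtractorMOG2()  # models the static tunnel scene

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)  # foreground mask: pixels that changed
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        if cv2.contourArea(contour) > MIN_BLOB_AREA:
            # We know a sizable blob moved, but not whether it is a pedestrian,
            # a vehicle, or drifting smoke; hence the false-alarm problem.
            x, y, w, h = cv2.boundingRect(contour)
            print(f"Moving blob at ({x}, {y}), size {w}x{h}")
cap.release()
```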

To overcome these challenges, SE BPNL turned to Sprinx, whose TRAFFIX.AI automatic incident detection (AID) software uses neural networks to detect and recognize vehicles and pedestrians in real time.

“AI and deep learning are much more accurate than standard computer vision technologies,” Clerici states. “Neural networks are trained to identify, recognize, and detect people or vehicles in a picture. And 3D object tracking technology allows us to provide very real, very accurate detection and significantly reduce false alarms.”

The OpenVINO Road to Automated Incident Detection, Everywhere

Since deploying in the spring of 2020, TRAFFIX.AI has helped SE BPNL reduce false alarms significantly, thanks to high-fidelity analysis that can detect everything from wrong-way drivers, slowdowns, and stopped vehicles to spilled cargo and even smoke or fog (Video 1).

Video 1. TRAFFIX.AI has helped SE BPNL reduce false positives by supporting high-fidelity video analysis of events like sudden loss of roadway visibility. (Source: Sprinx Technologies)

From an end user or system integrator perspective, TRAFFIX.AI’s built-in intelligence makes the system easy to configure and calibrate for specific use cases like those mentioned above. And although the platform’s 3D object detection software is proprietary to Sprinx and the MobileNet SSD neural nets were developed internally using TensorFlow, the software is optimized for edge execution using the Intel® OpenVINO Toolkit.

This means TRAFFIX.AI can run on virtually any Intel-based hardware platform, whether CPU, GPU, FPGA, or VPU, and whether it is an edge computer, PC, or server. It can even connect to intelligent transportation systems (ITS) out of the box for a truly high-performance, plug-and-play AI video analytics deployment experience.
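As a rough illustration of how an OpenVINO-optimized detector can run on such hardware, the sketch below loads a MobileNet SSD network (assumed to be already converted to OpenVINO’s IR format) on an Intel CPU and flags pedestrians in a single camera frame. The model file name, person class ID, and confidence threshold are placeholders, not details of TRAFFIX.AI itself.

```python
# Illustrative sketch (not Sprinx's code): running a MobileNet SSD detector
# with the OpenVINO runtime on an Intel CPU to find pedestrians in one frame.
# Model path, class ID, input size, and threshold are assumptions.
import cv2
import numpy as np
from openvino.runtime import Core

PERSON_CLASS_ID = 15   # assumed label index for "person" in this model
CONF_THRESHOLD = 0.6   # assumed confidence cutoff to keep false alarms low

core = Core()
model = core.read_model("mobilenet_ssd.xml")             # hypothetical IR file
compiled = core.compile_model(model, device_name="CPU")  # could also target GPU, etc.
output_layer = compiled.output(0)

def detect_pedestrians(frame_bgr):
    """Return pixel bounding boxes of likely pedestrians in one camera frame."""
    h, w = frame_bgr.shape[:2]
    # Many MobileNet SSD variants expect a 300x300, NCHW-ordered input
    blob = (cv2.resize(frame_bgr, (300, 300))
            .transpose(2, 0, 1)[np.newaxis, ...]
            .astype(np.float32))
    detections = compiled([blob])[output_layer]
    boxes = []
    # SSD output rows: [image_id, label, confidence, xmin, ymin, xmax, ymax]
    for _, label, conf, xmin, ymin, xmax, ymax in detections[0][0]:
        if int(label) == PERSON_CLASS_ID and conf >= CONF_THRESHOLD:
            boxes.append((int(xmin * w), int(ymin * h), int(xmax * w), int(ymax * h)))
    return boxes
```

Because OpenVINO abstracts the target device, swapping the "CPU" string for another supported device plug-in is, in principle, all it takes to retarget the same model.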

“OpenVINO plays one of the main roles in our solution because it runs the neural networks we are using to detect and identify vehicles and pedestrians,” Clerici says. “You can use the existing hardware and it works. It’s very easy to connect a PC or a server with our software to the existing network infrastructure and process the existing camera feeds. The only limit is the number of cameras that can be processed on that hardware, but if you need to process more cameras, you can just add a new PC.”

In installations like the BPNL, Sprinx is running TRAFFIX.AI on Intel® Core i9 and Intel® Xeon® Gold processors that support up to 24 cameras at once. But as Clerici notes, smaller deployments can leverage endpoint targets based on more power-efficient devices like Core i5 or Core i7 processors that support up to 10 simultaneous video streams.
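The sketch below gives a rough sense of that per-machine scaling model: one edge PC caps the number of camera feeds it handles and processes each in its own thread, with any additional cameras assigned to another box. The RTSP URLs, the per-machine limit, and the analyze_frame() placeholder are assumptions for illustration, not a description of Sprinx’s architecture.

```python
# Illustrative sketch: one edge PC fanning a capped set of camera feeds across
# worker threads. URLs, the stream cap, and analyze_frame() are assumptions.
import threading
import cv2

MAX_STREAMS_PER_MACHINE = 10   # e.g., a Core i5/i7-class box; roughly 24 on Xeon Gold

CAMERA_URLS = [                # hypothetical tunnel camera endpoints
    "rtsp://10.0.0.11/tunnel-cam-01",
    "rtsp://10.0.0.12/tunnel-cam-02",
]

def analyze_frame(camera_id, frame):
    # Placeholder for the real inference step (e.g., the pedestrian detector
    # sketched earlier); here we only log the incoming frame size.
    print(f"{camera_id}: got frame {frame.shape[1]}x{frame.shape[0]}")

def monitor_camera(url):
    """Continuously pull frames from one camera and hand them to analysis."""
    cap = cv2.VideoCapture(url)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        analyze_frame(url, frame)
    cap.release()

# Cameras beyond the per-machine cap would simply be assigned to another PC,
# mirroring the "just add a new PC" scaling Clerici describes.
threads = [threading.Thread(target=monitor_camera, args=(url,), daemon=True)
           for url in CAMERA_URLS[:MAX_STREAMS_PER_MACHINE]]
for t in threads:
    t.start()
for t in threads:
    t.join()
```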

Smart-City Solutions: Already Ready for AI Video Analytics

Sprinx’s TRAFFIX.AI software has already transformed some 15,000 computer vision cameras across the roadways of Europe into intelligent video analytics data capture devices. And because its software can be deployed on almost any hardware, Sprinx is collaborating with Intel on a not-so-distant future in which the AI analytics run directly on computer vision camera endpoints, sending real-time alerts to on-premises servers or cloud platforms.

That would turn the billions of already-installed cameras around the world into potential AI vision endpoints. From smart-transit systems to smart-traffic data collection to smart-city solutions, the possibilities become limitless almost instantly. And it’s all possible because of the enabling capabilities of the OpenVINO toolkit.

But for now, giving operators like SE BPNL accurate road condition information in real time to save lives is a great place to start.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

About the Author

Brandon is a long-time contributor to insight.tech going back to its days as Embedded Innovator, with more than a decade of high-tech journalism and media experience in previous roles as Editor-in-Chief of electronics engineering publication Embedded Computing Design, co-host of the Embedded Insiders podcast, and co-chair of live and virtual events such as Industrial IoT University at Sensors Expo and the IoT Device Security Conference. Brandon currently serves as marketing officer for electronic hardware standards organization, PICMG, where he helps evangelize the use of open standards-based technology. Brandon’s coverage focuses on artificial intelligence and machine learning, the Internet of Things, cybersecurity, embedded processors, edge computing, prototyping kits, and safety-critical systems, but extends to any topic of interest to the electronic design community. Drop him a line at techielew@gmail.com, DM him on Twitter @techielew, or connect with him on LinkedIn.
