
AI and Computer Vision Direct City Traffic

computer vision, edge computing, smart cities, AI

Editor’s Note: We stand in support of ending acts of racism, inequity, and social injustice. We do not tolerate our sponsors’ products being used to violate human rights, including but not limited to the abuse of visualization technologies by governments. Products, technologies, and solutions are featured here under the presumption of responsible and ethical use of artificial intelligence and computer vision tools, technologies, and methods.



San Francisco is just one example of a transportation landscape gone wild. Since 2018, vehicle traffic has risen 27 percent, and alternative transportation sharing has skyrocketed. On an average weekday, city streets are filled with 6,300 bike shares, 2,000 mopeds, and 2,300 power scooters.

In addition to traditional forms of transportation, rideshare options like Uber and Lyft are abundant. While these new options make travel more convenient than ever, they are pushing the limits of transportation management systems.

City managers need 21st-century solutions that can keep pace with these dynamic transportation trends, optimizing traffic planning, commuter and pedestrian safety, and emergency services well into the future.

They need better data to analyze how one form of transportation influences another. This includes situational awareness of events over a period of weeks, days, or hours instead of years or decades.

To address these challenges, smart cities are deploying intelligent transportation systems (ITS). IoT and artificial intelligence (AI) technologies are vital to realizing the vision of real-time transportation management.

Layers of Management

Using existing sensors such as traffic monitoring cameras throughout the urban transportation infrastructure gives transit managers a head start. The next step is deploying a tiered data intelligence architecture within ITS solutions.

Justin Bean, Global Director of Smart Spaces and Video Intelligence Marketing at Hitachi Vantara, explained how.

“With computer vision and machine learning, we’re able to analyze existing data and transform it into a wealth of insights,” he said. “For example, we can look at the composition of traffic. How many bikes, cars, trucks, and buses are on the street? Where are cars parking? What is the flow of people on sidewalks and in crosswalks?”

This type of computer vision requires more processing performance than legacy cameras already in the field can provide. Instead, edge gateways or servers can be used. These systems compile video streams, apply computer vision algorithms in real time, and send relevant metadata back to operations centers for further analysis.
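The division of labor described above can be sketched in a few lines: the edge node runs inference on each frame and forwards only compact metadata upstream, never the raw video. This is a minimal illustration with a hypothetical detector output format (class label plus confidence); the threshold and field names are assumptions, not part of any specific product.

```python
import json
from collections import Counter
from datetime import datetime, timezone

def summarize_frame(detections, min_confidence=0.5):
    """Reduce one frame's detections to compact JSON metadata.

    `detections` is a list of (class_label, confidence) pairs, the kind of
    output a hypothetical edge detector might emit. Only per-class counts
    are forwarded to the operations center.
    """
    counts = Counter(label for label, conf in detections if conf >= min_confidence)
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "counts": dict(counts),
    })

# Example: three detections, one filtered out by the confidence threshold
frame = [("car", 0.91), ("bicycle", 0.78), ("car", 0.42)]
print(summarize_frame(frame))
```

The payload is a few hundred bytes instead of a video stream, which is what makes backhauling it from thousands of cameras tractable.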

One solution that enables computer vision on existing cameras is the Intel® NUC. The compact platform provides the compute and graphics performance required for tasks like automatic license plate recognition (Figure 1).

Figure 1. The Intel® NUC can be customized to meet computer vision requirements of camera retrofits. (Source: Intel® Corp.)

The next, and most influential, tier of data intelligence is a complete visualization suite. These software applications integrate data from sensors, gateways, and other traffic management systems. As a result, transit managers can view both real-time video streams and long-term traffic trends through a single pane of glass.

Open Framework Brings It Together

The challenge is combining these infrastructure components. For instance, different cameras and sensors leverage a mix of communication protocols and data formats. This can result in silos of information that limit the ability of transportation management systems to provide real-time feedback.

Connecting this infrastructure to visualization and analytics dashboards requires an end-to-end data acquisition strategy. And it must be based on a scalable ITS design that isn’t constrained by proprietary solutions or dependent on technologies that could soon become obsolete.

IoT application frameworks that use device connectors are one way to achieve this. Connectors are a thin layer of software that take in data from one system and repackage it for interpretation by others. In this way, data can make its way from edge systems to cloud visualization platforms.
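The connector idea can be sketched as a pair of thin adapters that accept two different native formats and emit one common schema. Both vendors, their payload formats, and the schema fields here are invented for illustration; real connectors would target actual device protocols.

```python
import json

def moped_vendor_connector(raw: bytes) -> dict:
    """Hypothetical connector: this vendor happens to emit JSON payloads."""
    record = json.loads(raw)
    return {"device_id": record["unit"], "kind": "moped",
            "lat": record["pos"][0], "lon": record["pos"][1]}

def scooter_vendor_connector(raw: str) -> dict:
    """Another hypothetical connector: this vendor emits plain CSV lines."""
    device_id, lat, lon = raw.strip().split(",")
    return {"device_id": device_id, "kind": "scooter",
            "lat": float(lat), "lon": float(lon)}

# Two different native formats arrive; one common schema leaves
moped = moped_vendor_connector(b'{"unit": "mp-042", "pos": [37.7749, -122.4194]}')
scooter = scooter_vendor_connector("sc-007,37.7793,-122.4192\n")
```

Because everything downstream consumes the common schema, adding a new device type means writing one more small connector rather than reworking the visualization layer.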

Hitachi Vantara has integrated one such framework into its Smart Spaces and Video Intelligence platform (Figure 2).

Figure 2. Hitachi Smart Spaces and Video Intelligence is an end-to-end IoT application framework. (Source: Hitachi Vantara)

The Hitachi platform is a highly scalable smart city management solution built on a loosely coupled data integration framework based on a series of connectors. Hitachi has worked with device-makers to write connectors for the platform, regardless of a device’s native communications protocol or original data format.

“Our application framework provides services that acquire multiple types of data,” explained Bean. “It turns that into metadata, which is sent through, for example, JSON format. It orchestrates that data appropriately, and pools it in a data lake.”

“That type of time-series data is easy to display on a map,” Bean said. “But then if we want to tap into a video stream, we can just go directly into the footage itself.”

See how the city of Moreno Valley, California, leveraged the solution to improve both traffic management and public safety (Video 1).


Video 1. Moreno Valley monitors traffic in real time.


Trends Analysis with Computer Vision and AI

Using this framework, video analytics streams and other transportation data can be drawn into the Hitachi Visualization Suite (HVS). HVS is a smart management dashboard that supports many layers of data to help traffic operators assess the transit environment both in real time and over the long term (Figure 3).

Figure 3. The Hitachi Visualization Suite. (Source: Hitachi Vantara)

HVS integrates both historical time-series data and real-time video streams into the same dashboard. This information can be presented in geospatial views to help transit officials visualize traffic trends.

These layers can help authorities make the best use of transportation resources. HVS also allows operators to configure data sets into customizable charts, graphs, and other formats to help them correlate one type of information with other traffic trends.

“You can also feed data into real-time applications like parking guidance apps so people know where parking is available,” said Bean. “This helps operators understand traffic flow in real time so they can adjust light timing. It also provides visibility into incident locations, pedestrian flow, transit options, and much more.”

Another important solution component is Hitachi Video Analytics, which augments the edge analytics delivered by platforms like the Intel NUC. It supports functions such as people and vehicle counting, traffic analysis, and parking detection. It also allows users to fast-forward, rewind, and search for specific objects or events that may be buried in hours’ worth of video.
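Fast search through hours of footage typically works on metadata rather than pixels: analytics emit timestamped event records, and a query scans the index instead of the video. This is a toy sketch of that idea; the index layout (sorted timestamp/label tuples) is an assumption for illustration, not Hitachi Video Analytics’ actual format.

```python
from bisect import bisect_left, bisect_right

def find_events(index, label, start, end):
    """Find all sightings of `label` between `start` and `end` seconds.

    `index` is a list of (timestamp_sec, label) tuples sorted by timestamp,
    the kind of record an analytics pipeline might emit per detection.
    Binary search narrows the time window before filtering by label.
    """
    lo = bisect_left(index, (start,))
    hi = bisect_right(index, (end, chr(0x10FFFF)))
    return [(t, l) for t, l in index[lo:hi] if l == label]

# A tiny index covering one hour of hypothetical footage
index = [(12, "car"), (95, "bus"), (300, "bicycle"), (3600, "bus")]
print(find_events(index, "bus", 0, 1000))  # → [(95, 'bus')]
```

Seeking in the video itself then only happens once an event of interest has been located, which is what makes "search hours of footage" practical.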

Smarter Transit, Smarter Cities

The openness of Hitachi Smart Spaces and Video Intelligence allows it to fold in nicely with existing transportation systems. This helps cities maintain their investments in hardware, software, and connectivity. But the solution becomes a force multiplier for smart city management when it integrates with other city management systems.

Extending the connector concept even further offers cities new opportunities. Data can flow among transportation systems, utilities, emergency services, and other civic information repositories. The ability to visualize all this information means that city managers can design policies that make urban areas cleaner, safer, and more convenient than ever before.

About the Author

Amanda Nielsen is a 5-year veteran of the high-tech media industry, covering enterprise communications and military electronics. Amanda lives in Phoenix with her husband Lelund and pug Cooper. When she's not working she enjoys exploring Arizona's craft breweries and attending concerts, comedy shows, and book readings.
