The trucking industry has always been an essential part of the supply chain, and its role has never been more crucial than today. Transportation companies are under more pressure than ever to meet deadlines and quality standards. But even in the best of conditions, long hours on the road may lead to driver fatigue and distraction—common safety risks.
Innovative use of new technologies can help operators more efficiently move goods and provide services. For example, with AI and computer vision, drivers can be monitored for drowsiness and distractions, and be alerted automatically.
By tracking how often a driver blinks or yawns, the system triggers an alarm once a predefined threshold is crossed. Even more, it can alert control center operators to a driver’s condition in real time, allowing them to take immediate action.
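The alerting logic described here can be sketched as a sliding-window event counter. This is a minimal illustration, not IEI's implementation; the class and method names (`DrowsinessMonitor`, `record_blink`, and the threshold values) are assumptions for the example.

```python
# Hypothetical sketch of threshold-based drowsiness alerting:
# count blink/yawn events in a sliding time window and raise an
# alarm once a predefined threshold is crossed.
from collections import deque
import time


class DrowsinessMonitor:
    def __init__(self, blink_threshold=25, yawn_threshold=3, window_s=60.0):
        self.blink_threshold = blink_threshold  # blinks per window -> fatigue
        self.yawn_threshold = yawn_threshold    # yawns per window -> fatigue
        self.window_s = window_s
        self.blinks = deque()                   # event timestamps
        self.yawns = deque()

    def _prune(self, events, now):
        # Drop events that have aged out of the sliding window.
        while events and now - events[0] > self.window_s:
            events.popleft()

    def record_blink(self, now=None):
        now = time.monotonic() if now is None else now
        self.blinks.append(now)
        self._prune(self.blinks, now)

    def record_yawn(self, now=None):
        now = time.monotonic() if now is None else now
        self.yawns.append(now)
        self._prune(self.yawns, now)

    def alarm(self, now=None):
        # True once either event count crosses its threshold in the window.
        now = time.monotonic() if now is None else now
        self._prune(self.blinks, now)
        self._prune(self.yawns, now)
        return (len(self.blinks) >= self.blink_threshold
                or len(self.yawns) >= self.yawn_threshold)
```

In a deployed system, the blink and yawn events would come from a vision model watching the driver's face; the same counter could also forward its alarm state to the control center.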
The Nuts and Bolts of Smart Transport
At the heart of this driver-monitoring solution is IEI's ITG-100AI, an embedded PC built for AI workloads. Its low-power, palm-size, fanless form factor overcomes vehicle space constraints and withstands rugged conditions.
The platform pairs an Intel Atom® x5-E3930 processor with a pre-installed Mustang-MPCIE-MX2 mini-PCIe AI accelerator card for deep learning workload consolidation (Figure 1). The card's two Intel® Movidius™ Myriad™ X VPUs accelerate neural networks by performing workloads in parallel.
“We are continually getting more data and retraining the model, so efficiency is projected to improve to up to 90 percent”
—Brian Chen, @ieiworld
AI Technology + Computer Vision Fuel the Platform
Driver monitoring is fundamentally a vision problem, and here the Movidius VPUs deliver both the performance to run Deep Neural Networks (DNNs) and the low power consumption mandated by surveillance and transportation applications. The VPUs also allow developers to rapidly port and deploy neural networks built in Caffe and TensorFlow.
Because drivers exhibit a wide range of expressions, an extensive amount of data is required to determine a driver's condition. The system thoroughly checks cues such as the driver's face, eye angle, and yawning. That's where AI technology enters the embedded design: it must discern normal behavior from fatigue or distraction, capture these conditions, and trigger alarms accordingly.
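The "discern normal from fatigued" step could be as simple as a rule over per-frame detector outputs. The sketch below is purely illustrative and assumes hypothetical detector flags (`eyes_closed`, `yawning`) and thresholds; IEI's actual model is a trained DNN, not this rule.

```python
# Illustrative rule-based fatigue check over per-frame cues.
# A PERCLOS-style rule: fraction of frames with eyes closed,
# plus a count of frames flagged as yawning.
def classify_driver_state(frames, perclos_threshold=0.4, yawn_threshold=2):
    """frames: list of dicts from a hypothetical face detector, e.g.
    {"eyes_closed": bool, "yawning": bool}.
    Returns "fatigued" or "normal"."""
    if not frames:
        return "normal"
    closed_fraction = sum(f["eyes_closed"] for f in frames) / len(frames)
    yawn_frames = sum(f["yawning"] for f in frames)
    if closed_fraction >= perclos_threshold or yawn_frames >= yawn_threshold:
        return "fatigued"
    return "normal"
```

A trained model replaces these hand-set thresholds with boundaries learned from data, which is why retraining on more data (as Chen describes below) improves accuracy.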
IEI trains and deploys models into the inference system of its embedded computer. “Intel® has made available a lot of open-source samples for AI models, and that significantly lowers the overall design cost,” said Brian Chen, Senior Product Manager at IEI.
Regarding accuracy, a crucial measure of how well the AI inference is working, Chen said that the accuracy of the demo embedded system is currently around 70 percent. “We are continually getting more data and retraining the model, so efficiency is projected to improve to up to 90 percent.”
The company has also employed the Intel® Distribution of OpenVINO™ Toolkit in its embedded solution, which allows developers to run two different AI programs on two hardware targets and thus make full use of the platform (Figure 2). After they build an embedded system, designers can deploy it in another application without having to program it twice. All that's needed is a change to the CONFIG file.
Case in point: The AI-based demo developed for driver monitoring can also be deployed for monitoring traffic conditions such as tracking paths of pedestrians and cars. Drivers can respond to alarms and potentially avoid traffic accidents.
One Platform, Multiple Applications
In this demonstration, the IEI hardware and software building blocks give edge computers enough AI performance to support new computer vision applications in less time and with fewer resources. For example, the ITG-100AI can also be deployed to monitor traffic conditions, such as pedestrians in an intersection. And as with the trucking use case, an immediate alarm would be sent so that the driver can react more quickly to avoid an accident.
“Instead of starting the design from scratch, it’s very convenient to deploy,” said Chen. “There’s no need to develop twice. With our platform and OpenVINO, you just put the AI model into the system and start running a program without even resetting the configuration.”
About the Author: Majeed Kamran