
Bringing AI and Computer Vision to the Industrial Shop Floor

Machine vision is transforming quality control. Modern inspection systems can do much more than just accept or reject parts. Thanks to deep learning, they can automatically adjust the manufacturing process, cutting waste and inefficiency.

Consider additive manufacturing processes like welding and 3D printing. These processes can be highly sensitive to factors like temperature, line speed, machine calibration, and variation in materials. When parameters drift out of spec, the result can quickly be unusable parts.

A standard inspection system might identify bad parts, but it cannot identify the cause of the anomalies. That task falls to a human, who can adjust the process only after inspecting the discarded parts. In contrast, a deep learning inspection system can provide greater insight into the nature of the problem.

Consider a metal additive manufacturing process. A standard inspection system could evaluate only the finished product, and issue a simple accept or reject decision. In contrast, a deep learning system can monitor the manufacturing process itself, constantly evaluating the consistency of the melted metal track. This creates an opportunity to fix problems as they occur, salvaging a part that might otherwise be rejected.

In more traditional manufacturing lines, a deep learning system can make adjustments such as instructing a PLC to slow the line speed or raise the process temperature. This constant adjustment can greatly reduce the number of discarded parts.
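As a concrete illustration of that kind of closed loop, the sketch below assumes a hypothetical camera interface, a hypothetical classifier that labels melt-track anomalies, and made-up PLC tag names such as LineSpeedSetpoint and ProcessTempSetpoint; none of these names come from the Siemens product.

```python
# Illustrative only: camera, model, and plc are hypothetical objects standing in
# for whatever image acquisition, inference, and PLC interfaces a real line uses.

DEFECT_CLASSES = {"porosity", "underfill", "irregular_track"}

def control_step(camera, model, plc):
    frame = camera.read_frame()         # latest image of the melt track
    result = model.classify(frame)      # e.g. {"label": "underfill", "confidence": 0.91}

    if result["label"] not in DEFECT_CLASSES or result["confidence"] < 0.8:
        return  # process looks consistent; leave the setpoints alone

    # Map the detected anomaly to a small, bounded correction on the PLC.
    if result["label"] == "underfill":
        plc.write_tag("LineSpeedSetpoint", plc.read_tag("LineSpeedSetpoint") * 0.95)
    else:
        plc.write_tag("ProcessTempSetpoint", plc.read_tag("ProcessTempSetpoint") + 2.0)
```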

Deployment Challenges

The challenge for developers has been finding ways to deploy machine vision. One common approach is to build custom hardware around graphics processing units (GPUs). While this approach can deliver the needed performance, GPUs typically do not meet industrial reliability requirements. Power is also a problem, because GPUs tend to run hot and require cooling fans—not a great fit for rugged industrial environments.

The effort required for a full custom system is also an issue. In addition to the fundamental difficulty of creating custom hardware, engineers have to integrate their design into the control loop. Given the specialized nature of the PLCs used to control industrial equipment, this can be a nontrivial challenge.

Easier AI Integration

To resolve these issues, Siemens AG and Intel® collaborated on a new deep learning module for the Siemens SIMATIC S7 line of PLCs. “With this module, one can augment an existing PLC system that is controlling a machine simply by adding the AI extension module for local inferencing,” said Thomas Dietrich, Technical Account Manager at Intel. “It’s an easy add-on if you have a SIMATIC S7-1500 Controller already installed. You just plug in the AI extension module and a camera or non-vision sensor, and the hardware setup is complete.”

The SIMATIC S7-1500 TM NPU is a PLC extension module based on the Intel® Movidius Myriad X vision processing unit (VPU). Designed specifically for power-efficient AI, this Intel technology enables the module to process 720p stereo pairs from multiple camera streams with only passive cooling, running computer vision in near real time without compromising power consumption or accuracy. That is 3X the resolution of other platforms running at VGA, or 6X the frame rate of other platforms running at 30 Hz.

The module processes the visual or non-visual data—such as audio or vibration—and then sends the analytic results to the PLC over the backplane. The PLC then runs the control algorithm, using the analysis data as an input, and adjusts the control flow.
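The division of labor can be pictured roughly as follows. The real exchange between the TM NPU and the CPU happens over the backplane and is configured in TIA Portal; this sketch only mimics the shape of that data flow, with sensor, model, backplane, and control as hypothetical stand-ins.

```python
# Illustrative data flow only; the actual backplane exchange is handled by the
# Siemens firmware and the TIA Portal configuration, not by user Python code.

from dataclasses import dataclass

@dataclass
class InferenceResult:
    label_id: int        # index of the detected class
    confidence: float    # 0.0 .. 1.0
    timestamp_ms: int    # when the frame or sample was processed

def module_cycle(sensor, model, backplane):
    """One cycle on the module: acquire data, run inference, publish the result."""
    sample = sensor.read()                      # camera frame, audio, or vibration window
    label_id, confidence = model.infer(sample)  # hypothetical inference call
    backplane.publish(InferenceResult(label_id, confidence, sensor.last_timestamp_ms()))

def plc_cycle(backplane, control):
    """One PLC scan: consume the latest analytics and adjust the control flow."""
    result = backplane.latest()
    if result is not None:
        control.update(result.label_id, result.confidence)
```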

Figure 1 shows one example of how a production line might use the system in a pick-and-place scenario. A production item enters on a conveyor belt from the left. Overhead LED lighting illuminates the item. A camera mounted on a platform acquires the image and transmits it to the PLC extension module. In turn, the PLC directs the robot arm to orient itself, lift the item, and place it onto the conveyor to the right.

Figure 1. The Siemens vision system excels at pick-and-place, among other use cases. (Source: Siemens)

With an AI model for grasping, the module calculates hundreds of grasp points within milliseconds and selects the best ones for the given object. It can then relay this information to the PLC controlling the robot arm so it can pick up the object in the best way—either a dedicated hardware PLC from the SIMATIC S7-1500 family or the PC-based SIMATIC ET 200SP Open Controller v2, which provides a Windows partition for additional applications.
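A rough sketch of that selection step, assuming a hypothetical grasping model that returns candidates as (x, y, angle, score) tuples and made-up PLC tag names:

```python
# Hypothetical illustration: predict_grasps(), workspace, and the PLC tag names
# are placeholders, not part of the Siemens or Intel APIs.

def select_grasp(candidates, workspace):
    """Pick the highest-scoring grasp point that lies inside the reachable workspace."""
    reachable = [c for c in candidates if workspace.contains(c[0], c[1])]
    if not reachable:
        return None
    return max(reachable, key=lambda c: c[3])  # c[3] is the model's score

def handle_object(model, image, workspace, plc):
    grasp = select_grasp(model.predict_grasps(image), workspace)
    if grasp is not None:
        x, y, angle, _ = grasp
        plc.write_tag("GraspX", x)
        plc.write_tag("GraspY", y)
        plc.write_tag("GraspAngle", angle)
```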

Editor’s note: This demo will be shown at Hannover Messe, April 1-5, 2019. To see it for yourself, visit the Siemens booth at Hall 9, Booth D35.

This automation can save huge amounts of labor. “For example, there are lots of manual processes at a discrete manufacturing line today, either for quality control relying on the human eye or assembly of mixed parts like Through-Hole-Technology at PCB manufacturing,” said Dietrich. “Those are use cases where AI can help to improve quality and/or yield by increased automation.”

Instead of a robotic arm lifting the item, the PLC could direct any other sort of factory device, such as the soldering equipment used in the PCB assembly just mentioned or a CNC machining tool. In fact, the system isn't even restricted to vision. “The primary use case is video, but you could do other things,” Dietrich said. “It opens a range of different possible use cases like vibration or even sound analytics for predictive maintenance.”
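For the non-vision case, the raw signal is typically reduced to features that a model, or even a simple threshold, can act on. The sketch below shows one common approach for vibration data, a band-energy feature computed with an FFT; the sampling rate, frequency band, and threshold are illustrative values, not figures from Siemens or Intel.

```python
# Minimal sketch of vibration analytics for predictive maintenance.
# All numeric values here are placeholders chosen for illustration.

import numpy as np

SAMPLE_RATE_HZ = 10_000

def band_energy(window, low_hz, high_hz):
    """Energy of the vibration signal between low_hz and high_hz."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / SAMPLE_RATE_HZ)
    mask = (freqs >= low_hz) & (freqs <= high_hz)
    return float(spectrum[mask].sum())

def bearing_wear_suspected(window, threshold=1e6):
    # Elevated energy in a characteristic band is a common wear indicator.
    return band_energy(window, low_hz=2_000, high_hz=4_000) > threshold
```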

Building the AI Model

Of course, none of this impressive hardware is useful until you’ve built a deep learning model. Here again, Intel and Siemens addressed the problem with collaboration, integrating their toolchain for an end-to-end solution.

It starts with developing a deep learning model in a popular AI framework such as Caffe or TensorFlow. The model can then be optimized and deployed to the module, together with a small application program, via an SD card. Typically, you can start with an existing, freely available deep learning model and adapt it to your specific use case by retraining it on available manufacturing data. From there, the module is configured in the Siemens engineering framework, TIA Portal, so the PLC program can implement and use the data from the TM NPU module(s).
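The retraining step might look roughly like the following TensorFlow/Keras transfer-learning sketch, which adapts a freely available image model to a handful of inspection classes. The dataset paths, class count, and training settings are placeholders, and the subsequent optimization and SD-card deployment are handled by the Intel and Siemens tools described above.

```python
# Sketch of retraining an existing model on manufacturing data.
# Paths, class count, and hyperparameters are illustrative placeholders.

import tensorflow as tf

NUM_CLASSES = 3  # e.g. good part, surface defect, misalignment (illustrative)

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep the pretrained features, train only the new head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0,
                              input_shape=(224, 224, 3)),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

train_ds = tf.keras.utils.image_dataset_from_directory(
    "manufacturing_images/train", image_size=(224, 224), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "manufacturing_images/val", image_size=(224, 224), batch_size=32)

model.fit(train_ds, validation_data=val_ds, epochs=5)
model.save("inspection_model.h5")  # handed off to the optimization/deployment step
```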

The idea for the near future is to provide an AI workbench that simplifies the creation, deployment, and realization of industrial AI solutions, making them accessible not only to AI experts but to every automation engineer.

A New Look for Machine Vision

For manufacturers, this simpler approach to machine vision opens up a world of possibilities. Rather than being limited to fixing problems after the fact, manufacturers can continuously adjust their processes for maximum efficiency. And with the highly integrated approach Siemens has pursued, manufacturers can get these machine vision systems running faster than ever.

About the Author

Erik Sherman is a journalist, analyst, and consultant with a background in engineering, technology, and business management. He's written about such topics as semiconductors, enterprise software, logistics, software development, advertising technology, scientific instruments, biotechnology, economics, finance, marketing, and public policy.
