Terms like computer vision, machine learning, and artificial intelligence (AI) have become so common it’s easy to think of each as a single application. The truth is that each represents an entire class of applications that can enable a variety of use cases.
This is why computer vision systems employ a variety of hardware architectures. In some applications, a standard CPU can get the job done by itself. In other cases, a GPU, FPGA, or even a specialized vision processor (VPU) may be needed.
And, of course, these compute architectures can be combined in a variety of configurations. As illustrated in Figure 1, a vision processing pipeline consists of several steps, and different steps are better suited to different processors. For instance, GPUs are often used to process raw video data and perform feature extraction, while VPUs and FPGAs are well suited to analysis.
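The division of labor described above can be sketched as a simple stage-to-device mapping. This is an illustrative sketch only; the stage names and device assignments below are hypothetical examples, not drawn from any specific product.

```python
# Illustrative mapping of vision-pipeline stages to the processor types
# discussed above. Stage names and assignments are hypothetical.
PIPELINE_STAGES = [
    ("decode_raw_video", "GPU"),    # raw video processing
    ("feature_extraction", "GPU"),  # parallel pixel-level work
    ("inference_analysis", "VPU"),  # neural-network analysis
    ("rule_based_checks", "CPU"),   # lightweight control logic
]

def devices_used(stages):
    """Return the distinct processor types a pipeline configuration needs."""
    return sorted({device for _, device in stages})

print(devices_used(PIPELINE_STAGES))  # ['CPU', 'GPU', 'VPU']
```

In practice the mapping would be driven by profiling each stage on the available hardware, but even a static table like this makes the heterogeneous nature of the pipeline explicit.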
Seeing the Complexity of Computer Vision
But even a single use case can require multiple types of computer vision, each with different compute requirements. This is particularly true in manufacturing, where every stage on the production line may involve different algorithms, cameras, and compute hardware to perform its own specialized defect-detection task.
Take the manufacturing of diamond-coated saw blades as an example. There are many steps in the process, including:
- Cutting the raw billet into disks
- Sintering the diamond coating onto the cutting edge
- Punching out mounting holes
- Polishing the disk
- Sharpening the blade
- Spray-painting the disk
- Printing a label
- Engraving the blade specification onto the disk
As illustrated in Figure 2, a variety of defects can crop up at each of these steps. Each type of defect has its own unique characteristics, and therefore requires a different visual inspection.
That’s why it’s useful to have a scalable platform that can be deployed in a variety of configurations. One example of this approach is the Intelligent Vision Processing System (IVPS) from APQ Science & Technology. The baseline system can be specified with a range of Intel® Core™ processors, giving developers options not only in terms of CPU performance but also in terms of the built-in GPUs. Plus, the IVPS supports drop-in accelerators based on the Intel® Movidius™ processor, providing neural network acceleration when needed.
Machine vision applications don’t all look the same. Why should their hardware?
Dequan Wang, the company’s CTO, explained why both specificity and flexibility are essential to address a wide range of technical issues. For some tasks like detecting shape cut errors, the algorithms require only modest compute power. But other tasks are more demanding.
“The diamond coating texture makes it difficult to detect scratches,” said Wang, because the coated surface is irregular by nature. Thus, a multilayer neural network is needed to identify scratches and other flaws.
Labels and other print are also surprisingly hard to evaluate. Because the details can be very fine, this stage requires a high level of precision and image resolution, which again bumps up the compute load.
Crack detection, too, has its own requirements. In this example, image preprocessing is critical to eliminate noise, which can interfere with recognizing cracks in the blade surface.
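The kind of noise-suppression preprocessing described here can be as simple as a smoothing filter applied before the detection step. The sketch below is a minimal pure-Python 3x3 mean filter, offered only to illustrate the idea; a production system would use an optimized library routine (for example, OpenCV's `cv2.blur`).

```python
def box_blur(image):
    """3x3 mean filter: a minimal noise-suppression step of the kind
    applied before crack detection. Pure-Python illustrative sketch."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Average the pixel with its in-bounds neighbors.
            vals = [image[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

# A single bright speck of noise on a dark background is spread out and
# attenuated, so it is less likely to be mistaken for a crack edge.
noisy = [[0, 0, 0], [0, 90, 0], [0, 0, 0]]
print(box_blur(noisy)[1][1])  # 10.0 -- the speck's peak drops 9x
```

Real crack-detection pipelines tune the filter so that genuine crack edges survive while sensor noise is suppressed; edge-preserving filters are a common refinement.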
But there is another reason scalability matters: Manufacturing processes tend to change over time. Examples here include changes in input materials, blade dimensions and other specifications, and updates to the manufacturing equipment. This creates a moving target for computing performance.
Smart Software Makes It Work
While a flexible hardware platform can help meet all of these varied and changing requirements, that’s only the start. Developers need a software platform that is equally scalable and flexible.
Here, the Intel® OpenVINO™ toolkit plays a critical role. This platform can target the CPU, GPU, and VPU with equal ease, enabling developers to seamlessly move between hardware configurations as needed.
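One practical consequence of device-portable inference is that the choice of target can become a runtime policy rather than a build-time decision. The sketch below shows such a fallback policy using Python's standard library only; the device names follow OpenVINO's conventions ("MYRIAD" is its name for Movidius VPUs), but the preference order and the hard-coded device lists are illustrative assumptions, not toolkit defaults.

```python
def pick_device(available, preference=("MYRIAD", "GPU", "CPU")):
    """Choose the first preferred inference device that is present.

    The fallback order here is an illustrative policy. In a real
    deployment, `available` would come from the toolkit's device
    query (e.g. OpenVINO's Core().available_devices).
    """
    for device in preference:
        if device in available:
            return device
    raise RuntimeError("no supported inference device found")

print(pick_device(["CPU", "GPU"]))  # no VPU present, falls back to GPU
print(pick_device(["CPU"]))         # CPU-only system
```

Because the model itself is unchanged, the same application binary can run on a CPU-only box or a VPU-accelerated system, which is exactly the flexibility a changing production line demands.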
“OpenVINO helps accelerate our system,” said Wang, “and it provides a basic multimedia processing library and a neural network computing framework. This reduces our development workload, maximizes performance, and accelerates development.”
The IVPS takes this scalable approach one step further by enabling integration of computer vision and motion control on the same hardware. This not only improves how computer vision and motion control work together; it also reduces cost while increasing precision and performance.
“Our integrated PLC SDK allows our customers to accomplish a wide range of motion control applications on the IPC,” said Wang, “and perform the compute wherever it’s needed: CPU, GPU, acceleration card. That helps them gain the most benefit from computer vision.”
Services Complete the Picture
Even with all of these sophisticated tools at their disposal, manufacturers can find it challenging to develop complex machine vision algorithms. That’s what motivated APQ to offer development services to complement its hardware.
“We develop customized algorithms, tailored specifically to these application scenarios,” explained Wang. “Our goal is to reduce false alarms from the inspection systems, and, of course, improve detection accuracy.”
This effort goes beyond creation of algorithms for each step in the process. Instead, APQ looks at the overall manufacturing line. Wang said, “We fully simulate their process in software, so that they can use it quickly.”
For manufacturers, the goal is to produce defect-free units at low cost. Computer vision and machine learning can meet that challenge. Using the optimal hardware for machine learning at each step of the process can reduce or even eliminate the false positives that halt production.
About the Author

Robert Moss