
Low-Code AI Eases Computer Vision Application Development


Identifying potholes along thousands of miles of roadway. Stocking shelves and rearranging inventory. Spotting minuscule product defects that a factory inspector might miss. These are just a few of the things today’s AI and computer vision systems can do. As capabilities improve and costs decrease, adoption is rapidly expanding across industries.

Once in place, a computer vision system can save humans countless hours of toil, reduce errors, and improve safety. But developing a solution can be painstaking and time-consuming. Humans often play an outsize role in training AI algorithms to distinguish a Coke can from a water bottle, or a shadow from a break in the asphalt. But as the technology evolves, solution providers are finding new ways to make training more efficient and to build systems that nontechnical users can operate.

Solving Problems with Computer Vision and Edge AI Technology

Computer vision applications are as varied as the industries and organizations they serve, but they share two common goals. The first is saving time and money by automating tedious manual tasks with machine learning. The second is creating a growing repository of knowledge from large amounts of data that will shed light on operations and lead to further improvements over time.

“We start with a base system, then we work with our clients to specialize it for their needs,” says Paul Baclace, Chief AI Architect at ICURO, a company that builds AI and computer vision solutions for deployment on robots, drones, and in the cloud.

For example, for the U.S. Department of Transportation, ICURO created a successful proof-of-concept drone that uses computer vision cameras to detect and relay information about road cracks and other highway defects in real time. Normally, a drone’s camera images aren’t processed until after the flight.

“When you check the images later, some may be blurry, or the contrast might be terrible. Then you have to go back and redo them, and that’s very expensive. By processing them in real time, you have fewer errors,” Baclace says.
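ICURO hasn’t published its onboard checks, but a common lightweight way to flag such frames as they are captured is to score sharpness with the variance of the Laplacian and contrast with the pixel standard deviation. A minimal sketch using OpenCV, with thresholds that are illustrative rather than tuned:

```python
# Minimal sketch of on-device frame quality checks using OpenCV.
# ICURO's actual pipeline is not public; the thresholds below are
# illustrative placeholders, not tuned values.
import cv2

BLUR_THRESHOLD = 100.0      # below this, the frame is likely blurry
CONTRAST_THRESHOLD = 30.0   # below this, the frame is likely washed out

def frame_is_usable(frame) -> bool:
    """Return True if a BGR frame passes simple blur and contrast checks."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Variance of the Laplacian is a standard sharpness measure:
    # sharp edges produce high variance, while blur flattens it.
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    # Standard deviation of pixel intensities as a rough contrast measure.
    contrast = gray.std()
    return sharpness >= BLUR_THRESHOLD and contrast >= CONTRAST_THRESHOLD
```

A frame that fails a check like this can be recaptured on the spot, while the drone is still over the defect, rather than discovered after landing.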

To save warehouse and retail workers time and labor, ICURO developed the Mobile Robot AI Platform. It navigates to specified objects, grabs them, and loads them onto transport robots for packing and shipping—all without human intervention. The robot can also integrate with factory machines and sensors to detect and resolve production problems. “It has a lower error rate than humans, who can get tired and injured,” Baclace explains.

The robot uses Intel® RealSense™ cameras and lidar—light detection and ranging—to navigate. Another RealSense™ camera, enclosed in its “hand,” enables it to grasp the correct item and load it into a basket before heading off to its next job (Video 1).

Video 1. The ICURO mobile picking robot uses Intel® RealSense™ cameras and lidar to navigate to specified items, grasp them, and deliver them to a transport robot for packing and shipping. (Source: ICURO)
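The RealSense™ SDK (pyrealsense2) exposes depth data through a simple pipeline API. As a rough illustration of the kind of distance query a hand-mounted depth camera supports, and not ICURO’s actual code:

```python
# Sketch of reading depth from an Intel RealSense camera with the
# official pyrealsense2 SDK. The 640x480 @ 30 fps stream settings
# and the center-pixel query are example values.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    # Distance in meters to the center pixel, e.g., a candidate
    # grasp point in front of the robot's "hand" camera.
    meters = depth.get_distance(320, 240)
    print(f"Object distance: {meters:.2f} m")
finally:
    pipeline.stop()
```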

As companies become more comfortable using automation, computer vision solutions are expanding—and becoming more visible. For example, ICURO created a picking robot for a cashierless retail store that gathers customers’ shopping list items from a storeroom and delivers them to the front counter.


Creating Cutting-Edge Computer Vision Solutions

ICURO develops and tests its robot-controlling computer vision applications in the Intel® Developer Cloud, then uses the Intel® OpenVINO™ toolkit to optimize them for best performance.

“Without Intel’s tools, we could look at the specs we need and estimate, but there would be some guesswork involved. This way, we can check the performance and say, ‘OK, that’s what we need to put on this robot,’” says Baclace.
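The article doesn’t show ICURO’s code, but the check Baclace describes maps onto the OpenVINO™ runtime directly: read a model, compile it for a candidate device, and time inference. A minimal sketch, where the model file, input shape, and run count are placeholder assumptions:

```python
# Sketch of measuring model latency on a target device with the
# OpenVINO runtime. "model.xml" and the 1x3x224x224 input shape are
# placeholders, not ICURO's actual model.
import time
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")
compiled = core.compile_model(model, "CPU")  # or "GPU", etc., per device
request = compiled.create_infer_request()

dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
request.infer({0: dummy})  # warm-up run

runs = 100
start = time.perf_counter()
for _ in range(runs):
    request.infer({0: dummy})
elapsed = time.perf_counter() - start
print(f"Average latency on CPU: {1000 * elapsed / runs:.1f} ms")
```

Measured numbers like these, rather than estimates from spec sheets, are what let the team match a model to the right hardware.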

ICURO doesn’t make hardware, but Intel software tools help the company determine which devices would work best for its mobile software applications. Most can run on compact, lightweight edge computers such as the Intel® NUC.

Faster Deployment and No-Code Operation

Before computer vision solutions can be implemented, their algorithms must be trained to recognize the images a customer’s application will encounter, which can range from stop signs, vehicles, and pedestrians to different goods in similar-sized packaging. Usually, much of the training is done by humans, who use online tools to outline and label images of all the objects a robot might encounter. After the images have been annotated, they are fed to the algorithms, whose performance is tested, corrected, and validated before deployment.
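Annotation tools typically store this work as labeled bounding boxes. A single COCO-style record, with purely illustrative values, might look like this in Python:

```python
# Illustrative COCO-style annotation record for one labeled object.
# All values here are made up; a production dataset contains
# thousands of such records, one per outlined object.
annotation = {
    "image_id": 1042,                    # which image the box belongs to
    "category_id": 3,                    # e.g., 3 = "water bottle"
    "bbox": [212.0, 88.0, 64.0, 170.0],  # [x, y, width, height] in pixels
    "iscrowd": 0,                        # single object, not a cluster
}
```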

To speed up this painstaking process, ICURO is experimenting with a newer method known as active learning, in which each image is annotated and fed to the algorithms right away. If they interpret it correctly, a domain expert can mark the image as validated, adding it to a growing database that guides the algorithms in future decisions. This learn-as-you-go method speeds training and spares personnel from annotations that may be unnecessary. “With the push of a button, you increase the dataset. Training and feedback go from days to minutes,” Baclace says.
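A minimal sketch of such a loop, where model_predict, expert_review, and retrain are hypothetical stand-ins for the inference engine, review interface, and training pipeline a team actually uses:

```python
# Sketch of an active-learning annotation loop. The model_predict,
# expert_review, and retrain callables are hypothetical placeholders,
# not part of any published ICURO API.
labeled_dataset = []

def active_learning_step(image, model_predict, expert_review, retrain,
                         retrain_every=50):
    """Annotate one image with the current model and grow the dataset."""
    prediction = model_predict(image)   # model proposes labels
    # The expert either validates the prediction or returns a correction;
    # either way, the example joins the training set with one click.
    label = expert_review(image, prediction)
    labeled_dataset.append((image, label))
    # Periodically fold the new examples back into the model, so each
    # round of annotation makes the next round's proposals better.
    if len(labeled_dataset) % retrain_every == 0:
        retrain(labeled_dataset)
```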

In addition, ICURO is working on solutions that will allow its customers to make changes to their computer vision models, training the software to recognize new products or new locations without having to write code. The company also regularly hones its algorithms to maintain a competitive edge in the fast-moving world of AI and computer vision.

“Neural networks keep changing and improving their accuracy every six months to a year, and we like to use the latest ones,” Baclace says. “This is a very exciting time for deep learning systems.”
 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.