Accelerating the Developer Journey: AI at the Edge

Building AI applications to run at the edge can seem like a formidable undertaking. But with the right development tools and platforms, like the Intel® OpenVINO™ Toolkit 2022.1, it is easy to get started, streamline your effort, and deploy real-life solutions.

For a deep dive into the operational and business value of edge AI, I spoke to Adam Burns, Vice President of OpenVINO Developer Tools in Intel’s Network and Edge Group. Burns talks about the strategy behind bringing new capabilities to OpenVINO 2022.1 and making it easier for developers to focus on building their applications. Our conversation covered everything from where to get started to solving the biggest AI developer challenges.

Let’s start by discussing what developers should know about building edge AI solutions.

At the end of the day, the edge is where operational data is generated. It’s in a store or restaurant where you’re trying to optimize the shopper or the diner experience. In medical imaging, it’s where an X-ray is taken. Or take a factory that wants to increase yields and manufacturing efficiencies.

Then you need to look at how AI marries up with an existing application. For example, in a factory, you’ve got a machine that’s running some part of the operation on the assembly line. You can use the data coming from that application to do visual inspection and ensure the quality of goods. Or you can use audio and data-based machine learning to monitor machine health and prevent failures. It’s this combination of how you use the data for the application and then use it to augment what the system is doing.

And the edge is very diverse. You have different sizes of machines, costs, and reliability expectations. So when we think about edge AI, we’re thinking about how we address a diversity of applications, form factors, and customer needs.

What was the strategy and thinking behind the OpenVINO 2022.1 release?

When we first launched OpenVINO, many of the applications for edge AI were focused on computer vision.

Since then, we’ve been working with and listening to hundreds of thousands of developers. There are three main things that we’ve incorporated into this release.

First and foremost is the focus on developer ease of use. There are millions of developers who use standard AI frameworks like PyTorch, TensorFlow, or PaddlePaddle, and we wanted to make it easier for them to bring their models to the edge. For example, somebody takes a standard model out of one of these frameworks and wants to convert it for use on a diverse set of platforms. We’ve streamlined and updated our API to be very similar to those frameworks and very familiar for developers.
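
As an illustration of that streamlined flow, here is a minimal sketch using the 2022.1 Python API, assuming a model has already been exported from one of those frameworks to ONNX; the file name and input shape are hypothetical:

    # Load a framework-exported model (ONNX here) and compile it for a target device.
    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("resnet50.onnx")            # hypothetical export from PyTorch or TensorFlow
    compiled_model = core.compile_model(model, "CPU")   # compile for the local CPU
    output_layer = compiled_model.output(0)

    # Run one inference, much like calling the model in the source framework.
    input_tensor = np.zeros((1, 3, 224, 224), dtype=np.float32)  # assumed input shape
    result = compiled_model([input_tensor])[output_layer]
    print(result.shape)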

Second, we have a broad set of models and applications at the edge. It could be audio, it could be natural language processing (NLP), or it could be computer vision. In OpenVINO 2022.1, there is a lot of emphasis on enabling these use cases, and really enhancing the performance across these diverse systems.

The third is automation. We want developers to be able to focus on building their application on whatever device or environment they choose. Rather than requiring developers to tweak a lot of parameters to get the best performance, OpenVINO 2022.1 auto-detects what kind of platform you’re on and what type of model you’re using, and determines the best setup for that system. This makes it very easy for developers to deploy across a wide range of systems without having to have optimization expertise.
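
A minimal sketch of what that looks like in practice, assuming a model already converted to OpenVINO’s IR format (the file name is hypothetical): instead of naming a specific device and tuning it by hand, the application asks for AUTO and states its performance goal.

    # Hand device selection and low-level tuning over to the runtime.
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.xml")      # hypothetical OpenVINO IR file
    compiled_model = core.compile_model(
        model,
        "AUTO",                               # auto-detect the best available device on this system
        {"PERFORMANCE_HINT": "THROUGHPUT"},   # or "LATENCY", depending on the workload
    )

The same code can then run unchanged whether the target system exposes only a CPU or also an integrated GPU.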

Can you tell me more about how audio and NLP AI are being used today?

Let’s start with a client example and then we’ll go to the edge. A lot of people are using video conferencing platforms today. In the background, those platforms are processing what we say so they can provide closed captioning for clarity and assistance where needed. That’s the natural language processing.

They also do noise suppression. If somebody comes to work on my house and has a blower on high speed behind me, the video conferencing platform is going to do the best it can to capture my voice and filter out those other sounds.

When we look at the edge, similar types of workloads are critical. Automating ordering in dining situations and retail stores has been a big focus. NLP can be used to process orders coming into a drive-through to make sure orders are captured accurately and then displayed back to the customer.

Audio processing can be used in a factory to gauge machine health, especially in motors, drives, and similar equipment. You can put an audio sensor on many types of equipment, and there will be certain audio signatures that can be detected, which are indicative of failure or anomalies.

So you might see more defects flagged by computer vision while, at the same time, your audio signature is picking up an abnormality in a motor. That’s a sign to flag a potential repair or initiate some type of corrective action.

What are the biggest challenges developers face when building AI apps today?

One of the chief problems is that a lot of the research around AI and the existing models are built in a cloud environment where you have almost unlimited compute. Now at the edge, a lot of developers are working in constrained environments.

How do you take applications and capabilities out of research and get them into deployment? One of the things we’re doing is making it efficient and economical enough to run on the edge, so the value you get out of deploying is greater than the cost of deploying. OpenVINO gives developers the ability to leverage some of the most advanced AI applications but in a way that’s efficient enough to really deploy on the edge.
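
One simple way to gauge whether a model clears that bar is to time it on the target hardware itself. Below is a minimal sketch using the OpenVINO Python API; the model file is hypothetical, and the loop is deliberately crude. OpenVINO’s benchmark_app tool does this kind of measurement more thoroughly.

    # Roughly measure inference latency and throughput on the device the app will run on.
    import time
    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.xml")        # hypothetical IR file
    compiled_model = core.compile_model(model, "CPU")
    output_layer = compiled_model.output(0)

    # Build a dummy input matching the model's (assumed static) input shape.
    dummy = np.zeros(list(compiled_model.input(0).shape), dtype=np.float32)

    compiled_model([dummy])                     # warm-up run
    runs = 100
    start = time.perf_counter()
    for _ in range(runs):
        compiled_model([dummy])[output_layer]
    elapsed = time.perf_counter() - start
    print(f"avg latency: {1000 * elapsed / runs:.1f} ms, "
          f"throughput: {runs / elapsed:.1f} inferences/sec")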

For developers who want to learn more and do more, where can they get started?

The place to start is openvino.ai. You’ll find getting-started guides that walk through model optimization, access to Jupyter notebooks, different types of applications, and code samples. And, of course, you can download OpenVINO for free.
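
After installing the toolkit (for example with pip install openvino-dev, one of the standard distribution channels), a quick sanity check is to print the runtime version and the devices it can see. A minimal sketch:

    # Verify a local OpenVINO install and list the inference devices it detects.
    from openvino.runtime import Core, get_version

    core = Core()
    print("OpenVINO version:", get_version())
    print("Available devices:", core.available_devices)   # e.g. ['CPU'] or ['CPU', 'GPU']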

For those who want to do work in a hosted environment or want to prototype across different types of Intel systems, we have an IoT DevCloud. In minutes you can log in and have a session running with OpenVINO. There’s the same access to those notebooks and code samples that allow people to do something right away, whether it’s to optimize a network or run a specific type of application on their data sets. There’s access to a bunch of different model types and applications, and people can use their own sample data as well.

And finally, we have the Edge AI Certification Program. This is more about teaching the application of AI at the edge, while at the same time you’re using OpenVINO as a tool.

I think all three of those are great places to get started depending on where you are in your development journey.

Is there anything else you would like to add?

There are so many applications where data’s being generated at the edge. And that data can drive savings, better customer experiences, or operational efficiency when it’s combined with AI. OpenVINO is all about taking what’s already working on the edge from an operational perspective and enhancing it with AI.

A lot of AI today, especially in the cloud, is deployed on expensive accelerators. In many cases, these solutions are too hot or too expensive. OpenVINO helps solve that problem by tuning these AI workloads and these AI networks to run efficiently on standard off-the-shelf Intel CPUs, which today have great AI performance and are ubiquitous in deployment around the world—meaning there is no need to buy something extra. That opens a whole range of new opportunities where you couldn’t deploy these applications a few years ago because they just weren’t efficient enough or they just weren’t cost-effective enough.

We’re trying to bring more developers to the edge with OpenVINO and really make sure there’s as much investment as possible in these technologies, which we think are incredibly valuable in terms of customer experience, saving money, improving manufacturing, and getting more goods out there.

From that standpoint, we’re trying to solve two things with OpenVINO. One is making it economical enough to deploy. And then really democratizing AI by making it more accessible from a developer perspective, bringing more developers into the fold who can create and deploy these applications.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

About the Author

Georganne Benesch is an Editorial Director for insight.tech. Before this she was an independent writer, authoring blogs, web content, solution guides, white papers and more. Prior to her freelance career Georganne held product management and marketing positions at companies such as Cisco, Proxim and Netopia. She earned a B.A. at University of California at Santa Cruz.
