

Tools of the Trade: Empowering AI Developers to Innovate


Artificial intelligence is disrupting industries, creating opportunities, and enhancing customer experiences. AI developers are at the forefront of this revolution, building the solutions that will shape the future. That’s why it’s so important that they equip themselves with the right tools to bring their AI solutions and computer vision applications to life.

To learn all about the latest trends and technologies developers should keep up with, we talked to Intel’s Bill Pearson, VP, Network & Edge Group, General Manager Solutions Engineering; and Adam Burns, VP IoT, Director Edge Inference Products. Pearson and Burns discuss industry trends and the Intel technology, tools, and programs that make it easier to keep ahead of the game.

What are the industry trends driving the need for IoT, edge, and AI solutions?

Bill Pearson: There are four industry trends that come to mind:

  • The world is becoming more software defined. This is true of networks, applications, and infrastructure.
  • AI has become pervasive across nearly every use case.
  • The rate of change is rapidly increasing; the whole space is evolving faster than ever.
  • Developers expect the simplicity and accessibility of modern tools, and the industry needs to move toward that.

Think about it as a cloud-native paradigm: Developers now expect to apply everything they learned in the cloud everywhere else.

Take what Apple’s done for the phone. They’ve shown us that any experience should just be delightful. It should be simple and straightforward. And now that type of expectation is entering the development space. When you bundle it all up, we need to build software-defined AI use cases that are super simple for people to apply in their daily lives.

Adam Burns: I wholly agree. If you apply those trends to the shift in the market, particularly in the edge IoT world, there’s been a slow burn that has rapidly accelerated over the last few years. In the embedded world that Intel started in over 30 years ago, the focus was on reliability. Developers were looking for a combination of software and hardware that is ultra-reliable and can be used in production for five to 10 years without having to worry about it. Now the shift is to “I want to know everything that’s happening with that device and the system it lives in. I want to know how to make it more efficient.”

This is enabled by all the things Bill talked about in terms of software-defined systems, AI, and how all this is coming together. And that shift in developer and operator mindset fundamentally changes what people are asking for versus what we traditionally think of as embedded computing.

“It’s an exciting time to be a #developer, an exciting time to be part of building these modern solutions that we’re all on this journey to create” – Bill Pearson, @intel via @insightdottech

What are the challenges developers face when building edge AI applications?

Bill Pearson: The first one is simply: How do I get started? There are so many options and a lot of noise in the industry. First, people ask what path will get them started toward their goals and KPIs. Next, they look for the most effective way of achieving what they’re trying to do in their unique use case.

Third, developers want to identify the right solution that’s going to best meet that use case. For example, if they take something from a vendor and it offers a reference solution or a product, is that going to meet the need they’re intending? And for Intel it’s about helping developers and making sure that they can not only accomplish their goal, but that the solution they choose leads them there.

Part of the solution is the hardware that goes into it. I saved this for last because it’s not the first choice the developer makes, but it is an important one. And Intel wants to make it easier for a developer to use the right hardware that’s going to give them the best outcomes, so that they don’t build something that’s too big, consumes too much power, produces too much heat, or doesn’t fit in the physical space, particularly at the edge.

Adam Burns: So say I want to build a computer vision application to do machine defect detection on an assembly line. There are lots of good classification models out there. For example, our partner Hugging Face hosts one of the largest model ecosystems in AI, with an array of models and transformers that people can apply to computer vision.

Now that they have a general model that works well, how do they fine-tune it for their specific application? A sophisticated data scientist may want to take a wealth of data and do that training themselves. But application developers may want specialized tools like Intel® Geti, which can take relatively small amounts of data and limited training compute and still produce a very accurate model.

Now how do they deploy so it’s optimized to the right type of hardware? Developers can use something like Intel® DevCloud, Intel Geti, and the Intel® Distribution of OpenVINO toolkit to compress the model down to a size that’s suitable for the edge. And then they can use DevCloud to determine if it’s best to run on an Intel® Core with a GPU, or if it should be running on an Intel Atom®. Or do they need to move up to Intel® Xeon® because it’s a little heavier workload? It’s these types of decisions that Bill talked about in terms of finding the right application, tuning it for purpose, and making sure it’s deployed on the right hardware.
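The deployment decision described above can be sketched in code. This is purely illustrative: the `pick_target()` helper, its thresholds, and the hardware-tier labels are hypothetical choices made for this sketch, not published Intel sizing guidance, and the commented lines assume the `openvino` Python package (2023.x-style API).

```python
# Illustrative sketch of the deploy-time decision: compress a model for the
# edge, then match the workload to a hardware tier. All thresholds here are
# hypothetical, chosen only to show the shape of the decision.

def pick_target(model_size_mb: float, frames_per_second: int) -> str:
    """Map a rough workload profile to a hardware tier (illustrative only)."""
    if model_size_mb < 20 and frames_per_second <= 15:
        return "Intel Atom"        # light model, low frame rate
    if model_size_mb < 200:
        return "Intel Core + GPU"  # mid-size vision workload
    return "Intel Xeon"            # heavier, server-class workload

# With OpenVINO installed, the compression/deployment step might look like
# (assumed API, shown as a sketch only):
#   import openvino as ov
#   model = ov.convert_model("defect_classifier.onnx")  # ONNX -> OpenVINO IR
#   compiled = ov.Core().compile_model(model, "CPU")    # or "GPU"

print(pick_target(10, 10))   # a small, slow-frame-rate model
print(pick_target(120, 30))  # a mid-size real-time vision model
print(pick_target(500, 30))  # a heavy workload
```

In practice a developer would benchmark on real hardware (for example through DevCloud) rather than rely on static thresholds like these.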

We want to guide developers through that complete workflow. We find, especially in AI, that more than 50% of the ideas developers have with those models don’t make it to production. So, for us it’s about easing their path to production and helping them deploy the solution in the most cost-effective way possible.

What are some other Intel tools that can ease that path?

Bill Pearson: Adam did a nice job of setting this up. When you think about solutions, let’s look at the Intel® Edge Software Hub and all the reference implementations it has. For example, a developer wants to know how to put together something for frictionless checkout. The Edge Software Hub can show them how the different ingredients fit together, the code that helps them put it together, and then go play with it, if you will.

You’re seeing that increasingly. We offer Jupyter notebooks as part of the extended OpenVINO toolkit, with hands-on samples that developers can immediately apply and run on DevCloud. So immediately they can say, “I’m interested in an AI solution, I can use OpenVINO, and I’ve got these Jupyter notebooks; let me try them right now.”
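For readers who want to try those hands-on samples locally, a setup along these lines is one way to start. This is a sketch, assuming the public openvino_notebooks repository and a standard Python environment; exact steps may differ by release, so check the repository’s own README.

```shell
# Sketch: fetch and launch the OpenVINO sample notebooks locally
# (steps assume the public openvino_notebooks project; details may vary).
git clone https://github.com/openvinotoolkit/openvino_notebooks.git
cd openvino_notebooks
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt   # installs OpenVINO and notebook dependencies
jupyter lab notebooks             # browse and run the hands-on samples
```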

We put these things together, as Adam was saying, into this workflow where they can visualize the solution they want to create and use the samples and references we provide for how to do it. Then they can immediately use our tools to get a feel for how they’re going to apply it and what hardware they’ll need. And then of course they can always use Geti and OpenVINO to figure out how to build it into the product they’re ultimately trying to deploy.

Can you talk a little bit more about the OpenVINO toolkit?

Adam Burns: With OpenVINO, we’re expanding its breadth from a model and network perspective. While we started with a focus on computer vision, we see more multimodal uses of AI. An industrial example is using computer vision to spot defects, and audio signatures to listen to a motor or bearing and determine whether a failure may happen on that system.

We see more and more customers interested in using generative AI, combining different types of AI, and we’re expanding OpenVINO to keep up with those types of models. For example, we publish blogs with Hugging Face on Stable Diffusion performance. We’re working on new open chatbot systems like Dolly and LLaMA to make sure we have the right performance for those. And we just keep focusing on breadth and developer efficiency.

So, we offer a diverse roadmap to meet a diverse set of developer needs. With the OpenVINO 2023.0 release and the performance and efficiency cores we have in our CPU roadmap, we’ve automated the usage of those cores for what is most efficient for the system and the workloads running on it.

How is OpenVINO supporting new trends like generative AI?

Adam Burns: What’s happened from a market perspective is that generative AI is part of every conversation in every enterprise. We’re seeing tremendous demand and generative AI is starting those conversations.

And we’ve been focusing on optimizing OpenVINO through several techniques, starting with popular NLP-style models like the ones behind ChatGPT. And we look at optimizations and portability within OpenVINO.

But it isn’t the answer to every problem. Where generative AI has a ton of power is when you start to look at not just the main application but all the integration work. It has the power to understand interfaces and help customers automate integration, system settings, and a number of different things. And it makes operators and developers incredibly effective.

Leading AI developers in the industry are saying things like, “I only write about 20% of my code now because generative AI is doing a lot of the code completion and the setup-type work. I can really focus on the algorithm and the unique places where I’m adding value.” So, it’s an amazing force multiplier to make developers more efficient. It’s been really interesting to see what applications enterprises are coming up with. And from an OpenVINO standpoint, it’s critical that we support that not only in the cloud, but also adapting and fine-tuning these models so they’re purpose-built for the edge.

Bill Pearson: Despite all the years of research, it’s early days and we’re just getting started. As generative AI has broken into public perception, it has created more awareness of AI. But it has also spurred more experimentation, and it turns out to be remarkably good for that. There are a lot of interesting use cases being explored, but I don’t think the story’s been written yet.

What’s interesting for me is that we have two things going on. One is that generative AI creates this art of the possible. That story is one for the imagination, and we’re going to be amazed by where it goes. Practically, many customers today can use it as an opportunity to explore what they really need: the KPIs they’re trying to achieve, the use case they’re trying to implement. But in many cases we can do that without generative AI, and frankly there are great solutions that are more focused and more cost-effective to help with that. The key is to help our customers find the right solution to the problem they are trying to solve.

For developers who want to learn more, how do they get started?

Bill Pearson: If you’re looking to build solutions, the Intel® Developer Zone is the place to start. You’ll find all the tools that Intel provides, like the Edge Software Hub and OpenVINO. And if you’re specifically interested in building edge AI applications, the edge AI resources there are another great starting place.

Adam Burns: I think we live in a world where people want to get hands on and tinker with things. That’s where people can use the Edge Software Hub to really dig into the solutions and understand them.

Is there anything else either of you would like to add to our conversation?

Bill Pearson: For me, there is no better time to be in this industry, with its exciting, rapid pace of change in the marketplace, software-defined everything, and AI becoming so pervasive. It’s an exciting time to be a developer, an exciting time to be part of building these modern solutions that we’re all on this journey to create.

Adam Burns: Building on what Bill said, it’s incredibly rewarding and satisfying to see what developers and customers and partners are able to do with our technology. Just one example is Royal Brompton Hospital and pediatric lung disease detection. It so happens that one of my cousin’s daughters has lung disease. You get these cases where we can immediately see tangible value, whether it’s making sure somebody gets the diagnosis they need faster or making a factory more efficient. Being part of that and enabling developers to create what they can is incredibly satisfying and rewarding.

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

About the Author

Georganne Benesch is an Editorial Director for insight.tech. Before this she was an independent writer, authoring blogs, web content, solution guides, white papers, and more. Prior to her freelance career, Georganne held product management and marketing positions at companies such as Cisco, Proxim, and Netopia. She earned a B.A. at the University of California, Santa Cruz.
