PODCAST

Deploy AI Apps with Intel® OpenVINO™ and Red Hat

Audrey Reznik and Ryan Loney

What can artificial intelligence do for your business? Well, for starters, it can transform it into a smart, intelligent, efficient, and constantly improving machine. The real question is: how? There are multiple ways organizations can improve their operations and bottom line by deploying AI apps. But it’s not always straightforward and requires skills and knowledge they often do not have.

Thankfully, companies like Red Hat and Intel® have worked hard to simplify AI development and make it more accessible to enterprises and developers.

In this podcast, we discuss: the growing importance of AI, what the journey of an AI application looks like—from development to deployment and beyond—and the technology partners and tools that make it all possible.

Listen Here

Apple Podcasts · Spotify · Google Podcasts · Amazon Music

Our Guests: Red Hat and Intel®

Our guests this episode are Audrey Reznik, Senior Principal Software Engineer for the enterprise open-source software solution provider Red Hat, and Ryan Loney, Product Manager for OpenVINO Developer Tools at Intel.

Audrey is an experienced data scientist who has been in the software industry for almost 30 years. At Red Hat, she works on the OpenShift platform and focuses on helping companies deploy data science solutions in a hybrid cloud world.

Ryan has been at Intel for more than five years, where he works on open-source software and tools for deploying deep-learning inference.

Podcast Topics

Audrey and Ryan answer our questions about:

  • (2:24) The business benefits of AI and ML
  • (5:01) AI and ML use cases and adoption
  • (8:52) Challenges in deploying AI applications
  • (13:05) The recent release of OpenVINO 2022.1
  • (22:35) The AI app journey from development to deployment
  • (36:38) How to get started on your AI journey
  • (40:21) How OpenVINO can boost your AI efforts

Related Content

To learn more about AI and the latest OpenVINO release, read The AI Journey: Why You Should Pack OpenShift and OpenVINO and AI Developers Innovate with Intel® OpenVINO 2022.1. Keep up with the latest innovations from Intel and Red Hat by following them on Twitter at @Inteliot and @RedHat, and on LinkedIn at Intel-Internet-of-Things and Red-Hat.


This podcast was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Transcript

Christina Cardoza: Hello and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Associate Editorial Director of insight.tech. And today we’re talking about deploying AI apps with experts Audrey Reznik from Red Hat and Ryan Loney from Intel®.

Welcome to the show, guys.

Audrey Reznik: Thank you. It’s great to be here.

Ryan Loney: Thanks.

Christina Cardoza: So, before we get started, why don't you both tell us a little bit about yourselves and your backgrounds at your companies. Audrey, I'll start with you.

Audrey Reznik: Oh, for sure. So I am a Senior Principal Software Engineer. I act in that capacity as a data scientist. I've been with Red Hat for close to a year and a half. Before that, and I'm going to be dating myself here, I spent close to 30 years in the software industry. So I've done front-end to back-end development. And just the last six years I've concentrated on data science. And one of the things that I work on with my team at Red Hat is the Red Hat OpenShift data science platform.

Christina Cardoza: Great. And Ryan?

Ryan Loney: Yep. Hi, I'm Ryan Loney. So I'm a Product Manager at Intel® for the OpenVINO toolkit. And I've been in this role since back in 2019 and been working in the space for—not as long as Audrey—less than a decade. But with the OpenVINO toolkit, we work on open-source software and tools for deploying deep learning inference. So things like image classification, object detection, natural language processing—we optimize those workloads to run efficiently on Intel® hardware, whether it's at the edge or in the cloud, or at the edge and controlled by the cloud. And that's what we do with OpenVINO.

Christina Cardoza: Great. Thanks, Ryan. And I should mention the IoT Chat and insight.tech program as a whole are published by Intel®, so it’s great to have someone with your background and knowledge joining us today. Here at insight.tech, we have seen AI adoption just rapidly increasing and disrupting almost every industry—if not every industry.

So Ryan, I would love to hear from your perspective why AI and machine learning are becoming such vital tools, and what are the benefits businesses are looking to get out of them?

Ryan Loney: Yeah, so I think automation in general—everything today has some intelligence embedded into it. I mean, the customers that we’re working with, they’re also taking general purpose compute, you know like an Intel® Core processor, and then embedding it into an X-ray machine or an ATM machine, or using it for anomaly detection on a factory floor.

And AI is being sort of integrated into every industry, whether it’s industrial, healthcare, agriculture, retail—they’re all starting to leverage the software and the algorithms for improving efficiency, improving diagnosis in healthcare. And that’s something that is just—we’re at the very beginning of this era of using automation and intelligence in applications.

And so we’re seeing a lot of companies and partners of Intel® who are starting to leverage this to really assist humans in doing their jobs, right? So if we have a technician who’s analyzing an X-ray scan or an ultrasound, that’s something where with AI we can help improve the accuracy and early detection for things like pneumothorax.

And with factories, we have partners who are building batteries and PCBs, and they’re able to use cameras to just detect if there’s something wrong, flag it, and have somebody review it. And that’s starting to happen everywhere. And with speech and NLP, this is a new area for OpenVINO, where we’ve started to optimize these workloads for speech synthesis, natural language processing.

So if you think about, you know, going to an ATM machine and having it read your bank balance back to you out loud, that’s something that today is starting to leverage AI. And so it’s really being embedded in everything that we do.

Christina Cardoza: Now, you mentioned a couple of use cases across some of the industries that we’ve been seeing, but Audrey, since you have been in this space for—as you mentioned, a couple of decades now—I would love to hear from your perspective how you’re seeing AI and ML being deployed across these various use cases. And what the recent uptake in adoption has been.

Audrey Reznik: Okay, so that's really an excellent question. First of all, when we're looking at how AI and ML can be deployed across the industry, we kind of have to look at two scenarios.

Sometimes there’s a lot of data gravity involved where data cannot be moved off prem into the cloud. So we still see a lot of AI/ML deployed on premises. And, really, on premises there are a number of platforms that folks can use. They can create their own, but typically people are looking to a platform that will have MLOps capability.

So that means they’re looking for something that’s going to help them with the data engineering, the model development, training/testing the deployment, and then the monitoring of the model and the intelligent application that communicates with the model. Now that’s being on prem.

What people also do is take advantage of the public cloud infrastructure that we have. So a lot of folks are also moving to the cloud if they don't have data-gravity issues or security issues, because we do see, in areas such as defense systems or even government, that they prefer to have their data on prem. If there are no issues with that, they tend to move a lot of their MLOps creation and delivery/deployment to the cloud. So, again, they're going to be looking for a cloud service platform that is going to have MLOps available, so that they can look at their data and curate their data. Then be able to create models, train and test them, deploy them, and, again, have that capability once things are deployed to monitor those models. Again, check for drift. If there are any issues with the models, be able to retrain those models.

In both instances, what people are really looking for is something easy to use. You don't want to put together a number of applications and services piecemeal. I mean, it can be done, but at the end of the day we're looking for ease of use. We really want a platform that's easy to use for data scientists, data engineers, and application developers, so that they can collaborate. And the collaboration then kind of drives some of the innovation and their ability, again, to deploy an intelligent application quickly.

And then, I should mention, for everybody in IT, whether you're on prem or in the cloud, IT has to be happy with your decision. So they have to be assured that the place that you're working in is secure, and that there's some sort of AI governance driving your entire process. So those are, on prem and in the cloud, the ways that we're seeing people go ahead and deploy AI/ML. And increasingly we're seeing people use both.

So we’re having what we call a hybrid cloud situation, or hybrid platforms.

Christina Cardoza: I love all the capabilities you mentioned that people are looking for in the tools that they pick up, because AI can be such an intimidating field to get into. And, you know, it’s not as simple as just deploying an AI application or solution. There’s a lot of complexity that goes into it. And if you don’t choose the right tool or if you’re piecemealing it, like you mentioned, it can make things a lot more difficult than they need to be. So with that, Ryan, what are some of the challenges, the biggest challenges that businesses face when they’re looking to go on this AI journey and deploy AI applications in their industry and in their business?

Ryan Loney: I think Audrey brought up one of the biggest challenges, and that's access to data. It's really important, and I think we should talk about it more, because when you're thinking about training or creating a model for an intelligent application, you need a lot of data. And you have to factor in HIPAA compliance and privacy laws and all of these other regulatory limitations, and of course the ethical choices that companies are making—they want to protect their customers and their customers' privacy. So you need a secure enclave where you can get the data and train on the data; you can't necessarily send it to a public cloud, or if you do, you need to do it in a way that's secure. And that's something that Red Hat is offering. That's one of the things I'm really impressed with from Red Hat and from OpenShift: this approach to hybrid cloud where you can have on-prem, managed OpenShift, or you can run it in a public cloud, and really give the customer the ability to keep their data where they're legally allowed to, or where they want to keep it for security and privacy concerns. And so that's really important.

And when it comes to building these applications and training these models for deep learning, for AI, everything at the foundation is really built on top of open source tools. So we have deep learning frameworks like TensorFlow and PyTorch. We have toolkits that are provided by hardware vendors like Intel®. We have OpenVINO, the OpenVINO toolkit, and there's this need to use those tools in an environment that is safe for the enterprise, that has access rights and management. But at the core they're open-source tools, and that's what's really impressive about what Red Hat is doing. They're not trying to recreate something that already exists and works really well. They're taking and adopting these open source tools, the open source Open Data Hub, and building on top of that and offering it to enterprises.

So they’re not reinventing the wheel. And I think that’s one of the challenges for many businesses that are trying to scale is they need to have this infrastructure, and they need to have a way to have auto-scaling, load-balancing infrastructure that can increase exponentially on demand when it needs to. And building out a Kubernetes environment yourself and setting it all up and maintaining that infrastructure—that’s overhead and requires DevOps engineers and IT teams. And so some of that’s really where I think Red Hat is coming into, in a really important space, to offer this managed service so that you can focus on getting the developers and the data scientists access to the tools that they would use on their own outside of the enterprise environment, and making it just as easy to use in the enterprise environment. And giving them the tools that they want, right? So they want to use the tools that are open source, that are the deep learning frameworks, and not reinventing the wheel. So I think that’s really a place where Red Hat is adding value. And I think there’s going to be a lot of growth in this space, because our customers that are deploying at scale and including devices at the edge, they’re using container orchestration, right? These orchestration platforms, you need it to manage your resources, and having a control plane in the cloud and then having nodes at the edge that you’re managing—that’s the direction that a lot of our customers are moving. And I think that’s the future.

Christina Cardoza: Great. And while we're on the topic of tools, you've mentioned OpenVINO a couple of times, which is Intel®'s AI toolkit. And I know you guys recently had one of the biggest launches since OpenVINO first started. So can you talk a little bit about some of the changes and the thought process that went into the OpenVINO 2022.1 release? And what new capabilities you added to really help businesses and developers take advantage of all the AI capabilities and opportunities out there.

Ryan Loney: Yeah. So this was definitely the most substantial set of feature enhancements and improvements that we've made in OpenVINO since we started in 2018.

It's really driven by customer needs. And so some of the key things for OpenVINO are that we have hardware plugins, or device plugins as we call them, for the CPU, GPU, and other accelerators that Intel® provides. And at Intel® we've recently launched our discrete graphics. We've had integrated graphics for a long time, so, GPUs that you can use for doing deep learning inference, that you can run AI workloads on. And so some of the features that are really important to our customers that are starting to explore using these new graphics cards—which we've launched some of the client discrete graphics in laptops, and later this year we're going to be releasing the data center and edge server SKUs for discrete graphics—the customers need to do things like automatic batching. So when you have a GPU card, deciding the correct batch size for the input for a specific model is going to be a different number depending on the model and depending on the compute resources available.

So some of our GPUs have different numbers of execution units and different power ratings. So there’s different factors that would make each GPU card slightly different. And so instead of asking the developer to go out and try batch size 32 and batch size 16 and batch size 8 and try to find out what works best for their model, we’re automating some of that so that they don’t have to, and they can just automatically let OpenVINO determine the batch size for them.
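For readers who want to try the automatic batching Ryan describes, here is a minimal sketch using the OpenVINO 2022.1 Python API. The model path is a placeholder, and the THROUGHPUT performance hint is what allows OpenVINO to pick the batch size on supported GPUs:

```python
from openvino.runtime import Core  # OpenVINO 2022.1 Python API

core = Core()
model = core.read_model("model.xml")  # placeholder: your converted IR model

# The THROUGHPUT hint enables automatic batching on GPU devices: OpenVINO
# chooses a batch size based on the model and the device's execution units
# and other resources, instead of the developer benchmarking 8/16/32 by hand.
compiled = core.compile_model(model, "GPU", {"PERFORMANCE_HINT": "THROUGHPUT"})
```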

And on a similar note, since we've started to expand to natural language processing, if you think about question answering: so if you had asked a chat bot a question like, what is my bank balance? And then you ask it a second question like, how do I open an account? Both of those questions have different sizes, right? The number of letters and number of words in the sentence—it's a different input size. And so we have a new feature called dynamic shapes, and that's something we introduced on our CPU plugin. So if you have a model like a BERT natural language processing model, and you have different questions coming into that model of different sizes, of different sequences, OpenVINO can handle that under the hood, automatically adjusting the input. And so that's something that's really useful, because without that feature you have to add padding to every question to make it a fixed sequence length, and that adds overhead and it wastes resources. So that's one feature that we've added to OpenVINO.
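As a sketch of the dynamic shapes feature, the 2022.1 Python API lets you mark the sequence dimension as dynamic with -1. The input tensor names below are assumptions for a typical BERT export, not something stated in the conversation:

```python
from openvino.runtime import Core, PartialShape

core = Core()
model = core.read_model("bert.xml")  # placeholder: a BERT-style IR model

# Batch of 1, any sequence length: -1 marks a dynamic dimension, so questions
# of different lengths can be fed to the model without padding.
model.reshape({
    "input_ids": PartialShape([1, -1]),       # assumed input name
    "attention_mask": PartialShape([1, -1]),  # assumed input name
})

compiled = core.compile_model(model, "CPU")  # dynamic shapes on the CPU plugin
```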

And just one additional thing I’ll mention is OpenVINO is implemented in C++ at its core. So our runtime, we have it written in C++. We have Python bindings for Python API. We have a model server for serving the models in environments like OpenShift where you want to expose a network endpoint, but that core C++ API, we’ve worked really hard to simplify it in this release so that if you take a look at our Python code, it’s really easy to read Python. And that’s why a lot of developers, data scientists, the AI community really like Python because the human readability is much better than C++ for many cases. So we’ve tried to simplify the C++ API, make it as similar as possible to Python so that developers who are moving from Python to C++—it’s very similar. It’s very easy to get that additional boost by using C++.
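To give a feel for the API style Ryan is describing, here is roughly what the simplified 2022.1 inference flow looks like in Python (the C++ API mirrors the same read/compile/infer steps); the model path and input shape are placeholders:

```python
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")        # placeholder model
compiled = core.compile_model(model, "CPU")

# One synchronous inference on dummy data shaped like a 224x224 RGB image.
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled([dummy])[compiled.output(0)]
print(result.shape)
```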

So those are some of the things that we changed in the 2022.1. There are several more, like adding new models, support for new operations, really expanding the number of models that we can run on Intel® GPU. And so it’s a really big release for us.

Christina Cardoza: Yeah. It sounds like a lot of work went into making AI more accessible and easier entry for developers and these businesses to start utilizing everything that it offers. And Audrey, I know when deploying intelligent applications with OpenShift, you guys also offer support with OpenVINO. So I would love to hear what your experience has been using OpenVINO and how you’re gaining more benefits from the new release. What were some of the challenges you faced before OpenVINO 2022.1 came out, and what are you guys experiencing now on the platform?

Audrey Reznik: Right. So, again, very good question. And I'm just going to lead off from where Ryan left off, expanding on the virtues of OpenVINO.

First of all, you have to realize that before OpenVINO came along, a lot of the processing would have been done on hardware. So clients would have used a GPU, which can be expensive. And a lot of the time when somebody is using a GPU, not all of the resources are used. And that's, I don't want to say a waste, but it is a waste of resources, in that you could probably use those resources for something else, or even have different people using that same GPU.

With the advent of OpenVINO, that kind of changed the paradigm in terms of how I can go and optimize my model or how I can do quantization.

So let's go ahead with optimization first. Why use a GPU if you can go ahead and, say, process some video and look at that video and say, you know what? I don't need all the different frames within this video to get an idea of what my model may be looking at. Maybe my model may be looking at a pipe in the field, and from that edge device we're just checking to make sure that nothing is wrong with that pipe. It's not broken. It's not cracked. It's in good shape. You don't need to use all of those frames that you're taking within an hour. So why not just reduce some of those frames without impacting the ability of your model to perform? That optimization feature was huge.
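The frame-reduction idea Audrey describes can be as simple as scoring every Nth frame of the stream. A hypothetical sketch, where the video source and the run_inference helper are stand-ins:

```python
import cv2  # OpenCV for video decoding


def run_inference(frame):
    # Stand-in for the model call that checks the pipe for cracks or damage.
    pass


SAMPLE_EVERY = 30  # with 30 FPS video, score roughly one frame per second

cap = cv2.VideoCapture("pipe_inspection.mp4")  # placeholder camera/file source
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % SAMPLE_EVERY == 0:
        run_inference(frame)  # skip the other 29 frames entirely
    frame_idx += 1
cap.release()
```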

Besides that, with OpenVINO, as Ryan alluded to, you can just add a couple little snippets of code to get this benefit. That means not having to go through the trouble of setting up a GPU. So it's a very quick and easy way to optimize something, so that you can get the benefit of OpenVINO without using the hardware.

The other thing is quantization. Within machine learning models, you may use a lot of numerics in your calculations. So I'm going to take the most famous number that most people know about, which is pi. It's not really 3.14; it's 3.14 and six or seven digits beyond that. Well, what if you don't need that precision all the way? What if you can be happy with just the one value that most people equate with pi, which is 3.14? In that respect, you're also gaining a lot of benefit for your model, in that you're still getting the same results, but you don't have to worry about cranking out so many decimal places as you go along. And, again, for customers this is huge because, again, we're just adding a couple lines of code in order to use the optimization and quantization with OpenVINO. That's so much easier than having to hook up to a GPU. I'm not saying anything bad about GPUs, but for some customers it's easier. And for some customers it's also cheaper. And some people really do need to save some of that money in order to be more efficient with the funds that they could divert elsewhere in their business. So, if they don't have to get a GPU, it's a nice, easy way to save on that hardware expense but really get the same benefits.
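Audrey's pi analogy maps directly onto what 8-bit quantization does. Here is a toy NumPy illustration of the core idea; real tools such as OpenVINO's Post-training Optimization Tool choose scales per tensor or per channel and calibrate them on sample data:

```python
import numpy as np

weights = np.random.randn(4, 4).astype(np.float32)  # pretend FP32 weights

# Map the FP32 value range onto 256 integer levels with one scale/zero-point.
scale = (weights.max() - weights.min()) / 255.0
zero_point = np.round(-weights.min() / scale)

q = np.clip(np.round(weights / scale + zero_point), 0, 255).astype(np.uint8)
restored = (q.astype(np.float32) - zero_point) * scale

# The uint8 copy is 4x smaller, and the reconstruction error stays small,
# which is why results are usually unchanged for inference purposes.
print(np.abs(weights - restored).max())
```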

Christina Cardoza: Now we’ve talked a lot about the tools and the capabilities that we can leverage in this AI-deployment journey. But I would love to give our listeners a full picture of what an AI journey really entails from end-to-end, start-to-finish. So Audrey, would you be able to walk us through that journey a little bit from development to deployment, and even beyond deployment?

Audrey Reznik: Yeah, I can definitely do that. And what I will do is I will share some slides. For those that are just listening through their radio, I'll make sure that my description is good enough for you so that you won't get lost. So what I'm going to be sharing with you is the Red Hat OpenShift data science platform. This is a cloud service that is available on AWS. And of course this can have hybrid components, but I'm just going to focus on the cloud services aspect. And this is a managed service offering that we have for our customers. And we're mainly targeting our data scientists, data engineers, machine learning engineers, and of course our IT folks, so that they don't have to manage their infrastructure. So, when we look at the journey, especially for MLOps, there are a couple of steps that are very important.

We want to gather and prepare the data. We want to go ahead and develop the model. We want to integrate the models in application development. We want to do model monitoring and management. And we have to have some way of going ahead and retraining these models. These are five very important steps. And at Red Hat, again as Ryan talked about earlier, we don't want to reinvent everything. We want to be able to use some of the services and applications that companies have already created. And a lot of open source companies have created some really fantastic applications and pieces of software that will fit each step of this MLOps journey or model life cycle.

So before I go into taking a look at all the different steps of the model life cycle, I'm just going to build up this infrastructure for you to take a look at. So really this managed cloud services platform, first of all, sits on AWS, and within AWS Red Hat OpenShift has two offerings: We have Red Hat OpenShift Dedicated, or some may be familiar with Red Hat OpenShift Service on AWS, which we affectionately call ROSA.

Now, even though we have these platforms, we want to take care of any hardware acceleration that we may want. So we want to be able to include GPUs, and we partner with Nvidia, where we use Nvidia GPUs for hardware acceleration. We also have Intel®. Intel® not only helps with that hardware aspect but, again, we'll point out where OpenVINO comes in a little bit later.

Over top of this basic infrastructure, we have what we call our Red Hat managed cloud services. These are going to help take any machine learning model that's being built all the way, again, from gathering and preparing data—where you could use something such as our streaming services for time series data—to developing a model, where we have the OpenShift data science application or platform, and then to deploying that model using source-to-image, and then model monitoring and management with Red Hat OpenShift API Management.

Again, as I mentioned, we didn't want to go ahead and create everything from scratch. So what we did is, for each part of the model life cycle, we invited various independent software vendors to come in and join this platform. So if you wanted to gather/prepare data, you could use Starburst Galaxy. Or if you didn't want to use that, you could go back to the Red Hat offering. If you wanted to develop the model, you could use Red Hat OpenShift data science, or you could use Anaconda, which comes with prebaked models and an environment where you can go ahead and develop and train your model and so forth.

But what we also did was add in a number of customer-managed software options. And this is where OpenVINO comes in. So what we have with this independent software is, again, we can go ahead and develop our model, but this time we may use Intel®'s oneAPI AI Analytics Toolkit. And if we wanted to, again, integrate the models in app development, we may go ahead and use something like OpenVINO, and we could also use something like IBM Watson.

The idea though is at the end of the day, we go ahead and we invite all these open source products into our platform so that people have choice. And what’s really important about the choice is they can pick which solution works better for them to solve the particular problem that they’re working on.

And, again, with that choice, they may see something that they haven’t used before that may actually help them innovate better, or actually make their product a lot better.

So this type of platform lets you do everything that you need: ingest your data; develop, train, and deploy your model; bring your application engineers in to create the front end and the REST API services for an intelligent application; and then retrain the model when you need to. That makes the whole process of MLOps a lot easier. This way you have everything within one platform, and you're not trying to fit things together and, as I think I mentioned before, piecemeal solutions together. And at the end of the day you have a product that everyone on your team can use to collaborate and push something out into production a lot more easily than they may have been able to in the past.

Christina Cardoza: That’s great. Looking at this entire AI journey and the life cycle of an AI intelligent application, Ryan, I’m wondering if you can talk a little bit more about how OpenVINO works with OpenShift, and where in this journey does it come in?

Ryan Loney: Yeah. So I'll go ahead and share my screen now and just show you what it looks like. And for those who can't see the screen, I'll try my best to describe. So I'm logged into the OpenShift console, and this is an OpenShift cluster that's hosted on AWS. And you can see that I've got the OpenVINO toolkit operator installed. And so OpenShift provides this great operator framework for us to just directly integrate OpenVINO and make it accessible through this graphical interface.

So I'll start maybe from the deployment part at the end here, and work backwards. But Audrey mentioned deploying the models and integrating with applications. So once I have this OpenVINO operator installed, I can just create what's called a model server. And so this is going to take my model or models that my data scientists have trained and optimized with OpenVINO and give you an API endpoint that you can connect to from your applications in OpenShift.

So, again, the great thing about this is the ability to just have a graphical interface, so when I create a new instance of this model server, I can just type in the box and give it a name to describe what it's doing. So maybe this is a product classification for retail. So maybe I'd say product classifier, and give it a name. And then it's going to pull the image that we publish to Red Hat's registry, which has all of our dependencies for OpenVINO to run, with the Intel® software libraries baked into the image. And then if I want to do some customization, like changing where I'm pulling my model from, or doing a multimodel deployment versus single, I can do that through this drop-down menu.

And the way that this deployment works is we use what's called a model repository. So, once the data scientists and the developer have the model ready to deploy, they can just drop it into a storage bucket, into a persistent volume in OpenShift, or pretty much any S3-compatible storage or Google Cloud storage bucket—you can just create this repository. And then every time an instance or a pod is created, it can quickly pull the model down, so you can scale this up. And so basically once I click "create," that will immediately create an instance that's serving my model, which I can scale up with something like a service mesh, using the service mesh operator, and put this into production.
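Once a model server instance like Ryan's product classifier is running, an application can call its endpoint. Here is a sketch using the ovmsclient package for OpenVINO Model Server; the service address, input tensor name, and model name are assumptions for illustration:

```python
import numpy as np
from ovmsclient import make_grpc_client  # pip install ovmsclient

# Assumed in-cluster address of the deployed model server's gRPC port.
client = make_grpc_client("product-classifier.demo.svc.cluster.local:9000")

image = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder input
prediction = client.predict(
    inputs={"input": image},          # assumed input tensor name
    model_name="product-classifier",  # assumed name in the model repository
)
print(prediction)
```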

I'll go backwards now. So we talked a little bit about optimizations. We also have a Jupyter notebook integration, so if you want some ready-to-run tutorials that show, how do I quantize my model? How do I optimize it with OpenVINO? You can do that directly in the Red Hat OpenShift data science environment, which is another operator that's available through Red Hat. It's a managed service, and I've actually already set this up, and this is sort of what it looks like. And I'll just show you the Jupyter interface. So if I wanted to learn how to quantize a model, which Audrey described, reducing the precision from FP32 to integer 8, there are some tutorials that I can run. And I will just show the output of this Jupyter notebook. It does some quantization-aware training, which takes a few minutes to run. And you can see that the throughput goes from about 1,000 frames per second to 2,200 frames per second without any significant accuracy loss. So, very minimal change in accuracy, and that's one way to compress the model and boost the performance. And there are several tutorials that show how to use OpenVINO and generate these models. And then once you have them, you can deploy them directly, like I was showing, through the OpenShift console and create an instance to serve those models in production. That's what's really great about this: if you want to just open up a notebook, we give you the tutorials to teach you how to use the tools.

At a high level with OpenVINO, when we talk about optimization, we're talking about reducing binary size, reducing memory footprint, and reducing resource consumption. OpenVINO was originally focused on just the IoT space, on the edge. But we've noticed that people care about resource consumption in the cloud just as much, if not more, when you think about how much they're spending on their cloud bill. Well, if I can apply some code to optimize my model and go from processing 1,000 frames per second to 2,200, that matters: if you think about processing video, like Audrey said, 30 FPS or 15 FPS is standard video. Being able to get this sort of for free, right? You don't have to spend more money on expensive hardware. You can process more frames per second, more video streams at the same time, and you can unlock this by using our tools.

OpenVINO also—even if you don't want to quantize the model because you want to make sure you maintain the accuracy—lets you use our tools to change from FP32 to FP16, floating point 32 to floating point 16, which reduces the model size and the memory consumption but doesn't impact the accuracy. And even if you just perform that step, or you don't perform quantization, we are doing some things under the hood, like operation fusion and convolution fusion. These are all things that give you a performance boost, reduce the latency, and increase the throughput, but they don't impact accuracy. And so those are some of the reasons why our customers are using OpenVINO: to squeeze out a little bit more performance and also reduce the resource consumption, compared to just deploying with the deep learning framework alone.
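As a sketch of that FP32-to-FP16 step: recent 2022.x Python releases expose a compress_model_transformation helper (its exact module location here is an assumption; the Model Optimizer's --data_type FP16 flag achieves the same thing at model-conversion time):

```python
from openvino.runtime import Core
from openvino.offline_transformations import compress_model_transformation

core = Core()
model = core.read_model("model.xml")  # placeholder FP32 model

# Convert FP32 weights to FP16 in place: roughly half the model size and
# memory footprint, with accuracy effectively unchanged.
compress_model_transformation(model)

compiled = core.compile_model(model, "CPU")
```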

Christina Cardoza: Great. Listening to you guys and seeing the tools in action and the entire life cycle, it’s very clear that there is a lot that goes into deploying an AI application. And luckily, the work that Intel® and Red Hat have been doing has sort of eased the burden for businesses and developers. But, I can imagine if you’re just getting started, you’re probably trying to wrap your head around all of this and understand how you approach AI in your company, how you start an AI effort. So Audrey, I’m wondering, where is the best place to get started? How do you be successful on this AI journey?

Audrey Reznik: It's funny that you should mention that. One of my colleagues wrote an article saying that the best data science environment to work on isn't your laptop. And he was alluding to the fact that when you first start out creating some sort of model that will fit into an intelligent app, usually what data scientists will do is put everything on their laptop. Well, why do they do that? First of all, it's very easy to access. They can load whatever they want to. They can work efficiently, knowing that their environment isn't going to change because they've set it up, and they may have all their data connections added. That's really wonderful for development, maybe, but they're not looking toward the future: how do I scale something that's on my laptop? How do I share something that's on my laptop? How do I upgrade? This is where you want to move to some sort of platform, whether it's on prem or in the cloud, that's going to allow you to kind of duplicate your laptop. Okay, so Ryan was able to show that he had an OpenVINO image that had the OpenVINO libraries that were needed. It's within Python, so it had the appropriate Python libraries and packages. He was able to create something—an ephemeral IDE that he was able to use. What he didn't point out within that one environment was that he'd be able to use a GitHub repo very easily, so that he could check in his code and share his code.

When you have something that is a base environment that everybody's using, it's very easy then to take that environment and upgrade it, increase the memory, increase the CPU resources that are being used, add another managed service in. You have something that's reproducible, and that's key, because what you want to be able to do is take whatever you've created and then be able to go and deploy it successfully.

So if you're going to start your AI journey, please go ahead and try to find a platform. I don't care what kind it is. I know Red Hat and Intel® will kill me for saying that, but find a platform that will allow you to do some MLOps. So something that will allow you to explore your data. Develop, train, and deploy your model. Be able to work with your application engineers, where they could go ahead and write a UI or REST endpoints that could connect to your model. And something that will help you deploy your model where you can monitor it, manage it for drift, or even see if your model's working exactly how it's supposed to work. And then the ability to retrain. You want to be able to do all those steps very easily. I'm not going to get into GitOps pipelines and OpenShift pipelines at this point, but there has to be a way that, from the beginning to the deployment, it's all done effortlessly, and you're not trying to use chewing gum and duct tape to put things together in order to deploy to production.

Christina Cardoza: That’s a great point. And Ryan, I’m curious, once you get started on your AI journey, you have some proven projects behind you, how can you use OpenVINO, and how does Intel® by extension help you boost your efforts and continue down a path of a future with AI in your business and operations?

Ryan Loney: Yeah. So I'd say a good first step would be to go to openvino.ai. We have a lot of information about OpenVINO, how it's being used by our ISVs, our partners, and customers. And then docs.openvino.ai and the "get started" section. I know Audrey said not to do it on your laptop, but if you want to learn the OpenVINO piece, you can at least get started on your laptop and run some of our Jupyter notebooks, the same Jupyter notebooks that I was showing in the OpenShift environment. You can run those on your Red Hat Linux laptop, or your Windows laptop, and start learning about the tools and the OpenVINO piece.

But if you want to connect everything together, in the future I believe we'll have a sandbox environment that Red Hat will be providing, where you can log in and replicate what I was just showing on the screen.

But really to get started and learn, I would check out openvino.ai, check out docs.openvino.ai, and get started. And if you have an Intel® CPU and Linux, Windows, or Mac, you can start learning about our tools.

Christina Cardoza: Great. Well, this has been a great conversation and I'm sure we could go on for another hour talking about this, but unfortunately we're out of time. So I just want to take the time to thank you both for joining us today and coming on the podcast.

Ryan Loney: Thank you.

Audrey Reznik: Yeah, thank you for having us.

Christina Cardoza: And thanks to our listeners for joining us today. If you enjoyed the podcast, please support us by subscribing and rating us on your favorite podcast app. This has been the IoT Chat. We’ll be back next time with more ideas from industry leaders at the forefront of IoT design.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

About the Host

Christina Cardoza is an Editorial Director for insight.tech. Previously, she was the News Editor of the software development magazine SD Times and IT operations online publication ITOps Times. She received her bachelor’s degree in journalism from Stony Brook University, and has been writing about software development and technology throughout her entire career.
