

The Full Scope of Deploying Industrial AI at the Edge

Industrial AI

The smart-manufacturing space has evolved rapidly over the past few years to keep up with the demands of the digital era. Edge computing is a big part of that digital-transformation journey. But edge isn’t a fixed destination; it’s part of the process.

Businesses may still need signposts on this journey, though. So who has the roadmaps? And how might those businesses know when they’ve actually arrived where they need to be? Blake Kerrigan, General Manager of the Global ThinkEDGE Business Group at Lenovo, a global leader in high-performance computing, and Jason Shepherd, Vice President of Ecosystem at ZEDEDA, a provider of IoT and edge-computing services, confirm that there’s no one-size-fits-all approach.

They discuss the orchestration of edge computing, bringing what’s best about the public cloud experience right to the edge, and the relationship between cloud and edge computing in the first place.

What does a digital transformation in the manufacturing space look like these days?

Blake Kerrigan: For the past 15 to 20 years most industrial customers have been focused on automation, but some of the biggest trends we’re seeing now are around computer vision and AI use cases. Other trends I’m seeing a lot in manufacturing and distribution are things like defect detection and safety applications.

The question is: How do you create efficiencies in the processes that already exist? We’re starting to see unique solutions, and they’re getting easier and easier for our customers to adopt.

How does this change the role of edge computing and the cloud?

Jason Shepherd: The only people who think that sending raw video directly to the cloud is a good idea are the ones who want to sell you internet connectivity. With computer vision, the whole point is to be able to look at live camera or video streams at the edge, where they can be continuously monitored, and intelligence can be built in to trigger human intervention if needed.
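The edge-first pattern Shepherd describes—inspect streams locally, escalate only when something needs attention—can be sketched roughly like this. Every name here (the frame format, the `score_frame` stand-in, the alert payload) is a hypothetical placeholder, not a real product API:

```python
# Minimal sketch of edge-side video monitoring: score each frame locally
# and only escalate (alert a human / upload metadata) past a threshold.
# All names and structures are illustrative placeholders.

ALERT_THRESHOLD = 0.8  # confidence above which a human is notified

def score_frame(frame):
    """Stand-in for a local inference call (e.g., a compiled vision model)."""
    # For this sketch, the "frame" simply carries a precomputed anomaly score.
    return frame["anomaly_score"]

def monitor(frames):
    """Process frames at the edge; return only events worth escalating."""
    alerts = []
    for frame in frames:
        score = score_frame(frame)
        if score >= ALERT_THRESHOLD:
            # Raw video never leaves the site; only compact event metadata does.
            alerts.append({"frame_id": frame["id"], "score": score})
    return alerts

# Example: three frames, one of which crosses the threshold.
stream = [
    {"id": 1, "anomaly_score": 0.1},
    {"id": 2, "anomaly_score": 0.92},
    {"id": 3, "anomaly_score": 0.4},
]
print(monitor(stream))  # -> [{'frame_id': 2, 'score': 0.92}]
```

The design point is the bandwidth asymmetry: continuous raw video stays on-prem, and only the rare, small alert records cross the network.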

What’s the most successful way for manufacturers to navigate this edge journey?

Jason Shepherd: Edge is a continuum—from really constrained devices up through on-prem. Eventually you get to the cloud, and running workloads across that continuum is a balance of performance, cost, security, and latency concerns.

For manufacturers, first and foremost, it’s important to understand that it is a continuum, then to understand the different trade-offs. If you’re in a secure data center, it’s not the same as being on the shop floor—the security needs are different, for example. Navigating the landscape is the first problem.

When you get into actual deployment, always start with a use case, then do a POC. At this stage we see a lot of experimentation. But taking the lab experiment into the real world can be really challenging—camera angles change, lighting changes, contexts switch, etc.

The main thing is to break down the problem, and separate infrastructure investment from investment in the application plane. Work with vendors that are architecting for flexibility, and evolve from there. Eventually it comes down to pairing domain expertise with consistent infrastructure—like what we’re doing with Lenovo, ZEDEDA, and Intel®.

Blake Kerrigan: You can build something in a lab, and typically the last thing an engineer is going to think about is the cost of developing or deploying the solution. The biggest inhibitors to scale are deployment, life-cycle management, and transitioning from one silicon platform to another over time.

The first step is understanding what kind of business outcome you want to drive, and then being conscious of what costs are associated with that outcome. To select the right hardware, the customer has to understand what the iterations of the program are throughout the solution’s life cycle. At Lenovo, we work with people on solution architecture and on thinking about what type of resources they need today—and then how that scales tomorrow, next week, next year, and over the next five years.

Tell us more about how to approach edge computing.

Jason Shepherd: There are a lot of special, purpose-built vertical solutions. With any new market, I always say, it goes vertical before it goes horizontal. It’s about domain knowledge.

What’s new is that everything is becoming software defined—where you abstract the applications from the infrastructure. In the manufacturing world, control systems have historically been very closed, which is a play to create stickiness for that control supplier. And, of course, there are implications around not being tightly controlled in terms of safety and process uptime.

What’s happening with edge is that we’re able to take public cloud elements—platform independence, cloud-native development, continuous delivery of software that’s always updating and innovating—and we’re able to shift those tools back to the edge. Basically, we’re taking the public cloud experience, and extending it right to the box on the shop floor.

At ZEDEDA, while we help extend those tools from a management and security standpoint, we also have to account for the fact that even though the same principles are in play, it’s not happening in a physically secure data center. When you’re in a data center, you have a defined network perimeter; when you’re not, we have to assume you’re deployed on untrusted networks. Also, outside the data center, you have to assume you’re going to lose connectivity to the cloud at times, and you’ve got to be able to withstand that. One-size-fits-all doesn’t come into play here.

So when should you use the cloud versus the edge?

Blake Kerrigan: The cloud means different things to different people. At Lenovo we feel that, ultimately, edge will essentially become an extension of the cloud. Edge computing is all about getting meaningful data to either store or do more intensive AI on; what we’re trying to do is comb down the amount of uneventful or un-insightful data.

There are really two main things to consider. The first one is orchestration: How can I remotely create and orchestrate an environment where I can manage applications off the site? And the second one is—to make these models better over time—doing the initial training. Training is a big part of AI and computer vision, and one that’s woefully underestimated in terms of the amount of resources and time it takes. One of the most effective ways to do it is collaboratively in the cloud.

Let’s use defect detection as an example. Let’s say you have 50 different plants around the United States, and every single one of them has a defect-detection computer vision application running on the factory floor. Ultimately, you’ll want to share the training and knowledge you’ve acquired from one factory to another. And the only real, practical way to do that is going to be in the cloud.

So I do think there’s a place for the cloud when it comes to edge computing and, more specifically, AI at the edge—in the form of crunching big data that’s derived from edge-computed or edge-analyzed data. And then, in addition, training AI workloads to be redistributed back to the edge to become more efficient and more impactful and insightful to users.
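The split Kerrigan describes—each plant reducing its own detections locally, the cloud pooling those results across sites to improve the shared model—might look schematically like this. The plant summaries and their fields are invented purely for illustration:

```python
# Sketch of the edge/cloud split for multi-plant defect detection:
# each plant summarizes its detections locally, and only the compact
# summaries travel to the cloud, where they are pooled for retraining.
# All structures here are illustrative, not a real pipeline.

def summarize_plant(detections):
    """Edge-side: reduce a batch of detections to a compact summary."""
    total = len(detections)
    defects = sum(1 for d in detections if d["is_defect"])
    return {"inspected": total, "defects": defects}

def pool_for_training(summaries):
    """Cloud-side: aggregate summaries from every plant into one view."""
    return {
        "inspected": sum(s["inspected"] for s in summaries),
        "defects": sum(s["defects"] for s in summaries),
    }

# Example: two plants report their local summaries to the cloud.
plant_a = summarize_plant([{"is_defect": True}, {"is_defect": False}])
plant_b = summarize_plant([{"is_defect": False}] * 3)
print(pool_for_training([plant_a, plant_b]))
# -> {'inspected': 5, 'defects': 1}
```

The aggregated view is what would feed cloud-side retraining, with the improved model then redistributed back down to each plant.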

Jason Shepherd: What we say at ZEDEDA is: The edge is the last cloud to build. It’s the fringes of what the cloud is. There are three buckets there. One is cloud centric, with lightweight edge computing and then a lot of heavy crunching in the cloud. A second one uses the power of the cloud to train models, and then deploys, say, inferencing models to the edge for local action. So it’s a cloud-supported, or cloud-assisted, model. And third, there’s an edge-centric model, where there might be training in the cloud, but all the heavy lifting on the data is happening on-prem. So, as Blake said, it’s not one-size-fits-all.

If manufacturers lack the proper IT expertise, what tools or technologies might help?

Jason Shepherd: Is a fair answer ZEDEDA?

It really is about finding the right tools, and then applying domain knowledge on top. There are a lot of people who have domain knowledge—the experts are the folks on the floor. But when you’re trying to deploy in the real world, you don’t usually have the staff that’s used to scripting and working in the data center space. Plus, the scale factor is a lot bigger. That’s why ZEDEDA exists: to just make that process easier and, again, to provide the public cloud experience all the way down into the field.

Where do Lenovo and its partnership with Intel® fit into this space?

Blake Kerrigan: The value of the relationship with Intel goes beyond just edge computing, and Intel is our biggest and strongest partner from a silicon perspective when it comes to edge computing. It holds a lot of legacy ground in the embedded space, the industrial PC space. But the other side of it is that Intel continues to be at the cutting edge. It continues to make investments in feature functions that are important at the edge—not just in data center, and not just in PC.

OpenVINO sits within the larger ecosystem of tools from Intel, but another one I really like—because it helps our customers get started quickly without having to send them four or five different machines—is Intel DevCloud. It lets those customers get started in a development environment that is essentially cloud based. They can control all sorts of different parameters, and then run applications and workloads in the environment. This creates efficiencies in terms of time to market or time to deployment.

At Lenovo we want to be able to create the most frictionless experience for a customer trying to deploy infrastructure at the edge, which is why Lenovo and ZEDEDA really complement each other in their alignment with Intel.

Jason Shepherd: ZEDEDA is basically a SaaS company—all software, but coming from the hardware space. And hardware is hard, so partnering with Lenovo makes things simpler. It’s important to work with people who are building reliable infrastructure.

Any final takeaways for the edge computing journey?

Blake Kerrigan: As Jason mentioned, hardware can be hard. I think a lot of people start there, but it’s not necessarily the best first step—though I say that coming from a hardware company. But at Lenovo we still do want to be a part of that first step on the journey. Reach out to our specialists and see how we can help you understand what the potential roadblocks are. And then we can also open you up to our ecosystem of partners—whether that’s Intel or ZEDEDA or others.

Bring us your problems, bring us your biggest and most difficult problems, and let us help you design, implement, deploy, and realize those insights and outcomes.

Jason Shepherd: It’s all about ecosystem. Invest in community so you can focus on more value.

This isn’t about free; it is about making money and all that. But it is also very much about partnership.

Related Content

To learn more about edge computing in manufacturing, listen to Manufacturers Unlock AI at the Edge: With Lenovo and ZEDEDA and read Cloud Native Brings Computer Vision to the Critical Edge. For the latest innovations from Lenovo and ZEDEDA, follow them on Twitter at @Lenovo and @ZededaEdge, and LinkedIn at Lenovo and Zededaedge.


This article was edited by Erin Noble, copy editor.

About the Author

Christina Cardoza is an Editorial Director. Previously, she was the News Editor of the software development magazine SD Times and the IT operations online publication ITOps Times. She received her bachelor’s degree in journalism from Stony Brook University, and has been writing about software development and technology throughout her career.
