
PODCAST

Manufacturers Unlock AI at the Edge: With Lenovo and ZEDEDA

AI at the Edge

Computer vision has already cemented its place in smart manufacturing and industrial IoT solutions. But to unlock the full potential of these applications, more and more processing needs to be moved to the edge. The problem is that there is no one-size-fits-all approach when it comes to edge computing.

In this podcast, we will talk about the different “flavors” of edge computing, how to successfully deploy AI at the edge, and the ongoing role of the cloud.


Our Guests: Lenovo and ZEDEDA

Our guests this episode are Blake Kerrigan, General Manager of the Global ThinkEDGE Business Group at Lenovo, a global leader in high-performance computing, and Jason Shepherd, Vice President of Ecosystem at ZEDEDA, a provider of IoT and edge computing services.

Blake has worked in the industrial IoT space for many years. Prior experience includes companies like Sierra Wireless and Numerex, where he was responsible for product delivery, customer success, and solution development and delivery. At Lenovo, he and his team are focused on edge computing, go-to-market, product development, and product strategies.

Jason has a proven track record as a thought leader in the IoT and edge computing space. Before joining ZEDEDA, he worked for Dell Technologies as a Technology Strategist, Director of IoT Strategy and Partnerships, and CTO of IoT and Edge Computing. Additionally, he helped build the Dell IoT Solutions Partner Program, which received the IoT Breakthrough Award for Partner Ecosystem of the Year in 2017 and 2018.

Podcast Topics

Jason and Blake answer our questions about:

  • (2:50) Recent transformations in the manufacturing industry
  • (5:04) The role of edge computing in industrial IoT solutions
  • (6:52) Successfully deploying AI at the edge
  • (10:20) The tools and technologies for edge computing
  • (18:55) When to use (and not use) the cloud
  • (23:05) Having the proper AI expertise and IT support
  • (31:11) Future-proofing manufacturing process and strategy

Related Content

To learn more about edge computing in manufacturing, read The Full Scope of Deploying Industrial AI at the Edge and Cloud Native Brings Computer Vision to the Critical Edge. For the latest innovations from Lenovo and ZEDEDA, follow them on Twitter at @Lenovo and @ZededaEdge, and on LinkedIn at Lenovo and Zededaedge.

 

This podcast was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Transcript

Christina Cardoza: Hello, and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Associate Editorial Director of insight.tech. And today we’re talking about edge computing in industrial environments with Jason Shepherd from ZEDEDA and Blake Kerrigan from Lenovo. But before we jump into our conversation, let’s get to know our guests. Jason, I’ll start with you. Welcome to the show.

Jason Shepherd: Thanks for having me.

Christina Cardoza: Yeah. Thanks for being here. What can you tell us about ZEDEDA and your role there?

Jason Shepherd: So ZEDEDA is—we’re all focused on orchestration of edge computing—so, management and security, remotely, of assets out in the field, deploying applications, understanding the state of the hardware. We take the data center principles and extend them out as far as you can into the field to enable cloud-native development, while also supporting legacy assets. I lead our ecosystem, so I work a lot with strategic partners and industry consortia, serving as our field CTO. And one of my mottoes is, “If it’s fuzzy, I’m on it.” I always find myself on the front end of emerging technologies, so, hence edge right now.

Christina Cardoza: Great. Can’t wait to dig a little bit deeper into that. And Blake, thanks for joining us today.

Blake Kerrigan: Yeah, thanks for having me.

Christina Cardoza: So what can you tell us about Lenovo and what you’re doing there?

Blake Kerrigan: Well, look, I think most people know who Lenovo is—as you know, one of the largest personal compute, and mobile compute, and data center compute hardware providers in the world. But my role essentially here at Lenovo is I manage our edge computing practice.

So here at Lenovo we’re hyperfocused on digital transformation as a whole, for most enterprises. And we feel that edge computing is essentially core to our customers’ journey. And so I’ve been here for about three years. I’m based in Raleigh, North Carolina, and my team and I are uniquely focused not just on edge computing, but also on defining what our strategy is as a company. You know, how do we develop products differently for use cases outside of traditional data center or personal compute? So mainly go-to-market, product development, and product strategy.

Christina Cardoza: Perfect. I love how you mentioned how edge computing is part of a manufacturer’s digital transformation journey. I think that’s the perfect place to kick off this conversation today. No surprise to you two that the manufacturing space has been rapidly evolving over the last couple of years to keep up with the demands of the digital era. So Blake, I’m wondering if you can walk us through what some of those transformations in manufacturing have looked like recently?

Blake Kerrigan: Well, I think recently they look a lot different, even just in the last two years—things have had to change quite a bit, you know. Lenovo being a large manufacturer, this is a space that’s very close to home for us. You know, probably some of the largest trends that we see are around computer vision and AI use cases.

So, for the last, probably 15 to 20 years, I think most industrial customers have been uniquely focused around automation—whether it’s a simple process around manufacturing or some sort of a logistics optimization or automation process.

And today, what we’re starting to see is the use of AI in a more binary state, in terms of how do you create more efficiencies in some of those processes that already exist. But when you layer computer vision applications and solutions on top of that, we’re starting to see the unlocking of all sorts of new insights that we didn’t really have a way to capture before with the sensor technology that existed in the world.

So some of the trends that I see a lot in manufacturing, and even in distribution, are things like defect detection—there’s all sorts of different safety applications. Usually these were done as kind of point solutions in the past, and with the adoption and transition from more purpose-built compute to general-purpose compute for AI and computer vision, we start to see a lot of unique types of solutions that we’ve never seen before, and they’re getting easier and easier for our customers to adopt.

Christina Cardoza: That’s great. Here at insight.tech, we’ve definitely been seeing all of those use cases, and the opportunity with computer vision and AI just expanding those opportunities for manufacturers. Traditionally they’ve been taking all that data and processing it in the cloud, but what we’ve been seeing is even that’s not enough, or not fast enough, to get that real time insight and to make informed decisions. So, Jason, can you tell us more about why edge computing is playing a role in this now?

Jason Shepherd: Well, I mean, the only people that think you should send raw video directly to the cloud are the people that sell you internet connectivity. It’s very expensive to stream, especially high-res video, straight over a wide area connection. So, clearly with computer vision, the whole point at the edge is to look at live camera streams or video streams. It could be thermal imaging, it could be any number of things. You look for an event or anomalies in the moment, and only trigger those events over those more expensive connections.

I mean, it goes from manufacturing through, of course, all different types of use cases, but it used to be that you had someone just sitting there looking at something, who would call somebody if something happened. And now you can have it continuously monitored, with the intelligence built in to trigger that call.

So edge is key there. I mean, the same thing with, like, 5G is a big trend in manufacturing. I know we’re talking about computer vision now, but every new technology is like, “Oh, you know, this is short lived.” Well, 5G actually drives more edge computing too, because you’ve got a super, super fast local connection, but the same pipe upstream. And so we’re going to see more use cases too, where you mash up these kinds of private 5G small cells in a factory with computer vision. And then of course other sensing technologies. But yeah, we’re just kind of at the beginning of it as it pertains to edge, but there’s just so many possibilities with it.

Christina Cardoza: It’s funny that you mentioned the only people that talk about processing in the cloud are the people that it would benefit most, but I think that’s also a real issue in the industry, is that there’s so many people telling you so many different things, and it could be hard to cut through all of the noise.

So Jason, can you walk through what are some of the challenges that manufacturers are facing when going on an edge computing journey, and how can they successfully move to the edge?

Jason Shepherd: Yeah. I mean, in general, like you said, it’s the hammer-nail syndrome. Everyone tells you that they can do everything. I mean, edge is a continuum, from some really constrained devices, up through on-prem or the factory floor—say the shop floor—up into metro and regional data centers. Eventually you get to the cloud, and where you run workloads across that continuum is basically a balance of performance, cost, security, and latency concerns. And I think people, first and foremost, are just confused about what the edge is. There’s a lot of edge washing going on right now: whatever the vendor sells and wherever they sell it, that’s the edge.

So I think for manufacturers, first and foremost, it’s understanding that it’s a continuum, understanding that there are different trade-offs inherently. If you’re in a secure data center, it’s not the same as if you’re on the shop floor, even though you want to use the same principles in terms of containers and VMs and things like that. Security needs are different. Then there are concerns around getting locked in. You know, everybody loves the easy button until you get the bill. The whole thing with the cloud model is to make it really easy to get data in, but then very expensive to get data out or send it somewhere else. And that’s another reason why we’re seeing this shift. It’s not just about bandwidth and latency and security and all the reasons you see in ads.

So long story short, just navigating the landscape is the first problem. Then you get into actually deploying things. These always start with a use case—say I’m trying to do quality control or improve worker safety using computer vision. It always starts with a POC: I figure out a use case, then I’m doing a POC. At this stage, people aren’t thinking about management and security and deploying in the real world. They’re thinking about an app, and we see a lot of experimentation with computer vision applications, and there’s really cool innovation happening. But taking the lab experiment into the real world is also really challenging—camera angles change, lighting changes, contexts switch. Then there’s getting the infrastructure and the applications out there, and continuously updating those models remotely. These are the infrastructure things that I think are really important.

I think the main thing is to break down the problem: separate your investments in infrastructure from the application plane—consistent infrastructure like we’re doing with Lenovo and Intel® and ZEDEDA. We’re obviously focused on infrastructure and building it in a modular way, so that as you evolve, you can build new applications and take in different types of domain expertise. Eventually it’s about domain expertise on top of consistent infrastructure. And so I think the key for manufacturers is to break down the problem, work with vendors that are architecting for flexibility, and then evolve from there, because no one knows all the answers right now. You just want to build in that future proofing.

Christina Cardoza: That’s a great point that you make: edge is a continuum. There’s no one-size-fits-all approach. There’s no one way of doing manufacturing. Everyone’s building different things and applying technology in different ways. So on that note, Blake, can you talk about how manufacturers can approach this, what they need to be looking at, and how they decide what technologies or path is going to be the best for them?

Blake Kerrigan: Yeah. You know, I mean, the first approach I think is—well, even before you get to the POC, I think the biggest challenge is just understanding what kind of business outcome you want to drive with the particular POC, because you also have to scale the business case.

And one of the challenges is, you can build something in a lab, and typically the last thing an engineer’s going to think about is cost when they go to develop or deploy the solution. It’s an exponential factor. In my opinion, and I’m sure Jason would agree with me, the biggest inhibitors to scale are deployment, management, life cycle, end of life, and transitioning from one silicon to another over time as products come in and out of their own life cycles.

So I think the first step is making sure that you understand what kind of business outcome you want to drive, and then keeping a conscious understanding of what the costs are associated with that. And that’s something that we at Lenovo—we work with people more on solution architecture and thinking about what type of resources do you need today? And then, how does that scale tomorrow, next week, and next year, and the next five years? So that’s critical.

I also think it’s important to understand that, at least in this edge computing spectrum today, there’s a wide array of different types of resources or hardware platforms that you could choose from, some of which may have better performance. Others may have better longevity or reliability in some terms, but I think it’s important for a customer to understand that, in order to select the right hardware, you kind of have to understand what are the iterations of the program throughout the life cycle of whatever solution you’re trying to implement.

So those are the first things, and all what I would call fundamentals when you approach some of these new solutions. You know, there’s a lot of tools out there, because if you think about it, the PCs, or personal computers, that Lenovo sells in our core commercial business today are based on user personas. So, if you’re an engineering student or professional, you may use a workstation machine with great graphics and good performance. If you’re a mobile executive, you’re probably using a ThinkPad and traveling around the world—you need that mobility. Or if you’re a task-based worker, you might have a desktop computer.

In edge computing there are no personas, and the applications are endless. And I would say I think ZEDEDA is proof that there is no standard ecosystem of applications. So you have to be able to build in that elasticity, and you can do that with ZEDEDA and Lenovo, frankly.

Christina Cardoza: Now I want to expand a little bit on some of those—the hardware and software platforms that you just mentioned. Jason, can you talk a little bit more about how you deploy AI at the edge and how you approach edge computing? What tools and technologies are you seeing manufacturers using to approach this?

Jason Shepherd: There’s obviously a lot of special-built, purpose-built solutions—vertical solutions. In any new market, I always say it goes vertical before it goes horizontal. It is, as I mentioned, about domain knowledge. And it’s attractive upfront to buy a turnkey solution that has everything tied together, from someone who knows everything you need to know about quality control and does everything for you.

And there have been computer vision solutions for a long time that are more proprietary—kind of closed systems for things like quality control on the factory floor. That’s not new; what’s new is everything becoming software defined, where you abstract the applications from the infrastructure. In terms of tools, if you look at it historically—I mean, constrained devices, really, really low-end compute, power sensors, kind of lightweight actuators, things like that—those are inherently so constrained that they run embedded software.

In the manufacturing world, control systems have historically been very closed. And that’s a play to create stickiness for that control supplier. And of course there’s implications if it’s not tightly controlled in terms of safety and process uptime, okay? So that’s kind of like the world that it’s been for a while.

Meanwhile, in the IT space we’ve been kind of shifting, and the pendulum swings between centralized and decentralized. Over the past 10 years we’ve seen the public cloud grow. Why do people like public cloud? Because it basically abstracts all of the complexity, and I can just sign up and just—I’m looking at resources, compute storage, networking, and just start deploying apps and go to town.

What’s happening with edge, with the way we’ve evolved technologies and the compute power that’s being enabled, of course, by Intel and a portfolio like Lenovo’s, is that we are able to take those public cloud elements—this platform independence, cloud-native development, continuous delivery of software, always updating and innovating—and we’re able to use those tools and shift them back to the edge. And there’s a certain footprint that you can do this with. It goes all the way to the point where basically we’re taking the public cloud experience and extending it right to the manufacturing process that’s always been there, to where now we can get that public cloud experience, but literally on a box on the shop floor. I don’t need to bootstrap everything one size fits all: I want to deploy an AI model, I want to assign it to this GPU, I want to add this protocol-normalization software, I want to move my SCADA software and my historian onto the same box. It’s this notion of workload consolidation. It is using these tools and principles we’ve developed in the public cloud, but bringing them down.

Now, what we do at ZEDEDA that’s different is, while we help extend those tools from a management standpoint and a security standpoint, we have to account for the fact that even though it’s the same principles, it’s not in a physically secure data center. We have to assume that someone can walk up and start trying to hack on that box. When you’re in a data center, you have a defined network perimeter; we have to assume that you’re deployed on untrusted networks. So the way our solution is architected—and there’s a bunch of different tool sets out there—is to take the public cloud experience and extend it out as far as you can, to where it starts to converge with the historical process stuff in the field, but you build a zero-trust model around it, assuming that you’re not locked up in a data center. When you’re outside of the data center, you have to assume you’re going to lose connectivity to the cloud at times, so you’ve got to be able to withstand that. So this is where one size fits all doesn’t come into play.

There are great data center tools out there for scaling. They’re evolving, with Kubernetes coming out of the cloud and down, but they start to fall apart a bit when you get out of a traditional data center. That’s where solutions like the ones we’re working on, with broader community pickup, come in. Then eventually you get into constrained devices, and it’s inherently death by a thousand cuts—everything’s custom. And then, of course, we’ll talk a little bit about some of the frameworks and the AI tools, but as Blake mentioned, when you get to the real world I’m very much stressing this foundational infrastructure, this notion of how you manage the life cycle. How do you deploy it? And how do you do it without a bunch of IT skill sets running around everywhere? Because you don’t have those skills everywhere. It’s got to be usable and definitely secure—and balancing security with usability is another big one, because if you make it too locked down, no one wants to use it, or they start to bypass things. But I think the key is the tools are there—you just need to invest in the right ones and realize that it is that continuum that we’re talking about.

Christina Cardoza: Now I want to touch on some points you made about the cloud. The cloud isn’t going anywhere, right? And there may be some things that manufacturers want to do that may not make sense to do at the edge. So Blake, can you talk a little bit about the relationship between cloud and edge computing? What the ongoing role of cloud is in edge computing, and what sort of makes sense from a manufacturer’s perspective to do in the cloud and to do at the edge?

Blake Kerrigan: Yeah. I mean, in line with what Jason was just talking about, we see that ultimately the edge essentially becomes an extension of the cloud. You know, the cloud means a lot of different things to a lot of different people, but if we’re talking about a major CSP, or cloud service provider, I think the central role they’ll play in the future is—obviously with edge computing, it’s all about getting meaningful, insightful data that you would want to either store or do more intensive AI on, which may happen in a hyperscale data center when the data gets so big that it can’t be crunched locally. But essentially what we are doing is trying to pare down the amount of uneventful or uninsightful data.

But I do think there’s value once you get the meaningful data in the cloud. If, as an example, we were talking about defect detection: let’s say you have 50 different plants around the United States and every single one of them has a defect-detection computer vision application running on the factory floor. Well, ultimately you want to share the training and knowledge that you have from one factory to another, and the only real practical way to do that is in the cloud.

So for me, there are really two main purposes. The first one is really around orchestration: how can I remotely orchestrate and create an environment where I can manage those applications from outside the site—not at the edge, but in the cloud? And the other one is, in order to make these models better over time, you do have to train them initially. That’s a big part of AI and computer vision that, back to our earlier point, is probably woefully underestimated in terms of the amount of resources and time it takes.

One of the most effective ways to do that is in collaboration in the cloud. So I do think there’s a place for the cloud when it comes to edge computing and, more specifically, AI at the edge, in the form of crunching big data that’s derived from edge-computed or edge-analyzed data. And then the other side of that is training of AI workloads to be then redistributed back to the edge to become more efficient and more impactful, more insightful to the users.

Jason Shepherd: Yeah, definitely. One way I would summarize it is there are kind of three buckets. One is cloud centric, where maybe I’m doing light preprocessing at the edge, normalizing IoT data—so I’m doing lightweight edge computing, so to speak—and then I’m doing a lot of the heavy crunching in the cloud. So that’s one. Another one, which Blake mentioned, is where I’m using the power of the cloud to train models, and then I’m deploying, say, inferencing models to the edge for local action. That’s kind of a cloud-supported or cloud-assisted model. And then there’s an edge-centric model, where I’m doing all the heavy lifting on the data locally—maybe I’m even just keeping my data on-prem. I might still be training in the cloud or whatnot, but then I just do orchestration from the cloud, because it’s easier to do that over wide and remote areas, while the data stays in location—because maybe I’ve got data sovereignty issues or things like that.

So it’s exactly what Blake said. It’s not one size fits all, but that’s one framework to look at: where is the centricity in terms of the processing? And, of course, the cloud helps support it. I mean, we always say at ZEDEDA, the edge is the last cloud to build; it’s basically just the fringes of what the cloud is. It’s a little abstract—the line is just becoming more gray.

Christina Cardoza: Now, going back to a point you made earlier, Jason, manufacturers don’t always have the IT staff on hand or the IT expertise to do all of this. So I know there’s no silver bullet tools out there, but are there any tools and technologies that you can mention that may help them on this journey, especially if they’re lacking the dedicated IT staff that it takes to do all of this?

Jason Shepherd: Is “ZEDEDA” a fair answer?

Blake Kerrigan: That’s what I was going to say.

Jason Shepherd: I mean, you know, let’s face it. So, again, there’s a lot of people that have the domain knowledge—the experts are the folks on the floor and whatnot, not the folks that run the data center. I mean, everyone’s an expert in their own right. And that’s why a lot of these different tool sets are becoming more democratized. I mean, you look at public cloud—it’s attractive because I can sign up, and I might not know anything about IT, but I can start playing with apps, and maybe I start getting into the OpenVINO™ community and working with that community. There’s a lot of resources out there for those initial experimentations. But when you get into trying to deploy in the real world, you don’t have the staff out there that’s used to scripting and doing data center stuff and all that. Plus, the scale factor is a lot bigger. That’s why tools like ours exist: to make that much easier and, again, to give you the public cloud experience, but all the way down out into the field, delivering the right security models and all that.

You know, there’s a lot of other tools out there—we’ll talk more about OpenVINO, but there’s also the whole low-code/no-code platform space. It really is about finding the right tools and then applying domain knowledge on top. A friend of mine used to work on factory floors, coming from the IT space. You bring all the data science people in and the AI frameworks and yada yada, and then you’ve got the person that’s been on the factory floor for 30 years, who knows, “Okay, when this happens, yeah, it’s cool, don’t worry about it. Oh, that’s bad.” And so literally they brought these people together, and the data scientists had to be told by the domain expert, “Well, here’s how you program it,” because they don’t know about the domain stuff. And at the end, they called it “Bradalytics”—the guy’s name is Brad. And so we got Bradalytics on the floor. It’s important to bring the right tools that simplify things together with the domain knowledge.

Christina Cardoza: Now you mentioned OpenVINO. I should note that insight.tech and the IoT Chat are Intel publications. So Blake, I want to turn the conversation to you a little bit, since Jason mentioned ZEDEDA, to learn a little bit more about where Lenovo fits in this space, but also about how you work with Intel and what the value of that partnership has been.

Blake Kerrigan: Yeah, look, the value of the relationship goes beyond just edge computing, obviously. Intel is our biggest and strongest partner from a silicon perspective when it comes to edge computing. It’s interesting, because Intel holds a lot of legacy ground in the embedded space—the industrial PC space—and edge computing is more or less a derivative, an evolution, of that. But in working with Intel, a couple of things come to mind, one of which is “works with,” right? Most applications, most ISVs, most integrators are familiar with x86 architecture and have worked with it for years. So that’s one thing.

The other side of it is Intel continues to be at the cutting edge of this. They continue to make investments in feature functions that are important at the edge and not just in data center and not just in PC. Some of those are silicon based, whether we’re talking about large core, small core architectures or we’re thinking about integrated GPUs, which are extremely interesting at the edge where you have constraints on cost, more specifically.

Some of the other areas where I feel like our customers understand that “better together” story are, number one, around OpenVINO. So if you’re trying to port AI workloads that have been trained and developed on some sort of a discrete GPU system, which isn’t really optimized to run at the edge, you can port these AI applications over to, and optimize them for, an integrated GPU option like you have with Intel. So that’s very important from a TCO and ROI perspective.

I talked earlier about what kind of outcome you want to derive. That’s typically driven by cost, or an increase in revenue, or an increase in safety. And in order to do that, you have to be extremely conscious of what those costs are—not just with the deployment, but also in the hardware itself. OpenVINO sits within a larger ecosystem of tools from Intel, and one of the ones I really like, because it helps our customers get started quickly, is Intel DevCloud. What that essentially allows us to do is, instead of sending four or five different machines to a customer, we let them get started in a development environment that is essentially cloud based. This could be in the cloud or on-prem, depending on what type of sovereignty issues or security requirements you might have. But this allows a customer to basically emulate, if you will, almost real-world scenarios. They can control all sorts of different parameters and run their applications and their workloads in this environment. So, obviously, that creates efficiencies in terms of time to market, or time to deployment, for our customers.

You know, once our customers use some of these tools to get ready for that first POC, and they go through the POC and realize those objectives, the Lenovo value proposition is pretty straightforward. We provide very secure, highly reliable hardware in over 180 markets around the globe. There are very few companies in the world that can make that statement. And that’s what we’re trying to bring to the edge computing market, because we understand our customers are going to want to deploy systems in unsecure or very remote places. That’s why Lenovo’s DNA lends itself to being a real player in this edge computing space. So when you think about getting to scale—when I want to deploy hundreds of factories with thousands of nodes and hundreds of different AI models—you’re going to want partners that can provide things like complete root-of-trust provisioning. You’re going to want to make sure they have a trusted supplier program, or in other words a transparent supply chain. And then you’re also going to want a partner that can help you with factory imaging—making sure that we can provide the right configuration out of the factory, so you don’t have to land products in a landing zone for imaging, either within your own company as a manufacturer or with some third party who, as expected, would want to create a business around just imaging your machines. So, with Lenovo we want to create the most frictionless experience for a customer who is trying to deploy infrastructure at the edge, which is why Lenovo and ZEDEDA really complement each other in our alignment with Intel.

Jason Shepherd: Yeah. I’ll say that we’re basically a SaaS company—it’s all software—but we come from the hardware space, and I can be the first to say hardware is hard. Partnering with Lenovo makes that simple, especially now with the supply chain issues we’ve all been going through. You’ve got to find a trusted supplier that can help simplify all that complexity, and of course make things that are reliable. I mean, we see a lot of people throwing Raspberry Pis out in the field, but, sure, it was a hundred bucks; once you drive a truck out to service it, you just spent a thousand. So, yeah, I think it’s important to work with people that are building reliable infrastructure.

Christina Cardoza: Now, a big point you made at the beginning, Jason, was that customers are afraid of getting locked into a particular technology or vendor. So when you’re choosing tools and partners, how do you make sure they’re going to not only meet the needs you have today, but also be able to scale and change as time goes on?

Jason Shepherd: Yeah, I mean, I think that goes back to some of the things we’ve touched on, this shift from proprietary systems to more open systems. We’ve seen this throughout technology. It used to be plain old telephone systems—you know, POTS—and then all of a sudden we got VoIP; that’s the transition from facilities-led to more IT-led, but working together. Same with CCTV to IP-based cameras. We’re in that transition period now, where we’re taking these proprietary, purpose-built technologies, democratizing them, and opening them up.

And so one way to avoid getting locked in, as we’ve been saying, is to separate the infrastructure plane from the application plane. Once you get tied into a full vertical stack, it sounds great: you’re doing everything for me, from analytics to management and security and whatever. But you just got locked in. If instead you decouple yourself with edge infrastructure as close to the data as possible, you stay flexible; this is why ZEDEDA uses an open foundation.

Of course, the Lenovo portfolio is agnostic to data and to application stacks—super flexible. If you decouple yourself from any one cloud as close to the source of data as possible, you’re a free agent to send your data wherever. If you decide to go to one public cloud, great, have at it. But once you get the bill, you’re probably going to want to figure out a multicloud strategy. So that’s one key.
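The "free agent" idea Jason describes can be sketched in code: if application logic depends only on an abstract data-sink interface, the cloud backend behind it can be swapped without touching the application. This is a minimal illustrative sketch, not any vendor's actual API; every class and name below is hypothetical.

```python
from abc import ABC, abstractmethod


class DataSink(ABC):
    """Abstract destination for edge data; concrete sinks are swappable."""

    @abstractmethod
    def publish(self, payload: dict) -> str:
        ...


class CloudASink(DataSink):
    def publish(self, payload: dict) -> str:
        # In a real system this would call cloud A's ingestion API.
        return f"cloud-a:{payload['sensor']}"


class CloudBSink(DataSink):
    def publish(self, payload: dict) -> str:
        # Swapping providers requires no change to application code.
        return f"cloud-b:{payload['sensor']}"


def ingest(sink: DataSink, payload: dict) -> str:
    """Application code depends only on the DataSink interface."""
    return sink.publish(payload)


print(ingest(CloudASink(), {"sensor": "cam-01"}))  # cloud-a:cam-01
print(ingest(CloudBSink(), {"sensor": "cam-01"}))  # cloud-b:cam-01
```

The point of the indirection is exactly the "once you get the bill" scenario: moving from one cloud to another, or to a multicloud split, changes only which sink is constructed.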

The other thing is communities. We mentioned OpenVINO, and this notion of democratizing technologies by working in communities—so the OpenVINO community, of course. Then there’s ONNX, which the OpenVINO community is working with to standardize how AI frameworks work together, like TensorFlow and OpenVINO, et cetera. At the root of our solution—we sell a SaaS orchestration cloud for edge computing—we use EVE-OS from LF Edge. The Linux Foundation’s LF Edge community is democratizing a lot of the middleware, the plumbing, for edge computing. Investing in those technologies not only reduces the undifferentiated heavy lifting that so many people too often do, it helps you focus on value.

So as all of these technologies start to converge, we’re going to see more and more acceleration and transformation. And the key is to not feel like you should be inventing the plumbing. The worst thing you could do right now is to be focused on trying to own the plumbing. As I always say, you have to democratize the south, towards the data, to monetize the north. And that’s where the real money is.

And so we work a lot with Intel. One quick point is that we really like Intel’s infrastructure because of this whole notion of moving the public cloud experience as close to the physical world as possible. We leverage all the virtualization technologies within the silicon. We can basically abstract the whole application layer using our solution, to where, as a developer, I don’t have to have a lot of specialized skills. I can just say: I’m going to deploy that AI model and assign it to that GPU; I want this data-analytics or data-ingestion stack to be assigned to those two Intel CPU cores. And so it gives you, again, that public cloud experience. All I care about is compute, storage, and networking; just give me the easy button to assign stuff. OpenVINO helps there, as mentioned, but it really is important to do all the abstraction, and also to invest in communities that are doing the heavy lifting for you, so you can focus on value.
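Jason’s "assign this stack to those two cores" idea can be approximated at the operating-system level. Below is a minimal, Linux-only sketch using Python’s standard library; the orchestration layer he describes would do this (and GPU assignment) through virtualization rather than a script like this, so treat it purely as an illustration of CPU pinning.

```python
import os


def pin_to_cores(cores: set) -> set:
    """Pin the current process to the given CPU cores (Linux-only),
    mimicking 'assign this workload to those two cores'."""
    available = os.sched_getaffinity(0)           # cores we may use right now
    target = cores & available                    # never request unavailable cores
    os.sched_setaffinity(0, target or available)  # fall back to all if none match
    return os.sched_getaffinity(0)                # affinity actually in effect


# e.g., dedicate the data-ingestion process to cores 0 and 1
print(pin_to_cores({0, 1}))
```

Pinning like this keeps a data-ingestion workload from competing for cache and scheduler time with, say, an inference workload on neighboring cores.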

Christina Cardoza: Great. Unfortunately we are running out of time, but before we go, I just want to throw it back to you guys one last time for any final key thoughts or takeaways you want to leave our listeners with today. Blake, I’ll start with you.

Blake Kerrigan: I think the key takeaway for me—and it goes back to some of what Jason said and some of what I’ve said—is that selecting hardware is hard, and a lot of people start there, which is probably not the right first step. It’s interesting, me saying that, coming from a hardware company, but at Lenovo what we want to be part of is that first step in the journey. And I would encourage all of our customers, and even folks that aren’t our customers, to reach out to our specialists and see how we can help you understand the roadblocks you’re going to run into, and also open you up to the ecosystem of partners that we have, whether it’s Intel or ZEDEDA or others. There are all sorts of different application layers that run on top of these fundamental horizontal hardware and software stacks, like ZEDEDA’s software as well as our hardware at Lenovo.

My takeaway, or I guess my leave-behind, for this would be: bring us your problems, bring us your biggest and most difficult problems, and let us help you design, implement, and deploy a solution, and realize those insights and outcomes.

Jason Shepherd: Yeah, I would just add, as we close out, that I totally agree. It’s all about ecosystem; invest in community so you can focus on more value. You know, the “it takes a village” mantra. And for us, if you do all the abstractions, you create this more composable, software-defined infrastructure. Another mantra of mine is: I’m all about choice, but I’ll tell you the best choices. So if you come to us and we work together to architect it right, then we can focus on what the right choices are, both open source and, of course, proprietary.

This isn’t about a free-for-all; this is about making money, helping customers, and creating new experiences. Very much it’s about partnership, like we’re talking about here, but it’s also about navigating the crazy field out there and focusing on real value versus reinvention.

Christina Cardoza: Well, with that, I just want to thank you both again for joining the podcast today.

Jason Shepherd: Yeah, great. Thanks for having us.

Christina Cardoza: And thanks to our listeners for tuning in. If you liked this episode, please like, subscribe, leave a great review, all of the above, on your favorite streaming platform. Until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

About the Host

Christina Cardoza is an Editorial Director for insight.tech. Previously, she was the News Editor of the software development magazine SD Times and IT operations online publication ITOps Times. She received her bachelor’s degree in journalism from Stony Brook University, and has been writing about software development and technology throughout her entire career.
