
IoT Edge Computing: The Road to Success


IoT edge computing is known across industries for its ability to bring computation closer to where the data involved is generated. This allows for the real-time insights, high performance, and low latency that businesses need to succeed today. And what business wouldn’t want that? But some industries, like manufacturing, may have a more difficult digital transformation journey before them—with legacy infrastructure and little or no downtime to make changes. And some organizations are unsure how best to bring this transformation about, with all its various advantages and complexities.

For guidance on this journey, we turn to two people with deep backgrounds in IoT edge computing and extensive knowledge of the field: Martin Garner, Head of IoT Research at CCS Insight; and Dan Rodriguez, VP and General Manager of the Network and Edge Solutions Group at Intel. They’ll take us through some of the challenges and opportunities involved, and remind us that no one has to—no one should—go it alone (Video 1). In addition, CCS Insight has made available its recent IoT Initiatives to Scale Industrial Edge Computing report to insight.tech subscribers. Check it out, and get your ticket to digital transformation.

Video 1. Martin Garner from CCS Insight and Dan Rodriguez from Intel discuss the state of edge computing, how to overcome challenges, and key opportunities to look forward to. (Source: insight.tech)

What can you tell us about the benefits of edge computing today?

Dan Rodriguez: Edge compute is driving an incredible amount of change across all sorts of industries. And it’s fueling digital transformation, as businesses aim to automate their infrastructures and improve operational efficiency. AI, along with the advent of 5G, will only accelerate this trend.

Companies want to have more control. They’re seeing supply chain challenges, unstable energy production, and sometimes labor-force shortages. They want to find ways to optimize their operations, their costs, and their data. So there’s a lot of opportunity for edge AI here. It can also provide new monetization opportunities, because companies are looking not only to save money and manage their TCO but also, of course, to make money.

If you think about one industry—manufacturing—we’re already seeing customers start their AI journeys. Initially they’re utilizing it for simple things, such as supply chain management, with autonomous robots stocking or pulling inventory. And then quickly advancing into employing computer vision and AI for things like defect detection.
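To make the defect-detection idea concrete, here is a minimal, illustrative sketch of how an edge node might compare a camera capture of a part against a known-good reference image and flag parts whose pixel differences exceed a threshold. The file names and threshold values are hypothetical, and a production system would typically rely on a trained model rather than simple image differencing.

```python
# Minimal defect-detection sketch: compare a part image against a "golden"
# reference and flag the part if too many pixels differ. File paths and
# thresholds are illustrative placeholders, not a production pipeline.
import cv2

PIXEL_DIFF_THRESHOLD = 40   # per-pixel intensity difference that counts as suspect
DEFECT_AREA_LIMIT = 500     # suspect pixels allowed before the part is rejected

def is_defective(reference_path: str, sample_path: str) -> bool:
    # Assumes both images are the same resolution and taken from the same camera pose.
    reference = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    sample = cv2.imread(sample_path, cv2.IMREAD_GRAYSCALE)

    # Blur slightly so sensor noise and minor lighting changes are not flagged.
    reference = cv2.GaussianBlur(reference, (5, 5), 0)
    sample = cv2.GaussianBlur(sample, (5, 5), 0)

    diff = cv2.absdiff(reference, sample)
    _, mask = cv2.threshold(diff, PIXEL_DIFF_THRESHOLD, 255, cv2.THRESH_BINARY)

    return cv2.countNonZero(mask) > DEFECT_AREA_LIMIT

if __name__ == "__main__":
    print(is_defective("golden_part.png", "line_capture.png"))
```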

What is the state of edge computing, and what challenges still remain?

Martin Garner: A huge amount of it already exists across all industries, including quite a lot that you might not even think of as edge computing. And that highlights a couple of things about the whole space.

One is that it’s very broad. It runs from sensors all the way out to local-area data centers. It’s also quite deep. It goes from the infrastructure at the bottom up through the networking, through applications, to AI. And because of those two things, it’s quite a complicated area.

In adoption terms, we think there are several big drivers. One is IoT. High volumes of data are being generated there that need to be analyzed and dealt with in near-real time using machine learning or AI. Also, the telecoms have recently become very interested in the subject as suppliers, with multi-access edge computing and private networks. Last, there’s the economic climate. Many companies are reviewing their cloud spend, and that is a bit of a spur to do more with edge computing.

Tell us about the different opportunities for edge computing across industries.

Dan Rodriguez: Let’s talk about retail. One of the biggest costs that retailers have is theft; believe it or not, it’s a $500 billion-a-year problem. The use of computer vision with AI can attack this problem, helping to prevent theft at the front of the store, i.e., at the checkout area; in the middle of the store, where you sometimes get in-aisle shoplifting; and even in the back of the store, where there can be theft in warehousing and distribution centers.

And when you think about how retailers make money, they can utilize AI in all sorts of new and interesting ways. The shopping experience, for example: It can provide feedback on different merchandising-display strategies. It can quickly identify when items are out of stock on the shelf. Sometimes very simple things can really lead to better results.

“#Manufacturing processes are being streamlined onto fewer and fewer #software-defined platforms, which increases overall efficiency and reduces the infrastructure’s complexity” – Dan Rodriguez, @intel via @insightdottech

Then consider manufacturing and industrial edge computing; it’s going through a massive transformation in the types of infrastructure that are deployed. Generally speaking, manufacturers are moving away from what I would call fixed-function appliances—appliances that do one thing very, very well—to more software-defined systems that are easier to manage and upgrade.

So diverse kinds of manufacturing processes are being streamlined onto fewer and fewer software-defined platforms, which increases overall efficiency and reduces the infrastructure’s complexity. And once you have this software-defined infrastructure in place, then you start combining it with the use of robots, with sensing, with 5G and AI. Then you can do all sorts of magic across a factory floor, helping with everything from inventory management to defect detection.

What challenges do manufacturers face when it comes to industrial edge computing?

Martin Garner: The opportunities are huge, but honestly there are a few challenges, and some of those are faced by anyone using edge computing.

The first one is scale. Industrial edge computing is one of those technologies where it’s quite easy to get started and do a few bits and pieces, but as soon as you scale it up, it all becomes trickier. The larger players will have thousands of computers on tens of sites across multiple geographic regions. And they have to keep all of that updated, secure, and synchronized as if it were a single system in order to make sure they’re getting what they need out of it.

Linked to that, with a large estate of edge computing, you end up with a really big distributed-computing system, with things like synchronization of clock signals, of machines, of data posts into databases. On top of all that there are different types of data going through the system and a different mix of application software—some cloud, some multi-cloud, some local data. All of that needs a complex architecture.
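To make the clock-synchronization point concrete, the sketch below shows the classic NTP-style offset estimate an edge node could compute from a single request/response exchange with a reference time server. The timestamps in the example are invented, and real deployments would use an established protocol such as NTP or PTP rather than a hand-rolled calculation.

```python
# Illustrative sketch: estimating a node's clock offset from one timestamp exchange.
# t0/t3 are taken on the client's clock, t1/t2 on the server's clock.

def estimate_clock_offset(t0: float, t1: float, t2: float, t3: float) -> tuple[float, float]:
    """Return (offset, round_trip_delay) in seconds.

    t0: client sends request      t1: server receives request
    t2: server sends reply        t3: client receives reply
    """
    offset = ((t1 - t0) + (t2 - t3)) / 2.0
    delay = (t3 - t0) - (t2 - t1)
    return offset, delay

if __name__ == "__main__":
    # Invented example: the server's clock is roughly 5 ms ahead of the client's.
    print(estimate_clock_offset(100.000, 100.015, 100.016, 100.021))
```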

There are also a couple of challenges that are probably specific to the manufacturing and production industries. One is real-time working, which is a special set of demands that, by and large, IT doesn’t have. There are feedback loops measured in microseconds; there are chemical mixes measured in parts per million. Timeliness and accuracy are incredibly important. And what’s really important is that it’s a system-level thing—not just one component but the whole system has to cope with that.

And then there’s the robustness of the system. Many factories work three shifts per day, nonstop, 365 days a year. An unplanned stoppage is a really expensive thing—millions of dollars per day in many cases. All of the computing has to support that with things like system redundancy, hot standby, and automatic failover. That’s so if something goes wrong, the system doesn’t stop. That means doing software patches and security upgrades live, without interrupting or rebooting the systems at all. It also means that if you need to expand the hardware—say you want a new AI—you’ve got to be able to put that in without stopping the production line.

So hardware and software need to be self-configuring and must not break anything else when they are added. Again, those are constraints that IT doesn’t have, but in the industrial area they are things that just have to be worked with.
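As an illustration of the hot-standby and automatic-failover behavior described above, the following minimal sketch shows a watchdog that polls an active node’s health endpoint and promotes a standby after several consecutive missed checks. The hostnames, port, and promote() hook are placeholders; a real industrial failover mechanism would run far tighter loops and reassign traffic or a virtual IP rather than print a message.

```python
# Minimal hot-standby watchdog sketch (illustrative only).
import time
import urllib.request

ACTIVE = "http://edge-node-a:8080/health"    # placeholder hostnames and port
STANDBY = "http://edge-node-b:8080/health"
MAX_MISSES = 3          # consecutive failed checks before failing over
CHECK_INTERVAL_S = 1.0  # industrial control loops would tune this much tighter

def is_healthy(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=0.5) as resp:
            return resp.status == 200
    except OSError:
        return False

def promote(url: str) -> None:
    # Placeholder: a real system would redirect traffic or reassign a
    # virtual IP so production continues without interruption.
    print(f"Failing over: promoting {url} to active")

def watchdog() -> None:
    misses = 0
    while True:
        misses = 0 if is_healthy(ACTIVE) else misses + 1
        if misses >= MAX_MISSES:
            promote(STANDBY)
            break
        time.sleep(CHECK_INTERVAL_S)

if __name__ == "__main__":
    watchdog()
```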

And how can manufacturers approach those challenges most successfully?

Martin Garner: The first thing we would recommend is: don’t build your own infrastructure. It’s too slow, takes too many resources, becomes too expensive over time, and it’s a specialist area.

The second thing is to design the system around modern IT and cloud-computing practices. It should be almost seamless across them. And there are lots of good technology frameworks to choose from, so most of the customer-design work can focus on the application level.

Third, in the operations-technology world, equipment and software lifetimes are typically 10 to 20 years. We think with edge computing it’s sensible to plan for shorter lives, 5 to 10 years. The data volumes are going up and up, and the more data you get the more you want to do with it, and the more you can do with it. So you’re going to need more AI, more edge computing capacity, and you’re going to have to expand what you have quite quickly.

How are you seeing manufacturers approach this type of technology?

Dan Rodriguez: As I mentioned before, the first part of the journey is the movement away from fixed-function appliances to more of a software-defined infrastructure. Imagine if you had to have a specific phone for each application you used; that would be really difficult to manage. It’s the same thing on a factory floor. Think how much the complexity would be reduced if more applications were loaded onto fewer software-defined infrastructures.

The future is that servers will host most or many of these software workloads. Then you’ll be able to provide automated updates in a much more controlled way, and they’ll be much easier and more efficient to operate and maintain. You’ll also be able to layer on all sorts of new capabilities.

Give us some specific examples of where these approaches have been used.

Martin Garner: One example highlights the scale issue. A very large university hospital was installing a mesh network to keep track of ventilators and other key equipment, and to gather information from sensors. They did a trial with battery-powered nodes that went well, and they loved it. But they realized that, as they scaled it up to the whole hospital, they would have thousands of devices with batteries to monitor. They would always be changing batteries somewhere, risking dangerous outcomes if it wasn’t done thoroughly. So they asked the supplier to produce mains [grid]-powered versions instead.

The lesson that came out of that for me was that from the start, suppliers have to design to the scale they’re going to face in the end. And customers need to think big in that design phase, too. As Dan mentioned, it is a journey, and you learn a lot as you go through.

Tell us about the importance of partnerships to achieving these goals.

Dan Rodriguez: Intel creates a diverse ecosystem that utilizes both open and standard platforms. And having an ecosystem like this is incredibly important for the overall health of the market; the community not only has a lot of vendor choice, but it also increases the overall innovation spiral.

Martin Garner: Edge computing is broad, deep, and complicated, as I mentioned earlier. Very few customers can take all of that on. Very few suppliers can take it on either because they tend to specialize. Actually, most of the systems we’re talking about will need to be designed with three to five players involved. And I think that’s the expectation we should all bring to this—that it’s going to be a team effort.

How do you see the role of IoT edge computing in industrial environments evolving?

Dan Rodriguez: The first phase is that migration toward software-defined infrastructure, where workload consolidation supports multiple applications on fewer and fewer servers or devices.

And then obviously, generative AI is all the buzz right now, and over time it will be incorporated into this strategy as well. It’s going to be super exciting to see all the gains in production, the reduction in defects, and also the use of new simulation and modeling techniques in that factory of the future.

Martin Garner: A couple of things came out of our report that are not so big right now, but you can see them coming.

The first one is around mission-critical manufacturing processes, where any unplanned downtime, as I said earlier, is really expensive. A key question there is about how to learn from what’s gone wrong. The aircraft industry has always been quite good at this. The aim is to make systems more and more resilient by ensuring that failure modes are understood and mitigated. Then new scenarios are built up to cope better under fault conditions. That looks to us like an important area for more general use across manufacturing.

Another one is linked to industrial robustness. If an application can run on one machine and automatically switch over to another one if there’s a failure, you have to ask—which is the optimal machine for it to run on normally? And you realize that optimal could mean fastest, it could mean lowest latency, or the highest uptime, or cheapest on capital costs, cheapest on operating. It’s all about optimizing the system in different ways for different things. We haven’t found anybody who’s actively exploring this yet, but we do expect it to become a thing in edge computing fairly soon.
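One way to picture that placement question is as a weighted scoring exercise: rank each candidate machine on latency, uptime, and cost, with weights that encode what “optimal” means for a given workload. The machines, metrics, and weights in the sketch below are invented purely for illustration.

```python
# Illustrative workload-placement sketch: score candidate machines against
# weighted criteria and pick the best. All data here is made up.

CANDIDATES = {
    "line-server-1": {"latency_ms": 2.0, "uptime": 0.9999,  "cost_per_hr": 1.20},
    "cell-gateway":  {"latency_ms": 0.5, "uptime": 0.999,   "cost_per_hr": 0.40},
    "site-dc-node":  {"latency_ms": 8.0, "uptime": 0.99999, "cost_per_hr": 2.50},
}

# Weights express what "optimal" means for this particular workload:
# penalize latency and cost, reward uptime (a higher total score is better).
WEIGHTS = {"latency_ms": -0.5, "uptime": 1000.0, "cost_per_hr": -1.0}

def score(metrics: dict) -> float:
    return sum(WEIGHTS[name] * value for name, value in metrics.items())

def best_placement(candidates: dict) -> str:
    return max(candidates, key=lambda name: score(candidates[name]))

if __name__ == "__main__":
    print(best_placement(CANDIDATES))  # the optimal machine under these weights
```

Changing the weights changes the answer, which is exactly the point: the same fleet can be “optimized” for speed, for availability, or for cost, depending on the workload.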

Any final thoughts or key takeaways you want to leave us with?

Martin Garner: It’s a bit of an analyst cliché to say, “Oh yes, but it’s complicated.” But edge computing actually is complicated, and I think many companies see it and get it, but it still feels quite a long way away. From our point of view at CCS Insight, we think it’s key for customers to just get started, working with a few, carefully chosen partners.

At the start you should be fairly ambitious in how you think about all of it and what scale it could get to—knowing you won’t get there all in one go. You’ll probably find, though, that it’s not the technology that’s the limiting factor; it’s probably the organization. You will need to invest at least as much time and effort into bringing the organization along as you do in working out what technology, and how much of it, to use.

Dan Rodriguez: First, edge computing is fundamentally changing nearly every industry. And second, when you combine edge computing with AI and 5G, it’s driving a lot of transformation, which is truly creating a massive opportunity—everything from precision agriculture to sensory robots to cities that intelligently coordinate vehicles, people, and roads.

Third, I do strongly believe that industry collaboration and open ecosystems are fundamental to all of this. As Martin mentioned, it’s going to be a team sport, and multiple players will be needed to drive these solutions and implement them in a way that makes the technology easy for customers to consume and to scale. And Intel is truly invested in driving this unified ecosystem.

Related Content

To learn more about adopting edge computing, read IoT Initiatives to Scale Industrial Edge Computing and listen to Industrial Edge Computing: Strategies That Scale. For the latest innovations from CCS Insight and Intel, follow them on Twitter @ccsinsight and @intel, and on LinkedIn at CCS Insight and Intel Corporation.

This article was edited by Erin Noble, copy editor.

About the Author

Christina Cardoza is an Editorial Director for insight.tech. Previously, she was the News Editor of the software development magazine SD Times and IT operations online publication ITOps Times. She received her bachelor’s degree in journalism from Stony Brook University, and has been writing about software development and technology throughout her entire career.
