

AI Everywhere—From the Network Edge to the Cloud

AI Everywhere

At a recent launch event, Intel CEO Pat Gelsinger introduced not just new products but the concept of “AI Everywhere”. In presenting the 5th Gen Intel® Xeon® processors and Intel® Core Ultra processors, Gelsinger talked about how Intel is working to bring AI workloads to the data center, the cloud, and the edge.

Now, in a conversation with Gary Gumanow, Sales Enablement Manager – North American Channel for Intel® Ethernet Products, we learn more about the idea of AI Everywhere and the role of the network edge. Gary has spent his career in networking, which may be why he’s also known as “Gary Gigabit.” With a background in systems integration at some of the top law firms in New York City, Gary works closely with Intel distributors and solution providers. He says that understanding the technology, customer needs, and how products move through the channel is near and dear to his heart.

When Intel talks about AI Everywhere—from the data center to the edge device, what does that mean in terms of the network edge?

AI Everywhere means from the edge to the network core to the data center. By the edge, we’re talking about the endpoints: sensors, cameras, servers, PCs, adapters, the devices that connect to the network. And the core refers to the components that provide services to the edge. AI in the data center is nothing new; the data center has the power and storage to handle big AI workloads. But inferencing at the edge is new, and it brings a number of challenges, from the processing power of compact, rugged PCs to the time-sensitive networks and connectivity needed to transport data back and forth.

Several areas impact the network, and the network is important to each of them. What is AI going to mean to an edge device? The AI model is only as good as the data that can reach it. So how does that data get to an edge device, and how does it get back to the data center?

It’s important that you’re putting the optimal amount of smarts there, right-sizing the architecture so as not to burden the network between the data center and the edge. This means running AI Everywhere on the right CPUs while lowering cost and increasing performance.

We’re continually working on improving bandwidth, data security, and confidential computing in our network devices, so that when they go down to the edge, they’re secure, they have low latency, and they have the performance that’s required to connect the data center with the edge. And we’re doing it in a way that’s low power and sustainable in terms of price performance per watt.

Let’s expand this idea to the factory, where we’ve got AI and computer vision—taking all of this data and inferencing it at the edge. What does the network edge look like here?

Believe it or not, some factory floors are so large they can have their own weather patterns. And one of the things that’s really hot right now for manufacturing and automation is going the distance between robotic devices. So how can these devices communicate when they are football fields apart from each other? And how do you get real-time data out to those edge devices, which are important to the assembly line?

This is a reason why manufacturers are deploying private 5G networks in factories: so they can communicate from a local server or a data center all the way out to these endpoints. But this type of communication takes timing accuracy, low latency, and performance.

So, one cornerstone of 5G virtualized radio access networks (vRANs) is precision timing technology. And Global Positioning System (GPS) devices are key components of a precision timing network. Essentially, networks have an atomic clock, typically a network appliance, and all of your devices synchronize with that appliance. But that’s expensive and proprietary.
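The clock synchronization Gary describes is typically delivered over the network by the IEEE 1588 Precision Time Protocol (PTP), which lets a device estimate its offset from a reference clock using a four-timestamp exchange. The sketch below shows the core arithmetic only, assuming a symmetric network path; the timestamp values are hypothetical.

```python
# Minimal sketch of the IEEE 1588 (PTP) offset/delay math.
# Assumes the one-way path delay is the same in both directions.
# t1: master sends Sync, t2: slave receives Sync,
# t3: slave sends Delay_Req, t4: master receives Delay_Req.
# All timestamps are in nanoseconds (hypothetical values below).

def ptp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2   # mean one-way path delay
    return offset, delay

# Example: slave clock runs 500 ns ahead; path delay is 1000 ns each way.
offset, delay = ptp_offset_and_delay(0, 1500, 3000, 3500)
print(offset, delay)  # 500.0 1000.0
```

The slave then steers its clock by the computed offset. In practice this runs continuously with filtering, and hardware timestamping in the network adapter (rather than software timestamps) is what makes sub-microsecond accuracy achievable.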

The other thing that’s important for 5G is forward error correction (FEC), which adds redundancy to the data flow so the receiver can detect and correct errors on its own, heading them off at the pass. So you’ve got precision timing and you’ve got forward error correction, and all of this can get complicated.
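To make the FEC idea concrete, here is a toy Hamming(7,4) code: four data bits are encoded into seven, and the receiver can correct any single flipped bit without asking for a retransmission. (Real 5G and high-speed Ethernet links use far stronger codes such as LDPC or Reed-Solomon, but the principle is the same.)

```python
# Toy forward error correction: Hamming(7,4).
# Encodes 4 data bits into 7 bits; the receiver can locate and
# correct any single-bit error with no retransmission.

def hamming74_encode(d):  # d = [d1, d2, d3, d4], each 0 or 1
    p1 = d[0] ^ d[1] ^ d[3]  # parity over positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]  # parity over positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]  # parity over positions 4,5,6,7
    # Codeword positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(c):
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the bad bit
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]]  # recover the 4 data bits

codeword = hamming74_encode([1, 0, 1, 1])
codeword[3] ^= 1                     # simulate a single bit error in transit
print(hamming74_decode(codeword))    # [1, 0, 1, 1]
```

The trade-off is the same one network silicon makes: extra bits on the wire buy the ability to repair corruption in-line, which matters when the latency budget leaves no room for retransmissions.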

How is Intel making it less complicated to deploy private 5G in factories as one example?

We’ve built these functions directly into our Ethernet products. For example, the atomic clock technology that used to be appliance-based is now integrated into some of our network adapters. You can eliminate those appliances from the network and have the timing accuracy required for 5G networks built in. It saves power, it saves money, and it simplifies the network design because you don’t have all of these devices coming back to an atomic clock; the timing can live out on the nodes where it needs to be. GPS timing synchronization and FEC are also built into our network adapters and devices.

We have this progression of shrinking discrete components down to a smaller set of things. So now we have Intel® vRAN Boost doing a lot of the work via an accelerator on 4th Gen Intel® Xeon® processors. This fully integrated, high-capacity acceleration increases the performance of the calculations required to run a vRAN over Ethernet. And again, it reduces component requirements, power consumption, and overall system complexity.

It’s like the progression of everything at Intel: consolidating functions into the processor or a smaller number of components, simplifying them, and making them easier to deploy. Another example is how Ethernet is finding its way into Intel® Xeon® D processors. These system-on-chip (SoC) processors have the logic of an Ethernet controller built in, supporting 100 gigabit Ethernet on the chip itself.

It’s sized for a network appliance or edge device rather than the cloud data center, so it has fewer cores and requires less power. And it’s specialized to handle network flows and network security. The Intel Xeon D processor is “right-sized” for where it should be sold and embedded. You can deploy it in medical sensors, gateways, industrial PCs, and on the factory floor, anywhere you need near real-time actionable insights.

Is there anything you would like to add in closing?

We feel very strongly about interoperability with multiple vendors. In fact, in the AI space, we’re doing something called HPN, high-performance networking: stacks based on open APIs and open software. We’re working with multiple vendors like Broadcom, Arista, Cisco, and a whole bunch of others. And there’s the Ultra Ethernet Consortium, open to organizations that want to participate in an open ecosystem and support AI in the data center.

My customers are telling me that they like the open approach Intel is taking with the industry. This consortium coming together to bring data center Ethernet into an open environment is critical for the industry, and for AI to really extend out as far as it can go.

Clearly Ethernet has stood the test of time because of five principles: backward compatibility, an insatiable need for bandwidth, interoperability, open software, and evolving use cases. The network, whether it’s 802.11, Gigabit Ethernet, or 100 Gigabit Ethernet, is the fabric that, alongside 5G, puts this whole story together to bring AI Everywhere, from edge to cloud.

 

Edited by Christina Cardoza, Associate Editorial Director for insight.tech.

About the Author

Georganne Benesch is an Editorial Director for insight.tech. Before this she was an independent writer, authoring blogs, web content, solution guides, white papers and more. Prior to her freelance career Georganne held product management and marketing positions at companies such as Cisco, Proxim and Netopia. She earned a B.A. at University of California at Santa Cruz.
