Build the Edge-Cloud Continuum with Intel® Xeon® D

AIoT (Artificial Intelligence of Things) applications are demanding more data center-class performance closer to the edge. The simplest solution would be adding network processors to ruggedized servers. But looking deeper into the rack, you quickly find the feature requirements of operational endpoints are far different from the core network.

Emerging edge use cases need high-performance computing (HPC) solutions that aren’t quite embedded, but not quite for the data center, either. They also require software experts versed in enterprise and real-time technology who can optimize these hybrid platforms to the deployment at hand.

But these don’t have to be custom solutions. I talked with Michel Chabroux, Senior Director of Product Management at Wind River, a leader in software for intelligent connected systems. We discussed how the deterministic, virtualization-enabled feature set of the new high-performance Intel® Xeon® D processors, formerly known as Ice Lake-D, is enabling next-generation microservers that span the edge-cloud continuum.

What comes to mind when you hear the term “microserver”?

For me, a microserver is a box that behaves like a server for a subsystem of a potentially much larger system. This would be a highly specialized piece of equipment for an environment that is not traditional IT, where servers are neatly placed on air-conditioned racks.

If you think of an industrial or factory installation, it could be an environment where you have a lot of dust, vibration, or where limited space is available. It’s also not always necessarily connected, or the connection may be sporadic. The setting is going to be very different from traditional IT and the functions are also going to be different.

This seems like a gray area because there are embedded processors with industrial functionality and network processors for enterprise and data center markets. Are microservers stretching the limits of what’s available?

The services that microservers provide are not emails or web searches or business logic from an IT environment. The applications that run on some of these edge servers are background logic for connecting multiple embedded devices. Others are highly compute-intensive, so customers are looking for the best performance on the market today. And to be frank, that is Intel®. No one provides the same compute-per-dollar.

In Wind River’s recent work with Intel®, we have seen two competing demands. Our customers want hyper-powerful, top-of-the-line processors with the maximum number of cores for their equipment, but they don’t want those systems to generate too much heat. So you have to compromise somewhere.

But the Intel Xeon D processor provides hardware quite well suited to that gray area. It is very good at compute. Intel has deep expertise in IT compute and has managed to transfer some of it into lower-power profiles adapted to operating closer to the intelligent industrial edge.

The processor parts are divided into low core count (LCC) and high core count (HCC) devices. Core counts on some of the HCC devices are quite high. From a software point of view, this lets us provide a platform for mixed-criticality applications: multiple operating systems can run at the same time, sharing the hardware by leveraging Intel® Virtualization Technology (Intel® VT-x) and Intel® Virtualization Technology for Directed I/O (Intel® VT-d). Wind River software solutions take advantage of these Intel technologies.

Now you can get one box and put, say, four operating systems in it, each one doing its own thing.
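As a rough illustration of that consolidation pattern, the sketch below uses standard Linux KVM/QEMU tooling to run two independent guest operating systems on one machine, with Intel VT-x providing hardware virtualization and Intel VT-d (via VFIO) passing a physical PCI device directly through to one guest. The PCI address, interface names, and disk images are placeholders, and this is generic Linux tooling, not the Wind River Helix Virtualization Platform itself.

```shell
#!/bin/sh
# Sketch: consolidate multiple OSes on one box with KVM (VT-x) and
# VFIO device passthrough (VT-d). Run as root on a VT-x/VT-d-capable host.
# 0000:03:00.0 is a placeholder PCI address for the device to pass through.

modprobe vfio-pci

# Detach the device from its host driver and hand it to vfio-pci.
echo 0000:03:00.0 > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
echo vfio-pci     > /sys/bus/pci/devices/0000:03:00.0/driver_override
echo 0000:03:00.0 > /sys/bus/pci/drivers/vfio-pci/bind

# Guest 1: a real-time payload with exclusive access to the PCI device.
qemu-system-x86_64 -enable-kvm -cpu host -smp 4 -m 4096 \
  -device vfio-pci,host=0000:03:00.0 \
  -drive file=rtos.img,format=raw &

# Guest 2: a general-purpose Linux payload sharing the remaining cores.
qemu-system-x86_64 -enable-kvm -cpu host -smp 8 -m 8192 \
  -drive file=linux.img,format=qcow2 &
```

Each guest sees what looks like its own machine; the hypervisor enforces the partitioning, which is the property mixed-criticality designs rely on.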

“The #compute power on these #processors is fantastic, which is extremely appealing for the #avionics markets where there are data processing-intensive applications such as radar or mission computers.” —Michel Chabroux @WindRiver via @insightdottech

Are there any specific features that make this new generation of processors a form, fit, and function match for edge microserver applications and deployments?

There are a few things of interest to the markets Wind River serves.

The first is Intel® Time-Coordinated Computing (Intel® TCC) and support for Time-Sensitive Networking (TSN), which is very specific to industrial applications where network timing is key. Alongside this is single-root I/O virtualization (SR-IOV) functionality that allows the end user to share a network card between multiple operating systems—without having to deal with paravirtualization or other software techniques.

The other thing is that some SKUs are certifiable to the DO-254 avionics standard. The aerospace market is highly interested in Intel Xeon D processors, and Wind River had early pre-silicon conversations about using these processors with Wind River Studio’s operating platforms: VxWorks, Wind River Helix Virtualization Platform, and Wind River Linux. Again, the reason is that the compute power on these processors is fantastic, which is extremely appealing for the avionics markets where there are data processing-intensive applications such as radar or mission computers.

There’s significant oomph in these parts that makes running multiple payloads at the same time very doable and very appealing. And because of virtualization, we can now tell these customers: “Whatever you’ve done on Linux, you can manage it side by side with real-time flight or other safety-critical aspects of the system, but your Linux remains your Linux.”

By leveraging Intel processors and Wind River software offerings such as VxWorks® and Wind River Linux, you can take a large portion of what you’ve done and reuse it almost as is.

You mentioned support for Intel® TCC, TSN, and other deterministic connectivity. Will these present a learning curve for enterprise developers? And will virtualization have one in the other direction?

Yes, there is a learning curve.

TSN covers a very wide range of standards, and all of them require very fine-grained configuration. It’s non-trivial to configure the OS layer, stacks, drivers, and hardware. And then you need to configure your entire system. By system, I mean different boxes, because every participant in a network must be TSN-aware.
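As one small taste of that configuration burden, the sketch below programs an 802.1Qbv time-aware schedule on a TSN-capable NIC using the standard Linux `tc-taprio` queueing discipline. The interface name, traffic-class mapping, and gate timings are illustrative only, and this covers just one node; every other participant on the network needs a compatible schedule.

```shell
#!/bin/sh
# Sketch: an 802.1Qbv time-aware shaper on one TSN node via tc-taprio.
# Run as root; enp3s0, the priority map, and the timings are placeholders.

tc qdisc replace dev enp3s0 parent root taprio \
    num_tc 3 \
    map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 \
    queues 1@0 1@1 2@2 \
    base-time 1000000000 \
    sched-entry S 01 300000 \
    sched-entry S 02 300000 \
    sched-entry S 04 400000 \
    flags 2
```

Each `sched-entry` opens a set of transmit gates for a slice of the cycle (here 300/300/400 microseconds), so high-priority traffic gets a guaranteed, repeating window; `flags 2` requests hardware offload of the schedule.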

Virtualization brings more complexity initially, but once it’s done, that complexity becomes invisible to the end user if the system is set up properly. If the operating systems and virtualization software are done well, you can bring in your IT team and your embedded team and they can meet in the middle, for example, by doing AI processing closer to the industrial edge.

At Wind River, we’re trying to minimize that learning curve by enabling both ends of the engineering spectrum—traditional IT and traditional embedded—to leverage platforms using the same kinds of tools, processes, etc. And because Intel processors are so compatible with one another, we can leverage enhancements across multiple segments from Intel Atom® to Intel® Core™ to Intel® Xeon®.

One of the first TSN-enabled NICs was the Intel® Ethernet Controller i225, and we supported that with our real-time operating platforms. Fast-forward to today, we also support it on the Intel Xeon D processors.

Intel hardware was also the first on which Wind River had true hardware virtualization support, starting with VT-x, then VT-d, and now SR-IOV and Intel® Graphics Virtualization Technology (Intel® GVT) in the future. These enhancements to the CPUs make our own, and our customers’ lives, easier.

The hyperconverged-infrastructure concept has been around for years but seems to be becoming fundamental to modern technology stacks. Now that it is, how will the edge evolve?

You’ll have this end-to-end continuum that starts with IT business logic in the cloud, and as you get closer and closer to the edge, you’ll still have this cloud infrastructure backing you up. But as opposed to being in a system, then a gateway, then behind the gateway, those lines between the edge and the cloud will be fluid and smooth within your device or equipment.

You’ll have an entire ecosystem, and you’ll use the same paradigm, the same thought processes, and the same tooling across the entire continuum.


This article was edited by Georganne Benesch, Associate Content Director for

About the Author

Brandon brings more than a decade of high-tech journalism and media experience to his current role as Editor-in-Chief of the electronics engineering publication Embedded Computing Design. His coverage focuses on artificial intelligence and machine learning, the Internet of Things, cybersecurity, embedded processors, edge computing, prototyping kits, and safety-critical systems, but extends to any topic of interest to the electronic design community. Brandon leads interactive YouTube communities around platforms like the Embedded Toolbox video interview series and Dev Kit Weekly hardware reviews, and co-hosts the Embedded Insiders Podcast. Drop him a line at or DM him on Twitter @techielew.
