

congatec at embedded world 2023

Autonomous robots need a shocking amount of intelligence to perform tasks like picking, placing, and sorting objects while moving in an open environment where humans, as well as other valuable equipment, are also operating. All the vision, control, and AI needed to accomplish this typically requires multiple computing platforms.

The question is, how do you consolidate all those platforms into a single system that can perform the functions within the form factor, power consumption, and cost requirements of automation environments?

In a demonstration at his company’s embedded world 2023 booth, Christian Eder, Director of Product Marketing at congatec, a leading supplier of embedded computer modules, shows how a single, multicore Intel® Core processor–based computer-on-module (COM) can be partitioned with a real-time hypervisor so that AI, vision, and control capabilities run as discrete software stacks on one piece of hardware. This enables the execution of OpenVINO-optimized vision and AI algorithms in one non-critical operating system environment that operates completely separately from another, real-time OS environment reserved for control functions.


Christian Eder: I’m Christian Eder. I work for congatec, and I’m the Director of Product Marketing. Of course the task is to recognize objects, which is a very easy task, but also to be intelligent. So we have a nice demo here, all based on Intel technology, which brings the intelligence, because otherwise you have to sort components, you have to sort parts. That’s one driving force. On the other end, of course, there’s a lot of drive for autonomous vehicles and autonomous mobile robots, AMRs as they are called, and that’s a huge market that is growing extremely fast. The industrial robot market is great, but the market for intelligent mobile robots is even much larger, and it’s just starting. So it’s big fun to implement Intel technology in those segments.

Most robotics companies do have their own robot controls, so the robot itself does work by itself. But how do you add extra functionality? If you want to have an AI system or a vision system, usually this requires different operating systems. Those don’t, let’s say, run on a real-time operating system, so you can’t simply mix and merge those things.

For that we use the real-time hypervisor, which splits up the hardware into, let’s say, multiple systems. So there’s the real-time operating system, which operates the robot, and there’s another operating system of your choice. In this case it’s a standard Linux here, which does the complete image analysis and which also runs all the AI. And with this you don’t have to change your original application; you just add pieces on top of it, because we have many, many cores here.
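The partitioning Eder describes happens below the OS, in the hypervisor, but the core-separation idea can be sketched at the software level with Linux CPU affinity. This is a loose analogy, not congatec's implementation: the core IDs and helper name below are illustrative, and a real-time hypervisor enforces the split in hardware rather than per-process.

```python
import os

# Hypothetical core partition: some cores reserved for the real-time
# control stack, the rest left to the general-purpose Linux/AI stack.
# Core IDs are illustrative; adjust to the cores your machine has.
RT_CORES = {0}  # real-time OS / robot control
GP_CORES = {1}  # Linux, vision, AI workloads

def pin_to_partition(pid: int, cores: set) -> set:
    """Pin a process to a core partition and return its new affinity.

    Linux-only: uses the kernel's sched_setaffinity syscall.
    pid 0 means the calling process.
    """
    os.sched_setaffinity(pid, cores)
    return os.sched_getaffinity(pid)
```

A control process pinned to `RT_CORES` and an inference process pinned to `GP_CORES` would never compete for the same cores, which is the intuition behind consolidating both stacks on one multicore module.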

We use the latest Intel Core platforms, and we can separate those cores and make multiple systems out of them. We call that system consolidation. Of course it’s the number of cores, it’s the performance: we can run multiple of those robots here with one platform. And it’s the power of the graphics engine as well. So the GPU is used here to do all this AI processing to recognize the objects. And this information is used to control the robot, to control the arm, to really grab the objects, which are just in a random order here. In this demo we use OpenVINO, for example, to do all this AI work. And the beauty of OpenVINO is you can run it on the CPU, or you can accelerate it by running it on a GPU; or if you have other accelerators, like FPGA cards, you can boost performance even further.
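The CPU-then-accelerator flexibility Eder highlights can be sketched as a simple priority fallback, similar in spirit to what OpenVINO's "AUTO" device plugin does internally. The device names "GPU" and "CPU" follow OpenVINO's conventions, but this helper and its priority list are purely illustrative:

```python
# Illustrative device-fallback sketch: prefer a dedicated accelerator,
# then the GPU, then the CPU. Not OpenVINO's actual selection logic.
DEVICE_PRIORITY = ["FPGA", "GPU", "CPU"]

def pick_device(available):
    """Return the highest-priority inference device that is present."""
    for device in DEVICE_PRIORITY:
        if device in available:
            return device
    raise RuntimeError("no supported inference device found")
```

With the real OpenVINO runtime, the rough equivalent is compiling the model with the "AUTO" device (e.g. `core.compile_model(model, "AUTO")`), which lets the runtime choose among the devices it detects.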

But even with this integrated graphics, which is great, you can do so many things. It’s an easy, simple, straightforward implementation, all based on Intel technology. Of course in the demo here, you see, there are random objects lying around, and we can play with this guy. Actually, you see how fast the AI is working here. The camera is grabbing that information, and it’s all in real time. And see, it’s really very interactive. So robots as coworkers are kind of easy to implement with a good vision system and, of course, a good AI system.

You see, compute technology is a complicated thing. What we bring extra to the table is the standardization. We’re very active, I’m very active, here in computer modules. The target is to simplify the use of embedded technology, and that’s what we do with the computer modules, but also with software stacks and our complete ecosystem, to make it easy to build complete computer systems. On top of it, we have the real-time hypervisor, which allows you to split the workloads into real time and non-real time in order to, let’s say, simplify the system application and to really structure the application itself. You don’t have to do everything in one block; you can really split it into nice logical units.

Of course, here at the show, that’s the first thing, but the show is over by tomorrow. And, of course, online you’ll find all that information.

About the Author

Brandon is a long-time contributor to going back to its days as Embedded Innovator, with more than a decade of high-tech journalism and media experience in previous roles as Editor-in-Chief of electronics engineering publication Embedded Computing Design, co-host of the Embedded Insiders podcast, and co-chair of live and virtual events such as Industrial IoT University at Sensors Expo and the IoT Device Security Conference. Brandon currently serves as marketing officer for electronic hardware standards organization, PICMG, where he helps evangelize the use of open standards-based technology. Brandon’s coverage focuses on artificial intelligence and machine learning, the Internet of Things, cybersecurity, embedded processors, edge computing, prototyping kits, and safety-critical systems, but extends to any topic of interest to the electronic design community. Drop him a line at, DM him on Twitter @techielew, or connect with him on LinkedIn.
