
INDUSTRY

How NASCAR AI Runs a Factory

Joe Speed IoT Chat

A conversation with Joe Speed @ADLINK_IoT

Stock car racing has a reputation for being low-tech, but today’s NASCAR teams are using sophisticated AI to push engines to the limit. Now this same machine monitoring technology is available for factories, where it can keep manufacturing assets running at peak performance.

In this podcast, we discuss the extreme requirements of racing—and the surprising ways 800 HP engines resemble everyday industrial applications—with Joe Speed, Field CTO for Global Partners at ADLINK Technology. Our conversation starts with Joe’s real-world NASCAR experience, and then explores everything from optical character recognition to communications protocols. It’s a wild ride!

Listen to find out how you can take advantage of the latest in AI and machine learning. You will discover:

  • Why edge inferencing is the key to analyzing massive data streams
  • How to build an edge computing platform that can talk to legacy equipment
  • Why sensor fusion is not as hard as it sounds

Transcript

Kenton Williston: Welcome to the IoT Chat, a production of insight.tech. I’m Kenton Williston, editor in chief of insight.tech and your host for today’s podcast. This podcast was recorded during quarantine, so you may notice some differences in audio quality, as well as possible guest appearances from pets and kids. Well, we’re glad you could join us either way. Let’s get into the conversation. In today’s podcast, we’ll look at the latest in machine monitoring and discuss ways AI and machine learning can give engineers superhuman capabilities. Plus, we’ll talk about NASCAR. No, really, I promise it will all make sense.

Joe Speed: Sure. Happy to. Kenton, real pleasure to meet you today. So, my name is Joe Speed. I'm the Field CTO for Global Partners here at ADLINK Technology, an Intel partner. I'm a longtime machine-to-machine, Internet of Things nerd, going way back. I spent several years taking an obscure telemetry protocol, getting it made an open standard and open source, and convincing people to use it. That's MQTT, which is now the IoT on-ramp for Amazon, Microsoft, IBM, Google, and most others that matter. It's built into all the cars, and it's in most of the things. I'm also very passionate about cars. I like cars. And one of the great things is, over the years I've convinced a lot of companies to let me build cars. With Intel I've actually built quite a few advanced technology concept cars: cognitive cockpits, autonomous vehicles, an accessible bus for the elderly and disabled, a lot of things. So, I have a very long relationship working with Intel technologies and the people there.

Kenton Williston: Very, very cool. And of course, I'm sure I'm not the first person to observe that, with a name like Joe Speed, he was forced to get into cars.

Joe Speed: Well, it is a prerequisite. We'll see with my son, whose name is Dash. So, we'll see what he does.

Kenton Williston: Oh, well, that’s great.

Joe Speed: And I come from a long line of people going fast. My entire family, except for myself, they're all military pilots.

Kenton Williston: Oh, wow. Amazing. Amazing. So, you mentioned cars, and we recently interviewed you for an article about cars: specifically NASCAR, the stock car racing series, for our international listeners who aren't familiar with it. It's definitely worth checking out, but I have to ask: what in the world does NASCAR have to do with machine monitoring?

Joe Speed: Well, it has a lot to do with it. NASCAR cars are machines. They are made by machines and they are tested by machines. I've spent some time in the past at the NASCAR R&D site, and then working with some of the racing teams. And what's amazing is how much technology, how much really current and cutting-edge technology, goes into the design, manufacturing, testing, and monitoring of such an aggressively low-tech product.

Kenton Williston: The funny thing about NASCAR, of course, is that it is, like you said, aggressively low-tech. It comes from that whole "race it on Sunday, sell it on Monday" mentality. And of course, the cars are not really all that similar to road-going cars these days, but that legacy of a very low-tech approach to racing is still there.

Joe Speed: Sure. But the part that they don't tell you is that the trackside pit boxes are rolling data centers, stuffed full of Xeon CPUs. They don't tell you about streaming the telemetry in real time to the team operations center, where they're doing the streaming analytics. They don't tell you about all the FP64 fluid dynamics analysis, simulating the airflow and calculating the grip coefficient across all four tires. These are all the things that they don't really tell you about.

Kenton Williston: Yeah. And in our article you talked about how engine failures are a huge, huge deal. I think you said it was $250,000 for one of these engines. So, you want to monitor them very, very closely to make sure there are no problems while they're out there zooming around the track.

Joe Speed: Yeah. So, certainly you monitor during the race, but even better is to go into the race with a hardened product that's free of defects. So, think about optical inspection: having very precise controls during manufacturing, optical inspection during and after, testing samples of units to destruction, and being able to intercede in test situations so that if there is a failure you get clean breaks you can inspect under a microscope, all of these kinds of things.

Kenton Williston: Yeah. Totally makes sense. So, what is ADLINK doing to help put together a really robust engine and keep it from failing on the track?

Joe Speed: When you look at a NASCAR engine, what is an engine? It's rotary machinery. And we have a long history of working with such things. The name ADLINK stands for "analog-to-digital link." The company was founded making high-quality analog-to-digital data acquisition devices. We manufacture things for the test instrument industry: devices that do analog-to-digital conversion at 2 million samples per second, or at 256,000 samples per second per channel, across many channels, into 24-bit digital. So, we're capable of generating quite large amounts of data.
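
To put those acquisition figures in perspective, here is a back-of-the-envelope calculation of the resulting data rate. The 16-channel count is an assumption for illustration, not a specific ADLINK product spec:

```python
# Rough data-rate estimate for the acquisition figures above.
# The channel count is a hypothetical example, not a product spec.
samples_per_sec = 256_000   # per-channel rate mentioned in the conversation
bytes_per_sample = 3        # 24-bit samples = 3 bytes
channels = 16               # assumed channel count for illustration

rate_bytes = samples_per_sec * bytes_per_sample * channels
print(f"{rate_bytes / 1e6:.1f} MB/s of raw analog data")  # ~12.3 MB/s
```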

And once you have the data, you need to do something with it. So, you look at our edge compute business, the things that we do around manufacturing: the rugged edge compute, and now edge AI with hardware-accelerated ML and analytics. This is how it all fits in. The same technologies that we use with the NASCAR engines are the same technologies that we deploy on the factory floor, that are used with 50-megawatt gas turbines, that are used in all kinds of things.

Kenton Williston: Well, those are some really impressive numbers in terms of the amount of data you're talking about taking in. And of course, I'm sure NASCAR is not exactly the most typical example. So, I'm wondering if you could highlight some areas where you're seeing a need for this really intense data processing in machine monitoring, and outline for me what some of the typical requirements in those applications are.

Joe Speed: Sure. Yeah. I will say, though, I love my cars, right? I love my car projects with NASCAR, the Porsche racing team, the accessible autonomous bus for the elderly and disabled, all kinds of other vehicles. The most common case for us, though, is working with people who make things. People who make things use our technology, our products, in the thing that they make, and they use them in the manufacturing of the thing. So, whether you're talking about autonomous tractors, CNC machines, robots, autonomous vehicles, all of these kinds of things. And for the machine monitoring, think about any rotary machine: compressors, CNCs, conveyors, all of these things.

I know a lot of people go straight to the whole predictive maintenance topic, and that's interesting, but I tend to think more of just: what is the health of this thing? Right here, in the here and now, what is the health of this particular asset? And there's a lot that you can tell from the analog data. We can drop in our MCM-100 product. We can drop in these high-quality analog sensors for vibration, pressure, voltage, temperature, all kinds of things; there are a hundred different sensors that it works with. And then we can also take telemetry from the machine itself. This is where it gets interesting: when you start to use these things in combination. So, this is used today in a lot of factories around the world, and in a lot of things out in the field as well.
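
To make the "health of this asset" idea concrete, here is a minimal sketch of a vibration-based health check. It assumes a window of accelerometer samples as a NumPy array; the frequency band and threshold are made-up values for illustration, not anything from the MCM-100:

```python
import numpy as np

SAMPLE_RATE_HZ = 256_000  # per-channel rate from the conversation above

def band_rms(samples: np.ndarray, lo_hz: float, hi_hz: float) -> float:
    """RMS energy of the vibration signal within one frequency band."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE_HZ)
    mask = (freqs >= lo_hz) & (freqs < hi_hz)
    return float(np.sqrt(np.mean(spectrum[mask] ** 2)))

def asset_needs_attention(samples: np.ndarray, baseline: float) -> bool:
    """Flag the asset if bearing-band energy rises well above its baseline."""
    return band_rms(samples, lo_hz=2_000, hi_hz=8_000) > 3.0 * baseline
```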

Kenton Williston: Yeah, that makes sense. And I know one of the big challenges there is, when you're talking about adding new instrumentation onto an existing asset, figuring out how to connect that instrumentation is, I guess, a relatively small problem, because you can specify whatever interfaces you want. But when you add the extra step of getting the telemetry out of the existing compute hardware, the existing management hardware, that can be a bit challenging. So, how do you go about accessing that legacy data?

Joe Speed: For the legacy data, we've been doing that for many years; the factory floor is where we live, and has been for 25 years now. So, we have hardware support for connecting to Modbus, CAN bus, and all of these different legacy industrial interfaces from an electromechanical perspective. And then, from a software perspective, we have our ADLINK Edge software, which runs on all of our Intel products. That gives us out-of-the-box support for the Siemens, Allen-Bradley, and Rockwell SCADA protocols, Modbus, PROFINET, and the PI historian, so we can stream data to and from the OSIsoft PI historian, and all of these other things that you run into in manufacturing, warehouse, or other kinds of operations environments: oil rigs, refineries, even things around wind farms.
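
For a sense of what pulling legacy telemetry looks like in practice, here is a minimal sketch using the open-source pymodbus library rather than ADLINK's own stack. The device address, register map, and scaling are all assumptions:

```python
from pymodbus.client import ModbusTcpClient  # pip install pymodbus

client = ModbusTcpClient("192.168.0.50", port=502)  # assumed PLC address
client.connect()

# Assume two holding registers hold temperature (scaled x10) and pressure.
result = client.read_holding_registers(address=0, count=2, slave=1)
if not result.isError():
    raw_temp, raw_pressure = result.registers
    print(f"temperature={raw_temp / 10:.1f} C, pressure={raw_pressure} kPa")

client.close()
```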

Kenton Williston: Yeah. And if I’m not mistaken, ADLINK even has an offering where you can scrape data off of a display. Isn’t that right?

Joe Speed: The DEX. Yeah. Yeah, that's a fun product, actually. There's a lot of equipment that we find at customers that is new enough and modern enough that it has a digital display, a keyboard, and sometimes a mouse. But it's old enough that it doesn't have any open standard interfaces. There's no OPC UA, there's no DDS, none of these open standards that you see in modern systems. So, our engineers did something really clever with Intel; this was a jointly invented product. It's a little brick that you put on the machine: you take the keyboard, mouse, and video and plug them into this brick, and then you plug the actual keyboard, mouse, and video into the other end of the brick. What it does in between, while you're using the system, is screen scrape the display. It does OCR using Google's Tesseract OCR library.

We do OCR on each video frame, so think 60 frames a second, 30 frames a second. We OCR the video frames and convert every piece of data on the screen into an IoT telemetry stream. Then you can publish it via DDS or MQTT; you can push it up to your clouds, AWS and what have you, the temperatures, the pressures. And the really clever trick it does is, when someone's not using the machine, it'll actually run a script. It will go in and navigate to all of the screens of the equipment's user interface, navigate all the screens, and capture all the data. So, it doesn't need somebody sitting there driving it.
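
As a loose sketch of the idea (not ADLINK's actual DEX implementation), OCR on a captured frame followed by an MQTT publish might look like this in Python, with the broker, topic, and video source all assumed:

```python
import cv2                        # pip install opencv-python
import pytesseract                # wrapper around Google's Tesseract OCR
import paho.mqtt.client as mqtt   # pip install paho-mqtt

client = mqtt.Client()
client.connect("broker.example.com", 1883)  # assumed broker

capture = cv2.VideoCapture(0)     # stand-in for the HMI's video feed
ok, frame = capture.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # OCR prefers grayscale
    text = pytesseract.image_to_string(gray)
    client.publish("factory/machine42/screen", text)  # assumed topic
capture.release()
```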

Kenton Williston: Yeah. Yeah. I really liked that. It’s a very cool little box.

Joe Speed: It is.

Kenton Williston: One of the things that makes that possible is this whole notion of AI and machine learning, which has been getting so much traction over the last couple of years. It's really amazing how much that's grown, and not just for things like scraping these screens, but for the broader use of analyzing all this data that you're taking off of these machines. So, I'm wondering how you see machine learning and AI changing how people approach this problem of machine monitoring?

Joe Speed: Yeah, it's definitely changing things in a lot of ways. The most obvious one that everyone sees is the things around vision, so computer vision: being able to make sense of any situation, any problem, that you can tell optically. Basically, if you can figure out a situation with your own eyes, then a machine can be trained to come to that same decision. Okay? For the telemetry data, we're able to do interesting things. For NASCAR, they record tons of telemetry. So, in the course of, say, the Daytona 500, you get all the telemetry from the race: all the braking, all the acceleration, the Gs, the change in engine load. All of these things are captured.

And then you can play that back on a dyno. You can play that back with the real engine, but in simulation: you can simulate the entire Daytona 500 on a test stand. And if you know this was the perfect run, this was the winning race, this was the perfect lap from the winning race, you can play that back through and look for deviation. The system can detect any deviations from that, and those can be flagged for the human. And the humans have a lot to do with that training. One of the engineers I work with, Duane, has been doing this for so many years. He sits there looking into the test chamber. It's bulletproof glass, because when engines at high RPM come apart, it can be a very kinetic event.

So, you're looking through the bulletproof glass into the chamber. You're watching, you're listening, you're looking at a display of all the telemetry. And he's been doing this for so long, he freaked me out. He leaned over to me and said, "Joe, that engine is going to come apart in the next five minutes." He can just tell. But you need that kind of expertise to work with the data scientists, to train, to do the annotation of the data. These human elements are very important for making the machine learning effective.
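
A minimal sketch of the "compare against the perfect run" idea Joe describes might look like this, assuming both runs have been resampled onto a common time base; the channel name and tolerance are illustrative:

```python
import numpy as np

def find_deviations(golden: np.ndarray, live: np.ndarray,
                    tolerance: float) -> np.ndarray:
    """Indices where live telemetry departs from the golden run by more
    than the tolerance. Assumes both arrays share the same time base."""
    n = min(len(golden), len(live))          # naive alignment by sample index
    delta = np.abs(live[:n] - golden[:n])
    return np.flatnonzero(delta > tolerance)

# e.g., flag engine-load deviations above 5% for a human expert to review:
# suspects = find_deviations(golden_engine_load, live_engine_load, 0.05)
```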

Kenton Williston: Yeah, absolutely. And that's one of the things I think is most exciting about this whole field. It's pretty easy to get super focused on what the computers can do, but really, what you're doing as much as anything is taking that expertise out of one person's head and making it more broadly accessible. And I think that's really, really exciting.

Joe Speed: It's not just sharing it more broadly. Think about it as a tool: not only can you take your most expert person and more broadly share that knowledge, you can also take that knowledge and apply it faster. What if you need the same decision that he would have made, but you need that decision executed in microseconds instead of seconds? You need low-latency interventions. Or what if it's optical inspection, and the expert has trained you how to see the defects in these machine parts? By codifying and capturing their expertise and building it into these models, you give him a tool where, when he goes to inspect the part, every suspect area is instantly highlighted. Then he can just click through them one by one, either accepting or rejecting each of those. And as he's doing so, he is continuing to refine, to retrain, the model. So, it's good stuff.
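
A sketch of that accept/reject loop might look like the following; the Region structure and the way verdicts feed the retraining set are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Region:
    region_id: int
    bbox: tuple      # (x, y, w, h) of the highlighted suspect area
    crop: Any        # pixel data for that area

def review_part(suspect_regions: list, verdicts: dict, training_set: list):
    """Record the expert's accept/reject verdict on each model-flagged
    region; every verdict becomes a labeled example for retraining."""
    for region in suspect_regions:
        is_defect = verdicts[region.region_id]   # True = confirmed defect
        training_set.append((region.crop, is_defect))
        if is_defect:
            print(f"defect confirmed at {region.bbox}")
    return training_set
```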

Kenton Williston: Yeah, very much so. And something we talked about in earlier conversations is how this can turn an engineer into almost a superhuman: you can respond so much faster, you can handle so much more of a workload, you can monitor so many more machines. It's really like gaining superpowers.

Joe Speed: Very much so.

Kenton Williston: So, of course, the flip side to this is that it does take a lot of computer hardware, and some really intense software development and so forth, to actually bring all of these amazing capabilities to life. And I'm wondering if you can sketch a picture for me of what the compute requirements are for these applications.

Joe Speed: Sure.

Kenton Williston: And what is ADLINK delivering to help take on these just insane workloads?

Joe Speed: Yeah. It's definitely a different kind of workload, and you can scale to handle it. Certainly, Intel has done a lot to enable and optimize these kinds of workloads on Xeon and some of the higher-class processors, and to push some of those optimizations down into the smaller ones like Atom and Intel Core. But a lot of it actually requires different compute. So, instead of just more compute, think about it as needing different compute. There's some pretty specialized hardware out there; you see things in the market like the TPUs, the GPUs, the VPUs. We work with all of those, and they are particularly well suited for these kinds of problems, which can be highly parallelized and the like.

So, we do a lot with the Movidius Myriad X and with OpenVINO. OpenVINO is nice because we can take the test data, whether that's acoustic telemetry, vibration, or video, and use it to develop the ML models, or to help our customers with the tools and best practices around developing the ML models. And for that it's your usual cast of characters: TensorFlow, TensorFlow Lite, all of the variants there, PyTorch, all these things. Once you have a model that's trained, you can take it and run it through OpenVINO, which generates a quantized, optimized runtime that you use for the inference. And one thing to distinguish here: when you talk about machine learning, you have training and you have inference, and they're asymmetric. Training is very expensive, and you do it occasionally. Inference, in comparison, is computationally cheap, but you do it all the time.
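
On the inference side, a minimal OpenVINO sketch might look like this. The model file, input shape, and device name are assumptions; "MYRIAD" targeted the Movidius Myriad X in older OpenVINO releases:

```python
import numpy as np
from openvino.runtime import Core  # pip install openvino

core = Core()
model = core.read_model("defect_detector.xml")  # IR files produced offline
compiled = core.compile_model(model, device_name="CPU")  # or "GPU"/"MYRIAD"

frame = np.zeros((1, 3, 224, 224), dtype=np.float32)  # stand-in input tensor
result = compiled([frame])[compiled.output(0)]        # one inference pass
print("defect score:", float(result.max()))
```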

Kenton Williston: Yeah, absolutely. Absolutely. And I totally agree with you that platforms like OpenVINO have been a really huge deal for enabling these kinds of workloads. Among other reasons, there's the fact that you can address these very specialized hardware platforms like the Movidius.

Joe Speed: Yeah.

Kenton Williston: And then, of course, there's the fact that you can take all of this work that you've done in the cloud, developing your models on very expensive, big sets of hardware, and turn it into something that is practical to run at the edge. The other piece of that, though, in my mind (we talked about this a little bit earlier) is just the raw amount of data that's coming in. I think that also points to why it's so important to do all this compute at the edge. So, I'm wondering if you can talk a little bit more about that. And I'm also curious: I had no idea that you were involved in the development of MQTT. That's very cool, and it's something we talk about on our site all the time.

So, could you talk a little bit about how you ingest all this data, where it gets processed, and how you keep the communications flowing? Maybe that's not such a big problem if you've just got a NASCAR engine on a test stand. But if you're talking about something like, say, a factory with distributed machinery, where and how do you get the computing done? And where and how do you traffic the data?

Joe Speed: Sure. I'm happy to talk about that. And just to be clear, MQTT was invented by my friends Arlen Nipper and Andy Stanford-Clark, so they're the real geniuses behind it. One of my abilities is that I'm very quick to see value that maybe not everyone has seen, and then I get others to see it. So, I took what was at the time an obscure, proprietary telemetry protocol and worked with many others (I did none of this by myself), and together we made it an open standard, we made it open source, and we convinced the world to adopt it. For talking to the cloud, thing to cloud, there's nothing better than MQTT. But what people need to understand is that in technology everything is a tool, right? And if somebody asks you if it's a good tool, the correct response is, "I don't know. What is it you want to do with it?" Okay?

So, MQTT is not a panacea, it's not a cure-all. What it is, is a really great lightweight pub/sub protocol for talking to clouds, for talking over resource-constrained or unreliable networks. But it doesn't fit everything. And then you've got DDS, the Data Distribution Service, which historically comes out of military and aerospace. That's optimized around UDP-based multicast, being able to do high volume with extremely low latency and quality of service. In our work with DDS, everything we talk about is microseconds, at most tens of microseconds, not milliseconds. So, you start to look at using these things together. I've got MQTT for north-south, to and from the cloud, and then I've got DDS for what we call east-west.

So, how can things on a factory floor discover and work with one another, and coordinate on manufacturing tasks with very low latency? This is why DDS is what's built into open robotics: all of ROS 2 is built on DDS. And the DDS that we contributed to open source, an Eclipse Foundation project called Eclipse Cyclone DDS, is a Tier 1 ROS middleware. So, that's part of our contribution.
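
For a flavor of the east-west path, here is a minimal publisher sketch using the Eclipse Cyclone DDS Python binding; the topic name and message layout are invented for illustration, and the exact binding API may differ by version:

```python
from dataclasses import dataclass
from cyclonedds.domain import DomainParticipant  # pip install cyclonedds
from cyclonedds.topic import Topic
from cyclonedds.pub import DataWriter
from cyclonedds.idl import IdlStruct

@dataclass
class MachineStatus(IdlStruct, typename="MachineStatus"):
    machine_id: str      # assumed message layout, for illustration only
    vibration_rms: float

participant = DomainParticipant()   # joins the default DDS domain
topic = Topic(participant, "MachineStatus", MachineStatus)
writer = DataWriter(participant, topic)
writer.write(MachineStatus(machine_id="press-7", vibration_rms=0.42))
```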

So, you've got these protocols that help you move the data, but where do you move it to? What do you do with it? We're big fans of "train in the cloud, run at the edge." As I mentioned earlier, there's this asymmetric relationship between training and inference: training is done occasionally and is computationally expensive; inference is done all the time and is computationally cheap. But the inference needs to be done on the data, and where is the data generated? It's generated at the edge. So, we believe the correct approach in most situations is to collect data at the edge, bring it to the cloud as a training set, develop and train your models, and publish those models back down to the edge. Then your runtime inference is happening on the data, where the data is created.

And we think that, from an economic standpoint, that also works. There's a brilliant university study that looked at, if I'm using AWS, if I'm using Azure, what is the cost of doing video and acoustic analytics in those clouds versus with their technology, with their tools, at the edge? And the answer was that it's eight times less expensive to do it at the edge. But for me, even more interesting than the financial cost is the latency. We deal with use cases where latency matters. If you have to take data to the cloud, do the analytics, and then bring the decisions back, it's too late: the product's been damaged, the equipment is broken, the building's burned down, a person has been injured. And so, that's why we really think that inference belongs at the edge in almost all situations.

Kenton Williston: So, that makes sense. But I want to take a step back and think about the bigger picture here. We've talked about a lot of different things: the volumes of data, where the data needs to be trafficked, what kind of hardware you need, the software development workflows. It seems to me there are a lot of complexities involved in developing this advanced machine monitoring technology, so I'm sure there are a lot of ways for folks to go wrong. I'm wondering if you've seen some common pitfalls and have some tips on how folks can avoid them.

Joe Speed: Sure. We've all seen failed projects. Fortunately, they haven't been mine. And a lot of them start to get in trouble by being so grand, so ambitious: "bet the business," "boil the ocean," all these euphemisms. Where I've seen people be successful is when they just pick a business problem. "What is a business problem I have?" And the business problem I have is: at this one step in the manufacturing process, I'm getting too many false positives during my quality inspection. So maybe we just focus and hunker down and solve this. And because of that, the business impact is X, Y, Z. They can talk about what it means in terms of revenue, productivity, customer satisfaction, employee satisfaction, injuries, whatever the metric is, and just focus on that. Just pick a thing and work that.

And from that you actually learn a lot. Once that's successful: okay, I was doing this for a particular step within a particular work cell in my manufacturing process; now, how do I expand that to my inputs on the left and my outputs on the right? How do I start to expand it to other things within my manufacturing process? When you take a very focused approach, that's where we've seen customers get a lot of success and very fast ROI. And start simple. Just get the data flowing. I see people start arguing about which analytics tools, which ML framework, and all these things, and that is so cart-before-the-horse. You're arguing about analytics tools and you haven't even seen any data yet. You haven't even opened the data spigot.

Kenton Williston: Yeah, that makes sense. Start small and go from there. So, we're getting close to the end of our time, and I wonder if there are any closing thoughts you'd like to share with our audience.

Joe Speed: Yeah. We talked a bit about the analog data, about vision. Vision is a great general-purpose sensor. You can use it for a lot of things, but not everything; it doesn't capture the entire picture. Think about when you're in an environment, when you're on a factory floor: what do you see? What do you hear? What do you feel? There's not any one sensor that gives you the complete picture. So, how do you use these in combination? How do you do things with sensor fusion? Sensor fusion is a more accessible topic than most people might believe, and we can certainly help you with that. If you want to do things like combine the video analytics with the acoustic and the vibration, and pull telemetry, pull data, off of your existing legacy manufacturing equipment, PLCs, SCADA, all of these kinds of things, we have our ADLINK Edge, which supports over a hundred legacy protocols right out of the box. We can really work with you to help you figure that out.
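
As a toy illustration of the sensor fusion idea, the sketch below combines normalized per-sensor anomaly scores into one health score. The weights and threshold are arbitrary assumptions, not an ADLINK Edge algorithm:

```python
def fuse_scores(vision: float, acoustic: float, vibration: float) -> float:
    """Weighted average of per-sensor anomaly scores, each in [0, 1]."""
    return 0.4 * vision + 0.3 * acoustic + 0.3 * vibration

# No single sensor is alarming on its own, but the fused score crosses
# the (illustrative) 0.5 review threshold:
score = fuse_scores(vision=0.2, acoustic=0.8, vibration=0.7)  # -> 0.53
if score > 0.5:
    print("asset flagged for inspection")
```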

Kenton Williston: Excellent. Well, that just leaves me to say thank you for joining us, Joe, and to ask where our listeners can find you online.

Joe Speed: It's not hard. If you Google "Joe Speed IoT," I think it's something stupid, like a million hits. On Twitter, I'm Joe Speeds, like he goes too fast. And feel free to hit me up on LinkedIn as well. I'm always happy to help anyone here in the community.

Kenton Williston: Thanks so much for listening. As always, if you enjoyed listening today, please make sure to support us by subscribing and rating us on your favorite podcast app. And, if you want to chat more about industrial technology, make sure to tweet us @insight.tech. This has been the IoT Chat podcast. Join us next time for more conversations with industry leaders at the forefront of IoT design.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

About the Author

Kenton Williston is an Editorial Consultant to insight.tech and previously served as the Editor-in-Chief of the publication as well as the editor of its predecessor publication, the Embedded Innovator magazine. Kenton received his B.S. in Electrical Engineering in 2000 and has been writing about embedded computing and IoT ever since.
