
Multisensory AI: The Future of Predictive Maintenance

Rustom Kanga

Downtime is a costly killer. But traditional predictive maintenance methods often fall short. Discover how multisensory AI is used to uplevel equipment maintenance.

Multisensory AI uses sight, sound, and smell to accurately predict potential equipment failures, even with limited training data. This innovative approach can help businesses reduce downtime, improve efficiency, and save costs.

In this podcast, we explore how to successfully implement multisensory AI into your existing infrastructure and unlock its full potential.

Listen Here

Apple Podcasts | Spotify | Amazon Music

Our Guest: iOmniscient

Our guest this episode is Rustom Kanga, Co-Founder and CEO of iOmniscient, an AI-based video analytics solution provider. Rustom founded the company 23 years ago, before AI was “fashionable.” Today, he works with his team to offer smart automated solutions across industries around the world.

Podcast Topics

Rustom answers our questions about:

  • 2:36 – Limitations of traditional predictive maintenance
  • 4:17 – A multisensory and intuitive AI approach
  • 7:23 – Training AI to emulate human intelligence
  • 8:43 – Providing accurate and valuable results
  • 12:54 – Investing in a multisensory AI approach
  • 14:40 – How businesses leverage intuitive AI
  • 18:16 – Partnerships and technologies behind success
  • 19:36 – The future of multisensory and intuitive AI

Related Content

To learn more about multisensory AI, read Multisensory AI: Reduce Downtime and Boost Efficiency and Multisensory AI Revolutionizes Real-Time Analytics. For the latest innovations from iOmniscient, follow them on X/Twitter at @iOmniscient1 and LinkedIn.

Transcript

Christina Cardoza: Hello and welcome to “insight.tech Talk,” where we explore the latest IoT, edge, AI, and network technology trends and innovations. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today I’m joined by Rustom Kanga from iOmniscient to talk about the future of predictive maintenance. Hi, Rustom. Thanks for joining us.

Rustom Kanga: Hello, Christina.

Christina Cardoza: Before we jump into the conversation, I’d love to get to know a little bit more about you and your company. So, what can you tell us about what you guys do there?

Rustom Kanga: I’m Rustom Kanga, I’m the Co-Founder and CEO of iOmniscient. We do autonomous, multisensory, AI-based analytics. Autonomous means there’s usually no human involvement, or very little human involvement. Multisensory refers to the fact that humans use their eyes, their ears, their nose, to understand their environment, and we do the same. We do video analysis, we do sound analysis, we do smell analysis, and with that we understand what’s happening in the environment.

And we’ve been doing this for the last 23 years, so we’ve been doing artificial intelligence long before it became fashionable, and hence we’ve developed a whole bunch of capabilities which go far beyond what is currently talked about in terms of AI. We’ve implemented our systems in about 70 countries around the world in a number of different industries. This is technology that goes across many industries and many areas of interest for our customers. Today we are going to, of course, talk about how this technology can be used for predictive and preventative maintenance.

Christina Cardoza: Absolutely. And I’m looking forward to digging in, especially when you talk about all these different industries you’re working in—railroads, airports. It’s extremely important that equipment doesn’t go down, nothing breaks, that we can predict things and don’t have any downtime. This is something I think all these industries have been striving toward for quite some time, but it doesn’t seem like we’ve completely achieved it; there are still accidents, and the unexpected still happens. So I’m curious, when it comes to detecting equipment failure and predictive maintenance, what have been the limitations of traditional approaches?

Rustom Kanga: Today, when people talk of artificial intelligence, they normally equate it to deep learning and machine learning technologies. And you know what that means, I’m sure. For example, if you want to detect a dog, you’d get 50,000 images of dogs, you’d label them, and you say, “This is a dog, that’s a dog, that’s a dog, that’s a dog.” And then you would train your system, and once you’ve trained your system the next time a dog comes along, you’d know it’s a dog. That’s how deep learning works.

The challenge with maintenance systems is that when you install some new equipment, you don’t have any history of how that equipment will break down or when it’ll break down. So the challenge you have is you don’t have any data for doing your deep learning. And so you need to be able to predict what’s going to happen without the data that you can use for deep learning and machine learning. And that’s where we use some of our other capabilities.

Christina Cardoza: Yeah, that image that you just described—that is how I often hear thought leaders talk about predictive maintenance: machine learning collecting all this data and detecting patterns. But, to your point, it goes beyond that. And if you’re implementing new technology or new equipment, you often find that you don’t have that data and you don’t have those patterns.

I want to talk first, though, about the multisensory approach that you brought up in your introduction: how does it address some of those challenges you just mentioned and bring more of a natural, I guess, human-like inspection to predictive maintenance?

Rustom Kanga: Well, it doesn’t involve human inspection. First of all, as we saw, you don’t have any data, right, for predicting how the product will break down. Very often with new products you might have a mean time between failures of, say, 10 years. That means you have to wait 10 years before you actually know how or when or why it’ll break down. So you don’t have any data, which means you cannot do any deep learning.

So what are the alternatives? We have developed a capability called intuitive AI which uses some of the other aspects of how humans think. Artificial intelligence is all about emulating human intelligence, and humans don’t just use their memory function, which is essentially what deep learning attempts to replicate. Humans also use their logic function. They have deductive logic, inductive logic; they use intuition and creative capabilities and so on to make decisions on how the world works. So it’s very different to the way you’d expect a machine learning system to work.

So what we do is we use our abilities as a human to advise the system on what to look for, and then we use our multisensory capabilities to look for those symptoms. For instance, just as an example, if a conveyor belt has been put in place, has been installed, and we want to know if it is about to break down, what would you look for to predict that it’s not working well? You might listen to its sound, for instance; you might know that when it starts going clang, clang, clang, that something’s wrong in it. So we can use our ability to see the object, to hear it, to smell it, to tell us how it’s operating at any given time and whether it’s showing any of the symptoms that you’d expect it to show when it’s about to break down.
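
A minimal sketch of the kind of sound-based symptom check described above, for illustration only: it flags a conveyor belt whose audio clip shows unusual energy in an impact (“clang”) frequency band. The band limits, threshold factor, and function names are assumptions, not iOmniscient’s implementation.

```python
# Illustrative sketch only: flag a conveyor belt whose audio shows unusual
# energy in an impact ("clang") frequency band. The band limits and the
# threshold factor are assumptions, not values from iOmniscient.
import numpy as np

def band_energy(samples: np.ndarray, sample_rate: int,
                low_hz: float = 2000.0, high_hz: float = 6000.0) -> float:
    """Return the share of signal energy falling inside [low_hz, high_hz]."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    in_band = (freqs >= low_hz) & (freqs <= high_hz)
    total = spectrum.sum() or 1.0
    return float(spectrum[in_band].sum() / total)

def check_belt(clip: np.ndarray, sample_rate: int,
               baseline_share: float, factor: float = 3.0) -> bool:
    """Raise a maintenance flag if impact-band energy is well above baseline."""
    return band_energy(clip, sample_rate) > factor * baseline_share

# Example: one second of synthetic noise standing in for a microphone clip.
rate = 16_000
healthy = np.random.default_rng(0).normal(size=rate) * 0.01
baseline = band_energy(healthy, rate)
print("flag:", check_belt(healthy, rate, baseline))
```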

Christina Cardoza: That’s amazing. And of course there are no humans involved, but you’re adding the human-like elements into it, the things that somebody manually inspecting would look for—if anything’s smoking, if they smell anything, if they hear any abnormal noises. So, how do you train AI to be able to provide this, to be able to detect these things, when it is just artificial intelligence or a sensor on top of a system?

Rustom Kanga: Exactly how you said you do it: you tell the system what you’re likely to see. For instance, let’s say you’re looking at some equipment, and the most likely scenario is that it’s likely to rust, and if it rusts there’s a propensity for it to break down. You then tell your system to look for rust, and over time it’ll look for the changes in color. And if the system sees rust developing, it’ll start telling you that there’s something wrong with this equipment; it’s time you looked at replacing it or repairing it or whatever.
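
A minimal sketch of the “tell the system to look for rust” idea described above: it tracks the fraction of rust-colored pixels in camera frames and flags the asset once that fraction exceeds a level. The HSV color range and alert threshold are assumptions made for illustration, not iOmniscient’s method.

```python
# Illustrative sketch only: watch for rust-like color developing over time.
# The HSV range and alert threshold are assumptions for the example.
import cv2
import numpy as np

# Rough orange-brown hue range that often corresponds to surface rust.
RUST_LOW = np.array([5, 80, 40], dtype=np.uint8)
RUST_HIGH = np.array([25, 255, 200], dtype=np.uint8)

def rust_fraction(frame_bgr: np.ndarray) -> float:
    """Fraction of pixels whose color falls in the rust-like HSV range."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, RUST_LOW, RUST_HIGH)
    return float(np.count_nonzero(mask)) / mask.size

def needs_inspection(history: list[float], alert_level: float = 0.05) -> bool:
    """Flag the asset once the rust-colored area exceeds the alert level."""
    return bool(history) and history[-1] >= alert_level

# Example with a synthetic gray frame standing in for a camera image.
frame = np.full((480, 640, 3), 128, dtype=np.uint8)
history = [rust_fraction(frame)]
print("inspect:", needs_inspection(history))
```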

Christina Cardoza: Great. Now I want to go back to training the AI and the data sets—like we talked about, how do you do this for new equipment? I think there’s a misconception among a lot of providers out there that they need to do that extensive training that takes a long time; they need that data to uncover these patterns, to learn from them, to identify these abnormalities. So, how is your solution or your company able to do this with fewer data sets but ensure that it is accurate and it does provide value and benefits to the end user or organization?

Rustom Kanga: Well, as I said, the traditional approach is to do deep learning and machine learning, which requires massive data sets, and you just don’t have them in some practical situations. So you have to use other methods of human thinking to understand what is happening. And these are the methods which we call intuitive AI. They don’t require massive amounts of data; we can train our system with something like maybe 10 examples, or even fewer. And because you require so few data sets, you don’t need massive amounts of computing, you don’t need GPUs.

And so everything we do is done with very little training, with no GPUs. We work purely on the standard Intel CPUs, and we can still achieve accuracy. Let me give you an example of what I mean by achieving accuracy. We recently implemented a system for a driverless train system. They wanted to make sure that nobody walked in front of the train, because obviously it’s a driverless train and you have to stop it, and that requires just a simple intrusion system.

And there are hundreds of companies who do intrusion. In fact, camera companies provide intrusion systems as part of their—embedded into their cameras. And so the railway company we were talking to actually did that. They bought some cameras from a very reputable camera company and they could do the intrusion, the intrusion detection.

The only problem they had was they were getting something like 200 false alarms per camera per day, which made the whole system unusable. Then finally they set the criterion that they wanted no more than one false alarm across the entire network. And they found us, and they brought us in, and we could achieve that. And, in fact, with that particular train company we’ve been providing them with a safety system for their trains for the last five years.

So you can see that the techniques we use actually provide you with very high accuracy, much higher than you can get with some of these traditional approaches. In fact, with deep learning you have the significant issue that it has to keep learning continuously, almost forever. For instance, you know the example I gave you of detecting and recognizing dogs? You have 50,000 dogs, you train your system, you recognize the next dog that comes along; but if you haven’t trained your system on a particular, unique type of dog, then the system may not recognize the dog and you have to retrain the system. And this type of training goes on and on and on—it can be a forever training. You don’t necessarily require that in an intuitive-AI system, which is the type of technology we are talking about.

Christina Cardoza: Yeah, I could see this technology being useful in other scenarios too, beyond just different types of dogs. I know sometimes equipment moves around on a shop floor or things change, and if you move a camera and change its positioning, usually you have to retrain the AI because that relationship has changed. So it sounds like your system would be able to continue to provide results without having to be completely retrained if you move things around.

In that railroad example that you gave, you mentioned how they installed cameras to do some of the things they were looking to do. I know a lot of times manufacturing shops and railroad systems already have their cameras, and they’re monitoring for safety and other things. Now, if they wanted to take advantage of your capabilities on top of their already existing infrastructure, is that something that they would be able to do? Or does it require the installation of new hardware and devices?

Rustom Kanga: Well, in that example of the railway we use the existing cameras that they had put in in the first place. We can work with anybody’s cameras, anybody’s microphones. Of course the cameras are the eyes; we are only the brain. So the cameras have to be able to see what you want to see. We provide the intelligence, and we can work with existing infrastructure for video, for sound, for smell.

Smell is a very unique capability. Nobody makes the type of smell sensors that are required to actually smell industrial smells. So we have built our own e-Nose, which we provide our customers with. It’s a unique device with something like six sensors in it. You do get sensors in the market, of course, for single molecules. So if you wanted to detect carbon monoxide, you can get a sensor for carbon monoxide.

But most industrial chemicals are much more complex. For instance, even a cup of coffee has something like 400 different molecules in it. And so to understand that this is coffee and not tea you need a sensor of the type of our e-Nose, which has multiple sensors in it; by understanding the pattern that is generated across all those sensors, we know that it is this particular product rather than something else.
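
A minimal sketch of the multi-sensor pattern idea described above: a six-element reading is matched against stored reference patterns by cosine similarity. The sensor count, reference values, and labels are invented for the example; they are not e-Nose calibration data.

```python
# Illustrative sketch only: identify a smell from the pattern a multi-sensor
# array produces. The six-element reference vectors below are invented for
# the example; they are not e-Nose data.
import numpy as np

REFERENCES = {
    "coffee":  np.array([0.9, 0.2, 0.7, 0.1, 0.4, 0.3]),
    "tea":     np.array([0.5, 0.6, 0.2, 0.1, 0.3, 0.2]),
    "solvent": np.array([0.1, 0.9, 0.8, 0.7, 0.2, 0.6]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two sensor-response vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(reading: np.ndarray, min_score: float = 0.95) -> str:
    """Return the best-matching reference smell, or 'unknown' if nothing is close."""
    label, score = max(((name, cosine(reading, ref))
                        for name, ref in REFERENCES.items()),
                       key=lambda item: item[1])
    return label if score >= min_score else "unknown"

# Example: a reading close to the stored coffee pattern.
print(identify(np.array([0.88, 0.25, 0.65, 0.12, 0.42, 0.28])))
```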

Christina Cardoza: So I’m curious, I know we talked about the railroad example, but since your technology spans across all different types of industries, do you have any other use cases or customer examples that you can share with us?

Rustom Kanga: Of course. You know, we have something like 300 use cases that we’ve implemented across 30 different industries, and if you just look at predictive maintenance, it could be a conveyor belt, as I said, that is likely to break down, and you can understand whether it’s going to break down based on its sound. It might be a rubber belt used in an elevator; it might be products that might rust and you can detect the level of rusting just by watching it, by looking at it using a camera. You can use smell; you can use all these different senses to understand what is the current state of that product.

And in terms of examples across different industries, I’ll give you one which demonstrates the real value of a system like this in terms of its speed. Because you are not labeling 50,000 objects you can actually implement the system very quickly. We were invited into an airport to detect problems in their refuse rooms. Refuse rooms are the garbage rooms that they have under the airport. And this particular airport had 30 or 40 of them where the garbage from the airport and from the planes that land over there and so on—it’s all collected over there.

And of course when the garbage bags break and the bins overflow, you can have all sorts of other problems in those refuse rooms, so they wanted to keep these neat and tidy. And to make sure that they were neat and tidy, they decided to use artificial intelligence systems to do that. And they invited, I think it was about eight companies to come in and do POCs over there—proofs of concept. Now they said, “Take four weeks. Train your system and show us what you can do.”

And after four weeks nobody could do anything. So they said, “Take eight weeks.” Then they said, “Take twelve weeks and show us what you can do.” And none of those companies could actually produce a system that had any level of accuracy, just because of the number of variables involved. There are so many different things that can go wrong in that sort of environment.

And then finally they found us, and they asked us, “Can you come and show us what you can do?” So we sent in one of our engineers on a Tuesday afternoon, and on that Thursday morning we were able to demonstrate the system with something like 100% accuracy. That is how fast the system can be implemented, because you don’t have to go through 50,000 sets of data to train on. You don’t need massive amounts of computing, you don’t need GPUs. And that’s the beauty of intuitive AI.

Christina Cardoza: Yeah, that’s great. And you mentioned you’re also using Intel CPUs. I should mention that insight.tech and the “insight.tech Talk” are sponsored by Intel. So I’m curious: how do you work with Intel, and what’s the value of that partnership and the technology in making some of these use cases and solutions successful?

Rustom Kanga: We’ve been a partner of Intel for the last 23 years, and we work exclusively with Intel; we’ve had a very close and meaningful relationship with them over these years. And we find that the equipment they produce has the benefit that we can trust it, we know it’ll always work, we understand how it works. It’s always backward compatible, which is important for us because customers buy products for the long term. And because it delivers what we require, we do not need to use anybody else’s GPUs, and so on.

Christina Cardoza: Yeah, that’s great. And I’m sure they’re always staying on top of the latest innovation, so it allows you to scale and provides that flexibility as multisensory AI continues to evolve. So, since you said in the beginning you guys started with AI before it was fashionable, I’m curious, how has it evolved—this idea of multisensory intuitive AI? How has it evolved since you’ve started, and where do you think it still has to go, and how will the company be a part of that future?

Rustom Kanga: Well, it’s been a very long journey. When we first started we focused on trying to do things that were different to what everybody else did. There were a lot of people who used standard video analysis, video motion detection, and things like that to understand the environment. And we developed technologies that worked in very difficult, crowded, and complex scenes that positioned us well in the market.

Today we can do much more than that. We can—we do face recognition, number-plate recognition—it’s all privacy protected. As I said, we do video-, sound-, and smell-based systems. Where are we going? The technology keeps evolving, and we try and stay at the forefront of that technology.

For instance, in the past all such analytics required the sensor to be stationary. For instance, if you had a camera, it had to be stuck on a pole or a wall somewhere. But what happens when the camera itself is moving? For instance, on a body-worn camera where the person is moving around or on a drone or on a robot that’s walking around. So we have started evolving technologies that’ll work even on those sorts of moving cameras, and we call that “wild AI.” It works in very complex scenes, in moving environments where the sensor itself is moving.

Another example is where we’ve started—we’d initially developed our smell technology for industrial applications, for things like waste-management plants, for things like airport toilets. They clean the toilet every four hours, but it might start smelling after 20 minutes. So the toilet itself can say, “Hey, I’m Toilet Six, come back and clean me again.” It can be used in hospitals where a person might be incontinent and you can say to the nurse, “Please go and help the patient in room 24 and address the smell.” And so on. It can be used for industrial applications of a number of types.

But we also discovered that we could use the same device to smell the breath of a person, and using the breath we can diagnose early-stage lung cancer and breast cancer. Now, that’s not a product we’ve released yet. It is—we are going through the clinical tests and clinical trials that one needs to go through to release this as a medical device, but that’s where the future is. It’s unpredictable. We wouldn’t have imagined 20 years ago that we’d be developing devices for cancer detection, but that’s where we are going.

Christina Cardoza: It’s amazing to see, and I can’t wait to see what else the company comes up with and how you guys continue to transform industries and the future. I want to thank you, Rustom, again for coming onto the podcast; it’s been a great conversation.

And thanks to our listeners. I invite all of our listeners to follow us along on insight.tech as we continue to cover partners like iOmniscient and what they’re doing in this space, as well as follow along with iOmniscient on their website and their social media accounts so that you can see and be a part of some of these technologies and evolutions that are happening. So thank you all again, and until next time this has been “insight.tech Talk.”

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

About the Author

Christina Cardoza is an Editorial Director for insight.tech. Previously, she was the News Editor of the software development magazine SD Times and IT operations online publication ITOps Times. She received her bachelor’s degree in journalism from Stony Brook University, and has been writing about software development and technology throughout her entire career.
