Location intelligence is already a part of our everyday lives, from using our phones to get directions to finding a nearby restaurant. But businesses now are starting to see the transformative potential of location intelligence.
In fact, 95% of businesses now consider location intelligence to be essential, thanks to its ability to lower costs, improve customer experience, and enhance operations. But many businesses struggle to get the most out of their location data because it’s often siloed in different departments or systems.
AI and digital twins can help businesses to break down data silos and create a single, comprehensive view of their spaces in real time. AI can be used to analyze large volumes of location data and identify patterns and trends. Digital twins are virtual replicas of real-world objects or environments that can be used to track and monitor changes over time.
In this podcast, we discuss the importance of location intelligence, the use of AI and digital twins for tracking and monitoring, and how to implement AI-powered tracking and monitoring safely and securely.
Our Guest: Intel
Our guest this episode is Tony Franklin, General Manager and Senior Director of Federal and Aerospace Markets at Intel. Tony has worked at Intel for more than 18 years in various positions such as General Manager of Internet of Things Segments, Director of IoT/Embedded Technology Marketing and Communications, and Director of Strategic and Competitive Analysis.
Tony answers our questions about:
- (1:49) The importance of location intelligence
- (3:56) Businesses’ challenges with achieving real-time insights
- (6:19) The role digital twins and artificial intelligence play
- (8:50) Tools and technologies necessary for success
- (11:21) Using Intel® SceneScape, OpenVINO™, and Geti™ for location intelligence
- (17:19) Addressing privacy concerns with AI-powered tracking and monitoring
- (19:49) Future advancements of AI, digital twins, and location intelligence
To learn more about the importance of location intelligence, read The Power of Location Intelligence with AI and Digital Twins and Monitor, Track, and Analyze Any Space with Intel® SceneScape. For the latest innovations from Intel, follow them on Twitter @intel and on LinkedIn at Intel Corporation.
Christina Cardoza: Hello, and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Editorial Director of insight.tech. And today we’re going to be talking about using AI and digital twins to track and monitor environments with Tony Franklin from Intel. But before we jump into the conversation, let’s get to know our guest a bit more. Tony, thanks for joining us.
Tony Franklin: Thank you. Thank you for having me.
Christina Cardoza: What can you tell us about yourself and what you do at Intel?
Tony Franklin: Sure. I am in what’s called the Network and Edge Group, and so we are responsible for all those things and devices that are connected outside of data centers and traditional cloud enterprises, which, again, are usually areas that are in our daily lives. I specifically am responsible for the federal and aerospace markets, but what’s interesting about this particular market is when you think about federal governments, they have every application, every market—they have retail, they have manufacturing—they have, you name it—they have healthcare, etc. So it’s a pretty interesting space to be in.
Christina Cardoza: Yeah, absolutely. And monitoring environments in all of those areas that you just mentioned becomes extremely important. Like I mentioned in my introduction, we’re going to be talking about tracking and monitoring environments with AI and digital twins, also known as gaining this location intelligence. And we’ve talked about digital twins on the podcast before, but in the context of, like, a healthcare environment or a manufacturing environment, where you’re simulating these environments before you’re adding something to them.
So, tracking and monitoring in these situations for location intelligence sounds more real time rather than simulation. So can you tell us a little bit more about how you’re seeing digital twins being used in this space, and what’s the importance of location intelligence today to do this?
Tony Franklin: Yeah, absolutely. What’s funny is I think we’re used to location intelligence without even knowing it’s there. I mean, we all have maps on our phones. Anybody that’s had children, many times they use the Life360 app and you know exactly where someone’s at. You know how fast they’re moving, you know how long they’ve been there.
I literally was just reading an article last night, and while they didn’t say it, I know they were talking about UPS and how important location intelligence is to things like sustainability, right? They said 27% of greenhouse gas emissions in the US are from transportation. And for them, one mile—they looked at literally one mile on a map—could cost them $50 million if the location wasn’t accurate to get from point A to point B.
And so we are just used to it in our daily lives. I think on the business side, like a UPS, we are starting to understand how impactful it can be financially more and more. And in addition to location intelligence, I think what we’re starting to really understand is time-based spatial intelligence. So it’s not just the location, but do we really understand what’s going on around us, or around that asset or object or person in the time that it’s in.
And so digital twins allow you to recreate that space, and then also understand at the particular time—like you said, we’re talking about more real time, in general, but really adding on to the type of digital twins you were talking about. So, both real time, and, if I need to hit the rewind button and do analysis, I can do that also.
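The “rewind button” Tony mentions amounts to keeping timestamped snapshots of a scene so it can be replayed as well as observed live. A minimal sketch in Python (all names here are invented for illustration and are not part of SceneScape or any Intel product):

```python
from bisect import bisect_right
from dataclasses import dataclass, field

@dataclass
class SceneHistory:
    """Keeps timestamped snapshots of object positions so a space can be
    replayed ("rewound") as well as monitored in real time."""
    timestamps: list = field(default_factory=list)
    snapshots: list = field(default_factory=list)

    def record(self, t: float, objects: dict) -> None:
        # Append one snapshot: {object_id: (x, y)} observed at time t.
        self.timestamps.append(t)
        self.snapshots.append(dict(objects))

    def state_at(self, t: float) -> dict:
        # Return the most recent snapshot at or before time t.
        i = bisect_right(self.timestamps, t)
        return self.snapshots[i - 1] if i else {}

history = SceneHistory()
history.record(0.0, {"forklift-1": (0, 0)})
history.record(5.0, {"forklift-1": (3, 4)})
print(history.state_at(2.0))  # → {'forklift-1': (0, 0)}
```

A real system would persist these snapshots and attach richer metadata, but the core idea is the same: real-time queries and historical analysis read from one shared, time-indexed model of the space.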
Christina Cardoza: Yeah, absolutely. And we’ll get more into how to use digital twins for these types of capabilities. But you bring up a great point. A lot of these things are already just naturally integrated into our everyday lives as consumers, but then the advances in the technology, the capabilities keep evolving, and businesses want to take even greater advantage of all of these.
So I’m curious, on the consumer side it comes so naturally to us, but what are the challenges that businesses face to implement this and to really gain all the benefits of location intelligence and the technology that helps them get that?
Tony Franklin: Yeah. I think today one of the biggest challenges is just siloed data, and the siloed data coming from having a number of applications. Again, I’ll use the consumer side because it’s easy for us to relate. We have a ton of apps on our phones, but they work on that phone, they work together. It doesn’t mean the data that comes in between the apps works together.
And so in businesses many times I’ll have an app to monitor my physical security, but I’ll have another app to monitor, say, the robots that are going around in a factory. And they all have cameras, they all have sensory data, but they’re not connected. I may have another app that’s taking in all of the weather data or environmental data within or outside of my particular area, but all of this data is sitting in different silos. And so how do I connect this data to increase my situational awareness and to be able to make better decisions? And with better decisions ideally I’m either saving money, I have an opportunity to make money, I’m creating a more safe environment—maybe saving lives or just reducing injuries, etc.
And so I think that’s one of the biggest challenges. I think the other challenge is sometimes a mental shift. Like you said, we’re so used to it on the consumer side, and I do think some of this is changing with demographics. Like I think about my son, where I look at the video games they have, and they are so realistic. And a lot of the technology that we’re using, or that we’re used to, is coming from games. Within games you can see everything in the environment—it’s 3D. I know location, I have multiple sensory data that’s coming in—whether it’s sound and environmental and etc. And all of that is integrated. And so we’re just more and more starting to want to incorporate that into business, our day-to-day lives and business, and use that to actually have a financial impact.
Christina Cardoza: Yeah. And building on that video gaming example, it’s 360—you can look around and you can see the entire environment everywhere. But especially with the data silo I think there’s also just so much data that businesses can have access to today. So it’s the addition of having data in all these different places, being able to connect it, but then also being able to derive the valuable insights from it. Otherwise it’s just an overwhelming load of data coming at you.
Tony Franklin: That’s right.
Christina Cardoza: So how do you see the artificial intelligence and digital twins in this context being able to overcome those challenges, overcome the data silos, and really give you that real-time location intelligence that businesses are looking for?
Tony Franklin: Yeah. What’s valuable about digital twins is it naturally creates an abstraction, right? So you’re re-creating essentially—obviously we know a digital twin is a digital replica of the real world. And so what you’re generally doing analysis on is the replica, not the actual data coming in. So you’re doing analysis on the data coming in, but then you’re also applying it to this digital replica. And so that digital replica now can make the data available to multiple applications.
Now you need to be able to use standard-based technology—so, standard-based technology that allows you to communicate with multiple applications so that data’s available to them—standard-based technology that allows you to apply different types of AI. You may need one type of AI to identify certain animals or certain people or assets; you may need another to identify different cars or weather or more physics-like models. So, understanding the environment better.
So you need an application that allows you to be able to inject that data. And so by having that replica that allows you to expose the data, and it also—from a data silo perspective—it keeps you from being locked into a particular vendor. You can have—what’s interesting is there are applications out there.
I like to use some of the home-monitoring systems as an example. I can buy a door camera or an outside camera, but it’s all within the context of that particular company. Rather than that, I could buy the best camera for outside and a better camera for inside, and I can use a different compute model, whether it’s a PC or whatever I want to use, where I can open up, give myself flexibility, and make that data more available. And, again, with the digital twin that data can come in, I can replicate it, I can apply AI to it, etc., using the other technology that’s available to me.
Christina Cardoza: So you mentioned using standard-based technologies. Obviously when businesses want to implement AI and digital twins they need some technology, some hardware, some software to help them do this. And on the AI side I know Intel has OpenVINO™ and the Geti™ toolkit. Do you see those technologies being used to gain this location intelligence? And also, what technologies are available for digital twins that businesses can take advantage of to successfully deploy these sensors and capabilities out in their environments?
Tony Franklin: Yeah, absolutely. So, you mentioned those two products. And when you think about the AI journey that customers and developers have to go on—like you said, there’s a ton of data. So you need to be able to label the data to make the data available to you, whether it’s streaming data, in this case if we’re talking real time. Then you have OpenVINO, which will allow you to apply inference to that data coming in and to use a range of compute technologies—you know, pick the best compute for the particular data coming in.
You then mentioned Geti on the other end, where—well, it’s really on both ends—where I’m bringing this data in, I’m applying inference, I’m continuing a loop, and Geti allows you to train the data, which you can then apply back on the front side for inference when you actually implement it. And it allows you to do it quickly instead of needing necessarily thousands and thousands of images, if we’re talking images for computer vision—you can do it with fewer images, and everyone doesn’t need a PhD in data science. That’s what Geti is for.
And in the middle we have something called Intel SceneScape, which uses OpenVINO. So it brings in the sensor data; OpenVINO will then apply inference so I can do object detection, I can do object classification, etc. I can use the Open Model Zoo, which is a range of models from all of the partners that we have and work with. Then I can implement that with SceneScape, and then I can use Geti to take this data to continue training and then to apply the new training data.
So, again, it’s a full spectrum. Back to your question about AI—like you said, there’s a ton of data, and these allow you to simplify that journey, if you will, to make that data available and usable in an impactful way—something that’s measurable.
Christina Cardoza: I always love the examples of OpenVINO and Geti. Because obviously AI is a big thing that a lot of businesses want to achieve and do, and they don’t have a lot of the knowledge or skillset available in-house, but with Geti it connects developers and business users together. So business users can help out and be a part of it and make these models, and developers can focus on the priority AI tasks.
But tell me a little bit more about SceneScape, because I think this is one of the first times I’m hearing about this technology—is this a new technology from Intel? And what is the audience—like OpenVINO you have developers, Geti you have business users. What is the end user for Intel SceneScape, and where do you see the biggest opportunities for this technology?
Tony Franklin: Yeah. Like Geti, it’s for the end users, and it’s really a software framework that relies on the large ecosystem that Intel has and that we work with. And so, like OpenVINO and like Geti, it’s intended to simplify making sense of the data that you have, like you said, without necessarily needing a PhD. In the case of SceneScape, think of it as sitting in the middle of OpenVINO and Geti. It definitely uses OpenVINO, but it can use both. It really simplifies being able to create that digital twin; it simplifies being able to connect multiple sensor types and make that data available to multiple applications.
So a simple way I put it is it allows you to use any sensor for any application to monitor and track any space. That’s essentially what it does. So whether you have—obviously video is ubiquitous. We’re so used to video—we’re on video right now, so we’re used to video.
But there are other sensors that allow you to increase situational awareness for your environment. You could have LiDAR, which all of the electric vehicles and autonomous vehicles have. You can have environmental sensors, temperature, etc. We’ve even heard of things like radiation sensors, sound sensors, etc. Bring in all of that data, as well as text data.
Take scan data in some retail locations: they actually want to be able to scan. We know when you go to the grocery store they have all the labels. I want to scan that, but I want to associate it with the location and the environment where that particular food item is.
And then we usually take 3D maps—whether it’s standard 2D or 3D maps. You can do that with your phone; with most iPhones today you can take a 3D map of the environment. Some people don’t even know that you can capture a really nice 3D environment with your phone, or there are public companies that do it, or you can use simple things like Google Maps.
Our lead developer actually just uses Google Maps, and he uses SceneScape for his home monitoring, with whatever camera he wants to use, and he uses AI to tell the difference between, say, a UPS truck and a regular truck going by. And so, again, that’s AI. So, again, these tools are allowing the end users—and, from an OpenVINO perspective, the developers—to just make it easy to implement AI technology in an open, standard way, and leverage the best computing technology underneath it.
Christina Cardoza: Yeah, I love that, because obviously, like AI and digital twins, businesses hear all the benefits about it, but then it can be intimidating—how do you get started? How do you successfully implement this? So it’s great to see companies like Intel that are really making this easy for business users and making it more accessible to start adding some of these things.
You mentioned some sensors that these businesses can add with these technologies. And in the beginning we talked a little bit about the different industries, especially in the federal space, where you can apply some of these. So I’m curious if you have any case studies or industry examples you can share with our listeners about exactly how you’ve seen this technology put in place. What were the problems, the solutions, the results, things like that?
Tony Franklin: Yeah, sure. I’d say before specific examples, the one common need or benefit that’s available to the customers that have been using SceneScape is they need to understand more about the environment—either that they’re in or that they’re monitoring. That’s one. And they need to be able to connect the sensors and the data and make it available. So generally they already have something, they’re monitoring something, and they want to increase the use of that data and additional data, and, again, let them get more situational awareness from it.
Some examples—think about an airport. That’s a common area; we all fly. You go to the airport, and they need to track where people are congregating, they need to track queue times. How long are the lines? In some cases, particularly in the early stages of Covid, they needed to track some bodily measurements—they have the forehead sensors when you come into some of the TSA areas—and make sure you’re socially distanced. Do you have lost luggage? So you can track: has luggage been sitting someplace with no one near it for too long?
So that’s another situation where, again, we have a number of sensors you are already monitoring—airports have spaces that they’re already monitoring, but now they need more information and they need to connect this data. This sensor that’s looking at the forehead generally isn’t connected to the cameras that are looking at the queue line. Well, now they need to be; now they need to be connected.
And I don’t need to just look at Terminal A, Gate 2, if you will. I need all the terminals, and I need all the gates, and I need to see it in a single pane of glass. And that’s one of the benefits that SceneScape allows you to do. It builds up the hierarchy, and it really associates assets and objects. So it helps to build relationships between—oh, I see this person and I see they’ve been in line for 30 minutes, but I see that they have a high temperature but they’re not socially distanced. Or I see this person was with a bag and they were moving with the bag, and now the bag stopped but they kept moving and the bag is sitting stationary. So, again, it helps you with motion tracking in the particular environment. So that’s one general example that we all usually can understand is at an airport.
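The abandoned-luggage scenario Tony describes reduces to a simple rule over tracked objects: flag a bag when its associated person has moved away while the bag sits still. A toy Python sketch (function names, thresholds, and data shapes are invented for illustration, not drawn from SceneScape):

```python
import math

def abandoned_bags(people: dict, bags: dict, owner_of: dict,
                   max_distance: float = 10.0,
                   stationary_secs: float = 300.0) -> list:
    """Flag bags whose owner has moved away while the bag sits still.

    people/bags map ids to (x, y, seconds_stationary); owner_of maps
    bag id -> person id, an association built while they moved together."""
    flagged = []
    for bag_id, (bx, by, still) in bags.items():
        owner = owner_of.get(bag_id)
        if owner not in people:
            continue  # owner no longer tracked in this scene
        px, py, _ = people[owner]
        dist = math.hypot(px - bx, py - by)
        if dist > max_distance and still > stationary_secs:
            flagged.append(bag_id)
    return flagged

people = {"p1": (120.0, 40.0, 0.0)}   # owner has walked far away
bags = {"b1": (15.0, 12.0, 600.0)}    # bag stationary for 10 minutes
print(abandoned_bags(people, bags, {"b1": "p1"}))  # → ['b1']
```

The hard part in practice is everything upstream of this rule—detecting objects across many cameras, keeping consistent identities, and building the person-to-bag association—which is exactly the kind of cross-sensor relationship a scene-level digital twin maintains.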
Christina Cardoza: Yeah, I love those examples. When you think about cameras and monitoring and awareness, it’s typically associated with security or tracking. And this is really to not only help with security but to help with business operations. Like you said, like somebody waiting in line, they can deploy more registers or have somebody go immediately over to somebody who’s been waiting too long.
I know one concern that people do have is when they are being tracked or when cameras are on them is just making sure that their privacy is protected. So can you talk a little bit about how Intel SceneScape does this? Like you said, it’s people tracking or behavior tracking, but not necessarily tracking any identifiable information.
Tony Franklin: Right. With our asset tracking—and we actually don’t do anything like facial recognition—what we deploy is just looking at detecting the actual object. Is it a person, is it a thing, is it a car? We want to identify what it is: we want to identify distance, we want to identify motion. But, yes, privacy is absolutely important. So we’re inferring from the data but then allowing the customers—based on their own application—to implement what they choose for their particular application. And to your point, they can do it privately today or in the future.
One of the use cases I’m still waiting for all of us to be able to implement is the patient digital twin. I have a doctor’s appointment this afternoon, actually. And for anybody that’s been to the doctor, you’ve got different medical records at different places, and all of the data’s not together, and they’re not using my real-time data with my historical data and using that against just reams and reams of other medical history from across many patients to apply to me.
So I would love to see a patient digital twin that’s constantly being updated; that would be ideal. But today we have applications that, maybe we’re not quite there yet, but how about just tracking instruments, just the medical instruments. Before surgery, I had 10 instruments, and I want to make sure when I’m finished with surgery, I have 10 instruments—they’re not inadvertently left someplace where they shouldn’t be. And so that’s just basic.
Where are the crash carts in the hospital? I want to get to the crash carts as quickly as possible. Or where are those check-in carts, where before you can actually get anything done you have to pay for the medical services that are going to come. They don’t let you go too far before you actually pay. So where are those? So there are immediate applications today that can help, as you said, with business operations. And then there are the future-state ones, which I think we’re all waiting for—I want my patient digital twin.
Christina Cardoza: Absolutely. It all comes back to that data-silo challenge we were talking about. I can’t tell you how many times at a doctor I forget to give them my historical information, like my family history, just because I’ve given it so many times I expect it to be on file. And then I’ve mentioned it and they’re like, “You didn’t tell us that.” “Well, I told you last time, or I told my last doctor.” So, definitely waiting to see that also.
And it seems like AI and digital twins are changing every day. Capabilities are being added; the space is rapidly evolving. So where do you think it’s going to go? How do we get to a place where it’s more widely adopted and we see some of these use cases and capabilities that we’re looking for and that would really improve lives?
Tony Franklin: I think it’s coming. I think it’s one of these—I’ll say technological evolutions. I won’t call it a transformation, but an evolution that at some point is just going to hit a curve. We’re just so used to it. I mean, how many people use Alexa on an Amazon Echo, Apple’s Siri, or Google Earth? These cars that are driving around have more sensors in them now than they ever had. They’re basically driving computers with tires on them.
And so it’s as if it’s happening, but we’re not always consciously paying attention to it. It just sort of happens. I mentioned to somebody the other day, I said, “I don’t remember ever asking for an iPhone, but I know I use it a lot.” And now that I have it I don’t know that I could actually live without it. And so I think companies are starting to realize—wow, I can de-silo my data; and as I make relationships or connections between the data that I have and between the systems that I have across the range of my applications—not just one room, not one floor, not one building, but a campus, just as an example—I actually can start to get real value, and that can impact my business. Again, my bottom line—I can make more money, I can save more money.
I think about traffic as a use case. It could save lives. One example we talk about often—and we’re seeing customers look at SceneScape with this application—is many cars today, they have the camera sensors. You just think about backup sensors or the front cameras and LiDAR, etc. And most intersections have the cameras at the actual intersections. They don’t talk to each other today.
Well, what if I have a car that’s coming in too fast, and I have a camera that can see pedestrians coming around a blind spot? I want the car to know that and start braking automatically. Right now, for most cars, if I’m coming up on another car too fast it will automatically start braking. It doesn’t do that when a person is coming around the corner and the car can’t see them. That’s an application that can be applied today, and cities are actually looking at those types of applications today.
Christina Cardoza: Yeah. I love all of these examples because they’re so relatable, you don’t even realize it. Like you mentioned the sensors in your car. I go into a different car that’s maybe an older model than my car, and I expect it to have the backup camera or the lane-changing alert, that I’ll just go into the lane, and I’m like, “Oh, that was bad.” Because I rely on technology so much. So I can’t wait to see how some of these things get implemented more, especially with Intel and Intel SceneScape, and how you guys continue to evolve this space.
But unfortunately we are running a little bit out of time. So, before we go, I just want to throw it back to you one last time, if there’s any final thoughts or key takeaways you want to leave the listeners with today. What they should know about what’s coming, or location intelligence, the importance of digital twins, anything like that?
Tony Franklin: Yeah, I’m going to piggyback off of something you just said. You get into the car and, you know, it has the lane change, and you’re just so used to everything around you. But we take for granted that our brains do that. We know how fast the car is going; we know whether somebody’s coming. Cameras don’t just know that. They can see it, but they don’t know how far it is, necessarily. They don’t know how fast the car is going.
And that is why AI and the integration of these technologies and sensor data is so important. It allows now these systems to be more intelligent and to actually understand the environment. Again, time-based spatial intelligence—distance, time, speed, relationships between objects. And that’s exactly what we’re working on.
You mentioned some of the technologies that we have. We’re working with our large ecosystem and community, and we want to continue to make it easy for these companies to implement this technology and have an actual financial impact on their businesses. So it’s an exciting time, and we’re looking forward to helping these companies make a difference.
Christina Cardoza: Yeah, can’t wait to see it. And, like you said, I know you guys work with a whole ecosystem of partners in all of these different areas. So, looking forward to seeing how you partner up with different companies to make this happen. I just want to thank you again for the insightful and informative conversation today. And thanks to our listeners for tuning in. Until next time, this has been the IoT Chat.
The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.
This transcript was edited by Erin Noble, copy editor.