Transforming Healthcare Operations and Patient Care with AI
When your organization delivers patient care, every process behind the scenes must be optimized to keep the focus where it belongs—on the patient. But does your infrastructure support that goal effectively?
AI has become central to driving healthcare efficiencies and improving operations—both in patient-facing care and within the complexities of medical infrastructure. As highlighted in a recent report from CCS Insight, this trend continues to grow, reshaping how healthcare systems operate at every level.
We recently caught up with Ian Fogg, Research Director of Network Innovation at the research firm CCS Insight, to talk about some of his recent findings in the healthcare space. He was joined by Alex Flores, Director of the Global Health Solutions Vertical at Intel, to discuss the impact of AI on healthcare, how data figures into the equation and what we can learn from that data, and where healthcare stands on the question of cloud vs. edge (Video 1).
What can you tell us about this latest CCS Insight report and its findings?
Ian Fogg: One finding, I think, is just how extensive AI usage in healthcare already is. In the last couple of years, it has really arrived in the popular mindset and mainstream media, but it’s clear that in healthcare it was already well embedded and widely used. As of August 2024, the FDA had approved 950 AI-enabled medical devices across all categories. That’s an enormous number, and of course it’s growing all the time.
I think the other thing that’s really striking is how much AI is moving from diagnostics and imaging and research into other parts of the healthcare ecosystem—organizational tasks, room management, the tying together of disparate systems, multimodal-input transcription. There’s just this enormous, burgeoning range of activities right across the sector.
And it’s not only devices, and not only things directly related to the care itself; there are systems in hospitals and offices and buildings improving operations and efficiency, but as a consumer you don’t see them happening right up front.
What are you seeing in the healthcare space from an engineering perspective?
Alex Flores: As Ian mentioned, we are absolutely seeing AI being rapidly adopted in healthcare; a lot of new compute technology is coming into play. And, again, what’s interesting is that a lot of that is analytics going on behind the scenes—which is where we want it to be. But it does impact the clinician’s workflow, hopefully allowing them to do their job faster, better, and more easily, so that they can spend more time with the patient.
Or think of the importance of latency. When a radiologist in a triaging situation brings up an image, they want to be able to see that image in real time, or near-real time, because every second counts. Technologies like compression and decompression are all there working in the background.
And Intel is also there in the background, really looking at how we can optimize medical-device technology—its workflows, its algorithms—so that the clinician can have that real-time or near-real-time experience. And if it’s done correctly, it’s seamless.
Another thing that’s unique about healthcare is that roughly one-third of the world’s data is coming from this space, and maybe 5 percent of it is actually turned into actionable insights. So there’s this tremendous opportunity to use AI to really unlock those insights from that data.
Then you layer in some of the macro trends that are happening in healthcare, including an aging population and the fact that people are getting sicker and being diagnosed with multiple chronic diseases. And then you have the fact that there’s a global shortage of clinicians.
Because of that, AI, and the rapid adoption of it, becomes even more important in allowing clinicians to increase their efficiency so that they can really focus on the patient and on patient outcomes.
Can you expand on the data aspect of healthcare?
Ian Fogg: Alex said that AI is making clinicians more efficient, and you can see that in the way that data is being analyzed; you can see data volumes going up enormously. One study I remember said that the size of a CT scan could be 250 megabytes for an abdomen, a staggering one gigabyte for the heart, and that digital pathology could be even greater. Those are enormous, enormous amounts of data for a single scan. Compare that with a smartphone camera image that might be 5 megabytes.
And one of the other things that’s striking is that you can’t use the same techniques to compress medical-imaging data that you can use for a photograph, because the tools used to compress a photograph are lossy; they’re perceptual-compression algorithms. You can’t use those for medical imaging. You have to look at the full image, because you need all that detail to spot irregularities in the scan. So that just makes the challenge even harder.
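To make that lossy-versus-lossless distinction concrete, here is a minimal Python sketch. It uses a synthetic 16-bit slice as a stand-in for real scan data (no actual DICOM pipeline is involved): a lossless codec such as zlib reproduces every pixel bit for bit, while even a mild quantization step, the kind of detail-discarding that perceptual codecs rely on, throws away exactly the fine structure a diagnostic read depends on.

```python
import zlib
import numpy as np

# Synthetic stand-in for one 16-bit CT slice (a smooth phantom, not real scan data)
yy, xx = np.mgrid[0:512, 0:512]
slice_16bit = ((xx + yy) * 4).astype(np.uint16)  # values 0..4088

# Lossless route: every bit survives the round trip
compressed = zlib.compress(slice_16bit.tobytes(), level=9)
restored = np.frombuffer(zlib.decompress(compressed), dtype=np.uint16).reshape(512, 512)
assert np.array_equal(slice_16bit, restored)  # bit-exact reconstruction

# Lossy route, roughly what perceptual codecs do: quietly discard fine detail
lossy = (slice_16bit >> 4) << 4  # zero out the 4 least significant bits
max_err = int(np.abs(slice_16bit.astype(np.int32) - lossy.astype(np.int32)).max())
print(f"lossless: {len(compressed):,} bytes, zero error; lossy: max per-pixel error {max_err}")
```

On real imaging data the compression ratios are far less favorable than on this smooth phantom, which is part of why the data volumes Fogg describes are so hard to move and store.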
So how does AI, and AI at the edge, come into play?
Ian Fogg: AI has two slightly competing implications. One is that it means you can analyze the data more quickly and have an efficiency benefit. But, on the other hand, one of the companies we talked about in the report framed it the other way around and said, “Actually, because you’ve got this AI tool that can analyze more data, what you can do is analyze a greater part of a biopsy.” That means that if there are just a few cells in a cancer scan that are irregular, you are more likely to spot them because you’ve scanned a bigger sample.
That means your scan is more accurate, which means you’ll identify problems and healthcare issues earlier, which means you’ll save costs and reduce the load on the healthcare system down the line. So there are some interesting dynamics there that are striking.
The other piece is that when the clinician needs a very responsive experience, if you can do it at the edge rather than the cloud, it can be faster. It’s also easier to make it private, because the data can stay closer to where it’s being captured, closer to the patient. And that’s a trend we’ve seen in many areas with AI: Things start in the cloud, and then as edge devices get more capable, things move onto edge devices to get that performance benefit.
What’s the best way to think about implementing AI in the healthcare industry?
Ian Fogg: Two things jump out. One, as I said before, is that it isn’t just about imaging and scans and computer vision; we’ve seen a lot of examples of AI being used to make the organizational aspects more efficient. Operating theaters are incredibly costly assets, and if you can schedule their cleaning and sanitization efficiently, you can reduce downtime between operations.
The other thing is what’s called federated learning. When you have a machine learning model, you want a broad and diverse data set to improve the quality of that model, but you also want to maintain privacy. A federated learning approach means that you can potentially have multiple hospitals or healthcare facilities contributing to the model, making the model more capable and more sophisticated, but the data that’s used to improve it remains within the facility.
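As a rough illustration of the pattern Fogg describes, here is a minimal federated-averaging sketch in Python. The hospitals, data, and model are synthetic placeholders; the point is the shape of the protocol: each site trains on data that never leaves it, and only model parameters travel to a central server to be averaged.

```python
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([2.0, -1.0, 0.5])

# Three hypothetical hospitals, each holding its own private data set
hospitals = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=200)
    hospitals.append((X, y))

def local_update(w, X, y, lr=0.1, epochs=5):
    """One site's training pass on its own data (plain linear regression by gradient descent)."""
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Federated averaging: model weights travel to the server; the raw patient data never does
w_global = np.zeros(3)
for _ in range(10):
    local_weights = [local_update(w_global.copy(), X, y) for X, y in hospitals]
    w_global = np.mean(local_weights, axis=0)

print("federated model:", np.round(w_global, 3), "| ground truth:", true_w)
```

Production systems typically layer secure aggregation and differential privacy on top of this basic loop, but the data-stays-local property shown here is the core of the approach.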
How do you approach the deployment question from an engineering perspective?
Alex Flores: It starts with giving our customers options. As Ian mentioned, a lot of organizations are deploying in the cloud. Other organizations are taking a hybrid approach: They want the benefits of the cloud, but they also want to be able to access data in real time or near-real time at the edge. And then there are other customers that are looking at an edge-only approach, maybe for reasons of cost or reasons of security and privacy. It’s about showing the customers the ecosystems, the choices, the benefits, and then seeing what’s best for their particular implementation.
Can you share any real-world examples?
Alex Flores: One that comes to mind is patient positioning during a scan. Oftentimes the patient isn’t positioned correctly on the table, so the technician has to redo the scan. It takes longer, and the patient may be exposed to additional radiation. So AI-based algorithms can help with positioning the patient correctly before the scan—that’s one example.
A second one is around one of the major bottlenecks for radiation therapy, and that is contouring: the delineation of radiation targets, or of nearby organs that might be at risk. Depending on the image quality, there can be a lot of error in that, so having AI-based contouring can help clinicians improve their planning and their process.
A last example I have is on ultrasound, and this is a personal story. My wife had a medical procedure a couple of years ago, and she told me afterward that the anesthesiologist had used an ultrasound machine to identify the vein where the anesthesia would be administered. And I got really excited. I said, “I know exactly what algorithm that was!” because we were working with the ultrasound manufacturer to optimize that algorithm.
Seeing the technology put to practical use in a real solution is a really great aspect of my job.
How will AI in healthcare evolve from here?
Ian Fogg: That ultrasound example is a fascinating one, because it’s about augmenting an existing tool. Ultrasound is a very cost-effective, very accessible type of scanning that’s been around for decades, and you are making it more effective.
Also, we’re clearly going to see cloud-based AI continue, but we’re going to see increasing use of AI on the edge, too, for that responsiveness piece. Another thing I think we’ll see is more small AI models come to market for a particular use or task. And as they become more portable, they’ll become even easier to put onto edge devices. We’ve seen that in other fields outside of healthcare.
Alex Flores: I do want to mention that when you’re doing AI at the edge on a device, power becomes a really important feature. If you think about it, it’s kind of a snowball effect: With more power, you need bigger fans; so you’re going to need a bigger device, a new form factor, and so forth. But oftentimes you don’t need that; you can run the right amount of AI at the edge without needing to redesign or reconfigure your device. There’s new technology, new compute that allows you to do that. So it’s going to be easier and easier to run at the edge.
Ian Fogg: I also think we’ll see a multimodal-input element: audio based, video based, still-image based, and text based. And that affects both how you interact with the model and what the model is able to understand and perceive about the world. So it might be able to use a camera to identify whether queues are forming in certain parts of a hospital.
Lastly, AI is very good at correlating trends across different data sets, and this could be used more in a public health context. AI models can’t establish causation, so when you find those correlations, you’ll still need to put them in front of a researcher or a clinician for validation. But they will probably uncover underlying causes of conditions and new ways of approaching healthcare that we haven’t thought about before.
Related Content
To learn more about healthcare at the edge, listen to AI at the Edge: Take the Leap in Healthcare. For the latest innovations from CCS Insight, follow it on X/Twitter at @ccsinsight and LinkedIn. For the latest innovations from Intel, follow it on X/Twitter at @intel and LinkedIn.
This article was edited by Erin Noble, copy editor.