

Advance AI Data Security and Collaboration in Healthcare


Mary Beth Chalk considers herself lucky.

A stage-one breast cancer survivor, Chalk says she’s lucky that she was perfectly positioned on the mammogram machine and lucky because the radiologist was able to spot a pinhead-sized tumor lodged against the chest wall.

But she doesn’t want to leave such life-altering diagnoses to luck. Instead, Chalk is keen on using AI in healthcare to improve outcomes for all patients. It’s why she co-founded BeeKeeperAI, a startup that enables secure collaboration between algorithm developers and healthcare institutions. BeeKeeperAI resulted from Chalk’s previous work at the University of California, San Francisco (UCSF), where she focused on industry collaborations that required accessing and computing with real-world, protected health information (PHI).

At UCSF, the roadblocks to AI development and implementation came into full view. There, Chalk noticed that innovation depended on collaborations between healthcare institutions and algorithm developers. But even when collaboration is possible, it takes an extremely long time because of worries over intellectual property (IP) and the privacy laws safeguarding PHI.

Such bottlenecks are unfortunate, Chalk says, because AI has tremendous potential for innovation in healthcare—algorithms detecting breast cancer at the earliest stages are only a fraction of what’s possible.


Confidential Computing Ensures AI Data Security

Chalk, Co-founder & Chief Commercial Officer at BeeKeeperAI, co-launched the company to help reduce roadblocks to data access for AI development by leveraging confidential computing, a hardware-first approach to security.

BeeKeeperAI’s software with embedded confidential computing provides a solution in which both the data and the intellectual property are fully protected at rest, in transit, and during computing. The operating principle behind confidential computing is the creation of a fully attested trusted execution environment (TEE). A TEE isolates the data and algorithm in the processor and memory, and uses hardware-based encryption keys to maintain Total Memory Encryption. Computing happens in these confidential environments, protecting both data and intellectual property.
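The attestation idea at the heart of a TEE can be illustrated with a short sketch. This is a conceptual simulation only, not BeeKeeperAI's or Intel's actual API: the `DataHolder` class, the `measure` function, and the placeholder algorithm code are all hypothetical names, and a SHA-256 digest stands in for the hardware-computed enclave measurement that real attestation relies on. The point is the workflow: the data steward pins the measurement of the approved algorithm and releases the decryption key only to an environment that proves it is running exactly that code.

```python
import hashlib
import hmac
import secrets

# Placeholder for the algorithm code the enclave will run.
ALGORITHM_CODE = b"def predict(scan): ..."

def measure(code: bytes) -> str:
    """A digest standing in for a hardware enclave measurement.
    Real TEEs (e.g., Intel SGX) compute this in hardware at enclave load."""
    return hashlib.sha256(code).hexdigest()

class DataHolder:
    """Releases the data-decryption key only to an attested environment."""
    def __init__(self, expected_measurement: str):
        self.expected = expected_measurement
        self.data_key = secrets.token_bytes(32)  # key protecting PHI at rest

    def release_key(self, reported_measurement: str) -> bytes:
        # Constant-time comparison, as in real attestation verification.
        if not hmac.compare_digest(reported_measurement, self.expected):
            raise PermissionError("attestation failed: untrusted environment")
        return self.data_key

# The data holder pins the measurement of the approved algorithm.
holder = DataHolder(expected_measurement=measure(ALGORITHM_CODE))

# An environment running the approved code obtains the key...
key = holder.release_key(measure(ALGORITHM_CODE))
assert len(key) == 32

# ...while tampered code is refused.
try:
    holder.release_key(measure(b"malicious code"))
except PermissionError as e:
    print(e)  # attestation failed: untrusted environment
```

In a real deployment, the measurement is computed by the processor rather than by software, so not even the machine's administrator can spoof it. That hardware root of trust is what lets the data steward keep control of PHI while a third party's algorithm computes on it.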

Paving the Way to AI Collaboration

EscrowAI is BeeKeeperAI’s zero-trust collaboration platform. It alleviates both pain points the sector routinely faces—processing patient health data securely and preserving intellectual property. EscrowAI allows data holders and algorithm developers to work together with “push-button ease,” Chalk says. Another advantage of the platform is thorough documentation for audit compliance. “Every action that’s taken on the platform is recorded and archived for complete traceability,” Chalk adds.

Such proof of data security is vital to demonstrate compliance with jurisdictional privacy protection regulations and to collect evidence that supports market clearance regulatory filings for medical devices, digital therapies, and pharmaceuticals. Under the hood, the solution integrates policy and cryptographic key management from Fortanix.

Intel® Software Guard Extensions (Intel® SGX) is built directly into Intel® Xeon® Scalable processors and enables the creation of isolated TEEs called enclaves. “We’ve been an Intel SGX user from the very beginning because it ensures the protection of both the algorithms and the data at runtime. And that’s a competitive differentiator for us,” Chalk says. “The enclaves eliminate any access by the virtual machine operating system, or the VM administrator, or even BeeKeeperAI. So that prevents any outside interference.”

Chalk is grateful that Intel provided a grant for the company to conduct a proof of design while the team was still at UCSF. “Intel has been an early and great partner for us,” Chalk says.

Confidential Computing Use Cases

The healthcare industry is very familiar with roadblocks to AI implementations, so solutions have been knocked around for a while. For example, artificially produced synthetic data, which has the characteristics of real-world data without compromising information, has been touted as a workaround for privacy and security challenges.

But, says Chalk, synthetic data is wholly inadequate. For one thing, when you scramble patient data, you introduce noise that’s not consistent with real-world data, she points out. Besides, in critical applications “you want an algorithm that has been validated and tested on real-world data,” Chalk says. “We would not trust a cancer-detecting algorithm based mostly on synthetic data to do its job accurately.”

Chalk is not convinced that we’re going to see large-scale adoption of AI in healthcare without confidential computing. But with it, new avenues open, such as when BeeKeeperAI helped Novartis address challenges related to a rare pediatric disease. The healthcare company had developed an algorithm but needed to validate it on real-world data sets. Beyond the familiar privacy concerns, Novartis faced another problem: the data set was limited to just 27 unique patients, so any level of de-identification would have destroyed the ability to test the algorithm.

BeeKeeperAI’s EscrowAI solution helped Novartis navigate these challenges, ensuring that the data would never be seen and that the associated IP would be protected. Novartis has since progressed in its studies in this field. “It was an extremely powerful demonstration of what’s possible,” Chalk says.

Chalk is also excited about the potential for confidential computing to assuage concerns related to HIPAA compliance because the patient information is never exposed, never seen, and is always under the control of the data steward. Such sightless computing might convince lawmakers to modify HIPAA in the future, Chalk hopes.

The Future of Confidential Computing in Healthcare

As for what’s coming down the pike, Chalk expects confidential computing to do its job at the edge, too. “Institutions that aren’t ready to push all their data into the cloud can leverage confidential computing for AI analytics at the edge,” she says. “It also allows algorithm developers to deploy securely into jurisdictions with restrictive data controls.”

Until now, healthcare has had to work with incomplete data. “Our healthcare treatment system has been built on a small percentage of the available information,” Chalk points out. But all that will change as confidential computing enables AI to realize its full potential in the field.

And the cancer survivor could not be happier about the brilliant possibilities, including the era of precision medicine. “The treatment that may be effective for you may not be effective for me. And so rather than all of us being treated as an average in a bell curve, we’re going to be able to be treated as a unique set of one,” Chalk says. “That gives me great comfort and great hope about the future of healthcare.”

And unlike Chalk’s, our healthcare outcomes need not depend mostly on luck.

Edited by Georganne Benesch, Associate Editorial Director

About the Author

Poornima Apte is a trained engineer turned technology writer. Her specialties run a gamut of technical topics from engineering, AI, IoT, to automation, robotics, 5G, and cybersecurity. Poornima's original reporting on Indian Americans moving to India in the wake of the country's economic boom won her an award from the South Asian Journalists’ Association. Follow her on LinkedIn.
