AI Poised To Revolutionize Healthcare—But How Can We Combat the Risks?

By Arti Raman, CEO and founder, Portal26.

There is no question that artificial intelligence (AI) has tremendous implications for the world of healthcare. The list of applications is long. From machine learning that analyzes medical images to chatbots that facilitate patient communication to predictive tools that identify high-risk patients, AI has the potential to enhance healthcare systems.

And that’s not to mention all the time AI can save on the backend by automating things like data entry and appointment scheduling, thereby granting healthcare professionals more time to focus on actually diagnosing and treating their patients.

Still, many in the industry have approached this new technology with trepidation. Potential violations of medical privacy laws are a perennial concern for healthcare organizations, and AI—with its seemingly opaque algorithms and its potential susceptibility to breach—can seem like more trouble than it’s worth on this front.

The reality is more complicated. Yes, generative AI, like any technology, does present a risk to healthcare organizations when handled without the proper precautions. In fact, nearly 60% of healthcare industry respondents to a recent Portal26 survey reported at least one GenAI-related security or misuse incident in the preceding six months. But with the right security mechanisms in place, the benefits of AI significantly outweigh the possible downsides.

The problem is that, as the same survey revealed, almost 30% of healthcare respondents said their employers have no guidelines or policies at all governing AI usage. Building those guidelines, and implementing AI as carefully and cautiously as possible, is essential to realizing this technology's true potential.

Secure full visibility into your AI systems

At the center of any concern around AI and medical privacy violations is protected health information (PHI). Under HIPAA, PHI encompasses any individually identifiable information relating to a patient's past, present, or future health condition, the healthcare that patient receives, or the payment for that care.

Feeding PHI into the large language models (LLMs) at the foundation of GenAI can pay massive dividends to healthcare organizations looking to optimize their day-to-day operations. But achieving this objective with a minimum of risk requires a proactive approach to precisely how that data is being used.

The key word here is “visibility.” If you are going to be feeding massive quantities of sensitive PHI into your systems, you need to ensure that you are aware of what it is, who is using it, and for what purpose. The need for this is especially acute given the rise of “shadow AI”—i.e., AI-related activities occurring out of sight of those tasked with overseeing it.
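
As a rough illustration of what that visibility might look like in practice, the Python sketch below wraps outbound GenAI requests in an audit layer that records who sent each prompt and for what stated purpose, and flags prompts that appear to contain PHI. The PHI patterns and the log_genai_request helper are hypothetical simplifications, not a production detector; a real deployment would rely on a vetted PHI-detection service.

    import logging
    import re
    from datetime import datetime, timezone

    # Hypothetical patterns for spotting likely PHI in outbound prompts.
    # A real deployment would use a vetted PHI/PII detection service.
    PHI_PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
        "dob": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
    }

    logging.basicConfig(level=logging.INFO)
    audit_log = logging.getLogger("genai_audit")

    def log_genai_request(user_id: str, purpose: str, prompt: str) -> list:
        """Record who sent what to the model, and why, before it leaves the org."""
        flags = [name for name, rx in PHI_PATTERNS.items() if rx.search(prompt)]
        audit_log.info(
            "time=%s user=%s purpose=%s prompt_chars=%d phi_flags=%s",
            datetime.now(timezone.utc).isoformat(),
            user_id, purpose, len(prompt), flags or "none",
        )
        return flags

    # A flagged prompt can then be blocked or routed for human review.
    flags = log_genai_request(
        user_id="dr_smith",
        purpose="discharge-summary-draft",
        prompt="Summarize the history for the patient with SSN 123-45-6789.",
    )
    if flags:
        print(f"Prompt held for review; possible PHI detected: {flags}")

The point of the sketch is the shape of the control, not the pattern matching: every request leaves a record of who, what, and why, which is precisely the record that shadow AI usage otherwise escapes.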

Unsurprisingly, 67% of healthcare industry respondents to the Portal26 survey are concerned about shadow AI at their companies. It is a problem that is growing daily—and one that can only be curtailed through increased visibility.

Dedicate significant resources to AI governance

For a healthcare organization, there can be no such thing as out-of-the-box AI technology: every instance of AI use needs to be aligned with a larger set of predetermined, ethics-based governing principles. Typically, these will affirm a commitment to doing no harm, to deploying transparent protocols and methodologies, to ensuring a minimum of bias, and, crucially, to keeping patients informed.

Just as every LLM exists in a state of constant flux, adapting to new inputs and conditions, so too should your AI governance be a living, adaptable framework. Specialized, cross-departmental groups should be convened to hash out matters of privacy and ethics, and physicians should be comprehensively trained in how to leverage the technology and interpret its results. Algorithms, meanwhile, should be rigorously tested, subjected to peer review, and constantly monitored once in action.

According to the Portal26 survey, 71% of healthcare organizations expressed concern about AI governance. Of course, the scale at which most healthcare organizations operate, and the many thousands of micro-decisions that must be made each day, make implementing these policies challenging. Here, dedicated AI governance tools can go a long way toward easing the process and keeping healthcare organizations in compliance.

Make sure your AI training data is encrypted and (where necessary) de-identified

Again: no healthcare AI program can get off the ground without being fed large volumes of potentially sensitive PHI. Inevitably, concentrating sensitive data turns an organization into a target for attackers, which is why it is understandable that the majority of respondents to the Portal26 survey expressed concerns about GenAI and data security.

And so, while visibility and adequate governance are both important, they will only get you so far without robust encryption mechanisms in place. To actually protect PHI, encryption needs to be in place at every step: before, during, and after the training process. Healthcare organizations also need to be able to de-identify PHI where necessary and, ideally, to replace that data with substitutes that preserve patient privacy.
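
To make the de-identification idea concrete, here is a minimal Python sketch, assuming a simple flat record layout, that drops direct identifiers and replaces the medical record number with a consistent pseudonymous token so records stay linkable without exposing the patient. The field names and salted-hash scheme are illustrative assumptions, not a complete method: HIPAA's Safe Harbor approach lists 18 categories of identifiers a real pipeline would have to address, and a keyed scheme such as HMAC, with the key held outside the dataset, would be the safer choice.

    import hashlib

    # Illustrative secret salt; in practice use a keyed scheme (e.g., HMAC)
    # with the key rotated and stored outside the dataset.
    SECRET_SALT = b"store-and-rotate-this-outside-the-dataset"

    def pseudonymize(value: str) -> str:
        """Map an identifier to a stable token so records remain linkable."""
        digest = hashlib.sha256(SECRET_SALT + value.encode("utf-8")).hexdigest()
        return "PT-" + digest[:12]

    # Hypothetical record layout; Safe Harbor covers many more identifier types.
    DIRECT_IDENTIFIERS = {"name", "mrn", "ssn", "phone"}

    def deidentify(record: dict) -> dict:
        out = {}
        for field, value in record.items():
            if field == "mrn":
                out["patient_token"] = pseudonymize(value)  # keep linkability
            elif field in DIRECT_IDENTIFIERS:
                continue  # drop other direct identifiers outright
            else:
                out[field] = value  # clinical content passes through
        return out

    record = {"name": "Jane Doe", "mrn": "00123456", "ssn": "123-45-6789",
              "phone": "555-0100", "diagnosis": "type 2 diabetes"}
    print(deidentify(record))
    # {'patient_token': 'PT-…', 'diagnosis': 'type 2 diabetes'}

The consistent token is the design point: analytics and model training can still group records belonging to the same patient, while the identifier itself never enters the training pipeline.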

The potential applications of AI in healthcare are so numerous and innovative that, collectively, they can rightfully be said to constitute a revolution in the field. But “revolution” can be a misleading word: it suggests a rapid overhaul of existing practices without a concomitant dedication to the kind of careful, multi-stakeholder, policy-driven decision-making that makes revolutions last. Only when the two are combined, radical technological innovation and careful security and privacy guardrails, can a true revolution in existing practices occur.

