Nov 20, 2023
AI Poised To Revolutionize Healthcare—But How Can We Combat the Risks?
By Arti Raman, CEO and founder, Portal26.
There is no question that artificial intelligence (AI) has tremendous implications for the world of healthcare. The list of applications is long: from using machine learning to analyze medical images, to facilitating patient communication through chatbots, to deploying predictive tools that identify high-risk patients, AI has the potential to enhance healthcare systems.
And that’s not to mention all the time AI can save on the backend by automating things like data entry and appointment scheduling, thereby granting healthcare professionals more time to focus on actually diagnosing and treating their patients.
Still, many in the industry have approached this new technology with trepidation. Potential violations of medical privacy laws are a perennial concern for healthcare organizations, and AI—with its seemingly opaque algorithms and its potential susceptibility to breach—can seem like more trouble than it’s worth on this front.
The reality is more complicated. Yes, generative AI does present a risk to healthcare organizations when handled without the proper precautions, as any technology does. In fact, nearly 60 percent of healthcare industry respondents to a recent survey conducted by Portal26 cited at least one GenAI-related security or misuse incident in the preceding six months. But with the right security mechanisms in place, the benefits of AI significantly outweigh the possible downsides.
The problem is that, as the same survey revealed, almost 30 percent of healthcare respondents said their employers lack any guidelines or policies at all surrounding AI usage. Building those guidelines, and implementing AI as carefully and cautiously as possible, is essential to realizing the true potential of this technology.
Secure full visibility into your AI systems
At the center of any concern around AI and medical privacy violations is protected health information (PHI). PHI encompasses any information relating to:
- The past, present, or future physical or mental health or condition of an individual.
- The provision of health care to that individual.
- The past, present, or future payment for the provision of health care to that individual.
Feeding PHI into the large language models (LLMs) at the foundation of GenAI can pay massive dividends to healthcare organizations looking to optimize their day-to-day operations. But achieving this objective with a minimum of risk requires a proactive approach to precisely how that data is used.
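To make that precaution concrete, here is a minimal sketch in Python of scrubbing recognizable PHI from a prompt before it ever reaches an external LLM. The pattern list, placeholder format, and example identifiers are illustrative assumptions only; a production system would rely on a dedicated, validated PHI-detection service rather than a handful of regexes.

```python
import re

# Hypothetical, minimal PHI scrubber: these pattern names and regexes are
# illustrative assumptions, not an exhaustive (or HIPAA-certified) list.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact_phi(text: str) -> tuple[str, list[str]]:
    """Replace recognizable PHI with typed placeholders before the text
    leaves the organization; also return which pattern types fired,
    so the event can be audited without re-storing the PHI itself."""
    hits = []
    for name, pattern in PHI_PATTERNS.items():
        if pattern.search(text):
            hits.append(name)
            text = pattern.sub(f"[REDACTED-{name.upper()}]", text)
    return text, hits

prompt = "Summarize the visit for MRN 84219073, callback 555-867-5309."
safe_prompt, findings = redact_phi(prompt)
print(safe_prompt)  # Summarize the visit for [REDACTED-MRN], callback [REDACTED-PHONE].
print(findings)     # ['mrn', 'phone']
```

Note that the sketch returns the *types* of PHI it found rather than the values themselves, so the redaction step can feed an audit trail without becoming a second repository of sensitive data.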
The key word here is “visibility.” If you are going to be feeding massive quantities of sensitive PHI into your systems, you need to ensure that you are aware of what it is, who is using it, and for what purpose. The need for this is especially acute given the rise of “shadow AI”—i.e., AI-related activities occurring out of sight of those tasked with overseeing it.
Unsurprisingly, 67 percent of healthcare industry respondents to the Portal26 survey are concerned about shadow AI at their companies. It is a problem that is growing daily, and one that can only be curtailed through increased visibility.
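As one illustration of what that visibility can look like in practice, the sketch below routes every GenAI request through a single audited entry point that records who asked, from which department, and for what purpose. The function names, log fields, and the send_to_llm stub are assumptions for the sake of the example, not a prescribed implementation or any specific vendor's API.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit trail for GenAI usage: field names and the
# send_to_llm stub below are illustrative assumptions.
audit_log = logging.getLogger("genai.audit")
logging.basicConfig(level=logging.INFO)

def send_to_llm(prompt: str) -> str:
    """Stand-in for whatever sanctioned LLM endpoint the organization uses."""
    return "<model response>"

def audited_completion(user_id: str, department: str,
                       purpose: str, prompt: str) -> str:
    """Route every GenAI call through one choke point that records
    who is using the system and for what purpose -- the 'visibility'
    that shrinks the space where shadow AI can operate."""
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "department": department,
        "purpose": purpose,
        "prompt_chars": len(prompt),  # log size, not content, to avoid re-storing PHI
    }))
    return send_to_llm(prompt)

audited_completion("dr.chen", "cardiology", "discharge-summary draft",
                   "Draft a discharge summary from the attached notes.")
```

The design point is the choke point itself: when the sanctioned path is the convenient path and every request leaves a record, overseers can see what is actually happening, and off-the-books AI usage has far less room to grow.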