
AI Governance Is Becoming Healthcare’s Next Major Compliance Burden

By Gilda D’Incerti, Founder and CEO, PQE Group.

Healthcare organizations have rapidly adopted artificial intelligence across clinical decision support, diagnostics, revenue cycle management, and operational systems.

AI tools are now embedded across many hospital environments, promising better clinical outcomes, decreased administrative burden, and smarter use of healthcare data.

But as adoption accelerates, regulatory oversight is advancing alongside it.

Regulators are increasingly scrutinizing how AI is developed, validated, and deployed in healthcare, making AI governance a new compliance focus for health system leaders. Healthcare executives and boards must urgently manage the operational, legal, and regulatory obligations that accompany AI adoption.

AI Is No Longer Solely an IT Decision

Historically, new technologies in healthcare have often been treated primarily as IT decisions. Artificial intelligence changes that dynamic. AI systems influence clinical decision making, patient risk scoring, workflow prioritization, and reimbursement. Their impact extends beyond technology deployment into clinical accountability and regulatory oversight.

This shift demands comprehensive oversight.

Effective AI oversight now demands coordination across compliance, legal, clinical leadership, risk management, and IT teams. Health systems must begin asking foundational questions about the algorithms they deploy: Who developed and validated them? What data were they trained on? How is their performance monitored and documented over time?

Without formal governance structures in place, health systems risk deploying tools they cannot fully explain or defend during regulatory review.

Regulators Are Catching Up

Regulatory oversight is advancing alongside AI adoption. In the United States, the FDA has already begun developing guidance frameworks for AI-enabled medical software and adaptive algorithms, signaling greater regulatory attention to the lifecycle management of AI systems.

Regulators are signaling that organizations will be held accountable for how algorithms are developed, tested, monitored, and documented. AI systems may therefore require the same kind of documentation, validation, and performance monitoring expected of medical devices, a level of operational rigor many hospitals are not yet prepared for.
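To make the device-style lifecycle documentation described above concrete, here is a minimal sketch of the kind of per-algorithm record a governance committee might maintain. All names and fields (`ModelGovernanceRecord`, `is_due_for_revalidation`, the example model) are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative governance record: fields mirror the accountability areas the
# article names (development, validation, monitoring, documentation).
@dataclass
class ModelGovernanceRecord:
    model_name: str
    clinical_use: str              # e.g. "early sepsis risk scoring"
    owner: str                     # accountable clinical/operational lead
    training_data_summary: str
    last_validated: date
    validation_method: str         # e.g. "retrospective chart review"
    monitored_metrics: list[str] = field(default_factory=list)

    def is_due_for_revalidation(self, today: date, interval_days: int = 365) -> bool:
        """Flag models whose last validation is older than the review interval."""
        return (today - self.last_validated).days > interval_days

record = ModelGovernanceRecord(
    model_name="sepsis-risk-v2",
    clinical_use="early sepsis risk scoring",
    owner="CMIO office",
    training_data_summary="2019-2023 inpatient encounters, single health system",
    last_validated=date(2024, 1, 15),
    validation_method="retrospective chart review",
    monitored_metrics=["AUROC", "alert override rate"],
)
```

Even a simple structure like this gives compliance and clinical teams a shared artifact to review, which is the operational core of the rigor regulators are beginning to expect.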

The Hidden Operational Workload

One of the most common mistakes health systems make is underestimating the operational effort required to govern AI effectively. This includes committing time to oversight, establishing new processes, and allocating resources to sustain ongoing compliance and risk mitigation.

Deploying an algorithm is only the starting point. Responsible AI programs require regular oversight, including ongoing performance monitoring, periodic revalidation, documentation updates, and incident review.

Each item needs dedicated governance and clear accountability. Without them, AI meant to improve efficiency can add complexity and risk.

AI Is Becoming Part of Clinical Infrastructure

Many healthcare leaders still view AI as a pilot initiative or innovation program. Increasingly, however, AI tools are becoming embedded within everyday clinical processes. If algorithms help determine triage priorities, diagnostic interpretation, or patient risk stratification, they effectively become part of the organization’s clinical infrastructure.

This reality heightens the stakes.

Boards and executives are recognizing that AI oversight is fundamental: as these systems increasingly shape care and clinical decisions, governance becomes a strategic, safety-critical responsibility.

Preparing for the Next Phase of AI Adoption

The next phase of AI adoption in healthcare may be defined less by technological capability and more by governance maturity.

Health systems that establish structured oversight programs early will be better able to scale innovation while maintaining regulatory readiness.

Essential steps include establishing formal governance structures, coordinating oversight across compliance, legal, clinical, and IT teams, and building documentation, validation, and monitoring processes before regulators require them.

Organizations that move from reactive compliance to forward-looking governance will be better prepared for the emerging regulatory landscape in healthcare AI. AI is becoming essential to healthcare delivery, and governance must evolve accordingly. Treating AI oversight as a core compliance function, not solely a technical matter, is vital to sustaining healthcare innovation.

AI Poised To Revolutionize Healthcare—But How Can We Combat the Risks?


By Arti Raman, CEO and founder, Portal26.

There is no question that artificial intelligence (AI) has tremendous implications for the world of healthcare. The list of applications is long. From using machine learning to analyze medical images to facilitating patient communication through chatbots and even using predictive tools to identify high-risk patients, AI has the potential to enhance healthcare systems.

And that’s not to mention all the time AI can save on the backend by automating things like data entry and appointment scheduling, thereby granting healthcare professionals more time to focus on actually diagnosing and treating their patients.

Still, many in the industry have approached this new technology with trepidation. Potential violations of medical privacy laws are a perennial concern for healthcare organizations, and AI—with its seemingly opaque algorithms and its potential susceptibility to breach—can seem like more trouble than it’s worth on this front.

The reality is more complicated. Yes, generative AI does present a risk to healthcare organizations when handled without the proper precautions, as any technology does. In fact, nearly 60 percent of healthcare industry respondents to a recent survey conducted by Portal26 cited at least one GenAI-related security or misuse incident in the preceding six months. But with the right security mechanisms in place, the benefits of AI significantly outweigh the possible downsides.

The problem is that—as the same survey revealed—almost 30 percent of healthcare respondents said their employers lack any guidelines or policies at all surrounding AI usage. Building those guidelines—implementing AI as carefully, and cautiously, as possible—is essential to realizing the true possibility of this technology.

Secure full visibility into your AI systems

At the center of any concern around AI and medical privacy violations is protected health information (PHI). PHI encompasses any individually identifiable information relating to a person's physical or mental health, the provision of their healthcare, or payment for that care.

Feeding PHI into the large language models (LLMs) that are at the foundation of GenAI can pay massive dividends to healthcare organizations looking to optimize their day-to-day operations. But successfully achieving this objective, with a minimum of risk, requires taking an extremely proactive attitude towards precisely how this data is being used.

The key word here is “visibility.” If you are going to be feeding massive quantities of sensitive PHI into your systems, you need to ensure that you are aware of what it is, who is using it, and for what purpose. The need for this is especially acute given the rise of “shadow AI”—i.e., AI-related activities occurring out of sight of those tasked with overseeing it.

Unsurprisingly, 67 percent of healthcare industry respondents to the Portal26 survey are concerned about shadow AI at their companies. It is a problem that is growing daily—and one that can only be curtailed through increased visibility.
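The visibility principle above can be sketched as a gate placed in front of every model call: redact PHI-like content, then record who submitted the prompt and why before anything reaches an LLM. This is a minimal, assumed design, not Portal26's product; the names (`submit_prompt`, `PHI_PATTERNS`) and regex patterns are illustrative, and a real deployment would use a dedicated de-identification service rather than regexes alone.

```python
import re
from datetime import datetime, timezone

# Illustrative PHI-like patterns only; real systems need far broader coverage.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

audit_log = []  # in practice, an append-only audit store

def redact(text: str) -> tuple[str, list[str]]:
    """Replace PHI-like spans with placeholders; return redacted text and hit labels."""
    found = []
    for label, pattern in PHI_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[{label.upper()}]", text)
    return text, found

def submit_prompt(user: str, purpose: str, prompt: str) -> str:
    """Gate every LLM call: redact, then log who/what/why before forwarding."""
    redacted, hits = redact(prompt)
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "purpose": purpose,
        "phi_detected": hits,
    })
    # Forward `redacted` to the model here; returned for demonstration.
    return redacted

safe = submit_prompt("dr.smith", "discharge summary draft",
                     "Patient DOB 04/12/1957, SSN 123-45-6789, presents with fatigue.")
```

The point of the sketch is the audit trail: every prompt produces a record of who used the system, for what purpose, and whether PHI was detected, which is exactly the visibility that curbs shadow AI.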
