Tag: AI in healthcare

Why Blindly Trusting AI in Healthcare Could Be Our Next Public Health Crisis

Nandini Patel

By Nandini Patel, digital marketing, Emorphis Technologies.

We’ve all seen the headlines: AI diagnosing diseases faster than doctors, chatbots offering mental health support, or predictive models guiding treatment plans. Sounds revolutionary, right? And it is. But here’s the catch: are we trusting AI a little too much in healthcare?

As we race towards an AI-powered medical future, we may be overlooking some serious red flags. Trusting AI blindly without transparency, oversight, or ethical clarity could open the door to a public health crisis we’re not prepared to handle.

1. The Seduction of Accuracy: Why We’re Hooked on AI

AI’s ability to process vast datasets, identify patterns, and provide fast results is undeniably powerful. In radiology, for example, AI models can detect lung nodules and fractures with stunning precision. But here’s what often gets buried in the excitement: AI accuracy is context dependent.

If the training data is skewed, incomplete, or unrepresentative, AI can deliver dangerously wrong results. Yet, because it “sounds scientific,” many clinicians and administrators take its output as gospel. That’s not just risky, it’s irresponsible.

2. The Problem of Opacity: When You Can’t Ask “Why?”

AI systems, especially those powered by deep learning, are often called black boxes: you feed in data and get a result, but you don't always know how that result was generated.

In medicine, where accountability and evidence matter, this lack of transparency is a ticking time bomb. If an AI system misses a cancer diagnosis or suggests the wrong dosage, who takes responsibility? You can't just shrug and say, "The algorithm said so."

3. Bias in, Bias Out: When AI Reflects the World’s Injustices

Healthcare systems already struggle with inequalities, and AI can unintentionally make them worse. If your algorithm is trained mostly on data from urban, affluent, white populations, it might fail miserably when treating rural patients, minorities, or underrepresented groups.

There have already been real-world examples: AI models giving lower risk scores to Black patients, or missing early signs of disease in women. When AI amplifies bias, it's not just a software flaw—it's a life-threatening issue.
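The failure mode described above is measurable. As a minimal sketch (the group labels, records, and numbers here are entirely hypothetical, not drawn from any real study), a fairness audit can compare a model's false-negative rate across demographic groups:

```python
# Minimal sketch of a fairness audit: compare a model's false-negative
# rate across demographic groups. All data here is hypothetical.

def group_metrics(records):
    """Per-group false-negative rates from (group, label, prediction) records."""
    stats = {}
    for group, label, pred in records:
        fn, pos = stats.get(group, (0, 0))
        if label == 1:          # patient actually has the disease
            pos += 1
            if pred == 0:       # model missed it
                fn += 1
        stats[group] = (fn, pos)
    return {g: fn / pos for g, (fn, pos) in stats.items() if pos}

# Hypothetical audit data: (group, true_label, model_prediction)
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 0), ("B", 1, 0), ("B", 1, 1), ("B", 0, 0),
]
rates = group_metrics(records)
# Here group B's false-negative rate is double group A's: the model
# misses disease in one population far more often, exactly the kind of
# disparity an audit should surface before deployment.
```

Routine audits like this, run on representative holdout data, are one concrete way to catch the "bias in, bias out" problem before it reaches patients.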

4. The Illusion of Efficiency: Fast Isn’t Always Better

Hospitals and health systems are eager to cut costs and improve efficiency, and AI seems like the perfect solution. Automated diagnostics, virtual assistants, predictive analytics; it sounds like a dream.

But in practice, rushing decisions based on AI can lead to misdiagnoses, missed nuances, and overdependence on automation. The human side of medicine (empathy, judgment, contextual decision-making) cannot be replaced by code.

Efficiency without empathy is a dangerous shortcut in healthcare.

5. Security Threats: AI Is a Cyber Target

With AI tools integrated into EHRs, telehealth, and medical devices, the attack surface for cybercriminals has widened dramatically. An AI system trained on patient data becomes a goldmine for hackers.

A compromised algorithm can not only leak sensitive data, it can change how medical decisions are made. Imagine a manipulated AI tool misguiding cancer treatment or altering drug prescriptions. That’s not science fiction, it’s a real risk.

Conclusion: Proceed, But With Caution

AI has the potential to transform healthcare for the better. But only if we treat it as a partner, not a prophet. Blind faith in technology, especially in matters of life and death, has never ended well.

As healthcare continues its digital transformation, we must ask tough questions, demand accountability, and design AI systems that serve people first. The future of public health depends on it.

Let’s not sleepwalk into a crisis—let’s build a future where AI and humans work together, not at the cost of one another.

What AI Thinks AI Will Do in Healthcare

Scott E. Rupp

By Scott E. Rupp, editor, Electronic Health Reporter.

In 2025, AI in healthcare is no longer a distant ambition—it’s an operational force. But as we stare down the next five years, what matters isn’t what AI could do. It’s what it will do, based on current trajectory, real-world deployment, and policy infrastructure.

Let’s cut past the marketing fluff. Below is a grounded look at how AI is reshaping healthcare now—and how it will evolve by 2030—through the lens of diagnostics, documentation, monitoring, drug development, operations, and governance. This isn’t speculation. It’s what the tech, the economics, and the outcomes are already showing us.

AI in Diagnostics: From Hype to Clinical Utility

Recent developments in diagnostic AI underscore a leap beyond narrow models. Microsoft’s Multimodal AI Diagnostic Orchestrator (MAI-DxO), for example, has shown 85.5% accuracy in diagnosing complex conditions—significantly outperforming unaided physicians in a controlled study. It isn’t replacing clinicians, but rather augmenting them by synthesizing imaging, lab values, and clinical notes into actionable differentials.

What’s next? Between now and 2030, expect diagnostic support tools to become embedded into EHR workflows. AI won’t just suggest differential diagnoses—it will flag overlooked symptoms, propose appropriate next steps, and track care adherence. Clinicians who adopt this technology will find themselves practicing “assisted medicine,” with reduced cognitive load and more consistent care across patient populations.
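The kind of EHR-embedded decision support described above can be illustrated with a toy sketch. This is not how MAI-DxO or any real product works; the knowledge base, condition names, and findings below are illustrative placeholders:

```python
# Toy sketch of EHR-embedded decision support: rank candidate diagnoses
# by how many documented findings each explains, and flag findings that
# no candidate accounts for. Conditions and findings are illustrative only.

KNOWLEDGE = {  # hypothetical condition -> associated findings
    "pneumonia": {"fever", "cough", "crackles"},
    "pulmonary embolism": {"dyspnea", "tachycardia", "chest pain"},
}

def rank_differentials(findings):
    scores = {dx: len(feats & findings) for dx, feats in KNOWLEDGE.items()}
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    explained = set().union(*KNOWLEDGE.values())
    overlooked = findings - explained  # symptoms nothing in the list explains
    return ranked, overlooked

ranked, overlooked = rank_differentials({"fever", "cough", "weight loss"})
# "pneumonia" ranks first, and "weight loss" is surfaced as an
# unexplained finding rather than silently dropped.
```

Real systems replace the hand-written dictionary with learned models over imaging, labs, and notes, but the workflow value is the same: suggest a ranked differential and flag what the current picture fails to explain.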

Clinical Documentation: The Administrative Front Line

Physician burnout continues to correlate with time spent in EHRs—often charting late into the night. AI scribes and ambient listening tools like Suki, Abridge, and Nuance DAX are making measurable inroads. One recent study found documentation time dropped by over 60% after implementing voice AI, with corresponding improvements in patient satisfaction and physician experience.

This is one of the lowest-risk, highest-yield applications of AI in healthcare, and adoption is accelerating. By 2027, we should expect clinical documentation to be mostly machine-generated and human-edited in ambulatory care and some inpatient settings. Expect significant expansion into coding, utilization review, and real-time note summarization. In revenue cycle management, this will radically improve claims accuracy and reduce denials.

AI in Remote Monitoring: Early Intervention, Not Just Passive Data

The convergence of wearables, ambient sensors, and AI analytics is quietly becoming one of the most effective tools for managing chronic conditions. What’s changing now is contextualization: AI doesn’t just measure—it interprets and flags risk. Systems are already showing promise in detecting atrial fibrillation, early-onset heart failure, and even cognitive decline through pattern recognition in voice and movement.

Expect AI to play a growing role in longitudinal care between visits. More than 35% of U.S. health systems are expected to integrate AI-driven monitoring solutions by 2026. Hospital-at-home models will increasingly rely on these tools to support early discharge, flag adverse trends, and prevent readmissions—helping address the financial strain from value-based care models.

AI in Drug Discovery and Trial Design: Time-to-Therapy Will Shrink

AI is accelerating drug discovery by optimizing target identification, simulating molecular interactions, and streamlining trial recruitment. Insilico Medicine, Recursion, and Exscientia are examples of companies slashing preclinical timelines by up to 50% using AI.

By 2030, expect AI to redesign how clinical trials are run—from adaptive designs that learn during execution, to digital twins that simulate patient responses to reduce trial size. Large language models will also aid protocol writing, patient matching, and compliance documentation. The result? Fewer failed trials, faster paths to market, and dramatically lower costs.

Back-Office Automation: The Real Cost Frontier

Administrative complexity remains one of the largest sources of waste in the U.S. healthcare system. AI is already reducing this burden through automations in prior authorizations, denial management, supply chain logistics, and call center operations.

By 2030, back-office automation powered by AI will be table stakes. Health systems will deploy intelligent agents for high-volume tasks like eligibility checks, appointment reminders, claims scrubbing, and patient financial counseling. This will reshape the workforce, reallocating humans to oversight and exception handling, rather than repetitive processing.

Estimates from McKinsey and others suggest that automation could drive over $150 billion in annual savings across the U.S. healthcare system, without touching a single clinical procedure.

Regulatory Momentum and Ethical Infrastructure

As of mid-2025, over 340 AI-enabled tools are FDA-cleared, mostly in radiology and cardiology. The regulatory environment is slowly catching up to the pace of innovation, with a push toward lifecycle oversight, real-world performance data, and post-market surveillance.

The next challenge is equity and transparency. Recent studies highlight significant performance discrepancies across demographic groups. To avoid algorithmic bias becoming clinical harm, AI developers and health systems must prioritize diverse training data, model interpretability, and explainable outputs.

We’re also likely to see a move toward mandatory algorithm audits and AI “nutrition labels”—initiatives that clarify how models were trained, tested, and validated for real-world use.

What Health IT Professionals Should Do Now

As stewards of digital infrastructure, health IT leaders are at the center of this transformation. But the task isn’t just implementation; it’s orchestration. Here’s where to focus:

Final Thought: Beyond the Buzzwords

AI in healthcare is real, impactful, and increasingly essential. But this isn’t about science fiction. It’s about systems — designed, tested, and governed by people — serving other people.

By 2030, the systems that win will be those that operationalize AI in ways that are trusted, useful, and invisible to the patient. We don’t need to marvel at AI. We need to make it mundane, baked into the background, improving care every day, without fanfare.

That’s the AI future worth working toward.

Streamlining Hospital Discharge with Technology: A Strategic Imperative for Reducing Readmissions

Nutanix, a leader in hybrid multicloud computing, announced the findings of its seventh annual global Healthcare Enterprise Cloud Index (ECI) survey and research report, which measures enterprise progress with cloud adoption in the industry. The research showed that 99% of healthcare organizations surveyed are currently leveraging GenAI applications or workloads today, more than any other industry.

This includes a mix of applications from AI-powered chatbots to code co-pilots and clinical development automation. However, the overwhelming majority (96%) share that their current data security and governance measures are insufficient to fully support GenAI at scale.

“In healthcare, every decision we make has a direct impact on patient outcomes – including how we evolve our technology stack,” said Jon Edwards, Director IS Infrastructure Engineering at Legacy Health. “We took a close look at how to integrate GenAI responsibly, and that meant investing in infrastructure that supports long-term innovation without compromising on data privacy or security. We’re committed to modernizing our systems to deliver better care, drive efficiency, and uphold the trust that patients place in us.”

This year’s report revealed that healthcare leaders are adopting GenAI at record rates even as concerns remain. The number one issue flagged by healthcare leaders is the ability to integrate it with existing IT infrastructure (79%), followed closely by persistent healthcare data silos (65%) and development challenges with cloud-native applications and containers (59%).

“While healthcare has typically been slower to adopt new technologies, we’ve seen a significant uptick in the adoption of GenAI, much of this likely due to the ease of access to GenAI applications and tools,” said Scott Ragsdale, Senior Director, Sales – Healthcare & SLED at Nutanix. “Even with such large adoption rates by organizations, there continue to be concerns given the importance of protecting healthcare data. Although all organizations surveyed are using GenAI in some capacity, we’ll likely see more widespread adoption within those organizations as concerns around privacy and security are resolved.”

Healthcare survey respondents were asked about GenAI adoptions and trends, Kubernetes and containers, how they’re running business and mission critical applications today, and where they plan to run them in the future. Key findings from this year’s report include:

For the seventh consecutive year, Nutanix commissioned a global research study to learn about the state of global enterprise cloud deployments, application containerization trends, and GenAI application adoption. In the fall of 2024, U.K. researcher Vanson Bourne surveyed 1,500 IT and DevOps/Platform Engineering decision-makers around the world. The respondent base spanned multiple industries, business sizes, and geographies, including North and South America; Europe, the Middle East and Africa (EMEA); and Asia-Pacific-Japan (APJ) region.

How AI-Driven Care is Bridging the Gaps in Post-Discharge Healthcare

Caroline Hodge

By Caroline Hodge, CEO and co-founder, Dimer Health

The healthcare industry is at a pivotal juncture, with $320 million in Medicare readmission penalties impacting 2,273 hospitals this year alone. These penalties go beyond financial strain, influencing patient well-being and the sustainability of hospital operations. Amid these challenges, AI-driven innovation offers transformative solutions that could change how hospitals manage post-discharge care. The pressing question is: how can healthcare systems better support patients after discharge to reduce readmissions and enhance overall care?

AI-driven solutions are revolutionizing post-discharge care with a proactive, predictive approach that surpasses traditional reactive methods. By leveraging AI-powered predictive analytics and continuous patient monitoring, potential health issues can be identified early, enabling timely intervention to prevent complications from escalating into readmissions. This predictive component is pivotal: it not only enhances patient outcomes but also eases financial burdens on hospitals, shifting the model from penalty-focused to performance-driven incentives.
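The proactive pattern described above — intervene on a worsening trend before it becomes a readmission — can be sketched in a few lines. This is a generic illustration, not Dimer Health's actual system; the vital sign, thresholds, and window sizes are hypothetical:

```python
# Minimal sketch of post-discharge monitoring logic: flag a patient when
# the latest vital breaches a limit, or when readings have risen for
# several consecutive days (the early-warning trend). Thresholds are
# hypothetical examples, not clinical guidance.

def flag_readmission_risk(readings, limit=100.4, rising_days=3):
    if readings and readings[-1] >= limit:
        return "breach"                      # act now
    recent = readings[-(rising_days + 1):]
    if len(recent) == rising_days + 1 and all(
        b > a for a, b in zip(recent, recent[1:])
    ):
        return "rising"                      # act before the breach
    return None

temps = [98.6, 98.9, 99.3, 99.8]  # hypothetical daily temperatures (°F)
flag_readmission_risk(temps)       # flags the rising trend early
```

Production systems use learned risk models over many signals rather than a single threshold rule, but the "rising" branch is the essence of the shift from reactive to predictive care: the flag fires while intervention is still cheap.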

Dimer Health is at the forefront of this movement. By combining real-time AI analytics with a dedicated clinical team, Dimer delivers a predictive, proactive care system that bridges the critical gap between hospital discharge and full recovery. This comprehensive approach has already demonstrated significant reductions in readmission rates, setting a new benchmark for effective, continuous patient care.

As the adoption of AI in healthcare grows, questions about its impact on future policy and reimbursement frameworks come to the fore. Could integrating AI into post-discharge care pave the way for a shift from penalty-heavy models to value-based, patient-centric incentives? Policymakers and healthcare leaders will soon need to assess how these technologies can promote sustainable care models that benefit patients and the healthcare ecosystem.

The implications are substantial. In an era marked by an aging population, escalating healthcare costs, and workforce shortages, AI-enabled care can become a cornerstone of hospital strategy. As healthcare systems start to leverage these capabilities, there is potential for more resilient, patient-focused care models that align with both economic and clinical objectives.

This evolution is about more than technology; it represents a shift toward reimagining patient care, making continuous, personalized support the new standard in healthcare. The question now is how swiftly and effectively the industry can adapt to this promising frontier.

This Healthcare Payment Breakthrough is Actually Analog

By Bob Chin, chief information officer, PayMedix.

I’m a confirmed AI optimist and believe the technology will improve healthcare on a broad scale, from diagnosis to drug discovery, precision medicine, robotic surgery, record keeping, analytics, population health, and streamlined claims processing.

But there remains one nut that AI, for all its astonishing promise, hasn’t yet cracked – the growing burden of healthcare costs on the American family. No large language models or artificial neural networks are likely to change that in the near future.

Rather, the nearest-term solution to rising premiums, deductibles, co-pays and out-of-pocket costs is embarrassingly analog. It’s a conceptual change in the payment process. We need to change the business model until technology can do more to lower our collective costs.

The cost of care avoidance

The current model is broken. Most Americans are covered by an employer’s health insurance plan, but it’s not a gift. The employer and employee share the premium.

Unfortunately, family coverage premiums have increased by 22% in the last five years, reaching almost $24,000. When a covered employee seeks treatment, they pay out of pocket up to their deductible and often owe a co-pay. Since 1960, out-of-pocket costs have grown nearly twice as fast as the economy.

If the patient can’t pay at the time of service, which is increasingly common, the household carries a balance and pays interest on that balance indefinitely, absorbing considerable financial stress along the way. Providers become de facto bill collectors, something they did not sign up for when pursuing careers in healthcare. Shamed patients avoid the doctor, risking their health and nudging up longer-term healthcare costs for everyone. More than four in 10 adults (43%) say they or a household member have put off or postponed care due to cost.


A HIPAA-Compliant Path to Efficiency and Relief with Telehealth

By Nate MacLeitch, founder and CEO, QuickBlox.

We’ve come a long way since the University of Nebraska pioneered two-way video communication for telemedicine in 1959. Today, telehealth enables healthcare providers to expand access to medical support, improve patient convenience, streamline workflows, and enhance clinical outcomes. But as telehealth has grown, so too have the regulations around it. 

The software behind telehealth — including on laptops, tablets, the cloud, and increasingly AI — requires careful governance and robust security protocols to ensure patient privacy in accordance with the Health Insurance Portability and Accountability Act (HIPAA). 

The following explores how telehealth can alleviate administrative burdens and create a more efficient experience, while HIPAA compliance ensures the product is safe for both patients and providers to trust.

HIPAA Compliance Builds Patient and Clinician Trust

While demand for digital health adoption from patients (33%) and providers (36%) is rising, over half of clinicians surveyed are still concerned about data breaches or malware attacks on their healthcare data, global advisor HIMSS (Healthcare Information and Management Systems Society) reports.

Securely communicating protected health information (PHI) within a healthcare organization can be challenging when relying on everyday tools like SMS, Skype, and email. While these platforms offer encryption mechanisms, they fall short in two key areas.

Firstly, achieving effective encryption requires a uniform environment. Everyone within the organization must use compatible operating systems and the same encryption/decryption software. This uniformity can be difficult and expensive to maintain across a large organization with diverse devices and software versions. 

Secondly, even with encryption, the service providers themselves — like Verizon, Skype, and Google — still have access to the underlying data on their servers. Business associate agreements (BAAs) can be established to address these concerns, but these rely on the providers to maintain the integrity of the encrypted data. Here, Skype’s past actions raise red flags, and it’s fair to question if major communication companies like Verizon or Google would be comfortable with the additional responsibility and potential legal implications of healthcare data breaches.
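One response to the second problem is protection applied at the application layer, so trust in the transport provider matters less. As a minimal standard-library sketch — integrity only, not encryption, and with a hypothetical key and message format — an HMAC tag lets the receiving system detect any tampering with PHI in transit:

```python
import hashlib
import hmac

# Minimal sketch: authenticate a PHI message at the application layer so
# tampering is detectable even if the transport provider is compromised.
# This gives integrity only; a real deployment also needs encryption and
# proper key management. The key and message below are hypothetical.

SHARED_KEY = b"per-organization secret from a key vault"  # never hard-code in practice

def sign(message: bytes) -> str:
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    # compare_digest resists timing side channels
    return hmac.compare_digest(sign(message), tag)

msg = b'{"patient_id": "12345", "result": "A1c 6.1%"}'
tag = sign(msg)
assert verify(msg, tag)                 # untouched message passes
assert not verify(msg + b"x", tag)      # any alteration is caught
```

HIPAA-focused platforms layer this kind of application-level control on top of transport encryption, which is exactly the gap the consumer tools above leave open.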


Concerns and Uncertainty In the Wake of Sweeping HTI-1 Rule

Stephanie Jamison

By Stephanie Jamison, Executive Committee Chair and Public Policy Leadership Workgroup Vice Chair, EHR Association.

In the months that have passed since the Office of the National Coordinator for Health Information Technology (ONC) issued the final Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) rule, the health IT sector has been working diligently to meet the earliest compliance timelines even as it continues an in-depth analysis of the regulatory impact on both developers and the providers who use certified technology.

For the EHR Association, that analysis has given rise to several concerns and ambiguities that need to be addressed to ensure HTI-1, which was published in the Federal Register on Jan. 9, 2024, achieves ONC’s stated goal of advancing patient access, interoperability, and standards.

The new regulations are an important step toward implementing key provisions of the Cures Act and enhancing ONC’s Certification Program. However, there are several aspects of HTI-1 that we believe may have unintended consequences for certified EHR technology (CEHRT) developers and users.

Decision Support Interventions (DSI)

One significant area of concern is with regulations around DSI, which carry the earliest compliance deadlines. While the scope of DSI requirements was narrowed in the final rule, many of the compliance timelines are still insufficient for developing, testing, and implementing the necessary upgrades.

The first deadline is Dec. 31, 2024. That is when CEHRT developers must deliver DSI capabilities to maintain certification. Achieving compliance will necessitate substantial development efforts, including in novel areas for the program like AI/ML for predictive DSIs. Other areas of concern include requirements for:

Meeting these requirements within the 12-month timeframe presents a formidable challenge for CEHRT developers – a challenge amplified by the lack of a certification companion guide or other resource to support developers with compliant updates. Also coming into play are current CMS requirements governing providers’ use of CEHRT that would force developers to deliver updated technology to their customers well in advance of the ONC deadline.

To alleviate these challenges, we are urging ONC to consider implementing an enforcement discretion period of six to 12 months. This would provide much-needed relief for CEHRT developers and healthcare providers alike, while still ensuring that meaningful progress is made toward real-world implementation of DSI provisions by the 2024 deadline.


Are There Dangers To AI Taking Such a Prominent Role In Healthcare?

Over the last few years, artificial intelligence has become hugely prominent in society, and when it comes to healthcare that’s no different.

All parts of healthcare now use it, and the positives are plentiful: personalised addiction treatment plans, quicker and more efficient trend spotting, and improved diagnostics, among much else.

However, are there concerns about AI taking such a prominent role in healthcare too?

Well, yes, there are, mainly because some dangers run alongside the positives.

Erosion of patient privacy

The primary concern is patient privacy. AI systems use huge amounts of sensitive data, which means the danger of misuse and unauthorised access is higher. Even with major security measures in place, patient records become more vulnerable to hackers, which can lead to all manner of exploitation.

Bias in algorithms

AI is always learning, but that also means that, just like humans, it can get things wrong. For example, if the data used reflects existing societal biases, then AI systems may inadvertently discriminate against certain groups, potentially deepening inequality within the system.

Therefore, AI needs to be trained on data that addresses such biases, to ensure that how it operates reflects the entirety of society and not just one segment of it.

Who’s accountable?

If AI takes an increasingly large role in decision-making, where does that leave accountability? Traditionally, whether in addiction treatment, cancer care, physiotherapy or anything else, there is a clear chain of responsibility, with professionals accountable for their decisions.

While a human will likely still interpret the data AI gives them, it blurs the lines of accountability for a diagnosis or course of treatment.

Misinterpretation of data

AI is designed to enhance efficiency and accuracy, but there will be circumstances where it falls short. That feeds back into the accountability argument, but it could also lead to incorrect diagnoses and courses of treatment that prove dangerous.

Essentially, AI will benefit all parts of healthcare, but it needs careful human oversight to get the best out of it. Artificial intelligence can’t replace medical professionals, but it can complement their practice, creating a more streamlined process and ensuring patients are treated more efficiently and effectively.