By Caroline Hodge, CEO and co-founder, Dimer Health.
The healthcare industry is at a pivotal juncture, with $320 million in Medicare readmission penalties impacting 2,273 hospitals this year alone. These penalties go beyond financial strain, influencing patient well-being and the sustainability of hospital operations. Amid these challenges, AI-driven innovation offers transformative solutions that could change how hospitals manage post-discharge care. The pressing question is: how can healthcare systems better support patients after discharge to reduce readmissions and enhance overall care?
AI-driven solutions are revolutionizing post-discharge care with a proactive, predictive approach that surpasses traditional reactive methods. By leveraging AI-powered predictive analytics and continuous patient monitoring, potential health issues can be identified early, enabling timely intervention before complications escalate into readmissions. This predictive accuracy is pivotal: it not only enhances patient outcomes but also eases the financial burden on hospitals, shifting the model from penalty-focused to performance-driven incentives.
Dimer Health is at the forefront of this movement. By combining real-time AI analytics with a dedicated clinical team, Dimer informs a predictive and proactive care delivery system that bridges the critical gap between hospital discharge and full recovery. This comprehensive approach has already demonstrated significant reductions in readmission rates, showcasing a new benchmark for effective, continuous patient care.
As the adoption of AI in healthcare grows, questions about its impact on future policy and reimbursement frameworks come to the fore. Could integrating AI into post-discharge care pave the way for a shift from penalty-heavy models to value-based, patient-centric incentives? Policymakers and healthcare leaders will soon need to assess how these technologies can promote sustainable care models that benefit patients and the healthcare ecosystem.
The implications are substantial. In an era marked by an aging population, escalating healthcare costs, and workforce shortages, AI-enabled care can become a cornerstone of hospital strategy. As healthcare systems start to leverage these capabilities, there is potential for more resilient, patient-focused care models that align with both economic and clinical objectives.
This evolution is about more than technology; it represents a shift toward reimagining patient care, making continuous, personalized support the new standard in healthcare. The question now is how swiftly and effectively the industry can adapt to this promising frontier.
I’m a confirmed AI optimist and believe the technology will improve healthcare on a broad scale, from diagnosis to drug discovery, precision medicine, robotic surgery, record keeping, analytics, population health, and streamlined claims processing.
But there remains one nut that AI, for all its astonishing promise, hasn’t yet cracked – the growing burden of healthcare costs on the American family. No large language models or artificial neural networks are likely to change that in the near future.
Rather, the nearest-term solution to rising premiums, deductibles, co-pays and out-of-pocket costs is embarrassingly analog. It’s a conceptual change in the payment process. We need to change the business model until technology can do more to lower our collective costs.
The cost of care avoidance
The current model is broken. Most Americans are covered by an employer’s health insurance plan, but it’s not a gift. The employer and employee share the premium.
Unfortunately, family coverage premiums have increased by 22% in the last five years, reaching almost $24,000. When a covered employee seeks treatment, they pay out of pocket up to their deductible and often owe a co-pay. Since 1960, out-of-pocket costs have grown nearly twice as fast as the economy.
If the patient can’t pay at the time of service, which is increasingly common, the household carries a balance and pays interest on that balance indefinitely, absorbing considerable financial stress along the way. Providers become de facto bill collectors, something they did not sign up for when pursuing careers in healthcare. Shamed patients avoid the doctor, risking their health and nudging up longer-term healthcare costs for everyone. More than four in 10 adults (43%) say they or a household member have put off or postponed care due to cost.
We’ve come a long way since the University of Nebraska pioneered two-way video communication for telemedicine in 1959. Today, telehealth enables healthcare providers to expand access to medical support, improve patient convenience, streamline workflows, and enhance clinical outcomes. But as telehealth has grown, so too have the regulations around it.
The software behind telehealth — including on laptops, tablets, the cloud, and increasingly AI — requires careful governance and robust security protocols to ensure patient privacy in accordance with the Health Insurance Portability and Accountability Act (HIPAA).
The following explores how telehealth can alleviate administrative burdens and create a more efficient experience, while HIPAA compliance ensures a product that both patients and providers can trust.
HIPAA Compliance Builds Patient and Clinician Trust
While demand for digital health adoption from patients (33%) and providers (36%) is rising, over half of clinicians surveyed are still concerned about data breaches or malware attacks on their healthcare data, global advisor HIMSS (Healthcare Information and Management Systems Society) reports.
Securely communicating protected health information (PHI) within a healthcare organization can be challenging when relying on everyday tools like SMS, Skype, and email. While these platforms offer encryption mechanisms, they fall short in two key areas.
Firstly, achieving effective encryption requires a uniform environment. Everyone within the organization must use compatible operating systems and the same encryption/decryption software. This uniformity can be difficult and expensive to maintain across a large organization with diverse devices and software versions.
Secondly, even with encryption, the service providers themselves — like Verizon, Skype, and Google — still have access to the underlying data on their servers. Business associate agreements (BAAs) can be established to address these concerns, but these rely on the providers to maintain the integrity of the encrypted data. Here, Skype’s past actions raise red flags, and it’s fair to question if major communication companies like Verizon or Google would be comfortable with the additional responsibility and potential legal implications of healthcare data breaches.
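To illustrate the distinction, here is a minimal sketch of client-side encryption in Python, assuming the `cryptography` package: because the message is encrypted before it ever reaches the carrier, the service provider only stores ciphertext it cannot read. Key generation and distribution are heavily simplified here and would require a proper key-management process in practice.

```python
# Minimal sketch: client-side encryption of a message containing PHI,
# so the transport provider's servers only ever see ciphertext.
# Assumes the `cryptography` package; key distribution is out of scope.
from cryptography.fernet import Fernet

# In practice this key would be generated once and shared only between
# sender and recipient, never with the messaging provider.
shared_key = Fernet.generate_key()
cipher = Fernet(shared_key)

# The patient details here are made up for illustration.
message = b"Patient J. Doe, MRN 000000, discharge summary attached."
ciphertext = cipher.encrypt(message)  # what the carrier stores and relays

# Only a holder of shared_key can recover the plaintext.
plaintext = cipher.decrypt(ciphertext)
assert plaintext == message
```

The point of the sketch is architectural: when encryption and decryption happen only at the endpoints, a business associate agreement becomes a backstop rather than the sole line of defense.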
By Stephanie Jamison, Executive Committee Chair and Public Policy Leadership Workgroup Vice Chair, EHR Association.
In the months that have passed since the Office of the National Coordinator for Health Information Technology (ONC) issued the final Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) rule, the health IT sector has been working diligently to meet the earliest compliance timelines even as it continues an in-depth analysis of the regulatory impact on both developers and the providers who use certified technology.
For the EHR Association, that analysis has given rise to several concerns and ambiguities that need to be addressed to ensure HTI-1, which was published in the Federal Register on Jan. 9, 2024, achieves ONC’s stated goal of advancing patient access, interoperability, and standards.
The new regulations are an important step toward implementing key provisions of the Cures Act and enhancing ONC’s Certification Program. However, there are several aspects of HTI-1 that we believe may have unintended consequences for certified EHR technology (CEHRT) developers and users.
The first deadline is Dec. 31, 2024. That is when CEHRT developers must deliver decision support intervention (DSI) capabilities to maintain certification. Achieving compliance will necessitate substantial development efforts, including in areas that are novel for the program, like AI/ML for predictive DSIs. Other areas of concern include requirements for:
Developing an end-user feedback function for evidence-based DSIs, including an export capability for machine-readable feedback data (a rough sketch follows this list).
Developing support for a significantly expanded set of data concepts for which selection of evidence-based DSIs must be available.
Developing support for enabling the selection of predictive DSIs using any data expressed in the United States Core Data for Interoperability (USCDI).
Producing nine new source attribute data points for all evidence-based DSIs supplied by developers and more than 30 source attribute data points for all developer-supplied predictive DSIs.
Developing support for customer users to access and modify source attribute information provided by developers for those DSIs they supply.
Developing support for enabling customer users to record and modify their own source attribute entries in the system for DSIs they create or implement on their own.
Developing detailed intervention risk management policies and procedures for ongoing management of predictive DSIs supplied by developers.
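To make these requirements more concrete, here is a rough, hypothetical Python sketch of what machine-readable source attributes and a feedback export might look like. The attribute names are illustrative examples only, not the rule's normative list, and the DSI itself is invented.

```python
# Illustrative sketch only: a handful of source attributes for a
# developer-supplied predictive DSI, plus a machine-readable feedback
# export. Attribute names are examples, not the regulatory list.
import json

predictive_dsi_attributes = {
    "name": "30-day readmission risk model",  # hypothetical DSI
    "developer": "Example EHR Vendor",
    "intended_use": "Flag adult inpatients at elevated readmission risk",
    "output": "Risk score from 0 to 1",
    "training_data_description": "De-identified inpatient encounters",
    "known_limitations": "Not validated for pediatric populations",
    "last_updated": "2024-06-01",
}

def export_feedback(records: list[dict], path: str) -> None:
    """Write end-user feedback on a DSI to a machine-readable JSON file."""
    with open(path, "w") as f:
        json.dump(records, f, indent=2)

export_feedback(
    [{"dsi": predictive_dsi_attributes["name"],
      "user": "clinician-123",
      "comment": "Score seemed high for an otherwise healthy patient."}],
    "dsi_feedback.json",
)
```

Even in this toy form, the sketch hints at the scope of the work: every developer-supplied predictive DSI needs dozens of such attributes populated, kept current, and exposed for customer review and modification.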
Meeting these requirements within the 12-month timeframe presents a formidable challenge for CEHRT developers – a challenge amplified by the lack of a Certification Companion Guide or other resources to support developers with compliant updates. Also coming into play are current CMS requirements governing providers’ use of CEHRT that would force developers to deliver updated technology to their customers well in advance of the ONC deadline.
To alleviate these challenges, we are urging ONC to consider implementing an enforcement discretion period of six to 12 months. This would provide much-needed relief for CEHRT developers and healthcare providers alike, while still ensuring that meaningful progress is made toward real-world implementation of DSI provisions by the 2024 deadline.
Over the last few years, artificial intelligence has become hugely prominent in society, and healthcare is no exception.
All parts of healthcare are now using it, and there are plenty of positives, whether it be personalised addiction treatment plans, spotting trends more quickly and efficiently, or improving diagnostics.
However, are there concerns about AI taking such a prominent role in healthcare too?
Well, yes, there are, mainly because some dangers run alongside the positives.
Erosion of patient privacy
The primary concern is the erosion of patient privacy. AI systems rely on huge amounts of sensitive data, which means the danger of misuse and unauthorised access is higher. Even with major security measures in place, patient records become more vulnerable to hackers, which can lead to all manner of exploitation of those details.
Bias in algorithms
AI is always learning, but that also means that, just like humans, it can get things wrong. For example, if the data used reflects existing societal biases, then AI systems may inadvertently discriminate against certain groups, potentially leading to further inequality within the system.
Therefore, AI needs to be trained on data that addresses such biases, to ensure that how it operates reflects the entirety of society and not just one part of it.
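As a hedged illustration of what addressing such biases can mean in practice, the short Python sketch below computes per-group prediction rates and accuracy – the kind of simple audit that can surface a disparity worth investigating. The groups and data are toy examples, not a complete fairness methodology.

```python
# A minimal sketch of a per-group bias check, assuming a model's
# predictions and a sensitive attribute have already been collected.
from collections import defaultdict

def rates_by_group(groups, y_true, y_pred):
    """Return the positive-prediction rate and accuracy for each group."""
    stats = defaultdict(lambda: {"n": 0, "positives": 0, "correct": 0})
    for g, t, p in zip(groups, y_true, y_pred):
        s = stats[g]
        s["n"] += 1
        s["positives"] += p
        s["correct"] += int(t == p)
    return {
        g: {"positive_rate": s["positives"] / s["n"],
            "accuracy": s["correct"] / s["n"]}
        for g, s in stats.items()
    }

# Toy data: a large gap between groups would warrant investigation.
print(rates_by_group(["a", "a", "b", "b"], [1, 0, 1, 0], [1, 0, 0, 0]))
```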
Who’s accountable?
If AI takes an increasingly big part in decision-making, where does that leave accountability? Traditionally, whether it be addiction, cancer treatment, physio or anything else, there’s a clear chain of responsibility, with professionals accountable for their decisions.
While a human will likely still interpret the data AI gives them, it blurs the lines of accountability for a diagnosis or course of treatment.
Misinterpretation of data
AI is designed to enhance efficiency and accuracy, but there could be unforeseen circumstances where it doesn’t. Again, that falls into the accountability argument, but it could also lead to incorrect diagnoses and courses of treatment that could prove dangerous.
Essentially, AI will benefit all parts of healthcare, but it needs to be monitored carefully by humans to get the best out of it. Artificial intelligence can’t replace medical professionals, but it can complement their practice, creating a much more streamlined process and ensuring patients are treated more efficiently and effectively.
In today’s dynamic healthcare landscape, the integration of artificial intelligence (AI) solutions is becoming increasingly crucial. Healthcare organizations need to balance superior patient care with operational complexities. The adoption of AI presents a transformative opportunity, empowering your organization to enhance various facets of your healthcare services.
You may be wondering, “Why should I embrace AI solutions for my healthcare organization?” The answer lies in the numerous benefits AI brings to the table. In this blog, we’ll shed light on a few reasons why healthcare organizations, just like yours, are wholeheartedly embracing AI.
Enhance Diagnostics and Precision Medicine
In the healthcare sector, AI technologies are actively advancing diagnostic accuracy, particularly concerning dangerous diseases. A study published in the National Library of Medicine highlights the extensive utilization of AI in enhancing medical diagnostics. These technologies, proficient in interacting with medical image data, contribute significantly to disease diagnosis and prediction.
For instance, AI’s capability to detect tumors in medical images stands out, providing a crucial advantage in early-stage diagnosis and subsequent treatment. The study emphasizes the pivotal role played by AI-based algorithms in identifying patients who might otherwise go undiagnosed, including rare diseases. This effectiveness opens up abundant opportunities for early intervention and improved patient outcomes.
Personalize Patient Care with Predictive Analytics
Harnessing predictive analytics through AI is revolutionizing patient care, offering tailored interventions, and enhancing healthcare outcomes. Grand View Research reveals that the global healthcare predictive analytics market reached $11.7 billion in 2022, underscoring the widespread adoption of this transformative approach.
Predictive analytics allows healthcare organizations to proactively anticipate patient needs, facilitating personalized treatment plans. This proactive approach enables timely interventions and preventive measures based on individual patient data, contributing to a more patient-centric healthcare model. The insights derived from analyzing vast datasets enable healthcare professionals to optimize resource allocation, reduce unnecessary procedures, and streamline patient care pathways.
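As a toy illustration of the idea, the sketch below fits a simple readmission-risk model with scikit-learn. The features, data, and outreach logic are entirely hypothetical; a real model would require far more data and rigorous clinical validation.

```python
# Toy sketch of predictive analytics for patient care: a logistic
# regression over made-up features to estimate readmission risk.
# Feature names and data are hypothetical, for illustration only.
from sklearn.linear_model import LogisticRegression

# Columns: age, prior admissions in past year, number of medications
X = [[72, 3, 9], [45, 0, 2], [80, 2, 12], [33, 0, 1], [67, 1, 6]]
y = [1, 0, 1, 0, 0]  # 1 = readmitted within 30 days

model = LogisticRegression().fit(X, y)

# Score a new discharge; a high probability could trigger early outreach.
risk = model.predict_proba([[70, 2, 8]])[0][1]
print(f"Estimated 30-day readmission risk: {risk:.2f}")
```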
Enable Clean Water for Healthcare Facilities
Ensuring clean water for healthcare facilities is a paramount objective embraced by healthcare organizations leveraging AI solutions. AI technologies facilitate the efficient management of water resources, optimizing usage and minimizing wastage within healthcare infrastructure. By actively monitoring water quality and consumption patterns, AI-driven systems enable healthcare facilities to identify potential issues promptly.
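As a hedged sketch of what such monitoring might look like, the snippet below flags anomalous sensor readings with a simple z-score test. The readings, units, and threshold are purely illustrative; production systems would use calibrated sensors and validated alerting rules.

```python
# Hedged sketch: flag anomalous water-quality readings with a simple
# z-score test. All values and thresholds are illustrative only.
from statistics import mean, stdev

def anomalies(readings, threshold=3.0):
    """Return indices of readings more than `threshold` std devs from the mean."""
    mu, sigma = mean(readings), stdev(readings)
    return [i for i, r in enumerate(readings)
            if sigma and abs(r - mu) / sigma > threshold]

# Hypothetical daily chlorine levels (mg/L); the spike should be flagged.
levels = [0.51, 0.49, 0.50, 0.52, 0.48, 1.90, 0.50, 0.51]
print(anomalies(levels, threshold=2.0))  # -> [5]
```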
Access to clean water is crucial for healthcare organizations, as a lack of it can lead to disease outbreaks, challenging the healthcare industry’s ethos. Your organization should learn from past incidents, such as the Camp Lejeune water contamination, which resulted in severe chronic conditions. According to TorHoerman Law, the incident affected veterans, family members, and workers, causing them to suffer from diseases like cancer and Parkinson’s disease.
There is no question that artificial intelligence (AI) has tremendous implications for the world of healthcare. The list of applications is long. From using machine learning to analyze medical images to facilitating patient communication through chatbots and even using predictive tools to identify high-risk patients, AI has the potential to enhance healthcare systems.
And that’s not to mention all the time AI can save on the backend by automating things like data entry and appointment scheduling, thereby granting healthcare professionals more time to focus on actually diagnosing and treating their patients.
Still, many in the industry have approached this new technology with trepidation. Potential violations of medical privacy laws are a perennial concern for healthcare organizations, and AI—with its seemingly opaque algorithms and its potential susceptibility to breach—can seem like more trouble than it’s worth on this front.
The reality is more complicated. Yes, generative AI does present a risk to healthcare organizations when handled without the proper precautions, as any technology does. In fact, nearly 60 percent of healthcare industry respondents to a recent survey conducted by Portal26 cited at least one GenAI-related security or misuse incident in the preceding six months. But with the right security mechanisms in place, the benefits of AI significantly outweigh the possible downsides.
The problem is that—as the same survey revealed—almost 30 percent of healthcare respondents said their employers lack any guidelines or policies at all surrounding AI usage. Building those guidelines—implementing AI as carefully, and cautiously, as possible—is essential to realizing the true possibility of this technology.
Secure full visibility into your AI systems
At the center of any concern around AI and medical privacy violations is protected health information (PHI). PHI encompasses any information relating to:
The past, present, or future physical or mental health/condition of an individual.
The provision of health care to that individual.
The past, present, or future payment for the provision of health care to the individual.
Feeding PHI into the large language models (LLMs) that are at the foundation of GenAI can pay massive dividends to healthcare organizations looking to optimize their day-to-day operations. But successfully achieving this objective, with a minimum of risk, requires taking an extremely proactive attitude towards precisely how this data is being used.
The key word here is “visibility.” If you are going to be feeding massive quantities of sensitive PHI into your systems, you need to ensure that you are aware of what it is, who is using it, and for what purpose. The need for this is especially acute given the rise of “shadow AI”—i.e., AI-related activities occurring out of sight of those tasked with overseeing it.
Unsurprisingly, 67% of healthcare industry respondents to the Portal26 survey are concerned about shadow AI at their companies. It is a problem that is growing daily—and one that can only be curtailed through increased visibility.
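What might that visibility look like in code? Below is a minimal, hypothetical sketch of an audited gateway for LLM calls: it scrubs obvious identifiers from the prompt and logs who sent what, and why. The `call_llm` stub and the regexes are placeholders for illustration, not a real vendor API or a complete de-identification pipeline.

```python
# A minimal sketch of "visibility" in practice: log who sends what to
# an LLM, and scrub obvious identifiers first. The regexes and the
# call_llm function are placeholders, not a real vendor API.
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-audit")

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
MRN = re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE)

def scrub(text: str) -> str:
    """Redact obvious identifiers before the prompt leaves the org."""
    return MRN.sub("[MRN]", SSN.sub("[SSN]", text))

def call_llm(prompt: str) -> str:
    """Stub standing in for the actual model client, so the sketch runs."""
    return "(model response)"

def audited_llm_call(user: str, purpose: str, prompt: str) -> str:
    """Scrub, log, then forward: every AI call leaves an audit trail."""
    clean = scrub(prompt)
    log.info("user=%s purpose=%s chars=%d", user, purpose, len(clean))
    return call_llm(clean)

print(audited_llm_call("nurse-42", "summarize note",
                       "Pt MRN: 12345, SSN 123-45-6789, stable."))
```

Routing every AI interaction through a gateway like this is one plausible way to shrink the blind spot that shadow AI creates: usage that bypasses the gateway becomes the exception to hunt down, rather than the default.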
Healthcare has a vibrant startup and innovation ecosystem, but that doesn’t mean everyone shares the perks that come with technological developments. Historically, payors have often been ahead of the game in adopting and benefiting from new tech, forcing providers to play catch up.
But artificial intelligence (AI) is changing the game. A persistent trend I’ve witnessed is the steady rise of providers prioritizing technology – especially AI – to inform strategic priorities and address chronic headwinds: staff shortages, increasing cost pressures, and slow reimbursement times, to name a few.
As healthcare leaders catch on to the enormous potential of AI to combat thorny issues, AI will take center stage next year, reshape the larger healthcare ecosystem, and begin to even the playing field between payor and provider.
As the end of the year approaches, here’s how I see this playing out in 2024:
Autonomous medical coding will be widespread — if not the norm.
The latest health IT report from Bain & Company and KLAS Research highlights the increasing importance of software and technology. Per the report, 70% of providers think AI will have a more significant impact on their organizations this year vs. last year, and an impressive 56% of those surveyed view software and technology as one of their top three strategic priorities, with revenue cycle management (RCM) coming in at a resounding first place. With many health systems focused on reducing administrative burdens for clinicians and a continued shortage of medical coders, autonomous coding adoption will surge.
Large language models (LLMs) like ChatGPT won’t work as advertised.
There’s plenty of commotion about the capabilities of language models, but they will likely disappoint when functioning as the core of autonomous coding engines. However, they will be enormously valuable in solving smaller pieces and edge cases, pushing coding automation rates to 100% for all the high-volume outpatient specialties.