
Q&A with EHR Association AI Task Force Leadership

Artificial intelligence (AI) is evolving rapidly, reshaping the health IT landscape while state and federal governments race to put regulations in place to ensure it is safe, effective, and accessible. For these reasons, AI has emerged as a priority for the EHR Association. We sat down with EHR Association AI Task Force Chair Tina Joros, JD (Veradigm), and Vice Chair Stephen Speicher, MD (Flatiron Health), to discuss the direction of AI regulations, the anticipated impact on adoption and use, and what the EHR Association sees as its priorities moving forward.

Stephen Speicher, MD

EHR: What are the EHR Association’s priorities in the next 12-18 months, and how, if at all, is AI changing them?

Regulatory requirements from both D.C. and state governments are a significant driver for the decisions made by the provider organizations that use our collective products, so a lot of the work the EHR Association does relates to public policy. We’re currently spending a fair amount of our time working on AI-related conversations, as they’re a high-priority topic, as well as tracking and responding to deregulatory adjustments being made by the Trump administration. Other key areas of focus are anticipated changes to the ASTP/ONC certification program, rules that increase the burdens on providers and vendors, and working to address areas of industry frustration, such as the prior authorization process.

EHR: How has the Association adapted since its establishment, and what areas of the health IT industry require immediate attention, if any?

The EHR Association is structured to adapt quickly to industry trends. Our Workgroups and Task Forces, all of which are led by volunteers, are evaluated periodically throughout the year to ensure we’re giving our members a chance to meet and discuss the most pressing topics on their minds. Most recently, that has meant the addition of new efforts specific to both consent management and AI, given the prevalence of those topics within the general health IT policy conversation taking place at both the federal and state levels.

Tina Joros

EHR: If you were to welcome young healthcare entrepreneurs to take on the sector’s most pressing challenges, what guidance would you offer them?

Health IT is a great sector for entrepreneurs to focus on. The work is always interesting because it evolves so quickly, both from a technological perspective and the fact that public policy impacting health IT is getting a lot of attention at the federal and state levels. There are a lot of paths to work in the industry, so it’s always helpful for both entrepreneurs and potential health IT company team members to have a clear understanding of the complexities of our nation’s healthcare system and how the business of healthcare works. Plus, they need a good grasp of the increasingly critical role of data in clinical and administrative processes in hospitals, physician practices, and other care settings.

EHR: What principles are critical to the safe and responsible development of AI in healthcare? How do they reflect the Association’s priorities and position on current AI governance issues?

One of the first things the AI Task Force did when it was formed was to identify certain principles that we believe are essential for ensuring the safe and high-quality development of AI-driven software tools in healthcare. These guiding principles should also be part of the conversation when developing state and federal policies and regulations regarding the use of AI in health IT.

  1. Focus on high-risk AI applications by prioritizing governance of tools that impact critical clinical decisions or add significant privacy or security risk. Fewer restrictions on other use cases, such as administrative workflows, will help ensure rapid innovation and adoption. This risk-based approach should guide oversight and reference frameworks like the FDA risk analysis.
  2. Align liability with the appropriate actor. Clinicians, not AI vendors, retain direct responsibility for AI when it is used for patient care, provided the vendor supplies clear documentation and training.
  3. Require ongoing AI monitoring and regular updates to prevent outdated or biased inputs, as well as transparency in model updates and performance tracking.
  4. Support AI utilization by all healthcare organizations, regardless of size, by considering the varying technical capabilities of large hospitals vs. small clinics. This will make AI adoption feasible for all healthcare providers, ensuring equitable access to AI tools and avoiding further widening of the existing digital divide in US healthcare.

Our goal with these principles is to strike a balance between innovation and patient safety, thereby ensuring that AI enhances healthcare without unnecessary regulatory burdens.

EHR: In its January 2025 letter to the US Senate HELP Committee, the EHR Association cited its preference for consolidating regulatory action at the federal level. Since then, a flurry of state-level activity has introduced new AI regulations, while federal regulatory agencies work on finding their footing under the Trump Administration. Has the EHR Association’s position on regulation changed as a result?

Our preference continues to be a federal approach to AI regulation, which would eliminate the growing complexity we face in complying with multiple and often conflicting state laws. Consolidating regulations at the Federal level would also ensure consistency across the healthcare ecosystem, which would reduce confusion for software developers and providers with locations in multiple states.

However, while our position hasn’t changed, the regulatory landscape has. In the months since submitting our letter to the HELP Committee, California, Colorado, Texas, and several other states have enacted laws regulating AI that take effect in 2026. Even if the appetite for legislative action was there, it’s unlikely the federal government could act quickly enough to put in place a regulatory framework that would preempt those state laws. Faced with that reality, we’re working on a dual track of supporting our member companies’ compliance efforts at the state level while continuing to push for a federal regulatory framework.

EHR: What benefits will be realized by focusing regulations on AI use cases with direct implications for high-risk clinical workflows?

Centering AI regulations on high-risk clinical workflows makes sense because those workflows carry a higher possibility of patient harm, and that focus would simultaneously preserve room for innovation in lower-risk use cases. Our collective clients have many ideas for how AI could help them address areas of frustration, and that is where our member companies want room to move from development to adoption more quickly, unencumbered by regulation. Examples include administrative AI use cases like patient communication support, claims remittance, and streamlining benefits verification, all of which our internal polling shows are in high demand among physicians and provider organizations.

A smart, efficient risk-based regulatory framework would be grounded in the understanding that not all AI use cases have a direct or consequential impact on patient care and safety. That differentiation, however, is not happening in many states that have passed or are contemplating AI regulations. They tend to categorize everything as high-risk, even when the AI tools have no direct impact on the delivery of care or the risk to patients is minimal.

The unintended consequence of this one-size-fits-all approach is that it stifles AI innovation and adoption. It’s why we believe the better approach is granular, differentiating between high- and low-risk workflows, and leveraging existing frameworks that stratify risk based on the probability of occurrence, severity, and positive impact or benefit. This also helps ease the reporting burden on all technologies incorporated into an EHR that may be used at the point of care.

EHR: Where should the ultimate liability for outcomes involving AI tools lie–with developers or end users–and why?

This is an interesting aspect of AI regulation that remains largely undefined. Until recently, there hasn’t been any discussion about liability in state rulemaking. For example, New York became one of the first states to address liability when a bill was introduced that holds everyone involved in creating an AI tool responsible, although it’s not specific to healthcare. California recently enacted legislation stating that a defendant—including developers, deployers, and users—cannot avoid liability by blaming AI for misinformation.

Given the criticality of “human-in-the-loop” approaches to technology use—the concept that providers are ultimately accountable for reviewing the recommendations of AI tools and making final decisions about patient care—our stance is that liability for patient care ultimately lies with clinicians, including when AI is used as a tool. Existing liability frameworks should be followed for instances of medical malpractice that may involve AI technologies.

EHR: Why must human-in-the-loop or human override safeguards be incorporated into AI use cases? What are the top considerations for ensuring those safeguards add value and mitigate risk?

The Association strongly advocates for technologies that incorporate, and public policy that requires, human-in-the-loop or human override capabilities, ensuring that an appropriately trained and knowledgeable person remains central to decisions involving patient care. This approach also ensures that clinicians use AI recommendations, insights, or other information only to inform their decisions, not to make them.

For truly high-risk use cases, we also support the configuration of human-in-the-loop or human override safeguards, along with other reasonable transparency requirements, when implementing and using AI tools. Finally, end users should be required to implement workflows that prioritize human-in-the-loop principles for using AI tools in patient care.

Interestingly, we are seeing some states address the idea of human oversight in proposed legislation. Texas recently passed a law that exempts healthcare practitioners from liability when using AI tools to assist with medical decision-making, provided the practitioner reviews all AI-generated records in accordance with standards set by the Texas Medical Board. It doesn’t offer blanket immunity, but it does emphasize accountability through oversight. California, Colorado, and Utah also have elements of human oversight built into some of their AI regulations.

Top Startups That Will Change Healthcare

By Michael Dehoyos, web developer, PhDKingdom and Next Coursework.


Technology is advancing at an unbelievably rapid pace, and there are new breakthroughs every day in artificial intelligence and big data across multiple industries. These technologies have the potential to completely revolutionize healthcare, and some changes are already starting to take place. This article covers the top healthcare startups at the forefront of the AI revolution and what their products and discoveries mean for the medical industry.

Sword Health

Sword Health is the first AI-powered digital physical therapist. The company recognized a need in the industry: there has long been a shortage of experienced medical specialists, and even with the number of professionals finishing their medical training each year, demand continues to outpace supply. Sword Health's AI-powered physical therapist limits the input of human specialists to designing each patient's initial personalized physical therapy program. After this step, the patient does their regular exercises at home, and a motion tracker tells them whether each exercise was done properly. This saves the patient regular trips to the hospital for physical therapy and allows specialists to see more people more quickly.

Prognos

Prognos’ goal is to rid the world of disease by developing a tool that identifies disease at its onset. The company, formerly known as Medivo, has an ambitious aim: to use AI and data analysis to predict disease at the earliest possible moment. A tech blogger at Australia2Write, Richard Key, explains why Prognos is so valuable to the medical industry: “Their database can access 13 billion patient records and then AI is used to gain actionable information. The registry helps Prognos figure out which patients are most at risk and develop a plan for treatment that’s personalized to each individual. It can also identify populations at risk and look for gaps in existing care.”

Flatiron Health

Flatiron Health takes aim at the challenge posed by cancer's diversity and complexity. The disease is too difficult for individual doctors or healthcare organizations to tackle on their own, so the entire industry needs to be engaged to make any advancement in this field. Flatiron Health has developed an electronic health record (EHR) that unifies existing medical data and pulls insights from it in order to move cancer research forward at a rapid pace.

Babylon Health

At its core, Babylon Health is a tool that permits patients to consult human doctors or other medical specialists online, through text or video in real time. As explained by a data analyst at Brit Student, John Hunt, “It also has an AI component which is a chatbot that can assist patients with simpler, more straightforward issues. Its capacity extends to making diagnoses and recommending treatment options.”

Babylon Health's ultimate goal is a completely self-sustaining doctor powered by AI, so it seems the future of healthcare may involve receiving treatment without seeing a human doctor at any point.

Arterys

Arterys focuses on radiology, using AI to support human radiologists. Its oncology AI software is already FDA-approved in the United States and assists radiologists in measuring and tracking tumors in MRIs and CT scans through a user-friendly browser interface. The goal is accurate, quick cancer diagnoses. The AI-powered tool uses what it has learned to recognize and flag lung and liver lesions, and its accuracy has been shown to equal that of human specialists.

Modern Fertility

This is another American startup, but one whose focus is on female fertility. Modern Fertility is a fertility hormone test for women to use in the comfort and privacy of their own home. All they need to do is take a finger-prick test and mail it to the company, and then they’ll receive a report approved by a physician which tells them more information about their hormones and a complete fertility profile. It’s partnered with a medical facility in the United States and was founded by former Uber executives.

Healthcare Startups a Mixed Bag, But Success Still In Reach

Guest post by Wendy Dessler of Outreachmama.

Startups are taking the world by storm; it seems you can't possibly go through the day without hearing the word, especially given that many—such as Joyable and Ruby Cup—are making a positive impact in the world. Still, you might be surprised to learn that there's one industry in particular where startups get mixed results: healthcare. Like finance, law, and education, healthcare is one of the most heavily regulated fields, if not the most.

It makes sense, given that lives are literally on the line. Equally intriguing is that this is a multi-billion-dollar industry (investors gave $16.1 billion to healthcare in 2015 alone), loaded with opportunity for tech startups to become successful while saving lives, improving hospital workflow, and speeding up research. With that in mind, we'll examine the ways startups are succeeding in the healthcare industry, as well as why a startling number of them fail.

Startups that Are Doing It Right

Flatiron Health is one of a handful of successful startups that aced the healthcare industry. As reported by Inc., the young startup created a tech platform that shares data collected from cancer patients (information remains anonymous) with research and pharmaceutical facilities. As of now, 260 clinics use Flatiron Health’s cloud-based invention.

By sharing health information more easily, hopefully cutting-edge cancer treatments and medical options can be more readily available.

Majority of Healthcare Startups Are Not Cutting It

While Flatiron Health and other startups succeed in the world of healthcare, the majority are barely making it—if that. Forbes reports that as many as 98 percent of angel-funded healthcare startups fail because of a poorly thought-out marketing strategy and an uninspiring business model.

The Story of Healthcare.gov

Nonetheless, the tech startups that do succeed can make a huge difference in individuals' lives. Such is the case with Marketplace Lite, a young tech startup that rebuilt healthcare.gov from the ground up. As told by The Atlantic, healthcare.gov was originally a failing website. On launch day, only six people were able to sign up for health insurance. The reason for such low signup numbers had more to do with the site's poor login features than with the number of people trying to sign up.
