Category: Editorial

Amazon One Medical and the Rise of AI-Guided Primary Care

Amazon One Medical is quietly redefining how patients interact with primary care. With the recent introduction of its Health AI assistant, the company is moving beyond virtual visits and appointment booking into a new phase of AI-guided care navigation.

The development raises important questions for healthcare leaders, clinicians, and patients alike. What is Amazon One Medical? Who can access its AI tools? How might they affect patient care? And where are the boundaries between assistance and automation?

What Is Amazon One Medical

Amazon One Medical is a hybrid primary care organization that combines in-person clinics with digital and virtual care. Originally founded as One Medical, the company was acquired by Amazon in 2023 and now operates as part of Amazon’s expanding healthcare portfolio.


One Medical operates on a membership model, with discounted access available to Amazon Prime members. Patients can also access certain One Medical services on a pay-per-visit basis without a membership.

What Is the Amazon One Medical Health AI Assistant

Amazon One Medical’s Health AI assistant is a personalized, agentic AI tool embedded within the One Medical app. Unlike traditional symptom checkers, this assistant is designed to take action, not just provide information.


The assistant is powered by large language models hosted on Amazon Bedrock and was developed with input from One Medical’s clinical leadership.

Who Can Use the AI Assistant

The Health AI assistant is not open to the general public.

Access is limited to One Medical members who use the One Medical app and have an established medical record within the system. This is a deliberate design choice, as the AI relies on longitudinal patient data to personalize its responses.

Non-members can still receive care from One Medical clinicians through in-person visits or pay-per-visit telehealth services. However, they do not have access to the personalized AI assistant experience.

How the AI Assistant May Improve Patient Care

Improved Access to Guidance

The AI assistant provides always-on access to health information and care guidance. For patients unsure whether symptoms warrant medical attention, this can reduce anxiety and encourage earlier engagement with care.

Better Care Navigation

By helping patients choose the appropriate level of care and facilitating appointment booking, the assistant reduces friction in accessing services. This is particularly valuable in a system where patients often struggle to understand where to go and when.

Reduced Administrative Burden

By handling routine tasks such as scheduling and medication questions, the AI assistant may reduce administrative workload for clinicians and staff, potentially allowing more time for direct patient care.

Safeguards and Clinical Boundaries

Amazon and One Medical have emphasized that the AI assistant is intended to support, not replace, clinicians.

Safeguards include HIPAA-compliant data handling and encryption.

Conversations with the AI assistant are not automatically added to the medical record unless integrated into care workflows.

Legitimate Concerns and Risks

Risk of Over-Reliance

Patients may place undue trust in AI-generated guidance, especially when it appears confident and personalized. Research shows that many users struggle to distinguish between AI advice and clinician advice, increasing the risk of delayed or inappropriate care.

Accuracy and Hallucination Risks

Even well-designed AI systems can produce incorrect or incomplete information. Patient safety organizations have already identified misuse of AI chatbots as a leading health technology hazard due to the potential for misleading recommendations.

Bias and Equity Issues

AI systems can reflect biases present in training data. If not carefully monitored, this may contribute to disparities in guidance or care recommendations across different populations.

Trust and Transparency

Patients and clinicians alike express concern when AI tools operate without clear explanation of their role, limitations, and oversight. Transparency is critical to maintaining trust.

Does This Replace Doctors

No.

Amazon One Medical’s AI assistant does not diagnose conditions, make final treatment decisions, or replace clinician judgment. It functions as a care navigation and support layer, not a provider.

However, it does change how patients interact with the healthcare system. Many decisions about whether to seek care, how urgently to act, and where to go may now be influenced before a clinician is involved.

That influence is meaningful, even if it stops short of clinical decision-making.

What This Signals for Healthcare More Broadly

Amazon One Medical’s AI assistant reflects a broader shift in healthcare: patients increasingly encounter AI guidance before they ever encounter a clinician.

This has implications beyond One Medical. Health systems, medical groups, and digital health companies must consider how AI-driven guidance intersects with patient access, safety, and trust.

The Bigger Question

The most important question is not whether AI will be part of patient care. It already is.

The real question is whether healthcare organizations can ensure that AI tools guide patients toward timely, appropriate, and equitable care while preserving the central role of clinicians.

Amazon One Medical’s approach offers one early example of how that balance might be attempted. Its success or failure will provide important lessons for the rest of the healthcare industry.

When AI Becomes the First Place Patients Turn: How Search Is Quietly Reshaping Healthcare Discovery

For years, healthcare discovery followed a predictable pattern. Patients searched Google, scanned a list of links, clicked through multiple websites, and gradually decided where to seek care. That model is no longer dominant.

Increasingly, patients are asking artificial intelligence systems the questions they once asked search engines. They are doing so not only through standalone tools like ChatGPT and Perplexity, but also through AI-enhanced search experiences embedded directly into Google itself.

This shift is not theoretical. It is measurable, accelerating, and already reshaping how medical practices and health systems are discovered, interpreted, and chosen.

Patients Are Asking AI the Questions That Matter Most

Patients are no longer just searching. They are consulting.

Instead of short keyword phrases, they ask full, contextual questions. They want to know whether their symptoms are serious, what type of care they need, how quickly they should act, and which provider is best suited for their situation.

This behavior aligns with emerging consumer data. In a 2025 healthcare search survey by Aha Media Group and research partners, 35% of respondents reported stopping with the AI-provided answer, and 23% did not scroll past the AI result before exploring additional sources. Meanwhile, 76% still used traditional Google search for health queries, but AI participation in the healthcare information journey is substantial and growing.

From my perspective as an AI system, this behavior makes sense. Patients are not looking for ten options. They are looking for clarity. When the question is urgent or emotionally charged, an AI-synthesized answer often feels more helpful than a list of links. That shift changes where influence happens.

Google Is Still Dominant, but the Click Is Losing Power

Google has not been replaced. It has been transformed.

AI-generated summaries, known as AI Overviews, now appear prominently in many search results. When these summaries appear, user click behavior changes dramatically.

According to a Pew Research Center analysis, when results included an AI Overview, users clicked on traditional search result links only 8% of the time, compared to 15% when no AI summary was shown.

Users also rarely clicked on links included within the AI summary itself, doing so in only 1% of such visits.

This means the presence of an AI answer greatly reduces outward navigation. For healthcare organizations, fewer clicks do not mean less demand. They mean decisions are often reached before users ever visit a website.

From an AI perspective, this behavior aligns with the design goal of answering questions efficiently. The system’s priority is resolution of intent, not referral to external content.

Zero-Click Search Is Becoming the Norm

Zero-click search — where users receive the answer they need on the search page itself and never click through to an external site — is no longer a fringe phenomenon.

Pew’s research also found that about 18% of all Google searches in March 2025 triggered an AI Overview, with longer, natural-language, or question-formatted queries more likely to produce these summaries.

Broadly, many modern Google interactions now end without any outbound click. This is especially pronounced in healthcare queries, which are frequently phrased as symptoms or conditions rather than short keyword strings. Search behavior data shows users often leave the search session entirely after AI answers are provided, especially when they feel their query is resolved.

What this means for healthcare leaders is that visibility no longer reliably shows up in website traffic alone. Influence happens inside the answer, not after the click.

How AI Interprets Healthcare Organizations

AI does not browse the web like a human. It synthesizes information from multiple indexed sources and produces a single, conversational answer. Unlike a list of links, a synthesized answer presents what looks like a curated fact set.

That synthesis is powerful, but it has limitations.

A recent Guardian investigation found that Google’s AI Overviews for health queries frequently cited YouTube more than any traditional medical site, ahead of reputable health authorities such as government health agencies and hospital portals. YouTube accounted for 4.43% of citations in one study of over 50,000 German health queries, making it the most cited domain overall, while medical institutions ranked lower.

Health experts have also raised concerns that some AI health answers delivered misleading or potentially dangerous advice, prompting Google to remove certain AI health summaries after review. These issues included incorrect medical interpretation of blood test ranges and oversimplified guidance about diet and disease screening.

In effect, AI becomes an interpreter between healthcare organizations and patients. If the interpretation is unclear, inconsistent, or based on sources with variable quality, confidence drops and alternative options are more likely to be presented.

What This Means for Medical Practices

For independent practices, AI-driven discovery changes not just how patients find them, but how patients arrive.

Many practices are seeing fewer exploratory website visits and more appointment-ready phone calls. Patients often believe they already understand services, insurance acceptance, and appropriateness of care before ever reaching a scheduling page. When those assumptions are accurate, access feels efficient. When they are not, frustration rises quickly.

Practices that rely solely on traditional search rankings may find themselves absent from AI?generated answers, even if they appear on page one of organic results. AI does not rank pages. It selects answers.

Clarity and specificity matter more than ever.

What This Means for Health Systems

Health systems benefit from brand recognition, but that advantage is increasingly segmented.

Patients rarely search for a health system as a whole. They search for specific services, conditions, and locations. AI systems evaluate each of these independently, and often the summarization points to general answers or external high-authority sources rather than institutional pages.

A strong orthopedic department does not automatically elevate cardiology in AI summaries. A flagship hospital does not guarantee visibility for affiliated clinics. Inconsistent service descriptions and outdated access information weaken AI confidence and reduce the likelihood of being recommended.

This fragmentation can create operational pressure. Call patterns shift. Patient access teams feel increased strain. Marketing teams struggle to reconcile changes in traffic with stable or increasing patient demand.

Leadership senses a change without a clear explanation.

AI search is often the missing variable.

AI Search Is a Patient Access Issue

This is the most important reframing.

AI-driven discovery is not just a marketing concern. It is a patient access issue.

When AI systems influence where patients go, how urgent they believe their condition is, and whether they seek care at all, they are shaping care pathways. This matters because healthcare queries often occur in moments of anxiety, uncertainty, or urgency, and users may take the first credible answer at face value.

The responsibility for accurate, clear, and accessible information becomes paramount when systems designed for efficiency also function as gatekeepers to health knowledge.

Measuring Visibility in an AI Influenced World

Traditional metrics like website traffic, rankings, and page views still matter, but they capture only what happens after a click. Increasingly, influence and visibility happen before that moment, inside summaries and answers that are difficult to trace with conventional analytics.

This does not mean measurement is impossible. It means healthcare organizations must expand how they think about brand presence, reputation, and access.

From an AI perspective, the organizations that perform best are not necessarily the loudest or the largest.

They are the clearest.

The Questions Healthcare Leaders Should Be Asking

As AI becomes a primary entry point for healthcare questions, a few questions matter more than any tactic.

These are not future concerns. They are present-day realities.

A Quiet but Lasting Shift

AI is not replacing search. It is redefining it.

Patients still search. Google still matters. Websites still matter. But the path between question and decision is shorter, more conversational, and increasingly mediated by AI.

When patients trust the answer they receive, they move on. They do not keep searching.

For medical practices and health systems, the challenge is not to chase technology. It is to ensure that when AI systems interpret and summarize healthcare options, they do so accurately and responsibly.

Because in healthcare, being the answer carries real consequences.

Misuse of AI Chatbots Tops ECRI’s 2026 Health Technology Hazards List

Artificial intelligence chatbots have emerged as the most significant health technology hazard for 2026, according to a new report from ECRI, an independent, nonpartisan patient safety organization.

The finding leads ECRI’s annual Top 10 Health Technology Hazards report, which highlights emerging risks tied to healthcare technologies that could jeopardize patient safety if left unaddressed. The organization warns that while AI chatbots can offer value in clinical and administrative settings, their misuse poses a growing threat as adoption accelerates across healthcare.

Unregulated Tools, Real-World Risk

Chatbots powered by large language models, including platforms such as ChatGPT, Claude, Copilot, Gemini, and Grok, generate human-like responses to user prompts by predicting word patterns from vast training datasets. Although these systems can sound authoritative and confident, ECRI emphasizes that they are not regulated as medical devices and are not validated for clinical decision-making.

Despite those limitations, use is expanding rapidly among clinicians, healthcare staff, and patients. ECRI cites recent analysis indicating that more than 40 million people worldwide turn to ChatGPT daily for health information.

According to ECRI, this growing reliance increases the risk that false or misleading information could influence patient care. Unlike clinicians, AI systems do not understand clinical context or exercise judgment. They are designed to provide an answer in all cases, even when no reliable answer exists.

“Medicine is a fundamentally human endeavor,” said Marcus Schabacker, MD, PhD, president and chief executive officer of ECRI. “While chatbots are powerful tools, the algorithms cannot replace the expertise, education, and experience of medical professionals.”

Documented Errors and Patient Safety Concerns

ECRI reports that chatbots have generated incorrect diagnoses, recommended unnecessary testing, promoted substandard medical products, and produced fabricated medical information while presenting responses as authoritative.

In one test scenario, an AI chatbot incorrectly advised that it would be acceptable to place an electrosurgical return electrode over a patient’s shoulder blade. Following such guidance could expose patients to a serious risk of burns, ECRI said.

Patient safety experts note that the risks associated with chatbot misuse may intensify as access to care becomes more constrained. Rising healthcare costs and hospital or clinic closures could drive more patients to rely on AI tools as a substitute for professional medical advice.

ECRI will further examine these concerns during a live webcast scheduled for January 28, focused on the hidden dangers of AI chatbots in healthcare.

Equity and Bias Implications

Beyond clinical accuracy, ECRI warns that AI chatbots may also worsen existing health disparities. Because these systems reflect the data on which they are trained, embedded biases can influence how information is interpreted and presented.

“AI models reflect the knowledge and beliefs on which they are trained, biases and all,” Schabacker said. “If healthcare stakeholders are not careful, AI could further entrench the disparities that many have worked for decades to eliminate from health systems.”

Guidance for Safer Use

ECRI’s report emphasizes that chatbot risks can be reduced through education, governance, and oversight. Patients and clinicians are encouraged to understand the limitations of AI tools and to verify chatbot-generated information with trusted, knowledgeable sources.

For healthcare organizations, ECRI recommends establishing formal AI governance committees, providing training for clinicians and staff, and routinely auditing AI system performance to identify errors, bias, or unintended consequences.

Other Health Technology Hazards for 2026

In addition to AI chatbot misuse, ECRI identified nine other priority risks for the coming year, each detailed in the full report.

Now in its 18th year, ECRI’s Top 10 Health Technology Hazards report draws on incident investigations, reporting databases, and independent medical device testing. Since its introduction in 2008, the report has been used by hospitals, health systems, ambulatory surgery centers, and manufacturers to identify and mitigate emerging technology-related risks.

Why Health IT Leaders Must Attend HIMSS26: The Effort Is Still Worth It

HIMSS can be overwhelming. Anyone who tells you otherwise either has not been or has not stayed long enough to feel it.

The scale is immense. The exhibit hall stretches endlessly. The sessions overlap. The conversations start early and run late. It is loud, fast, and relentless. And yet, for many health IT leaders in 2026, that intensity is precisely why attendance still matters.

HIMSS is not a conference you attend casually. It is one you attend with purpose. And when approached deliberately, it remains one of the few environments capable of delivering something increasingly rare in healthcare IT: true ecosystem-level perspective.

HIMSS is where the whole system shows up at once

Healthcare IT does not operate in silos, even when organizations wish it did. Strategy is shaped simultaneously by vendors, regulators, clinicians, payers, policymakers, standards bodies, and emerging innovators.

HIMSS is one of the only places where all of those forces converge in the same physical space, at the same time.

That matters.

Reading reports, joining webinars, and attending niche events can deepen understanding of specific issues. But they rarely reveal how the broader system is moving. HIMSS allows leaders to step back from daily operations and see patterns forming across the industry, patterns that will shape procurement decisions, regulatory expectations, and technology roadmaps long after the conference ends.

For leaders responsible for long-term planning, that macro view is not a luxury. It is a necessity.

Strategy requires context, and HIMSS provides it at scale

Health IT leaders are increasingly expected to think beyond their own organizations. Boards and executive teams want to know where the industry is actually going, which technologies are maturing and which are stalling, how peers are responding to the same constraints, and what expectations are forming that will soon become table stakes.

HIMSS is not about finding all the answers. It is about understanding the direction of travel.

The conversations that happen in hallways, side meetings, and unscripted encounters often provide more strategic insight than any single session. Leaders hear what vendors are emphasizing and what they are quietly backing away from. They hear what peers are excited about and what they regret. Those signals are difficult to capture anywhere else at the same density.

AI demands discernment, not distance

AI will dominate HIMSS26, and that alone is a reason serious leaders should attend.

Not because the hype is convincing, but because it needs to be interrogated.

AI is no longer speculative. It is entering contracts, workflows, and governance discussions. Leaders who avoid the conversation risk falling behind not technologically, but organizationally. HIMSS provides a unique opportunity to compare claims, question assumptions, and evaluate maturity across dozens of vendors and use cases in a compressed timeframe.

Seeing AI presented side by side across clinical, operational, and administrative domains helps leaders distinguish between novelty and readiness. That discernment is difficult to develop from a distance.

Leadership visibility still matters

For better or worse, HIMSS remains a stage.

Attendance signals engagement, not just with technology, but with the industry itself. For CIOs, CMIOs, and senior IT leaders, being present communicates credibility to peers, vendors, and internal stakeholders.

That visibility is not about ego. It is about influence.

Decisions made in healthcare IT are increasingly shaped by informal networks and shared understanding. Leaders who show up, listen, and contribute thoughtfully help shape the conversations that ripple outward long after the conference concludes.

The value is unlocked by intention

HIMSS fails leaders who attend without a plan. It rewards those who arrive with clarity.

The leaders who benefit most define specific objectives before they arrive, schedule meetings in advance, prioritize conversations over sessions, and treat the exhibit hall as research rather than entertainment.

When approached this way, HIMSS becomes less about consumption and more about synthesis. It becomes a place to test assumptions, pressure-test strategy, and recalibrate priorities.

HIMSS is not mandatory, but it is still consequential

Not every leader needs to attend every year. That is no longer realistic or necessary.

But for leaders shaping enterprise IT strategy, navigating AI adoption, managing vendor ecosystems, or preparing for regulatory and operational shifts, HIMSS26 remains one of the few environments capable of delivering concentrated insight at scale.

It is exhausting. It is imperfect. It is too much at times.

And yet, done right, it still matters.

Because healthcare IT does not move forward in isolation. And once a year, HIMSS offers a rare opportunity to see the entire machine in motion.

Why Some Health IT Leaders Should Skip HIMSS26 (Even If They’ve Loved HIMSS in the Past)

I’ve been to HIMSS. More than once.

I’ve walked the exhibit floor until my feet hurt, sat in packed sessions scribbling notes, ducked into impromptu hallway meetings that turned into meaningful conversations, and felt that familiar rush that comes from being surrounded by thousands of people who believe—earnestly—that technology can still fix healthcare.

HIMSS is amazing.

It’s big. It’s energizing. It’s unlike anything else in health IT.

And that’s exactly the problem.

For a growing number of health IT leaders, HIMSS has quietly crossed a line—from valuable immersion to overwhelming excess. Not because the conference has failed, but because the realities of healthcare IT leadership have changed faster than the conference model itself.

The scale that once inspired now exhausts

There was a time when the sheer size of HIMSS felt necessary. Healthcare IT was fragmented. Leaders needed a single place to see everything at once: vendors, standards bodies, policymakers, peers, futurists, all under one roof.

Today, that same scale can feel paralyzing.

The exhibit hall alone is an exercise in cognitive overload. Hundreds of vendors, many offering overlapping solutions, each promising transformation. Sessions compete for attention, often scheduled simultaneously, forcing leaders to choose between equally relevant (or equally vague) discussions.

Instead of clarity, many leaders leave overloaded.

In an industry already drowning in complexity, “everything at once” is no longer a feature—it’s friction.

The cost conversation has become unavoidable

Health IT leaders are being scrutinized in ways they weren’t even five years ago. Every trip, every conference, every line item now carries weight.

HIMSS isn’t just a registration fee. It’s airfare, hotels at premium pricing, meals, transportation, and—most expensively—time away from systems that do not pause because a CIO is in Las Vegas.

For leaders facing that scrutiny, the question isn’t whether HIMSS is “worth it” in theory. It’s whether it delivers measurable, defensible value relative to smaller, more targeted alternatives.

Increasingly, that answer isn’t automatic.

The content problem isn’t quality—it’s distance from reality

HIMSS sessions are polished. Thoughtful. Often well-produced.

But many health IT leaders don’t struggle with vision. They struggle with execution.

They know AI is coming. They know interoperability matters. They know cybersecurity threats are escalating. What they need help with are the unglamorous questions of execution.

Those conversations are harder to stage on a massive platform. They don’t lend themselves to tidy panels or sponsor-friendly narratives. As a result, the content can feel increasingly disconnected from the messiness leaders are living every day.

Vendor optimism vs. operator reality

HIMSS is, by necessity, vendor-forward. That doesn’t make it illegitimate—but it does shape the experience.

The show floor is designed to highlight what’s possible, not what’s painful. Failure stories are rare. Long-term consequences are softened. Complexity is abstracted.

For leaders who spend their days managing those painful realities, the disconnect can be jarring. The optimism doesn’t always feel dishonest—but it often feels incomplete.

AI has moved from excitement to fatigue

AI will dominate HIMSS26. That’s not speculation—it’s certainty.

But many health IT leaders have moved past excitement into discernment. They aren’t asking what AI can do. They’re asking where it breaks, who owns the risk, and how they’ll be held accountable when it fails.

HIMSS excels at showcasing possibility. It struggles to dwell in maintenance, governance, and rollback plans. Inspiration is plentiful. Operational guidance is not.

In 2026, that imbalance matters.

Time away is no longer neutral

Four days at HIMSS isn’t just four days out of the office—it’s four days of deferred decisions, delayed approvals, and unanswered questions.

In a world of constant system pressure, staffing shortages, and clinician burnout, time away carries a real operational cost. Leaders are increasingly asking whether that cost is justified when similar insights can be gained through smaller, more targeted channels.

Skipping HIMSS isn’t a rejection—it’s a recalibration

Not attending HIMSS no longer signals disengagement.

For many, it signals discipline.

It says: I know what HIMSS offers. I’ve been there. I’ve benefited from it. But this year, my organization needs depth over breadth, execution over exposure, and outcomes over inspiration.

That’s not cynicism. That’s leadership.

A final note—before the emails arrive

HIMSS26 will be right for some leaders. It will still deliver value for specific goals, roles, and moments in an organization’s lifecycle.

But it should no longer be treated as mandatory.

Sometimes the most strategic decision a health IT leader can make isn’t showing up to the biggest event in the room—it’s knowing when less is actually more.

OpenAI’s Healthcare Push: How ChatGPT Health and Clinical AI Tools Are Poised to Reshape Care at the Point of Care

OpenAI — the artificial intelligence developer best known for ChatGPT — has rapidly moved from offering a general-purpose AI assistant to becoming a major player in healthcare technology. In January 2026, the company unveiled ChatGPT Health, a dedicated health-focused experience inside ChatGPT that marks its most ambitious entry yet into personal and clinical healthcare support.

From Casual Queries to Personalized Health Support

For years, millions of people have turned to AI chatbots for medical questions. According to OpenAI data, more than 40 million users ask ChatGPT health-related questions every day, with health topics now accounting for a significant portion of overall usage.

Recognizing this demand, OpenAI launched ChatGPT Health, a separate tab within ChatGPT where users can ask wellness and medical questions in a protected environment and — if they choose — connect their own health information. This includes uploading medical records and linking popular wellness apps like Apple Health, MyFitnessPal, Oura, Peloton, and others to provide tailored context behind lab results, lifestyle data, and trends.

But OpenAI stresses an important distinction: ChatGPT Health is not a diagnostic or treatment tool. It is designed as an informational companion — helping users interpret test results, prepare for appointments, decode insurance documents, and understand medical terminology — not to replace licensed clinical judgment.

What ChatGPT Health Actually Does

Inside the dedicated Health space, users who opt in can:

  • Upload and explain medical reports — get plain-language summaries of blood tests, imaging findings, and other clinical data.

  • Connect wellness and fitness apps — integrate activity, sleep, nutrition, and vitals to spot patterns and correlations.

  • Prepare for clinical visits — generate tailored questions and talking points to improve the quality of doctor encounters.

  • Navigate insurance complexities — use AI to decode dense benefit language or appeal denials.

All health-related chats are encrypted, segregated from regular ChatGPT conversations, and not used to train OpenAI’s foundational models, addressing a key privacy concern.

Enterprise AI: ChatGPT for Healthcare in Clinical Workflows

Beyond consumer health support, OpenAI has been quietly building enterprise-grade AI tools for clinicians and health systems under the banner ChatGPT for Healthcare. According to OpenAI’s documentation, this version is tailored to clinical workflows and designed for regulated environments with HIPAA-compatible security, enabling:

  • Clinical evidence retrieval with citations — answers linked directly to peer-reviewed studies and guidelines, helping clinicians verify responses.

  • Automated drafting of clinical documents — from discharge summaries to prior authorizations and patient instructions.

  • Integration with internal systems — support for SharePoint, Teams, and custom care pathways so answers reflect organizational policies.

  • Custom templates for repetitive tasks — reducing administrative burden for physicians, nurses, and support staff.

These capabilities aim to reduce the non-clinical workload that contributes to provider burnout and free up time for direct patient interaction.

Implications at the Point of Care

1) Real-Time Decision Support

AI that can surface evidence-based information with transparent citations means clinicians could get clinical decision support during patient encounters — reducing time spent navigating guidelines and literature. This could speed diagnostic reasoning and inform shared decision-making with patients.

2) Reduced Administrative Drag

Healthcare professionals spend an estimated half of their time on documentation and administrative work. AI tools that automate note creation, prior authorizations, and letters could return those hours to patient care. Although rigorous real-world evaluations are still emerging, early enterprise deployments underscore this potential.

3) Enhanced Patient Engagement

For patients outside clinical settings, ChatGPT Health offers 24/7 access to health information, helping them prepare more informed questions, understand treatment options, and manage chronic conditions through personalized insights drawn from their own data. This is especially meaningful in rural and underserved areas where clinicians are less accessible.

4) New Risks and Ethical Considerations

Despite strong privacy features, OpenAI’s tools are not subject to HIPAA by default when used by consumers, and experts caution about over-reliance. Inaccurate or “hallucinated” AI responses remain a recognized risk, and clinicians must guard against incorporating flawed suggestions into care.

There’s also a broader ethical conversation around relying on AI to fill care gaps in underserved populations: doing so addresses a symptom of systemic access issues rather than substituting for equitable healthcare infrastructure.

Competitive and Regulatory Landscape

OpenAI’s healthcare push has not gone unnoticed. Competitors such as Anthropic have launched similar offerings aimed at health systems and payers, broadening the AI-health ecosystem and intensifying focus on accuracy, safety, and compliance.

Regulators and healthcare leaders are watching closely as AI platforms increasingly intersect with sensitive medical workflows, with questions about liability, oversight, and standards for AI’s role in clinical settings yet to be fully resolved.

Conclusion: A Transformative But Cautious Future

OpenAI’s entry into healthcare represents a milestone in AI adoption across both consumer health and clinical domains. With tools like ChatGPT Health and ChatGPT for Healthcare, the company is setting a new baseline for how artificial intelligence can support understanding, preparation, and clinical work at the point of care.

But the journey from informational assistant to trusted clinical partner involves navigating privacy concerns, regulatory frameworks, and the perennial challenge of ensuring accuracy. As healthcare organizations and patients experiment with these technologies in 2026, the outcomes will shape the future of AI’s role in medicine.

Healthcare Payments in 2026: Five Shifts Providers Can’t Ignore

Mike Peluso

By Mike Peluso, Chief Product and Strategy Officer, Rectangle Health.

Rising costs, consolidation, and new mandates have pushed healthcare payments to a breaking point. Both payers and providers are recognizing that the current system is too slow, too manual, and too fragmented to keep pace with the rest of the digitalized consumer economy.

Administrative work now accounts for a significant share of total U.S. healthcare spending, and outdated payment platforms are a major contributor. Much of this inefficiency stems from fragmented payer and provider payment systems that rely on manual workflows to move funds and remittance data.

At the same time, patients are bringing retail expectations into every interaction. They want clear, convenient, digitally native ways to understand and pay their bills, and they will gravitate toward providers who offer that experience.

Here are five ways healthcare payments are likely to shift in 2026:

Payer–Provider Payment Rails Will Finally Connect

Today, claims, remittances, and patient responsibility calculations often move across separate rails, with staff bridging the gaps through spreadsheets, uploads, and manual re-keying. This separation creates avoidable delays, limits visibility, and places unnecessary strain on provider revenue cycles.

In the coming year, more organizations will invest in platforms that connect payers and providers in real time. Eligibility, benefits, and point-of-service estimates will sync more cleanly with downstream billing and payment workflows, and funds and data will travel together instead of being split across separate systems.

Rising healthcare costs and mounting financial pressure are making the modernization of these rails impossible to ignore. Recent surveys show that more than two-thirds of payer executives say their firms’ manual payment platforms are reducing efficiency, reinforcing the need for payer and provider payment systems to operate in a more connected way. For providers, better connectivity means faster, more predictable reimbursement, and fewer unpleasant surprises for patients at the end of the process.
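The connected-rails idea above comes down to an automated remittance-posting step: incoming payer remittances are matched to open claims by claim ID so funds and data travel together, and only exceptions go to staff. The sketch below is illustrative, with simplified record shapes rather than a real X12 835 schema.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    billed: float          # amount billed to the payer
    posted: float = 0.0    # amount posted from remittances so far

@dataclass
class Remittance:
    claim_id: str
    paid: float            # payer payment reported for this claim

def post_remittances(claims, remittances):
    """Auto-post payer remittances to open claims; return unmatched items."""
    by_id = {c.claim_id: c for c in claims}
    unmatched = []
    for r in remittances:
        claim = by_id.get(r.claim_id)
        if claim is None:
            unmatched.append(r)       # exception: route to manual review
        else:
            claim.posted += r.paid    # funds and remittance data move together
    return unmatched

claims = [Claim("C100", 250.0), Claim("C101", 80.0)]
remits = [Remittance("C100", 200.0), Remittance("C999", 40.0)]
leftover = post_remittances(claims, remits)
# C100 is posted automatically; only the unknown C999 remittance needs a human
```

In a connected system, this matching happens continuously as remittances arrive, instead of staff re-keying the same data from spreadsheets and uploads.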

Card-on-File Technology Will Become the Default, Not the Exception

Patients are already accustomed to card-on-file experiences in retail and streaming, and now expect the same convenience from healthcare, especially for recurring visits, membership plans, and subscription-style care. Nearly three-quarters (73%) already prefer to pay medical bills online using digital wallets.

In 2026, storing payment information securely will be standard practice for many medical offices. Secure vaulting and pre-authorization will make recurring charges easier to manage, reduce repetitive payment conversations, and shorten the lag between service and payment. Providers that pair card-on-file with clear consent, strong security, and transparent policies will be in the best position to build patient trust and increase payment efficiency.
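The "secure vaulting" pattern mentioned above usually means the practice never holds the card number at all: the payment gateway vaults the card and hands back an opaque token, and the practice stores and charges only that token. A minimal sketch, where `FakeGateway` stands in for a real tokenization service:

```python
import uuid

class FakeGateway:
    """Stand-in for a real payment gateway's tokenization service."""
    def __init__(self):
        self._vault = {}  # token -> card data, held by the gateway, not the practice

    def tokenize(self, card_number: str) -> str:
        token = f"tok_{uuid.uuid4().hex[:12]}"
        self._vault[token] = card_number
        return token

    def charge(self, token: str, amount_cents: int) -> bool:
        return token in self._vault and amount_cents > 0

class PatientBilling:
    """Practice-side record: stores the token only, never the card number."""
    def __init__(self, gateway: FakeGateway):
        self.gateway = gateway
        self.cards_on_file = {}  # patient_id -> vault token

    def save_card(self, patient_id: str, card_number: str) -> None:
        self.cards_on_file[patient_id] = self.gateway.tokenize(card_number)

    def charge_visit(self, patient_id: str, amount_cents: int) -> bool:
        return self.gateway.charge(self.cards_on_file[patient_id], amount_cents)

gw = FakeGateway()
billing = PatientBilling(gw)
billing.save_card("pt-42", "4242424242424242")
ok = billing.charge_visit("pt-42", 2500)  # charge a $25 copay against the token
```

Because the practice's database contains only tokens, a breach on the provider side exposes no reusable card data, which is what makes recurring charges and pre-authorization workable at scale.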

Paper Checks Will Fade into the Background

Paper checks and cash are unlikely to disappear entirely in 2026, but they’ll continue to decline in healthcare as patients opt for online bill pay, text-to-pay links, and digital wallets. More organizations will encourage patients to opt for mobile-first options and unified online experiences, where they can review charges, ask questions, and pay all in one place. As digital tools and real-time payment rails gain ground, paper-based payments will continue to shrink. For providers, moving away from checks reduces delays, cuts fees, and lowers the risk of misplaced or misapplied payments.

Automation Will Push Practices Toward Zero-Day A/R

Automation becomes significantly more effective when payer and provider systems are connected, allowing payments and data to move together without manual intervention. With wages climbing and margins shrinking, practices can’t afford to have staff tied up with manual billing, posting, and reconciliation; automation in healthcare payments is shifting from a convenience to a core operational need.

In 2026, more organizations will rework their revenue cycle to collect as close to the time of service as possible and keep days in accounts receivable near zero. Automated tools will calculate patient responsibility in real time, send digital statements and reminders without staff intervention, and post payments while reconciling across systems in the background.

That shift frees staff to focus on higher-value work, such as patient conversations, financial counseling, and maintaining full schedules, instead of keying line items into aging systems. According to insights from PwC, administrative costs already account for approximately 25% of total U.S. healthcare spending; using automation to bend that curve will no longer be optional.
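The "days in A/R" target above is simple arithmetic: total outstanding accounts receivable divided by average daily charges over a lookback period. A quick sketch with illustrative numbers:

```python
def days_in_ar(total_ar: float, gross_charges: float, period_days: int = 90) -> float:
    """Days in A/R = total outstanding receivables / average daily charges."""
    avg_daily_charges = gross_charges / period_days
    return total_ar / avg_daily_charges

# Illustrative: $300k outstanding against $900k of charges over the last 90 days
print(days_in_ar(300_000, 900_000, 90))  # 30.0 days in A/R
```

Collecting patient responsibility at or near the time of service shrinks the numerator directly, which is why point-of-service estimates and automated posting push this metric toward zero.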

Patient Payment Experience Becomes a True Differentiator

Payment is no longer just a back-office function. It shapes how patients feel about their care. People are accustomed to transparent pricing, straightforward statements, and flexible digital options in other aspects of their financial lives.

As the new year unfolds, more providers will treat payments as a key differentiator in their competitive strategy. That will show up in clearer, jargon-free bills, unified digital experiences instead of scattered portals, and flexible options such as payment plans, cards, ACH, and digital wallets. According to the previously cited PwC research, nearly half (47%) of higher-income healthcare consumers plan to invest more in their health, and 40% are ready to use unified digital records and virtual-first providers, signaling that ease, access, and transparency now drive patient loyalty.

Practices that combine empathetic communication with modern tools for estimating, collecting, and managing payments will be better equipped to build and retain patient trust.

From Intent to Implementation in 2026

Together, these trends point toward smarter, faster, more integrated payment experiences for payers, providers, and patients. Healthcare systems will increasingly favor platforms that support secure, digital movement of both funds and payment data across the payer and provider ecosystem. Automation will push accounts receivable closer to zero, stored payment methods will become routine, paper will continue to fade, and the rails between payers and providers will grow more connected.

Organizations that act now will ensure a future with more stable cash flow, reduced administrative burden, and patients who feel respected rather than overwhelmed by the financial aspects of care. Those who wait may find it harder to thrive in a market where payment experience has become a key factor in how quality is judged.

When AI Becomes Your Front Door: Preparing Your Practice for the New Patient Search

Evan Steele

By Evan Steele, founder and CEO, rater8.

Over the past decade, patients have steadily shifted from word-of-mouth referrals to digital search when making healthcare decisions. Today, that evolution is accelerating even faster as artificial intelligence (AI) tools, not traditional search engines, emerge as the new front door to finding care.

Instead of spending time talking to friends or browsing through pages of Google results, patients now often simply ask ChatGPT, Google’s AI Overviews, and other consumer AI assistants a simple question: “Who is the best doctor near me?” These tools don’t return lists anymore. They return answers. And which doctors are recommended depends on signals most practices still don’t understand or fully control.

The result? A patient visibility vortex is emerging, where AI will decide which providers appear, which disappear, and which rise above their competitors in 2026 and beyond.

AI Is Rewriting the Patient Search Journey

According to rater8’s 2025 Patient Preferences Survey, 31% of patients already use AI tools to research providers. Even more striking: 52% trust AI results as much as or more than traditional search. This shift is accelerating.

AI models now ingest large volumes of public information: practice websites, review sites, news articles, directory listings, Reddit posts, and social media posts. They synthesize all of it into a single recommendation.

As John Bulmer, Public Information Officer at Capital Cardiology Associates, observed in a recent rater8 panel webinar: “Your website is no longer the first place prospective patients meet you — it may be the second or third. And now, with AI pulling information from sources you may not even realize, your broader online presence has never mattered more.”

The old rules of patient search no longer apply. Online visibility isn’t earned once and done; it must be constantly maintained because AI evaluates recency, consistency, and credibility across every corner of the internet. Practices that can’t keep pace risk becoming digitally invisible, even if they provide exceptional care in real life.

Inconsistent Information Is Killing Your Online Visibility

When AI tools scan the web, they look for clarity. If a practice’s online presence is fragmented or difficult to parse (e.g., different hours listed across directories, mismatched provider bios, or outdated service information), AI hesitates to recommend that practice.

This is where many practices fall behind. Their information may be technically available, but it isn’t standardized. Imagine telling someone to visit your practice, but on one map the building is open, on another it’s closed, and on a third the doctor they’re trying to see doesn’t even work there anymore. That inconsistency erodes trust instantly.

Practices that maintain consistent provider names and credentials, matching hours and phone numbers across major directories, schema-optimized provider pages, and regularly updated content give AI confidence. That confidence translates directly into recommendations.

As healthcare consumerism moves into an AI-first model, structured data will become the new digital bedside manner that signals accuracy, reliability, and professionalism before the patient ever walks through the door.
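The "schema-optimized provider pages" mentioned above typically mean schema.org markup embedded in the page as JSON-LD, so every crawler and AI model reads the same name, credentials, hours, and phone number. A minimal sketch using schema.org's `Physician` type, with made-up practice details:

```python
import json

# Hypothetical provider details; the structure follows schema.org's Physician type.
provider_page_markup = {
    "@context": "https://schema.org",
    "@type": "Physician",
    "name": "Dr. Jane Example, MD",
    "medicalSpecialty": "Cardiovascular",
    "telephone": "+1-555-0100",           # should match every directory listing
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Albany",
        "addressRegion": "NY",
        "postalCode": "12203",
    },
    "openingHours": "Mo-Fr 08:00-17:00",  # keep in sync across maps and directories
}

# Embedded in the page as <script type="application/ld+json">...</script>
json_ld = json.dumps(provider_page_markup, indent=2)
```

The structured block itself does nothing clever; its value is that it makes the canonical facts machine-readable in one place, so the hours, phone number, and credentials AI models ingest stop contradicting each other.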

The Power of the Patient Voice in the Age of AI

Of all the signals AI consumes, verified patient feedback has emerged as one of the most powerful trust indicators.

Unlike testimonials or website copy, verified reviews provide rich, unfiltered, keyword-dense sentiment about the patient experience. AI systems favor this content because it’s recent, specific to the provider, generated by real patients, and difficult to manipulate. This explains why many practices with strong clinical reputations still underperform in AI-driven search. They lack the volume and recency of patient-generated content that AI models prioritize.

Verified reviews, particularly those captured through structured, patient-initiated systems, give AI the credibility it needs to confidently recommend a provider. These reviews also reduce the influence of outdated or unrepresentative feedback, helping practices build a more balanced and accurate online reputation.

Preparing for the Visibility Vortex of 2026

As AI assistants become the default method of care navigation, practices need to think less about SEO tactics and more about visibility ecosystems: consistent directory listings, schema-optimized provider pages, and a steady stream of verified patient reviews.

The patient search process is changing faster than most organizations realize. But with the right strategy, healthcare providers can position themselves at the center of this visibility vortex: earning trust, improving transparency, and making sure their best physicians are the ones AI recommends next.