Category: Editorial

The Fully In-Person Physical Therapy Model Is Starting to Crack


By Elisabeth Brown, Principal Product Marketing Manager, WebPT.

Most outpatient physical therapy clinics weren’t designed for today’s reality. They were built for a time when staffing was steadier, patient access was easier, and reimbursement was more predictable. That operating model hasn’t shifted much, even as expectations around access, continuity, and sustainability have.

Hands-on care matters. It always will. But lately, clinic leaders and clinicians in different parts of the country are saying the same thing, even when they are not trying to make a point. The fully in-person model is getting harder to sustain.

That does not mean physical therapy is failing. It means the pressures around care delivery have outpaced the structure designed to support it. Reimbursement keeps tightening. Staffing shortages are still very real. Patients want high-quality care, but they also want care that fits into lives that are already stretched thin.

Something has to bend.

Why hybrid care still makes people uncomfortable

Hybrid care tends to get lumped together with what happened during COVID. That is understandable. Many clinicians were pushed into video visits with little guidance and a lot of uncertainty. It felt reactive. For some patients, it felt impersonal. For therapists, it often felt disconnected from outcomes.

That experience stuck. Hybrid became shorthand for watered-down care.

What’s happening now does not look like that at all. Clinics are not trying to replace hands-on work. They are trying to be more intentional about when physical presence is essential and when it is not.

What flexibility actually looks like in practice

There is a common assumption that flexibility means lowering standards. In real clinics, it often means the opposite.

Short virtual check-ins can help patients stay engaged between visits. Questions get answered sooner. Confusion around exercises does not have time to spiral. Patients who might otherwise disappear after the first few visits are more likely to stay connected.

This is not about doing less. It is about reinforcing the plan of care in ways that fit real schedules.

Why flexibility is not something clinics are giving up

The cost conversation misses a larger reality: the fully in-person model is already under strain.

Hybrid care is not a concession. It is a way to protect patient outcomes, clinic economics, and provider sustainability. When patients complete more of their plan of care, clinics see fewer drop-offs, fewer gaps in the schedule, and less pressure to constantly replace lost visits with new evaluations.

Flexibility also changes the rhythm of clinical work. Not every meaningful interaction requires hands-on time in the clinic. Creating intentional variation in the schedule can reduce fatigue and make the workload more sustainable over the long term. In a profession where burnout and attrition continue to shape hiring and retention, that is not optional. It is strategic.

Why video-only telehealth was never the answer

Video-only care tried to do too much at once. It asked technology to stand in for physical assessment, manual skill, and relationship building.

Hybrid care works when it does not make that mistake. Some moments in care absolutely require being in the room. Others require clarity, accountability, or reassurance. Those are not the same thing, and treating them as if they are has been part of the problem.

What this shift actually requires

Hybrid care does not work if it is bolted on without a plan.

Clinics that are making it work decide upfront which visits must happen in person and which ones can happen virtually without compromising outcomes. They help clinicians get comfortable using short virtual interactions with purpose instead of trying to replicate an in-clinic visit on a screen. They pay attention to whether flexibility actually improves adherence and completion, rather than assuming it will.

Most importantly, they stay grounded in why physical therapy exists in the first place. Patients still want human care. Clinicians still rely on hands-on skills and clinical judgment. Hybrid care is not about moving away from that. It is about protecting it in a system that is changing, whether we like it or not.

The fully in-person model is cracking because the world around it has changed. Hybrid care is one way physical therapy can adapt without losing what makes it effective.

Healthcare Organizations Prioritize Flexible Deployment, AI, and Collaboration as Physical Security Risks Rise

Healthcare organizations are rethinking how physical security supports not only safety, but operational resilience, workforce protection, and patient experience. According to healthcare-specific findings from Genetec’s 2026 State of Physical Security Report, hospitals and health systems are prioritizing deployment flexibility, artificial intelligence, and cross-department collaboration amid a measurable increase in physical security incidents.

Based on insights from physical security professionals working in or with healthcare environments worldwide, the report highlights how security strategies are evolving in response to rising threats, staffing challenges, and aging infrastructure.

Hybrid-cloud deployment remains central to healthcare resilience

Hybrid-cloud deployment models continue to dominate healthcare security strategies, reflecting the sector’s need for flexibility, control, and long-term adaptability. Nearly six in ten respondents cited continuous updates and software upgrades as the primary driver for adopting cloud or hybrid systems, followed by cost savings and faster deployment timelines.

Disaster recovery and data ownership also ranked high, underscoring healthcare’s regulatory complexity and the operational risks associated with downtime.

“Healthcare organizations are taking a measured, strategic approach to modernization,” said Dale Martin, Key Account Manager, Healthcare at Genetec. “Flexible deployment options support long-term planning and goals while allowing organizations to adapt as operational and clinical needs evolve.”

For many health systems, hybrid models offer a pragmatic path forward—modernizing without sacrificing control over sensitive data or disrupting mission-critical workflows.

Workforce constraints shape modernization priorities

The report reveals that security modernization is increasingly constrained by workforce realities. Training and upskilling staff was identified as the top challenge for healthcare organizations, followed closely by aging IT infrastructure and difficulty attracting and retaining talent.

These pressures are influencing how healthcare organizations set priorities. Rather than pursuing standalone technology upgrades, many are focusing on solutions that simplify operations, reduce manual workloads, and integrate more seamlessly with existing systems.

Looking ahead to 2026, access control emerged as the top planned investment area, followed by AI and video surveillance. The emphasis reflects a shift toward proactive security models that can scale without requiring proportional increases in staff.

AI adoption accelerates across physical security operations

Artificial intelligence is moving from experimentation to operational necessity in healthcare security environments. Nearly half of respondents plan to leverage AI to streamline security processes, signaling growing confidence in AI-driven tools for monitoring, analysis, and response.

AI-enabled security systems can help identify patterns, reduce false alarms, and surface actionable insights faster—capabilities that are particularly valuable in healthcare settings where security teams are often stretched thin and incidents can escalate quickly.

At the same time, the report highlights growing collaboration between physical security teams and other departments, including human resources and facilities management. This reflects a broader understanding that security is no longer siloed, but deeply connected to workforce safety, compliance, and day-to-day operations.

Physical security incidents continue to rise in healthcare settings

The urgency behind these investments is clear. Healthcare organizations reported significant increases in physical security incidents over the past year, including physical attacks on employees, verbal assaults, unauthorized entry, break-ins, and insider theft.

These trends mirror broader concerns across the healthcare sector around workplace violence, access control challenges, and the need to better protect frontline staff. As incidents increase, health systems are under pressure to respond faster and with greater situational awareness.

Security operations become increasingly data-centric

To address rising risks, healthcare organizations are expanding how security data is shared and used across the enterprise. More than half of respondents are now sending access activity data from security operations centers to other systems, while many also share alarms, incident data, and video or audio information.

At the same time, security operations centers are ingesting data from cybersecurity tools, asset monitoring systems, HR platforms, and external threat intelligence sources. This bidirectional flow of information reflects a convergence of physical security, cybersecurity, and operational intelligence.

Rather than serving solely as a reactive function, physical security is becoming an integrated data source that supports broader organizational awareness and decision-making.

Physical security data supports operational and experience goals

Healthcare organizations are increasingly using physical security data to drive outcomes beyond traditional safety metrics. The top objectives cited include improving safety and security, increasing operational efficiency within security teams, supporting regulatory compliance, and enhancing employee and patient experience.

Many organizations are also leveraging security data for occupancy management and space utilization, supporting broader operational efficiency efforts across departments.

As healthcare organizations face mounting pressures—from workforce shortages to rising violence and tighter budgets—the role of physical security continues to expand. The findings from Genetec’s 2026 report suggest that flexible architectures, AI-driven insights, and cross-functional collaboration will be essential to building safer, more resilient healthcare environments.

Amazon One Medical and the Rise of AI-Guided Primary Care

Amazon One Medical is quietly redefining how patients interact with primary care. With the recent introduction of its Health AI assistant, the company is moving beyond virtual visits and appointment booking into a new phase of AI-guided care navigation.

The development raises important questions for healthcare leaders, clinicians, and patients alike. What is Amazon One Medical? Who can access its AI tools? How might they affect patient care? And where are the boundaries between assistance and automation?

What Is Amazon One Medical

Amazon One Medical is a hybrid primary care organization that combines in-person clinics with digital and virtual care. Originally founded as One Medical, the company was acquired by Amazon in 2023 and now operates as part of Amazon’s expanding healthcare portfolio.

The service offers:

One Medical operates on a membership model, with discounted access available to Amazon Prime members. Patients can also access certain One Medical services on a pay-per-visit basis without a membership.

What Is the Amazon One Medical Health AI Assistant

Amazon One Medical’s Health AI assistant is a personalized, agentic AI tool embedded within the One Medical app. Unlike traditional symptom checkers, this assistant is designed to take action, not just provide information.

The AI assistant can:

The assistant is powered by large language models hosted on Amazon Bedrock and was developed with input from One Medical’s clinical leadership.

Who Can Use the AI Assistant

The Health AI assistant is not open to the general public.

Access is limited to One Medical members, who use the One Medical app and have an established medical record within the system. This is a deliberate design choice, as the AI relies on longitudinal patient data to personalize responses.

Non-members can still receive care from One Medical clinicians through in-person visits or pay-per-visit telehealth services. However, they do not have access to the personalized AI assistant experience.

How the AI Assistant May Improve Patient Care

Improved Access to Guidance

The AI assistant provides always-on access to health information and care guidance. For patients unsure whether symptoms warrant medical attention, this can reduce anxiety and encourage earlier engagement with care.

Better Care Navigation

By helping patients choose the appropriate level of care and facilitating appointment booking, the assistant reduces friction in accessing services. This is particularly valuable in a system where patients often struggle to understand where to go and when.

Reduced Administrative Burden

By handling routine tasks such as scheduling and medication questions, the AI assistant may reduce administrative workload for clinicians and staff, potentially allowing more time for direct patient care.

Safeguards and Clinical Boundaries

Amazon and One Medical have emphasized that the AI assistant is intended to support, not replace, clinicians.

Safeguards include:

HIPAA-compliant data handling and encryption

Conversations with the AI assistant are not automatically added to the medical record unless integrated into care workflows.

Legitimate Concerns and Risks

Risk of Over-Reliance

Patients may place undue trust in AI-generated guidance, especially when it appears confident and personalized. Research shows that many users struggle to distinguish between AI advice and clinician advice, increasing the risk of delayed or inappropriate care.

Accuracy and Hallucination Risks

Even well-designed AI systems can produce incorrect or incomplete information. Patient safety organizations have already identified misuse of AI chatbots as a leading health technology hazard due to the potential for misleading recommendations.

Bias and Equity Issues

AI systems can reflect biases present in training data. If not carefully monitored, this may contribute to disparities in guidance or care recommendations across different populations.

Trust and Transparency

Patients and clinicians alike express concern when AI tools operate without clear explanation of their role, limitations, and oversight. Transparency is critical to maintaining trust.

Does This Replace Doctors

No.

Amazon One Medical’s AI assistant does not diagnose conditions, make final treatment decisions, or replace clinician judgment. It functions as a care navigation and support layer, not a provider.

However, it does change how patients interact with the healthcare system. Many decisions about whether to seek care, how urgently to act, and where to go may now be influenced before a clinician is involved.

That influence is meaningful, even if it stops short of clinical decision-making.

What This Signals for Healthcare More Broadly

Amazon One Medical’s AI assistant reflects a broader shift in healthcare:

This has implications beyond One Medical. Health systems, medical groups, and digital health companies must consider how AI-driven guidance intersects with patient access, safety, and trust.

The Bigger Question

The most important question is not whether AI will be part of patient care. It already is.

The real question is whether healthcare organizations can ensure that AI tools guide patients toward timely, appropriate, and equitable care while preserving the central role of clinicians.

Amazon One Medical’s approach offers one early example of how that balance might be attempted. Its success or failure will provide important lessons for the rest of the healthcare industry.

When AI Becomes the First Place Patients Turn: How Search Is Quietly Reshaping Healthcare Discovery

For years, healthcare discovery followed a predictable pattern. Patients searched Google, scanned a list of links, clicked through multiple websites, and gradually decided where to seek care. That model is no longer dominant.

Increasingly, patients are asking artificial intelligence systems the questions they once asked search engines. They are doing so not only through standalone tools like ChatGPT and Perplexity, but also through AI-enhanced search experiences embedded directly into Google itself.

This shift is not theoretical. It is measurable, accelerating, and already reshaping how medical practices and health systems are discovered, interpreted, and chosen.

Patients Are Asking AI the Questions That Matter Most

Patients are no longer just searching. They are consulting.

Instead of short keyword phrases, they ask full, contextual questions. They want to know whether their symptoms are serious, what type of care they need, how quickly they should act, and which provider is best suited for their situation.

This behavior aligns with emerging consumer data. In a 2025 healthcare search survey by Aha Media Group and research partners, 35% of respondents reported stopping with the AI-provided answer, and 23% stopped at the AI result without scrolling further to explore additional sources. Meanwhile, 76% still used traditional Google search for health queries, but AI participation in the healthcare information journey is substantial and growing.

From my perspective as an AI system, this behavior makes sense. Patients are not looking for ten options. They are looking for clarity. When the question is urgent or emotionally charged, an AI-synthesized answer often feels more helpful than a list of links. That shift changes where influence happens.

Google Is Still Dominant, but the Click Is Losing Power

Google has not been replaced. It has been transformed.

AI-generated summaries, known as AI Overviews, now appear prominently in many search results. When these summaries appear, user click behavior changes dramatically.

According to a Pew Research Center analysis, when results included an AI Overview, users clicked on traditional search result links only 8% of the time, compared to 15% when no AI summary was shown.

Users also rarely clicked on links included within the AI summary itself, doing so in only 1% of such visits.

This means the presence of an AI answer greatly reduces outward navigation. For healthcare organizations, fewer clicks do not mean less demand. They mean decisions are often reached before users ever visit a website.

From an AI perspective, this behavior aligns with the design goal of answering questions efficiently. The system’s priority is resolution of intent, not referral to external content.

Zero-Click Search Is Becoming the Norm

Zero-click search — where users receive the answer they need on the search page itself and never click to an external site — is no longer a fringe phenomenon.

Pew’s research also found that about 18% of all Google searches in March 2025 triggered an AI Overview, with longer, natural-language, or question-formatted queries more likely to produce these summaries.

Broadly, many modern Google interactions now end without any outbound click. This is especially pronounced in healthcare queries, which are frequently phrased as symptoms or conditions rather than short keyword strings. Search behavior data shows users often leave the search session entirely after AI answers are provided, especially when they feel their query is resolved.

What this means for healthcare leaders is that visibility no longer reliably shows up in website traffic alone. Influence happens inside the answer, not after the click.

How AI Interprets Healthcare Organizations

AI does not browse the web like a human. It synthesizes information from multiple indexed sources and produces a single, conversational answer. Unlike a list of links, a synthesized answer presents what looks like a curated fact set.

That synthesis is powerful, but it has limitations.

A recent Guardian investigation found that Google’s AI Overviews for health queries frequently cited YouTube more than any traditional medical site, even though reputable health authorities such as government health agencies and hospital portals were available. YouTube accounted for 4.43% of citations in one study of over 50,000 German health queries, making it the most cited domain overall, ahead of established medical institutions.

Health experts have also raised concerns that some AI health answers have delivered misleading or potentially dangerous advice, prompting Google to remove certain AI health summaries after review. These issues included incorrect medical interpretation of blood test ranges and oversimplified guidance about diet and disease screening.

In effect, AI becomes an interpreter between healthcare organizations and patients. If the interpretation is unclear, inconsistent, or based on sources with variable quality, confidence drops and alternative options are more likely to be presented.

What This Means for Medical Practices

For independent practices, AI-driven discovery changes not just how patients find them, but how patients arrive.

Many practices are seeing fewer exploratory website visits and more appointment-ready phone calls. Patients often believe they already understand services, insurance acceptance, and appropriateness of care before ever reaching a scheduling page. When those assumptions are accurate, access feels efficient. When they are not, frustration rises quickly.

Practices that rely solely on traditional search rankings may find themselves absent from AI?generated answers, even if they appear on page one of organic results. AI does not rank pages. It selects answers.

Clarity and specificity matter more than ever.

What This Means for Health Systems

Health systems benefit from brand recognition, but that advantage is increasingly segmented.

Patients rarely search for a health system as a whole. They search for specific services, conditions, and locations. AI systems evaluate each of these independently, and often the summarization points to general answers or external high-authority sources rather than institutional pages.

A strong orthopedic department does not automatically elevate cardiology in AI summaries. A flagship hospital does not guarantee visibility for affiliated clinics. Inconsistent service descriptions and outdated access information weaken AI confidence and reduce the likelihood of being recommended.

This fragmentation can create operational pressure. Call patterns shift. Patient access teams feel increased strain. Marketing teams struggle to reconcile changes in traffic with stable or increasing patient demand.

Leadership senses a change without a clear explanation.

AI search is often the missing variable.

AI Search Is a Patient Access Issue

This is the most important reframing.

AI-driven discovery is not just a marketing concern. It is a patient access issue.

When AI systems influence where patients go, how urgent they believe their condition is, and whether they seek care at all, they are shaping care pathways. This matters because healthcare queries often occur in moments of anxiety, uncertainty, or urgency, and users may take the first credible answer at face value.

The responsibility for accurate, clear, and accessible information becomes paramount when systems designed for efficiency also function as gatekeepers to health knowledge.

Measuring Visibility in an AI-Influenced World

Traditional metrics like website traffic, rankings, and page views still matter, but they capture only what happens after a click. Increasingly, influence and visibility happen before that moment, inside summaries and answers that are difficult to trace with conventional analytics.

This does not mean measurement is impossible. It means healthcare organizations must expand how they think about brand presence, reputation, and access.

From an AI perspective, the organizations that perform best are not necessarily the loudest or the largest.

They are the clearest.

The Questions Healthcare Leaders Should Be Asking

As AI becomes a primary entry point for healthcare questions, a few questions matter more than any tactic.

These are not future concerns. They are present-day realities.

A Quiet but Lasting Shift

AI is not replacing search. It is redefining it.

Patients still search. Google still matters. Websites still matter. But the path between question and decision is shorter, more conversational, and increasingly mediated by AI.

When patients trust the answer they receive, they move on. They do not keep searching.

For medical practices and health systems, the challenge is not to chase technology. It is to ensure that when AI systems interpret and summarize healthcare options, they do so accurately and responsibly.

Because in healthcare, being the answer carries real consequences.

Misuse of AI Chatbots Tops ECRI’s 2026 Health Technology Hazards List

Artificial intelligence chatbots have emerged as the most significant health technology hazard for 2026, according to a new report from ECRI, an independent, nonpartisan patient safety organization.

The finding leads ECRI’s annual Top 10 Health Technology Hazards report, which highlights emerging risks tied to healthcare technologies that could jeopardize patient safety if left unaddressed. The organization warns that while AI chatbots can offer value in clinical and administrative settings, their misuse poses a growing threat as adoption accelerates across healthcare.

Unregulated Tools, Real-World Risk

Chatbots powered by large language models, including platforms such as ChatGPT, Claude, Copilot, Gemini, and Grok, generate human-like responses to user prompts by predicting word patterns from vast training datasets. Although these systems can sound authoritative and confident, ECRI emphasizes that they are not regulated as medical devices and are not validated for clinical decision-making.

Despite those limitations, use is expanding rapidly among clinicians, healthcare staff, and patients. ECRI cites recent analysis indicating that more than 40 million people worldwide turn to ChatGPT daily for health information.

According to ECRI, this growing reliance increases the risk that false or misleading information could influence patient care. Unlike clinicians, AI systems do not understand clinical context or exercise judgment. They are designed to provide an answer in all cases, even when no reliable answer exists.

“Medicine is a fundamentally human endeavor,” said Marcus Schabacker, MD, PhD, president and chief executive officer of ECRI. “While chatbots are powerful tools, the algorithms cannot replace the expertise, education, and experience of medical professionals.”

Documented Errors and Patient Safety Concerns

ECRI reports that chatbots have generated incorrect diagnoses, recommended unnecessary testing, promoted substandard medical products, and produced fabricated medical information while presenting responses as authoritative.

In one test scenario, an AI chatbot incorrectly advised that it would be acceptable to place an electrosurgical return electrode over a patient’s shoulder blade. Following such guidance could expose patients to a serious risk of burns, ECRI said.

Patient safety experts note that the risks associated with chatbot misuse may intensify as access to care becomes more constrained. Rising healthcare costs and hospital or clinic closures could drive more patients to rely on AI tools as a substitute for professional medical advice.

ECRI will further examine these concerns during a live webcast scheduled for January 28, focused on the hidden dangers of AI chatbots in healthcare.

Equity and Bias Implications

Beyond clinical accuracy, ECRI warns that AI chatbots may also worsen existing health disparities. Because these systems reflect the data on which they are trained, embedded biases can influence how information is interpreted and presented.

“AI models reflect the knowledge and beliefs on which they are trained, biases and all,” Schabacker said. “If healthcare stakeholders are not careful, AI could further entrench the disparities that many have worked for decades to eliminate from health systems.”

Guidance for Safer Use

ECRI’s report emphasizes that chatbot risks can be reduced through education, governance, and oversight. Patients and clinicians are encouraged to understand the limitations of AI tools and to verify chatbot-generated information with trusted, knowledgeable sources.

For healthcare organizations, ECRI recommends establishing formal AI governance committees, providing training for clinicians and staff, and routinely auditing AI system performance to identify errors, bias, or unintended consequences.

Other Health Technology Hazards for 2026

In addition to AI chatbot misuse, ECRI identified nine other priority risks for the coming year:

Now in its 18th year, ECRI’s Top 10 Health Technology Hazards report draws on incident investigations, reporting databases, and independent medical device testing. Since its introduction in 2008, the report has been used by hospitals, health systems, ambulatory surgery centers, and manufacturers to identify and mitigate emerging technology-related risks.

Why Health IT Leaders Must Attend HIMSS26: The Effort Is Still Worth It

HIMSS can be overwhelming. Anyone who tells you otherwise either has not been or has not stayed long enough to feel it.

The scale is immense. The exhibit hall stretches endlessly. The sessions overlap. The conversations start early and run late. It is loud, fast, and relentless. And yet, for many health IT leaders in 2026, that intensity is precisely why attendance still matters.

HIMSS is not a conference you attend casually. It is one you attend with purpose. And when approached deliberately, it remains one of the few environments capable of delivering something increasingly rare in healthcare IT: true ecosystem-level perspective.

HIMSS is where the whole system shows up at once

Healthcare IT does not operate in silos, even when organizations wish it did. Strategy is shaped simultaneously by vendors, regulators, clinicians, payers, policymakers, standards bodies, and emerging innovators.

HIMSS is one of the only places where all of those forces converge in the same physical space, at the same time.

That matters.

Reading reports, joining webinars, and attending niche events can deepen understanding of specific issues. But they rarely reveal how the broader system is moving. HIMSS allows leaders to step back from daily operations and see patterns forming across the industry, patterns that will shape procurement decisions, regulatory expectations, and technology roadmaps long after the conference ends.

For leaders responsible for long-term planning, that macro view is not a luxury. It is a necessity.

Strategy requires context and HIMSS provides it at scale

Health IT leaders are increasingly expected to think beyond their own organizations. Boards and executive teams want to know where the industry is actually going, which technologies are maturing and which are stalling, how peers are responding to the same constraints, and what expectations are forming that will soon become table stakes.

HIMSS is not about finding all the answers. It is about understanding the direction of travel.

The conversations that happen in hallways, side meetings, and unscripted encounters often provide more strategic insight than any single session. Leaders hear what vendors are emphasizing and what they are quietly backing away from. They hear what peers are excited about and what they regret. Those signals are difficult to capture anywhere else at the same density.

AI demands discernment, not distance

AI will dominate HIMSS26, and that alone is a reason serious leaders should attend.

Not because the hype is convincing, but because it needs to be interrogated.

AI is no longer speculative. It is entering contracts, workflows, and governance discussions. Leaders who avoid the conversation risk falling behind not technologically, but organizationally. HIMSS provides a unique opportunity to compare claims, question assumptions, and evaluate maturity across dozens of vendors and use cases in a compressed timeframe.

Seeing AI presented side by side across clinical, operational, and administrative domains helps leaders distinguish between novelty and readiness. That discernment is difficult to develop from a distance.

Leadership visibility still matters

For better or worse, HIMSS remains a stage.

Attendance signals engagement, not just with technology, but with the industry itself. For CIOs, CMIOs, and senior IT leaders, being present communicates credibility to peers, vendors, and internal stakeholders.

That visibility is not about ego. It is about influence.

Decisions made in healthcare IT are increasingly shaped by informal networks and shared understanding. Leaders who show up, listen, and contribute thoughtfully help shape the conversations that ripple outward long after the conference concludes.

The value is unlocked by intention

HIMSS fails leaders who attend without a plan. It rewards those who arrive with clarity.

The leaders who benefit most define specific objectives before they arrive, schedule meetings in advance, prioritize conversations over sessions, and treat the exhibit hall as research rather than entertainment.

When approached this way, HIMSS becomes less about consumption and more about synthesis. It becomes a place to test assumptions, pressure-test strategy, and recalibrate priorities.

HIMSS is not mandatory, but it is still consequential

Not every leader needs to attend every year. That is no longer realistic or necessary.

But for leaders shaping enterprise IT strategy, navigating AI adoption, managing vendor ecosystems, or preparing for regulatory and operational shifts, HIMSS26 remains one of the few environments capable of delivering concentrated insight at scale.

It is exhausting. It is imperfect. It is too much at times.

And yet, done right, it still matters.

Because healthcare IT does not move forward in isolation. And once a year, HIMSS offers a rare opportunity to see the entire machine in motion.

Why Some Health IT Leaders Should Skip HIMSS26 (Even If They’ve Loved HIMSS in the Past)

I’ve been to HIMSS. More than once.

I’ve walked the exhibit floor until my feet hurt, sat in packed sessions scribbling notes, ducked into impromptu hallway meetings that turned into meaningful conversations, and felt that familiar rush that comes from being surrounded by thousands of people who believe—earnestly—that technology can still fix healthcare.

HIMSS is amazing.

It’s big. It’s energizing. It’s unlike anything else in health IT.

And that’s exactly the problem.

For a growing number of health IT leaders, HIMSS has quietly crossed a line—from valuable immersion to overwhelming excess. Not because the conference has failed, but because the realities of healthcare IT leadership have changed faster than the conference model itself.

The scale that once inspired now exhausts

There was a time when the sheer size of HIMSS felt necessary. Healthcare IT was fragmented. Leaders needed a single place to see everything at once: vendors, standards bodies, policymakers, peers, futurists, all under one roof.

Today, that same scale can feel paralyzing.

The exhibit hall alone is an exercise in cognitive overload. Hundreds of vendors, many offering overlapping solutions, each promising transformation. Sessions compete for attention, often scheduled simultaneously, forcing leaders to choose between equally relevant (or equally vague) discussions.

Instead of clarity, many leaders leave with more noise than signal.

In an industry already drowning in complexity, “everything at once” is no longer a feature—it’s friction.

The cost conversation has become unavoidable

Health IT leaders are being scrutinized in ways they weren’t even five years ago. Every trip, every conference, every line item now carries weight.

HIMSS isn’t just a registration fee. It’s airfare, hotels at premium pricing, meals, transportation, and—most expensively—time away from systems that do not pause because a CIO is in Las Vegas.

For leaders operating under that scrutiny, the question isn’t whether HIMSS is “worth it” in theory. It’s whether it delivers measurable, defensible value relative to smaller, more targeted alternatives.

Increasingly, that answer isn’t automatic.

The content problem isn’t quality—it’s distance from reality

HIMSS sessions are polished. Thoughtful. Often well-produced.

But many health IT leaders don’t struggle with vision. They struggle with execution.

They know AI is coming. They know interoperability matters. They know cybersecurity threats are escalating. What they need help with are the unglamorous questions of execution.

Those conversations are harder to stage on a massive platform. They don’t lend themselves to tidy panels or sponsor-friendly narratives. As a result, the content can feel increasingly disconnected from the messiness leaders are living every day.

Vendor optimism vs. operator reality

HIMSS is, by necessity, vendor-forward. That doesn’t make it illegitimate—but it does shape the experience.

The show floor is designed to highlight what’s possible, not what’s painful. Failure stories are rare. Long-term consequences are softened. Complexity is abstracted.

For leaders who spend their days living that complexity, the disconnect can be jarring. The optimism doesn’t always feel dishonest—but it often feels incomplete.

AI has moved from excitement to fatigue

AI will dominate HIMSS26. That’s not speculation—it’s certainty.

But many health IT leaders have moved past excitement into discernment. They aren’t asking what AI can do. They’re asking where it breaks, who owns the risk, and how they’ll be held accountable when it fails.

HIMSS excels at showcasing possibility. It struggles to dwell on maintenance, governance, and rollback plans. Inspiration is plentiful. Operational guidance is not.

In 2026, that imbalance matters.

Time away is no longer neutral

Four days at HIMSS isn’t just four days out of the office—it’s four days of deferred decisions, delayed approvals, and unanswered questions.

In a world of constant system pressure, staffing shortages, and clinician burnout, time away carries a real operational cost. Leaders are increasingly asking whether that cost is justified when similar insights can be gained through smaller, more targeted channels.

Skipping HIMSS isn’t a rejection—it’s a recalibration

Not attending HIMSS no longer signals disengagement.

For many, it signals discipline.

It says: I know what HIMSS offers. I’ve been there. I’ve benefited from it. But this year, my organization needs depth over breadth, execution over exposure, and outcomes over inspiration.

That’s not cynicism. That’s leadership.

A final note—before the emails arrive

HIMSS26 will be right for some leaders. It will still deliver value for specific goals, roles, and moments in an organization’s lifecycle.

But it should no longer be treated as mandatory.

Sometimes the most strategic decision a health IT leader can make isn’t showing up to the biggest event in the room—it’s knowing when less is actually more.

OpenAI’s Healthcare Push: How ChatGPT Health and Clinical AI Tools Are Poised to Reshape Care at the Point of Care

OpenAI — the artificial intelligence developer best known for ChatGPT — has rapidly moved from offering a general-purpose AI assistant to becoming a major player in healthcare technology. In January 2026, the company unveiled ChatGPT Health, a dedicated health-focused experience inside ChatGPT that marks its most ambitious entry yet into personal and clinical healthcare support.

From Casual Queries to Personalized Health Support

For years, millions of people have turned to AI chatbots for medical questions. According to OpenAI data, more than 40 million users ask ChatGPT health-related questions every day, with health topics now accounting for a significant portion of overall usage.

Recognizing this demand, OpenAI launched ChatGPT Health, a separate tab within ChatGPT where users can ask wellness and medical questions in a protected environment and — if they choose — connect their own health information. This includes uploading medical records and linking popular wellness apps like Apple Health, MyFitnessPal, Oura, Peloton, and others to provide tailored context behind lab results, lifestyle data, and trends.

But OpenAI stresses an important distinction: ChatGPT Health is not a diagnostic or treatment tool. It is designed as an informational companion — helping users interpret test results, prepare for appointments, decode insurance documents, and understand medical terminology — not to replace licensed clinical judgment.

What ChatGPT Health Actually Does

Inside the dedicated Health space, users who opt in can:

  • Upload and explain medical reports — get plain-language summaries of blood tests, imaging findings, and other clinical data.

  • Connect wellness and fitness apps — integrate activity, sleep, nutrition, and vitals to spot patterns and correlations.

  • Prepare for clinical visits — generate tailored questions and talking points to improve the quality of doctor encounters.

  • Navigate insurance complexities — use AI to decode dense benefit language or appeal denials.

All health-related chats are encrypted, segregated from regular ChatGPT conversations, and not used to train OpenAI’s foundational models, addressing a key privacy concern.

Enterprise AI: ChatGPT for Healthcare in Clinical Workflows

Beyond consumer health support, OpenAI has been quietly building enterprise-grade AI tools for clinicians and health systems under the banner ChatGPT for Healthcare. According to OpenAI’s documentation, this version is tailored to clinical workflows and designed for regulated environments with HIPAA-compatible security, enabling:

  • Clinical evidence retrieval with citations — answers linked directly to peer-reviewed studies and guidelines, helping clinicians verify responses.

  • Automated drafting of clinical documents — from discharge summaries to prior authorizations and patient instructions.

  • Integration with internal systems — support for SharePoint, Teams, and custom care pathways so answers reflect organizational policies.

  • Custom templates for repetitive tasks — reducing administrative burden for physicians, nurses, and support staff.

These capabilities aim to reduce the non-clinical workload that contributes to provider burnout and free up time for direct patient interaction.

Implications at the Point of Care

1) Real-Time Decision Support

AI that can surface evidence-based information with transparent citations means clinicians could receive decision support during patient encounters — reducing time spent navigating guidelines and literature. This could speed diagnostic reasoning and inform shared decision-making with patients.

2) Reduced Administrative Drag

Healthcare professionals spend an estimated half of their time on documentation and administrative work. AI tools that automate note creation, prior authorizations, and letters could return those hours to patient care. Although rigorous real-world evaluations are still emerging, early enterprise deployments underscore this potential.

3) Enhanced Patient Engagement

For patients outside clinical settings, ChatGPT Health offers 24/7 access to health information, helping them prepare more informed questions, understand treatment options, and manage chronic conditions through personalized insights drawn from their own data. This is especially meaningful in rural and underserved areas where clinicians are less accessible.

4) New Risks and Ethical Considerations

Despite strong privacy features, OpenAI’s tools are not subject to HIPAA by default when used by consumers, and experts caution about over-reliance. Inaccurate or “hallucinated” AI responses remain a recognized risk, and clinicians must guard against incorporating flawed suggestions into care.

There’s also a broader ethical conversation around AI filling care gaps in underserved populations — a symptom of systemic access issues rather than a substitute for equitable healthcare infrastructure.

Competitive and Regulatory Landscape

OpenAI’s healthcare push has not gone unnoticed. Competitors such as Anthropic have launched similar offerings aimed at health systems and payers, broadening the AI-health ecosystem and intensifying focus on accuracy, safety, and compliance.

Regulators and healthcare leaders are watching closely as AI platforms increasingly intersect with sensitive medical workflows, with questions about liability, oversight, and standards for AI’s role in clinical settings yet to be fully resolved.

Conclusion: A Transformative But Cautious Future

OpenAI’s entry into healthcare represents a milestone in AI adoption across both consumer health and clinical domains. With tools like ChatGPT Health and ChatGPT for Healthcare, the company is setting a new baseline for how artificial intelligence can support understanding, preparation, and clinical work at the point of care.

But the journey from informational assistant to trusted clinical partner involves navigating privacy concerns, regulatory frameworks, and the perennial challenge of ensuring accuracy. As healthcare organizations and patients experiment with these technologies in 2026, the outcomes will shape the future of AI’s role in medicine.