Control The Costs You Can: Empower Providers To Adjudicate Real Costs At Point of Care

By G.T. LaBorde, CEO, IllumiCare

More than half of hospitals in the U.S. are projected to experience negative margins in 2022, with expenses estimated to increase by nearly $135 billion over 2021 levels, according to a recent Kaufman, Hall & Associates report.[1]

While health systems cannot directly control the rising rate of inflation, they can reduce losses through utilization management strategies and tools designed to ensure that patients receive the care they require, without excessive testing or the unnecessary costs of care they don't need.

Utilization management, while effective at addressing the most obvious sources of waste within a health system, has been less successful at a more granular level, due in part to the disconnect between those who create and enforce clinical cost guidelines and those who actually provide the care. Hospital-based utilization reviews grew in popularity throughout the 1960s and 1970s, as a result of growing doubts that greater medical care expenditures resulted in improved health status.

By the 1980s, utilization efforts began to transition to third-parties, such as health plans, in response to research that suggested that many medical services were unnecessary or inappropriate; an increased emphasis by purchasers on linking cost containment with quality assurance; and a proliferation of information resources and assessment tools that made case-by-case review of proposed services feasible on a large scale.[2]

Throughout its history, utilization management has placed increasing pressure on providers with regard to the cost of care, starting with prospective payment, then HMOs, and now value-based care and bundled payments. Each new effort has sought to transfer greater economic risk onto providers, perhaps because those who administer costs take the view that since providers make the decisions about what to spend, they should also manage the implications of those decisions.

Over the same period, provider access to cost data remained very limited, so decisions about which drugs to prescribe or treatments to undertake were made with no visibility into the related costs. As a result, clinicians have long been outspoken critics of utilization management, which they see as limiting their clinical autonomy and contributing to an ever-increasing administrative burden.[3]