Guest post by Michael Barbouche, founder and CEO, Forward Health Group.
As clearly identified in the PCAST Report on Health Information Technology (2011), and as echoed in the recent GAO report Electronic Health Record Programs — Participation Has Increased, but Action Needed to Achieve Goals Including Improved Quality of Care (2014), healthcare continues to have a data problem, despite the country's significant investment in advancing EHR adoption.
In simpler terms, healthcare data is messy, and that messiness makes building accurate, actionable metadata a problem. It’s clear that the next generation of standards being developed by the numerous committees, acronyms, and professional societies tackling measure development, harmonization, and testing will need to address the relevance of each measure.
More than a decade ago, a coalition of purchasers, payers, and providers came together across Wisconsin to form the Wisconsin Collaborative for Healthcare Quality (www.wchq.org). Groundbreaking initiatives like Get with the Guidelines, Leapfrog, and JCAHO revealed that “quality” and “healthcare” could be used in the same sentence (or displayed on a website). These efforts were largely inpatient-focused. Measurement in the outpatient setting, long considered the keystone of payment reform, was an unsolved riddle. WCHQ, at the urging of the IOM, IHI, and others, accepted the challenge of tackling performance in the ambulatory arena.
At the direction of some very engaged employers, and with input from most of the state’s payers, WCHQ was charged with one very simple goal: apples-to-apples quality measurement, regardless of health IT infrastructure. The focus had to include both processes of care and outcomes. Oh, and if health systems didn’t have any health IT in place, their data still needed to be included in the measurement effort. What transpired over an 18-month period was remarkable. With unwavering support from administrative and clinical leadership, health systems rolled up their sleeves and dug into their very messy data. Each Monday, we would devise a fiendish list of new tasks to be completed in the next four business days.
- Have 10 PCPs manually chart review 500 (or even 5,000) patients to identify false-positive diabetes diagnoses; have the nurse and medical assistant look over their PCP’s shoulder.
- Extract every unique CPT code used by the system in the past five years; sort by frequency and last date of use; distill all the custom codes that have been used more than twice; identify what those codes mean.
- Identify every physician who has worked in the system in the past three years; compile every permutation of ProviderID used for each physician.
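To make the code-inventory task above concrete, here is a minimal sketch of the kind of pass a system might run over its billing extract. The record shapes and the custom code `X-DM1` are hypothetical, invented for illustration; the logic simply counts frequency and last date of use per code, then flags non-standard codes used more than twice for human review.

```python
from collections import defaultdict
from datetime import date

# Hypothetical claim rows: (procedure code, date of service).
claims = [
    ("99213", date(2014, 3, 1)),
    ("X-DM1", date(2013, 5, 2)),   # home-grown custom code
    ("X-DM1", date(2014, 1, 9)),
    ("X-DM1", date(2012, 7, 4)),
    ("99214", date(2014, 6, 20)),
]

# Tally frequency and most recent use for every unique code.
stats = defaultdict(lambda: {"count": 0, "last_used": date.min})
for code, dos in claims:
    stats[code]["count"] += 1
    stats[code]["last_used"] = max(stats[code]["last_used"], dos)

# Standard CPT codes are five digits; anything else is a custom code.
# Custom codes used more than twice need a human to explain what they mean.
custom_to_review = [c for c, s in stats.items()
                    if not c.isdigit() and s["count"] > 2]
print(custom_to_review)  # → ['X-DM1']
```

The point of the exercise was never the script; it was forcing each system to confront how many undocumented local codes its "clean" data contained.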
A laboratory like this helped the entire group of stakeholders understand engagement, precision, and outcomes. The most important lesson, learned before we had EHRs (they were still called EMRs), before we had HIEs (they were still called RHIOs, and they, too, didn’t have a business model), and before we had eCQMs (there were only a few clusters of HEDIS groupies): the data was all there. It had always been there. Yes, healthcare data can be quite messy and ugly, but healthcare produces a lot of rich data and most of it is rotting in place.
In 2004, we didn’t have an array of acronyms, measure developers, and formats for quality. There weren’t reporting standards. There was no PQRS, no MU, no VBPM. Only one third of the health systems that participated in this voluntary initiative had even started their EHR implementation. Said another way, the majority of participating sites remained happily paper-based; all were information-deprived. Leveraging one home-grown measure construct (no surprise, diabetes), and facing a mandate to measure all patients within each system, we focused on building data that all would accept as accurate. How did we pull it off? Well, for starters, we didn’t build a single interface. Never once was the term “HL7” uttered. We didn’t need a standard, a protocol, or even a measure definition to produce patient-level outcomes for more than half of Wisconsin’s residents. We simply needed a denominator.
Why does the denominator matter?
- Physician buy-in doesn’t come easy. If the list of patients quickly scanned by a clinician contains a bunch of false positives or a bunch of unrecognized names, you have lost 18 months (or more). Participation, enthusiasm and trust require accurate data.
- Size matters. If you have never suffered through a statistics class, here is an important tip: there are no “magical” sample sizes specific to healthcare. n=25 is neither significant nor statistical nor relevant. n=30 is not “5 better” than 25; it is simply a fraction of a panel size and doesn’t mean a heck of a lot. And no, n=60 is not “twice” as good.
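The claim about sample sizes can be made precise with a standard approximation: the uncertainty in a measured rate shrinks only with the square root of n, so small bumps in sample size buy almost nothing. A quick sketch, using a hypothetical panel where 40 percent of diabetes patients are at goal:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p measured on n patients."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical: 40% of a diabetes panel is at goal. How precise is that estimate?
for n in (25, 30, 60, 500):
    print(n, round(margin_of_error(0.40, n), 3))
# 25  → ±0.192
# 30  → ±0.175
# 60  → ±0.124
# 500 → ±0.043
```

At n=25 the estimate is ±19 percentage points, and going to n=30 barely moves it; even doubling to n=60 shrinks the error by a factor of √2, not 2. Only a full-panel denominator gets the precision down to something a clinician will trust.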
- Data stewardship is for life. Sticky challenges like patient attribution will never be solved by an algorithm alone; they can only be solved by engagement and ongoing, iterative improvement in the underlying collection of data.
- Transactional data in — population data out. Much time and effort is spent today entering data one patient at a time, one visit at a time. As reimbursement shifts, so too must our use of data. Practice management data is no different than payer data is no different than patient-reported data is no different than EHR data. Each is a data source that must be harvested, aligned and aggregated.
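The harvest-align-aggregate idea in the last bullet can be sketched in a few lines. Everything here is illustrative: the record shapes, the patient identifiers, and the small set of ICD-10 diabetes codes are all hypothetical stand-ins, but the shape of the work is real — each transactional source contributes patients, and the union becomes the population denominator.

```python
# Hypothetical extracts from three transactional sources.
ehr = [{"mrn": "A1", "dx": ["E11.9"]},
       {"mrn": "A2", "dx": ["I10"]}]                      # hypertension only
claims = [{"member_id": "A1", "dx": ["E11.9"]},
          {"member_id": "A3", "dx": ["E11.65"]}]
patient_reported = [{"patient": "A3", "self_reported_diabetes": True}]

# Illustrative subset of ICD-10 type 2 diabetes codes.
DIABETES_CODES = {"E11.9", "E11.65"}

# Harvest and align: each source nominates patients into one population denominator.
denominator = set()
for rec in ehr:
    if DIABETES_CODES & set(rec["dx"]):
        denominator.add(rec["mrn"])
for rec in claims:
    if DIABETES_CODES & set(rec["dx"]):
        denominator.add(rec["member_id"])
for rec in patient_reported:
    if rec["self_reported_diabetes"]:
        denominator.add(rec["patient"])

print(sorted(denominator))  # → ['A1', 'A3']
```

In practice the alignment step is the hard part — matching an MRN to a member ID to a self-reported record is exactly the stewardship work the bullets above describe — but the output is the same: one denominator that every source agrees on.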
The goal for every practice in the US should be to get out of the health IT implementation business and get into the performance improvement business. The far harder job facing health systems is improving quality while lowering costs. Changing culture, building systems of care, and constructing new compensation programs: that is the heavy lifting that needs to be done. Workflows are not easy to figure out in healthcare, and we have a lot of workflows to map, implement, and spread across this country. How best to do this? Start with a good denominator.