Guest post by Anil Jain, MD, FACP, senior vice president and chief medical officer, Explorys, and consulting staff, Department of Internal Medicine, Cleveland Clinic.
Despite advances in medical education, the proliferation of medical journals and near-instant retrieval of information on the Internet, the lag between when researchers identify life-saving clinical interventions and when those interventions are put into practice ranges from 10 to 25 years, averaging 17 years. This lag between discovery at the “bench” and practice at the “bedside” is even more startling when you consider the impact of care at the “bedside” on the “bottom-line.” That “bottom-line” has become increasingly important with the formation of accountable care organizations (ACOs), which aim to reward provider organizations and payers that meet the “triple-aim”: high-quality care for the population and for the individual patient, at the most affordable cost. Unfortunately, current practice at the “bedside” reportedly generates approximately $700 billion in care that isn’t necessary and may even be harmful, hurting the “bottom-line.” Moreover, despite healthcare expenditures of 17 percent of GDP, the U.S. lags behind most industrialized nations on composite measures of healthcare quality.
With the increasing use of health information technology and data, we should be able to shorten the time from “bench” to “bedside” and improve the “bottom-line.”
“Big data” is data of such high volume, variety and velocity that it is not amenable to traditional data storage and analysis tools. In healthcare, this “big data” is most typically generated by health systems’ electronic health records (EHRs) and laboratory, radiology, financial and billing systems, as well as personal health records, biometrics and smart devices. In addition, patients today often use mobile health and wellness apps and wearable devices that collect a plethora of data, adding to the complexity.
The aggregation of de-identified medical information across millions of health records from varying venues of care, which enables a longitudinal view of each person, can be incredibly beneficial for researchers focused on net new knowledge discovery. For data from disparate health systems to be aggregated, it is vital that the data be standardized and that subjects can be matched across health systems. This harmonization of disparate data, coupled with the appropriate analytics software, is critical to identifying patterns in the data.
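To make the matching step concrete, here is a minimal sketch of how subjects might be linked across systems without exposing identifiers: each record is normalized onto a common schema and reduced to a one-way hashed key. All field names and records here are illustrative assumptions, not any real standard or the Explorys implementation; production record linkage is far more sophisticated (probabilistic matching, salted hashes, honest-broker services).

```python
import hashlib

def normalize(record):
    """Map a raw record onto a common schema (field names are illustrative)."""
    return {
        "first": record["first_name"].strip().lower(),
        "last": record["last_name"].strip().lower(),
        "dob": record["dob"],  # assumed to already be ISO format YYYY-MM-DD
    }

def match_key(record):
    """Derive a one-way hashed key so the same subject can be matched
    across health systems without storing direct identifiers."""
    norm = normalize(record)
    raw = f'{norm["first"]}|{norm["last"]}|{norm["dob"]}'
    return hashlib.sha256(raw.encode()).hexdigest()

# Hypothetical records for the same person from two different systems,
# with differing capitalization and whitespace
a = {"first_name": "Jane ", "last_name": "Doe", "dob": "1970-01-01"}
b = {"first_name": "jane", "last_name": "DOE", "dob": "1970-01-01"}
print(match_key(a) == match_key(b))  # True: the normalized keys agree
```

The normalization step is what makes harmonization work: without it, trivial formatting differences between systems would prevent the two records from ever being linked.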
In this setting, the larger the data set, the more likely a signal can be detected through the noise, even for the rarest of conditions. Fortunately, many hypotheses can be conceived and tested through appropriate analytics within this real-world data set in a much more cost-effective manner than conducting full-scale clinical trials. Furthermore, if a signal is detected or a pattern is found, researchers can then design a more focused explanatory or pragmatic clinical trial to prospectively test the hypothesis. For example, over the past few years within the Explorys network, more than a dozen peer-reviewed abstracts and publications have been generated by leveraging a de-identified data set comprising nearly 48 million subjects, searchable by a specialized browser-based analysis and query application.
Studies have shown that the average clinician generates at least one new question for every two patients seen (Del Fiol, 2014), and that to stay current in primary care, he or she would need to spend about 29 hours each weekday (Alper, 2004) just reviewing journal articles. Thus, many clinicians rely on printed and electronic textbooks, knowing that the information may not be as current. These factors explain some of the lag between the “bench” and the “bedside.”
Some clinicians turn to systematic reviews and clinical guidelines as a source of information, knowing that experts have weighed the most convincing evidence to form their recommendations. It is often this guidance that is made available in the electronic health record as the “rules” that govern clinical decision support systems. For example, the National Guideline Clearinghouse (www.guideline.gov), a public resource for evidence-based clinical practice guidelines, contains thousands of such documents. The numerous process and outcome quality metrics derived from these guidelines are the basis of many quality improvement programs, pay-for-performance programs, insurance-based reporting, medical homes and value-based care initiatives, such as the CMS Medicare Shared Savings Program ACO (MSSP ACO).
The National Quality Measures Clearinghouse (NQMC) is a similar public resource for evidence-based quality measures and measure sets. With the appropriate analytics, clinical data can be used at the “bedside” to assess how well providers and practices are performing on quality metrics and how closely they adhere to guidelines. Most importantly, gaps in care can be identified and closed at the point of care, improving the quality of care. Finally, as an example of big data at the bedside, data aggregated across the care continuum can be used to build predictive models that reduce hospital re-admissions, improving quality of care and patient experience while reducing cost.
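As a sketch of what a re-admission model can look like at its simplest, the toy function below computes a LACE-style risk score from length of stay, admission acuity, comorbidity burden and recent emergency-department use. The point values and the threshold roughly follow the published LACE index but should be treated as illustrative; real predictive models are trained and validated on the aggregated data itself.

```python
def lace_score(los_days, acute_admission, charlson, ed_visits_6mo):
    """Toy LACE-style 30-day re-admission risk score (point values illustrative).

    los_days: inpatient length of stay in days
    acute_admission: True if admitted through the emergency department
    charlson: Charlson comorbidity index
    ed_visits_6mo: emergency visits in the prior six months
    """
    if los_days < 1:
        l = 0
    elif los_days <= 3:
        l = los_days          # 1-3 days score their own value
    elif los_days <= 6:
        l = 4
    elif los_days <= 13:
        l = 5
    else:
        l = 7
    a = 3 if acute_admission else 0
    c = charlson if charlson <= 3 else 5   # comorbidity points cap at 5
    e = min(ed_visits_6mo, 4)              # ED-use points cap at 4
    return l + a + c + e

# Patients at or above an illustrative threshold get care-coordination follow-up
score = lace_score(los_days=5, acute_admission=True, charlson=2, ed_visits_6mo=1)
print(score, "high risk" if score >= 10 else "lower risk")  # 10 high risk
```

In practice, a score like this is computed automatically at discharge from EHR data, so the care team can target transitional-care resources before the patient leaves the hospital.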
So how does the use of big data affect the “bottom-line”? Over the last several years, there has been increasing emphasis on the value that healthcare delivers, i.e., the ratio of the quality of care to its cost. Both commercial and public payers have incentivized provider organizations and practices to curtail spending by reducing unnecessary or redundant testing while addressing care gaps and providing a quality patient- and population-level experience. This population health management is supercharged when big data analytics are used to identify gaps in care and to identify high utilizers of healthcare services and intervene appropriately.
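A care-gap query is the simplest form this takes. The sketch below, using hypothetical records and an illustrative rule, finds diabetic patients with no HbA1c result in the past year; the record layout and code set are assumptions for the example, not a real registry schema.

```python
from datetime import date, timedelta

# Hypothetical patient registry: diagnosis codes plus date of last HbA1c result
patients = [
    {"id": 1, "dx": {"E11.9"}, "last_a1c": date(2014, 2, 1)},
    {"id": 2, "dx": {"E11.9"}, "last_a1c": None},
    {"id": 3, "dx": {"I10"},   "last_a1c": None},   # hypertension only
]

def a1c_care_gaps(patients, today, window_days=365):
    """IDs of diabetics (ICD-10 E11.*) with no HbA1c in the past year."""
    cutoff = today - timedelta(days=window_days)
    return [p["id"] for p in patients
            if any(code.startswith("E11") for code in p["dx"])
            and (p["last_a1c"] is None or p["last_a1c"] < cutoff)]

print(a1c_care_gaps(patients, today=date(2015, 6, 1)))  # [1, 2]
```

Run across an aggregated population rather than a single practice's panel, the same kind of rule produces the worklists that care managers use to close gaps before they become costly complications.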
This type of insight is not only critical but mandatory for health systems providing value-based care (such as in ACOs), where they have assumed a fair amount of risk for both clinical and financial outcomes. Big data allows us to construct risk models, such as those that use patient demographics, clinical encounters, diagnoses, medications, vital signs, healthcare expenses and laboratory results to predict next year’s healthcare expenditures. These predictions allow healthcare providers to contract effectively with payers and employers while identifying patients who would benefit from more intensive care coordination, ultimately improving the bottom-line for all stakeholders, especially the patient.
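Stripped to its essentials, such a risk model is a regression from this year's data to next year's spend. The sketch below fits an ordinary least-squares line on a single illustrative feature (prior-year cost) using fabricated toy numbers; a real model would use the full feature set named above and far more data, so this is a minimal sketch of the idea only.

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept for y ≈ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical training pairs: (prior-year cost, next-year cost) in dollars
prior = [1200, 3400, 800, 9100, 2600]
nxt   = [1500, 3900, 900, 11800, 2700]
a, b = fit_line(prior, nxt)

def predict(x):
    """Predicted next-year expenditure for a member with prior-year cost x."""
    return a * x + b

# Flag members whose predicted spend exceeds a care-management threshold
flagged = [x for x in prior if predict(x) > 5000]
```

The flagged list is where the “bottom-line” and the patient’s interest meet: the members predicted to be most expensive next year are usually the ones who benefit most from care coordination now.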
Big data analytics have the power to change how net new knowledge discovery (bench) translates into the care that clinicians render (bedside) in the most efficient (bottom-line) manner. This is a crucial toolset for addressing the challenges of the healthcare transformation occurring around us.