The Value of Big Data: From Bench to Bedside to the Bottom Line

Guest post by Anil Jain, MD, FACP, senior vice president and chief medical officer, Explorys, and consulting staff, Department of Internal Medicine, Cleveland Clinic.

Despite advances in medical education, the proliferation of medical journals, and the near-instantaneous retrieval of information on the Internet, the lag between when researchers identify life-saving clinical interventions and when those interventions are put into practice ranges from 10 to 25 years, averaging 17 years. This lag between discovery at the “bench” and practice at the “bedside” is even more startling when you consider the impact of care at the “bedside” on the “bottom line.”

That “bottom line” has become increasingly important with the formation of accountable care organizations (ACOs), which aim to reward provider organizations and payers that meet the “triple aim”: high-quality care for the population and for the individual patient, at the most affordable cost. Unfortunately, current practice at the “bedside” reportedly generates approximately $700 billion in care that isn’t necessary and may even be harmful to the “bottom line.” Moreover, despite spending 17 percent of GDP on healthcare, the U.S. lags behind most industrialized nations on composite measures of healthcare quality.

With the increasing use of health information technology and data, we should be able to shorten the journey from “bench” to “bedside” and improve the “bottom line.”

Big Data

“Big data” is data of such high volume, variety, and velocity that it is not amenable to traditional data-storage and analysis tools. In healthcare, this data is most typically generated by health systems’ electronic health records (EHRs) and laboratory, radiology, financial, and billing systems, as well as personal health records, biometrics, and smart devices. In addition, patients today often use mobile health and wellness apps and wearable devices that collect a plethora of data of their own, adding further to the complexity.

The Bench

Aggregating de-identified medical information across millions of health records from varying venues of care, so that each person can be viewed longitudinally, can be incredibly beneficial for researchers focused on net-new knowledge discovery. For data from disparate health systems to be aggregated, it is vital that the data be standardized and that subjects can be matched across health systems. This harmonization of disparate data, coupled with the appropriate analytics software, is critical to identifying patterns in the data.
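
To make those two steps concrete, here is a minimal Python sketch, not the Explorys platform itself: it assumes a hypothetical local-code-to-LOINC map (LOCAL_TO_LOINC) and links the same subject across systems with a salted one-way hash (match_key) computed before identifiers are discarded. All names and records here are invented for illustration.

```python
# A minimal sketch of standardizing disparate source data to a shared
# vocabulary and matching the same subject across health systems.
import hashlib

# Hypothetical map from each system's local lab codes to a shared standard
# (e.g., LOINC); real harmonization also covers drugs, diagnoses, units, etc.
LOCAL_TO_LOINC = {
    ("system_a", "GLU"): "2345-7",       # serum glucose
    ("system_b", "GLUC_SER"): "2345-7",
}

SALT = b"shared-secret-salt"  # assumption: participating systems share a salt

def match_key(first, last, dob):
    """One-way hash of normalized identifiers, so records for the same
    person can be linked across systems without exposing who they are."""
    normalized = f"{first.strip().lower()}|{last.strip().lower()}|{dob}"
    return hashlib.sha256(SALT + normalized.encode()).hexdigest()

def harmonize(record, system):
    """Translate a source lab record into the shared, de-identified form."""
    return {
        "subject": match_key(record["first"], record["last"], record["dob"]),
        "loinc": LOCAL_TO_LOINC.get((system, record["code"])),
        "value": record["value"],
    }

a = harmonize({"first": "Ann", "last": "Lee", "dob": "1970-01-01",
               "code": "GLU", "value": 105}, "system_a")
b = harmonize({"first": "ann", "last": "Lee ", "dob": "1970-01-01",
               "code": "GLUC_SER", "value": 98}, "system_b")
assert a["subject"] == b["subject"] and a["loinc"] == b["loinc"]
print("same subject, same standardized code:", a["loinc"])
```

In practice, platforms of this kind also normalize units, demographics, and drug and diagnosis vocabularies, and typically use more robust probabilistic matching than a single hash, but the principle is the same.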

In this setting, the larger the data set, the more likely a signal can be detected through the noise, even for the rarest of conditions. Fortunately, many hypotheses can be conceived and tested with appropriate analytics against this real-world data set far more cost-effectively than by conducting full-scale clinical trials. Furthermore, if a signal is detected or a pattern is found, researchers can then design a more focused explanatory or pragmatic clinical trial to prospectively test the hypothesis. For example, over the past few years within the Explorys network, more than a dozen peer-reviewed abstracts and publications have been generated by leveraging a de-identified data set comprising nearly 48 million subjects, searchable through a specialized browser-based analysis and query application.
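
As an illustration of how scale turns small effects into detectable signals, here is a minimal sketch, with invented counts rather than real Explorys results, that compares an outcome rate between two hypothetical cohorts using a standard two-proportion z-test built from the Python standard library.

```python
# A minimal sketch of hypothesis testing against a large real-world cohort:
# compare an outcome rate between exposed and unexposed subjects.
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """z statistic and two-sided p-value for H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # 2 * (1 - Phi(|z|))
    return z, p_value

# Hypothetical cohorts from a de-identified data set: the sheer size of the
# denominators is what lets a fraction-of-a-percent absolute difference
# surface as a statistically detectable signal.
z, p = two_proportion_z(x1=1200, n1=400_000,   # outcome among exposed
                        x2=950,  n2=400_000)   # outcome among unexposed
print(f"z = {z:.2f}, p = {p:.2g}")
```

A signal like this would not prove causation, which is exactly why the paragraph above points to a focused explanatory or pragmatic trial as the next step.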
