Jan 15, 2020
Quality Is Critical In Today’s Data Deluge: Put Processes and Tools In Place For Robust Data Quality
By Rahul Mehta, senior vice president and head of data management proficiency, CitiusTech.
The sheer volume and variety of data now available to healthcare organizations, from sources such as claims, EMRs, lab systems, and IoT devices, is mind-boggling. The potential to put data from these myriad sources to work for real-time care intervention, clinical quality improvement, and value-based payment models is unfolding fast.
Yet, as organizations seek to aggregate, normalize and draw insights from large and diverse data sets, the importance of data quality becomes apparent. Consider an activity as fundamental as identifying the correct patient. According to Black Book Research, roughly 33 percent of denied claims can be attributed to inaccurate patient identification, costing the average hospital $1.5 million in 2017.
For example, repeated medical care resulting from inaccurate patient identification and duplicate records costs roughly $1,950 per inpatient stay and more than $800 per emergency department visit.
As data quality becomes more important, healthcare organizations need to understand the key characteristics that affect it: accuracy, completeness, consistency, uniqueness and timeliness. Data reliability and integrity also depend on other key factors, including data governance, de-duplication, metadata management, auditability and data quality rules.
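To make a few of these characteristics concrete, here is a minimal sketch of what rule-based checks for completeness, uniqueness and timeliness might look like. The field names, record schema and 30-day freshness window are illustrative assumptions, not a prescribed implementation or a specific vendor's rules.

```python
from datetime import datetime, timedelta

# Assumed required fields for a patient record; real rule sets are far larger.
REQUIRED_FIELDS = ["patient_id", "date_of_birth", "last_name"]

def check_completeness(record):
    """Completeness: return the required fields that are missing or empty."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

def check_uniqueness(records):
    """Uniqueness: return patient_ids that appear more than once."""
    seen, dupes = set(), set()
    for r in records:
        pid = r.get("patient_id")
        if pid in seen:
            dupes.add(pid)
        seen.add(pid)
    return dupes

def check_timeliness(record, max_age_days=30):
    """Timeliness: flag records not updated within the assumed 30-day window."""
    updated = datetime.fromisoformat(record["last_updated"])
    return datetime.now() - updated <= timedelta(days=max_age_days)
```

In practice, checks like these are typically encoded as declarative data quality rules in a governance or data-pipeline tool rather than hand-written, but the underlying logic is the same.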
With a strategic approach, healthcare organizations can employ a unified data strategy with strong governance for data quality across all data types, sources and use cases, giving them the ability to scale and extend to new platforms, systems and healthcare standards. The result is an approach that combines industry best practices and technology tools to overcome common challenges and ensure data quality for the long term.
Understanding Data Quality Challenges
Historically, providers and payers alike treated data quality as a peripheral issue, but that is no longer viable in today’s complex data ecosystems. First, there is a diversity and multiplicity of data sources and formats: EHRs, clinical systems, claims, consumer applications, and medical devices. Add to that the challenges associated with legacy applications, automation needs, interoperability, data standards and scalability.
Finally, a growing number of use cases must be supported, spanning clinical quality, utilization, risk management, regulatory submission, population health, and claims management.
Considering the current data environment, the downstream effects of data quality issues can be significant and costly. In the case of patient matching referenced above, something as common as two hospitals merging into the same health system while following different data-entry protocols can lead to duplicate and mismatched patient records. It can also lead to critical patient data elements, such as date of birth, being documented differently at different facilities and then shared across multiple systems in varying formats.
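A brief sketch of how that format drift surfaces, and how a normalization step helps, appears below. The date formats and matching key are assumptions chosen for illustration; production patient matching relies on far more sophisticated master patient index (MPI) logic.

```python
from datetime import datetime

# Assumed date-of-birth formats two merged facilities might use;
# a real system would need a fuller list and locale awareness.
DOB_FORMATS = ["%m/%d/%Y", "%Y-%m-%d", "%d-%b-%Y"]

def normalize_dob(raw):
    """Parse a date-of-birth string into ISO 8601, or None if unrecognized."""
    for fmt in DOB_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    return None

def match_key(record):
    """Naive matching key (last name + normalized DOB) for duplicate detection."""
    return (record["last_name"].strip().lower(), normalize_dob(record["dob"]))

# The same patient, entered under two different facilities' protocols:
a = {"last_name": "Smith", "dob": "03/07/1964"}
b = {"last_name": "SMITH ", "dob": "1964-03-07"}
assert match_key(a) == match_key(b)  # normalization reveals the duplicate
```

Without the normalization step, these two records would look like different patients to any exact-match comparison, which is precisely how duplicates accumulate after a merger.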