Feb 19, 2013
Reducing Risk with a Practical Approach to Patient Data Management
Guest post by Kim Lennan, Director of Healthcare Markets at Sensage
Healthcare organizations of every size face a growing number of threats and regulations associated with patient data management. Pharmacies must be on the lookout for falsified prescriptions issued to employee family members. Hospitals must track access to patient records, from both inside and outside the organization, to identify individuals trying to gain health details about a celebrity, a neighbor or a family member. Network connections must be analyzed to pinpoint situations where passwords have been compromised or mobile devices have fallen into the wrong hands. Finally, Meaningful Use Stage 1 requires the identification of devices, systems and applications that are dormant or redundant.
To address these scenarios, IT teams must establish monitoring capabilities around a disparate set of systems and activities. Without the right tooling, this means manual, error-prone event data collection, correlation and analysis across clinical and non-clinical sources, which discourages most IT teams from even taking the first step.
A successful event data management initiative provides three important benefits, which are often overlooked:
- The ability to understand patterns and establish baselines against which risk can be measured. When you know what “secure” activity looks like, you can create alerts when unusual activity exceeds acceptable boundaries or thresholds. For example, if you know a set of workstations is not used between the hours of 11 p.m. and 4 a.m., you can easily set up a notification when a flurry of activity takes place on one of them during that period (a sketch of such a check follows this list).
- The much-needed context to drive better policy creation and compliance. If you can demonstrate which events create risk, you are more likely to build understanding among users and influence appropriate behaviors. For example, correlate data from your time management system with log-out details on a shared workstation to identify high-risk individuals who fail to log out when they go off duty, leaving that system open to compromise (see the second sketch after this list).
- The valuable insight needed to investigate a breach or establish compliance with internal or external regulations and policies. All too often, the data that could tell the story was either never collected or is impossible to analyze after the fact. When an incident or breach spans more than 90 days, most organizations have no historical record to review; with that record, the investigation might even have proved the incident a non-event.
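To make the quiet-window example above concrete, here is a minimal Python sketch. The event shape, the 11 p.m. to 4 a.m. window and the burst threshold are illustrative assumptions for this post, not features of any particular product; in practice the threshold would come from the baseline itself rather than a hard-coded constant.

```python
from datetime import datetime, time

# Hypothetical quiet window and burst threshold; tune both from your own baseline.
QUIET_START, QUIET_END = time(23, 0), time(4, 0)   # 11 p.m. to 4 a.m.
BURST_THRESHOLD = 20                               # events before we alert

def in_quiet_window(ts: datetime) -> bool:
    """True if the timestamp falls inside the window, which spans midnight."""
    t = ts.time()
    return t >= QUIET_START or t < QUIET_END

def quiet_window_alerts(events):
    """events: iterable of (workstation_id, datetime) pairs from the event store.
    Returns workstations showing a flurry of activity during the quiet window."""
    counts = {}
    for host, ts in events:
        if in_quiet_window(ts):
            counts[host] = counts.get(host, 0) + 1
    return [host for host, n in counts.items() if n >= BURST_THRESHOLD]
```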
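And a similarly hedged sketch of the second example, correlating shift-end times from a time management system with logout events on a shared workstation. The data shapes are hypothetical; the point is the join, not the format.

```python
def logout_laggards(shift_ends, last_logouts):
    """shift_ends:   user -> datetime the shift ended (time management system)
    last_logouts: user -> datetime of that user's last logout on the workstation
    Flags users with no logout recorded at or after the end of their shift."""
    flagged = []
    for user, shift_end in shift_ends.items():
        logout = last_logouts.get(user)
        if logout is None or logout < shift_end:
            flagged.append(user)  # went off duty with the session still open
    return flagged
```

Run on a regular schedule, the flagged list becomes exactly the kind of evidence that drives a productive policy conversation with the individuals involved.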
For greatest success, security practitioners, auditors and compliance teams will need to align around processes that aid their shared efforts and actions. Here are some must-haves that need to be in place — or at a minimum discussed:
- Collect and centrally store all event data, even if you don’t think you need it. This is especially important since you don’t always know what you have—or what you will need—in the way of historical data for analysis (a minimal collection sketch follows this list).
- Establish basic measurements, understand them, then expand. Start somewhere … anywhere … to establish a metric, then work to make that metric useful or replace it with a better one you discover in the process. Don’t just poke around or take a whack-a-mole approach to discovery — prioritize your effort so that you accumulate and maintain a portfolio of metrics that maximizes the value of your initiative (see the baseline sketch after this list).
- Be consistent or face the consequences. Don’t spend a month on analysis and then move on if nothing pops up. Consistent vigilance is the key to spotting trends and variance. Erratic monitoring and analysis lead to a false sense of security and reduce your ability to continuously reflect and refine based on known patterns.
- Be ready to change. There is a tendency to take a finding, create a counter-measure around it, and then never look back. Be intellectually honest when you make new discoveries, particularly if they show a need to change an established rule, alert or policy. While flexibility and change seemingly conflict with “be consistent,” get comfortable with the idea that you will often learn something new which will require a policy or process change.
- Engage experts and ignite managers. The dynamic nature of attacks may lead you to integrate data from systems you didn’t initially consider, in order to drive critical correlations. As you think about what data to analyze, solicit input from teams who know the systems, devices, people or information associated with all areas of infrastructure. They may shed light on interdependencies or relationships that are critical to better metric definition. Leverage “the truth” established with the experts to ignite the support needed from managers.
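As a rough illustration of the first must-have, the sketch below normalizes raw events from disparate sources into one minimal common record and appends them to a central store. The field names, and the JSON-lines file standing in for that store, are assumptions made purely for the example.

```python
import json
from datetime import datetime, timezone

def normalize(source: str, raw: dict) -> dict:
    """Map a raw event into a minimal common record; field names are illustrative."""
    return {
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "source": source,                                  # e.g. "ehr_audit", "vpn"
        "actor": raw.get("user") or raw.get("account", "unknown"),
        "action": raw.get("event") or raw.get("action", "unknown"),
        "raw": raw,                                        # keep the original payload
    }

def archive(event: dict, path: str = "events.jsonl") -> None:
    """Append one normalized event per line to the central store."""
    with open(path, "a") as f:
        f.write(json.dumps(event) + "\n")
```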
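And for the second must-have, a small sketch of what “establish a metric, then refine it” might look like: today’s value for a metric (say, after-hours record views) is compared against a baseline built from that metric’s own history. The seven-day minimum and three-sigma threshold are arbitrary starting points, meant to be replaced as you learn.

```python
from statistics import mean, stdev

def exceeds_baseline(history, today, sigmas=3.0):
    """history: past daily values for one metric; today: the current value.
    Returns True when today sits well outside the metric's own baseline."""
    if len(history) < 7:
        return False  # too little history to call anything unusual yet
    return today > mean(history) + sigmas * stdev(history)
```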
A healthy, sustainable data management initiative starts with a single version of the truth. When everyone is looking at the same data, anomalies are more likely to be spotted and risks are detected more rapidly. Here are some of the capabilities to look for:
- A data management solution that makes event data collection from any source a simple task.
- A scalable system that gives you the ability to collect and store vast amounts of data without ever-increasing hardware or maintenance costs.
- Correlation capabilities that leverage a standards-based event taxonomy, so analysis is possible across all data, regardless of source, without additional work from you (see the sketch below).
- Flexible analysis options that address the needs of every user – from standard reports to customizable dashboards and ad-hoc querying.
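To illustrate what a standards-based event taxonomy buys you, here is one more hedged sketch: source-specific event names are translated into a shared vocabulary so a single query spans every source. The source names, native event names and taxonomy labels are all invented for the example.

```python
# Invented mapping from source-specific event names to one shared taxonomy.
TAXONOMY = {
    ("ehr_audit", "CHART_OPEN"):  "record.read",
    ("ehr_audit", "CHART_PRINT"): "record.export",
    ("vpn",       "SESSION_UP"):  "session.start",
    ("pharmacy",  "RX_DISPENSE"): "prescription.fill",
}

def classify(source: str, native_event: str) -> str:
    """Translate a native event name into the shared vocabulary so one
    query, such as all record.read events, spans every source."""
    return TAXONOMY.get((source, native_event), "unclassified")
```

With every event classified this way, “show me all record.read events between 11 p.m. and 4 a.m.” works the same whether the access came through the EHR, a pharmacy system or a remote session.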
We are learning more every day about protecting patient data, and – to evolve – we must adopt new disciplines and continuous improvement around risk monitoring. We applaud Cerner, our innovative partner, and customers like Adventist Health Systems, who are breaking new ground with the “science of risk management” and developing a centralized approach to systematic inspection across their clinical and non-clinical landscape.