Guest post by Marti Arvin, vice president of audit strategy, CynergisTek.
The myriad regulations associated with clinical research can be daunting. In addition to complex regulations, most research entities face the added challenge of coordinating the process across multiple stakeholders who may be spread across different organizations. One way to begin to get a handle on this is to look at integration between the EHR and the other systems used in the process. The research enterprise can leverage the availability of electronic systems to reduce duplicate effort, increase researcher satisfaction and improve compliance.
By integrating the EHR and the clinical trials management system (CTMS), an organization can create a single source of truth, consolidate efforts in teams that have the appropriate expertise and decrease the overall timeline to get a study up and running. A typical research project has multiple processes that must occur for it to get approval from the Institutional Review Board (IRB), gain appropriate funding from the sponsor, and receive any necessary approvals from committees such as biosafety or radiation safety. Traditionally, the researcher has been responsible for assuring all of the necessary approvals have been obtained, the study is appropriately budgeted and all other steps are completed to help ensure compliance, even though the researcher, while the scientific expert on the project, is often the least experienced in ensuring the study meets all the regulatory requirements.
Creating a process where the CTMS and the EHR are integrated to capture necessary information at both the study initiation phase and throughout the study can help improve the process and ensure all necessary actions are completed.
System integration can offer improved efficiency and effectiveness in a number of areas. This discussion will focus on just two examples: Clinical trial billing and HIPAA compliance.
How can system integration help with these two issues?
Clinical Trial Billing
It is necessary to evaluate the proposed budget for the clinical trial to ensure that services that can legitimately be billed to the patient/subject’s third-party insurance are identified as such, and that services that must be covered by another source are also identified, a process known as the coverage analysis. The rules around what can be billed to insurance, particularly to Medicare, are complex.
The process requires someone with expertise not only in general billing and coding, but in the specifics of the rules in this area, along with the ability to read a study protocol. The language of the study’s informed consent document is also critical. The wrong wording in the informed consent can render a service that would otherwise be billable to third-party insurance non-billable, meaning the cost of the service must be covered by another source; this makes the initial coverage analysis extremely important. Equally important is a process to ensure the services are in fact billed in the manner anticipated by the original budget and coverage analysis. This helps ensure the organization does not submit a potential false claim by billing insurance when the study should be covering the cost.
By integrating the systems (CTMS, EHR), a process can be developed to trigger certain steps to happen contemporaneously. The researcher’s initial submission to the IRB system can trigger an alert that the study needs to be established in the CTMS. The establishment of the study in the CTMS then triggers the initiation of the process to develop a coverage analysis. The development of the coverage analysis can be built into the CTMS so that all services provided to the participant are identified on a structured calendar with the corresponding payer source identified.
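To make the structured-calendar idea concrete, here is a minimal sketch of how a coverage analysis might be modeled inside an integrated CTMS and used to flag claims that disagree with it. All names, fields and billing codes are illustrative assumptions, not taken from any real CTMS product or study.

```python
# Hypothetical model of a coverage-analysis calendar: every planned
# service on the study calendar carries its designated payer source.
from dataclasses import dataclass

@dataclass(frozen=True)
class PlannedService:
    visit: str      # e.g. "Screening", "Cycle 1 Day 1" (illustrative)
    cpt_code: str   # billing code for the service (illustrative)
    payer: str      # "insurance" or "sponsor", per the coverage analysis

coverage_analysis = [
    PlannedService("Screening", "80053", "insurance"),   # routine labs
    PlannedService("Screening", "71260", "sponsor"),     # research-only scan
    PlannedService("Cycle 1 Day 1", "96413", "insurance"),
]

def misbilled(claims, analysis):
    """Return claims whose payer disagrees with the coverage analysis."""
    expected = {(s.visit, s.cpt_code): s.payer for s in analysis}
    return [c for c in claims
            if expected.get((c["visit"], c["cpt_code"])) not in (None, c["payer"])]

# A research-only scan billed to insurance is caught before it leaves
# the organization as a potential false claim.
claims = [
    {"visit": "Screening", "cpt_code": "80053", "payer": "insurance"},
    {"visit": "Screening", "cpt_code": "71260", "payer": "insurance"},  # wrong payer
]
flagged = misbilled(claims, coverage_analysis)
```

A check like this, run against actual billing feeds from the EHR, is one way the integrated systems could enforce the coverage analysis throughout the study rather than only at initiation.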
Guest post by Sean Hughes, EVP managed document services, CynergisTek.
Healthcare has spent a significant amount of both human and financial capital addressing the security of its environments over the last several years, but have we forgotten a major vulnerability?
Printers and print-related devices (e.g. copiers, fax machines, scanners, etc.) continue to be a major component of our infrastructure and a big part of our clinical and business workflows, yet in most organizations they continue to represent a gaping hole in our defenses. The advent of the EHR has not brought the expected reduction in print; in fact, some research shows it is responsible for an 11 percent increase in print in healthcare over the period of its implementation. This increase in print volume brings with it an increase in the number of devices required to process the paper.
The approach most organizations have taken to the security of these devices falls into one of two categories: segmentation of the network or reliance on manufacturers for “secure” devices. These approaches vary significantly from the approach most organizations have taken for other endpoint computing devices and leave an organization open to the possibility of negative outcomes.
The industry has seen an increase in the capabilities of these devices (e.g. internal hard drives, scan to file or application, residual data on devices, mobile printing, USB-enabled device access, etc.), and the bad guys are aware of this. More and more we see stories in the news of print devices being used as entryways for attackers to circumvent our protections and put our data and our organizations at risk. According to an article published by BBC News in February 2017, “Hacker Briefly Hijacks Insecure Printers,” a hacker was able to access more than 150,000 printers that were briefly left accessible via the web.
The most effective way to address this threat is to treat these devices no differently than all our other data endpoints, be it a desktop, server, or any other piece of infrastructure. We need to look at these devices and ensure they meet the same security standards.
The most effective way to mitigate risks starts with knowing what the risks are. The first step should be a comprehensive printer fleet security assessment that is part of your overall security program. This can be accomplished either through your internal processes or by engaging a competent third party. Either way, you need to know what you don’t know, and you need to know it now.
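As a small illustration of what one automated step of such an assessment might look like, the sketch below probes hosts for commonly exposed print-service ports. The host list and port set are assumptions for illustration; a real fleet assessment would go far beyond open ports (default credentials, firmware levels, hard-drive encryption, residual data, and so on).

```python
# Minimal sketch: check which common print-service ports answer on a host.
import socket

PRINT_PORTS = {9100: "raw/JetDirect", 631: "IPP", 515: "LPD"}

def open_print_ports(host, ports=PRINT_PORTS, timeout=0.5):
    """Return the print-service ports on `host` that accept a TCP connection."""
    found = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return sorted(found)

def assess_fleet(hosts):
    """Map each host to its exposed print ports; an empty list means none found."""
    return {h: open_print_ports(h) for h in hosts}
```

Results like these feed the inventory side of the assessment; the harder work of judging each finding against your endpoint security standards still belongs to the program itself or to the third party you engage.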
The results of that assessment will drive the remediation efforts as well as define the ongoing measures our organizations should take. These steps will be directly related to the vulnerabilities identified but will most likely fall into the following categories:
Guest post by Adam Hawkins, vice president client services, CynergisTek.
HIMSS 2016 is right around the corner, and I’m sure everyone is excited about the prospects of conferencing in Las Vegas. This location certainly has a lot going on to keep everyone busy, on and off the exhibit floor. There should be many new healthcare technology players to see and learn about, and it is always interesting to visit the innovation area. Hopefully, we’ll get to hear what folks like KLAS, HIMSS Analytics and other research organizations are working on in 2016 as well.
For instance, KLAS is continuing its work toward including security vendors as its own category, and has a new study underway to look at service providers in this space. That study won’t be completed in time for HIMSS, but they should be able to preview what they hope to accomplish with the study and what its report will include. I think it will be an important read for everyone in our industry.
Interoperability is a huge area of concentration in healthcare at the moment, with the Office of the National Coordinator, Health & Human Services and HIMSS all very much involved in the discussion. There are sure to be several presentations on this and related topics. Hopefully we will hear how security and privacy will be addressed, as they are critical to the success of the many health initiatives that rely heavily on interoperability.
There is no doubt about it, healthcare as an industry is absolutely reliant on its systems environment and electronic information, to the point that efficiency, safety and productivity are affected any time it suffers a disruption. Yet it seems we are destined to incur disruptions more often than not because of our own actions or inaction.
This article takes a somewhat tongue-in-cheek look at some of the naïve or bad behaviors, misconceptions, short-sighted decisions and mistakes we make that contribute to making our own data security situation more difficult.
The list of examples here is virtually endless, from having too much confidence in vendors to underestimating employees to naïve beliefs about the internet, social media and applications. Hundreds of hospitals blindly relied on a vendor to process their billings without once questioning the company’s security practices. They were surprised when their revenue cycle was interrupted because that company suffered a ransomware attack. Other healthcare entities have found themselves embroiled in breach investigations when subcontractors they never knew existed lost their data, some overseas.
Expressing surprise may be a realistic response, but it’s hardly an acceptable excuse for lack of due diligence. Few organizations watch the folks who represent the highest risk to their systems and information – those with elevated privileges. Examples abound of administrators who became saboteurs. What is amazing is the almost immediate reaction when these kinds of things happen: how could we not be auditing these folks? The answer is usually simple, because they are the ones responsible for the auditing. And then there is the internet and social media. The first myth organizations fall victim to is, “we’re too small to attract anyone’s attention” or “no one is looking at us.”
Most attacks from the internet are indiscriminate automated probing of systems looking for anyone vulnerable. You’re right, they are not looking for you specifically, but if you are connected they may find you. Last but not least is the naïve belief that there is actual privacy on social media and applications simply because they tell you there is. Weekly we hear about another app compromised or information leaked from a site thought to be secure. There is no such thing as foolproof security, and apps, even ones named “secret,” should be approached with caution.
Organizations make bad decisions all the time based on misplaced or erroneous perceptions of risk, or just plain disregard for the risk. Bad decisions though, regardless of the reason, are still bad decisions. How about underestimating the risk from USB ports?
Organizations routinely underplay the fact that these ports, left unprotected, can be a source of information loss or a path for importing malware. We encrypt mail, laptops, maybe even provide encrypted USB drives, but fail to manage the ports themselves. In complex environments it’s also easy to be overwhelmed by what seem like routine chores, like documenting all changes. Someone says it’s a routine change, it only affects one system, or the vendor is just applying a regular update… implying that it doesn’t have to go through change control and thus does not get documented. There is also underestimating the risk when we acquire another entity. This risk comes in two forms. The first is the acquisition without the assessment, or rushing the acquisition so assessment is not possible, and assuming the risk blindly.