Good Product Vs. Healthcare Compliance? Not Anymore

By Mark Ferlatte, CTO of Truss.

Mark Ferlatte (@ferlatte) was one of the engineers on the 2013-2014 "tech surge." An early practitioner of DevOps, he built one of the first cloud computing teams at Linden Lab in 2006, when Linden Lab was Amazon Web Services' largest customer; his team also built Linden Lab's ecommerce platform. Mark graduated from UC Berkeley in Computer Science and is an avid cyclist who raises money for the yearly AIDS ride from San Francisco to Los Angeles.

There's no shortage of news stories and think pieces outlining the ways regulations have hurt healthcare in the U.S., from spending to physician burnout. (Notably, there's also no shortage of stories claiming the opposite.) Regardless of this debate over benefits vs. protections, there are a few non-negotiables, like doing everything possible to prevent a breach. Patients entrust organizations with their health data in ways they don't fully understand, and failure to protect that data can lead to clear and direct harm: embarrassment, identity theft (healthcare records are considerably more valuable than credit card numbers), or discriminatory practices by employers.

As a result, many engineering and IT departments in the healthcare industry accept a reduced level of function and service in order to avoid costly penalties. Unfortunately, this also harms their customers by reducing the effective level of care.

New, smaller, and more agile healthcare companies are encountering these legacy environments. For example, they may only be able to get a "data dump" every week (or month) from partners, and many of the organizations they partner with export data in formats that are expensive to work with, like legacy formats from 1970s and 1980s mainframes.

This is a problem in an era where customer service has become the crux of any business. Healthcare providers that refuse to change because of regulatory risk will not be able to build a quality consumer product, even for internal platforms. And internal products now have to be consumer grade as well: we've talked with doctors who changed jobs because their hospital adopted a bad medical record system.

The truth is that newer technologies can allow healthcare systems to do both, but fear of transition and of possible compliance violations is holding progress back. And that's why, in 2018, we can get a probe to Pluto but we can't send health records within minutes of a patient's request. Scaling new infrastructure and workflows for the largest healthcare systems is a huge project, so even changes with clear benefits, like DevOps practices, iterative software development, and a continuous release schedule, are met with resistance. Here are three ways healthcare systems can start digging themselves out:

#1: De-silo. Most have heard this advice, but acting on it is different for every organization. At a high level, most healthcare IT departments have a compliance group, an infrastructure group, a security group, and a product engineering group, all working independently of each other. The compliance group (usually lawyers and analysts who often lack technical expertise) needs ongoing conversations with engineering and security so that the latter understand the compliance requirements. In return, those teams can help the compliance group understand trade-offs, what's realistic, anticipated roadblocks, and so on.

Security teams tend to develop their own compliance controls internally and often don't tie those controls back to actual regulation and policy. Infrastructure engineering teams are concerned with implementing compliance while keeping the system available to customers. The product engineering team wants to build something of value that keeps customers safe and meets their needs. All of these priorities require complex trade-offs, so it's unsurprising that systems don't fulfill customer expectations. To de-silo here, compliance teams should act as consultants to product teams and help them understand the compliance requirements. Additionally, consider merging the defensive security and infrastructure teams into a single team with a safety and availability mandate; high-quality infrastructure and high-quality security end up in the same place.

#2: End fear of new technologies. Perceived loss of control is, in our experience, the biggest source of resistance to working with modern cloud providers like AWS instead of the old data systems that few organizations actually maintain as they should.

The building blocks IT and engineering teams get from these new platforms save money and time, leaving more resources for other initiatives. But the key is getting staff past their fears. Start by outlining how track records show those fears are unfounded. Think about it: Google Cloud Platform was built by world-class engineers and is considerably better than what any healthcare system could build on its own. These modern tools are like renting one of the best infrastructure teams on the planet for pennies per hour. When shopping around for providers, look at the existing large cloud providers (like AWS and Google) that are building large-scale APIs and already have extremely well-developed compliance documentation and technical controls. They are easy to work with, improve over time, and show what effective data management among insurance companies, hospitals, and drug companies could look like.

Your security controls can be productivity boosters as well. For example, modern single sign-on combined with innovative two-factor systems (like Duo) can provide extremely strong security assurances while making it easy for your employees and customers to access the systems and services they need to provide care. Modern IT architectures (like Google's BeyondCorp) provide stronger protection than traditional VPNs and are also easier to use.
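These two-factor systems are less exotic than they look from the outside. As an illustration, the time-based one-time-password (TOTP) scheme behind most authenticator apps is a short, openly specified algorithm (RFC 6238). The sketch below is a minimal standard-library Python version for illustration only, not any vendor's implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if at is None else at) // step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test vector: the ASCII seed "12345678901234567890" at t=59
# yields the 8-digit code "94287082".
seed = base64.b32encode(b"12345678901234567890").decode()
print(totp(seed, at=59, digits=8))  # → 94287082
```

Real deployments add rate limiting, replay protection, and secure secret storage; the point is that the cryptographic core is small, standardized, and well understood, not a mysterious black box to fear.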

#3: Do Right by Data. Most healthcare providers work with data sets that aren't necessarily that big (imagery being a major exception), so the cost of a cloud environment pays off quickly. Most providers' data probably isn't "big" now. While it's tempting to invest in complex distributed systems and spend a lot on "big data" processing clusters, many of the data sets we see are measured in gigabytes, not petabytes. A healthcare system can now rent a single computer large enough to process data sets for the entire state of California, in a reasonable time frame, for less than $30/hour. (In fact, on one cleanup engagement Truss was hired for, the data from all 37 participating states roughly equaled California's.)
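Gigabyte-scale data often doesn't even need that rented machine to be exotic: a single streaming pass in standard-library Python aggregates it in constant memory, no cluster required. This is a sketch over a hypothetical claims CSV; the `state` and `paid_amount` column names are illustrative assumptions, not a real schema:

```python
import csv
from collections import defaultdict

def total_paid_by_state(path):
    """Stream a claims CSV once, summing paid amounts per state.

    Reads one row at a time, so memory use stays constant no matter
    how large the file is. Column names are hypothetical examples.
    """
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["state"]] += float(row["paid_amount"])
    return dict(totals)
```

A loop like this chews through gigabytes in minutes on one modest machine, which is why reaching for a distributed processing cluster is usually premature for data at this scale.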

These are big changes, but healthcare systems that aren't thinking about them will have trouble surviving the competition. Set realistic expectations for all stakeholders, and remember that these workflow changes go beyond the technical: there are social (and sometimes political) shifts to navigate. For the best transition, start small and iterate. Don't do a big lift-and-shift operation; find a small piece, get it working well, learn from it, then try again.
