May 26, 2020
How Automating Triage Can Reduce Emergency Department Bias
By Suresh Yannamani, president, Exela Technologies.
The emergency room is a place of extreme efficiency. In many ways, this is by necessity and by design. In the ER, there’s often more work to be done than people available to do it. ER staffers aren’t typically working to cure anyone; most of the time, they’re just trying to keep patients alive. Decisions in the ER happen quickly because delay can have lethal consequences.
When faced with a demand for care that far outstrips the supply of doctors and nurses, ER staff use a triage method to establish the order in which medical staff sees patients. People with more minor injuries can afford to wait, but others with very serious conditions must be tended to immediately if they are to survive at all.
The job of processing and ranking patients is primarily left to triage nurses and their application of the Emergency Severity Index, or ESI. The ESI provides a one-to-five scale, with 1 indicating the most urgent patients and 5 the least, that nurses use to quickly prioritize patient needs and distribute medical resources accordingly. This scale helps triage nurses make difficult decisions more quickly and more fairly, but the process still has its limitations.
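To make the mechanics concrete, here’s a minimal sketch in Python of how an ESI-style ordering can be expressed as a priority queue, where a lower ESI level means higher urgency and arrival order breaks ties. The patient data and field names are hypothetical, for illustration only:

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

# Monotonic arrival counter: breaks ties so equally urgent
# patients are seen first-come, first-served.
_arrival = count()

@dataclass(order=True)
class Patient:
    esi_level: int  # 1 (most urgent) through 5 (least urgent)
    arrival_seq: int = field(default_factory=lambda: next(_arrival))
    complaint: str = field(default="", compare=False)

queue: list[Patient] = []
heapq.heappush(queue, Patient(3, complaint="abdominal pain, stable vitals"))
heapq.heappush(queue, Patient(1, complaint="cardiac arrest"))
heapq.heappush(queue, Patient(4, complaint="sprained ankle"))

while queue:
    patient = heapq.heappop(queue)
    print(f"ESI {patient.esi_level}: {patient.complaint}")
# ESI 1: cardiac arrest
# ESI 3: abdominal pain, stable vitals
# ESI 4: sprained ankle
```

Because the ordering depends only on the recorded ESI level and arrival time, the queue itself has no room for first impressions. The level a nurse assigns, however, still does.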
The pace of work in the ER means that staff must often rely on fast modes of thinking and default heuristics (generalized, rule-based cognition) rather than slow, careful deliberation. This fast thinking is rooted in routine and habits of mind, and it’s more prone to certain errors. Although triage and the ESI are intended to objectively rank patients in the order that they need care, the blind spots and biases of ER staff can lead to suboptimal decisions that adversely impact outcomes and put lives in danger.
Cognitive Errors in the ER
Whenever decisions must be made under conditions of great stress and very limited time, mistakes are likely to occur. This is certainly the case in the ER, where a course of action has to be chosen within seconds and life or death might be on the line. A triage nurse forced to make a snap decision might fall back on the standard course of action and a straightforward application of the ESI. There might be no opportunity to consider the specifics of each case. Instead, the nurse has to use default modes of thinking, and this is where mistakes can creep in.
One reasoning pattern that’s particularly common in the ER stems from bounded rationality. Given limited information and very limited time, triage nurses are likely to settle on decisions that are “good enough” rather than attempting to optimize, a strategy known as satisficing. In effect, this places boundaries on what’s considered relevant and possible and focuses attention only on the immediate and the obvious.
Other related cognitive biases are common in the ER as well, such as anchoring and the availability heuristic. Anchoring places undue weight on information received at the outset of an event, while the availability heuristic overweights whatever comes to mind most easily; both make it less likely that a decision will change, even if conflicting information is discovered later. Much as the first price you see for a new smartphone anchors your sense of what a reasonable price range is, a nurse in the ER might judge the severity of a patient’s condition based on a first impression formed when that patient came through the door.
The framing effect is another type of bias that can have consequences in an emergency environment (although here, the bias can appear on the patient’s side as well). One way to see the framing effect is to compare how success and failure are presented: a procedure framed as having a 10% chance of failure draws more aversion than the same procedure framed as having a 90% chance of success, even though the two descriptions are identical. When it comes to making quick decisions, the way an issue is framed can make all the difference in how we ultimately decide to act.
Perhaps the most worrisome type of bias, however, is the representativeness heuristic. This is where people, things, or symptoms (in the case of the ER) are grouped into categories not because they truly belong there, but because they share surface characteristics with that category. In medical care, this can lead professionals to jump to the wrong conclusion: what they observe might seem to fit neatly into a category they happen to be familiar with. The old adage “When you hear hoof beats, think of horses, not zebras” captures this heuristic; it serves clinicians well most of the time, but it fails precisely when the patient in front of them is a zebra.
The Consequences of Bias in Medicine
Suboptimal reasoning can occur on all sides of a medical transaction, and the consequences can be serious. When biases aren’t limited to judgments about symptoms but instead involve judgments about people, there’s even more cause for concern.
Again, the representativeness heuristic is one cognitive bias that can be particularly problematic in the ER. For example, women (and especially women of color) are particularly vulnerable to certain stereotypes that could give them a disadvantage during triage.
A woman who is very concerned about the health of her spouse, for example, might be seen as hysterical rather than as expressing legitimate concern. If that woman is also black, she runs the risk of being viewed as angry or threatening as well as hysterical.
And even if nurses and doctors don’t harbor these biases themselves, their women patients might still feel pressure to downplay their concerns for fear of being associated with those stereotypes. Add this to the existing tendency of women to minimize the severity of their own symptoms, and they’re potentially less likely to be taken seriously, even when they express their full concerns.
The above only scratches the surface of how bias can adversely impact medical care, and there are plenty more examples where those came from. We should hope that our healthcare system treats people as fairly as possible: that patients are treated in an order that matches the severity of their condition, and that their treatment is never hindered by bias or a practitioner’s faulty judgment. We should also be realistic enough to recognize that this is not yet the case. Systems like the ESI certainly help, but more can be done.
Automation Helps Eliminate Bias in Healthcare
One way to minimize this bias is to allow machines to do some of the decision-making for us. Automation can go a long way in preventing the above errors — automated systems not only help to optimize triage, but also assist with diagnosis and treatment decisions. When given accurate information about each patient, a well-designed algorithm can use objective criteria to determine a proper triage order and even help identify the best treatment options.
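As a toy illustration of what “objective criteria” can mean in practice (and emphatically not a clinical tool), the sketch below assigns a triage level from vital signs alone, so the same measurements always yield the same level regardless of who the patient is. The thresholds are invented for the example, not validated clinical cutoffs:

```python
# Toy, non-clinical sketch: assign a triage level from objective vitals only.
# All thresholds below are illustrative assumptions, not clinical guidance.

def triage_level(heart_rate: int, systolic_bp: int,
                 spo2: float, resp_rate: int) -> int:
    """Return 1 (most urgent) through 5 (least urgent) from measurements."""
    if spo2 < 0.85 or systolic_bp < 80 or heart_rate > 150:
        return 1  # immediate, life-threatening instability
    if spo2 < 0.90 or systolic_bp < 90 or heart_rate > 130 or resp_rate > 30:
        return 2  # high risk, should not wait
    if heart_rate > 110 or resp_rate > 24:
        return 3  # urgent, likely needs multiple resources
    if heart_rate > 100:
        return 4  # less urgent
    return 5  # non-urgent

# Identical inputs always produce identical levels.
print(triage_level(heart_rate=160, systolic_bp=110, spo2=0.97, resp_rate=18))  # 1
print(triage_level(heart_rate=88, systolic_bp=120, spo2=0.98, resp_rate=16))   # 5
```

A production system would of course need validated criteria, richer inputs, and clinical oversight, but the point stands: a fixed function cannot anchor on first impressions or react to a patient’s demeanor.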
We should remember, though, that setting up an automated system to do this is not without challenges. Admittedly, most of these challenges arise again from the human element; humans are the ones who need to program the machines and set the parameters, so they’ll have to correct their own biases before they can hope to create machines that do it for them.
Luckily, data scientists are aware of these issues; they’re already working on these problems and have made meaningful progress as a result. Google offers a prime example of this kind of correction: it created a chief decision scientist role to help employees make better decisions and, by extension, build smarter machine-learning solutions.
The next challenge comes from gathering enough patient data to build an accurate model. People aren’t always as forthcoming as they could be when providing information, especially to medical staff, so gathering a sufficient amount of quality data is a challenge in its own right.
However, there’s reason to believe that patients might be more honest with a computer than they would be with another person. When interacting with a machine, people often feel more comfortable sharing openly because there’s no concern about immediate judgment or being forced into an awkward conversation. In this way, we could see automated systems providing great value in the healthcare space in terms of both data collection and analysis.
In healthcare (as in almost any area), perfection might remain elusive. But here we have yet another application of automation that can add tremendous value. With the right data, careful modeling, and iteration over time, triage and standards of care can become progressively less biased. At the very least, this technology has significant potential to outperform our current way of doing things. Even using automated systems simply to generate second opinions could deliver notable improvements and save lives in the process.