
Diagnosing Bias In Healthcare AI: Five Best Practices

By Carlos Meléndez, COO, Wovenware.


A recent Wall Street Journal article pointed to a biased algorithm widely used in hospitals that unfairly prioritized white patients over black ones when determining who needed extra medical help.

While AI has been cited as a data-driven technology that makes decisions based not on emotions but on actual facts, the reality is that those facts can be misleading.

In the above example, race wasn’t a deliberate factor in how the AI algorithm reached its decisions. The algorithm appears to have used predictive analytics based on patients’ medical spending to forecast how sick they were.

Yet the problem is that black patients have historically incurred lower healthcare costs than white patients with the same conditions, so the algorithm placed white patients in the same risk category as, or a higher one than, black patients whose health conditions required much more care.

Bias is inherent in a lot of things we do, and often we just don’t realize it. In this case, the data assumed that the people who paid the most for services were the sickest. As illustrated, we have to be careful about the data we use to train algorithms: cost of services or amount paid shouldn’t be the information we use to determine which patients are sicker than others.
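To make the proxy problem concrete, here is a minimal sketch with synthetic data. The two groups, the 30% utilization gap and the spending label are all assumptions for illustration, not the actual hospital algorithm described above; the point is simply that a model trained to predict spending scores equally sick patients differently when their access to care differs, and that a basic subgroup audit makes the gap visible.

```python
# Illustrative sketch with synthetic data: spending used as a proxy label for health need.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, size=n)       # 0 and 1 stand in for two demographic groups
need = rng.poisson(2.0, size=n)          # true health need, same distribution in both groups

# Assumption: group 1 uses ~30% fewer services for the same need (access barriers),
# so both utilization and spending are lower at equal levels of sickness.
utilization = need * np.where(group == 1, 0.7, 1.0) + rng.normal(0, 0.2, n)
spending = utilization * 1_000 + rng.normal(0, 200, n)

# Risk model trained on the proxy label (spending) from utilization-style features.
X = utilization.reshape(-1, 1)
risk_score = LinearRegression().fit(X, spending).predict(X)

# Audit 1: at the same true need, the proxy-trained score is lower for group 1.
for g in (0, 1):
    sel = (group == g) & (need == 3)
    print(f"group {g}: mean risk score at need=3 = {risk_score[sel].mean():.0f}")

# Audit 2: among patients flagged for extra care (top 10% of scores),
# group 1 patients are sicker on average -- the signature of proxy-label bias.
cutoff = np.quantile(risk_score, 0.90)
for g in (0, 1):
    sel = (group == g) & (risk_score >= cutoff)
    print(f"group {g}: mean true need among flagged patients = {need[sel].mean():.2f}")
```

In this toy setup, the flagged patients from the lower-spending group are sicker on average than flagged patients from the other group, which mirrors the pattern described above.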

In another example, if skin-cancer-detection algorithms are typically trained on images of light-skinned patients, they would be less accurate when used on dark-skinned patients, and could miss important signs of skin cancer. The data must be inclusive to provide the best results.
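One simple guard against this kind of gap is to report performance per subgroup rather than as a single aggregate number. The sketch below is a toy illustration with made-up labels (the arrays and Fitzpatrick groupings are assumptions, not output from a real dermatology model); it shows how a per-skin-tone sensitivity check surfaces the kind of disparity described above before a model is deployed.

```python
# Toy example: report sensitivity (true-positive rate) per skin-tone subgroup.
import numpy as np
from sklearn.metrics import recall_score

# y_true: 1 = melanoma present; y_pred: model output; skin_tone: Fitzpatrick group per image
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0, 0])
skin_tone = np.array(["I-II", "I-II", "I-II", "I-II", "I-II",
                      "V-VI", "V-VI", "V-VI", "V-VI", "V-VI"])

for tone in np.unique(skin_tone):
    sel = skin_tone == tone
    sens = recall_score(y_true[sel], y_pred[sel])  # recall within the subgroup
    print(f"Fitzpatrick {tone}: sensitivity = {sens:.2f} (n={sel.sum()})")
```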

While AI can accelerate disease diagnoses, bring care to critical patient populations, predict hospital readmissions, and accurately detect cancer in medical images, these examples illustrate the caveat: AI bias, whether due to a lack of diverse data or the wrong type of data, exists in healthcare, and it can lead to social injustice as well as harm to patients.

In addition to racial bias, unchecked algorithms can introduce other types of bias as well, based on gender, language or genealogy. In fact, IBM research has identified more than 180 human biases that can find their way into today’s AI systems and affect how business leaders make their decisions.

As an example of gender bias in healthcare, cardiovascular disease was for many years considered a man’s disease, so the available clinical data was collected largely from men.

If that data were fed into a chatbot, it could lead a woman to believe that pain in her left arm was less urgent, possibly a sign of depression, and that there was no need to see a doctor right away. The consequences of this oversight could be devastating.


Chatbots Join the Ranks of Healthcare Workers

By Carlos Meléndez, COO, Wovenware.

Chatbots, or conversational AI, seem to be everywhere in our daily lives and have become go-to solutions for digital transformation initiatives. From banks to insurance companies and e-commerce sites, these automated assistants offer help, answer our questions and guide us, often without our even realizing it. In today’s 24/7 environment, they fulfill the need for always-on service, anytime and anywhere, since it can be a challenge to staff call centers or customer service departments around the clock.

While we’re getting used to chatbots in customer service, there’s an emerging role for them in healthcare – helping to address the COVID-19 crisis.

Knowledge is Power — Easing Public Concerns One Bot at a Time

The ability to provide information at a moment’s notice, anytime and anyplace, while alleviating the burden on healthcare staff, has made chatbots an important tool at Providence St. Joseph Health in Washington State. This health system treated the first COVID-19 case in the U.S., and it implemented chatbots to help address the public’s demand for information while freeing its overtaxed healthcare providers from having to deal with a deluge of calls from sick people and the “worried well.”

Providence St. Joseph Health turned to technology to help it more effectively manage three critical stages of care: triage, testing and treatment, relying on chatbots in particular to assist during the triage phase. By visiting its Coronavirus Assessment Tool online, people can learn which symptoms might indicate the virus and determine whether they should be seen by a health professional. The chatbot is connected to a virtual patient care visit, which enables people to discuss their symptoms with a nurse practitioner. It has had overwhelming success with the public; in its first day of use alone, more than 500,000 people used the chatbot.
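For readers curious what the triage step looks like in code, the sketch below is a deliberately simple, rule-based stand-in, not Providence’s actual assessment tool and far less sophisticated than a production conversational-AI system. The symptom lists and messages are assumptions for illustration; it only shows the basic flow of mapping a user’s message to a triage suggestion and a hand-off recommendation.

```python
# Toy rule-based symptom-screening flow (illustration only, not a real product).
EMERGENCY = {"chest pain", "trouble breathing", "shortness of breath", "blue lips"}
COVID_SYMPTOMS = {"fever", "cough", "loss of taste", "loss of smell", "sore throat"}

def triage(message: str) -> str:
    text = message.lower()
    if any(term in text for term in EMERGENCY):
        return "Please seek emergency care or call 911 now."
    hits = [term for term in COVID_SYMPTOMS if term in text]
    if len(hits) >= 2:
        return f"Symptoms noted ({', '.join(hits)}). A virtual visit with a clinician is recommended."
    if hits:
        return f"Symptom noted ({hits[0]}). Monitor at home and re-screen if it worsens."
    return "No screening symptoms detected. Here is general COVID-19 information."

print(triage("I have a fever and a dry cough"))
print(triage("sudden chest pain and shortness of breath"))
```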

Chatbots have been able to step up and meet these types of needs because they combine natural language processing with machine learning capabilities. This allows them to understand and communicate in a free-flowing, conversational discussion. Because of the benefits the technology provides, the market opportunity for it is growing rapidly: the global market for chatbots is predicted to reach $15.7 billion in 2024, up from $4.2 billion in 2019, and the market for chatbots in healthcare is expected to exceed $314 million by 2023.

It’s no wonder that conversational AI has a bright future in healthcare. In an industry where professionals are busy and continually strapped for time, chatbots can provide and collect information, conduct outreach, send reminders and schedule appointments. They can also support patients, their families and the public, and they offer the convenience of meeting consumers wherever they are, whether on the phone, through messaging, social media or elsewhere.
