
One Big Question Healthcare Technology Leaders Should Be Asking Right Now: How Are Biases In Machine Learning Going To Impact Care?

Response from Fion Lee-Madan, COO, Fairly AI.

Machine learning is possibly the most disruptive technology of the 21st century. Good machine learning systems will, with limited interaction from a doctor, be able to analyze almost any kind of information so that doctors can make better decisions.

But because a machine learning system is trained on data from humans, it reflects their biases, and can magnify them. Hospitals need to focus not only on the advantages these systems bring but also on the biases they can encode. WebText, a dataset used to train natural language processing models to analyze news articles and documents, was built from content shared on Reddit.

Reddit's user base is almost 70% male, more than half of its users are from the US, and the majority are under the age of 35. These skews in the data produce machine learning systems that reflect them. One widely cited example: when asked to complete the analogy "man is to doctor as woman is to...", a model trained on this kind of data responded with "nurse."
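You can probe this kind of skew yourself by testing the analogy structure of word embeddings trained on web text. The sketch below is illustrative only: it assumes the gensim library is installed and that you have some pre-trained embedding file on hand (the filename is a placeholder), and the exact completions will vary from model to model.

```python
# Illustrative sketch: probing gender bias in word embeddings.
# Assumes gensim is installed; "embeddings.kv" is a placeholder path,
# not a specific published model.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load("embeddings.kv")

# Vector arithmetic behind the classic analogy test:
#   doctor - man + woman ≈ ?
# A model trained on skewed text will often rank "nurse" near the top.
results = vectors.most_similar(positive=["doctor", "woman"],
                               negative=["man"], topn=5)
for word, score in results:
    print(f"{word}\t{score:.3f}")
```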

AI systems used in hospitals have already shown they can do incredible things, such as diagnosing disease from a simple scan, but they can also repeat human mistakes at the speed of a machine; for example, one system rejected black applicants to medical school because the data it was trained on came principally from white students.

To prevent these effects, companies need to carefully monitor these new artificial employees and make sure they meet standards of governance that reflect the values of the company and the law. This can only be accomplished with purpose-built tools that let you look inside the minds of these artificial employees and understand how they think.
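In practice, that kind of monitoring often starts with measuring whether a model treats groups of people differently. The sketch below is a minimal, hypothetical example rather than a description of any particular product: it computes a simple demographic parity gap over a model's decisions, assuming you already have the decisions and group labels in hand.

```python
# Minimal, hypothetical governance check: demographic parity gap.
# Compares the rate of favorable decisions across groups; a large gap
# is a signal the model needs human review, not a full audit in itself.
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """decisions: iterable of 0/1 model outcomes (1 = favorable).
    groups: iterable of group labels aligned with decisions."""
    favorable = defaultdict(int)
    total = defaultdict(int)
    for decision, group in zip(decisions, groups):
        total[group] += 1
        favorable[group] += decision
    rates = {g: favorable[g] / total[g] for g in total}
    return max(rates.values()) - min(rates.values()), rates

# Toy data: escalate for review if the gap exceeds a chosen threshold.
gap, rates = demographic_parity_gap([1, 0, 1, 1, 0, 0],
                                    ["a", "a", "a", "b", "b", "b"])
print(rates)        # {'a': 0.67, 'b': 0.33} (approximately)
print(gap > 0.2)    # True -> flag the model for human review
```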