Feb 14, 2019
Ethics of AI In Healthcare: Developing the Right Framework for Innovative Clinical Delivery
It’s an understatement to say that AI and machine learning were at the forefront of conversation at HIMSS19 in Orlando. One session dove into the ethics of AI, with leaders from Microsoft, the American Medical Association and the Cleveland Clinic speaking of the need for organizations to develop the right framework for innovative clinical delivery.
Hemant Pathak, associate general counsel at Microsoft, led the roundtable discussion, which included Peter Lee, corporate vice president of AI and research at Microsoft; Susanna Rose, PhD, chief experience officer at the Cleveland Clinic; and Sylvia Trujillo, MPP, JD, senior Washington counsel for the American Medical Association.
Virtually every industry is being reshaped by the emergence of these new technologies, Pathak said in his opening remarks, pointing out the prevalence of the technology throughout the HIMSS exhibition hall. “In many cases, existing frameworks can be adapted, but as they continue to improve we’re going to need sources from every sector and they need to be given equal weight for their considerations” when applied to the healthcare landscape.
Lee said there is some sensationalism around AI and machine learning, and he drew a distinction between current reality and the end goal of a fully autonomous intelligence, a mechanical being that can learn and take care of itself. The reality, he said, is that efforts to develop AI solutions became far more effective once technologists and developers understood, roughly a decade ago, that mankind possessed enough data to begin AI computation at scale, giving rise to what we now know as deep learning.
Today, AI is really about extracting data, making observations and applying both to machine learning. The boundary between data extraction and what some think of as the “Terminator” effect is fuzzy, he joked. However, “even if we never move beyond current state of the art, we have conservatively a decade of data to extract,” he said.
For the ethical development of AI solutions, ethicists must conceptualize a framework. Public health tends to focus on individual health as well as the public good. For example, immunizations and fluoride in water are good for the individual but also benefit the entire population. Some may be able to opt out, but the greater good may need to override individualism, Rose said. With AI, the question is not so much its use as how it is constructed and how it will support healthcare.
AI won’t fully automate and take over healthcare, she said, but whatever is being built must still undergo rigorous testing. As an industry, healthcare needs to conduct systematic research and evaluate how these solutions may affect outcomes, drilling down to principles that can guide the industry and give it a place to go. Ethical approaches are needed to make sure technologies are being implemented responsibly.
The AMA has adopted policy on augmented intelligence outlining fundamental, clinically informed principles for health AI. Essentially, Trujillo said, policies must be designed to transform workflows to reduce friction. “We must not circumvent the development and research of such technologies” but must understand blind spots and bias, she said.
Both technology and ethical considerations are evolving, which makes ethics a moving target. As our understanding becomes more sophisticated, there is a process for … due consideration, Lee added.
AI can help advance delivery models in clinical care, Lee said. “As we drive (forward) we must stay grounded in ethical principles, not as an add-on.”
Are we going to use the technology in the manner it was designed? Rose asked. “AI can enhance empathy. How can systems and machines keep what needs to be human, human, but automate what doesn’t need the human touch?”