By Oliver Harrison, CEO, Koa Health.
Artificial intelligence (AI) is already part of our everyday lives. According to research by PwC, 86% of businesses are already reaping the benefits of better customer experience through AI. Further, 25% of companies harnessing AI expect it to deliver increased revenue during 2021. It’s clear that AI and machine learning (ML) have grown firm roots in the digital landscape.
AI use cases are beginning to filter into healthcare, too, including the optimization of hospital staffing, patient monitoring, screening scans for abnormalities, and supporting clinician decision-making.
Given the huge supply-demand gap in mental healthcare, there is real potential for AI to play a role. However, it is important that it is deployed in the right way.
Customizable technology for a personal approach
There is no one-size-fits-all approach in mental health. Every individual has distinctly different needs and goals, and mental health tools must reflect this. Good clinicians do this instinctively, channeling their training and years of experience. However, there are simply not enough clinicians to meet mental health demand: in the US alone, an estimated 26 million-plus individuals experiencing a mental illness are not receiving treatment.
Technology can help address this yawning gap in supply, but this won’t help if the solutions provided aren’t tailored to the unique needs of the individual. This is where AI can play a role. Smartphones, and increasingly wearables, are able to capture data from which algorithms can generate insights that can be used to personalize care, leading to better mental health. We’ve invested in technology (protected by more than a dozen patents) and powerful algorithms that can track symptoms, emotions and activities to power a recommender system that gets the right tool to the right person at the right time.
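Koa Health’s patented recommender is not public, so as a purely illustrative toy, matching a user’s recently tracked signals against tagged tools might look like the sketch below (the tool names, signal tags, and scoring are all assumptions for illustration):

```python
# Toy sketch only: a production recommender would learn from tracked
# symptoms, emotions, and activities, not a hand-written tag table.
TOOLS = {
    "breathing-exercise": {"stress", "anxiety"},
    "sleep-diary": {"insomnia", "fatigue"},
    "gratitude-journal": {"low-mood"},
}

def recommend(signals: set[str], top_n: int = 1) -> list[str]:
    """Rank tools by overlap with the user's current signals and
    return the top matches (tools with no overlap are dropped)."""
    ranked = sorted(TOOLS, key=lambda t: len(TOOLS[t] & signals), reverse=True)
    return [t for t in ranked[:top_n] if TOOLS[t] & signals]

print(recommend({"anxiety", "insomnia"}, top_n=2))
```

A real system would also weight recency and timing, so that the right tool reaches the right person at the right moment rather than simply the best static match.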
Augmenting clinicians' capabilities
The unique ability of AI to rapidly analyze vast volumes of data means that it can add capabilities no human could match. For instance, the medical records of many individuals with a mental illness run to reams of pages. Time-stretched clinicians simply cannot hope to digest all of this information before every appointment. But AI can.
We’ve recently created a prototype AI-enabled mental health crisis prediction model. This powerful tool continuously reviews patient notes across the hospital’s medical record system and suggests to clinicians which patients might be at risk of crisis. The clinician can then take a closer look at the notes and, if they agree with the suggestion, contact the patient to intervene before there is a crisis. We are excited to be taking this prototype into a clinical trial, and our initial study showed that its predictive power matched that of a clinician given all the time they needed to review a patient’s notes, but tirelessly deployed across all patients 24/7.
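The actual model is a trained prototype and its internals are not described here, but the general pattern (score each patient’s free-text notes for risk indicators, then surface the highest-scoring patients for clinician review) can be sketched as a toy. The term list, weights, and threshold below are invented for illustration; a real system would use a trained classifier:

```python
# Toy sketch only: a real crisis-prediction model is trained on data,
# not driven by a hand-written keyword list like this one.
RISK_TERMS = {"hopeless": 2.0, "self-harm": 3.0, "crisis": 2.5, "relapse": 1.5}

def risk_score(note: str) -> float:
    """Sum the weights of risk indicators found in one patient note."""
    text = note.lower()
    return sum(w for term, w in RISK_TERMS.items() if term in text)

def flag_patients(notes_by_patient: dict[str, list[str]],
                  threshold: float = 3.0) -> list[str]:
    """Return patient IDs whose combined note score meets the threshold,
    highest-risk first, for a clinician to review."""
    scores = {pid: sum(risk_score(n) for n in notes)
              for pid, notes in notes_by_patient.items()}
    return sorted((p for p, s in scores.items() if s >= threshold),
                  key=lambda p: -scores[p])

notes = {
    "patient-a": ["Reports feeling hopeless; discussed self-harm ideation."],
    "patient-b": ["Stable mood, good engagement with therapy."],
}
print(flag_patients(notes))  # flags are suggestions; the clinician decides
```

The key design point is the human-in-the-loop step: the system only surfaces suggestions, and the clinician reviews the notes before acting.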
Building trust is vital
The power of AI can only be harnessed by accessing personal data, and security and privacy concerns surround data that can be hacked, manipulated, or stolen. A recent survey found that over 45% of consumers worry about hackers stealing sensitive personal health information. If we are to reap the benefits of AI in mental health, developers must put security front and center of all product development, with every product and solution compliant with HIPAA, GDPR, and ISO 27001. Data should be processed at the edge where possible, with industry-leading encryption wherever data is shared. This approach is table stakes.
Protecting data is not enough, though. Technology companies must also ensure that any AI they deploy avoids creating harm through unwanted bias. For instance, an algorithm that predicted crises well for one ethnic group but not others could cause significant harm, as one group continued to suffer crises that could have been prevented.
We know from our own work that the solution is not simply to avoid collecting data about different groups in society (gender, ethnicity, sexuality, etc.) and imagine that bias will thereby be avoided. Such is the power of AI that in many cases an algorithm can learn a proxy for a particular group and create harmful bias against it. Instead, developers and researchers must take the time to think through the potential biases and harms an algorithm might create, and proactively work to avoid them at every stage: collecting the data, creating the algorithm, and deploying it.
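One concrete way to check for the kind of bias described above is to compare a model's sensitivity (recall) across groups, since a crisis predictor that misses one group's crises is exactly the harm at stake. The sketch below is a minimal illustration with invented group labels and data, not Koa Health's actual audit process:

```python
# Toy fairness check: compare per-group recall of a crisis predictor.
# Groups, records, and interpretation thresholds are illustrative only.
from collections import defaultdict

def recall_by_group(records):
    """records: (group, actual_crisis, predicted_crisis) triples.
    Returns each group's recall: flagged crises / actual crises."""
    tp, pos = defaultdict(int), defaultdict(int)
    for group, actual, predicted in records:
        if actual:
            pos[group] += 1
            if predicted:
                tp[group] += 1
    return {g: tp[g] / pos[g] for g in pos}

def parity_gap(recalls):
    """Worst-to-best recall ratio; values well below 1.0 signal that
    one group's crises are being missed disproportionately."""
    return min(recalls.values()) / max(recalls.values())

records = [
    ("group-x", True, True), ("group-x", True, True), ("group-x", False, False),
    ("group-y", True, True), ("group-y", True, False), ("group-y", False, False),
]
recalls = recall_by_group(records)
print(recalls, parity_gap(recalls))
```

Running a check like this requires collecting the group labels in the first place, which is why simply dropping demographic data makes bias harder to detect, not easier to avoid.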
AI has been penetrating every industry but remains underutilized in healthcare, and particularly in mental healthcare. As we readjust to changing workflows and lifestyles, AI presents an unparalleled opportunity to revolutionize mental healthcare at a crucial moment, if we can seize that opportunity in an ethical and trustworthy manner.