By Krishna Kurapati, CEO, QliqSOFT.
For the first time in our lives, we have seen how artificial intelligence can influence a pandemic, from identification and tracking to treatment and vaccination. Two things had to perfectly align to make this happen.
Technology had to advance to a place where it could analyze, predict, and engage with extreme accuracy, and a virus had to be dangerous enough to spur massive funding and demand for action. We reached that tipping point in 2020. As the year comes to a close, it is time to consider all that AI has done and where it is likely to continue to impact epidemiology and disaster response moving forward.
HealthMap, an AI application run by Boston Children’s Hospital, was launched in 2006 and was one of the first tools used to detect and track the COVID-19 outbreak in China. The algorithm pulls online data about infectious disease events from news outlets and social media in more than a dozen languages, then applies machine learning and natural language processing (NLP) to track outbreaks.
Tracking or predicting where cases might show up is just one step in a long journey to stopping the spread of the virus. An article published in May 2020 by researchers in the U.S. and China revealed that artificial intelligence accurately diagnosed COVID-19 in 68% of patients who had previously been thought to be negative and had normal results on chest imaging. The AI algorithm, which compared imaging, symptoms, medical history, and exposure, was said to have “equal sensitivity as compared to a senior thoracic radiologist.” I have also had the pleasure of reading some yet-to-be-published articles about how AI is helping in the ICU to predictively determine ventilator utilization, but it’s not just ventilators.
When it came time to harness AI in the diagnosis of COVID-19, even the CDC jumped on board. In partnership with Microsoft’s Azure platform, they embedded a symptom checker chatbot on their website. Likely out of an abundance of caution, their bot uses what I term “light-AI” to guide patients through a very basic decision tree: patients answer simple yes-no questions to determine their likelihood of needing a test.
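To make the idea of a “light-AI” decision tree concrete, here is a minimal sketch of how such a yes-no triage flow can be modeled. This is purely illustrative; the questions, outcomes, and structure are my own assumptions, not the actual logic of the CDC/Microsoft bot.

```python
# Hypothetical yes/no symptom-checker decision tree (illustrative only;
# not the CDC/Microsoft bot's actual questions or recommendations).
TREE = {
    "question": "Do you have a fever or chills?",
    "yes": {
        "question": "Have you had close contact with a confirmed case?",
        "yes": "Testing recommended",
        "no": "Monitor symptoms; consider testing",
    },
    "no": {
        "question": "Do you have a new cough or shortness of breath?",
        "yes": "Testing recommended",
        "no": "Testing likely not needed at this time",
    },
}

def triage(answers):
    """Walk the tree with a sequence of 'yes'/'no' answers and return
    the recommendation reached at the leaf."""
    node = TREE
    for answer in answers:
        node = node[answer]
        if isinstance(node, str):  # reached a leaf: a recommendation
            return node
    raise ValueError("not enough answers to reach a recommendation")

print(triage(["yes", "yes"]))  # -> Testing recommended
```

The appeal of this approach is exactly its simplicity: every path through the tree is enumerable and auditable, which matters when the output is public health guidance rather than a probabilistic prediction.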
As long as we continue to prioritize data, AI will have the information needed to analyze and predict; that is a very logical application of the technology. But what about using it to engage patients and address widespread misinformation and fear?