Can chatbots ease the burden on healthcare during COVID-19?
Chatbot technology could be used to screen patients for symptoms of COVID-19, according to new research into AI technology by Sezgin Ayabakan, a faculty member at Temple University’s Fox School of Business. These virtual assistants use learning algorithms to analyze patients’ answers, freeing up healthcare workers, many of whom have been overwhelmed since the pandemic began, to focus their care on serious cases while reducing unnecessary hospital admissions.
Ayabakan explained: “The main goal is that we hope chatbots can be used to screen patients for COVID-19, so the healthcare workers can deal with more severely ill patients rather than screening folks with mild conditions over the phone. There is a huge burden on the healthcare system. When we did this experiment in April, we did it with the hope that our findings might help alleviate that burden.”
His research paper “User reactions to COVID-19 screening chatbots from reputable providers” is co-authored by Fox School doctoral candidate Mohammad Rahimi, as well as Indiana University Kelley School of Business faculty members Alan Dennis and Antino Kim. It was recently accepted for publication in the Journal of the American Medical Informatics Association.
The study gathered input from 371 participants, each of whom viewed a COVID-19 screening session with a hotline agent, presented as either a human or a chatbot, and was then asked for their impressions.
Ayabakan, who is also an assistant professor of Management Information Systems, explained: “We showed a vignette to half of the participants and said, ‘This conversation is between an individual and a chatbot agent.’ With the others, we said, ‘This conversation is between an individual and a human agent.’ We wanted to see how people perceived this conversation. We tried to capture their bias.”
Participants initially showed a negative bias towards the chatbots; however, this changed when the chatbot came from a provider they knew and trusted, in this case the Centers for Disease Control and Prevention (CDC).
“What we found was that if they trusted the provider, their ability to trust the provider’s chatbot service increased greatly. So we found that folks seem to really trust the CDC, which then increased the trust they had in the online screening services,” Ayabakan says. “For this to work, we really need to emphasize the source or provider, so that participants will use the chatbot and follow its advice.”
Ayabakan concluded that as long as the chatbot was able to perform its job as well as the human agent, users viewed the virtual assistant no differently and trusted the technology.
Separate research on similar AI technology by Ayabakan’s colleague and fellow Fox School faculty member Xueming Luo reached parallel conclusions. Luo’s article “Machines versus Humans: The Impact of AI Chatbot Disclosure on Customer Purchases” found that chatbots are especially effective when customers are not told that the agent is a chatbot.
Ayabakan believes that his research into virtual assistants and machine learning algorithms will benefit the healthcare industry moving forward, as screening for COVID-19 is likely to continue for some time.