Feb. 19, 2020
GYANT, a San Francisco-based, AI-enabled care navigation company, has found that the key to keeping a patient engaged with a chatbot is the emotional appropriateness of the bot's responses to the patient's answers, a quality called algorithmic empathy, according to reporting by Erik Birkeneder in Forbes.
If a patient says they have cancer and an AI bot responds, “That’s too bad,” as if the patient had the flu, the patient will likely disengage and get frustrated, Birkeneder writes. But if the AI bot gives a more emotionally appropriate response, the patient will be more likely to continue engaging.
GYANT found that engagement increased when patients received the right feedback: feedback that reflects listening and understanding, which builds trust. How did they do it? With a clustering method that categorized response types into groups such as "concerning, but not dangerous" or "frustrating, but not concerning." If a patient's answer indicates a dangerous scenario, the bot is programmed not to reply with a mild "I am sorry to hear that," because a mismatched response makes it seem the bot is not listening and leads the patient to disengage.
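The clustering idea above can be sketched in a few lines. This is a hypothetical illustration, not GYANT's actual system: the two cluster names come from the article, but the `classify` keyword heuristic (standing in for a real clustering model), the `dangerous` cluster, and the reply templates are all assumptions made for the example.

```python
# Hypothetical severity clusters mapped to emotionally appropriate replies.
# Cluster names "concerning_not_dangerous" and "frustrating_not_concerning"
# follow the article; everything else is illustrative.
REPLIES = {
    "dangerous": "This sounds serious. Let's get you to care right away.",
    "concerning_not_dangerous": "I'm sorry to hear that. Let's look into it together.",
    "frustrating_not_concerning": "That sounds frustrating. I can help you sort it out.",
}

# Toy keyword lookup standing in for a trained clustering model.
KEYWORDS = {
    "dangerous": {"chest pain", "can't breathe", "cancer"},
    "concerning_not_dangerous": {"fever", "rash", "headache"},
}

def classify(patient_text: str) -> str:
    """Assign a patient's message to a severity cluster (toy heuristic)."""
    text = patient_text.lower()
    for category, terms in KEYWORDS.items():
        if any(term in text for term in terms):
            return category
    return "frustrating_not_concerning"

def empathetic_reply(patient_text: str) -> str:
    """Pick a reply whose emotional register matches the cluster."""
    return REPLIES[classify(patient_text)]
```

The point of the design is the separation: the classifier decides how serious the message is, and the reply table guarantees the emotional register of the response matches that severity, so a dangerous report never gets a flippant acknowledgment.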
GYANT's data suggests the approach works. In one of their studies, a traditional method of outreach, phone calls, had an engagement rate of 55 percent. After they coded their bot for empathy, engagement rose to 82 percent, even though the outreach came from a non-human bot.