Friday, March 16, 2018

SXSW 2018 Day 1 Session 4: Robot meets Freud: Bots in Mental Health

Session page, including audio: https://schedule.sxsw.com/2018/events/PP69646

Kate Niederhoffer: Moderator
Jeff Hancock: Professor, Stanford University, researches human-machine interaction
Len Coppersmith: data scientist, designer of chatbots
Glen Moriarty: Founder and CEO of 7 Cups, a company that provides message-based emotional support through listeners and bots

Len: People think of a chatbot as just a dialog engine, but that's only one piece of a system that includes four parts:

  1. A dialog engine
  2. An indexing system into the site's data
  3. An AI planning piece that calculates what the next message should be
  4. Machine learning algorithms applied to the big data repository of all previous conversations
All of these make up the chatbot.
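
To make the division of labor concrete, here is a minimal sketch in Python of how those four pieces might fit together. All the class and method names are hypothetical - this only illustrates the structure Len describes, not code from any actual product.

    # Hypothetical names throughout; an illustration of the four-part structure only.

    class DialogEngine:
        """Turns a chosen intent into the actual message text."""
        def render(self, intent):
            templates = {
                "greet": "Hi, I'm here to listen. How are you feeling today?",
                "check_in": "It's been a while - how have things been going?",
            }
            return templates.get(intent, "Tell me more about that.")

    class SiteIndex:
        """Indexes the site's own content (articles, exercises, community rooms)."""
        def lookup(self, topic):
            return "https://example.org/resources/" + topic  # placeholder URL

    class Planner:
        """Decides what the next message should be, given the conversation so far."""
        def next_intent(self, history):
            return "greet" if not history else "check_in"

    class ConversationStore:
        """Repository of past conversations that ML algorithms are trained against."""
        def __init__(self):
            self.turns = []
        def log(self, turn):
            self.turns.append(turn)

    # The "chatbot" is the composition of all four pieces, not the dialog engine alone:
    engine, index, planner, store = DialogEngine(), SiteIndex(), Planner(), ConversationStore()
    reply = engine.render(planner.next_intent(store.turns))
    store.log(reply)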

Jeff: When we tested how people react to emotional-support chatbots, we had three working hypotheses: that people would find interacting with a chatbot worse than with humans, about the same as with humans, or better than with humans.  Our findings show no difference - people feel they can get the same results from chatbots as from human therapists.

Glen: Initially the software used voice rather than text messages, but people found that uncomfortable, even threatening.  Anonymous messaging made it easier to ask for help, and bots seem to lower the barrier even further.  The most curative part of therapy is your therapist being aligned with you; a chatbot can do that.

Jeff: There is a huge demand for mental health treatment and not nearly enough support for it, so a lot of demand goes unmet.  In the US, about 1 in 4 or 5 people report suffering from some level of depression; supporting all of them with human therapists would require hundreds of thousands more trained professionals.  Chatbots can help.

One overarching rule applies to these bots: they should help people communicate with their community, not wean them off of it.  The role of the bot is to help people feel comfortable talking to other people; the bot's job is ultimately to get itself out of the loop.

Kate: Is everything encoded into a bot a fake replacement of human interaction?
Len: Yes, but not just a replacement.  We can use the data to help humans in ways we can't see ourselves, because of the limitations of our human view.
Not all the support comes from bots; there is a large human support network as well.  Overall there are 240,000 listeners in 129 countries, most of whom were once clients getting help themselves and then registered to help others.  The system helps 1.9 million people a month.

Jeff: There is a stigma that you can't get real help from text messages, but it does help.  Recently, experiments were run with surgical patients to see whether phone use can reduce the need for anesthesia (motivated by the opioid epidemic).  They showed that people who were texting during surgery needed six times less medication than people who were not.  So texting is a form of distraction that can be used to reduce pain.

Glen: We estimate we'd need 40 times more professionals than we currently have to help everyone who has suicidal thoughts.  We need to make help more accessible, without requiring the level of licensing we currently mandate.

Len: This is a way for less-credentialed people to help.  Bots can also take over some of the tasks professionals now do - reminding people of appointments or medication times, checking in to see how they are doing - jobs that would be expensive with humans but that bots can do inexpensively.
The bot has scripts and works from a decision tree; it can effect change with simple questions.  Analysis shows you get better results if you start with the scripts and decision tree, and only later move to AI.
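
As a rough sketch of what such a scripted, decision-tree bot might look like - the node names, questions, and thresholds below are made up for illustration, not taken from any real system:

    # Each node holds a question and a rule mapping the answer to the next node.
    SCRIPT = {
        "start": {
            "ask": "On a scale of 1-10, how are you feeling today?",
            "next": lambda a: "offer_listener" if int(a) <= 4 else "med_reminder",
        },
        "offer_listener": {
            "ask": "Would you like to talk to a human listener? (yes/no)",
            "next": lambda a: "handoff" if a.lower().startswith("y") else "med_reminder",
        },
        "med_reminder": {
            "ask": "Have you taken your medication today? (yes/no)",
            "next": lambda a: "done",
        },
        "handoff": {"ask": "Okay, connecting you with a listener now.", "next": None},
        "done": {"ask": "Thanks for checking in. Talk to you tomorrow.", "next": None},
    }

    def run():
        node = "start"
        while True:
            step = SCRIPT[node]
            print(step["ask"])
            if step["next"] is None:
                break
            node = step["next"](input("> "))

    if __name__ == "__main__":
        run()

The point, per Len, is that check-ins and reminders like these need no learned model at all; the script and decision tree come first, and AI is layered on later.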

Jeff: There is a risk that the same technology can be used for ill - it would work the same way and with the same effectiveness.

Glen: The messaging system generates a wealth of data that can be used for research in place of lab studies - more data, at a much larger scale.

Jeff: Bots can help therapists train, letting them try things out without harming anyone.  The bot can act like the patient, not just like a therapist.

Kate: Can bots become better than humans?  Can we humanize them, give them memories?

Len: We can tweak a bot's persona to get a better connection with a person.  A therapist has a certain personality which might not match the patient's.  With a human therapist the switching cost is high, and many patients don't switch even when the match isn't good.  With a bot the switching cost is low, and you can tweak the bot's personality to adapt to the patient's needs.

Jeff: There are issues with lack of diversity in Silicon Valley, which can lead to limitations in the bots.  There was a recent example of this with Siri: it was found that if you told Siri you were suffering from domestic violence, it would just say "I can't help you with that."  That's because the engineers who programmed Siri were male and didn't think of this type of question.  It has since been changed to give information on how to get help.
Surveys show that people's self-reported trust is eroding, and bots are suffering from this too.  I'm worried about bots being used to undermine relations among humans.  We don't have any formal ethics guidelines in place to deal with this.

Kate: What should we be asking about for the future?

Jeff: We should be asking what happens when AI sits between people.  Spellcheck and autocomplete are minor examples where errors have small impact.  As the technology advances, less of the person is represented in the conversation and more of the bot or the tech.  We need to think about that.

Glen: How will this change the way we train humans to be good therapists?  How can bots augment psychologists?

Len: How do we solve our problems using the internet in innovative ways?

Q: How do you avoid the uncanny valley?
Glen: Keep the bot clearly a bot.  Avoid trying to emulate a human.

Q: What was done to safeguard the data?
Glen: We treat the data like health data - anonymized and HIPAA compliant.
Len: Also the community monitors the activity and polices it when things get out of hand.
