Mental health helpline cuts ties with AI customer support over data sharing

(Photo: @dbeltwrites/Unsplash)

A mental health helpline will no longer share data with an AI customer service company following criticism of the relationship over the handling of sensitive information and conversations.

Crisis Text Line ends relationship with Loris.ai

Loris.ai, a conversational AI platform, promises that its AI-powered chat solution helps customer service representatives respond efficiently and effectively to customers based on their tone.

During its partnership with Crisis Text Line, a mental health helpline, Loris.ai developed AI systems to help customer service agents better understand sentiment in chats, drawing on anonymized data collected from the helpline.

But in a statement to the BBC, the mental health helpline said it ended its data-sharing relationship with Loris on January 31 and asked for its data to be deleted.

CTL Vice President Shawn Rodriguez also spoke to the BBC and clarified that Loris.ai had not accessed any data since the start of 2020.

Additionally, Crisis Text Line said it had listened to community complaints about the relationship, and a CTL board member tweeted that he was “wrong” to accept it.

The community's concerns followed a Politico report questioning the ethics of collecting sensitive data from conversations with a suicide hotline.

In its defense, Loris.ai said that its AI originated with Crisis Text Line itself and draws on insights gained from studying nearly 200 million texts, in which handling difficult conversations is central.

For its part, CTL maintains that all shared data is fully anonymized and stripped of identifying information, and that it has been transparent with its users about data sharing, right down to its terms and conditions.

CTL Vice President Rodriguez points out that data and artificial intelligence remain critical to CTL's ability to assist people in need of mental health support, drawing on 6.7 million conversations.

Data is used to identify those at risk and get them help as quickly as possible, he said.

“And the data is being used successfully to de-escalate tens of thousands of texters in crisis who have suicidal thoughts,” Rodriguez added.


Criticism of the partnership

Nevertheless, Politico spoke to several experts who were highly critical of the partnership. One questioned whether people with mental health issues could fully consent to sharing their data.

“CTL may have legal consent, but does it have meaningful, emotional, and fully understood consent?” Jennifer King, privacy and data policy fellow at Stanford University’s AI Institute, told Politico.

As noted, CTL itself acknowledged that the partnership had come to feel wrong, and said it took steps to make things right.

In response, CTL wrote in a statement on its website that it has heard feedback from the community, and that it is clear that anyone in crisis should be able to understand what they are agreeing to when asking for help.


This article is owned by Tech Times

Written by Thea Felicity

