Debating the ethics of customer data use

When does customer data sharing in service and CX become unethical?

A data sharing partnership between a US suicide hotline and a customer service tech vendor raises major questions about ethics. But are the answers clear-cut?

4th Feb 2022

Crisis Text Line is a suicide hotline. As explained recently by Politico, it is one of the world’s “most prominent mental health support lines”, which very openly uses AI technology as its foundation for “helping people cope with traumas such as self-harm, emotional abuse and thoughts of suicide”.

Since launching in 2013, it has exchanged 219 million messages across more than 6.7 million conversations over text, Facebook Messenger and WhatsApp. It says its use of AI and data science has been fundamental to connecting with its largely younger demographic of clients “where they are”, whilst also meeting the demands placed on its service by the sheer volume of messages it needs to respond to.

To meet the technological demands placed on such a service, Crisis Text Line partners with an AI tech provider, Loris.ai. However, as Politico’s analysis explained in January, the lines of this partnership may appear, at face value, a little blurred. Loris supplies technology to Crisis Text Line. In return, the nonprofit provides Loris with an unfettered stream of data in the form of conversational text: anonymised in content, but all supplied with the aim of helping Loris “create and market customer service software” to other businesses.

Crisis Text Line holds an ownership stake in Loris.ai, while Loris has pledged to share some of its revenue with Crisis Text Line. The two organisations “shared the same CEO for at least a year and a half”. The partnership is undoubtedly a close one with clear and obvious benefits for both parties and their respective clients/customers. It does, however, raise important questions about how close is too close when it comes to the ethics of customer data and its uses in enhancing the customer service experience.

Data control

Politico reporter Alexandra S. Levine says it is important to ask serious questions about such relationships, given the profits likely to be made by commercial businesses outside the partnership: “For Crisis Text Line, an organisation with financial backing from some of Silicon Valley’s biggest players, its control of what it has called ‘the largest mental health data set in the world’ highlights new dimensions of the tech privacy debates roiling Washington: Giant companies like Facebook and Google have built great fortunes based on masses of deeply personal data.

“But information of equal or greater sensitivity is also in the hands of nonprofit groups that fall outside federal regulations on commercial businesses — with little outside control over where that data ends up.”

Jennifer King, privacy and data policy fellow at Stanford University, also questions whether Crisis Text Line’s millions of users truly understand how their personal data is being used, given the vulnerable position many of them are in when they first make contact with the service.

“These are people at their worst moments. Using that data to help other people is one thing, but commercialising it just seems like a real ethical line for a nonprofit to cross.”

In response, Loris’s CEO Etie Hertz states that the company enforces “a necessary and important church and state boundary” between itself and Crisis Text Line. The nonprofit is equally assertive. “We view the relationship with Loris.ai as a valuable way to put more empathy into the world, while rigorously upholding our commitment to protecting the safety and anonymity of our texters,” states Crisis Text Line’s vice president and general counsel, Shawn Rodriguez, adding that “sensitive data from conversations is not commercialised, full stop”.

There’s a paucity of best practice data in speech analytics because everything is so often held up in siloes

Andrew Moorhouse, founder and director of Alitical and a seasoned speech analytics and conversation science expert, says this type of agreement is becoming increasingly common, with so many AI-related companies in great need of large volumes of accurate data to use as the basis for improvements to their technologies. And in the case of Crisis Text Line, he highlights that the benefits the Loris agreement brings to its clients arguably outweigh any ethical blowback over the commercial use of anonymised data.

“In a previous role I once listened to 30 suicide threat calls for a major gambling organisation, so there are similarities to the Crisis Text Line example. I was in their contact centre and the calls had previously been tagged as self-harm or suicide risk calls. What was critically important was that the warning signs were never overt in conversation, and the suicide threat always came within the last 15 seconds of the call. But there were subtle conversational markers, where callers would say things like ‘I’m at my wit’s end’, ‘I’m beside myself’ or ‘I just don’t know what to do’.

“It took an awful lot of listening and analysing to understand the true nature of the calls we were receiving and then be able to triage them effectively to people in support roles that really mattered, such as trained team leaders or even psychologists from GamCare (the hotline funded by the gambling commission).

“What I always felt was an issue, though, was that we could have done more to triage these calls and apply AI to route them more quickly to the right person. There’s a paucity of best practice data in speech analytics because everything is so often held up in siloes. That means crisis conversations in one organisation aren’t ever shared with another to aid them in helping their callers. If the data is sanitised and anonymised and obtained ethically, it’s incredibly valuable.

“How Loris.ai uses the crisis line data to improve CX is most likely of negligible value without combining it with metadata on counselling outcomes... as you wouldn’t know which conversations were deemed ‘effective’ in any case.”
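
Moorhouse’s description of subtle conversational markers hints at how such triage might work in practice. The sketch below is purely illustrative: the marker phrases, scoring logic and queue names are hypothetical placeholders rather than anything used by Crisis Text Line, Loris.ai or GamCare, and any production system would rely on far richer conversational models than simple phrase matching.

```python
# Hypothetical sketch of phrase-marker triage along the lines Moorhouse describes:
# flag conversations containing subtle distress markers so they can be routed to a
# trained specialist. Marker list, scoring and queue names are illustrative only.

# Example markers drawn from the phrases quoted in the article.
DISTRESS_MARKERS = [
    "at my wit's end",
    "beside myself",
    "don't know what to do",
]


def risk_score(transcript: str) -> int:
    """Count how many distinct distress markers appear in a transcript."""
    text = transcript.lower()
    return sum(1 for marker in DISTRESS_MARKERS if marker in text)


def route(transcript: str, threshold: int = 1) -> str:
    """Send a conversation to a specialist queue if enough markers are present."""
    return "specialist_queue" if risk_score(transcript) >= threshold else "standard_queue"


if __name__ == "__main__":
    example = "Honestly, I'm at my wit's end and I just don't know what to do any more."
    print(route(example))  # -> specialist_queue
```

Even a crude heuristic like this illustrates Moorhouse’s point: the value lies less in the code than in the breadth and quality of the conversational data it is built on.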

Terms and conditions

Irrespective of how beneficial the data is, questions about transparency remain a major concern, and it’s far from an isolated issue. 68% of US and UK consumers don’t trust how their data is being used, whilst 90% of US consumers say they’ve “lost control of their data”.

While the partnership between Crisis Text Line and Loris.ai may have only good intentions, concerns about how they articulate their use of customers’ and clients’ data are central to the debate about ethics.  

Politico reports that, after it spoke with Crisis Text Line, the nonprofit changed the wording on its website to emphasise that “Loris does not have open-ended access to our data; it has limited contractual rights to periodically ask us for certain anonymised data.”

However, many organisations are guilty of obscuring their use of customer data by burying it deep within their terms and conditions of service, and it is this that many take umbrage at.

“We’re seeing more and more how often data online is not just my shopping history; it’s a real glimpse into my psyche,” Jennifer King explains.

“It probably passes legal muster, but does it pass the ‘feel-good’ muster? A lot less certain.”

If your volunteers, staff and the users themselves are not aware of that use, then that's a problem

Dash Tabor, CEO and co-founder of the AI and machine learning start-up TUBR, agrees, and says it is up to organisations to take the lead and provide customers with very clear and concise information about the pros and cons of how their data might be used, irrespective of how it might improve the overall experience.

“If the data is anonymised and no longer personal, and is being used for a use case that does not allow a demographic of people to be taken advantage of, then I do not believe this is unethical.

“However, I would question the use of anonymised data in the marketing field and believe there should be some ethical transparency even if the data is anonymised. Otherwise, if the data is helping gain better insight into how to solve bigger problems and is no longer personal to the person, then we should utilise the asset and make the world a better place.

“In cases where personal data is being used, I believe we need to do more to be transparent about how and when data is being used. I think companies should feel a social responsibility to present the positives and negatives of data use to the individual. However, there is a common misconception that all data is ‘personal’, and I would argue that in the majority of data-driven decision-making the personal aspect of the data has been removed. In that case, we should use data to benefit society. I do not find this data dangerous to privacy in the same way as personal data.”

ProPrivacy.com research found that just 1% of people read terms and conditions before agreeing to them online, highlighting the need for businesses to rethink where in the customer journey they articulate data use. In the case of Crisis Text Line and Loris, however, Beck Bamberger, a Crisis Text Line volunteer, is more damning in her appraisal, stating that even as a volunteer she was not made aware of the agreement between the nonprofit and the commercial business.

"Mental health and people cutting themselves adapted to customer service?” she said. “That sounds ridiculous. Wow.

"If your volunteers, staff and the users themselves are not aware of that use, then that's a problem.”
