Crisis Text Line tried to monetize its users. Can big data ever be ethical?

February 3, 2022

Years after Nancy Lublin founded Crisis Text Line in 2013, she approached the board with an opportunity: What if they converted the nonprofit’s trove of user data and insights into an empathy-based corporate training program? The business strategy could leverage Crisis Text Line’s impressive data collection and analysis, along with lessons about how best to have hard conversations, and thereby create a much-needed revenue stream for a fledgling organization operating in the woefully underfunded mental health field.

The crisis intervention service is doing well financially now; it brought in $49 million in revenue in 2020, thanks to increased contributions from corporate supporters to meet pandemic-related needs and expansion, as well as a new round of philanthropic funding. But in 2017, Crisis Text Line’s income was a relatively paltry $2.6 million. When Lublin proposed the for-profit company, the organization’s board was concerned about Crisis Text Line’s long-term sustainability, according to an account recently published by founding board member danah boyd.

The idea of spinning off a for-profit enterprise from Crisis Text Line raised complex ethical questions about whether texters truly consented to the monetization of their intimate, vulnerable conversations with counselors, but the board approved the arrangement. The new company, known as Loris, launched in 2018 with the goal of providing unique “soft skills” training to companies. 

What wasn’t widely known, however, was that Crisis Text Line had a data-sharing agreement with Loris that gave the company access to scrubbed, anonymized user texts, a fact Politico reported last week. The story also raised concerns about Loris’ business model: selling enterprise software that helps companies optimize customer service. On Monday, a Federal Communications Commission commissioner requested that the nonprofit cease its data-sharing relationship, calling the arrangement “disturbingly dystopian” in a letter to Crisis Text Line and Loris leadership. That same day, Crisis Text Line announced that it had decided to end the agreement and asked Loris to delete the data it had previously received.

“This decision weighed heavily on me, but I did vote in favor of it,” boyd wrote about authorizing Lublin to found Loris. “Knowing what I know now, I would not have. But hindsight is always clearer.” 

Though proceeds from Loris are supposed to support Crisis Text Line, the company played no role in the nonprofit’s increased revenue in 2020, according to Shawn Rodriguez, vice president and general counsel of Crisis Text Line. Still, the controversy over Crisis Text Line’s decision to monetize data generated by people seeking help while experiencing intense psychological or emotional distress has become a case study in the ethics of big data. When algorithms go to work on a massive data set, they can deliver novel insights, some of which could literally save lives. Crisis Text Line, after all, used AI to determine which texters were more at risk, and then placed them higher in the queue. 
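Crisis Text Line hasn’t published the details of that triage system, but the underlying idea, scoring an incoming message for risk and letting higher scores jump the queue, can be sketched in a few lines. The keyword list, scoring function, and class names below are invented for illustration; the nonprofit’s actual system relies on AI models the article describes only in passing.

```python
# Illustrative only: a toy risk-based triage queue. The keyword scorer and
# thresholds are invented for this sketch and are NOT Crisis Text Line's
# actual model, which is only described at a high level.
import heapq
import itertools
from dataclasses import dataclass, field

HIGH_RISK_TERMS = {"pills", "bridge", "goodbye", "tonight"}  # hypothetical list

def risk_score(message: str) -> float:
    """Crude stand-in for a trained classifier: fraction of high-risk terms hit."""
    words = set(message.lower().split())
    return len(words & HIGH_RISK_TERMS) / len(HIGH_RISK_TERMS)

@dataclass(order=True)
class QueuedTexter:
    priority: float                      # negative risk score, so higher risk pops first
    arrival: int                         # tie-breaker: earlier arrivals first
    message: str = field(compare=False)  # not used for ordering

class TriageQueue:
    def __init__(self) -> None:
        self._heap: list[QueuedTexter] = []
        self._counter = itertools.count()

    def add(self, message: str) -> None:
        score = risk_score(message)
        heapq.heappush(self._heap, QueuedTexter(-score, next(self._counter), message))

    def next_texter(self) -> str:
        return heapq.heappop(self._heap).message

if __name__ == "__main__":
    q = TriageQueue()
    q.add("having a rough week at work")
    q.add("i took some pills and want to say goodbye tonight")
    print(q.next_texter())  # the higher-risk message is served first
```

In practice the scoring step would be a trained model rather than a keyword lookup; the queue is simply the part that turns a risk score into who a counselor sees next.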

Yet the promise of such breakthroughs often overshadows the risks of misusing or abusing data. In the absence of robust government regulation or guidance, nonprofits and companies like Crisis Text Line and Loris are left to improvise their own ethical framework. The cost of that became clear this week with the FCC’s reprimand and the sense that Crisis Text Line ultimately betrayed its users and supporters. 

Leveraging empathy

When Loris first launched, Lublin described its seemingly virtuous ambitions to Mashable: “Our goal is to make humans better humans.”

In the interview, Lublin emphasized translating the lessons of Crisis Text Line’s empathetic, data-driven counselor training to the workplace, helping people develop critical conversational skills. This seemed like a natural outgrowth of the nonprofit’s work. It’s unclear whether Lublin knew at the time, but didn’t explicitly say, that Loris would have access to anonymized Crisis Text Line user data, or whether the company’s access changed after its launch.

“If another entity could train more people to develop the skills our crisis counselors were developing, perhaps the need for a crisis line would be reduced,” wrote boyd, who referred Mashable’s questions about her experience to Crisis Text Line. “If we could build tools that combat the cycles of pain and suffering, we could pay forward what we were learning from those we served. I wanted to help others develop and leverage empathy.” 


But at some point Loris pivoted away from that mission and began offering services to help companies optimize customer service. On LinkedIn, the company cites its “extensive experience working through the most challenging conversations in the crisis space” and notes that its live coaching software “helps customer care teams make customers happier and brands stand out in the crowd.”

While spinning off Loris from Crisis Text Line may have been a bad idea from the start, Loris’ commercialization of user data to help companies improve their bottom line felt shockingly unmoored from the nonprofit’s role in suicide prevention and crisis intervention.  

“A broader kind of failure”

John Basl, associate director of AI and Data Ethics Initiatives at the Ethics Institute of Northeastern University, says the controversy is another instance of a “broader kind of failure” in artificial intelligence. 

While Basl believes it’s possible for AI to unequivocally benefit the public good, he says the field lacks an “ethics ecosystem” that would help technologists and entrepreneurs grapple with the kind of ethical issues that Crisis Text Line tried to resolve internally. In biomedical and clinical research, for example, federal laws govern how research is conducted, decades of case studies provide insights about past mistakes, and interdisciplinary experts like bioethicists help mediate new or ongoing debates. 

“In the AI space, we just don’t have those yet,” he says. 

The federal government isn’t blind to the implications of artificial intelligence; the Food and Drug Administration’s consideration of a regulatory framework for AI-based medical devices is one example. But Basl says the field is having trouble reckoning with the challenges raised by AI in the absence of a significant federal effort to create an ethics ecosystem. He can imagine a federal agency dedicated to the regulation of artificial intelligence, or at least dedicated subdivisions within major existing agencies like the National Institutes of Health, the Environmental Protection Agency, and the FDA.

Basl, who wasn’t involved with either Loris or Crisis Text Line, also says that motives vary inside organizations and companies that use AI: some people seem to genuinely want to use the technology ethically, while others are more profit-driven.

Critics of the data-sharing between Loris and Crisis Text Line argued that protecting user privacy should’ve been paramount. FCC Commissioner Brendan Carr acknowledged fears that even scrubbed, anonymized user records might contain identifying details, and said there were “serious questions” about whether texters had given “meaningful consent” to have their communication with Crisis Text Line monetized.
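Carr’s worry about “scrubbed, anonymized” records is easy to illustrate. Scrubbing free text typically means pattern-matching out obvious identifiers such as phone numbers and email addresses, while the surrounding details stay put. The sketch below is hypothetical, not Crisis Text Line’s or Loris’ actual pipeline, and the example message is invented.

```python
# Illustrative only: a naive scrubber of the sort often used to "anonymize"
# free text. It strips obvious identifiers but leaves quasi-identifiers
# (school, town, specific circumstances) that could still point back to a person.
import re

PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def scrub(text: str) -> str:
    """Replace phone numbers and email addresses with placeholder tokens."""
    text = PHONE.sub("[PHONE]", text)
    text = EMAIL.sub("[EMAIL]", text)
    return text

# Invented example message, not a real text.
message = ("I'm the only wheelchair user at Lincoln High in Dover, "
           "call me back at 555-867-5309")
print(scrub(message))
# -> I'm the only wheelchair user at Lincoln High in Dover, call me back at [PHONE]
# The number is gone, but the remaining details may still identify the texter.
```

Details like these, sometimes called quasi-identifiers, are what make “anonymized” conversation data risky: combined with outside information, they can point back to a specific person even when names and numbers are gone.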

“The organization and the board has always been and is committed to evolving and improving the way we obtain consent so that we are continually maximizing mental health support for the unique needs of our texters in crisis,” Rodriguez said in a statement to Mashable. He added that Crisis Text Line is making changes to increase transparency for users, including by adding a bulleted summary to the top of its terms of service.


Yet the nature of what Loris became arguably made the arrangement ethically indefensible.

Boyd wrote that she understood why critics felt “anger and disgust.” 

She ended her lengthy account by posing a list of questions to those critics, including: “What is the best way to balance the implicit consent of users in crisis with other potentially beneficial uses of data which they likely will not have intentionally consented to but which can help them or others?” 

When boyd posted a screenshot of those questions to her Twitter account, the responses were overwhelmingly negative, with many respondents calling for her and other board members to resign. Several shared the sentiment that their trust in Crisis Text Line had been lost.

It’s likely that Crisis Text Line and Loris will become a cautionary tale about the ethical use of artificial intelligence: Thoughtful people trying to use technology for good still made a disastrous mistake.

“You’re collecting data about people at their most vulnerable and then using it for an economic exercise, which seems to not treat them as persons, in some sense,” said Basl. 

If you want to talk to someone or are experiencing suicidal thoughts, call the National Suicide Prevention Lifeline at 1-800-273-8255. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email [email protected]. Here is a list of international resources.
