As the recent Congressional hearing on online child sexual exploitation demonstrated, the manipulation and abuse perpetrated by bad actors against vulnerable teens on social and digital media platforms can be devastating.
Consider a few high-profile cases:
A 54-year-old man reportedly targeted a 14-year-old girl on Instagram in December 2022, plying her with a gift card after she remarked in her own post that clothing was expensive. The man allegedly drugged and raped the teen multiple times after cultivating an in-person relationship with her, according to Manhattan District Attorney Alvin Bragg.
A 13-year-old boy from Utah was abducted by an adult male in late 2022, after the man groomed the teen on social media platforms, including Twitter (now branded as X), the teen and his parents reported. The boy returned home after five days, but prosecutors said he’d been repeatedly sexually assaulted.
Mashable’s own investigation into emotional support platforms recently found concerns about teen safety on 7 Cups, a popular app and website where users can seek compassionate listening or offer it to others. In a 2018 case originally reported by the Pittsburgh Tribune-Review, a 42-year-old Butler, Pennsylvania, man lied about his age to gain access to the teen community on 7 Cups. The man posed as a 15-year-old boy and coerced a 14-year-old girl into sending him sexually explicit imagery of herself, crimes to which he pleaded guilty.
These cases reflect a chilling reality. Predators know how to weaponize social media platforms against youth. While this isn’t new, it’s an increasingly urgent problem. Screen time surged during the COVID-19 pandemic. Adolescents and teens are in the midst of a mental health crisis, which may prompt them to seek related information on social media and confide in strangers they meet there, too. Some research also suggests that youth are increasingly comfortable conducting online romantic relationships with adults.
This story is part of our investigation into the emotional support platform 7 Cups and the growing marketplace for apps and platforms that pair people with someone who is supposed to be a compassionate listener. The series explores a failed experiment between the state of California and 7 Cups, as well as the myriad risks of seeking emotional support online from strangers. These dangers can include the manipulation of vulnerable youth and targeted abuse and harassment. The series also includes an analysis of why it’s so hard to stop online child exploitation, and looks at solutions to make platforms safer.
Bad actors and predators appear to be capitalizing on these trends. Data collected from the Exploited Children Division at the National Center for Missing & Exploited Children (NCMEC) show an alarming increase in online enticement, or types of predatory behavior designed to exploit a minor.
While there’s no single reason that explains the heightened risk, the largely unrestricted access adult bad actors have had to youth online, in the absence of robust safety measures and meaningful federal regulation, may have both emboldened predators and influenced youth attitudes about the adult behavior they’ll encounter online.
Though youth and their caregivers may think the risk of online exploitation is low, the design of many social media platforms tends to maximize opportunities for predators while leaving youth to fend for their own safety. Online child safety experts argue that platforms should take far more responsibility for ensuring their security, and urge youth to report any abuse or exploitation to a trusted adult or to the authorities.
“I think sometimes the pressure is on for these kids to figure it out for themselves,” says Lauren Coffren, executive director of the Exploited Children Division at NCMEC.
“This is happening on every platform”
It doesn’t matter where youth spend their time online — a popular platform for adults or a space specifically created for teens — bad actors are targeting them, says Melissa Stroebel, vice president of research and insights at Thorn, a nonprofit organization that builds technology to defend children from sexual abuse.
“At the end of the day, this is happening on every platform,” she notes.
Popular social media platforms don’t effectively verify user age, making it possible for children younger than 18 to sign up for services that may put them at greater risk of coming into contact with adults who intend to exploit them. Similarly, adults can often access gated teen communities by simply lying about their age.
Safety features, like blocking and reporting, can be hard to access or may never be explicitly introduced to minors as a way to protect themselves. Bad actors can evade platform bans by creating new accounts with burner email addresses or phones, because their profiles aren’t tied to a verified identity.
Protecting oneself from online exploitation as a teen, or safeguarding a child as an adult, can be extraordinarily hard under these circumstances.
Data collected by NCMEC suggests the problem is worsening. Between 2022 and 2023, NCMEC logged a 132-percent increase in reports related to online enticement. This increase included an emerging trend in which children are financially blackmailed by users who request and receive nude or sexual images of them.
Common tactics that predators use to entice children include lying about their age to appear younger, complimenting a child or connecting with them over mutual interests, engaging in sexual chat, providing an incentive like a gift card or alcohol, offering or sending sexually explicit images of themselves, and asking a child for such material.
Victimization is never a child’s fault, experts say. Nor should youth be expected to constantly guard against the threat of exploitation. Instead, prevention experts say that minors and their caregivers need better tools to manage risk, and that social media companies need to design platforms with youth safety as a key priority.
Unsafe by design
Thorn urges platforms to consider child safety from the outset. One best practice is for platforms to have content moderation tools, human trust and safety staff, and the knowledge of what exploitation looks like, so they can recognize abusive interactions and material and report them internally and to the authorities.
But that’s not enough. Stroebel adds that platforms must also have the capacity to scale those systems as the user base grows. Too often, the systems are implemented well after a product’s launch and aren’t designed to scale successfully.
“We end up trying to put Scotch tape over cracks in the dam,” says Stroebel.
Stroebel says it’s imperative that there are tools to recognize, report, and remove someone with predatory intent or behavior.
On an emotional support platform like 7 Cups, which relies heavily on a volunteer labor force, a safety report might be evaluated by a volunteer who receives little training in deciding whether to escalate bad behavior to paid staff.
Other apps may use a combination of artificial intelligence and paid human moderation to review safety reports and still issue confusing decisions, like concluding that a clearly harmful offense doesn’t violate their terms of service. Instagram users have anecdotally found it difficult to get the platform to take action against bullying accounts, for example.
Coffren says the NCMEC CyberTipline, which receives reports of child exploitation, often hears from youth and caregivers that the platform reporting process is more difficult than they expected. Multiple links or clicks take users to different subpages, where they might encounter non-trauma-informed language that’s inappropriate for someone who’s been exploited online. Sometimes people never hear back from the platform once they’ve made a report.
Platforms should reduce the “friction” of using reporting tools, says Coffren. They could even require minors to complete a tutorial about how safety tools work before accessing the platform, she adds.
Coffren points out that every company gets to make its own decisions about safety practices, which creates a “giant disparity” from platform to platform and makes it difficult for youth and caregivers to know how to reliably protect themselves or their children.
There is legislation aimed at better protecting youth online. Proposed federal legislation known as the Kids Online Safety Act does not impose age and identity verification but would require online platforms to enable the strongest privacy settings for underage users. It would also mandate a “duty of care” so that social media companies have to prevent and mitigate harms associated with using their product, including suicide, eating disorders, and sexual exploitation.
The legislation has many backers, including the American Psychological Association and the American Academy of Pediatrics. Yet critics of the bill say it would curtail free speech and discourage marginalized youth, such as LGBTQ+ minors, from learning more about their identity and connecting with other queer and transgender community members online.
Youth more vulnerable online than adults realize
Youth view online experiences differently than many adults, which is why it’s critical to incorporate their overlooked perspectives in policy and design choices, says Stroebel.
Research on online grooming conducted by Thorn found that sharing nudes is now viewed as normal by a third of teens and that half of minors who’d shared such images did so with someone they only knew online. Slightly more than a third of those respondents said they’d given nudes to someone they believed to be an adult.
Stroebel says the “stranger danger” catchphrase that Gen X and older millennial parents grew up hearing isn’t sufficient as standalone advice for avoiding risky situations online. Instead, youth are accustomed to building a digital social network of friends and acquaintances they’ve never met in person. For some of them, a stranger is just someone who isn’t yet their friend, particularly if that unknown contact is a friend of a friend.
On its own, this isn’t necessarily risky. But Thorn’s research on grooming indicates that youth can be surprisingly open with online-only contacts. One in seven respondents said they’ve told a virtual contact something they’d never shared with anyone before, according to a 2022 Thorn survey of 1,200 children and teens between the ages of 9 and 17.
Worryingly, the norms around online romantic interactions and relationships, particularly with adults, appear to have shifted for youth, potentially making them more vulnerable to predation.
The survey found that a significant proportion of youth thought it was common for kids their age to flirt with adults they’d met online. A quarter of teens believed it was common to flirt with users ages 30 and older. Among 9- to 12-year-olds, one in five felt the same way about romantic interactions with older adults.
Stroebel says that youth struggle when responding to adult behavior that seems predatory. Many view reporting as more punitive than blocking, which creates an “immediate barrier of defense” but doesn’t trigger a platform protocol that ends in confronting or banning the adult user.
Stroebel says that manipulation plays heavily into a young person’s decision when, for instance, the adult tells the teen they misunderstood a comment.
“Think about how hard it is to recognize manipulation in a way that you trust your gut,” says Stroebel, adding that a young user may have confided in the adult or feel understood in a way they’ve never experienced before. Expecting youth to recognize a manipulative dynamic is an unreasonable burden, says Stroebel.
Even when a minor takes action, Thorn’s research shows that one in two youth who block or report someone say they were recontacted by the user, either on a different platform or from a new account created with another email address. In half of such cases, the minor experiences continued harassment. Stroebel says that ban evasion is “far too common.”
How to handle online exploitation
Coffren says that youth who’ve been exploited online should tell a trusted parent, adult, or friend. The minor or someone close to them can make a report to the CyberTipline, which assesses the information and shares it with the appropriate authorities for further investigation. (The center’s 24-hour hotline is 1-800-THE-LOST.)
Coffren emphasizes that minors who’ve been exploited have been tricked or coerced and should not be treated by law enforcement as if they have violated the law.
She also wants youth to know that nudes can be removed from the internet. NCMEC’s Take It Down program is a free service that lets people anonymously request the removal of nude or sexually explicit photos and videos taken of them before age 18 by generating a digital fingerprint, or hash, that flags that content. NCMEC shares a list of fingerprints with online platforms, including Facebook, TikTok, and OnlyFans. In turn, the platforms can use the list to detect and remove the images or videos.
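The hashing idea is worth unpacking, because it explains how platforms can screen for a specific image without anyone having to send that image around. The sketch below is purely illustrative: it uses a generic cryptographic hash (SHA-256) and made-up placeholder bytes, not whatever matching technology Take It Down and its partner platforms actually use.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a hex digest that identifies a file without revealing what it depicts."""
    return hashlib.sha256(content).hexdigest()

# Placeholder bytes standing in for the image a person reports; only the
# fingerprint, never the image itself, would be shared with platforms.
reported_image = b"placeholder bytes for the reported image"
shared_hash_list = {fingerprint(reported_image)}

def should_remove(upload: bytes) -> bool:
    """A participating platform could check each new upload against the shared list."""
    return fingerprint(upload) in shared_hash_list

print(should_remove(reported_image))            # True: an exact copy is flagged
print(should_remove(b"unrelated image bytes"))  # False: other content is untouched
```

In practice, such systems often rely on perceptual hashing so that resized or lightly edited copies still match, something an exact cryptographic hash like the one above cannot do; the point here is only that a shared list of fingerprints lets platforms detect flagged content without ever receiving the underlying photos or videos.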
Coffren urges youth who’ve been exploited to stay hopeful about their future: “There is a life after your nude images circulate online.”
But reducing the stigma of exploitation also requires the public to confront how the digital ecosystems youth participate in aren’t designed for their safety and instead expose them to bad actors eager to manipulate and deceive them.
“We have to accept that children are going to weigh the pros and the cons and maybe make the wrong decision,” says Coffren, “but when it’s on the internet, the grace isn’t given to make mistakes as lightly or as easily.”
If you are a child being sexually exploited online, if you know a child who is being sexually exploited online, or if you have witnessed the exploitation of a child online, you can report it to the CyberTipline, which is operated by the National Center for Missing & Exploited Children.