FTC warns that AI technology like ChatGPT could ‘turbocharge’ fraud

April 18, 2023

In a Congressional hearing on Tuesday focused on the Federal Trade Commission’s work to protect American consumers from fraud and other deceptive practices, FTC chair Lina Khan and fellow commissioners warned House representatives that modern AI technologies, like ChatGPT, could be used to “turbocharge” fraud. The warning came in response to a question about how the Commission is working to protect Americans from unfair practices related to technological advances.

Khan agreed that AI presents new risks for the FTC to manage, despite the advantages it may offer.

“AI presents a whole set of opportunities, but also presents a whole set of risks,” Khan told the House representatives. “And I think we’ve already seen ways in which it could be used to turbocharge fraud and scams. We’ve been putting market participants on notice that instances in which AI tools are effectively being designed to deceive people can place them on the hook for FTC action,” she stated.

Khan additionally warned that AI’s ability to turbocharge fraud should be considered a “serious concern.”

To help combat the problem, the FTC chair noted that the agency’s technologists were being embedded across its work, on both the consumer protection and competition sides, to ensure that any issues with AI would be properly identified and handled.

In a follow-up, FTC commissioner Rebecca Slaughter added that the FTC had adapted to new technologies over the years and has the expertise to adapt again to combat AI-powered fraud.

“There’s a lot of noise around AI right now and it’s important because it is [a] revolutionary technology in some ways,” Slaughter said. “But our obligation is to do what we’ve always done — which is apply the tools we have to these changing technologies, make sure that we have the expertise to do that effectively, but to not be scared off by the idea that this is a new revolutionary technology, and dig right in on protecting people,” she said.

The Commission’s testimony, delivered by Khan, Slaughter, and Commissioner Alvaro Bedoya, was presented before the House Energy and Commerce Subcommittee on Innovation, Data, and Commerce and addressed a wide range of topics beyond AI.

Among the topics that intersected with technology, the agency representatives detailed in their written testimony the FTC’s work to reduce the scourge of spam phone calls; its warning to online home-buying company Opendoor over deceptive claims about potential sales prices; deceptive claims made by members of the crypto community; its work to protect consumers’ private health data collected by websites and apps; its handling of COPPA (children’s privacy law) violations by Fortnite maker Epic Games; its orders to online learning platform Chegg over its failure to protect personal data; its crackdown on junk fees and on subscriptions that consumers cannot easily cancel; deceptive practices in the gig economy; and more.

The agency also noted that it launched a new Office of Technology (OT) in February to support its law enforcement and policy work with in-house technical expertise, which could help it keep pace with technological change. The FTC’s testimony specifically referred to the OT’s focus on areas like security and privacy, digital markets, augmented and virtual reality, the gig economy, and ad tracking technologies, in addition to “automated decision-making,” which could include AI.

“The creation of the Office of Technology builds on the FTC’s efforts over the years to expand its in-house technological expertise, and it brings the agency in line with other leading antitrust and consumer protection enforcers around the world,” the FTC said.

FTC warns that AI technology like ChatGPT could ‘turbocharge’ fraud by Sarah Perez originally published on TechCrunch
