Legislators Aim to Help Celebrities and Consumers Fight Deepfake Scam Ads

CBS anchor Gayle King decried a deepfake ad using her likeness to promote a weight loss product. PHOTO: DIA DIPASUPIL/GETTY IMAGES
Summary

The dangers to both movie stars and ordinary people are growing as AI-fueled deepfake technology becomes more readily available, experts say.

Some TikTok users recently encountered an ad in which the YouTube star known as MrBeast appeared to offer 10,000 new iPhones for just $2 each.

The deal sounded too good to be true, and it was. MrBeast last month joined film star Tom Hanks and CBS anchor Gayle King in a growing cohort of celebrities who say scammers have made unauthorized use of their likenesses in convincing, AI-generated deepfake ads hawking phones, fake dental plans and dubious weight-loss solutions.

“Are social media platforms ready to handle the rise of AI deepfakes? This is a serious problem,” wrote MrBeast, whose real name is Jimmy Donaldson, on X, formerly known as Twitter.

Some politicians are trying to take on the issue. Members of both houses of Congress have in recent weeks introduced bills that would create a national standard prohibiting unauthorized deepfakes in a commercial context. If passed into law, these bills could help celebrities and ordinary citizens alike take action against scam marketers using their likenesses, their sponsors say.

But it is unclear if these efforts can counter a coming wave of malicious deepfakes.

Nor is it clear how effective social-media companies will be at blocking the spread of deepfakes, since the ads featuring Hanks, King and MrBeast appeared to bypass their filters.

Meta Platforms policies prohibit manipulated videos and content that uses public figures’ likenesses in a deceptive way, according to a spokesman who declined to provide specific data regarding deepfakes.

A TikTok spokesman referred to internal data suggesting that 1.3% of the 106 million posts removed from the platform between April and June fell into the integrity and authenticity category, and 0.4% of that total violated bans on synthetic or manipulated media, such as deepfakes.

By contrast, 39% of all videos removed in that period were pulled for prohibited content such as adult nudity or animal abuse, according to the data.

Shadowy entities like the ones behind these scam ads often disappear as quickly as they pop up, then move on to other targets, said Hilary Krane, chief legal officer at talent firm Creative Artists Agency.

The increasing accessibility of free online deepfake tools means that any ordinary citizen who has publicly shared content featuring his or her likeness could join megastars like Hanks on scammers’ target lists, said Rijul Gupta, chief executive and co-founder of DeepMedia, a firm that helps developers and government agencies detect deepfakes.

“The future of those robocalls, the future of the Nigerian prince scam, is the deepfake scam,” Gupta said.

Fines or jail time for deepfake scammers

Celebrities have for decades relied on state-level right-of-publicity laws, which prohibit the unapproved use of a person’s name or likeness for marketing purposes, to take action against advertisers.

These laws, along with existing prohibitions on copyright infringement, defamation and harassment, preclude the need for a deepfake law, said David Greene, civil liberties director at digital rights advocacy group Electronic Frontier Foundation. EFF has expressed skepticism about federal deepfake laws, arguing that they could curtail both freedom of speech and productive uses of AI technology.

Yet right of publicity laws vary widely by state, with many states offering no such protection, said Sen. Chris Coons (D., Del.).

Coons and three other senators in October introduced the bipartisan No Fakes Act, which would create a nationwide standard for individuals to bring civil action against parties who use their AI-generated likenesses without permission.

The bill proposes holding offenders liable for fines of up to $5,000 for each individual violation unless the damage to the offended party is determined to be greater than that. Any punitive damages and attorney’s fees would also have to be paid, according to a draft version.

“This helps protect individual rights, and the balance I’m trying to strike in this bill is fostering innovation in artificial intelligence and promoting and protecting creativity in the United States,” said Coons.

In the House, Rep. Yvette Clarke (D., N.Y.) in September reintroduced the Deepfakes Accountability Act, first proposed in 2019, which would criminalize using a person’s likeness without their permission in a way that could potentially harm the person.

The House bill proposes harsher penalties than its Senate counterpart, including civil judgments of up to $150,000 per violation and as much as five years in prison for criminal violations.

It would also require creators and distributors to clearly mark any manipulated content.

“It’s not just the harm that’s done to the individual whose likeness has been captured. It’s also the deception of the public,” Clarke said.

The bills would grant both public figures and private citizens more power to defend themselves. But enhanced legal protections only go so far, and the rapid evolution of deepfake technology could quickly make these bills obsolete, said Krane, the CAA chief legal officer.

“I think that technology runs ahead of policy, so this is going to have to be an evergreen process,” she said.

Private citizens are not immune

Most deepfake scams focus on stars or minor celebrities, but the threat to private citizens could be greater than most people think.

Processed meat brand Steak-umm, which made a name for itself during the pandemic by countering misinformation on social media, recently made light of the technology with an ad campaign in which a group of vegans agree to participate in a filmed focus group, only to see deepfake versions of themselves talk about how much they love meat.

The ad then urges viewers to sign a petition supporting Clarke’s bill.

Max Scannapieco, vice president of sales and marketing for Steak-umm parent company Quaker Maid Meats, said he wasn’t aware that deepfake technology posed a threat to ordinary consumers until Steak-umm’s ad agency, Tombras, pitched the idea earlier this year.

“This is the very beginning; it could get a lot worse,” he said.

Write to Patrick Coffee at patrick.coffee@wsj.com
