
Some say AI-created videos of people should be criminalized

With the election approaching, democracy watchdogs in Arizona and technology companies are warning that deepfake videos could endanger democracy.

PHOENIX — Before his death in April, widely respected American philosopher and cognitive scientist Daniel Dennett publicly warned about deepfake videos, or, as he called them, “fake people.”

Dennett, with his shaggy white beard and raspy voice, projected the image of a biblical prophet (though, to be clear, he was an atheist) as he talked about how governments and corporations will exploit “our most compelling fears and anxieties.”

“These fake people are the most dangerous artifacts in human history and have the potential to destroy not only the economy but human freedom itself,” Dennett said.

Outlaw “fake people” like counterfeit money

His solution? Criminalize “fake people” the same way we criminalize counterfeit money.

“Before it is too late (and perhaps it is already too late), we must outlaw both the creation of fake people and their ‘distribution.’ The penalties for both offenses should be extremely severe, because civilization itself is at stake.”

While Dennett's proposal may seem extreme, it highlights a problem the U.S. government has yet to address, and one that election officials in particular are concerned about.

“I have no doubt that this type of technology will be used in our state this year to disrupt our elections in some way,” Arizona Secretary of State Adrian Fontes said earlier this year at a forum for journalists and election officials.

The government cannot keep up

Astonishingly realistic computer-generated videos are increasingly appearing in online feeds, depicting fictional people, or real people appearing to say and do things they never said or did. AI-generated audio is also considered a deepfake.

There is no federal law banning deepfakes, and the various bills that would have set ground rules for AI have largely stalled in recent years.

“The problem we face with all technologies is that they are created and introduced into society and people start adopting them and using them long before we can figure out how to regulate them,” said Sarah Florini of ASU's Lincoln Center for Applied Ethics.

Trump, Taylor Swift and the “satire” gap

Recent incidents involving deepfakes show how the lines between satire and deception are becoming increasingly blurred.

In January, a robocall using a fake, AI-generated version of President Joe Biden's voice urged New Hampshire Democrats not to vote in the primary.

In June, a video featuring an AI-generated “car dealer” in Paris went viral, falsely claiming that the dealer had sold a Bugatti to the wife of the Ukrainian president and implying financial corruption. Disinformation experts told CNN the video was likely the product of a Russian disinformation campaign.

Last month, Elon Musk posted to his more than 100 million followers a video featuring a digitally cloned version of Kamala Harris' voice, which Musk later said was intended as satire. His original post contained no indication that the audio was fake.

In May, scammers used a deepfake video of former President Donald Trump and celebrities to solicit donations.

On Sunday, Trump posted AI-generated images of pop superstar Taylor Swift and “Swifties” appearing to support him. Trump downplayed the seriousness of his decision to share the images, saying he could not be sued because he did not create the content.


Florini said plaintiffs and prosecutors could use existing truth in advertising and fraud laws to address some of the concerns raised by AI. She doesn't buy into Dennett's philosophy of banning deepfakes entirely.

“I think the idea that these are the most dangerous artifacts in human history is a bit over the top and probably just for effect,” Florini said. “I don't know if they (deepfakes) will destroy human freedom.”

If we don't ban “fake people,” what do we do?

In the absence of new legislation, the Biden administration has issued an executive order on AI, federal regulators have banned AI-generated robocalls, and the White House has persuaded the country's largest tech companies to make voluntary commitments.

Florini says Congress needs to balance freedom of speech with consumer protection when drafting legislation. She thinks it's problematic to compare deepfakes to “counterfeit money.”

“When you start to look at the nuances, you have to deal with the fact that there are many iterations, many versions of, quote-unquote, fake people,” she said, referring to the media industry and the rights of artistic expression.

A proposal in Congress would require watermarks on manipulated videos. Brad Smith, vice chair and president of Microsoft, recently made a public appeal, saying, “One of the most important things the United States can do is pass a comprehensive law against deepfake scams to prevent cybercriminals from using this technology to steal from ordinary Americans.”

Before his death, Dennett predicted that deepfakes would make society less trusting and more paranoid, stressing that the problem is not the technology itself but the bad actors who are allowed to use it with impunity.

“It's not that artificial general intelligence (AGI) is going to enslave us. We're going to allow ourselves to be seduced by much, much, much dumber systems,” Dennett said.
