
AI deepfake porn should be a crime, say advocates and victims


Lawmakers and activists are pushing for federal legislation criminalizing AI-generated pornography, saying so-called deepfake porn ruins the lives of victims, who are mostly women and girls.

“When there are no clear laws at the federal and state level, victims who come to the police are often told there is nothing that can be done,” said Andrea Powell, the director of an advocacy group called the Image-Based Sexual Violence Initiative, during a recent panel discussion on the issue. The online forum was hosted by the nonprofit National Organization for Women (NOW).

“These people were then threatened with sexual violence and harassment offline, and unfortunately we also found that some [victims] did not survive,” Powell added. She calls AI deepfake nude apps a “virtual weapon” for men and boys.

The term “deepfake” was coined at the end of 2017 by a Reddit user who used Google's (GOOGL) open-source face-swapping technology to create pornographic videos. AI-generated, sexually explicit content has spread like wildfire since ChatGPT brought generative artificial intelligence into the mainstream. Tech companies are vying to build better AI photo and video tools, and some people are using those tools for harm. According to Powell, Google Search lists 9,000 sites showing explicit deepfake abuse, and sexually explicit deepfake content online increased by more than 400% between 2022 and 2023.

“It’s gotten to the point where 11- and 12-year-old girls are afraid to be online,” she said.

Regulations regarding deepfakes vary from state to state. Ten states have laws on the books, and six of those impose criminal penalties. More deepfake laws are pending in Florida, Virginia, California, and Ohio. And San Francisco this week filed a groundbreaking lawsuit against 16 deepfake porn websites.

But advocates say a lack of uniformity among state laws is creating problems and that federal regulations are long overdue. They also say platforms, not just individuals, should be held liable for non-consensual deepfakes.

Some federal lawmakers are working on it. Representative Joe Morelle (NY) introduced the Preventing Deepfakes of Intimate Images Act, which would criminalize the non-consensual distribution of deepfakes. Shortly after deepfake nude images of Taylor Swift took the internet by storm, lawmakers introduced the DEFIANCE Act, which would strengthen victims’ right to civil action. And a bipartisan bill called the Intimate Privacy Protection Act would hold technology companies accountable if they fail to address the problem of deepfake nude images on their platforms.

In the meantime, victims and advocates are taking matters into their own hands. Breeze Liu was working as a venture capitalist when she became the target of sexual harassment through deepfakes in 2020. She went on to develop an app called Alecto AI that helps people track down and remove deepfake content online that uses their image.

During the online panel, Liu recalled her own experience as a victim of deepfake abuse, saying, “I felt like I was probably better off dead because it was just absolutely horrific.”

“We have suffered from online image abuse for long enough,” she added. “I started this company in the hope that one day we all, and our future generations, would take it for granted that no one should have to die as a result of online violence.”

In addition to Alecto AI, Liu is also advocating for federal policy changes that would criminalize non-consensual AI deepfake pornography, such as Morelle's 2023 bill. However, the Preventing Deepfakes of Intimate Images Act has not advanced since its introduction last year.

Notably, some technology companies have already taken steps to address the issue. Google updated its policies on July 31 to curb non-consensual deepfake content. Others are under pressure: Meta's (META) Oversight Board said in late July that the company needs to do more to address explicit AI-generated content on its platforms.
