
California's solution to combating AI disinformation is worse than the problem

Democracy is on the brink, warns California Governor Gavin Newsom. The culprit? A wave of “disinformation powered by generative AI” poised to “pollute our information ecosystems like never before.” With the 2024 election approaching, Newsom and California Democrats argue that artificial intelligence-generated content threatens to distort public perception. In response, the Golden State swiftly passed two sweeping new laws designed to stem the tide of “fraudulent” content online.

Not only do these laws likely violate the First Amendment, which protects even false political speech, but they are also based on exaggerated fears of AI disinformation.

An obviously fake video of Vice President Kamala Harris, widely shared by Elon Musk, was the catalyst for Newsom's push to regulate online discourse. But the laws will, of course, also sweep in the many AI parody videos of Donald Trump.

Of course, disinformation, deepfakes and propaganda can spread and cause real harm. But as researchers have pointed out, largely unheeded, the scale and impact of disinformation has so far typically been much smaller than the alarming scenarios assume. And a recent study by MIT researchers found that people can often detect deepfakes from both auditory and visual cues. This is why widely shared deepfakes of Harris or Trump failed to convince many people that they were real.

Additionally, a closer look at the 2024 elections around the world shows that fears of AI deepfakes are largely overblown.

Ahead of this summer's European Parliament elections, headline after headline sounded the alarm that “AI could amplify disinformation” and put the future of democracy at risk. A perfect storm of Russian propaganda and artificial intelligence threatened to drown an election of 373 million eligible voters across 27 countries in disinformation and deepfakes.

This message was echoed by think tanks, researchers and European Union leaders ahead of the June elections. Věra Jourová, European Commission vice-president for values and transparency, said AI deepfakes of politicians could “create a nuclear bomb… to change the course of voter preferences.” In response, the European Commission sent warnings to social media platforms and set up crisis units to deal with efforts to raise doubts about the legitimacy of the election results for weeks after the vote.

So what happened? Despite active disinformation networks on social media platforms, the EU-funded and often alarmist European Digital Media Observatory detected no major disinformation incidents or spate of deepfakes. In the U.K. election, the British fact-checking group Full Fact told Politico that there was no deepfake “which just dominated a day of actual election campaigning.”

What about the rest of the world? Elections have taken place in many countries, some with less robust democratic institutions and more fragile electoral processes than European democracies.

A Washington Post article highlighted India's 2024 elections as a “preview” of how AI is transforming democracy. Despite the election being “flooded with deepfakes,” researchers found the technology had little impact and in some ways proved beneficial, helping candidates connect with voters.

In Pakistan and Indonesia, observers reported minimal misinformation, with viral fake news fact-checked on social media. A coalition of civil society groups and government agencies in Taiwan provided transparency and crowdsourced fact-checking, mitigating Chinese interference attempts.

It should be a positive story that democracies around the world have so far demonstrated a higher level of resilience than many feared. More importantly, these election results show that a critical mass of voters can think for themselves and not slavishly fall for lies, propaganda and nonsense, even when skillfully produced using cutting-edge technology.

As the 2024 U.S. elections approach, we should be vigilant but resist the urge to sacrifice free speech in the name of combating disinformation. Our democracy is more resilient than the fearmongers claim.

California's two new laws, on the other hand, are panic-driven and counterproductive, opening the door to state-sanctioned censorship of lawful expression.

AB 2839 bans the use of AI deepfakes about political candidates, while AB 2655 requires major platforms to block “deceptive” content about politicians, respond to any public complaint within 36 hours, and remove “substantially similar” content.

Both laws will restrict political expression, limit Californians' ability to criticize politicians, undermine platforms' right to moderate content and even prevent people from flagging “misleading” content as fake.

While AB 2839 exempts political satire and parody, it requires those responsible to disclose that the “materially misleading” content is not real, which will inevitably blunt the impact of these messages if commentators must explain that they are just joking.

We should also remember that the very politicians making headlines about AI disinformation – and insisting that they should be trusted to define this nebulous concept – are often the sources of political misinformation.

Instead of succumbing to elite panic, we should rise to the challenge of disinformation, heeding the words of Supreme Court Justice Anthony Kennedy, who wrote: “Our constitutional tradition stands against the idea that we need Oceania's Ministry of Truth.” In defense of free speech, we must avoid giving the government unprecedented powers to decide what is true, recognizing that the greatest threat to democracy often comes from those who claim to protect it.

(Disclosure: The Future of Free Speech is a nonpartisan think tank in collaboration with Vanderbilt University and Denmark-based Justitia. It has received limited financial support from Google for certain projects unrelated to the subject of this article. In all cases, The Future of Free Speech retains full independence and final authority for its work.)

This article was originally published on MSNBC.com