
4 Ways AI Will Be Used and Abused in the 2024 Elections | Opinion

(The Conversation is an independent and nonprofit source of news, analysis and commentary from academic experts.)

Author: Barbara A. Trish, Grinnell College

The American public is concerned about artificial intelligence and the 2024 elections.

A September 2024 Pew Research Center survey found that more than half of Americans are concerned that artificial intelligence, or AI, a computer technology that mimics the processes and products of human intelligence, will be used to create and spread false and misleading information during the election campaign.

My academic research on AI may help allay some of these concerns. While the technology certainly has the potential to manipulate voters or spread lies at scale, most of its uses in the current election cycle are, so far, nothing new.

I’ve identified four roles that AI is playing or could play in the 2024 campaign, all of them arguably updated versions of familiar campaign activities.

1. Voter Information

The launch of ChatGPT in 2022 brought public attention to the promise and dangers of generative artificial intelligence. The technology is called “generative” because it generates text responses to user queries: it can write poetry, answer history questions and provide information about the 2024 elections.

Instead of Googling for voting information, people can ask generative AI a question, for example, “How much has inflation changed since 2020?” or “Who’s running for U.S. Senate in Texas?”

Some generative AI platforms, such as Google’s Gemini chatbot, refuse to answer questions about candidates and voting. Others, like Meta’s AI tool Llama, respond, and respond accurately.

But generative AI can also produce misinformation. In the most extreme cases, a model may “hallucinate,” producing wildly inaccurate results.

A June 2024 CBS News report found that ChatGPT was giving incorrect or incomplete answers to some prompts asking how to vote in battleground states. ChatGPT also did not consistently follow the policy of its maker, OpenAI, and direct users to CanIVote.org, a reputable site for voting information.

Just as with internet searches, people should verify AI results. And be warned: Google’s Gemini now automatically returns AI-generated answers at the top of many search results pages, so you may stumble upon AI output when you think you’re simply searching the web.

2. Deepfakes

Deepfakes are fabricated images, audio and video created with generative AI and designed to replicate reality. In essence, they are highly convincing versions of what have come to be called “cheap fakes”: altered media created with basic tools such as Photoshop and video editing software.

The potential for deepfakes to deceive voters became apparent when an AI-generated robocall impersonating Joe Biden, placed before the New Hampshire primary in January 2024, advised Democrats to save their votes for November.

The Federal Communications Commission has since ruled that robocalls using AI-generated voices are subject to the same rules as all robocalls: they may not be autodialed or delivered to mobile phones or landlines without prior consent.

The agency also imposed a $6 million fine on the consultant who created the fake Biden call, though not for deceiving voters: he was fined for transmitting inaccurate caller ID information.

While synthetic media can be used to spread disinformation, deepfakes are now part of the creative toolkit of political advertisers.

One of the first deepfakes aimed more at persuasion than outright deception was an AI-generated ad in a 2022 mayoral contest that depicted the then-mayor of Shreveport, Louisiana, as a failing student called into the principal’s office.

The ad included a brief disclaimer that it was a deepfake, a warning not required by the federal government and easy to miss.

Wired magazine’s AI Elections Project, which tracks AI use in the 2024 cycle, shows that deepfakes have not overwhelmed the ads voters see. But they have been used by candidates across the political spectrum, up and down the ballot, for many purposes, including deception.

Former President Donald Trump hints at a Democratic deepfake when he questions the size of the crowds at Vice President Kamala Harris’ campaign events. By making such accusations, Trump seeks a “liar’s dividend”: a chance to instill the idea that truthful content is fake.

This kind of discrediting of a political opponent is nothing new. Trump has argued that the truth is really just “fake news” at least since the 2008 “birther” conspiracy, when he helped spread rumors that presidential candidate Barack Obama’s birth certificate was fake.

3. Strategic Distraction

Some observers worry that election deniers could use AI this cycle to distract election administrators by burying them in frivolous public records requests.

For example, the group True the Vote has filed hundreds of thousands of challenges to voter registrations over the past decade, working only with volunteers and a web application. Imagine its capacity if those volunteers were armed with AI to automate the work.

Widespread and rapid challenges to voter rolls could distract election officials from other important tasks, disenfranchise legitimate voters and disrupt elections.

There is currently no evidence that this is happening.

4. Foreign Interference in Elections

Confirmed Russian interference in the 2016 election underscored that the threat of foreign meddling in U.S. politics, whether from Russia or another country invested in discrediting Western democracy, remains a serious concern.

In July, the Justice Department seized two domain names and searched about 1,000 accounts that Russian actors had used in a so-called social media bot farm, similar to those Russia used to try to influence the opinions of hundreds of millions of Facebook users in the 2020 campaign. AI could give these efforts a real boost.

Special Counsel Robert Mueller’s investigation into the 2016 U.S. election concluded that Russia worked to elect President Donald Trump. Jonathan Ernst/Pool via AP

There is also evidence this cycle that China is using AI to spread harmful information about the U.S. One such social media post inaccurately transcribed a speech by Biden to suggest that he made sexual references.

AI may help election meddlers do their dirty work, but new technology is hardly necessary for foreign interference in U.S. politics.

In 1940, Britain, an American ally, was so intent on drawing the United States into World War II that British intelligence officers worked to help interventionist congressional candidates and discredit isolationists.

One of the targets was Hamilton Fish, a prominent isolationist Republican U.S. representative. By circulating an out-of-context photograph of Fish with the leader of an American pro-Nazi group, the British sought to falsely portray Fish as a supporter of Nazi elements abroad and in the United States.

Can AI be controlled?

Even though new technologies are not needed to cause harm, AI in the hands of bad actors could pose a serious challenge to election operations and integrity.

The federal government’s efforts to regulate the use of AI in electoral politics face as much of an uphill battle as most proposals to regulate political campaigns. States have become more proactive, with 19 now banning or restricting deepfakes in political campaigns.

Some platforms implement light self-moderation. Google’s Gemini responds to requests for basic election information by saying, “I can’t help with answers about elections and political figures right now.”

Campaigns may also self-regulate to some degree. Several speakers at a May 2024 campaign technology conference expressed concern about how voters would react if they learned that a campaign had used AI. In this sense, public concern about AI can be productive, creating a guardrail of sorts.

But the downside to this public concern—what Stanford University’s Nate Persily calls the “AI scare”—is that it could further undermine confidence in elections.

This article is republished from The Conversation under a Creative Commons license.

Read the original article here: https://theconversation.com/4-ways-ai-can-be-used-and-abused-in-the-2024-election-from-deepfakes-to-foreign-interference-239878.