Artificial Intelligence (AI) has evolved over the last decade at a rate few saw coming. The technology has many beneficial uses, from automating time-consuming processes to reducing human error. However, there’s growing concern surrounding the groundbreaking software, from claims of sentient AI chatbots to a rise in plagiarism in schools driven by apps like ChatGPT.

One of the most concerning capabilities of AI is its ability to alter images, videos and even sound to mimic real people, places and situations. The emergence of deepfakes, artificially created videos, images or audio designed to pass as something they are not, has fueled a massive uptick in misinformation, which could have disastrous consequences if left unregulated.

I first heard about deepfakes in 2018, when BuzzFeed teamed up with director and actor Jordan Peele on a public service announcement in which Peele spoke as Barack Obama, using AI to generate a virtual replica of the former president. The video went viral, leaving viewers confused, impressed and afraid of what deepfake technology could become.

Since the 2018 Obama deepfake, artificially created videos of politicians, celebrities and other powerful figures have flooded the internet. Some of this AI-generated media is relatively harmless, such as video game playthroughs narrated by synthetic voices of politicians including Donald Trump, Joe Biden and Barack Obama. Other AI-created videos and images, however, have caused real harm. From deepfake pornography made with celebrities’ likenesses to scammers replicating real people’s faces and voices, the capabilities of this technology are boundless and the repercussions are unknown.

On February 7, 2023, The New York Times published an article detailing how deepfake videos were being used by pro-China bot accounts to spread misinformation across the internet. It marked the first time experts had found artificially created videos being used to intentionally spread false information. The two videos, discovered by Graphika, a research firm that studies disinformation, appeared to promote the agenda of the Chinese Communist Party and undermine the United States, according to Graphika Vice President of Intelligence Jack Stubbs.

Stubbs explained to The New York Times that AI software can create “videos in a matter of minutes and subscriptions start at just a few dollars a month,” and that it “makes it easier to produce content at scale.”

Political science and technology experts at the University of Virginia (UVA) released a statement in August 2023 sounding the alarm on the technology.

“Political consultants, campaigns, candidates and even members of the general public are forging ahead in using the technology without fully understanding how it works or, more importantly, all of the potential harms it can cause,” Carah Ong Whaley, academic program officer for the UVA Center for Politics, said. “Doctored photos and video footage, candidates’ comments taken out of context; it has already been used for decades in campaigns.

“What AI does is dramatically increase the scale and proliferation, leaving us numb and, hopefully, questioning everything we see and hear about elections. For some voters, exposure to certain messages might suppress turnout. For others, even worse, it could stoke anger and political violence.”

Deepfakes and AI technology have already infiltrated the U.S. presidential race. Before dropping out of the Republican presidential primary, Florida Gov. Ron DeSantis’ campaign released a video featuring AI-generated images of former President Donald Trump hugging Dr. Anthony Fauci and kissing him on the nose. When confronted about using AI images in a campaign video, DeSantis spokespeople pointed to the Trump campaign’s own use of AI images of the Florida governor riding a rhinoceros, arguing that others were already using the technology.

“The potential to sway the outcome of an election is real, particularly if the attacker is able to time the distribution such that there will be enough window for the fake to circulate but not enough window for the victim to debunk it effectively (assuming it can be debunked at all),” wrote UVA cyber privacy expert Danielle Citron.

The Congressional Research Service has also weighed in on AI and deepfake technology, warning in an April 2023 report that other countries could use it to meddle in American elections.

“State adversaries or politically motivated individuals could release falsified videos of elected officials or other public figures making incendiary comments or behaving inappropriately,” the service reported. “Doing so could, in turn, erode public trust, negatively affect public discourse, or even sway an election.”

Artificial intelligence is a double-edged sword: it can automate tedious tasks and shorten how long they take, but it can also be a dangerous weapon against democracy and the pursuit of truth. As a society, we need to be diligent in addressing the risks associated with deepfakes and AI; otherwise, we’ll be paying for it down the road.
