A fake expletive-laden video of U.S. President Joe Biden’s July 21 announcement of his decision to leave the race began circulating on X almost immediately after the news broke.
PBS News, whose logo was featured in the video, issued a statement describing the video as a “deepfake,” adding, “PBS News did not authorize the use of this video, and we do not condone altering news video or audio in any way that could mislead the audience.”
With election season underway and artificial intelligence evolving rapidly, image and voice manipulation have become issues of great concern. A recent report from Moody’s warns that generative AI and deepfakes could sway voters, impact the outcome of elections, and ultimately influence policy making, which would undermine the credibility of U.S. institutions.
From manipulating elections to sowing confusion in Ukraine and the Israel-Hamas conflict, deepfakes are making a meaningful impact on society, notes deepfake and synthetic media expert Henry Ajder. Henry identified the challenge of deepfakes and synthetic media over six years ago and was the first to start mapping their use, well before the explosion of interest in generative AI. In our wide-ranging conversation on deepfakes, he noted that there has been an uptick both in the use of deepfakes and in the volume of synthetic media being created, and that it is becoming more and more realistic.
Compounding these challenges, unreliable deepfake detection tools are producing false positives and false negatives, sowing doubt about authentic images and lending false confidence to AI-generated ones.
As realism improves and telltale flaws are reduced, Henry believes these challenges will only become more acute, particularly in rapidly evolving and chaotic environments such as war zones and elections.
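A bit of back-of-the-envelope arithmetic shows why unreliable detection cuts both ways. The sketch below uses assumed figures chosen purely for illustration (not measurements of any real detector) to show that when fakes are rare, even a detector that is right 90% of the time flags mostly authentic media:

```python
# Illustrative sketch (all figures assumed): why a seemingly accurate
# deepfake detector still misleads when genuine media vastly outnumbers fakes.

def detector_outcomes(total_items, fake_share, sensitivity, specificity):
    """Return (true_pos, false_pos, true_neg, false_neg) counts."""
    fakes = total_items * fake_share
    genuine = total_items - fakes
    true_pos = fakes * sensitivity            # fakes correctly flagged
    false_neg = fakes - true_pos              # fakes that slip through
    false_pos = genuine * (1 - specificity)   # authentic media wrongly flagged
    true_neg = genuine - false_pos
    return true_pos, false_pos, true_neg, false_neg

# Assume 1,000,000 items of which 1% are fakes, scanned by a detector that is
# "90% accurate" in both directions -- assumptions for illustration only.
tp, fp, tn, fn = detector_outcomes(1_000_000, 0.01, 0.90, 0.90)
precision = tp / (tp + fp)
print(f"Share of flagged items that are actually fake: {precision:.1%}")  # ~8.3%
```

Under these assumed numbers, more than nine in ten flagged items are authentic, which is precisely how false positives erode trust in real media while false negatives let fakes through.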
What’s more, it’s horrifyingly easy to make deepfake pornography of anyone thanks to today’s Generative AI tools. A 2023 report by Home Security Heroes (a company that reviews identity-theft protection services) found that it took just one clear image of a face and less than 25 minutes to create a 60-second deepfake pornographic video—for free.
The world took notice of this new reality last January when graphic deepfake images of Taylor Swift circulated on social media platforms, with one image receiving 47 million views before it was removed. But people far from the public spotlight have been victimized in the same manner for years. Henry’s 2020 investigation into a Telegram bot that let anyone easily and synthetically strip women of their clothes found that it drove a massive increase in the number of victims and shifted the targets away from the celebrities who were previously singled out, towards private individuals.
According to the 2023 report, 99% of victims are women or girls. Women in general are not well served by Generative AI, a topic I will return to in future columns, but deepfakes have been a disaster for them. We should brace ourselves for more targeting of female politicians, and of women in general, through the use of deepfakes. “The use of women’s faces and bodies in non-consensual deepfake pornography is still arguably the biggest harm in terms of victim numbers,” says Henry. Millions of ordinary women and girls are affected, which can lead to deep trauma and, in some cases, suicide. The deepfake epidemic shows no sign of stopping. The UK’s Online Safety Act is one of the few attempts to stem the tide.
In an interview about deepfake porn with the Institute of Electrical and Electronics Engineers (IEEE), Nadia Lee, CEO of the Australia-based startup That’sMyFace, explained that her company is first offering visual-recognition tools to corporate clients who want to be sure their logos, uniforms, or products aren’t appearing in pornography (think, for example, of airline flight attendants). But her long-term goal is to create a tool that any woman can use to scan the entire Internet for deepfake images or videos bearing her own face.
“Our generation is facing its own Oppenheimer moment,” she told IEEE. “We built this thing”—that is, Generative AI—“and we could go this way or that way with it.”
To Lee’s point, every technology can be used for good or for evil, but I would argue that in the case of deepfakes there is no strong positive use case. The technology can be used for face swapping and other kinds of synthetic media in entertainment, memes, and satire. In my view, this small and niche beneficial use of deepfakes does not outweigh the clear damage the technology is doing to individuals, society, and business.
The term deepfake emerged organically on Reddit in late 2017, where it was used exclusively to refer to an open-source piece of software for swapping female celebrities’ faces into pornographic footage. This underscores my point that the technology’s use never had a driver that was good for humanity.
As time has gone on, the term deepfake has expanded to cover different kinds of AI-generated synthetic media, from voice audio and music to images and various forms of video manipulation. For example, vishing (voice phishing), in which someone’s voice is cloned to impersonate them on a call, is a growing problem for individuals and businesses alike; the cloned voice is used to extract money or confidential material from the victim.
Another example is thieves using real-time face-swapping tools on video calls to mask their identity, apply for jobs, and then disappear once they have collected the sign-on bonus.
There are also cases of people using fully AI-generated avatars in video calls, such as a reported case in Hong Kong, where an entire Zoom call was allegedly populated by AI avatars, leading the only human in the room to part with $25 million.
In another attempt to access confidential information, thieves attempted to deepfake the CEO of WPP, one of the world’s largest advertising agencies.
Some attempts to curtail deepfakes have been made in the U.S., EU, China, UK, and Australia, but in many countries even children are not protected against bullying by classmates using deepfakes, often doctored pornographic images. Unfortunately, legislation needs to be global, or perpetrators will simply hide in countries without enforcement.
Regulators also have a role to play here, as many of them have existing powers that can be used to control deepfake abuses. Julie Inman Grant, Australia’s eSafety Commissioner, for example, has used her powers successfully against deepfakes and cyberbullying.
The need for effective regulation is urgent. Several studies show that distinguishing between real and synthetic media is effectively a coin toss for humans. Many deepfakes are achieving parity, or near parity, with authentic voice audio, images, and music. The models used to create deepfakes and synthetic media have also become much more efficient, in terms of both data and compute requirements. A good example is Microsoft’s VALL-E 2, whose developers claim it can generate a highly realistic clone of an individual’s voice from just three seconds of audio, notes Henry.
The emergence of smaller, fine-tuned models such as Stable Diffusion, a deep-learning text-to-image model based on diffusion techniques that can run on devices as modest as a laptop, is further driving the democratization of the tools for creating AI-generated synthetic media.
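To make concrete how low the barrier has become, here is a minimal sketch of generating an image locally with the open-source Hugging Face diffusers library. The checkpoint name and settings are illustrative examples of widely available Stable Diffusion weights, not a recommendation; a benign landscape prompt is used here, since the point is simply how few lines stand between anyone and photorealistic synthesis.

```python
# Minimal sketch: running a Stable Diffusion text-to-image model locally
# with the open-source Hugging Face `diffusers` library.
# The model ID and settings below are illustrative, not a recommendation.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # an example of widely mirrored open weights
    torch_dtype=torch.float16,          # half precision to fit consumer GPUs
)
pipe = pipe.to("cuda")  # or "mps" on Apple Silicon; use float32 on a plain CPU

# A single call turns a text prompt into a photorealistic image.
image = pipe("a photo of a mountain lake at sunset").images[0]
image.save("output.png")
```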
These tools remove the need for expertise to operate them, putting them into the hands of everyone from school bullies to extremists on social media and fraudsters attacking businesses.
The important point for everyone to take away from this column is that no one is safe, and the technology to identify deepfakes is less available than the technology to make them. Robust training of employees is needed, but even then the technology is so good that it is hard to blame someone for not recognizing a deepfake. Henry and I will discuss disinformation, the other principal malicious use of deepfakes, in my next column.
Kay Firth-Butterfield, one of the world’s foremost experts on AI governance, is the founder and CEO of Good Tech Advisory. Until recently she was Head of Artificial Intelligence and a member of the Executive Committee at the World Economic Forum. In February she won the TIME100 Impact Award for her work on responsible AI governance. Firth-Butterfield is a barrister, former judge and professor, technologist and entrepreneur, and Vice-Chair of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. She was part of the group that met at Asilomar to create the Asilomar AI Ethical Principles, and is a member of the Polaris Council for the Government Accountability Office (USA), the Advisory Board for the UNESCO International Research Centre on AI, ADI, and AI4All. She sits on the Board of EarthSpecies and regularly speaks to international audiences addressing many aspects of the beneficial and challenging technical, economic, and social changes arising from the use of AI. This is the fifth of a planned series of exclusive columns that she is writing for The Innovator.