Focus On AI

AI-Powered Disinformation And The Liar’s Dividend

As Brazil grappled with a flood of online disinformation around its 2022 presidential election, the nation’s Supreme Court decided to give one judge sweeping powers to order social networks to take down content he believed threatened democracy.

The judge carried out an aggressive campaign, forcing social networks to pull down thousands of posts. Some thought the nation had found a possible solution to one of the most vexing problems of modern democracy: disinformation.

Then, on August 30, that justice, Alexandre de Moraes, blocked the social network X across Brazil because its owner, Elon Musk, had ignored his court orders to remove accounts and had then closed X’s office in the country. The move led even supporters of Brazil’s policy to ponder whether the country had gone too far.

As The New York Times reported, Brazil’s yearslong fight against the Internet’s destructive effect on politics, culminating in the current blackout of X, shows the pitfalls of a nation deciding what can be said online: “do too little and allow online chatter to undermine democracy; do too much and restrict citizens’ legitimate speech.”

The Times rightly points out that other governments worldwide are likely to be watching as they debate whether to wade into the messy work of policing speech or leave it to increasingly powerful tech companies that rarely share a country’s political interests.

The World Economic Forum has ranked disinformation as one of the top risks of 2024. The size of the problem, and the question of what to do about it, is why I decided to write a follow-up to my July column on deepfakes, with the help of deepfake and synthetic media expert Henry Ajder. Although the two issues are often linked, disinformation deserves to be treated as a category of its own.

AI has already played a significant role in disinformation campaigns across the world, from attempts to manipulate Indian voters to efforts to alter perceptions about the war in Ukraine. When abused for political manipulation, deepfakes’ capacity to fabricate convincing disinformation could depress voter turnout, sway elections, polarize societies, discredit public figures, or even inflame geopolitical tensions.

“So far as we know, there haven’t been any cases where disinformation has changed the result of an election, or moved the needle,” says Henry.

It is only a matter of time.

Currently, AI-generated disinformation is produced piece by piece and requires human intervention: someone needs to tell the AI what to do, give it direction, and pick a narrative target to exploit. But what if the AI itself could decide whom and what to target, and how to craft the most viral content for maximum spread?

It already can. A website called CounterCloud.io hosts an unlisted YouTube video in which the narrator, a developer who uses a female voice and the name Nea Paw, explains how they devised an experiment to engineer a large language model (LLM) to scrape, generate, and distribute disinformation at scale without human intervention, at a cost of only $400.

The feat is achieved by a cloud server running an AI that constantly scrapes the Internet for content. Via a gatekeeper module, the AI decides which content is worth targeting. Once it selects a piece of content, it writes a counter-article, attributes it to a fake journalist profile, and posts it to the CounterCloud website, along with images and sound clips. It also generates fake comments from fake readers below some of the articles to make it seem as if there is an audience. The AI then goes to Twitter, searches for relevant accounts and tweets, and posts links to the AI-generated articles, followed by posts that look like user commentary.

To avoid detection, the AI varies its style and methods of rebuttal, including fabricating historical events and casting doubt on the accuracy of the original arguments.

The developer gave CounterCloud a set of values and ideologies to promote and oppose, and used a curated list of RSS feeds and Twitter aliases to align content with the system’s ideology. The method of generating counter-content proved effective, and within a month a fully autonomous system had been developed. To ensure no harm was done, the entire experiment was locked down.

If one developer can create AI-generated disinformation at scale, others with nefarious aims will surely follow, further exacerbating a growing problem known as the “liar’s dividend”: when people can’t discern the truth, they stop trusting what they see, even when it is authentic. The theory is that as people learn that deepfakes are increasingly realistic, false claims that real content is AI-generated become more persuasive too. In the UK, for example, a political candidate had some kind of post-production filtering or editing applied to the profile image on his Twitter post. Because people speculated that AI had been used to generate the photo, some believed the candidate was not a real person. (He is.)

Synthetic Reality Creates Real-World Harms

Many years ago, I recall speaking to Stuart Russell, a Professor of Computer Science at the University of California, Berkeley, and Director of the Center for Human-Compatible AI and the Kavli Center for Ethics, Science, and the Public. He sounded an early warning that deepfakes play to humans’ tendency to distrust, meaning that anyone can simply call something a deepfake and generate doubt. We saw this happen recently with regard to the crowd size at a Harris-Walz rally and Taylor Swift’s endorsement of a particular political party. There are many other examples of people using AI to distort truth and undermine trust in media and in political processes, causing different types of harm.

“First order harms” are those which Henry describes as “serious harms or serious challenges, which legislators and businesses are really prioritizing”. But, he says, there is also a group of “second order harms”, or challenges. One of these is the flood of synthetic spam, or AI-generated slop as it is being called online, where the increasing accessibility and ubiquity of this content is leading to a breakdown in the quality of what is online, contributing to what some people refer to as the ‘dead Internet theory’: an online conspiracy theory which asserts that the Internet now consists mainly of bot activity and automatically generated content.

Some years ago, I was at Tim O’Reilly’s Foo Camp, where we discussed the future possibility that our ‘friends’ and audience on social media would be very convincing bots. With the advent of generative AI programs that write your social media posts, this could be coming true.

Henry explains that there are “many applications of generative AI which are not necessarily explicitly malicious or as harmful as, let’s say someone having their likeness swapped into pornographic footage, but are still changing the way that we think about the world and the way that we interact with each other and the content that we experience in an increasingly digital first world”. There are meaningful concerns about what an increasingly synthetic reality means for the future of communication, content, media and humanity, says Henry. He cites the growth of AI girlfriends and the flood of AI-generated music on Spotify as cases in point.

Searching For Solutions

How can we curtail these problems? Several approaches are being tested.

Using technology to detect disinformation is one of them, but it is by no means a panacea. “While there is certainly a place for detection in the response framework, unfortunately we are seeing some difficult dynamics emerging where detection tools are not as robust or reliable as we would like them to be,” says Henry. “Unfortunately, this means people don’t understand the limitations of detection and it is already causing more harm than good in certain cases.” For example, some real images and videos of disasters or conflicts recorded by legitimate media outlets have been wrongly flagged as fake by less reliable detection tools. That’s why Henry tells me that such tools “shouldn’t be treated as a godlike binary yes or no answer that can then definitively tell us what is real and what is not.”
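To make that point concrete, here is a minimal, purely hypothetical Python sketch of how a detector’s raw score could be surfaced as a graded signal with an explicit “inconclusive” band rather than a binary real-or-fake verdict. The thresholds and verdict wording are my own illustrative assumptions, not the behavior of any particular tool.

# Hypothetical sketch: treating a deepfake detector's raw score as a graded
# signal rather than a definitive yes/no verdict. The thresholds and the
# wording of the verdicts are illustrative assumptions, not any vendor's API.

def interpret(score: float, low: float = 0.2, high: float = 0.8) -> str:
    """Map a raw detection score in [0, 1] to a cautious, human-readable verdict."""
    if score >= high:
        return "likely AI-generated -- corroborate with provenance and context"
    if score <= low:
        return "no strong signs of manipulation -- still not proof of authenticity"
    return "inconclusive -- do not publish a real/fake claim on this score alone"

if __name__ == "__main__":
    # Example scores a detection model might return for three different files.
    for score in (0.05, 0.55, 0.93):
        print(f"score={score:.2f}: {interpret(score)}")

The design point is simply that a middle band of scores should trigger further checking, not a headline.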

Policymakers around the globe are examining ways to design and implement watermarking techniques that enable the detection and tracing of AI-generated content, another means of fighting fake information. At the moment, the main limitation of this technique is resilience: invisible watermarks can be destroyed by things like compression or editing. Another drawback is that watermarking forms part of what is called an adversarial dynamic, in which bad actors constantly try to break or circumvent safety measures, much as cyber criminals find new ways to hack into networks.

Content provenance is a third approach. It involves creating a transparent label for media which shows “how it was created, whether it was captured on a device, whether it was AI generated and also provides information about how it might have been edited, and other details that help us basically to understand at a glance what this piece of media actually is and where it’s come from,” explains Henry. The content’s provenance is cryptographically bound to the media using standards called C2PA and Content Credentials, developed by a coalition that includes many leading tech companies, camera manufacturers and news platforms, amongst others. Provenance allows a bottom-up, secure process which bypasses some of the challenges of unreliable detection technologies and the limited durability of watermarking, says Henry. However, to work it relies on broad adoption by social media platforms, news organizations and others, and it won’t necessarily stop people who don’t want to believe the media they’re viewing is real from claiming that the standard itself is corrupt.
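As an illustration of how provenance can be inspected in practice, the rough Python sketch below shells out to c2patool, the open-source command-line tool for reading C2PA manifests, and reports whether a file carries Content Credentials. It assumes c2patool is installed and prints the manifest as JSON; exact flags and output formats may vary between versions.

# Rough sketch: checking whether a media file carries C2PA Content Credentials
# by shelling out to the open-source c2patool command-line tool. Assumes
# c2patool is installed and prints the manifest store as JSON for files that
# carry credentials; exact flags and output may differ between versions.

import json
import subprocess
import sys

def read_content_credentials(path: str):
    """Return the parsed manifest store if the file has Content Credentials,
    otherwise None. Any c2patool error is treated as 'no verifiable provenance'."""
    try:
        result = subprocess.run(
            ["c2patool", path], capture_output=True, text=True, check=True
        )
        return json.loads(result.stdout)
    except (FileNotFoundError, subprocess.CalledProcessError, json.JSONDecodeError):
        return None

if __name__ == "__main__":
    manifest = read_content_credentials(sys.argv[1])
    if manifest is None:
        print("No verifiable Content Credentials found -- absence is not proof of manipulation.")
    else:
        print("Content Credentials present; review the manifest for capture and edit history.")

Note that, as Henry’s caveat suggests, the absence of credentials says nothing definitive about a file’s authenticity; the label only helps when it is present and widely adopted.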

It is clear that the challenges deepfakes and synthetic media pose can’t be solved by technology alone. Many governments are considering legislation mandating the disclosure of AI-generated content. The EU and China have already passed such legislation. Others, including the UK, the U.S. and India, are considering introducing legislation to similar effect.

Such regulation helps people to understand what responsible design and development look like, be they makers or consumers. It also provides for legal penalties against malicious actors. However, it is very hard to catch perpetrators, so the emphasis is likely to be on holding big companies responsible, as Brazil has done with X.

There is no silver bullet for mitigating the threat of disinformation in digital spaces. A multilayered approach combining technological and regulatory means with heightened public awareness is necessary, says Anna Maria Collard of KnowBe4, an organization that specializes in raising awareness of threats to information security and training users to protect themselves and their institutions from those threats. It will require global collaboration among nations, organizations and civil society, as well as significant political will, Collard wrote in an article for the World Economic Forum.

I couldn’t agree more. The stakes are high and action is urgently needed, but as Brazil’s pioneering efforts demonstrate, disinformation will not be an easy problem to fix.

Kay Firth-Butterfield, one of the world’s foremost experts on AI governance, is the founder and CEO of Good Tech Advisory. Until recently she was Head of Artificial Intelligence and a member of the Executive Committee at the World Economic Forum. In February she won The Time100 Impact Award for her work on responsible AI governance. Firth-Butterfield is a barrister, former judge and professor, technologist and entrepreneur and Vice-Chair of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. She was part of the group which met at Asilomar to create the Asilomar AI Ethical Principles, is a member of the Polaris Council for the Government Accountability Office (USA), the Advisory Board for UNESCO International Research Centre on AI, ADI and AI4All. She sits on the Board of EarthSpecies and regularly speaks to international audiences addressing many aspects of the beneficial and challenging technical, economic, and social changes arising from the use of AI. This is the fifth of a planned series of exclusive columns that she is writing for The Innovator.
