
How AI Is Fueling The Age Of Mass Surveillance

In a public square in Shanghai, a camera mounted on a lamp post does not simply record what it sees. It analyzes. In a fraction of a second, it scans the face of every passerby, cross-references it against a government database, flags anyone it deems a person of interest, and routes that alert to a waiting officer. Welcome to the age of AI-powered mass surveillance—a world where the watcher never blinks, never tires, and never forgets.

For decades, mass surveillance was constrained by a simple human limitation: there were not enough eyes. Governments could point cameras at public spaces, but they lacked the manpower to watch every feed, identify every face, or connect every dot in real time. Artificial intelligence has removed that constraint. Machine learning algorithms can now process thousands of video streams simultaneously, identify individuals from a library of millions of faces, track someone’s movements across an entire city, and predict their future behavior—all without a single human analyst in the loop.

From China’s Xinjiang province to the streets of American cities, AI is transforming surveillance. The result, critics warn, is a surveillance infrastructure of unprecedented scale and intimacy—one that is being deployed by authoritarian regimes to crush dissent, adopted by democracies in ways that erode civil liberties, and exported around the world at a pace that outstrips any regulatory response.

Even the technology industry is expressing concern. In recent weeks the U.S. Department of War ended a $200 million contract with Anthropic after the AI company refused to grant the Pentagon unrestricted access to its technology for mass domestic surveillance and fully autonomous weapons. In an exclusive interview with CBS News, conducted the same day the Pentagon designated Anthropic a supply chain risk to national security, Anthropic CEO Dario Amodei argued that while private companies are currently holding the line on mass surveillance restrictions, it is ultimately Congress’s job to legislate those limits, not a matter to be settled through contract negotiations between a company and the Pentagon.

Ground Zero: Xinjiang

No place on earth better illustrates the danger of AI surveillance than China’s Xinjiang region, home to the Uyghur people, a Muslim minority who have long been viewed with suspicion by Beijing. Over the past decade, the Chinese government has transformed Xinjiang into what many observers describe as the most thoroughly monitored territory in human history.

The system is staggering in its scope. Digital checkpoints equipped with AI facial recognition cameras track Uyghurs’ movements across the province, matching their faces to photographs taken during mandatory government “health checks.” Procurement documents obtained by researchers in 2025 confirm that Chinese authorities have been acquiring software capable of automatically identifying Uyghurs in public spaces, triggering what the procurement records call a “Uyghur alarm” when one is detected outside an approved area. The surveillance is explicitly and exclusively directed at Uyghurs, not at other ethnic groups: a database specifically labeled “high-risk Uyghurs” has been documented in Shanghai government records.

Biometric data collection in the region goes far beyond cameras. Authorities have conducted compulsory mass collection of DNA samples, voice recordings, and iris scans. AI systems synthesize this data to build individual profiles tracking not just location, but social networks, religious practice, and what officials describe as “ideological reliability.”

A 2025 report by the U.S. House Select Committee on China described these technologies as helping Beijing carry out campaigns of cultural repression that some believe rise to the level of crimes against humanity.

The implications extend well beyond China’s borders. Chinese technology firms—including companies linked to the state—have marketed facial recognition platforms, data integration systems, and so-called “smart city” public security technologies to more than 80 countries. Analysts warn this export is not purely commercial, but strategic: Beijing is normalizing state monitoring of citizens as a model of governance and shifting global norms toward centralized political control.

A Threat To Democracy Itself

The danger of AI surveillance is not confined to authoritarian states. Legal scholars and civil liberties groups argue that the technology fundamentally threatens democratic governance wherever it is deployed—and that democracies are adopting it faster than their legal systems can respond.

Writing in Lawfare in May 2025, legal analysts identified three structural threats that AI surveillance poses to democratic systems. First, pervasive AI monitoring makes large-scale political organization harder, since citizens know that assembling in protest carries the risk of identification and retaliation. Second, replacing human police or military personnel with automated systems removes the possibility of human discretion—and human conscience. When soldiers hesitate before firing on civilians, it is because they possess moral agency; automated systems do not. Third, and most broadly, AI enforcement dramatically lowers the cost of panoptic surveillance, giving any government—democratic or not—the deterrent power of a massive security force without the expense of maintaining one.

The Bulletin of the Atomic Scientists observed in 2024 that 56 out of 176 countries now use artificial intelligence in some capacity for city surveillance and public security monitoring. “Frail non-democratic governments can use AI-enabled monitoring to detect and track individuals and deter civil disobedience before it begins,” the publication noted, quoting MIT economist Martin Beraja, co-author of an analysis of AI-powered authoritarian surveillance trends.

In a January essay entitled “The Adolescence of Technology,” Anthropic’s Amodei noted that sufficiently powerful AI could likely be used to compromise any computer system in the world, and could also use the access obtained in this way to read and make sense of all the world’s electronic communications (or even all the world’s in-person communications, if recording devices can be built or commandeered). “It might be frighteningly plausible to simply generate a complete list of anyone who disagrees with the government on any number of issues, even if such disagreement isn’t explicit in anything they say or do,” he wrote. “A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow. This could lead to the imposition of a true panopticon on a scale that we don’t see today, even with the CCP [Chinese Communist Party].”

Mission Creep In America

The United States is not immune. In February, the ACLU of Massachusetts warned that AI-powered surveillance technology is enabling federal and local law enforcement to erode constitutional privacy protections at an alarming pace. Automatic license plate readers now track the movements of drivers across entire states. Social media surveillance tools scan millions of posts using AI to identify targets of interest. “Nearly every part of policing is becoming automated,” the ACLU report noted, “and there are few, if any, guardrails in place.”

U.S. Immigration and Customs Enforcement (ICE) is using such tools in immigration raids, and the U.S. recently proposed that visitors applying for ESTA travel authorization submit five years of social media history, along with 10 years of email addresses and phone numbers, with their applications.

Particularly alarming is what the American Immigration Council calls “mission creep”—the quiet expansion of surveillance tools built for one purpose into instruments of broad population monitoring. Palantir’s ImmigrationOS platform, originally framed as a tool to manage immigration enforcement, integrates predictive analysis, movement tracking, and behavioral profiling in ways that could, critics argue, just as easily be directed at any group the government flags as a concern. In September 2025, the Trump administration issued a presidential memorandum instructing the Justice Department to investigate civil society organizations and activists on the political left. The databases built by companies like Palantir and Babel Street provide the infrastructure to act on such directives.

According to reporting by Brookings in late 2025, government contractors now advertise their ability to scan millions of social media posts and use AI to summarize findings for agencies including the Department of Homeland Security. The concern is compounded by the fact that a bill that would close the legal loophole allowing the government to purchase bulk personal data about Americans—including location history, political affiliations, and online activity—passed the House of Representatives in 2024 but stalled in the Senate.

The Bias Problem

Beyond the political dangers, AI surveillance carries an embedded technical flaw that amplifies its harm: it does not work equally well on everyone. Research has consistently shown that facial recognition algorithms are significantly less accurate for people who are Black, East Asian, Indigenous, or female—a direct consequence of the biased datasets on which these systems are trained. In the United States, wrongful arrests have already resulted from facial recognition misidentification.

MIT and Penn State researchers jointly published findings in 2024 showing that if large language models were used in home surveillance, they could recommend calling the police even when surveillance videos show no criminal activity. The models the researchers studied were also inconsistent about which videos they flagged for police intervention: a model might flag one video showing a vehicle break-in but not flag another showing similar activity, and models often disagreed with one another over whether to call the police for the same video. The researchers further found that some models flagged videos for police intervention relatively less often in neighborhoods where most residents are white, controlling for other factors.

The combination of mass deployment and systematic bias is particularly dangerous. When AI surveillance is used to make consequential decisions—flag a person as a threat, add them to a watch list, send a police officer to their door—errors are not just inconveniences. They are accusations.

In my book “Co-existing with AI: Work, Love and Play in a Changing World,” I suggest that Adolf Hitler would have made good use of AI in multiple ways. It would have been so much easier for him to round up people to send to concentration camps, infiltrate and block opposition media, monitor people’s speech within their own homes, and use spy drones to collect information rather than relying on neighbors to turn people in.

The Reasoning Problem

A recent study by Cornell University exposes fundamental vulnerabilities in how visual-language models reason. In the study, frontier models readily generated detailed image descriptions and elaborate reasoning traces, including pathology-biased clinical findings, for images that were never provided. The researchers term this phenomenon “mirage reasoning.” If AI is making up results without seeing anything at all, how can we depend on it for surveillance?

What Would Meaningful Oversight Look Like?

Experts across the political spectrum agree that the current legal framework is woefully inadequate for the surveillance technology that already exists, let alone what is coming. The Bulletin of the Atomic Scientists has called for democracies to establish ethical frameworks, mandate transparency, limit how mass surveillance data is used, and enshrine hard limits on government use of AI for social control. Export controls should scrutinize and restrict technology sales to regimes engaged in human rights abuses.

In March 2024, all 193 UN member states unanimously adopted a resolution affirming that human rights must be respected throughout the lifecycle of AI systems, calling on governments to refrain from deploying AI that cannot operate in compliance with international human rights law. The resolution represents a global normative consensus. Translating it into binding law, with real enforcement mechanisms, is the unfinished work that legislators on every continent have yet to complete.

The stakes could not be higher. A surveillance state that can identify any individual in any public space, track their movements across a lifetime, profile their beliefs, and predict their behavior represents not merely a threat to privacy—it represents a threat to the conditions that make political freedom possible. History offers plenty of examples of what governments do when they achieve that kind of knowledge about their citizens. The technology has arrived. The question is whether democratic societies will muster the will to govern it—before the algorithm learns too much about all of us.

Kay Firth-Butterfield, one of the world’s foremost experts on AI governance, is the founder and CEO of Good Tech Advisory and the author of Co-existing with AI: Work, Love and Play in a Changing World (Wiley). Until recently she was Head of Artificial Intelligence and a member of the Executive Committee at the World Economic Forum. Last February she won the TIME100 Impact Award for her work on responsible AI governance. Firth-Butterfield is a barrister, former judge and professor, technologist and entrepreneur, and Vice-Chair of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. She was part of the group which met at Asilomar to create the Asilomar AI Ethical Principles, is a member of the Polaris Council for the Government Accountability Office (USA), the Advisory Board for UNESCO International Research Centre on AI, ADI and AI4All. She sits on the board of EarthSpecies and regularly speaks to international audiences addressing many aspects of the beneficial and challenging technical, economic, and social changes arising from the use of AI. This article is part of a series of exclusive columns that she is writing for The Innovator.
