Interview Of The Week: Meredith Whittaker, AI Ethics Expert

Meredith Whittaker is President of Signal, an encrypted messaging service, and a member of the Signal Foundation Board of Directors. She has over 17 years of experience in tech, spanning industry, academia, and government. Before joining Signal as President, she was the Minderoo Research Professor at New York University (NYU), and served as the Faculty Director of the AI Now Institute, which she co-founded. Her research and scholarly work have helped shape global AI policy and shift the public narrative on AI to better recognize the surveillance business practices and concentration of industrial resources that modern AI requires.

Prior to NYU, Whittaker worked at Google for over a decade, where she led product and engineering teams, founded Google’s Open Research Group, and co-founded M-Lab, a globally distributed network measurement platform that now provides the world’s largest source of open data on internet performance. She was one of the core organizers pushing back against the company’s insufficient response to concerns about AI and its harms, and was a central organizer of the Google Walkout.

She has advised the White House, the FCC, the City of New York, the European Parliament, and many other governments and civil society organizations on privacy, security, artificial intelligence, Internet policy, and measurement. Whittaker recently completed a term as Senior Advisor on AI to the Chair of the U.S. Federal Trade Commission. She is scheduled to speak on a panel moderated by Innovator Editor-in-Chief Jennifer L. Schenker at the Viva Technology conference taking place in Paris May 23-25. She recently spoke to The Innovator about the panel’s topic: “Can We Have It All: Safe, Profitable, and Ethical AI?”

Q: In an April 9 speech, European Commission Vice-President Margrethe Vestager said: “Probably never in history have we been confronted with a technology that has this much power, but no predefined purpose. Neither good, nor bad in itself. It all depends on how we, humans, shape it and use it. Just like in Oppenheimer’s time, we are faced with what AI researchers call the “alignment problem”: when technology has the power to both serve and destroy us, how do we channel its development? How do we ensure this technology reflects the societies that we want to have, instead of amplifying the flaws, and injustices, of the ones we already have?” How would you answer these questions?

MW: First, we need to look closely at the pronoun ‘we’ and at who is included and who’s excluded from this collective. Answering that question in the context of tech and the tech industry reveals that currently only a small handful of large firms, largely based in the U.S. and China, have the scarce resources and ability required to develop and deploy AI systems at scale. These are the big tech corporations that have compute, data, market reach, and an ability to attract and retain highly skilled talent. And they make sensitive social decisions (about whom AI serves and whom it subjects, what it does and does not do, and to whom it is and is not licensed) with the core objectives of profit and growth, objectives that do not always align with socially beneficial outcomes. Given this dynamic, meaningful shifts toward more democratic governance, more socially minded development and decision making around AI, or any form of ‘alignment’ will first necessitate significant structural changes that work to reallocate decision-making power away from the scant handful of AI giants aiming to profit from these technologies and toward those most at risk from their harmful, extractive application.

Q: The Innovator recently published an interview with Hamilton Mann, Group Vice President of Digital Marketing and Digital Transformation at Thales, a France-based global aerospace-and-defense company. Mann sits on the Advisory Board of the Ethical AI Governance Group (EAIGG), a diverse community of AI practitioners focused on democratizing the growth of ethical AI through best practices and innovations in AI development, deployment, and governance. He believes it will be not only possible but necessary to ensure that algorithms uphold not only a form of intelligence but also a form of integrity over time. This would challenge the way we ‘teach’ AI, requiring that AI learn from data not only meaning but also the degree of its potential impact and any possible misalignment with a given value model. What is your take on this? Can we train technology to do no evil?

MW: Technology is built by humans and controlled by humans, and we cannot talk about technology as an independent agent acting outside of human decisions and accountability–this is true for AI as much as anything else. The integrity that Mann rightly envisions for AI cannot be understood as a property of a model, or of a software system into which a model is integrated. Such integrity can only come via the human choices made, and guardrails adhered to, by those developing and using these systems. This will require changed incentive structures, a massive shift toward democratic governance and decision making, and an understanding that those most likely to be harmed by AI systems are often not ‘users’ of the systems, but subjects of AI’s application ‘on them’ by those who have power over them–from employers to governments to law enforcement. To truly ensure that AI systems are deployed in ways that have integrity, and uphold a dignified and equitable social order, those subject to AI’s use by powerful actors must have the information and power to determine what AI systems with ‘integrity’ mean, and the ability to reject or contest their use.

Q: Organizations applying advanced AI models to their businesses have weak or non-existent guardrails, according to The Artificial Intelligence Index Report 2024, published on April 15 by the Institute for Human-Centered Artificial Intelligence at Stanford University in California. The report cites a Responsible AI (RAI) survey of 1,000 companies conducted by Stanford researchers in collaboration with Accenture to gauge the current level of RAI adoption globally and allow for a comparison of RAI activities across 19 industries and 22 countries. The survey’s aim was to develop an early snapshot of current perceptions around the responsible development, deployment, and use of Generative AI, and how this might affect RAI adoption and mitigation techniques. A significant number of respondents admitted that they had only some – or even none – of these guardrails in place. What sort of dangers does this pose for society, and what can or should be done about it until some sort of global regulations and standards are put into place?

MW: The centralized and largely unaccountable power of the AI industry poses dangers to democracy and geopolitical stability, and the prospects for meaningful regulation appear particularly dim, even as vague calls for ‘regulation’ or an ‘international body to focus on AI’ echo from tech companies and pundits. The need for democratic oversight and meaningful checks on these companies, and on the application of the technology they develop and profit from, has never been more acute. Sadly, regulatory movement has been halting and full of loopholes for dangerous applications (military, law enforcement), and this shows no signs of changing, especially given that regulators seem hesitant to touch the core surveillance business model, from which many of these pathologies flow.

Q: There is concern that large language models are being dominated by a handful of Silicon Valley and Chinese players. What problems does this pose? And where does Europe fit into this picture? 

MW: This is not just a concern, this is the reality. The AI industry is dominated by a handful of monopoly players, largely based in the US (whose dominant cloud companies have about 70% of the global market). The best hope for AI “startups” in this environment is to be acquired or enter into another kind of encumbered partnership with one of the US-based ‘hyperscalers.’ Meaning, there is no path to market that isn’t through the current giants. The EU and others need to recognize this, and seriously prepare for a future where these largely unaccountable and obscure companies, which provide core infrastructure for computation globally in addition to licensing powerful AI services, could be pressured by a more hostile, more isolationist US government.

Q: In the early days of the Internet, the mantra was “Privacy is dead. Get over it.” Is it game over, or do you believe we can regain control of our data and our privacy in the age of GenAI? And you are speaking at VivaTech in May: what will be your main message to the audience?

MW: The ‘Internet’ is not even a century old, and the current version of commercial networked computation is only a few decades old, emerging out of the 1990s and the neoliberal zeitgeist of that time, which saw the market as the rightful arbiter of social and political life and saw regulations–even those that would protect privacy and curb massive corporate surveillance–as inhibiting this free market. Out of the 1990s the surveillance business model emerged, and the massive amounts of data, compute, and monopolistic platforms that we now confront were shaped during this time. We cannot treat this peculiar, contingent business model as somehow natural or inevitable. We can regain control–of course we can!–and we can make necessary and meaningful changes to these infrastructures. Indeed, a moderate reading of GDPR would support a ban on surveillance advertising. The problem is one of political will and enforcement, not a lack of solutions or ideas. Too often, political leaders behave more like U.S. tech fanboys than stewards of their own citizens and interests, and they fail to question the hype that tech companies deploy in service of promoting their products as synonymous with human and scientific progress.

I look forward to the day when one or another bold EU leader makes the iconic decision to truly confront these giants, and the asymmetric power they wield from within the US. People weary of Big Tech’s slippery claims and outsized control have been waiting a long time for such a moment, and when it happens the cheering from the proverbial crowd will be deafening.

About the author

Jennifer L. Schenker

Jennifer L. Schenker, an award-winning journalist, has been covering the global tech industry from Europe since 1985, working full-time, at various points in her career, for the Wall Street Journal Europe, Time Magazine, the International Herald Tribune, Red Herring, and BusinessWeek. She is currently the editor-in-chief of The Innovator, an English-language global publication about the digital transformation of business. Jennifer was voted one of the 50 most inspiring women in technology in Europe in 2015 and 2016 and was named by Forbes Magazine in 2018 as one of the 30 women leaders disrupting tech in France. She has been a World Economic Forum Tech Pioneers judge for 20 years. She lives in Paris and has dual U.S. and French citizenship.