
Interview Of The Week: Nicolas Moës, AI Governance Expert

Nicolas Moës is an economist by training who focuses on the impact of General-Purpose Artificial Intelligence (GPAI) on geopolitics, the economy and industry. He is the Director for European AI Governance at the global think tank The Future Society, where he studies and monitors European developments in the legislative framework surrounding AI. His current focus is the drafting of the EU AI Act and its enforcement mechanisms.

Moës also serves as an expert at the OECD.AI Policy Observatory in the Working Groups on AI Incidents and on Risk & Accountability, and is involved in AI standardization efforts as a Belgian representative on the International Organization for Standardization’s SC42 and CEN-CENELEC’s JTC 21 committees on Artificial Intelligence.

Prior to The Future Society, he worked at the Brussels-based economic policy think tank Bruegel on EU technology, AI and innovation strategies. His publications have focused on the impact of AI and automation, including The Future Society’s blueprint for an EU AI Office. He has also carried out research on global trade and investment, EU-China relations and transatlantic partnerships.

Moës completed his Master’s degree (M.Phil.) in Economics at the University of Oxford with a thesis on institutional engineering for resolving the tragedy of the commons in global contexts. He recently spoke to The Innovator about EU and global efforts to ensure AI safety.

Q: AI Safety was a big news story this week, but the EU has been working on this issue for some time. Could you speak about those efforts and why they are more critical than ever?

NM: There is some tension about how to govern AI. Most of my work is focused on the EU AI Act, which provides one of the strongest course corrections proposed to date to ensure that industry is more responsible than it is now. We are working on two aspects of AI governance: one is focused on obligations and the other on making sure that the requirements and rules are enforced. There are a lot of mechanisms we could use to enforce the EU AI Act; an EU AI Office is one of them. Our research has focused on what the design and powers of that office should be.

Q: Why is an EU AI Office necessary?

NM: The tech industry is not consistent. Tech companies publicly say there are significant and widespread AI risks, in particular with generative and general-purpose AI, and that they welcome oversight, but then they refuse third-party scrutiny of their work or to cede any of their power to society. Their argument is that government doesn’t have the expertise, so each company should be left to assess its own safety, behind closed doors. My concern is that this is like asking students to grade their own homework. Look at the AI Safety Summit that was organized in the UK this week. One of the key questions was why there were so few independent civil society representatives. The outcome is therefore some self-imposed, self-assessed commitments. There is a need for third-party scrutiny to be able to understand the models, cross-examine the evidence and stop some of the dangerous and reckless developments. The AI Office is a first important step in that direction, notably because it also comes with a pool of independent researchers to be deployed for compliance controls.

Q: Former Google CEO Eric Schmidt argued in an October 19 op-ed in the Financial Times, which he co-wrote with Mustafa Suleyman, co-founder of Inflection and DeepMind, that there is a need for an International Panel on AI Safety (IPAIS) modeled after the Intergovernmental Panel on Climate Change. The IPAIS, built on broad international membership, would regularly and impartially evaluate the state of AI, its risks, potential impacts and estimated timelines, and would be staffed and led by computer scientists and researchers rather than political appointees or diplomats.

NM: It’s too weak. The idea behind it is ‘There are risks, so let’s study the risks.’ It’s like saying the room is on fire, so let’s study the fire. We need, in parallel, a stronger first-response approach to mitigate the harms already occurring today at large scale with a complete lack of accountability. When we started to work on a code of conduct for industry in 2022, we began convening workshops with some of the technical people at Big Tech companies. What is interesting is that when we were interacting with people at a lower level, we discovered that their concern is real: when AI systems become fully autonomous, we lose control; when the most powerful AI systems are accessed by malicious actors, they could cause a pandemic or nuclear war. Some employees are depressed, others are quitting over what is going on. What is frustrating is that 58% of faculty working on AI’s social impact are funded by Big Tech, so the public narrative is divisive: those concerned about one type of harm or the other spend more time attacking each other than addressing the fundamental issues or holding industry accountable. At the same time, the PR and government affairs people at companies like OpenAI and Anthropic are emphasizing dangers such as AI cyber weapons and misuse, or major accidents during the training and testing of these opaque models, yet they refuse to establish basic quality management systems, know-your-customer policies or major-accident prevention policies. They also divert attention away from addressing risks that today’s AI systems already pose, including their tendency to inject bias, spread misinformation, threaten copyright protections and weaken personal privacy. What’s more, in the U.S. these tech companies are telling government ‘please regulate us’ and in the EU they are lobbying very hard against this, because they know that, contrary to the U.S., the EU is capable of coming up with legislation that would actually involve real oversight.

Q: How does this impact corporates?

NM: An important aspect of the political economy is often forgotten: there are fewer than a dozen providers of very capable foundation models: Microsoft, Google/Google DeepMind, OpenAI, Anthropic, Meta and maybe Baidu. Smaller companies like Mistral in France are attracting funding, but they have nowhere near the sustainable traction and product lines of the larger players and rely on heavy communications campaigns. So, the models that Europe’s traditional companies will be using for a while are all foreign. The providers of these foundation models want to push all responsibility for accidents and harms caused by violations of fundamental rights downstream, i.e. onto the traditional companies. My message to EU companies is: wake up. If we do not regulate foundation models upstream, the liability by default will fall on Europe’s established companies, which is both unfair and counterproductive, economically speaking. Unfair, because they would not be a cause of the risk: the risk arises during the design and development of the models, where the lack of built-in safeguards makes most of these models brittle. Counterproductive, because there is no way that the downstream has any control over the design features of the model to limit the risk: they can’t do anything about what is being decided when it comes to the training datasets, copyrighted material, bias or whether the model can design bioweapons or manipulate kids. All these are decisions made by the handful of upstream providers mentioned earlier. What’s more, some of these factors can be changed at will by the original provider of the foundation models through updates, which, by the way, is a violation of procurement rules at many European companies. What we are talking about is a clash of cultures: at the U.S. tech/Silicon Valley companies it is literally ‘move fast and therefore break things’. In the European market it is about trustworthiness, reliability, safety and high quality. Moreover, there is so much power concentrated in so few hands that, to redress the bargaining power, a regulatory framework is needed. The big AI companies are the kingmakers, even when it comes to chip and cloud providers: no customer on its own can impose its will.

Q: What will the EU AI Office look like and how might it enforce effective oversight?

NM: One of the key questions for this AI Office is whether a legal entity that is independent from other teams in the Commission is necessary. Another question is how to attract talent to staff the EU AI Office. One of the things being discussed is reusing the expertise pulled together for ECAT, the European Centre for Algorithmic Transparency, which was created for the Digital Services Act, or from other pools of experts created in the context of digital policies. These people are trusted experts who are vetted and considered independent and talented enough to do red team work [evaluating the models and potential dangers]. The financing model will also be very important. It will need to be independent.

In October the Spanish presidency of the EU made its proposal for the AI Office: any international issues around foundation models and GenAI models, and enforcement issues in big cases involving multiple jurisdictions, would be the remit of the EU AI Office. It would only get involved at the local level if there were disagreements and national authorities needed advice. The next step is on December 6, when the EU AI Act co-legislators decide on the final details of the governance. As the UK did not deliver very strong output from the AI Safety Summit in terms of enforcement and rules, it feels like policy is being shaped by industry and civil society is not being heard. The EU has a moral obligation to step up: create simple but concrete rules delineating clearly what levels of safety, risk mitigation, reliability and quality upstream providers of foundation models should achieve when developing their models.

About the author

Jennifer L. Schenker

Jennifer L. Schenker, an award-winning journalist, has been covering the global tech industry from Europe since 1985, working full-time, at various points in her career, for the Wall Street Journal Europe, Time Magazine, the International Herald Tribune, Red Herring and BusinessWeek. She is currently the editor-in-chief of The Innovator, an English-language global publication about the digital transformation of business. Jennifer was voted one of the 50 most inspiring women in technology in Europe in 2015 and 2016 and was named by Forbes Magazine in 2018 as one of the 30 women leaders disrupting tech in France. She has been a World Economic Forum Tech Pioneers judge for 20 years. She lives in Paris and has dual U.S. and French citizenship.