Sacha Alanoca is an expert for the OECD’s AI Policy Observatory and was named one of the “100 Brilliant Women in AI Ethics” in 2022. She is currently an MPA candidate and John F. Kennedy Fellow at Harvard University. On campus, she advances responsible AI efforts as Co-Chair of the AI & Emerging Tech Caucus at the Harvard Kennedy School and as an AI Student Leader with the Berkman Klein Center for Internet & Society. Previously, Alanoca worked as a Senior AI Policy Researcher and Head of Community Development at the think tank The Future Society, where she coordinated the development of Tunisia’s national AI strategy and published scientific and policy papers on topics such as AI tools for pandemic response. She was a speaker on a panel moderated by The Innovator’s Editor-in-Chief at the Viva Technology conference in Paris on June 14.
Q: Why, in your view, is it necessary to govern AI?
SA: In recent months, media attention has focused heavily on long-term, hypothetical AI risks related to Artificial General Intelligence (AGI) taking over and destroying humanity. While creating safe AI is important, AI regulation should also remain focused on the harms AI is already causing in our society, such as surveillance, data breaches, systemic inequality, and job automation. Setting up AI governance safeguards against current harms will help prevent future ones. It will also help foster trust and social cohesion, as democratic values can be threatened by disruptive technologies such as generative AI.
Q: How should we govern AI? Should we prioritize setting up an international AI regulatory agency?
SA: A global AI regulatory agency could play a meaningful coordination role in the long term, but it is not the first step we should focus on: countries may take some time to align their AI positions, and building such a structure can be resource- and time-intensive. Instead, AI governance efforts should focus on current regulatory initiatives such as the EU AI Act. As emphasized during our panel at VivaTech on June 14, the EU Parliament just approved the AI Act, the first comprehensive AI regulation built on a risk-based approach. Under the Act, certain AI systems deemed ‘high-risk,’ such as CV-scanning tools, will have to meet strict compliance standards, while other, less risky AI applications will be left largely unregulated. A risk-based approach can pave the way forward because it acknowledges that we don’t need the same level of scrutiny for all types of AI systems. The AI Act is notably the output of more than three years of work by the European Union; the first proposal was released by the EU Commission in April 2021.
ChatGPT and generative AI are prompting a wider discussion about the risks and benefits of AI systems, but AI governance efforts are not starting from scratch. AI experts have been working on this for a while through governance platforms such as the OECD AI Policy Observatory (OECD.AI) and the Global Partnership on AI (GPAI). We can continue building on these foundational blocks, as I did in my previous role at The Future Society.
Q: The EU AI Act is controversial. Many in the tech industry say it will kill innovation.
SA: There is a false dichotomy between innovation and regulation. As a recent article in Time highlighted, it can be in the interest of big tech players to delay or avoid AI regulation. Just a few days after OpenAI CEO Sam Altman welcomed AI regulation in his testimony before Congress, he threatened to pull the plug on OpenAI’s operations in Europe if the AI Act were enforced. We should therefore include industry in AI governance discussions, but also take its recommendations with caution. From the tobacco industry to earlier tech cases, we know that self-regulation doesn’t work. Mark Zuckerberg went before the U.S. Congress to limit regulation of social media platforms despite the Cambridge Analytica scandal, and Sam Bankman-Fried did the same regarding cryptocurrencies, only to be arrested a couple of months later for financial fraud. Startups and businesses need to be looped into the conversation, but regulators and policymakers also have a key role to play in safeguarding our democratic process. I believe in the role of public institutions such as the EU in paving the way for AI regulation. The EU AI Act’s risk-based approach provides granularity regarding levels of risk and compliance.
Q: Do you think the EU AI Act – like GDPR, Europe’s data privacy rules – will have a broader impact?
SA: Like GDPR, the EU AI Act is likely to have a ‘Brussels effect.’ With GDPR, other regions did not necessarily copy-paste the EU legislation; they translated it to fit their own jurisdictions. For example, California’s data protection scheme was inspired by GDPR. Just as in data privacy, the EU is leading in AI legislation, and others will follow its risk-based approach: Brazil has already translated part of the EU AI Act. What is interesting is how Western-dominated this conversation is, so it will have to be adapted to local norms, values, and socio-economic priorities. What is considered a risk in one place might not be considered a risk in another.
Q: You were a speaker on a panel at Viva Technology on AI, ethics and liability. If something does go wrong, who is at fault: the developer, the designer, or the company that applies an algorithm?
SA: It depends on the AI application and at which stage the harm has been done. It is highly contextual, and much of this still needs to be worked out. There is a tension between the hyperactivity of the tech industry and the timeline of policy and legal change. In the next couple of years, policymakers will need to adopt an increasingly agile and flexible approach to AI governance. The rapid development of generative AI tools such as ChatGPT highlights this need. The challenge will be to find the right tempo across stakeholders to develop regulatory safeguards in a timely manner, which can foster both consumer trust and innovation.