News In Context

The EU AI Act’s Endgame

The EU AI Act is at a critical juncture. The question is whether large language models (LLMs) and other foundation models should be regulated from the outset or not. Opposing camps each claim that making the wrong move could jeopardize Europe’s future.

The outcome could have consequences that reach well beyond Europe. “This choice is not a matter of European competitiveness – it represents a foundation stone for responsibly governing our technology-infused future,” says a blog post from the International Center For Future Generations. “Retaining the regulation of foundation models in the AI Act is pivotal: it is a step towards a future where AI is not only technologically sophisticated but also kept in check.”

The issue has split not just European countries but the global AI community, pitting Yann LeCun, one of the three men who shared the 2018 Turing Award – the Nobel Prize of Computing – against the other two, AI pioneers Yoshua Bengio and Geoff Hinton.

The divide concerns amendments made by the European Parliament that would subject providers of foundation models – the general purpose and/or generative base models that app developers can tap into to build software for specific use cases – to certain legal requirements.

The Parliament’s draft amendment to the EU AI Act – which is now up for debate – says developers of foundation models, such as OpenAI, Meta, Google, and Microsoft, must respect a minimum set of rules on transparency and accountability that are already standard in most cutting-edge industrial sectors, as well as terms covering the prevention, management, and reduction of risks and accidents. Under the law these companies would be obligated to provide comprehensive and intelligible documentation to downstream actors in the AI value chain. And, before being placed on the market, and then on a regular basis, these models would be subject to tests by independent experts.

Those against regulating foundation models include Cedric O, France’s former secretary of state for digital affairs, as well as LeCun, whose day job is Chief AI Scientist at Meta, one of the few global providers of foundation models. In an op-ed published in Les Echos, the Financial Times, Handelsblatt, NZZ, and TechCrunch, O and LeCun, along with more than 150 researchers, entrepreneurs, and business leaders who co-signed it, warned against the risks of over-regulating generative AI and killing innovation.

This camp maintains that the proposal to regulate foundation models in the EU AI Act would be catastrophic. “It is highly probable that it would not only hinder the emergence of European models but also prevent all European companies from accessing models as effective as their foreign competitors,” O says in a LinkedIn post. “Speaking of a technology as crucial for companies’ competitiveness (and probably nations’ power), it would potentially be the final nail in the coffin for European technological sovereignty.”

O is currently a co-founding advisor at Mistral AI, an AI company that France now sees as its best shot at becoming a global player in artificial intelligence. A national AI champion – Aleph Alpha – has also recently emerged in Germany. France and Germany, along with Italy, have done an about-face on regulating foundation models and now oppose it. Cynics say the change of heart makes it seem as if all the talk about the need for regulation was motivated by a desire to block foreign AI giants while Europe appeared to be behind, rather than by genuine concern for ensuring trustworthy AI.

O and others in his camp are pushing for the elimination of the regulation’s binding clauses for foundation model providers, to be replaced with a voluntary code of conduct.

The other camp, which includes Nicolas Miailhe, founder and president of The Future Society, and Bengio, the Canadian computer scientist and leading deep learning expert, argues in an op-ed in Le Monde, co-signed by other leading figures in AI and technology, that renouncing an ambitious legal framework for artificial intelligence would weaken Europe’s historic position and threaten Europe’s core industries.

“Generative artificial intelligence (AI) systems will increasingly be at the heart of our economy and societies,” argue Miailhe and his camp in their op-ed. “They must be explicitly regulated by the EU’s AI regulation at a time when a frantic, and in many ways reckless, race is being unleashed. France and Europe can take their rightful place in the deployment and economy of AI, provided they capitalize on their strengths: the protection of fundamental rights, cutting-edge industry, and trusted AI.”

The Le Monde op-ed says the proposed legislation would not weaken emerging European foundation model providers but would actually strengthen them. Compliance with a European regulation that includes foundation models would equate to a “trust label that would be a competitive advantage both in Europe and internationally,” it argues.

Turing Award winners Bengio and Hinton, along with scientist and serial entrepreneur Gary Marcus, a leading voice in artificial intelligence, and others in the AI community, are also calling for foundation models to be regulated. They signed a pair of open letters urging policymakers in the EU to seek a compromise agreement rather than gutting the AI Act or ditching it.

Since a sticking point is that some consider the original drafts of the rules around foundation models too burdensome for smaller companies like Mistral AI to meet, the “obvious and correct compromise,” which the Spanish Presidency of the EU has been trying to push, is a “tiered approach” that puts the greatest burden on the largest companies, Marcus says in a blog posting.

In one of the letters signed by Marcus, Bengio, Hinton and others, this camp argues that it is vital that risks are addressed at the foundation model level. “Only the providers of foundation models are in a position to comprehensively address their inherent risks,” says the letter. “They exclusively have access to and knowledge of the models’ training data, guardrail design, likely vulnerabilities, and other core properties. If severe risks from foundation models aren’t mitigated at the foundation model level, they won’t be mitigated at all, potentially threatening the safety of millions of people.”

The letter strongly advises against addressing risks from foundation models through a system of self-regulation. “Self-regulation is likely to dramatically fall short of the standards required for foundation model safety,” says the letter. “Since even a single unsafe model could cause risks to public safety, a vulnerable consensus on self-regulation does not ensure EU citizens’ safety. The safety of foundation models must be ensured by law.”

Foundation models differ significantly from traditional AI. Their generality, cost of development, and ability to act as a single point of failure for thousands of downstream applications mean they carry a distinct risk profile – one that is systemic, not yet fully understood, and that affects virtually all sectors of society, argues the letter signed by Marcus, Hinton and Bengio. “We must assess and manage these risks comprehensively along the value chain, with responsibility lying in the hands of those with the capacity and efficacy to address them.”

The scope of such regulation would be narrow, impacting fewer than 20 regulated entities worldwide – each capitalized at more than $100 million – compared to the thousands of potential EU deployers. “These large developers can and should bear risk management responsibility on current powerful models if the Act aims to minimise burdens across the broader EU ecosystem,” says one of the Marcus, Hinton and Bengio letters. “Requirements for large upstream developers provide transparency and trust to numerous smaller downstream actors. Otherwise, European citizens are exposed to many risks that downstream deployers and SMEs, in particular, can’t possibly manage technically: lack of robustness, explainability, and trustworthiness. Model cards and voluntary – and therefore not enforceable – codes of conduct won’t suffice. EU companies deploying these models would become liability magnets. Regulation of foundation models is an important safety shield for EU industry and citizens.”

A letter to the heads of state of France, Germany and Italy and to EU officials, signed by the president of the Atomium European Institute for Science, Media and Democracy and the founding director of Yale University’s Digital Ethics Center, also makes the case for pushing forward with an approach that includes legally binding rules for foundation models.

That letter cites three reasons. The first is that providers of large language models, such as OpenAI, Google and Microsoft, should not make the rules themselves. “When companies regulate themselves, they may prioritize their profits over public safety and ethical concerns,” says the letter. “It is also unclear who will monitor the development and applications of these codes of conduct, how, and with what degree of accountability.”

The second reason is that up to now the EU has led the global effort to ensure that AI is safe, fair, and protects users’ privacy, but other governments are now starting to tackle oversight of AI. If it fails to act, “European citizens risk using AI products regulated according to values and agendas not aligned with European principles,” says the Atomium letter.

The third reason cited is what Atomium sees as the heavy cost of inaction. “A lack of regulation opens the door to potential misuse and abuse of AI technologies,” says the letter. “The consequences are severe, including privacy violations, bias, discrimination, and threats to national security in critical areas like healthcare, transportation, and law enforcement. Economically, unregulated AI applications can distort competition and market dynamics, creating an uneven playing field where only powerful and well-funded companies will succeed.”

There is much at stake, argue those who want to regulate foundation models from the start. “The AI Act is more than just a law,” says the Atomium letter. “It is a statement about what values we, as Europeans, want to promote, and what kind of society we want to build.”

If a compromise is not reached and the EU AI Act is not passed, “in the worst case, after five years of negotiation, we could wind up with nothing, leaving the citizens of the world more or less entirely on the hook for any negative externalities that arise from generative AI, from misinformation to cybercrime to new bioweapons to bias; runaway AI, if it’s really a thing, would also not be covered,” Marcus said in a blog posting.

A decision on how to proceed with the EU AI Act is expected before the end of the year.

IN OTHER NEWS THIS WEEK

ARTIFICIAL INTELLIGENCE

Microsoft Will Have Observer Role On OpenAI’s Board

Microsoft will have a role on OpenAI’s new board as the artificial intelligence start-up aims to strengthen its corporate governance following a week of chaos in which five out of six of its directors, including chief executive Sam Altman, were ousted or quit. The new board will include Microsoft as a non-voting observer, alongside “individuals whose collective experience represents the breadth of OpenAI’s mission — from technology to safety to policy,” the start-up said on November 29.

AI Threatens Wages, Not Jobs, According To New ECB Study

The rapid adoption of artificial intelligence could reduce wages, but so far it is creating, not destroying, jobs – especially for the young and highly skilled – according to research published by the European Central Bank on November 28.

DEEP TECH

Tiny Living Robots Made From Human Cells Surprise Scientists

Scientists have created tiny living robots from human cells that can move around in a lab dish and may one day be able to help heal wounds or damaged tissue, according to a new study. A team at Tufts University and Harvard University’s Wyss Institute has dubbed these creations anthrobots. The research builds on earlier work from some of the same scientists, who made the first living robots, or xenobots, from stem cells sourced from embryos of the African clawed frog (Xenopus laevis). The research was published November 30 in the journal Advanced Science.

Google DeepMind Reveals Potential For Thousands Of New Materials

Google DeepMind has used artificial intelligence (AI) to predict the structure of more than two million new materials, a breakthrough it said could soon be used to improve real-world technologies. In a research paper published in the science journal Nature on November 29, the Alphabet-owned AI firm said almost 400,000 of its hypothetical material designs could soon be produced in lab conditions. Potential applications for the research include the production of better-performing batteries, solar panels and computer chips.

CYBERSECURITY

Okta Hackers Stole Data On All Customer Support Users

Hackers who compromised Okta’s customer support system stole data from all of the cybersecurity firm’s customer support users, Okta said in a letter to clients obtained by CNBC on November 28, a far greater incursion than the company initially believed. Okta provides identity management solutions for thousands of small and large businesses, allowing them to give employees a single point of sign-on. That also makes Okta a high-profile target for hackers, who can exploit vulnerabilities or misconfigurations to gain access to a slew of other targets. In the high-profile attacks on MGM and Caesars, for example, threat actors used social engineering tactics to exploit IT help desks and target those companies’ Okta platforms. The direct and indirect losses from those two incidents exceeded $100 million, including a multi-million-dollar ransom payment from Caesars.

Ransomware Group Black Basta Is Said To Have Extorted More Than $100 Million

A cyber extortion gang suspected of being an offshoot of the notorious Russian Conti group of hackers has raked in more than $100 million since it emerged last year, researchers said in a report published on November 29. Digital currency tracking service Elliptic and Corvus Insurance in a joint report said the ransom-seeking cybercrime group known as “Black Basta” has extorted at least $107 million in bitcoin, with much of the laundered ransom payments making their way to a sanctioned Russian cryptocurrency exchange.

FOOD AND BEVERAGE

World Economic Forum Launches First Movers Coalition For Food

The World Economic Forum, with support from the Government of the United Arab Emirates, along with more than 20 corporate and research partners in the food sector, launched the First Movers Coalition for Food on December 1. The initiative uses combined procurement power for sustainably produced farm products to speed up the adoption of sustainable farming practices, innovations, and transitional funding. Food systems account for more than 30% of global emissions and are critical to achieving the Paris Agreement goal of limiting global warming to below 1.5°C. Aggregating demand for sustainably produced, low-emission agricultural commodities can accelerate the transition to net-zero, nature-positive food systems. The new initiative aims to accelerate sustainable farming and production methods and technologies by leveraging collective demand for low-carbon agricultural commodities, targeting a combined procurement value of $10 billion to $20 billion from coalition members. Corporate partners currently participating in the coalition account for a combined revenue of $2.1 trillion, with operations worldwide.


About the author

Jennifer L. Schenker

Jennifer L. Schenker, an award-winning journalist, has been covering the global tech industry from Europe since 1985, working full-time, at various points in her career, for the Wall Street Journal Europe, Time Magazine, International Herald Tribune, Red Herring and BusinessWeek. She is currently the editor-in-chief of The Innovator, an English-language global publication about the digital transformation of business. Jennifer was voted one of the 50 most inspiring women in technology in Europe in 2015 and 2016 and was named by Forbes Magazine in 2018 as one of the 30 women leaders disrupting tech in France. She has been a World Economic Forum Tech Pioneers judge for 20 years. She lives in Paris and has dual U.S. and French citizenship.