News In Context

What Business Needs To Know About The EU’s Plans To Regulate AI

The European Union unveiled the world’s first plans to regulate AI on April 21, reinforcing its role as a global rule maker and its commitment to ensuring that AI systems are human-centric and trustworthy.

“With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” Margrethe Vestager, executive vice president at the European Commission, the EU’s executive arm, said in a statement.

The proposed legislation threatens to come down hard on companies that violate rules concerning the misuse of AI, such as algorithmic bias in hiring or lending. Offending companies could face fines of up to 6% of their global turnover.

The draft proposal classifies AI applications under four distinct categories of risk:

  • Unacceptable risk: Use cases placed in this category, such as social scoring, will be banned.
  • High risk: Use cases such as AI recruitment tools will be subject to a vetting process that includes quality management and conformity assessment procedures.
  • Limited risk: Use cases such as chatbots will be subject to minimal transparency obligations.
  • Minimal risk: Use cases such as AI-based video games or spam filters won’t face any additional restrictions.

The designation of applications as high-risk is likely to evolve based on a set of criteria and risk assessment methodology, which are yet to be defined. “Therefore, it is prudent for sectors that rely heavily on AI to come into regulatory compliance of their own accord,” Kay Firth-Butterfield, the World Economic Forum’s Head of AI and Machine Learning, said in a statement reacting to the EU’s plans. “Forward-looking companies should proactively establish such a vetting process to ensure their AI systems’ trustworthy design and deployment. They can effectively leverage the set of governance frameworks [the Forum] developed at the Centre for the Fourth Industrial Revolution across various use-cases (e.g., facial recognition, AI recruitment tools, chatbots, etc.). They have been co-designed and tested through a very similar human-rights focus.”

AI systems identified as high-risk in the proposed new rules include AI technology used in:

  • Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;
  • Educational or vocational training, that may determine the access to education and professional course of someone’s life (e.g. scoring of exams);
  • Safety components of products (e.g. AI application in robot-assisted surgery);
  • Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures);
  • Essential private and public services (e.g. credit scoring denying citizens opportunity to obtain a loan);
  • Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
  • Migration, asylum and border control management (e.g. verification of authenticity of travel documents);
  • Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).

The proposed rules will subject high-risk AI systems to strict obligations before they can be put on the market. These obligations include:

  • Adequate risk assessment and mitigation systems;
  • High quality of the datasets feeding the system to minimize risks and discriminatory outcomes;
  • Logging of activity to ensure traceability of results;
  • Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
  • Clear and adequate information to the user;
  • Appropriate human oversight measures to minimize risk;
  • A high level of robustness, security and accuracy.

Among the identified risks are remote biometric systems, including facial recognition technology. AI systems intended to be used for ‘real-time’ and ‘post’ remote biometric identification of people are considered high-risk and would require an ex-ante evaluation of the technology provider to attest its compliance before getting access to the EU market, as well as an ex-post evaluation, notes Sébastien Louradour, a Forum Fellow in Artificial Intelligence and Machine Learning, in a posting that focuses on what to know about the EU’s proposed facial recognition technology regulation.

In addition, “real-time” remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement are in principle prohibited, with narrow exceptions related to public safety such as the targeted search for missing persons or the prevention of imminent terrorist threats.

Other use-cases, such as facial recognition technology for authentication processes, are not on the list of high-risk applications and are expected to require a lighter level of regulation.

The EU doesn’t detail any threshold of accuracy to meet, but rather requires a robust and documented risk-mitigation process designed to prevent harm, notes Louradour.

“The deployment of a quality-management system is an important step as it will require providers to design adequate internal processes and procedures for the active mitigation of potential risks,” he says in his posting. While it will be up to the technology providers to set up their own quality processes, third-party notified bodies will have the responsibility of attesting providers’ compliance with the new EU legislation. “To succeed, tech providers will need to build tailored approaches to design, implement and run these adequate processes. Providers will also have to work closely with the user of the system to anticipate potential risks and propose mitigation processes to prevent them,” says Louradour.

How To Anticipate The Coming Regulation

Over the past two years, the World Economic Forum has partnered with industry players, government agencies and civil society to draft a proposed policy framework for responsible limits on facial recognition technology. The Forum’s proposed oversight strategies include a detailed self-assessment questionnaire, a third-party audit and a certification scheme. Louradour notes that the EU’s proposed concept of a third-party audit to assess conformity suggests the same model of oversight and allows for the rapid scale-up and deployment of certification bodies to run the third-party audits across the EU.

The proposed conformity assessment procedure – which reviews compliance with the requirements set out in Title III of the proposed EU regulation – will first require notified bodies to draft dedicated audit frameworks and certification schemes. These two documents will explain to audited organizations how the certification will play out.

Louradour encourages providers to consider the audit framework and certification scheme for quality management systems that the Forum detailed in a white paper published in December 2020 in collaboration with the French accredited certification body AFNOR Certification.

At least one organization is already launching plans to enable independent third-party audits of any type of system deemed high-risk by the EU.

ForHumanity, a 501(c)(3) tax-exempt public charity formed by an interdisciplinary group of more than 350 dedicated contributors and 32 Fellows that focuses on the downside risks associated with AI and automation, said in a press release that it will immediately put into action its crowd-sourced development process and start to produce robust criteria that will support the EU’s proposals and enable independent third-party audits of high-risk systems.

“We strongly recommend that certification schemes or technical standards are applied to the conformity assessment procedures,” says ForHumanity. “As such, we have started to develop a certification scheme and will engage with national authorities in parallel with the legislative process. Once relevant parties approve that our scheme upholds the scope, nature, and purpose of the final EU laws, we look forward to helping implement it.”

ForHumanity said it is concerned that industry may perceive omission from the list of high-risk use cases as a tacit endorsement of systems as low-risk and outside the scope of governance, oversight, accountability, and trust. “It is important that the Commission ensures that the list of use cases in Annex III is reviewed and maintained with a frequency that parallels new technical developments,” the organization said in a statement.

Building On GDPR

With the AI rule book, the EU is intensifying a years-long plan to position itself as the world’s primary rulemaker for technology following the rollout of its comprehensive privacy rules, the GDPR, in 2018.

“We can expect many changes to this initial draft, but the risk-based nature of the approach complements and extends the foundations of the GDPR and proposes further harmonization,” Anne Josephine Flanagan, the Forum’s Data Policy and Governance Lead, said in a LinkedIn post. It is “useful to see specific high-risk use cases defined in the annexes – something the EU has typically tended to avoid, but which is becoming increasingly necessary as context matters,” she said.  “The trick will be ensuring that these rules are clear, future-proof and not overly burdensome whilst protecting people and encouraging innovation. This text will be pivotal for European tech as data processing relies increasingly on machine learning and AI relies on access to good data.”

The Commission proposes that national competent market surveillance authorities supervise the new rules, while the creation of a European Artificial Intelligence Board will facilitate their implementation, as well as drive the development of standards for AI. Additionally, voluntary codes of conduct are proposed for non-high-risk AI, as well as regulatory sandboxes to facilitate responsible innovation.

The proposals will be debated by the European Parliament and member states until at least 2023 before becoming law.

IN OTHER NEWS THIS WEEK

ENERGY

5G Phone Networks Could Provide Power As Well As Communications

In a paper published in Scientific Reports, Aline Eid and her colleagues at the Georgia Institute of Technology, in Atlanta, describe how they have designed a small, flexible antenna intended to harvest electrical power from signals emitted by so-called 5th-generation (5G) mobile-phone masts. The hope is that mobile-phone networks could pull double duty as a ubiquitous wireless power grid.

Enel Green Power Installs First-Ever Wave Energy Converter in Chile

Enel Green Power Chile, Enel Chile’s renewable energy subsidiary, installed the PB3 PowerBuoy, the first full-scale wave energy converter, off the coast of Las Cruces in the Valparaíso Region. The marine energy generator installed by Enel Green Power is the first of its kind in Latin America and the fifth in the world. The innovative system converts wave energy into electrical energy that is stored in a 50 kWh battery system located inside the equipment, which feeds the different oceanographic sensors that monitor the marine environment, supporting research into this type of renewable energy.

Study Finds Resilience Needed To Jumpstart Final Stages Of Energy Transition

A World Economic Forum report, published in collaboration with Accenture, draws on insights from the Energy Transition Index (ETI) 2021. The index benchmarks 115 countries on the current performance of their energy systems across the three dimensions of the energy triangle: economic development and growth, environmental sustainability, and energy security and access indicators – and their readiness to transition to secure, sustainable, affordable, and inclusive energy systems. Strong improvements were made on the Environmental Sustainability and Energy Access and Security dimensions. Eight out of the 10 largest economies have pledged net-zero goals by mid-century. The annual global investment in the energy transition surpassed $500 billion for the first time in 2020, despite the pandemic. However, the results also show that only 10% of the countries were able to make steady and consistent gains in their aggregate ETI score over the past decade, highlighting the inherent complexity of the energy transition challenge.

SUSTAINABILITY

This Fashion Label Is Making Clothes Out Of Air Pollution

Pangaia’s latest collection features clothes and accessories emblazoned with logos that use black ink made from toxic particles. This particulate matter would otherwise contribute to global warming and harm human health. But Pangaia partnered with Graviky Labs, a startup spun out of an MIT project, to suck it out of the atmosphere and transform it into screen printing ink. This is the first time this kind of ink has been used in garments.

Unilever Launches A New Laundry Capsule Made From Recycled Carbon Emissions

A new laundry capsule from Unilever, which initially will be available in stores in China, uses surfactants made from captured industrial emissions. The laundry capsules, available through the brand Omo and launching in China April 22, result from a partnership between Unilever, biotech company LanzaTech, and green chemical company India Glycols.

FOOD AND AGRICULTURE

Clara Foods Teams Up With AB InBev To Make Animal Protein At Scale

Alt-protein maker Clara Foods announced a partnership with ZX Ventures, the innovation arm of beer brewer AB InBev, to “brew” animal-free protein at a large scale via fermentation. Clara Foods, which was founded out of the IndieBio accelerator, has been using precision fermentation for years to develop animal-free protein, including an animal-free egg white. But a major challenge is producing such proteins at scale — that is, in large enough amounts to realistically compete with the traditional animal protein industry. 

FINTECH

China’s Central Bank Fights Ant Group For Control Of Data

China’s central bank is attempting to take control of Ant Group’s vast trove of consumer lending data, marking the latest front in Beijing’s crackdown on Jack Ma’s financial technology group. The People’s Bank of China wants Ant to turn over its data, one of the most valuable assets in Ma’s internet empire, to a state-controlled credit scoring company that would be run by former executives of the central bank, according to people close to the negotiations. The entity would also serve other financial institutions, such as state-owned banks, that compete with the fintech group’s lending operations.

European Digital Bank Revolut Is Expanding Into India

Revolut, an online banking start-up based in the U.K., is planning an expansion into India. The London-based company announced Thursday that it had tapped Paroma Chatterjee, a former executive at Indian start-ups Flipkart, Via.com and Lendingkart, to lead its operations in the country. Revolut will invest about $25 million into the Indian market over the next five years and aims to launch its app there by 2022. The company, worth $5.5 billion in its most recent funding round, has raised more than $900 million from investors to date.

TECH COMPETITION

U.S. Senators Question Google And Apple About App Stores

On April 21 the U.S. Senate Judiciary Subcommittee on Competition Policy, Antitrust and Consumer Rights met to discuss the dominance of Google and Apple’s mobile app stores and whether the companies abuse their power at the expense of smaller competitors. Read the testimony that London Business School’s Michael G. Jacobides submitted in writing, in advance of the meeting, on The Innovator’s website.

U.S. Lawmakers Back $100 Billion Science Push To Compete With China

A bipartisan group of U.S. lawmakers on Wednesday introduced legislation calling for $100 billion in government spending over five years on basic and advanced technology research and science in the face of rising competitive pressure from China.

ADVERTISING

Advertising Scam Targeting Streaming TV Apps Uncovered

Fraudsters infected nearly one million mobile devices with software that mimicked streaming-TV apps and collected revenue from unsuspecting advertisers, according to cybersecurity company Human Security, exposing vulnerabilities in a fast-growing corner of the digital ad market. The scammers spoofed an average of 650 million ad placement opportunities a day in online ad exchanges, stealing ad dollars meant for streaming apps available on popular streaming-TV platforms run by Roku, Amazon.com, Apple and Alphabet’s Google.

RETAIL

Amazon To Open First Ever Hair Salon In London

Amazon is expanding into yet another new business: it is opening its own hair salon. The Amazon Salon, described in a blog post, will be spread over two floors of a building in Spitalfields, a trendy district near the City of London known for its shopping and restaurants. It will be open seven days a week. The company said it will use the Amazon Salon to try out a number of new technologies with consumers. It gave industry publication Retail Week a preview ahead of the official announcement.


About the author

Jennifer L. Schenker

Jennifer L. Schenker, an award-winning journalist, has been covering the global tech industry from Europe since 1985, working full-time, at various points in her career, for the Wall Street Journal Europe, Time Magazine, International Herald Tribune, Red Herring and BusinessWeek. She is currently the editor-in-chief of The Innovator, an English-language global publication about the digital transformation of business. Jennifer was voted one of the 50 most inspiring women in technology in Europe in 2015 and 2016 and was named by Forbes Magazine in 2018 as one of the 30 women leaders disrupting tech in France. She has been a World Economic Forum Tech Pioneers judge for 20 years. She lives in Paris and has dual U.S. and French citizenship.