News In Context

Reining In AI

Image by Steve Johnson, Unsplash

Governments around the world grappled this week with how best to rein in AI. As the EU AI Act nears completion, U.S. President Joe Biden issued an Executive Order on AI, the UK held a global AI Safety Summit and issued the Bletchley Declaration, and the G7 released its Advanced AI Principles, Code of Conduct, and Leaders’ Statement. These developments follow news last week that the United Nations is creating an AI Advisory Body, which brings together global expertise from governments, business, the tech community, civil society, and academia.

AI has captured the attention of governments because of the potential for serious, even catastrophic, future harm stemming from the most significant capabilities of AI models, as well as the risks that today’s AI systems already pose, including their tendency to inject bias, spread misinformation, threaten copyright protections, and weaken personal privacy.

Both money and power are on the line. The largest tech companies would prefer to self-regulate and are attempting to influence how their AI models might be governed. Some observers accuse them of using proposed legislation either to lock in advantages or to slow the market down until they catch up.

Governments have their own agendas. Elon Musk, who participated in the UK AI Safety Summit along with other U.S. tech leaders, summed up governments’ conflicting interests in a post on X, the social media platform formerly known as Twitter, which he now owns. Several hours ahead of the summit, Musk posted a cartoon that appeared to show the UK, the U.S., Europe, and China verbalizing the risks AI poses to humankind while each secretly thought about its desire to develop it first.

Against that backdrop, critics say the government actions announced this week amount to little more than corporate social responsibility goals with no teeth.

“As the UK did not deliver very strong output from the AI Safety Summit in terms of enforcement and rules, it feels like policy is being shaped by industry and civil society is not being heard,” Nicolas Moës, an economist by training who focuses on the impact of General-Purpose Artificial Intelligence (GPAI) on geopolitics, the economy, and industry, said in an interview with The Innovator. He is the Director for European AI Governance at global think tank The Future Society, where he studies and monitors European developments in the legislative framework surrounding AI.

“One of the key questions was why there were so few independent civil society representatives. The outcome is therefore some self-imposed, self-assessed commitments,” says Moës. “There is a need for third-party scrutiny to be able to understand the models, cross-examine the evidence and stop some of the dangerous and reckless developments,” he says. “The EU has a moral obligation to step up: create simple but concrete rules delineating clearly what levels of safety, risk mitigation, reliability and quality upstream providers of foundation models should achieve when developing their models.” (To read the full interview click here.)

The U.S. Announces AI Safety Measures

On November 1, the Biden-Harris administration announced that the U.S. Department of Commerce, through the National Institute of Standards and Technology (NIST), will establish the U.S. Artificial Intelligence Safety Institute (USAISI) to lead the U.S. government’s efforts on AI safety and trust, particularly for evaluating the most advanced AI models.

The USAISI will operationalize NIST’s AI Risk Management Framework by creating guidelines, tools, benchmarks, and best practices for evaluating and mitigating dangerous capabilities, and by conducting evaluations, including red-teaming, to identify and mitigate AI risk. The Institute will develop technical guidance that regulators can use when considering rulemaking and enforcement on issues such as authenticating content created by humans, watermarking AI-generated content, identifying and mitigating harmful algorithmic discrimination, ensuring transparency, and enabling adoption of privacy-preserving AI. It will also aim to serve as a driver of the future workforce for safe and trusted AI, to enable information-sharing and research collaboration with peer institutions internationally, including the UK’s planned AI Safety Institute, and to partner with outside experts from civil society, academia, and industry.

The announcement coincided with U.S. Secretary of Commerce Gina Raimondo’s participation in the AI Safety Summit 2023 in the UK with Vice President Kamala Harris.

The USAISI will support the responsibilities assigned to the Department of Commerce under President Biden’s executive order on AI. Among other things the U.S. executive order will:

  • Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public.
  • Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety.
  • Protect against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening. Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI.
  • Protect citizens from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.
  • Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration’s ongoing AI Cyber Challenge. Together, these efforts will harness AI’s potentially game-changing cyber capabilities to make software and networks more secure.
  • Order the development of a National Security Memorandum that directs further actions on AI and security, to be developed by the National Security Council and White House Chief of Staff. This document will seek to ensure that the United States military and intelligence community use AI safely, ethically, and effectively in their missions, and will direct actions to counter adversaries’ military use of AI.

Axios points out that it’s not clear what action, if any, the government could take if it’s not happy with the test results an AI company provides, and that the executive order “largely depends on the goodwill of the tech companies.”

The U.S. executive order is “imperfect, but comes far closer to laying out real policy” than the Bletchley Declaration, which was adopted by 28 countries and the EU, wrote Gary Marcus, chief executive of the Center for the Advancement of Trustworthy AI.

“The truth is the United States is already far behind Europe,” Max Tegmark, president of tech policy think tank the Future of Life Institute, told Reuters. “Policymakers, including those in Congress, need to look out for their citizens by enacting laws with teeth that tackle threats and safeguard progress,” he said in a statement.

The Bletchley Declaration

The U.S. announcements came just before UK Prime Minister Rishi Sunak hosted an AI Safety Summit to help shape global rules for AI, which resulted in the so-called Bletchley Declaration.

The declaration sets out a two-pronged agenda focused on identifying risks of shared concern and building the scientific understanding of them, and on developing cross-country policies to mitigate them. “This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research,” the declaration said.

The summit succeeded in getting the U.S., China, India, Japan and EU countries together to create a baseline for future action and to safety-test AI models at the London-based AI Safety Institute, which aims to “act as a global hub on AI safety,” according to Sunak. OpenAI, Google DeepMind, Microsoft, Meta and other AI companies signed on to the non-binding agreement.

“Until now the only people testing the safety of new AI models have been the very companies developing it,” Sunak said in a statement at the end of the summit. “We shouldn’t rely on them to mark their own homework.”

Industry observers took to social media to point out what they see as some of the positives from the week’s announcements. “The order is guiding money and attention to many areas of legitimate concern, including finding ways to protect people against AI-enabled fraud; using AI for identifying and fixing cybersecurity vulnerabilities; discovering and addressing risks to critical infrastructure and developing standards around AI safety and trustworthiness,” wrote Patricia Thaine, co-founder and CEO at scale-up Private AI.

Others welcomed the inclusion of China into discussions about governing AI.

But some say the declaration does not go far enough. The declaration “begins to recognize the serious risks that AI poses, both near-term and long, but I also wish that they could have gone further and represented a broader cross-section of society,” wrote the Center for the Advancement of Trustworthy AI’s Marcus.

“We urgently need to move past position statements – there have been a lot of those in recent months – and into concrete proposals about what to do next,” wrote Marcus. “Ultimately all these things need to have teeth; most voluntary guidelines are not enough.”

A Need For The EU To Step Up

The creation of an EU AI Office, one of the enforcement mechanisms being discussed for the EU AI Act, is a step in that direction because it also comes with a pool of independent researchers to be deployed for compliance controls, says The Future Society’s Moës. In October the Spanish presidency of the EU made its proposal for the AI Office: any international issues around foundation models and GenAI models, and enforcement issues in big cases involving multiple jurisdictions, would be the remit of this new entity.

What is needed, says Moës, are independent authorities or third parties that authorize further development of a given product, and do so only after receiving sufficient evidence that it is safe. It should work much the same way as in electronics manufacturing or the construction industry, he says. “You first get sign-off on the blueprints before production; you don’t just build a skyscraper and then ask the authorities whether it is ok.” Oversight should include an authorization of whether or not to train the model in the first place, he argues. “Given the risks they are talking about, to be coherent they ought to stop a minute and try as much as possible to assess whether developing the model is at all a good idea. And since we can no longer trust [the tech companies] to make the right decisions on this, we need authorities to oversee these decisions.”

Legislators are scheduled to decide on the final details of governance in the EU AI Act on December 6. “In the U.S. tech companies are telling government ‘please regulate us’ and in the EU they are lobbying very hard against this because they know that, contrary to the U.S., the EU is capable of coming up with legislation that would actually involve real oversight,” says Moës.

IN OTHER NEWS THIS WEEK

ARTIFICIAL INTELLIGENCE

Musk’s xAI Set To Release First AI Model To Select Group

Elon Musk’s artificial intelligence startup xAI will release its first AI model to a select group on November 4, nearly a year after OpenAI’s ChatGPT caught the imagination of businesses and users around the world, spurring a surge in adoption of generative AI technology. Musk co-founded OpenAI, the company behind ChatGPT, in 2015, but stepped down from the company’s board in 2018.

Microsoft Makes Copilot Available To 150 Million Workers

Microsoft has become the first in the industry to make the technology behind ChatGPT available as a standard feature in a widely used software product, potentially transforming the working lives of millions. On November 1 Microsoft officially declared “general availability” of a generative AI assistant, dubbed Copilot, in enterprise versions of its widely used Microsoft 365 suite of productivity apps, which includes Word, PowerPoint and Excel. The move potentially puts new AI tools at the fingertips of an estimated 150 million workers, according to analysts, helping them automatically generate documents and emails or create spreadsheets more easily. The software is designed to make it simple to draw on all the data a company holds in its Microsoft applications. Eventually, connections to other data stores are meant to make Copilot a “smart” front end for working with all of a company’s most valuable data.

Siemens and Microsoft To Work Together On AI Project

Siemens and Microsoft on October 31 announced a joint project to use artificial intelligence to increase productivity and human-machine collaboration. The Siemens Industrial Copilot scheme will see the two companies work together to use generative AI for the manufacturing, transportation and healthcare industries. German automotive supplier Schaeffler is among the companies to have adopted the Siemens Industrial Copilot, Siemens said.

FINANCIAL SERVICES

Sam Bankman-Fried Convicted Of Multi-Billion Dollar FTX Fraud

FTX founder Sam Bankman-Fried was found guilty November 2 of stealing from customers of his now-bankrupt cryptocurrency exchange in one of the biggest financial frauds on record. A 12-member jury in Manhattan federal court convicted Bankman-Fried on all seven counts he faced after a monthlong trial in which prosecutors made the case that he looted $8 billion from the exchange’s users out of sheer greed.

HSBC Tokenizes Gold

HSBC has unveiled a platform that uses distributed ledger technology to tokenize the ownership of institutional clients’ physical gold held in the bank’s London vault. HSBC creates a ‘digital twin’ of an existing physical asset – specifically loco London gold that is in its vault. The tokenized physical gold can then be traded between HSBC and institutional investors through the bank’s Evolve single dealer platform, or through an API. The tokenization generates a permissioned digital representation of clients’ physical gold holdings, which is integrated into HSBC’s operational infrastructure. This provides a digital overlay for clients to see their tokenized gold trades and positions that correspond with their physical holdings. This in turn allows for an automated and, therefore, more efficient and cost-effective way for investors to keep track of their allocated as well as unallocated gold, says the bank. In due course, the bank says this could enable fractionalization of loco London gold bars and direct investment by retail investors.
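HSBC has not published implementation details, but the digital-twin idea described above can be pictured with a minimal, purely illustrative sketch (in Python, using hypothetical names with no connection to HSBC’s Evolve platform or APIs): a permissioned ledger holds one record per vaulted bar and only lets approved institutions take or transfer ownership of it, while the physical bar never leaves the vault.

```python
# Illustrative sketch only -- not HSBC's system; all names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class GoldToken:
    bar_id: str       # identifier of the physical loco London bar in the vault
    weight_oz: float  # fine troy ounces represented by the token
    owner: str        # current institutional holder

@dataclass
class PermissionedLedger:
    participants: set                        # institutions approved to hold tokens
    tokens: dict = field(default_factory=dict)

    def issue(self, bar_id, weight_oz, owner):
        """Create the digital twin of a vaulted bar for an approved owner."""
        if owner not in self.participants:
            raise PermissionError(f"{owner} is not an approved participant")
        self.tokens[bar_id] = GoldToken(bar_id, weight_oz, owner)

    def transfer(self, bar_id, new_owner):
        """Move ownership of the token; the physical bar stays in the vault."""
        if new_owner not in self.participants:
            raise PermissionError(f"{new_owner} is not an approved participant")
        self.tokens[bar_id].owner = new_owner

    def position(self, owner):
        """Total tokenized gold (in ounces) held by one participant."""
        return sum(t.weight_oz for t in self.tokens.values() if t.owner == owner)

# Example: issue a token for a 400 oz bar and trade it between two institutions.
ledger = PermissionedLedger(participants={"HSBC", "FundA"})
ledger.issue("BAR-0001", 400.0, "HSBC")
ledger.transfer("BAR-0001", "FundA")
print(ledger.position("FundA"))  # 400.0
```

In a production system the ledger itself would be distributed and permissioned at the network level, and records would also track allocated versus unallocated holdings and fractional interests in a bar, but the core idea is the same: the token is a controlled digital record that mirrors an asset sitting in the vault.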

To access more of The Innovator’s News In Context articles click here.

 

About the author

Jennifer L. Schenker

Jennifer L. Schenker, an award-winning journalist, has been covering the global tech industry from Europe since 1985, working full-time, at various points in her career for the Wall Street Journal Europe, Time Magazine, International Herald Tribune, Red Herring and BusinessWeek. She is currently the editor-in-chief of The Innovator, an English-language global publication about the digital transformation of business. Jennifer was voted one of the 50 most inspiring women in technology in Europe in 2015 and 2016 and was named by Forbes Magazine in 2018 as one of the 30 women leaders disrupting tech in France. She has been a World Economic Forum Tech Pioneers judge for 20 years. She lives in Paris and has dual U.S. and French citizenship.