The UK’s November 1-2 AI Safety Summit, a first-of-its-kind gathering of global leaders, is accelerating tech titans’ efforts to control the technology’s trajectory by influencing how it will be governed. It’s a high-stakes game that could determine not only AI’s impact on society but the sector’s winners and losers.
A lot of money is on the line. U.S. tech giants added $2.4 trillion to their market capitalizations in a year defined by the hype around generative artificial intelligence, according to venture capital firm Accel's annual Euroscape report, which said the share prices of big technology firms such as Apple, Microsoft, Alphabet and Amazon rose by an average of 36% year-over-year. At press time the Financial Times reported that OpenAI is in talks with investors about selling shares at a valuation of $86 billion, roughly three times what it was worth six months ago. A stock sale at the level OpenAI is targeting would make the San Francisco-based group behind the ChatGPT chatbot one of the world's most highly valued private companies.
The AI Safety Summit will focus on prevention and mitigation of harms from AI technology at the ‘frontier’ of General Purpose AI, as well as in some cases specific narrow AI which can possess potentially dangerous capabilities. These harms could be deliberate or accidental; caused to individuals, groups, organizations, nations or globally; and of many types, including but not limited to physical, psychological, or economic harms, according to the UK government, which is organizing the event at Bletchley Park, the principal center of Allied code-breaking during the Second World War.
The Summit will be faced with a larger question: who should control A.I., and who should make the rules that powerful artificial intelligence systems must follow?
“Should A.I. be governed by a handful of companies that try their best to make their systems as safe and harmless as possible?” asked an article in The New York Times. “Should regulators and politicians step in and build their own guardrails? Or should A.I. models be made open-source and given away freely, so users and developers can choose their own rules?”
In the run-up to the summit, tech leaders were busy this week broadcasting their viewpoints from conference stages, in newspaper pages and on the Internet.
These efforts are in addition to more backdoor routes aimed at influencing policymakers. Politico published an exposé on October 13 that revealed how an organization backed by Silicon Valley billionaires and tied to leading artificial intelligence firms is funding the salaries of more than a dozen AI fellows in key U.S. congressional offices, across federal agencies and at influential think tanks. The fellows, funded by Open Philanthropy, which is financed primarily by billionaire Facebook co-founder and Asana CEO Dustin Moskovitz and his wife Cari Tuna and which has close ties to OpenAI and Anthropic, are already involved in negotiations that will shape Capitol Hill’s accelerating plans to regulate AI, according to the Politico story. And they’re closely tied to a powerful influence network that’s pushing Washington to focus on the technology’s long-term risks — a focus critics fear will divert Congress from addressing risks that today’s AI systems already pose, including their tendency to inject bias, spread misinformation, threaten copyright protections and weaken personal privacy.
Current and former Horizon AI fellows with salaries funded by Open Philanthropy are now working at the Department of Defense, the Department of Homeland Security and the State Department, as well as in the House Science Committee and Senate Commerce Committee, two crucial bodies in the development of AI rules, according to Politico. They also populate key think tanks shaping AI policy, including the RAND Corporation and Georgetown University’s Center for Security and Emerging Technology, according to the Horizon website. In 2022, Open Philanthropy set aside nearly $3 million to pay for what ultimately became the initial cohort of Horizon fellows.
The organization — which is closely aligned with “effective altruism,” a movement made famous by disgraced FTX founder Sam Bankman-Fried that emphasizes a data-driven approach to philanthropy — has also spent tens of millions of dollars on direct contributions to AI and biosecurity researchers at RAND, Georgetown’s CSET, the Center for a New American Security and other influential think tanks guiding Washington on AI, according to Politico.
In an interview with Politico, Suresh Venkatasubramanian, a professor of computer science at Brown University who co-authored last year’s White House Blueprint for an AI Bill of Rights, compared Open Philanthropy’s growing AI network to the Washington influence web recently built by former Google executive Eric Schmidt. “It’s the same playbook that’s being run right now, planting various people in various places with appropriate funding,” he said. “As with Schmidt’s network, Open Philanthropy’s influence effort involves many of the outside policy shops that Washington relies on for technical expertise.”
Some of those same people with technical expertise may end up working in new global organizations charged with overseeing AI safety.
A Call For An International Panel On AI Safety
Former Google executive Schmidt argued in an October 19 Op-ed in the Financial Times, which he co-wrote with Mustafa Suleyman, co-founder of Inflection and DeepMind, that there is a need for an objective International Panel on AI Safety (IPAIS). “What’s missing is an independent, expert-led body empowered to objectively inform governments about the current state of AI capabilities and make evidence-based predictions about what’s coming,” the two said in the Op-ed. “Policymakers are looking for impartial, technically reliable and timely assessments about its speed of progress and impact.”
“Before we charge headfirst into over-regulating we must first address lawmakers’ basic lack of understanding about what AI is, how fast it is developing and where the most significant risks lie,” said the Op-ed. “Before it can be properly managed, politicians (and the public) need to know what they are regulating, and why. Right now, confusion and uncertainty reign.”
Schmidt and Suleyman’s proposal, which was developed jointly with others, including LinkedIn and Inflection co-founder Reid Hoffman and Ian Bremmer, the founder and president of Eurasia Group, a political risk research and consulting firm, calls for this new AI safety body to be modeled after the Intergovernmental Panel on Climate Change (IPCC), which has a mandate to provide policymakers with “regular assessments of the scientific basis of climate change, its impacts and future risks, and options for adaptation and mitigation”.
The IPAIS, built on broad international membership, would regularly and impartially evaluate the state of AI, its risks, potential impacts and estimated timelines, said the Op-ed. It would keep tabs on both technical and policy solutions to alleviate risks and enhance outcomes.
Just as the IPCC does not do its own fundamental research but acts as a central hub that gathers the science on climate change, crystallizing what the world does and doesn’t know in “authoritative and independent form”, an IPAIS would work in the same way, staffed and led by computer scientists and researchers rather than political appointees or diplomats, says the Op-ed.
An Argument Against “Premature” Legislation
Frenchman Yann LeCun, one of the world’s leading researchers in deep neural networks, who in 2018 jointly won the Turing Award for computer science with Geoffrey Hinton and Yoshua Bengio, has his own ideas. LeCun, who is Meta’s Chief AI Scientist, disagrees with Bengio and Hinton on the dangers posed by AI and is against what he sees as premature regulation.
Regulating leading-edge AI models today would be like regulating the jet airline industry in 1925, when jet airplanes had not even been invented, he said in an October 19 interview with the Financial Times. “The debate on existential risk is very premature until we have a design for a system that can even rival a cat in terms of learning capabilities, which we don’t have at the moment,” said LeCun, who will participate in the AI Safety Summit.
Meta, which has launched its own LLaMA generative AI model, has taken a different approach than other big tech companies, such as Google and Microsoft-backed OpenAI, in championing more accessible open-source AI systems. LeCun argued in the Financial Times interview that open-source models stimulate competition and enable a greater diversity of people to build and use AI systems. But critics fear that placing powerful generative AI models in the hands of potentially bad actors magnifies the risks of industrial-scale disinformation, cyber warfare and bioterrorism.
Using Legislation To Lock In Advantage
One key issue that has already emerged is licensing: the idea, now part of a legislative framework being proposed by two U.S. senators, that the U.S. government should require licenses for companies to work on advanced AI. Deborah Raji, an AI researcher at the University of California, Berkeley, who attended last month’s AI Insight Forum in the Senate, told Politico that she worries that Open Philanthropy-funded experts could help lock in the advantages of existing tech giants by pushing for a licensing regime. She said that would likely cement the importance of a few leading AI companies, including OpenAI and Anthropic, two firms with significant financial and personal links to Moskovitz and Open Philanthropy.
Brown University Computer Science Professor Venkatasubramanian told Politico that the message to lawmakers from researchers, companies and organizations aligned with Open Philanthropy’s approach to AI is simple — “‘You should be scared out of your mind, and only I can help you.’” And he said any rules placing limits on who can work on “risky” AI would put today’s leading companies in the pole position.
“There is an agenda to control the development of large language models — and more broadly, generative AI technology,” Venkatasubramanian said.
Public And Private Sector Investments Are Out Of Balance
Bengio, founder and scientific director of Mila, the Quebec AI Institute, whom some regard as a “godfather of AI”, expressed a different idea in an October 17 article published in both the Bulletin of the Atomic Scientists and Wired magazine. “In the future, we’ll need a humanity defense organization,” said Bengio. “We have defense organizations within each country. We’ll need to organize internationally a way to protect ourselves against events that could otherwise destroy us. It’s a longer-term view, and it would take a lot of time to have multiple countries agree on the right investments. But right now, all the investment is happening in the private sector. There’s nothing that’s going on with a public-good objective that could defend humanity.”
One of the leading risks to the development of the AI sector “is the imbalance between public and private sector investment in what will soon be a technology as ubiquitous as electricity,” Fei-Fei Li, co-director of the Stanford Institute for Human-Centered Artificial Intelligence and a former vice president at Google, told conference goers at a Wall Street Journal tech event this week.
U.S. government investment and incentives should at least match the U.S.’s investment in space exploration decades ago with the National Aeronautics and Space Administration, she said on stage. “This technology is as big or even bigger than the space technology,” said Li. “We cannot just leave it to the private sector.” Li said the Food and Drug Administration, Environmental Protection Agency and other government agencies should urgently take a role in regulating AI. “It is very hard to imagine one ring that rules them all,” she said.
The Safety Summit’s Objectives
The UK government says the first AI Safety Summit has five objectives:
- a shared understanding of the risks posed by frontier AI and the need for action
- a forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks
- appropriate measures which individual organisations should take to increase frontier AI safety
- areas for potential collaboration on AI safety research, including evaluating model capabilities and the development of new standards to support governance
- showcase how ensuring the safe development of AI will enable AI to be used for good globally
Given the diversity of opinions on a way forward, and the real risk that policymakers, who have little understanding of AI, could be manipulated by the world’s biggest tech companies, it is difficult to see what a good outcome might look like.
IN OTHER NEWS THIS WEEK
Dutch Consumer Group Sues Amazon Over Data Tracking, Corporate Lawyers Brace For Similar Complaints
A consumer-rights group in the Netherlands sued Amazon in October over its alleged practice of tracking website visitors’ online activity, using recently expanded legal provisions allowing class actions, reports The Wall Street Journal. The lawsuit, filed in a Dutch court by Stichting Data Bescherming Nederland, or SDBN, said Amazon is violating the European Union’s privacy law by monitoring visitors to popular websites through cookies—the pieces of code that identify individual browsers to create targeted advertisements—without their permission. An EU law that took effect in June requires the bloc’s 27 member nations to introduce legislation that will make it easier for consumer groups to bring class-action cases against companies. Corporate lawyers are bracing for a wave of similar complaints representing large groups of consumers.
FOOD AND BEVERAGE
Tyson Foods Bets On Insect Protein
Tyson Foods is making a two-fold investment in insect protein startup Protix to increase production for animal feed ingredients, aquaculture and pet food. Through an undisclosed direct equity investment, Tyson will take a minority stake in Protix and fund further expansion of the latter’s insect protein business. The two companies will also build and operate a facility in the US that will be the “first at-scale facility” of its kind to upcycle manufacturing byproduct into insect protein.
AI AND ROBOTICS
Amazon Adopts New AI And Robotics Capabilities In Its Warehouse Operations
Amazon is introducing an array of new artificial intelligence and robotics capabilities into its warehouse operations, aimed at reducing delivery times and helping identify inventory more quickly. The revamp will change the way Amazon moves products through its fulfillment centers with new AI-equipped sortation machines and robotic arms. It is also set to alter how many of the company’s vast army of workers do their jobs. Amazon says its new robotics system, named Sequoia after the giant trees native to California’s Sierra Nevada region, is designed for both speed and safety. Humans are meant to work alongside the new machines in a way that should reduce injuries, the company says.
Deutsche Bank Sets Emission Targets For Coal, Cement and Shipping Clients
Deutsche Bank has set emissions reduction targets for loans to clients in the coal mining, cement and shipping sectors and now has a net-zero plan for 55% of its financed emissions, its chief sustainability officer told Reuters. A key funder of polluting sectors, Germany’s biggest lender, like many of its peers, is under increasing pressure from policymakers and investors to push clients to curb climate-damaging emissions.