Focus On AI

This Week In AI: A Tale of Power, Profit, Politics And Purpose

This week’s news in AI, which revolved around the fate of OpenAI and its chief executive, is a tale of power, profit, politics and purpose.

OpenAI began as a nonprofit research lab because its founders didn’t think artificial intelligence should be pioneered by commercial firms. It needed to be developed by an organization, as OpenAI’s charter puts it, “acting in the best interests of humanity.” It experimented with having a not-for-profit board, responsible for ensuring the safe development of AI, oversee a for-profit commercial business. This uneasy relationship blew up over the past week when the board fired chief executive and co-founder Sam Altman.

Microsoft, which has invested $13 billion in OpenAI, called for the governance structure to change and offered Altman and OpenAI President Greg Brockman the chance to start a new AI-research group there. The majority of OpenAI’s staff threatened to resign. Altman returned to OpenAI. The organization’s board of directors is being overhauled: academics and researchers are being replaced by new directors who have extensive backgrounds in business and tech.

The full story behind the drama has yet to be revealed, but it raises the question: “Can one organization, or one person, maintain the brain of a scientist, the drive of a capitalist and the cautious heart of a regulatory agency?” as New York Times columnist David Brooks succinctly put it in a column titled “The Fight For The Soul of AI.” Or, as journalist Charlie Warzel wrote in The Atlantic, will the money always win out?

OpenAI’s internal struggle is not just about ensuring AI safety, although that, too, is a concern. It is about purpose. Will AI be harnessed to solve some of the world’s biggest problems, such as combating climate change and eradicating disease? Or will the world allow a handful of monopolists to harm society while building the equivalent of a new global pharma industry that controls access to breakthroughs and puts profits first?

Lost in the headlines about Sam Altman’s firing and rehiring was the November release of a chilling report from the Open Markets Institute that focuses on the dangers of allowing AI to be controlled by monopolies. The report, “AI in the Public Interest: Confronting the Monopoly Threat,” is not focused on the threat of artificial general intelligence (AGI) or killer drones. It outlines how just a handful of Big Tech companies – by exploiting existing monopoly power and aggressively co-opting other actors – have already positioned themselves to control the future of artificial intelligence and magnify many of the worst problems of the digital age. These problems include the spread of misinformation and the distortion of political debate, the decline of news and journalism, the undermining of compensation for creative work, the exploitation of workers and consumers, monopolistic abuse of smaller businesses and challengers, amplified surveillance advertising and online addiction, and the threat to resilience and security from extreme concentration.

The report details how tech giants broadly control the direction, speed, and nature of innovation in many, if not most, of the key technologies in the Internet tech stack. In addition to cloud capacity, computing technologies, and data, this includes chokeholds over computer and mobile phone operating systems, the standards and governance of the World Wide Web, and increasingly even the design and commercialization of semiconductors. These existing concentrations of power, in combination with their emerging dominance in AI, give this same handful of corporations the ability to determine when, how, and in whose interests AI is developed and rolled out. “Their control over AI’s ‘upstream’ infrastructure means they can easily identify any serious potential rival in its earliest stages and then move swiftly to crush, sidetrack, co-opt, or simply acquire the upstart,” says the report. “In short, these corporations are already shaping the entire ‘downstream’ ecosystem to serve their own short-term private interests in ways that will in many instances prevent other companies and individuals from using AI to solve urgent challenges and improve people’s lives.”

Indeed, Fortune magazine writer Jeremy Kahn observed on X (formerly known as Twitter) that “what OpenAI, Anthropic and DeepMind have all tried to do is raise billions and tap vast GPU resources of tech giants without having the resulting tech de facto controlled by them. I’m arguing the OpenAI fracas shows that might be impossible.”

The present efforts by gatekeepers such as Microsoft, Amazon, Google and Meta to dominate AI are made possible by the enormous financial and political advantages they have built up by exploiting their monopoly power, says the Open Markets Institute report. Not only do they already dominate almost all key links in the AI services and technology supply chains, they are also shaping the policy debate on whether and how to regulate AI in ways that protect and promote their existing interests. “This is largely thanks to their vast lobbying and influence systems, as well as to a carefully curated public narrative that their technical expertise is free from conflicts of interest,” says the report.

The report urges the use of existing laws – including antitrust and competition rules – to help curb the power of the tech giants. There is little agreement among governments or industry observers about the best way forward. On Oct. 30, the Biden administration released a major executive order “On the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence.” Britain announced that it would not regulate AI at all in the short term, preferring instead to maintain a “pro-innovation approach.” The European Union’s proposed AI Act may stall on concerns from France, Germany and Italy, all of which worry that the scrutiny of more powerful systems will simply mean those systems are developed elsewhere.

But regulation alone will not ensure that AI is used for the benefit of humanity.

To alleviate the risks of overreliance on private and corporate open-source deep innovation in AI, “governments must invest, in coordination with each other, in AI research to a compound output level that’s on par with deep innovation and research production of private actors taken as a whole and commit to open sourcing this output,” says serial AI startup founder Benoit Bergeret, Executive Director of the ESSEC Metalab for Data, Technology and Society, co-founder of Hub France AI and a member of the OECD Working Group on AI Futures.

“Private funding is good, but it needs to play in a world where there is significant competition from publicly funded teams,” Bergeret said in an interview with The Innovator. But, he cautioned, “research from publicly funded teams should also be actionable, which will be a true revolution in a world where traditionally, publicly funded research never gave a damn about transferring to the economy at scale.”

The role of government is crucial because in an AI world dominated by open-source innovation, “work disruption will occur quickly and will be hard to anticipate, potentially destabilizing workers, all the way to possibly impacting our democracies,” says Bergeret. “Governments must actively monitor and quantify these workforce impacts to inform and incentivize corporations, and make jobs that are immune to AI disruption more appealing economically and socially, to provide an acceptable professional new path to those impacted.”

Public money should be “invested ASAP in open research, as I said in my public address at OECD last week, to an extent that matches the research output levels achieved collectively by private AI research (OpenAI, Meta, Anthropic and everybody else),” says Bergeret. “The rationale for this is the steering of research towards a for-profit agenda that I think should be countered. It would be a pity to see AI become the next Big Pharma. Public funding of AI research should not just be commensurate to private funding (at least in research productivity), it should also be open sourced and synchronized across borders. No single nation outside of China can do it alone.”

Accelerating science could be the most economically and socially valuable use for artificial intelligence, Alistair Nolan, a senior policy analyst at the OECD, wrote in a blog post this week. It could lead to benefits that private AI research isn’t interested in, other than in a siloed fashion, and could help facilitate large-scale funding of AI research, says Bergeret.

Publicly funding a larger research project would be a good thing, but funding smaller, agile, precise teams that are passionate about cracking certain angles should also be encouraged, Bergeret argues. “If today’s methods require vast amounts to train (and scale), I want to think that current foundation models are just a branch of AI technology, and that alternatives leading to much more efficient learning can be developed,” he says. “We already see such advances from small groups, with edge learning making significant strides.”

There is no doubt that publicly funded, purpose-driven projects that purport to be a counterweight to the tech giants’ monopolies could attract some of the world’s best and brightest AI talent. The question is whether government projects that are likely to be driven by technocrats and burdened by red tape will be able to keep them.

IN OTHER NEWS THIS WEEK

Artificial Intelligence

Businesses, Tech Groups Warn EU On Over-Regulating AI Foundation Models

Businesses and tech groups on November 23 warned the European Union against over-regulating artificial intelligence systems known as foundation models in upcoming AI rules, as this could kill nascent start-ups or drive them out of the region. The plea came as EU countries and EU lawmakers head into the final stretch of negotiations on rules that could set the benchmark for other countries. One of the biggest bones of contention is foundation models, such as OpenAI’s ChatGPT, which are AI systems that are trained on large sets of data, with the ability to learn from new data to perform a variety of tasks. “For Europe to become a global digital powerhouse, we need companies that can lead on AI innovation also using foundation models and GPAI [General Purpose Artificial Intelligence],” DigitalEurope, whose members include Airbus, Apple, Ericsson, Google, LSE and SAP, said in a letter. Thirty-two European digital associations also signed the letter. The signatories, who said just 3% of the world’s AI unicorns come from the European Union, backed a joint proposal by France, Germany and Italy to limit the scope of AI rules for foundation models to transparency requirements.

OpenAI, Microsoft Hit With New Author Copyright Lawsuit

OpenAI and Microsoft were sued November 21 over claims that they misused the work of nonfiction authors to train the artificial intelligence models that underlie services like OpenAI’s chatbot ChatGPT. OpenAI copied tens of thousands of nonfiction books without permission to teach its large language models to respond to human text prompts, said author and Hollywood Reporter editor Julian Sancton, who is leading the proposed class action filed in Manhattan federal court. The lawsuit is one of several that have been brought by groups of copyright owners, including authors John Grisham, George R.R. Martin and Jonathan Franzen, against OpenAI and other tech companies over the alleged misuse of their work to train AI systems. The companies have denied the allegations. Sancton’s complaint is the first author lawsuit against OpenAI to also name Microsoft as a defendant. The company has invested billions of dollars in the artificial intelligence startup and integrated OpenAI’s systems into its products.

AI ‘Hit Squad’ Set Up To Cut Size of UK Civil Service and Boost Productivity

An artificial intelligence “hit squad” unit will be set up at the heart of Whitehall with a remit to shrink the size of the UK civil service and bolster public sector productivity, reports The Financial Times. Deputy Prime Minister Oliver Dowden plans to form a task force of 30 “high-end, technically capable” experts in AI and data engineering with an annual budget of about £5 million, to begin the process of transforming public services.

Cybersecurity

EU Mulls Wider Scope For Cybersecurity Certification Scheme

Reuters reported that the European Union is considering broadening the scope of proposed cybersecurity labelling rules that would affect not just Amazon, Alphabet’s Google and Microsoft but also banks and airlines, according to the latest draft of the rules. The EU move to set up such a system comes as Big Tech looks to the government cloud market to drive growth in the coming years, while a potential boom in artificial intelligence after the viral success of OpenAI’s ChatGPT could also boost demand for cloud services. The latest proposal from EU cybersecurity agency ENISA concerns an EU certification scheme which vouches for the cybersecurity of cloud services and determines how governments and companies in the bloc select a vendor for their business. The document retains key provisions contained in earlier drafts, such as a requirement that U.S. tech giants set up a joint venture with an EU-based company to qualify for the EU cybersecurity label.

About the author

Jennifer L. Schenker

Jennifer L. Schenker, an award-winning journalist, has been covering the global tech industry from Europe since 1985, working full-time, at various points in her career, for the Wall Street Journal Europe, Time Magazine, International Herald Tribune, Red Herring and BusinessWeek. She is currently the editor-in-chief of The Innovator, an English-language global publication about the digital transformation of business. Jennifer was voted one of the 50 most inspiring women in technology in Europe in 2015 and 2016 and was named by Forbes Magazine in 2018 as one of the 30 women leaders disrupting tech in France. She has been a World Economic Forum Tech Pioneers judge for 20 years. She lives in Paris and has dual U.S. and French citizenship.