News In Context

The Year In AI: The Good, The Bad And The Ugly

The year in AI is ending in much the same way it began, with a sense of exuberance and unease. The view on this Janus-faced technology depends on the lens.

Generative AI’s impact on productivity could add trillions of dollars in value to the global economy, according to McKinsey. Its research estimates that about 75% of the value that Generative AI use cases could deliver falls across four areas: customer operations, marketing and sales, software engineering and R&D. This translates into big productivity gains: around $400 to $600 billion for retail, which can use AI to accelerate the collection of consumer insights and assist customers in their shopping journeys; $60 to $110 billion for life sciences, which can use the technology to make the drug discovery process more efficient and reinforce safety throughout the drug development lifecycle; and $200 to $340 billion for banking, which can use Generative AI to speed the transformation of legacy systems and automate reporting and risk documentation.

“Enterprise demand for AI and accelerated computing is strong,” says a December Generative AI report compiled by market intelligence company CB Insights. “We are seeing momentum in verticals such as automotive, financial services, healthcare, and telecom. AI and accelerated computing are quickly becoming integral to customers’ innovation roadmaps and competitive positioning.”

While businesses see potential efficiency gains, the world’s biggest tech companies see the chance to increase their profits and become ever more powerful. Apple, Amazon, Alphabet, Meta, Microsoft and Nvidia, all members of the $1 trillion market cap club, are investing heavily in AI.

But there is growing unease about what some refer to as the frantic and somewhat reckless race to develop AI. The most significant capabilities of AI models could, in the future, cause serious, even catastrophic, harm. And today’s AI systems already pose risks that the Open Markets Institute says include the spread of misinformation and distortion of political debate, the decline of news and journalism, the undermining of compensation for creative work, exploitation of workers and consumers, monopolistic abuse of smaller businesses and challengers, amplified surveillance advertising and online addiction, and the threat to resilience and security from extreme concentration.

News stories in December underscore the harms. The Financial Times wrote about how the use of deepfake videos ahead of an upcoming election in Bangladesh is exacerbating the fears of policymakers around the world, who worry about how AI-generated disinformation can be harnessed to mislead voters and inflame divisions ahead of several big elections next year. AI-generated deepfakes, some more convincing than others, have already entered U.S. election campaigns, and the technology is making it increasingly difficult to distinguish between real and fabricated war footage in Ukraine and Gaza.

Meanwhile, The Wall Street Journal ran a story titled “News Publishers See Google’s AI Search Tool As A Traffic-Destroying Nightmare.” The tech giant’s AI-powered search product is being tested on roughly 10 million users, and publishers who rely on Google for traffic are worried. “AI and large language models have the potential to destroy journalism and media brands as we know them,” Mathias Döpfner, chairman and CEO of Axel Springer, said in an interview with The Wall Street Journal, referring to the technology that makes generative AI possible. His company, one of Europe’s largest publishers and the owner of U.S. publications Politico and Business Insider, this week announced a deal to license its content to generative AI specialist OpenAI. Some see Döpfner’s move as a leap of faith akin to publishers believing that alliances with social media companies would benefit them. Those alliances did not pay off. Publishers are reeling from a major decline in traffic from social media sites, as both Meta and X, the former Twitter, have pulled away from distributing news.

The arts are also under threat, says a December story in The New Yorker. It points to how in September, while screenwriters were negotiating an end to their five-month strike after persuading the studios not to use A.I. scripts, the Authors Guild, along with a group of prominent novelists, filed a class-action suit against OpenAI, alleging that it used their copyrighted work without consent or compensation to train its AI models. Meanwhile, according to the Center for Artistic Inquiry and Reporting, A.I.-generated art enabled by tools such as OpenAI’s DALL-E 2 and Stability AI’s Stable Diffusion is “vampirical, feasting on past generations of artwork,” and arguably amounts to “the greatest art heist in history,” says The New Yorker story.

“I would call this an inflection moment,” pioneering AI scientist Fei-Fei Li told writer Matt O’Brien in an article published by the Associated Press in December. “2023 is, in history, hopefully going to be remembered for the profound changes of the technology as well as the public awakening. It was a year for people to figure out ‘what this is, how to use it, what’s the impact — all the good, the bad and the ugly,’” she said.

The Struggle to Rein In A Technology That Is Still Unfolding

In March, more than 30,000 people, including prominent technologists, signed a letter calling on A.I. companies to pause work on their most advanced technology for six months, to make room for some kind of regulation. It read, in part:

“Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” Such decisions, the letter argued, must not be delegated to unelected tech leaders.

The pause didn’t happen. Instead, there was intense lobbying to try to influence the way governments will deal with AI. Tech leaders argued that regulation will kill innovation. Respected AI scientists warned that if governments do not rein in AI, chaos will reign.

In response, the White House issued an executive order, the UK held a global AI Safety Summit and issued the Bletchley Declaration, the G7 released its Advanced AI Principles, Code of Conduct and Leaders’ Statement, the United Nations said it was creating an AI Advisory Body, and the European Union, despite opposition from tech companies and some key member states, agreed on December 8 to a sweeping new law to regulate artificial intelligence, one of the world’s first comprehensive attempts to limit the use of the rapidly evolving technology. The law still needs to go through a few final steps for approval, but the political agreement means its key outlines have been set. When the law goes into effect, foundation models, like the one powering the ChatGPT chatbot, will face new transparency requirements; chatbots and software that create manipulated images such as “deepfakes” will have to make clear that what people are seeing was generated by A.I.; and use of facial recognition software by police and governments will be restricted outside of certain safety and national security exemptions. (For more details on what the EU AI Act says, click here.)

As the first global power to pass comprehensive AI legislation, Europe is once again setting what could become worldwide regulatory standards — much as it did on digital privacy rules — but the EU AI Act will not be in force until 2025. The time lag is an issue because AI is developing so fast and there is no global agreement on how to regulate a technology that is still unfolding. In the meantime, AI companies are continuing to release new, more powerful versions of their technologies without guardrails, at breakneck speed.

In one week in December alone, Google launched a new set of generative artificial intelligence models that will run directly on mobile phones; Elon Musk said he is raising $1 billion in funding for xAI, an AI company he is launching as a rival to OpenAI, Microsoft and Alphabet’s Google; and Meta and IBM launched a coalition of more than 50 artificial intelligence companies and research institutions that are pushing a so-called open model of AI.

AI Everywhere, All The Time 

It is hard to believe that it was just a little more than a year ago when OpenAI released ChatGPT. Within five days, the chatbot had a million users. Within two months, it was logging a hundred million monthly users—a number that has now nearly doubled.

Shortly after ChatGPT came out, Google released its own chatbot, Bard; Microsoft incorporated OpenAI’s model into its Bing search engine; Meta débuted LLaMA; Anthropic launched Claude, a “next generation AI assistant”; and Reid Hoffman and Mustafa Suleyman ramped up Inflection AI, an AI studio that aims to create “a personal AI for everyone” and launched a new flagship large language model, Inflection-2, which it claims can outperform most major rivals with the exception of OpenAI’s GPT-4. If that seems like an exaggeration, consider that just in time for Christmas a Silicon Valley startup launched a new ChatGPT-powered stuffed animal that can carry on endless conversations with kids, according to Axios. The toy uses OpenAI’s models (mainly GPT-3.5 Turbo) to craft its dialogue, and it has WiFi and Bluetooth along with a speaker, microphone and accelerometer; it connects with a mobile app where parents can control settings and view past conversations.

The company behind the toy, Curio, a four-person startup based in Redwood City, California, is one of hundreds of startups that have piled into the GenAI space. Many of them are being snapped up by tech giants looking to dominate the field.

Confronting The Monopoly Threat

A November report, “AI in the Public Interest: Confronting the Monopoly Threat,” by the Open Markets Institute outlines how just a handful of Big Tech companies – by exploiting existing monopoly power and aggressively co-opting other actors – have already positioned themselves to control the future of artificial intelligence.

The report details how tech giants broadly control the direction, speed, and nature of innovation in many, if not most, of the key technologies in the Internet tech stack. In addition to cloud capacity, computing technologies, and data, this includes chokeholds over computer and mobile phone operating systems, the standards and governance of the World Wide Web, and increasingly even the design and commercialization of semiconductors. These existing concentrations of power, in combination with their emerging dominance in AI, give this same handful of corporations the ability to determine when, how, and in whose interests AI is developed and rolled out. “Their control over AI’s ‘upstream’ infrastructure means they can easily identify any serious potential rival in its earliest stages and then move swiftly to crush, sidetrack, co-opt, or simply acquire the upstart,” says the report. “In short, these corporations are already shaping the entire ‘downstream’ ecosystem to serve their own short-term private interests in ways that will in many instances prevent other companies and individuals from using AI to solve urgent challenges and improve people’s lives.”

Indeed, Fortune magazine writer Jeremy Kahn observed on X (formerly known as Twitter) that “what OpenAI, Anthropic and DeepMind have all tried to do is raise billions and tap vast GPU resources of tech giants without having the resulting tech de facto controlled by them. I’m arguing the OpenAI fracas shows that might be impossible.”

OpenAI began as a nonprofit research lab because its founders didn’t think artificial intelligence should be pioneered by commercial firms. It needed to be developed by an organization, as OpenAI’s charter puts it, “acting in the best interests of humanity.” The lab experimented with having a not-for-profit board, responsible for ensuring the safe development of AI, oversee a for-profit commercial business. This uneasy relationship blew up in November when the board fired chief executive and co-founder Sam Altman.

Microsoft, which has invested $13 billion in OpenAI, called for the governance structure to change and offered Altman and OpenAI President Greg Brockman the chance to start a new AI research group there. The majority of OpenAI’s staff threatened to resign. Altman returned to OpenAI. The organization’s board of directors is being overhauled: academics and researchers were replaced by new directors with extensive backgrounds in business and tech.

The full story behind the drama has yet to be revealed, but it raises the question: “Can one organization, or one person, maintain the brain of a scientist, the drive of a capitalist and the cautious heart of a regulatory agency?” as New York Times columnist David Brooks succinctly put it in a column titled “The Fight For The Soul of AI.” Or, as journalist Charlie Warzel wrote in The Atlantic, will the money always win out?

OpenAI’s internal struggle is not just about ensuring AI safety, although that, too, is a concern. It is about purpose. Will AI be harnessed to solve some of the world’s biggest problems, such as combating climate change and eradicating disease? Or will the world allow a handful of monopolists to harm society while building the equivalent of a new global pharma industry that controls access to breakthroughs and puts profits first?

The Outlook For 2024

A lack of firm guardrails allowed tech companies to unleash products this year that were not ready for prime time. It is no accident that dictionary.com picked “hallucinate” as its 2023 word of the year. The definition, when it comes to AI, is “to produce false information contrary to the intent of the user and present it as if true and factual.”

“Hallucinate as our 2023 Word of the Year encapsulates technology’s continuing impact on social change, and the continued discrepancy between the perfect future we envision and the messy one we actually achieve,” Grant Barrett, dictionary.com’s head of lexicography, was quoted as saying in a CBS News story.

2024 could be messier. The danger is that 2024 will become the new 1984, as AI makes it harder to discern what is true, further undermining trust. And, while AI promises to free us from drudgery so we can engage in creative work, this will not be true for all workers, leaving a certain percentage unemployable. There is also a risk that the technology divide between the global South and North will grow even larger, further increasing the chances of social unrest.

The systems that make up the world order are being disrupted by a technology we do not yet fully understand, and new ones are not being created apace. If we are not careful 2024’s AI word of the year could well end up being “destabilize.”

To access more of The Innovator’s News In Context articles, click here.

 

About the author

Jennifer L. Schenker

Jennifer L. Schenker, an award-winning journalist, has been covering the global tech industry from Europe since 1985, working full-time, at various points in her career for the Wall Street Journal Europe, Time Magazine, International Herald Tribune, Red Herring and BusinessWeek. She is currently the editor-in-chief of The Innovator, an English-language global publication about the digital transformation of business. Jennifer was voted one of the 50 most inspiring women in technology in Europe in 2015 and 2016 and was named by Forbes Magazine in 2018 as one of the 30 women leaders disrupting tech in France. She has been a World Economic Forum Tech Pioneers judge for 20 years. She lives in Paris and has dual U.S. and French citizenship.