News In Context

Lawsuit Takes Aim At The Way AI Is Trained

Many artists, writers, composers and privacy activists complain that companies are training their AI systems using data that does not belong to them. Now the legality of this practice is being put to the test.

A lawsuit recently filed in the U.S. is believed to be the first legal attack on AI training, according to a story in The New York Times. The lawsuit, which was filed in Los Angeles by Matthew Butterick, a programmer, designer, writer and lawyer, concerns Microsoft's Copilot tool, which uses a new kind of artificial intelligence technology that can generate its own computer code. Like many cutting-edge AI technologies, Copilot developed its skills by analyzing vast amounts of data. In this case, it relied on billions of lines of computer code posted to the Internet. In an interview with The New York Times, Butterick equates this process to piracy, because the system does not acknowledge its debt to existing work. His lawsuit claims that Microsoft and its collaborators violated the legal rights of millions of programmers who spent years writing the original code.

Companies train a wide variety of systems in this way, including art generators and speech recognition systems.

Copilot is based on technology built by OpenAI, an artificial intelligence lab in San Francisco backed by a billion dollars in funding from Microsoft. After Microsoft and GitHub released Copilot, GitHub’s chief executive, Nat Friedman, tweeted that using existing code to train the system was “fair use” of the material under copyright law, an argument often used by companies and researchers who built these systems. But no court case has yet tested this argument.

“The ambitions of Microsoft and OpenAI go way beyond GitHub and Copilot,” Butterick told The New York Times. “They want to train on any data anywhere, for free, without consent, forever.”

Training an AI system on copyrighted material is not necessarily illegal, notes the Times article. But doing so could be illegal if the system ends up creating material that is substantially similar to the data it was trained on. Some users of Copilot have said it generates code that seems identical — or nearly identical — to existing programs, an observation that could become the central part of Butterick's case and others.

IN OTHER NEWS THIS WEEK

CYBERSECURITY

EU Strengthens Cybersecurity

The European Council adopted legislation for a common level of cybersecurity across the Union, to further improve the resilience and incident response capacities of the public and private sectors and of the EU as a whole. The new directive, called NIS2, will replace the current directive on the security of network and information systems. NIS2 will set the baseline for cybersecurity risk management measures and reporting obligations across all sectors covered by the directive, such as energy, transport, health and digital infrastructure. The directive will also formally establish the European Cyber Crises Liaison Organisation Network, EU-CyCLONe, which will support the coordinated management of large-scale cybersecurity incidents and crises.

QUANTUM COMPUTING

U.S. And France To Enhance Cooperation On Quantum Computing

The United States and France signed a Joint Statement on Cooperation in Quantum Information Science and Technology in Washington, DC. The new plan builds on several existing agreements to strengthen U.S.-France cooperation in science and technology. This collaboration will allow both nations to work together in advancing quantum technology, which has the potential to solve the world’s most critical challenges.

ARTIFICIAL INTELLIGENCE

Amazon To Warn Customers Of AI Risks

Amazon is planning to roll out warning cards for software sold by its cloud-computing division, in light of ongoing concern that artificially intelligent systems can discriminate against different groups, the company told Reuters. Amazon’s so-called AI Service Cards will be public so its business customers can see the limitations of certain cloud services, such as facial recognition and audio transcription. The company said the goal is to prevent mistaken use of its technology, explain how its systems work and manage privacy.

OpenAI Releases Demo Of Its New Chat Technology

OpenAI has released a demo of a new model called ChatGPT, a spin-off of GPT-3, its groundbreaking large language model, that is geared toward answering questions via back-and-forth dialogue. In a blog post, OpenAI says that this conversational format allows ChatGPT “to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.”

FOOD AND BEVERAGE

Capsule Killer CoffeeB Ramps Up Distribution Of Sustainable Alternative

Swiss retail giant Migros, the company behind the CoffeeB coffee brewing system, announced it has struck a deal with EDEKA, Germany's largest retailer. According to the release, EDEKA will begin rolling out the capsule-less coffee system to its 11,000 stores in April 2023. Developed over five years, the CoffeeB system is a single-serve coffee machine that does away with the plastic pod or capsule. Instead of plastic or aluminum capsules, the system uses round balls of coffee called Coffee Balls. Coffee Balls, which are wrapped in a layer of algae that keeps the coffee fresh and protected from flavor loss, can be dropped into a compost bin after use.

TRANSPORTATION

Rolls-Royce Successfully Tests Hydrogen-Powered Jet Engine

Britain's Rolls-Royce said it has successfully run an aircraft engine on hydrogen, a world aviation first that marks a major step towards proving the gas could be key to decarbonising air travel. The ground test, using a converted Rolls-Royce AE 2100-A regional aircraft engine, ran on green hydrogen created by wind and tidal power. Rolls and its testing program partner easyJet are seeking to prove that hydrogen can safely and efficiently deliver power for civil aero engines.

To read more of The Innovator’s News In Context stories click here.

About the author

Jennifer L. Schenker

Jennifer L. Schenker, an award-winning journalist, has been covering the global tech industry from Europe since 1985, working full-time, at various points in her career, for the Wall Street Journal Europe, Time Magazine, International Herald Tribune, Red Herring and BusinessWeek. She is currently the editor-in-chief of The Innovator, an English-language global publication about the digital transformation of business. Jennifer was voted one of the 50 most inspiring women in technology in Europe in 2015 and 2016 and was named by Forbes Magazine in 2018 as one of the 30 women leaders disrupting tech in France. She has been a World Economic Forum Tech Pioneers judge for 20 years. She lives in Paris and has dual U.S. and French citizenship.