While for-profit tech companies race ahead with new AI models, a group of prominent artificial intelligence experts called on European officials to pursue even broader regulation of the technology in the European Union’s AI Act. China, meanwhile, unveiled a number of draft measures for managing generative AI services, including making providers responsible for the validity of data used to train generative AI tools, and the U.S. said it would weigh accountability measures.
These moves come as concern grows about the impact of generative AI on society. Late last month, more than 1,800 signatories — including Elon Musk and Apple co-founder Steve Wozniak — called for a six-month pause on the development of systems “more powerful” than GPT-4. Among the signatories was UK-based veteran tech entrepreneur and venture capitalist Ian Hogarth, who has backed more than 50 AI start-ups in Europe and the U.S., including Anthropic, one of the world’s highest-funded generative AI start-ups, and Helsing, a leading European AI defense company. This week Hogarth wrote an article in the Financial Times after attending a London dinner party with people in the AI community who acknowledged that developments would bring significant risks but were racing ahead anyway.
“It felt deeply wrong that consequential decisions potentially affecting every life on Earth could be made by a small group of private companies without democratic oversight,” wrote Hogarth.
In his essay, Hogarth called for government action. “We are not powerless to slow down this race,” he wrote. “If you work in government, hold hearings and ask AI leaders, under oath, about their timelines for developing God-like AGI [Artificial General Intelligence]. Ask for a complete record of the security issues they have discovered when testing current models. Ask for evidence that they understand how these systems work and their confidence in achieving alignment. Invite independent experts to the hearings to cross-examine these labs.”
In a policy brief released Thursday, more than 50 individual experts and institutional signatories who share some of the same concerns advocated for the EU to include general purpose AI (GPAI) in its forthcoming regulations, rather than limiting the regulations to a narrower definition.
The group, which includes institutions like the Mozilla Foundation and AI experts like Timnit Gebru, says that even though general purpose tools might not be designed with high-risk uses in mind, they could be used in different settings that make them higher risk. The group points to generative AI tools that have risen in popularity over the past few months, like ChatGPT.
Regulation should consider how AI is developed, including how data has been collected, who was involved in the collection and training of the technology, and more, according to Mehtab Khan, a signatory and resident fellow and lead at the Yale/Wikimedia Initiative on Intermediaries and Information. “GPAI should be regulated throughout the product cycle and not just the application layer,” Khan said, adding that simple labels for high and low risk “are just inherently not capturing the dynamism” of the technology.
The group suggests that European policymakers take steps to future-proof the legislation, such as by avoiding restricting the rules to certain types of products, like chatbots. And they warn that developers should not be able to shirk liability by pasting on a standard legal disclaimer.
Italy is not waiting for Europe to pass legislation. Its data protection agency set out a list of demands on April 12, which it said OpenAI must meet by April 30 to address the agency’s concerns over the ChatGPT chatbot and allow the artificial intelligence service to resume in the country. Almost two weeks ago Microsoft-backed OpenAI took ChatGPT offline in Italy after the authority, known as the Garante, temporarily restricted its personal data processing and began a probe into a suspected breach of privacy rules.
On April 11, the Cyberspace Administration of China (CAC) unveiled its draft measures for managing generative AI services. The CAC said providers should be responsible for the validity of data used to train AI tools and that measures should be taken to prevent discrimination when designing algorithms and training data sets, according to a report by Reuters. Firms will also be required to submit security assessments to the government before launching their AI tools to the public.
If inappropriate content is generated by their platforms, companies must update the technology within three months to prevent similar content from being generated again, according to the draft rules. Failure to comply with the rules will result in providers being fined, having their services suspended, or facing criminal investigations. Any content generated by generative AI must be in line with the country’s core socialist values, the CAC said.
China’s tech giants have AI development well under way. The CAC announcement was issued on the same day that Alibaba Cloud announced a new large language model, called Tongyi Qianwen, that it will roll out as a ChatGPT-style front end to all its business applications. Last month, another Chinese internet services and AI giant, Baidu, announced a Chinese language ChatGPT alternative, Ernie bot.
Meanwhile, the U.S. government said April 11 that it is seeking public comments on potential accountability measures for artificial intelligence (AI) systems as questions loom about their impact on society.
Separately, Senate Majority Leader Chuck Schumer said April 13 that he was launching an effort to establish rules on artificial intelligence to address national security and education concerns, as use of programs like ChatGPT becomes widespread. Schumer said in a statement that he had drafted and circulated a “framework that outlines a new regulatory regime that would prevent potentially catastrophic damage to our country while simultaneously making sure the U.S. advances and leads in this transformative technology.”
Schumer’s plan will need the approval of Congress and the White House and could still take months or more. However, it is the most concrete sign yet that the U.S. government may adopt new regulations to address rising concerns about generative AI.
“Time is of the essence to get ahead of this powerful new technology to prevent potentially wide-ranging damage to society and national security and instead put it to positive use by advancing strong, bipartisan legislation,” Schumer said.
IN OTHER NEWS THIS WEEK
Europe To Launch Cyber Shield
In an April 5 speech at the opening of the International Cybersecurity Forum, European Commissioner Thierry Breton announced that in a few weeks’ time the Commission will propose a Cyber Solidarity Act to establish a European infrastructure of security operation centers (SOCs), which will scan networks using artificial intelligence technologies and detect weak signals of attacks. This common European advanced detection infrastructure will form a European cyber shield, a sort of “European protection dome.” “It will be, so to speak, our Cyber Galileo,” said Breton. He noted that Europe has already launched a pilot project that brings together 17 countries in three large SOCs that will be deployed this year, even before the Act is negotiated. “Beyond SOCs, we also need to strengthen the security and resilience of our critical infrastructures (airports, power plants, gas pipelines, electricity networks, etc.),” he said.
IMF Engaging With Dozens Of Countries on CBDCs
The International Monetary Fund is publishing a CBDC handbook amid growing demand for its assistance that has already seen the fund engage with nearly 30 countries investigating digital currencies. A 2021 BIS survey found that nine out of 10 central banks are exploring CBDCs, with half developing or running concrete experiments. Over 40 IMF member countries have approached the body for technical assistance on CBDCs, with questions ranging from objectives and design choices to pilots and analysis of macro-financial implications.
FOOD AND AGRICULTURE
Bel Group To Team With AI Startup To Make Plant-Based Cheese
France’s Bel Group—the multinational firm behind cheese brands Babybel, The Laughing Cow, and Boursin—has teamed up with AI-powered startup Climax Foods to develop plant-based versions of its iconic brands for launch in Europe and the US by the end of 2024, starting with Mini Babybel. Bel, which has also acquired an equity stake in California-based Climax Foods, aims to co-create plant-based cheeses “indistinguishable from their dairy counterparts” that will be manufactured in Bel factories around the world, chief venture officer Caroline Sorlin told AFN. The aim is to generate half of Bel’s revenues from plant-based, fruit-based, or animal-free products by 2030, she said. “As a dairy company, we’re making our best efforts to reduce our carbon footprint through regenerative agriculture and other initiatives, but it’s not enough; we need to find other solutions.”
Streaming Services Told to Clamp Down On AI-Generated Music
Universal Music Group has told streaming platforms, including Spotify and Apple, to block artificial intelligence services from scraping melodies and lyrics from its copyrighted songs, according to emails viewed by the Financial Times. UMG, which controls about a third of the global music market, has become increasingly concerned about AI bots using its songs to train themselves to churn out music that sounds like popular artists. AI-generated songs have been popping up on streaming services and UMG has been sending takedown requests “left and right,” a person familiar with the matter told the Financial Times. The company is asking streaming companies to cut off access to its music catalogue for developers using it to train AI technology.