On July 24 OpenAI CEO Sam Altman officially launched Worldcoin, a controversial cryptocurrency project that aims to create a global identification system by scanning users’ irises to help distinguish them from bots and to provide the infrastructure to distribute a range of financial services, including universal basic income. By day three Altman reported on Twitter: “Crazy lines around the world. One person getting verified every 8 seconds now.”
The project has 2 million users from its beta period and is now scaling up operations to 35 cities in 20 countries. The venture has been accused of deceptive practices during that beta phase, and those accusations, together with its bid to collect biometric data and provide global financial services, have prompted pundits to predict an inevitable clash with regulators.
“Risks include unavoidable privacy leaks, further erosion of people’s ability to navigate the Internet anonymously, coercion by authoritarian governments, and the potential impossibility of being secure at the same time as being decentralized,” Vitalik Buterin, the co-creator of Ethereum, a blockchain platform for decentralized financial applications, said in a July 24 blog posting.
The launch of Worldcoin is the latest in a string of advances at companies backed or led by Altman, including OpenAI’s release of ChatGPT in November of last year and the announcement in July that Oklo, a nuclear fission start-up chaired by Altman, is to go public in a deal valuing the company at $850 million.
“These are independent parts of a specific vision of the future which I believe in,” Altman said in an interview with the Financial Times. “But they’re all doing their own things and they all work independently.” Collectively, the FT notes “Altman’s projects could reshape society and their success would place him at the heart of a powerful network of companies.”
In the interview with the Financial Times, Altman insisted he had no intention of “disintermediating” governments but suggested the public sector had “a lack of will” to lead innovation. “People ask me periodically, ‘Don’t you think this should be done by the government? Isn’t it horrible that you are doing this as a private tech company?’” he said. “Why don’t you ask the government why they aren’t doing these things? Isn’t that the horrible part?”
Worldcoin is not the first Silicon Valley tech company to try to launch a global digital currency. Facebook’s Libra project (later renamed Diem) had the same goal, but regulators and politicians ultimately killed it, due in large part to a lack of trust in tech and the companies that develop it. Critics point to the propensity of not just Facebook (now called Meta) but all powerful tech companies to move fast without considering unintended consequences and to sometimes cynically use non-tech-savvy members of the public as dupes.
The MIT Technology Review accuses Worldcoin of using such deceptive practices. “The startup promises a fairly-distributed, cryptocurrency-based universal basic income. So far all it’s done is build a biometric database from the bodies of the poor,” the magazine wrote in a 2022 article about Worldcoin’s beta testing. The magazine’s investigation said it found “wide gaps between Worldcoin’s public messaging, which focused on protecting privacy, and what users experienced. We found that the company’s representatives used deceptive marketing practices, collected more personal data than it acknowledged, and failed to obtain meaningful informed consent.”
Meanwhile the U.S. Federal Trade Commission announced July 13 that it has launched a probe into OpenAI, looking at whether the company has engaged in “unfair or deceptive” privacy and data security practices or harmed people by creating false information about them.
Reuters also reported that U.S. senators Elizabeth Warren, a Democrat, and Lindsey Graham, a Republican, said on July 27 that they would push for an ambitious bill to create a new U.S. government regulator empowered to rein in Meta Platforms’ Facebook, Alphabet’s Google, Amazon.com and other Big Tech platforms.
At a separate hearing earlier this week, a trio of influential artificial intelligence leaders testified before Congress, warning that the frantic pace of AI development could lead to serious harms within the next few years. Yoshua Bengio, an AI professor at the University of Montreal who is known as one of the fathers of modern AI science, said the United States should push for international cooperation to control the development of AI, outlining a regime similar to international rules on nuclear technology, according to a Washington Post article. Dario Amodei, the chief executive of AI start-up Anthropic, said he fears cutting-edge AI could be used to create dangerous viruses and other bioweapons in as little as two years. And Stuart Russell, a computer science professor at the University of California at Berkeley, said the way AI works means it is harder to fully understand and control than other powerful technologies. Russell said a new regulatory agency specifically focused on AI will be necessary. He predicts the tech will eventually overhaul the economy and contribute a massive amount of growth to GDP, and therefore will need robust and focused oversight.
A gap in thinking about what type of policing is needed for tech companies was evident in an announcement made this week by AI players. Four of the preeminent AI companies are coming together to form a new industry body designed to ensure “safe and responsible development” of so-called “frontier AI” models. In response to growing calls for regulatory oversight, ChatGPT developer OpenAI, Microsoft, Google and Anthropic announced the Frontier Model Forum, a coalition that draws on the expertise of member companies to develop technical evaluations and benchmarks, and promote best practices and standards.
While the Frontier Model Forum is designed to demonstrate that the AI industry is taking safety concerns seriously, it also highlights Big Tech’s desire to stave off incoming regulation through voluntary initiatives and, as TechCrunch noted, “perhaps go some way toward writing its own rules.”
IN OTHER NEWS THIS WEEK
FOOD AND BEVERAGE
Israel’s Steakholder Foods Enters 3D Printed Fish Pilot With Gulf State
Steakholder Foods, an Israeli deep tech food company, has struck a deal to launch a pilot plant for 3D-printed “hybrid-fish products” with an unnamed member of the Gulf Cooperation Council, an economic union between Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, and the United Arab Emirates.

“The collaboration between the partners will leverage Steakholder Foods’ expertise in providing mature Ready-to-Cook 3D printer technologies and customized bio-inks, tailored to produce a wide range of species-specific cultivated fish and meat products, as well as vegetable-based products. The partners seek to utilize Steakholder Foods’ advanced technology to overcome the limitations of traditional fish and meat production, ensuring consistent, nutritious, and safe food products that closely mimic the taste, texture, and appearance of conventional meat, fish and vegetable-based products,” according to a Steakholder Foods press release.

“The agreement foresees a material initial down payment to Steakholder Foods for the procurement of its 3D-printer technologies, followed by a milestone-based sales and procurement plan for industrial-scale output,” the company said.
Aleph Farms Applies To Sell Its Cultivated Beef In Switzerland
Israeli startup Aleph Farms has applied to Swiss food safety regulators for approval to sell its cultivated beef steaks in its first European market: Switzerland. A spokesperson told AgFunderNews that Aleph had had lengthy conversations with the Swiss Federal Food Safety and Veterinary Office (FSVO) to understand what was required under the country’s novel foods rules before submitting a safety dossier.

Aleph and Swiss supermarket chain Migros, an early investor in the company, have “conducted extensive consumer research in Switzerland and navigated the intricacies of the country’s regulatory landscape for novel foods,” said the spokesperson.
CYBERSECURITY
U.S. Adopts New Cyber Rules
The Securities and Exchange Commission (SEC), Wall Street’s top regulator, adopted new rules on July 26 requiring publicly traded companies to disclose hacking incidents, a measure officials said was to help the investing public contend with the mounting cost and frequency of cyber attacks. The new cybersecurity rule will require companies to disclose a cyber breach within four days after determining it is serious enough to be material to investors. The rule would allow delays if the Justice Department deems them necessary to protect national security or police investigations, the SEC said. Companies will also have to periodically describe their efforts to identify and manage threats in cyberspace. The rule, first proposed in March 2022, forms part of a broader SEC effort to harden the financial system against data theft, systems failure and cyber-intrusions.
HEALTH
Could Worms Revived After 46,000 Years Hold The Secret To Helping People Survive Being Frozen or Hurtled Into Outer Space?
Scientists said they have revived worms buried in Siberian permafrost for 46,000 years. The half-dozen creatures, a type of nematode or roundworm, survived for millennia in permafrost by entering a state of suspended animation, according to a paper published Thursday in the journal PLOS Genetics. Genetic testing suggests the worms are a new and possibly extinct species, researchers said. “This paper could make people consider this third condition between life and death,” Teymuras Kurzchalia, co-author of the study and a biologist at the Max Planck Institute of Molecular Cell Biology and Genetics in Germany, told The Wall Street Journal. Could the secrets of cryptobiosis help people survive being frozen or hurtled into outer space? That remains in the realm of science fiction, scientists told the Journal, but further study could reveal mechanisms, such as the workings of genes or proteins, that help cryptobiotic creatures survive in extreme conditions. Such insight could one day be harnessed to make people more resilient, researchers said.
To access more of The Innovator’s News In Context articles click here.