Deep Dives

The Global AI Race And Its Consequences

If the Chinese government has its way, the country will be the world’s dominant player in artificial intelligence by 2030.

That’s the plan. And it appears to be on track.

Governments across the world are rushing to craft AI policies, but none have published a plan as comprehensive as the one outlined by the Chinese government in July 2017. And few, if any, have an equivalent ability to execute.

As a result, the U.S., now the front-runner, risks losing its lead in the global AI race. Europe is expected to end up a distant third. And if pundits are right, the rest of the world will find it difficult, if not impossible, to catch up.

The winners of the global race for dominance in AI stand to reap enormous economic benefits. Already the potential impact of AI on GDP in China is expected to be greater than in the U.S. or Western Europe. That is also projected to translate into stronger job creation potential in the country, according to a new report from PwC released at the World Economic Forum’s Annual Meeting of The New Champions in Tianjin Sept. 18–20.

How did China, the world’s largest consumer market and, with nearly 1.4 billion people, the world’s most populous country, leap ahead? Years of work to become a digital-first economy have resulted in a plethora of consumer data at the disposal of the government and big tech companies, says a recent report by the research firm CB Insights. And the more data you feed AI, the better it becomes.

Government support for and intervention in AI development is boosting China’s fast-growing tech market. China’s Science Ministry announced that the nation’s first wave of open AI platforms will rely heavily on the country’s three Internet giants: Baidu for autonomous driving, Alibaba for smart cities and Tencent for AI in healthcare.

Tencent, which runs the social networking service WeChat, has access to over one billion users on its platform, while Baidu is the country’s largest search provider and Alibaba is its biggest e-commerce platform. All three offer a widening range of products and services, and like the biggest tech giants in the U.S., have far-reaching global ambitions.

The three Internet giants, collectively known as BAT, are expanding into other countries in Asia, recruiting U.S. talent, investing in U.S. AI startups and forming global partnerships to advance smart city solutions, autonomous driving, conversational AI and predictive healthcare, among other initiatives, says the CB Insights report.

Baidu, Alibaba and Tencent have participated in 39 equity deals in startups building AI software and AI chips since 2014, says the report. A major portion of the deals — around 44% — went to startups in the U.S. In contrast, the report says, Facebook, Apple, Google, Microsoft and Amazon “have a negligible private market footprint in China,” noting that there has been only one equity deal among this group into a Chinese startup: Google’s investment in the voice startup Mobvoi.

All three Internet giants are investing in autonomous vehicle technology and have snapped up startups in this area. The reason? China is likely to emerge as the world’s largest market for autonomous vehicles and mobility services, worth more than $500 billion by 2030, according to an annual report on the nation’s innovation economy by the South China Morning Post and 500 Startups, a Silicon Valley venture fund and seed accelerator. That leadership means China will likely have tremendous global influence over design, operating rules, and sales of autonomous vehicles, since breakthroughs in each of these areas are expected to be driven by its companies and regulators.

But the AI race involves more than economic and technological might. As AI becomes better at mimicking and surpassing humans, the countries and companies who control the technology could end up serving as mission control for humanity.

AI, like all technologies, is Janus-faced: its impact depends on who designs the technology and how it is applied. For example, AI-equipped surveillance technology can be used to make cities safer and more efficient or as a tool of dystopian societies.

The dark side of AI is increasingly in the spotlight. In his book “Superintelligence: Paths, Dangers, Strategies,” the philosopher Nick Bostrom of Oxford University imagines an AI that has been programmed to make as many paper clips as possible. It ruthlessly transforms all of Earth, and then an ever-increasing portion of outer space, into paper clip manufacturing facilities. Bostrom’s book was one of the things that inspired Elon Musk, the U.S. billionaire behind Tesla and SpaceX, to say that AI is “potentially more dangerous than nukes.” Musk, the late physicist Stephen Hawking and others in the scientific community signed an open letter calling for a ban on autonomous military weapons and for research to ensure that AI systems are beneficial to humanity.

Concerns go far beyond killer drones and superintelligent systems running amok. When Axon, the U.S.’s biggest seller of police body cameras, voiced interest in building face-recognition technology and other AI capabilities into real-time video, which would have allowed officers to scan and recognize the faces of potentially everyone they see while on patrol, some 42 civil rights, technology and privacy groups protested. They wrote a letter urging an outright ban on face recognition on police body cams, which they called “categorically unethical to deploy,” in part because of the technology’s privacy implications.

In China, police regularly use AI-powered surveillance technology to capture fugitives and shame jaywalkers, part of an effort to make cities safer and more efficient and to exert more control. The reliance on AI-powered technology has created a booming industry: the Beijing startup SenseTime, which makes surveillance technology, has a valuation of $4.5 billion, making it the world’s most highly valued AI startup.

Concerns about AI’s uses also include bias. The letter from U.S. civil rights groups pointed out that recent research found that most facial-recognition systems perform far less accurately when assessing people with darker skin, opening the potential for an AI-enabled police officer to misidentify an innocent person as a dangerous criminal, with potentially deadly consequences.

The application of AI to areas such as loan services and recruitment, and its use by the Chinese government to give people a “social score” based on their behavior, is leading to calls for transparency and for standards to prevent discrimination and marginalization. One way of doing this is to require algorithms to be interpretable to end users: companies would describe how their algorithms work and articulate rationales for their decisions. For example, the European Union has made “explainability” a check on the potential dangers of AI, guaranteeing a person’s right to obtain “meaningful information” about certain decisions made by an algorithm.

As technology companies move to hardwire ethics into AI, questions are being raised not only about how machines make decisions but about what values are being used to underpin them, as there is no global consensus on ethics or on issues such as data privacy.

The Cambridge Analytica scandal made the public aware of the growing power and influence of technology companies and the way they share and use data. Some see this as a chance for Europe to gain a competitive edge by offering an alternative form of AI, one that safeguards European values such as data privacy and democracy by hardwiring them into the technology.

The push in Europe to encode “European values” is a reaction to a perception that technology companies in the U.S. are out only to maximize profits with no accountability to government, and that China primarily wants to use AI to control its population and does not care about data privacy.

But data privacy is now on the agenda in China.

Part of the country’s plan to dominate in AI is to devote considerable efforts to standard-setting processes in AI-driven sectors, including algorithmic transparency, liability, bias, and privacy, says a June analysis of China’s involvement in AI standards written by Jeffrey Ding, the China lead for the Governance of AI Program at the Future of Humanity Institute; Paul Triolo, China Digital Economy Fellow at New America; and Samm Sacks, Senior Fellow, Technology Policy, Center for Strategic and International Studies.

Chinese organizations released an in-depth white paper on AI standards in January and hosted a major international AI standards meeting in Beijing in April.

The white paper’s discussion of data privacy standards reflects an emerging, important debate over privacy protections in China, says the analytical report by Ding, Triolo and Sacks. “On the one hand, there is demand from the public for restrictions on how companies collect and use personal information. These concerns are reflected in a standard for personal information security which aims to strengthen user control over how their data is handled by companies. But at the same time, the government does not want to make the rules too strict for companies in a way that would inhibit AI development.”

While it is important to bring China into the global dialogue and help set standards, the paper written by Ding, Triolo and Sacks warns that should Chinese officials and experts succeed in influencing such standards and related AI governance discussions, “the policy landscape may skew toward the interests of government-driven technical organizations, attenuating the voices of independent civil society actors that inform the debate in North America and Europe, because these organizations do not have a voice [in China].”

China’s Sputnik Moment

China’s ambitious plans to dominate AI can be traced back to the moment the computer program AlphaGo scored its first high-profile victory in March 2016, winning four games to one in a five-game series of the strategic board game Go against an expert human player. While barely noticed by most Americans, the five games drew more than 280 million Chinese viewers, notes Kai-Fu Lee, the former head of Google China, in his new book “AI Superpowers: China, Silicon Valley And The New World Order.”

“Overnight, China plunged into an artificial intelligence fever,” notes a passage in the book. It was China’s Sputnik moment, comparable to when the Soviet Union launched the first human-made satellite into orbit in October 1957, an event that sparked widespread U.S. public anxiety about perceived Soviet technological superiority and triggered what became known as the Space Race, Lee says in the book.

Lee, now the dean of China’s new AI Research Center and the CEO and founder of Sinovation Ventures, a Chinese early-stage venture firm with a presence in Beijing, Shanghai, Shenzhen and Silicon Valley, says China is rapidly catching up to the U.S. and may surpass it.

As AI companies in the U.S. and China accumulate more data and talent, the virtuous cycle of data-driven improvements will widen their lead to a point where “it will become insurmountable,” says the book.

That is not necessarily a bad thing, says Lee. While the book acknowledges there is a real threat that AI, which will have a huge impact on the labor force, could lead to “tremendous social disorder and political collapse stemming from widespread unemployment and gaping inequality,” Lee says he believes worst-case scenarios can be avoided.

The new book “describes this competition as an enabler, not as a destroyer,” Lee said in an interview with The Innovator earlier this year. “The technologies underneath the applications are well known and their applications mostly benign, unlike the nuclear weapon arms race. So I expect China and the West to leverage their strengths and make great progress, creating wealth and innovations benefiting all of mankind.”

Just how positive global control of AI by China and the U.S. would be is still a matter of debate. So what steps should countries take to better their position? The elements of national power in the age of AI include owning large quantities of the right type of data, training and enabling AI-capable talent pools, having the right computing resources and fostering public-private partnerships, says a July report by the Center for a New American Security (CNAS).

“It is crucial that both the public and private sectors play a role to fully realize the promise of AI,” says Paul Daugherty, Accenture’s chief technology & innovation officer. “The public sector must focus on national investment, an R&D agenda, data policies, workforce and education initiatives, and favorable regulatory environments. The private sector should focus on entrepreneurship, investment, worker reskilling and talent development, and business innovation. Both must focus on what Accenture calls Responsible AI, which is about adopting AI policies, guidelines, and implementations that ensure transparency, fairness, and accountability,” he says. “A new, tighter level of collaboration, integration and commitment across the public and private sectors is absolutely essential.”

Managing AI’s Use

Managing the creation and use of AI technology is crucial, given AI’s ability to influence defense, diplomacy, intelligence, economic competitiveness, social stability and the diffusion of information, notes the CNAS report.

“The sharper the competition…the greater the need to also think about the potential for a race to the bottom in AI safety,” says the CNAS report. “As countries and companies competitively create AI applications, especially if they believe that there are large advantages to being first movers, there is a risk that countries may put aside safety and reliability concerns due to the desire to be first. Such a race to the bottom would escalate the potential for AI-driven accidents, both in the commercial and military sectors.”

As with past industrial revolutions, the outcomes of this race for technological dominance will depend not just on the technology itself but on how companies, governments and people use it.

About the author

Jennifer L. Schenker

Jennifer L. Schenker, an award-winning journalist, has been covering the global tech industry from Europe since 1985, working full-time at various points in her career for the Wall Street Journal Europe, Time Magazine, International Herald Tribune, Red Herring and BusinessWeek. She is currently the editor-in-chief of The Innovator, an English-language global publication about the digital transformation of business. Jennifer was voted one of the 50 most inspiring women in technology in Europe in 2015 and 2016 and was named by Forbes Magazine in 2018 as one of the 30 women leaders disrupting tech in France. She has been a World Economic Forum Tech Pioneers judge for 20 years. She lives in Paris and has dual U.S. and French citizenship.