Interview Of The Week: De Kai, AI Expert

De Kai is Professor of Computer Science and Engineering at the Hong Kong University of Science and Technology (HKUST) and Distinguished Research Scholar at Berkeley’s International Computer Science Institute. He serves on the board of the AI ethics think tank The Future Society and was one of eight inaugural members of Google’s AI ethics council. He invented and built the world’s first global-scale online language translator. For his pioneering contributions to AIs like Google/Yahoo/Microsoft Translate, De Kai was honored by the Association for Computational Linguistics as one of only seventeen Founding Fellows and by Debrett’s as one of the 100 most influential figures in Hong Kong. His work spans artificial intelligence, cognition, language, music, creativity, ethics, society, and policy. De Kai, a speaker at the CogX Festival in London Sept. 12-14, recently spoke to The Innovator about responsible AI.

Q:  What worries you the most about AI?

DK: I just gave a TED talk about how our biggest fears should not be AI overlords or robots like Ex Machina but rather fear itself. AI amplifies human fear, upsetting checks and balances and driving polarization, hatred, and paranoia. It is also helping arm the population with weapons of mass destruction. In the current world order, weapons of mass destruction are resources controlled at the nation-state level, but today AI is rapidly democratizing physical and informational warfare. To date our survival has depended on our ability to outrun the destructive technology we invent, but now every human – including criminals, terrorists, and disgruntled individuals – can potentially have access to weapons of mass destruction. What are the chances that not one of them would hit the launch button out of fear? This is the AI-enabled new world that we are already entering and that most of us do not want to think about. We need to have our eyes open to the urgency of the problem.

Q: How should business approach this?

DK: Set up a responsible AI team and familiarize your company with guidelines developed by organizations such as the IEEE (Institute of Electrical and Electronics Engineers). I think it is important to have someone at the C-suite level involved, in addition to the CTO or CIO. The issues around AI governance are still not being elevated high enough in companies. AI is going to be everywhere. This is not an area where corporations will want to be playing catch-up.

Secondly, businesses must focus on the unintended consequences. We saw in the previous generation of technology how companies got into a lot of trouble by hijacking and selling data and then said ‘oops.’ I think it is necessary to learn that lesson and not do that again. I am advising corporate executives to get ahead of this. If they are investing in deploying AI in their organization and their products, they need to carve out some portion of that investment, get their AI experts and executives to sit down and list as many of the unintended consequences as possible, capture that in a document that flags the dangers, and publish it alongside their privacy policies. Companies need to show people that they have thought about this rather than just paying empty lip service.

Saying we didn’t know this was going to happen is not an acceptable excuse anymore. It is negligence, and people should be liable for their negligence. I am part of The Future Society, which is helping draft passages in the EU AI Act. Coming legislation will make it clear that companies will be liable for negligence; not publishing the unintended consequences will be akin to not publishing your company’s privacy policy.

Q: There is a need not just for corporate governance or national rules but also agreement on some kind of global governance. How do we get there?

DK: Three giant experiments on managing AI development are happening in parallel. The U.S., the EU, and China are all taking different approaches. In the U.S., difficult questions are being swept under the rug. The EU’s approach is the same as its GDPR privacy laws: pro-governance with a lot of bureaucracy, which in many ways is going to slow down innovation. China has followed the European model without slowing down innovation. It is taking liability seriously and slaps down Big Tech companies without unnecessary bureaucracy. We need to de-escalate the AI arms race dynamics before they get dangerously out of hand and foster collaborative governance. We can’t even agree on the right approach within our own countries, so a greater level of maturity will be required. To get around the political grandstanding, HKUST together with Tsinghua University – two of the most prestigious universities in the southern and northern Chinese regions – are convening a summit in December with heavy participation from Europe and the U.S. Participants will include academics, think tanks, and the leaders that politicians consult. The aim is to try to get an understanding of what is going on and find a way forward.

Q: How can we ensure that globally AI is used for good?

DK: AI could help promote an abundance mindset rather than a scarcity mindset. It could be deployed at mass scale to optimize the ratio of taxation levels to social benefits. It could be applied to the challenge of optimizing universal basic income. It could help us rethink resource allocation by deploying antitrust analysis at a much more effective, larger scale. The technology could help us manage the complexity of centralized planning and coordination while handling decentralized planning objectives. AI could also be used to counter destructive fearmongering and increase empathy. We could apply the technology not just to language translation but also to cultural translation, helping humans with the cognitively difficult task of better understanding and relating to how others frame things. We need AI to be democratizing empathy, not weapons of mass destruction.

Slowing the toxic combination of more dangerous deployment of physical weapons on the one hand and the misuse and manipulation of information on the other will require more than technology. For this to work, humanity must undergo a cultural hyper-evolution and agree to move away from destructive practices such as winner-takes-all economic models. Right now, we are stuck in a rut. If we refuse to change, we are going to run right off a cliff. We have one shot to grow up, or humanity may become just another failed experiment. Our survival is at stake here.


About the author

Jennifer L. Schenker

Jennifer L. Schenker, an award-winning journalist, has been covering the global tech industry from Europe since 1985, working full-time at various points in her career for the Wall Street Journal Europe, Time Magazine, the International Herald Tribune, Red Herring and BusinessWeek. She is currently the editor-in-chief of The Innovator, an English-language global publication about the digital transformation of business. Jennifer was voted one of the 50 most inspiring women in technology in Europe in 2015 and 2016 and was named by Forbes Magazine in 2018 as one of the 30 women leaders disrupting tech in France. She has been a World Economic Forum Tech Pioneers judge for 20 years. She lives in Paris and has dual U.S. and French citizenship.