Interview Of The Week: Kellee Tsai, AI Ethics Expert

Kellee Tsai is the Associate Director of the Center for AI Research (CAiRE) at Hong Kong University of Science & Technology (HKUST), Founding Director of CAiRE’s AI Ethics & Governance Lab, and the Dean of Humanities and Social Science. She is leading the university’s initiative on Bridging East-West Approaches to AI Ethics, Regulation, and Public Policy. A graduate of Columbia University (B.A./M.I.A., Ph.D.), Tsai spent 13 years of her academic career at Johns Hopkins University, where she was on the faculty of the Department of Political Science. Her leadership appointments at Johns Hopkins include serving as Director of the East Asian Studies program and Vice Dean of Humanities and Social Sciences. In 2013, she relocated to HKUST to serve as Head of the Division of Social Science and was appointed Dean of Humanities and Social Science in 2018. Tsai, the author of seven books, recently spoke to The Innovator about crafting a global approach to AI ethics.

Q: Tell us about your work at the Center for AI Research’s AI Ethics & Governance Lab.

KT: Our interdisciplinary Center for AI Research (CAiRE) was founded by my colleague Pascale Fung in September 2018. The idea was to include not just engineers working on technology like natural language models, computer vision, and machine learning, but also philosophers and social scientists to conduct human-centric AI research and promote ethical AI to benefit society. Our advisory board includes AI experts such as Yann LeCun, Hiroaki Kitano, and Kai-Fu Lee. CAiRE was the first Asian partner to join PAI [Partnership on AI], a not-for-profit organization with over 60 members, including 13 major AI companies and two dozen academic institutions. It was especially meaningful for CAiRE to be included early on because the Anglophone world dominates high-profile discourse about the governance of AI, while much of the technology is also being produced in Asia. Over the past decade, China has generated the most scientific publications on AI, the most citations to those papers, and nearly 75% of patented AI technology.

The premise of our East-West work on ethical AI and public policy is that it is essential to be more inclusive and collaborative. Critical conversations around AI are occurring in China literally every day. There is a lot of concern and interest there, but the Anglophone world is largely unaware of it. There is an asymmetry: leading Chinese engineers read English and follow the English-language discussions, but the reverse is not necessarily true.

Q: How can the center help?

KT: Hong Kong is uniquely positioned to host East-West conversations and bring relevant stakeholders together to deliberate on the issues in a way that bridges Eastern and Western philosophies. Some Chinese stakeholders are not comfortable going to the U.S. at the moment. We are well-linked with both Asian tech firms and Silicon Valley, so we can fill the void in a constructive way to facilitate cross-cultural cooperation and overcome potential issues of mistrust and language or technical barriers.

Q: What are some of the challenges?

KT: There are differences between countries whose philosophical traditions, in the Enlightenment mold, emphasize individual, rights-based approaches to regulation and those informed by Confucian thought, which emphasizes collectivism, communal welfare, and shared responsibilities. Most of the values expressed in English-language statements of AI ethics principles are aimed at preventing harm and minimizing risk, while many of the Chinese documents also include solidarity, harmony, and sustainability as a high-level way of looking at things.

Having said that, I think it’s important to remember there are significant intra-regional differences as well. The U.S. and U.K. have taken a laissez-faire approach, while the EU is all about regulatory control. Meanwhile, China actually has a high density of regulations, certainly more than other countries in East Asia. And Japan’s technologically optimistic stance has long viewed robots as potential allies to humans in creating better societies, even embracing human-machine co-existence. Of particular note, both China and Singapore have banned fake news and disinformation, while other Asian countries have not. We need to be attentive to key differences within regions as well.

Q: In addition to cultural issues, there is no global agreement on ethics, so how will it be possible to craft a global AI ethics policy?

KT: Our researchers have compiled a database of national AI strategies and principles. Based on what is there, I think we can reach high-level points of convergence. The principles of fairness, transparency, accountability, safety, and privacy recur in all the ethics documents, but there is overrepresentation of certain countries in the discourse and of principles presented in English. What we are doing at CAiRE is looking at principles being drafted in other languages, such as Japanese, Chinese, and Korean, which are underrepresented. Certain values may well be shared, but they aren’t necessarily articulated in English.

We also believe that it is important to look at ethical principles from other (non-Enlightenment and non-Confucian) societies.  Countries such as India, Iran, Russia, and Saudi Arabia have distinct philosophical and religious traditions, which may be reflected in their AI principles. The Global South needs to be involved in discussing global governance of AI.

Q: You mentioned the principle of accountability. Up to now, the U.S. government has taken a laissez-faire approach to regulating tech companies, but this week Sam Altman, the CEO of OpenAI, called for the U.S. government to license algorithms. Is that a good idea? How could such a system work globally? Do we need an equivalent of ICANN for the AI age?

KT: It’s tricky. Transparency is important for accountability, but some Silicon Valley companies believe that if they are forced to reveal the guts of their algorithms and make them open source, they will lose their competitive edge. So the question is how to balance regulation with incentives for continued innovation and competition. There are no easy answers, and there aren’t perfect parallels. Yet international regimes have been effective in areas like nuclear power and in banning certain types of conventional weapons. Some types of AI could fall into that category. Regulation could start with specific uses or sectors. Plenty of high-level principles have been articulated. Now we need to move to implementation and action.

As for Altman’s powerful testimony before the Senate, I interpret it as progress toward bridging the gap between more top-down and bottom-up approaches to AI regulation. The EU and China are both top-down, albeit with different emphases. The U.S. and U.K. have been more hands-off, and government there is often perceived as the source of problems rather than a solution to them, unlike in Asian societies, where the state assumes a more paternalistic role. Now the leading American generative AI company is asking the government for help.

Establishing the AI equivalent of ICANN would be a big step. Lessons could be learned from its multi-stakeholder model, global coordination, technical expertise, and contractual compliance. But remember that ICANN was established by the U.S. government in 1998, in a different geopolitical context, on the eve of the digital revolution. That doesn’t mean it’s not possible, but to be effective, an AI version of ICANN in 2023 may need to be less U.S.-centric from inception.

Q: Is it time for the tech industry to subject itself to regulation?

KT: Many engineers are seeking guidance because they don’t want to go down a road and then find out it is not workable in commercial applications. A series of conversations about how to do this is going on globally. The IEEE [Institute of Electrical and Electronics Engineers] is working on standards. There’s an epistemic community of scientists and engineers who really care and are committed to doing this because they know they can’t continue to develop the technology without guardrails. This sense of responsibility needs to be reconciled with technology companies’ focus on profitability.

There are only about nine or ten global LLM companies. We need to look at their corporate policies, identify what they have articulated as their principles, and then test to see if those principles actually hold up. It is important to get practical and develop concrete indicators for measuring compliance, which must be a multi-disciplinary effort. We are having a variety of conversations with tech companies and developers about coming up with recommendations to forge some degree of consensus.

Q: Can existing AI laws be applied?

KT: Existing AI regulations and policies are, in many cases, still just statements of principle. Even when there are actual laws and policies, we need to look at how binding they are and how well they are being enforced. Just a couple of weeks ago, China came up with its own draft measures on generative AI services. The government says they must reflect “core socialist values” and not do anything to undermine national unity or incite violence. It is also proposing to restrict the use of personal data in generative AI training material, but the fine for violating that rule is between $1,400 and $14,500, which is not that punitive.

Q: How do we move forward?

KT: We urgently need to design flexible governance models that combine global standards with some degree of adaptability. It is going to require multi-level governance embedded in regional, national, and local regulatory contexts. Commitment to enforcement by multiple stakeholders—governments, developers, firms, and users—will be essential.  That is why we are promoting inclusive, cross-national research and conversations.


About the author

Jennifer L. Schenker

Jennifer L. Schenker, an award-winning journalist, has been covering the global tech industry from Europe since 1985, working full-time, at various points in her career, for the Wall Street Journal Europe, Time Magazine, International Herald Tribune, Red Herring, and BusinessWeek. She is currently the editor-in-chief of The Innovator, an English-language global publication about the digital transformation of business. Jennifer was voted one of the 50 most inspiring women in technology in Europe in 2015 and 2016 and was named by Forbes Magazine in 2018 as one of the 30 women leaders disrupting tech in France. She has been a World Economic Forum Tech Pioneers judge for 20 years. She lives in Paris and has dual U.S. and French citizenship.