Interview Of The Week: Kay Firth-Butterfield, Head of AI, World Economic Forum

Kay Firth-Butterfield is Head of Artificial Intelligence and a member of the Executive Committee at the World Economic Forum, and one of the world's foremost experts on the governance of AI. She is a barrister, former judge and professor, technologist and entrepreneur, and vice-chair of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. She was part of the group that met at Asilomar to create the Asilomar AI Principles, and is a member of the Polaris Council of the U.S. Government Accountability Office, the Advisory Board of the UNESCO International Research Centre on AI, and AI4All. She regularly speaks to international audiences about the many beneficial and challenging technical, economic and social changes arising from the use of AI. She recently spoke to The Innovator about the key takeaways for business from the World Economic Forum's Global Technology Governance Summit 2021, which took place April 6th to 8th.

Q: Responsible innovation and building trust into tech were big themes at the summit. What does that mean in practice when it comes to a specific technology like AI?

KFB: The idea of responsible AI or responsible tech or innovation is now absolutely mainstream. If you are a CEO and are not thinking about appointing a chief AI ethics officer, or if you are thinking only about employing a technical chief AI officer and not also building diverse teams around AI, you are headed for trouble. I think we are at the point where every CEO knows this, or should know it, but when it comes to operationalizing it in the business, companies are lagging behind.

Q: Is this why the Forum has launched a new coalition called the Global AI Action Alliance (GAIA)?

KFB: The aim of the alliance is to speed the adoption of inclusive, trustworthy and transparent AI across all sectors. We are pulling together existing valuable resources so that people can learn from them. IBM, for example, plans to open source its approach to AI ethics through GAIA, as a way of showing its commitment to the responsible stewardship of data and technology. It will make available, through the alliance, its approach to putting AI ethics into practice at a global scale and the governance structure it uses to evaluate applications of data and technology in a centralized, accountable way, and it will contribute the consulting expertise and know-how needed to implement AI ethics. Salesforce and Microsoft are also contributing some of the impressive work they have done on their own policies.

And the Forum is developing toolkits to help senior executives. If you are a board member and you don't know how to operationalize AI ethics, the Forum has developed a board toolkit to help you with that. And if you are in the C-suite, the Forum will soon have a toolkit for you.

Alongside these tools is a need for standards and certification. Standards are very hard. I am involved in the IEEE work; we have been working on this since 2015. So, while the idea of global standardization is great, I can't see it happening anytime soon, if it is even possible, as there is no enforcement mechanism. Certification is easier. The Forum, for example, is already working on the feasibility of a certification and mark for the trusted use of AI systems that could work across sectors and regions, together with the Schwartz Reisman Institute for Technology and Society at the University of Toronto and AI Global, a non-profit building governance tools to address growing concerns about AI.
I think enforcement of AI will come from some governments (such as the EU), but mainly it will come from industry actors in a few major ways: insurance, because if algorithms are to be insured they will have to meet certain goals; ethical investing, as investors will be looking for companies with responsible AI practices; the VC community, since companies without responsible AI practices will be harder to grow and sell; soft government mechanisms; agreement among companies to use a certification tool as the industry standard; and customers and citizens.

Q: One of the summit sessions zoomed in on the topic of chatbots and highlighted some Forum pilots involving their use in healthcare. What do you hope to achieve with these pilots?

KFB: The use of AI chatbots to exchange sensitive healthcare information brings up legal, privacy and security issues, as well as a need for fairness and explainability. That is why earlier this year the Forum assembled 25 global experts from hospitals, governments and the private sector to co-create Chatbots RESET, a framework for governing the responsible use of chatbots in healthcare conversations. The framework consists of a set of ten principles selected from AI ethics and healthcare ethics, interpreted within the context of the use of chatbots in healthcare. It also contains operational recommendations for technology developers, healthcare providers and governments to consider at various stages to ensure the scale-up of chatbot technology is done responsibly. Reliance Group, Apollo Hospitals and Tech Mahindra are piloting the chatbots framework in India; OmniBot.ai and Ada Health are piloting it in Europe; and the Forum's affiliate Centre for Fourth Industrial Revolution technologies in Rwanda is exploring how to integrate some of the framework's recommendations into its policy work. Creating firm governance for the use of AI in this sector will be transformative, giving care to many who would otherwise have no access to healthcare and allowing companies to safely develop products in this market. We also want to use what we have learned from these pilots to see how the framework could be applied to the use of chatbots in areas of healthcare beyond triage, such as elder care. The opportunities to use natural language processing for elder care are huge. Already, the feedback we are getting is that when companies using AI chatbots feel like they are going off the rails, they can look back at the framework and reset. I see that as a very positive thing.

Q: A summit session on national AI strategies moderated by The Innovator shined a light on two of the Forum’s Fourth Industrial Revolution (4IR) affiliate centers. What is the Forum hoping to achieve with its network of centers?

KFB: The Forum has established 4IR centers across the world. The newest one, in Azerbaijan, which opened April 1, will focus on digital trade. The centers in Colombia and Brazil are focusing on procurement, and the AI Platform at the Forum is partnering with the Inter-American Development Bank to spread what we do across Latin America. The mission of the centers is to help countries adapt to and adopt new technologies, and in some cases leapfrog ahead, so they don't get left behind. There is a worry that AI will increase the digital divide. In addition, multinationals from China and the U.S. are exerting a huge influence on the development of AI. It is important for countries to establish a balance and create their own AI ecosystems.

Q: Getting to Net Zero was another big theme of the summit. The question is how, and how soon? Are there specific Forum programs to help guide corporates' thinking and actions on this topic?

KFB: The Forum has called for companies and governments to transition to net zero as rapidly as possible, and has launched numerous projects to support them in this mission. These include an Alliance of CEO Climate Leaders, an annually published Energy Transition Index, a Global Battery Alliance, and the Mission Possible Partnership. Most recently, the Forum has partnered with Bloomberg New Energy Finance and the German Energy Agency (DENA) to run a series of workshops to identify and unlock the most promising opportunities for AI to accelerate the energy transition.

Q: What do you hope will be the key takeaways for business from the Global Technology Governance Summit?

KFB: There are many opportunities to use tech for good. During the summit Marc Benioff [Chair and Chief Executive Officer, Salesforce] said climate change is the number one priority for the application of innovation and emerging tech. Healthcare is another example. The Safe Delivery App, a smartphone application backed by Novo Nordisk that provides skilled birth attendants with direct and instant access to evidence-based and up-to-date clinical guidelines on basic emergency obstetric and neonatal care, is being rolled out in 40 countries and promises to save many lives. Vilas Dhar [President of the Patrick J. McGovern Foundation and a co-chair of the Forum's Global AI Action Alliance] pointed out that there is a lot of potential to use AI to increase social justice. But companies can no longer claim they are not aware of the risks associated with the technologies they are using. As our session on corporate governance indicated, companies risk losing their investors and their customers if their use of technology is not based on trust and integrity. This means installing guardrails around the use of AI. In sum, the summit's main message was a call to action: technology governance should be on every company's agenda.

About the author

Jennifer L. Schenker

Jennifer L. Schenker, an award-winning journalist, has been covering the global tech industry from Europe since 1985, working full-time, at various points in her career, for the Wall Street Journal Europe, Time Magazine, the International Herald Tribune, Red Herring and BusinessWeek. She is currently the editor-in-chief of The Innovator, an English-language global publication about the digital transformation of business. Jennifer was voted one of the 50 most inspiring women in technology in Europe in 2015 and 2016, and was named by Forbes Magazine in 2018 as one of the 30 women leaders disrupting tech in France. She has been a World Economic Forum Tech Pioneers judge for 20 years. She lives in Paris and has dual U.S. and French citizenship.