Kay Firth-Butterfield is head of artificial intelligence and machine learning at the World Economic Forum’s Center for the Fourth Industrial Revolution, vice chair of the global engineering standards group IEEE’s Global Initiative for Ethical Considerations in the Design of Autonomous Systems, an expert advisor to Britain’s All-Party Parliamentary Group on Artificial Intelligence and a member of the technical advisory group to the Foundation for Responsible Robotics.
She has worked as a barrister, mediator, arbitrator, professor and judge in the United Kingdom and holds advanced degrees in law and international relations, with a focus on the ramifications of pervasive artificial intelligence. She was recently interviewed by The Innovator.
Q: If autonomous AI software, crunching data far more rapidly than humans can, is able to help eradicate disease and poverty and introduce societal improvements and efficiencies, then we must embrace it. But, at the same time, we have to have governance, and right now there is no such thing. As a lawyer, judge and AI expert, what do you think is the best way to approach governance?
KFB: We need good governance, not governance open to endless challenge. This is the reason I joined the Center for the Fourth Industrial Revolution. Each project will create a multi-stakeholder team to co-design governance mechanisms and then pilot them; I believe this is the ideal way to create useful legislation in a partner country that is then scalable to others.
Q: Can you tell us more about the World Economic Forum’s role in helping shape AI’s future?
KFB: We are seeking to tackle the big-picture issues at the moment: privacy, trust, bias, transparency and accountability, rather than looking at particular uses of AI. To use an analogy from before the Second Industrial Revolution, I take the view that we need to get the coupling between horse and cart right before we start the journey. The projects that help inform boards and help countries create best practices for the procurement of AI are important: if we can help companies and countries to commission and create only ethical, human-centered and responsible AI, then that is the type of AI that will spread. It also enables countries and companies to apply culturally relevant standards to such commissioning while encouraging a norm across the globe.
Q: A number of efforts are underway in the tech community and academia to grapple with the complex challenges that AI poses. What is the best way to ensure that there is some sort of coordination between all of these efforts?
KFB: Each of these organizations is doing valuable work in different areas and aspects of AI. I see them all as complementary to one another. The way in which AI is developed will be critical for the way in which humanity thrives in the future. We all want to see the massive benefits that AI can bring to humanity while minimizing the risks.
Our work at the Center for the Fourth Industrial Revolution and with the wider Forum is global, and it needs to be. All nations need to be in a position to benefit from AI, and our work is inclusive. At the same time, different nations need different AI strategies, depending on their level of technological development.
At the Center in San Francisco, we are working with partners not only from industry, academia, the start-up community and civil society but also with countries. Each project team will build policy frameworks and governance protocols with a focus on partners who will pilot them in their jurisdictions and organizations. Our vision is to help shape the development and application of these emerging technologies for the benefit of humanity.
Q: What sort of guidelines need to be put in place to monitor AI research and who should develop them?
KFB: Universities need to consider whether something like the IRB (Institutional Review Board) system should be applied to AI research. For example, as chatbots are created to support mentally ill adults and children, should those interactions be regulated? In such cases, medical professional bodies should have a voice in the discussion.
One of the projects recommended to me by the Forum’s Global Future Council on AI and Robotics is to see whether the Forum can help scale, around the world, the education of computer scientists (especially those going into AI) in the ethical, human-centered design of AI. It is a project that we are scoping with professors from universities around the world. This would help to educate scientists from the beginning of their careers.
There are voluntary principles such as the Asilomar Principles. And there have been suggestions that, just as lawyers and doctors receive ethics training at university and are then held to those requirements in practice by professional bodies, so too should there be a professional regulatory body for AI scientists.
Q: Most of the initiatives on AI and ethics involve tech companies. What role, in your opinion, should big corporates in other fields play? How should they get involved and why?
KFB: The way good or bad designs of AI will spread across the world is through the increased use of AI by non-tech companies. If they set standards for the sort of AI they buy or develop, then good design of AI will spread more comprehensively and more swiftly. In a Harvard Business Review article last year, Andrew Ng (who formerly spearheaded AI efforts at Google and Baidu) said that all companies will have to start appointing a Chief AI Officer. I have said that they will also need to appoint a Chief Values Officer, whose job would be to supervise the ethical and responsible use of AI and probably to run an Ethics Advisory Panel. The job of an Ethics Advisory Panel would be to look at the use of AI in each product at the initial stage, so that ethical, human-centered and responsible design is built into the application from the start rather than bolted on as an afterthought.
Q: What, in your mind, are the most pressing AI-related issues that should be placed on the agenda in 2018?
KFB: There are many pressing issues that need to be on 2018’s agenda. How can the developing world use and develop AI? Governments need to take a hard look at the education system: is it ready for the Fourth Industrial Revolution? And what about the reskilling of workers? AI will change the shape of traditional work, and the population will need to be prepared. Governments also need to look into developing infrastructure that enables talent to thrive at home instead of leaving for jobs elsewhere. If we do not tackle this problem, there is a significant chance that disparities of wealth will simply increase.
Specifically in AI, work will be needed to address privacy concerns. We need to find ways to ensure that those creating AI are drawn from a more diverse population. We need to work on how bias in algorithms caused by the use of historic data can be addressed. We also need to work on the transparency of algorithms.
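To see how historic data can bake bias into an algorithm, consider a minimal sketch in Python (using the scikit-learn library; the variables `group`, `skill` and `hired` are invented for illustration, not drawn from any real system): a model trained naively on past hiring decisions learns the historic penalty rather than the true qualification.

```python
# Minimal, hypothetical sketch: a model trained on biased historic
# decisions reproduces that bias. All variable names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups of candidates with identical skill distributions.
group = rng.integers(0, 2, size=n)    # group label: 0 or 1
skill = rng.normal(0.0, 1.0, size=n)  # true qualification

# Historic hiring decisions penalized group 1: this is the bias
# embedded in the training data.
hired = (skill - 1.0 * group + rng.normal(0.0, 0.5, size=n)) > 0

# Train naively on the historic outcomes, with group as a feature.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two equally skilled candidates (skill = 0) now receive very
# different scores: roughly 0.5 for group 0 and near 0 for group 1.
# The model has learned the historic penalty, not the qualification.
print(model.predict_proba([[0.0, 0], [0.0, 1]])[:, 1])
```

Simply dropping the group column does not cure this when other features act as proxies for it, which is one reason the transparency of algorithms matters.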
AI is moving rapidly, but there is a lot to do to ensure we maximize the benefits while minimizing the risks.