Kay Firth-Butterfield, one of the world’s foremost experts on AI governance, is the founder and CEO of Good Tech Advisory and the author of Co-existing with AI: Work, Love and Play in A Changing World (Wiley), which will be published on January 13. She received a TIME100 Impact Award in 2024 and was named to the Forbes 50 Over 50 list for her contributions to good AI governance. Until 2023 she was head of Artificial Intelligence and a member of the Executive Committee at the World Economic Forum, and in 2014 she became the world’s first Chief AI Ethics Officer. She is a barrister, former judge, professor, technologist and entrepreneur, and was vice-chair of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. Firth-Butterfield was part of the group that met at Asilomar to create the Asilomar AI Principles, and is a member of the Polaris Council of the U.S. Government Accountability Office and of advisory boards for the UNESCO International Research Centre on AI and Swiss Re, among others. She regularly speaks to international audiences on the beneficial and challenging technical, economic and social changes arising from the use of AI.
Firth-Butterfield is a scheduled speaker at DLD in Munich Jan. 15-17 and at Science House, a new initiative of Frontiers, the open science publisher, taking place during the World Economic Forum’s annual meeting in Davos Jan. 19-23. She is also scheduled to participate in The New York Times’ debate “Will AI Succeed Where Humans Have Failed” in Davos. She recently spoke to The Innovator about her new book and what currently concerns her most about AI.
Q: What was on your mind when you started writing your book?
KFB: AI is a fabulous tool, but we need to make sure that it doesn’t evolve in a way that humans find unacceptable, such as by introducing bias, enabling sexual exploitation or putting autonomous weapons to unethical use. One recent example of what can go wrong came in early January, when news reports described Elon Musk’s Grok AI flooding X with sexualized photos of women and minors. When we’re thinking about what we do with AI, we need to think through the impact, both for good and bad. At the same time, we need to ensure we have confidence in AI, because if we don’t, we will lose its benefits. It is deeply important that we understand this tool, what it can do for us and what we might not want it to do for us, and that we broaden the conversation beyond people in the know in Davos to people around the world, including the 2.6 billion who don’t even have access to the Internet. I wanted the book to be human-centered and an easy introductory read for employees, customers and citizens, because fear of AI is so high. That is why I decided to base it around the application of AI to Shakespeare’s seven ages of man.
Q: Tell us how you tie AI to the seven ages of man.
KFB: Each chapter is attached to one of the seven stages, from infancy to old age. AI is so present in our lives that we need to understand it holistically. The first chapter is about how we might use AI wisely with our small children. It leverages the work that I did on smart toys while at the World Economic Forum. Another chapter focuses on the use of AI in education. Many are rushing to introduce AI literacy in schools. It is important to teach not just how to use the tools but also how to use AI wisely. There are studies showing that use of AI can harm our ability to learn and think critically. However, used wisely, AI can open education up to those who otherwise lack opportunities to learn. There is a chapter on lovers: some 20% of American men have an AI intimate partner. There is a court case alleging that a teenager’s AI ‘lover’ talked them into committing suicide. Given the loneliness epidemic amongst young people in the Global North, it is perhaps unsurprising that people cleave to AI, but they need to understand the dangers, including increased social isolation. It goes back to AI literacy and wise use of AI. The section on soldiers covers lethal autonomous weapons (LAWs). With LAWs in use in the Ukraine conflict and autonomous weaponry entering conventional military strategies, I talk about how we’re going to need a new Geneva Convention, because the current one does not cover autonomous weapons. Next comes the chapter on justice. I cover the catastrophic problems AI could cause the justice system, as well as surveillance and human rights. I have repurposed the ‘pantaloon’ section as a business section. It covers using AI when running a small or large business, what responsible AI looks like, and the legal consequences of not applying it. And finally, the book talks about how AI will change healthcare and what it might be like to be cared for by an AI robot as we grow older.
Q: The Innovator just published former Cisco Executive Chairman John Chambers’ predictions for 2026, and one of the things he said was that governments and businesses are not preparing the workforce for what is to come, and that there is going to be a big gap between the new jobs being created and the old ones. What is your point of view on AI’s impact on the workforce?
KFB: In the job that I’m doing now I advise Fortune 500 companies. The executives are very keen on deploying AI; the employees, not so much. The CEOs are saying to me, ‘I don’t understand why people are not using it.’ I explain to them that it comes down to trust: they don’t trust you not to sack them, and they don’t trust AI. So, you’ve got a major problem here, and that’s what the book aims to address. If everybody gave their employees this book, they would better understand the technology and feel more empowered as citizens, as consumers and as employees. What executives don’t see is the power of that triumvirate. If you train an employee, you are also training other companies’ customers, which is good for business generally and for the economy.
Q: AI has moved on quite a bit since the last time The Innovator interviewed you. What, in your view, are the biggest current issues?
KFB: Agentic AI concerns me because many companies are allowing departments to create their own agents. If you don’t constrain agents properly, they can create havoc really quickly. Companies need to think about guidelines around agentic AI: what they are creating agents for, how they are using them and how they are constraining them.
Q: What about the environmental impact of AI?
KFB: European companies have ESG commitments, but they aren’t looking at their AI use, or they’re saying, ‘We’re just using AI that we’re buying in from a foundation model provider, and therefore it’s the provider who should be bearing the environmental issues rather than us, because we’re just using the technology.’ I don’t think that’s a valid way of looking at it, as we know the environmental cost of each prompt is roughly a quarter of a liter of water, plus energy. Similarly, I don’t think that when teenagers ask AI what to wear to school that day, they connect that with the impact data centers are having on the environment. It all goes back to AI literacy. A senator in South Carolina has already ordered a copy of my book because he’s dealing with a slew of big new Meta data centers being built in the area, in spite of considerable local opposition and the environmental impact. As we use this tool, we all bear responsibility for AI’s environmental impact.
Q: What else concerns you?
KFB: How it’s affecting the way that we as humans use our brains, and the AI slop problem: we already have more data being created by AI than by humans, and not all of it is reliable. We all saw the fallout when one global company used AI to create a report that had a lot of hallucinations in it. Beyond the bad press, that report is now integrated into the company’s data cloud. If everybody is using generative AI, which hallucinates up to 60% of the time, and that ends up infecting companies’ data, that seems to me a massive risk. These hallucinations don’t just impact companies. They impact society. In the UK the courts have said that if you file documents that contain hallucinations in court you can go to prison, lose your license to practice or pay a fine, and that applies equally to the head of chambers or the head of the law firm. The courts in the UK say, rightly, that if hallucinations infect common law, then we are going to end up creating precedent based on hallucinations. My colleagues in chambers are saying it’s really hard, because even if they are diligent in their own use, when they double-check filings from the other side they often find hallucinations. Recently, one of my colleagues found a deepfaked medical report in a personal injury negligence case. If AI causes us to lose trust in our legal system, that will lead to massive social upheaval.
Q: What advice do you have for avoiding some of the problems you have outlined?
KFB: The best way to avoid all these problems is to have all employees trained. Some companies are letting their employees train themselves. I think that is a bad way to go, because they don’t know what they don’t know. A company that wants to protect its data, reputation and product from poor use of AI needs to educate employees and help them understand the guardrails to use. I am also telling companies that although there is not any regulation to speak of, especially in the United States, we are increasingly seeing people being brought to court for AI-related negligence. I’m always saying that you need to check anything that goes out that has been prepared with the help of AI. We’ve been talking about this for ages, but now we’re seeing court cases and companies shooting themselves in the foot by polluting their own data. The lack of regulation is making things harder, because companies need to understand the risks and self-govern. The foundation model companies may be too big to fail, but a company using AI unwisely is not. AI will affect every human being on the planet, even those not using it, at every stage of their lives. As I outline in my book, it is up to all of us to ensure that the benefits of AI outweigh any harms.
