Tony Fish, who teaches AI and Ethics at London Business School and the London School of Economics, has founded, co-founded, sold and listed many businesses and has worked as a general partner of a venture capital fund. His professional life has cut across a diverse range of sectors including health, finance, media, mobile, sport and education. He is a visiting Fellow at the U.K.’s Henley Business School for Entrepreneurship and Innovation, Entrepreneur-in-Residence at the Bradford School of Management and serves as an expert on Big Data to the European Commission. In addition to speaking at over 300 events and conferences on data, innovation, entrepreneurship, digital trends and early-stage growth, he has authored and published three books. Fish recently spoke to The Innovator about how boards should think about AI and Ethics.
Q: We hear a lot about AI and Ethics these days. Why is this becoming a big issue for business?
TF: The ideas we have around AI and learning are not well grounded. A human’s ability to learn comes from some form of synchronization between the tools available in the environment we live in and the chemistry and biology of our minds and bodies, but we don’t have a unified view of how it all works. If we don’t understand how humans learn, it is even more difficult to understand how machines learn. So why is this an issue for business? It affects how a company trains its algorithms and how much agency it is willing to give to artificial intelligence. How we make those judgement calls, and how we justify them, will determine what we are going to do with AI and will have important consequences not just for companies but for society as a whole.
Q: What do business leaders need to understand about algorithms in order to determine how much agency to give them?
TF: In machine learning, algorithms learn models from data. If you show an algorithm enough pictures of a cat, it will eventually learn to recognize one. The difference between machine learning and AI is that with AI the technology can learn not only to recognize a cat and name it but also how to care for one. But how does an AI learn to care for a cat when there are multiple views on the best way to do so? Some people think you need to let cats outside, others believe you should keep them indoors, and so on. Even something as simple as that requires a judgement. So, given that judgement is often necessary, what types of tasks should you give to AI?
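Fish’s point that algorithms "learn models from the data" can be illustrated with a toy sketch. The nearest-centroid approach, the labels and the made-up feature values below are purely illustrative assumptions, not anything Fish describes; real image recognition uses far richer features and models:

```python
# Toy illustration of learning a model from data: a nearest-centroid
# classifier over invented two-number feature vectors (imagine something
# like [ear pointiness, whisker length]). All values are made up.

def train(examples):
    """The 'model' is just the average feature vector per label."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(model, features):
    """Pick the label whose average example is closest to the new one."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: sq_dist(model[label], features))

training_data = [
    ([0.9, 0.8], "cat"), ([0.8, 0.9], "cat"),
    ([0.2, 0.1], "dog"), ([0.1, 0.3], "dog"),
]
model = train(training_data)
print(predict(model, [0.85, 0.80]))  # a new, cat-like example -> "cat"
```

The "learning" here is nothing more than summarizing past examples; the judgement calls Fish raises (which examples to train on, which labels to use) sit outside the algorithm entirely.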
Q: What are the choices?
TF: There are three types of tasks. The first is arduous but straightforward tasks that require codifying a solution into a program. AI is particularly good at this. The second type of task involves really complex problems that people can handle. You can codify the capability into a program, but you can’t specify all of the rules because it is difficult to anticipate every scenario. This is the issue with self-driving cars: you can’t teach the algorithms to anticipate everything, which is why we read in the news recently that an autonomous car ran over and killed a woman walking her bike. The third type of task is solving complicated problems that people are unable to solve because they don’t have the competence. Through the data — if there is enough — the AI is able to find patterns that a human is unable to see. In this case, applying AI can lead to better outcomes. The scenario that scares the pants off me is the one where you put AI into a cycle where you want to create change that leads to unknown outcomes. We don’t want machines making those kinds of decisions.
Q: How should boards be thinking about this?
TF: As we gather more and more data, do we remove the agency of workers? These are the judgement calls, these are the decisions that boards will have to make. AI is not just about data; it has implications for society. What are the implications for the future of work, the type of work and the way we manage our workers? Is it acceptable to take away workers’ agency to become more efficient? And is efficiency the sole purpose of the organization? The design of the AI — and how much agency it has — will reflect the ethics of the business itself. Who makes that judgement call? Who makes that choice? This is why it is more important than ever that boards be made up not only of people from finance and marketing but also from the fields of psychology, anthropology and sociology. Boards need to think not just about gender and ethnic diversity but also about diversity of thinking in order to properly frame the problems and make the right judgement calls.