Interview Of The Week: Mehran Sahami, Stanford University

Mehran Sahami is the James and Ellenor Chesebrough Professor in the Computer Science department at Stanford University and a co-author, with Rob Reich and Jeremy Weinstein, of the book “System Error: Where Big Tech Went Wrong and How We Can Reboot.” Prior to joining the Stanford faculty in 2007, he was a Senior Research Scientist at Google. His research interests include computer science education, artificial intelligence, and ethics. Sahami served as co-chair of the ACM/IEEE-CS joint task force on Computer Science Curricula 2013, which created curricular guidelines for college Computer Science programs at an international level, and in 2018 he was appointed by California Governor Jerry Brown to the state’s Computer Science Strategic Implementation Plan Advisory Panel. Sahami, a scheduled speaker at the DLD 2023 conference in Munich Jan. 12-14, recently spoke to The Innovator about the need to incorporate ethics into AI.

Q: How should companies think about AI and ethics?

MS: There is now more awareness of the level of risk involved in using AI and how decisions taken by AI might impact people’s lives. More companies are building processes around deployment and conducting audits of their AI systems. In the case of facial recognition, for example, some large companies have pulled back because they understand the social concerns and the reputational risks.

The European Union’s proposed AI Act classifies AI systems based on the level of risk they pose. The proposed law instructs companies to be transparent: they should let people know when AI is being used to make decisions about them; what information the decision was based on; and what recourse they have if they want to challenge a decision. But there is a difference between compliance and ethics. Ethics is about trying to do the right thing, not just what’s legally required. It is important that when we talk about AI and ethics, companies do not think of it in terms of compliance goal posts, but rather as building processes that help ensure the socially responsible development and deployment of AI. Regular auditing, for example, should be included as part of the process for every new deployment.

Q: System Error, the book you co-authored, talks about how Big Tech’s relentless focus on optimization is driving a future that reinforces discrimination, erodes privacy, displaces workers, and pollutes the information we get. How do we fix this?

MS: Societal impact is part and parcel of technology. That can mean that sometimes you should curtail the deployment of a technology because the negative impact could be too significant. This is hard to do because tech runs faster than regulation, but it is necessary. Take the example of quantum computing. If it can break the encryption in use today [before a stronger form of encryption is put in place], it would cause great disruption, so even though the technology has not yet been fully deployed, we need to have a mitigation strategy in place.

Q: How do we ensure that new technologies reflect the values of society and not just the ethics of an overworked programmer racing to meet a product deadline at a venture-backed or public company?

MS: Stanford has launched an Embedded Ethics program to help students consistently think through some of the issues that arise in computing: what the technology is capable of, both good and bad, and strategies to try to mitigate some of the harmful outcomes. We try to show, through a broader philosophical lens, what sorts of things should be considered, such as why privacy is important and how we should think about it. The issue of privacy isn’t an absolute. There is a value tension. If child exploitation is involved, for example, there is a legitimate interest in violating privacy in communications to root out crime. If there is such a value trade-off, then the issues must be adjudicated through the political system and regulatory guardrails must be set up.

Q: Some companies are now appointing Chief AI Ethics Officers (CAIEOs) to make AI ethics principles part of operations within a company, organization, or institution. Is this a good idea?

MS: Company ethics should not be the responsibility of one person, but rather everybody’s job. If the position is seen as a compliance role – i.e., signing off on legal compliance – it absolves many other people of having to worry about ethics as part of their jobs. Every person needs to be responsible for their products and the way the technology is developed. There should be regular audits of products and technologies, and when problems are found, risk mitigation should take place. Ethics should be something that all employees think about, not something delegated to a particular office.

Q: Isn’t there a risk that if it is not one person’s responsibility it will be no one’s responsibility?

MS: You get what you reward. If testing and audit processes are part of everyone’s job responsibility, and people are evaluated and compensated for taking them seriously, the same way they are when software is produced, you will get better results. That can mean rewarding a team that decides not to release a product because it is deemed too risky or would lead to too many negative consequences. That’s something that managers need to take seriously.

Q: What other ethical issues should companies consider when deploying AI?

MS: AI can make companies more productive by augmenting human ability.  It can also mean that workers may be displaced. The impact of AI on the labor force will not be uniform. Companies need to work on roadmaps, inform their employees, and develop an appropriate plan for the different sorts of roles they can take on and what kind of reskilling is necessary, so the transition is less abrupt.


About the author

Jennifer L. Schenker

Jennifer L. Schenker, an award-winning journalist, has been covering the global tech industry from Europe since 1985, working full-time, at various points in her career, for the Wall Street Journal Europe, Time Magazine, International Herald Tribune, Red Herring and BusinessWeek. She is currently the editor-in-chief of The Innovator, an English-language global publication about the digital transformation of business. Jennifer was voted one of the 50 most inspiring women in technology in Europe in 2015 and 2016 and was named by Forbes Magazine in 2018 as one of the 30 women leaders disrupting tech in France. She has been a World Economic Forum Tech Pioneers judge for 20 years. She lives in Paris and has dual U.S. and French citizenship.