Interview Of The Week: Anand Rao

Anand Rao, PwC’s Global AI Leader, was a speaker at the World Economic Forum’s Annual Meeting of the New Champions in Dalian, China, July 1–3. During the conference PwC introduced a “Responsible AI Toolkit,” a diagnostic survey designed to help organizations assess their understanding and application of responsible and ethical AI practices. PwC has identified five areas organizations need to focus on in order to tailor their particular strategy: governance; ethics and regulation; interpretability and explainability; robustness and security; and bias and fairness. The toolkit is compatible with an AI toolkit for the C-Suite and boards introduced by the World Economic Forum earlier this year. PwC is partnering with the Forum’s Centre for the Fourth Industrial Revolution to further develop such guidelines. Rao met with The Innovator at the conference to discuss why leaders need to take responsibility for — and action on — responsible AI practices.

Q: Earlier this year 85% of CEOs said AI would significantly change the way they do business in the next five years, and 84% acknowledged that AI-based decisions need to be explainable in order to be trusted. How do corporates earn that trust?

AR: To me trust is something you develop. You don’t declare yourself to be trustworthy. The C-Suite needs to actively drive and engage in the end-to-end integration of a responsible and ethically led strategy for the development of AI in order to balance the potential economic gains. One without the other represents fundamental reputational, operational and financial risks. As an audit firm we are naturally cautious and conservative. We are not saying AI should not be done. We are just saying make sure you do it in a responsible way.

Q: What does that mean in practice?

AR: There are now more than 72 different guidelines on AI and ethics issued by various organizations. What we have done is use AI to analyze AI ethics and identify the key things needed to translate concepts into good practice. For example, one common guideline says “AI needs to be beneficial.” How do you translate that into the banking area when, for example, you build a model for mortgages? There is a difference between what is legal and what is ethical. If one of the decision-making factors is based on zip code, that is often highly correlated with ethnicity, which is legally okay but ethically it is not. We don’t say what is right and what is wrong. We just run an analysis with our toolkit to help companies decide which variables should be used for decision making. If a variable is legally permitted but ethically questionable, then a company needs to make a choice on whether they want to use it, and look at what the cost is if they do. And companies need to look at whether they can justify what they are doing when the regulator comes in. It is about explainability. Companies need to evaluate deep learning algorithms. You may get good results for the data, but you may not be able to explain how the AI reached its decision. Another algorithm may not be as accurate but is easily explainable. When it comes to your end customer and a mortgage that has been denied, you don’t want to be in a position where you have to say ‘I don’t know how that decision was reached.’ So companies have to figure out how they want to balance this.

Q: What else should companies be thinking about when it comes to damage control?

AR: Be very careful about when and where and why you are using AI, and determine what level of governance you need. What does the board of directors need to know? Some companies are looking at insurance for AI. How long would it take your company to detect a problem before you get a call from the press, and how quickly can you react? All of these things matter.

Q: What do you advise companies to do?

AR: First and foremost, try to get educated on AI, to really understand what AI is. There is a lot of confusion with broader digital transformation, data and analytics, automation and AI. They are very related and yet distinct. Start looking at existing risk and governance processes and see where and what you need to update. Don’t think of new efforts that cost millions of dollars. Start small: determine how much AI is being used, decide how you are going to monitor its increased use, and think about how you are going to get the ROI from it.

About the author

Jennifer L. Schenker

Jennifer L. Schenker, an award-winning journalist, has been covering the global tech industry from Europe since 1985, working full-time, at various points in her career, for the Wall Street Journal Europe, Time Magazine, the International Herald Tribune, Red Herring and BusinessWeek. She is currently the editor-in-chief of The Innovator, an English-language global publication about the digital transformation of business. Jennifer was voted one of the 50 most inspiring women in technology in Europe in 2015 and 2016, and was named by Forbes Magazine in 2018 as one of the 30 women leaders disrupting tech in France. She has been a World Economic Forum Tech Pioneers judge for 20 years. She lives in Paris and has dual U.S. and French citizenship.