Interview Of The Week

Simon Torrance, AI Risk Expert

Simon Torrance is Founder of AI Risk, a London-based strategy, research and innovation firm specializing in helping leaders understand and manage the risks of AI adoption, which include strategic risk (losing out to competitors), financial risk (making poor investment decisions), operational risk (vulnerability to cyberattacks, inability to access talent) and regulatory risk (non-compliance with new laws).

In parallel he works as a senior independent advisor to boards and leadership teams on new growth strategy, technology innovation, and venture building. Torrance is a regular keynote speaker, a member of the World Economic Forum’s ‘Accelerating Digital Transformation’ executive working group, a guest lecturer at Singularity University, and co-author of ‘Fightback – How To Win In The Digital Economy’.

In 2023 Torrance ran an in-depth think tank on AI risk, supported by global corporations, which created the world’s first AI Risk Taxonomy. He recently spoke to The Innovator about how corporates can manage AI risk.

Q: How do you define AI risk?

ST: As part of our think tank we created the world’s first AI Risk Taxonomy, detailing 80 separate AI risks across seven categories and rating them by severity, frequency and timeliness, using an actuarial model. The taxonomy leveraged a review of over 5,000 real incidents worldwide. We came up with four key themes: unethical/irresponsible use of AI, new forms of cyberattacks, poor AI controls and governance, and – the most important – unintended consequences of adopting AI. Unintended consequences are situations in which AI is created to do one thing for you and it does something else you didn’t expect that is very difficult to row back from. An example from the healthcare sector is model drift, which can occur when an AI model trained on certain data degrades over time, causing doctors to make less and less accurate decisions. It is a major risk, and one that is difficult to control and manage. Other unintended consequences include large-scale AI malfunctions in energy or transportation systems, physical damage to people and property, and the impact of increasingly autonomous AI systems undertaking complex multi-step tasks (what some call ‘Artificial Capable Intelligence’). These are very significant risks that are likely to have a big impact. Most of them are deemed ‘high risk’ by the new EU AI Act and will legally require specific action to be taken to mitigate them.
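To make the model-drift example concrete, here is a minimal monitoring sketch in Python. It is illustrative only: the class name, window size and tolerance threshold are assumptions, not anything from Torrance’s taxonomy. The idea is simply to compare a model’s recent accuracy against its accuracy at deployment and flag degradation for human review.

```python
# Minimal sketch of model-drift monitoring (illustrative assumptions only):
# track recent prediction accuracy and flag a drop below the deployment baseline.
from dataclasses import dataclass, field

@dataclass
class DriftMonitor:
    baseline_accuracy: float             # accuracy measured at deployment
    tolerance: float = 0.05             # allowed absolute drop before alerting
    window_size: int = 100              # number of recent predictions tracked
    window: list = field(default_factory=list)

    def record(self, prediction, actual) -> None:
        """Record whether the latest prediction matched the ground truth."""
        self.window.append(prediction == actual)
        if len(self.window) > self.window_size:
            self.window.pop(0)          # keep only the most recent results

    def drifted(self) -> bool:
        """True once recent accuracy falls below baseline minus tolerance."""
        if len(self.window) < self.window_size:
            return False                # not enough evidence yet
        recent = sum(self.window) / len(self.window)
        return recent < self.baseline_accuracy - self.tolerance

# Usage: feed each (prediction, outcome) pair to the monitor and escalate
# to human review as soon as drift is detected.
monitor = DriftMonitor(baseline_accuracy=0.92)
```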

Q: What kind of mechanisms do companies need to put into place to manage these risks?

ST: A new market is emerging for specific AI risk management strategies, methods and tools. This is not just to help leaders ensure their organizations comply with best practices, standards and regulations. It’s also to help companies avoid being out-competed and losing shareholder confidence. Governments are extremely worried and are scrambling to create laws, which are likely to be out of date as soon as they are enacted. The worry is that they may inadvertently stifle innovations in AI that could bring enormous benefits to society, for example in dealing with ageing populations, climate change, educational and financial inclusion, and the development of new medicines. In an April 9 speech at the Institute for Advanced Study, which was once headed by J. Robert Oppenheimer, European Commission Executive Vice President Margrethe Vestager said: ‘After the war, around 1955, politics created the International Atomic Energy Agency, to promote the safe, secure and peaceful use of nuclear technologies. Which then created the conditions for the nuclear Non-Proliferation Treaty. When it comes to digital, this is our 1955. And the policy choices we make today will shape how technology develops and how it is used, for decades to come.’

AI systems offer great benefits, but because of their ‘probabilistic’ approach (outputs are not pre-determined) they are riskier than traditional IT systems. So, at one level, we need international standards and agreements to manage AI’s safe development, and at another, enterprises (big and small, across all sectors) need strategies, methods and tools to manage safe, trustworthy and impactful AI innovation.
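The ‘probabilistic’ point can be shown with a toy sketch (hypothetical code, not from the interview): a traditional program maps the same input to the same output every time, while a sampling-based generative system may not, which is exactly what makes its failure modes harder to foresee.

```python
# Toy contrast between deterministic and probabilistic systems.
import random

def traditional_system(x: int) -> int:
    return x * 2                        # same input, same output, every time

def probabilistic_system(prompt: str) -> str:
    # Stand-in for temperature-based sampling in a generative model:
    # the same prompt can yield a different completion on every call.
    return random.choice(["approve", "refer to a human", "decline"])

print(traditional_system(21), traditional_system(21))                # 42 42
print(probabilistic_system("loan?"), probabilistic_system("loan?"))  # may differ
```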

Many enterprises are being cautious, with good reason. There is a tension between the arms race – i.e. knowing that if you don’t adopt AI your competitors will – and moving ahead despite risks that feel both unclear and significant. Companies are finding it difficult to manage this today. That is why I created my company, AI Risk: to help companies understand how to manage these tensions. There is the strategic risk of not moving fast enough and losing out to the competition; there is the financial risk of investing lots of money in ill-conceived AI programs and not getting a return or having any real impact on the company; there are operational risks, such as outside malicious actors using AI to launch new types of cyberattacks at massive scale, or the inability to attract the talent you need and being at the mercy of Big Tech; and finally there are regulatory risks. Just as companies use ERP [Enterprise Resource Planning] software to manage their operations, they will need new AI governance software to holistically manage their AI projects. This fast-emerging category of enterprise software defines the controls a company needs, supports the safe development of projects, and monitors them so that they achieve business results more effectively and comply with best practices, industry standards and regulations.
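As a rough illustration of what such governance software might record, here is a hypothetical risk-register sketch combining the four risk categories above with the severity and frequency ratings from the taxonomy. Every field name, rating scale and entry is an assumption made for illustration, not a description of any actual product.

```python
# Hypothetical AI risk register (all names, scales and entries are illustrative).
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    STRATEGIC = "strategic"        # losing out to competitors
    FINANCIAL = "financial"        # poor investment decisions
    OPERATIONAL = "operational"    # cyberattacks, talent gaps
    REGULATORY = "regulatory"      # non-compliance with new laws

@dataclass
class RiskEntry:
    name: str
    category: Category
    severity: int                  # 1 (minor) .. 5 (critical)
    frequency: int                 # 1 (rare)  .. 5 (constant)
    controls: list                 # mitigations the project must have in place

    def priority(self) -> int:
        """Simple severity-times-frequency score for ranking risks."""
        return self.severity * self.frequency

register = [
    RiskEntry("model drift in clinical decision support", Category.OPERATIONAL,
              severity=5, frequency=3, controls=["drift monitoring", "revalidation"]),
    RiskEntry("EU AI Act non-compliance for a high-risk system", Category.REGULATORY,
              severity=4, frequency=2, controls=["conformity assessment", "audit trail"]),
]

# Rank the register so the highest-priority risks surface first.
for risk in sorted(register, key=lambda r: r.priority(), reverse=True):
    print(risk.priority(), risk.category.value, risk.name)
```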

Q: Will companies be able to buy AI insurance to protect themselves if things go wrong despite precautions?

ST: Insurance, like banking, is a sector where there is not only a very high correlation between the adoption of technology and improvements in productivity and growth, but also clear potential for AI to augment and/or substitute workers’ tasks. As a result, insurers and brokers will soon enter their own competitive ‘arms race’ as internal and external demand for AI-enabled solutions increases and prices fall. This disruption will act as a catalyst for the industry to become expert in managing all forms of AI risk. It can then turn this experience into new products and services (beyond just paying out claims) that help the rest of the world adopt AI safely. The increasing ability of AI systems to undertake end-to-end tasks, make autonomous decisions, learn, and interact with the world in unpredictable ways challenges the traditional notions of foreseeability, liability and responsibility that underpin insurance today. Insurers will quickly need to learn what that means for their traditional approaches to underwriting and create a wider set of education, prediction and prevention solutions to complement existing ‘risk transfer’ offerings.

The risk/insurance industry should learn the lessons of the recent past. It has not been effective at dealing with the risks of cyberattacks to businesses: the ‘protection gap’ in this space – the difference between the cover that people need to be economically resilient and what they have in place – is already over $1 trillion today, and growing fast. AI will widen this gap as well as create new ones. To me, these protection gaps should be re-conceived as growth opportunities for the insurance industry: a golden opportunity to create new types of risk management solutions. If insurance companies don’t do it, others will.
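For clarity, the protection-gap arithmetic is simply cover needed minus cover in place; the figures below are invented purely to show the calculation, not actual market data.

```python
# Toy protection-gap calculation with invented figures (not market data).
cover_needed_usd = 1.8e12      # hypothetical cover needed for economic resilience
cover_in_place_usd = 0.6e12    # hypothetical cover actually purchased

protection_gap_usd = cover_needed_usd - cover_in_place_usd
print(f"Protection gap: ${protection_gap_usd / 1e12:.1f} trillion")  # $1.2 trillion
```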

Q: What advice do you have for corporates?

ST: All companies need a comprehensive understanding of AI risk and a holistic strategy for growth, as well as table-stakes optimization of existing business processes. They need to move fast now to grasp the opportunity. Combining a competitive growth strategy with a systematic and automated approach to AI governance will be critical. Helping leaders and boards fully appreciate the opportunities and threats is the key first step.

About the author

Jennifer L. Schenker

Jennifer L. Schenker, an award-winning journalist, has been covering the global tech industry from Europe since 1985, working full-time, at various points in her career, for the Wall Street Journal Europe, Time Magazine, the International Herald Tribune, Red Herring and BusinessWeek. She is currently the editor-in-chief of The Innovator, an English-language global publication about the digital transformation of business. Jennifer was voted one of the 50 most inspiring women in technology in Europe in 2015 and 2016 and was named by Forbes Magazine in 2018 as one of the 30 women leaders disrupting tech in France. She has been a World Economic Forum Tech Pioneers judge for 20 years. She lives in Paris and has dual U.S. and French citizenship.