Andrew Maynard is a professor of Advanced Technology Transitions at Arizona State University and director of the ASU Future of Being Human initiative. A former physicist and public health researcher, Maynard has a career spanning nanotechnology, AI, risk innovation, and science communication. He has advised U.S. and international policymakers, contributed to high-profile initiatives with the World Economic Forum and the US National Academies, and is a Fellow of the American Association for the Advancement of Science. He writes a blog called The Future of Being Human on Substack, is co-host of the podcast Modem Futura, and is author of the books Future Rising and Films from the Future. Maynard, a speaker at the World Economic Forum’s Annual Meeting of the New Champions in Tianjin, China in June, recently spoke to The Innovator about managing technology transitions in an age of exponential change.
Q: What, in your opinion, is the impact of exponential technology change?
AM: AI 2027, a speculative scenario on AI futures released earlier this year, maps out what its authors consider to be plausible near-term futures for artificial intelligence, leading to the emergence of superintelligence by 2027, followed by radical shifts in the world order. The scenario depends on a number of assumptions, all of which can be contested, but which are nevertheless useful for exploring potential (if not necessarily likely) near-term AI futures. What got me worried is the futility of matching responsible innovation processes, which can take years, to a period of AI acceleration in which a lag of even a month in the development cycle might mean the difference between abject failure and world domination. What worries me just as much is that nothing about how we think, how we plan, or how we develop approaches to ensuring better futures is geared toward exponential advances that happen over months rather than years. This means that if, unlikely as I hope it is, something like the AI 2027 scenario plays out, we would most likely fail to recognize it, or would actively deny it, until it was too late. And all because we are bad at wrapping our heads around rapid exponential growth.
For anyone who’s watched the film adaptation of Dan Brown’s Inferno, there’s a deeply flawed but nevertheless compelling illustration of how hard we find this toward the beginning of the film. The illustration draws on a thought experiment developed by Al Bartlett in 1978 to illustrate exponential growth in a finite environment. The thought experiment asks: if a beaker contains a single bacterium at 11:00 PM, and the bacterium and its progeny divide once every minute so that the beaker is full by 12:00 AM, at what time is the beaker half full? The answer, assuming everything else is equal, is 11:59 PM. One minute to midnight. It is a good illustration of how hard it is for individuals or businesses to plan for exponential change, especially when it occurs over timescales much shorter than those associated with collective human action.
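To make the arithmetic concrete, here is a minimal Python sketch of Bartlett's beaker; the one-minute doubling interval and the 60-minute horizon come from the thought experiment, and the variable names are purely illustrative:

```python
# Bartlett's beaker: one bacterium at 11:00 PM (minute 0), the population
# doubles every minute, and the beaker is exactly full at midnight (minute 60).
FULL_MINUTE = 60
capacity = 2 ** FULL_MINUTE        # population when the beaker is full

for minute in range(FULL_MINUTE + 1):
    population = 2 ** minute       # steady doubling: 2^minute bacteria so far
    if population * 2 >= capacity: # one more doubling would fill the beaker
        print(f"Half full at minute {minute} (11:{minute:02d} PM)")
        break
```

The loop prints minute 59: under steady doubling, the beaker goes from half full to completely full in the final minute, which is precisely why exponential change leaves so little time to react.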
Q: How do you advise executives who are busy running their everyday businesses, meeting revenue targets and shareholder expectations, to deal with this?
AM: I’ve thought a lot about how to help busy executives make more informed decisions around emerging technologies. The first thing is to think about risk in a different way. Risk is conventionally defined as the probability of harm occurring because of some process or action, but this framing of risk runs out of steam rapidly when an enterprise faces risks that are not easily quantifiable or are not associated with clear causative links. Too often, risks that aren’t addressable within this framework are simply pushed to one side or overlooked. However, there is an alternative way to think about risk that complements this more conventional approach and is at the heart of risk innovation thinking: approaching risk as a threat to value.
Value denotes something of worth. This retains the idea of risk being about tangible forms of value like health, a sustainable environment, and financial profitability, but it also allows us to think about risk in terms of less common, and sometimes more impactful, forms of value. These might include deeply held beliefs, or value that has yet to be realized but is nevertheless important. Whether tangible or intangible, a current product or a future success, if it is worth something to you or your stakeholders, it’s an area of value.
For anyone engaged in the process of innovation, this way of thinking about risk, as anything that threatens value, is as important as it is transformative. All too often, it’s the less tangible aspects of risk, the orphan risks that get brushed aside, that cause the greatest problems.
Understanding the nature of orphan risks, and how they differ from conventional risks, is a critical aspect of risk innovation thinking. The conventional risks that organizations tend to plan for are those where there is a clear and direct return on investment. Risks associated with trust, reputation, ethical behavior, and social norms, on the other hand, tend not to be incorporated as effectively into business plans and strategies. In many cases this is because it’s harder to make the connection between investing in risk mitigation and short-term profits, not necessarily because the risks go unrecognized. But the results are often the same: the risks are overlooked, ignored, or simply brushed under the carpet. They are, in a very real sense, orphaned. And yet the act of orphaning them can come with devastating business and social consequences. There are rarely cut-and-dried approaches to handling orphan risks, yet being aware of them and developing strategies for responding to them, even if those strategies are as simple as a commitment not to ignore them, are important to the long-term success of enterprises.
Q: What other advice do you have for companies?
AM: We are heading toward profound tipping points, and business as usual will not work. Few companies have a nuanced understanding of how people can create barriers or opportunities. Consider Monsanto [an American agrochemical and agricultural biotechnology corporation that was sold to Bayer] and the pushback against genetically modified foods that spread across the world. We are going to see more of that sort of thing if companies are not careful. Succeeding in a world that is technologically and sociologically complex requires companies to have people in the organization who understand consequences from a social perspective. It comes back to threat to value: if people see something that is of value to them under threat, they will fight back.
Q: What would you like readers to take away from this interview?
AM: The transitions being driven by accelerating advances in science and technology include AI (and possibly AGI), gene editing, biotechnology, quantum technologies, neuroscience, robotics, nanoscale science and engineering, large-scale automation, virtual and extended reality, and much more. Individually, these technologies are enabling us to imagine futures that were inconceivable just a few decades ago. Together, they are synergistically accelerating the rate of transformative change beyond anything we’ve ever experienced. Successfully navigating the world-changing transitions these technologies will bring about is perhaps one of the greatest challenges facing not just business but humanity: get it right and we open up near-unimaginable possibilities; get it wrong and we risk catastrophic failures.