The Case For Appointing A Chief AI Ethics Officer

Ten years ago, in 2014, I was appointed as the world’s first Chief AI Ethics Officer. Sadly, far too few organizations have followed suit.

The number of companies with a designated head of AI position has almost tripled globally in the past five years, according to the social network LinkedIn. And the White House has announced that U.S. federal agencies are required to designate chief AI officers “to ensure accountability, leadership, and oversight” of the technology.

While that may sound encouraging, organizations are still putting more emphasis on improving workforce efficiency, identifying new revenue streams, and mitigating cybersecurity risks than on ensuring AI is being used responsibly.

Indeed, a poll taken as part of The Artificial Intelligence Index Report 2024, published April 15 by the Institute for Human-Centered AI at Stanford University in California, is a case in point. The report cites a Responsible AI (RAI) survey of 1,000 companies, conducted to gain an understanding of RAI activities across 19 industries and 22 countries. A significant number of respondents admitted that they had implemented AI with only some, or even no, guardrails in place.

It is a big mistake to let RAI become an afterthought or a press-release talking point. Many companies that surged ahead without thinking about RAI are now finding themselves in a costly rewind process to meet regulatory requirements, a cautionary tale for all.

Against this backdrop, I spoke to Steve Mills, Chief AI Ethics Officer and Managing Director & Partner at Boston Consulting Group (BCG), as part of a series of conversations I am having with individuals who are leading the way in helping organizations derive benefits from AI while also ensuring the responsible design, development, and use of AI.

Since BCG helps corporates with the responsible implementation of AI, it had to ensure that its own house was in order. So the consulting group created the position of Chief AI Ethics Officer and designed what Steve describes as “a comprehensive Responsible AI program that brought together organizational functions while establishing new governance, processes, and tools,” all of which had to be created in house, a large and complex task.

Steve and I both agree that now, more than ever, responsible AI is a C-Suite task. It requires a senior executive with the appropriate stature, focus, and resourcing to advise the leadership team, engage with the external AI ecosystem, and effect meaningful change in how we build and deploy AI products.

The role of Chief AI Ethics Officer demands a unique blend of technical expertise, product development experience, and policy and regulatory understanding. “Although my title may still be a bit uncommon today, I believe we will see it become a de facto standard very quickly given the importance of AI and generative AI (GenAI),” says Steve.

He says – and I agree – that if implementation of AI/GenAI is not done responsibly it can be value-destroying rather than value-accretive for companies.

The risk is not just creating one bad customer experience; it can be much more far-reaching. Failures of AI systems can grab headlines and the attention of regulators. They can rapidly destroy brand value and customer trust, as well as carrying costly financial and regulatory consequences.

Irresponsible use of AI does not only harm companies; lapses can cause real harm to individuals. For example, consider a chatbot providing guidance on HR policies. An erroneous response on medical leave policy could cause financial and emotional harm to an employee. It is the responsibility of any company building and deploying AI to ensure it does not create emotional, financial, psychological, physical, or any other harm to individuals or society. “Certainly, there are risks to the company that need to be managed, but corporate responsibility goes far beyond that,” says Steve. Companies would do well to remember that regulators now have real financial penalties at their disposal to punish such behavior and protect the individuals harmed.

There are other compelling reasons for building AI in a responsible manner. Companies with mature RAI programs report higher customer retention and brand trust, stronger recruiting and retention, and faster innovation. In addition, many RAI best practices lead to products that better meet user needs, meaning companies with mature RAI programs report more value from their AI investments. “RAI is about both minimizing the downside risk but also maximizing the upside potential of AI,” says Steve.

The pressure to rapidly commercialize AI and GenAI is intense and can dominate strategic discussions.

Steve and I are both big proponents of the transformative power of AI and recognize its strategic importance to businesses. But the bottom line is that companies cannot scale AI/GenAI without developing a robust Responsible AI program to mitigate risks and capture value. They cannot stop at talking points. They need to back up those conversations with action. They need to invest the necessary resources to create a comprehensive RAI program, including integrating RAI into their risk management frameworks, implementing RAI-by-design, and upskilling employees to create a culture of RAI.

“There are both direct and indirect benefits of RAI, all of which generate significant value for businesses,” says Steve. He points to BCG research with MIT which shows that companies that have implemented RAI report fewer system lapses, less severity in those lapses and, interestingly, higher value driven by their AI investments.

All companies must adopt RAI, and they need to do it now. “I worry that companies feel like it’s too late, that they’ve implemented a ton of AI,” says Steve. “They need to focus on implementing RAI no matter what stage they are in because it’s critical that they build AI consistent with their values. Responsible AI is table stakes for any business that wants to realize the value of AI/GenAI.”

Kay Firth-Butterfield, one of the world’s foremost experts on AI governance, is the founder and CEO of Good Tech Advisory. Until recently she was Head of Artificial Intelligence and a member of the Executive Committee at the World Economic Forum. In February she won The Time100 Impact Award for her work on responsible AI governance. Firth-Butterfield is a barrister, former judge and professor, technologist and entrepreneur, and vice-chair of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. She was part of the group which met at Asilomar to create the Asilomar AI Ethical Principles, and is a member of the Polaris Council for the Government Accountability Office (USA), the Advisory Board for UNESCO International Research Centre on AI, ADI and AI4All. She sits on the Board of EarthSpecies and regularly speaks to international audiences addressing many aspects of the beneficial and challenging technical, economic, and social changes arising from the use of AI. This is the fourth of a planned series of exclusive columns that she is writing for The Innovator.

