With the EU AI Act coming into force and the Biden Administration putting measures in place in the U.S. to ensure AI is used responsibly, conversation is again turning to how regulation stifles innovation. But does it?
I am privileged to sit on the Advisory Board of Philippines-based ADI, the AI and data arm of the Aboitiz Group, a conglomerate that operates in six major industries including power, banking and financial services, food, infrastructure, and data science and AI. I joined for two reasons: because David Harddon, the CEO, did pioneering regulatory work on AI when he was Chief Data Officer at the Monetary Authority of Singapore, and because he came to me saying that he wanted to create AI use cases based only on Responsible AI foundations.
We both believe that seeing Responsible AI as a cost of doing business is the wrong way to view it. With falling levels of trust in AI products and in the companies that create and use them, we feel that building products on responsible AI will add to the bottom line and to success with customers. ADI’s approach can serve as guidance for regulators who want to harmonize the developmental and regulatory aspects of data and AI. It also serves as an example of how applied Responsible AI (RAI) makes business sense.
I recently caught up with Harddon and asked him why he decided that ADI would take a Responsible AI approach to all of its work.
Q: Why do you believe it is better to proactively embrace Responsible AI instead of waiting for regulators?
DRH: My time at the Monetary Authority of Singapore cemented my view that “waiting for the regulator” was a form of social moral hazard as well as a commercial risk. In fact, at ADI we have been able to demonstrate that embedded RAI can facilitate the sustainable operationalization of AI while boosting commercial return. Our approach ensures that RAI is not a separate component but is embedded into the core of our business strategy and decision-making process. Thus we decided to preemptively mitigate risks as well as ‘do the right thing’ by baking the guidelines of RAI, to the best of our ability and knowledge, into the core of our work. To give perhaps a silly analogy: no law requires you to look left and right when crossing the road. It is not only common sense; doing so sustains your ability to keep crossing the road successfully.
Q: Why is it important to you to have an external advisory board?
DRH: To quote Albert Einstein, “The more I learn, the more I realize how much I don’t know.” I truly believe in the value of surrounding oneself with those who know more, and in the value of external counsel and perspectives. To quote ancient texts, “The way of a fool is right in his own eyes, but a wise man listens to counsel.” This is particularly true for AI start-ups like ADI, where we are continuously navigating uncharted waters while pursuing ambitious goals to drive impact.
Q: Why do you believe using Responsible AI will help companies make money?
DRH: One of the realizations I had during my time as a regulator is that industry largely views governance, compliance, regulation, and the like as a business cost. The same is likely true of Responsible AI: it is seen as a cost the business must bear to adhere to governance and regulatory requirements. I personally disagree with this view and advocate that, while counter-intuitive, these functions need to have a business development mindset. The role of compliance isn’t to appease the regulator; it is to drive the business forward in a manner that adheres to the rules of the land. I believe that RAI results in ‘making money’ rather than ‘costing money’. In a recent HBR article that I co-authored, we wanted to test whether including so-called discriminatory attributes like gender resulted in more negative discrimination in lending. We were able to show empirically that explicitly including this information not only reduced potential gender-based discrimination but concurrently increased overall revenue. The RAI approach here used additional knowledge that was not previously available to make the business both more equitable and more profitable in its lending operations.
Q: You found that your loan officers and AI come to better decisions together, rather than separately. Does this tell us anything others could learn about Responsible AI and augmentation of jobs?
DRH: In fact, this case demonstrates the power and effect of incorporating additional information and knowledge into business operations, further evidencing my staunch belief in the power of AI as Augmented Intelligence and its net positive impact on jobs. At ADI, with our partners, it works both ways: AI enhances human capabilities and humans strengthen AI models. This approach is applicable in all the verticals we are focusing on: financial services, power, and smart cities. It makes sense: how can we be worse off by knowing more and seeking to be better?
Q: What are some of the other ways that RAI can increase inclusion?
DRH: I am on the Advisory Board of Connected Women, an organization whose objective is to help Filipino women find meaningful online careers, including jobs related to AI. Prior to my joining Connected Women, ADI had already partnered with them to bring better economic opportunities to their Elevate AIDA (Artificial Intelligence and Data Annotation) graduates. ADI engages their graduates in data cleansing and data annotation, among other things. This not only provides a platform for these women to actively contribute to the digital economy and readies them for the future of work; it also enhances our AI models.
Q: It’s important for any company to return value to its owners, investors or shareholders. Do you think that’s possible by building on Responsible AI foundations?
DRH: Without a shadow of a doubt, yes. The goal isn’t to implement RAI for the sake of implementing RAI. ADI is leveraging AI in the pursuit of business impact, quantifiably measured through revenue, operational efficiency, risk management, and sustainability.
In summary, I am hopeful that ADI represents the future of the responsible design, development and use of AI. Some companies have voluntarily adopted extensive responsible AI practices internally, but more often than not the push has been to innovate without guardrails. Those companies are now having to face up to the problems of moving fast and breaking things. On one of my panels in Davos this year, the CEO of Accenture North America talked about the large number of AI deployments that they were now helping to unwind because of the lack of Responsible AI guidelines in the initial implementation. Responsible AI can offer organizations a substantial upside. Taking a different route can prove costly in more ways than one.
Kay Firth-Butterfield, one of the world’s foremost experts on AI governance, is the founder and CEO of Good Tech Advisory. Until recently she was Head of Artificial Intelligence and a member of the Executive Committee at the World Economic Forum. In February she won The Time100 Impact Award for her work on responsible AI governance. Firth-Butterfield is a barrister, former judge and professor, technologist and entrepreneur, and Vice-Chair of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. She was part of the group that met at Asilomar to create the Asilomar AI Ethical Principles, and is a member of the Polaris Council for the Government Accountability Office (USA), the Advisory Board for the UNESCO International Research Centre on AI, ADI and AI4All. She sits on the Board of EarthSpecies and regularly speaks to international audiences on many aspects of the beneficial and challenging technical, economic, and social changes arising from the use of AI. This is the second in a planned series of exclusive columns that she is writing for The Innovator.
To watch a YouTube video of Firth-Butterfield in conversation with Harddon in an ADI podcast on Data, AI and Everything click here.