Focus On AI

Designing A Global Approach to Responsible AI

Designing A Global Approach To Ethical Data-Driven AI

AI has the potential to speed up drug development and our ability to eradicate diseases. It can help get medical advice to people who have no access to doctors and usher in personalized medicine, making possible more precise diagnosis, prediction of disease and assessment of response to therapy. The more data it collects, the more accurate it becomes.

But what happens if we give away all the data and leave all these AI-led discoveries to a small number of global monopolists? Will we end up worse off, with even more unequal access to medical care? 

Designing an approach to ethical and equitable data-driven AI in the healthcare sector raises many questions. Should we allow every single molecule in people’s bodies to become exploitable for economic value? Or should health data be considered a public good and treated separately from data about which brands we prefer or what we post on social media? Who owns an individual’s health data, and how do we make certain that privacy is respected? How do we guarantee that individuals and/or communities benefit from the data collected about them? And how do we ensure that the data collected represents, and is designed to help, members of all races and ethnic groups, and that low-income countries and communities don’t get left behind?

These crucial issues were discussed by a January 28 panel moderated by The Innovator’s Editor-In-Chief during The Davos Agenda, a week-long virtual meeting organized by The World Economic Forum. Panelists included Dr. Gianrico Farrugia, President and CEO of The Mayo Clinic, South Centre Executive Director Carlos Correa and Stella Ndabeni-Abrahams, South Africa’s Minister of Communications and Digital Technologies.

The discussion shed light on how organizations and countries are trying to come to grips with these issues but also underscored that a lack of global consensus is holding back the roll-out of the technology and the benefits it could bring, not just to healthcare but to all sectors.

AI holds the promise of making organizations 40% more efficient by 2035, unlocking an estimated $14 trillion in new economic value. But as AI’s transformative potential has become clear, so, too, have the risks posed by unsafe or unethical AI systems, says the Forum. It points out that recent controversies over facial recognition, automated decision-making and COVID-19 tracking have shown that realizing AI’s potential requires strong buy-in from citizens and governments, based on their trust that AI is being built and used ethically and will be applied for the public good rather than just profiting a few corporations.

To that end the Forum announced January 28 that it is launching a new coalition called the Global AI Action Alliance (GAIA). The aim is “to speed adoption of trustworthy and transparent AI across all sectors,” Kay Firth-Butterfield, the Forum’s Head of Artificial Intelligence and Machine Learning, said during a press conference about the launch.

Designing A Fit-For-Purpose Approach

The alliance is a recognition that a fit-for-purpose approach is needed to solve one of the 21st century’s thorniest governance issues.

There is a growing consensus that governance should not be left to governments alone. Most politicians are not savvy enough about technology to ask the right questions about data-driven AI. What’s more, regulations often take too long to draft, are frequently out of date by the time they are implemented, and can end up doing more harm than good.

“No government agency is going to be able to keep up with the tech wave that is in front of us,” says the Mayo Clinic’s Farrugia. “It would be a significant mistake to try and control everything because what is true today is unlikely to be true six months from now and certainly not in a year’s time.”

Meanwhile, multiple scandals have convinced many that the tech industry can’t be trusted to govern itself.

Academia, not-for-profit organizations and standards bodies are trying to weigh in, but their efforts are fragmented. More than 175 organizations have separately proposed ethical guidelines for AI, while standards bodies are working on ways to hardwire ethics into the technology itself.

The trouble is there is no global agreement on what is ethical. Values are not universally shared and, some argue, work on AI ethics is being skewed by the companies that fund it. Indeed, a paper from the University of Toronto’s Centre for Ethics found that Big Tech is gaining influence over AI ethicists by providing a large amount of money to researchers in the field. Some 58% of AI ethics faculty are looking to Big Tech for money, according to the paper, which is entitled “The Grey Hoodie Project: Big Tobacco, Big Tech And The Threat To Academic Integrity.” “The negative effect of Big Tobacco’s money on research quality has been widely reported and it is commonly accepted that private-interest funding biases research,” says the paper. “This is exactly what is happening in the field of machine learning.”

The Forum’s solution is to bring all stakeholders to the table in a neutral setting. It is inviting participation from business, governments, standards bodies and academia to try to develop a coherent multilateral approach. The alliance is being supported by a grant from the Patrick J. McGovern Foundation, a global, $1.4 billion philanthropy that aims to advance the frontiers of artificial intelligence, data science and social impact. The alliance’s steering committee is being co-chaired by Vilas Dhar, President of the Foundation, and Arvind Krishna, Chairman and CEO of IBM.

Establishing Trust Through A Global Coalition

“AI is essential to unlock all the data that everything around us gives us,” Krishna said during the press conference announcing the alliance’s launch. “It is going to unlock massive amounts of productivity, some $40 trillion, over the next 15 years. We are only a few percent of the way there – less than 10% of that value has been unlocked. Why is that? A big aspect why is trust. We all experience AI in smart phones and smart speakers but do we trust it to make health decisions for us? Do we trust it to make government decisions for us? What would engender that trust? Without trust we are not going to be able to unlock all that value.”

IBM plans to open source its approach to AI ethics through GAIA, as a way of showing its commitment to the responsible stewardship of data and technology. The U.S. tech company said it will make available, through the alliance, its approach to putting AI ethics into practice at a global scale; the governance structure it uses to evaluate applications of data and technology in a centralized, accountable way; and the consulting expertise and know-how needed to implement AI ethics.

Other concrete initiatives are already in the works. The Forum is actively partnering with a wide range of contributors to its work on ethical AI, including business leaders, academics, governments and other philanthropic organizations. “We are not creating this alliance in a vacuum, and we want it to build on the excellent work being done by others,” says Mark Caine, the Forum’s Project Lead for Artificial Intelligence and Machine Learning. “We work with hundreds of organizations that want to do AI better. We want to bring them to the table, understand what their needs are, and connect them with the best available tools for ensuring that their use of AI is inclusive, trustworthy, and transparent.”

Third Party Certification Of Responsible AI

The Forum is already working with the Schwartz Reisman Institute for Technology and Society at the University of Toronto and with AI Global, a non-profit building governance tools to address growing concerns about AI, on the feasibility of a certification program and mark for the trusted use of AI systems that could work across sectors and regions.

It’s one thing for companies to say they have developed responsible AI, but how do people know if this is actually the case? “Anyone can design an algorithm but there is no guarantee that an algorithm is safe,” says Miyo Yamashita, Director of Strategy and Operations at the Schwartz Reisman Institute.

“We are focusing on setting guard rails through a comprehensive and independent certification program that is grounded in accepted principles, is practical and measurable, internationally recognized and built with trust and transparency,” explains Ashley Casovan, Executive Director of AI Global.

The certification process is being built in collaboration with government and regulatory agencies, academia, civil society organizations, tech companies, and the audit firms that will act as certifiers.

A global, recognized AI mark of approval would work like the universally recognized “UL Listed” mark, which attests that a product has been tested by UL (originally known as Underwriters Laboratories), a global independent safety science company, to inspire consumer confidence.

In the case of AI, the mark would aim to set meaningful benchmarks for bias, fairness, explainability and other metrics of responsibly built AI systems. But unlike appliances, which need only be tested once to be considered safe, algorithms will have to be tested on an ongoing basis and independently audited, says Casovan. AI systems will be assessed and scored across multiple dimensions, and the certification output will be a scorecard, analogous to consumer reports for cars.
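The mechanics of the scoring have not yet been published. Purely as an illustrative sketch, a multi-dimensional scorecard of the kind Casovan describes might be modeled as follows; every dimension name, score and system name here is hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical dimensions a certification scorecard might cover; the actual
# benchmarks used by the program have not been made public.
DIMENSIONS = ["bias", "fairness", "explainability", "robustness", "privacy"]

@dataclass
class Scorecard:
    system_name: str
    audit_date: str
    scores: dict = field(default_factory=dict)  # dimension -> score in [0, 1]

    def record(self, dimension: str, score: float) -> None:
        if dimension not in DIMENSIONS:
            raise ValueError(f"unknown dimension: {dimension}")
        self.scores[dimension] = max(0.0, min(1.0, score))  # clamp to [0, 1]

    def summary(self) -> str:
        lines = [f"Scorecard for {self.system_name} (audited {self.audit_date}):"]
        for dim in DIMENSIONS:
            score = self.scores.get(dim)
            shown = f"{score:.2f}" if score is not None else "not assessed"
            lines.append(f"  {dim:<15} {shown}")
        return "\n".join(lines)

# Because systems must be re-tested on an ongoing basis, a new scorecard
# would be produced at each audit rather than issued once, appliance-style.
card = Scorecard("hiring-model-v2", "2021-02-01")
card.record("bias", 0.82)
card.record("explainability", 0.64)
print(card.summary())
```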

The program is currently being piloted with real-world AI applications, including New York State’s automated hiring system, the government of Alberta’s forest fire protection program, and a system at an insurance company. A first version of the certification process is expected in about 18 months.

Creating A Data-Driven Economy

The future of AI will also be shaped by the ways in which data is collected, stored, governed, accessed and used, so the Forum is separately working on several programs that pilot ways to leverage data differently in order to reap the full benefits of AI.

The Forum’s stance is that many institutions have wrongly focused their attention and resources mostly, and in some cases exclusively, on data protection and privacy.  Not only has this approach failed to harness the full value of data, says the Forum, it has also led to the rapid fragmentation of data governance policies and impeded data sharing for agreed-upon purposes, such as mission-critical applications during a pandemic.

The challenge is to create a new, flexible data governance model that allows data from personal, commercial, and government sources to be combined, while still respecting rights, in a way that positively empowers a variety of stakeholders while removing unintended policy barriers. “We want to establish best practices in data governance and demonstrate what that would look like,” Sheila Warren, the Forum’s Head of Data, Blockchain and Digital Assets, said during an interview.

Orienting data policy and data models around common purposes, such as specific use cases, could unlock opportunities for both the public good and commercial spheres. The idea is that data can and should be treated differently depending on its actual and anticipated use, and that Fourth Industrial Revolution technologies are on a path to enabling differentiated permissioning of the same data, dependent upon context. Technology already exists that would not only permit devices to collect information about individuals’ vitals and lifestyle but also allow people to set permissions around what their data can be used for, such as research and testing around a cure for Covid-19, dementia, or cancer. As new initiatives are launched for permitted purposes, an individual’s relevant data could automatically be encrypted, anonymized, and transmitted along with digital rights management rules to ensure that the data cannot be used for other purposes. The data brought together by this aggregation could be used by algorithms to identify trends human experts can miss and to surface recommendations for professionals, such as researchers in the medical field, to review.
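The article does not point to a reference implementation. Purely as an illustration of the idea, differentiated, purpose-based permissioning of the same record could be sketched along these lines; the purposes, types and field names below are all invented for the example:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

# Hypothetical purposes a person might opt into; a real taxonomy would be
# defined by the governance framework, not hard-coded like this.
class Purpose(Enum):
    COVID_RESEARCH = "covid_research"
    DEMENTIA_RESEARCH = "dementia_research"
    MARKET_RESEARCH = "market_research"

@dataclass
class HealthRecord:
    subject_id: str
    vitals: dict
    permissions: set = field(default_factory=set)  # purposes the subject opted into

    def grant(self, purpose: Purpose) -> None:
        self.permissions.add(purpose)

def release_for(record: HealthRecord, purpose: Purpose) -> Optional[dict]:
    """Release an anonymized copy of the data only for a permitted purpose.

    A production system would also encrypt the payload and attach digital
    rights management rules so downstream use stays bound to this purpose.
    """
    if purpose not in record.permissions:
        return None  # no consent for this use, so nothing leaves
    return {"purpose": purpose.value, "vitals": dict(record.vitals)}  # subject_id stripped

record = HealthRecord("patient-123", {"heart_rate": 62, "sleep_hours": 7.5})
record.grant(Purpose.COVID_RESEARCH)
print(release_for(record, Purpose.COVID_RESEARCH))   # permitted: anonymized copy
print(release_for(record, Purpose.MARKET_RESEARCH))  # not permitted: None
```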

People could additionally choose to set permissions for commercial purposes. If someone wanted to allow their anonymized data to be used for prescribed purposes, such as market research, they could make the choice to opt in and, in some cases, get paid. The valuation of the data used could be divided by purpose and defined by a “commodity exchange,” which would set the value of the data output for specific use cases, much as happens with intellectual property today. This would allow people to get paid at the moment of consumption, and simultaneously enable financial authorities to clarify taxable income. It is unlikely that this process will generate significant amounts of revenue for individuals, says Warren, but these exchanges could empower people and issue “digital dividends” that provide benefits other than money to communities. For instance, this process could reduce bias in datasets by ensuring that the people who provide data benefit.
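How such an exchange would actually set and settle prices is not spelled out. A minimal sketch, assuming a posted per-record rate per purpose and a ledger of consumption events (all rates, purposes and names here are invented), might look like this:

```python
# Hypothetical per-record rates a "commodity exchange" might publish by
# purpose; real values would be set by the exchange, not hard-coded.
EXCHANGE_RATES = {
    "market_research": 0.05,
    "clinical_trial_recruitment": 0.10,
}

ledger = []  # consumption events, also usable for tax reporting

def consume(subject_id: str, purpose: str, records_used: int) -> float:
    """Pay the data subject at the moment of consumption, per the posted rate."""
    rate = EXCHANGE_RATES.get(purpose)
    if rate is None:
        raise ValueError(f"no exchange listing for purpose: {purpose}")
    payout = rate * records_used
    # Recording each event is what would let financial authorities
    # clarify taxable income, as the article suggests.
    ledger.append({"subject": subject_id, "purpose": purpose, "payout": payout})
    return payout

print(consume("patient-123", "market_research", 10))  # 0.5 paid at point of use
```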

To prove its hypothesis, the Forum is piloting government-led data marketplaces in Colombia and in Japan. The hope is that “facilitating data marketplaces will promote the appropriate exchange of data as a strategic asset for common good and stimulate the transition to a data-driven economy that equitably apportions risks and rewards across a broader set of stakeholders,” says Warren.

Although the technology exists, a system for ethically and safely sharing data about PPE and other critical supplies was not in place when the pandemic started. “It’s tragic,” says Warren. “So many lives were lost. My hope is that we don’t miss the opportunity to create a better system going forward, using insights from data.”

The Human Factor

Another Forum project involving people and government, which is scheduled to be piloted in Helsinki this year, seeks to test new opportunities for data availability and data sharing between the public and private sectors through the development of a new “human-centric model.” The idea behind the program is to change the way society thinks about data, says Warren. To date, most approaches treat data as a resource for business and control what organizations can and can’t do with it, without fully taking into account the interests of people. In this pilot, regulations will be designed in a way that assumes and guarantees that data about people is being used for their benefit and that of their communities, and that the technology prioritizes humane values, including social and ecological ones.

Up until now, the choice has been painted as binary: either support business and innovation or impede innovation and empower people, says Warren. The Helsinki trial is designed to prove that it is possible to do both, and that human-centric does not mean anti-business, she says.

AI For All

Another big focus of the Forum’s work is how to make AI and data more inclusive. For emerging markets to benefit from AI health bots, for example, they need to have the proper data protection legislation in place, and the Forum is helping with those efforts in countries like Rwanda, says Eddan Katz, who connects AI platform projects across the Forum’s Center for the Fourth Industrial Revolution network of government, corporate, and civil society partners and affiliates. The Forum also wants to help safeguard against “data colonization,” a term that describes companies in developed countries collecting data from communities in emerging markets and then using that data to create products those communities can’t afford.

It is also important that all groups be represented when collecting data for drug development and medical research if the world is to reap the full benefits of AI, notes Warren. Genomics, which influences all aspects of our health, from the risk of disease to how quickly the body breaks down drugs, promises to usher in the age of precision medicine. But about 78% of the existing genomic data used in developing precision medicine has been gathered from people with European ancestry, meaning most of the world’s population risks missing out on healthcare innovations. Due to this data bias, researchers believe that the true promise of genomics cannot be realized globally.

All too often key voices from civil society are missing in AI development, says Dhar, the president of the Patrick J. McGovern Foundation.

The foundation’s work with the Global AI Action Alliance will center on giving people from all walks of society a chance to weigh in, making sure that AI is applied ethically and equitably and used to help achieve the United Nations’ Sustainable Development Goals, he says. Some 2,700 use cases for AI to help address the SDGs in areas such as climate change, aging and mobility have been identified, but very few of them are being exploited.

“Our end goal with this work is to create an ethical AI-enabled society, one that is inclusive, generative and inspiring,” he says. “GAIA is a first step in that direction.”

About the author

Jennifer L. Schenker

Jennifer L. Schenker, an award-winning journalist, has been covering the global tech industry from Europe since 1985, working full-time, at various points in her career, for the Wall Street Journal Europe, Time Magazine, the International Herald Tribune, Red Herring and BusinessWeek. She is currently the editor-in-chief of The Innovator, an English-language global publication about the digital transformation of business. Jennifer was voted one of the 50 most inspiring women in technology in Europe in 2015 and 2016 and was named by Forbes Magazine in 2018 as one of the 30 women leaders disrupting tech in France. She has been a World Economic Forum Tech Pioneers judge for 20 years. She lives in Paris and has dual U.S. and French citizenship.