Interview Of The Week: Arunima Sarkar, World Economic Forum

Arunima Sarkar is currently AI Lead at the Centre for the Fourth Industrial Revolution, World Economic Forum. She is responsible for the co-design of governance protocols and technology policy frameworks for artificial intelligence, and she also leads the quantum computing governance workstream, developing principles and frameworks for the responsible innovation and use of the technology. Sarkar is an expert member of the Global Partnership on AI (GPAI) working group on Responsible AI. She has more than 20 years of experience in research, corporate growth and strategy initiatives, and technology policy, with a special focus on data, analytics and artificial intelligence. Sarkar previously led Accenture's Global Applied Intelligence Research and has also worked in technology consulting and research roles at Gartner and other organizations. She recently spoke to The Innovator about how businesses can responsibly design, develop and deploy new technologies such as AI and quantum computing.

Q: How can businesses know if they are applying AI responsibly?

AS: While AI-powered decision making brings new opportunities, it also brings greater responsibility. Earlier this year the World Economic Forum published an AI C-Suite Toolkit covering the multiple dimensions businesses need to consider when making investments in AI. The questions that executives need to ask themselves and their teams cover alignment of AI strategy with overall corporate strategy, developing an AI culture in the organization, AI maturity and organizational change, setting up to successfully implement AI, understanding and managing AI risks, and adopting ethical and responsible AI practices and governance mechanisms.

Q: Can you tell us about your work on facial recognition technology?

AS: In April 2019, the World Economic Forum Centre for the Fourth Industrial Revolution launched the Responsible Limits on Facial Recognition project. It seeks to address the need for a set of concrete guidelines to ensure the trustworthy and safe use of this technology through the design of a robust governance framework.

While facial recognition technology has many socially beneficial uses in various industries, its adoption also creates a unique set of challenges that require an appropriate governance process to ensure its use is ethical and grounded in human rights. It is a difficult and highly sensitive use case that needs to be addressed, and the Forum has been working on it for the last three years.

We co-design the frameworks with multistakeholder communities. Once we release a framework we test it, incorporate the pilot feedback and learnings, revise the framework accordingly, and then scale it across different regions. We did this with the application of facial recognition technology to flow management at Japan’s Narita airport. In 2020 we issued a white paper arguing that, based on field experience, entrusting a certification body like AFNOR Certification to assess compliance with the principles for action is a good way to ensure the trustworthy design and use of facial recognition technology for flow management applications. Organizations that take this approach can undertake a rigorous multi-step process that starts with a governance framework review (principles for action, best practices, assessment questionnaire and audit framework), which can be used as a guide to design a facial recognition system or improve an existing one.

More recently, the Forum has been tackling the challenge of using facial recognition technology in law enforcement in a safe and trustworthy manner. It is a high-risk use case of AI where there have been instances of inaccuracy and threats to privacy – so much so that companies such as Amazon, IBM and Microsoft have placed a temporary moratorium on selling the technology to law enforcement until proper regulation is in place.

On November 3 the Forum, in partnership with INTERPOL, the United Nations Interregional Crime and Justice Research Institute and the Netherlands Police, published A Policy Framework for Responsible Limits on Facial Recognition Technology, Use Case: Law Enforcement Investigations. Six law enforcement agencies piloted the policy framework to review and validate its utility and completeness: the Brazilian Federal Police, the Central Directorate of the Judicial Police in France, the National Gendarmerie in France, the Netherlands Police, the New Zealand Police, and the Swedish Police Authority. The pilot exercise clearly demonstrated that very different procedures exist from agency to agency, which in turn shows a lack of standardization and underscores the absence of guidance to facilitate such standardization.

That said, a consensus formed around one aspect: the importance of the human element. First, it is essential that the human being using the technology understands it – how it functions, its use and its limitations – in order to be in a position to mitigate the risks. Second, agencies agreed that any output of a facial recognition-based search should be reviewed by a trained facial expert. Third, even after this review, the conclusion of the search always remains solely an investigative lead to be verified by investigators. Collectively, this ensures that a human being is always central to the use of facial recognition technology and that identification is never automated.

Q: Some would argue that discussions about responsible AI began after the horse was out of the barn. Is that why the Forum is starting to look at responsible use of quantum computing even though the technology is not readily available?

AS: With AI, the world started talking about governance once the technology was already out there. With quantum we have the time to be proactive. Governments are starting to build quantum policy units and develop national quantum strategies. At the same time there is a lot of investment in quantum: national governments have invested over $25 billion in quantum computing research, and over $1 billion in venture capital deals closed in 2021 – more than in the previous three years combined. We are at a point when there is rising interest but most of the work on quantum is still in the labs, so the time is right for this kind of conversation.

The Forum has convened a very diverse group of experts from technology companies, governments, scientific research organizations and academia from across regions. We spent almost a year discussing the possible opportunities and risks before 25 organizations started drafting the principles. There were a lot of diverse views, but we were able to find common ground on nine themes, underpinned by a set of seven core values, for the quantum computing governance principles, which were published earlier this year.

The nine themes are: harnessing the transformative capabilities of this technology and its applications for the good of humanity; ensuring wide access to quantum computing hardware; ensuring collaboration and open innovation in a precompetitive environment; creating awareness; workforce development and skills building; ensuring cybersecurity; mitigating potential data privacy violations through theft and processing by quantum computers; promoting standards and roadmapping mechanisms to accelerate the development of the technology; and sustainability.

It is too soon to know how the technology will develop. If we are not careful, lack of access to the hardware infrastructure and a lack of technology skills could broaden the digital divide, and the risk is that it could be much larger than what we have seen with other technologies.

Q: What are the key takeaways for business?

AS: Each organization will have different use cases for AI or quantum computing. The first step, before a company implements either, should be to do an impact assessment and a risk assessment and ask whether this is the right technology to address a specific problem. If it turns out to be the right technology, then the company needs to ask what the implications are for the ecosystem and what the impact is on customers and stakeholders. It is important to have foresight on the impact and the risks associated with developing or using a technology and to make that a key part of the initial design process, from the time companies begin developing their AI or quantum strategies. After that, responsible practices need to be applied throughout the lifecycle of a product or service. Building that culture needs to come from top management and trickle down to the development teams.

Q: How can companies be sure that what they are doing in one country will be considered responsible in another?

AS: There is a need for cross-jurisdiction and cross-border collaboration, not just for policy making but also for practical reasons. Globally we are trying to solve common societal problems around mobility, health and the environment. There are some common issues we face – like access to good-quality data and the sharing of data within organizations, between partners, within an economy and across borders. Companies are working in multinational environments, and they need to solve problems across jurisdictions. SMEs also want to access markets across jurisdictions, so some degree of interoperability between frameworks is necessary.

Frameworks, partnerships and best practices hold a lot of promise for enhancing responsible AI, and there are initiatives underway. The work being done by GPAI can enable practical cooperation across countries and help formulate common approaches to the responsible development and implementation of AI. We just concluded the GPAI 2022 annual summit in Japan, which took place November 21 and 22. Several experts in the Responsible AI working group, from diverse nations and disciplines, have been working on projects around the responsible use of AI, and I have personally been involved in some of the AI and climate work. We also discussed AI governance trends in different countries, such as Singapore, the UK and Japan, and how we can learn from each other. We need to look at how to make it simpler for businesses to operate technologies responsibly in different regions. This is definitely an area that needs further attention and work.





About the author

Jennifer L. Schenker

Jennifer L. Schenker, an award-winning journalist, has been covering the global tech industry from Europe since 1985, working full-time, at various points in her career, for The Wall Street Journal Europe, Time Magazine, the International Herald Tribune, Red Herring and BusinessWeek. She is currently the editor-in-chief of The Innovator, an English-language global publication about the digital transformation of business. Jennifer was voted one of the 50 most inspiring women in technology in Europe in 2015 and 2016 and was named by Forbes Magazine in 2018 as one of the 30 women leaders disrupting tech in France. She has been a World Economic Forum Tech Pioneers judge for 20 years. She lives in Paris and has dual U.S. and French citizenship.