Professor Renée Cummings, listed among the world's top 100 women in AI ethics, is an AI, data and tech ethicist and the first data activist-in-residence at the University of Virginia (UVA) School of Data Science, where she was named Professor of Practice in Data Science. She is also the inaugural senior fellow in AI, Data and Policy at All Tech Is Human, a leading international think tank.
Cummings is a nonresident senior fellow at The Brookings Institution, co-director of Brookings' Equity Lab, and a member of both the World Economic Forum's Data Equity Council and its AI Governance Alliance. She also belongs to the Global Academic Network at the Center for AI and Digital Policy (CAIDP) in Washington, D.C.
A criminologist, criminal psychologist, therapeutic jurisprudence specialist and community scholar at Columbia University, Cummings serves as co-director of the Public Interest Technology (PIT) University Network at UVA. She recently spoke to The Innovator about reimagining equity in the age of AI.
Q: We don’t have an equitable society. How do we establish equity with AI?
RC: We can't have equitable AI without equitable data. The data we have, which is being used to make critical decisions from the C-suite to Main Street, was created amid historical inequities and biases, so the equity challenge with AI is a data equity challenge. AI governance and data governance must walk hand in hand.
Q: How do we make that happen from a practical point of view?
RC: I am a member of a World Economic Forum Global Future Council focusing on data equity. We have developed a framework built around three main categories: data, purpose and people. Critically examining the data pipeline can improve outcomes and ensure that biases are addressed early in the process and throughout it. Working with data requires a clear purpose and intentionality; without them, data analytics may lack fairness and impact, or even cause harm. Protecting individuals' data rights throughout the data life cycle is crucial to ensuring that the collection and use of data benefit people and communities. As part of the framework, we have developed a series of questions to help organizations evaluate their data, along with suggested initial actions for implementing data equity. If organizations design their data strategy, systems and solutions using this equity framework, they can reduce bias and discrimination and the associated legal, financial and reputational risks.
Q: What is the best way to get started?
RC: Start with a critical deconstruction of AI, to understand just how powerful this technology is and how perilous it could be without the requisite due diligence. When applied to critical systems, AI can accelerate and amplify self-interest and systemic inequities as well as disempower and disenfranchise.
There is no perfect data set or perfect algorithm. People need to understand both the capabilities and the limitations of the models, document the provenance of the data, and understand the risks and how to detect, mitigate and manage them. We need humans and machines working together, a collective intelligence, to achieve a sophisticated level of critical thinking in how models are built and deployed, and to ensure fairness and inclusion.
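One concrete way to act on the provenance point is to document it in a machine-readable record that travels with the data. Below is a minimal sketch in Python, assuming a datasheet-style record; the class and field names are illustrative assumptions, not drawn from any standard Cummings references.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DatasetProvenance:
    """Illustrative datasheet-style record of where a data set came from."""
    name: str
    source: str                      # where the data was collected
    collection_period: str           # when it was collected
    known_gaps: list = field(default_factory=list)    # under-represented populations
    known_biases: list = field(default_factory=list)  # documented historical biases
    intended_use: str = ""           # the stated purpose and intentionality

# Hypothetical example: lending data shaped by historical inequities.
record = DatasetProvenance(
    name="loan_applications_2015_2020",
    source="legacy underwriting system",
    collection_period="2015-2020",
    known_gaps=["thin-file applicants", "rural ZIP codes"],
    known_biases=["historical redlining reflected in approval labels"],
    intended_use="credit-risk model training, with bias review at each stage",
)

# Persist the record alongside the data so every downstream consumer sees the caveats.
print(json.dumps(asdict(record), indent=2))
```

Keeping such a record next to the data set itself, rather than in a separate document, makes it harder for later users of the data to overlook its documented limitations.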
Q: There has been a lot of controversy around the use of AI by human resources in hiring, with some high-profile cases highlighting how it can reinforce traditional biases against women, people of color and others. Yet many companies are using AI to screen candidates. How can they do so safely?
RC: In the age of AI, we need to treat data as a critical aspect and asset within the realm of equal opportunity and determine what that looks like in the workplace from a hiring, training and retention perspective. Within that context, companies should be asking themselves: How are we creating opportunities with AI? What kinds of outcomes are we achieving? Are our hiring practices fair? If we are using AI-inspired tools for training and evaluations, are these tools accentuating bias?
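One widely used first check on the "are our hiring practices fair?" question is the four-fifths rule from U.S. EEOC guidance: each group's selection rate should be at least 80% of the highest group's rate. The sketch below, using hypothetical screening results, shows how such a check might look; it is one illustration of the kind of outcome monitoring Cummings describes, not a method she prescribes.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group_label, passed_screen: bool) pairs."""
    totals, passed = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        passed[group] += ok
    return {g: passed[g] / totals[g] for g in totals}

def four_fifths_check(rates, threshold=0.8):
    """Flag any group whose selection rate falls below 80% of the best rate."""
    best = max(rates.values())
    return {g: (r / best >= threshold) for g, r in rates.items()}

# Hypothetical screening results: (self-reported group, advanced to interview?)
results = [("A", True)] * 40 + [("A", False)] * 60 + \
          [("B", True)] * 25 + [("B", False)] * 75

rates = selection_rates(results)
print(rates)                     # {'A': 0.4, 'B': 0.25}
print(four_fifths_check(rates))  # {'A': True, 'B': False} -> group B is flagged
```

A failed check is a signal to investigate the screening tool and the training data behind it, not a definitive finding of discrimination.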
Q: You were recently invited to the White House to discuss AI and equity. Can you talk about what that meeting focused on?
RC: At the White House, we discussed how AI equity could help reduce the digital divide by creating not only equitable data but also equitable access and equitable opportunities. This means ensuring that people have the skills and tools to participate in the AI economy and that, through AI education, awareness and literacy training, we are building a sustainable AI ecosystem of entrepreneurs, an AI-ready workforce and an AI-prepared society.
Q: What advice do you have for corporates?
RC: In our framework we recommend that the private sector:
- Embed model and system traceability, transparency and accountability (a minimal logging sketch follows this list)
- Employ diverse and inclusive risk management and red-teaming strategies
- Enable user feedback and audits of data and algorithms
- Implement ethical impact assessments and diverse, equitable and inclusive field testing
- Implement transparent and inclusive auditing mechanisms
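As one hedged illustration of the traceability recommendation, the sketch below logs each model decision in an append-only, hash-chained record so that auditors can later verify the history has not been altered. The class and field names are hypothetical, not part of the framework itself.

```python
import hashlib, json, time

class DecisionLog:
    """Append-only log of model decisions. Each entry embeds the previous
    entry's hash, so tampering with past records is detectable on audit."""
    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64

    def record(self, model_version, inputs, decision):
        entry = {
            "ts": time.time(),
            "model_version": model_version,
            "inputs": inputs,
            "decision": decision,
            "prev_hash": self._prev_hash,
        }
        # Hash the entry (including the previous hash) to chain the records.
        self._prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._prev_hash
        self.entries.append(entry)

log = DecisionLog()
log.record("screening-model-v2", {"years_experience": 4}, "advance")
log.record("screening-model-v2", {"years_experience": 1}, "reject")
print(json.dumps(log.entries, indent=2))
```

Recording the model version alongside each decision also supports the framework's transparency goal: an auditor can tie any contested outcome back to the exact system that produced it.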
The goal should be good economics and good governance. If you don't engage with an equity framework, your organization could face the kind of legal, reputational and financial risks that many companies struggle to bounce back from. Responsible AI is about trust and taking an ethical approach. Equity is a key component of that conversation, if not the main component. AI for good, AI that benefits humanity, has to begin by reimagining equity.