Startup Of The Week: Credo AI

Credo AI has developed an end-to-end governance platform that allows corporates to manage compliance and measure risk for AI deployments at scale. The company is targeting finance and banking, HR and talent management, retail, high tech, and government.

“What we are really solving for is the AI governance chasm,” says CEO and Founder Navrina Singh. “It’s a core pain point. The people charged with compliance oversight don’t have AI expertise and the people working on the technical side don’t understand the risk framework.”

Singh, an engineer by training, has over 18 years of experience in enterprise SaaS, AI and mobile, working in multiple product and business leadership roles at Qualcomm and Microsoft and serving as an executive board member of Mozilla, where she helped guide the organization’s trustworthy AI charter. After being named a Young Global Leader by the World Economic Forum and joining its AI Council, Singh started studying emerging AI legislation and impact assessments in the EU, Singapore and Canada. It was an eye-opening experience. She said she realized that much of what people on the tech side were building was “fundamentally wrong” and that there was little or no oversight and accountability. Singh launched Credo AI to create tools for enterprises and governments that bridge that gap, with the aim of helping clients reduce financial, regulatory and brand risks.

AI holds the promise of making organizations 40% more efficient by 2035, unlocking an estimated $14 trillion in new economic value, according to the World Economic Forum. But as AI’s transformative potential has become clear, so, too, have the risks posed by unsafe or unethical AI systems, says the Forum. It points out that recent controversies over facial recognition, automated decision-making and COVID-19 tracking have shown that realizing AI’s potential requires strong buy-in from citizens and governments, based on their trust that AI is being built and used ethically and will be applied for the public good rather than just to profit a few corporations.

Academia, not-for-profit organizations and standards bodies are trying to weigh in, but their efforts are fragmented. More than 175 organizations have separately proposed AI ethical guidelines while standards bodies are working on ways to hardwire ethics into the technology itself.

Work is underway on the feasibility of a certification and mark for the trusted use of AI systems and on an agreed-upon way to audit them that could work across sectors and regions. But corporates and governments that are using AI can’t wait for these systems to be in place.

Last month, Dutch Prime Minister Mark Rutte resigned, along with his entire cabinet, after a year and a half of investigations revealed that, since 2013, 26,000 innocent families had been wrongly accused of social benefits fraud, partly due to a discriminatory algorithm. In the UK, exam results based on a controversial algorithm were scrapped after accusations surfaced that the results were biased against students from poorer backgrounds.

Meanwhile, a number of high-profile cases have demonstrated that, used wrongly, AI human resources tools can reinforce historical biases and expose the companies that use them to reputational damage and legal issues.

To avoid such issues, Credo AI’s system includes an alignment tool that determines what good or responsible AI looks like, based on parameters set by a company or government and on regulatory requirements. It then maps a pathway to objectively assess datasets and models for fairness, reliability, and transparency across the entire life cycle of AI-powered products and services, from design to production. The technology also checks the context, i.e. the intent of the use case, the impact the company wants it to have, and the geography where it will be applied, since regulations differ.
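To make the idea of mapping such requirements to measurable checks concrete, here is a minimal, hypothetical sketch in Python. It is not Credo AI’s actual product or API; the policy fields, metric names and thresholds are invented for illustration, and it simply shows how stated parameters for a given use case and geography could be compared against measured model metrics to surface governance gaps.

```python
# Illustrative sketch only: a hypothetical "alignment" check that compares
# measured model metrics against thresholds set for a given use case and
# geography. This is NOT Credo AI's actual API; all names are invented.
from dataclasses import dataclass


@dataclass
class Policy:
    use_case: str           # e.g. "hiring" or "credit_scoring"
    geography: str          # regulations differ by region
    min_fairness: float     # e.g. minimum demographic parity ratio
    min_reliability: float  # e.g. minimum accuracy on a holdout set


def alignment_gaps(policy: Policy, measured: dict[str, float]) -> list[str]:
    """Return governance gaps: metrics that fall short of the policy's thresholds."""
    thresholds = {"fairness": policy.min_fairness,
                  "reliability": policy.min_reliability}
    gaps = []
    for metric, floor in thresholds.items():
        value = measured.get(metric, 0.0)  # a missing metric counts as a gap
        if value < floor:
            gaps.append(f"{metric} {value:.2f} is below the required {floor}")
    return gaps


if __name__ == "__main__":
    hiring_policy = Policy("hiring", "NYC", min_fairness=0.80, min_reliability=0.90)
    metrics = {"fairness": 0.72, "reliability": 0.93}   # hypothetical measurements
    print(alignment_gaps(hiring_policy, metrics))       # -> ['fairness 0.72 is below the required 0.8']
```

In practice such checks would be run repeatedly across the AI life cycle, so the same policy can be evaluated at design time, before deployment and in production.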

“Credo AI pulls in all the guard rails – standards and regulations, existing models of risk management, established ethical principles – to do a gap analysis,” says Singh. It additionally offers a “get ready policy pack,” which helps companies prepare to comply with upcoming regulations.

For example, in December of last year New York City enacted Int. No. 1894-A, a local law which takes effect on January 1, 2023. The law regulates the use of “automated employment decision tools” in hiring and promotion decisions within the city. The law, which applies to employers and employment agencies alike, requires that a bias audit be conducted on an automated employment decision tool prior to use. It also requires that candidates or employees who reside in the city be notified about the use of such tools in the assessment or evaluation for hire or promotion, as well as about the job qualifications and characteristics the automated employment decision tool will use. Violations of the law’s provisions are subject to civil penalties.

There is currently no legally recognized independent bias audit, says Singh, but Credo AI can help companies understand where the gaps are and try to fix them. She predicts that in the next 12 to 14 months more jurisdictions will pass similar laws.
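As a rough illustration of one statistic such a bias audit could involve, the sketch below computes selection rates and impact ratios for a hypothetical hiring tool and compares them against the commonly cited “four-fifths” guideline. It is an assumption-laden example, not a legally recognized audit procedure and not Credo AI’s method; the data and function names are invented.

```python
# Illustrative only: impact ratios (a group's selection rate divided by the
# rate of the most-selected group) checked against the four-fifths guideline.
# Not a legally recognized audit procedure; data below is hypothetical.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (number selected, total applicants)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}


def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each group's selection rate relative to the most-selected group."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}


if __name__ == "__main__":
    # Hypothetical counts of candidates an automated tool advanced to interview.
    data = {"group_a": (45, 100), "group_b": (28, 100)}
    for group, ratio in impact_ratios(data).items():
        flag = "OK" if ratio >= 0.8 else "below four-fifths threshold"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```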

The two-year-old U.S. startup’s customers include a Fortune 50 financial services company, a large defense contractor, a large retailer and top cloud providers.

Competitors include machine learning model operationalization management (MLOps) companies such as Microsoft, as well as companies like ServiceNow, which offers a Governance, Risk and Compliance (GRC) automation tool. Singh says Credo AI is the only one that does both.

The company has financial backing from Andrew Ng, a co-founder and former head of Google Brain and former chief scientist at Baidu, where he is credited with building the company’s artificial intelligence group. It has also hired Eddan Katz as a tech policy advisor. Katz most recently worked at the World Economic Forum, connecting AI platform projects across the Center for the Fourth Industrial Revolution’s network of government, corporate and civil society partners and affiliates.

“The Credo AI platform is an essential tool for companies to anticipate industry-specific regulations as they take shape, but more importantly preparing people in different roles across an organization as they integrate AI into their business well into the future,” says Katz.



About the author

Jennifer L. Schenker

Jennifer L. Schenker, an award-winning journalist, has been covering the global tech industry from Europe since 1985, working full-time, at various points in her career, for the Wall Street Journal Europe, Time Magazine, the International Herald Tribune, Red Herring and BusinessWeek. She is currently the editor-in-chief of The Innovator, an English-language global publication about the digital transformation of business. Jennifer was voted one of the 50 most inspiring women in technology in Europe in 2015 and 2016 and was named by Forbes Magazine in 2018 as one of the 30 women leaders disrupting tech in France. She has been a World Economic Forum Tech Pioneers judge for 20 years. She lives in Paris and has dual U.S. and French citizenship.