Interview Of The Week: Raffi Krikorian, AI Expert

Raffi Krikorian is the Chief Technology Officer at the Emerson Collective, an organization founded by Laurene Powell Jobs, Steve Jobs’ widow, that focuses on education, immigration reform, the environment, media and journalism, and health.

An engineer and tech leader, Krikorian previously worked as vice-president of platform engineering at Twitter, running a global engineering team. At Uber, he helped start and run the self-driving efforts tasked with building and deploying the first-ever self-driving, passenger-carrying fleet. When Donald Trump was elected president of the United States in 2016, Krikorian pivoted to put his technological know-how to use in the political realm. He joined the Democratic National Committee as its first-ever Chief Technology Officer, building a team that fused campaign and industry veterans, to secure the party after its breach by Russian state actors and to entirely revamp its tech infrastructure.

During his four years at the Emerson Collective, Krikorian has focused on applying technology to the social sector. He works across all teams, and with Emerson partners, to consider data, tools, and product design with the aim of empowering them to achieve their goals.

Krikorian is a graduate of M.I.T. He currently serves on numerous boards, including TUMO, an Armenia-based organization that brings cutting-edge STEM education, through an innovative free after-school program, to 50,000 kids in centers in Armenia, Lebanon, France, Switzerland, Germany, Ukraine, Albania, and other countries; the Community Tech Alliance, which works to break the cycle of time-intensive patchwork solutions that progressive organizations rebuild from one election cycle to the next; and Medic, a nonprofit organization founded to improve health and health outcomes in the world’s hardest-to-reach communities.

Krikorian, who recently launched a podcast called “Technically Optimistic,” spoke on a May 24 panel on how to regulate AI at the Viva Tech conference in Paris, moderated by The Innovator’s Editor-in-Chief. He recently agreed to be interviewed by The Innovator about the dangers and benefits of AI.

Q: Can we have it all: Safe, ethical and profitable AI?

RK: Tech companies and regulators are still trying to figure this out, and civil society is confused or scared, so right now there are too many balls in the air. It is unclear whether we can have it all.

Q: What needs to be done to change that?

RK: I would do a few things. We need to dramatically ramp up public education to bring more voices into this debate. On the K-12 side, I would actively issue guidelines to every single U.S. state on how to accelerate AI literacy by adding it to school curriculums. Finland has led the way on educating the public about AI. In 2018, it set a goal to educate 1% of its population through an online course. It managed to get 10% of its entire population to go through it! We need more of these types of efforts. We also need to change government pay scales for people working on AI safety. Qualified people are making $1 million liquid salaries with full equity at private companies, so it is almost laughable to ask them to come work for the government for $150,000 to $200,000.

Q: What should corporates be doing to get up to speed on AI and AI safety and ensure they are installing the right safeguards? A recent global survey of companies conducted by the Institute for Human-Centered AI at Stanford University found that a significant number of respondents admitted that they had only some, or even none, of the necessary guardrails in place.

RK: Corporates should invest in having their executives participate in training programs, like those coming from EqualAI or others. The markets and insurance companies are going to mandate it anyway. The challenge is to do it before it is required. There will be a whole cottage industry of consultants and tool makers helping companies build guardrails and enforce them. CIOs will need to keep track of what data is being used [to train algorithms] so it can be traced, and they should not wait for the market to catch up.

Q: Are governments setting the right frameworks?

RK: I am eternally optimistic. I think we need to acknowledge that we are in a time of flux, so nothing that we do today is going to be the correct thing. We need to move away from seeing the options as black or white: doing nothing at all and leaving it to market forces, or passing strict legislation on the technology that never gets unwound or updated. I think the conversation needs to zoom out. It’s less about the nuances of the technology today and more about the values. The White House’s framework for an AI Bill of Rights sets a high-level guiding vision so that we can have bigger conversations. We need safe spaces where we can have these conversations off the record, and to adopt small short-term measures that will hopefully buy us time, such as the temporary measures the FTC [the U.S. Federal Trade Commission] is proposing [holding the makers of AI tools that let people generate fraudulent video, audio, and image impersonations liable]. The problem is that this technology is coming at society at a faster and faster pace, and we need to figure out how to have conversations about the broader set of guiding principles and not get bogged down in backward-looking legislation, which is what usually happens.

Q: You were the CTO of the Democratic Party. Are you worried about the impact of deepfakes on the upcoming elections?

RK: I am worried. Deepfakes can influence elections. The FTC measures will help in the U.S., but this year there are elections around the world. We have seen how deepfakes played a role in elections in India and Indonesia. There are two short-term options: telling tech companies to stop what they are doing, or enforcement mechanisms such as the FTC’s action. The biggest problem is probably not video deepfakes but, frankly, audio ones. We’ve seen those play out in the primaries in the United States, in England, and in other places. They’re cheap to make and easy to distribute.

Q: How can we ensure data privacy in the age of AI?

RK: We need to spend a lot more time, in the U.S. and EU, on setting boundaries around personal data. GDPR is not enough. I recently testified in front of the Committee on Energy and Commerce’s subcommittee on Innovation, Data, and Commerce in the House of Representatives. The topic was the American Data Privacy and Protection Act, which proposes that data privacy is the foundation of any AI legislation and seeks to establish federal limits on how much consumer data a range of companies and service providers can collect, use, or transfer.

My main point was that we need to approach the encroachment on user privacy from all angles, as there is no silver bullet, and no single effort is enough. I think that we should consider the following requirements:

* Increased efforts to promote and expand digital literacy, as well as continued pushes on the design patterns needed to transparently explain to users, up front, what they are consenting to.

* Allowing people to access the full life cycle of their user data, from creation to usage to sales and swaps to deletion.

* Offering mechanisms to still engage with applications without data collection activated, albeit perhaps in a limited way.

Q: We have discussed some of the downsides of AI. Let’s talk about AI’s power to do good.

RK: AI can do a lot of good. We can give humans superpowers today by using AI as an augmentation, not an automation. I don’t want a robot to teach my son, but I think we can all agree that we would love to see AI help teachers become better teachers, or help doctors be better doctors and improve drug treatments. AI may also be the only way to get us out of our climate crisis. It is going to be the fundamental fabric of our society, and it is up to us to direct it in a way that will supplement human efforts and help solve the world’s biggest problems. That is the optimistic side of my brain talking. AI can raise all boats and give us all superpowers. That is what is meant by human-centric AI.

Erik Brynjolfsson at Stanford did a case study with call centers. Super simplifying it: human workers were removed from one half of the call center and entirely replaced by AI. In the other half of the call center, AI gave the human operators superpowers through tools that automatically gave them better access to information. In the short term, the side of the call center that replaced all the humans made more money, but in the long term the human operators with AI superpowers got better results because customers were more satisfied. It is really easy for businesses to look only through a short-term lens, but if they take a slightly longer-term view and figure out a more human-centered approach they will fundamentally get to a better place. That is the world I am gunning towards.

Q: How does your work at the Emerson Collective help further that vision?

RK: At Emerson we tackle big societal issues such as immigration, education, health, and climate. In the tech group, we operate on the assumption that technology is the accelerant to the change we want to see in the world. We work closely with grantees to help them with the tech they use, and another part of our team looks at clashes between technology and society. They are involved in briefings on Capitol Hill and advocate for human-centered tech. We are developing products in the field, such as chatbots for immigrant legal services that give lawyers rapid access to the information they need. We also do data science work to get, for example, a real-time pulse of what job vacancies for teachers look like. We all know that teachers are underpaid in the U.S., and data about jobs remaining unfilled can help shape policy. In short, our work is aimed at using technology to make the world better.

Q: What would you like The Innovator’s readers to take away from this interview?

RK: The future is up to us. There are vested interests in making us all believe that the way AI is being deployed today is inevitable; that we need to accept that this software will roll out to hundreds of millions of people in the blink of an eye and ‘we’ll see what happens.’ The world doesn’t have to be this way. That’s why I’m spending so much time on my podcast and newsletter. I want to educate as many people as possible on the nuances of this technology so we can push back. We can advocate for the world we want by talking to legislators or sending the right market signals. That’s the only way we can get to a balanced conversation about what we want for our society.


About the author

Jennifer L. Schenker

Jennifer L. Schenker, an award-winning journalist, has been covering the global tech industry from Europe since 1985, working full-time, at various points in her career, for the Wall Street Journal Europe, Time Magazine, International Herald Tribune, Red Herring, and BusinessWeek. She is currently the editor-in-chief of The Innovator, an English-language global publication about the digital transformation of business. Jennifer was voted one of the 50 most inspiring women in technology in Europe in 2015 and 2016 and was named by Forbes Magazine in 2018 as one of the 30 women leaders disrupting tech in France. She has been a World Economic Forum Tech Pioneers judge for 20 years. She lives in Paris and has dual U.S. and French citizenship.