Interview Of The Week: Ayanna Howard, AI And Robotics Expert

Dr. Ayanna Howard is an expert in robotics and AI. At NASA, she worked on designing advanced technologies for future Mars rover missions. Now, she works on projects ranging from healthcare robots to methods for mitigating bias and calibrating trust in AI. Her research encompasses advancements in AI, assistive technologies, and robotics, and has resulted in over 275 publications. She is the author of the best-selling audiobook Sex, Race, and Robots: How to Be Human in the Age of AI. She is currently the Dean of Engineering at The Ohio State University, where she holds the Monte Ahuja Endowed Dean's Chair. In addition, she serves on the Boards of Directors of Autodesk and Motorola Solutions.

Prior to Ohio State, Howard was the Linda J. and Mark C. Smith Endowed Chair in Bioengineering and Chair of the School of Interactive Computing at the Georgia Institute of Technology. She also worked at NASA’s Jet Propulsion Laboratory where she held the title of Senior Robotics Researcher and Deputy Manager in the Office of the Chief Scientist.

Howard holds a degree from Brown University, an M.S. and a Ph.D. in Electrical Engineering from the University of Southern California, and an M.B.A. from the Drucker Graduate School of Management. She was a speaker on a panel about AI and robotics at the World Economic Forum's annual meeting in Davos, which was moderated by The Innovator's Editor-in-Chief. She recently spoke to The Innovator about how robotics and AI are evolving.

Q: The U.S. robotics company Figure recently signed a partnership with BMW Manufacturing to deploy its humanoid robots in the carmaker's facility in the United States, and pundits are predicting that, beginning this year, more people will start interacting with robots on the factory floor, requiring major organizational change and the upskilling of the global workforce. Are we at a tipping point?

AH: We are getting closer. In addition to Figure's advances, Amazon is testing Agility Robotics' Digit, a two-legged robot, in its U.S. warehouses, and Elon Musk's Tesla is developing a humanoid robot called Optimus. That said, I think we are still a little ways from the tipping point since we don't yet have a low-cost platform at scale. I think we are about one to three years away from that. When we get there, these robots will begin to displace the factory floor's lower-skill manual jobs, but they will not yet replace the knowledge worker, at least not immediately.

Q: What is the best way to train people to work alongside robots and calm fears that robots or AI will take their jobs?

AH: Robots will take away jobs but, on the more positive side, they will also lead to the creation of new jobs. We are not yet at the stage where robotic systems are fully intelligent. They can navigate different factory environments, but they aren't able to think adaptively in dynamic situations. Humans will therefore be needed as their work partners. This will require companies to retrain their workforce. When humanoid platforms become available at scale, this change is going to come very quickly, so companies should start planning for it. They should start by integrating the platforms that are available now at smaller scale with people, and testing how that integration will change workflows and the type of human skills that will be needed.

Q: Beyond the factory floor, where do you see robots having the biggest impact short term?

AH: Healthcare. Robots can really help with rehabilitation, especially for children with special needs. Robots can act as exercise coaches, interacting through games with children from infancy up to eight years old. Robots aren't capable of everything, but robots enhanced with large language models can hold conversations with an older adult with dementia or mild impairment, remind them to take their medications, or repeat the exact same thing many, many times without getting frustrated, which is not always the case for humans.

Physical robots have better engagement with individuals, but the main limitation for robots is mobility. Our world is dynamic. Two houses built at the same time, on the same street, may be similar, but the stairs might be in different places. AI can help robotics evolve: with all the data collected, AI can help robots learn from each other by being connected to the cloud.

Q: In early January, Google DeepMind's robotics division announced that it was releasing a suite of advances in robotics research that add large language models into the mix. The company said its new robots will feature safety guardrails, one of which is providing its LLM-based decision-maker with a Robot Constitution: a set of safety-focused prompts to abide by when selecting tasks for the robots. These rules are in part inspired by Isaac Asimov's Three Laws of Robotics, first and foremost that a robot "may not injure a human being." Is a robot constitution enough?

AH: It is not enough to say "do no harm." Life is about the gray. We all have value systems that guide us to make decisions, but even then, we sometimes cross the lines. For example, you might believe you should not kill, but if someone breaks into your house and it is a question of them or you, you might reactively kill as a form of protecting yourself and your family. So, we need to think about providing robots with ways to reason about uncertainty and to determine the non-negotiable lines that they should not cross.

Q: You have spoken about how humans trust robots and AI too much and that it would be advisable to build in distrust. What would that look like and why do you think this is necessary?

AH: We have seen that, when we survey people, they say they don't trust robots and AI, but their behavior says the opposite. They will use AI to screen their email or candidates or perform financial audits and expect it to work. It does work pretty well, but AI is not 100% perfect. No technology is. And, many times when we look back at the decisions made, we end up saying, "yeah, that was totally wrong." It is important that people don't lose their skills by relying too much on AI. If we start using it all the time, eventually we will stop questioning it. This is why I believe companies (or people themselves) need to turn AI off from time to time, so that people don't lose the skill of looking at the data and, when the AI comes back on, are able to determine if something was totally wrong. As we train AI systems, we will need to reboot ourselves in terms of our skills and our ability to question the outcomes.

Q: Under what circumstances is it important to have a human in the loop?

AH: In high-risk scenarios that could lead to loss of life or liberty, and in healthcare, where human oversight is still needed.

Q: Your audiobook explores how the tech world's racial and sexual biases are infecting the next generation of AI, with profoundly negative effects for humans of all genders and races. How do we ensure that robots are representative of the entire human population, not only in terms of gender and race but also culture and values?

AH: I believe that, in general, there is less bias in robotics and AI than there is in people. That said, there is bias in the data, in the human developers, and in the outcomes. That's a problem because we expect AI to be perfect, and if we expect perfection, we are less likely to question it, which tends to amplify the bias. This can be solved, in part, via education. We need to provide more resources and access to populations that might not be exposed to robotics, to take smart people and teach them about AI, how to code, and how to build hardware. We need a diversity of experiences to solve the world's problems.

I also believe that we should give people choices when they buy robots. If you buy an automobile, the manufacturer lets you configure the color and the options. You have a choice in that case, but with robots you have to accept what comes out of the box. What if that robot doesn't share your values? We should be allowed to express preferences and design our robot of choice based on them.

Q: What happens when something goes wrong? Is it the responsibility of the company that programmed the robot or of the one using it?

AH: There needs to be shared responsibility. In some fields, like medicine, if you are doing your job, have the right training, and keep up your credentials, then if harm happens to a patient, negligence from lack of training is harder to prove. The healthcare organizations you work for must also ensure you have the right training, so there is shared responsibility. The same needs to be true for robotics. The companies building the tools and APIs need to build in safeguards, and the companies purchasing and using the tools need to ensure there is proper training around their use and to establish controls.

Q: What are the key messages you would like readers to take away from this interview?

AH: Create educational programs to get a more diverse set of people working on robotics; keep humans in the loop; and establish clarity on ethical responsibility and liability. These issues are everyone's responsibility. If we get this right, the combination of AI and robotics will lead to better healthcare and educational opportunities and ensure that the entire world, not just the developed world, benefits.



About the author

Jennifer L. Schenker

Jennifer L. Schenker, an award-winning journalist, has been covering the global tech industry from Europe since 1985, working full-time, at various points in her career, for the Wall Street Journal Europe, Time Magazine, the International Herald Tribune, Red Herring, and BusinessWeek. She is currently the editor-in-chief of The Innovator, an English-language global publication about the digital transformation of business. Jennifer was voted one of the 50 most inspiring women in technology in Europe in 2015 and 2016 and was named by Forbes Magazine in 2018 as one of the 30 women leaders disrupting tech in France. She has been a World Economic Forum Tech Pioneers judge for 20 years. She lives in Paris and has dual U.S. and French citizenship.