Interview Of The Week: Eng Lim Goh

Eng Lim Goh is Vice President and Chief Technology Officer at Hewlett Packard Enterprise (HPE) and an expert in artificial intelligence. His current research interest is the progression from data-intensive computing to analytics and machine learning, and the shift from specific to general artificial intelligence and autonomous systems. He joined Silicon Graphics (SGI), an American high-performance computing manufacturer, in 1989, becoming a chief engineer in 1998 and chief technology officer in 2000. After HPE purchased SGI, it appointed him vice president and SGI chief technology officer. He is also principal investigator of an experiment aboard the International Space Station to operate autonomous supercomputers for long-duration space travel, a program that won the 2019 NASA Exceptional Technology Achievement Medal and the 2017 HPCwire Top Supercomputing Achievement Award. Goh, who completed his Ph.D. research and dissertation on parallel architectures and computer graphics and holds a first-class honors degree in mechanical engineering from the UK’s Birmingham University, was a speaker at Sibos in London on September 24 and participated in an on-stage fireside chat with The Innovator’s editor-in-chief about the differences between natural and artificial intelligence. He agreed to be separately interviewed about how established businesses should think about and prepare for the age of AI.

Q: How should companies think about AI?

ELG: First they have to understand what machines can and cannot do, and understand the difference between natural and artificial intelligence. The brain is more than a million times more complex than today’s neural networks. When we are young, if we look at 10–15 pictures of giraffes we know what a giraffe is, but machine learning requires thousands of images to train. Machines are very task-specific. That is why autonomous cars trained in Europe have issues in Australia identifying kangaroos suddenly appearing on the road. Humans tend to generalize more, but even the more advanced machine learning technology in use today can only make predictions in a very specific way. For example, you can train it to read X-rays to diagnose tuberculosis, but it will not pick up on anything else. The narrower the question you ask, the better the machine is able to make predictions. This is why machine learning systems don’t do well in multi-skilled jobs. To ensure humans keep their jobs, make sure they are multi-skilled. Eventually different aspects of their skills will be automated by rules-based machines or by machine learning, but machine learning replaces tasks, not jobs. The person responsible for a multi-tasked job is still needed and still needs to apply final judgement. That said, we humans can also be wrong; machines are there so we can check on each other.

Q: How long will it be before machines are ready to make autonomous decisions?

ELG: One thing we have learned is not to over-promise. For the time being, the more specific you can make a task, the better the machine will be at sustaining accuracy over time. A lot of the time machines don’t have enough data, or good-quality data, and therefore mistakes can be made. Most decisions still require humans at the back end to verify that the right decisions are being made.

Q: Many established companies worry that they will never have as much data as the Internet giants. Is it game over, or do you just need to have enough data?

ELG: It depends on the application, but typically 100,000 well-labeled data points is a good start. The point here is to keep your data and organize it well, as it is a competitive advantage for the future. If you don’t have enough, start collecting in earnest.

Q: How can companies ensure that their AI behaves responsibly?

ELG: Responsibility for a machine’s actions should not lie with the data scientists alone. Companies are also hiring people to oversee ethics, because there is growing recognition that ethical oversight is needed. Even when you feed the machine data you still can’t trust it fully, so companies also need to create rules that fence it in, so that if it starts to do something wrong the right actions kick in. And you need human beings to check and apply judgement even after the fence is constructed.
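To make the "fence" idea concrete, here is a minimal, hypothetical sketch of rule-based guardrails wrapped around a machine learning model's output: hard business rules can veto the model outright, and low-confidence predictions are routed to a human for final judgement. The model stub, rule conditions, and thresholds below are illustrative assumptions, not a description of HPE's actual system.

```python
# Hypothetical sketch of a rule-based "fence" around an ML model's output.
# The model stub, rules, and thresholds are illustrative, not HPE's system.

from dataclasses import dataclass


@dataclass
class Prediction:
    label: str         # e.g. "approve" or "deny"
    confidence: float  # model's confidence in [0, 1]


def model_predict(features: dict) -> Prediction:
    # Stand-in for a trained model; a real system would call one here.
    score = min(1.0, max(0.0, features.get("score", 0.5)))
    label = "approve" if score > 0.5 else "deny"
    return Prediction(label=label, confidence=abs(score - 0.5) * 2)


def fenced_decision(features: dict) -> str:
    pred = model_predict(features)

    # Rule 1: a hard business rule overrides the model outright.
    if features.get("amount", 0) > 1_000_000:
        return "escalate: amount exceeds hard limit, human review required"

    # Rule 2: low-confidence predictions defer to human judgement.
    if pred.confidence < 0.8:
        return f"defer to human: model says '{pred.label}' at {pred.confidence:.0%} confidence"

    # Inside the fence: accept the machine's decision (and log it for audit).
    return f"auto: {pred.label}"


if __name__ == "__main__":
    print(fenced_decision({"score": 0.95, "amount": 5_000}))      # auto: approve
    print(fenced_decision({"score": 0.55, "amount": 5_000}))      # defer to human
    print(fenced_decision({"score": 0.95, "amount": 2_000_000}))  # escalate
```

The design point matches Goh's advice: the rules run before the model's answer is accepted, and the human remains the final check rather than being removed from the loop.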

Q: Are we going to be able to fully leverage the efficiencies of AI if we don’t trust it?

ELG: I will give you an example. HPE used a machine to design a Formula One race car, reducing the design cycle from two weeks to one day. When I visited the company six months later and asked about the time savings, they said they had reduced the design time from two weeks to one week. I asked them what they did during the other six days. The answer was: ‘The six days are spent having humans check the machine’s results.’ However, they eventually gained trust in the system and leveraged it fully.

Q: What is the lesson for business?

ELG: If you are implementing an AI system, you may not initially reap all of the efficiency gains. So do set aside time for trust to be built.

About the author

Jennifer L. Schenker

Jennifer L. Schenker, an award-winning journalist, has been covering the global tech industry from Europe since 1985, working full-time, at various points in her career, for the Wall Street Journal Europe, Time Magazine, the International Herald Tribune, Red Herring and BusinessWeek. She is currently editor-in-chief of The Innovator, an English-language global publication about the digital transformation of business. Jennifer was voted one of the 50 most inspiring women in technology in Europe in 2015 and 2016 and was named by Forbes Magazine in 2018 as one of the 30 women leaders disrupting tech in France. She has been a World Economic Forum Tech Pioneers judge for 20 years. She lives in Paris and has dual U.S. and French citizenship.