Interview Of The Week: Henry Ajder, Generative AI Expert

Henry Ajder, a speaker at CogX in London September 12 to 14, is an expert and advisor on generative AI, deepfakes, and the synthetic media revolution. One of the first “GenAI Cartographers”, he has led pioneering research at organizations including MIT, The Partnership on AI, and Sensity AI, influencing international legislation and commercial AI strategy. Ajder advises organizations on AI and generative technologies, including Adobe, Meta, EY, the BBC, and the UK government. He holds a visiting research position on generative AI and responsible AI strategy at the University of Cambridge’s Jesus College. Ajder, who presented “The Future Will Be Synthesized”, the BBC’s first documentary series exploring the generative paradigm shift, recently spoke to The Innovator about how businesses should approach Generative AI.

Q: What are Generative AI’s benefits to business?

HA: There are lots of good reasons why a company may wish to deploy Generative AI, including efficiency savings. For example, whereas you previously might have needed five actors to make advertisements in different regions, thanks to Generative AI you now need one actor and one original performance, because algorithms can recreate the dialogue with very convincing lip sync. We are also seeing the cloning of voices in film post-production, making it no longer necessary to reshoot the same scene with live actors and, in some cases, extending movie magic. Gen AI is also enabling life-like avatars with human mannerisms to be used for training. It is much more engaging to have an avatar present to you than to read a boring machinery manual. Gen AI can automate dull or uninteresting work, freeing up employees to do more interesting work, or it can help employees in their work. For example, we are seeing Gen AI used in creative ways to generate material for business advertising, including helping people write copy much more efficiently. Workers will increasingly use Generative AI as a co-creator and a co-pilot.

Q: What are the potential pitfalls?

HA: We are in a feverish hype cycle. There is a huge amount of speculation about what these models can do and what they will eventually be able to do. It is important that businesses get a reality check about how they can benefit and that they build these systems in a way that is future-proof and complies with upcoming legislation and norms.

We have seen several companies adopt Generative AI-powered customer chatbots. In one case, customers asking for restaurant recommendations were advised to go to a food bank in Canada. This is just one example of how Generative AI is prone to hallucinations and is not well optimized for customer-facing applications. The rush to adopt Generative AI reminds me of how many businesses quickly planted their flags in the metaverse, with underwhelming results. Companies need to be asking themselves whether they should be testing the technology internally or externally. If they want to use it externally, they should first determine the value add for their customers and whether the technology will help or create more problems than it solves.

Generative AI is extremely valuable when used as part of an iterative process, but it still requires a good amount of human oversight. If, for example, someone is given responsibility for managing a team of 20 chatbots that they don’t have full control of, and the chatbots give false and potentially harmful information, whoever is in charge could be liable.

Q: What advice do you have for business?

HA: Clean up your data and organize it before you start. Think about bias and what models you are going to use. If you are looking to use a large language model, are you going to use one provided by the Big Tech companies? There are positives to that: they offer friendly APIs. But, on the negative side, they ingest your data, which might create privacy issues for your customers or leak sensitive data. You will also be at the mercy of the whims of that company if they decide to hike up prices or change the model in some way.
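To make the trade-off concrete, here is a minimal sketch of the hosted-API route Ajder describes, assuming the OpenAI Python client; the model name and prompt are illustrative placeholders, and other providers expose broadly similar interfaces.

```python
# Minimal sketch of the hosted-API route: convenient to integrate, but the
# prompt (and any customer data inside it) leaves your infrastructure, and
# the provider controls pricing and model versions.
# Assumes the OpenAI Python client; model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; subject to the provider's roadmap
    messages=[
        {"role": "system", "content": "You are a customer-support assistant."},
        {"role": "user", "content": "Which of your plans fits a small team?"},
    ],
)
print(response.choices[0].message.content)
```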

Another approach is using an open-source model as your foundation and training it with your own data. It is more expensive to do that, but you get to train it yourself and fine-tune it the way you want. The advantage is you’re not beholden to anyone, but the challenge is you need to update it. You could spend a lot of time and money building a model with 10 billion parameters, for example, and then find that it is out of date by 2025.
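The open-source route he outlines might look roughly like the following sketch, assuming a Hugging Face-style workflow with LoRA fine-tuning; the base model, dataset path, and hyperparameters are placeholders, and a production setup would add evaluation, data cleaning, and the regular re-training Ajder warns about.

```python
# Rough sketch: fine-tuning an open-weights base model on your own data.
# Assumes the Hugging Face transformers/peft/datasets libraries; the base
# model and dataset paths are illustrative placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "mistralai/Mistral-7B-v0.1"  # placeholder open-weights model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Train small LoRA adapters instead of all base weights to keep costs down.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         task_type="CAUSAL_LM"))

data = load_dataset("json", data_files="company_corpus.jsonl")["train"]
data = data.map(lambda x: tokenizer(x["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # you own the weights, but also the maintenance burden
```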

A third option is to build your own from scratch, which is very expensive and hard to maintain. If you are a well-resourced large company like Goldman Sachs, that might be attractive, but few companies are likely to go that route.

Regardless of which foundational model you use, it is important to get your employees to understand Generative AI’s limitations and put processes in place to train people to ensure your model is used safely and reliably.

Regulation is moving, relatively speaking, very fast. The EU is developing strict rules on the sourcing of data, defining high-risk areas, and requiring auditing. The U.S. is still fairly relaxed but is moving towards requiring the disclosure of synthetic outputs and the building in of safety measures. There are big questions that need answering. For example, there are ongoing arguments about whether scraping publicly available information is a violation of copyright. DALL-E and Stability AI could not exist without mass scraping of the open Web. Secondly, what are the punitive penalties going to be for companies that cause harm with Generative AI? Will there be huge fines, or are we going to see criminal liability kicking in?

My message to companies is that novelty is not innovation. Don’t rush into Generative AI. Ask yourself: if I build my product now, will it give me a good return on investment or cause me a big headache? This is a sandstorm; things are shifting day by day, so you want to be thinking not just about the next six weeks or the next six months but about what might be happening in terms of legislation and market trends.


About the author

Jennifer L. Schenker

Jennifer L. Schenker, an award-winning journalist, has been covering the global tech industry from Europe since 1985, working full-time, at various points in her career, for the Wall Street Journal Europe, Time Magazine, the International Herald Tribune, Red Herring, and BusinessWeek. She is currently the editor-in-chief of The Innovator, an English-language global publication about the digital transformation of business. Jennifer was voted one of the 50 most inspiring women in technology in Europe in 2015 and 2016 and was named by Forbes Magazine in 2018 as one of the 30 women leaders disrupting tech in France. She has been a World Economic Forum Tech Pioneers judge for 20 years. She lives in Paris and has dual U.S. and French citizenship.