News In Context

This Week In AI

OpenAI and Meta signaled this week that they will soon release new AI models that they say will be capable of reasoning and planning, critical steps towards achieving superhuman cognition in machines. Things are moving so fast that the capabilities of new artificial intelligence models may surpass human intelligence by the end of next year, Elon Musk, founder of Tesla and SpaceX, said April 8 in an interview with Nicolai Tangen, CEO of Norges Bank Investment Management, Norway’s $1.6 trillion sovereign wealth fund and one of the largest investors in Tesla.

While not all AI experts agree on that timeline, it is now a given that every organization will need to adopt the technology. Jamie Dimon, chief executive of JPMorgan Chase, told investors April 8 that AI could be as transformative as some of the major technological inventions of the past several hundred years. “Think the printing press, the steam engine, electricity, computing and the Internet, among others,” Dimon wrote in his annual letter to shareholders. “We have been actively using predictive AI and ML for years — and now have over 400 use cases in production in areas such as marketing, fraud and risk — and they are increasingly driving real business value across our businesses and functions,” he wrote in the letter. “We’re also exploring the potential that generative AI (GenAI) can unlock across a range of domains, most notably in software engineering, customer service and operations, as well as in general employee productivity.”

But as both tech companies and corporates in traditional businesses race to deploy the technology, the increasing power of the latest AI systems is stretching traditional evaluation methods to the breaking point, posing the challenge of how to work safely with the fast-evolving technology, The Financial Times reported this week. The problem of how to assess LLMs has shifted from academia to the boardroom, as generative AI has become the top investment priority for 70% of chief executives, according to a KPMG survey of more than 1,300 global CEOs.

Against that backdrop, some 1,500 global leaders and Generative AI pioneers gathered in Paris April 8 for the R.AI.SE Summit to share first-hand insights on using Generative AI to address essential business and societal challenges.

Here are some key takeaways from the event:

  • AI tech is moving very fast. Indeed, one speaker noted that “almost every month a new technique changes well established practices.”
  • If they haven’t done so already, companies need to start testing the technology now, said speaker Florence Verzelen, VP at Dassault Systèmes, who emphasized the importance of early experimentation with AI. The message was clear: “If you don’t, you’re going to be late.”
  • Build a business case. “Don’t ask what GenAI could do for you; ask what can I do with GenAI to solve a business problem?” said speaker Francois Candelon, Global Director of the BCG Henderson Institute, based in BCG’s Paris office, and a Managing Director and Senior Partner at the firm. The message from speakers was that GenAI’s potential extends far beyond enhancing productivity. Its true value lies in driving substantial growth, opening new avenues for businesses to capitalize on. They told the audience that when integrating GenAI into a company’s operations, the starting point should be the income you aim to achieve. This goal-oriented approach ensures that AI initiatives are directly aligned with business objectives, maximizing their impact.
  • Upskilling and reskilling are necessary. It is not only about how people are impacted by AI but how people impact AI, said BCG Henderson Institute’s Candelon. Success depends on humans and how they use the technology.
  • Enterprise governance of Generative AI is essential. Speakers stressed the importance of cross-functional collaboration between technical, legal, and policy teams to adapt existing governance frameworks to generative AI’s unique capabilities and risks.
  • Build in safeguards to prevent unintended consequences and help AI applications perform within ethical boundaries and comply with regulatory requirements. Continuous monitoring, risk assessment, and feedback loops can be integrated into governance models to respond dynamically to the evolving AI landscape.
  • Despite such precautions, problems with the data are inevitable. LLMs can be polluted by fake, biased, or toxic content, prompt injections or the poisoning of training data. Teaching AI to forget will become an imperative, keynote speaker Ben Luria, CEO of Hirundo, a startup that is developing an AI “unlearning platform,” told the audience. (See The Innovator’s Startup of the Week story)
  • Trust is crucial. Half of enterprises don’t deploy AI because of security concerns, speaker Marten Mickos, CEO of cybersecurity company HackerOne, told the audience. They are right to be worried, because everything can be hacked.

GenAI’s opportunities and challenges were best summed up by Mickos, who said: “Whether you think of AI as a threat or an opportunity, you are correct.”

IN OTHER NEWS THIS WEEK

SUSTAINABILITY

New York Is Suing The World’s Biggest Meat Company. It Might Be A Tipping Point For Greenwashing.

The office of New York Attorney General Letitia James has announced that it is suing the world’s largest meat company, JBS, for misleading customers about its climate commitments. It is just one in a string of greenwashing lawsuits brought against large airline, automobile and fashion companies of late. “It’s been 20 years of companies lying about their environmental and climate justice impacts. And it feels like all of a sudden, from Europe to the US, the crackdown is beginning to happen,” Todd Paglia, executive director of environmental non-profit Stand.earth, told The Guardian. “I think greenwash[ing] is actually one of the pivotal issues in the next five years.”

CYBERSECURITY

U.S. Cyber Agency Says Russian Hackers Are Using Access To Microsoft Corporate Email To Break Into Customer Systems

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) said Russian government-backed hackers have used their access to Microsoft’s email system to steal correspondence between officials and the tech giant, according to an emergency directive released by the agency on April 14. In the directive, CISA warned that the hackers were exploiting authentication details shared by email to try to break into Microsoft’s customer systems, including those of an unspecified number of government agencies. The agency warned that the hackers might have gone after non-governmental groups as well. “Other organizations may also have been impacted by the exfiltration of Microsoft corporate email,” CISA said, encouraging customers to contact Microsoft for further details. Last week, a report from the U.S. Cyber Safety Review Board said that a separate hack, blamed on China, had been preventable, faulting Microsoft for cybersecurity lapses and a deliberate lack of transparency.

To access more of The Innovator’s News In Context stories click here.

About the author

Jennifer L. Schenker

Jennifer L. Schenker, an award-winning journalist, has been covering the global tech industry from Europe since 1985, working full-time at various points in her career for the Wall Street Journal Europe, Time Magazine, the International Herald Tribune, Red Herring and BusinessWeek. She is currently the editor-in-chief of The Innovator, an English-language global publication about the digital transformation of business. Jennifer was voted one of the 50 most inspiring women in technology in Europe in 2015 and 2016 and was named by Forbes Magazine in 2018 as one of the 30 women leaders disrupting tech in France. She has been a World Economic Forum Tech Pioneers judge for 20 years. She lives in Paris and has dual U.S. and French citizenship.