If 2024 was the year artificial intelligence went mainstream, 2025 was the year it became inescapable—transforming not just how we work, but the fundamental architecture of power, commerce, and human creativity itself.
Across more than 150 stories The Innovator published this year, one theme dominated above all others: we are living through what business strategist Carsten Linz, founder and CEO of Germany’s bluegain, calls “strategic discontinuity”—a moment when the rules that governed success for decades simply stop working. AI didn’t just improve existing processes in 2025; it rewrote the operating system of modern civilization.
The evidence was everywhere. From public school classrooms to factories, from scientific laboratories to the Auschwitz Memorial, from fusion energy reactors to the hallways of the World Economic Forum in Davos, AI was reshaping reality at a pace that left even its most enthusiastic proponents struggling to keep up.
But 2025 also revealed something darker: the emergence of new divides—between those who control AI and those who don’t, between nations racing to dominate and those being left behind, between the promise of abundance and reality.
This is the story of 2025 in technology: a year of breathtaking innovation and deepening inequality, of spectacular breakthroughs and sobering warnings, of a future arriving faster than we’re prepared to handle. Read on to get the highlights:
Creative Destruction
It is no accident that in this year of strategic discontinuity the 2025 Nobel Memorial Prize in Economic Sciences was awarded to Joel Mokyr, Philippe Aghion and Peter Howitt for their work on “creative destruction” as a central driver of economic growth, an idea originally popularized by Joseph Schumpeter. The Nobel laureates’ research shows how progress is not a continuation of what exists but a renewal that occurs when outdated structures give way to better ideas, new technologies, and bold capabilities. “Their theory makes one idea unmistakably clear: long-term growth favors the courageous,” Linz wrote in a 2025 guest essay for The Innovator. “In their models, innovation happens when new entrants disrupt incumbents, not because of disloyalty to the past, but because the future requires something else. The conditions for transformation are not technical; they are philosophical. A system renews when it chooses what to preserve, what to evolve, and what to replace.”
AI is forcing companies to act. “If progress was inevitable,” economist Carl Benedikt Frey wrote in his 2025 book How Progress Ends, “it would not have taken humanity 200,000 years to have an industrial revolution.” If it were inevitable, most places around the world would be rich and prosperous today. Progress, he said in a 2025 interview with The Innovator, is “constant work in progress.”
That insight – that advancement requires continuous institutional adaptation, not just technological invention – explains why 2025 felt so paradoxical. We witnessed breathtaking innovation alongside governance collapse. Record clean energy investment alongside record emissions. Record spending on AI with little impact (so far) on companies’ bottom lines.
Despite the hype, a 2025 study from MIT on AI use revealed that only 5% of large companies are seeing real benefits on their P&Ls. Most experiments fail; most internal builds fall short; and too much energy is being put into AI for the front office, not the back office.
Linz explained the massive discrepancy between AI ambitions and operational reality in his column for The Innovator. When companies rush into AI without addressing the fundamentals, the consequences can be devastating. AI does not fix flawed fundamentals; it only magnifies them, he says. The sooner companies recognize this and begin to understand their process landscape and their data foundation, the better.
Lots of firms are focused on productivity gains: doing what they were doing before, but more effectively and cheaply, business strategist Benoit Reillier, Managing Director and Co-founder of the UK’s Launchworks & Co, said in an interview with The Innovator. “We think this is important and necessary, but it’s absolutely not sufficient. Companies need to step back and think about how entire organizations, industries and ecosystems are going to change as cognition and AI become a powerful part of the equation and use their productivity gains to invest and position themselves strategically.”
Most companies realize too late that their processes were never designed to work at this speed, says Linz. This can be observed in the automotive industry, where established OEMs are under enormous pressure as Chinese full-stack providers launch new models within 12 to 18 months, closely integrate hardware and software, and iterate quickly. A 2025 headline in a German newspaper asked whether Stuttgart, home to Porsche and Mercedes-Benz, will end up like Detroit.
The question each organization should be asking is “what are you prepared to stop, simplify or rewire in the next 12 months?” says Linz.
Enterprises are not the only institutions built for one era that are persisting perilously into another: Europe’s inability to create a single market continues to impose the equivalent of a 110% tariff on digital services, and two-thirds of global energy production is wasted annually ($4.6 trillion lost) while governments debate how to get to Net Zero.
Geopolitics: Compute As The Currency Of The Future
If 2025 had a defining resource, it was compute. As OpenAI CEO Sam Altman declared, it’s becoming “maybe the most precious commodity in the world.”
Countries that can “manufacture intelligence” at scale will be at the forefront of harnessing the benefits of finding solutions to key challenges, from the green transition to digital biology, says a 2025 report by the Tony Blair Institute for Global Change (TBI). It argues that compute is not just a source of scientific and economic progress, but the new benchmark of global power, economically and geopolitically.
“Just as governments needed to enable infrastructure such as roads, railways and telecommunication networks for business to thrive, investing in shared and public compute has become equally important,” Jakob Mökander, TBI’s Director of Science & Technology Policy, said in an interview with The Innovator.
Compute is, in fact, slated to become “the foundation of next-generation economic growth and influence, shaping economic developments, as well as the future of sovereign power and international influence,” says the TBI report. It warns that compute infrastructure risks becoming “the basis of a new digital divide,” with countries that can “manufacture intelligence at scale” positioned to lead the next century.
The concentration is striking. The United States built more data-center capacity in 2023 than the rest of the world combined, excluding China. Nvidia’s market capitalization reached $3.314 trillion, making it the world’s second most valuable company. The US announced plans for 5,796.2 megawatts of new data centers over three years—a 164.4% increase.
But China isn’t standing still. In a watershed moment, DeepSeek demonstrated AI reasoning models on par with OpenAI and Anthropic—at significantly lower cost, setting off alarm bells in the U.S.
The US-China rivalry is creating what some called a “compute security dilemma,” with export controls, industrial espionage, and an intensifying race for supremacy.
Confronting AI’s Dark Side
The February 10-11 AI Action Summit in Paris addressed concerns about control of AI by a few dominant players, how to equitably distribute AI’s benefits globally, the role of open source, and building responsible and trustworthy AI.
The summit, which gathered nearly 100 nations and was co-hosted by the French and Indian governments, emphasized how general-purpose AI has immense potential for education, medical applications, research advances in fields such as chemistry, biology, or physics, and generally increased prosperity. But AI’s dark side was nonetheless top of mind.
Governments, leading AI companies, civil society groups and experts gathered for the AI Action Summit were presented with the International AI Safety Report 2025, spearheaded by Turing Award winner Yoshua Bengio and compiled with the help of expert representatives nominated by 30 countries, the OECD, the EU, and the UN, as well as several other world-leading experts. The goal of the report is to provide a shared scientific, evidence-based foundation for discussions about the risks.
The document, which was commissioned after the 2023 global AI Safety Summit and became publicly available in 2025, covers numerous threats ranging from already established harms such as bias, scams, extortion, psychological manipulation, generation of non-consensual intimate imagery and child sexual abuse material, deepfakes and targeted sabotage of individuals and organizations, to future threats such as large-scale labor market impacts, AI-enabled biological attacks, and society losing control over Artificial General Intelligence (AGI).
The Governance Crisis
Managing this toxic brew is complicated by conflicting approaches to risk management. The regulatory landscape further fractured in 2025. While France, on February 3, announced the creation of the equivalent of an AI Safety Institute (INESIA), one of U.S. President Donald J. Trump’s first acts in office was to rescind an Executive Order issued by the previous administration on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Since then, Trump has gone even further, removing the right of U.S. states to govern the technology.
The danger is that even if Europe, and the rest of the world, opt for leveraging AI in a “responsible and sustainable manner,” the U.S. government and U.S. companies see the 2025 breakthrough by China’s DeepSeek, which demonstrated AI reasoning models that appear to be on par with those of U.S. companies OpenAI and Anthropic at significantly lower cost, as a threat that needs to be countered in a no-holds-barred race.
As model capabilities continue to advance amid mounting global instability, fueling the race between the U.S. and China, but also opening new opportunities for startups around the world working on alternative opensource models, the need for international collaboration on a series of other risks including the global AI R&D and compute divide, market concentration, environmental risks, privacy, copyright violations and safeguarding intellectual property (IP), has never been higher.
“AI is the most powerful technology ever built,” AI scale-up iGenius founder and CEO Uljan Sharka said in a 2025 interview with The Innovator. “If we centralize it, we risk subjecting ourselves to modern dictatorship. The few tech companies that own it will become the government of the future. They will be the creators and everyone else a user, or a slave. I am motivated by building tech that enables digital equality, allowing everyone to compete, with the goal of building things beyond our imagination.”
Europe is pinning its hopes on European scale-ups like Italy-based iGenius and France’s Mistral AI to create alternatives to Chinese and American offerings.
Although European efforts made headway in 2025, the Continent is still way behind the U.S. and China. The Tony Blair Institute’s November report on Europe’s AI readiness noted how the Continent ignored 40 years of warnings about how it was falling behind on tech. The question is how fast it can catch up – if ever. Estonia’s public schools are adopting chatbots now; the only companies prepared to respond to the government tender were OpenAI and Google, as no European alternatives are available.
The developing world is even further behind. Africa’s compute capacity remains equivalent to Spain’s and nearly 3 billion people remain offline, cut off from AI entirely.
A Push For Openness In The Full AI Stack
Some are pinning their hopes on open source as a way of ensuring that AI’s rollout is more equitable. “Embracing openness in AI is non-negotiable if we are to build trust and safety; it fosters transparency, accountability, and inclusive collaboration,” said a statement issued Feb. 4 by Mozilla. The statement followed a meeting in Paris in the lead-up to the AI Action Summit, organized by Mozilla, Foundation Abeona, École Normale Supérieure (ENS) and the Columbia Institute of Global Politics, which brought together a diverse group of AI experts, academics, civil society, regulators and business leaders to discuss openness, a topic Mozilla says is increasingly central to the future of AI.
“Openness must extend beyond software to broader access to the full AI stack, including data and infrastructure, with a governance that safeguards public interest and prevents monopolization,” says the Mozilla statement. “If AI is to advance competition, innovation, language, research, culture and creativity for the global majority of people, then an evidence-based approach to the benefits of openness, particularly when it comes to proven economic benefits, is essential for driving this agenda forward.”
AI And Energy
Many of the headlines in 2025 dealt with concern about AI’s use of energy and water. But AI’s energy needs also helped fuel progress in transformative energy technologies such as fusion, geothermal and new ways of generating nuclear energy.
Still, much more needs to be done to advance the energy transition, and large corporates have a big role to play.
“Many large players systemically think of tech innovation as something to do from inside their business units with some corporate R&D in-house with maybe a few university partnerships,” Doug Arendt, who leads the National Renewable Energy Laboratory Foundation as Executive Director and is co-chair of the World Economic Forum’s Energy Technology Frontiers Council, said in an interview with The Innovator. “They need to think much more boldly about unconventional partners to advance the innovation funnel, including working with national labs and other industrial players. New types of partnerships are needed with creative and appropriate IP and business models. There is an enormous underappreciation for looking across sectors and across disciplines. On the scaling side, companies have a powerful ability to do appropriate engineering at scale to test new approaches and derisk technology.”
The Human Toll
AI’s human toll was also evident in 2025. There was a significant increase in deepfakes. Bias ingrained in LLMs continued to be a problem, as the latest example illustrates: in December a new study led by a team at Harvard Medical School showed that LLMs can somehow infer demographic information from pathology slides, leading to bias in cancer diagnosis among different populations. AI’s negative impact on the arts also made headlines. After hobbling the news business by grabbing most of the advertising revenue and normalizing the giving away of content for free, Big Tech is using original articles created by journalists at surviving outlets to train its AI models, without giving credit to their work or providing any kind of compensation.
Big Tech companies are, in fact, hoovering up the content not only of newspapers and magazines but artists, authors and musicians, ballooning their own valuations while threatening the livelihoods of content creators. Copyright lawsuits filed against GenAI companies mushroomed in 2025, alleging that the way they operate amounts to theft.
Bill Gross, one of Silicon Valley’s most prolific entrepreneurs, believes there is a better way than lawsuits to combat the problem: using tech of his own invention to enable generative artificial intelligence (GenAI) platforms to attribute and compensate content owners.
While this is an encouraging development, the intellectual property issues raised by AI in 2025 are not likely to be solved anytime soon.
AI and The Future Of Work
Fears about AI’s impact on jobs proved to be well grounded: job growth is slowing due to AI’s impact on productivity. We’re already seeing the trend: Amazon, Microsoft, Salesforce, and UPS have laid off thousands of employees, and industry leaders like Walmart and Ford have openly said that revenue will rise while headcount falls. “I believe AI will create many new jobs over time, but there will be a lag between rapid job destruction and job creation in the early years,” former Cisco Executive Chairman John Chambers wrote in an exclusive column for The Innovator. “Recent high school and college graduates, as well as workers over 50 who can’t or won’t adapt to AI, will be hit the hardest. AI-native startups, unicorns, and decacorns will help with job creation, but education systems and traditional businesses must move faster to retool and retrain the workforce to help absorb the shock.”
Call My (AI) Agent
The writing is on the wall. Humans will be increasingly replaced by AI agents. 2025 started off with the publication of a white paper by the World Economic Forum entitled Navigating the AI Frontier: A Primer on the Evolution and Impact of AI Agents. AI agents’ ability to manage complex tasks with minimal human intervention offers the promise of significantly increased efficiency and productivity, said the white paper. Additionally, AI agents could play a crucial role in addressing the shortfall of skills in various industries, filling gaps in areas where human expertise is lacking or in high demand. As the technology progresses, AI agents are expected to be able to tackle open-ended, real-world challenges such as helping in scientific discovery, improving the efficiency of complex systems like supply chains or electrical grids, managing rare non-routine processes that are too infrequent to justify traditional automation, or enabling physical robots that can manipulate objects and navigate physical environments.
There was uptake of AI agents by businesses in 2025, notably in areas such as customer service, but current AI agents are far from perfect, as a December 18 story in the Wall Street Journal illustrated. The newsroom tested a vending machine run by Claude, Anthropic’s AI model. A journalist was able to convince the AI agent that it was a Communist and that it should give away merchandise for free. Claude also ordered a bizarre range of vending machine merchandise, including a live fish and a gaming console, and offered to buy a stun gun, cigarettes and underwear.
That said, delegating tasks to AI agents will become the norm in 2026, business strategist Reillier predicted in an interview with The Innovator. Organizations will automate workflows with teams of agents. Tech companies, such as OpenAI, Perplexity, Google and others, will offer agentic AI capabilities in their apps and browsers.
This shift will change the architecture of the Internet and commerce, predicted brand and business strategist Dr. Erich Joachimsthaler, in an interview with The Innovator. “A new architecture of the Internet is taking shape,” he said. “For twenty years, the currency was attention. We bought impressions, optimized funnels, tuned SEO, and chased reach. That logic is collapsing. AI is replacing attention with intent as the core currency. In this world, a person expresses a goal, or ‘intent.’ This can now be interpreted or inferred by an AI agent which acts on it, and completes the task. No funnel. No click. No visit to your website. And no patience for the old tricks of brand building. This shift demands far more than incremental optimization. Merely defining growth in terms of segments, targets, needs or jobs and perhaps occasions is dated. It now requires understanding of the cultural context of life, episodes and moments that drive intent, the 1,440 minutes people live from midnight to midnight. Brand strategy needs to be rewritten around this, and brand execution needs to be re-engineered around this too.”
2026 Is Going To Be Wild
So, where does this leave us as we plan for 2026? More disruption is coming. Trends that will be discussed at the World Economic Forum’s annual meeting in Davos include a triple bubble (AI, crypto and debt), the quantum economy (computing, sensing and communications), and SupTech, the use of supervisory tech to monitor risk. Industry observers say world models – neural networks that understand the dynamics of the real world, including physics and spatial properties – will become increasingly important. In November, Chinese-American computer scientist Fei-Fei Li’s company World Labs, a spatial intelligence AI company building Large World Models (LWMs) to perceive, generate, and interact with the 3D world, launched its first multimodal world model. Meanwhile, Yann LeCun, Meta’s French-American chief artificial intelligence scientist, left to launch his own world model startup. 2026 may also usher in another step change: realizing the visual Turing test. The test is whether, when you use specialized devices such as glasses or a helmet to view the world, you are able to say whether what you’re looking at is real or not, Launchworks & Co’s Reillier said in an interview with The Innovator.
Choices need to be made. As AI ethics expert Kay Firth-Butterfield, the World Economic Forum’s former head of AI, wrote in an essay for The Innovator: “AI does not happen to us: choices made by people determine its future. The question is whether those choices will be made democratically, by societies debating values and trade-offs, or autocratically, by tech billionaires with vested interests and no guardrails.”
“There is probably a future world in which the big tech providers dominate, and rich people get much richer and a lot more people end up with no jobs and health services are even more preferential to white people and people with money have a much better education,” Dr. Laura Gilbert, who leads the Tony Blair Institute for Global Change’s work on applied artificial intelligence, said in a 2025 interview. “Alternatively, we can create a world that is more inclusive, where everyone gets a good education, a decent standard of living and preventative healthcare. That is the world I want to live in. So, the question is not what will the future hold? The question we should be answering is what should the future hold? In the technology sector we hold the key to what the future looks like, even more than the politicians do. If we want to create a world that is more equal, fairer, with better outcomes for everyone, that is the thing we should work hard to build. I would like everyone in tech and digital not to guess – but to decide what the future holds.”
As we craft this new future there will be eye-popping wins and failures. “The hard truth is that traditional, average companies are already in trouble,” Chambers wrote in his December 2025 column for The Innovator. “Meanwhile, the winners will be nothing short of spectacular – scaling faster, strategically partnering, and building competitive moats at a pace we’ve never seen before. 2026 is going to make 2025 look tranquil. Leaders should plan to have their seat belts fastened tight for the foreseeable future.”
Indeed, the theme of this year’s DLD, a premium tech conference taking place in Munich in January, is “It’s Going To Be Wild.” It is a great way to summarize where things stand as we exit 2025 and enter the year to come. Technology is racing ahead, leaving us to build the bridge to the future while crossing it.
This year-in-review story, which synthesizes The Innovator’s 2025 stories, was compiled with the help of Anthropic’s Claude, with significant human editing and input. Some of Claude’s conclusions were astute, others not so much. It also attributed thoughts and quotes to the wrong people. I found this annoying but also somewhat comforting. AI is a helpful tool but isn’t ready to fully replace us just yet.
