The Bloomberg Billionaires Index estimated that the world’s 10 wealthiest people – a list dominated by U.S. tech billionaires, including Elon Musk – gained nearly $64 billion on November 6, the largest daily increase since the index began in 2012. The tech titans had multiple reasons to celebrate. Much of the gain for the top 10 came from a surge in U.S. stocks after the election, as investors anticipated a regulation-light policy platform.
The theory that regulation is bad for innovation has been taken up as a rallying cry by the tech sector both inside and outside Silicon Valley: if regulators are allowed to have their way, the argument goes, AI development will be hampered. So, for them, light regulation – or no regulation – is great news.
The argument that regulation hinders innovation is not new. Austrian-American economist Joseph Schumpeter developed the theory of “creative destruction” in the mid-20th century, maintaining that entrepreneurs, rather than regulators, were the key to driving economic development through new technologies and business models. Since then, economists including Milton Friedman have championed minimal regulation as a key driver of innovation.
What is new is Trump’s creation of the Department of Government Efficiency (DOGE) to dismantle federal agencies. This week Trump announced that he has appointed two entrepreneurs to lead DOGE: Musk, the man behind Tesla and SpaceX and one of the U.S. President-elect’s biggest campaign contributors, and Vivek Ramaswamy, a pharmaceutical entrepreneur.
There are more than a few potential conflicts of interest. The name of the new department, DOGE, appears to be a play on another one of Musk’s many investments, the cryptocurrency Dogecoin, which the billionaire regularly promotes. And, as The New York Times and other media have pointed out, SpaceX, Tesla and other companies Musk created have recently been the targets of at least 20 different investigations or lawsuits by federal agencies. In other words, Musk will be watching over, and maybe even dismantling, the agencies that police his companies.
The tech industry argument that it can and should police itself has always been underpinned by self-interest, but this takes things to a whole new level.
Ramaswamy has proposed immediately eliminating the Education Department, the Federal Bureau of Investigation and the Internal Revenue Service by executive order. Slashing government regulations and spending has become a top priority for Musk as his frustrations have grown with what he considers excessive or redundant oversight by the Federal Aviation Administration and the Interior Department while SpaceX sought launch licenses to continue testing its newest rocket, Starship. He has pledged to cut $2 trillion from the federal budget, but he has not explained in any detail how that would be accomplished or which parts of the government would be slashed. The question is what all of this will mean for AI.
Although the U.S. issued presidential executive orders on AI in 2019 and 2023, and some draft legislation is circulating in Congress, Trump’s re-election has cast doubt on the prospects for federal AI regulation.
Trump plans to repeal President Biden’s executive order on AI, according to his campaign platform, which states that the order “hinders AI innovation, and imposes radical left wing ideas on the development of this technology” and that “in its place, Republicans support AI development rooted in free speech and human flourishing.” Musk and Ramaswamy could very well dismantle some of the regulatory agencies that have stepped in to fill the legislative void, leaving no guardrails in place.
State Of Play
Globally, countries are focusing on creating legislation and ethical guidelines that address both the opportunities and risks of AI. While the European Union’s AI Act, which became law in August 2024, is one of the most comprehensive and far-reaching pieces of legislation to date, other countries are also taking significant steps to ensure AI is used safely, ethically, and in a way that promotes public trust. The landscape is still evolving, and international cooperation may be necessary to ensure consistent and effective regulation as AI continues to develop. Countries currently creating AI legislation and frameworks include Singapore, South Korea, Australia, China, the UK, Canada and Brazil.
In the U.S., in the absence of federal legislation, more than a third of states have created their own regulations around AI, leaving organizations that deploy AI with a patchwork of different rules to comply with in the coming years. To give some examples: Colorado’s AI Act, which will come into force in February 2026, is for all intents and purposes a copy of the EU AI Act. New York has extensive legislation on various uses of algorithms, for instance in human resources. Following Governor Gavin Newsom’s veto of controversial AI legislation, California is likely to promote other bills (there are many in the pipeline). And the very Republican state of Texas plans to introduce “the Texas Responsible AI Governance Act” in 2025, which takes a risk-based approach and aims to guard against automated decisions that lead to discrimination.
This week OpenAI made clear its plans to work with the new administration on AI policy. A “blueprint for U.S. AI infrastructure” the company presented in Washington D.C. on November 13 outlines AI economic zones co-created by state and federal governments “to give states incentives to speed up permitting and approvals for AI infrastructure.”
In sum, we are left with a piecemeal approach to AI safety. While some countries and states embrace Responsible AI, others see a political and economic opportunity in taking a laissez-faire approach.
In Europe, business leaders are already worried about losing ground to the U.S. and China because of the EU AI Act. At a recent European tech event attended by more than 70,000 people, the founder of an AI startup who spends time in both the EU and Silicon Valley stated that regulation would keep AI from creating a return on investment for businesses. Many panels about AI echoed a similar sentiment: any regulation will kill innovation.
It is wrong to see the debate in such black-and-white terms. The task of the EU is not just to regulate AI but also to guard Europeans’ way of life. If Europeans were asked whether they wanted to live in a world where AI goes unregulated, as in parts of the U.S., most would reject the idea as incompatible with their values. Europe regulates the safety of every other product that might harm humans, from cars to washing machines. Why should AI not be equally regulated, and still succeed?
The Case for Responsible Legislation
Having regulation to guide the design, development and deployment of AI will benefit organizations that use AI as a driver of their business. For example, companies that deploy AI without the correct data or without sufficient re-training of models will soon see detrimental effects on their businesses. Earlier this month The Innovator ran an article about the use of automated AI agents in insurance. While the story is very interesting, the failure or misbehavior of those systems could bankrupt a business.
Guardrails are needed to protect both business and society as a whole, but we need to stop thinking of regulations as a constraint. Regulations can have multiple benefits:
- They can spark creativity, forcing teams to think differently and produce better solutions. If better solutions are created in Europe, that would break some of the U.S. tech titans’ dominance.
- A regulated approach can offer a clear aim or target, rather than leaving teams to chase endless paths and opportunities.
- Regulation enables innovators to test their technologies against a framework, ensuring they deploy technology that helps their customers without causing unintended outcomes.
Seizing The Day
From the absence of labor laws during the Industrial Revolution to the collapse of the Rana Plaza building in Bangladesh in 2013, the world has seen terrible and unintended outcomes over the past 300 years from industries that were either unregulated or under-regulated. There is an opportunity to create impactful innovation under reasonable regulation, and we must seize it. This is the time to ensure that the technology we develop creates the outcomes we want. No one wants technology to grind to a halt, but we must proceed with caution to ensure the safety of humanity. If OpenAI’s Sam Altman is right, AGI (artificial general intelligence) could be here as early as 2025. We need those who represent us to make the decision on whether to create and unleash AGI. Leaving that decision in the hands of a few extraordinarily wealthy white men in Silicon Valley with their own vested interests and no guardrails in place is both unwise and downright dangerous.
Kay Firth-Butterfield, one of the world’s foremost experts on AI governance, is the founder and CEO of Good Tech Advisory. Until recently she was Head of Artificial Intelligence and a member of the Executive Committee at the World Economic Forum. In February she won the Time100 Impact Award for her work on responsible AI governance. Firth-Butterfield is a barrister, former judge and professor, technologist and entrepreneur and Vice-Chair of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. She was part of the group that met at Asilomar to create the Asilomar AI Ethical Principles, is a member of the Polaris Council for the Government Accountability Office (USA), the Advisory Board for UNESCO International Research Centre on AI, ADI and AI4All. She sits on the Board of EarthSpecies and regularly speaks to international audiences addressing many aspects of the beneficial and challenging technical, economic, and social changes arising from the use of AI. This is the eighth of a planned series of exclusive columns that she is writing for The Innovator. This column was co-authored by Rebecca Y. Gonzales, Chief Customer Officer for the Cantellus Group. In this role, she leads customer initiatives that leverage artificial intelligence, particularly generative AI and other frontier technologies, to drive innovation and practical applications for customers globally. Prior to joining Cantellus, Rebecca was Head of Generative AI Enablement at Amazon Web Services Generative AI Innovation Center.