Latest articles

Interview Of The Week: James Martin On Responsible AI

James Martin is a leading voice on the impacts and risks of AI. He conceived and runs “Frugal AI” and “Responsible AI”, AI-focused training courses offered by French digital sustainability pioneers GreenIT.fr, and advises companies and associations worldwide on how to use technology more responsibly. He is also a founding member of EcoLogits, a global standard for evaluating AI’s environmental impacts, and runs the responsible tech blog BetterTech.

Previously, Martin led sustainability and communications at French tech scale-ups Scaleway (iliad Group) and Shippeo (part of French Tech 120, 2025). He was pivotal in enabling Scaleway to be recognized as a digital sustainability pioneer, notably via a much-referenced 2023 Green IT white paper, “How can engineers make tech more sustainable?”, while it established one of Europe’s most powerful AI GPU clusters, and in conceiving Shippeo’s Responsible AI Charter.

Martin speaks regularly at events such as ChangeNOW, one of Europe’s biggest climate conferences, where he was one of the first to outline the considerable environmental impacts of generative AI (full session from March 2025), and most recently moderated the AI sessions of the 2026 edition. He spoke to The Innovator about how corporates can curb their AI environmental footprint and derisk their use of the technology.

Q: Responsible AI includes limiting energy, emissions and water use because corporations are responsible for their emissions. Can you talk about AI’s environmental impact?

JM: The first serious reports about the impacts of AI were published in 2024. The one that I refer to the most is the United States Data Center Energy Usage Report published by Lawrence Berkeley National Laboratory and funded by the U.S. Department of Energy’s Industrial Efficiency and Decarbonization Office. The report found that:

*Data center electricity consumption has tripled over the past decade and is projected to double or triple again by 2028.

*U.S. data centers used roughly 200 terawatt-hours of electricity in 2024 — comparable to the annual electricity consumption of Thailand. AI-specific servers accounted for an estimated 53 to 76 terawatt-hours of that, enough on the high end to power over 7.2 million U.S. homes for a year. 

*By 2028, more than half of the electricity going to U.S. data centers is projected to be used for AI, which alone could consume as much electricity annually as 22% of all U.S. households.

*Data centers are expected to trend toward using more carbon-intensive energy sources like natural gas to meet immediate demand needs. 
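The report’s “over 7.2 million homes” comparison can be checked with simple arithmetic. Here is a quick sanity check in Python, assuming an average U.S. household uses roughly 10,500 kWh of electricity per year (that average is an assumption of this sketch, not a figure taken from the report):

```python
# Back-of-envelope check of the "over 7.2 million homes" figure.
# Assumption (not from the report): an average U.S. household uses
# roughly 10,500 kWh of electricity per year.
AVG_US_HOME_KWH_PER_YEAR = 10_500

ai_server_use_twh_high = 76                       # high end of the 53-76 TWh estimate
ai_server_use_kwh = ai_server_use_twh_high * 1e9  # 1 TWh = 1e9 kWh

homes_powered = ai_server_use_kwh / AVG_US_HOME_KWH_PER_YEAR
print(f"{homes_powered / 1e6:.1f} million homes")  # ≈ 7.2 million
```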

This was a red alert for me, a big realization that we’re really entering a new era with AI, not just in terms of what it can do but in terms of its impact on the planet.

Usage of the Cloud increased by around 500% in recent years, but because we were using CPU [central processing unit] servers, data center operators were able to make tweaks to keep energy consumption relatively flat. But then, when we switched to GPUs [graphics processing units] for AI, which consume around 4X more energy than CPUs, the same optimization tricks could not be used.

Q: There are no agreed-upon standards for AI providers to report their energy and water consumption, so companies and individuals are unaware of the impact of their AI usage.

JM: Lack of transparency is an issue. Data centers use their energy efficiency rating as a selling point, but it is only since the end of last year that they have had to declare their water efficiency in Europe. That means that until six months ago clients could go to a data center with a low PUE [Power Usage Effectiveness] score, think they’d done their bit for the planet, and then just move on. But often if your data center has a low PUE, it’s because it’s using millions and millions of gallons of water to keep the servers cool, even more so with GPUs, which heat up 2.5X to 3X more than CPUs. So, it’s not just the energy, it’s the heat. Water is cheaper than electricity, so it has of late tended to be big data centers’ preferred cooling method.

 There are 16 different environmental impacts that any product can have on the planet. Right now, we’re only looking at one or two for AI, and they are not being measured uniformly. The more time goes on, the less transparent companies like OpenAI are. In a blog post [OpenAI CEO] Sam Altman said the water consumption of a ChatGPT prompt was roughly one fifteenth of a teaspoon. He did not mention the fact that ChatGPT gets 2.5 billion queries per day. When you take those tiny amounts and you multiply them by 2.5 billion, there is a massive impact. That’s one of the things AI providers have become very good at doing: minimizing the size of their impact. They do it because there are no laws, especially not in the U.S., where most of this activity is happening, to make them do otherwise. So, there is no transparency and no accountability.
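The scale Martin describes is easy to multiply out. A rough calculation in Python, assuming a US teaspoon of about 4.93 ml (the per-prompt fraction and the 2.5 billion queries per day are the figures quoted in the interview):

```python
# Scaling Altman's "one fifteenth of a teaspoon" per prompt by
# ChatGPT's reported 2.5 billion queries per day.
TEASPOON_ML = 4.93            # one US teaspoon in millilitres (approximate)
queries_per_day = 2.5e9

water_per_query_ml = TEASPOON_ML / 15
water_per_day_litres = water_per_query_ml * queries_per_day / 1000
print(f"{water_per_day_litres:,.0f} litres per day")  # ≈ 820,000 litres/day
```

Tiny per-query figures compound into hundreds of thousands of litres every day, which is the multiplication Martin says providers leave out.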

Q: What is the fix for that? Should governments set standards on how energy and water use can be measured and can AI companies be audited to make sure that they are reporting correctly?

JM: I’m not very hopeful, not in the U.S. anyway, but things are starting to change. The European Commission has just started a discussion about whether it should put rules in place mandating that any data center project that wants to get clearance in Europe use the least impactful cooling and powering technology possible. But it’s early days.

Q: Isn’t there a danger that governments that want to attract the building of data centers, because they see it as an economic necessity, will turn a blind eye to the environmental impacts?

JM: The only thing that is stopping this huge AI wave right now is the American people. There’s an organization called Data Center Watch which has been keeping an eye on protests all around the U.S. by people who don’t want AI data centers in their backyards. People across the U.S. have managed to block or delay nearly $100 billion worth of AI data centers. There are some states where, whether you’re Republican or Democrat, you will not get elected if you don’t say ‘I am against data centers.’ That’s one of the things that gives me hope: people power can work. They’re protesting the fact that those data centers are making their electricity bills go up or using up the local water. It is a global issue. There are some regions of Mexico where there are so many data centers that the local people only get water for about half a day a week so that the data centers can keep running.

Q: What can corporates do to ensure that they are curbing their use of power, water and emissions from AI? 

JM: One of the things I do is point to Salesforce as a great example. Salesforce believes in using models that are as small as possible, that consume as little energy as possible, and that run off the cleanest energy possible. These are pretty simple things to do. Why isn’t your company doing that too?

The other thing I tell firms is that because the big AI providers and AI labs are being opaque and not declaring their environmental impact, there are tools which corporates can use to estimate the impact of their AI activity. I’m one of the founding members of a tool provider called EcoLogits, which is part of an association that came out of a French group called Data for Good. The same organization runs another reference tool, called Code Carbon. The two do things differently, but with the same aim, which is quantifying, assessing, and evaluating. We can’t say “measure”, because it’s not precise enough.

The challenge is that the big models – OpenAI, Gemini, Claude and so on – say nothing, or next to nothing, about what their impacts are. What EcoLogits does is take a model of comparable size that is open source. To the best of our knowledge, ChatGPT has this many parameters, let’s say 1 trillion, so we find an open source model with 1 trillion parameters. Open source models are transparent, so we can work out their emissions, water and energy use, and then we transpose that. It’s a highly educated guess: we say if this one has this much impact, then ChatGPT will probably have this much impact.

EcoLogits has two incarnations. One, the public-facing version, is called EcoLogits Calculator, and you can find it on Hugging Face. It helps you estimate that if you write X words with a given AI model, it will use this much energy and this much water.
The other is a version for developers: a library you can plug into your Python code that will enable you to do things like display to your users how much energy each prompt uses. If you are using an LLM that you are hosting yourself, then you can use Code Carbon to calculate your CO2-equivalent emissions. These are the sorts of tools that companies can use to, once again, not measure but evaluate and assess the environmental impact of their AI activity, and then keep it in check by finding the models that are best adapted to their needs.

The AI that has been served to us so far is like using a bazooka to swat a fly. The models are around 2,000 times more powerful than we need. It’s not that surprising that people use them so much, because they seem magical and are largely free, but people don’t understand the real environmental impact. What I encourage people to do is two main things. First, use the smallest possible model that is best for your needs, for what you’re trying to do. Second, use open source models as far as possible, as they are transparent and adaptable, which notably means you can host them wherever you like. Everyone right now is using 2 trillion parameters when they could be using 10 billion, whereas I can promise you – and it’s backed up with all sorts of research – that a model with about 10 billion parameters can cover most people’s needs. Most people use LLMs to search for information, like they used to with Google Search, or to rewrite their emails or refine their texts in some way. These are super basic needs, not quantum physics.
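The “transpose from an open source proxy” approach Martin describes can be sketched in a few lines. Everything below is an illustrative toy with made-up numbers; it is not the actual EcoLogits API or its measured data:

```python
# Illustrative sketch of the "proxy model" estimation idea: the impacts of
# a closed model are approximated by an open source model of similar size.
# The figures below are made-up placeholders, NOT real measurements and
# NOT the EcoLogits library's API.

# Hypothetical per-1,000-token impacts of open source proxy models,
# keyed by parameter count in billions.
OPEN_PROXIES = {
    10:   {"energy_wh": 0.5,  "water_ml": 2.0},
    70:   {"energy_wh": 3.0,  "water_ml": 12.0},
    1000: {"energy_wh": 40.0, "water_ml": 160.0},
}

def estimate_impacts(closed_model_params_b: float, tokens: int) -> dict:
    """Pick the proxy closest in size and scale its impacts by token count."""
    proxy_size = min(OPEN_PROXIES, key=lambda p: abs(p - closed_model_params_b))
    proxy = OPEN_PROXIES[proxy_size]
    scale = tokens / 1000
    return {name: value * scale for name, value in proxy.items()}

# A 1-trillion-parameter closed model answering with 500 tokens:
print(estimate_impacts(1000, 500))  # {'energy_wh': 20.0, 'water_ml': 80.0}
```

This is the “highly educated guess” in code form: the open source proxy supplies the numbers, and the closed model inherits them by analogy.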

When you ask an LLM the same question that you would ask a search engine, you’re wasting a lot of energy and resources. It’s not your fault, as an individual, that you’ve been given a bazooka instead of a fly swatter. But as a company, you can identify the use cases which are appropriate.

What has been happening inside companies is that if their Cloud provider is Google, it’s quite easy for them to tick the Gemini box on their Cloud contract, and they will then end up using Gemini for all sorts of things. I explain that they don’t have to use it for everything. If, for example, they are using it to help their developers code faster, then I would come in and say, ‘Here’s an open source model which has fewer parameters, lower emissions and probably costs a lot less too. So why aren’t you using this instead?’ Another advantage is that you can host it yourself, so it doesn’t have to be hosted in the U.S. There are all sorts of sovereignty advantages too. Because this whole AI revolution has gone so quickly, most companies have signed contracts with U.S. tech giants, so they may be locked into using Gemini or similar for most of their needs. But that shouldn’t stop them from experimenting. Why not do a POC [proof of concept] on a specific use case to see if you could do it with a more frugal AI model instead?

Let me give a specific example. An AI startup called Miralia specializes in symbolic AI [an approach to AI that uses symbols and rules to represent knowledge and perform reasoning, emphasizing logic, structured data, and human-like decision-making]. Any company today receives millions of emails: random questions, maybe phishing, maybe an invoice. Miralia’s AI can scan the contents of each email and say, ‘Ah, this one is for sales. This one is for accounting. This one is for marketing,’ and it does it perfectly. It uses symbolic AI for 90% of the emails and then, for the other 10%, it uses LLMs to fill in the gaps. The CEO of Miralia told me that if a company were doing all of that sorting with generative AI, its Cloud bill would be five times higher. They have also worked out that doing it their way generates 100x less emissions. It’s a great example of why it makes sense for companies to use the right AI for the right needs.
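The rules-first, LLM-fallback pattern Martin describes can be sketched like this. The keyword rules and the fallback stub are illustrative placeholders, not Miralia’s actual system:

```python
# Sketch of the symbolic-AI-first pattern: cheap rules route most emails,
# and only unmatched ones fall back to a (costly) LLM call.
# The rules and the fallback stub are illustrative, not Miralia's system.
import re

RULES = [
    (re.compile(r"\b(invoice|payment|billing)\b", re.I), "accounting"),
    (re.compile(r"\b(quote|pricing|demo)\b", re.I), "sales"),
    (re.compile(r"\b(press|newsletter|campaign)\b", re.I), "marketing"),
]

def classify_with_llm(email: str) -> str:
    """Placeholder for the expensive generative-AI fallback."""
    return "needs_human_review"

def route(email: str) -> str:
    for pattern, department in RULES:
        if pattern.search(email):
            return department           # symbolic path: near-zero cost
    return classify_with_llm(email)     # the ~10% the rules can't handle

print(route("Please find attached our invoice for May"))  # accounting
print(route("A message with no obvious keywords"))        # needs_human_review
```

The design point is the same one the interview makes: the deterministic rules handle the bulk of the traffic for almost nothing, so the expensive model is only invoked for the residue.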

Q: How do you advise companies on governance issues?

JM: After we launched the first training course on Frugal AI, one of the questions we got from a company was, ‘If something goes wrong due to AI in a company, when is it the company’s fault and when is it the employee’s fault?’ That is why we are also launching a training course, called Responsible AI, that is about managing risks. The course covers predictive, generative and agentic AI, and I boiled the risks down to 10. One of them is environmental impact; another is societal impact, which is also something we cover in Frugal AI. The eight others are things like transparency, reliability, bias and security. Those are the things that companies always list when you ask them what worries them about AI. We cover those 10 risks, explain what they are and give examples and tips on how to manage them.

What’s interesting is that both the Frugal AI and Responsible AI courses conclude with the same advice: develop a Responsible AI Charter for your company. With good reason. Around 50% of staff today are using shadow AI; other estimates put it even higher. About the same proportion of companies do not have an AI policy, even though they know that most employees are using shadow AI and that there is thus a risk of confidential data leaving the company. But for now, apart from the biggest companies, they’re not doing anything about it, which is surprising, considering that in 2023 there was the first high-profile case of an employee working for Samsung copy-pasting a code base into ChatGPT, causing a data breach. What did Samsung do? It banned ChatGPT, but it couldn’t fire the person who did it, because it had no rules saying he or she couldn’t do that. It did say that anyone else who did the same would be fired. It has since unbanned ChatGPT for all sorts of reasons, but the point is, it did not have an AI charter in place at the time and there were consequences.

The more time goes on, the crazier the field gets, and the harder it is to keep on top of everything that’s going on. Look at OpenClaw [a free, open-source autonomous AI agent that runs locally on your machine and executes tasks, such as managing emails, calendars, files, and browser automation, by integrating with large language models and using messaging platforms as its primary interface]. Summer Yue, Meta’s Alignment Director, put an OpenClaw agent on her computer just to see what would happen. It started deleting all her emails while she was on a call. She told it to stop and it wouldn’t. Research and advisory firm Gartner is rightly being extremely cautious about agentic AI. It notably says agentic web navigators should in no way be used within companies, for security reasons. And yet, according to Cyberhaven, 28% of companies have at least one of these navigators installed by an employee. That’s not even mentioning OpenClaw agents. So this is something companies need to put a lid on right away.

Q: What would you like readers to take away from this interview?

JM: In terms of ethics and governance, my advice is very clear: your company needs to have rules in place, preferably with sanctions, so you can hold your employees responsible. When it comes to frugal AI, the objective is to do as much compute as possible with as few resources as possible, and by resources, I mean hardware, water, and electricity. The potential harms are too great to just cross our fingers and hope everything will go well, because it’s not going well. The use of AI needs more governance. It needs more common sense, and when it comes to models there needs to be a recognition that smaller is better for everyone, if we still want to have a liveable planet 10, 20, 30 years from now, and beyond.

About the author

Jennifer L. Schenker

Jennifer L. Schenker, an award-winning journalist, has been covering the global tech industry from Europe since 1985, working full-time, at various points in her career for the Wall Street Journal Europe, Time Magazine, International Herald Tribune, Red Herring and BusinessWeek. She is currently the editor-in-chief of The Innovator, an English-language global publication about the digital transformation of business. Jennifer was voted one of the 50 most inspiring women in technology in Europe in 2015 and 2016 and was named by Forbes Magazine in 2018 as one of the 30 women leaders disrupting tech in France. She has been a World Economic Forum Tech Pioneers judge for 20 years. She lives in Paris and has dual U.S. and French citizenship.