What It Will Take to Make AI Sustainable

Researcher Sasha Luccioni argues we need better emissions data and a better sense of how people are using AI in the first place.

Building AI sustainably can seem like a pipe dream: tech giants that once promised to cut emissions are now racing to build out massive data centers powered by fossil fuels.

The rush to build out AI at all costs has been reinforced by the Trump administration, which is also rolling back environmental protections.

Despite these headwinds, Sasha Luccioni, an AI sustainability researcher, thinks that customer demand for transparency in AI, from both businesses and individuals, is higher than ever.

In her four years at Hugging Face, an AI company, Luccioni has become a leader in the push for transparency around AI’s emissions and environmental impacts, including pioneering a leaderboard that documents the energy efficiency of open-source AI models. She has also been an outspoken critic of major AI companies that, she says, deliberately withhold energy and sustainability information from the public.

Now, she’s starting Sustainable AI Group, a new venture with former Salesforce sustainability chief Boris Gamazaychikov. They’ll focus on helping companies answer, among other things, “what are the levers that we can play with in order to make agents slightly less bad?” Luccioni is also interested in sussing out the energy needs of different types of AI tools, such as speech-to-text translation or photo-to-video—an area that, she says, has so far been understudied.

Luccioni sat down exclusively with WIRED to talk about the demand for sustainable AI and what exactly she wants to see from Big Tech.

This interview has been edited for length and clarity.

WIRED: I hear a lot from individual people who are worried about the environment and AI use, but I don't hear as much from companies thinking about this. What have you heard specifically from folks who are working with AI in their business, and what are they worried about?

Sasha Luccioni: First of all, they are getting a lot of employee pressure—and board pressure, director pressure, like, “You need to be quantifying this.” Their employees are like, “You’re forcing us to use Copilot—how does it affect our ESG goals?”

For most companies, AI has become a core part of their business offering. In that case, they have to understand the risks. They have to understand where models are running. They can't continue to use models where they don’t even know the location of the data centers or the grid they're connected to. They have to know what the supply chain emissions are, transportation emissions, all these different things.

It’s not about not using AI. I think we’re past that. It’s choosing the right models, for example, or sending the signal that energy source matters, so customers are willing to pay a little bit more for data centers that are powered by renewable energy. There are ways of doing it, and it's a matter of finding the believers in the right places.

I'd also imagine that for global companies, the sustainability situation is very different than in the US, right? The US government might not give a shit about this, but other governments certainly do.

In Europe, they have the EU AI Act. Sustainability has been a pretty big part of that since the beginning. They put a bunch of clauses in there, and now the first reporting initiatives are coming out.

Even Asia is trying to be more transparent. The International Energy Agency has been doing these reports [on AI and energy use]. I was talking to them, and they were like, other countries realize that the IEA gets their numbers from the countries, and the countries don't have these numbers for data centers specifically. They can't make future-looking choices, because they need the numbers to know, "OK, well, that means we need X capacity in the next five years," or whatever. [Some countries] have started pushing back on the data center builders.

If you could wave a magic wand tomorrow and make Sam Altman, or Dario Amodei, or whoever, give you a piece of information that you've been looking for, what would it be? Or would you want them to generally be more open about what they have?

I wish there was a little meter or info box on the ChatGPT or Claude UI that tells you at the end of each query or conversation how much energy was used. Ideally, greenhouse gas emissions and how that energy was generated.

I think that it would be a market competitive advantage if one of the big model providers decided to make a bet on sustainability. Right now, they're all infighting and trying to one-up each other. If one of them was like, "OK, we're going to stop trying to create these data centers that are powered by natural gas, and we're going to make renewable data centers," I think that that could actually give them an advantage. It's like when Anthropic said no to the US government for military use. It did give them a boost.

A cultural boost.

Exactly.

I feel like in the popular conversation around AI, the big closed models are the only game in town. Do people you speak to have this knowledge that you don't actually have to use the big models for literally everything you're doing? Is that part of what you're educating people on?

Definitely. Maybe this is the geek in me, but I love going back to, like, what is AI? People love saying how AI is revolutionizing our societies, but the kinds of models that have been doing that grunt work are not LLMs. They're classifiers. These are the systems that have been such a core part of what we've seen as AI productivity.

I always try to disentangle what's actually been useful to us versus what is being sold to us. Let’s say you work in finance. You're going to be trying to figure out where the market's going. You don't need a general-purpose LLM for that.

Google has been providing numbers about the number of tokens sent and received. That's a really important piece of information. That way you can figure out, for example, that if the queries are super simple, you can use simpler models. If you realize that most of your employees are generating images or whatever, you have that piece of information as well.

Internally, you can say, "Well, if you guys want to just search company documents, this is the model to use. It's simple and cheap. And if you actually want to do deep research, here’s a more complex model."

It also sometimes feels to me like the companies that are developing these big models don't want us to know that there are other options.

It’s such an incestuous field. Many of the big companies making the models are also the ones that are selling you the compute. It makes sense for them to sell you the largest model, because then you need the most compute.

If there were a completely different set of actors building and operating the data centers versus training the models versus creating the products based on those models—if these were completely distinct entities, we would have so much more diversity in AI right now.

I have all these real worries about what's going on with AI, environmentally, but it’s so weird watching this conversation calcify into a lack of nuance. How do you handle those types of conversations with people?

As a researcher, I can't go around just giving numbers that I can’t vouch for. But on the other hand, it is really hard to convey scale and nuance.

It's true that maybe each individual query is not a big deal. But then you multiply it by the number of people that use these things—it is a really hard conversation.

We still need the numbers on energy and water use in order to make informed decisions. Even if the numbers are tiny, we still should get them because we have numbers for transportation, we have numbers for nutrition, we have numbers for all these different things.