Gartner’s Technology Trends for 2025 – Energy-Efficient Computing: AI Energy Consumption and Its Management
This article is the third installment in a blog series on energy-efficient computing.
In October 2024, Gartner released its list of the top ten strategic technology trends for 2025. For the first time, energy-efficient computing appeared on the list, ranked sixth.
The previous articles examined the energy consumption of IT, its growth drivers, and energy use in software. This third part focuses on a trending topic—AI and its energy consumption.
AI Energy Consumption Is Increasing Rapidly
The use of AI has surged over the past two to three years, with new AI applications emerging weekly. The quality of these implementations, particularly in generative AI, has also improved substantially.
However, people have woken up to the energy consumption and emissions associated with AI solutions. The widespread use of AI has driven the establishment of new data centers, leading to increased operational (in-use) and embedded (manufacturing-related) emissions.
Assessing AI’s energy consumption has long been hindered by a lack of standardized data. AI providers have, for understandable reasons, been reserved about disclosing energy usage, offering only general insights. Despite these challenges, AI’s energy demands have been a key reason why companies like Microsoft and Google are struggling to reduce their carbon footprints. Both saw their emissions rise last year, jeopardizing their carbon-neutrality goals. Microsoft, for example, signed an agreement to purchase the output of the infamous Three Mile Island nuclear plant to power its AI operations starting in 2028, when the refurbished plant is expected to come back online.
Finally, Measurable Data On AI Energy Consumption and Emissions
In 2024, the first scientific studies on AI energy consumption and emissions were published. These studies acknowledged that the field remains under-researched. New studies are almost certainly in the pipeline, but the urgency of decisions to combat climate change remains, and there is no time to wait for exhaustive data.
Another challenge has been defining a standardized measurement method for various AI solutions, as their features, configurations, and training datasets—hence energy consumption—vary widely. Additionally, hardware manufacturers have been reluctant to reveal the embedded carbon footprint of AI-related devices, such as GPUs. Early estimates suggest that embedded emissions can account for up to half of a device’s total lifecycle emissions—an unusually high figure compared to the typical 80-20 split between operational and embedded emissions for servers and networks.
AI Task Energy Consumption Varies Greatly
Recent studies have measured the carbon footprint of various AI tasks, including text and image classification, image generation, pattern recognition, summarization, and captioning. Since all tests were conducted on the same cloud platform, the carbon footprint is directly proportional to energy consumption: emissions are simply the energy used multiplied by the platform’s grid carbon intensity.
The differences between tasks were stark. Text classification was the least energy-intensive at 0.002 kWh per thousand operations, while image generation was the most at 2.9 kWh for the same volume. These figures represent averages across multiple AI engines. At the extremes, the least efficient image generation consumed 6,833 times more energy than the most efficient text generation. Generally, text processing was found to be significantly more energy-efficient than image processing, largely due to the lower number of tokens required.
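To put these figures in perspective, here is a minimal sketch that turns the per-thousand-operation averages quoted above into a total energy estimate. The workload volumes below are made-up examples, not measured data:

```python
# Illustrative estimate of AI inference energy, using the average
# per-1,000-operation figures quoted above (values in kWh).
# The workload mix is a hypothetical example, not a benchmark.

ENERGY_PER_1000_OPS_KWH = {
    "text_classification": 0.002,  # least energy-intensive task
    "image_generation": 2.9,       # most energy-intensive task
}

def estimate_energy_kwh(workload: dict[str, int]) -> float:
    """Estimate total energy (kWh) for a mix of AI tasks.

    `workload` maps a task name to its number of operations.
    """
    return sum(
        ENERGY_PER_1000_OPS_KWH[task] * ops / 1000
        for task, ops in workload.items()
    )

# Example: one million operations of each task type.
print(estimate_energy_kwh({"text_classification": 1_000_000}))  # 2.0 kWh
print(estimate_energy_kwh({"image_generation": 1_000_000}))     # 2900.0 kWh
```

Even using the averages rather than the extremes, a million image generations consumes over a thousand times more energy than a million text classifications.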
Model size also impacts energy consumption, with larger models consuming more than smaller ones. However, the intended use of the model has a greater influence than its size. Specialized models consume significantly less energy than general-purpose models for the same task. For example, specialized models for summarization emit 4-10 gCO₂eq per thousand operations, compared to 20-30 gCO₂eq for general-purpose models. This is understandable given that specialized models often have fewer parameters (e.g., up to 600 million) compared to the largest general models (e.g., around 11 billion parameters).
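The same arithmetic shows how much model choice matters. The sketch below compares the emission ranges quoted above for a hypothetical volume of summarization requests; the request volume is illustrative only:

```python
# Compare emission estimates (gCO2eq) for specialized vs. general-purpose
# summarization models, using the per-1,000-operation ranges quoted above.

SPECIALIZED_G_PER_1000 = (4, 10)   # gCO2eq per 1,000 summarizations
GENERAL_G_PER_1000 = (20, 30)

def emissions_range_g(ops: int, per_1000: tuple[int, int]) -> tuple[float, float]:
    """Return (low, high) emission estimates in gCO2eq for `ops` operations."""
    low, high = per_1000
    return low * ops / 1000, high * ops / 1000

ops = 10_000_000  # ten million summarizations, a hypothetical monthly volume
print(emissions_range_g(ops, SPECIALIZED_G_PER_1000))  # (40000.0, 100000.0) g, i.e. 40-100 kg
print(emissions_range_g(ops, GENERAL_G_PER_1000))      # (200000.0, 300000.0) g, i.e. 200-300 kg
```

At that volume, switching from a general-purpose model to a specialized one saves on the order of 100 to 260 kgCO₂eq per month for this task alone.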
Energy Cost of AI Training
Beyond operational energy use, the training of AI systems must also be taken into account, as it involves energy consumption for data collection, cleaning, and organization, as well as for the training runs themselves.
Unfortunately, little data is available on the energy demands of training commonly used models. For organizations training their own AI models, it’s advisable to measure training energy consumption and explore ways to reduce it. A straightforward approach is to train models in regions with low-carbon energy production, such as Finland or other Nordic countries. Training during periods of low societal electricity demand or surplus renewable energy can further reduce emissions.
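As an illustration of what such carbon-aware scheduling might look like, the sketch below delays a training run until the grid’s carbon intensity falls below a threshold. Note that `fetch_grid_carbon_intensity` is a hypothetical placeholder; in a real system you would query your grid operator’s API or a commercial carbon-intensity data service:

```python
import time

CARBON_THRESHOLD_G_PER_KWH = 100  # illustrative cutoff for "low-carbon" power

def fetch_grid_carbon_intensity() -> float:
    """Hypothetical placeholder: return the current grid carbon intensity
    in gCO2eq/kWh. In practice, query a grid operator or a
    carbon-intensity data service here."""
    raise NotImplementedError

def wait_for_low_carbon_window(poll_seconds: int = 900) -> None:
    """Block until the grid is clean enough, then return."""
    while fetch_grid_carbon_intensity() > CARBON_THRESHOLD_G_PER_KWH:
        time.sleep(poll_seconds)  # re-check every 15 minutes

def run_training_job() -> None:
    wait_for_low_carbon_window()
    # ... launch the actual training run here ...
```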
Recommendations For Reducing AI’s Energy Consumption and Emissions
While prohibiting AI use is unrealistic—the proverbial genie is out of the bottle—energy consumption and emissions can be mitigated by:
1. Selecting tasks to be solved with AI thoughtfully.
2. Using specialized models instead of general-purpose ones whenever possible.
3. Considering energy usage (when known) when comparing different models.
4. Training systems in low-carbon data centers and during periods of reduced energy demand or excess renewable availability.
In the next part of the series, we’ll explore how IT service buyers, developers, and users can minimize the energy consumption of their solutions.