
The Data Center Power Crunch: How AI’s Appetite Is Forcing a Cooling Revolution

Generative AI and its next-generation successors are pushing data centers to consume electricity at the scale of small cities. Racks now routinely draw 100 kilowatts or more, and traditional air cooling can’t keep up. Operators are scrambling for transformers that face three-to-five-year delivery delays. At Data Center World in Washington, D.C., executives described the strain openly. Phill Lawson-Shanks, CEO of Aligned Data Centers, noted that one project required a complete redesign just to secure enough grid power and cooling capacity.

Nearly half of planned U.S. data center builds are stalled or canceled due to power shortages, according to Bloomberg. Alphabet, Amazon, Meta, and Microsoft have pledged over $650 billion in expansions by 2026, but electrical gear from China is scarce. Grid queues stretch seven years in some regions, and utilities are slamming the brakes on new connections.

Yet demand keeps climbing. Omdia analysts Shen Wang and Alan Howard see AI factories bulldozing ahead. “People need AI, so large AI factories will move forward somehow,” Wang said. Aligned is now scouting “stranded power” sites, ordering gear 2.5 years in advance, and planning 88 buildings—all pre-leased.

Cooling has flipped from an afterthought to a mandate. Rack densities have jumped from tens to hundreds of kilowatts. Liquid cooling systems will match air-cooled capacity by the end of 2025 and double it by 2026. Cold plate shipments are expected to explode from 8 million in 2025 to 356 million by 2030. Vertiv’s Scott Armul emphasizes simulation: “We can now simulate the whole environment, so we know that we need to adjust liquid cooling valves and alter CDU set points to optimize facility cooling and efficiency.”
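
That loop between simulation and set-point changes is, at bottom, a feedback-control problem. As a purely illustrative sketch (it does not reflect Vertiv’s or anyone else’s actual software, and every name and threshold below is hypothetical), here is how a facility controller might nudge a coolant distribution unit (CDU) supply-temperature set point and a facility-water valve toward a target chip inlet temperature:

```python
# Illustrative toy controller for a liquid-cooling loop.
# Nothing here reflects any vendor's real control logic; all values are hypothetical.
from dataclasses import dataclass

@dataclass
class CduState:
    supply_setpoint_c: float   # coolant supply-temperature set point (deg C)
    valve_open_pct: float      # facility-water valve position (0-100%)

def adjust_cdu(state: CduState, measured_inlet_c: float,
               target_inlet_c: float = 30.0, gain: float = 0.5) -> CduState:
    """Nudge the set point and valve toward the target chip inlet temperature."""
    error = measured_inlet_c - target_inlet_c              # positive = running hot
    new_setpoint = state.supply_setpoint_c - gain * error
    new_valve = min(100.0, max(0.0, state.valve_open_pct + 2.0 * error))
    return CduState(round(new_setpoint, 2), round(new_valve, 1))

# Example: the simulated rack runs 3 degrees hot, so the controller lowers the
# supply set point and opens the facility-water valve further.
state = CduState(supply_setpoint_c=32.0, valve_open_pct=60.0)
print(adjust_cdu(state, measured_inlet_c=33.0))
# CduState(supply_setpoint_c=30.5, valve_open_pct=66.0)
```

Real deployments pair a physics-based model of the whole facility with far more sophisticated controllers, but the feedback structure is the same.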

Average rack power has risen to 16 kilowatts today, up from 6.1 kilowatts nine years ago. AI workloads already demand 30 to 40 kilowatts or more. Only 20% of operators can handle 50 to 70 kilowatts per rack, and AI’s share of data center workloads is expected to grow from 15% to 40% by 2030.
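
A little arithmetic with just the figures above shows why that gap alarms planners: the historical growth rate in average rack power is far slower than what AI deployments already demand.

```python
import math

# Back-of-the-envelope arithmetic using only the figures quoted above.
start_kw, end_kw, years = 6.1, 16.0, 9
cagr = (end_kw / start_kw) ** (1 / years) - 1
print(f"Implied average growth: {cagr:.1%} per year")      # about 11% per year

# At that historical pace, how long until the *average* rack reaches AI-class density?
for target_kw in (30, 50, 70):
    t = math.log(target_kw / end_kw) / math.log(1 + cagr)
    print(f"{target_kw} kW per rack: about {t:.0f} more years at the old trend")
# Roughly 6, 11, and 14 years respectively, which is why operators are
# re-architecting for density now rather than waiting for averages to catch up.
```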

Global electricity consumption for data centers hit 415 terawatt-hours in 2024 (1.5% of total worldwide usage) and is growing 12% annually. Projections put it at 1,050 TWh by 2026, rivaling the electricity use of Japan or Russia. Goldman Sachs forecasts a 165% rise by 2030. The IEA points to AI as the prime driver, adding hundreds of terawatt-hours. In 2024, 40% of that power came from natural gas, 24% from renewables, and 20% from nuclear.

Water usage is also rising. U.S. data centers consumed 17 billion gallons in 2023, mostly from hyperscalers. That number could reach 33 billion gallons annually by 2028. One AI prompt uses about half a liter of water. But innovations are emerging: waste heat could warm towns in Appalachia or fuel greenhouses. Europe sees potential to recover 221 TWh yearly for district heating—enough to warm 500,000 homes in London alone.

Operators are pivoting fast. CoreSite’s 2026 outlook highlights on-site power, natural gas, and liquid cooling advances. Accenture calls it unprecedented momentum for AI, cloud, and edge. JLL describes it as the largest infrastructure supercycle in history, reshaping power, tech, and real estate.

Behind-the-meter generation lets operators skip the grid entirely. Natural gas turbines can power a site in 12 to 24 months, not three to five years. Nuclear is making a comeback: Microsoft is backing the restart of Three Mile Island’s 835-megawatt reactor, Amazon bought a data center campus at Talen’s nuclear plant, and Google signed small modular reactor deals with Kairos Power. Meta is hunting for sites. Hyperscalers are bidding fiercely for baseload power.

Liquid cooling now dominates the conversation. Cold plates handle chips directly; immersion tackles entire racks. Vendors like LiquidStack slash water usage and cool multi-megawatt deployments. China is testing underwater servers that tap ocean chill. Sandia is experimenting with submersion cooling that eliminates fans. NVIDIA’s Vera Rubin racks demand full liquid cooling at 120 kilowatts. Microsoft has already deployed the first NVL72 racks.

Supply chains are straining. CDU lead times have tripled. Vertiv’s $50 million Ohio plant won’t come online until 2027. Pipe fittings are becoming a bottleneck. A full 83% of experts say cooling technology lags behind AI’s needs. Yet hyperscalers keep pushing: Microsoft, Google, Meta, and Amazon have signed power deals, secured land, and ordered GPUs—even as racks sit idle waiting for cooling.

Heat islands are emerging around facilities, spiking local temperatures, raising bills, straining water supplies, and generating noise. Cambridge researchers urge chip-level tweaks and hybrid cooling—liquid at the chip, air for the rest—plus software efficiency gains.

China’s submerged-server experiments, noted above, cut both energy and water use, and they aren’t theoretical; they are already in operational tests.

Alberta now has 30 AI data center projects in its grid connection queue. Wonder Valley’s $70 billion park plans to tap gas and geothermal for 1.4 gigawatts in its first phase. Synapse’s one-gigawatt bid faltered due to community pushback.

Gartner warns that power shortages will constrain 40% of AI data centers by 2027. EnkiAI calls grid strain the top barrier. Yet AI infrastructure spending is projected to hit $582 billion in 2026, up 19%.

Operators are now integrating power, cooling, and compute from the start. They’re using standardized blocks, defined interfaces between CDUs and thermal cycles, and scalable modules for gigawatt campuses.
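
To make the idea of standardized blocks concrete, here is a purely hypothetical sketch; it does not describe any operator’s actual design, and every name, capacity, and interface value is invented. It shows how a repeatable build block with defined power and CDU interfaces might be specified and then scaled out to a gigawatt campus:

```python
# Hypothetical illustration of a standardized, repeatable build block.
# All names, capacities, and interface values are invented for this sketch.
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class BuildBlock:
    it_load_mw: float           # critical IT capacity per block
    racks: int                  # rack count at the design density
    cdu_supply_temp_c: float    # defined coolant interface to the CDUs
    cdu_flow_lpm_per_rack: int  # defined coolant flow per rack (liters/minute)
    liquid_cooled_share: float  # fraction of load on liquid rather than air

def blocks_for_campus(block: BuildBlock, target_gw: float) -> int:
    """Number of identical blocks needed to reach a target campus IT load."""
    return math.ceil(target_gw * 1000 / block.it_load_mw)

# 400 racks at 125 kW each gives a 50 MW block; a 1 GW campus repeats it 20 times.
block = BuildBlock(it_load_mw=50, racks=400, cdu_supply_temp_c=32.0,
                   cdu_flow_lpm_per_rack=60, liquid_cooled_share=0.8)
print(blocks_for_campus(block, target_gw=1.0))   # -> 20
```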

So grids creak, but AI marches on: nuclear restarts, gas surges, liquid cooling scales, and waste heat finds new uses. Constraints bind, yet solutions forge ahead. Industry insiders are watching electrons and pipes as closely as they watch silicon.

Source: WebProNews
