AI surge strains data centre power and capacity
Industry leaders are warning that global data centre infrastructure is coming under increasing strain as artificial intelligence workloads surge. Marking International Data Centre Day, commentators are highlighting structural pressures on capacity, energy and supply chains.
AI training and inference require dense compute and storage, while operators face rising demand from cloud providers, enterprises and edge deployments. Analysts and vendors expect this growth to reshape how facilities are designed, funded and run, as traditional architectures and power strategies come under pressure.
Energy use has become a central concern for operators and regulators. Recent estimates suggest data centres consume around 2.5% of the United Kingdom's electricity each year, and expansion plans in key hubs are prompting closer scrutiny of grid impact. Industry surveys show that about two-thirds of operators view optimising energy consumption and securing power as a top operational challenge.
Water use is also drawing attention. Cooling systems in some facilities require significant volumes of water, and environmental groups have raised concerns about pressure on local resources. Available figures indicate that fewer than 4% of data centres currently use more than 100,000 cubic metres of water each year. Commentators argue that, as capacity scales, operators will need tighter monitoring and new cooling strategies to keep usage within acceptable limits.
On the demand side, AI is reshaping the long-term outlook for data centre capacity. Forecasts cited by vendors suggest global demand for AI-ready capacity could rise by about a third each year between 2023 and 2030. Some projections indicate AI workloads could account for nearly 70% of total data centre demand by the end of the decade, transforming investment priorities across the sector.
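As a rough illustration of what those forecasts imply (our own arithmetic, not a figure from the cited projections), growth of about a third each year compounds sharply over the seven years from 2023 to 2030:

```python
# Rough illustration: what ~33% annual growth in AI-ready capacity
# implies cumulatively over 2023-2030 (7 compounding years).
# The 33% figure comes from the forecasts cited above; the simple
# year-on-year compounding assumption is ours.
annual_growth = 0.33
years = 2030 - 2023  # 7 compounding years

factor = (1 + annual_growth) ** years
print(f"Cumulative capacity multiple by 2030: {factor:.1f}x")  # ~7.4x
```

In other words, if the forecast holds, the sector would need to support roughly seven times today's AI-ready capacity by the end of the decade.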
Executives and consultants say this trajectory is exposing the limits of current designs. Many legacy facilities were built for more predictable enterprise and cloud workloads. Operators now face fluctuating, power-hungry AI jobs that create sharp peaks in energy and cooling demand. Industry voices describe a shift away from incremental upgrades and towards new architectures that can cope with greater volatility in both demand and power supply.
Supply chain risk is another growing concern. Lead times for key components, including transformers, specialised chips and cooling equipment, have lengthened in recent years. Constraints in construction labour, grid connections and specialist engineering skills add further complexity to large projects.
Consultants argue that build-outs and upgrades now require better integration between corporate planning and the supplier ecosystem. They point to delays in grid hook-ups, equipment shortages and fragmented project management as recurring causes of overruns. In response, some operators are revisiting sourcing strategies, diversifying suppliers and placing earlier orders for long-lead items to avoid bottlenecks.
"Data centres face growing pressure as digital workloads increase and supply chains grow more complex. Gaining clear visibility of suppliers and coordinating actions across teams is critical to managing risk and preventing delays. Early engagement with partners, careful resource allocation, and strategic investment support operational stability and maintain service continuity. International Data Centre Day underscores the importance of this infrastructure and the supply chains that support it. When data centres and their supply chains are closely aligned with business operations, they can safeguard critical infrastructure, maintain reliability, and respond more effectively to evolving demands," said Alan Win, Founder and CEO, Middlebank Consulting Group.
Win's comments reflect broader concern that data centre projects sit at the intersection of construction, utilities and technology markets. Market watchers warn that disruption in any of these layers can derail expansion plans. As a result, investors and operators are paying closer attention to end-to-end risk assessments, including geopolitical factors, trade restrictions and climate-related shocks that could affect component availability or logistics.
As operators grapple with grid limits and latency requirements, attention is also turning to where processing takes place. Some organisations are shifting part of their workloads away from centralised cloud regions and into edge locations, branch sites and on-premises environments. The trend is particularly visible in sectors where seconds matter, such as healthcare, manufacturing and critical infrastructure.
Vendors supplying edge and on-premises systems report growing interest in local processing for time-sensitive data. They highlight use cases in which data cannot be sent to distant facilities or public cloud endpoints without affecting service quality. These deployments still rely on central data centres for aggregation and long-term storage, but decision-making increasingly happens close to where data is generated.
Edge advocates say this distribution of workloads can ease pressure on core facilities while improving resilience. They point to advances in compact servers, virtualisation software and storage systems that support smaller sites. Some enterprises now treat their own server rooms and micro data centres as part of a broader hybrid estate spanning cloud, colocation and on-premises assets.
"The rapid development of AI analytics is accelerating revolution across every industry, and data centres are hitting the headlines as fears grow that demand will outpace capacity. But at the edge, latency is already a key concern as data is often so time-sensitive here that the AI engines behind the technology must be deployed locally. In organisations such as health centres and manufacturing plants, there simply isn't time to send the data to a data centre or the cloud for AI processing - decision-making must be instantaneous.

"Enterprises are increasingly keeping their most critical applications and information closer to home, on their own servers, at local edge sites. Implementing solutions like this both eases the load on the data centres and improves the performance of local devices that, thanks to advances in AI, are now delivering more valuable insights than we ever imagined.

"To enable this transition, more organisations are moving toward proven on-premises hyperconverged infrastructure (HCI) due to its resilient and reliable design, with the global market expected to continue to grow from $11.98 billion in 2024 to $61.49 billion by 2032. HCI combines computing, networking and storage resources, streamlining data centre architecture, using virtualisation to reduce server requirements. Modern HCI solutions are designed specifically for smaller sites to run applications and store data securely at the edge, yet can connect to the cloud and data centre as often as needed. As pressure on data centres continues to mount, organisations that embrace on-premises alternatives will be set up for success at the edge," said Mark Christie, Director of Technical Services, StorMagic.
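The market figures in Christie's comments imply a compound annual growth rate that can be checked directly. A quick sketch (our own arithmetic, applying the standard CAGR formula to the quoted figures):

```python
# Implied compound annual growth rate (CAGR) for the HCI market
# figures quoted above: $11.98bn (2024) growing to $61.49bn (2032).
# The formula is standard; treating the growth as uniform is our
# simplifying assumption.
start, end = 11.98, 61.49   # market size, USD billions
years = 2032 - 2024         # 8 years

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 22-23% per year
```

That works out to growth of just over a fifth per year, sustained for eight years.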
Some industry figures argue that the sector needs a more radical rethink, particularly around energy and cooling. They point to emerging designs that combine real-time automation with greater use of renewables and flexible demand management. The aim is to make sites more adaptable as grids absorb higher levels of intermittent generation such as wind and solar.
Vendors supplying industrial software and automation systems see an opportunity in this shift. They promote tools that give operators detailed visibility into power flows, thermal performance and equipment behaviour. The goal is to adjust workloads, cooling profiles and on-site generation quickly as conditions change.
"As global AI usage explodes, the worldwide race to build new data centres is on. Global demand for AI-ready data centre capacity is forecast to rise by 33% annually between 2023 and 2030, and by the end of the decade, AI is expected to drive nearly 70% of total demand. Data centres can't keep up, so it's time to rewrite the rulebook.

"Meeting such soaring demand requires a new approach: data centres that can auto-scale capacity, reconfigure cooling and optimise onsite renewable energy sources - all in real time. Digital technologies will be central to this transformation to gigawatt-scale AI factories. Data, simulation and AI must all be leveraged together to ensure the most optimal use of energy and scarce resources, while automation and instrumentation will help operate and maintain data centres at scale. Crucially, data centres must now be designed for instability: built to absorb volatile demand spikes and adapt to unpredictable energy supplies.

"These innovations will allow data centres to scale faster, perform more reliably, and operate more sustainably. This will help to foster growth without intensifying the sector's carbon footprint. In an environment defined by volatility and acceleration, long-term value will depend on adaptive and resilient infrastructure that is sustainable by design," said Khaled Salah, VP, AVEVA.