Nitin Bajpai, Consultant, and Venugopal Mothkoor, Climate and Energy Modelling Specialist, NITI Aayog
India is quietly becoming one of the world’s key data centre markets. Every time an Indian user streams a movie, uploads data to the cloud or asks an artificial intelligence (AI) model a question, the request travels through a network of high-density data centres that consume vast amounts of electricity. India currently operates about 1.5 GW of such capacity, while multiple industry forecasts now project 4.5 GW or more by 2030, backed by roughly $20 billion-$25 billion of investment. Globally, data centre power capacity was around 122 GW in 2024 and is expected to reach roughly 220 GW by 2030, as AI and cloud drive another wave of build-out. The International Energy Agency estimates that electricity use by data centres will more than double from 460 TWh in 2024 to over 1,000 TWh by 2030, or close to 3 per cent of global power demand. The rise of AI intensifies this curve. At the micro level, the paradox is tangible: a typical ChatGPT query consumes 0.3-0.4 watt-hours (Wh), equivalent to running a 10 watt LED bulb for a few minutes and far exceeding a typical Google search, while complex, long-input queries can reach 20-40 Wh. Individually, this is tiny; at the scale of billions of AI calls per day, however, it becomes a new class of critical load.
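The back-of-envelope arithmetic behind these per-query figures can be sketched as follows; the 0.3-0.4 Wh value comes from the figures above, while the 2 billion queries per day is an assumed, illustrative number chosen only to show how small loads aggregate:

```python
# Back-of-envelope arithmetic for the per-query energy figures above.
# The 0.3-0.4 Wh range is from the article; the queries-per-day figure
# is an illustrative assumption, not a reported statistic.

WH_TYPICAL_QUERY = 0.35      # Wh, midpoint of the 0.3-0.4 Wh range
LED_BULB_W = 10.0            # a 10 watt LED bulb

# How long would a 10 W bulb run on one typical query's energy?
led_minutes = WH_TYPICAL_QUERY / LED_BULB_W * 60
print(f"One typical query ~= a 10 W LED for {led_minutes:.1f} minutes")

# Aggregate: billions of calls per day add up to utility-scale load.
queries_per_day = 2e9                                  # assumed, for scale
gwh_per_year = queries_per_day * WH_TYPICAL_QUERY * 365 / 1e9
print(f"At {queries_per_day:.0e} queries/day: ~{gwh_per_year:.0f} GWh/year")
```

Even at these conservative assumptions, the aggregate lands in the hundreds of GWh per year, which is why the text above treats AI calls as a new class of critical load.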
The energy equation and technological response
At the system level, data centre electricity demand is governed by a simple relationship:
Data centre energy ≈ IT load × utilisation × PUE
Here, IT load is the server power draw, utilisation is how much of that capacity is actually used, and power usage effectiveness (PUE) is total facility power divided by IT power. Nuances matter: AI workloads impose much higher power densities than legacy IT. A traditional rack has historically drawn around 5-10 kW, while AI-optimised racks in new graphics processing unit (GPU) halls routinely draw 30-100+ kW, with some designs already heading towards 200 kW per rack and beyond.
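The relationship above can be expressed as a small function; all inputs in the example below are illustrative values, not measured figures for any facility:

```python
# A minimal sketch of the relationship described above:
# facility energy ~= IT load x utilisation x PUE, over some period.

def facility_energy_mwh(it_capacity_mw: float, utilisation: float,
                        pue: float, hours: float = 8760.0) -> float:
    """Annual facility electricity (MWh) from installed IT capacity.

    it_capacity_mw: installed server (IT) power draw, in MW
    utilisation:    fraction of that capacity actually used (0-1)
    pue:            total facility power / IT power
    """
    return it_capacity_mw * utilisation * pue * hours

# Example: a hypothetical 10 MW IT hall at 40% utilisation and PUE 1.6
energy = facility_energy_mwh(10.0, 0.40, 1.6)
print(f"{energy:,.0f} MWh/year")   # 56,064 MWh/year
```

The same three levers recur throughout the rest of this article: density raises IT load, consolidation raises utilisation, and cooling and electrical design lower PUE.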
For India, this translates into a steep demand curve. S&P Global estimates that data centre electricity use will rise from about 13 TWh in 2024 to up to 57 TWh by 2030, making India the second largest data centre power consumer in Asia-Pacific after China and accounting for roughly 2.6-3 per cent of national generation. But higher demand is not destiny – it is a design variable.
The technical response has already been demonstrated at scale. In 2016, Google’s DeepMind team deployed an AI control system to optimise cooling in its data centres, delivering around 40 per cent lower cooling energy and an overall 15 per cent improvement in PUE, all on existing hardware. Globally, the average PUE remains at around 1.55-1.6, but best-in-class hyperscale facilities regularly achieve about 1.2 or better. Indian studies and operator statements suggest many domestic facilities still operate around 1.5-1.6 PUE, though newer builds are starting to push lower.
Combining liquid cooling, better electrical design and AI-driven workload/thermal optimisation to move from PUE 1.6 to 1.2, while raising average server utilisation from roughly 40 per cent to 70 per cent, would, by simple application of the energy equation, deliver 70-80 per cent more compute for only about 30 per cent more electricity. (This is an illustrative scenario, not a precise forecast.) In other words, efficiency is India’s biggest multiplier: it buys time for renewable capacity to catch up while keeping AI infrastructure economically viable.
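The scenario above follows directly from the energy equation, since compute delivered scales with utilisation while electricity scales with utilisation times PUE. A quick check of the arithmetic, using the same illustrative figures:

```python
# Reproducing the illustrative efficiency scenario: moving from
# PUE 1.6 / 40% utilisation to PUE 1.2 / 70% utilisation on the
# same installed IT capacity. Figures are illustrative, per the text.

base_util, base_pue = 0.40, 1.6
new_util, new_pue = 0.70, 1.2

# Useful compute scales with utilisation of the same installed capacity.
compute_gain = new_util / base_util - 1

# Facility electricity scales with utilisation x PUE.
energy_gain = (new_util * new_pue) / (base_util * base_pue) - 1

print(f"Compute: +{compute_gain:.0%}, electricity: +{energy_gain:.1%}")
```

This lands at roughly 75 per cent more compute for about 31 per cent more electricity, consistent with the 70-80 per cent and 30 per cent figures quoted above.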
Possible options for India
Hyperscalers vs “fit-for-purpose” infrastructure
By 2030, global power demand from data centres is expected to be heavily concentrated in hyperscale and wholesale facilities, which may account for around 70 per cent of capacity. India is following a similar pattern, with large players (domestic and global) driving most new capacity. Hyperscalers typically deliver better PUE and renewable procurement at scale, but they are not the whole story. A complementary ecosystem of fit-for-purpose regional and edge data centres built to common efficiency standards and tailored to specific sectors (government, health, manufacturing) can spread economic benefits while avoiding over-concentration of load in a handful of urban nodes.
Sweating existing capacity before building new
India already has around 1.5 GW of live data centre capacity, with several additional gigawatts in the committed pipeline, meaning that a large part of the 2030 landscape is effectively locked in. Rather than only adding new megawatts, there is a strong case for first “sweating” existing capacity by consolidating fragmented enterprise server rooms into more efficient facilities and utilising what is already built.
Indian efficiency guidelines and earlier work on Indian data centres flag low server utilisation and idle servers as major sources of avoidable energy use, recommending consolidation and virtualisation as key remedies. Global studies similarly show typical large data centres running at only 20-30 per cent utilisation, versus 70-80 per cent in best-practice environments. It is, therefore, reasonable to assume India has similar headroom, making high-efficiency megawatts – those that deliver more compute per unit of electricity – far more valuable than simply adding raw megawatts to the grid.
Energy efficiency and renewable-plus-storage
On the supply side, India is already seeing data centres act as anchor loads for renewable projects. Airtel’s Nxtra has signed captive solar-and-wind deals totalling about 140 GWh per year, moving some sites towards 70 per cent renewable share. Operators like Yotta publicly target 50-80 per cent renewable penetration over the next few years.
Pairing these PPAs with on-site or co-located battery storage lets data centres shift part of their AI compute to periods of high solar or wind output, reducing stress on the grid and cutting emissions. High voltage corridors, such as the new high voltage direct current and 765 kV links out of Gujarat’s renewable hubs, are being built to move this clean power to load centres. Strategically siting AI data centre clusters near such nodes will further reduce losses and curtailment.
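Shifting flexible compute towards high-renewable hours can be sketched as a simple scheduling problem. The hourly solar profile and the greedy hour-picking rule below are invented for illustration; a real scheduler would work from generation forecasts, storage state and job deadlines:

```python
# A hedged sketch of "grid-aware" scheduling: defer flexible AI jobs
# (e.g. batch training or re-indexing) to the sunniest hours of the day.
# The solar profile below is an assumed, normalised shape (0 = none,
# 1 = peak), not measured data from any grid.

solar = [0, 0, 0, 0, 0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0, 1.0,
         1.0, 0.9, 0.8, 0.6, 0.4, 0.2, 0, 0, 0, 0, 0, 0]

def schedule_deferrable(n_job_hours: int) -> list[int]:
    """Pick the n hours of the day with the highest solar output."""
    ranked = sorted(range(24), key=lambda h: solar[h], reverse=True)
    return sorted(ranked[:n_job_hours])

print(schedule_deferrable(6))   # -> [9, 10, 11, 12, 13, 14]
```

Pairing such scheduling with batteries extends the usable window beyond daylight hours, which is what allows the PPAs described above to cover a growing share of round-the-clock load.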
Taken together, this “systems view” – efficient IT, smart operations, renewables and storage, plus grid-aware siting – offers India a way to scale AI infrastructure without blowing its carbon or capacity budgets.
Policy imperative
As most of India’s 2030 data centre capacity is still in the pipeline, regulation can shape outcomes rather than chase them.
- Embed data centres in energy conservation and standards frameworks: Data centres can be explicitly brought under India’s evolving Energy Conservation and Sustainable Buildings Codes (ECSBC-type frameworks) and allied guidelines, with mandatory PUE and carbon usage effectiveness disclosure, periodic audits and minimum performance thresholds that tighten over time (for example, new facilities starting below PUE 1.5 and trending towards 1.3-1.2 by 2030, aligned with what the Ministry of Electronics and Information Technology has already signalled in the IndiaAI GPU tender).
- Performance benchmarking and clean energy targets: India’s sustainability push for data centres has already experimented with PUE < 1.35 as a benchmark in public AI tenders, though industry pushback led to partial dilution. A refined approach could combine:
  - Benchmarking and disclosure (via a Carbon Credit Trading Scheme-like registry covering PUE, renewable share and outage performance), and
  - Progressive clean-energy floors – for example, 50 per cent renewable share from day one, rising gradually for any facility availing central or state incentives.
As India accelerates its digital ambitions, the path forward should prioritise efficiency over expansion and system design over short-term fixes. Strong regulatory guardrails, predictable renewable procurement pathways and investment in modernised grid infrastructure can align data centre growth with national energy objectives.
(The views expressed in this article are the personal views of the authors.)
