Abstract

The rapid expansion of artificial intelligence (AI) and machine learning is driving unprecedented electricity demand from data centers. It is predicted that by 2030, 90% of AI workloads will be inference-based, requiring interconnection of multiple low-latency edge data centers (<20 MW) sited closer to end users, often on already constrained distribution feeders. Although individually small, these loads can aggregate into substantial demand per feeder, straining infrastructure, creating multi-year interconnection delays, and driving up customer costs. This paper proposes a data center-focused grid-integration framework that combines feeder hosting capacity analysis with building energy efficiency, building load flexibility, and waste heat reuse to expand effective feeder and substation headroom. Such approaches can reduce interconnection delays, lower costs for ratepayers, and accelerate deployment of AI-ready infrastructure.
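As a rough illustration of the headroom accounting the abstract implies, a minimal sketch follows. The function name, the simple additive model, and all numeric values are illustrative assumptions, not methods or data from the paper.

```python
# Minimal sketch of "effective headroom" accounting implied by the abstract.
# The additive model and all values are illustrative assumptions, not the
# paper's method or data.

def effective_headroom_mw(feeder_capacity_mw: float,
                          existing_peak_mw: float,
                          efficiency_savings_mw: float,
                          flexible_load_mw: float) -> float:
    """Headroom available for new edge data center interconnections (MW).

    Nominal headroom is capacity minus existing peak; building energy
    efficiency frees capacity directly, and flexible load that can be
    curtailed at feeder peak is credited as additional headroom.
    """
    nominal = feeder_capacity_mw - existing_peak_mw
    return nominal + efficiency_savings_mw + flexible_load_mw

# Example: a 30 MW feeder peaking at 24 MW nominally hosts 6 MW of new load,
# but 1.5 MW of efficiency savings plus 2.5 MW of peak-coincident flexibility
# raises effective headroom to 10 MW without feeder upgrades.
if __name__ == "__main__":
    print(effective_headroom_mw(30.0, 24.0, 1.5, 2.5))  # -> 10.0
```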
Original language: American English
Number of pages: 25
State: Published - 2025

NREL Publication Number

  • NREL/TP-5500-96700

Keywords

  • affordability
  • artificial intelligence
  • buildings
  • data center
  • demand
  • energy efficiency
  • industrialized construction
  • machine learning
  • reliability
  • waste heat
