The Data Center Frontier Show

Welcome to The Data Center Frontier Show podcast, telling the story of the data center industry and its future. Our podcast is hosted by the editors of Data Center Frontier, who are your guide to the ongoing digital transformation, explaining how next-generation technologies are changing our world, and the critical role the data center industry plays in creating this extraordinary future.

Listen on:

  • Apple Podcasts
  • Podbean App
  • Spotify
  • Amazon Music
  • iHeartRadio
  • PlayerFM
  • Podchaser
  • BoomPlay

Episodes

2 hours ago

As AI workloads continue to scale, data centers are facing a new class of electrical challenges—ones driven not by total energy demand alone, but by how quickly that demand can change. AI training environments, particularly those built around dense GPU clusters, can cause rapid and unpredictable swings in power consumption. These fast load changes place stress on power systems that were originally designed for steadier, more predictable behavior. 
In the podcast, we explore why traditional approaches to power stabilization may not fully address the demands of AI-driven variability. While these approaches can absorb momentary spikes, they may fall short when it comes to sustained smoothing or supporting broader system stability. This becomes even more complex as many data centers are powered by on-site generation before transitioning to utility grid connections later in their lifecycle. 
The conversation highlights how newer energy storage strategies are evolving to meet these demands. Advanced battery-based systems, when paired with more adaptive control strategies, are designed to respond rapidly to load changes while operating effectively across different grid conditions. Rather than reacting only after voltage or frequency disturbances occur, these systems can proactively manage fluctuations at the point of interconnection, helping protect generation assets, improve power quality, and facilitate faster project timelines. 
As AI continues to push infrastructure into unfamiliar territory, the industry will need flexible, high-speed solutions that work across both islanded and grid-connected environments. Technologies designed with this adaptability in mind are quickly becoming a key enabler for the next generation of AI-ready data centers. 
The capabilities described here reflect general technology characteristics and may vary based on system configuration, site conditions, and grid environment. 

2 days ago

On the latest episode of the Data Center Frontier Show podcast, DCF Editor in Chief Matt Vincent speaks with Melissa Kalka, M&A and private equity partner, and Kimberly McGrath, real estate partner at Kirkland & Ellis, about how capital, power, and deal strategy are changing in the AI data center era.
Their core message is clear: capital is still flowing into digital infrastructure, but the market has become far more disciplined. Investors are no longer simply chasing land or growth stories. They are digging deeper into platform quality, delivery track record, contractual structure, and, above all, power certainty.
That last point now sits at the center of nearly every transaction. As AI workloads push development from 20 MW and 48 MW deals toward 100 MW, 500 MW, and even gigawatt-scale campuses, power availability has become the first screen in diligence. A site may have land and entitlements, but without credible access to power, it may struggle to attract customers, financing, or buyers.
The conversation also underscores how AI has changed the asset class itself. Data centers are no longer being evaluated strictly as real estate. They are increasingly underwritten as a hybrid of real estate and infrastructure, with longer hold periods, shared campus systems, and more complex capital stacks.
That dynamic is driving new financing structures, including more private credit activity, more infrastructure-style investment, and growing interest in open-ended and perpetual vehicles for long-term ownership.
Powered land, meanwhile, has emerged as an asset category of its own. In a market where development pipelines remain robust and hyperscalers are pursuing massive capacity expansions, sites with large increments of secured power are drawing intense interest.
Kalka and McGrath also explain that customer contracts now function as a key part of financing infrastructure. Lease and colocation agreements are being negotiated with greater attention to lender expectations, long-term revenue stability, and risk allocation around power delivery and development timing.
For developers and operators, one of the biggest lessons is that structure matters early. Projects need to be organized from the outset in ways that make them financeable, investable, and divisible as platforms mature. Just as important, these deals now require extraordinary coordination across legal, real estate, regulatory, financing, environmental, and community stakeholders.
The episode offers a timely look at a market moving out of its speculative phase and into a more demanding period defined by execution. In the AI era, the winners will not simply be those who raise capital fastest, but those who can align capital, contracts, land, and power into a credible path to delivery.

3 days ago

In today’s mission-critical supply chains, downtime is not an inconvenience—it’s a crisis. Whether supporting manufacturing, fabrication, integration, or construction, warehouse management systems (WMS) have evolved from simple inventory tools into the digital backbone of high-stakes logistics environments.
Today, Jarrett Atkinson, Vice President of Supply Chain for BluePrint Supply Chain, explores how modern WMS platforms are redefining resilience, visibility, and performance in mission-critical construction supply chains where failure is not an option.
We dive into what separates a standard WMS from one engineered for high-availability operations supporting multi-site deployment and specialized handling of large-scale gear. We also discuss critical KPIs, reporting, and visibility—how a WMS unlocks business insights that can improve efficiency, reduce costs, and eliminate project obstacles.
Beyond technology, we also address implementation risk and examine the innovations poised to shape the next five years of mission-critical logistics.
 

Friday Mar 27, 2026

We’re taking a closer look at a topic that’s no longer optional for data‑center leaders: sustainability with measurable accountability. As carbon regulations tighten, especially around Scope 3 emissions, owners and operators are rethinking how they specify and source every component in the power chain. At the same time, supply‑chain pressures, copper constraints, and new state‑level requirements like on‑premises power for large sites are introducing new complexities into design, procurement, and long‑term planning. Joel Wynn, VP of Data Center Sales at Southwire, brings a unique end‑to‑end perspective, spanning mining practices, material traceability, advanced conductor engineering, Environmental Product Declarations, and the real‑world challenges hyperscalers and colos face when trying to reduce embodied carbon.
Hear a conversation about how reduced‑carbon copper, transparent supply chains, and next‑generation power infrastructure can meaningfully move the needle on sustainability and how data‑center developers can prepare for the regulatory, technical, and community‑driven expectations coming next.
Where does power innovation come into play in the context of sustainability? We are already seeing shifts in the industry, including the move to on-premises power. Southwire is focused on bringing innovation to the industry, from the mining companies to the data center, all while identifying opportunities to upgrade existing cable for greater efficiency.

Tuesday Mar 24, 2026

As AI data center campuses scale toward gigawatt capacity, the industry is confronting a new kind of bottleneck. Not just how to generate power, but how to move it efficiently across increasingly complex environments.
In this episode of the Data Center Frontier Show Podcast, MetOx CEO Bud Vos outlines why traditional copper-based power distribution may be approaching its limits, and how high-temperature superconducting (HTS) wire could offer a fundamentally different path forward.
“When you start looking at gigawatt-type campuses, you find three fundamental constraints—the grid interconnect, campus distribution, and delivery inside the data hall,” Vos explains. At each layer, scaling with copper drives exponential increases in materials, infrastructure, and complexity.
HTS technology changes that equation. By delivering roughly 10x the power density of copper, superconducting cables can dramatically reduce the physical footprint of power infrastructure, replacing dozens of conventional cables with just a few, while also cutting material use and simplifying system design.
The technology also reverses a key trend in data center power architecture. Instead of pushing voltage higher to compensate for copper limitations, superconductors enable higher current at lower voltage, potentially simplifying electrical systems across the facility.
Just as importantly, superconductors are effectively lossless. “They don’t generate heat as part of the power delivery infrastructure,” Vos notes, a property that could reshape how operators think about thermal management in high-density AI environments. While HTS systems require cooling with liquid nitrogen, that requirement may align with the industry’s broader shift toward liquid cooling.
Beyond engineering, HTS could also play a role in easing permitting and community opposition by reducing the physical footprint of power infrastructure. Narrower rights-of-way and fewer materials translate into less visible impact—an increasingly important factor as data center development faces growing scrutiny.
Crucially, superconducting systems are not theoretical. They have already been deployed in utility environments, providing a track record of reliability that may help accelerate adoption in the data center sector.
As onsite and behind-the-meter generation become more common, HTS is particularly well-suited to moving large amounts of power across multi-building campuses and into high-density data halls. At the same time, the technology offers a potential alternative to strained supply chains for copper and traditional electrical equipment.
Looking further ahead, superconductivity’s role may extend even deeper, with HTS materials also serving as a foundation for emerging fusion energy systems, hinting at a future where power generation and data center infrastructure are more tightly linked.
For now, Vos sees the industry at the beginning of an adoption cycle. “We’re deploying, testing, and then innovating on top of that,” he says.
As AI infrastructure enters its execution phase, superconductivity may move from a niche technology to a core component of how the next generation of data centers is powered.

Thursday Mar 19, 2026

A look at the major trends shaping the data center and HVAC industries in 2026. Key topics include the growing role of high-voltage DC for improved power quality, the rise of liquid cooling, and how air-cooling technologies continue to play a critical part across the data center ecosystem. 
Industry discussions also touch on innovation momentum coming out of recent events, shifting demand toward high-growth markets, and the increasing importance of localized manufacturing to reduce lead times, navigate tariffs, and strengthen supply chain resilience—especially as AI-driven data center expansion accelerates. 
Themes such as energy efficiency, grid capacity limitations, hybrid cooling approaches, and system level optimization frame a broader question for operators and suppliers alike: Where do you fit within the data center system, and how are you preparing for what comes next? 

Thursday Mar 12, 2026

Subzero Engineering is pleased to announce the acquisition of the Dissolvable Air Barrier (DAB) Panels product line from Cambridge R&D, further expanding Subzero’s portfolio of data center containment solutions and reinforcing its commitment to safety, performance, and turnkey system delivery. 
DAB Panels are a unique overhead containment solution designed to provide effective airflow separation during normal data center operation while dissolving within seconds when exposed to water during sprinkler activation. This dissolvable design helps eliminate falling panel hazards and supports safer fire suppression outcomes—addressing a critical challenge found in traditional rigid overhead containment systems. 
“With this acquisition, we’re strengthening our ability to deliver truly integrated, safety-driven containment solutions,” said Shane Kilfoil, President of Subzero Engineering. “DAB Panels complement our existing containment portfolio and give our customers another proven option to address airflow management and fire safety without compromise.” 
DAB Panels are engineered for both hot aisle and cold aisle containment applications and offer a combination of airflow performance, safety, and installation flexibility. Made from EPA-certified, plant-based cellulose materials, the panels achieve Class A fire and smoke performance, producing low heat and minimal smoke while maintaining visibility for emergency personnel. 
Despite their dissolvable design, DAB Panels remain durable during normal operation—withstanding high static air pressure and maintaining airflow separation where it matters most. Panels can be easily modified in the field to accommodate varying cabinet heights and existing infrastructure, eliminating the need to relocate sprinkler heads and reducing installation time and cost. 
DAB Panels integrate seamlessly across Subzero’s full portfolio of data center containment products, including aisle frames, doors, roofs, and airflow management systems. This unified approach enables Subzero to deliver turnkey containment solutions engineered for performance, safety, and long-term scalability—backed by a single partner and a coordinated system designed to work together. 

Tuesday Mar 10, 2026

In this episode of the Data Center Frontier Show, DCF Editor-in-Chief Matt Vincent speaks with Michael Siteman, President of Prodigious Proclivities and a long-time leader and board member within 7x24 Exchange International, about how data center development is being reshaped by AI, power scarcity, network strategy, and community resistance.
Siteman explains how site selection has evolved from a traditional real estate exercise into a far more complex infrastructure challenge.
“The business used to be a pure real estate play,” Siteman says. “Now it’s a systems engineering problem. It’s power, network topology, the real estate itself, and political risk.”
The conversation explores the growing dominance of power in development strategy, including the rapid rise of behind-the-meter generation as utilities struggle to keep pace with demand. Siteman notes that attitudes toward onsite generation have shifted dramatically in just the past few months.
“Six months ago, people would say, ‘If you don’t have grid interconnection, we’re not interested,’” he says. “In the last 30 days, it’s completely different.”
Vincent and Siteman also discuss the balance between network access and power access, the risks of pre-leasing capacity before buildings are completed, and the growing importance of local politics and government relations in getting projects approved.
The episode closes with a look at the widening gap between traditional hyperscale facilities and AI factories, the question of whether AI infrastructure is heading toward a bubble, and the industry’s urgent workforce shortage.
“Data centers don’t run themselves,” Siteman says. “We simply don’t have enough people to build and operate the infrastructure that’s coming.”
This is a grounded, field-level conversation about what is really driving data center development in the AI era, and what the industry will need to solve next.

Tuesday Mar 03, 2026

The AI infrastructure boom is rapidly reshaping how the data center industry thinks about power. What was once a relatively straightforward utility procurement exercise is evolving into a complex strategy spanning onsite generation, fuel logistics, financing, and system architecture.
That reality framed a recent special edition of The Data Center Frontier Show Podcast, which recast and updated a pivotal DCF Trends Summit 2025 session: From Grid to Onsite Powering: Optimizing Energy Behind the Meter for Data Centers. 
Moderated by Fengrong Li, Senior Managing Director at FTI Consulting, the panel explored how operators are responding as interconnection timelines stretch and AI workloads surge. Li’s framing emphasized a core shift: onsite power is moving from contingency planning to critical-path infrastructure.
From the OEM perspective, David Blank of Siemens Energy noted that behind-the-meter deployments have accelerated sharply over the past year as developers confront multi-year waits for firm utility capacity.
“Everyone would prefer grid power,” Blank said. “But in many cases, reliable access isn’t available for five, ten, even ten-plus years.”
Panelists agreed that AI’s scale and speed are driving a structural rethink. Brian Gitt of Oklo described the moment as a return to industrial roots, with large loads once again building dedicated generation to meet growth timelines.
At the same time, new technical pressures are emerging. AI clusters can produce sharp load swings, forcing developers to deploy fast-response buffering technologies such as batteries, flywheels, and supercapacitors to maintain stability.
Despite differing technology paths—including gas turbines, hydrogen fuel cells, and advanced nuclear—the panel aligned on one common theme: modularity. Phased power blocks increasingly mirror how AI campuses are actually built and financed.
The discussion also highlighted the growing importance of contract structures. Long-term offtake commitments, capacity reservations, and credit support are increasingly required to unlock equipment queues and fuel supply.
Other panelists included Marty Trivette of AlphaStruxure and Yuval Bachar of ECL. The event was hosted by Data Center Frontier’s Matt Vincent.
The takeaway was clear: in the AI era, energy strategy has moved to the critical path—and for many operators, that path now runs behind the meter.

Tuesday Feb 24, 2026

The data center industry is racing into the AI era with bigger campuses, tighter timelines, and unprecedented infrastructure complexity. But in this episode of The Data Center Frontier Show Podcast, 7x24 Exchange International founding member and Mission Critical Global Alliance (MCGA) board member Dennis Cronin argues the industry’s biggest constraint may be the one it talks about least: people.
Cronin’s message is direct: the “talent cliff” isn’t coming; it’s already here. Based on recent research into open roles, he estimates 467,000 to 498,000 openings in core data center positions (facilities and ops leadership, electrical, generator/UPS, HVAC, controls), plus another ~514,000 emerging roles tied to AI infrastructure, sustainability, and cyber-physical security—bringing the total to roughly one million jobs the industry needs to fill.
A major driver is what Cronin calls the “five-year experience trap”: employers require five years of experience even for entry-level roles, but newcomers can’t get experience without being hired. The result is widespread talent poaching, with workers jumping from site to site for 10–20% raises while the overall labor pool never expands.
Cronin also highlights a frequently missed reality in public policy debates: the job multiplier effect. While data centers may have lean direct staffing, they support a much larger ecosystem of contractors, service providers, and manufacturers, from generator and UPS technicians to security integrators and the electrical/mechanical supply chain, many of whom are already scrambling to hire.
On training, Cronin explains why company-run programs and commercial training aren’t enough on their own. Internal academies often produce siloed specialists trained for a single operator’s environment, while commercial courses, often ~$1,000 per day per person, are typically designed to upskill people already in the industry, not onboard new entrants.
MCGA’s strategy focuses on community colleges as the most scalable on-ramp: affordable programs, scholarships, and hands-on labs that can produce strong technicians in two-year degrees. Cronin cites programs at Cleveland Community College (NC), Northern Virginia Community College, and Southside Community College (VA), noting that dozens of schools are exploring data center curricula but funding remains a barrier.
Cronin’s proposed solution is a true workforce ecosystem: outreach, standardized curriculum, certification labs, structured apprenticeships, and employer commitments. He also advocates replacing the “five years” requirement with an entry-level certification that proves foundational knowledge: industry acronyms and terminology, reading one-lines, SOPs/MOPs, and, crucially, safety and situational awareness in electrical and mechanical environments.
Finally, Cronin tackles the money question. With $60B in data centers announced this year, he says the industry needs a major, shared investment across operators, vendors, contractors, and manufacturers to fund training and scholarships at scale. The stakes are operational: in an era of gigawatt AI facilities and shrinking margins for error, workforce readiness is now a mission-critical issue.

Copyright Data Center Frontier LLC © 2019
