The Data Center Frontier Show
Welcome to The Data Center Frontier Show podcast, telling the story of the data center industry and its future. Our podcast is hosted by the editors of Data Center Frontier, who are your guide to the ongoing digital transformation, explaining how next-generation technologies are changing our world, and the critical role the data center industry plays in creating this extraordinary future.
Episodes

4 days ago
The data center industry is changing faster than ever. Artificial intelligence, cloud expansion, and high-density workloads are driving record-breaking energy and cooling demands. But behind every megawatt of compute capacity lies an equally critical resource: water.
As data halls evolve from static infrastructure to dynamic, service-driven ecosystems, cooling has emerged as one of the most powerful levers for efficiency, reliability, and sustainability. In this episode, Ecolab explores how Cooling as a Service (CaaS) is transforming data center operations, shifting cooling from a capital expense to a measurable, performance-based service that drives uptime, reliability, and environmental stewardship.
Tune in to hear experts discuss how data centers can future-proof their operations through a smarter, service-oriented approach to thermal management. From proactive analytics to commissioning best practices, this conversation dives into the technologies, partnerships, and business models redefining how cooling is managed and measured across the world’s most advanced digital infrastructure.

5 days ago
Applied Digital CEO Wes Cummins joins Data Center Frontier Editor-in-Chief Matt Vincent to break down what it takes to build AI data centers that can keep pace with Nvidia-era infrastructure demands and actually deliver on schedule.
Cummins explains Applied Digital’s “maximum flexibility” design philosophy, including higher-voltage delivery, mixed density options, and even more floor space to future-proof facilities as power and cooling requirements evolve.
The conversation digs into the execution reality behind the AI boom: long-lead power gear, utility timelines, and the tight MEP supply chain that will cause many projects to slip in 2026–2027.
Cummins outlines how Applied Digital locked in key components 18–24 months ago and scaled from a single 100 MW “field of dreams” building to roughly 700 MW under construction, using fourth-generation designs and extensive off-site MEP assembly—“LEGO brick” skids—to boost speed and reduce on-site labor risk.
On cooling, Cummins pulls back the curtain on operating direct-to-chip liquid cooling at scale in Ellendale, North Dakota, including the extra redundancy layers—pumps, chillers, dual loops, and thermal storage—required to protect GPUs and hit five-nines reliability.
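As a rough sketch of what "five nines" implies, and why redundant pumps and chillers are the standard way to get there, consider the availability arithmetic below (a generic illustration; the component availability figures are assumptions, not Applied Digital's numbers):

```python
# Back-of-the-envelope "five nines" math. The component availability
# figures are illustrative assumptions, not Applied Digital's numbers.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(availability: float) -> float:
    """Expected downtime per year at a given availability."""
    return (1 - availability) * MINUTES_PER_YEAR

def parallel(*avail: float) -> float:
    """Availability of redundant paths: the system is down only
    when every path is down at the same time."""
    unavail = 1.0
    for a in avail:
        unavail *= (1 - a)
    return 1 - unavail

print(f"Five nines budget: ~{downtime_minutes(0.99999):.1f} min/yr")
print(f"Single pump at 99.9%: {downtime_minutes(0.999):.0f} min/yr")
two = parallel(0.999, 0.999)
print(f"Two pumps in parallel: {two:.6f} (~{downtime_minutes(two):.1f} min/yr)")
```

Dual loops and thermal storage follow the same logic: each independent layer multiplies down the odds of a simultaneous failure.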
He also discusses aligning infrastructure with Nvidia’s roadmap (from 415V toward 800V and eventually DC), the customer demand surge pushing capacity planning into 2028, and partnerships with ABB and Corintis aimed at next-gen power distribution and liquid cooling performance.
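The physics behind that voltage roadmap is straightforward: at fixed power, current falls as voltage rises (I = P/V), and resistive losses fall with the square of the current. A simplified illustration, treating the feed as a single DC path with an arbitrary assumed resistance (real three-phase distribution is more involved):

```python
# Why 415 V -> 800 V matters: at fixed power, I = P / V, and conductor
# losses scale as I^2 * R. Single-path DC simplification; the resistance
# value is an arbitrary assumption.

P_WATTS = 1_000_000   # 1 MW of rack load
R_OHMS = 0.001        # assumed distribution-path resistance

for volts in (415, 800):
    amps = P_WATTS / volts
    loss_kw = amps ** 2 * R_OHMS / 1000
    print(f"{volts} V: {amps:,.0f} A, I^2R loss ~{loss_kw:.1f} kW")
```

Nearly halving the current cuts conduction losses by roughly (800/415)^2, a factor of about 3.7, which is why the industry keeps pushing distribution voltage upward.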

Thursday Jan 22, 2026
In this episode of the Data Center Frontier Show, Matt Vincent is joined by Liam Weld, Head of Data Centers for Meter, to discuss why connectivity is so often the forgotten piece of data center planning.

Tuesday Jan 20, 2026
AI data centers are no longer just buildings full of racks. They are tightly coupled systems where power, cooling, IT, and operations all depend on each other, and where bad assumptions get expensive fast.
On the latest episode of The Data Center Frontier Show, Editor-in-Chief Matt Vincent talks with Sherman Ikemoto of Cadence about what it now takes to design an “AI factory” that actually works.
Ikemoto explains that data center design has always been fragmented. Servers, cooling, and power are designed by different suppliers, and only at the end does the operator try to integrate everything into one system. That final integration phase has long relied on basic tools and rules of thumb, which is risky in today’s GPU-dense world.
Cadence is addressing this with what it calls “DC elements”: digitally validated building blocks that represent real systems, such as NVIDIA’s DGX SuperPOD with GB200 GPUs. These are not just drawings; they model how systems really behave in terms of power, heat, airflow, and liquid cooling. Operators can assemble these elements in a digital twin and see how an AI factory will actually perform before it is built.
A key shift is designing directly to service-level agreements. Traditional uncertainty forced engineers to add large safety margins, driving up cost and wasting power. With more accurate simulation, designers can shrink those margins while still hitting uptime and performance targets, critical as rack densities move from 10–20 kW to 50–100 kW and beyond.
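To make the margin point concrete, here is a toy sizing calculation (my illustration of the general idea, not Cadence's methodology; all figures are assumptions):

```python
# Toy example: capacity consumed by a design safety margin.
# Illustrative numbers only; not Cadence's methodology.

def provisioned_mw(racks: int, kw_per_rack: float, margin: float) -> float:
    """Facility power to provision, with a fractional safety margin."""
    return racks * kw_per_rack * (1 + margin) / 1000

RACKS = 200
for density in (15, 50, 100):  # kW per rack
    loose = provisioned_mw(RACKS, density, 0.30)  # rule-of-thumb margin
    tight = provisioned_mw(RACKS, density, 0.10)  # simulation-backed margin
    print(f"{density:>3} kW/rack: {loose:.1f} MW vs {tight:.1f} MW "
          f"-> {loose - tight:.1f} MW freed")
```

At legacy densities the difference is noise; at 100 kW per rack, the same 20-point swing in margin strands several megawatts.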
Cadence validates its digital elements using a star system. The highest level, five stars, requires deep validation and supplier sign-off. The GB200 DGX SuperPOD model reached that level through close collaboration with NVIDIA.
Ikemoto says the biggest bottleneck in AI data center buildouts is not just utilities or equipment; it is knowledge. The industry is moving too fast for old design habits. Physical prototyping is slow and expensive, so virtual prototyping through simulation is becoming essential, much like in aerospace and automotive design.
Cadence’s Reality Digital Twin platform uses a custom CFD engine built specifically for data centers, capable of modeling both air and liquid cooling and how they interact. It supports “extreme co-design,” where power, cooling, IT layout, and operations are designed together rather than in silos. Integration with NVIDIA Omniverse is aimed at letting multiple design tools share data and catch conflicts early.
Digital twins also extend beyond commissioning. Many operators now use them in live operations, connected to monitoring systems. They test upgrades, maintenance, and layout changes in the twin before touching the real facility. Over time, the digital twin becomes the operating platform for the data center.
Running real AI and machine-learning workloads through these models reveals surprises. Some applications create short, sharp power spikes in specific areas. To be safe, facilities often over-provision power by 20–30%, leaving valuable capacity unused most of the time. By linking application behavior to hardware and facility power systems, simulation can reduce that waste, crucial in an era where power is the main bottleneck.
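A hypothetical trace-based sketch of that idea (synthetic numbers; Cadence's workload models are of course far more detailed):

```python
# Sketch: how sizing for worst case plus a blanket buffer strands capacity
# under a spiky AI load. Synthetic trace; purely illustrative.
import random

random.seed(42)
BASE_MW = 80.0

# One day of 1-minute samples: steady base load with rare, sharp spikes.
trace = [
    BASE_MW + (random.uniform(15, 25) if random.random() < 0.02
               else random.uniform(-5, 5))
    for _ in range(24 * 60)
]

mean, peak = sum(trace) / len(trace), max(trace)
provisioned = peak * 1.2  # worst case plus a 20% safety buffer

print(f"Average draw: {mean:.1f} MW")
print(f"Peak draw:    {peak:.1f} MW")
print(f"Provisioned:  {provisioned:.1f} MW "
      f"({provisioned - mean:.1f} MW idle on average)")
```

Simulation that ties spike behavior to specific applications lets designers provision against the real distribution of draw rather than a blanket buffer.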
The episode also looks at Cadence’s new billion-cycle power analysis tools, which allow massive chip designs to be profiled with near-real accuracy, feeding better system- and facility-level models.
Cadence and NVIDIA have worked together for decades at the chip level. Now that collaboration has expanded to servers, racks, and entire AI factories. As Ikemoto puts it, the data center is the ultimate system—where everything finally comes together—and it now needs to be designed with the same rigor as the silicon inside it.

Thursday Jan 15, 2026
AI is reshaping the data center industry faster than any prior wave of demand. Power needs are rising, communities are paying closer attention, and grid timelines are stretching. On the latest episode of The Data Center Frontier Show, Page Haun of Cologix explains what sustainability really looks like in the AI era, and why it has become a core design requirement, not a side initiative.
Haun describes today’s moment as a “perfect storm,” where AI-driven growth meets grid constraints, community scrutiny, and regulatory pressure. The industry is responding through closer collaboration among operators, utilities, and governments, sharing long-term load forecasts and infrastructure plans. But one challenge remains: communication. Data centers still struggle to explain their essential role in the digital economy, from healthcare and education to entertainment and AI services.
Cologix’s Montreal 8 facility, which recently achieved LEED Gold certification, shows how sustainable design is becoming standard practice. The project focused on energy efficiency, water conservation, responsible materials, and reduced waste, lowering both environmental impact and operating costs. Those lessons now shape how Cologix approaches future builds.
High-density AI changes everything inside the building. Liquid cooling is becoming central because it delivers tighter thermal control with better efficiency, but flexibility is the real priority. Facilities must support multiple cooling approaches so they don’t become obsolete as hardware evolves. Water stewardship is just as critical. Cologix uses closed-loop systems that dramatically reduce consumption, achieving an average WUE of 0.203, far below the industry norm.
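For reference, The Green Grid defines WUE as annual site water consumption divided by IT equipment energy, in liters per kilowatt-hour. A quick sketch of what 0.203 implies for a hypothetical facility:

```python
# WUE = annual site water use (liters) / IT equipment energy (kWh),
# per The Green Grid. The 10 MW facility below is hypothetical.

IT_ENERGY_KWH = 10_000 * 24 * 365  # 10 MW IT load, year-round: ~87.6M kWh

for label, wue in (("Cologix reported", 0.203),
                   ("Often-cited industry average", 1.8)):
    liters = wue * IT_ENERGY_KWH
    print(f"{label} (WUE {wue}): ~{liters / 1e6:.0f} million liters/year")
```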
Sustainability also starts with where you build. In Canada, Cologix leverages hydropower in Montreal and deep lake water cooling in Toronto. In California, natural air cooling cuts energy use. Where geography doesn’t help, partnerships do. In Ohio, Cologix is deploying onsite fuel cells to operate while new transmission lines are built, covering the full cost so other utility customers aren’t burdened.
Community relationships now shape whether projects move forward. Cologix treats communities as long-term partners, not transactions, by holding town meetings, working with local leaders, and supporting programs like STEM education, food drives, and disaster relief.
Transparency ties it all together. In its 2024 ESG report, Cologix reported 65% carbon-free energy use, strong PUE and WUE performance, and expanded environmental certifications. As AI scales, openness about impact is becoming a competitive advantage.
Haun closed with three non-negotiables for AI-era data centers: flexible power and cooling design, holistic resource management, and a real plan for renewable energy, backed by strong community engagement. In the age of AI, sustainability isn’t a differentiator anymore. It’s the baseline.

Wednesday Jan 07, 2026
In this episode of the Data Center Frontier Show, Matt Vincent, Editor-in-Chief of Data Center Frontier, talks to Axel Bokiba, General Manager of Data Center Cooling for MOOG, about what it takes to deliver liquid cooling reliably at hyperscale.

Tuesday Jan 06, 2026
In this episode of The Data Center Frontier Show, DCF Editor in Chief Matt Vincent speaks with Kevin Ooley, CFO of DataBank, about how the operator is structuring capital to support disciplined growth amid accelerating AI and enterprise demand.
Ooley explains the rationale behind DataBank’s expansion of its development credit facility from $725 million to $1.6 billion, describing it as a strong signal of lender confidence in data centers as long-duration, mission-critical real estate assets.
Central to that strategy is DataBank’s “Devco facility,” a pooled, revolving financing vehicle designed to support multiple projects at different stages of development, from land and site work through construction, leasing, and commissioning.
The conversation explores how DataBank translates capital into concrete expansion across priority U.S. markets, including Northern Virginia, Dallas, and Atlanta, with nearly 20 projects underway through 2025 and 2026. Ooley details how recent deployments, including fully pre-leased capacity, feed a development pipeline supported by both debt and roughly $2 billion in equity raised in late 2024.
Vincent and Ooley also dig into how DataBank balances rapid growth with prudent leverage, managing interest-rate volatility through hedging and refinancing stabilized assets into fixed-rate securitizations.
In the AI era, Ooley emphasizes DataBank’s focus on “NFL cities,” serving enterprise and hyperscale customers that need proximity, reliability, and scale; DataBank delivers the power, buildings, and uptime, while customers source their own GPUs.
The episode closes with a look at DataBank’s long-term sponsorship by DigitalBridge, its deep banking relationships, and the market signals—pricing, absorption, and customer demand—that will ultimately dictate the pace of growth.

Monday Dec 29, 2025
DCF Trends Summit 2025 Session Recap
As the data center industry accelerates into an AI-driven expansion cycle, the fundamentals of site selection and investment are being rewritten. In this session from the Data Center Frontier Trends Summit 2025, Ed Socia of datacenterHawk moderated a discussion with Denitza Arguirova of Provident Data Centers, Karen Petersburg of PowerHouse Data Centers, Brian Winterhalter of DLA Piper, Phill Lawson-Shanks of Aligned Data Centers, and Fred Bayles of Cologix on how power scarcity, entitlement complexity, and community scrutiny are reshaping where—and how—data centers get built.
A central theme of the conversation was that power, not land, now drives site selection. Panelists described how traditional assumptions around transmission timelines and flat electricity pricing no longer apply, pushing developers toward Tier 2 and Tier 3 markets, power-first strategies, and closer partnerships with utilities. On-site generation, particularly natural gas, was discussed as a short-term bridge rather than a permanent substitute for grid interconnection.
The group also explored how entitlement processes in mature markets have become more demanding. Economic development benefits alone are no longer sufficient; jurisdictions increasingly expect higher-quality design, sensitivity to surrounding communities, and tangible off-site investments. Panelists emphasized that credibility—earned through experience, transparency, and demonstrated follow-through—has become essential to securing approvals.
Sustainability and ESG considerations remain critical, but the discussion took a pragmatic view of scale. Meeting projected data center demand will require a mix of energy sources, with renewables complemented by transitional solutions and evolving PPA structures. Community engagement was highlighted as equally important, extending beyond environmental metrics to include workforce development, education, and long-term social investment.
Artificial intelligence added another layer of complexity. While large AI training workloads can operate in remote locations, monetized AI applications increasingly demand proximity to users. Rapid hardware cycles, megawatt-scale racks, and liquid-cooling requirements are driving more modular, adaptable designs—often within existing data center portfolios.
The session closed with a look at regional opportunity and investor expectations, with markets such as Pennsylvania, Alabama, Ohio, and Oklahoma cited for their utility relationships and development readiness. The overarching conclusion was clear: the traditional data center blueprint still matters—but power strategy, flexibility, and authentic community integration now define success.

Friday Dec 19, 2025
As the data center industry enters the AI era in earnest, incremental upgrades are no longer enough. That was the central message of the Data Center Frontier Trends Summit 2025 session “AI Is the New Normal: Building the AI Factory for Power, Profit, and Scale,” where operators and infrastructure leaders made the case that AI is no longer a specialty workload; it is redefining the data center itself.
Panelists described the AI factory as a new infrastructure archetype: purpose-built, power-intensive, liquid-cooled, and designed for constant change. Rack densities that once hovered in the low teens have now surged past 50 kilowatts and, in some cases, toward megawatt-scale configurations. Facilities designed for yesterday’s assumptions simply cannot keep up.
Ken Patchett of Lambda framed AI factories as inherently multi-density environments, capable of supporting everything from traditional enterprise racks to extreme GPU deployments within the same campus. These facilities are not replacements for conventional data centers, he noted, but essential additions, and they must be designed for rapid iteration as chip architectures evolve every few months.
Wes Cummins of Applied Digital extended the conversation to campus scale and geography. AI demand is pushing developers toward tertiary markets where power is abundant but historically underutilized. Training and inference workloads now require hundreds of megawatts at single sites, delivered in timelines that have shrunk from years to little more than a year. Cost efficiency, ultra-low PUE, and flexible shells are becoming decisive competitive advantages.
Liquid cooling emerged as a foundational requirement rather than an optimization. Patrick Pedroso of Equus Compute Solutions compared the shift to the automotive industry’s move away from air-cooled engines. From rear-door heat exchangers to direct-to-chip and immersion systems, cooling strategies must now accommodate fluctuating AI workloads while enabling energy recovery—even at the edge.
For Kenneth Moreano of Scott Data Center, the AI factory is as much a service model as a physical asset. By abstracting infrastructure complexity and controlling the full stack in-house, his company enables enterprise customers to move from AI experimentation to production at scale, without managing the underlying technical detail.
Across the discussion, panelists agreed that the industry’s traditional design and financing playbook is obsolete. AI infrastructure cannot be treated as a 25-year depreciable asset when hardware cycles move in months. Instead, data centers must be built as adaptable, elemental systems: capable of evolving as power, cooling, and compute requirements continue to shift.
The session concluded with one obvious takeaway: AI is not a future state to prepare for. It is already shaping how data centers are built, where they are located, and how they generate value. The AI factory is no longer theoretical—and the industry is racing to build it fast enough.

Wednesday Dec 17, 2025
As AI workloads push data center infrastructure in both centralized and distributed directions, the industry is rethinking where compute lives, how data moves, and who controls the networks in between. This episode captures highlights from The Distributed Data Frontier: Edge, Interconnection, and the Future of Digital Infrastructure, a panel discussion from the 2025 Data Center Frontier Trends Summit.
Moderated by Scott Bergs of Dark Fiber and Infrastructure, the panel brought together leaders from DartPoints, 1623 Farnam, Duos Edge AI, ValorC3 Data Centers, and 365 Data Centers to examine how edge facilities, interconnection hubs, and regional data centers are adapting to rising power densities, AI inference workloads, and mounting connectivity constraints.
Panelists discussed the rapid shift from legacy 4–6 kW rack designs to environments supporting 20–60 kW and beyond, while noting that many AI inference applications can be deployed effectively at moderate densities when paired with the right connectivity. Hospitals, regional enterprises, and public-sector use cases are emerging as key drivers of distributed AI infrastructure, particularly in Tier 3 and Tier 4 markets.
The conversation also highlighted connectivity as a defining bottleneck. Permitting delays, middle-mile fiber constraints, and the need for early carrier engagement are increasingly shaping site selection and time-to-market outcomes. As data centers evolve into network-centric platforms, operators are balancing neutrality, fiber ownership, and long-term upgradability to ensure today’s builds remain relevant in a rapidly changing AI landscape.