The Data Center Frontier Show

Data Center Frontier’s editors are your guide to how next-generation technologies are changing our world, and the critical role the data center industry plays in creating our extraordinary future.

Listen on:

  • Podbean App
  • Spotify
  • Amazon Music
  • iHeartRadio
  • PlayerFM
  • Podchaser
  • BoomPlay

Episodes

Tuesday Oct 24, 2023

For this episode of the Data Center Frontier Show Podcast, we sat down for a chat with Andy Pernsteiner, Field CTO of VAST Data.
The VAST Data Platform embodies a revolutionary approach to data-intensive AI computing which the company says serves as "the comprehensive software infrastructure required to capture, catalog, refine, enrich, and preserve data" through real-time deep data analysis and deep learning.
In September, VAST Data announced a strategic partnership with CoreWeave, whereby CoreWeave will employ the VAST Data Platform to build a global, NVIDIA-powered accelerated computing cloud for deploying, managing and securing hundreds of petabytes of data for generative AI, high performance computing (HPC) and visual effects (VFX) workloads.
That announcement followed news in August that Core42 (formerly G42 Cloud), a leading cloud provider in the UAE, and VAST Data had joined forces in an ambitious strategic partnership to build a central data foundation for a global network of AI supercomputers that will store and learn from hundreds of petabytes of data.
This week, VAST Data has announced another strategic partnership, this time with Lambda, an Infrastructure-as-a-Service and compute provider for public and private NVIDIA GPU infrastructure, that will enable a hybrid cloud dedicated to AI and deep learning workloads. The partners will build an NVIDIA GPU-powered accelerated computing platform for generative AI across both public and private clouds. Lambda selected the VAST Data Platform to power its On-Demand GPU Cloud, providing customer GPU deployments for LLM training and inference workloads.
The Lambda, CoreWeave and Core42 announcements represent three burgeoning AI cloud providers who, within the short space of three months, have chosen to standardize on VAST Data as the scalable data platform behind their respective clouds. Such key partnerships position VAST Data to innovate through a new category of data infrastructure that will build the next-generation public cloud, the company contends.
As Field CTO at VAST Data, Andy Pernsteiner is helping the company's customers to build, deploy, and scale some of the world’s largest and most demanding computing environments. Andy spent the past 15 years focused on supporting and building large scale, high performance data platform solutions.
As recounted by his biographical statement, from his humble beginnings as an escalations engineer at pre-IPO Isilon, to leading a team of technical ninjas at MapR, Andy has consistently been on the frontlines of solving some of the toughest challenges that customers face when implementing big data analytics and new-generation AI technologies.
Here's a timeline of key points discussed on the podcast:
0:00 - 4:12 - Introducing the VAST Data Platform; recapping VAST Data's latest news announcements; and introducing VAST Data's Field CTO, Andy Pernsteiner.
4:45 - History of the VAST Data Platform. Observations on the growing "stratification" of AI computing practices.
5:34 - Notes on implementing the evolving VAST Data managed platform, both now and in the future.
6:32 - Andy Pernsteiner: "It won't be for everybody...but we're trying to build something that the vast majority of customers and enterprises can use for AI/ML and deep learning."
7:13 - Reading the room, when very few inside it have heard of "a GPU" or know what its purpose and role is within AI/ML infrastructure.
7:56 - Andy Pernsteiner: "The fact that CoreWeave exists at all is proof that the market doesn't yet have a way of solving for this big gap between where we are right now, and where we need to get to, in terms of generative AI and in terms of deep learning."
8:17 - How VAST started as a data storage platform, and was extended to include an ambitious database geared for large-scale AI training and inference.
9:02 - How another aspect of VAST is consolidation, "considering what you'd have to do to stitch together a generative AI practice in the cloud."
9:57 - On how the biggest customer bottleneck now is partly the necessary infrastructure, but also partly the necessary expertise.
10:25 - "We think that AI shouldn't just be for hyperscalers to deploy" - and how CoreWeave fits that model.
11:15 - Additional classifications of VAST Data customers are reviewed.
12:02 - Andy Pernsteiner: "One of the unique things that CoreWeave does is they make it easy to get started with GPUs, but also have the breadth and scale to achieve a production state - versus deploying at scale in the public cloud."
13:15 - VAST Data sees itself as bridging the gap between on-prem and cloud environments.
13:35 - Can we talk about NVIDIA for a minute?
14:13 - Notes on NVIDIA's GPUDirect Storage, which VAST Data is one of only a few vendors to enable.
15:10 - More on VAST Data's "strong, fruitful" years-long partnership with NVIDIA.
15:38 - DCF asks about the implications of recent reports that NVIDIA has asked about leasing data center space for its DGX Cloud service.
16:39 - Bottom line: NVIDIA wants to give customers an easy way to use their GPUs.
18:13 - Is VAST Data being positioned as a universally adopted AI computing platform?
19:22 - Andy Pernsteiner: "The goal was always to evolve into a company and into a product line that would allow the customer to do more than just store the data."
20:24 - Andy Pernsteiner: "I think that in the space that we're putting much of our energy into, there isn't really a competitor."
21:12 - How VAST Data is unique in its support of both structured and unstructured data.
22:08 - Andy Pernsteiner: "In many ways, what sets companies like CoreWeave apart from some of the public cloud providers is they focused on saying, we need something extremely high performance for AI and deep learning. The public cloud was never optimized for that - they were optimized for general purpose. We're optimized for AI and deep learning, because we started from a place where performance, cost and efficiency were the most important things."
23:03 - Andy Pernsteiner: "We're unique in this aspect: we've developed a platform from scratch that's optimized for massive scale, performance and efficiency, and it marries very well with the deep learning concept."
24:20 - DCF revisits the question of bridging the perceptible gap in industry knowledge surrounding AI infrastructure readiness.
25:01 - Comments on the necessity of VAST partnering with organizations to build out infrastructure.
26:12 - Andy Pernsteiner: "It's very fortunate that Nvidia acquired Mellanox in many ways, because it gives them the ability to be authoritative on the networking space as well. Because something that's often overlooked when building out AI and deep learning architectures is that you have GPUs and you have storage, but in order to feed it, you need a network that's very high speed and very robust, and that hasn't been the design for most data centers in the past."
27:43 - Andy Pernsteiner: "One of the unique things that we do, is we can bridge the gap between the high performance networks and the enterprise networks."
28:07 - Andy Pernsteiner: "No longer do people have to have separate silos for high performance and AI and for enterprise workloads. They can have it in one place, even if they keep the segmentation for their applications, for security and other purposes. We're the only vendor that I'm aware of that can bridge the gaps between those two worlds, and do so in a way that lets customers get the full value out of all their data."
28:58 - DCF asks: Armed with VAST Data, is a company like CoreWeave ready to go toe-to-toe with the big hyperscale clouds - or is that not what it's about?
30:38 - Andy Pernsteiner: "We have an engineering organization that's extremely large now that is dedicated to building lots of new applications and services. And our focus on enabling these GPU cloud providers is one of the top priorities for the company right now."
32:26 - DCF asks: Does a platform like VAST Data's address the power availability dilemma that's going to be involved with data centers' widespread uptake of AI computing?
Here are links to some recent related DCF articles:
Nvidia is Seeking to Redefine Data Center Acceleration
Summer of AI: Hyperscale, Colocation Data Center Infrastructure Focus Tilts Slightly Away From Cloud
AI and HPC Drive Demand for Higher Density Data Centers, New As-a-Service Offerings
How Intel, AMD and Nvidia are Approaching the AI Arms Race
Nvidia is All-In on Generative AI

Tuesday Oct 10, 2023

For the latest episode of the Data Center Frontier Show Podcast, editors Matt Vincent and David Chernicoff sat down with Mike Jackson, Global Director of Product, Data Center and Distributed IT Software for Eaton.
The purpose of the talk was to learn about the company's newly launched BrightLayer Data Centers suite, and how it covers the traditional DCIM use case - and a lot more.
According to Eaton, the BrightLayer Data Centers suite's digital toolset enables facilities to efficiently manage an increasingly complex ecosystem of IT and OT assets, while providing full system visibility into data center white space, grey space and/or distributed infrastructure environments.
"We're looking at a holistic view of the data center and understanding the concepts of space, power, cooling, network fiber," said Jackson. "It starts with the assets and capacity, and understanding: what do you have, and how is it used?"
Here's a timeline of points discussed on the podcast:
0:39 - Inquiring about the BrightLayer platform and its relevance to facets of energy, sustainability, and design in data centers.
7:57 - Explaining the platform's "three legs of the stool": Data center performance management, electrical power monitoring, and distributed IT performance management. Jackson describes how all three elements are part of one code base.
10:42 - Jackson recounts the BrightLayer Data Center suite's beta launch in June and the product's official commercial launch in September; out of the gate, over 30 customers were already actively using the platform across different use cases.
13:02 - Jackson explains how the BrightLayer Data Center suite's focus on performance management and sustainability is meant to differentiate the platform from other DCIM systems, in attracting both existing and new Eaton customers.
17:16 - Jackson observes that many customers are being regulated or pushed into sustainability goals, and how the first step for facilities in this situation is measuring and tracking data center consumption. He further contends that the BrightLayer tools can help reduce data center cooling challenges while optimizing workload placement for sustainability, and cost savings.
20:11 - Jackson talks about the importance of integration with other software and data center processes, and the finer points of open API layers and out-of-the-box integrations.
22:26 - In terms of associated hardware, Jackson reviews the Eaton EnergyAware UPS series' ability to proactively manage a data center's power drop via handling utility and battery sources at the same time. He further notes that many customers are now expressing interest in microgrid technology and use of alternative energy sources.
27:21 - Jackson discusses the potential for multitenant data centers to use smart hardware and software to offset costs and improve efficiency, while offering new services to customers and managed service providers.
Keep pace with the fast-moving world of data centers and cloud computing by connecting with Data Center Frontier on LinkedIn.

Tuesday Sep 26, 2023

For this episode of the Data Center Frontier Show Podcast, DCF editors Matt Vincent and David Chernicoff chat with Tiffany Osias, VP of Colocation for Equinix.
Osias begins by discussing the company's investment in a range of data center innovations to help its customers enter new markets and gain competitive advantages through burgeoning AI and machine learning tools.
In the course of the discussion, we also learn about Equinix's deployment of closed loop liquid cooling technologies in six data centers in 2023, and where the company stands on offering increased rack densities for powering AI workloads.
Osias also discusses developments along the course of Equinix helping its customers to optimize their hybrid cloud and multi-cloud architectures and strategies.
Data center sustainability also factors into the conversation, as Osias touches on how Equinix aims to achieve 100% renewable energy coverage by 2030.
Here's a timeline of key discussion points in Data Center Frontier's podcast interview with Equinix VP of Colocation, Tiffany Osias:
1:09 - Osias explains how Equinix invests in data center innovation to help its customers enter new markets, contain costs, and gain a competitive advantage, especially as AI and machine learning become more prevalent in decision-making processes.
1:50 - The discussion turns to how Equinix enables its customers' use of AI by providing secure, reliable service and efficient cooling options, including advanced liquid cooling technologies.
4:07 - Osias remarks on how Equinix plans to deploy closed loop liquid cooling in six data centers in 2023 to meet increasing demand from customers for full liquid-to-liquid environments.
5:49 - We learn how Equinix offers high-density racks for customers running 10-50+ kW per rack, and provides a bespoke footprint for each customer based on their power consumption needs and cooling capabilities.
7:14 - Osias remarks on how liquid cooling can have a positive impact on data center sustainability by reducing physical footprint, carbon emissions from manufacturing, and improving cooling efficiency. The company's use of renewable energy is also examined.
10:19 - Osias describes how AI impacts the Equinix approach to data center infrastructure, and the importance of partnerships and interconnection strategies.
12:09 - Osias discusses how Equinix aims to achieve 100% renewable energy coverage by 2030 and has made progress towards that goal.
13:21 - Notes on how Equinix helps customers optimize their hybrid multi-cloud architecture and interconnect with cloud and storage providers.
Read the full article about the podcast for interview transcript highlights, plus a recent video from Equinix regarding data center sustainability.

Tuesday Sep 12, 2023

Data Center Frontier editors Matt Vincent and David Chernicoff recently caught up with Cyxtera's Field CTO Holland Barry on the occasion of Cyxtera and Hewlett Packard Enterprise (HPE) announcing a new collaboration to help simplify customers' hybrid IT strategies.
Cyxtera is now leveraging HPE's GreenLake edge-to-cloud platform to support an enterprise bare metal platform. The podcast discussion extends to how Cyxtera is presently focused on supporting AI workloads in data centers and collaborating with HPE to offer a multi-hybrid cloud strategy.
Barry revealed during the podcast that Cyxtera already supports 70 kilowatt racks in 18 markets, and is discussing expanded deployments with its customers and partners. Barry added that many present customers are considering moving to private cloud platforms, due to rising public cloud costs and unexpected fees.
Barry said, "My function here at Cyxtera deals largely with the technologies that we both implement internally and also that we deploy within the data centers themselves to make sure that experience of being in the data center colo facility is seamless, feels as much like cloud as it can in terms of the provisioning of services, how we bill for things, things like that."
He added, "Generally speaking, I'm a technologist at heart and I just want to make sure that what we're building is what's useful for the market to consume."
Here's a list of key points discussed on the podcast:
2:01 - Barry talks about Cyxtera's vision for supporting AI workloads in data centers, including cooling technologies, network speed, power designs, and accommodating adjacencies with edge and public cloud platforms.
4:18 - DCF's Chernicoff asks if Cyxtera will offer 70 kilowatt racks a la Digital Realty. Barry explains that Cyxtera already supports this capacity in 18 of its markets, and is in active discussions with customers and partners over an expansion.
5:40 - Barry discusses how Cyxtera's collaboration with HPE addresses rising cloud costs and furthers a multi-hybrid cloud strategy, including Cyxtera's new enterprise bare metal platform and options for opex financing models.
7:53 - Use cases for customers moving to the HPE GreenLake solution via Cyxtera are discussed, including repatriating cloud workloads and tech refreshes.
15:03 - Asked about the convergence of cloud and hybrid IT strategies, Barry says that Cyxtera views themselves as part of such transformations and says the provider is up front with its customers about what workloads are best suited for their platform. The trend of recalibrating workloads from the public cloud to data centers for better cost management is also discussed.
18:22 - Barry expounds on how egress fees and other unexpected costs can lead to a "death by 1000 cuts" situation for public cloud users, driving them to consider private cloud options.
19:57 - Barry observes that many customers are realizing the costs of the public cloud and considering moving to a private cloud solution, and emphasizes the importance of Cyxtera making this transition as easy as possible through technology choices and partnerships.
21:57 - Barry comments on the new Cyxtera partnership with HPE in the context of providing choices and solutions to make moving customer workloads to their venue as easy as possible, with the goal of building a multi-hybrid cloud reality in the future.
Background on Cyxtera
Cyxtera Technologies operates a global network of 60 data centers, supports 2,300 customers, and had $746 million in revenue in 2022. The company was formed in 2016 when Medina Capital, led by former Terremark CEO Manuel Medina, teamed with investors including BC Partners to buy the data center portfolio of CenturyLink for $2.15 billion. It was at that time one of several data center players seeking to build a colocation business atop a portfolio of data centers spun off by telecom companies.
This April, Data Center Frontier's Rich Miller reported that Cyxtera Technologies was reportedly fielding interest from suitors as it sought to reduce its debt load. Data Center Dynamics at that time shared that Cyxtera was exploring options for a sale or capital raise, citing a Bloomberg story that said private equity suitors were studying the company's operations. Shares of Cyxtera had fallen sharply in value during that timeframe and were trading at 31 cents a share at one point, giving the company a market capitalization of about $55 million, a far cry from the $3.4 billion valuation placed on the company when it went public in 2021 through a merger with Starboard Value Acquisition Corp.
In May, as shares of Cyxtera fell to new lows, DCF reported lenders for the company said they would provide the colocation provider with $50 million in new funding, allowing it more time to arrange a sale or line up new capital. In June, the colocation provider filed for Chapter 11 bankruptcy. After working for months to find a buyer or reduce its debt load, the company decided it would now restructure through a pre-packaged bankruptcy.
The Chapter 11 filing was part of an arrangement with its lenders, who retained the right to gain a controlling equity interest in the company under terms of a restructuring agreement. At the time of the bankruptcy filing, some of the company's lenders committed to provide $200 million in financing to enable Cyxtera to continue operating as it restructures.
"Cyxtera expects to use the Chapter 11 process to strengthen the company's financial position, meaningfully deleverage its balance sheet and facilitate the business’s long-term success," the company said in a press release. More details have been made available on Cyxtera's restructuring web site. Cyxtera subsidiaries in the United Kingdom, Germany and Singapore are not included in the bankruptcy case, which was filed in New Jersey.
Dgtl Infra's Mary Zhang has done significant reporting over the summer on Cyxtera's lease rejections in the wake of the bankruptcy filing, as well as charting the company's timeline extension for its bankruptcy-led sale process into late September.
In his June reporting on Cyxtera's bankruptcy filing, DCF's Miller noted that:
"Since Cyxtera leases many of its data centers, Cyxtera's Chapter 11 filing creates a potential challenge for its landlords. Cyxtera leases space in 15 facilities operated by Digital Realty, representing $61.5 million in annual revenue, or about 1.7 percent of Digital's annual revenue.
It also leases space in 6 data centers owned by Digital Core REIT, a Singapore-based public company sponsored by Digital Realty. That includes two sites in Los Angeles, three in Silicon Valley and one in Frankfurt. The $16.3 million in annual rent from Cyxtera represents 22.3 percent of revenue for Digital Core REIT.
A bankruptcy filing provides debtors with the opportunity to reject leases to reduce their real estate costs. In its press release, Cyxtera noted that it "is continuing to evaluate its data center footprint, consistent with its commitment to optimizing operations."
An August report from Bloomberg's Reshmi Basu stated that Cyxtera had drawn interest for its assets from multiple parties, including Brookfield Infrastructure Partners and Digital Realty Trust Inc., according to people with knowledge of the situation.
In a recent email to this editor regarding Cyxtera, DCF's Miller opined further:
"The key questions for Cyxtera are really all about the bankruptcy outcome, and where that stands. The future could be very different for Cyxtera depending who the winning bidder is and whether they would reject leases. For example, Digital Realty is reported to be one of the bidders. That makes sense, as Cyxtera leases 12 facilities from them and DLR has a vested interest in protecting that income. But if Digital wins the auction, do they keep leasing space in Cyxtera’s many non-Digital sites? Or do they reject those leases and consolidate? The auction winner will guide future strategy for Cyxtera. And it could be very different if it’s a private equity firm vs. a strategic buyer like Digital Realty."
On August 7, concurrent with its business update for Q2, Cyxtera announced that it had reached a key milestone in its Chapter 11 process by filing a proposed plan of reorganization with the U.S. Bankruptcy Court for the District of New Jersey, and said it had reached an agreement with its lenders to optimize the company's capital structure and reduce its pre-filing funded debt by more than $950 million. In its Q2 update, the company said it had delivered solid growth in total revenue, recurring revenue, core revenue and transaction adjusted EBITDA. Cyxtera's August 7 press release added that negotiations around the company's sale alternative remained active.
According to a press release, the proposed reorganization plan is supported by certain of Cyxtera’s lenders who collectively hold over two-thirds of the company’s outstanding first lien debt, and are parties to Cyxtera’s previously announced restructuring support agreement. The company said the proposed plan provides flexibility for the company to pursue a balance sheet recapitalization or a sale of the business.
Cyxtera noted that if the plan is approved and a recapitalization is consummated, the lenders have committed to support a holistic restructuring of the company’s balance sheet. Such a restructuring would eliminate more than $950 million of Cyxtera’s pre-filing debt and provide the company with enhanced financial flexibility to invest in its business for the benefit of its customers and partners.
For Q2 2023, Cyxtera said its total revenue increased by $14.9 million, or 8.1% YoY, to $199.0 million. On a constant currency basis, the company's total revenue increased by $15.1 million, or 8.2% YoY. Recurring revenue increased by $15.8 million, or 9.1% YoY, to $190.0 million in the second quarter.
Cyxtera added that its core revenue increased by $17.4 million, or 10.3% YoY, to $186.2 million in the second quarter. Finally, the company said its transaction Adjusted EBITDA increased by $6.4 million, or 10.7%, to $66.4 million and increased by $6.5 million, or 10.9% YoY, on a constant currency basis, in the second quarter.
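As a quick sanity check, the reported YoY percentages can be reconstructed from the dollar figures above. This is a minimal sketch: the prior-year baselines are derived as (current quarter minus increase), not quoted from Cyxtera's release, and the constant-currency variants are ignored.

```python
# Reconstruct Cyxtera's reported Q2 2023 YoY growth percentages from the
# dollar figures in the press release. Prior-year baselines are implied
# (current minus increase), not stated by the company.
figures = {
    # metric: (Q2 2023 total in $M, YoY increase in $M, reported YoY %)
    "total revenue":     (199.0, 14.9, 8.1),
    "recurring revenue": (190.0, 15.8, 9.1),
    "core revenue":      (186.2, 17.4, 10.3),
    "adjusted EBITDA":   (66.4, 6.4, 10.7),
}

for metric, (current, increase, reported_pct) in figures.items():
    prior = current - increase            # implied Q2 2022 baseline
    growth = 100 * increase / prior       # YoY growth in percent
    print(f"{metric}: {growth:.1f}% (reported {reported_pct}%)")
```

Each computed percentage rounds to the figure Cyxtera reported, so the dollar amounts and growth rates in the release are internally consistent.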
Carlos Sagasta, Cyxtera’s Chief Financial Officer, said, “We are pleased to have delivered another quarter of solid growth across the business, underscoring the strength of our offering and the value we create for our global customers. We expect to continue building on this momentum as we successfully complete the process to strengthen our financial position for the long term.”
The press release added that in either a recapitalization or sale scenario, the company remains on track to emerge from the court-supervised process no later than the fall of this year. The company said it had received multiple qualified bids to date. Final bids from interested parties in the sale process were originally due on August 18, a deadline which came and went. An auction slated for August 30 was also cancelled.
Nelson Fonseca, Cyxtera’s Chief Executive Officer, commented, “We continue to make important progress in our court-supervised process, while demonstrating solid performance across our business. Filing this plan with the support of our lenders provides us a path to emerge in a significantly stronger financial position.”
Here are links to some recent DCF articles on Cyxtera:
Colocation Provider Cyxtera Files for Chapter 11 Bankruptcy
Cyxtera Gets $50 Million Funding, More Time to Seek a Buyer
As Stock Price Slumps, Cyxtera Reportedly Mulling Capital Raise or Sale
Cyxtera Goes Public as Starboard SPAC Acquisition Closes
Cyxtera to Go Public Through $3.4 Billion Merger With Starboard SPAC

Tuesday Aug 29, 2023

Recognizing how data center liquid cooling technology has taken the spotlight this year, in this episode of the Data Center Frontier Show podcast, DCF Editor in Chief Matt Vincent sits down with Mark Fenton, Sr. Product Marketing Manager for Cadence Design Systems; and Mark Seymour, Distinguished Engineer with Cadence and co-founder and CEO of Future Facilities, Ltd., a company specializing in digital twin technology for data centers, which Cadence acquired in July 2022.
The discussion unpacks some of the implications of the rise of data center liquid cooling technology for data center designs in the era of AI, as proclaimed earlier this month in the pages of VentureBeat.
Here's a timeline of points discussed on the podcast:
1:06 - Transitioning to the "AI Era" of Data Centers?
2:06 - The Cloud and AI Are Absolutely Symbiotic
3:40 - Liquid Cooling Customers: Traditional vs. Now
5:43 - The Beauty of Direct Liquid to Chip Technologies
7:07 - The Issue with Rack Retrofits
8:17 - Timing of Liquid Cooling Imperatives for Data Center Design
11:02 - Cost Considerations for Liquid Cooling: Is PUE a Bad Premise?
13:13 - How Data Center Design Tools are Accounting for Liquid Cooling Technologies
14:40 - Digital Twins for Air Cooled vs. Liquid Cooling Data Centers
16:30 - Liquid Cooling Doesn't Stop Inside the White Space
17:31 - How Liquid Cooling Improves Sustainability and ESG for Data Centers
18:46 - Liquid Cooling Can Potentially Produce Higher-Quality Waste Heat
20:18 - The Holistic Efficiencies of Data Center Liquid Cooling
22:29 - From Opportunities to Challenges
23:32 - Data Centers Love a Silver Bullet
25:34 - Evolution of Data Center Liquid Cooling Designs
26:21 - The Problem Is Power Densities are Rising
27:36 - Drawing Distinctions for Immersion Cooling
29:35 - Immersion Cooling Maintenance Questions
Here are links to some recent DCF articles on data center liquid cooling technology:
Investors Are Warming Up to Liquid Cooling
Liquid Cooling Is In Your Future. Are You Ready?
How to Get Started on Your Immersion Cooling Journey
Direct Liquid Cooling - The Ultimate Guide for Data Centers
Liquid Cooling: Going Beyond Water
Four Factors to Consider When Selecting the Right Glycol-Based Fluid for Liquid Cooling
Why Liquid Cooling is Critical for Your Data Center's Future

Tuesday Aug 15, 2023

Premised on DCF's recent article series centered on data center diesel backup generator technology, the latest episode of the Data Center Frontier Show podcast finds site editors Matt Vincent and David Chernicoff recounting how Aligned Data Centers' Quantum Loophole campus was recently called out by the State of Maryland over a permitting snag in a contentiously approved plan for construction of 168 data center diesel generators, amounting to over 500 MW of backup power generation.
Data centers like Aligned's Quantum Loophole campus, which is being raised on the site of a former aluminum smelting plant, seek to do in Maryland what so many others are doing next door in Northern Virginia. Maryland wants the data center business, but not without certain qualifications being met through the state's Certificate of Public Convenience and Necessity (CPCN) licensing process.
As recorded by DCD, in the wake of the permitting snag, Maryland officials have wondered aloud about clean energy alternatives, even expressing incredulity that carbon-emitting technology is on the table at all -- especially given certain outside realities, not least Aligned's use of microgrid power at its Plano, Texas data center.
Chernicoff and Vincent sidle up to the conclusion that a modular, incremental technology approach allows a mosaic of available data center backup power generation solutions, including diesel, to be used -- something the overall industry currently requires. Chernicoff also notes how diesel generators meeting Tier 4 standards have gotten significantly cleaner after two decades of refinement.
Here’s a timeline of points discussed on the podcast:
1:05 - The Issue with Aligned Data Centers' Quantum Loophole Campus In Maryland
2:00 - Diesel and Maryland Are At Loggerheads
4:00 - If Someplace Ever Screamed Out for a Microgrid ...
5:20 - Perceptions of Diesel Power
6:00 - Cleaner Generators and Backup Power Runtime Realities
6:42 - The 3 Big Players in Data Center Diesel Generators
7:14 - Competitive Advantages of No-Load Maintenance
8:20 - Alternatives to Diesel: Microgrid, Battery Backup, SMR, and Biodiesel Technologies
9:44 - A Catch-22 Situation for Data Centers
10:41 - Bits and Pieces of Technology
10:59 - The Benefit of Building from a Clean Slate
11:29 - Building an Entire Data Center Campus, You Expect To Be There For a Decade or Three
12:00 - Could a Microgrid Ever Furnish On-Demand Gigawatt Power?
12:27 - Enclosures for Diesel Backup Power Generators
13:21 - Quality of Support a Huge Competitive Factor
14:17 - The Scoop on Supply Chain
15:15 - Diesel Generator Sizing Concerns
16:01 - Overprovisioning for Backup Power Is an Issue
17:10 - Where Diesel Power Generation Meets Sustainability
18:08 - A Stepping Stone to Other Backup Power Solutions?
Here are links to some recent DCF articles on backup power for data centers:
Top-Level Issues to Consider When Selecting Backup Generator Technology
Sustainability Advantages of HVO Fuel for Diesel Generators
Virginia Ends Effort to Shift Data Centers to Generators in Grid Alerts
New Technology and Practices Improve the Environmental Performance of Diesel Generators
Beyond Diesel: Sustainable Onsite Power for Data Centers
Microsoft Plans to Stop Using Diesel Generators by 2030
Google Looks to Batteries as Replacement for Diesel Generators
Rethinking the Data Center: Hydrogen Backup is Latest Microsoft Moonshot

Friday Jul 28, 2023

According to a recent State of the Edge report, global capital expenditure on IT equipment for edge infrastructure is projected to grow to $104 billion by 2028. Moreover, recent IDC research forecasts worldwide spending on edge computing platforms to reach nearly $274 billion by 2025.
As AFL executives in a related DCF 'Voices of the Industry' essay from earlier this year explained further, "Edge data centers are key to unleashing advanced use cases resulting in new user experiences and new business opportunities."
As recently as last month, a market brief from JLL unpacked just why smaller data centers are taking off, as AI, 5G and hybrid work fuel an exponential expansion of edge computing footprints.
As noted by the brief, "Hyperscale centers are usually located in cities and can typically house 10,000 racks with a capacity in excess of 80 MW. Edge data centers by comparison, have a smaller capacity between 500 kilowatts to 2 MW and, as the name suggests, are located on the outer edge of networks. They bring computing capability geographically closer to those users situated further away from the heart of the cloud."
“These assets are increasingly important to the architecture of computing networks, thanks to the continued adoption of IoT devices and now the rise of generative AI applications, and machine learning,” added Tom Glover, JLL Head of EMEA Data Center Transactions.
For its part, PricewaterhouseCoopers International Limited (PwC) recently noted that "the global market for edge data centers is expected to nearly triple to $13.5 billion in 2024 from $4 billion in 2017, thanks to the potential for these smaller, locally located data centers to reduce latency, overcome intermittent connections and store and compute data close to the end user."
PwC's edge data center examination cautioned, "However, the right timing and strategy for moving data centers (and related services) to the edge will be different for each organization, depending on the conditions, environment and business opportunities in its marketplace."
So even just a cursory reading of the business and technology prospects for edge data centers told DCF's editors that it was time for a podcast discussion probing the history and reach of this most evergreen (yet paradoxically sometimes elusive) technology topic for our industry.
Here's a summary of points discussed by DCF editors Matt Vincent and David Chernicoff in today's podcast.
1:01 - Framing the topic with a Bill Kleyman quote.
2:06 - Comparing and contrasting the "original" or "local" edge vs. the hyperscale version.
3:33 - How a lot of edge data centers have come out of the CDN model.
4:23 - From Google and AWS to Akamai, Cloudflare and Rackspace.
6:46 - Optimizing delivery at the edge to challenge the hyperscalers for business.
7:45 - Blending edge computing and edge data centers to move data around as little as possible.
9:11 - 5G and telco: The 'red-headed stepchild' of the edge data center?
9:43 - "If you think about it, every cell tower you see has a data center attached to it."
11:32 - Many major CSPs didn't expect the kind of usage their cell towers are getting.
12:35 - On self-driving cars and autonomous vehicles as competitive edge use-cases.
14:26 - Leveraging 5G, actual connectivity, and localized data centers.
15:17 - How latency and bandwidth have become huge issues in gaining a business advantage.
15:34 - Edge permutations redux.
16:47 - "You just had to work AI into the conversation, didn't you?"
17:50 - When data center-quality analytics live in the trunk of a vehicle.
18:25 - Qualcomm: "Wherever a phone is, that's the edge."
19:21 - "Think of the issues involved. The backhaul, the latency, the security of that data moving across that much fiber."
20:15 - Latency Makes People Go Away
21:31 - "There's certainly a lot more edge-type data centers being built than giant hyperscale data centers."
22:03 - What have supply chain issues done to these smaller data center builds?
22:40 - How edge data center development may depend on what the market does.
23:12 - Engineering the industrial vs. the suburban edge in rural areas.
26:10 - Closing thoughts: "What's old is new again...The first point of contact is the edge."
Here are links to some recent DCF stories on edge data centers:
Akamai Bets on Bringing Cloud In Closer with 5 New Data Center Sites
Getting Closer to the Edge: Data Centers Move Closer to Consumption
Roundtable: Growth Seen Across Many Flavors of Edge Computing
Data Center Insights: Phillip Marangella of EdgeConneX
Tower Operators Step Up the Pace of Their Edge Deployments
Let Form Follow Function at the Edge

Tuesday Jul 18, 2023

The latest episode of the Data Center Frontier Show podcast begins with the site's editors Matt Vincent and David Chernicoff commemorating the "hand off" of show hosting duties from DCF founder and Editor at Large Rich Miller. As duly noted, this transition of course occurs amid another terrifying, annually recurring "hottest year on record" for planet Earth. The discussion between editors unfolds to focus on two areas wherein David has done significant reporting this year, based on a wealth of accumulated industry knowledge: data center cooling and the future of power for data centers. Tune in to hear the editors unsuccessfully attempt to bypass the topic of AI for even just three minutes...
Here’s a timeline of points Matt and David discuss on the podcast:
0:00 - Podcast Hand-Off Notes: 'Oh Captain, My Captain'
1:17 - Hello to David Chernicoff in the Hottest Year on Record (Again)
2:03 - Cooling and the Future of Power (and the Impact of AI)
2:55 - "Whatever space you give it, it will fill."
3:08 - Doing the math for a tray of NVIDIA H100 processors.
4:02 - 10 kW isn't high-density anymore (and DoE's COOLERCHIPS program knows it).
5:02 - Becoming better corporate citizens of the world (or at least Northern Va.)
6:40 - Notes on CO2 Cooling and Liquid Cooling
7:21 - Replacing HFCs for Less GHGs
9:26 - Liquid Cooling: A Whole Different Ball of Wax
10:44 - Devil's Advocate: Water-based Cooling
13:14 - Incremental Cooling Processes and the Real World
14:17 - "Musk bought 10,000 H100 CPUs..."
15:04 - Cooling in the Hybrid Cloud Environment
16:02 - Data Centers, the Utility Crunch and Nuclear Power
18:01 - The Main Issue is the Grid Itself
19:21 - The Building of New Substations Has to Occur
20:31 - "Is AI the straw that breaks the camel's back?"
23:02 - Implications for Edge Data Centers
24:53 - The Next Hurdle for AI: The Speed of Interconnection

Tuesday Jun 20, 2023

What might AI - artificial intelligence - mean for the data center industry? On this week’s Data Center Frontier Show, host Rich Miller chats with DCF Senior Editor David Chernicoff, who has been digging into all things AI, including its potential impact on cloud platforms, chip makers, hardware startups, server vendors, and colocation providers. They take a deep dive into the implications of AI for the data center industry. If you are at all interested in AI and data centers, this is the podcast for you.
Here’s a timeline of topics David and Rich discuss on the podcast:
1:15 - David shares a bit about his career and "Data Center Journey"
3:45 - "An Interesting Time for Hardware:" Trends driving development of chips and servers.
6:15 - The history of rack density, and the arrival of ChatGPT and generative AI.
11:30 - How AI might be disruptive, and how data and cost factor into its business impact.
17:15 - Echoes of the early days of cloud, and what that tells us about AI's trajectory.
21:30 - Rich and David discuss the opportunities for colocation providers, OEMs and the edge.
24:45 - The Road Ahead: Trends to watch as generative AI evolves, including societal issues.
Here are links to some of David's recent DCF stories on AI and its impact on various sectors within digital infrastructure:
How Intel, AMD and Nvidia are Approaching the AI Arms Race
The AI Arms Race: Startups Offer New Technology, Target Niche Markets
For Leading Cloud Platforms, AI Presents A Major Opportunity
Dell Technologies, HPE Pursue Multiple Paths into Enterprise AI
Did you like this episode? Be sure to subscribe to the Data Center Frontier Show so you get future episodes on your app.

Thursday May 04, 2023

DCF Show host Rich Miller chats with Bill Kleyman, a long-time contributor to Data Center Frontier and one of the keynote speakers at the upcoming Data Center World 2023, where he will share insights from the AFCOM State of the Data Center 2023 industry survey. Bill and Rich dive deep into all the hot topics - including the rise of AI, rising rack density, the cloud capacity crunch, supply chain, and nuclear-powered data centers. It's a fun and interesting discussion.
Here’s a timeline of topics Bill and Rich discuss on the podcast:
1:45 - State of the Data Center: Key trends in Bill's keynote summarizing the AFCOM survey.
9:45 - Trends in rack density and cooling: Will data centers look more like HPC?
17:00 - Nuclear-powered data centers: Why we're hearing more about this, and the prospects for small modular reactors.
22:15 - Bill and Rich talk supply chain, and the ripple effects on data center delivery.
26:00 - The NIMBY Problem: Why community relations matter for data center companies.
29:15 - Is there a data center shortage on the horizon?
31:00 - On the front lines of the AI Boom: Bill's work with Neu.ro. "This is an absolutely critical point for our industry."
37:00 - AI "hallucinations" and reliability: How do we assess societal impact?
43:00 - How might AI address automation and staffing challenges in the data center industry?
49:15 - The shape of the hybrid cloud: Bill's take on the balance between cloud, colo and on-premises data centers.

Copyright Data Center Frontier LLC © 2019
