Episodes
Wednesday Nov 29, 2023
For this episode of the Data Center Frontier Show podcast, DCF Editor in Chief Matt Vincent chats with Brian Green, EVP of Operations, Engineering and Project Management for EdgeConneX. The discussion touches on data center operations, sustainable deployments, renewable power strategies, and ways to operationalize renewables in the data center.
Under Brian’s leadership, the EdgeConneX Houston data center completed a year-long project measuring the viability of 24/7 carbon-free energy utilizing AI-enabled technology. With this approach, EdgeConneX ensured the data center is powered with 100% renewable electricity, and proved that even when the power grid runs on fossil-fueled generation, carbon-free matching can be applied in real-time hourly increments at new and existing data centers.
As a result, for any given hour, EdgeConneX and its customers can operate throughout the year without emitting CO2, and with zero reliance on fossil-fueled standby generation during dark or cloudy periods. This innovative program will be duplicated at other EdgeConneX facilities globally.
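For readers curious about the mechanics of hourly matching, here's a minimal sketch of how a 24/7 carbon-free energy (CFE) score is computed, and how it differs from annual renewable accounting; the figures and field names below are hypothetical illustrations, not EdgeConneX's actual methodology:

```python
# Minimal sketch of 24/7 carbon-free energy (CFE) hourly matching.
# All figures are hypothetical; a real program uses metered utility data.

hourly_load_kwh = [950, 980, 1010, 990]    # data center demand, hour by hour
hourly_cfe_kwh  = [1400, 600, 1500, 500]   # carbon-free supply, hour by hour

def hourly_cfe_score(load, cfe):
    """Fraction of consumption matched by carbon-free energy per hour.

    Unlike annual accounting, a surplus in one hour cannot offset
    a shortfall in another hour.
    """
    matched = sum(min(l, c) for l, c in zip(load, cfe))
    return matched / sum(load)

print(f"24/7 CFE score: {hourly_cfe_score(hourly_load_kwh, hourly_cfe_kwh):.1%}")
# -> 77.9%: annual accounting would call this 100% renewable (total supply
#    4000 kWh >= total load 3930 kWh), but hourly matching exposes the
#    shortfall hours that standby generation would otherwise have to cover.
```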
Another real-world example discussed involves a facility where the local community complained about fan noise. Brian's team improved the noise level by changing fan speeds, and as a result the data center and the local community realized multiple benefits: enhanced community relations once the noise disturbance was removed, increased efficiency, and reduced power consumption, a big cost-saver for the data center.
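The power savings Brian describes follow from the fan affinity laws, under which airflow scales roughly linearly with fan speed while fan power scales roughly with the cube of speed, so a modest speed reduction yields an outsized energy (and noise) reduction. A quick illustrative calculation; the figures are hypothetical, not from the episode:

```python
# Fan affinity laws: airflow ~ speed, power ~ speed**3 (approximately).
# Hypothetical figures to show why small speed cuts save big energy.

rated_power_kw = 10.0   # one fan at 100% speed
speed_fraction = 0.80   # fan slowed to 80% of rated speed

power_kw = rated_power_kw * speed_fraction ** 3
print(f"Power at 80% speed: {power_kw:.1f} kW "
      f"({1 - speed_fraction ** 3:.0%} savings per fan)")
# -> Power at 80% speed: 5.1 kW (49% savings per fan)
```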
Along the way, Brian explains how he and the EdgeConneX team are big believers in the company's motto: Together, we can innovate for good.
Tuesday Nov 21, 2023
For this special episode of the DCF Show podcast, Data Center Frontier's founder and present Editor at Large, Rich Miller, returns for a visit. Tune in to hear Rich engage with the site's daily editors, Matt Vincent and David Chernicoff, in a discussion covering a range of current data center industry news and views.
Topics include: Dominion Energy's transmission line expansion in Virginia; Aligned Data Centers' market exit in Maryland over a rejected plan for backup diesel generators; an update on issues surrounding Virginia's proposed Prince William Digital Gateway project; Rich's take on the recent Flexential/Cloudflare outages in Hillsboro, Oregon; and more.
Here's a timeline of key points discussed on the podcast:
:10 - For those concerned that the inmates might be running the asylum, the doctor is now in: Rich discusses his latest beat as DCF Editor at Large.
1:30 - We look at the power situation in Northern Virginia as explained in one of Rich's latest articles, vis-à-vis what's going to be required to support growth already in the pipeline, in the form of contracts that Dominion Energy has for power. "Of course, the big issue there is transmission lines," adds Miller. "That's the real constraint on data center power delivery right now. You can build local lines and even substations much more quickly than you can transmission at the regional level. That's really where the bottlenecks are right now."
3:00 - Senior Editor David Chernicoff asks for Rich's take on Aligned Data Centers' recent market exit in Maryland, related to its rejected plan for backup diesel generators. "Is this really going to be the future of how large-scale data center projects are going to have to be approached, with more focus put on dealing with permission to build?" wonders Chernicoff, adding, "And are we going to see a more structured data center lobbying effort on the local level beyond what, say, the DCC [Data Center Coalition] currently does?"
5:19 - In the course of his response, Rich says he thinks we'll see just about every data center company realizing the importance of doing their research on the full range of permissions required to build these megascale campuses, which are only getting bigger.
6:12 - Rich adds that he thinks the situation in Maryland illustrates how it's important for data center developers to step back for a strategic discussion regarding depth of planning. "The first thing to know," he points out, "is that Maryland was eager to have the data center industry. They specifically passed incentives that would make them more competitive with Virginia. They saw that Northern Virginia was getting super crowded...and they thought, we've got lots of resources up here in Frederick County, let's see if we can bring some of these folks across the river. And based on that, the Quantum Loophole team found this site."
8:20 - Rich goes on to note how "the key element for a lot of data centers is fiber, and a key component, both strategically and from an investment perspective [in Maryland] is that Quantum Loophole needed to have a connection to the Northern Virginia data center cluster in Ashburn, in Data Center Alley - which is not that far as the crow flies, but to get fiber there, they wound up boring a tunnel underneath the Potomac River, an expensive and time-consuming project that they're in the late stages of now. That's a big investment, and all that was done with the expectation that Maryland wanted data centers."
10:26 - Rich summarizes how the final ruling for Aligned in Maryland "was, effectively, that you can have up to 70 MW but beyond that, you have to follow this other process [where] you're more like a power plant than a data center with backup energy." He adds, "I think one of the issues was [in determining], will all of this capacity ever be turned on all at once? Obviously with diesel generators, that's a lot of emissions. So the air quality boards are wrestling with, on the one hand, having a large company that wants to bring in a lot of investment, a lot of jobs; the flip side is, it's a lot of diesel at a time when we're starting to see the growing effects of climate change, and everybody's trying to think about how we deal with fossil fuel generation. The bottom line is, Aligned pulled out and said, this is just not working. The Governor of Maryland, understanding the issues at stake and the amount of investment that has already been brought there, says that he is working with the legislature to try to 'create some regulatory predictability' for the data center industry. Because it used to be that 70 MW was a lot of capacity, but with the way the industry is going right now, that's not so much."
12:06 - In response to David's reiterated question as to whether the data center industry will now increasingly have to rethink its whole approach to permitting prior to starting construction, Rich notes, "There's a lot of factors that go into site selection, you're looking at land, fiber, power. The regulatory environment around it, whether there's going to be local resistance, has also become part of the conversation, and rightfully so. One of the things that's definitely going to happen is that data centers have to think hard about their impact on the communities where they're locating, and try to develop sensible policies about how they, for lack of a better term, can be good neighbors, and fit into the communities where they're operating."
14:20 - Taking the discussion back across state lines, Editor in Chief Matt Vincent asks for an update on Rich's thoughts surrounding contentious plans by QTS and Compass Datacenters for a proposed new campus development, dubbed the Prince William Digital Gateway, near a Civil War historic site in Prince William County, Virginia. "This is one of the most unique proposals in the history of the data center industry," explains Miller. "It would be the largest data center project ever proposed. And of course, it's become an enormous political hot potato. It's the first time where we've really seen data centers on the ballot in local elections."
20:41 - After hearing some analysis of the business and political angles in Prince William County, Vincent asks whether Miller thinks the PW Digital Gateway project's future is in doubt, or whether it's simply too early to know what's going to happen.
22:50 - Vincent asks Miller for his take on the recent data center outage affecting Flexential and Cloudflare, as written up for DCF by Chernicoff, particularly in the area of incident reports and their usefulness. In the course of responding to a follow-on point by David, Rich says, "I think the question for both levels of providers is, are you delivering on your promises, and what do you need to do to ensure that you can? Let's face it, stuff breaks, stuff happens. The data center industry, I think, is fascinating because people really think about failure modes and what happens, and customers need to do the same."
32:14 - To conclude, Vincent asks for Miller's thoughts on the AI implications of Microsoft's cloud-based supercomputer, running Nvidia H100 GPUs, ranking third on the world's Top500 supercomputer list, as highlighted at the recent SC23 show in Denver.
Here are links to some related DCF articles:
Dominion: Virginia’s Data Center Cluster Could Double in Size
Dominion Resumes New Connections, But Loudoun Faces Lengthy Power Constraints
DCF Show: Data Center Diesel Backup Generators In the News
Cloudflare Outage: There’s Plenty Of Blame To Go Around
Microsoft Unveils Custom-Designed Data Center AI Chips, Racks and Liquid Cooling
Thursday Nov 16, 2023
Ten years into the fourth industrial revolution, we now live in a “datacentered” world where data has become the currency of both business and personal value. In fact, the value proposition for every Fortune 500 company involves data. And now, seemingly out of nowhere, artificial intelligence has come along and is looking to be one of the most disruptive changes to digital infrastructure that we’ve ever seen.

In this episode of the Data Center Frontier Show podcast, Matt Vincent, Editor in Chief of Data Center Frontier, talks to Sean Farney, Vice President for Data Center Strategy for JLL Americas, about how AI will impact data centers.
Tuesday Nov 07, 2023
The Legend Energy Advisors (Legend EA) vision of energy usage is one in which all companies have real-time visibility into factors such as equipment efficiency, labor intensity, and consumption of power and other energy resources across their operations.
During this episode of the Data Center Frontier Show podcast, the company's CEO and founder, Dan Crosby, and his associate, Ralph Rodriguez, RCDD, discussed the Legend Analytics platform, which offers commodity risk assessment, infrastructure services, and real-time metering for energy usage and efficiency.
The firm contends that only through such "total transparency" will their clients be able to "radically impact" energy and resource consumption intensity at every stage of their businesses.
"My background was in construction and energy brokerage for a number of years before founding Legend," said Crosby. "The basis of it was helping customers understand how they're using energy, and how to use it better so that they can actually interact with markets more proactively and intelligently."
"That helps reduce your carbon footprint in the process," he added. "Our mantra is: it doesn't matter whether you're trying to save money or save the environment, you're going to do both of those things through efficiency -- which will also let you navigate markets more efficiently."
Legend EA's technology empowers the firm's clients to integrate all interrelated energy components of their businesses, while enabling clear, coherent communication across them.
This process drives transparency and accountability on “both sides of the meter,” as reckoned by the company, the better to eliminate physical and financial waste.
As stated on the firm's website, "This transparency drives change from the bottom up, enabling legitimate and demonstrable changes in enterprises’ environmental and financial sustainability."
Legend Analytics is offered as a software as a service (SaaS) platform, with consulting services tailored to the needs of individual customers, who include industrial firms and data center operators, in navigating the power market.
Additionally, Legend EA recently introduced the Ledge device, a network interface card (NIC) designed to securely gather energy consumption data from any system in an organization and bring it to the cloud in real time.
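Legend EA's announcement doesn't spell out the Ledge's protocol, but conceptually such a collector polls meter readings and streams them to a cloud endpoint. Here's a hypothetical sketch of that pattern; the endpoint URL, payload fields, and polling cadence are invented for illustration and don't describe the actual device:

```python
# Hypothetical sketch of a real-time energy telemetry collector.
# Endpoint, payload schema, and cadence are illustrative only;
# this is not Legend EA's actual Ledge implementation.
import json
import time
import urllib.request

ENDPOINT = "https://example.com/telemetry"   # placeholder URL

def read_meter():
    """Stand-in for polling a power meter (e.g., over Modbus)."""
    return {"device_id": "meter-01", "kw": 42.7, "ts": time.time()}

def push(reading):
    """POST one reading to the cloud as JSON."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(reading).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

while True:
    push(read_meter())
    time.sleep(5)   # stream readings in near real time
```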
Here's a timeline of key points discussed on the podcast:
1:15 - Crosby details the three interconnected parts of his firm's service: commodity risk assessment, infrastructure services, and the Legend Analytics platform for understanding energy usage and efficiency.
2:39 - Crosby explains how the Legend Analytics platform works in the case of data center customers, by providing capabilities such as real-time metering at various levels of a facility, as well as automated carbon reporting.
4:46 - The discussion unpacks how the platform is offered as a SaaS, and includes consulting services tailored to each customer's needs.
7:49 - Notes on how the Legend Analytics platform can gather data from disparate systems and consolidate it into one dashboard, allowing for AI analysis and identification of previously unknown issues.
10:25 - Crosby reviews the importance of accurate, real-time emissions tracking for ESG reporting, and provides examples of how the Legend Analytics platform has helped identify errors and save costs for clients. (A sketch of the underlying calculation follows this timeline.)
12:23 - Crosby explains how the company's new, proprietary NIC device, dubbed the Ledge, can securely gather data from any system and bring it to their cloud in real time, lowering costs and improving efficiency.
23:54 - Crosby touches on issues including challenges with power availability; trends in building fiber to power; utilizing power capacity from industrial plants; and on-site generation for enabling stable voltage.
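On the emissions-tracking point at 10:25, the core calculation behind automated carbon reporting is straightforward: multiply metered consumption by the grid's carbon intensity for each matching interval, then sum. A minimal sketch with hypothetical figures:

```python
# Location-based carbon accounting: energy x grid carbon intensity,
# summed interval by interval. Figures are hypothetical.

readings_kwh        = [120.0, 115.5, 130.2]   # metered energy per hour
intensity_g_per_kwh = [410, 395, 388]         # grid carbon intensity per hour

emissions_kg = sum(
    kwh * g / 1000.0                          # grams -> kilograms
    for kwh, g in zip(readings_kwh, intensity_g_per_kwh)
)
print(f"Emissions: {emissions_kg:.1f} kg CO2e")   # -> 145.3 kg CO2e
```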
Tuesday Oct 24, 2023
For this episode of the Data Center Frontier Show Podcast, we sat down for a chat with Andy Pernsteiner, Field CTO of VAST Data.
The VAST Data Platform embodies a revolutionary approach to data-intensive AI computing which the company says serves as "the comprehensive software infrastructure required to capture, catalog, refine, enrich, and preserve data" through real-time deep data analysis and deep learning.
In September, VAST Data announced a strategic partnership with CoreWeave, whereby CoreWeave will employ the VAST Data Platform to build a global, NVIDIA-powered accelerated computing cloud for deploying, managing and securing hundreds of petabytes of data for generative AI, high performance computing (HPC) and visual effects (VFX) workloads.
That announcement followed news in August that Core42 (formerly G42 Cloud), a leading cloud provider in the UAE, and VAST Data had joined forces in an ambitious strategic partnership to build a central data foundation for a global network of AI supercomputers that will store and learn from hundreds of petabytes of data.
This week, VAST Data announced another strategic partnership, with Lambda, an Infrastructure-as-a-Service and compute provider for public and private NVIDIA GPU infrastructure, that will enable a hybrid cloud dedicated to AI and deep learning workloads. The partners will build an NVIDIA GPU-powered accelerated computing platform for generative AI across both public and private clouds. Lambda selected the VAST Data Platform to power its On-Demand GPU Cloud, providing customer GPU deployments for LLM training and inference workloads.
The Lambda, CoreWeave and Core42 announcements mean that within the short space of three months, three burgeoning AI cloud providers have chosen to standardize on VAST Data as the scalable data platform behind their respective clouds. Such key partnerships, the company contends, position it to innovate through a new category of data infrastructure that will build the next-generation public cloud.
As Field CTO at VAST Data, Andy Pernsteiner is helping the company's customers to build, deploy, and scale some of the world’s largest and most demanding computing environments. Andy spent the past 15 years focused on supporting and building large scale, high performance data platform solutions.
As recounted by his biographical statement, from his humble beginnings as an escalations engineer at pre-IPO Isilon, to leading a team of technical ninjas at MapR, Andy has consistently been on the frontlines of solving some of the toughest challenges that customers face when implementing big data analytics and new-generation AI technologies.
Here's a timeline of key points discussed on the podcast:
0:00 - 4:12 - Introducing the VAST Data Platform; recapping VAST Data's latest news announcements; and introducing VAST Data's Field CTO, Andy Pernsteiner.
4:45 - History of the VAST Data Platform. Observations on the growing "stratification" of AI computing practices.
5:34 - Notes on implementing the evolving VAST Data managed platform, both now and in the future.
6:32 - Andy Pernsteiner: "It won't be for everybody...but we're trying to build something that the vast majority of customers and enterprises can use for AI/ML and deep learning."
07:13 - Reading the room, when very few in it have heard of "a GPU" or know what its purpose and role is inside AI/ML infrastructure.
07:56 - Andy Pernsteiner: "The fact that CoreWeave exists at all is proof that the market doesn't yet have a way of solving for this big gap between where we are right now, and where we need to get to, in terms of generative AI and in terms of deep learning."
08:17 - How VAST started as a data storage platform, and was extended to include an ambitious database geared for large-scale AI training and inference.
09:02 - How another aspect of VAST is consolidation, "considering what you'd have to do to stitch together a generative AI practice in the cloud."
09:57 - On how the biggest customer bottleneck now is partly the necessary infrastructure, but also partly the necessary expertise.
10:25 - "We think that AI shouldn't just be for hyperscalers to deploy" - and how CoreWeave fits that model.
11:15 - Additional classifications of VAST Data customers are reviewed.
12:02 - Andy Pernsteiner: "One of the unique things that CoreWeave does is they make it easy to get started with GPUs, but also have the breadth and scale to achieve a production state - versus deploying at scale in the public cloud."
13:15 - VAST Data sees itself as bridging the gap between on-prem and the cloud.
13:35 - Can we talk about NVIDIA for a minute?
14:13 - Notes on NVIDIA's GPU Direct Storage, which VAST Data is one of only a few vendors to enable.
15:10 - More on VAST Data's "strong, fruitful" years-long partnership with NVIDIA.
15:38 - DCF asks about the implications of recent reports that NVIDIA has asked about leasing data center space for its DGX Cloud service.
16:39 - Bottom line: NVIDIA wants to give customers an easy way to use their GPUs.
18:13 - Is VAST Data being positioned as a universally adopted AI computing platform?
19:22 - Andy Pernsteiner: "The goal was always to evolve into a company and into a product line that would allow the customer to do more than just store the data."
20:24 - Andy Pernsteiner: "I think that in the space that we're putting much of our energy into, there isn't really a competitor."
21:12 - How VAST Data is unique in its support of both structured and unstructured data.
22:08 - Andy Pernsteiner: "In many ways, what sets companies like CoreWeave apart from some of the public cloud providers is they focused on saying, we need something extremely high performance for AI and deep learning. The public cloud was never optimized for that - they were optimized for general purpose. We're optimized for AI and deep learning, because we started from a place where performance, cost and efficiency were the most important things."
23:03 - Andy Pernsteiner: "We're unique in this aspect: we've developed a platform from scratch that's optimized for massive scale, performance and efficiency, and it marries very well with the deep learning concept."
24:20 - DCF revisits the question of bridging the perceptible gap in industry knowledge surrounding AI infrastructure readiness.
25:01 - Comments on the necessity of VAST partnering with organizations to build out infrastructure.
26:12 - Andy Pernsteiner: "It's very fortunate that Nvidia acquired Mellanox in many ways, because it gives them the ability to be authoritative on the networking space as well. Because something that's often overlooked when building out AI and deep learning architectures is that you have GPUs and you have storage, but in order to feed it, you need a network that's very high speed and very robust, and that hasn't been the design for most data centers in the past."
27:43 - Andy Pernsteiner: "One of the unique things that we do, is we can bridge the gap between the high performance networks and the enterprise networks."
28:07 - Andy Pernsteiner: "No longer do people have to have separate silos for high performance and AI and for enterprise workloads. They can have it in one place, even if they keep the segmentation for their applications, for security and other purposes. We're the only vendor that I'm aware of that can bridge the gaps between those two worlds, and do so in a way that lets customers get the full value out of all their data."
28:58 - DCF asks: Armed with VAST Data, is a company like CoreWeave ready to go toe-to-toe with the big hyperscale clouds - or is that not what it's about?
30:38 - Andy Pernsteiner: "We have an engineering organization that's extremely large now that is dedicated to building lots of new applications and services. And our focus on enabling these GPU cloud providers is one of the top priorities for the company right now."
32:26 - DCF asks: Does a platform like VAST Data's address the power availability dilemma that's going to be involved with data centers' widespread uptake of AI computing?
Here are links to some recent related DCF articles:
Nvidia is Seeking to Redefine Data Center Acceleration
Summer of AI: Hyperscale, Colocation Data Center Infrastructure Focus Tilts Slightly Away From Cloud
AI and HPC Drive Demand for Higher Density Data Centers, New As-a-Service Offerings
How Intel, AMD and Nvidia are Approaching the AI Arms Race
Nvidia is All-In on Generative AI
Tuesday Oct 10, 2023
For the latest episode of the Data Center Frontier Show Podcast, editors Matt Vincent and David Chernicoff sat down with Mike Jackson, Global Director of Product, Data Center and Distributed IT Software for Eaton.
The purpose of the talk was to learn about the company's newly launched BrightLayer Data Centers suite, and how it covers the traditional DCIM use case - and a lot more.
According to Eaton, the BrightLayer Data Centers suite's digital toolset enables facilities to efficiently manage an increasingly complex ecosystem of IT and OT assets, while providing full system visibility into data center white space, grey space and/or distributed infrastructure environments.
"We're looking at a holistic view of the data center and understanding the concepts of space, power, cooling, network fiber," said Jackson. "It starts with the assets and capacity, and understanding: what do you have, and how is it used?"
Here's a timeline of points discussed on the podcast:
0:39 - Inquiring about the BrightLayer platform and its relevance to facets of energy, sustainability, and design in data centers.
7:57 - Explaining the platform's "three legs of the stool": Data center performance management, electrical power monitoring, and distributed IT performance management. Jackson describes how all three elements are part of one code base.
10:42 - Jackson recounts the BrightLayer Data Centers suite's beta launch in June and its official commercial launch in September, noting that, out of the gate, over 30 customers were already actively using the platform across different use cases.
13:02 - Jackson explains how the BrightLayer Data Centers suite's focus on performance management and sustainability is meant to differentiate the platform from other DCIM systems, in attracting both existing and new Eaton customers.
17:16 - Jackson observes that many customers are being regulated or pushed into sustainability goals, and how the first step for facilities in this situation is measuring and tracking data center consumption. He further contends that the BrightLayer tools can help reduce data center cooling challenges while optimizing workload placement for sustainability, and cost savings.
20:11 - Jackson talks about the importance of integration with other software and data center processes, and the finer points of open API layers and out-of-the-box integrations.
22:26 - In terms of associated hardware, Jackson reviews the Eaton EnergyAware UPS series' ability to proactively manage a data center's power drop via handling utility and battery sources at the same time. He further notes that many customers are now expressing interest in microgrid technology and use of alternative energy sources.
27:21 - Jackson discusses the potential for multitenant data centers to use smart hardware and software to offset costs and improve efficiency, while offering new services to customers and managed service providers.
Tuesday Sep 26, 2023
For this episode of the Data Center Frontier Show Podcast, DCF editors Matt Vincent and David Chernicoff chat with Tiffany Osias, VP of Colocation for Equinix.
Osias begins by discussing the company's investment in a range of data center innovations to help its customers enter new markets and gain competitive advantages through burgeoning AI and machine learning tools.
In the course of the discussion, we also learn about Equinix's deployment of closed loop liquid cooling technologies in six data centers in 2023, and where the company stands on offering increased rack densities for powering AI workloads.
Osias also discusses developments along the course of Equinix helping its customers to optimize their hybrid cloud and multi-cloud architectures and strategies.
Data center sustainability also factors into the conversation, as Osias touches on how Equinix aims to achieve 100% renewable energy coverage by 2030.
Here's a timeline of key discussion points in Data Center Frontier's podcast interview with Equinix VP of Colocation, Tiffany Osias:
1:09 - Osias explains how Equinix invests in data center innovation to help its customers enter new markets, contain costs, and gain a competitive advantage, especially as AI and machine learning become more prevalent in decision-making processes.
1:50 - The discussion turns to how Equinix enables its customers' use of AI by providing secure, reliable service and efficient cooling options, including advanced liquid cooling technologies.
4:07 - Osias remarks on how Equinix plans to deploy closed loop liquid cooling in six data centers in 2023 to meet increasing demand from customers for full liquid-to-liquid environments.
5:49 - We learn how Equinix offers high-density racks for customers running 10-50+ kW per rack, and provides a bespoke footprint for each customer based on their power consumption needs and cooling capabilities (see the sizing sketch after this timeline).
7:14 - Osias remarks on how liquid cooling can have a positive impact on data center sustainability by reducing physical footprint, carbon emissions from manufacturing, and improving cooling efficiency. The company's use of renewable energy is also examined.
10:19 - Osias describes how AI impacts the Equinix approach to data center infrastructure, and the importance of partnerships and interconnection strategies.
12:09 - Osias discusses how Equinix aims to achieve 100% renewable energy coverage by 2030 and has made progress towards that goal.
13:21 - Notes on how Equinix helps customers optimize their hybrid multi-cloud architecture and interconnect with cloud and storage providers.
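On the density point at 5:49, footprint sizing reduces to simple arithmetic once per-rack power is fixed: the same IT load occupies far fewer racks at higher density. A quick illustrative calculation; the load figure is hypothetical, not an Equinix number:

```python
# Rack-count arithmetic for a fixed IT load at different densities.
# The 1 MW load is illustrative, not an Equinix figure.
import math

it_load_kw = 1000   # total IT load to house

for density_kw in (10, 25, 50):
    racks = math.ceil(it_load_kw / density_kw)
    print(f"{density_kw} kW/rack -> {racks} racks")
# 10 kW/rack -> 100 racks
# 25 kW/rack -> 40 racks
# 50 kW/rack -> 20 racks
```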
Read the full article about the podcast for interview transcript highlights, plus a recent video from Equinix regarding data center sustainability.
Tuesday Sep 12, 2023
Data Center Frontier editors Matt Vincent and David Chernicoff recently caught up with Cyxtera's Field CTO Holland Barry on the occasion of Cyxtera and Hewlett Packard Enterprise (HPE) announcing a new collaboration to help simplify customers' hybrid IT strategies.
Cyxtera is now leveraging HPE's GreenLake edge-to-cloud platform to support an enterprise bare metal platform. The podcast discussion extends to how Cyxtera is presently focused on supporting AI workloads in data centers and collaborating with HPE to offer a multi-hybrid cloud strategy.
Barry revealed during the podcast that Cyxtera already supports 70 kilowatt racks in 18 markets, and is discussing expanded deployments with its customers and partners. Barry added that many current customers are considering moving to private cloud platforms, due to rising public cloud costs and unexpected fees.
Barry said, "My function here at Cyxtera deals largely with the technologies that we both implement internally and also that we deploy within the data centers themselves to make sure that experience of being in the data center colo facility is seamless, feels as much like cloud as it can in terms of the provisioning of services, how we bill for things, things like that."
He added, "Generally speaking, I'm a technologist at heart and I just want to make sure that what we're building is what's useful for the market to consume."
Here's a list of key points discussed on the podcast:
2:01 - Barry talks about Cyxtera's vision for supporting AI workloads in data centers, including cooling technologies, network speed, power designs, and accommodating adjacencies with edge and public cloud platforms.
4:18 - DCF's Chernicoff asks if Cyxtera will offer 70 kilowatt racks a la Digital Realty. Barry explains that Cyxtera already supports this capacity in 18 of its markets, and is in active discussions with customers and partners over an expansion.
5:40 - Barry discusses how Cyxtera's collaboration with HPE addresses rising cloud costs and furthers a multi-hybrid cloud strategy, including Cyxtera's new enterprise bare metal platform and options for opex financing models.
7:53 - Use cases for customers moving to the HPE GreenLake solution via Cyxtera are discussed, including repatriating cloud workloads and tech refreshes.
15:03 - Asked about the convergence of cloud and hybrid IT strategies, Barry says that Cyxtera views itself as part of such transformations, and that the provider is up front with its customers about which workloads are best suited for its platform. The trend of recalibrating workloads from the public cloud back to data centers for better cost management is also discussed.
18:22 - Barry expounds on how egress fees and other unexpected costs can lead to a "death by 1000 cuts" situation for public cloud users, driving them to consider private cloud options.
19:57 - Barry observes that many customers are realizing the costs of the public cloud and considering moving to a private cloud solution, and emphasizes the importance of Cyxtera making this transition as easy as possible through technology choices and partnerships.
21:57 - Barry comments on the new Cyxtera partnership with HPE in the context of providing choices and solutions to make moving customer workloads to their venue as easy as possible, with the goal of building a multi-hybrid cloud reality in the future.
Background on Cyxtera
Cyxtera Technologies operates a global network of 60 data centers, supports 2,300 customers, and had $746 million in revenue in 2022. The company was formed in 2016 when Medina Capital, led by former Terremark CEO Manuel Medina, teamed with investors including BC Partners to buy the data center portfolio of CenturyLink for $2.15 billion. It was at that time one of several data center players seeking to build a colocation business atop a portfolio of data centers spun off by telecom companies.
This April, Data Center Frontier's Rich Miller reported that Cyxtera Technologies was reportedly fielding interest from suitors as it sought to reduce its debt load. Data Center Dynamics at that time shared that Cyxtera was exploring options for a sale or capital raise, citing a Bloomberg story that said private equity suitors were studying the company's operations. Shares of Cyxtera had fallen sharply in value during that timeframe and were trading at 31 cents a share at one point, giving the company a market capitalization of about $55 million, a far cry from the $3.4 billion valuation placed on the company when it went public in 2021 through a merger with Starboard Value Acquisition Corp.
In May, as shares of Cyxtera fell to new lows, DCF reported lenders for the company said they would provide the colocation provider with $50 million in new funding, allowing it more time to arrange a sale or line up new capital. In June, the colocation provider filed for Chapter 11 bankruptcy. After working for months to find a buyer or reduce its debt load, the company decided it would now restructure through a pre-packaged bankruptcy.
The Chapter 11 filing was part of an arrangement with its lenders, who retained the right to gain a controlling equity interest in the company under terms of a restructuring agreement. At the time of the bankruptcy filing, some of the company's lenders committed to provide $200 million in financing to enable Cyxtera to continue operating as it restructures.
"Cyxtera expects to use the Chapter 11 process to strengthen the company's financial position, meaningfully deleverage its balance sheet and facilitate the business’s long-term success," the company said in a press release." More details have been made available on Cyxtera's restructuring web site. Cyxtera subsidiaries in the United Kingdom, Germany and Singapore are not included in the bankruptcy case, which was filed in New Jersey.
Dgtl Infra's Mary Zhang has done significant reporting over the summer on Cyxtera's lease rejections in the wake of the bankruptcy filing, as well as charting the company's timeline extension for its bankruptcy-led sale process into late September.
In his June reporting on Cyxtera's bankruptcy filing, DCF's Miller noted that:
"Since Cyxtera leases many of its data centers, Cyxtera's Chapter 11 filing creates a potential challenge for its landlords. Cyxtera leases space in 15 facilities operated by Digital Realty, representing $61.5 million in annual revenue, or about 1.7 percent of Digital's annual revenue.
It also leases space in 6 data centers owned by Digital Core REIT, a Singapore-based public company sponsored by Digital Realty. That includes two sites in Los Angeles, three in Silicon Valley and one in Frankfurt. The $16.3 million in annual rent from Cyxtera represents 22.3 percent of revenue for Digital Core REIT.
A bankruptcy filing provides debtors with the opportunity to reject leases to reduce their real estate costs. In its press release, Cyxtera noted that it "is continuing to evaluate its data center footprint, consistent with its commitment to optimizing operations."
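A quick back-of-the-envelope check shows why those concentration figures cut so differently for the two landlords; the implied total revenues follow directly from the reported rents and percentages:

```python
# Implied landlord revenue from the reported Cyxtera rent concentrations.
dlr_rent, dlr_share   = 61.5e6, 0.017   # Digital Realty: $61.5M ~ 1.7%
reit_rent, reit_share = 16.3e6, 0.223   # Digital Core REIT: $16.3M ~ 22.3%

print(f"Implied Digital Realty revenue:    ${dlr_rent / dlr_share / 1e9:.1f}B")
print(f"Implied Digital Core REIT revenue: ${reit_rent / reit_share / 1e6:.0f}M")
# -> roughly $3.6B vs. $73M: the same tenant is a rounding error for one
#    landlord and nearly a quarter of revenue for the other.
```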
An August report from Bloomberg's Reshmi Basu stated that Cyxtera had drawn interest in its assets from multiple parties, including Brookfield Infrastructure Partners and Digital Realty Trust Inc., according to people with knowledge of the situation.
In a recent email to this editor regarding Cyxtera, DCF's Miller opined further:
"The key questions for Cyxtera are really all about the bankruptcy outcome, and where that stands. The future could be very different for Cyxtera depending who the winning bidder is and whether they would reject leases. For example, Digital Realty is reported to be one of the bidders. That makes sense, as Cyxtera leases 12 facilities from them and DLR has a vested interest in protecting that income. But if Digital wins the auction, do they keep leasing space in Cyxtera’s many non-Digital sites? Or do they reject those leases and consolidate? The auction winner will guide future strategy for Cyxtera. And it could be very different if it’s a private equity firm vs. a strategic buyer like Digital Realty."
On August 7, concurrent with its business update for Q2, Cyxtera announced that it had reached a key milestone in its Chapter 11 process by filing a proposed plan of reorganization with the U.S. Bankruptcy Court for the District of New Jersey, and said it had reached an agreement with its lenders to optimize the company's capital structure and reduce its pre-filing funded debt by more than $950 million. In its Q2 update, the company said it had delivered solid growth in total revenue, recurring revenue, core revenue and transaction adjusted EBITDA. Cyxtera's August 7 press release added that negotiations around the company's sale alternative remained active.
According to a press release, the proposed reorganization plan is supported by certain of Cyxtera’s lenders who collectively hold over two-thirds of the company’s outstanding first lien debt, and are parties to Cyxtera’s previously announced restructuring support agreement. The company said the proposed plan provides flexibility for the company to pursue a balance sheet recapitalization or a sale of the business.
Cyxtera noted that if the plan is approved and a recapitalization is consummated, the lenders have committed to support a holistic restructuring of the company’s balance sheet. Such a restructuring would eliminate more than $950 million of Cyxtera’s pre-filing debt and provide the company with enhanced financial flexibility to invest in its business for the benefit of its customers and partners.
For Q2 of 2023, Cyxtera said its total revenue increased by $14.9 million, or 8.1% YoY, to $199.0 million in the second quarter of 2023. On a constant currency basis, the company's total revenue increased by $15.1 million, or 8.2% YoY. Recurring revenue increased by $15.8 million, or 9.1% YoY, to $190.0 million in the second quarter.
Cyxtera added that its core revenue increased by $17.4 million, or 10.3% YoY, to $186.2 million in the second quarter. Finally, the company said its transaction Adjusted EBITDA increased by $6.4 million, or 10.7%, to $66.4 million and increased by $6.5 million, or 10.9% YoY, on a constant currency basis, in the second quarter.
Carlos Sagasta, Cyxtera’s Chief Financial Officer, said, “We are pleased to have delivered another quarter of solid growth across the business, underscoring the strength of our offering and the value we create for our global customers. We expect to continue building on this momentum as we successfully complete the process to strengthen our financial position for the long term.”
The press release added that in either a recapitalization or sale scenario, the company remains on track to emerge from the court-supervised process no later than the fall of this year. The company said it had received multiple qualified bids to date. Final bids from interested parties in the sale process were originally due on August 18, a deadline which came and went. An auction slated for August 30 was also cancelled.
Nelson Fonseca, Cyxtera’s Chief Executive Officer, commented, “We continue to make important progress in our court-supervised process, while demonstrating solid performance across our business. Filing this plan with the support of our lenders provides us a path to emerge in a significantly stronger financial position.”
Here are links to some recent DCF articles on Cyxtera:
Colocation Provider Cyxtera Files for Chapter 11 Bankruptcy
Cyxtera Gets $50 Million Funding, More Time to Seek a Buyer
As Stock Price Slumps, Cyxtera Reportedly Mulling Capital Raise or Sale
Cyxtera Goes Public as Starboard SPAC Acquisition Closes
Cyxtera to Go Public Through $3.4 Billion Merger With Starboard SPAC
Tuesday Aug 29, 2023
Recognizing how data center liquid cooling technology has taken the spotlight this year, in this episode of the Data Center Frontier Show podcast, DCF Editor in Chief Matt Vincent sits down with Mark Fenton, Sr. Product Marketing Manager for Cadence Design Systems; and Mark Seymour, Distinguished Engineer with Cadence and co-founder and CEO of Future Facilities, Ltd., a company specializing in digital twin technology for data centers, whom Cadence acquired in July of 2022.
The discussion unpacks some of the implications of the rise of data center liquid cooling technology for data center designs in the era of AI, as proclaimed earlier this month in the pages of VentureBeat.
Here's a timeline of points discussed on the podcast:
1:06 - Transitioning to the "AI Era" of Data Centers?
2:06 - The Cloud and AI Are Absolutely Symbiotic
3:40 - Liquid Cooling Customers: Traditional vs. Now
5:43 - The Beauty of Direct Liquid to Chip Technologies
7:07 - The Issue with Rack Retrofits
8:17 - Timing of Liquid Cooling Imperatives for Data Center Design
11:02 - Cost Considerations for Liquid Cooling: Is PUE a Bad Premise? (see the worked example after this timeline)
13:13 - How Data Center Design Tools are Accounting for Liquid Cooling Technologies
14:40 - Digital Twins for Air Cooled vs. Liquid Cooling Data Centers
16:30 - Liquid Cooling Doesn't Stop Inside the White Space
17:31 - How Liquid Cooling Improves Sustainability and ESG for Data Centers
18:46 - Liquid Cooling Can Potentially Produce Higher-Quality Waste Heat
20:18 - The Holistic Efficiencies of Data Center Liquid Cooling
22:29 - From Opportunities to Challenges
23:32 - Data Centers Love a Silver Bullet
25:34 - Evolution of Data Center Liquid Cooling Designs
26:21 - The Problem Is Power Densities are Rising
27:36 - Drawing Distinctions for Immersion Cooling
29:35 - Immersion Cooling Maintenance Questions
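On the PUE question raised at 11:02, one commonly cited reason PUE can be a "bad premise" for evaluating liquid cooling is that server fans count as IT load: removing them via direct liquid cooling can nudge PUE upward even as total energy consumption falls. A worked illustration with hypothetical numbers:

```python
# Why PUE can mislead for liquid cooling: PUE = total / IT, and server
# fans count as IT load. All numbers hypothetical.

# Air-cooled: 1000 kW IT (including ~100 kW of server fans) + 250 kW facility
# Liquid-cooled: server fans eliminated (900 kW IT) + 230 kW facility
scenarios = {"air": (1000, 250), "liquid": (900, 230)}

for label, (it_kw, facility_kw) in scenarios.items():
    total_kw = it_kw + facility_kw
    print(f"{label}: PUE {total_kw / it_kw:.2f}, total {total_kw} kW")
# air:    PUE 1.25, total 1250 kW
# liquid: PUE 1.26, total 1130 kW  <- PUE worsens while real usage drops ~10%
```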
Here are links to some recent DCF articles on data center liquid cooling technology:
Investors Are Warming Up to Liquid Cooling
Liquid Cooling Is In Your Future. Are You Ready?
How to Get Started on Your Immersion Cooling Journey
Direct Liquid Cooling - The Ultimate Guide for Data Centers
Liquid Cooling: Going Beyond Water
Four Factors to Consider When Selecting the Right Glycol-Based Fluid for Liquid Cooling
Why Liquid Cooling is Critical for Your Data Center's Future
Tuesday Aug 15, 2023
Premised on DCF's recent article series centered on data center diesel backup generator technology, the latest episode of the Data Center Frontier Show podcast finds site editors Matt Vincent and David Chernicoff recounting how Aligned Data Centers' Quantum Loophole campus was recently called out by the State of Maryland over a permitting snag in a contentiously approved plan for construction of 168 data center diesel generators, amounting to over 500 MW of backup power generation.
Data centers like Aligned's Quantum Loophole campus, which is being raised on the site of a former aluminum smelting plant, seek to do in Maryland what so many others are doing next door in Northern Virginia. Maryland does want the data center business, but isn't having it without certain qualifications being met, in the form of the state's Certificate of Public Convenience and Necessity (CPCN) licensing process.
As recorded by DCD, in the wake of the permitting snag, Maryland officials have wondered aloud about clean energy alternatives, to the point of expressing incredulity that carbon-emitting technology is even on the table -- especially given certain outside realities, not least Aligned's use of microgrid power at its Plano, Texas data center.
Chernicoff and Vincent arrive at the conclusion that a modular, incremental technology approach lets data centers draw on a mosaic of available backup power generation solutions, diesel included, which the overall industry currently requires. Chernicoff also notes how Tier 4 standards for data center diesel power have gotten significantly cleaner after two decades of refinement.
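For a sense of the scale involved, Aligned's plan pencils out to roughly 3 MW per unit (500+ MW across 168 generators), a typical rating for large standby gensets. Below is a rough sketch of how such a fleet might be sized; the load figure, unit rating, and N+1 block scheme are illustrative assumptions, not Aligned's actual design:

```python
# Rough backup-generator fleet sizing with N+1 redundancy per block.
# Load, unit rating, and block scheme are assumptions for illustration.
import math

critical_load_mw = 480   # load that must ride through a grid outage
unit_mw = 3.0            # nameplate rating per generator
block_size = 8           # generators per redundancy block (N+1 each)

needed = math.ceil(critical_load_mw / unit_mw)   # units to carry the load
blocks = math.ceil(needed / block_size)          # redundancy blocks
total = needed + blocks                          # one spare unit per block
print(f"{needed} units for load + {blocks} spares = {total} generators")
# -> 160 + 20 = 180 generators (compare the 168 units / 500+ MW in the plan)
```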
Here’s a timeline of points discussed on the podcast:
1:05 - The Issue with Aligned Data Centers' Quantum Loophole Campus In Maryland
2:00 - Diesel and Maryland Are At Loggerheads
4:00 - If Someplace Ever Screamed Out for a Microgrid ...
5:20 - Perceptions of Diesel Power
6:00 - Cleaner Generators and Backup Power Runtime Realities
6:42 - The 3 Big Players in Data Center Diesel Generators
7:14 - Competitive Advantages of No-Load Maintenance
8:20 - Alternatives to Diesel: Microgrid, Battery Backup, SMR, and Biodiesel Technologies
9:44 - A Catch-22 Situation for Data Centers
10:41 - Bits and Pieces of Technology
10:59 - The Benefit of Building from a Clean Slate
11:29 - Building an Entire Data Center Campus, You Expect To Be There For a Decade or Three
12:00 - Could a Microgrid Ever Furnish On-Demand Gigawatt Power?
12:27 - Enclosures for Diesel Backup Power Generators
13:21 - Quality of Support a Huge Competitive Factor
14:17 - The Scoop on Supply Chain
15:15 - Diesel Generator Sizing Concerns
16:01 - Overprovisioning for Backup Power Is an Issue
17:10 - Where Diesel Power Generation Meets Sustainability
18:08 - A Stepping Stone to Other Backup Power Solutions?
Here are links to some recent DCF articles on backup power for data centers:
Top-Level Issues to Consider When Selecting Backup Generator Technology
Sustainability Advantages of HVO Fuel for Diesel Generators
Virginia Ends Effort to Shift Data Centers to Generators in Grid Alerts
New Technology and Practices Improve the Environmental Performance of Diesel Generators
Beyond Diesel: Sustainable Onsite Power for Data Centers
Microsoft Plans to Stop Using Diesel Generators by 2030
Google Looks to Batteries as Replacement for Diesel Generators
Rethinking the Data Center: Hydrogen Backup is Latest Microsoft Moonshot