The Future of Immersion Cooling and Its Impact on Data Center Technology
Today’s applications and network-enabled systems require an immense amount of computing power. And as these applications increasingly leverage High-Performance Computing (HPC) and Artificial Intelligence (AI) technologies as part of their core offerings, they begin to require computing resources in close proximity – oftentimes in the same rack.
This proximity requirement has driven rack densities from traditional levels of 10 to 15kW per rack to north of 100kW per rack. These high-density racks not only exceed the capabilities of conventional air-cooled solutions in the data center but press the upper limits of more mainstream liquid-cooled solutions, forcing companies to search for alternatives.
When densities climb this high, the cooling medium must change from air to liquid – a shift that has renewed interest in an older technology: immersion-based cooling.
Data Center Cooling Advancement Using Liquids
Over the past few months, I’ve seen several articles on hyperscale companies and IT giants testing immersion cooling as a potential solution for their present and future data centers. One of the most intriguing advancements involved Microsoft, which was immersing servers in a liquid of its own creation to help keep them cool.
But what exactly is immersion cooling? Is it a feasible and scalable way to cool servers? And what ramifications could an industry-wide embrace of immersion cooling have on the data center industry? Since the technology is so interesting and seemingly gaining traction, I wanted to take a closer look at its origins, where it stands today and its potential impact on future data center technology.
What’s Old is New Again
Immersion cooling has actually been around since 1985 with the advent of the Cray-2 computer system. While the Cray-2 was an incredible supercomputer for its day, it would be impossible to run AWS on a system like that. But immersion cooling wasn’t limited only to old-school supercomputers. Government entities historically used immersion cooling as well.
Many of these early immersion cooling deployments used oil-based liquids, which were hazardous if ingested, messy to clean up and disruptive to fiber-optic connections. This made it virtually impossible to submerge a computer fully – something that new iterations of immersion cooling are looking to correct.
Today, solutions are entering the market with chemical makeups and properties that make them much more manageable in the mainstream. These designer fluids more closely mimic water, are generally non-toxic and do not interfere with optical connections – meaning that the entire computer, fiber optics included, can be fully immersed.
With these new advancements in immersion cooling technologies and fluids comes renewed interest in the concept across the technology industry – and for very good reason.
Companies are continuously looking for ways to increase kilowatts per square foot in the data center while, at the same time, driving real improvements in Power Usage Effectiveness (PUE). In traditional air-cooled environments, these two goals are generally diametrically opposed: the denser the racks, the more fan and chiller power is needed to cool them. One way to bring these two initiatives in line with one another is the use of liquid cooling.
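To make the tension concrete, recall that PUE is simply total facility power divided by IT equipment power. Here is a minimal sketch of that arithmetic; the overhead figures are illustrative assumptions, not measurements from any particular facility:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# Illustrative assumptions, not measured data:
it_load_kw = 1000.0           # server (IT) load
air_overhead_kw = 600.0       # fans, CRAH units and chillers in an air-cooled hall
immersion_overhead_kw = 50.0  # pumps and heat rejection in an immersion hall

print(f"Air-cooled PUE: {pue(it_load_kw + air_overhead_kw, it_load_kw):.2f}")        # 1.60
print(f"Immersion PUE:  {pue(it_load_kw + immersion_overhead_kw, it_load_kw):.2f}")  # 1.05
```

Cutting cooling overhead is the only lever that moves PUE toward 1.0, which is exactly what liquid – and especially immersion – cooling targets.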
In hyperscale and enterprise data centers, this has traditionally meant bringing liquid to the rack through rear-door heat exchangers. But even that technology has its limitations, which are now being stressed by the computing requirements of HPC and AI. In an attempt to push those limits further, enable even more density and drive a lower PUE, the industry is starting to move the liquid directly to heat exchangers on the chips.
The key difference between traditional liquid cooling and immersion is contact. Traditional liquid-cooled solutions do not expose the computer components directly to fluids. In an immersion cooling solution, by contrast, racks become tanks, and the computers are submerged in an inert dielectric fluid. That fluid can remove massive amounts of heat far more effectively than air while keeping PUEs just north of one.
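To see why direct fluid contact is so effective, compare how much heat the same volumetric flow of each medium can carry away. The sketch below uses rough, representative property values for air and a mineral-oil-class dielectric coolant (assumed figures, not vendor specifications):

```python
# Heat carried by a flowing coolant: Q = rho * c_p * dT * V_dot
def heat_removed_kw(rho_kg_m3: float, cp_j_per_kg_k: float,
                    delta_t_k: float, flow_m3_s: float) -> float:
    return rho_kg_m3 * cp_j_per_kg_k * delta_t_k * flow_m3_s / 1000.0

delta_t = 10.0   # K temperature rise across the equipment (assumed)
flow = 0.1       # m^3/s of coolant passing over the electronics (assumed)

air = heat_removed_kw(1.2, 1005.0, delta_t, flow)    # typical air properties
oil = heat_removed_kw(850.0, 1900.0, delta_t, flow)  # mineral-oil-class fluid

print(f"Air:            {air:8.1f} kW")
print(f"Dielectric oil: {oil:8.1f} kW (~{oil / air:,.0f}x air)")
```

Per unit of flow, the liquid carries on the order of a thousand times more heat than air, which is what lets a single tank absorb what would otherwise be several racks’ worth of load.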
But moving in this direction will have significant ramifications on the design of computers and the future data centers that host them.
A Change Agent for Computer and Data Center Design
For immersion cooling to gain adoption, manufacturers must rethink how computers are built so they can be submerged in a horizontal tank rather than mounted in a vertical rack. Traditional computers are designed for air-cooled environments: they rely on fans and other moving parts throughout the chassis that will not operate properly if submerged.
Even if those parts could function in fluid, fan placement means traditional servers are engineered to be racked and serviced horizontally, from both the front and the rear. Immersion cooling requires that designers remove the fans and other traditional moving parts and design the server to be serviced vertically, with easy access to all components from the exposed edge of the machine. This is not a huge change, but it is a necessary one for immersion cooling to become more mainstream.
But what about the data centers themselves? How will a future data center for immersion-cooled servers look and operate differently than one for air-cooled servers?
Fundamentally, immersion cooling changes the paradigm of the traditional rack. Data centers would have to evolve from housing vertical racks to housing horizontal tanks. The tanks are slightly larger than a traditional rack, so the floor footprint would naturally change. Fortunately, most data centers are built on slab construction, so structurally, the buildings will support the shift with very few changes.
Where the real change would be needed is in the mechanical and electrical systems.
With immersion cooling, a tank would be capable of supporting approximately five times the power/heat rejection compared to traditional air-cooled racks, taking the kW per rack from 20kW to 100kW. That shift would drive future data center providers to investigate ways to bring additional power into the facility, which would impact electrical room sizing and generator sizing. There would be other electrical design ripples downstream required to support the greater power density as well.
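To put that shift in perspective, here is a back-of-envelope sketch of the electrical impact on a single data hall; the hall size and per-position figures are illustrative assumptions:

```python
# Back-of-envelope electrical impact of converting a hall from air-cooled
# racks to immersion tanks (all figures are illustrative assumptions).
positions = 200          # rack/tank positions on the data hall floor
air_kw_per_rack = 20.0   # traditional air-cooled rack density
tank_kw = 100.0          # immersion tank density (~5x)

air_total_mw = positions * air_kw_per_rack / 1000.0
tank_total_mw = positions * tank_kw / 1000.0

print(f"Air-cooled hall IT load: {air_total_mw:.0f} MW")   # 4 MW
print(f"Immersion hall IT load:  {tank_total_mw:.0f} MW")  # 20 MW
# The jump from 4 MW to 20 MW is what drives larger electrical rooms,
# bigger generators and heavier upstream distribution.
```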
On the mechanical front, things would get much simpler. The immersion tanks are all self-contained, so they only need plumbing to provide supply and return water. This would require that the plumbing architecture provide last-mile water service to the tanks.
Most tanks’ kW ratings are based on a room-temperature water supply, so in theory, the data center provider could remove much of the chiller capacity associated with today’s data centers, assuming customers need only the advertised densities. Immersion solutions also carry chilled-water ratings that can run up to three times the advertised base rate, so the data center provider may want to deploy a chiller package but remove all other air handlers.
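As a rough guide to what that last-mile plumbing must deliver, the water flow needed per tank follows directly from the heat-rejection relation Q = ṁ·c_p·ΔT. The tank load and temperature rise below are assumed for illustration:

```python
# Water flow needed to reject a tank's heat load: Q = m_dot * c_p * dT
CP_WATER = 4186.0    # J/(kg*K), specific heat of water
RHO_WATER = 1000.0   # kg/m^3, density of water

def water_flow_l_per_min(heat_kw: float, delta_t_k: float) -> float:
    """Litres per minute of supply/return water needed to carry away heat_kw."""
    m_dot_kg_s = heat_kw * 1000.0 / (CP_WATER * delta_t_k)
    return m_dot_kg_s / RHO_WATER * 1000.0 * 60.0

# Assumed: a 100kW tank with a 10K rise between supply and return water.
print(f"{water_flow_l_per_min(100.0, 10.0):.0f} L/min per tank")  # ~143 L/min
```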
Should immersion take off, future data center operators would need to accommodate a much higher demand for power while adapting their mechanical plants and upgrading on-floor plumbing.
Is immersion cooling a feasible option for companies looking for higher density and more power efficiency? It certainly appears to be. Will immersion cooling take off in the very near future and supplant air cooling and conventional liquid cooling in the average hyperscale data center? That really comes down to three factors:
- Manufacturers and their ability to mass produce servers that are “immersion cooling friendly”
- Present and future data center providers and their flexibility and willingness to make the necessary mechanical and power changes to optimize their facilities for immersion cooling
- The requirements of the hyperscalers and IT companies, as well as their increased dependency on HPC and AI
Regardless, Vantage will be keeping an eye on the exciting new developments in this old technology and looks forward to partnering with customers to determine if immersion cooling is the best solution for their data center and IT requirements.
Steve Conner
Steve Conner serves as vice president, solutions engineering at Vantage Data Centers. He is responsible for leading the company’s sales team on technical requirements in pursuit of new business.
Conner has more than 25 years of experience in building and leading highly motivated sales and engineering teams. Prior to Vantage, Conner led the sales and engineering teams at Cloudistics, taking the start-up’s revenue from $0 to over $5 million in its first year of selling. He previously held multiple senior-level positions at Nutanix, where he built a multi-million-dollar business unit focused on managed service providers.
Conner holds a Bachelor of Science degree in computer science from the University of Richmond, a Master of Science degree in computer science from George Mason University, and an MBA from Florida Institute of Technology. As part of his focus on technology and enterprise architecture, Conner has earned multiple certifications, including CCNP/DP, CISSP and ISSAP.