Deeply Cool: The Rise of Underwater Data Centers

On June 10, 2025, China announced the launch of an enterprise-scale underwater data center powered entirely by offshore wind. Destined for the depths off Shanghai, it’s the latest in a series of global initiatives designed to bring down data center energy costs.

Seabed deployment offers several potential wins. Access to untapped real estate near major cities is one, but the main benefit is ready-made natural cooling. Experts say it could reduce the cost of running energy-intensive AI cloud servers by as much as 60%.

Sinking data centers below the surface seems like a simple and even elegant solution to a growing problem, but there are pros and cons. We look at the trend.

Key Takeaways

  • Data center energy consumption is rising fast. AI usually gets the blame, but the cost of keeping cloud servers cool and stable is a major factor.
  • Keeping enterprise-scale air conditioners blowing 24/7 consumes massive amounts of energy, accounting for up to half of data center power consumption.
  • So, the industry is looking for new solutions. Taking advantage of the cold waters at the bottom of lakes and oceans could be one.
  • Live deployments have begun. But there are downside risks and practical challenges to overcome.

Deep Cool for DeepMind?

In the latest move to burnish its sustainability credentials, China has announced plans for a commercial-scale seabed data center. It’s powered entirely by offshore wind and cooled by raw seawater, an eco-double whammy that could make it the world’s first 100% renewables-powered data center.

The partnership deal between the City of Shanghai and Shenzhen-based underwater data center firm Hicloud Technology outlines a two-phase, 24 MW underwater data center (UDC) sited on the seabed close to the city.

The booming data center sector is always on the hunt for new places to build, but any potential site needs a reliable source of nearby power to be viable. Some hyperscalers are setting up next to nuclear power plants or investing in small modular reactors (SMRs). Others are looking out to sea.

The potential advantages are obvious. One is access to cheap land close to major urban areas. The other is a cheap, natural source of cooling for increasingly hot-running AI processors.

The idea isn’t entirely new. Google was experimenting with floating data centers back in 2013 as one way to use cheap and plentiful seawater as a coolant. Its Hamina data center in Finland has been ocean-cooled since 2010.

In North America, the City of Toronto’s Deep Lake Water Cooling (DLWC) system feeds into an urban ‘cooling loop’ run by the local power utility. It’s used by city data centers to damp down energy costs.

As data centers grow in number and scale, they threaten to overwhelm local grids. Measures to minimize their energy consumption have to be found. But dropping them under the ocean? On first pass, it seems extreme, not to mention impractical.

Why Is Data Center Cooling So Costly?

Data centers are energy hogs, eating power from two troughs: operating electricity to keep servers and appliances running hot, and cooling electricity to stop them from overheating.

Cooling is handled by air conditioning (A/C), which has a dual nature of its own. A/C units have mechanical motors that require substantial power to function and must run constantly in data center environments.

They also expel hot air into the local environment, raising temperatures near each facility and changing microclimates in ways that impact vegetation and animal life.

Bar graph: global data center cooling market size by structure (2023-2033), highlighting growth in rack and room-based cooling. Cooling system vendors are making a mint. Source: Market.us

According to global data center consultancy Enconnex, the amount of heat generated by cloud servers is about the same as the amount of electricity they consume, and data center cooling systems already struggle to handle the output.

As energy-intensive AI and ASIC processors take up more and more rack space, extra cooling is needed. However, A/C and ventilation already eat up 30% to 50% of data center power bills. Annual spend on powering and maintaining enterprise cooling systems can amount to hundreds of thousands of dollars, while the costs in hyperscale data centers can run to the millions.
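To put those percentages in context, here is a minimal back-of-the-envelope sketch in Python. The IT load, PUE (power usage effectiveness) figure, and electricity price are illustrative assumptions, not numbers taken from the article or any specific facility.

    # Rough, illustrative estimate of annual non-IT (mostly cooling) energy cost.
    # All inputs are assumptions chosen for the example, not real facility data.
    IT_LOAD_KW = 1_000         # average IT (server) load in kilowatts
    PUE = 1.5                  # power usage effectiveness: total power / IT power
    PRICE_PER_KWH = 0.10       # electricity price in dollars per kWh
    HOURS_PER_YEAR = 24 * 365

    total_kw = IT_LOAD_KW * PUE
    overhead_kw = total_kw - IT_LOAD_KW        # cooling, ventilation, other non-IT loads
    overhead_share = overhead_kw / total_kw    # roughly a third of total power at PUE 1.5

    annual_overhead_cost = overhead_kw * HOURS_PER_YEAR * PRICE_PER_KWH
    print(f"Non-IT share of total power: {overhead_share:.0%}")
    print(f"Estimated annual overhead cost: ${annual_overhead_cost:,.0f}")

With these assumptions, the overhead works out to about a third of total power and an annual bill in the hundreds of thousands of dollars, in line with the ranges quoted above.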

Yet skimping on cooling isn’t an option. Without it, cloud servers and their supporting infrastructure are almost guaranteed to overheat, which can cause system crashes, damage hardware components, lead to downtime, and hit revenues.

Cooling costs are such a big line item on data center P&Ls that outside-the-box solutions are in the cards – including placing them in orbit. Undersea data centers look slightly less wild in comparison.

Pros & Cons

Cheap real estate, low-cost cooling, and proximity to both population centers and power plants are all compelling reasons to give sunken data centers a try. On the downside, there are practical challenges related to accessibility and scalability.

Pros

  • Reduced latency
  • Faster deployment
  • Hardware resilience

Cons

  • Hard to reach
  • Lack of clean energy
  • Scalability issues

The Advantages of Seabed Data Centers

Reduced latency

According to the UN, roughly 40% of the world’s population lives within an hour’s drive (100 kilometers) of a salt or freshwater shoreline. Placing underwater server farms off the coast of urban areas means data has a shorter distance to travel, reducing network latency.
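For a sense of scale, the quick Python sketch below compares round-trip propagation delay over 100 kilometers of fiber with a much longer inland route; the 1,500 km comparison distance is purely an illustrative assumption.

    # Back-of-the-envelope round-trip propagation delay over optical fiber.
    # Light in fiber travels at roughly two-thirds of its speed in a vacuum.
    # The 1,500 km comparison distance is an illustrative assumption.
    SPEED_OF_LIGHT_KM_S = 299_792
    FIBER_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3

    def round_trip_ms(distance_km: float) -> float:
        """Round-trip propagation delay in milliseconds.
        Ignores routing, switching, and server processing time."""
        return 2 * distance_km / FIBER_SPEED_KM_S * 1000

    print(f"100 km offshore:  {round_trip_ms(100):.1f} ms round trip")
    print(f"1,500 km inland:  {round_trip_ms(1_500):.1f} ms round trip")

Propagation delay alone comes to about 1 ms for the 100 km case versus roughly 15 ms for the longer haul – the kind of gap that matters for latency-sensitive cloud and AI workloads.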

Faster deployment

Underwater data centers are self-contained units built in a factory, then installed on site – a bit like a prefab house placed on a foundation. That means they benefit from manufacturing efficiencies that accelerate deployment timelines, whereas conventional data center construction projects take much longer to execute.

Hardware resilience

Cloud server hardware tends to last longer when deployed underwater because there is less physical wear and tear from human contact. A dry, lab-like environment filled with nitrogen rather than corrosive, oxygen-rich air also means less corrosion. In an experiment conducted by Microsoft, the company found that servers deployed in underwater data centers had one-eighth the failure rate of those deployed on land.

The Disadvantages of Seabed Data Centers

Hard to reach

If something does go wrong with an undersea server unit, sending in a repair team won’t be a simple matter. The location could be in deep waters, and weather conditions could delay a specialist team of IT-expert divers. Bespoke equipment will likely be needed, including mini-subs specially kitted for the job.

Lack of clean energy

Despite the offshore location, renewable generation sources like offshore wind may not be readily available nearby. And even where green power solutions exist, wind and solar are intermittent, while data centers can’t operate without a constant supply of electricity. Some projects have tried wave power generation, but the generator units need large waves to operate.

Scalability

Seabed data centers deployed to date have been limited in size. Linking enough of them together to match the capacity of a land-based hyperscale facility hasn’t been attempted yet, and whether it would work depends on hardware, power sources, and customer demand.

The Bottom Line

Most of the world’s data lives in the cloud. What if it moved under the sea? Microsoft’s Project Natick experiment off the Scottish coast suggests it could work.

Of 855 servers submerged for over two years, only six of them failed. Yet no plans have emerged to build on the findings. Redmond may have been scared off by the uncertainties.

Seabed data centers would be particularly vulnerable to nation-state adversaries or well-funded saboteurs. GenAI’s arrival means facilities need the flexibility to scale up quickly, and there are big questions to be answered about legal and regulatory jurisdiction when data repositories are located underwater. For now, it’s a matter of wait and sea.

FAQs

What is a seabed data center?

A self-contained, factory-built data center module installed on the sea floor, where the surrounding water provides natural cooling and nearby offshore generation can supply power.

How do underwater data centers save energy?

Cooling can account for 30% to 50% of a data center’s power bill. Using cold seawater instead of constantly running air conditioning cuts that overhead, with some estimates putting the savings at up to 60%.

Are seabed data centers safe and scalable?

Trials such as Microsoft’s Project Natick suggest hardware is more reliable underwater, but repair access, physical security, and scaling to hyperscale capacity remain open questions.

Mark de Wolf
Technology Journalist

Mark is a tech journalist specializing in AI, FinTech, CleanTech, and Cybersecurity. He graduated with honors from the Ryerson School of Journalism and studied under senior editors from The New York Times, BBC, and The Globe and Mail. His work has appeared in Esports Insider, Energy Central, Autodesk Redshift, and Benzinga.
