Microfluidics Enhances AI Chip Performance

Data center rack density has increased rapidly in recent years. Operators are packing more computing power into each server rack to meet the needs of AI and other high-performance computing applications. This means that each rack requires more kilowatts of power and ultimately generates more heat. Cooling infrastructure is struggling to keep up.
“Rack densities have increased from an average of 6 kilowatts per rack eight years ago to the point where racks now ship at 270 kW,” says David Holmes, technical director of global industries at Dell Technologies. “Next year, 480 kW is coming, and megawatt racks will be here within two years.”
Corintis, a Swiss company, is developing microfluidic cooling, in which water or another coolant is piped directly to the parts of a chip that run hottest, to prevent overheating. In a recent test with Microsoft, servers running Microsoft’s Teams videoconferencing software removed heat up to three times more efficiently than existing cooling methods. Compared with traditional air cooling, microfluidics lowered chip temperatures by more than 80 percent.
Improving Chip Performance Using Microfluidics
A lower chip temperature allows the chip to execute instructions faster, increasing its performance. Chips operating at lower temperatures are also more energy efficient and fail less often. Additionally, the coolant can run at a higher temperature, making the data center more energy efficient by reducing the need for chillers and cutting water consumption.
The amount of water needed to cool a chip can be significantly reduced by directing the flow of liquid to the locations on the chip that generate the most heat. Remco van Erp, co-founder and CEO of Corintis, notes that the current industry standard is about 1.5 liters per minute per kilowatt of power. As chips approach 10 kW, that will soon mean 15 liters per minute to cool a single chip – a figure likely to draw the ire of communities worried about the impact of the supersized “AI factories” planned for their regions, some of which could contain a million GPUs or more.
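The arithmetic behind van Erp’s figure scales linearly with chip power, as a quick sketch shows (the helper function below is illustrative, not from the article):

```python
def coolant_flow_l_min(chip_power_kw, l_min_per_kw=1.5):
    """Coolant flow required at the industry-standard rate of
    roughly 1.5 liters per minute per kilowatt of chip power."""
    return chip_power_kw * l_min_per_kw

# A 10 kW chip at the standard rate needs 15 L/min, as quoted above.
print(coolant_flow_l_min(10))  # -> 15.0
```

Multiply that by a million GPUs and the motivation for making every droplet count becomes clear.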
“We need optimized liquid cooling specific to chips, to ensure that every droplet of liquid goes to the right place,” says van Erp.
Sam Harrison, co-founder of Corintis [left], and Remco van Erp hold a cold plate and a microfluidic core, respectively. Photo: Corintis
The simulation and optimization software developed by Corintis makes it possible to design networks of microscopically small channels in cold plates. Much like the arteries, veins, and capillaries of the body’s circulatory system, the ideal cold plate design for each chip type is a complex network of precisely shaped channels.
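The core idea of such hotspot-targeted design can be illustrated with a toy calculation: split a fixed coolant budget across chip zones in proportion to each zone’s heat output. This is a minimal sketch only; Corintis’s actual optimization software is proprietary, and the power map and flow budget below are made-up numbers.

```python
def allocate_flow(power_map_w, total_flow_l_min):
    """Divide a fixed coolant budget across chip zones in proportion
    to each zone's heat output, so hot zones receive more flow."""
    total_power = sum(power_map_w)
    return [total_flow_l_min * p / total_power for p in power_map_w]

# A hypothetical 4-zone chip: two hot compute tiles, two cooler I/O regions.
power_map = [300.0, 250.0, 50.0, 25.0]  # watts per zone
flows = allocate_flow(power_map, total_flow_l_min=1.0)
for zone, (p, f) in enumerate(zip(power_map, flows)):
    print(f"zone {zone}: {p:5.1f} W -> {f:.3f} L/min")
```

A real design must also account for pressure drop, channel geometry, and manufacturing limits, which is why the actual channel networks resemble branching capillaries rather than a simple proportional split.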
Corintis has expanded its additive manufacturing capabilities to be able to mass produce copper parts with channels as narrow as a human hair, around 70 micrometers. Its cold plate technology is compatible with current liquid cooling systems.
The company estimates that this approach can improve cold-plate cooling performance by at least 25 percent. By working directly with chipmakers to carve channels into the silicon itself, Corintis believes cooling performance can eventually improve tenfold.
Advancing Liquid Cooling for AI Chips
Liquid cooling is far from new. The IBM 360 mainframe, for example, was water-cooled more than half a century ago. Modern liquid cooling is largely a competition between immersion systems (in which racks and sometimes entire rows of equipment are submerged in cooling fluid) and direct-on-chip systems (in which cooling fluid is piped to a cold plate placed against a chip).
Immersion cooling isn’t ready for prime time yet. And while direct-on-chip cooling is widely deployed to keep GPUs cool, it cools only the surface of the chip.
“Liquid cooling in its current form is a one-size-fits-all solution, relying on simplistic designs that are not tailored to the chip, preventing good heat transfer,” says van Erp. “The optimal design of each chip is a complex network of precisely shaped micro-scale channels, tailored to the chip to guide the coolant to the most critical regions.”
Corintis is already working with chipmakers on improved designs. Chipmakers use the company’s thermal emulation platform to program heat dissipation on silicon test chips with millimeter resolution, then measure the resulting on-chip temperatures once the selected cooling method is installed. In other words, Corintis acts as a bridge between chip design and cooling design, enabling chip designers to build future AI chips with superior thermal performance.
The next step is to move from being a bridge between the cooling channel and chip design to unifying these two processes. “Modern chips and cooling are currently two separate things, with the interface between the two being one of the main bottlenecks for heat transfer,” says van Erp.
To increase cooling performance tenfold, Corintis is betting on a future where cooling is an integral part of the chip itself: microfluidic cooling channels will be etched directly inside the microprocessor package rather than machined into cold plates attached to its exterior.
Corintis has produced more than 10,000 copper cold plates and is scaling its manufacturing capacity to reach one million cold plates by the end of 2026. The company has also built a prototype line in Switzerland where it is developing cooling channels etched directly into chips rather than into a cold plate. The line is intended only for small quantities that demonstrate the basic concepts, which will then be handed off to chip and cold-plate manufacturers.
Corintis announced these expansion plans shortly after the release of the Microsoft Teams test results. The company is also opening offices in the United States to serve its American customers, along with an engineering office in Munich, Germany, and has announced the completion of a US $24 million Series A funding round led by BlueYard Capital with participation from other investors.