Enfabrica’s ACF-S and EMFASYS architecture could change how AI clusters coordinate tens of thousands of chips efficiently


- NVIDIA’s acquisition brings Enfabrica’s engineers directly into its AI ecosystem
- The EMFASYS chassis pools up to 18 TB of memory for GPU clusters
- The elastic memory fabric frees up scarce GPU memory for time-sensitive AI tasks
NVIDIA’s decision to spend more than $900 million on Enfabrica came as something of a surprise, especially since it sat alongside a separate $5 billion investment in Intel.
According to ServeTheHome, “Enfabrica has the coolest technology,” likely because of its unique approach to solving one of AI’s biggest scaling problems: linking tens of thousands of compute chips so they can function as a single system without wasting resources.
The deal suggests NVIDIA believes that solving interconnect bottlenecks is just as critical as securing chip production capacity.
A unique approach to the data fabric
Enfabrica’s Accelerated Compute Fabric (ACF-S) architecture was built with PCIe lanes on one side and high-speed networking on the other.
Its “Millennium” ACF-S device is a 3.2 Tbit/s switch with 128 PCIe lanes that can connect GPUs, NICs, and other devices while maintaining flexibility.
The company’s design lets data move between ports or through the chip with minimal latency, bridging Ethernet and PCIe/CXL technologies.
For AI clusters, that means higher utilization and fewer idle GPUs waiting on data, which translates into a better return on investment for expensive hardware.
Another piece of Enfabrica’s offering is its EMFASYS chassis, which uses CXL controllers to pool up to 18 TB of memory for GPU clusters.
This elastic memory fabric lets GPUs offload data from their limited HBM into shared memory across the network.
By freeing HBM for critical tasks, operators can cut token-processing costs.
Enfabrica has said the reductions could reach 50%, while letting workloads scale without exhausting local memory capacity.
For large language models and other AI workloads, these capabilities could become essential.
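To make the offload idea concrete, here is a minimal sketch (not Enfabrica’s actual API; the class, capacities, and policy are illustrative assumptions): allocations go to a GPU’s local HBM when they fit, and spill into a large shared CXL-attached pool when they don’t.

```python
# Hypothetical two-tier memory model: local HBM first, then a shared
# EMFASYS-style pool. Capacities are illustrative assumptions.

HBM_CAPACITY_GB = 141            # e.g. one modern GPU's local HBM
POOL_CAPACITY_GB = 18 * 1024     # up to 18 TB of pooled memory per chassis

class TieredMemory:
    def __init__(self, hbm_gb: float, pool_gb: float):
        self.hbm_free = hbm_gb
        self.pool_free = pool_gb

    def allocate(self, size_gb: float) -> str:
        """Place an allocation in HBM if it fits, else spill to the pool."""
        if size_gb <= self.hbm_free:
            self.hbm_free -= size_gb
            return "hbm"
        if size_gb <= self.pool_free:
            self.pool_free -= size_gb
            return "pool"
        raise MemoryError("allocation exceeds both tiers")

mem = TieredMemory(HBM_CAPACITY_GB, POOL_CAPACITY_GB)
print(mem.allocate(100))   # fits in local HBM -> "hbm"
print(mem.allocate(500))   # exceeds remaining HBM -> spills to "pool"
```

The point of the sketch is the asymmetry: the pool is two orders of magnitude larger than any single GPU’s HBM, so spilling cold data there keeps the fast local tier available for the hot working set.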
The ACF-S chip also offers high-radix redundancy. Instead of a few massive 800 Gbit/s links, operators can use 32 connections of 100 Gbit/s each.
If a switch fails, only about 3% of the bandwidth is lost, rather than a large portion of the network going offline.
This approach could improve the reliability of large-scale clusters, but it also increases network design complexity.
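The ~3% figure follows directly from the link count. A quick back-of-the-envelope check: at the same 3.2 Tbit/s aggregate, one failed link out of 32 costs 1/32 of capacity, versus 1/4 if that bandwidth were carried on four 800 Gbit/s links.

```python
# Fraction of aggregate bandwidth lost when a single link (or the
# switch behind it) fails, for the same total 3.2 Tbit/s fabric.

def loss_fraction(num_links: int) -> float:
    """Assumes bandwidth is split evenly across num_links links."""
    return 1 / num_links

print(f"32 x 100G: {loss_fraction(32):.1%} lost")   # -> 3.1%
print(f"4 x 800G:  {loss_fraction(4):.1%} lost")    # -> 25.0%
```

This is why high radix helps reliability: the failure domain shrinks with the number of independent links, at the cost of more ports and cabling to manage.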
The deal brings Enfabrica’s engineering team, including CEO Rochan Sankar, directly into NVIDIA, rather than leaving that innovation to rivals like AMD or Broadcom.
While NVIDIA’s Intel stake secures manufacturing capacity, this acquisition directly addresses scaling bottlenecks in AI data centers.


