Nvidia has announced details of a new networking platform designed specifically with generative AI workloads in mind, promising lightning-speed lossless networking.
The company’s new Spectrum-X technology is built using the Spectrum-4 Ethernet switch and BlueField-3 DPU, and promises performance and power efficiency increases of as much as 1.7x.
In a press release, the company said: “The delivery of end-to-end capabilities reduces run-times of massive transformer-based generative AI models,” which in turn enables companies to operate and make decisions more quickly, unlocking cost-saving potential.
Gilad Shainer, Nvidia’s Senior Vice President of Networking, said: “Spectrum-X is a new class of Ethernet networking that removes barriers for next-generation AI workloads that have the potential to transform entire industries.”
Its key component, the Spectrum-4 Ethernet switch, is capable of 51 Tb/s speeds. For customers, this means that, paired with BlueField-3 DPUs and Nvidia LinkX optics, it enables end-to-end 400 Gigabit Ethernet networking.
The company also offered insight into the networking capabilities AI supercomputers can now tap into in real-world deployments:
“Nvidia Spectrum-X enables unprecedented scale of 256 200Gb/s ports connected by a single switch, or 16,000 ports in a two-tier leaf-spine topology to support the growth and expansion of AI clouds while maintaining high levels of performance and minimizing network latency.”
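As a back-of-the-envelope sanity check (my own arithmetic, not from Nvidia's release), the quoted port configuration lines up with the switch's aggregate bandwidth: 256 ports at 200 Gb/s each comes to 51.2 Tb/s, matching the 51 Tb/s figure cited for the Spectrum-4 switch.

```python
# Rough check (not from Nvidia's materials): does the per-port math
# match the Spectrum-4 switch's quoted 51 Tb/s aggregate bandwidth?
ports = 256
port_speed_gbps = 200  # Gb/s per port, as quoted

aggregate_tbps = ports * port_speed_gbps / 1000  # convert Gb/s to Tb/s
print(aggregate_tbps)  # 51.2 — in line with the quoted 51 Tb/s
```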
Elsewhere, Nvidia CEO Jensen Huang held up the Spectrum-4 switch chip during his opening keynote at the Computex expo, sharing more insight into its design. The chip packs 100 billion transistors onto a 90×90 mm die and draws around 500 watts, he said.
Nvidia is already testing Spectrum-X in Israel-1, a hyperscale generative AI supercomputer in its Israeli data center running Dell PowerEdge XE9680 servers based on the Nvidia HGX H100 eight-GPU platform.