At the OCP Global Summit, NVIDIA outlined its plans for gigawatt-scale AI factories, the next generation of large-scale artificial intelligence data centers. The company introduced the NVIDIA Vera Rubin NVL144 MGX rack servers, designed for high energy efficiency and straightforward scaling, with more than 50 technology partners gearing up to support the new platform. Each rack links 144 NVIDIA GPUs into a single system for demanding AI workloads.
Major technology companies are showcasing hardware and systems that support data centers running on 800 volts direct current (800 VDC), a significant step up from the lower-voltage distribution used in traditional designs. Because power is the product of voltage and current, raising the distribution voltage lowers the current needed to deliver the same power, so facilities can use less copper, lose less energy as heat, and deliver more power where it is needed most. Companies including Foxconn, CoreWeave, and Oracle are already building or planning data centers around this architecture.
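To make that voltage-versus-current tradeoff concrete, here is a back-of-the-envelope sketch in Python. The 1 MW load, 0.95 power factor, and the 54 VDC and 415 VAC baselines are illustrative assumptions chosen for comparison, not figures from the announcement; the point is simply that delivering the same power at a higher voltage takes far less current, and resistive loss in a conductor scales with the square of that current.

```python
# Back-of-the-envelope sketch: how distribution voltage affects the current
# (and therefore the copper and resistive loss) needed to feed the same load.
# The 1 MW load, 0.95 power factor and the baseline voltages are illustrative
# assumptions, not figures from the announcement.
import math

P_LOAD_W = 1_000_000  # power delivered to a group of racks (assumed)

def required_current(voltage_v: float, three_phase_ac: bool = False,
                     power_factor: float = 0.95) -> float:
    """Current needed to deliver P_LOAD_W at the given distribution voltage."""
    if three_phase_ac:
        # Three-phase AC: P = sqrt(3) * V_line * I * pf
        return P_LOAD_W / (math.sqrt(3) * voltage_v * power_factor)
    # DC: P = V * I
    return P_LOAD_W / voltage_v

scenarios = [
    ("54 VDC in-rack busbar", required_current(54.0)),
    ("415 VAC three-phase feed", required_current(415.0, three_phase_ac=True)),
    ("800 VDC distribution", required_current(800.0)),
]

i_800 = scenarios[-1][1]
for name, amps in scenarios:
    # Resistive loss scales with I^2 * R: halving the current cuts the heat
    # dissipated in the same conductor by 4x, or lets a smaller conductor
    # carry the same load.
    print(f"{name:<28} {amps:>9,.0f} A  "
          f"(I^2R loss vs 800 VDC: {(amps / i_800) ** 2:6.1f}x)")
```

Under these assumptions the 800 VDC feed needs roughly 1,250 A, versus about 1,465 A for the AC baseline and over 18,000 A at 54 VDC, which is where the copper and energy savings come from.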
Vera Rubin NVL144 stands out with its liquid cooling and modular design, which make assembly and maintenance easier. In place of traditional cable harnesses, it uses a printed circuit board midplane for faster connections and rapid scale-up. The system is built as an open standard, so different vendors can mix and match components to fit their needs, which helps data center operators grow and update their infrastructure easily.
Another new development is the NVIDIA Kyber rack architecture. Kyber is designed to support up to 576 GPUs in a single rack, with the wiring and cooling built into the chassis. Moving to 800 VDC lets these racks carry more than 150 percent more power through the same copper conductors while reducing cost and material use.
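Looking at the same relationship from the other direction helps explain the claim about moving more power through existing copper: a conductor's ampacity fixes how much current it can safely carry, so the power it can deliver scales with the bus voltage. The sketch below uses an assumed 1,000 A busbar and a few candidate legacy voltages; the exact "more than 150 percent" figure depends on which legacy system, AC derating factors, and conversion stages NVIDIA compares against, which this simple scaling does not capture.

```python
# Minimal sketch: at a fixed conductor ampacity, deliverable power scales with
# voltage (P = V * I for DC). The 1,000 A ampacity and the baseline voltages
# are illustrative assumptions; the announcement's ">150 percent more power"
# figure depends on the exact legacy system it compares against.
AMPACITY_A = 1_000.0   # maximum continuous current for a given busbar (assumed)
V_NEW = 800.0          # 800 VDC rack power distribution

for v_old in (415.0, 480.0, 54.0):   # candidate legacy distribution voltages
    p_old_kw = v_old * AMPACITY_A / 1e3
    p_new_kw = V_NEW * AMPACITY_A / 1e3
    gain_pct = (p_new_kw / p_old_kw - 1.0) * 100.0
    print(f"{v_old:>5.0f} V baseline: {p_old_kw:7.1f} kW -> "
          f"{p_new_kw:7.1f} kW at 800 VDC  (+{gain_pct:,.0f}%)")
```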
NVIDIA is also expanding its NVLink Fusion ecosystem, which lets companies integrate different types of chips, including their own custom silicon, alongside NVIDIA hardware in the same data center. Industry giants such as Intel and Samsung are working with NVIDIA to create custom chips that connect into these AI systems.
All of these advances rely on a broad ecosystem of partners supplying silicon, power components, and other key hardware. By collaborating around open standards, these companies are helping to build a next generation of AI data centers that is more powerful, more efficient, and easier to manage.
If you want to see these developments in action or learn more, visit NVIDIA and partners at the Open Compute Project Global Summit.
Original article and image: https://blogs.nvidia.com/blog/gigawatt-ai-factories-ocp-vera-rubin/
