Inside the New Wave of NVIDIA-Powered Supercomputers Changing Science

The New Era of Accelerated Supercomputing

From quantum physics to climate research, one idea is quietly reshaping modern science: accelerated computing. Instead of relying only on traditional CPUs, researchers are combining massive numbers of GPUs, high-speed networks and specialized software to push through problems that used to be impossible.

At the SC25 conference in St. Louis, NVIDIA revealed just how fast this shift is happening. More than 80 new scientific systems built on its accelerated computing platform have gone live worldwide in the last year. Together, they deliver around 4,500 exaflops of AI performance. For context, a single exaflop is a billion billion operations every second; multiply that by 4,500 and you get a sense of the scale involved.
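
To make that concrete, here is a quick back-of-the-envelope check in Python. It is only a rough sketch: the 10-billion-operations-per-second laptop figure is an illustrative assumption, not something from the article.

    # An exaflop is 10**18 operations per second.
    EXAFLOP = 10**18

    aggregate_exaflops = 4_500                    # combined AI performance of the new systems
    ops_per_second = aggregate_exaflops * EXAFLOP
    print(f"{ops_per_second:.1e} operations per second")   # 4.5e+21

    # Rough comparison: a laptop core doing ~10 billion operations per second
    # would need on the order of 14,000 years to match one second of this output.
    laptop_ops_per_second = 1e10
    years = ops_per_second / laptop_ops_per_second / (365 * 24 * 3600)
    print(f"about {years:,.0f} laptop-years for one second of aggregate work")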

These machines are not just tech showpieces. They are being used right now to model climate change, discover new drugs, design better materials, explore the universe and even prepare for future earthquakes.

Horizon and the Next Generation of US AI Supercomputers

One of the headline systems is Horizon at the Texas Advanced Computing Center. When it comes online in 2026, it will be the largest academic supercomputer in the United States.

Horizon will pack 4,000 NVIDIA Blackwell GPUs and is expected to reach up to 80 exaflops of AI compute at very low precision. It will use NVIDIA GB200 NVL4 and Vera CPU servers, all wired together with NVIDIA Quantum-X800 InfiniBand networking so data can move around the system at incredible speed.

What will scientists actually do with all that power?

  • Study diseases at the molecular level. Researchers will use molecular dynamics tools and AI-driven simulations to understand how viruses behave and spread.
  • Simulate stars and galaxies. Astrophysicists will model how stars and galaxies form and compare those simulations with data from telescopes like the James Webb Space Telescope.
  • Design novel materials. Teams will explore materials with complex crystal structures, turbulence and quantum-scale conductivity for applications in energy and electronics.
  • Map earthquake risks. By simulating seismic waves and fault ruptures, scientists aim to improve hazard maps and make earthquake preparation more accurate.

TACC expects Horizon to let researchers run these simulations and AI models at scales that were not realistic before, compressing years of work into days or hours.

The United States Department of Energy is also stepping hard into this new era. It is partnering with NVIDIA to build seven AI supercomputers across two major labs: Argonne National Laboratory in Illinois and Los Alamos National Laboratory in New Mexico.

At Argonne, the largest new system is called Solstice. It will feature around 100,000 NVIDIA Blackwell GPUs. A machine of that size, built from NVIDIA GB200 NVL72 units, can hit roughly 1,000 exaflops of AI training compute, which is more than 1.5 times the combined training power of the entire global TOP500 list from mid-2025.
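
A rough way to sanity-check that figure is to divide it across the GPUs. The sketch below assumes the exaflop number refers to low-precision AI math spread evenly over the whole machine.

    # Solstice: about 100,000 Blackwell GPUs delivering roughly 1,000 exaflops of AI compute.
    total_exaflops = 1_000
    gpu_count = 100_000

    # 1 exaflop = 1,000 petaflops, so this works out to the per-GPU contribution.
    petaflops_per_gpu = total_exaflops * 1_000 / gpu_count
    print(f"~{petaflops_per_gpu:.0f} petaflops of AI compute per GPU")   # ~10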

Another Argonne system, Equinox, will include about 10,000 Blackwell GPUs, while three smaller systems, Minerva, Janus and Tara, will focus on AI inference and on training the next generation of AI experts.

At Los Alamos, two new systems named Mission and Vision will be delivered by HPE. They will use the NVIDIA Vera Rubin platform and Quantum-X800 InfiniBand. Mission will run classified workloads for the National Nuclear Security Administration. Vision will be open to the wider scientific community for work on foundation models and advanced AI agents.

These follow Doudna, a system coming to Lawrence Berkeley National Laboratory in 2026. Built on NVIDIA Vera Rubin and Quantum-X800 InfiniBand, Doudna will support more than 11,000 researchers across areas like fusion energy, materials science, drug discovery and astronomy.

Global Supercomputing Highlights From Europe to Asia

The United States is not alone in this hardware race. Europe and Asia are rolling out their own heavyweight NVIDIA powered systems to keep research and innovation under their own control, a trend often called sovereign AI.

In Germany, the Jülich Supercomputing Centre has switched on JUPITER, Europe’s first exascale supercomputer. It uses 24,000 NVIDIA GH200 Grace Hopper Superchips and NVIDIA Quantum-2 InfiniBand to break the exaflop barrier on the demanding Linpack benchmark for double-precision math.

JUPITER is already running global climate simulations at kilometer-scale resolution. That level of detail lets scientists better capture local weather patterns and extreme events while still modeling the entire planet.

Across Europe, more NVIDIA-based systems are coming online:

  • Blue Lion. At the Leibniz Supercomputing Centre in Germany, this upcoming system will use the NVIDIA Vera Rubin platform to support climate research, turbulence studies, physics and machine learning.
  • Gefion. Denmark’s first AI supercomputer, based on an NVIDIA DGX SuperPOD, will give the country homegrown AI capacity for quantum computing research, clean energy work and biotech projects.
  • Isambard AI. The United Kingdom’s most powerful AI supercomputer, at the University of Bristol, is backing projects like Nightingale AI, a health model trained on National Health Service data, and UK LLM, a project to improve AI reasoning in Welsh and other UK languages.

Across the Pacific, similar stories are playing out.

In Japan, leading research institute RIKEN is adding NVIDIA GB200 NVL4 systems to two new supercomputers: one focused on AI for science and another on quantum computing. RIKEN is also collaborating with Fujitsu and NVIDIA on FugakuNEXT, the successor to the famous Fugaku system. It will combine FUJITSU MONAKA X CPUs with NVIDIA technologies through NVLink Fusion to drive Earth system modeling, drug discovery and advanced manufacturing.

Tokyo University of Technology has built an AI supercomputer using NVIDIA DGX B200 systems that can reach about 2 exaflops of FP4 performance with fewer than 100 GPUs. This machine will help train large language models and build digital twins, while serving as a training ground for future AI engineers.

Japan’s National Institute of Advanced Industrial Science and Technology has also launched ABCI-Q, the world’s largest research supercomputer dedicated to quantum computing, with more than 2,000 NVIDIA H100 GPUs.

In South Korea, the government plans to deploy over 50,000 NVIDIA GPUs across national clouds and AI factories. Companies like Samsung, SK Group and Hyundai Motor Group are building their own NVIDIA Blackwell based AI factories to accelerate research and industrial design.

Meanwhile in Taiwan, NVIDIA and Foxconn are teaming up on an AI factory supercomputer featuring 10,000 NVIDIA Blackwell GPUs. It will support startups, researchers and major industries across the island.

Put together, these systems show a clear trend. Supercomputers are no longer rare, one-off machines. They are rapidly becoming shared AI infrastructure for entire countries, unlocking new discoveries in science, engineering and beyond.

Original article and image: https://blogs.nvidia.com/blog/sc25-new-science-systems-worldwide/
