We put this question to Nvidia and were told "the supercomputer used for MLPerf LLM training with 10,752 H100 GPUs is a different system built with the same DGX SuperPOD architecture." ...
"All DGX SuperPOD customers get the same lead time ..." Long lead times for systems containing Nvidia's H100 GPUs, including DGX systems, have been a common complaint among OEMs and channel partners ...
The first confirmed designs to use Blackwell include the B100 and B200 GPUs, the successors to the Hopper-based H100 and H200 ... system is available in a DGX SuperPOD cluster.