Huawei’s clusters have close to 4x the RAM of NVIDIA’s, and TFLOPS are mostly relevant to training. Huawei has better interconnect technology than NVIDIA, but it’s incompatible with H200s, so for China/friendly-country use it’s a much better package. Price/performance of the Ascend 910 vs the 5090 or RTX 6000 Ada is much higher at the single-card level. Power cost and availability in China give them much higher potential deployment rates. And Chinese cloud rates tend to be lower than for the same model on US clouds.
Yeah I can believe their interconnect is better, given their extensive history in networking.
W.r.t. TFLOPS, let me clarify what I meant. Even on traditionally compute-bound workloads (attention, etc.), on an H200 it’s actually surprisingly difficult to make full use of the card’s throughput before hitting VRAM bandwidth limits. Tensor-core throughput has grown a lot faster than memory bandwidth has.
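To put rough numbers on that, here’s a back-of-the-envelope roofline calculation in Python. The H200 figures (~989 dense BF16 TFLOPS, ~4.8 TB/s HBM3e) are approximate public datasheet values, and the matmul shapes are just illustrative:

    # Roofline sketch for an H200-class GPU. Spec numbers are
    # approximate datasheet values, not measurements.
    PEAK_FLOPS = 989e12   # dense BF16 tensor-core throughput, FLOPs/s
    PEAK_BW    = 4.8e12   # HBM3e bandwidth, bytes/s

    # Arithmetic intensity (FLOPs per byte) needed to be compute-bound:
    print(f"break-even: {PEAK_FLOPS / PEAK_BW:.0f} FLOPs/byte")  # ~206

    def matmul_intensity(m, n, k, bytes_per_elem=2):
        """FLOPs per byte for an MxK @ KxN matmul in BF16, assuming
        each operand and the output cross HBM exactly once."""
        flops = 2 * m * n * k
        bytes_moved = bytes_per_elem * (m * k + k * n + m * n)
        return flops / bytes_moved

    # A big square matmul clears the bar comfortably...
    print(f"{matmul_intensity(4096, 4096, 4096):.0f}")  # ~1365 FLOPs/byte
    # ...but a skinny decode-style GEMM is hopelessly bandwidth-bound:
    print(f"{matmul_intensity(16, 4096, 4096):.0f}")    # ~16 FLOPs/byte

Anything whose intensity sits below that ~200 FLOPs/byte line spends most of its time waiting on HBM, no matter how many tensor cores the card has.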
I’ve never written a kernel for Huawei chips, so I have no idea if they have the same problem. But it’s definitely there on many datacenter-class NVIDIA chips, which is why NVIDIA keeps introducing features (TMA, TMEM, etc.) to try to reduce the time wasted waiting for memory.
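Those features are basically about overlapping data movement with compute. Here’s a toy Python model (made-up stage times, not any real hardware’s behavior) of why double-buffering a tile loop helps, which is the idea behind async-copy machinery like TMA:

    # Toy model: processing N tiles serially vs. with loads overlapped
    # against compute (double buffering). Times are arbitrary units.

    def serial_time(n_tiles, t_load, t_compute):
        # Wait for each tile's load, then compute it.
        return n_tiles * (t_load + t_compute)

    def pipelined_time(n_tiles, t_load, t_compute):
        # Prefetch tile i+1 while computing tile i: after the first
        # load, each step costs the slower of the two stages.
        return t_load + n_tiles * max(t_load, t_compute)

    print(serial_time(100, 1.0, 1.0), pipelined_time(100, 1.0, 1.0))
    # 200.0 101.0 -- balanced stages: overlap nearly halves the time
    print(serial_time(100, 3.0, 1.0), pipelined_time(100, 3.0, 1.0))
    # 400.0 303.0 -- load-heavy: overlap helps, but you stay memory-bound

Overlap hides latency; it can’t manufacture bandwidth, so arithmetic intensity (the roofline point above) still sets the ceiling.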