• humanspiral@lemmy.ca · 1 day ago

    Huawei outperforms NVIDIA at the “cluster” level, i.e. mostly turnkey systems for datacenter units, and it promises a truck-container-scale cluster for the next generation with 30x the zettaflops of NVIDIA’s Rubin cluster. China currently operates at about 50% of its electric production capacity, and energy there is extremely abundant and cheap, which makes the per-card performance deficit irrelevant.

    • KingRandomGuy@lemmy.world · 1 day ago

      To be fair, the raw FLOPs count doesn’t tell the whole story. On a lot of workloads (including token generation during LLM inference), you’re bound by memory bandwidth rather than throughput/FLOPs. On H100/H200, keeping the tensor cores fully occupied is surprisingly difficult, and that’s with 3+ TB/s of memory bandwidth. And I believe those cards have much higher throughput than Ascend (at least at FP8; Ascend wins at FP4, since H100/H200 don’t support it).
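
      A rough back-of-the-envelope roofline sketch of what I mean, with ballpark datasheet figures plugged in (treat the exact numbers as approximate, and the decode model as simplified):

      ```python
      # Is batch-1 LLM decode compute-bound or bandwidth-bound on an
      # H100-class card? Ballpark datasheet figures, not measurements.
      peak_flops = 989e12   # ~dense FP16 tensor FLOP/s, H100 SXM (approx.)
      peak_bw = 3.35e12     # ~memory bandwidth in bytes/s, H100 SXM (approx.)

      # FLOPs the card can perform per byte it moves from HBM:
      machine_balance = peak_flops / peak_bw
      print(f"machine balance: ~{machine_balance:.0f} FLOP/byte")  # ~295

      # In decode at batch size 1, each FP16 weight (2 bytes) is read once
      # per token and used in one multiply-add (2 FLOPs):
      decode_intensity = 2 / 2  # = 1 FLOP/byte
      print(f"decode intensity: ~{decode_intensity:.0f} FLOP/byte")

      # 1 FLOP/byte << ~295 FLOP/byte, so decode is bandwidth-bound and the
      # tensor cores mostly idle unless you batch requests to reuse weights.
      ```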

      The Ascend 950PR units have far lower memory bandwidth, reportedly 1.4 TB/s. Compare that to Blackwell, which has something like 8 TB/s. I believe they’re manufacturing their own kind of HBM, so that’s still really impressive considering this is a fairly recent push into manufacturing accelerators. But I’m a bit skeptical it actually outperforms NVIDIA at scale.
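
      To put that bandwidth gap in perspective, here’s a crude per-chip ceiling for bandwidth-bound decode, using the figures above plus a hypothetical 70B-parameter model at FP8 (the model choice is just an illustration):

      ```python
      # Crude upper bound: tokens/s <= bandwidth / bytes of weights read
      # per token. Bandwidth figures are the reported/approximate ones
      # from this thread; the 70B FP8 model (1 byte/param) is made up.
      model_bytes = 70e9  # weights touched per token, no batching/caching
      for name, bw in [("Ascend 950PR (reported)", 1.4e12),
                       ("Blackwell (approx.)", 8.0e12)]:
          print(f"{name}: <= {bw / model_bytes:.0f} tokens/s per chip")
      # Upper bounds only, but the ~5-6x bandwidth gap carries straight
      # through to the decode ceiling.
      ```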

      • humanspiral@lemmy.ca · 13 hours ago

        Huawei’s clusters have close to 4x the RAM of NVIDIA’s, and TFLOPs is most relevant to training. Huawei has better interconnect technology than NVIDIA, though it’s incompatible with H200s, so for use by China and its friends it’s a much better package. Price/performance of the 910 vs the 5090 or 6000 Ada is much higher even at the single-card level. The power cost and availability in China give them much higher potential deployment rates, and Chinese cloud rates tend to be lower than for the same model on US clouds.

        • KingRandomGuy@lemmy.world · 11 hours ago

          Yeah, I can believe their interconnect is better, given their extensive history in networking.

          W.r.t. TFLOPs, let me clarify what I meant: even on traditionally compute-bound workloads (attention, etc.), it’s actually surprisingly difficult on H200 to make full use of the card’s throughput before hitting VRAM bandwidth limits. Tensor core throughput has grown a lot faster than memory bandwidth has.
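
          To illustrate with approximate public datasheet numbers (ballpark, from memory, so don’t quote me on the exact values):

          ```python
          # How much arithmetic intensity a kernel needs just to stay
          # compute-bound, generation over generation (ballpark figures:
          # dense FP16 tensor FLOP/s, memory bandwidth in bytes/s).
          cards = {
              "A100 SXM (80GB)": (312e12, 2.0e12),
              "H100 SXM": (989e12, 3.35e12),
          }
          for name, (flops, bw) in cards.items():
              print(f"{name}: ~{flops / bw:.0f} FLOP/byte to stay compute-bound")
          # ~156 -> ~295 FLOP/byte in one generation: the bar for keeping
          # the tensor cores fed nearly doubled, hence features like TMA.
          ```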

          I’ve never written a kernel for Huawei chips so I have no idea if they have the same problem. But this problem is there on many datacenter-class NVIDIA chips, which is why they keep introducing features (TMA, TMEM, etc.) to try and lower the time wasted waiting for memory.