• ag10n@lemmy.world · 2 days ago

    Yes, you can run it at scale. Which is why it uses Huawei hardware.

    You can run it on anything, scaled or not

    • brucethemoose@lemmy.world · 2 days ago

      What I meant is that it’s not power- or cost-efficient on CPU only. CPUs don’t have the compute for batching (running generation requests in parallel). You need an accelerator, like Huawei’s, to be economical.

      It’s fine for local inference, of course.

      • ag10n@lemmy.world · 2 days ago

        A whole ecosystem that can run on any hardware, efficiently or not, is a whole ecosystem developed for the Chinese market

        • brucethemoose@lemmy.world · 1 day ago

          …I mean, yeah? It’s obviously developed for the Chinese market.

          But that’s theoretical, for now. No CPU backend I can find supports DSV4, and DeepSeek hasn’t contributed anything yet.

          • KingRandomGuy@lemmy.world · 16 hours ago

            Yeah, I’d expect KTransformers to add support eventually, especially considering their existing support for previous DeepSeek models. One of the tricky parts is that backends need both FP8 and MXFP4 support. As far as I’m aware no inference engine supports both on CPU at the moment (llama.cpp added fp4 support recently, but doesn’t have fp8, while kt-kernel doesn’t support fp4 yet).
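
            For anyone wondering why backend support is the sticking point: MXFP4 is a block format, with one shared power-of-two scale per group of 32 weights and a 4-bit E2M1 value per element, so kernels have to handle the scales explicitly rather than treating it as plain 4-bit floats. A toy quantizer to illustrate the idea (a simplified numpy sketch, not how any particular engine implements it):

            import numpy as np

            # Magnitudes representable by FP4 E2M1, the element type used by MXFP4.
            E2M1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

            def mxfp4_quantize_block(x):
                """Toy MXFP4 quantizer for one block of 32 weights:
                one shared power-of-two scale plus a 4-bit E2M1 value per element."""
                amax = np.abs(x).max()
                # Choose a power-of-two scale so the largest element fits within +/-6.
                scale = 2.0 ** np.ceil(np.log2(amax / 6.0)) if amax > 0 else 1.0
                # Snap each scaled magnitude to the nearest representable E2M1 value.
                codes = np.abs((np.abs(x) / scale)[:, None] - E2M1[None, :]).argmin(axis=1)
                return scale, np.sign(x) * E2M1[codes]  # dequantized weight = scale * value

            block = np.random.randn(32).astype(np.float32)
            scale, values = mxfp4_quantize_block(block)
            print("max abs error:", np.abs(block - scale * values).max())

            FP8 (E4M3/E5M2), by contrast, is a separate element type with its own kernels, which is how an engine ends up supporting one format but not the other.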

            • brucethemoose@lemmy.world · 16 hours ago

              Not to speak of the new attention scheme and the (IIRC) MLP changes.

              I’m very much looking forward to ik_llama.cpp implementing it. I don’t think I can quite fit Flash on my rig (hence no KTransformers for me), but with a little quantization of the sparse layers it’d be perfect.

              • KingRandomGuy@lemmy.world · 14 hours ago

                Makes sense, even Flash is fairly sizable! KTransformers also has a “llamafile” backend which uses GGUFs, but ik_llama will almost certainly perform better if you’re not on a NUMA setup. In my case, I’m using a dual socket motherboard, so KTransformers performs quite a bit better (I think ik_llama hasn’t implemented extensive NUMA optimizations quite yet, but sounds like it’s coming), though I normally use KTransformers for native FP8 weights.

                • brucethemoose@lemmy.world · 13 hours ago

                  It is! 143GB last I checked. I’m on 128GB RAM + 3090, 1 NUMA node, so I think it’s juuust barely too tight. But it should be perfect with a few of the “sparsest” MoEs quantized.

                  If KTransformers supports something like that, I may have to finally check it out, since v4 won’t need many esoteric features.
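
                  Back-of-the-envelope on why it’s that tight (checkpoint size from this thread; the overhead figure is a guess):

                  weights_gb = 143           # Flash checkpoint size mentioned above
                  ram_gb, vram_gb = 128, 24  # system RAM plus an RTX 3090
                  overhead_gb = 8            # OS, runtime, KV cache, activations (assumed)
                  print("headroom:", ram_gb + vram_gb - weights_gb - overhead_gb, "GB")  # ~1 GB

                  Shaving even a few GB off the sparsest expert layers flips that margin, which is the point above.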

      • gens@programming.dev · 2 days ago

        LLMs are limited by memory bandwidth much more than by compute. You need HBM. Dedicated accelerators only lower power usage.

        • brucethemoose@lemmy.world · 17 hours ago

          This is commonly cited, but not strictly true.

          Prompt processing is completely compute-limited. And at high batch sizes, where the weights are read once for many tokens generated in parallel, token generation is also quite compute-limited. Obviously you want enough bandwidth to match the compute, but it’s very compute-heavy.

          You can see this for yourself. Try ~10 prompts in parallel on a CPU in llama.cpp, and it will slow to a crawl, while a GPU with a narrow bus won’t slow down much.

          Training is a bit more complicated, but that’s not doable on CPUs anyway.

          Now, local inference (aka a batch size of 1), past prompt processing, is heavily bandwidth limited. This is why hybrid inference works alright on CPUs. But this doesn’t really apply to servers, which process many users in parallel with each “pass”.
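
          To put toy numbers on it, here is a roofline-style sketch; the hardware figures and the ~37B active parameters (DeepSeek-V3-style) are illustrative assumptions, not benchmarks:

          # Toy roofline: per decode step the active weights are read once and shared
          # across every request in the batch, while compute grows with batch size.
          bw_gb_s = 400         # memory bandwidth of a large server CPU (assumed)
          compute_tflops = 30   # sustained dense throughput of that CPU (assumed)
          active_params = 37e9  # active params per token, DeepSeek-V3-style MoE (assumed)
          bytes_per_pass = active_params * 1.0    # ~1 byte per weight at 8-bit
          flops_per_token = 2 * active_params     # ~2 FLOPs per weight per token
          for batch in (1, 8, 64):
              t_mem = bytes_per_pass / (bw_gb_s * 1e9)                   # shared across the batch
              t_cmp = batch * flops_per_token / (compute_tflops * 1e12)  # scales with the batch
              bound = "bandwidth" if t_mem > t_cmp else "compute"
              print(f"batch {batch:2d}: ~{batch / max(t_mem, t_cmp):5.0f} tok/s ({bound}-bound)")

          With these numbers batch 1 is bandwidth-bound, but by a few dozen parallel requests the bottleneck has moved to compute, which is exactly where CPUs fall behind accelerators.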

    • theunknownmuncher@lemmy.world · 2 days ago

      Nope! You don’t know what you’re talking about. At all. But you can have fun running a 1.6 trillion parameter model on CPU at basically 0 tokens per second at scale, MoE or not.

      • KingRandomGuy@lemmy.world · 12 hours ago

        You can actually get kind of acceptable performance on CPU alone, but you need rather specific CPUs, like SPR (Sapphire Rapids) or newer Intel Xeons. These support AMX, which is almost like a mini tensor core, so you can actually get decent throughput in TFLOPs out of GNR (Granite Rapids) Xeons. Memory bandwidth with max channels is also acceptable, something like ~800 GB/s per socket with maxed-out MRDIMMs, which is not too far behind consumer GPUs like the 3090 and 4090.
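
        The ~800 GB/s figure is basically just channel math; a quick sanity check, assuming a 12-channel GNR socket populated with MRDIMM-8800 (64-bit channels):

        channels = 12             # memory channels per socket (assumed config)
        transfers_per_s = 8800e6  # MRDIMM-8800
        bytes_per_transfer = 8    # 64-bit wide channel
        peak = channels * transfers_per_s * bytes_per_transfer / 1e9
        print(f"~{peak:.0f} GB/s per socket")  # ~845 GB/s; an RTX 3090 is ~936 GB/s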

        Not anywhere near the performance of real GPUs of course, and not something acceptable for scale or production workloads, but good enough for local inference.

        • theunknownmuncher@lemmy.world · 2 days ago

          You’ve proved my point that you don’t know what you’re talking about by blindly linking to the git repo. Couldn’t find any source that supports your claim? I wonder why.

          Sure, you can serve one request at a time to one patient user at a slow tokens-per-second rate, which makes running locally viable, but there is no RAM that has the bandwidth to run this model at scale. Even Flash would be incredibly slow on CPU with multiple requests. You’d need the high bandwidth of VRAM, and running across multiple GPUs in a scalable way requires extremely high-bandwidth interconnects between them.

          • ag10n@lemmy.world · 1 day ago

            Thank you for proving my point. It can be run on a CPU.

            “It’s slow, it’s inefficient,” but it still runs.

            It’s a foundational model just like R1 was.

            • theunknownmuncher@lemmy.world · 1 day ago

              Yes, you can run it at scale.

              at scale

              Shift those goalposts! We went from “at scale” to “it still runs”

              • ag10n@lemmy.world · 1 day ago

                Quote me in full.

                You can run it at scale, on Huawei. You can also run it on a CPU.

                • theunknownmuncher@lemmy.world · 1 day ago

                  Quote me in full.

                  Okay!

                  You can run it at scale, on Huawei. You can also run it on a CPU.

                  Yeah, that is absolutely not what you argued.

                  Anyway, you’ve conceded that I’m correct that you cannot run it at scale on a CPU, because running on CPU is too slow and inefficient, and that they instead use GPU hardware like Huawei GPUs to run the model at scale. That’s good enough for me!

                  • Diurnambule@jlai.lu · 1 day ago

                    Okay, then you proceed to just screenshot the part after the initial argument. Dude, put in more effort.

                  • ag10n@lemmy.world · 1 day ago

                    Your interpretation of the English language has won you an argument! Huzzah.

                    So good of you to concede it runs on CPU.