• brucethemoose@lemmy.world · 2 days ago

    Just not power/cost-efficiently on CPU only, is what I meant. CPUs don’t have the compute for batching (running generation requests in parallel). You need an accelerator, like Huawei’s, to be economical.

    It’s fine for local inference, of course.

    • ag10n@lemmy.world · 2 days ago

      A whole ecosystem that can run on any hardware, efficiently or not, is a whole ecosystem developed for the Chinese market.

      • brucethemoose@lemmy.world · 22 hours ago

        …I mean, yeah? It’s obviously developed for the Chinese market.

        But that’s theoretical, for now. No CPU backend I can find supports DSV4, and DeepSeek hasn’t contributed anything yet.

        • KingRandomGuy@lemmy.world · 14 hours ago

          Yeah, I’d expect KTransformers to add support eventually, especially considering their existing support for previous DeepSeek models. One of the tricky parts is that backends need both FP8 and MXFP4 support. As far as I’m aware, no inference engine supports both on CPU at the moment (llama.cpp added FP4 support recently but doesn’t have FP8, while kt-kernel doesn’t support FP4 yet).
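
          For anyone wondering what “MXFP4 support” actually involves, here’s a rough Python sketch of block dequantization in the MX layout (a shared power-of-two scale per 32-element block of E2M1 values, per the OCP Microscaling spec). The names and the scale handling are simplified for illustration, not taken from any particular engine.

          ```python
          # Illustrative MXFP4 (E2M1 values + shared power-of-two block scale) dequantization.
          # Block size and value table follow the OCP Microscaling spec; everything else is made up.
          import numpy as np

          # The 8 magnitudes representable by E2M1 (the sign bit is handled separately).
          E2M1_MAGNITUDES = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

          def dequantize_mxfp4_block(codes: np.ndarray, scale_exp: int) -> np.ndarray:
              """Dequantize one 32-element MXFP4 block.

              codes:     32 uint8 values, each holding a 4-bit code (1 sign bit + 3 value bits).
              scale_exp: the block's shared exponent, i.e. scale = 2**scale_exp.
              """
              assert codes.shape == (32,)
              signs = np.where(codes & 0b1000, -1.0, 1.0)  # top bit of the nibble is the sign
              mags = E2M1_MAGNITUDES[codes & 0b0111]       # low 3 bits index the magnitude table
              return signs * mags * (2.0 ** scale_exp)
          ```

          A CPU backend needs a kernel path like this (ideally fused into the matmul) before it can even read MXFP4 weights, which is why support doesn’t come for free.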

          • brucethemoose@lemmy.world · 13 hours ago

            Not to speak of the new attention scheme and the (IIRC) MLP changes.

            I’m very much looking forward to ik_llama.cpp implementing it. I don’t think I can quite fit Flash on my rig (hence no KTransformers for me), but with a little quantization of the sparse layers it’d be perfect.

            • KingRandomGuy@lemmy.world · 12 hours ago

              Makes sense, even Flash is fairly sizable! KTransformers also has a “llamafile” backend which uses GGUFs, but ik_llama will almost certainly perform better if you’re not on a NUMA setup. In my case I’m using a dual-socket motherboard, so KTransformers performs quite a bit better (I think ik_llama hasn’t implemented extensive NUMA optimizations quite yet, but it sounds like they’re coming), though I normally use KTransformers for native FP8 weights.
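
              As a back-of-the-envelope illustration of why NUMA placement matters so much here (all numbers below are made-up assumptions, not benchmarks of either engine): if weights are naively interleaved across two sockets, roughly half of the reads get paced by the slower inter-socket link, whereas NUMA-aware placement lets both sockets stream their half locally.

              ```python
              # Toy model of NUMA impact on weight streaming (assumed numbers, not measurements).
              local_bw = 300  # GB/s of local DRAM bandwidth per socket (assumption)
              link_bw = 60    # GB/s usable across the inter-socket link (assumption)

              # Naive interleaving: ~half of each socket's reads are remote, paced by the link.
              eff_naive = 1 / (0.5 / local_bw + 0.5 / link_bw)

              # NUMA-aware: tensors pinned to the socket that reads them; both halves stream locally.
              eff_aware = 2 * local_bw

              print(f"effective bandwidth: naive ~{eff_naive:.0f} GB/s vs NUMA-aware ~{eff_aware} GB/s")
              ```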

              • brucethemoose@lemmy.world · 11 hours ago

                It is! 143GB last I checked. I’m on 128GB RAM + 3090, 1 NUMA node, so I think it’s juuust barely too tight. But it should be perfect with a few of the “sparsest” MoEs quantized.
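
                Napkin math on why it’s “juuust barely” too tight (the overhead figures here are guesses, not measurements):

                ```python
                # Rough fit check for ~143 GB of weights on 128 GB RAM + a 24 GB 3090.
                # The overhead numbers are assumptions for illustration.
                weights_gb = 143
                ram_gb, vram_gb = 128, 24

                os_and_runtime_gb = 6      # OS, inference runtime, misc buffers (guess)
                kv_and_activations_gb = 8  # KV cache + activations at a usable context (guess)

                available = (ram_gb - os_and_runtime_gb) + (vram_gb - kv_and_activations_gb)
                shortfall = weights_gb - available
                print(f"~{available} GB usable vs {weights_gb} GB of weights -> short by ~{shortfall} GB")
                # Quantizing a handful of the sparsest expert layers closes a gap of that size.
                ```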

                If KTransformers supports something like that, I may have to finally check it out, since v4 won’t need many esoteric features.

    • gens@programming.dev · 2 days ago

      LLMs are limited by memory bandwidth much more than by compute. You need HBM. Dedicated accelerators only lower power usage.

      • brucethemoose@lemmy.world · 15 hours ago

        This is commonly cited, but not strictly true.

        Prompt processing is completely compute-limited. And at high batch sizes, where the weights are read once for many tokens generated in parallel, token generation is also quite compute-limited. Obviously you want enough bandwidth to match the compute, but it’s very compute-heavy.

        You can see this for yourself. Try ~10 prompts in parallel on a CPU in llama.cpp, and it will slow to a crawl, while a GPU with a narrow bus won’t slow down much.

        Training is a bit more complicated, but that’s not doable on CPUs anyway.

        Now, local inference (i.e. a batch size of 1), past prompt processing, is heavily bandwidth-limited. This is why hybrid inference works alright on CPUs. But this doesn’t really apply to servers, which process many users in parallel with each “pass”.
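
        Here’s a tiny roofline-style sketch of that argument (the hardware numbers are placeholders, not measurements): per decoding step the weights are read once regardless of batch size, while the FLOPs scale with the batch, so the bottleneck flips from bandwidth to compute at some batch size, and that crossover comes much earlier on a CPU.

        ```python
        # Toy roofline for token generation: time per step ~ max(weight-read time, compute time).
        # Hardware and model numbers are illustrative placeholders, not measurements.
        def tokens_per_second(batch, params_b=40, bytes_per_param=1, bw_gbs=200, tflops=2):
            weight_bytes = params_b * 1e9 * bytes_per_param       # weights read once per step
            t_mem = weight_bytes / (bw_gbs * 1e9)                 # bandwidth-bound time
            t_cmp = 2 * params_b * 1e9 * batch / (tflops * 1e12)  # ~2 FLOPs per param per token
            return batch / max(t_mem, t_cmp)

        for batch in (1, 4, 16, 64):
            cpu = tokens_per_second(batch, bw_gbs=200, tflops=2)    # CPU-ish: decent BW, little compute
            gpu = tokens_per_second(batch, bw_gbs=400, tflops=100)  # narrow-bus GPU: modest BW, lots of compute
            print(f"batch {batch:>3}: CPU ~{cpu:6.1f} tok/s, GPU ~{gpu:7.1f} tok/s")
        ```

        With these placeholder numbers the CPU plateaus after only a few parallel requests (compute-bound), while the GPU keeps scaling with batch size because it stays bandwidth-bound, which is the slowdown you can reproduce with the ~10-prompt test above.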