• sp3ctr4l@lemmy.dbzer0.com
    16 hours ago

    My Steam Deck is my child.

    Maybe if I can get it to run a ‘good enough’ LLM, and also a robotics kinematics suite…

    I can just start building DOG, with a Steam Deck for a face, instead of a Combine scanner bot.

    • los0220@lemmy.world
      11 hours ago

      Gemma 4 seems nice for local usage, way faster than Qwen models.

      I was able to run 27B Gemma on my PC, where 14B Qwen was too slow due to CPU offload.

      • percent@infosec.pub
        10 hours ago

        +1, exactly the same experience. Except Gemma4:26B really sucks with OpenCode; it works great with Pi, though.