inari@piefed.zip to Technology@lemmy.world · English · 2 days ago
DeepSeek ditches Nvidia for Huawei chips in V4 launch (cybernews.com)
83 comments
Avid Amoeba@lemmy.ca · 1 day ago
Take good care of your hardware! It’s not like 2 years ago when you could buy stuff off the shelf for reasonable prices. :D

sp3ctr4l@lemmy.dbzer0.com · 1 day ago
My Steam Deck is my child. Maybe if I can get it to run a ‘good enough’ LLM, and also a robotics kinematics suite… I can just start building DOG, with a Steam Deck for a face, instead of a Combine scanner bot.

los0220@lemmy.world · 20 hours ago
Gemma 4 seems nice for local usage, way faster than Qwen models. I was able to run 27B Gemma on my PC, where 14B Qwen was too slow due to CPU offload.

percent@infosec.pub · edited 19 hours ago
+1, exactly the same experience. Except Gemma4:26B really sucks with OpenCode. Works great with Pi though.
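The CPU-offload point above usually comes down to whether the quantized weights fit in VRAM: a larger model at a lower bit-width can need less memory than a smaller one at a higher bit-width. A rough sketch of the arithmetic (the 20% overhead factor and the bit-widths are illustrative assumptions, not measured values for any specific Gemma or Qwen build):

```python
# Back-of-envelope VRAM estimate for a locally run quantized LLM.
# Rule of thumb: weights take roughly (params * bits_per_weight / 8) bytes,
# plus some headroom for KV cache and activations (assumed ~20% here).

def estimated_vram_gb(params_billions: float, bits_per_weight: float,
                      overhead: float = 0.2) -> float:
    """Rough memory footprint in GiB for model weights plus overhead."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * (1 + overhead) / 2**30

# A hypothetical 27B model at 4-bit vs a 14B model at 8-bit quantization:
print(f"27B @ 4-bit: ~{estimated_vram_gb(27, 4):.1f} GiB")
print(f"14B @ 8-bit: ~{estimated_vram_gb(14, 8):.1f} GiB")
```

Under these assumptions the 4-bit 27B model actually has the smaller footprint, which would keep it fully on the GPU while the 8-bit 14B model spills into CPU offload.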