I got Qwen 3.5 running on a Steam Deck.
It ain’t exactly blazing fast, but it does actually work.
(Reasonably fast if you go down to the 2B param model; I can get the 9B param variant working, though this makes Steam Decky very hot and bothered.)
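For anyone who wants to try the same thing, here's a minimal sketch using llama-cpp-python with a quantized GGUF model. The filename is a placeholder for whatever small quant you actually download, and the thread count just reflects the Deck's 4-core/8-thread APU:

```python
# Minimal local-inference sketch (pip install llama-cpp-python).
# The GGUF filename below is hypothetical; substitute whatever
# small quantized model (e.g. a ~2B-parameter Q4 quant) you have.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen-2b-instruct-q4_k_m.gguf",  # placeholder filename
    n_ctx=2048,    # context window
    n_threads=8,   # Steam Deck APU: 4 cores / 8 threads
)

out = llm("Explain in one sentence why local LLMs matter:", max_tokens=64)
print(out["choices"][0]["text"])
```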
Yeah, you absolutely do not need Nvidia hardware to run an LLM, but we get blasted with their propaganda suggesting otherwise all the time in the English-speaking West.
Because if you don’t need Nvidia, well, then, this whole AI bubble looks a lot more bubbly.
Take good care of your hw! It’s not like 2 years ago when you could buy stuff off the shelf for reasonable prices. :D
My Steam Deck is my child.
Maybe if I can get it to run a ‘good enough’ LLM, and also a robotics kinematics suite…
I can just start building DOG, with a Steam Deck for a face, instead of a Combine scanner bot.
Gemma 4 seems nice for local usage, way faster than Qwen models.
I was able to run 27B Gemma on my PC, where 14B Qwen was too slow due to CPU offload.
+1, exactly the same experience. Except Gemma4:26B really sucks with OpenCode. Works great with Pi, though.
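For context on the CPU-offload slowdown: runtimes like llama.cpp keep as many layers in VRAM as you ask for and run the rest on the CPU, and that mixed case is what kills throughput. A sketch of the relevant knob via llama-cpp-python (the model path is a placeholder):

```python
# Sketch of GPU layer offloading; the model path is hypothetical.
# n_gpu_layers controls how many transformer layers live on the GPU:
#   -1 -> offload everything (model must fit in VRAM)
#    0 -> pure CPU inference
#    k -> first k layers on GPU, the rest on CPU (the slow mixed case)
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-27b-it-q4_k_m.gguf",  # placeholder filename
    n_gpu_layers=-1,  # all layers on GPU if it fits; lower this if it doesn't
    n_ctx=4096,
)
```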
Amd have the best consumer grafic card to run llm on the market.
Sorry, I’m not entirely sure what you mean.
Did you mean to say:
“And need to have the best consumer GPU on the market, to run an LLM.”
… likely alluding to an RTX 5090?
So you would be saying that, basically, the idea that everyone needs extremely expensive hardware to run an LLM is bullshit?
Hello, no, sorry, autocorrect and typing fast do that to my posts. I wanted to say that NVIDIA is already the worst option for a consumer graphics card, since AMD made a card with 20 GB of RAM which is able to run most open-weight models.
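Back-of-envelope on why 20 GB covers most open-weight models: at ~4-bit quantization you need roughly half a byte per parameter, plus some headroom for the KV cache. A quick sketch of that arithmetic (model sizes are illustrative):

```python
# Rough VRAM estimate from parameter count and quantization level,
# ignoring KV cache and runtime overhead (budget ~1-2 GB extra in practice).
def vram_gb(params_billion: float, bits_per_weight: float = 4.5) -> float:
    # Q4_K_M quants average roughly 4.5 bits/weight; this is an approximation.
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for size in (9, 14, 27, 32):
    print(f"{size:>3}B @ ~4.5 bpw: ~{vram_gb(size):.1f} GB")
# A 27B model lands around ~15 GB, so it fits on a 20 GB card,
# while a 70B model (~39 GB) does not.
```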
Aha! Ok, that makes sense as well.