LLMs can sometimes point out potential trouble spots, which is one of the uses that can avoid injecting problematic code (as long as the LLM is prevented from suggesting a fix). But sadly, that doesn’t seem to be the type of use KDE is currently limiting itself to.
I code and do art things. Check https://private.horse64.org/u/ell1e for the person behind this content. For my projects, https://codeberg.org/ell1e has many of them.
Some of us respectfully disagree that LLMs for programming are “appropriate and legitimate”, at least when that involves generating code and not just locating bugs.
Local LLMs retain significant issues, like the one shown in this clip: https://github.com/mastodon/mastodon/issues/38072#issuecomment-4105681567 That applies unless your model uses 100% properly licensed training data, which no code LLM I have found appears to do.
Sadly, it seems like they’re going to be pro-AI internally: https://discuss.kde.org/t/sorry-to-bring-up-a-contentious-topic-kde-ai-llm-policy/46333 (If you jump in to comment, please try to be constructive rather than full of rage.)
- ell1e@leminal.space to Not The Onion@lemmy.world • “Germany unveils strategy for becoming Europe’s strongest military by 2039” (English, 37 points, 8 days ago)
Not like the AfD is about to become the most popular party, it’s really very unlike ~1933 for sure, for sure. (/s)
The “archaic devices” framing of the article seems questionable to me. I don’t think a device from around 9 years ago (Ryzen Gen 1) is archaic. While these are low end now, they are often still perfectly usable if they were somewhat higher end at the time. They don’t lack anything a modern system needs, which is easily proven by Linux running on them just fine.