• Kage520@lemmy.world · 13 points · 15 hours ago

    There really should be a certification course for using AI safely. I’m slop-coding a hobby app and I’m shocked at how much it FEELS like it can do, because it can do amazing things, yet fails in the strangest ways. When it thinks it can get away with it, it forgets earlier discussions and moves on without them. So you can spend time hammering out a whole section of code, move on, and the AI will rip out everything that references that code, invent a different approach in the moment, and code that in instead. It won’t be the same. It probably won’t work, or at least won’t pass all the test cases. But if you aren’t paying attention and keep coding, the original part of your project is no longer functioning and you won’t understand why. And every step of the way it’s confident in its answers, so you won’t suspect that it fundamentally no longer understands the project.
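    One hedged mitigation (my sketch, not from the thread; the function and test names here are hypothetical): pin down each finished section with a small regression test before moving on, so a silent rewrite fails loudly instead of quietly breaking everything that referenced it.

```python
# Hypothetical example: a tiny contract test guarding a "hammered-out"
# section of a hobby app. If an AI assistant later rewrites parse_price()
# with different semantics, the assertions catch it immediately.

def parse_price(text):
    """Toy stand-in for the finished code: '$1,234.50' -> 1234.5."""
    return float(text.replace("$", "").replace(",", ""))

def test_parse_price_contract():
    # These pin the agreed behavior; an in-the-moment rewrite that
    # "thinks of a different way" will fail here before you build on it.
    assert parse_price("$1,234.50") == 1234.5
    assert parse_price("99") == 99.0

test_parse_price_contract()
print("contract holds")
```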

    • ExFed@programming.dev · 7 points · 14 hours ago

      As someone who started writing software over 20 years ago (yikes, I feel old), I feel like a lot of the best practices I’ve come to appreciate are really just strategies for mitigating future pain or boring/uninspiring work. When a machine that feels nothing eliminates most of the cost of rewriting everything from scratch, “best practices” kinda lose their meaning.

      Edit: confusing sentence order.

      • Rooster326@programming.dev · 3 points · 14 hours ago

        I feel like a lot of the best practices I’ve come to appreciate are really just strategies for mitigating future pain or boring/uninspiring work.

        And now you know the difference between Intelligence and Wisdom.

        Also, everything has a cost. The only time something has no cost is when you decide your life, your time, is meaningless.

    • mark@programming.dev · 5 points · 15 hours ago

      Yup, and when you DO catch it spitting out nonsense, it’ll say “oh you’re right, let me change that”… 🙄 Like, why do I have to tell you that you’re wrong about something? You should already know it’s wrong and fix it without me ever pointing it out.

      • LePoisson@lemmy.world · 1 point · 50 minutes ago

        You already got the right replies from the other two. But I think your comment shows the danger of AI being talked about like it’s the fucking second coming.

        They’re all based on LLMs: large language models.

        They’re just modeling what is “most likely” to be the right response. AI doesn’t know shit, and that’s why it will also yes-and you to death: it really is just a yes-and machine, spitting out whatever is likely to look like a valid response to a prompt.

        It’s very dangerous that people treat AI like it actually has some understanding of the training materials or true knowledge of anything. They’re just very good little parrots.

      • Rooster326@programming.dev · 14 points · 14 hours ago

        But it didn’t even understand that it was wrong.

        It can’t understand that. It can’t understand anything.

        The human-feedback algorithm dictates that humans prefer to receive an apology, so it gives one.

      • SparroHawc@lemmy.zip · 10 points · 14 hours ago

        That’s because it doesn’t really ‘know’ things in the same way you and I do. It’s much more like having a gut reaction to something and then spitting it out as truth; LLMs don’t really have the capability to ruminate about something. The one pass through their neural network is all they get unless it’s a ‘reasoning’ model that then has multiple passes as it generates an approximation of train-of-thought - but even then, its output is still a series of approximations.

        When its training data had something resembling corrections in it, the most likely text that came afterwards was ‘oh you’re right, let me fix that’ - so that’s what the LLM outputs. That’s all there is to it.
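        A toy caricature of that point (my sketch, not from the thread; the context string and counts are made up): next-token prediction just emits whichever continuation was most frequent after a given context in training, so correction-shaped prompts get apology-shaped replies on frequency alone.

```python
# Toy caricature of "most likely next text": no understanding, no
# rumination, just a greedy lookup of the most frequent continuation.
from collections import Counter

# Hypothetical continuation counts standing in for training data.
TRAINING_COUNTS = {
    "that's wrong.": Counter({
        "oh you're right, let me fix that": 9,  # corrections were usually followed by apologies
        "no, I stand by my answer": 1,
    }),
}

def most_likely_reply(context):
    # One greedy pass: pick the highest-count continuation for this context.
    return TRAINING_COUNTS[context].most_common(1)[0][0]

print(most_likely_reply("that's wrong."))  # the apology wins on frequency alone
```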

    • Rooster326@programming.dev · 4 points · edited · 14 hours ago

      There is a course. It’s called experience. Common sense.

      All that any 4-hour YouTube/LinkedIn Learning course would do would be to perpetuate this idea that developers aren’t necessary. Take this course, buy these tokens, and become A Based God.