• Leon@pawb.social · 18 days ago

      Don’t know if I’d call that an intention of the machine but rather the creator. Hate to be that kind of person but it’s similar to the whole thing of “guns don’t kill, people do.”

      LLMs aren’t people. They’re not self-aware and don’t have any inner complexities like say, a dog, or a sheep has. There’s no drive or motivation. It’s just maths.

      If you tie someone to a train track, and a train comes along killing them, it’s not like the train or the track intended to kill the person. That was the intent of you, who “programmed” the scenario.

      Similar to guns, strict control is what will be needed to fix these kinds of things. Megalomaniac billionaires who see people as nothing but numbers running amok with narcissistic manipulator systems isn’t a recipe for anything good.

      • HairyHarry@lemmy.world · 18 days ago

        Ok, technically you are correct. Still, they are lies, or let’s call it disinformation or propaganda, whether the output is controlled by the machine itself having a mind (which of course is sci-fi) or by those who control the machine.

        • WhatAmLemmy@lemmy.world · 18 days ago

          What you’re calling lies are false positives. To lie, you have to know the truth. AIs are ignorant. They don’t know what anything is; all they “know” is mathematical patterns in 1s and 0s.

          They would only be lies if Google engineers explicitly overrode the model to output the false information. What most implementations of LLMs amount to is weaponized incompetence, for profit. Capitalists know they output false information, and they don’t care, because their only goal is profit and power.

          • hesh@quokk.au · 18 days ago

            If Google knows it outputs falsehoods and lets it continue, that becomes purposeful. That makes them lies in my book.