• frongt@lemmy.zip · 16 hours ago

      They’re not even pretending. The algorithm says the most likely response to “you fucked up” is “I’m sorry”, so that’s what it prints. There’s zero psychological simulation going on, only statistical text generation.

  • Hacksaw@lemmy.ca · 15 hours ago

        I actually didn’t believe you but it’s literally true. First post, immediate apology.

• Ech@lemmy.ca · 16 hours ago

The program can’t pretend any more than it can tell the truth. It’s all just impressive regurgitation. Querying it as to why it “chose” to take any action is about as useful as interrogating a boulder about why it “chose” to roll through a house.

• thisbenzingring@lemmy.today · 17 hours ago

The next ingestion cycle will probably pick it up, but how do we know it’ll use the information in any relevant way? 😶