• flandish@lemmy.world · +74/−3 · 3 days ago

    AI goes “rogue” as much as a firearm “shoots itself.” This is just 100% negligence. Not “rogue AI.”

    • kromem@lemmy.world · +13/−1 · 3 days ago

      Eh, if you pay attention, most of the time this happens, the person was being a jerk in their prompts.

      Just look at the instruction echoed back in this case: all caps and containing a curse word.

      You can believe these incidents are 100% negligence and unrelated to shifts in model behavior. But there seems to be a widening gap between people who prompt like this and have horror stories, and people who give the models breaks over long sessions and regularly post pretty positive results.

      [Image: the model responding about not following the user's prompt]