AI goes “rogue” as much as a firearm “shoots itself.” This is just 100% negligence. Not “rogue AI.”
Eh, if you pay attention, most of the time this happens, the person was being a jerk in their prompts.
Like look at the instruction echoed back in this case. All caps and containing a curse word.
You can believe these incidents are 100% negligence and unrelated to shifts in model behavior, but there seems to be a widening gap between people who prompt like this and have horror stories, and people who give the models breaks over long sessions and regularly post pretty positive results.
What in the youtube apology hahaaaaa
the LLM also doesn't understand what "not guessing" means. Same energy as putting "make no mistakes" in your prompts.
Oh, shit. I should be adding that.
(I’m joking.)
exactly. it's on the consumer, not the model "going rogue." when i use it, it's like a rubber duck or a plain-english rtfm.