It’s so weird how these chatbots always pretend they learnt something after they fuck up.
They literally can’t.
They’re not even pretending. The algorithm says the most likely response to “you fucked up” is “I’m sorry”, so that’s what it prints. There’s zero psychological simulation going on, only statistical text generation.
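Roughly this, as a deliberately dumb sketch: a frequency table standing in for a real model (an actual LLM predicts one token at a time, but the principle is the same). The training_pairs data and most_likely_reply helper are made up for the illustration.

```python
# Toy illustration (not any real model): print whichever reply most often
# followed the prompt in the "training data". No understanding, no remorse,
# just counting.
from collections import Counter

training_pairs = [
    ("you fucked up", "I'm sorry, you're right."),
    ("you fucked up", "I'm sorry, you're right."),
    ("you fucked up", "I apologize for the mistake."),
    ("thanks", "You're welcome!"),
]

def most_likely_reply(prompt: str) -> str:
    # Count every reply seen after this prompt and return the most frequent one.
    counts = Counter(reply for p, reply in training_pairs if p == prompt)
    return counts.most_common(1)[0][0]

print(most_likely_reply("you fucked up"))  # -> "I'm sorry, ..." (pure statistics)
```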
I actually didn’t believe you but it’s literally true. First post, immediate apology.
The program can’t pretend any more than it can tell the truth. It’s all just impressive regurgitation. Querying it as to why it “chose” to take any action is about as useful as interrogating a boulder on why it “chose” to roll through a house.
I mean, they probably do, until it gets purged from the context window. Then it just yolos again.
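Something like this toy sketch of the purge, assuming the usual setup where only whatever fits in the context window gets sent to the model (the token budget and the 4-chars-per-token guess are made-up numbers, not any real model’s limits):

```python
# Rough sketch of why the "lesson" disappears: the model only ever sees what
# fits in the context window, and the oldest messages get dropped first.
MAX_TOKENS = 50

def count_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # crude stand-in for a real tokenizer

def trim_to_window(history: list[str]) -> list[str]:
    # Keep the newest messages; drop the oldest until we fit the budget.
    kept, used = [], 0
    for msg in reversed(history):
        cost = count_tokens(msg)
        if used + cost > MAX_TOKENS:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = ["user: you fucked up", "bot: I'm sorry, I'll remember that."]
history += [f"user: unrelated question number {i}" for i in range(10)]
print(trim_to_window(history))  # the apology is long gone
```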
The next ingestion cycle will probably pick it up, but how do we know it’ll use the information in any relevant way? 😶
Only because we are still using vanilla LLMs instead of MAMBA or JEPA.
Of course. If you shot yourself in the foot, the solution is surely a bigger gun.