You already got the right replies from the other two. But I think your comment shows the danger of AI being talked about like it’s the fucking second coming.
They’re all based on LLMs - large language models.
They’re just modeling what the “most likely” response is. AI doesn’t know shit, and that’s why it will also yes-and you to death: it really is just a yes-and machine spitting out whatever is likely to look like a valid response to a prompt.
It’s very dangerous that people treat AI like it actually understands its training material or truly knows anything. They’re just very good little parrots.
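To make the “most likely response” point concrete, here’s a toy sketch: a bigram word counter. This is nothing like a real transformer (an assumption-free illustration, not how any actual LLM is implemented), but the core objective is the same - predict the most probable continuation, with zero understanding involved:

```python
from collections import Counter, defaultdict

# Tiny "training corpus".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    # Pick the single most frequent follower -- pure statistics, no knowledge.
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # → cat ("cat" follows "the" twice, more than anything else)
```

It will happily produce a fluent-looking continuation whether or not it’s true - which is exactly the parrot problem.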