I’m not attacking you but we really need to figure out how we use language to accurately describe what these programs are doing.
They output a sequence of words that is highly likely given the input, judged against the patterns in their training data.
They are fancy autocomplete.
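A minimal sketch of the "fancy autocomplete" idea the comment describes, using bigram counts over a toy corpus instead of a neural network (a real LLM scores entire contexts with learned weights; only the generation loop is analogous):

```python
# Toy "fancy autocomplete": repeatedly pick the most likely next word
# given the current one, using bigram counts from a tiny corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which in the corpus.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def autocomplete(word, steps=4):
    out = [word]
    for _ in range(steps):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        # Greedy decoding: always take the most frequent continuation.
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the"))
```

The model has no notion of truth or intent; it only reproduces statistically likely continuations, which is the point being argued above.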
Oh, I know. My comment was more about how we tend to anthropomorphize this stuff and give these models traits they don’t possess.
… and what are you?
A human with my own motivations and complex biological systems that include reasoning and the ability to think critically.
Most importantly, the ability to learn. We’re all just a series of very complex chemical reactions, but we do a lot more than just listening and speaking.
https://arxiv.org/abs/2312.00752
Based on the evidence, I think I’m a bit more of a simpleton who puts in a good effort at the start but loses steam partway through. I guess thanks for the support though.
“Correlates”? As in: “It gives you the answer it best correlates with your prompts/context.” Feels somewhat right both in the sense of AI as tensor-based word-select autocomplete and as a “lower-level” process than genuine thought, one which turns incongruent inputs (“I’m an AI” and “I just deleted prod+backup”) into meaningless output (“The AI is sorry”) that might look OK at a distance.