The way I see it:

AGI is inevitable given enough time, assuming we don’t destroy ourselves some other way first.
It has the capacity to solve literally all our problems and make life on Earth as close to utopia as possible.
That same capacity, however, also enables it to end the human race - either intentionally or as a byproduct of misalignment.
If the “West” doesn’t build it first, then China will. There’s no second place in this race.
Even if all nation-states somehow agreed to stop its development, a rogue underground group would do it - or possibly some random dude in his mom’s basement.
I genuinely see no solution to this. I can only hope things turn out well, or at the very least that it doesn’t happen during my lifetime. The genie isn’t going back into the bottle.

> It has the capacity to solve literally all our problems and make life on Earth as close to utopia as possible.

Sure… if it weren’t in the hands of people whose main purpose is to gather more money, resources and power.
It won’t solve all our problems. It will solve theirs.

Good work, citizen! The tech bros need you to believe that their dumb digital parrots will eventually, magically metamorphose into AGI. It’s the only thing that keeps that sweet VC money flowing and the AI bubble from popping.

I’m just going to ignore your completely uncalled-for smug and dismissive tone and note that at no point have I suggested LLMs will lead to AGI.
Thank you for your contribution to making this platform a worse place for everyone.

The irony of your response is strong. Also, you DID say that:

> I view AGI as inevitable because it’s the natural end goal of us incrementally improving our AI systems over a long enough period of time. As with all human-created technology, we will keep improving it. It doesn’t matter how slow the process is - as long as we keep heading in that direction, we will eventually reach the destination. The only things that could stop us, as far as I can see, are either destroying ourselves some other way before we get there or substrate independence - meaning general intelligence simply cannot be created without our biological wetware. I, however, see no reason to assume that, since human brains are made of matter just like computers are, and I don’t think there’s anything supernatural about intelligence.

It sounds like you’ve bought into techbro bullshit, but don’t realize it.

Feel free to help me realize it then, because whatever irony or conflict you’re seeing there, I don’t see.

Yes, I can see that.

The “AI” that we have now is not actually AI, that’s just a marketing term. Actual experts (read: not people like Sam Altman) point out that LLMs are severely flawed and will always return bad information. This problem is baked into the way these models function. Making what we’ve got into actual AI like you said isn’t going to happen, full stop.
Don’t believe the horseshit you hear from people trying to sell something.

This is simply false. We’ve had AI since 1956.

AI isn’t any one thing. It’s a broad term used in computer science to refer to any system designed to perform a cognitive task that would normally require human intelligence. The chess opponent on an old Atari console is an AI. It’s an intelligent system - but only narrowly so. That’s called “narrow” or “weak” AI.
It can still have superhuman abilities, but only within the specific task it was built for - like playing chess or generating language.
A large language model like ChatGPT is also narrow AI. It’s exceptionally good at what it was designed to do: generate natural-sounding language. What people expect from it, though, isn’t narrow intelligence - it’s general intelligence. The ability to apply cognitive skills across a wide range of domains the way a human can. That’s something LLMs simply can’t do - at least not yet.
Artificial General Intelligence is the end goal for many AI companies, but LLMs are not generally intelligent. However, they still fall under the umbrella of AI as a broad category of systems.
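
If it helps, here’s a toy sketch of what I mean by narrow AI - a complete, perfect-play tic-tac-toe opponent built on minimax search. (This is my own illustrative example, not how the Atari cartridge actually worked.) It performs a cognitive task that would normally require human intelligence, yet nothing about it carries over to any other domain:

```python
# A toy "narrow AI": a perfect tic-tac-toe opponent using minimax search.
# It plays exactly one game well and can do nothing else - narrow by construction.

def winner(b):
    wins = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
            (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
    for i, j, k in wins:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    # Returns (score, move); X tries to maximize the score, O to minimize it.
    w = winner(b)
    if w is not None:
        return (1 if w == "X" else -1), None
    if " " not in b:
        return 0, None  # draw
    best = None
    for m in [i for i, c in enumerate(b) if c == " "]:
        b[m] = player
        score, _ = minimax(b, "O" if player == "X" else "X")
        b[m] = " "
        if (best is None
                or (player == "X" and score > best[0])
                or (player == "O" and score < best[0])):
            best = (score, m)
    return best

board = list("XX  OO   ")   # squares 0-8, X to move
print(minimax(board, "X"))  # -> (1, 2): X wins by taking square 2
```

An LLM is the same category of system at vastly greater scale: superhuman inside the one task it was built for, with nothing about general intelligence implied.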

> Making what we’ve got into actual AI like you said isn’t going to happen, full stop.

I’ve never claimed LLMs will lead to AGI, as I stated in the comment you quoted above.

k

AI is not something somebody is going to develop in their mom’s basement. AGI is NOT inevitable. The current models may grow sophisticated enough that it is hard to distinguish them from AGI, but they will still be LLMs.
I see the current AI bubble as a bunch of guys digging a hole, realizing they can’t get out and deciding the only way out is to keep digging.

> AI is not something somebody is going to develop in their mom’s basement. AGI is NOT inevitable.

Plenty of AI systems have already been developed by private individuals on their personal computers. This is not hypothetical - see the toy example at the end of this comment. And I’m not claiming that our first AGI will have anything to do with LLMs.
I view AGI as inevitable because it’s the natural end goal of us incrementally improving our AI systems over a long enough period of time. As with all human-created technology, we will keep improving it. It doesn’t matter how slow the process is - as long as we keep heading in that direction, we will eventually reach the destination. The only things that could stop us, as far as I can see, are either destroying ourselves some other way before we get there or substrate independence - meaning general intelligence simply cannot be created without our biological wetware. I, however, see no reason to assume that, since human brains are made of matter just like computers are, and I don’t think there’s anything supernatural about intelligence.
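
To make the “personal computers” point concrete, here is roughly the scale of thing I mean: a complete neural network, written from scratch in plain Python with only the standard library, that learns XOR by backpropagation in seconds on any laptop. The architecture, seed and hyperparameters are arbitrary illustrative choices, not taken from any particular project:

```python
import math, random

# A from-scratch neural network (2 inputs, 4 hidden units, 1 output)
# trained by plain backpropagation to learn XOR - the classic
# "hello world" of machine learning, small enough for any home PC.

random.seed(1)
H = 4                                   # hidden units
lr = 0.5                                # learning rate
sig = lambda z: 1.0 / (1.0 + math.exp(-z))

w_h = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(H)]
b_h = [0.0] * H
w_o = [random.uniform(-1, 1) for _ in range(H)]
b_o = 0.0
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    h = [sig(w[0] * x[0] + w[1] * x[1] + b) for w, b in zip(w_h, b_h)]
    y = sig(sum(wo * hj for wo, hj in zip(w_o, h)) + b_o)
    return h, y

for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        d_y = (y - t) * y * (1 - y)                  # output-layer error
        for j in range(H):
            d_h = d_y * w_o[j] * h[j] * (1 - h[j])   # hidden-layer error
            w_o[j] -= lr * d_y * h[j]
            w_h[j][0] -= lr * d_h * x[0]
            w_h[j][1] -= lr * d_h * x[1]
            b_h[j] -= lr * d_h
        b_o -= lr * d_y

# After training, each output should sit close to its XOR target.
for x, t in data:
    print(f"{x} -> {forward(x)[1]:.2f} (target {t})")
```

That is “AI developed on a personal computer” in the most literal sense - tiny, but categorically the same kind of system.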