• whotookkarl@lemmy.dbzer0.com · 3 days ago
    • 4 capitalist executives quoted
    • 0 labor leaders quoted
    • 0 relevant scientists quoted

    There is good information in the article showing how executives lied, and are still lying, for profits that centralize wealth under oligarchs. But the opposing quotes come directly from the article’s writer rather than from another voice who could have been quoted, and in that way it reads more like an editorial than journalism.

  • BarneyPiccolo@lemmy.today · 4 days ago

    “Our job at OpenAI and in the AI space — and we need to do a much better job — is to explain to people why … this is going to be really good for them, for their families and for society writ large,”

    And here is the crux of the problem - they are lying to us. After making it very clear that they wanted us to integrate AI into our jobs, it has also become clear that their ultimate objective is to replace as many jobs as possible with AI, even if the AI’s results are substandard, because the AI is so much more profitable.

    We KNOW the objective is to fire as many of us as possible, so the general public has become extremely hostile toward AI. Now the AI companies want to re-brand as family friendly assistants to our lives. Too late, assholes, we’re already onto you. Tell your lies walking.

    It must be awful to have fought to become a billionaire, thinking you could relax on the bodies of your vanquished foes, and enjoy the tranquility that you’ve earned, only to find out that you have created an endless supply of enemies who want you dead. You have to pay millions for security, only to find that someone can still put a bullet through your front window where you were standing only five minutes before. All that money, and the best it can do is buy you a windowless bunker to cower in.

      • sunbeam60@feddit.uk · 3 days ago

        They are currently selling it at a huge loss, agreed. They’ve got plenty of runway for specialised hardware prices to come down, for companies to get hooked and plugged into the ecosystem and for real value to be demonstrated.

        When this happens they’ll raise prices and companies will gladly pay it.

        Seen from the investors’ perspective, profit at this point is irrelevant.

        • e461h@sh.itjust.works · 3 days ago

          That’s “embrace, extend, extinguish” for you. The question is whether a profitable model is coming. The usual economies of scale don’t seem to add up in this case. Even the maniacs on Wall Street are balking.

            • sunbeam60@feddit.uk · 3 days ago

            That’s not quite my understanding of EEE.

            • Embrace - adopt something that someone else has done
            • Extend - add proprietary extensions on top of the original, quicker than the original owner can
            • Extinguish - kill the original owner off by moving quicker, then either slow down or kill your own support for the product

            What the AI model owners are doing seems to me just to be normal loss-leading with a view to gain market share.

              • e461h@sh.itjust.works · 3 days ago

              That’s fair. I think they’re trying to use EEE to replace search, content creation, and more - everything AI is being shoveled into. But the main goal is just to force utilization by any means necessary and establish a new market and sales model that they themselves can’t yet define.

      • BarneyPiccolo@lemmy.today · 4 days ago

        Not yet, but wait until they’ve reduced their workforce by 75%, and they can save all those associated expenses.

        It won’t work, of course, but they’ve deluded themselves into believing it.

        • e461h@sh.itjust.works · 3 days ago

          Certainly part of the sales pitch. But so far it turns out humans are more efficient (cost less). I think the appeal to companies is the control (and the cost while it’s so heavily subsidized by the industry pushing it). The appeal to the major AI investors and execs is to… privatize the profits and socialize the losses. They will golden parachute themselves and leave the people with their mess.

            • nile_istic@lemmy.world · 3 days ago

            I think the appeal to companies is the control

            This part. Rich people never stopped jerking off over the idea of owning slaves.

        • ripcord@lemmy.world · 4 days ago

          The vast majority of the costs are hardware and infrastructure.

          I think they’re hoping that reaches more of a steady state.

            • Passerby6497@lemmy.world · 4 days ago

            I think they’re hoping that reaches more of a steady state

            With how quickly tech advances and hardware degrades under heavy use, they’re going to be pushing that rock up a hill for a good while lol

  • hume_lemmy@lemmy.ca · 4 days ago

    The article, with the Musk section, points out what nearly everyone else has identified as the primary problem: the people saying that AI will obsolete all workers, and the people saying that those who don’t work don’t deserve to eat, ARE THE EXACT SAME PEOPLE.

    Even the most dumbfuck Magat is going to eventually figure out where that goes and react accordingly.

  • Iconoclast@feddit.uk · 4 days ago

    The way I see it:

    • AGI is inevitable given enough time, assuming we don’t destroy ourselves some other way first.
    • It has the capacity to solve literally all our problems and make life on Earth as close to utopia as possible.
    • That same capacity, however, also enables it to end the human race - either intentionally or as a byproduct of misalignment.
    • If the “West” doesn’t build it first, then China will. There’s no second place in this race.
    • Even if all nation-states somehow agreed to stop its development, a rogue underground group would do it - or possibly some random dude in his mom’s basement.

    I genuinely see no solution to this. I can only hope things turn out well, or at the very least that it doesn’t happen during my lifetime. The genie isn’t going back into the bottle.

    • Lydon_Feen@lemmy.world · 4 days ago

      “It has the capacity to solve literally all our problems and make life on Earth as close to utopia as possible.”

      Sure… if it weren’t in the hands of people whose main purpose is to gather more money, resources, and power.

      It won’t solve all our problems. It will solve theirs.

    • IratePirate@feddit.org · 4 days ago

      Good work, citizen! The tech bros need you to believe that their dumb digital parrots will eventually, magically metamorphose into AGI. It’s the only thing that keeps that sweet VC money flowing and the AI bubble from popping.

      • Iconoclast@feddit.uk · 4 days ago

        I’m just going to ignore your completely uncalled-for smug and dismissive tone and note that at no point have I suggested LLMs will lead to AGI.

        Thank you for your contribution to making this platform a worse place for everyone.

        • DudeImMacGyver@kbin.earth · 4 days ago

          The irony of your response is strong. Also, you DID say that:

          I view AGI as inevitable because it’s the natural end goal of us incrementally improving our AI systems over a long enough period of time. As with all human-created technology, we will keep improving it. It doesn’t matter how slow the process is - as long as we keep heading in that direction, we will eventually reach the destination. The only things that could stop us, as far as I can see, are either destroying ourselves some other way before we get there or substrate independence - meaning general intelligence simply cannot be created without our biological wetware. I however see no reason to assume that, since human brains are made of matter just like computers are and I don’t think there’s anything supernatural about intelligence.

          It sounds like you’ve bought into techbro bullshit, but don’t realize it.

          • Iconoclast@feddit.uk · 4 days ago

            Feel free to help me realize it then, because whatever irony or conflict you’re seeing there, I don’t see.

            • DudeImMacGyver@kbin.earth · 4 days ago

              Yes, I can see that.

              The “AI” that we have now is not actually AI, that’s just a marketing term. Actual experts (read: Not people like Sam Altman) point out that LLMs are severely flawed and will always return bad information. This problem is baked into the way these models function. Making what we’ve got into actual AI like you said isn’t going to happen, full stop.

              Don’t believe the horseshit you hear from people trying to sell something.

              • Iconoclast@feddit.uk · 4 days ago

                The “AI” that we have now is not actually AI

                This is simply false. We’ve had AI since 1956.

                AI isn’t any one thing. It’s a broad term used in computer science to refer to any system designed to perform a cognitive task that would normally require human intelligence. The chess opponent on an old Atari console is an AI. It’s an intelligent system - but only narrowly so. That’s called “narrow” or “weak” AI.

                It can still have superhuman abilities, but only within the specific task it was built for - like playing chess or generating language.

                A large language model like ChatGPT is also narrow AI. It’s exceptionally good at what it was designed to do: generate natural-sounding language. What people expect from it, though, isn’t narrow intelligence - it’s general intelligence. The ability to apply cognitive skills across a wide range of domains the way a human can. That’s something LLMs simply can’t do - at least not yet. Artificial General Intelligence is the end goal for many AI companies, but LLMs are not generally intelligent. However they still fall under the umbrella of AI as a broad category of systems.

                Making what we’ve got into actual AI like you said isn’t going to happen, full stop.

                I’ve never claimed LLMs will lead to AGI as I stated in the comment you quoted above.

    • Simulation6@sopuli.xyz · 4 days ago

      AI is not something somebody is going to develop in their mom’s basement. AGI is NOT inevitable. The current models may grow sophisticated enough that it is hard to distinguish them from AGI, but they will still be LLMs.
      I see the current AI bubble as a bunch of guys digging a hole, realizing they can’t get out, and deciding the only way out is to keep digging.

      • Iconoclast@feddit.uk · 4 days ago

        AI is not something somebody is going to develop in their mom’s basement. AGI is NOT inevitable.

        Plenty of AI systems have already been developed by private individuals on their personal computers. This is not hypothetical. And I’m not claiming that our first AGI will have anything to do with LLMs.

        I view AGI as inevitable because it’s the natural end goal of us incrementally improving our AI systems over a long enough period of time. As with all human-created technology, we will keep improving it. It doesn’t matter how slow the process is - as long as we keep heading in that direction, we will eventually reach the destination. The only things that could stop us, as far as I can see, are either destroying ourselves some other way before we get there or substrate independence - meaning general intelligence simply cannot be created without our biological wetware. I however see no reason to assume that, since human brains are made of matter just like computers are and I don’t think there’s anything supernatural about intelligence.

  • Aatube@lemmy.dbzer0.com · 4 days ago

    Have the comments here read the article? It’s arguing that the CEOs themselves have spread the doomer narrative and are now being molotov’d as a result. The subject of the title is/includes Altman, hence the Altman cover photo. This was way way better than I expected of Gizmodo (bravo Gizmodo), warning us that execs are only toning down their AI dooming for self-protection.

    Whatever happens, it feels like the AI executives have painted themselves into a corner. They’ve told everyone their product has the potential to destroy everything. They were the doomers, if we want to call it that, at least when it was convenient. And now we seem to be entering a different era where the same people who told us about the dangers of AI try to get us to look exclusively at what they claim are enormous benefits for society; so far, with little to show.

    @gravitas_deficiency@sh.itjust.works @Sundray@lemmus.org

    • Iconoclast@feddit.uk · 4 days ago

      Have the comments here read the article?

      You serious? Of course not - but they did see the letters “AI” in the title.