• Melvin_Ferd@lemmy.world · 7 points · 17 hours ago

      Right, but so are people. This was all inevitable from the start, and much worse is coming. People should have embraced countermeasures decades ago, and I’m not just talking about running a fucking ad blocker. There needed to be active measures to fuck with data collection: become hostile to any sites or content creators looking to sell viewers anything, run applications that feed junk data back to sites.
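      The “junk data” idea can be sketched in a few lines. This is a toy illustration only: every field name below is made up, and real obfuscation tools (AdNauseam-style ad clicking, decoy search queries) work at the request level rather than by emitting literal profile blobs.

```python
import random
import string

def junk_profile(rng: random.Random) -> dict:
    """Generate a plausible-looking but meaningless tracking profile.

    Purely illustrative: the field names are invented, and real
    data-poisoning tools issue decoy clicks/requests instead of
    posting JSON blobs like this.
    """
    interests = ["gardening", "crypto", "knitting", "f1", "birdwatching",
                 "opera", "mma", "genealogy", "vaporwave", "cheesemaking"]
    return {
        # random noise to pollute any fingerprint keyed on this field
        "user_agent_noise": "".join(rng.choices(string.ascii_letters, k=12)),
        # three random "interests" drawn fresh each time
        "interests": rng.sample(interests, k=3),
        "age_bracket": rng.choice(["18-24", "25-34", "35-44", "45-54", "55+"]),
    }

rng = random.Random(0)  # seeded so the sketch is reproducible
profiles = [junk_profile(rng) for _ in range(5)]
```

      The point of the decoys is volume and inconsistency: a profile built from this stream averages out to nothing useful.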

      Had people started earlier, they wouldn’t be facing the giant industry that grew out of digital spaces which welcomed the very things we all knew would become this data collection beast.

      • OldChicoAle@lemmy.world · 2 points · 13 hours ago

        Yeah, I agree. On a related note, I feel like the early internet was full of naive hopefulness about independent creation and opportunity. It was a landscape still free of capitalistic greed and shittery. That’s one reason this behemoth of data collection, surveillance, and ads was able to dominate so quickly and so completely.

        Someone want to go chat on a MyBB or phpBB forum for a bit? Like, for therapy? I miss the old internet.

  • A_norny_mousse@piefed.zip · 6 points · 23 hours ago

    As much as I hate AI, and love a good SkyNet joke:

    It’s not AI itself doing this, it’s the stuff it’s used for. Agentic AI, Palantir, yadda yadda.

    And the deliberately slackened regulations around it in the USA (as if they weren’t slack enough already).

  • TropicalDingdong@lemmy.world · 4 points · 23 hours ago

    This is the kind of headline that will be studied in the future, when people try to answer the question of how things got so bad.

  • Tyrq@lemmy.dbzer0.com · 1 point · 19 hours ago

    The other edge of the sword here is that the tech hallucinates enough that everyone gets some amount of plausible deniability about whatever the AI decides is real info and plugs into the database.

    Maybe not that comforting, considering they’ll just use the AI as a scapegoat for whatever bullshit excuse they want anyway.

    • partofthevoice@lemmy.zip · 6 points · edited · 18 hours ago

      They aren’t using LLMs to do the spying.

      LLMs function because we now have a technology that can operate in a space of extremely high mathematical abstraction. Consider for a moment what you do know about LLMs: they’re trained on massive amounts of text, and fundamentally they operate by predicting the next token (roughly, word) in a sequence (roughly, sentence).

      An LLM is what you get when you use this method of information processing on natural language.
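      The next-token idea is easy to demonstrate at toy scale. A minimal sketch, substituting a bigram frequency table for the neural network (real LLMs use transformers over subword tokens, but the interface is the same: given context, emit the most probable next token):

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count, for each word, which words follow it and how often."""
    tokens = corpus.split()
    table = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        table[prev][nxt] += 1
    return table

def predict_next(table: dict, context_word: str) -> str:
    """Greedy decoding: return the most frequent successor."""
    return table[context_word].most_common(1)[0][0]

corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug")
model = train_bigrams(corpus)
# predict_next(model, "the") -> "cat"  ("cat" follows "the" most often)
# predict_next(model, "sat") -> "on"
```

      Everything an LLM adds on top of this (attention, huge context windows, learned embeddings) is about making that prediction vastly better, not about changing what is being predicted.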

      What if you instead train it on fingerprinting user identities based on web behavior? It doesn’t even output language in that case; it’s a different tool built on the same fundamental information-processing methodology.
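      A hypothetical sketch of that reuse: represent each browsing session as a bag of (site, hour-of-day) events and link sessions by similarity. All the names and numbers below are invented, and real fingerprinting stacks use far richer features (fonts, canvas, timing); only the matching principle is the same.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse event-count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented sessions: (site, hour-of-day) -> visit count.
alice_monday  = Counter({("news.example", 8): 5, ("forum.example", 22): 9})
alice_tuesday = Counter({("news.example", 8): 4, ("forum.example", 22): 8})
bob_monday    = Counter({("video.example", 14): 7, ("shop.example", 19): 3})

# The same user on different days scores far higher than two
# different users, which is all a linker needs.
same_user = cosine(alice_monday, alice_tuesday)
diff_user = cosine(alice_monday, bob_monday)
```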

      What if you train a system to automate semantic analysis, which is much simpler than an LLM? Give it categories like “leftist activist” and see what kind of lists it can garner after processing the likes, shares, replies, and views of every Reddit user that has ever existed. What if you then cross-associate users via writing styles, so it can roughly patch up your old Reddit with your new Lemmy, or maybe even your really old Facebook with your old Reddit? What if they further augment that with ISP data that really drives these points home?
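      The writing-style cross-association is classic stylometry. A minimal sketch using character trigram profiles (the example texts are made up, and production systems use far richer features and trained classifiers):

```python
import math
from collections import Counter

def trigram_profile(text: str) -> Counter:
    """Character trigram frequencies: a crude writing-style fingerprint."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two trigram profiles."""
    dot = sum(a[g] * b[g] for g in a)
    return dot / (math.sqrt(sum(v * v for v in a.values())) *
                  math.sqrt(sum(v * v for v in b.values())))

# Invented posts: two by the same hypothetical author on different
# platforms, one by someone with a very different register.
reddit_post = "honestly i reckon the whole thing is overblown, mate"
lemmy_post  = "honestly mate, i reckon this one is overblown too"
other_post  = "Per my previous email, kindly revert at the earliest."

p = trigram_profile(reddit_post)
sim_same = similarity(p, trigram_profile(lemmy_post))
sim_diff = similarity(p, trigram_profile(other_post))
```

      Even this crude version separates the two authors; scale it over every public post ever written and account linkage stops being hypothetical.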

      What if they don’t need tens of thousands of analysts to do this kind of thing for every single American citizen anymore? Something previously seen as intractable, and dismissed outside conspiracist circles, might now only require a large enough data center. Surely it doesn’t require a data center with a ballroom on top, but that’s more architectural than anything else.

      Edit: let me be clearer about something. LLMs don’t predict the truth; LLMs predict the next token. That said, they do a really damn good job. Hallucinations are a problem of aligning that good job with our expectation of truth, which is a different issue. So when you consider the effectiveness of this “spying technology,” do so by comparing it to an LLM’s ability to “sound right,” not to “be right.”