• 0 Posts
  • 7 Comments
Joined 2 months ago
Cake day: February 18th, 2026

  • Yeah, exactly. And to explain why that bottom-up value-build didn’t happen, I need to go on a tangent about information-overload hypercapitalism.

    Before ChatGPT first made waves, there was an initial round of actual “visionaries” who saw AI’s potential and seeded it. Once there was a demonstrably cool tech demo, the hype cycle started as normal, but unlike in prior years, the story was too big to be contained by any rational limit. Every Wall Street and Silicon Valley bro had been primed by years of watching Musk, Jobs, Gates, Ballmer, and Zuckerberg, and there are probably literally millions of them with aspiring-billionaire god complexes. They sold themselves and each other on this being a technology that could do everything, and realized that by amplifying others’ wild plans, they would lend credibility to their own.

    This feedback loop became self-reinforcing, until hyperscalers were drawing up trillion-dollar plans that objectively were, and are, insane, but that everyone just kind of agreed were not insane out of self-interest. That simultaneously cut off the possibility of bottom-up innovation, because now everyone’s yacht and LA mansion and third vacation-island home depended on the tech doing everything short of changing your baby’s diaper.

    VC bought and funded the hype for a long time, but the “show me you can make money” window started shrinking from the usual years to months to weeks; now AI startups aren’t even getting in on it anymore, and even OpenAI is discontinuing products to save compute. Now, major companies are desperate to prove their hundreds of billions of dollars in spending was worth anything at all.

    Which means the rational, normal bottom-up approach you outlined can’t work anymore. There’s no time, and too much money on the line.





  • Banafa says he urges his students to use AI.

    “Don’t be left behind. I mean, if you see any kind of new tools in AI, any new projects by the big name, by OpenAI or Google, go and learn it. Get certifications, take classes that would make you in the front of the line when it comes to hiring,” he said.

    You can smell the misguided desperation from the person quoted through the screen.

    They’re laying off 10% of the workforce and simultaneously rewarding employees who waste as many AI resources as possible, including the one employee who burned $1.4 million in tokens in one month.

    It’s a contest in tech right now over who can signal the hardest that they’re “AI-first,” and Zuck continues to show his lack of imagination and independent thought by lighting 10% of the company on fire to make the most smoke in a valley already choking on its own smoldering fumes.


  • I hear that, but it really depends on the service, the prompt (including the service’s hidden internal prompt), and the result, which are often black boxes.

    I personally think artists & labels will have a tough time proving infringement based purely on training data when the output itself is non-infringing. But there’s really no way to be sure that a “generated,” “uncopyrightable” AI track distilled from unlicensed source music isn’t itself infringing as a pure substantial-similarity question (or under whatever your jurisdiction’s legal test for infringement is).