And all that data by definition excludes: "AI" is not built on "all of humankind's knowledge" but on whatever a mostly western view of the world deems relevant. Cultures that are not within that framework, that might even rely on more oral forms of keeping history and knowledge, are not represented. Even if those groups are not actively excluded (which again they very often are), there are huge populations who are simply not seen by the data and do not get a say in how they are represented. Or if they are represented, it's only as problems: think about unsheltered people, for example.
The right loves those patterns because they confirm their prejudices: Ask an image generator for a picture of two people kissing and you most often get a heterosexual couple, often white. Because that's what the training data looks like. That makes "AI" perfect for creating the kind of idealized, fictional "past" that fascists love to allude to ("make America great again"), a past that never existed but that supposedly needs to be saved or restored (we'll get back to that later).
…
we can pretty easily determine the short-term purpose of “AI”: The destruction of labor power.
This dismantling happens on multiple levels by attacking the foundation of what allows those forms of organization to take place.
The first level is very individualistic: By pointing at "the AI" that can supposedly replace a worker, that worker is pressured into working harder and not asking for raises or any other improvements to their working conditions. Even though "AI" cannot do your job, the threat itself is useful to employers to undermine your individual power, your feeling of being valuable as a worker.
The second level is about attacking the idea of solidarity and connection: Because "AI" will not replace you (again, "AI" cannot replace the absolute majority of workers!) "but someone using AI will". This sets up a kind of Thunderdome in which we all have to fight against each other for scraps/jobs. This framing implies that you should not unionize and connect with your fellow workers but that you should see them as your enemies, as the people who will take your job and your ability to provide for your family. We know this dynamic, it's exactly how the right presents migration as "attack". It also normalizes violence, again turning all of existence into an endless fight against one another (unless you are one of the few people in power, of course).
The third level is somewhat more devious, because it makes us do that dissolving of social bonds ourselves. An example: If I use an "AI" to generate an illustration instead of asking a designer, I am saying that while my skills and labor have value, those of the designer do not. This implicitly undermines my ability to form connections of solidarity with designers, whose work and livelihood I have implicitly declared irrelevant. It makes me put myself over my fellow workers, workers who face the same struggles as me, who are my comrades. But no longer.
but Nick Land as an influential thinker and a right-wing reactionary is very surprising and unexpected to me. I still consider his work with ccru (the writings 1997-2003) to be interesting and to have some kind of forward potential, but when I realised this accelerationist story I was shocked. And how far-reaching it is.
most of all it's clear that a modern capitalist, a "humble liberal" as e.g. Peter Thiel describes himself, is a being who has renounced any moral attachment to his product, the way the surveillance-capitalist overlords (Palantir, Meta, Google) do, the way corporations would for sweatshops in Asia. A being who exploits by day and meditates in the afternoon, as if having nothing to do with the suffering and the evil they have caused. As Žižek also noted, the age of the shameless and most perverted.
Some notes.
Unionizing is great when you must work in a business with highly concentrated ownership. Unions, when they are democratic, counterbalance the concentrated ownership of your workplace.
An undemocratic union can be a tool for worker management as opposed to worker empowerment.
However, workers should aspire to own their workplaces. That’s what worker coops are for.
In a worker coop, workers are the owners and their own bosses. That goes far beyond what even the best union can achieve for worker empowerment.
Also workers competing for scraps has been a thing forever now. AI simply pours gasoline on that fire. It’s bad, really bad, but not new.
The main problem is landlessness or assetlessness. When you own nothing, you will accept any conditions to survive. Owning nothing sets your leverage to 0. Even if you have skills, if you can't say 'no' you cannot negotiate. To be able to say 'no' reliably and reasonably you need either personal assets, or incredibly useful and resourceful commons, which are assets in which you have an unshakeable share.
We should do away with the landless/assetless condition as a matter of principle. That means wealth accumulations must have a ceiling.
Kinda depressing.
Half the tech is by socialist-leaning lgbtq+ people and the other half is by pedophile fascists, and people picked the second kind to popularise.
I guess ad money influences a lot.
Heh
Yup on all counts.
Good post. There is also the cult factor with the current AI/LLM euphoria, where Silicon Valley is selling their ML tech as an omnipresent god and everyone must bow to its superintelligence.
The second level is about attacking the idea of solidarity and connection: Because "AI" will not replace you (again, "AI" cannot replace the absolute majority of workers!) "but someone using AI will". This sets up a kind of Thunderdome in which we all have to fight against each other for scraps/jobs. This framing implies that you should not unionize and connect with your fellow workers but that you should see them as your enemies, as the people who will take your job and your ability to provide for your family. We know this dynamic, it's exactly how the right presents migration as "attack". It also normalizes violence, again turning all of existence into an endless fight against one another (unless you are one of the few people in power, of course).
This is the latest attempt at socially engineering a cattle pen: atomize everyone into trusting only chatbots, and you have a very malleable population.
I read somewhere that "building community is inconvenient" and I think about that a lot these days. Doing the inconvenient thing (i.e., asking your designer friend to design something) builds community, but it is arguably less convenient than having a bot do it for you. The benefit of the inconvenience, though, is the community.
I am autistic, and expecting me to prefer asking people to do things and building community, instead of being fine with solitariness and the tools that help me achieve it, feels a bit ableist tbh.
You literally need community as a great ape of a social species. It doesn’t have to be huge but something fundamental is missing when we don’t have community.
I’m also autistic. You are, in fact, going to have to interact with other people at some point.
You chose to write a comment to another human being about how you would like to be treated. Imagine if, through doing that, the people you still have to interact with in your daily life learn to actually treat you that way - that too is community.
A neighbor getting groceries for you so you don't have to go to an overstimulating store, and so you can have them arrive at your door within a one-minute interval from 10:00 to 10:01 on Tuesday and Friday, is community. Someone tailoring your shirts so you don't notice any friction against your skin anymore is community.
The tools that capitalism can provide are depersonalized, poorly fitting, and often malicious, but a community that works with you can come to understand you and learn to fit your needs better than anything you're likely to be able to buy.
I literally said that building community is inconvenient (for everyone).
It’s always the same with those “allies”. “Conform to our arbitrary rules or we will oppress you again”
Can we make everyone at db0 read this? They are very convinced that slop is resistance.
You could read them or just keep strawmanning them.
We don’t consider that.
But their strawman said you do, that’s clearly good enough. No asking questions or reading new information!!
Do you have thoughts on the article? I think this and his article on vulgar displays of power are interesting critiques.
I feel the author routinely conflates technology with technological implementations to make their point. For example, they conflate surveillance with camera equipment, and they conflate bridges with city planning. What I and others are pointing out is that we're not hostile to GenAI as a technology (like cameras, bridges) but we're absolutely against the popular (i.e. corporate) GenAI implementations.
I feel there's no harm whatsoever in using GenAI for personal gratification. Games, porn, shortcuts on boring but known tasks, etc. But there absolutely is harm in using corporate AI to do so, and there absolutely is harm in using it for purposes outside of that (facts, propaganda, agents, etc.)
And yes, any technology built in capitalist systems will inevitably be corrupted from its inception by those systems. This is not exclusive to GenAI as a technology, however.
Things that can be weaponized should always be subjected to scrutiny like this.
Firework sales are banned in some places because they are explosives and they require a licence. It was decided that any level of use is too dangerous for untrained people. Do I have a personal stash of fireworks in my basement? Yes. Am I allowing anyone with an Internet connection to sample the use of my fireworks? No.
Your examples of applications vs. technologies are flawed, as the creation and/or distribution of those technologies you mentioned are very mature and have restrictions on their creation and use. I can't build a bridge and let anybody use it. I can't put a camera wherever I want either. People are harmed by misuse.
If you want to claim that there is no harm, read that first paragraph again and view it through the lens of media that a person consumes while developing. Is allowing the tool to underrepresent certain children and hamper their ability to portray their peers something that we want to happen? Is that inability to portray diversity harmful? Should we restrict that use in some way?
Personal gratification as an asocial thing: we can all agree that everyone should have a choice in how they want that to be. Personal gratification is rarely completely that way, though. Media is a source of expression and a focal point for shared experiences. In a social environment this tool can be inherently inequitable or used to abuse each other.
You did not correctly read the text. Winner’s essay is “Do artifacts have politics?” i.e. does a particular bridge instantiate some politics, not bridges in general.
So a particular bridge which is designed to exclude black children from school by stopping the bus carrying them sits there having racist politics. Another bridge designed to help black kids get to school might be anti-racist in its politics. They are two separate artefacts.
The author is arguing that the machines as they exist are fascist artefacts. If you made one, say, with the consent of everyone involved, that somehow could not be used for commercial purposes, whose data and outputs were not curated via exploitation of people from the global south, whose data were made truly representative of humans, etc., then that would not be a fascist artefact. The author doesn't really address this as a possibility, because I think it's clear that (a) this isn't happening and (b) it actually can't at the moment.
My point is, even a racist bridge has uses as a bridge. The solution is not to tear down the bridge, it's to address the systemic racism. Likewise with GenAI tech: we are not involved in making it, nor do we claim that the way it was created was correct. We just say that people using it are not necessarily reinforcing the problem, any more than a random pedestrian crossing that bridge would be reinforcing racism.
Yes, the way the tech was created and is being used is problematic; an imperfect creation of a deeply flawed system. I'm not disputing that. I'm disputing that any and all uses of an imperfect creation are by necessity "fascist".
And all that data does by definition exclude: “AI” is not built on “all of humankind’s knowledge” but based on whatever a mostly western view of the world and what is relevant looks like.
It's based on the specific western worldview the techbros have and want to project, while drinking whatever flavor of kool-aid is seen as edgy at the moment.
I dunno, claiming that fascism was always in our tech is an incorrect statement. The Amiga 1200 wanted to hurt nobody.
I love the Amiga 1200
In his influential 1980 paper "Do Artifacts Have Politics?" Langdon Winner argues that this view of "neutral technology" does not hold up: the politics of specific artifacts do not just come from who uses the technology and for what purpose, but technologies have built-in politics that stem from the political views and goals of the people building the technology as well as from their internal structure.
He shows this by pointing at how certain bridges were built racist: When the civil rights movement in the US won black kids the right to go to the often better schools that used to accept only white kids, politicians, for example, planned roads and bridges in a way that the buses supposed to take the black kids to the white schools could not pass. This was not oversight but design intent. The racism is built into the structure of the artifact itself.
Winner also argues that certain technologies imply certain political or social structures in order to exist: The nuclear bomb implies not just scientists who can build it and a state that considers that form of destruction a valid way of acting in the world, but also a security state capable of controlling and defending it. You simply cannot build a nuclear bomb without those structures; they are implied if not required, enforced by the artifact itself.
Winner’s work does not argue that the embedded politics of an artifact are always absolute: We do know of many potentially oppressive technologies that have been taken by artists and activists to turn them against their original use. But that is always an uphill battle: Surveillance will always lean towards a more forceful, rigid, less free understanding of government for example. You can use (counter-)surveillance of course but you always have to be aware of not reproducing the logic you are trying to criticize or attack.
Nobody is claiming all technology is fascist, but all of it embodies some politics, and some of that politics is fascist.
The right loves those patterns because they confirm their prejudices: Ask an image generator for a picture of two people kissing and you most often get a heterosexual couple, often white. Because that's what the training data looks like. That makes "AI" perfect for creating the kind of idealized, fictional "past" that fascists love to allude to ("make America great again"), a past that never existed but that supposedly needs to be saved or restored (we'll get back to that later).
It depends which AI you ask, no? Ask AI made in a mostly white country, you’re going to get pictures of mostly white people. Ask AI made in a mostly Asian country, it’ll be mostly Asian people.
This same dilemma came up with search engines. Everyone complained that "doctor" in Google Image search showed mostly white doctors. Except if you made the same search on Weibo, it showed mostly Asian doctors.
Same goes for heterosexual couples, it’s just what’s more common.
I’m starting to get the impression a lot of this anti-AI push is coming from Russia and China, the way it’s being framed as an “anti-West” thing. Because the chips for it are coming from the West. It’s just like vaccines, and how a lot of the anti-vaccine propaganda was coming from Russia and China. Because they need to make the thing that’s saving the world look bad.
I’m not saying AI is saving the world. But a “fascist artifact”? Come on. That’s classic Russian propaganda hyperbole.
I know what you mean, but this looks more like capitalist propaganda. The “creative sector” is ruthlessly competitive and hardcore capitalist. Smearing your competition is a normal tactic for getting ahead.
Note the marketing psychology 101. Build a personal relationship between customers and your brand. Make them feel like they are part of a community. That gets you customer loyalty without having to do something expensive, like offering good value.
Also note how there is never any mention of non-creatives threatened by AI, like translators or programmers. Technologies that only threaten blue collar jobs are not even controversial.
Think about what the actionable takeaway is. Protect the jobs of creatives and expand intellectual property to allow more rent extraction by rights owners. And those marginalized groups? Stay marginalized. But perhaps there should be some changes. The data labelers should lose their jobs (they will be so happy!). The jobs should go to people in our community (which is good in a completely not-racist way!). We should keep doing what we're doing (it's ok that this churns out racist data, as long as no one trains AI).
Read the whole article, don’t just react to a single bit I highlighted.
I did read the whole article. It feels rambling, vague, and incoherent, and full of these classic talking points.
Like congratulations, in talking about AI, you somehow managed to shoehorn in - unhoused people, MAGA, labour value, worker solidarity, migration, the normalization of violence… you write like a tweaker.
Well, on your whole "it's just what's more common" point: this entrenching of the "norm" as what is true is precisely one of the criticisms raised, so it seems strange to offer it as a defense. Further, much of the training data for all models is in English and reflects global-north values and perspectives, particularly European and American, because this is what has been digitised.
He also writes about the issues with centralisation and control, disinformation, lack of consent, undermining of government accountability, and devaluing of institutions and transparency. It seems weird to dismiss all of this as "just use a distillation of chatgpt run by a Chinese company".