And all that data does, by definition, exclude: “AI” is not built on “all of humankind’s knowledge” but on whatever a mostly Western view of the world deems relevant. Cultures outside that framework, which might rely on more oral forms of keeping history and knowledge, are not represented. Even if those groups are not actively excluded (which, again, they very often are), there are huge populations who are simply not seen by the data and do not get a say in how they are represented. Or, if they are represented, it’s only as problems: think about unsheltered people, for example.
The right loves those patterns because they confirm its prejudices: ask an image generator for a picture of two people kissing and you will most often get a heterosexual couple, often white, because that’s what the training data looks like. That makes “AI” perfect for creating the kind of idealized, fictional “past” that fascists love to allude to (“make America great again”), a past that never existed but that supposedly needs to be saved or restored (we’ll get back to that later).
…
we can pretty easily determine the short-term purpose of “AI”: The destruction of labor power.
This dismantling happens on multiple levels, each attacking the foundations that allow those forms of organization to take place.
The first level is very individualistic: by pointing at “the AI” that can supposedly replace a worker, that worker is pressured into working harder and not asking for raises or any other improvements to their working conditions. Even though “AI” cannot do your job, the threat itself is useful to employers: it undermines your individual power, your feeling of being valuable as a worker.
The second level attacks the idea of solidarity and connection: because “AI” will not replace you (again, “AI” cannot replace the vast majority of workers!) “but someone using AI will”. This sets up a kind of Thunderdome in which we all have to fight one another for scraps/jobs. The framing implies that you should not unionize and connect with your fellow workers but should instead see them as your enemies, as the people who will take your job and your ability to provide for your family. We know this dynamic: it is exactly how the right presents migration as an “attack”. It also normalizes violence, again turning all of existence into an endless fight against one another (unless you are one of the few people in power, of course).
The third level is somewhat more devious, because it makes us dissolve those social bonds ourselves. An example: if I use an “AI” to generate an illustration instead of asking a designer, I am saying that while my skills and labor have value, the designer’s do not. This cuts off my ability to form bonds of solidarity with designers, whose work and livelihood I have implicitly declared irrelevant. It makes me put myself above my fellow workers, workers who face the same struggles as me, who are my comrades. But no more.
I feel the author routinely conflates technology with technological implementations to make their point. For example, they conflate surveillance with camera equipment, and bridges with city planning. What I and others are pointing out is that we are not hostile to GenAI as a technology (like cameras or bridges); we are absolutely against the popular (i.e. corporate) GenAI implementations.
I feel there’s no harm whatsoever in using GenAI for personal gratification: games, porn, shortcuts on boring but well-understood tasks, etc. But there’s absolutely harm in using corporate AI to do so, and there’s absolutely harm in using it for purposes beyond that (facts, propaganda, agents, etc.).
And yes, any technology built in a capitalist system will inevitably be corrupted from its inception by that system. This is not exclusive to GenAI as a technology, however.
Things that can be weaponized should always be subjected to scrutiny like this.
Firework sales are banned in some places because fireworks are explosives and require a licence. It was decided that any level of use is too dangerous for untrained people. Do I have a personal stash of fireworks in my basement? Yes. Am I allowing anyone with an Internet connection to sample the use of my fireworks? No.
Your examples of applications vs. technologies are flawed, as the creation and/or distribution of the technologies you mention are very mature and restricted in their creation and use. I can’t build a bridge and let anybody use it. I can’t put a camera wherever I want either. People are harmed by misuse.
If you want to claim that there is no harm, read that first paragraph again and view it through the lens of the media a person consumes while growing up. Is allowing the tool to underrepresent certain children and hamper their ability to portray their peers something that we want to happen? Is that inability to portray diversity harmful? Should we restrict that use in some way?
As an asocial thing, we can all agree that everyone should have a choice about how they pursue personal gratification. Personal gratification is rarely completely asocial, though. Media is a means of expression and a focal point for shared experiences. In a social environment this tool can be inherently inequitable, or used to abuse each other.
You did not read the text correctly. Winner’s essay is “Do artifacts have politics?”, i.e. does a particular bridge instantiate some politics, not bridges in general.
So a particular bridge which is designed to exclude black children from school by stopping the bus carrying them sits there having racist politics. Another bridge designed to help black kids get to school might be anti-racist in its politics. They are two separate artefacts.
The author is arguing that the machines, as they exist, are fascist artefacts. If you made one with, say, the consent of everyone involved, that somehow could not be used for commercial purposes, whose data and outputs were not curated via the exploitation of people from the Global South, whose data were made truly representative of humanity, etc., then it would not be a fascist artefact. The author doesn’t really address this as a possibility, I think because it’s clear that (a) this isn’t happening and (b) it actually can’t at the moment.
My point is that even a racist bridge has uses as a bridge. The solution is not to tear down the bridge; it’s to address the systemic racism. Likewise with GenAI tech: we are not involved in making it, nor do we claim that the way it was created was correct. We just say that people using it are not necessarily reinforcing the problem, any more than a random pedestrian crossing that bridge would be reinforcing racism.
Yes, the way the tech was created and is being used is problematic: an imperfect creation of a deeply flawed system. I’m not disputing that. I’m disputing that any and all uses of an imperfect creation are by necessity “fascist”.