
The Same Old Revolution, Now with Added Fiber
I’ve been forced to bring this blog back out of semi-retirement mainly because of the rampant appropriation and misuse of the site’s title. I refer, of course, to the way in which the over-developed world has seized upon anything brought to you by the letters A and I with a manic intensity equal parts credulity, desperation, and mendacity. The initial frenzy accompanied generative AI (ChatGPT, OpenAI’s DALL-E, and their ilk), but now the term AI is being thrown around both as a vague concept (often simply a marketing buzzword for digital products that use older and simpler forms of data gathering and manipulation) and as a half-prayer, half-promise (got a problem? AI can fix it!). Naively, I never expected to see anything that matched the astonishing stupidity that marked the early adoption of social media, but the level of non-thought being applied to the phenomenon of AI dwarfs the eagerness with which supposedly smart people cheerfully embraced–to cite just one example–the idea that an entire platform built upon exchanging soundbites would make the world a better place. The clueless adoption of AI is moving more rapidly than the uptake of any other form of information technology I have seen in my lifetime.
A very few people have responded by spinning fantastic tales of Terminator-style AI overlords; an even smaller number have adopted a more cautionary, rational note, suggesting that perhaps we shouldn’t be so hasty to embrace a technology that is in its infancy and about which even some of its creators have voiced alarm. Most famously, Geoffrey Hinton, the so-called “godfather of AI,” quit Google over concerns that the tech juggernaut was rolling out an untested technology without adequate safeguards. Of course, he is what you call an expert. And the word of an expert, in this day and age, is no match for the potential to cheat your way through college, or to add a blue sky to the photos of your rain-drenched Florida vacay to keep your Instagram followers envious.
That is why they call it seduction, stupid!
I want to acknowledge that for extremely specialized and high-level functions, some of the recent advances in AI show enormous potential. Groups as varied as the NSA, cosmologists, and meteorologists are justly excited by the possibilities of using more sophisticated pattern-recognition tools to help us learn things about the human and natural world that were not visible before. But those are, as I said, extremely specialized applications.
There are also going to be specific situations where GenAI will definitely help people out. It will automate tasks in ways that may save time in the short term. It will provide a better version of tasks that have already been turned over to infotech automation (it can hardly do worse than some of the automated “customer service” functions that companies now use).
But this is the same old marketing story that has driven the infotech revolution for decades now: offering incremental gains as revolutionary–or even evolutionary–leaps forward. Apple long since mastered this; when it rolls out a new phone, it tries to justify the price by the fact that the processor is “faster” (in ways we probably won’t notice), or it has a better camera (in ways we probably won’t notice), or it has a new “must have” feature (but one worth $1,000?), or it is simply the same old stuff but now in Rose Gold with an animated lock screen of a rainbow-sherbet-shitting unicorn!
This is how the dance of tech seduction works, and most humans fall for it every. single. time. AI helps us buy shoes more efficiently. Then it helps us do our taxes. Then we eliminate human teachers. This is why it is called seduction. You don’t realize you are being seduced until you wake up in bed with Elon Musk.
Outside the domain of extremely high-level uses of generative AI, in the ordinary world, the use of garden-variety AI is being driven by staggering levels of willful stupidity. Tools based around AI are generally being used in one or more of the following ways:
- To allow people to fake an artistic talent or skill they do not possess
- To allow people to pretend to a knowledge they don’t have the intelligence or patience to acquire
- To produce something people could create with actual talent and skill but are too lazy to (or for which they have convinced themselves that “they don’t have the time,” because, you know, TikTok needs them).
- To create gains in productivity and efficiency (which is capitalism-speak for firing people and replacing them with automated processes which turn out not to be as efficient as the humans they replaced, but by then it is too late to hire real people back)
- To convince us that we are so stupid that we need AI in the first place (Meta’s AI keeps insisting we need help with writing posts, Google’s wretchedly annoying AI search tells us we are incapable of filtering and sorting evidence ourselves, Acrobat’s AI keeps asking me if I need help reading).
Now the last one in particular has me worried because, based on the evidence of how prevalent the first four uses of AI are, they may be right. But given the number of smart people I know who seem powerless to resist the siren song of what is, after all, simply the latest version of the Big Digital Lie (that technology will allow us to achieve a free and frictionless existence), I think there is something else going on with this AI-philia that bears investigating.
All Your Ethics are Belong to Us
My chief problem with generative versions of AI in particular is very simple: it is ethically compromised, root and branch. There are five core problems with recent iterations of AI.
1. It is brought to you by the same companies, and sometimes the very same people, who brought us the previous generation of invasive and predatory technologies. On the face of it there is something strange about the massive enthusiasm for AI happening at the very moment that the edifice of social utility that sustained most people’s belief in social media in particular, and the attention economy as a whole, is finally beginning to crumble. If you go back to the early years of this blog you will see that I was bleating constantly, mainly in snippy asides, about how stupid and toxic the world being ushered in by social media was. Most of my RL friends patted my head indulgently, or laughed out loud, or said I was simply being a curmudgeon. Now Twitter has turned into the white supremacist dumpster fire of X; TikTok is under threat of being banned by the US government (for, it has to be said, stupidly xenophobic reasons) but is also being sued by a coalition of US state attorneys general for knowingly harming children’s mental health. Meta’s Facebook is as bad as it ever was, now with the added bonus of less fact-checking, threats of firings for “low performers,” and a rollback of all its DEI initiatives. Amazon is being sued by the Federal Trade Commission for anti-competitive monopoly practices (although expect that to go away now that the Orange One has taken office; there is a reason Bezos Bro’d up to him even before the election).
We now have abundant research evidence of the way in which social media cultures have corroded our civic institutions, damaged our mental health, and sapped our souls. Don’t be fooled by those who try to tell you there is “no conclusive evidence” of the harms of social media. This is the same strategy used by climate-change deniers. I’ve read a lot–a lot–of this research over the last few years. It is true that no researcher worth their salt is ever going to say that social media “caused” something like political polarization. America is a sick and twisted society, and the causes of that run deep. But social media has clearly played the role of accelerant and amplifier. I am confident that historians twenty years in the future will look back at this period and conclude that it would have to be a mighty strong coincidence for the Age of Anti-Social Media to have merely coincided with the rise of global authoritarianism and the erosion of the belief in a shared civic responsibility underlying the dream of democracy.
And now, apparently, we are going to trust the latest wave of tech innovation brought to us by the same people who have systematically invaded our privacy, harvested our personal information and sold it to the highest bidder, and who have consistently dodged accountability for the research-proven harms their devices and applications have visited upon society? While Silicon Valley loves to claim that “technology is neutral, it is how you use it that matters,” anything created by human beings has their values imprinted onto it. And especially since the last US election, the real values of the tech sector have been clearly on display. And we are going to trust that these people have–this time! For realz!–developed a technology with our best interests at heart? Just how stupid and gullible do they think we are?
Massively gullible and stupid, it appears. And they are right. No Broligarch ever lost money betting on the stupidity of technology consumers.
Bottom Line: For the many deluded souls out there, once and for all: the Broligarchs are not your friends. It is not that they only care about making money, although clearly it is partly that. It is that they are already living in an anti-human future, a place they have been comfortably inhabiting for some time. One where you only have to worry about whether a technology works or doesn’t, and if so, how efficiently. The impacts of that technology on the lives of real people are of no consequence, because even a social catastrophe will never, ever touch them. This is why they are such natural allies of authoritarianism.
2. Generative AI is the product of the largest act of intellectual property theft in the history of humanity. The LLMs (large language models) on which GenAI programs run were trained on data scraped from the public web. Much of that data is copyrighted material, material that individual people slaved over and for which they were already earning virtually nothing. It is illegal–supposedly–to appropriate copyrighted work and use it for commercial gain without the express permission of the author. If I did this with the work of another author I would be sued in short order, I would lose in court, and I would be fined a lot of money for even a single instance of theft. When Google does it, it is a business model. This is why all the GenAI makers are currently being sued by a massive array of individuals, publications, and artist coalitions.
But it is the response to this breathtaking act of criminal chutzpah that reveals the underlying social problem that is the enabling condition not only of the reckless rollout of AI, but of the slow-motion democratic collapse so many of us are living through. Because the response of everyone who isn’t one of the people whose content has been scraped boils down to this: I don’t care. Far from siding with the little people being demonstrably screwed over, they are tacitly or overtly siding with the gangsters.
Now if I came into your home and stole your TV, you would probably care. If I sold you a car and then vanished into the ether and the cops subsequently pulled you over for driving a stolen vehicle, you would care. But because the kind of criminality we are talking about is abstract, not immediately visible, and massive, people don’t see it as a problem. This demonstrates, in part, that despite living and playing in a virtual world and claiming to be savvy navigators of that world, many people don’t understand how that virtual world works or what kinds of ethical and moral frameworks we should be employing. So, by and large, they don’t employ any moral or ethical framework (the BS “technology is neutral” idea I referenced above). However, most of the things that are shaping our lives in the virtual sphere are also abstract, invisible, and massive in scale. And, as even a cursory examination of the Broligarchs’ collective resume indicates, many of these forces are also criminal (surveillance, data theft, privacy invasion: capabilities that are not accidents or “bugs” but are in fact features built into the core platforms that we use every day). It also demonstrates that most people functionally don’t care about criminality unless it impacts them personally.
This libertarian navel-gazing approach to ethics and morality has two important implications.
- As the history of technological change indicates over and over, once a society starts devaluing the labor of an entire group of people, it becomes easier to devalue them as people (because, regrettably, one of our only reliable frameworks for thinking about people as people seems to be the (in)dignity of their labor), and it becomes easier to eliminate both their jobs and the people themselves.
- As a society’s tolerance for criminality increases, the regular deployment of criminal behavior by large corporations and governments becomes a virtual certainty. Even more concerning–and something that we in the US now have abundant familiarity with–authoritarian regimes can employ criminality on a massive scale, confident that a primed population will go along with them.
Perhaps the most appalling aspect of the societal cricket-chirping that has accompanied the theft of the intellectual property of millions is that it has been sanctioned by the very people who at least pay lip service to the importance of human creativity and the integrity of intellectual property. I’m always gobsmacked when I encounter a librarian who is leaning hard into GenAI. This is a profession that has historically been a staunch defender of the rights of creators. The American Library Association, for example, is currently engaged in a brave, desperate fight on multiple fronts against gangs of MAGA thugs across the US who are trying to strip books out of libraries, get teachers fired, and shut down entire libraries. If you are a librarian who thinks that GenAI is a great idea, then your ability to compartmentalize truly massive criminality means you are a disgrace to your profession and should find another line of work as soon as possible. Maybe as a corporate archivist for TeSSla.
But my own academic colleagues are also demonstrating a frightening ability to look the other way. People who would be mortified if someone stole one of their books or articles and used it to make money are willingly participating in the crime of the century every time they use this tech themselves or encourage their students to use it.
Bottom Line: If you are using any GenAI program, you are benefiting from stolen property and are, in fact if not (yet, hopefully) in law, a criminal accessory.
3. Generative AI is becoming one of the biggest threats to the environment. Last year the Washington Post launched an extended series called “Power Grab,” looking at the rise of cloud computing in general and the environmental impacts of AI data centers in particular. Now, while our inability to halt the steady flow of selfies is a major contributor to the growth of data centers, it is dwarfed by the amount of computing power needed to run AI programs, which in turn is dwarfed by the quantity of energy required to train those programs (intellectual property theft, it turns out, uses a lot of energy!). Some of you may be aware of this, but if you aren’t, this article from the Post, which draws on academic research, might be an eye-opener. After that article appeared, things got worse: so heavy is the demand for power by AI that some states, in order to lure lucrative data centers, have delayed plans to decommission their coal-fired plants or, in some cases, re-opened them.
The standard defense of the AI apologists is the one that I tried to head off above: “Oh, well, everything we do is using large data centers.” Well, for starters, maybe that should encourage all of us to cut back on some of the digital stuff we are doing. But as I pointed out here, the argument is specious because we are talking about an order-of-magnitude difference.
However, the really important point here concerns the question of choice and volition. Despite the impression you might get from this piece, I am not in search of some mythical purity of intention. One of the features of neo-libertarian capitalism is that we are all forced to make some kind of peace with the fact that we are trapped in a variety of ethically compromised positions. Using devices built with components farmed out of human misery. Wearing clothes made in sweatshops. The fact that we kinda know, but try to pretend, that what we do has no impact on real people is part of the problem. Nevertheless, we are forced into a lot of these positions because we need to perform certain actions. If we need information technology devices even to try to do productive things (rather than, say, to keep our Snap streak going), then there are no devices that aren’t built out of conflict minerals.
But using GenAI is–for the moment–a choice. There is environmental damage that we can’t help–for the moment–contributing to, and then there are things that we have some control over. I can choose to dump all my recyclables into the regular trash or not. I can choose to buy a fuel-efficient vehicle or a GMC Yukon Denali (sorry, I mean a GMC Yukon McKinley). If you are a teacher, no one is forcing you to use AI in your classrooms. No one is forcing you to generate DALL-E images of yourself as Cthulhu.
Bottom Line: If you claim to care at all about climate change, you cannot use GenAI and still expect to be taken seriously. For the same reason, I consider professors who encourage their students to use AI, or even to “experiment” with it, to be ethically compromised; they might as well make it a class requirement to drive a Hummer to school.
4. AI exploits the labor of vulnerable communities. I will say that at least some people are aware of the climate costs of AI. What many are not aware of is that AI actually requires a lot of human labor. It is not all a slickly automated array of humming machines. GenAI requires real humans to annotate millions of pieces of content. Unsurprisingly, what has been true of pretty much every other link in the labor chain of the attention economy–from the workers who mine the conflict minerals for our computers, through the sweatshops that assemble our phones, to those who worked in content moderation (when that was still a thing)–is true here as well: the working conditions for AI workers are pretty terrible.
Bottom Line: This should be the thing that people care about the most. It will be the thing that people care about the least. We are largely immune to guilt over the fact that others must suffer for our comfort and convenience.
5. GenAI is to white-collar work what automation was to blue-collar work. The consumer fascination with faking nonexistent talent and skills is, to some degree, a side-show. The real market for the AI Bros is large companies, and the goal here is simple: workforce reduction in the name of capitalism’s holy grail, productivity. If we use GenAI we are willingly participating in a technology built to destroy our own livelihoods. This makes me particularly sad for the students I am teaching, even more so given how many of them are eagerly embracing GenAI and already using it to happily cheat their way through college in ways large and small. I bring them into a discussion of the kinds of ethical issues surrounding technology, and for the most part they seem to make a good-faith effort to be nice to me and not use AI shit in my class. But it is a Sisyphean task given the number of colleagues in other departments who have leaped on the AI bandwagon and are doing the Macarena. (Of course, the results are often predictable; a friend shared the story of a client, a teacher at another university, who taught all his students how to use ChatGPT and then was shocked to find that they all used it to write their final papers…in a class on Legal Ethics. I’m sure all those students will make fine additions to the Trump-era Justice Department.)
But for the students I teach, I’m fully aware that many of the careers they are eagerly imagining for themselves may not even exist by the time they graduate, or will certainly need many fewer people. That may sound overly dramatic. But I will say, as someone who has studied technology and technological change for decades, that I have never seen anything improve as quickly, and be adopted so swiftly, with so little pushback, as GenAI. Creatives are in no doubt that this stuff is coming for them. Film and TV writers are already deeply concerned, and some novelists are also starting to see the writing on the DALL-E-generated wall. You can already find “novels” entirely generated by AI at your favorite digital marketplace. Of course, instead of banning this kind of crap, the digital platforms are gleefully selling it, and apparently some sick fucks are buying it.
Bottom Line: The Broligarchs don’t care if stuff is real or fake, a positive contribution to society or the end of democracy as we know it. They make money either way. In fact, clearly the profits to be generated by societal collapse are enormous.
Obeying in Advance
Where I work I am surrounded by people who, for the most part, have Ph.D.s. My own department has recently, hesitantly, begun to question some of the digi-giddy marketing claims associated with AI, but my university is already leaning into the technology, and many faculty, including in my own program, are encouraging students to use AI, mostly under the specious rationale that “they will be using it in the workplace.” Even people who have been skeptical have adopted the accommodationist rationale that many also used for social media: well, it is inevitable, so we might as well get on board.
Way back in the distant past, when I was in the equivalent of US middle school, we had a unit in our social studies class on the rise of Nazism. Our teacher was great, and I understood all the basic events and processes and, on the macro level, how it had all come about. But there was a mystery to it all that nagged at me. On the individual level, what made people go along with it?
Part of the answer was supplied by Daniel Goldhagen’s 1996 book Hitler’s Willing Executioners, which, despite its many problematic aspects (its essentialist notion of national character, for example), points out an obvious truth: Nazism and the Holocaust happened because a lot of ordinary people were totally onboard with them. Unfortunately, in the US, we don’t need to look far to see how that works. The majority of voting Americans elected a paragon of cluelessness and cruelty, and millions more were so unworried by the promise (not the possibility, the promise; Trump and his cronies have never hidden what they proposed to do in his second term) of authoritarianism that they couldn’t pry themselves loose from their recliners to go and vote.
But the whole AI “brohaha” has made me realize one other important thing. In order to usher in a wave of injustice and inhumanity, all it requires is for people to forget that their vocabulary includes the word “no.” There are many factors shaping the credulous academic response to AI, and many of them are the same ones that shaped the credulous response to social media: academics in general, but particularly those in the humanities, and even more so people in writing programs, know that very few people–including, often, our own university administrations–really value what we do. We clutch at any kind of new shiny that might give us a veneer of relevance. So I’m not surprised when the occasional academic wants to hitch their wagon to GenAI and damn the consequences. But what I’m staggered by is the number of people who never seem to have considered that refusal is a possibility. Instead, they substitute the “we exist to serve the job market” rationale (a bullshit idea to which I do not subscribe) or, despite all their often extensive training in critical analysis, rely on an equally bullshit fatalism. Some of the people I have in mind have spent decades studying writers and sometimes social movements that did not accept a fatalistic view that, well, racism, or lack of voting rights, or labor injustice was inevitable. They said “no.” And then they said “no more.” The fact that some academics seem incapable of learning the lessons of resistance from the people in whose work they specialize has left me more depressed about the impact of literature than I thought possible.
Indeed, far from leading the resistance, some creative avenues have also demonstrated that the ability to naysay has magically been erased from their vocabulary. In 2024 pseudo-novelist Rie Qudan won the prestigious Akutagawa Prize, then admitted to using ChatGPT to help write the novel, with “about 5% of the novel” (her claim, not verified) being produced verbatim by ChatGPT. Stunningly, she was allowed to retain the award. Then last year the non-profit that runs National Novel Writing Month (NaNoWriMo), in a move that any authoritarian or tech entrepreneur would be proud to own, released (and then walked back, and then re-released, and then doubled down on) a statement supporting the use of GenAI as a tool to promote greater inclusivity, especially for disabled people (many writers who identify as disabled were in fact angered by the suggestion that using writing accommodations is the same as using a tool that promotes intellectual theft). Both these cases are striking examples of the inability of organizations that have positioned themselves as champions of creativity and intellectual endeavor to use the word “no.”
This is what “obeying in advance” looks like in the world of GenAI.
For that “no” to die in people’s throats, however, requires something more fundamental: the disappearance of the ability to form ethics-based value judgments. Now the Broligarchs aren’t entirely to blame here; this is a fundamental feature of neo-libertarian capitalism. But Silicon Valley is capitalism turned up to 11. And one of the things that they have gradually primed us to accept over the years, as they rolled out innovation after apparently miraculous innovation, is that there are certain questions that should never be asked. Their seduction-by-marginal-gains ensures that when a new product lands in our laps–digital voice assistants! TikTok! Driverless cars! ChatGPT!–we never ask two fundamental questions:
- Do we really need this?
- What are the real costs?
When it comes to GenAI, the conversation revolves almost entirely around “hey, it can do this thing!” Sure, of course it can. But did we need it to do that thing? And if a lot of us use it to do that thing, what are the costs? Remember, despite capitalist propaganda to the contrary, innovation is not a democratic process. Most creators and companies don’t poll people in advance and ask them if they would like this or that product. Much of our economic output is devoted to creating something and then whipping up a need for it. And Silicon Valley has perfected this game over the last decade. At a fundamental level, I am sure that many of the Broligarchs, and the people who are their willing executioners, know deep down that they are peddling a dodgy product, so they have to create the illusion that this is a done deal as quickly as possible. That is why it is being rolled out over the objections of many experts in the field and is even being deployed in an openly unfinished state; many of the AI applications that we are being forced to use–like Google’s Search AI–are quite open about being “test” versions. Now, in a rational world, you test products before you allow large numbers of people to use them. Imagine if this was how things worked in the pharmaceutical industry, or airplane manufacturing. But this is how the tech sector has in fact been operating for years, using all of us as test subjects at scale. And again, in a rational world, if you were testing a product, you would give people the ability to opt out of the test. Try turning off those Google AI search summaries. Go on, I’ll wait.
None of this means that GenAI is “inevitable.” It will be inevitable only if we surrender to the illusion of manufactured inevitability and don’t use our words. Or one word in particular. No. Or, if you prefer a more high-end polysyllabic response: fuck off.
In case you haven’t seen where this argument has been heading, what it comes down to is this: if we are primed to forget that refusing an apparent inevitability is a possibility, then that is the quintessential precondition for an authoritarian takeover.
Say it with me, friend: Fuck off, Nazis.