“AI” makes writing easier. ChatGPT can produce paragraphs that are good enough for that annoying email you’re procrastinating. It can get you over the hump when you’re stuck, I’m told. And who knows? Maybe the AI will even fall in love with you (a win??).
Setting aside for the moment that AI writing is poor in quality—to use something, however briefly, is to enter a kind of relationship with it, and no relationship goes in one direction.
What effect does convenient writing have on us?
In his 1946 essay “Politics and the English Language,” George Orwell writes: “Our civilization is decadent and our language—so the argument runs—must inevitably share in the general collapse…underneath this lies the half-conscious belief that language is a natural growth and not an instrument which we shape for our own purposes.” This assumption morphs to fit Silicon Valley’s optimism, nihilism, and mysticism. Tech promoters would have us believe that AI’s babbling is as good as speech, or will get there soon.
The problem is that writing is a type of precise thinking, so if you delegate the writing to another, you delegate the thinking, too. (This will be true regardless of whether AI improves.)
Writing is difficult, yes.
“But you are not obliged to go to all this trouble. You can shirk it by simply throwing your mind open and letting the ready-made phrases come crowding in. They will construct your sentences for you—even think your thoughts for you, to a certain extent—and at need they will perform the important service of partially concealing your meaning even from yourself. It is at this point that the special connection between politics and the debasement of language becomes clear.”
(George Orwell again, in an uncanny prognostication)
AI is the literalization of this shirking. So we shouldn’t be surprised to find that the writing AI spits out routinely sounds a lot like Orwell’s laundry list of bad habits in “Politics and the English Language”: ChatGPT favors a clutter of unnecessary words, received phrases, euphemism, and repetitive sentence rhythms.
Even when it isn’t disclosed, you can feel when an article was composed with AI. There’s a certain tone, a house style. Driven to a bland median, it lacks sincerity, texture, and clarity. That’s why bots shine in writing tasks that involve bullshitting. Yet even where AI finds its best use case—formulaic, padded writing, like cover letters—it succeeds only by spamming the reader with fluff. It encourages you to skim.
The only interesting creative uses of large language models I’ve seen so far emphasize this non-human style, though notably they were created early, before the edges were rounded off the LLMs.
A dead metaphor is a comparison used because it’s handy: thoughtlessly applied, it holds little meaning. ChatGPT, which cannot think, produces utterances composed solely of dead metaphors, a zombie language. And I wonder: if a large percentage of the applicants for a job use AI for their personal statements, what good is the metric? Probably the employer will stop requesting cover letters. The quick-fix spawns more issues down the line, clogging the system.
AI inspires grand predictions, but it is nothing more or less than, as Ted Chiang explains well, “a blurry jpeg of all the text on the Web.” And that blur can introduce unexpected elements. I see a version of this when I edit student drafts. For instance, a fantasy writer influenced by The Lord of the Rings will accidentally regurgitate racial stereotypes. It is obvious to them once the problem is pointed out, and the writer feels terrible because it wasn’t intentional. It was a trope they’d seen before, and hadn’t thought through.
Language is always freighted, whether or not it was chosen with care.
The blurred language of AI enters at the individual level, when you delegate to ChatGPT, and it also enters at the societal level, when good writing is buried by the bad, and when readers learn from blurry ideas. The problem is not passive voice itself—it is when a headline obscures the actor in the crime. The problem is not a dead metaphor—it is when whole groups of people are compared to vermin. Students are using AI to do their homework at a point in history when media literacy and analytical thinking are critical. AI is a new iteration of the ongoing enshittification of the internet, but, in our age of misinformation and mistrust, it also presents a real danger. The danger is not that sentient machines will murder us (we do well enough with the “dumb” machines, as far as mass murder goes). The danger is that we stop paying attention when we need to most.
Convenience is a sneaky ill. A personal quick-fix won’t address structural issues, only patch them. For example, because our cities are not walkable and we are overworked, we drive everywhere; as a result, we miss out on chance encounters with people who are different from us, pollute our environment, worsen our quality of life, and deepen the political divide. The effects seem local and temporary, but the damage is universal.
We may not identify convenience as a defining problem of our era, but of course, the point of convenience is not to think about it.
I genuinely do not understand any writer’s desire to use one of these tools. By all means, have fun with them if you can ignore their effects on creative professionals and on energy consumption, but don’t mistake the fun you’re having for writing. You haven’t written anything! At most, you’ve edited a plagiarism aggregator.