The Silicon-Tongued Devil

ChatGPT feeds on language, outputting texts that reinforce the basic assumptions of our culture. The rise of AI forces the Left to take a hard look at the politics of language and the linguistics of Noam Chomsky.

“Man, come on. I had a rough night,” says Jeffrey “the Dude” Lebowski. “And I hate the fuckin’ Eagles, man.”

Immediately after this line is uttered in the Coen brothers’ 1998 film The Big Lebowski, the Dude is physically removed from the cab he is taking home from Malibu — the beach community he has also just been kicked out of — while the Eagles’ 1972 hit “Peaceful Easy Feeling” trills from the car radio. It’s a plea that encapsulates the conflict at the heart of the movie: the aging New Left adrift and in retreat in George H. W. Bush’s America.

But it goes deeper than that. The Dude’s preference for the sublime, authentic sound of Creedence Clearwater Revival, whose album he loses when his car’s tape deck is stolen, draws a bright line between good and bad culture. It’s the hippie desire for transcendence — late 1960s rock ’n’ roll, in this case, instead of drugs — that makes him hate the easy-listening Eagles, a global smash-hit group that defined the sound of the 1970s every bit as much as Creedence did that of the late 1960s.

It’s not hard to find analogues in the present. Hating Taylor Swift can get you thrown out of a bachelorette party or yelled at in the street, if not ejected from a cab. But it’s harder to find a direct counterpart to Creedence. This is partly because avant-garde culture has splintered so completely in our era that every claim of a pop genre being the essence of aesthetic progress is absurd on its face. Say what you want about the New Left’s failures — at least it was an ethos, man.

This episode from The Big Lebowski goes beyond the fall of the hippies. The fight between pop and the avant-garde is as old as the Enlightenment — probably older. Since the 1970s, there has been a constant drumbeat of theories claiming that pop would overtake or even eliminate the avant-garde; in fact, this claim is basically what philosopher Jean-François Lyotard and literary scholar Fredric Jameson each called “postmodernism.” The drumbeat only got louder as the internet overran everything. After all, there is a democratizing surface to the establishment of a peer-to-peer global publishing platform. Add social media to that mix, and then what we’re calling “generative AI,” and it’s easy to see why faith in Creedence seems so quaint.

As author Chuck Klosterman has recently argued, the ’90s was the last time anyone really thought that “selling out” was bad or controversial. From an aesthetic standpoint, we’ve all fallen into what I call a “streamhole,” in which algorithms exploit mass popularity, promising us individualized results while actually homogenizing our content. Those hanging on to their faith in the avant-garde are like the humans who have escaped the Matrix, gathering in Zion to plan the revolution that only a god can offer. (It’s no accident that The Matrix depicts raves as a cherished freedom for the enlightened.)

This split between Creedence and the Eagles, between the good avant-garde and the bad pop, actually poses some deep, almost metaphysical questions about culture. And now the explosion of generative AI — and especially large language models, or LLMs, which underlie chatbots like ChatGPT and Bard — is bringing this old aesthetic question back to the forefront. AI and the threat of cultural automation force us to examine the relationship among language, interpretation, and politics along new lines.

The question is, essentially: What should a Marxist believe about the automation of language? Proposals that capitalism has become “semiotic,” relying on the thought of philosopher Félix Guattari, are not technical enough about language to answer this question directly, while Marxist philosophies of language — like that of Raymond Williams or Valentin Voloshinov — have never integrated the world of machines. Any possible answer will have to combine the ideas of one of the greatest nineteenth-century thinkers, Karl Marx, with those of one of the greatest twentieth-century intellectuals — Noam Chomsky.


With Marx’s interest in economics growing by the day, he and Friedrich Engels spent much of the 1840s trying to work out a materialist philosophy as quickly as possible. But they wanted that materialism to be dynamic. By this they meant that it should not be a philosophy based on the real and prior existence of atoms in motion, but instead on the activity of humans. It was this philosophy that precipitated the first volume of Capital, where Marx laid all his chips on the notion of a “metabolism between the human and nature,” defining human life as the social transformation of the physical environment. This transformation was also the origin of philosophy — its first principle, its driving motor, and its only source of legitimacy. Marx’s philosophy remains arguably the only materialism of its kind to date.

For Marx, the “metabolism” forms the starting point for consciousness. Through our transformation of nature, we “enter into relations” with one another. Whenever you hear Marxists talk about the relations of production, this is what they have in mind. Capitalism is one such set of relations (a “mode of production”), the only one we know thoroughly by experience. Consciousness, Marx thought, was a product of these relations of production, and “in the last instance,” it is determined by that material activity. This idea has been passed down as a brittle distinction between a “base” of economic activity and a “superstructure” of law, government, and culture. The reason we care about it is that somewhere in this abstraction lies ideology.

Every purchase we make and every hour we work, Marx thinks, are shrouded by a trick that papers over the value added to commodities by labor. Consciousness — and language — are not innocent of the mode of production. As he and Engels put it in The German Ideology, human “spirit” is

afflicted with the curse of being “burdened” with matter, which here makes its appearance in the form of agitated layers of air, sounds, in short, of language. Language is as old as consciousness, language is practical consciousness that exists also for other humans . . . . Consciousness is, therefore, from the very beginning a social product, and remains so as long as humans exist at all.

What Marx is saying in his high-flying style here is that language is the medium of production — of our very material existence in the world. We don’t just randomly move things around in the physical world; we create things intentionally, for our use. And we do this in concert with others, not as lone individuals.

The usual way of thinking about base and superstructure is that one determines the other. This is apparent in the dictum that it is “easier to imagine the end of the world than the end of capitalism,” which accounts for so much dystopian fiction that still can’t, even in the grimmest future, think its way out of profit and value. The critic Mark Fisher called this “capitalist realism,” the idea that our narratives and even our imaginations have capitalist guardrails. But when it comes to language — the very thing we are “automating” with AI today — this picture doesn’t quite suffice.

As I’m writing this, OpenAI, which runs ChatGPT and DALL·E, among other AI products, has just had one strange weekend. First the board fired the CEO, Sam Altman, only to turn around and hire him back. The fight was over “alignment,” the idea that we need to produce an “artificial general intelligence” (AGI) that is completely incapable of harming us. If this sounds like science fiction, that’s because it is. It’s also the ideology driving much of AI production today.

By the time of publication, Altman had been restored to his position, and if you needed an example of where Marx’s materialism would be helpful, this is a pretty great case. The fight at OpenAI was apparently driven by Ilya Sutskever, an engineer who was hired to create “superalignment,” basically a bulletproof guarantee that AI will not go off the rails and lead to human extinction. If that sounds bizarre, it should. But it’s also a founding tenet of the company, which started as a nonprofit devoted to producing a safe AGI. The fight between Altman and Sutskever isn’t about whether they can achieve AGI, or even about whether it could be an existential risk to humans. It’s about which path to take in ensuring that it’s safe. Call this metaphysics in the C-suite.

What everyone is ignoring in all the metaphysics, however, is that this fight is about language. It’s AI’s capacity for the intricacies of human language that triggered the meteoric rise of OpenAI, starting with GPT-2 in 2019. That’s the condition for all the other systems we’re trying to plug into “generative AI” — even DALL·E runs on word-image pairs. Language algorithms are now set to become the foundational infrastructure for virtually the entire global economy. It’s no surprise that CEOs are caught in the mystifications of dead-end capitalist fantasies, even if these particular fantasies seem more harebrained than ever. What’s more surprising is that the Left isn’t positioned to deliver the materialist, Marxist critique of AI that it should. That’s because the Left has refused to take language seriously, consigning it to mere “superstructure” and keeping it a good distance from economic materialism.

Marxists have approached this problem in different ways, generally leaning on the notion that language is material, dynamic, and situational. None of the approaches have ever really delivered a philosophy of language that is technical enough to cope with what AI is doing now, and that means all roads to a left critique of the automation of language run through the most famous linguist of our era: Noam Chomsky.


Chomsky may be the greatest intellectual alive. His work on language hasn’t just transformed linguistics; it is partly responsible for what’s called the “cognitive revolution,” the sea change in the sciences that occurred over the second half of the twentieth century. By the 1990s, when the Dude was still clinging to the transcendence of a Creedence tape, Chomsky’s view of mind and language had become scientific bedrock across psychology, linguistics, philosophy, neuroscience, and the still young field of artificial intelligence.

That’s not to say it wasn’t controversial: Chomsky’s quick rise in the 1960s brought with it bitter fights. But they took place during a generational shift in which generative grammar, the “language acquisition device” Chomsky claimed was to be found somewhere in the brain, and the “minimalist program” for understanding language became symbols of a whole group of sciences and their new approach. That approach is based on a broad analogy between the brain (or the mind) and the computer — the “computational theory of mind.” Chomsky himself has remained ambivalent about that analogy, while staying deeply committed to the scientific investigation of the properties of the mind, which he thinks are uniquely available to us through language.

But Chomsky’s career was always really two careers. From an early opposition to the Vietnam War — which, as he pointed out, US propaganda consistently denied was a war or an invasion at all — to his campaigns against the vicious conflicts driven by the United States in Central America in the 1980s, Chomsky has been one of the few leftist voices in American public discourse over two generations. He has leveraged his vast authority on language and the mind into a singular presence in the media chaos of a crumbling empire. He believes that a technological society has no reason to restrain the fundamentally creative forces of its individual citizens, and that syndicated free association — a view commonly called anarcho-syndicalism — would remove the repressions of the state and its imperialist propaganda.

When I was a teenager struggling to understand, first, the weird triumphalism of the Bill Clinton era and then the vicious turn after September 11, my father gave me books by Howard Zinn and Chomsky to read. For me, these were chapter and verse after which almost everything in the media was mere propaganda, specifically constructed to obscure truths about the American empire — truths that a lonely Chomsky had been shouting from the rooftops for years.

Fast-forward three more decades to March 2023, when Chomsky and two coauthors published an op-ed in the New York Times called “The False Promise of ChatGPT.” What they said there, sadly, amounts to a very sophisticated version of how much the Dude hates the fuckin’ Eagles.


The op-ed tacitly acknowledges that there is indeed a political problem with AI. Chomsky and his coauthors argue that machine learning — the discipline behind generative AI and other powerful algorithms — will “degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge.” Chomsky has been fighting against this particular conception since the 1950s, so it’s not a surprise that he thinks it’s problematic for it to be released commercially. It’s less clear that his particular blend of cognitive science and politics can truly account for what ChatGPT and similar systems are up to.

I’m not sure it’s possible, for both logistical and legal reasons, to quantify how much data — text, image, and also tracking data of all kinds — are now produced or influenced by AI systems. But it is a far larger proportion than you probably think. LLMs are being plugged into search and into “personal assistants.” Exploration of applications from corporate hiring to jet fighters to mathematical proofs to chemistry has not slowed but accelerated.

A competing op-ed in the Wall Street Journal, penned by the late Henry Kissinger — a generational Chomsky nemesis — and coauthors, argued that ChatGPT was as important a step as the printing press, with similarly wide-ranging implications for policy, foreign and domestic, and the status of knowledge. In a weird way, Chomsky actually agrees with this assessment, if not with its suggestions. Because the new AI is a “lumbering statistical engine for pattern matching” and possesses no capacity for truth or morals, according to Chomsky’s op-ed, it is culturally dangerous. Kissinger recommends that policymakers get ahead of the curve. Chomsky basically just denies that anything meaningful is even occurring.

That’s a problem, and I think it’s located in the fragile relationship between Chomsky’s linguistics and his politics. He once said that he “can’t find intellectually satisfying connections between those two domains,” only tenuous ones. He does, however, distinguish between what he calls “Plato’s problem” and “Orwell’s problem.”

Plato’s problem is that our knowledge outstrips our experience. As Chomsky has observed for more than six decades, children learn proper grammar without ever being exposed to all its variations. They learn to invent new words and phrases, and to understand them, far earlier than any machine could. Human learning seems to happen by great leaps, not by baby steps. Plato thought that humans had a kind of memory of the pure forms of thought — the good, the true, and the beautiful — and Chomsky invokes that idea to describe what he thinks language really is.

Orwell’s problem is about what the novelist called “newspeak,” the systematic manipulation of language and meaning by the totalitarian government depicted in Nineteen Eighty-Four. Chomsky points to the media’s exclusion of the very notion that the United States invaded Vietnam to argue that democratic systems, through media manipulation, have “genius” apparatuses of “thought control.” This conclusion is actually a step beyond Orwell’s, since totalitarian governments use the threat of violence to back up their imposed language. False binaries are often established as narratives in the media, Chomsky argues, with the aim of “manufacturing consent,” a phrase he borrows from the famed theorist of public opinion Walter Lippmann. The media “can be an awesome force when mobilized in support of the state propaganda system,” Chomsky writes.

The slogan of the Plato-Orwell distinction is “Propaganda is to democracy as violence is to totalitarianism.” This is an excellent description of how different types of states coerce their citizens, but how does it relate to language? Chomsky doesn’t make this clear, and I don’t think he can. He’s missing the fact that newspeak is language. It’s highly arbitrary, reproducible language, yes — and its democratic counterpart is also language, just held within certain bounds. But are those guardrails regarding what can be said really created by some cabal of media experts? How are they able to suppress that genuine grammatical creativity we all innately possess?

Here’s the problem: the propaganda machines that Chomsky thinks manufacture consent are now close to 100 percent AI-driven. It’s not a state or government that’s doing that work — it’s the language capacity of AI, at a cultural scale we’ve simply never seen before. In the Marxist tradition, this is called ideology. The continuity of production and culture has never been more literal. We now have machines that can automate not only Taylor Swift (fingers crossed) but also the type of suppressive effect that Chomsky located in the media. LLMs plug Plato’s problem into Orwell’s problem. The result is genuine chaos, because we cannot tell which part is language and which part is just machines. Maybe the distinction was never clear in the first place.

What stands between Plato and Orwell is culture, and with the rise of generative AI, we have a culture problem. The general intellectual bafflement of 2023 isn’t just a Chomsky problem. It reflects a broader tendency to underestimate culture even when it’s the very thing giving us fits. We may want to believe that “human creativity” — a constant refrain in Chomsky’s writing — isn’t susceptible to statistical techniques. But while it makes sense to reserve judgment on the avant-garde, I think it’s clear that Taylor Swift really could be an AI, partly because hyperproduced media products like her music or Marvel films find a kind of statistical center in the vastness of culture — which is exactly what generative AI does. There’s an unfathomably huge scale of human language production that comes between any formal linguistics and the types of danger that Chomsky and Kissinger both diagnose. GPT systems simply reveal that scale, and we do not like the results. But we cannot afford to ignore them.

According to Chomsky, AI in its current guise forces brute correlations between datasets. That’s a fundamentally different process from what humans do with language, which is to create explanations. We possess a “universal grammar” that allows us to learn with “almost mathematical elegance,” whereas these programs learn humanly possible and “humanly impossible” languages equally well. They “trade merely in probabilities that change over time,” unmoored from any relationship to the truth, and — Chomsky emphasizes — are unable to generate moral judgments.

All this is true. LLMs in particular are fed massive amounts of text, on the order of one trillion words, a truly unthinkable amount of printed language. They then “learn” by compressing the data into patterns, using a computationally enormous but mathematically simple algorithm. The ChatGPT that broke into public discourse at the end of 2022 (it has since been updated many times) cut up its trillions of training words into about fifty thousand “tokens,” mostly words but also little pieces of words that are useful for making language work, like “-ing.”
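
To make that concrete, here is a minimal sketch of subword tokenization: a toy greedy longest-match tokenizer over a small invented vocabulary, written in Python. Real systems learn their roughly fifty thousand tokens from data with an algorithm called byte-pair encoding; the vocabulary and matching rule here are simplified for illustration.

```python
# Toy subword tokenizer: greedy longest-match against a tiny,
# invented vocabulary. Real LLMs learn ~50,000 tokens from their
# training corpus via byte-pair encoding (BPE).
VOCAB = {"commun", "ist", "sleep", "ing", "color", "less",
         "furious", "ly", "green", "idea", "s"}

def tokenize(word: str) -> list[str]:
    """Split a word into the longest vocabulary pieces, left to right."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest piece first
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])  # unknown character: emit it alone
            i += 1
    return tokens

print(tokenize("communist"))  # ['commun', 'ist']
print(tokenize("sleeping"))   # ['sleep', 'ing']
print(tokenize("colorless"))  # ['color', 'less']
```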

The initial result — after the “pretraining,” which is what the p in GPT stands for — is a fully determined grid, in which each token carries a probability of following the ones that precede it. If I say “communist,” the likelihood of “manifesto” shoots up, as does that of “pig.” LLMs gather a little pool of probable next words and then scan the context to choose which one to place next. If you’ve chatted with one of these systems, you know that it produces, well, good English (and many, many other languages, too). From nearly every theoretical perspective, including Chomsky’s, that just wasn’t supposed to be the case.
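
That next-word step is simple enough to sketch. Here is a toy version in Python, with an invented bigram table standing in for the model (a real LLM computes these probabilities with a neural network conditioned on the entire context, not a hand-written lookup):

```python
import random

# Invented probabilities for which word follows "communist."
# In a real LLM these come from a neural network conditioned
# on the whole context window, not a hand-written table.
NEXT = {"communist": {"manifesto": 0.45, "party": 0.30,
                      "pig": 0.15, "ideas": 0.10}}

def sample_next(word: str, top_k: int = 3) -> str:
    """Gather a small pool of probable next words, then sample one."""
    ranked = sorted(NEXT[word].items(), key=lambda kv: kv[1], reverse=True)
    words, probs = zip(*ranked[:top_k])
    return random.choices(words, weights=probs, k=1)[0]

print(sample_next("communist"))  # usually 'manifesto', sometimes 'party' or 'pig'
```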


Chomsky has been opposing the statistical capture of language for nearly seventy years. It’s worth understanding how his analysis works, because that shows us what AI is doing that forces Plato and Orwell together.

In 1957, Chomsky published Syntactic Structures, a short book that would reshape linguistics, and cognitive science more broadly, for two generations. He wanted to show that the grammar of a language — what allows us to distinguish good from bad sentences — was independent of other factors, including the meanings of words.

He laid out his case with sentences that became almost as famous as he is: “Colorless green ideas sleep furiously” and “Furiously sleep ideas green colorless.” The first sentence is grammatical — it doesn’t mean anything, but it is immediately recognizable as an English sentence. The second sentence is equally meaningless, but it’s also not really a sentence. You trip up when reading it out loud. Something isn’t right.

For Chomsky, what was wrong was more than word order. One target of this thought experiment was Claude Shannon, the founding father of information theory, which is widely considered a primary underpinning of the digital universe. Shannon was interested in communication, which he thought of as the sending of information along a channel to a receiver. He didn’t care if the stuff sent was language, or meaningless signals, or what — all that mattered was nailing the structure of the stuff. He used the English language as his guinea pig for figuring out how to make a reliable channel along which information could be sent intact. The formulas he invented for that purpose underlie all digital communication today.

Shannon was also interested in language, but in a different way than Chomsky was. He argued that if you have the letter q on a page, you could technically have any of twenty-seven distinct symbols following it (the twenty-six letters plus a space). But it is overwhelmingly likely that u will be the subsequent letter. This idea makes Shannon the intellectual founder of autocorrect — but also of generative AI. It’s a statistical approach to language, based purely on the frequency of letters or words. LLMs, as I described them above, follow this idea: find the word frequencies and reproduce them. Shannon never invented a machine that could actually spit out language, but we’re living with the products of his ideas.
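
Shannon’s observation is easy to reproduce. A minimal sketch, with one hand-picked sentence standing in for the English corpus Shannon actually tabulated:

```python
from collections import Counter, defaultdict

# Shannon-style letter statistics: count which character follows
# each character, then read off the distribution after "q".
text = "the queen quietly questioned the quality of the quiet quay"

follows = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    follows[a][b] += 1

total = sum(follows["q"].values())
for letter, count in follows["q"].most_common():
    print(f"P({letter!r} after 'q') = {count / total:.2f}")
# Prints P('u' after 'q') = 1.00 -- every q is followed by u,
# Shannon's point about letter frequencies in miniature.
```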

Chomsky took direct aim at Shannon. No machine, he suggested, could really distinguish these two sentences from each other. English grammar was not just a “high order of statistical approximation to English.” You have to know grammar, not just predict words or letters.
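
Chomsky’s argument can be put in toy form. Assuming a deliberately tiny corpus that contains none of the relevant word pairs, a simple bigram model scores the grammatical sentence and its scramble identically:

```python
# Chomsky's complaint in miniature: a bigram model that has never
# seen these word pairs gives both orderings the same score (zero),
# grammatical or not. His claim was that statistics, at any scale,
# could not recover the difference a native speaker hears instantly.
corpus = "the cat sleeps quietly on the green mat".split()
seen = set(zip(corpus, corpus[1:]))

def bigram_score(sentence: str) -> int:
    """Count how many adjacent word pairs the model has seen before."""
    words = sentence.lower().split()
    return sum(pair in seen for pair in zip(words, words[1:]))

print(bigram_score("Colorless green ideas sleep furiously"))  # 0
print(bigram_score("Furiously sleep ideas green colorless"))  # 0
```

Whether trillion-word statistics escape this objection is, as the exchange with ChatGPT below shows, exactly what is now in dispute.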

In his 2023 op-ed, Chomsky reprises one of his examples from the 1957 Syntactic Structures, writing that

because these programs cannot explain the rules of English syntax, for example, they may well predict, incorrectly, that “John is too stubborn to talk to” means that John is so stubborn that he will not talk to someone or other (rather than that he is too stubborn to be reasoned with) . . . . The correct explanations of language are complicated and cannot be learned just by marinating in big data.

I decided to go to the source:

Anonymous: what does this phrase mean? “john is too stubborn to talk to”

ChatGPT: The phrase “John is too stubborn to talk to” suggests that John is very resistant to changing his opinions or behavior, making it difficult or frustrating to engage in a conversation with him. It implies that John’s stubbornness is a barrier to effective communication or discussion because he may be unwilling to consider different perspectives or compromise on his views.

For those keeping score at home: Shannon (and Sam Altman): 1, Chomsky: 0.

So I asked ChatGPT about colorless green ideas, in both versions. It gave me a little history of the good phrase, explaining Chomsky’s linguistics in basic but solid terms. It called the ungrammatical phrase (“furiously sleep ideas green colorless”) a “nonsensical combination” and labeled the chat “incoherent request, random words.” With some prodding, it acknowledged that the second phrase was a jumble of the first one, and drew the conclusion that grammar “alone doesn’t guarantee meaningful communication.” That’s almost the opposite of Chomsky’s point — but it’s true, too. More points for ChatGPT.

Here’s the problem. Nothing the machine produced can count as evidence either way. Does it know these sentences, or is statistical prediction just far more powerful than we were able to observe in the 1940s and ’50s? That’s the debate that’s driving the kerfuffle at OpenAI. If the right answers come out, how can you deny that you’re dealing with intelligence?

The problem is that the question itself is wrong: AI really is producing language — but not the kind that tells us how human minds work. The metaphysics of OpenAI can’t be defeated by Chomsky’s framework, because he can’t connect his view of human language and his analysis of propaganda. The missing concept is culture.


The way we need to conceive of AI is more as the Eagles than as Creedence, more the weird swaths of digital culture than the internal workings of the mind that avant-garde art tickles and inspires. But it’s not just Chomsky who hates the Eagles. We all do.

In August, journalists revealed that LLMs are trained on a lot of books, among other things. The Books3 dataset includes hundreds of thousands of books, many copyrighted, with works by William Shakespeare, Stephen King, and Toni Morrison making prominent appearances. Authors were quick to respond to this revelation. The comedian Sarah Silverman is leading a lawsuit against OpenAI and Meta. Margaret Atwood, best known for writing The Handmaid’s Tale, wrote a scathing piece about AI’s use of novels. Stephen King projected indifference, saying he did not think AI would be able to do what he does anytime soon. But the point is not whether AI is able to write books. The point is that AI is books.

Even cognitive scientists have recognized that LLMs are “culture machines.” But the framework of cognitive science, including the still palpable influence of Chomsky, dominates our understanding of these crucial algorithms. So long as that is true, we will not be able to construct a leftist politics for the age of AI.

Cognitive theories of AI have missed the larger point. The theories have not scaled with big data and the massive computing required to perform machine intelligence. Marx’s basic conviction about language provides a starting point for building an alternative understanding that connects language and politics, as these machines actually do in real time. But, to date, Marxist cultural theory has not paid much attention to the problem Chomsky poses.

We take it for granted that works of art build on one another somehow. This may be most obvious in television, or in franchises. You can’t understand Avengers: Endgame if you haven’t seen all the component prequels (I tried once, and it was . . . terrible). But it’s not just within fictional universes that this happens. The style of a film, the flavor of a novel — these are large-scale effects compared to the sentences and shots that compose them. Even in everyday language, the kind that we use to organize ourselves as a productive society, we don’t just stare at individual words all the time. Long-form conversation is the norm, in which a real relationship is at stake. Couples fight over implicit meaning in ways that are hard to untangle. Bosses fire and promote workers for indirect moves they make in communication, not just for “performance.” All of that plays a role, as Marx pointed out, in the relations of production.

The way production is organized is the topic of the first volume of Capital, of which a large portion is dedicated to the machine factory. Marx argues that machines and workers are pitted against each other in a zero-sum game, marginalizing labor by transforming it into the adjustment of dials and knobs, tending to machines. Think of the self-checkout at the grocery store — someone still has to be there, but radically fewer workers are required, and they’re mostly there to prevent shoplifting and help with inevitable glitches in the machine. Marx thought that machines fulfilled the mission of capital: to dominate and fully subsume labor under its control. The factory is a single machine; workers are just organic, living machine parts.

But the factory still has to be organized using language, so the relations of workers to bosses, and the organization that management executes — basically all of enterprise — still has this utterly human medium as its beating heart. That is what could change with the automation of language.

Digital technologies have rendered the unified factory part of a global machine system. That system is held together by data, connecting supply chains, points of sale, factories, and virtually every consumer on earth. This data system has become a condition of contemporary capitalism. But communication has still taken place between humans along the supply chains and in the trenches of global capital. LLMs, depending on how they are deployed now — which no one can yet know — automate this general medium of global production and exchange. They take the language Chomsky thinks can only exist in the profound interior of the human mind and plug it into the unimaginably complex network of global capital. If we deploy cultural generation in that space, we could lose sight — not just control — of capital’s machinations altogether.

All of this is to say that the philosophy of language is more pressing for the Left than it ever has been. The analysis of culture now must enter that picture — the scale and effects of language in the automated world cannot be stated correctly without it. It has to be understood that culture is no longer a mere “superstructure” but the rails on which capital is run.

In an interview in 2012, Chomsky said that “if a molecule gets too big, [the physicists] give it to the chemists.” And then when it gets too big for them, they hand it up to the biologists, then the psychologists, until “finally it ends up in the hands of the literary critics.” AI has made this joke literal — and the vastness of digital language is the hinge on which the critique of capitalism turns in the age of AI.