Don’t Mention the War

With a vacuous social vision, economics confronts the “return of the social question” woefully unprepared.


From the beginning, economics has been a moral philosophy — a vision of capitalist society — as much as a social science.

In classical political economy, class was central to both. In the famous line from David Ricardo’s 1817 Principles, political economy’s “principal problem” was discovering the laws that determine how wealth is divided among the “three classes of the community”: laborer, capitalist, and landowner.

Marx, writing half a century later, charged that the classical paradigm had degenerated once the moral implications of Ricardo’s class analysis became too uncomfortable to bear. Apologists and “vulgar” economists had come to deny any fundamental disharmony of interests: production, they claimed, involved the cooperation of different classes, and the rewards to each reflected the freely agreed market value of what they contributed.

Neoclassical economics is the heir to the moral vision of these “vulgar” economists — although marginalism put its analysis on a far more sophisticated footing. Yet while the discipline’s moral-philosophical side is unavoidable, today it goes largely unacknowledged by the field. Few mainstream economists now would echo Ricardo’s claim, and practically none would speak of “classes.”

Instead, according to a leading introductory textbook, “economics is the study of how society manages its scarce resources.” Individuals are the analytical building blocks, there are no fundamental conflicts of interest, and the market coordinates their activities so that absent “distortions,” everybody gains. Why the system “delivers the goods,” why it sometimes “goes astray,” and what causes its long-run growth and short-run “ups and downs” — these, in the words of a competing text, are its primary questions.

This preoccupation with function and dysfunction reveals the second face of economics, a role which fully emerged only in the generation after World War I. This is technocracy: the specialized use of economic knowledge in the management of the capitalist state. Classes and movements, parties and politicians struggle to control the state, to stamp it with their power and ideologies; technocracy specializes in the very different task of utilizing the state to solve whichever practical problems it is assigned.

Macroeconomics, in particular, was born out of the dysfunction of the Great Depression, and provided a system of ideas that united disparate branches of the capitalist state behind a coherent managerial project. Microeconomics was also recast as a guide for policy, going beyond traditional laissez-faire to call on the state to actively target “market imperfections” and internalize “externalities.”

But what counts as an economic problem? For the technocrats, inequality becomes a problem only from the moment social disturbances erupt from below, destabilizing consensus and short-circuiting the system’s smooth operation. At that point, the task is to find “solutions” that can quiet the conflict within the existing rules, on terms that can serve the interests of very different forces, depending on whose politicians are in control.

When no solution seems possible under the institutional status quo, the dysfunctions drag on, setting the stage for a turning point where political forces seek to change the rules of the game itself — as with the Keynesian revolution or with Reaganism and Thatcherism.

The technocracy’s interest in class at a given moment is one index of class politics’ intensity in that place and time. In the postwar Golden Age of capitalism, technical studies of inflation, unemployment, and labor markets were rife with the explicit language of class, even of class struggle, for that was the only way to make sense of those subjects in countries and periods where labor movements were strong and class conflict sharp. But the subsequent defeat of working classes everywhere caused the social question to fade from the field; by the mid 1980s it had almost totally disappeared from mainstream economics.

There are signs, however, that in the post-Occupy era the social question is creeping back as a bona fide problem, forcing economics, in turn, to reprise its moral-philosophical role — a role for which it is woefully unprepared.


The latest sign can be found in the American Economic Association’s Journal of Economic Perspectives, one of the field’s most-read publications, whose summer issue features a symposium by leading economists on “inequality and the top 1%.” There is little doubt that the journal’s choice of topic is, at least in part, an homage to the 2011 Occupy mobilization and its best-known slogan, “We are the 99%.”

That class-defining credo, coined by anarchists and propagated in the streets, was itself drawn from the work of the Beltway technocracy: its genealogy can be traced to the mid 1980s, when Democratic Congressional staffers, working with Congressional Budget Office economists, created a computer model to calculate the distributional effects of tax changes, yielding for the first time regularly published statistics on the incomes of this tiny upper stratum.

Armed with a rhetorical bludgeon against GOP tax plans, Democratic politicians — Michael Dukakis, Bill Clinton, Al Gore — went about hammering the phrase “wealthiest one percent” into public discourse. Ever since the occupation of Zuccotti Park, the concept has been used for a very different purpose: the attempt to forge a new class subjectivity.

Released online in draft form, one symposium essay has already attracted major interest, almost all of it negative: “Defending the One Percent,” by N. Gregory Mankiw. Chair of the Harvard economics department, author of the nation’s best-selling economics textbook, and former chairman of the White House Council of Economic Advisers under George W. Bush, Mankiw is the Republican Party’s most prestigious academic economist.

His essay can be read as an attempt at a sophisticated Republican response to the growing tendency in economics’ dominant technocratic-liberal wing — epitomized by former Obama adviser Larry Summers — to legitimize the subject of social inequality as a policy problem in its own right.

But what’s interesting about Mankiw’s argument is that this former chair of the White House Council of Economic Advisers vehemently rejects the whole premise of the technocratic approach. He insists that the subject of income distribution can be adequately addressed only as an explicitly ethical issue. In this, he confirms an observation — and candid admission — made by one of liberal journalism’s leading Republican-watchers, Jonathan Chait, who once explained in the New Republic that the reason conservatives so often seem to ignore conflicting empirical evidence in economics is that “conservatism, unlike liberalism, overlays a deeper set of philosophical principles.”


Addressing himself to fellow members of the economics fraternity, Mankiw refers throughout his article to something he calls “the economist’s standard framework.” By this he means two things: the economist’s positive theory of income distribution, namely the theory of marginal productivity; and the economist’s normative theory of distribution, that is, utilitarianism. Mankiw holds very different attitudes toward these two canons.

First comes the positive theory. “In the standard competitive labor market,” he explains, “a person’s earnings equal the value of his or her marginal productivity.” This precept becomes the basis of his whole ethical system. He admits that in the real world the assumptions of this classical model are sometimes violated. Therefore “the key issue is the extent to which the high incomes of the top 1 percent reflect high productivity rather than some market imperfection.” But Mankiw thinks these imperfections are relatively rare, at least compared to how liberals see things.

Then he turns to the dominant normative theory. This takes the form of the textbook model of “optimal taxation” formulated by James Mirrlees in 1971. In good utilitarian form, it recommends that the government redistribute income — within limits, set by taxation’s inevitable unintended consequences — from those with a lower marginal utility of money (i.e., the rich, who get little satisfaction from each extra dollar) to those with a higher marginal utility of money (the poor, who get much more satisfaction).
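
The logic can be seen in a toy calculation. What follows is a minimal sketch, assuming logarithmic utility and arbitrary dollar figures of our own choosing; it is not Mirrlees’s model, which also weighs the disincentive effects of the taxes needed to finance the transfer.

```python
import math

# Illustrative sketch only: log utility stands in for any utility function
# with diminishing marginal utility; the incomes and the transfer amount are
# arbitrary assumptions, not part of the textbook model itself.
def utility(income):
    return math.log(income)

rich, poor, transfer = 500_000, 20_000, 1_000

gain_to_poor = utility(poor + transfer) - utility(poor)
loss_to_rich = utility(rich) - utility(rich - transfer)

print(f"utility gained by the poor household: {gain_to_poor:.4f}")
print(f"utility lost by the rich household:   {loss_to_rich:.4f}")
print(f"net change in the utilitarian sum:    {gain_to_poor - loss_to_rich:.4f}")
```

Because an extra dollar is worth far more to the poor household than to the rich one, the transfer raises the utilitarian sum; that is all the textbook model needs in order to recommend redistribution, up to the point where incentive costs begin to bite.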

Mankiw’s blood absolutely rebels at the theory’s redistributionism — and he’s no more enamored of the philosopher John Rawls’ theory of justice, which mandates redistribution as the policy we all would have voluntarily agreed to, had we not known what kind of life we would be born into. Mankiw sees both of these liberal theories as morally bankrupt.

In their place, he proposes an ethical framework that he calls the Just Deserts perspective. “According to this view,” he writes, “people should receive compensation congruent with their contributions.” How do we know what those contributions are? The positive theory of income distribution tells us: “just deserts” are exactly what an individual’s endowments would sell for in a free, perfect market.

Now we’ve returned to economics’ more comforting positive theory, for as Mankiw reminds us, in a “classical competitive equilibrium” (that is, a perfect market system), an individual would earn “the value of his or her own marginal product” and there would be no need for redistribution.

“The role of government arises as the economy departs from this classical benchmark,” Mankiw explains. So the government should tax externalities like pollution; provide public goods like roads and bridges; transfer income to the poor (“because fighting poverty can be viewed as a public good”); and pay for it with progressive taxation, which can be justified by the greater benefits the rich derive from government.

But “confiscatory tax rates are wrong, even ignoring any incentive effects.” That is because a person’s income is her just reward, so “using the force of government to seize such a large share of the fruits of someone else’s labor is unjust, even if the taking is sanctioned by a majority of the citizenry.”

Mankiw’s piece triggered a flood of scathing reviews from the economics blogs. Conservative wonk Josh Barro: “Not impressed.” Liberal policy writer Jonathan Chait: “An embarrassing piece of ignorant tripe.” The Economist’s Matt Steinglass deadpanned: “The 1 percent need better defenders.” Practically every blog had something to say about the piece, and rarely anything good.

Trying to refute Rawls and Mill to bolster his Just Deserts framework, Mankiw bumbled into the role of armchair philosopher, with embarrassing results. He rested his case on the curious grounds that the liberal normative theories justifying taxes on the rich, if taken to their ultimate conclusions, would also sanction horrors like state-mandated organ donation. (Himself a supporter of taxation if carried out for non-redistributive purposes, Mankiw doesn’t explain what would keep his own version of Leviathan from degenerating into an organ-harvesting dystopia.) And that was not the essay’s only obvious shortcoming. He brushed aside critiques of CEO pay and declining social mobility with comments so breezy they practically slid off the page.

“Greg Mankiw’s musings on moral philosophy,” wrote Matt Yglesias (hardly a dogmatic leveller himself), “are a strong argument for rigid disciplinary boundaries.”


But beyond the abstract debates over ethical systems or the narrow empirical disputes, the picture that emerged from this noisy exchange was, if you looked closely enough, one of broad agreement over certain unspoken fundamentals. Mainstream economics, despite its image of bloodless scientism — and despite its own denials — possesses what can properly be called a unified “social vision”: a coherent moral-philosophical account of how our society works, and how it ought to work.

This is not an economic theory that dispenses some uniform set of policy conclusions. It’s a meta-theory. And this meta-theory is almost universally shared by both liberal and conservative economists.

Like many philosophers, economists begin their story by imagining a fictional utopia. Mankiw calls it the “classical benchmark.” But there are other names for it — “the standard competitive model,” “perfect markets,” “an Arrow-Debreu world.” The central principle of this utopia is that it’s an absolutely free market: a pure night-watchman state, with no restrictions on voluntary exchange between self-interested individuals, hence no minimum wages or rent controls, and no income taxes. It’s further assumed that in such a market, everyone earns the marginal product of their labor — or, as Mankiw puts it, “compensation congruent with their contributions.”

Now, ever since its birth more than a century ago, there’s been debate about whether the theory of marginal productivity actually constitutes an ethical theory of just deserts, as Mankiw believes. Certainly several of its late nineteenth-century architects agreed with him, seeing the doctrine’s conservative implications as one of its main attractions — most famously John Bates Clark, one of the founders of the American Economic Association.

The theory holds that under perfect markets, competition will drive the market wage for a given type of worker toward that worker’s marginal productivity, which can be defined as the amount of revenue a firm would lose, due to foregone output, if one such worker were removed from production. (An analogous process involving the “marginal product of capital” is said to determine the profit rate.)
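
The definition can be made concrete with a toy computation. The revenue function and the numbers below are illustrative assumptions of ours, not part of the theory itself:

```python
# Toy illustration of marginal productivity as defined above: the revenue a
# firm would lose if one worker were removed from production. The
# Cobb-Douglas-style revenue function and all figures are assumptions chosen
# purely for illustration.
def revenue(workers, machines, price=10.0):
    return price * (workers ** 0.7) * (machines ** 0.3)

workers, machines = 100, 25

marginal_product = revenue(workers, machines) - revenue(workers - 1, machines)
print(f"revenue lost by removing one worker: ${marginal_product:.2f}")
# In the perfectly competitive story, competition drives the wage toward
# exactly this figure.
```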

Figures like Clark and Mankiw have held that this marginal increment of revenue can be regarded as the worker’s “contribution” to production, thus legitimating her wage. Others, such as Alfred Marshall, have seen this last step as an ethical leap of logic, since the theory itself points to a whole range of factors that will help determine a worker’s marginal product but have nothing to do with her own personal actions or attributes: fluctuations in the prices of various goods, the structure of demand, random shifts such as harvests or the weather, technological changes inside or outside the worker’s own industry.

And, of course, the theory says nothing about how individuals acquired their endowment of factors (labor or capital) in the first place, or why the alleged contribution of capital should be regarded as the contribution of its owner.

But whether or not it actually does justify capitalist income distribution, the marginal productivity theory certainly sounds like an ethical theory — especially to impressionable freshmen in lecture halls or traveling businessmen browsing the op-ed page. As Mankiw’s textbook puts it, in answer to Ricardo’s classic question: “We can now explain how much income goes to labor, how much goes to landowners, and how much goes to the owners of capital. . . . Labor, land, and capital each earn the value of their marginal contribution to the production process.”

For eight decades, starting in the 1870s, many of the finest minds of economics tried to prove mathematically that if this imaginary perfect free-market economy did exist, it would be both rational and beneficent; that through the blind workings of the free market, supply and demand for every good would spontaneously match and the results would be desirable. (Or at least they tried to discover what conditions would be necessary to guarantee that outcome.)

The culmination of that work, in Kenneth Arrow and Gérard Debreu’s Nobel Prize-winning 1954 proof, was long seen by many as the profession’s defining achievement. Arrow and Debreu proved that such an economy would always contain at least one potential configuration of prices and products in which supply and demand would match in every market, and that this configuration would be “optimal,” in the sense that no one could be made better off without someone else being made worse off.
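
It is worth noticing how little that criterion demands. In schematic notation of our own (not Arrow and Debreu’s):

$$x^{*} \text{ is Pareto optimal} \iff \text{there is no feasible } x \text{ with } u_i(x) \ge u_i(x^{*}) \text{ for every } i \text{ and } u_j(x) > u_j(x^{*}) \text{ for some } j.$$

The criterion is silent about how the gains are shared: an allocation in which one person holds nearly everything can pass the test.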

This is where the neoclassicals derive their moral-philosophical vision of society: they posit a quasi-metaphysical thought experiment in which our real-life capitalist economy is held up as merely a messy, imperfect copy of the perfect benchmark. Then they stipulate that we can never really deem anything in our own economy unsatisfactory (“suboptimal”) without first showing that it results from some specific divergence from the fictional utopia.

The logic of this move is impeccable: since the pure-market model produces optimal results on paper, those who claim our own society falls short must identify the specific impurity at fault. The same goes for wages, which must be assumed equal to marginal productivity — “congruent with contributions” — unless a specific “distortion” or “imperfection” can be documented.

In the words of Franklin Fisher, an eminent MIT theorist, the theorems underpinning this thought experiment constitute “the central set of propositions that economists have to offer the outside world — propositions that are, in a real sense, the foundations of Western capitalism. . . . They underlie all the looser statements about the desirability of a free-market system.”

Politically, the communicants of this faith divide sharply over the question of how often “imperfections” actually occur in real life — and therefore how much human meddling in nature’s plan should be allowed. For example, Professor Mark Thoma, a Paul Krugman-style liberal with an influential economics blog, strenuously disputes the idea that markets are always the best solution: “There is nothing special about markets per se — they can perform very badly in some circumstances. It is competitive markets that are magic.” Some free-market policies could actually be harmful: they could “move the outcome further from the ideal competitive benchmark rather than closer to it.”

This is where the discipline’s technocratic and philosophical sides collide: any policy proposal may be advocated, but the moral-philosophical framework must be respected. We can fail the model, but the model can never fail.

And yet the model does fail. All models in social science are unrealistic. But the “ideal competitive benchmark” (the Arrow-Debreu world and its family of general equilibrium models) is not just unrealistic: it depicts a world that is neither possible nor even imaginable, and one that would not be desirable if it were. Consider some of its assumptions. All markets must be perfectly competitive (whereas most of ours are not); and if such a world did exist, the requirement of perfect competition would rule out any division of labor or long-run economic growth.

There must be an infinite number of futures markets — one for every good in existence, delivered at every future date, for the rest of time. And yet, in the model, time doesn’t really exist: all economic decisions for all of human history were made in an auction at the beginning of the world.

Moreover, far from being harmonious, this theoretical world has been discovered to be chaotic — perpetually in random motion, never actually arriving at any of its “optimal” configurations except by accident. This finding alone nullifies the very meaning of the theory. That is why some of the leading theorists who developed these models — towering figures like Frank Hahn and Kenneth Arrow — use words like “sterile,” “arid” and “empty” to describe them. Yet this is the benchmark — the light in the sky by which our ships are supposed to be guided.

Counterfactuals play a useful role in any science. But the model on which economists claim to base so many judgments is not a counterfactual. It describes a state of the world that could never actually exist, and would be undesirable if it could. Such judgments, and the entire intellectual framework that generates them, represent not scientific conclusions, but a system of belief — no more true or false than the statement “Man is born free but is everywhere in chains.”

This belief system — this social vision underlying mainstream economics — is deeply flawed. Its analytical foundations are broken and its moral vision venerates as its highest ideal an impossible laissez-faire dystopia. We need to seek our vision from other sources.


Karl Marx died in London in March 1883. John Maynard Keynes was born in Cambridge less than three months later. Both were masters of a “classical” theoretical system which they simultaneously built on and transcended. For Marx, this background system was that of Ricardo and his followers; for Keynes it was the neoclassical economics of Alfred Marshall. If Marx appeared to leave behind a legacy of pure moral philosophy, arch-technocracy seems to be Keynes’ bequest. And yet the truth is not so simple.

Marx was scathing about the political economy of his day. But he saw it as having degenerated from a project he respected as scientific: Ricardo’s system and its extension by the post-Ricardians of the 1820s. The heart of the classical paradigm, as Marx saw it, was to explain the distribution of a surplus — output over and above what is needed to replace inputs, including the sustenance of workers. Surplus did not, of course, arrive with capitalism, but capitalism gave it a new form — surplus value — and masked it behind voluntary market transactions.

In Ricardo’s system, the rent a landlord enjoyed depended on the fertility of his land. But there was nothing productive about owning land; landowners got paid — they appropriated the surplus — simply because they controlled access to something that was useful.

Marx argued that capitalists were much like landowners in this respect, capturing surplus by virtue of monopolizing the means of production. The vulgar economists of Marx’s day ran that argument in reverse, and the neoclassicals later perfected the maneuver: since profit was very much like rent, rent could not be all that bad — everybody gets paid the marginal product of the productive factors they own.

What separated the classicals from the neoclassicals and their forerunners was an acknowledgment of conflict in distribution. Marx wanted an economic analysis that would completely restore the sense of class antagonism represented in Ricardo’s system, which had been expunged from the field once it came to be dominated by vulgarizers and anti-Ricardians.

But he also wanted to go deeper. The “critique” in his “critique of political economy” meant “refute,” so far as the class-harmony nostrums of the vulgar economists were concerned. But with respect to Ricardo’s political economy, Marx meant the word in the Kantian sense of exposing the conditions of its existence. He set out to show that the relationships of economic variables in modern capitalist society were immovably grounded in underlying social relationships. They were not eternal or logical, but historically specific and inherently political.

This vision of social conflict — with an emphasis on both “social” and “conflict” — was the essential premise of Marx’s moral-philosophical analysis of the capitalist economy.

As for economics’ second face — its technocratic aspect — Marx lived mostly before it came into its own. Yet what he saw of it he viewed as inseparable from the antagonisms in the underlying economy. Marx saw deep importance in the struggle for the British Ten Hours’ Bill, adopted in 1847. For him, conflict over the distribution of goods was only a part of the class struggle: the surplus was also a question of time.

People spent more of their day working than their material standard of living required, and the whole of this day was spent under the domination of capital.  He mocked the bourgeois economists — those “notorious organs of science” — who had “predicted, and to their heart’s content proved, that any legal restriction of the hours of labor must sound the death knell of British industry, which, vampire-like, could but live by sucking blood.”

The fight over the Ten Hours’ Bill had been so fierce, Marx argued, because “it told indeed upon the great contrast between the blind rule of the supply and demand laws which form the political economy of the middle class, and social production controlled by social foresight, which forms the political economy of the working class.”


Keynes was a master of Alfred Marshall’s marginalist economics, a system of “bourgeois political economy” far more sophisticated than any Marx had contended with in his day.

And yet Marx’s “social production controlled by social foresight” is exactly what Keynes came to embrace — within limits. In politics, Keynes was a bourgeois radical, a bohemian liberal who accepted the market but rejected the values of business civilization. Like Marx, he saw capitalism as a great historical advance, but one which, as he wrote, necessarily brought in its wake “all kinds of social customs and economic practices, affecting the distribution of wealth and of economic rewards and penalties, which we now maintain at all costs, however distasteful and unjust they may be in themselves, because they are tremendously useful in promoting the accumulation of capital.”

His social philosophy expressed the hope that once capital became sufficiently abundant; once the “functionless investor” died out; once the productivity of labor became sufficiently great, and our working time contracted to three hours a day, “we shall then be free at last to discard” those capitalist customs and habits. “We shall once more value ends above means and prefer the good to the useful.”

On the surface, Keynes’ critique of neoclassical economics (which he called “classical”) was much more limited in scope than Marx’s. His fundamental innovation was the theory of effective demand: the idea that employment is set by total spending, so that the market system has no automatic tendency to settle on full employment. Keynes himself was keen to stress that the General Theory was radical only on that particular point, and that once the state intervened to assure full employment, “the [neo]classical theory comes into its own again.”

Yet in order to reach that conclusion, Keynes had to challenge conventional economic theory on fundamental points — points that lent themselves to more radical readings and brought his system into contact with Marx’s. Neoclassicals had held that full employment was ensured by the workings of the market, that the wage functioned like any other price, rising and falling to align the supply and demand for labor (at least eventually, or once wage rigidities and other imperfections were swept away).

But Keynes established that the wage was not like any other price — it constituted not just the employer’s cost but the bulk of society’s income, out of which spending and demand for goods was generated, so there was nothing preventing a persistent equilibrium of substantial unemployment.

Rather than depending on the wage, the level of employment depended on effective demand. This, in turn, danced to the tune of investment, so that employment today depends on firms’ expectations of profitability in the future — expectations held more or less confidently, but always fallible. Keynes saw human beings as coping with fundamental uncertainty about the future. This could leave market outcomes wild and unpredictable, so that free-market price flexibility might lead not to harmonious equilibrium but to chaotic results.
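
The skeleton of the argument fits in a line of later textbook algebra, the “Keynesian cross,” which is not Keynes’s own notation but captures the point: consumption depends on income, investment is set by those fallible expectations, and income settles wherever spending does.

$$Y = C + I, \qquad C = c_0 + cY, \quad 0 < c < 1 \;\;\Longrightarrow\;\; Y^{*} = \frac{c_0 + I}{1 - c}.$$

Employment rises and falls with $Y^{*}$, and nothing in the expression ties $Y^{*}$ to the level of output at which everyone seeking work can find it.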

By the same logic, he rejected the neoclassical notion that workers bargain over their real wage — that is, over units of consumption. Lacking knowledge of how much goods will cost in the future, workers can evaluate only their relative wage; and that brings the question of income distribution into the heart of economic theory.

All of this opened the way to what neoclassical economists resist most militantly: indeterminacy, with all its radical implications.

This was indeed the road taken by a number of figures in Keynes’ circle at Cambridge, and in the broader milieu that has come to be known as “post-Keynesian,” whose influences come as much from Marx as from Keynes. Keynes’ colleague and collaborator Joan Robinson, a virtuoso theorist and perennial Nobel nominee, tried more than any other figure to develop these ideas into a systematic, radical alternative.

Robinson’s Cambridge set included the Polish Marxist economist Michał Kalecki who, before Keynes, had independently worked out the theory of effective demand (a concept prefigured in Marx’s Capital), paying close attention to class and imperfect competition. Another member was Piero Sraffa, who revived and refined Ricardo’s approach to launch an attack on the neoclassical theory of value. All these figures saw Keynes’ concept of demand-determined employment as one key building block in a broader assault on orthodoxy.

In the neoclassical vision, income distribution is determined by existing technologies, preferences, and endowments; in this vision, it is a process of active conflict. The incomes of different groups, rather than smoothly adjusting to shifts in supply and demand, tend to be the baseline around which the rest of the economic system adjusts. The income distribution is treated as an evolutionary process, shaped by norms and institutions inherited from the past, which change as a result of extra-economic events — that is, history, politics, institutions, and struggle.

As Joan Robinson liked to say, to get a full explanation of capitalist distribution you would have to go back into “the dark backward and abysm of time,” where capital emerged from a world of serfs, peasants, lords, and artisans, and work your way forward — tracking what happened as capital accumulated, as it made and remade its workforce, as that workforce organized and fought back, and as states intervened to regulate the employment bargain in various ways.

The neoclassical vision of income distribution rests on two very shaky assumptions. First, unlike Ricardo and the other classicals, it simply assumes that firms are able to respond to changes in the prices of different factors — the different kinds of labor and capital — by freely adjusting the various proportions in which they’re used. Without that assumption, labor may literally have no marginal product, and the same would go for any other factor, or any particular type of labor. In that case, the Marxian or Ricardian conclusion would hold: the wage would be whatever workers could wrest for themselves, and profit would be whatever was left over.

Of course, it’s fine to build a simplified model with unrealistic assumptions and then see what happens when the assumptions are varied. But at some point, it seems, mainstream economists largely forgot that this assumption of “differentiable production functions” was a simplification — let alone one of questionable realism. As a result, in today’s economics literature it’s almost never questioned, and textbooks don’t even alert students to the issue. Yet as a general supposition about how production works, it is, of course, unrealistic: What are you supposed to do if your labor consists of ditch-diggers and your capital consists of shovels? How exactly do you vary your proportions of labor and capital?
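
The ditch-digging example can be made literal in a few lines of code. This is a sketch under the assumption of strict one-worker-one-shovel proportions; the numbers are arbitrary:

```python
# Fixed-proportions ("Leontief") production: each ditch requires exactly one
# worker and one shovel. All numbers are arbitrary and for illustration only.
def output(workers, shovels):
    return min(workers, shovels)

workers, shovels = 10, 10

# An extra worker without an extra shovel adds nothing; removing a worker
# costs a whole ditch. There is no smooth marginal increment for the wage
# to be equated with.
print(output(workers + 1, shovels) - output(workers, shovels))  # prints 0
print(output(workers, shovels) - output(workers - 1, shovels))  # prints 1
```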

There were two major neoclassical attempts to address this problem. In the 1930s, the great economist John Hicks recognized its seriousness and tried to save marginal productivity theory through the back door. In the short run, firms may be unable to substitute labor and capital freely, Hicks conceded; but in the long run the same result might be obtained by letting consumers do the substituting. If labor gets relatively more expensive, so will labor-intensive goods. Consumers will switch their purchases away from such goods, indirectly pushing the price of labor back down to its equilibrium level, set by the marginal product.

But to make the math manageable for such a model, Hicks had to assume that prices for everything else stayed constant. It was not until the Arrow-Debreu model and its more sophisticated math came along after the war that the issue could be examined without Hicks’s artificial assumption of constant prices. Using the Arrow-Debreu setup, theorists in the 1970s found that marginal productivity could indeed fail and leave factor prices indeterminate, but they also found it highly unlikely that the starting patterns of ownership in the economy would be arranged in just the right way to generate an indeterminate result. Marginal productivity theory appeared to have been saved.

But this finding, in turn, has been shown to depend on one of those hopelessly unreal assumptions of the Arrow-Debreu model: that of a timeless economy where all of history’s transactions are agreed to simultaneously at the start. In a series of works over the past two decades, Michael Mandler, a University of London general-equilibrium theorist with impeccable neoclassical credentials, has shown that once economic decisions are pictured as being made sequentially, as in real life, ownership patterns turn out to evolve through time in highly specific ways — and they systematically gravitate toward precisely the kinds of patterns that generate indeterminacy of factor prices.

As a result, the central problem with marginal productivity theory that John Hicks recognized in the 1930s has never gone away: without the arbitrary assumption of freely differentiable production functions, wages and profits are not fixed by technologies and tastes. They are set by “something else” — something outside the competitive model.


The second central flaw in mainstream economics’ distribution theory is its assumption of full employment, or something like it. In the Marx-Keynes vision, aggregate demand and the resulting unemployment level are absolutely central to income distribution. When demand is strong and unemployment low, employers are forced to call up labor from an ever-dwindling reserve army, dramatically strengthening the bargaining power of all workers, especially those who would otherwise be most powerless.

But the neoclassical school has never accepted that vision. It has always sought to cleanly sever the “micro” issue of income distribution from the “macro” issues of unemployment and demand. In pre-Keynesian days, it did so by assuming that the free-market wage ensured full employment, so that any accidental rise in the wage above its “correct,” marginal-product level would immediately thwart itself by causing firms to shed workers — inducing the temporary unemployment needed to get the wage back down to its equilibrium. (The opposite would happen if the wage fell below its marginal product.) Any remaining unemployment was either “voluntary” or caused by imperfections impeding free wage adjustment.
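
That self-correcting story can be sketched as a simple adjustment loop. The starting wage, the “correct” marginal-product level, and the adjustment speed below are all illustrative assumptions of ours:

```python
# Stylized sketch of the pre-Keynesian mechanism described above: a wage above
# the marginal-product level creates unemployment, and the slack labor market
# bids the wage back down. All values are assumptions for illustration.
marginal_product = 20.0   # the "correct" wage in this story
wage = 25.0               # an accidental rise above it
adjustment_speed = 0.5

for period in range(6):
    excess = wage - marginal_product
    unemployment = max(0.0, excess)    # workers shed in proportion to the gap
    wage -= adjustment_speed * excess  # slack bids the wage back down
    print(f"period {period}: wage {wage:.2f}, unemployment index {unemployment:.2f}")

# The wage converges back to the marginal product and the unemployment
# disappears on its own -- precisely the mechanism Keynes denied.
```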

After Keynes, the Depression, and the experience of the war, the full-employment assumption was no longer tenable. But the bulk of postwar mainstream economics was loath to give up its marginal-productivity framework. The result was a series of loosely connected ad hoc arguments with which economists were never really satisfied. In the long run, marginal productivity was still held to determine the wage, but in the short run — for reasons left mostly obscure — wages were assumed to be fixed, or “sticky.” This obstructed the “normal” free-market mechanism and created a need for government policy to ensure full employment by stimulating aggregate demand.

By the late 1950s, full employment — truly full employment — prevailed throughout much of the industrialized world, and economists began to talk about a new phenomenon: chronic or “creeping” inflation. Nowadays we’re used to continually rising prices; it seems natural. But in the postwar period it was a novelty. Previously, prices had tended to rise in booms and fall in recessions, occasionally getting out of hand during wars. An unflagging but moderate uptrend was new.

Class broke into macroeconomics at this point, because price-setting was heavily shaped by wage-setting. In a world where the majority of wages were highly politicized — determined in set-piece confrontations between flesh-and-blood representatives of their respective social classes — a technocratic debate broke out around the question of whether unions and wage-setting institutions were an independent factor: did working-class militancy push money-wages up, or was workers’ bargaining power simply a function of excessive demand set by mistaken government policy?

In the 1970s, the simultaneous explosion of working-class militancy and inflation around the world caused this two-sided debate to fracture into a kaleidoscope of theoretical positions. Many of them, ranging from reactionary to radical, eventually came to agree on one point: at a given moment in time, an economy has some benchmark level, or range, of unemployment, below which inflation will rise and above which it will fall. This level goes by a variety of names, but in most of the literature — especially the mainstream literature — it’s called the NAIRU, or “Non-Accelerating Inflation Rate of Unemployment.”
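
That point of agreement can be written as the textbook “accelerationist” Phillips curve. The linear form and the symbols here are shorthand of our own, not any one school’s preferred specification:

$$\pi_t - \pi_{t-1} = -\,\alpha\,(u_t - u^{*}), \qquad \alpha > 0,$$

where $\pi_t$ is inflation, $u_t$ is the unemployment rate, and $u^{*}$ is the NAIRU: hold unemployment below $u^{*}$ and inflation ratchets up year after year; hold it above, and inflation drifts down.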

How an economist views the relationship between unemployment and income distribution will depend on how they interpret this NAIRU concept. Today, the dominant “New Consensus” version of macroeconomics — the version embraced by both Paul Krugman and Greg Mankiw — has used the NAIRU to revive the essence of the old pre-Keynesian full-employment assumption and its separation of distribution from demand — with a few notable changes.

The NAIRU, or “equilibrium unemployment,” now stands in for full employment. Its level, as in pre-Keynesian days, is still supposedly determined by the strength of obstacles to free wage adjustment, such as unions and unemployment benefits. But rather than the freely adjusting market wage of yore, it is now the freely adjusting interest rate set by a beneficent central bank that is supposed to assure continuous “full employment” — that is, unemployment roughly at the equilibrium NAIRU level.

The Marx-Keynes tradition, to the extent it accepts the concept at all, rejects the notion of the NAIRU as a technical equilibrium determined by obstacles to free-market wage flexibility. It sees history, politics, and institutions as its main determinants, and the rate itself — both the actual rate in the economy and the official rate posited by the technocratic class — as a vital and ongoing terrain of class struggle.

In fact, the whole history of the concept — from its birth in the crucible of sixties-era working-class radicalization, to its controversial working and reworking amidst the chaotic class politics of the late seventies, to its grim institutionalization, especially in Europe, as the ideological enforcement arm of capitalist power — has been a chronicle of class conflict.

Today, it’s in the name of the NAIRU that the European Central Bank issues detailed directives to elected governments on how quickly they’re expected to dismantle a century of working-class achievements in pensions, disability benefits, unemployment payments, and union rights.

In the US, the NAIRU has been the public rationale for what’s said behind closed doors at the Fed, where transcripts show policymakers obsessed with minute shifts in workers’ psychology — debating how “insecure” they feel; whether that’s producing “favorable” wage settlements; and sometimes, what the effect will be on the NAIRU.

In 1997, the Dallas Fed president worried that the Teamsters’ successful UPS strike had done “a good deal of damage” and could “go a long way toward undermining the wage flexibility that we started to get” after Reagan broke the air-traffic controllers. Alan Greenspan agreed: “The air traffic controllers’ confrontation with President Reagan set in motion a fundamental change in policy for this country more than fifteen years ago. It is conceivable that we will look back at the UPS strike and say that it, too, signaled a significant change.”

The Boston Fed president mused that the importance of strikes may be “not so much their near-term impact on economic activity or inflation but rather their longer-term impact on people’s perceptions of the relative power of labor unions versus management . . . in an environment in which there seems to be a great deal of concern about whether Wall Street, shareholders, and management are enriching themselves at the expense of workers’ standards of living.” “Even nonunion relationships between labor and management” could be affected, she fretted.

In the late 1990s, Greenspan’s “traumatized worker” hypothesis convinced the Fed to let unemployment fall to its lowest level since the Golden Age of capitalism. But by 1999, the tight labor market was seen to be causing rapid wage growth and falling wage inequality. So despite no sign of inflationary pressure, interest rates were raised and the boom brought to an end.

“If we continue to talk about tight labor markets as if that is a truly evil phenomenon,” the New York Fed president brooded in a 2000 meeting transcript, “we are going to convince the American people that what we believe in is not price stability, which is for the good of everybody, but a differentiation in income distribution that goes against the working people.”


Greenspan’s “Great Moderation” ended abruptly in 2008. Five years on, much of the rich world is far below even the ersatz “full employment” of the NAIRU era. The mainstream macroeconomic consensus of the good times, in which New Keynesians and New Classicals disputed the foundations but agreed on the basic message, appears to have collapsed into a rancorous battle between saltwater and freshwater economists.

Meanwhile, much of the technocracy seems to have concluded that policy has reached its limits: no more can be expected from monetary policy, while fiscal policy is hamstrung by debt levels. In the midst of the worst macroeconomic conditions since the Depression, we have witnessed the improbable resurgence of the once-discredited budget-balancing “Treasury View” of the pre-Keynesian era. In the battle between Austerians and Stimulators, there is no question that we side with the likes of Paul Krugman. Mass unemployment is a position of great weakness for the working class.

But in the long run, radicals need something more from their economics. Class conflict is at the heart of the capitalist economy and the capitalist state, yet neoclassical economics will not acknowledge the fact. How, then, should we think about economics as a discipline and the question of inequality as its subject? At an individual level, there are truly great economists working in the mainstream — some harboring deeply humane instincts, and some even with good politics. As a body of knowledge, economics yields a flood of invaluable empirical data and a trove of sophisticated tools for thinking through discrete analytical questions.

But as a vision of capitalist society, mainstream economics is simply hollow at its core — and the hollow place has been filled up with a distorted bourgeois ideology that does nothing but impede our understanding of the social world.