Music, Modernism, and the Twilight of the Elites

By now it is becoming hard to remember that, at the peak of its popularity and influence, classical music carried with it an undeniable intellectual and even moral authority, qualities which would rub off on composers and performers such as Aaron Copland, Leonard Bernstein, Albert Schweitzer, Pierre Boulez, Van Cliburn and Igor Stravinsky, all of whom would, in different ways, play leading roles within the social and cultural landscape of the cold war period.

In one respect, not so much has changed in the years since: musicians with large, loyal and enthusiastic followings continue to have an outsized influence, frequently recruited for initiatives ranging from animal rights to medical marijuana to environmental justice. The songs of Bruce Springsteen, Johnny Cash, Smokey Robinson and Garth Brooks (to take a few names at random) have been staples at candidates’ rallies for decades, their endorsements actively sought out by political figures. Musicians have also served in various political positions in recent years, among them Congressmen John Hall (of the seventies rock band Orleans) and Sonny Bono, author of landmark copyright legislation. In Latin America, Gilberto Gil and Susana Baca have been appointed as Ministers of Culture in Brazil and Peru respectively, positions of substantial power and influence.

That those just mentioned are all pop musicians is indicative of a widely recognized, albeit infrequently discussed, development: classical music’s precipitous loss of prestige and cultural authority over the past two or three decades. The reasons for this are a larger topic than I can examine here. Rather, I will focus on one corner of the classical music world where the fall from grace has been particularly dramatic — namely, what used to be called “contemporary” or “modern” music, but which now, insofar as it is recognized as a distinct genre, requires an additional generic qualifier: contemporary classical music.

Before I do so, I will concede that in the scheme of things, whether one or another kind of music gets subsidized, composed, and performed is not a matter of much significance. What is significant and revealing is the basic outline of the collapse: an elite-sanctioned enterprise is challenged, loses its capacity to claim expertise and, ultimately, its privileged status. This trajectory has been recapitulated on many occasions outside of the world of music and the arts. Carried to its extreme, elite privilege and impunity are at the core of why many of us now find ourselves on the streets. And so the larger drama which is now being played out is more than a little familiar to composers of my generation, as a brief recapitulation of what I will call our nakba will reveal.

The story begins in the nineteenth century, when composers such as Beethoven, Wagner, Verdi and Brahms achieved a societal status and cultural centrality equal to, if not greater than, that of artists in any previous historical epoch. Their exalted status would continue through the modernist period, though composers were now met with increasing incomprehension and bewilderment in the face of works which were, in some cases, deliberate assaults on bourgeois sensibilities.

Doubts as to the viability of musical modernism intensified during the postwar period and would emerge as a frequent topic of highbrow discourse a generation ago. Books and magazine articles with titles such as “The Agony of Modern Music”[1], “Terminal Prestige”[2], “Rationalizing Culture”[3] and “The Twilight of the Tenured Composer”[4] took aim at what one then-marginal composer described as “a wasteland, dominated by these maniacs, these creeps, who were trying to make everyone write this crazy creepy music.”[5] At a certain point a dominant critical narrative would emerge announcing the death of contemporary classical music at the hands of a cadre of modernist zealots. Working within the legacy of the Second Viennese School (Schoenberg, Berg and Webern), figures such as Pierre Boulez and Milton Babbitt — so the story goes — attempted to reduce the creation of music to a technocratic specialty. More fluent with the manipulation of abstruse arithmetic formulas than with the nuts and bolts of melody, harmonic progressions, audible form, or sonic appeal, they believed that the creation of music could be reduced to autonomous syntactic form at its most cognitively opaque, with little concern as to what, or even whether, a message of any significance was communicated to their audiences.

In short, it was music without meaning and was received as such, not just by general audiences but by its nominal target audience of intellectuals, who — in a minor recapitulation of the well-worn trahison des clercs maneuver — fled towards the greener pastures of, first, jazz and then eventually a wide variety of popular and world musics. At its nadir, classical composition would be regarded, along lines argued by the philosopher Stanley Cavell, as a fundamentally fraudulent enterprise[6]. And while Cavell’s description was directed narrowly at the aleatoric experimental tradition associated with John Cage and his circle, the indictment would frequently be extended to composers working within the high modernist tradition hostile to Cage and his school, and even to a wholesale rejection of twentieth century music[7], including composers who saw themselves as reacting against modernism.

These widely circulated critiques would result in classical composers being dislodged from what had been the exclusive perches from which their influence could be projected. Their marginalization is routinely attested to by examples like the following: When the New York Times seeks “expert” opinions on “where music’s soul resides,” Paul Simon, Rosanne Cash and others from the pop music world are asked to contribute their perspectives. No classical composer is represented. Or, while the New York Review of Books, during its first decade, would open its pages to Robert Craft and Charles Rosen extolling the work of then-cutting-edge high modernists such as Elliott Carter, now its musical perspective tends to be centered on vernacular idioms. What would have been dismissed in years past as celebrity biographies — of Eminem, Bob Dylan and various Motown and rap artists — are now respectfully reviewed and often praised. A somewhat more downscale indication of contemporary composers’ marginalization can be seen by perusing the guest docket of the middle-brow public radio interview show Fresh Air, which lists only two classical composers (Steve Reich and John Adams) in the past two years, next to columns of jazz, rock, soul, hip-hop and world music musicians.

I should stress here that I do not endorse what is by now the dominant conventional wisdom, which assigns responsibility to academic high modernism for contemporary music’s waning cultural authority. While I have rather little affinity with the idiom, the allegation of fraudulence, in particular, directed at its practitioners (insofar as it has any meaning) is obviously belied by the high level of technical competence, in the most traditional sense of the term, of the composers working within it. Rather than denigrating academic modernism, my objective here is to note that such charges directed against it would come to assume the status of a conventional wisdom and, as such, had an important impact on the ultimate viability of the field and on the capacity of recognized elites to dictate the terms of the broader public’s engagements with musical culture.

Expressed in more familiar terms, what all this amounted to was the spectacle of a Blanche DuBois of art forms getting its comeuppance. And to many it was long overdue — including cultural populists who resented classical music’s combination of arrogance and preciosity, not to mention its access to funding sources insulating it from the harsh dictates of the capitalist marketplace.

But the pleasures of Schadenfreude would turn out to be short-lived, as other disciplines in the arts and humanities would soon follow classical composers in a march towards the cultural margins. Among the first to join in would be literary scholarship. Just as the charge of fraudulence would emanate first from within the ranks of composers, so too would the dimmest views of literary studies emerge first from within the field. Many traditional literary scholars earnestly expressed their discomfort with postmodernist and poststructuralist tendencies in academic journals. But the best known of these salvos would appear in satirical works such as Frederick Crews’ The Pooh Perplex and its sequel Postmodern Pooh — alarmingly pitch-perfect takedowns of the range of critical “theories” dominant across two different academic generations. Along similar lines, David Lodge’s series of novels poked fun at the pantheon of academic celebrities, pretty much all of whom emerge as buffoons — most notably the character of Morris Zapp, a stand-in for the literary critic Stanley Fish, whose own family demands a ransom from his terrorist abductors for his release from captivity. Probably the least known of these satires would be David Bromwich and Edward Mendelson’s Raritan article “Historicizing Phrenology”[8], which would be seriously discussed within the field for nearly eight years before being exposed as a hoax by the journalist Ron Rosenbaum.

Coming from within the academic fraternity, these portraits, while certainly uncomfortable, could be accepted as the normal and healthy internal self-criticism which any serious discipline takes for granted. More problematic was the increasing recognition that the most skeptical, corrosive and hostile views of the field were prevalent on the outside. This became apparent with a rather less friendly satirical attack delivered by NYU physicist Alan Sokal in the form of an absurdist article accepted for publication by the flagship postmodern journal Social Text. The Sokal hoax, as it became known, demonstrated, at minimum, a widespread perception that leading members of the postmodernist movement were incapable of rational discourse, profoundly ignorant of, and simultaneously contemptuous of, scientific norms, and altogether lacking in intellectual self-discipline. Subsequent defenses mounted in its wake only made matters worse and opened the field up to additional kicks from the political Right administered by Paul Gross and Norman Levitt in their book Higher Superstition[9]. More damage would be inflicted by the disclosure that one of the leading figures of the deconstruction movement in literary theory, Paul de Man of Yale University, had authored antisemitic works for a Belgian collaborationist newspaper. With de Man’s unmasking, the relativist tenets fundamental to poststructuralist criticism would take on a much darker subtext as a form of Holocaust denialism — one that was familiar to its continental followers, but would be particularly abhorrent to many of its most enthusiastic domestic adherents and potential sympathizers.

The terminal point of this descent would be mordantly described in 1999 by Andrew Delbanco in the New York Review:

A couple of years ago, in an article explaining how funds for faculty positions are allocated in American universities, the provost of the University of California at Berkeley offered some frank advice to department chairs, whose job partly consists of lobbying for a share of the budget. “On every campus,” she wrote, “there is one department whose name need only be mentioned to make people laugh; you don’t want that department to be yours.” The provost, Carol Christ (who retains her faculty position as a literature professor), does not name the offender — but everyone knows that if you want to locate the laughingstock on your local campus these days, your best bet is to stop by the English department.

Just as classical composers continue to function within the academy in a reduced capacity, so too do college English departments continue to service diminishing numbers of undergraduate majors, as well as non-majors in required survey classes. One difference between the two fields is the fact that classical composers, being perhaps more painfully aware of their declining status among peers, now seem reluctant to express opinions outside of narrowly defined music-theoretical topics. Literary scholars appear somewhat less constrained in offering their expert opinions on a wide range of topics — or “texts,” as they construe the empirical basis of their discipline. Among the least reticent is Harvard’s Elaine Scarry who, in a series of articles, discussed the role of high-intensity radiated fields in the explosion of TWA Flight 800 in 1996. That Scarry, the Cabot Professor of Aesthetics, has virtually no credentials in the relevant, and highly technical, subject matter suggests that a high degree of intellectual self-confidence (not to say chutzpah) remains deeply rooted at least among the elite ranks of humanities faculty. A less flamboyant example is provided by the recent political writings of Yale’s David Bromwich in the London Review of Books and elsewhere. These are brilliant and profoundly damning portraits of the Obama presidency, widely read in the nascent and (one hopes) fast-developing movement of opposition to the administration’s various constitutional, environmental and economic outrages. While they are outstanding instances of journalistic polemics in the tradition of I. F. Stone, they are reasonably viewed as categorically distinct from Bromwich’s central (albeit impressively broad) scholarly domain of English imaginative literature extending from Shakespeare to Hazlitt to Wordsworth to modern poetry.

It should come as no surprise that those whose academic credentials have become “a laughingstock” would direct their energies towards other fora where their rhetorical fluency and sensitivity to textual nuance might be profitably exercised. Something of the same has occurred among academic composers, many of whom now work within idioms (e.g. rock and world musics) which bear little relationship to the canonic works that constituted the core of their academic training. These exercises in academic moonlighting, while in one respect reflecting favorably on the underlying competence of those engaging in them, are at the same time consistent with the dim, stigmatized view of the academic humanities as best avoided by those wanting to be taken seriously within the broader culture.

The public shaming of the humanities was the cause of some satisfaction outside these disciplines — notably among skeptics who saw them as, at best, consisting of the expression of subjective tastes and prejudices, and at worst, the parading of charlatanism and fraudulence in a pose of expertise. But the shaming has recently expanded in surprising directions. Among the disciplines whose stock had risen as the humanities declined were those whose empirical methodologies were seen as exempting them from the charge of subjectivity run amok. Foremost among these was what was seen as the most rigorous of the social sciences, namely, the field of economics — in particular the neoclassical rational choice doctrine, which acquired unparalleled prestige, its leading practitioners commanding large salaries, its models pushing to the margins older Keynesian and other approaches, and its graduates having their pick of entry level jobs within an elsewhere tight academic job market.

The best known figures would move effortlessly around the three-legged stool of government, corporate boardrooms and the academy. Milton Friedman, Alan Greenspan, Lawrence Summers and Jeffrey Sachs would become nearly papal in their influence on public policy both here and abroad, their pronouncements granted ex cathedra status by elites in media and policy-making circles. For the broad public, the economists of the Freakonomics franchise applied their presumed expertise to everything from prostitution to gun control, musical taste, and climate change — subjects far from interest rates, demand curves and price equilibria. They would help to establish the capitalist market as a secular icon, especially among nobody’s-fool hipsters who had been traditionally resistant to the blandishments of right-wing economic ideology.

The next step in this progression was, as everyone knows by now, off the cliff, which is to say a near-total collapse of the field’s intellectual and moral standing. The economists’ nakba can be dated to the years of the housing bubble, which would be missed by virtually the entirety of the mainstream of the profession — a failure roughly equivalent to the membership of the American Astronomical Society failing to predict an eclipse. Perhaps the cathartic moment was Alan Greenspan’s congressional testimony admitting to “flaws” which resulted in “the model’s” predictive failure. Greenspan was, of course, careful to couch his mea culpa in typically obscure scientistic rhetoric, likening himself to a civil engineer or biochemist discussing a faulty construction technique or an ineffective prescription pharmaceutical. The underlying premise was that the basic principles of economic science remained sound but that they had been mistakenly applied in this particular instance. What was increasingly perceived by those outside, and by a significant minority on the inside, was that economics was no more an objective science than were the humanistic disciplines many economists had viewed as empirically soft and intellectually suspect. Indeed, much of economics would be shown to embody the most cynical view of the humanistic enterprises: an elaborately staged masquerade whereby elite prejudices were transformed into eternal truths and implemented in public policies, somehow always having the effect of further enriching elites to the immiserating detriment of the other 99%.

The transformation of the economics discipline from deity to demon is best charted in Charles Ferguson’s 2010 documentary Inside Job, which demonstrates in clinical detail how an elaborately fashioned edifice of mathematical rigor and formal methodology was built on a foundation of self-interested fraudulence. Columbia professors Glenn Hubbard and Frederic Mishkin are shown to be simple con men, albeit extremely effective ones, having parlayed their academic reputations into positions of considerable influence and, not coincidentally, considerable personal wealth. In a subsequent article for the Chronicle of Higher Education, Ferguson took aim at the conflict-of-interest policies obtaining in elite universities, comparing these with the similarly lax regulations which lubricate, rather than impede, the revolving door between the government, the defense industries and private lobbying firms. Ferguson concludes with the Hippocratic imperative “academe heal thyself!” in recognition of the central role which supposedly disinterested academic inquiry has played in promoting the fraudulent assumptions at the root of economic policy for two political generations, one which has visited untold harm on tens if not hundreds of millions of people.

Mainstream economists have been slow to recognize, and even slower to accept their responsibility for, the decisive role of their discipline in the global catastrophe. Slower still has been the political establishment, within which mainstream economics retains much of its luster. That this is the case can be seen in the response to the economic crisis: the imposition of an international regime of austerity that takes a page directly from the supply-side playbook. There are now indications, most conspicuously in the form of the Wall Street occupation movement, that this conventional wisdom will be, at least in part, dislodged. That this will not happen without intense external pressure gives a good indication of the fundamental corruption at the heart of the economics profession and its essential role in the ongoing war of dispossession waged by economic elites against the vast majority of the population.

In fairness, it should be recognized that more than a few economists have harbored doubts and have been increasingly willing to express them. One indication can be seen in the Nobel Prizes which have, in recent years, been granted to those well outside of the right-wing mainstream — notably the 2002 prize awarded to the psychologist Daniel Kahneman, whose work is a direct repudiation of the assumption that humans are capable of consistently exercising the cognitive capacity for rational choice in the economic realm. Another Nobel Prize winner, Paul Krugman, enjoys a New York Times platform for his attacks on what he has dubbed mainstream “freshwater” economics, accusing it of “recapitulating 80-year-old fallacies in the belief that they’re profound insights, because [most economists] are ignorant of the hard-won insights of the past.” According to Krugman, “many economists aren’t even trying to get at the truth [but engage in] the invention of stories to rationalize the disaster in a way that supports their side of the partisan divide.”

“All this,” Krugman continues, “makes me wonder what kind of an enterprise I’ve devoted my life to.” While literary scholars and composers may, in their darker moments, harbor real doubts about how their fields are practiced, few, even in the depths of despair, would ask this sort of question about their fundamental purpose and premises.

So far, I have focused on the negative side of these instances of intellectual disgrace, in particular the perception of a fundamental bankruptcy at the heart of expertise as it was defined within the postwar institutional mainstream. But there is also a positive moral, which is that the collapse of institutional authority has brought with it the recognition that the ideas, analyses and predictions of amateurs — average, uncredentialed citizens — have matched and frequently exceeded in insight, reliability, and accuracy those of credentialed experts. While this has been apparent in all of the fields mentioned above, it is perhaps most conspicuous in music, where it is now taken for granted that works of emotional power, intellectual substance and structural sophistication are routinely produced well outside of the expert class — which is to say, outside the circle of those who have been granted the honorific job description “composer.”

The relevant form of technical expertise differentiating composers from musicians who just happen to create music has been that of musical notation, for centuries assumed to be the exclusive medium through which musical works with pretensions to artistic seriousness had to be conveyed. With the folk and rock music revolutions of the fifties and sixties, the notated score would be relegated to the status of a historical artifact, as recordings and broadcasts of performances by musicians became the main medium through which music was communicated from composer to performer to audiences. The rigorous musical training necessary to produce legible scores and to decode the complex hieroglyphics of musical notation would be seen, in a kind of mirror-image musical Reformation, as a barrier imposed by elites designed to suppress the right of the masses to participate. Pete Seeger would become the seminal figure in this revolution, insisting that the joys of both music and music making should be available to all, not just those able to negotiate music on the page. The removal of the barrier resulted in the art form which has defined the culture of every generation since: Motown, Bob Dylan, Lennon and McCartney, Joni Mitchell, The Last Poets, Kurt Cobain, Kanye West — the canon of contemporary music as it is now uncontroversially defined, virtually all of which has emanated from those who lacked either the means or the inclination to develop conventional musical literacy.

Of course, while the mandarin class of its day ridiculed the possibility of market-oriented ephemera having any connection to musical “culture” in any meaningful sense of the term, this view has now been on the defensive for at least two generations. By now, those who would claim cultural centrality or even relevance for the icons of sixties high modernism promoted by elites — Carter, Babbitt, and Boulez — seem merely ridiculous, the musical analogues of defenders of Enver Hoxha or Kim Jong-Il. A new chapter of this story is now emerging, however. For with the complete triumph of market fundamentalism within economics, and its close relative market populism within the world of arts and cultural production, there are more than a few indications that discontent is simmering, not only with the social conditions the market has wrought but with the inevitable limitations it imposes on the range of creative expression.

There is an increasing recognition that within the capitalist marketplace music is necessarily consigned to a utilitarian function, as a delivery vehicle for commercial solicitations or, at best, the expression of lifestyle choices and social identities achieved through consumerist acquisition. It was predictable that the three-to-four-minute song would increasingly define the exclusive limits of musical form as neoliberalism tightened its grip. This limited formal vocabulary contrasts starkly with the musical culture of the canonic period which, while containing a substantial song literature, is notable for the centrality of instrumental music making use of autonomous, as opposed to textually based, musical forms. The engagement with instrumental works requires immersion in a world whose logic is dictated by its own self-contained and self-sufficient rules — of antecedent and consequent phrases, diminution and augmentation, perfect, imperfect or deceptive cadences, motivic transformation and development. As these have nothing to do with the systems of control and domination which define our lives elsewhere — the exploitation of labor, the provision of services or the acquisition of raw materials — so-called pure music constitutes a realm of experience in which homo economicus has no status. And it is for this reason that the appreciation of autonomous, non-referential musical structure, while admittedly constituting a form of escapism for most, is, at its best, a fundamentally subversive act: a recognition that another world is, at least in a metaphorical sense, temporarily possible.

It is not surprising, then, that large-scale autonomous musical discourse and syntax would become a relic of the past — as would the medium, musical notation, in which extended musical form was communicated. And, as suggested earlier, there is an inseparable connection between autonomous content and the medium in which it is conveyed: fluency with musical notation is required to acquire a technical understanding of autonomous forms (sonata, fugue, rondo, etc.); but more importantly, notation functions as a musical blueprint required for composers to conceptualize and subsequently create a coherent large-scale musical architecture of any sort. As the goal of creating an extended autonomous form within music is rejected, the technical means for achieving it inevitably wither away.

This realization raises a notable hypothetical question: What if a social and economic climate had existed in which the musical statements of the sixties had emanated from literate musical culture — perhaps as an adversarial tendency within it — rather than repudiating that culture’s most fundamental premises? What if Kurt Cobain had gone to conservatory? While we certainly would not have symphonies in anything like the traditional sense, we might have had something along the lines of a viable musical third stream.

But there is reason to believe that such a “third way” would have failed, just as have the political “third ways” beloved of generations of reformist liberal intellectuals. As for why this is the case, here is an attempt at an explanation that I ventured some years ago:

Audiences and composers have expectations which can’t be met by music transmitted from composer to performer via the print medium. It is not possible to achieve the frenzies of activity, the extremes of density, nor the near-optimal matching of musical material to instrumental capabilities inherent in the process of trial and error, improvisation, and recording studio cutting and pasting which defines the creation of most contemporary music. Nor . . . are the unlimited sonic resources, the absolute rhythmic precision, and extremes of speed, frequency, and amplitude possible within the digital realm accessible to composers working within the print music medium.

The basic sound world and musical syntax of popular music defines a distinct language, one which is inextricably linked to the medium in which it is communicated. It has become the musical lingua franca of our day, and all attempts to speak the language within the literate medium tend to come across as stilted and unnatural. Most musicians in my generation and younger are now more or less equally fluent in both languages and are able to give both languages their due: just as it is impossible that this could have been conceived within a non-literate medium, it is equally unimaginable that this could be inspired, created and performed within the medium of notation.

At the risk of pushing the analogy to an extreme, we might speculate that we are now at the beginning of a liberation of political energies comparable to that of the musical revolution of the sixties. Elite expertise having been undermined and repudiated, elites of all stripes are increasingly viewed as debased, clueless, cynical and corrupt. And it stands to reason that the leadership structure of the left has reflected this awareness, its own elites having been replaced by a horizontal democracy in which, in principle, all are given a voice and expected to participate. It seems quite likely that at some point, the General Assembly hootenanny will come to an end, and some form of top down organization will impose itself, either under duress or out of the recognition of practical, political necessity. At that point, it will be time to resuscitate what were assumed to be those moribund traditions centered around the conception of structuring large scale social arrangements for the benefit of tens of millions rather than ad hoc solutions relevant to achieving impressive but nonetheless limited activist goals. For the moment, we should recognize the nakba of the elites as the first step and welcome and celebrate the productive anarchy which has necessarily accompanied it.

  1. Henry Pleasants, The Agony of Modern Music. New York: Simon & Schuster, 1955.
  2. Susan McClary, “Terminal Prestige: The Case of Avant-Garde Composition,” Cultural Critique 12, 1989, pp. 57–81.
  3. Georgina Born, Rationalizing Culture: IRCAM, Boulez, and the Institutionalization of the Musical Avant-Garde. Berkeley: University of California Press, 1995.
  4. Richard Sennett, “The Twilight of the Tenured Composer,” Harper’s 269 (December 1984), pp. 70-71.
  5. The composer Philip Glass, quoted in John Rockwell, All American Music: Composition in the Late Twentieth Century, New York: Knopf, 1983.
  6. Stanley Cavell, “Music Discomposed,” in Must We Mean What We Say?, Cambridge and New York: Cambridge University Press, 1976.
  7. For an extreme, albeit not atypical, example, “Twentieth-century music is like pedophilia. No matter how persuasively and persistently its champions urge their cause, it will never be accepted by the public at large, who will continue to regard it with incomprehension, outrage and repugnance.” Kingsley Amis, quoted in Paul Fussell, The Anti-Egotist: Kingsley Amis, Man of Letters, New York: Oxford University Press, 1994.
  8. Rothman Salazar, “Historicizing Phrenology: Wordsworth, Pynchon, and the Discursive Economy of the Cranial Text,” Raritan 8 (1988): 80–91.
  9. Paul R. Gross and Norman Levitt, Higher Superstition: The Academic Left and Its Quarrels With Science, Baltimore: Johns Hopkins University Press, 1994.