“Innovation” is the new mantra of American higher education. It is invoked as an indubitable good by college presidents and government officials, business advisors and philanthropists alike. It typically refers to developments in technology aimed at delivering “educational content” and transforming the way that universities themselves work, as well as developing products for businesses outside. In our fast-moving technology-driven world, so the assumption goes, we need to leave old practices and products (like the iPhone 4) behind, and embrace the new.
The previous mantra was “excellence.” One might trace the history of American higher education through its keywords; several other phrases, along with “innovation,” encapsulate its current values. “Critical thinking” has largely replaced the older goal of “moral education.” “New knowledge” has supplanted the “furniture of the mind,” which was provided by the classics (as the Yale Report of 1828 put it) through the late nineteenth century and is still presented as the rationale for great books programs. “Diversity,” too, remains near the top of the list; it used to mean the varieties of schools in the far-flung American system, from small religious colleges to large state universities, but since the 1980s has come to denote the various identities of students.
The embrace of “innovation” in many ways picks up where “excellence” left off. This earlier obsession, as Bill Readings pointed out in his 1996 book The University in Ruins, indicated a shift in the core idea of the university as based on philosophical reason or national culture to the more corporate-inflected “university of excellence.” “Excellence” has not disappeared, but it has faded in force over the past decade as “innovation” has become universities’ primary mission, their octane as “economic engines.” We now inhabit “the university of innovation,” with new Innovation Centers, Innovation Institutes, and Innovation Initiatives on almost every campus.
And who could be against innovation? Only crotchety professors, and maybe the Amish. But beyond the vista of continual progress that it projects, the rush for innovation carries a deliberate politics that has largely gone unquestioned. Innovation for what? For what purpose, and with what result? And for whose benefit?
A fundamental premise of American higher education—from the days of Thomas Jefferson to the founding of state universities during the nineteenth century, the Truman Commission and GI Bill after the Second World War, and other policy initiatives such as the Higher Education Act of 1965—was equality. Of course, not everyone can go to Harvard, but the impetus during the great postwar expansion was to make public higher education commensurate with the education you might receive at Harvard. Berkeley was once the “Harvard of the West,” and the school where I got my PhD, SUNY-Stony Brook, was touted as the “Berkeley of the East.” Federal policy sought to establish a certain parity among schools and offer a comparable quality of education to all, from community colleges in California to graduate schools in New York.
Current calls for university innovation reflect a major step back from that goal. Advocates of innovation, as the term is most commonly used, propose to mechanize teaching, particularly for less privileged students, building a deeply stratified structure of educational opportunity and normalizing inequality under the banner of convenience and freedom of choice. At its core, the innovation agenda represents the interests of the business elite over those of educators or students. At its worst, it is a property grab of a formerly public service.
Like “excellence,” “innovation” was imported from business theory. Its most influential blueprint is business guru Clayton Christensen’s 2011 The Innovative University: Changing the DNA of Higher Education from the Inside Out (co-written with Henry J. Eyring), which extends Christensen’s vaunted idea of “disruptive innovation,” prominent since his 1997 book The Innovator’s Dilemma, to universities rather than factories.
Christensen’s basic approach is to apply the spirit of “creative destruction” to business theory. He holds that businesses tend to stay with what’s been successful, but, counter-intuitively, that is their downfall. To survive and flourish, Christensen argues, a business should not continue on its established path, even if it has proved successful, but instead dismantle and rebuild it, particularly using new technology.
One of Christensen’s key case studies is that of the American steel industry, which he argues precipitated its own decline by staying with what it knew through the 1970s. In turn, innovative upstarts, like the “mini-mill” steel producers, stepped in and took the market out from under the established players. Disruptive innovators typically start with cheaper products at the bottom of the product line, picking up the low-end market that established companies ignore, and from that base go on to capture a major share of the market. Another such example is the transistor radio, which eventually took the field from large box radios.
The Innovative University applies Christensen’s theory wholesale to higher education, which he and Eyring cast as a huge consumer enterprise, now languishing like the steel industry was in the 1970s. The problem, in their diagnosis, lies in higher education’s “DNA.” It has been operating under the same structure for centuries, and costs have only risen. Most products are produced more cheaply over time; why not higher ed? Their solution, as with producing steel or radios, is largely new technology—or, more precisely, the reconfiguration of labor through the use of technology. Christensen and Eyring highlight the cases of Brigham Young University–Idaho, as well as for-profits such as DeVry University, a technical college, which conduct the majority of their classes online. This, they argue, is the alchemy of disruption in action, supplanting traditional labor-intensive, face-to-face instruction with a new, cheaper, mechanized practice. It is convenient for students, readily measurable, and saves on labor from professors.
Christensen and Eyring do put their fingers on a key problem of higher education—its increasing cost and thus diminished or impeded access for students. But their business framework simplistically conceives of this problem as a natural effect of the market rather than one of policy, and their business-driven solutions abandon the fundamental American goal of equal educational opportunity. The authors admit that the DeVrys of the world offer an inferior education, like a transistor radio, but find no problem with that; what matters is that such institutions offer student-customers an expanded market from which to choose. Under the guise of democratizing higher ed, they offer its Walmartization.
Throughout their book, Christensen and Eyring compare Harvard and BYU-Idaho, acknowledging that Harvard provides better conditions, learning, and experience. But no harm, no foul: in their consumerist framework, Harvard is a luxury good, like buying an expensive shirt at Bloomingdale’s. It’s too bad more people can’t have it, but if you conceive of it as a consumer good, it is not a deprivation of one’s rights because you can still buy a shirt at Walmart. It might not be as well constructed, fashionable, or made from as high-quality a fabric, but it will earn you credit for wearing a shirt. And, like Walmart superstores, online providers such as DeVry or Southern New Hampshire University are open twenty-four hours a day!
The problem with Christensen and Eyring’s cheery assessment starts with their basic premise, that education is a consumer good. If you instead consider it a social right, what they describe is deprivation, because you are denied an education of equivalent quality.
Tapping the cliché of the Ivory Tower, innovators like Christensen and Eyring portray the American university as obsolete and resistant to change. However, one lesson of U.S. higher education history is that it has changed continually; according to the leading historian Roger Geiger in his new History of American Higher Education (2015), it has evolved over ten or eleven generations in several, sometimes contradictory, ways. There was the small religious college with a set classical curriculum in the seventeenth century; the research university with electives and new sciences in the late nineteenth and early twentieth centuries; the mass and relatively free public education system established during the mid-twentieth century; and many variations in between. The American university’s evolution from the Revolutionary War era through the 1970s largely reflects an elaboration of Jefferson’s idea of education as an instrument of democracy—a vision that gained momentum with the profusion of state universities in the nineteenth century, spurred by policies like the Morrill Land-Grant acts, and that reached its peak in the postwar era thanks to substantial federal and state support.
Christensen and Eyring report the rise in education costs as if it were a natural consequence of economic weather, remarking on the “downturn that knocked the wind out of the traditional universities.” But the true culprit, especially at public institutions, is the decline of public support. As a 2015 Demos report, “Pulling Up the Higher-Ed Ladder,” documents, almost every state has reduced the proportion of funding it devotes to education over the past four decades; in the twenty years from 1990 to 2010 alone, state funding on average shrank from covering more than half the cost of tuition to less than one third. That decline has directly driven the precipitous rise of student debt, student work hours, and family indebtedness. These changes in American higher education are the result of deliberate policy, following a larger ideological drive toward privatization. Education has been restructured as an individual responsibility rather than a public one.
This shift and its associated costs have exacerbated inequality among the student population as well as in the nation at large. A 2015 Pell Institute report demonstrates that the past three decades have seen drastically widening gaps between rich and poor students in attendance and completion rates. And Elizabeth Armstrong and Laura Hamilton show in their 2013 book Paying for the Party how higher education reinforces inequality not only in numbers but in ways that permeate people’s lives and opportunities, from where they live to whom they socialize with. Those results might be fine for a country living under an aristocracy, but not for a democracy. And they have snowballed under the banner of innovation.
The push to commercialize higher education, then, has transformed universities not only quantitatively but qualitatively. The business framework seeks to erase the differences between nonprofit public institutions and profit-making private enterprises. What the innovators brush aside is that something other than buying and selling goes on at universities—self-exploration, intellectual enrichment, civic participation, professional training, and so on. Universities are not like stores but like churches, which in fact is the analogy that helped establish their independent legal status in the Supreme Court’s 1819 Dartmouth College v. Woodward decision, which confirmed the college’s autonomy. Why should universities have a tax exemption if they are really businesses?
Another way to think of it is like this: universities receive donations, like churches, to support their non-market roles. Why would people donate money to them if they were profit-seeking businesses? While I like to shop at Macy’s and hope that it stays in business, I would not donate money to it, whereas I have donated to both universities and churches. Of course universities should adopt best business practices, but if they act primarily as businesses, seeking to sell a product (education) and construing students as customers, then they abnegate their non-market role. This is a fundamental contradiction in the push to commercialize higher ed.
Moreover, while the disruptive model might fit tech companies that produce faster devices each year—and even there it’s hardly assured, as Jill Lepore argued in the New Yorker in 2014, pointing to Christensen’s dubious evidence and demonstrating that “sustaining innovations” have a far greater record of success than the disruptive kind—a similar process of mechanization does not fit education. Contemporary cognitive scientists tell us that learning depends on empathy and affective engagement, which one has with other people, not with machines. And it takes time. Small children learn to read best when an adult reads to them regularly, and the same holds for college-age and other students. iPads alone won’t do it.
One of the more compelling examples that Christensen uses to illustrate the effect of disruptive innovation is the case of the transistor radio. Though it had inferior sound compared to large tabletop radios, it was cheap and made music readily available, especially for young listeners in the 1960s. Eventually sound quality improved, and transistors became cheaper and omnipresent. Fair enough. But we should remember that a transistor radio is a delivery device. Its content is the music, which still requires time-consuming craft to make, and people still value the performance of music, paying a substantial premium for live concerts over MP3s. Education works more like music, requiring craft and best expressed in a scenario of human contact and performance.
Boosters of online education often ignore this fact, preferring to parrot that young people love technology, so it’s inevitable and good. Christensen and Eyring lead with this premise, arguing that young people like Facebook and so will naturally gravitate to online higher education. But this is belied by the actual evidence. According to Jonathan Stein, Student Regent in California, students are not so easily sold on the innovators’ bill of goods. They want genuine face-to-face contact, not screens. Online programs have poor records of completion, and recent studies have shown that online participation, which promotes multitasking, diffuses attention and is actually less efficient and less effective than traditional modes of learning.
Moreover, the Facebook justification rests on what I would call “candy reasoning.” If students like to eat candy, should we then encourage it? This reveals a fundamental flaw with the innovators’ consumerist framework. The structure of higher education should not depend solely on students’ desires, but on what we as a society collectively deem useful and essential to know, and what experts know that students do not.
The Facebook rationale matches much of the other reasoning for online education. It relies more on self-interested promotional statements than on serious scholarship. The Innovative University, for instance, tends to rely on marketing slogans as evidence. At one point, Christensen and Eyring tout DeVry’s “rigorous standards”—a phrase they draw from the university’s own website. Their assessment has no scholarly credibility and might well be sponsored content.
If students don’t benefit from online education, who does? The most conspicuous beneficiaries are the technology providers, who hope to reap a great deal of untapped profit for new “delivery systems” from a major public service. This backdoor privatization process also shifts control of the enterprise from faculty to external entrepreneurs, making faculty a subservient labor force. From a managerial perspective, the major impediment—and what Christensen and Eyring wish to disrupt—is faculty control. As Christopher Newfield, a leading critic of higher education, has pointed out, disruptive innovation “isn’t a theory of innovation but a theory of governance.” Innovators want to displace faculty from their traditional role as the constitutive body of the university and turn them into piecework employees for hire.
To wit, in more than 400 pages of The Innovative University, Christensen and Eyring barely mention faculty and portray any consequential decisions as top-down and administrative. Their book tells the history of American higher education as if it were the result of a few great college presidents who implemented major disruptions. It’s a history seen through the lens of the CEO, and its hero is not the student who might learn, grow, and rise (the main character of the great state university systems) nor the professor who might seek truth or create knowledge (the hero of the great research universities), but the manager.
Innovation, of course, refers to other practices besides online education, most prominently the push to design and produce new products for commercial use and to incubate business start-ups. The word has also become internalized within academia as an all-purpose value for research and teaching, a buzzword that deans demand and faculty can claim to enact. We are all innovators!
Such unthinking celebrations of innovation have a dubious track record. Take the current system of student debt, an “innovation” in financial instruments and arrangements. Inaugurated under the Higher Education Act of 1965, loans were initially intended to provide a boost to underprivileged students. But after they were deregulated in the 1980s, they grew exponentially as a source of higher-ed funding—and as a major stream of profit for banks. To make matters worse, a 1978 amendment to bankruptcy law gave student loans a separate status so that they were no longer dischargeable when declaring personal bankruptcy. This innovation has leveraged enormous profits for Sallie Mae and other carriers. Is this what innovation is for?
Another major innovation was the change in regulations that made for-profit colleges eligible for federal student loans in the 1990s. Now, according to the 2012 Senate investigation of for-profit colleges (what came to be known as the Harkin committee report), most for-profits receive nearly 90 percent of their funding from federal student loans. In other words, despite expounding the virtues of the free market, for-profit colleges rely almost entirely on government support. This is yet another innovative strategy of privatization, channeling money from the public sphere to private, profit-seeking corporations. It is a funding model that supports finance, not students. The new beneficiary of this system of higher education is the financier.
In addition to its material effects, the culture of innovation has also had profound psychological effects. In his book of the same name, Jonathan Crary calls it the feeling of “24/7,” the way that our devices permeate all of our time, disrupting basic human rhythms. We feel, as the Italian autonomist Paolo Virno describes it, “a chain of perceptive shocks” causing a continuous disequilibrium and anxiety. The innovative ethos, in Virno’s assessment, cultivates opportunism above all: as workers cede job security to new technologies, their lives are destabilized, and civic spirit loses out in the scramble to get ahead. Is this what we want our universities to teach?
The changes that American higher education has seen in the last three decades are not irrevocable. Germany, for instance, instituted fees a decade ago in what had historically been a state-funded system, but it recently rescinded them and returned to a policy of free tuition, finding that free higher education better supported the country’s society and its very productive economy.
If we are to achieve a more democratic system of higher education, it won’t be through privatization and mechanization. We need to ask fundamentally different questions. What would be today’s analog to Jefferson’s proposal to build a free public university? The Truman Commission Report excoriated the inequality of higher education in its day. What might a new Truman Report look like? How can we reclaim the language of change for uses that support and build on the egalitarian spirit of education? That is the kind of innovation we need.
Jeffrey J. Williams is the author of How to Be an Intellectual: Essays on Criticism, Culture, and the University (Fordham, 2014), which includes four pieces that first appeared in Dissent.