There was a time not so long ago when big tech companies were celebrated by both the conservative and liberal mainstream as proof positive of the genius of American capitalism. Silicon Valley optimism was a countervailing force against generalized malaise and stagnation. Criticism of the tech sector stayed mostly on the margins.
In the last few years, this has changed. The corporations at the very top of the industry, now among the most highly valued and powerful companies in the world, are subject to scrutiny and attack from all directions. The critiques take a variety of forms—anti-monopoly, pro-privacy, for workers’ rights, against collaboration with authoritarian governments. But nothing has so dominated the media discussion as concerns raised by the migration of political discourse to social media platforms. In the unsettled earth of the Trump era, a new culture war around political speech online has taken root.
The battlefield for this war is content moderation—how the material that users post on online platforms is regulated. In the earlier days of the internet, content moderation was often driven by the whims of webmasters and volunteers deputized to decide what (and who) got banned. In the United States at least, the government only got involved in cases that involved what it deemed serious legal matters, from copyright infringement to child pornography. As traffic on the World Wide Web skyrocketed and shifted to the “walled gardens” of social media platforms, the stakes for corporate content moderation increased, the workforce responsible for it grew larger and larger, and politicians became more and more interested in the process.
It is only in the last few years, as Sarah T. Roberts writes in her new book Behind the Screen: Content Moderation in the Shadow of Social Media, that we have seen “industrial-scale organized content moderation activities of professionals who are paid for their evaluative gatekeeping services.” But in our extremely online political times, the “mods” aren’t just responsible for enforcing the minimum standards necessary to keep users and advertisers happy and the platforms growing. They have been cast as central players in the fight for democracy, whether as its antagonists or its delinquent guardians.
The new culture war is, in part, a way of displacing implacable divisions onto the malfeasance of irresponsible business leaders—to assign blame for political dysfunction to the know-it-all rich kids who promised their tech was going to connect the world in digital harmony. It also represents an incomplete coming to terms with the cultural upheaval wrought by these companies, which have rapidly transformed social relations from the global to the most intimate levels. These battles reveal how big tech firms have crystallized problems long in the making. They also reveal how inadequately prepared all of us are for a full reckoning with the insidious power of these companies, and what their power tells us about the conditions of political life today.
Anti-tech sentiment is most evident, and central to political messaging, on the right. For conservatives aware that they are losing the culture wars against Hollywood, the media, and the academy, Silicon Valley serves as a new convenient excuse for the diminishing popularity of their ideas.
Even as the Republican Party maintains its grip on power, majorities of younger people express attitudes at odds with social conservatism across the board. Large U.S. corporations recognize these trends; they have almost all decided that homophobia, racism, and sexism are toxic to their brand identities. To a party built on an alliance between big business and cultural conservatism, this amounts to betrayal. Fox News personality Tucker Carlson captured the mood at the National Conservatism Conference—a convergence of the Trump-aligned right-wing intelligentsia—in July: “in 2019 the threat to the things that I want to do and the things that I want to say, the threat to my conscience, to the ability to believe what I choose to believe . . . those threats really come primarily from companies and not from the federal government.”
Nowhere has the rage at “woke capitalism” been as pure as that directed at Silicon Valley. While companies like Google have been attacked by Republicans since the Obama administration (they often pointed to the revolving door between employment in the White House and corporate HQ), the criticisms are growing in frequency and intensity. The tech companies stand accused of liberal bias in their content moderation and the censorship of “free speech”—the last rhetorical redoubt of a right wing whose ideas cannot win a majority.
Democratic elected officials reject the problem of censorship as illusory. Instead, they worry there is not enough moderation. They want social media companies to do more to police violent, hateful, or intentionally misleading information. These concerns go back to the 2016 elections, when Trump successfully used Twitter (among other means) to overpower a media establishment enthralled by a man they believed was too outrageous to win, and Facebook became a dominant delivery mechanism for political information, misinformation, and disinformation—all while a company owned by a Trump-aligned hedge fund manager mined data on 87 million users from the social media corporation to direct campaign messaging more effectively.
Many liberals point to poisonous online discourse—a problem without a clear solution—as a major factor in Trump’s rise. But it was in the wake of the deadly far-right rally in Charlottesville in August 2017 that calls to regulate extremist online political expression became commonplace. Major figures in media and the Democratic Party demanded a crackdown on fascist-friendly online communities on platforms like Facebook, Twitter, Reddit, and YouTube. If digging all the way down to the gnarled roots of the alt-right appeared too big a task for Democrats, they could at least find ways to disrupt racist activists where they pushed their ideas and formed virtual communities of the like-minded. Over two years later, it’s de rigueur to take social media companies to task in the aftermath of each new episode of far-right violence.
On the one hand, the bulk of right-wing complaints about censorship are bogus; on the other hand, as violence, hate, and lies have moved into the conservative mainstream, the line between Silicon Valley pro-Democratic partisanship and upholding “neutral” community standards has blurred. While the majority of Republicans still disavow explicit white supremacy and misogyny in public, they are making the most of this ambiguity.
The new terrain of debate was on full display during two hearings held on back-to-back days on Capitol Hill in April. The House Judiciary Committee hearing on April 9 on “Hate Crimes and the Rise of White Nationalism,” spurred in part by the massacre in Christchurch, New Zealand, was followed by a Senate hearing on “Stifling Free Speech: Technological Censorship and the Public Discourse.” Each expressed the ways partisans of the new culture wars are attempting to mobilize anger at social media companies, whether for unleashing animal spirits previously kept in check by a more well-ordered public sphere or for ordering that public sphere with too firm a hand.
The House panel featured not only representatives from civil rights organizations but spokespeople for Google and Facebook. Democrats demanded that tech companies do more to stop the far right, while issuing empty threats about the potential consequences of inaction. Republicans, meanwhile, used the opportunity to link hateful exclusionary ideologies with “identity politics.” Candace Owens, the former Turning Point USA operative who has called for a “Blexit” of black people from the Democratic Party—and was cited as an influence by the Christchurch killer—suggested that the real reason Democrats were picking on social media companies was because they had broken liberals’ “monopoly on minds.” On the right, this belief that online platforms have freed political discourse from the bounds of liberal media institutions goes hand in hand with a critique of tech companies for trying to suppress those same ideas.
As the hearings proceeded, YouTube disabled comments on their livestream because of the volume of racist comments being posted. Republican Louie Gohmert suggested that the comment ban might “be another hate hoax.” The following day’s Senate Judiciary Committee hearings, chaired by Ted Cruz, continued on this conspiratorial theme. Republicans shared anecdote after anecdote about biased application of social media community guidelines for removing offensive content and suspending or banning those who violate these guidelines, such as an anti-abortion group whose advertisement featuring a fetus was removed from Twitter (that policy was later reversed). Republican Marsha Blackburn, who once had a campaign ad on Twitter temporarily blocked because she claimed to have “stopped the sale of baby body parts,” invoked the location of these companies’ headquarters—“California”—like it was ISIS-controlled territory. “The power of big tech,” Cruz said, “is something that William Randolph Hearst at the height of yellow journalism could not have imagined.”
Opinion polling last year revealed that large majorities of Republicans and Republican-leaning independents (85 percent according to Pew) believe that social media sites are intentionally censoring conservative political speech. But they also still generally like the services they get from platforms like Facebook and believe they’ve had a positive effect on society as a whole. Most of the more normie right wing, in other words, is happy to keep sharing conservative memes and news stories in algorithmically bounded information silos online, drifting along to the ebbs and flows of corporate policy.
Indeed, while previous battles in the culture war could at least make a claim on expressing the views of a large portion of the U.S. electorate—the religious right—the fight against big tech isn’t really about mass mobilization. It’s about the special operators: right-wing influencers and hucksters, and their hardcore online followers, who have made headway into the mainstream while monetizing their transgressive online brands—people, in other words, like Trump himself, for whom deplatforming is an existential threat. The online right, driven by aggrieved young men, seems to possess political clout beyond its relatively small size compared to the broader Republican electorate. They are vocal and mobilized, and lend an air of youth to an aging GOP base. If the culture wars of the second half of the twentieth century spoke to the moral sensibilities of an alleged majority, the battle for “free speech” online today is grounded in a minoritarian provocation: reinserting taboo, far-right ideas into normal conversation.
The relatively narrow constituency for the right-wing fight against Silicon Valley doesn’t mean that the sector isn’t worried about its consequences. Ever since the Tea Party Republican comeback in 2010, big tech has gone to great pains to counter the (correct) impression that it shares more culturally with pro-corporate Democrats than with Republican lawmakers. These efforts have intensified in the Trump era. In 2018 Facebook hired two groups to conduct audits of its content moderation policies—one led by a civil rights law firm investigating white supremacist groups on the platform, and another led by former GOP Senator Jon Kyl to ferret out the phantom problem of anti-conservative moderation. In congressional hearings, Facebook reps invoke Kyl’s name like it’s a totemic protection against the appearance of liberal bias. In June, Twitter CEO Jack Dorsey hosted dinners with conservative activists and media figures to attempt to ameliorate their complaints.
It isn’t surprising that social media companies would try to appease conservatives, but it’s a strategy bound to fail: pledging to investigate a non-existent problem will only produce more claims of bias when the investigations don’t turn up evidence of that nonexistent problem. In August, after holding a “Social Media Summit” to trumpet charges of the suppression of conservative ideas, Trump mocked the groveling figure of Google CEO Sundar Pichai: he “was in the Oval Office working very hard to explain how much he liked me, what a great job the Administration is doing, that Google was not involved with China’s military, that they didn’t help Crooked Hillary over me in the 2016 Election.” Trump didn’t want Pichai’s sincere promises to do better; he wanted to humiliate him while doubling down on the notion that Google would never give Republicans a fair shake.
No conservative politician other than Trump has mobilized anger against tech companies to greater effect than Missouri Senator Josh Hawley, the only elected official to speak at the recent National Conservatism Conference. (In his speech, he rejected “a political consensus that reflects the interests not of the American middle, but of a powerful upper class and their cosmopolitan priorities.”) In his first term in the Senate, Hawley has already supported five different bills targeting tech companies.
One of these bills focuses on an obscure obsession of the anti-tech right: Section 230 of the Communications Decency Act of 1996. That law makes a distinction between content publishers, who are legally liable for what they put out, and more neutral platforms, which bear little responsibility for the material that users post. While the law was written a decade before the rapid ascent of social media platforms, these companies have benefited immensely from the legal immunity it has granted them. Republicans have begun to argue, however, that moderation suggests a form of editorial control, even though the law specifically states that policing users does not cause a platform to lose its “safe harbor protection,” as Tarleton Gillespie points out in his 2018 book, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. These companies have “the right but not the responsibility,” as many of their service agreements read, to do whatever they want in regard to content moderation. Hawley’s proposal would force tech companies to convince the Federal Trade Commission that they are politically neutral—a kind of ideological stress test—in order to maintain their immunity under the law. Trump has, as of this writing, drafted an executive order that would follow these recommendations.
Worried commentators have pointed to the new configuration around National Conservatism as the beginnings of a coherent Trumpist fascism, in which businesses would be forced into an accord with labor under the ideological banner of bolstering the institutions of religion, the family, and the nation-state. But Hawley’s proposals, like those of others pushing the new anti-corporate nationalism, reveal the emptiness at the core of their vision, at least for now. They might lament the two-income trap’s detrimental effects on “the family” and the lack of national loyalties of the international bourgeoisie, but Hawley and his allies offer little besides a plan to force big business leaders to affirm the allegedly conservative opinions of middle America. Their nationalism is a program of virtual Sturm und Drang alongside continued demobilization and quiet in meatspace. That vision is, coincidentally, a good business model for the tech companies themselves.
For the moment, big tech appears less concerned about Democratic proposals surrounding online speech. Left of center, no one has made a more visible threat to tech capitalist interests than Elizabeth Warren, who is advocating for the break-up of Amazon, Google, Facebook, and Apple. Her plan, however, is not really about how tech companies empower the right; instead, it is one part of her broader agenda to rebalance the competitive forces of the market economy in the face of monopolization. Democrats more focused on the diffusion of false news stories and the amplification of far-right ideas have, by contrast, more or less argued for a continuation of the neoliberal regime of corporate self-regulation. Even one of the bolder international agendas in this vein, pushed by New Zealand Prime Minister Jacinda Ardern after the Christchurch massacre, consists mostly of calling for companies in partnership with governments to develop better “standards” and “frameworks” and to make content moderation decisions with more transparency. This agenda, backed by dozens of governments in Europe and Asia as well as Canada, also has the full support of Amazon, Dailymotion, Google, and Microsoft.
Tech companies have become adept at responding to moments of public outcry from liberals and the left. After Charlottesville, Google, PayPal, and GoDaddy cut off services to a number of white supremacist organizations. At other moments of coordinated harassment tied to real-world violence, different platforms have banned figures like Milo Yiannopoulos, Alex Jones, and Paul Joseph Watson. They constantly update their service agreements and so-called community guidelines in an effort to keep up with new developments on the right. (It took Facebook until earlier this year to determine that “white nationalists” could be banned for the same reasons as “white supremacists.”) And their teams of content moderators are growing by the year. Republican officials, for the most part, stay silent on the specifics of these developments, while making noise about censorship more broadly. Tellingly, for all the fanfare around Trump’s July summit on censorship on social media, none of his invited guests had actually been suspended or banned by tech companies.
For the most part, the goals of safer and more sanitized online speech are shared by Democrats, looking to show they’re doing their part to fight white supremacy, and tech executives, who want the broadest market saturation possible. If they can hang onto the alt-right shitlords, too, all the better for the bottom line. But it’s easy enough to periodically cut off the most cancerous flesh, and then allow it to metastasize once again. Keep as big a mass as possible, as long as it doesn’t completely kill the host.
In Custodians of the Internet, Gillespie argues that the issue of moderating content has helped define public debate over the internet for decades—from the now-quaint panic over pornography to the attempt to stamp out piracy to the current battles around online harassment, fraudulent disinformation, and the miasma of hate speech in which the far right has festered and bloomed. Indeed, Gillespie writes, “the work of policing all this caustic content and abuse haunts what [tech companies] think their platforms are and what they must accomplish.” Short of repressive measures on the level of the Chinese state’s online policing apparatus, a series of rolling battles with the mods—quickly updating policies with every scandal, every mass shooting, every congressional inquiry—will shape the online culture wars for years to come.
At the House hearing on white supremacy in April, Alexandria Walden, a representative for Google, stated, “Our community guidelines are politically neutral.” As the mainstream of U.S. conservatism moves further and further right, the tech sector will likely continue to cling to this claim even as it becomes increasingly detached from reality. Tech companies only have an interest in deciding what’s acceptable in the digital public sphere insofar as it contributes to universal usage. They won’t find it easy to set a priori conditions for the public sphere when the boundaries of legitimate discourse are as deeply contested as they are today, but they can find ways to do just enough to avoid more serious problems with state officials.
We should be worried about allowing tech companies to determine what is acceptable speech. Already, efforts to stop the spread of “false” and “negative” information have hurt left-wing media. Black Lives Matter groups and activists have been banned under anti-discrimination statutes; drag queens have been kicked off Facebook after right-wing trolls reported them for using “fake” identities. Sometimes these moves are reversed under activist pressure, but these conflicts reveal the contested political backdrop to “neutral” moderation. The rules of moderation are set by a small group whose members, Gillespie writes, “have quickly become familiar with one another through professional conferences, changes in employment, and informal contact.” The problem isn’t their liberal bias but their professional allegiance to an industry only incidentally concerned about any sort of public interest.
Twitter, Facebook, and Google are enormously powerful, barely regulated companies. The fact that right-wing violence flows through the information channels they own is just one terrible symptom of that power. Better corporate self-regulation is pretty obviously insufficient to the tasks at hand, however those tasks are defined. The companies deploy the language of inclusive community when convenient, but will only block far-right activity when it doesn’t threaten profits. It’s hard to imagine a regime of state regulation under the U.S. legal system that would be effective either; with such a huge volume of content posted across such extensive networks, there are too many cracks in these systems to stop the bad things all the time, even if regulators agreed on what sort of speech is unacceptable. And these days, Republicans are making more credible threats of legal harassment than Democrats. Antitrust, meanwhile, would have little impact on stanching the toxic seepage online; if anything, consolidated online space is more conducive to regulation than a more fractured and competitive digital ecosystem.
For some on the left, the critique of antitrust for big tech has led to a more utopian goal: public and/or worker ownership of social media platforms, along with search engines and video hosting sites, on the model of utilities. Companies like Google have become monopolies whose functions have become essential to social life; why should they be run in the interests of shareholders rather than in the service of the public? There are creative benefits to thinking in this more radical direction, but we must also face the shortcomings of a simple transfer of existing online infrastructure into the hands of the public, however defined.
There is no better place to begin confronting these limits than in the brutalizing work that goes into constituting the online public sphere. Some sites, like Reddit, rely primarily on volunteer mods to police their online communities, with interference from the top only in extreme circumstances (like when the heads of the company shuttered some of the most toxically racist subreddits after Charlottesville). But for the biggest platforms, moderation relies on a subcontracted, low-wage, disposable labor force working in combination with artificial intelligence mechanisms. These workers, whose activities big companies prefer to keep as invisible as possible, spend their days looking at violent and disturbing content so that the rest of us don’t have to. As Casey Newton wrote about one such Facebook subcontractor in Arizona earlier this year in the Verge,
It is an environment where workers cope by telling dark jokes about committing suicide, then smoke weed during breaks to numb their emotions. It’s a place where employees can be fired for making just a few errors a week—and where those who remain live in fear of the former colleagues who return seeking vengeance. . . . it’s a place where the conspiracy videos and memes that they see each day gradually lead them to embrace fringe views.
In other words, it’s a harrowing example of the experience of terrible work in our low-unemployment economy—and the alternatingly soothing and aggravating effects of checking out online to cope. It’s a microcosm of the conditions that, to the puzzlement of an insulated ruling class, have contributed to an angry rejection of the political status quo alongside epidemic levels of mental illness. Under current circumstances, there is no way to make the internet a bearable experience for the majority without sacrificing the well-being of these workers—our canaries in the digital coal mine.
In his bracing new book, The Twittering Machine, Richard Seymour writes that if social media “confronts us with a string of calamities—addiction, depression, ‘fake news,’ trolls, online mobs, alt-right subcultures—it is only exploiting and magnifying problems that are already socially pervasive.” Social media companies wield immense power, but they operate on scales large and small, building on and deepening an already intense anomie in our culture. “They work,” Seymour writes, “because competitive individualism was already culturally and politically incentivized, and the rise of mass celebrity ecologies was already under way. And they work, in part, because they address legitimate wants: they offer opportunities for recognition, for creative self-styling, for interruptions to monotony, for reverie or thinking-as-leisure-time.”
Tech corporations, with our willing or unwilling assistance, have changed the texture of political life everywhere they reach—accelerating the pace of events, shortening the lifespan of attention to significant problems, creating new collectivities, turning discrete periods of activity into a constant stream, upending establishment control over public discourse, upending the notion of public discourse itself. These rapid changes now coincide with a Republican administration that has brought longer-simmering problems to the surface of American life: declining legitimacy of institutional pillars, corruption, stagnation, paralysis, authoritarianism, imperialism, ecological catastrophe.
No doubt, we should spend less time worrying about how these problems manifest in online political discourse than confronting the problems themselves. But we also cannot avoid the way that Silicon Valley has changed the political terrain. To find our footing, we need to work in the here and now to cultivate a less competitive and cruel culture, a less punitive and narcissistic sensibility, to find more forgiveness and warmth for the majority who deserve better than what they are getting. Already, against great odds set by tech companies themselves, many people are trying to find better ways of relating to one another—online and off—even as they train their sights on the powerful interests arrayed against human thriving.
We cannot go backward, however much the right pines for some organic national community or liberals wax nostalgic about a more civil political culture, neither of which ever really existed at all. Instead, we will have to move ahead through creative and critical engagement with each other in the worlds that big tech has made—or in their ruins.
Nick Serpe is a senior editor at Dissent.