Four questions on Parler, Trump and Big Tech

So much of social life and public discourse now occurs in the virtual realm, raising profound questions about law, ethics and truth.

Trump supporters attend a rally with the president, hours before members of the crowd stormed the U.S. Capitol, Jan. 6, 2021. For weeks, far-right users hinted openly in widely shared posts on social media that chaos would erupt at the U.S. Capitol while Congress convened to certify the election results. (John Minchillo/AP)

It feels weird to be rooting for Amazon.

I’m not talking about the news earlier this month that Amazon, along with Microsoft and some other large corporate players, will reconsider political contributions to members of Congress who voted to overturn the presidential election results. When I heard that, my gut reaction was closer to disgust than applause. Really?! They were giving money to those creeps? But of course they were. My gut was just a little out of tune with political reality.

No, I’m talking about the Parler versus Big Tech drama that unfolded in the days after Jan. 6. First, Twitter and Facebook banned Donald Trump from their platforms, citing his role in whipping up violence at the U.S. Capitol. Amid speculation that he would move over to Parler, where far-right conspiracy theories and brutal political fantasies thrive, Apple and Google struck the app from their app stores. Finally, on Jan. 9, in a coup de grâce that took the platform entirely offline, Amazon booted Parler from its web-hosting services.

It felt good to watch Big Tech squish Parler like a bug.

It also felt good to listen to Parler’s lawyer stumblingly, unconvincingly argue to a federal judge in Seattle that Amazon had breached its contract, and that the platform would suffer irreparable harm unless its Amazon Web Services hosting was swiftly restored. I can’t imagine that bid will succeed.

And yet. As many commentators have pointed out, this drama also showcased the unbridled power of Big Tech. Social media platforms like Facebook, YouTube, Twitter and Parler, and the digital infrastructure that supports them, increasingly determine — well, who gets a platform. In this instance, many progressives like how the tech giants used that power. But what about when we don’t? The Parler episode raises big questions about free speech, public and private spheres, what these platforms have become and the stake all of us have in how they are run.

Like it or not, a growing portion of social life and public discourse takes place in the virtual realm. We spend hours each day on social media platforms, where people share news articles, images and videos, express thoughts trivial and profound, announce personal news, advance political opinions, blow off steam, ask questions, make appeals, argue, complain, criticize, explain, hurl insults and engage in every other form of human speech imaginable. Private companies advertise on these platforms. Organizations of all sorts, from government agencies to nonprofits, use them to communicate with the public. For politicians across the political spectrum, they are megaphones. For many of us, they’re our primary conduit of information. Social media platforms have become public squares in the vastest, most cacophonous, most consequential sense.

This raises four questions:

First, what responsibility do platforms have to suppress speech that could be judged illegal? It’s unclear whether violent threats made on Parler meet the narrow definitions of incitement and “true threats” that lie outside First Amendment protections. But it certainly seems possible. When users on a social media site engage in unprotected speech — other examples include defamation, copyright infringement and some kinds of pornography — should the platform carry any risk of liability?

Second, what is the platforms’ role in maintaining standards of civil discourse? In prohibiting and removing content that, while not illegal, may still be deemed harmful or objectionable? Big Tech’s Trump purge didn’t depend on whether he and his hardcore followers incited crime in the legal sense; they clearly enough violated Facebook’s and Twitter’s terms of use, and Parler (lawsuit notwithstanding) clearly overstepped Amazon’s policies, which prohibit content that “violates the rights of others, or that may be harmful to others.” Of course, phrases like these are vague and can be selectively applied. Is it appropriate and sufficient to leave such decisions entirely up to the market — each company writes its own rules, enforces them as it sees fit, and users can leave if they don’t like it — or is there a role for public regulation?

Third, and conversely, to what extent do people have a right to speak and be heard on social media? One can imagine a future in which Amazon acquires Twitter and Facebook and then, suddenly, posts about Amazon workers’ grievances and efforts to organize are quietly demoted by algorithm — they’re not deleted, it’s just that people rarely see them. Would Amazon do this? I don’t know. Would it be legal? I don’t see why not. Should social media platforms be required to offer “fair” access, and what does this mean? What degree of transparency should be expected regarding the platforms’ inner workings, and what recourse should users have who feel they’ve been treated unfairly?

Finally, there’s the little matter of truth. These sites are where many people find their news and construct their picture of the world. What responsibility do the platforms have to purge or flag misinformation and disinformation? News stories inevitably embed facts (and falsehoods) in larger, nonneutral narratives, making it hard to insulate fact-checking from politics — at least in today’s highly charged environment, where one person’s lie is another’s fervent conviction.

Since 1996, all these questions have been governed by Section 230 of the Communications Decency Act, which was designed to encourage “interactive computer services” to figure out sensible ways of blocking offensive material in the brave new world of online message boards. It did this by shielding them from liability both for overfiltering — by protecting actions taken in good faith to eliminate objectionable content — and for the inevitable underfiltering, by clarifying that the service provider should not be treated as the publisher or speaker of information posted by users.

Twenty-five years later, Section 230 sits at the center of a maelstrom of controversy. Many Republicans who feel the major platforms have a liberal bias think the law gives them too much license to ban users and remove or flag content in opaque and politically motivated ways. Many Democrats believe it provides too much cover for sites that tolerate or even encourage harmful and illegal behavior. Some want Section 230 repealed outright — a move that would likely bury large swaths of the Internet beneath an avalanche of liability — while others want to amend it. A few like it just as it is.

The expulsion of a sitting president from social media, and of an entire social media platform from the internet, raises the stakes of this conversation considerably. One side thinks Big Tech finally used its powers for good. The other side thinks it used them for evil. But both sides agree that Big Tech has altogether too much power over our public discourse. This year, expect to see some moves to trim that power down to size.

Katie Wilson, a contributing columnist, is the General Secretary of the Transit Riders Union.