
Free speech supremacy and the arrogance of the tech bros.
As polarization and misinformation have ramped up over the last half decade, social media has become a focus for academics, political commentators, and politicians alike. No longer the darlings of Silicon Valley, social media giants like Facebook (including WhatsApp and Instagram), Twitter, and YouTube have come under pressure for what they allow to proliferate on their platforms. Pernicious information has spread like wildfire, and the engagement algorithms that power these behemoth platforms have been gamed to spread conspiracy, anger, and resentment.
The companies, long reluctant to accept their role as publishers, cast themselves as mere conduits for exchange, not liable for what happens on their platforms. That began to change in the late 2010s. The corrosive presidency of Donald Trump, built on his prodigious Twitter usage, brought anger over the platforms’ lack of oversight and responsibility to a fever pitch. Russian disinformation campaigns in the 2016 election had already placed the platforms under a cloud by showing how easily they could be used to sway public opinion under false pretenses.
In early 2019, Facebook took its first steps towards accountability by banning far-right provocateurs Alex Jones, Milo Yiannopoulos, Laura Loomer, and Paul Joseph Watson. But the company reportedly only felt pressure to do so after a gunman used the platform to livestream his terror attack on two mosques in Christchurch, New Zealand. Facebook also felt countervailing pressure not to restrict right-wing voices, and reportedly circumvented its own established policies out of fear of accelerating pressure from congressional Republicans and the Trump White House.
The Covid-19 crisis placed further pressure on the platforms. Covid denialism and conspiracy theories cropped up, using the platforms to spread their potentially deadly messages. This eventually led Facebook, Twitter, and YouTube to label misleading information or direct users to information from the World Health Organization or the Centers for Disease Control and Prevention.
Finally, in the waning months of 2020, as the American presidential election drew to a close, Twitter and Facebook struck out at falsehoods Trump spread on their platforms, taking down several misleading posts and labeling others as disputed information. Ultimately, the president’s misinformation that the election had been stolen spread across the platforms and led to an assault on the US Capitol as Congress certified the Electoral College results confirming Joe Biden as the next president. Trump had riled up the crowd only hours before at a rally several blocks from the Capitol and refused to defuse the situation after it began, infamously sending out a video well after the assault started telling his followers that they should go home and that they were “very special,” while doubling down on the false claim that the election had been stolen from him. Twitter reacted by shutting down interaction with the video and labeling the information as disputed.
In the aftermath of the January 6th insurrection at the Capitol, both Twitter and Facebook temporarily removed President Trump. Twitter decided the deplatforming was permanent, while Facebook kicked the decision to its newly formed Oversight Board. While the insurrection forced more decisive action from the established players of social media, it also brought additional scrutiny to smaller platforms that had played possibly even larger roles.
The social media platform Parler came under particular focus. Parler had become known as the conservative Twitter, offering similar short-form posts but without the occasional editorializing. It had attracted Republican politicians, media figures, and conspiracy theorists because it offered an entirely censorship-free experience; Parler was formed to be a beacon of ‘free speech.’ The insurrection, however, destabilized the platform: its hosting provider, Amazon Web Services, dropped Parler, knocking it offline temporarily while it found another provider.
Parler and platforms like it purport to allow true, unfettered free speech. As a result, these platforms almost always become hotbeds for hate speech, Islamophobia, anti-Semitism, racism, neo-Nazism, and far-right conspiracy theories. The moniker of ‘free speech’ itself has become a rallying cry for conservatives who don’t like it when their most extreme views are considered harmful or potentially dangerous. This is difficult ground to tread. The right to free speech is enshrined in the very First Amendment to the US Constitution and is viewed as one of the most fundamental rights in a democracy; nearly every Western democracy has adopted some form of speech protection. However, freedom of speech is not absolute, and the idea that the right to speak means speech can carry no consequences is misguided.
As many have pointed out, the freedom of speech governs the government’s relationship to speech. It has nothing to do with individual companies and their decisions. Twitter, like the local pub on the corner, can choose to kick out a rowdy patron for any reason it sees fit, as long as it’s not due to a protected status like race or religion. Conservatives increasingly feel under threat for their beliefs and seem to conflate those beliefs with a protected status. This leads to a ‘free speech’ supremacy in which all speech should not only be allowed but should carry no ramifications.
The free speech many conservatives claim is under threat is oftentimes speech that others find offensive or hateful. That is fine under the law; they won’t be charged with an offense for making such statements. But they can be judged and shunned for such speech. The world many conservatives dream of is one of free speech absolutism, wherein they are allowed to say whatever they like and no one can judge them for it.
This is obviously impossible: in order to create a world of free speech supremacy, some speech will inherently be judged permissible while other speech is not. If hate speech is judged acceptable in the world of free speech supremacy, then to speak out against that speech, or to shun that speaker, would itself be an assault on free speech. This obviously classes some speech as more permissible than other speech, and therefore breaks the idea of free speech supremacy. It is why free speech platforms invariably become platforms for hate and misinformation: they offer a home for speech shunned elsewhere, while the vast majority of speech carries on perfectly well on more mainstream platforms.
These platforms are going to keep appearing as the larger platforms slowly take more responsibility for the content they allow to spread. They are almost exclusively started by young white men who think their knowledge of coding elevates their thinking. They spout high-minded-sounding drivel about free speech and taking back control from tech giants, but they do little to actually encourage an environment of positive or useful societal discourse on their platforms. It's a facet of the digital age that is quickly becoming tired: a talented coder believes that because they can build something, they must, and their arrogance over an economically useful skill convinces them they are superior moral and ethical thinkers as well.
Free speech is complicated, free speech is dangerous, free speech is necessary. We’ve not yet figured out how to deal with free speech in the digital age. We’re only now in the infancy of these new media, and it will take time to sort out how best to handle free speech in the digital realm. This isn’t the first technological advancement to shake up the dissemination of speech or information, and it certainly won’t be the last. How citizens and governments sort out how speech is handled online will determine whether online spaces can be used to achieve even greater social good or become cesspools of convoluted ‘free speech’ supremacy.

