Moderation vs. Censorship: The Media's Free Speech Dilemma

Sisi Li

"Anyone can post anything."

One of social media's most distinctive traits is that anyone can post to the internet for the world to see. That openness has been the basis for the sprawling internet culture that has exploded over the last two decades. But this fundamental core of social media cuts both ways. On one hand, free speech is a human right, and its place online has brought exposure to crucial topics that would otherwise have gone unseen. On the other hand, that same openness has fueled the spread of misinformation and bigotry.

“For all the good social media brings, it has also created unrivalled opportunities for the resentful, the bitter and the frankly sociopathic to reach those they couldn’t previously touch. Children have been groomed for sexual exploitation, terrorists radicalised, the gullible sucked into conspiracy theories, teenage girls coached to self-harm, and hate normalised on platforms that have faced too little by way of consequence.” - Gabby Hinsliff, The Guardian

Of course, that is not to say content moderation on the web is nonexistent. Popular social media sites like Instagram, Twitter, and Facebook all have guidelines and rules for their users to follow. So why does misinformation and hate speech still get through?

The truth is, there is a thin line when it comes to moderating content on such large platforms: on one side lies removing genuine hate speech and false information, and on the other lies selectively silencing whatever a company deems "wrong." Big companies like Facebook and Google hold enormous power over their more than one billion users, and that number is only climbing (Machlis). There is no one to moderate the moderators: who is to say Instagram won't one day decide to ban a certain celebrity or hashtag? What will social media users do then?

In certain cases, governments have even hidden behind the argument that they were "moderating" online content for their citizens in order to take down unwanted information. The Indian government was reported to be "using the premise of misinformation to overreach and suppress criticism of the administration's handling of the pandemic" by requesting that companies like Twitter remove posts criticizing India's COVID-19 measures (Ghaffary).

In China, the government uses the guise of "regulation" to carry out country-wide censorship daily, completely blocking its citizens' access to networks like Google, Facebook, and YouTube. Nicknamed China's "Great Firewall," it is one of the most infamous examples of mass censorship online.

But those are the extremes. Moderation is otherwise a useful tool for managing genuine misinformation. Take the 2020 US presidential election and COVID-19 as major examples: Twitter attached warning labels to tweets that made unreliable claims about how the election was panning out or about the status of the pandemic.

So what does free speech really mean, in terms of the media?

This is where the unique power dynamic between the media and the people comes in. On one hand, powerful media companies have the responsibility, and the right, to moderate content they don't want on their platforms. On the other hand, social media is the people's voice, and users deserve to post without worrying that their content could be taken down without warning. Social media's uniqueness lies in gathering so many different people onto the same platforms, and while some terrible people exist on those platforms, they are often only a small minority. How long before the media starts blurring the line between a merely different opinion and a harmful one?

In short, it's a lose-lose situation: big media companies will be criticized if they over-police content, and criticized if they leave moderation as it is.

But the risk of promoting misinformation and hate speech outweighs the value of unrestricted expression online: if we want social media to remain a safe and trustworthy place, we need to keep adapting and enforcing content guidelines. So while people should still be able to post what they want, it is the media's responsibility to evaluate content fairly, without bias and without tipping into dangerous "censorship" territory.

And over the last few years, several platforms have made significant efforts, with "Tumblr’s recent ban on nudity, Twitter’s continued back-and-forth on suspending and banning extremist users, Facebook’s recent efforts to curtail misleading ads" (Coaston).

But perhaps the root of the problem lies not in social media but in ourselves. If the general public were more aware of and educated about what they put on the internet, life would be much easier for the media world. Alas, that is an issue of its own. For now, the effort to moderate and supervise the online world must continue, to keep it safe and appropriate for all users.