In certain cases, governments have even hidden behind the argument that they were “moderating” online content for their citizens in order to take down unwanted information. The Indian government was reported to be “using the premise of misinformation to overreach and suppress criticism of the administration’s handling of the pandemic” by requesting that companies like Twitter remove posts criticizing India's COVID-19 measures (Ghaffary).
In China, the government uses the guise of “regulation” to perform countrywide acts of censorship daily, completely blocking its citizens' access to platforms like Google, Facebook, and YouTube. Nicknamed China's "Great Firewall", this system is one of the most infamous examples of mass censorship online.
But of course, those are the extremes. Moderating content is otherwise a useful practice when it comes to managing genuine misinformation. Take the 2020 US presidential election and COVID-19 as major examples: Twitter applied multiple warning labels to alert people to unreliable claims in tweets about how the election was panning out and about the status of the pandemic.
But the risk of amplifying misinformation and hate speech outweighs the value of unrestricted expression online: if we want social media to remain a safe and trustworthy place, we need to keep adapting and enforcing online content guidelines. So while people should still be able to post what they want, it is the platforms' responsibility to evaluate content fairly and without bias, and without tipping over into dangerous 'censorship' territory.
And in just the last few years, several platforms have made significant efforts, with "Tumblr’s recent ban on nudity, Twitter’s continued back-and-forth on suspending and banning extremist users, Facebook’s recent efforts to curtail misleading ads" (Coaston).