Censorship on Social Media

Social media censorship has become an increasingly prominent issue in 2021. It can also be a confusing topic to tackle, given that the rules and guidelines seem to apply to some posts but not to others, seemingly depending on how many followers a user has. What difference does a person’s level of celebrity make to what is and isn’t considered a breach of terms of service and community guidelines on social media platforms?

Here, I’m going to try to make some sense of censorship on social media. I’m also going to try to answer four questions:
1. What is the responsibility of social media platforms when it comes to moderating misinformation and incendiary viewpoints?
2. What should be considered taking censorship too far on social media?
3. Where is the line between what and who should and shouldn’t be censored?
4. What happens when social media platforms begin to cherry-pick what they believe does and doesn’t violate terms of service and community guidelines?

If you are an individual content creator on social media, the terms of service are probably pretty familiar to you. Generally speaking, depending on the platform, there are restrictions on what you can post in terms of hate speech, profanity, misinformation, bullying, harassment, and nudity or sexually explicit content. Some platforms are more lenient than others, some have close to no rules (I’m looking at you, Reddit), and some have very strict ones.

YouTube generally has the strictest rules around these practices, and it has caused its fair share of controversy over the years by interfering with the earning capacity of the individual content creators who drive traffic to its platform, but more on that later.

Traditionally, social media companies have been allowed to self-govern what they permit on their platforms. Some governments have stepped in to create additional guidelines for platforms to follow, but by and large, these companies are private entities free to choose what they deem acceptable and unacceptable content.

It is common knowledge that Donald Trump, former President of the United States, was a highly active user of social media, specifically Twitter. During his time in office from 2017 to 2021, he tweeted around 25,000 times from his @realDonaldTrump account. Many of these tweets circulated internationally, and they were often subject to scrutiny or turned into memes at his expense. As the COVID-19 pandemic worsened in America, Twitter began moderating his account, hiding some tweets and adding fact-check labels to others, due to the misinformation he was promoting.

In the lead-up to the 2020 US election, he tweeted several times claiming that postal voting would lead to electoral fraud. This was false. Following his election loss, Donald Trump continuously undermined the results of the election, still on the basis that they were fraudulent due to individuals casting multiple votes or intentional miscounting of votes. His tweets were also found to have influenced the storming of the Capitol in January 2021, which resulted in 140 injured and 5 dead. Shortly after, Donald Trump was permanently banned from Twitter and suspended from Facebook and Instagram. There was a 73% decrease in election misinformation in the week after the ban.

Donald Trump’s Twitter account banned (Photo by John Cameron on Unsplash)

While there is some disagreement online about whether or not Donald Trump should have faced bans from social media, it’s clear that the outcome was less misinformation and less encouragement of violence around issues of great importance to society. But what precedent has this set? Is it up to individual social media outlets to decide who does and doesn’t get a voice?

In the past, there have been petitions with hundreds of thousands of signatures imploring social media platforms to ban or delete the accounts of specific users, generally in response to controversy surrounding something the individual posted. In the cases I have seen, the accounts were not banned, which I believe was because platforms did not want to completely censor individual creators. Instead, many videos and posts were deleted on the basis of violating terms of service. But now that the precedent has been set, what is to stop social media giants from extinguishing the voices of users whose opinions differ from the mainstream?

It’s clear that differing opinions can create huge debate in online spaces, sometimes spilling over into real life. Is it so far-fetched that, in a world of increasing control over the masses, social media might extinguish these debates in an attempt to discourage real violence or law-breaking, by banning users whose opinions don’t follow the mainstream?

Let me know your thoughts in the comments!
