In September of 2021, Twitter began testing a new feature for users called Safety Mode. The technology is an attempt to cut down on cyberbullying and general toxicity, problems that have long been perceived as major issues on the platform. An “internet troll,” as internet slang has named it, is a person who posts inflammatory and off-putting messages with the intent to provoke readers and other internet users into some sort of emotional response, or to manipulate a user’s perception of a topic. These posts can be immature, sexually explicit, or simply off-topic spam, among other things. With many people affected by these “internet trolls” daily, censorship has been a hotly debated issue for Twitter and many other companies in the social media industry for years. As time has gone on, the line between free speech spaces and private corporate rights has become murkier.
Enter Twitter Safety Mode. The technology is relatively simple. When the algorithm picks up repeated messages (spam) or detects insulting and harmful language, Twitter temporarily blocks the offending user from the “victim” user for seven days. The block comes directly from Twitter, and the offender’s existing replies are moved to the bottom of the reply list.
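Twitter has not published how Safety Mode actually works under the hood, but the behavior described above can be sketched in a few lines of code. The keyword list, the spam threshold, and the function names below are all hypothetical, purely for illustration; this is a toy model of the logic, not Twitter’s implementation.

```python
from collections import Counter
from datetime import datetime, timedelta

# Illustrative placeholders -- not Twitter's real lists or thresholds.
HARMFUL_WORDS = {"idiot", "loser"}
SPAM_THRESHOLD = 3   # identical replies before a user is flagged as spamming
BLOCK_DAYS = 7       # Safety Mode blocks last seven days

def is_flagged(replies: list[str]) -> bool:
    """Flag a user whose replies are repetitive (spam) or contain harmful language."""
    # Repeated identical messages count as spam.
    counts = Counter(replies)
    if any(n >= SPAM_THRESHOLD for n in counts.values()):
        return True
    # Otherwise, check each reply for words on the harmful-language list.
    return any(word in reply.lower() for reply in replies for word in HARMFUL_WORDS)

def block_until(detected_at: datetime) -> datetime:
    """A flagged user is blocked for seven days from the moment of detection."""
    return detected_at + timedelta(days=BLOCK_DAYS)
```

The real system presumably relies on machine-learned language models rather than a fixed word list, but the two triggers (repetition and harmful language) and the fixed seven-day window mirror the feature as described.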
On one hand, some expect these social media companies to act as a sort of pseudo-governmental authority that ardently follows the tenets of free speech. On the other, when instances of abuse or violence occur on or through their sites, questions immediately arise about how much of the responsibility for preventing these tragedies should lie at their feet. An example of this is the case of Steve Stephens, who shot and killed a man in Cleveland and then stayed on the run from the police for two days. The case would not seem all that out of the ordinary, except that Stephens proceeded to “Facebook-Live” his experience during those two days to thousands of viewers. This is just one case, but to add to the complexity of censorship, the examples illustrating these concerns vary widely in moral ambiguity.
Some instances are relatively direct. In another unfortunate instance, in March of 2019 a gunman entered a mosque in Christchurch, New Zealand and took the lives of 50 people. It was later discovered that the gunman had posted his “manifesto” on the controversial social media site 8chan prior to the attack. This was not the first time the site had seen this type of extreme content; it was home to a surprising amount of material linked to mass shootings around the world. It did not take long for world governments to crack down on 8chan, which still operates today under a new name, 8kun, with a user base significantly smaller than before the shooting. The argument against sites like these is that the server as a whole provides a home for social toxicity, which contributes to domestic terrorism, extremist beliefs, and racist propaganda.
While you may read this and believe that sites such as these should be banned, one must look at another site in this exact same lane. Reddit is known to most of us as a mainstream social media site for discussion threads on topics ranging from television shows to soup recipes. However, Reddit’s beginnings were very similar, catering to the underbelly of society. While the site arguably still fosters some toxic threads, it has become so diversified that proponents of censorship find it increasingly difficult to make a clear argument based on toxic culture. Last year, Reddit had gained so much popularity that it ran a Super Bowl advertisement, undoubtedly attracting thousands more users to partake in the range of discussions on the site. For more respected and established sites such as Reddit, Twitter, and YouTube, which also house a much greater span of opinion, the argument shifts and begins drifting toward authoritarianism.
Based on the response to 8chan, it is clear that people favor some level of censorship on their internet – but where do we draw the line? If I were to call content “extreme,” a large range of ideas would undeniably drift into your mind. So once again we ask the question: where do we draw that line? In recent years, it has become harder to clearly define the term “extreme,” and the definitions pushed of late have been greatly politicized. Conservative media producers such as Ben Shapiro and Steven Crowder have had videos and tweets flagged and taken down for espousing views that, in the eyes of the respective sites, amounted to harmful and dangerous rhetoric. Executives at these companies have often used the term “hate speech” to justify this censorship, especially surrounding transgender issues, race, and Islamophobia. The right-wing media, including the score of conservative internet personalities scattered across the web, have spoken out against this censorship, arguing that these sites should be bastions of free speech and that they are serving the left-wing mainstream media by silencing legitimate conservative opinions under the umbrella term “hate speech.” The capstone of these perceived transgressions was former President Donald Trump’s ban from Twitter, and his subsequent creation of his own social media company, leading some to question what this trend of censorship implies for future generations.
But the fact remains that these social media companies are private entities and do not necessarily have to abide by free speech guidelines. Twitter’s rollout of Safety Mode, Instagram’s sensitive-post filters, and the alerts on every site flagging possible COVID-19 fake news all remind the consumer that the mega media companies will make sure their bottom line is not tarnished by any form of controversial political extremism.
Contact Mark at firstname.lastname@example.org