Social media is a modern Wonderland — somewhere you might venture for a quick, fun visit, but then get sucked down the rabbit hole and can’t escape. With just one click you enter a realm with endless puppy videos, inundations of family updates and loads of controversial content, including hate speech that often targets minorities.
The end of September brought an uproar when Ye, formerly known as Kanye West, took to Instagram and Twitter to make antisemitic remarks. After posting a tweet saying he intended to go “death con 3 on JEWISH PEOPLE,” West’s Twitter account was placed in read-only mode, as was his Instagram.
West is not the first public figure to be punished by a social media platform. The past decade has seen the removal of Alex Jones, Representative Marjorie Taylor Greene and former President Donald Trump, among others, for violating social platform policies. Hateful, threatening speech keeps others from feeling safe to express their opinions, defeating a platform’s purpose of providing a space where everyone can share their points of view. Removing violators from the site may seem like the natural solution to this problem, but there is also great danger in pushing speakers off of mainstream platforms and into smaller corners of the internet.
In its hateful conduct policy, Twitter says that it is dedicated “to combating abuse motivated by hatred, prejudice or intolerance, particularly abuse that seeks to silence the voices of those who have been historically marginalized.”
Instead of silencing policy violators, the removal of controversial public figures from mainstream social media often results in them turning to other platforms that align with their ideology. Smaller social platforms are more likely to create echo chamber situations where one belief or ideology is reinforced by repetition.
Not long after being restricted on Instagram and Twitter, West started negotiations to purchase Parler, a predominantly conservative social media site. Similarly, Trump established Truth Social, a platform committed to encouraging free speech with no discrimination based on ideology, after he was banned from Twitter. While the social media giants give users a far larger audience than other sites, that does not mean these smaller acquisitions and startups are without consequence.
There is nothing wrong with building an online community among people with similar interests. The algorithms of mainstream social media feed users content that they express interest in so they can connect with other like-minded users, but when an entire platform’s uniting factor is hateful speech or ideology, these small social sites can become dangerous. With hateful rhetoric adopted as the norm, platforms have the power to corrupt their users.
Gab, a social media platform similar to Twitter, is known for harboring white supremacists and neo-Nazis. Robert Bowers, the lead suspect in the October 2018 Pittsburgh synagogue shooting, was frequently active on the platform, even posting the day of the attack.
Similarly, a report published by the Office of the New York State Attorney General found that the alleged shooter of the Buffalo supermarket attack on May 14 frequented subreddits, Discord and 4chan, an anonymous message board filled with hate speech. In these small online environments, the shooter shared his ideas for a racially motivated attack and received advice on how to carry it out. His ideology matched the norm, so other users neither contested nor reported his violent comments.
The shooter admitted to authorities that social media had been his biggest impetus. “There was little to no influence on my personal beliefs by people I met in person,” he said.
With influential figures moving to these smaller platforms, there is the risk of their hate speech having greatly detrimental impacts. If users never challenge or report them, then they will continue spewing the same harmful messages without repercussions. They will be further idolized by those with warped ideologies and their followers will continue to perpetuate their narrative. Even if no physical harm is carried out, the risk of keeping hate speech in a vacuum should be enough to scare us into taking some form of action.
But what exactly can be done?
Government regulation of social media is a slippery slope. Beyond general Securities and Exchange Commission rules, the government would slip into totalitarianism and violate free market principles if it limited who could purchase or start social media platforms.
Sam Terilli, chair of the Department of Journalism and Media Management at UM, explained the government’s role in regulating social media.
“If the US government were to try to cut off Kanye, a former president, Parler, Gab or anyone, they’ll just prop up somewhere else. You’ll never be able to truly choke that off unless you’re willing to go totalitarian,” Terilli said. “Government should stay out of licensing on the basis of content or the point of view, but stay deeply involved in the general regulatory business, keeping the same rules applying to everybody — NYT, Facebook, NBC, Parler, Gab, whoever.”
Since there is no clear way for government regulation to help stop the spread of echo chambers, the responsibility falls on the social media companies themselves. Perhaps a change in the algorithms of mainstream sites would stop the spread of conspiracy theories and hate speech, but implementing those changes risks undermining what makes social media useful in the first place. For example, if a baker joins a platform to connect with other bakers, but the algorithm won’t pair them with similar users, then what is the point?
The alternative is that the social media giants loosen their policies to keep influential figures on mainstream sites. At least this way, other users could challenge controversial claims so they are not reinforced as norms. Perhaps this is Elon Musk’s logic in intending to welcome banned public figures back onto Twitter after his recent acquisition of the platform.
The question is: would it be better to have people say harmful things on a massive public platform where they can be opposed, or on a smaller, homogenous one where fewer people will see them but the ideas are reinforced and idolized?
The other bit of advice I can give is directed to individual users. It is easier said than done, but avoid idolizing others. Social media can foster parasocial relationships, which are one-sided relationships where users think they intimately know the public figure. This can lead to intense admiration, which can prevent people from seeing their idol’s faults. Placing another human on a pedestal is very dangerous because followers can slowly synchronize their beliefs with their idol’s. Try to mix up your feed so you aren’t inundated with only one person or organization’s words.
There are not two sides to hate speech. But in general political discussions, it’s important to at least try to empathize with the other side. When you see a topic you disagree with, try to write 250 words of your opinion, and then 250 words on what the opposing opinion would be. This may seem like a silly exercise, but it can be very beneficial for two reasons. First, it is an exercise in empathy. You will have to put yourself in the opposing viewpoint’s shoes to write on their behalf, which could give you a new perspective on the issue that you had not considered before. And second, when you write what you believe, you have to examine why you believe it. This can be eye-opening because you may find missing links in your argument. It is healthy to examine our beliefs every once in a while.
Social media’s tendency to perpetuate hate speech and inspire violence does not have a simple solution, but making people aware of the problem is the first step in decreasing its severity. Falling down the rabbit hole into social media Wonderland can have real benefits, like making new friends and networking, but it also has its fair share of Mad Hatters. Just be mindful of how deep you venture and who you are getting your information from.
Sabrina Wilson is a freshman from Winfield, Kan., majoring in Broadcast Journalism.