Social media platforms face a myriad of criticisms, from accusations of contributing to rising anxiety and suicide rates among American teenagers to profiting from the sale of personal data at the expense of individual privacy. At the same time, the Black Lives Matter movement in 2020 demonstrated that prominent racial justice and equality movements can be organized through digital activism, gaining widespread support and online engagement. Social media is an instrument of political change, and those changes are dangerously consequential. Amid the COVID-19 pandemic, the Capitol riots in the United States, and rising political polarization, the world is abandoning the notion that social media has little impact on domestic and world politics.
As many aspects of people’s lives were forced into the digital realm during the pandemic, social media platforms were used even more heavily for entertainment, communication, and connection. According to a New York Times analysis of internet usage from January to March 2020, average daily traffic on Facebook rose 27 percent, and traffic on YouTube rose 15.3 percent, after the first U.S. COVID-19 death. In March, Mark Zuckerberg told reporters on a conference call that traffic for video calling had “exploded” and that messaging, particularly on WhatsApp, had “doubled in volume,” according to another New York Times report.
Increased engagement, and the resulting rise in power for social media companies like Facebook, has brought more sinister consequences: the spread of misinformation and disinformation. In September 2020, the World Health Organization and other United Nations agencies issued a joint statement reiterating global concern over the COVID-19 ‘infodemic,’ an “overabundance of information” that has fueled the widespread dissemination of misinformation and disinformation. The statement also called on member states and stakeholders, including social media platforms, to combat the infodemic. Nonetheless, social media platforms have been used to interfere with the integrity of elections, incite political violence, and spread misinformation and political polarization around the world.
Recommendation algorithms on social media shape perceptions in ways that contribute to political polarization. “Right now, social media companies like Facebook profit off of segmenting us and feeding us personalized content that both validates and exploits our biases,” said Yaël Eisenstat, a former CIA analyst, diplomat, and Facebook employee, in an August 2020 TED Talk. Eisenstat continued, “Their bottom line depends on provoking a strong emotion to keep us engaged, often incentivizing the most inflammatory and polarizing voices, to the point where finding common ground no longer feels possible.”
Social media platforms are also breeding grounds for fake news and misinformation, which further deepen political division. In 2018, three MIT scholars published a study, based on over a decade of data, finding that false news spreads on Twitter six times faster than real news stories. Moreover, false news stories were 70 percent more likely to be retweeted than real ones.
How people consume their news ultimately affects their perceptions of the world and their political views. According to Pew Research Center, “one-in-five U.S. adults say they often get news via social media.” Consumption of false information not only creates a misinformed electorate but ultimately makes finding common ground and engaging in civil discourse more challenging.
The Markup’s Citizen Browser Project found that Facebook users who voted for Joe Biden and users who voted for Donald Trump in the 2020 election held different views of the January 6, 2021 U.S. Capitol riots; their respective feeds showed stories that catered to each group’s political biases. Users with differing political beliefs were also shown stories from different sources altogether: Biden voters were more frequently served outlets like The Washington Post, The New York Times, and CNN, while Trump voters were more frequently served outlets like The Daily Wire, Fox News, and Breitbart.
In parallel, the design of social media recommendation algorithms contributed in part to the political violence at the U.S. Capitol. In a recent interview for The Global Current, Dr. James Kimble, a communications professor and propaganda expert at Seton Hall University, stated, “Social media enables you to craft an echo chamber,” producing a “sense of self-selection where all you hear is what you want to hear and you don’t hear your opponents.”
The result, Kimble argues, is “disastrous for public discourse” because differing perspectives “do not collide with each other and thus grow more and more strong and seem true to those people.” He adds that discourse must be free from threats of violence, asserting that “some of these tweets flirted with the idea of domestic terrorism or encouraged people to be violent to show up at the Capitol.”
The U.S. Capitol riots on January 6, 2021 are considered by some experts to be a product of misinformation campaigns and recommendation algorithms on social media platforms like Twitter, Parler, and Gab. In an interview with The Diplomatic Envoy, Professor John H. Shannon, J.D., of Seton Hall’s Stillman School of Business and an expert on digital transformation’s impact on business, law, and society, offered one explanation: “One of the great strengths and weaknesses on the planet is you can find people with similar views and ideas and theories. Social media removes geography and time constraints we no longer always have to deal with and brings such communities together.”
Political communities, such as terrorist organizations, that organize and recruit worldwide through social media platforms are evidence of this. In 2016, an internal Facebook analysis of German political groups found that “64% of all extremist group joins are due to our [Facebook’s] recommendation tools,” according to a Wall Street Journal report from May 2020.
While many criticize social media platforms themselves, state actors are also guilty of abusing the platforms to incite violence. In some cases, governments targeted people in their own countries. In 2018, the UN published a report saying military leaders in Myanmar used Facebook, a popular platform in the country, to conduct a systematic propaganda campaign against Rohingya Muslims, a minority ethnic group with a history of facing persecution in the Buddhist-majority country.
“The role of social media is significant,” according to the UN report. “Facebook has been a useful instrument for those seeking to spread hate, in a context where, for most users, Facebook is the Internet. Although improved in recent months, the response of Facebook has been slow and ineffective. The extent to which Facebook posts and messages have led to real-world discrimination and violence must be independently and thoroughly examined.”
Cynthia M. Wong, a former Senior Internet Researcher at Human Rights Watch, explained in the Netflix documentary The Social Dilemma that this campaign “helped incite violence against the Rohingya Muslims that included mass killings, burning of entire villages, mass rape, and other serious crimes against humanity that have led to 700,000 Rohingya Muslims having to flee the country.”
Social media can invade many facets of daily living, from forming and cementing political opinions to being used as an instrument to promote genocide. On top of that, investigative journalist Carole Cadwalladr presented a startling judgment in her 2019 TED Talk about Facebook’s role in recent elections. Her conclusion addressed “whether or not it is possible to have a free election again.” She stated that “as it stands, I don’t think it is.”
In September 2020, an internal memo by Sophie Zhang, a former data scientist on the Facebook Site Integrity team, was made public. Zhang found evidence that foreign governments, political parties, and other actors in Honduras, Azerbaijan, India, Spain, Brazil, Bolivia, Ecuador, and Ukraine were using fake accounts and organizing campaigns on Facebook to influence public opinion and elections. Additionally, Zhang stated that she and her colleagues removed “10.5 million fake reactions and fans from high-profile politicians in Brazil and the U.S. in the 2018 elections.”
Further evidence shows that social media is being abused to interfere in elections. According to “Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation,” a report by the Oxford Internet Institute, there is evidence that of the 48 countries examined, 30 have political parties that have deliberately used computational propaganda on social media platforms during elections or referenda.
Two prominent examples of this deliberate abuse of social media are Russia’s interference in the 2016 U.S. Presidential election and the United Kingdom’s 2016 referendum to leave the European Union. In both cases, the Internet Research Agency (IRA), a company supported by the Russian government, organized disinformation campaigns by writing and posting fake content and creating thousands of fake social media accounts to spread propaganda. The IRA created accounts on Twitter, Facebook, Instagram, YouTube, and other social media platforms. Fake content created by the IRA was retweeted a staggering two million times and reached over 288 million views on Twitter. Leading up to the 2016 U.S. Presidential election, Russian posts reached 126 million U.S. Facebook accounts, according to a 2019 Park Advisors report sponsored by the U.S. State Department.
In anticipation of the 2020 Presidential election, Facebook suspended the recommendation tab for political groups to try to avoid another election fiasco. After Election Day on November 3, Facebook temporarily cut off all political ads in the U.S. “to reduce opportunities for confusion or abuse,” the company stated. Additionally, from October 29 to December 9, 2020, Instagram temporarily removed the “Recent” tab from hashtag pages in the United States as a precaution against the spread of misinformation.
In addition to the precautions taken by Facebook and Instagram, some critics and experts have suggested taxing data mining, fixing the algorithms, and even dissolving social media companies altogether to prevent further consequences of disinformation campaigns. Regulation, however, is the resounding suggestion among experts.
Professor John H. Shannon, speaking to The Diplomatic Envoy on the legal aspects of social media regulation, stated, “They are not enough. This problem will require regulation; regulation is the way we protect the commons. We are in the early stages of trying to regulate a largely unregulated industry we call technology.”
Dr. Viswa Viswanathan, an Associate Professor of Computing and Decision Sciences at Seton Hall University, concedes that regulation is a possible solution to problems caused by social media, but he does not believe regulation alone is a panacea for all of these issues. A fundamental takeaway is that “people need to know how to think critically or else they will always be targets of exploitation,” asserts Viswanathan.
One reason misinformation campaigns are so successful is their ability to manipulate a target audience. Dr. Viswanathan elaborates on this, claiming that “the educational system (at all levels) has mostly failed to help people to think critically” because it has come to view itself as an economic tool. The question that remains is whether critical thinking, regulation, or other solutions can ultimately prevent social media’s disastrous impact on political polarization, political violence, and election integrity.