by Jake Wellborn, guest writer
How do social media algorithms recommend content to users?
Social media companies use AI-powered recommendation algorithms that draw on an individual's data to surface content they are likely to engage with. These algorithms analyze data such as a user's age, gender, and location. This demographic data is useful because the algorithm can look at what content other members of the same general demographic have or have not engaged with and recommend content accordingly. While this is the initial way these algorithms recommend content, a user's history of what content they seek out is taken into account as well. This is combined with a user's search history to give the algorithms a rounded view of a person's interests. According to software company Algolia, all of these signals are used to increase the amount of time users spend on a site or app.
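To make this concrete, here is a minimal toy sketch of the idea of blending demographic, engagement, and search-history signals into a content ranking. Every field name and weight below is invented for illustration; real platform recommenders are vastly more complex, typically large machine-learning models trained on billions of interactions.

```python
def recommend(user, posts, top_n=2):
    """Rank posts by a weighted blend of hypothetical signals."""
    def score(post):
        s = 0.0
        # Demographic signal: content popular with the user's cohort.
        if user["age_group"] in post["popular_with_age_groups"]:
            s += 1.0
        # Engagement-history signal: topics the user already interacts with.
        s += 2.0 * len(set(post["topics"]) & user["engaged_topics"])
        # Search-history signal: topics the user has actively sought out.
        s += 3.0 * len(set(post["topics"]) & user["searched_topics"])
        return s
    return sorted(posts, key=score, reverse=True)[:top_n]

user = {
    "age_group": "18-24",
    "engaged_topics": {"politics", "gaming"},
    "searched_topics": {"politics"},
}
posts = [
    {"id": 1, "topics": ["cooking"], "popular_with_age_groups": ["35-44"]},
    {"id": 2, "topics": ["politics"], "popular_with_age_groups": ["18-24"]},
    {"id": 3, "topics": ["gaming"], "popular_with_age_groups": ["18-24"]},
]
print([p["id"] for p in recommend(user, posts)])  # → [2, 3]
```

Note how the politics post wins: it matches the user's demographic, engagement, and search signals at once. This self-reinforcing dynamic, where prior interest keeps pulling in more of the same content, is exactly what produces the echo chambers discussed below.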
How do social media echo chambers affect the average person’s view of the world?
Many people in the United States spend at least an hour or two a day on social media, and many of them get their news almost exclusively through it. This sort of engagement is even more prominent among younger generations and has, for many people, completely replaced other forms of media such as cable news. While this can have its advantages, it often leads to a situation in which a person is exposed only to news and commentary biased toward the political alignment the algorithms have determined them to have. Oftentimes the recommended content can be incredibly fringe, misleading, or outright false, giving its audience a distorted sense of what is truly going on in the world. A good recent example is how people's perception of the COVID-19 pandemic was influenced by their online activity. Many people came to believe radical conspiracy theories regarding the origins or nature of the virus without any evidence beyond the deluge of social media content these algorithms were showing them. This led to individuals believing baseless lies, such as that the virus was fake or the result of a global conspiracy spearheaded by a shadowy cult of Satan-worshipping child sex traffickers. Vaccine skepticism also ran rampant during this time as social media users propagated unsubstantiated anecdotes about the supposed negative effects of the COVID-19 vaccines. An article published in the peer-reviewed journal Frontiers in Psychology entitled “Social Media News Use Induces COVID-19 Vaccine Hesitancy Through Skepticism Regarding Its Efficacy: A Longitudinal Study from the United States” discusses this.
The authors summarize the findings: “Social media news use was positively associated with skepticism regarding vaccine efficacy and those with higher levels of news literacy were less skeptical.” This created a situation in which people put their own health and the health of those around them at risk because they were trapped in an online echo chamber that convinced them of falsehoods.
Does being more online lead a person to be more polarized?
Research on whether increasing political polarization in the general populace can be connected to social media can be hard to interpret, but most studies seem to establish a link. Emily Kubin and Christian von Sikorski's article “The role of (social) media in political polarization: a systematic review” states that “regarding affective polarization, nearly all experiments found that social media can further affectively polarize people” (196). While social media can't be singled out as the ultimate cause of political tribalism and polarization, it does exacerbate it. Chronic online activity and engagement with political content have also been linked, at least anecdotally, to radicalization, which can be dangerous. According to the Associated Press, the perpetrator of the 2022 Buffalo supermarket shooting described in his manifesto how he was radicalized after visiting websites such as 4chan and The Daily Stormer. While these are not social media websites in the strict sense, the 4chan message boards and the content he was exposed to on The Daily Stormer still played a role in his adoption of white-nationalist, neo-Nazi views. This is an extreme example of how online echo chambers can create deep polarization and division in the American population.
Does this polarization fueled by social media and algorithms influence elections in the United States?
Since the dawn of the 21st century, and especially within the last decade, many politicians have tried to take advantage of social media in one way or another. Almost every politician has a Twitter and Instagram profile through which they aim to get their message out and raise awareness of their campaigns during crucial election seasons. However, due to recommendation algorithms, which users see these politicians' posts is heavily influenced by where those users sit on the political spectrum. An example of this could be seen during the 2016 presidential election, when Cambridge Analytica collected social media users' data and sold it to campaigns, which were then able to influence prospective voters with targeted ads. The consulting firm analyzed the data to predict how a user might align politically and advertised to them accordingly. As a result, many social media users saw only one type of targeted advertisement, leaving them with a biased and one-sided view of the election. This is rather commonplace in how social media is used for election campaigns in the United States, and it is not hard to see how it influences the country's democratic elections.
How is social media used to spread misinformation and what should be done to address this?
Social media's role in spreading misinformation has been well documented, from the alleged Russian interference in the 2016 election through social media campaigns to COVID-19 conspiracies. In their article “Social Media Misinformation and the Prevention of Political Instability and Mass Atrocities”, Kristina Hook and Ernesto Verdeja assert that research has found that “organized social media misinformation campaigns have operated in at least 81 countries, a trend that continues to grow yearly, with a sizable number of state-backed and private corporation manipulation efforts”. This is a terrifying prospect and one that should be taken with the utmost seriousness by governments around the world. Because almost anyone can post on social media, anyone can post misleading information, whether intentionally or not. Users who like or otherwise engage with this content will often be shown more posts along the same lines, leaving them with a skewed view of the world backed up by masses of false or misleading information fed to them through social media. Effective solutions to this problem are few and far between. Some would advocate that social media companies clamp down on misinformation in the harshest way possible, by outright removing posts containing it from their platforms. This “solution” would require curtailing freedom of expression and is not one that should be considered lightly, especially since it can often be difficult to determine what exactly qualifies as disinformation, and a “Ministry of Truth” style organization does not come with the best of connotations. Finding ways to get social media companies to modify how their algorithms recommend content is the best solution one can hope for right now, even if the actual details will be difficult to work out. Social media companies will not want to change the way they show users content if it means less engagement.
According to a Wall Street Journal article from 2021, internal Facebook memos from 2018 revealed that the company was aware of the role its platform had played in dividing Americans. The memos showed that Mark Zuckerberg was resistant to algorithmic changes that would permanently decrease the flow of content likely to stoke political hatred. A Brookings article states that although Facebook reportedly does implement such algorithmic changes for certain periods, Zuckerberg knew that making them anything but temporary would cut into user engagement. Legislation must therefore be considered that would require social media companies to alter their algorithms to show users less divisive content.
How will social media affect the 2024 presidential election?
This will of course be difficult to fully predict, as we are still relatively early in the election cycle. Yoel Roth, former head of Twitter's Trust and Safety team, stated at a panel on election speech and disinformation that he was worried platforms would have fewer resources in place to handle various issues than in previous elections. TikTok, Discord, and Twitch are going to implement new tools to try to combat disinformation during this election season, but time will tell how exactly this will work. Instagram already places fact-check disclaimers on posts deemed misleading, but this creates its own problem: many users have a knee-jerk tendency to treat those disclaimers as proof that a misleading post is actually accurate. The increasing role of AI and its ability to create convincing deepfakes may also prove to have an effect on the 2024 election. If posts containing AI deepfakes portraying candidates proliferate unchecked on social media, they will certainly sow chaos. There are many variables that need to be considered and addressed for the 2024 election season when it comes to social media, and time will tell how everything plays out.