Engagement Algorithms: Designed to Keep You Hooked
Crucially, these algorithms judge content not by its truthfulness or social value, but by raw performance metrics. If a post or video gets strong engagement (lots of views, comments, reshares, etc.), the system takes that as a signal to show it to even more people. The underlying logic is simple: content that provokes a reaction keeps users scrolling and clicking, which means more ad impressions for the company. As a result, social media algorithms have been shown to favor posts that spark strong emotions – outrage, fear, anger, excitement – because those get people to respond and thus spend more time on the app[3]. Facebook’s own algorithm, for instance, was found to elevate content that elicited the most intense emotional reactions from users, in order to maximize time on site[3]. In short, the AI’s only agenda is to capture your attention; it neither knows nor cares if the content is political, funny, or false, as long as you don’t look away.
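To make that logic concrete, here is a minimal sketch of what an engagement-only ranking step might look like. The field names, weights, and scoring formula are assumptions for illustration only; no platform publishes its ranking code, and production systems use learned models rather than hand-set weights.

```python
# Minimal sketch of an engagement-only ranking step (illustrative assumptions,
# not any platform's actual code).
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_watch_time: float   # seconds the model expects this user to watch
    predicted_reshares: float     # expected reshares if shown
    predicted_comments: float     # expected comments if shown

def engagement_score(post: Post) -> float:
    # The objective is pure predicted engagement; nothing here measures
    # truthfulness or social value.
    return (0.5 * post.predicted_watch_time
            + 2.0 * post.predicted_reshares
            + 1.0 * post.predicted_comments)

def rank_feed(candidates: list[Post]) -> list[Post]:
    # Show the highest-scoring posts first -- whatever provokes a reaction wins.
    return sorted(candidates, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("calm-explainer", predicted_watch_time=40, predicted_reshares=1, predicted_comments=2),
    Post("outrage-clip", predicted_watch_time=70, predicted_reshares=9, predicted_comments=15),
])
print([p.post_id for p in feed])   # the provocative clip ranks first
```

Under an objective like this, a post that provokes a flood of comments and reshares will always beat a quieter, more accurate one, regardless of what it actually says.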
The Accidental Rabbit Hole: From Curiosity to Extremism
These engagement-optimizing algorithms have an unintended side effect: they can end up steering users toward ever more extreme and emotionally charged content. It often starts innocently. Suppose a user begins by watching a slightly political or opinionated video – something mild, perhaps a mainstream news clip or a commentary with a partisan lean. The platform’s AI notices that this content held the user’s attention (say they watched most of it, liked it, or left a comment). To the algorithm, that’s a success. Its response? Feed the user more content along those lines. On YouTube, this means the “Up Next” panel and autoplay will start queuing up videos that are a bit stronger in tone or more provocative on the same topic, because similar content has historically kept viewers engaged[4]. Over time, this can become a feedback loop: with each click on a slightly more sensational video, the system learns that the viewer is interested and doubles down by recommending something even more extreme, controversial, or emotionally stimulating to keep them watching.
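The feedback loop lends itself to a toy simulation. The sketch below assumes each video can be summarized by a single “intensity” number and that the recommender slightly favors items just above the user’s current taste; both are simplifications (real systems learn high-dimensional embeddings, not a single scalar), but the gradual drift it produces mirrors the dynamic described above.

```python
# Toy simulation of the recommendation feedback loop (hypothetical model:
# "intensity" is an invented scalar for how provocative an item is).

def recommend(user_pref: float, catalog: list[float]) -> float:
    # Pick the item closest to the user's inferred taste, with a slight tilt
    # toward more intense items (assumed to retain attention a bit better).
    return min(catalog, key=lambda item: abs(item - (user_pref + 0.05)))

def simulate(steps: int = 20) -> float:
    catalog = [i / 100 for i in range(101)]   # intensities from 0.00 to 1.00
    user_pref = 0.2                           # starts with mild content
    for _ in range(steps):
        item = recommend(user_pref, catalog)
        # Watching the item nudges the inferred profile toward what was consumed.
        user_pref = 0.8 * user_pref + 0.2 * item
    return user_pref

print(round(simulate(), 2))   # ~0.4: preference drifts upward, one small step at a time
```

No single recommendation in the loop is dramatic; the escalation comes from many small, individually reasonable steps compounding.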
Studies have documented this rabbit hole effect. A 2023 audit of YouTube’s recommendation algorithm found that the platform tends to reinforce whatever political bias a user starts with – and for right-leaning viewers especially, it pushes them toward far more extreme content over time[5][6]. In that large experiment, researchers at UC Davis created thousands of “sock puppet” accounts simulating users of different political persuasions. The result was telling: YouTube’s automated suggestions consistently served up videos aligned with the user’s existing ideology, and for the most right-leaning profiles, about 40% of their recommended videos came from channels promoting political extremism or conspiracy theories[5][6]. In contrast, left-leaning accounts got far fewer extremist recommendations. This shows that if your viewing history marks you as conservative, the algorithm is likely to lead you to more hardline nationalist or conspiratorial videos. As one author of the study put it, “YouTube’s recommendations can, on their own, activate a cycle of exposure to more and more problematic videos,” creating a self-reinforcing spiral of radicalization[7]. Without the user actively seeking it out, the system starts magnifying political content in one direction – an accidental push fueled purely by what keeps eyes on the screen. Users have reported anecdotal examples of this progression: watch a few innocuous videos about, say, immigration policy, and before long your YouTube sidebar is full of incendiary clips about migrant “invasions” or ultranationalist vlogs. The algorithm’s intent isn’t to radicalize, but its engagement-first logic often results in amplifying the most attention-grabbing (and often extreme) voices.
Instagram’s recommendation model produces a similar phenomenon through the Explore page and algorithmic Reels. On Instagram, if you show interest in slightly edgy or political content – for example, by liking a post with a nationalist slogan or watching an emotional video clip to the end – the app will take that as a cue to serve you more intense material in that vein. Instagram’s Explore feed is explicitly tailored “based on an individual’s historical interactions”[8], meaning it tries to guess what you want more of. The danger is that if what grabs your attention is outrage or shock, the algorithm will happily give you a steady diet of outrage. Experts observing Instagram have noted that a teen clicking on a seemingly harmless political meme can trigger a cascade: “A kid might like something edgy… and that triggers the algorithm. That then sends them tumbling down into anti-feminist, racist, Holocaust denial, neo-Nazi type of content,” one parent reported after monitoring her sons’ Instagram use[9]. In fact, an investigative study by the Center for Countering Digital Hate found that simply liking one piece of misinformation on Instagram – be it about elections, COVID-19, or a divisive social issue – caused the algorithm to start promoting significantly more extremist and conspiratorial content to that user[10]. “Instagram’s algorithm leads users down rabbit-holes to a warren of extremist content,” the researchers wrote, noting that the platform was actively linking people who had mild interests in, say, anti-vaccine content to much more radical posts laced with antisemitism and QAnon conspiracy theories[11][12]. In other words, the system cross-pollinates high-engagement extremist narratives: if you engage with anti-immigration memes, you might soon see suggested posts about white supremacist groups; follow a patriotic hashtag, and your Explore tab fills with even harder-line ultranationalist slogans. This happens not because Instagram’s AI has a right-wing agenda, but because emotionally charged content (anger, fear, tribal pride) generates strong reactions, and those reactions are the fuel the algorithm runs on.
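One way to picture this cross-pollination is as an item-to-item “co-engagement” lookup: topics that overlapping sets of users engage with get suggested to each other’s audiences. The sketch below uses invented users and topic labels purely to illustrate that mechanism; it is not Instagram’s actual Explore logic.

```python
# Hypothetical co-engagement lookup; users, topics, and counts are invented.
from collections import Counter

# engagement_log[user] = set of topics that user has interacted with
engagement_log = {
    "u1": {"anti-immigration memes", "white-nationalist pages"},
    "u2": {"anti-immigration memes", "white-nationalist pages"},
    "u3": {"anti-immigration memes", "patriotic hashtags"},
    "u4": {"patriotic hashtags", "ultranationalist slogans"},
}

def related_topics(seed_topic: str) -> list[tuple[str, int]]:
    # Count which other topics co-occur with the seed across users; the most
    # co-engaged topics are what an Explore-style surface tends to suggest next.
    counts: Counter = Counter()
    for topics in engagement_log.values():
        if seed_topic in topics:
            counts.update(topics - {seed_topic})
    return counts.most_common()

print(related_topics("anti-immigration memes"))
# [('white-nationalist pages', 2), ('patriotic hashtags', 1)]
```

Nothing in the lookup knows or cares what the topics mean; it simply chains together whatever high-engagement audiences already overlap on.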
Emotionally Charged Content: Fertile Ground for Right-Wing Narratives
Why do right-wing political messages seem to thrive in this algorithm-driven environment? A key reason is the emotional intensity of the content. Populist and far-right creators often lean into themes that are deeply visceral: national pride, identity, fear of outsiders, anger at “elites” or scapegoated groups, nostalgia for tradition, or outrage at social changes. These themes naturally provoke strong feelings in their target audiences – exactly the kind of feelings that make someone hit the share button or leave an ALL-CAPS comment. From the algorithm’s point of view, such passionate responses are golden feedback. High watch times, replaying videos, heated comment threads, rapid shares – all signal that this content is keeping users glued to the platform. So the AI dutifully amplifies it.
Observers have compared the process to an automated system learning to serve ever “sweeter” junk content to satisfy our subconscious cravings. Tech sociologist Zeynep Tufekci famously noted that YouTube’s recommendation AI appears to have figured out that “edgy,” divisive material draws people in. “This is a bit like an autopilot cafeteria in a school that has figured out children have a sweet tooth,” Tufekci said. “So the food gets higher and higher in sugar, fat and salt – while the videos recommended by YouTube get more and more bizarre or hateful.”[4] In practice, this means if slight controversy engages you, greater controversy will engage you even more – a logic the algorithm relentlessly follows. Right-wing propagandists often capitalize on this by crafting content that pushes emotional buttons hard. A video that rages about a crime by an immigrant or spins a conspiracy theory about a looming threat to one’s country can ignite fear and anger, emotions that prompt people to react and pass the message along. The more people react, the more the AI promotes that video to others. A feedback loop is born: outrage yields clicks, clicks boost reach, and wider reach creates even more outrage.
Real-world examples illustrate how this dynamic plays out. Researchers have noted a pattern of YouTube “radicalization” in which viewers are subtly guided from relatively mainstream conservative videos toward extreme right-wing content over time[13]. In one documented case, a user who started out watching standard Republican political commentary found that YouTube’s autoplay and suggestions gradually led him to far-right shock jock rants, white nationalist channels, and bizarre conspiracy theories about “cultural Marxism.” By chasing engagement, the platform had effectively pulled a casual viewer into a fringe worldview. A report by Data & Society termed this the “Alternative Influence Network,” describing how a cluster of far-right YouTube personalities use the algorithm’s tendencies to their advantage. They collaborate and appear on each other’s channels to boost visibility, and the YouTube algorithm’s tendency to chain related content together means a viewer can easily hop from a milder influencer to more extreme ones in that network[13][14]. According to the report, YouTube became a “breeding ground for far-right radicalisation, where people interested in conservative and libertarian ideas are quickly exposed to white nationalist ones.”[13] And although this is partly fueled by the creators’ savvy networking, “YouTube’s recommendation algorithms are [also] partly to blame,” the study noted[15].
On Instagram, we’ve seen nationalist and ultra-conservative content go strikingly viral thanks to similar mechanisms. In India, for example, short patriotic videos set to stirring music and flashy graphics routinely rack up millions of views on Reels – their emotional appeal (pride and fervor for the nation) translating into huge engagement numbers, which in turn cause Instagram’s algorithm to propel them to even more users. Across Europe and the U.S., hardline activists have used Instagram to spread anti-immigration and anti-Muslim memes that play on fear and anger. Because these posts get a rise out of people – supporters eagerly share them, opponents leave outraged comments – they end up featured on many users’ Explore pages, far beyond the original audience. A recent investigation in Germany by Global Witness found that non-partisan social media users were being recommended roughly twice as much right-leaning political content as left-leaning content in the lead-up to elections[16]. In other words, the algorithms (Instagram among them) were disproportionately amplifying content favorable to the far-right AfD party. This kind of skew can emerge simply because far-right posts were provoking more engagement, and the recommendation system had no “fairness” adjustment to balance it out. Instagram has even become a recruitment hotbed for extremist groups. In 2021, the Center for Countering Digital Hate warned that Instagram’s algorithms were actively directing young users toward neo-Nazi and white supremacist content via its network of meme pages and suggestions[17][10]. A teenager might innocently follow an edgy meme account, only to have Instagram start recommending ever more radical pages. Experts noted that neo-Nazi groups cleverly use “stylized and punchy” visual memes on Instagram – often humorous or ironic on the surface – to package hateful ideologies in a highly shareable form[18][19]. These memes spread like wildfire among youth, and the algorithm, seeing the surge, pushes them even harder, not realizing it’s propagating extremist propaganda. The emotional hook (even if cloaked in humor) keeps the engagement high. As Imran Ahmed of CCDH put it bluntly, “Instagram is actively pulling its predominantly young users down an extremist rabbit hole.”[20]
It’s important to emphasize that the algorithms are agnostic about the politics per se. They did not set out to favor right-wing (or left-wing) ideology. What they favor is engagement, and it so happens that some of the most engaging content out there is politically charged, fear-inducing, or divisive. Unfortunately – or perhaps predictably – this means content from the right-wing populist playbook often checks all the boxes: it’s provocative, emotionally resonant, and shareable. Thus, without any human intent, the design of these AI systems creates ideal conditions for such narratives to thrive. As one analysis noted, social platforms’ machine-learning models prioritize “high-engagement content without consideration for misinformation,” which is exactly why sensational fake news and partisan rumors can outperform sober, factual reporting[3]. Extreme messages (e.g. “Outrageous claim X will destroy our country!”) simply draw more clicks than moderate ones, so the algorithms end up amplifying those messages far and wide.
Echo Chambers and Filter Bubbles
One consequence of these personalized recommendation loops is the formation of echo chambers. As the AI keeps showing you what it thinks you want to see, you’re less and less likely to encounter content that challenges your existing views. Over time, your feed becomes a bubble filled only with like-minded voices – a digital comfort zone of confirmation bias. Social media algorithms curate each person’s feed so individually that two users can live in completely different information worlds. If one user consistently engages with right-wing posts, their feeds on YouTube, Instagram, or Facebook will tilt ever more to the right, eventually excluding opposing perspectives entirely. Platforms preferentially serve each user content “specifically curated for them – creating an online echo chamber that isolates their viewpoints”[2].
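A crude model of this filtering is a selection rule that keeps only the items a user is predicted to agree with. In the hypothetical sketch below, users and items sit on a single left-right scale and the affinity formula is invented; the point is only that a cut based on predicted agreement quietly drops cross-cutting content.

```python
# Hypothetical "filter bubble" selection rule; the scale, affinity formula,
# and cutoff are invented for illustration.
def predicted_affinity(user_lean: float, item_lean: float) -> float:
    # Both leans on a -1 (left) .. +1 (right) scale; ideologically closer
    # items are assumed to get more engagement, so they score higher.
    return 1.0 - abs(user_lean - item_lean) / 2.0

def personalize(user_lean: float, items: list[float], keep: int = 3) -> list[float]:
    # Keep only the highest-affinity items; anything likely to challenge the
    # user's views falls below the cut and is never shown.
    return sorted(items, key=lambda lean: predicted_affinity(user_lean, lean),
                  reverse=True)[:keep]

inventory = [-0.9, -0.5, -0.1, 0.1, 0.5, 0.9]       # a politically mixed pool
print(personalize(user_lean=0.7, items=inventory))  # [0.5, 0.9, 0.1] -- the left-leaning half never surfaces
```

Run session after session, with each interaction reinforcing the inferred lean, the surviving slice narrows further.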
Research confirms that these algorithmic filters reinforce our pre-existing beliefs. The YouTube audit mentioned earlier found that the platform’s algorithm “does recommend videos that mostly match a user’s ideology” – essentially validating what the viewer already believes[21]. So a conservative gets flooded with more conservative (and eventually far-right) videos, while a liberal sees mostly liberal-leaning content. Over time, each side not only hears its own narrative on repeat, but also loses sight of the other side’s perspective entirely. Sociological studies suggest that when people are exposed only to homogenous viewpoints, they tend to become more extreme in their thinking (a phenomenon known as group polarization)[22]. It’s no surprise, then, that heavy users of partisan social media may grow even more entrenched and radical in their politics. Their feeds function as self-affirming echo chambers: everything they see seems to confirm that their side is right and the other side is absurd or evil – because the algorithm has quietly filtered out almost anything that contradicts that narrative.
This isolation is often referred to as the “filter bubble” effect (a term popularized by author Eli Pariser). Inside your bubble, the platform’s AI shows you content it predicts you’ll agree with or appreciate, based on your past clicks. You stop encountering news or posts that might broaden your viewpoint or fact-check false assumptions. For example, an Instagram user who starts following anti-immigration pages will likely see only posts reinforcing anti-immigrant sentiment in Explore, while posts presenting immigrants in a positive or nuanced light won’t surface. Over time, the user might falsely assume that everyone thinks the same way, since dissenting voices have been algorithmically muted from their online experience. This can create a distorted reality and heightened partisanship. Indeed, polarization in many countries has increased alongside the rise of algorithm-driven feeds. One Pew Research study noted that partisan divides in the U.S. deepened notably in the early 2010s – precisely when Facebook, YouTube, and others saw explosive growth and began algorithmically tailoring content for users[23]. The correlation suggests that social media echo chambers may be a driving force behind the “us vs. them” climate we see today.
Micro-Targeting: Political Ads and Personalized Propaganda
Beyond organic content recommendations, another AI-driven factor in modern politics is micro-targeted advertising. Social media platforms gather an enormous trove of personal data on their users – from basic demographics (age, location, gender) to detailed interest profiles inferred from every like and view. Advanced algorithms can analyze this data to predict things like your political leanings, personality traits, and vulnerabilities. In a notorious example, researchers discovered that just a few dozen Facebook “likes” could allow a model to accurately predict a user’s probability of voting for a certain party, as well as sensitive attributes like their sexual orientation or susceptibility to substance abuse[24][25]. This predictive profiling became the backbone of powerful political ad tools. Campaigns and consultancies (like the now-infamous Cambridge Analytica) have used such algorithms to segment voters into narrow categories and target them with tailored messages[24]. For instance, if the data suggests you are a middle-aged man who is anxious about crime and receptive to nationalist rhetoric, a campaign’s ad tech can automatically show you an ad highlighting an opponent’s lax immigration stance or a dramatic video about defending “law and order.” Your neighbor might simultaneously be shown a completely different ad from the same candidate focusing on jobs or religion – whatever the AI predicts will resonate with each individual.
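In spirit, this like-based profiling works like a simple scoring model over the pages a user has liked. The sketch below is a hypothetical logistic-style scorer with invented page names and hand-set weights; real campaign models are trained on data from millions of users and draw on far more signals.

```python
# Hypothetical like-based targeting scorer; page names, weights, bias, and the
# 0.6 threshold are all invented for illustration.
import math

# Weights mapping liked pages to a "receptive to law-and-order messaging" score.
WEIGHTS = {
    "neighborhood watch group": 0.8,
    "veterans charity": 0.4,
    "gardening tips": 0.0,
    "civil liberties nonprofit": -0.6,
}

def receptiveness(likes: set[str], bias: float = -0.5) -> float:
    # Logistic model: sum the weights of liked pages, then squash to 0..1.
    z = bias + sum(WEIGHTS.get(page, 0.0) for page in likes)
    return 1.0 / (1.0 + math.exp(-z))

profile = {"neighborhood watch group", "veterans charity", "gardening tips"}
if receptiveness(profile) > 0.6:
    print("serve the 'law and order' ad variant")   # this profile qualifies
else:
    print("serve a different ad variant")
```

The same pipeline, pointed at a different weight table, decides which neighbor sees the jobs ad or the religion ad instead; neither ever sees what the other was shown.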
This micro-targeting can amplify right-wing narratives by delivering them directly to the people most primed to agree, in emotionally resonant ways. During election cycles, social media platforms become inundated with thousands of political ad variants, each tuned to a different audience. Many are fear-based appeals or misleading claims crafted to trigger specific audiences. Because these ads are often “dark” (visible only to the targeted users and not broadly broadcast), they can spread divisive or false messages with little public scrutiny. The algorithmic ad delivery system ensures that the most inflammatory versions of a message find those who will be most enraged or motivated by it, which can deepen political divides. For example, in 2016, the Trump campaign’s data team (reportedly aided by Cambridge Analytica’s models) ran Facebook ads hyping the threat of refugee crime to voters flagged as anti-immigration, while showing different content to other groups. Voters in different micro-targeted segments thus received vastly different impressions of the same issues.
The combination of micro-targeting and algorithmic feeds also enables “astroturf” campaigns – where coordinated networks of accounts flood social media with partisan talking points or conspiracy theories aimed at particular demographics. Because the platforms’ AI will naturally boost content that gets engagement, a paid or bot-driven effort to push, say, a “viral” anti-government slogan can quickly catch the algorithm’s attention. It then starts recommending that content organically, achieving far wider reach than the initial campaign could on its own. Right-wing strategists have been adept at exploiting this, using memes, targeted groups, and advertising to inject their narratives into the recommendation ecosystem. The end result is a form of personalized propaganda: different segments of the population see the specific politically charged messages that are most likely to sway them. This precision was never possible in the era of broadcast TV or print ads. Now, AI-driven platforms act almost like custom propaganda delivery machines, albeit under the guise of ordinary social media content.
Outrage in the Attention Economy
In the social media attention economy, outrage equals profit. Posts that trigger anger or fear are six times more likely to spread than factual, neutral posts[26]. The platforms know that when you’re outraged, you’re engaged – furiously commenting, sharing to vent or rally others, compulsively checking back for updates. This keeps you glued to the app, which boosts the platform’s metrics. Thus, outrage becomes a commodity. Content creators (including political influencers) have figured this out. Many deliberately craft their videos or posts to stoke strong reactions, knowing that the algorithms will reward them with more visibility. It’s no coincidence that some of the biggest political YouTube stars and Instagram commentators are those who use hyperbolic, inflammatory language. They’ll use sensational headlines like “The TRUTH about Crime They Don’t Want You to Know!” or emotionally charged images designed to make you mad or scared. Once they hook an audience, the algorithm helps them snowball that following by promoting their high-engagement posts to new users.
Even more mainstream political actors feel pressure to amp up the emotion. We’ve seen traditional media outlets and politicians adopt the “outrage bait” style on social media to avoid losing attention. A calm, nuanced policy discussion simply cannot compete with a fiery tirade in terms of shares and clicks. Over time, this tilts the entire political discourse online toward more extreme, confrontational postures. A commentary in The Guardian noted that “extremism pays” in the current social media landscape – divisive content often generates the most ad revenue and follower growth, giving platforms little financial incentive to shut it down[27]. Right-wing populists, in particular, have reaped huge gains by leveraging this dynamic: their core messages of conflict and outrage naturally thrive under engagement-maximizing algorithms.
There’s also a reinforcing cycle: as certain influencers gain large followings by playing to the algorithm, they gain real political influence. When a YouTuber with millions of subscribers (earned through sensational content) endorses a candidate or promotes a conspiracy theory, it can shift opinions in a segment of the electorate. The platform has effectively trained audiences to respond to that influencer’s style of messaging. The attention economy thus translates into power – followers can be mobilized for protests, voting, or harassment campaigns. We saw this with various conspiracy-driven movements in recent years: social platforms helped fringe theorists build huge communities that later had real-world impact (from COVID-19 misinformation protests to the January 6th Capitol riot). All because outrage and fear proved to be the ultimate clickbait, and the algorithm dutifully amplified those who mastered its language.
Conclusion: Navigating an Algorithm-Driven Political Landscape
Social media algorithms are incredibly powerful forces in shaping public opinion – and their influence has largely been an unintended byproduct of their design. These AI systems did not set out with a partisan mission, but their single-minded pursuit of engagement created a perfect storm where right-wing political narratives could surge to prominence, propelled by emotion and automation. The result is a new political reality: one where outrage is amplified, echo chambers are the norm, and the boundaries of acceptable discourse can shift rapidly towards extremes.
The rise of right-wing ideologies online is a complex phenomenon with many factors, but the role of platform algorithms is central. By rewarding content that triggers visceral reactions, YouTube, Instagram, and other platforms have given an edge to messaging that taps into fear, anger, and tribal identity – hallmarks of populist and far-right propaganda. When millions of people are individually fed a steady diet of one-sided content, the fabric of democratic debate frays. We end up with parallel information universes, increased polarization, and a public that can be more easily manipulated by whoever best exploits the algorithm.
What can users do in the face of this? The onus is partly on the platforms to adjust their systems (and indeed there are growing calls for transparency and reform). But as individuals, we can take steps to stay balanced and informed despite the algorithmic push:
· Practice critical thinking: Don’t accept every viral post or video at face value. Consider who might be trying to provoke you and why. Fact-check startling claims with reliable sources before trusting them.
· Seek diverse content: Make a conscious effort to follow a range of perspectives, including voices you might not agree with. Break out of your comfort bubble by reading content from different sides or neutral news outlets. This can “reset” the algorithm to some extent by signaling your interest in broader viewpoints.
· Be aware of algorithmic influence: Simply understanding that what you see is being filtered for engagement can help you put things in context. Remember that trending content isn’t necessarily true or important – it might just be effective at pushing emotional buttons. Recognize when you’re being pulled into an outrage cycle and take a step back.
In the end, the goal is to reclaim agency over what you consume. Social media’s AI-driven feeds excel at steering our attention, but we don’t have to follow blindly. By actively curating our own media diet and thinking critically, we can resist the most corrosive effects of these algorithms. The emergence of this issue is a wake-up call: our information ecosystem is now largely governed by machine logic that values engagement above all else. To preserve a healthy, democratic discourse, both tech companies and users must work to inject more human intentionality into the mix – whether that’s through better algorithmic design or personal vigilance.
The rise of right-wing politics via Instagram, YouTube and their AI algorithms is a cautionary tale of technology’s unintended consequences. It shows how quickly public opinion can shift when millions are nudged by unseen, personalized forces. Going forward, treating algorithmic feeds with caution and seeking out factual, varied information are crucial. Social media has given everyone a voice, but it has also built echo chambers and megaphones for anger. Navigating this landscape requires awareness and balance. As the saying goes, “If you’re not paying for the product, you are the product.” Our attention is the currency, and outrage is the easiest way to spend it. By recognizing that, we can start to make choices – about what to click, what to share, and what to believe – that keep us informed rather than merely inflamed.
Ultimately, AI-powered platforms never intended to promote any particular ideology, but their engagement-driven design has unintentionally supercharged the spread of extreme right-wing narratives. It’s a stark reminder that technology is not neutral in its effects. Until social media algorithms are reined in or reformed, the best defense for users is to stay educated, stay critical, and consciously seek the truth beyond the sensational swirl the algorithm serves up. By doing so, we can enjoy the connectivity these platforms offer without surrendering our capacity for independent thought in the face of automated persuasion.
Sources:
· UC Davis News (2023) – Audit of YouTube’s algorithm finds it recommends extremist content to right-leaning users[5][6]
· Business Insider (2021) – Instagram’s algorithm “leads users down rabbit-holes” of extremist content, aiding far-right recruitment[10][9]
· Guardian (2018) – YouTube’s former engineer and researchers describe how the recommendation system favors divisive, sensational material[4][1]
· Young Australians in International Affairs (2021) – Analysis of social media algorithms driving polarization through echo chambers and emotional engagement[2][3]
· Guardian (2018) – “Alternative Influence Network” report on how far-right YouTubers exploit the platform and algorithm to radicalize users[13][15]
· Center for Countering Digital Hate (2020) – “Malgorithm” report on Instagram’s new algorithm recommending conspiracy and extremist posts to maximize engagement[11][12]
· Guardian (2018) – Cambridge Analytica scandal shows how Facebook data and algorithms were used to micro-target political ads based on personality[24][25]
· Global Witness (2025) – Investigation finding social media algorithms (TikTok, X, Instagram) in Germany pushed substantially more right-wing content to users than left-wing[16]
[1] [4] 'Fiction is outperforming reality': how YouTube's algorithm distorts truth | YouTube | The Guardian
https://www.theguardian.com/technology/2018/feb/02/how-youtubes-algorithm-distorts-truth
[2] [3] [22] [23] [26] How Social Media Algorithms Are Increasing Political Polarisation
[5] [6] [7] [21] YouTube Video Recommendations Lead to More Extremist Content for Right-Leaning Users, Researchers Suggest | UC Davis
[8] [9] [10] [17] [18] [19] [20] Instagram: Neo-Nazi Groups Recruiting Teenagers Via Memes - Business Insider
[11] [12] Malgorithm | Center for Countering Digital Hate
[13] [14] [15] [27] YouTube's 'alternative influence network' breeds rightwing radicalisation, report finds | Social media | The Guardian
[16] X and TikTok algorithms push pro-AfD content to German users | Global Witness
[24] [25] How Cambridge Analytica turned Facebook ‘likes’ into a lucrative political tool | Big data | The Guardian
https://www.theguardian.com/technology/2018/mar/17/facebook-cambridge-analytica-kogan-data-algorithm