Extreme groups express more outrage than moderates, but moderates are more likely to be influenced by the posts
“Social media’s incentives are changing the tone of our political conversations online,” said Yale’s William Brady, a postdoctoral researcher in the Yale Department of Psychology who worked with associate professor of psychology Molly Crockett on the research.
Researchers measured expressions of moral outrage on Twitter during real-life controversial events, and found that algorithms and engagement metrics encouraged users to keep expressing outrage – a dynamic that can be a force for both good and ill.
“This is the first evidence that some people learn to express more outrage over time because they are rewarded by the basic design of social media,” Brady said.
The researchers monitored 12.7 million tweets from 7,331 users and used machine learning software to track whether users became more outraged over time. They found that users who received more feedback for morally outraged posts were likely to express outrage more often in later posts.
They also found that members of politically extreme groups expressed more outrage than those of moderate groups – but people in those moderate groups were more influenced by engagement rewards.
“Our studies find that people with politically moderate friends and followers are more sensitive to social feedback that reinforces their outrage expressions,” Crockett said.
“This suggests a mechanism for how moderate groups can become politically radicalized over time — the rewards of social media create positive feedback loops that exacerbate outrage.”
Other research has also suggested that sensationalised and emotional content performs better than less hyperbolic content. A study of 100 million headlines found that phrases like “make you cry” and “give you goosebumps” were among the top-performing ‘word series’ – the phrasings used to package headlines and make them readable.
The notion that algorithms and engagement metrics push users to more outrageous content is one that has come up previously: in 2020, Facebook executives reportedly shelved research that would make the social media site less politically polarising, with features that would diminish this effect being described as “antigrowth” and requiring “a moral stance.”
“Our algorithms exploit the human brain’s attraction to divisiveness,” a 2018 presentation apparently warned, adding that if action was not taken Facebook would provide users “more and more divisive content in an effort to gain user attention & increase time on the platform.”
Part of the reason for this could be that “punishing norm violations is satisfying”, according to neuroscientist Robert Sapolsky.
A 2004 study suggested that reward-related regions of the brain are consistently activated when players administer punishment – a response that may hark back to public shaming rituals like those found in pre-industrial England.