
Moderating Extremism
The Challenge of Combating Online Harms
(under contract with Princeton University Press)

Figure: The relationship between platform size and moderation

Figure: Suspensions of militant groups' social media handles are concentrated on a small number of platforms, while other platforms allow the same groups to maintain active accounts

Content moderation on social media platforms has become one of the most pressing and contested global policy issues. From misinformation and hate speech to terrorist propaganda and other harmful material, debates over how to make online platforms safe have taken center stage. Nowhere have these efforts been more prominent than in the moderation of terrorist and extremist activity on digital media platforms. “Dangerous organizations” – militant or hate-based groups, extremist organizations, and other violent movements – have become one of the main targets of moderation, subject to mass content takedowns, account suspensions, and other sanctions. Yet despite the push to moderate harmful content on social media, these actors continue to flourish online – advancing their causes, recruiting supporters, and inspiring violence. What explains the digital resilience of militant organizations?


Moderating Extremism provides a deep dive into how extremist networks adapt to content moderation on social media platforms. I argue that divergence between platforms’ content policies allows militant organizations to become resistant to moderation. I offer a theory of digital resilience that explains how variation in moderation across platforms creates “virtual safe havens” in which banned actors can organize, launch campaigns, and mobilize supporters. I also show that this ability to evade moderation is curtailed when platforms align their moderation standards. Drawing on rich evidence from a variety of sources – including data on the online activity of over a hundred militant organizations, archives of banned terrorist propaganda, and a time series of platform moderation policies – I explain how digital resilience is powerfully shaped by the degree of variation in the way technology platforms police speech online.


Understanding how dangerous organizations adapt to moderation sheds light on growing challenges at the frontier of mitigating online harms. Divergent content moderation standards are a feature of our increasingly decentralized online information ecosystem, yet their effects on the ability to moderate harms are rarely considered in debates over social media regulation. Policymakers often rush to suppress or take down offensive content online while failing to consider the consequences of these approaches for the broader digital environment. By explaining how variation in content moderation across social media platforms can be exploited by militant actors, the book offers an important account of why extremism continues to be a problem for our digitally connected societies.
