Moderating Extremism
The Challenge of Combating Online Harms


The relationship between platform size and moderation


The regulation of social media platforms has become one of the most pressing and contested global policy issues. From misinformation and hate speech to terrorist propaganda and other harmful material, debates over how to make online platforms safe have taken center stage. Yet despite recent efforts to combat online extremism, many violent actors still operate successfully on social media, disseminating propaganda, recruiting supporters, and inspiring violence. How do these actors — facing an increasingly disruptive information environment — continue to use the internet effectively to advance their cause?


In a new book project, I offer a theory that explains why violent actors remain resilient on social media despite efforts to regulate their content. I argue that inconsistency in platform moderation policies allows groups to migrate between platforms, adapt messaging to diverging platform rules, and mobilize supporters on less moderated sites. I offer rich evidence from a variety of sources — including novel data on the online activity of over 100 militant organizations, archives of banned terrorist propaganda, cross-platform data on militant networks, and a time series of platform moderation policies — to shed light on this new "digital battlefield" where violent extremism remains a persistent challenge.

Extremist groups' banned handles tend to be concentrated on a small number of platforms, while other platforms allow these groups to maintain active accounts