I ended my last blog post by asking, “What information and under which circumstances can and should LEAs remove terrorist propaganda from social media platforms and other websites? When would such actions fall under the scope of the EU data protection framework?”
In this second blog post of my exploration into the matter, based on an upcoming publication for CRC Press (part of the Online Terrorist Propaganda, Recruitment, And Radicalization Book Project) that I wrote together with Dr. Milda Macenaite, I will specifically explore data collection for countering online terrorist propaganda, recruitment and radicalization.
It is ever clearer that terrorist groups are increasingly using the internet as a means to recruit new supporters and to promote their causes. In fact, in 2017 Europol, in its Internet Organised Crime Threat Assessment, identified more than 150 social media platforms, including Twitter, YouTube and Facebook, that are used for such purposes. The persistence and availability of terrorist propaganda is becoming a growing concern for LEAs and governments as such “innocent” platforms increasingly become the means of dissemination for illegal and extremist content.
ISIS in this sense is incredibly innovative in its use of social media. We have seen the organisation use social media to live-stream attacks, recruit and indoctrinate new members, and, as we have seen in recent years, to incite individuals and terrorist cells to violence. To do so, ISIS regularly makes use of automated social media accounts, also known as bots, which are designed to increase the likelihood that specific content goes viral, as well as online archiving tools that retain content that is removed from other platforms. These tools allow ISIS, even when not creating new content, to re-upload old content regularly, creating an “echo effect”.
An example of the tech-savvy nature of ISIS is its ability to get around Facebook’s automatic detection features when hosting Facebook Live meetings, by linking banned materials in the comments field.
What’s being done to counter such actions, you may ask? The European Union has gathered its resources to implement a coordinated and multilateral response between national and EU-level public authorities, service providers, platforms and civil society groups in order to “address in particular the use of the Internet for terrorism radicalisation and recruitment purposes as well as for on-line hate speech that fuels fear, spreads misconceptions and stereotypes targeting specific communities and groups, and incites to violence and hatred, notably by developing, including with Internet Service Providers, cooperation on strategic communication and, where appropriate, internet referral units”.
The blocking and removal of extremist content, ranging from social media content to blogs, YouTube and more, is key to this action. A dedicated unit within Europol’s European Counterterrorism Centre, which aids Member States in tackling online terrorist propaganda, plays a central role here: since July 2015, the EU Internet Referral Unit (EU IRU) has monitored and flagged social media accounts, which are then taken down by ISPs.
The tasks of the EU IRU include:
- coordination and sharing of the flagging of terrorist and violent extremist content online with relevant partners;
- coordination with industry to carry out referrals;
- support for competent authorities by providing strategic and operational analysis;
- acting as a hub of expertise in these fields.
Furthermore, the EU IRU, in cooperation with LEAs from Member States and industry, flags and refers terrorist content online while civil society attempts to prepare counter-narratives. The latter has been carried out by a network of experts (the Radicalisation Awareness Network) with financial support from the EU.
Industry members have also worked to improve the automatic detection of terrorist propaganda content, leading to the development of several industry initiatives and tools. In fact, in 2017, Facebook, Microsoft, Twitter and YouTube announced their collaboration through the Global Internet Forum to Counter Terrorism (GIFCT), a formalisation of their already on-going cooperation to address the problem of terrorist content on their platforms.
When it comes to regulation, the European Commission has taken legislative action and prepared a proposal for a Regulation on preventing the dissemination of terrorist content online (COM(2018) 640 final, 2018/0331 (COD), 12 September 2018).
It is increasingly clear that successfully countering online terrorist propaganda, recruitment, and radicalization requires the sharing of information. Content blocking and removal represent just one part of the fight; data can potentially be collected further for crime analysis purposes, which will be considered in my next post.