The Prime Minister says online extremism must be tackled – and here’s how

June 5, 2017

In her first statement after the London Bridge terrorist attack, Prime Minister Theresa May called for decisive action to prevent the internet being used as a ‘safe space’ for extremism. While the processes of radicalisation of the three as-yet-unidentified attackers are unknown, we can be in no doubt that the battle against Islamist terrorism and extremism is as much ideological – and especially virtual – as it is military or physical. My research shows that over two-thirds of those involved in Islamist terrorism offences in the UK were known to have consumed extremist or instructional material – almost exclusively online – while in an increasing number of cases the internet was cited as a major site for radicalisation.

The Prime Minister’s assessment was clear – more needs to be done internationally to regulate the online world ‘to prevent the spread of extremism and terrorist planning’, while here in the UK more needs to be done to ‘reduce the risks of extremism online’. She is right. Repeatedly the tech industry has promised action, offering to increase safeguards and intensify its efforts, yet there has been a consistent shortfall in delivery. Successful prosecutions for propagandist Islamist terrorism offences such as possessing or disseminating terrorist material – almost exclusively digital material accessed or shared online – have increased in recent years, yet the average sentence for such offences (between two and a half and three years) is woefully inadequate.

The Prime Minister’s statement follows concerted political efforts to prioritise countering online extremism. At the G7 summit last week, she urged world leaders to do more and charged tech companies with raising their game when it comes to identifying and removing extremist content. The Prime Minister’s words follow repeated criticism of those companies in recent months. The Home Affairs Select Committee has branded corporations like Google, Facebook and Twitter a ‘disgrace’ for their refusal to act robustly against extremist material.

Central to the Prime Minister’s statement was the recognition that ungoverned spaces must be minimised as a prerequisite to opposing extremism. This reflects the current counter-radicalisation strategy, under which public institutions, the charitable sector and broadcasters have a responsibility to protect themselves from extremist abuse and shut down spaces where extremists operate. The Conservative Party 2017 Manifesto extends this maxim to the virtual world with the fundamental premise that ‘online rules should reflect those that govern our lives offline’. A cornerstone of proposed policy is the creation of a ‘digital charter’ to generate digital business development and make the UK ‘the safest place in the world to be online’. The charter is rooted in a rights-and-responsibilities ethos, and reflects the Conservative Party’s renewed and wider commitments to corporate responsibility.

Much of the detail behind the Prime Minister’s statement can be found in the Conservative Manifesto. It promises to enforce a new code of responsibility for the industry, requiring that companies do not direct users to ‘hate speech or other sources of harm’ and enabling the reporting of ‘inappropriate, bullying, harmful or illegal content’. The Manifesto pledges to create a regulatory framework and introduce a sanctions regime for those who fail to remove content that ‘breaches UK law’. It commits to ensuring that companies ‘develop technical tools to identify and remove terrorist propaganda’, share expertise with smaller companies and help civil society promote counter-narratives online. It also proposes a levy on social media and tech companies modelled on the gambling industry levy and promises to do more to challenge terrorists’ abuse of online encryption.

These are welcome and broad commitments, but as ever, the devil will be in the detail. In the past, the tech companies have proven adept at making the right noises at moments of heightened public concern – only to fall woefully short on implementation. We suggest that a new regulator be established either within the UK communications regulator, Ofcom, or parallel to it. This regulator should set new minimum industry-wide requirements for accountability and transparency, ensuring that internet users (like those accessing television or radio content) are protected from harmful material online.

Fundamentally, the tech companies should be treated as publishing outlets, and, as such, must be made to take real responsibility for their content. More remains to be done to achieve this. The proposed content take-down will operate on a ‘comply-or-explain’ basis, a light-touch approach that may be problematic in relation to extremist material. Moreover, the tech companies have long displayed an absence of will to develop systems or standards for identifying and removing such content. They therefore need to be pushed – by both government and civil society – to implement codes of conduct that explicitly reject extremism in all its forms.

At the same time, tech companies should cooperate more closely and provide financial support for the Metropolitan Police’s Counter Terrorism Internet Referral Unit, the taxpayer-funded body responsible for the removal of online content that incites or glorifies terrorism. The unit also plays a role in identifying individuals whose extremist activity online gives cause for concern, which at times has led to criminal convictions. In her statement, the Prime Minister suggested the government would consider increasing the length of custodial sentences for terrorism-related offences. She should start with possession or dissemination offences as a first step to effectively challenging those committed to actively glorifying hatred and violence in the name of a political ideology.
