Sharing in safety: deterring predatory behaviour on children’s social networks
Policy Exchange’s Sarah Fink and Colleen Nwaodor reflect on a recent round table discussion convened by Policy Exchange’s Digital Government Unit. The event explored the problem of predatory behaviour on social networks designed for children, and how this challenge might be overcome.
The debate around child safety online is extensive and complex. One particular area of concern is the risk posed by predatory activity on social networks specifically designed for children. The issue came to a head earlier in 2012, following a Channel 4 investigation into activity in the online game Habbo. The investigation found sexualised chat content and scope for paedophiles to use the site to “groom” victims, prompting the temporary suspension of Habbo’s chat function while improved controls, filters and moderation were implemented. Habbo is just one of a number of sites and social networks designed for children (others include Club Penguin, Moshi Monsters, and many more).
A useful starting point is to consider what websites designed for children are already doing to counteract predatory and inappropriate behaviour. Some sites have built sign-in features that use facial recognition software and/or human review of images to verify that it is a child (and not an adult) logging on. Others vet sign-ups against sex-offender registers, or use parental accounts to help manage what kids get up to. Others still have tried restricting messaging and chat functions to whitelists or pre-defined phrases, as a way to limit the scope for predatory messaging.
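To make the last of these approaches concrete, the sketch below shows how a pre-defined-phrase chat restriction might work in principle. This is a minimal illustration, not any site's actual implementation: the phrase list, function name and behaviour are all assumptions chosen for the example.

```python
from typing import Optional

# Hypothetical whitelist of approved chat phrases. On a real children's
# site this list would be far larger and curated by moderators.
APPROVED_PHRASES = {
    "hello",
    "want to play a game?",
    "good game!",
    "see you later",
}

def filter_message(message: str) -> Optional[str]:
    """Deliver a message only if it matches an approved phrase.

    Free-form text is dropped entirely rather than delivered, which is
    what limits the scope for predatory messaging.
    """
    normalised = message.strip().lower()
    return message if normalised in APPROVED_PHRASES else None

# An approved phrase passes through; anything else is blocked.
print(filter_message("Hello"))                 # delivered: "Hello"
print(filter_message("what's your address?"))  # blocked: None
```

The trade-off discussed later in this note is visible even in this toy version: the stricter the whitelist, the safer the chat, but also the less expressive and appealing the site becomes for its young users.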
The focus of our discussion was on what further steps could be taken to deter speculative predatory activity on sites specifically designed for children. Beyond making it difficult to engage other users in sexualised conversations, we wanted to explore technical measures to deter people from attempting this sort of behaviour in the first place. The hypothesis we chose to discuss was that a meaningful deterrent might be achieved if people were reasonably sure that grossly inappropriate activity would be traced back to them in real life (as opposed to just getting them kicked off the site in question).
This is clearly a narrow slice of a much larger set of issues (see here and here for starters) related to child safety, online protections and freedoms, education, parenting and law enforcement. We did not have time to cover the whole landscape, and instead chose to focus specifically on the pros, cons and practicalities of the hypothesis outlined above. We did not suppose that this was a silver bullet for solving all of the problems of child safety online, merely that there might be scope to make more progress in the right direction. The remainder of this note should therefore be read with that caveat in mind.
Identities, tokens and authentication
This is not the first time that website operators have needed to verify the identity of their users. For websites dealing in adult content and gambling, for example, a credit card is often required when a user signs up. In the context of websites designed for children, participants discussed requiring a parent or guardian’s authorisation via presentation of their credit card credentials – not necessarily to be charged, but as a way to tie any inappropriate behaviour online back to a person.
The use of a credit card for sign-up, however, appears to carry a set of challenges. Primarily, credit cards (and other banking products) are designed for payments, not for verifying identities. Even if they were ready to be used for this purpose, this sort of approach might only work for children who have an engaged and financially included parent or guardian.
Participants also discussed the role that age verification might play. This is an important factor in the online safety debate; there are already solutions being developed and the scope for further technological innovation. The ability to verify an individual’s age, however, is not necessarily the same as being able to trace a user in real life should the need arise.
Some governments issue electronic IDs as a way to control access to websites designed for children. Belgium, for example, offers a product called Kids ID for those aged 6 and up to join youths-only chat sites. There may be problems scaling this sort of approach – including hurdles around cost and adoption – but nevertheless, attitudes across the world towards these sorts of IDs are interesting. In the UK, given our history of aversion to identity cards and centralised databases, this sort of approach might encounter strong opposition.
Another part of a solution might be the use of kitemarks. A kitemark might be attached to sites that join a scheme for tracing the perpetrators of predatory activity. A highly visible kitemark would serve both as a deterrent to potential predators and a signal to parents that a site was taking additional steps to protect its users.
Participants discussed the challenges around establishing a kitemark as best practice. For a kitemark to gain traction it may be important to get some of the big players on board early, establishing it as a de facto standard that everyone recognises and understands. There may be particular challenges for industry players trying to forge this sort of consensus on their own without ending up embroiled in accusations that their behaviour was cartel-like or anti-competitive.
Liability seems to be one of the biggest challenges for any potential scheme, and participants discussed a number of questions and issues around it. If, for instance, a user’s account is tied to an identifier that they have stolen, and they then go on to commit an offence online – who would be responsible for this misidentification? Or what if an error or oversight at the identity assurance end resulted in an individual being authenticated incorrectly? One can imagine the complexities in determining liability, and whether it rests with the issuer, the perpetrator, or elsewhere.
The international dimension
The internet is by definition international, and collaboration across borders seems to be an important element of the child safety debate. Participants discussed the different levels of action required, and whether it might take too long to reach a common international or global approach.
Some argued for a pragmatic approach, starting with a solution that works for the UK and then leaving scope to scale up. One way to achieve this might be to begin with a (small) number of organisations who are ready to implement their own individual solutions, step from here to shared solutions based on common technologies (e.g. mobile wallets) and then on to a long-term, Europe-wide agreement.
The starting point for taking these ideas forward might be to trial a kitemark or similar system on a small scale, with a few key domestic players and using tools that are already available. The aim would be to establish a proof of concept and best practice, to demonstrate that a solution could be implemented without degrading the user experience, impacting on site traffic or overstepping the mark on privacy. The approach might benefit from one organisation appointed as a focal point to help steer progress.
A question of balance
Clearly even one child experiencing unwanted sexual approaches, grooming or abuse is one child too many. But we must not lose sight of the potential unintended consequences of measures that might be taken to protect children.
Participants discussed the potential for measures designed to control activity to also damage the broader user experience. In extreme cases, poor user experience can result in users quitting a site altogether. There is little point in a safe social network for children if they don’t actually want to use it and are driven elsewhere.
We also need to be mindful of the positive potential of these sorts of networks. The internet can be a place where children who would not otherwise have a voice are able to express themselves, interact with their peers and form relationships. At Habbo, the suspension of chat facilities silenced many tens of thousands of conversations. We must not disregard the potential for the internet to be an empowering place for all ages.
The thin end of the wedge?
If additional solutions for deterring predatory activity online were developed, there is an important question about the extent to which they should be leveraged to deal with other concerns.
Participants discussed a range of scenarios, starting with the possibility that calls would be made to extend measures from sites designed specifically for children to all social networks. Participants also discussed the potential for calls to extend the approach to control other inappropriate behaviour online – for example conversations around self-harm, “thinspo” and cyber bullying.
Determining how far to go is a challenging question to answer, and it may be very difficult to give strong assurances that technical measures designed to solve one particular problem would not in time be appropriated to do far more.
Overall, the discussion raised a number of questions that would benefit from further research and deliberation, including:
- How can we build up a robust, accepted evidence base about the scale and nature of the problem, to help identify where best to focus limited resources?
- How do we balance the trade-offs, ensuring that social networks for children can be not only safe, but a place where young people want to spend time?
- How can a wide range of businesses, organisations and countries cooperate to come up with solutions that reach across borders and improve outcomes?
- If we were to attempt a scheme that linked users back to real-world identities, what are the practical and legal hurdles and how might these be overcome?
To find out more about the work of the Digital Government Unit, please get in touch by email or connect with us on Twitter: @PXDigitalGov