Facebook Child Abuse Detection Hit By New EU Rules

The Council of the European Union said it was aware of the problem the rules posed for the detection of child abuse-related content


Facebook has switched off some of its child abuse detection tools in Europe in response to new rules from the EU. The company said it has had no choice but to do so, since the new privacy directive bans automatic scanning of private messages. The change only applies to messaging services rather than all content uploaded to Facebook. However, there is no change in the UK, where measures are “consistent with applicable laws”, Facebook said.

The problem has emerged despite warnings from child protection advocates that the new privacy rules would effectively ban automated systems that scan for child sexual abuse images and other illegal content. Some other firms, including Microsoft, have not made such changes, arguing that the most responsible approach is to keep the technology functioning. But few companies handle private messages on the scale that Facebook does: it runs its own Messenger service and owns Instagram. Last week, it was revealed that implementing the directive would restrict some features of messaging apps. Now the impact on child protection tools has also kicked in.

“This train crash has been approaching since the summer,” said John Carr of the Children’s Charities’ Coalition on Internet Safety. “Neither the EU nor one of the world’s largest and most powerful tech companies could find a way of avoiding it. It is a very sad day for the children of Europe.

“We are heading for a very strange world if privacy laws can be used to make it easier for paedophiles to contact children, or for people to circulate or store pictures of children being raped.”

Spotting exploitation

The new rules update long-standing privacy protocols so that they now cover additional forms of messaging – such as email and online chat.

This has had the unintended effect of prohibiting advanced tools designed to spot:

  • Newly created violent and exploitative images not yet logged by investigators
  • Online conversations that have the hallmarks of abusers grooming victims

Anna Edmundson, head of policy at the NSPCC, said tech firms’ ability to scan for such content was “fundamental” to protection efforts. In October, the Council of the European Union said it was aware of the problem the directive posed for the detection of child abuse-related content, because no exception for such scanning existed in law. “Providers will not be able to continue to take such action unless a legislative measure is adopted with urgency and brought into effect by 21 December 2020,” the October press release warned.

However, no such exemption arrived in time.

As a result, “the European Commission and child safety experts have said that the directive does not provide a legal basis for these tools”, Facebook said in a blog post explaining the issue. “The safety of our community is paramount, and we are advocating for changes that will allow us to resume our efforts to identify this type of material.”

Other options

The social network added that it could not estimate how many people or messages would be affected, in part because lower-tech solutions remain an option. Children or adults sent inappropriate messages can still report them for investigation, and basic safety measures remain in place, such as restrictions on adults messaging children they do not already know. Facebook-owned WhatsApp is not affected, since messages on that platform are end-to-end encrypted and cannot be scanned anyway.

Facebook and other big internet companies have not said how many alerts the controversial EU data privacy law could stop – but experts have already worked it out. The US National Center for Missing & Exploited Children acts as a global warehouse for reports of online child abuse material. In the first nine months of this year, it received more than 52 million reports, and about 2.3 million of those involved offenders or victims in the EU. That is about 250,000 alerts a month – a figure consistent with 2019.
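The monthly figure above follows directly from the reported totals. A quick sketch of that arithmetic (using only the numbers quoted from the NCMEC statistics; the per-month value is an approximation, not an official figure):

```python
# Estimate monthly EU-linked abuse-report volume from the article's figures.
eu_reports = 2_300_000   # EU-linked reports in the first nine months of the year
months = 9

per_month = eu_reports / months
print(round(per_month))  # on the order of 250,000 alerts a month
```

This simply divides the nine-month EU total by nine, which is where the article's "about 250,000 alerts a month" comes from.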

The overwhelming majority of those reports come from Facebook. So even if some nations continue to receive alerts, the loss of the EU data will have a significant effect on the investigation of what is often a borderless crime. The European Commission plans to introduce legislation addressing child sexual abuse online next year. In the meantime, Facebook maintains, the only way to bridge the legal gap is an exemption granted by either the European Parliament or individual member states.

Microsoft, however, has opted to maintain its child safety procedures despite the legal difficulties. In a joint statement with Google, LinkedIn, Roblox and Yubo, it said that the complex impact of multiple, related laws “has created significant ambiguity”, with unintended consequences that could hurt users. “Given this, we believe the only responsible approach is to remain steadfast in honouring our safety commitments that European users – and, indeed, users around the world – expect and rely upon,” it said. It noted that despite there being “no articulated, harmonised regulatory approach for the near future”, the companies remained hopeful that officials would resolve the problem “soon”.

This news was originally published at BBC