Child protection and the limits of censorship

It’s unusual for a civil liberties organization — which Prostasia is — to be calling for censorship. But we are also, and first and foremost, a child protection organization. And one of the great achievements of the child protection movement worldwide was building a global consensus that the possession or distribution of child pornography¹ must be criminalized. The Optional Protocol to the Convention on the Rights of the Child on the sale of children, child prostitution and child pornography came into force in 2002 and now has 183 signatories, more even than the Convention against Torture.

But when the worthy principles enshrined in the Convention come to be applied in practice, what exactly do we want to be done? To be concrete, if you search for “child pornography” on Google or Bing, what would you like to happen? Would you like links to child pornography to come up? Would you like an alarm bell to go off at the FBI and to have your name added to a government watch list? Or would you like something in between?

What we have is something in between. Hopefully, links to unlawful images won’t come up — but neither should you expect a visit from the FBI. Instead, search engines censor links to unlawful images from their search results and cooperate with the authorities responsible for their removal at source. They also voluntarily insert a warning notice at the top of the results page; both Google and Bing display their own version of this warning.

Subject to some due process questions addressed later in this article, search engines are unquestionably doing the right thing here. Unlawful images of minors are not free speech, so direct links to such images should not be returned (which justifies censoring those results), and we ought also to do what we can to prevent people from seeking them out (which justifies the warning message). But since legitimate search results about child pornography can still be accessed, Internet freedom is not being infringed either.

Beyond Censoring Search Results

But the case of blocking search results is a fairly easy example. What else can and should be done to limit the availability of unlawful images of minors online? And what shouldn’t be done? Here are some other examples to consider:

  • Wikipedia contains an article about a classic rock album that includes a small copy of the album’s cover art, depicting a naked girl with her genitals obscured. Wikipedia insists that the image, although distasteful, is not unlawful and that its inclusion in the article has encyclopedic value. Others disagree. Who should decide? Should ISPs block users from accessing the image? The page? The whole of Wikipedia?
  • An instant messaging program is used to exchange naked images of minors. A police investigation finds that most of the images exchanged are being sent between teenagers; in other words, the app is being used for sexting. Rather than charging all of these teenagers with child pornography offenses, should the CEO of the company be charged with a criminal offense instead, to force the company to self-police the use of its service?

These examples aren’t just hypotheticals, they are actual cases. The first case took place in the United Kingdom in 2008, where the Internet Watch Foundation (IWF) has a private arrangement with most Internet access providers to provide them with a list of web addresses known to contain unlawful images of minors, which can be used to block those websites when users try to access them. When the Wikipedia page with the tasteless album cover was added to this list, the intended effect was broader than strictly necessary — since apart from blocking the image, it also blocked British users from reading information about the album. But in practice, it was broader still. Due to technical limitations of the way that website blocking works, British users were also blocked from logging in to Wikipedia to make edits to any page.

The second case mentioned above took place in Korea in 2014. The CEO of the company that makes the Kakao Talk chat app was held criminally liable for failing to monitor the use of the app and filter out “lewd content.” Some other chat apps, such as Telegram and Skype, do monitor and filter images sent through their systems, and report them to the authorities if they match a database of known unlawful images (read more about this technology below). But this wouldn’t have saved the Kakao CEO from liability, because many of the images exchanged were not previously known.

As you can see, both of these cases present some problems. We don’t want to have to shut down an entire website or social media platform just because it contains one possibly unlawful image, particularly when a more targeted approach is available. And when it comes to chat services, automatically filtering out known unlawful images is feasible, but it would be a step too far to infringe users’ privacy by requiring every message that they exchange to be manually moderated and reviewed.

What Can’t be Done?

These are just two examples. But there is a principle that underpins them, and it’s this: within the constraints of our resources and abilities, the only measures that we shouldn’t take to prevent the transmission of unlawful images of minors online are measures that would infringe human rights, as embodied in the U.S. Constitution and in other applicable national and international human rights laws.

The only measures that we shouldn’t take to prevent the transmission of unlawful images of minors online are measures that would infringe human rights.

One of the ways that human rights can be infringed is by blocking or filtering legitimate content. This was established in an influential 2010 report of the then United Nations Special Rapporteur for Freedom of Expression and Opinion, Frank La Rue, who stated that the mandatory filtering or blocking of the Internet is an infringement of the right to freedom of expression, except in certain very narrowly defined cases — one of which is the blocking of unlawful images of minors.

Another human right that can be infringed by overbroad attempts at blocking unlawful images of minors is privacy. A 2015 report from Frank La Rue’s successor, David Kaye, recommended that governments should not limit the use of technologies that allow individuals to protect their privacy online, such as encrypted and anonymous communications tools.

Once again, limitations to the right of privacy can be imposed, but they must be necessary and proportionate. For example, it may be a proportionate restriction of privacy for a government to seek a judicial warrant to place particular criminal suspects under electronic surveillance, but it would be disproportionate to effectively place all of the users of an app or device under surveillance by prohibiting them from using strong encryption.

Human rights necessarily place some limits on our ability to police the online transmission of unlawful images of minors, just as they also limit our ability to investigate and prosecute other crimes such as terrorism. But thankfully there is still much that authorities and Internet platforms can do to limit the transmission of such images, without infringing human rights or imposing authoritarian-level restrictions on the free and open Internet.

What Can be Done?

One of the most effective tools that we have is the use of image hash lists. An image hash is a unique identifier of an image (or a video), which can be used by automated systems to stop that image from being shared or downloaded. The use of image hashes results in a lower likelihood of false positives than URL (web address) lists, such as the one that resulted in Wikipedia being blocked. That’s because the content available at a URL can change, and might contain a mixture of lawful and unlawful material. But the hash of an image that has been identified as unlawful will, in more than 99.9% of cases, only result in a match to that specific image.

The industry standard technology for the checking of images against an image hash database is Microsoft’s PhotoDNA system, which it makes available to law enforcement and Internet platforms and developers at no charge. A second generation of this technology is also beginning to be used for matching videos.
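To make the mechanism concrete, here is a minimal sketch of hash-list matching. PhotoDNA itself is proprietary and far more robust to cropping, resizing, and re-encoding than this; the sketch below uses a simple “difference hash” only to illustrate the general idea of comparing an upload’s fingerprint against a list of hashes of known unlawful images. It assumes the Pillow imaging library, and every other name in it is hypothetical.

# A minimal illustration of hash-list matching, NOT PhotoDNA itself.
# It reduces an image to a short perceptual fingerprint, then compares
# that fingerprint against a list of hashes of known unlawful images.

from PIL import Image  # assumes the Pillow imaging library is installed


def dhash(image_path, hash_size=8):
    """Resize to (hash_size+1) x hash_size, grayscale, and compare adjacent pixels."""
    img = Image.open(image_path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming_distance(a, b):
    """Count the bits that differ between two hashes."""
    return bin(a ^ b).count("1")


def matches_hash_list(image_path, known_hashes, threshold=4):
    """True if the image is a near-duplicate of anything on the hash list."""
    h = dhash(image_path)
    return any(hamming_distance(h, known) <= threshold for known in known_hashes)


# Hypothetical usage by a platform checking an upload before accepting it:
# known_hashes = load_licensed_hash_list()   # e.g. supplied under license by IWF or NCMEC
# if matches_hash_list("upload.jpg", known_hashes):
#     block_the_upload_and_report_to_authorities()

The point of the sketch is only that matching is done against fingerprints of specific, individually identified images, not against whole pages or websites — which is why this approach produces far fewer false positives than URL blocklists.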

Although Microsoft develops the technology, it is not responsible for populating the database with image hashes. That responsibility falls to independent organizations, one of which has already been mentioned above — the Internet Watch Foundation. The IWF compiles its database from reports by Internet users and law enforcement authorities, and through its own independent investigations. Its entries are categorized by severity and made available to third parties under license. A similar database of image hashes is maintained by the U.S.-based National Center for Missing and Exploited Children (NCMEC), which sources images from its CyberTipline and from reports that Internet companies are required to submit to it by law. A third database project, used internationally by law enforcement agencies, is called Project VIC.

These technologies and databases have the potential to be used in the fight against the dissemination of unlawful imagery in ways that uphold the human rights of freedom of expression and privacy. But that doesn’t mean they can’t also be used in ways that infringe those rights. For example, just as unlawful images of minors can be added to these databases for identification, so could photographs of adult sex workers be used by such systems to identify and censor their constitutionally protected speech. The use of these systems can also be expanded into other content areas in which the legality of the content is less well-defined, such as blocking content that promotes terrorism. It’s not hard to imagine this technology ballooning out of control, and being used to block journalism (on the pretext of it being “fake news”), or to censor alleged copyright infringements. In fact, a proposal to do exactly that is about to pass into law in Europe.

Therefore, civil rights organizations like Prostasia can’t just give carte blanche to the use of these automated censorship systems without there being safeguards in place to ensure that their use is strictly limited to the case of unlawful images of minors. This is because we have a responsibility to ensure that child protection isn’t used to legitimize broader regimes of censorship. Thankfully, the IWF agrees with this position, having recently stated in a submission to the United Kingdom government, “It is our belief that content that is deemed to be harmful and which should be removed from the internet should be defined in law and not subject to discretionary, subjective interpretation.”

We have a responsibility to ensure that child protection isn’t used to legitimize broader regimes of censorship.

Ideally, such images should have actually been found to be unlawful by a court. But since new, obviously unlawful images of minors can be shared online in moments, courts act too slowly to curb their spread. So as a compromise, the NCMEC allows images that fall within the “worst of the worst” category to be added to its list before a judicial determination of their legality has been made. The IWF’s list is not so exclusive, though it does categorize its images on a scale of severity that allows users of the list to choose to automatically filter out only images in the most serious categories.
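As a rough illustration of how those severity categories might be used in practice, here is a minimal sketch, assuming a hypothetical list format in which each entry carries a severity label assigned by the list maintainer (the label names are placeholders, not the IWF’s actual schema): a platform licensing the list could automatically block only the most serious category and queue everything else for human review.

# A hypothetical hash-list entry format, for illustration only; the real
# IWF and NCMEC list formats and category labels may differ.

from dataclasses import dataclass


@dataclass
class HashListEntry:
    image_hash: str   # e.g. a PhotoDNA or other perceptual hash value
    category: str     # severity category assigned by the list maintainer


def split_by_policy(entries, autoblock_categories=frozenset({"most-severe"})):
    """Auto-block the most serious categories; queue the rest for human review."""
    autoblock = {e.image_hash for e in entries if e.category in autoblock_categories}
    review = {e.image_hash for e in entries if e.category not in autoblock_categories}
    return autoblock, review


# Hypothetical usage:
# entries = load_licensed_hash_list()          # supplied under license by the maintainer
# autoblock_hashes, review_hashes = split_by_policy(entries)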

How to Deal with Grey Areas

There is always going to be some room for discretionary interpretation when it comes to the less serious categories of image, which carries the potential that some lawful images might mistakenly be included. Some tech companies have been notoriously overzealous in their reporting of potentially unlawful images to the NCMEC and IWF, including images such as drawings of characters from The Simpsons in sexual situations, which, although distasteful to many, would be constitutionally protected speech in the United States. Other examples of common grey areas are innocent family photos at bathtime or the beach, photographs by recognized and respected artists or journalists, nude photos of young-looking adults, and legitimate medical or sex education materials.

These are all areas in which the potential for well-intended measures to limit the dissemination of unlawful images of minors can easily overstep and infringe on the constitutional and human right of free expression. On the other hand, it’s also true that a given Internet platform isn’t required to carry all constitutionally protected speech, and that its communities of users might not want to see some of these types of imagery. Moreover, global platforms may have to contend with a variety of different speech laws around the world, some of which may draw the line of illegality at a different point than the U.S. Constitution does — for example, in the United Kingdom, drawn depictions of underage sexual activity are considered unlawful.

How should Internet platforms deal with these grey areas? Well, because they are grey, a more nuanced approach is needed than the automated blocking tools described above. For one thing, when images from the NCMEC’s database are blocked by major Internet platforms, account holders who uploaded, downloaded or shared those images are also reported to law enforcement authorities and their accounts are closed. This would be a grossly disproportionate response to the sharing of an image that wasn’t unlawful, but merely inappropriate for the platform.

Platforms do have a right to decide that there are types of lawful speech that they nevertheless don’t want any part of, and to ban these from their platform under their terms of service. This is usually called “content moderation” rather than “censorship,” though it doesn’t hurt to bear in mind that the former is often used as a euphemism for the latter. Facebook, for example, has chosen not to allow even adult nudity on its platform.

Manila Principles on Intermediary Liability

Where individual Internet platforms do choose to disallow certain types of lawful sexual content, they ought to follow a set of best practice standards called the Manila Principles on Intermediary Liability. The Manila Principles require Internet platforms to be transparent about what content they do and don’t allow, to provide mechanisms of review, and to restrict that content by the narrowest method possible — for example, only restricting access to a particular offending image rather than an entire account or website, and only restricting it in a certain country if that is necessary to comply with local laws.

Later this year Prostasia will be working with Internet platforms and experts to develop another set of best practice standards, which will be focused more specifically on the case of child protection. We will be suggesting, for example, that when developing child protection policies, platforms should consult with relevant experts to ensure that the policies do in fact effectively protect children — because some kinds of sexual censorship (such as censorship of safe sex or CSA prevention information) can actually harm children and young people. Platforms should also consult with representatives of those whose rights are typically impacted by overreaching child protection laws and policies. One of Prostasia’s aims is to act as a “one stop shop” where platforms, especially smaller ones that do not have in-house expertise, can obtain balanced and independent expert advice about where to draw the appropriate line.

Transparency and Due Process

In addition to transparency about what content is being restricted, there should also be transparency and due process in how that content is restricted. This applies not only to platforms such as Google and Facebook, which make their own individual censorship and content moderation decisions, but also — in fact, all the more so — to the vendors of technologies such as PhotoDNA and the maintainers of shared blocklists such as the IWF and NCMEC, whose decisions have a direct impact on the actions taken by many others.

We need to be able to ask these platforms and vendors questions like: who is creating the blocklist or algorithm used to censor unlawful images of minors? What criteria are they using to construct it, and whom are they consulting to develop those criteria? Are communications over their platform securely encrypted, or is there a backdoor that allows them to be scanned for CSA imagery (but also, potentially, for copyright infringements or for keywords associated with terrorism)?

It’s a dangerous fallacy to think that, because child sexual abuse is such a terrible crime, we shouldn’t be able to ask these sorts of important questions. After all, if we can’t ask questions about how unlawful images are censored, are there also other topics about which we can’t ask such questions? What are those other topics?

If we can’t ask questions about how unlawful images are censored, are there also other topics about which we can’t ask such questions?

Since the Wikipedia incident, the IWF has sought to improve its own transparency and accountability, in response to the recommendations of an independent human rights audit conducted in 2013. Platforms are also doing better, in response to advocacy from groups such as the Electronic Frontier Foundation, with Twitter recently having begun to send more informative notices to users whose content is taken down, and Facebook giving more detail in its transparency reporting.

Many platforms’ terms of service are also fairly clear about the consequences of uploading or sharing unlawful material. Google, for example, notifies users that the content of their Gmail emails might be scanned and states this about unlawful images of minors:

Google has a zero-tolerance policy against child sexual abuse imagery. If we become aware of such content, we will report it to the appropriate authorities and may take disciplinary action, including termination, against the Google Accounts of those involved.

However, more can be done to ensure that the policies and terms of service concerning child protection are transparent, effective, and consistent with human rights standards. Prostasia Foundation intends to publish a whitepaper analyzing the child protection policies and practices of service providers and platforms in 2019.

Platform Liability

Almost every country in the world has committed itself to eliminating unlawful images of minors from the Internet, and Internet platforms are already at the forefront of this battle, pushing the boundaries of technology in doing so. These platforms can be used for good or for ill, and the filtering technologies that they use to combat unlawful images of minors can also be used for good or for ill. Part of the mission of Prostasia Foundation is to help ensure that both are only used for good.

Can more be done in this fight? Yes of course it can, but always at some additional cost. Given that the cost of child sexual abuse in America every year was recently estimated at $9 billion, we should be willing to pay a high cost to reduce child sexual abuse. But that cost must never get as high as sacrificing the freedom of expression or privacy of our citizens.

This means that, given the great potential for hash-based filtering to be used to harm all our rights and freedoms, this technology should only be used on images that are indisputably unlawful, and that its use should be accountable and transparent enough for the public to be able to make sure of this. Provided that users are informed that their activities are being monitored and to what extent, there is nothing wrong with such technologies being used to filter and block unlawful images from being transmitted, and to alert the authorities about those involved.

We believe that for now hash filtering by platforms should remain a voluntary industry best practice, rather than a legally mandated censorship system that governments would inevitably seek to expand into other areas, as is already happening. With that said, it shouldn’t be left entirely to private companies either, since private censorship is inherently no better than public censorship. Child protection groups and civil rights groups — such as Prostasia, which, conveniently, is both — also have an important role in ensuring the accountability and effectiveness of these private censorship measures.

In order for the system that we have to work, Internet platforms must not be pushed into a liability regime that holds them liable for what their users do online, lest pure economic risk aversion induce them to censor substantially more lawful content than they already do. This is, indeed, exactly what FOSTA/SESTA does, and the result has been exactly as predicted. That’s why Prostasia Foundation, the only child protection organization to oppose FOSTA from the outset, supports its repeal, and a return to a liability regime that is rightly centered on child sex abusers, not on Internet intermediaries and the most marginalized of their users.

That doesn’t mean that images that test the limits of what is legal should be freely available online. Provided that they do so in a transparent and accountable fashion, platforms should be free to set their own policies on what lawful content they allow, and to use their own automated and human review processes to enforce these policies, including acting on third-party complaints where appropriate.

There are current inquiries in Europe and in the United Kingdom that ask whether we should abandon this largely self-regulatory “notice and takedown” system. But in the absence of evidence that some other system could be more effective while remaining compliant with human rights standards, we should hesitate to abandon a system that has worked well so far. In its response to the UK inquiry, in which other groups like the National Society for the Prevention of Cruelty to Children (NSPCC) call for tough statutory regulation of social networks, the Internet Watch Foundation pushes back against such suggestions:

The current political narrative in general places a lot of blame at the doors of the large tech companies for “needing to do more” to remove illegal and harmful content online. However, there are examples of flawed legislation which will have a negative impact on the availability of information, the freedom of expression online and many other of the internet’s benefits if Britain decides to introduce greater regulation by proposing legislation that focusses all their attention on “tech companies needing to do more.”

Technology does change and develop, and it might be that in the future, we can predict which images should be removed before anyone gets to see them at all. But today, we must work with the technologies and systems that we have. We have an open Internet that has allowed these technologies and systems to develop in an organic fashion over the last 20 years. They are not perfect. But incremental, evidence-based improvements are the best way to advance the state of the art of child protection, rather than sweeping new laws written by morals campaigners.

Censorship is not always a dirty word. It’s the word we use to describe the limits to freedom of expression. And unlawful images of minors are one of the only types of content on which there is universal consensus that it should be eliminated from the Internet. But despite — or indeed because of — how appalling this content is, there is a risk that it will lead us to create overreaching censorship regimes that can and will be used to harm innocents.

As both a child protection organization and a civil rights organization, Prostasia aims to become a voice for balance and reason, working with diverse other stakeholders to more effectively eliminate unlawful images of minors, in a way that accords with the highest values of the society that we would like our children to grow up in. Please consider supporting us in our mission.


¹ Despite being the most commonly used term, “child pornography” has fallen out of favor because it downplays the exploitation and abuse that is usually involved in the production of such images. On the other hand, the more favored term “child sexual exploitation material” can unfairly stigmatize teens who engage in sexting with their peers. The conflation of these images with images of child sexual abuse contributes to the imposition of draconian sentences on young people, which is itself a harm we aim to reduce. For that reason, we use the more neutral phrase “unlawful images of minors” in this article.
