Facebook: the new censor’s office

Social media is usurping the role of the state, argues Jillian York

With great power comes great responsibility, but for Silicon Valley’s mega-corporations, that lesson hasn’t quite sunk in. The companies that host the vast majority of our online expression – most only a little over a decade old – have amassed a tremendous amount of power, and with it the ability to influence everything from how we consume news to what we wear.

For many users of corporate platforms, social media feels a bit like a public square, where debate and trade occur and movements begin. We’ve come to treat platforms like utilities, but they are not – nor can they be – neutral. The laws and proprietary rules that govern these companies have a uniquely American flavor, as does our increasing reliance on them to do the job of the state for us.

The recent US Senate hearing, during which CEO Mark Zuckerberg was questioned by unwitting lawmakers about his company’s actions, touched on this issue. Senator Ted Cruz asked Zuckerberg directly whether his company deems itself a neutral platform, noting that Section 230 of the country’s Communications Decency Act (CDA 230) grants immunity from liability to neutral platforms hosting speech (note: platforms needn’t be neutral to benefit from Section 230). The young CEO hedged, citing unfamiliarity with the law.

In recent months, the calls for corporations to impose or increase regulation of certain types of speech have reached a fever pitch. In the halls of governance, the opinion pages of major newspapers, and the policy recommendations of NGOs, the consensus is that Facebook, Google, and the like should take on the mantle of government and censor hate speech, regulate ‘fake news’, and fight extremism – all without significant (or, in some cases, any) oversight from civil society.

In Europe, this is already happening. In 2016, the European Commission signed a ‘code of conduct’ with four major American tech companies – Microsoft, Google, Facebook, and Twitter – aimed at reducing illegal content online. According to the code, companies should review reported content within a certain time frame and delete hateful speech that goes against their own terms of service. The code does not refer to illegal content per se, but rather pushes companies to adhere to their own proprietary governance structures. Civil society groups were initially part of consultations, but resigned from them in 2016, citing a lack of transparency and public input.

Similarly, the German Netzwerkdurchsetzungsgesetz (NetzDG) law, which went into effect in late 2017, requires companies to delete certain content (such as threats of violence and slander) within 24 hours of a complaint being received (or, in cases of legal complexity, within seven days). The law was roundly criticized by internet activists in Germany and abroad, and has already produced a great number of false positives.

While Europe is understandably concerned with the rising tide of hate speech, deputizing American companies to determine what is or isn’t hateful is an odd way of dealing with it. After all, these are the same companies that elevate ‘civil’ hate speech above profanity, routinely censor counterspeech, and often fail to take white supremacist terrorism seriously.

Furthermore, these regulations give the illusion of safety and security, while in fact they are further eroding democracy by placing ever more power in the hands of unaccountable actors.

For nearly a decade now, tech companies have been imposing ever stricter parameters on how we express ourselves online. They restrict the expression of sexuality, and of women’s bodies. They place limits on our use of profanity, regardless of context. We’ve allowed corporate executives – rather than elected officials – to define acceptable speech, and they’ve done a mediocre job, at best.

More recently, we’ve seen demands for these same companies to define ‘news’. Social media platforms such as Facebook have historically taken a hands-off approach to regulating what media outlets post on their platforms, while simultaneously creating partnerships with news organizations. In the wake of the Trump election, however, there have been increased calls for the company and its competitors to fight misinformation.

It is a noble goal, but can we trust tech companies to get it right? The answer is a resounding no. At a recent event hosted by the Financial Times, Facebook’s Head of News Partnerships, Campbell Brown, was asked whether Breitbart – the far-right media site associated with members of the Trump administration and known for pushing dishonest and hateful content – is a trustworthy source. Brown replied: ‘To some people, it is. To some people, it is not.’

Brown’s mealy-mouthed response is telling: Facebook, despite its founder’s lofty proclamations, is unwilling to do the work – and should anyone be surprised, when one of the company’s sitting board members sometimes shares Breitbart articles on the platform?

Other efforts proposed by the company rely upon users to determine the trustworthiness of particular media, a rather dangerous proposition given the popularity of outlets like Breitbart. Only slightly better are proposals to create a ‘downvote’ button – as seen on Reddit and elsewhere – for users to ‘dislike’ media that they deem untrustworthy.

Furthermore, we’ve seen what can happen when companies do step in to regulate media: major outlets remain untouched, while small and foreign-language outlets come up against the companies’ censors. In December 2017, a small Ukrainian outfit was briefly banned from Facebook, receiving only a notice that it had ‘triggered a malicious ad rule’. Just a month prior, the San Diego City Beat had content removed from its account after unknowingly violating a content rule. And Facebook’s recent changes to its news algorithm are undoubtedly impacting small publishers the most.

All of these knee-jerk reactions, from the European Code of Conduct to corporate efforts against fake news, share some common threads: They’ve been developed outside of democratic norms, without transparent processes for public and expert input. And perhaps more notably, they treat problematic speech as something to simply be hidden from view, rather than dealt with at its roots.

Perhaps, then, it’s unsurprising that some of the best solutions to these problems are coming from external actors. First Draft, a project of the Harvard Kennedy School’s Shorenstein Center, aims to fight misinformation through research and education; working with partners, it covers a range of techniques that journalists use to verify content. Other projects, including the Dangerous Speech Project, aim to create awareness – and nuance – around speech that can cause or exacerbate violence, and to share ways to counter it.

It’s important to acknowledge that misinformation, violent speech, harassment and other forms of expression can create real harms, and that the concerns about them expressed by various actors are valid. But it is also vital that we acknowledge the harm that handing over governance to corporations – whose primary interest is profit – can engender.

With power already in their hands, it is nevertheless important that we push companies toward meaningful transparency and accountability. For starters, this means opening up their processes to public scrutiny; offering due process to users whose content is taken down; and being straight with users about how their data is collected and used.

As we look toward solutions, we must prioritize democratic norms above profit maximization. This means devising solutions that aim for inclusivity and due process and work toward getting at the root of problems – rather than treating them like loose nails to be tamped down with a giant hammer.