
How to regulate the Internet without killing it: interview with David Kaye (UN)

(Via International Journalism Festival)

David Kaye is the United Nations’ Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression.

Mr. Kaye, the book [Speech Police: The Global Struggle to Govern the Internet, Ed.] opens with a passage from political theorist Benjamin Barber, arguing in 1998 that either the web “belongs to us” or it has nothing to do with democracy. Fast forward to 2019, and the web appears to belong to Facebook, Google, Amazon, Tencent, Alibaba — not its users. We are now about to rewrite the rules of the game, internationally, and your book argues that “democratic governance” is essential: “It’s time to put individual and democratic rights at the center of corporate content moderation and government regulation of the companies,” you write. And yet throughout the essay you also argue that many of the remedies being proposed for the “dark sides” of the web may further undermine democracy instead of strengthening it. What’s going on, and why should users be concerned? And why a book that goes by the sinister, Orwellian title of “Speech Police”?

That’s a bunch of hard questions. Let me start with the title. The title, “Speech Police”, is in many respects designed to be descriptive. We have all of these different actors now that are competing to police our speech online. You have the companies, you have governments, you have the public that is pushing for different kinds of constraints — we might think of them as social constraints. The digital age, and social media in particular, have made us focus more than we used to, certainly in democratic countries, on those who control what we can and cannot say. And that’s a big difference from the kind of environment we’ve lived in before. As for the quote from Benjamin Barber, ultimately what I want us to be thinking about is: how do we protect the original promise of the internet, which was indeed a democratic space, a place where people had a voice and where you had open debate? That for me is the core of Benjamin Barber’s message — and that was some twenty years ago. My view is that we need to be taking a hard look at how governments regulate the internet and how the companies do it, so that we can move towards a place where we feel there’s democratic control of online space. I talk in the book about different possibilities for doing that, but so far I think governments and companies have not succeeded in making that environment happen.

The global conversation around how to regulate a common space in which almost all of humanity is connected truly is a crucial one. Are we having it in a healthy way, though? Many digital rights organisations and academics argue that most of the claims put forward in the media and by politicians are devoid of evidence, and more generally point out that norms and rules are being written in response to moral panics, media sensationalism and prejudice — if not to the interests of a dying, old media world that long looked at the internet more as an enemy than as a fundamental change in human communications.

That’s a separate problem that I’m not addressing in the book — the role that social media, and Facebook in particular, play in undermining the media. I think some claims are probably overblown. The platforms have definitely had an impact on traditional media, but the book is not about that. What I’m really trying to do is to get people to think not only about the specific problems of hate speech or disinformation, but rather about how to make decisions around those problems. Should those problems be resolved only by the companies? That heads toward a place where we end up with corporate, profit-driven decision-making. We don’t want that — although companies do have responsibilities. We also don’t want governments to be making these decisions in ways that, as you said, are driven only by a sense of moral panic. They should also be driven by a question: how do we maintain and promote the original ideals of the internet? And my main concern there is that when governments, particularly in Europe, have tried to address this problem, they’ve done it in incredibly sloppy, irresponsible ways. I get the motivation. NetzDG is a well-intentioned piece of legislation that, either inadvertently or on purpose (it doesn’t really matter), gives the companies more power to decide what German law means. That’s not democratic. So we need to step back, take a breath and think: what makes sense? How do we want these decisions to be made? To what extent do we want them made by the companies, which have an obvious responsibility to protect rights? To what extent do we want governments to rethink the role of traditional public institutions, like our courts, in deciding what is lawful and what is not? Once we make those decisions, I think the specifics around hate speech or disinformation will, in a way, answer themselves, because they will be rooted in democratic principles — and overseen and constrained by democratic institutions.

Is Europe actually a role model here, though? Yes, it paved the way internationally for strong, solid privacy legislation with the GDPR. But considering, for example, the debate around the risks the Copyright Directive poses to users’ fundamental rights, or the dangerous rhetoric about “fake news” adopted by many European leaders (twisted by political leaders all over the world into a tool against the free press), can we still say that the EU is part of the solution, or has it become part of the problem too? In the book, for example, you speak of a sort of “liability plus” that would be imposed upon platforms even before “illicit” or “harmful” content is actually posted. That sounds frighteningly like a Chinese model of governance rather than a properly European one — as it implies having upload filters for basically everything the government doesn’t like, in order to remove all ills from the online world and make it a sanitised, clean environment for a “harmonious” society… Is this a good policy posture for Europe?

When you look at what’s happening in Europe, I actually think it’s hard to make a general claim, because when you drill down beneath the different policies and policymakers, you just see a lot of difference. At a bureaucratic level, if you look for example at the European Commission, there are a lot of good people there who are trying to get this right. But I think they face a lot of pressure from some governments to “eradicate” — yes, this is the word you hear regularly — hate speech or terrorist content; as if there were some laser weapon to do it! I think that pressure, which is very political, is really problematic, and you see it in different spaces. On the other hand, I’m not depressed about the situation. I think there’s hope. And the most recent form of hope is the French social media regulation that has just been released. I looked at the introduction, and it really uses language similar to the kind of language that I, and many people in civil society, have been using for the last several years. Which means you want plurality of media, diversity, human rights norms, you want to base your decisions on necessity and proportionality, and you want to have your courts involved in making decisions. I think that’s good. Seeing something like that gives me hope that there could be some constraint. On the other hand, the French are also pushing for the terrorist content directive (which would force digital platforms to remove terrorist content within one hour, Ed.). So, no government and no institution is monolithic, and part of my goal in the book is to encourage the good policy and the good approaches to these problems.

You argue that the platforms themselves are well-intentioned, but that — no matter how hard they try — they just can’t moderate so much content and grapple with such complex issues. These are legitimate and understandable concerns. And yet, time and again these same platforms seem unable to perform even the basic policing you would expect, especially in places such as Myanmar, where Facebook’s moderation presence has been basically non-existent, even in the wake of serious calls for violence and genocide on its platform. Also, and in striking contrast, you explicitly mention “politically cross-checked” pages, which “tend to be high-profile or popular pages requiring more than one level of evaluation before the company takes action against them”. Do you think there is some kind of preferential treatment for political or opinion leaders more generally, compared to the way ordinary users are treated? And is this fair?

Yes, on your first question. All platforms now have an explicit policy around newsworthiness. My concern around newsworthiness is this: it’s important that what leaders are saying is publicised, but it also means that you have parallel standards. If a random user, like you or me, does something that the platform thinks violates the rules, it could be taken down — fairly quickly or not — and there could be some account action. If you’re the president of the United States, or a member of UKIP…

A recent op-ed in The New York Times by Facebook co-founder Chris Hughes re-ignited the debate around breaking up Big Tech, starting with Facebook. Can that be part of the solution to any of these problems?

There’s no question that this is part of the conversation now. Is it the answer? Maybe, but I think it’s not enough to talk about the break-up. It is kind of a negative approach, and I don’t disagree with it. But it’s a reaction to what Facebook has become. And if you do break it up, then what do you do next? I think a real, honest conversation about it has to recognise that it’s not just about regulatory change on competition; it’s also about the responsibility of governments to create an environment that is much more amenable to competition. That might mean tax incentives for new social media platforms. It might mean creating more publicly funded social media. There are all sorts of ideas that we haven’t really considered in significant ways, because antitrust is only one part. Competition policy has to be followed by actual investment in pro-competition and pro-diversity approaches. And that means all sorts of assessments of the information environment, the media environment, and so forth. Even if you break up Facebook, which is Chris Hughes’s point, I’m not quite clear on how that has an impact on Facebook as a platform, putting aside Instagram and WhatsApp. How does that affect its rules and its reach? I’m not sure it changes all that much, and I’m especially not sure it changes much for the global platform. This has been one of my biggest questions to those pushing for a break-up: what is the American responsibility to ensure that a break-up doesn’t cause real harm to users overseas? Because, you know: you break it, you own it. A US company has developed extraordinary power in markets and jurisdictions outside of the United States; so US policymakers now have a definite responsibility to ensure that when they try to fix that problem in the United States, they don’t inadvertently cause massive problems outside it. It’s just a huge externalities problem that I don’t think anybody in the US is really focusing on right now.

Continue reading at International Journalism Festival.

Photo credit: UN Photo/Rick Bajornas
