It's up to you. Ladies and gentlemen, get settled if you can. I know it's been a really long day, but I think we have a very good panel in front of us. If you could settle down, please. My name is Stephen Erlanger. I'm with the New York Times. I would also like to thank Thierry, and I congratulate him on this 10th anniversary, though this feels a bit like a wedding cake; but we don't have to think about that. This is, to me, as a career journalist, a very important panel. We're here to talk about truth and trust in the digital age. Now, it seems to me truth and trust are questions in any age. They were questions in the medieval period, too. But now we're also interconnected. We're also subject to our phones. In a way, we're prisoners of our phones, and our phones tell a great deal about us. One of the things the internet has clearly done is make an enormous amount of knowledge available to everyone, and in many different languages. But as Henry Kissinger once said, knowledge is not wisdom. They're two very different things. One can know many, many things but not understand much at all. And this is the great fear. So I think we have a very varied panel with a lot of expertise and a lot of humanity. I'm not going to introduce everyone, because you all have your packs and you can look people up yourselves. And we're going to go right to it. I'm also going to try to leave 20 minutes at the end for questions, and we're going to try to be done by 6 o'clock, just to regain some time and get back on schedule. So I appreciate your attention. And our first speaker, quite extraordinarily, is the chief rabbi of France, Haïm Korsia. Haïm, it's up to you. Go ahead. Thank you very much. It's a challenge for me to speak, especially in front of such eminent personalities.
And by the way, I'm going to be honest: in 30 minutes the Shabbat begins, so I'm going to slip away, which will spare me the debate with you in particular and free me from that terrible constraint. I'm to talk to you about a subject that is very important: what is truth? How do we define truth? If we consider that truth is absolute, then there can't be any evolution in the world. And to share this with you, I would like to tell you a story I lived when I was in Castellane, the smallest sub-prefecture of France, where I attended Mass. And the priest, who was great, was explaining a passage of the Gospels to the faithful. You see, I know the Gospels; I do my benchmarking, I study the competition too. And the Gospels say that the first will be last and the last will be first. The faithful were having trouble with this, and the priest explained it to them, raising his hands as I can do now, saying: listen, you don't understand how the first will be last — you know pétanque? And all the faithful said, yes, we know pétanque. So he told them: when you play pétanque, you first throw the cochonnet — I don't know if here in Morocco we can talk about a cochonnet, the little "pig"; being a rabbi, I'm a bit sensitive on that point myself. So you throw the little cochonnet, and all the balls try to get as close to the cochonnet as possible. Then someone tries to shoot a ball that is very close to the cochonnet; he shoots, he misses the ball, and his own ball rolls very, very, very far away. Then another one tries to shoot; he misses the ball, but he hits the cochonnet, which rolls very, very, very far — right next to the first ball. Broadly speaking, that's Galileo. So how do we understand Galileo through the cochonnet? It's not complicated. You have a truth. You think a truth is yours, and you want to share it; you affirm it, but you are far from the truth accepted by everyone — very far.
And then at one point, minds evolve, and the accepted truth comes to you. So we can see that there is a truth of one moment and a truth of another moment. It's what Stephen said yesterday: truth is also conditioned by the moment. So if there is a truth of a moment that changes, there is no absolute truth. And this idea seems so essential to me that there is a text saying that the first one in history to use mass media is Moses, because on Mount Sinai he receives the Torah, the Ten Commandments, and the world is struck by these values. The world is still shaped by these values: you shall love your neighbor as yourself, you shall prefer peace. These values are at the heart of our societies. There is a midrash — that is to say, an allegorical commentary — which explains that Moses asks God what will become of this law. And God says to him: turn around. And Moses finds himself 1,500 years later in the study house of an eminent rabbi called Rabbi Akiva, and he hears him comment on the Bible, and he doesn't understand anything. Moses doesn't understand anything. So he says to God: has it really changed that much? And he listens closely. And Moses hears that when the eminent Rabbi Akiva is confronted with a question, he answers: I know it from my master, who knows it because he learned it from Moses. That is to say, he tells the history of things — the history of truth. And maybe science, for example, has changed between Galileo and us, but it is because there was Galileo that we can be where we are today. So the principle of truth is a path that only works with the fuel that feeds this very debate: dialogue, exchange.
And when, in the Talmud, for example, Hillel is debating with Shammai — one says white, the other says black; one says yes, the other says no; one says forbidden, the other says permitted — they never agree. Never. And a voice comes out of the sky and says: the words of the one and the words of the other are the words of the living God. That is to say, truth is not the affirmation of one thing, but an ethical tension between two positions that forces us to find a balance. Look at how popular wisdom puts it. No matter the culture, everywhere we affirm that a bad settlement is worth more than a good trial. Yet the trial is supposed to deliver the truth — the legal truth, more or less. Well, no: we prefer a bad settlement, which does not give us the truth — who is guilty, who is responsible, who has to pay — but a sort of arbitration in which no one loses too much. In the end, that's exactly what we're trying to build every day. And the big question of the fake news that harasses us is not so much about what is broadcast, but about how we rank information — without even talking about fake news, because social networks have amplified everything. When you read the press every morning, you see the same event told — we heard Renaud earlier — told in Le Figaro, told in Libération, told elsewhere... Look at the Balfour Declaration; it's the anniversary of the Balfour Declaration. You read yesterday in Libération that it was a disaster; in Le Figaro, I don't think they're talking about it; in Le Monde, it will depend — we'll see tonight. In short, every day — listen to my modest advice — the best is La Croix. In La Croix, at least, you have an ethic, a form of... I tell you, I read La Croix attentively. And Le Parisien. But you see, five serious media — I'm not talking about small fanzines, small sites; no, serious media — treat the same information from a different angle. In the end, the truth is probably the conjugation of all these angles.
It's actually the dialogue — that is, the ability to be enriched by the truth of the other. And that's why, in the debate between religions, we have to give thanks to the republic and to the laïcité that the republic has. Because without laïcité, either you need an open country or government, as Morocco can be, which allows everyone to live in their own way; or you keep a republican model, a model in which the state is neutral and everyone has the freedom to practice their religion — that is, in fact, no one claims to possess the truth. Every religion affirms: we have our truth. And so that my truth can be expressed, I have to fight so that the truth of others can exist. But as soon as I say that, I affirm that there are several truths, and that, in my opinion, will be the whole stake of your debate. Now, I kept 12 seconds to see if you can all wish each other Shabbat shalom. Yes, Shabbat shalom. Good Shabbat, because I'm going to keep the Shabbat. Okay, thank you. Thank you. Well, thank you very much, and Shabbat shalom to you too. Okay, well, thank you. I think the point I take from this is important, which is that truth is a balance. It's something that emerges from dialogue and discussion. But I always remember what I think Daniel Patrick Moynihan, a great American politician and writer, used to say: you can have your own opinions, but you cannot have your own facts. And so when we talk about truth, yes, truth emerges from discussion, but some things simply are true, even if Galileo had to suffer for something that was later understood to be correct. Anyway, we're now going to move to Susan Liautaud, who does lots of things but has a real specialty in ethics. And in this internet age, particularly with the scandals swarming around and the way the media is being used in them, I think she has quite a lot to say. Susan, please. Thank you. And I echo the thanks and congratulations to Thierry. It's an honor to be here.
So what I'd like to try to do is to focus on a couple of themes that I think undergird this link between technology and what we might call truth — and indeed trust — under siege. In particular, I'd like to look at this intertwining of technology, power and truth and see what it yields in terms of what we should do, because there's a lot of explaining and complaining, but at the end of the day we need to take action. And I would suggest that we need to rethink the way we're making decisions in this technologically laden world. So to start with power: today, as we've been hearing throughout the day, power is scattered. We heard it from the president of the ICRC; we heard it from Ambassador Eizenstat. Power is scattered to people like the WannaCry hackers, or like the extremist Buddhist monk in Myanmar who disregarded the government's prohibition on his preaching and just took to Facebook with his verbal abuse of the Rohingya and some horrific photographs. The problem with the scattering of power is that there's no corresponding assumption of ethical responsibility for the deployment of that power. And in fact, we don't really even know who has the power. The second power dynamic is a concentration of power in the technology companies. All the time in the news we hear about the so-called Big Five — the Amazons and Googles and Facebooks. But in fact it goes much further, and not just to the Ubers of the world but all the way down the chain to the startups. And the fundamental responsibility issue here is that they typically do not — and there are exceptions — but they typically do not think ethics first before putting their technology out there. In fact, many of them, I would suggest, have a proactive strategy of just doing and waiting until they have a head-on collision with a regulator or with consumers who will stop them. And so the question here is: how do we rebalance the allocation of responsibility?
And the starting point, for me at least, is to say that this tagline that they're "only a platform" is simply no longer acceptable. We can't have online sex trafficking, recruiting of terrorists and all manner of wrongdoing, and have these companies just saying that they are a neutral platform. On the other hand, we can't have regulators targeting and quashing innovation in ways that can also be negative for society. Now, the final point about power and technology is that technology has disempowered state institutions. Starting with the law, we see that legal systems lag very far behind technology, which is constantly changing at an increasingly fast pace, and the law simply can't keep up. We see that legal systems are very ill-equipped for the cross-border impact of technology, and, understandably, legislators just don't understand the technology. Similarly, state institutions are falling short with respect to power, and there are many complicated examples. I'll stick to one, which is cyber warfare. I don't know of many states that could run a cyber war without recourse to the private sector, or indeed to individuals. So technology has totally disrupted this power dynamic, and the first part of the what-do-we-do question is that we need to make decisions in this new reality, not by thinking about a balance of power that is outdated, even a year or two outdated. And now truth: technology has also catalyzed this epidemic of compromised truth. Fake news is a major example, but there are others. There's a Chinese app called Meitu that allows one to take away a few wrinkles and a few pounds in a matter of seconds and then put the photo on a dating app. So there's all manner of contagion. But in order to do the right thing, in order to make good decisions, we must insist on truth — the kind of scientifically verifiable, or social-science-research-based, truth.
And to Stephen's earlier point, I had the privilege of interviewing Salman Rushdie a couple of months ago, and he said, you know, it's not because you say the world is round that it's round — the world doesn't need you to believe it's round for it to be so. And I think we all, again, need to be substantially committed to truth. More generally, when we put all of these dynamics together — the power, and the contagious nature of compromised truth driven by technology — we have to ask ourselves: what else about our decision making needs to shift? And I would suggest a couple of things. One is that we need to broaden the conversation. It can't be that the innovators and those who control the innovations, be they large corporations or holders of super-majority voting shares in Silicon Valley, are the deciders on behalf of society about when and how innovation is unleashed on society. We need a much broader conversation. I have a personal challenge of trying to figure out how to do this, but it needs to involve academic institutions and think tanks. It needs to involve corporates and nonprofits and governments of all kinds. And above all, it needs to go beyond the US and Western Europe, because the impacts of technology are different around the world, but they are certainly global. At the moment, the only sort of checks and balances are institutions like the European Commission, and largely lobbying in America, et cetera. The other thing we need to do with our decision making, in my view, is to look at it through three lenses, all focused on humanity. One is the individual. So if we're looking at, for example, gene editing — and incidentally, everything I'm saying applies across any kind of technology, whether it's Bitcoin or civilian space travel or gene editing or social media — but if we take the example of gene editing, a patient with Huntington's disease wants it now, and understandably.
But at the same time, if we're looking at it through the societal lens, we're very worried about all the risks of what the experts describe as a simple "molecular scissors" technique, and about what happens if we lose control of it. And if we look at humanity writ large, we're very concerned about potentially permanently altering the human germline. So all of these questions have potential implications for individuals, for society, and for what I would call humanity writ large. And then finally, we need to look at this very daunting and complex reality that we have, with this complicated distribution of power and lack of understanding about where it is and who's responsible. And we need to avoid taking refuge in the binary. We seem to be suffering from an epidemic of binary decision making. As a London resident, I'll call out Brexit as the crowning example of a disastrous decision, the only result of which could have been divisiveness and waste. But there are others. A physical example is President Trump's wall: one side or the other. We have Transport for London: Uber, in or out. And I think we should be asking not so much yes or no with these technologies, which have both positives and negatives; we should be asking when and under what circumstances. How can we maximize the positive benefits and minimize the risks? I think I'll leave it there. Thank you very much. I mean, one thing I just wanted to ask you — what always troubles me is that, at least under American law as I understand it, things like Facebook are not really publishers. They don't have the obligations of publishers. They're kind of highways on which all kinds of garbage can pass. And they say: we have no responsibility for what travels on our road; we're just the road, right? And yet the minute you begin to talk about regulation, certainly in Western democracies, you begin to have a slightly chilling effect — who regulates what, and where does freedom of speech end?
And we'll come back to this, but I'm just curious to ask you: how much is this becoming a restriction on one sense of the freedom of the internet? I mean, have we had the party and now we're worried about the hangover, or what? I think it's a great question. I mean, first of all, I should say that all of these companies have a lot of margin for proactive ethical decision-making before we ever get to infringement on free speech. And all of this will have to be about effective ethical decision-making above and beyond the law. Because, as I said, the law will never catch up — and we wouldn't want it to, because the law would undoubtedly cross certain lines. But I should say I'm very pro-innovation. I'm very pro-business. I'm very pro-free speech. And I don't think ethical decision-making tramples on any of that. Even in the US and in the UK — but in particular in the US — even the First Amendment doesn't protect some of the speech I'm talking about. It doesn't protect inciting murder. It doesn't protect certain kinds of hate speech. It certainly doesn't protect child pornography and online sex trafficking. As the Supreme Court once said, free speech does not include shouting fire in a crowded theater. Right. Okay. Next we have Oliver Bussmann. Now, here we've been talking about some of the ethics and the issues around technology, trust and truth. Oliver is going to explain the technology itself to us, and I think he has things we can learn from. So Oliver, please. Steve, yeah, thank you so much for the introduction. Yeah, it's exactly what Susan said. We are coming out of 20 years of Facebook and Google dominating communication channels and content. And there's a lack of oversight, of making sure these sources of information can be trusted.
And I want to walk you through something — it sounds a little bit technical, I would say, but I'm going to use some animation to walk you through something that I believe is the next wave of the internet we went through 20 years ago. It's called blockchain, and I would say it's the new technology of trust, because it will change how we interact with each other, how we transact. It leverages the open internet, it leverages cryptographic functionality, and we will see a lot of things get faster and more secure, especially when it comes to key information, and we will see a new way of establishing trust. So where 20 years ago we swung toward a central model, we will now see a swing to a totally different model. Let me explain in a small animation why I think we are at the beginning of a major change. If you look at how we interact today, we transfer information over the internet by duplication. We always copy — music, PDFs, PowerPoints, whatever — if we want to exchange something between two parties. So there's a lot of duplication, and in most cases there's somebody who verifies that this is being processed in the right way, and that generates a lot of complexity. Complexity because all the parties have to reconcile their positions; they each have to maintain their own books, I would say, which always have to be in sync. That's something that, especially in the financial services industry, if you buy and sell trades or stocks, takes a few days; there are a lot of parties involved, it takes time, and it's opaque. And now, with the blockchain, the desire is to move to a world where assets themselves can be transferred. That means we don't want a duplication of whatever assets are out there, and we don't want to use a third party. It's almost a self-regulated way to exchange — doing a deal, doing a transaction, exchanging information. So blockchain technology, in the end, is something directly between two parties.
It's very direct, peer-to-peer; the information that you store on the open internet cannot be changed — it's immutable — and it happens without a third party. So it potentially eliminates a Facebook, an Uber, clearing houses, and other third parties that usually sit between two parties. Because here is the key point: what is the blockchain? I think you can discuss it in four topics. One: everything is digital; there's no paperwork anymore. Everything will be stored, and the difference is that the network — you and I and the other parties — will verify those transactions automatically, by consensus. So it's a very decentralized model, and in a minute I'll give an example from the media industry where the community, in the end, makes sure the news is right, including fact checks, et cetera. And it's secure: you cannot add, move or change this information once it's out there; it's verified and secured through cryptographic functionality. So it's a completely different way of interacting going forward. And what it means, in the end, is that the middleman between a buyer and a seller — an eBay, for instance — will disappear. There will be a direct relationship between an investor and a company someone wants to invest in. And that, for me, is the real example right now in the industry, because the venture capital industry is being diminished by this: with blockchain technology, companies can sell their virtual stocks through so-called initial coin offerings, and that channel is right now raising more funding than the venture capital industry. So venture capital, as a middleman, is already under pressure. The same goes for reader and producer: the middleman exchanging this information will disappear. And this is a serious business, because it's very comparable with what happened 20 years ago.
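The immutability Oliver describes rests on hash chaining: each block stores a hash of the previous one, so editing any past entry breaks every link after it. Here is a minimal, illustrative sketch of that idea in Python (a real blockchain adds consensus, signatures and networking on top; all names here are invented for the example):

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's content together with the previous block's hash,
    # so every block is cryptographically linked to its predecessor.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    # Append a new block whose hash covers both its data and the
    # hash of the block before it (a fixed value for the first block).
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "prev_hash": prev}
    block["hash"] = block_hash({"data": data, "prev_hash": prev})
    chain.append(block)
    return chain

def is_valid(chain):
    # Recompute every hash; any edit to past data breaks the links.
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i > 0 else "0" * 64
        if block["prev_hash"] != expected_prev:
            return False
        if block["hash"] != block_hash({"data": block["data"],
                                        "prev_hash": block["prev_hash"]}):
            return False
    return True

chain = []
add_block(chain, {"from": "alice", "to": "bob", "amount": 10})
add_block(chain, {"from": "bob", "to": "carol", "amount": 4})
print(is_valid(chain))            # True
chain[0]["data"]["amount"] = 999  # tamper with history
print(is_valid(chain))            # False: the chain detects the edit
```

Because every participant can rerun this verification independently, the network needs no central bookkeeper — which is exactly the "no third party" point above.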
We're talking about 2,000 startups working in old industries — financial services, media, logistics, healthcare — building up these new solutions. And if you look at how much money is already invested, it's very similar to the internet startup companies 20 years ago. Last year, 700 million dollars was invested in blockchain startups; this year it will be over 3 billion. Because with this technology there is now access to capital that was not in place before. So we see a democratization of access to capital: bottlenecks we used to see at private equity firms and venture capital firms will be freed up. So this is something I want to put on your radar screen, because it has an impact on geographies and regions as this develops. Let me walk you through one example. I'm not going to talk about the financial services industry, because banking is the fast mover; very close behind it, I would say, is the media industry. Susan mentioned that we run into the issue of content being produced out there that nobody can verify — is it real, is it fake, et cetera. So there's a company out there — and there are many other decentralized startups — putting up what are called decentralized news networks. It's a platform for producers, writers, reviewers and readers. The producers provide content. There is a community of fact-check reviewers, and the community — they get paid for this — decides, based on guidelines, whether the information is correct and then provides it to the wider community. And there's an incentive, because everybody gets paid for it in a certain cryptocurrency. So there is also a way to increase adoption. So you get collaboration that's decentralized, and a factual way to put information out there. Something that's really fascinating to watch.
I think the adoption of these kinds of technologies will be faster than, say, Facebook — it took Facebook over three years to get to 50 million users. I think this will be faster, because there is a monetary reward that goes to the user instead of to a company like Facebook selling ads. And this can be applied to all industries: financial services, logistics, healthcare and others. In the end, we're talking about significant benefits: simplification, speed, transparency. So the technology itself, in the end, is a new technology of trust that the community enables, and there is a momentum across regions that I think is unstoppable. Thank you very, very much. I must say I still feel a chill. I suspect the tax man also feels a chill, because if there's no record, then there is no tax. There is a record, because everything will be recorded on the internet. So the traceability of those transactions is actually higher, from an AML — anti-money-laundering — perspective. And it's a global theme; it's not limited to certain countries; it will exist cross-border. That's great, thank you. I mean, what I'm hoping is that when we get to questions, people in the audience will have specific things to ask. The whole idea of decentralized media fascinates me, partly because it seems to eliminate the whole idea of professionalism — professional editors, training, craft. If everything is a hobby, then whom do you trust? A professional editor could be incentivized to be part of the review process, because they get paid for that. So that's a different role the editor could play in the future. Well, as a former editor, I'd probably make more money with your blockchain than I do. Absolutely, I understand. Because I think you'd have much greater access to information that can be verified. Anyway, thank you.
So, but there are lots of legal issues that come up about the digital world — about how news is done, about how people's information is tracked. And we're lucky, I think, to have Antida Nordam here, who is a professor of public law and works on these issues, and she will speak in French. Thank you. Let me say a few words while the presentation, which is supposed to arrive, loads. First of all, I would obviously like to thank Thierry de Montbrial for inviting me to take part in this conference. It is an honor for me to be part of this panel. In preparing this talk, I was surprised to realize that, in the end, I am not particularly affected by fake news, or by theories of post-truth, because I must confess to you that I do not use social networks — paradoxical, obviously, for someone who works on digital issues. I hope I have not lost all credibility. Nevertheless, I do not use social networks because I do not trust social networks. And there, I am squarely within the theme of our panel. However, that is not the case for 51% of American citizens — to take that example — who, according to a 2017 Reuters report, get their news only through social networks. The fact that I do not feel directly concerned by this phenomenon does not mean that I am not interested in it, fortunately, and my contribution will therefore be to show the legal aspects of the question. Several questions arise here about existing law. Is existing law sufficient? Must new rules be created to frame this new phenomenon? How do we preserve the balance between regulating fake news, protecting freedom of expression, and protecting freedom of information? There are different ways for the law to react to a new phenomenon.
It must first try to characterize it in relation to existing rules and categories, possibly determine whom to sanction, and, if those solutions are insufficient, examine the causes of the phenomenon, always asking what form these legal solutions should take. These are the four points I will present to you here. First point: can fake news be legally characterized? The first operation a jurist performs when confronted with a phenomenon is to consider the facts and try to characterize them legally, that is, to fit the facts into existing legal categories. This operation of characterization makes it possible to define the corresponding legal regime. What characterizes fake news is the lie. We would need to be able to characterize this lie. The law does not sanction lying as such, but it can do so depending on context, by adding certain elements to the lie. Several characterizations exist in French law or in European Union law that could be used to characterize the phenomenon of fake news. I have given you a few examples. The offense of false news: the act of spreading false or misleading information. This offense exists in French law and is criminally sanctioned. It is admittedly little used, but it could well be revived in this new context. The action for defamation as well, or the right to online reputation — the e-reputation mentioned in the new European regulation on personal data protection — is an adaptation of defamation to the digital context. We also have older institutions, such as the right to privacy, attacks on the fundamental interests of the nation, or misleading advertising, which could be mobilized to characterize the phenomenon of fake news.
So we can see that existing law can frame this phenomenon, and yet some consider it insufficient, full of gaps, notably because it would not be able to sanction all participants in the fake news phenomenon, given the phenomenon's scale. So, whom to sanction? I see, in the end, two types of actors. First, the online platform operator you were discussing earlier. "Digital platform" covers many digital actors: it can be a search engine, social networks, a sales platform. So it is a very broad definition. And the choice of characterization here will matter for defining these digital platforms' obligations and the liability regime that will apply. Here too there are several examples. If we consider the online platform a mere host within the meaning of the European directives, then we can say that this online platform has only a lightened liability, because hosts are supposed to be neutral, to behave in a technical, automatic, passive way, which justifies a lighter liability regime. In the same way, if we consider them purely technical service providers, they will have no general obligation to monitor content, even if states can require certain precautions from them regarding the content they carry. On the other hand, if online platforms are characterized as data controllers — as was done for Google, for example, as a search engine, by the Court of Justice of the European Union — then they must be given a certain number of obligations of fairness and transparency, even an obligation of delisting, for example.
Likewise, if we consider online platforms to be content providers, that is, authors, publishers, then a reinforced, editorial liability will be required of them. So the question is: how should these digital platforms be qualified? Can we say they are publishers, or merely technical operators? The problem with entrusting control of content to these private operators is that, in the end, digital public order would be policed by private actors, which also leads us to reflect on the notion of sovereignty in the digital age. The other actors who can be sanctioned are the very authors of this false information, which forces us to think about holding those authors accountable. Except that such a system of accountability raises several problems. First, the question of anonymity: how do we identify the authors of false information? It is not always obvious. Then come two legal problems. How do we define the status of these authors, differentiating the ordinary author from the journalist, for example, who do not have the same obligations and are not subject to the same regime? And a second problem, that of the extraterritoriality of law: which law applies, when what characterizes the internet is precisely ubiquity and immediacy? Here again, we see that the law has solutions to offer, which nevertheless do not seem sufficiently satisfactory to some. So we could try to attack the causes of the phenomenon, and I see two: the financial gains that fake news ultimately brings in, and a deeper cause, distrust, distrust of institutions and of the ruling class. So how can the law combat these financial gains?
We would have to attack the economics of free content, on which a large part of the internet ultimately rests, and it is not certain that making people pay for information would guarantee greater trust in the media. As for distrust of the ruling class, trust can be restored by sanctioning those responsible, obviously, by effectively protecting victims, and also through new legal instruments. And that brings me to my last point: how best to regulate this phenomenon? Two solutions: either we adopt non-binding legal instruments, co-regulation, self-regulation with private actors, or we adopt a binding legal instrument. The question today is whether we need a new international convention on the digital realm, which seems to me, a priori, difficult to draw up. In conclusion, two remarks. First, it seems to me that we must be extremely cautious about the desire to create new rules at all costs to frame a new technological phenomenon. Above all, the new proposals being made are interesting, but they rest on the idea that fake news is a phenomenon new in nature because of its technological environment. It seems to me, on the contrary, that this is a change of degree rather than of nature, and in that case existing law is certainly, in part, entirely sufficient. Thank you. Excellent, merci, Antouda. I have one question for you, if I may. And maybe it's a naive question, but is there a complication or a difference between the Napoleonic code and the common law on these issues? Or are these issues so global that no one's come to grips with this kind of issue? If you allow me, I prefer speaking in French to be more precise.
I think there are both common issues in all the states, and that's why we have international negotiations at the international level, for example, on cyberattacks in the GGE framework of the United Nations. And at the same time, we see a movement by states, each one trying on its own side to manage digital activities for itself. And that is the whole issue I mentioned earlier about sovereignty in the digital age: the idea that states try to protect their laws, protect their values, by justifying the application of their national law to these digital activities, knowing that these digital activities are essentially the work of American companies, which would justify the application of essentially American law. Sometimes I fear that governments are more worried about protecting their values and protecting their powers than they are about protecting their citizens. But we'll see. Okay, I'm about to lose my last panelist, so let's move on. To conclude, we have what I think is quite an interesting test case for these problems. Stefan Hümen runs an NGO that looks into the issue of fake news and has done a lot of work on the last German election and the way news was manipulated or not manipulated and the impact that it had. So it brings us down, I think, to a very good real case of the issues we're talking about. Stefan. Thank you, Stephen. And I'm honored to have the opportunity to speak to you here at the World Policy Conference. I think it's a really timely topic. Many of you might, on the way over to Marrakesh, have picked up the latest issue of The Economist, whose cover story actually asks: is social media undermining democracy? It is really remarkable how our discourse, our discussion about the internet, has changed in the past years. I want to start with a quote from Hillary Clinton when she was still US Secretary of State.
And she said, the internet has become the public space of the 21st century: the world's town square, classroom, marketplace, coffee house, and nightclub. We all shape and are shaped by what happens there, all two billion of us and counting. She said that in May 2011, and it really captured a very optimistic mood about the internet at that time. It was seen as a place where people could globally connect, where they could share ideas and actually shape the world for the better. Now fast forward to the end of 2016, and Hillary Clinton, coming off an upset defeat in the US presidential election, says the internet has become a space where there's a fake news epidemic with real-world consequences. And for her, of course, the real-world consequences mean that she's not president. She lost the presidential election. But the fake news discussion is not just about the United States; the consequences are real, and they can be seen around the globe. So I think it's a really good topic for the World Policy Conference. We had Brexit, of course, and in the course of the Brexit campaign, lots of fake news like this one were shared and spread. We have the latest stories; the New York Times covered this. This one is from The Guardian, mentioning that in Myanmar fake images and fake news are used to instigate violence against the Rohingya Muslim minority. Then there's a topic we talked about this morning at the conference, the independence vote in Catalonia. In the context of the Catalonian discussion there was also a lot of fake news, for example about fake incidents of police brutality. I mean, there were some incidents with the police, but also lots of made-up stories that were shared widely on the internet. So we have heard the term a lot.
I think it's important to take a moment and reflect on what this term, fake news, actually means, because we have found, when we looked further into it, that the term is widely used. On the left you see that fake news has become a political term. It's been used, most prominently probably by President Trump, to discredit the mainstream media by calling that media fake news. Then there is made-up news as a form of satire; we're not that concerned about that. In the middle you see that fake news can simply result from poor journalism. Journalists make mistakes. Media organizations make mistakes. Editors make mistakes. That can result in fake news, but usually it gets quickly corrected. What we're really concerned about is more on the right side, underlined in red: intentionally spread false information. Usually a story that's taken out of context, that's intentionally misinterpreted to give it a different spin and to drive an agenda. And when you look at all the cases I've cited, the US presidential campaign, the Brexit campaign, Catalonia, you'll actually find that these are not just incidents on the internet. They're usually part of campaigns. They are a strategy for political mobilization in all of these cases. And I would argue here that we should really understand the problem of fake news as its use as a strategy for political mobilization. And you can rightly ask, and we should rightly ask ourselves, what's actually new about this? What's actually new about fake news? Here's an example of spreading fake news from medieval Europe. This is fake news about Jews killing Christian babies, the blood libel, and this is an image depicting it that was used to incite violence and pogroms against Jewish people. So fake news has been around throughout history. What's really new about it?
And this brings us, of course, to what we have been talking about at the panel today: the internet revolution, the digital revolution. The spread of the internet and of social media has given all of us here the ability to produce news and to share and distribute it, functions that used to be held by radio, television, and print, the traditional gatekeepers of how news and information is created and distributed. These media organizations have lost this central gatekeeping function. Now everyone who's connected to the internet, who can use Facebook, who has a smartphone to take pictures, can create stories, share them on the internet, and distribute news stories to friends and networks. That's what's really new. And at the same time, we see that the established media is in a crisis, not only a crisis of revenue, as Stephen mentioned earlier, with declining revenue, but also a crisis of trust: there's a lot of distrust in established news media around the world. We can see that it's not the same everywhere. In some countries, there's still much more trust in the news media. In Germany, as I will show you on another slide in a moment, it's still quite high. There are other countries where it's much lower. This is also a problem in the US, where high penetration of social media, so lots of use of social media, combined with distrust in the established media system, has really led to the fake news problem and the proliferation of fake news. So let's take a look at Germany. In Germany too, and this is now coming from our own data, from a survey we did right after the German election, what we've seen is that there is distrust in the media, but there's one particular group with very high distrust in the media in Germany, and those are the AfD voters. The AfD is the Alternative for Germany.
It's this new, staunchly anti-immigrant party that had a surprise success in the German election, getting more than 12% of the national vote and getting into parliament. And you will see very high numbers among AfD voters who distrust the media. They are also the group that is most likely to believe fake news and that is most active in sharing fake news. We would therefore argue that in Germany, fake news was mostly used in the context of the election campaign as a political strategy to mobilize AfD voters. And you will probably ask yourselves, as we are asking ourselves, what should we do about it? Do we need regulation? What are the responsibilities, the ethical responsibilities, of the social media companies? I will end with three observations based on our data that we can explore further in the discussion. The red column shows you the engagement with fake news, and the green one, this is a specific German case I won't get into, shows the engagement with the debunking news, the news that corrected the story. What you find consistently, as lots of media entities now try to correct news and debunk fake news, is that the effect is not as big as that of the initial fake news stories. Fake news gets much more attention, and there's much more engagement with fake news on social media platforms. That's the first thing. The second thing is that a lot of fake news results from what I would call poor journalism, or poor press releases that are ambiguous and then are turned into fake news. Here's another story that came up in the context of the German election campaign. There was an incident where bottles were thrown at the German police, and the initial press release talked about a gathering of 1,000 young people from which bottles were thrown at the police.
Then a media report turned these 1,000 young people into 1,000 rioters, and then fake news stories appeared that talked about 1,000 young migrant rioters throwing bottles at the police. And that's what we see consistently: when there's ambiguous reporting, poor reporting or a poor press release, this is often taken advantage of to put a new spin on it and use that spin for a political purpose, here, again, to put blame on migrants. And finally, what we see, and our data shows this too, is that pure fabrication, completely making stuff up, doesn't work that well. The most effective way to stop it is for organizations to immediately put out a counter-narrative on social media. So there was a fake news story that a German minister in a big German state had supposedly said the police should not talk about migrant criminality, that they should suppress the issue. When this news appeared, his office immediately put out a statement that he had never said that, that it was not true, that it was fake, and you see that his debunking story was very widely shared, so that the fake news story was completely ineffective. So it's also really important that we react very quickly when fake news stories come up. And I would like to end here. Thank you for your attention. Stefan, thank you very, very much. That was great. While you think of your questions, we have about 20 minutes left; I've tried to leave 20 minutes for questions before we go back to the panel. I must say, we all have fact checkers. We have teams of fact checkers. Every time Donald Trump issues a tweet or makes a speech or says something, we have fact checkers saying this is right, this is wrong, this is wrong, this is wrong. It seems to make no difference at all, because people seem to get their news in silos, and if you believe Trump, you believe him. And if you don't believe Trump, anything he says has to be wrong.
And this is the nature of our divided politics in America; we see it in Britain also. We see it in lots of places. But there's no question that fake news, real fake news, as opposed to the kind we're supposed to be putting out, can make things worse. And it seems quite clear that Russian hackers and groups were trying very hard to touch on polarizing issues in America before the election, to create unease and unrest. And this was, I think, one of the most fascinating aspects of the whole meddling in the American election campaign. Did it make a difference? Hard to know. Did it change the election? I doubt it. But maybe it served as a kind of antibody for future elections. We'll have to see. So we're open to questions. All I would ask you to do is identify yourself. If possible, please ask a question as opposed to giving a speech. That would be really appreciated. So, ma'am, first here. And then this gentleman over here, please go ahead. And then this lady behind. Good evening, everyone. I am Amanda Mati from South Africa. I run a digital media agency. I have a couple of questions, but I'll stick to three key ones. To start off with, Stefan, your stats: are they not representative of the cultural understanding of knowledge within the German community? I ask this in the context of South Africa, where we've noted that it is usually negative content that spurs the purchasing of newspapers in our country, and it has driven a lot of our purchasing power in the print space as well as digital. So, just to understand, is there no reference to that? Then, in the context of digital and social media, and I play in the social media space: how, from your expert advice, do we make provision for the inclusion of citizen journalism? A lot of the fake news we're seeing comes from citizen-based journalists, and I say journalists in inverted commas, not really journalists, but ordinary people.
We know from the statistics released by the Facebooks and the Twitters that a mother will trust the advice of another mother, a product supported by another mother. How, then, do we avoid breaking that trust down? And lastly, in the discussion about legislation and regulation, I have a few issues with that, but it's debatable: at what point do we bring the startups into that conversation, particularly about how we regulate the measures that are brought out? It seems to me like a barrier to entry for startups when we already have legal IT matters to deal with. Great questions, thank you. This gentleman, please identify yourself. Laurent Cohen-Tanugi. I'm afraid it's more a reaction than a question, but I can put it in question form. Coming back to the legal discussion or presentation we had: when we talk about fake news that really has an impact on politics and democracy and potentially on elections, when you think that Donald Trump was elected with just 75,000 votes in three states, you can assume that maybe the Russian interference had some impact at such a small margin. So if you compare with the financial markets, where the dissemination of false information is very severely sanctioned criminally, I don't see why intentional fake news, in the sense of manipulation, should not also be criminally sanctioned. Then the question is where you hit, and there I recognize the difficulties. But if you make an analogy with the fight against corruption, the OECD convention against corruption decided to hit where it's easiest, and maybe it's not fair: it's hard to target corrupt foreign officials, but you can hit the companies that corrupt them faster and more easily. By this analogy, I think the social networks are easier to target. They've got plenty of money; they can either put more effort into monitoring the content, or, if not, they should be heavily sanctioned.
And a last point: I think the US is in a better position to do this, and maybe that's what's happening now in Congress, but if not, other countries can do some of it. The European Court of Justice established the right to be forgotten, and that had a sort of global impact, so I think things can be done. Merci. Thank you. This lady here, and then Mayor, please. Yes, Carrie Alfredi Hardy. My question, based on what you're saying here, is: who should be the arbiter? Because we're talking about citizen-based journalism, as the lady over here mentioned, but then you have the question, for example, that was brought up at a recent conference with Baltic ambassadors, of deliberate misinformation being planted by state organs. So it's not merely a question of what's coming out on social media, but also what's coming out of state-sponsored organs. And that, to me, is something where you can't simply say that the arbiter should be a state-appointed regulator. If you could speak to that, that'd be great. Thank you. That's a very good question, having looked a lot at RT, for instance. I'm Mio Shitri from Israel. A few years ago, I talked at this conference about cyber, and I would like to know what you think about cyber in this subject we're discussing, because cyber has become much stronger than ever before, and it is developing very, very strongly. And of course, it has had a very big influence on the possibility of creating fake news, because with cyber you can get immediately into almost every site of every campaign, everything, and see everything in it. That's what happened in the United States. I'd like to hear your thoughts on these cyber attacks. Thank you. There are a couple, at least one or two hands, way in the back, or maybe it was... I had a friend helping me. That's what I thought.
My question is for you, Susan. Sorry, just identify yourself. My name's Natalie Cartwright. I am one of the people who runs a startup; I have an AI startup that works directly with banks, and the reason I'm at this conference is that we're relatively early stage, about Series A, but because of our channel partners with banks, my product will be in the hands of tens of millions of people over the next couple of years. I'm really interested in having an ethics-first approach, but it's not that easy to know where to start or how to do that. So I'd love your advice on how someone in my position is able to do that and what your approach would be. And you also mentioned that you're interested in having that conversation; I would love to be a part of it if it does happen. Thank you. Great. Yes, is that... Thank you. Cooper, I think. Hard to see. Richard Cooper, Harvard. One of the speakers, maybe two, mentioned anonymity. Could we do something about that? The highways, as Erlanger calls them, don't admit anyone on them without a name. Now, of course, one can give fake names, but you could make that illegal and therefore chargeable. So can we eliminate anonymity on these social media? Thank you. There's one more, fine, and then we'll go back to the panel. Yes, ma'am. Sorry, thank you very much. Daniel Khatib. I want to ask: we have been speaking here about fake news, about who's responsible for it, how to correct it. My question is very simple. Is it feasible, given the huge amount of data on social media every day? And now radicalization is mostly done over the internet, over social media. Is it feasible? Who can do that? Who can do such a big job? Thank you. Yes, I think in the end, you've asked the hardest question. I think what I'll do, given we've got a little time left, is just go back to the panel and have you respond to whatever has been addressed to you, or whatever makes sense to you, and in the usual way, we'll go in reverse order. So, Stefan.
Okay, that was a very good question, and of course culture plays a big role; I even think it's human nature. Culture is important, but so is human nature. We are drawn to things that stir us up emotionally, and the fake news that works, if you look at all the fake news that has been really successful, is very emotional; it touches you. This is why in Germany, for example, a lot of it is about immigrant crime, and especially crime against vulnerable people, against children, against women, because that stirs you up emotionally. The social networks have also been optimized to feed into the attention economy that we have: people click on that, so it shows up more in your news feed, and it feeds on itself. So that's something we will have to talk about: how we deal with that, and how we reverse a process where technology basically takes advantage of our nature, our being drawn to these emotional issues. People are saying that we need to talk, for example, about how algorithms select your news feed, and that the user of Facebook should get control over what kind of news they want to have featured. Do I want to see more of what's happening in my family? Do I want to see a more diverse kind of news? The ability to really have your own say about what kind of news you are fed on social media, rather than the algorithm just picking up on your natural tendencies. We will have to have these kinds of conversations. I wanted to make one brief comment on the regulatory issue, because Germany has just gone down this road this year and adopted a law forcing Facebook, Twitter, and other social media companies to take down illegal speech within 24 hours. And the focus, you have to understand, is illegal speech. So libel, if you tell lies about a person, something like that. Hate speech. Hate speech, yeah, that needs to be taken down. That's illegal speech in Germany.
And if those social media companies don't take it down within 24 hours, they can be heavily fined. The problem is that the fake news I've been seeing is political fake news, and it wouldn't fall under this kind of law. Most fake news in Germany is not illegal, and it's not illegal in most democracies, especially if it's about political stories: you want people to be able to express themselves freely. So I'm very skeptical about regulatory approaches, because it starts already with the problem of how you define the fake news that would be illegal, and taking it down has huge implications for freedom of speech and censorship. No, that's great. And I will end it here. That's good. No, I mean, part of the problem is speed. I mean, 24 hours seems not very short, frankly, to take down hate speech and so on. And just on the last question: the only way you can do this is with AI, with technology. There are millions of posts going up; you will need smart technology to do this, because there's no way that human beings can review all of it. Which is kind of part of our circular problem. Undeniable. To go back to what Stefan just said, I think you shouldn't try to define fake news legally, because it's such a diverse phenomenon that trying to define it legally seems too complicated. On the contrary, you have to use what you already have and, ultimately, contextualize things; yesterday, the context was the problem of public order. We have notions in law that are general enough to respond to these specificities, rather than wanting to regulate at all costs. A French senator wanted to put forward a proposal to adopt a law on fake news. On the contrary, I think that's a danger, because we would have to define what it is, and, as we saw, there is great diversity in this fake news. To answer the different questions, I would just like to add the idea of co-regulation.
It's true that, historically, the law governing digital activities has been soft law, law that is not binding, and essentially of private origin. Today we are moving toward a trend where public and private actors will have to co-regulate, with sanctions that may remain soft, that is, without hard penalties, but also sanctions that are starting to harden, from the European Court of Justice and from the European Commission. We have the European Court of Human Rights, we have European case law, which today takes up the problem, brings real solutions, and interprets the existing rules in light of the current situation. And I think that's the best solution. Thank you. Okay, Oliver. Yeah, from my perspective, with the big amount of fake news, there is no 100% solution out there. So even if government or regulation tried to put a 24-hour limit on it, there is no 100% coverage, because of the amount of data. With the right tools, maybe artificial intelligence tools, I think there is a way to close that gap, but I don't think we'll get all the way there. So the current way we issue information, I think, needs a radical change. And I think the gentleman from Harvard is right. In the future, you will see that everybody will have a digital identity to do any kind of business out there, and then we can identify whether somebody can really be trusted going forward. It's a nice idea, except in Britain you don't have a national ID card, and in America you don't have one. You have all kinds of issues. But there are certain nations already; Estonia and other countries are now ramping up. So that's something that everybody's working on. No, no, no, it's true.
And then, you know, someone once said to me, if you asked an American if they'd be willing to have a chip put in their head so the government could follow them around and actually listen to them all the time, they, of course, would say no. But of course, we all do it voluntarily. Sorry, yeah, exactly. Susie. On the general question about the complexity of the challenge: I think technology has to be a part, regulation has to be a part, education has to be a part. And then we haven't really talked a lot about politics, but just maintaining the kind of liberal, democratic fora in which debate can happen, back to the rabbi's point. And I'm not talking about religion, but about the kind of vigorous debate that helps defend truth. So it's gonna be a multifaceted solution; it's not gonna be any one thing. But the point is, again, this allocation of responsibility across different stakeholders. I also think it's a question of picking our battles. We aren't gonna be able to get rid of all fake news, or indeed all negative consequences of different technologies. The question is, what really matters? And then finally on this question, and to the gentleman's point, there are ways to introduce regulation that is manageable, for example, advertising. There's no reason, in my view, why these companies should be able to have one standard for advertising online and a far stricter standard for advertising in the paper version of the New York Times. With respect to Richard's comment on anonymity, it's an important point, and we know from other sites like Yik Yak, an anonymous social media site that has since been taken down, and that the FBI got involved with from time to time, that worse things happen on anonymous sites. The problem is that the perpetrators on anonymous sites are very hard to find, the resources required to do so are in many cases disproportionate, and the harm is already done.
And indeed, that's a big problem with the point I made earlier about the law lagging behind technology: by the time the law gets around to doing anything, it's too late and the harm is done. And then finally, the question on AI. I'd be happy to take it offline in more detail. You should have a look at a network that's forming with companies like Salesforce and Microsoft. But the fundamental question for startups is, from the very beginning, to ask: what is the real good we're doing with this technology, and where might there be risk? And where there's risk, what might we do to mitigate that risk? And in your case, look at others, look at DeepMind, look at the other companies that are out there, and see what their thinking is and how their thinking on these issues might be relevant to yours. But I'm happy to take it offline. Great. And then to conclude, I just wanted to make one comment, since we're talking about fake news and my president keeps attacking my newspaper and others for fake news. The one thing you have to understand about President Trump is that he actually adores the New York Times. He has a very intimate love-hate relationship with the New York Times. He grew up with the New York Times. He was from New York; he grew up in Queens. The New York Times to him was Manhattan. It was the elite. It was glamour. He actually wants our love as much as he dislikes us. And of course, when he calls us fake news, clearly what he's trying to do is use us as puppets in the play he's creating; he's simply trying to make sure that when we do real news, which we tend to do, particularly news that touches him and his administration, he can undermine its credibility by calling it all fake. Now, how you control the president of the United States is beyond me, but I do want to ask you to join me in thanking the panel for what was a great discussion, and one that was on time as well. Thank you.