All right, well, good afternoon, everybody. My name is Ken Roth. I'm the director of Human Rights Watch. I will introduce our panel in a moment. We are grateful to President Trump for playing the warm-up act for us, though we were counting on more of his audience transferring over. But we at least know that we have a good audience online, and we appreciate all of you for your loyalty and interest. So we have a lot to talk about. I'm going to start with Zeid Ra'ad Al Hussein, just on my left, who, as I think many of you know, is the UN High Commissioner for Human Rights and has done a superb job. Let's just get the big topic off the table. What do you think about what Trump just said?

Yeah, from a human rights perspective, I was quite disappointed, because I thought there would be more issues for us to mull over. The one point that really stuck in my mind, and he said the same thing at the UN in September, is that he urged all countries to pursue their own interests, almost without reference to the fact that if you do all of that, if each country narrowly pursues its own agenda, it will clash with the agendas of others, and we will take the world back to 1913 once again. Power politics, interests pursued at the expense of values or rights. It's just not possible to do that without it all coming apart at some stage and people suffering grievously. So the argument doesn't hold intellectually. Ethnic nationalisms, chauvinistic nationalisms, a sense that there is a supremacy within communities determined on the basis of color or ethnicity, that others are somehow lesser people, or that certain countries are somehow morally superior to others: that's what always seems to get us into trouble. It's the script of the 20th century, at least the early part. But I was in the Balkans during the war there in the early-to-mid 90s, so it was also part of what we saw at the end of the last century.
And we simply cannot go back there. So I simply don't understand how you can make that argument and then speak of love and harmony for all peoples. It's just incongruous.

Obviously, Trump is not the only one making these kinds of arguments.

That's right.

And you see this as you travel around the world. Can you talk a bit about how you see this rise of authoritarian populism as a threat to rights, and what kind of response to it have you seen that's been effective?

Well, it's deeply destructive. I mean, it's not anything new. Go back to that famous mayor of Vienna, Karl Lueger, in the 1890s, who was the first in Europe to really instrumentalize anti-Semitism for a political end. He was asked: you have Jewish friends, so why are you so anti-Semitic? And he said he didn't see them as the Jews he was discussing. And one has to recall that when Hitler was in Vienna in 1912 and went to one of those speeches, it formed the cornerstone of his own deeply embedded anti-Semitism. And from that, you know, emerged something very toxic. It's these toxic narratives, developing out of a chauvinistic nationalism or any sense of exceptionalism, that drive us into the ditch as humanity. So no, we need to base our rhetoric, we need to base our thinking, on the rights of everyone. We have to sort out our problems, yes, and we shouldn't leave people behind. But we shouldn't look for solutions at the expense of those who are already marginalized or weak. And look at what we're seeing in Europe recently with the formation of a coalition in Austria. I have to say, I read the coalition document between the two parties, and it's worrisome, because the xenophobia is clearly there within it. And there is a sense that Austria may now follow Hungary and Poland in an anti-European stance. It's dressed up, of course, with a public face from the chancellor, who seems to be pro-European.
Although, when you look at the details, I worry about it. So we see these trends in Europe, and elsewhere, of course, it's very evident around the world.

Well, let's go to one of those elsewheres. I'm going to turn to Wai Wai Nu, an activist from Myanmar who is also Rohingya, and whose people have suffered probably most dramatically in the last year from the kind of outbreak that Zeid described. Could you maybe just let the audience know what happened, and how did we get here, not just with respect to the Rohingya, but more broadly within Myanmar?

Yes, thank you. What's happened, I think, I don't need to explain too much. It's been all over the news for at least a few months, and it's become one of the worst humanitarian crises in the world. Bangladesh has basically become the country receiving the largest number of refugees. What's happening in Burma is actually a result of this ethnic nationalism, which has been seeded and cultivated for decades by the dictatorship. As you may know, Burma was under military dictatorship for more than five decades, and the military has this divide-and-rule policy. At the same time, there was a cultivation of ethnic nationalism, basically the promotion of Buddhist supremacism and Bamar Buddhist supremacy in the society. That rise of nationalism is fundamentally what enabled the crisis to grow bigger and bigger, ending up, as the High Commissioner's office report described last February, amounting to crimes against humanity. Basically, the entire Rohingya population has been under threat to its very existence. At least two-thirds of the population has been erased from their homeland, and more than 300 villages in northern Rakhine State, where the Rohingya have been living, have been burned. And many people were killed.
There have been very horrific reports of sexual violence and arbitrary detention, and all kinds of abuses have been ongoing, faced by the more than 600,000 refugees who fled within five months. And it's still continuing. And it can continue, as I said, because of the rise of Buddhist fundamentalism and nationalism in the country: while the society accepts this kind of horrific violation of a community or a group, while society itself is ignorant of those kinds of crimes, and while there is huge denial of what's going on next door, of what's happened to our neighbors, both inside the country and outside of it. That has allowed the perpetrators to continue and to create the largest humanitarian crisis. I think this is where we are right now.

I think many people are surprised by the concept of a Buddhist extremist. People tend to think of Buddhism as a nice, peaceful religion, not one including elements that spout hatred. But could you talk a little bit about how Buddhist extremism played out through the 969 movement and the like, and particularly the role of Facebook in fomenting this hatred toward the Rohingya and other minorities?

Yeah, I do think Buddhism is a peaceful religion, and all religions are peaceful and teach us to live a peaceful life. That's my fundamental belief. And growing up in a Buddhist-majority country, I have huge respect for the religion and its practices. But how did religious fundamentalism arise in Burma? I think it's because of the system and an uneven situation. Certain elements of the religion, certain groups of religious leaders, have been given the opportunity to create anti-Muslim sentiment, to promote propaganda and hate against the other groups those leaders describe, whether Muslims, or Christians, or many other groups.
And it could happen because they were allowed to do it, or they were encouraged, or they were facilitated to be able to do so. And, not surprisingly, with this fourth industrial revolution and the development of technology, social media has basically become an important platform. This group has been using social media, basically Facebook, in a very effective way to really spread hate and propaganda, while the positive narrative from people like our group has not been coming out as effectively. In the situation of Burma, this negative narrative, using social media as a platform to create hate, is really strong and very institutionalized. To our knowledge, they use a lot of resources and effort and money, in a very systematic way, while we are not ready, or don't have enough resources, to counter it or to bring out a positive narrative in a systematic way.

OK. Nick Thompson, the editor-in-chief of Wired magazine, who I learned today got his start as a Burma activist. So maybe you're ideally placed to bridge what we see in Myanmar today and the use of social media as a kind of double-edged thing. Could you broaden that and just talk about what the benefits and the harms of some of these new platforms are?

Yeah, it's so interesting, right? Technology gets better every year, and it gets deeper into our lives every year. And with human rights, there are elements that work for great good, right? You can document crimes on YouTube and use that as evidence. Resistance activists can organize together on Facebook. And the state can also track you down, right? Or they can use YouTube to spread propaganda, or satellite imagery to find out where you are in the forest. So technology helps and it hurts. And the question is where the balance of power is, right?
Is it more on the side of the human rights activists, or more on the side of the oppressors? I was just thinking back to when I was a student. One of the things I did in the late 90s was spend a lot of time with something called the Free Burma Coalition. The idea was to get students around the country to put pressure on corporations to withdraw from Burma, in the hope of putting pressure on the government and helping the democratic opposition. And at that time, technology was totally on the side of the human rights activists, right? Because we were sort of the first ones to know how to use it. There was very little internet even inside Burma; the government couldn't really manipulate and control it. And so there was this giant worldwide coalition. It certainly seemed like technology overall was helping human rights activism.

Now it seems the opposite, right? Certainly if you look at what's happening in Myanmar, it's more a tool of oppression than a tool of liberation. So what shifted? What happened?

I don't know, but, you know, thinking back to what Ken said at the beginning, I just had a thought: the structure of the most important platforms should sort of push toward a global community, right? It should be great. You should be able to meet people from around the world; we should become more of one people. That's how it should work. But it's not, because the structure of the algorithms, and the way we work with them, actually seems to make us more tribalistic. You can see that in American politics; that's how you get a Trump versus Hillary election. And you can see it in Myanmar. So the structure of the algorithms, or the way we use them, seems to make us more tribal and less unified. And that probably is what has tipped the balance a little bit toward technology being more a tool of oppression than of liberation.

What's the answer to this?
I mean, is it because the platforms economically thrive on trying to get you to click and engage as often as possible, which tends to push in this tribalistic direction? Can you count on them to revise, or is George Soros right that we have to start treating them like utilities that need to be regulated? What's the answer?

So let's take as a premise that the platforms definitely make you more tribal, which I don't fully accept, it's a complicated debate. But if you accept that as a premise, the first way to fix it would presumably be to have the platforms change the algorithms, right? Make it so people actually do meet other people. We just wrote a story in Wired where we took on this challenge: let's find the place on the internet where people change their minds, where they meet new people. And we found a place on Reddit called Change My View. You go there, you submit a view, and then people argue with you civilly, and they get points if they actually convince you. It's turned into this wonderful game and conversation; it turns out to be, like, the rare place on the internet where wonderful conversations happen. So you can imagine the major platforms, YouTube, Facebook, Twitter, all of them, saying, okay, let's incorporate something like that into the algorithms. Let's try to get people to change their views. If you are an ethnic Bamar, let's promote posts from Rohingya, or if you're Rohingya, let's promote posts from the ethnic Karen, or something like that. You can imagine Facebook taking that on as a cause. Now, maybe it would hurt their profitability, but perhaps they could do that. The second route would be to regulate them.
As Soros suggested yesterday. And I don't exactly know how you do that, but you could, you know, put mandates on them. Or, as people are starting to say: Facebook, you are responsible if there is ethnic cleansing, and then you can put Facebook on trial, right? You can imagine laws being set up where they are responsible for certain outcomes. So that's another way to change it. Or the other thing you can do is just work to understand it and see it, which is where Trevor's work comes in. Learning and actually understanding and visualizing what's happening is something we can do right now.

Well, let me move to Trevor Paglen, a remarkable artist; the Guardian described you as focusing on hidden spaces. Nick is describing the problem of the platforms enhancing tribalism, having us reinforce our own views rather than being people who might change our minds. Trevor, you seem to focus more on a different value: the value of privacy, of being let alone, of being able to operate anonymously. Your art does that, and your public speaking and your intellectual engagement do that as well. Could you just talk a bit about the pros and the cons of technology with respect to this concern?

Yeah, I mean, I think when we look at a platform like a Facebook or a Google, or a Verizon for that matter, there are several layers going on. Obviously, there's the layer that you experience: you're talking to people, you're interacting with others, you're encountering ideas; there's this kind of cultural, dialogical layer happening. But what you don't see is that there's a whole infrastructure behind it, which is collecting information about you, location data, your preferences, your habits, some really, really intimate details about your life, and trying to do stuff with that data.
One of the things that I say when I talk about technology is that it's never neutral. What it's going to do is amplify the business model or power interests of whoever actually controls that infrastructure; the ethics are literally built into the cables and the transistors and the capacitors, or what have you. So when we look at that underlying layer, there's an enormous amount of very intimate information about all of us being collected, and we see there a kind of concentration of power. Now, today that's predominantly used to try to sell me more motorcycle boots, or whatever it is that you might be interested in. But as we see rates of profit decline and that sort of thing, there are all kinds of other ways, easy to imagine because they're already happening, in which that data will be used: being able to modulate one's access to credit, or one's access to health insurance, in a more capitalist environment. How do you maximize the profitability of that data? In other environments where you have more state control or state centralization, you're going to see that data being used, as in the case of China, to modulate things we would think of as civil liberties: one's ability to travel, one's ability to participate in public.

Do you want to just describe this social credit system a bit for people?

Yeah, so China has implemented a system that is supposed to be widespread by 2020, which monitors all citizens' online activities and gives you points, or deducts points, based on how much of a good citizen you are. If you say nice things about the government, your points go up. If you go to work on time, your points go up. All of the behaviors the state wants to promote allow you more access: to travel, to get visas, to get into better schools, discounts on movie tickets, all kinds of everyday things one does in life.
And the other thing it does is look at who your associations are. Are your friends people who have good credit scores? So what this does is basically, in a way, modulate one's de facto rights within that society. It's easy to single out China because it's a very clear example, but something similar is happening in Europe and the United States and South America, all over the world: our ability to get access to things like credit or health insurance, which we need in order to participate in the society to one extent or another, is also modulated in those kinds of ways.

Because they collect so much data on us.

Because they collect so much data. For example, if you have a Fitbit, you can sign up for a program where you get a discount on your health insurance because it can monitor how often you go to the gym, that sort of thing. So this kind of dynamic is going to become more and more widespread. And I think the question it suggests is: do we need a new conception of human rights in this emerging landscape? Do we need to think about what parts of our everyday lives should not be subject to data collection, or should not have consequences? I think about my own life when I was a teenager: I broke laws, I didn't go to school, I did all kinds of stuff that, if I were young now, would have negative and very long-lasting consequences for my ability to contribute to society. So I think about the future: how do we create an equitable future in which people do have equal access to the things we need in order to participate in society?

All right, let me bring all of you in now and be a bit prescriptive, because in a sense we're seeing different sides of this problem. We see that these platforms on the one hand can do tremendous good. They are a way for like-minded people to communicate, a way to challenge governments.
It's very difficult for governments to do things secretly anymore, because everybody's got a mobile phone that can take a picture. So the platforms are empowering, but they also tend to make it easier for divisive, hate-driven language to spread. They do tend to divide us into categories. And they're not only invading our privacy, they are collecting so much data that, coming back to where we began with this rise of authoritarianism, I mean, China is taking it to a sort of dystopian extreme, but you can see how, as governments gain this degree of control, they can make our lives miserable. But are governments the answer here, or are they the problem? Do we want governments regulating speech, or is that just going to introduce censorship, which is going to make the internet a more impoverished place for us? Where do we want to go here?

I mean, I'll just talk about speech for a minute, because one of the most amazing things about covering Silicon Valley over the last three years, but really the last maybe even ten years, has been the total change in philosophy on free speech. If you look at Silicon Valley and you look at, say, early Twitter: early Twitter called itself the free speech wing of the free speech party, right? And here's another example: Twitter made it really easy to have anonymous accounts. You just create an account with an egg and say whatever you want. And the reason they did that was partly because they wanted to empower dissidents, to allow people to say what they wanted to challenge the government. Really good idea, and I thought it was a great idea at the time. But now we've seen what happens, right? You allow people to have egg accounts, and suddenly it's trolls, or in the worst case, it's state-sponsored bot armies. And so suddenly it turns out this thing that was invented maybe to help speech and maybe to help human rights can be used to completely abuse it.
So what's happened in Silicon Valley is a total flip, right? Now nobody talks about free speech the way they did ten years ago, at least none of the people with the platforms. They talk about wanting to structure speech and encourage certain kinds of speech. And in fact, Google and Instagram have implemented algorithms that automatically filter speech: they get rid of what they call toxic comments. It's not hard to imagine an algorithm designed to identify toxic comments being used to identify, I don't know, pleasant comments and elevate them, or anti-state comments, things like that. So there's a total, total change. And what's also interesting is that it's not really a bad thing, right? I used to think of myself as a free speech absolutist, somebody who really wanted maximum speech on the platforms, but the egg accounts kind of destroyed Twitter, so I'm not against Twitter getting rid of them. Both in my own mind and in the conversations going on, there is a real tension and a flip, because the absolutist position, everybody gets to say what they want, sunlight is the best disinfectant, actually led to kind of chaos. But you also don't want state manipulation, or corporate manipulation, particularly when algorithms can completely sort our speech. So we are in a messy, interesting moment.

Let's divide this conversation into two parts, if I could, because on the one hand there's the issue of anonymous speech and how that has opened up the world to bots and feigned individuals and the like; then separately there is the problem of hate speech, divisive speech. So let's focus for a moment on anonymous speech. One solution might be to say there has to be a real person there, so no bots, but a real person can remain anonymous, because there are sometimes good reasons to be anonymous. Is that a good place to be, Zeid or Trevor?

I mean, I'll say quickly, that is a good starting point for sure.
The anonymity that a platform bakes in has a huge effect on the conversations, right? If you look at LinkedIn, which is the least anonymous, because anything you say is tied not just to your picture and your name but to your whole professional identity, and if you're a jerk, your boss will see it: those conversations are really good. Facebook is one level below that, and Twitter is the worst, because you can just be whoever you want, more or less. So it is absolutely true that the less anonymity you bake in, the better the conversations are. But then you're putting a really high value on quality conversation and a really low value on the ability of people who don't want to disclose their identities to participate in conversations.

Where would the other three of you come down? What's the balance?

Well, of course, all of this is occurring in a context where there are laws that distinguish between freedom of expression and incitement to hatred. In the US, you have the First Amendment. Throughout much of the world, the International Covenant on Civil and Political Rights is a binding document on states, and it distinguishes between the two. There is no fine print on how you navigate the gray areas, but we're working on a formula for this, and I'm beginning to think that if we converted it into a mathematical formula, you could probably find the algorithms for that as well. States do have an obligation to keep a very wide berth available for freedom of expression, the maximum. The thinking, of course, is that it's the best check against tyranny: have all this cacophony of sound and discussion raging back and forth. The danger with social media is not that you have one or two outliers spewing out some stuff; that's all right, let them do it, we can take it. It's different once you begin to move a whole population to the extreme.
It's when suddenly there's a density of hatred, moving the center to the extreme right or the extreme left or wherever it may be, that it poses an overall threat to the entire society, and then you're in huge trouble. The laws are there; it's just understanding how the law applies. What was remarkable to me, being in Silicon Valley in September, is that I didn't get the impression that those who determine what content stays up and what comes down are really aware of the applicable legal standard. So it's a matter of informing them, of getting to the engineers, getting to the companies, and saying: look, you're not just technologists, you're not just working on algorithms. You're actually contributing in some way to the maintenance of the well-being of humanity. So be aware of your responsibilities, for goodness' sake, because you can create amazing good for us and you can create amazing harm. That, I think, is the point we have to make. I keep going back to the discussions Niels Bohr had at Los Alamos with the physicists working on the atomic project. They were sitting there trying to figure out how to make fission work in a way that could be converted into a weapon, and they were so determined to make it work, and he kept saying to them: think about the consequences. It wasn't meant to stop the project, but he was so concerned about where it was all going to lead, and Oppenheimer wanted him to explain this to the physicists; they needed to hear it. What I think we need to do now is have more people, ethicists, those who care about these consequences, embed themselves in these companies.

I want to add something to what you said earlier, too. You mentioned that the availability of these platforms dramatically increased transparency. And I think that's true on one hand.
On the other hand, I worked on the Ed Snowden project, and when we started looking at that material, it described a system of mass surveillance that literally nobody else in the world had imagined could exist. People in the technology industry understood that it was technically possible to build something like that, but nobody suspected that the system operated at the scale it did. Another thing we learned from it: I worked with a lot of Human Rights Watch people in the aughts on the CIA, and we always assumed the CIA was a relatively small intelligence agency, around $4 or $5 billion. Then we got the black budget from Snowden. It turns out it's the biggest one, $25 billion. The point being that it is possible to have massive secret structures, even in the age of the internet. And I would make an analogy here to looking at a Facebook or a Google: I would advocate that we want to see more transparency on the back end. What are they actually collecting? What are they doing with it? Who is subscribing to it? Who is using these platforms, and in what kinds of ways? For me, that would be part of an overall remedy: not just regulating speech, which I bristle at a little bit, but much more transparency in the platforms themselves.

Yeah, well, again, there are sort of two different trends. Let me just come back to you on regulating speech, and then I think we want to look a bit more at big data analysis and what that means for us. In principle, it sounds great: let's stop hate speech; who wants hate speech out there? But it tends to be the powerful who decide hatred toward whom really matters. And I think it's interesting to look at how this has played out in Burma, because there has actually been somewhat selective concern about hate speech, if I understand correctly.

Sorry, selective concern about hate speech?
In other words, if you look at what is permitted on Facebook and what is not, is there equal concern across the board for all kinds of hate speech?

Yeah, I think Facebook basically plays a huge role in Myanmar's current situation. And I just feel it is quite dangerous, because how do we ensure information privacy? Facebook basically asks people to provide data, bio-data most of the time. If we want verification, we literally need to give all that bio-data. And we've been hearing that Facebook, at some point, somehow coordinates with the government in many cases; the companies have to provide information about users if the government requests it. So I'm really concerned about information privacy and the privacy of users. And what has been happening, when it comes to the balance on hate speech, is that we don't know what's going on behind the curtain at Facebook. But it's clear that there are anonymous accounts countering hate speech and racism, and those accounts have been targeted, or watched, or sometimes even deleted, while at the same time the accounts creating and promoting hate are being allowed to continue. Among the public, it's obvious who is who. But from Facebook's side, we have seen that a lot of the accounts countering hate speech and racism are deleted, while the hate speech users are allowed to continue. So this is something that quite worries us. I really want to see these companies be responsible in the way they operate at the country and state level.
Because we have clearly started to realize that this social media platform has been used to really create conflict and violence, which is basically fueling situations like mass atrocities and crimes. So it is really important, and mainly with Facebook I have a huge concern: they really have to put in place principles protecting people's privacy, while at the same time taking responsibility for how they operate on the ground, in terms of avoiding social harms, like basically contributing to these crimes. That is very important, and it is a huge concern.

Could I jump in there?

Please.

We're advocating, together with Human Rights Watch and many of the human rights organizations, for there to be accountability for what has happened in northern Rakhine. And we're going to do this forcefully over the next few months. We hope the senior officers in the military, in the Tatmadaw, pay attention, including Min Aung Hlaing. It would be interesting to know, in the future, as we set up an accountability mechanism, or indeed get to a stage where trials are held and a company like Facebook is subpoenaed by the judicial authorities, and you're dealing now with really serious issues, to what extent they would then work positively, or let's say in the manner that we expect them to work, and hand over whatever data they have. Because clearly there was much incitement over the years. When you take the situation of the Rohingya, they lost more rights from 2012 to 2017 than they did in the previous 50 years. That has been a very clear trend, and alongside it has been this incitement to hatred, which has reached fever pitch in the most recent times. It would be very interesting to see how they would respond to those sorts of requests.

So that's, I think, really complicated.
So the way that the tech companies have so far had immunity in the United States, immunity from regulation, immunity from the courts, is they say, we're a platform, right? We're a platform, so we just let people publish; we're not responsible for what people publish on it. And one of the reasons they're able to maintain that is because they don't actually modify content, or try to knock out hateful content or elevate good content. They don't actively try to sort and edit their platform, right? And part of the reason they don't is because they feel they would lose that immunity. So now they're starting to do that, right? And you've seen Facebook make a bunch of announcements that they're going to, for example, rank news publications based on whether they're trustworthy, and elevate them, or based on whether they inform people. And I think that's a good step in the right direction. But the risk is that once they start to do that, they could become more liable to things like human rights trials or government regulation. So the risk is, if you start to raise the specter of subpoenaing them for human rights violations, their position might be, ah, we're going to back off completely. We're not going to do anything to sort and filter the content. We're not going to try to introduce you to people from an ethnic group you don't know, or elevate more informed content, because we've got to maintain our platform excuse. So it's a really interesting way that what could incentivize good behavior could actually incentivize bad behavior. Although there may also be a difference. I mean, what you're talking about is liability. Nobody holds the phone company responsible because I made a phone call saying, you know, Zeid killed his neighbor. Right. But they could hold Wired responsible if you print that, because you made an editorial decision. Right. 
So where are the platforms, where is Facebook, in between? In the particular case of Myanmar, if what Facebook is doing is actively colluding with the government to delete dissident accounts and elevate oppressive accounts, then yeah, screw them, right? If what they're doing is they have an algorithm that, because of the way it's evolved over time, sort of promotes tribalism and possibly sometimes hatred, then work to fix the algorithm. So it depends on what exactly they're doing, how I feel on this issue. Then screw them; it's not that hard. I mean, they may be complying with local laws, but they should not do that, if they are in fact doing it, which it sounds like they are. I mean, also, when we say Facebook deleted an account, I'd be really surprised if Facebook did that on its own. After all, it's the Burmese government that is now prosecuting two Reuters journalists for hate speech against the government, because they were investigating a mass atrocity against the Rohingya. So I think we have to recognize that these platforms, operating in various countries, are very susceptible to government interference. But that's a very different thing from the idea of turning over data. And that also raises questions, because it sounds great to turn over data to help prosecute crimes against humanity, by all means. But what happens when China says, turn over your data, we want to know who that person is who was spewing this anti-government rhetoric? Right now the platforms' answer is to keep their data out of China: the data is stored over here, we can't give it to you, sorry. Which is a good answer, but China is saying, you want to operate here, you've got to put your data here. And then the companies risk being complicit not only in censorship, but in the actual prosecution of people for being dissidents. 
So, you know, these are complicated issues. They're complicated issues. And you remind me of Venezuela, where this month they arrested two people under this new anti-hatred law. On the face of it, you would think it's not bad to have an affirmation that incitement to hatred is unlawful, but it's being applied against those who are criticizing the government. And so you can see the way this is all being manipulated. Whenever you have a situation of power, the powerful tend to protect their own. And so the governments that enforce hate laws tend to protect the powerful first, and it tends to be the minorities, like the Rohingya, who get screwed in the process. And that's the difficulty. But on the other hand, if you say, okay, let's just ask the platforms to do it, you get into the liability issue that Nick is outlining. Or, as Trevor is saying, do we trust these platforms? Is there a way we can introduce more popular control? I mean, what are your prescriptions, Trevor? Is there a way that we can actually, through more nuanced, refined consent, recapture some control of these virtual identities that are being created for us, and that, with big-data analysis, are in many ways probably more important than just our physical identities walking around? No, I mean, this is something that I think about a lot. And I'm going to say something that's going to be very unpopular at WEF, but I don't think there are market solutions to this kind of stuff. In the sense that if you're a company, you are set up in such a way that you have certain obligations to your shareholders, et cetera, et cetera. And I think those are going to be fundamentally incompatible with the platforms that those of us who care about civil liberties and more equitable societies would like to have in the future. 
So I don't know technically how this would work, but I think conceptually we have to come to an understanding of almost a new kind of civic space online. And think about what areas of the online world we want to be like a library. A library is a place where you can check out any book that you want, and the police don't get a record of it. And that second part is just as important as the first in terms of it being a democratic institution. And this seems like a ridiculous thing to say, particularly as an American, looking at the way the government functions, but I really do feel there has to be a new sense of civil society and a new conception of human rights; it seems intuitive to me that that's going to be the way forward. But, I mean, it's easy to say I should read whatever I want without people knowing. Should I be able to say whatever I want without people knowing? Or does the world then look more like Twitter than LinkedIn, coming back to Nick's point? You know, I think there's got to be a smarter way to do that. And I think one path to that is more transparency on the part of the companies themselves, of the algorithms themselves, you know, some transparency: how do these systems actually work? Can we look into them and see what kinds of things they're promoting by default, whether or not anybody's at the controls? And I only say that because when we look at the history of hate speech, or the history of laws themselves, in the past it was illegal to be homosexual. It was illegal to marry somebody of a different race. So when we look at the history of that relationship between law and speech and anonymity and what people do, it's only by breaking the laws and holding very unpopular opinions that there has been some modicum of social progress. So we don't want to lose that. That's right. We have just a few minutes left. 
Do we have a microphone so we can, okay. So let me turn to the audience, and right up here in the front we have a question. Hi, thank you very much. My name is Sousan. I'm a Palestinian from Israel, and I do strategic litigation before the Israeli Supreme Court on behalf of Palestinians. And we have been facing a lot of the issues that you have addressed regarding technology and limiting freedom of expression, mainly with regard to Palestinians. But my experience is mainly in social and economic rights, and I was wondering about the relationship between technology and social and economic rights in the context you have been addressing today. Technology can help with e-learning, with getting educated. Access to the internet can help on health issues, or on social credit, or economic rights, or getting more social welfare. But we know that half of the world basically is disconnected from the internet, or doesn't have access to it. So there is a lot of fear that in a few years the same technology that might bring more education, more health, and so on will lead to more of the inequalities that we have been trying for a long time to eradicate and eliminate. And I haven't seen so far, or maybe there are and I'm not aware of them, any thoughts on addressing this issue within international law. Because Mr. Hussein, you addressed at the beginning the idea that there should be a reconsideration, or maybe a rereading, of human rights law and the conventions, which address the right to education on a very conservative level, but not on a level that combines technology and education. So what are your thoughts on this? So does technology help or hurt the problem of inequality? It does both. It's not binary, Sousan. So it does both. 
It depends on the context, of course, and how the authorities in question try to employ it, when considering the right to education, the equal right to education, and what sort of education. When I first began to think about this years ago, it struck me that the amount of information available to us doesn't necessarily make us better thinkers. I used to think to myself, info, info everywhere, not a thought to think. And to a certain extent that describes where we are. Yesterday George Soros spoke to the press about this, that there's a sort of deadening of the mind almost, because sometimes there's so much and there's no clarity in the way we're thinking. But clearly you also see, in many parts of the world where you don't have proper infrastructure but there is access to the internet, that you have access to some of the greatest minds, with top lectures. And I remember listening to some Iraqis say how, ten years ago, they used to download lectures from various universities around the world when their own universities were not functioning. So access to education can be greatly enabled by technology. But at the same time, it can create problems of the mind, which we need to be aware of as well. I'll just say briefly, well, I can't even make it brief, it's so complicated, but technology is this really weird thing where it definitely helps you become an individual entrepreneur and start a company, or start a marketplace and sell stuff on eBay, or become a driver and join with Uber. So it helps the people at the most unequal end of the economic scale, and that's good. But what it also does, because of network effects, is that the people who get the most people to join their network find it easier to get the next person to join, which makes it easier to get the person after that, and that leads to massive consolidation. So you see massive accumulation of wealth at the top end and the destruction of stuff in the middle. 
So you end up, let's just take Uber: it makes it great for people who don't have jobs to get jobs as drivers, it's not good for all the cab companies and everything else that it totally knocks out of business, and then it accumulates massive amounts of wealth for a very small number of people who invested in it. So it does do good. And I'm sure the WEF has this charted somewhere: if you were to chart technological development, or technological penetration, in a country against inequality, you would probably see that inequality increases, right? And so then the question is, can you regulate the tech companies, or can you regulate the markets, in a way that counters that? Can the public sector help in certain ways? Can the education system work in certain ways? Technology should be a force for equality, and it's not. And so the big question is how to make it that. Yes, right up here in the front. Thank you very much. My name is Paolo Koumou, with the Africa Platform, based in Nairobi, and... Speak up a little bit, please. I've just listened to this conversation and I'm wondering, other than this being preaching to the choir, whether we need to begin rethinking how we advocate for human rights. And I've got two reasons for this. One is, I think those of us in civil society are still caught up in a belief we've held since our founding, that there's something called the public sector, that there's the private sector, and that there's civil society. But what you'll have noticed is that businesses and governments merged a long time ago. They got married, they had a wedding, and they're living happily ever after. But we still don't see that. We still think there's some regulator somewhere who needs to do something, some private sector that needs to do something. 
But just listen to any government leader speak, and they speak the language of business almost right from the beginning. You heard that earlier today, didn't you? Yes, listen. Do the same with any major business person, and they could be seated on a political stage and would say the same thing. So I'm wondering whether those of us in human rights are still stuck in the old mode, and maybe we need to rethink who we advocate with. So that's the first part. But the second part is the tendency to go for top-layer advocacy. There's a lot of conversation here about what Facebook is doing. But nobody's talking about the consolidation that companies like Amazon are beginning to have. Nobody's talking about how much power Google is now using to decide what we see and what we don't see. If you open almost any website, including that of Human Rights Watch, the very first thing you see is: we use cookies, and these cookies help us enhance your experience. But actually they are saying: what we want to do is capture your habits, and then use that to make whatever decisions we want. But we don't seem to see those subtle things as building up to the massive things that Facebook now does. They start off very nicely, but we don't seem to notice them starting that way. Only when two or three become massive and become an issue do we focus on those. But literally all of us here are guilty of capturing information. Sometimes we pretend that we are seeking people's information, but what we actually do is entice them with nice knowledge and nice information, and in the process we capture a lot of things. Is that not an abuse of human rights? Trevor, I think this one makes sense for you. I mean, on the one hand, can you trust government to regulate business when they are so aligned and intertwined, and how do we enter into that and gain control? Yeah, I mean, I think that is the question, right? 
I mean, especially in a society like the US, where it is such an intensely neoliberal setup. I think there are some positive directions people are trying to figure out. New York City, for example, just passed a law that if you're going to use algorithms in criminal justice sentencing, which is becoming a bigger and bigger thing, you have to have auditable systems, which is actually very hard to do with machine learning systems. So there are due process requirements you can put in place; that's definitely one approach, and something the ACLU has been at the forefront of, as well as the AI Now Institute at NYU. It's trying to think about how you insert more traditional concepts of justice, and the functioning of justice, into the oversight of platforms that are becoming such a part of our everyday life. So I think there are some small examples of that, and there are really smart people thinking exactly about that question right now. But I think we're at the really early stages of being able to conceptualize it. No, I mean, it's a great question. Because in many countries, the elites dominate every space, and the government and the private sector and the military and the intelligence services are all indistinguishable. It's the same group of people, essentially, that run the country. And to think that at a national level you could have a regulator distinct from the judiciary or the military, when it's all the same people, is, I mean, fanciful. And so maybe we have to define new international instruments, and how you do that is quite difficult to imagine. But perhaps that's where we need to go, and civil society, certainly those organizations that have a broad footprint, need to think about this deeply. I mean, it's a great question, a great question. 
I would like to have a little bit of faith in the idea that a critical mass of countries, of nation-states, of civil societies could have a big influence on how the whole platform works. In other words, if a mass of people got together and articulated a series of guidelines, I think those guidelines could become much more widespread, simply as a result of the fact that these are planetary-scale infrastructures. Probably we're going to have to take a bottom-up approach like that, because top-down is just too captured at this stage. And we had a question up here, yes. Thank you, I'm Michael Abramowitz from Freedom House. Thank you for a great panel. I just wanted to pick up on the gentleman's question about the business community. You know, being here in Davos for the week has made me maybe a bit more pessimistic about where the business community is on support for democracy and human rights. And I'm just wondering, maybe particularly for Zeid, but any of the panelists: what's been your takeaway on that? And specifically, and this has been an issue that all of you have been working on for quite some time, how do we really make the case for democracy, for human rights, to this community when there's just so much money at stake in these conversations? Maybe do you want to start with that? I mean, if a business is operating in Burma, why should they care about democracy? I mean, yeah, it's a question of how we regulate. I think, in the past few decades, maybe after World War II, institutions were formed to promote democracy and human rights. 
You know, the topics were basically popular, but I'm not sure there was real, genuine political leadership to really promote those principles and values of democracy and human rights. In our generation, what we see today, and what I have been realizing, is that there is a lack of political will, of genuine political will, to really promote democracy and human rights and to protect human beings. And I think the human rights agenda has basically been sold out for their own interests by the leaders of the world, including the leaders of the most powerful countries. Human rights has been attached to their own interests, to geopolitical or business interests. And in most cases, for example in the situation of the Rohingya, there aren't enough of their interests at stake to really address the issue, so the issue has been ignored. Compared with the situations in the Middle East, the situations in Burma and South Asia have not been getting enough attention; the issue has been ignored or neglected, not prioritized in discussions around the world. So I just feel we really need real moral leadership to enhance the value of human rights and democracy again. We are constrained by a lack of moral leadership that would genuinely promote human rights and democracy. And now is the time we have to understand and realize this, and really cultivate it again. And a platform like this one, like the WEF, is where not only the political leaders but also the corporate leaders, the private-sector leaders, have to pay attention and really come up with a certain level of moral leadership to uphold the values of human rights and democracy and to protect people. Genuinely, it is such a shame, and I am very, very sad about this. Okay, we're going to have to, I think, leave it there. 
But let me, Nick Thompson, Wai Wai Nu, Trevor Paglen, Zeid Ra'ad Al Hussein, thank all four of you for your very insightful contributions today. Thank you all for joining us. Thank you.