Our next speaker is Ethan Zuckerman who, again, requires no introduction to this group. Ethan is a senior researcher at the Berkman Center and has been involved in a number of really important projects involving media, particularly in the developing world, among them Media Cloud. Are you all set up, Ethan, to start? I'm going to save the discussion for the end of this because we're running a bit late.

So, Jeff, thank you for that introduction. For people who don't know my work well, a lot of the work that I do is on technology and activism, and a couple of years ago I put forth a pretty simple theory called the cute cat theory, based around a fairly simple observation. When the web is initially invented, it's created by physicists trying to figure out how to share theoretical papers. Once we get Web 2.0, once we get this whole participatory web, it's about helping people share pictures of their kittens. And it's important to understand this because it has implications for activists: it turns out that almost all the social media tools we use for things like sharing pictures of our kittens have very interesting implications for activists. In many cases, a tool that's incredibly popular for sharing kittens is the most powerful tool you can put in the hands of an activist. And the reasons for this are pretty straightforward and pretty simple. These tools are really easy to use. They're designed for mass markets. They're really pervasive. People are already using them; you've got millions and millions of people on them. And the fairly subtle point in all of this, I think, is that there's a tremendous cost to an authoritarian government in trying to figure out how to control or censor them.
If I'm working in a country with a high degree of censorship and I put up an explicit human rights site, there's very little cost to that government in censoring that site, because there are only a few hundred or a few thousand people looking at it. But if I go onto Facebook and find a way to use my Facebook group to promote awareness of human rights abuses, and I force the government to shut down Facebook, not only are they alerting the 500 people who cared about my issue in the first place, they're now alerting everybody who had wanted to use Facebook as well. So the goal behind the cute cat theory is to find some way to raise the social cost of censorship. Now, over the last couple of years, I've been doing research with a guy named Hal Roberts over at the Berkman Center. We've been looking at some of the challenges that come into play for activists and independent media. And one of the most massive ones is distributed denial of service attacks: the ability to launch attacks from thousands of computers and shut down an individual website. This turns out to be yet another reason why you might want to use social media rather than having your site viewed by 500 people. Because your site viewed by 500 people is on a little matchbox of a server that a hundred determined people can stomp into the ground, whereas stomping Facebook or Google or YouTube into the ground is a much, much harder thing to do. So I put this theory up, and I've been thinking about it a great deal in the context of Tunisia. Tunisia actually has a very deep connection to cute cats. Not only are some of the best examples of the cute cat theory in practice from Tunisia, but in fact Tunisia is filled with cute cats. They show up in all sorts of places, including on football pitches. This is not Photoshopped, by the way.
And this image was, in fact, how Judith actually convinced me to come give a talk: by showing me that image in the bottom left. She convinced me that drawing an analogy between what's recently happened in Tunisia and the cute cat theory was a helpful thing to do. And in fact, the cute cat theory is very helpful for understanding what has actually happened in the Arab Spring protests that start in Sidi Bouzid, Tunisia, a city of 40,000 people. Now, if you haven't read about this in detail, Zeynep Tufekci, who's in the room, has written one of the great introductions to all of this, and there are a couple of other good pieces out there that make the point that for what happens in this dusty little town to spread throughout the country and throughout the region is absolutely remarkable. There have been a lot of protests in Tunisia. There have even been protests where people have set themselves on fire; there were, in 2010, fruit sellers setting themselves on fire to protest corruption. But one of the reasons that this protest spreads out of Sidi Bouzid is that people find a way to use social media to document it and to have it go from social media to a broader audience. Sami Ben Gharbia, my dear friend and colleague, puts forth a model that looks something like this. It's a bit of a self-serving model, because Sami's organization is the second step of it, but it's a model that makes sense, whereas almost no other models make sense in explaining what happens in Sidi Bouzid. You have these protests take place. Protests in the past got very little attention: the government comes in, shuts them down, makes it impossible for anyone to pay attention. In this case, what happens is the protests get filmed and those videos get posted onto Facebook. And Facebook has some great advantages and some great disadvantages. The great advantage is that it's incredibly pervasive.
The great disadvantage is that it's extremely difficult to find this content and make sense of it. It's hard to search for it. It's hard to tag it. It's incredibly hard to translate it, which turns out to be really critical, because Tunisian Arabic is not everybody else's Arabic. So you have this group, Nawaat, which Sami runs along with a bunch of other activists, who come in and basically mine Facebook, grab these videos, figure out how to put them up, figure out how to subtitle them. That becomes a resource for Al Jazeera, which desperately wants to cover what's going on but is literally shut out of Tunisia; they've never been able to open a bureau there because of Ben Ali. But everyone's got a satellite dish, and so very quickly you have material moving from Facebook through Nawaat, through Al Jazeera, and then people are able to see a mirror of the protest. They can see what's happening in Sidi Bouzid. They can make the decision, in Tunis, whether or not they want to get involved with events there. So why Facebook? It turns out the main reason is really simple: Tunisia has blocked everything else. They've blocked YouTube. They've blocked Vimeo. They've blocked Dailymotion. They've blocked all of this. And the reason they haven't blocked Facebook is sort of fascinating. They try to block it in 2008, and what ends up happening as a result is a 3X usage increase. It gets blocked, and so many people are interested in the idea that there must be something on Facebook that they find ways to get around the firewall, to use a proxy server, to get online and get onto it. There's a massive social media campaign about this. And so what happens is that at a certain point the Tunisian government concludes that they're better off trying to figure out how to embrace the technology rather than fighting it. And one of the little-known facts is that before Ben Ali steps down, he offers a set of concessions.
The day before he steps down, he offers concessions. One concession is that we're gonna stop firing live ammunition at protesters. The second is that we're gonna reduce the price of bread and oil. And the third is that we're going to end internet censorship. If you think about this set of three concessions, it's kind of a fascinating set. The only way I can make any sense of it is that at this point, Ben Ali has 234,000 Facebook friends. He's behind only Goodluck Jonathan of Nigeria at that point in the all-important table of African leaders on Facebook. So Facebook becomes incredibly important as a space for Tunisia. And this is an interesting vindication, I think, of this theory that I'm putting forth, which is that it's possible to have these activist uses of very popular social media not really designed for this purpose. But the point of my talk is not really to gloat about having vindication for this theory, although I have to admit that's nice and enjoyable. It's mostly to talk about where that theory is starting to fail. And obviously the first place it fails is Egypt, because it turns out that for any use of social media for protest, if you shut off the internet, you largely shut off the ability to use social media effectively. This isn't really a critique of the notion of using one platform or another. This is simply pointing towards a basic vulnerability: when a society is deeply threatened, it's gonna cut off its communication systems, even if there are massive social and economic consequences. A much scarier approach is the Chinese approach to the cute cat theory. The Chinese approach basically says: we've got Chinese cute cat technology. We can generate as many cute cats as you would like. Those are cute cats that natively speak Mandarin or Cantonese. They're posted directly to Youku, not to YouTube.
We block all the rest of the social media. We make Chinese-language social media on locally hosted platforms that have a huge staff of censors who manually review the videos to make sure that the cats aren't putting their paws in the air demanding revolution but are in fact just flushing the toilet. And this is a much more effective way to combat the cute cat theory, but it has the complication that it costs an immense amount of money. You need to have a massive audience to be able to do it, more than 400 million users. You need to be able to ramp up the labor. If you think about the possibility of social media taking off if everything had to be manually reviewed, it's very hard to imagine that happening in America at our levels of labor costs, but it's actually conceivable within a Chinese labor system. But that's not what I'm really scared of. What I'm really scared of is the idea that we are embracing these spaces without thinking about the implications of moving our public speech into private spaces. When we try to run a revolution on Facebook or on YouTube or on any of these tools, we are trying to organize a protest within a shopping mall. And we are leaving it up to the owners of these spaces whether or not we're going to be able to use them as a public space for these political purposes. Now, when you throw out that observation, there are immediately two different ways you can go with it. The first is to offer the suggestion that the people who run these spaces, the giant evil corporations, are in fact anti-revolutionary, that they will never permit social movements. And that's not actually what I wanna argue. And I should point out, I didn't actually put the horns in. If you go and search for images for "Bill Gates evil" and "Zuckerberg evil" and for "Google evil", you actually get the devil images associated with them. It's sort of an interesting internet index. I'm thinking about doing that on an ongoing basis to see who shows up.
But I actually think that's the wrong way to look at it. I think the right way to look at it is that we have the challenge of using digital public spaces that have been designed for one sort of behavior for a very different sort of behavior. So we end up with situations like this all the time. And I'm not meaning to beat up Facebook; this happened to be the most recent example I saw of this, where Facebook has shut down the account of an activist associated with Medan. It's quite possible that this account has gone back up. What happens here is the intersection of a process that makes perfectly good sense for maintaining community norms with a human rights situation where it doesn't work well. The process is: if more than five or so people complain about someone's profile, you go in and investigate. And if you don't read that profile because you don't read Russian, you're probably gonna disable that profile at that point. The truth is that this creates an interesting asymmetry. If you can get five people who are pissed off about this journalist's profile to come in and complain, either from their real accounts or from fake accounts, you have the possibility of having that account disabled. And so you end up with this asymmetry where it's perhaps the right way to manage a community when it's a community of friends who know each other socially, but it's a real serious problem in the world of activists. Similarly, activist content ends up bumping up against terms of service all the time. And the terms of service may be the right thing to keep Facebook or YouTube a safe space where you're not stumbling onto imagery of people being murdered. But in many cases, that's exactly the activist content we need a way to share. There's a massive debate about whether or not real-name identity is a safe thing for activists.
And there's the real problem that many activists don't wanna reveal themselves, because the governments they're fighting against suddenly have a way to trace them and trace their friends if they're using real names. But in many systems, we use real names as a way of essentially ensuring that we can control discourse in that space and not have it turn into the anonymous comment threads on something like YouTube. So the trick with this is that organizations are slowly but surely figuring out how to deal with it. But it's a matter of slowly but surely. This is a video that I'm not gonna show, not just because of time, but because it's incredibly uncomfortable. But I recommend looking it up, given the US's continuing involvement in Bahrain. This is a video that got sent to me by Bahraini activists. And it was the first evidence in the Bahrain protests that this was not gonna go the same way it went in Tunisia or in Egypt. These were government troops shooting unarmed civilians who were clearly marching peacefully, hands raised, towards a military barrier. And then we see in the video shots fired, protesters falling to the ground, blood, et cetera. And the reason people had sent these to me directly was that they were convinced that as soon as they posted them to YouTube, they'd be pulled down. Because they are absolutely, technically, terms-of-service violations. So I took them, I cached them, I tried to figure out how we were gonna get them out in different places, on Global Voices and elsewhere. And to their great credit, YouTube did this instead. It allowed them to remain up. It put this disclaimer on top of them, essentially saying, do not click this unless you absolutely, positively know what you're doing. But we're going to essentially make an exception to our terms of service to allow this content to remain.
Now this is great and this is laudable, and it's incredibly laudable that Facebook has found ways to stop deleting Wael Ghonim's profile and to allow it to be celebrated and allow it to grow. And it's great that sites across the board are finding ways to do this, but it pulls out some basic tensions. It's not real easy for YouTube to run advertising against this content. Most advertisers really don't want their banner ad next to this particular video. It's simply counter to the business model. It also has some really interesting implications. At the point we conclude that YouTube is the tool for revolution because it's a great way to share video, does it remain possible for YouTube to go into a market like Vietnam or a market like China, where that's going to be viewed as an extremely difficult thing to do? Look, the challenge is this. We've created these spheres. And I do mean we, because this is the world that I literally come out of; I used to be the CTO for an early web content company. We created these tools to enable a certain set of behaviors. We wanted friends to be able to exchange information with one another. And we built terms of service that made it possible for us to control that behavior, and in many cases empowered ourselves to take down any content we were troubled by. But now we end up in a scenario where we've allowed these privately held spaces to become our networked public spheres. These are where we're having our political debates. And at a certain point, the terms of service that we've put forward to enable that interpersonal communication, that blurry line between public and private that danah's talking about, may be completely inappropriate for trying to figure out how you start a public movement. And so the challenge becomes this.
How do we ask corporations, private companies that have liabilities to shareholders and responsibilities of their own, to take on this role of hosting these deeply important public conversations? Or do we try to figure out the alternative, knowing full well that the alternative is a really, really difficult one? We never drove anyone to do activism on Facebook or on YouTube because we thought they were the best set of tools. We drove people to do it because of all the other things that make these tools appealing for activism. Activists aren't gonna move away from these tools, even as they turn out not to be the right spaces. We need to figure out how to have a dialogue about how we build public spaces atop them. So, thanks.