So, as Margot was saying, we are very happy and honored to receive Jillian York. Jillian is an American activist for freedom of expression, and she's now director... Oh, it's this laptop. She's now working... Well, working. She's Director of International Freedom of Expression at the EFF. She's also a journalist, and today's talk is "Our Not-So-Secret Future," which is a nice mix between freedom of expression and privacy. So, Jillian, thank you very much. Thank you. Pour myself some water. Give me just one moment to pull up my slides. No? I don't know how to use a Mac. Perfect. Okay. So, I really enjoyed the last panel. We use Nextcloud products at EFF. And EFF, actually, this is just a side note, it's not my talk, but the Electronic Frontier Foundation has always had a very strict policy about what tools we can use in the office. And, you know, we're constantly being flexible about this, because it's not reasonable to always demand that people use only this kind of phone, or never use Google. I know. But I think it's really great, because part of it is practicing what we preach and trying to set an example for other organizations and other people. But it also means that we have one of the only websites I've ever found that has no trackers on it. So I'm going to be talking about privacy and surveillance today. Most of my work is actually around freedom of expression, so whenever I give a talk about privacy, I feel a little bit out of place. Yeah, this is not my precise expertise. And yet, at the same time, these two issues are so closely and tightly interlinked, because without privacy, how can we feel confident in expressing ourselves? How can we feel safe in being who we are and putting ourselves out there for the world if we're not able to also protect parts of ourselves and keep those parts private?
So when I leave my house every morning... well, raise your hand, actually, if you know where the closest surveillance camera is to your front door. Okay, that's a few. So there's one just outside my house, because I live right near a hotel. And when I walk out of my house every morning, I know exactly where the first surveillance camera is. And when I come home late at night, because it's Berlin and I come home at four or five in the morning, I see that surveillance camera and I feel like it's capturing all of the things that I'm doing, or not doing, right. So when I moved to Berlin, I started this habit of trying to count surveillance cameras, because I think Berlin is unique in the world at the moment, at least for cities of its size: it doesn't have nearly as many surveillance cameras as most other cities that I spend time in. And then of course, at the other extreme, you have London, where I can't even count the number I can see at any one moment because there are just so many of them everywhere. So with that in mind, I would posit that ubiquitous surveillance is being rolled out into consumer products and being developed through partnerships between corporations and nation states, creating interconnected, invisible systems. It's almost like we're living under a microscope. And as I said before, the lack of privacy has the effect of chilling our free speech. And it's not just the actual lack of privacy, as we heard a lot about six years ago when Edward Snowden revealed the extent of US surveillance on individuals online; it can even be the idea of surveillance, the perception that there might be surveillance happening. There was a 2012 survey of people in high internet penetration countries, in North America, Europe, and a few other parts of the world, that found that 50% of respondents believe they're being watched.
Now, it's more likely that about 100% of them are actually being watched, so it's kind of amazing that only 50% think they are. But at the same time, just that perception of surveillance can stifle the ways that we talk about our lives and the ways that we express ourselves, whether online or out there in the public sphere. So I think that we have to be asking ourselves a few questions. My slides are out of order. There we go. Who creates the technology? Who benefits from it? And who should have the right to collect and use our data? Right now, there's a murky gray zone between who builds, who captures, and who benefits from our information. Just to throw out an example before I get into the examples of my actual talk: is anyone familiar with Adam Harvey's work around facial recognition databases? So this past February, I was on vacation. I was in Colombia, having a really nice time, ignoring surveillance cameras and not thinking about work. And I got a message telling me that I might want to look at something. I was like, ugh, I don't want to look at this, this is work, I'm not going to. But it turns out I'm glad that I did, because it turns out that I'm in one of the largest facial recognition databases that's ever existed. And what happened, the short version of this story, is that a group of researchers created, and this is hilarious to me because I don't think of myself as a celebrity, a Microsoft celebrity database. And the definition of celebrity in this database was also a bit murky. It included a lot of names that you would know: people from sports and film, singers, politicians. But there was also this subsection of people working in digital rights, privacy, and free speech. Some of them are much bigger names than mine; some of them are names I hadn't heard of. But we had somehow gotten caught up in this attempt to create a large database of celebrities.
And I think that the reason behind it is really simple. There are photographs and videos of me giving talks from the past 10 years. So to paint a picture of what it looks like for somebody to age from, let's say, 25 to 35 years old, you could easily take those images, put them in a database, do that for 100, 1,000, a million people, and you'd have pretty good training data for teaching machines to understand how people age over a period of 10 years. Anyway, long story short, I got caught up in this one database, and then the images from that database were used by other researchers, and so on and so forth, until it turns out that I'm in a U.S. government facial recognition database that's being used for probably nefarious purposes. And this is really common, because while this capture is happening, we often don't know what databases these images are going into, what they're being used for, and again, who's benefiting from them. You might remember this, or you've probably seen things similar to it. The most recent one I can think of is the Facebook or Instagram challenge where people were putting photos from 10 years ago and right now side by side, and it only took about 15 minutes before somebody I follow for work was saying: obviously, this is a data capture, they're trying to use this to train systems. And I have no doubt that somebody is doing that, even if that's not why Facebook launched it in the first place. So this particular site is from, I think, 2005, it's pretty old, and it was really simple. It was a Microsoft product, actually not even a product; it wasn't meant to be really public. It was an experiment, I believe through Microsoft Research, but don't quote me on that.
And this experiment launched, and some people found the website, and it turned out that they really were excited by it and wanted to use it. For me at least, I look at this and I see a bug, but a lot of people see a killer feature: something where they can upload a photo of themselves and be flattered by what the machine tells them, and in this case it was to tell them how old they look. Now, I don't want to know that, but I'm guessing a lot of people did, because what happened was that this site, even though it wasn't meant to be so public, went viral really quickly and actually crashed the servers; again, it's the 2005 internet we're talking about here. As a result, you have this site that was online for a brief period of time and went down shortly after, and we have no idea how the data captured by this website was used. And this is a pretty common phenomenon, as I said; we see it a lot. The other thing I would mention is that the means of production have always traditionally been controlled by those seeking to consolidate power, but when it comes to tech and its actors in Silicon Valley, and I lived in San Francisco up until about five years ago, I think that we're seeing a new form of power consolidation and new networks forming between companies and states and the way that they operate in this whole sphere. And what's happening in Silicon Valley is creating a greater and greater divide, both in terms of the wealth gap, as I'm sure you know if you read the news, but also in terms of the divide between convenience and harm. A lot of the technologies that are coming out are designed for use in the United States, for example. A lot of the things that are being proliferated and marketed as really important tools mean we're basically signing over a lot of our data, as you well know from the work that you do, in a trade-off for convenience.
Okay, so back to facial recognition for a second. The facial recognition market is expected to reach $8.64 billion by 2021, and that's actually an outdated figure from two years ago. It was a projection for 2021; I'm guessing it's gotten even bigger since then. We've seen a lot of developments. The How Old Am I project that I just showed you is just one silly example. Like I said, we've seen other ones that are really similar. There was another one in July 2016 called Face My Age, where people not only uploaded images of themselves but also information like their age, their gender, and their marital status, in order for the database to give them some sort of feedback. But in that particular case, it went down before anyone had a chance to know what that data was being used for. So we don't know. There's some image database out there with all of that information that people handed over, but it's gone; it's lost to the public. So VKontakte, is anyone familiar with this site? Okay, so it's a Russian social networking site. And then there's FindFace, which pairs up with it. And what FindFace was used for, basically its very purpose, is to identify individuals. You take a photo of somebody on the street, or you find a photograph of somebody online, and you run it against this database, and you're able to identify them based on the social networking information that they've handed over. I'm sure that you can do this with Facebook and its social graph as well, but this just makes it so much easier to do with VKontakte. And of course, this has already been exploited in really horrific ways. The first experiment that I was aware of was by a man called Egor Tsvetkov, who did an experiment in St. Petersburg where he took photos of people on public transit and ran them against this database. He was actually just trying to show how it could be used for potential harm; he didn't do anyone harm, he was just exposing it.
But what happened next? Well, somebody took a lesson from that, and what they did was take stills from pornographic films and use them to dox the actresses in those films, which is absolutely unacceptable and just demonstrates to me how easily these tools can be exploited. Oh, I think that's cut off a little bit, sorry. There we go. So users of FindFace can simply take out their phone, photograph a person, run the image against the 410 million user database, and it's able to identify users' faces with more than 70% accuracy. And here's the founder of FindFace talking about how he thinks it should be used. He says that it also looks for similar people, so you could just upload a photo of a movie star, hint, hint, or your ex, and find 10 girls who look similar to her and send them messages, because of course that's what every woman wants. Hashtag stalker tech. So this is just one particularly horrific example that's been used in really horrific ways. But the fact is that these companies are constantly proliferating. And again, these next slides are a little bit outdated. These companies still exist, but I've seen even more horrific examples since I put this together, and I just haven't had a chance to get them into the slides. So Sightcorp, this is a fun one. Sightcorp uses eye tracking to figure out your age, your gender, and your mood. It provides "a complete understanding of human behavior with just a few lines of code." I mean, who wouldn't want that, right? Nothing that psychologists haven't been trying to figure out for centuries. And then another one called Kairos, which allows you to "understand people with facial recognition technology," just in case you can't understand people under normal circumstances; just use our technology and you'll be able to. And then Cognitec, which is my favorite: they use "anonymous face recognition for people analytics."
I'm not sure they know what anonymous face recognition means. I certainly don't; the whole concept of facial recognition is the matching of people. But don't worry, it's anonymous. Trust us. So what happens to these databases? Well, this is a quote from Kate Crawford. There we go. "Algorithms learn by being fed certain images, often chosen by engineers," and that's a key point that I'll come back to shortly, "and the system builds a model of the world based on those images. If a system is trained on photos of people who are overwhelmingly white, it will have a harder time recognizing non-white faces." And there's a really important key phrase here: often chosen by engineers. I love engineers, and I think that engineers should be part of every conversation in policy and in the other workings of corporations or governments or nonprofits, what have you. But I also believe the reverse: it's really important to have people who've studied the social sciences in the conversations about how these databases are being built and used. And in a lot of cases, that's not what's happening. As a result... everyone knows Google, right? You're familiar with Google? Google identified photos of black people as gorillas. This is a real thing that happened. A colleague of mine, well, now a colleague of mine, was the person who figured this out and published it. And why did this happen? Well, it's just what happens when you train things without having the social context and awareness to train them to do a certain thing. And we've seen this time and time again. There was another example in China, around 2007 or 2008, where there was an attempt to use not facial recognition but essentially skin recognition to block porn in certain search engines. And the tool, whatever it was, was really effective in blocking porn that included white people, but for anyone with darker skin, the porn still showed up.
So it was kind of interesting, I guess: if you were upset by Chinese censorship and you wanted to find porn, you could still do that. But on the other hand, you have this really strange racial bias happening there. And of course, this is a relatively benign example: it didn't have a real effect on people's lives, and as soon as it was brought to their attention, they fixed it. But there are other examples being constantly built, proliferated (I'm really having a hard time with that word today), and utilized that I think can have even more horrendous consequences. So for example, neural networks learning to identify criminals by their faces. And again, this isn't matching people who've been convicted of crimes in other states or countries. This is actually using things like emotion recognition, like what Kairos promotes, to identify people who might commit a crime. Really dangerous stuff, in my opinion. And here's another example from a Georgetown study, "The Perpetual Line-Up," which came out a few years ago. It's a really good study that found that 117 million U.S. adults are in facial recognition databases, and most of those come from live surveillance videos or other types of police feeds. And of course, these uses of facial recognition technology clearly have a disproportionate impact on already marginalized communities, and that's always the case when we're talking about these kinds of uses of technology. I think it's a really important point to make as well that without interrogation of the testing and training data, there will always be replication of bias, and that these technologies have the ability to create a complete picture of who you are based on just a few small pieces of data. Oh, and I actually forgot to mention this as well: it's not just facial recognition, it's also things like gait recognition, and this next one really kills me, tattoo recognition.
I wish I'd known, when I was in my late teens and starting to get tattoos, that someday they would be used against me like this. Now, obviously, I can't hide from the systems. So, this is just a last comment on FindFace, and this was from a Guardian journalist, who found that FindFace's designers imagine a future world where people walking past you on the street could find your social network profile by sneaking a photograph of you. And how many times have you seen that happen? I see it happen all the time in Berlin, and it really makes me upset. And that shops, advertisers, and the police could pick your face out of the crowd and then track you down via social networks. This is already possible, as we saw with VKontakte and FindFace. The question is, how quickly is it going to become possible to do this on a real-time basis, and are we going to let that happen? So, I think we have to be asking ourselves: what world do we want? Privacy is an uphill battle. Oh, that was from this slide, sorry. I love that. Privacy is an uphill battle, but I think that it's one worth fighting for. And so I think that we really need to be thinking out of the box: not within the legal parameters that we already have, not within the contextual parameters or the limitations that we have in terms of funding or individual capacity. We really need to look at what world we want and how we're going to get there, and I think we need some out-of-the-box thinking on this. You all have the capacity to change the future, so let's do it, because privacy matters. Thank you. I didn't have a timer, so I have no idea if that went short or long. Ah, short, okay, well, I'm happy to answer questions. Yeah, I try to be an optimist about stuff like this.
Oh, sure, yeah, so the question was: since we're seeing so much of this technology being put out in different countries around the world, in China, in different types of markets, do I see any real opportunity for us to cancel this, to shut it down? Yeah, like I said, I try to be an optimist. When it comes to the diversity of actors, particularly the diversity of nation states that are already utilizing these technologies... and I didn't even get into biometrics. I could have talked for a really long time about biometrics and refugee populations and how horrific that is. But I don't have a lot of hope right now that we're going to be fighting back against China's use of facial recognition. I don't feel that that's within the capacity of most of you, or most of us. At the same time, I think that there are real opportunities within democratic countries to push back against the way these technologies are being used before they're implemented further. So in Berlin, for example, you have, is it Südkreuz? Am I going to get this right? Yeah. So Südkreuz, where they're using facial recognition systems: basically, when you're using the escalator, there's a camera pointed at you. Those technologies are already being piloted in Berlin, but I think that we have the opportunity to stop them. And in the U.S. there were actually a couple of recent wins in this regard. Oakland, California banned facial recognition technology. There's currently a huge campaign, organized by Fight for the Future and many other organizations, to ban facial recognition at the municipal level in cities across the U.S. And I believe that's starting to get some traction in Europe as well. You should definitely seek them out and get in touch with them if that's something that you're interested in.
So yeah, I mean I do think that now's the time, and that it's not too late in a lot of these jurisdictions, maybe not London, but in many places, including Berlin, it's not too late to fight back. I wish that I could channel my colleagues who have been working on these campaigns. Oh, yes. So the question is, how can we convince politicians to ban these technologies? I have to say I've not been working on that advocacy directly, so I really wish that I could harness and channel all of the power of my colleagues who've been in this fight. But, you know, I think that there's an increased recognition at the moment, at least among certain politicians and certain governments, that we have gone a little bit too far with the use of certain technologies. Not in all cases; the copyright battle, for example, is one that I'm just horrified by. But when I look at things like this, I think that, first off, we need to arm ourselves with data. There is significant research that demonstrates how these technologies can be weaponized against marginalized communities in particular. And in democratic states that haven't gone full right-wing yet, at least, if we arm ourselves with data and bring it to politicians, I think that's one way that we should really be approaching this, because we do have the data and we do have the capacity to demonstrate to them that the harms of these technologies are greater than the good, at least at the current moment. And then I think we should also be looking at regulation: not just banning the technologies, but regulating what can be sold to whom, and what can be built, and how. Any other questions? Okay, this is a question I can definitely answer. So the question was: when it comes to the big companies, the big social networks, do I think that we should boycott them, throw them out, lobby them to change, regulate them, et cetera?
I can answer this question more broadly, not specifically about facial recognition. A big part of my work is actually trying to get big social media companies to reform their practices around speech, around content moderation, both in terms of what they maybe should be taking down and what they shouldn't be taking down. And I think my answer is a little complicated. I want to boycott them; deep in my soul I want to boycott them. At the same time, I think it's really important to recognize that a company like Facebook has what, 2.8 billion users around the world, and I think that's an out-of-date number at this point, but something like that, which is almost a third of the global population. And so where we have rooms like this one, and even bigger rooms, where people are aware of all of the pitfalls of these platforms and have the capacity to move to federated networks, or to get off social networks entirely, then I think we should. And with the EFF having zero trackers and not using certain tools, as I said at the beginning of the talk, I think that our communities should be setting examples. At the same time, there are 2.8 billion people, many of whom have free or reduced-cost access to social media platforms through programs like Facebook's Free Basics, or whatever it's called these days, and other net-neutrality-busting programs that are put out ostensibly to help people connect to the internet, but really to get them to start their internet experience on Facebook. And I think we can't forget about those folks. It's really easy to just say, let's boycott this, or let's replace these networks, and don't get me wrong, I think we need to be doing that, but I think we also have to be working to reform and/or regulate. And whether the answer is reform or regulation really depends on the issue, so those are details that I would get into case by case.
But I do think that we need to be looking at it from both sides. And suing them. And leaking information, you know, there's a lot of options. All right, perfect. Thank you.