And we're live. Hello, I'm Danny O'Brien. I'm the International Director of the Electronic Frontier Foundation. And you join us here in San Francisco, where we're conducting another one of our little experiments with Facebook Live, doing a live-stream question and answer session. We've had a lot of questions coming in, and this is our attempt to let everybody hear our answers about what to do about the incoming Trump administration. A lot of people are worrying that some of the warnings we've given out about the nature of government surveillance may shift from being something they've considered, perhaps as a theory, to actual day-to-day practice. Now, we've written a couple of pieces on this, and we've been thinking about it quite deeply within the organization. I'm joined today by Jacob Hoffman-Andrews. Jacob, you're a senior staff technologist. That's right, Danny. OK, good. And you've been working with Erica Portnoy and a few others to plot out what people can do in the, is it, 59 days until the presidential inauguration? I think that's right.

OK, so we had a blog post about this going through a few points, and I thought we'd spend this time talking a little about what those individual steps can be. And if you have any questions, just type them into the comment bar underneath. My colleague Dave Maass, who is over there, an investigative researcher, will be investigating your questions and then handing them to me very subtly. So I'll ask them at the end, probably.

So I think you gave five points in the post, and each of them is something we've talked about companies doing in the past. But really what we're trying to convey is that the threat model has slightly changed, and so there may be stronger reasons to act now. Just starting with the first one, pseudonymity. What does that mean, and what would companies have to change about what they're doing now?

Yeah, so the idea is that pseudonymous speech is actually a really important factor in being able to speak against government power when the government is trying to restrict First Amendment rights. And this isn't a new idea. It has a long history in the US, going back to the Federalist Papers, one of our founding documents as the Constitution was being adopted. The idea is that it's easier to speak your mind when there's lower risk of retaliation by the government, or by third parties who might try to intimidate you out of speaking. The problem is that on a lot of digital platforms where people are expressing their points of view, pseudonymous access is officially not allowed by policy. That includes Facebook, where we're giving this presentation, and Google+, and unfortunately more platforms are considering this type of policy. So what we're arguing for is that there should be at least some room for pseudonymous conversation.

In fact, people implement this themselves. My background is dealing with the internet internationally, and we've seen that whole communities will adopt pseudonyms even on a service that ostensibly blocks them. But the way that somewhere like Facebook will enforce this is that if somebody complains that a user is pseudonymous, they'll take down their account or they'll require government ID.
So what changes if you're in a political environment where you're concerned that the government may target somebody because of what they're saying, whether through defamation lawsuits or something like that, or that other users may harass people because of what they say? What should you change?

Yeah, so we're asking that companies that provide a communication platform make allowances for their users to be pseudonymous, because we think that's a very important value in free speech. Platforms that make a best-effort takedown of people whose names they don't believe are real mean that the people who do adopt pseudonyms, in violation of that policy, are at risk of having their expression taken down, having their posts and their account disabled.

So one of the things we've been thinking about, because it's been quite hard to get an organization like Facebook to rewrite what is one of the bases of their business model right now, is that because it's such a black-and-white thing, either you're using your real name or you're not, we've been suggesting that companies allow some gray area there, and also not make it so easy to retaliate against someone. My experience is that in countries where the government is trying to chill speech, or in fact just trying to disrupt political organizing, what they'll do is find a group that is run by somebody using a pseudonym, and then target that person and complain. So are we saying there should be more room in that process for someone who's using a pseudonym to argue in favor of keeping it?

Yeah, absolutely. And you also mentioned the threats don't just come from the government. What we see, especially in repressive regimes, is that there are often groups, sometimes violent groups, that consider themselves aligned with the government and will seek out people whose viewpoints they disagree with, and will try to intimidate them online and sometimes escalate to in-person intimidation. There's a tactic called swatting, where people will try to get the police called to somebody's house in the hope that a miscommunication will get them injured or killed. So there are a lot of reasons why people who are at risk of intimidation need to use pseudonyms.

I think one of the interesting things about a platform like Facebook, which encourages engagement and has real names, is that people don't necessarily know when they're publicly revealing their choices. So one thing we've been looking at, again taking the analogy from other countries, is that people have gotten into trouble in Thailand and other places for clicking like. If you use Facebook a lot, you realize that when you click like, that's a public signal; that's something everybody else sees. Do you think it makes sense for a company like Facebook to think more carefully about how they're revealing that real-name information?

Yeah, absolutely. I think Facebook has a pretty comprehensive set of privacy controls, but there are some things that you're just not allowed to keep private.

Can I ask you something that's a little separate from all of these questions? You've worked in the past at several of these companies: you worked at Google and you've worked at Twitter. If people within those companies are concerned about this kind of thing, how do you make change? How do you effect change?

Yeah, it's a great question. I actually did lead an internal group at Twitter to ask for change from management, and it was remarkably successful.
I think a lot of people working for these big companies don't realize just how much power they have. And often it's not that hard to get the ear of your manager, or even upper management. And often you don't need to. If you're designing a new product or a new feature, if you're an engineer, you have a lot of say over what gets logged and what gets retained and how that feature works.

Yeah, and the other thing that's always been useful for us is trying to include a use case in this. The traditional use cases are a college student who wants to communicate with their friends, and just dropping in the use case of an activist, here or abroad, really helps clarify: well, look, what are the things we're gonna do that will enable engagement, and what are the things that are gonna actually get people into trouble, right?

And I would encourage people not to necessarily view it in a top-down way. Sometimes one-to-one conversations with the person who's in charge of a given feature can be remarkably effective. Sometimes all that's needed is an ask or a request, and sometimes you need to make the request and follow it up and keep pushing, a small handful of people, or convince some of your colleagues to join with you and say, this is a thing that we think is important. We wanna protect our users.

So moving down the list, and this one might be a little bit tricky because it gets at the business model of a lot of these companies. In the blog post, we talked about behavioral analysis, which is, excuse me, that's my alarm to remind me I should turn up and interview you. So, behavioral analysis: explain perhaps what that is for people who aren't familiar with it.

Yeah, so behavioral analysis is the practice of collecting navigation data from across the web. When you browse a website, most websites have a lot of embedded tracking code from various sites: Twitter, Facebook, Google, dozens of ad companies. And so each of those organizations finds out that you visited the pages that have that tracking code on them. And they collect that data and they try to figure out who you are and what you like. And, you know, behavioral analysis is maybe even an old-fashioned term, because it used to be more about what you liked, with the idea of ad targeting. But now, increasingly, this type of tracking is attempting to divine your real-world identity by connecting your identity across multiple sites. So you logged in over here and you gave them your address; we're gonna connect that through a data broker, and when you visit this other site, we know your real identity and your home address.

So this is the idea of being able to extrapolate facts about you, not from information that you've given these companies, but by divining what sort of person you might be from your online behavior.

Yeah. And I think the most important aspect of behavioral analysis is that it's generally not consensual. The user didn't give over this data, and they're not necessarily receiving something that they value in return.

So how does this change in an environment where you're more concerned about the government conducting surveillance or attempting to target people?

Yeah, so one of the threats under a government that is seeking more data from private companies, which we think may happen, is that it may seek data that's been collected from people without their knowledge. And the number of sites that have a log of all the websites that you visit is quite large.
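To make the mechanics concrete, here's a minimal sketch of how a third-party tracking pixel learns your browsing history. This is illustrative Python written for this discussion, not any real ad network's code, and the cookie name, ID, and port are made up:

```python
# Sketch: how an embedded third-party tracker learns which pages you visit.
# Illustrative only; real trackers are far more elaborate.
from http.server import BaseHTTPRequestHandler, HTTPServer

# A 1x1 transparent GIF, the classic "tracking pixel".
PIXEL = (b'GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00'
         b'!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01'
         b'\x00\x00\x02\x02D\x01\x00;')

class Tracker(BaseHTTPRequestHandler):
    def do_GET(self):
        # The browser volunteers everything the tracker needs:
        page_visited = self.headers.get('Referer')  # the page that embedded us
        cookie = self.headers.get('Cookie')         # a stable cross-site ID
        ip = self.client_address[0]                 # rough location/identity
        print(f'{ip} with cookie {cookie} visited {page_visited}')
        self.send_response(200)
        self.send_header('Content-Type', 'image/gif')
        # Set a long-lived ID so this browser is recognized everywhere.
        self.send_header('Set-Cookie', 'track_id=abc123; Max-Age=31536000')
        self.end_headers()
        self.wfile.write(PIXEL)

# Any site that embeds <img src="https://tracker.example/pixel.gif">
# reports its visitors here.
if __name__ == '__main__':
    HTTPServer(('', 8000), Tracker).serve_forever()
```

Every site that embeds that one-pixel image reports its visitors, via the Referer header and the tracking cookie, back to a single server, which is how one tracker assembles a cross-site log of the pages you visit.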
And so we're asking, if you collect behavioral data, to collect less of it: really turn a hard eye towards what do we actually need, what actually improves the product for our users, and slice out whatever isn't absolutely necessary. I can definitely say firsthand that a lot of data gets collected purely out of inertia, because somebody started collecting it and it didn't turn out to be useful, but nobody wants to be the one who flips the switch to turn it off. So we're asking people to be courageous and flip that switch.

The other thing we're asking is to offer a real opt-out. A lot of companies say, oh, you can opt out of our tracking, but often what they really mean is that you can change it so ads won't be targeted based on it. So you can't tell you're being tracked, but the data is still collected, it's still on the servers, and it's still subject to what I think would be an overbroad subpoena by the government.

But you don't want to be in the situation of saying, well, we have all that data, but we're going to have to fight you in court because we don't want to give it to you. Much better to say, sorry, we don't have the data. And that kind of user control is important generally, because you're going to have a big spectrum of how concerned people are about their data. Besides what might happen in the next four years, you can definitely see that there are vulnerable groups who are more cautious about what they're doing, and for them to really enjoy and use your product, you have to play to the fact that they may now trust you a little bit less and they want to have that kind of autonomy. So being able to go into a system and turn off some settings and say, okay, now I feel much more comfortable with the idea of sharing the information that I want to share, rather than having you deduce what you want to deduce.

So continuing in this line of thinking about what and why you're collecting, the next one we had is just: delete those logs, right? So this isn't about extrapolating information; this is about the fire hose of data that many companies end up collecting on their users when they're using their service. Are there any constraints on what people collect these days?

Not really. I mean, most companies take an attitude of collect it all, we'll figure it out later. A bit like Pokémon. Yeah, exactly. And I think we're seeing that that's actually a harmful approach, and that it's important to be much more thoughtful about the data that you collect and how long you keep it and whether you really need it. And in a lot of places that log collection goes relatively unnoticed; there's no one whose job it is to say, okay, here are the categories we collect, here's how long we retain them, and here's our plan to cut that down. So for anyone working at a tech company, there's a lot of reward in really drilling down and looking at: okay, what are these logs? What's actually in them? What do we use them for? Do we need them? Are they worth the risk to our users? And importantly, how long do we retain them?

So everybody should have a log retention policy, and it should be as short as you can make it. 90 days is kind of a good baseline for data that's not too terribly sensitive. But some data has sensitive identifiers, and I think IP addresses are actually one of these: when the government comes knocking on your door for data, often what they want is a set of IP addresses to further track down users in meatspace.
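As a concrete illustration of that point, here's a minimal sketch, using Python's standard ipaddress module, of the kind of coarsening a logging layer can apply before an address ever hits disk. The /24 and /48 prefix choices below are just examples, not a recommendation from the blog post:

```python
# Sketch: log the network a request came from, not the exact user.
# Assumes IPv4 /24 and IPv6 /48 are coarse enough for your debugging
# needs; pick prefixes that fit your own traffic.
import ipaddress

def truncate_ip(addr: str) -> str:
    """Zero out the host bits so the log can't single out one user."""
    ip = ipaddress.ip_address(addr)
    prefix = 24 if ip.version == 4 else 48
    network = ipaddress.ip_network(f'{addr}/{prefix}', strict=False)
    return str(network)

print(truncate_ip('203.0.113.42'))    # -> 203.0.113.0/24
print(truncate_ip('2001:db8::1234'))  # -> 2001:db8::/48
```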
And so, you know, separating out IP addresses into different logs and rotating those on a much shorter basis, like seven days, is, I think, an important safety measure for users.

Right, and taking an overall view of what you're collecting allows you to parcel off some of this stuff. The way I've tried to argue it, because my sensation when you talk to people is that the instinctive thing is to be a pack rat: collect this, we don't know, we might have to debug something that happened six months ago, or this information might come in useful if the business model pivots, or something like that. But one of the ways I argue about it is that in the same way you can build up technical debt by just slamming code in as fast as you can, I feel like you build up a sort of legal debt here. Because quite apart from these models we're describing, of mass government access to data, it puts you in a very vulnerable position with subpoenas in civil cases as well. If you collect all of this data, you end up being a sort of legal honeypot, because everybody wants access to it.

Right, yeah. And so you potentially increase not only the risk of being required to give up data on your users that shouldn't be given, but also your legal expenses, getting tied up in court battles. And the one thing I wanna clarify, too, on the retention of IP data: one of the areas I've worked in is authentication and user safety, and to some extent you do wanna retain a history of, okay, where has this user come from in the past? But that doesn't need to be at the high level of detail of an IP address. You can say, okay, they logged in from this subnet or from this geographical region, and we're gonna retain that, rather than: here is how to find that exact user.

So, we talk a lot about encryption at EFF, practically every day before we have breakfast, but there were a couple of points that you made in the blog post, you and Erica, that were specifically about encryption. The first one was ensuring that there's encryption when your data moves, when it's in transit. I'm assuming that's TLS that sorts this out.

Yeah, HTTPS is how I usually like to talk about it, because that's what you usually see: the HTTPS. And also that name hasn't changed in 15 years, whereas the TLS and SSL versions change every six weeks, right? So yeah, okay. Is that enough in these situations?

You know, it's not enough on its own; it's a layered defense, because it's protecting against a different type of attack. What we've been talking about so far is the government coming in with legal process and saying, give us all your data. What HTTPS protects against is the government going around the law and using bulk, and we think illegal, surveillance to collect that data off the wire. So if you as a user visit a page that doesn't have HTTPS, the NSA definitely has the ability to read that content. And this is an issue EFF has been advocating on for seven-plus years now: to get everybody, every website, to adopt HTTPS. And there's been a lot of progress. Most of the big social media companies have adopted HTTPS. Most of the email companies have adopted HTTPS. But it really needs to be everybody. And so this is a good ask for some of the smaller companies, some of the startups: make sure, okay, not only is your main page HTTPS, but every little subdomain you have. You kind of need to lock it all down to be safe from that spying.
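For a small site, that lockdown can be as simple as a redirect plus an HSTS header. Here's a minimal sketch using Flask, which is just our example framework; in practice this usually lives in the web server or load balancer in front of the application:

```python
# Sketch: force HTTPS site-wide (pip install flask). Flask is only
# an illustrative choice; most deployments do this at the proxy layer.
from flask import Flask, request, redirect

app = Flask(__name__)

@app.before_request
def force_https():
    # Redirect any plaintext request to its HTTPS equivalent.
    if not request.is_secure:
        return redirect(request.url.replace('http://', 'https://', 1),
                        code=301)

@app.after_request
def set_hsts(response):
    # HSTS: the browser remembers to use HTTPS for a year, for this
    # host and every subdomain, so later requests never touch plaintext.
    response.headers['Strict-Transport-Security'] = (
        'max-age=31536000; includeSubDomains')
    return response

@app.route('/')
def index():
    return 'hello over HTTPS'
```

The includeSubDomains directive is what covers those little subdomains: once a browser has seen the header, it refuses to speak plaintext HTTP to any host under the domain for a year.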
And one of the things I've noticed, I mean, there's this argument of safety in numbers. One of the members of the Trump administration has already gone on the record saying that they feel the use of encryption is a red flag. And I think it's useful for everybody to turn on encryption to remove that stigma, as it were, right? I mean, encryption isn't a red flag. It's just a way of protecting your private data.

Right. And it's a way of protecting not only everybody's private data, but national infrastructure. One of the more interesting attacks we've seen based on not having HTTPS in place was China's Great Cannon attack, which injected JavaScript code into a number of websites in order to create a massive DDoS attack against GitHub.

And going back to what we were discussing about behavioral analysis, I also don't think people think about how the NSA can conduct that kind of thing en masse. I mean, they don't need tracking cookies, right? They can see everything that's unencrypted, including tracking cookies. And we saw evidence in the Snowden revelations that cookies were one of the signals they could use both to de-anonymize people and to collect together their practices and extrapolate from that. So it's important to use encryption to grant a sort of herd immunity to the whole of the internet ecosystem.

Okay, and finally, there's enabling end-to-end encryption by default. Now, we were talking about this earlier, and you said this is specifically about messaging, right?

Yeah. So if you're running a service where users are sending each other messages, you have the ability to take yourself out of the loop and say: even if somebody comes to us with an illegitimate request for our users' messages, we don't have them. That's end-to-end encryption. HTTPS is in-transit encryption: the data is encrypted between the individual's computer and the web server, but the web server decrypts that data, usually logs it and stores it, and so it's ripe for government access. With end-to-end encryption, instead of encrypting the data to go to a server, you encrypt the data to go all the way to the other person you're talking to, or the other people you're talking to. So all the service sees is a bundle of encrypted text.

Now, there's been some real progress in end-to-end encryption in the messaging world, and you see it from Signal to WhatsApp to Google products to new products like Wire, which I think is made by the old Skype team. Sometimes we sit and review these products, and I know that sometimes we're a little down on just being able to turn that encryption on or off, right? Can you explain why that is?

Yeah, so end-to-end encryption has been around for a long time, since the '90s, with PGP. And even though PGP has been around for a long time, not many people use it. A big part of the problem is that it's really error-prone. You can think you're using PGP and forget to do the right actions to actually encrypt your message, and accidentally send something really private and sensitive totally unencrypted. So long, hard experience building these tools has shown that if it's something you have to remember to turn on or off, you're gonna mess it up, and you're gonna forget to turn it on when you most need it. And the technology is good enough and easy enough these days that you mostly don't need to: you can just leave it on. There are definitely apps, like Signal and WhatsApp, that default to encrypting all the time.
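The core idea is simple enough to show in a few lines. Here's a minimal sketch using the PyNaCl library, one library choice among many; real messengers like Signal layer key verification and forward secrecy (the double ratchet) on top of this basic operation:

```python
# Sketch of the end-to-end idea using PyNaCl (pip install pynacl).
# Real messengers add key verification and forward secrecy; this just
# shows who can, and cannot, decrypt.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b'meet at the usual place')

# The server relaying this message sees only random-looking bytes; it
# holds neither private key, so it cannot decrypt or log the plaintext.

# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_key, alice_key.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet at the usual place'
```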
And so we think it's great that companies are building strong end-to-end encryption apps; we absolutely need that. And those apps should default to encrypting all the time if they think that's important for their users' privacy.

Right, right. I mean, I've definitely seen that some messaging apps also default to internally logging. I've discovered that what I thought were relatively private conversations are actually now searchable in my Gmail. And that's disturbing, because you have, I think, a different idea of how ephemeral messages are compared to email. But I also feel like the group who should be most sensitive to the fact that collecting all of this data might have consequences is actually politicians at this point, because we've seen so many scandals attached to the dumping of emails or the legal subpoenaing of emails. I think part of what we're gonna do in the next few years is try to really convey, not just to the general public but also to politicians and people in power, that this isn't just about protecting civil liberties for everyone else; it's about protecting their ability to do their work in an often very oppositional environment.

All right, so we've gone through the shortlist that we had. Dave Maass, who's just off screen here, has been very assiduously collecting all of your questions, and I'm just gonna ask him to pass them through. It's like I should have a little envelope. Okay, so here's a pretty good one. Let's start with something about the last thing we talked about. We have a question here that says: end-to-end encryption means companies cannot add very much value beyond being a pipe. Thoughts?

So I think that rests on the notion that all the value has to come from them analyzing your data on their servers. Actually, the fact that there are so many different end-to-end encrypted apps being developed right now shows that people do believe there can be a lot of value built into the app, into what the user sees, without sharing that data with any servers. So you can make things easier to use, more user-friendly; you can add features like stickers and cool stuff. I think one of the examples we're really talking about is Google's Allo, where they say, we're gonna build an AI around your chat so we can save you some keystrokes and tell you what your reply is gonna be. And that's an interesting, neat idea, but I think it has to be weighed against real risks to users' data, and also it can be built more locally. AI based on just what's on your phone is not gonna learn as fast as if Google's trying to mash everybody's data into one big model, but you can still build really compelling features locally.

And there's this sort of interesting trend as well. I remember the day that Android announced they were building speech recognition in locally, so it worked even when you were offline. And I know that Apple has been working on a lot of ideas where the processing goes on under your own control, on your own device, and that gives you that kind of feeling of security that you're not just splurging data across the internet. And it seems to be something that still hasn't been fully explored.

So I have another question on the end-to-end part of our selection of to-dos. Could you comment on the usability of end-to-end encryption apps?

Yeah, so this is one of the reasons I'm really enthusiastic about end-to-end encryption these days: in the last few years we've seen a renaissance in the usability of encryption.
People have really given a lot of thought to what people want from chat and what keeps them away from awkward encryption tools like PGP, and so things like Signal and WhatsApp have a lot of the features people want. They have multi-user chat. They have multi-device chat: you can get a message on your tablet and on your phone, but it's still end-to-end encrypted between everyone in that conversation. So I think it's getting a lot better. There's still a lot to be done, but the technology is really ready, and people should be using it.

Yeah, and I also find it interesting how some of the side issues that people don't consider that much about encryption usability, things like how do you convey keys, how do you convey fingerprints, are seeing a lot of really active development. Really encouraging.

So this is a question where I think I already know the answer. Someone asks: is DRM the answer? If you ask that question, the answer is usually no. Own your data and revoke access from Facebook, Twitter, Google, et cetera.

Yeah, I think this may be a different meaning of digital rights management than we usually interpret, but getting at what I think is the underlying question, which is pulling your data away from these companies and saying, okay, I'm only gonna do stuff that's on my computer, I'm not gonna use any of these services that don't respect my privacy: I think that's definitely an option, but increasingly it means opting out of civil society and choosing not to speak out and have your voice heard. And I think that's a bad trade-off, and instead we need to use those voices to demand that the services we use respect our rights.

And we've talked a lot about technology in this conversation, but I think it's worth emphasizing that technology can't do it all. I mean, if you imagined a perfect way of locking down data so that someone could only have it with your permission, they could still copy it. They could still run off and do whatever they wanted with it, unless there was a framework, a judicial framework or what have you, to ensure that there were punishments in place for that kind of misuse. So even though we've concentrated on the technology side here, a lot of what we're describing leaves room for companies to also pursue legal defenses.

And I think that's really important for companies: if somebody does come knocking for your data and you think that's unlawful or unconstitutional, you have your day in court. Your users don't necessarily have that, because they don't know this is going on, and you're prohibited from notifying them in some cases. So you have to represent the user, not only in the technology that you build, but also in the courts.

And I want to expand on that a little more. It's not in all cases that you're forbidden from telling users what's going on, and actually that's one of our big requests for companies in our Who Has Your Back report: anytime somebody comes asking for your users' data, to the full extent that you can, notify those users and give them a chance to fight back on their own behalf.

Right. I mean, a lot of what we're talking about here is in response to people's concerns. And I think those concerns are genuine, but we really don't know what's going to happen in the next few months and years.
One of the ways that we can tell whether people's fears are being realized, or whether we're actually doing better than we thought, is through transparency. If we know how many requests are going to companies, if we know what's happening behind the scenes, that gives everybody a chance to either feel more confident about using the services that they use, or to make their own preparations or behavioral changes to feel safer in their actions.

Okay, we have one more question here, and then I think we'll wrap up. Does EFF provide assistance to privacy-advocate coders in need of business creation and regulatory advice? If not, where should they go?

That's a good question. I actually don't think I have the answer to that. Do you?

I think you should contact us. If you need advice in this area, particularly if you're a small startup, but also if you're someone inside these organizations, drop us a note at info@eff.org. We're always happy to have conversations with people. And we also realize that there are details and complexities to implementing any of these ideas; it's not as straightforward as we sometimes make out, because we have to concentrate all of this together into a pithy phrase. But we're happy to help, and we're happy to put people in touch with people who've gone through these experiences and want to do it at other companies too. So by all means get in touch, or keep reading our blog, Deeplinks, where you'll be able to see Erica Portnoy and Jacob's blog post going into some of the details about this. And become a member. One of the great ways to find out everything we're doing, in small, reasonable dribs and drabs, is becoming a member, donating, and signing up for all of our materials.

So, Dave, unless there's anything else to cover. Well, oh, okay. I'll do one more. This is our bonus one. Would export regulations on crypto force users to use weaker ciphers?

So this is an interesting question. This was the case in the late '90s and early aughts, and it was something that EFF very specifically fought against, because we saw that it was weakening security for everybody, both within the US and abroad. And fortunately we won that fight, so currently there aren't significant export controls that restrict people to using weak ciphers. But what we've seen, even 15, 16 years on, is that the decisions made under the shadow of those export controls led to protocols that were weaker than they needed to be. We're still fixing those bugs, and they're still causing serious problems that cost industry money to fix and put people and infrastructure at risk. So we're hoping there's no push for more export controls, and we'll definitely fight it if that's on the table.

Yeah, we're definitely anticipating more pushes on crypto and backdoors, and we'll continue fighting against that, and it would be great if you could join us in that fight.

So to close, I just wanna say to anybody who's listening: never doubt that you can make a difference. You may feel alone, you may feel like it's too much work, but just start the work wherever you can start it and keep at it. And I think you can really make things a lot better for the people who use your services.

Yeah, and if nothing else, start the conversation within your organization, because you might be surprised by how many people are actually thinking along the same lines as you.

All right, thanks very much, and we'll see you next time on Facebook Live, or maybe on another platform.