Good morning everyone. My name is Seeta Peña Gangadharan. I'm a senior research fellow with New America's Open Technology Institute. For the past three and a half years I've spent a great deal of time working with groups and researchers on the topic of the digital divide. And today's discussion, entitled "Is Cybersecurity the Next Digital Divide?", will have us thinking about the concept of cybersecurity in a more everyday context. What does the common person experience and think about in relation to digital safety and security? It's not often, as Anne-Marie Slaughter was mentioning, that we use this term in relation to the potential for misuse of or access to information, my information, as it transits from one person to another. In addition to thinking about the common person, we'll spend some time thinking about society's most marginalized members, people who lack access not just to technology but to many basic needs. And we're going to do that by engaging three panelists who have thought long and hard about what it means to be secure, how to engineer or design for security, and what's at stake. Joining us are Tara Whalen, staff privacy analyst at Google and non-resident fellow at the Stanford Center for Internet and Society; Seda Gürses, a postdoctoral fellow at New York University; and Daniel Kahn Gillmor, technology fellow with the ACLU's Speech, Privacy, and Technology Project. All of you are computer scientists by training and have been involved in policy debates about security and privacy. So I want to dive right in. As I mentioned, I've spent a lot of time working on issues of the digital divide, looking at the long-term unemployed, recipients of public assistance, typically older adults, perhaps individuals who have limited English speaking skills or low levels of literacy, and low access to the Internet.
For example, the National Telecommunications and Information Administration reported last year that 30% of households in America still do not have access to the Internet, access to high-speed broadband. Are these individuals, who are on the quote-unquote wrong side of the digital divide, more secure because they are not connected to digital services or digital infrastructure? Well, access to the Internet and broadband is only one piece of the puzzle in terms of connection to the digital infrastructure. Many of the people in these households most likely have mobile phones, and certainly surveillance can take place on the mobile phone network as well as on the Internet. So in terms of people being more safe because they don't have Internet access, there's certainly no guarantee there. And for the populations that you mentioned, people whose conditions of employment or other demands on their time often mean that a mobile phone that has to be on all the time, providing some level of tracking and raising other surveillance concerns, is something they simply have to submit to in order to go about their everyday lives. So the lack of access to the Internet itself doesn't provide any sort of security guarantee for those people. Seda or Tara? I could add that being connected helps people who want to educate themselves about security, who would like to build a group that could give them information about security, build those kinds of groups. So not only are you less able to put information online yourself, while information about you will still be put online by other sources, it is also harder for you to become engaged with the broader community, which doesn't help your security either.
Maybe it's interesting, in addition to the digital divide, to make a distinction between a surveillance divide and a privacy divide, in the sense that some communities are more likely to be subject to surveillance, whether through their devices or through surveillance of their communities by CCTV cameras or the police. And I think we know from studies that women are also more likely to be subject to surveillance or harassment online. So there's a divide as to what surveillance means to different communities. And there's a second divide, a privacy divide, in the sense of who has access to an understanding of what it means to protect their privacy and to claim their rights with respect to privacy. And I don't think these groups necessarily overlap. So we'll come back to that idea of your community connections and security, but I actually want to ask if you can describe to me what it means to be secure. If I'm walking into a public library and speaking to a group of people who haven't accessed technology very frequently or on their own terms, what does it mean? So there are some basic things that you would like to have for communications security: making sure that your communication is only readable by the person to whom you're sending it, being able to act anonymously, without identifying yourself, should you need to, and being able to be part of communities that aren't under direct surveillance by an adversary. So all of these are ways to think about securing your communications and the communities that you live in, not just the individuals but also the communities.
Technologically, that ends up meaning tools that provide encryption and tools that provide anonymity services, but it also has to do with behavioral patterns, and with thinking about where the different forms of surveillance, like the ones Seda mentioned, show up, including the pieces of surveillance you may not be thinking of. Is that your same take on what technical security means? Well, the ACLU has done some great work to show that the phones we use are absolutely insecure, and that there's been a great failure in the market: none of the parties responsible for getting the phones to us make sure that they're secure rather than just making us vulnerable. And the fact that, as you said, for a lot of communities their only access to the Internet is going to be through their phones actually makes them more vulnerable to the kinds of security weaknesses that are embedded in our current information systems. But I think we need to take a wider look at what it means to be informationally secure. One thing is to make sure that the data that emanates from the individual is somehow secured, either on their phone or in their communications, as you talked about: possibly using encryption, making sure there are no eavesdroppers, and anonymity, meaning that they can use services without necessarily identifying themselves. But I think we need to go beyond that. Data breaches are also a matter of technical security, and companies whose databases have been breached should be reporting back and letting individuals know what the breach means for them now. And I think there are serious concerns that some of the new information-sharing legislation could remove some of that liability, and about what the impact of that will be on these communities.
I think informational security is also being informed about how your data is collected and having the choice to use services without having your information collected. And it's also a lot about how information is used to profile individuals or to design their environment. What we see right now is a lot of data mining, with data being treated as an avenue to truth and as a way of making decisions in policymaking; data becomes the lens through which we look at the world. But we know that, especially for communities that don't have good representation in these data sets, the impact of data mining on them can be very different than on communities where we have a better understanding of what the data points are, what they mean, what they stand for. So there's a disparate impact of data mining and profiling on different communities that we're not even able to properly articulate yet. And I think this kind of discrimination through data mining is also part of technical security. Just to add a little onto what Seda said about the impact on community: you talked about people who are unemployed. Information security then applies to a broader level of security in people's lives, things like job security and physical security, because the information that's revealed about you through your communications, which may be things you put on a social network that you didn't know how to configure so that only the groups you wanted to see certain information could see it, can go beyond the information sphere into your broader life. That, again, has a strong impact on someone who's in a marginalized community. So it sounds like what you're talking about is that technical security is really not a sufficient way to think about security among vulnerable communities. It's a precondition.
Having devices that are absolutely insecure, which our phones, I can say, are, and we can talk about that in detail later, is a bad precondition for having anything above that. So it's a precondition. So let's actually talk about that now. What needs to happen to the technologies, the devices themselves? What constitutes a secure mobile phone? I don't know that we have one yet. So in your ideal world, what does it look like? Well, the issue is not just the device itself, but the network that it connects to. We can't just give a community a secure handset: if that handset is connecting to a broader infrastructure that itself enables all kinds of tracking opportunities, metadata collection, and potentially content collection, then it doesn't matter what the device is. So some of these questions ultimately need to be addressed at an infrastructural level. It's not enough to say, well, we can build this one tool. We need to say: if we want people to be able to have secure communications, then the fundamental networks, the protocols those networks use, and the ways that devices talk to each other online need to be secured. They need to provide people with the ability to have confidential communications and the ability to operate privately on the networks. Because building that kind of all-seeing analysis into these tools actually makes everyone involved in the network massively insecure. And this is well understood within the engineering community: it's not possible to engineer broad-scale surveillance mechanisms that allow just the good guys to surveil. You simply can't build the protocols.
You can't build the networks in such a way that they allow access to one group of people that we think are the good guys and simultaneously keep out the actors that we might think are nefarious. To make these things work, we actually need networks that have security built in at the level at which we define the network and the protocol stack. So that's interesting, because when I've thought about the question of making cybersecurity more accessible to members of low-income communities or vulnerable populations, the thing that immediately comes to mind is the question of usability. I have spent time in the field observing people in the classroom, usually older adults, again someone with limited English language skills, someone who spends at least three classes literally trying to figure out how to drag the mouse from one side of the computer screen to the other. So that's the first bit. The second bit, usually the last five weeks of the class, is spent understanding what in the world a username and a password are. And so what I've seen is this complete cognitive dissonance as to what it means to have an identity online. People are definitely choosing insecure passwords, something that's easy to remember. And if you have low literacy skills or limited English skills, you are going to pick something that is much easier to remember, and that a computer could decipher quite easily. You're more than likely sharing your password and username with other individuals, because you've not done this before. And so usability, I mean, it seems like an obvious thing to really focus on. And I guess I'm hearing that that's not enough. I think we shouldn't pit usability and secure infrastructure against one another. I think we need them both. So for me to say that the infrastructure needs to be built in a secure way is not at all to say that we should discard usability.
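[Editor's note: the point above, that a password easy for a person to remember is easy for a computer to decipher, can be made concrete. This sketch is an editorial illustration, not from the panel; the wordlist and the "breached" hash are invented for the example. A dictionary attack simply hashes common guesses until one matches a stolen hash:]

```python
import hashlib

# A tiny invented wordlist; real attackers use lists of millions of
# common passwords, so a memorable word falls almost instantly.
wordlist = ["password", "123456", "sunshine", "letmein", "qwerty"]

# Pretend this unsalted hash leaked in a data breach.
stolen_hash = hashlib.sha256(b"letmein").hexdigest()

def crack(hash_hex, candidates):
    # Hash each guess and compare against the stolen hash.
    for word in candidates:
        if hashlib.sha256(word.encode()).hexdigest() == hash_hex:
            return word
    return None

print(crack(stolen_hash, wordlist))  # prints "letmein"
```

The same loop scaled to a large wordlist is why "easy to remember" so often means "easy to guess", and why systems should salt and stretch stored passwords rather than leave the whole burden on the user's choice of word.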
I agree that this is a critical concern. But we have usable tools, like mobile phones, that people understand and learn how to use, even people with low technological literacy, and that work in an insecure way. So usability doesn't by itself solve the security problem. Interesting. Seda or Tara? Well, I've spent a lot of time in usability and security, so this is a subject near and dear to my heart. Usability is fundamentally important, and obviously I would agree with Daniel that the two shouldn't be in opposition. Sometimes it comes down to a matter of priorities as to which things we focus on. These are very hard problems. Some of the issues we're still grappling with: we have a large number of users with various backgrounds and levels of expertise; we have people who have disabilities; we have questions around age and literacy. All of these issues come into how well we are serving our user populations. We're working on it. I've heard a lot more discussion of this in the last five years or so, so I'm hoping there are more people prepared to work on this issue, to work on the research side, to put money into initiatives. There have been a few recently: things like Simply Secure came out, and I think Reset the Net did something recently. They put together a set of tools that were supposedly easier for people to use. It was an effort, shall we say, to give people a set of tools they had identified as easier to use, so people didn't have to go out into the world and figure out all these things themselves. So I'm hoping we crack some of these problems, but they are difficult.
Even something like the SSL certificate, to pick a maybe not-so-simple but particular example, has been an issue: what do people understand when things break? It's difficult to explain the nuances. Is your communication at risk? We're not sure. What went wrong? We're not entirely sure. How much information do we give you so that you can make an informed decision? These are difficult problems, and there have been incremental steps towards improving things, but we still haven't cracked the hardest part. Ideally you wouldn't end up in a situation where the person has to make this decision at all, but we all know that systems are imperfect and they break, and so we need to be able to support people in those situations when things break down. There's a very hard word to pronounce that really helps to analyze this problem we're speaking about, and it's called responsibilization. We can do a competition. You have to unpack that one. I will unpack that. In very short description, it's about encouraging individuals to manage their risk themselves, and we're increasingly asking individuals to do so. This comes as a result of organizations, companies, and governments streamlining their processes, most likely through digital information systems, which introduce certain new risks; but these risks are not taken on by the organizations, they are externalized to the individual users. So what we're doing, for example, is collecting a lot of data and externalizing all the risks associated with that to the user, saying: well, if you didn't want to be part of this risk, you should have protected yourself. We're pushing a lot of responsibility onto the user, saying that if there are risks coming in your direction as a result of this new information technology, you are responsible for protecting yourself from them. This is very problematic, of course.
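[Editor's note: for readers curious where the certificate decisions Tara describes actually live in software, here is a small editorial sketch, not from the panel, using Python's standard `ssl` module. A default client context refuses connections whose certificate chain or hostname does not check out; the usability problem arises when that refusal is surfaced to the user as a warning to click through:]

```python
import ssl

# A default client-side TLS context, roughly what a careful client uses.
ctx = ssl.create_default_context()

# By default the certificate chain must validate against trusted roots...
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
# ...and the certificate must match the hostname being contacted.
print(ctx.check_hostname)                    # True

# Disabling these checks, which is effectively what "click through the
# warning" flows do, silently re-opens the door to man-in-the-middle
# interception. (check_hostname must be disabled before verify_mode.)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
```

This is responsibilization in miniature: the safe behavior is the library default, but the moment verification fails, the decision, and the risk, is handed to the person in front of the screen.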
I think we've done projects in the past that say: instead of burdening users with protecting their privacy, we should ask phone companies, or whoever is making the phones we're using, to give them secure phones. We should make sure that the network is secured in a way that your communications cannot be eavesdropped on, whether by a third party like the government or by an intimate partner who might be violent towards you. And in the case of usernames, I think there are a lot of sites asking for a username and password when they don't need to. You could just use those systems anonymously, without giving this information, but instead we're pushing people to sign in and be uniquely identified to the service, incurring more risk. And in some cases, you want to be logging in and securing your communication with the organization that's giving you services, but they're not securing their services well themselves. They're asking you security questions like "What is your mother's maiden name?", which is usually public information, and then saying that users are responsible for not keeping their mother's maiden name private, which is again burdening the user with bad security design. And I think there's a lot to unpack there, which is not just about usability. So I want to come back to a theme that Daniel mentioned earlier. I'm hearing that there's a shared responsibility here, and you had earlier pointed to the idea of a community, that we shouldn't be thinking only about individual security, that a community is part of the process. I'm wondering what that looks like, or what that entails, in both the work that you've done as a developer of open source tools and in your work at the ACLU.
So there are many different ways a community's security can be affected by the tools and the communications systems that it uses. There are at least two different ways I'd like to answer the question, and I'll try to be brief. One is that for a tool to be developed in a way that benefits the users, the people developing the tool need to be engaged with the user base: the user base needs to give feedback to the tool developers, and the tool developers can improve the tool and make sure that it meets the users' needs better. How do you establish those communication channels and encourage people to contribute in those ways to the tools they rely on? That's a tough question, and I think we need more people working on opening those channels and on valuing that kind of feedback. The other is that there's a way you can do surveillance of a community that doesn't amount to surveillance of any one individual, which raises a separate question about how we secure a community. We need to think about the ways that communities of marginalized people communicate. For example, LGBT communities in places that have homophobic laws or a homophobic culture have ways of communicating with each other. Rather than surveilling any one individual, you can focus surveillance on the community itself and build up information based on the communications patterns of the community as a whole. So whether or not any one individual within that community has protected their information, the fact that they're participating in a community conversation highlights them as a potential target, and that itself is a risk. So those are the two ways I wanted to make sure the community aspect gets brought into the conversation.
So that suggests that we need a broader base of people using secure technologies. And I want a reality check as to where we're at, because I heard you say something about hypotheticals, and Tara, you also mentioned that we have a lot to do. So what's the state of the market with regards to secure technologies? Let's set aside the question of vulnerable populations for a second and just consider the broad base of consumers: how many do use these encryption tools or anonymizing tools, tools that keep both the individual and the community secure? What are we looking at here? It's interesting that at one level the user community is massive, because there is already an infrastructure, even if it may be imperfect, that has a large amount of encryption deployed. There are protocol levels at which these things are rolled out every day, and much of this we don't necessarily see. It's not the same as deciding you're going to download a particular tool to add another level of encryption to your instant messages, to do off-the-record messaging, or to use a particular anonymization tool; you're already sort of embedded in it. So at one level, a whole lot of people are already using it. That's probably not the group you're talking about, but we do need to remember there is already a large population taking advantage of these tools, who may or may not realize the degree to which they are using them. I don't have a good read on who is using the other tools, the ones that are a little more off the beaten path. There are people who have had an incident happen to them, who may suddenly decide that this is something they need to do. There may be people who are part of larger communities that have brought this discourse forward, saying that something you need to be doing is taking more care with your communications.
I think in those groups we're not seeing the diversity that you might see in the broader community I mentioned earlier. Look at some of the developer communities, where there's volunteer labor, for example: if the way you hear about these tools is by being involved in a developer community, the diversity in these groups is not particularly large. Daniel may want to add more to this. The numbers are pretty low; among those participating, for example, the number of women is low. Anyone who is in a group in which they are marginalized tends not to have an excess of resources to participate in a free-labor project. If you're someone who has multiple jobs, or someone who is taking care of children, you may not have the ability to decide that you're going to sit down and dedicate a few more hours a week to developing a tool. Yet these are exactly the sort of people developers need to be speaking with. So trying to bridge that gap will be an interesting challenge if we actually want to hear from the users, and not just from the people who feel they know what the users want. You do have to involve people: participatory design requires that you design with people and not just for people. And so I'm intrigued to see how we might bridge that gap. Daniel, how good or bad is it? In terms of the diversity within the developer community, it's terrible. And with the user community, well, the thing about looking at the user community, particularly for privacy-preserving tools, is that the users of privacy-preserving tools often don't want to identify themselves, because they're interested in preserving their privacy. So there's a bit of a chicken-and-egg problem in actually determining that, and developers who build tools that do actually want to preserve the privacy of their users probably don't collect a ton of information.
So that one is hard to answer. I suspect that the numbers are relatively low; certainly, compared to the amount of network usage overall, it's clear that the numbers are low. So maybe it's good to distinguish three types of encryption use that are out there right now. One is what we popularly know as HTTPS, the lock icon in your browser, which protects the communication between you, or your device, and the service provider you're speaking to. That protects against man-in-the-middle attacks; it's important to talk about who the "man" is and where they're positioned. The next type, increasingly used on phones and tablets, concerns man-at-the-end attacks, and the "man at the end" is us: that's when companies use encryption to put controls over what we can do with the devices we're using. Man-at-the-end encryption use is quite popular and getting more popular, and encryption against the man in the middle, towards the service, is getting more popular too, due to increased privacy concerns. And then there's a third type, which is what you were talking about with the developer communities and free software, with the lack of diversity and the minuscule numbers of users, and that's what I will for now call end-to-end encryption. OK, so those are the three: man-at-the-end, man-in-the-middle, and end-to-end. It's not a perfect classification, but let's try it. And what happened in the last two months, which is rather worrying, is that we had a number of government officials speak against end-to-end encryption and its possible popularization through companies applying end-to-end to a wider user base. Apple said they would provide a quasi-end-to-end implementation to their users in iMessage, Google started developing something that we haven't yet seen deployed, and Facebook said they would integrate it into WhatsApp.
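[Editor's note: to make the end-to-end category above concrete, here is an editorial sketch, not from the panel, of the core idea: the two endpoints agree on a key that the service carrying their messages never learns. This toy uses small-integer Diffie-Hellman and XOR, which illustrate the shape of the protocol but are nothing like the vetted primitives, such as X25519 with authenticated encryption, that a real messenger would use:]

```python
import hashlib
import secrets

# Public parameters; an eavesdropper (or the messaging service) sees these.
P = 2**127 - 1   # a Mersenne prime; toy-sized, real systems use vetted groups/curves
G = 3

# Each endpoint keeps a private value and publishes only G^x mod P.
alice_secret = secrets.randbelow(P - 2) + 2
bob_secret = secrets.randbelow(P - 2) + 2
alice_public = pow(G, alice_secret, P)
bob_public = pow(G, bob_secret, P)

# Both sides derive the same key; the carrier only ever saw the public values.
alice_key = hashlib.sha256(str(pow(bob_public, alice_secret, P)).encode()).digest()
bob_key = hashlib.sha256(str(pow(alice_public, bob_secret, P)).encode()).digest()
assert alice_key == bob_key

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Toy stand-in for a real cipher: XOR the data against the derived key.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

ciphertext = xor_cipher(alice_key, b"meet at the library")
assert xor_cipher(bob_key, ciphertext) == b"meet at the library"
```

The contrast with the other two categories is in who holds the key: with plain HTTPS the service provider can read the message once it arrives, and with man-at-the-end encryption the key is deliberately withheld from the device's owner. Here only the two endpoints ever hold it, which is exactly what makes the category contentious for law enforcement.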
And we saw government officials react very allergically, saying that this would mean that law enforcement would not be able to do its job. Cameron said we need to ban encryption, which was undifferentiated: I think it would mean also banning the encryption that protects against man-at-the-end and man-in-the-middle attacks, and it was not well received. And Obama said something similar, maybe even stronger: he said companies would be liable if, because of the use of end-to-end encryption, it turned out that an attack had happened or somebody had been harmed, which in my opinion sends the message to companies that they should not implement these technologies. So I think there's a whole economy of where encryption gets applied and where it's encouraged and discouraged. We would like to see end-to-end encouraged, and one way we could do that is to have organizations with large user bases implement it properly, not like iMessage, but that's another detail, and make sure that it's available for the privacy of the users. But we haven't seen that happen. Let's maybe talk about that later, because I am actually really interested in the quality of the security that end users are receiving. One thing that has been of concern, particularly in the marginalized communities I've worked with, is that the stuff they use, across the board, doesn't work: it's of low quality. And so I'm wondering, are we at risk of seeing tools developed and deployed that aren't protecting us as much as they should be? And then I'll come back to some larger questions. But from the perspective of those communities, that is a very prominent concern: are you getting what you think you're getting? So: there are very few tools that are widely deployed that provide people with full anonymity, confidentiality, and privacy protection.
There are often gaps in terms of what I would call key management, that is, how you identify the remote parties you're communicating with. There can be gaps around metadata analysis. There can be simply bad encryption: people using encryption mechanisms that we know to be broken or substandard. So I think the communities that you work with are right to be concerned that what they're getting maybe doesn't live up to the level of security that they want. That said, there are tools out there that are a significant step up from where we have been. Seda mentioned HTTPS: three years ago, HTTPS traffic was a small fraction of what was going on on the Internet, and now, if you look at all web traffic, HTTPS traffic is significantly larger than it used to be. Many people who run websites have decided: we need to be doing this, this should be the default, this should be the new standard. Why were we sending clear text, meaning unencrypted communication, across the Internet in the first place? The only thing that does is put our users, and ourselves, the services we run, at risk. This doesn't get all the way to the end-to-end encryption that Seda is pushing forward, but it is a step up, and it does protect users against certain kinds of attacks. Now, there are still failures. I don't know if people in the room heard about the Lenovo Superfish incident last week. That was an attack against HTTPS. Lenovo made it so that anybody who had bought a Lenovo machine and had absent-mindedly clicked yes, yes, yes on all of the licensing agreements, and just to be clear, who here reads all of their licensing agreements? Wow, two people, three people. Okay, so that's very rare. To get three people in a crowd this size: it's usually zero.
So if you clicked the yes, yes, yes and you had a new Lenovo machine, they would actively intercept all of the HTTPS communications going on on that device. HTTPS is getting better and more widely deployed, but there are still attacks that can happen against it. And that attack happened because people took the machine they were given by the vendor and just used it the way everyone normally uses machines, so we need to keep an eye on that kind of situation as well. So, thanks for doing a crowd check. Actually, I'm really curious to see a show of hands in the room: how many people are working directly with vulnerable communities or marginalized populations? We have a few in the back as well. So for the benefit of those who raised their hands in the back of the room, and myself as well: I've heard you talk about usability; I've heard you talk about protocols and infrastructure; I've heard you mention the role of government, both as a barrier and as an opportunity. For those of us who are working with vulnerable communities, what is the greatest opportunity we have ahead to institute more secure technologies? What's going to get us to a place where these tools are easy to use? What should we be hopeful for? I think what you'd be hopeful for is that we can get the beginnings of wider adoption of these tools. We're hearing more from users who are expressing a desire for them, and beginning to hear about the tools they might want, that might be useful to them.
If we can begin to break down some of these barriers, what I'm hoping is that we will hear from voices we didn't hear from before, who will give us actual information about what people need, rather than what we believed they needed, imagined in a vacuum because we didn't actually talk to them or bring them into our groups. So I'm hopeful that there will be more funding for things like this, that more projects will be funded to look into usability issues and to seriously tackle some of these gaps that we have. People are managing very large, complex projects for large user communities on shoestrings. They are very dedicated, very expert personnel who are asked to do a wide variety of very complicated tasks to the best of their ability. They may not have the resources to do serious usability testing, to bring in the people who need to be doing the testing with them. If they had a bit more of that, I would hope the quality of the tools would improve and that they would be more widely disseminated. It would be great to have people who could write documentation, who could provide user support for people who aren't going to jump on an IRC channel when they have a problem but might actually want a closer relationship with someone who can talk them through it. So this is my hope. I recognize these are large challenges, but I am optimistic that we will move closer to that ideal of tools that are available to a wider group of people and give them the security they're actually looking for. OK. Well, I'm going to look at it more structurally, and I'm going to come back to some of the proposals for cybersecurity.
What we see in the cybersecurity strategies, if you look at the research and development strategy and also the executive order, is a move away from securing critical infrastructure to making it resilient. If I can describe that briefly: it says we cannot add security to the networks we have, because security is an add-on that we thought about too late. So instead of relying on security, we should try to make communities or systems or critical infrastructure adaptable to attack. Let's take attacks as a given, data breaches as a given, loss of security and privacy as a given, and try to learn from past mistakes by surveilling everything all the time so that we can recognize when attacks are going to happen in the future. Resilience is, in a sense, a project that stands in for the failure of the state to provide security to its citizens and the people living within its sovereign borders, putting the responsibility, coming back again to responsibility, on individuals and communities to secure themselves or take precautions, and on organizations to secure themselves. That also goes for private entities, companies. So I think in this game, the disenfranchised, more vulnerable communities are going to lose even more, because they already don't have the resources to protect themselves, and now the government is going to come and say, why don't you make yourself a little bit more resilient? The structural point we need to look at here is that very careful move towards resilience, seeing that not everybody is going to have equal resources to make themselves resilient, and maybe thinking about security as something we keep with us and don't just give up on. So I want to frame my response in terms of wins that I think can benefit the entire network. And we desperately need extra security for marginalized communities.
But one concern about trying to provide targeted, extra security support at a technical level to marginalized communities is that it highlights who's active within those communities. So at some level, what we actually need, and this goes back to the argument for infrastructural change and protocol development, is for more people who are not in marginalized communities to use tools that provide the same protections. That effectively sets a baseline expectation that these are the normal tools, the tools to be used. They'll bring in additional funding as a result. They'll bring in a wider user base. They'll bring in more traffic that looks the same on the network as traffic from the marginalized communities. So if one of our goals is better support for the security of marginalized communities and the individuals within them, then everyone actually needs to take up this same set of tools and use them actively, even if you don't particularly feel that you're a member of a threatened group. Great. So we have time for questions, and I'm going to open it up to the floor. I know we have a hashtag where people are potentially joining the conversation; for those of you listening in, it's #NewAmCyber. But let's have a show of hands for questions. Yes, in the back there. One of the most disenfranchised groups in Afghanistan are the women who fight every day for equality. We established the Afghan Trusted Women's Network. We think it's a matter of life and death, not just whether or not you lose your account. There is a secure means by which women can get on the Afghan Trusted Women's Network, through an entirely secure portal, and they can discuss everything from small businesses they're attempting to start to educational issues.
So there are things out there, when it comes to portal technology, that are secure enough for people in a difficult situation, like women and children in Afghanistan, to discuss sensitive issues. And we look at that as a matter of life and death. In some cases, the mere use of technology endangers their lives. So they have to exercise some operational security when they log on, but once they're on that portal, they're very secure. Second anecdote. Yeah, go ahead. Is there a question? I want to be sensitive. Yeah, the question I have is: has the panel considered secure portals for online collaboration for groups that are at risk? And I mean that also in southern Syria, at the subnational level, where we know that people who have sent just simple emails have been intercepted by ISIS and probably been taken away, never to be seen or heard from again. So what is your experience with secure portals as a solution for collaborating securely online? I'm afraid I don't know the architecture of the system you're describing specifically. I'm happy to hear that you're working on projects like that; I think we do need more people trying to build these sorts of tools. One concern I would have, based on just the brief description you gave of secure portals, is that there's probably a large amount of information stored on the servers of these systems. If these communities come under attack or are targeted, the fact that that information is stored in a centralized place makes that particular place a point of vulnerability. And this is one of the externalities I think Seda had mentioned: the administrators of that system might not adequately secure it. I'm not saying that your administrators are not adequately securing it; I certainly hope they are.
But if they fail, because someone has compromised the system, and it's centralized in that way, then all of the people who have participated are put at risk. So that's a concern I would have with a model that relies on a centralized, trusted intermediary to provide that communications network. I saw another hand go up in that general area. Yes. And if you could be sure to ask a question straight off, that'd be great. Yeah, so I remember a couple of years ago that food stamp processing went down for a whole bunch of states. And I'm wondering, it seems like a very practical question here: is the food stamp system as secure as the commercial credit card system? Do we even know? Is anyone checking? The commercial credit card system in the United States is based on things that you can trivially photograph with your mobile phone in a restaurant. I can't speak to the technical security of the food stamp system, but my understanding is that the credit card system in the United States is backed by the legal framework around fraudulent charges, not by the technology of the credit card itself. I was just going to add: we talked earlier about security, and we spent a lot of time talking about confidentiality. We didn't actually mention the availability part. People traditionally talk about the confidentiality-integrity-availability triad, which comes up a lot. And this would be a case of availability: people needing, or depending on, a system in order to eat. Not having that security against an attack, not having that availability, is a place where security failed a group of very vulnerable people. So it's not just confidentiality; availability is something we have to think about as well. One of the sayings in communities focused on economic justice or community development is that systems for the poor are poor systems. Hopefully we'll see that changing in the future. I saw a hand go up here.
Yes, this gentleman in the green. My question is about the topography, or the geography, of how the digital world actually appears. Just thinking about the weather these days: weather doesn't follow county lines, and the Internet obviously doesn't fit within national borders. So how should we look at the Internet, how we connect to it, and how we interact with it? Perhaps just markers by which we change our behavior when we get online, when we use our phones. What are some things to understand, based on how you would describe how the Internet is actually designed, about how we should interact with it? Can you be more specific? How we should interact with the Internet is a big question. No. Well, okay, to be on the more tech... But also brief. Yeah, I guess on the more technical side: I've used Tor servers to watch BBC Internet videos that I can't see because I'm in the US. But my digital identity can be copied, and I can fake it. So I guess that's what I'm more curious about. If this can be done all over the world, and I can appear anywhere in the world instantaneously, how do we, if we don't understand it well, go about protecting ourselves or interacting with it? Just things to keep in mind. It's just a very big question, it's huge, right? I'll let you pause on that and we'll come back to it. Come in. Hi, thanks, great panel. I'm wondering what you make of the trend of providers beginning to charge users for the privilege of not being tracked. For example, AT&T, in rolling out its gigabit service, will reportedly let you opt out, for $29 a month, of supercookie tracking of all your online activity to deliver targeted ads. Although the answer is perhaps fairly obvious: what do you think this means for low-income communities and their privacy and security, and do you see this as a growing trend?
It's clearly a growing trend, and it is not just AT&T that's doing this. Another situation that I think has a similar consequence is the Facebook Zero style of plan. For folks who are unaware of how this works, the Facebook Zero plan is where Facebook says to your mobile provider, we'll cover the connectivity cost as long as the user is talking to Facebook. So what if you could get a mobile phone plan that was free as long as the only party you were talking to was Facebook? Then Facebook becomes your network, and they are sitting at a central point for data collection and surveillance. The answer to your question, and maybe it's a leading question, is that for communities without funds, that is the only way they're going to get initial access. And the long-term view is that actual access to what we currently think of as the Internet, to the whole world, could become the domain of just the people with the ability to pay for it. I think that's a tragic outcome, if it continues in that direction. We have time for one or two more questions. I see this gentleman here. My question has to do with existing infrastructure that's already in place, in terms of affecting everyday cybersecurity issues for everyday people. What would the proliferation of the Tor browser, as this gentleman mentioned, have to do with security issues affecting everyday people? Do you think that's a good solution? Thank you. I would love to see more people using the Tor browser. I don't think it's a solution to all of the problems we face; the Tor browser provides a very specific, bounded set of anonymity protections. But it would be great to see more people using it. Again, it doesn't solve all the problems. I was very happy to see Facebook open a Tor hidden service. Not because I have a particular stake in Facebook.
I don't actually use it, but I am happy to see that it's there, because it points out that the use of Tor is a fundamental activity, one that many people would want to engage in simply because they're blocked from the network services they want, whether by their government or their employer or their home Internet provider. I want to end with a question that will hopefully get us thinking through the connection between this conversation and the rest of the conversations we'll have throughout the day. Throughout the day we'll see this concept of cybersecurity in all of its permutations, and we've been talking about accessibility, availability, now affordability, protocols and standards setting. How do you hope the issues we've discussed this morning will travel to, or intersect with, some of the conversations happening later today? Maybe I can follow up on two questions. On the anonymity question: there are a lot of times in what we call real life, or the flesh world, where we rely on anonymity, and the digital world, the way it is designed right now, makes it very difficult to enact those things anonymously. Crisis lines are an example. As the digital and the physical become enmeshed, we can no longer make that very simple distinction between being anonymous online and offline. These two things are so enmeshed that we have to make sure that certain basic cultural and societal practices, like anonymous speech or anonymous access to services, remain available in this new enmeshed environment. I'm not saying online, right; in this new environment.
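An editorial aside on the "bounded anonymity" Tor provides, mentioned above: Tor's design is onion routing, in which the client wraps a message in one layer per relay, so that each relay learns only the next hop and no single relay sees both who is talking and what is said. The toy sketch below illustrates only the layering idea; JSON nesting stands in for the real cryptography, the `wrap`/`peel` helpers are hypothetical names, and nothing here is actual Tor code.

```python
import json

# Toy sketch of onion routing's layering (JSON nesting instead of encryption!).
# The client wraps the payload once per relay; each relay peels one layer and
# learns only where the packet goes next, never the whole route or the payload.

def wrap(payload, route):
    """Wrap `payload` in one layer per relay; the outermost layer names the first hop."""
    packet = payload
    for next_hop in reversed(route):
        packet = json.dumps({"forward_to": next_hop, "inner": packet})
    return packet

def peel(packet):
    """One relay's view: the next hop and an opaque inner packet."""
    layer = json.loads(packet)
    return layer["forward_to"], layer["inner"]

route = ["guard", "middle", "exit"]
onion = wrap("hello", route)

# Each relay in turn peels a single layer until the exit reaches the payload.
hop, rest = peel(onion)
while isinstance(rest, str) and rest.startswith("{"):
    hop, rest = peel(rest)
print(hop, rest)  # the final layer belongs to the exit, which sees: hello
```

In real Tor each layer is encrypted to one relay's key, so a relay cannot read inner layers at all; here the nesting only dramatizes who learns what at each hop.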
And to the gentleman who was talking about Afghanistan and Syria: those are some of the most surveilled countries, not only by the US; by making that surveillance infrastructure available, we're also enabling parties in those countries to surveil their own populations. In the case of Syria, we know that they have bought massive amounts of deep packet inspection software, mostly from companies in the West, and they've been using it to surveil their population, which endangers any project for the minorities or disenfranchised or vulnerable populations living in those societies. So while we can always look at the security and privacy of the tools we develop, they are only as secure and as private as the general environment in which they exist. And if we go for an environment based on surveillance, because we think that's a good strategy, then we're endangering the existence of these tools, and therefore anonymity and privacy, in this new enmeshed world. So let's think of things as an interacting system. That is correct. So, very quickly, because we're out of time, I just want to get a word or two from Tara and Daniel. It's a little hard to add to what Seda said; I think she's pretty much wrapped most of it up. I like the discussion of the virtual world and the physical world, because we spend so much time in the digital that I think we tend to forget we have these broader communities and social systems we interact with. So I would like to hear a lot more discussion of our larger social and interpersonal systems in our continuing discussions throughout the day and beyond. Daniel, quickly.
To add to what they said, I just want to reinforce the idea that as policy proposals are made, they often have technological components, and if you ask for a policy that allows the kind of deep surveillance we've been warning about here, that surveillance is not ultimately going to be used only by the parties you think will have access to it. I just want to make sure that proposals like that are understood in terms of the risks they pose to the network as a whole and to everyone involved in it. Great, thank you. Please join me in thanking our panelists. Thank you.