We're going to get started in just a few moments. So please grab a seat. So welcome to the Berkman Center's Luncheon Series. My name is David O'Brien. I'm a senior researcher at Berkman. I lead our privacy and cybersecurity initiatives. And before we get started today, just a quick note. We are recording this event, and it's being broadcast live over the internet. So for the sake of posterity, please just be aware of that. It's my pleasure today, though, to introduce you to this great panel, and Camille Francois. This panel includes several people from the Berkman Center, starting on the left here. If you guys could just give a wave as we go around. Bruce Schneier. Bruce is a longtime Berkman Center fellow now, cryptographer, well-known security technologist, New York Times bestselling author, among other things. To his right is Josephine Wolff. Josephine is a Berkman faculty associate. She recently completed her PhD, I believe, at MIT, and is now an assistant professor at the Rochester Institute of Technology. And to her right is Andy Ellis. Andy is a Berkman affiliate, and currently the chief security officer at Akamai Technologies. And last, but certainly not least, is Camille. Camille was a fellow at the Berkman Center from 2013 to 2015. Her research focuses on cybersecurity, human rights, and state interactions in cyberspace. She's led a number of initiatives at Mozilla, DARPA, Google, and the French American Foundation, among other places. And the topic for today is the new Mozilla report on cybersecurity. If you haven't read it, I highly suggest it. We have a link on our website. The study was led by Camille. It involved more than 30 cybersecurity experts. And it uses something called the modified Delphi method. If you're not familiar with the Delphi method, it was something that was developed more than 60 years ago by the RAND Corporation.
And the general idea is to draw out consensus among the group, the group of experts, that is, and to figure out what are the priorities and what is the current state of cybersecurity. So I'll let Camille say a little bit more about the report. And we'll hand things over to her. Camille? Yeah. OK. I don't really like the slow battery thing. OK. So what I'm going to do is give you a quick intro on the report, the methodology, and the key insights, and the three panelists, who were all part of the Delphi expert panel, might add to this. And then we're going to open it up for them to share their thoughts and for a conversation. The full report is available at that link. I've really only pulled out the key insights and some of the graphs, so I encourage you to go back to the full report if there's any question you want to dig a bit more into. First, I want to say a note on the process. Sorry, let me just get the notes. OK. Yeah. So no, I want to start with the title. Sorry. So we did call it Towards a User-Centric Policy Framework, which is the barbarian title. The fun title was supposed to be "an EFF digital rights person, a cyber military person, and a CTO walk into a bar, dot, dot, dot," and sort of see what happens, but we went for the formal title. Though it does say something, because what we're trying to do with this study is to gather a couple of insights to build a positive cybersecurity agenda. When you come from a digital rights perspective, or an activism perspective, cybersecurity policy is really not the most inspiring field out there. It generally produces a lot of policy frameworks and legal proposals that are accused of being either technically ill-informed or damaging to digital rights. And so we're trying to figure out what does a positive cybersecurity policy agenda look like? What are a set of proposals that people can agree on? And we're trying to build that from a consensus-based and expert-driven process. We call it user-centric.
It's also contrasting with national security-centric, though we did make sure that a lot of national security perspectives were brought in and represented on this panel. So the study, I believe, is the first one to use the Delphi technique for cybersecurity. And it shows where consensus can happen among a diverse group of people and also where consensus is less likely to happen, which is maybe even the more interesting part of this story. So for the process, it was really six steps. The first one is a very broad survey on the definition of cybersecurity. We had 32 experts, and we said, what is cybersecurity in your terms? What is it trying to secure? What do you think are the more important issues? What are the most pressing issues? Generally, what topics do you think should be addressed? You guys remember that it was very broad. A lot of data came from that. It was quite interesting. Then we had a pseudonymous email discussion on the role of government in cybersecurity through the lens of the 2014 Sony hack. So we said, here's what happened, here's how people responded. What do you think this says about how government is set up to address cybersecurity? What works? What doesn't work? What would you change? And I want to take this opportunity to say, it's really not a broad landscape of everything that's cybersecurity related. The question throughout the study was really cybersecurity policy, as in, what policy measures can government take for cybersecurity? So you will see some key, more technical parts of cybersecurity ranking lower in the chart than you would have expected. But that's because, again, we've asked our experts to consider policy suggestions for cybersecurity. And then we had an iterative round of feedback on the role of government in cybersecurity. So the entire Delphi process is based on the idea that you get people's expertise and then you go back to the panel and say, here's what we heard from you.
How do you react to this, and how are you adjusting, in order to create consensus? Then we ranked the expert-suggested priorities for government action in cybersecurity. And this is where the 36 propositions that we're going to see are coming from. I also want to stress that the 36 propositions that we ended up discussing were only generated by the expert panel. There were no additions, no deletions. Sometimes we recombined them when they were too close. When two policy proposals were really close, sometimes they were combined. But they were only generated by the expert panel. So the entire process and the entire material is what the experts brought and contributed to this process. Then, yeah, sorry. So step four is the priorities. What should be addressed? What problems, what issues should be addressed as priorities? And step five is, what would you do about it? What are the policies? And this is where the 36 things come in. And then we ranked them on desirability and feasibility. And on that, we also had feedback loops. Just a short word on the panel composition. As you see, we had different segments. Academia, civil liberties people, government and military, security industry, and technology industry. But the panelists were not recruited to fit multi-stakeholder quotas. They were really just recruited because they were great experts to work with on that. Okay, first quick insight, which is a kind of entertaining intro to this study, which is the definition. We had people talk about the definition of cybersecurity. That's a word graph of the most recurring terms when we did this exercise. I guess the fun part of it is that everyone starts talking about the definition of cybersecurity, saying it's going to be really complicated. No one's going to agree. And then they kind of all said the same thing, which was entertaining. About half of the panel referenced in their definition the three core technical elements of cybersecurity.
So confidentiality, integrity, and availability of information. And then they said, I know no one's going to agree to that, but here are the three core technical elements and here are the things that really should be included at the core of that definition. Context related to scale, different devices, different actors, different types of risk, and also elements of human rights, elements of privacy, and elements of economics and markets belonging at the very core of that definition. Here are a couple of quotes; because of the Delphi process, they're not attributed, but they were shared with the experts, so you guys are free to say, well, that was me. One we thought was actually pretty illustrative of the general tone of the definition work said cybersecurity is freedom from fear of attack and unauthorized access or use of one's identity, data, network, or system by anyone for any reason and in any way. That came from the government and military segment of the panel, and we thought it captured really well what others had contributed. It's the one that's up there. Then we had an exercise on what do you think gets too much attention in the cybersecurity policy debate and what does not get enough attention. If the labels are too small, I think you can still figure it out. "Cyber metaphors" is the one we created, because it's a construct into which I added all the cyber terrorism things, all the cyber war things, all the cyber Armageddon things; cyber 9/11 was also in there. So there was this general idea that what is getting too much attention is cyber fear-mongering, which could have been another name for it. And on the other side, cybersecurity issues not getting enough attention. There are more proposals there, and they're actually more different from one another, and you can see things ranging from hotel Wi-Fi to outdated infrastructure, herd immunity, and public education. So it's a broader range.
People had very specific things to say on things that were not getting enough attention. There was also a lot of passion in the answers, which is kind of fun. I remember a passionate piece of data saying USB plugs in rental cars are going to be a major security problem, and it's not getting enough attention. So yeah, just a very wide range of issues on what's not getting enough attention in the public debate. Then we grouped the insights into three environments. The user environment, the normative environment, and the technological environment. On the user environment, nothing particularly surprising. You will see that the idea was really here to say that people might be the problem, but they really should not be the focus of the blame. An interesting data point on that is when experts discussed cryptography. There was a lot of talk about cryptography and how to make it more widely available. The expert proposals around adapting cryptography, making it more automatic, making it easier, were four times more popular than proposals on cryptography that talked about training people and education. So this idea that yes, the user is the challenging part of cybersecurity, yet policies should really focus on making things easier for the user, taking humans out of the loop. In this bucket of insights, we also had a lot on patching of vulnerable software. So on the user front, I want to toss something in there, which is we often blame the user, right? And in systems analysis, we say that any time you have user error, it's an opportunity to redesign a bad system. Users are not at fault; it's the systems that we build. And so that's sort of the gist of what you're hearing: people are blaming the user, and it's not about teach the user to do better, it's about build systems that users can actually operate safely.
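The "take humans out of the loop" point about patching can be sketched in a few lines. This is a hypothetical illustration, not anything from the report; the helper names are invented. The idea is an updater that compares versions numerically and applies any newer release without asking the user.

```python
# Hypothetical sketch of auto-updating with the human out of the loop:
# the client applies any strictly newer version instead of prompting.
# Function names are illustrative, not from the report.

def parse_version(v: str) -> tuple:
    """Turn '2.4.1' into (2, 4, 1) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def should_auto_update(installed: str, latest: str) -> bool:
    """True whenever a newer version exists -- no user decision point."""
    return parse_version(latest) > parse_version(installed)

# Numeric comparison matters: as plain strings, "2.10.0" sorts before "2.4.1".
assert should_auto_update("2.4.1", "2.10.0")
assert not should_auto_update("3.0.0", "3.0.0")
```

The design choice is the point: there is no prompt, no checkbox, no decision for the user to get wrong; the only policy question left is the one the panel flagged, namely whether applying updates silently is acceptable paternalism.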
Yeah, that was a really well-shared view among the experts, and patching of vulnerable software was also something that was hugely discussed. This was really one thing that the experts constantly returned to, saying, okay, patching of vulnerable software is actually something on which we need government policy to help us think it through, with some experts saying this might even be an area which deserves a bit of light-handed paternalism, and one expert even comparing it to the polio vaccine: in the '40s, we didn't really ask people whether they wanted to get it, so we should think about how to generalize software updates. Technological environment: again, nothing earth-shattering or very surprising. Cryptography remains a priority. As we said, it's about making it easier and making the tools more automatic. Public key infrastructure was the most emotional topic of this study. Reading through the expert comments on the public key infrastructure was really something. People got really mad, very angry. There was a lot of, it should get fixed ASAP, and that's really the most polite quote out of this bucket. People got really, really angry at the public key infrastructure, saying it's an insane pile of shit, it's absolutely intolerable, this should definitely be the priority, with a lot of people, like three people specifically, saying THIS IS A JOKE, all caps, which, you know, cybersecurity experts don't usually use all caps when they answer cybersecurity study surveys. So that was our emotional part of the survey. I think it highlights the need for more thinking about and prioritizing of this issue. I think authentication standards also surfaced as a hard problem to solve. And the last part is actually the most interesting one on this slide, and we'll go back to it when we see the plotted chart.
Providing funding to sustain and supplement security efforts in free and open source software was the single most important consensus for the entire panel. It is really the one proposal that was deemed the most feasible and the most desirable, and that was the fastest consensus point. And when we mapped the policy proposals, it really stands apart as being the priority, the easy and desirable place to start. We thought that was a really good insight. The normative environment, I'll go quickly through that. The part here that was surprising is that we usually think that the people saying we want to build norms in cyberspace are a very specific subpart of the map of actors involved. It wasn't the case. It was really shared amongst all stakeholders. Every part of the panel said that they wanted to work on more norms for acceptable behavior in cyberspace. They added a couple of framing notes on this, as in: it should apply to all stakeholders, including corporations; it should take into account human rights; it should be publicly stated and discussed. I think in the past couple of years, technologists have come around to norms here. The surprising thing is that it's happening now; a few years ago this wouldn't have happened. I think there's been a change in the way technologists are thinking about norms and the value of norms. And I mean, I don't know how you guys feel about it, but I feel like this is probably not a data point we would have seen a couple of years ago. It used to be not a popular debate in cybersecurity policy. And it was interesting to see it was supported by really all segments of the panel. It's interesting. I don't know if I've thought about it enough. Off the top of my head, I think it's because there's a realization of how important they are. For me, it's looking at how norms have worked in other areas of this sort of conflict. And as technologists, we often will belittle those sorts of soft, human, non-quantifiable solutions.
And I think there's a recognition of the social aspects here, that not laws, not technology, but norms really do make a huge difference. And so there's been a growing consensus, which I think is interesting. I wonder if part of it is also related to watching policymakers try to make kind of harder laws and realizing that there's a lot that's not working in that space, or isn't getting at the things that people are perhaps most concerned about, and that's contributing to some greater appreciation for other tools. I might actually tie it back to one of the first slides, which had a definition I don't agree with, where someone said cybersecurity is about being free of the fear of all these bad things. That's a very black and white definition. And I think if you go back several years, technologists thought that either you had security or you had no security; they didn't grasp that it's a range. We have a norm in our society that people aren't just going to come break down this glass and break in and do bad things here. We don't have that norm on the internet. So this idea that there are defenses you build that work within one norm but don't work within another one might be part of what contributes to that understanding. And that's the same thing that's leading to this idea that we could have good encryption without involving the user. That's pretty fragile encryption. It's opportunistic, there are lots of hacks against it, but SSL works, my phone is encrypted. The encryption where nobody pays attention might be the weakest, but it's also the most ubiquitous. And I think that's that same kind of thinking: good enough is probably better than trying to get the math perfect. I'll get on to the last two slides. This one is really interesting. This is the weighted map of the 36 policy suggestions. And it gets to the last point we want to talk about. So up there, 29, I believe, is, so they're mapped in terms of feasibility and desirability.
We had identified a couple of buckets on that map. The standalone support for free and open source software security audits. A couple of underdeveloped issues that really didn't make it very high on feasibility or desirability, because support for them was segmented in one part of the panel. And a couple of issues that everyone had addressed, that everyone had talked about, that were really identified as key issues for cybersecurity policy, and that were still ranked as very desirable, obviously top of the debate, though very low on feasibility. So we grouped three of them, and we call them the cyber elephants. Mostly... Another cyber metaphor. It is another cyber metaphor. You're right. They get a cyber stampede in a second. Yeah. We actually had fun discussions about how they should be named. And then it was suggested that the cyber elephants might not really capture it, because they're not the cyber elephants in the room. As in, yes, they're always in the room, but they are addressed in a pretty straightforward way. They're the oldest, though, of the cybersecurity policy debates. So they're well-developed. They're the well-charted grounds of cybersecurity policy. So yeah, they're information sharing, critical infrastructure, and cryptography. They have different implications, but all three are really identified as traditional cybersecurity policy debates on which the trade-offs and the complexities are well identified. What we thought this meant is they are fundamental and they should be addressed. But they're also likely to overshadow a whole other range of cybersecurity policy levers and to easily end up blocking the debate. It doesn't mean that they shouldn't be addressed. It means that from a research perspective, it was really interesting to see them grouped, focusing all the attention, and sort of isolated in the data as three things that went together. So, that's the framing of a couple of key results and graphs.
Though the rest is in the report, I think I'll put the map back up for us to continue having a discussion, and I want to turn to the panelists for thoughts on what it was to be part of this process and on these early results. So it was a lot of fun. I felt like I was in a science fiction novel. John Brunner or something. Anonymous remailers, and we're having these conversations, and it was very interesting because we dove into some of these topics in ways that were very ego-free, in that you could say things that you wouldn't have said if people knew you were the one saying them, because you didn't have to worry about them saying, God, he really thinks that, and you didn't have to worry about, oh, I'm arguing with Bruce, I'd better be nice because I have to see him every week. But it was definitely insightful, and I think digging out the cyber elephants was, to me, the interesting part. These are wicked problems. Everybody argues them, everybody rehashes them, and at the end of the day, it is not clear that any policy solution is actually going to help us very much. I think that's what everybody sort of came to, or they're afraid that the policy solution that will show up supports the side they don't agree with. Cryptography probably sits in that realm of either we get perfect cryptography or we get a surveillance state, and there's not really a lot of convenient policy land in between the two. So being able to tease out things that might actually help, if we could divert some attention over to those and stop arguing about crypto and information sharing for a while, maybe we can get better.
So I have to say I want to thank Camille for all the hard work she put in, because, as she gave you some gentle flavor of, when you ask 30 people who spend a lot of time thinking about cybersecurity what they think the government should be doing, what everybody should be doing differently, you get a lot of very emotional feelings and a lot of caps lock and a lot of ranting, and turning all of that into a report and recommendations is no small amount of work. But it was a lot of fun. I'm glad, that's kind of you to say. The question that you had, Camille, was could you see who was sending the messages as they were going? So two things. I could see everything, right? And I had also recruited you guys, so I knew. Also, some people were transparent in the way they talked. The other part that I could have added, that's not in the report, is that some people are extremely transparent in the vocabulary that they're using. It's very easy to tell a military person from a mile away. I literally have things that say, we shall focus on the enemy. And you're like, oh, I know who you are. Or, you know, people who say "at the root of the problem," like, oh, you're an engineer. That's also kind of fun. I could have added a little annex of who says what and who talks about things which way. Which I think is actually a fun anecdote, but it's also part of the problem, as in, you see people talking about the same issue in extremely different ways, with different framings and different vocabularies. So part of the work was also threading this back together, acting as an intermediary for them to actually have a productive debate, which was interesting to see. Sorry. So anyway, to go back, I think one of the things that was interesting for me about this process and about this report is the different ways people think about feasibility of change, right?
And we have that graph up there of sort of what's feasible and what's important. And where there's consensus around what's important, which is not always the case, right? We can sort of sometimes agree: most people think that, you know, supporting free and open source software security seems to be generally a good idea. And the question of what makes that feasible or infeasible to different groups, if you're an engineer, if you're a government person, if you're a civil rights activist, is where a lot of the interesting elements of, not disagreement, but different perspectives really came out to me. What an engineer thinks is infeasible is totally different from what somebody who works in Washington, DC and deals with the personalities and the politicking thinks of as infeasible, and trying to figure out if there are pathways through all of those different ideas of feasibility to things that everybody agrees are important is, I think, a part of this that we don't talk about enough, right? All of us have sort of rants around things that, you know, like apparently public key infrastructure, that make us furious, but the question of, you know, is this feasible from the narrow slice that I look at? Is it feasible from the slice that you look at? What would it mean to try and get over all those hurdles? That was, I think, one of the things that came out in this process that was really interesting and important for me. I guess what is not surprising but somewhat disheartening is how far away we are from solutions, that we are much better at identifying problems than solutions. And there's still a lot of disagreement on what the problems are. But, as you said, these are wicked problems. It's not at all obvious what the next steps are. It's not at all obvious how to move forward, and there wasn't this consensus of, okay, here are the things we should do. And it doesn't surprise me.
I mean, I've been thinking about this and I have trouble coming up with solutions in a lot of cases, but we as a group had trouble, and I think that was interesting to me. The whole process was interesting, but I was, I don't know, a little bit optimistic that maybe this process could magically come up with something we hadn't thought of before. But it's the stuff we've been talking about for the past couple of decades that we're still talking about, and we didn't come up with, now here's the answer. Because we would have been doing it between then and now if we had it. I'm sorry, the graph doesn't have the specific policy solutions. They're in the report. Remember the joke about the numbered jokes? It's kind of like that. No, actually I was about to get the... Three is critical infrastructure. Three is critical infrastructure. The thing that everyone wants to get fixed, that's really important, everyone wants to do that, but no one knows how to do it, is critical infrastructure. Right, see, this is bad, right? This is bad. Right, and let's just take an example of why. For those of you who don't know why number three is hard, let me give you a small anecdote, and let's say that TLS is critical infrastructure. That's a weird abstraction, but hold on to it for a moment. So we want to move everybody running a website from crappy versions of SSL, you know, SSL 3 or TLS 1.0, up into the modern world, TLS 1.2. Right, so it seems really easy to do, except, because Google changed the terms of use on Android with, I think, 2.2, most of these cheap Android phones sold in the third world aren't using modern Android, and so they're using versions of Android that don't move forward. Or the Japanese feature phone market only supports SSL 3. This is a hard problem: if you're a Japanese bank, do you move forward to secure cryptography, but at the expense of all of your users? Right, and who's going to pay for some of these things? So, very simple problem.
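The server side of the migration in that anecdote is, in isolation, nearly a one-line policy change; here is a sketch using Python's standard `ssl` module. The code is illustrative: the hard part, as the anecdote makes clear, is not this configuration but every legacy client it silently strands.

```python
# Sketch of the "easy" half of the TLS upgrade problem: a server policy
# that refuses pre-TLS-1.2 clients. One line of configuration -- the real
# cost falls on the SSL 3 / TLS 1.0-only devices that can no longer connect.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject SSLv3, TLS 1.0, TLS 1.1

assert ctx.minimum_version == ssl.TLSVersion.TLSv1_2
```

This is the asymmetry the panel kept running into: the technically correct move is trivial to express, but whether to make it is a business and policy decision about which users you are willing to lose.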
Now, every piece of infrastructure has a problem that looks like that. So that's why it's hard. We can just say upgrade your critical infrastructure and do good things, but there are legitimate business and user reasons why people are making these bad choices, or what we think of as bad choices; to them it's the best of all the bad worlds they have to work in. And just to address the outer circle, 10 is the clear international strategy to increase risk and consequences for adversaries attacking through cyberspace. So this is a clearly military-framed proposal that still made it to fairly desirable and feasible. And two is actually interesting: it's require notification to customers when their personal information is compromised. Another one that is less traditional than you would have expected for being here is number five, which says change the legal frameworks that produce a chilling effect for security researchers and academics, and ensure that security researchers and entrepreneurs can tackle such questions without legal uncertainty, which is not usually top of the cybersecurity policy proposals in the public debate, but made it quite far in desirability and still scored okay on feasibility. That's what happens when you get a lot of academics involved. Yeah, I think I recall one of the conversations on that one, and the problem is, you want to enable a set of actors to do a certain set of actions while not enabling a different set of actors to do the exact same set of actions. So we'd love to have a framework which says, look, if you are a good-minded person who's helping to find flaws so that we can improve the world, right? You're that NGO who discovered that Volkswagen's been cheating for a long time. We really want to enable you to do that. But at the same time, I don't want to say, look, if you are a hacker group who wants to break in and conduct cyber extortion, we want to make that legal, right?
But it's the same set of actions that both groups will do. This is a hard problem. I think everyone's mostly in agreement that security research ought not to suffer a chilling effect: very desirable. How do you do that? Y'all are more lawyers than I am, so that's your problem. Well, that's worth saying, since we're sitting here in a law school, that I think one of the reasons that that debate has proved so difficult for people on the research side is because the way that laws like the Computer Fraud and Abuse Act and the Digital Millennium Copyright Act are written is really around trying to make specific technical things illegal. And what you're talking about is saying it's not necessarily the technical thing you're doing that should be illegal, but what you then go on to do with that technical stuff, which is obviously a much harder thing to regulate in a lot of ways, and probably one of the reasons why it hasn't been. But you can't sort of say, this technical thing is going to be illegal for everybody, and still allow for those distinctions. You have to say it's illegal to use that to, say, kill people driving these cars, or something. And you'll see the same kind of debates around information sharing. Everyone agrees that information sharing is a good thing, that the more information is out there, the better research data we have, the better vulnerability data, the better we know what's going on, the better decisions we can make, and the more we can learn. But how to do that, how to enable the right kind of sharing, not the wrong kind of sharing, under the right circumstances, with the right people, in the right way? What was a very agreed-upon goal, and it sounds good, just gets buried in the details of: this is actually really hard. And you still see the current information sharing legislation opposed by quite a lot of security experts, because it's not the right way to do information sharing and it's a way we think is actually harmful. This is bad.
And so how do we get over that? Or with what happened last week with the US and China, you know, there's a new cybersecurity, I guess not a treaty, it's sort of a handshake agreement, right? You know, does it mean, we all agree that this is the right direction, but is this particular thing the right thing, and does it mean anything, and how do we make it work, and how do we build on it? Suddenly it gets very, very hard. So we want to open it up for a conversation with everyone in the room. You've already, you've already, right? We can bloviate some more if you'd like. But I think actually, while people are thinking of a question, you know, when you think about security, right, it's really complicated. We come back to those definitions. I think we all said confidentiality, integrity, and availability because you expect us to say those words. But we don't agree with that, and we don't think everybody else will. Hey, Andy, I said, in your own words. I know, but I think I may have said the same thing. I had to be really open. But to put it in context, if you say to secure something, what does that actually mean? Right, so go back to a Secretary of Defense many decades ago who, when asked what the hardest thing he dealt with was, said it was using language with the military. If he gave the order, like, go secure that building, you know, the Marines would go in and kill everybody and blow up the building. The Army would surround it with concertina wire and sandbags and have a password of the day if you wanted to get in, and if not, then they might shoot you. The Navy, if you asked them to secure a building, would make sure that the lights were all turned off and the windows were closed and the doors were locked. And the Air Force would give him a three-year lease with an option to buy. And so when we talk about security, this is the problem we run into, right? I think that security is putting the risk profile of a system in line with the risk profile of a business. That's it, right?
If we're willing to accept risk to be in business, and we do it all the time, or to do whatever we're doing, your system should just reflect that. The problem is that when there's a mismatch, you get unpleasantly surprised. But there are other people who will say no, security means you can't be attacked at all. Well, if you don't have a system, you don't have a business, you can't be attacked; that might not be very profitable for you. Good, now I have people to argue with. Thank you. So my name's Matt. I'm a fellow at the Center. And one of my questions is: I'm curious if any of the policies looked at mitigating the damage that security incidents cause. So, for example, if a social security number is revealed, which we know happens quite often, there's damage from that. Maybe a better solution is just to make it so that we're not using social security numbers to verify identities, rather than to go ahead and spend all our efforts securing social security numbers. In the past couple of years, we've seen a lot more out of the security industry about that sort of thing. The general term is resilience, and it covers things like mitigation, recoverability, adaptability, and agility. All of those are about how you deal with it after you've been attacked. And if you look at biological systems, there's a whole lot that happens there that traditionally was not done much in computer security. Computer security is all about protection and defense, then some detection, and very little in response. I've seen a change very recently, in the past two years. So yes, a lot more is being talked about. How do we make our systems fail safe? What that means is that when your car fails, it fails by slowing down, not speeding up. Fail safe, fail secure, instead of failing dangerous and open, or being able to recover from attacks.
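The fail-safe, fail-secure idea can be shown in a few lines of code. This is only an illustrative sketch, not anything from the report; all the names (the policy-server classes and `is_authorized`) are hypothetical. It shows an access check that fails closed: when the backend breaks, the system defaults to the safe state.

```python
def is_authorized(user, resource, policy_server):
    """Fail-secure authorization check: on any error, deny.

    The software analogue of a car that fails by slowing down
    rather than speeding up -- fail safe / fail secure instead of
    failing dangerous and open.
    """
    try:
        return policy_server.check(user, resource) == "allow"
    except Exception:
        # Any failure mode (backend down, garbage reply) defaults
        # to the safe state: deny access rather than grant it.
        return False


class AllowServer:
    """Hypothetical stand-in for a policy backend that grants access."""
    def check(self, user, resource):
        return "allow"


class DownServer:
    """Hypothetical stand-in for a policy backend that is unreachable."""
    def check(self, user, resource):
        raise ConnectionError("policy server unreachable")


print(is_authorized("alice", "payroll", AllowServer()))  # True
print(is_authorized("alice", "payroll", DownServer()))   # False: fails closed
```

The design choice is in the `except` branch: an exception returns `False` (deny) instead of propagating or, worse, defaulting to allow.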
You look at some of the big attacks that have made the news, Sony, Ashley Madison, Target, and how they responded and recovered. It was really indicative of how poorly they had thought about these sorts of things, and whether you as an individual, or as a company, can deal with that. Now, sometimes it's easy. The credit industry has gotten very good at giving you a new credit card when something bad happens. They're really good at recovering, and they recover by that reissuance. If your personal data has been leaked and it's embarrassing stuff on Ashley Madison, it's going to be a little harder to recover your marriage from that. But maybe you want to think about that before you post it. And maybe the system doesn't want to save some of that data because of that risk. My guess is that in the next few years we're going to see a backlash against big data, based on the idea that we as a company are now much more at risk because I have all of your data. If I screw up, you're going to sue me, and that'll be bad. Or there'll be a regulation, and that's going to be bad. So I'm going to protect myself by not saving it. And so I think that is being thought about a lot more now. I think one of the things that's really interesting, as you say, looking at how companies do this in the wake of big incidents, is the policy tools that they have at their disposal, how they try to use those to mitigate the damage that they see being done, and generally how ineffective that is. So if you look at, say, Sony and Ashley Madison, or Avid Life Media, which owns Ashley Madison, after their breaches, both of them issued DMCA takedown notices against people who were posting emails or scripts or internal databases. That was a pretty slow way to go after hundreds or thousands of people tweeting or posting, but that was one policy tool they tried to use.
Sony was seeding sort of fake files for people to download, in hopes that they would start downloading those instead of the real stolen data. So we've definitely seen some creative attempts to leverage different technical and policy tools. And I think it's fair to say that none of them have proved very effective, and that there's definitely a need to think about whether there are policies that would actually be targeted at that, instead of reappropriating copyright policy for something it was really never designed for. I think the problem you're highlighting, which more people are talking about, that your social security number is both an authenticator and an identifier, is one that is still not well understood as a problem. But it is tangentially addressed here, up in the upper left corner, with the disclosure rules. The fact that we're already seeing policies saying that when an end user's data is breached, you have to tell them, which effectively means it's now a public disclosure, means we have created a Pigouvian tax on storing user data. So the hope is that that tax will do two things. One, as Bruce notes, cause people to keep less of it. But the second is to change the understanding, within the populace and within lawmakers, of how these numbers are being overused. I want to say misused, because there really are no guidelines for how to use a social security number anymore. There were, but everybody ignored them. So maybe at some point within the next decade or two, that will be a conversation that we're ready to have. And since we're mentioning social security numbers, another interesting data point was that people were really mad about passwords too. So there was a lot of discussion on how to move beyond passwords as authentication mechanisms.
It didn't actually result in anything; it was big in the problems phase, and that's a good example of what Bruce was talking about: we're still good at discussing problems, but not so great with solutions. When we talked about issues, passwords were really something very prominent in that conversation. It did not result in any form of policy proposal. The only thing that sort of made it out of this debate onto the policy map was policy frameworks that encourage companies to have two-factor authentication. It's number four, so you can see it out there. It's deemed to be feasible. I'm impressed you memorized all these numbers, I really am. The idea is that there are things government can do to help corporations move forward in enabling two-factor authentication. That's the part of it that we can address from a policy perspective. The rest of the problem, moving beyond passwords as authentication mechanisms, sort of devolved into the non-solutions map. Yeah, I think passwords were a thing that everybody hates. We think that's a festering pile of shit, as Camille said earlier. And we were promised that PKI would let us move past them. And so that's why there's so much anger about the fact that the PKI doesn't give us the tools to really move authentication forward. Those two things, I think, really resonate for a lot of people as things we would like to have be better and, oh my god, is it bad. Although I'm personally not surprised, because I wrote an essay in 1998 explaining why PKI wouldn't help. You really were the oracle of that time. Sorry, Paul Roberts, Security Ledger. Hi, Andy. So I'm picking up on things that I think all three of our panelists have commented on, but I would note that the prevailing wisdom on the west coast, on the left coast, is that the more data your company possesses, the wealthier you are, because you can find endless ways to monetize that data. Maybe not now, but sometime in the future.
So I guess the question would be, what is going to be the forcing function for companies to start, as Bruce suggested, looking at the risk and the cost of holding that data? Is it going to be the insurance industry? Is it going to be regulations from the federal government or state governments? How is that transition going to happen? Because it strikes me that right now the prevailing wisdom is actually 180 degrees from that, which is: do a startup, get venture capital, collect data, and then figure out how to make money off it down the road. So I think it's going to be liabilities. Right now we're sort of at the maximum hype of big data. It's collect everything; it has some value; the cost of storage is marginal. But when you start adding in the liabilities of losing the data, the cost of storage goes up, I think considerably. And when you look at the marginal value of more data, and I think we're starting to see papers on big data research showing this, the differential value of twice as much data is not twice as much value. So there are going to be diminishing returns. There'll be a point where people say saving this isn't worth it. This data is only good for six months; why do I need to save it for five or ten years? This data is only valuable in aggregate; why am I saving the individual data? And so I want to see greater liabilities. I don't think it's going to be regulation. I think it is going to be liabilities, either the insurance industry or lawyers saying, look, do you really want to save that? I mean, Ashley Madison could have designed their system more securely. And I think this is interesting. The way it worked is you pay with a credit card number, which they kept with your name, and so with your account. And they did that because the way to get more money out of you is doing a recurring monthly bill. So they needed your number and your account.
If they had a different business model, where you pay and we credit the money, then we delete your name and the account is there with the money, and when it expires it goes defunct and you've got to pay again, that would be a much more secure system. You would not be able to attach names to user IDs. But they didn't do that, because the other thing is a more profitable business model. I think, in general, the profitable business models tend to be the most dangerous ones, right? So there needs to be some way to reflect that danger back into the business model. I'm actually unconvinced that liability is going to solve it, specifically for the reason you mentioned, which is the startup world. When we think about liability for established companies, liability is an interesting risk, because you're this healthy organism and that liability is a cancer that could kill you. If you're a startup, you're a zombie walking: you know the date of your death, and you're really hoping to figure out how to become alive on the other side of it. So you see startups taking insane and crazy risks, because those risks aren't real for them. They're already dead. A liability that doesn't take effect until you become alive is totally awesome; it means you've got to be alive for a while. So that's why I'm not entirely convinced that liability alone is going to solve that problem. We're in very fertile cyber-metaphor territory here. I want to add just a quick point, since you mentioned the role of corporations in that debate. This study was convened by Mozilla, and sadly neither Yochai nor Chris, the two Mozilla point persons on this project, could join today. But I really encourage you to reach out to them. They did fantastic work convening the study, and they did it because they intend to have a positive, user-centric cybersecurity agenda as part of the work they're doing.
So just go ask them these questions too, because they are on the delivery end of the story. You had a question? There's a microphone over there. I'm from the computer science department. So my question is: in both the feasibility and desirability aspects of this, there seems to be a fairly substantial overlap between any of these things and whether or not there's real consensus on what the problem was. So my question is, do you have a sense of which of these policy ideas need a technological solution, or a better idea for exactly how the policy should work, to push them into "let's do it," and for which of them it's just "no, we don't agree," because it's surveillance versus privacy or something like that? It's a really interesting question. I think that's part of the story with the cyber elephants. The cyber elephants are the issues where the trade-offs are really well identified; it's very well charted; yes, it is a priority; but the debate is almost too stuck for us to move forward. We don't agree fundamentally on how to proceed, but it's fairly mapped out and straightforward. Hence the surprising part about the free and open source software funding bit, which did not actually relate to any problems discussion. It appeared out of one panelist, and then the others grabbed it and ran with it, and it got very popular. So in this type of study there are two different types of data points. Some of them you see coming, either because, as you said, they're so rooted in the definitions, or because you see them arriving in the framing of the problem; then quite a lot of people start talking about the solution, and it ends up on the map. Some others have completely different trajectories, as in: someone just puts it out there, everyone runs with it, and it's very popular.
So another map that we did actually tries to figure out the elasticity of the propositions given the amount of initial support they had. There are some policies that were suggested by just one or two individuals on the panel that got a lot of popularity, and some others that you could see coming more steadily. And again, the most surprising one in terms of trajectory is FOSS funding; a really obvious place to start, it seems. The question you asked is a really interesting one, and not one I've thought a lot about in relation to these specific proposals. I should also say I don't know the numbers as well as Camille, so I can't speak directly to what's on the chart where. To my mind, a lot of what we came around to thinking of as more feasible were issues where it seemed like there wasn't a need for a brand new technological insight, but there was a need for money going towards free and open source software, or a need for wider implementation of already existing two-factor authentication. I don't want to say that was how we defined feasibility, but I think that was what came across to me in some of the conversations: if you want to completely redo something like PKI and re-understand what we would do there, that requires new technology, probably, in a way that we haven't thought through fully yet.
If you want something like lots more places texting a code to your cell phone when you check an account or log in, that's technology we already have, and it's more a question of how you get people to do it. Having said that, I'm not sure it's necessarily easier to come up with money, or to make people do something they don't want to do, than it is to design new technology; I think that's often what feasibility conversations come around to. So, on your question of what requires a new technology or a new policy versus what we just don't agree on: there was some sense in which the new technology seemed like the harder piece, which may or may not be accurate, but it made some of these seem less feasible to approach. And Joe, to go with your point, adding just a slight data point: when you look at whether different segments of the panel have different ratings on feasibility, they don't. It's on desirability that people actually want different things according to different segments; it's not so much the case on feasibility, and I was surprised by that. Things that are deemed feasible are generally feasible across segments, or unfeasible across segments, and it's a very different story for desirability. Although I find that, very often, the technologists want the lawyers to solve the problem and the lawyers want the technologists to solve the problem. That happens way too often, because the other side is so much easier. And the same is true of users and engineers: engineers would like users to be better educated, and users would like engineers to build easier systems. When I went through it and looked at what desirability meant, sorry, feasibility, it came down to this: in one sentence you could say what needed to be done, and it wasn't "solve world peace." So "fund more free and open source software," that's totally feasible to a policy wonk.
I've given you a thing; go find your own money and put it here. Whereas "implement more crypto everywhere": where do you start? What does this even mean? Even if we could have gotten everybody to agree that it was desirable, that's totally not a feasible statement to hand to somebody, to say, well, just apply some millions or billions of dollars and do it. I don't know what I'd get out of the other end of it. I do know what I might get out of more millions of dollars at the critical infrastructure initiative. And I think one of the things you're getting at there is that the centralization of the people you need to cooperate and get behind something is a big factor in feasibility, from a lot of perspectives: are there a few actors we can target and try to get to do something, or are we really looking at the whole world and trying to make millions of people do something? I wanted to give you another answer to your question. Yesterday I was talking to a high-up guy at a big, familiar Silicon Valley company who said the tide is on the turn: big data is now considered, by many, a toxic asset. That's the term he used. He also called it the radon gas of data, because, he says, we're hoovering up the entire periodic table of personal data and there have got to be some carcinogens in here somewhere. He also called it a silent killer. So those are three... why are there so many biology and chemistry metaphors in this?
And a bit of the zombie thing, which I think is a real takeaway. One of the ways we can observe this trend is corporations' attitudes towards their own employees' email, for instance. In recent years we've seen a lot of corporations say we're not going to keep our employees' emails, because so much comes with it: someone's going to publish them. It's an interesting nexus when these companies are themselves technology companies, because it sort of puts them on a path of having a different approach to these issues. This one, I should add, is about selling solutions to other big companies, so there's demand on the big-company side for less data. I'm George Bokray, an independent scholar from Central Square, and old enough to remember social security cards which said "not to be used for identification purposes." This is supposed to be about user-centric, right? And I'm wondering what's going to happen from here. You have your expert panel who did this whole structure; is there going to be a user panel? Is there going to be a hacked-businesses panel? Is there going to be a Kevin Mitnick and friends panel to go over these kinds of things? Because I think all three of those would be very useful to look at your work, critique it, and add to it. It's a really good question. That's one of the reasons why I really want to thank Mozilla for having convened and thought about this research: they intend to work on this from their perspective. We wanted this research to be very clear and transparent, so we're putting it out there, and the methodology is also open. We can talk about how we made it; there are various steps. If anyone wants to replicate this with a different panel and a different set of conclusions, they really can do that; they should do that. And Mozilla, on their end, is going to incorporate this thinking into their own policy process as a corporation. I should also say that this is a MacArthur Foundation funded study, and I know that this is a process they might
use on other work they decided to support; I don't know. So it's really just a first step, taken publicly and openly, and we're happy to share all the data, everything we've learned, and we encourage others to replicate it. Hi, I'm Peter Nature. Among other things, I run a seminar series at IEEE that sometimes addresses issues related to this. I noticed you talked a little bit about norms, and I also see the problem getting a lot worse, with the internet of things introducing a lot of cheaper devices, which will be harder to get to use the latest, most expensive technology, plus advances in machine learning meaning arbitrary anonymous people on the internet potentially have more information about you than you do yourself, without necessarily having identifiable resources that would make them really vulnerable in a court. And I'm wondering, given that governments and policies can't do it alone, whether you see a role for establishing a Consumer Reports or something like that, to make some of this technological information more accessible to ordinary users, who kind of bounce between ignorance, where they have no idea about the extent of botnets sending emails from their personal computers, and fear, when they discover that someone has stolen things. Trying to get more technology-specific information available to ordinary people: do you see a role for Berkman in trying to produce that? I think there was something like that John Palfrey had organized here a number of years ago. Do you see anything like that for vulnerabilities in extant software and how people can get around them? So I'm not actually convinced that that's the right approach, or that it makes sense given the long march of the history of technology. If we look at the adoption of technologies and the way they've improved the human condition, they often start out as something that is relatively simple but hard to understand and master, and so only experts get to use them; think trains, for instance. And as they become more available and
readily accessible by the consumer, they become less easy for the user to understand. We look back and say, 50 years ago people changed the oil in their own cars. Most people today don't; in fact, you open up the hood of your car and it's a seamless mass of metal. You can't even tell that there's an engine under there in some vehicles. And so that's the progress of technology. I used to be able to build a phone; I totally can't anymore. So you can look at all of those, and if we model that and say, well, that's the norm for technology, then we can't head for a world that requires the users to gain a better understanding of how the technology actually works. I think what we have to do is march towards the technology actually working, and working includes working well and working safely. And that's the problem we have: we're on this cusp for things on the internet. I hate "the internet of things," because it implies that they're supposed to be there, where people basically said, oh look, I have a toaster, I will put it on the internet. Why? Because it's a toaster on the internet. I mean, we remember those: the first one's cool, the second one's dumb, and by the fifth one you're like, oh my god. There are ones now where you send them an image and they will burn the image into the toast, and you're thinking, wait, what if I sent it an image the shape of the whole piece of toast, so you're burning the whole toast? Could I start a fire?
I really hope not. That's not a real case, but it's the kind of hazard analysis I would go look at: why is this thing on the internet? And so we're at this cusp where people are going to put things on the internet that are fundamentally unsafe and do not have a means of becoming safe in the future, which is what worries me. In 15 years, the things that go on the internet will hopefully have a path to become safe over time, but right now it's hardware people who are not doing auto-updating, weird control schemas using ancient protocols that are known to be broken. I am optimistic; I think that might get better, and we'll just be stuck with a bunch of crap still on the internet. So, I don't believe pacemakers are yet actually on the internet; you have to bridge them to get them onto the internet. But yes, there's that model of "oh, I have a pacemaker that works by remote control." The lack of decent safety and hazard analysis in the biomedical industry is stunning and shocking. I'm really glad that the FDA is starting to get involved; you saw their ruling on the Hospira pump. So we see that there's a recognition that that's a problem, but we have medical systems that are not designed for adversarial environments; they're designed for pristine, sterile environments like a hospital. OK, well, hi. I'm Jackie Kerr, a fellow at the Belfer Center. I had just a couple of quick questions. I was curious how the issues of vulnerability markets or bug bounty programs came up in this kind of multi-stakeholder discussion, and I was also interested in whether there were any particularly surprising or interesting issues that came up at the intersection of cybersecurity and global internet governance processes. Yeah, there were comments on the vulnerabilities equities process. It ballooned and died; it didn't make it to policy solutions at all. It was really only of interest to two sub-segments of the panel, with no interest in the other segments, which was really interesting. I would put it in fundamental conversations that need
to be developed in the future, but it didn't seem like it was mature yet, or at least it didn't get its share in this process. Two sub-segments, like two particular groups? Yeah. Which two? I believe government and military, of course, was the first one, and I remember specifically looking into the vulnerabilities equities process, but I don't want to be wrong; I'm not sure if civil rights was the second one or security was the second one. Interesting. Yeah, but no hits on the others. Some other issues were like that; it's not unusual compared to other issues. But funny you ask, because I actually did check which parts of the segments it appeared in. It's definitely something that I think will take on more and more importance in future cybersecurity policy conversations, though, to be fair, the process didn't really highlight that specifically. And the second question was about things at the intersection of global internet governance processes and cybersecurity, which are sometimes framed as very separate discussions, so I was curious what the intersections were. I think the part of the conversation on norms really got into that. There was distinctly no mention at all of anything that is internet governance; no mention of internet governance anywhere in this process. And again, as I said, there was a lot of data, just not on internet governance. But there were definitely a lot of conversations on global cybersecurity norms, and specific discussions of the UN GGE process, because it was happening at the same time. Yes, that definitely appeared very clearly, and the way it's captured here is as global cyber norms, to be communicated very clearly, done in a multi-stakeholder way including corporations, and focused towards international actors. So there's a civil-rights framing of this proposition and a military framing of this proposition; they're both somewhere on the map, but ultimately they do mean the same thing. And it was definitely something we could observe very clearly, and as we said it was, I think, a surprise
for all of us, because we didn't expect it to be so clear, and I don't think it would have happened a couple of years ago. I do think the norms piece, at least in my view, and I don't know that I speak for everybody who contributed to this report, but in terms of what makes policy harder and more real versus more mutually-agreed-upon and handshake, I think of pretty much everything global as falling into that latter category. If you're making agreements between different countries, then you're looking at things like we just saw with the US and China: it's not as easy to pin down as concretely as some of the more domestic policymaking. And I think that was one of the reasons that norms came up in this study, and one of the reasons we've seen a lot of them in the past couple of years: the understanding that that's one way to try to get at the international dynamic of it. I think it may partly be a function of how, and I'm trying to think of the nicest word to use about the global internet governance debates, how fraught they have been over the past few years, and how little concrete there is to point to in them and say, if you were trying to say something actually specific about what you would want to change, "this seems to be something that a body like ICANN, or the IGF, these are not the acronyms I feel most comfortable diving into, or the ITU, all the different players in this space, has been addressing in a very specific way." To my mind, there's a lot of discussion of security in those kinds of global forums, absolutely, but in so many different contexts, in so many different ways, with so many different countries viewing it differently, that it's hard to pin down what it's actually about. I suspect it may be that many of us sort of have this belief and desire that the global internet governance groups will remain far away from the internet itself, and so we don't necessarily want to bring them into a
conversation when we're brainstorming things. And again, as you said, it's really interesting, because it did not surface: nowhere in the debates, nowhere in the proposals, nowhere on the map. Internet governance was kept out of the discussion on cybersecurity policy. I think we're more worried about the harm that those groups could cause than the help that they would give. Is there time for one last question? I just have another broader-picture question. Let's say the feasibility of these protection technologies increases. I still have the remaining question of who will bear the cost of this security. Will the companies integrate it? Will the price be passed down to the consumers? How do we avoid making a less accessible, more expensive internet, if you will? So I think that's part of the feasibility: it's not just the technical feasibility, but the operational feasibility. Can we pay for it in some meaningful way? The people who are subjected to the risk should be the ones to pay for the mitigation, so can we align the economics? I think a lot of the time we get the economics wrong and perfectly good technology fails. So to me that's part of feasibility: getting that part working. And I think you're right, that's a very big one, and one that technologists used to minimize, especially in security. But I think now we don't; I think people really recognize that getting those economic incentives aligned is very important to making the security work. As an example, one of the things we have in the very-low-feasibility category, under securing critical infrastructure, is BCP 38, which is the first thing most people will come to. For those of you not familiar with how the internet works, it's the rule that says: if you're a network, don't allow traffic to leave your network that claims to come from a different network. That seems really simple, right? Don't let people forge addresses. Can't get it done. The big networks mostly do it, but that's okay, because other
people are forging traffic that claims to come from them. If we could do this, the DDoS business would be harmed: it would be really hard to do reflective DDoS attacks if we could get BCP 38 universally implemented. But it's in that not-very-feasible category. We all want it to happen, and technically it's easy; it's just getting the economics working. As for who's going to decide how those costs are allocated, I think that's policy, but lawyers and judges are also going to be a huge part of it, in terms of settling individual incidents around who pays whom, and scaring the various people who feel they might fall into those categories into understanding how those liability regimes are going to play out, which I would say is at a really early stage of being defined. Well, I want to thank all of you for coming and having this discussion with us, and especially you guys for being so patient throughout the many surveys of that process and the email discussions, and for making the time to come and share your thoughts on the study with us today. It was really fantastic having you on board.
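As a footnote to the BCP 38 discussion above: the rule itself, "don't let traffic leave your network claiming to come from a different network," is simple enough to state in a few lines of code. This is a minimal illustrative sketch, not part of the report; the prefixes are documentation example addresses, and `permit_egress` is a hypothetical name, using Python's standard `ipaddress` module.

```python
import ipaddress

# Prefixes this network legitimately originates (example/documentation
# address blocks, purely illustrative).
OUR_PREFIXES = [ipaddress.ip_network("203.0.113.0/24"),
                ipaddress.ip_network("198.51.100.0/24")]

def permit_egress(src_ip: str) -> bool:
    """BCP 38 egress filter: only let a packet leave if its source
    address belongs to one of our own prefixes. Packets with spoofed
    sources -- the raw material of reflective DDoS attacks -- are
    dropped at the network edge."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in OUR_PREFIXES)

print(permit_egress("203.0.113.7"))  # True: source address is ours
print(permit_egress("192.0.2.99"))   # False: spoofed, dropped at the edge
```

The technical part really is this easy; as the panel notes, the hard part is that the cost falls on the network doing the filtering while the benefit accrues to everyone else.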