All right, good morning everybody, and thanks for joining us. Welcome to New America. My name is Ross Schulman. I'm co-director of New America's Cybersecurity Initiative, along with Ian Wallace, who's back there waving his hand, saying hi. The Cybersecurity Initiative is a joint project between New America's International Security Program and the Open Technology Institute, also here at New America, of which I am the representative co-director; Ian's with the ISP. And we're focused on a broad range, a wealth of interesting topics, including vulnerability management, state and local governance, and many more. This morning, we've got a very interesting panel for you. Many of you probably know, but some of you perhaps don't know about CIS. They have created, through a collaborative process with a lot of the very important folks in the security industry, a set of best practices for how to set up cyber defenses, whether you're a small company all the way up to a large enterprise. And it's a really interesting document. I urge you to read it if you haven't, and we're going to get some more information on it here in just a sec. But along with those controls, recently, CIS has started branching out into discussing the privacy implications of the security controls themselves. And that's what we're here to talk about today. How do these two topics work together? How do they dovetail? And when you're implementing security, what do you need to be thinking about in terms of privacy? With that said, folks, I know that there are plenty of folks watching online. If you want to tweet questions at us, please tweet at @newamcyber, that's our handle, and use the hashtag #newamcis if you want to chat about the goings-on. And finally, before we get started, I wanted to plug, for those who are interested in the topic, another event put on by the Cybersecurity Initiative this Friday at 10 a.m. right here, on public-private partnerships aside from information sharing, because I feel like that topic's kind of been done. So without further ado, let me introduce our panel this morning. Directly to my left here is Rick Doten, Chief of Cyber and Information Security at the Crumpton Group. Yeah, that's right, great. To his left is Joanne McNabb. Joanne is the Director of Privacy Education and Policy with the Office of the Attorney General in California, in their Department of Justice. And finally, Tony Sager on the end. He's the Senior VP and Chief Evangelist for the Center for Internet Security. So I assume we'll be getting some evangelizing this morning. You got the message. Great. And so I want to start actually with you, Tony, sort of as the evangelist for CIS. Fill us in a little bit. Presume that we don't know anything about CIS, about the controls, or about the privacy work. And just give us a brief rundown of what exactly these are. Sure, thank you. Very briefly, the Center for Internet Security is a nonprofit in cybersecurity. And the tagline we used to use is making best practice common practice. Let me tell you where that came from. I spent 35 years at the National Security Agency, all of it in what we call cyber defense today, and ran the vulnerability-finding part of defense. So things like penetration testing teams, people that find zero days in software and that sort of thing. So I've had the luxury of studying this at very large scale, right?
Seeing flaws that are found in the basic mathematics, in the software, in the individual products, up to and including the operational world. And so that's where terms like red teams and penetration testing teams come in. And I'll say, having watched that for decades, there were sort of two observations that led us to where we are today. One is that you see the same problem over and over and over again, right? We're not getting attacked by magic every day; what we're getting attacked by is millions of repeats of the same thing over and over again. So just seeing that happen at very large scale. And it wasn't that people were lazy or they didn't care, it was that this is a very operationally complicated problem, right? There's just a thousand things to deal with, which leads to the second observation. Defenders today have more tools than ever. I grew up in the 70s in this, right? We didn't have any security tools to speak of, we really had no security market to draw upon, we had no notion of threat feeds and the things that we have today. So we had really very little then; we have mountains of that stuff today, and the problem today is we're overwhelmed by it. So we go from this very narrow focus, where only the government cares, to this wide-open marketplace, which comes with lots of great power, but also lots of noise. So, you know, the same problems over and over again, and people are just overwhelmed. And so to help with that problem of drowning, the term I used was the fog of more: more tools, more technology, more training than ever, but we're getting worse. So I gathered a small group of friends and said no one leaves the room till we all agree on a small number of things all of our friends should do, right? And the temptation in security, you know, if you want 10 security opinions, hire five consultants, right? There's just so much noise and so many opinions, and every one of us wants to solve world hunger. You know, oh, here's a list of 100 things to do, and if you get to that I'll give you 500 more. So the idea is keep it simple, don't try to solve everything, but get started. Help people with the really foundational, basic steps. Things that we know have value against real attacks, not postulated attacks, not the million things that might happen, but things that really do happen. So that simple idea was a two-page letter. I never dreamed it would become what it has become. The idea got picked up by the SANS Institute, which is the big teacher of this practice in this business. They turned it from five friends into 5,000 friends on the mailing list. Eventually I retired in 2012, took the project back over and spun it into a non-profit company. Because I thought the idea was important enough that it really needed to be independent of anyone's business model. You needed a neutral, market-neutral place, so that's what brought me to CIS, the Center for Internet Security. We're also the world's largest independent producer of security benchmarks. So the controls are kind of at the level of operational practice, right? What technology should you have? How do you run it? What kind of procedures should you have in place? Benchmarks are at the lower level. How do I configure a Windows desktop? Or a mail server or a web server? It's very detailed, beyond the ability of most people to do on their own. So the model has been the same, it's basically committed volunteerism.
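To give a flavor of the level a benchmark works at, here is a minimal sketch, not an actual CIS Benchmark item, of the kind of low-level configuration check such guidance encodes. The file path, the settings chosen, and the expected values are illustrative assumptions for an SSH server on Linux, not prescriptions from CIS.

```python
# Illustrative only: compare a few sshd settings against hardened values,
# the way a benchmark item pairs "setting" with "expected configuration".
from pathlib import Path

EXPECTED = {
    "PermitRootLogin": "no",         # no direct root logins
    "PasswordAuthentication": "no",  # require key-based authentication
    "X11Forwarding": "no",           # disable X11 forwarding unless needed
}

def check_sshd_config(path="/etc/ssh/sshd_config"):
    """Return (status, setting, expected, actual) for each expected setting."""
    settings = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        parts = line.split(None, 1)
        if len(parts) == 2:
            settings[parts[0]] = parts[1]
    findings = []
    for key, want in EXPECTED.items():
        got = settings.get(key, "<unset>")
        status = "PASS" if got.lower() == want else "FAIL"
        findings.append((status, key, want, got))
    return findings

if __name__ == "__main__":
    for status, key, want, got in check_sshd_config():
        print(f"{status}: {key} expected={want} actual={got}")
```

A real benchmark covers hundreds of such items per platform, which is exactly why most organizations cannot produce one on their own.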
We bring the people together; the non-profit is small but mighty, and so we are really kind of an organizing principle, like a community organizer that brings all these folks together. So we have adopters around the world and lots of success. And then starting a couple years ago, we introduced a major new version, version six, in October 2015, and it was time to make a stand on privacy. We felt very comfortable with the IT security part of what we were doing. I think it stood up very well in the marketplace, lots of adopters, a lot of support from industry. But as we know, everything you do in IT is reflected somehow in the management of data: collection, storage, everything about the permission that people have to do things on the enterprise, to look at information and so forth. There's a reflection of that in the IT, right? If you don't do that well, you can never get the privacy problem dealt with. So it was time for us to put some ideas out on the table and gather experts, and the beauty of the CIS model, one of the great virtues of our industry, is that it's full of really amazing people who are willing to give up their time if we can just find the common cause to rally them around, including the folks here on the panel. So the idea was for us to take a stand, help people with this problem beyond the IT part of it, and then look at the implications of the data and the controls and how we're gonna address them in privacy. So I won't go on, because I know you wanna talk more about that in other dimensions, but that's the kind of rough story, right? We're really about helping people understand the problem that they need to solve, translate that into action, which is really what the controls are, and then, the phrase I use, it's not about the list, right? If you want a good list of things to do, you can go to lots of places. Really, CIS is about creating a community around the idea so that people can achieve what's on the list. How do we help each other? How do we learn from each other? How do we organize ourselves relative to the marketplace? How do we look at the topics that are associated with or reflected in the technology? So that's really the story behind the privacy part. Thank you, Tony, for that intro. That helps set the stage really well. Rick, Tony talked a lot about volunteerism, and I hear you're one of the idiots, I mean, excuse me, experts that offered your time. So talk about it a little bit from the volunteer's perspective. How does the process work? And why were you interested in sort of chipping in on, well, CIS broadly, but also the privacy implications in particular? Well, with CIS broadly, near the beginning, many years ago, I kind of got involved because I knew a lot of the same people. It's a small community, particularly here in D.C. And we were talking about it back when it was the Consensus Audit Guidelines. And so we started contributing to that. The privacy element was one that I found when I was part of one of the reviews of one of the versions, and I said, you know, privacy is only mentioned once in this. But I said, on the plus side, ISO 27002 was only mentioned twice, so you're not doing too badly there. And so we talked with Jane Lute, who was president at the time, about how we needed to do something on privacy. And as Tony mentioned, that was where the privacy impact assessment guide first came out, with version six. And we had all worked on that.
And then, knowing that, we wanted a more thorough privacy addendum to talk about how these controls apply to privacy, because one of the things that was very dear to me is understanding the difference between privacy and security. I've been a security guy my entire career. And about 10 years ago, I had a customer who had a privacy issue. And I said, oh yeah, that's confidentiality. I got that. And they're like, no, no, no, this is privacy. And I was like, no, no, we got it. I mean, we're professional hackers. We've done risk assessments. We know all this stuff really well. It's fine. And in the end I kind of got argued down, like, I'm wrong. You know, it's not just encrypting things and putting them over here. It's the appropriate use of data based on regulation or the consent of the person who gave it to you. I mean, that's a very different way of managing this data. And keeping that in mind over the years, and when I became a CISO myself, having gone through the Safe Harbor process back when that was the thing, and understanding all the things as a defender that were privacy, we really found that there was a big gap in what IT people understood about privacy. They're holding on to private data that they need to control, and they don't realize it. And so that's really what got me into pushing this and wanting to work with a group to produce this document. Great, thanks. And then, Joanne, let's turn to you, because you're a bit of the outsider on the panel. You work for the AG's office in California. Tell us a little bit about why you're here. Yeah, I'm not a security expert by any means. Nor am I an attorney, so you need to know that, because I have lots of opinions which I'm perfectly willing to share. They are not legal opinions, they are my opinions. One of the things we do in the California Attorney General's office, in the privacy unit where I work, is we get reports of data breaches. Under our breach law, entities are required to report to the Attorney General whenever they have a breach affecting more than 500 Californians. And I and my staff are the ones who get these reports and who look at them before we decide what all we might do. And when we get them, we look at them all. Some of them we look at very closely. Sometimes we take an enforcement action in relation to a breach, but most of the time we don't take an enforcement action. But a breach is an opening, and it opens up practices. And so we can look in and then ask more questions to find out what are the practices regarding data that might be problematic. And when we see things and see that there are lessons to be learned, we write about them and publish them. Nearly every year we publish a breach report showing what we saw in breaches reported to us that year and what recommendations we have. So in 2016, we were looking at four years of data breaches. And what we saw is similar to what Tony remarked about what he saw defending at NSA: while there are a variety of types of breaches, they had many elements in common, and the vast majority of them were enabled or facilitated by known vulnerabilities for which there are known controls. And we saw kind of the same things that Tony was talking about. The metaphor I use is, if you're protecting your house, maybe somebody could be coming overhead in a helicopter and drop a spider robot down your chimney that would automatically find the hidden jewels and crack the safe and take them out.
More likely there are far more people who are coming in through the unlocked doors, the open windows, the fact that you don't have a burglar alarm and that you're not fenced. So if you can get the burglar alarm, put up the fence, lock the doors, close the windows, you're gonna defend against most of the attacks. Either you'll stop it, or you'll slow it, mitigate it, discover it faster, and it won't be as devastating. So in our recommendation in that year's report, we wanted to move the ball forward by pointing to something more concrete than what cybersecurity law says. Essentially all of the cybersecurity laws are not real specific about what to do. HIPAA perhaps gets closest to being specific in some ways. What they say commonly is use reasonable and appropriate security to protect the information. And so we wanted to put some flesh on the bones of the legal concept of reasonableness. And so we started looking at security standards. And we looked at the NIST cybersecurity framework, we looked at ISO 27002, we looked at a number of standards and found that they had many things in common and they were hundreds and hundreds and hundreds of pages long. And certainly I didn't know the fog of more term, but there was certainly more. And then we came upon what I had previously known as the SANS Top 20, the CIS Critical Security Controls: 20 things that are the top priority things to do that will give you the most bang for your buck against the most common kinds of cyber attack. And we spent some time reviewing these, I didn't know anybody at CIS, and decided, okay, this is gonna be what we're gonna point to. So in that report, we made a recommendation that the security controls, as relevant to one's environment, would constitute a floor of reasonable security, and that the failure to implement the relevant controls would be an indication of unreasonable security. Reasonable security might take more than the controls, but this is the bottom line. And that's how I got involved with CIS and with the controls. Who's the audience? You know, California is a diverse state, obviously, and you have Google, who probably has better security than the CIS baseline, I would imagine, all the way down to a corner store somewhere. Is there a target that you guys are aimed at? Well, we're aiming at everybody who holds other people's information, because they have legal and ethical obligations to protect it by law. But in our outreach efforts, we've particularly focused on small businesses, because they don't have the technological and other sorts of resources to get a good grip on security, and yet very small businesses can have a whole lot of very sensitive data; think of medical practices, legal practices. And we saw in our data, in our breach information, that they are being targeted. In 2016, for example, 24% of the breaches reported to us were in small businesses by the SBA definition, which is specific to the industry sector. So they are being targeted, and we felt that they needed help. Let's dig in a little bit more on the privacy implications themselves. Tony, just walk us through how the privacy implications guide sort of marches in lockstep with the CIS controls themselves and addresses each one individually. How do the privacy implications get an organization, be it a small mom-and-pop shop or something bigger, from zero to at least some privacy?
If you look at the history of, I'll say, security frameworks and the documents, some of the things that Joanne plowed through to try and make sense of it, you have everything from what I'll call cosmic frameworks, very high level, all encompassing, very comprehensive, but often so high level that they don't seem to change behavior much. They are nice for what they do, but they're the classic solve-world-hunger exercise: write your report on how you'd save the world. And then of course you have to set up a massive bureaucracy of vetting, assessment, enforcement reports that probably never get read, and that sort of thing. Down to, I'll call it, highly specific, very granular. We've always kind of aimed for the middle. That is, the more technically specific you make things, the less broadly applicable they are, right? And every enterprise is different in the sense of its basic mission, the IT it has, the kind of people it has, the way the management thinks of the role of IT in the business. So there's no one size fits all, yet there's gotta be something better than cosmic. So we aim for this kind of middle ground, and someone just told me the other day, one of the things we love about your work is it's very prescriptive. I said, you know, I get about 5% of people who say, we really can't stand your work, because it's so prescriptive, and so we get that. Or it's not prescriptive enough. Exactly, so there's no one answer, you have to think about what level you aim at. So in the technical controls, we aim towards being specific enough that people know what to do, yet they still have options on how to do it. When you look at privacy, and sort of look at the content and the data, you can't be quite as prescriptive. You have to help people make decisions, right, good choices. And so everything you do in cyberspace has some kind of implication around privacy. The data you collect, who has access to it, how it's stored, how it's accessed, how it's combined. And so you have to help people make good choices. I mean, that's kind of one of our life philosophies at CIS, right? About helping people make an intelligent choice, number one, and then be able to explain to others how they made that choice, right? And these are really two distinct and really important problems. So helping people is a good thing, but at the end of the day, they're gonna have to answer to a breach report, an auditor, an insurance company, you know, a legal system. So they have to be able to explain, at some level, what they have done. So that's the reason we took that basic approach. In my background, which I mentioned earlier, we didn't call it privacy, but you know, we had all these operational missions, like the red team testing, blue team testing, the unfortunately named COMSEC monitoring, where we would listen to friendly forces to see what we were giving away for free to the bad guys. Every one of those has the same implications that we're talking about today, right? Every one of them. NSA was doing that work because of some legal framework that defined what we were authorized to do and therefore allowed to collect certain data. We had to build in technical controls, right, to minimize the opportunity for failure. We had to identify the kind of data and what's the minimum amount of data needed to achieve the purpose?
Okay, can we make the rest of it anonymized or deleted, you know, at the collection point and not put into the database? And then... You're basically building in the steps. Yeah, and then the role of people, right? The people that are actually doing analysis, right? Who is authorized, what process do you have to make sure that they have the right skills? What process do you have to recover from a mistake? How is it reported? How is it cleaned up afterwards? So we had built all that, you know, as a framework for these operational-type missions that we had at NSA. So looking around at what we were doing in the controls, I thought, you know, we haven't done any of this. Great folks like Rick said, you know, you need to think more about these kinds of problems. Jane Lute was the CEO of our non-profit at the time. She had a major emphasis on that when she was at Homeland Security. So really the privacy appendix got put into version six between Rick, I think, coming to talk to me, and Jane. And I said, Jane, yes, we will do something about privacy and the controls someday. Because I had already finished with version six. It was ready to go. She says, no, someday is now. We're going to do it now. And I think we grabbed Rick at some point. Yeah, and we did it in the last two weeks. Yeah, so I said, can you turn it around in two weeks, because we've already committed to a release date. And to their credit, I mean, great people jumped in. This is the beauty of our model, right? These were folks who had the background that we couldn't afford to hire, let's say, and the passion for this as a topic. They immediately came together, grabbed their friends, and some of the former privacy folks at DHS became part of the team. Just a pretty amazing collection of ideas got put together in a hurry. But to kind of summarize, this was about how can we help people make sure that they thought about all the implications of this. We lined it up with the controls because we often think, well, we'll worry about that privacy stuff later. We'll see what we have and then we'll figure out how to control it and who should know. If you don't design that in, it's almost impossible to get right at the tail end. And we wanted to tie it together at the control level so that we could make a choice and say, while you're thinking about this piece of technology to deal with this part of the cyber problem, you realize you're collecting this kind of data. What are the implications of it? Who should have access to it? You're going to store it. How much of it do we actually need for downstream analysis? Let's make those choices right up front. And that actually raises another interesting question that I kind of want to put to all of you, but Joanne, I'd love it if you could sort of address it first. And that is, in reading through the privacy implications guide, you would be blind if you didn't notice that almost every recommendation says, first of all, get IT and legal together and get them to talk to each other. Two questions, I think, Joanne, from your perspective. First of all, I imagine that the specification at every single stage comes about because that's not happening as often as we'd like. And then secondly, why is that so crucial? Well, I think the reason for calling on legal to get involved all the time is that, in large companies, that is generally where the privacy responsibility is situated. It's either in the legal counsel's office or it is part of legal in some way.
So the idea is to bring the people responsible for privacy together with the people responsible for cybersecurity. And one of the things that appealed to me in working on this project of the privacy implications document is I think there are operational opportunities to improve both the security and the privacy by getting them to work together. There's a kind of different mindset in the two practitioners, I think. Security practitioners are technical, process-oriented. Privacy people are often not always as accustomed to thinking practically and procedurally. I mean, yes they are, yes they do, you have to. You have to be more than somebody who knows the laws to be a privacy officer. You have to be able to work with people and understand the processes of your organization. And I think that they can learn from each other by working together. So there are opportunities to make both better, such as with one of the first two controls, about knowing what you have: knowing what you have and inventorying and gathering information on the hardware and the software that you are using in your organization. Well, that's an opportunity to do a data inventory right along while you're doing it. And doing a data inventory is a really difficult thing that privacy officers always need to do. You need to know: where is the kind of information you're supposed to protect? What kind do you have? What does it connect to? You're trying to draw a data map, which multinational companies are all gonna have to master, because the new European regulation that takes effect next May requires it. And I look forward to a continuing proliferation of tools to assist in that. But even in a more rudimentary way, when you're inventorying the hardware and software, that's a good time to start on the data inventory. So that's just an example of how they can improve each other. Great. Yeah. Rick, if you have anything to add to that question, I'd love to hear that too. But I also wanna throw another one your way which is, I think, a little bit related perhaps. And that is, you're a practitioner. You've done this in the field, unlike the rest of us who just opine about it. Well, okay, that's not true, but it's true for me. And so, what are the challenges? What's the downside of the privacy implications guide? What's the hardest thing for your average company to do that played out in there? Well, the biggest challenge is just internal company communication. Technology's always easy. That's why I asked you that. I figured that was the answer. And so, yes, and that's why we really footstomp the fact that we need to get these two groups together. Because in each of those, if you read through it, there are examples, even in the inventory, like, all right, well, executives have this type of phone or this type of device. Well, if I know that and that leaks, then everyone knows that if you have this device, you're therefore an executive. Or in monitoring, particularly with mobile devices: if I'm monitoring mobile devices, I know where people are. Do I need to know where they are in the world at any given time? Does the chief privacy officer know that there's a capability by which everyone is being tracked all the time? And is that in the privacy statement for the employees? So those are just kind of things that we're bringing up as questions that neither side has the answer to.
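A minimal sketch of the combined-inventory idea described above: extend a hardware and software asset record, the territory of the first two controls, with data-inventory fields so the privacy side gets the start of its data map from the same pass. The field names and example values here are hypothetical, not drawn from the CIS documents or the California report.

```python
# Illustrative only: one record type that serves both the security asset
# inventory and the beginnings of a privacy data inventory / data map.
from dataclasses import dataclass, field

@dataclass
class AssetRecord:
    hostname: str
    owner: str                                            # business owner, not just the IT contact
    software: list = field(default_factory=list)          # what runs on it (Control 2 territory)
    # Privacy-facing fields, filled in during the same inventory pass:
    data_categories: list = field(default_factory=list)   # e.g. "PHI", "payment card"
    data_subjects: str = ""                                # employees, customers, patients, ...
    retention: str = ""                                    # how long the data is kept
    shared_with: list = field(default_factory=list)        # downstream recipients

inventory = [
    AssetRecord(
        hostname="billing-01",
        owner="Finance",
        software=["PostgreSQL 13", "billing app v2.4"],
        data_categories=["payment card", "contact info"],
        data_subjects="customers",
        retention="7 years",
        shared_with=["payment processor"],
    ),
]

# Security can answer "what do we have?"; privacy can answer
# "where is the regulated data?" from the same records.
print([a.hostname for a in inventory if "payment card" in a.data_categories])
```

The design point is simply that both questions get asked once, during the same walk through the environment, rather than in two separate exercises that never reconcile.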
Because I always say that cybersecurity folks and lawyers, and Joanne is not a lawyer as she makes very clear, but she plays with them all the time, we don't make the final decisions. We provide counsel. And so therefore the biggest challenge is like, yes, okay, this is what we need to do, but this group has never talked to this group before. No one likes working with lawyers. I love working with lawyers, by the way, to all the lawyers out there. And so. Have I told you that I'm a lawyer, by the way? Oh, there you go. See, we knew we liked you, there you go. So getting them together and saying, all right, well, do you realize that this full packet capture we have in place collects everything, including stuff that is privacy protected, and case data for litigation hold, and PHI for people who are emailing their doctors. And no one knew that that was there sitting on the server. Well, we need to understand how to protect that data. Question, and that is, as you guys were putting this together, how do you solve what must be a really inherent conflict between a security engineer's desire to log literally everything for later and the clear privacy problems that that presents? Well, that's where they need to get together. It's like, well, Chief Privacy Officer, what would satisfy you? It's like, okay, well, that it is isolated, that we have access control, that we know who gets it, we have reports on who gets it, we know when, if ever, it is looked at, and that there is a process, almost like a warrant, you know, by which we allow someone to go access it. You know what I'm saying? There's a defined retention period. Yeah, exactly, retention period, things like that. Because that's where the challenge comes in: if they find out, oh, we have this data, no, we can't store it. Well, there's a business reason for storing it, from an IT risk management perspective, from a forensic standpoint. And there is a regulatory requirement or a company policy against having it. So how can we come together and resolve it? Because all of these can be resolved with policy, or just privacy statements, or things that employees sign off on: I acknowledge that I have no expectation of privacy and that they know where I am at any given time, and they will not tell my spouse or use it in any case without my consent. You know, it's like there are ways around it, but if you ask each of them individually, they'll say, no, you can't do that, or yes, we have to do that. That conversation piece is exactly right, right? We'd do anything to avoid making a hard decision up front. And this discussion about the implications is a hard decision, you know, sort of thinking it through. And you're exactly right that technologists think, well, we can collect it, so we'll collect it. Because we might need it someday. It'll make our mining algorithms really clever if we have all this. As compared to thinking about, what purpose do we need to achieve with it? What's the minimum set of data it takes to achieve that, and then who should be authorized to deal with that data, what processes, what people interpret it. And those are hard to tease out. And we'd all rather get to the more interesting problem of installing stuff and running stuff. But if you don't do those things up front, you'll create unsolvable problems. I mean, this is the story of many of the, what I call, national-level threat sharing things that I've been involved with for a long time.
You know, well, we'll just share across the public and private sectors and everyone knows stuff about cyber incidents and we'll all share it and we'll all be really smart, right? Well, you never get past the initial stage, because you get mired in this decision about the data, how it was collected, and we always try to solve the worst case first. Store everything, right? Then the lawyers go, uh-uh, no. So now you're in the three-year discussion that leads nowhere. And that's exactly the thing: the IT side doesn't wanna tell the attorneys, because they think they're gonna have this argument that they don't feel like they have a defense against. And so that's another reason why we wanna kind of encourage and empower both sides to say, okay, let's come to agreement. You know, you mentioned that small and medium businesses are sort of a big portion of the problem here. Those sorts of companies oftentimes don't have a privacy officer or even a lawyer of any kind, except, you know, maybe an HR person or a corporate formation person. Or until they have a breach. Right, until they have a breach. Yes, and then they've got lots of lawyers. But, you know, what advice can we give to really small organizations that just can't get a lawyer? In the outreach we've been doing, we've had sort of two basic messages. One is, you are a target too. We have seen that. They feel like, well, nobody's gonna come after me. You know, I'm just over here in nowhere and I'm the small little thing. And yet, you've got all this data and you are visible on the internet and you probably have many doors and windows open compared to some of your bigger colleagues. But we want to go beyond be afraid, be very afraid, too. There are things you can do that can significantly reduce your risk and reduce the seriousness if you do get attacked. And so, one thing we found really great about working with CIS is they have recently developed some new materials that are targeted to small and medium-sized enterprises to give them a sort of implementation guide to the controls, which are very well explained in a document that's only like 90 pages or so, compared to the NIST framework, which is 300, I think. It's even smaller, focused on the first five or six controls, a smaller document that organizes it in a way that makes sense and lays it out for them. Other materials that are in the process of being developed go on to a next step, to help small business owners work with consultants, because that's the way it's done. They're not doing the things themselves. They're hiring somebody, and they are often enough hiring somebody who's an IT person, not a security person, whose mission is to get the data flowing and keep you from calling for help. So, leave everything default and open, because that's gonna generate the fewest calls to the help desk, but that's not necessarily the most secure way to do it. So, we're glad that CIS is gonna be working toward helping business owners question potential consultants and then help consultants who are not security experts see what some of the basic things are that they can do. And even if they are security consultants, they don't understand privacy, because they've never had to deal with it on both sides. And then the other element of small and medium businesses is that oftentimes they are suppliers of goods or services to larger companies.
Those larger companies, based on industry guidance, are applying more requirements to small and medium businesses for security and privacy, and for transparency in that. In its supply chain, kind of. Yeah, exactly, in the supply chain, because as well as being a target themselves, just because they have access to infrastructure that can be used to make money in various ways, they are also a conduit to a larger target that they supply. For example... You know, yeah, exactly, Target as an example. Great. Rick, walk us through a little bit. You're one of those outside consultants, essentially. If a company, small or large, whatever, comes to you and says, hey, we just read about the news today, or yesterday as it was, and you know what? I'm starting to get scared. Can you help me? What do you do when you sit them down? How does this actually progress? Well, personally I start with just talking about the business. What do you guys do? Do you make something? Do you create something? Do you share with people? Are you supplying something? Because what would be a bad event? If you have intellectual property stolen, if you don't have access to your data, if other people don't have access to your data, if something blows up, these are all bad things. Okay, now what systems support that process that you don't want to go wrong, as we just described? And so it really kind of backs down from that, because IT risk management is not about protecting IT, it's about protecting the business. And then as we work down, that's where the controls come in handy. It's like, do you know what you have on your network? Asset and configuration management, controls one, two, three, because I can't protect it otherwise. The analogy I use is, if I'm a sheep herder, I cannot do my job if I don't know how many animals are in the field, which ones belong to me, and which ones are sheep. If I don't know that, how can I do my job? And that's what HIPAA has, just like GDPR is gonna have: knowing what systems are holding this data, and what the data is and where it is, is the first step. So start with that. If you don't have that, don't try to go buy cool new technology that protects against this thing. Because a lot of things, this most recent incident that's hitting the world, or the second wave of it, is still a failure of just basic hygiene. If you were doing what you need to do anyway, it's not like this was an exceptional circumstance. So start with the basics. And so that's how we do it: let's understand the business, what's important, what systems support this aspect of the business, what data is important to you, who do you share it with. And sometimes there might be privacy implications to that that they are not aware of. It's like, you're collecting customer data, because unfortunately, organizations that don't have strong regulatory requirements seem to think that they can just punt on that. You know, it's like, well, the only thing I have to worry about is data disclosure laws, of which there are 47. And we do a lot with multinational companies or international companies, a lot in Canada, a lot in Mexico, a lot in Europe. And so they all have different kinds of elements. And so understanding what is that floor, as you said, that you need to be meeting; but even if you have one of these regulations, that should be just the floor.
You really should be building a security program, an IT security risk management program, on how to protect the business. And therefore these requirements, these regulations, these laws are just a reporting exercise: oh, okay, we need that, and we're doing this already, because we're doing what's important to protect the business from these bad things which I just identified, and then realized which systems support those. Well, you just said the R word, which is everyone in D.C.'s favorite term. So speaking of which, regulations. Oh. I thought it was going to be risk. Risk is what this is all about. They don't work in sign language. No, we don't care about risk. We care about regulation. But Tony, from your perspective as someone who is at CIS, did you anticipate that these, not just the controls themselves, but also the privacy implications guide part of it, did you anticipate that a state would sort of grab these and say, you know what, this is the floor? I knew someone would. We were lucky to run into Joanne. I remember when we first saw the data breach report, and our small team was looking at each other. Did you make this happen? Did you make this happen? Did anyone know how this happened? And so I think someone cold-called California, right? We turned to the Attorney General's office, and we tracked Joanne down. And remember, we're running this with a really small company. We did not have some massive marketing department and some big outreach group. And so a lot of what has happened has been very organic. You know, we put it out there. And I can't tell you, by the way, the amount of satisfaction we get when someone really bright, doing something really important, independently latches onto what we're doing. But the intent was to put it out there. And that's why I really pushed for this movement from the sponsorship of a very noble for-profit company, the SANS Institute, into a non-profit home. Because in a for-profit setting, no matter the noble intent of SANS, governments can only interact with a for-profit in certain ways, right? There's a certain formality, and legal requirements around that, and so forth. And the basic idea was this really should be, this is a classic community shared-problem, shared-solution space, right? If you have a problem in cyberspace, I guarantee you there's millions of others like you, you just can't find them, right? You have no mechanism to bring you together, share stories, how do I solve it? How do I convince my boss? How do I convince the regulator, et cetera? So the idea was to make it as open and accessible as possible with the least sort of friction and cost and so forth. So the fact that California jumped onto it was a really pleasant surprise, but it's not surprising in the sense that this is really what we wanted to have happen. Then the question was, what do we do next? Right, so we talked to Joanne. We found out, and she relayed the story, that one of her biggest concerns was small and medium businesses doing business in California. And she'd reached a conclusion that we had reached: they are never gonna be able to defend themselves. It's just, you know, as a national security strategy, as a matter of social goodwill, thinking that there's suddenly gonna be an army of great security practitioners coming out of the schools, and great tools, and small and medium businesses are gonna hire these people, it'll never happen. It's not scalable.
So the question is, what can we do then to help people who are never gonna become experts? We don't ask every small or medium business to have doctors on staff, and their own pilots for the private jets they don't own, and so forth. So how do we scale this? So the fact that someone got it says, then let's figure out how we can help Joanne achieve what she wants. I have a list of the top 10 things to do, which actually has about 150 things on it, and things pop into the top one or two when someone great shows up at my doorstep who really cares, like a Rick or a Joanne. So we decided we would speed up some work we were gonna do to support small and medium businesses. The first stage of that, Joanne described, which is make the language more accessible. Security people are great at talking to other security people, right? We love the language, we love the acronyms, we can do this all day, and everyone else is on the side going, oh, what are these people talking about? So we wanted to change the language of the controls, focus on the plainest English, focus on the kind of questions every small business should be able to answer, and then help them with what resources are available to them at either low or no cost that would help them do that. I was inspired by a paper that someone had written years ago, who was in a small business, found themselves with zero resources, found the original SANS Top 20, and said, I can do most of that if I just use the stuff that no one else is using. We already have this here, and here's some freeware here, and here's some open source here, and they kind of cobbled it together, and they wrote this paper. So I grabbed that paper, and that was one of the inspirations. In fact, I found the author, who's now the CISO for the Federal Reserve Bank of Atlanta, by the way. So he went from working literally on a shoestring in security to running a major enterprise there, and he's a part of that team, right? And that's the kind of person that wound up on the team. So the idea was then, for folks that are trying to solve a problem, what can we do from CIS to help them institutionalize it? Because again, it's not Joanne's problem, it's everybody's problem. She just happens to be the inspiration to get going on it. Well, and I think it's a general interest in helping small businesses, because cybersecurity is the ultimate tragedy of the commons. The weakest link exposes everybody. It exposes the internet. So we need to bring small businesses along too. I live in a rural county, and my doctor is a one-person practice, and about two years ago, he figured out what I do for a living, and I cannot leave his office without him turning to me and going, please just tell me what to buy. You know, and he can't figure out what to do about HIPAA, right? And this is literally a one-man office with two people who work the desk and the billing and the records and the patients. And again, you know what? I want my doctor reading medical journals; I've got the cyber journals, right? And we collectively want our experts in their fields to become better at that, not expect them to become better at the cyber. And it's a good analogy, the medical field is a good analogy, because any pandemic starts with, you know, the sick, the old, the young, those who are not prepared to be able to deal with it, and it's the same kind of thing.
And there's some kind of tipping point of how many people are vulnerable that could really, you know, spread something, and what happened yesterday and a few weeks ago is kind of an example of that, but to fix it was really easy. I'm curious, just because New America is a nonprofit: how do you find, or have you given any thought to, how these controls are put in place by a nonprofit versus a business, or is it just exactly the same? No difference at all. When I go into an organization, the best indicator of how well they're doing IT risk management is the buy-in of the executives. If they understand their role in risk management, that IT is a function of risk management to the business, no matter what the business, no matter what the industry, no matter how big or small, then that is a measure of success, because the hardest thing to get is that governance process, as we kind of talked about. And so, as long as the leadership of the organization understands that this is important, this is a business risk, I need to manage it just like any other of the business risks that I track, then that's better than pushing it down, like, oh, I have IT people, they're smart, they know what to do to protect me. That's the wrong answer. And non-profits, well, they may not have profits at stake, but they have a lot at stake. They depend upon the trust of supporters. I remember years ago, one of the earliest data breaches, when California was the only state that had a law, when I worked in another privacy office in state government, a couple of blood banks had a breach, and they had Social Security numbers. So imagine how you'd feel about giving blood if you get a letter from your blood bank saying, sorry, they lost your Social Security number. So it's really vital for non-profits too. I think government is another example of having a special responsibility for cybersecurity and privacy, but right now let's say cybersecurity. Not only do they have the same legal obligations that the private sector has when they have this data, but it really depends on trust, and a lack of trust in government is a bad thing. People have to give their information to the government. You can't say, I'll go to another DMV, I don't like the way you handle my information. You don't get to do that. You have to give it to the IRS. You have to give it, which really doubles the obligation on the part of the government. Much of our social fabric runs on trust. And that's why I emphasize it's not good enough to just do a good job. You have to be able to explain to others, right? So they have confidence. And we've entered a new world. I grew up in a world of what I call trust as a binary condition. You work for the government or you don't. You have a security clearance or you don't. You're one of our trusted contractors or you're not; information is classified or it's not. Today, trust is really a dynamic condition that's negotiated constantly. So no one trusts anyone all the time. We trust each other for a purpose, for a particular span of time typically, for a financial transaction. You're in my supply chain now, but maybe you're not next year. And so you absolutely have to have universal ways to talk about what's the problem we both face now in a business relationship. What are you doing about it? Here's what I'm doing about it. Let's negotiate whether that's acceptable to both of us. And this is just part of life now, but right now it's kind of implicit, right?
We don't think about it so much. Joanne, what's been the feedback from organizations in California, companies and so forth? And I know it's only been a few months, but I'm curious if you've gotten phone calls from people screaming on the other end of the line. Well, initially when the report came out, like other publications that we do, it got a lot of coverage in the law blogs, most of which were saying to their clients, we should do this, which is great. There was one that said, they've made a ceiling a floor, this is undoable, which of course we disagree with, particularly since our recommendation, which was not a law but a recommendation, was to apply the controls that are relevant to your environment. So we're not saying everybody do absolutely everything exactly as it is described. We think reasonable and appropriate makes sense. With small businesses, there's been a very great interest in finding out more. We've done more than a dozen workshops in the past year with local chambers of commerce, with county medical societies, dental societies, county bar associations. Law firms are an interesting challenge; they're coming to recognize that they are the target of a lot of interest, because they have a lot of really valuable information and may be less well protected than their clients. Hmm, so there's been a real interest. Are they able to keep the confidence? Yeah. Which I'm sure they're aware of as well. What advice do you have for other states around the country? Is this something that you would, are you evangelizing like Tony? Well, we're certainly... I think state attorneys general are very focused on cybersecurity, because they all have data breach laws. And remember, a data breach opens up practices. And so you see what's going on that you wouldn't even have known about before. I mean, sometimes people ask, how come there are so many data breaches? There didn't used to be. Well, maybe there were, and we just didn't know. So I think they're aware of that. We aren't proselytizing the controls. We're proselytizing: do all you can to educate people on how they can best protect the data that's entrusted to them. Tony, do you have plans to? He's the evangelist. Right, as the evangelist on the panel. It's a living. Are you guys looking at other states? Obviously, California is a great example. It's a huge state, it has a lot of tech, it has a lot of businesses. But there are other states out there that are also targets. So we already have a number of states that are adopters of the controls and are kind of heading where we're going; they're part of that. CIS is also home of the Multi-State ISAC, the Information Sharing and Analysis Center. So that is our constituency, right? For a big part of CIS. So we host events, webcasts, define best practices, help people recover from bad things, all that sort of stuff. And then we have relationships with folks like the National Governors Association and other, I'll call them wholesale, approaches out into state and local government. So there is a lot of interest in dealing with this problem. As Joanne said, the typical state and local government is woefully underfunded and undermanned for this kind of problem, right? They are not gonna hire, you know, thousands of brand new cybersecurity practitioners. They're not gonna bring in million-dollar consulting gigs. They're really doing this very thinly.
And many states, I think, and you know better than I, are not tightly managed at the central level. They're what I call horizontal: lots of little agencies trying to do it on their own, their own IT, their own privacy, their own security. So there's a lot of challenges in dealing with state and local. It's very large, very diverse, but a lot of it is, like, the same problem. Well, I always say there's no federal government; there's a set of tribes that loosely operate together on a good day and fight with each other at budget time. Yeah, that's more or less right. And that's not even taking into account Congress. At this point, I think we've been talking for an hour here or so, and I'm sure we've got folks who probably have questions. So again, if you're watching the live stream and want to shoot questions at us, you can use the @newamcyber account, if you want to throw an at sign in front of that, and also the hashtag is #newamcis if you want to ping questions at our panel. And with that, I'll... yeah, yeah. She's not your lawyer, though. Yeah, yeah, briefly. The General Data Protection Regulation, GDPR: the European Union has had a comprehensive privacy legal scheme since the early 90s. But the existing one is called the directive, the Data Protection Directive. And that means it's up to the individual member nations to pass their own national laws that are consistent with the directive. The new regulation is going to apply as law to all the different EU nations. So that's one thing that's different about it, but it also includes some newer provisions, because imagine laws passed in the 90s and how they're missing certain things. So among the features of the new GDPR, the regulation, are this requirement to do data inventories and risk assessments, and what are some of the others? Well, the important thing is that it applies to any multinational company or organization that has employees, customers, or any offices in the European Union. And so some of these small businesses, including anyone who has a website that has customers in the EU, will have to be able to abide by this. So that becomes very broad and very impactful to folks in the US. Because before there was this, European privacy laws have always been more strict than in the US, and that's a whole other panel. But the basics is that because they did not feel that anything in the US was safe enough to hold EU privacy data, there was at one time the Safe Harbor, where a company could individually say, hey, I'm holding it to the same standard. And then that went away because they didn't think it was good enough, and then Privacy Shield for a few months. And now this GDPR that's coming out is gonna be there for the longer term. So we talk a little bit about it in the controls, but we don't wanna get into it, because again, that's a whole other paper. But it's gonna impact a lot of people in the world. And we may get some benefits out of it. And just as a big state like California can have an impact on other states: even before other states had data breach notification laws, after one big breach in 2005 in which the company did not notify in other states even though people were affected, companies started notifying in big multi-state breaches, resulting in notification in states even before they had breach laws. Just notify everybody, because it's easier than piecing out people. It's operationally easier as well as safer. Yeah. Hi, Maurice Turner, Congressional Innovation Fellow.
Going along with what you were talking about, with California having broad-reaching impacts when it comes to regulation, doesn't it, in sort of a de facto manner, make California's regulations a federal regulation? And going along with that, what efforts have been made by California to coordinate their regulations with other states and also with federal regulation makers? It's sort of like the highest common denominator might tend to prevail across multiple jurisdictions. That can happen, as in the case of auto emission standards, for example. But we're not talking about regulations here; when the Attorney General's Office is developing best practice recommendations, we do look globally to try and harmonize what we recommend with global requirements, aiming to make it more implementable. In the case of laws, the California legislature often does consider other states and other laws. In the same breach report in which we pointed to the CIS Critical Security Controls, we also encouraged harmonizing of state data breach notification laws as a better alternative to preemption by a federal law; all the versions of federal law that existed up to that point would have lowered the standard, harmonizing at the lowest common denominator. So our recommendation is that states look at trimming some of the peaks and valleys in some respects to address the issue of compliance complication, recognizing that the states have been able to respond like laboratories of democracy and come up with things that might take longer to happen at the federal level, absent state innovation. My name is Dalton Booker. I'm a program associate with the Osgood Center for International Studies. And my question is, you know, we're talking a lot about breaches of security and breaches of privacy from external actors. But I wonder if any of you guys could speak to breaches of security and privacy by governmental actors, a recent example being the matter with the San Bernardino shooters, the iPhone, stuff like that. Risk is indifferent to intent, you know, when you're managing it. And it's also indifferent to the actor. You know, understanding the actor and their intent helps in better defense. But it's not like, you know, I have these controls to go after state actors, I have these controls for insider threat. Because we talk a lot about insider threat as being, I say, four different quadrants. The most likely is unintentional and, you know, non-malicious. It goes all the way to, you know, yeah, exactly. That's the most common thing. And again, risk is indifferent to intent: from malicious but unintentional, to intentional but non-malicious, to malicious and intentional. You know, those four different quadrants. And you need to kind of address all of those from an insider perspective and an outside perspective as well. So there's nothing unique, I guess I'll say, about what you would do in a good IT response or practice, regardless of what you're worried about. It's probably outside the threat model for most average companies as well. Unless you're Apple and you're making an iPhone; then you're probably worried about it.
Right, if you're in the industrial base. But that's also part of the point of having a risk register and a risk model: these are the actors I worry about, and these are the things I worry about them doing against me. And that may include government actors. Yeah, it's a matter of tradecraft, right? Right.

And the controls are aimed at the mass market; these are things we should all worry about even if we don't know we should, we should just do them. But we also look at the controls with an eye toward, and with partners like US-CERT and other people who are dealing with incidents. So the idea of the controls is prevention, but also narrowing the attack surface, and also helping to manage what I'll call accountability. One of the people I learned from in this business many years ago said, when you can't build in security, build in accountability. That means administrative privileges and rights, so that when something happens you can figure out what happened. It's not always about the human, right? Someone may have posed as that human or taken over an account. But if you can't bound the problem, you can never figure out exactly what happened. So some of the controls are really there to help you with that problem too: if I need to dig back deeper, can I narrow down how this attack entered my enterprise, who allowed it to go to the next step, what process it took over, so that you can do that kind of analysis and decide what to do about those things?

Right. And again, for most of the kinds of things that we deal with, people say they wish they had more threat information. Here's my opinion: the overwhelming majority of the things you need to do to defend yourself in cyberspace are already public. If you could take action on the information you already have, without any spooky, fancy intelligence, you would be so far above the norm that you'd be essentially invincible against most adversaries.
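A minimal sketch of the accountability idea described above: using a small audit trail of privileged actions to bound an incident, that is, which accounts exercised admin rights, on which hosts, and in what order. The log format, field names, and sample events are assumptions made for illustration, not a CIS specification or tool.

```python
# Minimal sketch (hypothetical log format): an audit trail of privileged
# actions used to "bound the problem" after an incident -- which accounts
# used admin rights, on which hosts, and in what time order.

from collections import defaultdict
from datetime import datetime

# Each record: (timestamp, account, host, privileged action) -- sample data.
AUDIT_LOG = [
    ("2017-03-01T09:12:00", "svc_backup", "fileserver01", "service restart"),
    ("2017-03-01T09:14:30", "jdoe",       "workstation7", "local admin added"),
    ("2017-03-01T09:15:10", "jdoe",       "fileserver01", "new scheduled task"),
    ("2017-03-01T11:02:44", "svc_backup", "fileserver01", "service restart"),
]


def privileged_timeline(log, account):
    """Return the time-ordered privileged actions taken under one account."""
    events = [(datetime.fromisoformat(ts), host, action)
              for ts, acct, host, action in log if acct == account]
    return sorted(events)


def hosts_touched(log):
    """Map each account to the set of hosts where it exercised admin rights."""
    touched = defaultdict(set)
    for _, acct, host, _ in log:
        touched[acct].add(host)
    return touched


if __name__ == "__main__":
    for when, host, action in privileged_timeline(AUDIT_LOG, "jdoe"):
        print(f"{when}  {host}: {action}")
    print(dict(hosts_touched(AUDIT_LOG)))
```

In practice this kind of data would come from an enterprise's own logging pipeline; the sketch only shows the kind of narrowing, by account, host, and time, that the accountability-oriented controls are meant to enable.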
I would say, with some experience, here's just an idea on the fly: maybe I can talk Joan into proposing an RSA talk with me for 2018, because I think looking at the technical control side and the privacy side together might be a good discussion that we could set up. Even this kind of panel is, frankly, above the level of the average talk that I see on many of these major topics.

We are also working with a number of other folks at CIS to look at what we call data-driven studies. We are not where we would like to be in this business, where we actually make decisions based upon data. This is still pretty much a folklore, wizardry model that's reflected in the things we do. I gave the example of why we picked this set of controls: five friends, right? I looked around at the five friends, they're smarter than me, and we came up with something. That's a trust-based model. Those five friends became 5,000 friends; still the same basic model, just more of them. What we've done over the last few years is approach, I think, almost all of the leading threat intelligence companies. Basically I've been looking for everyone who collects lots of data as part of their business model, summarizes it at the end of some time period, and gives that summary away for free: the Verizon data breach report folks, HP, Symantec, Palo Alto, all those kinds of folks. We work with them, talk to the authors, and map the summary of what they are seeing into the controls. What I'm trying to build is a more rigorous, data-driven foundation. There are a lot of people working on bringing big data and advanced analytics into this; more power to them, I hope they succeed, but I can't wait. So for now we work with what I'll call the authoritative sources on what's going wrong out there. And now we're talking to a number of academic folks and potential partners about specific studies on how to establish a real, solid empirical basis for good practice. Again, we need these things socially, across the board. But this is the way we've been approaching it: moving from the "trust me" model toward a more data-driven approach. I'd love to be able to help, Joan. I think we're a ways off, though, from that sort of before-and-after data gathering. The machinery is just not in place the way it is at the CDC for tracking the spread of disease or whatever. I think a lot of us are working towards it.

I mean, the difficulty of really knowing what's going on: as I look year by year, it looks like we've had an increasing share of breaches reported to us by small businesses. So is that because they are more often having breaches, or because they are learning that they are required to report breaches, or because they are learning that they actually had a breach? You know... Yeah. Yeah.

Hi, Emefa Agawu from the Cybersecurity Initiative here. I have an eye on Twitter, and there's a question coming in online. It's a two-part question for Tony. One is whether you could discuss version 7.0 and the treatment of middle boxes, and as you answer that, maybe explain what a middle box is. So: the treatment of middle boxes in version 7.0, if you could address that briefly; but then also, if you could discuss the adoption of the controls globally.

I'm not sure I can answer the first question, but I can guarantee I know who asked it. So "middle boxes" is a term for... well, we are where we are in the state of the internet, right?
And what you see, and I'll say this sort of cartoonishly, is governments trying to figure out their role in something they didn't control or design. So you have this issue of monitoring: how do I monitor for bad things? At the same time, there's this mad rush to encrypt everything, to tunnel everything: web accesses, transactions, and so on. There's security merit to that too, but it also blocks things in the middle from monitoring for bad packets, malware, and so forth. So there's a natural tension around where do I monitor for best effect and where do I, in effect, protect information in a tunnel or a sleeve, in a cryptographic sense. And no one has a simple answer to that. The marketplace has responded; there are products you can buy today that act as a middle box. That is, the end user in your enterprise thinks they are talking to a merchant or whoever online, but they're really being intercepted by a middle box that is managing the security: looking, monitoring, and establishing this invisible... I used to call it a man-in-the-middle attack machine, but these sit in the middle for a security purpose, to monitor transactions that would otherwise be unprotected and unmonitored at the boundary of an enterprise. So there's a lot of discussion about how that is best managed. I think it is part of the environment, and it is on our list of topics to talk about for the next version. So we're struggling right now, not struggling, we're gathering the kinds of topics and tough questions about how, or whether, we should address them in version seven. We haven't committed to a date for that yet. But there's nothing static about this environment: the technology is rapidly changing, the role of governments is changing, and every nation has different thinking about how it manages security, what its role is, or what it wants its citizens to see or not see. So this is the kind of topic that really is captured by this middle-box idea. I'm sorry, the second question?

Yes. One of the great surprises of this is that, although we are at home in the U.S., the adoption base is actually very significant outside the U.S. I think a third of our downloads, something like that, are non-U.S. We have a couple of our great friends, adopters, and crazed volunteers up here on the stage, and a countless number of them outside the U.S. So we have volunteer translations of the controls into multiple languages. We have people who, on their own, carry the story of the controls to European standards bodies, to multinational forums across Asia, to all kinds of different forums. It's something that, again, happened organically. Now we're thinking maybe we should get more involved in these things, figure out where we want them to go, and support the people who have taken the initiative to create them. It is incredibly exciting, the level of interest in the U.K., Canada, Germany, Europe at large, India, Japan, Lithuania, all kinds of adoption outside the U.S. It helps us see, number one, the hunger for something like this. I think there is a general social concern, you might say a distrust, of central governments, and people are looking for independent nonprofit voices in cybersecurity. So I think we helped fill that niche.
So I think we're not even at the beginning of the story on adoption of the controls outside the U.S. I think it involves being less U.S.-centric, referencing other international standards, and considering things like the GDPR coming up, to accommodate that.

Cyril Draft from MIT. Could you talk about any benchmarks you currently have, or are thinking of having, in privacy or in data protection and integrity, which is one of your top 20?

Well, benchmarks are tough. I may defer to Tony on what CIS does more generally, because it's really about the acceptable level of risk of an organization. Some of that may be dictated by a regulation, which, again, we understand is the floor. But I have dealt with organizations in the same industry, of the same size, where one is more open than the other; the culture of what they accept and don't accept differs. Because as cybersecurity risk management practitioners, we don't make the final decision; the executives do. They're responsible for keeping the business in business, keeping the organization up and running, and weighing anything that could impact that, which may happen to be a technical risk, may be a legal risk, could be a physical risk. So my roundabout answer is that it might be different for every organization. From my perspective, because I deal with commercial clients, it is a business risk question: what is the benchmark where, up to this point it's okay, and past this point it's bad and we've got to react? That might be different for everybody. Unfortunately that's not a firm answer, but it's my perspective.

Just a couple of points. On the whole issue of data protection: within the controls document itself, our goal is to make sure that you have thought through the issues of data protection, the type of data, where it is, how it's stored, and that the technical controls to manage it are in place. The issues of how it specifically applies to a business we would address in a complementary way, as we have done with privacy: have you thought through these things, and here are the implications of the technical control. You'll see, I think, a different approach to cryptography in general within version seven of the controls, for example. Right now there are pieces of it everywhere, but it really is a foundationally different way to look at it, so that the mechanism is in place to manage all these kinds of business decisions. At the end of the day, the goal is to have confidence that our business decisions translate into a policy and that the policy is enforced through technology.

And if that's where you're going, with things like quantum computing and quantum-proof encryption, for some organizations that might be a concern, depending on who they are, and for others it isn't.

Well, this isn't exactly the same use of benchmarking as in security, but I think in privacy there's a rethinking of the benchmark. The US privacy legal scheme has basically come down to notice and choice. It's a marketplace model: tell people what you're doing with their data, and they can choose. There's a general dissatisfaction with that among a lot of people, in that the notice is often unnoticeable and the choice is take it or leave it. So what is a better approach? The European approach comes out of human rights rather than a marketplace approach, and our marketplace approach and lack of a comprehensive law may have been instrumental in the development of Silicon Valley.
I've read a paper about that, on how law made Silicon Valley, about that light-touch approach. But with big data, with the persistent surveillance of the internet of everything, the need, from a societal perspective and not just an individual-rights perspective, to have a concept of privacy and a legal scheme that can defend it appropriately has, I think, gotten greater, and there's a lot of thinking going on in that regard.

One final quick thing. I spend a lot of time in Canada, and one of the things someone from the RCMP brought up is, let's not confuse privacy and anonymity. Are we trying to get to a point of being anonymous online, or to a point of being private, which has a little more nuance to it? A lot of times, because of fear of the government, people ask, why wouldn't it be anonymous, and so that gets confused with "I need to protect my privacy." We still have license plates on our cars; that's a good example of privacy but not anonymity.

I think we have time for one more question, and Mr. Morgus from here as well.

So there's been a lot of talk about the positive aspects of the controls. I'm curious if there is anything, and it might be that I don't fully understand the way in which the controls came about, but I gather there basically was a group that generated some sort of consensus on what should be in there. Was there anything that any of you would have liked to see in there that didn't make it in? What are the critical shortcomings of the controls themselves?

Every year we argue about that. I've been on many different large-scale panels and groups like this, and what I appreciate about this one is that the buck stops with Tony. I've been in large organizations of volunteers, not unlike this one, where we never get anything done, because everyone is pushing for their own thing, and then we just push the decision down and keep arguing about the same things. That's what's great about this: Tony has his list of 150 things, and if something pops up it may move higher on the list, but everyone has their pet things, and Tony says, nope, this is what we're going to do.
It's like, okay, I never claimed to run a consensus process; it's not really consensus. The discipline here is in keeping the list short, not in making it long. We could sit down together and come up with a thousand things this afternoon without even blinking. That's great, but it's not helpful for what we're trying to do. So the discipline is about: okay, we've talked about this enough, the discussion is over, we're making a choice. That's the thing: if you go to a formal standards body, they're often consensus-driven. Everyone brings their pet rock, their favorite idea, and it takes years to evolve; that's what we have today. But they don't move at the kind of speed we're trying to move with the controls. So the goal is discipline and priority: shorter is better, be much more focused. We're not trying to replace the FISMA catalog, ISO, COBIT; those have their own place. We are very much about what bad things are happening and what everyone should be doing to stop them. So I would say my definition of consensus is that everyone walks away from the table with some favorite thing of theirs not on the list, and that's okay; we're actually better served.

Ever see a quote so good you wish you had said it? Here's one from a guy named Hal Pomeranz, who is a consultant and instructor for SANS. He once wrote this in his blog, and I have his permission to use it: the problem with the information security business is that experts agree on 90 percent of what needs to get done. That's pretty good. However, they spend 90 percent of their time arguing to the death about the last 10 percent. That's the world I grew up in: we argue about angels on the head of a pin, about the nuanced helicopter-drops-the-robot-spider-down-the-chimney scenario. I've been in that discussion thousands of times; people say it with a straight face. But most of us should not worry about that.

One last quote, from one of my tech heroes, whose fourth law of technology is that in technology policy debates, the policy perspective must prevail. One of the policies that significantly informs the CIS Critical Security Controls is the idea of priority, of getting the most bang for your buck. That's one of the things I think is really valuable about them.

Having said that, there's a long list of things we've been collecting, and we welcome input on topics that we must think about for version 7. Some of them will be in; some of them will change the way version 7 addresses things versus prior versions. So we have been collecting feedback, and this is a crowdsourced idea, right? It's about who's the best out there, who's got a better idea; bring it to the table and we will figure out a way to deal with it. So there's no mystery think tank behind me; we have Rick and Joan and volunteers who contribute to this.

It's a great note to end on. Tony, Joan, Rick, thank you all so much for joining us here today. Can we get a round of applause for these folks? Friday, 10 o'clock, right back here: public-private partnerships aside from information sharing. We'll have another great panel for you then. Thanks very much.