And I think we'll find that this panel will duplicate a little bit the previous panel, so they stole a little bit of our thunder, I guess. But I'm sure we'll exceed expectations, or at least we'll try to in this regard. In my previous life in the government, I worked in the Department of Defense for 31 years, and it seems like I'm always asked to answer the more difficult question, which is how do you stop something, versus just describing the threat? It's sort of like when I worked at NSA for many, many years, I was on the defensive side most of the time, and it seemed like our job was a lot harder than the offensive guys'. So I find myself in the same situation, but luckily we've got a great panel, and this panel will try to take the momentum from the previous one to get to some interesting thoughts on stopping this threat that you heard talked about by the previous speakers. As you heard from the previous group, it is certainly a daunting problem that we face these days, and so I'm going to just open it up for the panel, for each one of them to give a few minutes of their thoughts as leaders in this field. We're very fortunate to have, as I said, this group. We've had a few people with some unfortunate family emergencies at the last moment who had to drop out, but we've got this group together that I think will be equal to, if not superior to, our original group. And so with that, let me just start and work our way down left to right to keep it easy, if that's okay with you. Stuart, I think many of you may know, but Stuart was at the Department of Homeland Security. He was the first Assistant Secretary for Policy for DHS, and as we all know, when we created DHS after 9/11, that was a very, very tough job, and he's learned a lot of lessons from that, I'm sure. So let's have Stuart give us some of his thoughts on stopping the threat. Stuart?
Yeah, the lesson is don't do it. Doing a startup in government, which I've now done a couple of, is just deeply painful. So I would like to, can't you hear me? Okay. I'd like to popularize and start this with what I call Baker's Law, which is: our security sucks, but so does theirs. The fact is, the real enemy of security is operational necessity. There are things you have to do. You've got to accomplish the mission. You take a little bit of a shortcut, and that's the end of your cybersecurity. And that operational necessity works on the other side, too. They've stolen stuff, and they've got to get it down to their state-owned oil company in time for it to get its bid in as well. And they're going to take shortcuts, and we're going to be able to figure out who's doing this. And this is the critical point. I sometimes liken this to Pigpen. He's got this ball of dust surrounding him. This is what we're like in cyberspace. There are bits of digital DNA flying off us at all times as we take one shortcut or another and find ourselves losing control of our identifying information. That's happening all the time. We all know that, and so are the people who are attacking us. The important thing is that this means we can attribute these attacks. We can actually identify the guys who are doing it. Sometimes I put up that photo of the Anonymous attackers who were busted because they put up a very low-cut picture of one of their girlfriends to mock law enforcement and didn't realize that the picture had been taken with an iPhone, which very helpfully provided the geographical coordinates of the girlfriend. They didn't show her head, just the rest of her. I often thought that the Secret Service and the FBI must have arm-wrestled for who was going to do the ID in that case. So, we can begin to identify people who are attacking us. That's the attribution stage.
We really can do a much better job than we have in attribution. And then we have to, like, I'm a Scots-Irish kind of guy, we need to bring the pain. We need to show the folks who are attacking us that it's a painful thing to do and they'd be better off choosing a different career. And for that, I think we are going to have to get much more creative. I testified last week to the Judiciary Committee and suggested a number of things that we could be doing. All you have to do is read the Mandiant report, read the Trend Micro report, read some of the reports that Citizen Lab did. There are lots of clues to the identities of the attackers. We know where they went to school. In one case, they went to Sichuan University, and the kid who was engaged in those hacking attacks later went to work for Tencent, which is an enormous Chinese internet company with a big subsidiary located in the United States. Sichuan University needs visas to send their people to the United States. So does Tencent. Why aren't we saying, hey, we've got an investigation going. We'd like you to cooperate. If you don't cooperate, no visas. You can go home and train. There's no reason why we shouldn't be doing that today. Or specially designated nationals. We have systems for saying these are people who are engaged in trade in conflict diamonds, and we designate the people who are engaged in that, so that the U.S. government says no one can do business with those conflict-diamond nationals. They do the same thing for Belarusian oligarchs. The Magnitsky Act does this for people who are interfering with human rights in Russia. Well, for God's sake, we have people who are interfering with our rights right here in the United States. We ought to start designating those nationals and causing some pain for people who are engaged in these attacks. We know enough to designate them. Let's start doing it.
And then finally, and I'll close with this, we need to take the information that we're getting and follow it through, not just to the attackers, not just Unit 61398, but to the guys they're feeding with our stuff. And we need to find ways to tag that, quote, information as it goes back to China and then on to a state-owned oil company, so that we can say: we know where that information went. We've tagged it and followed it all the way. And now we are going to take every nickel you have in the Western world for engaging in economic espionage, with criminal prosecution, civil lawsuits, and the like. We can do all of that if we set our minds to it. We're going to have to change some laws, but not very seriously. We just have to take it seriously. Thanks. Good. So, Jenny, I'm going to let you go last because you're our government representative here this morning. Oh, we've got to press a button. I wondered how that worked. Okay, thank you. I thought I was going to be the oldest person on the panel till Stu walked up, so now I feel like I've been vindicated. But just to give you some context: when Chris Inglis was talking, he was talking about the code-making and code-breaking roles at NSA. And I was a code maker for the early part of my career. I had probably been there about 20 years when Chris actually arrived on the scene, and he was working in something called the Computer Security Center, which was relatively new at that time in the 80s, because we were starting to think about, yeah, we've been building encryption. That's what I did. I built encryption boxes. I was not a crypto mathematician per se. I never really understood the difference between a Fibonacci sequence and a Fibonacci-Cocon sequence. It was always somewhat mysterious, but I could make the boxes work. So that's what I did. But we started thinking about, okay, beyond encryption. As you know, in the early 80s, the internet was just starting to take shape.
So we started to think about this question called computer security at the time. And there was the Orange Book and the Red Book and the Yellow Book. And I think Chris was involved in that. He was trying to think of the colors, what color are we going to publish next? I think that was his job. But anyway, we went on from there and we evolved into information assurance, and I think it's still officially called information assurance, but it's really focused on cybersecurity now. And I'll say a lot's been covered already. I don't want to repeat what the previous panel has said, but a couple of observations out of the discussion. Education particularly is a good thing, but it's never going to solve the problem. I mean, if you're expecting consumers to change their behavior, that's really a fool's errand. They're going to behave the way consumers behave. And if they get an email, some people will always respond to that email saying, you've won the Ugandan lottery, just send us your bank information and we'll send the money to you. We're never going to stop that behavior by educating them. And I think we're going to have to do more in the consumer space about automating the security processes there. We shouldn't have to have the consumer check the box and say, I want the automatic updates. I mean, that's dumb. That ought to be part of it: you want to use the operating system, you want to use the applications, you've got to have the updating process in effect so that the providers, the technology providers, can fix the problem for you. We participate in something called the NCSA, the National Cyber Security Alliance, which is a DHS- and industry-sponsored thing. And if you go to the NCSA website, it's got 10 guidelines for how you secure yourself. And it says, you know, change your password regularly. The one I like is: configure your computer in a secure fashion. Okay.
So how is the consumer going to do that? I have trouble. I've been in this business a long time, and I'm not a computer geek. I was studying vacuum tubes back when I was in college, so I'm not a computer geek. Computer science hadn't even been invented yet. And how do you expect the consumer to, quote, configure their machine securely? That's just not going to happen. So we have to start to deal with that from a... If you want to use the technology, you're going to have to accept the fact that your security has to be managed by the technology providers. It's never going to be, configure it yourself, you figure that out. Okay. Stepping to the larger question: how do we protect critical infrastructure? Let me say a couple of things. Number one, I think defense in depth has been very successful, but the security model has always been evolving. When the internet started, there was no security. Everybody trusted each other. It was a set of host machines at academic institutions that were networked together. Everybody trusted each other, and if somebody got out of line, they were quickly put back in line by the rest of the peers. That obviously has changed. I think the firewall was first invented probably late 80s, early 90s. I still have the Bellovin and Cheswick book on my desk, Repelling the Wily Hacker, from when the internet firewall was invented. And it basically said, we've got to close the ports and protocols that you're not using and keep those guys out. Well, that worked for a while. But then the hackers said, well, now I'm going to figure out how to tunnel through that firewall and tunnel through the protocols you have open. So it's always been evolving; defense in depth came along as an evolution of that thinking. Clearly, we're now at the stage where we've got to move away from the early discussions about static defense and go to a much more dynamic, adaptive environment.
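The early-firewall idea described here, close everything except the ports and protocols you actually use, can be sketched as a default-deny rule check. This is an illustrative toy under invented names and an invented allowed set, not any real firewall's API:

```python
# Toy sketch of default-deny port filtering, the early-firewall idea
# described above. The ALLOWED set and function name are invented for
# illustration; real firewalls match on far more than (protocol, port).

ALLOWED = {("tcp", 25), ("tcp", 80), ("tcp", 443)}  # mail and web only

def permit(protocol: str, port: int) -> bool:
    """Admit traffic only if its (protocol, port) pair is explicitly opened."""
    return (protocol.lower(), port) in ALLOWED
```

Anything not explicitly opened, telnet on port 23, for instance, is simply dropped, which is exactly the posture that the tunneling-through-open-protocols attacks described next were designed to evade.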
And that's going to take some things that we're going to have to do collaboratively. Information sharing, if I can use that word, is an essential part of that. But I think of it really in terms of a higher level. I mean, sharing signatures, sharing threat warnings, that's all great, and we need to do more of that and make it more operationally focused. I think somebody already pointed out that sharing for the purpose of sharing is a waste of time. Sharing so I can block a new threat, that's much more useful to me. But if I start to think about how I get ahead of the power curve, I've really got to do my analytics. If I'm looking at my internet portal and I'm analyzing the traffic flows, that's good, okay? But what I really want to be able to do is analyze it at the next larger scale, ultimately taking it to the global network level, because that's the way I can actually understand what's really happening in the network. And as new threats emerge, I can be on top of those. And that's going to take some doing, to move from where we are today with localized analytics to a global basis. We do it at AT&T. We do it on our global network infrastructure. So we have a pretty good view, but it's only our view. I don't share the view with Verizon at the native level. We're now doing more collaboration with Verizon and the other tier-one carriers as we're dealing with DDoS attacks against the financial institutions, which I think Ellen mentioned earlier. And so we're actually changing our business model, in effect, to help deal with that. But the driver ultimately is: how do I really understand what's happening out in that global infrastructure, and how do I deal with threats as they're emerging? Those zero days start out somewhere, and we want to be able to find them where they're starting, as opposed to, gee, I just got compromised, now I've got to clean up the mess.
So that's kind of it; I'll stop here because we are kind of constrained on time. But we've got to change our thinking and our approach to security: a point solution here, a point solution there, we've got to move from that to a much more global view of what's happening. And that's going to take global collaboration, international collaboration, as a part of it. We're starting to try to do that at the internet service provider level, opening up dialogues with international partners. We peer with a large number of the global carriers, so why aren't we doing more in terms of just developing a common understanding of what's happening in the network? Then I can go down to the packet and protocol level and be able to identify those zero days before they become successful. That's what you're really trying to do: get ahead of the threat. We also have to work toward driving the technology base to be naturally more resilient and more secure. We're just starting to understand how to do that. But in the commercial world, that's a big challenge, simply because the technology is always moving. Microsoft's been trying to deal with security at the native level in their development process, but they brought out Windows 8, okay? And Windows 8 brings a whole new flavor to things in the operating system realm. So we're going to learn things as we go. I'd say we learn something every day about cybersecurity; we usually learn several things, just because we're there and we're doing it. So you never stop and say, I understand the problem. In fact, I'll close with my favorite saying in the cybersecurity business: if you think you understand your problem, you're badly deluding yourself. Thank you. Thanks, John. Okay, now next is John Gilligan. John was, during the DOD days, probably one of the most innovative CIOs that we had. He was the CIO for the Air Force, and now he runs his own company.
And, John, could you give us your thoughts? Thanks, Bob. And thanks to CSIS and FireEye for putting on this session. You know, I'm thinking back, and it's been almost 40 years since I first got involved in cybersecurity, computer security back then. I went to a seminar in graduate school, and at the end of it, they said, we have graduate assistantships available. That caught my attention, so I raised my hand and spent the next couple of years trying to design secure systems, trying to mathematically prove systems secure. I spent most of my subsequent career not doing computer security but designing and building IT systems, and now, more recently, helping manage companies. And the topic of stopping the threat, to me, has to be looked at in terms of a business perspective. I come to these sessions and, candidly, my head hurts. And it reminds me of a story that I tell often from when I was CIO of the Air Force. We spent about $7 billion on IT, a lot of money on computer security, and I had a pretty good background in computer security. What I would find is that each year, as NSA came in to do their penetration analysis of the services, they would call us all together, so they'd line us up like a panel here, Army, Navy, Air Force, and they would debrief us on what they found. And the first time they did that, I was terribly embarrassed, because it wasn't, did NSA succeed in breaking in; it was how long it took, and that how long was in minutes and seconds, and every one of the types of attacks was successful. And I'm thinking, my goodness, if somebody from the media had been sitting in this audience, I would be pilloried in the media for spending $7 billion and not even being able to protect ourselves. And so the second time this happened, I was very frustrated. Second year, same briefing, very frustrating.
And I went to NSA and I said, I need to know where to start. It's not like we're not spending money; I need to know where to start. And that led to a discussion that I'll shorten, but at the end of that discussion, NSA came back and said, well, we've now analyzed the threat, and based on the threat, here's where you ought to start. That was enormously enlightening, and so I want to fast-forward that same discussion to today. Verizon just produced their latest Data Breach Investigations Report, very enlightening, and there are a number of other reports that are similar out there. But what catches my attention is that things really haven't changed dramatically from my Air Force days: the majority of threats, in terms of numbers, are unsophisticated, and they're attacking very straightforward weaknesses. That's really important. The second point, and some of these statistics were mentioned earlier today in the presentations, is that the breaches are not discovered until weeks and months after they occur. Unsophisticated, discovered weeks and months later, and most often discovered by people external to the organization. And yet we're spending, 30 billion was the number that's been used; others have used a lot more. We're spending all of this money; what the heck is going on? Well, I've spent some time trying to analyze that. Being on the board of several companies, this becomes quite important: gosh, if we're going to spend all this money, we would hope we would get some return. And what I've discovered through that analysis, and looking at the reports, is that in fact the most prominent threats are unsophisticated attacks, and it turns out they're relatively easily defeated. We have now demonstrated, through research and under different names, a minimal baseline set of controls with which you can be effective in protecting against most of those attacks.
One set was developed in the United States; it's called the Critical Security Controls. The SANS Institute, NSA, and a number of other organizations did it. The Australians have come up with their similar Top 35. Interestingly enough, their research shows that only four of those, four, four controls are effective against 85% of the threat. So the conclusion is: we know how to deal with this, we know what we need to do, we just don't do a very good job of then implementing these controls. Now I'll tell you a little secret. As CIO, what I learned is that it doesn't cost a lot of money to implement these baseline controls. Why? Because most of them are essential to operating and managing the network. It's just doing them in a disciplined manner. And in fact, most organizations are already spending the money; in fact, many are spending more than they need to, because it's not that they don't have the controls, it's that they have multiple sets of overlapping controls inconsistently applied, and so they leave gaps, et cetera. So step one is: implement this baseline of critical security controls. Now, that does not address the sophisticated threats, and I acknowledge that. But if you don't do that, you're wasting your money trying to address sophisticated threats. You're kidding yourself. So, for all of the discussions about the sophisticated threats, I recall my kids used to play soccer: everybody would huddle around the ball. And that's what I often see organizations doing, saying, we're going to go after these sophisticated threats, they're shiny, they're exciting, but unless you have done that foundation work, you're wasting your energy. Now, what I have found is that for organizations that go beyond the critical controls, what's most economical is not to continue to layer control upon control upon control. And I think that's the big flaw in what NIST has been providing in their risk management framework. It's well done, but it really just continues to drive cost upon cost upon cost.
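As a rough illustration of what one of these baseline controls amounts to in principle, application whitelisting appears on both lists mentioned above. Here is a minimal sketch under invented names and data: execution is allowed only for binaries whose digest is on an approved list. Real products also track signing certificates, paths, and policy; this is only the core idea.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Identify a binary by the digest of its content."""
    return hashlib.sha256(data).hexdigest()

class Whitelist:
    """Default deny: only binaries whose digest was pre-approved may run."""

    def __init__(self, approved_digests):
        self.approved = set(approved_digests)

    def may_execute(self, binary: bytes) -> bool:
        return sha256_hex(binary) in self.approved

# Invented example data: approve one known-good binary.
trusted = b"\x7fELF trusted-build-1.0"
policy = Whitelist([sha256_hex(trusted)])
```

Note the disciplined-operations point the speaker makes: the hard part is not this check, it's maintaining the approved list as software changes.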
What we're seeing, and we heard this well today, is that the very sophisticated advanced persistent threats and the nation-state attacks are agile, intelligent, and dynamic. And in order to respond to that, you have to be likewise, and I think there's been great discussion about that, so I won't repeat it, but you have to implement that same type of capability. It cannot be done strictly with tools; there has to be a human element. The sharing of actionable intelligence is critically important, as is the ability to look at patterns of attacks. Eventually, those that are most sophisticated are actually able to predict what's going to happen, what the next step of the attack will be. Why? Because they see the patterns; they're studying it. So I think all of those are the next step, and sharing becomes absolutely critical, because organizations in general can't afford to do that on their own. So anyway, let me stop there, but I think we do know some steps. That's not to discount the other comments, I mean, obviously there are diplomatic and other avenues we ought to pursue as well, but I think from a technical perspective, there is a better roadmap than perhaps we've been able to implement. So I'll stop there. Thanks, John. And I think John points out an important topic that he's working on with AFCEA, which is focusing on the theme of cyber economics. And I think that's one of the key elements of cyber where we are today: if we can get our resources proportional to the threat, so that we can take care of the ankle-biting problems with the least amount of resources, but do it smartly, and then focus the other part of our resources on the stuff that will kill you, that's what we've got to focus on. And unfortunately, I think we're spending a disproportionate share of our resources on the ankle-biting problems, because we're not doing the basic stuff.
And as a result, we don't have enough resources to focus on that 15% that will be the stuff that gives you the heavy injuries or the fatalities, in the case of an automobile analogy. So I think that's the situation we're in today, relative to moving from static to dynamic defenses, which we've talked about this morning. So how do we get at this situation in cyber economics, of moving to this new paradigm of security that we're facing right now? And so we've asked Irv Lachow to talk about an area that he's focused a lot of his attention on. He's a CSIS senior fellow, and the area is called active defense; we've talked a little bit about that. So with that setup, Irv, I'd like you to talk a little bit, building upon what John mentioned. Okay, thanks, Bob. Right, so I'm going to talk about this thing called active cyber defense. And of course, the minute I say that, I guarantee that everyone in this room is on a completely different page in terms of what that means, because there is no widely accepted definition. The DOD Strategy for Operating in Cyberspace defines it as basically real-time technical protection of the dot-mil networks. But in the popular parlance and the articles that are showing up in the media, it's often interpreted as meaning hacking back. So there's a lack of understanding of what the term means, and often what happens in discussions about active cyber defense is that people end up in one of two extreme areas: either looking at this hacking-back area, which gets legally very dicey very quickly, as any lawyer will tell you, or just saying, well, we're only going to look at activities that we can do within our network, which are perfectly legal and have been going on in some cases for decades. So honeynets and gathering threat intelligence in a variety of ways. And that's perfectly safe legally.
But there's an interesting gray area that's developing, and people are starting to pay a lot more attention to this for a couple of reasons. The first is that the government simply cannot respond to the magnitude of the threat facing the private sector. If you read all these reports, you see it's a huge problem, and a lot of companies and organizations are on their own. Now, hopefully with some of the initiatives that the government has announced, with the information sharing, things might help, but unless it's a major breach, the FBI just doesn't have the resources to come and help you in a lot of cases. The other thing is that the private sector is growing incredibly sophisticated in many ways, in terms of its ability to analyze the threat and potentially even respond to the threat. So there's an increase in motivation and capability on the private sector side. And so there's this interesting question that's starting to come up, which is: how far can the private sector go to protect its intellectual property and its assets that may be leaving the organization? A lot of folks are starting to look at this. And in particular, there's this interesting gray zone where one can start to look at things like beacons in information that leaves your network. So can you put a passive watermark on your document, for example, have it leave your network, and then search for it to see if anyone's stolen the information? That's one thing. What if you put in an active beacon, so it's actually signaling home from wherever it ends up so you can track it? Is that legal? Well, it starts to get more tricky. What about information that might leave your network and self-destruct, or something like that? That gets really tricky.
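The passive-watermark idea the speaker describes, tag each outbound copy so a stolen document can later be recognized, might look something like this in miniature. Everything here (the token format, the registry, embedding the token as a comment) is an invented illustration, not a description of any real data-loss-prevention product:

```python
import uuid

# Registry mapping each watermark token to the party the copy went to.
registry: dict = {}

def tag_document(text: str, recipient: str) -> str:
    """Append a unique, inert token to one outbound copy of a document."""
    token = "WM-" + uuid.uuid4().hex
    registry[token] = recipient
    return text + "\n<!-- " + token + " -->"

def find_watermark(found_text: str):
    """If a copy turns up somewhere, report which recipient it was tagged for."""
    for token, recipient in registry.items():
        if token in found_text:
            return recipient
    return None
```

A passive tag like this only helps if you can later search for it; the active beacon the speaker contrasts it with would phone home on its own, which is exactly where the legal questions sharpen.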
And then there are all kinds of questions one can get into: if someone is accessing your network and they're connected to your network, do you have any rights at all to leverage that connection to gather intelligence that you might be able to use? Again, it gets into really tricky legal questions. So there's not a lot of clarity right now. And in fact, what's interesting is there's a lot of debate about the Computer Fraud and Abuse Act right now on the Hill, because some people believe that the Act is either too strong or is being applied too strongly. The Aaron Swartz case is one example of that, but there are some others where people feel that it's a bit too strong and that the language is a bit too loose. And there are other people who feel that it actually needs to be strengthened so that you can deter this activity more effectively. So there are debates there. Harvey Rishikof is here, and he's leading a task force at the American Bar Association that's looking at this issue. So there's a lot of interest in this issue. And one of the things that comes up is this question of, first of all, roles and responsibilities. What can the private sector do on its own? What can the government do? How can they work together? There are a number of questions that come up there, and I hope this tees up Jenny a little bit, with things like the ECS program that DHS has developed, where there is a partnership between the government and the private sector to share information and provide some protections, and one could think about whether that kind of activity should continue.
There's also a question of: should there be clear lines in the sand? In other words, should the laws be very clear about what companies can do to protect themselves, or is it better to have some legal ambiguity and let case law sort itself out through the system, or provide some ambiguity for the attackers so they're not exactly sure where the lines are and what steps companies can take to protect themselves? So there's debate there. I'll just stop right there, and I'm happy to discuss it further if there are any questions. Thanks, Irv. And Jenny is at DHS; she worked at US-CERT for quite a while, but currently is the director of, it's a long title, stakeholder engagement and cyber infrastructure resilience, I guess, is your title. But as Irv said, she's got the hard job. We talked a little bit about information sharing and collaboration, and Jenny has the job of promoting that at DHS. So, Jenny, if you're ready, give us some updates and perspective. Sure, thanks, Bob. And is this catching my voice on the microphone? Yes? Okay, good. So, one of the things that government can do, recognizing that there's a huge scale of critical infrastructure partners that we need to work with and a relatively limited size of government resources, one thing we can do is share information that we have. We do have some unique sources of information, whether it's from our partners in the intelligence community, whether it's what DHS sees from across the dot-gov, what our friends at DOD see and protect on their networks, law enforcement, et cetera. So we have a broad set of information that we can share, and Sean Henry is right that that term does get overused a lot. It's a big blanket term. So when we work with critical infrastructure, sharing information, we need to recognize who we're sharing what with, so that they can take action. Sometimes that's actionable indicators, MD5 hash values. Sometimes it's sitting down with CEOs or CIOs.
Those we've found to be a very productive group to work with, to make sure that they understand the threat. What really is the threat landscape? What are those most important things where they want to allocate their scarce resources? What decisions are they making in configuring their networks that may introduce significant risk? And what about some of the new technologies out there? I've had a number of CIOs come to us and say, should we be implementing application whitelisting? Is it worth the effort? Questions like that, where there are a lot of vendors out there proposing different solutions, and they are looking for some objective lessons learned that they can get from government. So, how do we work with the people who are making strategic investment decisions, CEOs, CIOs, et cetera? And then, how do we work with the folks within the critical infrastructure companies who are doing that real hands-on protection? Those are the actionable indicators. And actually, right now, today, we're having a quarterly meeting of our Advanced Threat Technical Exchange, which is part of our, well, we have awful names for things. I think the Secret Service always has great, really cool names for their programs, and we always have awful acronyms that don't even spell anything: our Cybersecurity Information Sharing and Collaboration Program. I know, it's an awful acronym; I'll take suggestions. But that's where we share sensitive but unclassified information with critical infrastructure companies so that they can protect their own and their customers' networks, and they provide information back to us about what they're seeing on their networks. We do that through a legal agreement called a Cooperative Research and Development Agreement that really lays out how they can use our information and how we can use their information, so that everybody is on the same page.
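The "actionable indicators, MD5 hash values" the speaker mentions are typically used in the obvious way: hash what you observe and compare it against the shared bad-hash set. A minimal sketch with invented sample data (no real indicators here):

```python
import hashlib

def md5_hex(data: bytes) -> str:
    """MD5 digest as a hex string, the form indicators are usually shared in."""
    return hashlib.md5(data).hexdigest()

def match_indicators(files: dict, bad_hashes: set) -> list:
    """Return the names of files whose MD5 matches a shared indicator."""
    return [name for name, data in files.items()
            if md5_hex(data) in bad_hashes]

# Invented example: one shared indicator, two observed files.
payload = b"dropper payload bytes"
shared_indicators = {md5_hex(payload)}
observed = {"invoice.exe": payload, "memo.doc": b"benign memo"}
```

This is why hash indicators are "actionable": a recipient can apply them mechanically, without needing the context behind them.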
And it's a program that grew out of lessons learned from the DIB CS/IA program, a pilot that we did jointly with DOD, and in the financial services sector, and now is something where we've learned a lot of lessons and it's available to all of the critical infrastructure sectors. So far we have 14 sectors, not in totality, but members of 14 different sectors that participate, either through information sharing and analysis centers or as individual companies, where companies choose to do that or where there is no information sharing and analysis center. And so what we do through that program is share those machine-readable indicators, and we are working toward increased machine readability. We started with very basic CSV, but we're now working in a format referred to as STIX and TAXII, which some of our partners are actually using to pilot true machine-to-machine communications with no human in the loop. What we're putting out in those formats is still pulled down off of a secure website. But we share information with them, they share information with us on a regular basis. We've shared almost 20,000 indicators through the program. When we started about 18 months ago, about 80% of the indicators were coming from government and 20% were coming from the industry partners. Now it's 60/40, which I think really shows that the industry partners are starting to see value and putting more in. And we're really seeing unique things about what our threat actors are doing in different sectors that we would not see in government. So there may be one threat actor that behaves very differently when they're working against a manufacturing company than they would if they're trying to get into the Department of Defense. So, very different TTPs from those groups. So we have the flow of the actionable indicators. We have mitigation strategies that go out.
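The "very basic CSV" starting point can be sketched as below; the column names and values are assumptions for illustration, not the program's actual schema, and STIX/TAXII replace this with richer structured objects and a transport protocol:

```python
import csv
import io

# Hypothetical CSV indicator feed with a traffic-light-protocol column;
# schema and values are illustrative only.
FEED = """indicator_type,value,tlp
md5,0123456789abcdef0123456789abcdef,GREEN
domain,c2.example.net,AMBER
ipv4,203.0.113.7,AMBER
"""

def load_indicators(text: str) -> list:
    """Parse a CSV feed into one dict per indicator row."""
    return list(csv.DictReader(io.StringIO(text)))

# A recipient might keep only what may be shared broadly:
green = [r["value"] for r in load_indicators(FEED) if r["tlp"] == "GREEN"]
```

Machine-to-machine sharing with no human in the loop means the consumer ingests such feeds automatically into blocking or alerting infrastructure rather than reviewing rows by hand.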
But then we've also found great value in these analyst-to-analyst exchanges like we're having today, where people come in and talk to their peers and say, this is what happened to us and this is how I dealt with it. And then people can ask questions. And there have been many, many examples of where somebody has heard a company from another sector brief and they've been ready for what happened to them in the future, or they've been able to go back and immediately apply something to what's already happening to them. It's interesting, because some people come into the room and will throw out their name, company, and sector, and then other people will just say, my name is Bob, and that's all that they're willing to share. We do allow that anonymity for those who want it. And we have government partners and industry partners that participate there. So I think that's an important part of the information exchanges. It's not just all about the ones and zeros going back and forth, but it's getting the smart people from industry and government having those technical discussions about what's working and what isn't with the actors. So that's our CISCP program. As Herb mentioned, Enhanced Cybersecurity Services is a new effort that DHS has launched. It's something that started with the defense industrial base, the DIB opt-in activity. And back in the spring, DHS took over the relationship with the commercial service providers who provide those DIB opt-in services. Now, with the executive order that came out, DHS is able to work with those providers so that the services can be provided to all 16 critical infrastructure sectors. So for those of you who are not intimately familiar with what I mean when I say DIB opt-in or Enhanced Cybersecurity Services: basically, we talk about tearlining information, and what we share through CISCP is unclassified. We use a traffic light protocol that governs whether it's proprietary data or whether it's information that came from the government.
Not everything can be tearlined. So when we do Enhanced Cybersecurity Services, this is where we provide those classified indicators, up to the most sensitive levels of classification, to information and communication technology providers so that they can protect their customers' networks, if their customers choose to buy those services. Right now it is email filtering and DNS sinkholing. Those are the two countermeasures that are available. In addition to increasing the sectors who can buy the services, and the pilot that DOD did had only ISPs participating, we've expanded the kinds of ICT providers who can participate: managed security service providers, AV companies, companies like that have expressed an interest in coming in through the program. So it is very new in offering these services to other sectors. And anytime you do something that's new and different, you don't realize all of the little details that you need to sort out until you do it. So how do you validate who can buy the services? How do you figure out a process for adding new countermeasures? What are all those details that have to be sorted out? How do you get a memorandum of agreement signed and then get the systems accredited at the SCI level? But this gives us an opportunity to take that information that can't be quickly tearlined down to an unclassified level and get it to those ICT providers that a large percentage of our critical infrastructure community uses, and, if those critical infrastructure companies are interested, allow them to buy the services where they receive that protection with the classified information. So that's a very quick overview. I'm happy to answer more questions and go in different directions. Great, thanks for that. Good. Okay, so we've got about 10 minutes before we have to move on to the next session. And so what I'd like to do is open this up for some questions from the group here, since we have limited time.
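Of the two countermeasures named, DNS sinkholing is the easier to sketch: a provider's resolver answers queries for known-bad domains with a controlled address, so infected hosts connect to a monitored server instead of their command-and-control. A minimal illustration, with hypothetical names and addresses drawn from the documentation ranges:

```python
# Hypothetical sinkhole configuration; not any provider's real setup.
SINKHOLE_IP = "192.0.2.1"
MALICIOUS_DOMAINS = {"c2.example.net"}
UPSTREAM = {"c2.example.net": "203.0.113.9",
            "www.example.org": "198.51.100.4"}

def resolve(name: str) -> str:
    if name in MALICIOUS_DOMAINS:
        return SINKHOLE_IP          # redirect (and log) instead of NXDOMAIN
    return UPSTREAM.get(name, "")   # stand-in for a real upstream lookup
```

The value of the classified indicators in this model is precisely the contents of that malicious-domain set: the provider applies them without ever exposing them to the customer.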
I've got one right here from this gentleman. Yes, Mike, yeah. My name is Alex Lawson. I'm from Inside U.S.-China Trade. Mr. Baker raised the specter of some different sorts of enforcement tactics that the government can take. He mentioned the visa issue. He hinted at some financial sanctions. This panel kind of seems to center around business strategies for preventing attacks, which is valuable. But I was sort of hoping to maybe get some information from you on tactics that the government thinks are worthwhile for enforcement, the visa thing, the sanctions, or are there some other options that the government can consider? There's been a lot of sort of public naming and shaming going on more; you know, China was mentioned in the DOD report, but I don't know if there's anything with a little more teeth that the government can consider as a next step. I can only speak on behalf of DHS, and DHS's role in cyber, in our Office of Cybersecurity and Communications, is very much focused on prevent, protect, building resilience, and then responding when there is an incident. So I can't speak on behalf of my interagency partners, but obviously this is an issue that is discussed as a whole-of-government discussion, each with our respective roles. I understand. Another question from the group? Yes, sir. Stand up and give us your name, please. Here's your mic here so we can hear. Thank you. My name is Alexander Soli. I'm with Delta Risk, and I was considering the idea, something that David mentioned earlier, about how most of our cybersecurity is based off of, sorry, most of the antivirus is based off of blacklisting and such. And I was wondering if anyone has been considering some sort of alternative to that, or any sort of legality issues involved with creating some sort of autonomous way of finding critical vulnerabilities and such. That's one maybe Irv could take.
Does anyone wanna try that? Or I can. I'll give you a lawyer's view of cybersecurity, which is not necessarily something you should take to the bank, or at least I wouldn't code it directly. But yeah, I think one of our biggest strategic problems, and I'll tell you a story from the border. When I went down to the border, when I had just started at DHS and I was dealing with the Border Patrol, routinely they'd say, well, we sent out two border agents and they brought back 30 people who were trying to cross the border. And I finally said, how did two agents bring back 30 people? And he said, oh, well, we surround them. But the real answer was they surrendered, because the worst thing that would happen is they would be taken back across the border and let go to try again. This is where we are with keeping people out of our networks. After we've spent a boatload of money stopping up all the rat holes, they spearphish us again, and 90% of the time somebody's gonna open the email. And the reason is it's getting past all of our signature-based solutions, and we do need automated mechanisms for dealing with that. FireEye, I know, has sponsored this, but the fact is they've got an interesting approach to this, which is to say, let's put this in a virtual machine and just watch what it does. And if it doesn't do what PDFs usually do, then we're not delivering it. And you don't actually have to know in advance that this is bad. You don't even have to know what bad things it's trying to do. If it's doing something that Adobe didn't tell you it was supposed to do, you just don't deliver it. It seems to me that that has some real potential to make it much harder for people to get back in. And to go back to the border: now, when they stop people crossing the border, there's a distinct chance that there will be a struggle, even gunplay. And that is a reflection, oddly, of how much better border security is, because they don't expect to be able to get in if they can't break past this.
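The detonation idea described above amounts to allowlisting behaviors rather than signatures: observe what a file does in a sandbox and deliver it only if every action is something that file type is expected to do. A toy sketch, with hypothetical behavior names standing in for real sandbox telemetry:

```python
# What a PDF is "supposed" to do, per the speaker's framing; the
# behavior vocabulary here is invented for illustration.
EXPECTED_PDF_BEHAVIORS = {"open_file", "render_page", "load_font"}

def safe_to_deliver(observed: set) -> bool:
    # Any unexpected action (spawning a shell, touching the registry)
    # withholds delivery, with no prior knowledge that the file is bad.
    return observed <= EXPECTED_PDF_BEHAVIORS
```

The key property is the one the speaker notes: no signature or advance knowledge of the specific malware is required, only a model of normal behavior for the file type.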
If they're coming from Latin America, they're gonna have a long flight. They're not just gonna walk across the border. And I think we will know we're doing a better job when it is harder to get into the network than it is today. Thanks, Jim. Any other questions? We've got a little bit more time. Yeah. Hi, I'm DJ Bello from Robert Morris University. Talk a little bit louder, please, or get closer to the mic. Hi, I'm DJ Bello, and I'm from Robert Morris University. I know hacking back is a big gray area, but how does the U.S. viewpoint on that compare to other countries'? And how does that affect our defense modeling? Well, let me take a quick shot at that. That's an interesting and complex question. First of all, I like to say, for every cyber warrior there's a cyber lawyer looking over their shoulder, and AT&T is no different than that. And my government experience was the same way. The problem today is, for example, the Computer Fraud and Abuse Act basically says what you cannot do in cyber, in attacking somebody else's system. It says if you do this, it's illegal, you can get arrested for doing that. It doesn't define what you can do going back in the other direction. It doesn't define any particular framework for what you can do in response to a penetration. So there's a lot of discussion going on right now, including on the Hill, as we're looking at the cyber legislation. CISPA, of course, has been passed by the House, and that same topic is being taken up in the Senate. And one of the discussions that's taking place is what countermeasures, or defensive measures, or active defense measures, or whatever you wanna call them, are allowable under the law. I think Irv might have mentioned this: okay, if I set up a honeypot and I put dummy data in there, that's relatively accepted today from a legal perspective. There are some liability issues, okay?
But it's relatively benign, okay? Suppose I set up a honeypot, and in that dummy file I place malware that's gonna destroy the file structure of the machine that it gets downloaded to, okay? That's probably not legal under today's laws, and probably never should be. And one of the biggest issues there, which we wrestle with every day, and it's part of this whitelist-blacklist question, is that most of the hosts that are compromised, and these are websites and hosting, cloud hosting services, are compromised not by the owners of the site, but by somebody who's a botmaster or controller or a coder, whatever you wanna call it, going around looking for machines to compromise that they can then use to distribute malware. So they may be innocent, but typically the first thing you see, where you think this attack is coming from, is an innocent bystander. So from a legal perspective, what do you wanna do to an innocent bystander? We're having discussions right now with DHS and with FBI about, okay, we see these sites, we see a bad site. We can tell right away it's downloading malware to our customers. And number one, can I share that legally with you, the identifier information, which is typically IP addresses and URLs and MACs and things like that that we can see in the network traffic? But then, what can you do with that? Can you go in and knock on the door of the hosting service and say, hey guys, you've got a problem, and let's work together to get that fixed? That's still a very gray area in the law right now, and of course, particularly with CISPA, as soon as the idea of sharing, and liability coming along with that, gets raised, the privacy people get very upset, because their interpretation of CISPA is: you can break the law and get away with it because you've got immunity in the process. You can do anything you want, you can violate anybody's privacy, and you've been immunized by this legislation, you get out of jail free.
So that's kind of where we are in wrestling with that whole topic, and we're hoping that out of this Senate and House legislative process we're gonna get some clearer guidance from a legal perspective about what can be shared, what purposes that can be shared for, and then how do you protect privacy within that context? Is sharing an IP address a violation of somebody's privacy? Not clear today if that is or not. So, I've blogged about this, and one of the things that I pointed out in a recent blog post is that Luxembourg has turned out to have more cojones than the entire United States cybersecurity establishment. They've got a guy there who just said, he read the Mandiant report, and he said, well, you can kind of go from the command and control server back to the unit, so why don't I just go looking for all the guys running Poison Ivy and break into their networks? And he did that, found all kinds of interesting stuff. The response from the US government has been to say, oh, you know, that could be illegal. That's a bad idea. Well, what they're really saying is, we don't know how to protect you, but we do know how to prevent you from protecting yourself. It's absolutely nuts. This is an old computer crime section view of the world: kind of leave it to the professionals. You know, when you say leave it to the professionals when you're dealing with crime, you've gotta actually be able to do something if you're a professional. Our professionals are completely unable to protect us. In the end, people will find ways to protect themselves. Currently, the law is written so vaguely that almost everything is illegal. Many things that we do today as a matter of routine are arguably illegal under the Computer Fraud and Abuse Act.
And until we find a way to embarrass the Justice Department into starting to say, yeah, we didn't mean that you couldn't protect yourself in some way, we are going to have a uniquely disabled cybersecurity infrastructure in this country. Thanks, sir. Oh, just one minute on this. So there's a really, really interesting legal issue which has been discussed here, which is absolutely fascinating. There's also a policy question. So let's assume the law gets clarified and there's clarity on what companies can and cannot do. But then there's gonna be an equities discussion here. So for example, let's say that companies are given more leeway to take certain actions; that might undermine US diplomatic efforts to establish certain norms in cyberspace. Or it might not, or it might be worth having that, or it might not. So there's gonna be a really interesting policy discussion in terms of what the US is trying to accomplish internationally in terms of norms of behavior, even the discussions in the internet governance fora, which Jim can tell you all about, what's going on there, but it might have an impact on that as well. So it's a very interesting issue from a policy perspective as well as a legal one. So we're out of time, I get the hook from Jim back here, but I would like to summarize one thing from my perspective, having been in this business a long time, which is that we've heard some great discussion today, some very difficult legal issues and policy issues that we have to address. But from the standpoint of where we are today versus where we were just five to 10 years ago, I think we've accomplished a lot of things. Now, have we accomplished it at the pace we should? Absolutely not. Is the threat having a field day? Absolutely. But I can remember back having to go to the Hill with Sean and a lot of the other folks that were part of the cyber group; the Hill didn't even understand any of this stuff five to 10 years ago.
All this discussion we're having right now, they didn't have the foggiest notion about this topic. And we'd have to bring props into the Hill to get them to understand just the basics of what this topic of cybersecurity was all about. At the same time, 10 years ago, NSA thought they could build pretty much anything that was needed in the most critical areas in this space. It was all about government-built or government-enhanced solutions. NSA doesn't believe that anymore. They realize the private sector in large measure has the capability to build the right solutions in this space. You look at what Ashar was talking about earlier with FireEye. FireEye was built on an idea that was germinated by DARPA: we need the zero-day problem solved. FireEye was born, and In-Q-Tel, the CIA's arm for investment and innovation, invested in it. And that's happening all over the place with Silicon Valley and things like that. So the good news is we have lots of solutions coming into play, lots of solutions even in the active defense area. So there's a lot of room for optimism, but there's no doubt, I think we've heard today, that there's a tremendous amount of frustration. We've got to speed up, get the car shifted out of first gear and up to third or fourth gear, and what's the best way to go about doing that? We did not get into the issue of how much regulation, how strong the policies are that we have to put in place to deal with this. But the reality is, I think every one of us should be a little bit more optimistic, but clearly we still have a long way to go in this space. Hopefully the discussions this morning have been enlightening; they brought up some interesting issues, and I know some of us are gonna be around for the next session for further dialogue with each one of you. Jim, did you have some closing comments? Okay, and thank you to CSIS for helping to host this.