Good morning. My name is Jim Lewis. Welcome to CSIS. This will be a good event. It's a timely topic. I always thought the Wassenaar Arrangement was somewhat obscure, so it's strange to see it exciting so much attention, but it's something we definitely want to talk about. We have a great panel, but I will say we tried to get government agencies and human rights advocates to join the panel as well. And after our fourth turndown, I said, oh, forget it, the people we have are good enough. So I don't know why the people who think the rule is a good idea are unwilling to come and say that in public as well, but I will get more than enough from the group we have here on both the pros and the cons of this. Let me introduce them very quickly. I'm just going to say their names and affiliations. We should have bios on our website at some point, so you can check them out, but they're all experts on this issue. They're experts on many issues, but they are particularly experts on this one. We have Denise Zheng, who is the deputy director of the Strategic Technologies Program here at CSIS. Stewart Baker, longtime friend and advocate, former foe in the crypto wars; oddly enough, we're on the same side. Something must be wrong. Cristin Goodwin from Microsoft, who's worked on these issues for quite a long time and knows them. Laura Galante from FireEye, one of the companies doing a lot of work in this field and a real expert. Michael Mani, did I get it right? Good, from Symantec, who actually advises the Commerce Department and knows a fair amount of what they're up to. And then I am going to blow this one. I knew I would blow one name. Moussouris? Close enough? That's not bad. Thank you. Katie Moussouris from HackerOne. Yes, a real hacker has come to talk to us. But you aren't wearing a t-shirt. Great. So with that, let's go ahead and get started, and we'll kick it off.
I think the format will be, first, Denise will ask the panel questions, and then we'll open it up to the audience to hear what you might want to talk about. Thank you. Thanks, Jim. Good morning, and thanks to everyone for being here today, especially our panelists who flew in from the West Coast, which I think is actually the majority of them, or at least half. My name is Denise Zheng. I'm a senior fellow here at CSIS and deputy director of our Strategic Technologies Program. I'll be moderating. I want to assure you that we're going to have a really lively and interesting discussion this morning, just based on the discussion we had back in the green room about the BIS proposed rule to restrict the export of intrusion software and network surveillance tools. Before we get started, I wanted to provide a brief overview of the issue. About a year and a half ago, the 41 member countries of the Wassenaar Arrangement got together and updated the agreement. They decided to include two new categories of software for export control: intrusion software and IP network surveillance systems. I'm going to rely on our panelists to describe what that actually means and how you define what it actually covers. But the Wassenaar Arrangement is basically an agreement to control the export and proliferation of conventional arms and certain dual-use military technologies. So this is a really interesting update, because these tools were not traditionally thought of as arms or those types of systems. The intended goal of this agreement was to limit the export of surveillance technologies to oppressive regimes that use such technologies to stifle political dissent, freedom of speech, or other activities that may violate human rights. And initially, many of the privacy and human rights groups were supportive of the concept.
But the agreement language and the subsequent proposed implementation rules that came out in late May from the Department of Commerce have created a lot of consternation for companies, for academia, for NGOs, and for individual researchers who believe that the definitions are just way too broad and too vague, and that the burdensome licensing requirements will have a huge negative impact on legitimate, defensive cybersecurity activities that companies employ: identifying software vulnerabilities and patching them, conducting penetration testing of their own networks, sharing information about malicious cyber threats between companies and other entities, and a host of other types of activities. The comment period on the proposed rule just ended this week, on Monday. I think BIS got a whole slew of comments, so this is really perfect timing to discuss the issue. With that, I wanted to turn to Stewart, former government official, partner at Steptoe & Johnson, and guru, of course, on all things cyber. I was wondering if you could tell us how the proposed BIS process works. What's covered? What's not covered? How does the licensing process work? What are the disclosure requirements? And of course, what's the penalty for violating? So, is it easy to get a license? There are civil fines of hundreds of thousands of dollars, maybe a million in some cases, so you don't screw around. There are plenty of criminal prosecutions that have been initiated as well. Voluntary disclosures are frequent, and referrals go to the Justice Department if it appears that someone in your organization may have known that the sale was unauthorized. So it's very serious. It was serious because it was a Cold War measure that was designed to keep the Soviet bloc from acquiring Western technologies that would improve their military. There are really two export control regimes.
There's the arms regulation, the military equipment, which is regulated by the State Department, and dual-use technology, which is regulated by the Commerce Department under the Export Administration Act. This is dual-use technology at a minimum, and therefore regulated by the Commerce Department. I'm going to talk about Wassenaar a little now and then a little at the end. Wassenaar is the way in which the United States tried to persuade other Western allies to adopt similar restrictions on the sale of technology to the Soviet bloc. And whereas the US has an export control law, most other countries simply treat the Wassenaar Arrangement as binding by virtue of its language and its status as an international agreement. It is a long list of stuff that we will not sell to disfavored countries without a license. That is to say, every private citizen and private company must go to their government and get a license. And that's pretty much what the understanding is in Wassenaar. Traditionally, when we were worried about the Soviet bloc, the US took very hard stands and wanted to cut off large amounts of technology transfers. The Europeans were more sophisticated and a little more eager for their companies to succeed, and were much more flexible about granting licenses. And so there was a constant tension in which the US said to the Europeans, you're not living up to the understanding with respect to Wassenaar. As we'll see, that dynamic's a little different in this area, but the tension continues to exist. The provision here, and the proposed rule that has come out of the Commerce Department as a result of an intense interagency process, follows the Wassenaar tradition of trying to name the technology and say, that stuff is going to be restricted by way of transfers. And it follows a traditional rule of saying: the stuff is controlled, the products are controlled, the software about the product is controlled, and technology concerning the product is controlled.
Technology is going to turn out to be a critical discussion here. It is essentially know-how. Understanding how to design a product is also controlled under US law, more controversially elsewhere. And then the final element of the overall structure being adopted is deemed exports, which says that you can actually export technology by talking to somebody inside the United States, as long as that person is not an American or a Canadian. It is a deemed export, which means that we could export stuff right here, right now, in violation of the Export Administration Act. And understanding how broad that term is in connection with the products that are covered today is an important part of understanding the rule. So now let me turn to the rule quickly and give you the tools you'll need to understand the remaining speakers. The thing that is controlled is systems that generate, operate, deliver, or communicate with intrusion software. So if you imagine the controls that you would use to drive intrusion software, that's supposed to be what's controlled. And the regulators thought that would turn out to be a major limitation on the scope of this rule. Some doubt whether that, in fact, is going to be true. Technology is controlled, but the crucial question is: what is intrusion software? And here I'm going to ask you to imagine a two-pronged test. The first set of prongs are "and" prongs: you have to have both of those things. Then within each of those prongs, there are two sub-prongs, of which you can have any one. So let me give you the highlights and then dive in. Intrusion software is something that is specially designed to either avoid detection by monitoring tools or defeat protective measures. That's prong one; either of those things gets you into the definition. Then you have the second prong, which is that when you have done those things, you extract or modify data, or you modify the standard execution path of a program.
So those are, if you're diagramming this: you have to have both of those things, but within those two categories, either one of them will do it. Now, let me go back and just stress the breadth of that language: specially designed to avoid detection by monitoring tools. We don't know what monitoring tools are, but we know what avoiding detection means. Presumably, something in your computer is looking for what you're doing, and you're trying to make sure that something in your computer does not see that happen. Or defeat protective measures. Again, we don't know exactly what protective measures are. There's a kind of moral judgment in there, that they ought to be protective and warm and loving and maternal, but exactly what a protective measure is, again, is not really well defined. So that's the first set of requirements. Then the other is: does your program extract or modify data, or does it modify the standard execution path? And with the standard execution path, again, there's a kind of moral judgment. The right path is what you're deviating from, but exactly what it means to deviate from the standard path is something that is going to be spelled out by the remainder of our panelists. So why don't I stop there? That is the essential legal framework that was adopted. Let me turn briefly then to Wassenaar. Wassenaar adopted this rule in the wake of scandals about principally Western European, but also American, companies providing tools that were used by authoritarian governments to deal with the Arab Spring in ways that bien-pensant opinion in the West was not comfortable with. And the goal was to say, we should not be providing tools to these governments that would allow them to squash human rights inside their countries. That is the goal of this, but as you'll hear, it turns out to be much broader than that.
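The two-pronged test Baker walks through above can be diagrammed as a simple boolean predicate. This is an illustrative sketch only, not legal advice or the rule's actual text; the class and attribute names are invented for clarity:

```python
from dataclasses import dataclass

@dataclass
class SoftwareProfile:
    # Prong 1 (either condition suffices): designed for stealth
    avoids_monitoring_tools: bool       # evades AV / host monitoring
    defeats_protective_measures: bool   # bypasses sandboxing, DEP, ASLR, etc.
    # Prong 2 (either condition suffices): what it does once inside
    extracts_or_modifies_data: bool
    modifies_execution_path: bool       # deviates from the "standard" path

def is_intrusion_software(p: SoftwareProfile) -> bool:
    """AND of two ORs: one option from each prong is required."""
    prong1 = p.avoids_monitoring_tools or p.defeats_protective_measures
    prong2 = p.extracts_or_modifies_data or p.modifies_execution_path
    return prong1 and prong2
```

Note how the structure captures the breadth the panel is worried about: stealth plus any data access or control-flow change qualifies, which also describes many debuggers, updaters, and admin tools.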
And one of the questions you might ask is, to the extent that this has a major impact on our ability to do good cybersecurity, does it actually foster human rights abuses rather than prevent them? I think there's a real argument to be made that it does. And then finally, the question arises: are we applying these controls in a multilateral fashion so that they do not disadvantage one country's industry? The US has a very strong cybersecurity industry, but it turns out that many other countries with strong cybersecurity industries are not really governed by Wassenaar. That would include Israel, which has a distant relationship to Wassenaar; China, which has none; and Russia, which is part of Wassenaar but widely believed not to apply the licensing standards in the same fashion that we do. The last point in that regard: many of you saw that Hacking Team was itself hacked, or maybe it was an inside job. Looking through those files, it turns out that even though they were selling to very authoritarian governments, they continued to be able to support those products long after Italy had accepted this set of restrictions, and perhaps granted a global license to engage in sales. What that means, I think, is that what the US is doing here is probably going to turn out to be something close to unilateral controls on its industry, which will have pretty significant economic impact. So I'll stop there. So Cristin, as Stewart mentioned, there's the definition of the two categories of software that are covered, intrusion software and IP network surveillance software. What BIS has said is that we're not talking about exploits and malware itself; we're actually talking about command and control frameworks, the delivery vehicle, the delivery mechanism, the tools that you might use to actually find zero days and conduct vulnerability testing. And I'm curious to get your perspective from Microsoft.
Since you guys do a lot of penetration testing of your own software, is there a way to define that difference between the malware and exploits themselves versus the items, the types of software, that BIS is covering? In part, the challenge is one of context. When a piece of malware comes in, the first thing that we do is reverse engineer it. Well, how would you do that? What are the tools that you would use to support that? Those very underlying elements that we might need would fall under this licensing regime. So to be able to even understand what the malware is, what it says, what it does, you'd have to go and get a license for approval just to be able to start to break it down. Or if it's something new that you haven't seen, and you want to cobble together a new tool or new capability to research it, then you'd have to go out and potentially pursue a license for that so that you can analyze the malware. So how do you define or describe this category? I think the challenge becomes that these are everyday activities. And so if we're looking for a way to articulate what should be regulated and what shouldn't, we really need to be very careful that we are not bringing into scope the everyday activities that are asked of and required by everybody here, all the security companies that are looking at not only new malware but new vulnerabilities, new anomalies, and anomalous behavior, to try to detect what the threat is and how we can appropriately respond or mitigate. And so that definition is really a sliding scale. It doesn't have an easy set of words we can associate with it, but I think it comes back to one of behavior, which is: if it's consistently and repeatedly used as a technique by the security community to address an issue, it does not and should not carry that license obligation from a security perspective.
So Michael, I believe you are on the Department of Commerce technical advisory committee that actually reviews some of these things prior to them coming to an agreement and putting out a rule. And one of the primary criticisms of the BIS rule is that it just doesn't understand the technology and doesn't understand what industry is doing in this space. So I was wondering if you could explain the technical consultation process and how it may or may not have worked in this case. Yeah, well, I think to Commerce and BIS's credit, they have a very robust consultation process in most cases. They have a whole series of technical advisory committees. The one that I sit on is the Information Systems Technical Advisory Committee, which would seem like it would be right in the sweet spot of what that committee advises on. In this particular case, there's a general conception within the various agencies that they understand IT and high tech. They have a pretty good grasp, I think, of hardware systems, and probably a reasonably good grasp of big operating systems like Apple's or Microsoft's. This particular area of IT, I don't think they have a good understanding of at all. And the fact that they thought they understood it, but didn't really, led them down a path that allowed them to think they were getting the right type of control, or putting in language that would get them what they wanted, without an understanding that what they've actually done is sweep up all the legitimate work that goes on in this space: essentially the entire defense side of this defense-offense war that's going on constantly around the world. So there are a couple of different ways that happened. The cybersecurity community wasn't well represented on the TACs at the time this was going through. And the outreach, I don't think, was done particularly effectively in this case across the different agencies.
I'm certainly not going to single out Commerce at this point, but you could come to Symantec and say, this is what we're thinking we want to do, or these are the papers coming through from Wassenaar on what other countries want to do, because this rule was actually introduced by the UK through their process. We didn't have that interaction. We didn't have that exchange of information. I'm pretty confident I could say that the other folks on the panel didn't have that interaction at all. Now, we don't know why that was. There may have been cases where they weren't interested in getting our input; I can't say that for sure, but it didn't happen. So they went into Wassenaar, they rolled into Wassenaar, and they thought they had it figured out. I think they believed they had the space well understood, and they wrote up the rule, and I think all the other 40 countries at Wassenaar thought they had it pretty well understood as well. And somehow they didn't understand that what they were writing actually has a greater impact on the legitimate work that goes on around the world, and it doesn't really impact at all the people that, as Stewart said, they were targeting with this control. And so they passed the rule. Now, it's a unanimous process. All 41 nations have to say yes to this, which makes it really hard now to go back and say we made a mistake, because you've got to convince the 41 countries they made a mistake. Well, in the several meetings we've had with some of the agencies, you know, BIS and State and DOD and NSA, there is significant disagreement, I think, among those agencies about what they're trying to do here. And so the idea that you can also convince the other 40 nations that this was a bad idea is a daunting task, and we heard yesterday that they don't want to do it. They're very resistant to going back and saying we missed the mark on this one.
But, you know, I think I read somewhere that a number of European Union countries have actually moved forward and implemented the agreement, and the computer security research community actually prefers some of those other regimes over the proposed rule we've seen here in the United States. So what are the key differences between, say, the UK model and what we see here? And anyone should feel free to answer. Well, I think that, as Stewart mentioned, every country has the ability to implement Wassenaar in the form and fashion that it wants in its country. A lot of countries don't actually put it into law or into a regulation. Almost every country, with the possible exception of the UK, does not have an export control regime like the United States: it's not as robust, it does not control as many items, and indeed, as Stewart mentioned, they actually encourage and want people to export, to grow their businesses and their economies. It's an economic incentive for them to let stuff move around without too many restrictions. The other aspect of this is that their export control enforcement is not the same model as we have in the US. Most countries do not have an office of export enforcement like OEE. And so they don't really know very much about what their businesses and their industry are doing, or where they're sending stuff around the world, unless someone comes in and tells them, or a company realizes that it overstepped a boundary and comes in and voluntarily discloses that it made a mistake. And the interesting thing in the United States is that there's an investigative process, there's a review. Sometimes, as we said, you'll get fines; if they find out that someone actually knew about it or might have been trying to hide the fact, there could be criminal prosecution. In most other countries, there will be a "what happened" in written form, and a "what are you doing to fix it so that it doesn't happen again?"
Okay, that sounds good. Thank you very much. Have a nice day. And that's the end of it. So, as Stewart said, they're not seeing the enforcement and the level of care and sweep that the United States is intending for this rule. And so you can get companies like Hacking Team being able to get a license in Italy and go right on doing what they're doing. So I think the summary, really, of the differences is that the style of enforcement in the United States is very different. There's a default deny in the United States for these license applications, so there's that barrier. And there's also the removal of some of the exclusions and protected categories that we saw in other countries' implementations of the Wassenaar Arrangement. Those two together form the basis of the reason why so many people are up in arms about the proposed United States implementation of the Wassenaar Arrangement. Whereas, you know, while the Wassenaar Arrangement's additions on intrusion software were disturbing, to say the least, elsewhere it's not as grave and serious an issue in terms of its ability to impede the level of defense that the entire internet needs. And in fact, defense itself is more greatly impacted by the controls on offensive technology than offense is. So Laura, I was wondering, you know, as a cybersecurity provider, and Michael, you should feel free to chime in as well here: what is the burden on you guys in terms of actually acquiring the license? What's the process that you'd have to go through to acquire the license so that you can continue to provide the services and products that you already provide? Well, let me unpack a little bit this concept that we keep discussing, which is that this will have an effect on the defensive community.
My company, and Symantec as well, is one of the providers of cybersecurity products that help prevent, understand, and detect activity that might be malicious and that's trying to get into someone's system. So let me give you an example. Malware that has functionality we've seen behave in a way that tries to get at information, data, or access in someone's network would be blocked in a sandbox. It'll be detonated before it comes into the network. That's what our product is, right? So if that malware, and our understanding of it that goes out to our cloud, is regulated under export control, that means anyone who's buying our product who's not a US citizen, or using it as a non-US citizen, think of your IT guy at a company in the US who might not be a US citizen, won't be able to see that detection activity coming in through our product. Now let me take it one step further. The way that this is positioned, and how Stewart laid out the intrusion software definition in the current rule, puts something like a system administrator tool that's helping reset the network into the same category as what we would call a rootkit, which is presumptively denied under this rule. So we're sitting here talking about products that have a defensive, or just a simple IT administrator, quality to them that are structurally the same as the presumptively denied rootkits; or penetration testing software that would be used for zero-day exploits: it all falls into this overly broad category we're talking about as intrusion software. So, to answer your direct question, Denise, this means we're talking about thousands, tens of thousands, of export license applications that we would be submitting every time I want the researcher in Singapore to help understand what the unpublished vulnerability, the zero day, might be that's trying to hit our client in Japan. But there's a presumption of denial on zero days.
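Galante's sandbox example above, detonating a suspicious sample in isolation and blocking it before it reaches the production network, can be sketched roughly as follows. This is a hypothetical illustration only; the behavior names and function are invented, and a real product would use instrumented virtual machines rather than a lookup set:

```python
# Hypothetical sketch of sandbox "detonation" logic: run a sample in an
# isolated environment, record the behaviors it exhibits, and block it
# before delivery if any observed behavior looks intrusive.
INTRUSIVE_BEHAVIORS = {
    "reads_credential_store",
    "disables_security_logging",
    "exfiltrates_data",
}

def detonate(observed_behaviors: set[str]) -> str:
    """Return a verdict for a sample based on its observed sandbox behavior."""
    hits = observed_behaviors & INTRUSIVE_BEHAVIORS
    return "block" if hits else "allow"
```

The panel's point is that the detection knowledge feeding a system like this, the behavioral signatures shared with a global cloud and global analysts, is exactly the kind of "technology" the rule would subject to licensing.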
They don't want you to get the license; don't bother submitting the application, because it's going to get turned down. At Symantec, we go even a step further than that. What Laura is saying is right in Symantec's ballpark as well, and we actually move beyond that to what we call penetration testing. Penetration testing is essentially, at a very high level, trying to hack into your own systems, or into your clients' systems, to discover vulnerabilities that are not known. Well, that's the definition of a zero day. Most of the work that we're doing involves something we have now been told is presumed to be denied if we want to work in that space. We penetration test all of our products to make sure that they're safe and secure. And we do that by basically adopting the same tools, tactics, techniques, processes, and procedures that the bad guys who want to come in for malicious purposes might use. Then we also penetration test our networks. And because we're a multinational company, and every company here, and most of the companies out there, are all multinational, their networks are going around the world in real time at the speed of light. And so by definition, when you're in your network, you're going to be crossing a boundary, which makes it an export. And so you're going to get swept up in this. And the work goes on, and it is done in a way that is very safe and secure. It's actually the most secure environment that we can create in the IT world, because these vulnerabilities and exploits that we're working with and discovering are so extraordinarily dangerous. If they got out, they could bankrupt us, and they could bankrupt our clients' companies as well. And so we operate in a very controlled, very rigorous process, because we're hugely incented to make sure we keep this stuff safe and that it doesn't go where it's not supposed to.
And then we basically beat up our systems, we beat them to a pulp, and we try to find where the vulnerabilities are, and we always find them. Everybody's system has some vulnerabilities, whether you know it or not. And because we find all that information and we write it up in a report, we share it with our labs around the world. We have something like nine labs around the world; we have 50,000 employees scattered all over the world working on this stuff. It goes to all those people, especially when you've identified a vulnerability or an exploit and we have an expert, for example, in India who is an expert in that vulnerability or exploit. It goes to him so that he can research, as quickly, effectively, and efficiently as possible, the remediation for it, develop a patch, and get the patch out as quickly as possible. All that work would be significantly impacted, almost to the point where it would come to a screeching halt. Now, what does that mean? Let me just go one step further, Denise. That means that every company whose networks rely on penetration testing to ensure that they're safe is impacted by this rule. We can't deliver to you a penetration test without waiting maybe up to six months to get a license. So that puts your network at risk, and all the third parties, the hundreds of third-party companies out there that do this work, will also be caught up in this and wouldn't be able to provide those tests. That actually puts the critical infrastructure of the US, and I would say even the critical infrastructure around the world, at higher risk of getting penetrated and getting hacked. And that is exactly the opposite of what they wanted to achieve with this rule. They've inadvertently gone in the other direction. And it hits at every phase of the cybersecurity cycle: detection, prevention, response, and recovery.
We have employees in 128 countries, and I have never seen a cybersecurity thread, a conversation about a threat or a vulnerability, that hasn't involved someone from outside the United States or Canada. It just doesn't happen. I've been the lead attorney for the Microsoft Security Response Center since 2008. It is a global team. It is a global process. Every time there's a vulnerability, we work in a follow-the-sun type of capability, so we're moving around the world. And so, when you look at it, I come back to the everyday-use point. Every tactic and technique being talked about here, these are everyday processes for us. And so to have to go on a per-email basis to the Department of Commerce and ask for permission to move to the next step, or to share the next piece of data, share the next tool, build the next piece of code to move forward in the analysis, that's just not a workable process. It's not workable for Commerce. It's not workable for the private sector. It's not workable in the event that other countries imposed that sort of regime on us. So we simply have to have something for every security researcher, from the smallest individual finder all the way up to the big companies like us, that is scalable, that is narrowly tailored, and that is focused on things that are not in the category of everyday use. Maybe Katie, and then Michael and Stewart. Well, you know, we heard some proposals for addressing this issue of sharing information within one organization that has multinational employees from outside the United States and Canada, and that's something called an intra-company transfer license. That will not help, considering this is a global community of defenders that spans multiple organizations. So even the proposed remedies to some of the problems you've been hearing about are still not adequate.
And if you take it to its logical conclusion, with all of the carve-outs that would be needed to essentially remove the impediments to defense, what you have left is that you're not actually going to be regulating anything at that point. So, taking a step back: you know, I've known Cristin for many years. Before I worked at HackerOne, I was a security strategist at Microsoft, and one of the things I created there was Microsoft's bug bounty programs. And specifically, there is a program still running today called the mitigation bypass bounty. What that is, is specifically soliciting the information that is described in this rule. Microsoft pays up to $100,000 for information on new exploitation techniques. These are things that can bypass all of the protective countermeasures as defined in this rule, and the language in the description of that bounty program is almost identical. The reason that bounty program exists is that the only other way a company like Microsoft could learn about new exploitation techniques was through actual attacks. So providing a defensive incentive to bring those forward earlier gives companies like Microsoft a head start in defense. Now, that program was launched just a few months before the Wassenaar Arrangement added these rules. Cristin, would you have let me go forward with that program if this rule had been in place? I can hear you saying you'd try to get to yes, Katie, but who knows. Yeah. So, I mean, I don't think it would have happened. Now, that program has awarded that bounty at least five times in the last two years. Five times Microsoft gained access to the technology that is regulated in this proposal. And that would have been five times that Microsoft would not have had access to that information to build a more secure operating system for the world. So this is a concrete example of how this regulation impacts defense.
Well, taking off from what Katie was saying, a company like Symantec, or any of the companies here, goes through a rigorous process to understand the technologies and products that we're producing and working with so that we can make a determination on whether or not we need a license. And at Symantec, for some of our more complicated technologies or products, that can take up to several months of back and forth with the engineering guys. So if you're talking to some guy down in the basement in Mountain View about his technology, it's a process to try to get him to explain to you in English what it is that he's working on, and to be able to take that and put it into a license application. Under this proposed regime that is out there, we're scratching our heads, frankly, because we don't even know if we would know how to write a license application for this type of work or these types of products. Because, as Katie and everyone else mentioned, this stuff happens, is worked on, and is changing so quickly, in a matter of hours, sometimes minutes, that I'm writing a license application in the morning for a piece of technology that we want to work on, and in the afternoon they're on a different track, working on a different piece of it or an adjunct to it that would change the license application. So I can't even imagine a scenario where I could put a license application in, have BIS review it and say yes or no, in a fashion that would allow my guys to continue working on the projects that they have going forward. So I want to go back to penetration testing, because I think it reveals the core flaw of this proposal. Penetration testing is basically taking the tools the bad guys already have and using them against somebody who's paying you to do that, to see if they can withstand an attack from the bad guys. It's as simple as that. No one thinks that we should be controlling that kind of testing.
That's silly; that's how you get a more secure system. But because what we're controlling under the law is the tool, every use of that tool has to be licensed. So you're creating a massive structure for licensing activity that we want to occur. That's a little crazy. Because of the technology rules, the rules about talking about it, the know-how, if you talk about what's new in the tool that the bad guys are using, you're also conveying, you're exporting, that technology. And so every foreigner you talk to about that is a licensable event. And again, crazy in this context. It's also the case that we're not gonna stop the bad guys from having these tools. They already have the tools. We're taking their tools and using them to test whether we can withstand their attacks. No export control regime is gonna have any impact on the bad guys. There's a narrow little edge case, which is if you took those tools and gave them to a government and the government used them against its people, they could be using them in a way that we consider inappropriate. Fine, but that is a tiny sliver of the activity in this area. Yet in order to get at that sliver, we've created this list that says everything is regulated, everybody comes in for a license. How did we get there? I heard Jim Lewis allude to the fact that we were in what I'm afraid we have to call Crypto World War One, now that we're in Crypto World War Two. And I suspect a lot of this began as a bumper sticker introduced by the ACLU and perhaps some other human rights groups, saying this malware should be treated like a weapon and regulated as such, and we should go to Wassenaar and regulate the malware. And I've long thought that that was really a kind of echo of the Crypto Wars. They were on the other side of the Crypto Wars; they saw encryption being treated as a weapon, regulated, licensed, subject to Wassenaar, and they said, well, that's wrong, but boy, it's powerful.
And now they have said, now that we're in control, we wanna use all that power to get at this activity that we don't like, which is certain governments misusing the technology. I think they fundamentally misunderstood the power of international regulation. And of course, the Crypto Wars ended with the guys who thought crypto should be regulated like a weapon losing, for very good economic reasons that apply across the board to this. And so the whole idea of saying we should use export control regulations to reach this kind of activity, by designating the thing as a regulated thing, is misconceived in a way that's very hard to fix with little license exceptions, which is what the government is now struggling toward. I think we've pointed out a number of ways in which the regime is flawed, but I thought I would add another one, or solicit some comments from the panelists. So I'm curious to understand how Congress thinks about or is reacting to the proposed rule, because as we've seen in the past couple of years, there's been an effort to pass legislation that would improve cyber threat information sharing. And under the proposed rule, it's not just the software that's covered, but any type of data, which could include email communications about tactics, techniques, and procedures that are being used to conduct vulnerability testing, identify zero days, et cetera, et cetera. That includes the types of information sharing that we're trying to promote with legislation. So it seems to me that there is definitely a rub, a contradiction there. And I'm wondering if anyone has talked to folks on the Hill about this, or what's their perspective. Surely some of them are also concerned about the human rights abuses, as well as just the sale and transfer of exploits, malware, and malicious code. So what have folks heard on that side?
One of the things that I think is interesting is that the human rights and privacy advocacy groups out there think cybersecurity is sort of bad because of the penetration testing and intrusion software, that it's a weapon and we should regulate it, as Stuart mentioned. And it's exactly the opposite. The cybersecurity industry is actually their most powerful ally in protecting people's privacy and ensuring that bad things don't happen to people in the human rights arena. And so as we're talking to and doing outreach to the various agencies and Congress, we're pointing out that there's no us versus them here. It's all us on this issue. And to the extent that this rule goes forward as is and you've crippled your cybersecurity industry, you've raised the bar significantly in terms of risk for privacy and human rights. And so that gets their attention and that gets their interest. Now, we've gone up and spent probably a couple of weeks; we've done many meetings with various committees and staffers. And what we typically find is that some of them may have heard of Wassenaar but have no idea really what it is. And so we're seeing this interesting problem, which is that no one really understands cybersecurity and what the cybersecurity business does, other than to say, ooh, it's scary and it's bad. And then we also have export control regimes; understanding how export controls work and how Wassenaar works are two very arcane and sort of murky areas that no one really gets. And so we have to spend a lot of time trying to say, here's what's going on, here's what all this means, and here's context for you about this. Once we do that, the interest perks up significantly, and they are really trying, I think, to find ways that they can join in this conversation and this dialogue.
And I think one of the things we've seen consistently across the area of congressional outreach is this whole sharing of cybersecurity information and research. The president has issued an executive order, I think it's 13691, that says we need to enhance and do everything we can to enable and facilitate security sharing and research in the cybersecurity realm. There is significant legislation being called for to do enhancements to cybersecurity sharing and research. And so it's sort of, the one hand doesn't know what the other hand is doing here in some respects. And it definitely contradicts the idea that we wanna regulate and restrict this type of activity. I think Senator Warner has already asked the Commerce Department to ensure that there'll be a second round of comments after the first set of comments is absorbed and a new reg is written. He would like to see that put out for comment. So my sense on this, and I've worked with a coalition that includes several people who are on the panel, is that it's not that hard to sell in Congress. They say, wait a minute. This is gonna hurt US industry, might hurt cybersecurity generally by restricting what we can do. It doesn't have anything to do with national security and probably won't actually achieve the human rights goals that are proposed. But it will allow the State Department to exercise its moral vanity around the world. And that's why we're doing it. I'm sorry, I'm not buying it. Okay, it's been brought up how this hurts defense as a whole and specifically also hurts the US companies who provide defenses. There's also a company like mine, which is part of this brand new defensive incentive economy. As you know, hackers can make money by selling zero days and exploits on various markets, and they can be selling them to be used for offensive purposes.
And the various bug bounty programs, and programs like Microsoft's mitigation bypass bounty, represent a growing but still small defensive incentive economy for this type of dual-use information. And so a company like mine, which is the very first platform provider for vulnerability defense and bug bounties, we have only just begun. Our first bug was filed on our platform 20 months ago; since then, over 10,000 different bugs have been closed. If this were to go forward, it would have a serious effect on this growing defensive incentive economy, of which my company is one of the players. If you look culturally at how the cybersecurity industry works, it's at a really interesting juxtaposition against legislation. Legislation is trying to force information sharing. It's trying to create a schema that incents companies to share information with the government. This proposal would be further adding regulation and process to sharing information, even within the industry. Now look at that against what cybersecurity personnel typically use to govern the exchange of information. It's called the traffic light protocol. Some of you may have heard of it; it's really simple. Red: don't share with anybody but the person you give it to. Amber: share only within your organization and with those who need to know. Green: share within the broader community, but not publicly. And white: it's public information. It's not a written NDA. It's not a liability exemption approval. It's an entire global network, with a taxonomy that DHS publishes, that's based on trust. These are companies, individuals, finders, academics, all around the world in a global environment, operating on a self-imposed, self-regulated system of trust. And so that's the very special and unique nature of what happens in cybersecurity.
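[Editor's note] The traffic light protocol described above can be sketched as a simple data structure. This is an illustrative model only, not official TLP tooling; the `may_share` helper and the audience names are assumptions for the sketch, while the four labels and their sharing scopes follow the published TLP definitions.

```python
# Illustrative sketch of the Traffic Light Protocol (TLP) sharing levels.
# The audience taxonomy and helper function are hypothetical simplifications.
from enum import Enum


class TLP(Enum):
    RED = "recipients only; no further disclosure"
    AMBER = "limited disclosure within the recipient's organization"
    GREEN = "share within the broader community, but not publicly"
    WHITE = "public; disclosure is not limited"


def may_share(label: TLP, audience: str) -> bool:
    """Return True if information with this TLP label may reach the audience.

    `audience` is one of: "named-recipient", "organization",
    "community", "public" (a simplified, hypothetical taxonomy).
    """
    allowed = {
        TLP.RED: {"named-recipient"},
        TLP.AMBER: {"named-recipient", "organization"},
        TLP.GREEN: {"named-recipient", "organization", "community"},
        TLP.WHITE: {"named-recipient", "organization", "community", "public"},
    }
    return audience in allowed[label]
```

The point of the sketch is the panelists' observation: the entire scheme is a handful of trust conventions, enforced socially rather than by written NDAs or license reviews.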
So if we upend this through a legislative or an onerous regulatory approach that requires each step of this path to obtain a license, or to bend itself into the position of some exception, that decimates this very culture we have, which enables global, and I mean global, responders to operate. So just keep that in mind. You know, the security community is governed by a set of trust principles; it is not a regulatory-based environment. So we're going to have to continue to think about, if we go forward with this, are we really stifling the way the cybersecurity industry works, full stop? At this point I think we'll turn to questions from the audience, in the corner. Could you please wait for the microphone and also introduce yourself? Michael Schrage with MIT. This is a very interesting conversation, but there's one thing that you've left out, and I want to make sure that I understand this, because sadly I'm old enough to remember COCOM when Richard Perle was running it. The issue is, what about your customers? If I am Goldman Sachs, if I'm a SIFI, I'm completely network-dependent and cyber-confident. It used to be that I hired a CISO to deal with this issue, but now I've got the cyber risk committee of the board, with fiduciary responsibilities and liabilities to go along with that. You're telling me that if I'm running a SIFI, I literally cannot be a fiduciary, because I cannot, by every single comment that's been made here, guarantee or provide assurance as to the cyber protections of my global network. Is my understanding of this correct, based on what you're saying, that your customers are screwed? I mean, with all due respect to Symantec and FireEye, it's your customers who aren't able to update as a function of your products and services. Is that a fair understanding? Yeah, I think so. I mean, look, I have managed service clients all over the world, right?
Our FireEye-as-a-service clients, I can't share with them that this is malware that's trying to hit their network, right? And that could be their branch in Taiwan that's getting hit, when it's actually a U.S. company. I can't even share it with their security operations center in Taiwan. You can't share it within your company without coming in and getting licenses. So your IT guys running your worldwide network would have to come in and get licenses to be able to take whatever report Symantec or FireEye gave you and try to mitigate those vulnerabilities. So I just have one more question. I can't speak for Treasury; Treasury has not been involved in any of the interagency meetings that I've been in, to my knowledge. DoD has been involved, but DoD, through their technology directorates, and the NSA have chimed in and been less than supportive of industry in these areas. And so one of the things we put in our comments that was really important to try to get out was exactly your point. It's the critical infrastructure in the U.S. and all over the world, including research and the infrastructure in academia that you're representing. All of this gets impacted, because you all have, either by law as a fiduciary responsibility or by industry standards, a requirement to ensure that your network is safe and secure. And you're gonna do that with your own IT departments, by using third parties, by using Symantec or FireEye to come in and do that type of work for you. We would have to say, your network moves outside the U.S. and Canada? Yes? Okay, well, we'll have to come back to you in six months after we have our licenses in place. So when I was at DHS, to get into the interagency politics of this, we had an interest in certain export control decisions because we felt there were technologies that had to succeed commercially and therefore had to be easily licensed.
That general policy still applies, but the State Department in particular has been enormously resistant to having DHS participate in this. DHS has finally crashed the party, and they have a strong interest in the cybersecurity of all the industries that they deal with. I think you're quite right. Treasury probably still doesn't know that this is going on, but they're gonna hear soon. We have a question in the front here. I'm Terry Murphy. I've learned a lot today. I thought I knew something about this area. I've been out of date, but it still was very enlightening. But also, I'd just like to comment. I served for four years on the RPTAC, the Regulations and Procedures Technical Advisory Committee. And Stuart, by the way, did not fully disclose: he was general counsel to NSA. And if he told you a tenth of what he knows, he would have to kill you. So he's tremendously involved in this. I'd like to comment quickly on something that Michael said that was quite disturbing to me. When I joined the RPTAC, and I think it was quite normal, we had secret security clearances. The meetings were half public and half private. To get a security clearance, you had to fill out a form this big. I had top secret in the Navy; I don't remember anything like that. I had to list all my contacts and all the embassies I'd ever heard of, et cetera, et cetera. And you'll be getting a letter from OPM shortly. Well, that's okay, but the process of getting that clearance literally turned us from lobbyists vis-a-vis the government into collaborators with the government. We were there to provide, I'm a lawyer, but with many client contacts over the years, we were there to provide the industry perspective to the government, but also to be part of the government process. And I didn't hear that from Michael. I heard, and I'm out of date, that the government has gone away from you.
And I have to tell you, when we worked on regs, we didn't just go to the meetings. I couldn't tell you the number of emails that would circulate on the "woulds" instead of the "shoulds," on everything. So if Stuart is correct about what Senator Warner might make happen, if you get another chance, then I strongly urge, you may not be on that committee anymore, Michael, but I strongly urge anybody who's on these advisory committees to act like part of the government. That's your job. You're not there just as industry. Don't sit there and wait and say, well, they're governing us. Be part of it. For the panelists? Okay, sure, I could respond to that comment, without taking too much time, I hope. So I glossed over the process a lot just trying to get it out there, but essentially your descriptions are pretty close to how the process still works today. I have a secret clearance; I used to have a top secret clearance. We were told on the ISTAC that even though we're coming from industry, we're not there representing our companies. We're there as technical experts to advise the government. And they did engage rather late in the game, I think in some cases, when the rule was about to be published as it is. And all that back and forth and exchange that you talked about went on with the ISTAC. But I can say, and I think from our meeting yesterday with the various agencies, that they're completely wrapped around the axle on their interagency process on this rule. They can't agree on what the rule should say in a lot of ways. And I think I'm being fair in saying that the technical experts, well, whoever they're talking to, aren't really helping them. And so there are people that are wedded to this thing for emotional and we're-doing-the-right-thing types of reasons, without any real knowledge about what it is that they're trying to regulate. So we feed into that, but it's not moving.
It's all gummed up. So they published a very broad rule that sweeps in everything. And then they told us yesterday, we expect industry to come back and tell us where we want carve-outs, exceptions, and exemptions to this rule. And to me, that suggests a couple of things, and I'm happy to hear from folks in the audience. One, you don't really know what it is you're regulating, do you? You don't understand what it is that you're regulating, or what it is you're trying to get after here, in a way that is effective, that actually would achieve your goal but also preserve a critical industry in the United States. And second, your really smart technical people aren't helping you and don't really understand it either. And so they have to come back and say, industry, help us out here and put all these carve-outs in place. And at the end of the day, it's gonna be very interesting to see what is left that is being regulated after we have put all these carve-outs in there. Do you have anything worth putting on the books at that point? There are a couple of steps we need to take to make sure that that happens. We have to see US industry invited to participate in the US delegation. We have to see US industry being able to engage in the plenary sessions. There's a technical advisory committee meeting coming up, and you guys know better than I, I think it's in September. Having the private sector have a place at the table to be able to talk about these issues would get those discussions moving in a more open and deliberate way. You know, I think one of the things that is being communicated very clearly to State, Defense, and Commerce is that the security community wants to engage. We want to be participants in this dialogue. We all agree on human rights, and that the use of surveillance technology to abuse them is absolutely not the right behavior.
And we want to be able to engage on how to make sure that we are doing the right thing and not harming the security of the world's customers writ large because we are overbroad and not as focused as we should be. So, US industry and the delegation should start getting ready for the plenary and start that dialogue with the technical advisory committee as quickly as we can. So there was a mood yesterday, when we were speaking to these different agencies, that some of them were, as you pointed out, kind of emotionally attached to the implementation as written. And it reminded me of when I used to be a penetration tester, showing a company that their software had bugs, and they did not want to accept that in fact there were flaws. And so I feel like we are at this point where the industry here is trying to point out some bugs in this Wassenaar arrangement and its implementation. And ideally we will have a better reception at pointing out these flaws and being able to remediate them. But I saw some very interesting parallels between doing that technical work and having to do this technical advisement on these regulations. We had a question in the back row there. Hi, Carrie Ann from the Organization of American States. My question is twofold. The rule that is being proposed would affect governments, since most governments do have their services being provided from the US. And it would affect as well law enforcement offices, where many governments do have in their cybercrime legislation the ability for law enforcement to develop intrusion software internally to be able to investigate certain cybercrimes.
So my question is, has industry done an impact assessment as to how this would affect your clients, especially government clients, that you could present to the regulators, showing how this would affect not just your private sector entities globally, but your government entities, including law enforcement? So, and Stuart, you can jump in here as well. There is a government exemption in the rule. So if we were gonna work with the US government, theoretically we would be exempt from licensing requirements. But the potential exists, in the way that we're reading it, and we're, for good reason, reading it very conservatively, that the exemption wouldn't apply if I wanted to work with foreign governments on penetration testing. So if I wanted to work with Brazil, or if I wanted to work with the UK even, I might need to enter into this licensing process to do that work. Then you go also to the public and private interactions with law enforcement organizations around the world, because law enforcement is also a leader in this area, trying to stop this proliferation of hacking tools going to the wrong people and to rogue nations. So we would potentially, if I read the rule the way it seems to be written, need licenses then to also go and work with those international law enforcement organizations. And we would have to tell them, I can't give you the latest tools right now because I'm waiting for my license. You know, it seems to me that law enforcement has a lot of roles here, and certainly prosecuting hackers is one of them, but law enforcement, also faced with strong encryption and TLS spreading around the world, is having to hack into criminals' systems in order to gather evidence.
I think that is almost in the bullseye of what the State Department intends to regulate, and they intend to say, I'm sorry, we don't have a very good relationship with Ecuador this year and we don't wanna help them, because we think they're not sufficiently democratic, not sufficiently protective of rights, and therefore no one will help the Ecuadorian police use these tools even if they're investigating drug trafficking. So I think it's extraordinarily risky for a company to work with a foreign government in that way under this rule, because I believe that is exactly what the State Department wants to regulate. I have a question in the middle here, in the white suit. Thank you. Paul Joyo, NSI. Katie's comments are very interesting because they point to the fundamental problem that we face. I mean, look at the panel: we have Microsoft, which arguably could be called the greatest enabler of hacking in the world, which I'm sure you won't agree with. We have FireEye, with a tremendous business in exposing and remediating these types of penetrations; Symantec, of course, in the same business. And then Katie, you're doing the Lord's work by paying people to expose zero-days. But the presupposition behind it all is that we're operating with non-secure systems, non-trusted systems, and that's the problem. The question, I think, the broader question, let's say, gets down to the metaphysics of the issue, which is, how can we expect to defend ourselves when the environment that we operate in is an untrusted environment? Now, when Big Men and others built the system for the intelligence community, they had to build a trusted system. So my question is, since, Stuart, you mentioned the OPM hack, when are we going to get to the point where, at least on the US government side, we build trusted systems that are not open to this type of manipulation that is going on? Even Hacking Team has just been hacked and lost 500 megabytes of information from their files.
So what can we do about building a trusted network, for at least our government? So I do want to defend Microsoft, for whom I've never worked. That's it, there you go. But they have taken security seriously since the early aughts and have worked very hard. And as we know, every Patch Tuesday, the second Tuesday of every month, we discover that they've still found more problems. It is not easy to build a trusted system. And the principal reason it's not easy is that we don't really want them: the controls that would be necessary to make those systems trusted would also make them hard to use. And at times it's even simpler than that. 99% of the time when a PC is compromised, a patch was already available; it just hadn't been installed at the time of compromise. So when you're talking about infected systems and exploits and attacks and malware, this is part of a cultural issue. Why aren't people patching? Why aren't people updating? There are things you can do to protect yourselves and your systems and your networks that don't happen. You read about that time and time again. Those are the big hacks that are in the news. You don't necessarily need the skills of an advanced persistent actor when you leave the front door unlocked and all your windows open in the back of the house. And so the state of trusted systems, absolutely, that is a desired goal for all of us. And every time we're releasing a new product or putting a new service out there, we are modifying our processes. We're improving our coding. We're refining our operational procedures. But it's a system-based process. So let me suggest one aspect that ties back to this. We don't have good security today. And all the security measures that all the companies up here have implemented are still not preventing things like the OPM hack. And we know we are going to have to innovate, because we've got a live adversary that is innovating.
And this is going to be a constant war back and forth: as they come up with new attacks, we're gonna have to come up with new defenses. This is probably the most dangerous aspect of this rule. Because it basically says, well, we know how this industry works, and we'll just give you exceptions for the things that provide security, and then you'll be fine. But of course, that does nothing for all the innovation that has to occur in this area. In fact, it puts it presumptively under regulation. This is maybe the biggest worry I have: that it will turn innovation, which currently is a kind of hourly, daily event, into something that operates at the speed of the general counsel review. And specifically, the program that I mentioned, that I started at Microsoft two years ago, was designed to provide insight into the next generation of exploitation techniques so that Microsoft could enable its iterative process and build the next generation of defenses across the platform. Without that information ahead of time, Microsoft is just going to be waiting for an attack to take place to learn about those new exploitation techniques. The bad guys have current exploitation techniques that work. They don't need the advancements in exploitation technology to do their job, whereas the defenders absolutely do. Up here in the middle. Steve Winters, a consultant. I'm trying to get this into a really simple question. We seem to have this image of bad guys, and I think we know who we mean, but at the same time you now have, as is so well known, state-sponsored hacking attacks. And it's become very clear that some of the states are not happy with some of the security companies, in particular Kaspersky. So I'll direct this to Mr. Baker, since we heard he had some familiarity with NSA. If you were at NSA now, would you be happy with these regulations, as opposed to their never having appeared? So that's a great question.
NSA is perceived to be one of the defenders of this rule in the interagency. And they are de facto the people who administer what's left of the encryption rules. My guess is they don't need three quarters, maybe even 90%, of what is swept into this rule; that's the human rights side of the State Department that's pushing for that, and perhaps the regulators who handle license applications. Since they have a defensive role as well as an offensive role, they probably don't like the impact on the cybersecurity industry. They probably do like seeing all the offensive and defensive technologies as they leave. They probably like having some, in basketball it's called a hand check: it's not that you're stopping them, but you know what's happening. So you've got intelligence on everything that's happening in the market. Those things I'm sure they like. The rest of this, I suspect, is more the State Department than NSA. So we actually got a question on Twitter, and this is a question that I've been meaning to ask this panel as well, because we've talked a lot about ways in which the rules are flawed and all the exceptions that would have to be made, et cetera, et cetera. Well, at this point we've signed on to the agreement; we're gonna have to do something. So what is the solution? Maybe we can just go down the panel here and start with Katie. What is a workable solution? You can't just have a whole list of exceptions and carve-outs, right? I mean, it'd be endless. So what's a practical solution here? Well, I am relatively new to this particular bug-finding game in regulations. My understanding is that since we did agree to the Wassenaar Arrangement, we do have to implement it somehow. However, it's so clear at this point that there was a fundamental misunderstanding as to what was being regulated.
It's along the lines of trying to regulate the first five steps in creating a vaccine, which happen to be the first five steps in creating a biological weapon. They are identical. And so I think at this point, having the right technologists in the room is step one. And I think we've all agreed that participation in the technical advisory committees is mandatory for those of us who are here, who will be impacted, and who understand the implications. And then the possibility of going back to those 41 countries should not be ruled out. We should be able to raise these issues at a global level, because it is a global problem and it will affect global defense. And that's exactly what needs to happen. We were talking with the interagency, and State was particularly adamant that we weren't gonna go back to Wassenaar, that it would be impossible to get the other 40 nations of Wassenaar to accept, "No, we missed the mark here. Let's go back and redo it." And I don't think the people, at least on this panel, agree with that position. Why can't we go back and have another conversation with them? And this time, let's get the right people in the room talking about it, so that not only the U.S. delegation that goes to Wassenaar, but also the other 40 delegations, understand what this industry does, what we use these tools and techniques for, and frankly, how you're impacting defense and critical infrastructure around the world, actually making the world more dangerous with this rule. So some people have talked about a change that would be less focused on the software and technology types and more focused on who you're selling to, the oppressive governments, et cetera. Is that a workable solution, short of going back to renegotiate? I think that still would make us question whether this is the right vehicle. So here's the issue here.
Yes, the techies not being able to speak to policymakers is part of the problem, but the bigger issue here is that we're using a framework that was built around munitions and weaponry, and we're talking about tools. And it goes right back to your point, Katie, where the first five steps in creating a vaccine are the same as those used for biological warfare. Source code for a weapon, or for the use of a weapon, and source code for a legitimate defensive tool, or just a legitimate-use tool, are completely the same. So this isn't just a dual-use munitions case, and that's why Wassenaar is not the right vehicle for this, right? This is about how we think about the use of this type of tool, and if we're going to regulate use, then it doesn't fit in a technology control agreement. Yeah, I think that's very, very well put. I'll go back to my earlier statement: technology controls on everyday technology serve no purpose. When you come back to the question of what problem we're trying to solve here, the question presented is: how do you prevent surveillance software from being used against those involved in human rights engagements and debates around the world? That is the right question. That is a question that we can remain focused on in a narrow and limited way. It may be that there's a solution from a nation-state perspective. It may be that there's an intent-based perspective, or a sale-based perspective; that will require a lot of dialogue and input. Right now, we're thrashing around with a system where we're being asked to look at exceptions and terminology that simply doesn't work. We pivot off the fact that the current definition, if you read it broadly, as we should, can even apply to what we call application compatibility capabilities.
So sometimes, if you need your printer to work with your computer, you may have to actually circumvent the security measures on it, and enable that with software that modifies the intended path of the file, in order to get your computer to print out the document. So we can just stop printing papers going forward until we can get this fixed. It's a silly example, but it highlights just how overbroad this is. So definitions will matter. We have to rethink definitions. We have to rethink scope, narrowly tailoring the rule and excepting out, ab initio, everyday-use technologies. So, four points on this. One, every agency in the government will admit now that the impact of this is beyond what they expected, and that the actual technical implications of the language that was chosen at Wassenaar are much broader than they expected. Well, if they know that now, surely that was true of the other Wassenaar nations, who have fewer bureaucrats devoted to this, which is a basis for going back to Wassenaar and saying, at a minimum, you chose the wrong solution here. So then the question is: but it's already in the law, it's already in Wassenaar, shouldn't we be doing something to implement it? And I would offer two suggestions. One, I've been providing advice in this area to companies for years, and probably three or four years ago, the Commerce Department and NSA started reaching out to people who were in this business, saying: by the way, if there's encryption in your product anywhere, and there always is, you're already subject to export controls, and you need to come in and talk to us. Now they focused that heavily on this kind of activity, but we are already implementing Wassenaar in a relatively effective way, certainly more effective, and I think this is part of what we take back to Wassenaar, than the Italians, who apparently granted a global license for exactly the kind of activity that the Europeans tell us they're trying to regulate.
Third, yes, we could do something that is focused on particular governments that we think are engaged in human rights abuses and shouldn't use high-tech tools for that. This entire regulatory construct has been built in part by the State Department so they don't have to say which governments they don't like. They can just do it in a quiet licensing process after the fact. They really should be forced to say, yeah, there are a few governments we don't wanna see using these tools, because that's the problem they're trying to get at, and they're just saving themselves a certain amount of diplomatic pain by regulating everybody and pretending that they can make the decisions later. And then finally, we probably should think about solving this problem much more often with criminal prosecutions. Since this Wassenaar idea came up, the government has busted a lot of people using these tools in precisely the ways we don't like, including people who were selling them so that you could spy on your spouse, and they've made the case under U.S. law already. Since intent is really the critical distinction between proper and improper use of these tools, relying on criminal law, which turns on intent, is probably a much more prudent exercise of government power than the export control rules. At this point, we're up to our hour-and-a-half mark. This panel has been fantastic and really enlightening for me, and I think our program here at CSIS is gonna try to capture a lot of these ideas, the problems, the flaws with the rule, as well as potential solutions, in a brief write-up, and we would love to solicit your expertise, folks on the panel, to help us with that. But this has been really terrific, and I wanted to thank everyone for their contribution today. Thanks for being here. Thank you.