 Thanks for coming. This panel, if you saw the correct listing, is called Disclosure, the Mother of All Vulnerabilities. I have a couple of comments to make before we get going. This morning, Simple Nomad asked an important question about this panel, so all of you who were at his talk and are here, thanks for coming. He wanted to know why Microsoft is represented on a panel about what the feds think. So let me address this right off. First, this panel is not about what the feds think. It's about what security people, researchers, the intelligence community, the executive branch of our government, and hopefully all of you think. Second, Microsoft declined to come speak on this panel, so they're not here. Also, I had an important call this morning, for all of you who have been following security news the past couple of days and have been tracking the HP SnoSoft issue. Adriel called me from SnoSoft this morning and asked me to read a statement from him that accurately reflects what we'll be talking about later. He said he does not want the scene to be in an uproar as a result of what happened. SnoSoft did approach HP in a very blunt way, and he understands that now. Simply put, corporations are used to being approached with subtlety. He asked that we all come together to find a middle ground. This community needs to approach corporate America in a more positive light, and he hopes that what we talk about will help us do so. So with that, I'll do my more formal introduction. Everyone here is presumably involved in security in some way. Today's panel will discuss responsible disclosure of vulnerability information. This has become a hot topic lately as debate rages over the best way to safeguard computers. Global InterSec has been weighing in heavily in favor of individual responsibility. 
There is a plausible and workable way to disclose vulnerability information which suits everyone, but it requires cooperation between all the involved communities. This is why responsible disclosure is important. Currently, new security information either gets dumped onto the internet or sucked into the black hole of corporate America. The path to real security is found in the middle ground. Such a course must enable software companies to create patches by giving them sufficient information about the problem, while also ensuring that the information does not fall into the wrong hands and that something is done about the problem. And although web defacements are annoying, I'm speaking more of corporate espionage, intellectual property theft, acts of terrorism, etc. Why do the losses due to attacks increase every year? Why do viruses using known exploits propagate so rapidly? Why are advisories released immediately after vendor notification? A large part of the answer is current practices and the lack of communication between computer-using communities. The biggest vulnerability still affecting the entire internet is disclosure itself. Bringing everyone together is a step-by-step process. I hope that today's panel will help us all move forward into a relatively more secure and responsible computing age. To that end, please allow me to briefly introduce our panel members. To my left is Dick Schaeffer, Deputy Director of Information Assurance at the National Security Agency. To his left is Marcus Sachs, Director for Communication Infrastructure Protection, Office of Cyberspace Security at the White House. To his left is Sami Saydjari, Senior Principal Scientist at the Strategic Research Institute. To his left is Marty Lindner, Incident Handling Team Leader for the CERT Coordination Center. And finally, on the far end, is Tom Parker, Director of Research for Global InterSec. 
Each of the panelists is going to give a very brief comment on disclosure, then we'll talk about some pre-suggested questions, and then we'll take questions from the audience. Mr. Schaeffer. Thank you, Mike. Just a statement: two years ago I sat on a panel up here with then Assistant Secretary Art Money, and we had somewhat of a similar discussion. Since that time we've had probably one of the most malicious and egregious events that this country has ever seen. That was totally physical, and what might come next could be something that also comes along a cyber dimension. What would be even worse would be some combination of events, both physical and cyber. So what underlies our security is our ability to understand what we've got in the technology that everyone is using, where the vulnerabilities are, and how those things can be fixed. The more that we're able to communicate, the more we're able to talk. And the "we" is really you all out there, everyone who is finding deficiencies or vulnerabilities in products they're using and communicating those to the vendor community; hopefully, the sooner those things get fixed. It doesn't always happen quickly. Sometimes it doesn't happen at all. But I don't think that should diminish our sense of urgency in trying to address the issues within this community, within the national security community, and within the vendor community as well. Thanks, Mike. Good afternoon. My name is Marcus Sachs. I know a lot of you, and I'm glad to see many of your familiar faces here back in the heat of the desert. Since September 11th, of course, many things have changed. Two new offices have appeared at the White House, one being the Office of Homeland Security that many of you are quite familiar with. And of course, the Department of Homeland Security is in legislation at the moment. We do hope to see that occur by the end of the year. 
The office I belong to is the Office of Cyberspace Security, probably well known amongst this community but not well known amongst the rest of the nation, certainly not as well known as the Homeland Security Office. We're very small, 16 people. Mr. Richard Clarke, who's the President's Special Advisor for Cyberspace Security, heads up the group and is also the Chairman of the President's Critical Infrastructure Protection Board, a board consisting of 23 members of various federal organizations who meet periodically to work together with industry on security issues that affect the critical infrastructures. One key piece that we're working on is a strategy for cyberspace security, which is currently in final draft form being circulated amongst the federal agencies for concurrence and amongst some members of industry. We should see a public version out in the latter part of September. It'll be published as paper, as a CD-ROM, and on a website, and when it does come out, there'll be links from WhiteHouse.gov that can take you there. It's fairly lengthy, currently well over 500 pages. What I encourage you to do, please, is take a look at it when it comes out and give us your feedback as quickly and as accurately as possible. We'd like to have a second, more accurate version out by February of next year, and we'll continue to revise, correct, and improve this strategy, because we understand, as you do, that cyberspace changes over time. One of the things you will see addressed, of course, is vulnerability disclosure. Many of you, I'm sure, were present Wednesday morning when Mr. Clarke gave his keynote address at Black Hat. He briefly talked about disclosure. Unfortunately, some of his comments were taken a little bit out of context in some press reporting. What I want to reiterate is that our position on this is that we do believe in disclosure, but what we believe in is responsible and ethical and legal disclosure. 
And what this involves, of course, is that if you're a researcher, or if you're a person who enjoys looking for these types of things, do it within the scope of the law. And our recommendation, of course, is to work with the vendors as best as possible. We do recognize many vendors may or may not be receptive to this type of input. If that fails, the CERT/CC and other third parties are available. And if that fails, we'll try and help you as best as possible. What we would like to try and avoid, of course, is just the blatant open publication of vulnerabilities and exploits without giving the vendors a fair shake. We'll leave it at that, and I'm sure we'll be having quite a bit more lively conversation as the hour goes along. My name is Sami Saydjari, and I represent the research community. In particular, my background is in trusted systems design, figuring out how to design operating systems and distributed networks securely. I'd like to start with a story. A long time ago, some physicians observed during the polio epidemic that a lot of the children in the cities were getting hit much harder by polio than the kids in the country. They were quite disturbed by this, and they were doing correlations among factors, and they couldn't quite figure it out. Later, as we learned more about immunity, they figured out that the difference was that the children in the country were playing in the dirt. They were breathing dirt, and when they were out in the backyard, some of the kids were eating dirt. This was actually a very, very important observation. It was paradoxical that the kids in the city were getting hit harder, because medical care was better there and it was a more sterile environment. But what turned out to be very important is that you have to have exposure to the bad things, the bad elements, in order for your immune system to be able to compensate. 
So as we learned more about immunity, we learned that it's a very intricate and beautiful system, one of the most sophisticated mechanisms in the body. And one thing we learned is that the first step in it is what's called a macrophage. A macrophage is the thing that encounters the bacteria or agent when it enters your body, slices it up, and presents it to the rest of the immune system as the bad thing that it should target. Okay, so you might be asking, am I in the wrong conference right now? The immunology conference is down the hall. But it actually turns out to be extremely relevant here, because this is exactly the role that I see for the hackers and the people who are doing research in finding these vulnerabilities. They are the macrophages of the community. They are the ones finding these vulnerabilities and presenting them to the rest of the immune system, which is the national critical infrastructure and every other part of our infrastructure, so that we have a chance to react. My perspective here is that a very, very bad event is going to happen in cyberspace, and it's imminent. Every indication from everything going on, and every major government report, strongly suggests that a very, very bad event will happen sometime, because we're very vulnerable. And so, given that vulnerability, as we run the trade-offs between disclosure, limited disclosure, and non-disclosure, what I'm here to do is advocate that this disclosure process is critical to the immune process, or our analogy of the immune process, and that this macrophage function, this cyber-macrophage function that's being performed, is important in preparation for what will likely become a very, very devastating event in cyberspace in the very near future. So I am much, much more leaning towards being better prepared by having the macrophages present these vulnerabilities to the rest of the community. 
Hi, I'm Marty Lindner from the CERT Coordination Center, and I think what I first want to say is that we are a coordination center: our goal is to coordinate with the people reporting the vulnerabilities, the vendors, and the public. We get thousands of vulnerability reports a year, and each one of them makes it to the vendor. Not all of them become advisories on our part, because if we put out a thousand advisories a year, no one would get any work done. Hopefully the vendors are working with us to take some of these vulnerabilities and roll them into patches, in the case of Microsoft into service packs, because they're not risky enough at the time to merit an advisory from our point of view. And that goes into the whole discussion of vulnerability metrics, and what's a bad vulnerability versus one that can be fixed later, and I'm sure there'll be a debate about that. The other thing to point out is that a lot of the vulnerabilities aren't with a single vendor. I know if there's a bug in Microsoft Office, the vendor that's going to fix it is Microsoft. But if you take the Apache vulnerabilities over the last couple of months: yes, the Apache source needs to be fixed, but there are a lot of companies out there building their products based on Apache. We try to coordinate with them also, so that when Apache puts out a patch, the vendors using that source have an opportunity, before it becomes public, to build their version of the binary, so that when the advisory goes out or it becomes public, everyone's ready to go. And I think that's an important thing to remember about a vulnerability: there isn't necessarily just one vendor involved all the time. Hi, I'm Tom Parker, Director of Research for Global InterSec. Others of you may know me as MRX. I'll keep this short because we have a lot to get through today. 
There are a lot of people here from a lot of different communities and different backgrounds, whether government, gray hat, black hat, or white hat. I think the key to all of this is working together. Without working together, we can't take advantage of the diversity of all the backgrounds and experiences that we've had. And I think that's the main reason we put this panel together: to try and discuss how we can better communicate and work together to make the most secure internet. So the underlying question behind vulnerability research and disclosure is: what does responsible disclosure really mean? Lots of people have very disparate opinions about what constitutes responsibility. So I'm going to pose that question to each of the communities represented up here, and hopefully later stimulate some discussion among the communities represented in front of us. So what does responsible disclosure mean to each of you? Well, I think primarily responsible disclosure means not allowing vulnerabilities which could impact critical national systems to fall into the hands of those people who would use that information to take advantage and do harm, either to business, which reflects on the economic health of the nation, or to the national security structure within the nation itself. So it really is protecting that information which is critical to the livelihood of everyone in this room. My comments, of course, echo exactly what Dick said. Critical infrastructure protection is what we're about. When vulnerabilities are found in public software, it may appear at first that it's not a big deal. But if we check into it, we find that in fact common off-the-shelf software, Microsoft, Oracle, Cisco, it doesn't matter, is probably far more widely used within critical infrastructure than we expect. Responsibility, of course, is a two-way street. 
It's the responsibility of the person or group who finds some type of vulnerability or weakness to let the vendor know. It's also the responsibility of the vendor to do something about it, and not bury it and wait a year or more before patching the software. I've always been in favor of the model where you give the vendor a certain amount of time to respond and put out a patch, and then you disclose. And what that time is, is subject to debate. Some people use three months, six months, 30 minutes. Somewhere between 30 minutes and six months is probably the right number. The other side of that is I also believe that part of responsible disclosure is the responsibility to disclose, and I know that's sort of the opposite side of it. And I certainly believe in trying to do it in a way that minimizes damage. But I think those who are discovering vulnerabilities have an important role to play in our critical infrastructure protection through this immunological response, the presentation of the vulnerabilities to the community, the responsible presentation, for sure, but that presentation nonetheless, and we are responsible for making sure that gets nurtured and taken care of, and those who are finding them are responsible for making sure they come to light. I think I have to agree with what Sami said. It's unclear what the right amount of time is for vulnerability disclosure. Right now at the CERT/CC, it's basically a 45-day disclosure policy. But if the vendors in good faith are working on a patch, we will adjust that time, as long as we are not seeing any active exploitation of the vulnerability. The moment we see active exploitation of a vulnerability, we will disclose as much information as we can, so that people can defend themselves against the exploit that's underway. 
And I think that's an important key there: if there is active exploitation of a vulnerability, you need to get the word out at some level so people can defend themselves against that vulnerability. I think what responsible disclosure is depends very much on your personal outlook on things. Some researchers think that it's certainly not irresponsible to openly disclose exploits to Bugtraq and other fora. Some unnamed corporations think it's okay to patch vulnerabilities without telling anyone about it and include them in the next update. I think responsible disclosure is finding a happy middle ground between all of those positions. And, as Marcus said, while still protecting critical infrastructure. Even though website defacements and other annoyances do get a little boring, I think it's important that our first priority is to protect the systems which support our country. So we've heard a little bit about what constitutes responsible disclosure. Now I'd like to flip the coin over and ask: what threats do you see as a result of current trends of irresponsibility? Why is responsible disclosure necessary, and why is now the right time to be pursuing such a course? I think we read about this in the paper every day. I think we just don't look at what the aggregation of those threats really might mean. There's more that I may have visibility into, or that we may be concerned about, that I surely can't address in a forum like this. But suffice it to say that it isn't anything much different from what's read in the paper every day; it's clearly just at a much broader scale. The question, of course, is why is it important now? Why wasn't it important 30 years ago? Why wasn't it important during the Civil War? It's always been important. What has changed, of course, is the way we do business. We've been able to communicate globally for over 100 years; it goes back to the telegraph and before. 
But today, I don't think there's a household out there, at least here in the United States, that doesn't have a computer or at least some type of internet connection, and if you go globally, the numbers are quite high and increasing each year. Security awareness is also up, and we're now well past the point of critical mass, to where pretty much any person, and I think all of you are aware of this, can download any type of exploit, vulnerability, etc. The script kiddies run rampant; these sites are well known. The concept of keeping something hidden for a couple of years as a government secret while we quietly work out a solution behind the scenes: those days are gone. We now measure the flash-to-bang in a matter of hours or days from the time something is discovered to the time a working exploit is out. That time is getting almost down to zero, and I think there have even been a couple of cases where we've seen exploits out before the vulnerability was even released, which would be like a negative day rather than a zero day. So yes, the time is right now. We're past the point where we can keep talking about it; it's time to do something about it. My perspective is that in recent times, certainly within the last five years, we've become incredibly dependent on our critical infrastructure, so much so that our society as we know it depends on it. And so what you can do to that infrastructure has become increasingly dangerous, and our vulnerability is extraordinarily high. When you have a high dependence on an increasingly brittle infrastructure, that suggests caution. So this is a good time to talk about what caution means, and how you balance the need to take us from a brittle system to a more robust one against being careful with the brittle system we have, so that we can all benefit from what that system provides us today. 
I think if you go back some number of years, 10 or 15 years ago, most computers out there were managed by professionals who took care of the computer. Now we have home computers all over the place that really aren't managed. So when a vulnerability is discovered, we need to minimize the impact on the people who don't know how to manage their computers. When we discover a vulnerability, we need to get the problem resolved, and then somehow come up with a better method of deploying those patches, so that the people who don't manage computers for a living are not the biggest part of our problem: all of these unprotected home machines and university computers that can be leveraged against our critical infrastructure. I think one of the things that springs to mind first is the growing risk of internet-borne worms. Over the last two years we've seen a growing number of them, and I think in the future we'll see the time between a vulnerability being released and the appearance of the worm that exploits it shorten. A case study: the Apache exploit for FreeBSD. The Scalper worm, which came out I think about 12 days after the original exploit was posted to Bugtraq, was very poorly written; however, it came at a time when people hadn't finished patching their systems yet. Some people still weren't convinced that it was exploitable on non-64-bit platforms. If it had been done well, it could have caused a lot more damage than it did, and I think that's one of the things we need to look at. So my next question, before we open this up to the floor, is really the core of what this is about, and I'll ask the panelists: how can all the different communities represented here, and even those that aren't represented here, work together to reach some sort of ideal disclosure model? I think they really do need to work together. 
It shouldn't be a contest of rushing to get credit for finding a vulnerability and publishing it in some form such that one can say "me first." It's stepping back and saying: what are the possible outcomes if this vulnerability is exploited? What are the possible outcomes if I share this with some community that may not have the perspective that I, as the researcher or the scientist or the system administrator, have, based on my knowledge of what could be done with the vulnerability? So it's sharing the information, and I hate to use the word responsibly, but it is sharing the information responsibly. It's also working not only to identify the vulnerability but also to say, here's what can be done, either by the vendor to put in a patch for the vulnerability, or, in a system sense, here's something that can be done between t0, the time the vulnerability is made known, and the time a patch is actually available to address it: here's something that can be done in the system to mitigate the effects if in fact that vulnerability is exploited. So it is information sharing, it is working together, but I think it's also measured restraint when one finds a flaw, a vulnerability: to really step back and say, what might someone do if they exploit this, and what is the likely or possible impact? Terribly egregious things, from the telephone system coming down, to the financial system coming down, to all the grades for the seniors being changed in some college class. Those all have various impacts, some much more far-reaching than others, but all should be addressed with the same level of concern. What's going to make this come together is a continued cultural change. We've seen quite a bit of it over the past several years, of course: the integration of IT and IT services into businesses that before would never even use a computer for anything other than a fancy electric typewriter. But the part of the culture that hasn't changed is the information-sharing piece of it. We've seen some effort over the past few years to build the ISACs, to allow sector-level sharing of weaknesses and vulnerabilities, say within the banking industry, or within transportation, or other sectors. But even that sharing is done very reluctantly. I realize a lot of that has to do with the fact that a company just does not want other companies, be they peer companies, competitors, or even somebody that's not in their space, to know of its problems. This sharing of information also involves those who are aware of vulnerabilities, not only within their own systems but within other systems: how do you get that information to somebody who can do something about it? And if you are the organization or the individual or the company who can do something about it, what mechanisms do you have in place to receive that information and enter it into the process of making change? Those cultural things are something on which we are making progress, but that is what has to continue to evolve, to the point where we are comfortable sharing our weaknesses amongst ourselves so that something can be done to repair them. Every negotiation begins with a statement of the interests of all parties, and I think we need to understand the interests of the researchers and of the people doing the vulnerability discovery, and what value they get out of the disclosures, so that that can be properly accommodated. There's clear value to individuals and clear value to the community, and I think in the balance we need to understand those values to accommodate them as best as possible. Secondly, I think we need to properly evaluate the costs and benefits of each possible model of disclosure. As I stated earlier, there is a long-term value that people tend to neglect, which is the long-term effect of 
a particular action. So, for example, in response to some of the information that terrorists were getting off the network, and to some hacker attacks, the DOD pulled tremendous amounts of information off the web. That had a tremendous cost to the mission effectiveness of the DOD, a cost that's really never been evaluated. Has that cost been worse than the benefit they received? It's not clear, right? So what I'm advocating is that we clearly evaluate both the near-term cost and the long-term cost of what we do. However we decide to suppress disclosure, to whatever degree we decide to do it, we need to understand the long-term implications for the defendability of our critical infrastructure. Lastly, I think we need to make sure that those who are in this research business set an integrity standard. So, for that very, very tiny fraction of us who don't know: it is wrong to disclose information that is then used to kill people. That makes you an accomplice, right? And so we set up an integrity standard which says that the community comes down on people who do that, who share information with terrorists who then kill 5,000 people. That's wrong. As long as we all understand that's wrong, that standard will help create trust between all parties. I'll keep this short. I think not everyone is created equal, and different people will do different things with a piece of information once it's given to them. So I think it's very important that when you identify a vulnerability and you need to tell someone about it, whether a vendor or a buddy or whoever, you really need to think long and hard about why you're telling them this piece of information, and what the purpose and the use of that information is when you give it to them. In the case of the CERT/CC, we say we have relationships with hundreds of companies, and that's true, but the relationships aren't with the companies; the relationships are with individuals in the companies, individuals that we trust, and there's a big difference there. It's one thing to pick up the phone and say I called Microsoft; it's another thing to pick up the phone and say I called this person at Microsoft, because when I call this person, I know who I'm telling and I know what he or she is going to do with the information. If I pick up the phone and call Bill Gates, I have no idea what he's going to do with that information. I think that's an important point to remember. Back to the original question of what the ideal disclosure metric is: I think the ideal metric would depend very much upon the individual. However, as I said earlier, whatever the ideal metric is, I think the first things that need to be protected are critical infrastructures and SCADA systems: the systems which allow us to make telephone calls, the systems which allow us to take the tube, the systems which allow us to fly safely. There have been numerous cases where hackers have broken into SCADA systems at dams and taken control of floodgates. That isn't acceptable. Whatever the ideal metric is, the first priority does have to be critical infrastructure. Okay, I'm going to open this up for questions from the floor, but let me preface this by requesting that you really think about what the purpose of us being here is. It's to work together and to build a positive way that we can move forward. So flaming my panelists isn't going to do anyone any good, and I'm just going to shut you down, so please ask things that can be addressed positively. Let me pass this mic around. Yeah, I want to see what you think the responsibility of companies is to secure themselves. I mean, when we had the Y2K problem, we basically told them that they had to put information about how much they'd solved the Y2K problem into their SEC filings. So do you think that should be done with security, or do you think something more like a FOIA exemption, which I know is going through Congress right now, 
is the better way to do it I think the companies have a tremendous amount of responsibility it it it's not always followed through just as we all see the the quality of what we're getting out of commercial software vendors today is absolutely terrible we are the beta testers for most of this stuff and that's why the ability to find so many vulnerabilities is is there but I think the more that we're able to to push back in their direction and say we're not going to accept this and when things are found we do want them fixed and and the way is is the total consuming community I think the vendors will continue to come along some do it much much better than others but we're going to buy this stuff I mean so we have we have an opportunity to to to get them to take action to to fix the things that that everybody is finding out there I agree I agree what you were saying I see that because the software that the corporations that write the software have no liability they have no vested interest except that they get a black eye whenever a vulnerability comes out are we going to see liability for those corporations to include security within the development process right now many of those companies do not include it they take no liability and they define what the risk to their corporate customers are without the corporate customers being involved I wish we had somebody from Congress here that could address that directly because that would be a lot or department of justice we the federal government of course as was just explained we have an enormous purchasing power ourselves we also have to put up with the end user license agreements we are also bound by those those same words that you are it bothers us as well that in fact I purchase software from a vendor and the vendor places in a paragraph the fact that they are not responsible for what I do with the software they're not responsible with what happens to the software after it's installed well that's rather wrong I'm sure 
we could spend an entire afternoon debating it. There are certainly issues there that are up for discussion and are being discussed. As for whether legislation is the right way to go, I'm not personally convinced that it is; we would like to try not to legislate as much of this as possible. But your point is absolutely valid, and it's certainly something we need to keep working on until we come to whatever the right answer is.

To the panel in general: we seem to see a lot of bugs that are found by researchers and get sent off to CERT or other places, only to disappear into a black hole, never to be heard from again. These things are also sent to the companies, and again the researchers don't always know who they're sending them to, and those reports disappear into black holes too. The researchers then get frustrated and finally publish their information on Bugtraq. Often CERT talks about waiting until there's a fix, waiting until the horse has already escaped from the barn before announcing that there's a problem with the padlock. How can we address this so that researchers aren't so frustrated that they put bugs out there before that time? How do we get the companies to respond?

I'll answer that. I think there's a misconception, sir. We get a lot of vulnerability reports; we don't publicize every single one of them as an advisory. If we did, we'd be doing a thousand a year, which is way too many. But there is another side, in our case, of the website: the vulnerability database. It holds those vulnerabilities that haven't made it to advisory status but have been reported to us and reported to the vendor, and it documents the steps all the parties took to work on each vulnerability. You'll find thousands of them in there that actually have been fixed and got rolled into a patch that wasn't publicized like an advisory or a bulletin from the vendor, but that were indeed fixed in a service pack or some other rollout of the
software. So there's a fine line there. If you're seeing a vulnerability reported and it truly looks like it's going into a black hole, in our case, call back and ask, and we will go talk to the vendor. There are only so many of us, so we can't deal with all of them completely, but we'll deal with as many of the big ones as we can, in as timely a manner as we can.

During the discussions here we seem to be assuming that there's a vendor to go to, that there's some sort of support available for the software that's vulnerable. In a lot of instances it's shareware that was put out by an individual, or a company that is no longer in business, or software that's no longer supported by the vendor. What would you recommend researchers do in those instances where the software is no longer supported by anyone, but there's still a vulnerability in it?

I'll take that one. There are a couple of things. If there's no one behind the product that has the vulnerability, no company or anything, then it's not going to get fixed. So the best thing you can do at that point is to document the workaround to the vulnerability, if there is one. If there is no workaround, then really the only realistic solution is to not use the product, and that's an individual's choice. But for the open source community and places like that, feel free to contact CERT or other organizations that might know the individuals who could rebuild the code with a patch, and see where it goes from there. Does that help?

I'm just wondering if any of the panel members see a greater importance in reducing the large number of security problems that bugs create for businesses and individuals, which could cost them huge economic losses, weighed against the very slim but real chance of, for example, a terrorist exploiting a bug. Since the bugs are generally all released on Bugtraq anyway, and I'm sure the
terrorists are aware of them and haven't done much with them, wouldn't there be some advantage to releasing the bugs to the public a little more quickly, so that the public and companies can protect themselves against those bugs, or at least work more quickly to protect themselves?

Well, the question would be: what is quick? Yeah, there you go. When you talk about quick, are you talking about a researcher finding a vulnerability and publishing it without sending it to the vendor for a patch to be released, or are you talking about getting it to the vendor and then having the vendor release the vulnerability?

Right, right. In a perfect world, if everything worked like it should, within a matter of a few days to a few weeks. Complex software is complex, and to be fair to the software vendors, a patch is not something you produce in a matter of hours, or even a matter of a few days. But a reasonable amount of time, whatever that is depending on the software, for it to be turned around and made public, absolutely, recognizing that that's a perfect-world model and we don't live in a perfect world. What's at stake here, of course, is when a vulnerability is found, a researcher finds it, and they short-circuit the system and publish it directly to the general audience without letting the software vendors have at least a fair shot at trying to fix it.

Is it turned on? I believe that thing is on. Hi there. You've talked about the responsible disclosure of information, but how many organizations actually have a responsible disclosure policy? Is this just a case of "do as we say, not as we do"? How many organizations actually have a responsible disclosure policy?

I don't know. The question is how many companies, how many different organizations, have a responsible disclosure policy. No, I don't have a number to give you. I don't know. I wouldn't even know the
percentage. I know many do, but it would be unfair to give you a number.

Leave that thing on. With the threats that some companies make against the people who find those vulnerabilities, could you see some sort of federal whistleblower protection for the people who find and disclose them, to protect them from being attacked by the company whose vulnerability they find?

Whistleblower protection? Survey says: whistleblower protection sounds like a good thing. Let me find out. I'm not aware of any ongoing effort like that, and I don't know that we're quite to the point yet where we need it, but if we get there, certainly.

I got asked one question that was requested to be relayed anonymously to the panel, so I'll ask it on their behalf. Everything we've talked about so far has been domestically based: what can we in the US do? How does the fact that vulnerabilities are released all over the world impact any kind of work as we move forward as a country?

You want me answering all these questions? Terrorism is a global issue. We're beginning to deal with nations this country has never had a strong relationship with before, and we're sharing information with nations we never thought we'd share information with. The United States is clearly a leader, both economically and militarily, and if you look at where new technology is coming from, it's coming primarily out of this country. So I think we've got a responsibility to act in the way that we want the rest of the world to follow. We need to set that example. It's not going to solve the problem overnight, and it's not going to solve the problem entirely, but clearly we have a responsibility to show how we can do it, and how we would then like the rest of the world to do it as well.

Just real quick, let me add to that so everybody knows: we do have several ongoing efforts with many countries to share what we've learned in terms of critical
infrastructure protection and cyber security, because of course we are quite a ways ahead of many nations. There are also many things we can learn from other nations that have gone after the internet in different ways. For example, in many Asian countries the internet is simply your handheld; it's contained on your cell phone. It's nothing like what we have, where it's a physical box with a cable or a DSL connection. There are things we can learn from other industries, from other countries, and from other cultures, and we are working very hard to spread what we've learned and to learn from others.

But to the question of what happens if we push all this offshore: well, we're a very interconnected society, and the internet knows no boundaries. There are as yet no laws that apply globally, under which I could arrest somebody in a foreign country for breaking a US law, and I hope we never come to that. I hope that what we can do is continue to set the example. This is, of course, a call not only to the federal government but to you as individuals, American citizens and those of you who are citizens of other countries, to continue to lead the way and set the proper example of how to do the type of business we're talking about here, in terms of ethical disclosure.

Unfortunately we're limited by time, so let me close by saying thank you all for coming, and hopefully, in the future, for participating in this discussion. It's the people here, the people at Black Hat, the people who actually give serious and legitimate thought to the best way to responsibly disclose this information, who are going to contribute to whatever our government, and whatever international governments, decide is the best way. And I personally would like it to come from the populace, not just from the government. Lastly, let me say that we do have some information packets on Global Intersec, and also on Microsoft just for flavor, with their opinions on responsible disclosure. Some of the panelists unfortunately
have secondary engagements, but the rest of us will be around for further discussion. We'll head to the back so they can use the room for the next panel, which I was asked to announce is Richard Thieme's "Hacking the Next Ten Years." Thank you all for coming.