All right, well, welcome everybody to the, what, ninth version of the disclosure panel. But that's okay. We promise you something very interesting that happened over at Black Hat that we're going to get into in a second. My name is Paul Proctor, and I'm with Gartner. I have my esteemed colleague, David Mortman, with Echelon One. We are co-moderating this panel of deviants and others. I'm going to let them introduce themselves, because we only have 50 minutes. We're going to keep our answers short, but we're going to get into hopefully some very interesting stuff. So take it away.

Hi, I'm Dave. I'm not sure if I'm a deviant or an other, but I'm a security researcher, and I've been doing it for a good 12, 14 years. I work for a security company called Matasano, which maybe you've heard of, maybe you haven't. And that's who I am.

Hello, I'm Pamela Fusco. I've been doing information security for about 23 years. Former CISO at Merck Pharmaceutical, MCI, Digex, and Citigroup. I now work for a company called Fishnet Security, and I'm an other, I think.

Hello, I'm Robert Graham. I'm not an other. Several years ago, I founded a company called Network ICE. We created a product called BlackICE, which was acquired by Internet Security Systems. I was their chief scientist for several years. And now I've got a consulting company called Errata Security.

Hi, I'm Ian Robertson. I believe I'm an other as well. I lead a security research group at a large software vendor, Research In Motion.

Ryan Russell, aka Blue Boar. Some of you may have been on some of my mailing lists before, like vuln-dev. I used to work at Security Focus. Now I'm on the other end. I work for a software vendor in the QA department. I get to be on the receiving end of the reports.

[inaudible] Hi, my name is Toby. I... That's Toby. I'm simultaneously an other and a deviant, I think. I'm managing to straddle that as a Heisenberg Uncertainty Principle sort of thing. You can't commit to one or the other.
Yeah, that's probably Brownian motion. That's deviant. I have been doing InfoSec for longer than I care to talk about and have sat on both the vendor side and the customer side of the situation. Now I'm primarily doing consulting for really large companies that end up having to deploy and deal with the results of disclosure, or proper and improper disclosure of various sorts.

All right. Let's jump right into it. How many people were at Black Hat and saw Rob Graham's talk? Show of hands? Good, a few. All right. Here's what happened. What he and his accomplice, sorry, Dave Maynor, discovered was that you could take an IPS signature file, reverse engineer the engine that read that file, and then discover what the signatures were. If the signatures contained zero days, you could discover what the vulnerabilities were, and some of those vulnerabilities could be turned into weaponized exploits. Rob, I have to ask you, is that term weaponized? Do you guys have that copyrighted or something? Certainly. Okay. Well, I owe you a dollar then. But anyway, so what this raises, and I'm going to let Rob tell a little bit more of the story because they actually went after TippingPoint, but the problem actually exists in all IPSs. They can be reverse engineered, and this becomes an easy way, basically, for researchers and bad guys to get their hands on zero-day exploits if they can just get their hands on the signature files. I believe you guys called it hacking for the lazy. But what this leads to is an ethical dilemma, because zero-day exploits, and putting them into signature files, is part of the business plans of a number of IPS vendors, and the implications for people that use those are significant. What we basically have is unintended disclosure, and now both the IPS vendors and the people who use those files need to respond. So we're going to start with that.
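The "hacking for the lazy" idea the panel describes can be sketched in miniature: once a signature file's format is understood, recovering the byte patterns the IPS matches on, which hint at the underlying vulnerability, is just parsing. The pipe-delimited format, field names, and sample patterns below are invented for this illustration; real IPS products use proprietary, often obfuscated encodings.

```python
# Toy sketch: extracting match patterns from a (hypothetical) IPS
# signature file. The format here is invented for illustration.
import binascii

def parse_signatures(raw: str):
    """Parse lines of 'sig_id|protocol|hex_pattern' into dicts."""
    sigs = []
    for line in raw.strip().splitlines():
        sig_id, proto, hex_pattern = line.split("|")
        sigs.append({
            "id": sig_id,
            "protocol": proto,
            # The decoded pattern reveals exactly what traffic the
            # vendor considers malicious -- a strong hint at the bug.
            "pattern": binascii.unhexlify(hex_pattern),
        })
    return sigs

sample = """\
7412|http|2f6367692d62696e2f
7413|smb|ff534d42"""

for sig in parse_signatures(sample):
    print(sig["id"], sig["protocol"], sig["pattern"])
```

The point the panel makes is that nothing here requires finding the vulnerability yourself; the vendor's own detection data does the hard part for you.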
We also have some other things we want to go through. As we start going through this, I'm going to kick it to Rob first to comment a little bit on it and ask a few questions. But if anybody in the audience wants to comment or ask a question, raise your hand and we'll call on you. So with that, Rob, why don't you give us a few more details?

So our talk was about the fact that this company bought zero-day from hackers and then put signatures in their product, but other hackers could then just read those signatures and create zero-day exploits long before the vendors could patch them. Now, buying zero-day from hackers is controversial enough from an ethics point of view. We added fuel to that fire by pointing out that there's more to it than just the ethics of buying from hackers. It's also the ethics of profiting from it to sell your product while causing potential harm because you don't protect it well enough.

So what ethical responsibilities do the... Well, first of all, what was the vendor's response to this? Actually, the vendor responded quite well. When we notified them that we were getting their signatures from the product, they promptly pulled all their zero-day. They've since reinstituted the program, but with a lot more controls. We're still not happy with the controls they put on it, but they are aware of the danger, and they are doing more to keep hackers from easily getting those signatures.

Okay. So, Ian, I don't know if you guys use zero-day signatures or not, but as a company that might actually have to deal with something like this, what do you think the obligations of a vendor should be in addressing a situation like this? Right. So just to start off, I don't work in our IT security department, so I can't comment on the first part of the question. But I do deal with response to issues. And certainly the preferred approach is to have a coordinated disclosure of vulnerabilities.
So I would prefer that the signature itself not be zero-day; in fact, that it be made available when the patch is. If they are going to be putting zero-day out in their products, and I can't dictate to another company what they need to be doing, they should certainly be looking into much stronger access controls on those: certainly cryptography, and ensuring that in their appliance those crypto keys are very well protected.

Okay. You know, it's interesting, you bring up this idea that you can't dictate to a company. Does anybody want to comment on the responsibility, or perhaps the power, that an end user would have over a vendor that they're using? Maybe you could dictate to them what it is you want them to do, because of them releasing those zero-days. And oh, by the way, I should point out that if you uncover the zero-days inside the signature files, and Rob and Dave's paper goes into some good detail about this, a lot of these signatures are actually pretty lame, and I think that's my term, not theirs, which means that you can easily bypass them. You can still do the exploit, but not actually be detected. So even if you are a customer and you have those zero-day signatures, they're not going to help you either. By putting them out there, it just puts you at risk.

Actually, a number of the companies that I've done consulting for sit in the Fortune 50 space, so they end up being big, monstrous consumers of products like this, and you'd be surprised at how much of the response they get from vendors is lip service. The majority of the IDS and AV vendor responses are: that's great, we really appreciate you, we value you as a customer, have a nice day. So in terms of dictating, you can do some, but the point that you landed on is the one that I've been arguing a lot, which is the questionable value in the first place.
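The "pretty lame signatures" point above can be illustrated with a toy check: a signature that matches one literal byte string is defeated by any trivial re-encoding of the same attack. The signature string, payloads, and path below are invented for this sketch, not taken from any real product.

```python
# Toy sketch of a "lame" literal-match signature and a trivial bypass.
# All strings here are invented for illustration.
NAIVE_SIGNATURE = b"/cgi-bin/exploit.sh"

def ips_matches(payload: bytes) -> bool:
    """Naive IPS check: flag traffic containing the literal pattern."""
    return NAIVE_SIGNATURE in payload

plain = b"GET /cgi-bin/exploit.sh HTTP/1.0"
# The same request with one path byte percent-encoded. A web server
# decodes %65 back to 'e', so the attack still works, but the
# literal-match signature no longer fires.
encoded = b"GET /cgi-bin/%65xploit.sh HTTP/1.0"

print(ips_matches(plain))    # detected
print(ips_matches(encoded))  # same attack, undetected
```

This is why the panel argues such signatures give the customer little protection while still leaking the vulnerability to anyone who reads them.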
If you've got the data coming out of TippingPoint or anything else, and you see a zero-day attack, the amount of information they give you on what the problem is, is so limited that you have a significant challenge doing any sort of decent analysis of the data and actually determining anything. You end up being potentially more at risk, getting a false positive and chasing your tail; the quality ends up being a problem. The question is whether ethics really end up being the issue at all here. I mean, ethics is a beautiful notion, but the fact is that we're dealing with this on an ongoing basis. This is a reality. I'm not sure we have the luxury of ethics on it anymore.

We discovered this, yeah, we discovered this on the panel. We actually had representatives from Microsoft and Cisco who declined to be here today. We did tell them that we'd be piling on, so they're expecting that. They want a copy of the video, so if you could turn the camera off. But they were basically, and I'm going to pick on them a little bit here... When we first pitched this over at Black Hat, we pitched it more as an ethics panel, and what we discovered was that ethics gets a ton of lip service. When they were asked directly how they bake ethics decisions into their business model, the answers that came back were very flowery about the importance of ethics and how it figures into every decision, but I think...

Well, just hold on a minute here. I don't really think that you can take that hard a stance when a question is being asked with no context, right? So you say, how do you handle ethics? Without any context around that and the situation around it, I don't think that you can meaningfully answer the question. Oh, I agree, and let me clarify.
Because I asked them generally, how do you bake ethics into your business decisions, what I did was open up a platform for them to say, basically, well, we don't have a specific situation to discuss the ethics of, and of course ethics come into every single decision, and I think... Ethics come in when shit happens. That's when you call in the ethics people. Oh, God, let me fall back on that. When David and I were prepping this panel, the real question came down to: when does the profit motive get disrupted as an ethical concern? And I think that this thing that's going on with the zero days is an excellent example of this. Ryan, go ahead.

I don't think Microsoft has a lot of room to complain about other people giving away zero days when they supply their own zero day to the Fortune 500 months in advance. Oh, whoa, I don't know about that. The patch validation program. If you're an important enough customer, they hand you the patches months in advance so you can make sure that they work with your internal software and your process and all that. And there's no better map to exactly what the vulnerability was than the patch. I think you're making a bit of a stretch there in terms of referring to it as zero day. I will not argue that there are plenty of tools, if you can't do it by hand, that will absolutely take you automatically from patch to exploit. However, that is not handing out zero day. No, it's not the same thing. It's totally different. I understand your point, and I think you're absolutely wrong. I agree. I would add, though, that there's probably a pretty strong analogy to the TippingPoint case. There is a very strong analogy to the patch. But does it make any difference that this could be seen as part of the quality cycle for those patches? Clarify that. Which quality cycle? The corporate quality cycle or the vendor quality cycle?
Because I don't care what patch you give me, I still have to put it through my QA. Absolutely. So which quality cycle are you referring to? Well, I think it would be both. That's my cell. Okay. While Pam gets her cell, we have a question over there. Can we listen in on her phone call?

I agree. Actually, it's not that it's been getting better. It's that the total number of vulnerabilities has been increasing. It's better known. And somebody, I don't know, Ian, Ryan, anyone, please argue with me, but as far as I've been able to tell, we're not actually seeing an increase in the percentage of responsible disclosure. We've seen pretty much static numbers of responsible disclosure in terms of percentage. We've just seen the total volume of vulnerabilities going up through the roof.

I wouldn't call it responsible disclosure either. I'm kind of on your side. I think that we're more aware of what's going on, and it's happening quicker. Therefore, they don't have a choice but to step up to the plate and pretend like it's responsible disclosure. To me, responsible disclosure isn't the same day everyone else finds out about it, or when we contact them; responsible disclosure is a little bit beforehand. You know what I'm saying? And I'm not seeing too much of that. What I am seeing, though, is a little bit more collaboration between the vendors. I'm not seeing that they're handing me a zero-day patch to test and evaluate, if that's what we're saying down here, because that just doesn't exist. But there's more exposure. Therefore, it's already out there. So I wouldn't say that it was responsible disclosure. It's kind of like identification, responsible identification. Maybe I'm not really sure what to call it at this point in time, because there's just so much out there now.
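The earlier claim that "there's no better map to exactly what the vulnerability was than the patch" rests on patch diffing: comparing pre-patch and post-patch binaries to localize the changed code. The sketch below is a deliberately tiny stand-in; the function names and byte strings are invented, and real patch diffing works on disassembled binaries with tools far more sophisticated than hashing.

```python
# Toy sketch of patch diffing: hash each function's bytes in the old
# and new build, and whatever changed points at the fix -- and hence
# at the bug. Names and bytes below are invented for illustration.
import hashlib

def function_hashes(functions: dict) -> dict:
    """Map each function name to a hash of its code bytes."""
    return {name: hashlib.sha256(code).hexdigest()
            for name, code in functions.items()}

pre_patch = {
    "parse_header": b"\x55\x8b\xec\x83\xec\x10",
    "copy_field":   b"\x56\x57\x8b\x75\x08",        # unbounded copy
}
post_patch = {
    "parse_header": b"\x55\x8b\xec\x83\xec\x10",
    "copy_field":   b"\x56\x57\x8b\x75\x08\x3b",    # length check added
}

pre, post = function_hashes(pre_patch), function_hashes(post_patch)
changed = [name for name in pre if pre[name] != post[name]]
print(changed)  # the patched function points straight at the bug
```

This is why early patch access is argued on the panel to be nearly as sensitive as the vulnerability details themselves.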
No, actually, to get the question right, the question that they were asking us initially is: what are the ethics of providing, in your product, information that could lead to the leakage of zero-day? Yeah, but the thing is... I don't think everybody can hear his question. The question was that he's saying that people are generally behaving ethically; I think that's the summary of it. And the thing is, what we found here with TippingPoint was that two different groups had used that signature, for a zero-day that was supposed to not be public, to develop their own exploits from it, before the vendor could patch the problem. So for us, that was a serious breach. It was not just a theoretical issue; it was serious. One of these groups had been doing it for a while. They just got this constant stream of zero-day to work from; about half the zero-day signatures would result in them generating exploits. And so for us, that was like, you know, yeah, we have a good dam here, but we still have water sort of leaking out in places that could threaten the entire dam. And we need to be on the ball to address those water leakages and plug them up before the whole dam collapses.

Actually, I had a question for Pam. Oh, geez, thanks. You knew this one was coming. I did. You're the only VAR on the panel, so... Oh, right, thanks. So what Paul and I were talking about earlier, and I kind of wish we'd actually had you there yesterday, was: how do you balance the needs of your customers versus your relationship with the vendors? So a customer comes to you with a security issue, or vice versa. How do you balance that need to make sure your customers stay happy and protected versus pushing the vendors to do what they need to be doing, without endangering your business relationship with those big vendors? Well, let me think.
I've been on the receiving end as a CISO so many times, receiving stuff from vendors that didn't quite work the way we thought it would. There wasn't a lot of cooperation or collaboration to correct the issues once we had the product in-house, and I'm just going to say that it was a SIM product. I'm not going to go any further than that. And we wasted a lot of money, so on and so forth. So from where I stand today, at Fishnet Security, we do a nice balance, and we have what we call the dirty dozen: several different vendors in our pool that we work with. What Fishnet does is work with major enterprise clients, Fortune 50 and Fortune 100 companies, to bring to them both VAR services, which is rack and stack boxes, load code, whatever, and also a security consulting service to build out the strategic platform, so on and so forth. So our role is to put the technologies in place, along with the vendors that we work with, but at the end of the day, Dave, it's Fishnet's responsibility to make sure that it all works. It's my butt that's on the line in front of the customer if the stuff breaks. So in working with the vendor, we actually collaborate very closely with them. We sit on many of their advisory councils and feed them the information from our clients on next-version release problems and issues. Is that the answer? Is that where we're going with this? Are you looking for, like, a specific problem where I put something in at a client site and it just crapped out?

Well, I wasn't thinking so much from that perspective, but you guys do a lot of security consulting as well for your customers, and inevitably, you will find vulnerabilities in the products that you sell. And so do you feel restrained? Well, as a middleman, it's not my fault, right? No, I'm kidding, I'm kidding. Right, but I have no idea if you work with Microsoft or not, but I'll pick on them. I don't work with Microsoft, no.
Let's say NetScreen, Juniper, whoever they are this week. You find a vulnerability in the latest ScreenOS, blah, blah, blah. You have some responsibility to your customer to push Juniper to fix that, but at the same time, you have a large financial relationship with Juniper. How do your bosses handle that when you go to them and say, look, we need to push really hard because they're not fixing this? At the end of the day, it's the customer that's the most important. You have to push it with the vendor, and we have pushed people out of our vendor pool for incompatibilities, or problems, or just not working right, or not working well with the client. So at the end of the day, it boils down to what's best for the customer, not whether the vendor is going to be my friend because we found a vulnerability and they're having issues addressing it.

So actually, let me ask a follow-up question just to... You're asking me a question? Yeah. You can't do that. Can we do that? Yeah, you can do that. You're in a situation where you have a relationship with vendors and a relationship with your customers. Product X has a vulnerability that you know about and the rest of the world doesn't, but it's reported to the vendor, and it hasn't gotten fixed inside of, I don't know, 60 days. When do you tell a customer that they're vulnerable to something? Day one. So you tell them from the beginning? Day one. I hide nothing. That there's a zero-day vulnerability in a product that they have deployed? But then that fails responsible disclosure right there, because as soon as you tell the customer, it's leaked. And no matter... What's leaked? Whatever information you tell the customer. You have to assume, this is just a fact, anytime you pass anything to any customer... Sorry, I've got to take care of the client. That's my job. I'm not disagreeing.
I'm not disagreeing, but at the same time, you're now arguing for something other than the responsible disclosure you said you have to go for. Look, at the end of the day, a vendor, if they want to stay in business, because this is getting to be a tight world, right? And there's not a whole bunch of us out there that are going to make it if we're in the big vendor world. And you don't want a black mark next to you. Microsoft's big, Cisco's big, Juniper's big, okay? And they're going to be around, because they are deployed everywhere, and it's a hell of a lot harder for me to pull them out and bring someone else in. When we're talking about other technologies that might not be as widely deployed, if you're going to keep giving me crappy stuff, I'm going to kick you out and use somebody else. I'm not arguing with you. That was a dumb question.

One thing I wanted to make an analogy... Why do they always have to pick on the chicks? Because you're smarter than we are. We have to gang up on them. What do you want? Yes, Frank. I want to make an analogy here. If you're a doctor, your primary responsibility is to your patient. If you're a Catholic priest and someone tells you something in confession, your primary duty is to your parishioner. If you're a lawyer and your client tells you something, your duty is to your client. So if your client tells you he broke the law, you still can't go to the police and tell them, because your duty is to your client. And I think in those professions, that's sort of the highest ethical goal that they've got, and I think it's similar in our profession as well. If you've got a customer, and they're paying you money and they're trusting you... It's not about money. Well, it's about the trust. I think your highest ethical concern is to your client. It's to myself first. I'm going to be honest with you. I've been in so many different situations where ethics has come into play.
You have a million-dollar bonus dangling in front of your face. No joke. And do I do this or do I do that? If I do this, yay, I can go buy some more jewelry, right? And I like jewelry. Or, B, I can tell the truth and have a couple less earrings in my jewelry box. I've always chosen that second path. And the reason is because, at the end of the day, it's going to bite you in the butt. I don't want to ruin my reputation. I want to be seen as an ethical person that's smart and knows what she's talking about from a technical perspective. I don't want people to think that I'm just seeing them as someone to sell something to and take their money. Because when I move on to the next job, I'm not going to get it, because people are going to have that reputation, that aura, around me. And I've always tried to go the right path. If in fact I'm disclosing a zero-day to one of my clients, I would hope to think that the vendor's going to be there with me, because I'm going to turn to them and tell them I'm working with Customer X, and they have 200 million customers out there; there are some companies I work with whose customer base is 200 million. That's pretty damn big, right? It's a lot bigger than the vendor that I'm working with. I'm going to tell that vendor that we need to disclose this to Company X, because if we don't, it's going to be huge, and it's going to hurt them more if they don't work with the customer and discuss it up front. So, you know, you look at the size: I'm working with a big pool of people here, and I'm working with a vendor over here. At the end of the day, if this gets out, it's really big. So the vendor definitely has to be aware that we're going to let the client know, always. If it's that big, depending on what it is.
Going to Rob's point on highest ethic, and forgive me for dragging this tired old cliche out: when you talk about highest ethic, there's one highest ethic for, well, Rob, not your current company, but your previous incarnation, which was public. Your highest responsibility, your highest ethic, is not to your customer, it's to your stockholder, and as such, there is a fairly clear and final and explicit goal, which is to raise stockholder value, and that trumps all others. So when you're talking about the customer as the highest goal, well, technically they're not. They're useful. They are a resource, but they're not the priority, and neither is the vendor. I mean, there's no stock without customers.

I would point out that you've got certain ethical duties to your investors. Investors give you money. Your ethical duty to them, as kind of a client in a way, is to be up front: following SEC regulations, disclosing to the public, and not lying on your quarterly filings. That's sort of your goal. For the long term, your goal is to make them a profit, but to do that, you have to say, yes, we think that our clients are our highest ethical concern. In other words, while in the short term you might benefit your stockholders by screwing your clients out of some money, in the long run that hurts you, and your investors really are in it for the long run. So I would say they're not really in conflict, and I've never seen a situation where they were in conflict, though of course you can probably dream up examples where they are.

Okay, Raven, you had a question? No. Okay, sorry. Yes, Raven. We love you, Raven.

So the question was: this person is a hardware vendor, and he ships a piece of hardware. Let's say there's a vulnerability in it.
This is a piece of hardware that's not connected to the internet, yet there's a vulnerability in it that needs to be fixed, and he can't fix it because he'd have to replace the box. So what are the ethics of disclosing bugs in a hardware product? You still have to make them aware of the risks that they're accepting. And I think that's what it boils down to. It kind of goes with legacy systems, right? The same thing can apply there. When you walk into a company, and whether it's hardware, software, an application, I don't care what it is, and it's 10 years old, and you go, you know, guys, this stuff's got to go. You know, when it's NT, it doesn't exist anymore, right? But yet you walk into some companies, they still have it, but it's not connected to the internet. Okay, fine, whatever. The point is, as long as they're made aware of the vulnerabilities that may exist in whatever the product is, whatever the software is, whatever the information is, then they can make conscious decisions about what mitigating factors they're going to put around it, or whether they're going to accept the risks, or whether they're going to yank the box and put something else in. You still need to make them aware. Otherwise, they're blind to it. So like you said, if it's a piece of hardware that's not connected to anything, it's basically standalone. And nobody uses standalone, by the way. It's rare that something's not connected somehow to somewhere; even if it's not connected to the net, it's connected to something else that will be connected. So I'm having a real hard time finding standalone anything. But the point is, I think that you still need to disclose, to make them aware of what they're accepting in their environment, or what potentially exists in their environment.

I would... You know, there's still this underlying old-school concept there, pre-responsible-disclosure terminology at least, which is: the vendor deserves to suffer.
There's still that aspect, I think, out there. You know, customers need to be trained that if they're having to do a forklift upgrade because of something in your product, maybe they should not be buying from you.

We had a question in the back. So I think that was our point around the notion of obligation to stockholder versus other things. It's not a question of ethics. And as was pointed out, the only time ethics comes up is after something's gone wrong. That's when we make laws too, right? When someone flies through the windshield, whoops, better start wearing a seatbelt.

So the question was: how do we use ethics and corporations in the same sentence? In my experience, corporations actually are very ethical. They don't have a choice, or they get fined. What, Rob? Lots of bad things happen and corporations are involved, but largely, companies are ethical. And they often get punished when things don't go well. Only in rare cases do you have companies that go completely off the charts, like Enron. Enron is an outlier. It's not a good sample of what the normal corporation is. And corporations have a hard time being unethical, because there are so many employees within a company that you can't keep unethical behavior a secret. There are cases, of course, where there's a small collusion among the corporate executives, but that's actually fairly rare. So no matter how much we hate Microsoft's behavior, largely they are doing their duty to their customers and to their shareholders within the social norms of our society. We might not like how big Microsoft is. I certainly don't like the control they have in our industry, but I can't say it's necessarily unethical. Whose fault is it, though, that they have that control in our industry? It's ours. They produce it. We want it. We buy it. We use it. If you act like sheep, you can't complain that the shepherd's not nice to you. This is all market-driven.
20 years ago, someone went up to Mark McGrath at a conference and said, when is Sun going to stop selling us such shitty software? And his response was: when you stop buying it. Microsoft, a few years ago, they had a little problem. A lot of major companies said, we're not buying this anymore. And suddenly they started making software that was a lot more secure. Period, end of story. This is markets here. We're in a capitalist country. Stop buying stuff you don't like.

Actually, the question was, when does that come out? And I'll point out that Microsoft has tried to EOL their previous versions of lousy software, and they've gotten yelled at for trying to do so, because customers insisted on wanting to hold on to it. They have on occasion tried really hard to do something approaching a better thing. And you only have to look at how much money they spend, if nothing else on their parties at places like this; they're trying to do the right thing by the security industry. Even if they've got major problems, even if they've got issues, they're still going to have a long time to go. Just something to keep in mind.

Rob, I would actually argue the reverse, which is that companies have a hard time doing the ethical thing. If you look at Google saying they don't want to be evil, and yet they still went into China, and their argument for doing so was, well, we'll be less evil about it than somebody else. Yeah, I kind of agree with Toby on this one. I think the reason why companies are as ethical as they are is because they're legally obligated to be as ethical as they are. And I don't think that's an awful thing, right? I don't think that's a condemnation of anything. If you look at how things are structured, it just makes sense that companies have to do what it takes to survive and grow.
And the people inside of them generally try to do the right thing. At the same time, if your job's on the line, and it's the difference between being employed and unemployed, you may make a less ethical decision than you might normally make from an outsider's perspective.

So, Google and China is a good example. Chinese people do not want freedom of speech. And that's a sad fact. We as Americans think, oh my God, how can you live in a society without freedom of speech? But ultimately, the Chinese as a whole don't want it; only a minority do. I think you're wrong. The Internet gives them that. Well, the Internet gives them that. The point is that Google's not being unethical. They're maybe doing something we don't like, but they're not hiding it, necessarily. They're not doing anything illegal. They're following the laws of China. So I would not point to that as an example of something unethical. Though, of course, being a huge freedom of speech advocate, I don't like Google for doing it. Being legally compliant is not the same as being ethical, just for the record on that one. I agree. That was my lay-up for applause. Thank you. You should stop buying Google, then. I'm sorry, it's free. I don't read any of the ads. Say that again? I agree. I agree.

Does anybody have Webster's definition of ethical with them? What the hell does it mean? Yeah, Google that one, Chris. What does Google say about it? Go to Wikipedia. Exactly. Rob, come up here. He said... I don't think it was a question. Come on, Rob. You know you want to. Rob was pointing out that our idea of ethics is different than China's ethics. In a room of 10 people, there are at least 11 opinions on what ethics should be. I was kidding, but that's good. Well, do you want to come up and read it? I'm sorry, what was the last part? Pertaining to right and wrong. That's actually not specifically true in this case. It's actually corporate ethics, right? This is about Google's ethics.
I don't even know why I'm talking about Google. I have no care in the world about what Google's ethics are, but I'm going to shut up right now. Actually, let's move on, because we only have a few minutes left. About 12, to be exact. So the next question, which I'm going to... Second question. I'm just making this one up. We talked earlier about Microsoft's early patch release program, and Microsoft's far from the only one to do that. How do you feel about it? Is that responsible disclosure? And to whom is it okay to do that, and to whom is it not? I know, for instance, the federal government has put a lot of pressure on Oracle in the past to release patches to them early, before anyone else. And certainly, on very rare occasions, Cisco has done it with ISPs and things like that, in very particular situations. So I'm curious how our panelists feel. When is that all right, when is it not all right, is it ever all right? I think it's all or none. Tell them all, or tell them none. I think I'm good with it as long as I'm on the list that gets told early. That's what I mean, yeah. I don't know. There's prioritization. Is that ethical? I'm not going to say I like it, right? Because I'm obviously not going to be on the side that gets the patch early. At the same time, you know, even if you look inside of an enterprise, they patch differently based on how sensitive the systems are. As a society, I understand that there may be more important people to patch than me. Like, the release of my information I am not excited about. Not what I want, right? And as an individual, obviously I would like to get patched first.
At the same time, I do have to recognize that if it's between breaking the whole Internet and breaking me, I'm willing to take the hit, right? That's me personally. I have to agree with them. I think if it's a situation where it's life-threatening, kind of like SCADA, power, energy, stuff like that, then they should be. I think humanity comes first, and then we look at how we run the business. So this is DEF CON. And one way to address that question is to have a researcher give Microsoft a critical bug and only give them the details on the condition that when they release the patch, everyone gets it at the same time. If you're the researcher, that's a way you can actually control the answers to such questions. There's probably more than one researcher, though. What I mean is that there's no one right answer for everyone. They only get to do that once. But I don't know. Researchers need corporate advocates. That's what you need. To take the message for you. Technically, researchers just need somebody who's, well, they don't need anything, but they benefit from somebody who's willing to pay them for their data. I'm not giving up my jewelry, if that's what you're asking. I got that. I think it's irrelevant. The question was, should all researchers get paid for finding vulnerabilities? Can I steal your quote? Dave has a great story on this one. I thought of a different one, but this came up yesterday. Someone basically said, hey, if I find a vulnerability and I report it to X, I should get paid for it, right? That was basically the gist of it. The person's here; I apologize if I'm mutilating your quote, but that's what sets me up to answer it best. And I think it's a lot like this: while I'm asleep at night, you sneak into my house, you perform dentistry on me, I wake up without my wisdom teeth, and you give me a bill for it. I didn't ask you to do it. Why am I paying for it? If you do free work, don't expect to get paid for it.
If you want to get paid for it, call up the vendor first and say, hi, I'm a security researcher, and I would like to talk to you about doing business. The ethical thing is, if you've already done the work, you can't get paid for it. Otherwise, you're basically extorting the vendor: give me money or I'll do something bad with this. But you can ask for more. If you've researched it, told them everything you know, and they come back and ask for more details that you don't have because you haven't done that work yet, you can then say, okay, I'll research this further for you so your guys don't have to, and I'll consult on that. But work that's already done has essentially already been paid for, and you can't ask for more money. I think it depends on the impact, right? I think it depends on the impact. It's kind of like the drug industry. They research drugs, right? And it takes seven to ten years to take a drug to market, and it costs probably about, at this point, $800 million to invest in a drug. You don't know if it's going to make it or not, right? So you're putting money up front into research and development. And let's say the researchers aren't getting paid — but in this case they are. So what you're doing is investing in something where you don't know if there's going to be a patent in it or not, and at the end of the day you want the patent. I look at research and, in every budget that I've had as a senior person in companies, I've always allocated money for research and innovation, because I feel if we don't, we're stagnant. We're always going to be tactical, and I like to be strategic and get away from getting woken up at two or three o'clock in the morning every morning to respond to something technical because I didn't do research and development, because I didn't do, you know, innovative thinking. So my thought on this would be: it depends on the impact of the research.
If a researcher brings me something that is going to be worthwhile and is going to meet the needs of the many, not just the few, I would see that they would likely get a benefit from that, whether it's a part of the patent, because you're probably going to turn it into IP, intellectual property, or a part of the ownership of it, to get monetary value. But in my world, I would think it depends on the impact that the research provides, whether it's positive or negative impact, if that makes sense. I think the iDefenses and TippingPoints of the world are doing a great service for responsible disclosure. Researchers get paid. The vulnerabilities get off the street. The buyer takes care of dealing with the vendor, who either is going to fix it or not, or they can do whatever they're going to do there. And I think the problem that Rob was outlining, that you can go and dig and find some zero-day, is probably a much smaller side effect for an overall benefit. Did you have something to share with the class, Pam? What? Did you have something to share with the class? No. We have a question in the back. Yeah, you. That's good. Yeah. I'd tell him no. I'd tell him no, that's a threat. That's blackmail, I'd say. Yeah, there are some better and worse models of how this has been done, but to some extent, the question of whether researchers deserve to get paid is irrelevant, because the fact is that at this point they want to get paid and they are finding people who are willing to pay them. Whether it is the vendors or not, they are finding someone. So it's irrelevant whether they deserve to get paid or not, because it starts getting into their motivations and things like that when you start making the argument of, well, you know, are they doing it because they want to help? It's irrelevant.
The fact is they have found, and they will find, people who will pay them if that's what their goal is, and whether you think that's a good thing or a bad thing, it's something that you have to have a solution for and deal with. So, as we are running out of time here, I want to take us back to the beginning and this issue of the zero-days inside of IPS signature files. By a show of hands, who thinks that zero-days should be pulled out of signature files based on this? Well, okay, I guess this crowd wants the zero-days inside the signature files so they can get their hands on them. Who thinks... go ahead. Not a completely new problem, by the way. I pulled a zero-day out of my ISS Internet Scanner in '98 and found it worked on more boxes than they were saying it did. Who thinks they should be left in? As is. With the protections that we have today. Who wants to comment on why? That was absolutely universal, that the zero-days should be left in. Who wants to say why they feel that way? You. RFPolicy. It's been around forever. It's the canonical reference. It's the only one I might bother pointing people to. Well, I actually tend to... it depends what the ethics are. The "responsible" part gets a little dodgy there. At least they're being true to themselves, and you know where they stand. I'm not sure I agree that's where ethics stand, but I think it's important that companies have things like that. This is something that my company is a firm believer in. If you were to go to our website, you would see our code of ethics when it comes to vulnerability disclosure. You'll see where we stand on the issue. Because with vulnerability disclosure, there are a lot of different opinions about it, and nobody is ever going to agree on it. I've been doing this panel for I don't know how long; I feel like I'm in a Sartre book. I am going to be doing this panel again and again and again, and we're never going to get any further in terms of saying, this is right, this is wrong. It's just not the way it works, right?
There are just two different views, or more than two different views, on the subject. What we need is people who can say where they stand, so that you know where they are. And then that helps at least with something, right? You know who you're dealing with, you know what they think, and then you can also hold them accountable when they don't live up to it. So I actually think it's completely bizarre that everybody universally feels that... I mean, maybe we didn't explain this well enough. It means that the zero-days are essentially out there for everybody to take if they're in the signature files. And you guys are universally saying, yeah, okay. Well, now I get that from this crowd, but... Well, no... Well, it's not just Rob, it's... Other people are... Yeah, the less talented criminal element will have access to a nice steady stream of weekly zero-days that they can weaponize very, very quickly. But in the worst case, if it falls down to the... Right. And the vendor may not even have access to that information yet at all, that there's a vulnerability. It's a bad thing. So we are down to only one more minute, so I just want to check, for the people that were actually originally in this talk: is there any interest in the panel going down to the Q&A room and continuing this conversation? Raise your hand if you would be interested in that. You guys suck. They're dead. Let me guess. Who wants to stick around for Bruce Potter? All right. Who wants to see Bruce Potter on this panel? No. All right, listen. I want to thank the panel. Everybody thank the panel. Thank you. Most popular talk at DEF CON. Thank you very much, everybody.