I'm not important, they're important. Viss is just here to keep me in check. Katie? I'm drinking a nothing. I'm Katie Mo. If you want to know who I am, I don't know. I'll tell you later. I'm, I'm a, is this on? Hello. I'm in the room. Chris Wysopal, CTO and founder of Veracode. But I've been doing disclosure stuff since the L0pht days at the end of the 90s. I was a buddy of RFP, RFPolicy 1.0. And, uh, RenderMan, uh, I hack stuff, lately sex toys, uh, but also recently decided to not disclose to certain agencies and countries and such. So. All right. I accidentally ceded my time because they were like serious introductions. So here we go being super serious as I always am. Katie Moussouris, founder and CEO of Luta Security. Um, I helped the Pentagon launch Hack the Pentagon, made it so that, uh, our people were no longer thrown in jail for that and instead paid some money. Uh, launched Microsoft's bug bounty programs, wrote Microsoft's vulnerability coordination policy, created Microsoft Vulnerability Research, created Symantec Vulnerability Research, wrote Symantec's vulnerability disclosure policy, co-author and co-editor of the ISO standards for vulnerability disclosure and vulnerability handling processes, former pen tester at @stake. Okay. There will be a test later. All right. Who was first, Katie or Weld Pond, at Congress? Weld, for sure. 1998? Yeah, he was, he was 1998. I was 2018. It only took 20 years for them to invite me. Oh, okay. We're making some progress. It's progress. Okay, so. It wasn't the cat ears. They're vestigial. So, um, in deference to your employer, would you like me to ask one of the questions that were approved by your employer, or should I just shoot from the hip? Yeah, okay, so. Fuck. It's, it's only us and Bruce and, and a camera. So, um, so I'm Big Easy. A couple of these folks up here are my friends. I hang out with them. You know, we see each other at conferences and things like that. 
To start off the Ethics Village, I wanted to have a conversation not necessarily about responsible disclosure. I want to have a conversation about the ethics of responsible disclosure. So when we're talking about this, we're not necessarily talking about responsible disclosure. We're talking about whether or not it is still ethical to responsibly disclose. That is the question that we would like to explore. And if anybody on the panel would like to step down now, get the fuck out. Can you, can you define what you mean by responsible disclosure? Is that like, tell the vendor and allow them to fix it before you tell the world? Is that your... That's right. I think that it is no longer ethical to maintain this practice as a security researcher. I'm an ICS/SCADA security researcher. I work at the University of Illinois. I've been there for seven years; before that, I did financial. I've been a part of responsible disclosure and a lot of things. I do ICS/SCADA, and I'm not speaking for my employer or anybody else. So I'm on the "fuck off" side with Viss. But the reason I wanted to have a panel was to just talk about this issue of, you know, Microsoft has a bug bounty program. I don't work there. I am not going to point this finger at anybody who may have used to work at Microsoft. Okay. But I'm saying, as an example, Cisco might have a vulnerability disclosure program, or big companies have them. Well, come on. They have these programs, and I can remember back, going back to the 90s. I started one of the first internet service providers in the state of Kentucky. And back then, when people were hanging on the modems, we used to just knock them off with the blue screen of death. And I remember when responsible disclosure first came up, and we were hoping to try and get somebody like Rain Forest Puppy to come for this. Maybe next Ethics Village we can get them out to discuss this. 
But I really started to think about this, and as I invited panelists, and then I saw RenderMan last night, and luckily we were both still sober enough to remember that this panel existed. Ish. I had to shake them a little bit. You really should be here at three o'clock. Because I caught your take on Twitter a few weeks ago about responsible disclosure. I really think that the question really is, as security researchers, should we still responsibly disclose? Okay. Katie looks like she's really anxious to talk. So go ahead. I really am. You keep using that word. I do not think it means what you think it means. No, seriously. We stopped using it in the ISO standard. We stopped using it at Microsoft. And that is thanks to a conversation with Jake Kouns, who was formerly Open Security Foundation. Now he runs Risk Based Security. But Jake Kouns came up to me after I was on a responsible disclosure panel at RSA in 2010. And he was like, can we stop using that word? It has moral judgment, blah, blah, blah. Okay. So with that word in mind, I know what you mean. It's that whole thing of letting the vendor take their time to patch before you go public with it. Having had to write Microsoft's policy, we wrote it with three roles in mind: finder of the vulnerability; coordinator of the vulnerability in multi-party instances, like Meltdown/Spectre type of things; and vendor, receiving the vulnerability report. It was very important to me as the creator of Microsoft Vulnerability Research that there was no language in there that said we wouldn't release details of the security vulnerability before a patch was ready, because it was important to be able to pull the trigger, pull the ripcord, if there was evidence of attacks and whatnot. That is a nuanced difference between Microsoft's policy and Google's policy, which has a strict deadline of disclosure. So there's ways that you can deal with this. 
And I absolutely made sure that Microsoft never used that word while I was there. Thank you. All right. Okay. So thank you, Katie. Okay. So you got anything to say, Bruce? I'm glad that Katie was able to strike that from Microsoft. I mean, the responsibility, if you're going to use the word, lies on the people that wrote the code and the vendor. And that's the way I've always held it. You're responsible for the code that you ship. And if it's bad, it's your problem. I'm sorry, Bruce. Can you slow down your words just a little bit? We've got some lag. Bruce, slow down. I'm from up there. You don't talk real slow there. No, that's perfect. That's perfect. Talk like you're from South Carolina. Or from the South. I'm from New Orleans and we have a drawl. You need to drawl because we're lagging a little bit. All right. Well, I will keep it brief. I think that the responsibility lies with the vendor as the system party. You're responsible for good code. You're responsible for changing your code. And at the end of the day, you know, you bear that burden. So I'm all for striking it from the disclosure debate and trying to enforce some accountability on the vendors to do the right thing and push them all forward when it comes to actually running stuff on the back end. Okay. So what about the right side over there? Do you have anything to add? I think that the big thing right now is, like, we all consume a lot of these services and such that we're finding vulnerabilities in, and we have a stake in this. Previously, I found a bunch of stuff with air traffic control systems, and it's really weird giving a talk about that when you have to fly home afterwards. You know, so my ass is in the game. It's, you know, I have a stake in this. But I think lately, and where my position comes from, is that you could be helping various authorities or regimes or whatever to facilitate things that you might have other ethical issues with. 
And it becomes this thing of, yeah, you could be saving life and limb, but you could also be making something more secure that could be then turned against people. So it's like... Well, it's tricky. So when I look at it, I think there's a short-term benefit to disclosing issues and there's a long-term benefit. You know, the short-term benefit is that one little bug gets fixed, and there's certainly some benefit to that. The product gets a little bit better. But the long-term benefit is that companies will start to realize, if they're going to get all these whack-a-mole point vulnerabilities that they're going to have to deal with, they're going to have to come up with a better process for dealing with them, and then they're going to realize it's even cheaper to use the techniques that researchers are using themselves and fix the things. So with Parisa's keynote at Black Hat, she talked about the 90-day... why they had that 90-day drop-dead date where they'll disclose. And the reasoning behind that was it just would force vendors to get better at responding. Right? And... What's that? Well, a lot of companies have lawyers, too. I mean, yeah, individuals, yes. But the compliance rate of fixing within 90 days is now at 98%. So over the time that they've done that, they've gotten a set of vendors to be able to respond within 90 days. And the fact that Google did that means that you could do the same thing, too, and they'd probably be able to respond within 90 days. So I look at this long-term gain to doing this, too, of getting vendors to have an expectation of doing the right thing. Well, this now leads to the second part of my question. Would you like a chance at the first one before I put in the second part? All right, so I say this whole thing is irrelevant, because this assumes two factors. One, it assumes that the vendor is ethical. Two, it assumes that it is always legal to act subjectively ethically as the finder. 
And I can sit here and tell you all kinds of scenarios where that is horseshit. I work in M&A. If we don't buy the company, the bugs get burned. I end up sitting on a bunch of 0-day. If we don't do that, everybody goes to jail for insider trading. I've also done pen tests where we found a bug in a vendor and we turned the bugs over, and then that client chose to sell the bugs. And a non-zero number of those bugs have actually killed human beings. Like, these are things I've seen. So the concept of debating a generic ethical approach to discovery and/or disclosure, as if doing the right thing is always legal, is fundamentally flawed. So, and then... That is absolutely correct. Thanks. But the second part of my question is, what I feel, and why I am trying to forward the idea that it's no longer ethical to use the disclosure process, is what vendors are doing with automatic updates. So, I have no choice as a user but to accept the automatic update, and I have no idea what other feature changes a vendor is pushing on me. So, you have an important security update. Now all of a sudden, you've lost a lot of features in the software that you originally bought, and then you got a lot of features you didn't want, like, oh, I'm going to start reading your email and targeting advertisement based on the content of your email. Oh, and you just signed off on that in the EULA. And vendors are using this automatic update to push features out into... Or rescind features. Or rescind features, exactly. And then I think that's why, as researchers... what does a researcher do when, as I submit, the process is fundamentally dysfunctional? But that's the process of updates. Yeah, but the update is the symptom. The update is one of the symptoms. So the vendor's like, oh, I can use this mechanism to change the product to suit my needs economically, but not necessarily, oh, this is a security update. You always see these things. 
You have a security update, and then all of a sudden things change fundamentally in the software that you use, to where it's not the same anymore. I think the fact is that they're bundling these things so that in order to get the security update, you have to accept this other stuff that you may not want. Exactly. I don't understand how that's the discloser's problem. Yeah. Because you have no control over that. Some companies are going to do it and some are not. What are you going to do? You're going to research how they're bundling anti-features with fixes before you disclose? That's making it too hard for the discloser to even make a decision. But you could also decide that if the company's a dick about it and does stuff like that, you can say, I don't want to deal with you. I mean, the only way I could see what you're talking about being relevant is in an extremely activist perspective, where you're saying, all right, I'm just going to drop 0-day time after time on this vendor without telling anybody, to force them to release updates that only fix the security flaw and don't package other shit. Which, okay, I mean, I like hostile play. I'm cool with that. Well, that's the way we did it in the 90s. This is an ethics talk. No, that's exactly why this is an ethics talk, because I want to know, what do we think, what do we think, everybody's got cards: what do you think about what vendors are doing with the process, ethical or unethical? So you're advocating splitting up the security updates from all of their updates, right? That's kind of where you're going? Yeah, I think that's one thing. I don't want to see feature changes in security updates. That sounds great, but I don't think disclosure is what will get us there. Yeah, and so here, so, well, something that actually freaked me out when I was working at Microsoft: a CISO of a major, not utility company, but let's just say a very important company, whatever. 
They basically said, look, we'll do automatic updates for our corporate IT for Windows, but there's an XP controller on a smelting device that is, you know, basically the investment has to last us for 50 years, so we are never patching that in any way, shape or form. They airgapped it, good luck with that. But, I mean, essentially, the customer was like, we don't apply updates in certain scenarios. And then another weird thing that I learned was that some customers basically wanted, instead of Patch Tuesday, to update quarterly. They were asking because it cost them to do the testing of the fixes, so it's weird what people will actually accept in the end-use scenarios, and that was something that, even as a vendor, we were surprised by our customers. We also were surprised that they were not willing to give up XP. We basically kept trying to kick them off and make support that much more expensive, and they just were like, oh, thanks for telling us that instead of a $25 million extended support contract, it'll be $50 million next year. Thanks for telling us, we'll just write it into our budget. Like, super weird stuff like that. So anyway, this is all beyond the control of the discloser, right, of the person who found it. Yeah, and can we bring open source into this? Are you going to make open source teams do separate updates too? They've been working on an update, and they get a security fix in there, and you're going to make them do extra work to come out with a separate... I don't have any control. I'm just the moderator of the panel. No, I'm just saying this is more work for the development organization. If they're working towards a new release, it's easier to stream in the security fixes. So there's going to be a higher cost for security fixes if it needs to be separate. The ethical debate's a little different on open source. Because if you find a bug in something open source, what's stopping you from also writing the patch? 
Yeah, not every vulnerability researcher is a good software engineer. It doesn't have to be a good patch, but... It'll be a shitty patch. It'll be a shitty patch. I'm saying, under the guise of anything is better than nothing: if you find the bug, by definition, if it's open source, there's nothing stopping you from fixing the bug. So can I ask a question of the panel, since there's people that have been around doing this a long-ass time, and I'm not trying to call anyone old? What was the first real auto-update that was released? By my recollection, I did work for Symantec under contract to evaluate the auto-updating of virus definitions, I think in 2000. When did auto-updating come into vogue, and has there ever been a separation between functionality and security updates in auto-updating? Windows 98 is the earliest one I can remember. Yeah, that's pretty close, I think. Well, and so Windows actually, or actually Microsoft, had several different updating mechanisms. It's not like each individual team there was at war with each other and just wrote their own shit or anything. Nothing like that ever occurred. But anyway, at some point there were, I think, 16 different updaters from different teams, so weird stuff was happening, and then they finally unified. Yeah, exactly, right? And then they finally unified under one system, but even that was, you know, flawed. Okay, so I also want this to be interactive, because you guys showed up here for some reason, and we have some pretty badass panelists. Does anybody have any questions? About the topic. Well, can, this is going to be... come up here. I don't need a microphone. Well, for the video. Thanks. So, you were talking about the, you know, responsible, or whatever, I'm going to call it coordinated disclosure, whatever you want to call it. Are we, what's the alternative, if we're saying coordinated, responsible disclosure is unethical? 
Are we talking full disclosure, or what sort of alternatives would you or the panel recommend or suggest? I recommend we acknowledge arms dealing as arms dealing and leave it at that. What is the Viss disclosure policy? I'm an arms dealer, fuck off. That's my disclosure policy. I mean, an armed society is a polite society, and I have yet to hear a policy that makes the society unarmed. Okay, so now this Canadian needs to speak up. So for me, it's a case of, okay, at least, if I find something, I can't leave it alone. I have to at least tell somebody, you know, do best efforts. Sometimes that takes an ungodly amount of effort to even find out who the hell to talk to. Like, you know, finding vulnerabilities in air traffic control: there is, you know, a global standard, but, you know, ICAO doesn't necessarily have teeth to enforce anything, and it's got to be adopted by everybody. I was like, where do you even start with something like that? I was like, okay, I'm giving a talk at DEF CON where I'm saying, here's my evidence, I can't prove that they've mitigated any of these threats, please prove me wrong. Six years later, DHS actually proved I wasn't crazy, which is an odd position to be in. But at the same time, I'm looking at it now and I'm thinking I potentially helped them secure things that could be now used against us, because yes, my ass is on a plane flying home. Those similar kinds of systems, though, are used to drop bombs on people. So that's a quandary. So it's one of those, you'll start to look at what the system is. If it's some, you know, little IoT gadget or something like that, you know, maybe a little bit of PII or something like that, that's one thing. But as you said, you know, everything's dual use. So if there's the potential that the technology, or the owner of the technology, could use it negatively, I have to consider that. So it's, yeah, drink. 
So, you know, I think I want to use the word responsible on the vendor side, because I just don't think it's talked about enough in that context. And, you know, as an example, vendors have been getting more onerous when you're trying to report a bug. You know, some of them make you go through the bug bounty program, if they have one, with all these terms and conditions. There was a blog post by someone from Project Zero about trying to report a bug to Samsung, and the number of different click-throughs they were supposed to go through. In order to report a bug, you had to agree not to ever disclose the bug until they fixed it. Like, that's ridiculous, right? So they decided, we can't sign off on that, because we have a 90-day... that's our policy, 90-day disclosure. So how can I submit a bug saying I won't do that? Eventually they worked around their whole form-based system and found someone to send an email to. But that's not always easy when it's a foreign company, right? Like, finding the right person to speak to when it's a Korean company is not that easy. So I think vendors are making it increasingly difficult for people to disclose to them. They're starting to game the system more. Or it's just they don't realize they need to take, you know, reports. Like I've been doing, like I said, sex toy vendors that have basically failed to realize they were hardware manufacturers making a manually operated device, and now they've added connectivity: they're a software company. They don't think of themselves that way. So trying to wake them up to this process, because, like we've all found at one point or another, there's the hell of finding an email address that actually goes anywhere. Like, you know, it goes... There are all these sex toy vendors who are selling people's data. They've all used it. Yes. I mean, you know, everything's... Information is valuable, but... So the idea, the question was, right, what's the alternative, if coordinated vulnerability disclosure isn't it? So there isn't any one answer to it. 
I mean, I think the thing is flexibility in the process, and being able to kind of gauge what your principles are as the finder, and what your principles are ideally as a responsible vendor, because I agree with Weld here. The responsibility for dealing with these vulnerabilities, short-term and long-term, is absolutely on the vendor. The vendor has had a huge advantage legally, with lobbyists and the Computer Fraud and Abuse Act, and before the exemptions to the Digital Millennium Copyright Act. They had a huge advantage in being able to threaten and intimidate and silence researchers. And one of the main things when I first was asked to be involved with the ISO standards: it started out actually trying to define the roles of researchers, and I was like, excuse me, I don't know any hackers who strive for ISO compliance, so can we please make this about the vendors and what they should be doing and stuff. So that's why the ISO standards are like that. I do think that deadlines are actually important. The original... you know, all the original vulnerability policies set expectations for deadlines; that is the norm. And what I've seen, especially in media coverage, is people freaking out about people disclosing the presence of a bug, not even the full technical details, and confusing that, and then again, blaming the researcher. It's like, kill the messenger a little harder, why don't you? I think part of my, like, whole life's mission... I mean, this is my 19th year coming to this town for this purpose. Yes, I am old. This is what happens when your hair goes gray. Where do you plug an AC charger in on Bruce? Right, but the point here is: is the alternative something else? Is it always better to do full disclosure without waiting? Is it always better to wait forever? I don't think either of those is the answer. I think reasonable deadlines are important. 
Setting expectations is important, and then no matter what you do, reasonable people are going to disagree about it. I'm actually super curious with the community. I mean, like, so many bugs I've been a part of where the parent organization that owns the company that I find the bug in decides, you know what, this bug adds too much risk, we're going to sell the company. Okay, well, I'm under NDA because it's a pen test, so I can't tell anybody, but that company doesn't exist anymore for me to pressure to fix it. What happens? Or when I'm on an M&A gig and we don't buy the company, and now we're sitting on a bunch of silent 0-day, some of which is in very large companies, like the kind that get targeted by spy agencies. What happens to those bugs? Or bugs where you legit disclose to a vendor, the vendor is forthright and says that they're going to take care of it, and they issue a fake patch and then sell their own bug to an intelligence organization. Also a scenario I've been through. Like, what is the response to these gray spaces? I'm curious. There's always a whistleblower SecureDrop. Okay, so the response to these gray areas is, I have to put myself personally at risk simply because I know something? Like, I owe the world this? No, I'm saying it's an option. Still up to you to decide what you do, but, you know, fair enough to drop details to a reporter or some other interested party. Okay, yes, the Snowden option is an option. I will never take it, but it exists. Question in the audience. This is a question. Raise a question. I was recently contacted by a researcher claiming to have found a flaw in our product that led, as far as they were concerned, to compromise. They would not disclose the details without a considerable fee that we had to pay them. Obviously this... That's not in the ISO standard. Can you repeat the question for the audience? 
He said that there was a security researcher who contacted him with a bug? A supposed bug that he never disclosed any details of, and they asked for Bitcoin. Lots of Bitcoin. Even though it's gone down a little bit today, it still was a lot, probably. Yeah. He was trying to blackmail us. And this particular researcher, and I use that word very questionably here... I don't think he has a history of doing this. He had gone so far as to use other people's research. He didn't say he was going to drop it publicly if you didn't pay him? Yes, he did. Can I just say, that's not a researcher. That's a criminal. Well, that is literal extortion. That is the definition of extortion. Is extortion unethical? I think so. Nice place you've got here. Would it be bad if something burned down? You bring up a point that with the increase of bug bounties, a lot of researchers are asking, and some of them are doing straight-up extortion like that. They're asking if there is a bug bounty present, and they're not accompanying that with a threat. What I deal with a lot is organizations who confuse those two. They're like, how dare this researcher ask if we have a bug bounty program? And I'm like, well, they did this work and they're just asking, so do you or don't you, type of thing? Did they threaten to do anything with the information? No? Well, then they're not threatening you. There's an interesting graphic I saw the other day of where the bug bounties were coming from and then who was fulfilling them. There's a lot of overseas bug hunters, and I've had numerous exchanges where it was clear we were having a language breakdown, and a person could read it as extortion, or you could read it as, do you have a bug bounty that I can access? It's a pretty nuanced line there, I think. I testified before Congress about this a little bit. Just a little. The Uber data breach, just a little testimony. A little bit, cat ears. So the whole Uber data breach: 50 million records downloaded by a Florida man. Why is it always a Florida man? Exactly. 
What happened was, Uber... this guy emailed them, he didn't actually know about the bug bounty program. They referred him to the bug bounty program, saying, oh, we have one over here, friendly researcher who's telling us about a flaw, and the maximum payment was $10,000 for the bug bounty. Literally, in the emails that were released, he's like, yeah, I was thinking more six figures. So he successfully extorted them for ten times the amount of their regular bug bounty, and Uber during that hearing actually said, yes, that was an extortion payment and we should not have laundered it through the bug bounty program. So absolutely there are ethical lines that were breached in that, and I think Uber took responsibility for it, and I think that was the right thing for them to do. I have a question. I'm going to change the subject just a little bit and maybe turn this around. I had a career at a power utility before I was at the University of Illinois, and I had a really cool job, and I hacked SCADA systems. So I got to disclose a lot of 0-day, some of which is still out there a decade later. Now the question, because I heard the word criminal earlier: when a vendor comes in and threatens the career of everybody that is working at a place, and says, oh, if you want to be a consultant when you retire, you should really do this and just let this go. Should that be criminal, and is that illegal? I don't know. Coercion from people in authority to suppress bugs. If we're going to talk about criminalization, and policy versus private, how about we talk about why the hell it is a standard practice that officials decide to move into the private sector afterward? Yeah, well, personally, as a discloser of many vulnerabilities that I know are still out there: from an ethical perspective, how long do you wait when civilization teeters in the balance? Dun dun dun. I made this panel to ask this question: what can you live with? 
So I think part of it is... 18 years... well, I'll have a much better talk for ShmooCon, Bruce. So I disclosed a bug to Microsoft, this was around 2003 or something, and they took a whole year to fix it, because it was in their file auditing system, which helps them get C2 compliance. So the existence of the bug broke it for everyone who had to use C2-compliant Windows, which is why C2 compliance is ridiculous. But it was a bug where, if you used hard linking in NTFS, you bypassed the auditing system. Seemed to me like an easy thing to fix. They said, we have to completely rewrite the auditing system to have, like, new flags on every single file; it wasn't something that we could just easily update; we'll do it with NT Service Pack 3 or 4, I don't know, 3 or 4. And they gave a good explanation to me, a good technical explanation. They gave me status along the way, every couple of months, that they were actually working on the problem, and they said there was just no way to fix it faster. And the fact that they gave me confidence that they were actually acting in good faith... some bugs just do take that long, and so I waited. But if they were completely silent with me, if they didn't acknowledge that I gave them the bug, if they were completely silent with me, I would have no idea that they were ever going to fix it, so why not disclose? So I think that's part of it: vendor-finder communication can give you confidence to wait. That is in the ISO standard. It's all about good faith. Did you try your best to disclose the vulnerability? Did you knock on enough doors, ring enough phones? Sometimes you hit a wall and you need to just say, there's a problem here that needs to get fixed, and if the only way that I can do this is to send up a flare and set something on fire here, it may be necessary. But again, if you've got a case where they seem to be a little slow, but because they're having to gut half the system and start fresh, you have to take that into consideration. I think it would be 
unethical on the side of the researcher to say, nope, only 90 days, work triple shifts. No, it doesn't work that way. Why would the researcher have to, in your words, try their best to resolve these things? The onus should not be on the researcher, before they drop it publicly, to at least have given it a fair shot, to have tried to report it, if a fair shot even exists. So many organizations actually don't have any clear way to report. You were giving the example: even with ones that have bug bounty programs, they don't have a clear way to report. It shouldn't be on the researcher to try and find a contact via Twitter. It's just not functional. I'll put myself on the spot. I'm sitting on a bug right now in the largest healthcare management software in the world. I have actual confirmation that 93.1% of the entire industry uses it. And it was a pen test for a company that owned a company that owned a company, and about halfway up the chain, like day two into my job, they said, you know what, this is actually a massive, massive risk, we're just going to sell the company so we don't have to deal with this bug. So they did, and they never told the vendor. I can't legally tell anybody. I'm just sitting on this bug. I mean, this is like money, this is like millions, plural, of dollars, right? There is no... what is my ethical response? What is the threat to life and limb? Do you have anything, Bruce, on this? I was just going to ask, is it okay to drop 0-day? I mean, to RenderMan's statement before, like, why is it on the researcher? Like, is it okay just to drop 0-day? I think there's some nuanced answers, but in general, can I just, like, drop it and feel okay about myself? Yes, and it was ruled that code is speech, so at least in the United States they can't go after you for that. You know, I think where some problems may arise are things where the laws, especially in the United States, will allow companies to go after the researchers for doing so regardless, right? And even the legal threat, and the threats to the employer, that 
was what you brought up as well. But, I mean, who in this room has dropped 0day? Put your hand up. Me and Katie did it together. We freaking did it, right? That didn't come out right. We dropped 0day. We dropped 0day.

No, what happened was, this was a carryover from @stake days. We had an advisory we were trying to contact the vendor about for, I think, four months. No response. We called them on the phone. I hate phones, you know. We had email threads for four months. Then the transition: we got bought by Symantec and everything like that. Eventually we're like, we're going to publish a non-detailed version of this just to warn users of the threat, because especially as a, you know, security defense company, we had the right to protect our customers from a vulnerability we knew about, right? So we dropped 0day, and oh my god, the angry emails. I wish I had saved that voicemail, because there were swears I did not know, and that is unique for me.

Well, I think part of it was that this was the first time this company had to deal with this type of issue, and there's always a first time for every company. They called us irresponsible. That's true. So I think if you're the finder and you're dealing with a company for the first time, it's going to be a lot more work, because they're not going to have any way to even communicate with you, really.

And think about it: at the time, Symantec was the largest security software vendor in the world, and we were threatened by the vendor for dropping a non-detailed advisory to let people know that something they had in their possession was insecure and that the vendor hadn't responded. So think about how hard it is for an individual researcher to deal with this. The onus is not on them.

This brings up a question that came from the back: medical devices. So you have a 0day on a medical device, like a pacemaker or an insulin pump or something like that, and you try to responsibly disclose this
vulnerability to the maker and the manufacturer. Where does the line get drawn if nothing is done about this?

I think you can send that to the FDA and they're going to take care of it. That's true, from experience.

Really? Yes. What did you say? Not true? Somebody who actually has relevant, current experience in that exact area needs to talk.

So my name is Steve Christey Coley. I work for the MITRE Corporation. We provide subject matter expertise to the FDA in exactly this area. The mic doesn't reach back there, it's tied up. It's not really worth it. I can do what I want for the next 15 minutes. Come on up here so they can hear you.

So, my name is Steve Christey Coley. I work at MITRE, supporting FDA, providing subject matter expertise in the area of medical devices when they receive vulnerabilities. FDA has regulatory authority over medical devices when it comes to safety, and of course cybersecurity can have an impact on safety. I've seen them wield their influence, which is much easier than in, apparently, the unregulated world of software. So anybody who's had any difficulties, certainly come to me, or you can reach out to the FDA as well. They are literally here; you can go to the Biohacking Village and meet them.

Yes, there has been some critique of their practices in the past, but they've been doing as much as possible to make it better. I think that's an important point: everybody was really, really terrible, or so terrible they didn't even register on the scale of terrible to not terrible. And I think we should be encouraging, especially, the regulatory authorities who regulate vendors when they're making progress. Agreed. Not a good process before, not good outcomes, but we should be encouraging those regulatory authorities to come down on the vendors they regulate when they need to, and encourage them to keep going.

Yeah, I mean, it's a case of: if you are making something that has the ability to affect life and limb, there has to be some responsibility
attached to it. So I talked to some Congress people last night, and they asked, how can we have greater engagement with the researchers, how can we help them, and everything. And I said: go after the vendors, congress critters. This is what you should be doing. Write some more laws that actually apply to the vendors and regulate the vendors, and reform the Computer Fraud and Abuse Act.

Well, and incentivize the vendors, too. There have got to be sticks, there have got to be carrots. Yeah. I mean, can we just get a no-fault disclosure mechanism? A no-fault disclosure mechanism, that's a great idea. I mean, I'm a sitting example of, like, yeah, sure, all the things you guys said; also, none of those will help me right now. I'm just putting myself in the spotlight.

Maybe a vulnerability researcher should be treated like a common carrier, like in telephone systems, you know, agnostic: this is not anything other than what has been discovered, and they shouldn't be faulted for anything that they've discovered, because they're not necessarily the root cause of it. It was some mistake made at the vendor.

And you were pulling on my pants about a question, so you probably should get to ask one. I was just trying to take them off. Just one. No, I think, as some people on the panel will know, this is probably a question that would have been uncomfortable to ask maybe six or seven months ago, because it might be controversial. But I think, from a regulatory perspective, when something affects life and limb, I'm curious what role the government should play, not only for vendors but also for researchers. Because not all speech is protected, right? So if you're disclosing something that you found that could cost someone their life or cause them serious personal damage, not companies but individuals, what role does the government play in that?

Well, one, I think you're assuming that a researcher is going to
necessarily know all the uses of the code that they found the vulnerability in, right? They are not necessarily going to know that something they found a vulnerability in over here is actually also code reused in some kind of life-and-limb scenario. So I think there's a degree of, let me put it this way: the researcher already did a bunch of free labor. Having them understand all the use cases of the code, I don't think that's in scope. And frankly, I think what we need to think about is: what is the real threat to life and limb? Is it full disclosure, or is it non-disclosure of discoverable bugs? I think that's the biggest danger. It's really nuanced, right?

I think we're making this black and white, but the question that I'm really interested in is: does the role of the government include any regulation around researchers and how they're accountable for that? Right, because it's illegal to walk into a room and yell fire if there's no fire, because you're inciting a panic. If there is a fire, you're not lying, so you're fine. But there's no rule that says you have to yell fire if you see fire, right? So just because we may subjectively decide that ethics exist on both sides, it doesn't mean that it's the job of policy to police both sides.

Where I think government needs to step in more is, oh, besides, there was a talk at the underground track about the FBI cyber ninja program, basically providing access to FBI people in their technical operations, so that before you pull the trigger on something and go all the way, you let them know. They're actually working with the EFF; you know, Kurt from the EFF was there with Russ from the FBI. The analogy they drew was: you find a big bag of drugs on the street, and you're like, "I should probably drop these off at the police station," said no one Black ever.