Hello, my name is Chloe Messdaghi, and I'm happy to be on the disclosure panel with you for this wonderful Biohacking Village. Before I get started, I wanted to let you know who I am so you have a better idea of the content I'm going to share with you. I'm the VP of Strategy over at Point3 Security, and when I'm not doing that, I'm an ethical hacker advocate. If you're wondering what an ethical hacker advocate is: I fight for the rights of hackers, and I work to change the imagery and the wording the press uses to portray us in such a negative light. When I'm not doing that, I'm also pushing for diversity and inclusion within the community. I'm the president and co-founder of WoSEC, and I'm also the founder of We Are Hackerz (formerly known as WomenHackerz), which is a private online community of hackers of underrepresented genders. We run workshops at all different levels and all that fun stuff, and we hold weekly meetups on Tuesdays at 5 p.m. Pacific time. And yes, I am a pet parent: a parent of dogs, cats, hamsters, bunnies, any of those kinds of things. Anyway, enough about me. If you want to follow me, feel free; I'm on Twitter and Instagram.

Let's get into the good stuff, though. When we type "hacker" into Google or whatnot, we get these kinds of images: the dark hooded figure, the face blacked out, and so on. Let's just be honest with each other: this is really annoying at this point, because what's happening is that people are seeing us this way, as something that looks like a criminal. And it's not just that. Even if you type in "ethical hacker" versus "criminal hacker," you get the exact same imagery on both sides, which is even more annoying because it shows this situation is still ongoing. And it's not only the imagery; it's also the words being used. The media, meaning the press and marketing organizations, keep using the term "hacker" in a negative way, when they should be saying "malicious actors" or "cybercriminals."

If you're wondering how this imagery impacts us, I'm glad you're asking, because I'm one of those people who's obsessed with brains, so we're going to talk about them. First things first, we have to dive into the amygdala, which is an almond-shaped part of your brain within the temporal lobe. The thing to know about the amygdala is that it sorts who's like us and who's not like us, and it drives the fight-or-flight mechanism. If a person has some connection to us or looks like us, we think, "this is a friend." If the person doesn't look like us, or we've never had a conversation with someone like them, we see them as an enemy. And that is how we end up with these things called socially constructed beliefs. When the media shares this imagery of the hooded figure, or uses the term "hacker" this way, socially constructed beliefs get implanted in people's minds that hackers are criminals, that all of us are criminals, that we're bad people.
And because of that, it has created this fear in the public. You know how it goes: we tell someone that we're a hacker or that we work in the hacker community, and the person takes a step back, or their jaw drops, or their eyes get a little big. They're nervous, but they don't want to come off like they are, even though they totally are. And you're thinking, "dang it, I should have gone with the security researcher term," but you did it; it's okay. What happens at that point is that the person now knows you're a hacker, and they've seen the images and heard the words the press uses about hackers. They see you, you tell them you're a hacker, and they take that step back because they're trying to figure something out: whether you are a friend or a foe. The thing to note about the amygdala is that even though it sends that signal, it's completely subconscious, so you don't even know you have that fear. The only time you know it's a fear is when it reaches the prefrontal cortex of your brain, which is right here. So the amygdala sends a message saying: warning, warning, there's a hacker near you. Be careful. They're a criminal. They do bad, terrible things. Now that person has a choice to react or to ignore it. The only way you can question socially constructed beliefs and biases is through stories and through meeting people within that community. When we're inside our bubble, we're protected from these things; we have our own beliefs, and they're constantly being verified as correct. It's when we go outside our bubble that we realize we don't actually know everything. And being comfortable with the uncomfortable is always a hard thing.

So, the main takeaway, if you're wondering, "Chloe, why are we still talking about the brain and the amygdala?": the thing you need to understand is that none of this is permanent. Socially constructed ideas are never permanent. People have a choice to learn about another perspective, but they have to hear that other perspective, and right now we're not seeing that in the press. Because of that, we're still being seen as criminals. And because of this fear and public perception, we still have a situation where 93% of the Forbes Global 2000 don't have a vulnerability disclosure program, and 60% of hackers do not report vulnerabilities out of fear of being prosecuted. The laws preventing good hacking are ones such as the CFAA and the DMCA. The stigma associated with being a hacker, and how the public sees us, also increases the chances of having more malicious attackers. Not hackers, attackers; I tried to catch you there, to see if you caught it or not. What we need to know is that there is legislation out there that is hurting us to this day, and what we need right now are some amendments. For example, the main anti-hacking law, the Computer Fraud and Abuse Act, was created in 1984 because Ronald Reagan watched this movie called WarGames and panicked. The thing is, many of the rules and laws that have been created stem from panic and fear, not from empathy. Nobody ever really had a conversation with the hacker community to ask, "hey, is what we're doing here right or wrong?"
And the thing is, we need to push for some amendments now more than ever. Yes, we believe malicious actors should have to deal with prosecution. But if we are working within scope, we shouldn't have to worry about being prosecuted. If we go out of scope by accident, we shouldn't have to worry about being prosecuted. However, if we go out of scope and we exploit, then okay, yes, we did something bad with malicious intent. The lesser-mentioned laws are the copyright acts, under which reverse engineering is treated as a violation; it's simply not legal. These are things to consider and remember. And of course there are acceptable use policies: every time you're on your iPhone or anything like that, it asks you to please check the terms and conditions. I'm sorry, but it's really hard to understand what we're reading, because we're not attorneys, and for many of us English is not our first language. So there are all these things set up for us hackers to fall into a situation of being prosecuted, and it's scary.

But if you want to start changing things, there is one thing you can do right now. If you go to change.org, there is a petition that was created, which basically tries to get all of us to sign it so we can bring it forward and push for changes to happen. We need organizations to start trusting us, to have bilateral trust that they will not prosecute us if we report a vulnerability and we don't exploit it. We're also asking politicians to update the laws we have (creating amendments is one way), because the laws they've created are preventing good hacking. And we're asking the media: if you're reporting about the hacker community, please show us in a good light, and if someone did something malicious, don't call them a hacker; call them a malicious actor or a cybercriminal. As a community, we also have to do our best to stay within scope and not exploit, because when one of us breaks that, it takes all of us ten steps backwards. I'm looking forward to the other panelists sharing more about their side and what they're seeing when it comes to disclosure.

Let's start by getting a little more into the weeds here. My focus in this area is going to be on hospitals and the vulnerabilities involved with hospitals. To start off, here's a little bit about me. I don't want to spend too much time on this particular slide, other than to say I've got a lot of experience across a lot of different hospitals, which gives me a unique perspective to see some interesting patterns. Unfortunately, I'm here to scare the crap out of you a little bit about how insecure hospitals really are, and then hopefully motivate you to do something about that.

Let's first talk about the most important thing here, and that's inventory. When I'm dealing with hospitals, particularly the ones that come into my environment where we start supporting them for the first time, they never have an inventory. If we're lucky, they've got a spreadsheet similar to the one St. Alice here has.
If you get a chance, please participate in the CTF; that slide is actually the inventory list for the CTF we used just recently, and it's pretty accurate. It covers pretty much the same things I see at hospitals. Now, there are some tools that help us figure out inventories. Lansweeper is my favorite, but there's also SCCM (Configuration Manager), and then you can get really manual with Nmap and Masscan. Those will get you to about 80% of the inventory, and it will be mostly accurate. Part of the reason is that things are getting added to and removed from the environment all the time, and the spreadsheets never get updated. And there's always the fun one where a grant comes along, the vendor comes in and installs devices in the environment without telling IT or IS, and all of a sudden IT and IS have to support those tools. It's kind of a mess. So, starting from the very basics, inventory is one of those vulnerabilities we have to deal with, and we need to figure out how to get better inventories for hospitals.

The next one we'll move into is actually discovering vulnerabilities. I like to start with an external scan to figure out all of the stuff a hospital has put out on the internet, everything it uses to communicate over the internet. Generally speaking, when hospitals first come into our support area, they are very, very vulnerable, and we have to start narrowing that down. You can see the little spreadsheet to the left there; that's from about a year ago, and I'm proud to say that most of the items in the critical areas have at least been moved off. Like I said, when a new hospital comes in, they're very, very vulnerable; nobody knows what they've got out on the internet. It's always a fun little situation. There are a lot of tools that can be used for this. I use Tenable.io, and on the reference page you'll see a whole bunch of others, but I also like to bring out Shodan.io and Censys.io. Shodan is a website, shodan.io, where you can search for medical devices and see how many are actually published on the internet where you have access to them. In this particular case I put in PACS, and there are 700-plus medical devices published on the internet that you can see. If you click on the IP of a particular device, you'll be able to see the CVEs and everything that device is vulnerable to. Kind of a scary little thing. (A short sketch of this kind of query follows below.)

Now, as bad as the external scan, the scannable piece on the internet, is, the internals are always much, much worse, because hospitals usually design systems with this nice hard firewall that is supposed to protect them from all the different things, and once you get inside, there's not much security whatsoever. It's like an M&M: hard candy shell, soft inside. Nice and scary stuff.

So how does this come about? Well, I blame a lot of it on availability. We're talking about the CIA triad here: confidentiality, integrity, and availability. Availability is king when it comes to hospitals. If you've ever seen a physician get upset about not being able to access what they need right then and there, it's quite a terrifying experience. They usually take it out on everybody, and rightfully so, because we're talking about life and death here.
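To make that Shodan step concrete, here is a minimal sketch using Shodan's official Python library. The query string and API key are placeholders, and host-level vulnerability data depends on your Shodan plan, so treat this as an illustration of the workflow rather than a guarantee of exact results.

```python
# Minimal sketch: searching Shodan for internet-exposed systems matching a
# query like the "PACS" search mentioned above, then pulling known CVEs.
# Assumes the official "shodan" package (pip install shodan) and an API key.
import shodan

api = shodan.Shodan("YOUR_API_KEY")  # placeholder key

try:
    results = api.search("PACS")  # illustrative query, as in the talk
    print(f"Exposed results: {results['total']}")

    for match in results["matches"][:5]:
        ip = match["ip_str"]
        print(f"{ip}:{match['port']}  ({match.get('org', 'n/a')})")

        # Host lookups can include the CVEs Shodan associates with that IP,
        # plan permitting: the "click the IP, see the CVEs" step above.
        for cve in api.host(ip).get("vulns", []):
            print(f"  vulnerable to {cve}")
except shodan.APIError as exc:
    print(f"Shodan API error: {exc}")
```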
So that tends to be one of the challenges we have to deal with: when Patch Tuesday comes out and everything needs patching, how do you reboot something that has to be up for the surgery taking place at six o'clock the next morning? We could also talk about N+1 redundancy and server farms, but hospitals are kind of cheap, and that leads to my next point: fast and cheap is far, far more important than good, built accurately, or done with quality. Fast and cheap is always going to win. Nobody really cares how good it is or what the quality is; they just want it set up and running, and they don't worry about anything else. And of course that leads to all kinds of fun little problems, because Microsoft is a wonderful company that gets everything to work fairly well (that's one of their goals: everything's got to work), but they turn on all the services by default. When you're pushing something out fast and cheap, you push it out with the default settings. So here we are with all these services running that we don't need, and they're now vulnerable because they haven't been patched in at least six months or so. (A short sketch of enumerating those services appears at the end of this section.)

The next one is more challenging and probably the most important: both IT and IS, we really kind of suck at talking to humans. We're great at talking about technology, working with computers, and communicating along those routes. The clinical staff don't talk tech at all; they talk human. They're worried about emotions, about understanding people's needs, wants, and desires, and they don't care about the technology. That's just the thing that's there, that kind of gets in their way and hopefully helps them out a little bit.

And the last one I want to talk about: there's really no accountability. There's also no incentive to do better or to fix these types of things. To echo Chloe's point earlier in her talk: there are all these different laws out there, but there are no laws about making sure that medical devices or hospital devices are secure and not going to cause these problems. There's also no incentive. There are a few nice little things; hopefully Casey will cover some of that in his talk.

Next I'm going to talk about my solution for this, and the keyword is communication. We in IT and information security have got to get much, much better at communication. We've got to figure out how to communicate with our target audience, how to deliver our message to that audience, and how to recognize when our audience actually understands our message; we may have to modify the message to get it across. Once we're able to communicate effectively (you know, those horrible soft skills nobody ever wants to work on because they're boring and challenging and difficult), then we can start talking about ensuring that the integrity and confidentiality of data is maintained, and that it's just as important, if not more important, than the availability of the data. And we'll be able to talk about how quality is just as important, maybe more so, and even cheaper in the long run than getting something out as fast and as cheap as possible.
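Picking up the default-services point above, here is a minimal sketch of a service-version sweep using the python-nmap wrapper. The subnet is a placeholder, Nmap itself must be installed alongside the wrapper, and you should only run this against networks you are explicitly authorized to scan.

```python
# Minimal sketch: enumerating listening services (and their versions) on a
# subnet, to spot default-enabled services that nobody has been patching.
# Assumes Nmap plus the python-nmap wrapper (pip install python-nmap);
# the subnet below is a placeholder.
import nmap

scanner = nmap.PortScanner()
scanner.scan(hosts="10.0.0.0/24", arguments="-sV --top-ports 100")

for host in scanner.all_hosts():
    for proto in scanner[host].all_protocols():
        for port, svc in sorted(scanner[host][proto].items()):
            if svc["state"] != "open":
                continue
            # 'product' and 'version' come from Nmap's -sV fingerprinting and
            # are often enough to flag services running long-unpatched builds.
            banner = f"{svc.get('product', '')} {svc.get('version', '')}".strip()
            print(f"{host}:{port}/{proto}  {svc['name']}  {banner}")
```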
And we can also start convincing politicians to hold people accountable, and by people I mean the CEOs and boards, all those people who structure corporations in order to avoid being accountable for these things. And we'll be able to start building out incentives, so that we have reasons to do things rather than reasons not to do things.

Next I want to talk about something that kind of scares me. Last year at the Biohacking Village, there was a conversation going on, and I believe it was Beau Woods who said this: physicians don't really care about security because there's no body count associated with security. There's no direct correlation of a medical device killing people. And to be honest, I think medical devices are being used to kill people. There's really no way to tell whether they have or haven't been, because we don't have the technology in place to detect it. There are no SOCs. There are no SIEMs. There's no IPS. There are no logs. There's none of that good stuff, at least not at hospitals. At hospitals, they have a firewall and they have antivirus-type solutions, and technology is very, very far behind everybody else. Pagers are still being used, and fax machines and multifunction printers are used to fax between floors. Just this morning I had a fun conversation about a CD being sent from one hospital to another: how it's important to verify the data on that CD, how it's important to make sure there are no viruses on it, and how, if they're going to communicate through CDs, they should probably encrypt the data. I don't think any of that has been done. (A small integrity-check sketch appears at the end of this section.)

Now, to bring it back around, I posted a little image here from 2004 about arrest rates and the percentage of crimes cleared. As you can see, murders are solved about 62% of the time. So even when a murder is committed, there's a pretty good chance, nearly 40%, that whoever committed it is going to get away with it. And that's just the way it is across the legal and law enforcement world. As I mentioned, this particular chart came out in 2004, but if you do a search and check, it hasn't changed; it's still right around 60%. So we need to do better. We need to start using communication, like I talked about, to start holding people accountable and providing incentives to ensure that these medical devices aren't actually killing people. We've got to figure it out.

A little scary thought for everybody: if hospitals have to invest in these monitoring solutions, then hospitals are also going to have to invest in ways to prevent what they detect. All of this is a huge cost and effort, and hospitals don't make very much money. A lot of it is spent on administration, and they would much rather buy an MRI machine than spend the money on securing something. So I'm going to drive home that we've got to set up some incentives.

All right, let's pull up this last slide, because I don't want to end on too scary or sad a note. Everybody's trying their absolute best at everything they're doing. Hopefully we'll keep trying, get better, learn more, and just do the best we can. I'll also throw this up: here are the references. I'm sure the slides will become available at a certain point, and you'll be able to click on the links and use the information in them. Thanks for hearing me.
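As a concrete version of the verify-the-data point from the CD anecdote, here is a minimal integrity-check sketch. The file name and expected digest are placeholders, and this only illustrates checksumming; the virus-scanning and encryption steps mentioned above are separate.

```python
# Minimal sketch: verifying that a file transferred on removable media
# (like the CD in the anecdote above) arrived unmodified. The sender shares
# the SHA-256 digest out of band; the receiver recomputes and compares it.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large imaging files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

expected = "digest-from-sending-hospital"  # placeholder, shared separately
actual = sha256_of(Path("patient_study.iso"))  # placeholder file name
print("intact" if actual == expected else "MISMATCH: do not trust this media")
```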
I believe Casey's up next, and we'll hear from him now.

All right. Greetings, DEF CON. Thank you for joining this talk, and thanks to Eric and Chloe for leading into my section. Now, who the heck is this guy? I am the founder, chairman, and CTO of Bugcrowd. My name is Casey Ellis. I've been in information security for about 20 years. From a Bugcrowd standpoint, and especially as it relates to vulnerability disclosure, Bugcrowd actually pioneered the crowdsourced security-as-a-service space. Vulnerability disclosure predates us by a long way, which I'll get to in the talk, but we were the first to actually get in the middle and try to make it work better. I'm pretty proud of that, and it means we've learned a lot about how this works, from a process standpoint and from a journey standpoint. As for me: I started life as a hacker, went into pentesting as a career, moved across to the dark side, security solutions and sales, and then broke bad and became an entrepreneur, which eventually led to starting Bugcrowd in 2012. You can see the hacker persona and the hustler persona on the slide; that pretty much sums up who I am and how I do what I do. So now you know me, and that's where to find me as well.

With respect to the talk: when we started Bugcrowd, it was really all about, how do we do this? There's this enormous community of white hats from all around the world (which Eric's talked about, along with some of the things they can do) and this incredible need for security feedback in all sorts of areas of cybersecurity. How do we make it easy to do this well? How do we connect that latent potential with the unmet demand within security itself? If you look at it simplistically, it's a fairly easy problem to solve: you just invite a conversation to happen and everything's great. But it turns out there's a lot of devil in the detail, which we did anticipate when we started off, but it's a lot of what we've continued to work on over the last eight years. And Disclose.io, which is what I'm going to talk about today, is a product of the policy work, and really of driving adoption of the vulnerability disclosure component of that. At this point we're a long way along from where we were back in 2012. Nowadays vulnerability disclosure is becoming a normalized phenomenon, and there's starting to be a lot of social pressure around it, which I think is a really good thing. It's still the kind of thing where a company is seen as a bit special if it does it, as opposed to being seen as abnormal or unusual if it doesn't, and really, Disclose.io is about flipping that. But I won't preempt myself any further; let's get into a bit of backstory.

In '95, Netscape launched the Bugs Bounty program. They've since dropped the S because it's cleaner, and that is a Silicon Valley joke for anyone who missed it. There have been lockpick bounty programs and all sorts of other things, going back centuries, around getting security feedback through a reward model, but Netscape's is broadly accepted as the first high-tech bug bounty program, and really, with it, almost the instantiation of being proactive around vulnerability disclosure: actually inviting and engaging it as part of your security process. Now, bug bounties and vulnerability disclosure are not the same thing. Don't let anyone tell you that they are.
But they look similar enough in terms of how they operate that we point back to that as the moment this whole idea really started to gain momentum. Bugcrowd started in 2012, and in 2013 (I've got these flipped around on the slide, but you get it) one of the things we noticed was that legal teams don't seem to understand how to write a policy that is almost the opposite of everything they've written before. Traditionally, you've gone out saying: hey, if you're a hacker, go away, you're bad, and here's all of the legal language creating a framework for us to say that to you. Now we're starting to get our heads around the idea that there are locksmiths as well as burglars out there, and we actually want the help of the locksmiths. Legal teams don't know how to do that; I believe it's still a new kind of idea in the space of law. And the challenge with legal teams is that when they're uncertain about what they're doing, they tend to write War and Peace. They err on the side of verbosity, so you end up with these great big, long, confusing briefs. That's the first part of the problem. The second part is that hackers don't read those things in the first place. They should, but it's the EULA problem. You get a subset of the community who really do dig in, and we're very grateful for them, but you've got to expect the behavior you get everywhere on the internet: people just click through, look for the next thing to do, and go. That's certainly true in this space too. And the challenge is that, with the anti-hacking laws and some of the provisions that have been invoked to chill security research, which Chloe was talking about earlier, the CFAA makes screwing this up as a hacker potentially a felony. The same is true of similar laws in other countries.

So in 2014 we thought, all right, let's try to untangle this one and solve it. We released the Open Source Vulnerability Disclosure Framework in partnership with Jim Denaro and the folk over at CipherLaw out of DC. That was awesome. It was basically this idea of: how do we compress down the really important things that need to be in a brief like this, to the point where people can theoretically just copy-paste and get it right, like, 90% of the time? At the same time, Katie Moussouris and a bunch of others were working on the ISO standards for disclosure, which got released to paying customers, so there was this convergence on trying to solve the standardization problem. Then in 2018, what we did was basically capitalize on some of the attention that was around this concept of legal safe harbor. Thank you to Amit Elazari for really pushing that forward with her legal bug bounty initiative, and to Dropbox and a couple of other folks for what they had going on. We took the OSVDF and the legal bug bounty work, merged them, and put them all under Disclose.io. Bugcrowd added it as a default, and off we went from there. So that's the backstory. And I think, with respect to the CFAA, it's important to keep in mind that it is in active use. I personally believe there should be laws in place that address computers as a vector for committing crime.
So that's why we called out that when it's not clear what side of the law you're on, especially when you're trying to help, that is a chilling effect. It means security feedback doesn't pass from A to B, and that's always bad. So we're trying to solve that problem. Really, what Disclose.io is about is this: how do we make the adoption of vulnerability disclosure programs go viral? How do we make this practice into something that isn't a huge push from companies like Bugcrowd and people in the researcher community, and isn't a reactive "oh crap, I got tweeted at, what do I do now?" experience for folk on the vendor side, but something the internet collectively is leaning into and doing well? That really is the mission of Disclose.io: a healthy and ubiquitous internet immune system. Props to Keren Elazari, Amit's sister, for making that concept of hackers operating as the immune system of the internet popular in her TED talk. It feels like a billion years ago at this point, but it was a little while back. And I think it's true. The healthcare sector's, and the internet's in general, rejection of security input, or its inability to identify that this input is important and good, is kind of like an autoimmune disorder in a lot of ways. This is what we're trying to solve: how do we get to the point where hackers are functioning as the T cells of the internet, and the internet, as an organism, as a body, is able to cooperate with that and actually grow up big and strong as a byproduct?

The mission of Disclose.io is to drive vulnerability disclosure adoption, as I've probably made clear at this point, but to do that through safety, simplicity, and standardization. If that's what you remember about this project, that's the key takeaway, because it does involve everyone, and it is open source. I'm known for Bugcrowd, and I'm sitting in front of a big Bugcrowd logo, and that's the commercial construct this project started from, but at this point we have basically spun it out into its own thing. It's very much something we want everyone to participate in, regardless of whether they're a customer of my company or not. That's the goal here.

So what is it? How are we doing it? How do you unscrew a 30-year-old problem on the internet? Glad you asked. There are three main components to it: the terms, the list, and the logo. The purpose of the terms is to make it easy: how do we eliminate the friction of understanding and adopting VDP language in a way that benefits hackers, companies, and the overall outcome? Ideally, the result is standardization of what those terms should look like, eventually to the point where things like the CFAA, the DMCA, and others start to come under the right kind of pressure to be reformed so they actually accommodate good faith in our space. That's the utopian goal. We're a ways off from it, and there's a lot else for the legal folk to occupy themselves with right now, but in the meantime we can do our part. The second piece is the list, which is there to aggregate adoption, and really to drive adoption as well. The whole idea there is network effect, which is that virality piece I was talking about before: how do we get it all into one spot in a way that basically creates a critical mass of best practice that people are actually drawn to?
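As a sketch of what consuming that list programmatically might look like: this assumes a local clone of the community-maintained disclose.io program database (the diodb repository on GitHub), and the file and field names used here (program-list.json, program_name, policy_url, safe_harbor) are assumptions that may not match the current schema exactly.

```python
# Minimal sketch: tallying safe-harbor adoption across the Disclose.io
# program list. Assumes a local clone of https://github.com/disclose/diodb;
# the file name and field names ("program_name", "policy_url", "safe_harbor")
# are assumptions and may differ from the repository's current schema.
import json

with open("diodb/program-list.json", encoding="utf-8") as f:
    programs = json.load(f)

full = [p for p in programs if p.get("safe_harbor") == "full"]
pct = 100 * len(full) / len(programs)
print(f"{len(programs)} programs tracked, "
      f"{len(full)} offer full safe harbor (~{pct:.0f}%)")

for p in full[:5]:
    print(f"- {p.get('program_name')}: {p.get('policy_url')}")
```

A tally like this is the kind of measurement behind the roughly 13% full-compliance figure, against the 30% goal, that comes up later in the talk.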
And the third piece is the logo, which is really about normalization. I'll go into each of these in a second; I'm getting ahead of myself. But how do we promote best practice? I think the VDP is a little bit unique among security things a company can do, because a non-technical user can understand it. They might not get "blockchain-enabled, AI-powered, next-gen endpoint something-something," but the idea of a neighborhood watch for the internet, especially in a safety-critical space like medical, anyone can get. And we're at a point in time where the average human is more concerned than ever before about the risk they take on when they buy a product. So this becomes an actual benefit, aside from the security benefit and the fact that it's just the right thing to do. The goal is for it to become almost like the green padlock for SSL. And I say that fully mindful that people have started to remove the green padlock precisely because it has become normalized; if this whole idea of VDP adoption can go through that same arc, I would be incredibly happy, and I think pretty much everyone in this space would benefit, as well as the internet itself.

Right, the terms. It's a collection of boilerplate vulnerability disclosure templates. Some folks just copy-paste it. With the OSVDF, the initial incarnation of this, we noticed the language popping up everywhere; it was getting translated into different languages. That document is grandfathered into policies all over the internet at this point, and that's just the thing people do. So you can do that, or you can use it as a foundational framework, a starting point for new versions, which is one of the reasons it's open source. We've actually taken input from programs who've looked at the policy and said, "hang on, here's an edge case a lot of people hit that you probably haven't considered; we're going to write it in, and you should consider it." They issue a pull request, and we've merged those into the language over time. It's open source, as I keep saying, because I really want to stress that point; it's licensed under Creative Commons Attribution 4.0, so anyone can use it. Some of the folk represented in the NASCAR-style slide section here are the kind of people helping out. This isn't an endorsement of the project by them, but these are the kinds of folks participating on an individual level to make that language better and drive this forward, the kind of folks actually rocking up and assisting. That's the other benefit of open source: it's not just Casey, the bush lawyer from Australia, and his opinion on how to do this. It's all sorts of different folk coming in and making the whole thing more and more effective over time.

And honestly, this is the interesting part, I find: it's actually kind of hard to do. One of the challenges that frankly needs to be overcome, because it's a function of how this works, is that not every hacker is a lawyer. In fact most aren't, I would say, and many of them, most I would say, speak English as a second language.
So you've got to be able to articulate what the hell you're trying to do in this brief in a way that, if they do read it, lets them understand what's okay and what's not. You've got to balance readability and brevity with legal completeness. Obviously you want to provide safety to whatever degree you can, without going overboard and having it turn into a twelve-page read-fest. And the safety provided by the clauses in this type of thing needs to be bilateral: it needs to protect the company putting the policy up, and it needs, more importantly I think in the context of this talk, to protect the hacker as they're doing research, or even the concerned citizen who found a bug not as a product of security research and just wants to send it across, without knowing whether that will get them sent to jail or not. At the moment the terms cover the US, Canada, the Netherlands, and Belgium, and we wrote a version for the 2020 US presidential elections. That was tough, because there's obviously a lot to consider there, but that's the sort of problem set we're trying to tackle with these terms, with the goal of making it easy and standardized.

The list is a true community-powered, vendor-agnostic directory of all known VDPs and bug bounty programs. At the moment we track things like what legal protections are afforded by the program and what kind of rewards it offers. We're about to integrate support for security.txt. And we call out the disclosure models and timelines; if it's a coordinated vulnerability disclosure model, that's going to be different for every organization, and we've accommodated that in there as well. Again, it's open source, licensed under Creative Commons.

So that brings us to the seal. This is the green-padlock part I was talking about up front. What we've done at this stage (this will continue to evolve, but this is where it's up to at this point) is divide out what we refer to as safe harbor, which I think is the popular term for it at this point. What that's really talking about is what kind of legal protections the program owner has proactively included in the program for the researcher. We've broken it into a partial safe harbor, call it a partial compliance statement, and a full safe harbor, a full compliance statement. The difference between those two things is quite important. What we refer to as partial safe harbor is essentially a good-faith statement. It's the idea that "hey, we agree not to lawyer up and cause you a hard time if you play by these rules," which is better than there being nothing there, because with nothing, the risk can be quite high, and there are even recent examples of that sort of thing going wrong. That statement by itself is a good start. The problem is that you're still potentially, technically, committing a crime as you conduct security research, and that's where there's still a decent amount of exposure. I liken it to California: on the highway, the speed limit's 60 miles an hour, but everyone drives at 80, and you don't get pulled over. There's this sort of social contract. It's not quite safe harbor, because it's not written down anywhere, but kind of everyone knows.
It's sort of the same thing: you could get pulled over for it, but you just don't. Full safe harbor would be the analog of actually raising the speed limit to what people drive at, and putting it on the speed sign, so that at that point there's no recourse for you to be issued a speeding ticket. I'm doubling down on this driving metaphor, but I think it's a neat way to explain it. To get full compliance in the context of Disclose.io, there need to be: a provision against the anti-hacking laws (authorization is typically the hinge term; it certainly is with the CFAA, and the same is broadly true around the world); an exemption against anti-circumvention laws like the DMCA, which is very relevant to medical device research; an exemption against acceptable-use or terms-of-service violations, because you're breaking the thing, and that shouldn't mean you're breaking the law if you're doing security research; and a general acknowledgement of good faith as well.

And this is the part where, ideally, everyone can help, and we've started to see this happen. An organization initiates a vulnerability disclosure program. It gets added to the list. People see it, look at it, and say, "where are you at with the safe harbor clauses? Where are you at with the legal protections you're giving researchers who are actually trying to help you out here?" The organization says, "oh, I didn't even really think of that," because oftentimes that's true. They go back, look through the boilerplate, work out what they're going to include or how they're going to adapt their policies, it gets updated in the list, and at that point they're issued the logo. Should an organization actually take that logo and use it to promote the fact that they've got a VDP, that this is part of their cybersecurity process, then hackers see it and start to help out, which is good. Customers see it and trust them more, which is good. And the part I like best: their peers in the same vertical see it and ask themselves, "why aren't I doing that? I should probably do that too." If we're in a vertical with five players in it, four doing it and one left out, that one looks like a schmuck, and no one wants to be that company. So it's basically increasing the lateral social pressure to adopt vulnerability disclosure and actually do it well.

And, you know, it feels too soon to be talking about virality; we were actually talking about this in the context of virality and network effect before COVID broke out. Without disrespecting the pain that it's causing a lot of people right now, I think it actually illustrates the idea pretty effectively: how do we take this concept of doing the right thing (making people more secure, doing the right thing by hackers, and generally improving the security of the internet) and actually make it viral? That's really the goal we've got here.

So how does this relate to medical security, and what can we achieve in 2020? The first thing, and this is a call for help, is to adapt the terms to the medical device sector in particular; I think the medical sector in general would be great too.
The election security revision we did took a lot of work and a lot of thoughtfulness from people involved in that space, because there are a lot of edge cases specific to it that needed to be considered and factored in, and then you need to simplify, because you can't just keep adding things on; that violates the simplicity design goal of the terms. So, lawyers who are up for that: please do reach out. As people adopt vulnerability disclosure in medical security, we want to see at least 30% of them with this full authorization statement. Ideally higher; I'd love to see it get to 100% at some point in the future, but 30% seems like a great place to start. Across the thousand-odd programs we're tracking at the moment, full compliance is at about 13%, I think, last time I checked. So there's a gap there, but it's a bridgeable gap, and I think that's a good goal.

And specific to COVID: this is something we've been pushing on commercially as Bugcrowd, but also just in general at a policy level. How can we use this to basically unscrew the problem of COVID contact tracing, both from a security and privacy standpoint and from an adoption standpoint? One of the things preventing people from adopting contact tracing applications, rightly or wrongly, or perhaps rightly and wrongly, is that they don't trust the security implications, they don't trust the privacy implications, they don't trust the government or the folks that wrote it, whatever the individual excuse or reason might be. It's similar to the conversation before: this is neighborhood watch for the internet. If there's transparency, if there's this known group of people constantly working to help improve this stuff, for their own safety, as hackers and as members of this community, then all of a sudden we solve all of those problems with really the one shot. Obviously, provided the issues are getting fixed, which we're not addressing in this; we're just talking about vulnerability intake and policy. The whole vulnerability management piece is a separate but adjacent subject, and I don't want to confuse the two. But you can see the model here: if all of a sudden a government or an agency turns around and says, "cool, this is our solution, these are the concerns we know you have, this is how we're addressing them, and by the way, the transparency should make you more confident," and that drives up adoption rates of safe and privacy-conscious contact tracing applications, then that's part of what makes that particular model of pandemic management work, which is something we need to be thinking about right now. I personally hate that type of solution at any time other than this. I've been asked a lot of questions about it, and yeah, at any other time I'd run screaming from something like contact tracing, and I'd recommend absolutely everyone else do the same thing. But we haven't really dealt with the kind of thing we're dealing with now while having this technology available as a potential solution, so I think this is a really practical application of it.

So, help wanted. Lawyers: help us translate the terms. If you want to translate them into a language you speak, in a jurisdiction where you can be considered knowledgeable about the particular anti-hacking and anti-circumvention laws that apply, that would be fantastic.
We'd appreciate that; we want to see as much of that happen as possible. Hackers: contribute to the list. If there's something missing, or if there's a detail that's incorrect, submit a pull request. And actually use the list. The idea of it is to help you guys and girls and everyone else involved in hacking to look at the target space that's out there that actually wants your help and is giving you an opportunity to provide it. If you want to work on paid programs, or if you want to work on VDPs in verticals you're passionate about, do it. The goal of the list is to get all of that information in one spot and make it easy and shareable for everyone, and contributions power that. Organizations that might be watching: start talking to your security teams, start talking to organizations like Bugcrowd, start talking to hackers about what you need to do to take steps toward being proactive about launching a vulnerability disclosure program and actually engaging the feedback of the hacker community. A thing to call out here is that you're probably going to get that feedback anyway. That's sort of where this is all going: this idea of the internet being a feedback loop is very quickly becoming normalized. So you can either be reactive to this trend or be proactive, and I think being proactive is the better bet. And then include safe harbor for everyone in the mix, and spread the word. Again, this is the reason we open sourced it, and the reason we created this particular brand and logo around the idea: we didn't want it to be purely tied to a commercial initiative. We think this is something that should happen anyway, and I'm speaking as Bugcrowd right now. This is the kind of thing where the more input we get, the more it gets spread around, and the more people get involved and adopt it, the better off we all become. And I think that's something we can all be a part of.

So that is my spiel on Disclose.io. I really appreciate everyone for attending this talk. Thank you to Chloe and to Eric for their sections and for tying it all together: this whole ecosystem of the relationship between the hacker community and hacker feedback loops, the current and future state of security overall, and safety in the medical sector as a byproduct of that. Hopefully we've given you all something to think about today. And now we're going to do some Q&A.