Okay, so to refresh everybody's mind and get us back into what we've been talking about: we've looked at the history of successful security breaches, successful hacks, and we've seen what happened to those people. Almost all of them ended up in jail. So on one hand we're talking about how to not end up in jail, at least for security-related stuff; other stuff I can't really help you with. And on the flip side, not going to jail is one thing, but just as importantly we need to ask whether we're being ethical security researchers and security practitioners. The ethical component is very important here.

So: don't do anything illegal, to avoid jail. In a hacking and security context, that means never hack into a system that you don't own, or that you don't have explicit permission to hack into. What does explicit permission mean? How do I know that you've got explicit authority to hack into a system? Professional penetration testers don't start their tests until they get the piece of paper that authorizes them to do so, because otherwise they have no cover; they're just breaking into a system. If a company hires you to break into their system, you want that in writing before you do anything.

What else? If the government asks you to hack into a system? I'll side-step part of that question; my only advice there would be to get a lawyer to figure out what to do. Above my pay grade, if you will. Let me rephrase: if the government comes to you to test their own systems, you want a piece of paper that says that's legal, but you also want to make sure they actually own those systems.
So if I come up to you and say, hey, I work for Bank of America, I want you to test our system, here's $500, and here's a piece of paper — that paper means nothing, because it comes from me and not from Bank of America. You want to make sure you're talking to the right people and that you've done your due diligence.

What else? Hacking from another country that doesn't have an extradition treaty with the US? That may help with the avoiding-jail part, but not the ethical part. It's still unethical to hack somebody's system without permission.

So what are the forms of permission? We've talked about a written contract. Bug bounty programs? Yes — a company may have a policy on their website, and we'll talk about that in a second, that explicitly says: we allow you to attempt to find vulnerabilities in our system, provided you follow these rules. Open source? Yes. What's the great thing about open source software? The code is out there. And not only is the code out there — you can download it, run a version on your own system, and now you fully control it, so you can hunt for bugs to your heart's content on your own system. But if you then go out and look for vulnerable installations of that software to test, is that ethical? No — you still don't have permission to break into other people's systems; you only have permission to test your own. The other thing I'll say is that in some cases verbal permission is acceptable. As we'll see, we'll be using a homework submission system. This is a security course, after all.
We'll talk about it when I assign the homework on Monday, but as long as it's not overtly malicious — say, DDoSing the server while people are trying to submit homework, which is really annoying — if you find a security vulnerability there, you should tell me. I give you permission to attempt to find vulnerabilities on that system, but you have to be responsible and make sure you're only testing that one machine, not all of my lab machines that are doing research and hosting important data.

Question: why do you need a written document to hack into someone's system? There are private companies who reported malware, or rather a vulnerability, in the Android OS. They weren't assigned the task of finding it; they just found it. What happened there? What they found was that a malicious payload could be delivered via MMS, and the video player component would process the message and execute it as soon as it arrived. That player was standard and popular, so the flaw was serious. But how did they test it? Did they test it on your phone? On my phone? On government agents' phones? No — they tested it on their own phones, their own devices that they control. That's the key point: on devices that you own and control, you can do, in my mind, whatever you want. You have the authorization to find vulnerabilities there. You don't need permission from Google to try to find vulnerabilities on an Android device that you own. But if you're going to try to exploit my device, you'd better have my permission in a written document. If it's not written, you can say you had permission, I can say you didn't, and now we have a problem. We'll also get into what to do after you find a vulnerability — that's part of ethics too. Okay. So we all want to practice, right?
I was actually going to ask: aren't there certain stipulations? For example, if the system is closed source, like the iPhone — there was a huge thing when the iPhone first came out about Apple hating security researchers, because Apple technically owned the iPhone operating system and everything associated with it, so doing research on it led to a lot of legal battles.

So here's where we get into the legal part, which is definitely not my forte. Once again, I'm not a lawyer; I can't give legal advice, and this is my layperson's opinion. In my view, if I bought your software and it's running on my phone, I can do whatever I want to it, including reverse engineering it and looking at the binary to see how it works. That's basically the root of hacking: you're trying to understand how something works, you notice it does something it's not supposed to, and you prove that on your own device. Now, with an iPhone or any Apple device, Apple would say it's not really yours — from the company's point of view it's more like an unlimited rental. I'm not a lawyer, and I'm not an end user license agreement lawyer either, so you have to be careful.

Let's hear from the lawyer in the room. It's been a few years, but: one difference is that the software is typically licensed rather than sold, so the company has different controls they can put on it and enforce. So you have two different legal aspects. On the contract side, if you broke a contract, that's a civil matter — it's not criminal, you're not going to jail. You can break the terms of use and, other than the Digital Millennium Copyright Act's anti-circumvention provisions, you're probably not breaking any actual law. And if you're doing it yourself and you don't tell anybody, it's really hard to see how they'd ever know. Yeah, that's a good point. And you're not doing anything that's causing other people harm, technically.
So there are no real damages there to lead them to come after you, other than to annoy you with a lawsuit if something comes up. Right, that's a great point. The EULA, if I understand it correctly, is essentially a contract between you and the company giving you the software — you and Apple. By not following that contract, you're breaking your agreement with Apple, but that's not necessarily illegal. They could sue you for breach of contract or something like that, but then they'd probably have a PR nightmare, which is one reason companies rarely do. And the other point: is every clause in the end user license agreement you agreed to actually enforceable if the company does sue you? There are lots of moving parts here.

To my mind, setting the legal aspect aside, the ethical aspect is clear: I'm not hurting anybody, and I'm not using this to gain unauthorized access. I'm using it for my own knowledge, to understand the system more, to understand how it works, and maybe to teach people how it works. So I'd make a distinction, though I'd agree with you. With breaking the law, what you're worried about is getting caught — but whether you get caught has no bearing on ethics. Ethics is about being a good person, for society and as an engineer. We say "don't go to jail," but what we're really talking about is the constraints you put on your own actions. Yeah, that's a good point.
I couch it as "don't go to jail" because that's immediately recognizable, and it's fun, but even if there were no law against breaking into computers, even if we lived in a country with no extradition treaty, you'd still want to behave this way — because you want to be a good person, a good engineer, a good security professional. And all of you are representing me when you go out there and do security work. Someone will ask, "Who taught you how to do this? Did they teach you about the ethics of what you're doing?" — and I don't want the answer to be, "Yeah, Adam showed me and said it was totally cool." So I'm covering myself here too.

Alright: practicing in an ethical manner, like we said. Download the code onto a server or a system that you control. Virtual machines are incredibly cheap; you can run tons of them on your tiny laptop, so you can have as many machines as you want. The other thing that's emerged recently is bug bounty programs, which we'll look at in a second. There are a lot of companies that give you license to try to find vulnerabilities in their software, sometimes even on their live websites, provided you follow their guidelines and policies. Once again, that's a contract between you and the company, but here they're giving you explicit written permission.

What's another way? There's the academic angle. It's not that I can do whatever I want, but oftentimes part of our research is understanding how widespread vulnerabilities are. If I found a vulnerability in Windows XP systems that only 1% of the country is running — well, actually 1% is kind of a lot, so say 0.01% or something very small — that's interesting, but it's not very impactful.
But if I say I found a vulnerability affecting 80% or 90% of Linux servers — like the Shellshock Bash vulnerability, which affected something like 50% or 60% of internet-accessible computers — that's a huge deal. So when we do vulnerability analysis, we sometimes do go out and try to find vulnerabilities in deployed systems, but we have to do it very carefully. We think about: are we going to crash any systems? What's the possible impact on real-world systems?

For instance, my lab at UC Santa Barbara took over the Torpig botnet for six days. This botnet would install software on infected computers, and the software would then contact a random-seeming DNS name to get command-and-control instructions. One of the students figured out that the domain wasn't actually random: it was generated from inputs he could predict. So they registered six or seven days' worth of upcoming domain names, and when those days came around, they had hundreds of thousands of computers connecting back to them — sending all the private data, everything the bots would have sent to their controllers, to us instead, for research purposes.

Then the question becomes: what do we do? One option was this: we knew the command-and-control infrastructure had the capability to push commands to the bots, so we could send an uninstall command, or some command that would remove the malware from those systems. What are some of the ethical considerations there? What do you have to weigh? You can run a command on anybody's computer, so there are things you shouldn't do — erasing someone's data, reading someone's private data — those are issues. And: how do you know the command is going to succeed on every single machine you run it on?
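As an aside, the trick the student pulled off can be sketched in a few lines. This is a hypothetical, simplified domain generation algorithm (DGA): the seed string, the hash, and the label length here are all invented for illustration — Torpig's real algorithm was different — but the key property is the same: the domains only *look* random, and anyone who recovers the algorithm can compute the next week's rendezvous domains and register them before the botmaster does.

```python
# Toy date-seeded domain generation algorithm (illustrative only; not Torpig's
# real algorithm). Each bot computes the same domain for a given day, so a
# researcher who knows the algorithm can pre-register future domains.
import hashlib
from datetime import date, timedelta

def domain_for(day: date, seed: str = "demo-botnet") -> str:
    """Derive the command-and-control domain for a given day."""
    material = f"{seed}:{day.isoformat()}".encode()
    digest = hashlib.sha256(material).hexdigest()
    # Map hex digits to lowercase letters so the label looks like a hostname.
    label = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:12])
    return label + ".com"

def upcoming_domains(start: date, days: int) -> list[str]:
    """Pre-compute the domains for the next `days` days -- exactly what the
    researchers registered in advance to sinkhole the botnet."""
    return [domain_for(start + timedelta(days=n)) for n in range(days)]
```

Calling `upcoming_domains(date(2009, 1, 25), 7)` yields seven deterministic domain names; registering those is all it takes to become the botnet's controller for that week.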
It doesn't have to succeed everywhere. But if the command could be harmful, maybe you shouldn't try it at all. Okay, quick ideas — definitely we don't want to do anything malicious; the question is whether we could uninstall it. Suppose we even had permission from all hundred thousand of these people to delete the malware from their computers — what's the flip side? A sample set? Try to get permission from those people first? I'd be concerned about unknown effects of uninstalling it. Yes. Do you know every single system it's running on? The example I always give: what if there's a heart-rate monitor running on a Windows XP machine infected with this virus, and because of some weird environment, your uninstall crashes that medical device and you kill someone?

What are the pros? Keep talking. Trespass — you're basically trespassing on their system to do this. You can think of it like an actual house, but everything has exceptions; all rights, including property rights, have certain limits we put around them. From an ethical standpoint you could argue you're helping everybody, like a police officer who breaks down your door because you're being harmed by somebody inside. Same idea. There are unknown effects, unknown downsides, but at the time you're making the decision it may seem best for society as a whole. There's no clear right or wrong answer — that's the whole point. The important thing is that you can't just do whatever you want; you have to think these things through and have some justification. Yes, it would benefit society, because this is a very malicious piece of software that's actively stealing people's usernames and passwords, their credit cards, their social security numbers, dates of birth, everything.
And it was sending all of that data to malicious people who were not using it for good purposes. So the argument goes: hey, we could completely eradicate this virus. You could even imagine patching the machines to the latest version so it doesn't happen again. But the flip side: if it were a house, that would be like noticing your neighbor's window is broken, sneaking in through that window, and fixing it for them — while being inside their house. Would you be super stoked to wake up in the morning and see me, or some nerdy researcher from Santa Barbara, standing in your house? "Don't worry, we're here to fix your window, because it was broken. We'll fix it and then leave." So what have you done, and what did you do it for?

What are some other options? Change the way it contacts the command-and-control server? Yes — but whatever the technical mechanism, at some point you have to change something on their system or run code on their system, and that's the root issue: can you even do that, and should you? I think the worst case is the right frame: as you said, if it might kill a person, probably don't take the risk; but if it's probably just going to take a machine down for a day, and it's for the greater good of society, I'd rather do it. Yeah, and that's the tricky part of worst-case analysis: you don't know all 100,000 machines. Then you start thinking about engineering issues. Can you actually write the command so as to guarantee the worst case won't happen? No, because you can't possibly test on all of those machines. It's practically impossible.

There's a third option, related to what somebody over here was saying.
See if there's a way to cut off the source — attack whatever is causing the whole thing. Yes — we could use this information to go after the larger botnet. That's a good approach: side-step the dilemma, don't touch the infected machines, but use the data we're collecting to try to find the bad guys. And that's actually part of what they did. As soon as this happened, they contacted the FBI, they contacted Bank of America and some other banks, and they set up an agreement to share part of the data so Bank of America could identify which of their users and credit cards were compromised. Ethically, you can also in some sense appeal to a higher authority: you can say, hey, FBI, we have the ability to do this — do you think we should, or shouldn't? It goes back to asking the government what they think. In this case they decided definitely not to push any commands. It was one of the first times researchers had done something like this, and we didn't want to be responsible if anything bad happened. The FBI didn't want to be responsible either. We were just using it for research purposes.

Some other examples: we're doing some research now where we look for specific vulnerabilities on the internet. We crawl, but we do it very carefully, and we target specific checks that are highly unlikely to have side effects. It's the difference between having somebody execute code on your machine and just looking at the house and noticing that there's a broken window.
In my mind those checks are okay. They still cost people bandwidth and resources, so you have to think about it from an ethical perspective, but at the end of the day we're learning something new and engaging in research. Plus I have the backing of the university, so if anybody does try to sue me, I have some resources there — a little plug for being an academic.

Okay, bug bounty programs are awesome. This is something fairly new, I'd say within the last four or five years: companies will give you money, or fame, in exchange for reporting security vulnerabilities to them. How much money — do you get rich? Someone said $600,000 — as a total across a whole program, maybe. For each vulnerability, depending on the vulnerability, it's anywhere from $100 to $1,000 to $10,000. Being perfectly honest — we're a class, we're researchers — you can get way more money selling vulnerabilities to bad guys. But ethically I would never actually do that, because you're effectively causing harm, whereas here you're actually helping fix systems and maybe getting something out of it. Even if they just put your name on a Hall of Fame list, to me that's totally worth it.

The big thing here goes back to permission: you have to understand that they're giving you permission, and exactly what permission they're giving you. Those details are incredibly important. A lot of companies have bug bounty programs: Google, Facebook, AT&T, Coinbase, GitHub, Roku, Microsoft, K-Cow I think. American Airlines recently announced a bug bounty program where they'll give you miles for reporting bugs — which may or may not appeal, depending on how you feel about American Airlines. There are lists of these programs you can browse, but the important thing is reading
the terms of service, because here's what can happen — take an incident at Facebook. Facebook's bug bounty program gives you a completely separate Facebook: you can generate test accounts, and when you log in you're in a separate test environment, essentially disconnected from the normal Facebook. So you can create two accounts that aren't friends and see if you can get them to post on each other's walls or send each other messages, and that's how you test. They say: if you find a vulnerability, and you show a good-faith effort to use this test system, we'll pay you for the vulnerabilities you report.

What happened is that a researcher found a vulnerability that let him post on anybody's wall. Why is this a vulnerability — why is it a security problem? Privacy concerns — it violates the access control policy of the application, which says you have to be someone's friend to post on their wall. So he reported it to Facebook through the bug bounty program. Unfortunately — he was a Turkish researcher, and it's not unfortunate that he was Turkish; it's unfortunate because there was a breakdown in communication between him and the Facebook team. His English wasn't great, they weren't really understanding what he was saying, and after several back-and-forths the Facebook team still wasn't acknowledging his report. Another thing to keep in mind: when you run one of these programs, you get a lot of reports, and a lot of them are junk — "I can post on my own wall, that's a vulnerability!" No, you're allowed to do that. That happens all the time.
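The access-control rule at issue here — only friends may post on a wall — can be sketched in a few lines. The class and method names below are invented for illustration; the point is that the wall-post vulnerability was precisely a write that a check like this should have rejected, which is what makes it a security bug rather than a feature.

```python
# Minimal sketch of a friends-only wall-post policy (illustrative names; not
# Facebook's actual code). A wall-post vulnerability is a write that bypasses
# the check in post().
class Wall:
    def __init__(self, owner: str):
        self.owner = owner
        self.friends: set[str] = set()
        self.posts: list[tuple[str, str]] = []

    def add_friend(self, user: str) -> None:
        self.friends.add(user)

    def post(self, author: str, message: str) -> bool:
        # The policy under test: the owner and their friends may post; no one else.
        if author != self.owner and author not in self.friends:
            return False  # rejected -- the behavior a correct system must show
        self.posts.append((author, message))
        return True
```

In Facebook's test environment you'd exercise exactly this: create two non-friend test accounts and confirm that the post is rejected — if it goes through, you've found the bug.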
So it's understandable, I guess, that they brushed him off, but it's still unfortunate. He decided — and I think he was driven by a good motive — "I found an important issue; if I found it, it's highly likely a bad guy can find it, so I really want Facebook to fix this." Okay, you're this person, and you're not getting anywhere with the Facebook team. What do you do? Make it public? Demonstrate it on the real site? Which would you choose? He decided: I'm going to tell Mark Zuckerberg about it — by posting all the details on Zuckerberg's own wall, to get attention on the vulnerability. And he did just that. I think it was fixed within an hour. Facebook realized it was a real vulnerability, realized it was him, found the account, and started talking to him. But the researcher hadn't followed their policy — he had actively broken the security of the real system — so the bounty was ineligible, and he didn't receive one.

This is what it looked like — this is him writing on the wall: "Sorry for breaking your privacy and posting to your wall. I had no other choice... after all the reports I sent to the Facebook team." Written in English, not Turkish. You can imagine Mark Zuckerberg was pretty pissed. Here's the whole thing — he had tried to report it twice and gotten no reply.

So oftentimes you'll have to work very closely with security teams, and oftentimes it's frustrating. In the back of your mind you're thinking, "I'm doing you a service and you're just blowing me off, and I know about this really important vulnerability." At the end of the day, there are humans on the other end trying to do their job, so you have to be patient — and they should be patient too. And we saw that they fixed it within an hour once it hit Zuckerberg's wall, so
they clearly can move fast when they want to. It's tricky — I would never do what he did. I think you have to keep trying with the security team, keep saying, no, this really is a problem. Because what are your other options at that point? You could let it go: never tell anyone, stop wasting your time. I have an ethical problem with that personally. I found it, and I'm not that much smarter than everyone else, so somebody smarter and more malicious than me is going to find it too — and I knew about it first. It's like being a bridge inspector who saw a huge crack in the bridge and walked away.

"Now that the security team has confirmed this is not a bug, I'll just go and release it." So that's the other option: full release — write a blog post about it. What are the implications and problems there? People learn about the vulnerability, and bad guys can exploit it. Yes, that's the huge risk. And actually, this researcher kind of did the same thing. I don't know the exact visibility permissions on those posts, but I think anybody could probably view Mark Zuckerberg's wall and learn about this — I saw it on a news site pretty soon afterward. It's effectively the same: you're releasing the information publicly, to both good guys and bad guys, before the problem is actually fixed. Then you have to think about what happens if people learn it from you and start exploiting it in the wild — posting on your grandma's Facebook page or something. "That's the Facebook security team's problem." It is their problem — but you caused it. Or did you? Was it really you?
Nobody knew about it before; nobody was talking about it. Okay — Facebook has its open reports, and you can argue both sides. But there's always a chance that someone else knew about the vulnerability before you found it, and that set of people was probably already exploiting it while Facebook didn't even know. So by finding it and making it public, you've at least forced a fix: now they know, so now they'll fix it. That's another thing to think about with public disclosure: in some cases, sitting on a vulnerability can be worse — if people are actively exploiting it and you stay quiet, they'll keep doing it. The flip side is that when you go public, there's a window between when everybody knows about it and when Facebook can fix it, during which anyone can trivially exploit it.

This guy also did something really careless: he put a link to the details of the exploit in the post. He could have just posted, "I'm able to post on your wall — contact me." That's a good point — he could have raised attention that way. It probably still wouldn't have been within the policy, but at least he wouldn't have been publicly disclosing the details. Although one of the tricky things about security is that once people see that something is possible, they know where to look. They don't know exactly how to do it, but once you've directed their attention there, they're likely to find it again.

In hacker culture there's an organization called the Zero Day Initiative, which addresses some of these issues — because for a lot of companies, it costs more money to fix a vulnerability than to just leave it and accept whatever gets stolen, and
so with the Zero Day Initiative, the idea is: responsibly disclose the vulnerability, and then, after a set amount of time, publicly disclose it, and the company can do what they decide. In the Facebook case, remember, they fixed it in an hour.

So let's get into disclosure: once you find a vulnerability, how do you report it, and what are the ethics behind the different types of reporting? (I have some reservations about ZDI, actually — they have a strange other side to their business — but more on that shortly.) What are your options? Call the company? Probably difficult. Email? That assumes they have a security contact, which is actually the tricky part. If you just call customer support and say, "Hey, I found a cross-site scripting vulnerability on your search page," they'll say, "Who are you? Are you trying to hack me? What are you talking about?" — because they don't have the technical skills to evaluate what you're saying. And if you keep getting replies like "that's not a bug," it's better to find someone technical to talk to.

So option one: disclose to the company. Option two: tell everyone, which is the full disclosure approach — there are mailing lists, Bugtraq and a bunch of others, where people report vulnerabilities they've found. What are the downsides? You're giving it to malicious people as well. What's the good side? It's open knowledge; it should get fixed now, because the company knows too. If it's severe, you're getting results — in some sense you're making people safer. And option three: tell the company or group responsible for the software, which — and I'll say this is a loaded, self-flattering term — is called responsible disclosure, implying that everything else is irresponsible. So let's say this scenario happens. Let's
say it plays out a bit differently: in the Facebook case the company completely ignored the researcher, but suppose instead they acknowledge it. You report a vulnerability in Windows 10 to Microsoft, say, and they respond: "Awesome, thanks so much for telling us, we really appreciate it, we'll put you on our website of people who found security vulnerabilities, and we're working on a fix." Then 30 days go by and you hear nothing. Another 30 days: "still working on a fix." Three months: "still working on a fix." Six months: "still working on a fix." Do you believe them? What do you do?

Question: is it possible that, as a user, I could go to the police — because thanks to this bug my privacy is being violated? That's tricky; I actually don't know how to answer it. On one hand, I don't think the local police department would care at all. But there are entities that will help: CERT, the Computer Emergency Response Team — which we saw was created in response to the worm — will, I believe, help coordinate responsible disclosure. If you find a really bad vulnerability, you can tell CERT, and they'll help you contact the relevant companies. Because sometimes it's not just one entity: if you find a vulnerability in the Linux kernel, how many different organizations do you have to work with? The people developing the kernel, plus all of the downstream distros using that kernel who need to pick up the fixed version, plus anybody who's ever made a derivative of the Linux kernel that may carry the same vulnerability. Even just identifying the group responsible can be hard — and they may be regular developers doing this in their free time, not a company with a security team.

Yeah, that's tricky. There are some parallels, definitely. In one sense you're trying to change
thought and change people's attitudes or something here you've identified a particular problem that bad people could use maliciously and you just want to get that one thing fixed so I think you have to weigh the pros and cons it's hard to to completely draw those parallels I'm going to stop at an interesting point thinking about other revolutions or ways that changes the thought right what do you keep telling the authorities what you're listening to yeah yeah exactly so we're going to get to that so one thing the other thing you could do is kind of no disclosure what do you think would be the black market so what would be the black market what do you think we have some say to do a bad guy right doesn't matter where you know maybe you know them in person or maybe you know them just through an IRC chat and they're a bitcoin address or they send you big coins you know you could sell that information to somebody who you know is going to do malicious things with it what do you think would be the black market what do I mean by grey here competitors that's tricky you could sell it to competitors and they can do whatever they want with that information it's not your problem reporting it to a consentee or they can use it for a wrong purpose yeah so you could what kind of organizations public disclosure public disclosure is usually full so that means you just throw it out there and people have to either do something about it or not and at that point you've lost all market value because it's public knowledge so this is selling money you can auction you can what? an auction? who's going to buy it? who are the buyers? I don't know so in that case if you can find the buyers who? 
[Student with a legal background] You should think about it, because this is a potentially dangerous operation. Looking at it from a legal perspective, I would break it down like this. First of all, you have First Amendment protections, which will kick in, but there are limits. Public disclosure would most likely be covered under some kind of First Amendment protection, but that wouldn't protect you from civil liability: the Constitution only protects you from government action, not necessarily from private action. So if you harm the company in some way by giving the vulnerability out to the public, they could absolutely sue you for negligence or something like that. The same goes for selling it. If you sold it to someone who isn't necessarily doing criminal things with it, but something close, then maybe you only have civil liability. If they're committing criminal acts, then at the very least you could be a co-conspirator; intent gets a little weird there, and you pick up criminal liability on top of that. So yes, you could go to jail if they do that. There's a whole bunch of legal issues; it's very complicated.

So who is buying these things? There may be a team of people who buy vulnerabilities, work out protections for them, and then sell those solutions to companies. ZDI, the Zero Day Initiative, was owned by HP at one point, is that right? Somebody has since bought them. There are a lot of these companies, and what some of them do is buy your vulnerability, create a signature for it, put that signature in their antivirus engine, push it out to everyone, and then report the vulnerability to the vendor. That way they can tell all their customers, "we protect you against zero-day vulnerabilities," because they buy the knowledge and use it to update their own systems first. I believe that's how ZDI works. [Student] TippingPoint? TippingPoint?
I don't know what that is offhand; I don't know if that's ZDI specifically, but there are other organizations like that, some of them owned by antivirus companies, and that's exactly what they use the information for: improving their products.

What about Hacking Team? Has anybody here heard about the Hacking Team breach, the data breach? No? Hacking Team is, or was, an Italian company that provides "offensive security solutions," as they say. Part of what they provide is surveillance systems that they sell to governments. But to get a surveillance system onto somebody's device, you need a way in. So they bought vulnerabilities from people, sat on them, and used them in their product, which they sold to governments, local governments, national governments, to get onto people's phones and laptops and install their remote-viewing software. That's much more than grey; that's dipping into the black side of things.

[Student] Could you submit it to a bug bounty program and, at the same time, sell it to someone you know who's paying for it?

What would happen if you did that? Well, the company probably wouldn't know, but think about how black markets work. What's the driving force? You have a bunch of criminals, thieves, all of them; how does anybody do business with anybody else? Trust. Reputation. Actually, I believe some of the Hacking Team deals worked like this: they would pay part up front, and then after a year or whatever, if the vulnerability was still working and had not been disclosed, they would pay you an additional bonus. Because what you're selling is just information: if the work was duplicated, if somebody else happened to find the same bug, the buyer would be out that money. And if you got a reputation that somehow all the vulnerabilities this person sells us happen to go bad within five days, I'm not going to do business with you anymore.

Oh, and nobody has mentioned this: what about governments?
You could sell to the US government. I don't know exactly how you'd go about it, but I'm sure there is some way you can do that; part of the NSA's job is to find and catalog these exploits, which we found out from the Snowden leaks. What about other governments? [Student asks how these sales actually happen.] A lot of it happens through Tor and bitcoin and all that, because everyone is trying to stay anonymous, but that's just the communication medium; it's not really the important part. What's important is who's behind it and what they're doing with it. Say you sell to a foreign government, and let's make it foreign to all of us: what about, I don't know, Venezuela, or maybe North Korea? The North Korean government could pay a lot of money for your vulnerabilities. For me, that's the line; I don't want to be responsible for that.

So which way you go is very much a personal decision. The way I think about it, I believe you have a responsibility, and it's part of the ethical consideration of being a security researcher: you have to try responsible disclosure first, and document it. "I sent them an email here, I called them here, I talked to this person here." But if they don't fix it, if they're dragging their feet, then after 30 or 60 days, as long as you've given the company a heads-up that you're going to publicly release it, to me you're totally within your rights, and I'm fine with that. Because you have a responsibility to the company, but you also have a responsibility to the users of that software. If I'm the administrator of some PHP application at my company, and you found a very bad SQL injection vulnerability that lets people steal credit cards, my users are at risk, and I want to know as soon as possible so that maybe I can put in a fix, or maybe I can put in a patch.

[Student] I have a very different question; it probably comes from the Edward Snowden picture that Khalil had as his profile pic when he posted on Facebook. Let's say Facebook is somehow tracking your internet usage, or using your camera to capture video, and you cannot report this to the company, because the company is actually gaining from this unintended use of its own software, and you were not given permission to find this out in the first place. This is obviously a hack, or reverse engineering, that you did to find out that they're doing something they're not supposed to do; but they're profiting from it, so they're never going to fix it on their end. Who would you report this to, and how?

Then you have to make a personal decision. Me, I would probably do full disclosure; if you wanted to, you could release it more anonymously. Full disclosure, I think, would probably also be fine, but again, it's your personal decision. That's a tricky one.

We talked about this a little bit when we talked about Kevin Mitnick and all those other people: would you hire a hacker, someone who was convicted and went to jail for hacking crimes? [Student] Yes. Why? If you want to know how people are breaking in, find somebody who knows how to break in. You? [Student jokes that he's a magician.] You don't trust magicians? Sorry to all the magicians in here. Yeah, so they're not trustworthy in this sense: they've demonstrated that they break into things, and that they exceed their authorization. [Student] I would buy their information but not hire them. Buy their information but not hire them; that's definitely one option. [Student] It depends whether you have any sensitive data and what you do. What company doesn't have sensitive data?
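As a quick aside on the SQL injection scenario mentioned above: for anyone who hasn't seen one, here is a minimal sketch of how a single unsanitized input can dump every stored credit card, and how a parameterized query stops it. The table, column names, and card number are hypothetical, and Python's built-in sqlite3 stands in for whatever database the PHP application would really use.

```python
import sqlite3

# In-memory demo database with a hypothetical users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '4111-1111-1111-1111')")

def lookup_unsafe(name):
    # VULNERABLE: user input is spliced directly into the SQL string,
    # so the input can rewrite the query itself.
    query = "SELECT card FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def lookup_safe(name):
    # SAFE: a parameterized query treats the input as data, never as SQL.
    return conn.execute("SELECT card FROM users WHERE name = ?", (name,)).fetchall()

# The classic payload turns the WHERE clause into a tautology:
#   SELECT card FROM users WHERE name = '' OR '1'='1'
payload = "' OR '1'='1"
print(lookup_unsafe(payload))  # leaks every stored card: [('4111-1111-1111-1111',)]
print(lookup_safe(payload))    # [] -- no user literally has that name
```

The fix is one line, which is exactly why an administrator running a vulnerable application wants to hear about the bug as soon as possible.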
[Student] Possibly something like the Linux kernel: there's lots of sensitive stuff there, and they could slip code into the kernel, a backdoor they could get into, with a single commit. So it's a very difficult question. The pro is that this person is a legitimate hacker who can find problems the way the bad guys do. What's the con? He might not report a problem to you; he might sell it to somebody else. On the other hand, they've also shown that they're skillful and motivated. But would you hire a convicted arsonist for the job of fire marshal? Probably not, because you don't trust them to do that job. There's also the team side of it: the rest of your team members are law-abiding people who have never been in jail, and hiring a convicted criminal into your organization could cause problems with teamwork and morale. You have to assess their personality. Still, they do get hired all the time.

One important question I want to leave you with: how would you fire a hacker? We saw that Australian admin who got fired, who wasn't even a crazy hacker, and who caused the sewage spill. So how would you fire a hacker from your company? Is there anything anywhere in your company, a backdoor somewhere, that you don't know about? Because there are better hackers than you. The other thing is that offensive security skills don't necessarily translate to defensive skills. There's a lot of overlap, but being great at breaking in means you only need one way in; if you're defending, there's a lot of area to defend, and it requires a somewhat different mindset.

Great, I like this. We'll wrap up with this on Monday, and then we'll get to the next thing next time.