So the topic for today is how to disclose or sell an exploit without getting in trouble. I'm Jim Denaro. I'm an intellectual property attorney based out of Washington, D.C., and I focus my work exclusively on information security technologies. Before I went to law school, I used to spend far too much time just tweaking around in MacsBug on my PowerPC, and figured there was no better way to keep doing that than to do this. So here we go. Because I'm an attorney and this does have some legal component to it, although this is not really a law talk, I have to give the standard disclaimer that this presentation is not legal advice about your specific situation or your specific questions. Even if you ask me a question, we're still talking about hypotheticals. If we develop an attorney-client relationship, then we're talking about your specific problem and giving specific legal advice. This presentation alone does not create an attorney-client relationship. We can maybe do that later. So this is a quick overview of what we're trying to accomplish here in the next 20 minutes. I'm going to speak quickly to make sure we get it all in. We're going to cover the types of risks being faced by researchers, some risk mitigation strategies researchers can take to try to reduce those risks, some of your options for disclosing your vulnerability that may carry less risk, and then some of the risks associated with selling an exploit. The overall goal of this is to make yourself a harder target. If someone ever asks you, well, can I be sued if I do this or if this happens? The answer is always yes. You can always be sued by anybody for anything at any time. The only question is who's going to win. The goal is to make it more likely that you will win, which disincentivizes someone from actually suing you in the first place. So let's start out with some great examples of the kinds of research activities that might get somebody in trouble.
For example, some of these are real-life cases. You found out how to see other people's utility bills by changing the HTTP query string. I talked to someone at a party the other night who had done exactly that; he was wondering what to do about it. You discover your neighbor's Wi-Fi is not protected. How did you find that out? You broke the crypto that's protecting some media that you had. It's getting a little more serious now; there's actual money at stake. And maybe you wrote a better remote access tool; it sounds like you might make a lot of money. Many of the same risks apply, surprisingly enough, whether you're just looking at changing HTTP query strings or whether you're actually taking apart a DVD. So in general, we're talking about techniques. I've defined it broadly here: everything from a technique that might be used for a denial-of-service attack to something that's more akin to investigatory web browsing. Okay, first, when is there risk to a security researcher? There are three general areas where we see the risk starting to show up. One, there can be a threat of legal action before you go to a conference or make the disclosure. There are some examples listed here. Two, you might be the recipient of a legal action seeking an injunction barring you from disclosing something before a conference; now we've moved from mere saber rattling to an actual lawsuit being filed against you. And three, there's the possibility of a legal action being initiated against you after you make the disclosure. These are all real examples. Declan McCullagh of CNET and his colleagues have written some very interesting articles that go into more detail about some of these cases; I would recommend them to you most definitely. And you might notice that some of these seem to happen around Black Hat and DEF CON on a pretty regular basis. So that's when it can happen.
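The utility-bill example above, seeing someone else's record by changing the HTTP query string, is the classic insecure-direct-object-reference pattern. Here's a minimal sketch of both the flaw and the fix; all record IDs, names, and amounts are invented for illustration:

```python
# Hypothetical billing records keyed by the ID that appears in the
# query string (e.g. /bill?id=1002). All data here is made up.
BILLS = {
    "1001": {"owner": "alice", "amount": 42.50},
    "1002": {"owner": "bob", "amount": 97.10},
}

def get_bill_vulnerable(bill_id, requesting_user):
    # The flaw: the server returns whatever ID the client put in the
    # query string, with no check on who is actually asking.
    return BILLS.get(bill_id)

def get_bill_fixed(bill_id, requesting_user):
    # The fix: verify the authenticated user owns the record before
    # returning it; otherwise deny rather than leak someone else's bill.
    bill = BILLS.get(bill_id)
    if bill is None or bill["owner"] != requesting_user:
        return None
    return bill
```

With the vulnerable handler, "alice" can read "bob"'s bill simply by editing the ID; with the ownership check in place, the same request is refused.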
Your number one concern is typically going to be the Computer Fraud and Abuse Act. You've probably heard a lot about that lately, perhaps here or at other conferences. The main issue is that it prohibits access without authorization or exceeding authorized access. The two times you're likely to run into possibly exceeding authorized access, or acting without authorization, are in the investigatory phase of working on whatever technique it is that you've got, and when you actually create a tool that performs the technique, where the tool itself does the act that is prohibited. So, in light of how much everyone has talked about how vague this notion of authorization in the Computer Fraud and Abuse Act is, I've created a handy checklist to figure out if you might have a Computer Fraud and Abuse Act problem. Here we go. Are you connected to the Internet? Probably. Are you accessing a remote system? Probably. Do you have permission to access that system? This is a real hard question. It's really hard to know if you have permission. If you saw a banner go by that said you don't have access, you probably don't have access. But there are a lot of cases where it's not so clear. And that's where you have something like the Andrew Auernheimer situation, where he's querying a public-facing API on a repeated basis. No one asked him to do that, but there was no banner, there was no clear prohibition on doing it; it was a public-facing API, after all. So there is some real risk in figuring out whether or not you have permission. But that's really all it takes. Unfortunately, it's not just about what you do. The Computer Fraud and Abuse Act is also about what your friends do. And I believe the risk of being caught up in a conspiracy to violate the Computer Fraud and Abuse Act is most certainly enhanced by the prevalence of social media today.
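The "handy checklist" above is tongue-in-cheek, but it can be written out literally. Here's a toy sketch of the slide's logic; this encodes the joke, not an actual legal test, and the "unclear" answer, as in the public-API-with-no-banner situation, is where the real risk lives:

```python
def cfaa_exposure(connected_to_internet, accessing_remote_system, permission):
    """Toy version of the talk's checklist, not legal analysis.

    `permission` is True (clearly authorized), False (e.g. a banner
    said you don't have access), or None (unclear: a public-facing
    API with no banner and no invitation).
    """
    # If you're not touching a remote system over a network at all,
    # the access prongs of the statute aren't in play.
    if not (connected_to_internet and accessing_remote_system):
        return "low"
    if permission is True:
        return "low"
    if permission is False:
        return "high"      # access despite an explicit prohibition
    return "unclear"       # the Auernheimer zone: nobody said no, nobody said yes
```

The point of the slide is that the first two answers are almost always "probably," so everything turns on the third question, which is exactly the one that's hardest to answer.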
So if you're on Twitter or some other very easy-to-use social media platform, talking to your friends about how you might do something, or answering questions about how you might do a certain thing with a technique you've developed, you're starting to head down the road of conspiracy. Conspiracy typically does require an overt act in order to really complete the conspiracy, and typically just discussing something with someone is not one. But if you start providing technical support for something that someone else is doing, you're definitely increasing the risk of being caught up in a conspiracy to violate the Computer Fraud and Abuse Act, if not actually violating it yourself. So we've got some examples here where the Computer Fraud and Abuse Act has been applied. I think it's helpful to look at examples, because that's how we see how it's being applied, and we can compare what we're doing to some of the things that have happened to other people in the past and see how close those comparisons are. And since we're in Las Vegas, we absolutely have to talk about the case of Nestor. Nestor was really into video poker. He liked to play and play and play and play, and he got really good at it. He played so much that he discovered a bug in the video poker software that enabled him to play one type of game, bet a bunch of money on that game, and then switch to a different game, and a multiplier would be applied to his bet. So when he won, he got an enormous payout. And he figured out how to reproduce this bug very efficiently. So he was doing it, and his friends were doing it, and they were getting a lot of money. And eventually, as these stories always end, he got caught. And he was charged with, amongst other frauds, violating the Computer Fraud and Abuse Act.
And as we were just looking at the Computer Fraud and Abuse Act a few moments ago, we saw it's really mostly about unauthorized access or exceeding an authorization that you had. And it's hard to imagine how he exceeded anything: he didn't access the firmware, he didn't take the game apart, he just sat there putting money in and pushing the buttons on the surface of the machine. How you could exceed authorized access to a video poker machine is absolutely mind-boggling. But nonetheless, those charges were levied against him. Ultimately, the Department of Justice did not pursue those charges; they were dropped, and the government went ahead with other fraud charges. But for some period of time, he was facing Computer Fraud and Abuse Act charges for doing exactly that. It's also worth looking at the tragic case of Aaron Swartz, who spoofed his MAC address to download journal articles. That was a Computer Fraud and Abuse Act crime. Andrew Auernheimer allegedly conspired to run an automated script to plug in identifiers for iPads and get email addresses; he didn't even do it himself. He's doing several years in federal prison for that. It's also worth noting that the Department of Justice has said, in its manual on the Computer Fraud and Abuse Act, that conspiracy to hack a honeypot can violate the Computer Fraud and Abuse Act. There's really no end to the sorts of things that could possibly violate it. So you're looking at a situation where the Computer Fraud and Abuse Act almost acts as an ex post facto law: the Department of Justice is able to look at what you did after the fact, and if they don't like it, or they don't like you for whatever reason (maybe you're trollish for some reason), you are likely to be on the wrong end of a Computer Fraud and Abuse Act prosecution. There's also a civil cause of action provided by the Computer Fraud and Abuse Act.
So the company, whoever the target of the exploit is, can also pursue whoever accessed the system without authorization. The question then is, is there anything we can do to try to reduce our chances of being on the wrong end of this type of lawsuit? Well, let's take a quick look. We don't want to go too far into the statute, this isn't a continuing legal education conference, but let's just take a quick look and see if there are some key words we can at least identify. Here we have "whoever having knowingly accessed a computer without authorization." In another part of the statute, "whoever intentionally accesses a computer without authorization." So one of the things you can do is try to avoid unintentionally creating knowledge and intent. It's a little bit hard to do this for yourself, since you intend to do what you do, but at least you can avoid doing it in connection with other people. So, for example, I would suggest that you do not direct information about how to use some kind of technique to someone that you suspect, or have reason to know, is likely to use it illegally. Be careful in providing technical support for some clever new technique that you've developed. So my advice, if I were your lawyer: I would advise you not to answer that tweet, if someone's tweeting at you asking how to make something more effective, perhaps. This slide's a little more detailed, with some more approaches you might take. Don't provide information directly to individuals, especially if you're not sure who they are or what they might be up to. Consider just posting things on a website only. Do not post information to forums where you suspect, or that are known, to generally promote illegal activity. If you publish it on your own website, or you have control of the post, consider disabling comments so you don't have a situation of people discussing potentially illegal uses of your technique. And lastly, don't maintain logs.
So that's enough on the Computer Fraud and Abuse Act for now; there's not a whole lot you can really do about it beyond just being careful. Let's move on to the temporary restraining order. This is particularly timely, actually, because you may have read the story about the VW Group and the Megamos Crypto that was used in vehicle immobilizers. Some European security researchers had figured out how to bypass, or discovered a flaw in, the encryption used in the vehicle immobilizers in VW Group cars like Porsche and Audi and Bentley. They were going to present this at the USENIX conference in Washington, D.C. in a few weeks, and they got themselves slapped with a temporary restraining order preventing them from making the disclosure at the conference. How did this happen, and how can we prevent it from happening again? We've seen this here too: DEF CON and Black Hat talks have been stopped by temporary restraining orders. So let's take a quick look at the factors that courts look at when deciding whether or not to grant a temporary restraining order to prevent a researcher from disclosing information about a vulnerability. Number one, will the requester (in that case, the VW Group) suffer irreparable harm if the TRO does not issue? [The talk is interrupted for the traditional DEF CON new-speaker toast.] Who knows how this works? Is he a new speaker? It's really hard to get accepted to speak at DEF CON, right? You guys should all be thinking about how you're going to create talks, to eventually have yourselves up here, right? I've been drinking all day. A big round of applause for Jim. One more order of business: we need a new person whose first time at DEF CON this is. First hand up, right there. Red shirt, come on up on the stage. We've got a little extra; let's get one more. We're going to get two people. All right, first hand up over there. There we go. All right, cheers to our new speaker. Let's see if you can pick up where you left off.
We're going to work on new material for tomorrow. Thank you. Thanks, guys. All right, so that was great. Thank you. So, just a quick look at some of the factors the court is going to look at when deciding whether or not to grant the temporary restraining order, when someone like the VW Group wants to stop a presentation from happening at USENIX. Will the requester, the VW Group, suffer irreparable harm if the TRO does not issue? Pretty easy to imagine. You've got an embedded system and someone's figured out how to break it. It's going to be almost impossible for them to update it in any reasonable amount of time, and it's usually expensive. Probably some irreparable harm; that basically means that money isn't going to fix it very easily. So that factor goes in the VW Group's favor. Will there be an even greater harm to the researcher if the TRO does issue? What, your paper got delayed? You couldn't put in some part of the paper? Hard to see that as a huge harm to the researcher. We might feel really bad about it, but compared to the huge sums of money the VW Group is going to have to pay to fix this, it's not really going to look too good for the researcher. The public interest is just kind of a fun one, because we might think that, well, the public interest clearly favors disclosing the vulnerability so it can be fixed. The court, of course, is probably going to go the other way on that, and see the risk of all these Porsches and Bentleys and things being stolen as much greater: preventing that is much more in the public interest than having your really obscure crypto talk go forward. The last factor is the likelihood the requester will ultimately prevail. And this is really the one we need to focus on, because the VW Group has to have a cause of action. They can't just say, we don't like it. They have to say, here's why you need to stop: it's because you did something bad to us.
And in the case of the VW Group, and also in the case of the Cisco disclosure, the cause of action that made the best case was the use of copyrighted material. That was the hook that got the TRO to issue. So the obvious advice is to avoid the use of copyrighted material. If you include source code or object code from whatever it is you're working on, that gives leverage to whoever wants to stop you from disclosing it. There is a fair use exception if you use little bits and pieces of code, but that's a case-by-case analysis; you can't just say, well, this is going to be fair use. It depends on how much you use and other factors that are very specific to what's actually going on in your case. So just try to avoid it if you can. It may not be possible, but avoid it to the extent you can. Also avoid darknet sources for where you're getting this stuff. In the Megamos case, the court actually talked about the fact that the researchers obtained some information about how the Megamos system worked through some sketchy channels. I don't recall it saying exactly where they got it, but it was some sort of BitTorrent P2P type thing. It wasn't from the VW Group or Megamos. Another thing you want to do is be aware of pre-existing contractual relationships that you, as a security researcher, might have with the target of whatever it is you're working on. These contractual agreements could come in the form of terms of service, end-user license agreements, nondisclosure agreements, or employment agreements. What's that? Sure, so an end-user license agreement might very well have provisions that prohibit reverse engineering the software, for example, and that's something you might very well be doing as part of your exploration into your technique. That could give leverage to someone to try to stop you: oh, you've breached this. You know, nothing's for certain; it's just an argument that they have.
I mean, pretty much every piece of software you get is going to have some kind of license agreement, assuming you came to it legitimately, right? You've agreed to this license that may prohibit you from doing certain things with that software, and there's not a whole lot you can do about that, but you can at least be aware of the risk, if nothing else. How far you need to go in trying to mitigate the risk somewhat depends on the techniques you've used in your research. If you've done things that clearly look like some of the examples of what has gotten people prison time, that's something you need to be careful of, and maybe take more aggressive mitigation techniques in order to hide some of the information about what you're doing. So for example, in the Megamos case, if no one had identified that it was the VW Group whose crypto system had been compromised, the VW Group would not have been able to go after a temporary restraining order against the researchers. So perhaps there's an opportunity here for the conference-going community to create a track where people could present things that get a little asterisk or something next to them, and we all recognize that this is something that had to be kept quiet. It's sort of a confidential disclosure: trust the review board, this is going to be really cool, but we just can't really tell you what it is, because then you won't get to hear it. So maybe that's one approach. So I'd like to talk about some of the ways you might make a disclosure that are relatively less likely to get you in trouble. You can obviously disclose to the responsible party. That's what we'd like to do; that's what the responsible disclosure paradigm is all about. You found a problem with the system; you tell whoever is running the system.
This is actually, unfortunately, relatively high risk, and that risk scales with the questionableness of whatever technique you used to find out about the vulnerability. So if you were connected to the Internet, and you accessed a remote system, and you didn't clearly have permission, and that's how you did it, it may not be a great idea to go tell them about it, because if they don't like it, they've got an action against you. If you're inconvenient, that's a problem for you. You might think you're doing them a favor; they might not agree that you're doing them a favor. If you're able to submit it anonymously to whoever the vendor or responsible party is, that's great. It depends how good your OPSEC is, I suppose. A lot of times you think you're anonymous, but you're not as anonymous as you thought or hoped you were. So that's a risk in itself that you need to consider. If you submit to a bug bounty, presumably they've invited it, so maybe you're at less risk. You can disclose to a government authority, perhaps, though maybe you don't really believe it will ever get to the vendor. But again, if your techniques were perhaps questionable, you might not necessarily want to be submitting it to a governmental authority. You may have an interest in keeping your own identity anonymous; again, you can try to submit anonymously to the government, but I don't know how much we can really trust that anymore. Unfortunately, this is a legal talk, and you can almost never get a legal talk where someone will actually tell you something for sure, like: absolutely, 100%, you will not get in trouble if you do this. Fortunately, we are in a case here where there is one group of people who really don't have to worry about getting in trouble with the Computer Fraud and Abuse Act when they disclose a vulnerability, and here they are. You know it's okay to disclose if you're one of these people. Although she really should not have been hacking the palace computer.
We're not going to hold that against her. So we're thinking about ways we might be able to create opportunities for security researchers to make disclosures while keeping the risk as low as possible. We're working on a pilot program where attorney-client privilege can be leveraged to hide the identity of, and the techniques used by, a security researcher in making a disclosure. The concept works like this. The researcher discloses the vulnerability to a trusted third party, which would be an attorney, and only to the attorney. It's critical that this be a completely confidential disclosure, to maintain the confidentiality of the disclosure so that outside entities can't get to it. The trusted third party does not publish the vulnerability publicly on behalf of the researcher. However, the trusted third party does disclose the vulnerability to the affected party, whoever has this vulnerability. The researcher remains anonymous during the entire process. This is possibly of use if there's no better option. It's a little bit cumbersome, and there are some side effects, chiefly that the researcher remains anonymous and doesn't get public credit for the research. But it's a possible way for the researcher to be able to disclose and remain about as anonymous as one can possibly get. So this is a pilot program; we're currently working on it and working out the bugs right now. If anyone's interested in talking to us further about this, we definitely welcome your input; please see me afterwards. We should now turn, very quickly, to selling. Right now, there is no law in the U.S. that prohibits the selling of an exploit. That is a situation that is probably going to change in the not-too-distant future, but for now there's really not too much to worry about; unless, of course, going back a few slides, your techniques in developing your exploit have some problem, in which case you still have a problem.
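The pilot program described above is essentially a redaction protocol: the researcher's identity and techniques stop at the attorney, and only the vulnerability itself travels on to the affected party. Here's a hypothetical sketch of that flow; the real mechanism is attorney-client privilege, not software, and all the structures and field names here are invented:

```python
from dataclasses import dataclass

@dataclass
class ResearcherReport:
    """What the researcher tells the attorney, and only the attorney."""
    researcher_identity: str   # privileged: never leaves the attorney
    technique: str             # how it was found: also held back
    vulnerability: str         # the substance of the disclosure

@dataclass
class VendorNotice:
    """What the affected party actually receives."""
    vulnerability: str

def attorney_forward(report: ResearcherReport) -> VendorNotice:
    # The trusted third party passes along the vulnerability but
    # deliberately drops identity and technique, so the researcher
    # stays anonymous for the entire process.
    return VendorNotice(vulnerability=report.vulnerability)
```

The side effect the talk mentions falls out of the model directly: because identity never reaches the vendor, the researcher can't get public credit either.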
But the fact of the sale itself is not something that's going to get you in trouble. However, there's a focus on this market now, and here are some recent articles from May of 2013: the booming zero-day trade has Washington experts worried. And my favorite: the U.S. Senate wants to control malware like it's a missile. Stuff is dangerous. So every year Congress has to pass the National Defense Authorization Act, which sets the budget for DOD, and stuff gets stuck in there. This year, for 2014, the Senate version (it hasn't been passed yet; it's still in Congress) has provisions that seek to begin the process of regulating the sale of exploits. The House version doesn't have this; it's still just in the Senate, but I think this is where it's headed. The bill notes that the President shall establish a process for developing policy to control the proliferation of cyber weapons through a whole series of possible actions, right: export controls, law enforcement, financial measures, diplomatic engagement, and so on. The Senate Armed Services Committee, which had the bill before it was passed to the rest of the Senate, had some commentary on this. They referred to the dangerous software, a global black market, a gray market; it starts to look really bad, but they note that we need to have a carve-out for dual-use software and pen-testing tools. In Europe, the European Parliament recently passed a directive (they're a little bit ahead of us); this prohibition on the sale of tools that are, essentially, exploits will be required to be enacted by all of the member states in short order. The provision prohibits the production, sale, procurement for use, import, and distribution of these tools where they could be used to commit the enumerated offenses, which is pretty much all the bad things you can think of doing with a computer.
However, there's an exception for tools that are created for legitimate purposes, such as to test the reliability of systems, and it further notes that in order to violate this law you need to show a direct intent that the tools be used to commit some of the offenses. So in both cases, in the US and in Europe, we're seeing this trend, and it really comes back to the definitional problem: how do we define what an exploit is, and how do we make sure that legitimate tools can still be bought and sold? So this is kind of prospective; we don't know what the laws are actually going to look like, but I would start thinking like this. Think about dual-use tools. If you write something, don't put it together as the next greatest hack. You're creating pen-testing tools. This has gone on for a long time. If you look at software like Copy II Plus or Locksmith, the backup software, the manuals for this software have very elaborate disclaimers about how this is strictly to be used to back up your floppy; this is not to be used to make illegal copies. And that is really the conundrum, and I think that's where exploits will go. Some exploits will never be able to be dual-use tools, for sure; if you have the nuclear-missile equivalent of an exploit, it's hard to justify the pen-testing value of that. But a lot of tools will fall into this area, and that's where perhaps they should go. Some other things you might do: if you are selling, know your buyer, to the extent you can. I think regulation is just one bad outcome away. What's going to happen is someone in the US is going to sell an exploit, it's going to go through some channel, and it's going to come back and get used against some US interest. We may not hear about it; it may be kept secret. But this will happen, and then there will be a huge drive to stop it from happening again very quickly.
It's the same reason that, as soon as someone is murdered with a certain type of weapon, that weapon has to be banned. That's going to happen here. This country is very reactionary, and I expect that trend to continue. So maybe you can prevent that from happening: know your buyer. If you're selling something, don't sell it through a channel where it's likely to go to some country that's under an embargo with the United States. Maybe your best bet is just to sell it to the US. Ask for assurances from your buyer, so you don't have knowledge that it's going someplace it's not supposed to go. You might be lied to, and you can't control everything, right? But at least you can get an assurance that it's not going to be used in some illegitimate way. And also, you can always use disclaimer language. I have some nice examples of disclaimer language here. This huge chunk of text on the top is actually from a software product that many of you have probably used many times; it's good stuff. I've highlighted probably the best of the operative language in it. If you're selling something, be sure to use some disclaimer language that flows along these lines; that would help keep you from being charged with being complicit in any sort of illegal use to which the software might eventually be put. And lastly, I'd just like to highlight this bottom little paragraph, which is actually from the Apple iTunes Store; it's the end-user license agreement that comes with it, and it requires that you agree that you will not use these products for any purpose prohibited by United States law, including, without limitation, the development, design, manufacture, or production of nuclear missiles or chemical or biological weapons. Thank God. Words With Friends: that is dangerous stuff. So thank you for coming; this is my contact info. I think we have some time here for questions, so if people want to line up, I'm happy to entertain them as best we can. There are definitely free speech issues, especially in the
temporary restraining order context. Oh, Second Amendment? Sorry, a Second Amendment challenge; come see me after about that. Question back here: what about using a corporation to limit your liability for disclosure or selling? Has that been utilized? Corporations can be held liable in many cases, in fact even under the Computer Fraud and Abuse Act. It hasn't happened yet, but a corporation could be held liable. Question regarding full disclosure versus responsible disclosure: so when we do it, we do it via responsible disclosure. We contact the vendor, we give them 30 days, and we tell them our intent to publish. And we publish everything, so the actual vulnerability and how to do it, for people to replicate and do whatever they want. In most cases the vendors get a hotfix out within a week, and if within 30 days they provide the hotfix, then we publish it and say: to fix it, install hotfix whatever. Sometimes vendors will say, we need more time; maybe we'll negotiate a couple of days. But sometimes they'll say, we're not going to fix it, and you can't publish it. I won't explain what we do for that. But Google recently published the fact that they plan to disclose vulnerabilities within 7 days, to have a 7-day turnaround. So what happens if a company like Google, I don't want to use the word threatens, but intends to publish a vulnerability within that 7-day turnaround period, and the company says to Google: don't, and if you do, we'll sue you? What happens in Google versus that vendor company? Well, Google is at risk if Google has some kind of obligation not to publish; it would depend on the specific circumstances. But in this case, if no law has been broken, then Google could publish without any discussion. In the case of me, for example, where I contact a vendor and say, I've got the following 10 vulnerabilities which I plan to publish, and they come back and say, if you publish those we'll sue you: is it the same thing that happens when Google says, well, we're not going to give you 30 days, we're going to give you 7
days, and the company comes back and says, Google, we're going to sue you if you publish? It doesn't carry the same weight when they're trying to sue Google as when they're trying to sue me, for example. That helps as well. Man, that's the unfortunate part. So is it just a case of how good your legal team is? Exactly. How much do you charge? Where do you want to meet them at? Oh, let's go wherever you want to meet them at. To carry on: this talk is over with; we've got to get the room ready for the evening now. We'll answer the rest of your questions in the hallway; unfortunately there's not a Q&A room, because it's been disassembled too. Thank you.