So, hi. First I'd like to say thank you to the IoT Village and DEF CON for having me here today. I really appreciate the opportunity to speak in front of a group of people that enjoy computer security almost as much as I do. So as you can see by the slide here, I'm going to be giving the keynote for the IoT Village, and the name of my talk is Friends, Not Foes: Rethinking the Vendor-Researcher Relationship. So first, to begin, a little bit about myself. Like I mentioned, my name is Rukram Gadi. I enjoy building and hacking web applications, reverse engineering mobile applications, most things to do with IoT security, and just casual reading. And I'm a security analyst at Independent Security Evaluators, which is the logo that you guys see around the room. So a little bit about the company that paid for me to be here. Independent Security Evaluators is based in Baltimore, Maryland, but they also have an office in San Diego, California, and they work all around the world doing assessments. What type of assessments? We do security assessments on, just to give a couple of examples, mobile applications, web applications, infrastructure, and native applications. If you have a custom DRM system that you want looked at, give us a call, or a protocol that you need reverse engineered, we enjoy doing that too. We do all of our assessments from the perspective of a highly skilled adversary, and most of our assessments are white box. We do black box assessments as well, and everything else under the sun. So here is the outline for the talk today. At the beginning, I'll be talking about the IoT Village, and I think that's important because I'm the keynote for the IoT Village and you guys are all here for the same reason. I'm going to be giving a CTF intro, where I'll give a couple of ideas on how to play the CTF if you don't know how, or if you want to start playing now. We'll be going over vulnerability disclosure models, what they are and what they mean.
The differences between full and responsible disclosure. Then we'll go over what I think researchers and vendors can do together to improve security, and at the end I'll have some time for questions. So the talk overall is going to be between 20 and 25 minutes, and at the end there should be five to ten minutes for questions. So where did the IoT Village come from? It all started off with some research into small office/home office routers and network attached storage devices: SOHO routers and NAS devices. Some security researchers found out that these devices are actually pretty easily hacked, as you can see if you've been playing the CTF, and they wanted to create something that would help develop a better methodology for hacking devices and bring to light how vulnerable these devices really are. As a result, we had the first SOHOpelessly Broken contest, which sounds a lot like an emo band, but it's not; it's a router hacking contest. This took place at DEF CON 22 in 2014, and 56 CVEs were brought to light for vulnerabilities in these devices. So if you're playing the CTF and you wonder how it all started, you can read the white paper at the URL on the bottom of the page. The URL is really long, and no one likes typing that out, so the Google-shortened version is at the end, between parentheses. So after the first SOHOpelessly Broken contest, we had the first IoT Village. The first IoT Village took place at DEF CON 23 in 2015, and the contest expanded from "SOHOpelessly broken" to just "hopelessly broken." What does that mean? It didn't only involve routers and NAS devices; it included a bunch of other stuff too, like cameras, multimedia devices, and a bunch of other nonsense that people decided to create. So the IoT Village itself started off with SOHOpelessly Broken at DEF CON 22, and it's happened at DEF CON 23, DEF CON 24, and this year at DEF CON 25.
It has included 30 devices in the CTF, 60-plus devices in the zero-day track, 38 talks, not including the talks from this morning or my talk right now, and 113 vulnerability disclosures. If you ever get bored of attending the IoT Village at DEF CON, there are a bunch of other conferences you can go to. Just to name a few, there's DerbyCon in Louisville, Kentucky, and RSA. At RSA it's different; there's no CTF, they just have people giving demos on how vulnerable devices are exploited, because RSA is a different environment. Then there are two BSides events, BSides DC and BSides Charm, plus CypherCon in Milwaukee, ToorCon in San Diego, and HackerLives in Puerto Rico. So the IoT Village itself is composed of three different parts. There's the stage, which is where I am right now, where people are going to be presenting their findings and the vulnerabilities they've identified in products. There's the zero-day track, which is at the back with a couple of devices; they have devices that haven't been hacked or don't have known vulnerabilities. You can responsibly disclose those vulnerabilities to a vendor, and then you can take part in the contest, where there's a payout for whoever finds the coolest vulnerabilities. And then there's the actual SOHOpelessly Broken CTF, which is what I think a lot of people come here to play. Having in mind that a lot of people do come here to play the SOHOpelessly Broken CTF, I do understand that not everyone who comes to DEF CON is a well-seasoned security analyst, and they don't really know where to start. Lucky for them, I came with a couple of CTF tips so that anyone can play if they want to. Tip number one is map the network. As soon as you drop onto the network, you're going to have to figure out where the vulnerable devices are and which ones you want to attack. Obviously, you need to figure out which ones are easier and which ones aren't. So take some time, map the network, identify the devices, and then move on.
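That first tip boils down to something like this minimal Python sketch, which checks which of a handful of common service ports answer on a host. The port list is just an assumption about what embedded devices typically expose, and this is only to show the shape of the idea, not a replacement for a real scanner:

```python
# Minimal TCP port sweep: try connecting to a few common service ports
# and report which ones accept the connection.
import socket

COMMON_PORTS = [21, 22, 23, 80, 443, 8080]  # FTP, SSH, Telnet, HTTP(S), alt HTTP

def open_ports(host, ports=COMMON_PORTS, timeout=0.5):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found

# Sweep your own machine first to sanity-check the scanner.
print(open_ports("127.0.0.1"))
```

Only point something like this at a lab network you own; for anything serious, a real scanner does this much better.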
I would recommend Nmap, but you can use whatever you want; you can ping the devices to identify what's on the network. That's up to you, I'm not going to tell you what to do. Tip number two is search all the surfaces. Some of the vulnerabilities are going to be in the device's web application, or they might be on a port on the device. So think about all the services running on the device, and think about whatever's exposed through the web app. Then you'll figure out how to become an administrator, get a shell, figure out what flag you need to get, and so on. My next tip is think outside the box. For a lot of these vulnerabilities you need to be creative. Some of them need some well-seasoned Linux kung fu to get into the device and then pull files out of it. Sometimes the tools that you want aren't necessarily there on your first look, but there are other tools that will help you out. Search for canned exploits is another one. A lot of people think that they need to create a zero-day in these devices right now in order to participate in the CTF. That's not the idea. For a lot of these devices, you can just find exploits online; Google around and you'll find one. Then just compile and hack. Sweet. And my last tip for the CTF is try not to get stuck. You might find yourself in a position where you want to hack a device that is a little bit more complex, but you don't really make any progress in the CTF. I recommend finding an easier device and moving on from there. The other thing is that there are other people here who want to play as well and aren't experienced. So if you do want to team up with someone, team up with them. If they say no, just back off and let them be, but if you do get to play with someone, have fun. For those of you that are still a little bit lost after that small intro, here are a couple of links for places where you can just read a blog post and figure out how to hack these devices.
If you want to take a second and snap a picture, you can. On the right-hand side, I have the Google-shortened URLs in between parentheses. I know that a lot of people don't like those because they're kind of sketched out by them, so for those people, I have all the real URLs at the bottom of the page where my Twitter is. Sweet. And before I finish this CTF spiel, I have one more thing to say, and that's that you got this. Like I mentioned at the beginning, a lot of people come to DEF CON already knowing how to do security, but a lot of people don't. If you're not getting into this and you don't really understand it, don't fret too much. I know you think you suck, but it's okay. We all suck. It's all about sucking a little bit less every day. Talking about things that suck brings me to my next topic, and that is vulnerability disclosure, and IoT devices specifically. So why does the IoT Village exist, and why is vulnerability disclosure being talked about so much? Well, IoT devices are finding their way into public places, so you don't really have a choice anymore about whether or not you're going to be interfacing with an IoT device. It's also becoming the case that if you want to buy a device, there's only an IoT option and that's it. Also, as time goes by, people are using IoT devices to further improve how they live their lives, and I think just shying away and saying, don't buy any IoT devices, is not going to be a good idea, because then there's not going to be a choice. It's either you buy a fridge that has an IoT function in it or you just don't buy any fridge, and that doesn't really make sense. Cool. So I have a simple scenario to show you more or less why I think IoT devices are important. Think about it this way. You walk into your hotel and the first thing that you see is an IP camera. This is pretty normal; you walk into a building, you see an IP camera, you're kind of used to it at this point.
As you walk into your room, you find out that the fridge is also an IoT fridge. Well, you know, I'm not really going to be using it; I'm not part of this network other than for checking my email. So that's fine. Then later on, you find out that the lightbulb is also an IoT lightbulb, and the shower head in your room is also an IoT shower head. So you get over it, and you go to your favorite security conference, you do your talks, you do everything. At the end of the day, you decide you want to go back. When you get back to your room, you find out that you can't use the shower head, because the server that controls the shower head is no longer up. It's no longer accepting any data, and you no longer get to take a warm shower. This is kind of a comical example, but it does show how this person who walked into their room didn't have a choice about whether or not these IoT devices were part of their life, because there's nothing illegal about having an IoT device. They may be insecure, but there's nothing stopping you from having these things. So, just to underline how much I think IoT devices are affecting our lives, I have a quote from McKinsey Global. McKinsey Global estimates that between 3.9 trillion and 11.1 trillion dollars are going to be spent on IoT by the end of the year 2025. Likewise, Gartner estimates that by the end of this year, 2017, 20% of companies are going to be using IoT devices to further their business initiatives. What that means is that companies are going to be using IoT devices to improve how they're doing shipping or marketing or whatever else it may be. So now that we've talked about why I think IoT is important, let's talk about vulnerability disclosure. What is vulnerability disclosure? Vulnerability disclosure is the idea that you're reporting a vulnerability to someone. Anyone, really; it doesn't matter who.
So if you just put it up on your Twitter page, or if you report it directly to the vendor, either of those is a vulnerability disclosure, but they're different types. For this talk, I want to point out that there are two things you can do when you find a vulnerability: you can either A, disclose it, or B, not disclose it. If I chose to talk about not disclosing it, this talk would be really short and we wouldn't really get anywhere, so let's focus on disclosing a vulnerability for right now. Vulnerability disclosure is divided into two main parts, the first one being full disclosure, which is the one on the right, and the second one being responsible disclosure, which is the one on the left. For this talk, I didn't want to tell people what to do, because I don't think that makes sense; I want people to make their own choice. So instead of calling it responsible disclosure, which sounds kind of loaded, I'm just going to call it coordinated disclosure, which is what the Microsoft Security Response Center calls it as well. To define these, let's first start with what full disclosure is. Full disclosure is the idea that whenever you find a vulnerability, you immediately tell everyone: hey, I found this vulnerability in this product by this vendor. The idea behind this is that whoever does full disclosure wants to inform everyone at the same time, so that the vendor makes a better effort to update their software and people can take measures to secure themselves. I don't completely agree with the idea of full disclosure, but that's me; you guys can make whatever decision you want. But the whole idea is that some security researchers believe that vulnerabilities are more of a PR issue than an actual issue for a company.
So if a company doesn't really take it seriously, they'll take it more seriously when their bottom line is affected, instead of just being told, hey, there's a vulnerability in this, you should fix it. The second thing that I want to define is coordinated disclosure, or responsible disclosure as some people refer to it. Coordinated disclosure is the idea that you're reporting to the vendor, coordinating with them: hey, I found this vulnerability and I think you guys should fix it. Here's a time frame; you need to fix it within this time frame, and if you don't, obviously I'm going to fully disclose it. Actually, the researcher is going to fully disclose it eventually anyway, but they're giving the manufacturer time to fix it so that not everyone is affected at the same time. So let's do a side-by-side to compare what we just talked about. With full disclosure, you're disclosing the vulnerability to everyone at the same time. With coordinated disclosure, you're disclosing the vulnerability only to the vendor at the beginning, and after they have patched it, you're going to fully disclose it to everyone. The goal of full disclosure is to pressure the vendor into fixing more quickly and to make everyone aware of the vulnerability so they can take measures to secure themselves. The goal of coordinated disclosure is to work directly with the vendor so that they can fix it, and then everyone can know about it. There are pros and cons here, and I know it's open to discussion. Obviously, full disclosure doesn't always work out well; some companies won't fix the issue and don't really care about it.
Other times, with coordinated disclosure, even after you coordinate with a company, they don't really take measures to fix it. Also, with coordinated disclosure, since you don't immediately tell everyone about it, there are people left unsecured, because they don't know about the vulnerability and can't take measures to secure themselves. Obviously, neither of these is perfect. With full disclosure, public disclosure of the vulnerability is at the researcher's discretion, so they can do it whenever; if they decided that they found the vulnerability today and they were going to publicly disclose it today, that's up to them. With coordinated disclosure, there's a time frame that was agreed upon with the vendor that they need to wait out. So let's go over some real-life scenarios and how this can go wrong. First, let's talk about what can go wrong with full disclosure. A quick show of hands: does anyone know what the Witty worm is? Has anyone been in security that long? I'm going to take that as a no. The Witty worm was an exploit that hit ISS systems. For those of you who, like me, didn't know what an ISS system was before doing the research: ISS (Internet Security Systems) products are pretty much firewalls, and here are a couple that were vulnerable to the Witty worm. The Witty worm exploited a buffer overflow vulnerability in these products, and how it happened is actually pretty interesting. It infected 12,000 computers within half an hour, and it happened because of a full disclosure. There was a security company that identified a vulnerability, and after they identified it, they immediately took to the internet and said, hey, there's a buffer overflow in these products, and it gives you root access to all these firewalls. After that, somebody thought it was a really good idea to create a worm and infect all these firewalls at the same time.
After it infected all these firewalls, it went on to infect other computers, and although it was 12,000 within half an hour, it ended up being a couple hundred thousand within a day. So this is just an example of how full disclosure can go bad. For the sake of argument, and like I said, I want people to make their own decision about what they want to do, I also have an example of how responsible disclosure can go bad. Does anyone know what Groupon is? Oh, come on, people, you know what Groupon is. Groupon is a website where people can get cheap offers on things that they want to buy or participate in. And does everyone know what XSS is? XSS is a vulnerability where you can inject JavaScript into a page, and through that you can get code execution in the user's browser. So BruteLogic, who is a security researcher, identified 32 instances of XSS in Groupon. When he identified these vulnerabilities, he responsibly disclosed them to Groupon through their responsible disclosure policy. However, he also tweeted about it shortly after, saying, hey, I found XSS in Groupon, but he didn't really give any details; he just said, I found XSS in Groupon, and that was it. Groupon decided that they weren't going to pay out the bug bounty anymore because he tweeted about it, even though he didn't give any details about the actual vulnerabilities. On top of that, both of them pursued each other legally, BruteLogic arguing, I want my money, and Groupon saying, I'm not going to pay you because you didn't responsibly disclose according to my standards. That's one of the problems with responsible disclosure: both parties need to agree on the terms before they start things off. In the case of BruteLogic, things didn't work out for him after he disclosed on his Twitter, even though Groupon did actually make sure that all 32 instances were secured.
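As a quick aside for anyone who hasn't seen XSS up close, here's a minimal sketch in Python of how a reflected XSS happens and how escaping fixes it. The function names are made up for illustration and have nothing to do with Groupon's actual code:

```python
# Reflected XSS in miniature: untrusted input pasted into HTML unescaped.
import html

PAYLOAD = "<script>alert(1)</script>"

def render_search_page(query):
    # VULNERABLE: the query goes straight into the markup, so a payload
    # like the one above runs as JavaScript in the visitor's browser.
    return "<h1>Results for " + query + "</h1>"

def render_search_page_safe(query):
    # FIXED: escape the input so the browser renders it as inert text.
    return "<h1>Results for " + html.escape(query) + "</h1>"

print(render_search_page(PAYLOAD))       # the <script> tag survives: XSS
print(render_search_page_safe(PAYLOAD))  # becomes &lt;script&gt;...: harmless
```

The fix is one escaping call at the point where untrusted input meets markup; that's the whole bug class in miniature.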
He did all that work and didn't get a payout, which isn't necessarily fair. So, like I mentioned, you can either disclose or not disclose, and a lot of people don't want to disclose vulnerabilities because they don't want to be in the position BruteLogic was in, where a company may pursue you legally. For those of you that are worried about that, there is a very good organization out there called the Zero Day Initiative that takes measures to help you as a researcher report these vulnerabilities to vendors, and tries to avoid all the friction that may happen in those cases. Likewise, for those of you that want to participate in the zero-day track today, you can. The zero-day track is just a bunch of IoT devices that the IoT Village brings here; if you want to participate, just pick one, hack it, responsibly disclose it to the company, and then you can participate in whatever prizes are going to be given out at the end of the CTF. So now I want to go into the last part of my talk, and that's how we can make things better. There are two sides to look at here: what can we do as a vendor to make things better, and what can we do as a researcher to make things better? From the perspective of a vendor, the first step is to identify the problems. The first problem is availability. Not all companies have a bug bounty program, and some companies don't even have a contact you can reach to say, hey, there's a vulnerability in here, you need to fix it. It's actually been the case for me: I've found cross-site request forgery and cross-site scripting in two web applications recently, I contacted the company maybe back in May, and they haven't gotten back to me yet. I'm stuck in this loop where I'm talking to IT and no one really knows what to do.
So hopefully, if you're a vendor and you want to figure out what you can do to help people out, start off by being more available. The second problem is combativeness. I think things have gotten better now, but it used to be that whenever you found a vulnerability in a product, the first thing the company would do was send off their lawyers to attack you. That's not necessarily fair; these people are trying to help you out. Think about that before you send the lawyers off. And the third problem is naivety. You might find yourself in a situation where you're talking to a company and they don't really understand security, and because they don't, they think this is not actually a problem and they just brush you off. So, to go quickly over some solutions to what I just mentioned: create a bug bounty program; have a contact for security-related issues; and understand that finding vulnerabilities in your product is a good thing, not a bad thing. So if a good guy finds a vulnerability in your product and discloses it to you, take some time to fix it instead of arguing with them. Train your team. I know I've talked to a couple of people here that do trainings, and the whole idea is that if your developers understand what security is, it's going to be easier for them to understand what a vulnerability exactly is. And become part of a security community. If you're a company that wants to be better, improve yourself by attending a security conference and accepting that there are security vulnerabilities in your product, whether you understand them or not. So next I want to go over what we as researchers can do to make things better. As a researcher, there are a couple of problems too, and the first is arrogance.
You might find yourself in a situation where you're talking with a security analyst who doesn't really care about whether or not a product is secure; they care more about getting recognition and knowing that people think they're important. That isn't a good thing, because then no company wants to work with you, because you're arrogant and it's not fun. The second problem is not understanding the complexity of a problem. A lot of times you'll talk with security researchers who find a security vulnerability in a product and immediately disclose it without thinking, oh, there are other issues related to this; I should responsibly disclose this to the vendor, because there are bigger problems that need to be taken care of. And I put "stop fear-mongering" with a question mark and an asterisk, because sometimes we may find a vulnerability in a product and say, this is a very big vulnerability in this product, but it's something like an information disclosure or whatnot. I'm not saying information disclosure is not a big issue, but it's not as big compared to something like RCE. So if you do find an issue, understand that the company needs to think about what they're doing as a business to improve themselves. Here are the solutions for what I think we can do as researchers to improve security. The first is play by the rules. If a company does have a bug bounty program and you want to get the payout, the best thing you can do is actually play by the rules. You can discuss with them what the rules are before playing, but if they do have something saying, hey, we're not going to give you a payout if you disclose this, then you have to understand that that's the case. The second is work with companies to understand how complex an issue is. Like I mentioned, you might find a vulnerability in a product and not fully understand the impact of that vulnerability on everything. If you work with the company, you can understand it in more detail.
And understand that different issues have different impacts; that's the whole information disclosure versus RCE thing again. If you do have something, talk with the company, understand what the impact is to them, and work with them, because you may understand things that they don't. That's that whole thing where companies don't really understand how security works, and you can work with them to improve that. So this is pretty much the end of my talk, but I want to do a recap to see where things are. We have the IoT Village today with a bunch of different events; play all of them if you want to, or just sit around and talk with people that have played them if you want to get a better idea. I'll be sitting around, or if you see people with purple shirts, the purple shirts with the logo, they actually work here; talk with them and see what kind of things you can do in the CTF to participate and get the most out of being at DEF CON. Vulnerability disclosures are definitely important, but you need to do them sensibly, and you need to talk with vendors so that you can improve security overall. And I think that we as researchers need to work with vendors, and vendors need to work with researchers, to improve security if we want to get anywhere. So thanks for your time; that's the end of my talk. We do have time for questions, and I really appreciate your time, thanks. So David, if you heard that, that's four shots, four shots, David, that you need to take. And yeah, we do have time for questions if anyone has questions, or you can come see me, I'll be sitting around if you want to talk about stuff. Thanks. You have a question? Go ahead. Can you speak a little bit louder? I'm sorry, there are people talking.
I haven't had the opportunity to talk with vendors about that; I've talked with them mostly about their security issues. Sometimes I've spoken with them where it's more tailored, and having an on-premises instance of something is a good example of how that would work. But as far as vendors not moving user information onto one server, having a centralized server versus being distributed, the best that I've seen is when someone has an on-premises instance of something; that's the closest you can get to that, I guess. Anyone else? Alrighty, cool. Thanks for your time, people.