Good morning everyone, and welcome to the Packet Hacking Village at DEF CON 25 in Las Vegas, Nevada. First and foremost, I would like to thank you for all your support and contributions over all these years. If it wasn't for all of you and your support, we would not be here; there wouldn't be such a big village and such a strong workforce. So we are very, very grateful for that. The Wall of Sheep's mission is, and has always been, security awareness. This year, the Packet Hacking Village has a number of events and learning opportunities, including the venerable Packet Detective and Capture The Packet. We also have a fantastic slate of DJs to keep the village entertained and lively. The WiFi Sheep Hunt and the honeypots are back. We are also very excited to have something new in store this year: hands-on workshops. There is a tremendous demand for training and continuing education in this field. We hope that you will take advantage of the many opportunities here at the Packet Hacking Village, and ultimately at DEF CON, to learn, to collaborate, and to be inspired. And of course, here we are at the speaker workshops. This is a very special year for this event: it is the anniversary of the speaker workshops at the Packet Hacking Village. We are going to kick it off right off the bat with a very, very special keynote. Dan Geer said in his Black Hat 2014 keynote, and I quote, "Cybersecurity is now a riveting concern, a top issue in many venues more important than this one." And he put it more bluntly at a talk last year, in 2016, in New York City, and I quote, "We are now in a national cybersecurity crisis." So what does this have to do with the keynote this morning? There are now many people starting to enter or study cybersecurity, which is very, very welcome for me to see.
However, the body of knowledge is now too deep and intimidating to grasp, and history is easily forgotten. How did we get into this mess in the first place? In May of 1998, a group of hackers testified in front of a panel of U.S. Senators. The hacker group was the L0pht, based out of Boston, Massachusetts. One of the members of the L0pht famously testified that the group could take down the entire Internet in about 30 minutes. The software and the hardware are not safe, and security is an afterthought. Their warning was a disaster foretold and, as we all know, tragically ignored. On a side note, please read the wonderful Washington Post article about the L0pht's warning, "A Disaster Foretold, and Ignored." Their warning and efforts also paved the way for many of our careers and livelihoods in this field, including mine, and are why many of us are here today at DEF CON. It is my fantastic honor to introduce Chris Wysopal. Thank you; it's a pleasure to be here speaking today, my first time speaking at the Packet Hacking Village. And since this is the 25th anniversary of DEF CON, and I've been coming since DEF CON 4 (I think I missed a few along the way), I thought, you know, why not have a bit of a retrospective about how we got here, and what an impact hackers had on the security industry, really making the security industry what it is today. It would be very, very different if it wasn't for the input of people who were doing independent research and taking that hacker mindset. So I'll start off where Ming started off. This is the picture of the gang of us testifying before the Senate in May 1998. You probably can't recognize me; my hair is a little different, and the goatee was a little bigger back then, but you probably can't see it. On the placards there in the front, you can see the hacker handles: Tan, Kingpin, Mudge, Weld Pond, Space Rogue. And the interesting aspect of this is, when we got invited to come speak, we said: well, we'll only do it under our hacker aliases. And they said, why?
And we said: because we have day jobs. Back in 1998, if you were called a hacker and you had hands-on-keyboard IT admin responsibility, they might say: we don't want someone like that with that kind of responsibility. You might lose your job. That was the mindset back then, because it hadn't changed yet. I don't think you'd have that today. If you said, hey, I call myself a hacker and I go to DEF CON, they'd probably say, hey, we need to give you more responsibility. But back then it wasn't like that. So how did we get to this point? This is around 1998; things had to happen before that, before we got invited. And what I like to say is: we made trouble. We got noticed. And how did we do that? We made people upset. We didn't break the law, as far as I can talk about today, but we did make people upset. People didn't like the idea of full disclosure. Microsoft really didn't like that. They didn't like us putting out documentation on how their password storage mechanism sucked. They didn't like that, because then all their customers, including banks and the DoD, would say: Microsoft, your software is insecure, it sucks, fix it. And that costs them money and makes them look bad. So by causing trouble and causing pain for other organizations, we actually raised a lot of awareness of the kinds of problems that were out there. And we kind of stumbled upon this; I can't say that there was a master plan. We just started taking our research and, instead of just posting it to Bugtraq and being very academic about it, we decided that we really should be more consumer-advocate focused. We should take the research and, instead of treating it as pure technical research, think of it as: vendors are doing bad things that are harming consumers. That really started around '95 or so, and it changed the framing from "I know how to break into your system" to "wait a minute, the vendors are shipping code that is causing this problem for their customers."
And I think that was a big paradigm shift that made hackers a force siding with the user, fighting for the user, like in Tron. And so by causing trouble, we got noticed. That is how the Senate came to us, saying: you guys have an angle on this that's different from what we're hearing from the big vendors. It's different from what we're hearing from the General Accounting Office, which was really just going through a checklist saying, you didn't configure this correctly, you didn't configure that correctly. That's how the government thought about it. And we basically said: hey, the vendors can do better. They can ship things that are secure by default. They can ship things that aren't as broken. The problem was, vendors didn't know how to do that yet. But we'll get to that. So now I want to go back to what computer security was before hackers came onto the scene and started to inject themselves into the dialogue. We had the famous Orange Book and the Common Criteria, which were basically long lists of things you should do to build a secure system. You need all these features and functionality: your encryption, your authorization, your logging and your auditing, and all of that. But they left out the fact that there could be mistakes in the software, that the engineers might actually make mistakes. They didn't even think about that; it's not in the Orange Book at all. Who has software that has bugs in it? Software has always had bugs in it. Yet this was completely ignored in the Orange Book. It's as if the software were perfect: you'd have great security if you followed the Orange Book, provided you made perfect software. Well, we all know perfect software doesn't exist, right? So that was just a gaping, huge hole, and hackers came on the scene and said: you guys have this gaping hole in the way you're building systems. You're building systems with bugs in them.
And we're going to exploit those bugs and bypass all of your controls. The other aspect of the way computer security was thought of was CERT. CERT was formed after the Morris worm in order to, basically, put information out there when an attack happened. The problem was, this was government funded and government organized, with a government mindset. And the mindset back then was sort of like the intelligence community's today: we'll hoover up a lot of information, we'll analyze it, and then we'll give out the information that we think you should have, right? So basically CERT wanted to be in control of vulnerability information, weaknesses in software, weaknesses in configuration. They wanted to control and manage how the public learned about weaknesses. So if you reported something to CERT in 1993, it might just go into a black hole, and they might stop communicating with you. They said: hey, we're the good guys, send the information to us and we'll make sure that security gets better. But what we found was that we would send them information and not know what happened to it. We didn't even know if they contacted the vendor. We didn't even know if the vendor patched the software, or what patch was available to fix the issue we might have told them about. So it was a total black hole, and it didn't work. That government mindset of closed, controlled information just doesn't work when it comes to securing our world. So in 1993, I was just starting to get involved with the L0pht, and I got introduced to this paper, "Improving the Security of Your Site by Breaking Into It." To me, this was a seminal paper. If you've never read it, go out there and read it, because it really codified how the hacker mindset can be used to secure systems. Basically, what this paper said was: why don't you study how your systems are being compromised?
Look at the techniques attackers are using to compromise a system, and then test all your other systems to see if they have that problem. It seems completely obvious today, right? But back in 1993, this was actually a new concept. People who did security for a corporate network did not do this. They didn't do this at all. So what Dan Farmer and Wietse Venema did was start collecting information about how systems were compromised. Was it a trust violation, exploiting trust between hosts? Was it a weak password? Was it a well-known bug? And they started to script these checks up and put them together. What they really did was make information security a participatory sport. There was a give and take between the attacker and the defender, where the defender could learn from the attacker. Studying the way attackers behave, simulating how an attack might happen, practicing red teaming: these were all things that hackers brought to the table. They made security an active, participatory activity, as opposed to a static configuration of features and technology. So around this time, we started to see the first hacker tools. We have so many tools today; it's completely awesome how many people write great tools and put them out there. But these were some of the very first ones. I was looking for what was the first hacker-mindset kind of tool, and I think it really was Crack, by Alec Muffett. He came up with the idea that you could test the strength of your passwords and see if they're guessable. It makes perfect sense, right? Why not see if your users are using weak passwords and tell them to fix it? But again, it took that hacker mindset to do that. The normal security person's mindset didn't go there. And this was a radical notion back then. It seems completely obvious today, but it really was a radical notion.
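The idea behind Crack fits in a few lines: hash every candidate guess with the same scheme the system uses and compare against the stored hashes. The sketch below is a toy illustration, not Crack itself; the `hash_pw` function is a made-up stand-in for the crypt(3) routine the original tool targeted, and the accounts and wordlist are invented:

```python
import hashlib

def hash_pw(password: str, salt: str) -> str:
    # Toy stand-in for the system's password hash (the real Crack targeted crypt(3)).
    return hashlib.sha256((salt + password).encode()).hexdigest()

def crack(shadow: dict, wordlist: list) -> dict:
    """Return {user: guessed_password} for every account whose password is in the wordlist."""
    found = {}
    for user, (salt, stored) in shadow.items():
        for guess in wordlist:
            if hash_pw(guess, salt) == stored:
                found[user] = guess
                break
    return found

# Made-up "shadow file": one guessable password, one strong one.
shadow = {
    "alice": ("s1", hash_pw("hunter2", "s1")),
    "bob":   ("s2", hash_pw("x9$Lq!7v", "s2")),
}
print(crack(shadow, ["password", "hunter2", "letmein"]))  # -> {'alice': 'hunter2'}
```

The defender runs the same guessing loop an attacker would, then tells the flagged users to pick better passwords before someone else does the same thing.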
It was so radical that Randal Schwartz, who was a system administrator at Intel, got fired for running Crack on the systems he was responsible for. He was responsible for the security of those systems. He ran Crack because he had read about the tool, he understood it, and it made sense: I'm going to see if any of my users are using guessable passwords. And someone caught him running this process, and he got fired and charged with a felony for running a security tool, a hacker tool. It wasn't considered a security tool. The next example is a program called SATAN, which Dan Farmer and Wietse Venema wrote. They were automating all the things they were learning from attackers. They had written about securing your network by breaking into it, and they said: why don't we automate that? Why don't we script it up? So Dan wrote this tool and released it, and he called it SATAN; it stood for Security Administrator Tool for Analyzing Networks, or something like that. And once he released it, he got fired from his job at SGI, where he was a programmer. They saw that he had released a hacker tool, and they just fired him. They said: we can't have a hacker working at our company who's releasing these bad hacker tools. This was really the first vulnerability scanner. Now it's a huge market; it's a standard thing that you do. But someone had to write the first one, and the person who wrote the first one got fired. So part of the theme I'm trying to bring across is that progress is made by people who are making trouble. They're doing things that other people don't like. It's not illegal, but they just don't like it. They don't like the idea that these tools are out there that can break into their systems, because they don't understand. Over the past 20 years we've gotten more people to understand, but there are still things we have to do, trouble we have to make, to keep pushing the envelope.
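The core loop of a SATAN-style scanner is simple: probe each host for services known to be risky and report what answers. This is a toy illustration, not SATAN's actual logic; the `RISKY_PORTS` table is a hypothetical stand-in for its real checks (NFS exports, rexd, trust relationships, and so on), and the demo probes a throwaway local listener rather than a remote host:

```python
import socket

# Hypothetical check list: a port plus why it matters. SATAN's real checks were
# far richer (NFS exports, rexd, sendmail holes, host trust relationships, ...).
RISKY_PORTS = {21: "FTP (cleartext logins)", 23: "telnet (cleartext logins)", 2049: "NFS"}

def port_open(host, port, timeout=0.5):
    # A TCP connect probe: connect_ex returns 0 when something is listening.
    with socket.socket() as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def scan(host, checks):
    """Return a finding string for every risky service listening on host."""
    return [f"{host}:{port} {desc}" for port, desc in checks.items() if port_open(host, port)]

# Demo: a throwaway local listener stands in for a remote machine's daemon.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen()
port = listener.getsockname()[1]
findings = scan("127.0.0.1", {port: "demo service"})
listener.close()
print(findings)
```

Run the same checks against every host you own, and you have automated "securing your network by breaking into it."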
And then Hobbit wrote Netcat in 1996, and if you haven't played with Netcat, at least read the README file. I'm sure people here at the Packet Hacking Village have played with Netcat, but the README is just an awesome look into the mind of someone deconstructing TCP/IP and sockets. As far as I know, no one has been fired directly for using Netcat, but things could still happen. And then on the information resources side, who was sharing the information about all these issues, about how attackers were working? It was the hacker community. It was Bugtraq, which started in 1993, the same year DEF CON started. And that really was the birth of a new form of information sharing that wasn't controlled by the government, wasn't controlled by CERT; it was people talking to other people. That's really what started the openness that we needed to secure our environment. The next step was hackers starting to write commercial software, hackers moving into the commercial space. I think one of the first pieces of commercial software written by a hacker was the Internet Security Scanner, by Chris Klaus. And we at the L0pht started to think: hey, we could make a piece of software that is targeted at system administrators. Most tools that hackers write for other hackers are command-line tools; Alec Muffett's Crack was a command-line tool. What we learned was that Windows NT administrators couldn't use that. So we open sourced our command-line version, but we built a version with a GUI so that actual Windows NT administrators could use it, and we sold that. And we said: hey, wait a minute, we can actually make money building the tools to help secure systems. So this again was another change, becoming more a part of the security industry of the time. And then around 1998, we started to really interface with Microsoft.
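What Netcat does at its core, open an arbitrary TCP connection, shovel bytes in, and collect whatever comes back, can be sketched in a few lines of Python. This is a toy stand-in, not Netcat; the demo talks to a throwaway local echo server instead of a real service:

```python
import socket
import threading

def nc(host: str, port: int, data: bytes) -> bytes:
    """Barebones netcat: connect over TCP, send bytes, return the reply."""
    with socket.create_connection((host, port)) as s:
        s.sendall(data)
        s.shutdown(socket.SHUT_WR)  # signal EOF, like nc closing its stdin
        chunks = []
        while chunk := s.recv(4096):
            chunks.append(chunk)
        return b"".join(chunks)

# Demo: a throwaway local echo server stands in for the remote service.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen()

def echo_once():
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(4096))

threading.Thread(target=echo_once, daemon=True).start()
reply = nc("127.0.0.1", server.getsockname()[1], b"HEAD / HTTP/1.0\r\n\r\n")
server.close()
print(reply)
```

Once you can speak raw bytes to any port, banner grabbing, protocol poking, and ad hoc debugging all fall out for free, which is exactly why the real tool earned the nickname "TCP/IP Swiss army knife."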
We really started to tell Microsoft what was wrong with their networking protocols, what was wrong with the way they were building things. And we came up with a notion much like Dan Farmer and Wietse's notion of improving your network by breaking into it. We said: well, wait a minute, what about improving the security of a product by breaking into it? Instead of looking at the whole network and finding the weaknesses in the network, why not isolate one piece of software? Put that software in a lab environment with your debugger, your packet capture, and other reverse engineering tools, and start to think: hey, we can actually secure software by trying to break into it. To me, this was the beginning of the process that a lot of companies use today to secure their software before they ship it to their customers. But at the same time, the security industry still hadn't woken up. If you were doing security in 1998 or 1999, vendors were still selling security features; that's the only thing they knew how to do to secure a network. Or selling compliance. And we all know that attackers just bypass all those things and ignore the compliance issues. An attacker doesn't look at the checklist and say, let's see if they checked this box; I'll attack based on whether they checked that box. That's just not how the hacker mindset works. Compliance is completely orthogonal to the hacker process. So in 2000, we decided to do what a lot of people called selling out, and this is what we did at the L0pht. We said: let's take all the things we're doing here, the tools we're building, the processes we're using to reverse engineer software and pen test networks, and see if we can do this as a business. And the headlines were pretty much: we're going to use good hackers to fight the bad hackers.
People like to put things into simple, easy boxes for the general public. But the idea was that we were going to take all the things we had created over the last few years and sell them as services to customers. I don't think we were the first company to do commercial pen testing, but it definitely put on the map that you could use these techniques to secure your network and secure your software. Some of the things we did were different from what other security consultancies were doing at the time, but seem totally normal today. This was completely new: we conducted our own vulnerability research. It was a consultancy that had people going out and pen testing, and people back at the office building tools and doing research. And we came up with the concept of: we can help you secure your application by breaking into it for you and telling you how we did it. And then other companies started to follow. You've probably heard of Guardent, which went to Verisign, and Foundstone, which went to McAfee. I think McAfee still calls their adversarial security services Foundstone. Shortly after that, we had all kinds of worms affecting Microsoft products: Code Red, SQL Slammer. It was pretty much a disaster. If you were a Windows NT or Windows 2000 admin in 2002, you were patching IIS on a monthly basis just so you didn't get hit by a worm. It was really bad. So Bill Gates came out with the Trustworthy Computing memo. He said: we're going to stop everything. We're going to stop building software the way we used to build it. We're going to train all of our engineers on writing secure software, because they had a book, Writing Secure Code I think it's called. We're going to train them on that, and then we're going to build secure software.
The problem was, they all read the book and they all got the training, and then they went back to their desks and they still didn't know how to build secure software. They thought they could do it, but they couldn't. They didn't have the capability of making sure that the software they were releasing was actually secure, because, while they had secure coding guidelines, there was nothing in their process that was adversarial in nature. There was no process of testing; there was no process of attacking. So they didn't actually know how to write secure software. So what did Microsoft do? They hired a company started by hackers to come and teach them how to secure their software. Microsoft hired @stake. I was there for months at a time, helping them completely rewrite IIS, helping them secure Exchange and SQL Server. We taught them how to threat model; they had no concept of threat modeling at the time. Microsoft likes to talk about how they have STRIDE and all these threat modeling standards. They learned it from hackers. Hackers taught them how to do that: what is the hacker mindset for trying to break into a piece of software? This came from hackers. They didn't understand exploiting heap overflows; their engineers just didn't get it. We had to teach them, because they weren't going to fix buffer overflows on the heap; they didn't think they were exploitable. They didn't have any concept of fuzzing software. Dave Aitel, who later founded Immunity, wrote the SPIKE fuzzer while on the Microsoft campus, because he wanted to fuzz the software we were testing for Microsoft. He said: let me write a general-purpose fuzzer called SPIKE. He later open sourced it. One funny aspect of this was that Dave wanted to do this using his Linux laptop, and Microsoft explicitly said: you can't bring a Linux laptop onto the Microsoft campus.
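The idea behind a fuzzer like SPIKE, generating malformed variants of valid input and watching for crashes, can be sketched as a simple mutation loop. This is a generic illustration, not SPIKE itself (SPIKE was a block-based protocol fuzzer written in C); the buggy length-prefixed `parse` function is a made-up target:

```python
import random

def mutate(data: bytes, rng: random.Random) -> bytes:
    """Flip, insert, or truncate bytes of a valid input to create a malformed one."""
    buf = bytearray(data)
    for _ in range(rng.randint(1, 4)):
        op = rng.choice(["flip", "insert", "truncate"])
        if op == "flip" and buf:
            i = rng.randrange(len(buf))
            buf[i] ^= 1 << rng.randrange(8)
        elif op == "insert":
            buf.insert(rng.randrange(len(buf) + 1), rng.randrange(256))
        elif op == "truncate" and len(buf) > 1:
            del buf[rng.randrange(len(buf)):]
    return bytes(buf)

def fuzz(target, seed_input: bytes, iterations: int = 1000, seed: int = 0) -> list:
    """Feed mutated inputs to target, collecting every input that crashes it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        case = mutate(seed_input, rng)
        try:
            target(case)
        except Exception:
            crashes.append(case)
    return crashes

# Toy parser with a bug: blows up when the length prefix overruns the payload.
def parse(msg: bytes) -> bytes:
    length = msg[0]
    body = msg[1:1 + length]
    if len(body) != length:
        raise ValueError("length prefix exceeds payload")
    return body

crashes = fuzz(parse, b"\x05hello")
print(f"{len(crashes)} crashing inputs found")
```

In Python an unhandled exception stands in for a crash; against native code you would watch for the process dying, which is exactly how fuzzing surfaced so many of those heap overflows.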
We had to get special permission from some senior vice president, and the laptop had to stay in one room, tightly controlled. They were afraid that if a little of that GPL software somehow got onto the Microsoft network, all their software would become GPL; their lawyers had them that scared. The GPL virus is what they called it, and they thought that using a Linux laptop was a really scary thing. Now that you can spin up Linux boxes at Microsoft, it's clear it took them a long time to get over it. We taught them how to find the attack surface. We would do threat modeling with the engineers, and they would tell us about all the inputs into the program and what other components it talked to. Then we would run something like Sysinternals Process Explorer, and we'd say: you didn't mention this named pipe; you didn't mention that it reads from this file. The people who built the software didn't actually understand how their software worked, because they were calling into libraries that were doing things they didn't understand. We taught them how to go to the ground truth, to actually look at what the software was doing, to take a reverse engineering approach. They could only think with a forward engineering process, so the engineers at Microsoft didn't even know how their own software worked. We taught them how to do that. It's really interesting that Microsoft eventually bought the company that made Sysinternals. And finally, look at the Microsoft SDL, which people hold up as the standard. They like to say, we have a great process for building software securely, and they were the main contributors to ISO 27034, the standard for secure software development. Where did they learn most of the stuff that's in there? They learned it from hackers. You could say that if your software is more secure today than it was 15 years ago, it's because hackers taught software vendors how to secure their software.
Basically, I take this to about 2003 or so, where the modern era of security and of building software securely started, leading down to where we are today. Penetration testing was once a scary thing that you hired hackers for and had to get special permission for; now it's a requirement. If you're in financial services, under certain compliance regimes, you need to pen test your applications. People have product security response teams to communicate with hackers. And developers are using hacker techniques as they build software, testing it to make sure it's secure. So today we really do have securing your product by trying to break into it. I'm also happy to say that bug bounties came along, because the idea of someone poking around in your software and informing you about it became a legitimate, good thing that companies wanted to reward. This is really full circle from 20 years ago, when people who reported bugs would be threatened, like Mike Lynn was threatened by Cisco, and Dmitry Sklyarov was, well, arrested when he came to DEF CON about 15 years ago, because he was going to talk about bugs in DRM. So we've come full circle, from people notifying vendors of vulnerabilities and potentially getting arrested, to today, when it's all a good thing. So now I want to take us to today. Where have we come? It still looks pretty bad out there. If you see what the Petya ransomware worm was able to do on modern networks, at huge companies like Maersk and FedEx and in the Ukrainian government, it's still pretty bad out there, right? The really crazy thing is that nation states are pretending to be hackers.
I don't know for sure whether a nation state released Petya, or NotPetya, whatever you want to call it, but I find it very interesting that, whatever their motives and whatever they were trying to do, they were trying to look like criminal hackers. So we've come to the point where even people doing offensive operations for governments want to look like us, like the people who go to DEF CON. They think that's how you hide yourself. And today, hackers are insiders. We could be sellouts, we could be insiders, whatever you want to call it. I put the names of some of the companies up here that people have disclosed. The three other guys up there, I know where they work, but they like to keep it a little bit on the down low; some people are still really paranoid. But hackers are insiders. Mudge worked for DARPA. He was a government employee, using his insider position to direct DARPA funds toward hacker-style projects; Charlie Miller and Chris Valasek hacking the Jeep was funded through DARPA. So now that we're insiders, we're in powerful positions to actually change things, and that's a good thing. It was basically Window Snyder and Katie Moussouris, who came from the hacker community, who over time got Microsoft to do a bug bounty program. And we all know insiders are more powerful, right? But we're getting old. We're old insiders. We need the next generation to keep doing what we did. We need people to keep making trouble. And I do still see people out there making trouble. Most people have probably heard of Justine Bone and her company MedSec, which caused a lot of trouble for St. Jude Medical by selling their research, showing how St. Jude's pacemakers had vulnerabilities that could kill someone, to a hedge fund that shorted the stock. That's making trouble.
That's trying to change things up. They didn't do anything illegal; there is a lawsuit out there, but I think they'll probably be okay. And they brought a new mindset into thinking about how we can fix the market around cybersecurity. For a long time, it's been a vendor-customer relationship where the vendor decides how much security goes into their products and the consumers have no visibility into it. It's kind of a lemon market. The average consumer has no power to make their world a more secure place. What MedSec did, by bringing in the investor community, was bring in another party: the owners of the companies, the investors who own parts of these public companies. It brought them to the table and woke them up to say: hey, maybe the security of these products is going to change how much money this company makes. So it's not only the employees of the company who are in control; MedSec brought in this other party, the investor community. And we're starting to see more of that now. The Ponemon Institute came out with a report showing how stocks dip after major vulnerabilities are announced. The study broke down how much the stock dipped and for how long. Companies that had a very mature security posture and a really good response saw their stock dip half as much and for half as long, whereas companies that were very immature from a security perspective and had a really bad response saw their stock dip twice as much and for twice as long. My read on that is that the investor community sees a vulnerability at a well-prepared company as a black swan: it happened, it happens to everybody, but it's not going to be a regular, recurring event.
But when it happens to a company that doesn't have a good response and can't talk about how they're securing their software, the investor community thinks: well, this is going to happen again and again. This is actually going to damage the brand and hurt the company. So we're starting to get the investors involved. It's early days, but I think that's an interesting new angle. So what I want to ask you to do is: make me nervous. I want you to go out there and do something. I don't want you to break the law, but I want you to do something that, after you do it, makes me say: I don't know if that's a good idea. That was my reaction when I first saw the MedSec incident. I said: really, are we going to go down this path? This is going to tarnish the research community. This is going to make hackers look bad. And then I thought about it and said: maybe we should give this a chance. Maybe we should let this play out. Maybe there's some big benefit down the road. Someone's trying something. But it made me nervous at the beginning, it really did. And so that's my call to you: make me nervous, because I've been doing this a long time, I've seen a lot, and I've made other people nervous. Do something that makes me nervous, and I think that will be pushing the envelope. I just want to finish with something a little more practical, a way that we can all influence the organizations we're in: the concept of security champions. I'm in the application security business; I help companies build secure software. Part of that is building a security culture. The security team can't do it all. Pen testers can't do it all; hackers can't do it all. We have to enlist the masses. We have to get them to understand the security mindset. So the notion of a security champion is, take the example of a company that builds software.
Have the development team, or every scrum team in that company, identify an individual who has the aptitude and the interest in maybe learning a little bit more about security. They're software developers; that's their day job. Get them interested in security. Get them some extra training. Do some reverse engineering with them. Do a capture-the-flag exercise with them. If you can get one person on each development team interested in security, it has a huge impact across the organization, because there just aren't enough security people, and there are a lot of people out there building software. And I think you can take this to other places to educate people. So I think that is something we all have a responsibility to do, where we work and in our lives: to educate. Because it can't be us and them. I know it's called the Wall of Sheep, and there are a bunch of sheep out there and then there are the wolves, right? But it would be great if we could get some of those sheep more educated. So with that, I'm going to end it there. There's my contact info if you want to contact me. I don't know, Ming, if I went over time, or if we have any time for questions. We do have time for a few questions, and if you have a question for Chris, please, we definitely welcome you, because this is a fantastic opportunity right now. The mic is all yours. Hey, good morning. Thanks for taking the time to speak with us, Chris. My name's David; I'm from New York. I wanted to ask you: I've read news articles that talk about how difficult it is for companies to keep themselves secure, that they're effectively gladiators in a pit going up against everyone out there, malicious actors and hacktivists and the like. Have you heard any theories or proposals that are convincing for how companies can share their data with one another?
To say, we found this vulnerability, this is something perhaps to look for in your work, and kind of help bridge that gap and help them unify?

So what you're really talking about is information sharing. And I'm a huge proponent of fully open information sharing. All these little pockets of groups, government groups or financial services groups, where they share stuff amongst themselves, I think that's incredibly inefficient. The idea of DEF CON, the idea of Bugtraq, was to make information sharing completely open. So I think we should have information sharing that is as open as possible. I don't care if the adversaries get the information too. As long as the defenders work on that information, it's better. The problem with open information sharing is that the adversaries take advantage of it and the defenders don't. If the defenders took advantage of it, it would be fine. So I'm all for as much open information sharing as we can have.

Thank you. Anyone else for questions?

Thanks, Chris. Do you think that yourself, other members of the L0pht, and other individuals from the early and late 90s received the reward and recognition from the security sector that they deserve today? Just for the record, I don't think so, unless you convince me otherwise.

I don't know. I'm a pretty humble guy. I don't need any more reward. I've been very successful in my career. I've met a lot of great people. I get to stand up here and talk with you and meet you. I'm fine myself. I don't think I need a Presidential Medal of Honor or anything like that. Some of the others probably would have a different answer.

So, really quick: what spaces do you think really need to be made nervous these days?

Could you repeat the end?

Sure. What places or what sectors do you think really need to be made more nervous these days?

I missed the last part again.

No problem. What places or what sectors do you think really need to be made more nervous right now?

Oh, okay. I was missing the word nervous. That's a good question.
I know the government is already nervous. I think it's the IoT sector. I don't think they're getting quite the feedback loop that they need. I think people are just shipping products and ignoring things. I think the noise level has to be raised there. We saw the Mirai botnet, which was sort of a wake-up call, but I don't think that changed how people are manufacturing these inexpensive IoT devices. A device might cost $50, but it causes a world of hurt. It's sort of like dumping your toxic waste in the swamp: just because something is cheap to do or make doesn't mean it doesn't have consequences. Let's make them nervous.

Maybe one more? Absolutely welcome.

Great, thanks for the talk. That was a great talk. I want to ask you: sometimes when you get software you want to reverse engineer, and you look at the agreement, one of the terms is do not reverse engineer. Do you observe it and sit on your hands, or do you just go ahead and do it anyway?

I would go ahead and do it anyway. I think you can't ask for permission; they're not going to give you permission. I would go ahead and do it anyway. As a company, it's something that I can't do at work; I need to follow those rules myself. But I don't think we would have gotten anywhere with making better products unless people did that. Again, that's along the lines of making people angry.

Can you share with us where Weld Pond came from, or is that completely taboo?

No, I can share. It's actually a fairly boring story, but quickly: the first BBS I connected to, I think it was called The Works, in Lexington, Massachusetts, had a requirement. Jason Scott, the guy who does textfiles.com, who you probably all know, was the sysop. The BBS was set up so that it said no real names. You can't use your real name on the BBS; you've got to put in a fake name. When I signed up for an account I was like, I don't want to spend time thinking of some cool name. All I did was look above my desk, where I had a map of the Boston area.
I just put my finger up there, and I put my finger on Weld Pond, which is a pond in Dedham, Massachusetts. That's where I came up with the name. I didn't think I'd be using it 24 years later. That's where it came from.

Thank you so much, Chris. Thank you so much. When we received the CFP, we were like, oh my god. It's a tremendous honor for you to give the keynote here at the Packet Hacking Village and Speaker Workshops. Personally, I cannot thank you enough for everything that you've done, not only for the community here but also for me personally and for my career. I'm so very, very grateful for that. So thank you for everything that you've done for the community and for me personally. A very special thank you to each and every one of you here who was at this talk. Just looking at this room right now, this is my 11th DEF CON, and it really, really does feel like going back to the old days, when I first got to a talk at the Riviera. I mean, DEF CON talks, if you can believe it, were just this big. It's just like going back to the old days. Thank you all so much. Thank you very much.