Thanks, everyone, for coming out. You guys having a good time? Yeah. Awesome. I appreciate you guys coming out to the last talk tonight. So my name is Scott Erven. Here's a real brief background: I'm an Associate Director at Protiviti, and I focus solely on healthcare and medical device security. Also a security researcher. I've spent 15 years in infosec and IT, and five of that directly working for healthcare organizations running security programs. So, show of hands, does anyone work for a healthcare provider? All right. Anyone on the manufacturer side? Awesome. Great. So yeah, I definitely come with the industry background as well, and hopefully some of that will reflect in my understanding of the situation. I've spent about three and a half years now researching medical devices. Did anyone come to the talk Shawn Merdinger and I did last year? A couple folks. Awesome. We're going to recap it really quick this year. But first I'll go ahead and let Mark introduce himself quick. Awesome. I'm Mark Collao, senior consultant with Protiviti. I mostly do pentesting, or anything offensive security, so you could say red team. I have a pretty good interest in offensive PowerShell stuff. I like bots, botnets. I've been running honeypots for the past five years, and I've been in security for about five. Last fun fact: this is my first talk ever. So thanks. Awesome, right? Come to DEF CON for your first one, do it right? Yeah. All right. So really quick, here's what we're going to cover. First off, why are we looking at medical devices? Why does it matter to us? Why are we passionate about it? Secondly, we'll cover really quickly phase one, where we started about three years ago: the general high-level security hygiene and vulnerability issues that we were seeing. Then a recap of the talk Shawn and I did last year on internet exposure and healthcare organizations that were vulnerable to direct attack and pivoting into medical devices. 
And then we're going to look at how you get admin access to these devices. Finally, Mark's going to talk about some of the honeypot research we've been doing over the last six months to see if these attacks are intentional or unintentional, and some of the data we're seeing hit these emulated honeypots we've got set up. And finally, just quick diagnosis and treatment plans: what can you do to mitigate some of these risks if you work in industry or you're a consultant for healthcare organizations, that type of thing. So why research medical devices? Who in here is reliant on a medical device every day? Anyone? Yeah, a few. Diabetics. Yeah, absolutely. So there's definitely that, right? That's a very personal impact for folks. And if not, probably many of you either know a family member who's reliant on one, or you've been to the hospital at some point in your life. So there's a real personal connection. Also for me, because of my background in healthcare, that's where I got a passion for this, right? I started to see medical devices becoming connected, and it wasn't just a patient privacy issue; it was also a patient safety issue. So that's what a lot of the research has been focused on: patient safety versus patient privacy. I will touch on some patient privacy stuff that I'll call out independently. Really quick, just to set the stage so everyone has the right mindset here: I often get challenged with the question of what type of person is going to attack these devices. And so I think it's really important to establish up front that malicious intent is not a prerequisite to a patient safety issue. A good story here, and actually Sean, who's in the audience, found this story, is of two individuals who presented for treatment in Austria for gunshot wounds. They were hooked up to an infusion pump, a PCA model, which is the self-controlled clicker. 
They didn't feel that their pain management was under control. Nursing staff felt otherwise. So they did kind of what I'm going to show you: they went online, found service technician manuals, found hard-coded credentials, got into the device, upped the doses, and suffered OD effects. So it could be a patient themselves that causes an adverse event or a patient safety issue. Don't think that it's just malicious intent. We'll show you some of that when we get into the honeypot research too. So really, what are we doing? One, discover patient safety issues, not HIPAA focused, not privacy focused. Help equip defenders. That's really important for us: to give you information that's meaningful so that you can go and protect your organizations. Secondly, alert the affected parties. So we've been working with ICS-CERT and the FDA. Actually, was anyone at BSides? We did a quick update on the state of medical device security there, and we had Dr. Suzanne Schwartz, who leads this effort at CDRH over at the FDA, call in. They've been great. We've had a lot of progress over the last year. Did anyone see, we had a first ever last Friday, where the FDA actually put an advisory out to healthcare organizations on some research that Billy Rios did on Hospira pumps, and actually alerted healthcare organizations and advised them to pull these things out of production? That was a massive win. It was precedent setting. And not only was it precedent setting for the FDA to do that, they actually did it before harm was proven, which is very important. Usually recalls happen after a patient safety issue has occurred. So that was a big win. And then, inoculate against future issues. Yeah, big round of applause, massive win. The FDA has been really great to work with. So phase one, really quick: what did we see at a high level? Was it all these kinds of crazy attacks? Absolutely not. 
This is what we were looking at: 90% of it was security hygiene items, right? The big three, we like to call them, for those folks working on security research on this issue. One: default and hard-coded admin credentials. That's what I'm going to talk about later on today. Two: known software vulnerabilities and the inability to update or patch devices. Legacy systems are heavily utilized in healthcare, you know, many XP SP2 boxes, Windows 2000 boxes, those types of things that are not patched. And three: unencrypted data transmission. This is a big one that at first you may say, okay, this is one of those patient privacy things, because you're pushing PHI data. But as these devices become connected, we're using what's called medical device integration. We're taking these devices, connecting them, and through web services we're pushing real-time data for things like ventilators and anesthesia carts, critical systems, down into the medical record. Well, a lot of times that transit doesn't use encryption. So you can sit man-in-the-middle, you can see the XML files, most of this stuff is just XML files that are calling and pushing, alter that, replay it, there's no app sec on them, and ultimately alter what ends up in a medical record. So then it becomes something that may be a patient safety issue. There's been lots of research on what altered medical record data can potentially lead to, and the results of that research show a very high probability of misdiagnosis, mistreatment, being prescribed the wrong drugs. So phase two. This is a quick recap of one of the issues that Sean and I uncovered and presented on last year. We did some research through Shodan, which John Matherly runs; if any of you aren't familiar with it, it's an absolutely awesome tool to utilize for research and finding things. 
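The integration-traffic risk just described can be sketched in a few lines. This is a minimal illustration, not a real integration format: the XML schema, tag names, and device name below are invented, but it shows how little tooling an attacker sitting man-in-the-middle would need to alter a value in an unencrypted, unauthenticated message before replaying it.

```python
# Sketch: altering a parameter in an intercepted device-integration message.
# The XML schema here is invented for illustration -- real integration
# engines use formats like HL7, but the weakness is the same: if the
# transport is unencrypted and unauthenticated, nothing stops a rewrite.
import xml.etree.ElementTree as ET

def alter_dose(xml_message: str, new_dose: str) -> str:
    """Rewrite the dose value in a captured message before replaying it."""
    root = ET.fromstring(xml_message)
    dose = root.find(".//parameter[@name='dose']")
    if dose is not None:
        dose.set("value", new_dose)
    return ET.tostring(root, encoding="unicode")

captured = (
    "<observation device='infusion-pump-01'>"
    "<parameter name='dose' value='2.0'/>"
    "<parameter name='rate' value='10'/>"
    "</observation>"
)
replayed = alter_dose(captured, "20.0")
```

With no transport encryption or message signing, the receiving system has no way to tell the replayed message from the original, which is exactly why this becomes a safety issue rather than just a privacy one.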
So I was sitting on the phone one night with Sean, and I was like, Sean, check this out, I did a search for anesthesia and got all these returns, and it's clearly not a medical device, I know that. The only indication it was a medical device was that it was running XP. But other than that, I was like, Sean, this isn't a medical device, what's going on? And what we found is that this external system had a misconfiguration: it had SMB open with anonymous read into the organization, and it was leaking intelligence on all their hosts, medical devices, supporting systems and applications. It gave us all this stuff that is a treasure trove for an attacker. So we found this huge healthcare organization, over 12,000 employees, that exposed intelligence on over 68,000 of their systems. It wasn't just medical devices, it was their entire network. Sean's and my research was really about medical devices, so we scraped that out, but it was everything: their financial systems, domain controllers, every single device. It also exposed third-party organizations that are traditionally contracted in healthcare for services such as laboratory services or radiology imaging reading services. So that's the type of information we were getting out of there. So did we just find that one organization? Did we just randomly have the one organization in the world that we happened to stumble upon? No. We found hundreds. These are the Shodan queries we were doing. We were doing org searches on SMB, and you can see here just key indicators like health*, so anything with health in the org name; those are some of the hits. Once you started changing those to specific terms like podiatry, pediatric, neurology, you ended up with thousands of organizations that had this misconfiguration exposing this intelligence on their medical devices and supporting systems. 
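Those org searches are easy to reproduce. Here's a sketch that just builds the query strings; actually running them would need the `shodan` Python library and an API key, which the comment hints at, and the keyword list is illustrative.

```python
# Sketch: building Shodan org queries for exposed SMB, like the searches
# described above. Actually running one would use the shodan library and
# an API key, e.g.: results = shodan.Shodan(API_KEY).search(query)

def smb_org_queries(keywords):
    """One 'port:445 org:...' query per healthcare-related keyword."""
    return ['port:445 org:"{}"'.format(kw) for kw in keywords]

queries = smb_org_queries(["health", "podiatry", "pediatric", "neurology"])
```

The point is how low the bar is: a handful of org-name keywords and one port filter is all the recon tooling this class of exposure requires.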
And on top of it, it ended up being a system that was XP SP2 and vulnerable to MS08-067, which runs on what? SMB. So you had a direct attack vector, and you knew exactly what systems to pivot into, what those devices were, with host names and doctors' names associated with them, office and floor location: oh, this is in OR 7, this is this type of anesthesia cart. Very detailed information. So what type of systems did we find in this one organization? Here's a recap of the number and types of devices that you could directly attack and pivot into from this external system. This organization had a very large cardiology institute, so 480 cardiology systems, infusion systems, MRI, PACS; PACS is used for image storage from radiology systems. We'll talk about this a little bit later. I think this is a good one, because I think a lot of attackers are actually going into these systems, which have pretty poor security on the back end. Through the application, you almost always get prompted for a username and password. But on the back end, if you hit that thing, very rarely do they have NTFS permissions set up or account passwords set up, or they use a hard-coded credential that you can use to get in and grab some of that PHI. So I think attackers know this; I think this is one of the ways a lot of PHI is getting leaked out of healthcare organizations. So I'll turn it over to Mark to talk about attack vectors with this information real quick. Awesome. So now we have all that information, okay, we can do Shodan queries, we can do a lot of passive recon with open source intel. How can somebody, for example, take advantage of this? The first attack vector would be physical. Through the SMB example from before, we can start querying Active Directory on the back end, we can start pulling information on users, their roles in the organization, all that fun stuff, computers. 
Also where these computers are, where they sit, and most importantly, the lockout policy. So, whoever's been in a hospital, I mean, it's pretty lax security: you walk in, you get a little badge, and you're pretty much free to roam anywhere. So it's pretty trivial for an attacker to just roll up and walk through. You already know what floor it's on, you know the doctor's name, you know the computer name, so find their office. They're probably not there most of the time, and you can stake them out, and then you sit down at the computer. You know there's no lockout policy, so hack away. Just start brute forcing; next thing you know you're in, you're on a doctor's console, and you've got a wealth of information. The second attack vector would be phishing. Again, with this whole pretext, we know the OS, we know the IP, we know the users' names, so you can easily build a phishing campaign against those users or computers. Since we know the OS, we can start crafting pretty specific payloads. We know the underlying system, so it's pretty easy, especially if it's Windows: start Excel attacks, and you just keep going from there. And the next one would be pivot. So we had physical, we had phishing, okay, but why don't we just go for the easy one? We know that SP2 box is vulnerable to MS08-067, so why don't we just lob an exploit over the net and see what happens? Nine times out of ten, maybe it'll crash, but what if it works? If it works, you've got a foothold there, a foothold within the hospital organization, or whatever research facility it may be, and you can start pivoting from there, and you can take over the organization that way. Alright, so let's get into the super awesome credentials that are super hard to crack that you all came to see. So phase three is: well, now we know the vulnerabilities in these systems, we know you can reach them directly from the internet. So what would it take an attacker to get remote admin access on medical devices? 
So that's what we're going to talk about quick. Before I do that, I just want to go over the disclosure timeline. I also want to note, before I go over the disclosure timeline, that all this information was actually publicly available on GE's website. It didn't require bypassing any type of barrier, any type of authorization, or going into any type of account. It was publicly available. Obviously, since there's a lot of information, I chose to be responsible about it. We contacted CERT, we contacted GE. You can see back in August last year is when this was initially disclosed. They responded very quickly. September 16th, there was an additional disclosure. So the first one in August was a disclosure of about a hundred sets of credentials, administrative access. And then by September 16th I'd had some more time on my hands and decided to send in another 30. December 3rd is when we got confirmation from CERT that GE had closed their investigation and closed the issue. So we'll talk about that here. So what was that response like? Well, the security team over at GE, I want you to know, is actually doing a very good job. They are much more mature, and they've put a lot more resources into this recently. Does anyone know Mike Murray? He comes from the hacker community. He's building a team over there. They're doing good things. And this is a systemic issue. So although I'm showing specific issues, this is across the board. We could grab any medical device manufacturer with similar results. I want to make that clear: it's not just a GE issue, and they have been very proactive about it. Now, GE, after the investigation, their response was that all of these credentials are default and they're not hard-coded. So I want you to know that's the response they gave. I'll talk a little later and show some potential contradictions with that through their documentation after we get through these. 
So let's sit back and enjoy the show. This is going to take a while. I thought it was a really good idea, like, hey, it would be awesome to drop 130 CVEs in one talk. And then I realized we've got to go through, like, 40 slides here really quick. So stay with me; I'll point out some of the highlights. So here are the first ones. What I've got on here is a nuclear imaging system. Up top, what you'll see in each one of these slides is the CVE. You might quickly be like, oh, CVE-2006? Scott, you've been sitting on this thing for nine years? No. No. How that works is, like I mentioned, these are publicly available documents. So when that's sent in, you look back at the earliest iteration that's publicly available, and you reserve a CVE back to that date. Why I also put these in here is, as we go through these, I'm not going to point them out, but I want you to note that some of these are legacy. I started at 2000 and newer. You'll see some of them are new, 2014; I think maybe the next one, or a couple in, is 2014. It was updated documentation as of a month prior to me coming across this information. CVSS scores: the majority of these CVEs were published two days ago. They hit the National Vulnerability Database yesterday. Every one that's been assigned so far has been assigned a CVSS of 10, because it's remote administrative access via credentials. Alright, so another nuclear imaging system here. You start getting into some interesting stuff here: Telnet, root. I mean, they're pretty super awesome passwords. Another thing of note that's kind of cool: look at the passwords, like #bigguy1, and as we go through them, look at how many different products they're utilized on, across products, across years. You know, were these developers switched between these products, did they move product lines, those types of reuse. Okay, keep going. Still nuclear imaging systems, service logins. So, Windows admin accounts. 
Those are great. Oh, the bottom one's really awesome too. From a clinical perspective, these systems are heavily supported, and because hospital staff doesn't necessarily know every single product, support is outsourced to the vendor, so they need to be able to get into them remotely. And this one happens to use super awesome VNC with a super awesome password. Here are some cameras. Some more stuff. More nuclear imaging systems. CT scanners. So now we're getting interesting: now we're getting the SU logins on these types of systems. MRI systems. Same thing, repeats the passwords. X-ray systems. More X-ray systems. Centricity. So this is where it starts to get interesting. Never mind this. Centricity, like I was talking about PACS before, is a system that does patient monitoring type stuff as well as PACS imaging storage. So that's what we're getting into here. Here's the Centricity Image Vault. Oh, that's where all the images are stored. Super awesome SQL SA password of nothing. Admin logins. License server, if you don't want to pay for it. Archive audit trail. So this one is actually really good. Okay, this is a CVE for 2014, SSL. Hey, you can actually use encryption! Like, they made a good decision: we're using encryption now. Oh wait, the key store server and the key manager server have really bad passwords. SQL SA logins for the analytics servers. Analytics server logins themselves; there's going to be a treasure trove of data in there. That's where all the data is dumping, that's where you'd want to take all of your data out, from the data warehouse. PACS SQL SA logins, directly into imaging. And more PACS. And here is what I was talking about with the back end. Not through the application, but to get into the actual storage server: if you want read-only, the password ends in RO; if you want read-write, it ends in RW. Two characters, GE. And some more PACS. And oh, here's some IIS. Here's some web server stuff if you want in. 
Just type in IIS. Gamma cameras. CT scanners. More SU, more logins. Emergency logins; obviously there's a reason for that. I think we don't always know the best answer, but we do know what's failed, and we need to not continue to use failed practices. Even if we don't solve the problem, we should be trying something new that is not a known failure. And I think that's a big message going forward: we can't continue to use failed approaches. More X-ray. This one's interesting: this is, like, custom scripts for you. You want to create user accounts? Just run a script. The first one, techusers.bat, will create two service tech logins with service tech as the password. The rest of them will create user accounts one through 100. So, all right, you guys survived. That's 130 CVEs there, 130 sets of creds, across multiple different pieces of radiology equipment. So what do you do when you have all kinds of credentials? You create a word cloud. Yeah. So, there's a bit of a, I don't know if it's funny, you guys probably think it's funny. When I went to these word cloud sites, I kept putting all the information into them, and they all kept coming out saying big guy, right? Big guy one. But if you look at the real password, it's #bigguy1. So apparently word cloud websites have better app sec and sanitize input much better than medical device software. So, all right. Now, again, the official response was that these are default. So I want to make the case: are there still issues? We're going to go through a couple of examples of that. In some cases, the documentation instructs: do not change these credentials, and do not allow password resets on these accounts. In some cases, the documentation instructs: do not change the password on this account, or we will not be able to remotely support that application or that system. 
The documentation also, in many cases, has no instructions on how to change many of these accounts. Secure configuration documentation is very lacking. Because of that, the third-party integrators, the manufacturers themselves that are supporting these, or the healthcare organizations implementing them themselves, are going off this documentation. They're following it. So these credentials are heavily utilized in the industry. When we go do assessments on this type of stuff, we get massive success rates using default, hard-coded service and technician credentials. So let's look at some examples to show you. You can see at the very top: user cannot change password, make sure you check it. Password never expires, make sure you check it. Down at the bottom, changing passwords: a big important flag for the person doing it. Do not change the InSite password. Remote access will be disabled for support if this password is changed. So if you're saying it's default, this is a little contradictory, in my opinion. And here, on acquisition again, big bold letters with an exclamation point: do not change this password. As a healthcare organization or an implementer reading this, you're probably going to follow it. It's big and bold and called out: never change this password, in some of these cases. It's not all of them; there are definitely some that are absolutely default and where they document how to change them. Password: tech support. Here's PCNW, not VNC. The bottom one is super awesome, because physicians love typing passwords, so they would never click yes, remember this password forever. And the last kind of examples here. From a clinical perspective, this is remote support; we would call it a backdoor. But station operator: hey, just call them up, they can always give you a username and password into that system. And we'll go ahead and reset that password to password. 
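Those "user cannot change password" and "password never expires" checkboxes correspond to documented bits in Active Directory's userAccountControl attribute, which means a defender can audit for accounts configured the way this vendor documentation instructs. A minimal sketch of decoding the two bits; the flag values are Microsoft's documented constants, and the sample value is made up for illustration:

```python
# Sketch: decoding the Active Directory userAccountControl bits behind the
# "user cannot change password" / "password never expires" checkboxes the
# vendor documentation tells implementers to set. Bit values are from
# Microsoft's documented userAccountControl flags.
PASSWD_CANT_CHANGE = 0x0040
DONT_EXPIRE_PASSWORD = 0x10000

def risky_flags(user_account_control: int) -> list:
    """Return which of the two risky flags are set on an account."""
    flags = []
    if user_account_control & PASSWD_CANT_CHANGE:
        flags.append("user cannot change password")
    if user_account_control & DONT_EXPIRE_PASSWORD:
        flags.append("password never expires")
    return flags

# 0x10240 = NORMAL_ACCOUNT (0x200) | PASSWD_CANT_CHANGE | DONT_EXPIRE_PASSWORD
flags = risky_flags(0x10240)
```

In practice you'd pull the attribute values with an LDAP query against your domain; the decode step itself is what's shown here.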
This last one, I want to be very clear about: this is theoretical. I have not done anything on this device. I've done stuff on other devices and other systems, obviously, but I wanted to give you the mindset of an attacker, or even a researcher: how we start looking at documentation and seeing whether there are potential issues. Can we get this system to do something unintended? And so this bottom one, you can see it says: to perform the following steps, you must generate X-ray radiation. With radiology equipment, dosage, radiation, that's where there's a safety issue. Now, obviously, with this step there are probably some very good controls in that system, so that under intended use it emits a low level. But they're still telling you to make sure you follow the safety precautions, and you're emitting radiation. So if you can get remote access and you follow this, and they're not using encryption, and they have these web services and all of that, can you then potentially, as an attacker, sit in on that? Look at that? Like I said, a lot of these things call XML files and feed parameter values for dosing levels in the XML. So once you figure out, okay, this is how you can get it to emit at a low level, could you then potentially, as an attacker, change that parameter and replay it? So that's background on how attackers, or researchers, think as they dig through this documentation, before we even touch the device, to look at different attack vectors. And finally on this, something that's a little contradictory, and it gets into that liability space: the documentation says, adhere strictly to the procedures in this manual. But warning: the editors and producers claim no responsibility for its accuracy. So of course, there hasn't been a liability case to set precedent yet. 
If something does happen, I think that will take place, but right now you see these types of contradictions. And again, all of this stuff, this is not GE specific; this is systemic across the industry, across vendors. So I'm going to turn it over to Mark. So now we know these systems, we know they're internet accessible, and we've got credentials. What we wanted to do over the last six months was set up emulated honeypots to see if we were seeing intentional, targeted attacks, unintentional ones, or just random noise. Mark's going to talk at a high level about some of the data we're seeing and some of those statistics to give you an idea. So Mark, I'll turn it over to you. All right. So like Scott said, we've got all this data, we've got all these passwords, the default creds. We know this information about the devices and all that stuff. Let's figure out if this is actually happening. What we wanted to get out of this research was: are people using this data? Did somebody else scrape GE's website, or any other vendor's default credential documentation, before we did, and start looking for these devices on the net and logging in? Were they using Shodan as well, looking for port 445 and other SMB exposure, and trying to exploit MS08-067? Were they developing custom malware for the different vendors' devices? And if they did get access, was there malicious intent? And were there any campaigns against specific vendors? Because it could be political or geo, if someone were attacking certain vendors for certain reasons. So, the whole setup. For anyone who's done honeypots, I know there are a bunch of popular ones out there, ranging from low to high interaction. We definitely used some of those, the open source ones, forked them and made them our own. So, to include: we got a bunch of information on the vendor devices. 
So: HTTP strings, the different protocols they use, the web front end, to make sure that if an attacker hits it, it'll actually show a vendor name, a user login and password prompt, and give the right error messages back, just in case anybody was doing any type of fingerprinting, or maybe even heuristics. We placed real vulnerabilities on there. Like I said, MS08-067, those application-level vulnerabilities; some had vulnerable versions of Telnet on there, or open VNC. And also, of course, default creds for SSH and the web, and custom services and scripts. So we kind of had to emulate the whole stack on the honeypot itself, to make sure that when these vendor devices are communicating with each other within the operating system, these types of protocols are actually there and set. So if there was a pretty sophisticated attacker looking at it, this wasn't going to look like some ordinary honeypot they found; this was going to be some bona fide traffic on there, and potentially juicy information. So, internet presence. Obviously Shodan and Google would be your go-to. We were setting this up about six months ago, and we needed some quicker results, just because I wanted it quicker. So we utilized previous talks, earlier CVEs, and obviously all these vendor creds; they're out there, so we could use that information to speed things up. We also did a bunch of Twitter and fake Pastebin dumps. We set up a bunch of credentials for each of the systems that were unique to those systems, and just let it rain. So, you know, "hacked by somebody who hates medical device vendors, here's a dump of the file system" and everything. It made no sense, but hopefully somebody's like, yeah, I want to hit that. I would. So, the data. We sampled about 10 different honeypots, from defibs to MRI machines, and spread them all across the world, just to get some balance in there. There were over 55,000 successful logins. 
That includes web and SSH. A lot of that also includes your typical admin/admin, and even though those creds are probably valid somewhere, as we saw earlier, the default passwords for these things are pretty terrible, so you might get a bunch of regular brute-force traffic in there. So, successful exploits. How many people exploited the Telnet, the MS08-067, or some sort of FTP vulnerability? There were about 29 or 30 of those. There were 299 pieces of unique malware dropped. Most of these were sort of C2 callback scripts. They just sat planted there; they were in Perl or Bash or what have you, and just established some sort of persistence and a callback to an IRC server somewhere. And the honeycreds: out of all those fake Pastebin and Twitter dumps that we did, eight of those unique creds actually came back to us as successful logins. We had alerting set up, so when I saw that come by, I got out of bed. It was like 4 o'clock in the morning, and I was like, I've got to check this out. So what happened when those people logged in? Who's trying to attack this? They're obviously here for a reason. I think it was interesting just to point out the source countries. Who would have thought the Netherlands would be the top country? And so, Mark, I know you looked at that a little, like the IP; it seemed to be an ISP they were hopping through, where they had C2. Yeah. So we have the source countries: Netherlands, China and Korea. The latter two, okay, for anyone else who runs honeypots out there, you see a lot of traffic coming from those types of countries. But obviously we did a little more research on the Netherlands traffic, and it actually came from one web host provider. So why it's coming from there, I don't know. 
We're definitely going to follow up on that and see if there's anything going on there. And we were just going to roll the attribution dice, but the Netherlands isn't on the attribution dice yet, so we may have to add them. All right. So, what did the attackers do when they logged in with their honeycreds? Absolutely nothing. A lot of them just let the shell blink, or ran a ping to 999.999.999.999. I was like, you suck. How did your logs fill up? Yeah, the logs... I had to get more space. So, obviously really saddening, but when we were doing previous honeypot research unrelated to this, this is the type of stuff we saw. So we can sort of conclude from that: okay, these medical devices that we copied to a tee are pretty much as vulnerable as any other typical Linux or Windows box that's out there, just chilling with direct access to the net, getting pulled into some C2 or some IRC botnet. Very similar types of attacks. Did the attackers know that they had root on an MRI machine? No, because they didn't do their proper enumeration. If somebody had cared, they'd have realized: this is an MRI machine and I have root on it, what can I do? These honeypots were actively connecting back and talking to that C2 server. So we can conclude from that: are there currently medical devices that are owned and talking back to a C2 server? Yeah! Which is scary. Because these devices check in, they can be part of a DDoS. So if anybody here does network-level stuff and you see a defibrillator DDoSing you, that's interesting. There's somebody's pacemaker just going nuts. That's crazy. So, these C2 owners, or the bot owners, or whoever is sitting on this and calling back, they don't know what they have. So once they figure it out, what can they do with it? 
So obviously we haven't seen an intentional attack yet, but what happens when the news hits that somebody's defibrillator starts going crazy, and somebody reaches out and says, hey, this defib, or this MRI machine, or whatever it is, is owned? When it starts getting into the news, it's high profile. Once it's high profile, people start to look, and they put the pieces together: internet-connected devices, this is a typical Unix box or a Windows box, probably out on the internet. Do some light fingerprinting and versioning, and realize: okay, can I sell that? Can I do malicious things with that? That's what we're trying to figure out. So our next steps are definitely to go on the hunt and figure out which bots are talking to what, and try to tell them, hey, we are an MRI machine, make it more obvious, to increase interaction. This is all stuff that's going on right now, which is pretty scary. So that's the honeypot research; now for a really quick diagnosis and those types of things. Really, I think we've shown that there are exposed, vulnerable systems out there, right? And we all know in this room that all software has flaws; we're not going to reduce that to zero. That's not our goal. These are basic security hygiene items that we've shown you, items that allow a massive attack surface from very simple things. You don't need to be very sophisticated to attack those vulnerable, exposed systems we're looking at. And there's really this lack of patient safety alignment, because healthcare organizations overall have been heavily focused on patient privacy and those types of things. So, problem awareness, just some really quick things. These devices are increasingly accessible, and as connectivity increases, so does the exposure to potential vulnerabilities or interaction. And again, HIPAA focuses on privacy. It does not focus on safety.
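Back on the honeypot side for a moment: the "tell them we are an MRI machine" idea above is essentially banner engineering. One way to sketch it is a tiny listener that greets every connection with a device-specific banner, so even a lazy attacker's enumeration immediately sees a medical device rather than a generic Linux host. The banner text below is invented for illustration, not a real product string, and a real honeypot would log the peer and keep the session going.

```python
# Sketch: serve a medical-device-looking banner to increase honeypot
# interaction. One-shot listener for demonstration purposes.

import socket
import threading

BANNER = b"MRI Scanner Service Console (build 4.2)\r\nlogin: "  # hypothetical

def serve_banner(host="127.0.0.1", port=0):
    """Start a one-shot TCP listener that sends BANNER; returns the bound port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))          # port 0 lets the OS pick a free port
    srv.listen(1)

    def handle():
        conn, peer = srv.accept()   # a real honeypot would log `peer` here
        conn.sendall(BANNER)
        conn.close()
        srv.close()

    threading.Thread(target=handle, daemon=True).start()
    return srv.getsockname()[1]

if __name__ == "__main__":
    port = serve_banner()
    with socket.create_connection(("127.0.0.1", port)) as c:
        print(c.recv(1024).decode())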
So we need to change our mindset and look at real safety. Number three: the FDA. The FDA has been very forthcoming and very good to work with over the last year, and we've done a lot of things and had some of these landmark events happen. But they do not validate cybersecurity controls, and security controls are not a prerequisite. And what happens when one of those systems we saw in the honeypot is a real device and it gets malware? Maybe that malware was actually meant to scrape card data or something like that. But how do we know? These devices lack forensic evidence capture, those types of things. So, treatment plans. One: if you work for a healthcare organization, I would highly recommend that you go and grab the credentials now that they're in CVEs. Get those credentials, take them to the manufacturer, and ask them how you're going to get them fixed. Secondly, working in healthcare, many organizations want to put it solely on the manufacturer. They will ask the manufacturer, hey, we've got these credentials, or we need this update, and the manufacturer gives them an answer that says we can't do that. They take their word for it. They don't press it any further. So it's not a spectator sport. You need to get involved. If you work for a healthcare organization, I guarantee you patient safety and quality care are in your mission and in your values. And if they aren't, you probably shouldn't work for that healthcare organization, right? Profit first? No. It doesn't happen. So engage with stakeholders. Whether you're on the consulting side like I am now, or whether you're in industry, you as information security people need to reach out to clinical engineering. You need to reach out to legal and contracting.
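The first treatment step above, pulling the now-published default credentials and pressing the manufacturer, starts with knowing which of your own devices are affected. A minimal sketch of that triage, cross-referencing an asset inventory against a list of credential advisories; the CVE IDs, model names, and credentials below are placeholders, not real advisories.

```python
# Sketch: match your device inventory against published default-credential
# advisories so you know exactly which units to chase the vendor about.

# Placeholder advisory data: model -> (CVE ID, default user, default password).
KNOWN_DEFAULTS = {
    "Acme Infusomat 300": ("CVE-0000-0001", "admin", "admin"),
    "Contoso VitalsMon 9": ("CVE-0000-0002", "service", "service"),
}

# Placeholder asset inventory, e.g. exported from clinical engineering's CMMS.
inventory = [
    {"asset": "ICU-PUMP-04", "model": "Acme Infusomat 300"},
    {"asset": "OR-CART-11", "model": "SafeCo Ventilator X"},
]

def affected_assets(inventory):
    """Return (asset_tag, cve_id) pairs for devices with published default creds."""
    hits = []
    for dev in inventory:
        entry = KNOWN_DEFAULTS.get(dev["model"])
        if entry:
            hits.append((dev["asset"], entry[0]))
    return hits

print(affected_assets(inventory))
```

The output is effectively the call list for the conversation the speaker recommends: here are the CVEs, here are our affected units, what's the remediation plan?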
During vendor selection, prior to cutting that check, put requirements in place for certain security controls, validate them up front, put penalties into the contract, and come up with service-level agreements for vulnerability handling and coordinated disclosure. Those are very good things, and we've had some big wins on coordinated disclosure. When Sean and I talked last year, we got that formal statement from Philips in November. So that's awesome, right? They were the first medical device manufacturer to release a coordinated disclosure policy, and it's a really good one. So, you know, a round of applause for those guys. Yup. And not only did they get it implemented on the healthcare side, they got it implemented Philips-wide, so that was a big win too; they have a lot of IoT stuff as well. Medtronic is another one. They're working on something now, and they have a dedicated way, if you go to their website, to contact them. It says, hey, if you're a security researcher and you find a vulnerability, this is how you get hold of us, a separate way to communicate with us and report it. And we just had, at BSides Las Vegas two days ago when we did the update, Hans, who works at Dräger Medical out in Germany, a very big medical device manufacturer that does a lot of ventilators and such. They came on and have now committed to come out with a coordinated disclosure policy. So we're having some big wins there. But it's not a spectator sport. We're reaching out. You've got to collaborate with those allies, those internal stakeholders, and those external stakeholders in order to solve this problem. And if we continue down the road we're on, right now it's very interesting. The FDA receives adverse event reports, and this is quoted from their website: it could be a medical device malfunction, an associated death, or just an adverse event. But right now, we can't do forensic investigations very easily.
Evidence capture is not something that has historically been built into these devices. So when there's an event, it gets adjudicated clinically: was it a clinical cause of death, or a medical device malfunction? But when it goes into that generic medical-device-malfunction bucket, nobody then looks into whether it was something at the security level. We don't know that. It just goes into this generic bucket: it was a malfunction. So we've got to get better at that. Going forward, what can we do? I think overall, to treat this, patient safety should be the overriding objective. Privacy, for the most part, if we do security properly and look at it from a patient safety angle, should be incorporated. There are some cases where it won't be, and we'll have to address those. But I think we've got to look at patient safety more than patient privacy. We need to avoid failed practices. That is a huge one. We continue to see it: we know they're bad practices, we know it's terrible security, and we keep rolling out product. And we need to integrate those safety concepts into existing security practices and governance structures. If we do that, we're going to have more reliable medical devices coming to market, without undue delay or cost. We're going to have better collaboration, which has been awesome over the last 12 to 16 months. And devices are going to be much more resilient against accidents, against adversaries, and against unintended use. And lastly, how can you get involved? If you're interested in this, if you're a researcher, if you work at a healthcare organization, you can acquire devices and test them. My wife has not let me buy an MRI system and put it in the basement yet, so I don't have one of those. But some of these devices, patient monitors, those types of things, you can go on eBay or MedWOW and purchase them. That's how a lot of researchers acquire devices. There's a handful of sites like that.
And it's not just that. The FDA held a workshop this year, the first time ever, in October, and brought in all stakeholders. There were security researchers there, there were providers there, there were manufacturers there. That was awesome, and they released guidance. There was also an IEEE/NSF workshop on a building code for secure medical device software, held in New Orleans the month following that, in November. So there are a lot of industry working groups working together. So: speak at industry conferences, and not just security conferences like DEF CON. I spend a large majority of my time now actually going out to healthcare conferences and speaking to that C-level, going to HIMSS and the HIMSS privacy and security events. I did a talk this last year that was actually for healthcare procurement groups, and helped them out: hey, this is what you can do during procurement. And get involved with I Am The Cavalry. It's kind of a grassroots organization focused on public safety and security issues that have the potential to impact human life, and medical is obviously one of the cornerstones of I Am The Cavalry, along with automotive and IoT. So get involved if you're not involved with those guys. If nothing else, it's really good just for the collaboration: interact, meet new people who are interested in doing what you're doing. Don't just drop a bunch of research and then walk away from the problem; get involved to actually solve it, because the people in this room are the ones who are uniquely qualified to fix those same problems that they find. So please get involved, do your best. Thank you guys for coming out. I know we're already out of time, there was some miscommunication about timing, so we'll try to hang out somewhere out here if anyone has any questions. But thanks a lot, guys, for coming out, and have a great DEF CON. Thank you.