All right. Well, welcome, folks. It's my pleasure and honor to introduce some great friends of mine. They are all hackers, and we have our moderator today; I'll do a brief introduction for him. But first and foremost, welcome to the biohacking panel: how independent security researchers, vis-a-vis hackers, work with MDMs, or those notorious medical device manufacturers. This is the bad, the ugly, and the great, also affectionately known as BUG. My name is Red Dragon 1949; you know me as the volunteer coordinator. But more importantly, I'd like to introduce you to the panelists. First and foremost, ladies first: we have Veronica, or V, Schmidt. Yesterday was her birthday, so happy birthday, V. She is former federal law enforcement in South Africa, she has done forensic investigations, she is a three-time cardiac implant patient, and she's doing some great forensic work for one of the MDMs that will be showcased here at the Biohacking Village, Medtronic. Welcome, V. Second, we have Ms. Natalie Tshuva. She is the CEO of Sternum, out of Tel Aviv, Israel. She comes to us from Unit 8200 of the Israel Defense Forces. She can do things remotely with a mobile phone and its applications that are fascinating and scary at the same time. So welcome, Natalie. And then, last but not least, we have Mr. Peter Morgan, out of San Diego, California. Mr. Morgan is quite, how shall we say, effective at taking a medical device down to its fundamental binary level and doing things with it that would surprise you, not only at the software and firmware level, but certainly in terms of vulnerability exposure. He has notable CVEs, common vulnerabilities and exposures, to his name associated with some Medtronic devices. So welcome, Peter. And finally, our moderator, Mr. Kyle Erickson. He's the director of product security engineering with the Cardiac Rhythm and Heart Failure business division at Medtronic, out of Minneapolis, Minnesota.
He has extensive breach and incident response experience with one of the largest healthcare organizations here in the Minnesota area, as well as globally. So he has faced some of the individuals on the panel that he's going to be moderating. Welcome, Mr. Erickson. I will turn it over to you.

Thank you, Red Dragon, for those great introductions. I have the luxury of being here on the panel with Pete, Natalie, and V, and I'm really excited to dig into their stories and learn how they've been working with medical device manufacturers, what medical device manufacturers can do better, and what we're working on to improve in the industry. I want to start out with an origin story, and I'm going to start with Natalie. Natalie, what makes you a hacker, and how did you really get involved in the security movement around medical devices?

Yeah. Well, I actually don't refer to myself as a hacker, but I can tell you my story. I grew up with four big brothers and started my undergraduate degree in computer science at the age of 14. It wasn't just weird because I was 14; it was also weird because nobody in my family had ever gone to college or anything like that, so I was like the black sheep of the family. Once I graduated, I started my period in the Israeli Unit 8200, one of the elite technological units there, and this is where I got exposed to cybersecurity, low-level development and research, and exploitation in general. We did some significant things in the unit and handled the most advanced techniques and technology. After my time there, I wanted to keep using cybersecurity to create impact. So during my time at Cellebrite, I did a lot of vulnerability research and exploitation, mainly on the Linux kernel and Android platforms.
And I didn't consider it being a hacker; it was mainly to provide evidence to stop pedophiles and human trafficking and things like that, because using those vulnerabilities and exploits we were able to extract forensics, and basically evidence, from mobile devices, including encrypted ones. So for me, it was the passion and the impact that such tools can create. And medical devices were actually not a coincidence. I was always dreaming and thinking about becoming a doctor. So when I finished my master's in computer science, I figured that this was my last opportunity to go into the medical industry. I thought about going to medical school, but with the knowledge I had in cybersecurity, I met with a few friends and entrepreneurs, one of whom is my partner, Boaz, and we started thinking about medical devices and cybersecurity. And when I saw that remote care and remote monitoring could have such a tremendous effect on our lives, and that they were not properly secured, this was something that attracted me tremendously, because I figured that bringing the right security to these devices could really improve patients' lives and improve the quality of care and treatment. And I saw how the knowledge we have from the cybersecurity industry could leverage this kind of technology. This is what actually attracted me to co-found Sternum.

That's excellent. Obviously, you may not call yourself a hacker, but you have the tactics, techniques, and capabilities. So, V, I'd like to go to you. We know you have a unique story: you're not only a hacker, a researcher, and a goon, but you have an implanted medical device and refer to yourself as a cyborg. So why don't you give us a little bit more about your background and how you've been getting more and more involved with medical device manufacturers and the industry?

Yeah, so I wasn't a very good hacker when I was a child; I basically got caught changing records at school because I felt everyone should have a good year.
Unfortunately, that led me down a path that my parents weren't very proud of. But going back: at the age of 18, I joined a special investigations unit, a law enforcement agency that deals with cross-border human trafficking, child pornography, some really nasty things. We actually made use of Cellebrite, so I'm very familiar with what Nati is referring to. And let me tell you, if you want to get a physical image, those tools they develop are key. But at 19, I also had approximately two weeks left to live when I got my device implanted, and I know the quality of life that it gave me. Since then, I've been obsessed with what my device does, for the pure fact that I believed in "trust but verify," but I could never verify what was keeping me alive. And that was the biggest, greatest frustration. I reached out to you guys about three years ago, and I was met with the ugly part of Medtronic, sadly: I got met with the legal team. And I felt that the conversation wasn't going very well. I remember these words, and Toro, who used to work with you, knows this well. I said to them, if you want to bring your lawyers to the table, I can do the same, and then we can have a conversation. But that's where it ended. In walks the Biohacking Village three years ago; Nina, being forever the master planner, put me on the path to Bull. But then I came to severe loggerheads with him last year because he could not confirm to me that he was from Medtronic, even though I knew he was. And we went at it and we had our conversation, and, you know, fast forward a couple of months: I'm now finding my feet within Medtronic and doing research into early detection, because I didn't want to focus on the problem; I wanted to find a problem and then solve it. So it's not just about whether the device can be hacked. We all know everything can be hacked. It's actually about having some tangible evidence to work with, to enable early detection and mitigate it.
For me, literally close to my heart are the embedded, implanted devices that are often misunderstood and forgotten about. These are things that are connected to people, flesh and bone. And I just decided to take this on and put my story out there and say, you know, these things are cool, but they're also scary; that doesn't mean we need to take them off the Internet, it just means we need to do more to make them safer.

Absolutely. And we're glad that we can make some inroads there. I know it's been a journey, to say the least. Pete, you've had a similar journey. I'd love to hear your origin story and what got you interested in medical device security. And not only do you have vulnerability discovery on your side, you've also looked at solutions and at improving the problem. So give us your background.

Sure. So I started out as a kid learning how to hack video games and change them, before the advent of things like client mutual authentication. That really got me hooked. As I grew up, learning about software development, networking, reverse engineering, exploitation, going forward into things like software-defined radio and actually understanding RF better, I picked up these skills along the way. And as I was able to move up a career path and get better jobs, I finally got into a spot where I was able to affect what direction our team was going. Previously, I spent some time at a hedge fund, kind of protecting a billionaire, making sure the billionaire stayed worth billions, right? There wasn't a ton of job satisfaction in that, other than the interest of how it all works. And then I got a project on a small medical device in a consulting relationship.
And it was really neat because the problem set was really interesting: the requirements for how these devices operate, the conditions they have to operate in, the battery life, the processing power, all of that creates really interesting constraints. But also, when you make improvements to something like that, it has a direct impact on people's lives, and it was a lot more interesting to work on those types of problems. That's when I got started doing some research on some of the Medtronic gear and was able to publish some vulnerabilities, after a maybe ugly, or bad, initial interaction that then turned quite positive. And, you know, one of the things I'm most proud of, I think, is having the patience to work through that process with Medtronic and end up in a productive working relationship, where now I do get to work on solutions and help with fixing things, not just identifying issues, sending a vulnerability report, and walking away. So that's been really rewarding.

Absolutely. We thank all three of you. You've all been influential in the journey here as MDMs have progressed and adapted and, you know, understood that there's a problem, but also looked to solutions. So Natalie, I want to come back to you. I have a specific question around military-grade technology and techniques. How do you think those can really help secure medical devices and the infrastructure and hospital systems they reside in?

Yeah. Well, I think many times in history we've seen the military develop some of the most cutting-edge technologies in many areas of industry, and as part of that, I think security techniques are also well developed within the military. And when you talk about military-grade security, I think this is what we want for medical devices, right? Because you don't want your security camera to be protected the same way as your pacemaker.
I think medical devices do require a higher and deeper level of security. What we see in the military is a wide range of attack scenarios, attack techniques, and exploitation techniques. And you realize, for me at least, that securing medical devices just by doing vulnerability patching or penetration testing or avoiding hard-coded passwords was not enough, because I know, and I'm familiar with, how you can bypass those things pretty easily. I think that taking some of that knowledge, basically military-grade techniques, can help you deliver the kind of holistic and comprehensive security these devices deserve. One of the things that was significant in our way of thinking is that even if you secure the MDM's code, or do some vulnerability research or pen testing, it's not enough, because those devices contain third-party code as well. And Ripple20 is a great example, right? We saw a TCP/IP library affect millions of devices: power grids, medical devices. And most solutions today just can't handle third-party vulnerabilities. You can't really protect against vulnerabilities that are within code that you didn't develop and don't have the source code for. I mention it because in the military we deal a lot with binary code, with embedded systems, and with the ability to reverse engineer other people's code. We are actually leveraging these kinds of techniques in our philosophy, to embed protections and security into those closed-source components, because in my view, the MDM has to be responsible for the security of the device, regardless of the components within the device. So we have to give controls to the MDMs to secure these third parties. I think these binary-level capabilities are well established in the military and can be used to holistically secure medical devices, and this is what we try to do.

Excellent.
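As a rough illustration of the third-party-component problem Natalie describes, the first step many teams take is an SBOM-style scan: searching a firmware image for version strings of embedded libraries and flagging known-vulnerable ones. The sketch below is a deliberately naive version of that idea; the library names, versions, and vulnerability data are hypothetical, not taken from any real advisory feed.

```python
# Naive SBOM-style scan: look for printable "library version" strings in a
# firmware blob, then flag (library, version) pairs with known advisories.
# All names, versions, and the "vulnerable" set here are illustrative only.
import re

KNOWN_VULNERABLE = {
    ("treck-tcpip", "6.0.1"),  # hypothetical entry in the spirit of Ripple20
    ("zlib", "1.2.8"),         # hypothetical entry
}

def scan_firmware(blob: bytes):
    findings = []
    # Match patterns like "zlib 1.2.8" or "treck-tcpip/6.0.1".
    for m in re.finditer(rb"([a-z][a-z0-9\-]{2,})[ /v]?(\d+\.\d+\.\d+)", blob):
        lib, ver = m.group(1).decode(), m.group(2).decode()
        status = "VULNERABLE" if (lib, ver) in KNOWN_VULNERABLE else "unknown"
        findings.append((lib, ver, status))
    return findings

firmware = b"\x7fELF...padding...zlib 1.2.8...treck-tcpip/6.0.1...curl 7.68.0..."
for lib, ver, status in scan_firmware(firmware):
    print(lib, ver, status)
```

A real scanner would also parse binary symbol tables and handle stripped strings, which is exactly why Natalie argues that static identification alone is not enough for closed-source components.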
I just want to say, Natalie, as a patient and as someone who's been shouting about this stuff very loudly, it's like you're serenading my ears at the moment. You know, I'm all for what you are saying. It is solely the MDM's responsibility; they have to take ownership and responsibility at the table.

I agree with you, but on the other hand, I must say it's not an easy task, right? It's very difficult for an MDM to secure third parties, and there is no way to avoid implementing third-party code within your device. So until someone takes the responsibility to create such controls for MDMs, and to create the security products they can use to protect their devices, we really can't blame them. But we do need to ask them to use cutting-edge tools to secure their devices, and this is indeed something that I believe in.

Yeah, it's about embedding a security culture into how we look at devices, right? But, you know, Medtronic is a good example of learning from lessons learned, right? They've come from a place where they were in a really bad position to where they are now, which is never an easy thing: taking that ownership and responsibility and saying, hey, we need to do more, we need help, we can't do it on our own. I often say it takes a stronger person to say "I need help" and go to people who can help them, versus having the persona of "I know everything, we are a big manufacturer, we can do it on our own."

Completely agree with you.

That's great. I want to pull on that thread a little bit, V, and talk a little bit more about your patient-hacker relationship: how being a patient changes your perspective on device security, and how learning more about how devices are built, a little bit more under the hood, has also changed your perspective. Can you talk a little bit about that?

Yeah, sure. That brings back a memory of mine. I think it was two years ago, in February.
I was actually in an ICU where I had some device issues and had to be resuscitated. I was lying in the ICU connected to 15 different kinds of machines doing infusion monitoring, with a specific external monitor watching my pacemaker at that time. And I was offered guest Wi-Fi, obviously, because hospitals want us to feel safe and connected to the world. I soon realized that these systems weren't segmented, and that I potentially may or may not have had access to the central station that had my vitals on it. Just realizing, from the perspective of a patient, how interconnected these systems are. And the more I watch and observe the world, or even attend talks, I realize that the term "medical device" is used indiscriminately for anything in healthcare and hospitals, anything a manufacturer makes, whether it's a Windows 10 machine or a complex embedded system. And I think we need to start understanding that each system is unique. I compare it to language and linguistics: when you're observing someone, you try to learn who they are, how they talk, all those kinds of things, but we don't do that with devices. I appreciate what Peter does and what Nati does: reverse engineering things to understand how they work, not only at a firmware level, but at a file system level, at a binary level. As a patient, I wanted to know what keeps me alive. I wanted to know what libraries were used, what hardware was used. And I did that through OSINT. I literally googled manuals, and from manuals I pivoted to get more information, because the manufacturer wasn't disclosing it to me, even as a patient. I wanted to know that I could trust my device, because I've seen that these devices are almost understood and implemented in a way where we build bridges or walls around our networks.
But we're so soft in the middle, and we never consider the malicious insider, which is a big thing, because I've realized that the threat to hospitals might come from within, especially with everything becoming borderless with COVID-19. I don't know where the perimeter starts and stops anymore. So I think we need to take the time to understand the devices, learn their language, and understand their constraints, because they're all different. I always say "beautifully broken, wonderfully flawed," meaning we can have the most secure and perfect device, and there might still be a flaw, it still might be broken, but that doesn't take away from the beautiful synergy that goes into these devices. I know I geeked out when I spoke to one of the engineers and learned what the functions on my pacemaker and ICD do. And let's just say, it was like the moment we sat back and said: this is such a simple, complicated device, but I trust it. It's the weirdest thing, but I geeked out knowing the math behind it is so wonderful that in a nanosecond it raises my heart rate. I'm actually getting emotional, excuse me, but yeah, it's a good thing when you realize that you can trust what's in here. I can trust that it's going to do what I need it to do.

I must mention, just regarding one thing you said about getting the hardware specifications, the software specifications, being exposed to how the device has been built: there are also downsides to publishing this information. It means bad actors have this information very accessible; they're able to see what kinds of third parties are in there and start looking for vulnerabilities, or understand the specs and extract information far too easily, when you want to make it as hard as possible. So I do think we need to keep some sort of balance between what information we make public and what information we maybe expose only under certain terms.
So I understand you, as a patient, wanting to know as much as you can about your implantable device, but imagine that bad actors could have access to this information as well.

So just to clarify: when I said that, it was before I spoke to them, and when I realized I could get all that information, that's when I went to the MDM, right? I'm all for the idea that certain things shouldn't be on the internet, and this is one of the first things I raised with them. When I say that as a patient I went there, I was concerned. I was petrified, because I managed to get technical manuals, and from technical manuals jump to hardware manuals, which led me to more information than I certainly wanted to find in an OSINT exercise. So yeah, that's what happened. And that's when the legal team slapped down on me, like, where did you get this information? On the website.

Yeah, we had the same experience, where they asked, where did you get the devices? And we're like, eBay. We know about that. I think that underscores the point. I mean, I can understand the request for balance, but the minute that information comes out, you can't take it back, and I would caution against hardware and software designers relying on the concept of anonymity and hiding the specifications of the device; they may make worse decisions predicated on the idea that no one's going to know what the hardware is, or what radio chip they used, that kind of thing, right? That's just an exercise in buying time. And when you think about the lifetime of these devices, I'd caution against that concept, because once that information comes out, we can't take it back.

Yeah, I mean, sorry. So, Peter, I think you and I... I mean, I didn't have to go buy my device. I just got it explanted, right? I went to the doctor like, hey, I would like my device please, can I keep it? And they gave it to me. I mean, your story, of course, is way cooler. I mean, I didn't have to.
Talk a little bit more about that story.

So I'm a bit of an empathic person, so awkward situations tend to land strongly on me, I guess you might say. Amidst doing some of the work I've done, I'd taken apart and reversed the majority of the home monitor device, but I needed to see live communications to understand some of the implications of that work. So I tried to get my hands on some implantables, and I think it was between 30 and 40 morgues and coroners' offices I called trying to get my hands on them, basically anyone that did cremation, right, because they remove the devices before cremation. And those were some really, really awkward conversations. I got yelled at a few times pretty aggressively, understandably so; they didn't see what I was trying to do. But yeah, that was a little bit awkward.

Yeah, you should seriously try going to a cardiologist's office when they've googled you and seen one of your talks, and you get specifically watched during everything. Yeah, that's a different patient perspective. I am officially banned from all major hospitals in my hometown from having a laptop or even getting Wi-Fi access. I think they've grown tired of me saying, hey, we should fix this, right, or me spending time fixing it for them and saying, hey, I fixed this for you, here are the credentials. It's just that whole culture: when you say to someone, "we can fix this for you," it's not an insult. It's genuinely people, or researchers, wanting to make things better. And perhaps not having the social cues... I don't always have the social cues. Like, I can totally imagine having those conversations with coroners. I think I would have wanted to be a fly on the wall for that.

So as a medical device manufacturer, I think I'm internalizing two takeaways: don't rely on security by obscurity, but also make sure you keep operational security in mind.
Make sure you're not just posting manuals and details that can be found through open-source intelligence gathering. So, two takeaways right there. Now I want to switch gears and go to you, Pete. You know, we talked a lot about your vulnerability discovery; obviously, we hinted at a few of the CVEs and things that you brought to Medtronic. But how do you see the testing landscape changing for devices, and what should medical device manufacturers do on their own to learn from what researchers have done in the testing space?

A great question. Well, I think testing is going to have to follow the path of development, and one of the major things I see as very important for the medical device community right now is that there are a lot of devices that have update functionality, but a very, very small percentage actually use it. I think there's an opportunity right now to fix that problem before it gets out of hand. Update is one of those things where, if you don't design it in early on, you can kind of lose control of what it is you're trying to work with. It's a prime target for attackers, especially when it comes to low-power, questionably-connected embedded devices. So I would stress, as far as something to develop and work on: secure update. Get that working immediately so it can be used. The more confidence you have in that part of the tech stack, the more able you're going to be to respond to things like security incidents, or updates because of some binary or open-source package that comes in. And that, I think, will drive a lot more of the changes to testing that are going to happen. We've seen a trend now of devices that used to use homegrown protocols and bespoke radio communication systems largely shifting over to things like Bluetooth, something a lot more standardized that also tends to work with a smartphone.
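The "design secure update in early" point Pete stresses can be sketched in miniature. The toy below checks an authenticated manifest before accepting a firmware image; real implantables would use asymmetric signatures, hardware roots of trust, and anti-rollback counters, so treat the key handling, JSON format, and field names here as assumptions for illustration only.

```python
# Minimal sketch of an update-integrity check: a manifest carries the
# firmware digest and version, authenticated with a device-provisioned
# key. Real designs would use asymmetric signatures, not a shared secret.
import hashlib, hmac, json

DEVICE_KEY = b"provisioned-at-manufacture"  # hypothetical shared secret

def make_manifest(firmware: bytes, version: int) -> bytes:
    body = json.dumps({
        "version": version,
        "sha256": hashlib.sha256(firmware).hexdigest(),
    }).encode()
    tag = hmac.new(DEVICE_KEY, body, hashlib.sha256).hexdigest()
    return json.dumps({"body": body.decode(), "tag": tag}).encode()

def verify_update(firmware: bytes, manifest: bytes, current_version: int) -> bool:
    outer = json.loads(manifest)
    body = outer["body"].encode()
    # 1. Authenticate the manifest itself (constant-time comparison).
    expected = hmac.new(DEVICE_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, outer["tag"]):
        return False
    meta = json.loads(body)
    # 2. Reject rollbacks to older (possibly vulnerable) firmware.
    if meta["version"] <= current_version:
        return False
    # 3. Check the image digest against the authenticated manifest.
    return meta["sha256"] == hashlib.sha256(firmware).hexdigest()

fw = b"\x00firmware-image-v2"
m = make_manifest(fw, version=2)
print(verify_update(fw, m, current_version=1))         # True
print(verify_update(fw + b"!", m, current_version=1))  # tampered image -> False
print(verify_update(fw, m, current_version=2))         # rollback -> False
```

The rollback check is the part that matters most for the legacy-fleet discussion later in the panel: without it, an attacker can "update" a device back to a known-vulnerable image.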
And as that happens, we're seeing a shift toward a lot more open-source library usage in these products. That particular thing is a problem near and dear to my heart: we're working on a tool to basically audit the open-source ecosystem and mine for malware and backdoors. The vulnerability space, I think, is something people are looking at pretty aggressively right now, but no one's approaching it from the angle of who the authors of these packages are and how they work together. This is going to land on the medical device manufacturer as something very important. I spent a little bit of time in the defense sector myself, and when you think about how nation-states approach problems, there's a tactical and a long-term strategic goal to some of these things. Compromising the GitHub credentials of a developer who writes a package three levels upstream from something that goes into a medical device is a great way to lie in wait for access into an entire fleet of these things. And again, think about that kind of problem without a good, secure update mechanism: now you have something that is irreversible. So I see that as development trends change for medical devices, the testing requirements are going to change as well. I would love to see a universe where, when a device is designed, we have automated testing through things like fuzzing of the protocol, right? We have knowledge of different aspects of this that we can bolster, and we can use things like computers to help us, and have that built into the design, such that before release there are all these different tiers of testing that have happened, aside from just someone manually reviewing source code. That is still a strong improvement over requiring people to buy these devices, or call coroners' offices, and take them apart, dump firmware, reverse engineer, and try to find bugs.
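The protocol fuzzing Pete would like to see built into the design phase can be illustrated with a toy mutation fuzzer. Both the packet format and the parser below are invented for the example; real harnesses (coverage-guided fuzzers, protocol-aware generators) are far more sophisticated, but the core loop is the same: mutate valid input, feed it to the parser, and watch how the parser reacts.

```python
# Toy mutation fuzzer against a deliberately simple packet parser.
# The packet format ([1-byte length][payload]) is hypothetical.
import random

def parse_packet(data: bytes) -> bytes:
    if not data:
        raise ValueError("empty packet")
    length = data[0]
    payload = data[1:1 + length]
    # Defensive check a fuzzer should exercise: declared length must
    # match what actually arrived, rather than being trusted blindly.
    if len(payload) != length:
        raise ValueError("truncated payload")
    return payload

def fuzz(seed_packet: bytes, rounds: int = 200) -> int:
    rng = random.Random(1234)      # fixed seed so runs are reproducible
    rejected = 0
    for _ in range(rounds):
        mutated = bytearray(seed_packet)
        # Simplest mutation strategy: flip one random byte per round.
        i = rng.randrange(len(mutated))
        mutated[i] = rng.randrange(256)
        try:
            parse_packet(bytes(mutated))
        except ValueError:
            rejected += 1          # parser correctly refused malformed input
    return rejected

seed = bytes([4]) + b"ping"        # valid packet: length 4, payload "ping"
print(fuzz(seed), "malformed inputs rejected out of 200")
```

In a real harness, the interesting signal is not the clean `ValueError`s but the inputs that crash, hang, or corrupt state: those are the candidate vulnerabilities.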
The more people we can get involved with this at a productive level, the more valuable it's going to be, and that's, I think, a large challenge for the medical device manufacturer: how do you adjust the signal-to-noise ratio so that you're getting competent, quality people who really want to help, without wading through the deluge of things that may happen if you just open it up to everyone?

Yeah, I think there's a diverse approach to this. You know, in a prior life you were at a boutique shop, and we've used all different types of testing companies, we've internalized some of this, but you can find those niche boutique shops, or those individuals who can really get at certain elements, and they have really helped us.

If I may jump in: as a person who has researched many platforms that are highly secured, highly researched, heavily fuzzed and pen tested, including the Linux kernel and iPhone devices, I can tell you that it's impossible to put a device out there without vulnerabilities. Once it's out there and you have enough researchers, some of them will find the vulnerabilities. And if we take a look at the long-term strategy, and at what the traditional IT security world has gone through: we have endpoint security, and we have network security, and endpoint security is something sitting on the device in real time, capable of preventing and detecting attacks, regardless of whether the device is vulnerable, regardless of what type of attack it is, regardless of, you know, secure design, or whether the vulnerability is in the OTA or improper input validation, etc. So if you ask about the right approach, the right approach would be sustainable. Meaning, I think secure design is cool, I think penetration testing is cool, but it will never be sufficient.
You'll never have a 100% vulnerability-free device, especially when you use third-party code. I don't know how much static analysis or vulnerability assessment tooling you use, but such tools are not very good at finding vulnerabilities in binary code. And source code isn't the only thing that goes into your device, not to mention hardware components that have software within them: your Wi-Fi module or Bluetooth module contains vulnerabilities as well. You have vulnerabilities at the chip level. So in my view and in my opinion, only something operating in real time, capable of verifying the integrity of the device, capable of preventing damage, data theft, or basically preventing attacks, and on the other end also monitoring the device, meaning you're alerted if something bad happens or may be happening: this is the approach I believe in more, because I think this is why endpoint security has been so strong for many years now in the traditional IT space, and this is where IoT and IoMT should go. Obviously there are challenges in installing something on those devices. They are low-resource, they run real-time operating systems, they are diversified, they are extremely different from traditional servers or PCs, but this is where innovation comes into the picture. I think we can solve this problem; I even think that with Sternum we've solved part of the problem, for sure. But I don't think this challenge should keep us from putting something on a device to protect it. We should just make a breakthrough in how we see endpoint security for IoT devices.

Yeah, so I totally agree with what everyone's saying. Something just dawned on me the other day, right? The longevity of medical devices, for example, can be a decade. If you look at just some of the statistics, around 600,000 medical devices are implanted yearly in the US, and I hear all of those will last 10 years.
So if you start adding the numbers up, you're dealing with an ocean of legacy devices that at some point in their lifetime will be vulnerable. And it's not technology that you can just rip out of a body, or, you know, some people can only have that one monitor, depending on what device we're talking about; these things are built to last for a long time. So any solution will have to be a multidisciplinary approach, attacking the problem from all sides, but it needs to grow with the device: it needs to be updatable, it needs to be changeable, and basically grow with the times when the device itself can't.

I'm glad you brought that up. So I'm interested, and I'll go back around the panel, starting with Pete: how do we approach this legacy device problem? I think we already teased out updatability and, you know, forecasting the life cycle, but we have a lot of legacy debt right now that we're dealing with. So what are some ways that we can work through that? Pete, I'll start with you.

That's a super challenging problem. I think there has to be some preparation for the worst-case scenario, thinking about the attack surface, where the worst case might mean disabling radio connectivity on some of these things. You know, working with some of the medical device manufacturers, I've come to learn this isn't a software change for a website that can be pushed into the CI/CD pipeline and show up in 10 minutes, right? This takes legal and regulatory guidance and preparation, and submission to the FDA, and that can take a long time. My suggestion is: prepare for those scenarios now. Are we ready? How do we do this if we have to? Because the reality of explanting devices from hundreds of thousands or millions of people is completely untenable, right? And I realize that maybe there's going to be an inherent trade-off in the quality of care provided.
But that's probably a situation you get to only as a worst-worst-case scenario, because these devices keep people alive, and we want to keep enabling that, because, as V mentioned, some people wouldn't be here without these things, and we don't want to instill the wrong level of fear. But planning for those scenarios now, so that they can be enacted if needed, I think is one of the only things you can do for the legacy devices that don't have an update mechanism. If they do have one, then setting up the testing for this now and being ready for it is essential; it's an inevitability, a when, not an if.

I think what we are doing together with Medtronic shows Medtronic is a true pioneer in this support for legacy devices. What we are doing together, installing protection into a five-year-old device, enhancing security in the field, getting visibility and protection into a really old medical device using OTA updates, I think is a truly innovative play. I truly support these kinds of acts, and I have a lot of respect for Medtronic for pioneering such a thing. I think we can learn from it. We can provide enhancements to existing device fleets, of course with the right controls and the right MDM, to take it to the next level and move forward. It's a good example of what we are doing together, and I hope that we can pave the way.

Absolutely, baby steps in the journey, but exciting stuff. V, I know you have an opinion on this.

I always have. You're good. You're on mute. Oh goodness, I'm having audio issues today; it's the audio day from hell. My feeling on this is that we can't even start facing the legacy problems until we stop making stupid decisions and choices going forward, right? We have to nip this in the bud now and say, going forward,
we're going to do better. I haven't looked into a lot of it; I've googled it, I've read up on it, and I'm very excited about what you guys are doing with Medtronic. That certainly is a step in the right direction, but visibility is key, right? Pete and I have had discussions like, we don't know if the ICD or pacemaker has been attacked; well, do you even have the data to know that that's happened? If we want to deal with legacy, we need to start getting visibility going forward, because otherwise another year goes past and now we have another group of legacy devices that's going to haunt us for 10 years. So get it right going forward, and then start finding solutions for those devices that you don't have visibility on; find ways to get eyes on your products and your data. Because where you have a blind spot is most likely where you're going to be attacked, and it's not like a nation-state actor is going to say, you know what, I have owned your product. Researchers have the courtesy to come have a discussion. Very bad people looking to do bad things aren't going to have that discussion with you; you're not going to know they're there. It takes one patient to have your device and get attacked, and they ultimately pay with their lives. I can tell you there's a string of patients wanting to explant their devices, which is something I don't want to have happen. It scares me, because I look at where I was at 19 and where I am now at 33, celebrating my birthday, having two kids, having a master's, thanks to a device that kept me living. Otherwise I would be dead; it's as easy and as plain as that. It added years to my life I was not supposed to have. Take that technology away and, as a human race, we go backwards. So patients need MDMs, whether or not people in the security community want to see them as an adversary.
There are people like me that rely on them to develop the technology that keeps us alive. It's important to understand that it's a love-hate relationship. We're not all going to get it right, but if we can have the conversations, and we can fight it out, and we can find solutions that are super cool and super techie and super new, we should try them. What Sternum is doing and what Pete's doing is hitting the same problem from a different side. I'm concentrating more on: how do we detect and log these things? How do I investigate that? How can I have eyes on the prize sooner? That's my passion point. But we're all effectively covering this from a different side, which is good, because all sides are vulnerable and all sides are needed.

You're bringing it all together, right? Like, how can we get better visibility? Natalie talked a little bit about this, but I'd like you to go into more detail. What are your thoughts around proactive controls in the post market, which is the term we use for detection, logging, and monitoring? I know we've thrown surveillance out there, but to be more in line with the community, it's incident response; it's logging and detection capability. So how do you see proactive controls fitting into that model?

Yeah, it's a great question, and I think we should learn from other industries, which have already developed in this direction. You don't see devices today within enterprise or cloud infrastructures that are not monitored actively, that do not pop up alerts when something suspicious is happening. And I see the medical space heading the exact same way. I won't mention our solution, but I think there are many solutions that can provide monitoring and visibility into the device itself. And then I think that this kind of information and incident response should go to two main personas. One is the product security team in the MDM,
monitoring its distributed device fleets, understanding if somebody is researching a device, selling a device on eBay, maybe trying to hack into a device or stealing information. These kinds of things can be made visible with the right controls. And of course there is a technological challenge in supporting all devices, with old code bases which didn't come with visibility inside, without logging, without monitoring. But that can be fixed: it can be installed post market, and it's even easier to install pre market. The other persona is the hospital, the clinic. If those devices come with built-in protection and monitoring, they can be the eyes of the hospital at the beginning of an attack attempt. Imagine an attack attempt: just a month ago there was a ransomware attack on a hospital that started from a remote patient monitor. The hacker penetrated the internal network through the vulnerable IoT device. So imagine this device transmits information; imagine this device is capable of detecting the attack attempt and even preventing it. Not only have you prevented the attack, the hospital at that moment becomes aware that it is under attack and can act accordingly to secure other devices, maybe unpatched devices. You don't have to have the whole environment protected, but as you have more devices that are transmitting, visible, and protected, you have more visibility into your entire enterprise or hospital and can respond accordingly. So I think both personas benefit from this kind of visibility. And moving forward, I think we can use these kinds of tools to embed security and visibility, and I'm making a distinction between the two, because the kind of solution I have in mind is, first, a realistic solution to prevent attacks in real time, to prevent the damage, because with medical devices there is no time to waste. A successful attack could lead to lethal results, and you don't have enough time to detect the attack and respond in a week or two.
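The fleet-wide awareness just described, where one device's prevented attack tips off the rest of the fleet, can be sketched as a toy model. Everything here is hypothetical: the class, device IDs, and firmware versions are invented for illustration and do not reflect any real MDM backend.

```python
# Illustrative fleet model: once one device reports a prevented attack,
# every other device running the same firmware is flagged for follow-up.
class FleetMonitor:
    def __init__(self):
        self.devices = {}   # device_id -> firmware version
        self.alerts = []    # (device_id, firmware) of reported attacks

    def register(self, device_id, firmware):
        self.devices[device_id] = firmware

    def report_attack(self, device_id):
        firmware = self.devices[device_id]
        self.alerts.append((device_id, firmware))
        # Peers sharing the attacked firmware should be prioritised
        # for patching or closer monitoring.
        return [d for d, fw in self.devices.items()
                if fw == firmware and d != device_id]

monitor = FleetMonitor()
monitor.register("pump-1", "fw-2.3")
monitor.register("pump-2", "fw-2.3")
monitor.register("pump-3", "fw-3.0")
print(monitor.report_attack("pump-1"))  # -> ['pump-2']
```

The design choice worth noting is that the alert does double duty: it records the incident and immediately identifies which other devices are likely exposed, which is the "one detection protects the fleet" idea from the discussion.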
Sometimes in enterprise it takes six months to respond to a security incident. You don't have that kind of time with medical devices, so you have to block the attack in real time. The other aspect is really to become aware, and to become aware means to get alerted when something is behaving differently or suspiciously, or when there was a prevented attack attempt. And once you are aware that this kind of attack happened, let's say in the US, and you have devices with the same vulnerability that you can't update in another country, you now know what the vulnerability is, because you have visibility into it. You are capable of providing a fast response: alerting your patients, alerting the hospital, updating the devices with a patch. So I think it's an essential thing that we need to embed into the medical space, and we are on it.

I love that, because I think we're starting our journey to improve our visibility, improve our prevention in near real time, improve our detection capabilities. But we've also thought into the future about how we need to better partner with healthcare delivery organizations and hospital systems and provide some of that information, right, like the example you gave about the ransomware outbreak. Having dealt with breaches myself in a past life, obviously time is of the essence, and that's something we'd love to be able to help out with as an MDM in the future, using the techniques and tools we're talking about today. So V, I want to switch gears a little bit. We've been talking about all different aspects of security controls and capabilities, but you have a unique forensics background. How does that make you approach device security differently than, say, the traditional vulnerability disclosure scenarios?

Yes, I think forensics has this unique focus on scientific process.
That's the one thing that was hammered into my head: we need to follow a scientific process, so you need to look at it from every aspect. I was doing forensics for eight years straight, solely focusing on that, and realizing that I wasn't quite understanding all sides of it. We have a saying in forensics: we find the truth. We reconstruct what happened. So I decided to actually start red teaming, learning how to attack. Then I learned how to defend, and then I learned how to investigate. Taking that, I decided to start applying that multidisciplinary mindset, saying, okay, well, I want to know if we're being attacked. I'm kind of tired of being told that we don't have any data that supports any attack, while knowing that we don't actually have the data available. With more visibility, we can start seeing whether devices are actually currently being attacked, because right now we don't know. I spoke to multiple coroners around the world through law enforcement agency contacts, and the question I put to them was: if you have a cardiac patient with an implantable and they've passed away, that doesn't necessarily mean it's under suspicious circumstances, but do you get a programmer, pull the data, look at the time of death, look for any strange activity surrounding the device itself? And I realized that, yes, the devices get sent back to the manufacturer, which does basically a memory review to see that everything is as it should be. But people forget that programmers can be compromised, and something can happen from a programmer to a device, and we don't leverage the information that we have. So the important thing for me is that we have no ability at the moment, unless a researcher comes to an MDM or someone makes a grand gesture that they've hacked the device, unless it causes chaos or becomes public, to know. Somehow, we just don't know.
And that makes me uneasy as a forensics person, because in forensics we want to know. We want to know the bits and bytes of how something functions. Locard's exchange principle is a big thing I live my life by: everything that comes into contact with something else leaves trace evidence. I want to know what it is, but we don't have visibility. It's there, but we don't have the capabilities to make it visible. So I want to learn what we have, what we need, and what we need to do to kick ass to get there. That's my approach to what I'm doing with Medtronic: assessing what you have, what we need, and kicking ass to get there. And I'm seeing it come together with the various aspects you guys have brought, covering it from different angles. The forensics person in me says we can apply forensics in development; we can apply forensics pre market and post market. It doesn't necessarily mean it's an investigation, but it's creating the evidence that we need for the future; it's creating the data now for when we do need it.

I'm always drawn to the example of Microsoft and how they used other data sets to detect a large Windows vulnerability; I believe it was by mining data in Windows error reporting. And a big piece of what we're thinking about is that we have a lot of remote monitoring information; we know what is above the line and what's below the line, and we're trying to develop that anomaly detection. It can't only be done with that type of detection; that's one leg of the stool. But as we look at on-device agents, and at other ways we can pick this up with our fleet distribution infrastructure, I think that really has to come together to get the full picture, and all of you have hit on that element. And it's not to discount testing or threat modeling; all of that has to happen as well.
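The "above the line / below the line" anomaly detection on remote monitoring data can be sketched as a toy detector. This is a deliberately simple illustration, not a clinical algorithm: the telemetry values, window size, and threshold are invented, and a real system would use far richer models.

```python
import statistics

def flag_anomalies(readings, window=5, k=3.0):
    """Flag readings more than k standard deviations away from the
    mean of the preceding window -- a toy 'above/below the line'
    detector over remote-monitoring telemetry."""
    flagged = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1e-9  # avoid divide-by-zero
        if abs(readings[i] - mean) > k * stdev:
            flagged.append(i)
    return flagged

# Steady telemetry with one implausible spike at index 7.
telemetry = [60, 61, 59, 60, 62, 61, 60, 180, 61, 60]
print(flag_anomalies(telemetry))  # -> [7]
```

The sliding window is the "line": each new reading is judged against recent history rather than a fixed global threshold, which is what lets the same detector adapt across a diverse fleet.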
But we really want to emphasize this prevention in real time, and then this detection on the current devices and fleets that are legacy and out there live today, and what we can learn from that. I really think that's important, and all of you have been bringing a different element of that to this discussion, and it's helping us learn, as a medical device manufacturer, how to handle this situation.

Sorry, I always say data doesn't lie, right? There's always the saying, follow the money, or follow the evidence. We have so much data; we're just not mining it, we're not learning. I think it was one of your members, Sasha, who said we need to learn to listen. We need to learn what the data is telling us. That's the big thing: we have all this data, but we're not listening to it. Once we start intuitively tuning in and understanding the data, those numbers don't lie. And I think an MDM can only really do threat analysis then, because if you don't know what you have, how do you know what to protect? Otherwise you're always on the back foot. You need to get ahead of the problem; you need to know where you're most vulnerable. You shouldn't wait for that to become public knowledge.

So I want to shift gears one more time here. We hear the concept of zero trust in medical devices, and I've talked to, I believe, each of you around this topic. How do you think that concept can be applied from an MDM perspective, or from a medical device perspective? I'll throw that out to the group; whoever wants to latch onto that one can start us off.

Okay, I'll roll with it. I was waiting for you. I know, you've got to make sure. Go ahead. I love zero trust. It makes me excited, because for once we've turned security on its head: we're not going to trust but verify, we're simply not going to trust, right?
Because the nice thing about zero trust is you can actually understand devices. The ideal is that now we don't have a perimeter; we harden everything. We probe a device, and if it meets the specific standards, that device is allowed access to the minimum it needs access to. It's like a patient monitor that doesn't have to be on the Wi-Fi and doesn't have to link to the accounting system or the patient records. We have this habit of trying to make everything accessible, but it becomes too accessible. We take it to the extremes: we either make things too secure and unusable, or we go the other way. Zero trust has really big plus points for me, because now we can treat device manufacturers independently. We're not going to have the same controls across the board, because everyone builds their devices differently. You can understand the device and apply the appropriate rules, privileges, and access. So I think this is a really good solution as we move to perimeterless remote monitoring and medical services. I had my first video consultation today, which was something very strange, and yet it felt normal. The perimeter has just been broken. COVID-19 came in, swept healthcare off its feet, and we are facing new territory we'll have to build while trying to catch up. This virus has crippled healthcare, and I think security should never put such a strain on healthcare that we cripple it more. We should be making it safer and easier to use. That's our responsibility as security people in this field, and it's the responsibility of the MDM. But zero trust is the ability to do catered security based on the device, not necessarily the same rule for everyone. So I'm very excited about it.

So I agree with you, and I have a slightly different interpretation of zero trust in the context of medical devices.
For me, zero trust relates to what we talked about earlier with third parties and embedding communication models within the code: communicating with the patient's mobile app, maybe the home router, each medical device with its own communication. I think zero trust should really mean that we can't trust any of those. We can't trust any of the third-party code, open source or closed source, that we embed. We can't even trust the operating system that a medical device uses, whether it's commercial or Linux-based. We should be expecting vulnerabilities throughout the entire supply chain: the code, the hardware, the software within the hardware. And of course we can't trust that the patient didn't install a malicious application on his mobile. The last thing we want to trust is the human being not making mistakes. So for me, the concept of zero trust means that the kind of security we choose to implement should take all of this into consideration, and should make sure that we keep the integrity of, maybe not the entire device, maybe that's impossible, but essentially the critical operation of the device, which should remain secure regardless of all the things I mentioned. If we talk about insulin pumps, you want the functionality of injecting insulin to be the most protected functionality of the device. The same goes for personal health information on the device itself. So for me, zero trust means taking all this into consideration when designing a security solution.

Exactly. I like what Natalie said. I think it's the difference in design, and V might agree; I talk about this all the time, to anyone who will listen. I think a great example is cars. When the systems were built to allow a computing interface in a car, they weren't connected to anything.
They designed according to that, and they scaffolded on it over the years, and then all of a sudden someone came along and tacked on internet connections and Bluetooth and Wi-Fi. What you should do is go back and redesign everything to consider that new approach. The zero trust concept to me means we think about defending the product in every place it can be attacked, from the initial idea stage. And it's analogous to what I think Natalie and V were saying: no one operational environment is any more or less secure than any other, because they're all the same. Building to consider that takes concerted effort from the very, very start. Approaching the problem that way is another one of those shifts we have to make, from legacy devices into what is going to be the new normal. The quicker a manufacturer of any kind of product can start considering that, the easier its life is going to be in the long run.

I must add just one thing, because I think we keep repeating: think about it in advance, you have to design it in. I must mention the other side: security has always been the last thing to be incorporated. Everybody would like to go to market faster, everybody likes to just build their devices, build their technology; nobody likes to think about security. And I'm saying it's okay. I don't want to say you have to think about it in advance, because history teaches that nobody thinks about everything in advance, and that's okay; security always comes last. As someone who is looking for solutions, and not just looking to emphasize the problem, I'm looking for solutions that can be developed and installed even if you didn't think about security. And I have to say, again, it's best to think about it in advance, but I'm just saying it's okay if you didn't.
Nobody holds you responsible. I will hold you responsible only if you didn't try to fix it later on.

I'm sorry, I have to disagree there. Sometimes I want to kick an engineer's ass and say, what the hell were you thinking? But it depends. It's like when you find a hardcoded password like "1234"; that's the place where you can think, what the hell were you thinking? But I'm just saying the more advanced stuff is harder to think about. It's a twofold problem. We're dealing with legacy, right? I call it the legacy problem. We have to stop creating legacy problems, so we need to address security ahead of time. But we also have to face the fact that previously we didn't address it ahead of time, and there's nothing you can do. You're not going to take these things back; you'd have to cut me open, take my device out, give me a new one, and I'm not going to let that happen. So we have to deal with that.

We have to be realistic, meaning I think you are completely correct, but realistically, looking ahead, I don't think we're going to be thinking about it in advance in the years to come, because new problems will appear, just as in the 50 years prior to where we stand today, companies didn't think about it in advance. We can teach that, we can say that, but to be realistic, I don't think it's going to happen. People will only handle the problem once it bites them in the ass, basically. It's human, right?

I can't disagree with you, because history often repeats itself. What's interesting is we're covering it like a triangle, from different sides. I'm concerned with how we include these diversified devices on a healthcare network, right?
Natalie's product deals with legacy devices and new devices, and Pete's solution covers it from a different angle. So it's diversified opinion doing the same thing, each in a different way. And this is why I think research is important: we all think differently. It's a big problem. It's massive.

Yeah, I agree with what you're saying; we're all coming at it from a bit of a different perspective. And I do agree, Natalie, that there are going to be new attacks we haven't thought of today that come out in the next year, three years, five years. But we're talking about medical devices today, and I think we absolutely have to require better work on the design. We're here because of the evolutionary walk of how medical device manufacturers built businesses to help patients; computers became part of that, and here we are. How we go forward, though, can be changed, and I think the idea that we'd be accepting of someone not working hard at that right now is dangerous. I agree that we need things on the other side to prepare for the inevitable, the new things that come out that we haven't thought of, the next thing that shakes the industry, so to speak, and we want to be resilient to that; completely agree. But in one case, alongside a medical device, I started reverse engineering an e-bike, and the e-bike was orders of magnitude better, because the e-bike maker wanted to prevent someone from bypassing the speed controller and going faster than the bike was regulated for. They did an incredible job of that, and seeing the stark contrast between those two, the difference was that they designed it to be that way. I know I harp on that a lot, but I think it's an important point.
I think we can all agree that there are some fundamentals we need to align on, and this is why we want this diversity of experience, background, and opinion: it helps us grow as an industry. We have time for one quick final question, and I'll go around the panel, starting with you, Pete. How can researchers, smaller shops, and those attending the Biohacking Village get more involved with device makers?

Even the way things work now, I think a lot could be drastically improved. I mean, getting your hands on research devices to start learning this space. One of the challenges people newer to medical device research are going to run into is becoming aware of the monstrosity that is an MDM, and, when you find a bug and suggest a fix for it, the realities at scale of making that happen. So on one hand, I would say it would be wonderful if MDMs started exposing programs for researchers to apply to and be selected for, to start working on this stuff. On the other hand, I think there's a great spot for education on the MDM side, to show: what do we need to do to deliver a hypothetical patch to this hypothetical device? What does it actually look like, the legal ramifications, the regulatory ramifications? Because without that information, a lot of security researchers will be frustrated at the timeline, because they don't have any understanding of what's really happening, especially coming from a place like the HackerOnes and the Bugcrowds, where it's a pretty smooth process when you're looking at web properties. This is a very different industry, a very different threat model, and a very different process you have to go through, and I think there's an education opportunity there.
Natalie, I know you've had your journey with medical device manufacturers. Same question to you: how do you think researchers and smaller shops can get more involved in making a difference?

So I completely agree with Peter, and I think that bug bounty programs, and exposing the inner processes of medical device development, which is extremely different from traditional development, with different regulations and different constraints, could help people get more engaged. I think assigning a person or a unit responsible for that is also super important; with some medical device companies, you just don't know who's the right person to approach. It's not the CISO, right? The CISO is not concerned with medical device security; the CISO is concerned with the network. So opening this better channel is good; transparency is good. Basically saying: hey, we want you involved, we're okay with it, we're not going to send you terrifying legal documents if you say that you found something. That can be helpful. And I think the research community, thinking about it from a different direction, can also help educate. What we sometimes do is go to medical device makers and run exploitation workshops: we explain what a vulnerability means, how to exploit vulnerabilities, how to write code with security in mind to avoid vulnerabilities. We had this experience with Medtronic as well; it was amazing, it was super satisfying for us, and I believe it was also interesting for the medical device engineers and software developers to suddenly see the other side of the code and what can be done with the code, and how devices are being exploited. It can raise awareness and help them understand what the researchers are doing and why they're doing it. This can really be an open-minded and open-hearted conversation, which we had, and I think it's a great example, and it's not very common among the device manufacturers and OEMs.
Some of them do not respond well when researchers approach them with vulnerabilities, and I think positioning yourself as an MDM saying, we respect that and we'll work with you, is amazing and the right approach.

Last but not least, V, you get the final word.

I'm going to do a call-out to all my fellow biohackers, my friends, my goons, my family. The biggest thing I've learned about the constraints of dealing with vulnerabilities from an MDM perspective: when you do a proof of concept, approach it like a report, approach it scientifically. Give the MDM as much information and as much code as you can. They need to verify what you have said, and if your document isn't comprehensive and easy to follow, they're going to have to reverse engineer things themselves, and then the 30-day turnaround you expect from them is going to take 60 to 90 days just to figure out whether it's applicable to their product. So if you come with a problem, come with a solution, right? Let's not just focus on what we've broken; let's try to fix it. My challenge to all my hacker friends: let's not just try to break things, let's try to solve things. The term hacker is obviously sometimes associated with cyber criminals, and for me it's a passion point, because hacking is what I do. I'm not a criminal. I try to figure out what's wrong so that I can fix it, using alternate ways and means of thinking. That's what a hacker does; that is what a hacker is, for me. But we're also scientists, and in science we need to be thorough; we need to do the project right. And to the MDMs: don't be such hard asses. We're trying to help. I promise you we're not your enemy. Just be a little bit nicer. Have a conversation. Buy someone a coffee. If you see someone at the Biohacking Village, I promise you, I haven't met a hacker that bites unless asked to, right? We don't bite.
We like having tech conversations and we like geeking out with you. But it goes both ways: we both need to open our hearts and open our minds and realize we're working towards the same ideal, and working against each other is not going to get us there any sooner. From the patient side: keep on doing the research, make this stuff better, because each party needs the others. It's a trifecta: the medical device manufacturer, the researcher, and the FDA or the regulatory body, depending on where you're from. Those are the things that are going to make the future better. And those are basic engineering practices: a triangle is the strongest shape you can reinforce something with. So let's reinforce this with teamwork. That's all I have to say.

With that, I want to thank the panel. Drop the mic. Pete, Natalie, V, we're good to go. Have a good DEF CON, Biohacking Village everyone. Thank you. Thank you very much.