Hi, hi. It's 10 AM guys, hi. OK, so quick and dirty intro of the biohacking village: this is our fourth year. Also, I talk fast. Please let me know if I talk too fast. Fourth year. First year we had nine talks. Second year we had 27 talks, two demos. Third year we had 37 talks, three demos. And I came to the realization that we weren't really encompassing the whole medical ecosystem. So what we did this year was we have a talks village next door in Palermo. We have a medical device hacking village with four different companies that brought their medical devices. We have BD, we have ICU Medical, Philips, and Thermo Fisher, as well as antique medical devices in case you want to look at those. And in the village next door to that, in Siena, we have an implant village, implant slash wet lab. So you can go on the website, which is villageb.io, and sign up for the implants. And that's going on today, all day, and Saturday morning. Saturday afternoon we're having a wet lab with Michael, who is doing a talk tomorrow, as well as a badge workshop. The badge this year, we changed it a little bit. We did it based on microfluidics. Do you know what that is? Some of you do. If you don't, come over tomorrow; the badge maker is giving a talk about it, and he's going to have a lab to show you guys how to use it. Really awesome. And then Sunday, we're going back to implants. I think that's all I got, so I'm handing it back to you. Hi. Great news, I'm losing my voice. Perfect timing. I'm just going to start off by saying a huge thank you to Nina, all the other organizers of the biohacking village, and particularly our sign language interpreter, who I'm hoping is going to make me sound a lot smarter. Please, I really, please. OK, so before I get started, I'm just going to have a quick disclaimer.
I think there's a decent chance that there'll be people in the room who have participated in some of the disclosures I'm going to talk about, or who work in organizations involved in them. You may well have more information about the disclosures than I do. I'm sharing an external perspective on any that I wasn't directly involved in. It's really just, here are the lessons that I grabbed based on what I saw happen. So don't judge me too harshly. And if you want to educate me after the fact, I'd love to learn more about it. But please don't stand up and start screaming at me. I don't react super well to that. I get sweary, and then it's really awkward for the interpreter; she has to deal with that. So who am I, other than just being sweary and British? I'm Jen Ellis. I work at Rapid7. And the general thing that I do is I try and figure out how we can create positive social change around security, since security is a societal issue. As part of that, we do a lot of vulnerability research, which we disclose through coordination to try and help people really understand the real risks. We also work with the government. We try and get policy changed to reflect better cybersecurity practices. This is actually me testifying to Congress, where I brandished a vulnerable toy at them, which succeeded in making them think I was completely insane. So that was good. In terms of vulnerability disclosure, I have worked in vulnerability disclosure for, I don't know, eight years maybe, a bit more than that. I have probably worked on a couple of hundred vulnerability disclosures in that time. The thing that's interesting about me on this topic is that I started off on the reputation management side. So I was the person in the technology vendor organization who would be like, oh, this looks like it's bad. We should kill it with fire. And so to begin with, I was like, oh, researchers are bad news. We don't like them.
That's terrifying. And then I kind of had my come-to-Jesus moment. I got converted, and now I testify to Congress about how we can protect security research. And some of my best friends are security researchers. So I've done a lot of vulnerability disclosure. And one in particular that I did in 2016, which I'm sure is your main memory from 2016, because not much happened that year. I worked on a particular vulnerability disclosure that was in the medical device sector. The reason that I did that was this guy here, who some of you may know, Jay Radcliffe. Jay is a type one diabetic. And in 2011, his primary care physician recommended that he move onto an insulin pump that would be connected to his body. And because he works in security, he thought, well, if I'm gonna connect this thing to my body, I'd kind of like to know what the deal is with it, how secure it is. So he did a bunch of research on it. And at that time he became somewhat known in the security community as a medical device researcher. He also had a lot of learnings from that experience. It wasn't his favorite thing ever, because what happened was there was this huge news hype about it, and understandably, patients got really concerned. Back in 2011, medical device research wasn't being done a huge amount. The press still went into a frenzy over it. There wasn't a lot of sophistication. And so there was this huge hype cycle. Jay ended up with parents of patients reaching out to him, saying, hey, my kid has one of these devices attached to them, should we take it out? That's a really awful position for a security researcher to be put in. And it was actually quite traumatic for Jay to be put in that situation. He's not a doctor. He's not the person who should be making that decision. And so the great thing for me working with him in 2016 was he brought all that experience and that knowledge and that thoughtfulness with him when he again started to look at an insulin pump.
And again, it was because his doctor said, hey, I know you didn't want to have one last time, but you should reconsider. So there we were, five years forward. And we were looking at the Johnson & Johnson Animas OneTouch Ping. Now the way that this bad boy works is there is a device that connects to your body that delivers insulin. And then there is a remote control that you would carry with you that communicates with that device. The remote control monitors your blood glucose level, and then it tells the device, hey, it's time to release insulin. It communicates via radio frequency, and that radio frequency can be either disrupted or spoofed. So you can either withhold insulin delivery, or you could potentially push a fatal dose. That was Jay's research. That's what he discovered. So he came to us, he at the time worked at Rapid7, doesn't any longer, but he did at the time, and said, this is what I found. And we were like, okay, well, we should do something about this. We should go out. But we knew that there were some challenges. Vulnerability disclosures are not always super popular. And so we had a good idea that this one in particular might cause some heartache. It was gonna be a little bit different. Even with the experience that I and a bunch of Rapid7 people have around vulnerability disclosure, the number of them that we've done, I mean the team that we have has worked on hundreds and hundreds, we knew that this one was gonna be a little different. And there are a few reasons for that. The first was, we were talking about something that involved life and death. And we didn't wanna go crazy and say, like, people will definitely die. Like, that's it, we're definitely gonna kill people with this thing. Because the reality is that for an attack like this, you have to be within proximity. You have to have the right technology. You have to know that the individual has the device, and you have to know what to do about that.
So you're talking about super targeted attacks, and your average person doesn't have the right profile to be targeted by that. So we knew likelihood was low, but potential impact was high. And we tried to balance those things. Still, when you're talking about something that can result in death, it just means you handle it very differently to how you might if you're talking about something that's to do with networking, for example. Another thing that made this a little bit different and a little bit challenging was, I don't know if you've heard of Johnson & Johnson, but they're kind of a big deal. Rapid7, on the other hand, is, you know, we're growing, right? We're growing. So we were aware that we were gonna be reaching out to this organization of hundreds of thousands of people that would be very complex internally. Probably would have a varying amount of security knowledge internally. We knew that they would have an army of lawyers and an army of comms people who were much like me in my old job, where I was like, kill it with fire. And so we thought that was all gonna be quite difficult to deal with. We also knew that they operated in a highly regulated environment, and that that would probably make them more defensive about getting a vulnerability disclosure, because there's a lot of fear about what their regulator is gonna do, how they're gonna respond. And when we were doing this disclosure, it was not that long after the FDA had come out with their postmarket guidance. So we knew that there was definitely gonna be tension around that. And then the last thing that really changed the dynamic for us was we knew that with Johnson & Johnson, typically when somebody knocks on their door and says, hey, there's a problem in the product, it's more like, and here are 500 of my closest friends, and we have a class action suit for you.
And they're not really knocking on the door and saying, hey, we found a problem in the product and we wanna help you fix it. So we knew that there was a likelihood that they would probably, one, not have a reason to trust us. And two, err on the side of protectionism and conservatism, and basically be like, here are our lawyers, please do chat with them. So we were apprehensive going into the disclosure. We got lucky. We got super lucky. And the reason we got lucky is because of a guy called Colin Morgan, who some of you may know. By the way, the subtitle of these slides is an ode to Jay Radcliffe and Colin Morgan, who I love. And they basically took us through this process. Colin worked in the security team at Johnson & Johnson. He had been engaged with the guys from I Am The Cavalry, some of whom are in this room. Quick thank you to them for everything that they do. And he had his own come-to-Jesus moment where he'd been like, hey, we make stuff that impacts people's healthcare and their safety. And so we should probably think about security from the ground up in that. And so he had been on a two-year journey inside Johnson & Johnson. I made it sound like he went on a journey to the center of the Earth. And maybe he did; he probably fought great battles. And he was trying to get Johnson & Johnson to change their processes, build a vulnerability handling program, and he succeeded. So he got them to build this program. We happened, just coincidentally, to knock on their door a week before that program launched. Which was either really great timing or really terrible timing, depending on your perspective. We ended up being his test case, essentially, is what happened. And so there was a lot of education through the process. But because Colin was there, and because Colin was very understanding of what vulnerability research is all about, he understood what our intent was. He gave us the benefit of the doubt.
He pushed the engineering team to really take it seriously, because we had that person internally who could be an advocate for us. It made all the difference. It was a massive game changer. And what it meant was that through the process we were able to constantly come back to this unifying point of how do we protect the patients? Once we had got to a level playing field, where we both recognized we wanted to protect patients, to an extent it didn't matter that the details were difficult. And we would argue over the details, and frankly, Colin and I had some pretty late-night phone calls so that we could vent at each other before we got on the call with all the rest of the team the next day, where we could be really calm and be like, this is what we think we should do. But because we had that opportunity and we had that trust, we were able to really prioritize what was best for the patients. That doesn't mean there weren't surprises along the way. One of the coolest things was that after J&J had verified the vulnerability, they decided that they would proactively communicate with patients. The FDA told us that this was the first time a US medical device manufacturer proactively communicated with patients about a cybersecurity risk. So that was great. And we were super excited when they told us they wanted to do that. And then they told us that they were sending out letters. And I felt like I had gone back in time. I didn't realize that letters were still a thing, and I didn't really know how to process that or plan around it. So we had some interesting conversations about at what point does a thing become public? Because they wanted to have us do our part of it at the point that the last letter was received. And I was like, but the first person who gets the letter is gonna take a photo of it and go on Twitter. Like, have you met social media? Which for some of J&J is possibly a no.
So again, because we had built this relationship and this trust, and we had been really partnering with them on timing and all those kinds of things, we were able to solve this problem and come to an agreement. They actually changed the way that they do the letter sending to fit in with our recommendations. And sure enough, the first wave went out, someone took a picture of it and stuck it on Twitter, and I was kind of like, I'm just gonna send this to you and not comment on it, I'm just gonna leave it there. And so it was great, we managed it together. And as I said, every time that we disagreed, and there were times where we were like, no, it's this way, no, it's that way, our unifying thought was always, how do we protect people best? We didn't wanna cause panic, we didn't wanna create an opportunity for adversaries, and so we were able to come together to save the world, if you will. Because we took that approach, it meant that when we went out with the story, J&J were able to control the message. They were able to go out with this really positive message, and that meant that the press covered it as a proactive move from Johnson & Johnson that looked very positive towards the patients. And in fact, the response that we got was very positive. People thanked us for taking a really thoughtful approach. The advice was not, hey, you should rip this thing out of your body; we were really clear from the beginning about that. Actually, for people who are interested, the advice was: don't use the remote control. You didn't need the remote control for the pump to work. You would just have to manually control the pump, but it would still work fine. So that's what the advice was. And we got a lot of people who reached out to us and thanked us for the approach that we took, and we didn't have hysteria from patients. Nobody asked Jay if they should take their kids off a thing.
He didn't have to deal with the trauma of that. And the FDA were also super positive about how all of this went. I'm just gonna give you a second to read this. I normally hate telling people to read a slide, but I think me reading it out would be a little bit weird. The net of it is that the FDA basically said that this is the exemplar that they want other manufacturers and researchers to follow, because of the way that it minimized patient impact. So as a quick recap, what did we learn from the process? I cannot emphasize enough the importance of investing time in building trust and empathy, regardless of whether you're on the manufacturer side or the researcher side. You need to approach the table with the benefit of the doubt and figure out how to get to common ground. For us, that common ground was all around what's best for patients. It will be different in every scenario, but I think you just need to identify what it can be and use that as your guiding principle. Another big learning for us that we were really clear on is that just because a thing can cause harm, it doesn't mean it will, and we wanted to really avoid creating fear, uncertainty, and doubt. We didn't want to be sensationalist in the way that we communicated this out. The whole incident with the letters highlighted to us this whole thing about expectations and being really clear on the detail. If we hadn't gone into every detail and really zeroed in on it, then they would have sent out letters and we would have had no clue what was going on, and the next thing we know, we would have lost control of the narrative and it could have blown up on us. This is a note to researchers. So Rapid7, as I said, we do a lot of vulnerability disclosure. We have a published process. The process is: we go to a vendor and we tell them about it, and 15 days later we go to CERT, and then CERT's clock starts, and CERT's clock is 45 days.
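For concreteness, that published clock, 15 days of vendor-only notification followed by a 45-day CERT window, is just date arithmetic. The sketch below is purely illustrative; the function name and return shape are mine, not anything from Rapid7's actual tooling.

```python
from datetime import date, timedelta

def disclosure_milestones(start: date) -> dict:
    """Sketch of the coordinated-disclosure clock described in the talk:
    15 days of private vendor notification, then 45 days on the CERT
    clock before the earliest public disclosure date."""
    cert_notified = start + timedelta(days=15)        # vendor-only window ends
    earliest_public = cert_notified + timedelta(days=45)  # CERT clock ends
    return {
        "vendor_notified": start,
        "cert_notified": cert_notified,
        "earliest_public": earliest_public,
        "total_days": (earliest_public - start).days,
    }
```

Calling `disclosure_milestones(date(2016, 4, 1))` gives a 60-day total, though as the talk notes, the clock is a floor for negotiation, not a hard trigger.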
So in total, our published timeline is 60 days before we will go public. However, and this is on our website, if a vendor is engaged with us and we see that they're taking it seriously and they're working on it, we'll obviously try and be flexible with them. We have no desire, and there is no benefit for anybody, in preempting and going out before people are ready, particularly when you're talking about something like a medical device. Because Johnson & Johnson were engaged from the beginning and we knew that they were taking it seriously, we were happy to wait. So in total it took about four months from beginning to end, which I don't think is a particularly bad timeline. I think that's a completely reasonable one given what we were dealing with. I think for researchers, you have to decide where your line is. I'm gonna talk a little bit later on about a disclosure that happened yesterday where I think the timeline has demonstrably been too long. And so it is hard as a researcher to decide how much leeway to give, I was about to say rope but I'm gonna back off from that, how much leeway to give the manufacturer on timing. And I think you have to judge it based on, one, the potential harms, and two, how engaged you really think they are and how seriously you really believe they're taking it. So: public disclosure can be handled in a way that does not cause trauma. I think that numbers six and seven here were probably the biggest learnings for Johnson & Johnson. I think they would never have anticipated that this could be the case going into it, based on their prior experience of generally bad things, whether it's medical devices or not. So I think this one was a pretty big learning on their side. So that's the J&J one that I worked on personally. Now I'm gonna talk about some others, because at the same time that this was going on, this was also going on. I'm expecting that some of you are fairly familiar with this.
And so I'm not gonna pick at it too much and go into what Muddy Waters did or didn't do, and who was in the right and who was in the wrong on this. What I will say is the research itself was done by another entity, not Muddy Waters. And once they had that research, they gave it to Muddy Waters, they sold it to Muddy Waters, and Muddy Waters then used it to short St. Jude stock. I think the lesson here that I would take for researchers is: there are lots of researchers who find things and they don't know how to handle disclosure. They don't want to handle the disclosure. They're afraid of legal repercussions. They're afraid of taking on a big vendor. They have a day job that doesn't allow time for it. They discovered it during the course of their work and their employer doesn't want them to do a disclosure. There are all sorts of reasons that researchers don't wanna do disclosures themselves. We recently did a disclosure in an electronic medical record system on behalf of a researcher who had discovered it in the course of doing his job, and his employer didn't want him to be the one who stepped out into the limelight and took it on. I would just say that as a researcher, if you are going to hand your research off to a third party, just be really careful about who that third party is, and make sure they're aligned with your goals. Make sure that you're not handing it off to somebody who's gonna handle it in a way that you perhaps wouldn't have wanted, and that you're not gonna lose control of the narrative in a way that feels concerning to you. There are lots of third-party bodies that you can work with. For example, there's ICS-CERT, which I understand is now no longer ICS-CERT, it's now NCCIC, but the function is still there. It's just under a different name. So you can go to NCCIC, and they'll coordinate disclosure for you. You could reach out to the FDA, and they'll coordinate disclosure for you. There's all sorts of things that you can do.
If you want to hand it off to somebody else, there's also ZDI, if you wanna look at another body. The I Am The Cavalry folks will be happy to help you. You can reach out to them. And you can also check and see whether the vendor has a bug bounty program, and if so, you could reach out to the company that manages that. So there are lots and lots of different ways you can do it that mean you won't necessarily have to lose control. The other thing I think we can take as a lesson from this is that not everybody is motivated by a concern for patient care. This one happened, I wanna say, around about March this year: Philips disclosed a number of vulnerabilities in their imaging systems. And I wanted to talk about this one because it has privacy concerns more than harm concerns. And I think there's a lot of dialogue in the media, particularly around medical device vulnerabilities, that makes it seem like everything is a plot twist from Homeland. And the reality is a lot of it's not anything to do with that, but it can still have a really big impact. The other thing I wanted to flag about this is that Philips takes vulnerability research really seriously. They're actually pretty sophisticated in the way they handle this. And because they do it habitually and they've built up really great processes, it's become kind of business as usual. And I mean that with the highest possible esteem. I don't mean that as, like, oh, having vulnerabilities at Philips is business as usual. Everybody has vulnerabilities in their technology, because it's made by humans. And as we know, to err is human. I think the key here is that they've got to the point where they've taken the hysteria out of doing it. And so they've got to a point where what they're showing is just real transparency to their customers, real accountability to their customers. And they're demonstrating real responsibility in addressing these issues really quickly. I think that's awesome.
I think that's where we should all strive to get to, frankly. GE. This happened also in the spring, I believe. This was a series of technologies from GE that have hard-coded or default passwords. I like this one because that is a thing that we have known about for a really long time as a no-no in security, but people still do it, right? And I think what that highlights is that, generally speaking, there's still a pretty big disconnect between engineering teams and security. And I think there's a lot to be done. If you in this room work at a medical device manufacturer, the thing I would urge you to do is take Colin's journey and figure out: how do you go to engineering and build security in from the ground up? How do you secure by design? How do you educate them on things like why hard-coded passwords are a really bad idea? And I think we are seeing progress in that. There are people who have taken that journey, who've taken on that battle, and there are lots of people who really, really care about the product they build and how it impacts their customers. And so they want to get this right, but there's still a way to go. Even organizations as sophisticated as GE, who've been around for a really long time, still have challenges with this stuff. The other thing that I like about this example is the researcher who disclosed this is a guy called Scott Erven. You guys might know Scott. He does a lot of research in medical. He's been doing medical device research for a really long time, absolutely as long as Jay has. They were a couple of the first people doing it. And when Scott started, and I think he would be fine with me saying this if he were in the room, he was pretty bombastic about it, and he was pretty gung-ho with the vendors about how to address this stuff. And that was really terrifying for them. And so what would happen is he would go to vendors, and he had all this goodwill and he wanted to help them fix it and solve the problem.
It was all very well intended. And they would be like, oh God, and they would take a massive step back, and then they would disengage. And then the process would become slow and unwieldy and not great. It was hard to build the trust and the empathy that we talked about. And Scott has massively changed that. He's built his credibility in this space. He's built an approach that is based on building credibility and trust. And now he has these great relationships where he can go out and talk about this stuff and really see people taking it seriously and responding to it really quickly. And I have a lot of respect for that; a lot of kudos to Scott for doing that. So my last example that I'm gonna go through is not a company, it's a person. Again, somebody who's been doing medical research for a really, really long time, and I'm sure that lots of people in the room know about Billy. He's the founder of WhiteScope. Yesterday, Billy released some research with his research partner, Jonathan Butts from QED, and the research was on Medtronic pacemakers. Now the thing that's interesting about this one is that Billy and Jonathan reached out to Medtronic two years ago. It took Medtronic 10 or 11 months to even verify the vulnerabilities. Even with my, hey, let's give everybody the benefit of the doubt and let's take time and make sure we do this right, basically I sound like a goddamn hippie, even with that, I would say I think that 10 or 11 months is an outrageous amount of time to verify vulnerabilities, particularly when you're working with researchers of the caliber of Billy and Jonathan, who I'm sure were absolutely walking them through the process and investing time and effort in helping them understand the risk and what happened. I'm sure they had video demos. I'm sure they had great proofs of concept. So verification taking that long is kind of outrageous.
And now here we are, two years after initial vulnerability disclosure, and those issues have still not been addressed. And I think, you know, you can understand: technology is complex. This stuff takes a really long time to develop. It takes a long time to fix if the issues are profound. However, there needs to be a really strong focus on it. There needs to be a real response of taking it seriously. And I think this is a situation where, although the vendor signals that they're interested in vulnerability disclosure, they have a vulnerability disclosure path on their website, all that kind of stuff, it seems as though, and again, this is me, a third party looking in on it from the outside, I don't know what difficulties they deal with internally, but it does seem as though that commitment to handling the vulnerability disclosures and actually internalizing them into the product is perhaps not as strong as it seems from their website. And I would really encourage them to change that. Okay, so a recap of the learnings that we've had from the third-party disclosures. Again, be careful who you partner with. Not every medical device disclosure or vulnerability is going to be a matter of life or death. I'm really, really happy to say that that's the case. As you build your experience, they will become less disruptive, which is great, because you want less disruption for your customers. I mean, ultimately that's the goal, right? Make your customers happy, don't get sued. Everybody's a winner. That's good stuff. ICS-CERT can help manage the process, although it's NCCIC now, I'm sorry. Apparently I was a little behind the times on that one. NCCIC, for those who don't know, is the National Cybersecurity and Communications Integration Center. And if you Google it, there'll be information about how to work with them on disclosure. There are a lot of known security problems that continue to arise.
Going back to Johnson & Johnson and the one that we worked on: the root cause of the issue was that that communication was not encrypted. But people have known about encryption for a pretty long time. It's kind of a thing. I mean, it's enough of a thing that the government wants us to backdoor it. So there's no excuse not to build encryption in, particularly when you're dealing with something this sensitive. The other thing that I wanted to talk about is that with all the examples I've given, not one of them has required reauthorization of the product through the FDA. And there is this great myth that goes around, that I hear all the time, of, oh, we can't address this issue because we'll have to go back through reauthorization. That's just not true in most cases. And then the last one is: having a vulnerability disclosure email address on your website, or a form to fill in, is not the same as having an actual process or program where things will really get prioritized and get done internally. And you need both. A welcome mat doesn't really amount to much if there's no house on the other side of it. So with all of those learnings in mind, why does all of this matter? I'm guessing that everybody is familiar with what this is. If not, this is what WannaCry looked like. So you can probably tell from my accent that I'm British. WannaCry hit hard. And when WannaCry hit, there were a lot of hospitals in the UK that closed, 80-odd hospitals. A good proportion of those, I think 60-ish, hadn't actually been hit, but they didn't know to what extent they were gonna suffer. They had no way of knowing what level of exposure they had or how bad it would be, because they literally didn't know how vulnerable they were, what vulnerabilities they had in their environment. And from what I understand from talking to healthcare organizations, that's actually super common. There's just a real lack of understanding of what they have and what's going on.
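As a quick technical aside on that root cause: the OneTouch Ping's radio commands could be spoofed or replayed precisely because they carried no cryptographic protection. Purely as an illustration, and not anything from the actual product, here is a minimal sketch of authenticating a command channel with an HMAC over a monotonic counter, so a spoofed or replayed "deliver dose" command gets rejected. Every name here is hypothetical, and a real design would also encrypt the payload for privacy.

```python
import hashlib
import hmac

def sign_command(key: bytes, counter: int, command: bytes) -> bytes:
    """Tag a command with an HMAC over a monotonic counter plus payload.
    The HMAC defeats spoofing; the counter defeats replay."""
    msg = counter.to_bytes(8, "big") + command
    return hmac.new(key, msg, hashlib.sha256).digest()

class PumpReceiver:
    """Hypothetical receiver: accepts only fresh, authenticated commands."""

    def __init__(self, key: bytes):
        self.key = key
        self.last_counter = -1

    def accept(self, counter: int, command: bytes, tag: bytes) -> bool:
        expected = sign_command(self.key, counter, command)
        if not hmac.compare_digest(expected, tag):
            return False  # spoofed: sender doesn't hold the shared key
        if counter <= self.last_counter:
            return False  # replayed: counter is stale
        self.last_counter = counter
        return True
```

The legitimate remote signs each command with the shared key and an incrementing counter; an attacker in radio range who lacks the key, or who replays a captured message, is ignored.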
When situations like this arise, the outcome is pretty dire. It's disproportionately bad. And so you end up with a situation where hospitals closed, people got turned away, they didn't get operations they needed, because they didn't know if they were vulnerable. This issue is so big and so important that the FDA has spent a whole bunch of time working on it. They brought out their postmarket guidance a few years ago, well before WannaCry. WannaCry has added a lot more scrutiny from governments, both in the US and the UK, and around the world. But even before that happened, the FDA had been working on this postmarket guidance, encouraging vendors to behave in certain ways around security. They continue to push forward on this stuff. I don't know if anyone here is from the FDA, but I would like to say a massive thank you to them. The work that they've done, and particularly the way they've approached it: they've partnered with the security community to make sure they're getting it right, getting the right expertise into the conversation. They have really done great things, and they're not now resting on their laurels and going, hey, we have a document, it's all good, which is something that sometimes I like to do. Instead, they are looking for what's the next thing they can do, how can they push this further, how can they do more? And I think that's awesome. Another reason to take this seriously is because, oh look, we have a whole village on it. It kind of seems as though people care about vulnerability disclosures in medical devices. I'm hoping that's the case. Otherwise, this keynote is going to have been a bit of a disaster. But hopefully you guys care about this, and this is a topic that you're interested in, and I think there are lots of people who care about it.
I think that the media also care about it, which means that lots of other people care about it, because they read about it, and it's also super sci-fi and scary, and that makes people go, oh, what's going on? And again, it's in Homeland, so that makes people care about it. So it's all like a serious thing. On top of that, I'm actually going to see, does anybody know what this is? Okay, great, the Library of Congress. Does anybody know why I have a slide with the Library of Congress on it? Other than Josh, he's sitting in the front row nodding. Patents is a close guess. Yeah? Yes, very good. Okay, so the Library of Congress makes the final decision on DMCA exemptions, and three years ago, in 2015, the Library of Congress said we shall have an exemption to the DMCA for security research, provided it's done within this sort of box of what it should look like, which is basically: do it in a safe testing environment, that kind of stuff. And that was a real game changer, because all of a sudden, researchers who'd been sitting on vulnerabilities and were afraid of disclosing them were suddenly like, oh, hey, I'm not gonna get arrested, I should totally go and disclose this. And it was a wake-up call for medical device manufacturers, who no longer had that handy DMCA stick to beat people with. And so that was a really good thing, and it's been part of changing the ecosystem. And this exemption is going to, this is a bold statement to make, but I'm gonna say it, this exemption is going to get re-approved. It's gonna roll over. It may even go further. So as a medical device manufacturer, you're gonna continue to have people knocking on your door, and you need to know that. Again though, you're not alone, because there is an organization, with a very out-of-date logo apparently, that should say NCCIC, and you can work with them; they have guidelines that you can follow on how to do this stuff. I think the big thing here is that we are evolving.
We have definitely made progress in this field in the years since Jay did his initial research, which was 2011. This field has changed to the point of being unrecognizable from what it was. And people now are so much more thoughtful about how they do this. We see there are medical device manufacturers, like Philips, like J&J, who have really great practices in place now. They've gotten really good at how they do this. And that is a really major thing. That is a really positive thing. The environment has changed in the best possible way, and it's continuing to evolve. Again, things like the biohacking village play a huge role in that. People like you coming together and looking at this stuff and talking about it, sharing information, helps to continue to move us forward. And so I wanna thank all of you for doing that. I think it's really awesome. And I think it means that together we'll be able to embrace the future that we all dreamt of, where technology is safe to use and we fly around in bubbles. Thank you. Okay, so, if anyone has any questions, please come on up and we'll try to answer any questions you may have. Hopefully. I told you they wouldn't have questions. What's next for Medtronic? What's next for Medtronic? Yeah, obviously they're not doing anything after two years. What's the next step to get them to take action? I think that, sorry, the question was: what's next for Medtronic? They haven't got where they need to be in the two years since Billy and Jonathan's original disclosure. Where do they go from here? I think there are two answers. There's the cynical one and then there's my optimistic Care Bastard one. The Care Bastard one is that my big hope is that they will continue to work on the issue, that they'll continue to take it seriously. They'll continue to invest time and effort in it.
The cynical one is: DHS and the FDA are involved, and the FDA is the regulatory authority in this space, and I believe that they will evaluate whether action needs to be taken and they'll push Medtronic accordingly. Any other questions? Do you think more regulatory frameworks are coming down the road? Do you think there's a push from the industry, say, from the consumer side? There's a lot of consumer groups and the lawyers are definitely all over this. So do you see that six months from now a new regulatory framework is going to be announced that is really going to push the medical vendors? Yeah, so the question was, do I think that there'll be new regulatory frameworks coming, will there be more legislation, et cetera? I think there's certainly an awful lot of discussion about it, both in Congress and in the administration. So Congress is looking at legislation around things like IoT procurement, labeling for products, basic security hygiene measures for privacy and for protection against harm, and there are bills that come out around those things fairly frequently, and they're increasing in intensity, interest, and support. Every time there's a new major milestone headline, those bills come up again and get discussed more. So that's the sort of Hill piece, and then in the administration, as I said, the FDA is really focused on cybersecurity. Suzanne Schwartz and Seth Carmody, who are here somewhere, led the effort on the post-market guidance. They're absolutely phenomenal, really knowledgeable, work really closely with the community, and they're definitely looking at how they can continue to help with the issues. And so you have to balance, right? Nobody wants to create a lot of very burdensome regulation that holds innovation back and hurts patients that way, right? We recognize connected technologies are a thing because there are huge benefits. I don't think anybody thinks otherwise.
Patients definitely benefit from using these technologies, but they have to be able to do so safely. And so the FDA is trying to balance those two elements together, and they're trying to figure out which direction to go in. I feel like Mr. Corman here wants to share something too. Okay, great. So what he was just saying is Suzanne is gonna be around eight o'clock tonight at Do-No-Harm, if anybody wants to talk to her about what she's working on and where they're going with this in terms of regulation. Thank you, Josh. That's here. Oh, okay, it's in the main track, Octavius 9. My God, this is so great. Who else has information they want me to share? If you own the white Ford for this budget. Any other questions? Thank you very much. Thank you. So now, without further ado: our next speaker has been with the biohacking village since the beginning, focusing on the interface between computation and biohacking. He is currently working on a doctorate in environmental engineering, focusing on microbial carbonates to isolate pollutants. Back at the dawn of time, he was a doctoral historical researcher focusing on WMD warfare, but fell into the IT field in order to make ends meet. Now, without further ado, I'd like to introduce my friend, Mr. Brinley. Thank you, Biohacking Village attendees. Today's topic is blue team bio: using kill chain methodology to stop bioterrorism. Quick little notes, who am I? As was mentioned, I am a PhD student in environmental engineering at a very ripe old age. I've got 20 years in the IT industry, because you've got to make rent somehow. It's my fourth year at the DEF CON Biohacking Village, and in my first year we had some technical issues, so we've got the sacrificial virtual rubber chicken for good luck.
Today's agenda slide: we're going to go through bioterrorism and get a quick definition, talk about some genetic engineering methods, talk about the actual kill chain methodology as applied in information security, and then talk about the various steps that I'm seeing as a biology kill chain method: recon, development, weaponization, staging, delivery. And then we'll have our conclusions. So, why this talk? Well, first off, we're now at the point where gene engineering is actively being used to alter human DNA in the field. And last year, there was a main track talk by the chief medical officer of Intel, Dr. John Sotos, regarding genetic diseases to guide digital hacks of the human genome. He presented a nightmare situation about gene engineering techniques which are cheap, perfectly reliable, and easily accessible. His thought was basically: if you made genetic engineering as easy as computer programming, and if you had replication as reliable as you have with digital copying, you could run into some real issues. So the analog is actually malicious code development; that's why he brought it to DEF CON to discuss. And in keeping with that, I thought, well, I work on blue team. I thought his talk was maybe a little alarmist, but he said, why don't we put our heads together and start thinking of ways to counteract that? And I went, oh gee, we have methodologies in place already, several methodologies, to deal with malicious code development and the effects of malicious code. So let's start exploring directly applying those to this potential situation before it happens. We've got a 30-year head start; let's take advantage of it. Some current technical news that plays in with this: recently, the big worry has been whether CRISPR-Cas9 should potentially be described as a weapon of mass destruction in and of itself.
That's due to the possible effects it could have: using it to genetically engineer either pathogens or alterations to human DNA and spreading them out in public. It turns out that CRISPR-Cas9 might not be as foolproof as initially feared; you wind up getting coding errors when you implement a CRISPR-Cas9 genetic change. So that would be our first little bit of technical news. On the other hand, the alarming one in this list is that some Canadian researchers last year reconstituted an extinct horsepox virus for $100,000 using mail-order DNA. And this is exactly the situation Dr. Sotos is worried about, because poxviruses are classically horrible, and it was $100,000, no questions asked. I suspect these were university researchers; if this was somebody off the street without the university affiliation, it would probably have been much more difficult. But it still plays up the insider threat aspect in any security paradigm: the folks who you expect to be doing things properly are the folks who have the greatest access to do things maliciously. So when you're dealing in bioterrorism, the first place you start worrying about is the faculty at the local microbiology department, because they're the ones with the expertise, the opportunity, and the ability to sneak something out. So let's go to a definition of bioterrorism so we can get our terms clear. Bioterrorism is the use of harmful biological agents to generate a political response. I'm separating this out, for purposes of this talk, from biowarfare. In this case we're assuming that bioterrorism does not have direct state support, possibly indirect state support, but this is not a nation-state's official biowarfare program. The reason for that is, if you're someplace and you run into your own country's biowarfare program, you pretty much have two choices: live with it or don't.
If you run into some other country's biowarfare program, okay, you're going to tell your country, hey, I've discovered a biowarfare program over in this country, and they're going to either say, we know about it, or, that's interesting, you're going to spend a long time talking with some very polite gentlemen describing exactly how you came across this information. As that points out, you're dealing in national security issues and local sovereignty issues: if it's a biowarfare program, the locals are either read in or it's authorized, and you're not going to get any leeway. So we've just pointed out the lack of viable options. Bioterrorism presumes that your local PD, your national police, and your security apparatus are not involved and would very much like to prevent this, so as an individual concerned biohacker, you have a lot more options for who you can notify and what sort of actions they're going to take. Now, with the new genetic engineering methods, we have a new concern, which is designer or custom pathogens, and we're going to see how that works when we're talking about gene modification.
Now, for bioterrorism there are three main possible targets: crops, animals, and people. Basically anything biological that relates to life in the target area is up for grabs. But what we're doing is functionally using biological techniques to change the bits, what you could call the bits, of the genetic code to a configuration that's not the original one. So what it really looks like is a genome data problem, and much like dealing with, say, computer viruses and other forms of malware, you've got a pervasive threat that will either cause harm or alter content, and it can be customized. So really, instead of a medical-biological paradigm, you can switch to guarding this from an information security perspective, and that's what we're going to try to do here. Now, real quick, we want to go into some of the methods that are used to alter the genome. First off, you have to have proper information, and that is available mostly online, mostly open source. There are a number of databases: the BLAST database, the FASTA database, and the PSI protein classifier. You can put in an entry, I want to find the human genome code for Tay-Sachs disease, and it will bring it right up. I want to see the regular genome code for Drosophila, which is a fruit fly, or any other common research-subject animal or plant; it will pop right up. Additionally, that's great for when you're trying to customize down to a species. If you're trying to customize down to a person or a subset of people, you can now go out and pick up genome capture devices like the MinION, which was actually demonstrated at the first Biohacking Village four years ago: a little device about the size of a USB stick. Take a sample, put it in there, it basically runs a little PCR on the DNA, does the sequencing, and spits out the code or the analysis. A thousand dollars got you three uses, and then reload kits had an additional cost. There's an additional one from QIAGEN that's available. So now you
have the ability to go look up a code that you want to implant someplace, and go look up the code of the person or subgroup of people that you want to plant it into. This is a little alarming if you look at things from a security-through-obscurity standpoint: ten years ago, before the Human Genome Project fully completed, we didn't have this as a possible action. After you've got your information, you want to go to synthesis. Pretty much all of it, for custom genetic code, is done through oligonucleotide synthesis. There are like four different methods that have been generated over the years; currently it's the phosphoramidite method. Basically you're running in a cycle: you de-block, couple on the next base, put in a cap, oxidize it; lather, rinse, repeat until you're done making that section of genetic code that you want. It's generally done commercially, because unless you're really highly skilled personally as a process chemist, you want to have this done commercially. And the nice thing is, at least for some of the more dangerous code segments, going back to the poxviruses and those Canadian researchers, generally they won't synthesize known harmful gene code. That's why I said, if you're dealing with a university credential, you might be able to get somebody to synthesize something that they otherwise wouldn't, because: I'm a legit researcher, this is what I do. There's also something called MAGE, multiplex automated genomic engineering, which came out in 2009 from the Wyss Institute at Harvard. Basically you take a little chunk of single-strand DNA and you electroporate, which electrically stimulates cells in order to get that single-strand DNA through the cell barrier and into the nucleus; it anneals and melds in, and at some point the cells recover. It's a two-and-a-half-hour cycle, you can get up to 50 edits, and usually this is used to create edited genome diversity for doing tests. But this also means that you can get a batch of gene
samples, filter out the one you want, and then simply go and replicate that, and have that cell replicate itself over and over by conventional methods; basically at that point you're just cell breeding. Finally, we have to deal with our transfer methods, because we've got our chunks of genetic code that we know we want. Basic gene editing, in all forms, is pretty much the same overall process. You pick a target location and you define a binding domain that your method is going to bind to, in order to say: this is right before where we want to insert the custom code. You induce a double-strand DNA break at that cleavage domain, you add your programmed DNA, and you anneal it. Usually there are off-target effects; basically, you don't get a perfect match-up. Because you're trying to do this in bulk, it might bind in multiple places, make multiple strand breaks, and insert this DNA in multiple spots. And that gives you a one-way function, because you can't pull the changes back out as easily as you put them in; you take the risk of taking out the change that you wanted to make in order to take out the changes you don't want. The thought then becomes: the more accurate you make your transfer method, the easier it actually is to reverse it, because the fewer strand breaks and the fewer off-target effects, the easier it is to remove one, or there aren't any to deal with, it's just the one that was put in, and you can just take it back out if you want to. Three types. ZFN, zinc finger nuclease: basically it's got its binding domain and its cleavage domain, and it's defined by having a zinc ion in it. You deliver it as a plasmid, and if your target domain wasn't unique, it'll bind in multiple places and you'll have multiple gene changes. You've got TALEN, transcription activator-like effector nuclease: it's used for gene therapy, delivered as an mRNA, and you can combine it with other methodologies. It's generally slower to build a proper TALEN
because you've got a larger target domain, but that also makes it very, very granular and very specific, so you don't have many off-target effects. And then finally, the most recent one that's been getting all the news: CRISPR, clustered regularly interspaced short palindromic repeats. It's faster to generate your plasmid (or other transfer method) than either ZFN or TALEN, and it'll cut at any location, because it's using a small chunk of guide RNA in the plasmid. But it does have a large number of off-target effects, as noted in the earlier slide where we were talking about possible toxicity from off-target effects. After you've got your transfer method picked out, you pick a vector. Usually it'll be either an adenovirus or a retrovirus; you basically inject it around the cell, have it infect the cell, and your change takes place. The worry at that point is that you might have some overexpression, because you're putting a large dose in and hoping it gets to the right cells, and maybe you get two or three doses per cell. There are some non-viral vectors you can use that cause less immune response: you can literally just inject in the naked DNA, or you can put it in a polymer or a fat coating. The whole point is just to get it through the cell membrane into the nucleus. Now, in reality, as I said, I'm using a computer hacking paradigm to try and fight this. The advantages for the computer attacker doing code changes: there's less formal training required; you can edit as you see your project develop; there's a lot less infrastructure needed; code doesn't mutate until somebody gets hold of it and changes it; and since you're dealing in electronic bits and bytes, the consequences when you're caught are a lot lighter, which is why for 30 years people have been running rampant as computer hackers. The two advantages for a biological attacker are: you have concrete effects, you're
doing real damage if you want to do real damage not to say that a computer hacker can't cause real damage and physical effects but in this case it's direct harm to the organism and you can select existing pathogens in the wild and possibly alter them in order to get your effect now to deal with these you know with computer attacks recently of you know we've gone from a perimeter only defense to something called the kill chain framework and the idea for the kill chain is you want to identify all the steps required to have a successful attack and that starts off at reconnaissance and ends with persistence and it goes through you know recon weaponization delivery of the malware exploitation and goes all the way goes all the way through to actions on the target and persistence and you know as I said with the goal for a kill chain analysis is to identify malicious action as early as possible and choose to react to the threat or to allow the threat to continue so you can study the methodology and it would possibly apply it elsewhere and then react to the threat and it's not just confined to computer security there have been variants developed for missile defense for ship for basically against anti-ship missiles you know you have a radar ping you go through a decision framework you gather more information to determine whether this is something that's hostile or not you choose whether you're not how you're going to react to this potential hostile circumstance you reevaluate and continue on and we actually have an analysis and reaction cycle for this you gather information evaluate it choose your reactions gather more information and keep repeating the cycle until the threat has been neutralized or basically until the threat is neutralized or you can't gather any more new information now there's some cautions in dealing with the kill chain framework instead of a strictly perimeter based defense you've got limited resources to apply to all of your defenses so you have some 
restraints on both your monitoring and your response actions. The earlier you react, the more chance you have of tipping off your opponent, and time spent gathering information reduces time spent reacting; it's a trade-off. But if we apply this to biology, we can form a kill chain which is similar: starting at recon, going through the development and testing phase, then weaponization, production, and delivery. And we can stop the chain at delivery in this case, because after delivery, a biological threat is self-acting: either it'll work or it won't, and pretty much the only follow-on action for the opponent is, do we repeat the cycle? So at that point you're dealing in post-action issues, and I'm taking those out of the kill chain because they don't really add to the kill chain analysis. And what you find, at any given spot in the kill chain, is that for a malicious actor, secrecy is inversely proportional to the scale and effort put into that phase. If you spend a lot of time on recon, you have a larger chance of having your guy get caught snooping. If you spend a lot of effort on production, you're consuming a lot of resources and you have a facility, and there's a greater chance that will draw somebody's attention. So especially for non-state actors, you get some serious trade-offs that build up. For a state actor, once again, this is why I was excepting out the biowarfare programs, it doesn't matter: you control the local police, you have the resources at your disposal, you can just move on with it. But if you're a small motivated group trying to do something against the local authorities, you've got to make some decisions, and we can take advantage of the decisions they make. So, phase one is recon. What's recon? It's getting information either on the target area or
the target organisms: so, collection of biomaterial and collection of targeting information. Biomaterial collection can be either direct sampling or acquisition of waste product, and it is the easiest step to complete. Following on in recon, you've got your target selection; this is where you've got direct observation and the use of open-source intelligence by your malicious actor. It's very similar to stalking, and in fact the choke point there is treated like stalking: you've got anti-stalking laws and anti-trespassing laws that you can leverage. You can also leverage malicious-actor personal chatter; motivated people tend to talk a lot about the things they're motivated by. Human nature. There's an old SNL gag: when Eddie Murphy decided to retire the Buckwheat character, which was right around the time John Hinckley shot President Reagan, they were doing that whole thing, and one of the questions was, do you believe that the man who shot Buckwheat shot Buckwheat? And the answer was, oh yeah, that was all he ever talked about. So chatter comes in very early, but it is a lot harder to prevent than it is to detect, which is just like recon in cyber; you've pretty much got a one-to-one match there. The next step is development and testing, which is where you're developing the desired genetic payload for your attack. Do you want to do gene transfer, or do you want to do direct damage? You need to test whether it's going to have the desired effect, and you can do that either in cell culture or in crop samples; note that if you're doing it in crop samples, it's a lot more noticeable. You've got multiple payload options: a generalized plague; a targeted plague, which in this case is confined to a subset of a population; viruses that are associated with cancer risk; you could try to cause an autoimmune response; or you can try to inflict trait modification. Now, the downside for the defense is that gene sequencers are becoming less expensive
and commercial gene sequencing is now commonly available. As I said, though, the commercial houses do monitor for known pathogen sequences, and if a group is going to try and do this personally, they're going to need appropriate facilities. I know some biochemistry students who can tell you the exquisite agony of trying to keep mammalian cell cultures from catching viruses that they did not want and dying; it absolutely drives them nuts. So it's difficult for a small group to do. The choke point in this case is reagents: everything has to go through nucleoside phosphoramidites, and there was a presentation last year by Meow-Ludo Meow-Meow, you can look it up on YouTube, which was actually a very good presentation that went directly into this. You've also got your polymerases. Both of these you have to order, and you're not going to make them yourself unless you are one heck of a process chemist. Step three would be weaponization, and this would be marrying your payload to your desired transport vector. That pathogen then has to retain its virulence, its specificity, and the ability to turn it off, because it's no use as a political tool if you let it loose and you can't turn it back off; then all you are is a homicidal maniac. This is difficult even for nation-states to do, and that difficulty relates to making it so that it stays on target, does not mutate, and your own population is not affected. The big choke point in this case, for non-biowarfare programs, is usable facilities. Two years ago I did a presentation on biosafety, and we're talking biosafety level 3 and level 4 facilities; if you start building one, you will have the locals immediately taking a strong interest in you, because nobody has a need for a level 4 biosafety facility outside a nation-state biowarfare program. At this point you're also looking at marrying transmission agents, whether it's a prion or a virus or a parasite or a host organism, to your desired effect, and that partly sets how you're going to transmit and deliver this.
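The kill chain phases and defender choke points described so far, together with the gather-evaluate-react cycle mentioned earlier, can be sketched as plain data and a couple of helper functions. This is my own illustrative sketch, not anything from the speaker's materials; the phase names and choke points are paraphrased from the talk, and the key principle it encodes is that the earlier a phase you detect activity in, the more downstream phases remain available to disrupt.

```python
# Illustrative sketch of the biology kill chain described in the talk.
# Phase names and choke points are paraphrased from the speaker; the
# functions just encode "earlier detection = more response options".

PHASES = ["recon", "development", "weaponization", "production", "delivery"]

# Example defender choke points per phase, paraphrased from the talk.
CHOKE_POINTS = {
    "recon": ["anti-stalking and trespass laws", "chatter monitoring"],
    "development": ["reagent orders (phosphoramidites, polymerases)",
                    "synthesis houses screening known pathogen sequences"],
    "weaponization": ["BSL-3/BSL-4 facility construction draws attention"],
    "production": ["accomplices (prisoner's dilemma)",
                   "bulk facilities, culture media, supplies"],
    "delivery": ["target access", "operational security and chatter"],
}

def earliest_detected_phase(observations):
    """Return the earliest kill-chain phase with at least one observation,
    or None if nothing has been seen yet."""
    for phase in PHASES:
        if observations.get(phase):
            return phase
    return None

def remaining_disruption_points(phase):
    """Choke points still available at or after the detected phase."""
    idx = PHASES.index(phase)
    return {p: CHOKE_POINTS[p] for p in PHASES[idx:]}
```

For example, detecting a suspicious bulk reagent order maps to the production phase, leaving only production and delivery choke points to work with, whereas catching chatter during recon leaves the whole chain open to disruption.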
Your transmission selections are airborne, waterborne, contact, or injection; it partly depends on the pathogen and partly on how you're planning on delivering it. Step four would be production. This is where you're trying to make your pathogen in bulk, and you start dealing mostly in accomplices: you can get through phases one through three as a solo operator, but phase four is generally where accomplices start coming in, and accomplices multiply the number of opportunities for the defense to catch something. It will also generally require additional facilities beyond your initial development facilities, because now you're talking bulk-process chemistry and biological activities; you've got culture media issues and cross-contamination issues to worry about. The analog here would be any form of illegal substance production, from stills to illegal drug labs, except in this case it requires better technique and higher product purity, and you're going to have more difficulty in post-processing. Once again, the real biowarfare program has the cooperation of the locals; they're not trying to sneak this around, they have dedicated facilities, and it's nowhere near as difficult for a nation-state. Finally we've got delivery and infection, and this covers your delivery logistics and attack planning. If you deliver your product and nobody catches it, you have failed. So the key issues for the defense are target access and cracking the bad guys' operational security. Once again, chatter kicks in: you're dealing with a group of motivated people who want to talk about what motivates them. Target access: most of the interesting targets for political activity are generally guarded when they're interesting, so basically you're trying to sort out what a potential malicious actor is going to try to do. Aerosol delivery is easiest, but it also tends to burn away quickest; injection is hardest, but it's the best targeted. And as it turns out, the more specific your targeting, the harder it is to deliver. So part
of what the defense gets to do is just monitor known or likely targets for activity, based on what you're hearing from chatter. Operational security is a bear in general, and the moment you catch an accomplice, prisoner's dilemma kicks in. So as the defense, you want to intercept and leverage the communications between the planning cell and the delivery cell as much as possible, because that way you can walk it right back. There are no current examples of an engineered biological attack, and in fact there are very limited examples of non-state biological attacks at all. The two that I've got here are the 1984 Rajneeshee salmonella attack and the 2001 Amerithrax attacks, which are the only two biological attacks in the past 50 years in the United States that I'm aware of. In both cases I'm going to explain why some steps in the kill chain got skipped over, but also how you can analogize them. For the Rajneeshee incident, the goal was to interfere with county elections in a county in Oregon, to allow some favored candidates to win and seize local power. The result was 751 cases of food poisoning, and they lost the election anyway. For recon, the Rajneeshee religious group selected restaurants with salad bars; they also tried some direct delivery of pathogens in drinking water to specific personnel. They skipped development, because they were using Salmonella purchased from a medical supply company for testing cultures, and they just recultured that in bulk to get their pathogen. Weaponization was also skipped, because they were using a wild-type bioagent. For production, as I said, they cultured bulk amounts in an operationally controlled medical lab, then took that culture, put it in the salad dressing at the salad bars, put it in water, and they had limited effect. The Amerithrax attacks occurred in late 2001, after the 9/11 attacks, and the goal, as near as it's understood right now, seems to have been to save the anthrax vaccine program that was under development.
There was a false-flag element attempting to pin the threat on radical Muslims, and it was targeted against high-visibility targets: media operations and congressmen. There were five deaths and 17 injuries. On the one hand it was successful, because anthrax vaccine program funding got restored; on the other hand, the perpetrator in question committed suicide, and he was an employee who was presumably doing this to save his own job. You don't save your job if you shoot yourself while you're under investigation, so you can argue that's a failure. Recon was publicly available mailing addresses. Development was skipped again because it was leveraging actual bio warfare agents cribbed from the Ames anthrax biodefense research strain. Weaponization was also skipped because an active bio warfare agent was being used. Dr. Ivins cultured his agent for two years from existing research stock and mailed it out to his targets, which is why US mail nowadays, to certain high-priority targets, goes through a considerably greater number of steps to treat potential biohazards. In conclusion, you can see that we can actually apply that kill chain methodology to a bio threat, and as I said, you can find these bio-cyber similarities. We've got 20-plus years of cyber defense techniques and knowledge available for us to leverage. We've got a head start, so let's not waste it, and we've got a head start in an environment where the defense is at less of a disadvantage than against cyber attackers: basically, it's the difference between playing against the NFL and the local high school team. We've got our advantages; we should move now to really leverage them: build out policies, build out a knowledge base, build out plans, and move forward. I've got some references available. Are there any questions? I can answer questions outside. If you do have
questions for Ms. Grimly, please come outside; we do have a setup for that. Thank you very much. Thank you very much to the Biohacking Village for allowing us to be with you here today. It is much more than simply me: you will in fact be speaking with my co-panelists, Dr. Suzanne Schwartz from the FDA, Commissioner Rebecca Slaughter from the Federal Trade Commission, and Professor Stephanie Pell from West Point. And we have a little bit of technological excitement going on, because one of our three panelists appears with us today in cyborg form. As with every live demo, excitement will occur, but we'll hopefully be able to make everything work. Before I introduce, in carbon form, well, two out of three carbon forms of our esteemed panelists, I'd like to share a little introductory information with you on this concept of the Internet of Bodies, which may be unfamiliar to many of you. This is a term that I started using coming out of some of my earlier work, and basically the idea is connecting the Internet of Things into the world of body-attached, body-embedded, and body-melded devices. As we're cybering all the things in the Internet of Things, we are also starting to creep into the human body as a platform for development and applications. The defining features of the current Internet of Things, which is no news to this audience: we rely on Internet connectivity gratuitously; we think everything is better with Bluetooth and bacon; there are extreme levels of known vulnerabilities, and they're already causing harms; we have enabled new methods of data collection, aggregation, and repurposing, and this in particular has led to privacy problems; and consumers' ability to opt out of these additional methods of data collection and leveraging is diminishing. (I have a very deep voice now. OK, that's very exciting.) In particular, we have hidden price terms appearing in the way that products are being sold, so you can't really tell
whose security is better by looking at the product in a store, for example. And so self-help on the part of consumers, who sometimes struggle to find the power-on button, is relatively limited. One of the challenges you see frequently in policy spaces is that the definitions of security and privacy get blended. I don't need to define these terms for this community, but at the end of the day, in a policy set of discussions, what we are asking, or should be asking, in security policy is whether Alice's system successfully defended against Eve's attacks, whether the failure was foreseeable, and what legal duties pertained in terms of fixing the harm that occurred and preventing it. With privacy, we have negotiated, idiosyncratic promises and reasonable expectations. (The first setting, whatever was on first, was good. I can also just yell.) So as a consequence, different questions are being asked in security and in privacy, and this distinction is lost in many cases in policy circles and in legal discussions. So when you run into your friendly neighborhood lawyer, make sure they understand the distinction between security and privacy. This starts to matter as a legal matter in part because of reciprocal security vulnerability. As we all know in this room, think about Mirai: it was pointed at Reddit and Twitter, but next time it could be a nuclear power plant or an electrical grid. The same types of vulnerabilities and compromises that exist in the private sector can ultimately impact the public sector, and vice versa. So we have the Internet of Things starting to do real damage. Just by way of memory lane, some of you undoubtedly remember the April Fool's joke of 2013: ha ha, Toaster.io, ha ha, hilarious. In 2017 we have articles about toasters breaking the internet; we have the BBC warning about compromises on toasters. We are in a world where computer code is starting to directly impact humans, and we forget sometimes, though not in this room,
that code is written by humans, for humans, and that mistakes happen often. You undoubtedly know about Alexa ordering dollhouses. This is the story of the first smart home in Germany, where Professor Raúl Rojas accidentally DoSed his own house because of one light bulb; he thought it was kind of hilarious, his computer scientist wife less so. We know that malware is increasingly user-friendly in terms of being able to deploy it, and Mirai has caused many problems; sometimes the creators of these technologies lose control of them. In particular, in terms of this room, we're all very aware of the epidemic of malware and ransomware spreading around hospitals, and that's where we start to see the connection between code and human bodies. We have risk to physical safety happening as a result of computer code, and here we start to see the blue screen of death become literally a blue screen of death. Undoubtedly many of you know about the incident where a patient's heart procedure was interrupted because of a virus scan. We have cases where fitness apps are revealing military secrets where human bodies are deployed. And we have the Internet of Things transitioning into this kind of Internet of Bodies, where human flesh is connected to and reliant upon the internet for some aspect of its functionality or safety, and that starts to change the stakes, especially in a legal context. The Internet of Bodies, if we were to put a definition on it, is the creeping technological reliance and vulnerability of human bodies on software, hardware, and the internet for their integrity and functionality. In other words, all of the unfixed problems of IoT are now blending with the unresolved legal and ethical messiness of prior generations of med tech and legal issues related to the human body, and that is complicated. We have had, and are having, three stages of these IoB technologies. The first stage we're all very familiar with: the quantified self, the Fitbits, the Google Glasses, the
Apple Watches, etc. It's optional; the risk is primarily driven by repurposing of data, privacy and security harms, but generally we're not talking about physical safety being directly impacted, the Strava location disclosure incidents notwithstanding. The second generation of Internet of Bodies technologies we're already seeing, so we're in an early second generation: digital pills that report the progression of the pill and its release of medicine from the inside of our stomachs; pacemakers that are hardwired into our bodies; cochlear implants that communicate using Bluetooth; digital prosthetic limbs that rely on internet connectivity for some aspect of their functionality or otherwise communicate with an external machine; artificial pancreases, the first of which has already been approved; and of course internet-connected hospital equipment, which often keeps human bodies alive. These technologies are less optional; they are connected to the physicality of the human body, and physical harms are entirely possible as a consequence. The next generation we're already starting to see in medical trials, call it stage 2.5, late second generation: things like brain prosthetics, where an external computer can modify the sensation to, say, a patient's spine, or a doctor can remotely recalibrate the degree of sensitivity of an implanted device. The third stage we are not quite at, and it's the situation where you have, in theory, melding of the human body and the externalized cloud "ether brain": it's Elon Musk's neural lace, it is the idea of a cortical interface, some version of next-generation humanity. So this is the progression, and as I said, we are somewhere in the second stage now. Google Glass is still around; it's on factory floors. DOD is building an Iron Man suit that sometimes is powered in ways that make decisions overriding the human body inside, potentially, based on early reports. And so we see these buggy bits and buggy bodies potentially leading to physical harm.
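The three-generation framing above (body-external, body-internal, body-melded, with physical harm becoming plausible from the second generation on) can be sketched as a tiny taxonomy. The enum, example mapping, and helper below are my own illustration of the talk's framing, not anything from the talk itself:

```python
from enum import Enum

class IoBGeneration(Enum):
    """The three IoB stages described in the talk."""
    BODY_EXTERNAL = 1  # quantified self: Fitbits, Google Glass, Apple Watches
    BODY_INTERNAL = 2  # pacemakers, digital pills, cochlear implants
    BODY_MELDED = 3    # neural lace, cortical interfaces

# Hypothetical example devices mapped to generations.
EXAMPLES = {
    "fitness tracker": IoBGeneration.BODY_EXTERNAL,
    "pacemaker": IoBGeneration.BODY_INTERNAL,
    "cortical interface": IoBGeneration.BODY_MELDED,
}

def physical_harm_plausible(gen: IoBGeneration) -> bool:
    # Per the talk: generation 1 risk is mostly privacy/data repurposing;
    # direct physical harm becomes plausible from generation 2 onward.
    return gen.value >= IoBGeneration.BODY_INTERNAL.value

print(physical_harm_plausible(EXAMPLES["pacemaker"]))  # True
```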
Imagine Mirai not on DVRs or cameras; imagine it on a set of injected contact lenses. Just for fun you put lenses in for augmented reality gaming, and they get harnessed in a botnet. Botnets of body parts are a reasonably anticipated consequence here. Imagine WannaCry on an artificial pancreas: senior citizens not knowing how to Bitcoin to the person holding their pancreas hostage. This is unfortunately somewhat foreseeable based on the way that we know medical device manufacturing has progressed. We have now gotten to the point where there is excellent, active consideration of these issues by the FDA; we saw the first recall over a security review, and we know that these are the types of issues we will see continuing into the future. In particular, we'll hear from one of our panelists about the implications for criminal law. These transfers of technology from outside the body to the inside of the body will have tremendous legal implications, and this is a case where someone was convicted based on data from his own pacemaker. And we know from companies' patent filings that they are experimenting in research and development with various kinds of in-eye devices, whether to directly enhance your ability to perceive, or for gaming, or for recording the reality around you. Some of it is the archival concern and interest that existed with the quantified self, but now it's less visible to the outside person who may be the object of the recording. So the big question we're left with, and I'll just throw some of these ideas out there and shift quickly to our esteemed group of panelists: these third-generation technologies in particular start to raise questions about where our own minds end and where someone else's ideas begin. Not to be too philosophical, but Kant has this idea of autonomy, and we think about the autonomy of the human being; there's also another idea that he has, called heautonomy, and
it's about self-governance, about the ability to think through things in, for lack of a better term, a disconnected way, to ensure that it's really your processing of the ideas internally, to prepare you for exercises of autonomy: to act in ways that are consistent with your own moral processing once you are in the world and making decisions, whether it is voting, or the way that you treat other people, or your conduct as a professional in a corporate environment. So autonomy, self-governance, really can't be exercised, Kant says, without this prior heautonomy. And when you have your brain always on and connected to a cloud, the query is whether you can ever really be sure that the ideas are fully yours. We know that these experiments are underway: neural lace, cortical interfaces. And so this brings us to a host of legal questions, which I will just hint at with this slide and then leave you in the capable hands of our panelists. We have regulatory questions about where different agencies' authority overlaps and how to ideally ensure that consumers are not hurt. With the first-generation IoB, those devices that were deemed to be lifestyle devices, enforcement was primarily carried out by the Federal Trade Commission; the FDA deemed many of those first-generation devices not to be medical devices, and was therefore adopting a hands-off approach. But second-generation devices, where things go inside the body, that's starting to be a different story. With contracts, think about every EULA you've ever clicked on on a website; now imagine that the EULA you're clicking on is attached to the injected lenses in your eyeballs that have an internet connection. The stakes start to be a little different. If you can't understand what you're clicking yes on, what does that mean in terms of the possible harm to your vision, your participation in a botnet of eyeballs that could take down a power grid? This
sounds like sci-fi, but if you look at the impacts, as we all know, it's unfortunately not that far-fetched. Patents: patent assertion entities, trolls, have been very aggressive in enforcing their IP rights. What will that mean when the allegedly infringing patent is in a body part? How much can legal process force you to stop using a device? For example, in bankruptcy, whose benefit is served when these devices, and the contract rights that spring from them, are sold off? Open question. And how do we build civil recourse in tort? We know that civil cases around security have been slow to emerge, so these are open questions for us too. Where is the line between augmentation and medical correction? What are our new tech baselines, and who are the winners and the losers? How are we changing the relationship of our bodies to the rest of the world? We're entering a stage where we have an internet of situated things and bodies, and what this really asks, and this is where I'll leave you, is that we ask ourselves and our communities: what is our ideal of the human body in the next generation of technology? Are human bodies a bug or a feature? Are they something that we need to get rid of and replace with robotic parts, or are they something that we need to preserve and extend with tools, essentially in their current form? Depending on who you ask, you get a different set of responses on this sliding scale of techno-humanity, if you will. Some people of course think that we're all just a simulation, in which case just pass me the cheese and the wine and let's call it a day. But if you are somewhere else on the scale, and you think that some human-machine symbiosis point is the ideal place to stop, then you're going to have a different set of policy and legal prescriptions for the way we want to build the next generation of technology and security than the people who believe in a post-humanist ideal, for example. And we don't even have a consistent definition for some of these stopping points,
right? So this is a bigger discussion over whether human bodies are a good thing or just a last-generation operating system that needs to be replaced. With that, I will stop here and bring up our esteemed panelists. And to those of you who have my phone number: please don't text me during this, because one of the panelists is on Skype. I will ask each of our panelists to introduce themselves briefly and describe how they are connected to issues of security and bodily integrity. Thanks. Suzanne, can I ask you to go first while we have some fuzz around here? All right, there we go, we're good. Can you hear me? Can you hear me? Good afternoon, my name is Suzanne Schwartz. I'm at the FDA Center for Devices and Radiological Health, and I'm here with one of my colleagues from the center, Dr. Seth Carmody, who's sitting up in the front row as well. Our team at FDA has taken on the role of medical device cybersecurity from the FDA's policy, response, outreach, and education perspective. Just a couple of introductory words about FDA: FDA's mission specifically is to protect and promote the public health. FDA has multiple centers within the larger organization; several of those are what we call product centers, such as ours, the Center for Devices and Radiological Health. What that means in terms of a product center is that we have the authority to review and to regulate medical devices, those devices that are going to be coming on the market, as well as the authority, in the postmarket sense, to make sure that those devices which are being used, which are deployed, which are available for patients, remain safe and effective, and that if there are any concerns around those devices, those concerns are further analyzed, that information is looked at carefully, and then the appropriate next steps are taken based upon that information. So medical device cybersecurity, with the framing that Andrea gave: as you can imagine, a lot of the newer
advances, the extraordinary therapies that we see in treatments, interventions, and diagnostics available today, contain computers and are connected, interoperable, interconnected, and they present some challenges from a security perspective. We have had to address, and we continue to address, the legacy devices: devices that were built, developed, and put on the market years ago and are actually in clinical use in hospitals. Many of those devices were not built with the security that we would want to see in them today, and yet they are performing extremely important life functions, and they also represent huge investments by hospitals and healthcare organizations. And then we have, of course, the newer, novel technologies, and the opportunity to make sure that before these devices actually get out there into clinical use, a very careful, thoughtful, rigorous, and appropriate security approach is taken with respect to threat modeling and the kinds of assessments that need to be done to assure that the security is built into the device by design. I will stop right there by way of introduction. Let's see how we are doing with our keyboard connection here. Becca, can we hear you? Yes? I think we are good. Would you be kind enough to tell us a little bit about the mission of the FTC, the approach that the FTC has taken to protecting consumers in terms of different internet-connected devices, such as the lifestyle devices, etc.,
and a little bit about the FTC's perspective generally towards security and privacy enforcement? Yes, sure. Hi, my name is Rebecca Slaughter; I'm a commissioner on the Federal Trade Commission, and I'm going to apologize in advance: not only am I not with you in person, I am here with my four-month-old baby, who I think I have asleep, which is a dicey proposition. So let me tell you a little bit about the FTC, data security, and the Internet of Bodies. The FTC's mission is to ensure a fair and competitive marketplace in two ways: we protect consumers from unfair methods of competition, and we protect consumers from unfair and deceptive acts or practices affecting commerce. The Internet of Bodies really sounds more in what we think of as our consumer protection mission, the unfair and deceptive acts or practices section of what we do. As it sounds, this can be divided into unfair acts and practices and deceptive acts and practices. I will tackle deceptive first, because it's easiest: a deceptive act or practice is one that deceives consumers. So in terms of data privacy and security: misrepresenting what your device does, how you will protect the device, basically lying to consumers about your product. That's true of all products, including Internet of Bodies products. Unfairness is a little bit more complicated, but we treat unreasonable security as an unfair act or practice in certain circumstances. Reasonableness is a flexible standard that depends on the sophistication of the business, the type of data in question, how sensitive it is, factors like that. When we are evaluating unfairness, we basically have to determine whether an act causes or is likely to cause substantial harm to consumers, whether that harm is unavoidable by consumers, and whether the harm is not outweighed by countervailing benefits to competition or consumers. So thinking about this in terms of the Internet of Bodies, our general mission could be applied in terms of failure to
accurately describe information practices (what information is being collected, how long it's being stored, who has access to it); failure to reasonably secure sensitive data, so for example, if you're thinking about a medical device that reads geolocation or heart rhythm or something like that, if that's not kept secure; or safety or health risks that arise from devices, such as susceptibility to hacking that impacts critical functions, like Andrea was mentioning in her introduction. Those are all areas where I could envision FTC enforcement under our authority. OK, great, thank you. Stephanie, tell us how these kinds of issues might play out in a criminal context. Sure. Because I work for the Army Cyber Institute, I have to say that these are my personal views and are not the views of the United States Army or the US government. OK. So Andrea has introduced us to this concept of the Internet of Bodies, and you've heard about some of IoB's propensity to damage human bodies and minds, and we should start considering what the appropriate consumer safety responses are, legal and otherwise. I'm going to add another layer to all of this: like it or not, law enforcement is going to want IoB-generated data under certain circumstances as well. While I do not intend to spend my time talking about the crypto wars and the going-dark debate, and let me just say I am not even buying the rhetoric of going dark for purposes of this conversation, I am sure you all appreciate that arguments from very respected researchers have been made that say: look, law enforcement, even if you're having challenges intercepting text messages or voice calls, there's all this metadata out there coming from the Internet of Things. Well, guess what: it's going to come from the Internet of Bodies as well. So as we contemplate the various generations of IoB that Andrea identified, body-external, body-internal, and body-melded, we have to think about how the Fourth and Fifth Amendments of our Constitution may regulate law enforcement
access to this data. Now, as you all probably recall, the Fourth Amendment protects us from unreasonable searches and seizures, but for the Fourth Amendment to be triggered as a protection, a search must actually occur, and a search is a legal term of art; if there's not a search, there's no Fourth Amendment protection. And a lot of IoT metadata arguably falls outside the scope of the Fourth Amendment because of something called the third-party doctrine, which in its most simple terms, and in its most aggressive interpretation on the part of the government, says that if you share certain kinds of metadata with various third parties, you lose protection in that data. Now, some of you may say, wait a minute, the Supreme Court just came down with the wonderful Carpenter decision, right? If you're familiar, this happened at the end of June of this year. It had been a long-litigated issue, the question of whether cell phone location data could be acquired by law enforcement without a warrant. In this particular case there were 127 days of historical cell site data, and the court said that we have a reasonable expectation of privacy in the whole of our movements; the court basically said that anything seven days or more of location data is a Fourth Amendment search. And generally speaking, there are some exceptions, but when you have a Fourth Amendment search, the only way to make it reasonable, because the Fourth Amendment protects against unreasonable searches, the way we most often make it reasonable is by getting a warrant. Now, I don't want to be too optimistic, because the case didn't address tower dumps or the use of stingrays, but I think it's fair to say it is a very positive opinion in terms of technologies that have the ability to track us, and certainly to do long-term tracking. How that case plays out over time remains to be seen, and so how that case may apply to IoB-generated data also remains to be seen. I don't want to be overly optimistic and say, well, look, that
data is being generated from a device inside your body, granted, very concerning from a privacy, maybe even a security perspective, but it remains to be seen how much Carpenter will help or give guidance with respect to that kind of data. So let's look at a real situation, a real case that Andrea brought up, to see why law enforcement would concretely want this data. We had a guy whose house had a fire, and he reported to the authorities that this was all a surprise, that in fact he was busting open windows and throwing things out of them and yelling at people in the house to get out because there was a fire. Unfortunately his cat died too. But law enforcement, for whatever reason, would not necessarily believe what he was telling them, and so a body-internal device under Andrea's framework, a pacemaker, became a bit of a snitch, if you will. According to news reports, and again, I'm going from news reports, not an actual court record, a doctor who could interpret this pacemaker data said that the readings did not match someone who was surprised by a fire, who was under stress and trying to get people and belongings out of the house. So we've taken the liberty of tweaking KC Green's wonderful little cartoon here: our friend may try to look like everything is fine, and he may tell authorities otherwise, but his pacemaker data may say different. What might the Fifth Amendment say about a case like this, or about other kinds of IoB data? Now, you probably remember that the Fifth Amendment, among other things, basically says that no one shall be compelled to be a witness against him or herself in a proceeding, and it's not just a specific criminal proceeding: if you're subpoenaed to testify in a civil case and what you say might incriminate you, that would be protected under the Fifth Amendment. But of course there are always elements; you can't just say, well, I take the Fifth, and have that be over. For
something to truly be covered and protected by the Fifth Amendment, three elements must apply. It must be compelled: if you're going to law enforcement voluntarily, then you're not being forced against your will. The information to be gained via your communication must be incriminatory: if it doesn't incriminate you, the Fifth Amendment doesn't cover it. And it must be a testimonial communication or act, and that is normally the critical element that gets litigated. Again, not quite as applicable here, but I am sure this community follows whether or not the Fifth Amendment would protect the password to your encrypted smartphone, and my answer to you, and I'm happy to discuss it later, is a lawyer's answer: it depends. Under certain circumstances that might be testimonial; under certain circumstances it might not. Now, in this particular case, this is data that's being acquired, presumably, from a third party, so we don't have a Fourth Amendment issue, and it's not testimonial, so our friend up here is kind of out of luck with respect to his pacemaker data. That was a fairly straightforward example. As we start to consider the third generation, body-melded, that Andrea talked about, and we think about how technologies may meld with and reveal things about our thoughts, it's certainly reasonable to ask how our thoughts are protected by the Fourth or Fifth Amendment, with various kinds of brain scans, for example. Say a perpetrator breaks into a house, and the occupant-owner of the house grabs a hammer and starts to hit the perpetrator on the head; the perpetrator is then able to overpower the owner and unfortunately kills the owner, and all of that is caught on video. If law enforcement can't make out the face of the perpetrator, and they find a suspect in enough time, and he or she has been hit hard enough, maybe they want to examine, through some kind of scan,
internal damage that might have been done to the brain. Is that protected under the Fourth or Fifth Amendment? Certainly under the Fifth Amendment it would be a very hard sell, I would say, because that kind of examination would be more like a fingerprint or DNA, or external wounds; it's identification, it's not really about testimony. What if, however, there's a more advanced technology, and some investigators or scientists were to show that alleged perpetrator still pictures taken from the video of the attack, and you could read how the alleged perpetrator was reacting? Would the Fourth Amendment cover that? A maybe even more interesting question: would the Fifth Amendment cover that? Does the Fifth Amendment reach as far as those kinds of mental privacy issues? Suffice it to say that these issues are not completely settled, and criminal procedure scholars, and I like to call myself one of them, are thinking through these kinds of things, and I think we're only going to need to think more about them as these three generations of IoB technologies become more prevalent in our world. Thank you. Thank you very much for those insightful comments. And so let me give back our third panelist her cord; we are back in business now. Let's start taking any questions from the audience, if there are any. OK, let's start up here. [Audience question, partly inaudible:] Is there a possibility that with a combination of technical encryption and something legal, like a contract with the company that made the pacemaker, you could have had another kind of protection? I don't think contracts are going to beat the Fourth or Fifth Amendment, I should say; I don't think contracts are going to beat law enforcement. Other things might, like encryption. So that case, as you well know, ended in a really interesting way: frankly, the FBI was able to hire a third-party vendor to break into the phone. The question, though, is important, because it's still
being litigated. Even in that case, our San Bernardino shooter was dead, so there was no way to try to compel him to provide his password. However, there are two lines of cases that are developing. One is from the Eleventh Circuit, where you had a situation in which an individual was suspected of having a lot of child pornography, and law enforcement was able to get into some of his devices, but not all of them; he used TrueCrypt on certain external hard drives, and when law enforcement sought to compel him to open up those hard drives, in other words to decrypt them, he raised a Fifth Amendment privilege. The Eleventh Circuit basically determined that in that case, partially because of the use of TrueCrypt, the government agent, and I don't want to be too definitive here, couldn't really say whether there were actual files on those drives or just a bunch of nothing, so to speak; TrueCrypt essentially creates potential evidence that is indecipherable to the point where law enforcement can't even offer a reasonable prediction that such files exist. In that case, the Eleventh Circuit determined that the decryption would be testimonial. In another case, in the Third Circuit, not 100% the same facts but close enough for purposes of our discussion, the Third Circuit looked at the situation, and they did this in footnotes, so it's called dicta, but it's pretty strong dicta: they said that, unlike the Eleventh Circuit, the government didn't have to show that it was likely that those kinds of files were on the external drives; all the government had to show was that the defendant, in this case it was a defendant, knew the password. In both of these cases, what they were talking about was something called the foregone conclusion doctrine, and basically what that means is: if you are merely asking someone to do something, and the government already knows that this thing exists, in other words, the act of decrypting doesn't give the
government any kind of testimonial information or admission — then it falls outside of what is considered testimonial under the Fifth Amendment. But it is not a settled area of law. So that's something for you all to think about.

Can you give me an example — say, a device meant to make you smarter, think faster?

Yeah, and I think what we're pushing towards is a frontier that, honestly speaking, we have not been working through — not to say that we shouldn't be, but we're not there yet, I'd say. And one thing to really kind of complete the introduction on FDA: there are certain definitions — statutory definitions and legal definitions — that give the FDA its authority and kind of prescribe, if you will, what is a medical device, what are drugs, and what are biologics. That's where our remit comes from as a public health agency, in terms of, again, coming back to that assurance of safety and effectiveness. So when we talk about medical devices — and I'm not using the exact statutory language here — we're talking about devices that are diagnosing, treating, or mitigating illness. In other words, the statute has very clear language around providing something to help a patient who is injured or ill: providing a cure, diagnosing a disease or an injury, treating and mitigating. And I think the lines really start to blur when we get into these areas of augmentation or enhancement, if it's not specific to dealing with an injury or an underlying illness — or, and here I'm speaking very generically, a defect — that an individual, as a patient, needs addressed in some manner. Andrew, can I jump in there?
Yes, please.

My sound cut out a little bit, but the question was about these devices that would, say, augment your intelligence or do something like that, and I want to point out that that's an area where the FTC has taken, and would take, a careful look — to make sure that the claims someone makes about a device, whether it's an Internet of Bodies device or an over-the-counter pill that's not regulated by the FDA, or anything else, are substantiated. You can't lie about the benefits of your product or the existence of scientific proof behind it. So we've brought a fair number of enforcement actions against makers of apps that are supposed to help people — apps that claimed they could detect melanoma, or could treat acne — and where those claims can't be substantiated, that would constitute a deceptive act that the FTC could enforce against, even where the device or product is not regulated by the FDA. And I should have said at the beginning, as Stephanie did, that I am also speaking for myself and not for the Commission in this context — sorry for that omission.

One question I had — I don't know if it has ever come up, for example in the case of the arson situation — is discrimination. The man who had the pacemaker was in a situation where he had no choice but to have a pacemaker if he wanted to survive, and because of that, one could argue, for example, that he was discriminated against. Let's say he had a medical device they were able to pull this data from, and then take a different witness who doesn't have this device — they could never pull that data. So I just want to open up that question beyond just the Fifth Amendment, discrimination and self-incrimination: what's your take on discrimination being a factor?

So it's a very interesting question. I mean, if I am just looking at Fourth and Fifth Amendment doctrine, it, to the best of my knowledge, doesn't recognize discrimination. That doesn't mean that, you know, communities of color
are not often — or that communities that don't have certain kinds of resources are not often — targeted more by law enforcement activities than others. I just don't know that the Fourth and Fifth Amendment are going to be the ways to address that, and I do very much take the concern that you don't generally have a choice about whether to use a pacemaker. Nevertheless, traditional Fourth and Fifth Amendment law doesn't, to the best of my knowledge — in any case I have ever read — acknowledge that kind of claim. And actually, I should add a caveat: I'm not personally saying that the Fourth and Fifth Amendment are the only way to get there; there are other laws that could be important. When you raised your question, the first thing that came to my mind was, frankly, again thinking about communities that are impacted more by law enforcement actions than others — thinking back to law enforcement policies and better kinds of policing. That is not a new problem; that is not a problem just raised by me, but frankly something we've been struggling with for a while. I am not a fan, but Stingrays have long been an interest of mine — I've written on them with a technologist — and I was asked a question the other day about the use of Stingrays, which, as I am almost certain everyone in this room knows, are devices that act as fake cell towers. A Stingray can trick your phone into believing it's a real cell tower, so your phone will give up information as if it were talking to a real cell tower. Phones are better at resisting these kinds of devices when they are operating in 4G, but people who don't have the resources might, for a long time, have phones that are backward compatible to 2G, and because of that may be more susceptible to law enforcement use of Stingrays. So I take your point very much; I think it is a problem that goes far beyond just IoB, one we've been grappling with for a long time.

This is a bit of a sci-fi reference question — sort of, things that we can do to
give us an advantage, like a massive computer and technology to be able to learn and master things. With a computer, I can step away: as a kid I can live my life, and at 18 I can decide what I want. But what happens when you start to implant a system within us? In the parent-and-child relationship, how do we determine what gets implanted in my child — what kind of choice do I have? And then a second part: because this would give some sort of competitive advantage, what if there are companies saying, we'll only take people enhanced to this level — do we do it just to keep up, or do we not?

So, on the first point, about children and the choices of parents to, say, give their kids a little extra storage for school: that is something where we might get a little window, a presage, from the cochlear implant cases. There have been cases litigated over whether children should — in some cases against their wishes, or against the birth parents' wishes — be implanted with a cochlear implant, and courts have struggled with these cases. In general, the wishes of the parents are respected, but the child's opinion, depending on the age of the child, is also recognized. So it's not going to be clean. And this goes to the question I was raising about technological baselines slipping. If you think about it, 150 years ago not everyone who had vision issues had glasses, and today it's a precondition of getting a driver's license that, if we need glasses to reach a certain level of visual clarity, we are obligated to wear those glasses or we don't have the legal right to drive a car. So there's a codification of these tech baselines, which are socially constructed like so many other things; different cultures will probably reach different technological baselines. And that's kind of why I left those big ethical questions hanging out there: what is the desirable human of the next generation that we should strive for as a society, and where does mechanical
doping come into the picture?

So I guess I have a few questions. First, there was the cochlear implant in there. Second, along that tail, do many of you have data on biometric patients or implants or things along those lines? And third, you mentioned that some of you have responsibilities along the lines of regulating people's claims — are there any claims of, like, drugs or implants that make you smarter or do anything like that, where you're not going after the makers because the claims are in fact true, and what are your opinions on that?

Susannah, do you want to explain the cochlear implant, or should a lawyer stumble through it?

Thank you. The idea of the cochlear implant is that it's for anyone who has an issue with hearing. It's a technological device that's implanted — in either children or adults — into the cochlea of a person's ear. And many parents are deciding on it for their children without their children's knowledge, when they're too young to even decide. We don't know the whole, full story behind any of those decisions, but they are parallel to glasses, if you will — parents think that this kind of implant is fine, that it's a benefit to the child. But the issue in the Deaf community is that some people want to make it illegal for parents to just automatically implant their child, because many of those children are growing up without sign language, without a Deaf identity. So that's kind of the gist of the cochlear implant.

On the second one — unless one of the panelists wants to take it — I'm not going to. But the third one: to what extent have there been FDA approvals of IoB devices, what is kind of the next generation of devices, and are there any malfunctions that we know of, or recalls, along those lines?

So the examples that Andrea provided in the original slide, under, I guess, the phase-two implantable devices — certainly FDA has approved those devices. That includes implanted insulin pumps, pacemakers, neuromodulators — a huge number of devices that do reside
that are within the body, and they go through a rather rigorous process, in terms of, again, looking at the device's performance, looking at the kinds of testing around those devices, and now, of course, also incorporating security as part of that process, just like any other medical device.

So I would be asking about applications for people who are healthy, as opposed to devices that are for, like, disabilities or medical problems — so that's more in the FTC's realm. Do we have any investigations, open or closed?

So the way I will answer your question is just by making the logical point about the number of connected devices, including Internet of Bodies devices, that are out there — so necessarily you could conclude that there are some that we have investigated and thought that their marketing needed to be looked at. So if you lose my connection, that is why — and I really appreciate you guys having me.

Remember the gap between the FDA scope and the FTC scope: say there is an implant in the eye that is effective, but there is a 20% chance of infection because of the techniques used during the implantation. What is the responsibility of the FDA in that space, and what is the responsibility of the FTC if it would create a medical safety problem? And is there regulation, or are there definitions of medical devices, that need to be updated?

So I'll come back to the medical device: if it's deemed to be a medical device, it's regulated by the FDA, and if it's not a medical device, we will work with the FTC in areas where there are some blurred or hazy lines — and that's been going on for several years already. But with respect to an implant, or any kind of device that is, by regulatory definition, considered to be a medical device, that is under the FDA's jurisdiction, and so safety becomes a prime area of our review, investigation, and follow-up over the course of the lifetime of that device being on the market.

So I had a question — kind of, on the one hand, we're describing off-label use,
for example a healthy person getting an Adderall prescription — and how the FDA would handle something that has a benefit for a specific population but could also be applied to healthy people to make them better.

Can you repeat the question again? I missed the beginning of it.

Talking about over-prescribing medications and off-label use — for example, healthy people seeking an Adderall prescription — and how the FDA would deal with a device that has a benefit for a specific population but that could also be used as a boost for healthy people.

So off-label use: we consider that to be in the hands of the clinicians who prescribe those particular treatments to patients, and that happens a lot. And it doesn't have to be the kind of case you're talking about: physicians will recognize the use of a drug or a device in an area it was not necessarily cleared or approved for marketing in, but where it may have benefit. In the hands of that physician, that physician can, through what's considered to be the practice of medicine, prescribe that particular treatment or intervention. The FDA has no authority in that particular area.

Thanks for the great discussion. Here's a question: I think we're focusing a lot on protecting the individual and individual rights, but we've seen IoT devices go rogue. If you're that cyborg who has medical devices that are connected, and they are in fact the ones going rogue — unbeknownst to the person they're part of, the cyborg is the one doing the distributed denial of service, or something to that effect — how do you stop that without stopping the individual whose life it's part of?

That is an excellent question that we hope the security community will come up with brilliant solutions for, and I think that is a fantastic point on which to end. Thank you all very much for joining us, and thank you to the wonderful panelists.