Hi everyone, and welcome to my Biohack panel. I am very fortunate to have two friends with me today, Ken Carter and Eric Dorfie. Today we want to talk a little bit about red versus blue versus green: the logical disconnect between the different teams in large organizations. Gentlemen, please go ahead and say hi to the crowd.

I'll go first. Hey, I'm Eric. I work at a medical device manufacturing company, and I'm currently positioned in an enterprise role, helping oversee and mentor teams as they build their applications and get them out into the public app stores.

My name is Ken Carter. I'm a technologist. I split my time between Boston and Washington, DC. I like to do a lot of different things around security. I came from a finance and healthcare background, so I've spent a lot of time thinking about how to keep things secure, not just for individual companies but for individuals themselves.

Well, let's get right into red versus blue versus green. From your experience, how do these three teams tend to work together in large organizations? Do they do this well? Do they struggle? And what are some of the commonalities you see in the struggles?

Well, I would like to take a swing at that one. Does anyone mind? Sure.

Yeah, so it really depends on the team. My background: I came from a business unit that specialized in devices that one of the panelists here is very familiar with, working with teams to build, maintain, and release these applications across several different releases, doing things like penetration testing, taking that feedback in, and making mitigations or corrections. On large teams, my experiences have been mixed depending on the project and where it is in its life cycle. Is it a new product, or is it something going through maintenance releases? Maybe there's a dedicated security engineer injected into the project from day one, or maybe, due to rollover or people leaving the company, we have to onboard new security engineers onto a new or existing product. So it can be hit or miss, and I've had a number of different experiences, but a lot of really good ones where we're working hand in hand in groups. It's definitely been an evolution for me over my last seven years at the company, right out of college, trying to absorb and meet all these security engineers, regulatory engineers, privacy engineers, these seemingly different silos. At least in my experience, I've been fortunate that those silos have been broken down in some fashion, and I get good visibility and collaboration with the different teams.

I think my experiences are pretty similar to yours, but at one point in my career path I decided to really specialize and learn offensive cyber so that I would have a better idea of how to secure things. And I discovered that most companies really don't accept or invest in red teaming, where a lot of things like chaos engineering live. Companies are perfectly happy with the notion that if a risk occurs, I have a mitigation effort, and when the risk does occur, I'm liable for this amount. So the solve is never about keeping something safe. The solve is about how much I can lose and expect to survive. So it's a very different world for me on the red, blue, green team side versus risk management.
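(A common way to formalize the "how much can I lose and expect to survive" calculus Ken describes is annualized loss expectancy, a standard risk-management formula rather than anything specific to the companies he mentions:

$$ \text{ALE} = \text{SLE} \times \text{ARO} $$

where SLE is the single loss expectancy, the cost of one occurrence of the risk, and ARO is the annualized rate of occurrence. If the ALE stays below what the business can absorb, the risk is accepted rather than engineered away.)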
From a pure security side, it's really fascinating: I often find a split between people who do invest in red teaming, to really understand criticality and vulnerability in their business model, and those who don't.

I think when it comes to vulnerability, one thing people sometimes tend to forget is the person on the other side who has the device, the one who is inevitably at the end of it all. What I have learned from my perspective is that we often speak different languages, and we have different perspectives and context. It's only recently, thanks to Eric, that I've learned a little about the green team's stressors, pressures, and context, and was able to shift. I do believe that if we understand the whole ecosystem, all sides of it, we're able to actually communicate effectively.

Now I have another question, and this might be a touchy subject. Looking at the recent breaches in healthcare, is there one instance where you can say this was handled well?

I'd have to say I can't think of an instance where the security side was handled well, but the mitigation, the legal impact on the business, that's handled pretty well, right? The question of how many lawyers it takes to keep the company from going under, that part is handled really well from a business aspect. From a security aspect, I think a better question to ask is: why are we prioritizing some parts of security over others, when the objective was to secure something and that fundamental objective wasn't accomplished?

It's often said that security gets a budget once the breach has occurred, but that's a little too late in the whole scheme of things. Eric, do you have anything to add to this?

Yeah, I don't have any personal experience with intrusions or breaches in the industry, beyond being a consumer who's been affected, your email and password or whatever leaking from any number of things, whether personal applications, health applications, fitness applications, anything that holds private information: PHI, PII, location information. It's a real drag for consumers that application developers either don't know, or don't care, or face other constraints like time, cost, money, et cetera, so things just don't get built the way they should be from the get-go. One thing I take pride in with my work is that when I get a penetration test result on an application, I'm going to read it and not only understand it but plan to act on it before the application is released. Or, if I obtain this information, I'll provide it to other teams and say, hey, this is a common mistake if you're a new team building out a new application and you just don't know any better, so let me enable you to make the right decisions and have that long-term success. So I don't have experience with a breach; it's more about preventing things so they don't happen. Or, if they do, we've got mitigations in place so that if there's a denial of service, it affects a single individual and isn't systemic. Building privacy by design and security by design from day one.

If I may pull on that thread, that's really, really beautiful. I love that pattern. And to pile on: for me, the way I think about it day to day, even outside of healthcare, just in software in general, is to bring that teaming to the beginning.
To have security as a fundamental principle from day one, I prefer to take my security folks and, instead of keeping a separate team, embed them in each software dev team. Because I can never expect or even anticipate that any software developer is going to be a security expert. I can't count on that. So to hedge that, I'd rather have security folks inserted from day one.

Yeah, if I look back and reflect on a requirement I reviewed in the past from a security engineering colleague, it was very vague and generic, and with my background as a software developer, I don't have formal security training. I don't have those years of experience dealing with these things. So I have to ask: well, what do I do about this? How do I do this? Whether it's looking to open source to find a library that does what we're being asked to do, or leveraging another team, it's really valuable to be efficient at communicating and collaborating with the individuals who know, rather than making everybody responsible for knowing everything.

I'm kind of on a mission at the moment to teach developers security and forensics: building in the information we need to detect, prevent, and investigate a breach. And that's only because, let's face it, we are good at defense and attack, but we're not the ones building the software. We're not the ones influencing the code or what gets written. That's in the hands of green. So I almost see blue and red strengthening the code that's produced by influencing the knowledge that's passed on. How do you guys feel about that?

I think that's a really accurate read: rather than dividing into conflicting teams, have one cohesive team moving forward with the entire stack on board, versus separate responsibilities. I'd rather not have this waterfall event where security is thought about at the very end; I'd rather think about it way ahead of time.

Yeah, I agree completely with you there. I sometimes feel we do a thing I call DevOpsSec, where security always comes last in the line, when we should be doing DevSecOps all together, as one team, as one collective. I often feel like we're trying to fix things at the end of the pipeline, where it's super expensive for an organization or a development team to handle, instead of addressing things right at the beginning.

Now I have another question for you guys. Taking into account the context of medical devices that have already been bought, where do you see the future going in terms of dealing with legacy devices that at some point become vulnerable? Do you think that will be a problem, or do you think it won't be, because we're patching and updating them as we go?

To step into the light of new devices coming online, at my workplace, at competitors, or in the medical industry in general: Bluetooth is a really popular technology because it's on everybody's smartphone, and everybody's got an app for their product. These devices are going to remain connected to the smartphone for as long as the smartphone stays in the embodiment it's in. But to your point about legacy devices: these devices are eventually going to be phased out. They have a certain life expectancy for batteries and support and things like that.
But there's always a risk with every device; it just takes time and money before someone finds something, and we can only do our best to make sure that legacy devices get just as much care as the new products do going forward.

I think it's also a really complicated balance between the consumer, what devices can be easily changed out, and what outlives the support for those devices. Even Google has now made a statement about discontinuing connectivity for older Android versions because of security risks. And I imagine that while Bluetooth-connected insulin pumps can be changed out because they're external, a Bluetooth-connected pacemaker or heart-control device of some kind may be a very, very different problem. So I think there's going to have to be a balance somewhere in how you prioritize what you're going to support legacy-wise.

I think that might be the best framing I can think of for trying to help humans live longer. One of the things I learned very early on in my research is that, yeah, these things are connected to patients. I mean, I've lived with one since I was 19. But sometimes the question is to patch or not to patch. These cardiac devices can end up requiring surgery for a patient who wants one removed due to security concerns. So I think it's very important how this message is communicated to the patients affected. A lot of them are elderly, or not technologically savvy. Now, do you guys feel that healthcare professionals have actually caught up with medical technology and are able to explain cyber risks and advisories to patients? And how could that be addressed, if it's not being done the way we expect it should be?

I feel that's a matter of what the person's priority is. A medical professional's priority is to explain the health benefits and risks. I think there are going to have to be ways to quantify cyber risk as part of health risk for an MD to really care about it, be passionate about explaining it, and have the bandwidth to even get to the depth of why it's critical.

For me, when we're thinking about education for our patients, within our systems and our applications, there are a couple of different personas that emerge. There are individuals who simply don't care. They put everything on Facebook, they put everything on the internet, they use the same password for every single thing, because maybe that's how they remember it, or they just don't know any better, or don't care. And then there are individuals like myself: I only buy Apple products, I have the mindset that my data is better protected in this ecosystem compared to the alternative, and I make conscious choices about it, using password managers, secure private email, all these kinds of things. As we see more and more devices introduced to the market, and not just for the elderly, because there are new diagnoses, new technologies, innovation every day, we're going to have more and more users with the mindset of: I'd like to know, what can I do? And for that, we can certainly provide education, whether it's material presented directly to a user within the application, like when you're setting up your email and password, mentioning the requirements around case sensitivity, length, strength, et cetera. And please remember it, and please don't make it the same as everything else.
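(As a sketch of the kind of in-app guidance Eric describes, here is a minimal Kotlin example that surfaces unmet password requirements at sign-up. The specific policy values are illustrative, not any vendor's real rules; the reuse advice can't be checked in code, which is why it stays a prompt to the user:

```kotlin
// Minimal sketch of in-app password guidance: report which
// requirements are unmet so the UI can show them at account setup.
// Policy thresholds here are illustrative only.
fun passwordProblems(pw: String): List<String> = buildList {
    if (pw.length < 12) add("use at least 12 characters")
    if (!pw.any { it.isUpperCase() } || !pw.any { it.isLowerCase() })
        add("mix upper and lower case")
    if (!pw.any { it.isDigit() }) add("include at least one digit")
}

fun main() {
    println(passwordProblems("hunter2"))              // too short, no upper case
    println(passwordProblems("Correct4HorseBattery")) // [] -- acceptable
}
```

)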
The other side is when we're educating clinicians and field reps about how the technology works, to make sure there's no concern and that patients really get peace of mind, because they are trusting us to keep them going, or to help them in any way we can through the device or therapy we're delivering. A lot of that education comes in through teaching and providing that information; that's a really good way to achieve it.

Well, I have some questions aimed specifically at the two of you, because of the perspectives you each come from. So, Ken, what motivates someone to look into hacking medical devices? I get asked this question a lot; people look at me strangely, like there should be something wrong with me, right? What is your motivation behind doing penetration tests on medical devices and, you know, breaking into them?

Sure. My personal motivation is to help a lot of my friends who have these devices. Since I am a security-minded person, I'd rather take the shot and see: are there vulnerabilities I can expose? And there are. The second half of that is responsible disclosure: how can I communicate with these companies to disseminate information in a secure way, so we don't actually risk human life? So my personal motivation is to help friends, to help human life.

So no plans for world domination yet? No.

Now, Eric, same question, just slightly reframed for you. How does one become a software developer or engineer for medical devices? And what is it like knowing that you're designing something that has a physical impact on someone's longevity?

A couple of my personal experiences. For me, fresh out of college with a computer engineering degree, having worked with microchips and regular computer code and whatever else in college, I got a job lined up out of school as an associate, very entry level, but I very quickly became attached, because I have extended family members who have our devices, and friends and colleagues do as well. So there's a personal connection that has been a drive for me to accelerate and achieve in this space. But generally speaking, you can do things like learning about security, learning about building reliable applications, learning how to build something that's private by design and doesn't collect data unnecessarily just to turn a profit. That's not at all what we're in the market for, because we're in the life-saving business, not the data-selling business. So it really depends on where you're looking to get into things. If you want to develop the product, I can speak to mobile applications: just dive in, there's a ton of resources online to get into mobile technologies. I can't really speak to the embedded firmware or implantable devices in terms of how to get into that space. But it is certainly a different way of thinking, even from a mobile perspective. If I compare it to individuals who work on server technologies: the server will scale. We host things in the cloud, every company does it; you put more money in, the server scales, you've got more resources. If you work in the mobile space, well, your application is limited to what Apple and Google want to give you.
Go over that budget and they're going to terminate your application, and then the user has to relaunch it to resume whatever they were doing. And then on embedded devices, you have to be very, very particular about every single byte of memory management. So there are different levels of scrutiny and different constraints depending on which part of a system you're building, all very interesting and equally different in their own ways.

I think what's very obvious is that the motivation behind all of this is making things safer, more secure, and better built, even though we come from very different backgrounds. For me, it's more about: how do we detect and investigate that something has actually happened on a device? Do we currently have the capability to look at these devices and even know they've been the target of a cyber attack? That's where my passion lies, and the reason is that we don't know there hasn't been a cyber attack, because we simply don't have the information to know. But also, I have a device, so I want to know that my heart is safe in the hands of the people who built it. I actually got to meet the developer behind the firmware on my device, and it was a very emotional meeting, because you're looking at the mind that sculpted the thing that saves your life every day.

Now, looking at what we can do better going forward: can you speak to big organizations? Medical device manufacturers are big. How do they successfully navigate building secure software and navigate that winding road to success?

That is an enormous question, and one I think about all the time, to the point where I'm actually starting to write something about it. There are a lot of processes out there today, right? There's ISO 31000, there's the traditional risk formula: what's the likelihood versus what's the level of consequence? The problem is that both of these are predicated on the quality of the data you put in, and even then it's a best guess. They also largely ignore the treatment process. So I've been thinking about my experience writing software, how I produce software, how my security is embedded, and I came up with a concept I call the risk budget. The risk budget is really about: how do I talk to people equally on both sides? How do I have my security team able to express security problems and security concerns, and how do I have non-practitioners, the rest of the organization who don't sit in security, come to the table and still have that conversation? That's something I've been framing recently, writing out the process that's been successful for me. But at the end of the day, a risk budget is this: you want to subtract the logistical problems from the actual security problems, and do the actual security work around what's important and critical, because you can't secure all the things all the time, and you sure cannot secure everything all at once. So: prioritization, criticality, and a plan to go forward.
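(For readers unfamiliar with the "traditional risk formula" Ken contrasts his risk budget against, it is usually written as

$$ \text{risk} = \text{likelihood} \times \text{consequence} $$

which is exactly why, as he notes, the output is only as good as the likelihood and consequence estimates fed into it.)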
In my role, there are a couple of different things that I can do, am doing, or am planning on doing. Take certificate pinning to prevent man-in-the-middle attacks on your network stack: this is something you're not going to read about if you're checking out the Apple or Google developer websites. They're just going to show you how to make the network call. They're not going to tell you why you need to do this. You're only going to know why if a security engineer, or a colleague who's done it before, has told you, for either reactive or preventative reasons, that you need to do it, and then you go do it. Things like that, where it's simple enough and it takes a lot of risk out of play: just do it.
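(A minimal sketch of the certificate pinning Eric mentions, using the OkHttp library on Android/JVM. The hostname and pins are placeholders; in practice each pin is the base64 SHA-256 hash of your server's public key, and you carry a backup pin so a certificate rotation doesn't lock users out:

```kotlin
import okhttp3.CertificatePinner
import okhttp3.OkHttpClient

// Certificate pinning with OkHttp: the client refuses TLS connections
// unless the server's public key matches a known pin, defeating
// man-in-the-middle proxies even when they present a CA-signed cert.
// "api.device.example" and both pins below are placeholders.
val pinner = CertificatePinner.Builder()
    .add("api.device.example", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=") // current key
    .add("api.device.example", "sha256/BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=") // backup key
    .build()

val client = OkHttpClient.Builder()
    .certificatePinner(pinner)
    .build()
// Any client.newCall(...) to api.device.example now fails fast with
// an SSLPeerUnverifiedException if the presented key doesn't match.
```

)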
Make standards, build in tools that you can inspect, hire pen testers, do internal testing, and make sure your environments can be manipulated in such a way that you can prove your application is doing what it's supposed to be doing. There are other strategies you can put into your applications for other security mechanisms, like preventing denial of service on your implantable device, or denial of service on the application that connects to it. These are ideas you can rinse and repeat across projects, though not necessarily with the same implementation, because, as an example, a cardiovascular implantable device is far different from a diabetes pump in terms of what it does: one delivers electrical therapy, another potentially delivers insulin, and other devices may not deliver any therapy at all and just read data. So it's a mix of creating reusable components and tools that you can put into place and enforce, advise, and encourage teams to implement, and building a documentation strategy that sets you up for long-term success, making sure you're checking the right boxes and doing your due diligence.

And then, maybe to call you out on a couple of things: some of the conversations we've been having were about developing logging as a first-party feature of an application, for any number of reasons, whether it's for the developer, for the forensics engineer friend you've got, or for field tech support, whoever it may be. Building robust logging solutions enables you to analyze things during development, during your initial release, and all the way through production, legacy, and retirement of the device, monitoring at every point in the product's life cycle.

Thanks for calling me out. I'm the one hosting the panel; I'm not supposed to be the one called out. Well played. I think what I've realized is that we all do incident response. All of us: red, blue, and green. How I came to that conclusion: Eric was explaining to me that they do in-field investigations when they have to debug something, and I thought, that's an incident. Your incident is that your device or your application is not functioning. And I know from when I've done some pen testing and reverse engineering, well, that was an incident as well; I was investigating how to break into something. And then you have me on the forensics and incident response side, looking at a security breach. So we all end up looking at the logs. But often the logs are designed by developers, for developers, not necessarily taking into account what we need for security or what we need for our investigations, and sometimes even leaving trails for a red team to find. I don't know if you guys know the recent statistics on sensitive information disclosure in data breaches: it's said that in 80% of data breaches sensitive data is disclosed, and a lot of that information comes from logs. Why do you think sensitive data ends up leaking into application logs?

One thing might be the understanding of trust between systems. If I am putting a log on my app and I send that log to my server, in theory it's encrypted at rest and encrypted in transit. And if I am naive, don't know any better, and just trust everything, then I'll go about my business. But if you've had conversations at length with a member of another color team, you might realize that the log is intended to be placed in a data warehouse or secure storage, yet somebody might come across some of the information by accident. Or, if you're analyzing a bug in a mobile app and you look at the logs, and there's information a developer put in there without knowing or considering that it was, for example, PHI or PII, one of the corrective actions you can take is to design your logging requirements so that you're not putting that information in there. Maybe you're hashing identifiers, obfuscating those pieces of information with a one-way hash algorithm. Then, if you hold that information in another context, you can run it through the same one-way hash, resolve the same value, and verify that your logs connect the pieces of information to the things they're supposed to, without exposing the raw values. A lot can happen just by making a developer think in a different mindset about what to log, how to log it, why to log it, who's going to have access to it, and how long it's going to live.
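(A minimal Kotlin sketch of the one-way hashing Eric describes; the salt and device serial are hypothetical placeholders. Because the hash is deterministic, the same identifier hashed in another context resolves the same token, so log lines stay correlatable without ever carrying the raw PHI/PII value; a per-record random salt would break that correlation property, which is why a fixed salt is used here:

```kotlin
import java.security.MessageDigest

// One-way hash an identifier before it reaches any log line. The log
// stays correlatable (same input, same digest) without exposing the
// raw value. Salt and serial below are hypothetical placeholders.
fun logSafeId(rawId: String, salt: String = "log-pepper"): String =
    MessageDigest.getInstance("SHA-256")
        .digest((salt + rawId).toByteArray())
        .joinToString("") { "%02x".format(it) }
        .take(16) // a short prefix is plenty for correlating log lines

fun main() {
    val serial = "SN-000123"
    println("session opened, device=${logSafeId(serial)}")
    // Elsewhere (server side, support tooling), hashing the same serial
    // with the same salt resolves the same token, as Eric describes.
    println(logSafeId("SN-000123") == logSafeId(serial)) // true
}
```

)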
Very well thought out. The other thing is the transport of the logs as a whole. The thing I would worry about, I guess from the red teaming side, is that I tend to use opportunities like where logs go, or how logs are transmitted, either as a way to exfiltrate data or even as an attack surface. If I can reach the transport mechanism the logs use, then for me that's a way in. So I think there also has to be some meaningful way for the security team, the logging team, and the dev team to talk about how to transport logs securely and how to ensure it's a one-way ingress.

You know, this is the kind of thing I like hearing, because people often don't realize that the logs are the key that opens the door for the red team. Now, building on that red team talk a little, the last question I have for you guys: to zero trust or not to zero trust?

I think that's a hard question for me when there are so many, in this case, edge devices moving around. I don't feel it's the device itself that you have to lock down; it's the communication between the device and whatever it talks to. In most instances, that would be the Bluetooth connection. So what can we do to optimize security around Bluetooth? We make the assumption, and accept from the consortium that manages Bluetooth, that there's a bunch of security built in, but simultaneously there are a bunch of problems around the authentication scheme as well. So for me, it's not so much a question of zero trust or not zero trust, because the fundamental connection Bluetooth makes between devices is the bigger attack surface, and I'd worry about securing that first.

I might have a slightly different thought process on this. Being in the mobile space, I don't only think about whether the iPhone platform as a whole is secure, or whether Google Android is secure. I also think about: that third-party library that saves me a little bit of time, should I be using it? Should I just learn what it's doing and write it myself, and control it myself? Should I research an algorithm that does whatever it needs to do and write it by hand? What is the trade-off of accepting some library? Do you trust it blindly, or do you use it with caution? I always tend to write things by hand if I can, write something that we control internally, share it across the teams, and keep revisions on it. Open source is a wonderful place: the source is publicly visible, which has benefits, like people reporting bugs that you can then inherit and fix through the updates made to those repositories. But my natural tendency is to not trust anything and to build mitigations many times over, as often as we can, by doing a risk analysis. What if the flash memory is compromised? Well, then you encrypt your data so it can't be read. What if they can get the keys? Well, then the keys live in a different place than the data. So you build in several layers and walls around it. At some point you're going to have to trust something, but with mitigations in place, you're burning down that risk as much as you can.
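(A sketch of the layering Eric walks through, in plain Kotlin with javax.crypto. On a real device the AES key would live in a hardware-backed keystore such as the Android Keystore, physically separate from the flash holding the ciphertext; that separation is the point of his "keys in a different place" answer. The payload string is a made-up example:

```kotlin
import java.security.SecureRandom
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.spec.GCMParameterSpec

// AES-GCM encryption of data at rest: compromising the storage alone
// yields ciphertext only; the key lives elsewhere (in production, a
// hardware-backed keystore rather than an in-memory key as here).
fun main() {
    val key = KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()

    val iv = ByteArray(12).also { SecureRandom().nextBytes(it) } // fresh IV per message
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, key, GCMParameterSpec(128, iv))
    val ciphertext = cipher.doFinal("glucose=5.4 mmol/L".toByteArray()) // hypothetical payload

    // Decryption requires BOTH the ciphertext and the separately held key.
    cipher.init(Cipher.DECRYPT_MODE, key, GCMParameterSpec(128, iv))
    println(String(cipher.doFinal(ciphertext)))
}
```

)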
I love that. That's something I worry about constantly: the software supply chain, right? For me, I tend to go the open source route more and more and trust certain libraries, but I check and verify for myself before allowing their use. I tend to corral how my developers can pull down certain libraries; I control my network so folks can't just go out and grab whatever they want. There's a verification process for what is trusted and how we can maintain it. I try not to fork existing open source projects, because for me it's hard to maintain them, and it creates separate domain knowledge that has to live in a particular team, which I personally try to avoid. Granted, I'm not creating a device like you are, so I don't carry that same weight. But I also think about sustainability.

Yeah, there's definitely a need sometimes to use a well-established library that specializes in one thing. They are the experts in that case, not me as an individual developer, or even my team and colleagues. So there are trade-offs, and you have to make sure updates get out to the field in a timely way, so that vulnerabilities, if identified, get patched, along with new features or whatever else benefits the patient. That all comes into play when making those kinds of decisions.

So, to end our time, I would like to end on a positive note. What is the one piece of advice you can give to manufacturers, engineers, or someone generally in this space? Or what is the one thing you want to highlight that they've done well? Something positive to end on, because the world's been in turmoil for what's going on two years now. Ken, you're up first.

Thanks. Well, first of all, thanks for inviting me here; it's been great. Hearing folks like yourself and Eric talk gives me a warm fuzzy that things are obviously being done very, very well at Medtronic. So kudos to Medtronic and your teams. As for advice: stay curious. It's how I've survived in the security game this long. Always wonder; don't always be scared. Security is a weird frontier to be in, and at times it can be scary, but fear should not be the motivator, or the demotivator. The end goal should be how we can all get to the same equitable share of data and everything else.

Yeah, I'm really excited for the future. I think you hit one of the big things I had in mind: being curious. As a mobile developer, I like to partake in the annual developer conferences each summer, eagerly anticipating the fall releases and all the new things. You always have to stay on top of things to learn the new technologies. Say you're not building application support for the fall updates from a compile-time perspective; your users are still going to install the latest operating system, because that's what they're told to do by the big Apple and Google marketing campaigns they see all over the internet and on TV. So my piece of advice would also be: stay curious, go research new things. Technology is an ever-changing landscape, especially in mobile; it changes several times a year, and these big companies have reacted quite well to the global circumstances by providing new capabilities and technologies in their SDKs, like HealthKit adding symptom tracking you can use if you're making a COVID application for a state or a country. These are awesome things that are happening, and it's super exciting. I can't wait to see what big companies and small companies alike make with what's coming.

Thank you very much, gents, from my side. I just want to say to the community: if you are curious about medical devices, and you want to research them, and you want to break them, and you want to figure out how they work, do it. Take the chance. If you find something, disclose it responsibly. And if you're a developer and you're curious: often developers make the best pen testers, because they understand how the applications and devices are built. So just be curious, be excited. It all comes full circle. You don't have to subscribe to a single color and say, I'm only green, I'm only blue, I'm only red. The fact is, we're all working together towards a common goal. And from my perspective as a patient, also thank you very much for building these devices, because otherwise I wouldn't be here. I've had mine since I was 19, and I just celebrated my 34th birthday, so I'm very happy with my device. Thank you very much for joining me, and that is all we have for you today. Thank you. Thank you.