Welcome, everybody, to my talk, and thank you to the Biohacking Village for having me. The title of today's talk is, Why Is Healthcare Security Hard? My name is Seth Carmody. I spent eight years at the FDA. I started off as a device reviewer, and then I transitioned to tech policy. I recently left the agency and joined a security tech startup. When we start to talk about healthcare security economics, we first need to move past the struggles that security folks face: the frustration and friction, the sticking points, the lack of funding, the lack of traction, the "this problem is too big" kinds of thoughts. Once you are able to move past that, you start to observe some patterns. And what you realize is that these patterns are really just people. It's a collection of people making decisions, and those decisions are based on incentives. As the dude said, you look for the person who benefits. And indeed, once you start to unpack why people in the healthcare space make the decisions they do, especially around security, you start to understand the tension I felt as a regulator and still feel now as a tech supplier. This talk is really a call to action. It's the beginning of shining a light, if only briefly, on the economic forces at work, to provoke questions and ultimately to prioritize where we spend our career capital. Since our time is precious, we want to spend it on the most impactful endeavors.

Professor Ross Anderson wrote a paper around 2001 called Why Information Security Is Hard. I didn't know that when I was titling my talk for the Biohacking Village, but it's clear now, after researching this talk, why the titles align. In that paper, which as I understand it launched formal research into security economics, Anderson described three market forces that make information security hard. He talks about network effects, such as the race to be first to market: today we launch, tomorrow we fix. He talks about the low marginal cost to produce software, which means software is sold on its value, not on its cost. And lastly, he talks about technical lock-in: the value of a software builder's company is equal to the total lock-in of all their users. So while I'm presumably two decades behind, Professor Anderson states in a recent edition of Security Engineering, the prominent textbook he's authored, that medical devices specifically are an area that needs additional research. Today I intend to show the healthcare version of his key economic insights that make healthcare security hard.

Whenever we start to talk about hard problems and things that make them easier, I tend to draw on things I know well, and I know the security space and the chemistry space very well. So I start to think of catalysis. The general concept is that the universe says if you want to travel from point A to point B, that's going to cost you; in catalysis, we're talking about energy. But if you add a catalyst, it's like a discount from the universe, like a wormhole, allowing you to reach your destination using fewer resources. And given the significant resource constraints in healthcare security, if we want to be successful, we have to seek out the economic catalysts. So let's pose a thought experiment. It might be helpful in decomposing the giant problem of healthcare security into some bite-sized chunks. Let's consider the following thought experiment, given one specific resource: time.
The baseline projection on the bottom there basically says that to reach some reasonable goal of, say, 90% safety, which is fairly arbitrary, but let's start there, could take anywhere from 15 to 55 years on our current trajectory and with our current progress. And this is the tension I feel. I felt it as a regulator responsible for the progress of medical device cybersecurity, and I feel it now as a technology vendor responsible for making security easier. It's the same problem and the same operating incentives. And I still feel this sense of urgency. Maybe nothing catastrophic will ever happen; maybe my concern is overblown. But this is the exact point of being prepared for a cyber event, right? You prepare for the worst and you hope for the best. It would be perfectly acceptable to me to over-prepare and never need to reap the fruits of preparedness. But whatever your viewpoint is on urgency and preparedness, hopefully we can agree that regression isn't acceptable. And without a hard look at, and a reckoning with, the security economics, I fear the probability that progress languishes or diminishes is too high. If we can effectively identify the catalysts, we can compress the timeline to a decade or so without any additional effort.

When I envisioned this talk, I really wanted to talk about supply chain and liability together, because they're literally the one-two punch of security economics. And when I say liability, I mean that when people produce technology, there's some culpability for the security debt that's ingrained in that piece of technology. But liability felt like it was outside the bounds of this talk, because there are real problems in the supply chain that have to be solved first, and those are what we'll talk about throughout. The topics of equipment phase-out, legacy equipment, and simulation exercises are important, but they won't be covered today.

I like to frame the concept of healthcare security economics within the supply chain. So that's the first problem we're going to tackle, and it's a useful framework for all the discussions that will follow. It's kind of low fidelity. Maybe it's only one example; there are plenty of permutations on the supply chain. But here I think we have at least some of the major players represented. There are certainly some simplifications and parts missing. The general flow is that, if we consider that there are a bunch of builder and buyer relationships within the supply chain, we can start to tease apart some of the operating incentives. Technology vendors, from big to small, from security to general-purpose tech, supply medical device manufacturers, who are active buyers in this regard, and who then reassemble these pieces of technology into a medical device or some medical technology. This is specific to medical devices, but they're building it to achieve some intended effect. It could be a pacemaker. It could be an insulin pump. It could be a linear accelerator. In general, we'll assume that the medical device manufacturer, who is regulated principally by the FDA, though there are other regulatory forces, sells principally to a healthcare delivery organization or hospital system as the buyer. The hospital then facilitates the interaction between the device and the doctor, and the doctor delivers the therapy of that device to the patient. And all of these stakeholders have a role in the supply chain economics.
But the thing is, healthcare's core competency is healthcare, not security. Therein lies the first principle of healthcare supply chain economics, and it represents a significant barrier to overcome. We typically hear about where the security debt builds up. We hear about what hospitals are doing to secure medical devices and other technologies. And just yesterday, IBM released a report that basically said healthcare is lagging in all key security indicators. They spend less. They take more time to resolve things. They use less technology to solve a security problem. And this is indicative of exactly the thing I'm talking about. It's because healthcare's core competency, as it should be, is healthcare, not security. It's a core economic principle that specialization is efficient. You wouldn't want hospitals to be security experts, because a jack of all trades is a master of none. If you try to make healthcare stakeholders of all varieties into security experts, you'll get worse healthcare and inadequate security. Furthermore, trying to make everyone a security expert is the uncatalyzed path: it will cost the most and take the longest, and if we take this path, I doubt we will ever collectively succeed.

Make no mistake, secure tech is hard to build. If we step back one level from the healthcare delivery organization to the manufacturer in the supply chain, from the manufacturer's perspective there has historically been a lack of out-of-the-box secure-by-design solutions. If MDMs, as I like to call them, needed an operating system, a wireless chip, or a protocol, they were under similar constraints as HDOs: in order to build life-saving technology, they had to use what was there. And because of the economics that Professor Anderson outlines in his 2001 paper, the components that MDMs pulled from technology vendors' shelves weren't built with security in mind. Therefore, the security debt was passed to the manufacturer and, in turn, passed down the chain. Now that security has emerged as top of mind for regulators like the FDA and some leading HDOs, device manufacturers are at an inflection point where they need to buy secure components to build and maintain secure devices. And this transition is hampered by the security-economic truth of healthcare: healthcare's core competency is healthcare, as it should be. Don't get me wrong, device manufacturers absolutely should be building some security engineering capacity. These folks are the guides to the security universe, and you need them. To what extent that capacity is built really depends on each individual company. Yet despite the receptivity of device manufacturers to transition and build security capacity, every product security person I've ever met, even the ones who have experienced tremendous success, has struggled in some capacity and at some scale to get the organization and individual business units to do what's necessary for security, for reasons we'll discuss throughout the talk. Since the transition is arduous, security folks within the device manufacturers need the tech community's help. Device manufacturers need as many out-of-the-box secure-by-design solutions as they can get. And make no mistake, secure by design, leveraging zero trust architectures, while not the topic of my talk, is a prerequisite for everything that follows.
We can now confront the reality and magnitude of the challenge before us, because despite the economic reality, the healthcare supply chain must be secured. We have a saying in chemistry that the best model system is the real thing, and you won't know until you try, which means there is no amount of testing that can accurately model real life. I'm not the only one who thinks this way. The story of the Therac-25 is a classic case study for lawyers, computer scientists, medical physicists, and so on. The case is exquisitely described in Nancy Leveson's work in a few places: there's Appendix A of her book, Safeware: System Safety and Computers, and I think there's a separate paper as well. If you're not familiar with the story, a new linear accelerator experiences failures, over- and under-irradiates patients, and kills some of them. It's the nightmare scenario every manufacturer seeks to avoid. The story is probably so cliche and cherry-picked that eyes glaze over every time it's mentioned, but I'd like to use it here to deliver two key messages.

In particular, there's an FDA quote giving feedback to the manufacturer on their corrective action plan, their plan to fix the issues. The FDA reviewer says the following: we are in the position of saying that the proposed corrective action plan can reasonably be expected to correct the deficiencies for which it was developed; we cannot say that we are reasonably confident about the safety of the entire system to prevent or minimize exposure from other fault conditions. And if I can speak for this FDA employee, I think what they're trying to say is what Donald Rumsfeld said so well: there are known knowns and unknown unknowns. There are simply properties of the system that we cannot accurately predict. That gets to the second point: there is an underlying devotion and subscription, even today, to extrapolating the rate of component failure to the rate of system failure. When we rely on deterministic values for rates of system failure, history has shown us that those risk models break down. Simply put, the classic probability-times-severity model is an oversimplified model that trades tractability for fidelity.

This exact model breakdown is further exemplified by Richard Feynman and his Rogers Commission work. On January 28, 1986, 73 seconds after liftoff, the Space Shuttle Challenger broke apart, killing seven crew members. In the resulting Rogers Commission investigation, commission member and Nobel laureate Richard Feynman, a theoretical physicist, excoriated NASA management for extrapolating component failure rates to the entire shuttle system. Essentially, managers had concluded from component failure data that if the shuttle launched once daily for 300 years, NASA would experience exactly one catastrophic failure, or one failure in 100,000 launches. Feynman's instincts told him otherwise, and conversations with engineers resulted in a more realistic figure of one failure in 100 launches. Don't get me wrong, device manufacturers absolutely need to test to reduce risk. However, you will not know the full extent of the risk until the product is on the market; that's a fact. That's why the bar for marketing is not 100% safety and 100% effectiveness. The legal bar is reasonable assurance of safety and effectiveness, and in complex systems with emergent behavior we need to collect post-market data, which manufacturers do; it's just suboptimal. Data is important, especially if the data indicate a near miss.
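To make the gap between those two estimates concrete, here's a minimal sketch, using only the rough figures from the talk (one failure in 100,000 launches versus one in 100) and assuming independent launches; the launch count is the shuttle program's real total of 135 flown missions, but this is an illustration, not NASA's or Feynman's actual analysis.

```python
# Illustrative sketch only: rough figures from the talk, not NASA's actual model.
# Compare management's claimed per-launch failure rate (1 in 100,000) with the
# engineers' estimate Feynman elicited (1 in 100), over 135 flown missions.

launches = 135

def p_at_least_one_failure(per_launch_rate, n_launches):
    """Probability of at least one catastrophic failure in n independent launches."""
    return 1 - (1 - per_launch_rate) ** n_launches

for label, rate in [("management, 1 in 100,000", 1e-5),
                    ("engineers,  1 in 100    ", 1e-2)]:
    print(f"{label}: P(>=1 loss in {launches} launches) = "
          f"{p_at_least_one_failure(rate, launches):.3f}")

# management: ~0.001; engineers: ~0.742. The program actually lost two orbiters.
```

The point isn't the exact numbers; it's that a model built by extrapolating component data can be off by orders of magnitude from what the system actually does.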
Also as part of the Rogers Commission, Feynman had to explain to NASA what a safety factor was. Some early tests of the booster rockets' O-rings resulted in the O-ring burning a third of the way through. NASA managers recorded this result as demonstrating that the O-rings had a safety factor of three, and Feynman had to explain that the safety factor was in fact zero. To paraphrase Feynman's example, if a 1,000-pound truck drives across a bridge designed to bear 3,000 pounds and a crack appears in a beam, even just a third of the way through, the safety factor is now zero; the bridge is defective. There was no safety factor at all, even though the bridge did not actually collapse. The data for the O-ring was there.

To further build on this idea of systems and the uncertainty in systems: as a regulator, I had a front-row seat to quite a few economic externalities for security, from routine vulnerability disclosures to global cyber attacks. What has been the impact of these events? We have lots of disclosures with a dash of coordination and a pinch of publicity. We've got WannaCry and NotPetya and a handful of large-impact vulnerabilities that have us questioning why we chose this field. As a manufacturer, if a disclosure drops my stock price but the price ultimately recovers, it would be perfectly rational to greet the event with a sigh of relief: I'm glad that's over. What would be the incentive to change in that case? It would be very easy to play the event off as a black swan or an outlier, rather than treat it as a symptom of a larger underlying issue, or as a cue to look under the hood or out into the field and see if there's anything else there. Furthermore, because healthcare's job is healthcare, hospitals still need to buy the technology, and those darn patients just won't spontaneously get better. It wouldn't surprise me if there were negligible impacts to device and technology sales as a result of those attacks, potent as they were. So what should we do when our crashes, our visible events, are collectively shrugged off? Especially when folks know there's nothing really preventing a WannaCry 2.0.

This is the perfect job for the regulator. If the market fails to deliver the correct incentives, that is a primary example of the role of the regulator. If you look at the history of the FDA and think about why the FDA exists, it's about resolving information asymmetry for the consumer. The average person cannot assess the safety and efficacy of a drug or device. And even if they could, it would be economically inefficient, not to mention ridiculous, if before you ate, or started chemo, or passed out from a cardiac event, you had to evaluate the safety and efficacy of the product you needed. So you need an impartial body to see what's in the meat or evaluate the claim of a drug or device. And therein lies, again, the security economics of healthcare: healthcare is healthcare, not security. As a consumer and a patient, you're reliant on the entire supply chain to deliver healthcare and to deliver security. And it just doesn't work efficiently. It doesn't scale. It doesn't solve the problem on a timeline that we desire. Nassim Nicholas Taleb, the author of The Black Swan, offers us another brilliant explanation for the challenge of healthcare's transition to security. A black swan event, a low-probability event, is often disregarded as impossible, if it's regarded at all, and therefore isn't present in our risk models.
Now, Taleb was talking about the uncertainty present in financial systems, but the concept applies directly to healthcare as well. We're so concerned with probabilities, and indeed they are the foundation of risk management, that we've convinced ourselves that a security event is a black swan event, not an everyday occurrence. In 1990, the GAO released a report finding that the FDA knew of less than 1% of the deaths, serious injuries, or equipment failures that occurred in hospitals. At the time, reporting from hospitals to manufacturers about these events was voluntary. So the law was amended to make it compulsory, under certain conditions, for user facilities and hospitals to report to the manufacturers and the FDA. The result was a significant uptick in reporting. Separately, in 1999, the Institute of Medicine released a shocking report on deaths from preventable medical errors, which by some estimates are the third leading cause of death in the US, at about 250,000 deaths a year. It was absolutely shocking. This study touched off a wave of efforts in hospitals to corral and reduce those errors, and I've seen recent reports from 2020 suggesting that deaths due to preventable errors have been significantly reduced, to around 22,000.

Regardless of the number, these two pieces of information got me thinking: how many close calls are there? And I wondered what could be gained by leveraging technology and connectedness to automatically capture, aggregate, and analyze data about the performance of the device. In the age of technology, it's absolutely possible and cheap to assess the performance of a device for the betterment of healthcare. I mean, the internet figured out a way to monetize my online behaviors; why not do something else useful with it? Imagine how difficult it was for the Therac-25 manufacturer, AECL, to piece together the puzzle as it emerged. We enjoy the benefit of hindsight, but I imagine in the moment it was quite vexing: you've got people dying and you can't even reproduce the error. But if you could further leverage the technology already on board, capture the events black-box style, and pool and analyze the data, maybe it becomes far easier and far cheaper to respond meaningfully to the real world. We need to pay more attention at the system level. We need to look deeper. We need to collect data. I believe that in the real world, everyday devices experience security events, and I mean events as broadly as possible. People may not be dying, but I'm sure there are near misses all the time. And if we can show the data, that creates the incentive to build secure healthcare technology from the ground up, using secure-by-design components from the tech industry, and it helps us answer the question of how much security is adequate. So let's measure it. Heisenberg said it best: measuring something changes the outcome. Now, he meant it for subatomic particles, but we'll take it here for changing the outcomes of healthcare for the better. And honestly, no single security event perceived to be a black swan will get us across the finish line for security. It will be the innumerable cygnets hiding in the weeds of everyday data. (And today I learned that cygnets are baby swans.) Costanza said that he needed to learn risk management because it was on his resume. Let's not fall into the trap of using oversimplified risk management models because we've built our careers on them.
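Before closing, here's a minimal, hypothetical sketch of what that black-box-style pooling of events could look like in practice; the field names, event types, and threshold are invented for illustration and aren't taken from any real device, product, or standard.

```python
# Hypothetical sketch: devices emit structured event records, and a pooled
# analysis surfaces near-miss patterns that single incidents would never reveal.

from collections import Counter
from dataclasses import dataclass

@dataclass
class DeviceEvent:
    device_model: str   # e.g., "infusion-pump-x" (made-up model name)
    event_type: str     # e.g., "auth_failure", "watchdog_reset", "dose_anomaly"
    severity: str       # "near_miss", "no_harm", or "harm"

def near_miss_hotspots(events, min_count=5):
    """Count near misses per (model, event type) and flag anything above a threshold."""
    counts = Counter(
        (e.device_model, e.event_type)
        for e in events
        if e.severity == "near_miss"
    )
    return {key: n for key, n in counts.items() if n >= min_count}

# Usage: pool events from many hospitals, not one site's handful.
fleet_events = [DeviceEvent("infusion-pump-x", "auth_failure", "near_miss")
                for _ in range(7)]
fleet_events.append(DeviceEvent("linac-y", "watchdog_reset", "no_harm"))

print(near_miss_hotspots(fleet_events))
# {('infusion-pump-x', 'auth_failure'): 7}, a pattern no single incident report shows
```

The design point is simply that near misses only become visible, and only become an economic signal, once events are captured in a structured way and analyzed across a fleet rather than handled one incident at a time.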
The uncertainty of behavior in complex systems demands the adoption of new, higher-fidelity models that are plugged into the real world, for as Feynman put it, nature cannot be fooled. In doing so, we'll begin to reveal the latent economic incentives, not just for secure by design, but, I imagine, for a host of important clinical items as well. You'll be able to tease out efficiently whether failures are related to human factors, humidity, cybersecurity, bad boards, a software bug, or it being a Tuesday in April during a full moon; you'll have the data right there. And as we look toward a future of healthcare where outcomes determine who gets paid, this will be the only way. Certainly tremendous progress has been made, and the groundwork for catalysis and acceleration has been laid by that progress. You only need to look to the FDA's and international regulators' recent policies in this space, and quite frankly the efforts of security champions across the supply chain, to understand how deeply passionate healthcare is about fixing all sorts of problems, including security. We can supercharge those efforts by turning ineffectual black swans into effective, everyday economic incentives. Thank you, and I'd be happy to take any questions.