Hello, my name is Bill Doherty. I'm the CISO of Omada Health. Joining me today is Patrick Currie, our senior director of compliance, and today we're going to talk about threat modeling in digital health care. Patrick and I are the co-authors of the Includes No Dirt threat model, which we'll be discussing today. If you'd like to follow along with us, you can go to includesnodirt.com and download our white paper on that threat model. We've also got some specific exhibits that we've created just for this discussion, at includesnodirt.com/defcon.pdf. Thank you for paying attention and watching our talk here today. Standard disclaimer: we have put together a discussion that is based on some real-world concepts, but this is not a real-world situation. Nothing we share today should be construed as relating to the products or services of our employer, Omada Health, or any of our partners or customers. With that out of the way, we can dive into it. A little bit of background on us. Omada Health is a digital health care company. And what is digital health care, really? It is the combination of technology, clinical expertise, and humans to deliver better health outcomes. We've been doing this for about nine years now. We really focus on digital care made human. It's not just a machine that you're interacting with; we do it a little bit differently. What we do is pair up devices and applications with remote monitoring and specialists who can provide assistance in our specific diseases to help improve health outcomes. We do that through behavior change, remote monitoring with digital devices, care delivery, lab diagnostics, medication tracking, and a whole bunch of back-end systems that are necessities in health care, like outreach to patients, enrollment and eligibility, billing, and reporting. That's all of the messy stuff behind the scenes. 
We have four diseases that have been kind of our core: type 2 diabetes prevention, type 2 diabetes treatment, hypertension, and, added last year, behavioral health, so treating anxiety and depression. Recently we bought a company called Physera that does digital physical therapy for musculoskeletal or pain treatment. That's us. We're going to talk today a little bit about Sam. Sam is representative of a real participant in our program. She is not real, but Sam would be a participant who has type 2 diabetes and is using our program to try to better manage her disease state. She does that through tracking her blood values; that information gets shared with her coach. Her coach is then giving her advice on meals and exercise and potentially talking about her insulin levels, things like that. Ultimately, what we're driving towards is behavior change. We're trying to give our participants lifestyle improvements that will help them better manage their chronic diseases. We're going to come back to Sam and how she's managing her diseases in a little bit, in the context of a threat model. This is what Omada does. We do whole-person healthcare. We do this with connected devices and lesson plans and coaching and all kinds of stuff. We do a lot of it. We have, since our inception, served more than 350,000 participants. We have over 1,000 satisfied customers. We have one of the largest data sets in behavioral health; as of last week, we had over 80 million weigh-ins from our digital connected scales. Our participants really seem to like our program. We have a 92% CSAT. That's enough about Omada. We shared that with you because we want you to understand who we are and why we came to do this. Why should we do threat models? Patrick and I started this about two years ago. In healthcare, we are required to do annual risk assessments. The problem with that is nobody ever tells you how. 
We've been doing them for a couple of years and we decided that we needed to up our game. The reason we needed to up our game is because we were doing a typical risk assessment process where we'd sit in a room and just think about things that could go wrong and then assess our risks. The reality about all things in security and compliance is everybody's got a plan until they step into the ring and the first punch comes. Then your plans go to hell. We knew we had blind spots. We wanted to get rid of those blind spots. I love this cartoon, by the way. On the left side, this is typically how we would deal with things in healthcare: we're going to encrypt the laptop because HIPAA says that all the data has to be encrypted at rest, and then what would actually happen is somebody would force us to reveal our password anyway. Every time we give a talk on this, we update this slide. Sadly, I'm never out of companies that have had major breaches in the last six months to update it with. These are examples of really bad things that have happened. Healthcare is by far the number one most breached industry, but everybody gets breached. The underlying factor for all of these companies is they all had really good, smart security teams that were working really hard, that had lots of controls and lots of vendors and lots of stuff in place to try to protect their systems, and yet they still had problems. The reason they had problems is because they had blind spots. Threat modeling is a way to try to eliminate some of those blind spots. The fundamental truism in our business is nobody ever says thank you for the work you did to prevent the disaster that never happened. There's no A for effort here, but doing threat models, and doing them consistently, will over time improve your security and your compliance and your privacy, and it is by far the right thing to do. Let's define it a little bit. In order to really talk about this, we have to have a taxonomy. 
We have to all be using the same language. Lots of people interchange the word threat with risk. We do that too, accidentally, but we had to come to a common language. When Patrick and I were working on this model, we were using the same word to mean different things. We eventually wrote it down. This is our taxonomy. First thing is a system. A system is anything you want to model. Lots of threat modeling focuses on applications and software. That is certainly a system that can be modeled, but so can a business process or a network or a vendor. The defining characteristic is that we want to protect it from specific threats. We just completed our annual risk assessment. This time around, we modeled 26 business processes end-to-end. Systems typically have defined borders. You know what the entry point into the system is. You know what the exit is, and you can then model it for threats. Those borders are sometimes called trust boundaries, which are areas where principals can interact. Sometimes they're called attack surfaces. The key point is understanding all of the areas that an attack or a risk or a threat can come from. A vulnerability is a weakness in your system. Vulnerabilities are things that can be exploited. If you have a weak password policy, that is a vulnerability. It can be exploited. If you leave your front door unlocked, that is a vulnerability. That doesn't actually mean that someone will breach your password or open your door, but it is vulnerable to exploitation. A threat is an actor. A threat can be a person. It can be an employee of a third party. It could be your own business process. It could be a piece of code. Threats exploit vulnerabilities. We call that an attack vector in our taxonomy. Risk in this world, then, is the bad outcome that results when a threat exploits a vulnerability. We can then measure risks by measuring the likelihood of it happening. That's the probability. 
The impact is the cost if it does happen. That's typically how people think about risks. You'll see this often: people trying to measure the impact by putting a dollar amount on it and a probability, and that gets you to an adjusted risk score. Then we talk about inherent risks and residual risks. That's often how risk assessments are done. In our taxonomy, controls are things we do to reduce the probability or the impact of a risk. If your door is unlocked, that is a vulnerability. The key and lock is a control. You can lock your door. That doesn't necessarily mean that nobody will open your front door. It just has lessened the probability of it. It might have increased the impact, by the way. There's no panacea to controls, but we do need to model what the risks are and then what the controls are. When we do that, we can then figure out what the residual risks are. Threat modeling is just an analysis. It's a way of systematically going through and looking at vulnerabilities and controls and threats against a defined list of risks. A defined list of risks is really important, because we can sit around and talk about every bad outcome under the sun. A meteor may strike the planet, but that really isn't a risk we're going to try to model as threat modelers. Lastly, action items. This is the result. What we're trying to get out of a threat model is this: we've looked at all the bad things that could happen, we've measured the probability and impact, we've assessed the controls we have, and now we've got a whole bunch of work that we want someone to go do to reduce the risk. We're going to reduce the risk by creating new controls that either reduce the probability or the impact. That's our taxonomy. If I can add to that while you change slides, one thing that was super critical for us is exactly that taxonomy. 
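The probability-times-impact arithmetic in this taxonomy can be sketched in a few lines of Python. This is a minimal illustration of inherent versus residual risk; the function names and the door-and-lock numbers are ours for illustration, not from the white paper.

```python
# A minimal sketch of the taxonomy above: inherent risk is probability times
# impact, controls reduce one or both factors, and what remains is the
# residual risk. Function names and numbers are illustrative.

def risk_score(probability: float, impact: float) -> float:
    """Inherent risk: likelihood (0-1) times the cost if it happens."""
    return probability * impact

def residual_risk(probability: float, impact: float,
                  prob_reduction: float = 0.0,
                  impact_reduction: float = 0.0) -> float:
    """Risk remaining after controls reduce probability and/or impact."""
    return risk_score(probability * (1 - prob_reduction),
                      impact * (1 - impact_reduction))

# The unlocked front door: even odds someone walks in, $10,000 of impact.
inherent = risk_score(0.5, 10_000)                            # 5000.0
# The lock is a control: it cuts the probability of entry by 75%.
residual = residual_risk(0.5, 10_000, prob_reduction=0.75)    # 1250.0
```

As the talk notes, a control is no panacea: it lowers one factor of the score, and the gap between inherent and residual is what the action items are meant to close.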
Coming from different disciplines, from IT security and from healthcare compliance, we spoke very different languages when it came to risk and threats. Realizing that, reconciling it, and making sure that we had a consistent discussion was really important for us to be able to make breakthroughs on this. If you decide to adopt this model and go forward with it, don't underestimate how important it is to create that taxonomy when you're speaking to your risk organization or your compliance team or your privacy office. Getting on the same page is really important. Absolutely. I could not agree more. Thank you for jumping in so I could take a drink. We would love to think that we were the inventors of all threat models and the geniuses who wrote this down. The truth is we're not. There are lots of very, very good threat models out there in the ether. We borrowed heavily from them. We wanted to walk you through some of those traditional threat models so that you would have these resources available to you to go do your own research and hopefully take what we've done, take what these other people have done, and apply that to your own business, whether it's in healthcare or any other industry. Our starting point was this wonderful book here by Adam Shostack. I think he may be talking at Black Hat or DEF CON this week on threat modeling. It is fantastic; if you don't own it, I highly recommend it. He didn't pay me to say that. We'll talk a little bit more about what's in that, but that's really from the software design standpoint. On the privacy design side, there's this model called LINDDUN. Again, it's excellent. Adam's book largely focuses on the STRIDE threat model. This is something that came out of Microsoft. It was a way of getting software engineers to assess the major threats to applications. They narrowed it down to six areas. 
Spoofing: somebody illegally accessing an application. Tampering: somebody modifying the data. Repudiation: somebody performing an act, and we can't prove who it was. Elevation of privilege: somebody gaining credentials that they shouldn't have. Denial of service: shutting it down. And information disclosure, which is what Patrick and I worry about a lot, which is breaching information. That's the STRIDE model. It's excellent. Please go read about it. If you haven't already done so, get Adam's book. On the privacy side, the LINDDUN model is also excellent. When we started researching this, what became really apparent to us, and we'll talk more about this as well, is that sometimes privacy and security are polar opposites of each other. In the STRIDE model, we're worried about repudiation: can somebody do something and then deny they did it? In privacy, we're worried about non-repudiation: can I do something anonymously? We borrowed heavily from the LINDDUN model as well, but there are some other models out there that are also good. Bruce Schneier wrote extensively about attack trees. Attack trees are one way of brainstorming where you start with an objective, say, I want to open a safe, and then you walk down a tree of all the ways you would do that. How could I open a safe? Well, I'd have to learn the combo. That would be one way. Or I could cut it open. To learn the combo, how would I do that? You walk down that, and then you start figuring out what is possible and what's not possible. Then, once you've done that kind of a model, you can insert controls in there to break up the attack tree. Kill chains came out of the military. Again, it's a way of modeling what needs to be done for somebody to execute an attack. If you interrupt any step of the kill chain, you can impact or possibly prevent the attack. Both are excellent models. Security cards came out of the University of Washington. This is a way of training threat modelers on how to do threat modeling. 
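The safe-cracking walk-through can be sketched as a tiny attack tree in code. This is an illustrative toy in Python, not Schneier's notation; the node names and feasibility flags are our own, and the point is just that inserting a control at a leaf can break up an entire branch.

```python
from dataclasses import dataclass, field

# A minimal attack tree: an OR node succeeds if any child path is still
# feasible; an AND node needs every child. A control "breaks up the tree"
# by making a leaf infeasible.

@dataclass
class Node:
    name: str
    kind: str = "leaf"                 # "leaf", "or", or "and"
    feasible: bool = True              # leaves only: can the attacker do this?
    children: list = field(default_factory=list)

    def possible(self) -> bool:
        if self.kind == "leaf":
            return self.feasible
        results = [child.possible() for child in self.children]
        return any(results) if self.kind == "or" else all(results)

open_safe = Node("open the safe", kind="or", children=[
    Node("cut it open"),
    Node("learn the combo", kind="or", children=[
        Node("find it written down"),
        Node("get it from the owner"),
    ]),
])

# Walk down the tree and insert controls until no attack path remains.
open_safe.children[0].feasible = False               # hardened safe body
open_safe.children[1].children[0].feasible = False   # combo never written down
print(open_safe.possible())   # True: the owner can still be social-engineered
open_safe.children[1].children[1].feasible = False   # owner security training
print(open_safe.possible())   # False: every path is blocked
```

Notice that the attack stays possible until the last leaf is controlled, which is exactly the brainstorming value of the tree: it shows you which branches your controls have not yet cut.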
It's actually a deck of cards. I have a deck that's at my office, which is under lockdown from COVID, but it talks a lot about motivations and resources and methods. It's really just a training mechanism, but they're worth checking out. There are a ton of other models. I found this white paper from Carnegie Mellon on threat modeling. It's excellent. I highly recommend the PASTA model just because I like the name. But while there are lots of models out there, none of them really fit what Patrick and I wanted to do, which was a single approach where we could look at our software applications and our vendors and our business processes, deal with all the intersections between compliance and security and privacy, and ultimately reduce the risk to our organization. Lots of people do brainstorming. Brainstorming is the simplest form of threat modeling. It has its place. We do it too, but it also has its limitations. We don't model everything. We model the things that are applicable to our system. This is kind of a typical traditional threat model. We talked about Sam earlier. We're going to talk about Sam again later, but this is a diagram of how a continuous glucometer, a wearable glucometer, might work and might interface with our company. We draw this on a whiteboard and then say, okay, if we were going to attack this, how would we do it? And everybody starts drawing things, and we'd say, well, I'd do a man-in-the-middle on the application, or I'd do a denial of service on the API, or I'd breach the partner, or I'd figure out some way to hack the device to harm the participant. That is one way of doing threat modeling, but it ignores lots of things. It ignores the motivations and capabilities of the attacker. It ignores the objectives of the system. It also ignores all of the controls we already have in place, because we've already done things; we've got TLS written up here. That's a control: we encrypt the traffic. 
So we want something that doesn't ignore all that and is a little more structured than just a couple of smart people and a whiteboard. Threat models done via brainstorming are limited by your imagination, and failures of imagination lead to blind spots. So this was the problem we were trying to solve, and now Patrick is going to talk about what we actually did, and specifically the Includes No Dirt model. Yeah, exactly. Thanks, Bill. So exactly, that's what we were trying to solve for. There are a lot of dimensions to that. The one comment I would make on the brainstorming thing, and I think we've seen this before in prior practices, is that if your thoughts are limited or if you don't think of something, you don't actually expose that in your conversation. So what this process actually allows us to do is force us to think of things that may not be top of mind when we're actually doing the work, and that regimentation and that process actually drives us to that. Okay, next slide. So what we were looking for in coming up with this process: something that was easy for a non-SME to understand, and something that would be easy for someone to perform. So something that we could give to a non-expert, say someone on my team or on the privacy team, and have them not only understand what we were after and what we were trying to do, but actually be able to deliver and run through in a fairly short amount of time. We wanted something that was flexible and repeatable, something where we didn't have to change the questions every single time and that we could do over and over and over again. We wanted something that was usable anywhere. We didn't want to design something that was great for business processes but really crappy for IT structures, or vice versa. And of course, since we were putting a lot of effort into this, we wanted it to be memorable, because why not, when you're building it? 
And some creative uses of anagram generators actually got us where we are. So, next slide. What we created with this process is a systematized approach to analyzing risks. That pays a couple of different dividends. One is it's systematized and it's easy to execute, like we were just discussing. Interestingly, it's also started to be a key for us in explaining how we think about risks. It structures educational conversations when we have them with staff, so they understand what we're trying to do. It's a repeatable process with objective scoring. So another huge win there: something we can do over and over again, sometimes on the same system, to see changes or see how things evolve. And it gives us an objective score, with some weights that you'll see in the example we'll show you, that helps us compare across risks or even across domains: how we think about risks and what we do first. There's never enough time. There's never enough resource. How do you focus your time? We wanted a system-centered approach: something that is focused on the thing that you're modeling, not on the process itself. We tried to bridge the gap between, say, having a STRIDE and a LINDDUN for different things, and create something consistent. Bill mentioned we focused on established controls, and that's really important. If we've tested the control already, and we're sure it works, and in our audit practice we know it's actually running, then we don't have to include it in the model. In the example we'll show you, we've gone through a few things and eliminated them, because they either don't apply or we know that they work. Lastly, we wanted to have a model that covers all of the domains we think are important: privacy, security, and compliance. 
So, not having three different models or three different versions, and being able to include different regulatory regimes. Next slide. All right. What we created is Includes No Dirt, the model that we have. Your mileage may vary on this one. We designed it for our own uses at Omada Health, and if you're in the healthcare space it may be directly applicable with the questions that we have. You may have to swap out some of the specific regulatory questions. If you're in an adjacent industry, it may need some questions adapted; please, by all means, take the questions and modify them to your own use to target exactly what you're trying to get to. Next slide. All right, at long last, the teaser: it is the Includes No Dirt model, and it has been arranged to actually be memorable. So: identifiability, non-repudiation, clinical error, linkability, unlicensed activity, denial of service, elevation of privilege, spoofing, non-compliance to policy, and overuse; specifically, there we're thinking of overuse of information and data as it really pertains to the HIPAA space that we're in. Then DIRT: data error, information disclosure, repudiation, and tampering. So all those parts play together to make the model that we're using. Next slide. So for every risk there's a property and a goal, and it comes from a specific place. Skip down a couple to clinical error. The risk is clinical error: a clinician making a mistake that may otherwise have been prevented. The property, or the goal, of that is the application of correct clinical standards. So, making sure that the clinician both knows what they're doing and actually can do it at the moment in time where they actually need to do it. That's in the realm of compliance, and we've sorted on this slide the specific things as to what the goal is of what we're trying to do and where it comes from. Next slide. 
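As a quick cross-check, the mnemonic really does spell out the fourteen risks, one per letter. Here it is unpacked as data; the risk names come from the talk, and the listing itself is purely illustrative, not part of the white paper.

```python
# Each letter of "INCLUDES NO DIRT" stands for one risk in the model.
MODEL = "INCLUDES NO DIRT"
RISKS = [
    "Identifiability", "Non-repudiation", "Clinical error", "Linkability",
    "Unlicensed activity", "Denial of service", "Elevation of privilege",
    "Spoofing", "Non-compliance", "Overuse",
    "Data error", "Information disclosure", "Repudiation", "Tampering",
]

letters = MODEL.replace(" ", "")
assert len(letters) == len(RISKS) == 14
for letter, risk in zip(letters, RISKS):
    # Every risk's initial matches its letter in the mnemonic.
    assert risk.startswith(letter), (letter, risk)
```

Running this confirms the arrangement is exact: fourteen letters, fourteen risks, in order.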
Now, you may have noticed that some of these things overlap, and that's where proper judgment by the risk assessors as you go through this is important, and I don't think I can say that strongly enough. As the slide title says, the risks that apply depend on the system being modeled. So for some of these, you'll have to look at what you're trying to do and figure out: does this apply? Does this not apply? How does it apply? And some things you may just factor out as you go through it. These three in particular are complicated because they are very related, and Bill alluded to it at the beginning: security and privacy sometimes are at opposite ends of what they're trying to do, and these tend to reflect that here. So, identifiability is the property of the system that allows actions to be traced to a specific user. There, the objective or the goal is anonymity, making sure that that's not actually possible. Then non-repudiation: the process by which it's proven that a user took an action. The goal there is plausible deniability, so that it isn't clear whether someone did something or not. And then the risk of repudiation, where what we're trying to get to is non-repudiation: being able to show that someone actually did something. So, of note, these actually came from different places, some from STRIDE, some from LINDDUN. Privacy and security blended together. The next couple of slides will actually unpack that a little bit. De-conflicting these goals can be kind of complicated. Different stakeholders will have different needs for these things. So for example, with anonymity and identifiability, that risk and goal, there are some times when you're building a system where that absolutely is required. We've got a whistle here for a whistleblower. 
If I'm designing, for example, a software application that handles anonymous reporting for whistleblowers, because that's required under healthcare compliance rules and other codes as well, anonymity is really important. But then for other goals and other people in the system, it's less important. A hacker, for example, there in the middle, really relies on plausible deniability for what they're trying to do, either ethically or otherwise; making sure that there's that deniability as part of it, repudiation and non-repudiation as well. Let's get into a little bit more of a specific example on the next slide, though. So let's say human resources wants to build an employee complaint application that lets employees report sexual harassment. Lots of different goals, lots of different stakeholders, and figuring out how to balance between them gets really important. Employees want to be able to report and, if they want anonymity, to have their anonymity protected. So if they want to say something anonymously, they can, and no one will know who said it. Well, HR wants to help ensure anonymity, but also wants to make sure that there's less possibility of abuse of the system; that you don't have, say, one person anonymously reporting the same thing over and over again to create a larger impression. IT needs to provision and deprovision administrative access to this, but they have absolutely no need to see the complaints that are registered. Security runs DLP on every laptop and logs who has access to the application; important from a security perspective, but that becomes potentially challenging, because depending on the level of access, if they're provisioning it and they log who accesses it, they have a proxy for who's actually reporting things. The legal department wants to be able to document complaints and collect evidence to take action, which is somewhat the opposite of anonymity. 
It's hard to take action and have a case that comes out of it if we don't actually know who did something. So you have lots of different people running in a system with different needs, different roles, and those roles will conflict. We can't completely make everything anonymous, because then security and IT can't do their work, legal won't be able to document complaints, and the harassment, if in fact it is actually occurring, will keep continuing because there's no way to investigate it. So all of these parts play together and need to be balanced in the work that you're doing when you do the modeling. All right, I think it's back to you. All right, so let's give it a try. Time to come back to Sam. Sam is a typical patient with type 2 diabetes, and she's been using for years a blood glucometer, which means she's constantly pricking her finger, taking blood readings. We know that she would benefit from having a continuous glucometer that is wearable and that is automatically sending readings to her coach, so that we get greater telemetry and can respond quicker. So we want to introduce CGMs into our product set, but we want to do it safely. What Patrick and I did is fill out an Includes No Dirt threat model on the concept of a CGM, to try to help us figure out where we need to pay attention. And again, you can download the one we filled out at includesnodirt.com/defcon.pdf. I highly encourage you to do so. We talked about brainstorming. With the Includes No Dirt threat model, we've included a structured brainstorming worksheet that allows us to go through a system and helps guide where we go. So again, I'm going to come back to this as our diagram. In our diagram, we have a wearable glucometer. That glucometer syncs to an application on the patient's cell phone, which then transmits the data to the partner. The partner then sends it to our endpoint, and it gets stored in our database. 
It then sends information back to our application. It also surfaces that information to the coach. And you'll see here that in addition to the CGM, the participant still has a BGM, so she still does occasional finger sticks. That information is also being sent to us. So we've now got two sources of blood sugar data. This is the diagram we're going to be working off of. The worksheet that we provided you is highly structured. One of the first things we do is mark which threats we think apply. Who are the actors that are involved here? Certainly the participant is involved in this whole process, and the coach is involved. But we've got a vendor. We've got potentially other partners. We're going to be doing claims on this, and billing, and reporting. So there are business processes. There are people. For this one, we're not so worried about natural disasters. We're not so worried about geopolitical unrest. But in other threat models, those might come into play. We then do some brainstorming on vulnerabilities. What are areas that could be vulnerable? The participant gets an incorrect reading, or the service becomes unavailable, or the coach misinterprets the data. Now, up in the right-hand corner there, I've got a little diagram where I show the questionnaire and also the structured brainstorming. And this is an iterative process. We sometimes start with the questionnaire and we sometimes start with the worksheet. But it's typical, when we are doing one of these on a complex system, that we are going back and forth. So we'll be going through the questionnaire, which is highly structured, and that will trigger us to go, oh wait, because we've said no on this question, we think that there's a vulnerability there. Let's go write that down on our worksheet under vulnerabilities. Let's go ask somebody to get more information. And we go through this process until we think we've got the questionnaire complete and the worksheet complete. And so we did that. 
And when we did that, we were able to take those vulnerabilities that I've listed here, five of them in our example, and map them to specific areas in the Includes No Dirt model. So you'll see, like, anonymity: we don't want anonymity, it doesn't apply in this one, but clinical error certainly does, and denial of service does, and spoofing does. In the interest of time in our presentation, we're not going to go through all of our answers for every risk, but we're going to go through the answers we did for the risks that apply and talk about why they apply. And again, you can download our example and see our answers on all of them. So the factors that do apply are clinical error, unlicensed activity, denial of service, spoofing, non-compliance, data error, information disclosure, repudiation, and tampering. These are the things we're worried about in this particular threat model. So, Patrick, let's start with clinical error. Sure. What we did to make this a little clearer is, for the next few slides, the left side of the slide is a snapshot of the answers. All of these answers are in the materials that we put on the Includes No Dirt website under the DEF CON link. On the right side, we've clarified a little bit about what these actually mean in the context of the CGM work that we did for this specific example. It can be a little hard to read through those, so we've pulled out what we think the important answers are. So here, for clinical activities, what we're thinking about in the specific example of this continuous glucose monitor is: are we doing something that relates to the treatment of a patient? Well, of course we are in this particular example. 
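The mapping step, from brainstormed vulnerabilities to the model risks they fall under, might be sketched like this. This is a hypothetical Python illustration using only the three vulnerabilities named in the talk; the actual worksheet has five, and its real contents live in the downloadable example, not here.

```python
# Purely illustrative: brainstormed vulnerabilities mapped to the model
# risks they might fall under, then the union of mapped risks tells us
# which sections of the questionnaire cannot be skipped.

VULN_TO_RISKS = {
    "participant gets an incorrect reading": ["Clinical error", "Data error", "Tampering"],
    "service becomes unavailable": ["Denial of service"],
    "coach misinterprets the data": ["Clinical error"],
}

# Risk sections that apply to this system (everything else can be skipped).
applicable = sorted({risk for risks in VULN_TO_RISKS.values() for risk in risks})
print(applicable)
```

The derived list is the worksheet's payoff: it tells the assessor up front which banks of questions apply to the system and which, like anonymity here, can be passed over.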
So the specific question that we probe into here is: does this system or process, this combination of things, have either inbuilt controls that prevent something from happening in the first place, or other review-based controls so that, if something does happen, we'd be able to find it and correct it as quickly as we can? The answer here is yes, and it's a little complicated, because the CGM is both a device that is created by a partner of ours and software that we create for our coaches to be able to use that information. Each one of those has specific detective and preventive controls that operate in its own environment. And this is a great example of what Bill just said about the iterative nature of this. When we hit this question, it's like, okay, wait, that's both the device and the software; how does that work? And we had to fork a little bit and come back. Yep, that's exactly right. In questions 3.2 and 3.3, we delved down a little bit into some additional control work. Another thing I would say here is that not all controls are technical. In this particular case, with clinical error, part of preventing clinical errors is ensuring the proper training of your clinicians. So, being able to say, yes, everybody that has access to this was properly trained and is, I almost said licensed; true, that's coming soon. And the delivery, and the quality of the delivery, is important, up to the standards we set in our clinical practice guidelines. So that becomes a review process and quality check that our clinicians will do on the staff, making sure, by reviewing their output, that things are going the way we want them to. All right, next slide. One thing I want to say before we move on: you'll see there on the left-hand side, on question 3.0, when we answered yes, it gets one point for that. If we'd answered no, we could skip the rest of the questions on clinical and move on to question 4. So those two things are really important. 
At the end of this, we're going to total up all the points, and that will drive a risk score. But being able to skip a whole bank of questions when they don't apply means you can go through the model much faster. So for simple systems that don't involve clinical activities, don't involve patient activities, we can maybe model them, as we said, very quickly, in 15 minutes. For something that's complicated, and this would be a fairly complicated one, it might take us several hours. But the model fits whichever size, whichever system, and whichever level of complexity we're dealing with. Definitely, thanks for pointing that out. That weighting is really important because it helps to create that apples-to-apples comparison that we talked about a little bit earlier. Second one: unlicensed activity. And Bill spoiled this one a little bit ago just talking about it. Does the work that we do in this case require licensure, either a site license or personnel licenses for the people that are delivering care? Just like the last question: yes, it does. And it's complicated because different parts of this thing require different licensure. So the CGM manufacturer requires licensure by various federal and state authorities. And our clinicians, internal to us, require credentialing to make sure they're able to deliver the coaching that's appropriate for diabetes. There are national standards for that. Also important here is the fact that we rely on other people in these things. And just like we talked about, if we've tested the control, we know it works; we don't have to bring that into the discussion here. We've done due diligence and checking to acknowledge that, yes, our business partners have the appropriate licensure, but we don't have to dig into that to make sure it's as robust as it should be. Our contracting processes make sure that those exist. So we use that effectively as a control, and we focus on the things that are important to us, which is our own internal clinicians. 
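The gating and skip logic just described (a "no" on a gate question like 3.0 skips the rest of that bank and jumps to the next one) can be sketched roughly like this. To be clear, the question IDs, wording, and point values below are illustrative stand-ins, not the actual Includes No Dirt questionnaire:

```python
# Hypothetical sketch of the gate/skip logic described in the talk.
# Question IDs, wording, and point values are invented for illustration.

QUESTION_BANKS = [
    # (gate question, points if the gate is a "yes", follow-up questions)
    ("3.0 Does this involve clinical treatment?", 1,
     ["3.2 Preventive controls in place?", "3.3 Detective controls in place?"]),
    ("4.0 Does this require licensure?", 1,
     ["4.1 Site licenses verified?", "4.2 Personnel credentialing verified?"]),
]

def run_questionnaire(answers):
    """Walk the banks; a 'no' (or missing answer) on a gate question
    skips that entire bank, which is what makes simple models fast."""
    score = 0
    for gate, gate_points, followups in QUESTION_BANKS:
        if not answers.get(gate, False):
            continue  # bank doesn't apply: skip all its follow-ups
        score += gate_points
        for q in followups:
            score += answers.get(q, 0)  # follow-ups add weighted points
    return score
```

A simple project management tool would answer "no" to both gates and finish almost immediately; a connected device answers "yes" to several and works through every follow-up.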
Thank you. Next: denial of service. So this is where we start looking at how mission critical the system is that we are modeling. For a connected glucometer, the availability of the entire system is very important. And if any piece of that system is having a problem, if the connectivity between the device and the partner, the partner and us, or us and the participant isn't working, then it's going to have a significant impact on the effectiveness of getting that telemetry data to the coach and back to the participant and being able to make decisions on that data. So when we go through our model, we ask: is it a mission critical system? If it is, that raises the point value. And then we look at whether we have defined targets, what those targets are, and how they are enforced. And again, as Patrick said, not all controls are technical. For the partner, they've got technical controls to ensure their availability. For us, we've got contractual controls, where we define an availability target with monitoring and penalties. And that's how we manage the risk on our side. Next, spoofing. We want to make sure that we are getting the right data, and we want to make sure that only the right people can access that data. And so spoofing as a threat is where we look at and model authentication. And this is a good example of where we can rely on existing controls. So we have defined authentication levels for our participants. We base it on NIST 800-63B, and participants are defined at AAL level one. Our coaches are defined at level two, which means that they not only have to have a username and password, but a second factor. We test those; we know they work. So as long as the system is going to use those controls we've already defined, we can check those boxes and move on. We don't need to spend a lot of time detailing how authentication for this particular system is going to work, because it's a client of the greater system within our care delivery. All right. Non-compliance. 
So, not surprisingly, when we have to address a particular business process or system as a HIPAA covered entity, there are lots and lots of legal requirements that get attached to it. This is probably, let's call it, the worst example of the complexity here, because for this combination of devices and software that we're building, it's everything from HIPAA, privacy policies, and the terms of use, both for the device and for our software application. There are a number of healthcare compliance issues. There would be FDA obligations for our business partners. The contracts that Bill just mentioned. So because this is clinical in nature, it relates to a device, and there's patient data involved with it, this one is particularly complicated. As I mentioned early on, when we were talking about the problems that brainstorming can create, this question is specifically designed to bring up those non-obvious things that you may not have top of mind when you're actually doing it. If you're adapting this questionnaire and there are certain things that you want to target, definitely adding them to this list is important, because, for example, here, terms of use may not have been something I would have thought about, but it has to cover what we're trying to include in this specific example. So that was important to drive through. And also, if you look down at the very bottom left of the screen, we can check through the applicability of some of the credentials that we have. We're a SOC 2 and HITRUST certified organization. Those apply in this particular case because of the nature of what we're trying to do. Okay, the next one I think is mine as well. Data error. So here, in this particular example, we're digging heavily into data integrity, for a medical process and for the clinical record keeping that a process like this would create. We're essentially creating part of a medical record on glucose monitoring and glucose management for our participants in the program. 
It's really important to make sure that this is ingested and maintained in an accurate and viable way. Again, here, in other Includes No Dirt models that we've actually done, we've tested some of that. We've tested, for example, the APIs that we do data ingestion with. So we can kind of check that off and go, yeah, it's acting as we intended it to, and move on and focus the mitigation and control work that we're trying to do here on other things. Next, information disclosure. This is where we're worried about confidentiality of the system. We've got rules, within HIPAA and within our customer contracts, on how we protect data to make sure that it isn't disclosed where it's not supposed to be. And again, here we're largely consuming controls that we've already tested previously. So HIPAA requires us to encrypt PHI at rest and in transit. So we can ask: are we doing that? And if so, how? And since those are really well-known patterns for us, we can accept them and we can move on. It doesn't actually require a ton of discussion. Down at 12.6, data locality, this is a really good reminder for us. We have obligations to keep all of our data within the United States: processed, stored, and accessed. And so, especially when we're talking about a third-party vendor, this is a good reminder of, hey, let's make sure we know where their data centers are and where the data goes as it traverses its way to us. Repudiation. We've talked a lot about repudiation already. Does it require non-repudiation? Yes. What are those mechanisms? And again, in question 13.3, there are lots of mechanisms that we have in place, but we want to make sure we address them. How are user activities being logged? Do we have accurate timestamps? How long are logs retained? Things like that. That lets us know that this particular system is going to fit into our overall framework. And tampering. We don't want anyone to mess with the data. So, again, what are all the mechanisms in place to prevent tampering? 
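Those repudiation questions (who did what, accurate timestamps, retention period) translate into a concrete shape for an audit log entry. A minimal sketch, assuming invented field names and an assumed seven-year retention window, neither of which comes from the actual model:

```python
from datetime import datetime, timedelta, timezone

# Illustrative audit record for non-repudiation checks. The field names
# and the 7-year retention figure are assumptions made for this sketch.
RETENTION = timedelta(days=365 * 7)

def make_audit_entry(user_id, action, resource):
    """Capture who did what, to which resource, with a UTC timestamp."""
    return {
        "user": user_id,
        "action": action,
        "resource": resource,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

def is_retained(entry, now=None):
    """True while the entry is still inside its retention window."""
    now = now or datetime.now(timezone.utc)
    ts = datetime.fromisoformat(entry["timestamp"])
    return now - ts < RETENTION
```

Answering the model's 13.x questions then becomes a matter of pointing at where each field is produced, how clocks are synchronized, and what enforces the retention policy.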
Now, for this particular system, there are some interesting tampering things we need to deal with, like the chain of custody of the device between the manufacturer and the DME (the durable medical equipment supplier), and the DME shipping it to the participant. And also, how do we make sure that the device that gets shipped gets assigned to the appropriate patient in our data model? That's a fulfillment question, because we have to make sure that for every device that gets shipped, that serial number comes to us assigned to the correct person. And if we don't, that then makes its way back up to not just tampering, but to data integrity and who has access to it. So, again, the model is iterative and lets us go through it, and it reminds us to check how all these things are being addressed. It is a very structured way of brainstorming. And when we get to the end, we get a score. Patrick, do you want to talk about this? Oh, sure. Yeah. So, as Bill said, we get to the end and we get a score: the result of all the numbers that you saw on the side. So, for example, when we've weighted the first element as one, when we're talking about clinical controls being in place or not, all of those add up together. And in the particular governance, risk, and compliance system that we use, we can weight the scores, so some count more strongly than others. But essentially, that rolls up into a total score. We can rank that total score as low, medium, or high, again, to be able to focus our efforts and make sure that we know: is this something we need to address immediately in the grand scheme of things, or is this something we can actually wait on for a while, because it's not as critical as other things that we're looking at? On the right side, you actually see the list of action items. You saw in a prior slide how those work. Here are the specific action items for this particular model. 
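The roll-up just described, weighted answers summed into a total and then bucketed low/medium/high, might look roughly like this. The weights and thresholds below are invented for illustration; the real values live in their GRC system:

```python
# Hypothetical scoring sketch: weighted factor answers roll up into a
# total, which is bucketed into low / medium / high. The weights and
# thresholds are illustrative, not the actual model's values.

WEIGHTS = {"clinical_error": 3, "unlicensed": 2, "denial_of_service": 2,
           "spoofing": 1, "non_compliance": 2, "data_error": 2,
           "info_disclosure": 3, "repudiation": 1, "tampering": 2}

def risk_score(answers):
    """Sum each applicable factor's points times its configured weight."""
    return sum(points * WEIGHTS[factor] for factor, points in answers.items())

def risk_rank(score, low=10, high=25):
    """Bucket the total so different models compare apples to apples."""
    if score < low:
        return "low"
    if score < high:
        return "medium"
    return "high"
```

The bucketing is what turns a pile of individual assessments into a prioritized queue: high-ranked models get action items addressed immediately, low-ranked ones can wait.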
So, for example, creating clear instructions for participants on device calibration can help with data integrity issues, because if someone enters data incorrectly, it's not clear the treatment would apply correctly. So this addresses the clinical vulnerability issue. Backup BGMs for the CGM also touch on some of the same issues; sometimes that's required just because of the nature of the CGM. And so forth with all of these. Each one of those action items is designed to address one or more vulnerabilities. And that's part of the process of this: everything that you've identified should have an action item at the end of it, to make sure you're hitting everything you need to from a control perspective. And we've done a lot of these. I mentioned earlier that we launched a behavioral health application. And when we did that, Patrick and I did a threat model, and I think we came up with 19 action items. Those were specific things that we wanted to build into the system before we went live. So we did that at the very early stages, as we were just planning, which was six months before the launch. That meant that we, as risk assessors, had the ability to have a meaningful impact on the security, privacy, and compliance of that application before it ever launched. And everybody involved also understood, because they went through the process with us, why: they knew why we had those action items and what the specific vulnerability was that we were trying to address. That latter point is very important. You may have groups that, let's just say, are not necessarily as inclined to be helpful when working with the risk assessing organizations. In this particular case, the behavioral health example, we had some of our developers come back to us and say, oh, we get it now, why this is important, after they'd executed through the process. So it becomes educational as well as helpful, a reminder as to why we're doing it. So, a couple of points to wrap this up. 
Vendor management: the threats you aren't seeing can also kill you. I use this example. This was a letter that Quest Diagnostics sent out about a year ago on a breach. And the important thing about this is not that Quest sent it out, but that they sent it out because one of their vendors had a problem. That vendor was acting as Quest's business associate under a BAA, but Quest ultimately got sued for the breach. So when you're doing threat modeling and risk assessments, it's important to look not only at your own systems, but at your third parties as well. And we use the same methodology, the same checklist, to assess all of our vendors. Now, we have lots of vendors. We are a SaaS-first company; we've got SaaS all over the place. When somebody comes to us with a new vendor, and let's say it's a project management tool, we can go through this checklist pretty darn quickly, because it doesn't involve clinical error and it doesn't require licensure. But when somebody comes to us with a new device vendor, we're going to go through this same model very, very carefully. And so when we use threat models to assess vendors, it's the same basic questionnaire. We're doing that checklist. We may or may not also do the brainstorming, but we then use that to influence our legal terms. So if we define that the vendor is going to be mission critical, well, that means we have to tell the legal department to make sure we have an SLA in the contract. And if we're worried about encryption of data at rest, then we have to include that term in the contract. And we have to assess that vendor to make sure that they are doing the things we want. We put legal terms in to say they have to keep our data in the United States, but we then also verify where their data centers are. So this model works for assessing vendors, and it works very, very well. All right. So when do you actually use this? We've talked a lot about different possibilities for it, and we've got a chart here that addresses a little bit of when to use it. 
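The flow just described, threat-model findings driving legal terms, is essentially a lookup from finding to required contract clause. A toy sketch; the finding keys and clause wording here are made up for illustration, not Omada's actual terms:

```python
# Toy mapping from threat-model findings to contract terms to request
# from legal. Keys and clause text are invented for this illustration.
FINDING_TO_CLAUSE = {
    "mission_critical": "SLA with availability target, monitoring, penalties",
    "phi_at_rest": "Encryption of data at rest",
    "phi_in_transit": "Encryption of data in transit",
    "data_locality": "All data processed, stored, and accessed in the US",
}

def required_clauses(findings):
    """Return the contract terms implied by the model's findings."""
    return sorted(FINDING_TO_CLAUSE[f] for f in findings if f in FINDING_TO_CLAUSE)
```

The same lookup works in reverse at renewal time: each clause in the contract points back at the finding it mitigates, which is what you then verify (for example, checking where the vendor's data centers actually are).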
As you can kind of tell from our examples, we use it all over the place: initiation of significant projects, like the behavioral health example that Bill just gave; vendor acquisition for the first time; and then annual assessments, both from a risk assessment perspective and from a vendor assessment perspective. Bill mentioned earlier that we did it for 26 material business processes on the risk assessment we just completed. I can't overstate how interesting and helpful that was this year, because traditionally, from a compliance and a privacy perspective, risk assessments are really brainstorming in nature. This forced more rigor than I think we'd even seen last year, when we did it a little bit closer to this way. And it took out the potential for missing things that comes from not asking the questions in a regimented way. That was a huge win. It resulted in a lot more action items for us to do, but that's a good thing in the grand scheme of things: there's a lot more, because we're aware of it, for us to look at. On demand, too: there are times, sometimes from an audit perspective, when things just crop up and you think, maybe we should take a look at that in more detail. This gives you a regimented way to actually look through it. All right. It does exist in a continuum of activity from a risk assessing perspective, and we've got a little bit of the dimensions that we think about here. So, threat model on the upper right: really, the best practice seems to be when it's a new process, or something that we're encountering for the first time, and we have no idea about the dimensions of risk on it. So: risk unknown, new process. On the left side, there are things that we have an idea about, like processes that we're actually doing, and either how much we know about them or how much we don't. So, on an annual basis, over the last few years, what our risks are as a company has been pretty stable. We kind of know the general categories of risk. 
So we may not necessarily know, with an existing product, how it has shifted over time; we take a look at that from an audit perspective, and these overlap. In general, when a threat model designs a control, we'll have to retest that control at some point. So in our world, that still stays in my universe, but it may get handed off to an internal audit group for them to be able to test that control eventually. But it's all related. It does create a completely virtuous circle, I guess you could say, from a control management perspective. So, final thoughts on all of this. When you are in security, or compliance, or a risk assessing organization, your job is actually to say yes. As security practitioners, we get a bad rap because people think we always say no. Our job is actually to figure out how to enable and empower the business. And so, really, Patrick and I firmly believe that it's our job to say yes safely. And one of the ways we can say yes safely is to go through a regimented process of assessing risks, and then come up with action items and say: yes, it's fine to bring this new vendor on; it's fine to do this new process; but here are our recommendations for ways to harden it, to improve the security, compliance, and privacy of that system. And with that, thank you for listening to us. It's been our pleasure to talk to you. And we look forward to the Q&A portion here at the DEF CON Biohacking Village.