Hey, what's up everyone, Alissa Knight here. I want to thank Nina, Ali, and the rest of her team at Biohacking Village for inviting Mitch and me to come speak on the recent vulnerability research on hacking FHIR APIs and mHealth APIs. I'm going to start out with the view from the attacker, my tactics and techniques and the results of me hacking these mHealth and FHIR APIs, and then we're going to transition over to Mitch, who will give his perspective as a defender and CISO. So without further ado, let's get started. Just quickly about me: if any of you want to reach me, feel free to email me at ak@knightinkmedia.com, follow me on Twitter, subscribe to my YouTube channel, and hit that bell icon to be notified of new uploads. I publish every week as well as livestream, and the best way you can support content creators and influencers in cybersecurity is to subscribe to our accounts, follow us, and retweet or reshare. With that being said, I'm a recovering hacker of 21 years, so I've been in this for a long time. I'm an entrepreneur. I've published a book on hacking connected cars, and I'm also working on a new book, Hacking APIs. I've started and sold two previous cybersecurity startups in buy-side and sell-side M&A transactions, so I've been around the block. I actually started out in the BBS scene in Seattle many moons ago; I ran a multi-node BBS and actually got started in hacking on IRC, for any of you who've been around long enough to remember that. I also run a venture capital fund and an entertainment venture under the Knight name, and I'm in the process of writing a screenplay for a new TV series. And I run Knight Ink, which is a content marketing company. I'm a cybersecurity filmmaker and content creator.
So I work with cybersecurity vendors and challenger brands to create what I like to call adversarial content, meaning content created from the view of the attacker. That's a new type of content that really shows the efficacy of a product and its ability to defend against the hacks I'm running against it. So with Approov, the sponsor of this research, I thought: instead of just writing a boring white paper or doing a boring video on your features, what if I were to hack APIs and show how your product would have prevented it? That's exactly what I did, and that was the impetus for my presentation to you today. My API hacks: I'm kind of branded at this point as an API hacker. In 2019, I hacked 30 financial apps and APIs. In 2020, I hacked 30 mHealth apps and APIs. In 2021, I hacked federal and state law enforcement vehicles, and also this year, 2021, I'm hacking FHIR APIs and connected trains. I'm very passionate about hacking embedded systems. Everything today, from the plumbing of our financial system to our healthcare system with the FHIR deadlines, is communicating over APIs. So I'm really excited to talk about that today and give you a brief background on FHIR. As I mentioned, Approov sponsored this research into hacking healthcare APIs, and you can go download the report from approov.io. One thing I do want to mention, and I'll go into this in the final part of my presentation: many of the APIs I hacked were behind WAFs, and I'm going to talk about the dangers of using the wrong tool for the job. When you're not using an API threat management solution that's been purpose-built from the ground up to secure APIs, you're going to be under a false sense of security. So take a look at the research: hit the website and download it today. OK, so many of you will hear me refer to SMART and FHIR, or SMART on FHIR.
And I want to really help you understand the difference between those, because they're actually not one and the same; some people conflate the two terms, but they're separate things. FHIR provides a set of models to standardize the representation of clinical concepts, such as allergies and medications, in an EHR or clinical data store. Basically, what that means is FHIR creates the skeleton, if you will, for the EHR data. I'll explain this more in later slides, but prior to FHIR and these APIs that allow these systems to talk to each other, you could go to two different hospitals and they'd be using different EHR or EMR systems, in this case Cerner or Epic. If you go to one hospital, they're going to have all of your data, and let's say you get sick the next day and you go to a different hospital running a completely different EMR: they will not be able to talk to that other system at that other hospital, creating concerns over life and safety. So it's a real problem. I don't want you to think that Alissa is saying FHIR is bad. What I'm saying is that it's very important that these systems are able to talk to each other, and I'm a huge fan, but I want to make sure it's done securely, because this is our PHI, our protected health information, and it needs to be secured. SMART standardizes the process through which a third-party application can plug into the data store and access that clinical information; it basically applies OAuth 2 and OpenID Connect. What that means is SMART basically turns our healthcare system into an app-based economy. Originally, when SMART first started out, it was actually very similar to FHIR and was in essence competing against what FHIR was going to be. And the team basically took a step back and said, hey, you know what?
What they're doing over at HL7, Health Level Seven International, and I'll talk about that in a minute, is great, and competing against them doesn't make sense. Let's change the mission for SMART, make this about authentication with OAuth 2 and OpenID Connect, and really give third-party app developers the ability to make apps for these FHIR APIs. And that was a large part of my attack surface, actually: the third parties that came along making apps for FHIR APIs. Simply put, FHIR standardizes data, while SMART standardizes and secures data access. So think of SMART as the authentication and authorization on top of FHIR. You might be wondering: why SMART? Why didn't FHIR just do this? Well, an app developer can write an app once and expect that it will run anywhere in the healthcare system. That's the importance of this, right? So if I wanted to start a startup company and begin building FHIR apps, I know that anything that supports FHIR is going to be able to run that app. I can go to any hospital and sell my app to them; I can go to any healthcare provider and sell my app. It's creating a new economy of companies, which is really interesting. It basically creates an app store for health. SMART on FHIR provides a health app interface based on open standards, including HL7's FHIR, OAuth 2, and OpenID Connect. So that's FHIR. Let's actually get into the weeds now. This is a screenshot of the Approov FHIR client that Skip built for me. I was greatly appreciative, because I wasn't going to sit there and build a FHIR app myself. Basically you can see here that if I expand this out, it will expand all the JSON results from the API containing all of the information for this patient. So these are the testing phases in my research. In 2020, I did static code analysis of these mHealth apps.
In 2020, I also did traffic analysis, basically using what I call woman-in-the-middle attacks to intercept that traffic and look at it to understand how the API works. Then I did fuzzing, and then static code analysis again in 2021. So now I've downloaded the apps from the SMART app store, where there's actually a list, and begun doing static code analysis on them, looking for things like hard-coded keys and tokens and all the other bad stuff, including information about the backend APIs and how they work. Then traffic analysis was performed, where I did network interdiction of the traffic in order to understand how the backend API worked and what it expected, in order to find things like broken object level authorization vulnerabilities. And then I did fuzzing. I want to take a moment to talk about the importance of fuzzing APIs. I've been talking a lot about this lately, and coming soon you'll see me talking a lot more about fuzzing APIs and why it matters. What fuzzers like Kiterunner or RESTler will do is let you work from Swagger files. RESTler will allow you to import a Swagger file, and it will test the backend API against that Swagger file. With Kiterunner, the developers have used thousands of different Swagger files from around the internet to build a kind of content discovery system for APIs. If you don't fuzz APIs, I guarantee you will miss vulnerabilities. Fuzzing APIs is so important. It's not like you can run a Nessus scan or any other web application scanner against a backend FHIR API, or any API. The most effective tool you can use against a target API is a fuzzer. Okay, so those were the different phases of my research.
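The content-discovery idea behind tools like Kiterunner can be sketched in a few lines: probe candidate routes harvested from Swagger files and keep anything that doesn't 404. This is a minimal illustration, not how Kiterunner actually works internally; the wordlist, endpoint paths, and `fake_probe` stand-in are all hypothetical.

```python
# Minimal sketch of wordlist-based API content discovery: try candidate
# routes and keep any path that does not return 404 Not Found.
WORDLIST = ["/Patient/101", "/Patient/1234", "/admin/export", "/DocumentReference/9"]

def discover_routes(probe, wordlist=WORDLIST):
    """probe(path) -> HTTP status code; return (path, status) pairs that exist."""
    found = []
    for path in wordlist:
        status = probe(path)
        if status != 404:          # anything but Not Found is worth a look
            found.append((path, status))
    return found

# Stand-in for a real HTTP client call, e.g. requests.get(base + path).status_code
def fake_probe(path):
    known = {"/Patient/101": 200, "/Patient/1234": 200, "/admin/export": 401}
    return known.get(path, 404)
```

Note that even a 401 or 403 is a finding here: it confirms the route exists, which narrows the attack surface for the next phase.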
The four most common vulnerabilities I found in my research, which you can read about in the report, were broken object level authorization, broken user authentication, excessive data exposure, and mass assignment. Now, in this particular case, I'm going to explain what you're looking at in this drawing. A patient goes to a hospital running Cerner, for example, and then that individual patient goes to a second hospital that's running Epic. In my diagram, I actually targeted both APIs, attempting to gain access to patient records that I shouldn't have had access to. Several organizations over the last two years have come up and told me that they wanted to support my work and get involved, and I'm very appreciative of them, including the EMR companies. So I want to say thanks to all of those companies that did get involved and helped support this. There's definitely less of an adversarial relationship between vulnerability researchers and companies today than there was 20 years ago. You're all here for the actual results, so let's get into them. Here you can see the architecture of my attack. In this mobile app, you can see that I've logged in as a clinician, and these are the patients I was supposed to have access to. Doing network interdiction, I learned that this was the API request going to the API, by basically clicking every button in the mobile app and watching the API return the patient record. I then took those API requests and loaded them into Postman, which is an amazing API client that I absolutely love. Postman, if you're listening, you do an amazing job and have made an amazing product, so thank you. I basically pasted those into Postman and then changed the request from patient 101, which I did have rights to access, to 1234. And because of a BOLA vulnerability, I had access to tens of thousands of patient records.
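That ID-swapping test can be sketched as code. This is a simplified illustration of BOLA, not the actual APIs tested: the record store, clinician names, and both API functions are made up, and the fix shown (an ownership check on every object fetch) is the standard remediation.

```python
# Sketch of a BOLA (broken object level authorization) probe: replay an
# authenticated request with a different patient ID and see if it succeeds.
RECORDS = {"101": {"owner": "clinician-a", "data": "pathology report"},
           "1234": {"owner": "clinician-b", "data": "x-ray"}}

def vulnerable_api(token_subject, patient_id):
    # Authenticates the caller's token upstream, but never authorizes the
    # object being fetched: the classic BOLA pattern.
    return RECORDS.get(patient_id)

def fixed_api(token_subject, patient_id):
    record = RECORDS.get(patient_id)
    if record is None or record["owner"] != token_subject:
        return None                      # object-level authorization check
    return record

def bola_probe(api):
    """True if clinician-a's token can read clinician-b's patient record."""
    return api("clinician-a", "1234") is not None
```

Run against the vulnerable version, the probe succeeds; against the fixed version, only the caller's own patients come back.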
And here on the right-hand side, you can see the screenshots of Postman, which allowed me to send those requests to the backend API and download those reports: pathology reports, x-rays. All of these reports were available to me. And here you can see the hard-coded keys and tokens that were found in those mobile apps, so definitely a lot of vulnerabilities were found. What's neat about Postman, and you'll see it here in the screenshot (I didn't know it could do this), is that when I started requesting documents, actual PDF files, from the API endpoint, of course I got the PDF I was requesting back, but Postman lets you save that to a file, which was really neat. So over here is the screenshot of the actual pathology report I downloaded from the API endpoint. Ooh, pause for the sound of that pin drop. Sorry, I have a real dry sense of humor. Hard-coded API keys and tokens were a real problem: I found 104 hard-coded keys and tokens in the mobile apps. A lot of them were not only keys, tokens, and API secrets for the mobile apps' own APIs, but also for third parties like payment processors. So there was a lot of hard-coded nastiness in those mobile apps, which I was able to reverse once I loaded them into my tool of choice, MobSF. Here I was targeting a hospital and was able to download admission records for different patients admitted to the hospital. I've redacted a lot of this, but you can see how much data is actually stored in these patient records. The interesting thing here was that I was able to download a lot of patient pictures: this hospital was taking pictures of patients as it processed them. I strongly believe this is why PHI is worth 1,000 times more than a credit card number on a dark web marketplace: there's just so much data on an individual in a patient record.
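The kind of hard-coded-secret hunting a tool like MobSF does over decompiled app source can be approximated with a regex sweep. This is a rough sketch, not MobSF's actual rules; the pattern and the sample source line are illustrative only.

```python
# Rough sketch of scanning decompiled app source for hard-coded secrets:
# match assignments of long opaque strings to suspicious variable names.
import re

SECRET_PATTERNS = [
    r'(?i)(api[_-]?key|secret|token)\s*[=:]\s*["\']([A-Za-z0-9_\-]{16,})["\']',
]

def scan_source(text):
    """Return (variable-name, candidate-secret) pairs found in the text."""
    hits = []
    for pattern in SECRET_PATTERNS:
        for match in re.finditer(pattern, text):
            hits.append((match.group(1), match.group(2)))
    return hits

# Illustrative decompiled line; the key value is made up.
SAMPLE = 'String API_KEY = "AKIA9TqZx81LmNoPqRsT"; // shipped inside the APK'
```

Anything this finds in an APK is, by definition, available to every person who downloads the app, which is exactly the point made above.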
And not only that, but when I was going through this, I found that I was actually able to access data on patients' next of kin and other family information. And that's the other thing: if your PHI is compromised, it's not like anyone can send you new PHI. If your bank card is compromised, the bank can send you a new bank card. What happens if your PHI is compromised? No one can generate new PHI for you. Okay, this was a really interesting vulnerability. What I found was that when you were in a locked session on an iPhone, whenever you unlocked it with your face, the mobile app would send a secret, and that string of letters and numbers never changed. Weeks would go by, I'd keep locking and unlocking my session, and I looked at the packets and it was the same key every time. So what I found was that this was hard-coded, and weeks later, if I replayed that same secret to unlock that session, it gave me the results of all the patient records. Why the developer did this, I don't know, but it was a pretty big finding, and I was pretty proud of it. This slide is for all of you who want to know how to actually properly secure these things. In my testing, the only API I wasn't able to breach was protected by Approov; I do want to say that. So I tested a number of APIs, and then Approov worked with several of the organizations to stand up an installation and secure the API with Approov, and all of my attacks failed. Basically the way it works is you compile your mobile app with their SDK, and it prevents anything from talking to your API unless it's been compiled with that SDK. So for those of you who are here looking for an API security solution, it's amazing, absolutely amazing, especially if you have a mobile API; I believe in their most recent version they added support for some web APIs as well. So take a look at it. Authorization vulnerabilities were everywhere.
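The replay flaw above comes down to a static unlock secret that never rotates. A minimal sketch of the difference between that and a server-issued one-time nonce (both classes and all values here are hypothetical, for illustration only):

```python
# Why a static unlock secret is replayable and a one-time nonce is not.
import secrets

class StaticUnlock:
    SECRET = "hardcoded-unlock-secret"          # never rotates: replayable forever
    def unlock(self, presented):
        return presented == self.SECRET

class NonceUnlock:
    def __init__(self):
        self.pending = set()
    def challenge(self):
        nonce = secrets.token_hex(8)            # fresh per unlock attempt
        self.pending.add(nonce)
        return nonce
    def unlock(self, presented):
        if presented in self.pending:
            self.pending.discard(presented)     # single use: replay fails
            return True
        return False
```

With the static scheme, a packet captured once unlocks the session weeks later; with the nonce scheme, the same captured value is rejected on the second attempt.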
Developers were remembering to authenticate the requests, but they failed to authorize them. There were BOLA vulnerabilities in every one of the APIs I tested. Ensure scopes are being used with tokens: what developers would do is implement tokens, but they wouldn't apply scopes. So I had a token; I was supposed to be there, I was authorized to be there, but I wasn't authorized to request the data I was requesting, and because there were no scopes in place, I was able to do it anyway. JWTs should really have short times to live; I saw really long TTLs for JWTs. They really shouldn't live longer than five to ten minutes, and that will really help with the whole woman-in-the-middle attack issue. The other thing about Approov: it has the ability to help you better manage certificate pinning. I spoke to a lot of developers, because almost none of these apps implemented pinning, and when I asked why, a lot of the developers said they were afraid of pinning breaking the app. So the ability to better manage pinning is available in Approov; I would definitely look at that if pinning is a concern of yours. Use refresh tokens. Just remember that refresh tokens can be revoked, but access tokens can't: if an access token is compromised, you can't revoke it, but a refresh token you can. So remember to use refresh tokens as an alternative to keys. Approov specializes in protecting APIs consumed by mobile apps, which are challenging to secure since anyone can download your mobile app, hence a security design dedicated to this challenge. That's the scary thing about mobile apps: anyone can download them, anyone. So if you're storing anything sensitive in the data and code in that APK file, a hacker can download the APK, reverse it back to something close to the original source code, and view it.
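The token hygiene described above, a short TTL plus explicit scopes checked on every request, can be sketched as follows. This is a hand-rolled HS256 JWT for illustration only (a real service should use a vetted library), and the key, subject, and scope strings are made-up examples in the SMART-style `patient/Patient.read` form.

```python
# Sketch: short-TTL HS256 JWT carrying explicit scopes, verified per request.
import base64, hashlib, hmac, json, time

KEY = b"demo-signing-key"                               # illustrative only

def _b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue(subject, scopes, ttl=300):                    # 5-minute TTL
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64(json.dumps({"sub": subject, "scope": scopes,
                               "exp": time.time() + ttl}).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64(hmac.new(KEY, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def authorize(token, required_scope, now=None):
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64(hmac.new(KEY, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return False                                    # signature check
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if (now or time.time()) >= claims["exp"]:
        return False                                    # short TTL enforced
    return required_scope in claims["scope"]            # scope, not just a token
```

The point of the last line is exactly the failure mode described: a valid token alone is not enough, because the requested operation must also be inside the token's scopes.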
So keep that in mind; Approov specializes in protecting these kinds of APIs. Use hard-coded API keys and tokens as app identifiers, not as app secrets, and ensure that a second factor, such as mobile app authentication, is needed to use them. Okay, I've had that whole religious debate with developers: if I'm not supposed to store these keys and tokens in the code, then where the hell am I supposed to store them? You can store them, and you can obfuscate the code, fine, but use multi-factor authentication. Don't rely on just that token or that key for authentication; also authenticate the app. Make sure your API security solution can authenticate the app as well. Remember that FHIR is a specification and doesn't mandate how it's secured; vulnerabilities are per implementation. Okay, this is really important for me to make clear. Nina and I can both implement a FHIR API, but my implementation of the FHIR standard can be completely different from hers, and mine can be full of vulnerabilities where hers isn't. So just because I've found vulnerabilities or issues in these APIs doesn't mean there's a vulnerability or a problem in the FHIR standard. It always comes down to the implementation, and that's something for all of you to remember. So again, the research is available at approov.io, and again, I want to thank Nina, Ali, and the rest of their team for inviting me to speak here along with Mitch. Mitch, over to you. So my part of the presentation, and thank you again, Alissa, is: how do we approach this from the provider side? A big thing I always talk about is history; I always like to lead off with some examples from history. One of the most epochal periods in history was the late 18th century. Back then, a learned person only needed to know what was in a few books to know everything they needed to succeed in the world.
The books about chemistry, physics, science, and business could fit on a bookshelf. We are not in those days; we are in a significantly more complex environment. However, our social structures really haven't evolved that much since the late 18th century. Another way we haven't really evolved is our work environment. Back in the early 1900s, Frederick Taylor did a lot of time studies in factories in Philadelphia to determine how long it took someone to do work: time series analysis. And it is the fundamental metric by which people have measured work ever since. But Taylor had one challenge: the work environments he studied were not adversarial. They assumed an ideal environment where the worker would always have supplies and there weren't people trying to take the worker's items. That's not the case today. We've built these management structures, we've built business, around the absence of adversaries trying to break down what we have, which makes it a lot harder for us to fit the idea of defense into the concept of business. So we need to embrace that adversarial nature to provide that critical eye. This means we need new capabilities. We've built a lot of our electronic systems around how we've originally worked in business: the absence of adversaries. We're moving from the predictable pushing and pulling of information to full programmatic interaction. And so in healthcare right now we have an obvious major change. Everything used to happen with predefined blocks of information going along predetermined sequences, and the level of interaction was push-pull. Now we're moving to full API-level interaction with FHIR and OAuth in healthcare. So we need to build knowledge of how to address adversarial issues into our environment in two areas: number one, detecting the API issues, and number two, preventing them. And right now, we can't even get push-pull right.
And now we're changing the complexity significantly and adding more on top. So what are some of the challenges we're seeing from my vantage point? We have a lot of privacy issues with these new applications using FHIR to connect to get patients' medical records. I put the link to Healthcare Dive here for people to take a look at: most of the apps this research group found had significant privacy issues. Also, a lot of the developers building these apps are small shops, and what they can do is monetize the information by selling those details to marketers. There has already been one case where the owners of a healthcare application sold data on women's menstrual cycles to marketers. I can't think of a greater invasion of privacy than that. And a lot of the powers that be in healthcare, the big providers, aren't really taking a look at this, because the use of that data once it leaves their environment is out of scope according to the Cures Act final rule's wording. So what's our goal today? Our goal is to change that. What I always like to say is that where there are privacy issues, there are security issues. So today, where there's smoke, there's FHIR. So what other types of issues do we anticipate seeing? With these programmatic applications, we expect to see malformed requests and apps that don't handle context changes. About 10 or 15 years ago, we'd handle malformed HL7 in test; you wouldn't see it in production. Guess what? It's in production now. Insecure client application code: not much of a change there, you're just going to see a lot more of it. And you're going to see the use of FHIR to transfer data between your electronic medical record systems, a lot of unprotected local repositories, and a lot of insecure cloud locations.
And now, with the fact that people do incredibly crazy marketing things, you'll also see this data on social media. The app builders out there building these apps are not going to have the security resources you can call upon at Google or Microsoft or one of the large EMR companies. Even if these small companies hire ex-employees of those big companies, they're not going to have the resources to do it right, and a lot of these small companies need extra help. I know this because of the position I'm in at my organization: I deal with several hundred application vendors. I talk to them about source code analysis, about DevSecOps, and about new technologies they can use, and one of the challenges I have working with these organizations is that a lot of them don't even know what to do, or if they do, a lot of them are significantly under-resourced. So what other issues do I expect to see? I call this one guilt by association. If there's anything big healthcare organizations are very protective of, it's their brand, and I expect knee-jerk reactions the second one of those unprotected cloud repositories breaks and, the next thing you know, patients' medical data from their facilities ends up in an unprotected cloud environment. It's not going to be a breach, because technically the data was downloaded through the healthcare app; however, if it's got a healthcare system's brand all over it, there are going to be challenges. Also, with integrating that data with social media, we expect outcry: when the first records start showing up because people do integrations and use the data for marketing, people are going to want to take action. So what else do we expect? We also expect API security holes caused by having those FHIR front ends to legacy environments. These APIs we're talking about, the ones that provide FHIR data to these applications, are front ends for other systems.
And a lot of them front legacy systems too, AKA those systems you can never get rid of, that are probably going to outlive you and will still be around when your kids are old enough to work in IT. One of the big things we always talk about with API vulnerabilities is the ability for attackers to execute code and establish pivot points they can use to attack other points on your network, which is a technique we see from a lot of ransomware threat actors these days. Nice front ends; however, if someone can get in there, run code, and attack the back end, your situation becomes significantly more difficult. Another meme for that coming up: so you've got XML or JSON with large binary blobs handling your healthcare data. What could possibly go wrong? Let me bring it back to some more history in healthcare, because healthcare has evolved, though not as much as the rest of IT. In the 80s, you had your IT guy, who knew hardware and software very well. In the 90s, those roles started differentiating. In the late 90s, software development forked between client-server and web, and then security grew out of a hybrid of the two in the early 2000s. Now, finally, in the 2020s, we're starting to see the reconvergence of security and development, and we consider this a good thing. However, healthcare always lags a little bit behind, so we're going to try to make that advancement happen a little quicker today. So how are we going to address the challenges of these FHIR APIs and their secure usage that Alissa and I have been talking about? First, we've got to change how we operate with developers and help them understand that it's not about the lines of code you complete, it's about how well you complete them against adversaries, and to think in terms of adversarial attacks. Second, we have to take a look at how we scan for vulnerabilities, because healthcare has done it one way for years and now we have to do it another way.
And we'll talk about that. Third, third-party risk: the conventional way of giving your vendors a huge questionnaire to fill out really doesn't mitigate risk anymore. And trying to hand it off to a third party, AKA paying the people in nice suits who come up to you at big conferences and tell you how they can address your third-party risk issues, or who relentlessly spam your inbox, is not going to fix your problems either. We'll talk about that too. So let's move on a little. A big thing we need to talk about here regarding security is that blindly using those off-the-shelf solutions, the ones that claim they can fix most vulnerabilities before they even hit your servers, doesn't fix the root cause; I don't think they fix much of anything. When you use them, you put yourself in a situation where, even though it says it protects you, it does so at the expense of your functionality, and it doesn't address where it breaks your stuff, because it will. And guess what? When it breaks your stuff, guess whose fault it is: not the vendor's, yours. Also, because these big WAFs are so expensive to license, in a lot of cases they don't address the connectivity between your internal systems. So you've got a situation where, if someone can get binary code running on an API server, it's not that hard to brachiate across the environment, like we used to do on the bars at school: you've got your way in. And finally, a web application firewall doesn't address the root cause, AKA insecure code. So we have our challenges, and this is how we can start addressing them. A lot of people are going to think: well, Mitch, all our vendors are HITRUST certified. Will HITRUST help us? No. I like the HITRUST people; however, their certification is not a Swiss Army knife. It's really great for controls testing and accounting-type work.
However, it doesn't help developers as much as we'd like, and it doesn't help with architectural issues. It also really doesn't help if you drop a HITRUST-certified solution into a legacy environment where you've got systems running from when Reagan was president; it really will not help you, and it won't help with Bush, Clinton, or Bush Jr. either. So that's the situation here: HITRUST is great, however, when it comes to integration, it doesn't work the greatest for the environment it's in. To get the full benefit, you have to be fully HITRUST certified as well, and there are very few healthcare delivery organizations that have actually done this. Even the large ones I've spoken to have only certified part of their environment. So this means we need to change our service model. A lot of the major EMR companies, when they offer up those APIs, give you sandboxes to test in. The sandboxes are great; however, they're not testing the entire app that's accessing the APIs, and they're not testing the gateways to legacy systems. An app can be completely conformant and, again, horribly insecure; we've proven that. A lot of code written for Apple and Google devices can be broken. We know this, and no questionnaire and no piece of software is going to help you address the root causes of these issues. So you have to modify how you test, and we're going to talk about that. Another challenge we find is mutually assured security. A large design pattern we've seen, supported by the major software development platforms, has been the emergence of distributed technologies. You now have the extremes of people still doing things the legacy way, all the way to fully running on smart contracts, blockchain, and distributed technologies, and healthcare leans toward a small evolution from where we were with legacy systems.
And a big concern we see with that is that you're only as secure as your weakest link, especially internally. And we have to be concerned, because these apps that are now accessing our systems using FHIR contain our patients' protected health information. Even though our responsibility goes away as soon as that data gets downloaded, we still consider even the mere existence of those APIs in the first place to be a major risk for our organization. And again, it's one part of a distributed chain, which is how we look at it. We have to think about it in the context of being one part of a chain, not some isolated little system whose scope drops off like the Mariana Trench. So here's what we have to take a look at, and again, I'm going to talk more about third-party risk companies: we don't go to companies that give us a score to make decisions about third-party risk. You actually have to do something about the risk. And I don't care if I piss these companies off today. Why? Because I'm at that point. What you have to do is some pretty serious purple-team work to ensure your APIs are secured and that you've done a good job making sure your environment in total is secure, not just your APIs. So you configure some test systems to do the testing, configured like your production environment on a similar network, which is much better than testing the APIs in the sandbox. Why? Because when you go live, all the fun stuff happens and you see a lot of errors you never thought you'd see. And if you just test against the sandbox, I can guarantee you 100% that you're not going to know what to do. If I give you a test system in a similar network environment to play with, when you see errors, the probability of you knowing what to do and being able to address the issues is significantly higher. And again, this is not something you pay the people in the nice suits to say nice words about.
This is something you've got to do on your own, because you are allowing programmatic access to your environments. This is not downloading PDFs like a patient portal. This is not downloading HTML like a patient portal. This is systems that are not yours interfacing with yours, and you have to be very, very careful here. So if you find issues, you gotta let your leadership know first. And here's the deal, and I'm going to say some things that are gonna piss off some CIOs right now. If they ignore the issues, or if they're profoundly ignorant, and I have sadly found a lot of people in healthcare leadership positions who are profoundly ignorant, do it anyway. Make sure the right people know, and document the living heck out of any security issues that you find. And I'm gonna go on record saying this: ignorant leadership is no reason to avoid security. What I've done in my career has been to tell CIOs, you're not doing the right thing; this is what you need to do to mitigate risk to your patients. And when they've told me don't do it, a couple of times I've had to do it anyway. The perfect example: when I helped roll out Windows 7 at my last job many, many years ago, I had the CIO tell me to stop work on it. I told him no, because I had people telling me the laptops they were buying now didn't run Windows XP, they only ran Windows 7. So I needed Windows 7, or we couldn't get laptops to the people taking care of patients. So we made the best choice for the patient, and eventually the CIO came around to it. But the way I look at it is, ignorance is no reason to stop security work. And if you are dealing with leadership that is being ignorant, do your best to protect them from their own worst instincts, and keep the interests of your organization and patients in mind. Thankfully, at my organization I have a really good boss in Tim Tarnowski, and I don't have to do this with him. Also, take a look at patient advocates. Patient advocates are an underutilized resource in healthcare.
They've helped many people deal with complex issues and concerns at the worst times in their lives. I know a couple of them. They deal with people who have breast cancer or more advanced forms of cancer. That is the absolute worst time in your life, when you have advanced cancer and need all the help you can get. These patients are susceptible to scams. They're susceptible to all sorts of people with bad intentions trying to drain their wallets. Patient advocates deal with adversity and adversarial behavior better than your average healthcare organization does. So here's the deal. I, as a representative of a healthcare organization, cannot stop a patient from downloading one of these applications that purports to help them with their healthcare data using FHIR but may have privacy and security issues, without risking information blocking penalties and complaints to the federal government. Patient advocates do not have the same restrictions that I have. They're not providers. They're neutral. They can advise, they can warn. This is a new area. And realistically, when you're dealing with a sickness like cancer or COVID, you're not in the best place to make a judgment call. You're thinking about other things, such as how long am I gonna live, or what's the quality of my life going to be. And we need to make sure that these patients get good guidance at the right time in their lives, when they're at their most sensitive. And one of my greatest concerns with these applications is that people are gonna let the marketing buzzwords take over instead of using something that works. So we historically haven't engaged patient advocates the way we need to. And I think now they have the potential for really helping us out, especially with these threats to patient data. And again, I look at it this way: they're used to dealing with the adversarial environment that healthcare can be.
I think they have some skills that can transfer over to help patients make the right decisions with the apps that they use. So I look at it this way: keep them informed of everything you find in the way of privacy and security issues. There's nothing preventing us from doing that. So what can we do besides informing patient advocates of all the fun things that we find? First of all, we do our vulnerability scans. We start with the outside, the inside, all the vendor connections, all the VPN and RDP that no one thinks the IT organization knows about. And also do a scan of your software as soon as people have installed your agents, Kaseya or others. And also take a look at your container environments and all the work they said they did to secure your containers, because I can tell you they didn't do that work. I have found that one of the biggest new challenges in healthcare has been people putting software in Docker or Kubernetes and saying it's secure. The only difference between that and insecure software on a Linux box is that it's in a container. So what can we do next? Let's take a look at what these FHIR APIs are ultimately going to connect to, and do a data flow analysis. So a patient connects with an app and sends a FHIR API call. What systems is it going to connect to? What dependent systems along the path is it going to hit? And it's going to light up like a Christmas tree. Because what you had before was, oh, it's only going to connect to this one system. What you're going to realize is there are about 20 systems in the path, and you have to understand what they all are. And one of the big challenges we found in healthcare is that a lot of people like to reduce scope to minimize risk, so they look better on the risk assessment they handed in to senior leadership. I can tell you there is no greater way to look like a failure than to do that.
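The data flow analysis above is just a reachability walk over your dependency map. Here's a toy sketch of it; the system names and edges are entirely made up for illustration, and in practice you'd build the real map from your architecture docs, interface engine configs, and network captures.

```python
"""Toy data-flow walk for one FHIR API call. System names and edges
are hypothetical; build the real map from your own environment."""
from collections import deque

# The "it only talks to one system" claim, once you actually
# follow the dependencies, lights up like a Christmas tree.
depends_on = {
    "fhir-gateway":  ["ehr-api"],
    "ehr-api":       ["ehr-db", "auth-service", "hl7-interface"],
    "hl7-interface": ["lab-system", "radiology-system"],
    "lab-system":    ["legacy-lab-db"],
    "auth-service":  ["ldap"],
}

def reachable(start: str) -> set:
    """Every system a patient's FHIR call can ultimately touch."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        queue.extend(depends_on.get(node, []))
    return seen

in_scope = reachable("fhir-gateway")
print(sorted(in_scope))  # far more than the one system you expected
```

Every system in that output belongs in scope for your risk analysis; that's the whole argument against de-scoping.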
This is not a time for reducing scope, especially with the existential threats we're facing to our businesses. You have to take a look at security along that entire path. So for all those dependent systems you find from your data flow analysis, do your analysis and assessments of those systems. Don't de-scope, because here's what's funny. I first started doing risk assessments 12 years ago, and it seems like every system I was told by leadership to take out of scope mysteriously ended up back in scope later because of some other risk. So de-scoping leads to re-scoping anyway; don't de-scope. And based upon your risk analysis, develop some good short- and long-term remediation plans for these systems. Big ones here: patch, firewall, segment, change those default passwords, and turn off your unnecessary services. If you've got privileged account management like CyberArk, use it. Develop good operational plans, take good backups, and store them somewhere else. And for the long term, yes, security is now thinking long-term: get current, develop your plan to update and upgrade every so often, and keep a good budget for running the system well. With IT we haven't seen a lot of good long-term thought, and we're really trying to change that here. But again, that's your short and long term; that's to basically maintain the system. Now we get to the fun part, which is API testing. You've got to develop plans to do full API testing. We recommend using that purple team approach. Again, thanks, Bryson. This approach works incredibly well because it helps you work with your blue team to go in there, try to break these APIs, try to understand how these systems work, and really look at them programmatically from an adversarial point of view. And if you can't do it yourself, there are a lot of great companies out there, some of them watching today, who will help you do so.
And they'll help you develop that testing plan and help you test. But when you're done testing, you've actually got to follow up and keep up. If you discover high and critical issues, you've got to address those immediately. You've got to log them, ticket them, and track the resolution. If you've got third-party vendors, you're going to be following up; treat your third-party vendors like you would your internal systems. And again, the approach my organization is taking, and the one we recommend all health organizations take, is to do that purple team test at least annually, and whenever you have a major system upgrade. Because here's the deal: what we've found with a lot of these APIs is that when you upgrade systems, you sometimes get some pretty bad security regressions. A lot of organizations aren't as agile as they need to be when it comes to pushing fixes they've found or made back into the next version of the product that's still in development. Some companies have done a great job improving on that, Microsoft being one of them and Google being another, but not all companies are Microsoft and Google. So we've got to make sure we're on that and we keep doing that. So with that, I'm gonna wrap up here and thank you all for your time today. We'll open up for any questions, and here is my contact information. So thank you all for your time. Thank you, Alyssa, for the incredible presentation that led into this, and have a great day, everyone.