Hello everyone, thank you all for joining us today for our second Innovators on the Line live customer webinar. Innovators on the Line connects Red Hat customers with each other to discuss supported open source solutions to their business problems. We are thrilled that HCA Healthcare has agreed to be our guest on today's webinar and share their story and their journey. This organization was recognized last year as Red Hat Innovator of the Year, and we look forward to hearing how their story has expanded since then. During the webinar, if you need assistance or have questions, we have support on hand to help you address your concerns; please type these in the Q&A chatroom. We will kick-start today's session by introducing our guests from HCA Healthcare; after the introductions conclude, we will spend time talking about their story. Once you hear about HCA's journey, we have time set aside for questions. You can type your questions in the chatroom, and I highly encourage you to do that throughout the presentation. Without further ado, let's move to our introductions. My name is Atif Chukdai, and I lead the healthcare market for Red Hat in North America. Our speaker today is Daniel Chisari, Consulting Data Product Engineer from HCA Healthcare. Daniel has been with HCA since 2014, where he started as a platform engineer; in 2018 he transferred to the Clinical Services Group to take on a more DevOps-focused role. As a system architect and technical engineer, Dan proved out enabling technologies like OpenShift and an ecosystem of tools to improve both development and operational efficiency. In his DevOps-focused role he will continue to build up foundational capabilities for data scientists at HCA. Dan came to Nashville in 2014 from his hometown of Chicago, Illinois. Dan, I will pass it on to you to get us started with today's session. Thank you.

Thank you, Atif. It's great to be here and it's great to talk to everyone about this. Thank you for the wonderful introduction.
As you were saying, I've been with HCA for what is my sixth anniversary this week, and I'm here to talk about the OpenShift platform and what it means to healthcare and to innovation as a whole. I will stay at a high level, and I'm going to start with a little bit of the story and a little bit of a historical view. I started with the design and build team in 2014, and I was mainly responsible for building a whole lot of Red Hat hosts. After a while, we started to get the idea that there was a better way to do things than deploying physical or virtual hardware from the ground up each time we needed to make application changes. There was a lot of buzz in the field around Docker and around Kubernetes. I went to one of my customer sites and said, I really want to explore this idea of OpenShift, because I think this is a great production model for alleviating some of the heartburn we have around timelines when we're deploying these large-scale infrastructure events. For those of you who don't know, HCA is the Hospital Corporation of America. We have a little over 180 hospital sites, a lot of medical and med-surg units, a lot of emergency rooms, a lot of care clinics, and we're responsible for quite a lot of the healthcare in this country as well as in the UK. I looked at what it would take to roll out something enterprise-wide and what that would get for the teams I was involved in, and that's where I came across Red Hat's OpenShift, because it got to a point where the timelines for getting infrastructure built were so severe that I said, I can take all of that away by deploying this one thing, and then you guys (when I say you guys, I mean the developers in my organization) can make pushes at their leisure, as fast as they need to, instead of constantly having to go back and submit long-timeline change control requests.
So the initial proof of concept that I was in talks to roll this out for was SPOT. SPOT, the Sepsis Prediction and Optimization of Therapy tool, is the reason we won the Red Hat Innovator of the Year award back in 2019, which to me seems like a lifetime ago. It was a way to automate the medical information that gets input into our EMRs, parse that information, and start to give a more accurate statistical reading of when a patient, or any person in a hospital, is more susceptible to or has sepsis. The biggest struggle we had in the beginning was getting the data feeds put into place, making sure that everything was being parsed correctly, making sure that all of the data pipelines were correct, and then applying that to new infrastructure and new development initiatives, aligning with the company's goals, which, rightfully so as healthcare changes in this country, get to align with the current state instead of the legacy view of the problem. So this is our healthcare overview; these are the total stats for our hospitals. You can see we are, I believe, among the leading healthcare providers in the U.S. We're located in Nashville, we have 184 hospitals, we are in 21 U.S. states and in the U.K., we see 34.8 million patients, and we have well over 280,000 employees, and that does include nursing staff and doctor staff. So when you roll out an enterprise-level solution for a platform, and then subsequently for SPOT as a development platform, those employees aren't coworkers, they're customers, and they've got a very high-stress, very demanding job, and they can't be bogged down with process. You're looking to take away some of that heartburn, and you're looking to replace it, not with a way to replace them out of a job, but with a way to augment the role and take some of the strain off of them.
So that's where we were at in the very beginning: we were looking at how, as a healthcare company, we were doing sepsis screens, and the CSG, at the time the Clinical Services Group, which is what I'm in now, really said there's got to be a better way of doing this. We've got all the data; we just need a way to funnel it and a way to do something correct with it. Next slide. For everyone on the call that doesn't know, sepsis is the deadliest disease in any hospital. It happens very fast, and it is completely preventable and completely treatable if it's caught in time. There is an increase in mortality of 47% for every hour that detection and subsequent treatment is delayed. So it is one of the few things that grows very rapidly and yet is completely solvable, as long as you can detect it and as long as you know about it. One of the challenges we were up against was patient care: when you check into the hospital, you're at the worst part of your year, even. You want to make sure that the nursing staff, the doctor staff, the hospital staff all have the best tools they can have, so that when you're at your worst and coming into a hospital, they can be at their best and have the best tooling available to help them do their jobs, because the faster they can react, the better off you're going to be. So that was a big part of what we were looking at and some of the struggles we were trying to solve.
So yeah, this was our challenge and goal. The challenge: sepsis detection was done manually, with clipboards, at shift change, and as everyone knows, nurses' shifts are long, 12-hour shifts. If you're only putting in data and doing a sepsis screen at the beginning and end of a shift, that is 12 hours where something can go wrong and something can start to grow inside a person before it gets detected. That is a matter of life and death; those 12 hours can absolutely define how you're going to be treated and whether you can be treated. We really wanted to make sure that we put software, something user-friendly enough, in front of nursing staff, not to try to take that aspect of their job away from them, but to give them better information and better tooling to do their jobs more effectively. It's never been about replacing nursing; they're absolute heroes and no one could ever replace them, but if you give them better tooling you can make their job easier, which benefits everybody. So as a group we started working through a small POC, which was an OpenShift cluster built through kind of shadow-IT means. The POC cluster had very little funding, and we weren't really sure whether it was going to be possible, but we all went in shoulder to the wheel and said we're going to give this a whirl. At the time I had an entire department behind me: approximately 20 or so data scientists, a handful of data science engineers, just me and another guy, Nick, as the DevOps guys, a Postgres DBA, Josh, and a couple of other guys, and just that group saying let's try to figure it out.
So we did a whole lot of Tableau and a whole lot of machine learning, and tried to roll it all in through Datomic with our vendor, and it turned into a really nice piece of Clojure code that was run in an OpenShift environment. Now they were able to spin up pods and spin up a nice little ETL in a way that synced Active Directory and all the other nice things that you need to have in your microservices view, and we'll get to the microservices view in a couple of slides. But yeah, it started as a very small effort that has grown into a very large department, and of course we partnered with Red Hat; we really couldn't have done it without them. They were with us every step of the way, helping us make sure that our platform was up and available, and making sure that we had the tools and the information we needed to put this into Docker containers properly, which, depending on where you are in your life cycle, is a steep learning curve, and there's a lot of engineering learning that goes into deploying to production and then scaling that to an enterprise. Moving on to the next slide. The other side of this was getting executive buy-in, making sure that the right people knew what we were trying to accomplish, down to the hospital level, because each individual hospital is run as its own, I won't say separate, but its own little microcosm of an entity.
So we had a whole lot of business leaders who would go out into the field and start to evangelize, and start to train staff on the new software, the processes, and all of the specialized workflows that would have to change because of it, and help them adapt what we were offering to the organization. They would then give critical feedback back to us engineers, saying this really doesn't work for them, and we would be able to quickly roll up a solution and deploy it to prod for that environment, or deploy it to prod for everybody else. It's a nice way to make everybody happy while also getting them the best tooling. So it wasn't just tech that solved the problem; there was a great deal of evangelism and a great deal of partnership with hospitals, making sure they knew what we were trying to accomplish and that we were listening to them, when they said they were going to try to get there with us. And the result of that has been pretty amazing. We average a five-hour decrease in sepsis detection time, and if you're not familiar, five hours is massive in a world that moves by the minute. We've seen just an astronomical decrease in sepsis-related deaths at hospitals, which is amazing to say out loud, because I don't get to say it out loud very often; everyone on my team kind of knows the impact they've had, so we don't really talk about it that much, but it's truly awe-inspiring to work with all the people that I do and to see what we've accomplished just in the last year. And the best part about this has been the open collaboration with other groups: pulling in security people when we think we need their input, pulling in data scientists and developers and DBAs, and having our own little DevOps focus group around this tooling, which has really paved the way for new efforts, and other teams have quickly followed suit. So this is our solution architecture. It is largely all OpenShift-based.
Lots of persistent volume claims within the non-production environment to pull in data; no persistent volume claims in production. Production runs completely stateless, we rely on the integrated Docker registry in OpenShift, and we have actually rolled out a custom tool for promoting images from non-production to production. As the architecture diagram shows, we rolled out two OpenShift clusters in our on-prem deployment: one production and one non-production. Everything gets written to and pulled from Git. We have our own build configuration, we have our own source-to-image configuration, and we wrote a custom tool to do that promotion from non-prod to prod, as a nice way to fulfill security and audit requirements around logging changes, but in a fashion that doesn't impede the developers from working quickly. And then this is our complete CI/CD pipeline. There's a handful of technologies here I'm sure everyone is familiar with, and I guess the main takeaway from the slide is that this is specific to our development team and our group. Without the development cycles and the ability to learn how this stuff works, you're not going to be able to do this without a whole lot of manual reading and effort. It took a long time to get to what this slide represents, and I imagine this isn't even remotely where it stops; it's just where we're at today. It's definitely a steep learning curve, and it is hard to have anyone outside your organization tell you best how your organization works. You're your own strongest advocate, and the best thing I can say is our motto, which is fail often and fail quickly; that's the best way I know to learn. Let's move on to the next slide, and here are some of the before-and-after results from the Sepsis Prediction Tool.
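The non-prod-to-prod image promotion Dan describes can be sketched roughly as a pull, retag, push, and audit-record sequence. This is a hypothetical reconstruction, not HCA's actual tool: the registry hostnames, image names, and audit line format below are all made up for illustration.

```python
# Hypothetical sketch of a non-prod -> prod image promotion: pull the exact
# image from the non-production registry, retag it for production, push it,
# and record an audit entry tied to a ticket. Hostnames are illustrative.
import subprocess

NONPROD_REGISTRY = "registry.nonprod.example.com"  # assumed hostname
PROD_REGISTRY = "registry.prod.example.com"        # assumed hostname


def promotion_commands(image: str, tag: str, ticket: str) -> list:
    """Return the docker commands that promote image:tag, tied to a ticket."""
    src = f"{NONPROD_REGISTRY}/{image}:{tag}"
    dst = f"{PROD_REGISTRY}/{image}:{tag}"
    return [
        ["docker", "pull", src],                     # fetch the exact non-prod image
        ["docker", "tag", src, dst],                 # retag for the prod registry
        ["docker", "push", dst],                     # publish to production
        ["echo", f"AUDIT {ticket} promoted {dst}"],  # stand-in for the ticket-linked log
    ]


def promote(image: str, tag: str, ticket: str) -> None:
    for cmd in promotion_commands(image, tag, ticket):
        subprocess.run(cmd, check=True)  # fail fast if any step breaks
```

The useful property, as described in the webinar, is that the audit trail comes for free: every promotion leaves a logged record of what hash moved and under which ticket, without the developer having to file a change request by hand.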
So you can see, when we were at a manual process we had approximately 20,000 screens performed with a 3.3% hit rate, and afterwards we had 722 SPOT detections with a 92.2% hit rate. Those are huge numbers, both in the number of screens and in the number of detections, detected or not. It's been a huge burden off of the nursing staff to be able to say: I don't have to worry about doing this manually anymore. I can rely on this tool, and it's not doing my job for me; when it says panic, I have to panic, but I can rely on it for that. So this is an example of what the business leaders did. The business leaders were having meetings with the CMOs, the chief medical officers, and the chief nursing officers at hospitals, trying to figure out how this best works in an actual hospital with nursing staff, because nurses are frontline and they are already taxed, already completely overburdened, and we wanted to make sure that what we were rolling out wasn't going to be another burden on them; we were actually alleviating stress, not adding on. So a big part of this was showing them the signal and then really coordinating with them on what to do with it. This was all the business development and executive staff, who, beyond the technology, were really making great strides in what this looks like. This is where we are at today. We are at 30,000 patients monitored and screened every day. We are at 75% sensitivity. We are at a prediction accuracy of a little under 92%, about 91%, and that still represents a five-hour time advantage.
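As a rough sanity check on the numbers above (a back-of-the-envelope calculation, not an official HCA analysis), the point of the before-and-after is signal-to-noise: roughly the same number of true positives are found, but with far fewer screens for staff to work through.

```python
# Back-of-the-envelope look at the before/after numbers quoted above.
manual_screens = 20_000   # manual sepsis screens performed
manual_hit_rate = 0.033   # 3.3% of manual screens were positive
spot_alerts = 722         # SPOT detections fired
spot_hit_rate = 0.922     # 92.2% of SPOT alerts were correct

manual_positives = manual_screens * manual_hit_rate
spot_positives = spot_alerts * spot_hit_rate

# Roughly the same number of true positives, found with about 28x fewer
# screens/alerts for nursing staff to review.
print(round(manual_positives))              # 660
print(round(spot_positives))                # 666
print(round(manual_screens / spot_alerts))  # 28
```

In other words, the manual process spent roughly 30 screens per caught case, while the tool fires an alert that is right more than nine times out of ten, which is why Dan describes it as taking strain off the nursing staff rather than adding work.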
This has also given us the ability to see if there are problems within the data pipelines that other teams manage and maintain, and to give them concrete proof: this isn't a problem for you now, but it will be in a couple of months, and here are the things we were able to uncover because of how we are parsing and ingesting your data. We are able to give them tangible, concrete evidence of something they might want to worry about that is not here today but is coming down the line, and that has been a huge advantage to the teams around us, because we see everything in real time, so we are able to give them a long, advance heads-up when there is a lurking problem they are not completely aware of. This has really fed into what has been an interesting year. This set up the groundwork not just for sepsis but for a lot of our COVID-related activities and efforts. Much like everyone, we have all been affected; we had a large-scale pandemic come to this country in a very ferocious manner, and we were largely caught with a lot of "what do we do now." What we learned in the SPOT rollout allowed us to roll out in the same vein around COVID-related activities: supply chain, bed management, and discharge predictors all got rolled into new tooling that we spun up a couple of months ago and that was rolled out enterprise-wide in a little under two weeks, which is pretty amazing. We were able to build a dashboard very quickly, get that dashboard up, get it parsing the right information, and get relevant information to hospitals, but also to divisions, because when there's a large-scale pandemic, there is a lot of information you have to parse, not just at your hospital but around your hospital and other facilities, to know whether I have a larger-scale problem than I'm anticipating in my state instead of just my hospital. So what we learned there really allowed us to spin something up quickly and
use a lot of the same access patterns and models for COVID-19 that we used for SPOT, just parsing different information, and we were able to really spin up something pretty amazing. I'm going to go ahead and wrap it up with 30 seconds to go, but thank you for the opportunity; it's been a pleasure talking with everyone.

Thank you, Dan, that was amazing. So let's take some questions from the line. Please type those questions in the question box; we'll continue to monitor that, and without further ado let's take some questions. One of the questions that came in, and I think I'm going to combine a few together: the question from Linda was, what kind of ROI is HCA getting from the solution, and what were the stats before the solution was applied to the processes?

Sure. ROI is actually very hard to calculate, because obviously we're a public company, and we're a for-profit company, so we do worry about business costs and deployments and things like that, but at a basic level we want everyone to be healthy and we want to save lives; that's our mission and that's our primary goal. So ROI is something I calculate in terms of how many hours of heads-up I can give nursing staff on a positive sepsis screen, to give them more time to get a patient better, because that could be my parents, that could be my son, that could be anyone. As far as the actual calculation of business costs across bare metal, VMware, and OpenShift, as far as I know it hasn't been independently evaluated, because the two architectures are vastly different, and we do deploy OpenShift on VMware VMs as well as bare metal, so there's a little bit of a Venn diagram when it comes to those environments.
As for the stats before the solution was deployed, it's just as good to talk about where we came from and where we ended up. We have a five-hour lead in the prediction model, where we give nursing staff on average a five-hour heads-up when there's a positive sepsis screen, so if you start at zero, you end up at negative five. But it's also just about taking the strain off of nursing staff, giving them a tool they can rely on and look at, so they can do the million and one other things they have to do in their jobs.

That makes sense, thanks for those insights. It's hard to quantify ROI in terms of life-saving, but then there are other factors, as mentioned: time saved for the nurses, a better customer or patient experience, fewer overall complications that can result in straining your resources; all are part of the ROI that can be accounted for, but the number of lives saved is by itself a huge factor. So, for this capability to work, you needed the data from EHRs. How did you go about connecting the data from EHRs to this application?
So that involved a lot of separate teams. We don't have one EMR, we have multiple EMRs, and one of our biggest ones is Meditech. Meditech parses data and actually saves logs in real time to our Hadoop cluster, and that is available to us via HL7, so we have HL7 listener feeds set up, with firewall rules between the OpenShift environments and the EDW, which is our electronic data warehouse, to talk specifically to the topics that are available on those feeds.

Got it. Okay, let me see what other questions are coming in. One question from Sina: .NET or Java?

No .NET; we are primarily Python and Clojure, and then there is some Java on the front end.

One other question that came up; let me try to summarize. Could this solution have been written using a traditional rules engine rather than an ML engine doing self-training? What were the areas, from your perspective, that called for using ML versus a classical rules capability?

So simple rules, from my perspective, really take a couple of things for granted, one of which is a static environment. We went with a more ML-defined approach because Meditech changes, data changes, and patients (we're all unique) change, so it's going to be very hard to write a binary answer to what is a complicated question.

I mean, you have constant movement of the target environment, and as such you need a model that is quickly adaptable, because otherwise you have to re-code and re-deploy new rules for that environment; so that's essentially what you were able to establish. Okay, that's pretty interesting.

Yeah, you really have to worry about developer burnout. When it comes down to it, burnout is real, and the more active changes developers have to make, the more every day can feel like a panic if you engineer things wrong from the get-go. Developers have a hard job, and the less strain you can put on them, the better.

Developer burnout: this is the first time I'm hearing about burnout, so yeah
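For readers unfamiliar with HL7, the listener feeds Dan mentions carry pipe-delimited HL7 v2 messages. A minimal illustration of what such a message looks like and how its segments and fields split out follows; the message content, facility names, and values here are entirely made up, and real feeds (MLLP framing, Meditech-specific segments) are considerably more involved.

```python
# Minimal sketch of splitting an HL7 v2 message into segments and fields.
# Segments are separated by carriage returns; fields by pipes.
SAMPLE = "\r".join([
    "MSH|^~\\&|MEDITECH|HOSP01|SPOT|CSG|202005011200||ORU^R01|12345|P|2.3",
    "PID|1||000123^^^HOSP01||DOE^JANE",
    "OBX|1|NM|WBC^White Blood Cells||15.2|10*3/uL|4.0-11.0|H",
])


def parse_hl7(message: str) -> dict:
    """Index segments by their three-letter type; each segment is a field list."""
    segments = {}
    for line in message.split("\r"):
        fields = line.split("|")
        segments.setdefault(fields[0], []).append(fields)
    return segments


msg = parse_hl7(SAMPLE)
# MSH-9 is the message type; MSH field numbering is offset by one in the
# split because MSH-1 is the '|' delimiter itself.
print(msg["MSH"][0][8])  # ORU^R01
print(msg["OBX"][0][5])  # 15.2  (an observation value a sepsis model would consume)
```

A real listener would sit on a socket, strip MLLP framing, and route each parsed observation into the data pipeline; the split above is only the first step.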
Somebody is paying attention to this for sure. Alright, one other question that came up: how is the adoption of OpenShift and the CI/CD pipeline going outside of this project? That's from Jason.

Surprisingly well. I believe there are now 12 other teams that use OpenShift; don't quote me on that number, but I know it's a pretty significant amount, and we have had a lot of significant org changes based on microservices and this type of platform. We have an official DevOps team, and we have an enterprise architecture team that is more focused; it's pretty much caught fire here, it's been pretty good.

Glad to hear that. Folks, continue to ask questions; I'm monitoring, and a lot of good questions are coming in, so we have a pretty well-engaged audience, which is always good. One question came in, I guess related to my question asked earlier, from Mr. French: is the outcome improving your quality rating and positively affecting rates negotiated with health plans? I mean, have you connected this to what it has done for your underlying business?
You know, I am not sure, and I am positive that if there were such a conversation, I would not be involved.

Okay. Another question came in from Linda; that's an interesting one. From an executive sponsorship perspective, was this project primarily led from the IT side or from the clinical side? How did the sponsorship play out, and who did you have to go to to create buy-in for all of this?

Sure. The short answer is all of the above. The long answer is, even though we're developers, we still report up to the Clinical Services Group, since renamed the Clinical Operations Group, so we are clinical. Even though I'm not a phlebotomist and will never go into a hospital and administer medication to a patient, I am absolutely clinical, and so is everybody on my team. So we don't draw, or at least I don't draw, a differentiation between the IT side and the clinical side, because we're all IT and we're all clinical. The other side of this is that we have executive business partners, executive sponsorship, at the clinical level in my organization, and they pair up on the other side with the chief nursing and chief medical officers, the clinical people on the actual field side, the people who will be using it, and they work in conjunction and go that way.

So basically, for the end user, you have to create buy-in on the chief clinical side and work alongside them to bring the idea forward. I have a question that came up around what aspect of OpenShift specifically facilitated all of this: what made OpenShift your choice, and how did it help you?

The thing that made OpenShift so appealing was, well, my experience and my training have always been in Red Hat, so I'm always inclined to go Red Hat before I go to anyone else, but it was about the robustness of the actual application, how it could be scaled to multiple data centers, and how I wasn't tied into a lot of the same vendor lock-in that other vendors
have within their Docker implementations. No offense to anyone, but if I decided to port everything away from OpenShift, I absolutely can; I'm not tied into that. And then there was familiarity: I've liked and used Red Hat for years, so why not keep it there?

We appreciate you saying that; that is our mantra, to make sure we are open and interoperable with the ecosystem that exists around us. So, on the deployment of these models: how do the capabilities built into the CI/CD play in to help you refine these models and iterate over them? Any thoughts around that you can share with the audience?

You broke up for a second there, Atif.

Sure. So, from an OpenShift CI/CD perspective, on the refinement of the model, or continuing experiments and refining them, can you share your thoughts on how OpenShift played into that aspect and helped you improve your models for sepsis?

Sure. So that comes down to the image spin-up, the ease of being able to spin up a new image for an S2I build. Once you get the hang of it, it's actually a very easy, very repeatable process; and that's a big asterisk, that "once you get the hang of it," because it is very complicated at first. But once you get into the rhythm of things, you start to find new JDBC or ODBC connections that you need to adapt in, and you need to be able to roll those in quickly and get them to the developers. At an extraordinarily basic level, though, it's just a Clojure algorithm that's baked into the image, and that's what's deployed and what's running live in separate capacities.

Thanks, Dan, for that. One other question that came in is around the training data used for building the ML model: what training data was used for the prediction, and which algorithm is used here? I don't know if you can share any of the details there.

That sounds like a really heavy data science question, and I am going to preface it with: I am not a data scientist, and there is plenty I don't understand. I work on a platform
and I work on CI/CD, but as far as the actual data science models, I can try to get you answers, and I know exactly who I would ask, but I honestly don't know.

No worries, thanks Dan. So, as a follow-up to this question: what kinds of data feeds are being used for making this prediction? Outside of the EHR, are you consuming other data, or is it all coming from the EHR?

The majority of it comes from the EHR. There is a little bit more that comes from Teradata, and a little bit more that comes from a data lake; it's SQL-based, I know that, but I forget its actual name. There is a little bit of SQLAlchemy, and then it's mostly HL7.

Sounds good. So, one of the other questions I just scrolled through, and folks, please keep the questions coming; all great questions, and this is really for you to interact with our customer, so I encourage you to keep asking. One other question was: what have been some of the biggest challenges around being successful as a team, and how do you make sure those challenges do not hinder progress on your project?

I could write several novels on that. Some of the biggest challenges: for everybody that wants to succeed, there are people who challenge a new way of doing things. It's not so much that they don't want you to be successful; it's that they don't understand, and their default answer to "I don't understand it" is going to be no. There is plenty you can do in an organization, even a small organization, to effect pretty monumental change. It takes a lot of top-level evangelism, and after that you have to work through the actual people in the organization who put hands to keyboard and do the work. It was challenging in the early days; it was a lot of, not shadow IT, but giving people specific tasks for very small parts and pieces so you can get small things accomplished. There were some pretty big challenges, even in a small-to-mid-size organization, in effecting platform-level change like this.

I can
imagine you probably had to deal with the cultural shift and the process shift, which is all part of bringing any disruptive technology in at such a scale to a massive organization, and how challenging that was; good job to you and your team for getting through that. One other question: you mentioned logging changes. How did you incorporate audit logs and change management into the design and build?

So obviously we have GitHub, and we run that on-prem, and that's where code changes are pushed and pulled from, so we can go off of main and non-main branches. From there it's built within non-production, and one of my team members, Tommy, wrote a custom tool with a web interface that is able to talk to an API on non-production OpenShift, pull from its registry, and then docker pull and docker push to production. That really does three things: not just the image promotion from non-prod to prod, but it also lets you link up a JIRA ticket so there's a logging record that the push happened, and it links to the specific hash of what is being pulled and pushed. That fulfilled our other requirement of making sure there's some sort of peer review and some sort of log that a push happened; especially in the world of healthcare data, you have to have these detailed logs to meet requirements.

It definitely sounds like a really cool way to do it. One question that came up from Gaby: can you speak a bit more about the effort around COVID tracking, and what outcomes are you seeing from that effort as well?

So there's a good way to answer that question, and I'm going to struggle with it. The nice thing we have about COVID tracking is that it works at both a granular and an enterprise scale. What we're publishing to the field actually defaults to a kind of country- or division-level access, where if you're the CMO over multiple divisions or multiple hospitals, you can look at those independent overlays, or you can look at all of the
overlays that you want access to and the same goes for some of the some of the chief officers that want to look at more of a national view of how this information is coming in and going across but it also gets very granular to where a hospital administrator or the CEO of a specific hospital can look at his hospital and drill down to the actual meet on the bones and say this is what's going on in each individual bed so it's a really neat timeline that we have within some new tooling that we've deployed enterprise-wide to be able to say we're under specifically just in the app development we're in the framework development phase of this we're not trying to solve one specific question with one specific answer we're trying to give you bigger and better tools that kind of go in and say here's all the information or if you want to drill down here's the one specific piece that you want but it's up to you tell the tool what you want versus telling us and then we tool it no thank you for that if I were to follow up on that how long did it take you guys to put something like that together with what we learned from the sepsis protection that effort was it was in POC about six months ago and it had basically one guy Nathan working on it and after he got the POC off the ground and everything started to really catch on now it's got a full-fledged team around it and now he's got lots of people working on it so it got deployed enterprise-wide in I believe eight or nine days but the developments before that was around one hospital and it was just the POC and then it scaled very rapidly and I assumed that you were able to scale so rapidly because you had the unified architecture through your hospitals and you didn't run into issues what was the reason for scaling so fast so quickly all of the above we had successful buy-in from executives they've seen what we can do with the spot tool we had an enterprise architecture that was already deployed all of our pipelines were already 
intact. Everything was already built and tooled for this specific pandemic, or any pandemic, so we were able to scale up very quickly because of all of the lessons learned and all of the documentation from the last half a decade.

Well, you are obviously supporting the frontline workers in this war we're waging against the pandemic. I really appreciate everything you, your team, and your IT staff are doing, and thanks to all of your healthcare workers for being out there and supporting the population the way they have. With that, I want to ask if anybody else has any questions; if not, we're going to end this webinar. I like the IoT question, that's a good one. Go ahead, we've got two minutes. The question is: is IoT-type data from the hospital setting a potential data set to help improve the machine learning applications?

The short answer is not yet. We actually got our first IoT devices into POC status this year, and it became one of the major efforts when the world got very different very quickly. So that is something we are exploring very extensively, and the next year and a half is going to be extraordinarily exciting once they come online.

IoT will definitely bring a new wave of data feeds and new insight we didn't have before, which could open up a lot of use cases beyond the current ones, or even make the existing ones more effective, with smart devices and history tracking. With that, thanks again for all your leadership and effort on this amazing project, and for sharing with the community here. Folks, if you have any questions we didn't get to, we'll follow up, but I believe we got to all of them. If you think of something, feel free to send us an email and we'll follow up with answers. Dan, any closing remarks?

Just keep wearing masks and keep six feet away from other people. That's all I ask.

Yeah, same here. All right, thank you everyone.
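
[Editor's note] The promotion workflow Dan describes earlier (pull an image from the non-production OpenShift registry, push it to the production registry, and leave a logged record on a JIRA ticket) can be sketched roughly as a shell script. This is a sketch, not HCA's actual tool: the registry hostnames, image name, JIRA base URL, and ticket ID are invented placeholders, and the audit call assumes the standard JIRA REST API v2 comment endpoint. The functions print the commands rather than executing them, so the sketch can be inspected anywhere, or piped to `sh` to actually run.

```shell
#!/bin/sh
# Sketch of an image-promotion step: non-prod registry -> prod registry,
# plus a JIRA comment so there is a logged record of the push.
# All hostnames, names, and the ticket ID are hypothetical placeholders.

# Print the docker commands that promote IMAGE:TAG from SRC to DST.
promotion_commands() {
    src="$1"; dst="$2"; image="$3"; tag="$4"
    printf 'docker pull %s/%s:%s\n' "$src" "$image" "$tag"
    printf 'docker tag %s/%s:%s %s/%s:%s\n' \
        "$src" "$image" "$tag" "$dst" "$image" "$tag"
    printf 'docker push %s/%s:%s\n' "$dst" "$image" "$tag"
}

# Print the audit call: a comment on the JIRA ticket recording exactly
# which image and tag were promoted (JIRA REST API v2 comment endpoint).
jira_audit_command() {
    ticket="$1"; image="$2"; tag="$3"
    printf 'curl -s -u "$JIRA_USER:$JIRA_TOKEN" -X POST -H "Content-Type: application/json" -d "{\\"body\\": \\"Promoted %s:%s to production\\"}" https://jira.example.com/rest/api/2/issue/%s/comment\n' \
        "$image" "$tag" "$ticket"
}

# Example invocation with placeholder values:
promotion_commands registry.nonprod.example.com registry.prod.example.com myteam/myapp 1.4.2
jira_audit_command OPS-123 myteam/myapp 1.4.2
```

A real tool would also capture the image digest (e.g. from `docker inspect`) and include it in the JIRA comment, matching Dan's point that the record links to the specific hash being pulled and pushed.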