Thanks for waiting, everybody. I'm Stephan Kadauke, and it's my extreme pleasure to introduce Patrick Mathias, MD, PhD, who is the Vice Chair of Clinical Operations of Laboratory Medicine at the University of Washington School of Medicine, and one of the leaders in driving automated large-scale COVID-19 testing. His lab alone has done more than 500,000 tests, which is, in laboratory medicine, an enormous scale-up from starting with nothing, and he is going to show you how he used R to help orchestrate this ginormous effort. Take it away, Patrick.

Thank you, Stephan. Sorry for the delay; I'm going to switch computers, and hopefully there will be no technical issues. So yeah, I want to spend this time talking about, actually, this is not an R-centric talk. This is really an open-source-centric talk. We've used a variety of open-source tools and infrastructure to help us expand the scope of our testing, and in laboratory medicine, in this pandemic state, by increasing our capacity for testing, we're really increasing access to testing. There are severe limitations, as I think everyone's aware, throughout the country and throughout the world in providing timely testing. And so we've spent a lot of time and effort using some of the infrastructure that we've developed over time to rapidly deploy applications that can help us fill some of the gaps that we see.

I'm going to spend a lot of time talking about the general problem and solution, and I'm not going to go into too much technical detail about any one solution. There's a lot of Python and AWS in this talk, but I promise there's some R that I will cover at the end. Really, R has helped us support the whole operation, but I think it's interesting to go through the various solutions. This is a lot more of a clinical informatics perspective on how to just make things work under difficult circumstances.
And so I can't give this talk without acknowledging the folks in UW Virology. Dr. Greninger and his team really oversaw one of the earliest COVID tests in the country, and it's my strong opinion, and it would take a lot of strong argument to convince me otherwise, that our UW Virology laboratory was the most prepared laboratory in the US. As soon as the genomic sequences for SARS-CoV-2 were out in the wild, they were on test development at the end of January, and they got pretty close to having a test available. Then they spent the majority of February going back and forth with the FDA to try to get this test available and out there, and because of regulatory and other issues, they were not able to push it out as early as we'd have liked. But it's really their effort, along with folks like Greg Pepper, the manager, Dr. Keith Jerome, who's the director of the overall virology laboratory, what has grown to be a very large pre-analytical team, and a team of medical laboratory scientists, that has grown our testing. The role that IT and lab informatics has had in this is to help amplify their effort and make sure that it is maximally utilized to get testing out there.

Just to frame our perspective on the importance of testing in the pandemic: we have our symptomatic high-risk populations, and we have folks who are asymptomatic. The goal is to identify as many folks who are carrying SARS-CoV-2, the virus, as possible, test as many populations as we reasonably can given our supply constraints, identify those cases, isolate them, and then cascade this process of understanding who they've been in contact with to identify other folks who might be infected, so that we can stop the spread and contain as much as possible. This is the general framework that many countries have used to successfully contain the virus. So testing is not the only piece of this solution, but it is a critical piece.
And as laboratorians, when we think about the total testing process, we think about this schematic here, which many people will be familiar with: the brain-to-brain loop. The general idea is that there is some action from a patient, some issue with the patient; the physician works with them to figure out the problem, and that triggers a laboratory order. Our total testing process goes through ordering: you have to collect a sample, make sure it's properly identified, and transport it to the laboratory. Within the laboratory, we need to prepare it for analysis, which includes a critical step of accessioning the sample into our lab information system. We analyze it, we report it, and then, whether it's a consultation, an actual conversation, or a test interpretation in which the laboratory has taken its understanding of the test and injected that into what can be an automated report, the result goes out to the physician, and in this day and age also to the patient. So this is the general workflow that we work in every day in laboratory medicine.

From an informatics, electronic standpoint, we think about our laboratory information system (LIS), which really coordinates all those activities within the lab; in our case, at UW Medicine, it's Sunquest. And then, in 2020, we're very often working with the electronic health record. At our organization, we run both Epic and Cerner, as well as a variety of other electronic health records. Those systems are not involved in every step of the process, but they play critical roles at either end. And typically the way that we move data from one system to another is through what we call an HL7 interface. Health Level Seven is a standard for the data that we're moving around: we'll receive orders into the lab, and then we issue results back into the electronic health record.
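To make the HL7 discussion concrete, here is a minimal sketch of what an HL7 v2 result message looks like and how it splits into segments and fields. The message content and field positions are illustrative, not the actual UW Medicine interface format.

```python
# A minimal sketch of an HL7 v2 result (ORU) message: segments separated by
# carriage returns, fields separated by pipes. The sample message and codes
# are illustrative, not a real interface payload.

SAMPLE_ORU = "\r".join([
    "MSH|^~\\&|LIS|UWLAB|EHR|UWMED|20200401120000||ORU^R01|12345|P|2.3",
    "PID|1||MRN001||DOE^JANE||19800101|F",
    "OBR|1||ACC123|94500^SARS-CoV-2 RNA^LN",
    "OBX|1|ST|94500^SARS-CoV-2 RNA^LN||Not Detected||||||F",
])

def parse_hl7(message: str) -> dict:
    """Split an HL7 v2 message into {segment_id: [list of field lists]}."""
    segments = {}
    for line in message.split("\r"):
        fields = line.split("|")  # fields[0] is the segment ID, fields[n] is field n
        segments.setdefault(fields[0], []).append(fields)
    return segments

msg = parse_hl7(SAMPLE_ORU)
result_value = msg["OBX"][0][5]  # OBX-5 carries the observation value
```

Every institution fills these fields slightly differently, which is exactly the interface-customization problem described above.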
And that's the ideal state within the laboratory: we get electronic orders, we send electronic results back, and that's where we typically want to be. But HL7 can be challenging, in that interfaces between these production systems typically require a lot of expertise and time investment to build. HL7 is called a messaging standard, but it's not standardized to the point that you can just say, I'm going to build an interface from this system to that system, apply the standard, and be done with it. Every institution has its own flavors of how it implements HL7. In some cases, customization is critical for some functionality; in other cases, it's just a result of how they historically implemented the standard. So what we see in the informatics space, dealing day-to-day with sending messages between systems, is that building these pipes of HL7 messages is typically a high-overhead activity. Most of our HL7 interface projects take weeks to months, and we needed to respond to the pandemic on a shorter time scale.

So the default mechanism by which we receive orders is actually, in 2020, still a piece of paper. We'll get a sample and a piece of paper. This is an example of our SARS-CoV-2-specific requisition, and it will be filled out with a lot of information. Sometimes you can pre-populate some fields with data from a kind of PDF generation, but very often there are still lots of handwritten elements on this form. And there's this critical step on intake in the laboratory, which can be a bottleneck, of getting the information from that piece of paper into your laboratory information system. Our pre-analytical team plays a critical role in that.
And that can be a rate-limiting step if you don't have a huge pre-analytical staff. Based off of data analysis, which on our end we do in R to understand throughput, we have a general idea that, depending on the different flavors of interfaced orders, one FTE can process 250 to 450 orders per shift. For manual orders, it's much lower, somewhere between 50 and 150. So whenever possible, we want to shift manual orders, those paper requisitions, into an electronic process. And this is not just true for accessioning; we want to think about the total testing process. We have a fixed number of people and a fixed amount of analytical equipment, so we want to think about our approaches to take every step of that total testing process and make it more efficient, so that we can get more tests through. Even if we can't scale all of our analytical capacity, we want to do what we can to make sure that we can get results out in a timely manner, because the testing-and-tracing methodology relies on us being able to get results out quickly.

So if we think about this total testing process, one of the areas where we've applied custom software is around ordering and that accessioning process. Initially, early on in the pandemic, we were working with skilled nursing facilities, having identified that these facilities are an area of high morbidity and mortality from this virus. Many of our local SNF residents were not already registered as UW Medicine patients, and there's high overhead to doing that registration at a mass, population scale. So while we were developing the physical infrastructure for the drop teams that would go in and help screen these locations, we came up against this issue of how to do this quickly, given our limited capacity to scale up our staffing.
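As a back-of-the-envelope illustration of why the interfaced-versus-manual distinction matters, the throughput figures above translate directly into staffing requirements. The daily order volume below is made up for illustration.

```python
# Staffing math using the per-FTE throughput figures from the talk:
# 250-450 interfaced orders per shift vs. 50-150 manual orders per shift.
# The daily volume is an illustrative assumption.
import math

daily_orders = 3000
interfaced_per_fte_shift = 250  # conservative end of the 250-450 range
manual_per_fte_shift = 50       # conservative end of the 50-150 range

fte_if_interfaced = math.ceil(daily_orders / interfaced_per_fte_shift)
fte_if_manual = math.ceil(daily_orders / manual_per_fte_shift)
```

At the conservative end of both ranges, the same volume takes 12 FTE-shifts when orders arrive electronically versus 60 when they arrive on paper, which is why shifting to electronic ordering was the priority.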
We don't want to deal with paper requisitions, but we identified that many of these facilities, even if they don't have an electronic health record, do have rosters available in a structured format. So our starting point for a lot of these pre-analytical improvements was: can we take a spreadsheet and get orders into our laboratory information system via that spreadsheet?

Just to briefly touch on some of the resources that were helpful in doing this: we have been investing expertise and time in developing Amazon Web Services resources. We have a business associate agreement in place, we can work very securely within AWS, and we have that hooked into our organizational authentication methods. So that was one key enabler. Another enabler was our Dokku stack. Very briefly, the general principle is platform-as-a-service: can we package all of the components that go around deploying a web application in a way where you can focus on the application and its logic? We have a kind of template for all of those other pieces that allows you to deploy an app quickly and have it hooked into our authentication. So that's our starting point for a lot of these solutions: the ability to spin up an app, write the logic, and then deploy it on AWS in a secure manner. Our first round at using this type of solution was to work with Dr.
Ong and his team in post-acute care, where we would schedule drop team visits and get spreadsheets ahead of time. We developed what we call upfile, an upload application that allows us to take a roster, do data validation, and understand what data is missing and what's going to break if we try to send the data to Sunquest, really taking over part of the role that the electronic health record typically plays. Typically, your electronic health record does that data validation for you, so when you send a transaction from the EHR to your LIS, you don't have to worry about missing data or data that is not in the right format for the fields in your lab information system. So we developed a pretty lightweight app to replace that data validation function, with a workflow to get it through our interface engine: generate a stream of HL7 messages based off that spreadsheet and then send that to Sunquest.

This is just a quick look at what the app looks like. It went very quickly; it took about a day to develop, and we've been working on the validation logic since, as we add additional fields like insurance. So it's a pretty simple and straightforward use case, but I think the reality is that if you are working with institutional IT teams that are stuck with either the electronic health record or the lab information system, there's not really a rapid solution to help glue those together in many organizations, at least not in ours. Overall, this helps us reduce effort, time, and errors at the collection site, reduces our effort in processing because we're not manually typing in information, and allows us to capture the full information straight off of the record from the facility.
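The data-validation step described above can be sketched roughly as follows. The required fields and format rules here are illustrative assumptions, not the actual upfile logic.

```python
# A minimal sketch of roster validation: flag each spreadsheet row that the
# LIS would reject for missing or malformed fields. Field names and rules
# are illustrative, not the production upfile implementation.
import re

REQUIRED = ["last_name", "first_name", "dob", "collection_date"]
DATE_RE = re.compile(r"^\d{4}-\d{2}-\d{2}$")  # assumed YYYY-MM-DD convention

def validate_row(row: dict) -> list:
    """Return a list of human-readable problems for one roster row."""
    problems = []
    for field in REQUIRED:
        if not row.get(field, "").strip():
            problems.append(f"missing {field}")
    for field in ("dob", "collection_date"):
        value = row.get(field, "")
        if value and not DATE_RE.match(value):
            problems.append(f"{field} not in YYYY-MM-DD format")
    return problems

def validate_roster(rows: list) -> dict:
    """Map row number -> problems, for rows that would break downstream."""
    return {i: p for i, row in enumerate(rows, start=1)
            if (p := validate_row(row))}
```

Rows that pass would then be transformed into HL7 order messages for the interface engine; rows that fail are surfaced to the user for correction before anything reaches the LIS.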
We deployed this not only for our skilled nursing facilities; it has also come into play with screening in other settings. For example, we have a large number of Alaskan fishing boats that are based in Seattle, and we've been able to rapidly deploy screening for these boats by working through this mechanism to get folks orders and screen them quickly. Thus far we've been able to screen 38,000 people based off of this roster upload mechanism.

When we were thinking about this mechanism, we thought that it was a nice solution, we'd deploy it, and that would be the end of the story; this was just going to be a production thing that we'd do and that would be it. But actually, within a couple weeks of deploying this and using it widely, the city of Seattle came to us and asked whether we could help support high-throughput testing sites for them. And we had this big challenge: there was a federally supported testing site that was going to be going away in three weeks, and we needed to have a replacement for it really quickly. So we want high-throughput testing, we want to capture all the data, and we want to minimize our laboratory staffing requirements, which means again avoiding at all costs large volumes of paper requisitions that we don't have the staff to log in. In light of these challenges, we quickly came up with an analogous solution to our spreadsheet solution, using an application from Solv.
Solv Health is a company that produces a self-scheduling web app. I think one of the key requirements for the city was to make it as easy as possible for someone who needs a test to get a test, and part of that is providing a channel for them to go on their phone, say "I know I'm symptomatic, I need to schedule something," and Solv has really played that role. Traditionally they were working with physician offices, with a lightweight front end to get people self-scheduled into EHR schedules. With the assistance of the U.S. Digital Response team, we did a rapid evaluation (we didn't do a full RFP on this time frame) and decided this was a good solution to help us get structured data that we can feed into this data validation and order pipeline.

Just to walk you through what the process looks like: there is a very prominent city of Seattle COVID testing website that gives you information on who's eligible for testing. You can click through that, find whichever of these sites is most convenient for you, go through that front end, and you end up with the ability to schedule yourself in for a time, entering the required information through the application. One of the lucky things for the city of Seattle sites was that we had a large number of emissions testing sites that were frequently used until, I think, December 31st, when the regulations around emissions testing changed in Washington. So we actually have a large number of sites throughout the state that are laid out like this but are no longer doing emissions testing, and they were a perfect start for standing up COVID testing sites. So we repurposed these sites, and patients go through a drive-through. We have
registration techs who do some screening up front. You see the gentleman here putting a sticky note on the car; there are different color sticky notes that indicate the level of completeness of the information that the patient has put into their Solv record, for the downstream registration tech to review and help make sure that we have complete contact information, insurance information, and things like that. The Seattle Fire Department actually staffs the sample collection for this.

So how does that physical workflow tie into our electronic workflow? We have our samples with barcodes; we scan those into a field in the Solv application, which means the payload can be carried across to upfile, go through validation, and then be sent to Sunquest. Importantly, we have a sample identifier that follows the whole process and generates the order, so that after the sample is collected and transported to the lab, we can just scan it into our system and retrieve the order. That cuts out a huge amount of the work of logging in paper requisitions. So far, with this system in place, we've performed 150,000 tests across all of the city of Seattle sites; the fourth site just opened up yesterday. We're also working with King County, in areas of South King County that are kind of testing deserts, to take the same system and apply it to underrepresented populations through the county as well.

So those are some of the variety of solutions that we've deployed for the pre-analytical phase of testing, getting samples into the lab. Again, we have disparate systems that are fit for their purpose, but there's not really a great, simple way to tie them together in the way that we need, and that's where we use our custom software
development to support that.

And so you can actually think about processes within the laboratory in the same way. We do have automated testing platforms in the lab, but those have had severe supply constraints, and so many labs are still using a variety of manual processes, with instruments that are not actually tied together very well with smooth data flows. So we can have almost the same type of issue on the analytical side within the lab, and we can take the same general solution: locally developed apps as glue, so that we can send data back and forth between systems and decrease manual transcription and manual error.

The dream in the lab is that you have this beautiful automated line: the samples come into the lab, they magically go onto the line, all these robotics and instruments take care of all the work, and then a result is issued. Ideally it's a very low-touch process for high-throughput laboratory testing. That's not the case in the molecular lab. One of the things we have to think about in molecular labs in particular is contamination, but for these respiratory samples specifically, you're dealing with samples on swabs that can't go directly onto an instrument. There have to be some manual processes to transfer your media, the liquid that your swab is in, to another tube, and then go through a series of other instruments, for example the PCR thermocycler instruments here on the right. That's really the reality for most molecular labs. There are probably some molecular labs out there that have more automated solutions, automated lines, but in general most molecular labs are not typically doing testing at this scale, and so you
have to glue these solutions together in terms of moving your data back and forth.

Just at a high level, think about what happens for a SARS-CoV-2 viral test. Up top is the physical workflow, and below is the data workflow for these samples. You transfer the sample to a tube, and those tubes have identifiers from the lab information system. If you're not running an automated instrument, you have to go from these tubes to a 96-well or 384-well plate; that's the plate preparation, done by a liquid handler if you're lucky enough to have an automated one. Then you move that to an extractor, an instrument that's purpose-built for getting nucleic acid, which we're trying to detect, out of these samples so you can increase your sensitivity. From there, after some preparation, it goes onto the thermocycler, that PCR process that many people are probably familiar with, where you're cycling the temperature up and down and exponentially amplifying the nucleic acid.

Prior to COVID, many of these processes, particularly on the back end, were very manual. One of the challenges that we identified early on was that, to get the samples assigned to the right wells in the software, the staff were manually scanning in the labels on the containers sequentially. They would have to do this in racks, 92 at a time, to go onto the plate. That workflow might be manageable for a few plates a day for a technologist, but when you have to multiply that to thousands, you're asking for repetitive stress injury, and it's just not going to be sustainable physically for the staff. And so
another issue that we identified early on was results transfer. Again, when you're working at smaller volumes, you can manually review the output from the thermocycler, and the workflow was to put a ruler under each result and then manually transcribe it to the laboratory information system. Again, not ideal, and something that would have been nice to know about prior to the pandemic, but it was manageable at prior volumes; when you're scaling up to do thousands a day, that's just not sustainable. So those are a couple of the issues that we identified early on, analytically, that we wanted to address, using the same model of transforming and moving data around to get things into the appropriate format.

As an example, we might have an XML file that's output from our Hamilton liquid handler, and we need to coerce it so that it can be uploaded to our PCR instrument, so that the experimental layout is accurate and the right samples are mapped to the right wells. The important thing is that you have to preserve the mapping of which well on the plate holds which sample. These are pretty straightforward operations; you can see here that this is a Python script in the application, and we deploy it on our Dokku stack. Operationally, you take your output file, you drag and drop it onto the web app, it produces the output that you need, you review it, and then you download the files that need to go onto the instrument. That addresses the scanning-92-samples-at-a-time portion of the workflow.
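The kind of transform just described, preserving the well-to-sample mapping while converting formats, can be sketched like this. The XML tags and CSV columns are made up for illustration; the real Hamilton export and PCR instrument formats will differ.

```python
# A sketch of the liquid-handler-to-thermocycler transform: parse a (made-up)
# XML plate layout, keep the well -> sample ID mapping intact, and emit the
# kind of CSV sample sheet a PCR instrument could import. All tag and column
# names are illustrative assumptions.
import csv
import io
import xml.etree.ElementTree as ET

HANDLER_XML = """
<plate barcode="PLATE001">
  <well position="A1" sample="SAMP-0001"/>
  <well position="A2" sample="SAMP-0002"/>
  <well position="A3" sample="SAMP-0003"/>
</plate>
"""

def xml_to_sample_sheet(xml_text: str) -> str:
    """Convert the handler's plate layout into a Well,Sample CSV."""
    root = ET.fromstring(xml_text)
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["Well", "Sample Name"])
    for well in root.iter("well"):
        # The only invariant that matters: well position stays paired with
        # the sample ID assigned to it by the liquid handler.
        writer.writerow([well.get("position"), well.get("sample")])
    return out.getvalue()
```

Wrapping a transform like this in a drag-and-drop web app is what turns a one-off script into something bench technologists can use at volume.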
The results transfer is again just transforming data: you have data in a certain format, shown here on the very far left, and you want to parse it into the form our middleware expects. Data Innovations, the middleware that talks between the instruments and the laboratory information system, has a standard format for flat files, so we can basically send a flat file to Data Innovations and file those results into the lab information system. It's a pretty straightforward set of operations, but again, having a nice web app deployment helps us take that script and turn it into something that the technologists on the bench can interact with, quickly get the output that they need, and send it to the lab information system.

If we think about this whole process: pretty early on in the response, really within the first week, we made sure that this whole data flow was much more automated. Early on in the pandemic we were talking about what our capacity was, and we were thinking about 300 to 500 tests per day based off of all these manual processes. This work really allowed us to scale up to 2,500 tests per day, at least the capacity for 2,500 tests per day, by moving data around and taking out a lot of these manual processes.

How important was that ability to scale for the response? This chart, based off of publicly available data from very early in the pandemic, shows the percentage of testing performed by the UW Virology lab. The blue line is the percentage of total testing in the state of Washington; at that point it was us and the state public health lab that were performing testing. The red line is the percentage of testing for the US, and in the first couple weeks we were actually doing 25% of the testing for the entire US initially.
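A minimal sketch of that resulting-side transform, going from per-sample Ct values to a delimited flat file that middleware could file into the LIS, might look like this. The Ct cutoff, field order, and test code are illustrative assumptions, not the actual Data Innovations file specification.

```python
# A sketch of the results transform: interpret (made-up) thermocycler output
# rows and emit one pipe-delimited line per sample, the general shape of a
# middleware flat file. Threshold and field names are illustrative.

CT_CUTOFF = 40.0  # assumed Ct threshold for calling a target detected

def call_result(ct_value):
    """Interpret a Ct value (None means no amplification was observed)."""
    if ct_value is None or ct_value >= CT_CUTOFF:
        return "Not Detected"
    return "Detected"

def to_flat_file(rows):
    """rows: iterable of (sample_id, ct_value) -> delimited flat-file text."""
    return "\n".join(
        f"{sample_id}|SARSCOV2|{call_result(ct)}" for sample_id, ct in rows
    )
```

The point is how small the glue is once the formats on both sides are known; the hard part was characterizing those formats, not the code.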
And again, that kind of supports my contention that we were the best-prepared lab in the US; it was really having the IT support that helped our virology colleagues scale efficiently.

I won't spend too much time on it, but what this has also enabled us to do very quickly is to pool samples. If we think about that custom data flow and the ability to transform data between various formats, we can take a pooling protocol: instead of the liquid handler putting a single sample in a single well of the microwell plate, we put four samples in each well. We have a mapping that we can export from the liquid handler; you can look here, each line is a well, and you can see the sample IDs associated with it. You can take that mapping and generate what we need to go onto the PCR instrument, and in this case we just concatenate the individual sample IDs, to make it fully transparent that these are the four samples in this pool. We can load that into our thermocycler, and then, once you have those concatenated sample names, when you're ready to upload results to the LIS it's a very simple operation to break apart the concatenation and individually explode each pool out into its sample IDs. For detected pools, we don't send anything to the lab information system; we have logic built in that prevents us from releasing those results. The negative pools go to the lab information system, and we can produce an output from this page that the staff can print out, go retrieve the samples, and reflex those to the non-pooled platform. Using this overall solution, pooling is now kind of our default method for running samples for outpatients.
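The pool explode step lends itself to a very small sketch: because the pooled well's name is just the member sample IDs concatenated, splitting a pool back into individual results is a string operation. The delimiter, result strings, and release logic here are illustrative.

```python
# A sketch of the pool "explode" step. The pooled name carries its own
# membership (four sample IDs joined by a delimiter), so breaking a pool
# apart is a split. Release logic mirrors the talk: negative pools file to
# the LIS; detected pools are held and reflexed to non-pooled testing.
POOL_DELIM = "_"  # illustrative delimiter

def explode_pools(pool_results):
    """pool_results: {pooled_name: result} -> (releasable, held) lists."""
    release, hold = [], []
    for pooled_name, result in pool_results.items():
        samples = pooled_name.split(POOL_DELIM)
        if result == "Not Detected":
            # Every member of a negative pool gets an individual negative.
            release.extend((s, "Not Detected") for s in samples)
        else:
            # Detected pools: hold members for retrieval and reflex testing.
            hold.extend(samples)
    return release, hold
```

Encoding the membership directly in the sample name keeps the pool fully transparent at every downstream step, which is the design choice described above.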
So this ability to do these transformations has basically been able to take us up to quadruple our capacity using the existing laboratory-developed test methodology.

That's really thinking about the analytical side. I think another aspect of the response that we've been able to support is getting results to patients. In this day and age, not only are we communicating results from the laboratory to the provider, the doctor; we're also communicating that information to patients, typically through EHR patient portals. But a lot of the settings that I've described so far are not traditional UW Medicine healthcare settings, and so we have this challenge of how we deliver results to those patients. We want the patients to know those results even more quickly than our public health folks and our providers do, so that they either have peace of mind or can take action on those results. We also have a subset of healthcare employees whose data is not in an electronic health record. So we really want to be able to provide results to folks, and our lab information system does not have a built-in patient portal. Patient portals have some overhead, and phone calls for every single result are not practical when you're testing at scale. Our solution was to rapidly develop a web application with the right lab data flow to support result retrieval, and couple that with the physical workflow of collecting the samples. Our general solution was to have a pair of codes that travel along with the sample: we basically have a 16-digit code, and we produce both a 1D and a 2D (QR) barcode from it. The 1D barcode can go on a laboratory requisition, and the 2D barcode
encodes a link with the unique retrieval code embedded within it (each of these codes is unique), so patients can just scan the barcode and access their result. We also have some other physical details around how the labels are printed to help keep things straight, so that the right pair stays together. The general workflow is: you have your laboratory requisition code, and we put a unique field in the lab information system to support scanning in this code. Scanning 1D barcodes is part of our routine process for accessioning samples in the lab, so we can scan these right into that field, generate a file that contains both the results and the retrieval codes, and then, using our AWS stack, generate S3 objects for each of those results. The key is "something you have and something you know": the 2D barcode is the physical unique code, and the patient enters their date of birth into the application to retrieve their result.

This is what our lightweight portal looks like. Your retrieval code will immediately be auto-populated if you scan the QR code; then you enter your date of birth, and you get your result back. We can also provide a first level of guidance based off of the result, for different populations of folks to click into and make sure that they understand what it means and what their next steps are.

That solution is a little bit more involved than some of the lighter-weight app solutions I've described, and it requires a significant amount of support for printing and distribution. We have to control the printing and distribution because we need to ensure that each of the keys, each of those QR codes, is unique, so that there can really be only one source of truth for those codes.
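The "something you have and something you know" retrieval scheme can be sketched as follows, with a dict standing in for the per-result S3 objects. Function names and storage details are illustrative, not the production application.

```python
# A sketch of two-factor result retrieval: the 16-digit code (printed as
# 1D/2D barcodes) is the "something you have," and date of birth is the
# "something you know." An in-memory dict stands in for the S3 objects
# keyed by retrieval code; all names here are illustrative.
import secrets

def new_retrieval_code() -> str:
    """Generate a random 16-digit retrieval code."""
    return "".join(secrets.choice("0123456789") for _ in range(16))

STORE = {}  # stand-in for per-result S3 objects keyed by retrieval code

def publish_result(code, dob, result):
    """Store one result under its unique retrieval code."""
    STORE[code] = {"dob": dob, "result": result}

def retrieve_result(code, dob):
    """Return the result only when both the code and the DOB match."""
    record = STORE.get(code)
    if record and record["dob"] == dob:
        return record["result"]
    return None
```

Because each code maps to exactly one result object, uniqueness of the printed codes is the single invariant the printing-and-distribution controls have to protect.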
For those codes, we obviously end up fielding support calls from people who can't retrieve their results, and this also involves close partnership with our colleagues at the collection sites, to make sure they understand how the workflow works and how we ensure that every pair of QR codes goes to the right place. Looking at some of the data from this, there are some interesting trends. Some of this is a little older, but the median time from collection to first visit to the site was about six to seven hours, and in general the median time from when the result is actually available to when the patient has retrieved it is around two to three hours during the daytime, obviously longer overnight. We didn't have the data from before July 1st in easily accessible fashion, but since July 1st more than 135,000 results have been retrieved with the system, and we've generally had limited downtime or other issues with this deployment stack. So far I've talked a lot about these custom applications, which are mostly written in Python. An important aspect of a lot of this response is that it has taken expertise built up over time working with these solutions; honestly, in a lot of cases, I and the rest of the informatics team developed these solutions in a research context and then repurposed and rapidly deployed them for lab operations. I haven't talked much about R thus far, so in the last part of this talk I want to cover R and its role in supporting our operations. For some context about the volume of testing we've been doing: our typical test volume for the UW virology laboratory was about 5,000 tests per month
prior to COVID, and currently we're doing five to nine thousand tests per day. In the process, we've had to move to a 24/7 operation and hire a large number of staff on both the analytical and pre-analytical sides, and we've basically had to adopt an agile mindset from the start and make decisions on a day-to-day basis. It's harder to make those decisions if you don't understand what's fundamentally happening with the flow of samples through your lab, how you're delivering on turnaround times, and what kind of impact you're having on the response. R has really been critical to driving our dashboarding and visual reporting; despite a lot of Python expertise among our staff, whenever we're going to ask a question analytically, our go-to is R. For some context about what our infrastructure looks like: I've mentioned SunQuest before. It's not completely the center of our universe on the lab side, but it's an important data source for us, and this shows the flows of data into some of the reporting and analytics infrastructure we've developed over time. SunQuest is our lab information system; underlying SunQuest is a Caché database (from the company InterSystems), and we have these other data sources. We coordinate a series of daily extracts, a traditional ETL (extract, transform, load) workflow, into our database server, which runs Postgres. We've written some pretty lightweight command-line functions within our warehouse package that let us run imports, schedule things, set up the database schema, and so on. On the receiving end of all that data, we have some use of Tableau, but R is really our go-to for all things.
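A minimal sketch of what one of those daily extract loads can look like. This is illustrative only: SQLite stands in for our Postgres server, and the table and column names are invented, not our actual warehouse schema.

```python
import csv
import io
import sqlite3

def ensure_schema(conn):
    """Create the (invented) results table if it doesn't exist yet."""
    conn.execute(
        """CREATE TABLE IF NOT EXISTS results (
               accession TEXT PRIMARY KEY,
               collected_at TEXT,
               resulted_at TEXT,
               result TEXT)"""
    )

def load_extract(conn, extract_file):
    """Upsert one day's CSV extract so re-running an import is safe."""
    rows = [
        (r["accession"], r["collected_at"], r["resulted_at"], r["result"])
        for r in csv.DictReader(extract_file)
    ]
    conn.executemany(
        "INSERT OR REPLACE INTO results VALUES (?, ?, ?, ?)", rows
    )
    conn.commit()
    return len(rows)

conn = sqlite3.connect(":memory:")
ensure_schema(conn)
extract = io.StringIO(
    "accession,collected_at,resulted_at,result\n"
    "A1,2020-07-01T08:00,2020-07-01T20:00,NEGATIVE\n"
    "A2,2020-07-01T09:30,2020-07-02T01:15,POSITIVE\n"
)
n = load_extract(conn, extract)
assert n == 2
```

Making the load an upsert keyed on the accession number means a re-run of yesterday's extract, or an extract containing corrected results, just overwrites in place rather than duplicating rows.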
For reporting and visualization, over time we've built a workflow using Docker and GitLab, really not unlike what you can get in place with RStudio Connect. Some of our work on this predated RStudio Connect, or at least our awareness of that solution, but we've built something that provides multi-language support and containerizes individual reports. You supply a configuration that includes not only metadata, like the name of the report to post on our reporting website, but also who's allowed access, based on the permissioning in the groups available within other infrastructure we've built. This has the nice workflow of working within Git; we use a GitLab instance, so you create branches, prototype things, and when you merge to master, that triggers the automated machinery so your report on the website gets updated. We've found this is a perfect match with flexdashboard: you can quickly spin up a flexdashboard, deploy it within minutes, and then just set it and forget it. So far we have 20 COVID-19-specific reports (we have many more reports than that overall), and I've been involved in the development of most of those. In general, our turnaround time for requesters is one to two days, after figuring out the design, going back and forth, and publishing. I'm going to jump out of the presentation real fast; hopefully you can see my screen. I've taken some selections from our operations dashboard. This is not the full dashboard, but it has some of the information we review on a daily basis in terms of our operations. As you can see, we're very focused on providing great turnaround
times; we want to make sure we support that contact-tracing process. You can see our overall statistics: our median overall turnaround time for yesterday, the percentage of samples resulted within 24 hours, and the percentage within 48 hours, which we want to be as close to 100 percent as possible. Then these are our overall turnaround times by date. In addition to flexdashboard, another key component is extensive use of plotly: it makes it so easy to take a ggplot and turn it into an interactive plot, really just an extra line of code, so we use that extensively. This is showing the 5th percentile, the median (the green line), and the 95th percentile turnaround times. The overall picture is helpful, but what's more important for us is that we tier our samples by criticality, with medical input from clients and so forth. Tier 1 samples are the ones we try to deliver the best turnaround time for, and then we have other categories based on inpatient, outpatient, and other settings. We can also drill down to understand our ED and inpatient turnaround times. To be clear, these are not our rapid tests; the rapid tests, which we can get out the door much more quickly, are on-site at the hospital, and these are samples sent to the central virology laboratory. We review these daily, and when we see trends across them, we have more specific views of this data by location and client, so we'll look at those and make adjustments when someone's turnaround times fall outside their tier, operational decisions like that.
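The turnaround-time numbers on that dashboard reduce to simple summaries. Here's a sketch in Python of the kinds of metrics shown; the dashboard itself is R/flexdashboard with ggplot2 and plotly, and the timestamps below are made up.

```python
from datetime import datetime
from statistics import median, quantiles

# (collected, resulted) timestamp pairs -- invented example data
pairs = [
    ("2020-07-01T08:00", "2020-07-01T20:00"),
    ("2020-07-01T09:30", "2020-07-02T01:15"),
    ("2020-07-01T10:00", "2020-07-02T16:00"),
    ("2020-07-01T11:00", "2020-07-03T10:00"),  # a slow outlier
]

# turnaround time in hours for each sample
tat_hours = [
    (datetime.fromisoformat(done) - datetime.fromisoformat(coll))
    .total_seconds() / 3600
    for coll, done in pairs
]

median_tat = median(tat_hours)                                # hours
pct_24h = 100 * sum(t <= 24 for t in tat_hours) / len(tat_hours)
pct_48h = 100 * sum(t <= 48 for t in tat_hours) / len(tat_hours)
# 5th and 95th percentiles, as plotted alongside the median line
cuts = quantiles(tat_hours, n=100)
p5, p95 = cuts[4], cuts[94]
```

Grouping the same computation by collection date gives the by-date percentile bands, and grouping by tier or ordering location gives the drill-down views.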
There's also a selection of processing data here showing the flow of samples through different sections of our health system, and overall sample flux, so you can see how many we got in the door and how many were resulted, plus other detailed information about volumes in the different tiers. This has been really helpful for staffing, particularly early on and when we see changes. Each vertical set of panels is a day, the x-axis is the hour of the day, and we have different categories, so we can see when our larger shipments are coming and when that changes, and then adjust staffing based on the expected number of samples coming in per shift. There's more detailed drill-down information on samples by hour of day and FTE required, similar to this but with a bit more detail. We can also look at productivity among the staff. We've made some changes and set targets for improving our ability to log City of Seattle samples in particular; we were at about one order per minute, and this is daily, just calculating the average amount of time it takes to log in a sample. We have a goal of 1.2 orders per minute, which it looks like, with some recent changes, we're getting closer to hitting. From a public health standpoint, we can look at population trends, and luckily the great news for Washington state is that numbers are coming down across the board. This shows positivity rate by day for Washington state, and then we have positivity rate by county and setting, inpatient versus outpatient, across our different counties, and most counties here are looking really good in terms of getting rates down to lower percentages.
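That accessioning-productivity metric (orders logged per minute against the 1.2 target) boils down to a small calculation. A sketch with invented timestamps; the function name and data are illustrative, not our actual reporting code.

```python
from datetime import datetime

TARGET_ORDERS_PER_MIN = 1.2  # the real operational target mentioned above

def orders_per_minute(timestamps):
    """Average logging rate between the first and last order of a run."""
    times = sorted(datetime.fromisoformat(t) for t in timestamps)
    elapsed_min = (times[-1] - times[0]).total_seconds() / 60
    # (n - 1) inter-order intervals over the elapsed time
    return (len(times) - 1) / elapsed_min

logged = ["2020-08-01T09:00:00", "2020-08-01T09:01:00",
          "2020-08-01T09:02:30", "2020-08-01T09:03:00"]
rate = orders_per_minute(logged)  # 3 orders over 3 minutes -> 1.0/min
assert rate < TARGET_ORDERS_PER_MIN
```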
I've purposely obscured some information here, but one of the key aspects of maintaining a pooling workflow is making sure you don't have samples coming through with too high a rate of positivity. If you're pooling four samples to a well and the positivity rate is 25 percent, then chances are most of the wells on your plate are going to be positive, and you'll just have to do a bunch of extra work to reflex all of those wells to the individual test. So we use a day-by-day view of our past-week positivity percentage. I've taken out the locations here, but each row is a specific ordering location, and based on its positivity rate over the last week, we can include or exclude different locations from the pooling workflow in our sample flow. I'll also mention quickly that in addition to monitoring those more operational aspects, supply chain is a significant issue for all of our laboratory testing, so we have a flexdashboard driven by an inventory spreadsheet process, with staff monitoring inventory daily and making updates, and that drives a supply-chain dashboard with indicators for when we're starting to run low on a platform. We use that information to help us shift testing from one platform to another when needed. Lastly, I won't jump to the public dashboard, but the same flexdashboard setup drives it, and the link is here. It provides an overall summary of total testing numbers and positivity rates; the bar graphs on the website are interactive, so you can look at the numbers, and we provide yesterday's statistics. Based on all the things we've collectively put in place to increase our efficiency of testing, we've tested more than half a million patients to date.
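Stepping back briefly to the pooling workflow: that trade-off can be made concrete with standard pooled-testing (Dorfman) arithmetic. This is textbook math, not code from our pipeline.

```python
# With pool size k and positivity rate p, each sample costs 1/k of a pooled
# test, plus a reflexed individual test whenever its pool comes back positive.

def prob_pool_positive(p: float, k: int) -> float:
    """Probability a k-sample pool contains at least one positive."""
    return 1 - (1 - p) ** k

def expected_tests_per_sample(p: float, k: int) -> float:
    """Expected number of tests consumed per sample under pooling."""
    return 1 / k + prob_pool_positive(p, k)

# At low positivity, pools of 4 cut test consumption by roughly two-thirds...
low = expected_tests_per_sample(0.02, 4)    # ~0.33 tests per sample
# ...but at 25% positivity about 68% of pools reflex, and pooling
# barely beats testing everyone individually.
high = expected_tests_per_sample(0.25, 4)   # ~0.93 tests per sample
assert prob_pool_positive(0.25, 4) > 0.68
assert high > 0.9
```

That's exactly why we gate ordering locations on their trailing-week positivity before routing them into the pooling workflow.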
We've also detected more than 25,000 infections, and we're continuing to scale up our testing based on daily iteration that's driven by laboratory data, largely displayed with R. To wrap up, I'll share a few of the lessons we've learned within our group. We had recognized this before, but the pandemic has reinforced it: a lot of our conventional clinical information systems play an important role in supporting access to testing, but there are gaps, really in integration solutions, and being able to develop on open-source software has helped us do that data integration. We've been able to develop expertise in database infrastructure and AWS that has helped us fill some of these IT gaps and has really driven a lot of our operational decisions. For anyone working within lab medicine or adjacent to it, I want to emphasize that the combination of increasing laboratory capacity plus partnering with our colleagues outside the lab can help fuel increased access to testing. I need to acknowledge the three Ns here: Noah, Nick, and Nathan. Noah is the director of lab and pathology informatics and has been critical in helping our group move forward with these open-source software capabilities; Nick, our newest faculty member, has been critical on the AWS side; and Nathan, our data scientist and data engineer, stepped in to do application development as well and supports some of these solutions. I unfortunately didn't have a picture of Caitlyn; she has been
also critical to allowing us to scale our testing, and the other members of our lab medicine IT team are displayed there. I also want to call out the City of Seattle, in the top right corner. I can't say enough great things about the UW Medicine virology laboratory; we're just providing them support and helping them increase their efficiency, but they're really doing all the work to get testing out there. Finally, we typically talk about funding and support: in this case, not supported by grants, but supported by our great leadership in the department. Dr. Jeff Baird, our interim chair, has really said, as a department, I will support you doing whatever you can to expand access to testing. Dr. Ramsey has said the same thing: spend whatever you need to spend, bring in as many staff as you need to support this response. Governor Inslee has provided funding as well, and I'll call out specifically Steve Ballmer, who made a large donation to UW Medicine specifically for testing. I also want to thank all the donors; lots of people have donated funds and food and other goodies to UW virology, and their support is always appreciated. I don't know that we have a lot of time, but I'll be happy to take any questions. We have two minutes. Thank you so much for this amazing presentation. The first question that was posted, the most highly upvoted, is: specifically, what AWS services did you use, and were there any issues with HIPAA compliance? We'll have two minutes for all questions, so maybe we can answer this one quickly. Quickly, I'll say that we had a business associates agreement in place, and actually a lot of our time invested in AWS was focused on the security model and how we make sure we plug in our authentication and identity and access management systems in
a HIPAA-compliant way. Every step of the way we've also had to consult with both Amazon and our IT security team to make sure we're doing everything above board and don't have any compliance risks. There's a variety of AWS services we've used, like Redshift; off the top of my head I don't know the full list, but basically anything Amazon provides that automates a large chunk of the deployment, we try to utilize. Just one more question: how much will what you have learned be applied to other kinds of lab testing, such as paper and hand scanning, to modernize your workflows for other tests over the next five years? Our typical agility of response for other things is often much slower, but I have no doubt that for some of these nursing facility and employer testing types of situations, that work will continue, and you'll see things like the roster upload used to generate orders. I don't know how much that's in demand outside of the pandemic, but what we're doing in parallel to a lot of these solutions is deploying the more traditional production HL7 interfaces alongside them. I've kind of changed my opinion here: I used to think there was no way we could go live without an HL7 interface for a solution like this, that we had to build an HL7 interface for it. Based on what we've done during the pandemic, I think it's actually more valuable in some scenarios to get something out there quickly, even though you then have to have dedicated staffing to support it. That's one thing that can't be underrated: whenever you deploy these custom solutions, you have to have the right staffing level to support them if they're being used in production workflows. So my opinion has changed a little bit, and I'm planning to do more
staffing around custom software and custom development in the longer term than I was originally thinking prior to the pandemic. I can't say that all these solutions are going to be used to facilitate all laboratory testing, but certainly for our PCR workflows we're working on deploying the automated data transfers, and we'll spend more time rolling those solutions out across other areas of the lab once we're less focused on the COVID pandemic. Great, and that's all the time we have, unfortunately, so thanks again, Patrick, and thanks for a fantastic presentation. Thank you.