So I want to welcome Laura Wiley, who is an assistant professor at the University of Colorado. She's perhaps best known for her wildly popular Coursera course titled Introduction to Clinical Data Science, which has more than 11,000 people enrolled in it. Her lab develops methods for using electronic health record data for research. And today we're very glad to have her speak about ReviewR, an interface to review clinical records built with Shiny. So please welcome Laura with a warm round of applause emojis.

Thank you so much for that great introduction. So for everyone, David Mayer is in the chat. He is the actual lead developer of ReviewR, so he is the one that has made all of this work and done all the cool details. As you have questions, he's going to be operating in the chat to answer them as we go.

ReviewR is a passion project of mine over the past few years, focusing on how to do chart review. We all need to do chart review, right? We're building our prediction models, we need to build our gold-standard cohorts, we need to do some technical validation of our work. And as we all know, it's not the most fun thing. It's not the most glamorous thing. And unfortunately, there are a lot of structural challenges to doing chart review. One of my collaborators works at an institution that will not grant non-clinical researchers access to Epic or Cerner or pick your EHR. And that's obviously a problem. So he's having to do all of his validation using the source database in their EDW, which is clearly suboptimal. But even when you do have access to Epic, a lot of my clinical colleagues run into the problem that they only have a single monitor. So now you need to have Hyperspace up, you're trying to search through things, you're also trying to juggle your REDCap window or your Excel document, and you're constantly flipping back and forth between them.
And as a collaborator, it's frustrating, because my collaborator will just not do chart review until he happens to be in the radiology suite where he has lots of big monitors. So all of this suggests that what we really need is a workflow-enabled tool that lets us do chart review in a better, smoother way. And so, enter ReviewR. There is a better possibility.

So this is the core review interface within the ReviewR Shiny app. It has a standard view that is just a more data-driven view of the medical record. Along the top left panel, you have your subject ID with standard demographic information like their gender and birth date, and you'll notice whether any of the other REDCap reviewers have completed their review. We have a navigation panel in the upper right-hand corner where you can select a particular record or just move between records with previous and next. In the bottom left panel is our most important data, where we have all of the clinical information. As you think about the different types of data that you might have, each of those gets a tab. So you can look at the notes tab and take a look at all the data that you might have in your EDW. And finally, on the right-hand side, we've got a review interface where you can actually record your chart abstractions and store them for future research as you're doing your machine learning or other algorithm development.

And the thing that's really cool is that not only is this a nice single-page interface, you no longer have the problem of accidentally typing in the wrong patient or copying and pasting the wrong patient identifier, because ReviewR automatically matches, on the back end, the subject ID that you're seeing with the subject ID that's put into your REDCap instrument. So you will always have the right patient in your abstraction. Not only that, but we try to remove some of the repetitive data entry that you have to do.
A lot of times we're doing these very large chart review processes, we've got a number of reviewers, and we need to record who did the abstraction. ReviewR uses a configuration panel that allows you to enter who you are at the very beginning of your review and then automatically populates that through your REDCap instrument for every chart that you review, which is awesome. This configuration panel is also how ReviewR knows which field in your REDCap instrument is the appropriate place to put your patient identifier.

And the thing that's great is that this is all just using REDCap on the back end. I know you just came from a REDCap presentation, and you know I love REDCap, it's the best. So what ReviewR does is make use of the REDCap API. Each user gets their own API token and uses that to log in to ReviewR to record their chart abstractions. And this means you get all the benefits of REDCap: you get all the audit logging, you can track everything that's ever happened, and it's right there ready for you. So ReviewR writes the data through the API, and you can edit as you go and change answers as you need to. We even have nice error handling: if you do change your answer, it will validate that that was something you wanted to do, in case you had an accidental misclick.

Plus, we love all the new innovations in R, so there's been a huge effort with DT, leveraging datatables within ReviewR. What that means is that for every table, you can go through and filter or search at a column level. Or, in that upper right-hand corner, you'll see there is a global search box, and this global search applies across all of the different tabs. And if you know DT, you know that it has super cool support for regular expressions. So here I can find hypertension and hyperlipidemia all at once.
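A minimal sketch of the kind of DT table described here, not ReviewR's actual internals: with regex enabled in the global search, one pattern can match several conditions at once. The table contents below are made up for illustration.

```r
library(DT)

# Toy notes table standing in for a clinical data tab
notes <- data.frame(
  note_id   = 1:3,
  note_text = c("History of hypertension.",
                "Hyperlipidemia, on statin.",
                "No acute findings.")
)

# search = list(regex = TRUE) turns on regular expressions in the global
# search box, so "hypertension|hyperlipidemia" finds both terms at once;
# filter = "top" adds the per-column filters mentioned in the talk.
datatable(
  notes,
  filter  = "top",
  options = list(search = list(regex = TRUE, caseInsensitive = TRUE))
)
```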
And because of the configuration that we have, it will automatically filter the rows of the particular table that you're looking at to only those that have a match, and highlight all of the matches. Plus, if you use that global search box, there's a streamlined workflow. Often I'll be looking for one particular trait or phenotype of interest, and instead of trying to answer all 20 abstraction questions at once, it's a lot easier to do one at a time and go through each record. It's easy as pie to do that in ReviewR: when you use that global search and you just click the next button in the subject panel, it keeps the search. So you can quickly go through each step of your review protocol, tab through each individual patient, save it, and come back and do the next pass. Super straightforward.

Not only that, but we support a lot of clinical data models. We have flexible data model support and automated data model detection. Here's an example of the data models that we support out of the box: lots of different versions of the OMOP common data model, and for those of you that are looking for publicly available data sets, we also support the MIMIC-III data model. Now, many of you are probably thinking to yourselves, yeah, that's great, but I just have our source EDW, or I have a custom research table. How can ReviewR help me? We've got a built-in helper for that. It does take a little bit of programming expertise, but you'll be amazed at how close this gets you, even if you know only a very minor amount of R programming. We have a dev_add_data_model() function that takes a CSV that's just a list of your tables and the fields in those tables. It will interactively walk you through identifying which table has the list of all your patients, so something like the demographics table, and then selecting the field that tells it what the patient identifier is.
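As a hedged sketch of the custom data model workflow just described: the function name comes from the talk, but the exact signature here is hypothetical, so check the ReviewR developer documentation for the real interface. The CSV is nothing more than a table/field listing.

```r
# Hypothetical invocation of the developer helper described in the talk.
# my_custom_model.csv is a two-column listing of your schema, e.g.:
#
#   table,field
#   demographics,person_id
#   demographics,gender
#   notes,person_id
#   notes,note_text
#
# ReviewR then interactively asks (1) which table lists every patient
# (demographics here) and (2) which field is the patient identifier
# (person_id), and generates the supporting files for you.
ReviewR::dev_add_data_model("my_custom_model.csv")
```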
And that identification is what's critical to be able to link the patient identifier into the REDCap instrument. After you've identified those two data elements, it builds you three different files: it updates the database support .rda file, moves that CSV into the data-raw folder, and builds you a template .R file that is basically a direct representation of the data structure you gave it in the CSV. Now, for those of you that know things like OMOP, you really need to do some data joins to get it into something that looks fairly reasonable. That part isn't automatic, but you can edit that .R file manually as you need. And if you just want to show a straight representation of the tables you have, do these steps and you're good to go, already built in. We do this for all of our internal data marts on our campus, so we can customize it for each of our different projects.

And of course, data models are great, but databases are even more critical. Right now we support two primary databases, Google BigQuery and Postgres. We also have a demo SQLite database that lets you actually play around with ReviewR without connecting it to real clinical data. For things like Postgres, there's a standard connection panel where you put in your host name, your credentials, and which database you want to use. But where David put some really cool development into place is with BigQuery. We have that standard sign-in-with-Google link, and for those of you that are familiar with the tidyverse, you're familiar with using that credential panel to log into your Google account. And then, so cool, you can go in and choose among all the Google projects and data sets that you're affiliated with. So you can choose exactly which GCP project you want to use in ReviewR for this effort.

But wait, there's more. All of this is modular.
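The two kinds of connections described here can be sketched in plain R, outside of ReviewR: Postgres via DBI/RPostgres, and BigQuery via bigrquery's browser-based Google sign-in. The host, database, and project names below are placeholders, not real endpoints.

```r
library(DBI)

# Postgres: the same host/credentials/database triple as ReviewR's
# standard connection panel (all values here are hypothetical).
con_pg <- dbConnect(
  RPostgres::Postgres(),
  host     = "edw.example.org",
  dbname   = "clinical_data",
  user     = Sys.getenv("EDW_USER"),
  password = Sys.getenv("EDW_PASSWORD")
)

# BigQuery: bq_auth() opens the familiar tidyverse "Sign in with Google"
# flow; afterward you pick the GCP project and dataset you're affiliated with.
library(bigrquery)
bq_auth()
con_bq <- dbConnect(
  bigquery(),
  project = "my-gcp-project",   # placeholder project ID
  dataset = "omop_cdm"          # placeholder dataset
)
```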
So, do you have a Shiny app where you use GCP products and you want this connector? Congratulations: it's a plug-and-play, independent module that will give you the connection object you can plug into your own Shiny app. REDCap: also plug and play. In fact, every aspect of the ReviewR Shiny app is modularized. It all uses golem. So it's a nice, simple install from CRAN, just install.packages("ReviewR"). And David has spent an exorbitant amount of time doing really great documentation, both at the function layer, for external and internal functions, and as a lot of different explanation docs that actually walk you through how to use ReviewR and how to customize ReviewR. So if you wanted to, say, use MySQL instead of Postgres, all you have to do is build that connection object, and we have a whole process to walk you through how to do that. As you're thinking about deploying, you can do this in your own local R instance. Or, if you're an organization that has a data warehouse and your data warehouse team wants to enable this for their users, you can do server-based deployment.

With that, I want to thank the amazing development team. Like I said, David has really taken the lead on ReviewR and done an amazing job. I just started with some dinky little Shiny app that I built for teaching, and he is the one that found golem and figured out how to do all of the modularization. And Luke is a fantastic collaborator at Northwestern, a software developer who's really helped us understand some of the, let's just say, more challenging aspects of trying to make Shiny look and feel nice and new. And of course, I love our wonderful little hex logo for ReviewR, with our nod to REDCap, because REDCap is the best. And with that, I'd love to take any questions that you might have.

Wow, what an awesome last talk for this first day of our R/Medicine 2021. Thank you so much, Laura, for this presentation.
And so everybody, please give Laura a hand in the chat. And I'm going to pull up the Q&A. We have four questions. The most upvoted question is: how is the data getting from the EMR to ReviewR?

Yes, that's a great question. In this case, we have designed ReviewR to operate with your EDW, your local data warehouse. One of the things that we'd love to do someday, maybe, is try to build in an HL7 FHIR connection so that it can actually plug into Epic itself, or Cerner. But that's a ways off. Really, we're thinking about the many scenarios where you're trying to do research, you can't access Epic, and you're trying to come up with a better way to see these data. The other thing that then allows you to do is, let's say you're at an institution with a really advanced EDW and you have de-identified clinical notes. Now we can actually do non-human-subjects research, because it's de-identified, and do chart review on those data in an easier way that doesn't rely on Epic.

Another question: which R API package do you use, REDCapR or redcapAPI? Another great question that thankfully David answered, because I don't know. Apparently, the answer is both. And this reminds me of a lot of conversations that I remember us having as we were developing ReviewR, where each of them handles things slightly differently. So some of the information that we need to set up and track ReviewR comes from one package, and then the actual connection objects that we need to build and send data back and forth come from the other. Don't ask me which one, I don't know, but I am sure that David would be happy to answer those kinds of technical questions in more detail.

One more question: please elaborate on OMOP. How was the mapping done? Does ReviewR have the mapping? Great question. So at Colorado, OMOP is done by our data warehouse.
That tends to be the most common scenario for a lot of EDW teams, that the mapping comes from a different source than the researcher. Again, it really depends on each institution. At our institution, we actually have very robust documentation on the data mapping process, but you have to know who to ask to get it. God love all of the interpersonal socio-technical opportunities of EHR-based research. But in this case, again, ReviewR is not doing those mappings. It's really more a viewing tool on top of an EDW, without putting in opinionated views on what that data should look like. Our goal is to build the most flexible tool to help support you in doing your work more easily.

All right, it looks like you had also asked in the chat which APIs we're using: we're using the REDCap R API packages. With that, I would like to thank Laura again. Thanks for a fantastic presentation, and we'll close the session and move over to the closing remarks. Thank you.
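For reference, the two REDCap API packages mentioned in the Q&A can be sketched like this; the API URL and token below are placeholders, and this is a generic illustration of the packages, not ReviewR's internal code.

```r
# REDCapR: one-shot reads and writes through the REDCap API.
library(REDCapR)
abstraction <- data.frame(record_id = "1001", reviewer = "laura")
redcap_write(
  ds_to_write = abstraction,
  redcap_uri  = "https://redcap.example.org/api/",  # placeholder URL
  token       = Sys.getenv("REDCAP_TOKEN")
)

# redcapAPI: builds a persistent connection object that other calls reuse,
# the style of "connection object" discussed during development.
library(redcapAPI)
rcon <- redcapConnection(
  url   = "https://redcap.example.org/api/",        # placeholder URL
  token = Sys.getenv("REDCAP_TOKEN")
)
```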