Yes. Okay, great. Thanks so much. I'm excited to be here, and sorry about the technical issues. I'm going to talk a little bit today about REDCap, sort of the origin story of REDCap, but also something we've been working on over the last two or three years called Clinical Data Interoperability Services, which we're really excited about: basically allowing sharing of EHR data into REDCap in a fairly lightweight and frictionless way for research teams. Before I launch into that discussion, I want to give a little of the REDCap origin story for people who aren't familiar with it. It's a platform we created here at Vanderbilt back in 2004, really to help diverse researchers working on diverse clinical and translational research problems and studies do a better job of devising and implementing data collection plans for their studies. At the time, we were building everything one at a time for individual research projects. We knew, with the HIPAA Security Rule coming down around the need for audit trails and security and so forth, that we weren't going to be able to meet that need doing projects one at a time. So we thought a better approach would be to build a common platform built on metadata principles, so that we could create solutions and empower research teams to create and implement their own data management plans without needing programmers. It's a no-code, customizable data management platform. We started with case report forms, making sure that research coordinators could build and operate electronic forms with all the associated validation bells and whistles baked in. We also took the HIPAA Security Rule very seriously, so we made sure to put in lots and lots of audit trails and data logging features.
Then, at the advice of the researchers we were working with, we realized we also needed to make it really, really easy to get the data out into various statistical packages, including R, of course. That was sort of the origin of REDCap. It worked pretty well at Vanderbilt, so we started talking about it at national meetings, at one of which a would-be colleague from the University of Puerto Rico expressed interest, and we started sharing it with the department there. One of the nice things about Puerto Rico was that my key collaborator there was a biostatistician, and he taught me a lot about how to get the data out and customized for import into R. That was the origin story of REDCap, and also the origin story of the consortium. We wanted to create a scalable model to disseminate and share the platform at no cost to academic, non-profit, and government organizations, and as of this morning we're at about 6,000 institutions across 147 countries. The really cool thing about that, though, is not the number of dots on the map. It's that we've always designed REDCap around a philosophy that used systems get better, and the way that works in the consortium is the infinity diagram you see on this screen: we're always listening to our research community, not just at Vanderbilt but across the consortium. We listen, learn, and prioritize features; we then build and test those and disseminate them out to the larger REDCap consortium. Local experts decide which features and functions they want to deploy for their local research community. They deploy them, train the users, and support the users, and typically it's in those exchanges that the suggestions for new features come from, and then we just complete that virtuous cycle.
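As an aside, for readers who want a concrete picture of that kind of API-driven export, here is a minimal sketch in Python. It follows REDCap's documented record-export API (a POST with a project token and content/format parameters), but the URL and token shown are placeholders, not real values:

```python
# Minimal sketch of a REDCap API record export.
# The parameter names (token, content, format, type) follow REDCap's
# documented record-export API; the URL and token below are placeholders.
import json
import urllib.parse
import urllib.request


def build_export_payload(token: str) -> dict:
    """Build the form fields for a flat JSON record export."""
    return {
        "token": token,       # project-level API token issued by REDCap
        "content": "record",  # export records (vs. metadata, files, ...)
        "format": "json",     # 'csv' or 'xml' also work
        "type": "flat",       # one row per record
    }


def export_records(api_url: str, token: str) -> list:
    """POST the payload and parse the JSON response into Python dicts."""
    body = urllib.parse.urlencode(build_export_payload(token)).encode()
    with urllib.request.urlopen(urllib.request.Request(api_url, data=body)) as resp:
        return json.loads(resp.read().decode())


# Hypothetical usage (placeholder URL and token):
# records = export_records("https://redcap.example.edu/api/", "ABC123TOKEN")
```

From there, a statistician would typically hand the flat records straight to their analysis package, which is exactly the kind of export-for-R workflow described above.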
We release features and functions on a monthly basis to the REDCap consortium, so if you think about that trip around the infinity diagram happening once per month, we've been at it for about 16 years. A lot of researcher-informed innovation has happened over that time. Almost from the very beginning, though, this is a question we got both at Vanderbilt and across the consortium: can you help me get my data out of the electronic health record into my research database? That's really hard, and particularly 16 years ago it was really hard. You had to get into conversations about standards and about getting the data you want mapped the way you want it, and EHR vendors were exchanging information, when they did at all, in very bespoke, non-standard ways. But the advent of FHIR, a relatively new standard that is mandated by the ONC for use with electronic health record vendor systems, really about five or six years ago, allowed us to operate in a better way and to build this as a scalable solution that could be exported beyond just our Vanderbilt system and our Vanderbilt EHR. We published a paper on this last September, called "REDCap on FHIR: Clinical Data Interoperability Services," and that paper has the figure I'll walk through now. In the old days, to get data out of the EHR into your EDC system, it was basically: open up two browsers, do a chart review, and do some copy and paste. Before FHIR, we did build some API-driven platforms to allow folks like us, who had a research data warehouse at the institution, to build into the REDCap API and create some transfer functionality. But what we found was that we could do that at Vanderbilt, and maybe five or six or ten other institutions had both the data warehousing capability and the API programming teams to pull that off.
With FHIR, we're able to do it much more easily, and we don't have to rely on a bunch of internal infrastructure. I'll get to how you get it turned on with a particular EHR in just a moment. When we thought about the use cases, why people were asking us to get EHR data into REDCap, we focused first on what we call Clinical Data Pull. Maybe the best way to think about that is a prospective study where I've got case report forms sitting there with the expectation of structured data coming into them for a particular visit, for a particular patient, for a particular lab. That's a really good use case, but when we rolled that out at Vanderbilt and said, we've got this functionality coming in REDCap, about 50% of our folks said, yeah, that's good, but that's not what we want. When they described what they did want, we realized it was less a prospective clinical trial with associated case report forms and more like a data mart or registry type project. So eventually we built both of those into the features of this framework: we call one Clinical Data Pull and the other Clinical Data Mart. As I mentioned, we are using FHIR standards, and there are a lot of FHIR resources out there, but one of the things we found is that most of our requested data mappings and exchange structures come from only a handful of resources. Patient gives you things like demographics; Medication is self-explanatory; Observation is the resource for labs and vital signs; then conditions and allergies, as you see in the first and second columns. As we've evolved, we've also recently added support for FHIR R4, the next version of the standard, and some of the resources we've been able to pull in there: encounters, immunizations, more specifics around core characteristics with Observations, and adverse events.
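To make the "handful of resources" point concrete, here is a small sketch of what pulling a lab value out of a FHIR Observation looks like. The JSON shape (code.coding, valueQuantity) follows the FHIR standard; the sample values and the flattening function are illustrative, not CDIS's actual implementation:

```python
# Sketch: flattening a FHIR Observation (a lab result) into the kind of
# simple fields a case report form would want. The JSON structure follows
# the FHIR standard; the sample values are made up for illustration.
import json

sample_observation = json.loads("""
{
  "resourceType": "Observation",
  "status": "final",
  "code": {
    "coding": [{"system": "http://loinc.org", "code": "2085-9",
                "display": "HDL Cholesterol"}]
  },
  "valueQuantity": {"value": 52, "unit": "mg/dL"},
  "effectiveDateTime": "2021-07-14"
}
""")


def extract_lab(obs: dict) -> dict:
    """Pull out the LOINC code, value, unit, and timestamp."""
    coding = obs["code"]["coding"][0]
    return {
        "loinc": coding["code"],
        "name": coding.get("display", ""),
        "value": obs["valueQuantity"]["value"],
        "unit": obs["valueQuantity"]["unit"],
        "datetime": obs.get("effectiveDateTime"),
    }


print(extract_lab(sample_observation))
```

The design point is that a single generic resource like Observation, keyed by standard vocabularies such as LOINC, covers an enormous range of labs and vital signs, which is why a handful of resources goes so far.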
There are many more FHIR resources that can be mapped and utilized, and I'll speak to some of those in a moment, but the high point is that most of what researchers are doing in clinical and translational research, at least in the capacity we typically see with REDCap, is encapsulated here. At Vanderbilt (I'll talk about the consortium at large in just a moment), I checked this morning, and we have 132 projects using Clinical Data Pull, the prospective clinical trial or study integration services toolkit. For the registry use case, we've got 54. You can see on top the data values adjudicated or imported into both: 587,000 data points for those 132 projects in the prospective study model, and 54 million in the registry model. A couple of things really make me happy about this. Number one, we've built this, like everything else we try to build in REDCap, to be self-service. I have a very light support team that helps researchers do that mapping of fields from the EHR into REDCap; we do a small amount of training and then put them to work on their own projects. The fact that we have this many projects using that lightweight support model makes me really happy, because I know it's scalable. The other thing is the numbers themselves: even for just the one project type, 587,000 data points. Think about the work, the efficiency, and the accuracy gained from doing those real-time data pulls rather than the typical copy-and-paste method. If I look at the types of data coming in, this is for the last full month, July, at Vanderbilt: these are the quantities of data in the CDP model up top and the CDM model down below.
I should say that the patient count in the registry model (I can't hover here) is still somewhere in the 10,000 range, so it looks very small, but that's just by comparison. You can see that people use those mappings to collect different types of data for those types of projects, as one could imagine. We do share this out. We're an Epic institution at Vanderbilt, and we built this to be agnostic to the EHR, but it works easiest in Epic, because we worked with Epic to get it into their app gallery (I think that's what they call it, the app gallery or the app store). Being able to have a health IT individual at your institution click the right buttons to get things set up fairly easily is something that works well and is easy in Epic. We're not a Cerner site, and the Epic and Cerner app stores work a little differently, so the installation and setup with Cerner is a little harder than with Epic, but we do have five Cerner sites running. This is a slightly old slide, but we've got somewhere in the neighborhood of 35 to 40 Epic institutions that are live, and a growing list, probably still 50 to 60 institutions, that are on their way to being productive and live. So, just a few more slides and then we'll open up to questions. How do you start using CDIS in REDCap? The first step is to talk to your REDCap administrator, because it's something they're going to have to broker on the REDCap side, and it's a little different from most REDCap services in that you can't just turn it on and have it stand alone. You need some help from your health IT group. It's not a heavy lift, and again, if you're in the Epic world, it's really quite easy from a technical standpoint to download something from the app marketplace and apply it locally.
That said, the technical part is the easy part. The harder part is permissions and governance: getting the right folks to okay this at the institution level, and figuring out how you want to deploy it, around which projects and under what conditions. But technically it's pretty easy. Once those FHIR services are enabled for REDCap, it's a fairly simple process: as an administrator you put some secret tokens into the REDCap console, and then it's just a matter of enabling these services for an individual project. I've got one quick screen here that shows a little of the REDCap screens for starting the mapping process for an individual study. Again, it's not rocket science, but it does take some due diligence from a research coordinator, somebody who knows the data. You can see here I'm just searching for HDL and seeing what's available in my EHR at Vanderbilt. The next step would be to click this one and then map it to my REDCap field. As I mentioned earlier, there are lots and lots of resources. We typically go for the ones that are more scalable and more usable rather than trying to boil the ocean, but there's a lot more room for evolution as we go forward. I want to give some acknowledgements to a lot of folks on our team, not just the REDCap team but our Epic team as well as others within the institution. This has definitely been a team effort, and I want to thank you for the opportunity to speak. So at this point I can field any questions if there are any.

Thank you very much. I'm eagerly hoping that we can get that implemented at Mayo. Have you run into institutions that have said no, out of concerns about security or clogging up the data?

Yeah, we definitely see that. There are 35-ish that are live now and 50 or 60 that are perhaps on their way.
I'm really not out there trying to say to Mayo Clinic or anybody else that you need to do this, so that feedback probably wouldn't come to me, but we do see that it takes time to get it up and running. We do have office hours that we provide from REDCap, and we invite people to bring their security team or their health IT team or their REDCap team to those office hours. But I'll just leave it at that: it's always a process. Are there other questions?

Another question, then: where do you see REDCap going next?

The very next thing we're going to deploy, probably within the next couple of months, is some new functionality around document storage. We've always had document storage in REDCap, but I'm really excited about some of these new features that allow folders within folders within folders, and about keeping track of data that are not structured, being able to shoot in images or ECG waveforms, etc. With that available through the API, I think we'll watch really smart people doing really clever things with that functionality. On the CDIS work we're talking about here, we're just finishing a study, not yet published, with some great metrics around time efficiency and even data quality for this approach compared to a study using traditional methods. I think that will create some good buzz and some good ideas for the next generation of this work, including maybe risk-based monitoring, which I think is a really cool use case.

There's also a question about the process for getting a CDIS implementation at your institution.
Yeah, I'm sorry, I think I cut out a slide I meant to include that had a link for more information, including where those office hours are. The way I would really start, though, is just to go to your REDCap administrator. We've got all kinds of community resources for those folks, so let them know you're interested, and that might encourage them to move forward with the communications. We have office hours as well as lots of documentation artifacts we can share.

Well, there are a few more questions, but I think we need to move on at this point; maybe we can answer in the chat, since there are some questions under the Q&A section. We will do that. Thank you very much, I appreciate it. Thank you very much.