Okay. And thank you. It's my privilege to introduce our next speakers, Brennan Beal and Beth Devine from the University of Washington CHOICE Institute. They'll be talking about the READY tool, using Shiny as a tool for real-world evidence evaluation. Let me share my screen here. All right. Are we good? We're good. Okay. Thanks for the introduction. So I'll just go ahead and introduce myself really quickly again. My name is Brennan Beal. I'm a second-year health economics fellow at the University of Washington, in partnership with AbbVie, so my second year is in-house at AbbVie. And then Beth, do you want to introduce yourself? Hello, everyone. I'm Beth Devine. I'm a professor in the Comparative Health Outcomes, Policy, and Economics (CHOICE) Institute at the University of Washington, a health services researcher and health economist with expertise, for today, in real-world evidence and comparative effectiveness research using real-world data. Yeah. So I guess I'll go ahead and start with: what is real-world evidence? Probably everyone on the call is a little bit familiar with it, but I'll just go over it again in case. Real-world evidence, by definition, is evidence derived from real-world data. That's not incredibly helpful; it's kind of a definition within the definition. But real-world data is data from outside what we would consider a traditional randomized controlled trial, so data from EHRs or claims or chart reviews, things like that. And why it matters and why it's so important to medicine: one, because we just have access to a ton of data currently, and real-world evidence is a good way to capture aspects of interventions. In my case, as a health economist, pharmacoeconomics is my focus, and real-world evidence helps capture aspects of medications that aren't captured in randomized controlled trials. Two really big parts of that are underrepresented peoples, right?
So we talked a little bit about underrepresented populations in medicine today already, and real-world evidence is one of the tools we can use to combat that. And then other things like adherence patterns, right? In randomized controlled trials it's kind of hard to get at that. And then it increases the generalizability of intervention comparisons. So it could be that drug A is better than drug B in a randomized controlled setting, but maybe drug B has other aspects in a real-world setting that are more amenable to patient outcomes. And ultimately it enables end users to harness real-world evidence to make HTA adoption decisions, and that's what we're here for today. And so the problem we come across is that real-world evidence is fairly complicated, just because there are a ton of different study designs and a ton of different ways you can go about generating real-world evidence. That's not such a problem for researchers, but ultimately we want that real-world evidence to inform adoption decisions, right? So our main focus with our tool, which I'll introduce in a bit, was for payers to be able to access real-world evidence and make a quick decision (not a super quick decision, but a well-balanced, productive decision) for medication adoption. If you just look through all the different guidelines, we've got the GRADE handbook, AMSTAR, ROBINS-I, RoB 2, all these different aspects of real-world evidence generation. The list goes on. So there's a saying: when you have a hammer, everything looks like a nail. Beth had been working on this problem for quite some time, and she brought it up to me. She had developed an initial version of the tool and kind of an initial rollout, but we wanted to do something more structured, something that would take the process all the way from evidence identification through recommendation.
So we wanted to create an online platform that provides a structured framework that can walk users from real-world evidence identification all the way through the process of making an evidence-based technology adoption decision. And we chose Shiny, because that's kind of my expertise. I've been using R for four to five years now, and actually I've only been using Shiny for about a year and a half, but it seemed like a fun problem and something we were well equipped to do. So I'm going to do a little bit of a live demo, so everyone can cross your fingers. This is our tool, and this is just the homepage. We actually have a nice little login where you can see the progress of all the other studies you're doing. I'll get back to this demo later, but first I just want to go over the tool itself. So the tool is broken into... well, Beth, can you confirm that we can see this? Yes. So the tool is broken down into phases, and there are four main phases. The first is identifying real-world evidence. The second is reviewing it and grading it, and that's kind of the meat of the tool. The third is summarizing your graded evidence. And then finally, making an evidence-based recommendation. So I'm just going to walk through a quick example with you all live. The first part is identifying real-world evidence, and this is really just going over the PICOTS criteria, right? Population, intervention, comparator, and so on. Shiny was actually well equipped to do this, and for some of these fields we can use popovers to explain that, you know, this input isn't going directly into the search string, but it serves the end goal of this page. So let's say we're focused on the diabetes population, and let's do pioglitazone versus rosiglitazone, and let's say we're interested in HbA1c. We can then look at a time frame; let's just do 20 years, which we set as the max. And then we'll do just real-world evidence.
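A phase 1 form along these lines is straightforward to sketch in Shiny. This is a minimal illustration, not the actual tool's code: the input IDs, labels, and design choices below are all assumptions.

```r
library(shiny)

# Hypothetical sketch of a PICOTS intake form like the one in the demo.
# Every inputId and choice here is illustrative.
picots_ui <- fluidPage(
  titlePanel("Phase 1: Identify real-world evidence"),
  textInput("population",   "Population (P)",   placeholder = "e.g., type 2 diabetes"),
  textInput("intervention", "Intervention (I)", placeholder = "e.g., pioglitazone"),
  textInput("comparator",   "Comparator (C)",   placeholder = "e.g., rosiglitazone"),
  textInput("outcome",      "Outcome (O)",      placeholder = "e.g., HbA1c"),
  sliderInput("years", "Timing: look-back window (years)",
              min = 1, max = 20, value = 10),
  checkboxGroupInput("designs", "Real-world study designs to include",
                     choices = c("Cohort", "Case-control", "Pragmatic trial",
                                 "Quasi-experimental", "Systematic review")),
  actionButton("submit", "Create search string")
)
```

A server function would then read `input$population`, `input$intervention`, and so on when `input$submit` fires; the popovers mentioned in the talk could be layered on with a package such as bslib.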
We'll say we're interested in a couple of these; these things aren't going directly into the search string. And then finally, like I mentioned, there are so many different types of real-world evidence we could look for. In this case we can just select all of them, and then we'll submit that form. When you submit it, it creates the search string and takes you to phase two. So you can see phase two up here, and we create a search string. We can click on that, and there we have it: it takes you to PubMed, which is really nice. So if nothing else, at the end of the day we've created a search string tool. Here you can identify all these studies, so we can see that we have a type 2 diabetes systematic review, pioglitazone versus rosiglitazone in type 2 diabetes, cost-effectiveness of pioglitazone plus metformin. You can see that this would be a good place to identify your evidence and find it. And once you've gathered all your evidence and you're ready to go to step two, you can just say, great, thank you. Then this is the reviewing and grading of the evidence, right? This is the hardest part, I think, from an end-user standpoint. Researchers don't have trouble with this, but a lot of times end users do. So let's say we identified a couple of studies. The goal of this page for us was just to say, okay, if you have a pragmatic controlled trial or, let's say, a quasi-experimental study, you should be using the ROBINS-I tool. We can link out to that tool if you prefer, but you don't have to, because we've got it all here. And so you can go through and answer these questions based on the ROBINS-I criteria. The goal of this is just to grade the evidence, to say: do we think the evidence was biased or not?
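The phase 1 to phase 2 hand-off described above can be sketched as a small helper that assembles a PubMed query URL from the PICOTS terms. The talk doesn't show the tool's actual query logic, so the Boolean structure and the function name here are assumptions.

```r
# Hypothetical helper: turn PICOTS terms into a clickable PubMed search URL.
# The real tool's search-string logic isn't shown in the talk.
build_pubmed_url <- function(population, intervention, comparator, outcome) {
  # A simple AND/OR structure; a production tool would add MeSH terms,
  # date limits, and publication-type filters.
  query <- sprintf('("%s") AND ("%s" OR "%s") AND ("%s")',
                   population, intervention, comparator, outcome)
  paste0("https://pubmed.ncbi.nlm.nih.gov/?term=",
         utils::URLencode(query, reserved = TRUE))
}

url <- build_pubmed_url("type 2 diabetes", "pioglitazone",
                        "rosiglitazone", "HbA1c")
```

Opening `url` in a browser would run the search, which matches the demo's "click the string, land on PubMed" behavior.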
And then at the end of all of these, the end goal is to say, okay, now that you've gone through all these questions based on the kind of study you were grading (for a systematic review, we give you the AMSTAR criteria), the conclusion is just: all right, this is unclear, or it's high risk of bias, moderate risk of bias, or low risk of bias. And I've actually taken the liberty of filling one of these out for us, so we can go back to the account. Let's go to phase three. Well, we'll stay on phase three. Okay, so once you've done that, you can submit all your answers and grade all of it. In this case we had multiple outcomes for this particular study, and it takes us to phase three. This is just summarizing your literature. Here we could take advantage of the plotly package, which I really like; it's really useful here, and it's fun to make this interactive for end users. So for example, if I'm only interested in my primary outcome, you can toggle the secondary off, toggle it back on, toggle off the primary. This is a really good tool if you've found a ton of literature and you want a quick overview: okay, what is the literature telling me, if I could summarize it? And so here we can say, all right, for our secondary outcome, it was half and half low and moderate risk of bias, but we had none unclear and none high risk. And then we use GRADEpro, so we walk users through the GRADEpro criteria: basically, based on all of the grading you've done already, what's your overall risk of bias? And here, you know, you have multiple different things, and Shiny kind of surprised me, because at the beginning, you know, we could probably have done this in any language.
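The outcome-toggling chart described above takes only a few lines of plotly. The counts below are made up to mirror the half-low, half-moderate example from the demo; the column names are illustrative, not the tool's actual data model.

```r
library(plotly)

# Illustrative risk-of-bias grades per outcome; in the tool these counts
# would come from the user's phase 2 answers.
rob <- data.frame(
  outcome = rep(c("Primary (HbA1c)", "Secondary"), each = 4),
  risk    = factor(rep(c("Low", "Moderate", "High", "Unclear"), 2),
                   levels = c("Low", "Moderate", "High", "Unclear")),
  n       = c(3, 1, 0, 0, 2, 2, 0, 0)
)

# One bar trace per outcome: clicking a legend entry toggles that outcome
# on and off, which is the interaction shown in the demo.
fig <- plot_ly(rob, x = ~risk, y = ~n, color = ~outcome, type = "bar")
```

Rendering `fig` in a Shiny app is then just a matter of `plotly::renderPlotly()` and `plotlyOutput()`.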
But the synthesis of Shiny with all the different packages made this really, really nice and really effective. So we can go through and just assess: are the studies consistent or inconsistent for our HbA1c outcome? What's the overall risk of bias? Are the results precise? Is there publication bias? And then we have one for mortality; in this case, we had two outcomes of interest. Then finally, with the gt package, which is relatively new (within the last year, I think), we can create a really nice table of all of these responses from the end user. So here we're seeing multiple packages at play all in one pretty quick screen: we have plotly, which again I love, and then we have the gt tables, which are very, very nice. And then this is kind of the second-to-last phase. Once you've done all this (identified the literature, rated each individual study, and summarized all of it), Shiny can take us into making a recommendation. I've already filled all this out for this one, but what it looks like is this. This is again designed for payers, but anyone in a medical profession could use it; I'm thinking maybe for formulary decision making. I know there are some people here from hospitals across the nation and even internationally, ex-US. So: was there any literature evidence available? Probably, if there weren't, you wouldn't be here, so yes. And then, is it applicable to answer your research question? Yes. And then was it sufficient, right? Not only was it applicable, was it sufficient? Then this is just general guidance on making a recommendation. There are a lot of questions we have to think about as end users, such as risk-benefit: is adoption feasible for, you know, a payer or a given hospital? Does this make sense?
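The gt summary table mentioned above can be built along these lines. This is a sketch: the columns and the wording of the GRADE responses are assumptions, not the tool's actual fields.

```r
library(gt)

# Hypothetical GRADE responses for the two outcomes from the demo.
grade_summary <- data.frame(
  Outcome            = c("HbA1c", "Mortality"),
  Consistency        = c("Consistent", "Inconsistent"),
  `Risk of bias`     = c("Low", "Moderate"),
  Precision          = c("Precise", "Imprecise"),
  `Publication bias` = c("Undetected", "Suspected"),
  check.names = FALSE
)

# gt renders the data frame as a polished HTML table.
tbl <- gt(grade_summary) |>
  tab_header(title = "GRADE summary of the graded evidence")
```

In a Shiny app this would be wired up with `gt::render_gt()` and `gt::gt_output()`, which is presumably how the tool carries the table forward into the final summary screen.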
And then: can I afford the intervention? And then, and this is a big one, especially for real-world evidence: can I equitably deliver the intervention across my population? Ideally, as a payer or a hospital system, that's your key criterion, right? Can I use this real-world evidence, and does it inform decision making for my specific population of interest? And then we provide a couple of different options for recommendation making: things like performance-based risk sharing, coverage with guidelines, coverage with prior authorization, and so on, and you can even specify other. Once you've done all this, phase five is really not a phase at all; it's just a summary. This is very much still in production; we're working on it, so any ideas you have would be welcome. The next phase takes you to a summary of everything you've done. Shiny has just been really incredible, and I've been really impressed; honestly, we've taken it far beyond what I thought it would be capable of. So we have our PICOTS criteria up here: population, intervention, rosiglitazone. Then we can go into the summary response. We still have the plotly chart right here, and we have the gt table, like I said, so this is passed forward, and then we have the evidence-based recommendation. We decided in this case that coverage with prior authorization made sense. So that's our tool, and that's how we used Shiny. At any step along the way you can save your progress, of course, and it'll go into your account for that specific study; you can see I've saved it today. So that's the tool in a nutshell. I don't know how we're doing on time. Beth, do you have an idea? You're doing great. Yeah. Well, Beth, do you have anything to add? I know that was a quick overview. Oh, that was a really nice one, Brennan. Thank you so much.
I would just reiterate that the motivation for this came from a group of pharmaceutical companies that came to us and said our health plans just don't know what to do with real-world evidence, because it is so complex, as Brennan suggested. So the goal was to create a guideline for users who are evaluating evidence: not researchers who are conducting the studies, but users in health plans or health systems who are making health technology adoption decisions, that is, reimbursement decisions related to pharmaceuticals or devices or diagnostics or any other intervention of interest. The purpose is to do the search using structured and well-accepted criteria, the PICOTS, and then to bring to bear the different quality rating tools so that users can get a handle on the evidence at hand. And then, as Brennan showed at the very end, a checklist, if you will, of things to consider in making that adoption decision. So I really appreciate Brennan turning this into an R Shiny product for us, which has greatly improved it. We actually had it in REDCap before. Yes, yeah, it looks a lot better now. So we have a little time for questions. A lot of folks in the chat are interested in using this immediately. What kinds of use cases do you expect are going to be common? So I can take that, and, Beth, you can add any ideas that you have. The use cases in general are mostly for payers. I think we had payers in mind with this tool, especially as they're considering, you know, should we think about this with prior authorization, or should we go about accepting this evidence, and is it applicable for our patient population? I think that was the end goal in mind, but I really think the applications could be pretty endless, Beth. Yeah, so formulary decision makers, you know, definitely, because that's the world we live in, but certainly it can be adopted and adapted for other uses as well.
So we're working primarily in pharmaceuticals, but as I said, the tool is not limited to the world of pharmaceuticals. I think it can probably go from here. Yeah, something like the NICE criteria and making decisions. We have a question from Nish Jane asking: could you edit this to make something specific for PRISMA-based systematic reviews? We could. Right now, you can include systematic reviews as a type of study design in your search, and the AMSTAR criteria for rating the quality of systematic reviews are already embedded; Brennan quickly went through that. So you can go through the AMSTAR tool and rate systematic reviews. Okay, so it does address that. Yeah. I think you guys did such a great job on time; we are in good shape. And I'm going to ask Beth to move us on to the next room. Thank you very much. Thanks everybody. Bye.