So next, Alana Ram is gonna talk about outcomes reporting across the network. Now, you may have noticed that Alana was moving a little gingerly; she has significant bruising from a fall. I was assured that it was not related to trying to harmonize outcomes. Just running too fast. So yes, and I can trip over a flat surface, so it's okay. So I'm gonna talk about the eMERGE experience. We are in the process of trying to harmonize our outcomes across our multiple projects, specifically because we have both chart review outcomes and patient reported outcomes. And I will say, most of what we learned in our work group is that you can take everything Jessica just reported and apply it directly to the patient reported outcomes group for eMERGE, which we did, because we also used a work group to harmonize our patient reported outcome measures across all of our different projects. And for the clinical outcomes that we could measure by chart review, because as you saw, we're all at least returning results on a set group of conditions, we had different sites and experts create the outcomes forms for those different conditions. So different sites were assigned, or volunteered, to create the outcomes form that we would then be using for the chart abstraction. And for the patient reported outcomes, the working group had to figure out, very similar to Cesar, that people were collecting outcomes at different times. Based on how their projects were designed, sites had planned to do a baseline or not, to do an immediate post-disclosure or one month or not. Pretty much everybody is doing a six month, at least in one of their cohorts. But they're also delivering these differently: maybe electronic, maybe by phone, maybe by mail.
So we do have some differences across sites that we needed to look at and work with. That was another of our challenges in trying to create some harmonization here and see if we could find some common measures that could be used. So, some of the benefits and challenges of using a work group to do this for our patient reported outcomes: like I said, just take everything that Jessica just presented; we had the same problems and the same challenges. But there was also a benefit, which is that each site could create and customize the measures that it needed. Some of our sites are really focused on family communication, so they could add measures of family communication, and the sites that weren't looking at that didn't have to include those measures. But we could at least have a core set of measures. For baseline, I think it's just one question that's common. Is that right, Ingrid, one question? That's common across all. Of course the demographics are, but similar to what Jessica just presented, sites may not be using the same variable name, or may not have used the exact same question, to collect them. It was also a very long process. If you wanna know more about that, talk to Ingrid; she can tell you all about how long of a process it was, but it was pretty much the same issues that Cesar experienced. For the outcomes forms, it's really nice that the experts at the sites could create these forms for the chart review and abstraction. What we realized, though, is that because different people were creating these different forms, there was no consistency even in how questions were asked, say for mammography, across different outcomes forms, so for the same test. Some outcomes were just not what we expected as we started getting into the forms, so we needed to change how we asked the question or collected that variable.
And some variables were not clear to the abstractors. An example of that right out of the box: as we started testing the BRCA outcomes form, it was for women only, and at Geisinger, 10 of our results were in men. So we needed a form to collect the outcomes for what happens after you return a result to a male. Other examples: we knew that some people might already know that they had this condition or this genetic change before the return of results process. What we found pretty quickly out of the gate, again, is that it's not a clear-cut yes or no. We also had people who knew they had this condition in the family, were at 50% risk (big shocker), and had never done the cascade testing for it. A yes/no doesn't quite capture that, so we had to work on it. We also have the problem that the outcomes form wants specific information on specific types of diagnoses, and what we have in charts is "family member has high cholesterol," not familial hypercholesterolemia, or just "heart disease" rather than a specific type of heart disease. So how do we work with that? Also, there are issues where the test may be ordered but not completed, or they had a visit but we can't tell if it was related to return of results or not. Maybe the colonoscopy was ordered in January, they got their result in March, and they had the colonoscopy in June, but we don't know if they scheduled it in January and just couldn't get it done till June, or if they scheduled it because of the return of results. Things like that are very difficult, so we had to work on guidance for how we collect that. Also, we asked what cancer diagnoses appeared after the return of results, but we were also finding pre-cancerous lesions, things like that, and we had not included those on the outcomes forms.
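The "yes/no doesn't quite capture that" problem can be sketched as a richer categorical coding. This is a hypothetical illustration in Python; the category names, field values, and the `recode_yes_no` helper are all invented here, not the actual eMERGE form values:

```python
from enum import Enum

class PriorAwareness(Enum):
    """Hypothetical categories replacing a plain yes/no for
    'did the participant know about this condition before results return?'"""
    NO = "no_prior_knowledge"
    YES_SELF_TESTED = "knew_own_diagnosis"             # already had their own testing
    FAMILY_KNOWN_UNTESTED = "family_history_untested"  # knew of the familial condition,
                                                       # at 50% risk, never did cascade testing
    UNKNOWN = "cannot_determine_from_chart"

def recode_yes_no(old_value, had_cascade_testing):
    """Map a legacy yes/no answer onto the richer categories.

    A 'yes' without cascade testing falls into the family-history bucket
    that a plain yes/no form could not capture.
    """
    if old_value == "no":
        return PriorAwareness.NO
    if old_value == "yes":
        return (PriorAwareness.YES_SELF_TESTED if had_cascade_testing
                else PriorAwareness.FAMILY_KNOWN_UNTESTED)
    return PriorAwareness.UNKNOWN
```

The point is only that the abstraction form needs a category for each situation the chart can actually present, not that these exact labels were used.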
And family history and those sorts of outcomes we found really challenging, because again, the chart says things like "three out of six relatives were told," at least at Geisinger; that's one of the things we're seeing a lot in our charts, whereas we had requested specific outcomes when we created these forms. So, some lessons learned from all of this. For the outcomes forms: if you've got multiple groups creating these forms, agree on some standardization, like how dates are going to be collected and formatted, and what time frames you need; and if you need different time frames for different outcomes, put that in your guidance documents. Create those guidance documents at the same time that you're creating the forms. We are in the process of creating those guidance documents now, after the forms were created about a year ago or so. Similar to what we did with some of the phenotype querying, instead of using one site, maybe you use two: one site creates the form, and the second site has to test it with an abstractor, a genetic counselor, or another clinician who may not be doing the abstraction, so that you can get the information that's needed for those guidance documents and abstractor instructions. And then include a process for collecting context. Without some sort of contextual information on those outcomes forms, we're losing a lot of really key information: maybe the patient was too young, or maybe the chart actually does document why a test was not ordered, and we need a way to collect that. Or it talks about why they didn't talk to family members, things like that. For the patient reported outcomes, similar lessons to what Jessica presented. Determine standards ahead of time if there's anything you can determine; otherwise, create your crosswalks. It's just gonna take time; you have to create these crosswalks.
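A crosswalk of the kind described is, at its simplest, a per-site rename table. This is a minimal sketch in Python; the site names and variable names are invented for illustration, not the actual eMERGE variables:

```python
# Hypothetical crosswalk: each site's local variable name -> one harmonized name.
CROSSWALK = {
    "site_a": {"pt_age": "age_years", "fam_comm1": "family_communication_1"},
    "site_b": {"age": "age_years", "famtalk_q1": "family_communication_1"},
}

def harmonize(site, record):
    """Rename one site's survey record into the shared variable names.

    Variables with no crosswalk entry are kept under a site-prefixed name,
    so site-specific measures (e.g. extra family-communication items)
    are preserved rather than dropped.
    """
    mapping = CROSSWALK[site]
    out = {}
    for var, value in record.items():
        out[mapping.get(var, f"{site}__{var}")] = value
    return out
```

The table itself is the slow part the speaker mentions: every site's codebook has to be read and mapped by hand before a function like this is of any use.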
It's very important to track what each site is doing, because what you plan and what happens in reality are very different. As long as we can track that, we can use the data. And then we also need to standardize the data entry. In our case, the data may be collected in different ways. When a patient is doing it by electronic administration with skip patterns, if they answer "no" to a question, they don't see the next five questions. On paper, patients sometimes answer those next five questions anyway; they don't follow the skip pattern correctly. So what do you do with that data? You need to create rules. And again, they write things on the paper, or if you're doing it by phone, they tell you things that are important contextual information for understanding this data. Because again, this is a whole different population from the folks seeking genetic testing that we've usually been working with. And I think what we're finding is that they are different, and the more contextual information we can collect, the better. And again, thanks to the entire work group, because this is a really exciting thing that we're learning, and it's always a new experience. Thank you.
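One possible shape for the skip-pattern rules mentioned above: when the gate question was answered "no," blank out any follow-up answers a paper respondent filled in anyway, and log what was removed. A hypothetical sketch in Python; the question IDs and the blank-and-log policy are assumptions for illustration, not the group's actual rule:

```python
# Hypothetical skip rule: if the gate question is "no", the dependent
# questions should have no answers. Question IDs are invented here.
SKIP_RULES = {
    "q10_shared_results": ["q11", "q12", "q13", "q14", "q15"],
}

def apply_skip_rules(responses):
    """Return a cleaned copy of the responses plus a log of answers
    removed because the respondent ignored the skip pattern on paper."""
    cleaned = dict(responses)
    removed = []
    for gate, dependents in SKIP_RULES.items():
        if cleaned.get(gate) == "no":
            for q in dependents:
                if cleaned.get(q) is not None:
                    removed.append(q)
                    cleaned[q] = None
    return cleaned, removed
```

Keeping the removal log, rather than silently discarding the answers, matches the speaker's point that out-of-pattern responses can carry contextual information worth reviewing.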