So we have about 45 minutes or so, I think, for discussion. And while you're getting poised with your name cards, I just want to ask one question of the panel to start. And that is, through these various implementation projects that we've heard about, how successfully have you been able to engage diverse populations? Are we effectively catering to fairly well-to-do, fairly homogeneous groups, and enhancing health disparities? Or are we able to reach out to diverse groups?

Well, at Geisinger, diversity consists of Italians or Germans. So that's one type of diversity, I guess. We actually have looked at our MyCode population demographically in comparison to our general patient population. They're mostly similar. The MyCode population is slightly older and slightly more female. And that would make some sense, because as we look at the biggest factor for recruitment into MyCode, it's how many times you come to the clinic: basically, the opportunity to bump into a consenter. But other than that, there are no issues related to other demographic differences in terms of socioeconomic status. I guess we're officially designated as a densely rural population, a brilliant government oxymoron. So we do have the ability to reach out to even very small communities. And we've actually recruited members from just about every county in the entire state of Pennsylvania. And now, of course, we're in New Jersey. The New Jersey population is a different population. It's much more transient. There's much more racial and ethnic diversity. And the health system is much less... Yes, there's a few of those, too. And Geisinger has been in central Pennsylvania for 100 years, whereas the AtlantiCare system, which is now part of Geisinger, has been there a much shorter time. So we were concerned that we would not see the same uptake. But we have seen essentially identical consent rates to the project at AtlantiCare.
In fact, the biggest issue for them now is, when are you going to do the next batch of sequencing? Because we've got a bunch of our samples in that next run, and our folks really want to start to get results back. So at least in our system, it's been working well.

Peter, do you want to?

So although we're on the north side of Chicago, our catchment ranges up to the Wisconsin border. So Lake County is very different from Cook County in terms of Chicago demographics. And even just talking to our primary care physicians at the different sites, there are different issues at each. And so what we're looking at is part of several projects. But again, this is where we're cutting across different lines in our health care system, looking at some of our public health initiatives and how these intersect with the application of genomics to address these issues at truly a population health level as well. But there's a different set of barriers there.

Lincoln, maybe you can comment about your population or not. Oh, does he? I'm realizing how disadvantaged this side of the room is because you can't really see. Okay.

I'll say a couple words. Thank you. I'm actually trying to log into the system to get the data to answer the question in real time. But we basically are addressing that in two ways. One is just by virtue of the reach we have, we're obviously getting the patients that are out there, and a diversity of patients. And maybe in 10 minutes, I can tell you what that means. But the other way is we actually have established a series of research collaborations where people have both retrospective and prospective populations that we've targeted, to try to build up those data sets for various ethnic backgrounds. And so those are the two ways we're approaching it.

Great, thank you. I think, Mark, you may have had your thing up first. Yeah, but it was a question for Lincoln, so put me on hold. Okay. And Lori, did you have a question? Use your mic, please.
This question can be answered by anyone on the panel, but it's specifically for Peter, Lincoln, and Mark. You know, we're talking tomorrow about when do we have enough evidence. What evidence is required for implementation, and when do we have enough? So given that you gave some really clear examples of implementation process outcomes, like speed of return of results or returned results, what would you say to a new health plan coming to you and saying, should we implement genomic medicine? And if so, in which populations? Can you extrapolate from success in oncology to inherited disease risk or cardiovascular disease? Because as I was listening to you, I heard some key constructs; a lot of you talked about your experience, and I was noting some themes, but I'd be interested in hearing your perspective.

So just to clarify, when you say a health plan, there are a lot of differences. You mean a health plan that's sponsored by the organization that's actually running this? Versus in our health system, we have a wide variety of payers. It depends on what kind of market you're in. So in Chicago, we're still a fee-for-service-dominated market, because Blue Cross and Blue Shield have, I think, don't quote me on this, but maybe a 70% share of the market. So these are somewhat questions that have to be addressed at an individual institution level. Then also part of it is, well, what outcome are you looking for? Are you looking for a dollar saved or a patient care outcome? Ideally a combination; you'd wanna show both. So that was a long-winded way of saying that, and part of it too is the positioning of your health system in the marketplace as well. So there are a lot of different factors that play a role, and I think identifying success is gonna be different for each institution depending on the microenvironment of their culture.
So for us, we've been focusing on, well, we know that there are patients in our system that we haven't identified that could benefit, using things that are pretty well accepted, NCCN-guideline, evidence-based. So as a starting point, let's try to capture these patients, because we know from a medical perspective these patients should be identified, and start to measure the impact of it, which hopefully will guide us further on some of these questions as well.

Again, from our perspective, we knew when we launched the MyCode Community Health Initiative that we didn't have evidence that this was gonna be beneficial. So we launched it as a research project, and so the return of results, that initial visit, all that sort of stuff, the institution pays for out of our research dollars. Now, once we have returned the result, they transition into usual care. But we had the agreement with our health plan up front that any care generated from this return of results, they would pay for. So if we found a BRCA result, they would pay for the breast MRI and they would pay for the additional things that would be done, and if they had a member that was a family member of the individual that they covered, they would pay for the cascade testing. So we'd done all that ahead of time. Now, even though we do have a provider-owned health plan, that's only about 40% of our patients. So for 60%, we're dealing with outside payers. Fortunately, we've not had any issues with the outside payers in terms of them picking up the medical costs related to these returns of results. So at least from that perspective, we're not sure if it's just because they don't know how we're getting the result or what, and we're not going out of our way to tell them either, but they are paying for it. So that's good.
But as we began to accumulate the data, and we're still relatively early in the outcomes data, much more at the anecdote stage than at the really robust outcomes stage, it became evident to leadership, particularly our CEO, that this was something we really wanted to transition. And I think one of the key concepts of the learning healthcare system is that you don't have research over here and clinical care over there and never the twain shall meet; they're really both embedded. And so that's a model that we're very comfortable with in terms of having both of those coexisting, and then it makes the transition much easier. So as we moved to transition this into a clinical realm with reimbursement from a payer in our initial launch, we were able to use essentially the same infrastructure. We were able to cut out some things because, of course, we weren't relying on a research laboratory to actually do the testing. So as soon as the exome was done, we could use the information, because it had been done in a CLIA-certified laboratory. So we were able to cut out a lot of the extra steps that were present there. And so far at least it's worked reasonably well, but we've not approached any other payers about that. And I think the reaction would generally be that there's insufficient evidence to warrant paying for an exome in someone for whom there is no indication. In other words, a population health indication. But the reality is we do exomes all the time for a variety of reasons, whether it's for intellectual disability, autism, or other things. Once we have that exome data, we can use it for other purposes. We'll have the infrastructure to do that. So we'll leverage exomes that are being paid for for an indication to use for off-indication results.

That's probably also been our approach with our pharmacogenomics program. Historically, the approach has been that you target a specific disease or indication, but our pharmacogenomics panel is broad.
You can't order one related to just depression or what have you. You get everything or you get nothing, basically. And while it takes time to build this evidence, that's what we're trying to do. It's been a challenge, I'll be honest, to bring the third-party payers to the table in a meaningful way, but it's something we've been trying to engage in our community as well. But we're also trying to facilitate this. We're in the process of integrating with CancerLinQ, which Kevin mentioned as well. So no one institution, certainly not NorthShore, is gonna solve all these problems, but how can we contribute to that greater body in a structured way? Because the questions the payers are asking are not necessarily the questions that we as a health system are asking as primary, but there has to be overlap to be successful.

Okay, I'm not actually sure who was first on this side, but Heidi, do you wanna go ahead?

This is more for Lincoln. Oh, thanks. So the question is around the molecular tumor board, and just understanding: is every case discussed with a group of individuals? I was calculating roughly 30 to 40 per week based on your test volume. Is there any cost billed for that? Or to what extent are most handled by perhaps an individual molecular pathologist, and only the really tricky cases bubble up to a larger group? Could you talk a little bit about that? Because I think it's a really critical part of the process, but also incredibly expensive if everything goes through it, right?

Yeah, great point. Molecular tumor boards have a scaling problem. And so when we first started, yes, we would review every case in front of our large molecular tumor board. We now have small molecular tumor boards where we just require two reviewers to review every case, and so that's how we've tried to get around the scaling problem. And we don't bill for that separately, so it's not something that we're charging for right now.
We just bundle it into the total cost of the test. And then we have added additional molecular tumor board members to accommodate the increased volumes. So that's how we've gone about trying to scale that. You know, I think with time and with additional data you could start getting into an augmented molecular tumor board, where you may have enough data to inform an algorithm that can make most of those calls, and then all the MTB does is sign off instead of having to think de novo about each case. And the other thing we've done is really limit the amount of information that accompanies each case, so that the amount of decision making is lower. So we have said a priori that we will not comment on standard-of-care provisions. We're just assuming that patients are getting basic standard of care for their advanced cancer. So we don't say, you know, this patient should have gotten this platinum doublet or whatever chemotherapy. The molecular tumor board constrains its interpretation to just the genomic findings, and that's it.

Laurie, is it your card that's up there?

So Lincoln, when you were presenting, you talked about your little pilot study with the genomic risk versus the non. And it struck me, as we've been talking about when is there enough evidence to do implementation, that that was the perfect time to do a hybrid study where you say, all right, I'm gathering this evidence in this very nice little trial, and in addition, at the same time, I'm gonna start looking at these characteristics of implementation science. So what exactly does the intervention look like? What were the required components to make it work? What kind of baseline knowledge did the providers have? Did you have to develop specific types of education to bring them on board? How was the health system driven? Was it a top-down approach from the CEO, or was it grassroots because these providers were clamoring for it?
Looking at all of those different pieces and pulling them together, and then when you went and did your large-scale expansion, when was it working easily and when was it not? What were the different characteristics in each of those settings? And then what you have at the end of the day is this beautiful little kind of, if you look like this and you wanna do it, this is what you need to do. And that's exactly what implementation science is designed to do. My question would probably be, did you do that? And I don't mean to put you on the spot, but I just felt like it was a perfect demonstration of where you can bring together the evidence collection and the implementation and start those processes together, instead of waiting for the perfect time when you have all the data.

Yeah. I wish we had met a few years ago. No, we didn't do that. And I wish we had, and there are a lot of things that I would have done slightly differently if we had it to do over again. We followed a sort of intuitive path. And I could go back now retrospectively and follow one of these frameworks, and we could come up with the study. But we didn't do that, and partly it was because we were trying to move quickly and we didn't have a lot of resources. But to answer your question, in our case it really was a grassroots effort, and then when there was clear benefit to patients, the executive team totally got on board and it's been great ever since.

Let's give Lincoln a rest for a second and go to Rex, and then we'll go back to Mark. Well, actually, one of my questions is for Lincoln. Oh, well, sorry. I'll ask the other one. That's fine. So, Kevin, I was struck by the number of people that you've got serving as curators. And one of the things I wondered about is, how do you assure consistency between all 250 of them? 150. Excuse me, 150. Which is still too many, probably. So, it's all about taking a portion of all the records, about 10%, actually.
We run those through multiple curators, and then we have a series of QA/QC procedures and metrics in our protocol, so that at any given point in time, any day of the week, we can tell you where we stand in terms of consistency. And so, it's brute force and statistics.

And then just the second question was for Peter: you showed this really nice dashboard. Is that a home-built thing, or is that something else? Yes, so that's home-built. It's using Tableau. It's something our analytics team helped us put together. Fortunately, I get the privilege of working with a very talented HIT bioinformatics group. And so, how are we merging all this data together? 'Cause it's unique; Epic doesn't come out of the box telling you this information. So, we've had to be conscientious about how we built even just the order for these genetic tests, so that we could pull out the data that we wanted to. And I think, you know, to kind of dovetail on the Intermountain model: NorthShore has a similar kind of model. It's a different analogy, but we tend to build it and then try to build it better as we go, 'cause we do learn at each step of building. And so, we've had to tweak the model, which is why I don't have that data yet, but that's obviously our goal, to make it as accurate as possible.

And is that dashboard available to everybody, all the providers in the health system? No, that's something that's restricted to key members of the personalized medicine team. We recognize that it's very sensitive data to our providers in practice, and the goal is to improve things and to know what direction we need to go with it.

Mark? Yeah, I'll just make a follow-up comment and then toss a question to Lincoln. The ability to do reports out of the electronic health record that we all have is really pretty much limited to the things that we're all required to collect.
So meaningful use data, some of the quality improvement data that's needed for reimbursement, and that sort of thing. And so, there are systems that come native to the electronic health record system that will help to collect those data. But for basically anything substantive that you wanna do, irrespective of whether it's genomics or not, you really have to build your own reports, and the tools that come equipped with the EHR are just not up to the task, at least for systems that have gotten to a certain level, where they're well beyond the sort of basic level of quality reporting. It's challenging sometimes to even get the data out in terms of what the electronic health record automatically feeds. And so a lot of us have also had to build additional data captures that are well beyond the back end of the health record. So that's another thing to unfortunately have to think about when you're thinking about implementation.

So the question I had, Lincoln: again, given my knowledge of Intermountain, I was very interested to hear about the mental health pharmacogenomics. I think that's a great idea. But Intermountain has been a leader in mental health integration, the use of alternative providers embedded within primary care practices to essentially offload some of the work that would otherwise fall to the primary care physicians. So I'm curious if you leveraged that infrastructure to take the pharmacogenomic testing live, or whether that was something that was done in addition to that integration structure.

Yeah, so Intermountain has published some data on the cost savings that occur when you integrate mental health directly into the primary care clinics. And as you said, it's a way to sort of democratize access to mental health care by enabling primary care providers. So we basically piggybacked on that infrastructure. Initially, honestly, we hadn't thought of that. We thought of just going directly to our behavioral health care team.
And they said, well, why not just go right through MHI? Which is the mental health integration. And so we have done that. And then when we got into MHI, those primary care providers said, well, why don't we make this available to all primary care clinics? And so it sort of metastasized, starting with just behavioral medicine and then going through that infrastructure. So, great question. And so now it's available throughout all the behavioral medicine clinics, all of the mental health integration clinics, and all the primary care clinics.

Terry? So actually, I wanted to go back to molecular tumor boards. But if other people along the sides have other questions that might be more relevant to the current conversation... Bruce, if you wanted to take those. Apparently not right this second. No, they're shaking their heads. Okay, fine. I gave you a chance. Go ahead, Tom.

Just a quick question for Mark. You mentioned that the exome sequencing is done in a research lab, and then any positive results are confirmed. Is there a concern that if there are no results returned back to the patient, that that is clinically interpreted as them being negative for things like BRCA1 or any of the relevant genes that you're testing?

Right, so we've done extensive education with our providers around the fact that if there's no return, that means only that there's no return. And that if there's a clinical test that's indicated, that test should be done; we would not go back to this research exome to ask questions that really are best asked with the clinical test. Because, while we think the performance characteristics of the exome are robust, it's not a clinical test. It is a bit frustrating for our participants. We tell them right up front, we say, you know, you may not hear from us. It might be because we just haven't analyzed your exome yet.
We have 40,000 of them waiting in the queue to be sequenced. Or it may be because we didn't find anything. So as an example, I was part of MyCode. I've never heard anything from MyCode, but I was one of the first to take advantage of the clinical sequencing. And I got my result in three and a half weeks, which was negative. So I still don't know whether my MyCode sequence was negative, or it hasn't been done, or whatever. So we have that ambiguity. We're trying to figure out better ways to communicate to patients, but the basic bottom line is we don't want those sequences used for clinical indications, because we're concerned that we could easily miss things that would be picked up on standard clinical testing. For example, large rearrangements in BRCA1 and BRCA2, which account for 10 to 15% of individuals; the exome is very poor at calling those types of rearrangements.

Okay, Terry. So, harkening back to the discussion this morning in terms of dissemination, I remember Howard mentioning back in GM nine... I know, shocking, you're gonna have to... At any rate. I'm in the middle of the line. Yeah, that's right. Anyway, the molecular tumor board was actually a very effective educational tool, and getting their fellows to attend it was a very useful way of getting this knowledge disseminated. And I recall that Baylor, when they started doing their clinical exomes (I mean, not when they started, but shortly after the Genomic Medicine One meeting), showed slides of their exome sign-out conference, which was open to anyone. They would actually bring pizza in, and anyone who wanted to could come, and the room was packed. I mean, it was really a great educational effort, to the point where they would videocast those. I don't know if they still do.
I guess the question, more around the table, was: is there a way to harness that across our medical care systems, recognizing that it's not scalable, but still there might be the opportunity, once a month or once every couple of weeks or whatever, to get together a group of fellows or trainees or physicians from the community and actually try to educate them a bit on how this is done?

So we have a similar thing for rare disease, a genome odyssey board, which has been very successful. I don't know if everybody can hear me or not, but, not to throw cold water on the molecular tumor boards, we had one and we stopped it. I think for the reasons that Heidi illustrated out there, which is we felt our physician knowledge base had reached a level at which, honestly, the attendance was dropping and new discoveries were diminishing. So we've turned ours off for the time being, at least. Because of some of the work Tempus is doing, bringing in RNA sequencing and circulating tumor DNA and other things, we're thinking about restarting it, but for now we don't have one. So I think it's kind of getting to the scale issue as well.

And I think, since my name was used in vain, I would mention our tumor board, where we don't have pizza, but we do have Jimmy John's. Jimmy John's, freaky fast. And what we've done is we have a core group that is looking over every single case. And then we have a monthly meeting where two or three cases are dug into deeply, with lots of people involved; it's semi-educational, semi-clinical. It always has a real clinical question, but there's an educational component. And that's kept people engaged. It allows us to justify having that monthly meeting, but doesn't hit the problems that were just mentioned, in terms of it being just an exercise with no real point to it.

We kind of have a sort of hybrid. We're still trying to find: what does our molecular tumor board mean at NorthShore? Are we gonna integrate this information?
Because we have a lot of different tumor boards already. So should we be targeting those, as opposed to creating yet another physician requirement? We do have a consultation service with one of our champion oncologists, where he will review this genomic data on a one-on-one basis for the really tough cases, or to provide oncologists who may not be as knowledgeable in the space some reassurance about what the next steps are. But this is still a work in progress at our institution. I imagine as the field matures, the difficult questions become fewer, but there still always will be some where those kinds of conversations can be really valuable.

I think there was a question way in the back, maybe two. Oh, yes. All right, let's go ahead. Just a quick question for Mark. So I think that notion of doing research and then flipping it to clinical when you send it out for clinical confirmation has lots of advantages. What happens when you get a discrepancy between them?

Yeah, we track that. We looked initially, and one of the reasons that we internalized our analysis pipeline was because we were looking to see, if we compared our pipeline as it evolved with an experienced laboratory's calling pipeline, what were we seeing that was different? And what we found was a very small rate of difference, and the differences were ones that were really not clinically substantive. So we recognized that we'd gotten to the point where we had a pipeline that would meet our purposes. We've also looked at discrepancies in terms of the exome call versus the clinical confirmation call. Most of those discrepancies are understandable. It's poor performance of the exome in areas where there might be a lot of repeat sequence, or GC-rich regions where the exome capture is just not working particularly well and we're getting erroneous reads. So a lot of those you could identify, on the basis of the quality metrics of the exome, as not being likely.
And then there are a few others that don't, but it's a very tiny percentage of the actual calls themselves. We also then have the interpretation, where both of our groups, and now all of the laboratories that we're using, use the ACMG criteria for classification, and we look to see whether we're essentially calling things the same. We're only returning likely pathogenic and pathogenic, because we're dealing with a population where there's no indication, so we don't want anything to do with VUSes. Those are all false positives as far as we're concerned. We have the advantage of having some clinical data, which can sometimes help to clarify. And so I would say maybe once every couple of weeks there'll be a discrepant interpretation where we'll get some additional clinical data and we use that to resolve it. But we were pretty surprised, I think, by the relatively low level of discrepancy in all three of those areas. But that's something that we continue to track. Did you guys find...?

I wanna thank all of our panelists for providing some really good examples of what can be done when clinical and molecular data can be accessed and analyzed. But all of the tooling and the reports that we heard about today require that access to structured data. Kevin, I was particularly struck by your example, as Rex had said. You've built some very impressive tooling, but behind that you've got personnel dedicated to pulling all that data back out, structuring it, and coding it. You've got a team of, if I heard you correctly, 15 to 20 people dedicated just to building NLP tools, and then 150 people dedicated to reviewing and curating those results. That is not something a typical or even atypical academic medical center is going to be able to do. And I think this highlights a significant gap in our collective ability to effectively capture relevant clinical data in computable forms. We need the structured data.
We need the standardized data, but we can't hope to turn our clinicians into data entry technicians. So we're left with this gap. Tempus has obviously invested very deeply in this. You've thought about this because you have a significant financial incentive to developing technology to extract these data and codify them more efficiently. And I'm wondering, Kevin, not to put you on the spot, but to put you on the spot, and the other panelists as well, if you could comment about how we can scale this process more efficiently. Because from a data standardization and normalization point of view, this is really at the heart of what we need to do to go forward.

So three years ago, when I was running a clinical trial at the University of Chicago, the way I got data out of the system was I hired a fellow or two and they went through and pushed data into REDCap databases and so forth. And as for the data structuring component of Tempus, yes, there's a financial incentive to do it, but it only works at scale. And so whether it's Tempus or somebody else, I think a lot of the data structuring that is required to push this field forward is probably well done, or maybe even best done, by the private sector, who can do that as a service. We give all of the data back to each of the sites, so we're not hoarding the data. We keep a de-identified aggregated set, which has value in the aggregate, but every site, no matter who we work with, we actually insist that they take the data back. That data is meant to stimulate everything from your research to your quality control. And we help build reports and so forth to stimulate those areas as well with our collaborators and the hospital systems. So I think it'd be silly for every individual hospital to build their own unique NLP and data curation team. It's something that scales; it's ditch digging.
Or, for a better analogy, it's ditch digging or lawn mowing, and we just have to go out there and dig those ditches, and it's best done in industry for the most part, I believe.

Heidi, did you have another question? So I just had a comment on that as well, with the data integration. I think at NorthShore we're gonna continue a kind of hybrid: some things we're gonna do in-house with very talented molecular pathologists, but a lot of things we're gonna do externally through strategic partnerships. And so part of the fundamental question is, how much of this data actually has to live in the EMR versus being pulled from somewhere else? So is it necessarily important that our EMR knows the genomic coordinate of the BRCA2 variant that's been found? Maybe not, maybe so, depending on what your organization's goals are. But I think for most community centers, just knowing, okay, there's a pathogenic BRCA2 mutation right now, is going to be sufficient. So part of what we've done is we've built a repository called Flype, and it has different uses depending on who you talk to in our organization. It captures some of our next-generation sequencing data from some of the tests that we do in-house. It pulls in data from external labs that we've partnered with. It drives our clinical decision support for our pharmacogenomics program, which pings out to ActX. So it means a lot of different things, but what it accomplishes is a relatively seamless but consistent interface with our EMR, so we can have a little bit more flexibility with integration across the different entities we're getting data from, whether it's updates from ClinVar, whether it's clinical lab data, whether it's some of our research initiatives as well.

And what I would add is that I think we've not really addressed an important question, and this to some degree is dependent on the use, but I think we always tend to approach the idea of, well, let's just get everything.
And I think a more useful question is, well, what do we really need, and when do we need it? A minimum-data question, as opposed to getting everything. Now, for the cases that Tempus handles, which are very complex, that approach is probably a reasonable one. But as an example, at Geisinger, because of our population demographics, with the obesity and smoking and that sort of thing, abdominal aortic aneurysm is a pretty big deal for us. And we recognized that there were a lot of triple As being found on imaging that was done for other reasons and not being followed up on. And we said, well, this is a huge opportunity. So we developed an NLP program that's very simple: it just looks at the radiology notes, looks for specific language, and can pull out in real time, as radiology notes come in, whether they're generated by our PACS or scanned in from an outside record, whether there's a triple A that meets a certain size criterion, and then we ping the clinician to say, you need to evaluate this patient for triple A. And we have a series now of patients that have been identified because we've done that. That's not something we would ever turn over to industry to do. But it's also a very narrow purpose. And so I think when we're talking about genomics, we sometimes get overwhelmed by all of the different things that we could potentially do if we had the sequence data, but we don't have to do it all at once. And so if we think about, whether it's an oncologic indication, what are the specific data that we need? If it's what we're doing with 61 genes, what are the specific data we need? It's not huge data. It's what Zak Kohane was talking about with simple data: sometimes we go for the big data, and it's the simple data that we can really focus on and accomplish things with.
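A simple, rule-based scan of the kind described could be sketched roughly as follows. This is an illustrative assumption, not Geisinger's actual implementation: the regular expression, the 3.0 cm threshold, and the function name are all made up for the example.

```python
import re

# Hypothetical alert threshold (cm); the actual criterion used at
# Geisinger is not specified in the discussion.
SIZE_THRESHOLD_CM = 3.0

# Match "abdominal aortic aneurysm" or "AAA" followed by a size in cm
# within the same sentence.
AAA_PATTERN = re.compile(
    r"(abdominal aortic aneurysm|AAA)[^.]*?(\d+(?:\.\d+)?)\s*cm",
    re.IGNORECASE,
)

def flag_aaa(report_text: str) -> list[float]:
    """Return aneurysm sizes in a radiology note that meet the threshold."""
    hits = []
    for match in AAA_PATTERN.finditer(report_text):
        size_cm = float(match.group(2))
        if size_cm >= SIZE_THRESHOLD_CM:
            hits.append(size_cm)
    return hits

note = "Incidental abdominal aortic aneurysm measuring 4.2 cm noted."
print(flag_aaa(note))  # [4.2]
```

In a production pipeline this would run on each incoming report, with any hit triggering a message to the ordering clinician; the point of the sketch is just how narrow and simple a high-value extraction task can be.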
So I think that type of a strategy, and frankly, there's more structured data in there than we sometimes realize; it takes talented people to know where it lives, but it can be found. So I think there's a lot of opportunity there.

So I just want to agree with that. I'm not suggesting that industry should be structuring all data, and I've seen the radiology team's work and met with them, and it's very, very impressive at Geisinger. And I think the point is there's a certain set of standard data fields. So, for example, we collect 30 to 50 data fields, maybe a little north of 50 sometimes, depending on tumor subtype and the situation, according to a formula. And they're obvious stuff: what therapies did you get, how long was progression-free survival, overall survival, et cetera. And it's that stuff that's, quite frankly, for reasons we all understand, locked up in Epic and Cerner and so forth. And so it's just a dirty job of digging it out and getting it structured, getting certain fields structured. But when you're doing cutting-edge research, now you're enabled to do that. The fellows in my laboratory today just get the data and analyze it. And then they go and look at radiomic parameters and things like that and develop new things. And so I think that's the world we want to move toward, at least until we live in a world where there are interchangeable, interoperable EMRs, everyone holds hands, and data can flow.

I'm told that we can take five extra minutes. We have, I count, five cards up and seven minutes. You do the arithmetic. Bob, you're on.

So, to paraphrase Bob's point: brute force works, but it isn't always the most elegant solution. And maybe at this particular point in time, that's what we need to do. But there are more elegant solutions that will scale better in the future, and they have to do with data standardization and communication protocols.
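As a concrete illustration of a "certain set of standard data fields," a minimal abstraction record might look like the following. The class, field names, and example values are hypothetical placeholders, not Tempus's actual schema.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical minimal oncology abstraction record: a handful of obvious,
# high-value fields rather than "everything in the chart."
@dataclass
class OncologyAbstraction:
    patient_id: str
    tumor_subtype: str
    therapies: list = field(default_factory=list)
    progression_free_survival_days: Optional[int] = None
    overall_survival_days: Optional[int] = None

# Example record, as a chart abstractor or NLP pipeline might produce it.
record = OncologyAbstraction(
    patient_id="P001",
    tumor_subtype="NSCLC",
    therapies=["carboplatin", "pembrolizumab"],
    progression_free_survival_days=210,
)
print(record.therapies)  # ['carboplatin', 'pembrolizumab']
```

The design point is that a fixed, typed schema of a few dozen fields is what makes downstream analysis trivial for the fellows who "just get the data and analyze it."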
So hopefully that clarifies.

So, a question for Lincoln. You mentioned the cost data. Is that truly cost data, versus charges or reimbursement? And are you able to capture costs from long-term healthcare facilities and rehab and things like that? How well do you think you're capturing all of the costs for those patients?

So yes, it is charge data. And for patients who were covered by our health plan, SelectHealth, we think we're capturing any related charge during the time of the study. So if they were in some skilled nursing facility or something like that, then presumably we would have captured that.

Jeff. And so when you say you're using charge data, it probably is not a big deal if you're using just SelectHealth data, since they'll all be using the same charge data; it's standardized in itself, so to speak. Yeah, but across payers, one method to normalize, which I think is what we're getting at, is to use standardized costing with a Medicare fee schedule or something like that, so that you're comparing apples to apples when you're looking at different payer data. Yeah, so, gosh, it was a whole separate team that did that, and I don't know if they used a standardized Medicare fee schedule or not.

Jeff, did you? Yeah, I continue to think about the evidence generation challenge. And I was thinking about how, in the cardiovascular world 20 years ago, the National Cardiovascular Data Registry was started, which now captures data from 2,400 hospitals and has led to quality improvement, coverage decisions, guideline development, and research.
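The normalization step being described, replacing payer-specific charges with a single reference fee schedule so utilization can be compared across payers, could be sketched like this. The CPT codes and fee amounts are made-up placeholders, not real Medicare rates.

```python
# Hypothetical reference fee schedule: CPT code -> standardized fee ($).
# Using one schedule for every payer removes charge-level variation.
MEDICARE_FEE_SCHEDULE = {
    "99213": 92.47,   # office visit (illustrative amount)
    "81479": 760.00,  # unlisted molecular pathology (illustrative amount)
}

def standardized_cost(claims: list) -> float:
    """Sum reference fees for billed CPT codes, ignoring billed charges."""
    total = 0.0
    for claim in claims:
        fee = MEDICARE_FEE_SCHEDULE.get(claim["cpt"])
        if fee is not None:
            total += fee * claim.get("units", 1)
    return total

claims = [{"cpt": "99213", "units": 2}, {"cpt": "81479"}]
print(round(standardized_cost(claims), 2))  # 944.94
```

The same claims from two different payers would yield the same standardized total, which is the apples-to-apples comparison the question is after.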
Anyway, the point is, I'm listening to Intermountain and NorthShore and Geisinger, and I'm sure others in the room, all generating a lot of important data, but in a siloed way. And I would think that coming out of this meeting, one thing we could begin to consider is the formation of a genomic medicine registry, which could be very powerful in terms of evidence generation. And of course, if we engage the payers in helping us think about what the structure would be that would help them with their evidence evaluations, that could be a very positive move forward. I'd be interested if anybody has any comments.

So we, I think, institutionally believe that data sharing is important. It's one thing to say that, and it's a totally different thing when you try to get people to start sharing data, and God, it's amazing how attorneys get nervous about what that means and what data can be shared. But I don't think that's a problem; we've taken steps in that regard. We have actually formed a data sharing consortium specifically in oncology, called OPEN. There are others out there; one is GENIE, and there are a few more. But a genomic medicine data sharing effort or consortium is something that I think would provide a lot of value. And the cardiology community, who have obviously been doing this for 20 years or more, could provide some insights as to how to actually structure it. We don't have to reinvent the wheel. Yeah, it'll help us avoid some landmines.

Some of these initiatives are already underway, CancerLinQ being an example that Kevin talked about. So figuring out where the gaps are to fit in, so we aren't duplicating efforts, is going to be important as well. But we feel the same way; we're on the roadmap for trying to integrate with CancerLinQ and other initiatives. I think one of the challenges we realize is that it's not so much the standardization of the data.
There are a lot of standards out there, and so how do we reach consensus on what that standard looks like across the different institutions? I think there's a step before that, and Jeff is familiar with this because it's work that IGNITE has done as well, and that's actually landing on what we agree on as the important outcomes. Again, if we don't define the outcomes, then we're just collecting data, and we may have the right data by chance, but we won't necessarily have all the data. So eMERGE, through the Outcomes Working Group, published a paper comparing the ClinGen Actionability Working Group output to the defined eMERGE outcomes, to take an initial step at harmonization. And I think that's something we talked about potentially doing across the other NHGRI-funded consortia, like CSER and IGNITE, so that we would begin to create a toolbox of outcomes that we can all agree on for certain genomic medicine use cases. We would all collect data around it, and that would be a necessary step, I think, before we develop, or at least it would inform the development of, what that data repository would look like.

So let's do Heidi, then Teri, then Dan, and then I think that'll be all we have time for.

So this builds on the data sharing question, and this is specifically somewhat for Lisa's talk. You talked about being able, when you saw a variant, to go back to BioVU and see what the phenotype risk score was. So it occurred to me that that dataset would be a perfect fit for a new model we're exploring in the Global Alliance around a variant matchmaker, building off our gene-based Matchmaker Exchange, where it's a federated network and the institutions agree that people, through an API, can query: do you have a patient, or multiple patients, with this variant, and if so, return the phenotype to me, as a way to explore what patient-level data exist on a given variant.
Do you think the consent within BioVU is such that that kind of federated, API-based access, returning phenotype when I query with a variant, is possible with that dataset?

I think it's possible, although I try not to answer questions about consent; I always hand those off to somebody who's actually an expert. But I think it's a really interesting idea. And just as with my experience in the UDN, most often what we're able to do, when people have candidate variants they think might be pathogenic, is that if we can identify a number of individuals who don't seem to have any medical problems whatsoever, that's helpful information. And if I understand correctly, your variant matchmaker is going to have predominantly people who are affected, correct? Oh, is that not true? No. Heidi, the work that Lisa described is all done in a de-identified set. And that's fine, as long as the genotype and phenotype are together in there. No, absolutely. And just imagine what you could do if it weren't 20,000 but two million people with those kinds of data. I think, well, that telegraphs what I was going to say, so I won't. Okay.

Yeah, I also think that even though ICD codes are certainly not the deepest type of phenotype you can get, compared with having 150 people abstract your charts for you, on the other hand they're very, very portable, and they can give you an idea of whether it's at all plausible that the variant causes, let's say, epilepsy or something like that. You have decent capture of really broad phenotypes. So I think that could make the whole method amenable to people pooling their data together to start aggregating these large populations, so that you do have information on a lot of different types of variants.

Teri?
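The node-side half of the federated model described above could be sketched as follows. The variant key format, the cohort data, and the response shape are all assumptions for illustration, not the actual Matchmaker Exchange or GA4GH API.

```python
# Hypothetical de-identified local cohort, keyed by a simple
# chrom-pos-ref-alt variant string. Real data would live in a database.
LOCAL_COHORT = {
    "17-43093464-G-A": [
        {"phenotypes": ["C50.9"], "affected": True},   # ICD-10 codes only
        {"phenotypes": [], "affected": False},         # unaffected carrier
    ],
}

def handle_variant_query(variant: str) -> dict:
    """Answer a federated query: carriers of this variant and their
    de-identified phenotype codes, with no patient identifiers."""
    carriers = LOCAL_COHORT.get(variant, [])
    return {
        "variant": variant,
        "match_count": len(carriers),
        "phenotypes": [c["phenotypes"] for c in carriers],
    }

result = handle_variant_query("17-43093464-G-A")
print(result["match_count"])  # 2
```

Each institution would expose a handler like this behind an API; the querying site aggregates the responses, so patient-level data never leave the node, which is the property that makes the consent question tractable.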
So, on the question of outcomes, one of the things we struggle with at NHGRI is that the research studies we fund are typically funded for a four-year period; when we're really lucky we get them to five years, and that's pretty much it. So if you're designing a study, there's a year of protocol development at the beginning and a year of analysis at the end, and maybe you can shorten each of those to six months, but you're still left with about a three-year period in which to recruit and intervene and get your outcomes. And so my question for the group, and particularly for those who need to be convinced, or at least to assess the value of this, is: what kind of outcomes can we get other than process outcomes? That is, other than the clinicians looked at the test, or they changed some aspect of care, or whatever. Can we really get healthcare outcomes, other than in people who are desperately ill and about to have an endpoint anyway? What kind of outcomes can we look for? And Mark, it looks like you're ready.

Yeah, I mean, I wouldn't be dismissive of process outcomes, because there's a reason we collect them. The primary reason is they're easy, but the secondary reason is that in many cases there's actually evidence that they relate to a health outcome. And I think there are certain intermediate outcomes that would be captured within that timeframe as well, getting an LDL under control or something of that nature. Each of those is not a health outcome, but again, if you're looking at familial hypercholesterolemia in the pediatric population, you're going to need a 50-year outcome study. And so for most things, I think what we have to do is be thoughtful about what the evidence is that actually links a process or intermediate outcome to the health outcome of interest, and how easy it is to capture the data. And one thing that payers are really interested in, it turns out, is whether people really act on the information.
So, in other words, if we do this test and we return the result, what if nobody who gets a BRCA result actually gets breast imaging, or actually does anything with it? Well, that's important data. But if they all change their health behavior, the health plans really say that's the currency, because ultimately that's what we recognize is going to bring value.

So can I just ask, for those others around the table who are maybe not as research-oriented as Mark and I are: are those kinds of process outcomes compelling to you, or are there other kinds of outcomes? And this may be a question that you have to think about over dinner, and we come back and talk about tomorrow. But are there other kinds of outcomes that we can get in the short term, in a relatively general population? Since we are the genome institute, we don't study just cancer or just heart disease or just other things like that. Any thoughts on that?

Mark's example of the LDL reduction is somewhere between a process outcome and an outcome outcome, because I think the vast majority of people in the cardiovascular science area would accept the idea that lowering LDL probably has a beneficial effect, especially if you lower it with the usual statins, which we know are highly, highly effective. So there are outcomes that people would label process outcomes that are actually outcome outcomes, I think.

I'm sorry, I was just going to say that we also haven't done much in the patient-reported outcome realm, and I think that's something where, again, we could get relatively interesting information. So, the family history study that Nadeem Qureshi did in the UK, where they identified that when they returned the family history information about cardiovascular disease, they noticed a significant increase in smoking cessation.
They weren't powered to detect a significant difference, but they found one anyway, because the impact of the information really spoke to those folks, and they wouldn't have seen that if they hadn't collected the patient-reported outcome data. And I think, ultimately, at the end of the day, it's about the patients, and if the patients say this is important information to us and we're going to make health behavior changes, that's pretty important.

So, to dovetail on that, I echo that completely. We've published on our pharmacogenomics data; we actually did some implementation science work with Amy Lemke, interviewing patients and developing themes, and I'm sure I'm butchering the language because I'm still learning this field. But one of the themes was assurance: that, in this case, our health system was taking a full look at the different factors related to my medications. And even if no change was made to their medications after they went through pharmacogenomic testing, patients felt more confident in what was prescribed and were more likely to adhere to their medications. And I think there's one thing as clinicians we can all agree on: if a patient doesn't take their medication, it's not going to give them any benefit.

We've about exhausted our extended time. Dan, did you have a comment?
So I think what I've heard this afternoon is the value of big, big data sets, in discovery and in implementation. And it feels like sort of the beginning of GWAS: we have lots of chattery signals, lots of signals that might be real, and with bigger and bigger data sets it becomes obvious which signals are real and which are just in the weeds. So as Eric and company start to think about what comes next, I think the generation of very, very large data sets of phenotype and genotype information feels like a priority, not just for the obvious reasons but for the discovery kinds of reasons that we've heard today. I like to tell people, wouldn't it be lovely if gnomAD had a column that said phenotype, in addition to everything else? So that was my great comment.

But isn't that exactly what All of Us is about? I mean, it's a lot of what a lot of initiatives are about, including the genomic alliance initiatives. But we're at the beginning of a story, and I think that a million people probably isn't enough. But there's UK Biobank plus All of Us. And the thing that I keep coming back to is Teri's time-limited ability to fund; UK Biobank and All of Us don't have that same problem. And Kevin's resource, and to some extent BioVU, seem to have legs that are longer than a four- or five-year timeline.

Well, we've exhausted the extended time, and the little timer up front has stopped counting, which can't be a good sign. So I'm not going to try to summarize, but I just want to end with three very quick points that came to mind listening to this. One is sustainability of these various implementation projects. Deans come and go, CEOs come and go, and their motivation may, too. And, to Teri's point about how long you can sustain an implementation test, I think that will be an important question for us to look at long term. The second thing is that, apparently, everything works.
We heard about multiple success stories, and I guess it would be interesting at some point to hear the hard lessons that learning health systems have learned along this path. What didn't work, and why, and what can we learn from that? The third thing went through my mind as I was listening to Kevin's talk, or actually more watching it. Those of us who are stuck in the dungeon of Cerner will say that those screens are just plain ugly. It's painful, frankly, to look at the screens on these things, and then to watch the very high-tech, sort of cool things, apparently fueled by an army of hamsters in the background. But I often wonder if part of the issue with getting clinicians to adopt this is that if you're doing something high tech, it ought to look high tech, and we're stuck in utilitarian systems that don't. And I guess a question I would raise, maybe for the future, is what could be called systems engineering: how you actually design things so that people really want to use them. I'll end it at that. Thank you.

Great. So we are going to take a break, but it's only a 20-minute break, my apologies. We don't have any food for you; that's part of our austerity and our efforts to reduce obesity. So we're all going to run on the treadmill for 20 minutes. Please be back at 4:30 sharp. We have an experiment that we're very much looking forward to, a debate from two of our colleagues. So be back at 4:30. Thanks.