Okay, so we had two slides, and we started with a discussion that took off from this diagram or paradigm. Evidence generation is, in our minds, a very broad topic that touches on a lot of different things. I started with this paradigm of a general genomic application: thinking about an indication, a genetic test, an intervention, an outcome, and then wanting to supply evidence to support any given application along these lines. In some cases, you're starting with a healthy individual and it's more of a screening opportunity, so you may not have an indication. But on top of this paradigm, one question that was brought up was: is this need for evidence generation any different in genomics than in any other medical application? In some regards, no, but in others, we all agreed that there was added complexity in the genomic arena, and that comes in a few different flavors. One, in some cases the indications can be incredibly rare, and therefore the approaches one takes to generating evidence may need to be different because of the rarity of certain phenotypes. The other thing that makes genomics a lot more complex in the evidence generation area is the complexity of the genetic and genomic tests themselves, given that their content, the methodologies that are used, and the interpretive process can be highly variable. In comparing scenario to scenario to scenario, if everyone is using a different test or interpreting it differently, that is going to heavily influence your outcomes and your ability to generate evidence to support your application. So in some ways you can break these down into the need to have evidence surrounding a method, or the content of your test, or the interpretive process, et cetera.
There are probably more commonalities with other medical paradigms in terms of ensuring the definition of your indication, although there was agreement on the need for standard phenotyping tools to ensure your ability to compare group to group. There was also discussion, given the rarity of certain indications and phenotypes, of the need to aggregate data in different ways: instead of aggregating at the individual variant level, maybe aggregating by gene, by disease, or even by category of disease, such as all extremely rare diseases, and tackling them as a group. So those were some of the discussions we had surrounding a number of these different topics. Toward the end, we moved on to trying to address Jeff's major aims of stating the priorities, the opportunities, and the next steps. I wouldn't say we spent a lot of time deciding which of those three buckets to put certain things in, so I'm primarily going to go down the list we came up with at the end, in terms of major areas of priority, opportunity, or places to start. One of those is to simply catalog the evidence-generating projects that are out there today, because if you want to build numbers and data, it helps to know that somebody else is doing the same thing as a way to generate the evidence you need. We asked whether the IGNITE Coordinating Center might be a center to help catalog the different projects that are going on. It was also brought up to catalog the availability not only of data but of specimens that might be shared across centers and things like that. We also talked about the need to standardize tests so that you can compare apples to apples, whether that means methodologies, defining the content of tests, standardizing the interpretation of the test, or ensuring understanding of risk prediction: are we talking about a variant with high penetrance or low penetrance?
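The aggregation idea above (rolling rare observations up from the variant level to the gene or disease level so evidence accumulates at a coarser grain) can be sketched in a few lines. This is a minimal illustration with invented data, not any group's actual pipeline.

```python
from collections import defaultdict

# Invented example observations: (variant, gene, disease).
# Each variant is seen only once, so variant-level evidence is too
# sparse, but counts become usable when rolled up a level.
observations = [
    ("VAR1", "GENE_A", "DiseaseX"),
    ("VAR2", "GENE_A", "DiseaseX"),
    ("VAR3", "GENE_B", "DiseaseX"),
    ("VAR4", "GENE_C", "DiseaseY"),
]

def counts_at(level: str) -> dict:
    """Count observations aggregated at the chosen level."""
    index = {"variant": 0, "gene": 1, "disease": 2}[level]
    counts = defaultdict(int)
    for obs in observations:
        counts[obs[index]] += 1
    return dict(counts)

by_variant = counts_at("variant")   # every count is 1: too sparse
by_gene = counts_at("gene")         # GENE_A now has 2 observations
by_disease = counts_at("disease")   # DiseaseX has 3
```

The same roll-up logic extends to the coarsest grouping mentioned in the discussion, treating all extremely rare diseases as a single category.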
Also, defining evidence: ensuring evidence for gene-disease associations, for variant pathogenicity, for a test's validity, and for a treatment's utility. And also pointing out that we shouldn't hold ourselves in genomics to a higher standard than the rest of medicine. We also talked about scenarios where we felt there was sufficient evidence already, but saw the challenge of how you adopt genomic medicine applications that already have evidence but may not yet be adopted. Some of those touch on the education paradigms in terms of training, making cases for economic benefit, engaging physicians in the process to ensure eventual adoption (so that if they are identifying needs, they can help implement them later on), and developing society practice guidelines. We also went on to think about ways to share evidence in actual physical ways: thinking about patient data sharing, recognizing that there may be difficulty with actually centralizing this data, but at least identifying countries or systems that are willing to enable access to patient data through federated approaches, enabling evidence generated in those systems to be shared across different entities, and actually needing systems to capture the evidence, facilitating a federated network with standardized APIs to enable sharing. Perhaps I should explain: an API is an application programming interface, just a computational interface that allows systems to connect. Yes, you build an API that allows another application to dock onto it and gather data through it; if you make your API standard, then those systems can interconnect. And then, in terms of discussing areas of overlap: what are other organizations already tackling, and can we identify areas of overlap so we're not redundant in our activities? We did not do any voting on the prioritization of everything on this list, but those were certainly the major areas we discussed. Thank you. Any comments? Aravinda.
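The federated-network idea just described can be sketched as follows: if every participating system exposes the same query interface, a client can fan one evidence query out across sites and combine aggregate counts without centralizing any patient-level data. All names here (`EvidenceRecord`, `query_variant`, the counts) are illustrative assumptions, not an existing standard.

```python
from dataclasses import dataclass

@dataclass
class EvidenceRecord:
    variant: str        # e.g. an HGVS-style identifier (invented here)
    case_count: int     # affected individuals observed at the site
    control_count: int  # unaffected individuals observed at the site

class SiteAPI:
    """One site's implementation of the shared (hypothetical) interface."""
    def __init__(self, records):
        self._records = {r.variant: r for r in records}

    def query_variant(self, variant: str) -> EvidenceRecord:
        # Return only aggregate counts; patient-level data stays local.
        return self._records.get(variant, EvidenceRecord(variant, 0, 0))

def federated_query(sites, variant: str) -> EvidenceRecord:
    """Combine evidence across sites that all expose query_variant()."""
    total = EvidenceRecord(variant, 0, 0)
    for site in sites:
        r = site.query_variant(variant)
        total.case_count += r.case_count
        total.control_count += r.control_count
    return total

site_a = SiteAPI([EvidenceRecord("BRCA1:c.68_69delAG", 12, 1)])
site_b = SiteAPI([EvidenceRecord("BRCA1:c.68_69delAG", 7, 0)])
combined = federated_query([site_a, site_b], "BRCA1:c.68_69delAG")
```

Because both sites implement the same interface, the client code never needs to know how each system stores its data, which is exactly the point of standardizing the API.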
The standard should not be higher than other areas of medicine: I think I understand what you're saying, but some may take it to mean exactly the opposite, that it might be lower, and I'm just wondering what the philosophy was in your thinking there. Well, I think this came up, for example, when you think about imaging tools in medicine, which are used incredibly widely, but how much evidence is there that they improve outcomes when using ultrasound and other techniques? So the question was: if we set our standard such that, in order to implement a genomic application, you have to achieve X level of evidence, are we setting that too high for us to ever practically implement these things? That, I think, was the point. And the point I would add to that is that it's not our call; ultimately it will be the people defining what they're going to reimburse who set the standards of evidence. That's why a point that came up earlier, engagement with the people actually paying for services about defining what is in fact the evidence, so that we have a shared understanding, is a much more important activity than self-defining what we think the standard is. Because you're absolutely correct, but the reality is that because we're the new kid on the block, and because the standards have changed from 20 years ago, we are being held to a higher standard, because everything at this point is being held to a higher standard. That's a valid point. Maybe a suggestion to add to the list, and see what you think. I was at your group for a little while, but I wondered if you had the chance to discuss expanding on existing evidence-generating projects. So we heard that Canada has 17 programs running that are obviously going to generate evidence, particularly in that environment. We heard about the PDX card and Estonia, as well as the Genomic Medical Alliance this morning, with certain ongoing projects.
Would there be any enthusiasm for thinking about those as a foundational set of projects that could be implemented and generate evidence in other parts of the globe? So the question is: if we were to gather those evidence-generating projects (I don't think all of those projects and funded grants are all about evidence generation), we could catalog which ones are out there and ask two questions. Are there duplicates, where we could pull groups together to amass more data? And is an ongoing project of interest to another country to implement, which I think is your main point? Simply defining what these projects are gives an example of how to do a certain project, which might help another country implement it. Is that... What I'm suggesting is that we already have a number of evidence-generating programs around the globe. We heard a snapshot of them over the last couple of days. Maybe, to your point, we could catalog them and then have a strategic discussion across this stakeholder community about where there is an opportunity to really accelerate the generation of data that would provide the clinical utility and other outcome measures we aspire for them to have, and see if they reach the thresholds we're looking for. If I could just weigh in for a second: I think the idea behind number one there was, rather than go out and create a whole bunch of new projects, there was a strong sense that we're not doing as good a job as we could at capturing all of the evidence generation that's currently going on out there. The idea of a catalog would allow us to think about where we should capture the data from, along with the idea of actually identifying countries willing to enable access to patient data, even if it was aggregate data that said, this is evidence, and that evidence could be deposited in a centralized database.
The one thing that's not on this list, but that we talked a lot about, is the opportunity for things like ClinGen to actually be part of this federation that Heidi mentioned, where that data gets aggregated. So I think it's all about trying to do a better job capturing the projects already going on around the world, rather than thinking about generating a whole bunch of new projects, because there's a lot going on. I think that's a great point. Not to belabor it, but what I'm trying to say is: if we took Patrick Tan's I project that we heard about yesterday and began to see, first of all, whether it has relevance in other places around the globe, and whether that type of genetic testing platform could be seamlessly implemented, and what the barriers are, we could begin to understand that, and at the same time generate the data that makes a compelling argument that these should be reimbursed, clinically adopted, part of guidelines, and all the things that I think we want to see genetics do. I'm just trying to push your catalog notion to a next step, and I'm not sure whether you agree that that should be the case. I think I understand what you're saying. There are a couple of different opportunities there. As you catalog these things, you can in some ways get the interim answer, or see whether a project is already accomplished and has an outcome, and for those that show a positive effect, you might want to implement them elsewhere. For those that aren't quite achieving an appropriate level of evidence, but where it appears that expanding the numbers might get you there, that would be an argument to replicate the study in another environment so you can build additional evidence. And in other cases, the study shows that the application isn't appropriate, and maybe you de-prioritize those projects being executed elsewhere, because you already have some data to show it's going the wrong way.
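The triage logic just described can be sketched as a simple decision function. The labels and the two boolean inputs are invented for illustration; in practice "evidence sufficient" would be a judgment against whatever threshold payers and guideline bodies agree on.

```python
def triage_project(effect_positive: bool, evidence_sufficient: bool) -> str:
    """Hypothetical triage of a cataloged evidence-generating project."""
    if effect_positive and evidence_sufficient:
        # Already proven: adopt the application as is in new settings.
        return "implement elsewhere"
    if effect_positive:
        # Promising but underpowered: replicate to amass more data.
        return "replicate to add numbers"
    # Data point the wrong way: avoid duplicating the effort.
    return "de-prioritize"
```

A catalog annotated this way would let the community see at a glance which projects are ready to export, which need partners, and which to set aside.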
So by seeing not only what's going on, but where things stand, you can decide how to prioritize: either implementing them as is, adding more data by running another study with larger numbers, or deciding not to pursue an area. And I would just note that these align closely with two things from the first work group that didn't quite make the cut: aggregation of guidelines, and also the collection of the actionable variants, however you want to define that. So that would seem to be very synergistic, and now that it's emerged from two different work groups, that may in some sense suggest it's a valued activity. Yeah, in my own opinion, it's incredibly important that we all agree upon the fundamental evidence for a gene-disease association, and upon whether variants are pathogenic, because your outcomes will be so heavily impacted by making wrong decisions in that early step. That is a critical fundamental resource that we all need to work towards. Heidi, this question's probably superfluous, but did you consider the big-picture sources of evidence? By way of example, I imagine that in the world of pathology there is a lot of evidence generation, which cuts back to the workforce question. And secondly, if one looks at the example of the Karolinska Institute and AstraZeneca, who have just put all of their scientists under the same roof and said please work together, there is probably some evidence that is generated by industry as they manufacture some of the tools that we use. Yeah, you know, we talked about this a little bit, in terms of recognizing that a lot of the evidence that we will need is sitting in healthcare systems and in other places, and this need to have a federated system to access it.
But one of the reasons I was a bit in turmoil in the first section, about the EHR fields being of critical importance versus variant databases, which is obviously one of my loves, is that if we don't agree upon the fields that need to be in these data sources we can draw from, there's nothing to draw from, right? What are those fields? Who enters them? What are the standards for what we're entering? Those are going to be critically important for our outcomes, and absolutely, the sources of data and the existence of fields for those sources are critical to everything we do. I totally agree with that. Heidi and Rex. Next up is pharmacogenomics. Sorry.
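The field-standardization concern raised above (what fields, entered by whom, to what standard) can be illustrated with a minimal validation sketch. The field names and types below are invented for illustration; they stand in for whatever minimal set the community would actually agree on.

```python
# Hypothetical required fields for a shareable evidence record;
# none of these names come from a real standard.
REQUIRED_FIELDS = {
    "variant_id": str,      # e.g. an HGVS-style identifier
    "phenotype_code": str,  # e.g. a standard phenotyping-term ID
    "assay_method": str,    # how the variant was detected
    "interpretation": str,  # the site's pathogenicity call
}

def validate_record(record: dict) -> list:
    """Return a list of problems; an empty list means the record conforms."""
    problems = []
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], ftype):
            problems.append(f"wrong type for field: {field}")
    return problems

conforming = validate_record({
    "variant_id": "NM_000059.4:c.1310_1313del",
    "phenotype_code": "HP:0003002",
    "assay_method": "gene panel",
    "interpretation": "pathogenic",
})
incomplete = validate_record({"variant_id": "NM_000059.4:c.1310_1313del"})
```

The point is not this particular schema but that, without some agreed-on minimum like it, federated queries across EHRs and variant databases have nothing consistent to draw from.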