So thank you, Heidi. My role is to react, so I'm going to react. All right, so my reaction is actually framed by what I think is persistent wisdom from Larry Weed, the father of the electronic medical record and the problem-oriented medical record system. He said this in the 1970s; both of these quotes are from the 70s, and they're more true now, I think, than when he said them. The first: "Modern healthcare is a spectacle of fragmented intention." And the question really for this group is: is CSER on a pathway to increase the size of the spectacle of fragmented intention? The second thing Larry Weed said is that we practice healthcare as if we never wrote anything down. We actually write millions of things down, but we've structured healthcare as an industry as if we didn't, so that perhaps an individual practitioner might become better on their own by looking at their own experience, but we never, as an industry, learn from the collective experience of everything we've done. So that frames my reaction. And I apologize to those who have seen this graphic before, but I wanted to use it to point out that the 200 papers produced by CSER are down here in the graphic, and the guidelines are not even in the basket. Clearly, the method to ensure that you either have decades-long delays or simply make no systems-level impact in healthcare is to rely on clinicians reading and remembering clinical reports, the published literature, and guidelines.
So, to paraphrase James Bond, the world of all of these acronyms we're talking about this morning really is not enough. It's not enough because none of them, at least in my view, would contribute even in a small way to the goal of a self-optimizing healthcare system that learns from every decision event. Of course, you can't correlate what somebody read with their eyeballs, but if you had a decision support infrastructure that recognized a situation for which best guidance was available, provided that guidance, and then had some way to track what happened next, you could learn from every decision event whether or not the user followed the best-evidence guidance. That ability would contribute to improved local operations, so every practitioner, every clinic, and every hospital would be happy; it would contribute to the combined real-world experience with genotypes and phenotypes, so NIH would be happy; and we would all move ahead. So really the goal needs to be: could CSER be configured so that it contributes to a framework where guidance improves as a byproduct of care delivery, specifically care delivery not provided by CSER investigators? That's where you get the impact.
So I have my own idea of how you might do this, and two pieces of it are well known. We've been doing clinical decision support in the world of medical informatics for over three decades, so the first idea is familiar: recognition logic keyed off things that come into electronic medical records, that is, elements of phenotype, laboratory results, or diagnoses, so that a rule can fire that gives guidance to some target user, who could be a clinician or a patient's family. But by and large, what we have failed to do, what I believe CSER to date and ClinGen and the other groups have failed to do, is connect that rule to downstream recognition logic, that is, automated recognition of the correct thing happening, or of a good or bad outcome. And that's really all we would need: if we systematically recognize the need to invoke clinical decision support, and we systematically track its outcomes, we can close the loop. There are some tools that are widely distributed, even more so with meaningful use: decision support authoring systems are coming online in large numbers; we have event monitors, essentially computer programs that spend their whole day watching things happen in the EMR and looking to see whether rules could be satisfied; and we do have system-generated alerts at, hopefully, the teachable moment of testing, therapy, decision-making, or counseling. But what we largely lack are the things in the italics: automated downstream tracking of outcomes versus those user decisions, and something that is prominently featured in the precision medicine planning for its cohort, the incorporation of patient-reported outcomes, that is, what the human being for whom this was intended actually experienced.
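The closed loop described here, recognition logic firing on EMR events, guidance delivered at the teachable moment, and automated downstream tracking of what the user did, can be sketched in deliberately simplified form. This is a hypothetical illustration, not any real EMR or CDS API; all names (`EventMonitor`, the CYP2C19 rule, the observation fields) are invented for the example:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class DecisionEvent:
    """One alert plus its eventual downstream outcome."""
    patient_id: str
    rule_id: str
    guidance: str
    outcome: Optional[str] = None  # filled in by downstream monitoring

class EventMonitor:
    """Watches incoming EMR observations and fires matching rules."""

    def __init__(self) -> None:
        self.rules: list[tuple[str, Callable[[dict], bool], str]] = []
        self.open_events: dict[str, DecisionEvent] = {}  # alerts awaiting outcomes
        self.log: list[DecisionEvent] = []               # closed-loop record

    def add_rule(self, rule_id: str, predicate: Callable[[dict], bool],
                 guidance: str) -> None:
        self.rules.append((rule_id, predicate, guidance))

    def observe(self, patient_id: str, observation: dict) -> Optional[str]:
        # Recognition logic: does any rule fire on this observation?
        for rule_id, predicate, guidance in self.rules:
            if predicate(observation):
                self.open_events[patient_id] = DecisionEvent(
                    patient_id, rule_id, guidance)
                return guidance  # alert at the teachable moment
        return None

    def record_outcome(self, patient_id: str, outcome: str) -> None:
        # Downstream tracking: close the loop on the earlier alert
        event = self.open_events.pop(patient_id, None)
        if event is not None:
            event.outcome = outcome
            self.log.append(event)

    def adherence_rate(self, rule_id: str) -> Optional[float]:
        # Aggregate local experience: fraction of alerts followed
        closed = [e for e in self.log if e.rule_id == rule_id]
        if not closed:
            return None
        followed = [e for e in closed if e.outcome == "guidance_followed"]
        return len(followed) / len(closed)

# Example: an invented pharmacogenomics-style rule
monitor = EventMonitor()
monitor.add_rule(
    "CYP2C19-clopidogrel",
    lambda obs: obs.get("genotype") == "CYP2C19 poor metabolizer"
                and obs.get("order") == "clopidogrel",
    "Consider alternative antiplatelet therapy",
)

monitor.observe("pt-1", {"genotype": "CYP2C19 poor metabolizer",
                         "order": "clopidogrel"})
monitor.record_outcome("pt-1", "guidance_followed")
print(monitor.adherence_rate("CYP2C19-clopidogrel"))  # 1.0
```

The point of the sketch is the last two methods: once outcomes are recorded against the alerts that preceded them, adherence and outcome statistics fall out as a byproduct of care delivery rather than requiring a separate study.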
So an ideal genomic infrastructure would have a public library that would spontaneously, that is, without an NIH grant to do it, incentivize this bi-directional engagement with healthcare and research organizations, and it would need to be managed by a trusted organization. It seems to me you could have a quid pro quo: if you use the public library's resources, you also get the technology for easily uploading aggregates, so you wouldn't have issues of HIPAA or identification of individuals, but you would share the aggregate local experience with the use of that decision-support infrastructure. And since the last workshop where this was prominently featured, Genomic Medicine VII, there has been notable progress: the IOM's DIGITizE project is most of the way through designing and implementing two pharmacogenomics use cases; eMERGE and IGNITE have in fact developed the knowledge library, at least that first part with the recognition logic and the advice, though I don't know that there is much attention to downstream monitoring in an automated fashion; and a PCORI-funded project at Geisinger is doing a comparative effectiveness trial of patient-facing decision support to improve patient-provider communication. So it's good that we're making small steps, but I think the risk for any NIH-funded research consortium that is inwardly directed is losing the opportunity, as it makes incremental progress on expanding the knowledge base, of instrumenting that knowledge base by building an engine, at least a prototype of the engine, such that if the funding stopped, the engine would still run, still learn, and still get better as a legacy of the project. So that's my challenge and my reaction. Thank you. We'll decide later whether you deserve those points, too. Okay.