We'll go ahead and move to the response from Blackford Middleton, and then we'll open it up for questions and beyond. Thanks so much, Walter. Good afternoon. Thanks, Dan, for the invitation to come back. It's a pleasure to see so many familiar faces and colleagues, and also to catch up on what's been happening with eMERGE since Mark and I were here a couple of years back in a very similar type of meeting. I'll respond to the questions specifically, but before I do, you know, one of the things that, as an outsider to the pharmacogenomics community, I get so excited about is that pharmacogenomic CDS has the potential to be some of the most impactful CDS there is in the field, if you will. Certainly many of you are familiar with the program to adopt EMRs, HITECH, in the ARRA bill. We've now got adoption of EMRs. We have data flowing across EMR systems, but the value potential that I and others predicted for the adoption of EMRs from decision support is not being achieved. There are problems with interoperability between these data systems, for sure, and there are problems with usability. But one of the main problems, as I see it, is that these EMRs are not imbued with the knowledge that we had at places like the Brigham, and that is in existence at Regenstrief and elsewhere. These systems do not have the smarts that analyses suggest could be worth $44 billion in ambulatory care; John Birkmeyer estimated $30 billion in inpatient care; and interoperability itself is worth about $77 billion. So what is the fundamental problem? It is the implementation, the how we do this, but it's also the knowledge-sharing problem. If we fail to share knowledge across a space as complicated as pharmacogenomics, and if we're all expected to reinvent the wheel and implement a new wheel, if you will, in each and every install of Epic or Cerner or what have you, we're simply never going to attain the value proposition that drove the policy for adoption in the first place.
So as I think about CDS in the pharmacogenomics space, I think back to the great Figure 3-1 from the IOM report, Building Safer Systems, which described the socio-technical context that we have to think about with any CDS, pharmacogenomic CDS included. It's going to be about the quality of the data, the quality of the knowledge, the quality of the presentation layer, the quality of the inference, and the quality of the actionability, if you will, of that CDS, as described well in that report. In a recent review, Dean Sittig, Adam Wright, and I used a six-component framework to look at CDS over the past 25 years and look forward 25 years to see what might be coming, and I'm going to borrow that for this discussion of pharmacogenomic CDS. I think we've heard a couple of times already today about requisite data standards, both in the EMR systems and potentially from other systems as well, that might affect the data input side of a pharmacogenomic inference algorithm of some kind. Patient preferences haven't been talked about much, but in a PCORI grant that I proposed with Robert Green at the Brigham, which unfortunately wasn't funded, we had a whole set of ideas around building patient preference models; to really enable CDS that exploits and respects patient preferences or utilities, we're going to have to do that as well. I know there's a great deal of work being done on standardizing genomic test result data. I'll mention that only briefly here, but as that goes forward, we can think about insertion into the EMR, as Sandy has already pointed out. And Mark already talked a bit about clinical outcomes data: really, what are the precision outcomes that go with precision medicine in this context? I think the biggest work that needs to be done, though, is around the knowledge representation of potentially complex inference algorithms for pharmacogenomics.
Things in ambulatory care, chronic care management, preventive care services, and the like are just different from what I think might happen in some cases with pharmacogenomic algorithms. It's already been mentioned that these may not be rule-based; they may actually be algorithmic, and you may need to apply coefficients to your patient's data and the like to get an estimate of risk or what have you. Knowledge management around the knowledge itself is also worth paying attention to: how do we actually classify the knowledge assets as they're subjected to knowledge curation, management, or knowledge engineering? Feedback loops and learning have been mentioned already: what are the controlled feedbacks that we might wish to take from the decision support event that allow us to update the algorithms or the feedback process? And health literacy considerations have been mentioned as well. Another big bucket of stuff is around the inference algorithms used, whether they are rule-based or inferential from a statistical coefficient or regression model or what have you. How do we actually give the end user, a primary care internist like myself, confidence or certainty around the inference that's being provided, and then share that with the patient? And again, there are decision-theoretic concerns regarding patient preferences: preferences are difficult to elicit, utilities are difficult to model, and they may change over time. Perhaps that last breast cancer case might be one where utilities could change. Probably the strongest recommendation I'll make, though, in the architecture and technology bucket, is to think about how we bring forward these large knowledge repositories of codified, structured, and curated knowledge in a way that makes them available across disparate EMR implementations, or multiple instances of even the same EMR.
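To make the contrast with rule-based CDS concrete, here is a minimal sketch of what "applying coefficients to your patient's data" could look like: a logistic model whose output is a risk probability rather than a fired rule. The variable names and weights are invented for illustration, not taken from any validated pharmacogenomic model.

```python
import math

# Hypothetical coefficients for an illustrative pharmacogenomic risk model.
# These names and weights are assumptions for the sketch, not clinical values.
COEFFICIENTS = {
    "intercept": -2.0,
    "age_decades": 0.30,               # per decade of patient age
    "cyp2c19_poor_metabolizer": 1.10,  # indicator: 1 if poor metabolizer
    "on_interacting_drug": 0.65,       # indicator: 1 if co-prescribed
}

def risk_estimate(patient: dict) -> float:
    """Apply regression coefficients to patient data and return a
    probability via the logistic function."""
    linear = COEFFICIENTS["intercept"]
    for name, beta in COEFFICIENTS.items():
        if name == "intercept":
            continue
        linear += beta * patient.get(name, 0.0)
    return 1.0 / (1.0 + math.exp(-linear))

patient = {"age_decades": 6.5, "cyp2c19_poor_metabolizer": 1,
           "on_interacting_drug": 0}
print(round(risk_estimate(patient), 3))  # → 0.741
```

The point of the sketch is the shape of the computation: such an algorithm cannot be dropped into an EMR as a simple if-then alert, which is why the knowledge representation question matters.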
In my own work at Partners HealthCare, we built a CDS Consortium from '08 to '13 and took some of the Partners knowledge assets that were firing and routinely in use there and made them available via the cloud to other EMRs across the land: Epic, NextGen, GE, the Regenstrief Institute's EMR (Gopher), as well as Partners' own EMR. From a knowledge engineering point of view, externalizing this allows you to separate concerns between the EMR, its implementation and management, and the management of the knowledge asset. With that, we can then think about the CDS service layer, if you will, whether it's a SMART container, a web service, a FHIR PlanDefinition, SMART apps, et cetera; all are contenders now. And I would caution against spending too much time on computable objects, input and output, if you will. In our own research in the CDS Consortium, we found that more people, more vendors even, were interested in receiving a web service than in importing a decision support macro, if you will, into their system. On implementation and integration, it's been discussed tangentially a couple of times already, but how do we fit this into the end user's clinical workflow? In particular, if it is patient-directed clinical decision support, how do I fit it into that patient's interaction with a health portal or the like? What are the workflow domain ontologies that have to be considered, including setting-specific factors: which decision support prototype is the right one to use? Is it an infobutton, a SMART on FHIR app, a documentation template, a calculated value, et cetera? And I'll just mention briefly provider-facing versus patient-facing: I asked earlier, can we think about genetic decision support for the patient directly? In other research we did at Partners, we found that the activated patient, in fact, was one of the best ways to activate the provider for chronic diabetes care.
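To illustrate the "receive a web service rather than import a macro" idea, here is a toy sketch of an externalized CDS endpoint: the EMR posts patient context, and the service returns advice "cards" instead of handing over executable rule logic. The field names follow the spirit of the CDS Hooks specification, but the genotype logic and service name are placeholders, not a real clinical rule.

```python
import json

def pgx_cds_service(request: dict) -> dict:
    """Toy externalized CDS endpoint: takes a hook-style request dict,
    returns advisory cards. All logic here is illustrative only."""
    genotype = request.get("context", {}).get("cyp2c19", "")
    cards = []
    if genotype == "poor_metabolizer":
        cards.append({
            "summary": "CYP2C19 poor metabolizer: consider alternative agent",
            "indicator": "warning",
            "source": {"label": "External PGx knowledge service"},
        })
    return {"cards": cards}

# The EMR sends context at order time; the knowledge stays on the service side.
request = {"hook": "order-select", "context": {"cyp2c19": "poor_metabolizer"}}
print(json.dumps(pgx_cds_service(request), indent=2))
```

The design point is the separation of concerns described above: the EMR vendor only integrates a call-and-display pattern, while the knowledge asset is curated, versioned, and updated centrally.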
And then usability, the human-computer interaction: I think this is fairly new ground in some ways. A lot of research has been done on CDS, but there are still many challenges around what the optimal interaction with the end user is, and around how we accommodate, in particular, patient and provider preference models, because that's not done routinely at all. So, in sum, I suggest moving toward more standardization of the data, riding on the coattails, as Ken suggested, of the FHIR movement and the CIMI movement. I'll mention briefly the AMA Integrated Health Model Initiative, building upon Stan's work and some others'; that's pretty interesting. Think also about developing the standard transforms as shareable knowledge artifacts: if I have an Epic data type, I know how to transform it more or less automatically into a CIMI model, or your own knowledge model for the knowledge under consideration. For the knowledge representation, think about the emerging standards and the expression language Ken mentioned already, Clinical Quality Language. There's been decades' worth of research here. CQL has definitely emerged and is now preferred among the quality reporting community, CMS, and the like. It is a convenient way to translate, specify, and encode knowledge, but it has to be executed in another step. Work to standardize all the components of the knowledge stack, from the terminologies, ontologies, value sets, and other classification algorithms which might be used, all the way up to the front end. And recognize the potential of this idea of networked knowledge: Ken may run the greatest algorithm for inferring X, but I might want to use that in my knowledge stack along with Y and Z from different places. That is being done now as well.
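A standard transform as a shareable artifact could be as simple as a documented mapping from a vendor-local record shape into a canonical model. The sketch below maps an invented vendor lab-result row into a canonical form keyed by a LOINC code; both record layouts are assumptions for illustration, and a real transform would itself be a curated, versioned knowledge artifact.

```python
# Hypothetical transform: vendor-local lab result -> canonical (CIMI-like)
# form. The field names on both sides are invented for this sketch.
def to_canonical(vendor_row: dict) -> dict:
    return {
        "code": {"system": "http://loinc.org", "value": vendor_row["loinc"]},
        "value": float(vendor_row["result_value"]),  # normalize string to number
        "unit": vendor_row["result_units"],
        "subject": vendor_row["patient_id"],
    }

# Example vendor row (LOINC 2160-0 is serum creatinine).
vendor_row = {"loinc": "2160-0", "result_value": "1.1",
              "result_units": "mg/dL", "patient_id": "pt-42"}
print(to_canonical(vendor_row))
```

Once every site shares transforms into the same canonical form, a single pharmacogenomic knowledge artifact can run against data from any of them without per-site re-engineering.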
I mentioned already this idea of implementing at scale: thinking about implementations, externalizing knowledge, implementing perhaps in the cloud, and implementing a system of insight that has the ability not only to deliver and execute the knowledge artifact, but to record and observe, and to send updates back to the knowledge authors on outcomes or experience. Finally, recognize that 90% of healthcare systems won't build this, won't research and develop it; they will wish to acquire it. So I think partnering with the vendor community, both the EMR community as well as the CDS implementers community, such as in the work I'm pursuing now with my new effort, would be fruitful and interesting to do. There are some research questions I think might be put on the table: methods of capturing and representing patient preferences, a longstanding problem not yet done well routinely in clinical care, I think. The idea of transitive semantic closure on data mapping: how could we more automatically map data from one system to another, through a canonical form perhaps? Contextual factors, which I mentioned already: the setting-specific factors that will affect, for the primary care doc or the subspecialist, how well this pharmacogenomic CDS is working. And then evaluate, evaluate, evaluate. Perhaps pharmacogenomic CDS can accelerate its implementation and achieve value by not making all the mistakes that were made in ambulatory care and other forms of CDS over the years. Consider knowledge engineering and knowledge management infrastructure at scale; this has been mentioned already.
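The "system of insight" feedback loop described above can be sketched very simply: log each decision support event, then aggregate outcomes into a report for the knowledge authors. The event fields and the acceptance metric here are illustrative assumptions, not a proposed standard.

```python
from collections import Counter

# Minimal feedback-loop sketch: record CDS events, then summarize them
# so knowledge authors can see how a given rule performs in the field.
event_log = []

def record_event(rule_id: str, action: str) -> None:
    """Log one decision support event (e.g. accepted / overridden)."""
    event_log.append({"rule_id": rule_id, "action": action})

def feedback_report(rule_id: str) -> dict:
    """Aggregate logged events for one rule into a summary for its authors."""
    actions = Counter(e["action"] for e in event_log if e["rule_id"] == rule_id)
    total = sum(actions.values())
    return {"rule_id": rule_id,
            "events": total,
            "acceptance_rate": actions["accepted"] / total if total else None}

record_event("pgx-cyp2c19-1", "accepted")
record_event("pgx-cyp2c19-1", "overridden")
record_event("pgx-cyp2c19-1", "accepted")
print(feedback_report("pgx-cyp2c19-1"))
```

In practice the same loop would carry richer signals, such as downstream outcomes, so that the controlled feedbacks mentioned earlier can actually drive algorithm updates.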
I think there would be many interested parties in this. Building upon the success of PharmGKB and the CDS KB, promote open-sourcing the core knowledge assets so they're readily available to knowledge engineers and implementers who wish to build knowledge artifacts in the pharmacogenomic domain, and conduct more pharmacogenomic CDS demonstrations and pilots with both a heavy technology assessment component and an evaluation component, so we can see and learn what works at scale across multiple EMRs. Thank you.