Well, thank you, Blackford. I'm honored to be able to present to such an accomplished group. And good, I got the right first slide. The mandate given keynote speakers is something that all of you are at risk of. I read your bios, and it reminded me that there are some tips one can adopt, and that I've learned over the years. So, how many of you have already given a keynote at some point in your career? That's almost everybody. I'll predict that the rest of you will be asked to give keynotes based on the credentials in your bios. So I'm going to interleave a set of tips for giving a keynote talk, and hopefully I'll be able to show evidence that I've listened to the advice I've been given by my elders. The first one is to know your audience. Clearly I am preaching, if not to the converted, probably to the choir, since many of you are advocating for systems approaches to decision support. But I also got very direct instruction from Mark Williams a few days ago, which is: short is better. So I'm going to try and keep my keynote short. But those who know me know that I don't necessarily practice what I preach. The topics I'd like to cover fall into just three buckets: the nature of genomic data and how it gave rise to the desiderata of the two papers that were just discussed, and about which the survey was done; some lessons from other industries about managing complexity; and, because Mark particularly asked me to opine about it, the ideal state of genomic clinical decision support, which is how we'll end up. You might think from the title of this talk that it was going to be about patient safety. It is not; it is about managing complexity. It turns out that reliable systems in highly complex environments do happen to be safe, but that's not the central focus of our interest in them for this session.
So the desiderata, as they were developed at the 2011 conference sponsored by NHLBI, really derived directly from the nature of the data. And this isn't just genomics data; it's essentially all omics data, all these high-throughput molecular methodologies that have given us the capability to simultaneously measure billions of molecular observations. So it begins just by being large. Not large in terms of bits: it gets its BD2K, its big-data-to-knowledge, qualifications primarily by virtue of its heterogeneity, or its variety, as well as its volume. We've got billions of base pairs. We've got hundreds of thousands of proteins, tens of thousands of genes, thousands of different expression levels. But amidst all of that volume, it's clear that details really matter. The anchoring case for Mendelian genetics in the 20th century, in terms of clinical relevance, was the discovery of the beta-6 valine substitution in the hemoglobin beta gene: a single change of one letter of the alphabet gives rise to sickle cell disease. And we are in an era now where we have no perfect and preferred laboratory method. All generate useful data; none of them generate perfect data; all of the data has blind spots and noise and errors in it. Only a small fraction of the total observable data is conclusively associated with health status at present, although our expectation is that that will change over time. Importantly, molecular control mechanisms are essentially, if not absent, so poorly understood that we can't predict on the basis of control mechanisms whether something will have disease importance. And as a result of that, the interpretation of variation is changing rapidly. So the fundamental characteristics of the data itself gave rise to the first of the desiderata papers, which you've already heard about.
The idea of lossless data compression was not about the image data of the high-throughput modalities, which is noisy. As you may know, in our next-gen sequencing world of 300- to 500-fold reads, we're essentially using a civil-jury model to get a preponderance of evidence about the identity of any nucleotide at any particular position. It's probably not necessary to keep that noisy source data; but to the extent that it gives rise to a consensus agreement about the identity of a nucleotide, at whatever level of confidence that method generates, and that consensus is taken as the source data, the genomic sequence, then being able to compress it in a way that allows one to sort of dehydrate it and rehydrate it as it moves in and out of a clinical environment seemed to be important. The second one was the notion that, since none of the methods are perfect and they will change, we need to carry annotation of the method used, so that we could compute, if you will, the expected variation from the ideal state. And we have laboratory data standards, such as LOINC, that are built on that model of the result carrying with it the method by which it was generated. Compact representation of clinically actionable subsets came directly from Clem McDonald, who coined the notion of clinician think speed. His observation was that it takes a clinician about a quarter of a second to get the next idea when they see something on the screen, so any system that's at least that fast can stay ahead of the clinician's need for additional information. There was also a big emphasis on the idea that you could have concurrent and probably linked versions of the same knowledge: one readable by human beings, and one that could be used by decision support rules.
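The dehydrate/rehydrate idea, with the result carrying its method in the LOINC style, can be sketched in a few lines. This is a minimal illustration only: the record shape, the method label, and the use of generic zlib compression are all assumptions for the sketch, not a proposed standard.

```python
import zlib

def dehydrate(sequence, method, coverage):
    """Compress a consensus sequence for storage or transport, keeping the
    assay method alongside it so later interpreters can reason about the
    expected deviation from the ideal for that platform."""
    return {
        "method": method,        # e.g. "NGS-shortread-v2" (illustrative label)
        "coverage": coverage,    # fold coverage backing the consensus call
        "payload": zlib.compress(sequence.encode("ascii")),
    }

def rehydrate(record):
    """Recover the consensus sequence as it moves into a clinical system."""
    return zlib.decompress(record["payload"]).decode("ascii")

record = dehydrate("ACGT" * 1000, method="NGS-shortread-v2", coverage=400)
assert rehydrate(record) == "ACGT" * 1000
```

The design point is simply that the compressed payload never travels without its method annotation, mirroring a lab result that carries the method by which it was generated.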
Next was the idea of separating the primary sequence data, which presumably, if it is true, remains accurate and useful throughout one's lifetime, if not the lifetimes of your descendants as well, from the clinical interpretations that we expect to change as science changes. We should also expect to upset entirely the notion of a single genome. A lot of the dialogue about EMRs was: OK, so it's three times ten to the ninth base pairs; we can handle that, it's not much larger than an x-ray. But it really is not a single, even germline, copy, if things such as telomeric shortening and the genomic rearrangements that are part of normal aging are complemented by the somatic variation of cancer. So at a minimum we need the EMR capacity for multiple copies of the genome, and maybe it's a genome per metastasis, if you will, in a cancer setting. Also, the lesson from HIV, of the molecular machinery being run backwards, so to speak, by retroviruses, suggests that we probably have more surprises in store about the dynamic nature of our genome and our derivative products, such as the proteome and other omics. And then, lastly, there is the notion anchored primarily in the observation of the growing importance of rare variants. Each one of us really is, in a sense, a snowflake, a unique research resource, because there is no one exactly like us who has our combination of rare variants. But the simple statistical requirement of being able to find people who are sufficiently similar to us to get association statistics and develop predictive models, in a setting where it's not minor allele frequencies of 1% to 5% but 10 to the minus fourth or lower, means you need very, very large pools of individuals from which to draw these virtual cohorts for comparison.
And so that raises the bar on the societal importance of being able to support individual care and discovery science, because potentially every single person's genome has secrets that can help improve care and the understanding of disease and health. Among the topics that have been front and center, as echoed by Blackford's recitation of the results of the survey, is this opportunity and challenge of creating both human-viewable formats and links to interpretation. The sincerest form of flattery, I've found, if you publish a paper, is that somebody not only continues the title of "technical desiderata," but when they create their own, they actually begin numbering them above the number of the ones you created. So I'm greatly honored by Dr. Kawamoto and Welch et al. for extending the original desiderata for purposes of clinical decision support, and not rejecting any of them. Their observations, and I'll go through them quickly because we have the authors in the room, are these. The expected interaction of genes, and the need to reason with clinical data combined with genotypes, means that we've got to be able to do this at scale with multiple genes, particularly for complex traits, where any single gene provides only a partial explanation of the variance. And keeping the CDS knowledge separate from the variant classification keeps the two of them in a state where they can be separately updated.
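As a toy illustration of that separation, one can keep variant classifications and CDS guidance in independently updatable stores, with the guidance keyed on the classification rather than on individual variants. The allele names, categories, and advice strings below are illustrative only, borrowing the clopidogrel/CYP2C19 example that comes up later in the talk; this is a sketch, not a real knowledge base.

```python
# Layer 1: variant classification, updated as the science changes.
variant_classification = {
    "CYP2C19*2": "loss_of_function",      # reclassifiable without touching rules
    "CYP2C19*17": "increased_function",
}

# Layer 2: CDS knowledge, keyed on (drug, classification), not raw variants.
cds_knowledge = {
    ("clopidogrel", "loss_of_function"):
        "Consider an alternative antiplatelet agent.",
}

def advise(drug, variants):
    """Route each variant through its classification to find guidance."""
    notes = []
    for v in variants:
        cls = variant_classification.get(v)
        msg = cds_knowledge.get((drug, cls))
        if msg:
            notes.append(msg)
    return notes

print(advise("clopidogrel", ["CYP2C19*2"]))
```

Because the rules reference the classification layer, reclassifying a variant updates the guidance a patient receives without editing any rule, and vice versa.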
Then there is the problem of interoperability, in the world of ONC and of operational genomics. The large number of gene variants was epitomized in the paper by the hereditary colon cancer story, where there are up to 1,200 variants in the same set of genes that have essentially equivalent physiologic consequences. So we have to be able to think of it not as one gene variant per rule, so to speak, but be able to gather them together and keep the CDS knowledge as general and simple as possible; leverage the standards in both clinical decision support and genomics to make forward progress in operationalizing these things; and, importantly, support a knowledge base deployed and developed by multiple independent organizations, because nobody sees enough of the cases to be able to do it inside their own organization. This one, I think, anchored the lower end of the agreement, and I think it speaks to the issue of whether you should err on the side of providing more information to people without knowing exactly what they're going to use it for, or not expose them to anything that isn't actionable. There the specter of, if you will, genetic exceptionalism and/or paternalism, of "I know what's important, therefore I will not let you see everything else," kind of raises itself, and you saw that in the responses on the survey. So, putting all of that together: I have shown this over the years, this is just a cartoon of molecular data driving the escalating complexity in healthcare. It has no reality to it other than the line, well known since the 1970s, of the upper bounds of human cognition.
I'm a hematologist-oncologist, and we were kind of molecular geeks early on, and we often got overwhelmed: too many markers, too many RFLPs, too many molecular things that might bear on our diagnosis or therapy. So we were some of the first clinicians to do what human beings reliably do when you get more data than you can deal with: we simply extinguished variables until we could get it into a space we were comfortable with. But with 25,000 structural genes, each of them potentially giving rise to one of several thousand expression levels, and those giving rise to 400,000 or so proteins, we're clearly in a space that's above the five to seven things we can remember as covariates. So the most important aspect of this growth of molecular data is that it puts us in a decision-making space that exceeds the bounds of unaided human cognition for clinicians. The next tip about giving keynotes is that you really ought to talk about stuff you know, and particularly, if you have personal experience, it will add credibility. In that context, one of the things that's relevant for what follows is that I got my pilot's license a few years before I started medical school, and I'm still active, and I have had experience in a range of aviation environments that include the little four-seater on the left, my current kind of chariot for cross-country travel, up to and including the Boeing 737-800, a remarkably complex machine. All of the workflow and training issues that surround aviation are something I have wondered about and watched for four decades now: watching one industry reinvent itself in dramatically new and effective ways, and watching another industry, which was my primary job, keep not changing, for reasons that have become increasingly illogical to me. So that's one set of context for what follows. The other is that I'm not a nuclear power plant operator, and, to obviate any Homer Simpson jokes, I don't play one on TV.
However, there was a very interesting and intense two-day workshop held in San Diego in July of 2012 that brought together representatives from the nuclear power industry and healthcare, and I co-authored one of the chapters, on how diagnosis and problem solving are done in those two areas, for the monograph that resulted. So that gave me the opportunity, and the idea for this talk, of comparing and contrasting how these three industries deal with complexity. They are very similar in a number of ways. They all serve an important public good. They all depend upon highly trained, skilled, educated professionals. They work with high-hazard socio-technical systems, so they're capable of causing great harm when things go wrong. They're all highly regulated by a variety of external watchdogs and oversight bodies. But at that point they begin to diverge. Compared to healthcare, standardization of practices and methods is very highly developed in the other two industries, while healthcare more or less prides itself on the notion of a thousand doctors, a thousand opinions. Rapid industry-wide adoption of best practices is also a prominent feature of the other two industries but is notably absent in healthcare; we often cite that landmark study that showed 17 years to 50% adoption of best evidence in healthcare. It's probably about time to redo that study. And then, importantly, there is reliance on individual professionals acting autonomously. I'd like to highlight these two things for the next part of this talk, because the other two industries changed in a very large way over the last 40 years with respect to these two features, whereas healthcare didn't change very much in its approach to them. Nuclear power, in particular, was not noted for its rapid adoption of new approaches and its group-wide problem solving, and it now is.
And aviation was built on a maritime model, the captain-of-the-ship model: very rigidly hierarchical command-and-control structures. It was kind of four-plus in that. And now there's far, far less reliance on autonomous individuals, and we'll go through that. So that brings me to keynote tip number four, which is that, by and large (and we'll test this hypothesis now), people like stories, as long as they are brief and relevant. So let me tell you a couple of stories. The first one is in aviation, and it's the story of Captain Jacob van Zanten. Captain van Zanten was a very, very famous aviator; if he had been a cardiac surgeon, he would have been the Michael DeBakey of cardiac surgeons. He was born in 1927, and he got his pilot's license in the late 40s, when he was only 20 years old. He became a commercial pilot and rapidly ascended to international prominence as the chief safety officer of KLM Royal Dutch Airlines. He was also a handsome, charismatic guy in person. Not only did he write training curriculum and run a tight ship as a highly disciplined, accomplished professional, but he got himself into the advertising: this was an advertising campaign of KLM, and there's Captain van Zanten in the picture, smiling. So Captain van Zanten was overtaken by a rare-variant scenario. His 747, headed for Las Palmas on March 27th, 1977, was diverted by an unlikely series of events. There was a terrorist bomb threat at Las Palmas, so they diverted all the traffic to another island, to a little airport on Tenerife. Suddenly a small airport with not much capacity was crowded with multiple large airliners, Boeing 747s. They arrived in the middle of the afternoon; there was good weather but not enough capacity. So Captain van Zanten offloaded all of his passengers into the terminal and had to go someplace else to refuel his airplane. Another airplane was brought into the terminal.
When it was time to leave, they were unable to shuffle all of the airplanes correctly, because everybody was in everybody's way. He had to go back and pick up his 245 passengers. The clock was running, because he was working in the context of a Dutch law, something like duty hours for residents: if a pilot was on duty for more than a certain number of hours, they could be criminally prosecuted in the Netherlands for staying on duty too long. So there was a certain urgency. And in the middle of the afternoon the fog moved in, and suddenly nobody could see any of the airplanes out there; and because the taxiways were crowded, they had to have airplanes back-taxiing, that is, going back down the runway in order to reach a turnoff to go where they were supposed to go. With all of that occurring, hours of delay, the clock running, thick fog, nobody able to see anything, Doctor, I'm sorry, Captain van Zanten, I guess that's a Freudian slip, exercised his pilot-in-command authority to move the throttles forward and take off in the absence of a takeoff clearance. The most basic, fundamental mistake that can be made in aviation. And as we know, there was another 747 pointed the other way, coming down the runway, and the two of them collided. Captain van Zanten lost his life and killed 582 other people in that hour, making it the worst aviation disaster in history. The aviation community took an important message to heart about all of this: you had the most perfect pilot, the pilot that everyone wanted to be, the chief safety officer of the airline, causing the worst aviation disaster in history. So aviation, at that point and over the course of the ensuing decade, changed this model of reliance on autonomous, highly skilled individual professionals, with their ability to cause very dire consequences. Second story.
And for those of you who like "the rest of the story," there's a very interesting book written by John Nance, the ABC aviation guy, and I think it was actually co-written by Lucian Leape, because it has such amazingly pithy insights into the nature of healthcare: not only that incident, but why the workflow issues and the hierarchical decision making of healthcare are going to prevent it from ever achieving the kind of reliability that is seen in aviation. So the second story is also that of a bad outcome. The Three Mile Island nuclear plant underwent a partial meltdown as a result of a sequence of events of confusion, too much data, and misinterpretation. There was a human-computer interface issue: the plant operators misread some of the sensor data and believed that they had too much pressure in their cooling water system instead of too little, so they vented it to the outside. Radiation leaks occurred, the cooling system shut down, and the reactor partially melted down. It is still the most serious nuclear reactor incident in American history, even now. But the important thing is not the details of that incident, but rather what the industry did as a result of it. There are about 40 companies that run the 100 or so nuclear power stations in America. They are competitors with one another. They're regulated by the Nuclear Regulatory Commission and a number of other safety-related agencies, and up until this time they had, like hospitals, behaved as commercial competitors. They changed the model. In fact, the professional society of the nuclear power generators has an important motto that came up at this San Diego conference, and that is: what happens to one of us happens to all of us. As a result of that view of the importance of group decision making, they now have international networks for doing dynamic, real-time problem solving when something goes off-nominal in any nuclear power station.
They communicate quickly with one another to use the world mind, the best accumulated experience of everyone, to understand, if you will, these rare-variant situations. So the next tip about giving keynotes is that, rather than complaining, it's better to light a candle than curse the darkness. The candle that we're focusing on is the promise of automated, patient-specific clinical decision support, and here is an example of the kinds of gains that can be achieved by what we would call routine, currently extant clinical decision support. It came from Vanderbilt, and it shows, on the left-hand side, before their implementation of their provider order entry system with decision support, that healthcare is about a one-sigma industry: out of every 10 chances to make a mistake, on average we make a mistake about three times, and if we try really, really hard, we might get to one sigma. But we're nowhere near nuclear power, where nine sigmas is the standard, and if they fall to seven sigmas they consider it a crisis in the industry. The implementation of provider order entry reduced the level of prescribing mistakes by two orders of magnitude, and that has been sustained over time. So that's our image of the kind of improvement we could get in the management of complexity, because human beings are just basically not that good at being large list processors, at understanding drug-drug interactions and remembering all the possible component parts of complex systems. For those who are not aficionados, including those who may be viewing this on Genome TV after today's session as well as on the web, it's important to know that when computer folks talk about rule-based systems, they're not meaning that providers must follow rules.
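A rule in this sense pairs recognition logic with an intervention, typically delivered by an event monitor that watches data appearing in the clinical environment. Here is a minimal sketch; the event fields, the genotype label, and the message text are all hypothetical, and a real system would draw on coded EHR data rather than plain dictionaries.

```python
# Minimal event-monitor sketch: recognition logic watches clinical events;
# when one matches, an intervention (a notification, not a mandate) is emitted.

def recognize(event):
    """Recognition logic: a new clopidogrel order for a patient whose record
    carries a reduced-function CYP2C19 genotype (illustrative condition)."""
    return (event.get("type") == "new_order"
            and event.get("drug") == "clopidogrel"
            and event.get("genotype") == "CYP2C19 reduced-function")

def intervene(event):
    """Intervention: guidance with a rationale, plus a flag so the downstream
    outcome of accepting or rejecting the advice can be tracked."""
    return {
        "to": event["provider"],
        "message": ("Patient carries a reduced-function CYP2C19 allele; "
                    "clopidogrel activation may be impaired."),
        "track_outcome": True,
    }

def event_monitor(events):
    """Watch a stream of events and emit messages for those that match."""
    return [intervene(e) for e in events if recognize(e)]
```

Note that the intervention here is informational, and the `track_outcome` flag stands in for closing the loop: recording whether the guidance was followed and what happened afterward.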
People get their backs up about that, but in an informatics context, rules are really focused on recognition logic, that is, the ability to detect that some set of real-world things or features of a problem exists and therefore it's time to do something. And the interventions are not necessarily guidance that says you must do this; they can range from educational prompts, to prompts to gather more data to become more sure about what's happening, particularly with genotype and phenotype taken together. They can improve the certainty of diagnosis. They can do what is more commonly associated with clinical decision support, which is to give you a choice of what the best evidence-based therapy might be. They can also simply provide information relative to prevention or prognosis. So, in the spirit of again lighting a candle rather than cursing the darkness, the Vanderbilt PREDICT project is an early example of doing genomically enabled, or pharmacogenetically enabled, decision support. It had a workflow, that is, a socio-technical model, that included a people component: the creation of a genomics subcommittee of pharmacy and therapeutics that reviewed the best available evidence and decided when there was sufficient evidence to go to clinical implementation. The key intervention was to do prospective genotyping, using a roughly 200-marker panel relevant to drug metabolism, on a population of patients identified by an algorithm as having an elevated prior probability that at some point in the future they would be prescribed a drug for which pharmacogenetic data would be useful. That was a focusing lens: use features from the EMR to find a population, then go ahead and do the genotyping ahead of time, so that the data already existed. So that, at the time that any prescriber, whether or not they knew the pharmacogenetics literature, would prescribe a drug for which that information was relevant, there could be an
infrastructure of decision support that would give them a notification of relevant data they might not otherwise know or understand, and then close the loop, that is, follow the outcomes: did providers actually change the dose, and did the patients do better or worse as a result? So this was the face of the decision support alert, a pop-up for clopidogrel therapy in patients who had a CYP2C19*2 variant. It is an example of a key computer technology in this arena called the event monitor, where you have a computer whose job it is to just sit and watch data appearing in the clinical environment and then, if certain conditions are satisfied, to do something that sends a message to a provider. With that as a kind of candle that provides the existence proof that this can be done, and done at scale, PREDICT has now genotyped over 14,000 patients, and the article in that citation summarizes the experience of doing that. The ideal state that I was asked by Blackford and Mark to identify reminds me of another keynote tip, one I got from my research mentor, Sam Rappaport, at UC San Diego. He said, you know, it's better to be approximately right than precisely wrong. That's not a bad feature for your entire career, by the way. So I'm going to try to be approximately right and just opine about things that I think would represent an ideal state of genomic clinical decision support. First, it would always be up to date, thereby making it trustworthy. It would have content that could be repurposed for different types of users, so you wouldn't have separate systems, but you'd have perhaps different versions of the same knowledge for specialists, non-specialists, and laypersons and their families. It would, in that context, be sensitive to both health literacy and numeracy, because a lot of this is statistical, probabilistic information. It would explain all of its actions: instead of just giving you a pop-up that tells you
to do something, it would allow you to understand why you were given that recommendation. And an important thing here, which very few clinical decision support systems have, mirrors the experience, for example, of using Amazon: a system that adaptively knows what you know and what you don't know, even when you don't know what you don't know. It appears to get smarter; that is, it doesn't give you prompts you have already shown you don't need, but it recognizes the limits of your knowledge. And importantly, on the back end, it actually does get smarter, and we'll talk about ways it would get smarter at the national level. As seen by the healthcare organization, rather than the users, the ideal genomic clinical decision support would be a systems infrastructure that would measurably improve quality and consistency across these autonomous individual practitioners, who are still going to be there, along with interdisciplinary healthcare teams. It would track decision support events and provide a basis for subsequently correlating them with the clinical course, and, importantly, it would do that whether or not the users accepted the guidance. I think that's a key feature. The first couple of generations of clinical decision support figured that their job was done once the message was delivered, but now we need to realize that that's just an intermediate step, an intervention, and in a systems approach we also need to track the downstream outcomes of having provided that guidance. At the national level, that would not only support continuous local process improvement but help create this thing called a learning healthcare system. The building blocks for it include informatics kinds of things, like knowledge representation standards, the electronic envelopes, if you will. In my mind the envelope has to contain at least three classes of information. The first is the recognition logic for the conditions of interest as represented in
the clinical systems, and this is both phenotype and genotype recognition logic, the part that primes the rule to fire. Then you do something: you provide some kind of guidance for some set of target users, the patients, families, or clinicians. And then there is recognition logic as well for closing the loop on the decision support: some downstream measure, some analog, if you will, of the hemoglobin A1c in diabetes, so that if you are fortunate enough to have a process or outcome measure that shows you whether something good or bad happened downstream, you can correlate that with whether the guidance was accepted or rejected. Because historically in clinical decision support a lot of clinicians said, I'm smarter than your rules, I don't need your rules, and maybe they're right. So we ought to be able to learn and revise the rules based on the actual experience of what happened and whether or not they were followed. Then there are these other things that people in the room who build these systems are very familiar with: the accessory, collateral systems infrastructure. You have decision support authoring systems that allow groups of clinicians to easily import, review, understand, and implement the decision support packages they get from a public library-like function. You've got the event monitors, system-generated alerts, and automated tracking of outcomes. And then let me focus on this one: well, I guess currently it's fashionable to call things a "commons" if you're pooling data from multiple sources, so, the CDS information commons. I think it would be at its best if it were built on the principle of the nuclear power plants, that is, what happens to one of us happens to all of us. And what a contrast that is with healthcare, where we tend to use patient privacy as the justification for not sharing things, and for choosing not to learn about things in many cases. It would be managed by a neutral, trusted organization, and here there are multiple possibilities, and I hope over the course of
this conference you could imagine whether there's a natural home in places like the National Library of Medicine, or a 501(c)(3) consortium, or a Wikipedia-like organization. And the way you would close the loop nationally, at least in my mind, simply would be that you have a quid pro quo: if you get information from the library and you use it, the contract for doing so is that you submit an aggregate, de-identified upload of the outcomes data back to the public library, so that it learns, and the whole system gets smarter over time based on the aggregate experience of all the users of the decision support. So my last talk tip comes from Albert Einstein, and it reminds me that it's time to end. I thank you for your attention, and I'd be happy to answer any questions.