It's so quick I can sit down, but I still have a question for her. And this is like a moronic question; it'll be clear that I'm not completely trained as a geneticist. The de novo idea: do you have any sense at all where that comes from? You sort of say, well, that's evolution, just evolution hard at work. But it looks like there's a particular predilection for those particular genes that will result in congenital heart disease, given how common congenital heart disease is. And is there a background rate of variation in those genes that are or are not associated with various kinds of heart disease that we would call congenital, or might not even call congenital, things like atrial fibrillation?

Well, Dan, I'm not 100% sure I understand your question. Neither am I. The de novo rate is constructed from whole-genome sequences. Daly's group has really spearheaded this, and others have as well. And that is independent of phenotype, independent of anything other than asking, across large numbers of databases, 1000 Genomes being one of them: how likely are we to see a change at that particular nucleotide? And then the question becomes, are any genes more susceptible, more mutable? And the answer is yes. For example, the olfactory receptors, I think most people know, are extraordinarily mutable, and as best we know, the phenotypes associated with them are trivial. So there are genes that we have to recognize are mutable, and I think Dan is going to talk about this; those are the ones where we have to be, if you will, particularly discounting about whether variants found there are pathogenic. But similarly, there are other genes that are constrained and less mutable, probably because if they are mutated, the consequences are devastating to the organism. And it's not surprising to me that we found these very essential genes in this cohort.
Because, as I tried to tell you in my 30-second clinical synopsis of cardiac progress over the past 50 years, these kids all died in years gone by. And if they didn't die, they were told not to reproduce. So there was no enrichment in the community's genome of these variants; they had to be de novo.

And if they do reproduce? Well, that's the big question. That's the real big question. When this cohort from 2010 is 17 years of age, what happens next? And that's why I think interpretation of what their children are at risk for is very important.

So are you going to put this in their electronic medical record? Well, these were research exomes, so no. But I will tell you, the PCGC has struggled with not only these findings but, I think equally so and not surprisingly, incidental findings. And I know that we are committed to finding a way to return very important incidental findings, some of which, if present, would cause disease in infancy and childhood.

So, Mark. So this is an extension of what Dan was asking, and I think it reflects another way that we could aggregate data that would be of use: which genes are more important to look out for based on their sensitivity. We've done this in ClinGen already through a process called the Dosage Sensitivity Map, where we're looking grossly to say, is there tolerance of haploinsufficiency or triplosensitivity, and are there disorders associated with it? But Joseph Shieh, I think at UCSF or Stanford, has also been doing some very interesting work looking at mutation depletion, comparing the number of synonymous variants in exome collections to the number of missense variants. And what you'll find is that in many genes, there's really no significant difference between the number of synonymous variants and the number of missense variants.
But in other genes there appears to be an extreme intolerance of missense variation compared to synonymous variation, which gives you at least some insight that these genes are more important and less tolerant. Therefore a signal in one of those genes may have to be paid much more attention than a signal in one where there's not. And right now we don't have a catalog of that type of mutation-depletion analysis across all human genes, so that would be something worthwhile.

The second thing I'll say is that I would add one bullet to your list of things that would cause one to think of something as being very important and diagnostic: occasionally there are single variants that are recurrent and cause exactly the same phenotype. Even in very small numbers of patients, if you see the exact same variant recurring with the exact same phenotype, that could indicate something important mechanistically, where the rest of the gene may not, in fact, have anything to do with that phenotype; there's something about that specific variant that's important to pay attention to. So flagging those, of which there are a handful of examples, would also be very important.

Dan? Just on the point of looking for genes that have that incredible depletion of variation, and I'll talk about this again this afternoon: we can actually find these genes empirically by looking for genes that are lacking variation in the general population in very large-scale databases like ExAC. If we just focus on loss-of-function variants, for instance, we can zoom in on a set of between 2,500 and 3,000 genes that are almost completely devoid of loss-of-function variants in normal individuals. And that turns out to be a set of genes that's enormously enriched for de novo mutations in the types of diseases Cricket just talked about, as well as autism, intellectual disability, and others.
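The depletion idea described here can be made concrete with a small sketch. Assuming, hypothetically, that for each gene you have counts of synonymous and missense variants from a population database, and an assumed neutral missense-to-synonymous ratio, a one-sided binomial test scores how improbable the observed missense deficit is. The gene names, counts, and the 2.2 ratio below are all illustrative assumptions, not values from ExAC or any published constraint model.

```python
from math import comb

def binom_cdf(k, n, p):
    """Exact P(X <= k) for X ~ Binomial(n, p), standard library only."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def missense_depletion(n_mis, n_syn, neutral_ratio=2.2):
    """
    Score how depleted a gene is for missense variants relative to
    synonymous ones.  Under neutrality we assume roughly `neutral_ratio`
    missense variants per synonymous variant (2.2 is an illustrative
    default, not a published constant).  Returns the observed/expected
    ratio and a one-sided P(seeing this few missense variants by chance).
    """
    expected_mis = n_syn * neutral_ratio
    oe = n_mis / expected_mis if expected_mis else float("nan")
    p_mis = neutral_ratio / (1 + neutral_ratio)  # chance a variant is missense
    p_value = binom_cdf(n_mis, n_mis + n_syn, p_mis)
    return oe, p_value

# Hypothetical counts: a tolerant gene versus a constrained one.
for gene, mis, syn in [("TOLERANT_GENE", 110, 50), ("CONSTRAINED_GENE", 20, 50)]:
    oe, p = missense_depletion(mis, syn)
    print(f"{gene}: obs/exp = {oe:.2f}, one-sided P = {p:.2g}")
```

A gene with an observed/expected ratio near 1 looks tolerant; one with a low ratio and a tiny p-value is a candidate for the kind of missense intolerance being discussed, and a variant found there might deserve extra weight.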
But for 80% of those genes, we actually don't know what their function is. So they're clearly very important; we just don't know what they do.

Could I just add that there's another nuance: it's not necessarily loss of function in the whole gene. Gene domains turn out to be very, very important, and that's just emerging. And it is remarkable to me, as someone who has lived most of her life in adult-onset cardiac diseases: we know that you can have loss-of-function mutations in the most common cause of cardiomyopathy, dilated cardiomyopathy with predilection to heart failure, and these loss-of-function variants can occur across the huge molecule titin. The ones that are clustered in the A-band domain or parts of the Z-disc really increase risk of disease; those that cluster in the I-band are found in the general population and have very little, if any, manifestation. So gene by gene won't be enough, and I know that people are working to try and get to domains. There are some simple reasons that can account for this, called alternative splicing and alternative exon usage during development, and development meaning throughout life. Those are simple things, but we have to expect more complexity and more granularity as we understand these better.

Mike's gonna make a very brief remark and then Les is up. So the loss of function is great, but it's easy. I need the gain-of-function missense. If we could figure out a way to somehow automatically flag gain of function, that would be a huge improvement, I think. So we need both sides of that.

Yeah, just to change topics a little bit.
I was intrigued by Cricket's movie of the myocytes pulling the, I think they were sensors, together. It brought to mind an issue which, Daniel, we also discussed at that assessing-causality workshop: the concept, and I don't think this is necessarily the right term, of the proximity of the functional assay to the actual disease process in the human. One could imagine, and I don't know if this is true, that in that case the assay is about as close as you can get to a functional assay of an inotropic defect in a heart, whereas it might be a little less predictive of a 3D structural developmental defect in a heart. We have a lot of these situations. This is a debate that arose following that workshop, between that workshop and Jean-Laurent Casanova, where he very articulately showed that you can do N-of-one studies when you have incredibly close proximity of a functional assay to a disease process; whereas when you're measuring something that is nine inferential steps away from the disease process, like the number of presynaptic vesicles in a cultured neuron in autism, you need to be very careful. We need, as a field, to figure out how we're going to measure that inferential distance so that we can properly infer from functional data to disease.

Les, briefly. I 100%, and always do, agree with you.
I would say that that's the beauty of not being stuck with an N of one: being able to amass, and this is what cohorts teach you, evidence that there are pathways that are perturbed. The lesson in congenital heart disease is called developmental transcription gone awry, and so the proximal assay can then be a transcriptional readout, which is appealing. It doesn't quite explain why there's a hole in your heart, but it gets you a first step of the way. The second thing I'd add, with regard to the missense variants, is that these cellular assays, whether proximal or very distant, at least allow you to ask: is this similar to the slam-dunk loss-of-function variant? And remarkably, we have seen very close correlation between missense variants and loss of function some of the time, and very divergent responses other times. At the least, it adds to the evidence of whether the missense variant is likely to be pathologic or not.

I have a different but maybe naive question about the de novo mutations. Do we actually know when they are most likely to happen? In the parent's germline, or after fertilization? Is there a systematic study of genetic mosaicism across the general population? Because you might be genotyping the blood but phenotyping the cardiomyocytes, and something could be missing there.

Great question. So we have, for the first time ever in cardiology, the benefit that we get tissue from these children because of the repairs they undergo, and we are not seeing somatic mosaicism at the tissue level, part one. Occasionally we do, but it's a very, very small proportion. Second, we know statistically that there's a very close correlation between increased paternal age and risk of congenital heart malformations, and the hypothesis is that these are mutations that arise in spermatogenesis and are conveyed to the child, but they are germline. Certainly we recognize the possibility of somatic mutations.
It's just not been a major component in anybody we've looked at to date. Okay, Mark has another brief response, and then Callum.

Yeah, this is from the exome data: it looks like maybe 3% to 5% of individuals who are diagnosed on the basis of exomes have somatic mosaicism, probably showing a milder phenotype of a more severe condition. So that may be at least a preliminary sense of the N related to that question. So the majority is still germline, right? It's like three to five. That is correct, yes. It's as close as we can get to saying germline.

So my question is really an integral of Les's question and the last two, which is: I didn't realize you had tissue from these subjects, Cricket. Did you look at the tissues from those individuals who were genotype-negative to see if they had the same transcriptional disarray as you saw in the genotype-positive? Because the vast majority of your patients had no mutations in any of the genes. Correct. So if we take CNVs, and point mutations that we predict are damaging, and arrays for structural abnormalities across the genome, we're not yet anywhere close to 50% of kids with congenital heart disease. So there's a lot to do. We have two models, and you're right, as usual, in terms of what we think may be a missing piece. When we look at the transcriptional profiling of exome-negative, or if you will genotype-negative, children, we can see that there can be loss of function of specific molecules in comparison to all the other children whose tissues we also have. So they have less expression, and they often have mono-allelic expression that can't be accounted for by their genomic DNA: they're missing heterozygosity that's present in the genome, and their level of transcripts is reduced. Frankly, I think that begs the question of whether there is a regulatory mutation in the elements around that gene, or perhaps distally.
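The mono-allelic expression signal just described can be screened for with a very simple test. A minimal sketch, and an assumption on my part rather than the PCGC's actual pipeline: at a site that is heterozygous in genomic DNA, count RNA-seq reads supporting each allele and ask, with a two-sided exact binomial test, whether expression departs from the expected 50/50. The read counts below are invented for illustration.

```python
from math import comb

def binom_pmf(k, n, p=0.5):
    """Exact binomial probability mass, standard library only."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def allele_balance_pvalue(ref_reads, alt_reads):
    """
    Two-sided exact binomial test for allelic imbalance at a site that is
    heterozygous in genomic DNA.  Under biallelic expression each RNA-seq
    read comes from either allele with probability 0.5; a tiny p-value
    flags mono-allelic (or strongly skewed) expression, one signature
    consistent with a cis-regulatory lesion.
    """
    n = ref_reads + alt_reads
    observed = binom_pmf(ref_reads, n)
    # Two-sided p: sum the probability of every outcome at least as extreme.
    return min(1.0, sum(binom_pmf(k, n) for k in range(n + 1)
                        if binom_pmf(k, n) <= observed))

# Hypothetical sites: balanced versus nearly mono-allelic expression.
print(allele_balance_pvalue(48, 52))  # balanced: large p, no flag
print(allele_balance_pvalue(95, 5))   # skewed: tiny p, candidate regulatory hit
```

In practice one would aggregate such tests across all heterozygous sites in a gene and correct for mapping bias, but the core inference, genomic heterozygosity with skewed transcript representation, is just this comparison.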
And I think that makes logical sense, based on the idea that chromatin modification opens up the DNA for transcription factors to come in and activate gene transcription. Both of those pathways sit on cis-acting or potentially trans-acting sequences. So that's potentially another source of mutations. We're looking, with whole-genome sequencing.

Just to follow up quickly and amend that mosaicism point: you were given an aggregate, averaging across a number of phenotypes. It's important to recognize that the biology of the disease determines that in the specific. There are some traits where you only see the variant mosaic, some where you see essentially none, and others where you see an admixture, and that's telling you about the biology of that variant in development, as to when it arises.

So, a question for Steven related to Cricket's presentation. I'm wondering, I would assume most of these children do end up in an intensive care unit. So, Steven, would they be included in your group of children who should have whole exome or whole genome done? Yes, definitely. And this backdrop of de novo mutations being the major driver of genetic disease in a NICU environment is true irrespective of phenotype. It's clearly way out there as the leading cause of pathogenic or disease-causative variants.

So is it standard of care, Cricket, to whole-genome or whole-exome sequence these kids? I would assume not, but what would make it so? Or do you think it should be standard of care at this stage? Well, I think standard of care is to get to the diagnosis, so what's the most quick, efficient, and cost-effective way to do that? If arrays tell you that you have a structural malformation that accounts for it, you're done. I think that having, as we heard from Steven, the ability to move rapidly and get a rapid result from an exome would be enormously productive. Has it been proven to be the cost-effective way? Not quite yet, not quite yet.
And I think the other thing, as I again tried to drive home, is that even if you find that variant, we have to have a nuanced interpretation of what it means for that child.

Okay, that was a little scary for a second; I thought it was gonna explode. So I had a question for Steven, and even for Gail; we discussed it a little during the intermission. It was really striking to see that this is a subset of patients in the NICU or the pediatric intensive care unit, and it brings you back to newborn screening: who should be sequenced, when they should be sequenced, and then the efficiency, because when these kids are in intensive care units, they're usually already in crisis, right? So can you comment on the role of the private sector in exome sequencing? Certainly there are a lot of companies out there taking more of a consumer-based approach to see if they can't break into that market. And I was particularly struck by your diagram of where the sequencing and the diagnostics are, as far as the early adopters, at this point. If there's a profitability margin in there, that's gonna take off really quickly. So do you have a comment on when you think the private sector is really gonna play a big role in this?

That's a good question. I think the impediment to the private sector just jumping right in has been that reimbursement has been really troublesome. I think the clinical utility is clearly there, at least in these retrospective studies, and cost-effectiveness is starting to be shown; again, it's retrospective analyses, and for selected cases. Extending that to newborn screening is a whole different world, and the formatting and pre-test probability issues mean that that's really a separate subject. Remember, in the NICU and PICU, we're chasing a phenotype, so we know when we're done; we know when we have a cause. In newborn screening, we're looking at asymptomatic, typically healthy individuals, and so we don't have any functional guidance to help us interpret our findings. So I think all of the things that I showed you, in terms of this being mature and ready to go scientifically, are for that specific situation of an acutely ill kid who's believed to have a genetic cause of their phenotype. I am really, really concerned about direct-to-consumer offerings starting to come up during pregnancy and also for kids. And this is a reason, I think, that it behooves us as the scientific and medical community to really accelerate our efforts in this area, because I think there could be some tremendous disadvantages to seeing this escape from the medical mainstream and become direct-to-consumer. That's my major fear: it is so obvious that some of these things are ripe for utilization, and as a group we are so slow to implement.

Yeah, I guess I'll make a comment about the biochemical part of newborn screening, because I'm also a clinical biochemical geneticist. I don't run a lab, but I think it's really important to be doing research on newborn screening by whole exome and whole genome. But in terms of making diagnoses, we've had 15 to 50 years of experience biochemically, and while we do have some biochemical data that we're not sure what to do with, where we often go to sequencing and look for pathogenic variants, it is much better than trying to look through all the missense variants you're gonna get at the genomic level. So I think we need to look at it, but I don't think that in the short term it should replace biochemical screening. I don't know about every state, but in Ohio we typically have a five-day turnaround, often done in 24 to 48 hours. So I think that should remain the gold standard for now.

Yeah, because almost all the talks this morning focused on how to use genomics data for diagnostics.
So I wanna raise a slightly more far-fetched idea: using this for intervention. Especially from Steven's talk, a lot of these neonatal, very pathogenic mutations cause Mendelian disease, so there could be a mechanism for fast-track therapeutics, gene therapy or genome editing, to really save these kids from dying or from developing really serious phenotypes.

I totally agree. I think that the moment we solve the diagnostic bottleneck, we create a whole wave of new therapeutic opportunities. These are kids who traditionally just wouldn't have been ascertained in time to create a market. In terms of both the numbers and the timeliness of ascertainment, we're solving that bottleneck, and that will make many, many orphan diseases viable for therapeutic development. I think it's gonna be very exciting.

I would only add, as the half-full or half-empty cup for congenital heart disease and its association with NDD: if you know a child who has one of these malformations is at higher risk for neurocognitive issues, the sooner that child can get interventions. And I would suggest that if the autism spectrum community, not the physicians but the community, has taught us anything, it is that early and consistent intervention has made a huge difference in the learning capacity of these children and their social and behavioral development. So we have to look at these as potential problems with answers that aren't necessarily at the drug-therapeutic level.

So I guess we have a few more minutes to keep talking, and I have a question that maybe raises things up a level: what can we do to make the connections between what we're seeing in the clinic and going back to fuel more basic research, so that we can continue the dialogue in certain areas? We heard a couple of suggestions from people about resources that would be useful, and I just wanted to ask if there were other things that anyone around the table could offer.
I mean, we talked about the different assays, trying to figure out how proximal they were and how to infer from them relative to the different diseases. We talked about resources for genes or gene domains that are particularly important to look at, so that if you found something in those genes you would be able to learn something by going to a data resource of some kind. But are there other kinds of resources or tools that we should be thinking about, whether from Cricket's specific example or other examples, that would be useful to develop? Or ways that we can make this conversation more efficient across the board, besides just going disease area by disease area?

So this is completely unrealistic, pie in the sky, but as I've been listening I was thinking: if we could develop some sort of standardized approach to take all variants that are being submitted to ClinVar and run them through a standardized set of functional assays, introduce them in yeast and other model organisms and so on, and then aggregate all the data, as opposed to relying on hit and miss: did I find an investigator who's interested, what did they do, and did we actually report what we found? That is a huge ask in terms of resources and capacity, but if we had a standardized assessment of variants whose data we could then aggregate, I think that would be incredible.

So maybe take one step back and ask the question: can you define a minimal set of assays that you would use? I mean, I just think about the genes that Cricket just talked about, the genes I worry about, the genes other people in this room worry about.
I'm not sure a single assay, or a single small set of assays, would be all that informative, but it would be worth thinking about whether you could define a set like that to start with, and there are some generic things you can think about that I think we're gonna hear about over the next day or two. I think the next step would be convening thought leaders to say what that would look like, and then, in addition to defining that minimum set, asking what results from that minimum set might promote variants to the next level of assessment. That would be the way I would conceptualize it.

I was just gonna suggest, as opposed to Mark's suggestion of doing every variant in every functional assay, not that that was exactly what you were suggesting: is there a way of developing something through Matchmaker Exchange or something similar, where instead of a Phenomizer you could have a "functionizer", and at least send out a query for anybody who is actively studying the function of a given variant in a given gene? It seems like that would be relatively tractable, something even Mark could afford to do.

So, human patients are the best phenotype assay we've got, and to my mind what we need to do is constrain the space at the genetic level that might be influencing phenotype. To say background genotype influences a mutation, we get that. But which part of the background genotype? I think these pathway analyses at least give us a handle. So what happens, for example, in my adult field of sarcomere gene mutations, when you have a pathogenic mutation in myosin and you have 75 other relatively rare variants in the troponins, in myosin-binding protein C, in titin, and the like? What happens to that patient versus a relative, or another person with a similar, or better yet identical, mutation and their own local background genotype, not at the genome level but at the functional level?
And can we begin to think about clinical phenotypes not as an N of one variant but as an N of one clustered in, if you will, the systems-biology webs that allow you to see interactomes that might be influencing phenotype? What I like about that strategy is that it doesn't require an assay; we've already done it. We have the clinical phenotyping, and we increasingly have the genetic information about genomes and exomes, and hopefully we can begin to interrogate them in a clustering way that is informative of phenotype. I'm not sure if that's clear, but there it is.

Yeah, a question for Cricket and Steven. How universal do you think the environments are for other researchers, to enable them to do the kind of research you just talked about? In other words, are you in such special places, with access to the resources and the personnel and the specializations in computation and genetics and access to clinical information? How unique is that? Is what you're talking about something that any researcher could do if they wanted to? And if the answer is no, what do you think the main gaps are that we might be able to address with, say, the kinds of things Mark just talked about?

So I think we'll have completely different responses to your question. My worldview is of doing research in order to enable a new way of practicing healthcare, and so my peers are MDs and hospitals and healthcare practices. So if we ask: is what we do locally extensible to other places?
I think some answers are yes and some are no. We need to address the gaps I mentioned in terms of educating physicians, providing additional software tools for automation and computation, and process engineering to shrink-wrap what today is a very researchy pipeline into something robust and deployable by any hospital with traditional data-handling capability. So to me, research is something we are doing to get to that phase, and I think it's more implementation science, clinical trials, and computational research than some of the other things. I think we're doing pretty well in terms of the 4,500 disease genes for which we know the molecular basis, but there's this huge gap between that knowledge and putting it into clinics all over the country.

I think it takes a village: you need computational biologists, you need phenotypers, and you need really good clinician-scientists, and right now that happens in academic healthcare centers. I think that's where this kind of research belongs, and that's what it is; it's not clinical delivery yet, though it's very close. It's one of the items on the problem list, but it's not yet a diagnosis, and I think that has to do with the nuances of what we see. Yes, we can all recognize a loss-of-function mutation; but what does it mean for that patient? That's where the rubber hits the road, so understanding that is more research. I'm not sure if I'm answering you, but for the community-based practitioner: should they do an exome on any child with congenital heart disease? Well, not unless they're willing to pass it off to someone who can help them interpret it, because right now I don't think the interpretation would be good medicine in those hands. But is it an opportunity to expand research? Yeah, absolutely. And from those kids, by referral and genome collectives, I think we'll all learn a lot.
So the other question is: do we have the bandwidth? Do people like you have the bandwidth to take on more? Do we need more such centers? How do we actually ramp this up to deal with the scale of the problem we face, if we need these sorts of specialized centers to do so?

So I think what I'm alluding to today is really specialized disciplines in every traditional aspect of medicine. I mean, we all know about referral patterns in every community into a more tertiary or quaternary medical institution, and if we build genomicists into that, I think people will simply follow the usual clinical referral pathways, but now with the enrichment of having genome science incorporated in the interpretation of the clinical presentation and course.

Okay, we've got Rex, Gail, Howard, and Gail. Can I have a last word? So I think one of the things that strikes me is that this really, at some level, becomes a big data integration problem. For example, there's a list of genes whose functions we know and a list of genes whose functions we don't know. Maybe we need to be focusing on what we should be doing to understand the function of the genes whose function is unknown, because often GWAS hits and other kinds of hits fall in those genes. So that seems like one big program we should think about. And then on data integration: like several other people in this room, last week I was at meetings focused on genome sequencing programs at NHGRI, and I left with a real theme that one of the things we have to struggle with constantly is looking under the lamppost, where there's light. In the case of genomes, what everybody recognizes when they think about a whole genome is that there's not enough power in a whole genome to decide whether something is really statistically significant or not.
So what they do is figure out how to constrain it, so that they're not looking through the whole genome but only part of it. We've seen a couple of examples of that today: using a transcriptome to narrow down the genes that are important to look at, using temporal expression, or using where something is located. I'm sure there are a whole host of others that are big projects but, if you did them, could inform all of these. So, for example, if protein X is missing, how does that affect the rest of the proteome? That's an interesting question that we should think about at-scale ways to address. And if we can think about all those at-scale approaches and then about integrating the data, then the people who have a mutation in some gene that we don't know anything about actually have some ways to constrain where they need to look. I think that would go a long way. All of those are big problems, but we should be thinking about those kinds of data-integration activities.

Very quickly: while we work toward getting more and more understanding of the phenotypic significance of a variant, which I think is a lot of what we're talking about, maybe we should also give some thought to the fact that we'll probably never totally understand a lot of this, and that we need to develop tools for communication and for living in a world of ambiguity. And I guess one thing we're thinking about: somebody asked earlier about direct-to-consumer testing, and there's lots of debate about whether that's a wise thing or not. But at least from my perspective, one thing they've gotten very, very good at is clearly communicating information.
And if we in our clinical world put one one-hundredth of the effort they put into clearly defining things, what color the screen should be, where you place things on the screen to enhance understanding, I think we'd be further along in terms of the quality of our reports. So my point is that developing tools, apps, and other things that help communicate the inevitable ambiguity of genomic information is another area where effort could be placed.

Yeah, I think there's gonna have to be a lot of choice in which genes you invest in to find the variants. It's one thing to find the genes, but for us now a lot of it is: is this variant in this gene pathogenic? I've talked to someone at a little startup where they've taken every base and amino acid in BRCA1 and modified it, and they've developed a yeast assay; assuming they can show someone who knows this field that the phenotype in yeast will predict pathogenicity in humans, that's incredibly powerful. But when you come to autism or human behavior, I don't think in most cases that's just gonna work, and so then you're left with mouse or other kinds of assays, developing neural stem cell models or organoid models. A lot is gonna come down to making choices. We're not gonna be able to do it for everything; we're gonna have to pick the genes carefully and the models carefully.

Okay, Howard? So I think, Cricket, your point that it's not quite clinical is not actually the case. There are people ordering this all over the place, and we don't have any way of putting a boundary on that. So my concern is that if we don't move faster, to Steven's point, it's gonna be filled by consumer genomics, and it's happening more and more.
I think the leap between Helix doing consumer-only and somebody plugging in an app to analyze your genome is an app away. One of the challenges we have as a medical community is how to be responsible, as you're saying, which I completely agree with, and still get this out there in a way that people can use. As long as we can't solve some of those problems, we're not stopping it; it just continues to go. So we can debate what the right answer is, but the reality is that it's happening, and speed, to Steven's point, is critical, and we can't get it all right. There's no such thing as a right answer in all of this, so I don't know what it is, but I think the horse is out of the barn, and we're trying to figure out how to add some more context around it and do it in a way that's responsible, but we can't be slow.

And the last word, because we're already five minutes over. So I was just gonna say, this is an obvious information-content problem. You'd need everybody with hypertrophic cardiomyopathy who has ever been born on the planet to be able to look at modifiers. We need a system that is completely integrated with our healthcare system to ever be able to do this, and I just don't see any way around it. The scale of what we're talking about is so massive that unless we're actually using all the information we have access to, and, as Les says, and Mark said the same, choosing the other data sets we need to gain access to, we'll never be able to deconvolute this.

Well, that's no small challenge. We'll close the discussion, and I apologize; I know we weren't able to get to everybody. We're five minutes late, which is my standard, so I'm gonna stop, and we'll go out and take a picture; Teji is gonna direct us.
Yeah, so we'll just head out this door. The short people should be in the front, and short is defined by me; I'm five-four, so if you're five-four, be in the front. Lunch is behind us, and we'll be back at 1:30.