So I think that actually segues nicely into the few summary slides we put together. Rex and George have talked about our work in phenotyping, really starting with that research-grade, multimodal approach over longitudinal records, and Rex reviewed the number of phenotypes we've proposed across different networks. It's interesting that when you look at PheKB, about half the phenotypes are eMERGE ones that other networks are using, which is great, but these 75 are mostly what happened in eMERGE I and II, and most of these 27 actually aren't on there yet. So it's both a superset and a subset, which just highlights that a lot of the things we're doing aren't actually captured there. These are all validated phenotypes, and most are multi-site validated, which addresses that transportability question. Both Ken and George talked about common data models and standards that can hopefully accelerate the process. I think we're actually moving there: we're creating a common variable dictionary that simplifies the covariate process a lot and addresses the question of how much work you need to do to get a good phenotype and participate in large meta-analyses. For instance, we can now turn a lot of those around really quickly because we have standardized high-level phenotypes. We've also talked a little about machine learning, the idea of acceleration, and some of the tensions there; Marilyn highlighted nicely, I thought, the tension between accelerating and exploring new kinds of things. I wanted to show a few slides of examples that touch on these points. Actually, it's not a few slides, it's one slide. This is the content of PheKB, and you can see that among the 154 phenotypes, most use billing codes, but a lot of them, 42%, use some sort of text mining. I think that relates to the work that follows.
We've done a lot of innovative kinds of phenotypes that highlight the EHR, and I think a lot of us would say we don't want to lose that in the next round by doing 200 phenotypes that are maybe not so interesting. We'd rather continue to do things like this: major adverse cardiovascular events on statins, where we have a locus that predicts failure of statins independent of the LDL effect of statins, which could be a drug target. That may have a real impact. Phenome-wide association studies were highlighted by the RACs; maybe you could say that's 1,800 phenotypes, and it works out surprisingly well. We tend not to talk about those as phenotypes, because we can now push a button and have a module that runs them. Then we have the ability to do other kinds of things, like the work Columbia and we are doing on clustering phenotypes, and other projects like it, where we can look at disease patterns that we don't see otherwise; you really can't do that without the dense longitudinal record. That highlights the trade-off between the complicated phenotypes and the simpler ones, and where we think the greatest value for eMERGE would be. Generally we've been pursuing the more complicated ones because they tend to be more scientifically interesting. We use a lot of multimodal phenotypes, and none of the current quality standards support NLP. We can also do great stuff with simpler methods, and we want to continue to accelerate that, as Richard pointed out. We've had a lot of phenotype innovation along several dimensions: in addition to what I said before, the optical character recognition that Marshfield pursued, a lot of portable NLP modules implemented in KNIME and other resources, and deeper phenotyping.
I think this ability to go back to the patient is something we're seeing in eMERGE III, and in some of the eMERGE II work, as an extension we've talked about, both in deeper medical-record dives and in going to the patients themselves. And I think we're seeing different kinds of phenotypes emerge, no pun intended, with the sequencing data. So those are the summary points we came up with. Marilyn, do you want to add anything? Two minutes left before the session ends. Are there any other questions? Rex?

I just might comment on what Josh said. As we think about the trade-off between easy and hard, or complex, phenotypes, one of the things we might want to consider is phenotypes that have the greatest impact on health.

Yeah, I think it's a great idea to set some priorities, but I also want to interject the notion that for some of these phenotypes, when we start working across these systems, it may not be enough to have one definition, because the right definition really depends on the use. We talked about three different uses of diabetes, and we have a handful of different definitions of diabetes. It's not that there's one definition of diabetes and then we're done with that phenotype. If you want to do a GWAS, the definition is going to be highly specific. If you want to do a quality assurance task, say seeing who's getting their flu shots, that definition might capture only the extreme cases. If you want to know the prevalence of disease, then just counting those with definite disease against those who definitely don't have it might not give you the right numbers. You actually need to build into your machine that spits out diabetes the option to dial it up and dial it down, depending on the question the investigator really wants to answer.

Terry? Could I just, just one comment? Okay. In terms of other things to focus on, if we could try to focus on things that are sort of genomic-y, that would be nice, since we are in this room.
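The "dial it up and dial it down" idea can be sketched as a phenotype algorithm that emits an evidence score rather than a yes/no label, with the case threshold chosen per use case. This is a minimal hypothetical illustration; the function and field names (`phenotype_score`, `dx_codes`, `hba1c`) are invented for the sketch and do not come from any actual eMERGE or PheKB module.

```python
# Hypothetical sketch of a tunable phenotype definition.
# A toy "diabetes" score combines several kinds of EHR evidence;
# the threshold dials specificity up (GWAS) or down (prevalence estimates).

def phenotype_score(patient):
    """Fraction of evidence criteria met. Real algorithms would also
    weight codes, labs, medications, and NLP-derived mentions."""
    evidence = [
        patient.get("dx_codes", 0) >= 2,      # repeated billing codes
        patient.get("hba1c", 0.0) >= 6.5,     # diagnostic lab value
        patient.get("on_metformin", False),   # medication evidence
    ]
    return sum(evidence) / len(evidence)

def classify(patients, threshold):
    """Return IDs of patients meeting the phenotype at this 'dial' setting."""
    return [p["id"] for p in patients if phenotype_score(p) >= threshold]

patients = [
    {"id": "A", "dx_codes": 3, "hba1c": 7.1, "on_metformin": True},   # definite
    {"id": "B", "dx_codes": 1, "hba1c": 6.7, "on_metformin": False},  # possible
    {"id": "C", "dx_codes": 0, "hba1c": 5.4, "on_metformin": False},  # unlikely
]

gwas_cases = classify(patients, threshold=0.99)       # highly specific
prevalence_cases = classify(patients, threshold=0.3)  # more sensitive
```

With the high threshold only the definite case qualifies; lowering the dial admits the possible case as well, which matches the point that a GWAS case set and a prevalence estimate should not use the same cut.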
Thanks, genomics. Terry? So there's a thread I wanted to pull on from what Ken mentioned. If you're going to sustain this thing, make it so that the endpoints everybody cares about are also useful for academics. You mentioned that there's tons of money being spent on quality control and cleaning phenotypes. Make it so that this effort is useful to those people, because then they'll spend that money on you. Each institution has its own examples. It could be a real win, because that's 20 to 30 times the amount of money being spent on eMERGE. Great point. All right, we are right at 11:15. Thank you very much.