Okay, so following up on the discussion earlier this morning, the way I wanted to start framing this is to think a little bit about the existing investments in functional genomics. We can look at GTEx, which, even though it's not an NHGRI-funded project, is producing a lot of data that's going to be relevant to the functional annotation of human genetic variation. We can think about ENCODE, and we can think about a project like FunVar, which is still pending: a series of linked R01s. And we can ask, what are we missing from this kind of portfolio? From Jay's talk yesterday and from the discussions, what came out was a view that it would be great to come up with a systematic, reproducible, and accurate functional annotation of human genetic variation. There are what we call the top-down and the bottom-up approaches, which are not either/or but rather a question of how much you invest in each. In the bottom-up approach, you start from existing variants whose function you'd like to understand because they've already been implicated, either by genome-wide association studies or by rare-variant association where you've collapsed a bunch of, say, putative loss-of-function variants and you'd like to see whether these really are all loss of function. You go through and annotate those. The top-down approach, and this is a straw idea, would be to make all 3L changes, where L is the length of some element you're interested in: all single-step changes for some functional element of interest, all coding-region changes in a protein of interest, cis-regulatory elements, and so on. You're not going to get at that from the existing set of projects, so we need to think about how we might be able to do it. That's the first set of questions.
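The "3L" arithmetic above can be made concrete: an element of length L has L positions, each of which can mutate to 3 alternative bases, giving exactly 3L single-step changes. A minimal sketch in Python (the element sequence here is a made-up placeholder, not anything from the discussion):

```python
def single_nucleotide_variants(seq):
    """Yield (position, ref, alt) for every single-base change of seq."""
    bases = "ACGT"
    for i, ref in enumerate(seq):
        for alt in bases:
            if alt != ref:
                yield (i, ref, alt)

# Hypothetical cis-regulatory element of length L = 7.
element = "GATTACA"
variants = list(single_nucleotide_variants(element))
print(len(variants))  # 3 * 7 = 21 single-step changes
```

For a coding region one would enumerate amino-acid substitutions instead, but the scaling logic is the same: the variant count grows linearly with element length, which is what makes the top-down, make-everything approach conceivable for individual elements but demanding at genome scale.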
And the second one is to deploy functional genomics in the context of understanding the biology of health and disease: Aviv's incredibly visionary talk yesterday on the cellular atlas, or the idea of molecular phenotyping of cellular response to drugs, environment, and so on, or comparative functional genomics (how do we compare human, mouse, and other cell types and all their constituents?), as well as what Eric was talking about in reconstructing cellular wiring diagrams. As for the challenges to implementation, the first, I would say, is reproducibility, reproducibility, reproducibility. How do we do this really quite systematically so that the data makes sense, we understand the biases, and we don't end up with a bunch of experiments that can't be analyzed in ensemble? The other is that measuring, quote, functional impact is very different from saying, I'd like to systematically sequence the exome, or systematically sequence the genome from telomere to telomere. It's much more of an interpretive question, and it's incredibly context-dependent: Are we talking about different cell types? What's the readout? What's the environmental context, and so on? It's a different kettle of fish, as it were, from what we would traditionally think about in a large-scale sequencing program. Another potential challenge is that multiplex gene editing needs a lot of development before it can be fully scaled up; it's not something most people would say is ready to churn, where we could easily make those 10 million variants. And the last is that computational models and paradigms, as we've heard this morning, for this kind of data integration and knowledge extraction are going to be important.
So with that, I put this up as a straw proposal that folks can shoot at, and I've broken it out into three scales at which you could imagine thinking about functional genomics. One would be the typical R01 catalysis model, where we might put FunVar, which is still pending: a series of linked R01s focused on technology development. This is typical of NIH programs where a technology is not ready to scale up but we'd really love to see it get better. The advantage of this (and again, this is my own thinking, it's probably wrong, so feel free to shoot at it) is that it's a very broad possible portfolio: you can invest in lots and lots of things, succeed or fail quickly, figure out what's working and what's not, and that could work very well. The disadvantage is that it's clearly not at scale, so it's rather unresponsive to the task. And secondly, it's unclear how the data generated by these linked R01s is going to be coordinated and shared, so the reproducibility, reproducibility, reproducibility issue comes up again. But this is where you'd want to be if we think the technology is not ready but we'd like to continue investing in it. The second, and again I'm just trying to put things into buckets, would be what one would call a consortium-focused effort: a set of U01s, U41s, whatever. Possible projects: the human cellular atlas; CRISPR-Cas9 for a bunch of clinically relevant or interesting genes; or, when ENCODE 3 comes up for discussion, maybe we refocus it and make it much more about annotating human functional variation; or comparative vertebrate functional genomics; or your favorite large-scale functional genomics project.
As a consortium, there's a set of tasks we really want to take on, and we put the consortium together through the typical NHGRI competitions: very bread-and-butter, what NHGRI knows how to do extraordinarily well, probably better than any other institute at NIH. The advantages: it's going to be at scale for the focused question of interest, for that very narrow slice where you should be able to do it at scale; standardized data sets and tool development; and broad community engagement. One disadvantage, I would say, is that it's a zero-sum game. Whatever you put into this set of consortia, say the human cellular atlas, will come at a cost to some other set of things that are currently being funded and would need to get wound down in order to free up money for it. That's one potential disadvantage from the NHGRI point of view. A second is that it can be narrow: even though you're going to go deep, it's a very focused set of questions you're going to go after, a set of experiments you're going to undertake. Then the last idea one could put forth is functional genomics centers that are counterparts to the Mendelian centers or the large-scale sequencing centers, what we've been talking about as a focused effort of people who come together to create resources. Here the main advantage would be reproducibility and scale, but across multiple questions and technologies. That's what's so great about the current programs: you have capacity and you have expertise, and you build so that community projects and investigator-initiated projects can be catalyzed, and you don't necessarily set out all the experiments you're going to do from day one in the way that you would in some of these other consortia.
Obviously one of the main disadvantages here is, again, the cost, and the other is that the technology may not be ready to be scaled up. One could also argue that functional genomics is a totally different beast from setting up sequencing capacity, that it's so amorphous that these functional genomics centers aren't going to work if you just set them up as capacity for the community. Okay. With that, I'm happy to leave it up and open it up for discussion. I don't know if Elise and Mike have additional questions. All right. Mike?

Yeah, I'd like to lobby for, I guess, the second option, the consortia-focused effort, and possibly the third, because some of these technologies are reasonably sophisticated. If you wanted to do high-throughput stem cell knockouts, for example, it's not completely trivial for any particular group to do, but it could be scalable and done in reasonable ways; I threw that out as one example. There are other model organisms and things like that that could take this on as well. And I think you could benefit from the expertise of groups that are good at this and do lots of these assays. The advantage of a consortium-based effort is that you can bring in other expertise to help each other out, set standards, and deal with this whole reproducibility issue, as you say. That model worked quite well for ENCODE, for example. I think it helped us drive standards; before that, a lot of the data out there was of varying quality, and I do think the consortium helped. So that's probably why I'm lobbying for that, with a little bit of R01 mixed in, because there is a lot of heterogeneity, and we don't always know the best way to do some of the kinds of assays we'd like to do. So I'm very supportive of this, as you might imagine. That's my two cents.

Sharon?
Yeah, I think there's still a fairly large role for R01s, as we talked about yesterday. There are a lot of functional assays that people haven't really demonstrated have any association with clinical phenotypes, and there are a lot of very creative ideas out there about how to do them. I do think the sharing part needs to be mandated very clearly in any kind of RFA: how the outputs need to be shareable. And NHGRI may want to think about standards for that. But I think we're still at the frontier of figuring out the best assays, even the best way to use CRISPR-Cas, and I would hate to see it centralized too early.

Yep. Bob?

Yeah, I would echo some of these comments. There are a lot of different approaches that need to be tried at this point, but there are some that are ready to go at scale. I liked the ideas that came out in the evening session, when people were offered the opportunity to do something with $10 million. I think it would be a great RFA to ask people to put out their best ideas on the scale of $1 to $10 million and see what came out.

So, just to push back a little bit on that: in theory, if one had the functional genomics centers or large-scale centers set up, the allocation could well include that sort of community set of projects. You could compete it and say, given all this capacity, what is the set of experiments that could be put forth through it? In the same way that the white papers were written for doing the vertebrate genomes, right? You set up the capacity and you let it loose. The question is, can one create a generalized functional genomics capacity? And the answer may well be no.

Well, my sense from the discussions we've had so far is that there are lots of different aspects to that, and you want to develop multiple different paths. And I haven't seen the sequencing as the limiting factor in any of this.
I mean, it's the expertise in the particular functions and so forth, whether it's stem cells or multiplexing assays like Jay talked about, or whatever. It'd be great to see how many different ideas you could come up with.

I would tend to agree with you. I just put it up as a straw comparison. Eric?

Yeah, no, I also am dubious that you could create generic functional centers, because I think what drives this is the passion of some set of people who care about a methodology and are going to be expert in it. So I would favor the consortium efforts, in addition, of course, to the R01s, because for many of them there's a possibility of getting other institutes to go in on it. So it's not a wholly zero-sum game. You did phrase it correctly: whatever we put into it can't be put into something else. But we can get a lot of leverage there. If you did, I don't know, the first one, the cellular atlas, you would say, we're going to do this, and if you want any blood cells in there, NHLBI, you could contribute X. And if you want some cells in the brain, there are some customers for that. Similarly for CRISPR-Cas9 for clinically relevant genes. So I think things like that, that are not centers kind of ossified around one approach, that stay very flexible at this early stage, but that are organized enough that you could get other institutes to pay perhaps 75% of the share by saying three institutes get to pick their tissues or their problems. I think there's great possibility in that third model.

Especially if you could make the argument that this is a sort of trans-NIH effort. I could imagine the Common Fund being very interested in a project such as the human cellular atlas, right? You have two shots on goal: you do the Common Fund, or you find three like-minded institutes. Whichever; try both.
Yeah, I have to say, it'd be great to hear from Eric his thoughts on the feasibility of such deals these days. But I don't know if you want to comment on that. Okay, David.

I want to come back to something that was discussed in an earlier comment about focusing on how we exchange information. What are the requirements for information deposit and collection? How do you measure quality? One of the things I've observed about our community is that there are a lot of advantages to the Human Genome Project as a model for all things, but it was a heavily managed activity, and I'm not sure that what we need to do is manage the implementation, manage the choice of things. We could have a lot of the diversity of the R01 kind of model, but figure out how the data is going to be useful and how it's going to be quality controlled, without telling people what to collect or managing how they collect it. And if they don't meet certain standards, there are consequences. That's different from an R01, where everybody knows you get your money and then what you do for the next five years is pretty much what you think makes sense to do, and from a heavily managed thing where someone decides what you can do and tells you how to do it. You've got to work out the criteria for success that will enable the things we need to have (quality, shareability, et cetera) and not mandate the rest.

Yeah. Personally, I love the idea of some investment in R01s to really flesh out and move forward the best ideas. The clearest question there is how useful the resulting data is going to be, and maybe it doesn't need to be useful; maybe it's about getting the technology developed, and then you scale it up in the next phase. Yeah.
So I also like the idea of consortia-focused efforts, because in terms of scale and impact, I think that's where the biggest bang for the buck lies. The R01s are important: if you think of CRISPR-Cas9, for example, we still need a lot of basic research on how to avoid off-target effects, and that's going to be critical for the impact of that technology. One of the things that we, or NHGRI, would need to think about with respect to the zero-sum game is whether there are investments going into resources and efforts that could be leveraged or redirected to support these new consortia-focused efforts, or whether you have to start from scratch for everything. I'm thinking particularly of informatics infrastructure that could actually support some of the consortia efforts by redirecting.

You know, for those who have watched the council discussions or know about the budget discussions, this really is an issue the institute is grappling with: there are so many things the institute would love to do, so many great ideas that get floated through, and it's a question of whether you do A or you do B, or each at a half, and whether that's the right scale at which to invest. It's always great if you can get other people to pay your way, but if that's not an option... that's why I asked Eric about the feasibility of those sorts of things. Other comments, questions, reframing issues that are left out?

Yeah, just to second David's point: I feel like it's premature to have this be overly managed. Maybe that's what you were saying.
I think for whichever of these goes forward, having a considerable degree of flexibility will be important for assay development and technology development in this phase. If you overly manage what people are going to do, I think it's going to limit what happens technologically.

Yeah, my question was more: is this 2008, and we're talking about exomes, and it's a question of how we take a technology and really scale it up so that it's reproducible, and get to where things are? Or is there still a ton of work to be sorted out? It sounds like we're somewhere back in 2006 or so. Yeah, yeah. Ewan.

I think that conversation is always very tricky, because very often the large scale, or people going for it at scale, is the way the technology matures. So judging that point in a technology's trajectory has always traditionally been very, very hard, and you ideally want to invest right at that tipping point. So NHGRI, or the funding agency, has to bet one or two years ahead of the tip, right, and then catalyze the tip to happen.

Right; that's why you're putting it out now, in hopes that in a year and a half or whatever it all works.

And I think single cell and CRISPR-Cas9 are two things that feel ready to scale to me.

Yeah, to play devil's advocate, nobody's talking about doing genome-wide TALENs anymore, right? So the question is, is this the right time? Anyway, just to...

Yeah, I know, I understand. The other thing I wouldn't lose in this, though, is this business of carpet-bombing proteins with changes and testing them. That's another one. I feel like there's a community that really needs that.
Well, but there are people who are doing it; lots of groups are interested in that problem. And once it gets done for a couple of proteins, you can scale it up and improve it, in the same way that exome capture used to have the advantage over getting a 1x or 2x genome; now that's no longer an advantage of exome capture.

So, on this consortia-focused effort and the amount of management: it seems to me you could have light management, where you absolutely require data release, data sharing, quality assessment, and mutual evaluation, so that you can see what's working and what's not, without placing a big damper on the creativity. And, as you just mentioned, for the R01s you don't know; there's not even a requirement that the data be released.

I mean, other institutes use program project grants for that kind of thing, right? So are we asymptoting in terms of comments, other questions, concerns? Otherwise I'm happy to move on to section three. Okay, Mark and then Jay.

In looking at your spectrum here, I just want to make the point that I think this maps differently onto computational versus experimental efforts, and we can probably see the implication: a lot of times the computational efforts can be dispersed but centralized, in a somewhat different way.

Agreed. Jay?

I'll just add: no matter what, I think R01s are a critical part of this. There are a lot of creative young people out there doing technology development who I think will be instrumental for this evolving over the long term, and it's important to support them.
And in the spirit of being creative in our fundraising, perhaps a joint NHGRI-NIGMS program in this might make a lot of sense, because it's squarely between the things the two institutes do. You can imagine a bolus of R01s focused on this; I'll just put that out as a suggestion.

All right. Thank you, everybody. Group three? No; thanks, Carlos. Oh, it's actually lunchtime. Please come back here at one o'clock for the next session.