All right, concept clearance. My brother will be on in a second. So I don't know if Rudy mentioned this. When I was preparing this, I think it was Jeff who pointed out that you're actually all deputized, and this is going to be a concept clearance that is not just for NHGRI, but for 12 other institutes. And so I wrote this thinking that we're speaking for 12 institutes here. As you'll see in a second, one of the main reasons for that is because they're providing almost all of the funding. I've also done several contract concept clearances before, and they're really focused at the conceptual level. So I tried to capture the three concepts that we're essentially looking at here. I would have maybe added Apple Pie and Mom at the bottom, but you can see these are the concepts that we're talking about. This is a renewal of an existing program, but these are the concepts that underlie the contract: Should we have a large central facility that can do things very efficiently and for many different projects? Should the facility be structured in a way that it can retain ultimate flexibility and change as the technology changes? And is it also a good idea to have flexibility so that if we don't need capacity, we don't put the money into it? Those are the key elements of how this contract is set up.

I just want to highlight, it came up earlier today, that I'm a big fan of inter-institute programs. I think it's because of all these advantages: shared expertise, governance, resources. You can pool research projects and you really get economies of scale. It's not something that the NIH does a lot of, though it's doing more of it now than when I first got here. So we're not 27 little islands as we used to be. This is an example of an island that has 13 different members pooling together. Another way to think of it is that we're kind of running a co-op, and for those of you that are more Midwest oriented, you recognize what a grain co-op does.
It allows people to pull their truck up full of the raw material and dump it in, and the grain co-op manufactures it into refined flour with defined characteristics, defined QC, ready to go into products that we all eat. In some ways, this is what this program does. It fills that middle ground: instead of having investigators learn how to do phenotyping, how to get samples, how to do genotyping or sequencing or data analysis, and then figure out what the answer is, we're filling that middle layer for them.

Just to move into some of the historical data on the center, or the program: it was founded in 1996. Bob had a major hand, along with Francis, in setting this up. It morphed a few times in the early years, but the model that I'm going to be talking about now is the one that's been operating probably since late '96. It's currently supported by 13 different institutes; as I mentioned in the written material, we lost one institute and gained one institute relatively recently. The current program is based in an NIH building in Baltimore, leased on the Bayview campus.

This is what the program currently offers, and I'm talking about this historically, because what we're voting on today is what will be going on in the future, and that's a little bit more fuzzy. Currently, they offer SNP genotyping, including custom genotyping. There's a bunch of focused content panels available; I didn't put them all on this list. A lot of genome-wide association studies, as you'll see in a second, have passed through this program. And also next-gen sequencing: right now the services offered are targeted, exome, and whole genome, both deep whole genome and low-pass whole genome. And we also have a part of the program that offers statistical analysis, mostly data cleaning and data preparation for depositing into public databases, not that much final analysis as to, here's what your data told you.
That's usually what the investigators do with the data once they get it. This program has a lot of oversight. There is a board of governors; each of the institutes has a representative on the board. Three of them are institute directors now, I think, so the institute directors themselves are part of the board of governors. There is a CIDR access committee, which is a peer review panel that looks over projects before they come into CIDR; that meets six times a year. I meet with the laboratory staff every other week, with lots of phone calls at odd hours when things happen, as does a contracting officer's representative. This is my better half, who does all of the paperwork; because this is a contract, there's a lot of paperwork. So the two of us actually oversee that area. The NIH budget office plays a role in moving the money around, and we're administratively overseen by the contracting office that's run by NHLBI, and they make sure that we do everything legally, or at least right at the edge of that. I shouldn't say that in a public session.

Here are some trends over just the last couple of years. The number of projects that have passed through this center, and Bob, I don't remember what year you left, was it 2006? 2006, so this is the post-Bob era; I guess this is my era. The number of projects has gone up. The cost per genotype has gone way down, as most of you know, but I put this in parentheses because I really think that the cost per genotype should be going down more than it is, and that's because there's not enough competition in the GWAS setting. We really should have a $70 GWAS chip that's universally available, instead of a couple-hundred-dollar GWAS chip. Potentially, in the next incarnation, our Institute can play a little bit of a role in maybe pushing that. The size of the projects has gone up and up, as I'll show you in a second. Full-time staff has actually gone down, so this program is able to do more with fewer people.
The administrative costs for a brief while were going up, until we screamed and kicked and said you can't do it this way, and then they went down. Sorry, that's the NIH administrative cost, not the cost from the contractor.

Just to give you an idea of some metrics for the program, this is the samples completed per fiscal year going back to 1998. For those of you who can't see, the top of this scale is 180,000. Relevant here is the last two years. Completed samples means the data's done and been released to the investigator; there have been a couple hundred thousand samples go out the door.

Sites: last night I pulled a list of where all the projects have come from, different investigators. And Amy, I know people in Texas are sensitive about this. Texas has one; for some reason the program didn't put the dot on Texas, so I just did. I didn't do all the international sites except Toronto, because it happened to be on this map, but there are international sites as well. Many of these projects are consortium projects or multi-center projects. So we have projects where the samples involve three or four primary investigators but came from 25 different sites, all to Baltimore. The biggest one came from a hundred sites, and that one actually really scared me, and it's been going really smoothly so far.

Number of projects that have been posted in dbGaP: I show you this by institute at the bottom. You can see the big institutes have more. NEI was an early adopter of GWAS studies; they have a fair bit. I highlight this in the written material just to point out that about a third of all GWAS studies in dbGaP came through this program. And probably more than a third of all the sequencing studies in dbGaP at this point came through here, but that's because sequencing hasn't really hit critical mass yet, and those will be coming in from elsewhere. This is to highlight that what the program does is create an automatic pipeline for data sharing.
And it takes the burden, for the most part, except for some institutional certifications, off the investigator. They can actually, as part of their project, say, please share my data. The program interacts directly with NCBI, and we have a data cleaning center that cleans the data so it immediately goes in there, and it lets the investigator focus on analyzing their data and not doing all the sharing paperwork.

Just to give you a snapshot over the summer: there are active projects in the lab, with about 100,000 samples sitting in the lab right now. Over the summer, about 30,000 of those were in data production. That means that at any given time the people could have been touching up to 30,000 samples; that probably represents about seven or eight different projects. So as you might guess, this is a program that relies on automation, computer tracking, LIMS, and all the sophisticated ways to make sure that everything is working well.

Going forward, you might guess correctly that we expect array-based analyses will go down. They have not gone down nearly as fast as I had thought, and we're still cranking out lots of arrays, both custom arrays and GWAS arrays. And of course the expectation is that, as sequencing prices come down to where it's feasible to do epidemiologic-size studies, the number of samples per year going through this program that are sequenced will go up quite a bit. It's just starting; we're probably just at this part of the ramp right now.

I want to end with some parameters of the program as we see it going forward. You'll see they are frustratingly broad and vague, and that's because this is not a lump-sum payment. You're voting on a concept to set up a program that has capacity, and the more the capacity is used, the more funding will go into the program. If it's not used, the funding won't go into the program. So it will be a research and development contract.
It's what we call in the business an indefinite delivery, indefinite quantity type structure, which means the contract doesn't say we're going to buy X amount for the next five years. What we're buying is capacity. The term is not yet decided, but the last couple have been five to seven years. The funds, if no one uses the capacity, could be as low as 10 million over, say, five years, or even less; if lots of people use it, we figure the capacity could handle about 150 million over that particular interval. If you're wondering what it costs NHGRI, it's essentially nothing at the checkbook-writing stage. There's some personnel time, mine and John Garvey's, but really we manage this and we catalyze it for the rest of the institutes that want to join in. We're anticipating open competition: we'll have a contract solicitation, we'll set up a review, and then, since it makes sense for something that's going to be highly efficient and centralized, there's going to be one award.

I didn't mention in all this, since I know some of you are familiar and some of you have projects that have passed through CIDR, how it works for the investigator. The investigator who has the project has their project reviewed, and then, very much like Phil mentioned earlier, they're given a credit. They're not given cash; they're given access, and then the institute pays directly into the contract. So if you're at an institution and you want to use CIDR and you get approval to do so, it doesn't cost the investigator anything directly. That's how this mechanism works. So it was interesting to hear Phil talk about that same model for data analysis.

That's all I have. I know it's been a long day and my brother has to get in and talk. I'm happy to open it up for as much discussion as we need, or additional comments.

Could you comment on comparing what's done here with putting this out to the companies that are offering this stuff? I mean, how comparable is our... Obviously that's changed a lot in the last five years.
Yeah, I can tell you, because we have to justify that against the competition for every single task order. On just pricing, we're about comparable; where we blow the commercial offerings away is in quality, and in charging, in that we don't charge. This program, and we invented this, does not charge for samples that don't produce data. So if you send us 100 samples and you get 98 that have data, you only pay for 98. And we have economies of scale that way. There are all these other ancillary services, like the data cleaning and the data QC. There are a lot of details I didn't mention; for example, every sample that comes in the door is DNA fingerprinted, so when it goes out the back end we know it's the right sample. Those kinds of layers of quality control you don't get with the commercial companies. So we really do push, and I really push, the quality and the economy at the same time. We are not competitive, I'll just say, in whole genome sequencing at this point, because the sequencing capacity is smallish at this stage.

So, your mom and apple pie slide, I think it's still valid today. I really think there is a need for such a resource, particularly for what you call the smaller institutes; that's a good thing. But my question is more technical. In 2015, '16, is it really worth spending several hundred dollars for a GWAS chip and not getting an exome on these sample sets? And when is that pendulum going to shift, so that you, you being science, are better off spending a little bit more and getting the rare variant spectrum in those specialized sample sets, as opposed to another GWAS array chip that is really SNP discovery on somebody else's samples?

Right, so I'll give you the short answer, which is that we do both, and there are projects that combine both. The longer answer has to do with what you just heard for a lot of these other concepts, which are advancing technology and advancing the state of the science.
This program is a little bit behind what we would call the bleeding edge, and so those concepts, and what's going on in the field, are informative for what we'll offer in the future. The real kicker is: if arrays stay where they are, then they'll just go away and we'll just do more and more sequencing. If we can drive arrays down to be really, really cheap as a first pass, there may be ways to combine arrays and sequencing in an economically feasible way. I don't think we're there yet, because sequencing is still too expensive and arrays are still not where they should be.

But I'm surprised that on your sort of menu of things to do, driving sequencing costs down further doesn't seem to be coming up. It just seems like the Institute could do a lot more to help pressure us, the community and the vendors, to drive sequencing costs down further.

We have the ability to do that via volume for established things, but we don't have the ability to push the technology within this program; the rest of the Institute has the ability to push on the technology. You're just talking competitive, yeah.

No further discussion? Can I have a motion to approve the concept? Second. All in favor? Any opposed? Any abstentions? Okay, thank you. Thank you, Larry.