Now, let's move to an update that's actually been delayed for a couple of council meetings: Jeff Schloss. I know there's significant interest in hearing about the latest in our genome sequencing technology development program, which has obviously been wildly successful, and about where it's going. So Jeff Schloss, who's been directing that program for many years, is going to give us an update. And Jeff, just one second. You have some blue folders at your seats containing the conflict of interest document. Would you make sure to sign it? We'll collect it at the lunch break. [The speakers sort out the projector.] Thank you. And now for something completely different. It's pretty interesting to have these two talks back to back. So, I'm going to talk with you about the technology development program for DNA sequencing. I'm not going to go quite as far back as Karen did; I'll only go back to the end of the Human Genome Project, when we finally recognized that we actually were going to cross that finish line and asked what to do next. It was pretty clear to most people, I think, though not everyone in the field, that the sequencing wasn't finished with the end of the Human Genome Project. Instead, there was a lot more sequencing to do, for many, many reasons: for comparative genomics, to understand the human genome, to understand more of the world we live in, and to really start looking at lots of human genomes to understand genetic variation and how it contributes to disease.
Outside the purview of NHGRI, there's a lot of interest in sequencing agriculturally important species and microbial communities, for medical reasons but also to understand what's going on in the environment and how those microbial communities affect our food; and other agencies of the government are particularly interested in detection technologies that use sequencing. So there was a lot of rationale to do more. At the time the Human Genome Project was finished, this was sort of the champion machine. It was the result of quite stunning technical evolution, and it was noticed that during the Human Genome Project the cost of sequencing dropped dramatically as a consequence of a collection of improvements in technology, in automation, and of economies of scale. If one looked at the trend, one could see that it took about 10 years to reduce costs by two orders of magnitude. Estimates are that it cost something on the order of $600 to $700 million to sequence the human genome the first time; but if you had done it again at the end of the Human Genome Project, with the technology to the extent it had evolved by then, it was estimated it would have cost about $50 million. The rest of the $2.5 billion went to sequencing other genomes and to mapping along the way. And in the previous planning process, the one published in April 2003, as you all remember, this was the icon for that program.
And actually at the first bookend meeting, which I believe was December 2001 if I have the dates right, the notion of sequencing a genome for $1,000 was proposed; I believe that's the first time it was proposed, or at least the first time I heard it. As part of this plan, we defined a series of quantum leaps: technological leaps so far off as to be almost fictional. They were characterized as dreams, but ones that, if they could be achieved, would revolutionize biomedical research. As a result of that planning process, we launched a set of RFAs, starting in 2004. We said that at the time, the cost of a high-quality draft genome was somewhere in the range of $10 to $50 million. The goal of this initiative was to reduce that cost by two orders of magnitude on half the historical time scale, that is, in five years instead of ten, and then by another two orders of magnitude in the subsequent five years. That would take you down to roughly $100,000 per genome in five years and $1,000 in ten years. Those were our goals, and the initial awards were made in 2004. I should add that without some sort of reasonable quality metric, a cost-reduction figure is quite meaningless. So a very, very challenging technology goal was set: to produce a high-quality draft, not the finished sequence that the $50 million cost represents. For that reason we went to the lower end, $10 million, as our estimate, and the benchmark was the draft of the mouse genome, which we used because that publication included quite extensive metrics about per-base quality, contiguity, and so forth. We issued the RFA several times, with receipt of applications each year. We've always had R21 feasibility projects and R01 regular research grants, and at various times we rolled small business grants into the program. And, as Karen did for her program, I should point you to where the grants are listed.
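The arithmetic behind those goals can be checked in a few lines. This is a toy calculation only: the dollar figures are the round numbers from the talk, and the smooth exponential decline is an illustrative assumption, not a claim about how costs actually fell.

```python
# Toy check of the program's cost goals, using round numbers from the talk:
# a ~$10M high-quality draft at launch (2004), with a target of a 100-fold
# (two orders of magnitude) reduction every 5 years, i.e. twice the
# historical rate of 100-fold per 10 years seen during the Human Genome Project.

BASELINE_COST = 10_000_000  # dollars; lower end of the $10-50M estimate

def projected_cost(years, fold=100, period=5):
    """Projected cost after `years`, assuming a smooth 100x drop per 5 years."""
    return BASELINE_COST / fold ** (years / period)

print(projected_cost(5))   # 100000.0 -> the "$100,000 genome" milestone
print(projected_cost(10))  # 1000.0   -> the "$1,000 genome" milestone
```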
All of the awards we've made in this program are posted on this website. We've made roughly $52 million in awards for the first component, the $100,000 genome, between 2004 and 2010; the last of those awards were issued in 2008. And we've made about $125 million in awards for development of technologies for the $1,000 genome since 2004, including commitments on current awards out to 2013. That's mostly in R01s and R21s; within that, the overall investment in small business grants is about $11 million. The average annual expenditure in this program has been about $20 million, with the largest amounts in fiscal years 2005 and 2006. In addition to those numbers, about $13 million in stimulus awards were made, though the scope of that program was a little broader than the scope of this RFA. So we're up in the range of $190 million over the lifetime of this program. We've funded 45 academic groups in this time; some of them have received multiple awards, perhaps an R21 followed by an R01, or projects with different aims. So it's about 45 academic groups and 19 companies, and the companies range from startups, to mid-size companies, some of which are vendors of sequencing technologies, to quite large companies including GE and IBM. It's been a very interesting program in terms of the diversity of the grantees. We've produced well over 300 publications in the program and large numbers of patents, and we're actually trying to find out a little more about the licensing. Over the period of the program, we made a wide variety of kinds of investments, from relatively near-term improvements to the way the Human Genome Project was done, to much more futuristic approaches. For some of these, the science was well understood but there were significant engineering challenges.
For others, we were really moving into new areas of science where we didn't yet understand the physics, and that risk-taking has been one of the exciting components of this program. These are just some titles of the kinds of grants that were funded, ranging from polymerase-based sequencing, which is the sequencing by synthesis we'll talk about in a minute, all the way out to a number of different kinds of single-molecule methods. There's a diversity of methods; we've tried, and are continuing to try, lots of different things involving mass spectrometry, force spectroscopy, charge-based sequencing, microfluidics, nanofluidics, and so forth. It's been a quite broad program. I'm not going to try to teach you how to do sequencing, but I do need to talk a little about how the technologies work, to give you the contrast between where we are now and where we're trying to go. The current state-of-the-art sequencing technologies involve sequencing by synthesis or sequencing by ligation. Many of them are ensemble methods, that is, in each feature you're sequencing a large cluster of molecules; some of them are single-molecule. And most of them involve optical detection using fluorescence or chemiluminescence. So let me walk you through this quickly, but not too rushed. For a number of these systems, in order to make these ensembles of molecules, we start with genomic DNA, put little tags on the ends of the fragments so that we can handle them, and then amplify, so that from one fragment of genomic DNA we make a whole bunch of copies. This slide represents one way to do that: you mix the genomic DNA fragments with reagents and little beads that can capture those molecules.
Then you put that on a vortexer and make a water-in-oil emulsion, so that you have little droplets of aqueous solution, containing all the reagents and the beads and the DNA, surrounded by oil. This is, in essence, a very large number of very small test tubes, and inside each of those little virtual test tubes an amplification reaction occurs. So if a droplet contained a DNA molecule and a bead, you end up with a bead that carries lots of DNA molecules, all of which originated from the same genomic DNA fragment. One of the themes here is that the workflow improves a lot, and this is one of the reasons we've been able to reduce the cost and get the throughput increases that we have: you generate little features, in this case beads, that carry large numbers of copies of a particular template for the subsequent sequencing reaction, and you skip the cloning and colony picking, that very laborious component that was necessary during the Sanger sequencing era of the Human Genome Project. You also lose a number of biases in the collection of DNA molecules. This next slide is just an example of another way to make those clusters of molecules: instead of doing it on beads, you build many of those clusters directly on a surface. And this is a diagram of what sequencing by synthesis looks like. On the left is a diagram of what's going on chemically: each of those little blue dots is a feature with a cluster of DNA molecules, each of which originated from a single genomic DNA fragment, and we're going to add one nucleotide at a time, then another, then another. On the right side is what the microscope sees looking at that chip surface. So it just builds up: you do chemistry and image, do chemistry and image, do chemistry and image. You can achieve that by using DNA polymerase to grow that chain.
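That chemistry-and-image cycle can be sketched as a toy loop. This is purely illustrative: the spot names and template sequences are made up, and real instruments contend with dephasing and error, which are ignored here.

```python
# Toy sketch of the sequencing-by-synthesis cycle described above.
# Each feature is a clonal cluster of identical templates, so in this
# idealized picture every chemistry-and-image cycle reads exactly one
# base per feature, in parallel across the whole chip.

clusters = {            # feature -> template fragment (hypothetical data)
    "spot_1": "ACGTTG",
    "spot_2": "GGATCC",
}

def run_sbs(clusters, n_cycles):
    reads = {spot: "" for spot in clusters}
    for cycle in range(n_cycles):
        # chemistry step: each cluster incorporates one labeled nucleotide;
        # imaging step: record which fluorescent color lit up at each spot.
        for spot, template in clusters.items():
            reads[spot] += template[cycle]  # idealized, error-free base call
    return reads

print(run_sbs(clusters, 4))  # -> {'spot_1': 'ACGT', 'spot_2': 'GGAT'}
```

The point of the sketch is the parallelism: throughput scales with the number of features imaged per cycle, which is why making the dots smaller and denser matters so much.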
And that's what some of the systems do. You can also achieve it by doing oligonucleotide hybridizations and ligations; I'm not going to go into those details. By the way, this is a good time for a disclaimer: I'm going to talk about a number of individual technologies that are commercial products, and I'm not endorsing any of them, but in order to have the context it's important to mention these systems, and in doing so I'll mention the people who developed them. So what you get is images that look like this, stacked on top of each other cycle after cycle. And you can see that if you can make the dots smaller, closer together, and more uniform, there's lots of room to increase throughput; that's some of what's been going on over the last several years to produce the remarkable improvements in throughput and quality that have been achieved. Again, to stress the workflow change: we went from runs where large sets of robots handled all these samples individually, with 100 samples analyzed per run, to preparing the libraries without cloning, in a test tube, and then doing runs in which millions, even hundreds of millions, of templates are sequenced at a single time. Now, the duration of the run is different, and I don't want to suggest that making these libraries is easy, but it is a lot easier. So over a period of years, a series of instruments has been commercialized. The dates here are the dates of commercialization of these instruments, and the asterisks mark the grantees; NHGRI has had a role in contributing to the development of most of the current systems, through different kinds of investments. So, by the time we invested in the 454 system, they were already on a commercialization path.
They had a grant from us for scaling, to scale up faster and improve quality faster than they might otherwise have been able to do. Helicos we originally funded through a new-investigator award to Steve Quake, a PI at an academic institution who developed some of the concepts behind single-molecule sequencing that were subsequently commercialized. Somewhere in between was a small company called Agencourt, a spin-out in part from the Broad Institute, that we funded; that company was ultimately bought by Applied Biosystems, which merged with Life Technologies, for the development of the SOLiD system, and so forth. Yes, a question? [Question: Didn't we fund some early work that led to the biochemistry for Complete Genomics, where they got IP that was critical?] Some of the early intellectual property they cobbled together in creating Complete Genomics? I'm not sure; I don't know what that would be. If you're talking about the Human Genome Project rather than NHGRI, DOE did fund some of the sequencing-by-hybridization work that was part of the underpinning of Complete Genomics. We actually funded Illumina very early on, but it wasn't for the sequencing technology; it was for their bead-based hybridization technology, not under this program. While I'm here, I should say that the Polonator system was developed in George Church's lab; that was actually not in this program, it was funded under a CEGS grant, but we folded it in because it made intellectual sense. And Pacific Biosciences we funded very, very early on, as a small spin-out company out of Cornell University, where the graduate students who worked on the program in Harold Craighead and Watt Webb's labs started a very tiny company to begin developing the technology, which was subsequently funded in this program.
So these are just updated versions of those machines, also shown to scale: some are large machines, some are benchtop, and some are essentially desktop machines. The throughput of these machines is so great, and the infrastructure needed to supply them with samples and to get data off and interpret it is so substantial, that most of these companies have developed smaller versions that might be better tuned to different settings. I decided not to fill out this graph, but I want to make the point that each of these systems has different features: the method used to generate the data; read lengths ranging from roughly 50 bases to almost a thousand; how many reads you get per run; how many gigabases you produce; very different error models; very different run times; and different costs. I think that's actually a good thing; it's one of the features that has contributed to the competition around, and development of, these systems. So I think we ought to think of these as tool kits, where you can turn a bolt with a number of different kinds of tools, but we still have room for innovation and lots of new ideas, and, as I say, competition has been really important; I think NHGRI's program has contributed to this. There are lots of different applications, in DNA analysis, RNA analysis, and even analysis of proteins bound to RNA. One of the key features that's different here from Sanger sequencing, and in contrast to most of the DNA array methods that have been used, is that these methods are digital: you can count each of those features, which originated from a single genomic DNA fragment. And we've not waited to apply the technologies until they were completely ready; as they were being developed, they were being
applied right away to a wide variety of programs that you're all very familiar with. This has helped to pull the technology along and tune it to the needs of the users. What are the throughput increases? You can do the calculations of what the throughput of these machines is. At the end of the Human Genome Project, you could calculate that it took about three months with a hundred machines to sequence a human genome. Early in the rollout of these new systems, it took about three or four months with one machine; a couple of years later, less than one month with one machine; and now about a week or so with one machine, and there's enough real estate on the chip now that you can sequence three genomes in that time. That's really quite stunning progress. And you've all seen this curve a hundred times: we couldn't do all those genomes if they cost $50 million apiece, and fortunately they don't; you can see that the introduction of these machines coincides with the timing of this dramatic cost reduction. So where are we going next? Well, the first few slides are of entries that are recently on the market and being brought into laboratories. One of these still uses sequencing by synthesis, but instead of light detection it uses electronic detection, namely pH: protons are evolved as the polymerization reaction occurs. This is another case of scaling; the grant Ion Torrent has from NHGRI came, once again, when they already had a commercialization plan and these systems were going to be coming on the market, and their grant from us is to scale. And this is the announcement Eric referred to in his director's report: this company recently announced that later this year they'll have systems that can sequence a genome in a day for a thousand dollars. We'll see if that happens, but that's what their grant contributed to. I want to be very clear about the investments: we're investing $20 million a year in
this program, while the investments to develop these machines are in the range of $100 to $200 million; we're making a small, and I hope significant, contribution to that and to the development of the field. Another approach would watch DNA polymerase molecules in action in real time: instead of having these cycles of chemistry-then-detect, chemistry-then-detect, you watch the molecule in real time. A few different companies are developing this technology; VisiGen and Pacific Biosciences have had grants from NHGRI to do it. The idea is that if you set up your light correctly, and the different companies use different ways to set up the light, you can see the fluorescently labeled nucleotides in the active site of the polymerase. The bottom figure, from a paper in Science, is just to demonstrate that you can make this highly parallel, and there's at least the possibility of some of the reads being quite long, which would be very intriguing if you want to be able to put genomes together. Not only can you read genome sequence, you can read epigenetic data directly off the machine, without the biochemical conversions we do now; this just shows that the DNA polymerase pauses when it sees a methylated base. Furthermore, they are working on reading RNA directly; this is a demonstration that if you just switch the polymerase in the system, you can read RNA. It's not available yet. Another approach being developed is nanopore sequencing. I have way too many slides for this, so I'm going to go through quickly, but I want to show you the concept. The idea is a reader that's the same size as the DNA molecule, much like a polymerase, but one that could perhaps read without an enzymatic reaction, instead reading something physical about the DNA; in this case, we're going to read the interruption of ions flowing through the
pore. The dream is what's shown at the bottom: the ion flow is interrupted differentially depending on which nucleotide is in the channel. I should say why we would want to do this. You should be able to sequence genomic DNA directly, so your workflow would be simplified, if it works at all; you ought to be able to get very long reads; you could actually do it without destroying the DNA, if for some reason you wanted that; you could probably read modified bases; and you could read RNA too; in fact, the original experiments with this were done with RNA, not DNA. It should be very fast, it should be fully electronic, and you might even be able to make handheld devices; in case you don't recognize it, that's a tricorder. I'm going to skip this slide because I've already said that the workflow would be easier. A number of groups are working on this. When they started off, people said: this is cool, but you'll never be able to make a device with a protein pore in a lipid bilayer, you can't make that robust. So groups started trying to make devices out of hard materials using the technologies of the electronics industry. It's an intriguing idea, and it's been extremely challenging: when they started, people didn't even know how to make holes in materials that were the right size, and this motivated investments by DARPA and other agencies to figure out how to do the fabrication. I'm not going to go into this slide, which shows a slightly different electronic detection method. People have modeled these devices and say that if we could build the device and position the DNA in it correctly, we ought to be able to distinguish the signals of A, C, G, and T. So far that hasn't been done in those kinds of devices, again because it's really hard to fabricate a device with atomic precision and to position the nucleotides correctly. With the more recent availability of the material called graphene, groups are trying to use that
because it should conceivably be easier to fabricate, and it has some of the right electronic properties; I'll just leave it at that. The state of this, as far as I know, is that people have shown they can make devices, they can put DNA through them, and they can detect signals when a DNA molecule passes through; but as far as I know, nobody is yet near being able to sequence molecules this way. The positioning problem, however, has been at least partially solved by an approach that functionalizes the metal surfaces, so that you get chemical fingers sticking out that can hydrogen-bond transiently with the DNA molecule. This is a set of experiments showing that you can make these things, and that with a solution of nucleosides you can distinguish among the four bases; you can furthermore take that to the next step of distinguishing different individual nucleotides within an oligomer. You get different signals as you get thermal fluctuation in the device; that's how small these are. And you can distinguish not only among the bases but also among different methylated bases with these devices. Now, this is not set up for sequencing; it's an experimental system; but if you put that chemistry into a nanopore, maybe that would work. The best device we have so far for achieving atomic resolution is a protein, where you do genetic modification to change the protein structure. This is working reasonably well in a number of groups, which are cited here. This is again an experiment with nucleoside solutions: if you run solutions of nucleosides through appropriately modified pores, you can distinguish A, C, G, and T by their different electronic signals; of course, you put them through one at a time, hence the distinct signals. You can also distinguish methylated bases directly. Then you can move on, and in this series of slides I'm not going to explain everything; the point I want to make is that that was
nucleosides, whereas this is now detecting specific nucleotide identity within an oligo; but the oligo is hanging stationary in the pore, not moving yet. This is just showing that you can do this: here's a more or less random oligo with either C, T, G, or A at a particular position, and you can distinguish them; once again, you can do the same for methylated bases. The big problem with the pore we're talking about here is that it has a very long channel, so there are multiple bases in the channel at once. They've now modified it so that the reading positions are at particular places, but if you could change the shape of the pore so that there is a narrow channel in only one place, maybe that would work better; people are taking that approach too and getting very strong, very well distinguished signatures. So that's with DNA hanging in a pore. Now, what about moving the DNA through the pore? I'm not going to explain it in detail, but that's what this experiment does: it starts to move the DNA through the pore one base at a time, and you can detect that. And here's an easier way of moving the DNA through the pore one base at a time: with a molecular ratchet called DNA polymerase. These are experiments showing that as you change the voltage, and as you change the salt, you can change the rate at which DNA moves through the pore. Now, suppose you didn't want to have a ratchet but just wanted to slow down the DNA moving through the pore so that you'd have time to read the electronic signal. I haven't gone into this, but when you electrophorese DNA through these pores, it wants to move through really fast. If you want a stronger electrical signal, you put in more salt, so you have more ions moving through per DNA molecule; the problem is that with more ions, the DNA also moves faster. So you need some other way to slow down the DNA, and people have shown that you can do that by modifying the charge in the channel or by building a transistor-like device
with very fast electronic switching, so that you would be able to ratchet a DNA molecule through a pore one base at a time. So all the pieces are kind of there, and now people are working on putting those pieces together. Many of the technologies from the groups I've shown you have been licensed to companies; these particular ones are licensed to two companies, and one of those companies put out a press release a week or a week and a half ago saying that they're going to be commercializing their technology later this year. They have a big talk coming up at AGBT next week, so we'll see what they say; this may or may not be so far off, but there are a lot of challenges. Why do we care? Again, you have to slow the DNA down to be able to read a signal, and if you slow it down to one millisecond per base, then with a small array of just a thousand nanopores you ought to be able to sequence a genome in a day. This slide is just to say that we are supporting other kinds of technologies as well; this is an example of a company doing a lot of microscopy, and for the sake of time I won't walk you through it, but they have some preliminary evidence that they're able to see individual bases on the surface. People are minimizing the challenge of DNA sequencing now: oh, we can all do that; the real problems are the analysis, making the libraries, and getting the samples. Those are all true, but I would say this is a good thing; as Eric has said multiple times, we've been successful in moving bottlenecks. We're not all the way there yet with sequencing, but we're moving along. I'm just about wrapping up here. The program has, as I mentioned, been run by launching RFAs, and all the information is on the website. A key and essential component of this program has been the annual meetings where people meet and share a lot of results. We push them to share results; we understand everybody has to have certain secrets, but usually they have a secret for a year, and then the next year they can talk
about it openly. We do actually try to get people to have their IP filed so that they can talk openly at these meetings. There's a lot of competition and collaboration; on the collaboration side, we've published papers together to take a look at the status of the field and its challenges. These meetings are attended by people from other agencies of the government, so we can share our knowledge with them, they share their knowledge with us, they fund our grantees to do things that meet their needs, and so forth. We have published the bibliography, the over 300 publications I've mentioned, and this is all available on the website. Why does this work? The grantees are sharing information; there's a lot they can tell each other, and people learn a huge amount at these meetings. They're really motivated when they come and find out how fast the field is moving; it gets everybody really jazzed. They compete and they collaborate: out of these meetings we've had numerous grant applications come in where people who were competing are now collaborating. It's a great experience for the students, who can learn in this environment and then move from one lab to another. And, as Eric mentioned, we have an open meeting, where anyone can come, on the day after the grantee meeting, so we can spread the joy. This institute has been really committed, and that is absolutely critical. Karen was talking about things that happen best at NHGRI; I think this is really an example of that. The quality of the peer review has been essential to making this work; the program and review offices work together very closely. And this group, the advisors of the institute, have been willing to take some risks, which I think have paid off. And, you know, to some extent we fell in at a great time in the development of the field. So there is discovery, and then there's development, and there's an investment chain to carry this all forward. I would say that the
technologies we've developed and contributed to in this program have really changed the face of genomic science and its trajectory into medicine. I had taken this next slide out, but after Karen's talk I decided to put it back in, because I really do think these technologies span the entire purview of what the institute is trying to do. Thanks; I'll try to answer a few questions. [Chair:] Needless to say, this is the easiest job in the world when you have programs like this; it's remarkable. Go ahead. [Question:] Jeff, you cited some numbers, but what fraction of these grantees are companies, or what fraction of the dollars go to companies, and which ones? [Dr. Schloss:] I haven't divided it up by company dollars. I said 45 academic groups have been supported, but again, some of those groups had more than one grant, so it's not 45 grants but 45 academic groups, and 19 company groups. And about the sizes of the grants: some of them are SBIR phase I awards, so they're really tiny, or R21s in a company setting, and some of them are large. [Questioner:] It's interesting, and I've reviewed for you before, so I know what some of the issues are. It is really good that the companies are being supported and are interested in this, because this is not going to get completely done in academia, although a lot of the ideas start there. But you said something about a year of secretiveness? [Dr. Schloss:] Some people come to this meeting with a very academic perspective; they're just happy, they love talking about their results, and they're very open. I've had a few academic grantees call me who are quite worried: they want to comply with the program and talk about a lot of their work, but there are a few things for which they're not quite ready to divulge all the details. I try to push them to divulge as much as they can, but I understand; on the other hand, if some of
the stuff doesn't get patented and licensed, it's not going to get developed. And so I try to reach the happy medium.

Well, that's why I'm raising the point. I think you're doing a great job balancing that, because you do have to protect it; these won't get commercialized or ever get done otherwise. Yet our inclination, especially in the genome institute, is for everything to be freely available, and we can't do that. I think you're doing it just right, because you really do need both. And yet, at least in the one review I did a few years ago, it was remarkable to me how many of them are in each other's faces, too. Some of them are competing, and yet some of the grantees, the scientists, even when people are keeping something to themselves, they have so much knowledge, and they can share that, and it really accelerates the field.

Could you identify yourself, please? What is the cost right now? Is it close to $5,000, or $40,000,000?

Okay, so if I have the question right, maybe stay up there a second, because I'm not quite sure which number. The costs have come down from something on the order of $50,000,000 to something...

And what quality do you get for that amount of money?

I would say that we're probably close now to the high-quality draft with a lot of the sequencing that's being done. With default coverage on these short reads, you can use that to map to known genomes, and people are getting better and better as they develop algorithms to do assembly. There are a bunch of tricks you have to do in making your libraries in order to do the assemblies, but you can get a reasonably well put-together genome from this combination of just mapping to a reference and some assembly. And that costs roughly in the $5,000 to $10,000 per genome range. That's good enough to do a lot of clinically relevant research, and actually some clinical applications. I mean, you've seen the papers where people are finding, you know, disease genes and, in some cases,
giving hints to therapy. But I think, in the last couple rounds of the RFA, we're really saying, you know, if we'd said a $100 genome or a $50 genome back in 2004, people would have thought we were nuts. Now people say, well, why are you still talking about a $1,000 genome? But we are talking about $1,000 or less, and we really do need to talk about improved quality. We framed this in terms of cost, and I think that was probably the right thing to do, but we have a long way to go in terms of the quality of sequence that we would like to achieve.

First of all, I just think Jeff has done a tremendous job in this program; watching it progress and seeing all these results at such speed has been really exciting. Just a comment: you say it's been about $20 million a year that you put into this, and it might be interesting to make a few comments about how the technologies get fast-tracked in terms of adoption, how these technologies are put into the large-scale sequencing centers and elsewhere to be advanced maybe faster. And I don't know how that factors in in terms of the cost that NHGRI puts in.

Yeah. So I guess the closest I could come to that is to say that, on the centers' budgets, and Rick could comment on this, at least in the most recent several years we've figured that the centers are investing somewhere on the order of 10 percent of their overall budget in implementing new technologies, and that's been a really important component of the advances in these technologies. So what I'm talking about here is really early stuff, really researchy stuff. Then it sort of passes out of the purview of what this program can do and into commercialization and development, and that's where the experience of sophisticated users is unbelievably important, because all these companies are rushing to get their systems out, and they usually do it too early. The information they get back from sophisticated users makes a huge contribution to the successful
commercialization of the systems.

Just real quick, Jeff. You may have said this, but I didn't notice: targeted sequencing. I know we're going to sequence the whole genome for $10 soon, but in fact, for clinical applications, being able to pull out 100 genes and deep sequence them really cheaply is valuable.

What we did with this program was to say that this is to invent new sequencing technologies, really something completely different. We do not support projects that are specifically to implement a new way to gather up the templates. What we said was, if you have an idea of how to do that, and you can sort of spin that along in the course of developing a new technology that you can use for whole genomes, then we would contribute to that. But the fact is that applications like targeted sequencing, or how to capture exomes, do really well in the standard study sections, and NHGRI definitely supports a lot of that work; it's just not included here. I mean, they don't need to be huge, and they don't need to go through this specialized review.

I was especially interested in the table you didn't fill in.

I think I deleted it all, because it's really hard to keep up with, and then whom do you believe?

What I was going to mention is that, from an NHGRI perspective, it seems important that that body could give really important guidance into which of those columns is becoming increasingly important. In other words, cost, for example, is decreasing in importance as all this comes down, but things like accuracy and read length maybe have very much increasing importance. So prioritizing what's really important for the next wave, and in my mind, I think clinical things will be really important.

Yeah, and that's really, I think, our next challenge: figuring out how to frame that and what role NHGRI can play effectively in it, because some of that's really done well in industry.

Any last questions for Jeff? If not, we are fundamentally on time, and we have to make a command decision here. Rudy?