I want to thank you for inviting me here and for all the help you've given me in starting up as a new director. It's been a great pleasure to work with Eric, and I look forward to doing so in the future, and with NHGRI in general. Eric asked me to give an overview of what's happening at NIGMS, and especially of areas in which NHGRI and NIGMS have particular potential for collaboration and synergy. So let me start by reminding you what the missions of NIGMS are. There really are two overarching missions. The first is to promote fundamental research on living systems in order to lay the foundation for advances in disease diagnosis, treatment, and prevention. The second overarching mission, which is related to the first, is to enable the development of the best-trained, most innovative, and most productive biomedical research workforce possible. I'll just highlight the phrase "lay the foundation," because this is one of the places where you'll see that NHGRI and NIGMS share common goals. The mission of NIGMS is really to lay the foundation, through its research portfolio and through its training programs, on which the other institutes, the ones focused on specific diseases and organ systems, build their research and training portfolios. We have five program divisions. They are the Division of Cell Biology and Biophysics, led by Catherine Lewis; Pharmacology, Physiology and Biological Chemistry, led by Mike Rogers; Biomedical Technology, Bioinformatics and Computational Biology, which as Eric said is a place of significant synergy and shared interest with NHGRI, led by our newest division director, Susan Gregurick, who came to NIGMS from the Department of Energy; and Training, Workforce Development and Diversity, which for many years was led by Cliff Poodry, whom many of you may have interacted with.
He retired just six weeks or two months ago, and Allison Hall is the acting director while we conduct a search for that position, which I'll tell you more about in just a second. And finally, last but certainly not least, Genetics and Developmental Biology, again an area of shared interest, which is led by Judith Greenberg, who is also the acting deputy director of NIGMS. Those two points bring up something I'd like to make you all aware of: we have two ongoing and very important searches at NIGMS. The first is for a permanent deputy director, and the second, as I said, is for a permanent director of the Division of Training, Workforce Development and Diversity. To put that last point in perspective, almost 50% of the predoctoral T32 training grant slots that NIH awards come from NIGMS. So in many ways, as Eric alluded to, we are the 800-pound training gorilla of NIH, and this really is one of the most important positions in training, workforce development, and diversity in the country. I'd be very grateful if you could make your colleagues and others aware of these two positions at NIGMS. One place to find information about them is our blog, the NIGMS Feedback Loop, which can be found here or just by searching for "Feedback Loop" and "NIGMS." Now, they say you don't know an institute until you know its budget. As Eric said, our budget this year is about $2.36 billion, and this simplified pie chart gives you some understanding of how the institute uses this investment of taxpayer money. Almost 90% of the budget, 89%, goes to fund extramural research, that is, research done at universities and other institutions across the country. About 8% is invested in training, workforce development, and diversity programs, although I should say there is some overlap between these two areas. About 3% goes to run the institute.
And then much less than 1% is our intramural research program, which is really just one postdoctoral program, the Postdoctoral Research Associate Training, or PRAT, Program, which funds postdoctoral fellows to work throughout the NIH, at all of the institutes and centers, as well as at the FDA. What you can see here is that we are almost a completely outward-facing institute. For all intents and purposes, we really don't have an intramural research program; everything goes to fund research in the extramural community. Now, a couple of hot issues that are being discussed, thought about, and worked on very closely right now at NIGMS may be of interest to you. The first is that we are working very hard to renew and reinvigorate our commitment to investigator-initiated, question-driven research. By investigator-initiated, I mean that the ideas are generated by the investigators in the extramural community; how they're going to approach these problems and organize the research is also decided and established by those investigators. This is as opposed to what you might call top-down, or programmatic, research, where groups in consultation with the NIH decide that certain areas need targeted funding, and we therefore put funds directly into those research areas or into a particular arrangement of scientific research. Now, I should say that the dichotomy I'm drawing here is not between investigator-initiated research and team science, because team science can be, and frequently is, investigator-initiated. So when we're talking about investigator-initiated research, we're talking both about traditional single-PI research and about increasingly important team-based research. I think we're very cognizant of the fact that as information becomes more complex, and as there is more and more need for interdisciplinary approaches to problems, team-based science is increasingly important.
And so as we reinvigorate our commitment to investigator-initiated research, we're also looking carefully at how we can best support team-based investigator-initiated research. Now, the history of NIGMS, as some of you may know, is in investigator-initiated research. So you may ask, why do we need to spend any time and effort reinvigorating our commitment to it? This graph really tells the story. What you see in the blue bars, on the left-hand y-axis, is the funding NIGMS had invested in targeted research, that is, research focused on a specific area of science or a specific way of arranging researchers, over time from 1990 to 2013. You can see that in the early 1990s, NIGMS had very little of its funds invested in targeted research. Between 1998 and 2003, which as I'm sure you know corresponds to the NIH budget doubling, this increased dramatically, and interestingly, it continued to increase for several years after that. Now, it made a lot of sense, when there was additional money coming into the system every year, to invest some of it in targeted research, for instance to try to ignite new areas of science, or to try experiments in new ways of arranging scientists to perform science. The red line, on the right-hand y-axis, is the reciprocal of this: the percent of the NIGMS portfolio invested in investigator-initiated research. Again, you can see that in the early 1990s, 99% of our portfolio was investigator-initiated. During the budget doubling it fell, to the point that now only about 80% of our portfolio is investigator-initiated. It will come as no surprise to you that the budget doubling has been over for a decade. Given this, we feel it's very important to re-equilibrate the system to this post-budget-doubling reality. So we are working to move the red line up, which is going to mean moving the blue bars down.
And so we've started both short- and long-term efforts to rebalance our portfolio, again to re-equilibrate what NIGMS focuses on in this post-budget-doubling world. Another area we're focusing on is whether we can find more efficient and more effective mechanisms to fund this investigator-initiated research. We're looking to explore and experiment with new funding mechanisms that would be more stable, flexible, and efficient, both for the investigators themselves and for NIGMS. One project that we're working very hard on, and hoping to roll out as an experiment in the next year, is a new mechanism that would support a PI's overall research program, instead of forcing them to break their program up into individual projects and try to get funding for each of those separate projects separately. Now, as Eric mentioned, on February 21st, 2014, we had a real first: a bilateral NHGRI-NIGMS retreat, which may have been the first bilateral institute retreat in history, as far as we know. It focused on four areas where we thought there would be particular interest, overlap, and synergy. The first is how the two institutes support research on the elucidation of biological function. The second was how we support databases, which are obviously very critical resources for the research community, and in fact are only growing in importance as more and more information becomes available. The third was how we support biomedical informatics and computational biology; you saw that we have a whole division for that, and I'm sure you're aware that NHGRI is very heavily invested in those areas as well. And finally, our mutual interest in how we best support technology development for the missions of our two institutes.
We came up with a lot of interesting ideas in all of these areas, but the one that really stood out as needing immediate action, and as a place where the two institutes could work together to really improve the situation, both for researchers and for the efficient use of NIH funds, was databases. As I said, databases are critical resources for the scientific community, and they're only going to grow in importance given the increasing amount and complexity of the information that's available. But as I'll show you on the next slide, this growth in the importance of databases has translated into growth in the number of databases themselves, and in their cost. That really threatens to capsize the ship if we don't find more efficient and sustainable models to support these databases; they could end up eating very significant chunks of both of our budgets, as well as the budgets of other institutes. And that's shown here. This is actually a slide that Phil Bourne made me aware of, from a website: a PhD student in the UK did a very nice review of the literature and plotted the number of biological databases over time, from the late 1990s to the present. You can see this is roughly exponential. Regardless of the exact mathematical form it follows, it is fair to say that the number of biological databases is increasing dramatically. As I said, this by mass action alone increases the potential cost of supporting these databases, and anecdotally, the cost of the individual databases seems to be increasing as well. So this really threatens to eat up a very significant portion of our budgets and potentially capsize the ship, if you will. This led Eric and me to think that we really needed to start looking at this in a careful, data-driven, rational way, and so he and I put together a working group.
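As a back-of-the-envelope illustration of the kind of trend described above, one can fit an exponential to database counts over time and read off a doubling time. The counts below are invented for illustration; they are not the figures from the slide, and the fit is just a log-linear least squares, a minimal sketch of the analysis, not the student's actual method.

```python
# Hypothetical sketch: estimate a doubling time for the number of
# biological databases by fitting log(count) = a + b*year.
# The year/count pairs are made-up illustrative data.
import math

years  = [1998, 2002, 2006, 2010, 2014]
counts = [200, 400, 850, 1700, 3400]   # hypothetical database counts

# Ordinary least squares on the log-transformed counts.
n = len(years)
xbar = sum(years) / n
ybar = sum(math.log(c) for c in counts) / n
b = sum((x - xbar) * (math.log(c) - ybar) for x, c in zip(years, counts)) \
    / sum((x - xbar) ** 2 for x in years)

# For exponential growth exp(b*t), the doubling time is ln(2)/b.
doubling_time = math.log(2) / b
print(f"growth rate: {b:.3f}/yr, doubling time: {doubling_time:.1f} years")
```

With these invented numbers the count roughly doubles every four years, which is the kind of behavior that, compounded over a budget horizon, produces the "capsize the ship" concern.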
Of course, the first thing we did in this working group was to bring in Phil Bourne, who as Eric mentioned is the new Associate Director for Data Science, to help lead this initiative, because this is exactly the kind of thing we are hoping Phil will take the lead on and help us find appropriate models for. This group has started working and collecting data, analyzing the portfolios of the two institutes and looking broadly at other institutes as well, to see whether we can develop proposals for more efficient and sustainable funding mechanisms to support the various databases that our institutes and others fund right now, which, again, are critical resources for the community and have to continue. Now, another area that many of you are probably familiar with and have seen in the news is reproducibility, the reproducibility of scientific findings. You've probably seen it as the cover story of a whole issue of The Economist, and in the New York Times, the LA Times, et cetera. Francis Collins and Larry Tabak had a very cogent commentary in Nature outlining the problem and discussing some of the potential solutions the NIH was considering. One of their key points, which really resonated with me on a number of fronts, was that efforts by NIH alone will not be sufficient to effect real change in this unhealthy environment. That is, as with many of the other difficult problems we're wrestling with right now, the NIH by itself can't fix them. It's an important part of the equation, but unless there are changes in universities, in the research community, and among the other stakeholders in the research process, very little will be accomplished. We have to work together as an ecosystem, if you will, to effect change on these challenging problems.
One point I'd like to make regarding this general reproducibility problem is that it is really at least two distinct problems, which have been conflated in much of the discussion in the popular press. The first is the reproducibility of the data themselves: if another research group tries to perform the same experiment, will they get the same actual data, the same numbers, the same images, what have you? The second is the strength of the conclusions drawn from the data. The two are related, but they aren't the same thing. In other words, were the conclusions that a research group drew from the data sound and fairly reasoned? Were they inflated? Were aspects of the data perhaps left out in drawing the conclusions, to make them seem more important? Again, those are related problems, but not the same, and I think they have been conflated in a lot of this discussion. Now, both of these problems, even taken apart, are driven by three related issues. The first is the sociology of science, for instance the scientific reward system, how scientists are evaluated and promoted. The second is methodological problems that can compromise the reproducibility of the data themselves, or the strength of the conclusions made from those data. And the third, which contributes to both of the others, is how we train and educate researchers: how we do that is going to affect how well they are able to deal with these issues. Because the training and education part affects the other parts so strongly, and given NIGMS's historical place in the training mission of NIH, we thought that this was an area in which we could have a particular impact.
So one thing we're doing, and on Friday we're actually going to ask Council for concept clearance for this, is to put out a small FOA for the development of what we're calling exportable training modules. These would be modules in different formats that address the different issues contributing to the problems. They could be online modules, interactive videos, or case studies with some interactive component. They'd be developed by universities, faculty, and other not-for-profit institutions across the country, and then made freely available so that anyone involved in training scientists, at any level, could use them as part of their training program. We're thinking we'll initially fund about six of these, assuming we get Council's clearance to do this, and then evaluate the program and see how it's working. Now the second area, which is another place where Eric and I have come together to work as a team, is the methodological problem: actual methodological and technological issues that may be contributing to the reproducibility of data and the strength of the conclusions drawn from those data. The particular area we've been working on is reproducibility in cell culture studies, that is, tissue culture studies used as models of biological phenomena. Two of my program directors, Jim Deatherage and Zongzan Ni, with help from a program analyst, Peggy Schnorr, have been doing a very deep dive into the literature in this area for the past few months, to see just how bad the problem is. You've certainly read about it in reviews, but we wanted to dig into it for ourselves and find out just what was going on out there. Here are some of the things Jim, Ni, and Peggy have found in this deep dive. The first is that there are over 400 cell lines that have now been reported to be misidentified, dating back to the 1960s.
These are cell lines that were said to be one tissue type or tumor type, for example, and were later shown to be something else altogether. So at least 400 different cell lines have been misidentified. Their analysis finds that in some cases, when the misidentification is made public, publications using that cell line as the misidentified type drop off dramatically, but in other cases they persist. So even after a cell line is shown not to be what it was said to be, you see in many cases a persistence of use of that cell line as its misidentified type. In keeping with that, a survey conducted in 2004 found that 70% of researchers had never actually checked the identity of the cell lines they were using, which is a bit of a staggering statistic. And Jim and Ni's analysis of the more recent literature suggests that the situation hasn't gotten much better in the past decade. Consistent with that, surveys of major cell repositories have shown that 14 to 30% of the cell lines submitted to them by outside researchers for cataloging and banking are misidentified. That is pretty stark: up to a third of the cell lines that researchers were confident enough about to submit to a repository for storage and cataloging turn out not to be what the researchers thought they were. And this is an underestimate, because the repositories can only flag a line as misidentified if what it actually is happens to be known; if it was misidentified but was a cell line that hadn't been reported or wasn't easy to trace, they wouldn't know about it. So these numbers are underestimates. A 2013 survey of the literature found that less than half of the cell lines reported in publications have an unambiguous identifier and source.
In other words, less than half of the papers out there actually say exactly what a given cell line is and where it came from. So that's another somewhat alarming piece of data. All of this is despite the fact that there are fairly cost-effective and easy-to-use methods for identifying at least the most common cell types; unfortunately, they don't seem to be in frequent use. Now, within reproducibility in cell culture studies, I've been focusing on this problem of cell line contamination and misidentification, but there are other issues below it. It's like the skin of an onion: misidentification is the top layer, but below it are issues such as genomic instability and genetic drift. Even if tandem repeat analysis says my cell line is what I think it is, is my version of that cell line the same as your version? And if it's not, how genetically different is it, and how much does that affect the phenotype or the biochemical pathways I'm looking at? Then there are infections: mycoplasma, viruses, and fungi turn out to be somewhat persistent and insidious, and Jim and I have looked into that; a very large fraction of cell lines turn out to be infected with mycoplasma or viruses, frequently unbeknownst to the investigators. Finally, even once one has all of this under control, the particular growth conditions you use, the serum, the substrate, the oxygen and CO2 concentrations, can have significant effects on the phenotype and the outcomes of experiments.
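The "cost-effective and easy-to-use methods" mentioned above are typically short tandem repeat (STR) profiling, where a line's allele calls at a panel of loci are compared against a reference profile. A minimal sketch of that comparison is below; the marker names and allele calls are invented for illustration, and the 80% match threshold reflects the commonly cited authentication guideline (an assumption here, not something stated in the talk).

```python
# Hypothetical illustration of STR-profile matching for cell line
# authentication. Profiles map STR marker -> tuple of allele calls.

def str_match_percent(query, reference):
    """Percent match: 2 * shared alleles / total alleles across both profiles."""
    shared = total = 0
    for marker in set(query) | set(reference):
        q = set(query.get(marker, ()))
        r = set(reference.get(marker, ()))
        shared += len(q & r)
        total += len(q) + len(r)
    return 200.0 * shared / total if total else 0.0

# Made-up 4-marker profiles (real panels use 8 or more STR loci).
hela_ref = {"D5S818": (11, 12), "TH01": (7,), "TPOX": (8, 12), "vWA": (16, 18)}
lab_line = {"D5S818": (11, 12), "TH01": (7,), "TPOX": (8, 12), "vWA": (16, 17)}

score = str_match_percent(lab_line, hela_ref)
print(f"match: {score:.1f}%", "-> likely same line" if score >= 80 else "-> mismatch")
```

The point of the sketch is that the comparison itself is trivial, a few lines of set arithmetic over a commercial profiling result, which underscores how little stands between researchers and routine authentication.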
We have been talking to lots of different people in the research community, both intramurally and extramurally, and what you frequently hear about things like serum is: "Well, we found a serum that 'worked'", and that's the first part that should alarm you, "worked" should be in quotation marks, "and then we bought as much of that batch of serum as we could and did all of our experiments with it." If you only see the phenotype, or whatever it is you're looking at, with one batch of serum, that should raise alarm bells regarding reproducibility, because five years down the road that serum batch won't be available, and it sounds like you bought it all anyway, so no one else can use it. How robust are the conclusions you're going to draw from that? So, on to possible action areas we're thinking about. I should say that Eric and I have a working group; Francis Collins asked us to pull one together, involving not just our two ICs but other ICs as well, and we're getting external consultation for this group, to think about these issues and come up with some recommendations. Two possible action areas, in general terms, are, first, to facilitate the development and dissemination of consensus standards for authentication, handling, controls, and reporting in cell culture studies. I should say that we're very cognizant of the need not to add unnecessary administrative burden onto researchers, an area we're already concerned about, so anything we do, we're going to make sure is really high-impact for the burden, low-cost, and as efficient to use as possible. In that regard, we're also considering some kind of effort, which it may even make sense to put some targeted funding into, to promote the development of more efficient and cost-effective tools for characterizing cell lines and the reagents in which they're grown. You can imagine that a relatively modest investment in better, more cost-effective, more efficient tools to
really ensure cell lines are what they are said to be, to address the genetic drift problem, and to ensure the integrity and consistency of reagents could have a very significant effect on the reproducibility problem, and on the ability of basic research eventually to be translated into clinical advances. So those are some of the things we're thinking about. I'd be very happy to hear your thoughts, from the NHGRI perspective and from your council's perspective.

Well, thanks, John, for coming here to discuss this; this is great. So I was interested to hear about this notion of supporting a PI's full program rather than individual projects, which is more like the HHMI model or something like that. But I'm wondering how you're thinking about targeting it, because especially in the computational biology and bioinformatics piece of your program, a lot of those investigators will have, quote unquote, projects that are not only NIGMS-supported but may have funding from other institutes and so on and so forth. Are you thinking about bringing all of those projects in? Or is this just for someone with multiple NIGMS-funded projects?

So the initial experiment would involve just NIGMS-funded projects: you'd have one program grant that would fund your NIGMS portfolio. If that took off, I think maybe we would catalyze a broader NIH-wide discussion, but that would be very far down the road, I would think. From our perspective, if someone has a large basic research portfolio that we fund and they then want to translate that into something with more clinical applicability in one of the disease-focused institutes, that would make a lot of sense, and that could go out for funding by that institute. But initially it would be focused on NIGMS research funding.

Of course, a few of the disease institutes will also fund fairly basic computational research. That's why I ask, yeah. Eric, first, thank you for taking the time to join us.
We all agree about the increase in the number of these databases, and we also agree about their growing importance. Can you and Eric maybe talk for a minute about how you plan to monitor the long-term expense, which I know is on your mind, while recognizing the growing importance of the data and the need to make them more broadly distributed? And how are we going to get other stakeholders, frankly, to contribute to the pot, which I think is probably what's necessary?

Yeah, we're precisely at the early days of this. Phil Bourne is starting to develop some ideas around efficiencies one could create in the system, for instance by having stronger connections between databases. You've outlined the problem very coherently: as they become more important, we can't pull back from them. In fact, we have to make sure that they are strengthened and continue to grow, but there's no way that we can pay for all of it. So we do have to find ways to get other stakeholders to support them. But I think we also really have to focus on creating efficiencies. If we just let unbridled growth take place without really thinking ahead about the most efficient way to arrange this database world, we will end up, again, capsizing. So we need to think about both of these things. Do you have thoughts, Eric?

The first thing I'd say is that John has a slight disadvantage in that he wasn't available for the meeting that the Moore Foundation held; somebody from NIGMS was there. But he's spot on in saying these are just early days. I think we've figured out the nature of the problem, at least at a first pass, and recognize that it's only going to get worse. It's not sustainable. And we need to break down some of the cultural momentum we have in how we do business, which lets the propagation of more databases just keep going.
So there's a whole set of things here, and this one issue, probably as much as any, completely validates the decision of Francis Collins and of NIH to create a new leadership position, because John and I are both passionate about this, but the solution is going to have to come at a trans-NIH level. So I think Phil Bourne is going to have to lead us, and what I actually became convinced of at the Moore Foundation meeting was that NIH can't solve this alone. This is in many ways a government problem and an international problem; it just keeps layering: public, private, there are all sorts of things that are going to have to get into the mix. So I suspect that at the end of the day, if we try to fix this with a series of band-aids, we will not be successful. I think we've got to take it down to, you know, the foundation, and rebuild the whole way we do this.

I agree, but I would caution against making everything with a sample size greater than five a BD2K problem.

Oh, I want to be very clear. Actually, to be clear, this is not a BD2K problem. I regard this as an NIH data science problem. BD2K might be a programmatic arm, because some of what has to be done is not going to be done through a program. What's going to have to be done is to go out and solve a community, government, and national problem. And for that you need a point person, a leader, a person who makes this their full-time job and has the gravitas to pull it off. I have become convinced in the last six months that this is one of the huge problems we face. Every curve, every graph that John or I look at, even at a micro level: I mean, just the data resources we support at NHGRI would eat us inside out. We would have no program left if we didn't figure out a solution to this. So why don't we go to BD, and then Tony, and then David.
You mentioned that one of the key areas you discussed at the retreat was technology. Could you elaborate on what came out of that discussion?

So that was a preliminary discussion, and I think the main take-home point was that we're going to have another retreat, probably a series of retreats, and that's going to be the focus of one of them. From our point of view, we have a branch of one of the divisions that is focused on technology development, and a question we're asking is: what is the most efficient and effective way to distribute the pot of money we have to really push forward technology development across a broad arena of research topics, from imaging to bioinformatics, et cetera? Obviously there's a great deal of experience here, particularly in the genome sequencing realm, and we're hoping to tap into that. But right now our portfolio is distributed with P41 centers making up most of it, then a very small R21 program, and not a lot in the middle. So one thing we are thinking about is, again, what is the most efficient and effective way to support the broad range of technology development that needs to be done, at all scales? So, Tony.

Thank you for highlighting all the issues related to cell lines. I think many of us who have used them over our careers would agree with many of those points. But in the pharmaceutical and biotechnology space, there are many more cellular assays being used to develop therapeutics. So how are you working with industry, because they'll be using cell lines increasingly, and what standards do you hope to establish there? Because that's very important for the public.

In many ways that was the driver of much of the reproducibility discussion: results coming out of the preclinical research arena were not turning out to be translatable, either because they couldn't be reproduced or because there was some other issue. So one of the things we've been talking about is getting consultation from industry.
Certainly on our council we have industry representatives who will be giving us advice. We're hoping, as part of the working group that we've formed, to bring industry expertise and input into it. And there are also a number of stakeholder organizations, the Global Biological Standards Institute, for instance, which was recently formed, that are thinking about this problem as a totality, from the industrial side to the clinical side to the basic science side. We're going to be engaging them in this process as well.

So, John, thank you for coming to talk with us, and thank you for taking the job.

Oh, thank you.

I was pleased you spoke about the 15-year trend with regard to investigator-initiated research versus targeted support for specific areas of research, and I was delighted, and surprised actually, to hear you say that you had intentions of rolling that 15-year trend back, at least in part. You pointed out in the chart that at present NIGMS's portfolio is about 80% investigator-initiated. So I have three questions. First: if 80% is too low, what's the right number? What's the target? Second: what about other ICs, in addition to NIGMS? Because that trend over the last 15 years is by no means limited to NIGMS. And third: in working to reverse this, what resistance, if any, would you anticipate from scientific or political spheres?

All good questions. On the first one: we are actually in the middle of a strategic planning process; I basically hit the ground running with our next five-year strategic planning process, and one of the key questions we're asking is exactly that: what is the right number? It's definitely greater than 80%, I think, in our view. Is it 90? Is it 95? We're probably not going to get back to 99%, given the need in places like technology development, which I think is an area where some targeted investment makes a lot of sense.
So I don't know the right answer; I just know that it needs to go up substantially.

The second question?

What about other ICs?

Right, other ICs. So there are 27 institutes and centers, as you know, and each one has a different mission. There's a reason there are so many, and how each one focuses its investment and the way it arranges it is going to be different depending on what it's trying to achieve. We are focusing on fundamental research and discovery. And I think our view is that, for us, we can't do everything; that's part of the issue to recognize. We cannot do everything that needs to be done in science. So what we're really focusing on is promoting fundamental research by targeting question-driven, investigator-initiated research and discovery. Other institutes, and I don't want to sound like I'm dodging, but other institutes have different missions and need to do things in different ways.

A follow-up question would be: is there an NIH-wide discussion of a target? A target for every institute that would be the same, or a minimum threshold? What I mean is, the 15-year trend is by no means limited to NIGMS, and so the solution cannot be limited to NIGMS either, if where we are today is a problem. I would be one who says it is a very big problem; I don't know if others would agree, but obviously you've decided that it is a problem, at least at NIGMS. So the question is, is there in fact an NIH-wide discussion of this 15-year trend and its consequences?

There have been discussions of many things related to the consequences of the budget doubling. One thing I've noticed since I've been here is that this may in many ways be the biggest challenge we face: re-equilibrating the system from the position it settled into during the budget doubling. That position has affected many different spheres, not just inside the NIH, but in the research community as well.
So I go back to that: the biomedical research community, the universities, other stakeholders also need to re-equilibrate in reciprocal ways. In terms of specific institutes, again, every institute has a different mission and needs to approach it in a different way. So I think coming up with a single number would be problematic. To take an example I often use: Tony Fauci, for instance, can say, absolutely and without fear of contradiction, that if we had a universal flu vaccine, it would be an incredibly important thing for humanity. And so putting targeted investment in that makes a lot of sense. For fundamental research, it's a very different equation. I can't say that if we had more money in transcription, it would be a great thing for humanity. Of course people should be studying transcription, but it's not the same thing. And I think that needs to be recognized in this discussion: different institutes really do have different missions.

Partly that question is on my side. I don't think there's been very much broad discussion around the institute director table about this issue, though a lot of other issues might touch on it. In fact, I'm not even sure, John, that I've ever seen a graph or a table that shows that figure for all 27 institutes. I just know it anecdotally for about six or seven; I haven't seen it for every institute.

Yeah. And then your last question was about resistance. I think anytime you make change, there will be resistance. It's a question of making sure that you're heading in the right direction and that enough of the stakeholders agree with you. And I think at this point, from everything I have heard, all the feedback I've gotten has been very positive about this direction for NIGMS.

Maybe a related question would be: if you look at the programs that have received targeted support, what, in general, is the half-life of those programs?
Because that could have a lot to do with, you know, how entrenched those targeted programs are.

That's a great question. It's something we've looked at a lot, and moving forward we've laid out some guidelines for what would be appropriate for targeted funding of a scientific area. One of the tenets for us is that these programs all have to have hard sunset clauses, and five years is really what we're going to target, except in some exceptional circumstance, and then it would be ten. But many of these things have gone on for 15 years, which I would say is in and of itself a reason to roll the money back into the system. If something in a fundamental research area hasn't become self-sustaining in 15 years, it's time to rethink it.

Okay, we're still on time then. Artie.

Yeah, so thanks for addressing the reproducibility issue. I think that, as much as in cell lines, there's a lot of contamination in informatics and in open-source code, and we can often not reuse the code that someone else published. Because it's earlier in the game, that might be more addressable now than later on. The other aspect is about the ecosystem. While I do think parallel efforts need to go on internationally and with other agencies and so on, I would not underestimate the ability of NIH to change behavior in academic institutions. I think one example was when you introduced the multiple-PI approach, because I saw that change the way people do collaborations. So there are many things that might seem rather small but that change behavior.

I agree, and that's actually part of the reason we're launching the training-module FOA that we're hoping to get cleared: I think that just by making people aware of NIH's interest in this area, you can begin to shift the culture. So it's a point well taken.
So I just wanted to follow up on that reproducibility question in the context of data in particular, focusing less on the costs of maintaining databases and more on incentives for getting PIs to put the underlying data from all of their work into the public domain. As we all know, NIH has had its data sharing policy for ten-plus years now, under which PIs with funding over half a million dollars are supposed to have a data sharing plan. How compliance with that has worked is unclear. So I'm wondering what people's thoughts are going forward in terms of using incentives, either on the positive side or on the negative side, with respect to compliance with data sharing obligations.

Yeah, it's a great question. It's really an NIH-wide question.

Yes, indeed.

So it's one I can't answer specifically, but this is one of Phil Bourne's highest priorities, I know. I would say he is developing all kinds of experiments and models to address these issues.

Has he spoken to your council since he's been here?

He has not, but I'm going to give the eye to Laura Rodriguez to go to a microphone and give you a brief update on what's going on with the genomic data sharing policy. Make sure the mic's on.

So, to the specific question about incentives to comply with the existing data sharing policies that we have: there are lots of discussions going on about what we might be able to do, where the limits of that are, and how we can produce carrots that are sufficiently enticing within those limits to maybe get people even a little bit further. But beyond that, to Eric's point about genomic data sharing and the expansion of the genomic data types coming in, we do have a policy that has been approved internally, at least through the first step of that internal approval. We are hoping it will be out in the next four to six weeks, pending the various stages it needs to go through, and then we'll be able to talk about it.
But implementation still isn't planned until 2015, and that would be for applications for funding coming in in 2015, so it won't actually be in place for funding until 2016.

Other questions for John? Howard.

Thank you. It's been quite remarkable to see all the information that's been attributed to you since you've taken the leadership role, often by people who can't even spell your name correctly.

It's a challenging name, I think.

Well, I didn't mean that part. Even the first name.

No, it's true.

One thing that's always a challenge for your institute is: where's the line between too clinical and clinical enough but still fundamental? You literally had to face it with things like the PGRN, and there are other elements heading in that direction where you have to pull back. How are you going to approach that in general? What's your mindset?

Well, as you know, we are responsible for a number of clinical areas, trauma, for instance, wound healing, burn, sepsis, anesthesia, and we are also now the home of the Office of Emergency Care Research. So we do have some very clinically relevant areas. Our general philosophy is that within those areas we are looking for clinical advances, but also tying them to fundamental research that gives a fundamental understanding of the underlying processes leading to those clinical advances. So that's how we try, in those clinical areas, to tie our fundamental research mission to promoting clinical medicine. It's a good question.

Yeah, it is. Thank you very much. I appreciate it.

Well, thank you, John.
That was just spot on, as I knew it would be, and I'm sure this council will continue to hear in the coming months and years about NHGRI and NIGMS working together, trying to solve some of the problems that affect both of our individual institutes. And trust me, part of the reason John and I got stuck with, I shouldn't say stuck, got asked by Francis to co-chair this working group about cell lines is that the two of us are, shall we say, rather vocal around the institute director table. So for some of these corporate things, I'm quite sure the two of us are going to end up doing a lot of things together, trying to help deal with NIH issues in addition to our individual institute issues. So we are spot on time. Are you ready to tell us?