Host: Hello and welcome to NewsClick. Today we have with us Dr. Satyajit Rath from the National Institute of Immunology. Satyajit, a recent set of papers published in eLife reports on replication studies. It is an interesting exercise: an attempt to replicate the results on the basis of which drugs have either been released or are at an advanced stage of release. The results are intriguing. Three out of the five studies have either not been successfully replicated or are indeterminate, in the sense that we do not know whether the result is negative or positive. Only two seem to have been replicated in terms of their effects. What does this mean for the field of medicine?

Satyajit Rath: It is interesting, and it is one outcome, with many more expected, of a fairly long effort that the field itself has undertaken. It comes out of sheer worry that we are basing a great deal of real-life prediction, whether in drugs or in genetic medicine or in a variety of similar ways, on studies that are one-offs or two-offs, rather than on the conventional image of slowly accumulating, multiply validated models of understanding from which we can make robust predictions. It is instructive to think about where this comes from. Much of the funding in the life sciences today is provided and demanded under the explicit or implicit idea that it is going to be useful for health, to some extent for agriculture, but mostly for health; useful in making medicines, drugs, vaccines, diagnostics, interventions of one kind or another. As soon as one does that, two things happen. One, we become anxious to do all this quickly, which means that we are very appreciative of outcomes when they are striking, and we move forward with them very, very fast.
Host: Now, the interesting part is that some 50 high-impact papers were chosen for replication, out of which 29 have in some sense passed the filter of being possible to replicate, and out of those, five have gone through this set of replication studies. If these really are the high-impact papers in the field, then the whole body of other papers, the ones that are not, quote unquote, high-impact, is the basis on which other medicines, tools and products are even now being developed. That puts a big question mark over the way we are doing medicine today.

Satyajit Rath: It puts a big question mark over the way we are connecting our fundamental sciences, our scholarship in the natural sciences, with the production of drugs and vaccines as marketable commodities. We have strengthened that linkage in ways that may not be quite as sustainable as we think they are. That is what these studies are telling us. Are these studies telling us something fundamentally erroneous, mistaken, wrong, worrisome about how we do scholarship? Not at all. We are all familiar with the evolution of the modern notion of what scholarship means, and I mean scholarship in the sciences, social and natural alike. We are familiar with the idea that scholarship makes errors; individual pieces of scholarship make errors, and it is the community of scholarship, over time, that comes up with improved understanding. These studies say pretty much exactly that.

Host: That touches upon a very central issue of these replication trials: they are not saying that people committed scientific fraud. That is not the argument at all. The argument is that it is not easy to replicate results, for a variety of reasons. First, because the positive correlations themselves are weak. Second, because in the life sciences there is also a whole lot of, quote unquote, noise around the signal. It is very difficult to hold the boundary conditions steady, and there is evolution on both sides as well.
Both whatever is being developed and the environment within which we see all this keep changing. Therefore it is really as much a comment on how medicines should be developed as on how scholarship has been conducted. Would you agree with that?

Satyajit Rath: Absolutely. In fact, that is a better summary of what I was trying to say than I managed myself. Let me push that a little further. Take, as an example, drugs that control blood pressure. We can have drugs that control blood pressure in the sense that when somebody's blood pressure is 220 over 150, they bring it down. But increasingly, we are not looking for medicines with gross effects; we are looking for medicines with subtle effects. Blood sugar control is no longer about bringing the sugar down from 400 to the magic number of, say, 100. We are looking for medicines that bring blood pressure down from 140 to 120, and sugar levels down from 130 to 95. Effectively, we are looking for smaller and smaller effects, because our expectations have gone up. Expectations that could once be satisfied with gross effects have now risen to the point where we demand more and more nuanced, more quantitative, more subtle control. And that means that the phenomena on which these interventions will be based are small-effect phenomena.

Host: Coming back to something that is now dogging the social sciences: there have been arguments that randomized controlled trials are what are called the gold standard in science, and that they should be introduced into the social sciences. Two issues arise. One: are randomized controlled trials really the gold standard in the sciences? We do not seem to use them in physics, chemistry and a whole range of other disciplines that would be called scientific; they are used in medicine, particularly for establishing the efficacy of medicines.
The second issue, of course, is this: if the life sciences are showing these kinds of repeatability problems, how easy can it be to take the method into a system that is, shall we say, even more variable than the life sciences?

Satyajit Rath: It is interesting that you bring up the randomized controlled trial and the idea that it would be a useful tool in the social sciences. I am somewhat sympathetic to that, simply because it is an interesting tool that the social sciences have not used very much (not that they have not used it at all), and any additional tool is useful for scholarship. But the idea that it somehow forms a gold-standard tool is, of course, arrant nonsense. It is arrant nonsense because even in clinical medicine, from which the idea is taken, it is not treated as a universal gold standard at all. If you go from clinical medicine into the life sciences, into the kind of preclinical cancer biology we were discussing with respect to the reproducibility project, nobody uses randomized controlled trials as an experimental design in nonclinical biological research. So the idea that this somehow forms a gold-standard experimental design is simply incorrect. It is a useful tool, but it is also interesting to think about why it is so useful a tool in drug trials. In an odd sense, it is useful because there is so much to gain from a positive result in a clinical trial of a drug: once you have a positive outcome from a randomized controlled trial, you go into the market.
So you are very close to a great result, and in the, let me be polite, entrepreneurial, private-sector-driven model of drug development, it is very attractive to fudge a little, to tweak the results, to be, let us say, hopeful or wishful. It is because of that that the double-blind randomized clinical trial serves as something of a brake, a regulatory and supervisory tool that provides for the elimination of at least certain kinds of biases and prejudices. That is all it does. That is why, in the preclinical cancer biology examples we are talking about, none of the studies used even double blinding, leave alone randomized clinical trials. Why not? Because they are so far upstream in the pathway of drug development. Even though, as cancer biology research projects, they have every intention of making translational contributions to eventual outcomes, they are themselves quite aware that the connection is tenuous and non-linear, and as a consequence the strength of bias and prejudice at that point is not deemed by the field to be so severe as to demand double blinding, and so on and so forth. In fact, my worry about the potential hype surrounding the reproducibility project is that rather than fixing the actual problem, which is our untenable connected expectations running from fundamental scholarship to marketable commodities, rather than fixing what I see as a dysfunctional link between the two, we will begin to try to fix fundamental scholarship itself, as though it were a marketable commodity in and of itself.

Host: Thank you very much, Satyajit, for being with us. We hope to continue this discussion with you as more and more results come in. That is all the time we have for NewsClick today. Please keep watching NewsClick for further episodes, and do also visit our website, our Facebook page and our YouTube channel.
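Rath's point that subtle, small-effect phenomena are harder to pin down can be made concrete with a standard power calculation. The sketch below (illustrative only, not from the conversation; the function name and defaults are our own) uses the usual normal-approximation formula for a two-arm comparison to show how the required sample size explodes as the standardized effect size shrinks:

```python
from statistics import NormalDist

def sample_size_per_arm(effect_size, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for a two-arm comparison of means,
    detecting a standardized effect (Cohen's d) at the given two-sided
    significance level and power, via the normal approximation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

# A gross effect (d = 1.0) versus a subtle effect (d = 0.2):
print(round(sample_size_per_arm(1.0)))  # about 16 per arm
print(round(sample_size_per_arm(0.2)))  # about 392 per arm
```

Halving the effect size roughly quadruples the required sample, which is one reason studies of subtle effects replicate less reliably than studies of gross ones.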