It's my pleasure to now introduce Anup Malani, who is the Lee and Brena Freeman Professor at the University of Chicago Law School and a professor at the Pritzker School of Medicine. Anup conducts research in law and economics, health economics, and development economics. His law and economics research focuses on models of judicial behavior and measuring the welfare impact of laws, and he's presented numerous times on really creative approaches to many of the issues we deal with. I'm very much looking forward to his assessment of disclosures of conflicts of interest. Anup, welcome. I pace when I talk, so it's very hard to stay behind the podium. All right, I want to talk to you today about some research I've been working on for the past few years, looking into the value of requiring physicians, actually researchers, to disclose conflicts of interest in medical journal articles. So I want to give you a little background here. I think the consensus view, or at least the conventional view in medical research, is that financial ties, that is to say, either funding from a drug company to a medical researcher or some other financial tie, like a consulting relationship, an expert speaking relationship, or even honoraria, tend to bias researchers. Some of the strongest evidence for that comes from folks like Bekelman, Li, and Gross, and from Sismondo, who have shown that research funded by industry tends to find more positive effects of drugs produced by the industry. And on that basis, there have been calls to address this problem of financial ties between industry and researchers. One solution that is pretty commonsensical is to bar these relationships outright. And there are examples of efforts to do just that.
For example, NEJM and The Lancet, for a period of time, actually barred authors with conflicts of interest, that is, with financial ties to industry, from writing review articles. Now, the problem was that they pulled back on this policy because they had difficulty finding what they considered adequately qualified reviewers to actually write their reviews, which raises the possibility that financial ties and funding from industry actually have value. To some extent this should be pretty obvious. We know that half of medical research in the United States is funded by industry, so that accounts for half of the research going on, but that ratio understates the impact on clinical trials. The ratio of clinical trials funded by industry to clinical trials not funded by industry is six to one. So particularly in clinical research, industry funding is really important. An alternative solution that has become widespread, not just in medicine but outside medicine as well, is disclosure. The idea is that individuals who have financial conflicts should disclose that information. It's a policy that's been in medical journals for quite some time. Even in the 1980s there started to be policies requiring disclosure of funding, but over time you've seen a ratcheting up, a broadening of the definition of what types of financial ties count and must be disclosed, and now financial ties of a wide variety are required to be disclosed by medical journals. And the idea behind disclosure is: look, we want the ties, but we want readers to be able to appropriately discount articles where the authors have financial ties. The thought is, hey, if I see that a researcher has financial ties, I can down-weight their conclusions to account for potential bias induced by those ties.
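That down-weighting idea can be made concrete with a toy rule. This is a minimal sketch with purely illustrative numbers; the `expected_bias` value and the function itself are my assumptions, not anything from the study:

```python
# Toy model of a reader's discounting rule: shrink the effect reported in a
# disclosed-conflict article by the bias the reader expects industry ties to
# induce. The 0.3 expected-bias figure is purely illustrative.
def discounted_effect(reported_effect: float, disclosed: bool,
                      expected_bias: float = 0.3) -> float:
    """Down-weight a reported treatment effect when a conflict is disclosed."""
    return reported_effect - expected_bias if disclosed else reported_effect

print(discounted_effect(1.0, disclosed=True))   # 0.7
print(discounted_effect(1.0, disclosed=False))  # 1.0
```

The whole question of the talk is whether readers actually behave anything like this rule.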
Now I just want to think logically about whether or not mandatory disclosure is a good idea. For mandatory disclosure to be a good idea, a necessary condition is that the audience discounts the value of an article, the information in that article, if conflicts are in fact disclosed. If that were not the case, for example, suppose people made positive inferences from disclosure, it seems fanciful, but I'll get to that point a bit more in a second, then you would never need mandatory disclosure. Voluntary disclosure would be enough, because an author would want to disclose a conflict if people would view him more positively for having disclosed it. So mandatory disclosure is only helpful if we think people make negative inferences from disclosure, and that's why people are not disclosing. But it turns out the evidence on this is ambiguous. There's certainly some evidence suggesting that people view articles more negatively if the authors disclose conflicts. The evidence I like to point to is Kesselheim's paper, Kesselheim and co-authors back in 2012. But there are other indicators that go the opposite way. For example, surveys done by folks like Loewenstein suggest that doctors think it's fine to accept gifts. You also have research from Sah et al. suggesting that when doctors make disclosures, patients actually trust them a little more, as if to say, hey, you revealed the bad things about yourself, now I feel like I can trust you. And so patients actually rely on those doctors a little more. So the evidence is not entirely clear. So here's what we did. This is joint work with two collaborators, both of them at Booth, Christian Leuz and Jacob Laszlo. So when I say we, I mean the three of us. What we did is we gathered data.
Well, first of all, our idea was to say, look, let's see what happens when medical researchers disclose conflicts in journal articles. And we're going to look at how, not just how anybody responds, but how other medical researchers respond, and in particular, how they respond in their citation patterns. So we're going to look at the impact of disclosing a conflict in a medical journal article on citations to that article. The reason we think this is important is that researchers are probably in a good position to understand the impact that funding has on their research and their colleagues' research. It's also the case that citations are just an important measure. Not only are they a measure of the influence of an article, they're also really important for the promotion and career advancement of researchers themselves. So it's important for the labor force of researchers. So here's what we did. We gathered data from over a quarter of all articles published in the seven journals listed above, including places like NEJM, The Lancet, BMJ, and JAMA. We gathered a quarter of all articles in a 25-year period, so this was a long process. The period ran from 1986 all the way to 2006. For each of these articles, we went through and picked out and coded up the financial disclosures. We also obtained from Thomson Reuters the citations by other authors to these articles; we measured three-year citations. And we also got from Thomson Reuters, and scraped from PubMed, a whole range of covariates on these articles, the authors, and the journals: things like the impact factor of the journal, and at the article level, beyond citations, things like the length of the article.
For example, the subject matter of the article, and what type of study it was conducting or reporting on, whether it was a clinical trial, an observational study, a meta-analysis, things like that, kind of a clinical profile. And we also looked at features of the author: was the author working at a top medical school, what is the average number of citations that author had over the past three years, things like that. So a whole range of things. So we assembled this data set and we looked at it. We asked: what is the relationship between disclosure of conflicts and citations? Our treatment group, in some sense, was articles that disclosed conflicts, the control group was articles that did not disclose conflicts, and the outcome variable was three-year citations. And we found something really surprising and super persistent: articles that disclosed conflicts had more citations. Not a few more citations, a lot more citations. You see a slight difference early on in our sample, maybe 10 or 15%, but if you look starting in, say, 1995 and afterwards, you're finding massive, massive differences, nearly double the number of citations to articles that disclose a conflict. Now, that was a bit of a puzzle. And by the way, I do want to point out that this is very robust. What you're seeing is raw citations, but we can get rid of self-citations, we can down-weight citations from journals that have lower prestige as measured by impact factor, all those sorts of things, and no matter how we slice and dice the data, you will get this as the dominant result, okay? Now, when looking for explanations, we came up with two. One possibility is that what the industry is doing is funding certain types of research that just inherently get cited more.
So for example, if industry funds trials but not observational studies, and trials just generally get cited more, it's possible that, just coincidentally, the things industry funds get cited more, okay? That's one possibility. Another possibility, which seems entirely reasonable, is that the industry is actually seeking out better researchers, which makes sense. If I am trying to convince you that my drug is good, I'd rather hire somebody who's a good-quality researcher, so that when that person concludes that my drug is good, you're more likely to believe it. I'm more likely to subject myself to more rigorous analysis by higher-quality researchers because that makes the result, if it's positive, even more compelling. Does that make sense? So there are two possibilities. And by the way, it's not necessarily the case that this evidence proves there's no bias either. It just means that we have to account for the possibility of these two types of selection going on. So what we spent the bulk of our effort doing is trying to see whether we can net out these sorts of effects, where the industry is selecting more citable studies or selecting higher-quality researchers, and see if it's still the case that people make negative inferences even beyond that. So the first thing we do is just control for features of the article and features of the author. Features of the article are things that make it more citable, for example study type, things like that. Features of the author are things like: are you teaching in a top-50 medical college? What are your past three-year citations? What are the past three-year citations of the institution you're at? Things like that. So we tried that. We just did a simple regression. The left-hand side is log citations, so you can interpret the coefficients as percentages, in some sense.
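The regression just described, log citations on a disclosure indicator plus article and author controls, can be sketched on synthetic data. This is a minimal illustration of the specification, not the authors' actual data or code; the variable names and effect sizes are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic article-level data (illustrative only).
disclosed = rng.integers(0, 2, n)    # 1 if the article discloses a conflict
trial     = rng.integers(0, 2, n)    # 1 if clinical trial (a "study type" control)
author_q  = rng.normal(0, 1, n)      # proxy for author quality (past citations)

# Assumed data-generating process: disclosure raises log citations by ~0.5.
log_cites = 1.0 + 0.5 * disclosed + 0.3 * trial + 0.4 * author_q + rng.normal(0, 1, n)

# OLS of log(three-year citations) on disclosure plus controls.
X = np.column_stack([np.ones(n), disclosed, trial, author_q])
beta, *_ = np.linalg.lstsq(X, log_cites, rcond=None)

print(f"estimated disclosure gradient: {beta[1]:.3f}")  # positive by construction here
```

In the simulation the gradient is positive because we built it in; the point of the talk is that the real data show the same positive sign even after controls, which is the puzzle.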
Adjusted citations, I think, is what you're seeing here. And on the right-hand side, beyond the disclosure of a conflict by any of the authors of the article, are these other controls, the article and journal controls. And the key result is that no matter how much information you had, how many covariates you had, it is still the case that there's a positive gradient on disclosures, meaning disclosures are associated with higher citations, not fewer citations. And I highlighted the top right box, which is the specification with the most controls, our effort to net out that positive selection effect I was talking about. And we still find a positive association. It's pretty persistent. Okay, now one possibility is it's still there because maybe citations aren't the best measure of how readers view an article. Maybe we ought to look at, for example, experts and see what experts think. Another possibility is maybe our author and article controls are not good enough controls for quality. So we're going to tackle those two problems in the next two slides. So here's one thing we did. First, to address the issue of whether citations are a good measure, we changed out the dependent variable. Instead of looking at what impact disclosures have on citations, we looked at expert recommendations of a particular type. The University of Chicago's Department of Family Medicine runs a program called Priority Updates from the Research Literature, which you might be aware of. When physicians nominate articles as particularly useful, the program screens them on various measures of quality and picks out certain ones as recommended articles for its readership. These are called PURLs. And so we thought, okay, given that an article is nominated, what is the probability that it's going to become a PURL, and how does that differ based on whether or not the authors disclose a conflict?
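A toy version of this outcome swap compares recommendation rates among nominated articles with and without a disclosed conflict. The probabilities below are invented to mimic roughly a 10% gap and are not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000  # hypothetical nominated articles

disclosed = rng.integers(0, 2, n)       # 1 if the article discloses a conflict
# Assumed recommendation probabilities: lower when a conflict is disclosed.
p = np.where(disclosed == 1, 0.40, 0.50)
recommended = rng.random(n) < p

rate_with    = recommended[disclosed == 1].mean()
rate_without = recommended[disclosed == 0].mean()
print(f"P(recommended | disclosed)   = {rate_with:.3f}")
print(f"P(recommended | no conflict) = {rate_without:.3f}")
```

The comparison of these two rates, conditional on nomination, is the analogue of swapping citations for expert recommendations as the dependent variable.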
So we just changed the outcome. And here we find something that's a little more consistent with conventional wisdom, which is that when you disclose a conflict, we observe that you're less likely to actually be recommended as a PURL, okay? The effect is about 10%. Now here's the one problem with this. This doesn't end the debate, because it's not obvious that experts are the right measure to use, and it's not obvious about this particular set of experts. I think they're fantastic, though I've got home-country bias here, but the difficulty is they might be more sensitive. It's quite possible that they hold the conventional view that there's bias associated with these ties, and so they build that into their decision-making. It could be that other readers, other researchers, don't. So we want to be a little careful about ending the discussion at this point, okay? So it might be that citations are important. So we return to citations and try another way to get rid of the possibility of selection: funding doesn't just go to anybody, it goes to high-quality researchers. But the problem with controlling for quality is that you might not observe all elements of a researcher's quality. So we tried something different. We said, what if people could be their own control, okay? We asked ourselves the following hypothetical: if I'm somebody who discloses a conflict, what happens to citations to other articles that I've written in the past, okay? Now, I've written both sets of articles, so in some sense the quality of the author is the same. The question is, are readers making a negative inference?
Now, it might be a little surprising that they do this, because it's not a negative inference about the article where I disclosed; it's a negative inference about other articles where I didn't disclose. So it requires an extra step: you assume I was also conflicted on that other article, I just didn't reveal it, okay? So we looked at this. The first thing we did is expand the sample to include all medical journals. Then we defined a pre-period, which is 1999 to 2001, and a post-period, which is 2002 to 2004. We did that because around 2001 there was a big surge of reforms among medical journals in terms of what disclosures were required. But that's what we defined as the pre- and post-period. We defined the control group as authors who publish an article in the pre-period without disclosing a conflict and then publish an article in the post-period where, even in that other article, they don't disclose a conflict. The treatment group, however, is the group of people who publish an article in the pre-period without disclosing a conflict, but in the post-period publish another article where they do disclose a conflict. And what we track is citations over time to the article published in the pre-period, for both groups, right? And by the way, we match them, meaning we compare articles that are similar to each other on a range of characteristics, which I'll explain in just a second. And then we check to see whether, at the point when the treatment author discloses a conflict in another article, that affects citations to the treatment author's pre-period article. That's our identification strategy, our research design. And what's amazing, well, not amazing, but interesting to us, is that we actually found a reduction in citations to the pre-period article among the treatment group authors.
That means that if you disclose a conflict, citations to your prior article fall by 12 to 13% per year, okay? And that is pretty robust to however you match. You can match treatment group articles to control group articles on study type; on study type and publication year; on study type and the impact factor of the journal you published in; or some combination of all of these, and it still persists. It's a very robust result. All right, so that gives you a sense of what information we have. So what do we conclude? The first thing we conclude is that you cannot deny that there is positive selection by the industry, either for articles that are more citable, more likely to be cited just because of their subject matter or something like that, or for quality authors: the industry tends to fund, or create financial ties with, higher-quality researchers. That's really hard to deny; it's in the raw data. However, if you're able to control for that and filter it out, there is still a negative effect of conflict disclosures on people's assessments of the quality of an article. I think it's possible to have both. I read an article, I see a disclosure, I infer that the person must be pretty high quality, but I also realize that there could be some bias. I make two adjustments, okay? And we find evidence of both types of adjustment. And we find that readers are remarkably sensitive about it, because there are these spillover effects, which we found to be a surprising result. Okay, so what does that mean? It means it is probably the case that mandatory disclosures have some purpose, some value. For you, that might seem pretty obvious, but to me it wasn't obvious from the data that that was the case. However, it's also important to recognize that financial ties have value.
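The author-level design described above, matched treatment and control authors with citations tracked to the pre-period article, can be sketched as a difference-in-differences on synthetic data. The roughly 12% drop is plugged in to mirror the talk's number; nothing here is estimated from real citations:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000  # matched treatment/control author pairs

# Yearly citations to each author's PRE-period article, before and after the
# treatment author discloses a conflict in a separate POST-period article.
base = rng.lognormal(2.0, 0.5, n)               # shared citation level per pair
control_pre  = base * rng.normal(1.00, 0.1, n)
control_post = base * rng.normal(1.00, 0.1, n)  # controls: no change assumed
treated_pre  = base * rng.normal(1.00, 0.1, n)
treated_post = base * rng.normal(0.88, 0.1, n)  # assumed ~12% drop after disclosure

# Difference-in-differences in log citations approximates the percentage effect.
did = (np.log(treated_post).mean() - np.log(treated_pre).mean()) \
    - (np.log(control_post).mean() - np.log(control_pre).mean())
print(f"diff-in-diff estimate: {did:.3f}")  # negative: the older article loses citations
```

The matching on study type, publication year, and journal impact factor mentioned in the talk plays the role of making the control authors a credible counterfactual trend for the treated authors.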
Drug companies still do seem to seek out higher-quality researchers, or at least there's evidence that they seek out higher-quality researchers and fund them. And it's possible that they make those studies actually better. So there's a good and a bad, which I think is in hindsight not surprising, but I think we now have good evidence for it, Mark. I see Lainie going up to the... Okay, Susan Tolle, Oregon. In thinking about how much attention comes to an article that's published, there is a lot that can happen with one's media department, with paying for open access, with paying for distribution to the media more broadly. People can put a machine behind your article and increase the odds that it will receive substantial attention, which may lead to greater citation. Oh, totally possible. In fact, that might be one of the reasons why you see the positive gradient. But the flip side is that it should then be even harder to see the negative gradient, and yet we're able to separate out those two things. So I'm not entirely convinced that that's the only thing going on, I guess. Yeah, but it's possible, sure. Hi, Melanie Sir. There was actually an abstract at the recent American College of Surgeons meeting that compared self-reported disclosures to the Open Payments data, and basically showed that we're not self-reporting appropriately most of the time. This is obviously based on self-reported disclosures. Yeah, for sure. So a few things there. One of the reasons we chose the data period we did is because we'd done a prior paper where we showed that when journals ratcheted up their requirements, there was actually more self-reporting. Not perfect self-reporting by any stretch, we can't prove that, and the evidence you presented suggests otherwise, but there's a greater propensity to report.
And so that's one of the reasons we looked at that pre-period and post-period: we expected reporting to actually increase, and so you would expect this negative inference to occur. So that's part of the identification strategy. That said, we can increase sanctions to increase reporting. It's just important for us to remember that there are positive effects as well as negative effects, but there's no particular reason to oppose mandatory disclosure, I think. I think it serves some purpose. Hi, Lainie Ross, University of Chicago. So how do you correct for the fact that funding of research by industry has increased so much over that 20-year period you're looking at? The funding has increased dramatically? Or is it the disclosure of funding? No, that funding by industry has increased so dramatically. How do you deal with that? I mean, the fact that we're seeing so many more articles being published with industry disclosures, and that industry is just funding the research itself, right? I mean, NIH dollars have been relatively flat, and industry is now more than 50% of all clinical trials. Yeah, so there could be many reasons for that. One possibility is that the returns to funding have increased. It could be that the returns have particularly increased because public funding has declined. It's also possible that there are just diminishing returns, so to get any given level of research results, we need to spend a little more. It could be that the returns to publications are a little higher in the eyes of the FDA, and so funding is more likely to lead to drug approval. There are lots of possible explanations. I think that's a great question, and it warrants a lot of research, for sure. I'm just not sure it necessarily changes this particular result. That I don't see just yet.
One thing I will tell you is that in a separate paper we wrote, one of the things we found really remarkable, and very consistent with the idea that the policy changes had an impact, was that there's been a dramatic increase in the disclosure rate, which is different from, and higher than, the increase in the funding rate. And it doesn't happen just before and after a policy change; it tends to persist throughout the first decade of this century, and we don't have an answer for that either. It just seems like there's this massive increase in the rate of disclosures. Do you have an explanation for why, in this paper, there was the negative impact of disclosure on earlier papers? Yeah, I have to tell you, I was surprised. I mean, when we came up with this, our biggest debate was, is it worth the data gathering? Do we really think it's possible that people make negative inferences about other articles? We were doubtful, but we have trouble killing this result. I mean, we tried really hard, so it's there. Now, one possibility is, look, in my own field, which is law and economics, if I found out that one author, for example, had fudged data in one article, I would suddenly start distrusting all her other articles. Or conversely, if I find out somebody won an award, I would suddenly be more inclined to read all the other papers she'd written. And so it's not entirely surprising, I guess, but again, I want to be careful about this. In hindsight, this doesn't seem entirely surprising, but we thought it was a long shot. Yeah.