You're watching FJTN, the Federal Judicial Television Network. The Federal Judicial Center presents Science in the Courtroom, a series of programs for judges on science and scientific evidence. Program 5, Basic Principles of Epidemiology. This lecture is presented by Dr. Leon Gordis, Professor of Epidemiology at the Johns Hopkins University School of Hygiene and Public Health. Dr. Gordis is also a professor of pediatrics at the Johns Hopkins School of Medicine.

In recent years, epidemiology has taken on increasing importance in the courts, largely as a result of the Daubert ruling, which places increasing responsibility on judges to understand the scientific underpinnings of the evidence they hear. The purpose of my presentation is to give you an overview of epidemiology as a science. The subject is treated in much greater detail in the second edition of the Reference Manual on Scientific Evidence, published by the Federal Judicial Center, and in particular in its Reference Guide on Epidemiology. In this presentation, after a few introductory comments, I will describe the kinds of study designs that are used in epidemiology, how the findings from epidemiologic studies are used to infer causality, what some of the problems with such inferences are, and then close with some specific comments that are of interest to judges and to the courts.

If we want to know about disease in human populations, we have to be able to study human populations. We can do excellent studies in rodents, for example, but even the best such study means that we have to extrapolate from one species to another, and therefore studies of human beings are critical. Epidemiology plays this role.

What is epidemiology? This slide shows one definition: it is the study of how disease is distributed in human populations and of what determines differences in disease risk among different population subgroups. Why does one group of people have a higher risk of disease than another? What can we learn from that? How can that help us to prevent disease?

And if we turn to how epidemiology is used, there are many uses, of which three are shown here. First, epidemiology helps us to assess the magnitude of the community burden of disease: how much disease, and what type of disease, is there in our community. Second, epidemiology helps us to identify the causes of human disease, a critical factor if we are going to be able to prevent disease. And finally, epidemiology is used to study the effectiveness of different types of treatments. In this presentation, I am going to focus on the second use, identifying the causes of human disease, because this is the use that is most prominent in toxic tort cases.

Underlying all this is the basic assumption that disease is not randomly distributed in human populations. That is, some people have higher risks of disease than others, and what we want to do is account for why the risk is higher in some people than in others, in order to identify factors that can be modified to prevent disease.

You see here a list of some tongue-in-cheek facts about carrots. Nearly all sick people have eaten carrots; obviously, the effects are cumulative. An estimated 99.9% of people who die from cancer and heart disease have eaten carrots. 99.9% of people involved in car crashes ate carrots within 60 days of their accidents. 93.1% of juvenile delinquents come from homes where carrots are served regularly.
And finally, among people born in 1839 who later ate carrots, there has been a 100% mortality rate.

Now, we might chuckle in looking at this list, but it pays to ask: what is the real problem here? The problem is that we have no comparison group. These data are given without our knowing what percent of people in the general population have eaten carrots. And so underlying the questions that epidemiology addresses is the need for comparisons, and I will stress this throughout this presentation.

How do epidemiologists go about their work? We basically have a two-step process, seen here. First, we try to determine whether there is an association between an exposure and a disease or adverse health outcome. If we demonstrate that there is an association, we then try to determine whether the observed association reflects a causal relationship between the exposure and the health outcome. I will focus first on the first question: determining whether or not there is an association between an exposure and a disease or adverse health outcome.

This is a quotation from the Greek physician Galen, who practiced in Rome and was well recognized in his day as an expert physician. He wrote as follows about the treatments he provided: "All who drink of this treatment recover in a short time, except those whom it does not help, who all die. It is obvious, therefore, that it fails only in incurable cases." His excellent reputation may very well have been built on the logic implicit in this, because what we have here is a hypothesis that could not be falsified. There was no way of disproving it. The essence of what we do in science and in epidemiology is to develop hypotheses that we can test and either confirm or refute. And this is essential.

I would like to turn, therefore, to the types of study designs that we use in trying to confirm or refute a hypothesis. The first type of study is the randomized trial, also called the randomized clinical trial, because it is often used in testing new therapies. What is the design shown here? We begin with a study population, and we randomly assign the members of that population, in this slide, to a current treatment or to a new treatment. We then follow up both groups of patients and determine how many die from the disease in the current-treatment group and how many die from the disease in the new-treatment group. If the new treatment is more effective than the current treatment, we would expect to see fewer people dying from the disease among those who receive the new treatment than among those who receive the current treatment. So the design of the randomized trial is basically a simple one, and it is a very desirable type of study.

Let's turn to the issue of breast implants and connective tissue disease, which has received great attention. If we wanted to carry out a randomized trial of breast implants, we would identify a population of women who would be randomly assigned to receive breast implants or not to receive implants, and then both groups would be followed to determine what percent of each group develops connective tissue disease. Clearly, this diagram represents a hypothetical, because we could never carry out such a study. We would never get women to cooperate, and we could not do it for ethical reasons, so it is really a totally theoretical design: a randomized trial can be carried out only when we are looking at a potentially beneficial intervention. If we have a toxic or potentially toxic substance or a putative carcinogen, clearly we cannot randomize human populations to receive that type of agent. Nevertheless, the randomized trial is often considered the gold standard, the standard of truth that we try to emulate even in other types of study design.
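To make the arithmetic of that two-arm comparison concrete, here is a minimal sketch in Python. All of the counts are invented for illustration; the point is only the comparison of death rates between the randomly assigned arms.

```python
# A minimal sketch of the comparison at the heart of a randomized trial:
# the death rate in each randomly assigned arm (all counts hypothetical).

current_treatment = {"patients": 1000, "deaths": 150}
new_treatment = {"patients": 1000, "deaths": 100}

def death_rate(arm):
    """Proportion of patients in an arm who died from the disease."""
    return arm["deaths"] / arm["patients"]

# If the new treatment is more effective, we expect a lower death rate
# among those randomly assigned to receive it.
print(f"current treatment: {death_rate(current_treatment):.1%} died")
print(f"new treatment:     {death_rate(new_treatment):.1%} died")
```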
If we are not able to randomly assign people, we have the following type of study, called a cohort study: a defined population whose members are not randomly assigned but who self-select, or are assigned by circumstance, to exposure or non-exposure. People, for example, may work in a certain industrial plant while others seek jobs in another plant. We then follow up the people who have the exposure and the people who do not, and we look at the rate of disease in both groups. If the exposure is indeed related to disease, we would expect to see a greater proportion of people with disease in the exposed group than in the non-exposed group. Sometimes when we approach this type of study, we may just focus on this part of the diagram: instead of beginning with a defined population, we begin with exposed and non-exposed people. Indeed, this is what is most often done in occupational studies, where we compare people working in one industrial plant with people who are not employed there. So what we are talking about is the cohort study, also called a prospective study.

Let's look at this in a little more detail. This slide shows that in a cohort study, we begin with exposed people and compare them to non-exposed people. This is the hallmark of a cohort study. We then ascertain what proportion of each group develops the disease in question. If exposure is associated with disease, we would expect a greater proportion of the exposed people to develop the disease than of the non-exposed. This is the straightforward rationale of the cohort design. And if we apply this to the issue of silicone breast implants, we would identify women who have had implants, compare them to women who have not had implants, and look at the development of connective tissue disease in both groups. If implants are indeed associated with the development of connective tissue disease, we would expect to see a greater proportion of the implant group developing connective tissue disease than of the non-implant group.

So we have now talked about the randomized trial and about cohort studies. The final study design I will discuss in this presentation is the case-control study. In the case-control study, we begin with people who have the disease, called cases, and we identify people who do not have the disease for comparison, and they are called controls. Hence the name case-control study. We then determine the history of exposure: what proportion of the people with the disease were exposed in the past, and what proportion of the people without the disease were exposed in the past? If exposure is indeed associated with disease, we would expect a greater proportion of the cases to have a history of exposure than of the controls.

Again, let's look at the implant question. If we were doing a case-control study of implants, we would first identify a group of women with connective tissue disease and a comparison group without connective tissue disease. We would then determine what proportion of the women with connective tissue disease have a history of receiving implants, compared to the women without connective tissue disease. And if exposure is associated with disease, we would expect a greater proportion of the women with the disease, the cases, to have had exposure than of the women without the disease, the controls.
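The two designs run the comparison in opposite directions, and a minimal sketch, again with entirely hypothetical counts, may make that concrete: in a cohort study we start from exposure status and compare disease rates, while in a case-control study we start from disease status and compare exposure histories.

```python
# Cohort study: begin with exposed vs. non-exposed people and compare
# the proportion of each group that develops the disease (hypothetical counts).
exposed = {"n": 2000, "disease": 40}
non_exposed = {"n": 2000, "disease": 10}
print("cohort study: proportion developing disease")
print(f"  exposed:     {exposed['disease'] / exposed['n']:.2%}")
print(f"  non-exposed: {non_exposed['disease'] / non_exposed['n']:.2%}")

# Case-control study: begin with cases vs. controls and compare the
# proportion of each group with a history of exposure (hypothetical counts).
cases = {"n": 200, "exposed": 80}
controls = {"n": 200, "exposed": 30}
print("case-control study: proportion with a history of exposure")
print(f"  cases:    {cases['exposed'] / cases['n']:.0%}")
print(f"  controls: {controls['exposed'] / controls['n']:.0%}")
```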
So what we have seen up to now are the three major types of study design used in epidemiologic studies: randomized trials, which generally cannot be used for putatively toxic agents, and cohort and case-control studies, which are used to explore the relationship of an exposure to a specific disease.

Well, let's assume that we have done the study properly. Now the question is, what do we do with the findings from the study? To recapitulate before turning to how we analyze these data: remember that in a cohort study we are comparing exposed people to non-exposed people, and in a case-control study we are comparing cases, people with the disease, to controls, people without the disease. We are then looking at the rates of disease in the exposed and non-exposed in a cohort study, and at the proportion exposed among the cases and among the controls in a case-control study. Both of these approaches are aimed at demonstrating whether or not there is an association between exposure and the development of disease.

Let's go back to our original question. We are carrying out these studies in order to determine whether there is an association between an exposure and a disease or adverse health outcome, or, as it is restated here, is there an excess risk of disease in people who have been exposed? I would therefore like to turn to the question of how we measure excess risk.

The first measure of excess risk is the relative risk, perhaps the most commonly used measure of increased risk. What does it mean? The relative risk is the ratio of the risk of disease in exposed people divided by the risk of disease in non-exposed people. How do we interpret it? If the relative risk equals one, it means that the numerator of that fraction is the same as the denominator: the risk in exposed people is the same as the risk in unexposed people, and there is no association between exposure and disease. If the relative risk is greater than one, the numerator is greater than the denominator: the risk of disease in exposed people is greater than the risk in non-exposed people. This is a positive association, and it may be causal; we will shortly discuss how we move from association to causation. If the relative risk is less than one, the risk in exposed people is less than the risk in unexposed, the numerator is smaller than the denominator. This is a negative association, and it could reflect a protective effect. For example, if we have an effective vaccine, we would expect to see a relative risk less than one, because people who had received the vaccine would subsequently develop less disease than people who had not been vaccinated.

Another measure of excess risk is the odds ratio, and this again is discussed in some detail in the reference manual. Suffice it to say that for general purposes you can interpret the odds ratio just the way you would interpret a relative risk. The odds ratio is most commonly used in a case-control study.
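Both ratio measures can be read off a single 2x2 table. Here is a minimal sketch with hypothetical counts; note how close the odds ratio comes to the relative risk when the disease is rare.

```python
# One hypothetical 2x2 table:
#   a, b = diseased / not diseased among the exposed
#   c, d = diseased / not diseased among the non-exposed
a, b = 40, 1960
c, d = 10, 1990

# Relative risk: risk of disease in the exposed divided by the risk
# of disease in the non-exposed.
relative_risk = (a / (a + b)) / (c / (c + d))

# Odds ratio: the measure usually reported in case-control studies,
# where the true risks cannot be computed directly.
odds_ratio = (a * d) / (b * c)

print(f"relative risk = {relative_risk:.2f}")  # 4.00: a positive association
print(f"odds ratio    = {odds_ratio:.2f}")     # about 4.06: nearly the same
```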
The third and final measure of association that I would like to discuss with you is called the attributable risk, and it is shown schematically in the next few slides. Let us consider an exposed group and a non-exposed group, each of which has a risk of developing disease. This bar represents the total risk of developing the disease in the exposed group, and this bar the total risk of developing it in the non-exposed group. For example, let us say that the exposure is radiation and the risk is that of developing a certain type of cancer after radiation. What do we see? We see that the risk in the exposed group, the radiated group, is greater than the risk in the non-exposed group. But we also see that even in the non-exposed group, which was not radiated, there is some risk of developing the cancer. That is, not all the cancer is due to the exposure, because even some of the non-exposed people develop the disease. What we see is that the non-exposed group basically represents a background group. They were not radiated in this example, yet they have a background risk, and this background risk is a risk that even the exposed people share, because they are members of society and members of the community.

Now suppose we ask: in the exposed group, how much of their risk can we attribute to the fact that they were exposed? Or, to say it differently, in the radiated group, how much of their risk of cancer is due to their having been radiated? We can see quite clearly how to calculate this. The total risk in the exposed group is due partly to the exposure and partly to the background risk that is not due to the exposure. The exposed people have an additional risk that the non-exposed people do not have. If we want to calculate how much of this risk is due to the exposure, we take the total risk in the exposed group and subtract from it the risk in the non-exposed group. This is the attributable risk. It tells us how much of the risk we can attribute to the exposure.

The attributable risk has taken on meaning in the courts because it has been suggested that an attributable risk greater than 50% could be equated with "more likely than not" that a specific exposure caused the disease. This is extremely controversial and complex, and it is discussed in more detail in the chapter in the reference manual.
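The subtraction, and the reason a relative risk of two marks that 50% line, can be shown in a few lines. This is only a sketch with hypothetical risks; the controversy about using the measure in court is not touched by the arithmetic.

```python
# Attributable risk with hypothetical risks: the exposed group's total
# risk minus the background risk that everyone shares.
risk_exposed = 0.02       # total risk in the exposed (e.g., radiated) group
risk_non_exposed = 0.005  # background risk in the non-exposed group

attributable_risk = risk_exposed - risk_non_exposed  # 0.015

# The same quantity as a proportion of the exposed group's total risk.
# Algebraically this equals 1 - 1/relative_risk, so it exceeds 50%
# exactly when the relative risk exceeds 2.
relative_risk = risk_exposed / risk_non_exposed             # 4.0
attributable_proportion = attributable_risk / risk_exposed  # 0.75

print(f"attributable risk       = {attributable_risk:.3f}")
print(f"attributable proportion = {attributable_proportion:.0%}")
print(f"relative risk           = {relative_risk:.1f}")
```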
So far, what we have discussed are the various types of study design and how we use those designs to determine whether there is an association between exposure and disease. Now we have to go to the next step and ask: is the association causal? This was the second question I showed you in the earlier part of the presentation. How do we move from association to causation? This is a particularly challenging step.

Decades ago, when the possible link between cigarette smoking and lung cancer became apparent, the Surgeon General convened an expert group to set guidelines, or criteria, for how we would move from evidence of association to causal inferences. Some of these are seen on this slide. First, is there a temporal relationship between the exposure and the disease? What does this mean? If we believe that a certain exposure causes a disease, then the exposure should have occurred prior to the development of the disease. If the exposure occurred after the disease, clearly that is not consistent with a causal inference. Next is the strength of the association, reflected in the relative risk, which I just discussed. Third is the dose-response relationship, which says that if an exposure is related to the development of disease, then the greater your exposure, the greater your risk of disease should be. The association should also be consistent across different studies. And if we remove the exposure, we would expect the risk of disease to go down. The relationship we are suggesting between exposure and disease should be biologically plausible, and we should rule out alternative explanations. It has also been suggested that the association should be specific for a certain exposure and a certain disease.

Let me show you two examples, and the issue of temporal relationship in particular is what I would like to focus on for a moment. There is a well-known saying, post hoc ergo propter hoc: after this, therefore because of this. It is a human tendency to interpret the cause of any event as being the event that preceded it; because something follows something else, we generally assume that it must be due to it. Clearly that is not so. As I said a moment ago, we need a temporal relationship. We need to know that the putative cause, the exposure, occurred before the disease developed. But that does not mean that every time something follows something else, the two are causally related.

This slide shows the dose-response relationship between cigarette smoking and lung cancer. As I mentioned a few moments ago, the greater the exposure, the greater the risk. And this shows that the more cigarettes smoked, the greater the risk of dying, as reflected in the mortality rate. So this is an example of a dose-response relationship, which provides extremely strong support for a causal inference.
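The pattern is easy to see in numbers. Here is a minimal sketch; the rates below are invented and are meant only to show the shape of a dose-response relationship, not to reproduce the slide.

```python
# Invented lung cancer mortality rates per 100,000 per year by smoking dose.
mortality_by_dose = {
    "non-smokers": 10,
    "1-14 cigarettes/day": 80,
    "15-24 cigarettes/day": 130,
    "25+ cigarettes/day": 220,
}

baseline = mortality_by_dose["non-smokers"]
# A dose-response relationship: the relative risk climbs with the dose.
for dose, rate in mortality_by_dose.items():
    print(f"{dose:>22}: {rate:>3} per 100,000, "
          f"relative risk {rate / baseline:.1f}")
```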
Let's stop and ask: what are the legal and policy implications of finding a strong association even if causation cannot be established using these criteria or guidelines? That is, we show a strong association, the relative risk is high, but when we go down that list of guidelines, we are not able to document them. Often we simply do not have the data; the appropriate studies have not been carried out. This represents a major problem for the courts in interpreting epidemiologic evidence. We speak of factors being risk factors rather than causal factors only because the evidence is not strong enough for us to come to a conclusion regarding causation. It is an unresolved issue, and it poses serious problems for people faced with interpreting the legal implications of a strong association when the causal guidelines have not been fulfilled.

At this point, I would like to turn to some of the problems we encounter in trying to interpret the results of these studies and in trying to infer causation. If we want to infer causation, that is, to conclude that a certain exposure is associated with an increased risk of developing disease, the first question we have to ask is: could the association be due to chance?

Let us look at an example. Years ago, a major issue was raised, with tremendous legal ramifications, regarding Bendectin, a drug prescribed to women for nausea in pregnancy. The question was whether Bendectin use might be associated with congenital malformations. Let us go schematically through the considerations in this issue. This oval, shown diagrammatically, represents a population of pregnant women. Within this oval, there is a subset of women who used Bendectin in pregnancy, shown by the circle. Within this group of pregnant women, there is also a subset of women who deliver a child with a malformation. What, then, are the possible relationships between Bendectin use and congenital malformations? Let's look at a few scenarios.

In the first scenario, we see that all the children with malformations were born to women who used Bendectin, shown schematically here. This would certainly be very strongly suggestive of a relationship between Bendectin use and the risk of malformations. Another possibility is that none of the children with malformations were born to mothers who took Bendectin; the circles are mutually exclusive. This would be strongly indicative that there is no relationship between Bendectin use and the development of malformations. What usually happens, however, is that we have a partial overlap of the two circles. There are women who have a child with a malformation who did not take Bendectin, and many Bendectin users never had an abnormality in their child. And yet there is an overlap area of women who took Bendectin and whose children had malformations. This overlap may have taken place by chance, not because Bendectin caused the malformations, but simply because, with circles of these sizes, you would expect some overlap between the two.

Where does the dilemma arise in interpreting this? When a child with a malformation whose mother has taken Bendectin is presented to the jury, what the jury is seeing is a child from this area of overlap. But that does not take into account the probability that this overlap could occur by chance. And so we have statistical methods, which are not part of this presentation, for addressing this issue: could the association we observe be an artifact of chance?
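The size of that chance overlap can be illustrated with a small simulation. The proportions below are invented, and the simulation deliberately builds in no causal link at all: use of the drug and malformation are drawn independently, yet an overlap group still appears.

```python
# Simulate pregnancies in which drug use and malformation are
# statistically independent (no causal link by construction).
import random

random.seed(0)
n = 100_000            # hypothetical pregnancies
p_drug = 0.30          # assumed proportion of mothers who took the drug
p_malformation = 0.03  # assumed background malformation risk

overlap = sum(
    1
    for _ in range(n)
    if random.random() < p_drug and random.random() < p_malformation
)

# Under independence we expect about 0.30 * 0.03 = 0.9% in the overlap.
print(f"children in the overlap: {overlap} of {n} ({overlap / n:.2%})")
print(f"expected by chance alone: {p_drug * p_malformation:.2%}")
```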
The next issue is that of bias. When we come to interpret the findings of a study, we have to ask whether mistakes were made in the study, and there are many types of bias. One is selection bias. Who was selected as the cases and as the controls? Was there a bias in the way we selected them that would cause us to come out with certain findings? Or, in a cohort study, was there a selection bias in who was chosen as the exposed people and who was chosen for the comparison group of non-exposed? We may also have a bias in the type of information we obtain. How good is the information? Does it differ from one group to another? What we try to do is to minimize the types of bias that might occur in a study and also to characterize the bias, so that we know the extent of the impact it could have on our findings.

I would like to turn next to one of the most important issues in trying to interpret associations and derive inferences of causation, and that is the issue of confounding. It is discussed in detail in the reference manual, beginning on page 369. This diagram shows a causal association: we observe an association between a certain characteristic or exposure and a disease, and the characteristic or exposure causes that disease. This is a causal relationship, and causation here is indicated by the arrow connecting the two boxes. It is also possible, however, to observe an association, as shown here, not because the characteristic or exposure causes the disease, but because both of them are linked to a third factor, and that linkage results in an observed association that is not causal.

For example, this slide shows two possible interpretations of the well-known association between increased cholesterol and increased risk of coronary heart disease. On the left is a causal relationship; on the right is a confounded relationship. Here we see that increased cholesterol causes an increased risk of coronary heart disease. And here we see that increased cholesterol is observed to be associated with an increased risk of coronary heart disease, not because the two are causally related, but because a third factor, called factor X, is linked to increased cholesterol and is a cause of coronary heart disease. What could that factor be? It could, for example, be the genetic profile of the individual. There could be a group of individuals who have a genetic profile that puts them at increased risk of coronary heart disease, and that same genetic profile is associated with increased cholesterol. Well, you might ask, why do I care? What is the importance of this? The importance is that if the true model is the one on the left, then if I can intervene and get people to lower their cholesterol, I can anticipate that I am going to be able to lower their risk of coronary heart disease. But if the model is the confounded one on the right, then no matter what I do to reduce cholesterol, I am unlikely to have any impact on the risk of coronary heart disease, which is being caused by that genetic profile. So understanding whether a relationship is causal or confounded is extremely important.

Here is another example. Many years ago, Dr. Brian MacMahon, Professor and Chairman of the Department of Epidemiology at Harvard, reported an association between increased coffee consumption and cancer of the pancreas. The question arose: was the association he reported a causal one, or was it the result of confounding? On the left, again, is a causal relationship: increased coffee consumption causes an increased risk of cancer of the pancreas, and therefore we observe an association between the two. On the right, we have the same observed association because both are associated with factor X. What could factor X be? One suggestion was that it could be cigarette smoking. Cigarette smoking is known to be a cause of increased risk of cancer of the pancreas, and it is also known that you almost never find a cigarette smoker who does not drink coffee; there is a close link between cigarette smoking and coffee consumption. So if we observe an increased risk of cancer of the pancreas in people who drink more coffee, it could be not because there is a causal relationship, but because both are associated with cigarette smoking. Clearly the interpretation has major clinical and public health implications, depending on which model appears to be correct.

Finally, I want to show you a more recent example, a study that appeared within the past few months and raised the question whether high blood levels of Agent Orange are associated with a high prevalence of diabetes in adults. The authors reported an observed association between increased blood levels of Agent Orange and the prevalence of diabetes. Well, what does that association mean? It is possible, if the association is real, that we are seeing the result of a causal relationship: that high blood levels of Agent Orange cause a high prevalence of diabetes. That is certainly one explanation. But the possibility was raised that the association might be due to confounding. How could that happen? We know that obesity is a risk factor for diabetes. We also know that dioxins, a component of Agent Orange, are stored in fat tissue, in the adipose tissue, and therefore we would expect higher blood levels of dioxin in people who have more cells storing the dioxins. It is therefore possible that we observe the association between high blood levels of Agent Orange and a high prevalence of diabetes because both are associated with obesity: obesity, we know, is a risk factor for diabetes, and obesity is linked to high blood levels of Agent Orange. The nature of this relationship has not yet been clarified. But it shows that the problem of confounding is a terribly important one that we have to take into account, and it is a problem that you are going to hear about in the courts as you hear expert witnesses testify about whether confounding has been taken into account in a given case. We will come back to that in a few moments.
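One standard way to probe for confounding is stratification: split the data by the suspected confounder and see whether the association survives within the strata. Here is a minimal sketch with invented counts, built so that smoking does all the work, in the spirit of the coffee example.

```python
# Invented counts: within each smoking stratum, coffee drinkers and
# non-drinkers have identical disease risks, yet smokers both drink
# more coffee and get the disease more often.
strata = {
    "smokers":     {"coffee": (2000, 8), "no coffee": (500, 2)},
    "non-smokers": {"coffee": (1000, 1), "no coffee": (4000, 4)},
}

def rate(group):
    people, cases = group
    return cases / people

# Crude (unstratified) comparison: coffee appears harmful.
coffee = [g["coffee"] for g in strata.values()]
no_coffee = [g["no coffee"] for g in strata.values()]
crude_rr = (sum(c for _, c in coffee) / sum(n for n, _ in coffee)) / (
    sum(c for _, c in no_coffee) / sum(n for n, _ in no_coffee)
)
print(f"crude relative risk: {crude_rr:.2f}")  # about 2.25

# Stratified comparison: the association vanishes within each stratum,
# pointing to smoking, not coffee, as the explanation.
for name, groups in strata.items():
    rr = rate(groups["coffee"]) / rate(groups["no coffee"])
    print(f"{name}: relative risk {rr:.2f}")  # 1.00 in each stratum
```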
What is the problem in the courts? We can have a recognized confounder, such as cigarette smoking in the example I just gave you, or we can have unrecognized confounders. And the problem can be a legitimate one: there can be a potential confounder, a factor that we know may be a confounder, that was not studied in the research being reported and presented as evidence. That would be a deficiency in the study. But we often hear the statement made, in trying to dispute a study, that "all confounders have not been taken into account," with no specific omitted confounders listed by the person who is trying to disparage the study. The world is full of potential confounders. All we can expect is that confounders will be taken into account when they appear likely to be relevant in the given situation. So this second kind of statement is much too general to be worth very much: the person who wants to make that criticism has to say which specific confounders that could have affected the relationship were not taken into consideration by the investigator.

At this point, let us just summarize. If we observe an association, it can be non-causal or causal. If it is non-causal, it can be due to chance, to bias, or to confounding. Or it may indeed be causal.

I would like to turn now to some questions regarding science and scientific evidence. These are questions that have been raised with me over the years by judges who are dealing with epidemiologic evidence in their courtrooms. First, why don't human and animal studies always agree? This can be very disturbing: one scientist gets up and testifies about findings from animal studies, an epidemiologist gets up and testifies about findings from human studies, and the findings do not agree. But we are obviously dealing with different types of subjects. You cannot automatically extrapolate in dose or in effect from animals to human beings, and so there are very legitimate reasons why the studies may disagree. Second, we are asked, why do epidemiologic studies, or epidemiologists, often disagree? This is a more difficult problem. The reason we may disagree, maybe not that often, is that there may be questions about certain methodologic issues in a study, or questions about differences in how we interpret the data, and this can be extremely complex. Judges also ask, why do many scientists refuse to commit themselves and seem to hedge in their opinions? It is because there is no end point in science. The scientist is seeking the truth, and today's truth may be modified tomorrow by further studies. The dilemma arises because in a trial a judgment has to be made, yes or no. But science does not work that way. It works in small increments of knowledge, and we are always subject to having our opinions refuted.
And finally, how can I tell who is an expert whom I can reasonably believe? Judges tell me that they are besieged by experts and do not know whom they can trust. I would like, therefore, to offer some personal opinions about how I believe you can try to assess an expert witness. I would ask the following questions. What are the educational background and training of this witness? Did the person train at a good institution, in a good training program, with reputable people? Second, is this person a professional witness? Does he or she make most of his or her living from being a witness? If the person is an itinerant witness, I would tend to have a greater level of skepticism about the testimony to be offered. Third, has the witness worked, done peer review, published research, and taught in the field or discipline in which he is testifying? Or has the person come from a totally different field and become an instant expert in epidemiology or some other field? Does the witness always take an extreme and unequivocal position? I mentioned a few moments ago the importance of uncertainty and of changing opinions in science. Does the expert acknowledge limitations and uncertainties and admit to the possibility of being wrong? Can the witness name other people working in the field, including some who disagree with his position? And finally, if the witness can name those people, can the witness state the reasons why they disagree with him? I think these questions can provide very useful guidelines regarding the potential testimony that a witness is going to provide.

Perhaps one of the major questions that epidemiologists are confronted with in the courtroom is the following: can we derive causal inferences regarding disease in an individual? The judge will ask, did this person's disease occur because of this exposure? The problem is that epidemiology is not well equipped to answer that question. Epidemiology can best answer the question of whether a certain exposure has been shown to be capable of causing a disease in human populations. But in a specific individual, in whom there are often multiple exposures, for example at an industrial site, it may be very problematic to come to a conclusion about whether a specific exposure caused the disease in that individual. I mentioned earlier that the attributable risk has been used, particularly by lawyers, with an attributable risk of over 50% being considered by some to be the equivalent of "more likely than not." But I should say that this is a use that has been made by lawyers, discussed on page 381 of the reference manual, because it is an important issue. The question of whether we can draw a conclusion regarding the cause of disease in a specific individual is one where epidemiologists would generally say they cannot, but the courts are desperate to be able to draw such a conclusion and have therefore turned to this notion of attributable risk. It is a legal decision, and not a scientific or epidemiologic decision, to use the attributable risk in this manner.

I would like to mention briefly one more problem, and that is the problem of how data and conclusions are presented.
Words have great power, and one does not have to be right or wrong to convey an opinion through subtleties in how opinions are expressed. I will give you two examples.

First, there is the question of the relative risk, which I spoke about before, as against the absolute risk. What is my level of risk, not as a comparison, but simply my level of risk as an individual? Sometimes we look at a relative risk and say that it shows a 20% increase. You say, gee, that is quite high. But then you ask, what is the absolute risk, what is the baseline risk here, and you find that it is, say, one in 100 million people. A 20% increase of that risk is minimal. So it is not just a question of saying there is a 20% increase; the question is, what is our starting point, what is the baseline? Depending on one's opinion, one may choose to express the finding as a 20% increase, or one may choose to express it in terms of the baseline risk. The means we choose for expressing the risk often reflects the feeling we have about whether or not there is a true risk and what we are trying to convey.

Let me show you another example. Let us say we carry out a clinical trial of a new therapy against an old therapy, and we look at the mortality in both groups. In one group it is 94%, and with the new therapy it is 93%. Well, you might say that is really not much to write home about: 94% versus 93% does not look like much of an effect. But someone else, who may be advocating for the new treatment, comes and says, no, you are not looking at it correctly; let's look at survival with the two therapies. Survival with the old therapy was 6%, and it has gone up to 7%: roughly a 16% relative increase in survival. Which is true? The answer is that both are true. Again, it is a question of how you choose to express the findings. Therefore, if you are hearing scientific testimony in court, you always have to ask whether the person has chosen a particular way of presenting the findings that may tend to color the conclusions.
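Both framings can be computed from the same numbers. A minimal sketch, using the figures from the examples above, with an assumed one-in-100-million baseline for the relative-risk illustration:

```python
# Framing 1: mortality versus survival for the same trial results.
old_mortality, new_mortality = 0.94, 0.93
old_survival, new_survival = 1 - old_mortality, 1 - new_mortality
relative_gain = (new_survival - old_survival) / old_survival
print(f"mortality: {old_mortality:.0%} -> {new_mortality:.0%} (one point)")
print(f"survival:  {old_survival:.0%} -> {new_survival:.0%} "
      f"({relative_gain:.1%} relative increase)")

# Framing 2: a "20% increase in risk" on a tiny assumed baseline.
baseline = 1 / 100_000_000  # assumed: one in 100 million
with_exposure = baseline * 1.20
print(f"absolute risk: {baseline:.1e} -> {with_exposure:.1e} per person")
```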
Another issue that I alluded to before is the question of whether an exposure is a risk factor or a causal factor. There is as yet no unanimity among epidemiologists in defining a risk factor as against a causal factor, but most of my colleagues would probably say that when the evidence is not strong enough to call something a causal factor, yet there is evidence of an association, we would tend to call it a risk factor. The exact point of demarcation is not clear, and these two terms are often used interchangeably even though they may not mean exactly the same thing.

Finally, in closing, I would like to stress what I consider one of the most important issues I have mentioned: there is a tremendous need to accept uncertainty and to deal with it, and I think this reflects the difference in culture between the scientific community and the legal community. Many of my colleagues say they do not want to participate in legal proceedings; they do not want to be impeached by lawyers over their opinion about whether a certain agent caused a disease, because they believe there is a level of uncertainty that would qualify their comments, and yet when they get into the legal arena they often feel they are being forced to make an unequivocal statement, yes or no, about whether a substance caused the disease in this person.

I believe there is a tremendous need to bridge the legal and scientific communities, to see how we can make both groups more comfortable with each other, so that scientific evidence can be best used in the cause of justice and of truth. Thank you.