Yeah, we'd like to get started now. I'd like to welcome everyone to the third meeting of the chronic hazard advisory panel on phthalates and phthalate substitutes. I think it's appropriate, since there may be people in the audience who don't know the members, that we begin by introducing ourselves. My name is Phil Murkis, and I'm the chair of the committee. And again, I welcome all of you. And I'm Bernard Schwartz, I'm the vice chair of the committee. Chris Gennings, just a committee member. Holger Koch from Germany. And I'm Andreas Kortenkamp. I come from London, but I'm originally from Germany. Thank you. And two of our members are not yet present, but apparently on their way. What I'd like to do now is turn it over to Mike Babich, who has an update on staff activities. Hey, I'm Mike Babich. I'm the CPSC project officer for this panel. And I just wanted to update the CHAP on some of the work that the staff has been doing. We originally had done several toxicity reviews. We had tox reviews of the original six phthalates that are mentioned in the CPSIA, the ones that are regulated by that. And since then, the CHAP had asked for information on additional phthalates, because there's actually a large number of them. And if you turn in your books to tab five, table one was just the original ones. And table two is the new ones. We've identified 29 additional phthalates, and we're working on toxicity reviews of these. We have divided them by priority into tiers based on factors like how much is produced or whether they're measured in biomonitoring studies and so on. And so Versar, our contractor, is working on tiers one and two, the first 10 phthalates. And those reviews are in progress. Those 10 are relatively data rich; some of them are quite data rich. The tiers three and four we're working on at CPSC, and most of them are completed. And you have in your packet a document that covers 17 of them. 
And the other few, there are a few that are in draft and they should be ready soon. And in tiers three and four, many of them are very data poor. And then table three shows the substitutes. The first five of them, the tox reviews on those phthalate substitutes were done by Versar, and you received them at the first meeting. The one we didn't include, number six, 2,2,4-trimethyl-1,3-pentanediol diisobutyrate, which they call TXIB, showed up in the studies that we did of the children's products. So we are working on a tox review of that as well. And of the five substitutes that we reviewed, in our tests of the toys we didn't see any DEHA, and we did not see TOTM, the trioctyl trimellitate, or at least not very much. But we did see a lot of the TXIB. Although it's not used by itself, it was present in a lot of products. The only other thing I have to add is I've sent out a draft log of our teleconference. And if you have any comments on that, let me know. And I'll try to finalize that and post that sometime next week. Chris? Just to clarify, Mike, if I remember right, in our previous meetings we've talked about basing our work only on published data. So when you talk about these reports, can we consider them published data, so that results from those reports could be used in our... Most of the reports are based on published data, publicly available data. It's not necessarily peer-reviewed, from a journal article. Some of these are from EPA's TSCATS database, what we call their contractor reports. They probably do undergo some sort of peer review, but they're not published in journals, but they are public. There are a few exceptions that we talked about, the DINCH. And I think there's a few others where some of the studies... We don't have the studies, we just have the summaries. DPHP is the other one. There's one or two others, but for the most part the studies are available. 
And the reviews all note whether we have the original studies and so on. And all of the documents that the staff generates are public, and they'll be posted on the website. Any other questions? If not, then we'll proceed. We're going to depart from the draft agenda. We were going to discuss the human biomonitoring paper, but we're going to put that off until this afternoon, after lunch. And we're going to go instead to the invited speakers. And I think Dr. Burke is not yet here. So if we could go to Dr. Stahlhut and get his presentation, that'll give us more time to actually get input and have a good discussion. Okay. Can everyone hear all right? Well, first of all, I just have to say that I'm honored to be asked to talk about this. It's a topic that I hadn't intended to learn about, but about a year or two ago I was trying to do what I thought would be a very simple task, namely look at the dietary history in this large CDC data set, since some of the items in that history are noted as being canned or not. Okay, it's not good enough data to try to do a complex model, but this is epidemiology. It should be good enough for that. In other words, it should be able to help us separate people with high levels from those with low levels. And my attempts to do that were met with repeated failure, and I stumbled over it for months. I should have let go and moved on to something else. But eventually it occurred to me that maybe there was a different way to look at epidemiology besides what they taught me in Epi 101. And I didn't know for sure that I was doing the right thing, but it seemed like at least a path that might start to unravel things a little bit. And so the question was, is it possible to look at fasting time as an indicator, roughly similar to what you would do in an experimental study where you have animals or people and you give them something at a certain time and you watch what happens? So could we look at this large population and treat them the same way? 
And of course I'm not the first person to think of this. There are other studies where people have done this, done it on, you know, persistent chemicals, lead and so on. So what happened was I needed to learn about how NHANES handled fasting time in order to do that work, and so that's why I'm here, because you have a similar problem. This text, of course, is way too much. It's the only slide that has that much text on it, except that it's repeated at the end. I'll briefly review it for you and then we'll move on to the more interesting parts of the talk. The first thing is that fasting time adjustment, or not, can actually influence the results you get in an epidemiology study from NHANES in certain situations. And naturally you would expect that it would be most important if the outcome is related to eating in some fashion, or if the toxicant itself is. Now what I have found so far is that the results are less striking than I would have expected, whether it was bisphenol A or some of the more rapidly metabolized phthalates. Nevertheless, it's not without value and it is something to know about. And there are various reasons why that might be the case. I'll skip ahead and just say that it's also very difficult, in fact really impossible, to validate fasting time in any kind of authoritative way. These are self-reports. So the people say what they say, and those data are taken seriously by people who try to do other things with them. So for instance, when you see a report on the news that X percent of the U.S. population has undiagnosed diabetes, well, how did they come up with that? They had to have measured something that let them know that the person had a glucose that was greater than whatever. The person had to say that they didn't have diabetes, that they didn't know they had it. And then you have to know that the person had been fasting for an appropriate period of time when you got the data. 
And so that's what the fasting time in NHANES is for, to allow that kind of thing. But you can use it for other things, and that's what we're talking about now. And then finally, you can certainly adjust for fasting time, and that will often, I think, solve the problem. But it's related to other covariates, so it's really not ideal. It may be the best thing available in NHANES to deal with this problem, but in other data sets you might have better data that would help you look at pharmacokinetics and such. So why do we care about it? Well, I've already said that: it either affects the exposure assessment or the outcome assessment. We should probably stop and mention briefly that, if I understand correctly, I'm really specifically addressing NHANES and not fasting time in the greater universe. So in NHANES, the fasting time is the hours and minutes since the last food or drink other than plain water, is the phrase. But they then instruct the subjects that they should not count diet soft drinks, coffee or tea that doesn't have sugar in it, alcohol, interestingly, and certain minor items like antacids and such. Now if you look at the detailed data, you'll find that these kinds of caveats are very small in numbers. So out of several thousand people, maybe there were 16 who violated the alcohol part. Because they were told beforehand, don't eat or drink anything besides plain water after a certain time. So these folks cheated, but they tried to capture the cheating. Next, I'll mention, do we have a, is there a, oh, there's the laser. Everybody has a fasting time in the data set. But as far as the NHANES people are concerned, fasting is only valid if the fasting time is between 8 and 24 hours. And the reason that matters is because my first time into the data set, I didn't know that. 
I assumed that a fasting sub-sample containing fasting glucose and fasting insulin would naturally have fasting data. But that's not true. If the person only fasted for two hours, their data is still there. So if you were to simply take their data and do things with it, you might make some mistakes. So you have to merge it with another data file that would tell you how many hours they really fasted. The way they protect some people against this is that because this is a sub, this is a complex survey database, they have sample weights. And the weights are set to zero on the people who don't have appropriate fasting times. But the data itself is still there. And in my case, that's actually fortunate. I just want to give you an example of this. These are histograms of fasting time. There's three sessions in NHANES. The morning session starts around 9 o'clock in the morning or so. It goes for about three hours. And there's an afternoon session and an evening session. And you can see that the characteristic shape is very different depending on which session. But that makes perfect sense. If you just think about it for a minute, you realize that, oh yes, the people coming in in the morning, unless they got up at four in the morning, they fasted at least back to when they went to bed, which is maybe midnight or 11 or whatever. That's no fun. Okay. And similarly with the afternoon, right? You can think it through and figure out where the peaks ought to be based on what session it is. But what's interesting, though, is if you look at then, this is the 2003-04 data. This is 2005-06. They don't look the same. If you look carefully, you'll see that the peak in 2003-04 is around eight hours. The peak in 2005-06 is around two. And in fact, overall, the 2005-06 data has a shorter fasting time. 
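The merge-and-filter bookkeeping described above can be sketched roughly like this. This is a toy illustration, not CPSC or NHANES code: the record layout and values are invented, and the ID name "SEQN" just follows the usual NHANES convention, but the 8-to-24-hour validity window is the one the talk describes.

```python
# Hypothetical fasting-questionnaire records: respondent ID -> hours fasted.
fasting_hours = {1001: 12.5, 1002: 2.0, 1003: 9.0, 1004: 26.0}

# Hypothetical lab records: everyone has a glucose value, even people who
# did not fast long enough -- exactly the trap described in the talk.
lab = [
    {"SEQN": 1001, "glucose": 95.0},
    {"SEQN": 1002, "glucose": 140.0},  # only fasted 2 h: not a fasting glucose
    {"SEQN": 1003, "glucose": 101.0},
    {"SEQN": 1004, "glucose": 88.0},   # fasted >24 h: outside the valid window
]

def merge_and_flag(lab_records, hours_by_id, lo=8.0, hi=24.0):
    """Attach fasting time to each lab record and flag the valid window."""
    out = []
    for rec in lab_records:
        hrs = hours_by_id.get(rec["SEQN"])
        out.append(dict(rec, fast_hours=hrs,
                        valid_fast=(hrs is not None and lo <= hrs <= hi)))
    return out

merged = merge_and_flag(lab, fasting_hours)
valid_ids = [r["SEQN"] for r in merged if r["valid_fast"]]
# valid_ids == [1001, 1003]
```

The point of keeping (rather than dropping) the invalid rows mirrors what NHANES itself does: the data stay in the file, and it is the analyst's job, via the merged fasting time or the zeroed sample weights, to decide which rows count as fasting.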
So when I found out that bisphenol A levels had dropped, that was especially interesting to me, because I knew that the fasting time had also dropped, so the levels should have gone up. So they dropped extra, except that bisphenol A doesn't seem to vary a great deal with fasting time, for various reasons. But anyway, you have to watch out for these kinds of things. And you might guess that the reason these are different is that the rules changed. The subjects were instructed to do something different in 2005-06. In 2003-04, everybody older than 12, except for insulin-dependent diabetics, was supposed to fast a certain length of time if they were in the morning session, and some hours if they were in one of the afternoon or evening sessions. But in 2005-06, that was changed. Everybody fasts in the morning, including the diabetics on insulin, but nobody fasts in the afternoon or evening, which is why all of a sudden this eight-hour fast now goes to two. They're told to do something different. It's the sort of thing that can burn you in a data set that you didn't collect. These kinds of traps are everywhere. Factors affecting fasting time? Well, aside from the session, which I just told you about, there's age. And again, unless you are of a certain age, you're not asked to fast. So naturally, the younger folks have a very short fasting time. So if you expected bisphenol A to drop dramatically the way you would think it would in the data, then kind of by default you would expect that the younger children would have to have a higher level, obviously. From that point on, though, it flattens out. And then insulin status is kind of interesting. But again, it's an artifact of... I thought at first that it would be that the sickest people have a hard time fasting. But you don't see much evidence of that in the data. And I think the reason is that the sickest people don't come in. I'm making this up. 
I don't really know that. All I know is that the data doesn't show it. What the data does show is probably simply the effect of the instructions. This is 2003-04 data. Diabetics on insulin were told not to fast, so they didn't. Now, that looked much better on my computer. But I will just tell you that the bisphenol A data in 2005-06 has basically the same shape as 2003-04, only lower all across the board. Another thing that was more interesting in the 05-06 data, and since this is just one look, it's hard to say whether this is real or not, is that if you follow the top data down, this max, what's interesting is if you draw a line through that, you get a half-life of about five hours, which, of course, is the right number. And what's interesting is the folks who just ate have levels in the 300 nanograms per mil range. Actually, this is creatinine-adjusted, but you get the point. So it's conceivable that this is a clue. Interestingly, that was not true of the 2003-04 data, and I don't know why. Whether this is real or not, I can't say. Now I'd like to show a couple phthalates for you. MEHHP is one of the metabolites of DEHP, and you would expect it to drop rapidly, but it doesn't. It does drop. Now this is a log scale, just like the BPA scale, so this is more of a drop than it appears, but you need to log-transform it because of the skewed data so you can make sense of it. MEP, on the other hand, is not generally discussed as a food-based exposure, and it's flat, flat, flat. So I think that's pretty much what you might expect. Now I did drill down a little bit on MEHHP, the DEHP metabolite, and the reason I did that is because we have this awful problem in NHANES, and in a lot of epi studies in general, which is creatinine correction. It's just this horrible thing that we kind of have to do. It's great in animal studies, because you follow the same animals along and they're producing creatinine at a certain rate, and so you can use it to correct for urine dilution. It's just wonderful. 
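To make that back-of-the-envelope half-life estimate concrete: you fit a straight line to log concentration versus fasting time and convert the slope to a half-life via t½ = ln 2 / |slope|. The numbers below are invented and noise-free, generated from a true five-hour half-life, so this is a sketch of the arithmetic, not the actual NHANES fit.

```python
import math

# Invented, noise-free decline generated from a true 5-hour half-life.
times = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]                  # hours since last meal
half_life_true = 5.0
conc = [300.0 * 0.5 ** (t / half_life_true) for t in times]  # ng/mL

# Ordinary least squares on (t, ln c), done by hand.
n = len(times)
y = [math.log(c) for c in conc]
t_bar = sum(times) / n
y_bar = sum(y) / n
slope = (sum((t - t_bar) * (yi - y_bar) for t, yi in zip(times, y))
         / sum((t - t_bar) ** 2 for t in times))

# First-order decay: c(t) = c0 * exp(slope * t), so t_half = ln(2)/|slope|.
half_life = math.log(2) / -slope
# half_life == 5.0 hours (exactly, since there is no noise here)
```

With real data the points scatter around the line, so the slide's "draw a line through the max values" is the same calculation applied to the upper envelope of the cloud.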
In epi it's horrible. Black folks have much higher creatinine than white folks, males have much higher than females, and it varies in a curvilinear fashion with age because it reflects muscle mass. So trying to use that to adjust for urine dilution is just a nightmare. So what I do when I can is I stratify. So what I did is, they're all white, they're all adults, they're not too old, and these are males and those are females. And so now I can kind of trust that at least this group of people and this group of people are relatively homogeneous, and the creatinine correction is probably not messing us up. And what I find interesting is that the women seem to be showing more of a drop in the MEHHP than the men, which either means it's real or that the men are lying more about fasting time, which is possible. And just to complain more about urine creatinine, I just thought I should mention that urine creatinine varies with fasting time. I split it by race and sex, and you can see that just about everybody's doing something fairly similar; the white data is a little more mixed. At first I was really concerned about this, because I thought, well, maybe back in the 50s people did some great physiologic experiments where they looked at what you might call early fasting. But most of the literature on fasting talks about fasts greater than 24 hours. So that doesn't help us here. You can get some clues about what's going on with fasting, but they don't tell you much about what happens in the first 24 hours. So I was worried, what if people at 15 or 20 hours start to break down some muscle? Well, naturally their creatinine is going to go up in the urine, but it won't be going up because their urine became more concentrated, which is what we're using it for. We're using it to correct for urine concentration. So if it's going up because muscle is breaking down, we will now start to derail the data. 
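For what it's worth, the stratify-then-correct strategy described above can be sketched like this. The subjects and values are invented; the formula shown (analyte in ng/mL divided by creatinine in mg/dL, times 100, giving µg per gram of creatinine) is the standard creatinine correction, and the point of the grouping is that each stratum is homogeneous enough that creatinine differences by race, sex, and age don't confound the dilution adjustment.

```python
from collections import defaultdict

# Invented subjects: urinary metabolite (ng/mL) and urine creatinine (mg/dL).
subjects = [
    {"race": "white", "sex": "M", "age": 35, "mehhp_ngml": 20.0, "creat_mgdl": 150.0},
    {"race": "white", "sex": "F", "age": 40, "mehhp_ngml": 15.0, "creat_mgdl": 100.0},
    {"race": "white", "sex": "M", "age": 42, "mehhp_ngml": 30.0, "creat_mgdl": 180.0},
    {"race": "black", "sex": "M", "age": 38, "mehhp_ngml": 25.0, "creat_mgdl": 220.0},
]

def creat_adjusted(ngml, creat_mgdl):
    """ng/mL divided by creatinine in mg/dL, times 100 -> ug per g creatinine."""
    return ngml / creat_mgdl * 100.0

# Stratify on race and sex, restricting to not-too-old adults, then apply the
# creatinine correction only within each homogeneous stratum.
strata = defaultdict(list)
for s in subjects:
    if 20 <= s["age"] <= 59:
        strata[(s["race"], s["sex"])].append(
            creat_adjusted(s["mehhp_ngml"], s["creat_mgdl"]))

white_male_mean = sum(strata[("white", "M")]) / len(strata[("white", "M")])
# white_male_mean == 15.0 ug/g creatinine for these invented values
```

Comparing levels only within a stratum is what lets the speaker "kind of trust" the correction: whatever systematic creatinine differences exist between strata never enter the comparison.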
So the one thing I tried, and I don't know if this is legitimate, I looked at serum creatinine and I looked to see how that was related to urine creatinine and I tried to adjust for serum creatinine as kind of a rough, you know, serum creatinine and then I also tried serum osmolality to see if I could get a sense of, since specific gravity of the urine's not there, had to use those techniques to see if I could figure this out. And I think it's likely that this tweak here is really related to urine concentration, but until somebody measures specific gravity on it, we won't really know. I think it's likely that, in other words, that adjusting for creatinine is okay in this situation. Now I have the sense that I'm going too slowly. How's it going? Okay, all right. A really good question, of course, is these people, it's self-report. When they say they fasted for 15 hours, how do we know that they really did? And that's especially important because from a risk assessment point of view, what you want to know is if it hangs on longer than you had anticipated, because that could signal either it being diverted into some fat storage or perhaps an alternate source of exposure. But it could just mean that they're not telling the truth. And there's a reason why they might not. One is, and the main reason is that if they were asked to fast and they do, they get $100 instead of $70. And if they need money very badly, maybe that's an incentive to exaggerate. I'm not sure it's an incentive to exaggerate by a factor of, say, two, but who knows? Who knows what would happen? So the question is, can we get any clues about whether this fasting data, especially the data that's out there a ways, is actually legitimate? So the first couple clues are the ones I've already showed you. 
Namely, if this is real, or if this variation is real, then that would suggest that these people out here are getting more concentrated urine, because something really is different here with regard to fasting time. If all these people were lying and they actually fasted here, you would expect the line to just go flat right here. So those are a couple clues. And then I looked at the fasting literature. Again, as I told you, that fasting literature really is talking about fasting that's much longer than what we care about here. But these are some things that could conceivably happen. There could be tweaks to C-reactive protein; urea in the urine, which isn't measured in NHANES; serum glycerol, which isn't measured; insulin, which is; ketone bodies, not; pH, not; but bicarb is, BUN is, serum glucose is, and heart rate is. I looked at them all. Really, the best ones are glucose and insulin. In this one, the idea was, let's take the people who have a normal hemoglobin A1c, which is the best indicator we have that their glucose is in control, and that they're not diabetic. That's the other feature. And then let's see what percent of those people have a glucose greater than 100, since that's the cutoff for prediabetes currently. And so this was just a rough indicator: if a person is out here at 15 hours and they have a glucose greater than 100, let's say that they're lying. So how many of those are there? Now, that's a very crude indicator, right? Because what it really probably only tells us about is the last few hours. So out here, okay, maybe their glucose is below 100, but maybe they ate just five hours earlier, so they didn't really fast 20 hours, they fasted 15. So this is only going to catch the grossest level of malfeasance. But you can see that at least at this level, for whatever good this measure is, about 80% of the people or so are meeting those criteria, even out at the higher fasting range. 
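That crude plausibility check can be written down in a few lines. The records below are invented; the cutoffs are the ones mentioned in the talk (glucose over 100 mg/dL as the prediabetes threshold), plus an assumed normal-A1c cutoff of 5.7%, which is the usual clinical boundary but is my addition, not something stated by the speaker.

```python
# Invented records for people who all claim a 15-hour fast.
people = [
    {"fast_h": 15, "a1c": 5.2, "diabetic": False, "glucose": 92.0},
    {"fast_h": 15, "a1c": 5.4, "diabetic": False, "glucose": 118.0},  # suspicious
    {"fast_h": 15, "a1c": 5.1, "diabetic": False, "glucose": 97.0},
    {"fast_h": 15, "a1c": 7.5, "diabetic": True,  "glucose": 160.0},  # excluded
]

# Restrict to nondiabetics with a normal A1c (< 5.7% assumed here), i.e.
# people whose glucose should be under control if they really fasted.
eligible = [p for p in people if p["a1c"] < 5.7 and not p["diabetic"]]

# Fraction whose glucose is consistent with the claimed fast
# (<= 100 mg/dL, the prediabetes cutoff from the talk).
pct_ok = 100.0 * sum(p["glucose"] <= 100.0 for p in eligible) / len(eligible)
# pct_ok is about 66.7 for these made-up numbers; the talk reports
# roughly 80% in the actual data, even at the longer fasting times.
```

As the speaker notes, glucose normalizes within a few hours of eating, so passing this check rules out only recent eating, not a fib about the earlier part of a long claimed fast.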
This is better. This is insulin. And I split it into two groups, because I wanted to separate out people who had diabetes or were on insulin or both. You would expect... Since these are all self-report, I thought it was worthwhile asking it as an "or." So anyway, the green line is people who are either diabetic or on insulin or both. And so naturally their insulins are higher. The red line is those who said no to both of those questions. And so it would seem that the line is doing at least roughly what one would expect if fasting time was reasonable. Namely, this time out here, which is the time we're most concerned about, at least on average, is lower than this time here, which is lower than that time there. So again, it's making us feel better. It's still not conclusive, but at least if it were going up, we would know that these people had lied, right? So it's not. So at least we have that. But there's another way that one can go at this, and it's a little better, though it still has flaws. NHANES has another way you can try to get at fasting time. The way I've been talking about is when they go in to the phlebotomist who's about to draw their blood and get their urine, and the person is asked, when is the last time you ate or drank something besides plain water? They answer the question; for privacy purposes, that time doesn't go in, but a calculation is done and that does go in. In other words, they don't want you to be able to search. You know that Aunt Millie down the street did this CDC thing, and you know that she went at a certain time, and you know she has these diseases. And so you pop into the database, see what time she had her blood drawn, see what diseases she reported, and now you know who this person is. So they don't want that. So they remove all sorts of things like that. But they do tell you how long. Okay, so that's what we've been talking about so far. 
The other way to do it is that they do an extensive 24-hour dietary history. It was developed by the USDA. It has five cycles, in order to try to stimulate memory. I don't know the cycles by heart, but one of them is, what did you eat for these various meals? And again, this is the 24-hour period from midnight to midnight the day before they came in and gave their urine. So one cycle might be, what did you eat for meals? Another time around might be, okay, now tell me about the snacks, and tell me about the drinks. And then they start prompting for specific things that people tend to forget. And so it's a fairly elaborate process. And so what I was thinking was, okay, if the person reports their last food intake in that sequence, and if they have fasted long enough, if they say they've fasted long enough, I should be able to match up their last food intake with the time that they claim. Calculate my own time from their last food intake, since I know what session they went to. I don't know what time exactly, but I know they went to either the morning, afternoon, or evening session, so we can at least get in the ballpark, plus or minus an hour and a half or two hours. So what we have then is the 24-hour dietary survey here, from midnight to midnight, and then they either came in here, here, or here. So based on when they came in, if they report having fasted long enough, we have a chance of seeing that record. Keep in mind that the whole problem here is that they could tell us they fasted back to here. There could be a record here. There might not be any more here, and then they sneaked something right here, and there's no way to know it, because there's no data capture here. So what I thought I would do then is, let's take the morning session, maybe some of the afternoon; by this point nobody has fasted long enough to get us back there. 
So especially these folks, if they say they've fasted long enough. But now we need a buffer zone, because remember, we don't know exactly when they came in. So I took the midpoint of this and then I added a buffer around it, actually two hours on either side. So this is four hours, four hours. So then we take it back to here and then maybe add a little more slack. So really what I ended up doing was looking to see people who reported their last food intake here or further, in hopes of having the best chance of not missing things. And this is what that looks like. This is the fasting time they reported to the phlebotomist: I haven't had anything to eat or drink since so-and-so. And then I calculated a fasting time based on looking at the last food intake in the dietary history, for the morning people, and in this case just the adults. And then I calculated the difference. Now you might say to yourself, why would a person (oops, sorry, camera person, I hope we're all right), why would a person underestimate their fasting? There's no benefit to doing that. It probably is the case that they didn't. If you notice, there's a line here; that's the line at which they reach the midnight censoring point in the data. In other words, if their last food intake was after midnight, we can't capture it; it's gone. So if they report a shorter time interval, say they ate something at two in the morning because they got up and they did it, you don't see that in the data. What you see is the thing they ate at 10 p.m., right before they went to bed. And so they report having fasted six hours; what you see is 12 hours. But that's not true, and it's not true because you're missing the data point. They're telling you the truth, but it's during a time when the data is censored. It's not there. And so that turns into a negative here. So these folks had that happen to them. So this is the real thing, probably. 
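The censoring arithmetic just described is easy to reproduce on a single made-up subject. Everything here is invented (a 10 a.m. draw, a 10 p.m. last recorded intake, a truthful 4 a.m. snack the recall cannot see); the point is only to show how an honest short report turns into a negative difference against the recall-derived fasting time.

```python
# All times in hours; all values invented for illustration.
exam_time = 10.0           # ~10 a.m. blood draw (morning-session midpoint)
recall_last_intake = 22.0  # 10 p.m. the day before, per the 24-h recall
reported_fast = 6.0        # subject truthfully says they last ate at 4 a.m.

# Fasting time derived from the recall: from the last recorded intake
# (10 p.m.) through midnight, plus the hours until the morning draw.
derived_fast = (24.0 - recall_last_intake) + exam_time   # 12 hours

# Reported minus derived: negative, because the real 4 a.m. snack falls
# after the recall's midnight cutoff and so never appears in the data.
difference = reported_fast - derived_fast
# difference == -6: the report looks "too short", yet the subject is
# telling the truth -- the intake is simply censored, not missing.
```

This is why the negative side of the difference plot is "the real thing, probably": there is no incentive to under-report a fast, so those points are almost certainly censoring artifacts rather than lies.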
And then these are the folks who truly exaggerated. But they can't exaggerate more than this, and honestly, I cannot remember why. I worked on this back when I was working on that paper that got published a year or two ago, and this can get very impenetrable if you spend enough time on it. But at least what I'm trying to show you is that if you then look at the people, remember, I have to take a range because I don't know when they came in exactly. So that's what this band is; it's taking into account the fact that I really don't know exactly when they came to get their blood and urine taken. So if you allow them to be in this range, plus or minus an hour and a half, this is what you get as far as the percent of subjects from that graph that had a good fasting time. And I'm starting here at around 12 hours, because of the reasons I explained earlier. And you can see that about 70% have a matching record that seems in the ballpark, until you get to around 20 hours. And then it falls off. But at this point, the dots are sized for the number of people, so you have a sense of the number of people. And so at this point it's falling off, but if you remember the previous slide, there aren't very many people anymore. This is a couple thousand people here. So this looks bad, but it's pretty minor, really. And then finally, I just wanted to show you an example. What I did is a simplified version of the paper that I was first author on in 2007, looking at phthalates and insulin resistance. Because if there was ever going to be a time when you could get in a lot of trouble, it would be when the outcome measure changes with fasting time and the exposure does too. And DEHP is an example of that. It's a food-based exposure. It's got a rapid metabolism. And then if you're looking at insulin resistance, well, insulin was the best thing I had for looking at fasting time. 
So both of them are changing, and depending on which is changing faster, you could get totally different answers. So when I discovered this after doing the BPA paper, I'll have to admit I was a little terrified to see whether I was going to have to send in a retraction. As it turns out, it weakened it, but it didn't kill it. Over here, and this, for the survey statisticians in the audience, we can talk about it later, but these are just the raw data. These are not adjusted for weights and other survey variables. But anyway, just done like that, the p-value here on the log-transformed MEHHP is 0.0002, and when you add fasting time in here, it's 0.003. And so you can imagine that if the p-value over here had been 0.04, then here it would have been 0.3 or something like that. So it can make a difference. And so, just finally, to sum up again: it can make a difference. I think from my efforts I can say, not with certainty, but at least I'd be willing to place a bet, that most of the people in NHANES are not lying. They're doing the best they can to report their fasting time, and I think most of them are probably getting it close. Some people are lying, but I think the number is small enough that we can probably go with it. You can adjust for fasting time, and that's going to be helpful, but something I forgot to mention earlier that you need to remember is that fasting time is linked to other things. So for instance, if fasting time is dependent on the session that the person goes to, you have now linked fasting time to clock time. And so now, for instance, MEP: if the MEP levels are connected to when people take their shower or use their shampoo or smear other things on their face, then naturally that's connected to clock time. So that means it'll be connected to fasting time even though it has nothing to do with fasting. At least there's the possibility. You just have to keep it in mind when you're thinking about this. That's what I've got for you. 
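The attenuate-but-not-kill behavior has a simple mechanical demonstration. The simulation below is purely illustrative (not NHANES data, and not the models in the talk): an outcome is driven by both an exposure and fasting time, the exposure itself declines with fasting time, and the fasting-adjusted slope is recovered by residualizing both variables on fasting time (the Frisch-Waugh trick), so no stats library is needed.

```python
import random

random.seed(0)
n = 500
# Fasting times, an exposure that declines with fasting, and an outcome
# that depends on BOTH exposure (true slope 2.0) and fasting time.
fast = [random.uniform(0, 20) for _ in range(n)]             # hours
expo = [10 - 0.3 * f + random.gauss(0, 1) for f in fast]
outc = [2.0 * e + 0.5 * f + random.gauss(0, 1) for e, f in zip(expo, fast)]

def slope(x, y):
    """Simple-regression slope of y on x."""
    xb, yb = sum(x) / len(x), sum(y) / len(y)
    return (sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
            / sum((xi - xb) ** 2 for xi in x))

def residuals(y, x):
    """Residuals of y after a simple regression on x (with intercept)."""
    b = slope(x, y)
    xb, yb = sum(x) / len(x), sum(y) / len(y)
    return [yi - (yb + b * (xi - xb)) for xi, yi in zip(x, y)]

crude = slope(expo, outc)
# Frisch-Waugh: regress fasting-time residuals on fasting-time residuals
# to get the fasting-adjusted exposure coefficient.
adjusted = slope(residuals(expo, fast), residuals(outc, fast))
# crude is pulled well away from 2.0 by the shared fasting-time dependence;
# adjusted lands close to the true value of 2.0.
```

This is the same logic as adding fasting time to the model: the association survives because the exposure genuinely affects the outcome, but the estimate (and hence the p-value) moves once the shared fasting-time dependence is removed.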
I hope it was helpful. Thank you. I think what we'll do now is entertain some questions in terms of clarification, but hold the general discussion of this topic until all the speakers have spoken. So does anyone have any specific questions? Sir. You very eloquently pointed out this conundrum: from what we know about the toxicokinetics of these polar substances, you chose two examples, the DEHP metabolite and bisphenol A, that they should go down with fasting time. You showed this curve for the people with extreme levels. So how can we now resolve this paradox, that the data really, your best fits, do not show this decline, but it should decline? You see my question. Yes. I think the most obvious answer is to do the experiment. I understand Holger has done an experiment with phthalates in which he did dosing with food, to see if he got the same kinetics. Now, you did get the same kinetics, though, right? Did you get rapid declines even though you were repeatedly dosing, and dosing with food? Or was it just a single dose with food? Oh, okay. I think we have to do the experiment. And I know some folks are gearing up to do that experiment with bisphenol A. Of course, the other possibility is that... And the reason I point that out is just because maybe I'm totally wrong. Maybe food intake is sufficient to totally mess up the kinetics altogether. And if that's the case, except again, Holger's single-dose experiment didn't show that, but... I mean, I don't know. I don't know the answer, I guess, is a very long way of saying I don't know. But I think a good place to start is to do the experiment, and then to consider the possibility of other routes of exposure. You know, because the kinetics, you know, if the max levels of BPA in the 2005-06 data are correct, and if the fasting times people reported are reasonably accurate, gosh, it looks like the peaks in the highly-dosed people are following oral-type kinetics. 
And if they are, that's very important to know. But if that's true, then to me it would scream out that we need to be looking extra hard at food sources because, you know, but again, you know, if I'm wrong, please educate me. Anything else I can help with right now? Rick, thanks again. Just to make it clear, to distill what you said, you see an influence of fasting on NHANES data. So you would say that when extrapolating from NHANES data to intake, we have to take account of fasting. I think it makes sense. Now, of course, fasting is not the ideal measure. I used it because I tried to do more sophisticated things and they didn't come out any better, you know, the models didn't fit any better when I tried to look at when exactly people ate canned foods and so on. But clearly, if you could do it right, it would be to capture all the food and to capture the exposure in some sense and then do the math. Unfortunately, you really can't do that with the NHANES data because they aren't collecting it for us. They're collecting it for dietary nutritional assessments, right? So we don't always know which foods are canned. And so anyway, what I would say is in the absence of other information about when the exposure happened, if you know it's a food-based exposure, I think fasting time appears to be worth adjusting for as long as you keep in mind that it also might be tied to clock time and other things. So you have to think it through. But if it seems reasonable at that point, then I think it probably should be done. At least I'm going to do it. Whenever I do this kind of work, I'm going to adjust for it. So if I interpret the figure correctly, you would say that the fasting time can influence exposure, or the calculated exposure from urinary levels, by a factor of, let's say, 2 or 3 over a time of 24 hours? Obviously, it depends on the chemical. With MEP, it doesn't affect it at all, apparently. With MEP, yeah.
Because we know that MEP probably is not food-based. Right. And actually, we would have been very disappointed to see it do something strange there, but it didn't. But yes, I think a factor of 2 or 3 could be reasonable. And then, as far as the level that you might have expected to see in the urine. Another thing you said, that the fasting issue might have the most important effect on the maximum values. Well, I don't think I said that, but I did see it. That is, I pointed at it. I don't know if that's true. It might be the case that, you know, we were talking about the BPA slide, right? What I'm assuming is happening is that if a person gets a large oral dose and then it decays, and if the large oral doses are roughly in the same order of magnitude for these different people, then you see this nice decay curve. But if there are other sources as well, so suppose the people who are concerned about carbonless copy paper are correct, that that's an important source. You could be getting transdermal exposure, which then creates a background hum, and then on top, and so that's the noise that we see, and then you hit them really hard with an oral dose, and then you get to see that come off at the top. But if you don't hit them hard enough, you can't see it because it's buried in the other noise. That's how I'm interpreting the graph. Does that make sense? Am I saying it clearly? So you see several indications that fasting time has an influence on metabolite levels in urine, both in the, let's say, mean over time and in the maximum values over time. And you would say that we have to take account of this in extrapolating from urinary levels to daily intake data. I don't do risk assessment, but it seems reasonable to me. I may be way off the mark here, but it seems to me, I understand what you're saying about the fasting time might have some validity issues.
But if we take fasting time as it is given on face value and then say, if concern about phthalates that are food-based, if you're within a narrow window of fasting, that may be a pretty good estimate of the exposure. But if the fasting time is much delayed, so now you're out some time, we don't really know, we may have a level of that metabolite, which may actually be lower than what it was earlier in the day. So would you actually think about correcting maybe people with larger fasting times as opposed to shorter fasting times? Again, thinking that shorter fasting times may actually give you a more accurate level of the daily intake? Oh, I mean, I think you're definitely right that, you know, if you were trying to get a sense of the, well, it depends on what you're asking for, right? If you're wanting to get a sense of peak levels that people are experiencing, then it would seem that you would want the shortest fasting time people to look at. But if you're trying to get a sense of the overall daily exposure pattern, then it might be worthwhile looking at how it trends. You know, again, using the word trend is a little dangerous and it's a lot dangerous because it's not a trend. It's different people at different times, but it's giving us clues about what may be a trend within person, right? So, but I think you could conceivably look at the population and get a sense of how the population levels might go up and down. In fact, there were times when I made the effort to turn the data around. See, the way I'm looking at it here is, you're having to kind of project back in time to when they ate it, but you can, of course, do the math and reverse it so that you're looking forwards in time and saying, okay, you know, this is their level if they ate it. It's too hard to explain. The point is you can go around on this and end up coming up with a graph that sort of shows you what maybe the population trend is over time during an average day. 
You know, that's, of course, very weak compared to getting a large group of people and doing the appropriate pharmacokinetic study, but it's better than nothing, which is what we have. Last question from me at least. Taking account of this fasting thing, possibly incorporating this in our calculations, would you consider this a worst-case approach? Would you consider this an adjustment factor? Or would you consider this a margin, an uncertainty factor? He's asking me risk assessment questions again. Honestly, I don't mean to be a coward here, but those are risk questions that I'm not used to thinking about. I'm not talking risk. I'm just talking about adjusting these urinary metabolite levels to daily intakes, nothing with risk until now. This adjustment, would you consider this a worst-case approach? Simply a conversion factor? I just think of it as trying to get it closer to being right. Okay, so that would be a simple adjustment factor. Well, let me put it this way. If you don't adjust, are you more likely to overestimate exposure or underestimate? Well, I suppose it depends on whether the folks in your study have just eaten or have fasted for a long time. In this case, we have a range of fasting, but most of the people have fasted a fair length of time. So I assume that you would underestimate. Then the other thing you have to be concerned about, I think, is the fact that if different groups fast different amounts, that's a real danger. If I have a couple more minutes, I should probably point that out because that's an important point that I didn't hit on. When the study came out in JAMA that linked BPA with diabetes, I kind of gasped a little. First of all, because I had looked and missed seeing the association because I didn't adjust the same way he did. But then, thinking about this, I said, uh-oh, diabetics don't fast if they're on insulin. Fortunately, there weren't that many of them.
But if there were a lot of those people who had BPA measurements, okay, what would that mean? It would mean the diabetics have short fasting times. And if there's a big drop-off with BPA, then QED, you would end up with a link between bisphenol A and diabetes. So I leaped in. Are you following me? Did I say that in a way that was sensible? If diabetics don't fast, they have a short fasting time. Therefore, they should have higher BPA levels. So then what you would get is diabetes linked to high BPA, which is what was reported in the first go-round. So I went, uh-oh, but if you adjust for fasting time, it's still there. But it might not have been. And so I think we really... and again, fasting time is not great, but it's better than not paying attention to it. And maybe while I'm complaining for a moment, I should tell you that, and I don't know if this happens in the case of the phthalates, any of the phthalates, but it's likely, this creatinine correction thing is a real problem. If you look at the uncorrected data, that is the nanograms per mil of urine, what you get is that males are more exposed than females. They have higher levels than females. After the creatinine correction, it reverses. And that's what everybody knows out there in the world because that's what Antonia said. And Antonia is brilliant, don't get me wrong, but what she reported was the creatinine-corrected values, and in the creatinine-corrected version, because women have lower creatinines and men have higher ones, you're dividing by a bigger number in the guys, and it brings down the level. So before the correction, men have the higher levels. After the correction, women have the higher levels. Interesting. So we've got to come up with another way to do this because if that matters, it's messed up. I don't know what the right answer is. Oh, actually I do. What's the name of the woman who... LaKind. Judy LaKind did a study on BPA. And she did some things that I...
Well, one of the things she did that I thought was really good was that she looked at the urine output at the population level. Like how much urine does a man or a woman put out on average? Then she looked at the average amounts that they found in the urine of the men and women in NHANES. And then she did the math to calculate what that would mean for the average man or woman in the population as far as output in 24 hours. If you know how much urine is put out in 24 hours and you know what the average BPA is in 24 hours, then you should be able to know how much went in and came out. And from that, she was able to compute that men had the higher exposure. So I think it's probably the case that that's correct. And so just the question is whether phthalates are a similar beast. With Judy's work that you just cited, it's population-based. That's the key. It's not like per person or groupings. It's overall average urine output. Exactly. So she was able to answer that specific question. Now, if you were to then try to take that and apply it to individuals, you get into a lot of trouble. But for asking the question, who in the population is getting exposure? A useful technique. Well, I think that means my time is up. Thanks very much. Thank you very much. Ah, it stopped. Which means I think we should go on to our next speaker. Dr. Burke, are you ready? The slides are loaded. Yes. I'm not sure how to operate this. Okay. I want to put the Burke slides up. Yes, if I could. Okay. Let's all see it. I'll just click on. That'd be great. Okay. On to the discussion now. So I was dying to ask a bunch of questions because it's so related to risk. And first of all, let me apologize. I'm probably really out of sequence for this meeting in the deliberations of the committee. I was originally supposed to be here in July, but had a death in the family and couldn't do it. So I'm happy to be here. And I really understand the challenge that you face. So I'm here today to talk about risk assessment.
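[Editor's note] The population-level back-calculation described in the exchange above (average urinary concentration times average 24-hour urine output, scaled to body weight) can be sketched in a few lines. Every number below is an illustrative assumption, not a value from LaKind's study or from NHANES.

```python
# Population-level intake back-calculation, in the spirit of LaKind's approach.
conc_ng_ml = {"men": 2.9, "women": 2.4}        # mean urinary BPA, ng/mL (uncorrected, assumed)
urine_ml_day = {"men": 1600, "women": 1200}    # mean 24-h urine output, mL/day (assumed)
body_wt_kg = {"men": 80.0, "women": 65.0}      # mean body weight (assumed)
f_ue = 1.0  # assume near-complete urinary excretion of the BPA dose

# intake (ng/kg-bw/day) = concentration x daily urine volume / excretion fraction / body weight
intake = {g: conc_ng_ml[g] * urine_ml_day[g] / f_ue / body_wt_kg[g]
          for g in conc_ng_ml}
```

With these assumed inputs the men's higher urine output dominates, so the calculation can put men's per-kilogram intake above women's even when creatinine-corrected concentrations would suggest the reverse; that is the point of doing the arithmetic at the population level rather than dividing by creatinine.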
And so I'm representing the committee at NAS that did the report Science and Decisions, because, as the questions that were prepared for me back in July indicate, there are a lot of implications for your work and how you take a look at this. And before I begin, I just want to say it was really fascinating to hear the questions of, wow, so we need an intake rate to do risk. And if we're looking at fasting time, have we overestimated or underestimated the intake rate? Because that has implications for what we say about risk. And particularly if we're going to use measures like margin of exposure or margin of safety and we're off by a factor of two or three, that's a very important implication. So I want to give an overview of this report because there are lots of things that are directly relevant. But I want to start out by saying, as you can tell from the title of the report, it's about the decisions. And one thing that's so amazing about your challenge: it has huge implications. It's a mandate to advise on a decision that CPSC has to make. And we as scientists sometimes really love the data and we really love the next experiment. But that has led to 30-year risk assessments that never end. I once had an obsessive-compulsive, wonderful student, but he could never finish his literature review. And so 10 or 12 years into the program, he realized he hadn't begun to write the dissertation because the literature review was so fascinating. And although it was a tragic case, he got the appropriate attention and went on to do fine. But we had to make the diagnosis there. And so a lot of what we were doing is at the 60,000-foot level of how do we help the process here, because the science is really about helping the decision. So thanks for listening to that preamble. First of all, I want to give all the credit. So the chairman has to live through the peer review and make a lot of talks afterwards, as some of you know. But the work is done by the committee and I wanted to share.
There we are up at Woods Hole, and everybody thinks we get to have fun up at Woods Hole, but this committee had very little fun over two years. I can tell you that, and even less fun through the peer review. But it was an amazing piece of work and I just want to list the names. I don't have to go through them, but it was a great group of folks that brought in very many perspectives. So some would say historically perhaps the Red Book, or the previous approach to risk assessment, was dominated by the toxicologists, or had not enough epidemiologists. Well, we had everybody in there, but we also had theologians and philosophers and decision scientists too. It was very interesting to chair that group. So to talk about our charge, it was to develop recommendations for EPA. Now, I have to tell you, I've spent a lot of time talking about this and I'm working with all the agencies from FDA to EPA, and so we knew that the Red Book was written to look at risk assessment across the federal government, and we feel that Science and Decisions certainly has implications. Our mandate and our funding came from EPA, but it's relevant to the field. It was a chance, 25 years into the Red Book and the four-step paradigm, to rethink and see where we're going. Our primary focus was on human health risk assessment. There were lots of folks who were concerned that we didn't really look into life cycle assessment and sustainability and other issues like that. We tried to take that into consideration, but it was all about public health. So in '83 the Red Book comes out, there's that paradigm, and how do you mess with that? Well, I can't tell you how many of the... I collected them all because these are the kind of fights we had. How do you change a framework?
So those of you who know Joe Rodricks and his work at FDA, and he's been a mentor to me in risk assessment, I think someday I will sell Joe's little sketches of the framework, and they'll be worth nothing, but interesting. So our evaluation had two parts. One was, and the discussion of fasting and understanding intake rate would fall under the first one, improving the technical analysis and the development of knowledge and information to more accurately characterize risk. And the second was improving the utility of risk assessments. Frankly, and I'm... So a little bit on my background: I'm an epidemiologist, I have been a regulator. I was a Deputy Health Commissioner in New Jersey. I was the head scientist of the New Jersey DEP in the early days of the toxic substances issues up in New Jersey that continue on. I have done a lot of regulatory risk assessments and know the challenge of making decisions. And frankly, I think an awful lot of risk assessments ask the wrong question. So they give you a number, but the number is not all that helpful to the decision maker. And so improving the utility was very much a part of our thinking. And so I found it's good to put conclusions and recommendations up front and in the back so you can see how we did it. So here are the things I'm going to talk about, some more briefly than others. But first of all, it's important that you ask the right questions. And so the design of the risk assessment, that problem formulation, and I think particularly for this group, is really important. It's fascinating to get into the science, but what is it we're trying to do here? We had to address the issues, obviously, that we just heard about, of uncertainty and, perhaps even more important, variability and susceptibility in the population, as we go from population effects to individual effects, and how do we protect the susceptible individuals.
The selection and use of defaults: risk assessments have an awful lot of defaults in there, both stated and not stated. And I think as you look at product substitution and making tough decisions, like moving to something where there's less information or there's more information about harm, there's something that we know we've made mistakes with in the past, because the hidden default has been, if we don't have any information, we assume no harm. So we addressed that one head on. We came up with a unified approach to dose-response assessment, and I have to just say at the onset this has probably been the most... So the Red Book was called, for a couple of decades, the misread book, because there was this line between risk management and risk assessment. I know the framers; the chair was my advisor down in Texas. The framers never intended that the risk manager shouldn't talk about the risk assessment and make sure it asks the right question. It was just trying to shield the scientists, the technical analysis, from the other non-risk factors that have to go in there, economics and things like that. So that was the misread book. And in this, there's so much in this book, the flashpoint has been this unified approach to dose response. For those who haven't read the book, the Silver Book might be the unread book, where people are reading the press clippings. And folks are reacting, saying, well, they want this unified approach, that means everything's a carcinogen, everything's linear. And that's not what we're saying. That's not what we're saying. We are recommending that we have a more robust approach to reference doses and margins of exposure so we have true measures of risk. Because if you get two bright lines and you're comparing two phthalates, for instance, and you have margins of exposure, and you say, well, you have a nice big margin of exposure, but which one do you choose? How do you really know the population impact? So that's what this is about. We talk about
cumulative risk assessment. I have to tell you, we don't have a solution for it. We make some suggestions; cumulative risk assessment is huge, and that's why it was nice to know we have immediate recommendations and some that are kind of working toward the long term. As I mentioned, improving the utility for the decision makers. Obviously there's stakeholder involvement. I won't say much about that, but let me introduce it here: the stakeholder involvement in risk assessment has led to a culture of dueling risk assessments that has led to a process of delay and non-decision, because it's unorganized, it's unbalanced. And I can say firsthand, you and folks that we all know who do risk assessments in the regulatory agencies say you give it your best shot and you brace yourself for the public meetings and the challenges and the lawsuits. That's not stakeholder involvement. On the other hand, it's really important to get the right perspectives involved in that stakeholder stage, but there has to be a more organized process than we now have, which is incessant delay. And then capacity building: we realized that to do these kinds of things we have to have the appropriate scientific capacity within the agencies. So the big challenge was, are the current risk assessments really designed to meet the needs of programs and decision makers? And all you have to do is... So I'm on the EPA Science Advisory Board now, and we have to do reviews of the IRIS documents, which are not really risk assessments, they're more like tox profiles, but because they have numbers in there that can be used as points of departure for decision making, they are controversial. But does an IRIS document in itself answer the questions that this panel needs to? No, it doesn't. It provides some guidance and starting points. So are the designs really appropriate? We took that head on. I think it's important for this panel to think, well, as we plan our work, are we asking the right questions to get to that decision, to help the CPSC make a
recommendation. So we call for increased attention to how we design risk assessment in the formative stages. It's not just about a quest for an RfD or a critical value or all those kinds of buzzwords, but really, are we looking at things appropriately to answer the questions that we need to make the right decisions, including other aspects of risk: all pathways, cumulative issues, variability issues, and perhaps most importantly, risk tradeoffs. So uncertainty and variability were really important, and I don't have to tell this committee that. Uncertainty is about lack of knowledge, and it was good to hear the previous speaker, under the gun, say I don't know, because we should say I don't know a lot more. Instead we say, I don't know, but I can model it, and so we have to model it to make decisions. But frankly, that's why we have uncertainty factors; that's why, I think, he paused when you asked, should that be an uncertainty factor or an adjustment factor? Well, uncertainty factors have gotten into a lot of hot water. So there's uncertainty, and then there's variability, and I think variability is really important. One of the major findings and statements in Science and Decisions is that variability, particularly in cancer risk assessment but generally across the board, can't be reduced, but it certainly can be better characterized, and we have to do that. And we know that variability affects many, many things. But the big, big uncertainty, and so the hot water that we're in about the unified approach, or the misinterpretation of the unread book, is about what happens down here when the observable range is up there. And that's the ultimate uncertainty: what is the true model down here, whether it's phthalates or BPA or any of the things that we have to make decisions about, where we're talking in the observable range. Because, speaking of variability, we know that we have those wonderful people who can smoke four packs a day and live to be 95 years old, in an Irish family no less, but we know them.
We all know someone like that. And then there's the average population response. But then, particularly the clinicians know, we see susceptible subgroups in the population, and I think the issue of phthalates is addressing a particularly susceptible subgroup when you talk about issues of development. So to bring in one of the key points, a consideration that was part of the deliberations was our recognition that, well, this is from Tracy Woodruff, in a paper that she did in 2008 on phthalates, I think a little bit more accurately than the graphics in Science and Decisions, so I put it in here. But basically, one of the things that this committee took on, that I don't think was adequately addressed in the Red Book, which kind of took a single-substance, some would say even a single-pathway, approach to risk assessment: we recognize that most of the things that we're measuring, physiological parameters, some of them adverse, some of them not, have the bell shape of the normal distribution. But we also recognize there are things that influence them. So if you imagine this is an outcome, you might have the distribution of that outcome in the healthy population with no background exposures or background risks. Then you have that same population with all the other background exposures and factors that may increase that risk, so that you have your measure creeping up. But then you have within that population a certain subset, kind of represented in the other dose-response curve that I previously showed. You have that other population where, well, here you have background exposure in addition to kind of baseline, then you have the background exposures that accumulate risk, and then you have susceptibles within that population. So that, if you're thinking about vulnerability, you may cross into adverse effects for that subset at a much greater rate than you would for the population with no background. So how do
you consider all this in a risk assessment? So getting back to uncertainty: I think for a while there, the risk assessment process, for probably almost a decade, focused on uncertainty, because we wanted to improve risk characterization and let people know that there are uncertainties in almost every step. And it was detrimental, because it wasn't uniform, if you think about how we could look at uncertainty throughout the process. And certainly as we looked at EPA's approach, there were inconsistencies in how they addressed uncertainty in their risk approaches. There's no set guideline for this, but very often there are lots of delays because there would be uncertainty about a certain parameter in a risk equation, and the fact is it wouldn't change the decision anyhow; it wouldn't change the outcome of concern. Sure, it's really important to do the research and understand that and refine your risk characterization, but we felt that uncertainty has to be part of the framing of the risk assessment. How much uncertainty are you willing to accept, so that you're guided somehow on how much uncertainty analysis to do and present with your risk assessment? And we basically said that it's important to present uncertainty, but it should be planned and managed to reflect the needs of the evaluation and the risk management options. Would it really change your decision? You heard that fasting time made a difference of perhaps two or three in the estimation of the risk because of the underestimation of the intake. Well, that sounds pretty important, actually. But knowing the uncertainty and matching it to the problem at hand was our recommendation. And we challenged the agency to say, well, you can stall a decision process by going after every last bit of uncertainty, or you can develop a tiered approach and select the level of detail in the uncertainty analysis to match your analysis and help characterize those risks, but obviously clearly state where those uncertainties lie and
have a much more consistent way to address that. Now, another thing: people hate the defaults that EPA uses, and I'm sure we all use those defaults. The big challenge now is what happens down in that low area. Do we use the default of linear for cancer for everything, or do we have non-genotoxic carcinogens that are not linear? I think we're getting enlightened about that, but there are defaults throughout the entire process, and what we're saying here is, when you're using a default, it's important to state why. Right now there's a lot of pressure from the various stakeholders and within the scientific community that these defaults are not state-of-the-art science; they've been around a long time and there's new information. So how do you sort through this? Well, frankly, this recommendation was driven by the fact that, again, the challenge to the defaults had been a major reason for delay, and because EPA put out a working paper that said we will use the best science and look at the science first and then decide on a default. And who can argue with that? Unfortunately, it doesn't say when to stop. It doesn't say when you stop looking at the science and when you make a decision, because there are no guidelines on when to stop using the defaults. You have to develop guidelines and select among the various options to ensure consistency and really to put some order to this process. We realize there are some defaults out there that need revisiting; there's certainly new science, and we're not against science, but we wanted a clear system, when there's a new approach used, to say what the driving factors were to do that. So we also, and some folks didn't like this, but we realized that the defaults are defaults because they actually go back to the inference guidelines of that misread book in '83, because we needed them to level the playing field and have consistency across the field of risk assessment. And they aren't pulled out of the air; lots of good science went into them. So there should be
guidance on the level of evidence needed to justify the use of agent-specific data. So I don't think we disagreed with the agency policy of looking at the science first, but we just made a recommendation: when you depart from defaults, there has to be a level of evidence that you can demonstrate that justifies departing from that default. The other thing that I mentioned about defaults is that many, many other defaults in regulatory risk assessments are just implicit: missing data means zero, missing data means no harm, look at things separately, not together, two big margins of exposure mean two safe things. Those are things that have guided the way we do business, but that we didn't feel there was scientific justification for. Because there was so much back and forth about this, we spent a lot of time on it. So they should continue to push the science to support revising the default assumptions, but have clear standards for the level of evidence needed to justify use of alternative assumptions, and they should work toward developing more explicitly stated defaults to take the place of the de facto defaults that are in the risk assessment process now. Unification of dose response: so here's the big myth. Oh jeez, has everything suddenly become radiation and become linear? That's not what unification is about. The unification in the book is really to develop a similar approach for cancer and non-cancer that gives us measures of risk, as opposed to bright lines and uncertainty factors, because it certainly can be done: a consistent approach for cancer and non-cancer effects. So part of this is challenging the issue of threshold, and we spent an awful lot of time researching threshold. Certainly it's been a bastion of the way that we make decisions: we assume threshold. And we agree there are individual thresholds, but it's much trickier to say there are population thresholds with susceptibles out there. And so for carcinogens we have an
approach that is based upon the slope of the dose-response curve, but for everything else we assume there's a threshold. We'll get to some estimation of that threshold from the point of departure, we'll build some safety in there, and then we have it made. That's not necessarily the most informative way to do risk assessments. Now, we acknowledge that reference doses and reference concentrations have been useful, but they don't quantify risk for different magnitudes of exposure. Therefore it's very difficult to compare them, to compare multiple phthalates, for instance, or product substitutions, or, frankly, for the business of EPA and other agencies, to make very tough economic decisions and choices. So also, while we were here on dose response, we felt that although we certainly use the upper bound on the slope and the lower bound on dose when we estimate cancer risk, that doesn't really account for differences in susceptibility that we certainly are becoming increasingly aware of in cancer risk assessment. It's complicated; hopefully you have a handout. This just kind of contrasts the difference: so here's carcinogens, here's everything else. The non-cancer approach was to identify a NOAEL or derive a benchmark dose or point of departure, and develop a reference dose by dividing that point of departure by a bunch of uncertainty factors, and then making your decisions on margin of exposure or hazard index. In the case of carcinogens, first you look at the mode of action. Do you have a direct-acting or a non-direct-acting agent? Do you go with linear or non-linear? You do your scaling from animal to human, you derive a slope factor, and you characterize risk, needing exposure; that's why that intake rate is so important, to calculate risk there. And there are limitations to both. All right, so the limitation over there is this assumes a threshold for everything that's not over here, and uncertainty is not distinguished from variability. There are lots of things; we conveniently always seem to use factors
of 10 there to derive the reference doses but they have their applications and certainly they've been well applied to in public health for applications of safety but they have their limitations and issues and we try to put that forward and come up with a recommendation that there be more consistent unified approach to dose response modeling that includes systematic assessment of background disease and the other things that I talked about in the introductory distributions there including possible vulnerable subpopulations and modes of actions that affect the dose response we also called for a redefining we didn't call for scrapping the RFD perhaps a more robust way to approach them there will always be a reason for public health reasons to know that and go from there but redefining the reference dose of reference concentration as a risk specific dose that provides information on population risk just as we have done for cancer that's all we're not saying everything is a carcinogen we're not saying everything is linear we're saying we make decisions the way that we look at risk determinants because we know the dose response relationship is dependent upon certainly the toxicity of the individual substance so you want to look at the chemical stressor but you also have to consider background exposure and biological susceptibility and other population factors because an individual's dose response curve might be very different in a very heterogeneous heterogeneous population with background exposures and wide ranges of susceptibility to understand population dose response so we're putting that out there and recommending that that unified approach looks at the endpoint looks at the mode of action considers population aspects are there background health effects and other risk factors are there vulnerable individuals within the population that you need to consider and then also consider background exposure assessment this is where the cumulative risk issues come in identify 
possible background exposures, exogenous and endogenous, and understand how they might impact how you develop your conceptual model for the dose-response selection and the dose-response modeling that can estimate risk. We realize this is challenging, and we're not going to have all those factors, but we're trying to raise the bar and have a forward-looking approach. That brings us to cumulative risk assessment, which this committee certainly is very much concerned with: EPA and all the agencies are being asked to look at broad issues that involve cumulative risk assessment of multiple exposures, or mixtures, or vulnerability of the exposed population. So we called for cumulative risk assessments, consistent with the phthalate report and others from the Academy and work done by all the agencies, that look at the combined risk of exposure to multiple agents. But we also feel there are population aspects, and stressors beyond the chemical, that eventually need to be considered: look at aggregate exposure and the combined risk posed by multiple agents; draw on approaches from ecological risk assessment for a broader way to think about population risk; and, in the short term, develop the databases for incorporation of some degree of nonchemical stressors or population factors. That is more forward-looking, but in terms of the task at hand here, the recommendation directly relates to the importance of multiple pathways with a similar health outcome or mode of action. So let me go to the toughest part of this. We did all those drawings, and I made fun of that in the beginning, but here's the new framework. We recommended that there needs to be a new approach that is really front-loaded, because it's very important to ask the right questions in the scoping and problem formulation stage: what decision do we have to make here, and what risk information do we actually need to evaluate our options for risk management? This has been a little bit sacrilegious to some of the Red Book folks, who said, where's that line between risk management and risk assessment? Now you're going to have the risk managers helping you frame the question? Yeah, that's a pretty good deal, I think. The second phase is still the four steps; the Red Book, the traditional risk assessment, is intact, with improved planning. And there's a stage three that says, well, confirm that you actually did the right thing: you have this IRIS document, but did you do the right thing, so that you can move to the final stage of how you apply this and make a decision, and how you compare among options, so that you can have better science applied to the decisions there? I won't go into great detail on these, but clearly we said it's important to front-load and ask the right questions in the problem formulation stage. Phase two, the planning of the risk assessment, is important so that you bring the right science to that stage-two risk assessment and confirm that it worked. And then comes risk management: understanding the relevant health and environmental benefits of the various management solutions. That's the tough one here with phthalates; obviously there are benefits and applications, and how do you balance those? Ultimately that's the challenge before the agency here. And then, how would that be communicated? Now, there are some who say, Jesus, this is really precautionary if you think about it, and it really bugs me that people think that, because what we're trying to do is bring practical science, better organized science, to very practical decision making, because we do have trade-offs and tough decisions to make. We feel this new framework would be most useful in helping discriminate among the options. We also talked about stakeholder involvement; I touched on that earlier. But just to finish up with the key messages: there's this new framework
that really has a formative focus up front, while the four steps are still there. In terms of uncertainty, you have to know what the decision ahead is, so that you can match the analysis to the information needs of the decision makers; develop clear estimates of population risk; advance the cumulative considerations; and then, clearly, have the capacity to do it. I think a message in all this, stepping back from the science a little bit, is that it's all about better decisions. And I'll stop there, thanks. Thank you. I think we'll entertain some specific questions for clarification. Thank you very much for coming today. I can't say I've read the whole book... is this something we're doing? No, they're testing the alarm system. But I can certainly tell that you did a lot of work on that book; it's a very dense read, and everybody in this picture is to be commended. I guess my question is this: I completely agree that pre-planning is very important and you've got to set the stage right, but I'm interested in your thoughts on the whole issue we may be looking at in terms of focusing on phthalates and anti-androgens. At what point do you stop looking? There's the possibility that any number of chemicals could be a problem, and you made several comments that just because you don't know something doesn't mean it's zero risk, right? It's a daunting problem. So could you say a bit more about that in this framework? If you're going to define the problem, at some point you have to stop, because you just can't keep going. Right, it's that guy trying to do his thesis, right? The endless lit review. I actually think you have a mandate and a time frame here that help you with that, and we can't do everything. We raise questions about cumulative risk and even non-chemical stressors and population factors, and we realize we really can't take them all into account. But on the other hand, can we be blinded to them? Can we continue down the narrow runway of one thing at a time, without considering accumulation, without considering host or population susceptibility? We also realize there are ways to address them when your focus is really to make a public health decision, not to make a risk estimate. I don't think you're charged with coming up with a number, a quantitative estimate of actual population risk; you're asked to provide guidance on harm, and on reducing or preventing harm. That's why I think this is so important, particularly for a panel like this. And you were on the phthalate committee, right? So you know the challenge there: at some point you have to draw the line and say, okay, here's a statement of the scope of work. The phthalate committee also took on this whole issue of accumulation, but I think you have to stick to the charge for this committee. In the broader spectrum, we can't do it all, and we hope to push the science forward on all those other unknowns as well. But addressing the threats that we do know about, where there is evidence, and having some kind of bar for decision making there, is a hugely important step, because right now we're in irons, if you've ever sailed; we're not moving risk assessment forward. So we've tried to break that logjam by saying that we can't deal with all uncertainty, but we can put bounds around it and move forward. If the regulatory agency is asking us, in this case, for advice chemical by chemical, not on the composite of all phthalates, then how important is it for us to do a cumulative risk assessment, when the bottom line is that they want to know, should we regulate this one, and by what mode of exposure are they going to mitigate by regulating one by one? Because they're not going to regulate 30 phthalates at once. How do we deal with that dilemma? It is a dilemma, and ultimately it's guidance and discussions between the agency and this panel that will decide the appropriate approach.
And I don't pretend to be a phthalate expert, but what we're saying is that thinking of one substance in isolation, where there might be a similar mode of action for other compounds, is not appropriate in terms of characterizing risk. So, just as I think this committee is doing, and it's good to see the phthalate book there on the table, you have to consider the known, relevant, and important other exposures even if you're taking one substance at a time. Just as it took EPA a long time to recognize that when they're setting a drinking water standard there are air exposures too for a compound, and they have to balance that, we now have to recognize that where there are well-known families of compounds and mixtures that may be present in combination, we have to consider them together. I was asked by FEMA to come in and deal with the toxic gumbo, worried about what was in the flood waters, and Louisiana was doing the traditional approach: there are a hundred things there, so let's look at the drinking water standard for each one separately. Thankfully, pharmacists don't do that when they're giving us our medications. So I think the science is not there to take everything on, but it's certainly there to inform us, and within the bounds of our current information we can raise the bar and say, well, we are going to consider this group of compounds and their potential interactions, and estimate, or try to characterize, first of all susceptibility, and secondly how they may act as a group, even though there are uncertainties about that health endpoint. I realize that sounds squishy, but it's less squishy than what we do now; it's not perfect. Yes, that's the crux of what we have to do: we have all of these compounds to look at individually, but against the sum total of the entire class. And I realize I'm flying up here and you're down there because you've got the specific data in front of you, but I guess the guidance is this: don't do it all separately. Or rather, you don't want to do that, because you first have to look at them separately and gain every bit of information you can; but you know that there is exposure to mixtures, and you know that these are compounds where there are trade-offs. So you have to think of them first individually, to understand their relative toxicity, and sometimes you won't know for every endpoint; and then, secondly, consider how they might interact. Where you don't know, say you don't know, but at least have that as part of the consideration, and if you can act on it, describe as best you can how to characterize the risks, rather than the old way of saying, we don't know, so it's not there. And you're one of the big test cases. There are three big reports out of the Academy that have kind of shaken up the world right now, and the phthalate report is certainly one of them, and this is a very visible, very challenging panel. So I guess that's the best advice I have. EPA is convening a panel as well to look at their phthalate IRIS document, so we're all in this together, but I think it's a real point of change for how we look at potentially risky mixtures. Thank you. I think we'll pause here for a short break and reconvene in 15 minutes. Good to meet you.