Good morning, good afternoon, good evening, and thank you to everyone for connecting to the third public meeting for the committee to review EPA's 2022 draft formaldehyde assessment. Next slide, please. My name is Kate Guyton and I'm a Senior Program Officer here at the National Academies, and I'm also the Director for this study. It is my great pleasure to warmly welcome all of you to this public meeting and to thank you for your participation. I also extend my special thanks to the committee members for their service and to our EPA sponsors for their support. I am also very grateful to our Board Director Cliff Duke and our National Academies team members, Liz Boyle, Anthony DePinto, Brenna Albin and Darlene Gross. Next slide, please. To provide further context for today's meeting, this slide provides an overview of the consensus process and study timeline. We are here today for meeting three, which is part of the committee's information-gathering activities. The committee will consider information from today's session and from any future information-gathering activities as they develop their consensus report. Next slide, please. This slide provides an overview of today's open session agenda. The session will include a question-and-answer session with the study sponsor, EPA. Thereafter, at approximately 3:20 p.m., we will provide an opportunity for public comment. Comments will be invited from one speaker per organization, with preference given to those individuals and organizations who have not previously addressed the committee. Each speaker will have a maximum time limit of three minutes to provide comments relevant to the committee's task. After the meeting, anyone who wishes to submit comments or written materials that are relevant to the committee's charge should submit them via the National Academies' project page. Next slide, please.
As a reminder to our committee members, invited speakers, and the audience, the National Academies are committed to the principles of diversity, integrity, civility, and respect in all of our activities. All forms of discrimination, harassment, and bullying are prohibited in any National Academies activity. This applies to all participants in all virtual and in-person settings in which the National Academies' activities are conducted. We look to you to be a partner in this commitment by helping us to maintain a professional and cordial environment. Once again, thank you very much for joining the open session. I now would like to invite opening remarks from Jonathan Samet, our committee chair. Hi everybody, and welcome, and thank you for attending this second in-person meeting of the committee to review EPA's 2022 draft Formaldehyde Assessment. This session is an information-gathering activity of the committee. The committee's effort is being conducted under the auspices of the National Academies of Sciences, Engineering, and Medicine in response to a request from the U.S. Environmental Protection Agency. The committee, if we can go on to the next slide with our statement of task, has been asked to conduct a scientific review of EPA's draft document, referred to as the Integrated Risk Information System, or IRIS, Toxicological Review of Formaldehyde and Appendices. The committee will assess whether EPA's draft document adequately and transparently evaluated the scientific literature, used appropriate methods to synthesize the current state of the science, and presented conclusions regarding the hazard identification analysis and dose-response analysis of formaldehyde that are supported by the scientific evidence. The committee will not conduct its own assessment of formaldehyde, nor will the committee address the broader aspects of the IRIS program. Next slide, please. Recommendations about the IRIS assessment will be prioritized as follows.
Tier one: recommended revisions that are important for EPA to consider and address to improve critical scientific concepts, issues, or narrative in the assessment. Tier two: suggested revisions that are encouraged to strengthen or clarify the scientific concepts, issues, or narrative in the assessment but are not critical. Other factors, such as agency practices and resources, might need to be considered by EPA before undertaking the revisions. Tier three: considerations that might inform future evaluations of key science issues or inform development of future assessments. Today's open session is on the record and is being recorded. I would like to emphasize to everyone that this is an information-gathering session and the committee has not completed its deliberations. Comments made by individuals, including members of the committee, should not be interpreted as positions of the committee or the National Academies. Once the committee's draft report is written, it must go through a rigorous peer review process, as described earlier by the staff, before it may be approved for release as a National Academies report. Any observers who draw conclusions about the committee's work based on today's discussions will be doing so prematurely. Now, next I would like to ask the committee members to introduce themselves to the audience and indicate their affiliations. I am John Samet, dean and professor at the Colorado School of Public Health, and I'll go down my list of names just to simplify things. So, Aisha. Hi, Aisha Dickerson, assistant professor in the Department of Epidemiology at the Johns Hopkins Bloomberg School of Public Health. Dana. Hello, Dana Dolinoy, professor and chair of environmental health sciences at the University of Michigan School of Public Health. Dave. Dave Dorman, North Carolina State University. Rakesh. Hello everyone. I'm Rakesh Ghosh from the University of California at San Francisco. Sabine.
Sabine Lange, chief toxicologist at the Texas Commission on Environmental Quality. Andy. Andy Olshan, professor of epidemiology at the University of North Carolina at Chapel Hill. Ivan. Ivan Rusyn, professor of toxicology at Texas A&M University. Lianne. Lianne Sheppard, professor of biostatistics and environmental and occupational health sciences at the University of Washington School of Public Health. Katya. Hi, Katya Tsaioun, director of the Evidence-based Toxicology Collaboration at the Johns Hopkins School of Public Health. Joe. Hi, Joseph Wiemels, I'm a professor of molecular epidemiology at the University of Southern California. Lauren. Good afternoon. Lauren Zeise, director of the Office of Environmental Health Hazard Assessment within the California Environmental Protection Agency. Yiliang. Hi, Yiliang Zhu, chief and professor of the Division of Epidemiology, Biostatistics and Preventive Medicine at the University of New Mexico School of Medicine. Okay. Thank you. And now we'll turn to our EPA attendees to introduce themselves. And at the outset, I'd like to thank you for a very detailed response to the questions that the committee sent your way. You certainly provided a very comprehensive response to our questions, which was very helpful. So let me turn to you for introductions. My name is Kris Thayer. I'm director of the Chemical and Pollutant Assessment Division at US EPA's Office of Research and Development. Hi everyone, my name is Andrew Kraft. I'm one of the co-chemical managers on the IRIS formaldehyde assessment. I'm also Kris's associate in CPAD. Hi, I'm Tom Bateson. I'm an epidemiologist and the co-chemical manager for formaldehyde. Hi, I'm Samantha Jones, and I'm the associate director for assessment science in our Center for Public Health and Environmental Assessment in ORD at EPA, which houses the division and the IRIS program. Great. Thanks, and thank you for coming. And if I understand correctly, do you have any presentation materials that you want to use?
Or, if I understand right, we'll perhaps engage in a discussion. And I don't know whether you want to make any further introductory remarks on the materials you sent, or we can just get right into it; I'll leave it up to you. Yeah, whatever your preference is: if you'd like us to introduce our responses before the discussion, we can do so; if we just want to clarify responses, we can do it that way, whatever works. So if you want to make an intro and then we'll go into discussion, that'd be just great. So yeah, we received questions from the NASEM panel, and thank you for those questions. There were four questions, some of which had some subparts, so we answered them individually. Question one, I think, we interpreted to have an overarching question and then three sub-questions, so we answered those individually. We provided those written responses as requested by NASEM. We can introduce them going question by question if you'd like, and maybe use that as the presentation material, or we can answer clarifying questions. When I said introduce, I meant one by one. Sorry. Yeah, we've had some discussion, and I know we have some things that we want to talk about. We could perhaps then go back through the document or the structure of the questions to see if there are key issues we perhaps have not touched on. And again, I'll just repeat what I said: I know you put a lot of work into those responses, and I think many things became clear, and we do recognize that over the time during which the assessment was done, there was an evolution of the approaches that the agency was taking. But let me turn to the committee, and please, I know some of you have questions to lead off with. Lianne? Yeah, so this is Lianne Sheppard. I was interested in hearing a little bit more from you about your study evaluation review process: aspects like the roles and timing of the primary versus secondary reviewer, blinding, and consistency over time within health endpoint.
I note that this process has been going on for a long time, so I imagine there's been some staff turnover, and I was interested in just hearing some more details about that. Sure, maybe I can start on the actual process for the study evaluations, and then others can jump in if they want. So yeah, for our study evaluations, we had two experts review each study. We had a primary and a secondary reviewer. For most of these reviews, the timeframe during which we conducted them was probably the 2013 to 2016 timeframe, so we were still evolving our systematic review processes within the IRIS program. The primary reviewer was the initial reviewer, and the secondary reviewer went behind the primary reviewer, conducted their own analysis of the study, and then checked the information that was provided by the primary reviewer. They were not completely independent at this time. We didn't have the software tools that we have now to allow for complete independence of those reviewers. So the secondary reviewer was not blinded to the primary reviewer's decisions, which I think is what you're asking, across those domains. For consistency, we did have a team of topic-specific discipline experts, so we had the same domains for studies evaluated within a discipline: epi studies had the same domains evaluated across health effects, the animal studies had the same domains evaluated across health effects, and the controlled human exposure studies had the same domains evaluated across health effects, to allow for that consistency. And then we also had our discipline-specific experts talk across health effects, so the epidemiologists that worked on asthma, for example, would talk to the epidemiologists that worked on pulmonary function, and they would review each other's decisions to make sure there was consistency in that manner across health effects. Am I missing any components of that that others want to chime in on? And there was a kind of check-in, I believe, in terms of assessing potential drift.
Over time, yes, that's the other thing I was going to ask about, right. So we did have turnover in the team, obviously, over the long timeframe. There was consistency in some members, and so within discipline we had that kind of carryover, with some members participating throughout the whole process of the formaldehyde assessment development. So over all ten years there are some members that have been there the whole time, including Tom and myself. As well, we did have disciplinary workgroup reviews of our sections. Within the IRIS program, I think at the time there were seven different IRIS disciplinary workgroups: epidemiology, inhalation toxicology, cancer, tox pathways, nervous system effects, developmental effects, repro effects, and a couple of others, PBPK and quantitative methods. And they also reviewed the different parts of the draft assessment that were pertinent to their disciplines. So in that way, since these are independent from the team members, we ensured some consistency in application across the assessment by using those disciplinary workgroups. Thank you. I guess I'll keep going, and just to clarify, I think I remember this in the document: by domain you mean categories like selection and confounding and so on, right? Okay, just want to make sure. So can you just clarify, I think I kind of know the answer but I'd like to hear it from you, the use of HAWC in the formaldehyde process, and was it used for any of the study evaluations? No, we did not use HAWC for any of the study evaluations for formaldehyde; that was a tool that was developed after the formaldehyde assessment was suspended in 2017. We did, however, use HAWC for the systematic evidence map when we updated the literature after the assessment was unsuspended in 2021; we used HAWC for the literature tree diagrams, things like that. We didn't use it for the study evaluations, though.
Let's see, I had one more on my list. Um, there was, I think on page 21 of the response, a mention of a standardized template for documenting and presenting decision steps, and I was wondering if it was possible for us to see an example of that. Sorry, I'm just trying to get to the page. I think it's the second bullet in the response to 1C. Right, that was a template table for the study evaluation tables in Appendix A5. So there was a template table that was shared, within discipline, across the topic-specific experts, that they used to fill in the responses within domain for their particular health effect. I think that's one of the things; there were also template evidence tables within the main draft, so we tried to have consistency in the format of the evidence tables, to the extent possible, across health effect systems. Also, some of the figures had some templates that we were sharing. So you're saying the templates are already in the appendix, is that what you're saying? I'm just trying to read. I'm just trying to make sure I fully understand. Yeah, it was a template version of the table that was then shared with the other experts on the team, yeah. Can you give an example of a table number? Not necessarily right this minute; I mean, you can send it to us later, just so that we fully understand; that would be really helpful. And I want to say, although I did spend a long plane ride yesterday reading this detailed document, I also wanted to echo what John said, that it was very thorough and helpful. Thank you. Let's see, Yiliang, you had your hand up; is it still up? Yeah, just a quick follow-up question, that is: when the primary and the secondary reviewers have differing opinions and judgments, how would you reconcile the differences? Yeah, so what's the process?
Sorry. Consistent with our current practice as well, first there would be a discussion amongst those two reviewers, and then, if they couldn't come to resolution, we would bring in a third domain- or discipline-specific expert. Thank you. So, I don't know, this might put you on the spot a little bit, and this wasn't in the document, but if you were to start over again, what would you do now? What do you know better now that you would bring into this? Or maybe, in some sense, that's answered because you already have your new handbook and you're following all those procedures. Yeah, I mean, the software tools make a huge difference. Using HAWC from the beginning would have made things a lot easier; it makes the documentation much more easily transparent, rather than having these lengthy tables that take up a lot of pages. So for sure we'd use the software tools. Looking around, maybe one of the issues that came up as we were talking was, I think, the allergy and immune endpoints, for which you brought in sort of panels of experts. Could you just describe what you did, what the process was, how you identified the individuals, and how you used their input? I think that was the only outcome for which you did that, if I'm correct.
Yes, that's correct. So we didn't have a lot of expertise in that specific area, which was allergy and asthma conditions from the epi studies, and this is one of those situations where you have staff turnover; the people that actually wrote that section have since retired. But we did, and I can go back and get more details on the actual process for recruitment, which I don't have handy, but we did work through a contract to do that. We recruited five experts specifically on allergy and then five experts specifically on asthma; there was one overlapping expert between those two panels. They were consulted on the types of studies that were available, the types of outcomes that they were looking at, how to talk about those outcomes, and how to evaluate those outcomes. And that led to some of the criteria that were applied in the evaluation of those individual studies, based on those panel recommendations and those discussions with those experts. I think there were two or three calls, but I'd have to look back for the details on the process for that, since I didn't lead that myself. You know, a general question: in doing the study quality evaluations, they're in different tiers, like considered to be, say, high quality, medium quality, etc. But can you speak to whether there were details for how to differentiate, say within a domain, what would be considered high or moderate or low? Was there always some criterion that was explicitly stated for that, or was that a judgment call? How did that work out? So, for some of them there are explicit criteria: you know, if you have this limitation, it will be a deficient, or I think we were calling it poor, sorry. At least, within domain, it was good, adequate, poor, and then kind of critically deficient, if you would, within each domain, and then across the domains it's the high, medium, or low confidence, or not informative. And for some of those we did have explicit criteria: you know, if you have this limitation, this will
be, for this outcome, a low confidence. But no, not for every domain within every outcome did we have here's how you get a good, here's how you get an adequate, here's how you get a deficient within the domain. Sometimes that was an expert judgment based on, you know, the important things for this endpoint are x, y, and z; if you didn't meet one of these, this is a downgrade, and the severity of how you didn't meet it might take you from a good all the way down to a poor, for example. So no, not every time did we have this is high, this is medium, this is low; sometimes we did, but not for every domain. So, just as a follow-up, would that have been documented somehow within your process? Because it may or may not always have made it into the final report, correct? So there will be documentation of the specific deficiency itself within those study evaluation tables in Appendix A5. So A5.1 includes all of the specific evaluations of the controlled exposure studies; there were seven domains evaluated for every controlled exposure study, and they each have the limitations listed within the domain that resulted in the determination for that domain, if you would. Then A5.2, I think, is sensory irritation, A5.3 pulmonary function, etc., and within each of those appendices there are considerations for study evaluation that discuss, in general, the important points for each health effect. And then within the tables, like for example in the animal tables, there's a header bar that includes the specific limitations that were most impactful to each domain decision, and then that's also documented within those specific tables for each study. Do you want to speak to the cancer? So, speaking to cancer, which is in A5.9, there's a detailed narrative of how each domain was reviewed: selection, exposure, outcome, confounding, analysis, and what were the important things that we were looking for to be either positives or negatives. Those are laid out there in quite a bit of detail.
Maybe a very general question. One of the areas where, of course, systematic review methodology is still evolving is around mechanistic considerations. Can you talk about, as you moved forward in this assessment, how you gathered and put together the mechanistic information? I think just an overall description of what you did during these years, when perhaps the methods were not as far along as they might be now. And you're speaking more towards identifying it, or the whole, how you gathered the evidence, how you decided what was important, and put together your narratives? Sure. So formaldehyde is a little bit unique in that we had the NAS 2011 report to start from, and that really provided a pretty thorough roadmap as to what health effects were most important to consider, how to go about thinking about those, and some of the considerations for evaluating those health effects. And that really helped gear our literature searches, which were very health-effects-centric; not all IRIS assessments conduct individual health-effect-specific searches, more just by CAS number or something like that. So formaldehyde really used the NAS 2011 report as a roadmap for what are the key issues, to narrow down that scope, if you would. And so within our literature search design, we had mechanistic information collected with some of the searches; so, for example, for nervous system effects and for developmental and repro effects, those were designed to try and pull in mechanistic information on those health outcomes. We then also had specific mechanistic searches for certain pieces of information; for example, we had a separate search on inflammation and immune events that might be associated with multiple portal-of-entry or systemic effects relating to immune modulation, and so that was a separate search from the health-effect-specific searches, and similarly for genotoxicity and stuff like that with cancer. The way we pulled in the mechanistic
information was kind of in that focused manner, through those focused, non-health-effect-specific searches. When it was not developmental or repro and not nervous system, it was all pulled into these other searches. And, sorry, keep on. So, to move on from that, there were subsets of mechanistic information that were determined to be potentially more impactful, I guess you would say, based on, again, the NAS report as well as what the uncertainties were with the health-effect-specific searches. So we did conduct some more detailed evaluations of certain types of mechanistic studies. For example, studies relating to the mechanisms for cancer, in particular genotoxicity, had a very focused study evaluation; similarly, the studies on the inflammation and immune effects had kind of a more thorough study evaluation approach applied to those studies that might be more impactful to understanding, you know, the role of immune cell modulation in some of these health effects. So there were some focused areas where we dove deeper on mechanism than in other areas. Okay, thank you. Other questions? Yeah, I guess I'll ask another question on the various study evaluation tables. I'm looking at Table A-34, which is the sensory one; that's on page 281 of the PDF. You have these very helpful confidence graphs on the right-hand side, with colors on them for the various biases and so on, or an absence of color if you didn't have a concern. Would you say there's a direct mapping between those, like the coloring, how much color, how many are colored, and the final evaluation? I know somewhere else, either at the beginning of the appendix or in the preface, there was a whole description of those, and it gave some examples, but it didn't go through all the possible combinations of
what we see in the results. So, sure, happy to take that. The colors are just to distinguish one domain from another; they don't have any actual meaning, they're just to keep them separate. If there's no color, there was considered to be no bias in that domain. If there is, you could think of it as having a line down the middle: if the shading is below that line, that indicates we think there's some bias towards the null in that way; if it's equally above and below, we'd say the magnitude is maybe small but we couldn't tell the direction; and if it's all on the upper side, then we think there's probably bias away from the null there. And if the whole thing is shaded, we really have very little confidence in it: there could be confounding whose magnitude and direction we can't ascertain, or the sample size could be very small and we're not sure what direction. Yeah. Following along from Lianne: so, if any one box is colored, regardless of the direction, how would you identify that as medium or high? So, if one box was colored but it was only sort of halfway and not the whole way, that might drop it down to a medium for just one; sometimes there could be two if they're considered slight potential biases. If it was the whole thing, and there were multiple domains that were colored, then it might be uninformative; we just couldn't say. In between, there could be low. And we tried to summarize that in the rightmost column: we listed what we thought were the most influential issues for that study across those domains and tried to give it a little context there. So, clarifying: can one conclude that, if you didn't mention one of the categories in the right-hand section, that implies that you had a high confidence in that? Or just that what you wrote down was what drove your judgment, and that it didn't necessarily say anything about the other
categories? I would say we didn't think it was influential as a bias; not necessarily that they were all high, but there was nothing to indicate that our confidence should be diminished. So, for Table 1, for the sensory irritation, not Table 1, sorry, just give me a second, Table 1-1 for the sensory irritation studies in controlled exposure: those boxes were not there. Does that mean anything? In the main document. Excuse us while we sort of find it; main document, 116. So, as Andrew mentioned, the different endpoints were handled somewhat differently. Earlier we were talking about the cancer studies and others, and these are the controlled human studies, which were done a little bit differently. Yeah, so for the controlled human exposure studies, there were, excuse me, there were considerations relating to some of the individual domains, information bias, selection bias, confounding, and others, not documented in the same way, you're right, but those were also considered in the appendix tables. But one of the main evaluation points there was the controlled exposure itself, so that was separately evaluated in A5.1, like we had talked about. And so the controlled human exposure studies would have undergone the same exposure quality evaluation as the animal studies, in terms of, you know, test article, generation method, analytical method, and things like that, which were also very important. And then there also is a section, like for sensory irritation there's a section called methodological considerations for evaluating the studies at the beginning of each synthesis section, and that talks about, I believe, so I'm not looking at it right this second, some of the important things for interpreting the controlled exposure studies in the context of that particular endpoint, because those studies were available there. But again, it's the same basic presentation, so the primary limitations should still be listed in that table, even though it doesn't have, I think,
what you're looking for, that graphic with the colors. That may not be there, but it's still the same thing, where the main limitations and the confidence are presented there. And again, on confidence: high confidence would be few or no limitations within domains; medium confidence, there are some limitations, but they're not expected to substantially impact the results; and then low confidence, like Tom was saying, would be one or more, it could be a single thing that's wrong, kind of major limitations within a domain that reduce confidence in the results or their interpretability; and then uninformative, there's just too much going on to really say much about the results. So that was consistent, but yeah, that caterpillar may not be there for the controlled exposure studies, you're right. And, as the epidemiologist asking this question: selection bias we always worry about, and it's always theoretically possible. How did you decide what the directional consequences of selection bias might be? I mean, I think it's arguably possible that selection bias could drive things in one direction or another depending on the factors that underlie it, but it's a hard determination to make. So how did you do that in practice? In practice, we looked at the participation rates. If they were high, then it was unlikely to be related to the exposure; if participation rates were low, then we considered there was potential selection bias, and then we would look to see what information there was that might relate, whether cases and controls were providing different levels of information. In the cancer studies, where we had cohorts, we looked at, I believe it was all-cancer SMRs; when those were very low, less than an SMR of 70, then we'd say that this looks like a very healthy population, because compared to the rest of the population they're not getting cancer, and that might concern us. So those were some of the things that we did lay out in the A5.9 section, where we talked about cancer and selection bias in
practice. For selection bias and, you know, perhaps confounding is another example: did you standardize the approach across health outcomes in some way, so that each group had the same sort of operational approaches to deciding about the potential presence of confounding and selection bias, and the consequences thereof? So the general strategy with confounding was to pre-identify what the potential confounders might be, what the other known causes of a particular outcome might be, and to list those up front and look for those when we reviewed each of the studies. So if it was myeloid leukemia, we would look at benzene, and we would make sure whether benzene was likely to be a potential confounder in the environment of that study; if it was, how was it addressed in the analysis? And the same tack was taken throughout the other outcomes, to sort of understand, for that particular outcome, like for asthma in the expert consultation, we would have asked, you know, what are the other exposures that we need to be aware of when we review these studies, and which ones are more important or less important, and take that information. Okay, thank you. Okay, following on to the question on participation rate, high or low: was there any cutoff that you used, and was that used uniformly across all the studies? I mean, there are general remarks that across most of the studies the participation rates were above 90 percent, and when they were all above 90 percent we didn't consider this to be a vulnerability for selection bias. When those rates dropped down to 70 percent or lower, we became very concerned. Those sorts of general rubrics were used across the studies, because the epidemiologists were convening and answering these questions, polling the group, and then applying them. We had four epidemiologists on the team, so we were able to get good information that way. So I guess you partially maybe answered a question I was
asking so it sounds like these study quality criteria were being developed while you were evaluating studies as opposed to having them defined a priori and then applying them is that correct they were honed over time yes and there were times where we went back and said we have to look at these again because now we're seeing it in a different light or there's something new that's come come to mind there's some new information I'm sorry but hang on a second yeah okay so I have a quick follow on too so what if no information for example no information about participation rates is provided like how would you deal with those kinds of situations where they just didn't tell you and it's kind of true too a lot of like they just you know papers sometimes just don't report some crucial bit of information so how did you deal with those kinds of situations so when there's information that we're looking for we're looking forward across all the studies and we don't find it in the study that would be a concern and possibly we would have considered that a potential for bias since we just don't know and there were situations where we did reach out to study authors for additional information if it was you know a decision between a medium confidence and low confidence for you know what could be a critical study or something like that we didn't do it every time as I think you have seen in our answers but there were situations where it reached out to the study authors and they didn't respond and you know it's kind of you have to ding them but sometimes we did get a response that helped clarify some of that information thank you just follow up just you know sometimes there might be they might not mention a potential confounder that we're interested in and then we would have to consider what is the nature of the business or the exposure or the manufacturing process and is it really likely that radiation is used in a in a formaldehyde resin plant probably not and we wouldn't then we would 
set that aside ilian i expect you hi um this this is a question about those response assessment and uh calculation of rfc i appreciate the fact that you can now select a number of finalist studies and calculating i for c and doing those response assessment then pick up one among them but i'm not always it's not always clear to me whether um a number of i for c's actually inform your final selection versus you already have a study that you prefer the calculation of rfc's among a number of studies reinforces your choice so that is my confusion i wonder if you could clarify whether you always have an algorithm which informs you how to choose the final one among a small set of studies for example whether you always choose the one with the lowest rfc that there's a question is the question clear yes we would not always choose the one with the lowest r so you see in the from out i was trying to get to the figure that illustrates it which again is unfortunately taking too long on this computer right now so we had a number of candidate values across the different health effects some of which were based on one or more studies that we had modeled for the different health effects and there is a graphic that shows um confidence in that that candidate value based on consideration of the study underlying it so was it a good quality study based on our evaluations the point of departure for estimating it you know do we have to extrapolate well below the doses that were examined or the exposure levels that were examined in the study as well as the uncertainty that was applied to that point of departure to derive the candidate value so there's a graphic that shows um kind of those confidence determinations as well as the final candidate values as well as the uncertainty i i'm not and i don't have it up right now i don't know if you can get it while i'm talking about it but those would be the considerations that we applied to selecting that final number so it was obviously not in 
that graphic, there are much lower candidate values; I think there's a candidate value for male reproductive toxicity that's an order of magnitude lower. However, the confidence in that candidate value was quite low, and the uncertainty applied to derive it was quite high. We had a lot more confidence in these human studies, on a collection of endpoints that all clustered very closely together, with uncertainty factors on the order of three to ten, which in our assessment practice is phenomenally low. So we went with the higher-confidence candidate values, interpreted with less uncertainty and greater confidence, more so than the level. Sorry, go ahead.

Great, to follow on this, thanks for that, but do you have a well-documented algorithm that tells you, say, step one, choose the one with the highest confidence; step two, and so on? Is the algorithm well documented, or is it an expert-judgment, case-by-case situation, more of a fuzzy but expert algorithm? A lot of expert judgment is implied. We could have a very high quality cancer study and a medium quality cancer study, but the high-confidence study could be at very high exposures, an occupational exposure, and what we're trying to do is estimate what risks might be at low exposures. So it might be the case that a medium-confidence study would be preferred over a high-confidence one, because it allows us to estimate risk in the low dose range, which is where we're interested, and requires less extrapolation to the low levels. So it's a balancing of these factors: we're obviously considering quality, but we're also considering the range of the exposures, and potential dose-response shape. One study only looked at linear, another actually looked at non-linear; that might be important. We're weighing those factors with expert judgment.

Is this well documented in your documentation? I guess that's my final question: where can I find all these rationales in the documentation, so that at least I have an understanding of that? Figure 2-3 is the figure I was talking about, and there is a discussion of those different aspects of confidence there. That has actually been expanded upon in the IRIS handbook that was finalized in 2022, which gives more explicit detail on the considerations of confidence that go into determining which candidate value is most appropriate to represent the RfC, which I think is what you're asking about. It's not algorithmic, not one step then the next and the next, but these are the considerations that are applied to making the judgment call on which value is most appropriate. It's not an average or anything like that.

Maybe a question on practice, anybody; we'll go to you next. Who was actually making these decisions about which study to pick? Was it the same group, or was it the outcome-specific groups who were determining within outcomes? I'm trying to understand whether some uniformity was brought to the expert judgment for the selection. This would have been a whole-team decision, across all the disciplinary experts, and we also would have had our work group reviews. We also have senior-level reviews of all of our decisions; at that time I think it was called CAST, the Chemical Assessment Support Team, senior-level reviewers who would weigh in on those decisions, and during agency and interagency review the question of whether we are making the right selection would also be vetted. But it was not an individual; it was the team that initially made the decision that we have the highest confidence in this value for these reasons, and then it goes up from there. This is Grace, and that's similar to now: you have a team, they consult with the working groups,
and then again, before it even leaves the division, you go through the senior-level division review.

If I could follow on: Figure 2-2, which Andrew was talking about, plots the organ-specific RfCs against the composite uncertainty, and in the graph there are pulmonary function studies, allergy studies, sensory irritation, respiratory tract pathology, asthma, female reproductive, and male reproductive, and some of these are, I think, animal studies. So we really need the whole team, with expertise across all of these endpoints, to look across them, and the footnote to Figure 2-2 explains that.

Yeah, Evan. Yes, just to stay on the dose-response assessment: can you tell us your current practice, versus what was applied in this assessment, with respect to deriving a point estimate for the non-cancer RfC versus a range of values? You've shown graphically that there is a clustering of studies, and you've derived multiple candidate values and then organ-specific values, but ultimately there's one number as opposed to a range. What is the guidance that you use to derive a point estimate versus a range, what's in the handbook, and how are you, as an agency, thinking about communicating that there is a range rather than a point estimate?

In the definition of the RfC, it does say with uncertainty spanning perhaps an order of magnitude. However, we provide point estimates because ranges are hard for our program and regional partners to use in application. Do you choose the high end of the range or the low end? That's another decision point they would have to make as risk managers, which would be difficult. Part of what you're asking, though, is that in the handbook there is also some discussion of risk-specific doses and things like that, moving beyond point estimates toward approaches that perhaps better capture uncertainty and variability in a quantitative way. We're still exploring those avenues. We have one assessment that is piloting it, and I don't know, Chris, if you want to talk about that at this point, but we don't have any assessments that have applied it formally from start to finish.

I don't think we need to get into what's outside the scope of today's discussion, but just to summarize: it's the agency's practice, and a long-standing practice, to derive a point estimate, but you did provide all of the other candidates, in accord with the recommendations of the 2011 committee and the previous tetrachloroethylene review, to derive multiple candidate RfCs. Correct. And even our candidates: Tom mentioned organ-specific values, and those actually might be representative of one or more candidate toxicity values, each from an individual study. For some of those we had multiple endpoints within a study on the same health effect that were advanced as candidate values, and then another study, and then across those two or three candidate values we selected an organ-specific value that represents them. That is very consistent with a graphic in the NAS 2011 report that walks through study, endpoint, point of departure, what I think we would now call a candidate value, and then selection. We have organ-specific values, which we have been told are very useful for partners thinking about things like risk scenarios or cleanup actions that might combine numbers for a certain health effect across chemicals, and then the overall RfC that builds from that. But yes, there's a graphic in the NAS 2011 report that we followed pretty much exactly in terms of how to lay out a dose-response assessment in a logical and transparent way. Other
questions? Anybody on remote with questions? Ah, Lauren. I appreciate hearing about your internal process in response to the previous question; do you have that laid out anywhere in the assessment document? You're asking about the graphic that follows the NAS 2011? No, about your internal review process, the way in which individual expert work groups might meet and discuss a chemical, and so forth. No, I don't think that's in the formaldehyde draft assessment materials themselves. That is probably in the handbook to some extent, but no, those processes are not laid out in that way. Thank you. Others?

So let me just ask a question; this is Jon Samet asking now. I'm curious about the language you're now using for the strength of evidence for causation, and why you changed it. There's been a lot of standard practice, and you've gone to "evidence demonstrates" and so on. I'm just curious about the motivation, because it's a bit of a departure from many things I've worked on, like Surgeon General reports and other classifications of strength of evidence, and when I read it, it still sort of has a mental grading. Why did you do it?

There's a long story behind the evidence integration processes. We piloted a number of different terms to describe the different categories or strengths of evidence for the hazard, if you would. We had a lot of internal discussions with our program partners across the agency on this; we convened work groups with the different programs and regions to discuss it, and there was a lot of emphasis from those programs on making the conclusion about the evidence, and not about something other than the evidence. That's why it's "the evidence demonstrates," rather than "I have high confidence in the evidence" or something like that; that came across pretty strongly from our agency partners. We tried a number of iterations, I guess is the way to say it; we tried frameworks that exist within the EPA, and those didn't fly for our program. We had those discussions, and this is where we landed. Just to elaborate a little, because you're talking about across the agency: you can have various opinions across the agency, and what worked for some parts of the agency didn't work for others, so what you're seeing now is something we could all agree on. That's essentially how we got there, but as Andrew indicated, a variety of scenarios were floated and discussed.

Just to follow up, were these reviewed as part of the review of the IRIS handbook, or how were these designations reviewed? These were discussions in the context of developing the IRIS handbook. They would have been happening around 2016, led by core members of the IRIS handbook development team with agency representatives from the different programs and regions, over a long period of time, a lot of it because we would try something; anyway, it was a number of discussions in the context of developing the IRIS handbook, not the formaldehyde assessment specifically. Yes, and another consideration was the way that the EPA had previously treated hazard determinations for cancer versus non-cancer; we considered that it would be very useful if we could standardize the descriptors across cancer and non-cancer, so that they would be much more similar than they were previously.

Okay, thank you. I'm sure we could have a long discussion about this, but I will curb my tongue and we won't go on. Let me ask, since we do have time left on the schedule, whether there's anything else; again, I think we have perhaps not so many questions
because you did such a good job of responding to the questions that we posed to you. But let me just check with the committee and see if anybody else has any questions or comments, or anybody online. What would you like to do with scheduling? Can we go ahead, or do you want a 10-minute break? Let's see; I think we're otherwise scheduled at 3:20, ish. So why don't we take a 10-minute break. I want to thank our colleagues at EPA again for the very useful responses to our questions and for coming and speaking with us. Since we're scheduled at 3:20 to continue with public comments, we'll wait 10 minutes to make sure that everybody is gathered. So why don't we take a break for 10 minutes, and then we'll come back. Thanks.

Okay, we'll go ahead and resume and move to our public comment period. I'd like to now recognize those who registered in advance to make brief comments to the committee. We have invited comments from one speaker per organization, and we'll recognize those individuals and organizations who have not previously addressed the committee. Each speaker listed on the slide will have a maximum time of three minutes to provide comments relevant to the committee's task. After the meeting, anyone who wishes to submit written comments or other materials relevant to our charge should submit them via the National Academies project page. As a reminder, each presenter is limited to three minutes; we have a substantial number of people who want to offer comments to the committee, and we will give you a warning as you approach the end of your allocated time. With that, if we can go to the list of commenters. All right, first we will go with Preston Beard. If you're here, we're ready for you to go ahead and get started. Thank you. Can you hear me okay? Yes, yes, we can. Okay,
great. Good afternoon. My name is Preston Beard, speaking on behalf of the United States Chamber of Commerce. We appreciate the opportunity to comment on the draft IRIS review of formaldehyde issued by the EPA. The Chamber and its members are committed to the safe and responsible management of all chemicals, including formaldehyde, and look forward to continued cooperation with the EPA on ensuring the protection of public health, beginning with the development of the complete and foundational risk information that is at the heart of the IRIS program. Formaldehyde is an important chemical with beneficial uses spanning a broad range of economic sectors. Given the potential for IRIS assessments to trigger litigation and regulatory impacts on business, it is critical to ensure that draft assessments are informed by the best available science and developed through a transparent and unbiased process that appropriately integrates all streams of evidence. Unfortunately, we have concerns that this standard was not met. These concerns are not new to IRIS, and in fact appear to reflect a continuation of long-standing problems inherent in the broader IRIS program; over the course of the last decade, a series of reports have criticized IRIS for lack of transparency, improper scientific processes, and inconsistent and flawed methodologies. Beyond just formaldehyde and the IRIS program, this approach also sets a troubling precedent for other risk assessments that EPA may undertake for chemicals and pollutants under TSCA, pesticide registration, and the Clean Air Act. We therefore strongly urge EPA to address these shortcomings in a rigorous and impartial manner while using the best available science, consistent with the 2016 Lautenberg amendments to TSCA and other statutes. In light of these concerns, the Chamber urges EPA to take the necessary time to follow the updated IRIS process and fully incorporate comments from all relevant stakeholders, to issue a revised draft, and to ensure that any final assessment is transparent, scientifically sound, and adheres to statutory intent. Moreover, because EPA failed to incorporate fundamental concerns about key issues during the interagency and intra-agency review process, the agency should coordinate with OMB to conduct a formal interagency review of the draft formaldehyde IRIS assessment that facilitates review and comment from experts and agencies familiar with the use of formaldehyde across the country. In closing, the Chamber believes that this approach sets a troubling precedent for other chemical risk assessments, and we strongly encourage EPA to revise the draft assessment and incorporate the best available science and practices for systematic review. A formaldehyde IRIS assessment that does not consider the weight of scientific evidence could lead to unwarranted regulations that would ripple through the supply chain. Thank you for your time.

Thank you, and we'll now move on to Tokesha Collins-Wright. Please go ahead. Hi, thank you, can y'all hear me? Yes, we can. Hi, I am vice president of environmental affairs for the Louisiana Chemical Association. LCA is a non-profit Louisiana corporation with over 100 chemical manufacturing sites in Louisiana. In April of last year, EPA released its draft IRIS assessment for formaldehyde. Based on the voluminous set of documents, LCA requested that EPA extend the comment period on the review by at least 60 days. Notably, many other industrial and commercial entities made similar requests, which really demonstrated the universal need for more time to review and fully digest the documents. Unfortunately, however, EPA denied those requests. LCA considers the 60-day public comment period to have been woefully inadequate to allow a thorough review of the documents, let alone enough time to prepare informed and complete comments for EPA's review. The comment period allowed for only a preliminary review and commentary, and
thus undermines the transparency of, and confidence in, EPA's review process. LCA retained an outside toxicologist to review the documents, and during her review she made several findings. In the interest of time today, I won't go into great detail about all of them; you can find that in the comments we submitted to the docket. Among other issues, however, she found that EPA overestimated the relationship between exposure to formaldehyde and the incidence of cancers such as nasopharyngeal and sinonasal cancers and myeloid leukemia, concluding that the evidence "demonstrates" a relationship between exposure and those cancers; based on EPA guidelines, we find the classifications "evidence indicates" and "evidence supports" more appropriate here. The draft assessment concludes that human exposure to formaldehyde at extremely low doses causes a variety of adverse health effects; however, these conclusions are based on very little new evidence and are not scientifically supported. LCA appreciates the opportunity to discuss these issues today and hopes that the agency will review all of the work done by, and information submitted by, the commenters who have concerns about the review process, and that the agency makes meaningful changes to the risk assessment to incorporate the new research and data that challenge aspects of the draft assessment. Thank you.

Thank you. We'll move on now to Gear McGrabber Mariam; please go ahead. It doesn't look like they're there, so we'll move on to the next person. Okay, then, we'll move on to Adrian Krigsman, please. Thank you; can you hear me? Yes, yes, we can. I'm so sorry; you can speak now. Hello, my name is Adrian Krigsman, and I'm speaking on behalf of Troy Corporation, an Arxada company. Troy Corporation is a manufacturer of preservatives for industrial processes such as the manufacture of paints and coatings, construction products, and metalworking fluids. These are regulated under the Office of
Pesticide Programs, the Antimicrobial Division, under FIFRA, the Federal Insecticide, Fungicide, and Rodenticide Act, and many of these preservative active ingredients are now undergoing a periodic re-evaluation of toxicology data and assessment of risks by OPP under their registration review program. Some of our preservative products have been determined to be formaldehyde-donor chemistry, since their primary mode of antimicrobial action is through the release of formaldehyde into the test article. We would like to highlight the importance of the draft formaldehyde assessment in the review of these products, as well as the interaction of OPP with the office of toxic substances within the registration review program. On the importance of the draft risk assessment on formaldehyde: in 2011, OPP proposed to use the IRIS formaldehyde assessment as the basis for regulation of formaldehyde-donor pesticides under their then so-called RED program. Since that time, EPA has proceeded with their registration review program, a 15-year cyclic approach to the re-evaluation of pesticide active ingredients, and within the past year OPP staff have indicated their collaboration with the toxics office staff on the 2022 draft formaldehyde risk assessment and again reiterated their intent to await the finalization of this assessment for use in their registration review program. The evaluation of the 2022 draft risk assessment must be conducted using the best available science and must review various key parameters of formaldehyde toxicology, such as the mode of action, the linear versus non-linear approach, and overall exposure, because of the downstream effects on other regulations, such as FIFRA and the registration review program. Moving on to the formaldehyde assessment's particular relevance to metalworking fluids: formaldehyde-donor chemistry pesticides are used to preserve metalworking fluids from microbial attack. Besides providing microbial control, this class of preservatives also provides the added benefit of combating various endotoxins associated with metalworking fluid spray mist; these endotoxins are known to have an effect on the respiratory function of workers within that environment. If OPP determines that the end result is the cancellation of this class of preservatives, in this... I'm sorry that time ran out, but I think you got to the main points. Thank you.

Next is Mary Mirabeau; please go ahead. I'm sorry, I missed; next is Brock Landry. Thank you; my name was added to the list in error, and I don't have any comments for you at this time. Okay, then we'll go on to Mary Mirabeau. It does not look like she is here, so we'll move on to Peggy Murray next, and we will circle back and check on those who weren't here. Am I on? Please go ahead; yes, you're fine, we can hear you. Okay, great. Hi, I'm Peggy Murray, research director for the Center for Truth in Science, a private nonprofit organization with a mission to ensure that only the best science is used in policy and legal decisions. In the April draft assessment that it released, the EPA directly linked exposure to inhaled formaldehyde with myeloid leukemia and other LHP cancers and concluded that the relationship is causal. In contrast, the most recent U.S. National Toxicology Program report did not make claims of causality but only suggested an association, and a number of scientific experts have questioned EPA's conclusion on leukemia and inhaled formaldehyde, based on an insufficiency of causal evidence, including the absence of mechanistic plausibility. After an initial examination of the EPA literature review process, the Center determined that an independent, rigorous review of studies, utilizing the currently most advanced systematic review methods, is needed for optimal development of policy and prevention. We focused on the fact that new
studies would need to include epidemiological analyses, relevant animal studies, basic mechanistic investigations, and studies of the contribution to cancer risks attributable to the additive effects of endogenous formaldehyde. We released an RFP on December 1st, 2022; it closes this Wednesday, February 1st. We expect that the awards will be made quickly after an independent review and that the work will be completed and submitted for publication by this fall. We hope to determine the extent to which there is scientific evidence for a clear causal link between exposure to inhaled formaldehyde and leukemia and LHP cancers. We want to see accepted, state-of-the-art systematic review methods, including the opportunity to replicate the review, so we're asking that applicants include a plan for publication that allows the transparency necessary for others to run the same analyses in order to determine replicability of results. This is kind of a new approach to systematic reviews; Paul Tugwell published on this in 2020, and we think it's a good idea. We expect the findings to be published in the fall. Thanks a lot for the opportunity to talk about it, and we'll certainly be willing to share those findings with everyone.

Thank you. Now we'll move on to Frederick Nundu. Frederick does not look like they're here, so we'll move on to Andy O'Hare. If you're here, please go ahead. Good afternoon, my name is Andy O'Hare, and I am president of the Composite Panel Association, or CPA. I am pleased to provide you with the perspectives of CPA on the 2022 draft formaldehyde IRIS assessment. CPA was founded in 1960 and is a trade association representing more than 95 percent of the North American manufacturers of particleboard, medium-density fiberboard, and hardboard. The total impact of the industry on the U.S. economy is almost $10 million annually, and the industry directly supports over 23,000 well-paying jobs. These products are produced using wood fiber that
would otherwise be landfilled or decay in the environment. The fiber is generally sourced from sawmills and tree harvesting operations. The panels are produced by combining the fiber with resins, followed by pressing and sizing in composite panel mills. The panels are key ingredients in long-lived products ubiquitous in residential and commercial buildings, including cabinets, furniture, and flooring. Most resin systems employed in the panel-making process contain formaldehyde as a key ingredient; consequently, CPA is very interested in the IRIS assessment and its potential impact on policies affecting formaldehyde use. The formaldehyde emissions from these products are very highly regulated. In 2010, Congress passed an amendment to the Toxic Substances Control Act called the Formaldehyde Standards for Composite Wood Products Act. We had support from national environmental groups, and we were instrumental in the passage of this law with President Obama's signature. EPA prepared an implementing proposed rule in 2013, and a final rule was issued in 2016. The rule established very low limits for emissions of formaldehyde from composite wood products to protect human health and the environment. The formaldehyde resins used to make these products are important contributors to successful product performance, and the availability of these versatile and cost-effective wood products would be significantly impacted by a TSCA rule limiting the use of formaldehyde resins. Indeed, the EPA rule is a risk management tool supported by a rigorous, independent, third-party testing and certification program, and CPA strongly believes it is the type of common-sense approach Congress envisioned when TSCA was amended in 2016. The extremely low proposed risk levels suggested by the 2022 draft EPA IRIS assessment are well below formaldehyde concentrations in the environment; their potential reflection in a TSCA risk assessment could eliminate this and perhaps many other very useful applications of formaldehyde. Thank you for your attention to our views, and I would be happy to address any questions you may have.

Okay, thank you. We'll go on now to Leslie Recio; please go ahead. Can you hear me? Yes, we can. All right. My name is Les Recio; I'm the chief scientific officer at ScitoVation. I recently reviewed the U.S. EPA review on formaldehyde, and I think there is a misinterpretation of a manuscript on which I was a senior author, a study conducted at CIIT. We observed p53 homozygous single-base-substitution point mutations in rat nasal squamous cell carcinomas from formaldehyde-exposed rats. This reduction to homozygosity at the p53 locus for point mutations could have resulted if the other p53 allele was silenced or had been deleted, if there was a gene conversion or recombinational event, or if there was an aneuploidy event of the other allele, rendering homozygosity for a single-base point mutation in the cDNA. A particular concern to me is that this is misinterpreted as support for a genotoxic mode of action, which is not plausible for a number of reasons. For one thing, homozygous point mutations in the identical location, with a mutagenic target at those two base pairs specifically in both alleles, are not plausible. The DNA-protein cross-links that form the primary lesion of formaldehyde exposure are not an adduct that produces point mutations exclusively at GC base pairs. A separate study by the NTP in p53 heterozygotes concluded that the results of that short-term carcinogenicity study do not support a role for p53 in formaldehyde-induced neoplasia, and in that study there was no observed increase in leukemias or lymphohematopoietic cancers. To conclude, the p53 mutations we observed were likely due to a passenger kind of event, not a mutagenic event induced by formaldehyde, and finally, there are no data to support a role for formaldehyde-induced p53 mutations in formaldehyde-induced
neoplasia, either nasal cancers or hematopoietic cancers. That's it. Okay, thank you. Committee, questions? Okay, thank you. We'll move on then to David Saltmiras, please. We're moving to Lee Yang next. Please go ahead. Is Lee on? I'm asked to unmute. Let's see, Lee Yang, if you're ready you can go ahead, please. Okay, Lee, we will circle back to you in case you're having problems. Then to Elvis Zornik, please. Give us a moment here. To Charlotte Anthony. Okay, Charlotte Anthony, if you're on, please go ahead. Sorry, I wasn't registered to make oral comments. On to the next person. Okay. Next to Harvey Clewell. Harvey? Oh, there I go. Yeah, I can hear. Okay, we can, yeah, we hear you now, thanks. Sorry. Good afternoon. My name is Harvey Clewell, and I'm a principal consultant at Ramboll US Consulting. Together with my colleague Mel Andersen, I've been conducting research on the carcinogenicity of formaldehyde for more than 30 years, first for the Air Force, later at CIIT and the Hamner. Much of this research has been funded by the American Chemistry Council; however, the opinions expressed today are my own. At the Hamner, we conducted studies on the dose response for genomic responses of the rat nasal epithelium to inhaled formaldehyde. These studies clearly demonstrated that effects of inhaled formaldehyde on cells only occur at concentrations that significantly increase cellular formaldehyde above endogenous levels, which requires inhaled concentrations of 6 ppm and above. Recently my colleague Rory Conolly has made important improvements to the formaldehyde BBDR model to address perceived uncertainties and to incorporate an extended data set on DNA adducts formed by both inhaled and endogenous formaldehyde. In developing the current EPA cancer guidelines, the EPA, under the leadership of Dr. Bill Farland, pioneered a new approach for conducting cancer risk assessments that was anchored in the notion of a chemical's mode of action for carcinogenicity. The mode of action serves as the
basis for the evaluation of toxicity studies and the selection of the most appropriate extrapolation approaches to support science-based risk assessments. Under the EPA cancer guidelines, the principles of structured review must be applied not only to the selection of human and animal evidence of toxicity but also to the evaluation of mechanistic evidence regarding the mode of action. The draft assessment's failure to incorporate a systematic review approach for reviewing and integrating mechanistic studies excludes this critical aspect of the risk assessment process and introduces substantial risk of error and bias into the assessment. A formal mode-of-action human relevance framework developed by the International Programme on Chemical Safety, which is cited in EPA's cancer guidelines and the IRIS handbook, provides a structured framework for such evaluations. Two recent publications on which I am a co-author were able to apply this framework to the cancer endpoints for formaldehyde. I believe that the necessary mechanistic data for determining a mode of action for formaldehyde are available, and that a structured review would support the conclusions of a recent international interdisciplinary expert workshop, documented in Andersen et al. 2019: that the mode of action for rat nasal tumors is driven by cytotoxicity and proliferation, with mutagenicity contributing only at exposures associated with toxicity, and that there is no plausible mode of action for inhaled formaldehyde to cause cancer in tissues other than the immediate portal of entry. Thanks for your attention. Thank you, and next we'll move to Chris Farmer. Thank you. Can you hear me? Yes, we can, thank you very much. Good afternoon. My name is Chris Farmer. I am General Counsel for the National Funeral Directors Association, NFDA, and founder of the law firm The Farmer Firm here in Houston, Texas. I'm here today to speak on behalf of the members of NFDA. I've dedicated my 20-year legal practice to representing
funeral service businesses in all aspects, including making sure that funeral service professionals have a safe and healthful place to work. Funeral service professionals are well educated on the potential risks associated with formaldehyde. They are also extensively trained and equipped to use it safely. Formaldehyde remains the preferred preservative used in embalming in the United States today and is unrivaled in its ability to safely ensure that remains are in a condition so that families are able to say goodbye to their loved ones. Today I want to share my concerns with you regarding the EPA's 2022 draft formaldehyde assessment. First, the literature review that EPA conducted did not incorporate many aspects that are critical to executing a systematic review, such as a systematic review protocol, objective inclusion and exclusion criteria, and transparent methods and results for evidence integration and synthesis. These failures resulted in the exclusion of key studies that may have impacted the weight-of-evidence assessment and consequently could have changed EPA's decision making. Second, in deriving the reference concentration, the EPA relied on a potentially flawed approach for selecting key studies. By prioritizing general population studies over controlled human exposure studies, they relied on studies that are subject to greater potential bias and confounding. Additionally, there exist significant limitations in the studies the EPA relied on to dictate key health effects that could impact conclusions on causality, including limitations in study design and interpretation of adversity for the selected endpoints. We ask that the Academies reconsider the key studies and identified points of departure used to derive the RfC, as well as the weight of the evidence around conclusions made by the EPA. Finally, when deriving the inhalation unit risk value describing formaldehyde cancer potency, the EPA analysis had two major flaws. First, the EPA relied on a study for which
potential confounding has been identified and reported on in the literature; some of the studies reporting on this confounding were not included in the EPA's assessment. Also, the EPA did not fully evaluate alternate modes of action and similarly excluded or disregarded key studies from their review. We encourage the Academies to consider both of these fundamental issues when evaluating the EPA's assessment. Thank you for your time today. Thank you. Next we'll move to James Enstrom. Hello, can you hear me? Yes, we can. I'm Dr. James Enstrom, and I'm adding to my October 12th comments. I've had a long career as an environmental epidemiologist at UCLA, and I've published significant evidence that air pollution, particularly PM2.5, does not cause deaths in the United States. I want to emphasize that EPA has not based its 2022 IRIS assessment on personal exposure to formaldehyde. Personal exposure must be used because people spend most of their time indoors, where exposure levels are very low. In my Los Angeles office, my formaldehyde monitor reads between one and six micrograms per cubic meter. This level is below the EPA inhalation reference concentration for no human health effects of seven micrograms per cubic meter. Indeed, available human studies do not show health effects below 35 micrograms per cubic meter. Furthermore, the National Academies and EPA must recognize and quantify the extreme funding bias, publication bias, and citation bias against null findings. I have fully documented these biases in PM2.5 epidemiology, and I have evidence that the same is true for formaldehyde epidemiology. These biases distort all the findings in the IRIS assessment. As one example of these biases, my independent reanalysis of the ACS CPS-II cohort found no relationship between PM2.5 and total mortality, yet American Cancer Society officials will not confirm my findings and will not deal with the transparency and reproducibility issues which are fundamental aspects of the scientific method, and EPA did not cite my null findings as they are
now proceeding to tighten the PM2.5 NAAQS. Also, EPA has not focused on the evidence that there is no relationship between formaldehyde and total mortality and total cancer. Instead, EPA focuses on specific risks for minor cancers like nasopharyngeal cancer. Tragically, there is no compromise between scientists with different views. My major concern is regarding the loss of science in the United States. You must all watch the December 15 talk at Stanford by renowned theoretical physicist Lawrence Krauss; the title is "Is Woke Science the Only Science Allowed in Academia?" And you must also read the December 22nd article by David Strong entitled "The Sciences Are Going to Die." EPA's efforts to continually tighten air regulations hurt science and hurt America, and this committee should rethink what EPA is doing. Thank you very much for this opportunity. Okay, thank you, and next we'll move on to Kun Lu. If you're on, please go ahead. My name is Kun Lu. I'm a professor from UNC Chapel Hill. I really appreciate the committee providing multiple opportunities for me to provide public comments to contribute to the science-based formaldehyde risk assessment. Over the past 15 years, we have spent much effort to study the key issues related to formaldehyde carcinogenicity and risk assessment by developing sensitive formaldehyde-specific DNA adduct and DNA-protein crosslink biomarkers, which build on our significantly improved understanding of the chemistry between formaldehyde and biological molecules such as DNA and protein. I have previously provided detailed written comments about the IRIS formaldehyde draft. I will only focus on an important issue today: formaldehyde's chemistry and its interaction with proteins. Formaldehyde is one of the most extensively studied chemicals, but its chemistry and interactions with biological molecules are quite complex. The IRIS formaldehyde draft didn't develop an adequate understanding and discussion of formaldehyde chemistry and its fate
following exposure, which significantly impacts our understanding of formaldehyde carcinogenicity and risk assessment, especially for low-dose exposure. In addition to its metabolism through the ADH3 pathway, formaldehyde, as a highly reactive aldehyde, actually rapidly reacts with other biological molecules, including DNA and protein, and especially proteins. The interaction between formaldehyde and protein has an important implication for its fate and carcinogenicity at low-dose exposure. For instance, we have previously demonstrated formaldehyde can rapidly target protein residues such as lysine, and, more importantly, such protein binding limits the availability of formaldehyde to enter into the nucleus to cause DNA damage. Consistently, in our recent low-dose 28-day study, we were not able to detect any inhaled-formaldehyde-induced DNA damage at 300 ppb, 30 ppb or 1 ppb. In contrast, we confidently and robustly detected inhaled-formaldehyde-induced DNA damage in numerous other rodent assays at doses above 0.7 ppm. The reason why we see such dose-dependent adduct formation points to a strong possibility that formaldehyde may have a threshold to induce DNA damage. Given the high reactivity of formaldehyde with protein, formaldehyde binding with extracellular proteins and other abundant proteins may limit the availability of formaldehyde to enter into the cell to cause DNA damage, mutagenesis and carcinogenicity. This represents a significant data gap in understanding formaldehyde's low-dose carcinogenicity, and clearly further study is needed. I will stop here. Thank you. Thank you, and we'll move now to Heather Lynch. Hello, can you hear me? Yes, we can. Great, thank you. Hi, my name is Heather Lynch. I'm a health scientist with Stantec. Thanks for the opportunity again to speak today. In the 10 years since the National Research Council's
review of the 2010 draft formaldehyde IRIS assessment, EPA has been concurrently revising the formaldehyde assessment and overhauling the IRIS program to incorporate systematic review methods as requested by the NRC. The IRIS overhaul culminated in the release of the draft Handbook for Conducting IRIS Reviews in 2020, which was just finalized in December of 2022. During the October 12, 2022 public NASEM committee meeting on the 2022 draft formaldehyde IRIS assessment, a member of the committee inquired about the timing of the draft's development relative to the reforms made to the IRIS process. Drs. Kraft and Thayer confirmed the concurrent timeline, indicating that the formaldehyde assessment was the, quote, testbed for the IRIS handbook. During this discussion, Dr. Samet raised the question of whether the Cochrane Collaboration, one of the pioneers of systematic review, could provide insights into what should be done when new methodologies emerge during a review's often lengthy development. Indeed, the Cochrane Handbook recommends carefully considering evolving methods and best practices during the assessment, stating, quote, depending on the changes required, it may be appropriate to conduct a new review from scratch meeting current standards. If a new review is not needed, the Cochrane Handbook indicates that newly available information must be appropriately incorporated into the review in the same manner and using the same methods as the initial body of studies selected. Further, and perhaps more importantly, the Cochrane Handbook states that any changes to the methodology must be clearly documented. It's not entirely clear to what extent and when the new systematic review principles and methods were incorporated into the EPA's 2022 formaldehyde draft, but we do know that the draft was not restarted at any point in its 10-year development. Further, there was no systematic review protocol to document the changes. Finally, the 2022 draft clearly describes that different methods for literature
search and selection were employed for the pre-2017 phase relative to the post-2021 phase of the IRIS draft development. Overall, while some of the methods now described in the handbook were clearly incorporated into the 2022 IRIS draft, it seems unlikely that the entire review, given its start date in 2012, reflects current EPA approaches or, more importantly, current best practices for systematic review. Thus, EPA's systematic review methodology should be carefully reviewed by the NASEM committee if this is to be a state-of-the-science assessment that informs guidance and regulation at state and federal levels. At a minimum, a consistent and clearly described systematic review approach should be used to evaluate the evidence. Thank you. Thank you, and next we'll move to James Sherman. Are you on? Hello, my name is James Sherman. Can you hear me? Yes, we can. Oh, great, sorry. I'm a fellow in the toxicology and product stewardship team at Celanese. Previously, I described how the Annesi-Maesano et al. 2012 asthma study was misinterpreted in the draft assessment. Today I'll focus on two other studies that were incorrectly advanced for the asthma endpoint. First, Venn et al. 2003 was not designed to investigate the induction of asthma or exacerbated asthma responses; the metric used was wheezing. There were a number of study design flaws, some previously recognized by NAS, that are detailed in the written comments I will submit to you. I'll get to my point here. The EPA reevaluated Venn to conclude that formaldehyde likely causes asthma and, with high confidence, that formaldehyde exacerbates asthma control, as well as providing medium confidence in the asthma POD. In contrast, Venn et al. clearly stated in their manuscript, we saw no effect of formaldehyde on asthma risk. Such disconnects between what the peer-reviewed literature says and the determinations made in the draft assessment do not reflect good science. Secondly, there are a number of design flaws in the Krzyzanowski et al. 1990 study, some previously
recognized by NAS, that were not acknowledged in the draft IRIS assessment, such as the small numbers of children in the two highest exposure bands, incomplete reporting, and the inherent weaknesses of its cross-sectional design. I'd like to point out three critical points that I identified that were not previously recognized in the study. First, there was no dose response when asthma incidence was evaluated for all children, children exposed to ETS, or children not exposed to ETS. Secondly, although varying formaldehyde concentrations were characterized in the main room, the subject's bedroom, other bedrooms and the kitchen, the only comparator concentration used was the kitchen concentration. As all parents know, children aged between 6 and 15 years old do not spend a significant amount of their day in the kitchen. This singular comparison is not an appropriate comparator for judging cause and effect of a chronic disease in children. Thirdly, the analysis presented by Krzyzanowski included 301 children, although there were only 298 children in the study. When breaking out the children exposed to ETS versus those that were not, the number of children evaluated was 293 out of the 298 in the study. No explanation was provided for the extra or missing participants. Considering the low number of participants in the two higher exposure bands, the unexplained extra and missing participants in this analysis are quite concerning. Thanks for your service, and I hope these comments will be reflected in your peer review report. Thank you. What I'm going to do now is circle back through those who were not on when we came to your names and see if you have joined now. First was Girma Gebremariam. Are you on? No? Okay. And Mary Meribow? No. And Frederick Nundo? Let's see. Katie Stump? No. And Lee Yang? Also no. Okay, all right, due diligence here. I'd like to now draw this open session to a close. I want to thank those who joined for comments from the public. I want to
thank our EPA sponsors and our committee for this valuable discussion, and thanks to all who joined this information gathering session, as well as the National Academies staff. Thank you all, and goodbye.