Good afternoon, everyone. Thank you for joining us today. My name is Megan Lowry. I am a media officer with the National Academies of Sciences, Engineering, and Medicine. Thank you for joining us this afternoon for a webinar on the report that was just released last week, titled Advancing the Framework for Assessing Causality of Health and Welfare Effects to Inform National Ambient Air Quality Standard Reviews. You can now download a copy of the report and other supporting materials at www.nap.edu, and that link will also be available through a QR code throughout the presentation. A recording of this webinar will be available in the coming weeks. For those of you who are not familiar with the U.S. National Academies of Sciences, Engineering, and Medicine, we are private nonprofit institutions that provide independent, objective analysis and advice to the U.S. to solve complex problems and inform public policy decisions related to science, technology, and medicine. For each requested study, panel members are chosen for their expertise and experience, and they serve pro bono to carry out the study statement of task. The reports that result from the study represent the consensus view of the entire committee and must undergo external peer review before they are released to the public, as did this report. Before I introduce a few members of the committee that wrote the report who are joining us today, I'll just go over a few quick reminders. Please note that this webinar is scheduled to last one hour, so we'll start off with the presentation first and then we'll open it up to any questions you may have once the presentation is over. For questions today, we will be using Slido. To ask a question, just type it into the box below or to the side of your video player, and you can submit a question at any time during the presentation.

So now I'll introduce the committee members who are here with us today. We have Ted Russell, the Howard T. Tellepsen Chair and Regents Professor of Civil and Environmental Engineering at the Georgia Institute of Technology; Elizabeth Stuart, Executive Vice Dean for Academic Affairs and Bloomberg Professor of American Health at the Johns Hopkins Bloomberg School of Public Health; and Richard Smith, the Mark L. Reed III Distinguished Professor of Statistics and Professor of Biostatistics at the University of North Carolina at Chapel Hill. And with that, I will turn it over to Dr. Russell.

Thanks so much, Megan. As Megan said, my co-chair and Richard Smith and I are presenting today, giving you a public briefing on the report on advancing the framework for assessing causality of health and welfare effects to inform NAAQS reviews. Next slide. Next, there we go. As said, I'm Ted Russell from Georgia Tech, joined today by Liz Stuart and Richard Smith. Next slide. But obviously we were not the only three people who were part of developing this report. I have listed here the report committee in its entirety, and it wasn't just the wonderful committee members; it was also the NASEM staff who were very much part of being able to provide this report and this briefing today. So I wanna thank everybody, the committee and the staff, for making this all possible. Next slide, please. So as said, the committee was given a statement of task that was worked out between EPA and NASEM, and it's briefly given here. I'm not gonna go into the details, given that we only have one hour today.
And I wanna make sure that we have as much time for the briefing and the questions and answers afterwards. But fundamentally, what EPA requested of us was to investigate the framework that's being used as part of the NAAQS process for determining causality, linking exposures to various health or welfare outcomes. And this is part of the EPA NAAQS review that they do for each of the different criteria pollutants. There are more details in terms of the specific questions that we were asked, and specific tasks, on the right-hand side. And we'll go through those as we give this briefing to you today. Next slide, please. So as I noted, and as the task requests, we were asked to look at the framework that's used for the causality determinations as part of the integrated science assessment. The ISA, as we will call it, is part of the process, the NAAQS review, that's done for each of the different criteria pollutants. And an important thing about this is that it is very much the science-based determination. And it comes near the beginning of the process, not the very beginning. And what's actually not shown here, but we'll refer to it, is there's also a preamble to the integrated science assessment that lays out the framework. And that's really what we studied. We actually looked through the whole process, how it's written up in the framework, as well as how it's used in the ISAs themselves. Next slide, please. So the approach that we took to investigating this was very much to study the framework as it's laid out in the preamble and how it's been employed in the recent ISAs. We also, as part of this, looked at methods from nine different frameworks that are used by various other committees and panels, both nationally as well as internationally, to see how they compare, to see the advantages and ways in which they could inform the process. And we also engaged a variety of experts from around the stakeholder community at multiple levels, both agency as well as private sector, and they had a variety of perspectives. We included individuals at EPA, critics, et cetera, looking both at people who use it directly as well as those who have been involved in criticisms and comments on it in the past. Next slide, please. So looking through our task, one of the first questions was whether the approach used is appropriate. And one of our conclusions was that seldom will you actually have a single study that will be definitive. So given the variety of different types of evidence that are used in making this causality determination between an exposure and an outcome, a weight of evidence approach allows EPA to look at and draw conclusions from a variety of different disciplines. And this is what's required by the Clean Air Act. Having said that, we did find that increased transparency in how you actually apply the weight of evidence approach would provide increased confidence and understanding in how these causal determinations are made. And it would also help support other conclusions made as part of the ISA as well. Next slide, please. So this is all put together, and when you look at the formal framework as it's actually employed, there are five different causal categories, as shown here. And they range from not likely to be a causal relationship at the bottom to a causal relationship at the top.
The specifics of those are given on the right-hand side, but these five levels are intended to indicate the strength of the relationship and the uncertainty. Next slide, please. A second conclusion was that we did find that the five causal categories that are used, which I just showed, are able to characterize the strength and the uncertainties in the causality determinations as made in the ISA assessments. The granularity is both meaningful as well as defensible. And we also find that the likely causal category is distinct from the other categories as well. Next slide, please. One of the questions we were specifically asked was whether the framework itself is appropriate for addressing both health and welfare effects. And we find that the same framework with the same five categories can be used, and it is adequate for guiding causal determinations for both health and welfare, as long as comprehensive and well-defined scientific questions are laid out at the beginning, and this could be as part of the IRP or some other part of the process, such that the questions are appropriate for use in developing the ISA and in bringing forth those causal determinations. Next slide, please. The ISAs can be more effective in addressing the health and welfare effects primarily by providing extra guidance in the framework to make sure that the causal determinations are adequately supported and explained, and that the relevant exposure patterns for both health and welfare are addressed, given that the types of exposures can be different between health and welfare. These should be modified to make sure that they are relevant to the scientific questions that are important for the NAAQS review process. And the framework should also look toward the future, recognizing that the future may differ from the past, so it should provide guidance on how to consider future changes in terms of how atmospheric processing may change the exposure-to-outcome processes. So with that, I'd like to pass it off to Richard Smith.

All right, thank you. Next slide, please. So as Ted already explained, the committee supported the overall weight of evidence approach that EPA adopts and supported the five causal categories, but did make a number of recommendations related to the transparency of the process. And what I'm gonna go through are the first couple of recommendations that the committee made. One issue that the committee looked at was heterogeneity in response. Now, to give you a bit of background here, the Clean Air Act, which is the act of Congress that essentially lays out the whole process for the EPA to follow, emphasized that in addition to setting standards that are requisite to protect the public health, that's the wording from the Clean Air Act, the EPA should give particular attention to sensitive subgroups, which is typically interpreted as meaning people with health conditions or the very old or the very young or things like that. And that raises the question of heterogeneity in response, which would mean that different groups of people might respond to air pollution in different ways. And the committee, what we did, we were primarily focused on the preamble, but we also looked at some of the recent ISAs to see how the preamble had actually been interpreted in practice.
And we felt there would be some scope for looking at this heterogeneity question more closely. So our first specific conclusion is that this question of heterogeneity in the response of individuals or populations complicates the causal assessment. It's not a simple yes-or-no question: is this pollutant causing health effects or not? And the current framework separates out the description of vulnerable groups, and this also applies to ecosystems and species, that's the welfare side of our report, which potentially obscures the understanding of causal relationships. So next slide, please. This leads into our first recommendation, which is that EPA should include guidelines in the framework regarding how heterogeneity in exposure responses is considered, to ensure causal determinations fully account for evidence of effects in sensitive groups of humans, other species, and ecosystems. And to the extent practical, the framework should provide explicit guidance on how to do this. Considering only the average population or a broad ecosystem effect can obscure causal relationships that exist for specific sensitive subgroups or subspecies or communities or ecosystems. Next slide, please.
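As a rough numerical illustration of that last point: the numbers below are hypothetical and not taken from the report or any ISA; they only show how an effect estimate averaged over a whole population can look modest even when a small sensitive subgroup responds strongly to the same exposure.

def pooled_rate(rates, weights):
    """Population-average event rate: subgroup rates mixed by population share."""
    return sum(r * w for r, w in zip(rates, weights))

# Hypothetical numbers: 90% of the population is relatively insensitive and 10%
# is a sensitive subgroup (say, children with asthma). Rates are events per
# 1,000 person-years at low versus high exposure.
weights  = [0.9, 0.1]
low_exp  = [2.0, 10.0]
high_exp = [2.1, 15.0]

subgroup_rr = high_exp[1] / low_exp[1]
overall_rr = pooled_rate(high_exp, weights) / pooled_rate(low_exp, weights)

print(f"Relative risk in the sensitive subgroup: {subgroup_rr:.2f}")  # 1.50
print(f"Relative risk averaged over everyone:    {overall_rr:.2f}")   # about 1.21

In a weight of evidence review, looking only at the population-average number here would understate what is happening in the sensitive subgroup, which is the kind of heterogeneity the recommendation asks the framework to address explicitly.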
So another topic that the committee considered was how EPA actually, oh, I'm sorry, I should have, let me go back a second. I just wanted to clarify a bit the question of selection bias. So one issue here that the committee drew attention to is developing methods to systematically assess bias due to lack of representation. In other words, this is part of the process that we would recommend EPA go through to address the questions in recommendation one. Okay, let's go on to the next slide, please. Thank you. So the next issue that the committee made a recommendation on was study quality evaluation. So the issue here, I mean, a very broad-brush, one-sentence description of what an ISA does is that it collects together all the published studies that address questions related to air pollution and health effects, and then tries to evaluate what they say in order to come up with a weight of evidence evaluation. So a critical part of this is how does EPA select the studies in the first place? And having been on the committee, we heard complaints from time to time, people saying, well, why was one study omitted, or why was this study included, et cetera. So we thought that this was one issue where EPA could be a bit more transparent about its process. And what we want to say here is that the preamble does address this question. It highlights the sort of features that would be important in determining whether a study is relevant. But it doesn't give much information about specific studies, why one study was selected and another one wasn't. So if we could go to the next slide. So recommendation two is to include, in the causal determination framework used for developing ISAs, a set of foundational study design attributes and analysis approaches to be considered when selecting and evaluating studies, and to include discussion of what attributes were examined and how the resulting examination of individual studies actually influences which studies are selected. Clarify and systematize, in the framework for causal determinations, the aspects of a study that will be considered in assessing its relevance and quality. We feel this could improve the transparency and replicability of the process. Next slide please. So this is again, following on from that: formalize criteria for study validity and the individualized use of tools for each ISA to implement those criteria. We noted that a number of formal tools have been developed, with acronyms such as HAWC, PECO, and HERO. I'm not gonna try to explain what those are right now, but they're all tools that have been used by EPA. There's no one tool that does everything. I mean, you have to consider a variety of approaches, and we wouldn't recommend these tools as being decisive benchmarks, but we think they can be used to enhance the process, and we feel the continued use and refinement of those tools would improve clarity in the study selection and evaluation process. So at this point, I think I will hand over to Liz. Thank you.

Great, thank you so much. Next slide please. Okay, I'm going to continue with more on some of the recommendations, or the rest of the recommendations, that the committee came to. And this one really builds on one of the previous recommendations that Richard just talked through. That recommendation talked about study design attributes in general. This recommendation then builds on that to be more specific around something known as confounders. And this was something called out particularly in the statement of task, with EPA looking for us to comment on confounding and confounders. And in particular, the committee concluded informally, not a formal conclusion, but on the right-hand side of the slide, that the ISA framework currently recognizes co-pollutants, so other pollutants, not the one of interest in a particular ISA, as confounders. So it does recognize that potential confounding, but it is not explicit about other types of confounding, things such as weather effects, other environmental effects, and socioeconomic or demographic differences. And the challenge here is that confounders such as these can bias results in individual studies, where an exposure and an outcome might not really be causally related to one another; it might be that there are these confounding factors that are making them appear causally related to each other, but it's really due to confounding. So our recommendation is that the framework provide explicit guidance for assessing the approaches used in individual studies to account for important and potentially biasing confounders, and then also how the strength of those approaches, and how they're used in an individual study, might influence the weight of evidence considerations in the causal determinations. So, examining each study and how well it is dealing with confounders, and then thinking about that and articulating how that weighs into the weight of evidence approach. Next slide, please. And so in particular, to expand on that a bit more, the committee concluded that guidance for this weight of evidence approach could take into account, again, how well a study articulates concerns about confounding and what the relevant confounding factors are. And here I will highlight that, as discussed in the report, there's not going to be one set of confounders that are the official set of confounders for any study. Each study that is used in this ISA process might be examining different research questions, with different exposures, different outcomes, and different populations, whether it's health or welfare or other areas.
And so it's very difficult upfront to say that these are the five confounders that need to be dealt with in any one study. And so the idea here is that a study should provide some framework, for example a conceptual framework, for the research questions that that study is examining, and then what potential confounders might be relevant to that research question in that study. Once that is articulated, then it's important for the study, and for the assessment of the study, to examine whether the relevant confounders are observed and adjusted for in the study design and analysis approach, and whether that's done using scientifically meaningful and appropriate statistical methods. So again, step one is identifying relevant confounders, and then step two is examining how well that study deals with those potential confounders and whether it is adjusting for the important ones. And then the third bullet here acknowledges that in many of these studies, when an exposure is not randomly assigned, there may also be unobserved confounders. So there might be an important confounder that is not observed in a particular data set. And so the framework could think through how well a study examines the robustness of its results to an unobserved confounder, or whether that unobserved confounder might change the study conclusions, and include in the framework explicit discussion of how that should then influence that study's contribution to the weight of evidence conclusions. So again, thinking about what the relevant confounders are, how well they are adjusted for, and then how much the results are probed and explored to see how robust the conclusions are. Next slide, please.
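To make the adjustment step concrete, here is a minimal sketch of what "observed and adjusted for in the analysis approach" can look like in the simplest case. The variable names (pm25, temperature, admissions) and the simulated data are hypothetical and chosen only for illustration; the report does not prescribe any particular model, and real studies use far richer designs.

import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated data in which "temperature" confounds the exposure-outcome relation:
# it drives both the exposure (pm25) and the outcome (admissions).
temperature = rng.normal(0.0, 1.0, n)
pm25 = 0.8 * temperature + rng.normal(0.0, 1.0, n)
true_effect = 0.3
admissions = true_effect * pm25 + 0.5 * temperature + rng.normal(0.0, 1.0, n)

def ols_slopes(y, X):
    """Least-squares coefficients for the columns of X, with an intercept added."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

unadjusted = ols_slopes(admissions, pm25.reshape(-1, 1))[0]
adjusted = ols_slopes(admissions, np.column_stack([pm25, temperature]))[0]

print(f"Unadjusted exposure estimate (confounded):    {unadjusted:.2f}")
print(f"Estimate adjusted for the observed confounder: {adjusted:.2f}  (truth set to 0.30)")

The unadjusted estimate comes out noticeably larger than the effect built into the simulation, while the adjusted one recovers it; the committee's point is that the framework should ask how each study identifies such confounders, whether they were measured and adjusted for, and how that affects the weight the study carries.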
Another aspect of study design that was alluded to before relates to what are known as reproducibility and replicability. And these relate also to the general topic of transparency of individual studies. So the committee's conclusion in the report was that EPA recognizes the importance of replicability of individual studies when making causal determinations, but that the current framework does not provide explicit guidance regarding how the potential reproducibility and replicability of individual studies should affect the influence of those studies on the causal assessments. Next slide, please. So to be a little bit more explicit, because these terms might not be familiar to all viewers, it's important to define them. On the right-hand side, consistent with other National Academies reports and other literature, we defined replicability as indicating that consistent study results are observed across studies conducted with different data. And in some ways, this is inherent to the weight of evidence approach, which is looking across a range of studies to see what the weight of evidence is saying about some relationship. Reproducibility refers to the ability to obtain the same results given the same data. So for example, does a research team make available the code used to run their analyses, or is the data available? How easy would it be for another research team to pick up the same data and obtain the same study results? So on the right, the committee concluded that EPA should investigate how study transparency, reproducibility, and replicability should influence the study quality and relevance assessment used in the weight of evidence approach, and provide guidance in the framework for how those aspects should be assessed when considering individual study quality and relevance. Again, that's consistent with the left-hand side of this slide. The official recommendation is to develop guidance for the framework to assess individual study documentation of data, methods, and assumptions, and again, how that assessment informs the influence of that study in the weight of evidence approach. Next slide, please. Again, moving first to the right-hand part of this slide, one topic that the committee discussed, and as Richard alluded to, is that there are some areas with very quantitative, what are known as formal risk of bias tools, that are used to turn the study assessment into a quantitative rating scale. The committee concluded that there was no evidence to show that application of these more formal methods for weighing evidence provides more reliable or better explainable results than the consensus approach incorporating expert judgment. So again, we concluded that the weight of evidence approach, which combines assessment of the scientific literature with expert judgment weighing that literature, is appropriate, and that there is no evidence to show that a much more quantitative rating scale is appropriate at this point in time. That said, our recommendation recognizes that there are advances in these areas, and there are many other scientific contexts and related areas, including around the world, that are doing similar weight of evidence type approaches, and that some of them are exploring different ways of conducting this process. And so our recommendation is that EPA monitor research in the scientific literature on evidence integration and the evolution of other frameworks used to assess causality, basically to continue to monitor that and to see if there are any emerging approaches or characteristics that might be adapted to improve the causal determinations over time, to improve the framework over time. So again, at this point in time we were not recommending a movement towards that, but recognition that this is an evolving area and that it will be useful for EPA to continue to monitor it. Next slide please. This slide goes into that in a little bit more detail, again highlighting that there are emerging areas of research that may be relevant in the coming years, things such as formal meta-analysis, formal risk of bias tools, systematic review, and decision analysis. And the report again recognizes that we are not recommending moving towards these formal risk of bias tools or quantitative rating approaches, but that over time it might be worth examining and considering all these different advances and what might be useful in the future.
And again, I think the last point on this slide is that we also very much recognize that particular care should be taken, if EPA moves in this direction, to consider the health and welfare endpoints and whether there are different considerations for those, given the range of types of studies that are used for those different areas. Next slide please. And another conclusion that moves towards a recommendation is that, given the broad range of topics covered when determining causality for criteria pollutants, access to a broad range of expertise both within and outside the EPA, including expertise in emerging areas such as causal modeling and inference, is needed throughout the causal determination process to ensure incorporation of the latest scientific knowledge. Next slide. So the recommendation on the left-hand side is to articulate in the causal determination framework a clear process for identifying and incorporating the necessary expertise for each step of this causal determination process, including the development of the questions, the individual study selection and assessment, and then the final review. And the idea here, again related to some of the transparency themes you heard earlier, is that providing systematized guidelines in the preamble for identifying the disciplines and perspectives needed for rigorous, objective consideration of all the science, the range of sciences, will increase confidence in the resulting causal determinations. It speaks to the importance, in a weight of evidence approach, of having the right experts involved to ensure that the weighing of evidence is being done appropriately. Next slide. To wrap up, and then we'll turn to any questions that have come in. Overall, the committee came to the conclusion that the fundamental structure, this weight of evidence approach based on a review of the scientific literature, extensive internal and external review, and the five causal categories as currently described in the preamble, is able to support the causal determinations made as part of the NAAQS review process. That assumes that comprehensive and well-defined scientific questions and pollutant exposure-outcome relations are identified, including for key sensitive subgroups, as you heard from Richard, and that experts with the range of perspectives needed are engaged, including in emerging areas. We also, as you heard already, concluded that the framework needs to include more guidance for addressing uncertainties related to heterogeneities in the exposure response of particular subgroups, sensitive ecosystems, or groups of individuals. And the framework could be modified so that the ISAs address the relationships between scientifically relevant and well-defined exposure metrics and health and welfare endpoints. And then finally, keeping in mind new methods for evidence integration that might improve the weight of evidence approach. So, final conclusions: the weight of evidence approach is an appropriate way for EPA to be assessing the scientific literature and then coming up with the causal determinations, but there were some of these aspects, and particular recommendations that we came to, that could help improve the process and increase transparency. So thank you very much, and I will turn back to Megan.

Great, well, thank you very much to all three of you for that great presentation. As Liz said, we are going to open it up to questions now.
So as a reminder, to submit a question you can just type it into the box below your video player. All right, so our first question today is going to be: as other disciplines are investigating causality, what can they learn from your report?

I can start here and then see if the others have anything to add. I think one of the things that was striking, I work across public health and education and a range of areas, is that many fields are in a situation like this, where they're trying to draw conclusions from a wide range of evidence. That evidence might include studies of biological plausibility in lab settings; it might include epidemiologic studies following individuals over time; it might include ecological studies where we have data at large geographic scales. So often we're trying to draw conclusions where we're trying to combine all these very diverse sources of evidence. And so I think many other areas of science are going to be in similar situations, and they would be able to draw lessons from this report for their own use, in terms of how to combine these different, complex, and wide-ranging sources of data and evidence.

Great, thank you, Liz. Anyone else have anything else to add on that one? All right, our next question is: you say in your report that the EPA should engage a range of expertise. What expertise do you mean here?

Should I come in on this one? So I think this whole question of causality, it's something quite widespread in the scientific literature these days. Studies by philosophers, social scientists, biostatisticians, epidemiologists, there are a number of groups involved. And these groups could be engaged to supplement perhaps the more traditional areas that EPA consults. I mean, they've always consulted epidemiologists, because epidemiologists specifically study the relationships between environmental factors and health outcomes. They also engage, sorry, exposure scientists, essentially people who try to assess what level of air pollution people actually are exposed to. And there's also a large literature on mechanistic studies of air pollution: what are the processes by which air pollution emitted from various sources gets transmitted through the atmosphere? So in effect, what we're saying is that we think those areas of expertise could be supplemented, perhaps by people with more specific expertise in these causality questions, but, I mean, also more generally the whole process of science is becoming broader and more widely understood. So we think EPA should continue to keep an eye on what other areas of expertise should be brought into these assessments.

And I wanna second what Richard just said: this recommendation actually has to do with having a broad range of expertise in each part of the process, in terms of the type of information and the type of expertise that's necessary to make these causal determinations. So it very much goes beyond causal inference scientists, though it very much brings those in, because that is such a rapidly evolving area, as well as the more traditional areas. And this can't be done by just a handful of individuals in the agency or in the review process. It really does have to be a broad range of individuals with the range of expertise that's necessary throughout the whole process.

Great, thank you both. Our next question is, can the committee elaborate on how the evaluation of an unobserved confounder does not lead to a speculative discussion of individual studies?
Yeah, I'd be happy to take that one, at least to start. Let me first elaborate a little on what the committee was thinking about when we were talking about that. So again, as I mentioned earlier, in any given study there might be a concern that some relevant confounder is not measured, weather patterns or something else. There are statistical methods that can be used, especially for certain study designs, that basically ask the question: how strong would that confounder have to be in order to change the study conclusions? And so it turns this general, vague worry about an unobserved confounder into a more concrete question about the plausibility of such a variable that would actually be strong enough to change conclusions. As an aside, the first example of this was actually done in looking at the relationship between smoking and lung cancer, where, actually it was not Fisher, it was Cornfield responding to Fisher, who was able to show that such an unobserved confounder was just implausible, that it would have to be so strongly related to smoking and to lung cancer that it wasn't plausible that it existed. And so the idea here, again, is to turn a vague concern about an unobserved confounder into a conversation about how plausible it is that such a variable exists and would be strong enough to change the conclusions. And so again, this is where in some sense a weight of evidence approach makes sense, because it's the experts. The data, of course, can't tell us about things that are not observed, but experts, with their knowledge of data and other studies and just the science, can help assess how much to worry about it, essentially, and these statistical tools can help provide a framework for thinking about that. So it isn't just a vague "oh, we might be worried"; there are tools that can help that conversation.

Maybe I can add something there. I think the questioner was particularly concerned that this confounding question might lead EPA to go into a great deal of detail about maybe some specific studies, and whether we were implying earlier that that would be a bad thing. Maybe I can just clarify that point a little bit. We're not saying anywhere that EPA can't go into great detail about some individual study that it thinks is particularly important, just that it would generally not be appropriate to base the whole conclusion on saying, well, there's this one particular study out there that's so convincing that nothing else matters. We don't feel, in general, that's the case. But on this question of unobserved confounding, I mean, there is an issue that, if EPA incorporated this more explicitly as an assessment tool, it might be that they would say, well, maybe this particular study isn't so great because they didn't consider a particular confounder, whether it's observed or unobserved; you can imagine confounders that should have been taken into account but weren't. And we think this could be part of the process by which EPA assesses its individual papers.
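One concrete version of the sensitivity-analysis idea described here is the E-value of VanderWeele and Ding (2017), mentioned only as an example of this class of tools, not as something the report or EPA has adopted. It answers, on the risk-ratio scale, how strongly an unmeasured confounder would have to be associated with both the exposure and the outcome to fully explain away an observed association. The observed risk ratio below is hypothetical.

import math

def e_value(rr: float) -> float:
    """Minimum confounder strength (risk-ratio scale, with both exposure and
    outcome) needed to fully explain away an observed risk ratio rr."""
    if rr < 1.0:          # for protective associations, work with the inverse
        rr = 1.0 / rr
    return rr + math.sqrt(rr * (rr - 1.0))

observed_rr = 1.25        # hypothetical exposure-outcome association
print(f"E-value for an observed RR of {observed_rr}: {e_value(observed_rr):.2f}")
# An unmeasured confounder would need associations of roughly 1.81 with both the
# exposure and the outcome to fully account for an observed RR of 1.25; experts
# can then judge whether such a confounder is plausible, which is the Cornfield-
# style argument described above.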
All right, thank you. We actually have a follow-up question to that last question, which is: is this related principally to epidemiology? How does the committee anticipate analysis for hundreds of studies?

I think that's where I would just really echo Richard's comments there. That's not gonna be feasible. It's more thinking about, given how the studies were done and what the studies present, EPA could assess what is presented in those studies. Not that EPA would need to be doing this analysis itself, they wouldn't have bandwidth for that, but, for example, a study that is very thoughtful about this and probes this on its own could be given more influence in the weight of evidence conclusion, whereas a study that really is not very thoughtful about confounding, either observed or unobserved, might have less influence. And so again, stressing that we recognize EPA's role here is not to be conducting individual studies, but the idea is that these aspects of study design can be used to help think through the study quality and relevance for the ISA process.

Great, thank you. Our next question is, are there any major differences between how the framework should be applied to health versus welfare effects?

So if I might, actually, the framework can be the same. And in terms of using a weight of evidence process, that would also be similar. We very much understand that there can be very major differences in the types of studies that might be performed to essentially assess causal determinations, but it would still involve considering the range of science, be it laboratory science or field science in this case, and how this is all put together. You would look at the quality of the science, how they controlled for confounding, and go through the same process involving a large number of scientific experts, and again, integrating the range of evidence across the range of studies to make these determinations. So the types of studies that would be involved can be different, but again, it's involving the range of expertise that's appropriate and informative in making these determinations.

Great, well, thank you so much. Is there anything else that you are hoping the audience will keep in mind as they read through your report, or any final points that you wanna make before we end today's session?

Well, okay, maybe I can just suggest a couple of things. I think this was, I will say, it was a privilege to work with this committee, to be a member of it and to work with the other committee members. And I think overall it was a pretty high-powered committee. So I would recommend looking at the report; although the committee had this very specific statement of task, which is essentially to advise EPA about its weight of evidence framework, along the way we looked at a number of other issues, such as what are the new developments in causal inference and epidemiology generally that are being taken into account. One, I mean, something that wasn't explicitly part of the statement of task, but that we highlighted a bit in the report, was EPA's treatment of multiple pollutants, for instance, which at the moment EPA largely considers one at a time; largely because of the Clean Air Act, it kind of has one standard for particulate matter, another one for ozone, another one for sulfur dioxide, and so on. There certainly is interest in, well, how do these different pollutants interact? And there's discussion of that in our report, and generally discussion of how modern thinking about causality affects science generally. So I would recommend people look at our report and don't think of it just as addressing the question of whether the next EPA standard is being correctly assessed.
That obviously was a major focus of it, but also think about how this affects how we think of causality in science generally.

Yeah, thank you, Richard. I totally agree with your comments. And just to reiterate one of them, the really wonderful range of expertise on the committee. I certainly learned a lot, and I think it comes through in the report that we had experts in ecology, we had statisticians, we had epidemiologists, and we had environmental exposure experts. And I think that really came through in the report being comprehensive and taking into account these different perspectives. It's unusual to get that range of scientific disciplines in one place thinking about the same thing. And so we just encourage people to look at it with that lens, recognizing the diversity that went into it.

All right. Well, thank you all so much for taking these questions. This was a great start to the conversation that I'm sure will be ongoing about this report. I will note that a recording of this session will be available on the National Academies' website in the coming weeks. And once you exit this webinar, you'll be redirected to our report page. So with that, I would like to thank our speakers and thank our audience for joining us today, and I hope everybody has a wonderful evening.