Good afternoon, everybody. My name is Eric Tate, and I am one of the co-chairs of this committee on utilizing advanced environmental health and geospatial data and technologies to inform community investment. I'm a professor of geography at the University of Iowa, and I focus on indicators of social vulnerability to hazards. My co-chair is Harvey Miller. Harvey, can you introduce yourself with your affiliation and your main area of expertise, and then the rest of the committee can do the same? Sure thing. Harvey Miller, I'm a professor of geography and director of the Center for Urban and Regional Analysis at the Ohio State University, and my areas are geospatial science, transportation, and sustainability. Let's go with Monica. Sorry, I have some hearing loss, and I can't see the captions, so I'm a little delayed in trying to hear you all. So are we just doing introductions? Yes, brief introductions. I'm Monica Unseld with Until Justice Data Partners. We do environmental justice research, and we partner with communities to do their own research in Louisville, Kentucky. Lauren. Hi, I'm Lauren Bennett. I'm a program manager for spatial analysis and data science at Esri, and I focus on spatial statistics and spatiotemporal analysis. Walker. Hello, I'm Walker Wheeland. I'm a research scientist with the Office of Environmental Health Hazard Assessment, part of the California Environmental Protection Agency, and I develop environmental health screening tools. Kathleen. Hi, I'm Kathy Segerson. I'm a professor in the Department of Economics at the University of Connecticut, and my field is environmental economics and environmental policy. Ibrahim. Hi, everyone. I'm Ibrahim Karai. I'm an assistant professor of population health at Hofstra University. I study the physical and mental health impacts of injuries and disasters on socially vulnerable populations. Marcos. Hi, everyone. I'm Marcos Luna.
I'm a professor of geography and sustainability at Salem State University in Salem, Massachusetts, and I'm also the coordinator of the Geo-Information Science Graduate Program there. And I work with communities using geospatial and other techniques to address environmental investment needs. Jay. Hi. I'm a professor of geography in the Department of Sociology and Anthropology at the University of Texas at El Paso, interested in applying geospatial tools and a variety of quantitative methods for analyzing environmental and social injustices. I'm also serving as a member of the US EPA Science Advisory Board and the new EPA Environmental Justice Science Committee, and chairing the EPA's EJScreen Scientific Review Panel. Okay, I think I got everybody on the committee. We also have staff members from the National Academies who are helping to coordinate this meeting. This includes Samantha Maxino, Anthony DePinto, and El Shane Orr. Amir Robinson is helping produce this webinar. So I'm gonna give you a brief roadmap of where we're headed today in this session. We're gonna have presentations from two groups in the federal government. We'll hear from the Centers for Disease Control and Prevention, the CDC, and the US Environmental Protection Agency, the EPA, talking about their tools for looking at burdens and disadvantage. Afterwards, we'll have some time for the committee to ask clarifying questions about each presentation. At 2 p.m. Eastern, we're gonna have a 20-minute break, and we'll return for a panel discussion between the presenters from the CDC, EPA, and CEQ and the committee. Throughout this process, people can submit written comments online; we'll provide a link in the chat. And as always, written comments are welcome through the study website, and we'll put that link in there as well.
Just a disclaimer that I should read out: any conclusions or recommendations made by individuals during this event should be considered opinions of those individuals and should not be considered conclusions or recommendations issued by this committee or the National Academies of Sciences, Engineering, and Medicine. All right, so that concludes the intro. I'd like to welcome Ben McKenzie. He's a geospatial epidemiologist and coordinator of the Environmental Justice Index at the CDC and Agency for Toxic Substances and Disease Registry. Ben, welcome. Thank you so much, Eric, and thanks for that introduction. Let me go ahead and share my screen, and I'll start off by sharing a little bit more about the Environmental Justice Index. Can someone confirm that they can see my screen? Yes. Perfect, thank you so much. So again, I'm Ben McKenzie, a geospatial epidemiologist with the Centers for Disease Control and Prevention and the Agency for Toxic Substances and Disease Registry. I identify as a white man. My pronouns are he, him, his, and for descriptive purposes, I'm wearing a blue dress shirt and a black jacket, joining through Zoom with a blue CDC-ATSDR background behind me. So I'll be presenting to you all today on the Environmental Justice Index. CDC and ATSDR, in association with the U.S. Department of Health and Human Services Office of Environmental Justice, developed the Environmental Justice Index, or EJI, as a tool to help address environmental injustice and the health disparities linked to injustice by measuring and visualizing where U.S. communities are facing the cumulative impacts of environmental burden on their health and well-being. For the purposes of our tool, we define cumulative impacts as the total harm to human health that occurs from the combination of environmental burden, pre-existing health conditions, and social factors.
The EJI was publicly released on August 10th, 2022, with the launch of a website at eji.cdc.gov, as well as a publicly available mapping tool called the Environmental Justice Index Explorer, which allows users to interact with EJI maps and data, and I'll show a little bit more about that later in this presentation. I want to emphasize that what the EJI does is provide a single score that distills data on environmental, social, and health factors into an understandable measure, a measure that can be used for context and comparison when thinking about how to address injustice and promote health. And health is front and center in how we developed the EJI and in how we see it being applied. I also want to make sure to mention that the EJI measures these cumulative impacts at the community level, visualizing the relative impacts of injustice on health for census tracts across the United States. And over here on the right, you can kind of see what that looks like for DeKalb County, Georgia, which is where I live and work. So this map, taken from our EJI Explorer, shows what level of relative cumulative impacts on health might be facing communities in my county. Tracts in darker shades, those with high relative EJI rankings, represent communities that might experience more severe impacts on health relative to the rest of the country. The EJI is the first national place-based tool that's designed specifically for the purpose of measuring cumulative impacts of environmental burden on health, doing this, as we say, through the lenses of human health and health equity. But I also want to emphasize that the EJI wasn't developed in a vacuum. It really builds on existing tools like the CDC-ATSDR Social Vulnerability Index as well as state-level environmental justice screening tools like California's CalEnviroScreen tool.
The EJI is adapted from the environmental justice screening method, a method that's been used by governments, scholars, and community groups to produce cumulative impacts screening tools that combine the best geospatial data available for their jurisdictions. And broadly speaking, this environmental justice screening method combines data representing environmental, social, and health factors that contribute to overall impacts on health. So the first EJSM, or environmental justice screening method, tool officially adopted by a state government was CalEnviroScreen, which was first launched in 2013 and which I know many on the committee are very familiar with. CalEnviroScreen provides a composite spatial index that combines 21 indicators, or individual measurable factors, that contribute to cumulative impacts. CalEnviroScreen uses indicators that represent both aspects of cumulative pollution burden and indicators that represent aspects of population characteristics, characteristics that make people more vulnerable to the health effects of pollution. And this method of combining environmental, social, and health factors together to highlight cumulative impacts has been really popular among environmental justice scholars and advocates. I know many on the committee will have read some of the excellent calls to action by environmental justice scholars like Charles Lee and organizations like UCLA's Luskin Center for Innovation, calls to action which really emphasized the importance of taking a holistic view in thinking about injustice and health. And the popularity and usefulness of this environmental justice screening method have led state after state to develop their own tools following California's example. States like Washington, Colorado, Michigan, and more have developed their own environmental justice mapping and screening tools that measure cumulative impacts, each using a mix of state-specific and national datasets.
So with this precedent in mind, we set out to create a national-level composite spatial index to measure cumulative impacts by adapting the environmental justice screening method. We began by developing a theoretical framework for our tool, using the environmental justice screening method as our basis and with a focus on creating a tool that measures cumulative impacts on health and wellbeing. We decided early on that the framework of our index, its building blocks, if you will, would be health vulnerability, environmental burden, and social vulnerability. And each of these three components, which we call modules, is designed to be a composite index in and of itself. In fact, both the social vulnerability and environmental burden modules of the EJI are themselves adapted from other indices produced by CDC and ATSDR, namely the Social Vulnerability Index and the Environmental Burden Index. This means that each module within the EJI represents a distinct measurable concept that contributes to those overall cumulative impacts on health. Like CalEnviroScreen, we used a percentile ranking method to normalize and to aggregate our data at the census tract level, meaning that indicator scores within the EJI are represented along a zero-to-one scale: a score of 0.85 for a given census tract on a given indicator means that the tract in question ranks higher for that indicator than 85% of all other census tracts in the nation. This percentile ranking method is relatively simple and effective and makes the tool easy to communicate and to adapt to local needs, something that we felt was really important for our tool, knowing that injustice occurs locally.
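As an editorial aside, the percentile-ranking step described here can be sketched in a few lines. This is an illustration of the general technique, not the EJI's actual code, and the indicator values are hypothetical.

```python
# Illustrative sketch of percentile ranking (not the EJI's actual code).
# Each tract's indicator value is ranked against all others and rescaled
# to a 0-1 range, so 0.85 means "higher than 85% of tracts".
import numpy as np

def percentile_rank(values):
    """Return 0-1 percentile ranks; assumes no tied values for simplicity."""
    values = np.asarray(values, dtype=float)
    ranks = values.argsort().argsort()      # 0-based rank of each tract
    return ranks / (len(values) - 1)        # rescale so the max rank maps to 1.0

# Hypothetical PM2.5 values for five census tracts
pm25 = [7.2, 9.8, 8.1, 12.4, 6.5]
scores = percentile_rank(pm25)
# The tract with the highest value (12.4) scores 1.0; the lowest scores 0.0
```

Production tools would also need a tie-handling rule, which this sketch omits.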
Like most environmental justice screening method tools, we chose not to assign any weights to individual indicators within these modules. This actually differs a bit from CalEnviroScreen, which assigns higher weights to environmental indicators that represent measures of exposure to pollution than to indicators that represent other aspects of environmental burden, which CalEnviroScreen refers to as environmental effects. The CalEnviroScreen documentation really emphasizes that these environmental effects are given half the weight of pollution exposures because the presence of pollution due to hazardous sites or land uses doesn't necessarily translate to actual exposure to pollution, which is what CalEnviroScreen measures. We made the decision to weight environmental effects equally to factors like air pollution because we felt it was important to acknowledge and to account for the effects not only of potential chemical contamination associated with some sites, but also the effects on community stress and wellbeing of non-chemical stressors, stressors like noise pollution, odor pollution, and other forms of environmental degradation or lack of environmental amenities. I'll also mention something kind of unique in the way that the EJI's health vulnerability module is calculated. While state-level cumulative impacts tools use state-level data on things like hospitalizations due to asthma or heart disease, those kinds of data aren't available consistently at the national level. What we do have instead at the national level are estimates of chronic disease prevalence at the census tract level, provided through CDC's PLACES program in the National Center for Chronic Disease Prevention and Health Promotion.
So these PLACES estimates use data from the Behavioral Risk Factor Surveillance System as well as some demographic data in order to model small-area estimates of disease prevalence, estimates which really are the gold standard for granular disease prevalence data at the national level. However, some of the demographic data used to create these estimates are also used in our social vulnerability module, meaning that incorporating those estimates directly into the index could lead to potential over-weighting of these factors and issues of statistical dependence. So our team worked with the PLACES team at CDC to develop a method for incorporating those estimates into our index in a way that preserved statistical independence between factors. We ultimately settled on a system where census tracts were flagged if they scored in the top one-third in the nation for a disease prevalence indicator, with the sum of flags then multiplied by a normalizing factor of 0.2 in order to create a final tract-level score between zero and one that could be directly compared and added to other module scores. And that brings me to the final step in calculating the EJI, where scores for all three modules, scores that range from zero to one for each, were summed and then percentile-ranked again to calculate a final overall EJI ranking that represents relative cumulative impacts on health due to environment and vulnerability. This again differs a bit from the widely used CalEnviroScreen method, which uses a multiplicative rather than an additive model. It's actually a bit more in line with methods used by that original environmental justice screening method. I know that there's been a fair amount of work done comparing additive and multiplicative models, work that suggests a high degree of overlap in scores using these two methods.
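The two scoring steps described here, the flag-based health vulnerability score and the additive combination of modules, can be sketched roughly as follows. This is a simplified illustration under the assumptions stated in the talk (a flag for each disease indicator in the national top third, a 0.2 normalizing factor, and an additive sum that is then percentile-ranked); the numbers are hypothetical.

```python
import numpy as np

def health_module_score(disease_percentiles):
    """Flag each disease indicator in the national top third (percentile
    >= 2/3) and scale the flag count by 0.2, giving a 0-1 score for a
    five-indicator module."""
    flags = sum(1 for p in disease_percentiles if p >= 2 / 3)
    return 0.2 * flags

def overall_eji(env, soc, health):
    """Additive model: sum the three 0-1 module scores per tract, then
    percentile-rank the sums (assumes no ties, for simplicity)."""
    total = np.asarray(env) + np.asarray(soc) + np.asarray(health)
    ranks = total.argsort().argsort()
    return ranks / (len(total) - 1)

# Hypothetical tract flagged for three of five conditions -> 0.2 * 3 = 0.6
hv = health_module_score([0.90, 0.40, 0.75, 0.95, 0.10])
```

A multiplicative model, by contrast, would take the product of module scores before ranking, which damps the influence of any single high-scoring module.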
But it's also been noted that the additive method does tend to allow more influence by individual modules, just something to note about this method. That said, this additive method is intended to make the EJI both more adaptable by users at the local level and really easy to understand among a range of users with widely varying backgrounds and expertise. So having introduced the framework and methods that we use to combine indicators within modules and then ultimately to calculate the overall index score, I'll go ahead and speak a little bit more to how we identified and evaluated indicators for inclusion in these modules and in the overall index. We initially identified a list of potential indicators through a mix of literature review, review of other tools like CalEnviroScreen and EPA's EJScreen, and consultation with subject matter experts. We screened these indicators for inclusion in the EJI using some overall criteria that we applied to the best national-level data available for each indicator. Our first criterion was that all data had to be accurate and reliable, meaning that we had to find data from a trusted source, such as a government agency, that could be relied upon to produce accurate data and to continue producing data in the future. Data also had to be analytically sound, meaning that they had to be a quality measure of the indicator they are intended to represent. Data had to be available at scale; because our unit of analysis was the US census tract, we required all data to be provided either at the census tract level or at some finer level of resolution that could be aggregated to census tracts. And then finally, we required all data to be timely, meaning that they had to represent relatively recent conditions.
Most of the data that we used were collected in the last five years, and these data had to be updated regularly so that we could use them in future updates to our index. This screening process resulted in a total of 36 indicators among all three modules. So I'll go ahead and show you what those look like for the individual modules. The EJI environmental burden module includes 17 indicators, each representing a feature of the environment whose presence or absence contributes to overall environmental burden on health. These indicators are separated into five functional groups, groups that we call domains, which represent aspects of air pollution, proximity to potentially hazardous and toxic sites, features of the built environment, both positive and negative, proximity to noisy and polluting transportation infrastructure, and water pollution. All of these factors contribute in some way to community health and well-being. And we know that many of them build on each other, amplifying overall effects on health, which is one reason why it's so critical to measure these factors cumulatively when thinking about impacts on health. It's also important to note here that this environmental burden module doesn't capture all environmental issues. Data for some potential indicators that we initially identified, indicators like indoor air pollution, septic system failure, and associated soil contamination, just aren't available as national datasets. Other data representing drinking water quality or agricultural pesticide use, things that we really hope to measure, aren't available in a form that we can use for this kind of spatial tool at the resolution that we want. It's also important to note that data on ozone, fine particulate matter, and impaired surface waters aren't available for states like Alaska or Hawaii, which actually led to the exclusion of those states from the 2022 version of our index.
These are all data limitations that we recognize and which we really hope to address in future versions of our tool as spatial environmental data improve and become more accessible. The second module of our index, the social vulnerability module, includes 14 indicators sorted into four domains, representing aspects of racial and ethnic minority status, socioeconomic status, household characteristics, and housing type, which is a structure very similar to that of the CDC/ATSDR Social Vulnerability Index for anyone who's familiar with that tool. These are all factors that are known to modify or to compound the effects of environmental burden on health. And these also represent aspects of procedural justice, the ability of communities to influence environmental decision-making. And then finally, the third module of the EJI is the health vulnerability module, which includes five indicators representing prevalence of key chronic conditions associated with environmental injustice and health equity: asthma, cancer, high blood pressure, diabetes, and poor mental health. We know that people with these preexisting conditions are more susceptible to the effects of environmental burden on their health. Environmental factors like air pollution, noise pollution, and aspects of the built environment have all been shown to exacerbate disease in people with these health conditions. So this health vulnerability just compounds those issues of environmental burden and social vulnerability to drive overall cumulative impacts on health. I wanna make sure to mention that one reason this module is relatively small compared to the other modules is that prevalences for many chronic diseases are highly correlated. For example, high blood pressure is highly correlated with coronary heart disease, partially at least because it's a precursor to that disease.
So where indicators exhibited multicollinearity, we chose between those indicators based on the strength of evidence linking each indicator with vulnerability to environmental effects on health. So with all that information in mind, and I know it's a lot, I wanted to make sure to mention that we provide this information and more on our website at eji.cdc.gov, which has information like a short fact sheet, FAQs, and really detailed technical documentation that provides the theory and basis for the EJI, everything from the rationale for why each indicator was included in our index and how it contributes to overall cumulative impacts on health, to calculations running through how EJI scores were computed using example census tracts, and then finally important limitations that should be considered when applying the EJI. You can also navigate from this page to our EJI Explorer, which I mentioned before. I don't think I have time for a demo right now, so in the interest of time, I'll just mention that the EJI Explorer allows users to view patterns in relative cumulative impacts on health, to make comparisons using index scores, and to find census tract-level information on the individual factors contributing to cumulative impacts for each community. EJI data is also available through CDC's National Environmental Public Health Tracking Program, through their environmental justice dashboard, where it can be viewed alongside a wealth of other national data related to environmental justice and health. I also wanna make sure to acknowledge that community engagement is a key part of environmental justice, right?
It's procedural justice, which is why, since the release of the EJI, we've been working to host live demos and webinars and to participate in listening sessions in order to introduce communities, public health partners, and subject matter experts to the EJI and to get feedback that we can use to improve our tool going forward, making it as representative as possible of the lived experiences of people facing injustice and injustice's impacts on health. And then finally, I want to end by addressing one of our primary charges from this committee, our purpose in building this tool. Our primary purpose in developing the EJI has been to advance CDC and HHS goals of environmental justice and health equity. And we see the EJI as contributing to these goals by empowering communities, public health professionals, and others to identify U.S. communities that experience the most severe cumulative impacts of injustice on health, helping those people to focus actions on areas with the greatest need, helping to shape public health interventions that are aimed at alleviating health inequities, guiding hypothesis development by people researching issues of environmental justice and health equity, and then finally allowing policy makers and public health practitioners to establish meaningful goals for advancing health equity, to track progress, and to evaluate success in moving towards a cleaner, healthier, and more equitable future. So with that, I just wanna thank everyone for your time, and I'll be happy to take any questions if we have some time for that. We just heard from Ben McKenzie of the CDC talking about the Environmental Justice Index. And I just wanna take a little bit of time to see if there are any clarifying questions from the committee. Ibrahim. Thank you, Benjamin, for the highly informative presentation. So I have a couple of questions regarding the mental health components of the index.
Could you inform us about the primary data source that you used for the mental health data, specifically the mental health variables? Number one, the variables that were aggregated into the mental health component, and then secondly, the data sources used. Yeah, absolutely. So I think you might be referring to the indicator for estimated prevalence of poor mental health within the health vulnerability module. That again is drawing on CDC's PLACES data, those small-area estimates that are modeled using Behavioral Risk Factor Surveillance System data. So it's specifically estimated prevalence of poor mental health for greater than 14 days among adults 18 and older in the United States. That's the data source that we're using for that indicator. Okay, thank you very much, Benjamin. I also have another question regarding the spatial unit of analysis. I understand that the data were collected at the census tract level, but when I went into the website, I realized that one could actually download the data either at the census tract level or at a county or state level, for example. So could you please explain the justification for making this data available at the county level? That is one. And then secondly, was it challenging to incorporate the county level? I understand that you have the data at the census tract level, so aggregating to a higher level would presumably be easier, right? So I'm just wondering if there are challenges associated with that as well. So just to clarify, the tool allows you to download data for a particular state or for a particular county, but all of the data are at the census tract level. We currently don't calculate the EJI at the county level or any other higher level of aggregation. And we also kind of discourage people from aggregating EJI data up to those levels.
So again, while it's available for download for particular areas, we only provide the calculations at the census tract level. Thank you for the clarification, Benjamin. And not to digress too much, does the same apply to the SVI data as well? Because with that, I also know one could download at the county level with specific county-level FIPS codes, right? That's correct. The SVI does provide calculations at both census tract and county levels. But again, we're using both social vulnerability data from the census and a variety of health and environmental data that aren't necessarily available at the county level. So we provide our calculations purely at the census tract level. Okay, thank you very much. Absolutely. Okay, Marcos. Thank you, Ben. That was a great presentation. I don't know if I misheard you. When you were talking about the health vulnerability indicators that you use, you commented that they were often correlated and you tried to address that correlation between the variables by linking them to specific environmental burdens for that place. Did I hear that correctly, or could you explain how you handled that? Yeah, absolutely. So when we found indicators that satisfied our theoretical criteria for inclusion, indicators that we had data for and for which there was evidence in the literature that those indicators made populations more vulnerable to the effects of environmental burden on health, but that were highly correlated, we chose to go with the indicators where we saw stronger evidence in the literature for how that indicator made populations more vulnerable. So for example, looking between diabetes and obesity, there's a wealth of literature available on linkages between diabetes and vulnerability to environmental burdens. So that kind of influenced our decision to go with diabetes as opposed to obesity.
So is that across the board, then, or for individual, environmental-burden-specific cases? Can you clarify the question? Yeah, I'm sorry. When you're making that determination, when you're weighing which health vulnerability is most relevant or most influential relative to a given environmental burden, is that a blanket decision across the country for that burden, or are you making that decision on a regional basis? I'm trying to wrap my head around how you're deciding when to choose one health vulnerability indicator as being relevant to a given environmental burden. Yeah, absolutely. So first off, there weren't too many variables that ended up being excluded because of correlations, right? There's not all that much data available at the national level with census tract resolution. So again, using the example of diabetes and obesity, we weighed the literature showing how obesity made populations more vulnerable to health effects of environmental burden against the amount of literature looking at diabetes, and we just had to make a judgment call as to which one had the most evidence behind it, also consulting with some subject matter experts and looking at other tools, tools like the one being developed in New York, which also made the decision to go with diabetes. So all of those kinds of factors were being taken into account in making those decisions. So we're running a little bit behind schedule. Lauren and Walker, do you mind holding onto your questions? We'll have a little bit more time for discussion after the break, so we definitely want to hear your questions. But I'd like to move to our next speakers. We are going to hear from Tai Lung and Matthew Lee from the EPA; they're environmental protection specialists, and they'll present their work on their tool, EJScreen. Tai, Matthew, welcome.
Thanks, Eric. Let me just make sure... Yeah, thanks, Eric. I appreciate the opportunity to be here. Actually, Tai's going to join us for the Q&A portion, so I'm going to give you all a quick 20-minute overview of EJScreen, and then, like I said, Tai will join us afterwards. But I really, let me get this in presentation mode, really appreciate the opportunity to be here. Again, I'm Matthew Lee. I work in EPA's Office of Environmental Justice and External Civil Rights. I also serve as a lecturer at the University of Pennsylvania, where I teach a course on the principles of mapping for environmental justice. And I have the pleasure of introducing EPA's EJScreen to y'all today. I'm going to try to bring everybody onto a level playing field with my 20-minute overview of EJScreen. For those of you who are already familiar with the tool, this is going to be a little bit of a regurgitation of information, but again, I think it's especially important for the Q&A portion to have everybody on a level playing field. So I'm just going to go over EJScreen generally, and then we can get into some details. EJScreen is EPA's web-based GIS tool for nationally consistent EJ screening and mapping. And the keywords there are nationally consistent. We have coverage for the entire United States, including Alaska, Hawaii, Puerto Rico, and we actually just added data on the other U.S. territories into our tool. This is truly a nationally consistent EJ screening and mapping tool. And what EJScreen does is combine environmental and socioeconomic data to highlight areas where vulnerable populations may be disproportionately impacted by pollution. And hopefully, as all of you on the call know, this gets at the crux of environmental justice, right? Your vulnerable populations, your poor communities of color, linguistically isolated populations, communities on tribal lands, who we know face higher pollution burdens.
And those are the exact type of areas that a tool like EJScreen strives to highlight. Now, equally important to understanding what EJScreen is, is understanding what EJScreen is not, right? This is a screening tool. All of the tools that we are talking about are screening tools, and for EJScreen, we even put screen in the name. Again, this is a screening tool. It's not covering every single environmental or EJ issue. And that national consistency that I just talked about is a limitation in and of itself, because we are limited to the datasets for which there is national coverage. And EJScreen is using the census block group as its unit of analysis, and as I'll talk about in a second, that is a very refined unit for which we simply don't have nationwide datasets. So again, this is a screening tool. I'm not gonna spend a ton more time here, because most of these caveats and limitations are inherent to any screening tool, not just EJScreen. But if you come away with nothing else from my presentation, please recognize that EJScreen is not a labeling tool. EJScreen is not a designation tool, right? So you are not going to see maps from EJScreen that say this is an EJ community or this is not an EJ community. And that is an inherent difference between a tool like EJScreen and CEJST, the Climate and Economic Justice Screening Tool, which I know you all heard about a couple of weeks ago or earlier this week. That is a designation tool, right? It does designate disadvantaged communities and not disadvantaged communities. That is not what EJScreen is doing. EJScreen is simply highlighting areas that have both vulnerable populations and higher pollution burdens. And we primarily do that through the creation of two sets of indices. We have our primary EJ indices, and then we also have our newly created supplemental indices.
And as the graphic on the right alludes to, these indices are again that combination of demographic data with an environmental indicator. Because we have 12 different environmental indicators built into the tool, we have 12 different EJ indices and 12 different supplemental indices, one index for each of the environmental indicators. And I'll go into the details of the calculation in a second. We also feature seven different socioeconomic indicators on their own. And we round out the tool with a suite of health, climate, and critical service gap indicators, all of which I'll go into in detail in a second. EJScreen is by no means a static tool. We update it on an annual basis, and last year, 2022, we actually updated the tool twice. For our updates, any of the environmental data sets that can be updated are updated. Likewise, if the Census Bureau releases newly available census data, we incorporate that into EJScreen. EJScreen uses the Census Bureau's American Community Survey five-year rolling averages. Right now we are utilizing 2016-2020 ACS data; for our next update, we will transition to the 2017-2021 ACS data. EJScreen is at the highest resolution for which this data is available. Each one of the color-coded polygons that you see on the map here is a census block group. Again, this is the most refined unit for which the Census Bureau releases detailed demographics, so the highest resolution of data that is available is at the block group level. And that sets EJScreen apart from a lot of other tools, like the EJI Ben just talked about and even CEJST, which are available at the census tract level. EJScreen is available at the block group level. In the name of transparency, everything within EJScreen, all those indicators that I just went through, can be downloaded. So if you are your own GIS person or work with your own GIS team, you can go on our FTP site and download all the data.
That being said, you do not need any special GIS software, any special passwords, anything besides internet access to access EJScreen. And we at EPA, our federal and state partners, and our community stakeholders all use the same exact tool with the same exact datasets. I do want to take a second here and talk about the unit of analysis that EJScreen uses. You've heard me talk about block groups and Ben talk about census tracts, but again, it is important to understand the unit of analysis that these tools use. The census does have a unit of analysis called the block; this is literally you and your neighbors. The Census Bureau does not release detailed demographics at the block level due to privacy and security concerns. The most refined unit for which the Census Bureau releases detailed demographics is the census block group. A census block group is roughly 1,400 people, but it does vary in population size; block groups can have as few as 600 people or upwards of 3,000, and they certainly vary in geographic size. The block group that you're looking at in the visual here is in a densely populated urban area; it is probably only two tenths of a square mile, a very refined look at the community within this area. There are census block groups in rural North Dakota, for example, that are 700 square miles, and that is still the most refined unit for which the census releases the data. So you just have to take that into account when looking at the data: some census block groups are very refined, some are much larger. But since we at EPA feel that EJ issues are inherently local, we feel that the user should be utilizing the most refined data sets available, and again, that is the census block group. We do also make our data available at the census tract level. We recognize that the EJI, CEJST, CalEnviroScreen, and a lot of the other tools that are out there are only available at the census tract level.
So we also make the EJScreen data available at the census tract level, so you can compare apples to apples and things like that. You can also run county-level reports in EJScreen. And one of the really nice things about the tool is that you can then compare all the data to state and national averages. Real quickly, I'm going to go through the different environmental indicators that we incorporate into the tool. The first few that we have are air related: PM2.5, ozone, diesel PM, air toxics cancer risk, the air toxics respiratory hazard index, and then the traffic proximity indicator that we get from DOT. And to round out the tool, we have a lead paint indicator, a few proximity-to-EPA-regulated-facility indicators, an indicator on underground and leaking underground storage tanks, and last but not least, a wastewater discharge indicator. So again, this is by no means every single environmental burden that could potentially impact a community. These are simply the indicators for which there was national coverage and either block group level data or data which could be relatively easily down-weighted to the block group level, because some of our air-related data sets are available at the census tract level, which we then down-weight to the block group. Likewise, all of our socioeconomic indicators are also available at that block group level. These are more or less your classic indicators of social vulnerability: communities of color, low income, unemployment, limited English speaking, less than high school education, and then your sensitive populations in terms of age. And again, all the data currently featured in EJScreen comes from the 2016-2020 ACS. So while you can get all this information on these individual socioeconomic indicators on their own, EJScreen also uses those different indicators to formulate two different demographic indices.
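The tract-to-block-group down-weighting mentioned above can be sketched in a few lines. This is a simplification I am adding for illustration, not EPA's actual procedure: it assumes the common GEOID convention in which a 12-digit block group identifier extends its 11-digit parent tract identifier by one digit, and it simply lets each block group inherit its tract's value.

```python
# Sketch of carrying a tract-level indicator down to block groups.
# Assumes standard GEOIDs: block group ID = parent tract ID + 1 digit.
# EPA's real down-weighting may be more involved; this is illustrative.

def downweight_to_block_groups(tract_values, block_group_ids):
    """Map each block group to its parent tract's indicator value."""
    result = {}
    for bg_id in block_group_ids:
        tract_id = bg_id[:11]  # parent tract = first 11 digits of the GEOID
        if tract_id in tract_values:
            result[bg_id] = tract_values[tract_id]
    return result

# Hypothetical example: one tract-level PM2.5 value, two block groups
tracts = {"39049001234": 9.4}  # illustrative value only
bgs = ["390490012341", "390490012342"]
print(downweight_to_block_groups(tracts, bgs))
```

A real implementation would also handle block groups whose parent tract is missing from the source data, rather than silently dropping them.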
We have always had our demographic index available in EJScreen from the inception of the tool. The demographic index is one of the components of the EJ index. I'm going to talk about that calculation in a second, but the demographic index is a very simple calculation: percent low income plus percent people of color, divided by two. This goes directly back to former President Clinton's executive order on environmental justice, Executive Order 12898, which specifically identified these two segments of the population. So again, our demographic index has always been, and will continue to be, utilized as one of the components of the EJ indices. New for this last update in October, we created a supplemental demographic index. This looks at five different socioeconomic indicators and rolls them into an average; we take the average of those five indicators. We then take that supplemental demographic index and combine it in the same exact way we do with our environmental indicators to form the supplemental indices. And again, the supplemental indices and the supplemental demographic index itself are not replacing the demographic index. They simply live side by side and offer exactly what the name says: a supplemental, or different, look at vulnerable populations. To dive a little bit more into the calculation of the EJ indices (and the same exact calculation applies to the supplemental indices; you just swap in the supplemental demographic index for the demographic index): we take that single environmental indicator in its percentile format and multiply it by either the demographic index or the supplemental demographic index. And why do we do this? We are doing this in an effort to identify areas that have both higher pollution burdens and vulnerable populations present. Again, the crux of environmental justice.
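The arithmetic just described is simple enough to sketch directly. The function names and the sample values below are mine, purely for illustration; the calculation itself (a two-term average, a five-term average, and a product of an environmental percentile with a demographic index) follows the description above.

```python
# Sketch of the EJScreen index arithmetic as described in the talk.
# Names and example values are illustrative, not EPA's.

def demographic_index(pct_low_income, pct_people_of_color):
    """Percent low income plus percent people of color, divided by two."""
    return (pct_low_income + pct_people_of_color) / 2

def supplemental_demographic_index(five_indicators):
    """Five socioeconomic indicators rolled into a simple average."""
    return sum(five_indicators) / len(five_indicators)

def ej_index(env_percentile, demo_index):
    """Combine one environmental indicator's percentile with a demographic index."""
    return env_percentile * demo_index

demo = demographic_index(0.25, 0.75)   # -> 0.5
supp = supplemental_demographic_index([0.5, 0.25, 0.125, 0.0625, 0.0625])
print(ej_index(0.9, demo))             # pollution burden x vulnerability
```

The same `ej_index` call with `supp` in place of `demo` gives the corresponding supplemental index, matching the "swap in the supplemental demographic index" step above.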
One of the really nice things about EJScreen, and a similar aspect of the EJI, is that we present the information in percentile format. While you can still always get the raw values associated with any of our indicators, putting the indicators and their values into percentile format allows for comparability and that type of analysis, and puts things in perspective for the user. For me, even as a power user of EJScreen, it really doesn't mean much if you told me that my PM2.5 level is 9.4; I don't know if that is good, bad, high, or low. But if you told me that my PM2.5 level is in the 90th percentile nationally, okay, only 10% of the nation has a higher value than I do. It puts things in perspective, and that is exactly why we rank all of our results in percentile format. Again, you can still get the raw values, but front and center are these percentile rankings. So that is how we present all of our EJ indices and the supplemental indices: in percentile format. EJScreen does feature a host of other indicators, some in percentile format if they lend themselves to that format, some not. We have three different health indicators that we have incorporated into the tool, thanks to CDC PLACES in partnership with the Robert Wood Johnson Foundation. These health indicators are available at the tract level. We do not have block group level data, and I do not believe block group level data exists for these health indicators. These are tract-level data sets for low life expectancy, heart disease, and asthma. Likewise, we have different climate indicators built into the tool. We have two really nice indicators on wildfire risk and flood risk, which come from an MOU we have with the First Street Foundation, and we also have information on drought, coastal flood hazard, the 100-year flood estimates, and modeled one- through six-foot sea level rise.
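The percentile ranking rationale above can be made concrete with a small sketch. The function and the sample national distribution are mine and purely illustrative; the point is only that a raw value like 9.4 becomes interpretable once expressed as "only 10% of areas are higher."

```python
# Sketch of converting a raw indicator value into a national
# percentile rank. The sample distribution is hypothetical.

from bisect import bisect_left

def to_percentile(value, all_values):
    """Percentile rank of `value` among `all_values`, on a 0-100 scale."""
    ordered = sorted(all_values)
    rank = bisect_left(ordered, value)  # count of strictly smaller values
    return 100.0 * rank / len(ordered)

# Hypothetical national PM2.5 values for ten areas
nation = [5.1, 6.0, 6.8, 7.2, 7.9, 8.3, 8.8, 9.0, 9.2, 9.4]
print(to_percentile(9.4, nation))  # 9 of 10 areas are lower: 90th percentile
```

Real percentile methodologies differ in how they treat ties and interpolation; this sketch uses the simple "fraction of strictly smaller values" convention.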
And last, but certainly not least, we do have information on different critical service gaps. We offer the user information on food deserts, on medically underserved areas, and on access to broadband internet. Depending on your use of the tool, there are a variety of different ways to look at the EJScreen data. If you are looking at a city, a county, or a watershed as a whole, the maps are probably a really good place to start: toggle each one of the indicators or indices on and off one at a time and identify hotspots or areas of interest for which you may then want to generate a standard report. A standard report is a really nice way to generate three pages of EJScreen information on a specific area of interest, whether that be a block group, an area around a school, your church, or your home. That standard report is kind of a one-stop shop for all the information that I just went over in EJScreen. You have heard me talk a lot about these block groups and census tracts. We at EPA totally recognize that these are quasi-political boundaries that communities barely recognize and that pollution burdens certainly don't recognize. So we allow users to customize their own area of analysis. You can do things like drop a pin on a facility and put a one-mile or three-mile buffer around that pin. You can physically draw, let's say, a plume of pollution coming off a site, if you knew where that was; you could draw that plume in EJScreen and generate a report on it. Likewise, if you knew there was a discharge coming into a waterway and that discharge affected, let's say, one mile downstream, you could trace that one-mile downstream segment, put a buffer around it, and analyze that. So you are by no means limited to block group or census tract assessments; users can also designate their own area of assessment.
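The "drop a pin and buffer" idea can be sketched with plain great-circle distances. This is my own simplification, not EJScreen's method: it selects block groups whose centroids fall within a radius of a point, whereas a real buffer analysis would intersect full polygons and typically apportion population by area.

```python
# Sketch of a point-and-buffer selection: which block-group centroids
# fall within a given radius of a pin? Uses the haversine great-circle
# distance. Centroid-in-radius is a simplification of true polygon
# buffering; coordinates below are hypothetical.

import math

def haversine_miles(lat1, lon1, lat2, lon2):
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def block_groups_in_buffer(pin, centroids, radius_miles):
    """centroids: {block_group_id: (lat, lon)}; returns IDs inside the buffer."""
    lat0, lon0 = pin
    return [bg for bg, (lat, lon) in centroids.items()
            if haversine_miles(lat0, lon0, lat, lon) <= radius_miles]

# Hypothetical facility pin and two block-group centroids
pin = (39.96, -83.00)
centroids = {"bg_a": (39.965, -83.005), "bg_b": (40.20, -83.30)}
print(block_groups_in_buffer(pin, centroids, 1.0))  # only bg_a is within 1 mile
```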
And last but not least, one of the nicest and newest features we have available in EJScreen is the addition of the threshold maps. One of the things that sets EJScreen apart from the EJI, which Ben just talked about, is that the EJI is a cumulative scoring tool: it wraps those different indicators up into a cumulative score. EJScreen does not do that; it is not wrapping anything up into a score. But one of the first steps that we've taken towards examining cumulative impacts is the release of a threshold map widget. The threshold map widget allows you to look across those different indicators. You can set a certain threshold, let's say the 80th percentile, for example, and you can see which of the 12 different indices exceed that 80th percentile. So again, this is by no means wrapping the different indices up into a score, but it lets the user look across all 12 of those indices at once. We feel that this is a very nice step towards examining cumulative impacts by providing the user with this cumulative outlook on all the data sets that are in EJScreen. I know that was a lot. There's no way I could give you all the information on EJScreen in 20 minutes, just as Ben found with the EJI. We do have an EJScreen website with a ton more information and a lot of prerecorded, hour-long training sessions just like this one, each with a live demo associated with it. So if you're looking for more information, there is a lot available on our website. We have also been offering office hours; we have public office hours coming up on the 19th of April for any user to come in and talk about their use of the tool. So I will stop there and see if there are any questions, and then I think Tai and I can get into the real nitty-gritty during the panel discussion. Thank you so much, Matthew, for your presentation.
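The threshold-map idea described above amounts to a filter across the indices rather than a score. A minimal sketch, with index names and percentile values that are purely illustrative (and assuming, as my own choice, that the cutoff is inclusive):

```python
# Sketch of the threshold-map logic: given one block group's index
# percentiles, report which indices meet or exceed a user-chosen
# threshold. No cumulative score is computed; this only looks
# across the indices at once. Data below are hypothetical.

def indices_over_threshold(percentiles, threshold=80):
    """percentiles: {index_name: national percentile}; names at/above threshold."""
    return sorted(name for name, pct in percentiles.items() if pct >= threshold)

bg = {"pm25": 91, "ozone": 78, "diesel_pm": 85, "traffic": 62, "lead_paint": 88}
hits = indices_over_threshold(bg)
print(hits)       # which indices clear the 80th percentile
print(len(hits))  # how many of them do
```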
We'll have some general discussion and time for detailed questions after the break, but I just wanted to see if the committee had any really brief clarifying questions on definitions, that kind of thing. Marcus. Yes, hopefully it's a quick question. First of all, let me just say that I love EJScreen, and I saw the widget that you have on there; I'm really eager to go play with that. So when you're showing the percentiles, that's the main way that you see how a place compares in terms of the level of burden. Does EJScreen include, or have you thought about including, indicators of when those values exceed a legal or regulatory threshold, or even a health-defined threshold? Yeah, it's a great question and a great point. We haven't gotten there yet. It is simply looking at the percentiles, whether compared to the state averages or the national averages. But I do think you're exactly right that there should be some sort of incorporation of a health-based threshold or some sort of air quality threshold, whatever it is. We just haven't gotten there yet. Okay, Ibrahim, did you have a clarifying question? Yes, a very quick question. Matthew, thank you for the presentation. The medically underserved areas, right? You included a variable about that. Is it at the census tract level or the census block group level? And what's the primary source of the data? I'll pop it in the chat as soon as we're done here; I have to double-check. I believe it is census tract level data that's coming from HHS, but we have the exact source on our website, and I'll pop it in the chat for you. Okay, thank you very much, Matthew. No problem. Okay, so let's go ahead and take a break right now. Let's make this a 15-minute break, so let's plan on returning at 25 minutes after the hour. Okay, folks, we are back from our break.
I'd like to welcome back Benjamin Kenzie and Sharanda Buchanan from CDC, Tai Lung and Matthew Lee from EPA, and newly welcome Lucas Brown from CEQ. What I'm going to do right now is open the floor for general discussion, and I just want to lay out some of the expectations and ground rules surrounding this discussion. This discussion is for the benefit of committee members. The general audience will not participate, but you may provide written comments via Alchemer, and these will be reviewed after this open session. Written comments are always welcome via the project website, and a link should be dropped in the chat momentarily. So I'm just going to open it up now for general discussion and questions from the committee. Please, Lauren. Thanks. Mine is a pretty quick question, but it comes back to Ben. You were talking about the work that the team did to understand which model variables had incorporated the same factors, and then figuring out which were the most impactful variables to include. I'm curious whether any documentation accumulated in that process that is shareable. It seems like a really useful exercise that I would hate to undertake again if it was documented in any useful way. Yeah, absolutely. Just for clarification, that was specifically referring to the health vulnerability module. All of that information, the specifics of the validation and analysis, will be made available through our forthcoming methods paper, so it is not currently publicly available. Most of the information on the methods is available through our technical documentation, but not necessarily the data on the statistical analysis. And before I call on a couple more questions, I also want to introduce Natasha DeJarnett and Sharmila Murthy, who have joined us. Say hi. Okay, Monica. Thank you.
So Biden said that this is the year of open data, and I know that NASA and the EPA have been releasing new data sets online; NASA has some for wildfires and particulate matter. So I was wondering, if this data is not incorporated into your tools, do you think those open data sets might be useful for consideration in the future? I wasn't sure if they're at the census tract level. I was just wondering if you all think they might be a helpful addition to the tools. I can hop in there, as we've been in some conversations with NASA about some of their satellite data. I think that satellite data really has the potential to change the way we look at some of our environmental data sets, so we're very excited about incorporating some of it. There are some questions about exactly how we use it in addition to some of our on-the-ground monitoring data, especially if they're telling us different things. So we've been in conversation with some of our different air offices about how we could use the NO2 data from NASA in addition to some of our actual monitored data. I think we'll get there. We're probably not going to get there in the next couple of months, but I'd say by next year we'll likely have some of that NASA data in our EJScreen tool. And I'll hop in on our side and just say that, yeah, we're also very excited by the potential of satellite data. We've been in discussions with NASA's Health and Air Quality Applied Sciences Team, talking through what data might be most applicable or available and might provide better quality than the data currently available at the census tract level. Again, I'm not sure about the timeframe for incorporating those data into the EJI, but it's definitely something that we are looking at and excited about. Okay, thank you. Walker, please. Harvey, can I hop in? I'm sorry. Sorry to cut you off. Do you want to hop in here, Lucas? Please go ahead. Go ahead.
I'm actually already running behind, so I'm going to answer one previous question, if you'll forgive me. When you were asking about the data roadmap: the indicators we considered are in the technical support document for CEJST, which is available on the downloads page of the screening tool. On pages 30 and 31, we have a list of indicators that we considered for inclusion in the tool but that are not quite there yet. Mostly it's because the data is not available at the census tract level right now, but those are included there because we think the underlying issues are fairly important, and if we were able to get data at a narrower resolution that's appropriate for the tool, we'd be excited to explore that. So that might be a little bit of a starting point for your consideration. And thanks, Walker, for letting me jump in line in front of you there. Thanks, Lucas. Walker, please. Yeah, that was actually great, because it partially answered some of my question, and this question is in some ways rephrasing or reframing Lauren's and Monica's questions. So Ben, my question was more about whether you all are planning on releasing a sensitivity analysis that looks at indicators correlating with one another, and at how domains or indicators contribute to that overall EJI score. But it sounds like with your methodology document, you're going to be tackling some of those. Is that right? Exactly, that's the plan. Great. And then also, Ben, it's actually great that Lucas hopped in. Similar question for you: are there indicators or data sets that you all were thinking about for the EJI that you could not include, for whatever reason, at the moment? Yeah, absolutely. I've looked at the list that CEQ put together, and ours is very similar. You've got some big issues that I think people are very familiar with, things like drinking water quality and agricultural pesticide use.
Again, things I touched on a little in my presentation: we're just not finding the data for those indicators. There are a number of other indicators, especially related to environmental burden, that we don't include. Another part of the EJI that I mentioned is that the model itself is pretty adaptable. It's pretty easy for users with GIS expertise to download the data and add their own indicators if local-level data are available for some of those things. So that's always something that we like to mention: it is adaptable, and those data can be added at the local level. But at the national level, again, those data just aren't there yet. And just following up on what Ben said about the water quality data, many of you may know some of the challenges with water quality data: we know that there have been contaminant violations in water systems, but we don't necessarily know the service boundaries of all those systems. So it's hard to put on a map who's drinking the water that we have the contaminant data for. There's a group called the Environmental Policy Innovation Center (EPIC) that's been working to map water service boundaries for the whole country, sometimes using explicit maps that are known to be true and sometimes using a little machine learning inference to model them. There's also a lot of interesting academic work going on. Rachel Morello-Frosch and others are working on the Toxic Tides Project to map Toxic Release Inventory sites within areas that we expect to experience a lot of coastal flooding and other risks in the coming years. So certainly there are a lot of internal federal efforts working on making this data available, and a lot of exciting efforts in academia, nonprofits, and the private sector. Thanks. Kathleen. Hi, thanks so much for those presentations. It's super interesting, particularly to compare across these two indices and the CEQ index.
And one of the questions it raises in my mind has to do with who's using the indices and how they're using them, because we heard a lot from CEQ about the fact that their index is intended, in some sense, to determine eligibility for investments. And I think Matthew used the terminology "a designation tool" and specifically said that yours is not. So I'm curious to understand better how the purpose of the tool, and who's going to use the tool, influences how the tool is structured. For example, if you have a score, that's quite different from having an indicator: in or out, eligible or not eligible. So I wondered if both Matthew and Ben could talk a little bit more about who's using your tool, how they're using it, and whether that has influenced how you structure the tool. Absolutely, I can hop in on this one. It's definitely something that played into our development of the Environmental Justice Index. Like EJScreen, the EJI isn't intended as a labeling tool. It compares relative cumulative impacts on health across census tracts, across communities. So it's designed for different users. Obviously our focus is on health. We wanted to make a tool that was useful for health professionals or health officials to identify areas experiencing health effects from environmental injustice, and then to respond to the environmental and social factors that were contributing to those cumulative impacts. But we've also seen usage by communities who want to take the information from the tool. The tool provides third-party validation of what communities already know, which is that they are overburdened by multiple environmental burdens and face multiple social and health vulnerabilities. They can take that information and use it to advocate for themselves. So that's definitely also a usage that we see for our tool. Yeah, great answer, Ben.
And just to expand on that, specific to EJScreen: we built EJScreen from the beginning with the idea that it was designed for a variety of different users and uses. That is inherently different from, let's say, the CEJST tool, which was designed for a very singular purpose, which does allow it to be a designation tool. With EJScreen, there isn't that single purpose. We see a lot of different users using it in a lot of different ways. That's why you can look at different units of geography, and you can look at different indicators by themselves or combined together with threat and vulnerability. There are just a lot of different ways that you can look at the data in EJScreen. Again, it wasn't designed to give you one answer, and that is a very big difference between it and CEJST specifically. If I can follow up on that question: are you tracking how people are using your screening tools, or are you planning to? Yeah, and Tai, feel free to jump in at any time. We've been tracking the use and users of EJScreen almost from the inception of the tool, and we are constantly using the feedback we get to continually make the tool better and to better meet the needs of different users, et cetera. That's a great answer. And I'll say that the EJI is much newer than EJScreen; it was only released back in August of 2022. But we are already tracking who is using our tool for analysis, who's using it for public health interventions, and which community groups are using it, again, to advocate for themselves. Those are all things that we're trying to keep track of going forward. And I'd love to hop in here as well on the CEQ CEJST tool.
So the purpose of this tool, and the intended audience: under Executive Order 14008, we at CEQ were charged with creating a geospatial mapping tool to help us identify disadvantaged communities, communities that experience disinvestment, marginalization, and potential overexposure to environmental hazards. And we want to do this on the basis of communities that are geographically defined. This is part of the Justice40 Initiative, which directs our agencies to make sure that 40% of the overall benefits of federal investments reach these disadvantaged communities. And so I don't want it to sound as if it's an in or an out. We want 40% of those investments to reach disadvantaged communities; there is still another 60% of federal investments as well. So I just want to make sure it doesn't sound like an in or an out. I also want to point you to our guidance and instructions. If you visit the About page of the CEQ CEJST website, you will find the recent memo, and the memo specifies directly the audience for CEJST: federal agencies using it for those benefit allocation purposes. If I can follow up on that answer, Natasha, if you don't mind. So the CEJST tool is a binary tool. It identifies, or at least designates, a community or doesn't designate a community. Is that intended as a first-step screening? Or is there any value in an ordinal ranking or a score or anything that goes beyond a binary designation? So we do want this designation to be the first step for federal agencies in identifying communities that are disadvantaged, but agencies also receive instruction in the guidance memo that they can further prioritize based on their areas of expertise and their specific areas of interest.
But also, when you download the information, when you utilize the spreadsheet on the website, you're able to much more robustly qualify and quantify communities' disadvantage based on all of those different geographic parameters. When you use the website, you will see a yes or no, and you'll see which categories qualify a community as being disadvantaged. Thank you. Jay. Thanks to Ben and Matthew for your excellent presentations; that was really helpful. I was wondering, when I look at the social vulnerability indicators for the EJI or the demographic indicators for EJScreen, there is obviously a major dependence on census or ACS data, which makes the focus mainly on residential, or nighttime, populations. I was wondering if either of you have thought about expanding to look at non-residential populations and daytime risks, using locations of schools and their demographic characteristics, for example. I just wanted to hear your thoughts on that. Yeah, I'm happy to. Oh, sorry, Harvey. I was just saying, great question. That's all; please go ahead. Thank you. I'm happy to hop in here, because this is actually something I'm very passionate about. Part of our group's interest at CDC is looking at mobility measures and how daytime mobility affects exposure and health. It's something that I haven't necessarily seen applied within this kind of cumulative impacts framework, which was really developed for looking at residential proximity and residential exposure. One thing that we do include within the EJI, however, is a measure of daytime population within our databases, which allows users to at least get some idea of which census tracts are residential and which census tracts are places of work.
But it's definitely something that we are excited about looking into and incorporating into our framework for understanding cumulative impacts, because it's such an important part of understanding the framework of exposure. And I'll hop in here to say, likewise, that's one of the areas that we're actually really focused on as well. We are planning to work with an academic on whether there is a better way to incorporate data on places of work, and obviously schools as well, into our calculations, rather than strictly looking at place of residence. It's a little more complicated than I think we have been able to wrap our hands around thus far, so I think getting some input from academics would be pretty important as well. I'd like to echo that as well. I think this is a problem that we often run into in epidemiological investigations: people are not in one place all the time. We're mobile, we're moving, we're in different places. We may work in a different place; we may visit other places. If the committee has recommendations around this, we'd be very interested to hear them. Okay, thank you. Ibrahim. So Ben, when you say you track how users put these tools to use, how they apply these tools, how do you track them? Is it by the products, the publications? Do you reach out to them, or do you ask them to fill out some forms? Just curious. Yeah, absolutely. So, number one, we do try to keep track of publications or policy uses that we see popping up. And then often people will reach out to us directly through our mailbox. We make our mailbox very prominent for people to ask questions or provide feedback, and we'll get people responding with how they've used the EJI for their own purposes. So those are some of the routes by which we're tracking EJI use. Thank you very much.
I can hop in to say that with EJ screen we do the same. We have a mailbox that tracks all of the feedback that we get on the tool, which is one primary way, but we've also done surveys of our external EJ screen users. We did a survey about five years ago now, so it's a little bit dated. We were planning on redoing that survey, though, just to get a more updated look at who outside of the agency is using EJ screen and what they're using it for, because I know that in the five years since we did that survey, it's probably changed pretty significantly. Thank you. Yes, thanks. Lauren. I have a question: I'm curious how much y'all work together or interact with each other when building these indices. And I'm curious how you perceive the overlap and the value. If I'm a citizen, or I'm a decision maker, and I'm trying to figure out this landscape, how do you justify three tools to evaluate environmental justice? How are they different, and how do you work together to make sure that they're meeting different needs? I just wanna understand that landscape a little bit better. Right, and can you imagine these tools working in an ensemble to inform some policy or investment decisions? That's a good question. Who wants to go first? It's probably a hard question too. Well, I'll kick us off. I just think we are living in a really exciting moment where we have access to all of these tools. And I know that we are in communication. I think Lucas, Matt, and Ty can speak to the early work that was done. They spent a lot of time with us when we were building the CEJST initially so we could learn from them. The reality is that there are all of these different users. Each of the different agencies does have different purposes, different particular targets.
But I think our vision is for there to ultimately be data overlays, ways that these tools can actually interact. I think NOAA put out the Climate Mapping for Resilience and Adaptation tool, which has a nice overlay of the CEJST in it. We're seeing examples of this, and I think we're really just at the moment where we can be building on what each of the different efforts is focused on achieving. And I'll hop in and just say, yeah, we're constantly working with our partner agencies to try and make sure that we are using the same data and that we're starting from the same place. One of the things we've been talking about is setting up some kind of an interagency working group on data and tools to make sure that, if we're doing something on climate, NOAA is telling us, yes, that's a good climate source to use. So we're not all doing our own research, we're not all just building in whatever dataset makes the most sense, and we're all coming from one federal government point of view. I think that's one of the things that we're really focused on currently, with the proliferation of all these different tools. That said, I think a lot of the tools do rely on EJ screen as a base starting point, and often you see a lot of our data as some of the primary inputs. Oh, sorry, go ahead, Lucas. Go for it, Ben. So I was just going to hop in to echo Ty and Natasha's statements that, again, I think this is a space where we really are looking to collaborate and work more closely together with everyone across the federal government, but also that these tools do have different purposes. The EJI is specifically designed for evaluating cumulative impacts on health, right? Each of these different tools has specific purposes, specific users, and each lends a different valuable perspective to policy makers, to public health professionals, to communities.
And they can use these tools in combination to address environmental injustice. Monica, oh, I'm sorry, go ahead, Lucas. Thanks, Harvey. I definitely endorse everything my wonderful colleagues said there about the different purposes and our coordination. We have an internal saying that building these EJ screening tools is a little bit like a game of Iron Chef: there are only so many nationally consistent census-tract-level data sets, and we're all using similar ingredients for slightly different purposes. And we do stay in coordination. We have different purposes, but when we're using the same data, we try to use it similarly, or express it similarly, so that there's not confusion about the data elements. And I'll continue my new tradition of being one question behind. In terms of the feedback channels on the CEJST, we have a pretty active Google Analytics channel monitoring the site: how many people are arriving from where, how long they are spending on different pages, what files they are downloading. We have had, I believe, five different methods of feedback on the screening tool over the last year, including our request for information. There's a structured survey on the site that you can fill out, linked from every page, about how you're using it and how you might like to see it change. And on every single census tract, people can click a button that sends us feedback on that tract. So if some of that is useful, we could work to see what might be appropriate to provide in terms of our data on how people are using the tool. Thank you so much for clarifying all of that, Lucas. I think that was really important for you to add. And I just wanna build on what you said. There's one other way that I wanted to make sure the group is aware of that people are providing feedback on the CEJST, and that is through email. And I'll share our email. I do not have access to the chat.
So I'm going to just say it: it's screeningtool-support@omb.eop.gov. Thank you. Monica now, please. Yes, how do you, or do you, account for chemical disasters? Because on the one hand, a train derailment can happen in any neighborhood regardless of economic status; an explosion at a chemical facility will not happen in any neighborhood. But if you are hit with a train derailment, chances are in 30 years you will be a legacy community with legacy pollution. So if the CEJST is binary, where do we put those communities that are getting chemical disasters because climate change warped the train tracks? They really don't fit the mold of an environmental justice community, but they may become a Superfund site for all we know. I think that's a very important question, and one along whose lines we've actually heard something. Especially considering what our friends in Ohio are facing right now, it is fresh on people's minds: how can all of these types of tools be responsive to emerging threats, as you have laid out here? So as your committee makes recommendations for further data to be considered, data around infrastructure that might point to where some challenges are, or where potential challenges could be, would be very beneficial. Some of those show up under our climate burdens and under our housing burdens right now. And so recommendations along those lines, of data sets that are available that point to infrastructure challenges, would be a great benefit. I see people freezing on my screen; I hope that you were able to hear most of that. We were. And I'll build on that to say that one of the data sets within EJ screen, and one of the data sets from EJ screen that people use in other tools, is a data set on facilities with risk management plans.
And those are actually facilities where a risk management plan is put in place because they have the potential to have issues, leaks into communities, or something like that. So we put that in there as a preventive measure, to look at facilities that have the potential to impact communities in a way that they have not yet done. Do we have that for trains? I'll kind of wrap up. Oh, sorry, Ty, I don't know if you had another response. Okay. I was just going to wrap up by saying that, again, this is also something that we've definitely considered. We do also include in our tool proximity to facilities with risk management plans, those facilities that house highly toxic chemicals and have some potential for accidental release. But also, I think a very useful thing that the EJI brings to the table is that it's looking at existing vulnerability, existing social and health vulnerability and environmental burden, to allow people to view which areas might be the most impacted by a spill or accident that affects multiple communities. So that's just one extra thing that I think our tool brings. Well, thank you. And I wanted to clarify as well, and I appreciate Ty and Ben bringing up the point of proximity to risk management plan facilities: we have that included as well, as the count of risk management plan facilities that are within five kilometers, for the census tracts that are within the top 10% on that measure. I think that's an important point to add, and we look forward to recommendations from the committee further around this. Other questions from the committee? Yes, Kathleen. Yeah, just a real quick one. There's obviously a lot of spatial data, a tremendous amount, in all of this. I'm wondering to what extent it allows for upstream versus downstream vulnerability, because that can make a big difference.
And I don't know whether the kinds of data sources that you're using to capture some of these vulnerabilities can distinguish between that. Can you clarify what you mean by that, Kathleen? Well, for instance, what prompts the question is this issue of being located near a facility that could potentially, say, have a leakage. Then it matters whether you're the community that's downstream from that or the community that's upstream from that. You mean literally upstream and downstream? Literally, yes. Water-wise, but also airshed-wise, yeah. So I don't know whether that level of detail is captured in any of these data sets or not. Perhaps not. I can take that one. You know, with the water indicator that we use within EJ screen, it does actually map the pollution as it comes downstream and uses a fate and transport model to show the dispersal of those chemicals throughout the water. A lot of our air pollutant indicators don't show downstream versus upstream, but they're showing actual pollution. So it is showing where the pollution is coming from facilities and impacting communities. So I think our tool does address that, and a lot of the other tools, the CEJST and the EJI, use data sets from EJ screen, so they also incorporate those in that way. I think we could always do a better job of incorporating some of that modeling into our tool, but we also try to keep the tool more accessible to communities, and sometimes if we get too detailed into modeling of pollution, it can become a little bit harder to understand. So we let some of our more detailed air and water tools be the experts at modeling those downstream pollutions, but in any way that we can incorporate downstream versus upstream pollution modeling, we do do that in our tools. Oh, I think Eric, did you have a... Oh, you did. Sorry, Eric, please. Thank you.
We've heard the word cumulative several times this afternoon. I was wondering if you could each talk about how you define cumulative and how it's reflected in the data and your tools as well. Yeah, I can take a first stab at that one. So again, the way that we are defining cumulative impacts is in the context of health. We define it as the total harm to human health and wellbeing from multiple environmental burdens and multiple social and chronic health factors acting together over time. So we're thinking about this in terms of communities that experience multiple different types of pollution, right? They're not just experiencing high levels of air pollution; they're also experiencing noise pollution that's causing stress, that's potentially making them more vulnerable to air pollution. We're talking about communities that don't necessarily have the financial resources or health insurance available to them that help them respond to those environmental burdens and protect their own health. That's how we think of it: that model of where we see multiple different sources of environmental burdens interacting with underlying social and health vulnerabilities. I'll hop in here and say, in a lot of ways we are looking at the same thing as Ben; I think his definition would fit a lot of what we're talking about. But I also want to differentiate between a cumulative score and cumulative impacts, because when you talk about cumulative scores, you really need to have the science dialed in in terms of how much each of these different pollutants impacts the body. We are not quite there yet. Cumulative scores are very difficult to do with all of these different pollutants. And for that reason, we've steered clear of combining all of these different environmental pollutants and saying this is the cumulative burden on communities from all of these different pollutants.
But what we do do is allow you to look at the cumulative impact. So you can look across the various environmental pollutants, and you can see that this pollutant is at the 95th percentile in your community and this other pollutant is at the 90th percentile. So you can see how all of these different pollutants are impacting one community. We're just not there with the science to say this is how they all impact you and give you one score, which is something we're a little bit more careful about, being an environmental agency. Can I just do a quick follow-up there? Thank you for your responses. So I guess there are some differences in whether to aggregate or not aggregate in these different tools. But Ben brought up this idea of burdens that are not just multiple but interacting. Is this something that's been a question as y'all are developing your tools? Have you debated including the interaction of these determinants or burdens or factors? I would say that what the EJI does is build on a state-level framework for understanding cumulative impacts on health, which doesn't apply specific weights or specific importance to individual indicators but rather treats them all as acting equally together on health over time. We know that is not going to be entirely accurate, but it does provide a high-level screening overview of where communities face multiple environmental burdens and multiple social vulnerabilities. Again, it's very important to recognize that it is a screening-level tool in that regard. We do try to acknowledge within our documentation, within everything that we do, where we see literature showing interactions between individual indicators, and make sure that people understand that.
So if you go into the EJI and you're looking at an individual community in our mapping tool, you can click on a particular indicator and it will take you to a page that lays out both the dataset behind that individual indicator and an overview of the literature on how that indicator might contribute to cumulative impacts on health and what kinds of factors it might be interacting with. That's the approach that we've taken and where we're at right now. I just wanted to squeeze in, if there's a moment to do so. Eric, we very much appreciate the question that you've raised, because this is essentially exactly what we're looking to interact with this committee around: understanding cumulative impacts and understanding their application in the tool and the methodology, but also different aspects of cumulative impacts, such as whether ranking is a recommendation. So we're very interested to hear recommendations from the committee when it comes to cumulative impacts and the CEJST. Thank you. Can I just follow up? What do you mean by ranking there? Are you talking about ranking the communities or ranking the impacts? Explain that a little bit. Sure. We're not pushing for any specific way of doing it, but we're interested in the whole of the recommendations and whether or not a recommendation would include ranking, and so it could be the ranking of the impacts among communities. But I don't want that to sound prescriptive; it was just an example. Yes. Hey, thank you. Thank you. Now, Walker. Thanks. So through the development of these tools, I think there are a number of ways where the developers of these tools and people working on them can get feedback or collect information on how to advance the tool, how to improve the tool over time. Internal ways and external ways both: consulting with experts in your agency; I think somebody mentioned office hours; having formalized training programs.
Somebody else mentioned email earlier on. I'd just like to hear if y'all have anything to share regarding your respective tools. Are there particular methods of feedback or collecting information that were particularly valuable for you, that really stuck out to you as a good way to get information that helped to advance indicators or methodologies? If I can add on to that question: I'd also like to know how you balance the feedback from users with the grounding of these indicators in the literature and theory, because I know Ben and Matthew talked a lot about that, about looking at the literature in the indicator construction. How are you balancing those two? Yeah, I think those are both very good questions. From the EJ screen standpoint, I think the most beneficial avenues where we're getting feedback from actual users are the training sessions themselves. We do a ton of tailored EJ screen trainings, so we tailor the training to the audience, right? And that's where we're getting the most bang for our buck in terms of feedback from those particular users, during those training sessions. And then we can take that information back, digest it as a team, and determine what gets incorporated. Because, as you mentioned, Harvey, some of this stuff is straight-up wish-list items that we're hearing that in reality are not going to be incorporated into the tool. It'd be awesome, right? But we don't live in that reality. So there are the training sessions, and the email inbox, which is huge; we probably get hundreds of suggestions a year through the email inbox. And then, within the last six months, we have opened up those office hours to the public.
We also hold internal EPA office hours, but in terms of getting information from the public on how they are actually utilizing the tool and what aspects of the tool we could enhance to benefit their use, I think those public office hours have been invaluable. I would absolutely agree. We also present on the EJI to communities across the United States. We do presentations and webinars and answer questions. And often it is when we're interacting with individual communities that we get some really useful perspective on indicators of concern in those communities. Again, they're not always indicators that we're going to be able to address; there are a lot of restrictions in the data for doing something at a federal level. But at least they are things that we can look into addressing. We also get lots of great feedback through our email and through conversations with other federal partners and academic partners, all of which we are taking into consideration as we think about our next iteration of the tool. And it's definitely something, like you mentioned, Harvey, where we're going to have to work to balance what we're hearing from communities and users with best practices from the literature, and make those decisions as we can. But something we want to emphasize with the EJI is that we want it to be founded in the best science available, so we are definitely making sure that we are incorporating best practices from the literature. And I see my colleague Dr. Sharunda Buchanan, from the National Center for Environmental Health and the HHS Office of Environmental Justice, has joined. Dr. Buchanan, do you have anything to add to that? Yeah, thank you so much. And thank you for having us today. I just want to quickly introduce myself.
I actually head up an Office of Environmental Justice at CDC, but I'm also serving simultaneously as the interim director of HHS's new Office of Environmental Justice, and so that coordination is very closely tied and linked. The Assistant Secretary here at HHS, Assistant Secretary Rachel Levine, has been doing a series of community engagements all across the country, and we've had the fortune to be able to actually present the EJI at many of those community engagements. There's lots of good feedback that comes from that, just hearing the questions and the comments and all of what's coming from community members themselves, because we want them to be able to utilize the tool. When we start to focus in on one of the issues that we're concerned about here today, cumulative impacts, they really feel that that is moving them forward in being able to identify environmental injustices and solutions to those injustices. So again, that's been very, very helpful. As I was saying, we're not able to incorporate all the feedback, but it's been very beneficial to engage communities and hear from them what they would like the tool to look like and how they might utilize it. And in some of those community engagements we're talking about academicians, we're talking about community organizations, environmental justice community-based organizations, and all of that feedback has been really, really good. The EJI coordinator's mailbox, which I'm assuming Ben has alerted you all about, we monitor daily; lots of good feedback comes to that as well. Again, the EJI was just released this past August, and the tremendous outpouring of feedback and thoughts from the community has really helped us think about what we can incorporate as we consider the next iteration of the Environmental Justice Index. So again, thank you for having me. And I'll hop in after Dr.
Buchanan, certainly there's a lot that takes place to improve the tool over time. And this is built into our instructions on this tool: by the start of each federal fiscal year, there is to be an update to the CEJST. And to do so, we work to listen to communities. We work to hear from the White House Environmental Justice Advisory Council. We have requests for information that are out. In addition, we hear from federal agencies, our federal partners on this. We get lots of feedback via email. We have our office hours; we just had office hours for federal agencies today. And then there are surveys and survey tools available through the CEJST as well through which people are able to provide that type of feedback. So we really want to hear from all the communities and all the users, so that we are able to continuously update the CEJST meaningfully. Do you have a sense of how many users you have of these different tools? I can hop in from the EJI side. Like I think Lucas mentioned with the CEJST, we do have user analytics that track how often each page is being visited and the pathways that people take through our website. But that doesn't translate down into a number of unique users. So we have some idea of how many times the website has been accessed, but not of individuals using the tool. Just give us a second to log into our Google Analytics and I'll get you there. Okay. It'll keep me one question behind. It'll be great. Yeah, we have the same ability in terms of Google Analytics to tell you the number of hits that we get on a page, but likewise to Ben's response, that doesn't necessarily tell us how many people are actually using the tool. There are so many different ways that people are using the tool, and some of those are tracked and some are not. So it's hard to have a really good answer on that one, I would say. Okay, Ibrahim, you've been very patient. Thank you, Harvey.
So I have a question for Natasha regarding the CEJST tool, about the ranking for disadvantaged communities. As it is now, the tool, especially the map interface, is binary: either the community is disadvantaged or it is not disadvantaged. If you pull the spreadsheet and take a look at the variables, yes, on the surface you could compare, gauge, and rank, but it's actually difficult when you are dealing with about 20 variables. So thinking about this, Natasha, would you recommend or would you suggest, and I would just appreciate if you could share your thoughts about this, having a single variable that could actually provide insight into an aggregate measure, a kind of composite measure of all the variables combined? That way we could actually compare between census tracts, for example. And also having another variable on the spreadsheet that would reflect what we see on the map, whether or not a community is actually disadvantaged. So that's my first question. My second question is regarding the mental health component. We don't have that variable included under the health domain for the CEJST, and I'm wondering whether you could explain the rationale behind excluding mental health. Thank you. Let me lower my hand. I can quickly go back to the users question first. The numbers are: we've had about 26,000 users from the launch of version 1.0 this past fall until last month, mid-January, when we stopped measuring. With the particular metric that I'm counting from the launch, it was about 51,000 page views, 90% new visitors, staying for about a minute and a half on average. But I would say also that the fact that we have a spreadsheet, and that's actually what a lot of our federal agencies will be using and manipulating, means that use is not being captured in those kinds of unique views. And I think that builds nicely to your question on ranking.
So what we have is a thresholds approach, with cut-offs: if a community is above a certain percentile of exposure, then it is considered to in fact have that burden indicator, and if that is met in addition to one of the socioeconomic indicators, then that community is identified as disadvantaged. Now, what you're asking about is ranking, and through the spreadsheet, as you pointed out, one may be able to rank; but if you're using the online interface of the CEJST, you will have a binary yes or no, this community is disadvantaged or not disadvantaged. And we once again welcome any recommendations the committee may have for enhancing the ability to identify disadvantaged communities. And what about the role of data uncertainty in affecting those rankings or decisions? Is that something you have considered? I guess this would be for all three groups. We know, for example, that census data can have margins of error, especially in more rural and remote areas, and yet we're getting a crisp answer: either a score or a ranking or a binary designation. And my question was almost the exact same as that. Sometimes communities have local data, like through Smell My City, that show that the TRI data does not reflect what's really happening in terms of emissions. So how do you account for that? I was also going to mention the census: people may be afraid to take the census. How do you account for that, or are you concerned that you may miss some communities because the data's wrong? And if communities have local data and they send it to you, what would happen? I can start to answer that question. Again, it's a very, very good question and something that people in this field think about a lot. We don't necessarily include measurements of error within our calculations. Our calculations are based on estimates which do have some level of error; at least the census ones have some level of error associated with them.
One approach to that is just making the measurements of error available within the database, so that users can understand areas where high levels of error exist. Aggregating up to the census tract level also helps; with that slightly higher level of aggregation, there's a little bit less error associated with each estimate. Also, one thing that we stress within our documentation is that small differences between, say, two census tracts shouldn't necessarily translate to saying that one is experiencing higher cumulative impacts than another. It's more informative to look at areas with large differences, because again, there are errors associated with all of those data. I'll also mention that this is something that our group at CDC has been working on for a while: evaluating the sensitivity of tools like the Social Vulnerability Index to census error. So hopefully that is something we can start to address further in the future. In terms of answering Monica's question about local-level data that's provided to us: again, we're trying to use nationally consistent data sets, which does mean that if there's data available for an individual community or state, that's not necessarily something that we have the capacity to integrate into our overall national-level assessment. That said, we've designed the EJI to be easily adaptable to local needs and circumstances. Our model is specifically designed so that people can take state- or local-level data, add it into the model, recalculate, and come up with their own calculations that do incorporate those local data. That's a functionality that requires a somewhat advanced user, but they are able to do that. So I think that's my answer. Thank you. And thanks, Ben. I think that was a great answer.
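The recalculation Ben describes, adding a local indicator to a percentile-ranked model and recomputing, can be sketched in a few lines. This is an illustrative toy, not the published EJI methodology: the indicator names and the equal-weight averaging of percentile ranks are assumptions made for the example.

```python
# Toy sketch of an EJI-style recalculation with a local indicator added.
# Indicator names and equal-weight averaging are illustrative assumptions.

def percentile_rank(values):
    """For each value, the fraction of the other observations it exceeds (0..1)."""
    n = len(values)
    return [sum(v > other for other in values) / (n - 1) for v in values]

def summary_score(indicators):
    """Equal-weight mean of percentile-ranked indicators, per tract."""
    ranked = [percentile_rank(col) for col in indicators]
    n_tracts = len(indicators[0])
    return [sum(col[i] for col in ranked) / len(ranked) for i in range(n_tracts)]

# Three hypothetical census tracts, two national indicators
# (say, PM2.5 concentration and poverty rate).
pm25 = [8.0, 11.0, 9.5]
poverty = [0.10, 0.30, 0.22]
base = summary_score([pm25, poverty])

# A community supplies a local indicator (hypothetical odor-complaint counts);
# recalculating just means adding it as another equally weighted column.
local_odor = [2, 40, 15]
adapted = summary_score([pm25, poverty, local_odor])
```

The point of the sketch is the workflow, not the numbers: because every column is reduced to percentile ranks before averaging, a locally collected data set on a different scale can be dropped in without any unit conversion.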
And in terms of integrating local data, or any other data sets, into EJ screen: while we are limited to national-coverage data sets for the data indicators that are incorporated into the methodology, users can always add in their own data. So if a user has a shapefile, for example, they can add that as a layer directly into EJ screen. You can add that from your PC, you can add it from a map service from a different tool, or you can add it from the GeoPlatform, which is kind of like an online library in the sky. And you can bring those more localized, or any other, data sets directly into EJ screen. It's not going to adjust the methodology, right? It's not going to be incorporated into the calculations, but you can at least start viewing it as an overlay, which is really nice for a lot of the community groups that might not have their own GIS system. They can utilize EJ screen and just layer on their own data. Back to this question: when developing the CEJST, we thought a lot about the challenges with the data, and between the beta and version 1.0 we tried to make a couple of changes to address the concerns that were raised. For example, we do now impute income where that's missing, in line with best statistical practices. Part of the decision to include the lands of federally recognized tribes was in response to public feedback that we received and recommendations from our partners at the Department of the Interior, but also recognized what we heard in tribal consultations, where tribal leaders said, this data does not represent what we're seeing on the ground. The question about data and its accuracy at the local level just reminds me of the conversation we had last week, and a question about whether or not there might be data sets that are available at the local level or in certain geographies.
And I think one of the challenges we all face is that we are trying to rely on nationally consistent, publicly available datasets. If this committee has a methodologically sound way of recommending how some kind of dataset could be stitched together in a way that would actually address these concerns, I think we are interested in hearing those ideas; whether we could then incorporate them is yet another question. But you have the ability to think outside the box in that way. Ultimately, the tools are only as good as the data, and I think we all recognize that the data need to be better. In the US territories in particular, we know we need better datasets; we're talking to folks to try to put those data streams in place, but right now the tool reflects what is publicly available. And I 100% agree with everything that Charmilla just said: the tools really are dependent on the data. One thing we do with EJScreen is be very clear that it is a screening tool. A lot of times, if you get down to the local level, you can find better data. So we always treat it as a starting point; as I always say, any results from EJScreen should be verified on the ground when possible, because there is always going to be better data out there, especially at the local level. I'll just add to that, again agreeing completely with Ty and Charmilla, that something I always like to emphasize with the EJI is that it's not designed as a silver bullet. It's not designed to be the be-all and end-all. It's designed to start conversations, not to end them.
And it is always important to supplement these kinds of national-level data with local understanding and community input, making sure you're looking at conditions on the ground and the lived experiences of people in those communities, and not just at the data. I also want to emphasize again that these data aren't designed as a replacement for something like a health risk assessment or an exposure assessment, which are built to get at those more detailed questions of how factors interact to influence health. These are high-level screening tools; the EJI in particular is a high-level screening tool that provides an entry point to understanding the relative conditions of overall cumulative impacts on health. Thanks, Eric. So, many of these tools, though not all of them, use a measure of income in their demographic information. I'm wondering to what extent there's concern about local variation in cost of living, and therefore varying definitions of what low income actually means, and the implications for national comparisons, because this is all relative, right? Right, low income means something different in California versus Ohio. Is that what you're talking about, Eric? Yes. I can start, because that's something we were definitely thinking about in the creation of our tool. It's one reason that, along with an indicator of overall poverty (the percentage of the population living under twice the federal poverty level), we also included an indicator measuring the percent of residents making under $75,000 a year who also experience extreme housing burden. That starts to get at some of those questions around cost of living: we're looking at where lower-income communities also pay very high housing costs. So that's a start to addressing that in our tool. Others?
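The combined income-and-housing-burden indicator just described can be sketched in a few lines. The household records, the $75,000 cap, and the 50% "extreme housing burden" cutoff below are illustrative assumptions, not the tool's actual schema or thresholds.

```python
# Hypothetical household records: annual income and annual housing cost.
households = [
    {"income": 42_000, "housing_cost": 24_000},
    {"income": 68_000, "housing_cost": 21_000},
    {"income": 95_000, "housing_cost": 30_000},
    {"income": 31_000, "housing_cost": 18_000},
]

INCOME_CAP = 75_000   # assumed "lower income" cutoff
BURDEN_SHARE = 0.5    # assumed cutoff: >50% of income spent on housing

# Restrict to lower-income households, then flag those that are
# also extremely housing-burdened.
low_income = [h for h in households if h["income"] < INCOME_CAP]
burdened = [h for h in low_income
            if h["housing_cost"] / h["income"] > BURDEN_SHARE]

# Indicator: share of lower-income households with extreme housing burden.
indicator = len(burdened) / len(low_income)
print(f"{indicator:.0%} of lower-income households are extremely housing-burdened")
```

The point of pairing the two conditions is exactly the cost-of-living concern raised in the question: a fixed income cutoff alone reads the same everywhere, but the housing-cost ratio picks up where that income goes much less far.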
So in the CEJST, the socioeconomic threshold used primarily throughout is based on 200% of the federal poverty line, so it is a fixed number. We do also have area median income as one of the indicators, which is adjusted either to the income of the metropolitan area you're in or, if you're in a non-metropolitan, rural area, to the median income of the state. So there is some flexibility there. And likewise, we're using twice the federal poverty level as well. We're not yet looking at local or even state-level fluctuations, but I think that's an excellent point. And again, even with the health data, this provides that foundational view that you can always build on with local and better data where available. Okay, Lauren. One thing that came up: I think the EJI includes mental health, and I don't think EJScreen does (I was looking pretty quickly), and I know that Justice40 doesn't. I'm curious what the thought process was when you were deciding whether or not to include a measure of mental health. I guess I'll start off. We have just recently included some health indicators for the first time. For our first cut, we looked at indicators that could be more health-related, and based on that we chose ones like heart disease, but it's not the be-all and end-all; we're hoping to expand our health indicators in coming years. We're also relying on the PLACES data from the CDC, so we're looking at just those indicators that have that higher resolution of data. That said, we're likely going to expand those datasets, and mental health is definitely one of the ones we're considering. It just didn't make our first cut. Okay, Walker.
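The area-median-income adjustment mentioned for the CEJST, comparing a tract's median income against its metro area's median when the tract is in a metro area and against the state median otherwise, can be sketched as below. The function name, the 65% cutoff, and all dollar figures are illustrative assumptions, not the tool's documented values.

```python
# Hypothetical sketch of an area-median-income comparison: a tract is
# flagged as low income when its median income falls below some fraction
# of the relevant reference median (metro AMI if in a metro area,
# otherwise the state median).
def low_income_by_ami(tract_median, in_metro, metro_median, state_median,
                      cutoff=0.65):
    reference = metro_median if in_metro else state_median
    return tract_median / reference < cutoff

# The same $40k tract reads very differently against a $90k metro AMI
# than against a $55k state median.
print(low_income_by_ami(40_000, True, 90_000, 55_000))   # compared to metro
print(low_income_by_ami(40_000, False, 90_000, 55_000))  # compared to state
```

This is the flexibility the speaker alludes to: unlike a fixed 200%-of-poverty cutoff, the reference point shifts with the surrounding area's income level.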
Yeah, just a question about a specific indicator for the EJScreen folks: it looked like low life expectancy was included in the supplemental demographic index on that slide, but I think I also saw it listed as a health indicator. Could you talk a little bit about that? "Out of place" isn't quite the right term, but low life expectancy was grouped together with some of those other socioeconomic indicators, so maybe y'all could speak to that. Yeah, so low life expectancy is one of the five indicators incorporated into our supplemental demographic index. It is a health-related indicator that sits alongside other demographic indicators, but we thought it made sense to group them together for the supplemental demographic index. That being said, low life expectancy also appears in the health indicators category if you go into the tool. And I'd say we felt that low life expectancy is one of the most powerful indicators for telling the story of how a community is impacted. So although it wasn't an ACS dataset like the others, we felt it was important enough to bring in. We're just about ready to wrap up this session. Any last questions from the committee? Okay, hearing none, I think we'll start to wrap things up. I just want to thank everyone for their comments and the responses from the two agencies here and CEQ. I want to remind everyone that any conclusions or recommendations made by individuals during this event should be considered opinions of those individuals and should not be considered conclusions or recommendations issued by the committee or the National Academies of Sciences, Engineering, and Medicine. I also want to remind the audience that the committee welcomes written input at any time through the project website, the link to which is in the chat.
And I also want to remind the committee to reassemble for a closed session at 4 p.m. So thank you, everyone. Thank you to the presenters. Thank you for the discussion. And we appreciate your time. Thank you, everybody. I appreciate you all being patient while we've played with schedules and made these meetings happen at all. And I'm sure you all will be hearing from the committee soon. So thank you to our guests. Bye-bye.