Alright, I'm very excited to be here and to be talking to you all. I really appreciate all the interest, and I hope this is helpful. Like Jane said, I'll try to keep an eye on the chat when questions come up. It's a little late here, so I'm not at my most alert and can't make any promises, but there are plenty of spots where we can pause and reflect on questions, and I've got some questions for y'all as well, to help shine some light on the things we'll be talking about today. So we'll go ahead and get started.

I'm Eli Holder. I'm a data designer and sometimes researcher. My firm is 3iap ("3 is a pattern"), and we help clients design and develop data tools that play nicely with humans. Sometimes these client projects raise really interesting research questions that I get hooked on, and those turn into whole research projects, and that's what brought me here today.

Almost every book on data vis will include one of these three charts: John Snow's cholera map, William Playfair's trade balances, and Charles Minard's Napoleon map. When people think of good data vis, they think of these, because good data vis is supposed to be intellectual and enlightening. And that's fine. But I'm personally much more interested in the non-intellectual ways that data vis can influence our attitudes and our behaviors.

For example, these charts on the left are pretty remarkable. They show that some impossibly high percentage of Americans have basically become exercise-crazed over a few years. Now, these charts aren't pretty, and they're not remotely true. But they accomplish something of a behavioral holy grail: they get people to actually go to the gym. In a recent study of, I think, 64,000 members of 24 Hour Fitness, which is a gym chain here in the States,
people who saw this increased their weekly gym attendance by 24%, which is pretty massive for something miserable like going to the gym. (Sorry to anybody who actually likes going to the gym.) So these charts work not because they're beautifully designed; they work because they're psychologically effective. It's a social norm that people are sensitive to, and people tend to change their behavior to match certain perceived norms.

This is another chart that I've been thinking a lot about lately, thanks to a collaborator of mine, Peter Blakely. Charts like this can be influential, but maybe not in the way that you would expect. What's remarkable about this chart is that, by conventional design standards, it's a pretty good chart. It's clean, it's simple, it disaggregates, it draws contrast between categories. And these would all be good things to aim for — if this chart weren't about people, particularly people from marginalized communities. There are a lot of these conventionally good charts showing social outcome differences between different social groups or racial groups. But instead of rallying support for more equitable outcomes, like you might expect, these charts can actually backfire: the way they're framed can lead to toxic conclusions about the people being visualized. Cindy Xiong and I presented a paper on this at last year's VIS conference, and it has some wider implications for visualizing social outcomes that we'll start to unpack during the talk.

Just to get us into the right mindset, I want to start with this question: can information backfire? What are some examples of information that's accurate and well intentioned, but still harmful? If you have any ideas, feel free to put them into the chat, and I'll go through a couple of examples as well. What are some unintended consequences of otherwise good information?
As they bubble up, feel free to type them in. One example comes from the States, in Arizona's Petrified Forest National Park. The park wanted to see if clever signage could persuade people to stop stealing petrified wood, which was a problem at the time. One of the tactics they tried was basically a guilt trip: they posted a sign saying "many past visitors have removed petrified wood from the park, changing the natural state of the petrified forest." This is accurate information — people were, in fact, stealing — and it's well intentioned. But this sign actually led to four times more theft than if they'd just posted a sign saying "hey, please don't steal wood from the park."

Another example of this backfire effect is from a study we just wrapped up a few weeks ago. Political polarization is one of the big issues here in the States, and a lot of times in the news you'll see charts like these, showing big gaps between conservatives and liberals in their support for different policies. What we found was that charts like these can actually make people more polarized. So news reports that attempt, in good faith, to characterize this big issue might actually be making it worse. Again: accurate information, but it can make things worse.

Finally, and more relevant to our topic today, there's a common belief that raising awareness of inequality will directly solve inequality. That's often not the case; it's much more complicated than that. For example, during COVID, a few different studies found that the more that white people in the United States were aware of racial disparities in COVID outcomes, the less willing they were to support public health interventions. And it didn't just erode support for more equitable outcomes — it actually reduced support for all health interventions at the time.
This backfire effect from comparing social groups isn't new. Equity researchers, particularly in education, refer to it as deficit thinking, or deficit framing. It describes an effect where highlighting outcome disparities leads to, essentially, victim blaming: when people see that outcomes differ between groups, they assume it's caused by some personal deficiency of the group with the worst outcomes. This stereotyping effect is what Cindy and I showed in our paper, and we'll unpack how it works in data vis next.

So our first big takeaway is that information can easily backfire, even if it's accurate and well intentioned. This is probably the most important message I can share. Ideally, I'd like to make everybody just a little bit more nervous about putting new information out into the world — especially if you're not sure how it will be perceived, and especially if it's information about people from marginalized communities — because information can backfire in ways that are hard to predict.

So how do we avoid these backfire effects? I don't think the world necessarily needs more rules about which charts are good or bad, and actually some common conventions can be fairly harmful in this context. The more important thing is that we understand how information is perceived in general. This is a question of psychology as much as it is design. So we'll look at two specific concepts from psychology and see how they play out in data vis, and then at the end, if we have some time left, we can walk through some examples that cover some other quirks. (The link to the paper — yes, I'll share that towards the end.)

The first thing to unpack is attribution biases. In intro psychology, you might have heard of things like the fundamental attribution error or correspondence bias.
These are concepts from social psychology describing how we form judgments about other people. This is important because it turns out that even when we're just looking at charts about other people, these biases still apply. And this is core to understanding how something like deficit thinking can play out in a chart, so we'll start here.

Aaron has kindly volunteered for this little bit. We're going to start with a quick exercise to try to wrap our heads around the concept. I'm going to describe a scenario and ask you a few questions; feel free to put your responses in the chat, and we can see how everybody responds.

So, Aaron has volunteered, and we're going to put him on the spot. I'd like everybody to imagine that you're out of town, you've just gotten some coffee, and you're walking out of a coffee shop. There's a small park in front of you, across the street. Something moving catches your eye, and you stop and look, and you see Aaron dancing very aggressively. He's also wearing what looks like a Hello Kitty t-shirt — it's unclear. You see that his phone is mounted on a small stand in front of him, and it looks like it's probably a TikTok dance of some sort. We're too far away to make out his expression, but based on the movements you can see, he's clearly putting a lot of effort into this dance.

So the first question for everybody is: why is Aaron dancing? Feel free to put some speculative answers into the chat. "He's happy." "Because he enjoys it." "He's an influencer." "He's enjoying life." "He's taking some drugs." "Just how I roll." All right, these are good examples. Now we're going to switch it up. In this scenario, we're going to reverse it: Aaron has just walked out of a coffee shop.
And now he sees you dancing very aggressively in the middle of the park, in the middle of the day. Why are you doing this? Why might you be dancing in the middle of the park? "To make a client happy." "Not dancing — a bug flew into my shirt." (That is one of my jokes; thank you for previewing it.) "To embarrass my kids." That's a good one. All right, these are all very good examples.

This scenario demonstrates these attribution errors. Typically, when we describe somebody else's behavior, we describe it in terms of their personal choices or their personal characteristics. So when the question is about somebody else, people tend to answer with personal attributions, like "they enjoy dancing," "they want the attention," or "they want to be TikTok famous." But when the question is reversed — when it's about ourselves — the influence of external forces is much easier to recognize, so we answer with external attributions, like "I lost a bet" or "I was doing it for my niece's birthday." The general idea is that we're very bad at recognizing the external forces that shape other people's outcomes and behaviors, so we assume they're caused by the people personally, as if the outcomes and behaviors were entirely within their control.

This has some weird implications for data vis. Let's take a second to think about this chart on the left. It shows average hourly wages for four different groups of restaurant workers, and you can see that, for example, Group A earns about $7 an hour more than Group B. So why is that? Why do we think Group A earns more than Group B? "More experienced." "Better looking." "More skilled." "Unionized." Things like that. All right. A lot of times, when people explain these differences, they'll explain them in terms of — again — personal attributions about the people within the groups.
For example, you might say that Group A earns more because they're harder working, they have better customer service, they smile more. And these are pretty close to some of the real answers we saw. In reality, these differences probably reflect external factors, or are at least influenced by them: how nice is the restaurant, how busy is it, is it in a wealthy neighborhood?

Deficit framing, in this data vis context, means presenting group outcomes in a way that encourages these personal attributions — it encourages blaming outcomes on people. This is essentially a version of the attribution biases we just discussed. And it's a problem because these group-level personal attributions are essentially stereotypes. If you believe that these outcome differences are caused by differences in personal characteristics, then you implicitly believe in a harmful stereotype: that the people with the worst outcomes are somehow worse people.

Let's walk through an example in a little more detail. This is the same chart, but with the colors changed to emphasize a purple Group A versus a blue Group B. You might look at a chart like this, and your first read is "average outcomes for purple Group A are better than average outcomes for blue Group B." That's accurate; that's a good read of this chart. But a lot of readers will go further and explain the differences in terms of these personal attributions. The takeaway becomes "A has better outcomes than B because the people in Group A are somehow better than the people in Group B, in some way that would produce these outcome differences." And there's obviously nothing in the chart that would support a causal conclusion like that.
The real danger is that if you believe the outcome differences exist because Group A is better than Group B, you implicitly believe that the people in Group B are somehow personally deficient. And that's obviously a problem — that's a harmful stereotype about the people in Group B. To put this in broader context, the deficit thinking effect suggests that even though charts like these only show differences in outcomes, they can be misread as evidence for intrinsic differences between groups of people.

We've walked through this conceptually, and it's consistent with what we know from social psychology. But how big of a problem is this with charts out in the wild? These are results from our last few experiments, where we tested multiple different chart types showing outcome differences between groups of people, like the ones we just looked at. This chart shows the distribution of how strongly people agreed with personal attributions like "these outcome differences are because Group A works harder than Group B." You can see that it's centered — most people are somewhere in the middle. The gray area on the left shows people who disagreed at least slightly, and the orange area on the right shows people who agreed with these personal attributions. You can see that 53% of participants agreed with personal attributions that essentially blame the outcome differences on the people themselves. And since the chart gives no evidence for causal conclusions like this, agreement implies belief in a harmful stereotype about the people being visualized. So this confirms that the deficit thinking effect can be triggered by supposedly neutral charts, and that it can affect a pretty sizable part of an audience.

Our next big takeaway, then, is that charts showing differences in outcomes can be misread as evidence for personal differences between people.
Again, this confirms that the deficit thinking effect can be triggered by otherwise neutral charts.

I see a couple of questions. "Did you see something similar when presenting environmental explanations as well?" We didn't present any of the explanations directly, because the questions in the experiment were multiple choice. It's safe to assume that participants had both environmental and personal explanations in mind, which makes these results slightly conservative — I would expect results to be more extreme outside of the experiment, because in the real world you don't have multiple-choice questions reminding you of external attributions. That's a research topic that's coming up and that I'm thinking about a lot lately.

"Should all charts be neutral?" That's a whole different thing to unpack. My short answer is that there's no such thing as a neutral chart, but that's a different talk — I'd be happy to unpack it at some point in the future.

All right, next section: how design choices can impact this attribution process we just talked about. We'll talk about perceptions of variability in data. The world is obviously messy and complex, but it's hard for our brains to deal with that, so we like to simplify things. Unfortunately, sometimes we oversimplify things in ways that can be misleading. And it turns out that certain charts can have a similar misleading effect — one that can look a lot like stereotyping.

To understand how a chart might nudge somebody towards stereotyping, first we'll look at stereotypes in general. For example, let's consider a stereotype that people in this purple group are especially high earners.
Stereotypes assume that within a group, people are more similar than they really are, and that between groups, people are more different than they really are. So the stereotype implies a distribution like the one on the right, where not only do the people in the purple group earn more, their earnings are all very similar — and therefore all purple people earn more than all other people. In reality, even if average earnings for people in the purple group are higher than average earnings for everyone else, the distributions will still look something like the chart on the left. You can see that earnings are widely distributed within groups, and that between groups there's a lot of overlap. This will generally be true for a lot of social outcomes, especially when the groups are based on something as loosely defined as race.

So the next question becomes: could certain charts create similar misperceptions? Let's walk through another example, this time looking at earnings for four different groups. The distribution on the left is close to reality for income differences between something like racial groups, and again you can see that there are a lot more differences within groups than there are between groups. If we take these distributions and plot them as a bar chart of average income for each group, we end up with the chart in the middle. But when people see this bar chart, they may not imagine the distribution on the left; they may imagine something more like the chart on the right, without any indication of variability. When variability isn't shown, it's easy to assume it's not there. And just like stereotypes, charts like these can create a false impression that people within groups are more similar than they really are, and that between groups people are more different than they really are. On the other hand, if we show the same distributions as a jitter plot, or any other chart that shows the range of outcomes within groups,
it becomes very apparent that within groups there's a lot of variation, and between groups there's a lot of overlap. This actually parallels real life, where the more exposure you have to individual people from other groups, the easier it is to see them as individuals, and the harder it is to stereotype them.

To tie this back to the previous section: when we talked about correspondence bias and deficit thinking, we showed how somebody could go from seeing differences in outcomes to beliefs about people's personal characteristics. What we're proposing here is that there's a middle step that facilitates at least part of that, where chart design creates the impression of an outcome stereotype, which in turn leads to stereotypes about personal characteristics.

In our research, our basic hunch was that charts that emphasize within-group variability, like the jitter plot on the left, help readers see that outcomes vary widely within groups and overlap heavily between groups, and that this should reduce personal attributions, or stereotyping. On the other hand, bar charts, or any other chart that hides outcome variability, should increase stereotyping.

So we tested six different chart types showing four different topics of social outcome disparities. The charts on the left emphasize variability; these include jitter plots and prediction intervals. You can think of a prediction interval as basically showing the range of typical outcomes — that's not exactly what it is, but it's the mental model most people use when they see one. The charts on the right hide variability; these include bar charts, dot plots, and confidence intervals. And we generally confirmed our main hunch.
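To make the overlap point concrete, here's a quick numerical sketch. All the wage numbers are made up for illustration (they're not data from the studies): even when two groups have a genuinely different average, the within-group spread can dwarf the gap, and a confidence interval for the average shrinks with sample size while a prediction interval for an individual stays wide — which is the sense in which CIs hide variability and PIs show it.

```python
import random
import statistics

random.seed(3)

# Hypothetical hourly wages for two groups with a real average gap:
# Group A centers on $22/hr, Group B on $18/hr, and both have about
# $6/hr of person-to-person spread (all numbers invented).
group_a = [random.gauss(22, 6) for _ in range(10_000)]
group_b = [random.gauss(18, 6) for _ in range(10_000)]

# The between-group gap is real...
gap = statistics.mean(group_a) - statistics.mean(group_b)

# ...but the within-group spread is larger than the gap, so a randomly
# chosen B worker out-earns a randomly chosen A worker surprisingly often.
b_out_earns_a = sum(b > a for a, b in zip(group_a, group_b)) / 10_000

# A 95% confidence interval describes the *average*, so its width shrinks
# as the sample grows; a 95% prediction interval describes a typical
# *individual*, so it stays about two standard deviations wide either way.
sd_a = statistics.stdev(group_a)
ci_half_width = 1.96 * sd_a / len(group_a) ** 0.5   # tiny: fractions of a dollar
pi_half_width = 1.96 * sd_a                         # wide: roughly twelve dollars

print(f"gap: ${gap:.2f}/hr; B out-earns A in {b_out_earns_a:.0%} of random pairs")
print(f"CI half-width: ${ci_half_width:.2f}; PI half-width: ${pi_half_width:.2f}")
```

Run with a different seed or sample size and the confidence interval keeps shrinking while the prediction interval, and the share of B-over-A pairs, barely move.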
Across the three different experiments, where we compared the high-variability charts, shown here in blue (jitter plots and prediction intervals), to the low-variability charts, here in orange (bar charts, dot plots, and confidence intervals), we consistently found that the charts that emphasize variability reduced stereotyping relative to the charts that hide it.

That brings up our next takeaway: when visualizing social inequality, design choices can reinforce harmful beliefs — like stereotypes — about the people being visualized. The good news, though, is that this gives designers, chart makers, and communicators some control in solving for these misperceptions. For example, we can show variability and thereby reduce tendencies towards stereotyping. We have some choice in the matter.

It can be tempting to reduce outcomes to simple averages or easy sound bites, like "group X earns more than group Y." As communicators, our instincts push us in that direction — we all value simplicity. But being overly simplistic robs readers of important context, and it can also make disparities worse by misleading audiences towards stereotypes. Since social outcomes are messy, especially in the context of big, complex issues like inequality or systemic racism, rather than chasing this false simplicity, our designs, our charts, our communication should lean into the messy truth.

The charts on the left are overly simplistic, to the point of being misleading. When presenting social outcomes, bar charts, dot plots, and confidence intervals all emphasize between-group differences and hide within-group variability, and this leads viewers to harmful conclusions about the people being visualized. So we should do fewer of these. On the other hand, consider charts like the jitter plots and prediction intervals on the right.
These show within-group variability and make it easy to see how much outcomes actually overlap between groups. That reduces tendencies to blame the differences on the people within the groups, and makes it harder to misread charts as evidence for harmful stereotypes. So when visualizing social outcome disparities, especially involving people from marginalized communities, we want fewer charts that present people as monoliths, like the ones on the left, and more charts that show variability, like the ones on the right.

So these were all of our takeaways. One: information can backfire. Two: showing differences in outcomes can be misread as evidence for differences between people. Three: our design choices matter — they can reinforce harmful beliefs about the people being visualized. And four: lean into the messy truth.

The points we covered in this talk are meant to be an introduction to the research and, in general, to how social psychology can affect data vis. One of the things I'm working on next, and am excited about, is workshops, deeper dives, or — honestly — just more research on how to start applying some of this. Some of the big areas, and if you're involved in these I'd love to hear from you, would be: first, any kind of public communication of social outcomes, so maybe where you're publishing results from your studies publicly. Second, public health. One of the side effects of COVID was the "dashboard epidemic," as people say — the dual epidemics of dashboards and COVID. (And I apologize to Aaron, who's actually suffering through more than a dashboard right now.) A lot of the charts that came out were not great, and risky not just for at-risk communities but for the population at large. And third, people data in the workplace.
That means looking at the ways organizations look at their employees. I think there's potential for good there, but there's also a lot of risk, so that's an area I'm thinking about as well. If you're involved in any of these things, I would love to talk to you from a research perspective. If you're interested, I'm also working on workshops — feel free to email me, and I'll let you know when I have something a little more substantial on the applications. And of course there's my firm, 3iap: if you're working on any public visualizations of these — anything that shows social outcomes or reflects social issues — we'd be happy to help there as well. I'm Eli at 3iap; please don't hesitate to reach out. Finally, these three kind folks helped make all of this happen, and I'm very grateful to them. That's all I've got. Thank you, guys.

Thanks, Eli — what a fabulous talk. I'm going to jump in first and ask: did you look at box plots as another chart choice for representing the spread of the data, as an alternative to the others?

I think box plots would be read pretty similarly to the prediction intervals and jitter plots that we did look at. I suspect that because they show the percentile ranges, that's enough to create the effect of showing the overlap between groups and the variability within groups, so I would expect box plots to have a more alleviating effect on stereotyping. As a general matter, though, they can be misinterpreted in a lot of cases, so I don't use them a lot personally; I would tend towards the jitter plots, if you have the data to do it. But I think box plots are an option for reducing some of the stereotyping effect.
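To see why percentile ranges can reveal the overlap, here's a small sketch with invented scores (not data from the experiments): it computes the five summary values a box plot draws, and the two groups' boxes — their interquartile ranges — overlap even though the medians clearly differ.

```python
import random
import statistics

random.seed(7)

# Toy outcome scores for two groups whose averages genuinely differ
# (means of 70 and 62) but whose spreads are similar.
group_a = [random.gauss(70, 12) for _ in range(5_000)]
group_b = [random.gauss(62, 12) for _ in range(5_000)]

def box_plot_stats(values):
    """The five numbers a box plot draws: min, Q1, median, Q3, max."""
    q1, median, q3 = statistics.quantiles(values, n=4)
    return min(values), q1, median, q3, max(values)

_, a_q1, a_median, a_q3, _ = box_plot_stats(group_a)
_, b_q1, b_median, b_q3, _ = box_plot_stats(group_b)

# The medians differ, but the boxes overlap: Group B's 75th percentile
# sits well above Group A's 25th percentile, so a reader can see the
# groups share a wide range of outcomes.
boxes_overlap = b_q3 > a_q1
print(f"median gap: {a_median - b_median:.1f}; boxes overlap: {boxes_overlap}")
```

The same five numbers are what a spreadsheet or BI tool computes behind a box-and-whisker chart, which is why box plots land in the "shows variability" family alongside jitter plots and prediction intervals.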
And Kylie had a question: do you have any suggestions or examples of these types of graphs where there are real differences between groups, but those differences aren't because of individual characteristics — for example, systemic racism, or other systemic structural inequities? Do you have examples showing those specifically?

I suppose — examples of structural inequities? Is that what you meant, Kylie? Did you want to elaborate?

Hi, yeah. I work in the area of Aboriginal overrepresentation in the criminal justice system, and we have plenty of graphs that look like this. But there are genuine differences between the groups, and when we present them like this, it reinforces that message over and over again. And even if we try to show the variability within groups, it will just reinforce that stereotype, because —

So the differences are so severe that even if you show variability, the distributions still wouldn't overlap?

Yeah, yeah. And we don't know how to present this in a way that doesn't reinforce that stereotype.

I've run into that in a couple of cases — or it seemed like I have — when looking at geographic differences rolled up by something like state, or another large geographic area. You can't see as much variability there, and for the outcomes I was looking at at the time, I did run into that problem: one group, even if you looked at them state by state, didn't overlap at all with the other group. The fix there was to go a little more granular and look at county-level or city-level data, something a little smaller. I don't know exactly how that would translate to this case.
But I would suspect that people are fundamentally similar in our behaviors and the things that we do, and what I find is that even in the cases where there seems to be almost no overlap, when I dig a little deeper you can find some of the common ground. I would be more than happy to talk about that specific case, though — if you want to reach out, we can try to figure out some ways to unpack it.

And a question from Ramakrishna: "A lot of us are pretty much just on Excel, so nothing fancy." We'll probably have a lot of people on the line today who use Excel, Power BI, perhaps Tableau. I'm really curious about the jitter plots — I suspect Excel doesn't do those easily. What's your favorite approach?

I have prepared for this question specifically, with tutorials. Jitter plots in Excel: it is not fun, but you can do it. I've got two little walkthroughs for jitter plots in Excel and Google Sheets, and I believe Tableau and some of the other more approachable data vis tools also have ways of doing it pretty easily. But at least for the Excel case, I've got a rough version, and there are other tutorials around the internet as well. I would also say it doesn't have to be a jitter plot — prediction intervals, or anything that shows the ranges of outcomes, might be easier in some cases, so that's an option to consider as well.

Fabulous, thank you — I'm sure we'll all jump onto that later on. I'm just checking: are you happy to share your slides as well, Eli, with the attendees from today?

It's a big Figma design file, so it's not really shareable.

Oh, I see. I wonder if you could maybe pop that how-to address in the chat, and then everyone will be able to quickly click on it. Natalie's asked a question.
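As an aside for readers following up on those walkthroughs: the usual spreadsheet trick behind a jitter plot is just a helper column of jittered x positions fed into an ordinary scatter chart. The sketch below uses toy data, and the spreadsheet formula in the comment illustrates the general approach rather than quoting any specific tutorial.

```python
import random

random.seed(11)

# A jitter plot from a plain scatter chart: give each group a numeric
# x position (1, 2, 3, ...), then nudge each point sideways by a small
# random offset so overlapping points spread out. In Excel/Sheets the
# helper column would be something like: = group_number + (RAND() - 0.5) * 0.3

JITTER_WIDTH = 0.3

def jittered_x(group_number):
    """Horizontal position for one point in the given group's column."""
    return group_number + (random.random() - 0.5) * JITTER_WIDTH

# Toy outcomes for two groups; each point becomes (jittered x, outcome),
# exactly the two columns a scatter chart needs.
outcomes = {1: [54, 61, 58, 70, 49], 2: [48, 66, 52, 59, 63]}
points = [(jittered_x(g), y) for g, ys in outcomes.items() for y in ys]

# Every x lands within +/- 0.15 of its group's column, so the groups
# stay visually separate while individual points remain visible.
print(points[:2])
```

Narrowing `JITTER_WIDTH` tightens the columns; widening it spreads dense groups out further, which is the only real tuning knob in the technique.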
"Have you had experience working with program designers or decision makers to make this messy truth work — to better inform program design, or to lead to improvements that go after the core reasons for what we're seeing, rather than the surface reasons, which are those personal attributions?" I hope I'm getting that right, Natalie — please feel free to jump in if you need to expand.

Thank you for sharing that link. I'm going to interpret that as: how do we talk to decision makers about this, and how do we use this data to influence better decisions? One of the things that gets in the way is this distraction effect. When we see that outcomes aren't what we want them to be, we jump to blame, and we jump to simplistic solutions. I think that showing the variability — short-circuiting that first jump to blame — can help get us towards better solutions, where better solutions typically start with "okay, let's step back and think about the structural factors here, the environmental factors here; what's actually holding people back?" By discounting our first tendencies, our first beliefs about the causes of whatever issues we're seeing, we can draw focus to solutions that work, because we have a better sense of what the actual problems are.

Wonderful. Great, Natalie. Look, we've still got a bit of time — don't be shy, pop your camera on and ask Eli a question, because you can bet your bottom dollar there's someone else in this pool of 119 people, including myself, thinking exactly the same thing.

I do have a couple of examples of some more applied stuff that we could run through.

Absolutely. Okay.
So this is a chart from the Centers for Disease Control in the United States, and it looks at PrEP coverage. PrEP, for context, is a drug that's recommended for people who might be more exposed to HIV; it basically reduces the chances of transmission. There's a big national push in the States to try to increase coverage within some of the more vulnerable communities. This chart looks at PrEP coverage percentage for three different ethnicity groups relative to a goal of 50% coverage. The first thing we can see is that it's kind of hard to read. So we can give it a quick facelift, and now we can see the values a little more clearly. I've also made the goal range explicit, that 50% coverage. And you can see that, for example, white people here are at 63% coverage and within the goal range. Everyone else is at 14% or lower, pretty far below the goal range. So this chart is generally fine for comparing a stack of percentages, but any volunteers for how it might be misinterpreted? I'm immediately thinking of that personal attribution bias: that people within the Black, Latino, and that "other" category that always gets me, are disinclined to come forward for PrEP, so they're not seeking it, and then there's that deficit judgment of, oh, why aren't they seeking it? Whereas I guess an environmental interpretation might be that PrEP is more easily accessible or available to the white population in comparison to other population groups, maybe. Yeah, so one of the risks is the stereotyping risk: people can read this as behavioral. It can be misread as, maybe white people take better care of their health or care more about it. That's one way this can go wrong.
You also mentioned an external attribution, which is that maybe access is different, and that turns out to actually not be the case here. So there are a few issues with this chart. One is that it's kind of confusing because of the metric they're using: the metric is based on the percentage of the people who are vulnerable, not of the population. That creates the impression, you might read this as saying, that more white people use PrEP than people of color, but that's actually not the case. Then we mentioned the stereotypes; that's another issue. And I think the third issue, maybe relevant for you all, is that this could have a backfire effect and actually reduce uptake. If you're a person of color and you read this, you may think, PrEP is for white people, it's not for me. That may matter for you to the extent that the programs you're evaluating depend on uptake. This is based on a recent project we just wrapped up a couple of months ago, but it seems like, generally, if people see that something is rejected by their in-group, they're more likely to reject it as well. So you might expect that people of color who see this would become more resistant to using PrEP themselves, because it seems like nobody like them is using it. But that's actually not the case, and it becomes a little clearer when you look at a different way of calculating the same metric. This is still looking at usage, but instead of the previous metric, which looked at usage per most-at-risk population, this is just looking at PrEP usage per 100,000 people in the population overall. What this clarifies is that it's actually not differences in usage; it's differences in burden. So this switch uses a metric with a more intuitive denominator, one that scales with the population.
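To make the denominator point concrete, here is a toy calculation (the numbers below are invented for illustration, not the CDC's): the same usage counts can look very different as a percentage of the estimated at-risk population versus a rate per 100,000 of the whole population.

```python
def pct_of_at_risk(users, at_risk):
    """Coverage: users as a percent of the estimated at-risk population."""
    return 100 * users / at_risk

def rate_per_100k(users, population):
    """Usage rate per 100,000 people in the total population."""
    return 100_000 * users / population

# Hypothetical groups with similar behavior but very different burden:
# group_2 has ten times as many at-risk people in the same population.
group_1 = {"users": 630, "at_risk": 1_000, "population": 1_000_000}
group_2 = {"users": 1_400, "at_risk": 10_000, "population": 1_000_000}

cov_1 = pct_of_at_risk(group_1["users"], group_1["at_risk"])      # 63.0
cov_2 = pct_of_at_risk(group_2["users"], group_2["at_risk"])      # 14.0
rate_1 = rate_per_100k(group_1["users"], group_1["population"])   # 63.0
rate_2 = rate_per_100k(group_2["users"], group_2["population"])   # 140.0
```

On the coverage metric, group 2 looks far "behind" (14% vs. 63%); on the per-100,000 metric it actually uses PrEP at a higher rate. The gap in the first chart reflects burden (how many people are at risk), not behavior.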
Here we're also using the jitter plots to show geographic averages, which accomplishes some of that showing of variability within groups and overlap between them, and using the goal ranges to show, here's where we would ideally like to be, instead of having that baked directly into the metric. What you can see is that the behaviors between groups are actually very similar; it's just that to reach our public health goals for Black and Latino people, we have a lot more work to do. There's a higher burden there. And that's the general idea: any chance we get, show what people have in common, show that we're all behaviorally pretty similar, and that it's our environments, what we're up against, that can make the big difference. Just switching over to another question; an attendee has asked a great one, actually. A lot of us often deal with categorical variables, so a quantitative plot like jitter is probably not going to work. How might we better present our categorical variables? So this one is actually a good example of that. PrEP usage is binary: you are a user or you're not. The way to show variability here is that the group averages are still the global averages, but each dot is a smaller geographic region, in this case a state. So you can see that Black people in Florida have the highest rates of PrEP usage. By summarizing the binary or categorical variable up to a smaller geographic area, you can use the geographic variance to show what people have in common. A nice additional effect is that it highlights how much geography can play a role in outcome differences like these.
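The rollup just described, collapsing a binary outcome to one rate per group-and-state so each state becomes a dot, can be sketched like this. The records and field names are my own toy illustration, not the CDC data.

```python
from collections import defaultdict

def state_rates(records):
    """Roll a 0/1 outcome up to a rate per (group, state) pair,
    so each pair becomes one dot in a jitter plot of state averages."""
    totals = defaultdict(lambda: [0, 0])  # (group, state) -> [users, n]
    for group, state, uses in records:
        totals[(group, state)][0] += uses
        totals[(group, state)][1] += 1
    return {key: users / n for key, (users, n) in totals.items()}

# Toy records: (group, state, uses_prep as 0 or 1)
records = [
    ("Black", "FL", 1), ("Black", "FL", 1), ("Black", "GA", 0),
    ("White", "FL", 1), ("White", "GA", 0), ("White", "GA", 1),
]
rates = state_rates(records)
# e.g. rates[("Black", "FL")] is 1.0 and rates[("White", "GA")] is 0.5
```

Plotting the values of `rates` with a little horizontal jitter per group gives one dot per state, which is what turns a flat binary variable into visible within-group variability.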
So that trick of rolling up to a small geographic unit is a good way to show variability with binary or categorical inputs. Did you have anything else in mind?