Alright. Thank you. Alright. Welcome, everyone. I hope everyone's doing well this afternoon. For those of you who don't know me, I'm John Bertot. I'm the Associate Provost for Faculty Affairs, and it really is my great pleasure to welcome you to the 2023-2024 Distinguished Scholar-Teacher Lecture Series. For those of you who don't know, the Distinguished Scholar-Teacher Award was established in 1978 to recognize tenured faculty members who are committed to and have demonstrated excellence in both instruction and research. The award is sponsored and administered by the Office of Faculty Affairs on behalf of Provost Jennifer King Rice, and recipients are selected by prior DST nominees and recipients. I'm very pleased and honored, on behalf of Provost Rice, to recognize Dr. Chris Laskowski as one of our newest Distinguished Scholar-Teachers. As we will hear more shortly, his scholarship is focused primarily in model theory, which is a branch of mathematical logic. In addition to his research, Chris has contributed significantly to the university's learning environment, engaging students in the courses he teaches and through his mentorship. Provost Rice and I congratulate Chris on this much-deserved award and recognition. It's my pleasure now to introduce Doron Levy, Chair of the Department of Mathematics, who will formally introduce Dr. Laskowski. Thank you.

Thanks, John, and thank you all for coming. I'm Doron Levy, Chair of the Math Department, and it really is a great pleasure and honor to introduce Chris Laskowski, our new Distinguished Scholar-Teacher. These introductions sometimes start with the usual information that you can probably just read on the back of the pamphlet you received, but I'll still mention a little bit of it, just to honor Chris yet again. Chris received his PhD from Berkeley in 1987, after which he was a Moore Instructor at MIT.
He then joined the University of Maryland in 1989 as an assistant professor working in mathematical logic. Mathematical logic has a big tradition in our department. At the time Chris joined us, how many logicians were there? Four? Five. Okay, I missed by one. Gradually that number went down, and over the years, as people retired and were not replaced by new logicians, not that long ago our logic group numbered one. If you look at old departmental reviews, the report of the logic group was: we need more people. Which actually happened, as we recently hired two logicians, Christian Rosendal and Artem Chernikov. So from one we now have three, and certainly one of the best logic groups in the country, if not the world. Small numbers don't mean low quality; when talent is concentrated really well, you can do wonders.

So I mentioned something about the trajectory of Chris coming to Maryland. But Chris has been nothing short of a wonderful colleague. Being a wonderful colleague is a combination of many things: a great researcher, a wonderful mentor and teacher, someone greatly involved with our outreach activities, and really a great person to have around. He's here pretty much every day; not many people are. Just seeing Chris in the corridors, greeting him and him greeting everyone, how should I put it? Chris is a fixture of this department. What Chris does for research is something I'm not going to attempt to tell you about; that would sound mostly like my letters for the APT committees. And I think Chris will also not make that attempt, since he promised to give us a very accessible talk. I think all of us know what averages mean, and today we will learn about averages of averages.
But let me just say again that I'm super pleased that Chris has been bestowed this Distinguished Scholar-Teacher award by the Provost's office and by the University of Maryland. Congratulations, Chris. And with that, Chris Laskowski.

Okay. So hopefully people on Zoom can hear as well. Thank you, Dr. Bertot, and thank you, Doron, for the wonderful introductions. I'm very honored to be here speaking in front of all of you and everybody on Zoom. I'd like to thank both Doron Levy and Larry Washington, who are both previous DST winners and who nominated me for this; none of this would have happened otherwise. And in particular, I'd like to thank my wife, Carol de Francis. Quite frankly, without her constant love and support, much of my career would not have transpired. So thank you.

Okay, so before launching into things: there was discussion right at the beginning that today is Mole Day. Happy Mole Day, everybody. Since most of you are mathematicians, you might not have heard of it. You can view it as the chemists' knockoff of Pi Day: for the mole, think 6.02 times 10 to the 23rd. That's why I somewhat cryptically wrote that today is 10/23; the 2023 is redundant. And unlike just reciting lots of digits of pi, one tends to sit around telling mole jokes. These will play a part as we go along.

As to what I'm going to talk about: you can read from the program that my main field is model theory, which is a branch of mathematical logic, which is a branch of mathematics. Great. But this is really a lousy elevator speech, a bad thing to say at cocktail parties. Number one, someone asks: what do you do? I teach. What do you teach? Mathematics. Then comes the usual thing from the majority: oh, I really hate math. Many times they'll say, I got up to level X, and then I had a really bad teacher, and I can't imagine what lies beyond that. But then there are a few people who say: okay, math, what kind of math?
And then you say mathematical logic, and you can see them tense up. Oh, no. They look down at their hands. They're sure the next words out of my mouth are going to be "this statement is false," or some sort of paradoxical thing, and they're trying to remember the Vulcan greeting from Star Trek or something like that. For whatever reason, the popular press on mathematical logic mostly discusses paradoxes: things in and around Gödel's incompleteness theorems, or the collection of all sets not being a set, or Bertrand Russell's barber who shaves exactly those who do not shave themselves; all sorts of things. When I teach set theory, I try to make a special point of this: I think the term paradox is very poorly aimed. A paradox isn't a proof that zero equals one or that everything is falling apart. Rather, when you see something called a paradox, it means you need to be careful. It is a warning that something non-intuitive is about to happen. So rather than bore you with my research, I want to give an instance of something that the popular press calls a paradox, but where one should really look closely at what is going on, because, as we'll see, it has a lot of real-world applications.

With that as a preamble, let's start out and talk about baseball. I said real world, but we'll get to more things beyond that. Let's take two players: David Justice, who played many years with the Atlanta Braves and then moved on to two other teams, and Derek Jeter, who always played with the New York Yankees. Let's look at the years 1995 and 1996, starting with David Justice. Many of you know about statistics in baseball, and one of the most popular measures is batting average: you take the number of hits, in this case 104, and divide by the number of times he comes up to the plate, the number of at-bats. It's just a ratio, and in this case it comes out to 0.253, though no one says the zero.
So we say he bats 253. Clearly, the larger the number, the better you're doing. So in 1995, David Justice had a higher batting average than Derek Jeter. You can note here that Jeter's number of at-bats was somewhat lower; 1995 was in fact his rookie season and he got called up during the year, so he didn't have that many at-bats. Then move on to 1996, and once again David Justice had a higher batting average than Derek Jeter. So in both years, Justice's performance dominated Jeter's. But now take the two years combined. For David Justice, he had 104 hits in 1995 plus 45 hits in 1996, divided by the total number of at-bats, 411 plus 140. That comes out to 149 over 551, in other words a combined batting average of .270. But as you can clearly see, Derek Jeter's combined batting average is .310. At first this should seem kind of odd. How could David Justice dominate Derek Jeter in both 1995 and 1996, but when you combine the two years, it switches? That is what we'll be discussing.

Now, why does this feel strange? Well, suppose we just have numbers: a number A1 bigger than B1, and a number A2 bigger than B2. Then adding the big numbers together, A1 + A2 is bigger than B1 + B2, so dividing by two, the average of A1 and A2 is bigger than the average of B1 and B2, whenever A1 is bigger than B1 and A2 is bigger than B2. Great. But this is not what you're doing with batting averages. Remember, batting averages are hits versus at-bats. When you compute the combined average, you're taking the hits in 1995 plus the hits in 1996, divided by the sum of the at-bats. And in general, (h1 + h2)/(n1 + n2) is certainly not equal to the average of h1/n1 and h2/n2.
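The whole flip takes only a few lines to check. Here is a quick sketch in Python; Justice's counts are the ones quoted above, while Jeter's per-season hit and at-bat counts are the commonly cited figures for this example, not something stated in the talk.

```python
# Simpson's paradox with the Justice/Jeter batting figures.
# (hits, at_bats) per season. Justice's counts are from the talk;
# Jeter's per-season counts are the commonly quoted ones.
justice = {1995: (104, 411), 1996: (45, 140)}
jeter = {1995: (12, 48), 1996: (183, 582)}

def average(hits, at_bats):
    return hits / at_bats

# Justice has the higher average in each individual year...
for year in (1995, 1996):
    assert average(*justice[year]) > average(*jeter[year])

# ...but combining means summing hits and at-bats BEFORE dividing,
# which weights each season by its number of at-bats.
def combined(seasons):
    hits, at_bats = (sum(col) for col in zip(*seasons.values()))
    return average(hits, at_bats)

print(f"{combined(justice):.3f} vs {combined(jeter):.3f}")  # 0.270 vs 0.310
```

The point of the `combined` function is exactly the speaker's: the pooled rate is a weighted average of the yearly rates, with weights proportional to at-bats, so Jeter's strong 582-at-bat season counts far more than his 48-at-bat one.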
The right-hand side is, as in my title, the average of averages, but the left-hand side is not. When I gave a talk like this before, Larry Washington pointed out that this is very close to something the math faculty spends a lot of time teaching freshmen: this is not how you add fractions. There's a factor-of-a-half issue, but even without that, keep it in mind. The thing I want you to take going forward is that care must be taken when averaging averages. We're going to get to more sophisticated things than this, I promise. But this whole flipping phenomenon goes by the name of Simpson's paradox, and from my general comments, I'm really kind of unhappy with the word paradox here, because it's simply something we need to be aware of.

Now, what do I mean by Simpson's paradox? I do not mean Homer. It is not named after Homer Simpson, much as he might like it to be, but rather after the much earlier Edward H. Simpson, a rather noted statistician in the UK, from a 1951 paper. As always when you name something in mathematics, there's some ambiguity: some people call this the Simpson-Yule effect, since Udny Yule was aware of this kind of thing as early as 1903. And I certainly prefer the word effect to paradox; it just says something is happening here.

So this is just something you should be aware of to start: when you're looking at data sets, and baseball certainly provides lots and lots of data sets, stuff like this can happen. In the wild, which just means in nature, without any contriving, it's relatively rare, but it does occur. Among other baseball pairings, the most recent I could find was two Red Sox players, Ellsbury and Lowell, who had the same phenomenon two years in a row.
One dominated the other in each year, though I forget which was which. You can always check these: if you literally Google your favorite baseball player's stats, you'll get a listing of these numbers, and you can try it on your own. Somebody painstakingly went through the cases of all-star players and found these examples just among all-stars; very familiar names to baseball fans. The last one is a little remarkable: Babe Ruth and Lou Gehrig. These were the first three years of Gehrig's career, and Gehrig actually beat Babe Ruth in batting in each of 1923, 1924, and 1925. But if you add all three years together, Babe Ruth dominates Gehrig in the combined average.

Okay, so there's more to it than just baseball. This can happen in the wild, and it can have real-world significance. Imagine now that you are a doctor circa 1990, in a small town, and your specialty is kidney stone treatments. A patient comes in, a woman in a huge amount of pain, clearly a case of kidney stones, and you want to know what to do. What procedure should you perform to ease her kidney stones? Well, it's 1990 and you're on top of your game, and you've read the following paper. At the time it really was the gold standard for treating kidney stones: a long, multi-year study in the UK, published in the British Medical Journal, comparing treatments of renal calculi, which is of course the medical term for kidney stones. Now it gets a little grisly. You can do open surgery, where you go in and remove the stones. There's a percutaneous method, whose full name I won't embarrass myself with, so let's call it PN: you insert a very thin, straw-like instrument and try to extract the stones. And then there was a third, newer method, extracorporeal, meaning outside the body: the idea is that you get a machine and hit the stone with a shock wave.
The idea is to jiggle things around enough that the patient just passes the stone. But that requires extra equipment, and remember, you're in a small town, so that's out of the picture. You're either going to treat this patient with open surgery or with the straw-like PN method. What do you do? Well, you look at this paper, and it's very clear. The data says open surgery succeeds 78% of the time, whereas the PN treatment succeeds 83% of the time. Done deal, right? Except that if you go in and ask whether the patient has small kidney stones, then open surgery actually beats the straw, 93% to 87%. And on the other hand, if the patient has large stones, it's more problematic, so both probabilities drop, but once again open surgery dominates PN. So in either sub-case: if the stones are small, you should use open surgery; if the stones are large, you should use open surgery. But if you don't know the size of the stones, then you should use the straw. Now seriously, imagine you are this doctor. What do you do? Which treatment do you use? And an even more basic question: should you even bother to check whether the stones are big or small? Because if you do, it may confuse you about what to do. It is curious that in this gold-standard paper they don't discuss this issue at all; they just present the table of facts. Admittedly, they were rooting for the extracorporeal method, and much of the paper discusses its pros and cons; it was the newfangled thing. But just reading the paper, this Simpson's effect isn't mentioned at all. So the question, yes?
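As an aside, the doctor's dilemma can be reproduced numerically. Here is a sketch using the success counts usually quoted from that 1986 BMJ study; the percentages match the ones above, but the raw counts themselves are my assumption from the standard retelling, not figures stated in the talk.

```python
# Kidney-stone outcomes as (successes, attempts).
# OS = open surgery, PN = the percutaneous "straw" method.
# Counts are the ones usually quoted from the 1986 BMJ study.
data = {
    "small stones": {"OS": (81, 87), "PN": (234, 270)},
    "large stones": {"OS": (192, 263), "PN": (55, 80)},
}

def rate(successes, attempts):
    return successes / attempts

# Open surgery wins within each stratum...
for stratum, arms in data.items():
    print(stratum, f"OS {rate(*arms['OS']):.1%} vs PN {rate(*arms['PN']):.1%}")
    assert rate(*arms["OS"]) > rate(*arms["PN"])

# ...but pooling reverses the order, because OS was used mostly on the
# harder large-stone cases, while PN got the easier small-stone cases.
pooled = {
    arm: tuple(
        sum(col)
        for col in zip(data["small stones"][arm], data["large stones"][arm])
    )
    for arm in ("OS", "PN")
}
print(f"pooled: OS {rate(*pooled['OS']):.0%} vs PN {rate(*pooled['PN']):.0%}")
# pooled: OS 78% vs PN 83%
```

Notice the lurking variable in the counts themselves: open surgery was attempted on 263 large-stone cases versus 87 small ones, while PN was the reverse, and that lopsided assignment is what drives the flip.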
Well, if you go the danger route, then certainly open surgery would presumably be riskier, but in either case its success rate is dominating. Anyway, I can imagine that each of you, being this doctor, could have a different opinion about how to answer this, but at least it's an issue.

Okay, let's continue. Sometimes, and this is going to be the bulk of the talk, what appears to be Simpson's effect can really be a hint at some missing causality in the data set. And if I've learned one thing in preparing this talk, and in thinking about these things for a number of years, it's that at some level statisticians really don't understand causality, or even what the definitions for it should be. This isn't necessarily a failing; it's just a really involved problem, and there are a lot of traps.

So let's first take a completely toy example that gets the idea across. Question: should students study for a test, yes or no? Let's do a scatter plot: the number of hours studied on the x-axis, and the score on the test on the y-axis. We look at these various dots, and what do you conclude? Here's all of the data; you fit a best-fit line, and clearly things are not good for studying: the best-fit line decidedly slopes down. In other words, if you're a student, you should not study for a test. Now, you can almost anticipate from the general shape of the data what comes next. Suppose I tell you that all of these top points are graduate students, and all of these are undergrads. Now, within these subpopulations, let's fit the best-fit lines. And clearly, if you are a graduate student, you should be studying for the test, and if you are an undergraduate, you should be studying for the test.
But scrolling back, if you're a student overall, you should not be studying for the test. So one could easily take this toy data set and write two different, contradictory, compelling papers about the conclusion. And in fact, this is the thesis of the talk: by means of subdividing, the same data set can be used to justify two contradictory conclusions.

Okay, so that was all a toy, and we got a good laugh out of it, but this really came up. Probably by far the most famous example was the issue of graduate admissions at UC Berkeley in 1973. Start with the core fact: in fall 1973, 44% of all male applicants to graduate school were accepted, but only 35% of female applicants were accepted. And this is a pretty huge data set, about 12,000 applicants; these are the precise numbers, with 41% admitted overall. If you do any sort of statistical analysis, the difference is highly statistically significant: the chi-square value is about 110, which is huge, and the probability that this could happen by chance is very, very small. This would certainly be in the realm of the legal system: should Berkeley be sued here, say, for sex discrimination in admissions? And there certainly exist many parallel situations in today's world that have entered the legal system. Given this, Berkeley was quite alarmed at what was going on. So, for a moment, think to yourself about what could be causing this. The provost commissioned a report, and the result became a seminal paper in statistics; it was published in Science, and it made a big noise when it came out. All three authors were professors at Berkeley. Peter Bickel was a young statistician at the time, who wrote one of the standard textbooks for introductory statistics; Hammel was an anthropologist.
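The significance claim for the aggregate table is easy to verify. A sketch follows; the applicant totals of 8,442 men and 4,321 women are the figures usually cited for this study, and the admitted counts are reconstructed from the 44% and 35% rates above, so with this rounding the statistic lands near, though not exactly on, the quoted 110.

```python
# Chi-square test of independence on the aggregate 2x2 Berkeley table.
# Applicant totals are the usually cited ones; admitted counts are
# reconstructed from the 44% / 35% rates, so the result is approximate.
men_applied, women_applied = 8442, 4321
men_admitted = round(0.44 * men_applied)       # 3714
women_admitted = round(0.35 * women_applied)   # 1512

a, b = men_admitted, men_applied - men_admitted        # men: admit, reject
c, d = women_admitted, women_applied - women_admitted  # women: admit, reject
n = a + b + c + d

# Shortcut chi-square formula for a 2x2 contingency table.
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
print(round(chi2, 1))  # about 95.8 with these rounded counts; huge either way
```

For one degree of freedom, anything above roughly 10.8 is already significant at the 0.001 level, so a statistic in the 90-110 range puts the chance explanation far out of reach, which is exactly the speaker's point.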
I couldn't find the field of O'Connell, but they really studied what was going on. They went one by one through the admissions data for each of the 85 departments on campus to see what was happening, and the data here is accurate. Rather than go through all 85, consider the six largest departments on campus, labeled A through F. At first blush you can see some wild things. In department A, 82% of the women were admitted versus 62% of the men; several of the others are very close to even. So really, the only place with a big difference in admission rates is A. But if you look a little more closely, one thing you'll observe is that there is a huge difference by department in the number of applicants. Say department A is engineering: 825 men applied to it but only 108 women, and the admission rate into engineering was sky high, around 70%; if anything, they favored women somewhat. On the other hand, take English, or really a compendium of English and comparative literature: there, almost twice as many women applied as men, but note the stark difference in the admission rate, only 34 or 35%, as opposed to up in the 60s and 70s. And if you look at department C, 560 men applied versus only 25 women, yet this was a relatively easy department to get into, and so on. So there are big, big swings in the male-to-female ratio of applicants, and the admission rate is far from uniform across departments. And this is really what explains things. If you go down to the departmental level, all 85, I'll now quote from the summary paragraph of their paper: examination of the aggregate data on graduate admissions to Berkeley shows a clear but misleading pattern of bias against female applicants.
Remember, we had this huge chi-square score of 110. However, when you break it down to the disaggregated data, department by department, there were few decision-making units showing statistically significant departures from expected frequencies in either direction, and about as many units appeared to favor women as favored men. What was happening is that women were applying, or being tracked, to highly competitive departments, whereas men were applying to departments that accepted many more people; the mix was different. One thing I found surprising is just a couple of sentences further down: the graduate departments that are easier to enter tend to be the ones that require more mathematics in the preparatory curriculum. And I'm curious: is that true at Maryland? Again, this was 1973. Engineering here admits an awful lot of people, but is the engineering acceptance rate much lower or much higher than for English? How do English and math compare? There are a lot of things we could ask our Associate Provost. Does that final sentence still hold at Maryland in 2023?

Okay, let's finish with all of that, and I have to throw this in. What was Avogadro's favorite Olympic event? The mole vault. None of these are any good, but you have to throw them in every once in a while. Okay, onward.

So that was all 1973; now let's skip ahead almost 50 years to a really odd thing, at first blush, with COVID-19 in its early days. This was put together in a blog post by Dana Mackenzie at UCLA and Jordan Ellenberg, a professor of math at the University of Wisconsin. You can almost view Ellenberg as a hometown guy: he went to high school in Maryland and was one of the winners of our high school mathematics competition. He was a very bright guy.
I also had the pleasure of actually teaching him, before I came here: while I was at MIT, I was teaching a graduate course, and he would walk over from Harvard, where he was an undergraduate, to take it. Anyway, a good guy.

So let's look at a couple of slides of data from the CDC from early in the COVID era; things really started going in March 2020, and we're going to concentrate on just the first four or five months. I know this slide is hard to see, but focus on the white non-Hispanic cases: roughly a third, 35%, of the cases were white non-Hispanic. Up here, which you can't read, is the Hispanic share, almost the same, and the Black share is here; those are the main groups. By contrast, if you look at the deaths, the white non-Hispanic share goes from roughly one third of the cases up to about half of the deaths. At first glance this goes completely against what everyone was saying in the newspapers: that the pandemic, especially early on, was hitting minority communities really hard and having a profound effect there. So why is it that among white non-Hispanics, whom you could think of as privileged people, there were a third of the cases but half of the deaths? What is going on?

Let's try to answer that by looking at a breakdown of the same March-to-June CDC data. Here there are lots and lots of cases, a million-plus cases and 100,000-plus deaths; this is not a small data set by any means. We're going to break things down by age, and all the percentages shown are for white non-Hispanics. Among people in the 30-to-49 range, 26.5% of those who had COVID were white non-Hispanic.
But in that range, even though whites had 26.5% of the cases, they had only 16.4% of the deaths. If you look a little more closely, you'll see, first of all, that in the zero-to-four category there were thankfully very, very few cases, especially early on; I read somewhere that among the 100,000 deaths, only 13 were in this category, so we can ignore it as microscopic. Then, going line by line: if you were white, your probability of dying was less than that of your compatriots in every one of these age categories. Fine: the better outcomes are due to privilege, better health care, better diagnosis, those fingertip devices for checking your blood oxygen levels. But still, if whites were doing better in every single category, how does that explain going from roughly 35% of the cases to half of the deaths? The key, not written on the slide, is that white people are old. This can really be seen, if not in the assembled room, then in the general population: 9% of whites are 75 or older, but only 3% of non-whites are. And in those two oldest, most problematic categories, I now need to put "better outcomes" in quotes, because quite honestly the death rate there was absolutely huge; the lion's share of all the deaths were in those two categories. So what was happening was that there were just so many more very elderly whites, and that dominated the overall numbers. That explains the first graph, but the explanation is somewhat deeper than what one might expect, so you want to be really careful.

Okay, continuing on: what do you get if you cut an avocado into a large number of pieces? Guacamole. Yes, everyone, you're good.
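Before leaving the COVID numbers, the age-mix effect is worth seeing in miniature. The figures below are deliberately made up, not the CDC's; they're chosen only to mimic the pattern in the talk: a lower fatality rate in every age band, yet a larger share of deaths overall.

```python
# Hypothetical (cases, case_fatality_rate) by age band for two groups.
# The "white" cases skew much older; fatality is lower in BOTH bands.
white = {"under 65": (550, 0.01), "75 and up": (150, 0.20)}
other = {"under 65": (1270, 0.02), "75 and up": (30, 0.25)}

def deaths(group):
    return sum(cases * cfr for cases, cfr in group.values())

white_cases = sum(c for c, _ in white.values())    # 700 of 2000 cases
total_cases = white_cases + sum(c for c, _ in other.values())
white_case_share = white_cases / total_cases
white_death_share = deaths(white) / (deaths(white) + deaths(other))

print(f"cases {white_case_share:.0%}, deaths {white_death_share:.0%}")
# cases 35%, deaths 52%: the older age mix outweighs the lower rates
```

The within-band comparisons favor the first group (1% versus 2%, 20% versus 25%), yet pooling over the age mix flips the headline number, which is Simpson's effect again.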
Okay, well, we needed that, especially since another not completely cheery topic is coming up. This is something known as the low birth weight paradox. It was studied extensively by Allen Wilcox, a researcher at NIH in Research Triangle Park. To get to it, I need to define three things. First, the median weight of a newborn is about 3.6 kilograms. Second, for years a baby was labeled LBW, low birth weight, if his or her birth weight was at most 2.5 kilograms. Third, for the chart that's coming, the mortality rate among these babies is the number per 1,000 that do not survive their first year. A lot of data was collected on this, split by whether or not the mother smoked during pregnancy. Among the non-LBW babies of maternal non-smokers, the death rate was 11.1 per 1,000, so roughly a 1% chance of dying, or, let's be positive, a 99% chance the baby would survive the first year. Among maternal smokers it's slightly worse than that. But if you go to the low-birth-weight babies, then 210 out of 1,000 born to maternal non-smokers did not survive, whereas if the mother smoked during pregnancy, this rate dropped to 114, roughly cut in half. This was published by a fellow named Yerushalmy, and I'll say more about him; it was actually an important paper, in an odd way. At this moment you might want to think about what is going on. And hint: I am not advocating that mothers smoke during pregnancy. Yeah, that's going to be the kicker, and I'll be able to illustrate it with a series of pictures. Great.

So what is going on? Let's start with the distribution of birth weights. This particular data is for births in Norway, but I think it's basically universal.
The average birth weight is about 3.6 kilograms, and this is very roughly a normal distribution; with numbers this large, what else could it be? However, the tail to the left is much longer than the tail to the right. On this distribution, doctors have somewhat arbitrarily set the low-birth-weight cutoff at 2.5 kilograms, which in the standard population cuts off this left tail. Now, the grim thing to note is that this tail includes almost all preterm babies, but also ones with genetic problems, where something just wasn't right. Here I hesitate to write everything in red, as if things are doomed, when in fact the good news is that about 80% of these babies did survive at least the first year; but it really is the flattened part of the tail. Now, another thing that is known, and has been tested many times, is the effect of the mother smoking through pregnancy. It decreases the average birth weight, and it still happens a lot. This can be quantified: it's known that maternal smoking during pregnancy decreases birth weight by approximately 200 grams. So roughly, we get the same normal distribution shifted to the left by 200 grams: the peak moves 200 grams down, same general shape. But then, and this to my mind is really the problem, the definition of low birth weight is not changed depending on whether or not the mother smokes. As a result, if we cut at the same point, there's now a much bigger tail in the blue curve. That is exactly the point: among mothers who smoke, there are many, many more low-birth-weight babies, but many, many of them are fundamentally healthy. I'm not saying that smoking is a good thing, but fundamentally there's not much wrong with them.
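The picture of a shifted curve against a fixed cutoff can be made quantitative with a toy normal model. The 3.6 kg mean and 200 g shift are from the talk; the 0.5 kg standard deviation is my assumption, chosen only to make the tails concrete.

```python
# Fraction of births falling under a FIXED 2.5 kg cutoff when the whole
# normal curve shifts left by 200 g. Mean and shift are from the talk;
# the 0.5 kg standard deviation is an illustrative assumption.
from math import erf, sqrt

def normal_cdf(x, mean, sd):
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

CUTOFF = 2.5   # kg; doctors keep this fixed regardless of smoking
SD = 0.5       # kg (assumed)

p_nonsmoker = normal_cdf(CUTOFF, 3.6, SD)      # baseline curve
p_smoker = normal_cdf(CUTOFF, 3.6 - 0.2, SD)   # curve shifted left 200 g

print(f"LBW share: {p_nonsmoker:.1%} non-smokers vs {p_smoker:.1%} smokers")
# roughly 1.4% vs 3.6%: the shift more than doubles the LBW tail,
# and the extra babies in it are mostly otherwise-healthy ones
```

Under these assumptions the fixed cutoff more than doubles the low-birth-weight fraction for smokers without the shifted babies being any less healthy, which is exactly the dilution effect the pictures show.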
So if you're looking at a ratio, you're dumping lots of fundamentally healthy babies to the left of this divide rather than the right. To summarize: because of the shift in birth weights, with no corresponding change in the definition of low birth weight, many more fundamentally healthy babies of smoking mothers are put into the LBW category. So among the low-birth-weight babies, there's a greater share of fundamentally healthy ones, and that is what causes the mortality ratio to go down. Just to be clear, this does not mean that the mother's smoking is good for the baby; it is what causes the shift to the left. However, this Yerushalmy was a biostatistician trained at Johns Hopkins who moved up through the ranks, and in the 1950s he created the biostatistics group at Berkeley, so he really had a national following. He was also a smoker. And this was right at the time, 1968 to 1971, when there were big discussions about smoking: everyone smoked all the time, but the CDC and others were trying to cut back, and FDA warning labels were coming in. He used this data, the table I showed you, as an argument in favor of maternal smoking, or at least that it wasn't bad, because look, here's actually a benefit. With hindsight, I find this quite shocking. But worse, the paper he submitted got into the general press. Two titles I managed to find: in the Boston Record American, which I don't think still exists, "Mothers Needn't Worry: Smoking of Little Risk to Baby," from 1971. Worse, "In Defense of Smoking Moms" in Family Health magazine; I remember that one from being a kid. So this really got out there. But to see that this effect doesn't have to be about smoking versus non-smoking, here is a cheerier example of the whole thing.
So Wilcox, who has been studying this effect: if you concentrate just on babies born in Colorado, then, maybe because of the elevation, who knows, the percentage of babies born at low birth weight is significantly above that of the US population as a whole. But for any other purpose, they're just as healthy as babies in other states. So I don't mean this to be a smoking versus non-smoking thing; this one I view as cheery. If anything, I'd say to epidemiologists that the big problem might be fixing this low-birth-weight threshold at 2.5 kilograms come hell or high water; that is what's causing these seeming paradoxes.

In any event, the takeaways. Looking at all of these examples together, the main one is that each time we were looking at a single data set, and by various means of subdividing, the same data set can be used to justify contradictory conclusions. So beware of headlines and sound bites; if there are any journalism majors here, you want to be really careful about how you interpret these things. And finally, to link back to my title: in short, averaging averages can be a perilous undertaking. With that, thank you very much for listening, and I hope you'll enjoy the snacks.