All right, can you see the slides right now? Yes, I can. Excellent. So welcome to this workshop, Introduction to Data Literacy. To introduce myself, my name is Russell Peterson. I am a research and instructional services librarian. I work mostly with the Modern Languages Department and the Capstone Center for Student Success, but I also have research interests in disinformation, conspiratorial thinking, and cultivating critical thinking skills, especially around how data is presented, both in scholarly sources and in the information we interact with every day, like on social media or in more traditional media spaces. And that is also my contact information if anyone wants to get ahold of me after this presentation. Some goals for today's session: I want every participant, whether participating now or watching later, to understand what data is and how it's used for rhetorical, commercial, and political purposes; to identify errors and discrepancies in how data is represented; and to engage with data visualizations in multiple contexts with a healthy degree of skepticism. That doesn't mean seeing every single visualization as inherently suspect, but knowing when it's the appropriate time to say: let's actually question how this data was gathered, how it's being represented, and what kind of narrative or argument is being constructed here. And lastly, to be able to communicate your interpretations of data with your peers or with anyone else you interact with when you come across a data visualization or some other representation of data. So let's first talk about what data is and how it's used for everyday purposes. 
To define it first: data is information, either quantitative or qualitative, and it's often collected for a specific purpose. This could be survey data trying to gauge, for instance, how many people are using a particular commercial product, say a makeup brand or a certain technology like the iPhone, so it could serve a specific commercial purpose. Or it could be data gathered to glean how people feel about a particular issue. The Pew Research Center, for instance, gathers qualitative data through phone interviews and focus groups, and also captures how people respond when answering survey questions. That data could be raw, or it could be cleaned. It could be organized and displayed in a tabular way, like a chart in an Excel spreadsheet, or represented as a data visualization like the one you can see on the right-hand side here, a heat map of Eastern Europe. Data is given meaning by the context it's situated in: how it is presented, how it is gathered, whoever is gathering it, and whether it's being represented through a chart, a map, or something else. That context is what gives the data meaning, not just what questions it's asking, but also who is responsible for presenting it. Data is rendered visually to aid in pattern finding and comprehension. It's also definitely used to persuade, to get people to agree with a certain argument or to give a sense of clarity of purpose. 
In the scientific community, data is used to test a hypothesis, but in other realms, like the business world I mentioned earlier with technology, data is used to justify decision making by conveying a numerical authority that's difficult to challenge. If Apple, for instance, releases a report saying that sales of all of its products went up 20% this past quarter, it's hard to challenge that on its face, because numbers by themselves convey an authority that can be hard to dispute unless you know something more, unless you're more skeptical about how that data is being represented. If they say iPad sales went up 20% this past quarter, but you see that over a longer stretch of time overall sales are declining, that makes you question how they're actually presenting that data. So the three main ways data is used to convey authority: rhetorically, it's used to win an argument of some sort; commercially, to sell a particular product; and in the political sphere, to gain power. If people release certain polls indicating that 75% of Americans agree on a particular issue, or do not like a particular policy, that gets used to justify putting a policy in place in the first place. So what is data literacy? How do we make sense of this skill of having more fluency with data and being able to interpret it? I've cited two works here by librarians who have helped define what data literacy is in our particular field. 
Chantel Ridsdale and colleagues say data literacy is the ability to collect, manage, evaluate, and apply data in a critical manner, while Javier Calzada-Prado and Miguel Angel Marzal define data literacy as the component of information literacy that enables individuals to access, interpret, critically assess, manage, handle, and ethically use data. For today's workshop, we won't really be talking about collecting or applying data, but we will definitely be talking about how to evaluate data and critically assess it, as well as how to access it through some of the databases here at the library. So, data in context: how was data gathered? How can we establish context and give data meaning? One way to determine context is by asking some sense-making questions. Who: what is the entity or organization responsible for gathering the data? That matters if it's a well-respected research body like the Pew Research Center versus a more partisan source or a more commercially focused source, which could lead you to question how the data was gathered and whether we should trust the entity behind it. What: what kind of data is being collected, and what is being measured? Is this quantitative data, just by the numbers, like how many people responded to a particular question? Or was the data gathered through telephone interviews, and what are the implications of data gathered through landlines, which not many people own anymore, versus data gathered through cell phone interviews? Where: where is the data being gathered? Is it a particular setting or location? Is it relevant if it's only data gathered in the United States? If it's data gathered from two very different places, how does that play into whether the data is trustworthy or not? When: what is the exact timeline for collecting this data? 
If the data was collected 20 years ago, is it still relevant today? Why: for what purpose was this data gathered, and are those intentions stated or inferred? Some data has to be gathered by law, like crime data, for instance, so you have to consider the purpose behind the data gathering as well. And finally: what was the method of collecting the data, and in what medium did it take place? Internet polls, for example, can have a lot of inherent problems, so it's important to see in what manner the data was gathered so you can properly evaluate whether the source accounted for the unique challenges that a particular medium presents. Now I'm going to have you look for data on your own. We're going to do this by going to the Statista database and looking up two particular studies. We'll simply extract keywords from these research questions and apply those keywords in a search in Statista. For "What are the number of COVID-19 vaccine doses administered in the US by manufacturer?", we can extract keywords like COVID-19 vaccine and manufacturer. The same goes for "How many high school students are using nicotine vaping cartridges?": high school students could be one keyword, mixed in with nicotine vaping. So I'm going to exit out of this presentation here so we can go to the library's website, and I can show you how to actually get to Statista. And I see we have a new person entering, hello. We are just now starting this activity of looking up certain data sets, and we're going to ask some sense-making questions: who is the entity responsible for gathering the data? 
What is the data? When was it gathered? Where was it gathered? Why, for what purpose? And in what medium was it gathered? So I'm going to show everyone how to actually get to Statista. I'm at the library's website, which is lib.ua.edu. At the home page here, I'm going to click the databases icon, filter down to the S databases, and then scroll down to Statista. All right, here we go, now I'm at Statista. Was everyone able to get to where I am right now? Yeah. Yes, okay, great. I see a thumbs up, excellent. So I'm just going to go back to the slide right here, and I'll allow a few minutes for everyone to look up these studies and then share your conclusions with us. In the chat, put down some information about what you find. So I think you did find the study outlining the number of vaccine doses administered: Pfizer is definitely the clear majority, followed by Moderna and then J&J with 15 million. Yeah, so let me go to Statista and show you that study. Searching vaccine doses, manufacturer, right? Okay, so this is the exact study: as of October 17, 2021, these are the number of vaccine doses administered by manufacturer. In terms of sense-making questions, what were you able to determine about this study and how the data was gathered? It was gathered by the CDC, right? Yes, we can see from the source it was gathered by the Centers for Disease Control. If you click on their source link, they go even more in depth, telling you how they were able to collect this information. They got this data from all of their vaccine partners, which includes clinics, retail pharmacies, and long-term care facilities. And they even give some disclaimers: it counts people as fully vaccinated if they received two doses on two different days or received one dose of a single-dose vaccine. 
Yep, and this data also includes boosters. And you can surmise that, since it's the CDC, they're collecting this information for governmental purposes; that is their charge, their mission, to track health outcomes for every American, especially during the pandemic. It is their job to keep track of this information. So based on that, I've assessed that this is a worthy source to cite on how many vaccines are being administered in the United States. Then for the nicotine vaping high schoolers study: before I do this search, what were people finding about this study, in terms of how the data was gathered and where it was gathered? Feel free to put that in the chat or to speak up, okay. All right, so this one also looks like it was from the CDC. Yeah. Okay, so we'll do another Statista search: nicotine vaping cartridges. Okay, so let's say prevalence of nicotine cartridge vaping among students in the US by grade. You might have found another CDC study, but the one I'm looking up here seems to be from the Pew Research Center. But there was one other, the latest study from 2020. Okay, so you found a 2020 study, excellent; it's a collective of all of them, and you'll put it in the link. Okay, so we can also look up that study: nicotine vaping 2020. I'll put the link in the chat. Okay, great: share of US high school students using e-cigarettes. So this is the percentage of US high school students using e-cigarettes, and we can see a sharp jump from 2017 to 2018 and then a slight dip in 2020. You can go to the source link specifically and see this is the Morbidity and Mortality Weekly Report released at the end of 2020, giving a more in-depth description of the use of e-cigarettes. 
And again, that is the charge of the CDC: to highlight the health impacts of a particular product like e-cigarettes, especially among younger people. Excellent. So going back to our presentation here, I think it's worthwhile to develop these skills of figuring out where data is coming from, so that you can properly assess the meaning of that data. It often won't be presented in those handy charts you see in Statista; it might be a screenshot of a graph you see online on Twitter, or it could be represented in multiple different ways. And even if data is coming from a trustworthy place, you also have to pay attention to how the data is being delivered and whether it's being used to mislead in a certain way. That brings us to the next section of our presentation: being able to spot when data, whether intentionally or unintentionally, is being represented in a misleading way. To quote a seminal piece on this topic, here is Darrell Huff, who wrote How to Lie with Statistics back in 1954: "Statistical methods and statistical terms are necessary in reporting the mass data of social and economic trends, business conditions, opinion polls, the census. But without writers who use the words with honesty and understanding and readers who know what they mean, the result can only be semantic nonsense." So the numbers by themselves can't really tell you a story. They can't make an argument or persuade you on their own. 
It takes writers who are able to ethically use that data to convey a narrative, and it takes readers who are able to understand and interpret that data in order to make sense of it, and to exercise their judgment in saying: this is an appropriate use of data, or this is an inappropriate use, or someone is not being forthright with how they're conveying information. So I'm going to go through a couple of examples of how data can be misused in this way and how it can be used to mislead. The first example is something called sampling bias. Sampling is the statistical practice of selecting a small group of individuals to estimate certain characteristics about a whole population. Stronger claims can be made when a sample is more representative of the whole, rather than just a small, unrepresentative section of individuals. Sampling bias occurs when members of an intended population are overrepresented or underrepresented, and it also occurs when the people gathering the data are compromised in some way. A common example of sampling bias is a slogan you see for a lot of dental products: nine out of ten dentists recommend using this certain toothpaste. There are certain assumptions being made there: that hundreds or tens of thousands of dentists have been surveyed, that all of those dentists actually responded to the survey, and that they definitely don't have an incentive to lie if they were doing this of their own volition. So there are several assumptions baked in when you come across a statement like this. Who was being surveyed? What was the sample size? 
It could be the case that only 50 dentists were surveyed, which is definitely not representative of the entire dentistry profession. And who are those particular respondents? Are they actually dentists? Were they required to show their credentials, or were they just people found on the street claiming to be dental professionals? And are there incentives to lie? Were the people surveyed paid to give their opinion, for example, or does responding benefit their career in some way, say if the survey comes from a very prestigious outlet? Those are some reasons you might want to think about whether there are incentives for people not to be forthright. This also comes into play when people are asked about very sensitive topics like money, drug use, or sex; these are topics where people tend to either exaggerate or conceal their behavior. Some examples of sampling bias: an alumni report says the average University of Alabama graduate makes over $100,000 per year. It's worth asking how many alums were surveyed. Was it only business students? Only engineering students? How many total Alabama graduates were surveyed? Another example: a new poll states that 85% of Americans have never consumed an illegal substance. People are incentivized to lie about those topics because they might not want to admit to doing anything illegal in a survey, and they might not trust the survey taker. So it's worth considering how the sample gathered for a particular survey might be biased in a particular direction. Another statistical method that people use to mislead others is cherry picking. 
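Before we get to that, the dentist example is easy to simulate. Here is an illustrative Python sketch with entirely invented numbers (a fictitious population where only 30% of dentists actually recommend the product), not data from any real survey:

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical population of 10,000 dentists, 30% of whom actually
# recommend the toothpaste. All figures here are invented.
population = [1] * 3_000 + [0] * 7_000
random.shuffle(population)

# A simple random sample of 1,000 tracks the true 30% rate closely.
unbiased = random.sample(population, 1_000)
unbiased_rate = sum(unbiased) / len(unbiased)

# A biased sample: the surveyor (knowingly or not) reaches
# recommenders nine times as often as everyone else.
recommenders = [d for d in population if d == 1]
others = [d for d in population if d == 0]
biased = random.sample(recommenders, 900) + random.sample(others, 100)
biased_rate = sum(biased) / len(biased)

print(f"unbiased estimate: {unbiased_rate:.2f}")  # close to 0.30
print(f"biased estimate:   {biased_rate:.2f}")    # 0.90
```

A sample that overweights one group, whether by design, convenience, or incentive, can turn a 30% minority into a "nine out of ten dentists" headline.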
Cherry picking is the practice of using only a small subset of data to sell a certain narrative. I brought up Apple earlier as an example of a company that might only want to share a particular subset of data to make itself look good and maybe drive up its stock, saying this technology product sold this many units this quarter, while ignoring years-long trends of that product not selling well and actually declining in sales. Debates over issues like climate change and crime rates are big victims of cherry picking, because rapid changes from year to year do not account for years- or decades-long trends. A lot of people who engage in climate change denialism show a small subset of climate data and claim it shows the earth cooling over time, or that the changes are so rapid you can't really say the earth is warming. But that ignores decades of data showing that the temperature of the earth is rising above pre-industrial levels. The same is true for crime data. This is a screenshot from the San Francisco Chronicle: they had a tweet a few months ago saying that car break-ins are up 753%, which raises the question of what actually happened in 2020. There was a massive pandemic that caused a lot of people to stay inside. That figure isn't accounting for five or ten years of data that could show crime actually decreasing; there's a lot of data to suggest that crime is still steadily declining across the years, and especially over the decades since the 1970s. 
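The break-in statistic shows how much the choice of comparison window matters. Here is a toy sketch with made-up counts (not the Chronicle's actual figures):

```python
# Hypothetical annual car break-in counts: a long-term decline with a
# pandemic-era collapse in 2020. None of these numbers are real.
break_ins = {
    2012: 980, 2013: 940, 2014: 910, 2015: 870, 2016: 850,
    2017: 820, 2018: 800, 2019: 780, 2020: 90, 2021: 640,
}

def pct_change(old, new):
    """Percent change from old to new."""
    return (new - old) / old * 100

# Cherry-picked: compare 2021 only to the anomalous 2020 low.
headline = pct_change(break_ins[2020], break_ins[2021])

# Full window: 2012 to 2021 shows a substantial decline.
decade = pct_change(break_ins[2012], break_ins[2021])

print(round(headline))  # 611 -- "break-ins up 611%!"
print(round(decade))    # -35 -- the decade-long story
```

Same data set, two honest-looking percentages, opposite stories; the only difference is which years get compared.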
Another statistical method people use to mislead is to conflate correlation and causation. I'm sure you've both heard this before: just because two things are correlated doesn't mean that one is the cause of the other. That holds even if a claim turns out to be true, as in this example. To describe this graph: it shows that countries that didn't implement a mask mandate right away had a huge explosion in cases, while countries that did mask up had fewer cases. This is an example of two things that are correlated, where a whole bunch of different factors might explain the correlated phenomenon. A lot of those countries, for example South Korea, Japan, and Singapore, are East Asian countries; there may be particular practices in those countries, having nothing to do with masking, that the chart doesn't document. It could be that the virus was simply not as prevalent in those communities, whereas it was more prevalent in the European and North American countries shown. There are so many other factors that could explain the pattern, even if it turns out that masks are very helpful in curbing the spread of coronavirus. It's important to be aware of that and to be skeptical of arguments claiming that one singular factor is the cause of a very complex outcome. It's worth questioning a claim like this even when the underlying conclusion may be true in certain cases: masks do help curb the spread, but this chart doesn't account for the multifaceted complexity of why coronavirus cases rise in some places and not others. Then there are ways that visualizations themselves are used to mislead people, certain practices within visualizations. One example is truncating graphs and omitting the baseline on the Y-axis. 
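Before that, the correlation-versus-causation point can be made concrete with a classic textbook-style simulation: ice cream sales and drowning incidents, both driven by a hidden "summer heat" variable. All of this data is synthetic:

```python
import random

random.seed(0)

# A lurking variable -- call it "summer heat" -- drives both ice cream
# sales and drowning incidents in this made-up data, so the two are
# strongly correlated with no causal link between them.
heat = [random.uniform(0, 1) for _ in range(500)]
ice_cream = [10 * h + random.gauss(0, 0.5) for h in heat]
drownings = [3 * h + random.gauss(0, 0.5) for h in heat]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(ice_cream, drownings)
print(r > 0.7)  # True: strong correlation, zero causation
```

If you removed or controlled for the confounder, the correlation between the two outcome variables would largely vanish, which is exactly why a chart showing two correlated series can't settle a causal question by itself.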
In both of these examples, the baseline is set at an incredibly high number. In this example, 94 million is the cutoff point when it really should be zero. The effect is that the data is misrepresented as more exaggerated or dramatic than it actually is. The chart on the right says that over 100 million people are now receiving federal welfare, and it shows what looks like an explosion over time from 2009 to 2011, but if the baseline were at zero, you could see that it's a very marginal change relative to the actual numbers. Also note how that chart counts welfare recipients: anyone residing in a household in which at least one person received program benefits. Per the chart's own footnote, the figures count means-tested welfare, not Social Security or Medicare; so that could be Medicaid or another federal assistance program like Temporary Assistance for Needy Families, for example, and anyone in a household where someone receives that benefit is counted as receiving it. That's just something to keep in mind for that particular chart. The one on the left has many problems, but one is that it exaggerates the shortness of women from South Africa and India, because the baseline is set very close to five feet and the intervals aren't equal. It makes it seem like women from Latvia and Australia are gigantic compared to women from South Africa and India, when the difference is just a couple of inches, not a huge or significant one. Another example is more manipulation of the Y-axis. 
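To put a number on that truncation effect, here's a quick sketch using figures in the ballpark of the welfare chart (the exact values are illustrative, not the chart's real data):

```python
# Hypothetical enrollment figures in the spirit of the welfare chart:
# 97 million in 2009 rising to 108 million in 2011, drawn on an axis
# that is cut off at 94 million instead of zero.
start, end = 97_000_000, 108_000_000

def apparent_ratio(first, second, baseline):
    """How many times taller the second bar looks than the first."""
    return (second - baseline) / (first - baseline)

truncated = apparent_ratio(start, end, baseline=94_000_000)
honest = apparent_ratio(start, end, baseline=0)

print(round(truncated, 2))  # 4.67 -- looks like enrollment nearly quintupled
print(round(honest, 2))     # 1.11 -- actually about an 11% rise
```

The data never changes; moving the baseline alone turns an 11% increase into a bar that looks almost five times taller.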
Intervals on the X- and Y-axes should be even and consistent, and visualizations that manipulate those intervals exaggerate an increase or decrease through the use of labels, illustrated elements, or color. In this example on the left, if y'all can see it, cancer screenings and preventive services are going down for Planned Parenthood while abortions are going up from 2008 to 2013. The issue is that the two lines are at completely different scales: screenings go from about 2 million down to 900,000, yet for some reason the roughly 300,000 abortions are drawn above that number. The values aren't even on the same plane; it's a manipulation of the Y-axis. In the example on the right, under President Obama more students are earning their high school diplomas than ever before. Even though it's a fairly modest increase, from 75% in 2009 to 82% in 2015, the stack of books makes it look like the value grew by more than half. Since the Y-axis is being manipulated here, you can't really see how big the difference in the number of students with high school diplomas actually is. One of the most notorious ways of misleading people with data visualizations is going against standard conventions, like the convention that the Y-axis should start at zero at the bottom left. That's a pretty common convention for line graphs and bar graphs. The reason is that charts and graphs have become standardized, and going against those conventions can be very confusing for people at first glance. So it's generally agreed that you should start the Y-axis at zero at the bottom left, so you're not confusing people. 
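Why these conventions matter can be shown in a few lines of code: a Y-axis is just a function mapping data values to vertical positions, and flipping that mapping makes the same data draw the opposite trend. All values here are invented for illustration:

```python
# A Y-axis maps data values to vertical pixel positions. Invert the
# mapping and the same increase draws as a decrease.
def pixel_y(value, vmin, vmax, height, inverted=False):
    """Vertical position in [0, height], where 0 is the chart's bottom."""
    frac = (value - vmin) / (vmax - vmin)
    return height * (1 - frac) if inverted else height * frac

early, late = 520, 850  # hypothetical counts before and after some law

# Conventional axis: the increase plots as a rise.
rise = pixel_y(late, 0, 1000, 400) - pixel_y(early, 0, 1000, 400)

# Inverted axis (zero at the top): the same increase plots as a fall.
fall = (pixel_y(late, 0, 1000, 400, inverted=True)
        - pixel_y(early, 0, 1000, 400, inverted=True))

print(rise > 0, fall < 0)  # True True
```

A reader glancing at the chart reads the line's direction, not the axis labels, so reversing the mapping reverses the story without touching a single data point.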
This notorious visualization inverts the Y-axis, putting zero at the top, to convey the argument that after 2005, after Florida's Stand Your Ground law, the number of murders committed using firearms actually decreased. The problem is that if you flip the chart to where it should be, with zero actually down at the bottom, you can see that the number of murders committed using firearms actually increased after that law was implemented in Florida. That's one example of how going against conventions can mislead people, because the Y-axis doesn't start at its proper point. This example right here actually follows conventions for population density maps: darker colors are meant to represent that a place is more densely populated. So the states of California, Illinois, New York, Massachusetts, and New Jersey are more densely populated than Wyoming and Montana. If that convention were reversed, if you tried to say that lightly shaded colors indicate a more densely populated place, that would confuse readers who are used to lightly shaded places being less densely populated; they might take certain places to be more populated than they actually are. That can be pretty important when you see a visualization on social media, on Twitter or Instagram, and you're only briefly glancing at it rather than studying it in depth. If it goes against a convention, it could easily mislead someone into thinking a certain idea is true or not. And then using the wrong chart can be a pretty big way of misleading people, or at least confusing them. Pie charts, for example, are notoriously misused. They're supposed to represent parts of a whole, so this example on the left doesn't really make sense: of Americans who have tried marijuana, 51% say today, 43% say last year, 34% say 1997. 
Yeah, that just doesn't make sense, because if you're trying to gauge how marijuana use has changed over time, you would use a very different kind of visualization, versus a pie chart, which should always add up to 100% and show parts of a whole: how each value relates to the total. Certain kinds of data correspond with certain types of visualizations. If the data is being used to compare two values, to show the differences or similarities in those values, comparison charts like bar charts and dumbbell plots are pretty good. If you're trying to gauge relationships, showing connections in data or depicting correlation or the lack thereof, heat maps or scatter plots are good for that. For distribution, visualizations that display frequency or changes over time, a histogram would be a lot better for the kind of data in this poll showing change over time than a pie chart. All right, so now we are going to rank visualizations. Since we have two of you here, I'm going to put two PDFs in the chat; feel free to choose one PDF over the other. Essentially you'll be ranking visualizations from most reliable to least reliable, using what you've learned today about how visualizations are represented and how people misuse certain visualizations to convey an argument. So what I'm going to do is put this in the chat so everyone can see it. Let's see if I can actually share a PDF here, one second. 
I don't think I'm actually able to share files in the chat, my apologies. So what I'm going to do instead is screen share one of those files that contains all those visualizations, so you all can see the file and comment on which one is most reliable versus least reliable. Let me find that, and then I'll share it with everyone; I'm going to use the second one. All right, I'm going to share my screen one more time, and you all can see this PDF now. Give me a thumbs up if you can. Awesome. Yeah. Great. So let's see these four visualizations; I'm going to zoom out a bit. Okay, we have two visualizations right here: one is a population density map, and one is a donut chart, another kind of pie chart, showing how many seats the Labour Party in the UK won versus how many seats the Tories won. Based on your first impressions of these visualizations, how would you characterize their reliability? You said one seems more reliable than two, definitely. It's definitely following the conventions of population density maps: red is used to indicate that a place has more population density versus colors like yellow and green. And then yes, the one from The Sun. One thing to note is that the UKIP party, even though it didn't win any seats, is shown with at least half the area of what the Liberal Democrats won on this chart. And the SNP has almost exactly the same area as the Liberal Democrats, even though it won more seats. So it is definitely misleading how this is represented: a party that didn't win anything is somehow represented on this donut chart. Yes, UKIP shouldn't even be showing since it's at zero, exactly. All right. And then these two other visualizations: one showcases types of debt held by the average US household. 
And this second chart at the bottom is showing hours spent online by age and gender, using a bubble chart. And one seems more reliable than the other here too. Yeah, I would say the second one is a weird way to distinguish differences between men and women; it's hard to even discern what the different bubbles mean, or what is being compared to what. So if, say, twelve-year-old boys are spending more or fewer hours online than twelve-year-old girls, it would be hard to gauge that at first glance. And the different sizes of the circles, is that supposed to indicate that more people are spending time online there? It's difficult to gauge how that is even relevant. Bubble charts are often used to demonstrate frequency, the way heat maps show that a place is more densely populated or that more people are responding to a particular issue: the bubble gets bigger. But in this context it doesn't really make sense, and you would think a bar chart would be more appropriate. Whereas this one, yeah, I would say the stacked bars are probably the best way to capture that data. It is a little misleading, though, in that the auto-loans bar is drawn at about three fourths the length of the mortgages bar, even though $28,000 is definitely a lot less than $176,000. So I think this chart needs to be adjusted: the credit-card and auto-loan bars need to be shrunk significantly compared to mortgage debt, and student loans especially.
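[Editor's note: the mismatch in the debt chart can be checked with simple arithmetic. The figures below are the approximate $28,000 auto-loan and $176,000 mortgage values read off the chart; the 75% drawn length is an estimate of what the chart appears to show.]

```python
# Check whether a bar's drawn length matches its underlying value.
mortgage = 176_000   # average mortgage debt (USD, from the chart)
auto = 28_000        # average auto-loan debt (USD, from the chart)

# Proportionally, the auto-loan bar should be this fraction of the mortgage bar:
correct_ratio = auto / mortgage        # roughly 0.16, about one sixth
drawn_ratio = 0.75                     # what the chart appears to show, ~3/4

# How many times too long the auto-loan bar is drawn:
exaggeration = drawn_ratio / correct_ratio

print(f"auto bar should be {correct_ratio:.0%} of the mortgage bar, "
      f"but is drawn at about {drawn_ratio:.0%} ({exaggeration:.1f}x too long)")
```

In other words, the auto-loan bar should be about a sixth of the mortgage bar, so drawing it at three quarters exaggerates it several times over.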
All right, so we're gonna go back to the slides and ask some final thoughts: how do you think you will engage with data in the future, now that you've seen these examples of how data can be used to mislead, and knowing the importance of the context of data, how it's gathered, and how it's represented and presented to make a particular argument, whether for commercial or rhetorical purposes? I think being literate with data means being able to critically assess it, to know, for example, why data is being presented in a particular way and for what purpose. So I'm hoping the takeaway from this presentation is that you can properly assess data, whether it's presented through social media or other channels, or in scholarly sources, where you can look at the methodology section and ask: how did they really gather this data? Can I trust the argument that's being conveyed to me? I hope folks have that takeaway after viewing this presentation today, but I'd be happy to answer any questions. Let me share my works cited page right here, showing how I put this together. Okay, reading from the chat: "I'm going to be careful, pay attention to what information is intended to be conveyed, and also pay attention to things like cherry-picking and manipulation of axes." Yeah, absolutely. "Will the slides be made available?" Yes, the slides, including my notes, I will email to each of the registered participants, and the recording of this presentation will be made available as well. All right, wonderful. Thank you everyone for attending, and if you're watching this recording later, feel free to email me. I'll put up my contact information so that everyone can see it. You can reach me via email at rtpeterson1 at UA.edu, or contact me via phone as well.
I'm happy to answer your questions in whatever way you wanna get ahold of me. So thanks everyone. I really appreciate you reaching out and listening to my presentation today. Thank you.