Well, hello everyone. My name is Chris Raleigh, and I'm the director of campaigns and advocacy here at the Center for Election Science. I see a lot of friendly faces and old friends, but also some new names, which is exciting. Before I get into it, I want to share some rules of the road. In this polling event you'll hear more about CES and about voting and polling throughout the country. There's going to be a lot of information, and I'm sure you'll have a lot of questions. Please put them in the chat; I'll be on the chat making sure you have all the information you need, and we'll share our contact information at the end. Without further ado, I'd like to introduce Dr. Whitney Waugh, our director of data and applied research. Got it right? I think so. It's a long title; we love long titles here at the Center for Election Science. I'll let her take it away.

All right, thanks, everybody. I do apologize, I'm feeling a little under the weather, so even though I'm speaking with as much enthusiasm as I can muster, it may still come out monotone. Please feel free to let me know if you need something repeated. I'm going to share my screen. Because there were quite a few slides, I've separated them into two decks, so the results will be presented in a separate presentation. Thanks, Chris, for that wonderful introduction; let's get down to it. A little bit about the project, in case you weren't familiar: it wrapped up last year, in December, right before the holidays, because you never want to poll during the holidays.
This provides an overview of what I'll be talking about today: the new approach we took this past year, a much more iterative, standardized approach to testing the objectives we're seeking to address. That means hashing things out; Aaron and I have had plenty of brainstorming and collaborative sessions to figure out the right method. The analogy I used the other day: before you can set out on your trek toward whatever objective you want, you need to see which tools will serve you best. Do you need the shield? Do you need the mace? For me, it's always pepper spray. In any case, you want the tools that get you not just as close to the target as possible, but there in a way that other people can follow and replicate. When you do both, that's as good as research gets. So we're developing a process that meets those two objectives: validity and replicability. Honestly, this is the work of all of CES, and I'm excited to get into it with you today.

As I said, we had some objectives. What is the dragon we're trying to slay? There are two big overarching ones, the kinds of problems and questions that keep me up at night, though I'm not sure about all of you. The first is really getting our finger on the pulse of America: assessing public attitudes and seeing where the dial is.
What do people think about alternative voting methods? How do they feel about plurality? There's a strategic component as well, making sure we're always speaking to the people, but at the end of the day it's about understanding who we serve: understanding what it is about approval voting we can touch on so that people see it as the reform they should get behind. Another way of putting it: you want to meet people where they are, so getting a better understanding of where they are is always a goal of mine.

The second objective brings us to the tool shed. This is the first time CES has had a director of data and applied research, so we wanted to make sure we were equipped with a good foundation going forward, and that meant building it. Typically, nonprofits, and the private sector in general, outsource to polling firms, and don't get me wrong, I love SurveyUSA. But when it comes down to it, cutting out that middleman gives you more control and more discretion over how you implement your survey instruments and your solicitation of subjects, and it lets you maintain oversight throughout the process. As you can imagine, polling is not done overnight; this kind of rigor takes some building up.
It means having oversight over every step of the process: making sure the sample being recruited meets your objectives from the very beginning, that it's representative of likely voters. We know from 2016 what happens when those quotas aren't met, when certain demographics aren't oversampled to address bias moving through the process.

A little overview of the polling approach. Through brainstorming and collaborative sessions with the CES staff, we realized this couldn't be done in one go, not even close; it needed to happen in phases. You can't answer all your questions at once. And this is what I was saying before about an informed, iterative approach to research: before you slap a set of questions on a piece of paper, or in our case a Word document, you make sure those questions come from a place that is data supported. If something hasn't been tested before, it should come from experts. You do your due diligence so that the questions you're asking and the language you're using are as close as possible to the right tool for getting at that dragon. So here we're refining the methodological approach and the survey design, which arguably takes the longest in this process: workshopping language and consulting with experts.
For example, when it came to testing other popular electoral reforms aside from voting method reform, we got insight and input from our campaigns team, who also took a standardized approach: they assigned scores to the reforms that had done best in their experience, the research team added our scores, and that's how we evaluated what got included.

Then we get into phase one. Before we just start testing wholeheartedly, "Would you vote for this? Would you support this?", don't we want the versions that express our message best? We want to put our best foot forward, and that means testing variations. This is my brainchild in a way, because there are a million different iterations; between a group of 24 people, we could come up with a lot of them. To test all of these, you want to do it systematically, and you also want to do it holistically, because this is not somewhere you want to cut corners. So I proposed a conjoint analysis, which I'll explain briefly in a moment. Using those results, we would take what we learned from phase one and implement it in phase two, and the phase-two survey is what got us the juicy bits.
So, briefly: conjoint analysis is a research technique frequently used in marketing and advertising that has more recently been introduced into the political science literature. There are different types of conjoint designs, but the most validated is the standard choice-based conjoint, and it's probably easiest to show with an example. Say you're trying to buy a car. There are certain qualities that are typically advertised to us, and since there's no better motivator than money, this technique is used frequently to see which attributes really matter when people ultimately choose an option. What I like to hone in on is that when we're getting an opinion, getting someone to say what they think and feel, we should get as close as possible to what it's like in real life. People are never exposed to just one thing at a time, so you have to consider what else they'll be weighing. By forcing a choice between two options, or even three, you get a real sense of their utilities: how much they actually care about each of these qualities. When we apply that to a potential ballot initiative, we get a very good sense of what really matters to folks, because you can essentially put a quantity on it. It's also a way for us to control for order effects.
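To make the forced-choice logic concrete, here is a minimal, purely illustrative sketch, not CES's actual instrument or estimator. It simulates choice tasks over two hypothetical initiative attributes (the attribute names and part-worth values are invented for the example) and then computes marginal means: the share of times a profile carrying each attribute level is chosen.

```python
import random
from collections import defaultdict

random.seed(42)

# Hypothetical attributes and levels for an initiative profile.
# These names are illustrative, not the actual survey wording.
ATTRIBUTES = {
    "framing": ["voter-centric", "institutional"],
    "scope": ["primary only", "primary and general"],
}

# Assumed "true" part-worth utilities, used only to simulate choices.
PART_WORTHS = {
    "voter-centric": 0.8, "institutional": 0.2,
    "primary only": 0.4, "primary and general": 0.6,
}

def random_profile():
    return {attr: random.choice(levels) for attr, levels in ATTRIBUTES.items()}

def utility(profile):
    return sum(PART_WORTHS[level] for level in profile.values())

def simulate_task():
    """One forced-choice task: the respondent picks the higher-utility
    profile, with some random noise."""
    a, b = random_profile(), random_profile()
    ua = utility(a) + random.gauss(0, 0.3)
    ub = utility(b) + random.gauss(0, 0.3)
    return (a, b, 0 if ua >= ub else 1)

# Marginal means: how often a profile showing a given level is chosen.
shown = defaultdict(int)
chosen = defaultdict(int)
for _ in range(5000):
    a, b, winner = simulate_task()
    for i, profile in enumerate((a, b)):
        for level in profile.values():
            shown[level] += 1
            if i == winner:
                chosen[level] += 1

marginal_means = {lvl: chosen[lvl] / shown[lvl] for lvl in shown}
for lvl, mm in sorted(marginal_means.items(), key=lambda kv: -kv[1]):
    print(f"{lvl:20s} {mm:.3f}")
```

In a real conjoint you would estimate these quantities from respondents' choices (often with a regression-based estimator) rather than simulate them, but the payoff is the same: each attribute level gets a number, so you can see which qualities actually drive the choice.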
That matters for how we lead into an initiative and how people really feel about it, and whether they even need the lead-in, because it could be that approval voting on its own is exactly what we think it is: a reform that should be popular.

That launched us into phase one, which of course brought further complications. Part of this is being forthcoming and transparent about the challenges that arise in the typical research process. We ended up having to split the survey into two variations: one for the easier-to-reach states and one for the rest. By easier to reach, I mean sample recruitment: when you want enough respondents in a particular state, some states are much easier to reach than others. We split so as not to bottleneck the rest of the project, and also to make adjustments as needed. It's always a flexible process, because all states are different; that's probably the takeaway of the day. We all know it, but we still need to take those differences into account, and you can't expect to apply the same instrument to slay the same dragon in different states.

A brief description of recruitment: we solicited our sample and recruited subjects using Lucid, the same crowdsourcing company that SurveyUSA and plenty of other pollsters use. So basically, we're a pollster now, too.
We're relying on the same kinds of tools. As you can see here, we have the full version, and then the condensed one still tests the key critical points while letting us actually solicit enough subjects in each of the harder-to-reach states. We targeted likely voters: adults 18 and over who indicated that they voted in the last election. And we adapted the tricks and best practices used not just by polling firms but in academia, making sure we're being tricky as well, because we know some respondents just breeze through. That's a criticism often levied against crowdsourced samples, especially online samples, but there are ways to ensure you're getting a quality sample. In phase two we distinguished single-subject and non-single-subject states, getting into the different types of state compositions and what they allow and don't allow.

And here we get to the results, which are in a different slide deck, so this might be a good time to stop for questions while I switch over.

Any questions for Whitney? Looks like, Whitney, you've got everyone's attention and explained it perfectly.

I tend to get that a lot. So here we are. For the sake of brevity, and to make sure we're covering the top lines, we'll go through this state by state.
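One common quality check against respondents "breezing through" an online sample is flagging speeders: people whose completion time is implausibly short relative to the median. Here is a minimal sketch of that idea; the 30-percent-of-median cutoff and the example durations are assumptions for illustration, not CES's actual criteria.

```python
import statistics

def flag_speeders(durations_sec, fraction=0.3):
    """Flag respondents faster than `fraction` of the median duration,
    a common heuristic cutoff for inattentive 'speeders'."""
    cutoff = fraction * statistics.median(durations_sec)
    return [d < cutoff for d in durations_sec]

# Example: nine plausible completion times plus one 40-second speeder.
times = [410, 395, 505, 460, 380, 430, 520, 470, 440, 40]
flags = flag_speeders(times)
print(flags)  # only the 40-second respondent is flagged
```

In practice this is usually combined with other checks, such as attention-check questions and straight-lining detection, before anyone is dropped from the sample.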
The breakdown shows, for each state, the percentage of support for the top approval voting initiative, that is, the best-performing approval voting variation. It turns out that, across all 21 states we polled, approval voting is pretty popular. When it comes down to it, people really enjoy picking all they like. And after honing in on the parts of approval voting that people are really drawn to, the parts they feel impacted and empowered by, I think we were able to capture something really close to what we would see if approval voting actually made it to the ballot in each of these states. For each state you see the percentage of support, the yeses, for the top approval voting initiative, and "open" indicates whether the state has an open primary. There don't seem to be clear patterns across the states; California and Washington, for instance, have a top-two primary.

So, what did we learn? Across all states, even the stricter single-subject states, approval voting drew support from roughly 65 to 75 percent, which is great news. And we found systematically less support for other voting methods. On its face we're learning what we already suspected, that language matters, but we're also seeing that applying new methods can yield different insights. These methods are like the new pepper spray for getting where you want to go.
By trying and innovating on these methods, we can get at different parts of the question and build an understanding of what really matters to people. And by touching on what really matters to people, I think we can work toward reform that really matters. Needless to say, these are insights you can build on.

We also ran a robustness check, to make sure there was no funny business in our sample and that the work we had done was replicable using a polling firm, SurveyUSA. We did that, and SurveyUSA pretty much replicated what we had done in the fuller project: there was no statistically significant difference between our North Dakota results and theirs.

Looking ahead, we want to conduct more experimental research comparing outcomes, and to develop a nationwide survey to better understand public attitudes toward voting methods, weighing the different considerations people take into account, not just their electoral behavior but also what they think. It matters whether people feel more efficacious after they vote and whether they trust their institutions more, because institutional trust actually has an impact on whether they vote in the future. And what I really want to try next, with greater capacity, is a research design with even stronger external validity.
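A claim like "no statistically significant difference between the two North Dakota samples" is typically checked with a standard two-proportion z-test. Here is a stdlib-only sketch; the counts below are invented for illustration, not the actual poll numbers.

```python
import math

def two_proportion_z(successes1, n1, successes2, n2):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: 68% support among 400 in-house respondents
# vs. 65% among 500 respondents via the polling firm.
z, p = two_proportion_z(272, 400, 325, 500)
print(f"z = {z:.2f}, p = {p:.3f}")  # p > 0.05: difference not significant
```

With these made-up counts the three-point gap is well within sampling noise, which is the shape of result the replication check is looking for.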
That means trying a different experimental design where we present messaging in a competing context, mimicking real campaigns: competing messages about voting method reform, but also different types of messaging, to see what the effects really are. Maybe one condition is an unrelated advertisement, say for blood pressure medication, so that we're controlling for what could be a misperceived effect. It's about getting closer and closer to the target, and I think CES is well equipped to do so. I'll end here; my contact information is on the slide, and I'll do my best to reply to any questions or follow-ups. But I want to leave some time for questions and discussion.

Whitney, there were some questions in the chat about which elections we applied this to. Can you talk a little more about the different variations of approval voting that you tested?

Is it in the chat? I can read the questions as well. You mean the type of primaries?

I would say how you tested top-four, and so on.

I see, the variations. To make sure we didn't have any inexplicable language for particular types of primaries, the choice we made was to use very broad language: this would be a change from the primary you have now, the existing primary, to a new one. And if a state already had an open primary, we made sure that part of the language wasn't there. That's a great question.
That's exactly the kind of flexibility we wanted to take into consideration when we were applying, in essence, the same study across so many different states at once: having these kinds of tweaks.

More questions from the chat: there was another one about the exact wording. Do you want to talk about where the exact wording, or the measures, came from?

Sure. As I said, this happened even before phase one. I started with the base language on the subject, approval voting, and then we honed in on the different elements. A typical ballot initiative, particularly one that proposes approval voting, has different components: which offices it would apply to, whether it applies to the primary, whether it applies to the general. Then it's a matter of figuring out how to word it, applying normal heuristics. People don't want to read an essay, especially for a method that's relatively easy to explain; you don't want to oversell anything, and you want to get straight to the point. Those common heuristics were used to write the first drafts. Once we ironed those out, we tried different framings: voter-centric framing, written from the perspective of the voter, and institutional framing, where we talk about changing institutions. It turns out that people feel much more engaged with the initiative when it's framed in a voter-centric way.
Knowing the landscape of public opinion research is always a benefit, but relying on experts and on your own staff is really useful too. That was a long-winded way of saying we read some things, and then we tried them: little pilot tests, even among staff. Does this sound good? And not just among staff, but among our parents, people who, speaking for my own parents, just do not care about this. English isn't their first language, so we made sure it was easy enough for them to understand. So, in a way: thanks, mom and dad. Any other questions?

Whitney, just so I understand, did you already move over to your second set of slides?

Yeah, I did. I wanted to make sure I was covering everything in a timely manner for discussion, and also not giving away the farm.

Okay, I've got two questions from the chat, Whitney. How many phone calls did you make to get a person to answer the poll?

That's a very good question. I personally called no one. For all of the states that were not considered difficult to reach, it was just solicited online, though for the challenging ones I was really pushing Lucid to their very max; I felt like I was negotiating: no, get me a few more subjects, please, in states like North Dakota, which is difficult. We really wanted to keep the platform consistent as much as possible, because answering a poll by reading it is completely different from hearing it.
Also, you can't do the conjoint on the phone, and I wouldn't trust it anyway; that's not the point of it. The design depends on being exposed to different options at once, and you inherently can't be exposed to different options at once on the phone. So for the conjoint itself, we did not use any phone, and I don't think we used any phone for the second phase either. We did use phone for North Dakota in particular, where we were really doing a case study. Ultimately, the goal is to do a case study on all states. When's California? I'm just kidding, but that's where I'm from. For North Dakota in particular, getting anything above about 200 subjects online is impossible, so it has to be done by phone. And then you have to adjust your survey to account for those mode differences as well, which a lot of pollsters actually don't do, and that is truly frightening.

All right, Whitney, I'll leave it up to you how you want to answer this one: what evidence is there that voters prefer a pick-all-you-like, approval voting type of situation over ranking?

So the exact question is: do you find evidence that voters prefer choosing candidates or ranking candidates?

In the aggregate, at least, per state or even by city, taking it in the way that ballots are counted. On a case-by-case basis at the individual level, that's definitely a slice of life I'd like to take a look at. But in the aggregate, there has not been a contrary instance, at least in my own testing.
And this is after almost half a year of work to get close to that target, to knowing what people want. I'm not saying you can go around and just say whatever you want; this is carefully selected language, our best-case scenario. In that scenario, I have found time and time again, at least in this project, that people systematically prefer a pick-all-you-like situation and would support it. There are different measures of what you might consider preference, but in our case it's support for a potential initiative that proposes one of these reforms. By that measure, at least in the states in this project, respondents have systematically preferred pick-all-you-like. The margin depends on the state, and there are definitely states where it's closer, but it is systematically higher for pick-all-you-like. That's a good question, though. Thank you.

I'll ask you, Whitney, because this is actually a question that I had: is there anything that surprised you?

Coming from academia, it always surprises you when the result you want is exactly what you get, so in general we really needed to make sure of what we were seeing. What surprised me was the overwhelming support. When I was first learning about alternative voting methods myself, I was taking into consideration what goes into each of them.
So it's nice, and very validating, to see it play out: when you're appealing to what people care about, approval voting and being able to pick all you like serves that and addresses people's concerns. It's the pessimist in me, but I'm always expecting and preparing for the worst, which is the academic way; there's no preparing for the best. So yes, that was surprising. Demographically as well, without delving in too much, we see somewhat bipartisan support for a pick-all-you-like approach, because at the end of the day it's pretty nonpartisan. If anything, the method itself seeks to, as Allison put it so well, find what we agree on. And I really think people get that message: approval voting is not trying to lift the Republican or the Democratic Party; it's supposed to lift issues. It's supposed to get people to think about who really stands for something they care about. The next wave of what I want to look into is what gets people out to vote, what that issue is, because time and time again, research in the social sciences harps on the fact that most people are single-issue voters: they have one issue, they die on that hill, and that determines which party they vote for. So at the end of the day, the consistent nonpartisan support we found also surprised me.
To that point, Whitney, I think what's probably surprising for people on this call, and for those who read your article, is that support for approval voting was so high, when in our own world we see there isn't that much awareness of it. So theoretically, people like something they may not have heard of. Can you explain why you think support was so high across the board?

Wow, I hope I can keep this under ten minutes, because I would love to keep talking about this point in particular. Basically, it's about how people deal with uncertainty, and how they process information from that standpoint. We often think of uncertainty as anxiety-driving, and anxiety, as a discrete emotion, tends to make people withdraw; it's not an action-motivating emotion. This was my dissertation research as well: what emotions factor into uncertainty. But uncertainty can also have a net positive effect when it comes to the preconceived biases typically associated with political information nowadays. Especially in the hyper-partisan, polarized environment we live in today, anything political is already affiliated and associated with a particular party. So there is an advantage, in some ways, to being unknown. And through the act of polling, the survey also acts as education: it informs the respondent, at least briefly, about what these methods are, if they don't already know anything about them.
But the thing is, there is another alternative voting method out there that is clearly better known and has been around much longer. There are many benefits to being around longer, but in some cases it can be a negative, when people already have those associations and preconceived notions. I think we can all attest, anecdotally, to how difficult those are to adjust and to change just in personal conversation. And if you're a keyboard warrior, you're not shifting any dials, at least not with the wrong approach. So I take it upon myself to find the right approach, and that takes a lot of time and a lot of research. You know how they say practice makes perfect? Perfect research is impossible, but I try to get close, and by doing these iterations we get as close to that dial as possible. So when you see such high numbers, like I said, it surprised me, but I back myself and I back the work we've done, which is that we are really tapping into what people care about. It's our best-case scenario: our language, our presentation, and that's what we see. It might seem unbelievable, but I could drone on a little longer about the methods and hopefully convince you.

I have to keep it brief, but I think this is an interesting question. We're saying the level of support is high, in the 60s and 70s.
Can you, especially with your background and your research, put that in context for people? What does it mean that 70% theoretically would vote for this? Mathematically that's a great number, but where does it stand when it comes to issues, or even voting on things? What should somebody know when they see that number and we say it's high?

Yeah. So from my perspective, which is informed by scholarship and also by my pessimism, you take into account the standard error that typically comes with polling numbers, and you account for worst-case scenarios in terms of how representative the sample is and how applicable it is to the actual population you're hoping to study. Taking that into consideration, I think it's still a strong number, given the precautions I typically take. In comparison to other polls, I'm not sure they practice the same stringent sampling quotas to ensure sample quality. But even given this, there's always the possibility of a surprise campaign in competition with yours when you ultimately launch. Research can get you close to the dial, but it's still real life. That's why research is never perfect: you're studying something that is not perfect, you're studying humans, and something will always pop up. So in terms of what you can actually apply, it's as much as you can apply from any other kind of poll.
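For readers who want to make the standard-error caveat concrete, here is a minimal sketch of the usual 95% margin-of-error formula for a sample proportion. The 65% support level and 1,000-respondent sample size are illustrative numbers, not CES's actual figures:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sample proportion p with n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative: 65% support in a 1,000-person poll.
moe = margin_of_error(0.65, 1000)
print(f"65% +/- {moe * 100:.1f} points")  # roughly +/- 3 points
```

Even under that textbook formula, a result in the 60s or 70s sits well clear of 50% once the margin is subtracted, which is why the method behind the number matters more than the headline figure.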
So take the really popular polls for presidential candidates as an example. What can we really take from it when we see someone leading the polls at some percentage? The question you should be asking is how they got that number. If 60% comes from polling me and my brother, that's obviously going to be problematic to apply. So it's always about the method, not the result; or rather, it's about how you get to the result, because a result is utterly meaningless to me (and I don't mean this harshly) if I don't trust what you have done. So when it comes to putting it in context: even if we give ourselves a 5% leeway, say if awareness and education efforts don't land as much as we would want, something like 60 to 65% holds up when the research design has taken care to get a quality sample. That means not under-sampling certain populations, like the Rust Belt states, and making sure African Americans aren't represented by only two people in the sample whose responses you then multiply by however much. If that's not what you're doing, then polling numbers can get you close to what ultimately comes out. So it's really about contextualizing those numbers with the method. I think that's my response to that. Okay.
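The under-sampling problem described here is usually patched after the fact with post-stratification weights, where each respondent counts as population share divided by sample share. A toy sketch with made-up group names and shares (the 12%/2-respondent numbers are hypothetical, chosen to mirror the "only two in the sample" example above):

```python
def post_strat_weights(pop_share: dict, sample_counts: dict) -> dict:
    """Per-group weight = population share / sample share."""
    n = sum(sample_counts.values())
    return {g: pop_share[g] / (sample_counts[g] / n) for g in pop_share}

# Hypothetical: group B is 12% of the population but only 2 of 100 respondents.
weights = post_strat_weights(
    {"A": 0.88, "B": 0.12},
    {"A": 98, "B": 2},
)
# Each of B's two respondents is counted 6x to stand in for 12% of the
# population, which is exactly the "multiplying by however much" problem.
```

Large weights like that inflate the variance of the estimate, which is why the emphasis here is on enforcing sampling quotas up front rather than repairing a lopsided sample afterward.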
And since we're coming up on the hour, maybe you could share something for the folks on this call who want to know how to make the case to others, maybe to elected officials. With all of this in mind, and from your own experience, what should they take away, and how can they use these numbers to make the case, beyond the finding that this may be more popular than they thought?

Well, that would be giving away the farm. I'm kidding, but that is actually what I'm working on: building a model. This is what I've been brainstorming about, but there are three essential parts to any form of persuasion, and to voting reform in particular, anything political. First the sender, as in the source of the message; say, me talking to you all. Then the message itself. And then the audience. To leave even one of those three out of how you present a message is to put yourself on the wrong foot when it comes to convincing someone or changing their mind. So take all three components into consideration. For me, and I have done this, so I've had to master it in a way: speaking as a woman of color to a more highly technical audience, you hit them with the stats, you hit them with the credentials; you do what you need to get yourself onto that footing. Then the messaging that actually moves the dial depends, obviously, on your audience.
But what I have really found effective in my own social circles, for people who don't know about any alternative voting method, and that's usually who we're dealing with (a lot of my friends had no idea; they might have heard of ranked choice, but they'd never heard of approval), is to say: you know how you always vote for the lesser of two evils? People really resonate with that line. Polling is polling, but they really do; they go, "Oh, yeah." And you can make a fair bet that most people shirk the partisan label nowadays; independents are the fastest-growing group, and no one wants to be labeled as either party. So assuming everyone at least leans one way or the other but still dislikes the label, you just say: you know how you're always forced to pick the lesser of two evils? Well, instead, you pick all you like: all the candidates you like, all the plans you're for. And as you can see, it's easy. That's the point that typically doesn't get talked about enough but that I find very important, both normatively, for democracy and for representation: accessibility. How accessible are these different alternatives? If I can't explain a method to my parents, who were not born here, that's already a huge factor.
And as you can see in San Francisco and New York, they've spent hundreds of thousands, if not millions, not just to implement the machines that handle those ballots, but also to educate people on how to vote. If voting is a pillar of the democracy we live in, shouldn't it be accessible, easy to understand? That argument typically sells: we're not trying to exclude anybody from this act we're all supposed to be able to do. Obviously you can abstain if you want to, but just understanding the ballot should not be a barrier. So: lesser of two evils; letting people vote all of their preferences; and how it shifts attention away from the parties and toward the candidates themselves. Allison, who's on our campaign team, put it so well: let's find what we agree on, let's find where we agree. I'm going to get that tattooed on my forehead, it's that good. But in any case, like I said, always take into consideration who you are as a person. And honestly, if you're a white male speaking to a particular demographic, think about that too: what that relationship is and how it affects the message, because it's never just the message itself that matters. It's always the sender and the audience as well. People just don't take that into account enough, but it's a model of communication that I hope gets published, we'll see. Getting a point across always needs to take those three factors into consideration, and if you do, I guarantee you can think through on your own how best to appeal to someone.
Yeah, knowing what priors they might typically have, and knowing what framing even your identity gives off; that impacts the message being sent, for sure. Obviously I love this field of research, so I could keep talking, but I think we're out of time, and my throat is so dry.

Yeah, I know. Whitney has been an incredible trooper this entire time, fighting through illness. Whitney's email, if you'd like to reach out to her, is whitney at electionscience.org. You can reach all of us at staff at electionscience.org. Please feel free to reach out with any of your questions. And the data is available; a few people have asked in the chat. The link is in the chat, and it's also on our site as America's most popular voting reform, which, who knew, but that's the truth. I'd like to thank everyone, and to thank Whitney for all of her great work, for her awesome chat with us today, and for running the numbers with us. If you have any questions, please feel free to reach out. Whitney, any last words?

Yeah, I feel like the adrenaline is just fizzling out. But if you send me anything, please be patient. Thanks, everybody.

Thanks, everyone. Bye-bye.