Hi there and welcome to the first ever episode of Infoversity. This podcast is coming to you from Syracuse University's School of Information Studies, where we investigate the intersection of technology, business, and humanity. Today I'm joined by Professor Jennifer Stromer-Galley, a senior associate dean at the iSchool and renowned researcher in the field of social media and politics. Hi Jenny. Howdy. In the second edition of her book, Presidential Campaigning in the Internet Age, which came out in 2019, she offers a history of presidential campaigns and how they have adopted and adapted to digital communication technologies from '96 to 2016. She has more recently been working to make it easier for journalists and the public to track what political candidates say to their social media followers, and received a John S. and James L. Knight Foundation grant to build an online dashboard that tracks candidate spending and messaging. Jenny has also been a principal investigator on a $5.2 million project called Trackable Reasoning and Analysis for Collaboration and Evaluation, or TRACE, funded by the Intelligence Advanced Research Projects Activity, IARPA. The project experimented with different tools and interfaces to support complex reasoning and intelligence analysis. And earlier this month, she received a grant from the knowledge graph database company Neo4j to track misinformation on social media and in online political ads in the 2024 U.S. election. This topic is of particular interest as we kick off a major election year in the U.S. and abroad. In fact, 2024 is slated to be the biggest election year in global history, with an estimated 4 billion people going to the polls in 60 countries. So welcome, Jenny. Thank you for joining us. Oh, it's my pleasure. I'm delighted to be part of the inaugural podcast for the iSchool. I think that's pretty cool. And I really thank you for being here. And that was one heck of an introduction that we had for you. That was amazing. 
And your accomplishments are just incredible. Can you spend a little bit of time and talk about how you got involved and got started in your interest in political elections? Yeah, I can. I don't know if it's embarrassing to say, but I actually got started by studying Bob Dole's website. So this is back, imagine it's 1996. And this really old guy named Bob Dole, who was a senator, had decided to run for president against Bill Clinton, who was running for a second term as president. And this is back when there's no social media. There are barely websites. The World Wide Web is just really starting to diffuse. People in higher education had email addresses. They might be able to create a little website or a web page, but it was still a really novel technology. And so I was really struck by the fact that candidates could use digital media to talk directly to supporters. Because imagine before the web, before email, before social media, and all the things we currently are consumed by, we mostly got our information from three television stations for news, and maybe by the 90s we had cable, I meant to say. So we had Fox News, we had CNN, and then we had a newspaper that came, and radio stations. And so the mass media was a very powerful intermediary between political candidates and politicians and the public. And so with the internet coming along and potentially democratizing, small-d democratizing, the ways that the public could engage with the political elite in the country and vice versa, I was really fascinated by that. So I wrote my master's thesis on Bob Dole's website and this new political campaign medium. And that's kind of where things started. That's fantastic. You've obviously been involved in social media for a very long time. What do you think's changed about social media since its inception to today? 
So in the context of how political campaigns use digital technologies, I mean, there's been a lot of evolution. So you mentioned in the intro the book that I wrote, Presidential Campaigning in the Internet Age. And, you know, it's a history. It looks at two decades of how candidates evolved their campaign strategies as the digital media environment changed, right? Because they both changed hand in hand. And we don't give a lot of credit for this, but actually innovations by campaigns do drive technology. And there's actually a long history of this. So for example, when radio was just starting to become a thing in households in the 1920s and 30s, radio producers used the debates to sell radios to households, to try to get them to adopt radio and bring this kind of national event of a political debate into people's households. And so you see similar kinds of interactions, if you will, between the companies and the technologies, these kinds of innovations in information technology, and campaigns, because presidential campaigns in particular are when you have kind of a mass media event, right? The public, generally speaking, is sort of tuned in. And so it allows the technology companies to really leverage their technology to say, hey, if you use us, then you can gain additional connection to this kind of national event. So, you know, looking across from 1996 through 2016, so much has changed, actually. I was thinking about it as I was coming in today, just how much. What we think of today as normal, even a decade ago would have been seen as really quite, I don't know what the word is, amazing, extreme, problematic, right? I mean, I personally think we are at a very challenging moment in our democracy. And I do think that the ways that political actors can directly talk to the public, bypassing the news media. You know, the news media has been called the fourth estate, right? So it sort of serves as this intermediary between the public and political figures. 
And you know, the news media, journalism, that industry has really been challenged through digital technologies; the business model for journalism has eroded. And so you see newspapers folding all over the country. You know, here in Syracuse, our daily newspaper is actually now only delivered three days a week. And largely what most people are getting as news is national news from syndicated media like AP or Reuters. And so local news has really eroded. And I think that has a potentially problematic effect, especially around misinformation, which I have a hunch we'll talk a little bit about next. But I think, you know, looking at the current political moment, when you have so many elections happening across the globe, there is a real opportunity, right, for ordinary people to get involved with those political campaigns, and for political actors and political parties to directly talk to their supporters and the public. And that's a good thing. I actually think there is a democratizing dimension to that that's really quite beneficial. The challenge is that bad actors can also misuse that direct connection and can spread incorrect information, whether it's accidental or intentional. And of course, we're more worried about the intentional spreading of false and misleading information. And that wasn't a concern back in, say, 1996. In 1996, the concern was really more the fact that the public couldn't really communicate and connect with political actors. There was this intermediary between them. So there is a bridging of the distance, I think, today that social media enables that allows the public to be more involved in political action, political engagement, finding communities of people who share their perspectives. That's good. But then there is always, unfortunately, I think with these technologies, there's always a dark side. In human behavior there's both good and bad, and the technology enables both to happen. So it seems like... 
If I may sort of digest a lot of what you said there. That's all right. That was great. It seems like throughout history, in the political landscape, there's always been informing. And technology has enabled informing and not informing in some ways. And what we're seeing now, with the explosion, is that it's becoming exponential, the ways that someone can be informed and be misinformed. And that presents itself as a problem and a unique challenge, I guess, is a good way to think about what you're saying there. I thought about that radio example and I was like, OK, there's a way to inform, like buying people radios. That's how you get to your electorate, right? You got to let them listen. That's right. Now there are so many channels, right? So I think that's one of the challenges. There are so many channels. If you're a political candidate, what do you do? I mean, would you go on Mastodon? Do you go on Twitter? If you want to be mayor of a small town, you don't have an army of people to manage social media for you. But there's so many different channels. So can you talk maybe a little bit about how that works? Yeah, for sure. It's funny because back in 2000 and 2004, candidates were embracing email. Oh my gosh, email. Wow. But honestly, at that time, and even now, email, believe it or not, is still an incredibly powerful fundraising tool for candidates. Subscribe to my newsletter. Exactly. Yeah. And so, you know, any supporter, any citizen who's willing to give an email address to a political organization, whether it's a party, a candidate or, you know, an activist group, if you're willing to give up your email, that means that you're committed to some degree. So there becomes this reciprocity of: cool, you care about our cause, then why don't you give us a little money? So email is still really, really powerful, and it never gets talked about when you think about social media. 
But honestly, email was the first social medium, invented in 1972. So we've had social media actually a really long time, but the proliferation of channels is really remarkable right now. And, you know, again, think about 2000. So email. There were blogs, actually, kind of early generation blogs, and there were discussion boards, like threaded discussion. I was a big Usenet user. 100%! Way back in the early 90s, I was a Usenet person. Yeah, for sure. Absolutely. Usenet is sort of like the Twitter of two decades ago, right? Yeah. Well, what comes around goes around, right? 100%. It's just different evolutions in kind of the core thing of allowing people to yell at each other. So, amen. Sorry. And then 2004 comes along, blogging really becomes hot. MySpace is a thing. And then by 2008, you start to see MySpace, which was still a thing, Facebook and Twitter, email, blogs, and of course the traditional website, because the website is still a staple today for campaigns. And so basically from 2008 until 2020, for the most part, it was Twitter and Facebook. Instagram kind of grew in importance, but not to the same degree as Facebook and Twitter. And Snapchat actually was a little bit of an experiment for some campaigns, in part because of the geofencing. So if you were holding a political rally in Iowa, you could encourage your supporters to use a Snapchat geofilter to sort of say to their friends, hey, I'm at this event for Ted Cruz, and that seemed kind of cool and hip. But Twitter, I would not have predicted actually the sort of shift that happened with Twitter that has then enabled and opened up, I think, a new proliferation of social media platforms like Bluesky and Mastodon. There's also, on the political right, new social media platforms like Truth Social, which was started by Trump, as well as Gab, which is kind of a religious slash conservative social media platform. 
And then some others that come and go; they get pulled down from the internet for various reasons. So there are many, many choices that campaigns now have. Strategically, campaigns generally go where their supporters are. So for the most part, that means that they're going to Facebook, in part because that's where older voters now are. And Facebook still reaches roughly 80% of the public. Now, it doesn't necessarily mean everybody's active on Facebook, but it still has massive reach. And so Facebook is sort of a standard go-to. Twitter, I don't know what's going to happen with Twitter this election season. Things are so unsettled with that platform and its utility for candidates. We just did an analysis looking at where the candidates are right now for the '24 election for president, governor, senator, and select House races. Almost all of the federal races have Twitter accounts. They have Facebook accounts. They have YouTube as well. I forgot to mention YouTube. YouTube has been central really since 2012 as a way to take advertisements that you might be running on television and then put them up online to reach potentially other voters and supporters. And of course, now there's TikTok as well, and you do have candidates, an uneven smattering of candidates, if you will, that are using TikTok. But they're there too. So as a candidate, you have to make decisions: where do I think my supporters are? Where can I get the most bang for my buck, if you will, in terms of talking to people that matter? Generally speaking, candidates are not on every platform. It's not worth their energy or time strategically. So where's your voter? Talk to those voters and try to get them to actually turn out to vote on election day, because at the end of the day, that's what a campaign is about. So let's turn to your research a little bit. 
So with all this going on, right, and everyone's using these different platforms, and then if you're in a Senate race, you might be looking more at these higher end platforms; if you're local, you might be looking at different platforms. How do you, as a researcher, go about collecting this information? And then what do you do with it? So, I'm trying to think here. I started here at the School of Information Studies in 2013. And from 1996 through 2012, I was looking very closely at how campaigns were using digital media as part of their communication work. There wasn't a systematic way to study, say, Facebook posts by Barack Obama and Mitt Romney in 2012. There was no access point, if you will. Actually, I was looking back, and I have screenshots, not physical, digital screenshots, that I took from both Romney's and Obama's Facebook accounts back in 2012, because that was the only way to try to capture what they were saying on those accounts. But the beauty of being in a School of Information Studies, combined with platforms like Facebook opening up what they call APIs, which are application programming interfaces, you can think of it as basically a door between me and the database at Facebook. And Facebook can open up that door. And when they do, then we can actually more easily pull down, in database form, that candidate's messages, when they posted those messages, how many people reacted to those messages. So that's been transformative. And so when I started here at the school, I started working with Jeff Hemsley, another colleague in the iSchool, and a few others who have now retired, looking at and collecting social media messages. And so I've long been concerned about the challenges of really tracking what the candidates are saying to their publics on social media. There are so many messages, as you mentioned, there are a lot of different platforms. And it's just really hard to keep track of it all. 
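[Editor's note: the API workflow Jenny describes, pulling a candidate's posts down "in database form" through a platform's door, might look something like this minimal Python sketch. The response shape, field names, and sample records here are invented for illustration; real platform APIs such as Facebook's Graph API have their own schemas, authentication, and rate limits.]

```python
import json

# A hypothetical page of results, shaped the way many social APIs
# return posts: a "data" array of records. Real APIs differ; this
# sample is only for illustrating the collect-and-normalize step.
SAMPLE_PAGE = json.loads("""
{
  "data": [
    {"id": "101", "candidate": "Candidate A",
     "message": "Join us at the rally tonight!",
     "created_time": "2024-02-01T18:00:00Z",
     "reactions": 523},
    {"id": "102", "candidate": "Candidate B",
     "message": "My opponent's record speaks for itself.",
     "created_time": "2024-02-01T19:30:00Z",
     "reactions": 1210}
  ]
}
""")

def normalize_posts(page):
    """Flatten one API page into uniform records ready for a database."""
    return [
        {
            "post_id": p["id"],
            "candidate": p["candidate"],
            "text": p["message"],
            "posted_at": p["created_time"],
            "engagement": p["reactions"],
        }
        for p in page.get("data", [])
    ]

records = normalize_posts(SAMPLE_PAGE)
print(len(records), "posts collected")
```

In a real pipeline this normalization step would run once per API page as the collector walks pagination cursors, with the records appended to a database table keyed on `post_id`.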
And I was concerned that for journalists who are following the political elections, many of them don't have the computational skills or the ability to do that kind of collection and analysis work, to really see what the messaging looks like. And micro-targeting, I think, is especially a concern. So micro-targeting is basically a candidate saying one message to one constituency and a different message to a different constituency. And thinking about misinformation and all of the challenges in our communication environment right now, when you've got multiple social media platforms and you've got many, many messages, it's easy for bad actors to hide bad messages in the massive volume of information. It's a needle in a haystack problem. So the goal, starting back in 2014, looking at governors' races in 2014 and then in preparation for 2016, was to start collecting and then building computational classifiers to categorize the type of messaging that we're seeing from political candidates. And so you mentioned the John S. and James L. Knight Foundation grant that we got. So in 2016 and 2020, we built an interactive dashboard. And so if you go to illuminating.ischool.syr.edu, you can actually go and see the dashboard and play with it. And we only have the presidential data. I have other data, like governors and senators and such, but for the most part that dashboard is presidential. And so you can see, for example, how much was Donald Trump attacking Hillary Clinton in the 2016 election? And here's a fun fact. It turns out that Hillary Clinton actually was more negative. That is, she attacked more in her advertisements and on her social media accounts than Trump did. So there's this assumption that Trump was more negative than Clinton in '16, but actually Clinton was more negative. Now there's different types of negativity. 
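[Editor's note: the computational classifiers Jenny mentions are trained machine-learning models; a toy rule-based version of the idea, labeling a post as "attack" versus "advocacy" from cue words, can be sketched as below. The cue lists, threshold, and example messages are invented for illustration and are nothing like the project's actual models.]

```python
# Toy message classifier: count cue words and pick the larger score.
# The real Illuminating classifiers are trained on labeled data; these
# hand-picked word lists are purely illustrative.
ATTACK_CUES = {"opponent", "failed", "corrupt", "lie", "wrong", "disaster"}
ADVOCACY_CUES = {"join", "support", "plan", "together", "vote", "future"}

def label_message(text):
    """Return a coarse category label for one campaign post."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    attack = len(words & ATTACK_CUES)
    advocacy = len(words & ADVOCACY_CUES)
    if attack > advocacy:
        return "attack"
    if advocacy > attack:
        return "advocacy"
    return "other"

print(label_message("My opponent failed this state."))    # attack
print(label_message("Join us and vote for the future!"))  # advocacy
```

Run over millions of collected posts, even a coarse labeler like this lets you aggregate by candidate and date, which is what makes a dashboard view of "how negative was each campaign" possible.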
And if you look at incivility, which we have a classifier for too, it turns out that Trump is a lot more uncivil than Clinton, or, coming to 2020, than Biden. So that sort of interactive dashboard, that's the beauty of data and data science: the ability to help digest large volumes of data and make it easier for other people to understand it, make use of it, and write stories about it. So yes, that's some of what we do with the data that we're collecting. There must be, pardon me, there must be a tremendous volume of data that we're talking about here, right? Because when you think about all of the signals that happen on a daily basis during a major election across all the different social media platforms, this has to be a very large quantity of data, right? Yes. I think somebody said that we filled up 13 servers' worth of social media messages. However much that is, I don't even know. But yeah, and especially because one of the things we've been doing since 2018 is collecting and analyzing ads that are run on Facebook and Instagram by candidates. So we started looking at Facebook and Twitter, basically the candidates' accounts. And then Facebook, in 2017, after a scandal involving Cambridge Analytica, which was a private company that was basically selling to political candidates the ability to micro-target advertisements on Facebook based on the personality characteristics of the targets. There's a holy grail quest in political campaigning, going back to micro-targeting, to try to match the right message to the right voter such that you'll pull them to you as a candidate. And so that's what Cambridge Analytica was selling. It's snake oil. Honestly, there's a bunch of issues there, but that's what they were selling. And of course there were data breaches in that, because the way that Cambridge Analytica built their predictive algorithms of Facebook users was actually based on basically ill-gotten Facebook data. 
So Facebook said, this is a problem. And so in 2017, they created the Facebook ad library. So since 2017, we've been collecting the candidate advertisements. And I have to say that's where the volume is most spectacular, because Mike Bloomberg, when he ran for president in 2020, wealthy billionaire, he ran oodles and oodles and oodles of ads. He broke all of our collections because there were so many advertisements. And as we come into this election season, we're running into the same issues, because we're not only collecting candidate advertisements now, but we're also collecting ads around the candidates. So what's interesting is that if I give Facebook evidence of my existence, basically my driver's license, I take photocopies of my driver's license and give that to Facebook, they will approve me to run ads. And so ordinary people run ads on Facebook around candidates. And then of course, political action committees, activist organizations, potentially bad actors from other countries, all can potentially run ads on Facebook. And tracking all of that is going to be one of our biggest challenges, actually, as we come into this election season. Again, going back to that exponential growth. I wouldn't even think of someone like myself, a common voter, paying for ads to support a political candidate. Yeah, it's wild. It's like the new Wild West. It's wild. You'll have people who will pay for an ad. It's basically them just talking into a microphone to some anonymous public, and then they slice it up and run ads from it. Yeah, the impact and effect is not clear, but hopefully we'll have a better sense of that this time around. So let's go back to something that you said earlier. We know there was evidence of Russia attempting to meddle in our presidential election. In 2016. In 2016, right? Was that a good example of micro-targeting? Or was that... That's an interesting question. Was it advertisement-based? 
Was it actors pretending not to be who they were and just spreading, you know, saying, this is a message that I want people to connect to? Yeah, so 2016 was interesting because there are a couple of different dimensions. Some of what Russia was attempting to do was to actually hack election machines. So there were municipalities in a couple of the southern states. Florida, I believe, was one. Georgia might have been another. Don't quote me on that, because I haven't looked in a while. But there was a set of municipalities where Russia was actually trying to hack into electronic voting machines. They were unsuccessful. There's no evidence that there were any inappropriate vote tallies coming out of that effort. So there's that. And that's a whole other conversation, just how we vote and susceptibilities. Maybe those are air-gapped, though. You know, they're not even on a network. I understand. That's exactly right. So that's one dimension. But the other dimension that caught the attention of Congress and the intelligence agencies was the effort by RT, which is Russia Today, basically a kind of Russian troll farm, people hired by the Russian government to spend time building social media accounts in the United States. So these are generally Russians who have good English, who then spend time building up what look to be legitimate Facebook or Twitter accounts. And then at some point they kind of shift from posts about puppies and, you know, what I ate for breakfast today, to political messages. And so there were coordinated efforts, especially in Florida, to try to organize rallies in support of Trump that were actually RT organizing those events. So it wasn't ordinary citizens in Florida saying, yeah, Trump; it was actually Russian actors trying to mobilize American citizens. 
In addition to that, Russian operatives were also creating a proliferation of Twitter accounts that basically were trying to sow division within the electorate, especially on racial lines. So both kind of anti, I don't know how to put this, kind of racist messages as well as kind of pro civil rights messages, in an effort to push Americans to basically hate on each other. And that effort to sow division in the United States is something that was really prevalent in 2016 and then actually continued a little bit in 2020. So, you know, this is part of one of the challenges with the internet, right? It brings out the worst in people, brings out the best in people. And there's a lot of misinformation out there. You know, you have children, I have children. I'm constantly correcting the facts that they get on the internet and things like that. And so can we talk a little bit about the current landscape as far as misinformation and what's going on in elections with misinformation? So there's a lot of concern about the potential for misinformation in this election season. You know, it's hard to not know a little bit these days about the concern of deepfakes, for example, which are basically ways for people to create videos that look like they might be, say, a video of former President Barack Obama saying or advocating for something when, in fact, it is entirely a digitally created avatar and a digitally created message that was puppeteered by somebody. And so deepfakes, you know, the concerns about generative AI, that is, basically the ability to pretty easily and quickly create fake representations of politicians, or journalists actually, is, I think, a genuine concern. Because, going back to what you mentioned earlier, with the proliferation of so many different social media platforms, it's really hard to monitor, to track. 
The messaging comes from many different sources, including politicians themselves, in terms of sort of pushing misinformation. And so it's a mess, honestly. And I think that it will be important for the public to be savvy about, I don't know if I can say this, but, you know, detecting bullshit. Yeah. And when something doesn't look right or feel right, to really question it and not just pass it on and say, look at this crazy thing, because that's how misinformation tends to spread. But it requires ordinary people to have a stronger bullshit detector. And I don't know how we help them with that. Congress has completely abdicated its responsibility in properly legislating this environment. So we still currently, at this moment, have no strong legislation other than a requirement that the social media platforms indicate when an advertisement on, say, Facebook or on Google has AI-generated content. That is not enough to help us navigate this election season. I was on Reddit, one of my favorite places. And in the Midjourney subreddit, there was someone who generated a photo of Hillary Clinton and Joe Biden sharing drinks, you know, and it was AI-generated and it looked very convincing, other than the fact that you wouldn't think Hillary Clinton and Joe Biden would maybe be drinking like that, right? You know, so it's a hard problem to get to the bottom of. But it is a problem that, in this next decade, we're going to have to wrestle with as a culture. Absolutely, because it's only going to get better, in my opinion. Not battered, better. It's going to get better. The AI is going to get better, and the situation is going to get worse. That's a good way to put it. And, you know, it is very concerning for things like elections, where you can be misled by falsehoods that are represented as truth, and sowing division, right? 
I mean, it could just be as simple as misrepresenting facts, but it could be even bigger in terms of just sowing division. Absolutely. And it's hard to be aware of what is truth and what isn't. Do you have any advice on that? I'm going to put you on the spot, because I just said how hard it was. But yeah. No, it's super hard. I mean, I think it's a multi-pronged effort that will be required. There's not a single solution to this. I mean, one is AI companies. So you think about ElevenLabs, one of the companies that makes it very easy to generate audio AI voices. And so, you know, I could sit down and pretty quickly build an AI-generated, basically fake Barack Obama voice that I could then layer onto, you know, a set of stills, right, just photos of Obama, and stitch that together into what looks like maybe an advertisement where Obama basically is attacking Joe Biden. And ElevenLabs and the other AI companies that are making these tools and technologies so readily available, I think, have a responsibility to think about how we're going to also build detection tools that the platforms and ordinary people can use as part of their browser, right? You know, a little browser add-in or a little app on your phone that allows you to say this was made by AI. Because honestly, if I'm a bad actor and I run an advertisement on Facebook, why would I say, oh, this was AI-generated? You wouldn't. So it is entirely up to the advertiser to sort of report that they're the one who made this ad and that it is AI-generated. So the companies that are building the generative AI need to do more and do better to help in the public sphere. The platforms, social media platforms, Facebook, Twitter, TikTok, they also, I think, have a much stronger responsibility than they're currently taking to help support good information in the public sphere. You know, the problem is that Facebook, TikTok, Twitter, etc., 
these companies are for-profit companies, and their bottom line is to shareholders, not to the citizenry, not to the public good. And because of that, they don't. In fact, they have actually reduced their integrity teams coming into this election season. I don't honestly know if X, or Twitter, even has an election integrity team this round. I know there have been layoffs at Facebook. I don't believe there have been any kind of new efforts to build up a set of staff whose job it is to really think through the policies and processes and practices by these platforms to help ensure a healthy information environment. So those are failures, right? So the AI companies, there are failures; the tech companies, there are failures. Congress, again, has not done enough quickly enough coming into this election to really tackle the problem. When the average age of senators is around 70, it also further challenges the ability for them to even understand what the technology is and how to regulate it. The Supreme Court currently is looking at a case that might even further challenge the ability of federal agencies to regulate in this space. So then that leaves the citizens and the politicians, right? So the politicians themselves, we have a new norm of, I don't know what the word is. The word that comes to mind is lawlessness, and that's probably not the right word. But once upon a time, even a decade ago, there was an expectation that politicians held themselves up to a certain standard of truthfulness, of decorum toward the opposition, toward the news media, toward people that they disagreed with. That currently doesn't exist. And as a result, there don't seem to be any norms holding politicians back from saying what's not true in the spirit of expediency. So that's a huge problem. And then the public, right? So then it's like, OK, well, you, American citizen, you better just get better at this and figure it all out. But that's also very, very challenging, right? We're busy. 
We have busy lives. We are overwhelmed with information. I'm a political junkie. This is my lifeblood. But for most ordinary people, politics is just one of many, many things they might pay some bit of attention to. And, unfortunately, the phrasing in political science is "low-information voter," which is kind of a derogatory term. But the idea is that ordinary people who aren't junkies like me don't have a lot of knowledge of actors, events. And as a result, they're more susceptible to misinformation, because they don't have a strong bullshit detector. And so that then leaves them to be more susceptible to incorrect information. Then you fall into the problem of ideology and identity. You know, ordinary people, some of them are really beholden to their political party, and it's part of their identity. And when that happens, when information comes to them that is false but aligns with their beliefs, they're more likely to believe it and then pass it on. And so that's also very much a challenge. So I've just laid out all the problems. I don't know what the solutions are, other than a multi-pronged effort by educators, by ordinary citizens themselves, by politicians, by Congress and the courts, and by the platforms to really take this problem seriously. Otherwise, I'll be honest, I actually fear for the health of our democracy. I can see that. I totally can see that. I am not a political junkie. I'm probably classified as, I won't go there, I don't want to say low-information, but maybe medium. But I would agree with you, I think, based on my understanding and knowledge of what the capabilities are as far as deepfakes in video. You know, for my own classes, I'm thinking about using voice printing so that I don't have to just keep speaking all the time. I could just write out what I want and then generate my own voice, right? And you're going to end up replacing the professor. I just want to replace myself. 
But you can see how these things can be used for not-so-good purposes, right? And yes, it isn't very realistic to expect that everyone out there is going to be able to figure out whether or not they're being duped. Right. And some people, if it aligns with their ideology, may not think about it as being duped. Exactly. And so there are a lot of different challenges here. And I think the biggest one, if I may summarize all the things that you said, which was so informative, is that the scale has changed. Right. I think these problems have always been there in some way or another. It's just that technology, as it does with everything, becomes this enabler that makes it all work harder and faster. And now things are doing what they do at a breakneck pace, right? And that's what becomes concerning. You know, we used to have the grapevine game where you'd spread a rumor, and by the time it got to the tenth person, it wasn't the same rumor. You mean the telephone game. The telephone game, yes. Those sorts of things now happen at breakneck speed, right? And the news cycles are so short. And, you know, I guess one way we deal with a crisis nowadays is just to let it pass, because it will go away. And so that ends up being a really big challenge. So, yeah, I hope we do find something. I think at some point we will level out with this, because like so many technologies that are disruptive, it will find a way to even out. But I think the biggest challenge, and one thing that you mentioned before, is that there's so much all at once right now. Between the deepfakes and the generative AI, we can generate text now, we can generate video, we can generate audio. It's a lot all at once. Yep. And it's all on your phone, right? It's so easily in your hands, in your hands all the time. And so it's almost like a virus, right? 
It sort of infects, and, I hate to use this metaphor, but it ends up infecting the electorate with problematic information. And then the public doesn't vote in their best interest; they're in effect being manipulated. And, you know, my home discipline is communication, and the communication discipline got its start primarily during World War Two with the study of propaganda. And I feel like we are in a new moment of propagandistic tendencies in democracies around the globe. And social media, like radio in its time, allows for this kind of rapid spread of information, and it allows bad actors to really potentially manipulate. So anything that I can do as a researcher to help protect or inform the public about what they should be watching out for or worried about, and also to help journalists and policymakers think about how to support a healthier information environment, that's what motivates me as a researcher. Well, that's fantastic. I appreciate you spending your time with me this afternoon and sharing your story. Is there anything else that you wanted to talk about before we wrap up? Oh, my gosh, I feel like we've been on such a gloom-and-doom note. I know, I know, it's very sad. But on the flip side, you know. I don't. Is there a flip side? I don't know if there is a flip side. You know, one of the things that I was thinking about is, you were mentioning propaganda, right? Propaganda used to come from the authorities: the news media outlets could spread propaganda, and the governments could spread propaganda. But now anybody can spread it. I guess if I have enough money, I can even run some advertisements and spread propaganda, right? Maybe I don't want a flat earth; I want an inverted sphere for my earth. But, you know, I guess that's not a good thing by any means. 
But it does allow people's voices to be heard. Exactly. Yeah. So I mean, I guess one of the good things in there is that the little person has a bigger voice now than they've ever had. Yes. Right. Yes. Absolutely right. And that democratizing potential. I mean, going back to 1994 and 1995, when the web was starting to really disseminate, that was the big dream: that this new digital technology would democratize. And it has, but it's gone a little warped. And so I think that's the effort as we come through the next year or two: to figure out how we put some checks and balances in place so that people can have a powerful voice without creating a toxic information environment. I think that's the challenge. And again, I think in the information studies space, we have a unique set of skills and understandings and talents that really allow us to speak to and help the public, and politicians in my context anyway, right? To help make a difference, just as you're helping companies think about how to use information technology for good. I think, in my world, I can help us think about how to build information technology for good in the public sphere. And so that's the upside, right? The power of the tools, the technologies, to make a difference. So I don't know. We'll see. It's going to be a really weird 2024, I think. But hopefully we come through this a healthier democracy than we are right now anyway. Well, I appreciate that. And thank you so much for your time. It was great talking with you. And I guess this wraps up our very first podcast.