Hello, welcome, and thank you for joining our OPA seminar today. My name is Jan Piasecki, and before we begin our discussion I would like to briefly introduce our project and, of course, our guests. The Web Immunization project is focused on resilience against online misinformation. Our team consists of researchers from the Institute of Pharmacology, Polish Academy of Sciences; Poznań University of Technology; Jagiellonian University Medical College; and the University of Oslo. We received funding from the EEA Financial Mechanism in the IdeaLab call operated by the National Science Centre, and you can find more information about our project on our website, on Twitter and Facebook (Web Immunization), as well as on our YouTube channel. The seminar is also being recorded and will be available later on our YouTube channel. Today we are in Oslo, and Jonas, from the Department of Psychology at the University of Oslo, will be the host and moderator of today's seminar. You can submit your questions in the chat, or you can raise your virtual, not physical, hand, and Jonas will take your questions.

Our guest today is Jon Roozenbeek, who will be speaking about how to counter misinformation using psychology. Jon Roozenbeek is a British Academy postdoctoral fellow at the Cambridge Social Decision-Making Lab. His research focuses on misinformation, vaccine hesitancy, online extremism, and inoculation theory. As part of his research he co-developed the award-winning fake news games Bad News, Harmony Square, and Go Viral!. Jon is also interested in social media research, agent-based modeling, and natural language processing; his doctoral dissertation, in 2020, examined media discourse in conflict zones, primarily in the self-proclaimed republics of Donetsk and Luhansk in eastern Ukraine. So welcome, Jon, and please take our virtual stage.

Thank you so much, Jan, and thank you, Jonas, for the invitation, and everyone else as well. Very happy to be here. Let me share my screen, just a second. How's this, visible? Is that a yes? Okay, great. I'll be talking for about 45 minutes to an hour. I'll try to keep it as brief as possible, because I'm very interested in hearing everyone's thoughts and ideas, and also in how to apply some of these insights, for example within the conflict in Ukraine. As Jan mentioned earlier, my PhD was about the media discourse in the war in Ukraine between 2014 and 2017-18, so this is a personally important issue for me, and I'm curious about your thoughts in that direction at the end.

This is the structure of the talk, and the reason I've structured it this way is that I want to get to the inoculation work, and what to do about misinformation, at the end; but before I do, I think it's important to talk a little bit about the philosophical, or epistemic, approach to tackling misinformation.
I use the word misinformation for a deliberate reason, and rarely or never "fake news" or "disinformation", because I think there are some interesting thoughts to be had about what to do about the problem depending on how you see the problem. Then, of course, there's also the interesting question of whether we even have a problem. I think the intuitive answer to that is yes, but there are also some people who say we kind of don't, which is an interesting thing to think about and worth taking seriously. And then: why do people fall for misinformation? If we have a pretty good idea of what the factors are that make people vulnerable to falling for misinformation, that informs our design decisions when it comes to interventions, et cetera. All of that will lead up to the final part of the talk, which relates to solutions; I'll briefly discuss fact-checking, debunking, and pre-bunking, and then the idea behind inoculation.

Okay, so you hear the phrase "fake news" a lot, or at least you used to; it's not as common now as it was, let's say, three, four, or five years ago. But this, I thought, was an excellent example of fake news. I'm not sure if you're familiar with this story; I'm sure most of you know The Onion, the satirical outlet in the United States that makes joke articles. The reason I mention this article is, number one, it's hilarious, but number two, some Chinese propaganda outlet thought it was real, which was amazing. They actually thought this was a real news story, so they copy-pasted it onto their website along with a collage of pictures of Kim Jong-un on his horse, and Kim Jong-un looking, you know, beefy, and all that, which I thought was awesome. For me, what this shows is: okay, sure, it's a fake news story, but there's no harm done whatsoever. Point being, just because something is false doesn't mean we should want to do something about it; satire is a really good example of false stories that we don't think of as harmful, despite the falsehood.

This, however, is a different story, and I think a very interesting example. This is a Facebook post by some guy named Quinn Cincoe, in some Facebook group for some area of Brooklyn, and he's very upset, if you look at the angry emoji. He's upset because he thinks these women are protesting to end Father's Day. It plays into a prior, a discourse that's tragically fairly common on the internet, about feminism having gone too far, feminists having become very extreme, and so on, to the point where they even seek to take away a simple joy like Father's Day. Now, the thing to note here is that, maybe you can't tell because it's too far away, but from close up it's pretty obvious this is photoshopped. This isn't a real protest; they were protesting something else, it doesn't really matter what, but someone photoshopped this in with a clear goal: to upset people and make them think that feminism has gone too far. So I think you can describe this as fake news with some accuracy, but it's clearly qualitatively very different from the previous story, which does no harm whatsoever. Here, what you get is a reaction that is very emotional, but also based on false information: you end up thinking there is an actual contingent of people, large enough to stage a protest, so extreme that they think Father's Day shouldn't exist.
So that gets to the point that when we talk about something like falsehoods, you need to qualify what you mean by falsehood: is it a harmful falsehood, is it not a harmful falsehood, what kind of falsehood is being promoted?

Now, that's all well and good, but that's not the only thing we're dealing with. This happened a couple of months ago; this was a March 4th article in The Guardian, a very interesting story that we're trying to explore a little with some colleagues, to see the extent to which it is the case. What you saw here was that there were a lot of bots on, let's say, Twitter and other social media outlets spreading misinformation about COVID-19: just low-quality and/or false and/or misleading information about the virus and the vaccine, et cetera. Now, when February 24th happened and Russia invaded Ukraine, all of a sudden those bots went silent, and a few days later they popped back up and started talking about Ukraine. For me, what this means is that there is a component to this that is inorganic. We don't exactly know how large that component is, but it's important to note, because it appears that a lot of the misinformation and disinformation is not only artificially propped up by bots, which are presumably backed by some state entity; it's even content-agnostic. Let's assume this is measurably true, and I don't know exactly whether that's the case, but let's assume the Russian government has been running bot accounts that spread COVID misinformation and then pivoted to Ukraine. That tells us that the people mounting these disinformation campaigns don't really care what they talk about, or they care only to a certain extent. The point isn't to talk about a particular topic; the point is to spread misinformation and disinformation in a general sense, and whatever topic that takes on depends on whatever societal issue is most important at the moment. Important to take into account.

Here's another important thing, I think. This was the most viral article on Facebook of, I don't know, the first half of 2021, or all of 2021; it doesn't matter, it was at the very least shared millions and millions of times. There are a couple of problems with this headline. Number one, every single word in it is true: all of this did happen, and the outlet that published the article, the Chicago Tribune, is perfectly legitimate; it's a decent newspaper, not a problematic one at all. However, what I think everyone can agree on is that the implication of this headline is highly problematic. The reason it was shared on Facebook millions of times, I think it's safe to assume, isn't that people thought, "oh, that's academically interesting, let's have a calm, reasoned discussion about whether that might be the case." No, of course not. They shared it because they thought the doctor died because he got vaccinated, which isn't the case, or at least there's no evidence of that. If you look up this article, the first paragraph says there is no evidence the doctor died because of the vaccine, but the headline doesn't exactly reflect that doubt.
So that ties into my earlier point about falsehood and veracity: falsehoods aren't necessarily the problem in and of themselves. A lot of what we see that we can reasonably classify as misinformation, which I think this falls under, under a reasonable definition, isn't incorrect information; it's misleading, decontextualized, sprinkles of truth, highly polarizing, et cetera. And that is a bit of an issue, because it means that when we talk about what to do about the problem, I personally would say you can't define the problem as false information that people are consuming, because you'd be missing out on a really important part, if not the majority, of the content that is relevant to tackle.

Another dimension to this problem is this article, published by my colleagues: Steve Rathje, a really good PhD student at our department, almost finished; Jay Van Bavel, a professor at New York University; and Sander van der Linden, my colleague here in Cambridge. The nice thing about publishing in PNAS is that they make you write a title that summarizes the entire article. What they found in this study was that the type of content that generates retweets and likes and all these kinds of things online is, by far most strongly, content expressing outrage or anger or negative emotions about other groups. The study was done in the US, so in a Democratic echo chamber, or a liberal echo chamber, content that is negative about Republicans is most likely to go viral, and vice versa: more so than just generally negative language, more so than positive language, more so than humor, et cetera. That's a bit of an issue, because that type of content isn't necessarily conducive to accuracy. It can be, meaning sometimes it's reasonable to be negative about members of other groups, for sure, you can have good reasons to be; but if that's a general dynamic driving virality on social media, it plays into the misinformation debate to a large extent, I think.

Okay, so just to summarize: there's a lot of debate going on within the misinformation research community, as many of you will know, about how to define misinformation, and this is my perspective on it. A lot of definitions focus on veracity: is the headline you're showing people true or false? I would say the majority of research does that. But there are other definitions where you look more at intent, where determining whether something is harmful is decided by whether the producer is intentionally seeking to deceive or manipulate their audience; disinformation often falls under this definition. What I would argue is, number one, if you're just scrolling through your feed on Twitter, let's say, you can't always discern the intent behind a post. You don't really know if it's a Russian bot or a Chinese bot or an American bot; you have no real way of knowing just from looking at the content. So okay, intent is important, but from a practical perspective it's difficult to use. And then there's the problem with truth: even intentionally manipulative messages spread by dishonest actors can be true. That happens all the time: decontextualized, maybe, but true nonetheless, and seeking to fuel polarization rather than spread false information. So my only point is, if we focus solely on veracity as the problem, we're missing out on way too much, which is why we don't tend to do that. More on that later.
Again, just to summarize: what we tend to look at when it comes to misinformation isn't necessarily truth or falsehood, but rather whether you can determine that manipulation or misleadingness is going on. Can you pick up on certain known strategies, logical fallacies, et cetera, that are being used, that would make you suspicious as to the reliability of the content? There are many of these: emotional language, or evoking emotion, like the example I mentioned before; conspiracies; impersonation; but also logical fallacies, and so on. So that's the more holistic, or bottom-up, definition of misinformation that we tend to employ. That was point number one, and I guess a lot of food for thought.

Point number two is: that's all well and good, but do we have a problem at all? Some researchers, for instance a few at Sciences Po in Paris, Sacha Altay, Hugo Mercier, Manon Berriche, whose work is super interesting, published a preprint recently, I'm not sure what stage of review it's in, where they make a very interesting point. They basically say: look, people aren't really exposed to misinformation all that much; very few people ever see a fake news headline. So what we should do, and I'm summarizing and not doing justice to their argument at all, it's worth reading, is, rather than seeking to combat misinformation, foster engagement with reliable sources and reliable information, rather than trying to make sure that people disengage from misinformation, simply because you're far more likely to encounter a useful or reliable source than to encounter misinformation. Now, I think that's a very interesting argument, and it holds up quite well, if you ask me, if you follow the veracity definition. Fact-checked false content isn't that common; that's correct. Number one, it's very difficult to fact-check everything, so the majority of false content, I would say, is never fact-checked, simply because of capacity. But it also depends: if you're only focusing on true versus false, that story rings much more true than if you also focus on misleading content, manipulative content, et cetera, as I've tried to argue before. So here again, whether we have a problem is partially contingent on your definition.

Nonetheless, I'm sure you've seen these kinds of headlines. During COVID, in Iran, people died after they drank a poisonous concoction to cure COVID. Brazil is an obvious case of people in high positions, such as President Bolsonaro, clearly expressing false beliefs about the virus, that leading to policy, and those policies being fairly disastrous and leading to a lot of deaths. And then in the UK, I guess slightly more innocuously, because it was fairly localized and nobody died, people were so convinced that 5G gave you COVID, or something, I'm not exactly sure about the theory, that they set fire to 5G masts in a couple of instances. These are, I think, real-life examples of the consequences that misinformation, and belief in misinformation, may have.

A bigger concern, at least in the early stages of the pandemic, was: is belief in misinformation linked to decision-making when it comes to vaccines? In theory, what you would think is: if someone believes misinformation, they're less likely to get vaccinated.
And so these two early studies, one by us, our lab, and one by LSHTM, the London School of Hygiene and Tropical Medicine, both showed pretty much the same thing: there is a measurable, independent correlation between belief in misinformation about the virus and vaccination intentions, meaning not actual vaccination. That's a good indicator that we might have a bit of an issue, in the sense that the more people believe misinformation, the more likely it is that they are not happy about getting vaccinated. But as all of you know, correlation does not equal causation.

So we teamed up with these researchers, specifically Sahil Loomba and Alexandre de Figueiredo, to address this question properly, and that's this paper, which is under review at Science at the moment, so we'll see if that goes through; it would be amazing. What we did was check whether misinformation belief is correlated with actual vaccine uptake: not self-reported vaccine uptake, but actual vaccination rates. We had a sample of maybe 16,000 people in the United Kingdom, a huge sample, and we had their postcodes, meaning we knew approximately where our respondents lived. We also had a standardized, psychometrically validated instrument to measure how likely someone is to believe misinformation. What we found, and the title again sums it up, is that the better a geographical unit, let's say a county, was at detecting fake news, that is, the higher its average ability according to the standardized instrument, the measurably higher the vaccination rates in that county. And this association held up if you controlled for things like age, gender, education, and voting behavior. That's a big thing, I think: how good you are, as a general skill, at identifying whether something counts as misinformation, that skill being higher in a particular region, is correlated with, or predicts, actual vaccine uptake. For me, that means we really do have a problem, and it's worth tackling this skill somehow, because there's a very reasonable assumption to be made that improving it actually leads to higher vaccine uptake.
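As a rough sketch of the kind of area-level analysis described here, and not the authors' actual pipeline, one could aggregate the survey scores by geographical unit and regress local vaccination rates on the area's mean detection score with demographic controls; all file and column names below are hypothetical.

```python
# Hypothetical sketch of an area-level (ecological) regression:
# does a county's average ability to detect fake news predict its
# vaccination rate once demographics are controlled for?
import pandas as pd
import statsmodels.formula.api as smf

# One row per geographical unit, built by aggregating individual survey
# scores (via postcode) and joining official vaccination statistics.
df = pd.read_csv("county_level_data.csv")  # hypothetical file and columns

model = smf.ols(
    "vaccination_rate ~ mean_detection_score + mean_age + pct_female"
    " + pct_degree_educated + pct_conservative_vote",
    data=df,
).fit()

# A positive, significant coefficient on mean_detection_score is the
# pattern described in the talk: better detection, higher uptake.
print(model.summary())
```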
Okay, great. Now, on to the next part of the talk. All of that is well and good; now we know there's a problem, to a certain extent, and we have a bit of an idea about the scope and the definition of the problem we're talking about. There's a lot of research that has been done in the last couple of years, mainly since 2016-17, into why people believe misinformation, because, for me at least, this informs our decision-making when it comes to designing interventions. The way you usually do this in psychology is to identify predictors of misinformation susceptibility. A very famous one is analytical thinking, or cognitive reflection test performance. One of the questions in this test is: "Emily's father has three daughters: April, May, and...?", and you're supposed to give the answer. The intuitive answer is June, but that's incorrect; it's Emily, because at the beginning it says "Emily's father". That's the kind of question; it's supposed to be a measure of intuitive versus reflective thinking. The work by Gordon Pennycook and David Rand argues that this is the most important predictor of why people believe misinformation; a sort of reflective open-mindedness, I think, is what they call it in their work. It basically means that your score on this cognitive reflection test should be the biggest predictor of how likely you are to fall for misinformation.

There's another body of work, which is Jay Van Bavel's theoretical contribution. He argues, and some agree with him, that it's not necessarily analytical thinking but rather identity; he has an identity-based model of misinformation belief. People are motivated by accuracy to a certain extent, they want to hold accurate beliefs, but these accuracy-related motivations can be overridden in some cases by identity considerations, such as political ideology. So that's Van Bavel's model. In the study I showed earlier, we found that numeracy skills, the simple ability to solve a math problem, were the strongest predictor of reduced COVID-19 misinformation belief in five countries; that was a fairly consistent predictor. It wasn't that we had anything theoretical to say about it; I'll be honest in saying that I'm not a theoretician and don't understand theory very well, but it's a correlation, so you might as well test it. And then, within the context of the United States, a big predictor is political partisanship, specifically conservatism. There's a lot of work showing that in the United States, and I do want to specify in the United States, because this doesn't hold up everywhere, identifying as more strongly conservative is related to a reduced ability to identify misinformation across a range of settings. It's not necessarily, how do you say this, a function of, let's say, the particular headlines or whatever it might be; it seems to be a pretty robust predictor in and of itself.

So we did a study recently where we had the opportunity to test which of these factors, which of these models, are the most robust, and that's this one, which came out in May of this year in Judgment and Decision Making. We collaborated with a lab at the Max Planck Institute led by Stefan Herzog, and it was a really nice study. One of the things we studied was whether it matters what question you ask. If you want to measure how susceptible someone is to misinformation, you can give them a bunch of headlines and ask how accurate they think these headlines are, but you can also ask other questions: how reliable do you think these headlines are? Are these headlines real, or are they fake? There's a bunch of these questions that have been asked; question framings, we call them; and then also response modes, meaning how many scale points you use: two, true or false, six, seven, et cetera. What we found was that the question you ask doesn't really matter, which is good: it measures more or less the same construct, which is really nice if you want to do a meta-analysis. The other part of this paper was about the most robust predictors, and this is what we found; this is the figure from that study. On the x-axis you see the actively open-minded thinking (AOT) score, cognitive reflection test performance, numeracy test performance, and political ideology, one being highly liberal and seven being highly conservative.
The higher the score here, the higher the ability to discern true from false information, or misinformation from non-misinformation; it goes from 0 to 1. Almost nobody scored 0, but people do score 1. What you want to look at is how steep the line is, averaged across the different ways we measured misinformation susceptibility: the steeper the line, the stronger the predictor. What you find is that the line for actively open-minded thinking is super steep; the slope for political ideology is fairly steep, meaning we find that same association, that conservatism is related to a reduced ability to identify misinformation; numeracy is okay, fairly robust, but with some floor effects, as you can see, because almost nobody scored zero on that test; and for cognitive reflection test performance the line is completely flat, except for one condition, and even that is close to nothing. So actively open-minded thinking was by far the strongest predictor of misinformation susceptibility, followed by conservatism, and again this was done in the United States; political ideology and numeracy were about the same; and finally cognitive reflection test performance. What we conclude is that if you have a standardized instrument for measuring misinformation susceptibility and you test all of these accounts alongside each other, the identity-based account wins by a lot; the classical reasoning account doesn't perform very well. That's also borne out by some other studies, for example by Borukhson et al. from a couple of weeks ago, I think, where they tested various models of misinformation belief alongside each other.
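To make the predictor comparison concrete, here is a minimal, hypothetical sketch of the underlying logic: each participant gets a discernment score between 0 and 1 (the share of headlines correctly classified), and the standardized predictors are entered into one regression so their slopes are directly comparable. Column names are made up; the published analyses are considerably more involved.

```python
# Hypothetical sketch: comparing predictor slopes for misinformation
# discernment (0 = nothing correct, 1 = every headline classified correctly).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("participants.csv")  # one row per participant; made-up columns

# z-score the predictors so that a steeper slope means a stronger predictor
for col in ["aot", "crt", "numeracy", "ideology"]:
    df[col] = (df[col] - df[col].mean()) / df[col].std()

model = smf.ols("discernment ~ aot + crt + numeracy + ideology", data=df).fit()

# The pattern reported in the talk would show up here as a large
# coefficient for aot, a sizable one for ideology, and a near-zero crt slope.
print(model.params)
```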
Okay, so to summarize: as is always the case, various factors play a role in why some people fall for misinformation, and it seems that both identity and thinking style are usually important. Analytical thinking ability does play a role, I think, but a smaller one than we had originally anticipated. An underlying debate that I haven't really addressed here at all is: well, maybe there are simply latent factors that explain all of this. If you're ever interested in having a really interesting talk with someone, I would highly recommend Leor Zmigrod, who is a research fellow here in Cambridge. She works on things like cognitive rigidity: maybe it is cognitive flexibility that underlies much of this, not necessarily identity, and so on. But I won't get into that; she's far better at it than I am, so I won't bore you with the details. What I do think is that because there isn't an exclusive identity component, it is not the case that people believe misinformation simply because they have a particular identity; it's a trainable skill. And that's the thing I want to discuss next: what can we do about this problem? Luckily, I think pretty much everyone in the field is convinced that misinformation susceptibility isn't a fixed construct; it can move. People can improve this ability somehow, through learning or in other ways.

I won't go into the details of all the kinds of interventions that exist, because that would be too boring, but a very common one is fact-checking. First of all, we know that fact-checking is incredibly useful and tends to be very effective at correcting misperceptions; or, at the very least, it doesn't backfire, and in many cases it works. But as we discussed before, it's not always so clear what the facts are, and it's much more difficult to fact-check misleading content than false content, because you can't slap a label on it that says "false"; you have to be very nuanced, which means you have to take a lot of time, people interpret it the way they want, et cetera. Another difficulty with fact-checking is that if someone believes misinformation, let's say they believe the earth is flat, and you explain, "well, actually, the earth is not flat, it's round, for reasons X, Y, Z," then even assuming they believe you, it's not as if the effect is completely gone. There's a residual memory of the misinformation that continues to hold some kind of influence; it resides in memory. This is called the continued influence effect: the correction doesn't completely undo the misinformation in its entirety. Another issue is the illusory truth effect: repeated exposure to misinformation increases its perceived reliability, meaning the more often you see any individual example of misinformation, the more likely you are to believe it, simply by virtue of repetition; that's especially dangerous if the misinformation is coming from multiple sources. There's also the issue of source credibility. One thing that I think is often underestimated is that you can fact-check something, but if the source of the fact-check isn't trusted by the person who believes the misinformation, they're not going to adapt their attitudes whatsoever, because they think: I don't trust the person telling me this; why would I believe you?

And this last point, I think, is also often underestimated, and it's very interesting to explore. Fabiana Zollo, a professor at the University of Venice, published an article in 2017 which is extremely interesting; it's called "Debunking in a world of tribes", and it's worth a read. What she did was look at engagement with fact-checking accounts, I believe on Facebook: the likes and comments on posts by these fact-checking accounts. What she also did was identify two communities online: one community that is polarized towards science, and one that is polarized towards conspiracy theories. That doesn't mean the conspiracy community only shares conspiratorial content and the science people only share scientific content; it's more of a preference. And what you find is very, very obvious: the conspiracy community doesn't like or interact with these fact-checking posts pretty much at all. That's a serious issue, I think, that is worth thinking about when it comes to debunking and fact-checking: how do you make sure you reach people from the communities that are polarized towards misinformation, i.e., who would benefit more from being corrected, let's say? That sounds very patronizing, and I don't mean it that way.
But if your fact-checks don't reach these people, are we really doing something very helpful? From an objective standpoint I would say yes, but maybe there's more we can do to increase engagement among the communities that aren't already polarized towards science. Again, this doesn't mean I think we shouldn't be doing any fact-checking; I think we really, really should, but there are certain limitations that are difficult to ignore.

Okay, that was a very long story, and I hope I haven't bored you, but the idea is that it informs a lot of the decision-making that went into why we designed the interventions the way we did. We like to use this quote: if you're faced with this kind of complex problem, it's probably a good idea that your own approaches are also a little bit flexible and inventive, as Professor Snape pointed out in Harry Potter. What we wanted to do, alongside debunking, I shouldn't say instead of it, is pre-bunking. I think this word was originally coined by John Cook, who is now at Monash University in Melbourne, Australia, the guy who created Cranky Uncle, if you're familiar with it; a really nice guy and really good at his job. The idea behind pre-bunking is that you want to reduce the probability that someone falls for misinformation in the first place, which hopefully has all sorts of beneficial downstream consequences. The way we've approached this pre-bunking idea is through psychological inoculation, which is basically a metaphor, originally from the 1960s. The idea is this: a medical vaccine is, let's say, a weakened version of a virus; if it's injected into the body, the body thinks it's sick and starts creating antibodies, so that when you're hit with the real virus, you don't get as sick, or not sick at all. Psychological inoculation works basically the same way. You preemptively present people with a weakened version of the misinformation you're trying to tackle: you warn them first, "hey, wait, hold on, someone might be trying to manipulate you," and you preemptively refute the misinformation: "people might tell you X, Y, and Z; X, Y, and Z aren't correct, for reasons A, B, and C." What that should do is trigger the production of what you might call mental antibodies; there's no chemical that does that, of course, but metaphorically. That then induces increased resistance to future exposure and persuasion attempts, meaning you already have a defense system that prevents you from being persuaded as much as you ordinarily would be.

Now, what my colleague Sander van der Linden looked at in 2017, before I joined his lab, was: can you use this inoculation idea to prevent people from being persuaded by climate misinformation? This is a graph from that study. What it shows on the y-axis is the shift in perception, before and after, of the scientific consensus on climate change. The question was: what percentage of scientists do you think believe climate change is happening and is man-made? If the shift is positive, it means perceptions shifted in the right direction; if it's negative, it means they shifted in the wrong direction, meaning people end up believing that the scientific consensus is lower than it actually is.
So: if you tell people that 97% of scientists believe climate change is real and happening, their perception shifts up by quite a lot. If you show them misinformation, in this case the Oregon Petition, a fake petition about scientists supposedly having signed a declaration claiming that climate science is false, their perception shifts in a negative direction, which is a serious problem. If you present both the misinformation and the simple fact side by side, there's no shift whatsoever, which goes to show the power of misinformation. And then these two are the important ones. If you provide a general warning plus a partial refutation of the misinformation, a "partial vaccine", the shift in perception goes up, meaning there's a partial recovery of the original simple-facts effect, even if you show people the misinformation. That's good. And with a "full vaccine", where you pre-bunk, I should say pre-bunk rather than debunk, the misinformation petition in full, by pointing out that there was no access barrier, that people signed it who had no idea what they were doing, that supposedly the Spice Girls signed it, and Charles Darwin signed it, the recovery is even larger. So that is a good proof of concept of the power of using inoculation to prevent people from falling for misinformation. Of course it's not 100% effective, but then again, we all know the COVID-19 vaccine isn't 100% effective at preventing people from acquiring COVID either, and it is still very effective.
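As a toy illustration, not the original analysis code, the outcome in that climate study can be thought of as a pre-to-post shift in each participant's estimate of the scientific consensus, averaged within each message condition; the data and column names below are invented.

```python
# Toy illustration of the consensus-shift outcome: participants estimate
# "what % of climate scientists agree?" before and after their assigned
# message, and we compare the mean shift across conditions.
import pandas as pd

df = pd.read_csv("consensus_study.csv")  # hypothetical file and columns
df["shift"] = df["post_estimate"] - df["pre_estimate"]

# Positive mean shift = moved toward the true 97% consensus; negative =
# the misinformation pulled estimates the wrong way; ~0 = they cancel out.
print(df.groupby("condition")["shift"].agg(["mean", "sem", "count"]))
```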
Okay. Now, that gets to the heart of our research program, to a large degree. That's all well and good, but I can lie, or spread misinformation, much faster than an entire audience can fact-check me; there's an asymmetry in terms of how much effort you have to invest. With inoculation, if you design an inoculation for every single example of misinformation, that might be useful in limited circumstances, but it also limits you, because there's a scalability issue. So what we've been doing is identifying, as I mentioned before, several common misinformation or manipulation techniques, such as emotionally manipulative language, conspiratorial reasoning, trolling, and logical fallacies: techniques that we know are common, and that we can define as manipulative or misleading according to, for example, formal logic, or in other ways that are reasonably objective. The advantages of such an approach are that it's source-agnostic: you don't have to tell people "this source is good, this source is bad," which runs into issues of distrust. You don't have to make a truth claim, which is very useful, because oftentimes people dispute what is and isn't true; in some cases that's fair enough, and in some cases it's just being obtuse. And it's more scalable, as I mentioned.

Now, for the metaphorical syringe of this inoculation, there are many possible syringes. You can have a piece of text, as in my colleague Sander's study from earlier, but you can also create a game, and that's the idea here: the method of inoculation, in this case, is playing a simple game online. We have three of these: Bad News (getbadnews.com), Harmony Square, and Go Viral!. Bad News is about misinformation generally, Harmony Square is about political, electoral myths and disinformation, I suppose, and Go Viral! is about COVID.

What does one of these games look like? Each game consists of a couple of levels, and in each level you learn about a particular manipulation technique: you learn how to use it, or how it's used, in online environments. First you pick an avatar, evil genius, Carmen San Francisco, et cetera, and you pick a name; that's your character. Here you see that in Harmony Square, the premise is that you're hired as "chief disinformation officer", and your job is to sow discord in Harmony Square. You learn how political operatives tend to exploit polarization and increase intergroup distance, and so on; that's how the game teaches you how you might be manipulated.

And how well does that work? This is just one example, from a recent study. If you assess how susceptible someone is to misinformation before playing and after playing, what you want to see is that they're less susceptible after playing. You can also do this with a control-group design, which we've done many times, but just to give a simple illustration: here you find a significant reduction in the perceived reliability of misinformation headlines, from 2.58 to 2.22, while for real news, non-misinformation, the reduction is far smaller. There's a bit of a debate about that, but at the very least the idea is that people become more discerning about what is manipulative content and what is not. To summarize that body of research: if you play one of these games, the perceived reliability of misinformation goes down, people become more confident, and they are less likely to indicate they want to share misinformation with others. We've also done some longitudinal studies, and these effects remain significant for a week or more, depending, and longer if people are given short reminders, or booster shots.
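A minimal sketch of the within-subjects comparison just described, using simulated numbers rather than the study's data: participants' mean reliability ratings for misinformation headlines are compared before and after playing with a paired t-test.

```python
# Simulated pre/post comparison of perceived reliability of misinformation
# headlines (7-point scale), mirroring the ~2.58 -> ~2.22 drop described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
pre = rng.normal(2.58, 0.8, size=500)          # per-person mean rating, pre-game
post = pre - rng.normal(0.36, 0.5, size=500)   # same people after playing

t, p = stats.ttest_rel(pre, post)  # paired test: same participants rated twice
print(f"pre={pre.mean():.2f}, post={post.mean():.2f}, t={t:.2f}, p={p:.2g}")
```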
So those are the games, but there's a problem with some of these games, with all of them, really: they have an opt-in barrier, meaning everyone has to make a decision and a commitment to play the game. That means you have to make the games entertaining and so on, but even if you make a very entertaining game, and I'm not sure we have, the vast majority of people are never going to want to sit down and play it. So we collaborated with Jigsaw, a branch of Google, to expand on this idea: we wanted to create short videos that can be shown as public service messages, or ads, on video-sharing platforms. Here you do the same thing: each video explains a particular manipulation technique. These are the five we've created: emotionally manipulative language (fearmongering), incoherence, false dichotomies, scapegoating, and ad hominem attacks. What you hope to see is that people who watch one of these videos are better at identifying these techniques in social media content. Let me check the time; I should have some time. This is an example of one of these videos; do let me know if you can't hear the sound. Here we are. Can you hear the sound? Is the microphone on? It's off; we can't hear the sound. You cannot hear the sound? Annoying; I don't know why that is, but I guess I'll just skip it. Either way, I'll put the link to the website in the chat, and for those of you who are joining non-virtually, the link is www.inoculation.science; that's the website that has all these videos. I don't know why it's always so annoying with these things, that for some reason they don't project sound properly.

So we did a series of studies, about seven in total, that I'll discuss. This has just been accepted; it's not online yet, but we have the DOI, so it's there. We ran six lab studies, one for each video, and then one replication for one of the videos, and we wanted to figure out: do people actually improve in their ability to detect these manipulation techniques? And that's the case: they do, they're more confident, they consider social media content that is manipulative, meaning it makes use of these manipulation techniques, to be less trustworthy, and they're less likely to be willing to share this type of content with other people. I'm happy to get into the study designs, but we're very confident about these conclusions.

But there's a really important problem within the misinformation research space, I think, which is this. I'm not sure if you've seen this article; it's super interesting. It's by Stefano DellaVigna and Elizabeth Linos, and basically what they did was test lab studies, in this case about nudges, against field studies where those nudges are actually implemented in real life. What they find is underlined here: in the lab studies, the average impact of a nudge is very large, about 8.7 percentage points of uptake, something like a 33% increase over the average control; I don't know exactly how to evaluate those numbers, but it doesn't really matter. In field studies, the effect is still, as they call it, sizable and highly statistically significant, but much smaller, at 1.4 percentage points, meaning it's reduced by roughly a factor of six, depending; it isn't entirely clear, but at the very least the impact is much smaller in field studies than in lab studies. For me, what that means is that you need to make sure the interventions you design start out with a good, very robust effect size in the lab, because otherwise you simply don't have enough effect left when they're finally implemented in real-world environments.

Luckily, the effect sizes we had for the videos were pretty good; not the world's greatest, but good enough that we were confident there would be something left. And what we had the opportunity to do was run a completely ecologically valid inoculation campaign on YouTube. We took two of these videos, the one I tried to show just before, about emotional language and fearmongering, and the one about false dichotomies, the "if you're not with me, you're against me" type of thing, and we showed them as ads to hundreds of thousands of YouTube users. Within 24 hours of watching one of the videos as an ad, and this isn't exactly correct, I should update this description, users were also shown a single survey question: here's a headline; can you tell me what manipulation technique, if any, is being used in this headline? There are four answers they can give, one of which is correct. We also had a control group, which didn't get the inoculation video but did get the survey question, and we had three of these headlines, or items, per video, per study, with a total N of about 22,000.
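A back-of-the-envelope version of the key comparison in that field study, with illustrative counts rather than the real ones: did the ad-exposed group identify the manipulation technique correctly more often than the control group?

```python
# Illustrative two-proportion test for one survey item: treatment
# (saw the inoculation ad) vs. control (survey question only).
from statsmodels.stats.proportion import proportions_ztest

correct = [6160, 5500]   # hypothetical numbers answering correctly
nobs = [11000, 11000]    # respondents per group

z, p = proportions_ztest(correct, nobs)
diff = correct[0] / nobs[0] - correct[1] / nobs[1]
print(f"difference = {diff:.1%} points, z = {z:.2f}, p = {p:.2g}")
```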
And these are the results. It gets really complicated, but this is the number that's most important to pay attention to: on average, the treatment group was about five to ten percent more correct than the control group, and highly significant; the p-value is basically zero. You can see that this first item didn't work, but the other items, except for this one, which is very close to significance, worked like a charm. So in this completely ecologically valid study, we found that you can reduce susceptibility to misinformation, improve someone's ability to identify misinformation techniques, not individual examples of misinformation, but techniques, even in an environment as noisy as YouTube. We were very happy to see that the effect is robust even when you implement it in an environment like that.

So that's pretty much the end; I hope I didn't bore you too much. What I've wanted to convey is, first of all, that defining misinformation isn't exactly easy, which informs some of our design choices: what is true isn't always clear, and manipulation seems to be more common than outright false content. There's a variety of reasons why people believe misinformation, which include identity, myside bias, and, to some extent, analytical thinking ability. Luckily, it seems that misinformation susceptibility is a movable skill, meaning it can be trained. Something I didn't really discuss, but which is certainly true: I've talked about susceptibility and reducing susceptibility, but what's far less clear is to what extent you can change behavior. Can we design psychological interventions that have a measurable impact on the amount of dodgy things people share online? Unknown, so far. Some of the work we're currently doing: number one, can misinformation exposure influence the results of an election? That's a very interesting study; I'll have a lot more to say about it in half a year or so. And another one: because so much of misinformation seems to be related to polarization, bias, open-mindedness, and so on, can we leverage those insights to reduce polarization on social networks, which hopefully has downstream effects that we think might be interesting? All right, I've talked a lot. Thank you so much, and I'm very happy to take any and all questions, comments, and criticism.

Can you hear me? Yes? Thanks a lot for an interesting talk; you touched upon a lot of topics that are very interesting for what we are doing as well. Let's start the discussion: does somebody want to ask a question? You might need to speak up.

Hi, this is Nicholas from the University of Technology; thanks for the talk. During your discussion of the paper titled "Ability to detect fake news", you mentioned that you are using your own standardized tool to measure susceptibility to misinformation. Could you please share a few words about how this tool is organized? What are its principles? How do you measure susceptibility?

This is a project that was led by one of our PhD students, Rakoen Maertens, together with Friedrich Götz at the University of British Columbia. The idea behind it was that you need a psychometrically validated set of headlines that people rate in some way, as true or false, let's say.
What we did was use GPT-2, a natural language generation algorithm: we fed GPT-2 a corpus of a bunch of false headlines and asked it to spit out about 400 headline examples of the kind it considered false. We started with about 400 true headlines as well, taken from normal sources like the Associated Press, and so on. That was our starting headline set, which we then whittled down, through a series of iterations of psychometric testing, item response theory, and so on, to 20 headlines in total: 10 false and 10 true. Participants rate each of these headlines as true or false. The nice thing about this is that there's a lot of psychometrics behind it: we know that each individual item is interpreted the same way by left-wingers and right-wingers, for instance, and by men and women, and so on. The interpretation of the headline is the same, which means that getting it correct or wrong means the same thing whether you're a liberal or a conservative; it's item response theory, and it gets very complicated. It took a year and a half to design the damn thing, but we're pretty convinced that it works now, in the sense that if you take this test you get a particular score, or a series of scores, and these scores are pretty indicative of a skill. It's not exactly like an IQ test, in that it doesn't work as an individual performance metric, but it does work as a group-level performance metric. You can very safely say that if group A performs better on this test than group B, then group A is on average better at detecting misinformation than group B; that's a safe conclusion to draw. It doesn't mean that if I take the test and you take the test and you score better than me, you are better at detecting misinformation than me; you can't say that. It's a bit like the idea behind an IQ test: if you have a higher IQ than me, that has certain connotations for intelligence, and so on; that's tendentious, I know, but that's the idea of an IQ test. This isn't exactly the same, but it's the closest thing we have, if that makes sense.
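As a simplified sketch of how such an instrument yields group-level, but not individual-level, comparisons, assuming a plain proportion-correct score and ignoring the item response theory modelling that the real test relies on:

```python
# Simplified scoring sketch for a 20-item true/false headline test
# (10 real, 10 fabricated); the real instrument uses item response theory.
import numpy as np

KEY = np.array([1] * 10 + [0] * 10)  # 1 = real headline, 0 = fabricated

def group_mean_accuracy(responses: np.ndarray) -> float:
    """responses: (n_participants, 20) array of 1='real' / 0='fake' ratings.
    Returns the group's mean share of correctly classified headlines."""
    return float((responses == KEY).mean())

# Per the talk, scores are safe to compare at the group level only.
rng = np.random.default_rng(7)

def simulate(n: int, accuracy: float) -> np.ndarray:
    correct = rng.random((n, 20)) < accuracy  # True where item classified correctly
    return np.where(correct, KEY, 1 - KEY)

print(group_mean_accuracy(simulate(300, 0.75)))  # group A, ~0.75
print(group_mean_accuracy(simulate(300, 0.65)))  # group B, ~0.65
```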
All right, thanks. More questions?

Yeah, I have one question; can you hear me? Hi, Alex from the University of Oslo. Earlier in your talk you said that conservatism was one of the most robust predictors of susceptibility to misinformation, and I was just wondering: what kind of misinformation did you present to those individuals? Because not that long ago there was a paper published which said that both extremes, both the extreme left and the extreme right, score far higher on conspiracy mindsets. So I was curious what kind of misinformation you presented to the participants.

It was the same psychometrically validated test. We were very careful not to have the items themselves be biased, as much as possible, and we checked for this in a hundred different ways, also because the editor of the journal, Jon Baron, who is super nice, made us check it in a hundred different ways to see whether we were wrong. So we're fairly convinced that what we find isn't a function of the item set.

Okay, thank you.

Okay, great. Jon, I have a question that relates back to the question you asked, whether misinformation is a problem, and to the effects you talked about in the domain of health. We published a meta-analysis in Current Opinion where we see that, I think, one percent of health behavior is explained by conspiracy beliefs longitudinally; so correlational effects are bigger, like you said, but longitudinally they're very small. So my question is: what are the domains you think are most affected by misinformation? And, connected to that: if we take the perspective of those who create misinformation, or have a motive behind it, what motive do you think they have? What is the function of misinformation from their perspective?

Sure. I think politics, for instance, is a very common domain, because there's a very clearly identified outcome that benefits from spreading misinformation or disinformation, which is winning an election. There are other considerations that go into this as well. For example, for someone it can be beneficial if people hate each other a lot; with the Russian disinformation campaigns going on in the United States, for instance, the disinformation campaigns play into societal pressure points with the goal of increasing animosity. There's a goal behind that, which is paralysis: a country that hates itself, where people hate each other, isn't a very effective country politically. And on top of that, there's a financial component: Fox News, let's say, but also MSNBC to a certain extent, and other outlets, rely to a large extent on generating outrage, or playing into outrage, for their business model. It's not a new thing; you see the same with the yellow press. Clickbait isn't exactly a new phenomenon; outrage sells. So there's that component: misinformation is often outrageous, novel, generates some kind of interest, and that means there are certain strategic benefits in terms of virality, but also in terms of potential political consequences, and so on. Does that answer your question?

It does, and I find it very interesting. You came up with the example of Fox News, and it's also the case that mainstream, politically centered institutions such as Google cooperate with a lot of fake news web pages in terms of ad placements; that's also a very interesting point. Thanks a lot. So I think we'll move on to another question.

Yeah, hi Jon, I'm from a university in Kraków. We did a systematic scoping review comparing different kinds of interventions, among which we also reviewed your work, so thank you for that. We did some classification work, and we are right now in the process of being reviewed, and there were some doubts from reviewers. First of all, we used a broad definition, an umbrella term, for misinformation, under which, for instance, cyberbullying counts as a kind of misinformation. So my first question is whether you would agree with that kind of classification. And second, we also included some ways of countering cyberbullying, mainly videos forewarning about cyberbullying. So I'm very happy to see that you just tested these intervention videos, because I thought ours might be perceived as a kind of inoculation, and I tried to sell them as such. Would you agree that that's a good classification or not? Thank you.
For the videos, it would depend on their content, because not every video is an inoculation, I would say. Inoculation is a fairly specific procedure, in that it follows this preemptive warning plus preemptive refutation approach. So if it can reasonably be said that those components are present in the videos, then sure; if not, then it's maybe more akin to media literacy, let's say, or something along those lines. It gets very muddled, this whole thing about definitions and what is or isn't inoculation, but that's the anchor we tend to use. Is cyberbullying misinformation? That gets really complex. I guess it's more that it's harmful content; there's no explicit manipulation component, necessarily. If you're being mean to someone online, you're not really trying to manipulate them; you're just being mean, or trying to make them feel bad. So I think that's the distinction, I suppose, that is useful to draw. Does that help? Or does it make it worse?

I imagine that the kind of cyberbullying that is misinformation is when you tarnish somebody's name: you spread falsehoods about them, rumors, or it might be on a racial or a sexual background. If you see it that way, that might be a kind of misinformation; but, on the other hand, the example you brought up probably is not.

Yes. I mean, cyberbullying is an umbrella term, just like misinformation is, so you're basically trying to determine where the two umbrella terms overlap. In some cases they do, and in some cases they don't, and maybe that's the way to describe it: two teenagers yelling at each other online doesn't really count as misinformation, but deliberately spreading rumors about someone, well, that can be set in the same category. Makes sense.

Yep, I have a question. Hi, this is Rafał; thanks for a beautiful talk. I have a question, because you mentioned there are these two components, the affective and the cognitive component of vulnerability to misinformation. I was wondering whether you could identify some basic cognitive mechanisms, or affective processes, which could be targeted when countering misinformation; whether on this basic level one could imagine some sort of intervention.

It gets extremely complicated. I sort of alluded to the role that might be played by cognitive rigidity, exactly, but that isn't necessarily a trainable thing, I think; cognitive flexibility could factor in. This is completely unknown territory for me: I've never seen research on this, and I'm also not the most voracious reader in the world, so it's very likely that I might have missed something, but to my knowledge I haven't seen anything that goes in this direction. Same with open-mindedness, this idea of reducing myside bias: there's a lot of literature on debiasing, but it hasn't really been very well connected to the literature on misinformation and misinformation susceptibility. There was some thought a while ago that maybe training general cognitive ability in some capacity, whether through solving math problems or whatever it might be, might have beneficial side effects in terms of reduced susceptibility to misinformation, but I'm not so sure about that; it seems very contentious, to be honest.
Is cyberbullying misinformation? That gets really complex. I guess it's more that it's harmful content, but there's no explicit manipulation component, necessarily. If you're being mean to someone online, you're not really trying to manipulate them; you're just being mean, or trying to make them feel bad. So that's the distinction, I suppose, that I think is useful to draw. Does that help?

I guess it makes it worse, doesn't it? I imagine that the kind of cyberbullying that is misinformation is where you tarnish somebody's name — you spread falsehoods about them, rumors — or it might be on a racial or a sexual background. Seen that way, it might be a kind of misinformation, but on the other hand, the example you brought up probably is not.

Yeah. I mean, cyberbullying is an umbrella term, just like misinformation is, so you're basically trying to determine where the two umbrella terms overlap.

Right — so in some cases they do and in some cases they don't, and maybe that's a way to describe it: two teenagers yelling at each other online doesn't really count as misinformation, but deliberately spreading rumors about someone can be set in the same category.

Makes sense, yep.

I have a question. Hi, this is Rafał, a student. Thanks for a great talk. You mentioned these two components, the affective and the cognitive component of vulnerability to misinformation. I was wondering whether you could identify some basic cognitive mechanisms, or affective processes, which could be targeted when countering misinformation — whether at this basic level one could imagine some sort of intervention.

Yeah, it gets extremely complicated. I sort of alluded to the role that might be played by cognitive rigidity, exactly, but that isn't necessarily a trainable thing. Cognitive flexibility could factor in, but this is completely unknown territory — I've never seen research on this. I'm also not the most voracious reader in the world, so it's very likely I've missed something, but to my knowledge I haven't seen anything that goes in this direction. Same with open-mindedness, this idea of reducing myside bias: there's a lot of literature on debiasing, but it hasn't really been very well connected to the literature on misinformation and misinformation susceptibility. There was some thought a while ago that training some general cognitive ability in some capacity — whether through solving math problems or whatever — might have beneficial side effects in terms of reduced susceptibility to misinformation, but I'm not so sure about that; it seems very contentious, to be honest. Because once you start talking about this problem from a very cognitive perspective, which might make sense, you immediately start talking about other related issues too. You cease to talk about misinformation and misinformation susceptibility alone; you're talking about a whole bunch of other things. So it gets very complicated. I'm not a cognitive scientist myself, so I would not feel completely confident having a lot to say about this — at least not yet.

Okay, and sorry for the follow-up: how do you place this misinformation vulnerability in the light of general vulnerability to information?

Well, vulnerability to information is very broad, right? Vulnerability implies something negative — susceptibility, vulnerability to persuasion, perhaps. It gets incredibly complex. Generally speaking, we're not only talking about individual differences here; we're also talking about the structure of social networks, the extent to which our use of social networks informs our opinion making and decision making, the role of echo chambers, what it does to you when you're constantly online in these environments where outrage is whipped up constantly — regardless of, let's say, your cognitive ability or your cognitive vulnerability, all these kinds of things. So my general point, which is also something I've been trying to make in discussions with policymakers, is that we should be wary of considering this only, or mostly, a psychological problem. I don't think that's very helpful, because it allows us — and especially people like Mark Zuckerberg — to evade discussions about the responsibility that social networks have when it comes to what kind of content people consume and the consequences that may have. By which I mean: there's a real possibility that Twitter is making us angry, let's say, and we know for sure that these kinds of outrage-driven algorithms are beneficial to engagement, and therefore you can sell more ad space, and therefore you can make money. So what I don't want is for Twitter to say, oh, we're just going to inoculate everyone and pre-bunk and whatever else — and that's a problem precisely because it's cheap and doesn't require you to reconsider your business decisions in any way. I'm not so sure it's a great idea for Twitter to dodge that responsibility, which is easily done if you focus too much on the psychology of the problem. Even if I do think these interventions are effective as far as the psychology goes, that doesn't mean I think that is all, or even most of, what we should be doing.

Thank you. I have another question. You talked about it, and there's a lot of work showing that misinformation has to be understood from a partisan perspective. I'm going to try to formulate this as a provocative type of question: given that the effects of most interventions that exist to date are pretty small, what about the flip side — what about the effect of such interventions on people who, let's say, perceive these interventions as partisan activity targeting them? Do you have any results showing resistance effects? In the best case, people don't react at all — they don't move after an intervention — but some people might also be pushed towards the extreme; at least that's what we see in other types of psychological phenomena. Do you have any insights in this regard?
Yeah. So in the Science Advances paper that we're publishing soon on the videos, one of the things we did was run a really large number of moderation analyses to see whether the interventions work less well, or not at all, for different kinds of people: across the political spectrum, across different levels of misinformation susceptibility, for different levels of open-mindedness, and all these kinds of things. And the answer is that there's a difference in effect size, but it still works. By and large, with very few exceptions, you find significant differences between the treatment and control group across all sorts of different groups, which is good. I do think this matters, because we also published a few papers earlier on moderation effects in misinformation interventions — for example, accuracy nudges are moderated by political conservatism in the United States, meaning they don't work very well for liberals either, but they especially don't work for conservatives. That's an issue. It doesn't mean you shouldn't be doing any accuracy nudging, but it does mean you should be aware of the limitations. So here we were trying to be really sure that there isn't a meaningful moderation effect going on.
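For readers unfamiliar with the method: a moderation analysis of the kind described here is typically run as a regression with a treatment × moderator interaction term. The sketch below is a minimal illustration on simulated data with hypothetical variable names and effect sizes — not the paper's actual analysis code:

    # Illustrative moderation analysis: does the intervention's effect on
    # misinformation susceptibility vary with a moderator such as conservatism?
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(42)
    n = 2000

    # Simulated data: treatment lowers susceptibility; the benefit shrinks
    # slightly (but does not vanish) at higher conservatism scores.
    df = pd.DataFrame({
        "treatment": rng.integers(0, 2, n),        # 0 = control, 1 = inoculation video
        "conservatism": rng.uniform(1, 7, n),      # 1-7 self-report scale
    })
    df["susceptibility"] = (
        5.0
        - 1.0 * df["treatment"]                          # main effect of the intervention
        + 0.10 * df["treatment"] * df["conservatism"]    # moderation: weaker effect, no sign flip
        + rng.normal(0, 1, n)
    )

    # The treatment:conservatism interaction is the moderation test: if it is
    # significant, the effect size differs across the political spectrum.
    model = smf.ols("susceptibility ~ treatment * conservatism", data=df).fit()
    print(model.summary().tables[1])

    # Simple-slopes check: is the treatment effect still negative (beneficial)
    # at the high end of the moderator?
    high = df["conservatism"].quantile(0.9)
    effect_at_high = (model.params["treatment"]
                      + model.params["treatment:conservatism"] * high)
    print(f"Estimated treatment effect at conservatism={high:.2f}: {effect_at_high:.2f}")

The interaction coefficient tells you whether the effect size differs across groups; the simple-slopes check then tells you whether the effect merely shrinks or actually reverses sign at the extremes, which is the distinction drawn in the discussion that follows.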
But that only tells you one thing: whether someone from a particular group still benefits from the intervention if you ask them to do an item-rating task, let's say. The answer to that is pretty clearly yes, but it says nothing about reactance. What you see a lot now is the politicization of fact-checking. Within the context of the war in Ukraine this happened a lot: there are Russian Telegram groups doing "fact-checking" of Ukrainian news or Western news, and the point of that isn't an honest appraisal of the reliability of information — it's part of the information war. Beyond that, you also see people who are just generally very skeptical of fact-checkers, any and all fact-checkers, even the word fact-checking. So I don't think there's anything theoretically preventing pre-bunking from being vulnerable to that kind of thing. Of course that might happen; you can politicize anything you like. The best way to avoid it — though you can never avoid it completely — is, at least in my view, to avoid making truth claims, because those are always disputed, and also to make sure the interventions are entertaining enough, a little bit against the grain maybe, not patronizing at the very least, and hope that that retains some benefit. But I'm not under the illusion that this intervention is the holy grail, that it will work where others have failed — of course not. There are always going to be people who simply don't want to engage with the intervention, and for them it will be ineffective; that will always happen if the intervention is voluntary. And even if it isn't voluntary — like introducing an "Are you sure you want to share this?" button on Twitter — a lot of people are simply going to click yes. They might use it once or twice at first and think, oh right, I should think about this, but after a hundred times, are we sure the effect is the same? My guess would be: probably not. So you always have to deal with this problem if you're designing interventions that aim to tackle a psychological or a behavioral component, such as these nudges.

Yeah. I mean, I think it's very promising that you don't find that political orientation can flip the sign of the slope. That's at least promising — it doesn't, let's say, push very conservative people in the direction of believing in misinformation. So that's great; I think that's at least a damage-control finding.

Yeah, it's the best we can do, really. At the very least you want the interventions to be effective broadly, across groups, in the way that you measure it — but that says nothing about external factors, such as how someone reacts to being shown the intervention in the wild.

So what about those who already believe quite strongly in misinformation — are they a lost cause?

With respect to the inoculations, this is important to note: they're not tools of persuasion themselves; they're intended to prevent persuasion — unwanted persuasion. If someone's already persuaded, that's the same as asking whether a COVID vaccine helps someone who's already in the hospital with COVID.

But you do try to break the circle, so any new misinformation should be blocked to some extent, right?

For people who are open and amenable to learning from the intervention, yes. But I wouldn't say that people who are very firmly convinced that 5G gives you COVID — who fully believe this — are necessarily going to benefit from playing a game, even if you force them to play through it. It's just too much to ask of an intervention that's that short-acting. I don't think you can expect that, because that almost veers into the territory of deradicalization, and the deradicalization literature is very different from the inoculation literature, in the sense that deradicalization tends to involve a lot of effort. So I don't think these interventions have the power to achieve that; it would be way too optimistic. I'd feel like I was selling snake oil if I said that.

Okay, thank you. Are there any more questions? I don't think we have any questions.

Yep — so, you mentioned these accounts that spread misinformation about COVID-19 which have now switched to the war in Ukraine, and you said something along the lines that there's an artificial component to it. So do you believe the accounts are not orchestrated by some political forces, or is it some random AI doing this? What are your thoughts? It's very bizarre to see that there are some accounts that just do misinformation, and it's very hard to believe that nobody orchestrates it.

No, I do think there's orchestration behind it, but it's difficult to prove.
We know, for instance, about the troll factory in St. Petersburg — a very famous example of not only automated accounts but also people being hired to spread misinformation. How effective that is, is up in the air; I don't know. The literature seems to show the effects are fairly minor, but who knows. It's simply true that states use online environments for disinformation, and Russia is one of them, but so are China and Iran, and I'm sure the United States is up to something too. The most high-profile example, though, is Russian disinformation, and these are campaigns: they are paid for by the government, they are run by political operatives, and they have political goals behind them. The strategies they use are partially automated, I would say — so partially these are bot accounts, partially they're not — and so on. What's interesting to me is that it's apparently also completely content-agnostic: these disinformation campaigns don't necessarily talk about Russia all the time, even if they're Russian. It's not "Russia is great, the West is bad" — there is a bit of that, but the point of these accounts spreading misinformation about COVID isn't to make some kind of point about Russia. It's to get as many people as possible to believe the misinformation, or at least share it, so that you have to spend time tackling it, you have to spend time in discourse and political discussions talking about COVID misinformation, you have to be concerned about the extent to which this misinformation affects vaccination rates — all of which takes time away from other things you could be talking about. And you're also trying to fuel some kind of discord: you're trying to get the vaccinated and the unvaccinated to hate each other as much as possible, for instance.

Yeah, there's one more question. If you're searching for an anecdote about the human element of troll farms: there was a story that reverberated through Poland about a group of accounts, supposedly Polish, writing in really decent Polish, but all the time referring to the Americans as "Yankees" — which isn't common in Polish, but is popular in Russia; Russians really do call Americans Yankees. These bots were writing all of this in Polish, and someone said, look, Vanya, it's you doing this. But my question is this: you mentioned that one of the techniques you found useful was to teach people about the mechanics of misinformation, teaching them through gamification how it works. But you also just mentioned that there are meta-narratives employed by countries, where each campaign serves the greater purpose of some strategic political goal, like sowing discord in European countries or trying to increase doubts about the European project, and so on and so forth. Would it also make sense to try to teach people, or show people, particular instances of disinformation as agents of this larger meta-goal, of this larger narrative of East versus West — you know, the liberal, free-for-all, do-what-you-want attitude versus "we are the moral truth", and so on?

Yeah. In principle you can leverage inoculation theory for that purpose, I think; it's just a matter of testing how effective it would be.
But I do think that generally pointing out the extent to which individual examples — say, about Ukrainian refugees behaving badly, whatever — are in principle inconsequential to the larger meta-narrative, the meta-point being made, is incredibly useful to do. And I do think there's a deficit of knowledge in that regard among many people — how would you know this if you didn't read analysis articles about it? You never would. So yes, I do think you can leverage inoculation for that purpose; I think that would be useful. In fact, that's also why we've applied for a research grant — which we're hopefully receiving some good news about soon — specifically to tackle these kinds of strategies that are common in this particular disinformation discourse, particularly in Eastern Europe, because that seems to be especially a hotbed of disinformation in many regards: things that relate to East-versus-West relations, but also values. LGBT rights are one issue that is always being exploited, but also women's rights, and so on. So yes, I think that would be incredibly helpful.

I have two questions, so maybe the first one, which refers to possible interventions. Have you encountered, or have you thought about, interventions which are focused not on the content that appears on social media but rather on the general culture of how we use our phones? For instance, there are theories that say wherever there is a technological revolution and a new device appears, we as a human species, as a culture, have to learn how to use it in a way that is not harmful to us. So do you think there's also some scope for change there?

Yeah, there's cool work being done by many researchers, but an example that I think is usually worth mentioning is the Civic Online Reasoning work they do at Stanford — people like Joel Breakstone and Sam Wineburg. It's much more media literacy than inoculation, really, but it's very interesting: the point of that curriculum, those lessons, is to make people more aware of how media works — how to do lateral reading, how to do your own fact-checking, these kinds of things. So it's much more aimed at teaching people strategies to navigate online environments in a more beneficial way, I suppose. There's a lot going on in that direction, and I think that's helpful, again as part of this broader spectrum of solutions being developed.

And my second question is about the ecological study you told us about, the one you did with YouTube. In our scoping review — we collected data only up to the middle of 2021, so we had just the papers published by then — we realized that there are only a few studies which could be called ecological; most are conducted in the lab. So, from your perspective, what do you think are the biggest obstacles to constructing these kinds of more ecological studies?

Costs, and cooperation from social media companies. One of the things I'm very angry about is that this YouTube study cost $40,000 to run, and we were lucky enough to work with Google, which owns YouTube, so they paid for it.
But in principle it's bizarre that there's such an access barrier to begin with. We really had to work very hard to get agreement from YouTube and Google to do this, which is insane to me. It should be completely democratized, meaning I think every researcher who works for a university, let's say, should be given access to these kinds of materials and these kinds of study opportunities. Because right now, the simple fact is that this ecologically valid study — which, I agree, is the only one I could find, at least, that was an actual campaign against misinformation run on a social media platform — relies completely on one team, in a way. That's insane; that's not how science should work. Everyone should have equal access and, hopefully, arrive at the same conclusions. So there's a lot to be done. Gently asking is one way to do it, but another would be forcing social media companies to open up this kind of access — not only to the API but also to ad space and so on — to make doing this kind of research a lot easier. I think that is an incredibly important step; the fact that there are so many barriers to entry for these kinds of studies is unjustified.

Okay, I guess that is it for today. Thanks a lot for joining us and answering our questions, and thanks for your patience — it was very interesting. Do you have any concluding remarks for us?

I also want to thank all of our participants. Yeah, thank you very much.

All right, thank you.

Thank you so much, everyone, and thank you for your questions and your participation.