Okay, I just got the message that we can start. Good morning to everyone. This is the third panel, Psychology of Living in Disinformation, and our host is Professor Jonas Kunst. Jonas, please take the floor.

Yes, welcome everybody to our third panel today. Like Jan said, we are going to focus on the micro and individual level, namely on the psychological processes that make people believe in and be influenced by misinformation. We have two speakers, and we will start with Professor Romuald Polczyk. Dr. Polczyk is a professor of psychology at the Jagiellonian University. He is internationally renowned for his work within forensic psychology, and here especially within witness psychology. In his talk today, he will focus on how we can induce resistance to misinformation in the context of witness psychology. Both talks today are going to be about 20 minutes, and we will take five to ten minutes of questions and answers afterwards. So Professor Polczyk, the floor is yours.

Thank you very much. Let me share the screen. Well, okay, can you see the screen? Yes, we can. Okay. First of all, allow me to express my gratitude that I was invited to this conference. I am grateful because for a long time I have been very interested in the topic of inducing resistance to social influence. This conference is about web immunization: how can online social networks create collective resilience against misinformation? I am interested in a broader area which is, however, closely related to this, namely, as said, inducing resistance to social influence. For a long time it has been my firm impression that resistance to influence is a seriously underrepresented area in psychology. Psychologists seem to be much more interested in research into effective influence than in protecting people from being influenced. This is really striking. Inducing resistance to influence is a very underrepresented area.
The disproportion between the number of experiments concerning the efficacy of methods of influencing people and those aiming at protection against influence is as enormous as the disproportion between the number of methods available for exerting influence and the ones serving protection. It is shocking that existing manuals of social influence, such as the well-known book by Professor Cialdini about the so-called principles of persuasion, do not cite any research on protecting against influence. This is particularly striking if you consider that Cialdini in this book does discuss methods of protection, yet cites no existing research on them. Let's quickly review the rule of reciprocity, the first of the rules he discusses. It means that people feel obliged to give something in return for being given something. So we are sometimes given a small sample of some product and we feel obliged to buy more of it. In the last paragraph of the chapter discussing reciprocity, Cialdini mentions how to protect against this method. Namely, he suggests differentiating among donor intentions: we should consider whether the donor is really selfless and really unbiased. But he cites no research to support the efficacy of this method of protection. The next rule: commitment and consistency. People want to be consistent. This often makes them stick to a decision even when it is plainly wrong. Once a decision is reached, we find every possible argument to support it and ignore arguments which contradict it. Cialdini mentions how to protect against this: for example, we should distinguish consistency from stubborn stiffness, and we should avoid purely mechanical consistency. Again, no research to support the efficacy of this idea, not a single study confirming the efficacy of this method.
The next rule, or method: social proof. We usually tend to do what other people do, especially if we are not sure what we should do. How to protect against people who abuse this method? We should perform a careful analysis of the correctness of the actions and views of other people. A good idea, no doubt, but no research in this book confirming the efficacy of this method of protection. Next method: authority. We tend to obey authority figures. So Cialdini suggests that if we do not want to be abused by persons or institutions who use this method, we should take away the element of surprise, we should increase our awareness of the power with which authority acts on us, and we should consider whether someone who seems to be an authority really is one. And again, I begin to be boring: no research to support the efficacy of this method of protection, none at all. Liking: we tend to give in to people we like. So we should consider whether a person who wants something from us has not won our sympathy in an unnaturally short time. If so, it should be a warning, according to Professor Cialdini. Again, an excellent idea, to analyze whether someone won our sympathy unnaturally quickly, but no research confirming this method of protection. And scarcity: when we believe something is in short supply, we want it more. Sellers often tell us, please buy, because these are the last ten items I have. How to protect? We should control and monitor our own level of arousal. According to Cialdini, it is the arousal, the emotions triggered by this technique, which makes us act quickly and without thinking. So we should think. When we realize that we are in a rush, we should be warned. Again, a good idea, but not supported by existing research. In this excellent book, at least in the latest edition I have, Cialdini cites no research about the effectiveness of these methods of protection.
This is shocking, because he lists hundreds of studies on the effectiveness of the methods of influence as such. There is a lot of research supporting the efficacy of these and similar methods, and there is very little research into effective protection against them. Another book, just as a second example, is the well-known book by Pratkanis and Aronson, Age of Propaganda. The authors mention some methods which should protect us against unwanted influence. For example, we should think more rationally and less emotionally, we should develop a generally skeptical attitude, we should learn about techniques of social influence, we should support consumer movements. Fine, but it is shocking that in this book there is not even one experiment cited which would confirm the efficacy of such methods of protection. For a long time I have been struck by this scarcity of research on protecting against social influence at all. Today I would like to share with you our research concerning protecting against misinformation. Let's say there seems to be one exception: there is quite a lot of research about inducing resistance to persuasion. This particular situation has attracted researchers' interest for a long time, starting with the classical research by McGuire on inoculation. McGuire states that attitudes can be inoculated against persuasive attacks in much the same way that one's immune system can be inoculated against viral attacks. So he had his participants exposed to weak persuasive messages, weak and easy to defend against. When the participants were able to resist weak persuasive messages, they became more resistant to stronger persuasive messages. So this seems to be the one exception; for other methods of social influence there is very little research. Okay, now to our method.
As Professor Kunst said, I am interested especially in forensic psychology, and namely in the misinformation effect, the memory misinformation effect, or mnemonic misinformation effect. What is it? Well, it is a situation in which a memory report becomes contaminated with information coming from a source other than the event which a person witnessed. So we have the original information, the event itself, and we have some post-event information, which may be incorrect. This post-event information may come from various sources: various mass media, but also other witnesses. The interrogator's questions can also include suggestive cues. Of course, online sources nowadays can be a source of incorrect information as well, and many other sources. So why is this important? This is important because witness testimony, the testimony given by a human, still remains a very important source of information for the legal system, for the courts. And it is now well established that errors in human testimony are the main cause of wrong decisions reached by courts. If the court, the judge, has some additional material evidence, errors are rarer. But if there is only a human witness and he is wrong, the court can reach very wrong decisions. This is the typical paradigm within which the misinformation effect is usually studied. It is meant to mimic the real sequence of events. In reality, the first thing is that a person sees something which has legal consequences: an event, the original event. In the experiment, the participants in both groups, the misled and the control one, watch a video presenting some event. Let's assume that a green car was visible in this video. After some time, ranging from minutes to days or even years, the participants are presented with some post-event material.
For example, a description of the original film, which in the experimental, misled group contains one or more details that are incongruent with the real content of the original material. For example, it is mentioned that the car, which in reality was green, was red. After another interval, the final test takes place, in which a series of questions is asked about the original video, including the critical questions relating to the misled details: in this example, was the car green or red? Or, what color was the car? The final test can take various forms. This three-stage paradigm is meant to be similar to the real sequence of events, in which there is first some event watched by the witness; afterwards, some misinformation can reach the witness; and only afterwards does the witness give his information, that is, he is interrogated. It is now well established that in such experiments the misled group performs worse. It is so well established that for a long time we have no longer checked whether the misinformation effect takes place. We rather do research into the mechanisms of the effect and its correlates, or, as we are especially interested in methods of protecting witnesses against misinformation: how can we make the witness less susceptible to misinformation of various kinds? In devising such a method, we made one main assumption. The assumption is that, at the moment of the final memory test, many of the participants in fact do remember the original information, in this example the green car, as well as the misinformation. So they remember that they saw a green car, and they remember that they were reading about a red car. And now they are asked: what color was the car? Many of them give answers consistent with the post-event information, with the false information, not with their own correct memory. Why? Because they lack confidence in their own memories. They do not trust their memories.
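For readers who prefer code, the three-stage paradigm just described can be sketched as a toy simulation. All numbers here, including the 40% yield rate, are invented for illustration; only the design (misled group reads a "red car" description after seeing a green car) comes from the talk.

```python
import random

random.seed(0)

def simulate_participant(misled: bool) -> str:
    """Return the car color reported at the final memory test."""
    # Stage 1: everyone sees a green car in the video.
    # Stage 2: only the misled group reads post-event material about a red car.
    # Stage 3: assumed 40% of misled participants answer from the post-event text.
    if misled and random.random() < 0.4:
        return "red"      # answer follows the misleading post-event material
    return "green"        # answer follows the original event

def accuracy(group: str, n: int = 1000) -> float:
    """Proportion of correct ("green") answers in a simulated group."""
    answers = [simulate_participant(misled=(group == "misled")) for _ in range(n)]
    return sum(a == "green" for a in answers) / n

print(accuracy("control"))  # control participants are always correct in this toy model
print(accuracy("misled"))   # the misled group performs worse, as in the experiments
```

The gap between the two accuracies is the misinformation effect the speaker refers to.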
I do not have time to cite the research confirming that this is real, that many subjects do remember both sources of information, give answers consistent with the post-event material, and, asked why, say: I was not sure. I thought I was wrong and the text must have been correct. If this is true, then increasing one's confidence in one's memory should reduce the tendency to rely on external sources of information. If indeed one of the causes of giving in to misinformation is the lack of confidence in one's own memory, then increasing this confidence should reduce the misinformation effect. And this was our main assumption. We developed a technique which we call reinforced self-affirmation. This technique was designed to increase the memory confidence of the participants and consists of two core elements: self-affirmation and positive feedback about memory quality. Other research has shown that these two elements increase self-confidence. In this technique, self-affirmation consisted simply in writing down one's greatest achievements in life. So each participant had to think about their greatest achievements in life and write them down. This was the self-affirmation; this was the only manipulation. As for the positive feedback, it was fake, manipulated. The participants had to memorize 60 nouns in two minutes. Afterwards, they wrote down the ones they remembered and counted them, and they were given false feedback about the population average result, which in fact was one and a half standard deviations lower than the real average. In this way, most participants learned that their memory was better than the population average. So we have these two elements: self-affirmation and positive feedback, a false one, though the participants did not know this. And reinforced self-affirmation proved effective. As for now, it has proved effective in over 20 experiments.
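The arithmetic behind the false-feedback element can be made concrete. The mean and standard deviation below are hypothetical; only the rule of reporting a "population average" one and a half standard deviations below the true average is taken from the talk.

```python
from statistics import NormalDist

# Hypothetical recall scores (number of nouns recalled out of 60).
true_mean, true_sd = 30.0, 6.0

# The fake "population average" reported to participants:
# 1.5 standard deviations below the real average.
reported_average = true_mean - 1.5 * true_sd
print(reported_average)        # 21.0

# Assuming roughly normal recall scores, the share of participants who
# would beat this deflated average, and so hear "better than average":
share_above = 1 - NormalDist(true_mean, true_sd).cdf(reported_average)
print(round(share_above, 3))   # ~0.933
```

This shows why the manipulation works on "most participants": with normally distributed scores, about 93% of people outperform an average shifted down by 1.5 SD.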
I remember just one experiment in which it was not effective. In about 20 experiments using various variations of this technique and of the three-stage procedure, we always got the effect. So it is promising. It is promising, but I should add the main problem and limitation of this technique: it is impossible to use in real-life settings. This is obvious. You cannot make real witnesses write down their greatest achievements in life; this is too strange. And obviously you cannot give them false feedback, no matter the reason. This would cause real problems, even legal problems. You cannot lie to a real witness, no matter the reason. So, having confirmed that in this very basic form increasing self-confidence works in this context, we are now thinking very hard about a technique which would increase the self-confidence of a real witness and which we would be able to apply in real-life settings, with all the complications, legal and psychological. We have to be very delicate here, but the technique should still be effective. And this is now our main research topic. And perhaps increasing self-confidence may be effective also in other settings, not only the memory misinformation effect and witness testimony. Perhaps increased self-confidence may be effective in building collective resilience against misinformation in the context of online social networks. This question brings us back to the main topic of this conference. Okay, and thank you very much for your attention.

Thank you so much, Professor Polczyk. I found this presentation super interesting and of high relevance to our project, so this is very appreciated. We have a couple of questions. In total, we have five minutes to answer these.
So I'm going to start with the first question from the audience, and that is: why is there no research concerning methods of protection against Cialdini's manipulation techniques? Is it because of problems with operationalization? Well, obviously I have no clear answer, because I don't know, but I am afraid that the people and institutions interested in methods of effective influence are much richer. They have more money. Because who are they? Advertisers, or political parties. They are usually rich, and they are very interested in buying methods of effective influence. And what about the people and institutions who care about the consumers? They are not that rich; they have less money. This is a very brutal answer, but I am afraid it is at least partially true. Could you imagine that this might also be caused by a publication bias, that people have actually done this research but did not find support for it, and then it wasn't published? I don't think that publishers would be uninterested in research concerning resistance. On the contrary, when we had the occasion to publish our results, the editors were usually very interested. I was able to talk to Cialdini once, and he admitted that two areas are seriously underrepresented: the area of resistance and the area of persistence. There is very little research into how long a given method, once applied, remains effective. So I don't think there is a bias. Right, it's serious, but... Yeah, it's very interesting. Okay, so I'm going to move on to the next question, which is pretty long. The question is: aren't most of the recommendations by Cialdini inferred by negation of the attack vector, and can they be generalized as exercises in mindfulness, becoming interested in the unfolding event and invoking self-examination and interoception? If Cialdini is right in his recommendations, can we conclude mindfulness to be a meta-skill of immunity?
So maybe we can focus on the last part of this question. I would say that mindfulness, and a related idea, namely reflectiveness, the general ability to reflect, is a very promising idea. And indeed, many if not all of these ideas by Cialdini can be summarized under the general idea of mindfulness. Mindfulness is a very promising idea in the context of inducing resistance to unwanted social influence. I think I would personally agree with this in theory, but I also know of research showing that mindfulness actually increases people's likelihood of having false memories, because they don't judge them; one of the core elements of mindfulness is to not judge information, or input generally. So it's an interesting question, but there is also some research showing that it might actually have a negative side as well. Anyway, that was just my opinion on this. There are more questions. The next question is: is self-affirmation connected to self-esteem? And if yes, then is susceptibility to misinformation negatively correlated with self-esteem? Yes, the three variables, self-affirmation, self-confidence, and self-esteem, are actually closely related. If you apply a self-affirmation technique, then self-esteem also seems to be boosted. So they are indeed closely related. Right. Okay, Professor Polczyk, thank you very much for a super interesting presentation and talk, and for being part of this conference today. Our second speaker is a master of philosophy in psychology and current PhD candidate, Alexander Gunnerson. Alexander just started his PhD, but he has already published various articles in renowned journals. In his PhD, he is focusing on the psychological processes that make people believe in misinformation, and today he will give a talk on the psychology of accepting and sharing misinformation. So Alexander, the floor is yours. We can't hear you. We can't hear you.
There is probably some technical problem, so I could take another question. Should we do this? Yes, maybe, yeah. Okay, so Professor Polczyk, there is one more question, and that is: this is an interesting administration of a post-truth treatment that has positive consequences in a social situation. Can it be invoked in witness counseling? Isn't the absolute notion of truth inhibiting the administration of an effective treatment in this case? And could it be a policy bias demonstrating rigidity in opinion? I think you need to turn on your microphone. Okay, I am not sure if I understand the question correctly. Yeah, it's a bit abstract: whether it can be invoked in witness counseling, and whether the absolute notion of truth inhibits the administration of an effective treatment in this case. Well, I hope not, if I understand the question correctly. So maybe let me elaborate on the question. What the professor said was that we cannot administer this treatment because we cannot really use this misinformation, misreporting the average memory recall, in a setting of witness testimony. But we do have evidence that this is an effective treatment. So maybe it is the notion of truth, that we cannot use non-truth, which is this false information about the average recall. I don't want to... Okay, yeah. Can you hear me now? Yeah, and I think we have to move right on. I'm sorry to interrupt the question given time issues. So Alexander, please, if you can share your screen. Yeah, sorry for the technical issues, and thank you for the introduction, Jonas. Today I'm going to be talking about the psychology of accepting and sharing misinformation. Topics in this presentation include: what is misinformation and how does it differ from related terms? Next, how do we assess what is true? I will present a basic psychological process of how we relate to information that is presented to us and how we engage with it.
In addition to the basic psychological processing of information, there are individual differences in how we relate to information and how we judge what is believable; that is, what makes some of us more susceptible to accepting and sharing misinformation. Lastly, I'm going to briefly talk about some of the potential consequences of believing in and acting upon misinformation. Now, this was partly covered in the previous panel, but as you can see from this overview, there are several terms and concepts related to false information. Some of these have clear definitions and meanings, like content marketing, whilst others are hard to define, like fake news. However, two aspects of information are generally used to differentiate between most of them. First, the level of facticity: is there something true about it, or is it complete nonsense? And second, is it created and shared with an intention to deceive, profit, or otherwise cause harm? Here's a quick illustration of how one can think of different kinds of false information. On the vertical axis you can see the level of facticity, whilst on the horizontal axis is the intention to deceive. Even though several terms related to false information are used within research, misinformation seems to be the preferred one. So what is misinformation? Misinformation involves information that is inadvertently false and is shared without the intent to cause harm. Moreover, it has the capacity to spread through society, and particularly on social media. The aspect of intentionality is important because people who share false information, especially on social media, don't necessarily do it to intentionally deceive or harm anybody. The creator of the shared information, on the other hand, might have an intention to do so; in that case, it is referred to as disinformation. Now, most psychological research on false information treats it as misinformation.
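The two-axis distinction can be written down as a tiny decision rule. The corner labels other than misinformation and disinformation are my own illustrative reading of such facticity-by-intent diagrams, not terms taken from the speaker's slide.

```python
# Toy sketch of the facticity x intent-to-deceive classification
# described above. Only the misinformation/disinformation distinction
# is from the talk; the other two labels are illustrative assumptions.

def classify(facticity_high: bool, intent_to_deceive: bool) -> str:
    """Map the two axes onto labels for kinds of (false) information."""
    if intent_to_deceive:
        # Deliberately spread false content is disinformation; selective
        # but partly factual content spread to mislead is closer to propaganda.
        return "propaganda" if facticity_high else "disinformation"
    # No intent to deceive: false content passed on in good faith.
    return "legitimate information" if facticity_high else "misinformation"

print(classify(facticity_high=False, intent_to_deceive=True))   # disinformation
print(classify(facticity_high=False, intent_to_deceive=False))  # misinformation
```

The rule also captures the point about sharers versus creators: the same false post is misinformation for the good-faith sharer and disinformation for its deceptive creator.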
And that's because when we survey people's social media behavior, for instance, it's hard to determine whether they share false information with an intention to deceive, so it's better to give them the benefit of the doubt. Now, let's move on to one of the basic psychological processes involved in assessing what is true and how we relate to information in general. When we evaluate whether information presented to us is likely to be true, we typically consider some, but rarely all, of these five criteria. First, is the claim compatible with other things I know? Is there much evidence to support it? Is it internally consistent and coherent? Does it come from a trustworthy source? And lastly, do other people agree with it? Each of these criteria is sensible and thus bears on the likelihood that a message is true. Importantly, I can assess each criterion by considering relevant knowledge, but that is a slow and effortful process. So instead, I can assess the same criteria by relying on my intuitive responses, which are faster and less taxing. And indeed, my brain, like probably most of yours, prefers fast and effortless. But when the initial intuitive response suggests that something may be wrong, we tend to turn to the more effortful analysis. In this way, the initial intuitive assessment of truth is a sort of gatekeeper for whether people will engage with the presented information with a critical eye or just nod along in agreement. These assumptions are compatible with a long history of research in social and cognitive psychology, where the slow and effortful strategy is often referred to as analytic or system two processing, and the fast and effortless one as intuitive or system one processing. So the key is the ease with which the information can be processed. If it's processed easily, we tend to use system one processing; if it's hard to process, we switch over to system two processing. The first criterion: is the information compatible with what I know?
Does it fit with my prior knowledge? Any claim is more likely to be accepted as true when it's compatible with other things one knows than when it's at odds with other knowledge. Here the well-known confirmation bias can also kick in: if the information confirms your prior beliefs and opinions, it is more likely to be believed. On the other hand, when something is incompatible with what one knows, we tend to stumble; we take longer to read it and have more trouble processing it. Thus, if the information is compatible, it is easy to process, and we often nod along in agreement. If it's less compatible, it becomes more difficult to process, and we tend to switch over to more analytic processing. If a claim is compatible, there is often some kind of evidence to support it, and usually, if there is evidence, it comes to mind easily. People's confidence in a belief increases with the amount of supporting evidence. Next, is the information internally consistent and coherent? Coherent information is easier to process than information with internal contradictions. If it contains contradictions, we will stumble, have more difficulty processing it, and most likely switch over to analytic processing. What about the source of the information? Is the source trustworthy? Information is more likely to be accepted as true when it comes from a credible and trustworthy source. Trustworthiness can be based on the source's expertise, education, achievements, and so on. However, trustworthiness can also be based on feelings of familiarity: you trust your family more than you trust strangers. Additionally, repeatedly seeing pictures of a face is sufficient to increase perceptions of honesty and sincerity, as well as agreement with what the person says. Moreover, a given claim is more likely to be judged true when the name of its source is easy to pronounce.
Here's an illustration from some of my prior research which shows that face perception influences ratings of trustworthiness. I'm not going to say more about it, other than that these two faces you see here received very different trustworthiness ratings solely based on their appearance. The last of the criteria for truth assessment, do other people agree, is social consensus. Research shows that we are more confident in our beliefs when they are shared by others, and we are more likely to endorse a claim if many others have done so as well. Relatedly, information that is familiar because you have encountered it several times through friends and family is easier to process, remember, and understand. Now, in addition to these five criteria, I would like to add one additional factor, and that is repeated exposure. Demagogues have long known that truth can be created through the frequent repetition of a lie, and as Hitler put it, propaganda must confine itself to a few points and repeat them over and over again. And indeed, research shows that the best predictor of whether people believe a rumor is the number of times they are exposed to it. I will now give you an example of how repeated exposure, combined with some of the criteria for truth assessment, comes into play when we are presented with information on social media. To begin with, most social media posts are short, written in simple language, and easy to read, which satisfies many of the technical prerequisites for easy processing. Now, as an example, let's say you are suspicious of vaccines or opposed to vaccination in general, and you encounter this post. This easy-to-read message is posted by a familiar face with an easy-to-pronounce name, who is also a millionaire and TV personality: of course, a credible source. Well, this was before he became president. The content of the post is compatible with your own beliefs, because you're opposed to vaccines.
The post is liked and retweeted by your friends, who are also opposed to vaccines, thus confirming social consensus. Moreover, it is retweeted and shared by several of those in your online social network, ensuring repeated exposure. With each exposure, processing becomes easier, and perceptions of social consensus, coherence, and compatibility increase. At the same time, accumulating likes and retweets ensures that the filtering mechanisms of the social media platform make exposure to opposing information less and less likely, and thus the content of the tweet is considered to be highly believable. Now, this is of course a somewhat extreme example of how this psychological process and the five criteria come into play on social media, but the same processes apply to topics of a much more mundane nature. So, to briefly sum this up: if the presented information is compatible with your own existing beliefs and values, has evidence to support it, is coherent and internally consistent, is presented by a trustworthy source, and has other people agreeing with it, then there's a chance that you just might nod along in agreement and not engage with the presented information analytically. However, some of us are less susceptible to misinformation and do not fall as easily for fake news. Several studies have been conducted on individual differences in susceptibility to misinformation. Within the social sciences, research on misinformation typically measures susceptibility in the following ways. Some studies present an equal number of true and false headlines and ask participants to rate the accuracy or reliability of those, like this example right here. Others present statements which are not in this sort of Facebook or journalistic format; they provide written claims, such as "5G networks may be making us more susceptible to the coronavirus", and participants are asked to rate accuracy from very inaccurate to very accurate.
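One common way such rating tasks are scored is as a "truth discernment" score: the mean accuracy rating given to true items minus the mean given to false items. The sketch below uses invented ratings for one hypothetical participant; the scoring rule is a general convention in this literature, not a detail the speaker states.

```python
from statistics import mean

# Invented ratings for one toy participant on a 1-4 scale
# (1 = very inaccurate ... 4 = very accurate).
ratings = [
    ("true headline 1",  True,  4),
    ("true headline 2",  True,  3),
    ("false headline 1", False, 2),
    ("false headline 2", False, 1),
]

true_mean = mean(r for _, is_true, r in ratings if is_true)
false_mean = mean(r for _, is_true, r in ratings if not is_true)

# Higher discernment = better at telling true from false headlines.
discernment = true_mean - false_mean
print(discernment)   # 2.0 for this toy participant
```

A participant who rated everything as accurate would score zero discernment, which is why this measure separates gullibility toward falsehoods from general belief in headlines.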
Within this kind of research, one looks at individual and group-level differences in susceptibility to misinformation by measuring some other variables of interest. I've chosen a few of those variables, which I will now present. First up, people differ in their propensity to engage in analytical reasoning. This is related to the five criteria and the information processing I mentioned earlier. Research shows that some are more inclined to reason analytically, whilst others rely to a larger degree on their intuition. The propensity to engage in analytical reasoning is often measured by the cognitive reflection test, which presents people with questions like these. There is only one correct answer to each of these questions, but when you present them to people, you often end up with two groups based on their answers. Those who answer intuitively will answer "first" to question one and "10 cents" to question two, whilst those who reason analytically will answer "second" to question one and "5 cents" to question two, which are the correct answers. People's performance on this kind of test correlates with their ability to discern misinformation from true information; that is, participants who reason analytically are better at recognizing misinformation as false. So thinking analytically, rather than relying on intuition, seems to be a resilience factor against accepting misinformation as true. Research also shows that delusion-prone individuals, dogmatic individuals, and religious fundamentalists are more likely to believe misinformation. People who score high on these kinds of attitudes typically agree with statements such as: do you ever feel as if there is a conspiracy against you? Or: the things I believe in are so completely true, I could never doubt them. Or: the basic cause of evil in this world is Satan, who is still constantly and ferociously fighting against God. High scores on these types of measures are all positively correlated with belief in misinformation.
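The "10 cents versus 5 cents" item above is the classic bat-and-ball problem (a bat and a ball cost $1.10 in total; the bat costs $1.00 more than the ball; how much does the ball cost?). Working the arithmetic shows why the intuitive answer fails:

```python
# Bat-and-ball arithmetic behind the cognitive reflection test item.
total = 1.10       # bat + ball
difference = 1.00  # bat = ball + 1.00

# Substituting: ball + (ball + difference) = total
# => 2 * ball = total - difference
ball = (total - difference) / 2
bat = ball + difference

print(round(ball, 2))  # 0.05 — the analytic (correct) answer, not 0.10
print(round(bat, 2))   # 1.05

# The intuitive "10 cents" fails the constraint check:
intuitive_ball = 0.10
print(intuitive_ball + (intuitive_ball + difference))  # 1.20, not 1.10
```

With a 10-cent ball the bat would cost $1.10 and the total $1.20, which is exactly the inconsistency that system one processing glosses over.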
However, mediation analysis suggests that these relationships may be partially or fully explained by reduced engagement in analytic reasoning and actively open-minded thinking, which may broadly discourage implausible beliefs. So because dogmatic individuals, delusion-prone individuals and religious fundamentalists engage less in analytical thinking, they are also more susceptible to believing misinformation. Again, analytical reasoning seems to be a resilience factor. As misinformation is often heavily politicized, much research has investigated political ideology as a predictor of susceptibility to misinformation. This type of research has often been conducted among American participants, and it typically groups participants into either Democrats versus Republicans or, during the 2016 election, Clinton supporters versus Trump supporters. Typically, the results indicate that Trump supporters rated misinformation about Trump as less accurate but misinformation about Clinton as more accurate, and vice versa. However, studies have also shown that when participants are given time to deliberate, they tend to correct their responses, and this correction was independent of whether or not the misinformation was consistent with their political attitudes. So again, deliberation, conscious analytical reasoning, seems to positively influence the ability to detect misinformation. A cross-cultural study on susceptibility to misinformation about the new coronavirus was recently conducted. It compared participants from five countries: the US, the UK, Ireland, Mexico and Spain. The figure you see here shows some of the included predictors of susceptibility to misinformation. Everything left of the dotted line in the middle indicates reduced susceptibility to misinformation, whereas the marks to the right of the dotted line indicate increased susceptibility.
And as you can see, in three out of five countries, conservatism or political right-wing attitudes predicted increased susceptibility, whereas numeracy skills, which are a kind of proxy for critical thinking, predicted reduced susceptibility in all five countries. The same was true for trust in scientists. Additionally, they found that self-perceived minority status predicted increased susceptibility to believing misinformation about the coronavirus. However, as it was self-perceived, we do not know which kinds of minorities seem to be inclined to believe misinformation, whether religious, political, ethnic or some other minority group. Although misinformation is clearly not a new phenomenon, it has become a much more serious issue with the advent of the internet, and the belief in misinformation can have serious consequences if one acts upon that belief. For instance, there is a lot of fabricated and false information circulating on the internet regarding vaccines, the most famous being that the MMR vaccine causes autism. As a result, vaccine hesitancy has led to disease outbreaks and deaths from vaccine-preventable diseases; in recent years we have had measles outbreaks in the Netherlands, the UK, Ireland, the US and several other countries. The increase of misinformation on social media has also proven to be a real threat to the democratic process, as a well-functioning democracy relies on a well-informed populace. Disinformation campaigns are targeting specific groups of voters, and the political landscape is getting more and more polarized. Take, for instance, the Pizzagate conspiracy theory during the US presidential election in 2016: fake news websites posted false stories claiming that a pedophilia ring involving high-ranking officials in the Democratic Party used a pizzeria, pictured here, as a meeting ground for satanic rituals, human trafficking and child sex abuse.
And a man who believed this story entered the pizzeria with an assault rifle to investigate and ended up firing shots, but luckily nobody got hurt. And lastly, belief in misinformation about the coronavirus, such as the belief that it was created in a laboratory in China, negatively affects people's compliance with public health guidelines, such as wearing masks or washing their hands, and such believers are also more hesitant to get vaccinated against it. That's all, thank you. Thank you very much, Alex. I think that was a very comprehensive and interesting presentation. So we have a couple of questions. I'm just gonna start with the first one. I think you partially addressed this before, but I'm just gonna repeat it anyway. One person wondered whether there is a difference between susceptibility to fake news and the tendency towards conspiratorial thinking. Oh, that's a very good question. Both yes and no, I would say. As far as I've read, those who are susceptible to misinformation also have a tendency towards conspiratorial ideation. But I would need to further investigate that question to provide a clear answer. What's for sure is that there is much more research on conspiracy beliefs. Yeah, exactly. Than on misinformation, but I would hypothesize that there's a huge overlap. Yeah. Okay, so the next question is pretty long. Maybe you should read it while I read it out loud. Isn't asking people to evaluate content already inducing a stimulus that fundamentally changes the environment (prompting system two, increasing the stakes) compared to how people engage with misinformation in the wild (a fluid interplay of system one and system two, relaxed, low stakes)? Yeah. So I think it deals with the ecological validity of the approach. Yeah. Of how we measure susceptibility. Yeah, exactly. People use different kinds of stimuli to measure this kind of susceptibility, and different stimuli engage different systems.
Which, arguably, we should be discussing in our future research: in what way we want to do this, because this is a very important issue. Yeah, I personally believe that a testing setting in an experiment is obviously a bit artificial, and if you evaluate a lot of different stimuli, you might actually start to use more elaborative reasoning than you would on social media, where you maybe have limited attention. But still, these studies show a lot of meaningful individual differences, so I would say that it at least approximates a real situation. Yeah, several studies sort of replicate the way you interact with information on social media; some of them even use interactive web pages that mimic social media. All right. And I mean, some data suggest that 60% of Twitter users just retweet without reading the article as a whole anyway. Right. Well, that is important information that has been validated, so there is some ecological validity to it. Okay, so here's the next question. Can susceptibility to misinformation or disinformation be understood as a mental disorder? Or do you think there is any correlation between fake news susceptibility and mental disorders? As far as I know, I haven't read anything about mental disorders and susceptibility to misinformation, but it's very interesting. I don't know; probably some mental disorders involve the very dogmatic thinking styles that have been shown to make people more susceptible, so there could be some relations there. Yeah. But interestingly, a lot of the variables that have been shown to predict susceptibility are normal, non-pathological variables. Yeah. And this is probably also why this is such a huge problem; otherwise it would only affect a small number of individuals. Yeah. But it does actually have a broad impact. Yeah, exactly. Because of that. Okay, so I think we should probably stop there to keep to the time.
I thank both speakers once again. I think this was an excellent session, very interesting, with different perspectives. And I thank the audience for the very interesting questions. Yes, thank you very much. That's true, you are very much on time. Right now we have a lunch break, so the next session, about affected groups, starts at one. So see you in less than one hour, okay? In 40 minutes. Bye-bye. Thank you.