Okay, hello, this is our second panel. The moderator of this panel is Mr. Kamil Mikulski from the Kosciuszko Institute. So please take the floor. Dear guests, dear audience whom I do not see at present, but I do know that you're out there. Good morning, everyone. As Jan has said, my name is Kamil Mikulski, and I'm a disinformation researcher and a project manager at the Kosciuszko Institute, which co-organizes this conference. We're a non-governmental think tank and a research institute leading the public discussion in cybersecurity-related matters, and we also organize the CYBERSEC forum. I have the great pleasure to moderate this panel, which is titled Credibility on the Web: Checking, Inculcating, and Self-organizing Trusted Sources of Information. But first, please let me introduce the distinguished panel. We have a special guest from Lithuania, Viktoras Daukšas, who is an innovator, developer, and new technologies enthusiast. He is also, what's most important, the head of the Debunk EU initiative, which I had the pleasure to come across. Debunk EU is a prominent technological and analytical center and also an NGO like the Kosciuszko Institute, whose main task is to research disinformation in the public space and run national educational media literacy campaigns. It covers the Baltic states and recently also Poland. So hello, Viktoras. And my second guest is Professor Adam Wierzbicki, who is a professor at the Polish-Japanese Academy of Information Technology and a pioneer of research on web content credibility. He is also an expert on big data, web and data mining, and machine learning, or more generally in data science. Hi. This title, checking, inculcating, and self-organizing trusted sources of information, is very interesting when you think of this issue of credibility. We come across many different websites on the web which may or may not be credible, and it's not always very clear.
Actually, in many cases it is somewhere in the grey area, and it's very hard to decide whether you can trust a source, or whether it's biased or just not okay. With that, I would like to start the discussion by asking my first question to the professor, about truth and credibility. Sometimes we think that those notions are interchangeable, that truth is credibility and the other way around. But I know for a fact that they are not. I would like to ask you how to tell them apart, and, when you're out there in cyberspace, which one is the most important? All right, thank you. Thank you for this question. Well, I'd like to show you a few slides very briefly, if I can, because right now I don't have the ability to share the screen, but let me start with the definition of credibility, because that is probably what can get this discussion going. Basically, credibility is a property of information that we receive; it is the property of information that makes us believe that this information is true. So you can think of credibility evaluation as something that we're doing all the time. Actually, whenever we communicate, we're doing this, and it is probably a part of our evolutionary background as human beings. Truth, on the other hand, is a completely different concept that is actually independent of credibility. You can think of information which is true but, unfortunately, not credible. On the other hand, you can think of information which is not true but is credible, and this is the case of all successful fraud. So you can easily imagine examples of this kind, unfortunately. Well, you could treat truth as an ideal, or maybe as something which is unattainable; that depends on your epistemological opinion. Credibility, on the other hand, is much more practical, because this is something we're doing all the time. We're evaluating credibility all the time.
You can think of credibility as a complex signal which depends, unfortunately, on many things. What you said about the credibility of web pages often being ambiguous, that's because web pages are complex pieces of information with a lot of different elements. For instance, there is a division of credibility into different concepts that was made by a very famous psychologist and media scientist, Carl Hovland, back in the 1950s, I think. It was quite a long time ago. He divided credibility into source credibility, message credibility, and media credibility, like you see here on the picture. All of these can impact our credibility evaluations when we receive some information. When you look at a web page, unfortunately, your credibility evaluation can be affected by a lot of different things as well, like by your knowledge, which is a very important factor, or, unfortunately, also by your social environment, for example by peer pressure, right? You can definitely think of peer pressure as a factor which impacts credibility. Sorry, I ran ahead by a couple of slides. Take a look at this picture. You can use credibility to define the things we're talking about, fake news or disinformation. For example, in this picture we see a situation where we have a source. The source has a negative evaluation of its own message. So it doesn't believe its own message, but it intends for the receiver to believe this message, to find that the credibility evaluation of this message is positive. And this is the case of all the disinformation and fake news out there, right? You can see that this can be defined. I'm using here the terminology from the European study that you're probably familiar with. Let's look at this case; it's a little bit different. Now the original source's credibility evaluation is positive, and it intends the credibility evaluation of the receiver to be positive. Isn't that a perfect case where we actually don't need to do anything?
Unfortunately not. You can see that we can still be dealing with problematic kinds of information, like, for example, the case of just forwarding fake news. This isn't really the original source; this should be a forwarding source in this case. But if this is the original source, then this could be misinformation, information which is wrong by mistake, or malinformation as well. Now, what do you think about this? Is credibility still useful to evaluate how the source is making its own credibility evaluation? Think about it in these terms. Maybe the source is not sufficiently critical. Maybe the source is using reasoning that is based on a cognitive distortion. I will give you an example. Let's say that we have a message: all immigrants are a threat to our society. Now, the original source of this message might actually find it credible, might believe it. And if it does, it tells us something about the source. This source is using overgeneralized reasoning, which is a cognitive distortion, okay? So perhaps, when we think about credibility and truth on the web, we should think about norms of evaluating credibility, because psychology and media science have found out a lot about how we should evaluate credibility. What is the right way of evaluating credibility? What is the wrong way? Based on that, we can propose such norms, and teaching such norms would be a very good way to help internet users make sense of credibility online. That's pretty much what I wanted to say using these slides, but perhaps I can also just answer some of your questions. This is extremely interesting.
When I think of the credibility model that you have just shown us, it is very much focused on how a usual netizen would approach this issue, meaning he is searching the internet, discovering certain websites as he surfs, and this is the kind of approach that tells you whether the source you're currently reading is credible or not. I would like to reverse this approach a little bit and ask my second question to Viktoras: when you're down there in cyberspace and you're not thinking about whether a piece of information or a source is credible, but you're on the other side and you want to look out for malign actors, to look for the bad guys, essentially speaking, what is the difference in your thinking and what do you actually do to get them? Yeah, hello everyone. Sure, it's a very good question. For that, I will show some slides to give a better understanding of what we do and what approach we take. Just shortly: we're currently working in six countries, Lithuania, Latvia, Estonia, Poland, North Macedonia, and the United States, and every month we receive about one million content pieces and manually analyze 8 to 10,000 of them. We produce quite a lot of reports from that, and I will share some of the processes of how things work. So this is the main process of how information is analyzed. Currently we're analyzing in 26 languages, about one million content pieces per month. We use AI, our technology, to automate some of the process. AI cannot do everything; it's more about automating parts of the process. It's not a general AI, but it can help us to create scoring mechanisms, do topic recognition, and put information on different shelves, so that analysis is easier.
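The triage step Viktoras describes (automatic scoring and topic recognition that shelve incoming content so human analysts review the most suspicious items first) can be sketched roughly as follows. This is a minimal illustration only; the topics, keywords, and scoring are invented for the example and are not Debunk EU's actual model:

```python
# Illustrative triage sketch: score incoming content pieces against
# (hypothetical) known narrative keywords and shelve them by topic,
# so analysts can review the highest-scoring items first.

TOPIC_KEYWORDS = {
    "defense": ["nato", "military", "troops", "offense"],
    "energy": ["pipeline", "gas", "nuclear", "blackout"],
    "health": ["vaccine", "virus", "cure", "poison"],
}

def score_and_shelve(text):
    """Return (topic, score): the best-matching topic shelf and a
    crude suspicion score based on keyword hits."""
    words = text.lower().split()
    best_topic, best_hits = None, 0
    for topic, keywords in TOPIC_KEYWORDS.items():
        hits = sum(words.count(k) for k in keywords)
        if hits > best_hits:
            best_topic, best_hits = topic, hits
    return best_topic, best_hits

def triage(pieces, threshold=1):
    """Keep only pieces whose score reaches the threshold, sorted so
    the highest-scoring items come first for manual review."""
    scored = [(p, *score_and_shelve(p)) for p in pieces]
    flagged = [(p, t, s) for (p, t, s) in scored if s >= threshold]
    return sorted(flagged, key=lambda x: -x[2])
```

A real system would use trained language models rather than keyword counts, but the shape is the same: automation narrows a million pieces down to the few thousand that analysts label by hand.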
This information comes in all kinds of shapes, but we know quite well what the main narratives are, the long-term narratives and also the topical narratives that change over time, and that makes it a lot easier to find disinformation. We also work with citizens in the Baltics; they are called Lithuanian elves, or just elves. They are active citizens who support us, help to analyze the information that is received, and cooperate with the analysts. Then we have the labeling process, where the analysis is done: each content piece is reviewed manually, and then reports and articles are prepared and published to stakeholders and also to general media, to inform citizens of what's actually happening out there. When we speak about different domains, we now use just two labels: either it's disinformation or it's misinformation. The main difference is the intent, or how often the author, domain, or organization tends to spread such information. What I could suggest for every citizen is to acquire and use more critical thinking to understand what's happening. And it's actually quite easy; it's not something very complicated. You need to always ask yourself, first: who is the source? Who is the author? Just click on him or her, check what he's writing about, what other articles the author has written. There are articles that have no author at all, and that already suggests there might be some credibility questions. Is the source known? To whom does the source belong, how is it financed? Some of these questions can be very easily Googled. If you just spot something in social media, a very popular link, just Google it, and maybe you will find the Wikipedia page, maybe you will find an about page or other information.
So anytime you feel something is quite emotional or impactful, the first step is just to think, and the second step is to do a little bit of Googling. Maybe there is already a think tank that has debunked it, or maybe there is a fact checker who has already fact-checked it, and you can easily verify whether it is true or not. Then the other question is how? How is the content presented? What type of photos? What kind of quotes and interviews? Is there anything suspicious about them? Is the headline shocking or emotional? About 95% of disinformation comes in a negative shape, with a negative sentiment. It's quite rare to have some kind of positive disinformation; it still happens, but much, much less often. And the third question is the circumstances: when is it published, and what kind of event is it connected to? When you think about these three steps, at the level of a single content piece or of a news website, this can help you to understand what's actually happening there and why. This is used by our analysts and our community of volunteers who support us with their own work, and it works really, really well. We made a lot of iterations with different processes, and this is one that works really, really well. Here is another suggestion, from the Global Engagement Center, on how to differentiate between sources, and that also helps a lot, because if it's a government-funded website, like Kremlin-funded or China-funded websites, there is quite a big chance that you need to be more vigilant, check what's happening there, and analyze it better. And then, when we analyze this, we publish reports on what's happening: how much disinformation there was in different countries, what was happening at different peaks of disinformation, and which narratives were the most popular ones during that period of time. Then it's also important to give some examples. So here's just an example from Estonia.
The Estonian government increased spending on defense, and the Kremlin media picked it up as increasing offense and started to spread disinformation articles, as many as 34 articles in this case. It's a clear disinformation technique; forgery and hyperbolization were used here. We analyze and find many of these cases; from 600 to one and a half thousand disinformation cases are found in the Baltic states each month. So that's quite a lot of disinformation. Just to conclude here, what I would say is that when you want to understand disinformation, this three-step process, who, how, and when, helps a lot. Then there are all kinds of community events, like this one that you are participating in. This also helps a lot. You learn new things, you meet other people whom you can ask questions, and then you can discuss and understand better what's happening. Critical thinking is one of the skills that is very important these days. We all need to think more critically, not to get paranoid, but we really need to think more critically to understand what is happening out there. One more good example is the Get Bad News game that we made in cooperation with DROG, a company from the Netherlands; the game was tested and developed together with Cambridge University. That game teaches citizens six disinformation techniques. We adapted the game, localized it for the Baltic countries, and it is quite a big success, because in 15 minutes it increases resilience to disinformation by about 20%, according to the Cambridge research. That's quite a big result. And currently we already have 100,000 people who have played the game in the Baltic countries, more than 140,000 times in total. So that's another example of how we can better understand what is happening out there, and how to do that at a very large scale.
Yes, I think that gamification of disinformation spotting, and of other things too, is probably the best way out there to increase this awareness, especially in the EU, because honestly, it's great fun. And it's not only what you just mentioned; EUvsDisinfo also offers tests on how to spot Russian disinformation in cyberspace. And if we've got some gamers out there, you can also find related games on Steam that will help you to tell certain elements apart. This is simply amazing. And I must admit that I really do like the model that you just presented, because it comprises a few things that are excellent. First of all, you've got the engagement of civil society. This is indispensable. And you have employed AI to help in your work, and every practitioner knows how much information you sometimes have to deal with when you would like to do just simple media monitoring. It's really, really a lot. And what I also like is that there is cross-country analysis, and it's really cross-border. It's simply amazing to see how it changes from country to country. I would like to pin down one thing, because we would still like to get some more information about credibility, which is a cornerstone here. I see it as a kind of ideal, like the truth, that makes websites credible. And I wonder if we can measure it, and if we try to measure this credibility, how? With this question, I would like to ask Professor Wierzbicki to elaborate a little bit on the credibility of the source and to tell us how to measure the credibility of information in cyberspace. All right, thank you very much. I wanted first to get back to what Viktoras said, which was very interesting, because I started by talking about norms of credibility evaluation. And this is exactly what you do. You try to teach those norms using these games.
This is probably the best way to actually approach the problem of disinformation today and to deal with post-truth. It's really interesting. Also, the process that you showed, with the involvement of AI, is something we have also been doing in our research. So let me tell you a little bit about our older and current research regarding measuring credibility. Well, first of all, the kind of interesting part is that, since we are involved in credibility evaluation all the time, and since credibility is a signal, it is something that can be measured just by asking people, for example using a Likert scale. For example, in this graph, what we're showing is a Likert scale of credibility evaluations from "completely not credible" to "completely credible". And we are showing a distribution of these evaluations from a sample of web users which we have been studying. This particular graph, where the three different colors are for less experienced and more experienced internet users, in terms of their overall experience of using internet technology, shows that the more experienced users are slightly more critical, slightly less likely to give the very high credibility evaluations. The effect is much stronger when we consider topical expertise. When we have people who are experts on a certain topic, they tend to have much more balanced credibility evaluations compared to non-experts on this topic. On the other hand, you can definitely tell something from these distributions. Many times, when you look at the general distributions, they are skewed towards the positive evaluations. What you see here is a subset of the evaluations for a type of web pages about high-yield investment programs. And as you know, this is a type of fraud on the internet. It's quite popular, unfortunately.
And from this distribution you can tell that this is not a very credible type of web page, although a significant minority of people still believe that it is quite credible. The other question you might ask is: okay, we can measure that, but are these ratings subjective? Are they maybe random? Are they robust to different receiver characteristics? And it turns out from our research that, generally speaking, the subjectivity of these ratings due to demographic and social characteristics is not very high. You can be pretty sure that if you gather even a small sample of these ratings, like 10 perhaps, it should be enough in practice to give a good idea of the credibility. It depends, of course, on how you gather the ratings. What we have been doing is drawing a large sample of ratings first, which created the reference distribution. Then we generated smaller samples, for example of size 10 as well. We could generate these samples completely at random, or we could, for example, take only evaluations from women, or only evaluations from less or more educated people, choosing the ratings not at random but with a particular characteristic of the receivers fixed, okay? And what we did is measure the difference between the distributions obtained from these smaller samples, either random or with a fixed characteristic, and the reference distribution. This can actually be measured using a specialized function called the earth mover's distance, which computes differences between distributions. Never mind the details. What you see in this table is: here you have the reference values, which are the differences between the small random samples and the big distribution built from all the ratings, okay?
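The robustness check described above (comparing small samples of Likert ratings against a large reference distribution via the earth mover's distance) can be sketched in a few lines. For one-dimensional discrete distributions on an ordered scale, such as a 5-point Likert scale, the earth mover's distance reduces to the sum of absolute differences between the cumulative distributions. The ratings data below is invented for illustration, not taken from the study:

```python
# Sketch of the sampling robustness check: how far do small random
# samples of Likert ratings drift from the full reference distribution?
import random
from itertools import accumulate

def likert_histogram(ratings, levels=5):
    """Normalized histogram of ratings on a 1..levels Likert scale."""
    counts = [0] * levels
    for r in ratings:
        counts[r - 1] += 1
    return [c / len(ratings) for c in counts]

def earth_movers_distance(p, q):
    """EMD between two 1-D discrete distributions on the same ordered
    bins: the sum of absolute differences of their CDFs."""
    cp, cq = accumulate(p), accumulate(q)
    return sum(abs(a - b) for a, b in zip(cp, cq))

def robustness_check(reference_ratings, sample_size=10, trials=1000, seed=42):
    """Average EMD between random small samples and the full reference
    distribution -- the baseline that subgroup samples are compared to."""
    rng = random.Random(seed)
    ref = likert_histogram(reference_ratings)
    total = 0.0
    for _ in range(trials):
        sample = rng.sample(reference_ratings, sample_size)
        total += earth_movers_distance(likert_histogram(sample), ref)
    return total / trials
```

A subgroup sample (for example, only women's ratings, or only highly educated receivers) would be measured the same way; if its average distance stays at or below this random-sample baseline, the ratings count as robust to that characteristic, which is what the table described here shows.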
And as long as the values in this part of the table don't exceed those reference values, it means that the ratings we have in these datasets are resilient to age, gender, education, internet experience, politics, income, or occupation. Actually, as you can see, only one value exceeds the reference values, and that value was obtained from the MovieLens dataset of opinions about movies, not from credibility ratings. The other two datasets are what we created in our research, and they contain credibility evaluations. So you can say credibility evaluations are quite robust to demographic characteristics, okay? And the question you have been asking a lot is about sources, how we can evaluate sources. Well, we have been studying the domain of medical web content, which is different from what Viktoras is doing, but we have used almost the same process of creating a very large set of web content, then splitting it up into topics and trying to prioritize which topics are most likely to contain non-credible content. And what we have found is something which brings us back to source evaluation. There is one way of trying to evaluate sources in the medical domain, which is using information from an international NGO called Health on the Net. This NGO is devoted to providing web portals on medical topics with certificates. You can get this kind of certificate, stating that you follow the HONcode, after being evaluated by independent evaluators. And our most recent finding is this: we have taken a lot of statements from different web pages on medical topics, and those websites that had HON certificates had much, much fewer non-credible statements than all the others. So it turns out that this kind of certification actually works in practice in the medical domain.
You can also take a look at the criteria that HON uses with respect to the evaluation of sources, which is also something that leads us towards those norms of credibility evaluation: how we should evaluate sources of information. And this is also something Viktoras has been talking about, because, for example, financial disclosure and transparency are a big part of this approach as well. So, just to summarize what I've been saying: you can probably measure credibility in different ways. You can also reason about credibility with mathematical models; I haven't been talking about this here. But, for example, this reasoning has led us to the conclusion that we have to advertise truth. This is a kind of very surprising thing that came out purely of simulations and mathematical considerations on the concept of credibility. It basically means that if we have a community which has a low level of knowledge, or a post-truth community which has a wrong credibility evaluation on a certain topic, then one way, perhaps the only way, of actually getting through to this community and changing their opinion is to advertise truth: to make the truth sound more attractive, to do basically the same things which the authors of fake news do for their own messages, but do them for the truth. It's a little bit of a surprising conclusion, but, well, it just shows you that this is an area full of surprises, I think. On the other hand, one more thing you can take into account when you think about credibility measurement online is that the level of expertise and some other factors, knowledge and also the social environment, can strongly influence credibility ratings. Not demographic characteristics, but definitely your social environment. That's the part related to what I've been calling post-truth: small minorities of people who share the same credibility evaluation will tend to reinforce what they believe.
And they will exercise peer pressure, or they will exercise a certain attitude towards all the others who are not part of the group, which will also lead to bias in credibility evaluations. To advertise the truth, that is a very strong message. I do like it. And I think this is a great outcome of this talk, and a kind of conclusive point that I will keep in my memory after this session. Still, we think of disinformation as a threat in multiple different domains. From the policy level, we sometimes perceive it as a kind of political threat, and include it maybe in political risk analysis, or just see it as a political risk viewed from the high level of public services and so on. It's really interesting that we can add it not only there but also to other evaluations, and basically measure it in a certain way and include it in models. And from this point, I would like to move to a slightly different one, because this is what academia, experts, and policy makers can think about. But I would also like to turn to an average netizen. What can that person do to avoid being bubbled, to avoid being manipulated? How should they structure and organize their information environment so as to be well informed, to rely on good and authentic sources, and to find their way in this environment, in which there is increasing noise and a lot of information, some of it not really authentic or legitimate? With this question, I would like to move to Viktoras. Viktoras, could you tell us how to organize and structure your information ecosystem so as to do well in it? Well, that's a good question. First, I think the integration of science and practical work is, in most cases, the answer for long-term solutions. And what Professor Adam is working on, these studies, are extremely important to learn from, because then you see what kinds of patterns are found, and then you can implement the solutions.
And that is a very big deal. I mean, it's like having a map in the forest and knowing where to go, instead of just walking around in the forest without knowing. So this type of research, as Professor Adam presented, is very important for this field, to understand it better and to find our way through that forest. When we ask how to build this safer environment, how not to keep ourselves in bubbles, I would say you can even compare it with a food diet. We now live such a fast-paced life; we need to run everywhere. Okay, now we are a bit closed in our homes, working remotely, not going outside too much. And in order to be healthy, we need to stop and think about what we are eating, what kind of food we're taking in, whether we are going outside or doing some sports indoors. In order to feel healthy, we need to do that. Our bodies require it; they have required it for millions of years. The same thing goes for information. If we don't think about what we consume, we end up with just some very popular social media links, and you can compare them with cheap calories. You consume them; you just spend two hours on TikTok. The question is, what have you learned? Was it just fun, or were you perhaps very relaxed and just consuming information? And you never even gave a thought to whether that information was credible, or whether it is something you could quote to a friend in a bar or to your family. If you are not thinking about that, you might end up in very weird situations, when in some kind of public space, even in the family space, someone is saying things that are clearly not true, without ever having given them some thought. And if a person is so convinced and so bubbled in it, it's very, very hard to convince that person otherwise. People tend to believe what they believe, and belief systems are kind of a core of our thinking and our mind.
And there is a lot of research that shows that. So I think that's very important: be careful with what you consume. If you scroll social media sites and it's cats and dogs, it's fine. But if you're starting to read something that is very emotional, something about NATO, the military, politics, energy, there might be a lot of things that are forgeries, fakes, and simply attempts to play with your emotions, so that maybe next time you go to vote, you vote the way someone's established narrative dictates, achieving their goal at the scale of a large population. So that's an important thing: in order not to become sheep, we need to think. And this thinking, thinking of it like a diet, helps a lot. It's a good metaphor, and we need to think critically. We sometimes just need to stop and actually think, because we run and run and run through our lives, from work back home, to our children, to our family and friends, and life happens; there are so many things. We need to stop a bit and think about what kind of media we're consuming, what benefits it gives us, and whether it is actually something we can later quote without looking like a fool. That's a serious consequence, and it's important to think about it. When we speak about bubbles, this is another problem. It's also connected a lot with our beliefs. If we believe something, we tend to find many more arguments for why it is true. And a very good measurement point is this: when you're reading and analyzing and learning something new, is it very comfortable for you? Is it fully aligned with your values, with what you think? Does it never make you wonder whether it is really right or wrong, or feel uncomfortable to hear? Every time we move into that less comfortable space, it's a learning process. We learn something new, and that's a good way to get out of these bubbles. Events like these help too, as does research like Professor Adam's and others'.
Just put a bit more effort into it. There are a lot of really good YouTube channels, though that's a bit more difficult space, I would say, because you might end up with algorithms showing you a lot of conspiracy theories. There is a high risk of ending up there, and that is another problem. But if we just go step by step: when you read an article, just put a little thought into it. Who is publishing the article? Why are they doing that? What kind of goals do they have? That will help you to understand what is happening. Just take the first baby steps; it's already a huge thing. When you start to think, that's the first thing that leads to change. So you're saying it also works for every individual to stay vigilant, checking the news sources and other pieces of information he encounters. You need to be careful with words like vigilant, so as not to get paranoid; I guess that's not what you need to achieve. But whether you read something more critical or something lightweight, you consume information, and the consumption of different types of information in a way defines you. It defines what you know; it defines what you think. So you need to expand your horizons, and you need to double-check whether you are really reading correct facts. Even in some history books, some facts change over 10 or 15 or 20 years, and you need to double-check. So I would say that if something gives you that thought, oh man, is this really true, can it be true, just type it into Google and see if someone else has already had the same question. And the funny thing is that Google's algorithms do really good work with autocomplete for questions. In most cases you will find that somebody has already typed that question, and you might find some relevant results. Gentlemen, I see that we do have a few questions left. So I would like to give the floor to Professor Adam to comment on what you just said.
I will only mention that I used the word vigilant because it has had a prominent career recently in everything that relates to disinformation, just like resilience. A career comparable, of course, to fake news, which was the buzzword of the year. And Professor Adam, if you could comment. Well, I really like the word vigilance as well. I'm a fan of Harry Potter, and there is a character there called Mad-Eye Moody who keeps telling his students to maintain constant vigilance. And this is something which is very useful on the internet as well. Of course, you can reach the level of paranoia, as this example shows, but still you have to be vigilant, I agree with that. I also very much like Viktoras's analogy to a diet, because it shows you that the consumption of information might actually affect your beliefs and your mental makeup, your horizon. Actually, I like this analogy of a horizon of the internet, or horizon of the web, as well: the things which you can reach very quickly, which come first to your mind when you search for something, the expectations you have. This defines that horizon. And where do you go? Do you just go to social media, or do you spend all of your time there? This has actually been shown to be a big factor in the existence of conspiracy theories and in the effect they can have on entire populations or societies. In societies where people spend the majority of their time on social media, it can basically lead to disaster. But I would like to stress yet another point, which may sound a little bit strange coming from my mouth, but still I want to make it, because I have been researching medical web content recently. And this brings up a very important question: who can you trust on the internet? And also, should we trust experts? This is something which you haven't asked yet, and we haven't been talking about it.
And it is really important in the medical domain, because there the content can only be evaluated if you have sufficient knowledge or expertise. On the other hand, Google gives us the illusion of reaching this information very quickly. You might think it is at your fingertips: you just have to type in the search terms and you get a lot of actual medical articles. You can access PubMed, you can access lots of different places. You can, of course, access websites which give this information to you in much friendlier terms, or you can access social media, where people discuss these things and give ready-made opinions which you can simply consume, like Viktoras said, right? And there is this very famous relationship between the amount of expertise and the amount of confidence, right? If you have just a little bit of information, your confidence tends to rise tremendously. If you actually acquire a lot of information and become an expert, at first your confidence drops, and then it rises, but never as high as when you had only very little information, right? So my advice, at least in the medical domain: do trust the experts, even though this is not the only thing you have to do; you have to use your own common sense and experience as well. Experts don't know everything. Experts can't always be trusted. Remember that the controversy about vaccines and autism started with a medical expert who published his research in The Lancet, a leading medical journal. Still, this has been verified and refuted, although it took a very long time. And that's also the difference between an expert and a non-expert: experts must undergo verification, okay? When we publish research articles, they are reviewed, right? It's about the same. The difference is also very important in the media, right? In mainstream media there is, in principle, someone who should review the information. On social media, there is no one, right?
So that's the big difference as well, coming back to what Viktoras was saying. So we are shortly running out of time, and I guess this is the right moment to move to the questions from the audience. We have received quite a few of those. I don't know if you're going to be able to answer all of them, but if not, we'll try to get back to the people who asked the questions afterwards and provide the answers in writing. I will address those questions right now in the order they were posed. And as some of them were not clear about which person they were addressed to, I will just duly distribute them. The first question is clearly for Viktoras. Is Debunk.eu open source, and is it easily adaptable to languages other than Polish and Lithuanian? Yes, the Debunk technology is generally built on open source technologies to analyze public information. It can be easily adapted to any language. Currently, we operate in 26 languages. And the second question is for the professor: what is the actual difference between misinformation and disinformation? Could you explain more for us? You have addressed those things, and also malinformation. So if I may add an input: if you could include malinformation in your answer, I would be grateful. I'll try. I'll try. Well, the difference between misinformation and disinformation is fairly easy to explain. It depends on the intent of the source, right? So if the source wants to manipulate you, to give you some information which the source itself evaluates as not credible and wants to make you believe is credible, that's disinformation. On the other hand, if the source just makes an honest mistake, that would be misinformation, okay? An honest mistake would be misinformation. So: I believe that the information is credible, but I made a mistake. I try to convince you it is credible. Maybe together we actually find out it's not credible, and I say, okay, sorry, I was wrong. This is misinformation. That's the type of misinformation.
Malinformation is a little bit more complex than that, because it starts with a core of truth, okay? For example, let me go back to this example with immigrants. Okay, so let's imagine a scenario where, unfortunately, some immigrant committed a crime, okay? Now, this could be just a factual statement, right? This happens, the police reported it. That's it, right? It's what I do with this statement that can make it malinformation, okay? I can now say: as usual, immigrants commit crimes which threaten our society. And I give this example, okay? That is malinformation, because I'm actually changing the meaning a little bit. I am generalizing it, for example, or I'm making it more threatening. I'm saying this will destroy our society, immigrants will destroy our society, here is the evidence. And this is changing the meaning. So actually, in my opinion at least, the message of malinformation is no longer true. It is a manipulative message which has a core of truth, okay? Very often conspiracy theories are created in such a way. Conspiracy theories tend to add a little bit more information to something which is true, like the fact that COVID started in Wuhan, okay? You can find in definitions of malinformation that it has the intention or intent to harm. But in my view, I think it should rather be the intent to manipulate than to harm. Exactly, exactly. The intent to manipulate by changing your credibility evaluation, right? You can step back a little bit from harm, just to the changing of your credibility evaluation. If I'm trying to influence your credibility evaluation, that's manipulation, especially if I'm doing this starting from a very questionable premise, like, for example, generalization or catastrophizing, some kind of cognitive distortion, right? So, yes, malinformation, I would say, is a type of manipulation as well, right?
And another question goes to Viktoras. You mentioned critical thinking earlier on as one of the, let's say, research tools that should be available to everyone. And the question goes: how do you define critical thinking? I'm not sure whom the person who asked the question actually had in mind as the one who has to employ this critical thinking, but let's base your answer on two types of people: first, an average netizen who doesn't have high expertise in evaluation, and second, the expert. What does critical thinking mean for both? So, I would define it in a very simple way. I would say that if you are only running and running and never giving even a thought to what's happening around you, then that's clear evidence that there's not too much critical thinking, or maybe not much thinking at all. It's just consumption. It's like sitting by your TV and eating everything your hand can reach or your eyes can see. That has consequences, and it's similar with information. I would say that, for a citizen, if you just spend time on social media, and many of us do, and later we get this iPhone or Android weekly report of where and how many hours we spent, we're quite surprised how much time we spend on the phone. And that has become a big part of our lives. We pick up those phones so often, a hundred times a day. And when you spend time there and you consume something and then later quote it or share it, here's where the important part comes. Are you sharing information because it's scandalous? Is it really real, or is it fake, or does it just connect with your beliefs, so you're sharing it without verifying? So I would say that the first thing for every citizen is to think before sharing: is this really real? You can ask your friends whether they think it's real before sharing it. It's always good to ask some kind of question and to double check. I don't believe that it's possible for every person to become an expert.
That is too difficult. That is too time consuming. You would need to spend all your working time on that. It's too complicated. But you can find people who know more, and you can verify things with them. You can follow journalists. You can follow analysts. You can follow influencers who are really credible. And if you just double check the feedback about those persons, about those experts, in most cases you will find it. Just be aware that if you ask a hammer, it will always find nails. So be careful: if you ask someone who leans heavily to one side or the other, you will get responses that are very much connected with that, and in a way you could say that such a person is not fully balanced. And if you ask some older family member, they might have some values that are very, very strict and maybe not well suited to these times. So you need to think of that. Every time you receive information, you need to understand whom you are receiving it from and whether it is balanced or not. These small things help a lot with thinking. And for the experts, that's the place where I could speak for the next eight hours. That is a bit more complicated. But there is another question for you that is very much related, and I must say it's a bit hard and tricky. Mr. Daukšas, isn't it the case that there's always more money on the side of disinformers than debunkers? Moreover, analysis and debunking take time, as you have pointed out. And while the clock is ticking, the damage is already done, people misled, sentiments polarized. You present your enterprise as easy, but I believe it really is not, especially when the matter cannot be fully disclosed or checked: topics related to the military, trade secrets, someone's health status, and so on.
And that's a very inquisitive approach, on which I would like you to elaborate a little: how can you, first of all, stay objective, but also, how can you proceed when the clock is ticking and there is a risk of damage being done, or the damage is already done and you need to mitigate it? This is, by the way, a cornerstone of resilience as we understand it from the systemic perspective. I love this question. This is probably one of the most popular questions I have gotten over the last three years. We really have been traveling a lot; as an organization we have spoken at more than 100 events in 18 countries. So this is a very frequent question, and it has a very important reason behind it. So, to put it right: first, when we started about three years ago, we asked the question, what is happening in the big picture if you look around the world? We found more than 100 organizations around that time who were working to counter disinformation. We interviewed them, we analyzed them, and we saw that most of them were analyzing and working and doing things quite manually. And here, as a bit of a joke, we call it the 2G method, which is Google and gut feeling. That's how the analysis is done. Only some organizations had some automation to be able to analyze and respond much more rapidly. The other thing we discovered is that on one side we have disinformation actors and on the other side we have debunkers, and we think that the main game changer around the world is the cost to create disinformation versus the cost to debunk it. Currently the main problem is that creating disinformation is much cheaper and faster, and lies spread faster than truth. That's another, conceptual problem. On the other side, debunking is pretty slow, resource intensive, and quite fragmented. There are quite a lot of organizations doing it, but they are really small and underfunded.
Not all of them have the processes and automation to be able to do that really well. You require a methodology to do it really well, because if you don't have one, that means you don't have processes, and that means you will be slow or you will risk your credibility. So the general question, where I would put everything, is: what do we do, all together, to change the costs of creating disinformation and of debunking it? We need to change the balance, and when creating disinformation becomes much more expensive than debunking it, then things will change all around the world. So that's the bigger perspective, how to look at it. And then the other part, debunking itself. We spend a lot of time perfecting processes, perfecting methodology, doing these iterations, implementing, testing for two or four weeks, looking at whether that worked or not, then trying a new thing, and another new thing, and that's how we're learning. We need to keep improving those processes to be able to report faster once we have done the analysis. So just a final thought: analysis is like a diagnosis. It shows what is happening, but it is only the first step to understand and later to move forward and make decisions. We have one last question, also for you, Viktoras. Can we conclude that tolerance of discomfort is a necessary quality for dissolving bubbles and lifelong learning? Can we say that this is a meta-skill of information literacy? Very good. It's a very good question. In 10 or 20 years, 20 or more percent of the jobs that exist now will disappear. The question is what we will do that robots cannot: to be creative, to think critically. This is something that robots, or AI, will still not be able to do for a long time. And, you know, if learning is not painful, you're probably learning something that you already know. So that would be the final thought. Exactly. With that, I would like to thank you both for this amazing discussion and all your inputs.
And I guess there are three thoughts that I will remember after this session, and which I would like to become a kind of conclusion of what we have discussed. First, that we should advertise the truth. Second, that we should avoid fast food, also with regard to the information environment we are creating for ourselves. And the third one, which is sometimes overlooked: when you see disinformation and malign actors everywhere, that's also not good for you. So, as you said, it's on the margin, but still very important: we have to resist getting a bit paranoid about what is happening around us, and just check the sources if we can. And with that, I would like to thank you all, thank you, my distinguished guests here, and thanks also to the audience. And I think that we should move to the next session. Yes, thank you very much. And yes, we are moving to the next session.