Thank you very much. Dear colleagues, dear organizers, thank you for this invitation. It is the second time I am speaking here, and it is always a great pleasure and very inspiring to be in this environment. I would like to talk about the use of artificial intelligence tools to prevent and debunk misinformation in social media. By "we" I mean our group, Cooperation and Transformative Governance. We come from the social sciences, so we work a lot with people's perceptions, opinions, and views, drawing on disciplines such as behavioral economics, psychology, sociology, policy studies, and other social sciences. Our major focus is an interdisciplinary approach to governance and decision-making processes under so-called VUCA conditions. VUCA stands for volatility, uncertainty, complexity, and ambiguity, and we also include systems thinking. So when we speak about risk governance and risk assessment, if we take a digital solution or tool, we try to develop the tool, but we also try to understand how it will be implemented, how it will be perceived, and what the possible benefits, but also the risks, of such a tool will be. What we develop in our group are cooperation models, to see how cooperation and discussion processes unfold; decision support systems; and participatory modeling and planning. These three big families of methods help us structure a problem, so we apply different methods such as games, for example on public goods and common-pool resource management; in our work, the internet and social media, as well as the artificial intelligence tools used on them, are treated as common resources. We include bounded rationality and social heterogeneity, meaning that there is no single common opinion: we are all different, and if we speak about digitalization policy, for example, there are different perceptions of its risks and benefits.
The issue is how we can develop a compromise solution. We speak about the formation of strategy pools and the selection of the most important drivers. Finally, in our participatory modeling we use multi-criteria optimization and prioritization, which is also based to a large extent on decision-making experiments and games; we use systems mapping, morphological analysis, and participatory scenario planning. These are some of the methods we apply, and digital policy, digitalization, and cyber and internet effects are among our major research foci.

Now I am moving directly to the introduction to the topic: misinformation in social media, and why it is important. Misinformation has existed for centuries, but the internet, new technologies, and new tools have facilitated its spread and made it almost universal. We also speak about situations of high uncertainty connected with safety threats, which create fertile ground for various kinds of misinformation, conspiracies, fake news, and rumors. Here I would like to highlight that we are dealing with misinformation, meaning false information that is spread unintentionally, often out of risk perceptions; we are not dealing with disinformation, where false information is placed on purpose to create harm. If we speak about climate change and social media, we now see a significant spread of misinformation. There is also a new wave connected to several issues, but many conspiracies started already during the COVID period and in explanations of it, and they continue to grow. What we observe in our work is that if someone tends toward one kind of conspiracy, they will very probably tend toward another kind as well. We have been conducting research on social platforms like Twitter for as long as that remained possible, because for us it is crucial to understand how information there is shaping and reflecting public discourse.
In our study we analyzed several thousand tweets; we have tools that allow us to follow discussions in social media, and we are also developing artificial intelligence tools to deal with this misinformation: tools that flag or highlight misinforming content for people. For example, together with a number of UK institutions we developed a tool called MisinfoMe, an app that everybody can install on their laptop. If information comes from a source that fact-checkers have already ranked as not credible to a certain extent, the app sends a warning so that the person knows where this information comes from. While building it we tried all possible appearances for this warning, and what we observed is that if you blur or hide the information, it has the contrary effect: it raises interest to go and look at it. So we decided to study this further. Our intent in using these artificial intelligence tools is to break the instant reaction that makes the spread of misinformation viral, so that when a person receives such content, they take a few seconds before deciding: wait, search for an alternative source, check the origin. That pause alone can stop the spread of a significant amount of misinformation.

Part of the research we did on Twitter in recent times tried to understand the discussions going on about climate change, especially on oceans, forests, soil, megafauna, and insects, among other areas. We also see a growing consensus that climate change is an existential threat requiring immediate action. But what actually happens with the spread of misinformation, especially during disasters, is also a very interesting research topic.
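The warning logic described above can be sketched in a few lines. This is a minimal illustration, not the actual MisinfoMe implementation: the credibility table, domains, scores, and threshold are all invented for the sketch, standing in for aggregated fact-checker assessments.

```python
from urllib.parse import urlparse

# Illustrative credibility ratings (0 = not credible, 1 = fully credible),
# standing in for aggregated fact-checker assessments. These domains and
# scores are invented for this sketch.
SOURCE_CREDIBILITY = {
    "example-news.com": 0.9,
    "rumor-mill.net": 0.2,
}

WARN_BELOW = 0.5  # hypothetical warning threshold


def check_link(url):
    """Return a warning string if the link's domain is rated non-credible,
    or None if the source is unknown or credible enough."""
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]
    score = SOURCE_CREDIBILITY.get(domain)
    if score is not None and score < WARN_BELOW:
        return ("Warning: {} has a low credibility rating ({:.1f}) "
                "from fact-checkers. Consider checking an alternative "
                "source before sharing.".format(domain, score))
    return None


print(check_link("https://www.rumor-mill.net/shocking-story"))
```

Note that the sketch deliberately stays silent for unknown sources rather than warning about them; as discussed above, over-eager blocking or blurring tends to backfire by raising curiosity.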
What we observe is that earthquakes, for example, are connected with a certain level of misinformation, when people are searching for official, reliable sources. In the past we observed situations where, for example, panic in Tirana, the capital of Albania, was driven by information in social media that there was a high probability of an earthquake, and people started to leave the city; this was actually not true. Several organizations, like EMSC in France, are working in this area, trying to prebunk and debunk this misinformation and to send people correct information.

There is also a certain complexity and variety of available AI tools, so users have many alternatives. Many automated fact-checking and news-verification tools have already been developed, but the willingness to use them is still quite low. This was the question we addressed at the start of our research: what is actually keeping people away from using these tools? We conducted a meta-review of all studies indexed by Scopus on the usage of AI tools to deal with misinformation. We clustered the studies and identified selected keywords, and we found that currently the majority of studies on how people use AI tools against misinformation focus on classification, and a lot of them are connected to COVID. Other kinds of tools, such as those analyzing the impact of the risk or the content, or combating misinformation, remain quite a minor share. Only 11% of all these publications are social science papers, so we see a huge need for further research here, and only 5% are on decision-making in decision science. A minor portion of the papers is dedicated to topics beyond COVID-19 risk; most of them were probably launched at that time and therefore focus on this topic.
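The clustering step of such a meta-review can be sketched as a keyword-tagging pass over study abstracts. The topic categories and keyword lists below are invented examples, not the actual coding scheme used in the review:

```python
from collections import Counter

# Hypothetical topic -> keyword map for clustering study abstracts,
# loosely in the spirit of the Scopus meta-review described above.
TOPIC_KEYWORDS = {
    "detection/classification": ("classif", "detect", "fake news detection"),
    "impact analysis": ("impact", "exposure"),
    "combating/intervention": ("debunk", "prebunk", "intervention"),
}


def tag_topics(abstract):
    """Return the set of topic labels whose keywords appear in the abstract."""
    text = abstract.lower()
    return {topic for topic, keywords in TOPIC_KEYWORDS.items()
            if any(kw in text for kw in keywords)}


# Invented example records standing in for Scopus abstracts.
abstracts = [
    "A transformer classifier for COVID-19 misinformation detection on Twitter.",
    "Measuring the exposure impact of health misinformation on intentions.",
    "Prebunking interventions against climate misinformation.",
]
topic_counts = Counter(t for a in abstracts for t in tag_topics(a))
print(topic_counts.most_common())
```

From such per-topic counts one can then compute the kind of shares reported above (e.g. what fraction of the corpus falls under classification versus intervention).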
The majority deal with how to detect misinformation, while the decision to filter the news is left to the discretion of individual users; users are thus considered active actors in the attempt to combat misinformation, and researchers and professionals share this vision. We also looked at where the studies came from and at the funding sources supporting them, and we could see three big bubbles: the research is currently concentrated in three countries, with the US as the biggest funder of this kind of research, followed by the UK and China. As for interest in other regions, we found a couple of papers from countries like Saudi Arabia, but the field is currently driven by those three countries.

Another methodology we use to understand the usability of artificial intelligence tools is heuristic modeling, drawing on methods from behavioral economics, where we identify the drivers of users: what would drive them to use these tools. Are people willing to use social networks during natural and human-made disasters, if we take, for example, climate change adaptation as our focus? What factors influence this intention? Which factors relate to infrastructural characteristics of social networks, and which to subjective characteristics of the users? Of course, if we take various groups (we also have research here on journalists, fact-checkers, policy makers, and laypeople or citizens) and ask what their expectations of these tools are, it is clearly not possible to develop tools that fit everyone, so we need to prioritize. Which factors matter most to them, and which influence their willingness to use these fact-checking tools? Here we see that a lot depends on the intention, on the perceived ease of use of the tool, and on the perception of its usefulness.
A lot of work is still required here, because these tools are still perceived as something complicated. As we frequently hear from people who cooperate with us in decision-making experiments, it is also scary that some kind of app sits on your laptop and starts to rate the news or to comment in the social network. All these perceptions have to be addressed and dealt with if we would like the usage of AI tools in this area to spread more widely.

Another part of our research is on so-called conspiracy theories, where we analyze tweets: we take a certain period and try to grasp all tweets being communicated in social media during it. Here we identified various kinds of conspiracies. We have a group of PhD students working on sorting the keywords; they have a lot of fun, and we are also learning a great deal about how people explain certain events in certain ways. For example, on COVID-19 we have the 5G conspiracy, which claims that COVID-19 was caused by the deployment of 5G, or by Bill Gates, or by Big Pharma, and we have the opportunity to follow all these discourses on social media, for example combining them with the implementation of risk mitigation measures. This is also very interesting because it shows big volatility, as I will show further. We are also currently working on earthquakes, which are explained by various conspiracies, starting from military explanations and beyond, and we try to understand these discourses too. We are now extending the tool: our first prototype was focused on English-speaking media, so we have extended it to all countries in the EU, and our next step will be to include Japanese and to go to the Far East region to be able to understand the discourse there.
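The discourse tracking just described, and the peaks shown next, can be sketched as counting conspiracy-keyword matches per ISO week. The lexicon and sample tweets below are invented illustrations, not the group's actual keyword lists or data:

```python
from collections import defaultdict
from datetime import date

# Hypothetical conspiracy -> keyword lexicon for tracking discourse over time.
CONSPIRACY_KEYWORDS = {
    "5g": ("5g tower", "5g causes"),
    "big_pharma": ("big pharma", "pharma plot"),
}


def weekly_counts(tweets):
    """tweets: iterable of (date, text) pairs.
    Returns {conspiracy: {(iso_year, iso_week): match_count}}."""
    counts = defaultdict(lambda: defaultdict(int))
    for day, text in tweets:
        iso = day.isocalendar()
        week = (iso[0], iso[1])  # (ISO year, ISO week number)
        lowered = text.lower()
        for name, keywords in CONSPIRACY_KEYWORDS.items():
            if any(kw in lowered for kw in keywords):
                counts[name][week] += 1
    return counts


# Invented sample tweets standing in for the screened Twitter corpus.
sample = [
    (date(2020, 4, 1), "They say 5G causes the virus!"),
    (date(2020, 4, 2), "Another 5G tower story going around."),
    (date(2020, 6, 15), "It's all a Big Pharma plot."),
]
result = weekly_counts(sample)
print({name: dict(weeks) for name, weeks in result.items()})
```

Plotting each conspiracy's per-week counts over the study period yields exactly the kind of curve mentioned next, where one conspiracy peaks early and fades while others roll on in waves.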
To show just one result of such an analysis: here, 1.7 million tweets were screened during the COVID period, and we could see that, for example, conspiracies such as 5G peaked at the beginning and then disappeared, while other conspiracies rolled on in waves. A couple of papers have been published on this. I see that my time is up, so thank you very much.

[Session chair] We're going to hold questions till lunch, because it's not very far away, but thank you very much, it's really fascinating.