Good morning, I think we can start our first panel. I give the floor to Professor Mikołaj Morzy.

Yes, thank you, and welcome everyone to this first panel devoted to machine learning. We have two exceptional speakers, and the main goal of this panel is to identify ways in which the project can leverage new developments in machine learning, and in natural language processing in particular. Our first speaker is Dr Łukasz Kobyliński from the Institute of Computer Science of the Polish Academy of Sciences. He is one of the most recognized specialists in computational linguistics. He is a member of the largest NLP project currently executed in Poland, CLARIN-PL. He is the developer of tools for Polish language modeling within the very popular spaCy library, and he is one of the organizers of the PolEval data challenge, which is part of a larger family of challenges that aim at the development of computational linguistics and NLP tools. He is also the host of the Stacja IT podcast, something that I, as a person who is literally addicted to podcasts, appreciate very much. So please welcome Dr Kobyliński.

Thank you very much for this introduction and for the invitation to this conference. Let me just take a moment to start the presentation. Okay, so the title of my presentation is "Am I Talking to a Human?", and with that I would like to immediately start the discussion about the state of the art in NLP in 2020. The question is whether it is now time that we actually ask ourselves this when we interact with machines, and if so, why, and how has the situation changed in recent years? To start the discussion, let's first have a quick look at the historical background of NLP. We have been working on NLP since at least the 1950s, but what I want to show on this slide is how the approach to NLP research has changed rather quickly through the years.
One of the first approaches applied to NLP was based on statistics. This is illustrated by the saying of John Rupert Firth from the 1950s: "You shall know a word by the company it keeps." The idea was that we need to look at the text itself, at language itself, and from that language we will discover important knowledge and important relationships, and applying statistical machine learning methods is the way to go. But then came the so-called AI winter in the 1970s. Publications by Minsky and Papert appeared which criticized, or showed the limitations of, neural networks; there were publications by Chomsky criticizing this statistical approach to NLP; and basically most AI research was stopped by cutting its funding. Most research then concentrated on applying expert linguistic knowledge to processing language, so people concentrated on creating grammars, rules, and dictionaries, and this was supposed to give better results than the previous statistical approach. But then things changed again in the 1990s, when more data and more processing power became available, and it turned out that these old statistical and machine learning approaches were starting to give better and better results. Here, as an illustration, we have a quote from Fred Jelinek, who worked on machine translation, who said that every time a linguist leaves the group, the recognition rate goes up. This illustrates the change in the approach applied then: that we don't need this expert knowledge, and we are relying more and more on statistics and on the language itself. Looking at these dates, we asked ourselves around 2010: are we going back to the linguistic approach? Because that would be the most obvious trend here, that we are going back and forth between the statistical and the linguistic approach to NLP.
Are we reaching a situation in which we cannot rely on machine learning alone and need this linguistic knowledge again to move forward? But then came various new developments, about which I would like to talk today, most obviously deep learning, and this changed the situation in such a way that we are still involved mostly in machine learning and artificial intelligence when we talk about natural language processing. So let's discuss the most important developments of the 21st century that really moved natural language processing forward and created the situation in which we rely on artificial intelligence in this area. In my opinion there are four pillars of this situation. The first is the algorithms: we have the right tools to efficiently model the data and discover knowledge from the data. Of course, these algorithms are not at all completely new, but they are optimized; they are designed for the large amounts of data that we cope with today, to efficiently mine and discover knowledge from language. The second pillar is the big data trend, or revolution: the idea that we have so much data available now, coming from various sources, from the internet, from the Internet of Things, from our phones. These enormous amounts of linguistic data give us more knowledge than ever before. The third pillar is the available processing power. We actually have the processing power to discover knowledge in the data: we can utilize cloud infrastructure, we can use new processing architectures to transform the data, and we are able to process this data in reasonable time. And the fourth pillar is the need, that is, the business need for NLP methods and NLP applications, which really changes the world, the business world, and the services that we use every day. Discussing these four pillars in a little more detail, I would like to start with the algorithms, and I don't want to get into very much technical detail here.
Just to give an intuition of what has changed in recent years, let's stop for a moment and look at how a machine sees written language. How do we represent text when we want to process it, transform it, and mine some knowledge from it? What we did just 10 or 15 years ago, most commonly, was to use the so-called bag-of-words representation. We would simply count the words that appeared in a particular text, and using such simple frequency lists and bags of words, we would feed them to algorithms to classify texts, to segment them, and so on. This representation is very primitive and, as you may assume, it causes a large number of problems related to ambiguity and to the lack of semantic information in the representation. What has changed in recent years is the approach that we now apply most often, and it is actually a comeback of the saying of John Rupert Firth from the 1950s. We are coming back to the idea that we shall know a word by the company it keeps, and we are actually doing it in NLP. What we do now is look at the words and at a particular window around the words, looking at other words that appear in a similar context. So here, shown in green, is the word "it" in a movie review, and the word "it" appears in the context of other words such as "movie" (in yellow), "fun", "recommend", "scene", "see". If we have a big enough corpus of such movie reviews, it will soon appear that the word "it" in this kind of review appears in similar contexts to the word "movie". So we may say that the representation of the word "it" is similar to that of the word "movie", because it appears in similar contexts. And this intuition, used in real-world mathematical representations, leads to a very interesting thing: it appears that the semantic relationships between words translate into the mathematical space of word representations.
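The two representations described above can be sketched in a few lines. This is only an illustrative toy, assuming a single made-up review sentence; real systems count over millions of documents and learn dense vectors rather than raw windows.

```python
from collections import Counter

# Bag-of-words: a text is reduced to word counts, losing order and meaning.
review = "I recommend this movie it was fun you should see it"
tokens = review.lower().split()
bag = Counter(tokens)
print(bag["it"])  # 2

# The newer idea: characterize a word by the words around it (its context window).
def context_window(tokens, index, size=2):
    """Return the words within `size` positions of tokens[index]."""
    lo = max(0, index - size)
    return tokens[lo:index] + tokens[index + 1:index + 1 + size]

print(context_window(tokens, tokens.index("movie")))  # ['recommend', 'this', 'it', 'was']
```

Collecting such windows over a large corpus is what lets a model notice that "it" and "movie" keep similar company.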
If we know that there is a relationship between "man" and "woman" in the real world, that is, the change of gender, then the same relationship holds in the mathematical space of word representations. We can then somehow calculate new words, finding words which hold the same relationship as another pair of words. For example, if we are interested in a relationship similar to the one between "king" and "kings", we can ask the representation: what is the equivalent word holding this relationship for the word "queen"? And the representation responds with the word "queens". This is really incredible, and something magical even happens when we ask these representations for other types of relationships. It turns out it doesn't have to be a grammatical relationship; it can be a real-world analogy, for example between countries and their capitals, or between people and their roles. Given the relationship between France and Paris, we can ask the representation what the equivalent word is for Italy, and it responds with the word Rome; or we can ask about Einstein being a scientist, and we get that Messi is a midfielder. This is really magical, and it turned out to be a very important milestone in NLP research in recent years. But we can ask ourselves: how is it possible? Is this algorithm so incredibly intelligent? Well, the algorithm is actually not that new, but what has changed is that the quantity of data we are able to process today is vastly higher than what we could handle in the 1950s or even the 1990s. Our algorithm knows these relationships because these relationships are in the data, and if we have enough data, we can find them automatically. And this is the second pillar of this NLP revolution in recent years: big data, which allowed us to collect and process vast amounts of data.
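The analogy trick above is literally vector arithmetic. Here is a minimal sketch with hand-made two-dimensional vectors (real embeddings such as word2vec learn hundreds of dimensions from data; these toy coordinates are an assumption chosen so the arithmetic is visible).

```python
# Toy "embeddings": dimension 0 roughly encodes which word family,
# dimension 1 encodes plurality. Hand-made for illustration only.
vectors = {
    "king":   (1.0, 0.0),
    "kings":  (1.0, 1.0),
    "queen":  (2.0, 0.0),
    "queens": (2.0, 1.0),
    "man":    (3.0, 0.0),
}

def analogy(a, b, c):
    """Solve a : b = c : ? by vector arithmetic, i.e. find the word nearest to b - a + c."""
    target = tuple(vb - va + vc for va, vb, vc in
                   zip(vectors[a], vectors[b], vectors[c]))
    return min((w for w in vectors if w not in (a, b, c)),
               key=lambda w: sum((x - y) ** 2 for x, y in zip(vectors[w], target)))

print(analogy("king", "kings", "queen"))  # queens
```

With learned embeddings the same arithmetic yields Paris - France + Italy ≈ Rome; here the relationship holds only because we planted it in the toy vectors, which is exactly the speaker's point: the relationships live in the data.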
This is illustrated here by the total size of Wikipedia article text in gigabytes, or by the size of the Common Crawl of the internet, the number of web pages that the Common Crawl project collects and downloads from the internet. Having these amounts of data allows us to find the relationships that are hidden in the language itself, in the text itself, much like a child learns each year by reading and hearing language. But the third pillar is also necessary: the processing power, so that we can actually perform calculations on petabytes of data. One change, of course, was in the algorithms and calculation tricks, but new processing architectures such as GPUs, and then TPUs, allowed us to use much larger neural networks than those of the 1950s that were criticized by Minsky and Papert. We also simply have much more raw processing power, using for example cloud infrastructure, and this allows us to apply these not-so-new approaches to petabytes of text. And finally, the fourth pillar is the business need. Natural language processing is no longer tied only to academic research. NLP is a core technology enabling many of today's products and services, and this research is conducted in many privately held companies, which moves it forward faster than ever before, with much higher financing. One example you may know from everyday life are the so-called voice assistants, either on your phone or as standalone devices. As you probably know, these assistants are created by Google and Amazon, but also, on your phone, by Samsung and many other companies. This application is an example of the progress in NLP made in recent years, because it solves so many NLP tasks in one device. The voice assistant solves at least the problem of speech recognition, so the machine can actually transform your voice into text.
It solves the problem of the chatbot: when you talk to it, the machine understands your intention and the objects that you are talking about. It solves the problem of question answering: when you ask it a question, it responds. And it solves many, many other problems in one device, problems that had to be solved for this device to be successful. Staying with this example for a moment, I would like to focus on these NLP problems. One problem I mentioned is the dialogue agent, commonly called a chatbot, which you may know from webpages or from these voice assistants. The idea is that we talk with a chatbot like with a regular person, and the answers are generated based on our questions. There is a memory of the dialogue, there is a state of the dialogue, so the machine knows the context and knows what it has been asked before when it generates new responses. This was all made possible by the advanced language modeling that is taking place nowadays: we are able to take these large amounts of data, these petabytes of data, and create language models which extract knowledge about language itself, along with various additional processing steps that allow us to discover the intents of the person using the chatbot, the entities related to the context, and the actions requested by the user. This example of a chatbot or voice assistant also brings us to the previously mentioned problem of question answering. This is another problem that is being solved quite well right now by various commercial services. Here is the example of the Google search engine, where you can just ask "when was Lincoln born" and the search engine will correctly identify that you are talking about Abraham Lincoln and show his photograph. It will correctly find the answer to your question, print it in large letters, and also come up with other related questions.
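The intent-entity-state pipeline described above can be caricatured with a few rules. This is a toy sketch, not how production assistants work (they use learned models); the `KNOWN_PEOPLE` table and all responses are invented for illustration.

```python
# Minimal dialogue agent: detect an intent keyword, extract an entity,
# and keep dialogue state so follow-ups ("when was HE born?") resolve context.
KNOWN_PEOPLE = {"lincoln": {"born": "February 12, 1809"}}

def respond(utterance, state):
    words = utterance.lower().replace("?", "").split()
    # Entity resolution: a known name in the utterance, else the dialogue memory.
    person = next((w for w in words if w in KNOWN_PEOPLE), state.get("person"))
    if person is None:
        return "Who do you mean?", state
    state["person"] = person                      # dialogue memory
    if "born" in words:                           # crude intent detection
        return KNOWN_PEOPLE[person]["born"], state
    return "I can tell you when %s was born." % person.title(), state

state = {}
print(respond("When was Lincoln born?", state)[0])  # February 12, 1809
print(respond("When was he born?", state)[0])       # same answer, resolved from state
```

The second call contains no name at all; the answer comes from the stored dialogue state, which is the "memory of the dialogue" the speaker refers to.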
So this is the more specific NLP task that needs to be solved for those voice assistants to work, but there are still many smaller problems that had to be solved for this question answering to work correctly. We have the problem of named entity recognition: how to identify that "Lincoln" is a person, and that it is Abraham Lincoln. We have the problem of analyzing the structure of the sentence and identifying that we are actually asking about the time of birth of Abraham Lincoln. And we have the problem of extracting information from various sources, probably from Wikipedia and other web sources, and finding in long stretches of text that the actual birth date of Abraham Lincoln was February 12. This is all very interesting, but an example that is probably more related to the topic of this conference is automatic cyberbullying detection. This was one of the tasks organized during the PolEval competition that was already mentioned in the introduction. In 2019 we had this task of recognizing cyberbullying on the internet, and training data containing examples of such cyberbullying was presented to the participants of the competition: annotated examples of disclosures of private information, personal attacks, threats, blackmailing, ridiculing, gossiping, and so on. The task of the participants was to come up with an algorithm, a method, a model that would automatically identify such examples and annotate them with particular categories. And it turned out that it went very well: we had very large interest in the task, and the results were very promising and have been further improved by even newer models, the so-called language models that are still being developed and are being evaluated on this data set, denoted here as CBD, where we now have an accuracy of over 70% in identifying such instances of cyberbullying in unstructured text. So this is very promising.
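The supervised setup of the task can be sketched in miniature: annotated examples in, labels out. The four training sentences below are invented, and the scoring is a deliberately naive bag-of-words overlap; the actual PolEval/CBD systems are far more sophisticated neural language models, so this shows only the shape of the problem.

```python
from collections import Counter

# Tiny labeled training set (invented examples, not real data).
train = [
    ("you are pathetic and everyone laughs at you", "harmful"),
    ("i will find you and you will regret it", "harmful"),
    ("great game last night congratulations", "neutral"),
    ("thanks for sharing this interesting article", "neutral"),
]

# Count which words each class tends to contain.
class_words = {}
for text, label in train:
    class_words.setdefault(label, Counter()).update(text.split())

def classify(text):
    """Pick the class whose training vocabulary overlaps the message most."""
    words = text.split()
    return max(class_words, key=lambda c: sum(class_words[c][w] for w in words))

print(classify("everyone will regret it"))       # harmful
print(classify("thanks for the great article"))  # neutral
```

Even this toy shows why annotation categories (threats, ridiculing, etc.) matter: the model can only learn distinctions the training labels actually make.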
One of the hot topics in NLP is, of course, natural language generation. This became even more widely known when the OpenAI group announced that one of the language models they developed, namely GPT-2, was so advanced that it would be too dangerous to humanity to actually release it. This was, of course, somewhat of a publicity stunt, but these models really are getting better and better and are being trained on more and more data. You can probably test them yourself on the internet, just by looking for GPT-2 or GPT-3, and they will complete your sentence just by looking at statistical probabilities in the language model. This example is, of course, wrong: we are asking the model, in a way, where Lincoln was born, asking it to complete the sentence "Lincoln was born in", and the prediction is "in Germany". That is because this test was made on an older, smaller GPT-2 model trained on a smaller data set, but also because this model is not actually trained for question answering but rather for continuing sentences provided by the user. These examples of NLP applications are appearing in more and more business applications and academia-related projects, and we could discuss them at length. But one important application I would like to point to is the combination of image recognition and natural language processing. What we can do now is combine the methods developed for image recognition with NLP and train models that are able to automatically label and even describe images and photographs, describing not only the objects but the situation taking place in a photograph. This is a very important application that allows, for example, creating documents that are readable by people with disabilities, and this is one of the projects that I am working on right now. So, in conclusion, NLP has come a long way to this date.
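"Completing your sentence by looking at statistical probabilities" is exactly what a language model does, and the mechanism can be shown at bigram scale. The corpus below is a made-up three-sentence text; GPT-style models do the same next-token prediction with neural networks over billions of tokens instead of a frequency table.

```python
from collections import Counter, defaultdict

# A made-up toy corpus; a real model trains on billions of tokens.
corpus = ("lincoln was born in kentucky . lincoln was a president . "
          "the president was born in kentucky .").split()

# Count bigram frequencies: which word tends to follow which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def complete(prompt, steps=3):
    """Greedily append the most frequent next word, GPT-style in miniature."""
    words = prompt.split()
    for _ in range(steps):
        candidates = bigrams[words[-1]].most_common(1)
        if not candidates:
            break
        words.append(candidates[0][0])
    return " ".join(words)

print(complete("lincoln was born"))  # lincoln was born in kentucky .
```

This also explains the "born in Germany" failure in the talk: the model simply continues with whatever was statistically common after "born in" in its training data; nothing forces the continuation to be a true answer.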
But it is the recent years that have had the greatest share in this revolution, based on those four pillars: the algorithms, the available processing power, the available data, and the business need that is the driving force of this work on NLP. Thank you very much.

Hello? Miko, you have to unmute yourself. Sorry. Hello, is there a problem with my... Yes, we can hear you. We can hear you. Okay, sorry. Our second speaker is Yanis Gajski, a graduate of the University of Technology and a machine learning practitioner and entrepreneur who started a very successful startup in the highly competitive area of automatic speech recognition and spoken language understanding. Prior to that, he had been developing a startup focused on affective recommender systems, leveraging human emotions and affective states. And I have a hunch that his newest enterprise, something he is working on right now, is going back to this very human-centered machine learning, and that this is the main reason he is very interested in collaboration with our project. So, Yan, please take the screen.

Oh, hi. So let me just share the screen and get this party rolling. Okay. Just a little bit of additional information about me: as Miko said, I am an entrepreneur, and hopefully a turned social entrepreneur. My next business, or next enterprise, is going to focus on culture design through platforms for personal growth. I specialize in machine learning, I am a published scientific author, and I am also an avid practitioner of lateral thinking and interdisciplinary analysis, and this is basically the angle I want to take in this talk. I want to traverse multiple domains to hopefully create a more holistic understanding of the problem of misinformation and the influence of adversarial social network interactions on society and individuals.
In this talk, I would like to challenge some in-the-box thinking that is popular in the subject of fake news and misinformation. I'd like to make a case for complex systems modes of thinking, arguing that they are really important and actually quite indispensable tools in the effort of web immunization and, in general, in other efforts of positive social change. Having introduced the notion of complex systems, I would like to advertise a couple of useful frameworks, both established and bleeding-edge, that explore this modality of complex systems for applicable understanding and for the design of effective interventions in complex systems. Then I am going to go a little bit interdisciplinary and introduce a modern definition of trauma as a useful lens through which to view the human condition, to understand behavior and the sources of harmful behavior, and, in general, to understand possible interventions that are more than firefighting, that are more long-term. Then, drawing on these examples from complex systems, I would like to make a case for investigating the techniques employed in therapy and personal growth as a possible cyber vaccine, trying to convince the listener that they could quite effectively reduce harm from misinformation. And lastly, I would like to provide some inspiration for using software and AI to make those treatments scalable and cost-effective, and to sketch a field study that could be done within the web immunization grant, this general endeavor that we have committed to here. And just a disclaimer: I might be wrong about all of this. I am taking the liberty, the courage, to be wrong. But this panel is about idea generation, so I hope that's okay. The time is short and the subject is broad, so the depth will be limited; feel invited to reach out to me for additional information on the things touched on here.
There is also going to be a bibliography at the end, where you can reach the sources that have inspired some of the thoughts in this presentation. So, let's start with the challenging part, let's start with being outrageous and probably not being liked by the other panelists; I think it's going to be fun. I have identified three very important box types that often inhibit effective work with complex systems, and society and humans are complex systems. I want to give some examples for each of those. We have a focus on symptom versus cause; we have a focus on pathology and firefighting; and we have reductionism, where, out of fear of complexity, we limit the field of study to a very narrow part of the system, which oftentimes results in reduced predictive capability and reduced quality of interventions. First, focus on symptom versus cause; let's look at examples. The first box we are going to talk about is anger. We can, for example, create an assumption box in which anger in social media should be suppressed because it threatens civil society and the stability of our institutions. If we go deeper and unpack anger a little more, we see that this is a primary emotion of protecting oneself, and psychologists say that there are multiple modes of anger, some of which are very healthy. Anger, according to Jordan B. Peterson, can be an immature reaction to an overwhelming situation, but it can also be a necessary reaction to tyranny: a very adaptive, very important emotion. Now we can have a look at what kind of prison bars this box creates, and we can see that by suppressing anger we risk censorship, or an authoritarian regime where grassroots change is impossible. We have kind of created a dystopia by trying to help and alleviate an existing problem.
And how do we step outside the box? These are going to be, in this format here, usually questions that we can ask ourselves to broaden our understanding of the issue. We can ask, for example, how we can help ourselves to mature, so that we are less overwhelmed, thus reducing those immature reactions; or how we can help ourselves to be aware of our anger and see its real drivers, so that our anger cannot be hijacked by an outside agency. Or we can ask what tyranny we, or the people, oppose, and whether we can find ways to constructively enlist that energy of anger for meaningful change. And similarly with distrust. We can say: okay, there is a lot of mistrust caused by social media, and there are all these conspiracy theories and all this misinformation just circling around, so we need to teach people to trust in the authority of science and institutions; and how do we best do it? That's the box. But when we unpack this trust, when we go deeper into how it functions in the complex system, it can be a result of a cognitive bias, of unresolved developmental issues, or of traumatic events, and that is not distrust but mistrust: we just place our trust improperly. It can be a false dichotomy wrongly generalized from experience: I have been mistreated once, so everyone must be evil. But it can also be an adaptive reaction to an entity or system that has violated us repeatedly: I don't trust this person because he cheated me twice, or I don't trust this institution because it has provided improper information on multiple occasions. Distrust is an essential component of creative and balanced cooperation. Now that we have unpacked it, we can see the prison bars, and we can see the risk of a sunk cost fallacy, when propagating a falsified narrative through a trusted source becomes an imperative, an end in itself, out of the fear of compromising the believability of the source itself.
Once a public agency tweets a particular science fact that is later debunked, it might be prompted to continue this line of reasoning out of the fear of losing its credibility, this currency in the information economy, and this is very dangerous. We can see that this leads to censorship or to the alienation of affected groups: we just push them away because of their mistrust, so we close them off in their own segment. But it can also disable the skeptics and non-conformists, who are essential agents of societal evolution; most innovation comes from those who disagree with, who mistrust, the status quo. So what do we do to step outside? What is the reality of people who mistrust? Is it generated by a non-adaptive process? Could it be caused by anxiety, or hypervigilance, that resulted from trauma? How can this generating process, not the symptom, be amended? We can also ask what systems have perpetually violated trust, and how coercion has become a societal norm because of marketing and post-truth politics; mistrust may be a natural reaction to a culture of coercion. And how can those drivers be amended and put in check, so that we can restore healthy trust? We can educate ourselves about healthy trust and distrust, and about how we can have distributed agency. These are some of the questions for stepping outside the box. Now we come to pathology and firefighting. An example of how this modality of thinking can put us in the box: we should find ways to spot super trolls and super spreaders and disable them, so we find those who share misinformation in a direct message to 100,000 people, or to 50 other friends.
But if we look deeper, if we try to unpack this, the super spreaders are usually a minority, given the hypothesis that they follow a power law. Their impact, though, is amplified by the fact that sharing, and giving the positive feedback that affects the algorithms, is free in social networks. Their activity is basically unbounded: it's free and thus can be easily gamed. Their impact is also amplified by regular spreaders, and it affects the unconscious bystanders as well. From that modality we can see the prison bars. This can be taken to an extreme that really results in censorship, where we take away the freedom to share, or we just silence particular groups that are not consistent with a general agenda. But also, and this is to quote Sharon Begley, we have this problem that science has always focused on people and conditions that are pathological, disturbed, or at best normal; in the past 30 years there have been about 46,000 scientific studies on depression, and an underwhelming 400 on joy. So with that mode of thinking we risk regulating social networks into rigidity instead of stimulating them into flourishing; we can throw out the baby with the bathwater and lose what social networks have brought us, because we over-regulated. And this turns into a whack-a-mole game, where the trolls become smarter and we hunt them better: an arms race, and lots of resources wasted. So how do we step outside the box? We can ask ourselves what contributes to digital hygiene, and whether some of those behaviors can be automated or encoded into the platforms themselves. We can also look at people who do not spread misinformation, or for whom misinformation is harmless, so that it does not change their behavior in harmful ways.
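The "power-law minority" claim above can be made concrete with a simulation. This is a sketch under assumed parameters: per-account activity is drawn from a Pareto distribution with an invented shape parameter of 1.2, not fitted to any real network, but it shows how a heavy tail concentrates activity in a tiny group.

```python
import random

# Simulate per-account share counts with a heavy-tailed (Pareto) distribution.
# The shape parameter 1.2 is illustrative, not measured from real data.
random.seed(42)
shares = sorted((random.paretovariate(1.2) for _ in range(100_000)), reverse=True)

top_1_percent = sum(shares[:1000])   # activity of the top 1% of accounts
total = sum(shares)
print("top 1%% of accounts produce %.0f%% of all shares"
      % (100 * top_1_percent / total))
```

Under these assumptions the top 1% typically accounts for a large fraction of all sharing, which is why interventions aimed only at super spreaders look tempting, and why the talk argues the picture is more complicated.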
We can look at their features and see if we can amplify them, if we can teach them to other people. We can also increase the stakes of the game of sharing and make the sharers stakeholders in the effects of their sharing; that includes transitive trust networks, for example. And we can introduce limiters into the networks, creating scarcity that could inhibit mindless sharing: for example, we can have a limit of weekly shares per account. So these are different ways of looking at the problem from a different perspective. And then we have reductionism: an increased focus on a particular facet of a problem that closes us off from the underlying complex system that generates the problem. An example of this kind of thinking could be: we must focus on the availability and quality of information, fact-checking provided by a reputable source, so that people can make more informed decisions. This is the box. But when we unpack it, when we look a little deeper at how people make decisions, we see that this box relies on the assumption that we can overrule misinformation with better information, by providing fuel, that is, information, for system two, our rational thinking system, to follow Kahneman's logic. According to Kahneman, we have two systems: intuition and instinct, which take about 95% of our decisions, and rational thinking, which takes 5%. Moreover, rational thinking is informed by heuristics from intuition and instinct; this is where the data being put into the rational process originates. We can see that our decision making, on average, is seldom rational, as this process of rational thinking is slow, expensive, often unpleasant, and requires focus and attention. In fact, decision making is a multifaceted problem invoking multiple systems besides rational thinking.
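The weekly-share limiter mentioned above is simple enough to sketch directly. The limit of 10 shares per week is an arbitrary, assumed parameter; this is only an illustration of the "induced scarcity" idea, not any platform's actual mechanism.

```python
# Toy "limiter": each account gets a weekly share budget, so mindless
# resharing acquires a cost. The limit of 10 is an arbitrary assumption.
class ShareLimiter:
    WEEKLY_LIMIT = 10

    def __init__(self):
        self.used = {}  # account -> shares used this week

    def try_share(self, account):
        used = self.used.get(account, 0)
        if used >= self.WEEKLY_LIMIT:
            return False                 # budget exhausted: sharing is scarce now
        self.used[account] = used + 1
        return True

    def reset_week(self):
        self.used.clear()

limiter = ShareLimiter()
results = [limiter.try_share("troll_account") for _ in range(12)]
print(results.count(True))   # 10 shares allowed, the remaining 2 blocked
```

The design point is that scarcity changes incentives for everyone uniformly, unlike troll-hunting, which targets individuals and invites the arms race described earlier.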
We have emotions and intuitions that are strongly tied to our biology: our cortisol levels will change our intuition, our oxytocin levels will change our intuition. And we have relationships: we use empathy and social predictions in our decision making. Additionally, what we can see is that misinformation or false beliefs are not harmful unless they elicit suffering, unless they elicit some response that is harmful: violence, risky behavior, self-harm, or transitive harm, where the information hurts somebody we spread it to. Otherwise, they are not harmful. In that sense, misinformation has to out-compete other decision drivers in the way we make decisions in order to become harmful. So we can look at the things that make for good decision making, like values, awareness, mindfulness, as inhibitors of the bad decision making that could be caused by misinformation. It's a very fluid game between all those driving factors. And because of that, many people act contrary to their currently declared beliefs, and this has to be understood when we talk about the effects of misinformation. So now we can see the prison bars: we are mistaking a human for their system two, their rational ego. In fact, many contemporary social engineering hacks are designed to bypass rationality and play instead on emotion and relational pressure, so fighting them on rationality is a lost cause. We can, again, have the sunk cost fallacy, and the idea of relying on credible sources is prone to corruption. So we can see these prison bars, and now we can ask how to step outside.
So we can ask, for example: how do we foster mental postures that reduce suffering and decrease the chance of inflicting suffering on others, regardless of the information that is currently held in, you know, our working memory? How do we help ourselves achieve better emotion regulation, so our emotions cannot be hijacked? And how do neurobiological aspects of the human experience, like diet, respiration, posture, or hormonal profile, affect emotion regulation and social behavior? We can even ask: maybe, you know, good diet and respiration techniques are more important than media education, because they change our hormonal profiles and thus our social behavior. And we can think about how we promote those positive regulatory behaviors as a countermeasure to the negative drivers. So, yeah, that's the box, and those are some ways of thinking, to kind of spark the discussion, to help us step outside. But you can also see that outside of the box things are getting very complex: there are many interactions, there are many additional inquiries, there are many additional lines of thought. So we can now ask how we navigate outside the box, and some really useful tools for that are complex systems and complexity theory. Basically, complex systems are systems that are composed of many diverse parts that are highly interconnected and capable of adaptation, and that perform some collective function. A key feature in the way we view complex systems is the network perspective, so, again, things are highly interconnected. Another is feedback loops that are non-linear: we have butterfly effects, we have threshold effects. The proportionality of input to output makes it very hard to predict, with simple causality, what is going to result. And... yes, you wanted to say something? 
Yes, I hate to do this, really, because I feel that this is the beginning of a fascinating dialogue and discussion, but unfortunately we have to really stick to the schedule, because there are next panels and the schedule is very packed, and unfortunately you have just reached your time. Okay, as I was afraid of. So, we still have five minutes, so maybe I will give the mic back to Isabella for the questions, and we will definitely schedule a much longer time to discuss everything that you didn't have time to present. Sure, maybe I can jump to the conclusions for two minutes. So make it, please, two minutes. Okay, okay, just feel free to stop me. So, basically, we need a well-rounded approach that optimizes the mind, the embodied brain, and our relationships, and this approach has to be informed by complex systems science. So we need to look for ways to create a mentally thriving society, and it's a matter of global security. So preventative mental health care is a must; lifelong education and practice is a prerequisite for a happy and safe society. And we can look at how software and AI can contribute: they can contribute by scaling those efforts, and by making the assessment of the progress of those efforts more data-driven. AI is a must for reverse-engineering complex systems, and it is good at monitoring complex systems, because it can take in a multitude of signals that humans, you know, have a hard time following. So, as an objective for de-weaponization, we create a field study with four groups. The first is unconditioned. The second is educated about misinformation and ways of reasoning. The third one is enlisted into a personal and relational growth program, where we teach mindfulness, positive psychology, physical exercise, group support, and rational practices. The fourth one is a combination of the second one and the third one. 
And we kind of see if the hypothesis holds that a developed, grown human, who has the tools for a positive engagement in life, is actually immune enough to the threats of misinformation. And that's it. Yeah, so the first question is to Dr. Kobyliński, and it concerns the very low granularity, with text data collected on the level of individual states or even countries. And this might suggest that there is a huge variability of language usage between people: local news outlets, radio, and newspapers, combined with local dialects, influence the way we speak and write. What is your opinion on that? In your experience, do you see this phenomenon for languages other than Polish? Well, of course, there is a diversity in language, and what we are doing in European and broader research concerning linguistics is collecting data from various sources. So there are specific projects concerned, for example, with language dialects; even in Poland, research is conducted on the dialects of Eastern Poland or the dialects of Southern Poland. And each of these projects has its own specific collections of texts, corpora, that are analyzed. And what we sometimes have to do is even individually analyze the grammar and the specifics of the language to be able to come up with conclusions that are specific to this language group. So the idea that we can have one universal language model is still, I think, many years ahead of us. There are some language models that are trained on many languages at once, but they are not really as efficient as models that are developed specifically for a more focused language group, a single language, or even a dialect. So this will always remain a problem, and even more data is probably necessary to create single models encompassing multiple languages. Thank you. And the last question is to Jan. 
The main characteristic of a public health intervention is coercion; don't you think that the metaphor of an information virus and public health measures is something that prevents us from thinking out of the box? Can you repeat the first part of the question? The main characteristic of a public health intervention is coercion. Don't you think that the metaphor of an information virus and public health measures is something that prevents us from thinking out of the box? The method of intervention. Yes. Actually, I wanted to make a case for exploring things like culture design, where culture design focuses on fostering certain postures in people that allow them to adapt more effectively. And I invite those asking that question to explore the field of culture design, where it focuses on the behaviors of individual agents and the way they come into relationships, and on educating them, not by forcing, but educating them in a way where they can self-adapt and self-educate in the future, and self-govern. So that is a very interesting emerging field, in between complexity theory, social evolution, social intervention, and governance. Yeah, and also there is a request from our attendees: is it possible to share this interactive list that you showed? Yeah, I think it will be possible, and I think we are also going to record the full version of the talk. I'm sorry, I just got carried away thinking about it. So I'll be happy to share it; please request it from me via email, and we are also going to record it. That would be great. Thank you. Okay, thank you very much, and so as not to irritate the head of the project, that is it, because we are already eating into three minutes of the next session. So thank you all for participating in this fascinating discussion. 
This is a conversation that will definitely continue throughout the project and will spawn many new ideas. Thank you very much, and now you can probably move to the next panel. Yes. Thank you very much. Don't forget to log in to the next session, or just click the next live stream.