Hi everyone, I'm Michael Dyslaty from the Halle Institute for Economic Research, and in this video I'm talking about word embeddings and geometrical learning for social network analysis. First, let me thank the useR! organizers for setting up the conference online despite the coronavirus crisis.

Let me first give a quick overview of what I'm going to talk about: the background, then the model. The background is word embeddings which, as we know, are representations of words as vectors in a high-dimensional space. As an application, Kozlowski and co-authors did something along these lines recently, two years ago. My extension is primarily focused on social network analysis, but it can also be extended to other fields. The interesting part is that some features of social networks, for instance hashtags, association rules, or the network structure of users, are not fully exploited, and this is the gap I am trying to fill with this new project.

So let me give you a review of the model. The model takes as inputs a large amount of text: users' texts, likes, and friendships or followers on the social networks. From these inputs, three parts are developed. First, neural-network word embeddings, which is nothing new; second, a hierarchical structure of interests and categories, hierarchical in the sense that there is a sort of hierarchy over the words; third, a social network structure of friendships and interests. The second and third parts are then combined to create salience, a new measure that captures the diffusion and frequency of a word in a space. Combining all these parts, we get to the model, which has two components, one for words and the other for users. The model output is a projection of the words, with their salience, and of the users into a single word embedding space. This is what is really new in my project. Here we can have an overview of the model in a nutshell.
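The talk does not give an exact formula for salience, only that it combines the diffusion and the frequency of a word. A minimal sketch of one such combination, on toy data, might look like the following (the user texts and the particular product of relative frequency and diffusion are my assumptions, not the project's actual definition):

```python
from collections import Counter, defaultdict

# Toy corpus: each user maps to the words of their posts (hypothetical data).
user_texts = {
    "alice": "inflation rates inflation policy".split(),
    "bob": "inflation football policy".split(),
    "carol": "football football rates".split(),
}

# Frequency: how often each word appears across all texts.
freq = Counter(w for words in user_texts.values() for w in words)
total = sum(freq.values())

# Diffusion: the share of users whose texts contain the word.
users_per_word = defaultdict(set)
for user, words in user_texts.items():
    for w in words:
        users_per_word[w].add(user)
n_users = len(user_texts)

# Salience sketch: relative frequency times diffusion (one plausible combination).
salience = {
    w: (freq[w] / total) * (len(users_per_word[w]) / n_users)
    for w in freq
}

for w, s in sorted(salience.items(), key=lambda kv: -kv[1]):
    print(f"{w}: {s:.3f}")
```

A word used often but only inside one community scores lower on diffusion than a word spread across many users, which is the intuition the measure is meant to capture.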
Now, in addition to what I said, two further functions enter the model, one for filtering and the other for mapping, in particular to map the users into the word embedding space. In a first trial, I will use word2vec from Python, called from R via the reticulate library. At the end, the output will be a word embedding space with salience, which is a new feature, and a number of users in the same space; this is the novelty.

The model could also be applied dynamically, to evaluate the evolution of the structure of words and users over time. I am thinking about applications on Facebook, where there would be a limited sample, and on Twitter, and it would also be interesting to compare the social networks. As an economist, I am also thinking about an application involving a European Central Bank database, to analyze the evolution of economic culture, so words and central bankers, policy makers, and users, over the last 20 years.

So thank you very much. The project repository is closed at the moment: it is private, but it will be opened in September. Thank you very much, and please write me if you have any questions.
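The mapping function that projects users into the word embedding space is not spelled out in the talk. One simple, hypothetical way to realize it is to place each user at the average of the vectors of the words they use, so that users and words become directly comparable in the same space (the 3-d toy vectors below stand in for trained word2vec embeddings):

```python
import numpy as np

# Toy word embedding space (hypothetical 3-d vectors; in the project these
# would come from word2vec trained on the corpus).
embeddings = {
    "inflation": np.array([0.9, 0.1, 0.0]),
    "policy":    np.array([0.8, 0.2, 0.1]),
    "football":  np.array([0.0, 0.9, 0.3]),
}

def map_user(words, embeddings):
    """Project a user into the word embedding space by averaging
    the vectors of the words they use (one simple mapping function)."""
    vecs = [embeddings[w] for w in words if w in embeddings]
    if not vecs:
        return None
    return np.mean(vecs, axis=0)

def cosine(u, v):
    """Cosine similarity between two vectors in the shared space."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

alice = map_user(["inflation", "policy"], embeddings)
bob = map_user(["football"], embeddings)

# Because users and words share one space, their distances are comparable:
print(cosine(alice, embeddings["inflation"]))  # alice sits near "inflation"
print(cosine(bob, embeddings["inflation"]))    # bob is further away
```

Rerunning this mapping on corpora from different periods would give the kind of dynamic comparison of words and users over time that the talk mentions.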