This prize was created by EPS about five years ago. It is awarded every two years to two distinguished scientists working in the field of statistical physics, the modeling of physics, complex systems or complex networks. So it's a great pleasure that this year we have two very distinguished scientists indeed, Albert-László Barabási and Angelo Vulpiani, and I think that's worth a clap. Okay, great. For this prize we traditionally follow alphabetical order, so that means I will start with László and afterwards I will talk about Angelo. So let me start. Well, László Barabási is extremely well known, so no introduction is really necessary, but I should keep in mind that perhaps some people are following us on the YouTube channel coming from completely different areas, so allow me to sketch a CV in three lines or so. László studied in Bucharest and Budapest, and then got his PhD in 1994 at Boston University, his advisor being H. Eugene Stanley. After a brief stay at IBM's T. J. Watson Research Center, his career continued in 1995 at the University of Notre Dame, first as an assistant, then associate, and later full professor and director of the Center for Complex Networks at that university. In 2007 he was appointed director of the Center for Complex Network Research at Northeastern University in Boston, holding a simultaneous position at Harvard Medical School. And from 2013 onwards he has had an additional affiliation with the Central European University in Budapest. Now, László is a pioneer in network science. He played a leading role in the development of this new science, not only at its beginning but throughout the following two decades, where he was involved in basically all the major advances in this field.
And his breakthrough came in 1999 with the discovery, together with Réka Albert, that many real-world networks exhibit a scale-free structure. This discovery really opened up an immense research activity, and many regard 1999, with that paper, as the birth of network science. He went on to introduce a statistical physics model for the emergence of scale-free phenomena, the famous preferential attachment model. This corresponds to one of the most cited papers in physics; and talking about citations, his citation count is now something like 270,000, which gives you an impression of the impact of this work over the past two decades. Okay, the really interesting thing is that it started with physics, but László then expanded into many different areas, including biology, medicine and the social sciences, and made important contributions there. In biology, for example, he has shown the crucial role of genetic, metabolic and biochemical networks in molecular and cellular processes; in medicine he introduced the field of network medicine; and in social systems he worked on uncovering the physics of social ties, communities and human mobility patterns. Fantastic work, really. As he has so many interests, László, you really do so many things, we had real difficulty compressing all that into a short citation. So I now read the official EPS citation for this prize. Albert-László Barabási is honored, and this is the official text, "for his pioneering contributions to the development of complex network science, in particular for his seminal work on scale-free networks, the preferential attachment model, error and attack tolerance in complex networks, controllability of complex networks, the physics of social ties, communities and human mobility patterns, genetic, metabolic and biochemical networks, as well as applications in network biology and network medicine." So, a big clap for László.
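For readers coming from other fields via the YouTube channel, the preferential attachment model mentioned above is simple enough to sketch in a few lines: new nodes keep arriving and connect preferentially to nodes that already have many links, which is what produces the scale-free (hub-dominated) degree distribution. The following is an illustrative toy implementation, not the original code; the function name and parameters are my own.

```python
import random

def barabasi_albert(n, m, seed=None):
    """Grow a scale-free network with n nodes: each new node attaches
    m links, choosing targets with probability proportional to degree."""
    rng = random.Random(seed)
    # Start from a small fully connected core of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # Sampling uniformly from this repeated-endpoint list is the same as
    # sampling nodes proportionally to their degree.
    targets = [v for e in edges for v in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:          # m distinct, degree-biased targets
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets.extend((new, t))
    return edges

edges = barabasi_albert(2000, 2, seed=1)
```

Even at this small size the degree-biased choice produces a few strongly connected hubs, while a uniform choice would not.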
So László, great that you're there online. I don't really know how to award a trophy like this online, should I throw it? It was sent by mail to Boston, so you will get the trophy, and you will also get this certificate here, which is signed by Luc Bergé, the president of the European Physical Society, and by myself as chairman of the division, and which carries the official text. I'll finally, send a picture, maybe. Oh yes, yes, we'll send a picture. And that leads me to Guido Caldarelli, who is a board member of our EPS Statistical and Nonlinear Physics Division and also the president of the Complex Systems Society; he will say a few words of congratulations. Can you hear me? Can you hear me? Switch it on. Hello. Now you can. László, what a strange thing in life: I am in the place where we first met, I won't say how many years ago. I'm in season 3S now. And yes, you are laughing, you are remembering our dinner with uncooked spaghetti, László. Thank you for what you have achieved in these years. Thank you for bringing statistical physics to explore things that traditionally were not explored by physics. We did a lot of things, we followed you in many things. On behalf of the Complex Systems Society, thank you for all the things you have achieved and for bringing so many problems to study, so many answers to the facts of the world around us, showing that we can play a role even in this world we have around us, not only in traditional physics problems, which of course also belong to the world, but in every person's life. So thank you. Thank you again. Thank you everyone, I don't know if it's the right moment for me to say a few words, can you hear me? Okay, great. And as said, we have two prize winners, completely equivalent, just in alphabetical order. So it's a great pleasure now to talk about Angelo.
And again, he is extremely well known to everybody, so no need for a long CV; I do it just for the YouTube audience, who might come from different fields, and summarize your CV in two lines. You graduated from Sapienza University of Rome, and then you were a CNR fellow and assistant professor, first in Rome, then associate professor, and then of course very soon a full professor there. And you have been in Rome all the time, interrupted by several visits to international institutes; I mention here universities in Paris, Brussels and Marseille, the University of Stockholm, the University of California at San Diego and the Niels Bohr Institute, and you are currently also an external faculty member at the Complexity Science Hub in Vienna. Now, Angelo is an outstanding physicist who has made truly fundamental and seminal contributions to statistical and nonlinear physics, and his research interests are distinguished by their strong connection to fundamental issues, really to the old topics originally raised by Boltzmann, Kolmogorov and Khinchin. Over the years you have built up a strong group in Rome and inspired many young people there, and of course many books of yours are on the market, also for the general public. Your immense scientific productivity is reflected by the fact that we found almost 500 papers that you co-authored, which is an incredible number; 500 papers, I mean, this is fantastic. But most relevantly, some of your works were real breakthroughs in the field and produced new areas of research. The most prominent example is the topic of stochastic resonance, which was introduced by Angelo Vulpiani together with Benzi, Parisi and Sutera in 1981. This developed into an incredibly active research area afterwards, with many other people also making important contributions.
In 1984, Angelo Vulpiani, together with Paladin, also dealt with multifractality. Originally it was of course Mandelbrot who was responsible for fractals, but you showed that multifractals have real physical implications, for the characterization of invariant sets in dynamical systems and also in turbulent flows. And today this concept of multifractality is really very important; you find it in many papers that analyze time series and so on. But there are many other important topics to which you contributed a lot, for example chaos in Hamiltonian systems and the equipartition of energy in follow-ups of the Fermi-Pasta-Ulam problem; and then you looked at diffusion and transport in various nonlinear settings and at the role of complexity and its characterization in general. Fantastic work over the years. And again, I now read the official text, the short citation for your work; again it was hard to compress it into a short sentence, so here is what EPS came up with as the official text for your prize. Angelo Vulpiani is honored "for his seminal contributions to statistical and nonlinear physics, touching fundamentally important issues in dynamical systems theory and statistical mechanics, including the mechanism of stochastic resonance, the multifractality of invariant sets of dynamical systems, the dynamics and multifractal properties of turbulent flows, chaos in Hamiltonian systems, and the limits of predictability in complex systems." So congratulations. And Angelo, may I ask you to join the stage now. I should say that we follow strict social-distancing procedures, we wear masks and we stick to the rules, being a good example really, and therefore there is no handshake but something like this. I'll do that again, once more for the photographer. Okay, great. And please take your trophy, and both, yes, both are for you. Okay, great. So stay here; Erik Aurell will also say a few words, especially for you.
Okay, sorry. Yeah. So, dear Angelo, caro Angelo, only a small fraction of what you have done in science has already been mentioned by the chairman, so I will not repeat it, nor give an even longer CV. I will say that you amazed me, when I first came to Rome, by having created, with Giovanni Paladin and a band of friends, a group that did science together. That was an inspiration for me then, and it has remained an inspiration for me and many others: to be happy doing science and to be happy with one's friends. And I'm so glad that you are still doing it, after all these years. It's quite some time ago since we first worked together in Rome. Many happy returns and many more prizes. All the best. Okay, and now it's time that the prize winners really give their talks, and for the talks I hand over to Mogens Jensen, who is the chairman for the prize winners' talks. Thank you very much. It's a great pleasure for me to chair this session. And I want to introduce the first speaker, László Barabási. Big congratulations, László. In the early 90s, a young student from Budapest came to visit Kim Sneppen and me at the Bohr Institute. I vividly remember you derived preferential attachment on the board; I can still almost see your equations there, maybe they are still on the board, who knows. And of course that has led, as we heard, to incredible work all over the world. At that time we also wrote a paper together on multiscaling of interfaces, which actually became a very nice paper, highly cited, though not as much as other things. And we have kept close ties over the years, which I appreciate; I have visited you several times in Boston, quite recently in fact, and you have come to Copenhagen many times over the years. It's a great pleasure to introduce the first prize winner, László Barabási, to give a talk of around 40 minutes plus five minutes for questions. Please. Thank you, Mogens. Can you all hear me now, I wonder?
How do I, and I'm assuming I can share my screen. Right. Just get the first slide. I'm flying a little blind here because I don't hear any feedback, so I'm assuming that you can actually hear me. If not, then I'm sure you have a way of sending me a WhatsApp message or some other way to let me know. First of all, I am truly honored, and I'm honored at many, many levels. As I will tell you in a moment, it is wonderful to be awarded this prize, and I actually consider it a prize not for me but really for the network science community, for the exceptional work that the community has done within statistical physics, trying to really bring networks to the place where they belong, at the forefront of understanding complex systems. So, with that in mind, what I'd like to do today is to tell you a few words about network science, and then, since we're scientists and I think we always want to hear something new, I'm hoping to tell you something new about some of the work we're doing now on physical networks. But first, let's talk about network science as a big picture. As I said, I'm a network scientist, which means that I and the community have spent the last twenty or so years focusing on networks like this: the internet, how the computers are connected on the internet. We looked at social and organizational networks, how different people are connected within a company or university and so on, and how that affects the company's productivity. And many of us have spent quite a bit of time in the last 10 years on what I would call network medicine, that is, trying to understand how the interconnected nature of the networks within ourselves leads to disease, and eventually how to intervene in these networks to potentially cure those diseases.
And this kind of work has actually had quite a bit of impact, not only within physics and statistical mechanics, but way beyond that. But I would like to focus first on statistical physics, because that's where we all came from, that's where I was trained and where many of you were trained, and that's what we consider our real intellectual home. And I would like to put in perspective how networks fit into this journey of many, many decades that we have had in modern statistical physics and nonlinear systems. I was a student of some of these advances: I started my career working first on chaos, then on fractals, and then on many other things, as you will see, moving on. So here is a slide that one of my students made many years ago, trying to compare the impact of the different advances in statistical mechanics. And this is not meant to show off the different impacts as such, but to understand what actually happened at the end of the 1990s. As I said, I started my career as an undergraduate in Bucharest studying chaos. One way to look at chaos is that it started with Lorenz's paper in 1963. Here you see how many citations Lorenz got, and you can see that there was not much attention on the topic until about the 70s; that's when nonlinear systems started to wake up and had a bigger and bigger impact as time went on and chaos was discovered as a topic. Then of course came spin glasses, and many of us were inspired by that, and of course renormalization, which became a toolset. When I was in graduate school in Boston, I really spent about half a year teaching myself renormalization, there was no course at that time, and actually published a couple of papers using it in the context of surface dynamics and fractal surfaces.
Then neural networks, then fractals, of course, the area where I did much of my work before I started focusing on networks. What you see is that these areas had quite a bit of impact: some of these papers, like Mandelbrot's, were reaching as many as 300 citations a year, which was actually huge within the community. And they woke up at roughly the same time, making their impact particularly in the 80s and 90s, when these fields were really in their heyday and when I started my studies in physics. And then, at the end of the 90s, networks came along, and I'm just highlighting two papers; I could show many more. What is interesting is not only the speed at which networks emerged, but also the fact that, when it came to impact, they are on a different scale compared to these giants that we are all building on. And the question is why. I often think about that, and the reason I keep coming up with is that networks was the first area that really went cross-disciplinary and interdisciplinary. That is, it not only engaged the core statistical mechanics community, which it did, right, many of our colleagues previously working on other things incorporated network thinking into their work once networks came along, but it also inspired lots of young people, as well as many people from other disciplines who otherwise had no background in statistical mechanics. They may have had no interest in statistical mechanics per se, but their work was really affected by networks, be they biologists, engineers or computer scientists, dealing with everything from cell biology to the internet. By contrast, fractals was a very important theoretical concept, and I'm picking fractals because I really worked a lot on fractal surfaces.
But its impact stayed mostly within materials science and physics, inspirational beyond that with the images, but not really inspiring a huge amount of work in other areas. Network science, by contrast, was adopted by disciplines from sociology to medicine to neuroscience and so on. So in a way this implicates not so much that this work had more intellectual value than the previous work, I am such a big fan of everything the community has done, but that the work of network scientists was adopted by a much bigger community; it was really essential to their life. And why was that? Well, I pointed out to you that this happened around 2000, and it's easy to think that we network scientists came along and saw the light. In reality, we were responding to a trend that was already happening in society. To see that, I picked out word usage from the Google n-grams, which look at all the books published from 1600 until today and how often they use certain terms. Take "quantum", which started to be used from the 1920s on and reaches a certain percentage; "evolution" is a little bigger than "quantum", it has a wider appeal per se, starting from the 1860s or so. And what is interesting is that, starting from the 1970s and 80s, the use of the concept of networks really picked up, with a peak around 1999-2000. This indicates that by the time we started calling ourselves network scientists, society was already thinking a lot about networks, for many, many different reasons, from the World Wide Web to television networks, you name it. We were capturing that line of thinking and responding to it by bringing the science to explain what was happening in those areas.
And of course, through this process the network science community was born, which is really a subset of statistical mechanics plus many others. And you can now actually get a PhD in network science, which means that you have to study statistical physics very hard to do so, among many other things. So we are now training, at Northeastern and in Budapest, many people in the tools of statistical physics under the guise of network science, and this has been recognized both by the different funding agencies, who have been funding it, and obviously by the scientific community. And we think a lot about where this takes us, about where the work of statistical mechanics ends and where we actually have to hand the field on to the other disciplines that are applying it. And there is a very strong community, which I'm sure many of you have participated in or encountered, which we call the NetSci community. And one of the reasons I'm honored for this event to take place in Trieste is that the very, very first scientific meetings on networks took place in Trieste; some of you may remember, because you were part of those meetings. And from there on, the last meeting that we had in network science, which was called Networks because it brought several communities together, not only NetSci, had more than 1600 participants, which shows the breadth of the subject. The most important thing I would like to mention, what I'm really proud of, is how the community of statistical physicists and network scientists has responded to the crisis at hand, which is of course the COVID virus. And the reason we could respond, and there was an exceptional number of papers, is that the tools that we have developed as a community over the last twenty years were really developed to respond to the moment.
And I want to show you an old video, not mine but by Alessandro Vespignani. In 2009, he was the first scientist to try to predict the spread of a virus, in that case the H1N1 virus. He built a network-based methodology to say how many people would actually be infected in every major city, and at what time to expect that infection. And of course, when you look at this particular video, showing how H1N1, starting from Mexico, reached much of the world, it's hard to escape the images that we have seen in the last year and a half of the spread of COVID. And really, we as a community of statistical physicists should be proud of the fact that without us there would be no predictive tools right now. Without the network epidemiology that many of you have worked to develop, we would have had, when COVID arrived, no tools to really predict, to react and to develop the different measures to stop the epidemic. There is a reason why Alessandro Vespignani and his team are now White House advisors and modelers. And there is a reason why in many countries, from Germany to France to the United States, network scientists, and statistical physicists in particular, have the ear of the heads of their governments, from Merkel to Macron all the way to Budapest and others. So fundamentally, I think, with COVID, network science and networks in general came of age. And not only in terms of predicting COVID: many of our colleagues in network medicine were involved, including us, in trying to predict repurposing opportunities, that is, to identify existing drugs that could help cure patients who have the disease. And others were involved in developing new ideas of how to do lockdowns so that they are more humane but also most effective, and so on.
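To make the idea of network epidemiology concrete for readers outside the field, here is a deliberately minimal discrete-time SIR (susceptible-infected-recovered) model running on a contact network. This is only a toy sketch of the general approach, nothing like Vespignani's actual large-scale metapopulation models; the function name and parameters are invented for illustration.

```python
import random

def sir_on_network(adj, beta, gamma, patient_zero=0, steps=100, seed=0):
    """Discrete-time SIR epidemic on a contact network.

    adj: adjacency list {node: [neighbours]}.
    beta: per-contact, per-step transmission probability.
    gamma: per-step recovery probability.
    Returns the final state of every node ('S', 'I' or 'R')."""
    rng = random.Random(seed)
    state = {v: "S" for v in adj}
    state[patient_zero] = "I"
    for _ in range(steps):
        infected = [v for v, s in state.items() if s == "I"]
        if not infected:               # epidemic has died out
            break
        for v in infected:
            for w in adj[v]:           # try to infect each contact
                if state[w] == "S" and rng.random() < beta:
                    state[w] = "I"
            if rng.random() < gamma:   # then possibly recover
                state[v] = "R"
    return state

# A chain of contacts: with certain transmission (beta=1) the infection
# sweeps the whole chain, one hop per step, and everyone ends up recovered.
chain = {i: [j for j in (i - 1, i + 1) if 0 <= j < 5] for i in range(5)}
final = sir_on_network(chain, beta=1.0, gamma=1.0, steps=20)
```

The point of the network formulation is that the outcome depends on who is connected to whom, not just on average contact rates: nodes with no path to the seed are never reached at all.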
So in a way, it was amazing how the statistical mechanics and network science community was prepared to respond to the moment. And that has consequences, in a good sense. These last two years were the moment when everybody on Earth became a little bit of a network scientist, when the dinner discussion for the first time was about the reproduction number, the number of contacts we have, the networks that spread the disease and the measures we need to take, in network terms, to actually stop it. So it's amazing how much awareness of network thinking has emerged during the last two years. In many ways the field, in its visibility, has matured and has stepped up to be on a par with many of the big disciplines we have had for a hundred years. And I'm really proud of the community for being able to respond to the moment in such an effective manner. And as I often say, it takes a village, it takes a huge community, to really take this knowledge coming from statistical mechanics and translate it to a much wider audience and to such a big problem as COVID. Coming back to my particular case and today's award, I want to say that I really accept this award in the name of the network science community, because it is the community that is really being recognized here. And I'm humbled to be the carrier of this particular award, but I'm really here to represent all of us who have spent the last 20 years developing the science of networks. Particular thanks, of course, in my case to my lab; this is what I mean by "it takes a village": it takes not only a big community but also many, many people in one lab to be inspired by. And here I have tried to put together a list of everybody who passed through my lab in the last 20, 30 years. It's a list that I'm very, very proud of, and you will probably recognize many names of very active scientists who are active members of the statistical physics community.
I feel that network science has taken a lot from statistical physics, but it has also given a lot back to statistical physics. And that's why I'm proud to be here with you today and receive this award. But let's not only talk about the vision for network science and what network science has achieved; I promised that I would also mention some of the work we are doing right now, what excites me right now as a statistical physicist and a network scientist. And that leads to the concept we have been exploring lately of physical networks. What do I mean by physical networks, and why are we interested in them? Well, the cleanest manifestation of a physical network is the brain. What I mean is that many networks we study in network science are what I would call conceptual or virtual networks, like a social network: Mogens and I know each other, but it doesn't mean that we are permanently connected by a cable that we drag between us, right? So many networks, from biological to social networks, have meaning, have causality, have consequences, but they are not physical objects. However, we are also surrounded by physical networks, and the most manifest example is the brain. What you see here is actually the mouse brain and its wiring at the neuronal level. This particular visualization, and the one you actually see behind me on the screen, is meant to illustrate the physicality of the mouse brain. If we look at other brains we see the same, and this brings it even closer: here is a little cut-out from the mouse brain, where the different neurons and organelles have been colored separately. And you can see that the mouse brain, or any brain, is a very, very densely packed physical network.
And what has happened in the last few years is that, thanks to the Human Connectome Project and related efforts ongoing in many countries in the world, we have started to get very, very detailed maps of these physical networks. This was previously not possible; only in recent years have the technologies emerged. Fundamentally, you need to slice the brain into very, very thin slices and then reconstruct from those slices where the neurons are. Here I'm showing you the brain of the fruit fly, whose connectome has by now been fully mapped out: every single neuron has been mapped individually, they know where it connects to other neurons, and effectively what the architecture of the system is. So a few years ago we started to have real maps of physical networks. And it's not only the brain. There are many physical networks around us, from transportation networks, where the roads can actually merge but cannot just randomly cross each other, they have to go over and under each other to allow seamless transportation, to the blood system, effectively tubes carrying the blood in both directions, to nutrient supply in plants, all the way to an increasing interest in network-based metamaterials, something we are also thinking a lot about in my lab. The question is: what makes physical networks so special? One thing, of course, is their spatial nature: the nodes and links are embedded in three-dimensional Euclidean space, and hence we need detailed spatial information to describe the positions of the nodes and the routes of the links. But by itself this would not be enough, because transportation networks, and what we call spatial networks, which have been explored quite a bit by the network science community and by engineering, are also in this category; they are spatial networks, right?
I think what is new here is what I would call the exclusion principle: the nodes and the links are inherently unable to overlap in physical networks. And as I will show you, that exclusion principle leads to very unexpected effects and poses interesting challenges, both for the way these networks emerge and form and for our ability to describe their structure. Often we also have to deal with cost issues, because links always require material and energy to be built, and sometimes to be maintained, so the length of the links is often limited by material cost. Hence we often need energetic, material or optimization principles to really understand why the network looks the way it does. Fundamentally, what is new about physical networks, because of these two principles, the spatial nature and the volume exclusion principle, is that the adjacency matrix alone does not fully characterize the structure and evolution of physical networks. Hence we need new theory. And the reason we need new theory is that most results of network science are built on, start from, the assumption that the adjacency matrix, that is, who is connected to whom, is a complete description of the system. Tools ranging from degree distributions to controllability, to finding communities in networks, to assortativity, you name it, are all based on one quantity only: the adjacency matrix. However, when we come to physical networks, the adjacency matrix is necessary, but it is not sufficient to describe the structure of the system. Hence we need a new way to approach this. Now, how do we do that, and what are the tools that we in my lab have employed? The first thing we did was to say: let us find tools to lay out these networks, that is, to see how a physical network actually forms. And for that we had to develop tools based on two principles.
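As a small illustration of the point that classical network tools consume only the adjacency matrix: the degree sequence, and everything built on top of it, is a pure function of that matrix and carries no information about where the nodes sit or how the links are routed in space. The 4-node graph below is made up for the example.

```python
# A 4-node undirected graph, described only by its adjacency matrix:
# entry A[i][j] = 1 means nodes i and j are connected.
A = [
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
]

# The degree of each node is just a row sum -- no geometry involved.
degrees = [sum(row) for row in A]   # [2, 2, 3, 1]
```

Two physical embeddings with completely different node positions and link routings share exactly this matrix, which is why the adjacency matrix is necessary but not sufficient for physical networks.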
One of them is that we try to minimize the wiring length, because it is costly; effectively this means that we use elastic links, springs connecting the nodes. That's the first. And the second is to avoid crossings: the links and the nodes cannot cross each other. What you see in this image is that we do have a methodology, the so-called force-directed layout, developed in the 1990s, to lay networks out. That is not sufficient in this case, because the force-directed layout does not consider materiality: in a force-directed layout the nodes and links can actually cross each other. The links can cross easily and they don't see each other; they are effectively just straight segments. In the models that we developed, crossing is not allowed, so this is how the system should look if we do it properly. In order to do so, we ended up building an energy model, an energetic model that simply sums over the terms that are relevant to this story: it sums the elastic energy of the links, so it tries to minimize the elastic energy, and we also sum over the node-node interactions, the link-link interactions and the node-link interactions. We do so by putting repulsive terms between the nodes and between the links, such that they cannot cross each other. And once you have an energy function like the one you see here, you can use traditional molecular dynamics, as well as simulated annealing and all the beautiful tools that come out of statistical mechanics, to minimize this energy, that is, to find the configuration that the network naturally settles into by minimizing its energy, finding the shortest links while satisfying these constraints. So, once you have this and you put it into a molecular dynamics simulation, let me show you what it does.
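A heavily simplified sketch of such an energy-minimization layout, keeping only two of the terms described above: the elastic link energy and a soft node-node exclusion term (the link-link and node-link repulsions are omitted to keep it short). The function names, the quadratic penalty and the naive finite-difference gradient descent are my own choices for illustration, not the actual model used in the lab.

```python
import numpy as np

def layout_energy(pos, edges, k_spring=1.0, r_node=0.3, eps=1e-9):
    """Total energy of a layout: springs on links plus a soft
    penalty whenever two nodes come closer than r_node."""
    E = 0.0
    for i, j in edges:                          # elastic link energy
        E += 0.5 * k_spring * np.sum((pos[i] - pos[j]) ** 2)
    n = len(pos)
    for i in range(n):                          # node-node exclusion
        for j in range(i + 1, n):
            d = np.linalg.norm(pos[i] - pos[j]) + eps
            if d < r_node:
                E += (r_node - d) ** 2 / d      # grows as nodes overlap
    return E

def relax(pos, edges, steps=200, lr=0.02, h=1e-4):
    """Gradient descent on the energy, using a numerical gradient."""
    pos = pos.copy()
    for _ in range(steps):
        grad = np.zeros_like(pos)
        for a in range(pos.shape[0]):
            for c in range(pos.shape[1]):
                pos[a, c] += h
                e_plus = layout_energy(pos, edges)
                pos[a, c] -= 2 * h
                e_minus = layout_energy(pos, edges)
                pos[a, c] += h
                grad[a, c] = (e_plus - e_minus) / (2 * h)
        pos -= lr * grad
    return pos

rng = np.random.default_rng(0)
pos = rng.normal(size=(4, 3))                   # 4 nodes in 3D, random start
pos = relax(pos, [(0, 1), (1, 2), (2, 3)], steps=40)
```

In practice one would use proper molecular dynamics or simulated annealing with analytic forces, but the structure of the computation is the same: write down an energy, then let the configuration relax toward a minimum.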
So, in the traditional models, this is what would happen if we did not have the repulsion terms: the links would cross in the middle without any problem if they chose to. However, in the model that we have now, that type of crossing is not possible. Because of the repulsive terms in the system, the links avoid each other, as you would expect in a situation like that. This is just to illustrate at a small scale how this works. Let me show you what happens if I take a square lattice, totally randomize where the nodes and the links are placed in space, but maintain the square-lattice wiring. Then I follow the energy, which in this case corresponds to the total link length, and you will see why later. If you run the molecular dynamics, it searches for a while, then suddenly the energy starts dropping, it has figured out how the network should look, and eventually it converges to what we recognize as a three-dimensional square lattice. This is just a check that the model knows what it is supposed to do in real-life systems. Well, let me give you a more complicated situation, a network for which we do not know how it should look. What you are going to see here is the flavor network: how ingredients in food are connected by sharing flavor chemicals. Here I am showing you what the computer does, how it searches that configuration space. Eventually, over a longer run, it starts figuring out where the communities, the units of the network, are; you should realize that this procedure naturally identifies the communities in the network, and it converges to what I would call the native state of this particular network, to borrow a term from protein folding.
And once you have that, you can even 3D print it. On the right-hand side, those of you who can see my screen will also see behind me one of those prints, which is currently exhibited in a museum in Karlsruhe, Germany. So if you happen to go to Karlsruhe, visit the ZKM and you will see that and a couple of other beautiful objects of this kind. Now the question is what happens physically, and what are the different length scales and problems we are dealing with when we try to lay out these networks. In this particular problem we have the competition of two length scales. There is a node repulsion range, which acts as an exclusion principle coming from the potential we put on the nodes so that the nodes do not cross each other. And there is a link thickness, which also emerges from a repulsive force, because there is a potential that does not let the links get closer than a certain distance from each other. So the question is how the network changes when we change these parameters, and after some work you realize that there is really only one parameter that matters, the ratio of the two scales. That is, we can reparameterize everything in terms of how changing the thickness of the links affects the network. So we were curious what kind of phases emerge when you do that. To see that, I am going to show you a simulation where we start from a network and just thicken the links; everything else is the same, and the nodes are not shown, for simplicity.
And you see that when the links start to thicken, there is an exclusion effect in space: the links no longer have room to run straight, and they start curving to avoid each other, leading to this kind of fat network that we named Gurges, from the Latin word for gorge, because, like water trying to break through a gorge, the links are trying to break through to find the space where they can go. What you see is a transition from a sparse network state, where the links can go straight and easily avoid each other without ever meeting, all the way to a dense state where there are strong interactions between the links, and about here those strong interactions start to show up, the links pushing against each other. So how do we quantify this? To do so, we first look at the crossings as a function of the rescaled link thickness: we start from this configuration and, as I showed you, we end up here. In the inset I am showing how the links look, the skeleton of the network, where I draw only where each link goes and not its full thickness. Ask first how many crossings would occur if the links became thicker and thicker but stayed straight: everything changes smoothly, and as I increase the link thickness there are more and more crossings. But when we look at the total link length, suddenly two different behaviors show up. For a wide range of link thicknesses, as the links become thicker and thicker, the total length of the links in the network does not change. That is, as I go from very thin links to thicker and thicker ones, they still do not notice each other.
And hence the layout is virtually unchanged: yes, the links are getting thicker and thicker, but they are not crossing each other, not bothering each other. We do see more and more crossings if we consider straight lines, but with little bends the links can easily avoid each other without altering the structure of the network. However, there is a critical point at which the behavior suddenly changes. That is where the link-link interactions turn on, and from there on, as the link thickness increases, the network starts expanding and all the links start growing, because they are pushing against each other. Eventually they are unable to find easy paths, and the network becomes thicker and thicker. And by the way, we can analytically derive this particular transition point; it is in the paper. It is this transition that is really interesting for us. We do not call it a phase transition; it is what we call a crossover transition in this particular case. Nevertheless, it fundamentally alters the shape and the structure of the network. And to see how it alters them, it is very interesting to look at how these networks respond to stress and pressure. So we put the network under pressure between these two plates. What we find is that in the weakly interacting regime, before the link-link interactions turn on, we have mostly straight links, which means that the stress propagates anisotropically in the system, so it has the features of a solid. However, once we cross over into the strongly interacting regime, the links are curved and fill up the space fully, and that induces a diagonal total stress tensor, which results in a fluid-like or gel-like response of the system.
And you see that the transition from the weakly interacting to the strongly interacting regime not only affects the way the network looks, it also alters its physical properties. To show you another angle on physical networks, let me talk about network isotopy. If I give you a particular network and I lay it out in space somehow, the question is: given two such layouts, how do I know that they come from the same network, and how do I distinguish different layouts? This leads to the concept of network isotopy. It is not about whether two graphs are equivalent, because here we assume the graph is exactly the same, the wiring diagram of the network is the same. The concept of network isotopy asks: can we transform one layout into the other without having to cut any of the links? So, for example, this particular layout is isotopic with that one, because I can simply fold it out. However, this one is a non-isotopic version of the same network, because I must cut this red bond and reassemble it in order to arrive at the same configuration. What is interesting is that for a given adjacency matrix a network has an infinite number of layouts, because you can move one node, or bend one link a little more, and you get a different layout. So, in a continuous space, the number of layouts is infinite. But there is a finite number of isotopy classes. That is, the layouts all fall into a finite number of classes, and within one isotopy class I can go from one layout to another by properly rearranging the network without cutting any links. So this was the question we asked: how do we quantify this?
And just to give you more examples, these are some non-isotopic versions of two different networks, which cannot be transformed into each other. They all come down to local tangles: locally the network got entangled, and that is what produces these non-isotopic layouts. Here we were inspired by topology, where some of these problems have been studied for simple loops, and we borrowed the term linking number, which comes from topology and applies to cycles. So take these two circles: we say they have linking number zero because they are independent; linking number one because they are connected like two rings of a chain; linking number two because they wind twice around each other; and so on, as you can imagine. Two loop pairs with different linking numbers are always non-isotopic, because there is no way to move one into the other without crossing a link. If they have the same linking number, it is not guaranteed that the loops are isotopic, but there is a possibility for that. So we generalized this term to networks, which to the best of our knowledge had not been done before: for a given network embedding, we define the graph linking number as the sum of the linking numbers over all pairs of cycles in the network, because ultimately it all comes down to individual cycles. This means we need to enumerate the independent cycles in the network and compute the linking number for each pair; that gives you the linking numbers. In this case you have linking number one in both of these examples, because even though they are entangled in different ways, what is common to each of them is that a single untangling removes the difference. And here is linking number two. Of course, these cycles do not have to be squares; they can be much more complex.
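As a concrete illustration of the linking number borrowed from topology, here is a sketch that evaluates the Gauss linking integral numerically for two closed polygonal curves. This is a generic discretization of the textbook formula, not the method of the paper:

```python
import numpy as np

def gauss_linking_number(curve1, curve2):
    """Numerical Gauss double integral for two closed curves, each given
    as an (N, 3) array of points; returns a float near the integer Lk."""
    dr1 = np.roll(curve1, -1, axis=0) - curve1   # segment vectors, curve 1
    dr2 = np.roll(curve2, -1, axis=0) - curve2   # segment vectors, curve 2
    m1 = curve1 + 0.5 * dr1                      # segment midpoints
    m2 = curve2 + 0.5 * dr2
    total = 0.0
    for i in range(len(m1)):                     # midpoint-rule double sum
        diff = m1[i] - m2                        # (N2, 3)
        cross = np.cross(dr1[i], dr2)            # dr1 x dr2, (N2, 3)
        dist3 = np.linalg.norm(diff, axis=1) ** 3
        total += np.sum(np.einsum('ij,ij->i', diff, cross) / dist3)
    return total / (4 * np.pi)
```

For two circles forming a Hopf link the result is close to 1 in absolute value, and it drops to about 0 once the circles are pulled apart, matching the linking numbers described in the talk.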
The problem with this is that the graph linking number is only meaningful for a network of one particular size. So we introduced the normalized graph linking number, where we divide the measured linking number by the total number of loop pairs in the network. If you do that, then you can compare many different types of physical networks for which we collected data: from the London subway, which has the smallest linking number because it is virtually a tree with a few loops that are never entangled in the way it is laid out; to mitochondrial networks, which are effectively mappable to a plane and again have zero linking number; to vascular networks, which start having nonzero linking numbers; to the C. elegans brain; to the fruit fly; all the way to the mouse brain. And you can see that as the complexity of these networks increases and the more entangled they become, the linking number grows. And here is the key discovery, at least the one I am most excited about: we measured the cost of the network through its elastic energy, which effectively corresponds to how long the network is and how much energy it took to build it. What we find is that the elastic energy of the network and the linking number of the network are linearly related. That is, the higher the linking number, the more entangled the network is, the longer the cables you need and the more energy you need to maintain it. This linearity is very important, because if this were a nonlinear function we would have a difficult problem at hand. Because it is linear, it means there are only weak interactions between the tangles; they are not in the same place in the network. Therefore the number of tangles is enough to estimate the elastic energy. But most importantly, the linearity between the linking number and the energy suggests that we can treat each tangle as an independent particle-like excitation.
And that is what I am trying to show you: this would be the base energy, and this would be a single linking number, then double, then triple. These higher and higher linking numbers correspond to higher and higher energies, because they require longer and longer cable to achieve. And this is not only true for the loop pair I am showing here; it is generally true for an arbitrary network. We can define the system energy as the base energy of the network plus the linking number times the energy that comes from each unit, and we can measure that. Here I am showing you a square lattice with higher and higher linking number, obtained by increasing an effective temperature. And why do we have an effective temperature? Because we do statistical mechanics on this problem: once we have the energy, we define the probabilities, and from that formalism we can derive the linking number as a function of this fictive temperature, as well as the entropy of the system, and so on. Once we have this formalism, we can use this effective energy, which corresponds to the tangledness of the network, to check whether the predictions are in line with the simulations, and they are very much in line. So effectively what we have here is a way to use statistical mechanics to think about entanglement. And this is something that we have recently taken to the brain as well: we are using the same tools and ideas to look at the entanglement of the brain. Finally, I realize that I am slowly running out of time, so I am going to talk very briefly about a project that I am very excited about: linear physical networks. I direct it together with László Lovász, a mathematician in Hungary who, as you probably know, received the Abel Prize, and the Wolf Prize before that. We have been running an ERC Synergy grant together, thinking a lot about how to find the fundamental modeling units of physical networks.
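Treating each tangle as an independent excitation with a fixed energy cost, as the linearity suggests, makes the statistical mechanics elementary. The sketch below is a toy illustration of that idea, not the paper's formalism: it computes the expected linking number at a fictive temperature T, in units where k_B = 1 and each tangle costs an energy eps.

```python
import numpy as np

def mean_linking_number(T, eps=1.0, n_max=200):
    """Boltzmann average of the number of tangles n, with E_n = E0 + n*eps.
    The base energy E0 cancels out of the average."""
    n = np.arange(n_max + 1)
    w = np.exp(-eps * n / T)          # Boltzmann weight of n tangles
    return float(np.sum(n * w) / np.sum(w))
```

For n_max large this is the geometric-distribution mean 1/(exp(eps/T) - 1), so tangles proliferate as the effective temperature grows, which is the behavior shown for the square lattice in the talk.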
And I am going to show you something on which we have only preliminary results, not yet published: the concept of a linear physical network, which is the minimal model, a sort of Ising model in our mind, of a physical network. For that, we place n non-overlapping spheres of radius lambda. Then we choose a random pair of nodes and connect them with a link of a certain length and of thickness lambda, the same lambda as the sphere radius. We keep the link if it does not violate physicality, that is, if it does not cross any other links or any other nodes. If we want to place this next link, however, it violates physicality because it crosses another link, so this link is forbidden. Then we put in another link that is allowed. And so we keep adding links to the system, always rejecting links that violate physicality. As you can imagine, if lambda is zero, that is, if the thickness of the links is really zero, then we have no physical constraints, and in a continuous space there is never a crossing. In that case the linear physical network model maps onto the Erdős-Rényi model: it is purely a random network, the links connecting random pairs of nodes. However, if lambda is larger than zero, physicality is turned on: the thicker the links, the higher the likelihood of a collision, and the more links you have already placed, the more likely it is that you cannot place the next one. So for each network we see a crowding transition for lambda greater than zero: there is a maximum number of links that we can place into the system. This is what I am illustrating here. If you have very, very thin links, this is the red line, you can get quite far and place quite a number of links in this 300-node network.
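The growth rule just described is easy to sketch in code. The version below is my simplified illustration of the model in the talk: random links of thickness lambda between random points in the unit cube, rejected whenever they pass within lambda of an existing link (links sharing a node are not tested, and link-node collisions are skipped for brevity; all names are mine).

```python
import numpy as np

def segment_distance(p1, q1, p2, q2):
    """Minimum distance between 3D segments p1-q1 and p2-q2
    (standard clamped closest-point computation)."""
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e = d1 @ d1, d2 @ d2
    b, c, f = d1 @ d2, d1 @ r, d2 @ r
    denom = a * e - b * b
    s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > 1e-12 else 0.0
    t = (b * s + f) / e if e > 1e-12 else 0.0
    if t < 0.0:
        t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
    elif t > 1.0:
        t, s = 1.0, np.clip((b - c) / a, 0.0, 1.0)
    return float(np.linalg.norm((p1 + s * d1) - (p2 + t * d2)))

def grow_network(n=40, lam=0.05, attempts=300, seed=0):
    """Keep proposing random links of thickness lam between n random
    points, rejecting any link that collides with an existing one."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n, 3))
    links = []
    for _ in range(attempts):
        i, j = (int(x) for x in rng.choice(n, size=2, replace=False))
        if (i, j) in links or (j, i) in links:
            continue                          # no duplicate links
        if all(segment_distance(pos[i], pos[j], pos[a], pos[b]) > lam
               for a, b in links if len({i, j, a, b}) == 4):
            links.append((i, j))
    return links
```

Running this with thin versus thick links reproduces the crowding effect qualitatively: the thicker lambda is, the earlier the acceptance of new links dies out.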
But if the link thickness increases by a factor of 10 or 100, then you die very early: in this case, in the 300-node network with a link thickness of 0.15, only a few hundred links can be placed, rather than tens of thousands. Now, the question is how to understand this jamming or crowding transition mathematically. It turns out to be a very difficult problem, because you have to deal with the geometry of the network, the geometry of the layout, as well as the fundamental network characteristics. But then we had a lucky break, and I would say it was more than luck, because this is really where the collaboration between the mathematicians and us bore fruit. We realized, and I owe this to the mathematicians, that there is an exact mapping of this particular problem onto another graph-theoretical problem, onto what I would call the metagraph. The mapping works as follows: in the metagraph, each node corresponds to a candidate link in the physical network, say nodes one and five being connected, or one and three being connected, and so on; that is what you see over here. In this metagraph, two link candidates are connected if they would intersect, that is, if together they violate physicality. So these two link candidates violate physicality, therefore we put a metagraph link between them. And it turns out that for this realization of the nodes there are many more violations like that; this is how the metagraph looks, and these are all the candidate pairs that, for the given lambda value, violate physicality. When there is no metagraph link, it means that, say, links 4-6 and 6-7 could exist together without colliding. So this is the mapping. Why is this mapping interesting? First of all, because the metagraph is fully deterministic: once you specify the positions of the nodes and the value of lambda, there is only one metagraph.
And here I am showing you the metagraph for a thinner, smaller lambda, and here for a thicker, larger lambda, and you see that the higher the lambda, the more collisions you have and the more connected the metagraph. What is the real value of the metagraph, and where the real discovery comes in, is that we were able to show that every physical network corresponds to an independent node set in the metagraph. Let me define an independent node set: an independent node set in a graph is a set of nodes none of which are directly connected to each other. So this node is clearly allowed in an independent set, because it does not connect to any of the others; these two, however, could not both be part of it, because they connect to each other, while these two could, because they do not. The independent set has been heavily studied in graph theory for arbitrary graphs, and what we were able to show is that any physical network you can build in this model corresponds to an independent node set in the metagraph. And the maximum independent set, the maximum number of nodes you can color blue without any two of them being connected, corresponds to the jamming transition. So we now have a way to play random graph games on this metagraph and get analytical results for the physical network, which was never possible before. We were able to derive rigorous bounds on the jammed state, that is, on the expected maximum number of links you can place. And we also have formulas that allow us to estimate how this maximum number of links scales with the network size as well as with the thickness of the links, so there is beautiful new scaling, in the sense of statistical mechanics, in this particular case.
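The independent-set picture can be made concrete in a few lines. The sketch below is my own toy illustration, not the rigorous machinery behind the bounds: it takes the metagraph as a conflict table, candidate link mapped to the set of candidates it would intersect, and greedily builds a maximal independent set, that is, one physically realizable set of links; the size of the best such set approximates the jamming point.

```python
def greedy_independent_set(conflicts):
    """Greedy maximal independent set of the metagraph.
    conflicts: dict mapping each candidate link to the set of
    candidate links it would physically intersect."""
    chosen = set()
    # lowest-degree-first: candidates with few conflicts go in first
    for cand in sorted(conflicts, key=lambda c: len(conflicts[c])):
        if conflicts[cand].isdisjoint(chosen):
            chosen.add(cand)
    return chosen
```

For a toy metagraph where candidate 'b' intersects both 'a' and 'c', `greedy_independent_set({'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}})` returns `{'a', 'c'}`: those two links can coexist physically, while adding 'b' would violate physicality. (Exact maximum independent set is NP-hard in general; the greedy rule is only a heuristic lower bound.)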
So I will end here, and I want to show again, as motivation, that a huge amount of data is coming toward us as a community from the brain area, and we need to prepare our network science tools to look not only at the adjacency matrix but also to fully explore the physicality of the network; the physical network framework I just presented is our entry into this particular space. And I want to thank our collaborators, both in my lab, where much of the work was done, and Márton Pósfai, who is now at CEU, as well as the mathematicians who work with us, including László Lovász, with whom I am leading this ERC project. These are the two papers we published in this space, and we are writing a third paper now about the linear physical networks. At this moment I will stop. Thank you very much for your attention, and again, I am deeply honored that the statistical mechanics community has chosen to honor me with this wonderful prize, and I accept it in the name of the community.

Thank you very much, László, for a beautiful talk. We are running a little late, but there is certainly room for one or two questions. Do I see some hands? There was maybe one in the chat; I can read it: when minimizing the energy in the network layout, are links allowed to pass through each other? That is, does the method take entanglement into account, or does it just aim for the best final layout, disregarding the original topology?

Wonderful question; we have both. If you want to reach the native state of the network, we allow the links to tunnel through each other during the molecular dynamics simulations; otherwise you get a hopelessly entangled network with a very, very high linking number. So when we made the statue that you see behind me, as well as the brain layouts that you see in the back, we used tunneling.
However, the simulation can also be used in a mode where tunneling is not allowed, and we do so when we want to ask questions about the linking number, because if you allow tunneling, you decrease the linking number as you decrease the energy. So, depending on the question we want to ask and the problem at hand, we use both modes.

I have a quick question. You showed us beautiful networks of the brains of the fruit fly and the mouse, and I wonder whether there are any similarities or differences between the brains of different species.

Wonderful question. It is too early to say, and one reason it is too early is the history of the data. For C. elegans we have had a physical layout for about two decades. For the fruit fly it emerged only a year ago. And for the mouse we do not have a full map: what you see behind me is not the full map; a full one-cubic-millimeter volume was published just a few weeks ago in Nature, telling us where the neurons are, and the data was released, but it is only one cubic millimeter. What you see behind me is not the full wiring but what we call injection-based wiring: they injected at certain places and saw where the neurons went from there. And of course that is where it ends; we do not have higher organisms, and I do not recommend donating your brain for a map like that, because they would have to slice it at micrometer resolution. So the data has been emerging only over the last few weeks and months, and this is one of the questions we would like to understand as a community and as a group: do we see similarities, and could those similarities be induced by the physicality of the layout? That is a real statistical mechanics question for us, and this is a journey we are part of; we do not have answers yet.

Again, a big congratulations, and let us thank László for this.
So it is certainly also a big honor for me to introduce the next winner, Angelo. We have known each other for over 30 years, and it has been a fantastic pleasure. We started out working together with our late friend Giovanni Paladin, as Eric said, and I spent a fantastic half a year in Rome, visiting your group at least once a year over the many, many years, and we worked a lot on shell models, which you co-founded, of course. At first no one in the turbulence community believed in them, and now many, many people have worked on shell models. With this, I really treasure your friendship, and I give you the floor.

Okay, thanks, everybody, for this occasion. This prize is a big honor for me; it is a big honor to see my name together with the name of Peter Grassberger, and let me say I am very proud of this. And let me say that I consider this prize not only for me but for a community, of which I am part, because I am pretty sure that science is a collective endeavor that rests on many things: starting from the giants of the past, in my case people like Boltzmann and Richardson; then, to become a good scientist, you need a good advisor or mentor, in my case Gianni Jona-Lasinio and Giorgio Parisi; and then you have to find good collaborators, starting for me with Giovanni, and then many others. I am very happy to see at least three of these people in this audience, Mogens, Erik Aurell and Roberto Livi, but there are many, many others. As you see, the title of my talk features the name of this gentleman, Lewis Fry Richardson, and I am pretty sure that this scientist is not very well known. I do not know why, because he was really a giant, but there is a reason: he was a very serious and honest person who spent his life working largely in isolation, and in spite of the very important contributions he introduced, he is not very well known.
For example, here is a list of his contributions. The famous question "How long is the coast of Britain?" is not by Mandelbrot; it is by Richardson. Richardson had the first serious, the first clear idea of self-similarity, studying turbulence. He asked the apparently crazy question: does the wind have a speed? This is related to the fact that the velocity field is non-differentiable. He was the first to consider non-Gaussian diffusion, in his work on relative dispersion, proposing to go beyond the Fick equation. He is the father of modern weather forecasting. He had the first intuition of parallel computation: he imagined a theater with 64,000 computers working together, and at that time "computer" meant a person doing computations. In addition, he was a pioneer of the application of mathematics in the social sciences, because he was the first, at least as far as I know, who tried to apply mathematics to the psychology of human conflict. And he was a pacifist; he had a very important role in the pacifist movement. During the First World War he was an ambulance driver on the French front, and during the war he wrote his masterpiece, Weather Prediction by Numerical Process. He was so deeply honest and conscientious that at least twice he resigned from his position because he did not want to have interactions with military organizations. And at a certain point he decided to stop his activity on turbulence, because during the war the study of turbulent dispersion was being used for gas warfare. He spent his last years working only on the application of mathematics to psychology and conflict. So in this sense too he was a pioneer, and his ideas are still alive in the pacifist movement. Just last year a book was devoted to his activity on this subject, which I show you here.
Okay, so now my talk will be about two topics in turbulence, and both of these topics started from ideas of Richardson. The first is the fractal model for turbulence. The second is the generalization of diffusion associated with his idea of relative dispersion. Okay, so let us start with the story of the coastline; I will spend just a few seconds on this. As anticipated, he was the first to wonder how long the coast of some irregular object is, and he introduced what is now called the box-counting method: just count how many segments of a given size you need to cover a certain curve. If you translate this into a length, you realize that if the object is smooth you have convergence to a constant, which you call the length, and if it is not smooth you have a divergence as the size of the covering segment decreases, with an exponent alpha. Translated into modern terms, this exponent is related to the fractal dimension. In a purely empirical way, he computed this for the coastline of Great Britain, the border of Germany, and so on, and found that alpha depends on the country you consider. And there is a very famous verse, very famous among people working in turbulence, a sort of poem inspired by a satirical rhyme, about the turbulent eddies: "Big whirls have little whirls that feed on their velocity, and little whirls have lesser whirls, and so on to viscosity (in the molecular sense)." For people working in turbulence this is very, very impressive, because it is really the idea of the energy cascade, which was formulated something like 20 years later by Kolmogorov. And here is a very classical example of a self-similar structure: a function introduced by Weierstrass in his classical work on series, and it is an exercise
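Richardson's box-counting recipe is easy to state in code. The sketch below applies it to the middle-thirds Cantor set, a standard self-similar set whose dimension ln 2 / ln 3 is known exactly (my illustrative choice; Richardson worked with coastline data). It counts occupied boxes N(eps) at scales eps = 3^-k and fits log N(eps) = -D log(eps) + const; integer arithmetic keeps the box assignment exact.

```python
import numpy as np

def cantor_left_endpoints(level):
    """Integer numerators m (point = m / 3**level) of the left endpoints
    of the level-`level` middle-thirds Cantor set intervals."""
    ms = [0]
    for _ in range(level):
        ms = [3 * m for m in ms] + [3 * m + 2 for m in ms]
    return ms

def box_dimension(ms, level, ks):
    """Richardson-style box counting: occupied boxes N(eps) at scales
    eps = 3**-k, then a log-log fit giving the dimension D."""
    counts = [len({m // 3 ** (level - k) for m in ms}) for k in ks]
    eps = [3.0 ** -k for k in ks]
    slope, _ = np.polyfit(np.log(eps), np.log(counts), 1)
    return -slope
```

For a smooth curve the same procedure gives D = 1, i.e. the measured "length" converges to a constant; the Cantor set instead yields D = ln 2 / ln 3, about 0.63, which is Richardson's divergence exponent in modern dress.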
I give to my students: the function is continuous, and the series converges, but the series of derivatives does not converge. If you look, you realize that this function is not differentiable at any point, and it has a self-similar structure, as you can see by inspection. The reason is that the difference between the function at the point x and at x plus delta x, instead of being proportional to delta x, is proportional to delta x to some power h. This is the definition of what mathematicians call the Hölder exponent: if the curve is smooth, h is one; if it is not smooth, h is smaller than one. And if you translate this into the fractal dimension of the graph, there is the relation D = 2 - h. You can compute this; it is a quite simple exercise, at least for a good student. But now, to turbulence. Let me spend just one minute on turbulence. Turbulence is traditionally a topic for engineering, or for people working in applied physics, not for theoretical physicists. But there was a parenthesis, which I like to call the golden age of turbulence in statistical physics, roughly the last two decades of the previous century, when turbulence was fully considered a problem of theoretical physics. This is maybe my personal view, and I do not pretend to historical precision, but it was due mainly to three persons. These three eminent scientists were able to collect around them a group of people, young at the time, including Mogens and Erik and so many others. So there was a group spread across Italy, France, Denmark, the United States, and other places, and for a certain period turbulence was fully studied from the point of view of statistical mechanics. But now this period is finished, and turbulence has gone back to the traditional fields, roughly speaking.
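The Hölder scaling of the Weierstrass function can be checked numerically. For W(x) = sum_n a^n cos(b^n x), averaging over x kills the cross terms, so the mean-squared increment has the closed form sum_n a^(2n) (1 - cos(b^n delta)); fitting its scaling in delta recovers h = ln(1/a)/ln(b), and then D = 2 - h for the graph. The parameter choices below (a = 1/2, b = 3, giving h = ln 2 / ln 3) are mine, for illustration.

```python
import numpy as np

def weierstrass_s2(delta, a=0.5, b=3.0, n_terms=25):
    """Mean-squared increment <(W(x+delta) - W(x))^2>, averaged over x,
    of the truncated Weierstrass series W(x) = sum a^n cos(b^n x)."""
    n = np.arange(n_terms)
    return float(np.sum(a ** (2 * n) * (1 - np.cos(b ** n * delta))))

def holder_exponent(a=0.5, b=3.0, ks=range(2, 9)):
    """Fit S2(delta) ~ delta^(2h) over delta = b**-k; returns h."""
    deltas = np.array([b ** -k for k in ks])
    s2 = np.array([weierstrass_s2(d, a, b) for d in deltas])
    slope, _ = np.polyfit(np.log(deltas), np.log(s2), 1)
    return slope / 2
```

With these parameters the fit returns h close to ln 2 / ln 3, about 0.63, so the graph of the function has dimension D = 2 - h, about 1.37: a curve rougher than a smooth line, exactly the student exercise mentioned above.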
And so, what is the problem? The problem is that you want to understand, with models of statistical mechanics, the scaling structure of turbulence. Turbulence is a complicated problem, and you have some chance only if you decide to concentrate on a specific aspect, and the specific aspect here is the scaling. The first important contribution, based on the ideas of Richardson, is due to Kolmogorov, in an important work of 1941 known in the turbulence jargon as K41. This is the result: the velocity difference δv at distance ℓ is proportional to ℓ^{1/3}. And this is valid for ℓ not too large and not too small, because if ℓ is too small, of course, H must be one. It holds in the so-called inertial range: the inertial range means that ℓ must be small with respect to capital L, where capital L is the size of the system, say the pipe, and large with respect to η, where η is the so-called Kolmogorov length, which decreases with the Reynolds number. So this is the prediction of Kolmogorov, and it is based on two ingredients: one is a phenomenological consideration, but there is also an exact result, what is called the four-fifths law, which one can derive from the Navier-Stokes equations; the combination of the exact result and the phenomenological argument suggests H equal to one third. Okay. And this is, said in statistical mechanics terms, a sort of mean field. And Landau stressed that this cannot be completely correct: there is a famous footnote in the fluid mechanics book of Landau where, in two lines, he says, yes, nice work has been done by academician Kolmogorov, but this cannot be completely correct, because it is necessary to take into account fluctuations. Okay.
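For reference, the K41 statements just described can be written compactly (standard textbook relations, not verbatim from the slides; ε is the mean energy dissipation rate per unit mass):

```latex
% dimensional K41 prediction, valid in the inertial range
\delta v(\ell) \sim (\varepsilon\,\ell)^{1/3}, \qquad
\eta \ll \ell \ll L, \qquad \eta \sim L\,\mathrm{Re}^{-3/4};
% the exact "four-fifths" law, derived from the Navier-Stokes equations
\big\langle \delta v_{\parallel}(\ell)^{3} \big\rangle
  \;=\; -\tfrac{4}{5}\,\varepsilon\,\ell .
```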
And so, okay, this is Kolmogorov, and I like to repeat here the attitude of Kolmogorov about turbulence, as reported by Yakov Grigorevich Sinai in the preface of a book devoted to the centenary of Kolmogorov, which Roberto Livi and I edited. It is very nice, because you see that Kolmogorov was a great mathematician, but above all he was a great scientist, and he was able to understand where it was necessary to use mathematics and where not. Sinai asked Kolmogorov about the origin of this idea, and the reply was: son, I studied experimental data; don't expect some stroke of mathematical genius, there was no particularly interesting mathematical aspect. It sounded a bit strange, but if you look at the 1941 work of Kolmogorov there is indeed no particular mathematics; there is some mathematics, which can be followed by any student in the university. Okay, so what about the remark, the criticism of Landau? The criticism of Landau, translated into technical terms, is the following. You look at the scaling of what are called structure functions: you take the difference of velocity at a certain distance, and you consider its moment of order p. If the argument of K41 is correct, you have rigid scaling, with exponents ζ_p equal to p over three. If this is not true, it means that Landau was right. And, actually, Landau was right: it is not true, so it is necessary to take into account these fluctuations. Historically, the first attempt is due again to Kolmogorov, in the revised theory of 1962, where he introduced the famous lognormal approximation, with an extra parameter μ which takes the fluctuations into account.
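A quick sketch (my addition) of the two predictions for the structure-function exponents, using the standard 1962 lognormal formula; it also exhibits the pathology mentioned next in the talk, namely that the lognormal ζ_p eventually decreases with p.

```python
import numpy as np

# Structure functions S_p(l) ~ l**zeta_p.
# K41:        zeta_p = p/3
# lognormal:  zeta_p = p/3 + mu * p * (3 - p) / 18   (mu ~ 0.2 empirically)
# The lognormal formula is pathological: zeta_p eventually *decreases*
# with p, which is inconsistent (zeta_p must be non-decreasing for a
# velocity field with bounded increments).

mu = 0.2
p = np.arange(0, 31)
zeta_k41 = p / 3
zeta_ln = p / 3 + mu * p * (3 - p) / 18

# zeta_3 = 1 is preserved by construction (the exact four-fifths law)
assert np.isclose(zeta_ln[3], 1.0)

p_max = p[np.argmax(zeta_ln)]
print(f"lognormal zeta_p peaks around p = {p_max} and then decreases")
```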
Unfortunately, the lognormal distribution is pathological in many respects; this is well known in probability. It sounds astonishing, because Kolmogorov knew very well that the lognormal distribution is pathological, but, I don't know why, in spite of this he proposed this stuff; maybe you can see it as a sort of approximation, I don't know. And I had no chance to discuss it with Kolmogorov: I visited Moscow in 1985, but he was very old, and it was impossible to speak with him, and not only for me but for the rest of the world. So now let me move to another time, around 1983. The idea, which started from Giorgio Parisi and was then developed by others, was to do something similar to the lognormal approach, but in a mathematically consistent way. In some sense this is not a theory; it is not fair to say it is a theory. It is a model, but a model which is able to produce explicit predictions using just a single ingredient, which one can fit from data: we introduce the model, from the data we fit one function, and then the rest is computation, and then you can check. And so you have some consequences, like for example the probability distribution function of the gradients of the velocity, the probability distribution function of the acceleration, and the existence of the intermediate dissipation range; don't worry if you don't know what it means. This last one is a consequence of the multifractal model due to Frisch and Vergassola, and also with Mogens Jensen we worked a bit on this stuff. Now, in a nutshell, the idea of the multifractal model is the following: consider the Navier-Stokes equations.
Just a remark: if you perform this transformation, rescaling x, t and the viscosity in this way, you see that the Navier-Stokes equations, okay, you have also to rescale the forcing, are invariant under this rescaling for any value of h. Since this is true for any value of h, you can wonder what is the proper value, and a natural candidate is h equal to one third, due to the idea of Kolmogorov. But we know that this cannot be completely correct, because of the Landau remark. So the idea of the multifractal model is the following: instead of considering a global scaling invariance, you consider a sort of local scaling invariance; this terminology is not completely correct, but roughly speaking it is local, in the sense that when you look at the difference of velocity you consider also the point in space where you are, and so the exponent h can depend on x. You divide the physical space, R³, into a set of interwoven fractals, each characterized by a certain scaling singularity h, and each fractal has a certain fractal dimension D(h). And if you translate this into the probability of observing a certain exponent at a certain scale, you obtain this law. So, if you are familiar with the large deviation theory discussed this morning, you will not be surprised: this can be associated to something like a large deviation theory. In the work of Kadanoff and so many others, Halsey I guess was the first, the Chicago group used the terminology f(α).
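For reference, the rescaling symmetry and the multifractal ansatz just sketched read, in their standard form (my transcription, not verbatim from the slides):

```latex
% scaling symmetry of the Navier-Stokes equations (any h)
x \to \lambda x, \qquad v \to \lambda^{h} v, \qquad
t \to \lambda^{1-h} t, \qquad \nu \to \lambda^{1+h}\nu ;
% local (multifractal) scaling, and the probability of exponent h at scale l
\delta v(\ell, x) \sim \ell^{\,h(x)}, \qquad
P_{\ell}(h) \sim \ell^{\,3 - D(h)} .
```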
We use the notation D(h); there is a correspondence, a translation between the two notations, which I don't remember, but it is the same stuff. Of course, this D(h) cannot be completely arbitrary: it must be convex, and then, for some physical reasons, you have also this inequality, so you have some constraints. Then, apart from the mathematical details, you can do a simple steepest-descent estimate, and you find the link between the scaling exponents of the structure functions and D(h): you have a Legendre transform. Okay. Of course, nobody is able to compute D(h) from first principles. And so, okay, this is what I said before: this is essentially a version of large deviation theory. And I confess that at that time I was very ignorant of it, and even Parisi did not know this stuff very well. So we rediscovered many things by ourselves, and we were very lucky to avoid serious mistakes. I guess also the Chicago group did not know the story very well. However, this was important at that time in the community of statistical physics, because the dominant paradigm was the paradigm of phase transitions and critical phenomena, where you have very few exponents, with some relations among them, and all the scaling properties can be described in terms of a few exponents. Here it is evident that this is not true: there are not two or three exponents, there is a whole function, so in some sense you have an infinite hierarchy of exponents. And at that time some people did not accept this; I remember that some old colleagues were a bit disturbed by this stuff.
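The steepest-descent link between D(h) and the exponents can be sketched numerically (my addition; the parabolic D(h) and its parameters are illustrative choices, tuned so that the exact constraint ζ_3 = 1 holds):

```python
import numpy as np

# Numerical Legendre (saddle-point) computation of
#     zeta_p = min_h [ p*h + 3 - D(h) ]
# from a given singularity spectrum D(h).  Here D(h) is the parabola of
# the lognormal model, used purely as an illustration.

mu = 0.2                        # intermittency parameter, mu ~ 0.2
sigma2 = mu / 9.0
h0 = 1.0 / 3.0 + 1.5 * sigma2   # position of the maximum of D(h)

h = np.linspace(-0.5, 1.5, 20001)
D = 3.0 - (h - h0) ** 2 / (2.0 * sigma2)

def zeta(p):
    """Structure-function exponent from the multifractal formula."""
    return np.min(p * h + 3.0 - D)

# analytic result for this D(h): zeta_p = h0*p - sigma2 * p**2 / 2
for p in (2, 3, 4, 6, 8):
    print(f"zeta_{p} = {zeta(p):.4f}   (K41 value: {p/3:.4f})")
```

Note how ζ_3 = 1 is reproduced, while ζ_p for p > 3 falls below the K41 straight line p/3, which is the signature of intermittency.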
So, as I said before, nobody is able to compute D(h) from the Navier-Stokes equations; and the lognormal approximation, if you translate it, corresponds to a parabolic D(h). So how can you go on? It is necessary to construct a model with some physical intuition. Our model was called the random beta model; don't ask me why, I don't remember, it was almost 40 years ago. With some intuition, and looking at experimental data, we derived this expression. Okay. So it is not a theory; it is something in between, a model built from physical considerations plus a fit of the data. And this is the comparison: these are experimental data, this curve is our model, and there is another model too, but it is not very important. If you plot the two D(h), the difference between the models is very small. But for sure it is clear that Kolmogorov is wrong, because Kolmogorov is the straight line; you see that even for moderate p the difference is small but clearly there. Now, once we have an explicit expression for D(h), you can do an exercise, an exercise that is a bit longer. This work was in 1991, if I remember correctly, with Benzi, Biferale, Vergassola, with Giovanni Paladin, I don't remember all the others, where we computed the probability distribution of the gradients of the velocity. Okay, so you can compute: this is the Gaussian approximation, then you have the Kolmogorov approximation, this is the homogeneous fractal, and this is our prediction. Look, in our prediction we just use this formula; we do not change the parameters again. Okay, so we have this.
Very good agreement. What is astonishing is that, in order to see the difference with Kolmogorov, it is necessary to go to really large deviations, something like six or seven standard deviations. But at the end you see the difference. I guess the most astonishing behavior is this one: the distribution of the acceleration of a Lagrangian particle. These are data from simulations by Guido Boffetta and others, I don't remember the other authors, and the line is the multifractal prediction, where the computation is not straightforward. So please, look at the values: forty standard deviations. Really large deviations, not five or six: forty. So you can believe that there is something true in the model. Now, let me discuss the second topic. The topic is relative diffusion, and relative diffusion is really important. Relative diffusion means that instead of looking at one Lagrangian particle, I look at two particles, and I am interested in the difference between the two particles. Why is this important? Imagine you have a cloud of contaminant. What is important is not only the position of the cloud, the center of mass, but also the size, and the size is associated to the distance between two particles. This is the reason. Richardson, our hero, wrote one very important paper on this. Of course, if there is no turbulence, this is nothing but the usual Fick equation; you just introduce a factor two in the diffusion coefficient. But since the two particles are not independent, you have to take into account the interaction, interaction in the sense that they feel the same velocity field, and Richardson proposed this equation, a Fokker-Planck equation where now the diffusion coefficient depends on the distance between the two particles. Okay.
And, since he was a genius, I don't know how, just looking at some data, very poor data, he proposed the scaling K(r) proportional to r^{4/3}. Now, for people who have studied a bit of turbulence, this looks obvious, because if you know the Kolmogorov theory it is obvious; but this was 20 years before K41. So now it sounds almost trivial, but at the time it was not trivial at all. Okay, this is the equation, and from this equation you obtain what is called anomalous diffusion: the mean square separation increases like t cubed. This is called the Richardson law. Okay, so this is the theory, and you can wonder what happens if you take into account intermittency. If you do, you can expect that if you look at the moments of the distance, the moment of order p does not scale simply like t to the three halves times p: there is a correction, and the exponents can be written in terms of D(h). So you repeat the exercise with the multifractal; I don't remember the other people involved, maybe someone else was important in this, but I am not sure. And at the end we arrived at this expression: you take the D(h) fitted before, you repeat the exercise, and you see that the agreement is very good. This is a very good result. But there is another important person in turbulence, a big boss of turbulence: Batchelor. Batchelor was probably one of the most eminent scientists in turbulence after the war; he was Australian, if I remember correctly, and he was professor in Cambridge for many years. And he proposed an equation similar to the equation of Richardson: it is still a Fokker-Planck equation, but now the diffusion coefficient is a function of time.
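The t³ law follows from K(r) ∝ r^{4/3} by a one-line dimensional check (standard argument, written out here for reference, together with the Richardson equation in its usual radial form):

```latex
\frac{d\langle r^{2}\rangle}{dt} \sim K(r) \sim \langle r^{2}\rangle^{2/3}
\;\Longrightarrow\;
\langle r^{2}\rangle \sim t^{3};
\qquad
\partial_{t}\, p(r,t) \;=\;
\frac{1}{r^{2}}\,\partial_{r}\!\left[\, r^{2} K(r)\,\partial_{r}\, p \,\right],
\quad K(r) \propto r^{4/3} .
```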
With some argument he proposed that K(t) is proportional to t squared, and if you do the exercise you obtain the same result as Richardson. So you have two models, from two important people, which give the same scaling. But if you do the exercise and look at the probability distribution, you realize that the probability distribution of Batchelor is Gaussian; this is obvious, because his equation is just a Fick equation with a time-dependent coefficient, so it gives a Gaussian. On the contrary, the probability distribution you obtain from Richardson is a stretched exponential. So you can ask who is the winner. And there is a very astonishing work by Boffetta and Igor Sokolov, very precise, accurate numerical simulations, and from the results you see in an absolutely clear way that the data are in perfect agreement with Richardson; there are small differences due to intermittency, but very, very small. So this is not just a curiosity, and I will discuss why it is important: the importance is associated to the second part of my talk. I don't know how much time I have. Okay, the second part is associated to the Lyapunov exponent and the generalization of the Lyapunov exponent. The Lyapunov exponent is a well-known concept in chaos, maybe the most popular and important: you have an evolution equation, and you define the Lyapunov exponent through the growth rate of δx, where δx is the difference between two trajectories. But what is important here is that you have to take two limits: first δx(0) goes to zero, which mathematically you do automatically when you consider the tangent vector, if you are familiar with this, and then you take the limit of t going to infinity. Okay, this is the concept. Taking these two limits is necessary from a mathematical point of view, because in this way, for example, you have that
the Lyapunov exponent does not depend on the norm, and you can use it for some interesting and proper mathematical purposes. But the trouble comes when you translate the Lyapunov exponent into physical systems. For example, imagine that you want to understand the predictability, that is, the time necessary to go from a certain initial precision δ to a certain maximum tolerance, capital Δ. You obtain this formula. Okay, but now imagine that you have a situation like turbulence. Turbulence is not like the Lorenz model or similar model systems with three variables; in three variables, the Lyapunov exponent is the beginning and the end of the story. But imagine a system that is very big, like turbulence: in turbulence you have many different degrees of freedom, from large scales to small scales. And it is not difficult to realize that the Lyapunov exponent is associated with the smallest active scale, that is, with the behavior at the so-called Kolmogorov length. If you assume this and do some exercise, which was done by Ruelle in the framework of the Kolmogorov theory, you obtain that the Lyapunov exponent scales as the square root of the Reynolds number. Then you can repeat the exercise with the multifractal; after some years, we repeated the exercise with Mogens and Giovanni, and of course somewhere this D(h) appears: if you put in the D(h), instead of one half you get a slightly different exponent, something like 0.46. But the fact is that these results are completely irrelevant.
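In formulas, the double-limit definition, the predictability time, and Ruelle's estimate read (standard expressions, added here for reference):

```latex
\lambda \;=\; \lim_{t \to \infty}\;\lim_{|\delta x(0)| \to 0}\;
\frac{1}{t}\,\ln\frac{|\delta x(t)|}{|\delta x(0)|},
\qquad
T_{p} \;\simeq\; \frac{1}{\lambda}\,\ln\frac{\Delta}{\delta_{0}},
\qquad
\lambda \;\sim\; \tau_{\eta}^{-1} \;\propto\; \sqrt{\mathrm{Re}}\,,
```

where τ_η is the turnover time of the smallest (Kolmogorov-scale) eddies.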
They are just a mathematical exercise, maybe a bit difficult, but why irrelevant? Because now you have to translate this mathematical result into physical terms, and in physical terms this result is relevant only if you are interested in perturbations much smaller than what is called the Kolmogorov velocity, which goes to zero with the Reynolds number. This means something like a fraction of a millimeter per second, which means nothing at the practical level. So if you go to speak with somebody in engineering or geophysics and you say, I have this result for the Lyapunov exponent: congratulations, good for you, but for us it is completely irrelevant. So what is important is that it is necessary to leave the mathematically perfect world, where everything is wonderful and you have results like the independence from the norm, and to dirty your hands by introducing, I don't remember who invented this name, I guess it was with Aurell and Boffetta, the finite-size Lyapunov exponent. The idea is to generalize the Lyapunov exponent at a certain scale, and it is the following. You fix a certain scale, a certain tolerance δ, and then you wait until the time at which the difference arrives at δ multiplied by a factor r; say, if r equals two, you have a doubling time. You introduce this doubling time, which depends on the initial tolerance, and then, please, this is the algorithm for the method. The method, for people who are familiar with dynamical systems, is a sort of nonlinear analogue of the Benettin et al. stretching method from their first paper. So there is an explicit rule, and what is important is that in the limit of δ going to zero one obtains the usual mathematical Lyapunov exponent. I think this is fine from a theoretical point of view, but from a physical point of view what is important is what happens at finite δ.
Sorry, what is important is that, if you compute numerically, or with some argument, you obtain that the finite-size Lyapunov exponent in the inertial range behaves like δ^{-2}, and this is in agreement with a phenomenological argument of Lorenz. Here, for example, you can appreciate the difference. This is work by some people on a huge geophysical model. You see the finite-size Lyapunov exponent and the prediction, as a function of the tolerance: when the tolerance is very small, the curve is flat, and this is the usual Lyapunov exponent; but in the inertial range you have this δ^{-2} behavior. And here you see the difference for the predictability time: if you use the naive formula, you get this predictability time, which is extremely small; the true predictability time is this one, because it takes into account the nonlinearity. Look, there are several orders of magnitude of difference. And now the people in geophysics are happy, because a naive result predicting essentially no predictability means nothing to them. Okay, now come back to Richardson. Here you realize that, in some sense, Richardson had the same idea, because you can associate to the diffusion at a certain scale a sort of scale-dependent Lyapunov exponent, written in this way. The smart idea of Richardson was to use the distance, and not the time, as the privileged quantity; on the contrary, for Batchelor the privileged variable was the time. So this is similar to the distinction between the Lagrangian and the Eulerian description. And apparently you could say, okay, this is just a technical trick: why is the finite-size Lyapunov exponent better than the alternative?
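The gain in predictability time can be made quantitative: a standard estimate (added here for reference) integrates the inverse FSLE over the scales,

```latex
T(\delta_{0} \to \Delta) \;\simeq\; \int_{\delta_{0}}^{\Delta}
\frac{d\delta}{\delta\,\lambda(\delta)};
\qquad
\lambda(\delta) \sim c\,\delta^{-2}
\;\Longrightarrow\;
T \;\sim\; \frac{\Delta^{2} - \delta_{0}^{2}}{2c}\,,
```

so the predictability time is dominated by the largest scales and stays finite even as δ₀ → 0, unlike the naive estimate (1/λ) ln(Δ/δ₀), which diverges only logarithmically but with the huge λ of the smallest scales.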
The reason, and I have no time to enter into the details, is that with the finite-size Lyapunov exponent it is possible to avoid the contamination between effects at different scales which happen at the same time. Let me explain with the next picture, which is the last one. This is the relative dispersion, from very accurate numerical simulations in 3D by Boffetta and collaborators; very accurate means state of the art, a really huge resolution, I don't remember the exact number. So you see the result, which depends on the initial distance of the pair. You know that the Richardson behavior is the correct one to expect, but you see that in the simulation, before the turbulent regime starts, there is the chaotic exponential behavior, and the contamination between these two regimes introduces spurious behavior, so you understand nothing: for this value of the initial separation you have the correct behavior, but for the others it is completely wrong. But why? The reason is that at a given time some pairs of particles are still in the dissipative range while others are already in the inertial range; when you put them all together you have a contamination, and you see nothing. Of course, if you had ten decades of inertial range there would be no problem; but you don't have ten decades. Okay, now take the same data, but treat them with the finite-size Lyapunov exponent, and you have this stuff: you have perfect scaling, this behavior appears, and this is the prediction.
The Richardson part, and the flat part, which is the usual Lyapunov exponent. The advantage is very evident. For example, with Erik we applied this method to data from balloons: there are people who study balloons in the atmosphere, and it is true that the data were completely squeezed; they gave them to us for free, and by using this method we were able to extract a lot of information, but only by using this method. Okay, so now I guess my time is at the end. It is an obvious remark that in a complicated situation, like complexity and turbulence, it is necessary to put together all your knowledge: mathematics, understanding of phenomenology, simulations, smart colleagues, and so on. The open problems. Apparently, the first obvious open problem is to derive D(h) from first principles. Frankly speaking, I am not sure that this is very important; I guess, according to Kolmogorov, at least the Kolmogorov of the Sinai anecdote, maybe this is not so important. Let me mention that in this direction there has been some important progress, not for the problem of turbulence but for the problem of the so-called passive scalar: when you study not the equation for the velocity field, but the equation for the temperature or a concentration, it is possible to obtain the anomalous exponents in a clean way. This is an interesting problem, but rather far from turbulence proper. From my point of view, what is important is to construct effective models. This is also part of what I called the golden age of turbulence, because in that period some people, including the people already mentioned before, started to introduce models,
not too simple, not like the Lorenz system, but coupled maps or shell models and so on. With these models it was possible to study turbulence in a systematic way, numerically, using ideas from dynamical systems. I have the feeling that this line is now a bit out of fashion; there is not much interest in it. And finally, the idea of extending the finite-size Lyapunov exponent to the treatment of data, because, imagine, for example, climate data and so on. Here the first obvious idea is just to apply the Takens approach, but it is well known that if the dimension is large, where large means five, not ten thousand, the Takens approach does not work. So this is really my dream: to try to introduce some effective models, repeating our work with Mogens on the shell models and so on, and to try to generalize the finite-size Lyapunov exponent also for applications to real data. Thanks for your attention. Thank you very much for a beautiful talk. Any questions? No, it is not a question, just an appreciation: wonderful, wonderfully clear. Okay, I had of course planned to ask you: what is the biggest problem in turbulence? Is it what you said here, to derive D(h)? And I think that is not really a serious problem, because, apparently, I am pretty sure that this is beyond human possibility. Exactly, I agree. Then what? Well, maybe one possibility is to follow some ideas of Kraichnan. Kraichnan, for example, was able to formulate a statistical theory,
not for the Navier-Stokes equations, but for modified Navier-Stokes equations in Fourier space, with random couplings, or in infinite dimensions, something like that, in the spirit of people working in statistical mechanics. Maybe this is a possibility, I don't know. Yeah, but it seems that very few people are interested in this. Roberto Benzi insists on saying that at least the shell model we should solve by the end of our career. Why? Because the shell model has essentially the same difficulty as the Navier-Stokes equations: you are free from some tensorial structure, but apart from the tensorial decorations, the trouble is still there; that's true. I was going to tease you also a little bit: did you ever tell Benoit Mandelbrot that Richardson is the father of fractals? Well, Mandelbrot was a smart guy, and he was very accurate, and he mentioned Richardson, he cited him; so you cannot say that Mandelbrot was unfair. Okay. Finally, let me suggest some books. This book by Ashford is the most complete one about Richardson. Then another book, this one is very recent, about Richardson's activity in the pacifist movement; a very nice book. And this one is by, I guess, the greatest living expert on Richardson, a meteorologist, who wrote this book as a sort of tribute to Richardson's idea of weather forecasting. This is the masterpiece of Richardson, this is the paper on anomalous diffusion, and this is my small tribute to Richardson, just in a minor way, my new book. Thank you. Thank you, Angelo, and big congratulations again. It is unfair to stop prize winners' talks, but we are still having half an hour of break, until 4:30. Thank you very much.
I think that he has shown that real networks are actually formed by distinct layers of interactions, and he has exploited the mathematical formalism of simplicial complexes and hypergraphs, going beyond pairwise interactions, and I'm sure in this talk he will tell us more about that. He works on dynamical processes taking place on these networks, including random walks, reaction-diffusion processes, synchronization, epidemic processes, and so on. He is also interested in applications of network science to social and transportation systems, as well as to the human brain. So it is now time to ask Federico to come to the stage; I think that's worth a clap once you're here. Please come here, and we do social distancing, like that. And please take both this trophy as well as the document, and I should read this document, which has the following inscription; this is the official EPS citation for your prize: Federico Battiston is honored for his outstanding work on nonlinear dynamics and emerging collective phenomena in multilayer and higher-order networks, including diffusion, synchronization, social and evolutionary processes. Congratulations. Great. I now ask a member of the board of the EPS Statistical and Nonlinear Physics Division to say a few words. Simply, on behalf of the board, I would like to congratulate you for this well-deserved prize. As a member of the board, one of the nicest jobs we have is to analyze the nominations we receive for this early career prize, and to realize how much brightness and excellence there is in these, by definition necessarily short, curricula. So, looking at the work that you have done, we can probably say that the future of our field is in very good hands. So congratulations for the prize, best wishes for your future achievements, and remember: the best is yet to come. If you go down, I will call you back later. Okay. Now, again in alphabetical order, we continue with Caterina.
Caterina did her undergraduate studies at the University of Padua, where she was looking at the non-equilibrium Brownian motion of particles in contact with heat baths. Then she was a graduate student in Orsay, from October 2012 to September 2015, and her PhD was supervised by Silvio Franz and Satya Majumdar; so there is a connection to the senior prize winner of two years ago, Satya, which is also quite nice and interesting. Then, as a postdoc, she went to the Santa Fe Institute, a wonderful decision and a beautiful place to be, from 2015 to 2017. And since 2018, Caterina has been working at Cyber Valley, at the Max Planck Institute for Intelligent Systems in Tübingen, Germany, where she is a group leader supervising quite a lot of researchers on complex networks; her specialities are network inference and routing optimization. Caterina has made important contributions to the statistical physics of random walkers, such as calculating exactly the mean number of distinct sites visited by a random walk on random graphs; from that behavior and some calculations she could get information on the topology of the underlying graphs. She is also interested in stochastic search algorithms, community detection and link prediction; layer independence in multilayer networks is a highly cited paper of hers, and she is, as said, also very interested in routing optimization on networks. So, congratulations, Caterina, please come to the stage. So, again, if you come over here, we do this little bumper award. Great, and here is your trophy, please pick it up. And here is your official citation document, and it is time for me to read what is written on it; this is again the official EPS text.
Caterina De Bacco is honored for her outstanding work on the statistical physics of random walkers on random graphs, stochastic search processes, routing optimization on networks and effective algorithms for community detection. So, Eric, I will also speak to Caterina, on behalf of the board but also on behalf of all the co-PIs of the European network NETADIS, which I believe paid for your PhD studies, and that includes me. Our main PI, Peter Sollich, is not here; your supervisor Silvio Franz is not here either, but I heard from him and he sends his congratulations; and Matteo Marsili is in the audience, and I think that is everybody here present. We are very happy, from the old NETADIS crowd, that one of our, let's say, collectively supervised students — I believe you were here in Slovenia at an event organized by Matteo and went rafting some years ago, so that also helped — has progressed so quickly. So, from the board, from everybody, from the whole community: congratulations. You have shown lots of courage in your career, so please go on doing that. Do new things. Yeah, all the best. Okay, great. What we want to do now is have a first picture of the two early-career prize winners together, at a nice social distance of course; maybe you'll take your trophies and we make a photograph. Eric, don't run away, you should be on that photograph as well. So maybe here, like that. Yeah, I don't know how it looks, please direct us. You keep this. Oh, there's another one; then I go here. Excellent. No social distances, please. Okay. You have to come closer, closer. I stand in front. Okay. I come down, very good, down here. Okay. Good. Excellent. And then, where is our senior prize winner? Angelo, are you there? Oh, there he is. Please, Angelo, in the middle with us. And perhaps as a background screen, László as well, at least his photograph.
Yeah, can we have László full screen on it? Oh, there he is. Oh, wow. Okay, excellent. So this is the joint photograph now of everybody involved. Fantastic. Thank you very much. So, where is Mogens? Mogens, you take over this session as well, I think. Okay, so it's a pleasure to introduce the two young prize winners. Thank you very much. What's the length here, Christian, roughly? 45 minutes, including questions. I will keep track of time for you. Okay, perfect. First of all, thank you very much, Christian, and I would also like to thank all the members of the Statistical and Nonlinear Physics Division of the EPS for this award. Thanks also to all the organizers, especially the local organizers, for this very nice conference here in Trieste; it is the first time I come to Trieste for work. And can you hear me well in the audience? Yes. Okay. So, I'm very honored to share this award with Caterina, on an occasion in which, for the senior award, we have László Barabási, whose work was of course very fundamental for my career, and Angelo Vulpiani was also awarded on the same day; indeed I was a former student of his in the course of statistical mechanics in Rome, several years ago. In my talk, I also wanted to provide a little bit of perspective on network science and why it matters for the field of complexity, and also for the field of physics. So what I'm going to do is to talk a little bit about a topic which has accumulated a lot of interest, not only from me but from the wider research community, in the last couple of years. This is the topic of higher-order interactions in complex networks, and I'm going to start from the data, from some empirical observations, and go all the way down to dynamics and processes on networks.
In order to explain why higher-order interactions matter for complexity, I always like to take a step back and refer to this masterpiece published 50 years ago in Science by Philip Warren Anderson. And this is because, in years when the most interesting experiments were considered to be those trying to find the tiniest particles of which everything is made, in years when the reductionist hypothesis was completely dominant, Anderson was already warning us that typically the behavior of large and complex aggregates cannot be understood as a simple extrapolation of the properties of a few particles. So what Anderson is saying is that, no matter how well we know the units of a system, a lot of information is actually encapsulated in the interactions, and in the architecture of these interactions among the units. And this is why, over the last decades, networks have become very important for complex systems; they are sometimes referred to as the backbone of complex systems. So to me, networks are mainly a physical concept, a physical infrastructure on top of which dynamical, often unexpected, complex behavior can emerge. And when I'm talking about complex behavior, I'm thinking of things like the sudden viral spread of rumors in social networks, or the spread of a disease — and we know this very well. The units do not have to be people; they can be different things, like power generation units, and we can have cascading failures of generation units and blackouts, like in 2003 in the US. All of these are rather negative things, so I also wanted to give an example of a good emergent behavior, like the large-scale cooperation that we witness every day in human and animal societies. So, if the architecture is so important for the dynamical emergent behavior of a system, well, of course, we need to be very precise when describing such architecture.
And this is because not only will this allow us to understand the systems better, but eventually also to improve our ability to predict the dynamics of such systems, and possibly to control them. So what is a network in 2021? It is surely a collection of nodes and their interactions. And since these interactions are so important for the dynamics, since the very beginning of network science — even before, from sociology — people started enriching the description of such interactions. So they first considered weights, or they added a direction to the relationship, a source and a target node, or possibly assigned a positive or negative sign to these interactions. In recent years, there has been a lot of new research on more complicated generalized network structures where, for instance, links can be created or destroyed over time, or multilayer networks where nodes can be connected by links of a different nature. I worked a lot on this topic during my PhD with my supervisor Vito Latora, whom I would like to thank. I think that now it is indeed about time for us to try to tackle a slightly different problem, which is that of networks where interactions are not limited to pairs, but can actually encompass wider groups of nodes.
And this is because there is a lot of empirical evidence that higher-order interactions actually exist in interacting systems: from ecology, where the survival of species often does not depend on isolated mutualistic or competitive interactions but on simultaneous interactions with more than one species at a time, to social systems where, for instance in social contagion — differently from biological contagion, where an individual can be infected independently by each infected individual — the main mechanism by which social contagion occurs is believed to be complex contagion, where an agent becomes susceptible to acquiring a new opinion through a mechanism of group peer pressure. So he or she looks at all the neighbors around, and only if a significant fraction of them holds a given opinion will he or she acquire the same opinion. So higher-order interactions are there. But how can we describe them, if we only have simple links to describe interactions? Well, the problem of describing interactions beyond pairs is of course not completely new. Since the early 2000s, people have considered network motifs, which are small connected subgraphs found to be statistically overabundant in a real system compared to a suitably randomized one; and in particular cliques, a special type of motif where every node is connected to every other. However, I want to show you that the traditional network encoding is often not enough. So let me give you this example from a coauthorship network built from publication data, where we have three scientists: Alice, Bob and Claire. Alice and Bob write a first paper together, which gives us an edge. Bob and Claire write a second paper together, as do Alice and Claire. So eventually, what this is going to give us is a triangle of this type.
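As an aside, the peer-pressure rule just described can be sketched in a few lines — the threshold value and the tiny neighborhood below are illustrative assumptions, not taken from any specific model in the talk:

```python
# Toy threshold rule for complex contagion: unlike simple contagion, where
# each infected neighbor acts independently, here adoption requires that a
# sufficient *fraction* of the neighborhood already holds the new opinion.

def adopts(opinions, neighbors, threshold=0.5):
    """Return True if the focal node adopts the new opinion, i.e. if at
    least a `threshold` fraction of its neighbors already holds it."""
    holding = sum(opinions[n] for n in neighbors)
    return holding / len(neighbors) >= threshold

# Node 0 has four neighbors; opinions[i] is 1 if node i holds the new opinion.
opinions = {1: 1, 2: 1, 3: 0, 4: 0}
print(adopts(opinions, [1, 2, 3, 4]))        # half the neighbors suffice -> True
print(adopts(opinions, [1, 2, 3, 4], 0.75))  # stricter peer pressure -> False
```

A single exposed neighbor is never enough under this rule, which is exactly what distinguishes complex from biological contagion.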
However, it is very clear that this type of interaction is very different from the case where Alice, Bob and Claire actually join forces and write a single paper together. What we have in that case is what we call a filled triangle, which with the traditional network encoding looks exactly the same as the previous triangle. What we want instead is not only to consider the difference between these two objects from a structural point of view, but to be able to give them different dynamical meanings. The point is that we cannot limit ourselves to the lowest-order building blocks, simple links; we have to encode interactions in a more sophisticated way, by considering 1-simplices, which are edges, but also filled triangles, that is 2-simplices, filled tetrahedra, that is 3-simplices, and so on. We covered this topic in great detail in a recent review written together with many colleagues. What is nice is that the average age of the authors of this review is around 30, so it was really an early-career effort, not only from myself but from lots of colleagues. The idea is that if we put together all these different building blocks, what we get is a so-called simplicial complex. Simplicial complexes allow us to exploit a lot of powerful computational and mathematical tools, namely the tools that characterize topological data analysis. For instance, we can look at so-called holes and cavities and assign a particular shape to these higher-order data. This is indeed what has been done for brain data, semantic data, even collaboration networks, where it is clear that the fundamental unit of interaction is actually the whole group of coauthors. Another interesting case of network data is that of face-to-face and proximity interactions. In the last 10 years, there has been a lot of research on analyzing real, offline social interactions.
For instance, as done by the SocioPatterns collaboration, where people were given sensors and, if two people were speaking face to face, an interaction was recorded between them. This was done in settings as diverse as primary schools and hospitals, and we understood a lot of new things about social networks. These data were studied as temporal networks. And this is not only for face-to-face interactions but also for proximity data: in Copenhagen they ran a very big experiment where they gave a GPS device to some students and built such temporal networks. I have been involved in some studies where we investigated the diffusion of culture in populations of hunter-gatherers in the Philippines. But what I'm saying is that also in this case, since we have time-resolved data, we should really study these systems as simplicial complexes rather than as simple networks evolving over time. And indeed, if we do so, we can study not only the static architecture of simplicial complexes, but also how they evolve over time. We can look at bursts of activity, not only in the encounters of pairs of individuals: by extending some work done by László 15 years ago, we also see that larger groups of three or four individuals tend to interact in bursts. So there are moments in which nothing happens, and then moments of strong interaction among three or four people. Now, there is a problem with simplicial complexes — more of a technical problem, maybe — which is that, from a mathematical point of view, in order to use these tools from topological data analysis which characterize the shape of data, there is a requirement: if a simplex belongs to the complex, then all its sub-faces, its sub-simplices, must also belong to the complex.
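In code, this closure requirement reads, as a minimal sketch:

```python
from itertools import combinations

def closure(hyperedges):
    """Downward closure: add every sub-face of every hyperedge, turning a
    bare set of group interactions into a valid simplicial complex."""
    simplices = set()
    for e in hyperedges:
        e = tuple(sorted(e))
        for size in range(1, len(e) + 1):
            simplices.update(combinations(e, size))
    return simplices

# One three-body interaction among Alice, Bob and Claire...
complex_ = closure([("alice", "bob", "claire")])
# ...forces all three pairwise edges (and the three vertices) to be present too.
print(sorted(s for s in complex_ if len(s) == 2))
```

Running this on the single triangle yields the three vertices, the three edges and the filled triangle itself: seven simplices in total, even though only one group interaction was recorded.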
What does this mean? Well, it means that if we have a filled triangle, not only is there a three-body interaction, but we must also include all the sub-faces — all the edges, in this case. This is what allows us to use tools from topological data analysis, but it is not exactly what we want in many cases: sometimes a three-body interaction is just a three-body interaction. So we can relax this condition, and what we get is a mathematical object called a hypergraph. By doing this we gain in flexibility, but we lose some of the tools already developed by the mathematical community. In our review there is, let's say, all the mathematics about this; we describe the basic formalism, from incidence matrices to adjacency matrices and tensors, to describe higher-order interactions. Now I want to start talking a little bit about some practical problems. For instance, what kind of information can we get by looking at these systems just at the micro scale? Are we able to distinguish different interacting systems by looking at the local patterns of connectivity? So let us consider motifs of three nodes, but with higher-order interactions. With simple networks, we would just have the open triad and the triangle. But now that we also have higher-order interactions, we already have six different combinations; and as soon as we consider slightly larger building blocks, for instance of size four, this goes above 170 possibilities, and the number explodes very quickly. Here is an example of what we can do with motifs of order three: for each one of them, we can compute its frequency in the real data and compare it to what we would have in a suitable null model.
We can then assign a significance score, which is basically greater than zero if the motif is overabundant in the real system, and negative if it is underabundant. We can do this not only for three-node motifs but also for four-node motifs. What we find is that, for instance, we can identify two superfamilies of higher-order networks: all our face-to-face higher-order networks happen to fall in one category, while, for instance, all the biological systems fall in the other. And if we look at which motifs are actually overabundant, we see that they are very different ones. For biological networks, many of the overabundant interactions are pure many-body, higher-order interactions. In social networks, what we see is that if there is a face-to-face higher-order interaction, it often also means that these people meet separately in a pairwise way. This finding suggested that maybe some sort of reinforcement mechanism is at work in social networks. So what we tried to compute was, for all the motifs with a three-body interaction but a different number of pairwise interactions, how many times the three-body interaction was actually observed. And we see that in all cases, for these face-to-face social networks, we have a growing trend, indeed hinting at the existence of this social reinforcement — while this is not the case for biological systems, for instance. Similarly, we also tried to validate this reinforcement with external metadata, such as how many people were friends on Facebook or declared themselves to be friends in a survey. And again, if we plot this for higher-order interactions with a different number of pairwise interactions, also in this case we observe a growing trend, hinting at the existence of some structural reinforcement.
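A minimal sketch of such a significance score — the motif counts below are made up purely for illustration:

```python
import statistics

def z_score(real_count, null_counts):
    """Significance of a motif: how many standard deviations the observed
    count lies above the mean of a randomized (null) ensemble."""
    mu = statistics.mean(null_counts)
    sigma = statistics.stdev(null_counts)
    return (real_count - mu) / sigma

# Hypothetical counts: a three-body motif seen 40 times in the data, versus
# its counts in ten randomized replicas of the system.
null = [18, 22, 20, 19, 21, 23, 17, 20, 22, 18]
z = z_score(40, null)
print(z > 0)  # positive score -> the motif is overabundant in the real data
```

A negative score would instead flag an underabundant motif; collecting these scores over all motifs gives the profile used to separate the two superfamilies.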
So what I'm trying to do now is to develop a model where we can actually tune and grow higher-order systems with correlations between the different orders of interactions. Now, there is a lot of data about higher-order systems — so much data that recently we thought: well, in some cases these systems are very big and also very noisy. Can we find a way to detect meaningful higher-order interactions, separating noise from actual information? In network science this is the so-called problem of filtering, or backboning. It was introduced slightly more than 10 years ago by Alessandro Vespignani and coworkers in a very famous paper, and then generalizations were made, for instance for the case of bipartite networks and temporal networks; Guido Caldarelli has also worked on related topics, for financial networks in particular. The problem is that if we do not use a higher-order encoding of our interactions, we might not be able to distinguish between these two types of interactions, or we might actually miss interactions where the three-body interaction is disaggregated into single edges, each of which would be validated independently. So what we did, with the group of Rosario Mantegna in Palermo and his collaborator Federico Musciotto, who are true experts in statistically validated networks, is to devise a backbone for these higher-order systems. This is just to say, in case some of you happen to have a higher-order network and want to filter out the noise or the redundant information: this is a tool that can be used. So there is a lot of data — so much data that sometimes we want to filter and simplify the system. But sometimes we have the opposite problem: there is a lot of data, but it is not always recorded as a higher-order network.
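As a toy illustration of the validation idea — not the actual method of the paper — one can compare the observed number of joint appearances of a group against a simple binomial null model; all numbers below are invented:

```python
from math import comb

def binomial_pvalue(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): probability of seeing at least k
    joint occurrences of a group purely by chance under the null model."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Suppose a given trio met together 6 times out of 20 observation windows,
# while the null model assigns a joint meeting a probability of 0.05.
p_val = binomial_pvalue(6, 20, 0.05)
keep = p_val < 0.01  # validate (keep) the hyperedge only if the null is rejected
print(keep)
```

Hyperedges whose joint-occurrence count is compatible with the null would be filtered out, leaving a statistically validated higher-order backbone.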
Sometimes we only have information about the projected system, the projected network data. And a nice question which these colleagues asked themselves — and to whose work I want to point — is: are we able to reconstruct the higher-order organization of an interacting system, given simply the network data? What they did was to develop a very nice Bayesian framework which basically allows us to do so. However, I think this does not close the problem of inference of higher-order interactions for complex systems. For instance, some problems that I find of interest are those where we also have dynamics, not only structural information: we might have a dynamics on each node, a signal, and the question is whether, given these signals, we are able to correctly reconstruct the interaction structure. So this is really a problem of dynamics. And indeed I want to devote the second part of my talk to the effect of higher-order interactions on dynamical processes in complex systems. I want to start by mentioning the case of synchronization, which is the emergence of oscillatory order through interactions; this is of course a widespread behavior, observed from heart cells beating together to circadian rhythms and many other examples. When I started working on this topic, most pre-existing studies were actually limited either to low-order schemes — simple links — or to many-body interactions but mostly on very simple, non-complex interaction schemes, like the all-to-all case. So, together with Maxime Lucas and Giulia Cencetti, we wanted to have both properties: all possible orders of interaction, but also a generic, arbitrarily complex relational architecture. To do so, we focused on a very simple setting: identical phase oscillators. We considered a system like this.
A Kuramoto model, basically, where we have the time derivative of a phase, and the time evolution of this phase is governed by the natural frequency plus a coupling term involving the differences between its phase and those of its neighbors, where A_ij regulates the connectivity. Rather than simply having pairwise interactions, we told ourselves: well, okay, let's also add three-body interactions and four-body interactions, potentially all the way up to all possible orders of interaction, where the corresponding coupling tensors can be as complex as you want — they can be heterogeneous, for instance. The typical question in which we were interested is: is the synchronized solution linearly stable? To answer it, we computed the dynamics of a tiny perturbation of the system. And what happens is that, after some algebra which I'm not putting here, we were actually able to write down these dynamics in a very simple way, which reminds us of what we have for the traditional Kuramoto model with pairwise interactions only — with the difference that here we have a very particular operator, which is not the traditional Laplacian but something else, which we decided to call the multi-order Laplacian. Why do we call it that? Because, after some rearranging of the terms, it can be split into a sum of several terms: a first term which is indeed the traditional Laplacian, encoding pairwise interactions only, and then other terms, a Laplacian of order 2, all the way to a Laplacian of order D. All such matrices happen to be generalized Laplacian matrices of order d: they have the traditional properties of Laplacians, and they can be written in terms of d, the order of the interaction, k_i^(d), the number of d-simplices that include node i, and a generalized adjacency term. So the question is: what can you do with this operator?
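A hedged sketch of how such generalized Laplacians can be assembled from the quantities just mentioned — here every order is simply weighted by one, whereas the actual construction rescales each order by coupling strengths and average generalized degrees:

```python
import numpy as np

def order_laplacian(n, simplices, d):
    """Generalized Laplacian of order d: d*k_i^(d) on the diagonal minus a
    generalized adjacency A^(d)_ij off-diagonal, where k_i^(d) counts the
    d-simplices incident to node i and A^(d)_ij those containing both i, j.
    (A sketch following the definitions described in the talk.)"""
    A = np.zeros((n, n))
    for s in simplices:
        assert len(s) == d + 1  # a d-simplex involves d+1 nodes
        for i in s:
            for j in s:
                if i != j:
                    A[i, j] += 1
    k = A.sum(axis=1) / d       # generalized degrees k_i^(d)
    return d * np.diag(k) - A

# One filled triangle (2-simplex) on nodes 0,1,2 plus pairwise edges.
L1 = order_laplacian(4, [(0, 1), (1, 2), (0, 2), (2, 3)], 1)  # pairwise part
L2 = order_laplacian(4, [(0, 1, 2)], 2)                       # three-body part
L_multi = L1 + L2   # multi-order Laplacian (order weights set to 1 here)
print(L_multi.sum(axis=1))  # each generalized Laplacian has zero row sums
```

The zero row sums are what make the fully synchronized state an exact solution, and the spectrum of the combined operator then decides its linear stability, just as for the ordinary graph Laplacian.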
Of course, now you have a spectral solution for the stability of this simple system of identical phase oscillators. But when we wrote this down, we really thought that it could have wider applications. So I was very happy to see, this year, this other work done by the group of Mattia Frasca, Vito Latora and Stefano Boccaletti, who used the same multi-order Laplacian, the same operator, to extend the idea of Pecora and Carroll about the master stability function to synchronization in simplicial complexes. They used the same operator that we used, but not only for simple phase oscillators: for a wider class of nonlinear oscillators, like Rössler oscillators and similar. And what they found is that higher-order interactions are nice because they can act as a stabilizing force in the system, but introducing them can sometimes also be detrimental for synchronization. A lot of people are actually using this operator. For instance, another group just put a preprint on the arXiv where they are basically trying to optimize the architecture of a higher-order interacting system by focusing on the properties of this multi-order Laplacian matrix. I should say that for pairwise connections we have a single Laplacian: the same Laplacian that arises from diffusion also arises for synchronization in the Kuramoto model when we linearize the system. But this is not the case for higher-order interactions: in the literature we now have many different Laplacians, and which one is used depends on the problem you are tackling. For instance, there is a wide class of Laplacians — to which I'm coming back in a little bit — whose scope is to define boundary relations between a triangle and all its sub-faces, that is, between simplices of order n and of order n minus one or n plus one.
I also studied random walks on hypergraphs, where, by looking at the properties of the random walk, you are able to extract relevant information about the community structure of the system. Let me give you another example, which maybe is a bit more natural-science flavored and a bit less dynamical-systems like, but which I think is very nice for understanding why higher-order interactions matter. This is not my work; it is the work of colleagues, and it is about social contagion. The idea is that, differently from simple contagion — where a node can be susceptible or infected, contagion can only occur in a pairwise manner with an infection rate beta, and with two infected neighbors you simply have a probability of being infected by each individual independently — in the case you belong to a filled triangle, a higher-order interaction, there should be some additional reinforcement term for the contagion. So what they did is introduce an additional triangular infection probability, beta-triangle. And what is very nice, I think, is that we can write down the mean-field equation for this problem, for the temporal evolution of the density of infected nodes. As in traditional epidemic models, we have a term which describes the loss of infectiousness — mu, the rate at which you recover from an infection. Then we have the traditional probability that, because of an encounter between a susceptible and an infected node, the susceptible becomes infected; this is mediated by the infection rate beta and the number of contacts k. But now we also have a new term, the additional triangular infection probability: a susceptible node can belong to a triangle with two infected nodes, so here we have a term which goes as rho squared, mediated by beta-triangle and the number of triangles to which the node belongs.
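A minimal sketch of this mean-field equation, with all rates rescaled by the recovery rate so that d(rho)/dt = -rho + lam*rho*(1-rho) + lam_delta*rho^2*(1-rho); the specific parameter values and the simple Euler integration below are my own illustrative choices, not taken from the paper:

```python
def simplicial_sis(rho0, lam, lam_delta, dt=0.01, steps=20000):
    """Euler integration of the mean-field simplicial contagion model:
    recovery (-rho), pairwise infection (lam*rho*(1-rho)) and the extra
    triangular reinforcement term (lam_delta*rho^2*(1-rho))."""
    rho = rho0
    for _ in range(steps):
        rho += dt * (-rho + lam * rho * (1 - rho) + lam_delta * rho**2 * (1 - rho))
    return rho

# Below the pairwise epidemic threshold (lam < 1) but with strong three-body
# reinforcement, the final state depends on the initial seed: bistability.
low = simplicial_sis(0.05, lam=0.8, lam_delta=2.5)   # small seed dies out
high = simplicial_sis(0.30, lam=0.8, lam_delta=2.5)  # large seed -> endemic state
print(round(low, 3), round(high, 3))
```

With these (assumed) parameters a small seed decays to the epidemic-free state while a large seed settles onto a macroscopic endemic level, which is precisely the coexistence of steady states discussed next.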
So this is now a polynomial of order three; in the case of larger groups of interactions, it becomes a polynomial of order D. What's nice is that we can look for the steady states by setting the time derivative of the density of infected nodes equal to zero. We can of course have the solution equal to zero, which corresponds to the absorbing, epidemic-free state, but we can have up to two more physical solutions — here lambda is just the infection rate beta rescaled by the recovery rate. So we can have two more solutions, because we had a polynomial of order three. But what does this mean from a physical point of view? It means that there is a range of parameters where the traditional second-order phase transition, which separates the healthy state from the endemic phase where a macroscopic fraction of nodes is infected, is not there anymore: for some parameters, bistability emerges, and a hysteresis cycle emerges. And I thought that this was very cool, because first-order phase transitions are something very important in physics, but they are difficult to obtain in network science and in statistical physics. Here, instead, they emerge quite naturally in this simple mean-field model for homogeneous simplicial complexes. And indeed, in the year after, people started considering heterogeneous, scale-free-like simplicial complexes; and not only simplicial complexes but also hypergraphs; and not only hypergraphs but also higher-order structures evolving over time — and one always finds some bistability and some explosive phenomena. Let me tell you more: if we change the process and go back to our Kuramoto model and synchronization — and this is the work of Per Sebastian Skardal and Alex Arenas —
once again, a very similar phase diagram emerges, where bistability appears. All this suggests that there is a sort of universal route to explosive phenomena for higher-order systems: no matter the process, in some sense — no matter if you're dealing with epidemic spreading or with synchronization. If we take a macroscopic order parameter — it can be the fraction of infected nodes, it can be the Kuramoto order parameter for synchronization — and plot it as a function of some control parameter, then as long as the effect of the higher-order interactions is pretty weak, we still maintain the traditional second-order transition of interacting systems; but as soon as this effect becomes strong enough, a really explosive, abrupt transition emerges. I think it is still an open problem to characterize this in a better mathematical way. Some rigorous results are available for some very particular settings, thanks to mathematicians, and hand-waving explanations exist based on bifurcation theory and on linear stability analysis, but I think that here, even for mathematicians or people interested in doing rigorous work, there is a lot of space to give more rigorous mathematical ground to this physical observation. Another interesting process on networks is that of cooperation. Cooperation is a very well studied process, and there is actually already a paradigmatic game for groups of many players, which is the public goods game. It basically works in the following way: there are some players, each of whom can either decide to cooperate, contributing a coin, or to defect; all the coins are then put together, multiplied by some game parameter, the so-called synergy factor, and eventually split equally among the players of the population, no matter their initial strategy.
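The payoff rules just described can be sketched as follows — the group size and synergy factor are illustrative choices:

```python
def public_goods_payoffs(strategies, r):
    """One round of the public goods game: cooperators (1) each put one coin
    into a common pot, defectors (0) put nothing; the pot is multiplied by
    the synergy factor r and split equally among all players."""
    share = r * sum(strategies) / len(strategies)
    return [share - s for s in strategies]  # net gain: cooperators also paid their coin

# Five players, synergy factor r = 4 (less than the group size of 5).
lone = public_goods_payoffs([1, 0, 0, 0, 0], r=4)
print(lone)  # the lone cooperator's coin comes back as only r/N = 0.8 of a coin

# Defecting always beats cooperating, whatever the others do...
all_c = public_goods_payoffs([1, 1, 1, 1, 1], r=4)
one_d = public_goods_payoffs([0, 1, 1, 1, 1], r=4)
print(one_d[0] > all_c[0])  # ...which drives the group toward the tragedy of the commons
```

Whenever r is below the group size, switching from cooperation to defection strictly increases one's own payoff, which is the rational-choice argument made next.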
So it's clear that, in this example, deciding to cooperate was not a very good decision: the cooperator put in a full coin and got back just 80% of it. Indeed, unless we are in the trivial case where the synergy factor is greater than the group size — in which case we are basically just multiplying money — in the opposite, interesting case, a rational individual seeking to maximize their payoff should always defect. The problem is that if everybody defects, there will be no money in the pool to be multiplied, and the system will eventually fall into the so-called tragedy of the commons. And this is indeed what happens in a well-mixed population: if we plot the fraction of cooperators as a function of the synergy factor, the multiplication parameter, we have an abrupt transition separating full defection from full cooperation. Of course, people do not only play in a well-mixed scenario; they can play on a network, for instance. And what is interesting is that in this case something new happens. First of all, the transition changes order: it goes from an abrupt first-order transition to a smooth second-order transition. From an evolutionary point of view, what's interesting is that cooperators start emerging in the system much earlier than before. This is basically the transposition, to this particular game, the public goods game, of the work by Nowak and May in the early 90s on network reciprocity. Now, what is the problem with the traditional way in which we implement the public goods game on networks? It is in the way we define groups. Typically, we let our agent i play with all its neighbors — this is the group in which it is the focal node — but we also let i play in all the groups where its neighbors are the focal members.
Of course, what we know now, with this language of hypergraphs, is that this is not exactly the correct way to represent group interactions, so we can do better. What we tried to do with a group of colleagues, in a work we recently published, is to analyze the evolutionary dynamics of this system, very much in the same spirit of the model for which I showed you the social contagion phenomena. Not only do we provide a mean-field description of these evolutionary dynamics, and we see that higher-order interactions can boost cooperation, but we also see, for the first time, that the game on uniform hypergraphs recovers the replicator dynamics in the well-mixed limit. As I showed you before, this is not something that happens on simple networks, so I thought this was actually quite a nice result. One very last thing — and this is, I think, where higher-order networks really take life and become alive — is to consider what I like to call topological dynamical processes. Typically we consider processes on networks where we have a dynamics on the nodes and interactions mediated by the links. But with higher-order networks, not only can we have higher-order interactions, we can also place dynamics on top of simplices and higher-order edges. In this way, higher-order dynamical systems transform static interactions into active agents. Think, for instance, of the case where the state of an edge can influence the state of the nodes at its two extremes, something like this. This is also a very interesting case, which has been tackled, for instance, by Ginestra Bianconi and collaborators in explosive higher-order Kuramoto dynamics, where you can have oscillators placed not only on the nodes but also on the edges, and possibly on the triangles.
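To make the "oscillators on edges" idea concrete, here is a minimal sketch of the boundary matrices and the first-order (Hodge) Laplacian for a single filled triangle; this is the kind of higher-order Laplacian such edge dynamics is built on (the tiny example and its orientation convention are mine, not from the talk):

```python
import numpy as np

# One filled triangle on nodes {0,1,2}: edges (0,1), (0,2), (1,2),
# each oriented from lower to higher node index.
B1 = np.array([[-1, -1,  0],   # node 0
               [ 1,  0, -1],   # node 1
               [ 0,  1,  1]])  # node 2: node-to-edge incidence (boundary of edges)
B2 = np.array([[ 1],           # edge (0,1)
               [-1],           # edge (0,2)
               [ 1]])          # edge (1,2): edge-to-triangle incidence

# Fundamental identity of boundary operators: the boundary of a boundary is zero.
assert not (B1 @ B2).any()

# Hodge Laplacian acting on edge signals (e.g. phases of oscillators on edges):
L1 = B1.T @ B1 + B2 @ B2.T
print(L1)  # for the full triangle this comes out as 3 * identity
```

Diagonalizing such Laplacians is what reveals, analytically, edge-level synchronization that only shows up when the dynamics is projected onto simplices of other dimensions.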
And also here very interesting things can happen, because with the higher-order Laplacian matrices I mentioned at the beginning of the talk, you can see analytically some very strange things: for instance, the dynamics on the edges may display a synchronization transition which is only revealed when projected onto simplices of higher or lower dimension. So I think that also for these topological dynamical processes we are just at the beginning, and there is a lot of research to be done. Some final words. I have been talking a lot about higher-order networks and interacting complex systems. Of course, higher order also means more memory to store the data, more complicated algorithms to look at the system, and possibly slower analysis. So a very important question we always have to ask ourselves is: do we really need to preserve all this information to understand and describe interacting systems? I hope I was able to show you, with some good examples, both from the structural point of view and in terms of new dynamics and new physics, that there are a lot of interesting things to be done and a lot of new developments which I believe will occur in the next years. I think that for statistical physicists in particular this is a very promising area of research. And this is of course not only my research topic: it is a topic on which a lot of people are working, and this is a non-exhaustive list of key contributors. If you are interested in this topic, together with Ginestra Bianconi we are preparing a focus collection in Communications Physics, which aims to collect the most relevant work in this area.
Together with Giovanni Petri, we have an upcoming book as editors, which will appear in the Springer series Understanding Complex Systems — I was very happy about this, because it is a nice series with nice books on temporal networks, multilayer networks, adaptive networks and these kinds of things. And finally, some of the ideas I shared with you today will eventually appear in an upcoming perspective article, which I was told I should not advertise too much. Last thing: simplicial complexes and hypergraphs are not a new thing. They are an old thing, at least for the mathematicians. So why did it take us so long? Well, I think it took us so long because it is actually hard — very hard — and we knew this already from classical mechanics, we could say. But at the same time, I think that after considering all these degrees of additional complexity, it is now time for statistical physicists and network scientists to try to understand a little bit more of the physics of networks beyond pairwise interactions. Thank you very much for staying with me until the end, and thank you again for the early career award. Thank you very much for a very nice talk. We are a little late, but there is time for one or two questions. Yes, Christian. Thank you very much for this brilliant talk. My question is really about applications. I can very well understand that in a social network you have these complicated higher-order interactions, but what about natural systems, like a biological system? Is it relevant there as well? Did nature choose to have higher-order interactions or not? This is a wonderful question, a very good question. I am less of an expert on biological and chemical networks, but I believe there is a lot of experimental data, for instance about chemical reactions, where the elements like to interact in groups of three but not necessarily in groups of two, for instance.
However, if we forget about the higher-order representation of these systems, what we end up doing is decomposing group interactions into a collection of dyadic interactions, and this might provide us with an inaccurate description of the system. Similarly, I think that also in ecological systems it is very important to consider simultaneous interactions of different species. Here in Trieste, Jacopo Grilli has done some very interesting work in this direction. Of course we need more data, so I hope in the coming years we will have more data also on natural systems. Okay, thank you very much. Thank you. The next prize winner is Caterina De Bacco, and again, of course, we are going to hear about networks — statistical inference in networks, please. Can you hear me well? Great. So, I want to start by saying thanks for giving me the opportunity to receive this award. In particular, I want to thank the division of statistical and non-linear physics for nominating me. And I really need to give some credit to the people who contributed to having me here today: my previous advisors — my two PhD advisors, Silvio Franz and Satya Majumdar, and my postdoc advisor, Cristopher Moore. I really have to say a big thank you to them, because without them I would not be here today. And with no further ado, I want to start straight with the talk. It is the last talk of the day, so I will try to be on time, hopefully. Today I want to talk about finding patterns in networks using statistical inference — in particular, finding patterns in networks when we have different types of information beyond the classical ones you are used to having in networks. Just as a disclaimer: the running example for today will be a particular application that I have been working on in recent years, which is that of analyzing social support networks.
I like to start from a concrete application when developing ideas, and I think this is a very stimulating application that provides rich opportunities for developing theoretically grounded models. For this type of work, I collaborate closely with social scientists, who do the hard job of collecting the actual data; it is then up to us to provide powerful, theoretically grounded methods that can support the analysis of such rich data sets. Networks are a powerful tool to represent an interacting system, and the basic type of information you can store is binary: whether an edge exists or not between two nodes. This is usually collected in a matrix, the so-called adjacency matrix. However, real data are much richer than that: these data sets contain much more information than just whether an edge exists or not. That is the case for social support networks, the ones collected by my colleagues, who perform field studies — asking questions, bonding with people in rural villages — to learn about the relationships between them. These are networks where people are the nodes and edges are relationships between them. Relationships can be of different types: in this particular network that I plot here, which represents a village in rural India, we have 12 different types of relationship, ranging from financial relationships to social relationships, advice, worship, and so on. The questions are asked to each single individual, so the networks are directed, because two people can reply and have different opinions about the same type of relationship. Often you do not even have consistency: there is disagreement between what two people say about the same edge.
And then, of course, they also collect all sorts of information about the individuals themselves. You can think of these as attributes, and they can also be of different types. So, what do we do with this? First of all, let me say that this particular type of network, which I have been lucky to have access to, is not the only network of this type out there. It is part of a bigger effort that anthropologists in particular are running to collect the same type of network in about 30 different rural places located around the world, which is a work in progress right now. So in the next few years we will have at least 30 of them, collected at two different time steps. There are a lot of questions one can ask about this type of rich data set. The particular focus of this program, the so-called ENDOW program, is about modeling how the structure, the topology, of the network can help explain health and wealth inequality between different individuals in a village. It is a pretty broad question, but my point is that we need powerful methods to be able to tackle this and all the subsequent questions being asked. So let us start with a simpler question — actually not a simple question, but at least one that can be formally defined. We want to find patterns behind these rich data sets. That is the first question our colleagues were asking when handing over the data set: let us first find some patterns and commonalities that distinguish these different villages. And that calls for a powerful approach. Let me say this straight: there are many approaches you can take to model this type of data set, but to answer this type of question I really find the formalism of statistical inference — and, more generally, of proper probabilistic models — powerful, because it is a principled framework.
You can derive actual math, do things in a proper and rigorous way; and modeling a probability distribution allows you in general, if you can do it efficiently, to answer several types of question, in particular ones that go beyond the question you originally started with. For instance, in this case the original question was how to find some macroscopic patterns behind this network, but having access to the probability distribution, we can answer several other questions, as I will show you in a bit. Finally, these types of model rely on a simple principle, that of latent variables: variables that you do not observe, but that you model — you select a probability distribution that depends on them — and that can effectively capture the dependencies in the data. One very nice thing about latent variables is that you can interpret them. They are not black-box parameters that you just infer, with values that carry no meaning; you can hand them to the social scientists, who can then interpret them using their domain knowledge. One particular type of interpretation — the one I will use in the next part of the talk — is that of community structure. You might have already heard about community structure several times; in one of its flavors, you can think of communities as latent variables that represent how nodes are grouped, or clustered, in a network, and these groups can determine the probability that nodes interact.
I will now show how we can tackle the original question — how do we find patterns in this type of rich data set — and I will add one layer of difficulty, one layer of information: that of having multiple layers. The original data set we received was this set of 12 questionnaires, 12 different types of interaction: who do you talk to, who do you borrow money from. The goal is to find the hidden patterns; in particular, as I was saying, we will interpret them as communities. I just want to say that while the running example is a social support network, this application is broader than that: you can find multilayer networks, for instance, in transportation systems, in biological networks, and so on. And let me quickly define, so that we are all on the same page, what I mean by a multilayer network: it is a network where the set of nodes is the same, but the set of edges changes between layers. For instance, in our running example, the question "who do you talk to?" defines one layer, with one set of edges. Then you can have a borrow layer — who do you borrow money from — which is a different type of interaction and a different layer, but the people are the same. Now, what is the main problem in working with these different types of information, represented by the different interactions? The main challenge is to effectively incorporate the different types of information while trying not to lose the important ones. Let me show what can happen with a simple example. I plot here one possible layer of a two-layer network, and you can clearly distinguish that there are two groups.
You can also clearly see that the structure of these two groups is so-called assortative, meaning there are many more edges within groups than across groups — a homophilic type of interaction. This is the structure of one layer; but then let us assume we have a second layer where the structure is completely different — call it disassortative — so there are many more edges across groups than within groups. You can already see that we have a two-layer network where the two layers are structurally completely different. If we take a naive approach and aggregate the two layers together — because we have so many nice methods for single-layer networks, let us just lump the two layers into an aggregated one so that we can use them — we will completely lose the individual information contained in the two layers. It is important for us to find these patterns by effectively using the two different types of information coming from the two layers, without losing the fact that the structure of each individual layer can be completely different. So how do we do this in practice? One way is to use a probabilistic model with latent variables. First of all, now that we have multilayer networks, the adjacency matrix is not enough: we need to define a three-way tensor, this quantity A_ij^alpha, where the third index, alpha, denotes the layer. Because we are using probabilistic inference, we start by treating these entries as random variables, so we assign a distribution to them; and because typically these entries are positive and discrete, we assume they are Poisson distributed. The whole point of the model is then to define the parameter of the Poisson distribution, this mean M_ij^alpha.
The main assumption — and I want to pause here for a second — is that we parameterize the mean of this Poisson distribution using a tensor product of three latent variables: u, v, and w. Here u and v are our interpretable latent variables, the communities: they denote the extent to which each node i or j belongs to a group k, so you can think of nodes as people divided into groups. We need two of them, u and v, simply for the practical reason that the network is directed. Finally, we have this matrix w^alpha, which captures the density of connections between nodes in group k and nodes in group l. The main point I want to make here is that the latent variables u and v do not depend on the layer: they are shared across layers, so we have a shared community structure. The matrix w, instead, carries an index alpha, so it depends on the layer, which means the structure of each individual layer can change a lot. Once again: the community structure is shared, but we can still have very different topologies across layers by tuning this parameter w. This was the main model assumption, and it allows us to capture the situation of the simple two-layer network I showed you before, where we can have two very different layers while having a shared pattern — in this case, the communities. So we can model arbitrarily rich multilayer networks. And here I just want to briefly mention that we now need to perform inference of the parameters, because they are latent: you do not observe them, you only observe the data. One can set up a straightforward inference approach to update them in a pretty efficient way.
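The parameterization just described can be sketched as follows — a toy-sized version with hypothetical dimensions, where u and v are shared across layers while w carries the layer index:

```python
import numpy as np

rng = np.random.default_rng(42)
N, K, L = 50, 4, 12   # nodes, groups, layers (hypothetical sizes)

u = rng.random((N, K))      # outgoing memberships, shared across layers
v = rng.random((N, K))      # incoming memberships, shared across layers
w = rng.random((L, K, K))   # layer-specific affinity matrices w^alpha

# Expected adjacency tensor: M[i, j, a] = sum_{k,l} u[i,k] * w[a,k,l] * v[j,l]
M = np.einsum('ik,akl,jl->ija', u, w, v)

# The observed entries are then modeled as Poisson draws around this mean.
A = rng.poisson(M)
print(A.shape)  # (50, 50, 12)
```

Because only w depends on the layer, the two-layer example above (one assortative layer, one disassortative) is captured by one shared (u, v) with two very different w^alpha matrices.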
Let me just say — because this audience may appreciate the analogy — that you can always treat this as a maximization problem (I forgot to say that we use maximum likelihood estimation), and you can always view the likelihood as a Boltzmann distribution with a particular energy function. If you do this, you will notice that maximizing the likelihood is equivalent, from a statistical physics perspective, to minimizing the free energy of this particular system. The bottom line is that you obtain closed-form updates for the parameters in an efficient way. The main idea — I do not want you to focus on the formulas, just on the fact that the numerators depend on the entries of this tensor — is that these systems are typically very sparse: most of the entries are zero, and only the nonzero entries enter the updates. In other words, you can perform this optimization efficiently, and this is a scalable algorithm, up to millions of nodes. It can handle large systems, which in practical applications makes a huge difference. Okay, so now we know how to set up the model, how it works, and how to perform inference. What do we get? This is the main result, and this is where I insisted a lot that we need parameters that are interpretable — and that is what you get if you now interpret the communities, these k-dimensional vectors. We plot here, on the top, the real network, with colors given by the castes. This is not something that we use as an input to the model; we have that information a posteriori, to give an interpretation to the communities. On the bottom we have the four groups that the model is able to infer, color-coded by the intensity of the entries of the latent variables, telling how much each node belongs to each group.
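The role of sparsity can be made concrete through the Poisson log-likelihood: it is a sum of A log M over the nonzero entries, plus a total-mean term that factorizes through the column sums of u and v, so the dense N x N x L tensor never has to be built. This is a sketch of the scaling argument only, not the actual closed-form updates of the published algorithm:

```python
import numpy as np

def sparse_loglik(entries, u, v, w):
    """Poisson log-likelihood (up to constants) for the multilayer model.

    entries: iterable of (i, j, alpha, A_value) for the NONZERO entries only.
    The term sum_{ij,alpha} M_ij^alpha factorizes through the column sums of
    u and v, so the cost scales with the number of observed edges, not with
    N^2 * L."""
    total_mean = np.einsum('k,akl,l->', u.sum(axis=0), w, v.sum(axis=0))
    ll = -total_mean
    for i, j, a, val in entries:
        ll += val * np.log(u[i] @ w[a] @ v[j])
    return ll
```

The same factorization trick is what makes the closed-form parameter updates scale to millions of nodes.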
It turns out that there is some correlation with the caste information — and, our colleagues were also saying, with gender and geography — so there is some meaning behind this community structure. Okay, but our colleague was not completely satisfied; or rather, she was satisfied but eager to learn something more. She said: I also have other types of question, like the following one. Typically I know who people worship with — I have this information — but people are much more reluctant to talk about their financial situation. Can I learn something about how they exchange money if I tell you something about how they worship together? That was an interesting question, and we thought about how to formalize it mathematically using what we had just learned about exploiting the common community structure. We defined this layer interdependence problem, where you try to learn one layer by knowing another layer. More in detail, the idea is to answer questions like the one I wrote before, and you can phrase this as a link prediction task, where you set up a so-called cross-validation routine: you have your data set divided into layers, but certain layers you only observe a little. For instance, for the financial layer you hide part of it, to capture the idea that typically you do not know all of this information; but then you give as input one extra layer, or two extra layers, or all of the other layers, and you try to see whether you are able to boost the prediction of the missing connections in the financial layer. The idea is that if there is a common community structure across the layers — in particular between the one you are hiding, like the financial layer, and the others — you should indeed be able to boost prediction.
In fact, this is what we saw for this particular network. This is a crowded picture, but I want to focus on the dashed rectangle here, where we have the "talk" layer — the one we hide, keeping only partial information about who people talk to. On the y-axis we have a measure of link prediction performance: the higher, the better we are at reconstructing the missing "talk" connections. The points of different colors correspond to how many layers we give as input in order to integrate information and recover who you talk to. The red dots here represent the best performance we can get, and we get it by giving as input all 11 other layers — all the information about all the other types of relationship these people have. Whereas if you only give the partially observed "talk" layer, you basically do no better than random chance: you cannot predict much. So there is a big boost in prediction, and that signals that there is indeed a common community structure — latent variables that explain the dependencies between the different types of interaction connecting people. But that is not the case in general, because we then took a completely different system: a network of the malaria parasite, where nodes are subsequences of different genes and the layers are different positions in the gene sequence. The idea is: if you give me only the information about one position in the subsequence, can that help reconstruct the sequence at a different position? And here the answer is no: you hurt the reconstruction — you actually do worse if you give me information about different parts of the sequence. Here the best performance is obtained if you only use that subsequence, that layer, and performance decreases as you add the others as input.
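The cross-validation routine behind these curves can be sketched as: hold out a fraction of the entries of one layer, score the held-out entries with the fitted model mean, and measure the AUC — the probability that a held-out edge outscores a held-out non-edge. A schematic version (with a stand-in for the fitted scores, since fitting is not shown here; AUC is an assumption on my part for the unspecified metric on the slide):

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Probability that a random held-out edge outscores a random held-out
    non-edge (ties count one half)."""
    pos = np.asarray(scores_pos, float)[:, None]
    neg = np.asarray(scores_neg, float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)

rng = np.random.default_rng(7)
A = (rng.random((40, 40)) < 0.1).astype(int)   # one observed layer
mask = rng.random((40, 40)) < 0.2              # hold out ~20% of the entries

# Stand-in for the model's expected values M_ij: here edges score high by
# construction, so the AUC comes out at 1.
scores = A + 0.1 * rng.random((40, 40))
print(auc(scores[mask & (A == 1)], scores[mask & (A == 0)]))
```

With a real fit, the experiment above is repeated while varying how many extra layers enter the fit, which is exactly how the "talk"-layer boost (and the malaria-layer degradation) is measured.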
The bottom line is that this is a negative result — but not, I would say, a negative result in terms of the model; rather, it is a negative result in terms of the information you have. The take-away lesson is that not all multilayer networks should be treated as multilayer networks: in this case it is much better to treat them as individual single-layer networks — nine of them in this case — because their community structures are completely independent. And then, of course, you can go beyond that. We saw one type of extra information, having layers of different types, but we can also ask: I also collected all this data about the caste, the wealth of an individual, the age, the gender — can we use that too? Remember that for the groups we got before, we did not use any such information; the colors here are assigned a posteriori using the caste. So of course it is then a matter of how to formalize this extra information mathematically. It can be contained in a matrix, the so-called design matrix, this X here, which tells me the value of an attribute z for node i. If you do that, it turns out you can use the same type of idea: a common community structure that couples together two systems — one is the network structure and the other is the design matrix — to find, again, dependencies between these two data sets. If you do this, then the common community structure can be better inferred when there is correlated information between the network and the attributes on the nodes. Here are results on synthetic networks: on the left side you have the ground-truth partition, just two simple groups; then we increase the probability that the groups are correlated with even just one attribute on the nodes, and we measure the ability to recover the ground-truth partition.
As you can see, you get better and better reconstruction performance as you effectively use the information on the nodes. Qualitatively, of course, you can ask: how does the community structure change if it is also influenced by some attribute — in this case caste, since we used the caste information to drive the discovery of communities? Here is the main qualitative result: you can see that some communities are still the same as before, unchanged, but the two communities highlighted here change — there are two smaller communities that are split in one case and merged together in the other. When we handed this over to our colleague anthropologist, she was able to give an explanation: these are actually two groups belonging to two minor castes that had a fight one day, and now they do not talk to each other much anymore. So if you use the caste information, you are actually able to separate these two smaller groups and distinguish them. This is an example of the type of qualitative result you can get if you also integrate this information. And then, of course, you can ask a question similar to the one we had before — a question she was of course asking. She said: it is very costly for me to collect this data; I actually spent two years of my PhD doing a field study, because if you have to collect 12 questions for several hundreds, if not a thousand, individuals, it gets long. Can I avoid asking that many questions, and instead collect some more node information? Because if I tell you a bit more about the caste and gender of individuals, can this help predict the probability that they interact? Maybe next time, instead of asking 12 questions, I can ask five, save time, and ask many more people instead.
Well, it turns out that with this type of probability distribution you can assign a probability to a network given that you also observe the design matrix, the attributes. So we ran this experiment on her data sets. Here, on one axis you have the four different villages and on the other the type of extra information we used as input, and the numbers represent a prediction performance for the missing connections — the higher, the better; here 0.8 is pretty high. The baseline is the light blue row, where we do not use any extra information, only the interactions we observe — and we only observe them partially, 80% of them. You see that if you give caste information as input, we boost the performance in predicting the missing connections a lot. So the answer to her original question is yes: if you can give me, for instance, the caste information, you can probably ask a few fewer questions and I will be able to predict better whether people interact or not. But that is not true in general: for instance, these three other types of information can decrease prediction performance, because the partition, the community structure, is not very correlated with age, gender, or religion. People behave differently from what is explained by age, for instance, so if you give me this as input, I will actually do worse than if you give me nothing. So again, this is a negative result not in terms of the model but in terms of the data. It probably also teaches us that more is not always better: we need to select what information to keep — this can greatly improve inference — but we also need to understand what information to remove, because it can actually decrease performance.
You can then generalize this to even more types of information. For instance, here we were also trying different types of attributes: not only categorical ones like caste or gender, with four or five different values in total, but also continuous, Gaussian-distributed values, and also information on the edges — for instance, you can compute the physical distance in kilometers between where two people live, or a capacity if you want. Here is a diagram where on the two axes we have increasing difficulty. One is the noise, in terms of how assortative versus disassortative the community structure is: the higher the noise, the more difficult inference — the reconstruction of patterns — is. On the other axis we have the degree, the average number of connections: the fewer, the harder inference is. Going from left to right we have an increasing amount of good information, information that helps. The colors here: the brighter, the better. You can see there is a clear pattern from left to right of how much we can improve inference, passing through different hardness regimes, based on how much good information we have. We do not have such a picture the other way around, but I am sure we could get it: how much you can decrease recovery if you give the wrong type of information. Okay, so far we saw very nice properties and several types of question we can ask, but let me conclude this talk by asking: what can possibly go wrong? I found a very simple problem where everything can fail terribly. And that, again, came from my colleague, the social scientist — because one of the properties of this type of model (they are called generative models for a reason) is that you can generate new data once you learn the parameters.
In particular, you get the community structure, and now you can sample networks that hopefully have a topological structure similar to the single one you observed. In her case that is very powerful, because typically she gets one instance of a network, but she may want to generate many more to compute statistical properties on them. So if I can use the generative model to do that, I can calculate some quantities on these sampled networks, and hopefully they will be similar to the ones I expect to see. Let's test this on a very simple quantity called reciprocity, which is simply the tendency of two nodes to form a mutual connection, an edge in both directions. For instance, on top you see that you observe an edge in one direction, and you may wonder: can this help me predict the probability that I also observe the edge in the opposite direction? There are two possible situations: either that edge is reciprocated or it is not. It's very simple: reciprocity is the fraction of reciprocated edges in a network. Now, if you generate samples from these nice generative models with community structure and measure the amount of reciprocity, here on the y-axis, you get the results inside this green shape. The labels are just variants of these latent variable models, and the reciprocity obtained is terribly low, in particular compared to the value on the real data you fitted the parameters on, shown with the red markers. Something is fundamentally going wrong here: you can solve all these very nice prediction tasks and recover all the missing information well, but when it's time to generate a network with the value of reciprocity you expect, you cannot.
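Reciprocity itself is easy to compute from the adjacency matrix. Below is a small sketch, on toy networks of my own construction, showing that a model with independent edge draws gives reciprocity close to the edge density, while a network where many edges are forced to be mutual scores much higher.

```python
import numpy as np

rng = np.random.default_rng(2)

def reciprocity(A):
    """Fraction of directed edges whose opposite edge also exists."""
    return (A * A.T).sum() / A.sum()

n, p = 200, 0.05

# Independent-edge model: each direction drawn separately with probability p,
# so P(j -> i exists | i -> j exists) is just p.
A_indep = (rng.random((n, n)) < p).astype(int)
np.fill_diagonal(A_indep, 0)

# A "real-world-like" network where about half of the edges are made mutual.
B = (rng.random((n, n)) < p).astype(int)
mutual_mask = (rng.random((n, n)) < 0.5).astype(int)
A_recip = np.maximum(B, B.T * mutual_mask)
np.fill_diagonal(A_recip, 0)

print(reciprocity(A_indep))   # close to p = 0.05 under edge independence
print(reciprocity(A_recip))   # much higher
```

The contrast mirrors the green shape versus the red markers in the talk: independent-edge samples simply cannot reach the reciprocity of real social networks.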
So we wondered why, and looking at the math behind it, we suspected a problem in a fundamental assumption these models make, which is conditional independence between edges. You want to model the joint probability distribution of your whole network, represented here by the adjacency matrix, and you assume that, given some latent variables theta, it factorizes fully, edge by edge. Just as a reference, the classic stochastic block model is one of these, with a fully factorized distribution over edges. There is also theoretical grounding for why this works: there are nice theorems saying that if you make this assumption and infer suitable latent variables, you can capture the joint distribution very well, so you don't really need a more complex model. However, if you want to model reciprocity, this assumption is broken; it may not be valid anymore. It's not enough to have a fully factorized distribution; you need a slightly less factorized one, where you still factorize over edges involving different pairs of nodes, but you keep the joint distribution within each pair, so you model both directed edges of a pair at the same time. If you now build your latent variable model with this slightly modified factorization and replicate the task from before, generating synthetic data with community structure and computing the reciprocity, you are able to boost the reciprocity values very close to those of the real data.
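Here is a minimal schematic sketch of that modified factorization, not the authors' actual model: instead of drawing the two directions of a pair independently, each unordered pair is drawn jointly from its four possible states, with a hypothetical extra weight `eta` on the mutual state. Setting `eta = 1` recovers the fully factorized, independent-edge case.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_dyadic(n, p, eta, rng):
    """Sample a directed network pair by pair.

    Each unordered pair (i, j) is drawn jointly over its four states
    {no edge, i->j, j->i, mutual}, with weight eta boosting the mutual
    state. eta = 1 gives two independent Bernoulli(p) directions.
    """
    w = np.array([(1 - p) ** 2, p * (1 - p), p * (1 - p), eta * p ** 2])
    probs = w / w.sum()
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            state = rng.choice(4, p=probs)
            if state in (1, 3):
                A[i, j] = 1
            if state in (2, 3):
                A[j, i] = 1
    return A

def reciprocity(A):
    return (A * A.T).sum() / A.sum()

A_low = sample_dyadic(100, 0.05, 1.0, rng)    # independent directions
A_high = sample_dyadic(100, 0.05, 40.0, rng)  # boosted mutual state
print(reciprocity(A_low), reciprocity(A_high))
```

The values of `p` and `eta` are arbitrary; the design point is that the likelihood still factorizes across pairs of nodes, so inference stays tractable, while reciprocity becomes a tunable quantity rather than an accident of the edge density.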
That made me realize that all these nice models are very good, but they still have many limits, and that's a good basis for saying we probably need to go beyond them. In particular, we fixed the problem for reciprocity, but if you look at higher-order properties like triangles and transitivity, we are still doing badly; we still get very low numbers. There is also a nice recent theoretical paper showing why, in these models with latent variables, you will probably never be able to capture higher-order properties while keeping the network sparse. So my idea is that we somehow need to introduce some sort of nonlinearities. We have to think about this, and perhaps we can also use the higher-order tools we saw in the previous talk to tackle this type of problem. With these open questions and future challenges I want to finish my talk, and thank first of all the collaborators on the works I presented to you, in particular the person who collected all these nice data sets, and who will keep doing so in the future, so a special thanks to her, and to the others as well. And let me finish by thanking the whole group I have recently been able to start at Max Planck; they are all very nice, motivated young students, and it's really a pleasure to work with them every day. We have time for one question, as it's getting late. It's very late in the afternoon, so I won't ask a technical question. I will instead ask a moral and ethical question. These kinds of data that you discuss must be available to governments and to commercial organizations of all kinds. Can you say something about the moral implications of the research? That's a very good question. I'm glad you asked, because usually these questions are not asked; we talk about models, but there are probably more important things. I have to say I'm very concerned about this.
In this particular case, we always have to sign some paperwork, so these data are not given away freely. She gives them only to three or four people who are able to model them. I don't think she gives them to the government, but her work is funded by the US government, so they probably have the right to access them. Beyond that, I have a constant conversation with them about anonymizing the data, and in fact the data are anonymized: there is no personal information, she doesn't give me GPS coordinates or anything. But it is an open question how much we can learn by cross-matching all this information; I think it is really easy to de-anonymize these data sets. The fundamental problem is that the methods to improve prediction grow so quickly and so fast that the methods for keeping information anonymous and preserving people's privacy are not catching up. So I don't know whether we should slow down the modeling to allow people to first think through the ethical implications, or whether we should support the people who work on ethics with mathematics, because here we really need to solve a mathematical problem: how to learn certain parameters while not being able to reconstruct, for instance, the gender of a person. I think it's a huge problem, but we may be able to solve it only if we focus on it. So before asking me "can you predict the gender of this person", why don't you ask me "can you find a way for me not to be able to predict the gender of a person if I tell you who they work with"? I think we need to start asking different types of questions and stimulate models that can answer them.
So I'm really glad you asked that, because I'm also aware it's a tricky and sensitive issue, but I think it was a very good question to end on. Thanks a lot, Katarina, and let's thank all the prize winners. I'm told that all the invited speakers should come up here so we can plan the dinner; just come up so we know how many people we have for transport and so on.