 ... First, I have to tell you who the main characters of this conference are. They are the directors listed here. Apart from a number of local people, that is, myself, Sebastian Goldt and Edgar Roldán, Stefano Ruffo, who is also the SISSA director, and Matteo Marsili, all the other persons here, including Stefano Ruffo, are actually members of our partner in the organization of this conference beyond ICTP, that is, the Statistical and Nonlinear Physics Division of the European Physical Society, for reasons that will become clear; probably you know it already, but it will be clear. We are pleased to have here in person a number of representatives of the board of the EPS Statistical and Nonlinear Physics Division, and among them the chairman, Professor Beck, who is going to say a few words of welcome on behalf of the European Physical Society right now. Yeah, thank you very much. My name is Christian Beck, I am presently the chairman of the EPS Statistical and Nonlinear Physics Division, and a very warm welcome to everybody present in person here, but also to everybody joining us online. This is quite a unique conference, because it is based on a triple interaction of EPS, SISSA and ICTP, and I think that is quite unique and has never happened before. So it is a great pleasure to be here, and for us from EPS it is really great that we can be here in Trieste and enjoy the hospitality of SISSA and ICTP. This is the third of a series of conferences on the statistical physics of complex systems. The previous editions were in Krakow in 2017 and at Nordita in Stockholm in 2019, and we are now very glad to be here in 2021. One of the highlights will of course be the award of the Statistical and Nonlinear Physics Prize of the EPS; more on that tomorrow afternoon, but for now a very warm welcome to everybody. Thank you very much. Thank you. 
And I think the director of SISSA, Stefano Ruffo, is connected from home and would also like to give us some words of welcome. Stefano, please. Good morning everybody. My name is Stefano Ruffo, and I am the director of SISSA, for a few months more; unfortunately, due to a family problem, I am unable to attend today. Indeed, I think Christian is right that this is really an unusual conference, and I would like to stress that it is a hybrid conference, and the hybrid form has been reached between SISSA and ICTP in a sort of new symbiotic partnership, because in fact even the infrastructure that you are using today is shared between SISSA and ICTP. You may not realize it, but it is indeed symbiotic. So I am really sorry not to be with you, I hope that we will meet very soon, and I will follow the conference online; I think it is an excellent opportunity to show the various faces of statistical and nonlinear physics. Have a nice talk. Okay, Stefano, thank you very much for your words. Maybe I should specify that, as Stefano was saying, this room belongs to SISSA, but the whole electronic equipment has been set up by ICTP, and actually this room has been renovated very recently, so we were also a bit unsure whether we would manage to have it ready by today; it was a bit of a thrilling experience. Okay, so let me also give you the highlights of the program. Today there is the poster session starting at four, which is completely virtual, using Gather Town. Those of you who are not familiar with Gather Town are encouraged to see the booklet in which Sebastian Goldt summarizes all his knowledge about it; I still have to read it myself, but I will. This means that the people here will also have to participate online. 
So please remember to use headphones: connect with your laptop to the internet, eduroam is working here, and with your headphones you will navigate the poster session exactly like all the other participants attending remotely. Also, in order not to crowd this room, although the chance is very low, there are a certain number of lecture rooms one floor below, where you can go to work and discuss, and also to attend the poster session. Okay, the other highlight is tomorrow: as Professor Beck said, starting at two we will have the award of the EPS SNPD Prize to our distinguished colleagues, who will be introduced in due time by Christian. With this said, let me remind those who are physically present here of some practical matters. Coffee and lunches will be served outside the room, and, as you know, lunch will be given in the form of lunch boxes, which you can pick up there; it is a takeaway. For eating lunch, you can either spread out here or go outside. Again, remember social distancing and so on and so forth; we have to comply with the rules that we have learned over the past year and a half. An important recommendation: do not enter the ICTP building, because they have different safety measures, so we have to keep apart. Remember also to wear face masks when you are inside the room or inside the buildings, and remember to maintain social distance. This said, again, thank you for joining, either physically here or online. We are really glad that many of the speakers managed to come: it is not an easy period, travelling is difficult, everything is uncertain, you never know which documents you need to take with you. But thank you for coming, and we hope that you enjoy the conference. Thank you very much. In order to be on schedule with the program, let's wait five minutes. 
And at half past nine we will start with the first in-person talk, by Raffaella Burioni. So while we sit, maybe, Raffaella, you can come here. Before we start the scientific program: I forgot, my colleagues reminded me that, concerning the lunch boxes, in principle each lunch box should have your name on it. But in case this doesn't work, we never know, and you don't remember what you chose, because I can tell you that I do not remember what I chose, there is a board outside on which your choice is written. But everything should be smooth. This is to check the microphone. Can you hear me? Yes, there is a kind of echo, maybe. So I have to keep on speaking to check the microphone again. So maybe now, maybe; I don't know, because my watch is not really on time. OK, so we start the scientific program. The first chairman of the session is going to be Raúl Toral from Palma, who is a member of the board of the Statistical and Nonlinear Physics Division of the EPS, please. Good morning and welcome to this first session. The first speaker will be Raffaella Burioni from Parma University. She is going to talk about large fluctuations in anomalous transport and the big jump principle. You have 25 minutes. OK. OK, 25 plus 5 for questions. So Raffaella Burioni now gives the first talk, about large fluctuations in anomalous transport and the big jump principle. OK. Thank you very much. First of all, thanks to the organizers for the invitation. And as I am the first speaker, let me thank the organizers for the effort of organizing this meeting also in person, because, at least for me, it is the first time I come to a real meeting after two years; it is a kind of recovery. So really deep thanks for trying to organize this hybrid meeting. So let me come to the topic. 
I will talk about a traditional topic in our group in Parma, which is anomalous transport, and in particular about anomalous transport in connection with the big jump principle, which we have been using in the last few years to obtain several results related to large deviations and rare fluctuations in systems with fat-tailed distributions. The work that I present is mainly done in collaboration with Alessandro Vezzani and with Eli Barkai and Wanli Wang at Bar-Ilan, as well as, in part, with other collaborators. So thank you, nice work. But it builds on a series of results that we had obtained with Giacomo Gradenigo, Alessandro Sarracino, Angelo Vulpiani and Stefano Lepri in Florence. So it is a long way, but I will mainly talk about these papers, which have been recently published, in collaboration with Alessandro Vezzani, Eli Barkai and Wanli Wang. So, to illustrate the big jump principle, I will make an example. This is a pre-COVID conference, as you can see; it is a picture that comes from our previous life. Suppose you are in a room and you want to measure the average height of the people in the room. You measure the average and you find that it is well above the average value that you would expect in the general population. OK. So, what is the typical configuration that you expect to find in the room in order to explain such a deviation with respect to the average height? Well, probably what happened in the room is that there are a bunch of people, I mean many people, who each have a height a little above the average, and, summing up all these contributions that are a little above the average, you obtain a fluctuation which is large. This is the typical mechanism that you expect. 
This happens when the random variable has a thin-tailed distribution, like height in a population, typically. OK. So the large deviation is caused by many small deviations, all in the same direction. OK. But suppose I ask another question: for example, I measure the total rainfall here in Trieste in a month, and I find that this value is well above the average; I find that this month it rained a lot in Trieste. So, what is the typical mechanism that has produced such a large deviation with respect to the average? Well, it is not the previous one, because rainfalls are typically distributed differently: they are fat-tailed distributed. So probably what happened during this month is that on one day it rained a lot, really a lot, and this single large amount of rainfall accounts for the whole fluctuation with respect to the average. This, in a nutshell, is what I discovered the mathematical literature calls the single big jump principle. It is the situation where, in a stochastic process with a fat-tailed probability distribution, you experience a large deviation with respect to the average, but this deviation is not produced by a set of accumulating small deviations, as in a normal process, but by a single huge fluctuation, very large. OK. And this is the single big jump. So, as I told you, the single big jump principle is the situation where the fat-tailed probability density of the observable that we are considering is dominated by a single, very large contribution. OK. And it is well known in the mathematical literature; for example, there is a nice paper from 1964 by Chistyakov. 
And there are a lot of mathematical results on this principle for i.i.d. random variables drawn from a fat-tailed distribution, and not only for fat-tailed distributions with a power law, which are strongly fat-tailed: this single-jump contribution is also present when you have a subexponential distribution, which can be a much smoother situation than a power law. OK. And there is, for example, this book, which includes many mathematical results on the topic. So typically the principle is stated like this: the observable that you are considering is the sum of these variables, and the probability, the entire statistics of the sum of these random variables, is equivalent to the statistics of the maximum of these variables when the value of the sum is large. So this principle tells you something really completely outside the realm of the central limit theorem, which is related to the bulk of the distribution: this principle tells you something about the large deviations, far from the bulk, in the case of a fat-tailed distribution. OK. And the typical situation is this one, where you have, for example, power-law distributed distances; this could be a position, as you will see, because I will use the language of anomalous transport. In this case you can show quite easily that, for example, if the variables are independent and identically distributed, drawn from this distribution, this is precisely what is going on: the two statistics are equivalent. OK. And this principle has been extended to different situations with some forms of specific correlation among the random variables, for example in this paper here, but there are many people working on this; the literature is really vast. OK. 
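An aside not in the talk: the sum-versus-maximum equivalence just stated is easy to probe numerically. Below is a minimal Monte Carlo sketch in Python; the Pareto form P(X > x) = x^(-alpha), the function names, and all parameter values are illustrative choices, not from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

def pareto_samples(alpha, size):
    """I.i.d. samples with tail P(X > x) = x^(-alpha) for x >= 1."""
    return rng.random(size) ** (-1.0 / alpha)

def big_jump_check(alpha=1.5, n=20, trials=200_000, threshold=1000.0):
    """Compare P(S_n > x) with P(max_i X_i > x) for a large threshold x.

    For subexponential steps the single-big-jump principle predicts the
    two tail probabilities become equivalent as x grows.
    """
    x = pareto_samples(alpha, (trials, n))
    s = x.sum(axis=1)
    m = x.max(axis=1)
    p_sum = np.mean(s > threshold)
    p_max = np.mean(m > threshold)
    # Given a large sum, how much of it is carried by the largest term?
    share = np.mean(m[s > threshold] / s[s > threshold])
    return p_sum, p_max, share
```

With alpha = 1.5 and n = 20, the two tail probabilities come out within a few tens of percent of each other, and the largest term carries most of a large sum.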
And in the statistical physics literature there is a line of research, which is really nice, relating this occurrence of a large jump, a huge contribution to the statistics, to a kind of condensation transition, where the full fluctuation condenses into a single event. OK. And it can be analyzed in a nice mathematical setting by using tools from statistical physics, for example a kind of microcanonical approach that fixes the value of the sum as a constraint. OK. This is an example, and it applies when the variables are, of course, independent; with this approach you can obtain very nice results, part of which I list here. So, what do we want to do in our situation, which is related to anomalous transport and, I mean, to Lévy-like motions, that is, motions where a Lévy distribution, a power-law distribution, is involved in the step distribution or in some other quantity? The idea is the following. We know that this should be the case: when you have a fat-tailed distribution, you should experience this kind of single-large-jump contribution. So the question that we ask is: does the principle hold for more complex fat-tailed problems, which are not i.i.d., or i.i.d.-like, situations? For example, if you have space-time correlations in the variables, or increments which are not independent. And the main point is: can we use this principle, the fact that the contribution to the tail comes principally from a single jump, to obtain the form of the tail of the distribution? This is, more or less, the approach. So what we did was to take, as a testbed for our analysis, a series of Lévy-like motions: situations where you have Lévy distributions, that is power-law distributions, which are related to transport. Let me make an example: one of them is the Lévy walk. 
In the Lévy walk, in one dimension for simplicity, you extract each step from a Lévy distribution, which is a power-law distribution, for example with an exponent alpha between zero and two, so that typically you do not have normal transport. Okay, and each step is covered with a velocity, plus v or minus v, each with probability one half. The observable, the position, is the sum of the steps, that is, the distance with respect to the origin. Each step takes a time which is related to the fact that the velocity is finite and, for example, constant. Okay, and the steps are uncorrelated, but of course there is a correlation between space and time, because there is a time that is needed to make the jump; and of course there is a cutoff in the probability distribution, set by the velocity, because the particle cannot go farther than v multiplied by the time over which you observe the process. So it is a complex process: it is simple, but it is complex in this respect. A Lévy flight, instead, would be the sum of power-law distributed independent random variables, which I showed you before, where each step takes just one unit of time; the Lévy walk is a more physical process. So what we want to understand here is the distance from the origin and its probability: the probability distribution of finding the particle at a distance r from the origin at time t. This is a simple process, which has been investigated in many settings, so we use it just as an example of how the big jump principle can be used to obtain the form of the probability distribution. But we are more ambitious, and we wanted to analyze, for example, this process, which is much more difficult: the Lévy Lorentz gas. The situation in the Lévy Lorentz gas is different: you have a set of scatterers on the line, which are spaced according to a Lévy distribution. 
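Not part of the talk, but the Lévy walk just described takes only a few lines to simulate. A hedged Python sketch; the step-length law (Pareto with minimum 1), the value alpha = 1.5, and the names are illustrative, since the talk only fixes the general model:

```python
import numpy as np

rng = np.random.default_rng(1)

def levy_walk_position(t, alpha=1.5, v=1.0):
    """Position at time t of a 1-d Levy walk: step lengths drawn from
    p(l) ~ l^(-1-alpha) with l >= 1, random sign, traversed at constant
    speed v, so a step of length l takes a time l / v."""
    pos, time = 0.0, 0.0
    while True:
        l = rng.random() ** (-1.0 / alpha)           # Pareto step length
        sign = 1.0 if rng.random() < 0.5 else -1.0   # direction +/- v
        dt = l / v
        if time + dt >= t:
            pos += sign * v * (t - time)             # step still in flight at t
            return pos
        pos += sign * l
        time += dt
```

By construction the walker never leaves the light cone |x(t)| <= v t, which is the velocity cutoff mentioned above.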
So the distances between the scatterers are drawn from this distribution, with the same parameter, but this time they are fixed in space: it is a kind of quenched disorder, in the physical sense. And you have a particle going through this structure, starting from a scattering site; this is a particular choice for this process, the non-equilibrium choice. The particle starts from here, it goes through the structure, and when it arrives at a scatterer it is reflected or transmitted, each with probability one half, for example. So this is a much harder problem, because the particle is moving in a quenched-disorder situation. So, once again, we are interested in the probability of finding the particle at a distance r at time t on this structure. And this is a nice model, because it is a model for diffusion: it is used as a model for diffusion in convective flows and for diffusion in porous media, where you have these gaps, which are the holes the particle goes through; the model takes into account the geometrical arrangement of the holes in the transport. But we got interested in this problem because of a nice experiment done some years ago, which tested the scattering and transmission of light in a set of packed spheres with radii distributed according to a Lévy distribution. It was a kind of engineered Lévy glass, made of glass with scattering regions, the black regions you see, and ballistic transport inside the holes. So this is the mathematical model for the light experiment. So, I mean, the Lévy Lorentz gas is a paradigmatic situation where you have a Lévy distribution, a fat-tailed distribution, for the variables, but the variables are certainly correlated, because of course, during the motion, when the particle experiences a scatterer near a very big hole, it can be reflected and encounter the same hole again. 
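Again as an aside, the quenched model can be simulated directly under the rules just stated (start on a scatterer at the origin, transmission or reflection with probability 1/2 at each scatterer). A minimal sketch; the gap law, alpha = 1.5, and the function names are my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)

def levy_lorentz_position(t, alpha=1.5, v=1.0):
    """Position at time t of a particle in a 1-d Levy Lorentz gas:
    quenched scatterers with i.i.d. gaps from p(l) ~ l^(-1-alpha), l >= 1;
    at each scatterer the particle is transmitted or reflected with
    probability 1/2. The walker starts on a scatterer at the origin."""
    def positions():
        # Scatterer positions on one side of the origin; the particle
        # cannot travel farther than v * t, so this many gaps suffice.
        gaps, total = [], 0.0
        while total < v * t:
            g = rng.random() ** (-1.0 / alpha)
            gaps.append(g)
            total += g
        return np.concatenate(([0.0], np.cumsum(gaps)))

    right, left = positions(), -positions()   # the quenched disorder

    def site(i):                              # i >= 0: right side, i < 0: left
        return right[i] if i >= 0 else left[-i]

    i, time = 0, 0.0
    d = 1 if rng.random() < 0.5 else -1       # initial direction
    while True:
        dt = abs(site(i + d) - site(i)) / v   # ballistic flight to next scatterer
        if time + dt >= t:
            return site(i) + d * v * (t - time)
        time += dt
        i += d
        if rng.random() < 0.5:                # reflected with probability 1/2
            d = -d
```

The same disorder could be reused across many walkers to study sample-to-sample fluctuations; here a fresh environment is drawn per call for brevity.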
And so it is certainly a very difficult situation to study, from the point of view of the correlations of the fat-tailed variables that are involved, and it has been studied previously. So, what is the situation in general? Let me just say that in a situation with anomalous transport, the PDF of the position is not Gaussian, as in normal transport, but still we are lucky, because we can recover a scaling form: the bulk part of the probability distribution has a scaling form, and something which is very interesting to know is the scaling length of the distribution, which grows with time; this is something which can characterize the process and gives an idea of how the distribution is shaped in the bulk. But in general, of course, you also have the tails, as I told you, in a process like this. So in general you should be more careful and say that this probability distribution has a part which is related to the bulk, at distances which are inside the scaling length of your distribution, and a part, which we call B, okay, just because we call it B, which tells you something about the position of your particle when the particle is very far away from the center of the distribution. So you have this part, and both pieces are very important in analyzing your process, because in general the probability distribution is like this: you have the leading contribution, which is related to the bulk, as I told you, and this is the tail, which is very far from the center of the distribution. Of course, this tail part has zero measure from the point of view of the probability distribution, but when you compute the moments of the distribution it can be very important, because it can change the behavior of the higher moments of your distribution. So knowing the tail is important to know the behavior of the higher moments. 
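A quick aside on that last point, that the tail has vanishing weight yet controls the higher moments: it is easy to illustrate with plain i.i.d. power-law samples, used here as a stand-in for the walk (the Pareto law with alpha = 1.5 and all names are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)

# I.i.d. Pareto samples with tail P(X > x) = x^(-alpha), x >= 1.
alpha, n = 1.5, 100_000
samples = rng.random(n) ** (-1.0 / alpha)

def top_share(q, top=0.001):
    """Share of the empirical q-th moment carried by the top 0.1% of samples.

    Low moments (q < alpha) are bulk-dominated; high moments (q > alpha)
    are dominated by the few samples in the far tail."""
    xq = np.sort(samples ** q)
    k = max(1, int(top * len(xq)))
    return float(xq[-k:].sum() / xq.sum())
```

With these parameters, top_share(0.5) comes out at the percent level, while top_share(4.0) is close to one: a set of events with negligible probability carries essentially the whole fourth moment.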
So the idea is: can we give a precise form to this part of the distribution when you are in a situation like the Lévy walk or, much better, the Lévy Lorentz gas, which is a much more difficult situation from the point of view of the correlations of the variables? The idea that we adopted is the following. We assume that in this situation the principle holds, and that the contribution to the tail is produced by a single big jump occurring at some time while the process goes on. So the main idea, to extract the contribution of this large deviation, is to reconstruct and analyze precisely the process that can produce such a large fluctuation. We assume that the principle holds; we do not have a proof of that, of course, for such problems. Not for the Lévy walk, where there is a proof somehow, but, for example, for the Lévy Lorentz gas we do not have a proof that the big jump principle holds. So we suppose that it holds, and the physical idea behind the estimate of this big jump, without entering into the calculation, of which I will show you some, is the following. You divide the process into two scales, from the point of view of time and distance. One scale is what happens inside the times and distances related to the bulk of the distribution: the particle moves over spaces and times characterized by the bulk, and during this motion inside the bulk, let's say, the particle is trying to produce the event that takes it outside the distribution. Okay? So, while moving in the bulk of the distribution, the particle is accumulating trials to make this very big jump. Okay? So we use the bulk of the distribution to estimate the rate at which these trials at the very big jump are made. Okay? Very, very roughly. 
And then, to estimate the probability of going very far from the center of the distribution, we analyze in detail the process that, in a single jump, takes the particle, the bunch of particles, outside the distribution. So, as you can see, it is a heuristic approach, which we managed to apply to several processes; it amounts to understanding the typical process that produces this large fluctuation. It is complex: in the Lévy Lorentz gas, for example, the process is complex and it is very difficult to calculate this contribution. It is complex, but it is a single process, so you have a chance to get information on what is going on in the far tail of the distribution. It is really the opposite extreme from the usual large deviations, where you sum up the exponentially suppressed contributions of all these very tiny deviations with respect to the mean. Here you do the opposite: you just look at one process, and you make your calculation on that. And then, of course, we validate the result with simulations, and there are indications that it holds. Of course, and this is a very important thing to say: precisely because this large fluctuation is produced by one single process, or a series of processes that contribute just one very big jump, it is very process-specific. So we do not expect universal behavior in general in this situation; instead, the tail will have non-analyticities and a lot of details which are related to the process itself, and not to a class of variables, as happens, for example, in the case of usual statistics and usual large deviations. OK, so let me come to what we did. We applied this approach to many situations which are more or less related to fat tails, in the sense of subexponential distributions. We applied it to the Lévy walk. This was known. 
The form of the tail, the large deviations of the Lévy walk, was already known: it had been calculated in 2014, and we rederived it very easily, as a check that this approach holds. We also used it to analyze, through a mapping onto the Lévy walk, a model of the motion of cold atoms. And of course we used it to estimate the large deviations in the one-dimensional Lévy Lorentz gas, with strongly heterogeneous disorder. And recently we also used this approach to extract the same contribution in a one-dimensional Lorentz gas where the spacing is very mild, a stretched exponential: not an extreme situation like the Lévy Lorentz gas, where you have really huge fluctuations in the spacing. So it is really powerful: you obtain the exact form of the scaling function, not just the scaling length. And we used it also for Lévy walks with memory, and for models with acceleration along the step. Every time you have a Lévy walk with some kind of modification it is really easy to use, because in the Lévy walk every step is independent of the others and the correlation comes only from the coupling between space and time, while with quenched disorder, as in the Lévy Lorentz gas, the situation is much more difficult. So let me show you how you do the calculation in practice, in a nutshell, for example in the Lévy walk. Suppose you have a Lévy walk; take alpha between 1 and 2, because for alpha between 0 and 1 there are some problems in applying this approach. OK, in this case you have a scaling length which can be calculated easily, and also the scaling function; the bulk of the distribution is known, and you have superdiffusion. You have a finite velocity, so extracting a length is like extracting a time, and you can really shift between the two formalisms. 
So, what is the motion that we expect the particle to perform in this case? Up to a certain time that we call t_W, a kind of waiting time, the walker essentially sits; then, summing over all possible t_W, the particle takes the single step that carries it well outside the typical length described by the scaling length at time t. Up to that time, given that the whole contribution is produced by the big jump, the particle is of course moving, but it moves in a very, very small region that we can completely neglect with respect to the whole distribution. So the motion of the particle is really simple: it is as if it were stuck at the origin up to t_W, and then at t_W it takes this big step. So it is very easy to analyze in the case of the Lévy walk. And now you have to consider all the processes that at t_W take the particle to the place where we are measuring the distribution, which is at distance r at time t. If you reflect on it, there are only two possible processes that take the particle there. The first one is that the walker takes a step of a length longer than r, so the particle is still traveling when it is at r; this is the first contribution. And the second contribution is when the particle takes a step which lands precisely where you measure your distribution. Of course it is not exactly so, because the particle has also been moving within the scaling length, but this is negligible with respect to the big jump. So these are two very simple processes: you can compute the probability in just a few steps, and you find the form of the tail of the Lévy walk, which had been obtained with a very long calculation by Eli Barkai and co-workers in 2014, by a moment resummation involving infinite densities. 
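The two-process picture just described can also be seen directly in simulation: conditioned on finding the walker far beyond the scaling length, almost all of the distance is covered by its single largest (possibly still ongoing) step. A rough numerical illustration, not from the talk; the thresholds r = 150 at t = 200 and the names are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)

def walk_with_biggest_step(t, alpha=1.5, v=1.0):
    """Return (position at time t, largest single-step displacement) for a
    1-d Levy walk with Pareto step lengths (l >= 1) and constant speed v."""
    pos, time, biggest = 0.0, 0.0, 0.0
    while True:
        l = rng.random() ** (-1.0 / alpha)
        sign = 1.0 if rng.random() < 0.5 else -1.0
        dt = l / v
        if time + dt >= t:
            d = v * (t - time)        # last step truncated at observation time
            return pos + sign * d, max(biggest, d)
        pos += sign * l
        biggest = max(biggest, l)
        time += dt

def big_jump_share(t=200.0, r=150.0, walkers=20_000):
    """Among walkers found beyond r at time t, the average fraction of the
    distance covered by their single largest step."""
    shares = []
    for _ in range(walkers):
        x, m = walk_with_biggest_step(t)
        if abs(x) > r:
            shares.append(m / abs(x))
    return len(shares), float(np.mean(shares))
```

The conditioned share comes out close to one: the rare walkers in the far tail got there through one big step, on top of a negligible bulk excursion.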
And so this is the form, and the nice fact about the way we compute it is that you recognize precisely that the two contributions, the two parts that you find in the tail, are related to two different physical processes; something which is difficult to recognize when you resum all the moments, as in that derivation. So let me move on: what happens in the Lévy Lorentz gas? In the Lévy Lorentz gas we were able to compute the tail; the scaling length of the bulk of the distribution was known somehow, and, for example, here you see simulations showing that this part scales very well: for every alpha the bulk of the distribution is well described by the scaling length. But what is the form of the tail in the Lévy Lorentz gas, which, as I showed you, is a strongly correlated problem? The situation in the Lévy Lorentz gas is somehow similar, but the motion that you have to imagine for your particle in the structure is the following. The particle starts from the origin; it moves, and while it moves away from the origin it has the chance to hit one of the scatterers which is followed by a large gap. This is the typical motion. So it moves over a length ℓ(t), and during this ℓ(t) the particle is accumulating trials to produce this very big jump; when the particle makes this very big jump it enters the gap, and when you measure its position in the middle of the jump, this can of course be preceded by all the possible reflections experienced by the particle. How are we sure that it gets reflected back? Because we are in one dimension, and of course we are sure that in a short time the particle returns, because of the recurrence of the one-dimensional random walk. 
So of course you build this process, summing all the contributions of these reflections, which are difficult, but not as difficult as analyzing the motion in the whole structure, and you find a very specific, strange form of the tail, which, as you can see, has some non-analyticities inside, related to the reflections: you find non-analytic points at one over n, with n odd. And you see the simulations are really, really precise; this is the precise form of the scaling function of the probability distribution. So, as you can see, it is model-specific, and the method is very powerful. And of course from this you can compute the moments, and these are very well reproduced by the probability distribution that enters the calculation. And of course you can do the same with, for example, a stretched-exponential spacing in the structure. This is not a situation where you would expect such effects, because here, with a stretched-exponential distribution, it is really hard to distinguish the spacing of the scatterers in the Lorentz gas from a standard distribution; and this is the form that you can calculate with the same technique. Here you see that you find v multiplied by the time, which is of course the scale of the big jump, so this is another scaling length which comes into play, and here you have L tilde, which is the original scale present in the stretched exponential. So you have three scales in the problem in this case, but you can compute this distribution quite easily. So, just to sum up, let me tell you that the application of this technique, of reconstructing the specific single-big-jump process in these fat-tailed processes, is very powerful. Of course it requires a comprehension of the physical process that produces the very big jump, but once you assume that the principle holds, you can concentrate your attention on the calculation of that particular process, and it is probably 
going to give you a strong information on the I mean on the table of the distribution of course the important part and this contribution is very specific so it can be used also to analyze the details of the property just because this big jump has inside that I mean the details of the process it's not a universal quantity so it is very powerful but as I told you it is heuristic so we have to determine the process of the jump and the jumping rate and of course we don't know what is the class of the process where we can apply this principle in general and of course the nice idea is to use it for experimental data and to recognize the contribution of the biggest jump in a strong deviation mass deviation in real process in real data thank you sorry for that okay thank you Arfaela now is question time if anybody from the audience has a question just raise your hand and if anyone from Zoom I think you can simply unmute yourselves and ask the questions to see how it works any question from the audience? thank you very much I think I put it in the chat what would be the physical meaning of the alpha parameter if there is any sorry you mean the alpha in the power law? 
Yes. Of course, it is the parameter that describes the statistics of your jumps. If alpha is greater than 2, then if you measure the second moment of the distribution it is finite, and so you expect that the probability distribution is attracted, by the central limit theorem, to the Gaussian distribution. If alpha is between 1 and 2, you have a finite first moment but a divergent second moment, so it is not that kind of distribution. So it is really the parameter that tells you something about the statistics of your microscopic variable; that can be, for example, the steps in the Lévy walk, or some other quantity; in general it can be related to any quantity. So it is the parameter that tells you, in some sense, how fat your distribution is, if it is a power law. In the case of a wide distribution, alpha also tells you which kind of process you are considering, because alpha between 0 and 1 gives a wide distribution, alpha equal to 1 is the borderline case, and alpha greater than 1 is more like a thin distribution; so with a wide distribution you can experience all the regimes. So alpha is describing the fat statistics, that is, the statistics of your microscopic variables. Thank you.

So I have a question here. Morally, how similar is this to the problem of leaving a potential well? Imagine you are in equilibrium: in principle you could do the same in an equilibrium situation in which you have a potential well and the particle has to jump out of it. Yes, you can do it. You can do it, for example, if you have a statistics of jumps such that the jump out of the well is not produced by many small deviations but by a single big jump. This is the case, for example, if you have a stochastic process with a noise which is not short-correlated but has a Lévy-like distribution, a colored noise: then you have an indication that the escape from the trap, I mean from
the well is produced by a single jump, and in that case you can apply this type of reasoning as well. So every time there is an indication that your process is produced by a single jump — and this is the case every time you have fat statistics, in the simple case or in more complex cases like the quenched fat-tailed distributions — you can, morally, use it.

No questions? Okay, I have one. When you talk about this comparison with numerical simulations, how difficult is it — what kind of techniques did you use to do these simulations of rare events? It was a standard simulation of random walks; the Lévy walk is not difficult, it is a very easy simulation, but the rare events, I mean in the Lévy Lorentz gas, are very difficult, because to see these fat tails you really have to build a very large structure. So we had to use several tricks to obtain a very large structure, and to average over trajectories and over different environments as well. But of course this is a statistics of rare events, so you have to produce these very big jumps that give the dominant contribution. So yes, it was difficult to build a large enough structure for the Lévy Lorentz gas; for the Lévy walk, for example, it is much easier.

Okay, we stop here and we thank the speaker again. Thank you so much. We will move to the next talk, which is also an in-person contribution, by Gabriel Salierno — no, it should be by Jan Korbel, I have it in the program. This is Jan Korbel from the Medical University of Vienna, and he will tell us about the thermodynamics of structure-forming systems. You have 15 minutes. Okay. Okay, do you hear me?
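The single-big-jump heuristic discussed in this talk is easy to see numerically. The following sketch is not the speaker's code; it assumes classical Pareto step lengths with exponent alpha = 1.5 and random signs, and checks that, conditioned on a rare large total displacement, the largest single step carries essentially the whole excursion:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n_steps, n_walks = 1.5, 100, 50_000

# Fat-tailed step lengths (classical Pareto with x_m = 1): for alpha < 2
# the mean is finite but the variance diverges.
lengths = rng.pareto(alpha, size=(n_walks, n_steps)) + 1.0
signs = rng.choice([-1.0, 1.0], size=(n_walks, n_steps))

x = (lengths * signs).sum(axis=1)   # final displacement of each walk
big = lengths.max(axis=1)           # largest single step of each walk

# Condition on a rare, very large total displacement (top 0.1% of |x|).
rare = np.abs(x) > np.quantile(np.abs(x), 0.999)
ratio = (big[rare] / np.abs(x)[rare]).mean()
print(f"<largest step / |displacement|> over the rare walks: {ratio:.2f}")
```

With these parameters the ratio should come out close to one: the rare fluctuation is dominated by one big jump, which is exactly the heuristic the calculation of the tail builds on.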
Great. Thank you very much. Okay, so it is a pleasure to be here; thanks to the organizers for having me. I would like to present our recent research, published in Nature Communications, about the thermodynamics of structure-forming systems. This has been done in collaboration with my colleague Simon Lindner, who is now a PhD student at the Medical University of Vienna, and two senior guys, Rudi Hanel and Stefan Thurner, from the Medical University of Vienna and the Complexity Science Hub. If you want to see my slides, they are available at slides.com slash Jan Korbel.

Okay, so in classical statistical physics or thermodynamics we usually consider systems whose elements are very simple, molecules or atoms. But what we know is that in many cases, in chemistry or biochemistry but also in social applications, these elementary elements of the system form structures: atoms form molecules, colloidal particles form clusters, we have biopolymers or micelles. What we wanted to do is study the thermodynamics of such systems, especially for the case where the system is small. We cannot do it through the standard approach of using the grand canonical ensemble, because the particle fluctuations in that case are too big. For a small closed system that forms these structures, I will show you that there is a correction to the Shannon entropy that describes the system much better. I will also show you several applications to real physical systems, which are not only possible applications but maybe quite interesting, and I will also make a step beyond equilibrium statistical physics and show you some results on fluctuation theorems for these systems.

Okay, so let's start with a very simple toy model, a coin model. I have a coin, it shows head or tail, so I toss a coin and it can be in two states, head or tail, and let's say I have n coins. Then I make a simple additional assumption: these coins are magnetic, so if two coins stick together they form a new state — one can see it as a molecule or whatever. Now I would like to show you that this structure that is brought into the system is very important, and one way to see it is to start counting the states of the system. If you have non-magnetic coins, you know that there are 2^n states, where n is the number of coins; but when we allow this interaction between coins, the number of states including the bond states grows super-exponentially. Faster than exponential, I mean — this growth is just slightly bigger: it is n^n, so like e^(n log n), but still super-exponential. And this extra amount of states gives us emergent phenomena that wouldn't exist in the subsystems, or in systems without these structures.

So let's start counting the states. What we can do is follow the famous formula by Boltzmann: if we want to really calculate the thermodynamics of such systems, we basically take his formula for the entropy, the logarithm of the multiplicity of the sample space, or configuration space if you want. So now our task is to calculate the multiplicity of this system. The multiplicity means how many microstates correspond to the same mesostate. The microstate really records, for each coin, whether the first toss is head or tail, or whether these two coins have come together. The mesostate, on the other hand, is just the histogram, the number of coins in each state at a given point: how many coins are in the tail position, how many coins are in the head position — and we don't care whether it is the first or second or third coin.

Let's have a simple example with three coins. The microstates head-tail-head, tail-head-head and head-head-tail all correspond to the same mesostate, where we have two coins in the head state and one coin in the tail state, and the multiplicity is three. Similarly, if we have one coin in the head state and the other two are bound together — considering that, basically, this bond state is just
a single state, so we don't say anything about, for example, whether the first coin is up or the second is down or vice versa, just for simplicity — then the number of these microstates is again three, corresponding to the same mesostate.

Now, those were two simple examples; the question is whether there is a general formula that gives the multiplicity of a system, and the answer is yes. We do it quite similarly to the case of regular multinomial systems described by the Shannon/Gibbs entropy. Basically, we consider a mesostate — we have a histogram — and we calculate how many microstates belong to it. What we do is take the permutations of all the particles over all places: we say the first particle is in head, the second is in tail, the third is in a bond, and then we permute them all together. But of course all these permutations give us too many states, because some of them are overrepresented: if I say the first is in tail and the second is also in tail, then it doesn't matter which of the first or second is tail, so we have to divide by this number of overrepresentations. For the first example we do all the permutations — this is all in the colors: we say the first is in head, or the second is in head, or the third is in head, and the same for the second and third place — and then we see that the permutations 1-2-3 and 2-1-3 belong to the same microstate head-head-tail, because it doesn't matter whether the first or the second is in head. So we get 3! permutations altogether, but only three of them belong to distinct microstates. Similarly we can do it for the second case; it is just a coincidence that it has the same number of states, but you see that, for example, the first red permutation 1-2-3 and the orange permutation 1-3-2 belong to the same microstate.

Good, this can be easily generalized. The general procedure is that we take the n_ij molecules — molecules in state i of size j — and we permute the molecules, which gives us the n_ij! permutations; then we also have to account for the permutations of the particles inside each molecule, so to say, which gives the (j!)^n_ij permutations. You can try it by yourself and you will get the same number: the total multiplicity for the case where we have three molecules of size three is 280, and the general formula is n! over the product of n_ij! times (j!)^n_ij. From now on I will be showing the difference from the standard Boltzmann or Gibbs entropy of the multinomial case in this light blue color. You see that when the size of the particles is one, then j! is one and this term doesn't count, so this leads to the ordinary Shannon entropy, or Boltzmann-Gibbs entropy as we know it; but in the case where we allow these molecules, we see that this is something that adds to this factor.

And if you start calculating it — maybe I can mention that while it seems to be something new, it was actually already discovered by Boltzmann himself, in this 1884 paper where he was investigating chlorine and hydrogen forming hydrogen chloride, and his formula for this special case is very similar. Unfortunately the paper was written in German and it was somehow forgotten — almost no citations — but already in the 19th century these formulas were known. If you start calculating this entropy, you will see that it has the form of the Shannon entropy — here is this p log p — plus a correction that really corresponds to the size of the molecules and makes the entropy non-symmetric in the probabilities. What you can do then is introduce a finite interaction range — let's say only particles in a certain region can interact — and then you can introduce a concentration, and you get an equilibrium distribution which looks like the Boltzmann one, except that there is this alpha_j, and it means that for calculating the
partition function I cannot just simply calculate it; I have to solve this normalization equation, where I sum over all the states with this e^(alpha j). If you think about it, it is just a polynomial equation in e^(-alpha) whose order is the maximum size of the molecules, so unless you have molecules of maximum size 2, 3, maybe 4, you cannot solve it analytically; you have to do some numerics, because we don't know a general formula for the solution of polynomial equations. But it doesn't matter: in this case you just cannot calculate the partition function in closed form, and we have to do it numerically. Here I just mention that, although it seems quite different, this entropy fulfills all the common axiomatic schemes, except that it is not symmetric when you relabel the events — it is not symmetric in the probabilities, which makes sense — and also it is not maximized by the uniform distribution, which is a consequence of this. But that makes sense, because the states are conceptually different: it is really different to have a free particle or a molecule, so you cannot swap them easily; that gives you another state, really another physics. So that's one point.

And this is what I was talking about before: we compared this with the usual approach in chemistry, the grand canonical ensemble, where you have a particle reservoir for each type of particle — for single particles, for molecules of size 2, 3, 4, et cetera — and then you adjust your chemical potentials such that the mass action law is satisfied. For relatively large systems it corresponds quite well — this is the specific heat that you can see as a function of temperature — but for smaller systems we can see some deviations. So especially for small systems and low concentrations, if you have these closed systems, it is useful to use our approach.

And now I will quickly show you some applications. One application is the self-assembly of Janus particles. Janus was a Roman god with two faces, and a Janus particle is likewise a particle with two halves: one is hydrophobic and one is charged. If the hydrophobic parts come face to face they attract and form a cluster, and if the charged parts come face to face they repel. People have investigated these both experimentally and numerically, and they saw that under certain circumstances the particles form big clusters. This can be formalized by so-called patchy hard-sphere models, which basically tell you what range of angles is needed to attract the other Janus particle. There is a particle coverage; a Janus particle has coverage exactly 0.5, meaning the two hemispheres are really the same, and there are other cases. What we did is calculate the phase diagram as a function of concentration and temperature, and we found three phases: a fluid phase where we have free particles, a condensed phase where we have really large clusters, and a coexistence phase where we have both large clusters and fluid particles. This roughly corresponds to what people were observing experimentally.

Another application is now more in condensed matter: we take the Curie-Weiss model, the fully connected Ising model, where each particle interacts with every other, but now there is a chance that particles form molecules, and then they feel neither the spin-spin interaction nor the magnetic field. What you see is that while in the normal Ising model there is a second-order phase transition, in this case the second-order phase transition becomes first order, and the critical temperature, the Curie temperature, is decreased, and it decreases with the
size of the molecules. So you see that allowing this extra state completely changes the nature of the phase transitions of the system.

And now very quickly about going beyond equilibrium. If we consider a normal, classical, linear Markov evolution described by a master equation, with detailed balance — which looks slightly different here, because the distribution also looks different — then you get a very nice formula: you see that the second law of thermodynamics holds for this system, so this is good. And then we can also derive the detailed fluctuation theorem, where the sigma is the trajectory entropy production plus a correction coming from the initial and final size of the cluster the particle belongs to. So there is some correction in this case, but you can again derive all of these nice results from classical thermodynamics. And with that I would like to end; this is basically what we have seen, and now I would like to thank you for your attention.

Thank you for the talk and for keeping your time. Are there any questions from the audience? One over there. My question is whether the corrections that you get for the entropies can be related to the mutual information between the particles. Okay, good question — we haven't investigated it. Basically, what you see is that you cannot decompose a system of n particles into two systems of n/2 particles, say, and the distributions will not be independent, because then you basically cannot consider this bond state. So it is not just about correlations or mutual information; it is about the emergence of this extra state that you cannot see from the subsystems. But in a way you could maybe quantify it by quantities like mutual information. Thank you.

You showed us in one of your last slides that there are logarithmic corrections, right — what you have written in blue. How big are these corrections in typical experimental situations? Is it really something very relevant?
So in these corrections, the j is the size of the cluster that the particle belongs to at the beginning, and also the cluster that the particle belongs to at the end. I can imagine that they can be quite large when you start with a single particle and go to big clusters. I haven't measured it, I haven't calculated it exactly, but in certain situations they can be quite non-negligible.

Now a question for you. Hi, I'm Giuseppe Gonnella. You showed something about active particles at a certain point; may I ask if you are able, with your calculation or your evaluation of the entropy, to compute a sort of phase diagram — you did something like that for the Janus particles — and whether you have compared this with one coming from simulations or other methods? Yeah, so what we used is just a basic model; of course, if you have real particles, the phase diagrams are more complex — there are different solid phases, different structures, whether it creates polymers, et cetera. What we did is just add a very simple Hamiltonian that tells you whether the particles are in a condensed or in a fluid phase. For more we would need other, more complicated calculations that would probably not be tractable semi-analytically, where you only solve for this alpha; we would need to do numerical simulations, like grand canonical Monte Carlo simulations or these things, but in principle it would be possible. Actually, the community is already kind of using the same entropy — they found it in a different way, but they are already using it, in a way.

Any questions from the Zoom attendees? Please unmute yourself if you have any questions. Okay, if not, we stop here in time; let's thank the speaker again. Thank you. We move to the next talk, which is by Sarah Loos, here from ICTP, and she will talk about irreversibility, heat and information flows induced by non-reciprocal interactions. Are you ready?
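The counting rule from this talk — W = n!/prod over (i,j) of n_ij! (j!)^n_ij — can be checked against the spoken examples with a short script. This is my own sketch (function and variable names are not from the talk):

```python
from math import factorial

def multiplicity(groups):
    """Microstates per mesostate: W = n! / prod(n_ij! * (j!)**n_ij).

    groups: list of (j, n_ij) pairs -- n_ij molecules of size j in one
    internal state i (free particles have j = 1).
    """
    n = sum(j * nij for j, nij in groups)   # total number of particles
    w = factorial(n)
    for j, nij in groups:
        w //= factorial(nij) * factorial(j) ** nij
    return w

# Examples from the talk:
print(multiplicity([(1, 2), (1, 1)]))  # 2 heads + 1 tail -> 3
print(multiplicity([(1, 1), (2, 1)]))  # 1 free coin + 1 bound pair -> 3
print(multiplicity([(3, 3)]))          # three molecules of size three -> 280
```

Note how the (j!)^n_ij factor is what distinguishes this from the plain multinomial coefficient: with only size-one particles it equals one and the ordinary Shannon/Boltzmann counting is recovered.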
Okay, you have 20 minutes. Because we're on schedule? Yeah, yeah. Okay, then I think I do it like this. Okay. First of all I would like to thank the organizers for giving me the opportunity to tell you today a little bit about this recent work, and thank you everybody for coming. I am presenting some work that I did during my PhD at the Technical University of Berlin together with Sabine Klapp; now I am a postdoc at ICTP, and I am continuing to work on similar ideas together with Édgar Roldán.

I want to start by telling you what non-reciprocal interactions are, because in most parts of physics you will only encounter reciprocal interactions. This is true for all fundamental forces: these are the interactions that are representable as derivatives of an interaction Hamiltonian, and that thus automatically fulfill Newton's third law, actio equals reactio. For example, for a mechanical coupling — two beads coupled by a spring — that would always be the case. The fact that these interactions are representable by an interaction Hamiltonian is very handy from a theoretical point of view: from that property you can derive the equilibrium distribution, despite the system being nonlinear and having many degrees of freedom, and you can also derive the fundamental laws of thermodynamics. For example, the change of internal energy of the system would be given by the change of the Hamiltonian, and that could be due to work applied to the system or heat dissipation. And as you all know, in stochastic thermodynamics these ideas are generalized to the fluctuating scale, so here heat would be a functional of an individual fluctuating trajectory, but on average the same laws hold.
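The reciprocal reference case just described — two coupled beads, each in its own bath, with heat flowing from hot to cold — can be sketched for the linear model without simulation. This is a minimal sketch with parameters I chose myself (unit friction, equal traps, reciprocal spring coupling k), using the stationary covariance from the Lyapunov equation and the Sekimoto expression for the mean heat rates:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Overdamped linear model (gamma = 1):  dx_i = (A x)_i dt + sqrt(2 T_i) dW_i
a, k = 2.0, 1.0                      # effective stiffness, reciprocal coupling
T = np.array([2.0, 0.5])             # bath temperatures: x0 hot, x1 cold
A = np.array([[-a, k], [k, -a]])     # equal off-diagonals -> a real spring

# Stationary covariance C solves  A C + C A^T + 2 diag(T) = 0.
C = solve_continuous_lyapunov(A, -2.0 * np.diag(T))

# Mean rate of heat dissipated into bath i (Stratonovich convention):
#   q_i = <F_i o dx_i>/dt = (A C A^T)_ii + A_ii T_i
q = np.diag(A @ C @ A.T) + np.diag(A) * T
print(q)  # q[0] < 0: heat leaves the hot bath; q[1] > 0: it enters the cold one
```

In steady state q[0] + q[1] = 0 (no work is injected in the reciprocal case), and the flux is proportional to k^2 (T0 - T1): the heat always runs from hot to cold, which is exactly the baseline that the non-reciprocal coupling in the talk then violates.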
As an example, if we couple these two beads here to two heat baths at different temperatures, we easily find that there is a heat flux whenever there is a temperature gradient, and the direction of that heat flux is always from the hot side to the cold side, as you all know. However, in some systems you can also encounter non-reciprocal interactions. This is true, for example, in active matter; you can also find them in complex plasmas, or in computer systems — if you look at logically operating systems, the interactions of course don't have to fulfill Newton's third law. And as an example you can imagine a line of pedestrians walking: in this case the green person would certainly react to what the gray person is doing, but the gray person would not necessarily react to what the green person is doing, and then we have a non-reciprocal interaction. These interactions can also be realized in recent experiments on the fluctuating scale; for example, here a non-reciprocal interaction between colloidal particles is realized with the help of optical feedback. And in this paper we asked the question: what are the thermodynamic implications of such a non-reciprocal coupling on the fluctuating scale?

So in order to address this question, what do we do? We take a very simple toy model, the most simple scenario: two linearly coupled systems which are subject to noise, and also to linear restoring forces. You can imagine two colloidal particles, each one in a trap, and then there is this non-reciprocal linear coupling between them. Whenever the off-diagonal elements of the coupling matrix have different values, we have non-reciprocal coupling; in the special case that they have the same value, we recover the original mechanical model, so they are coupled by a spring. And in the two situations where one of the off-diagonal elements vanishes, this model reduces to two models which are actually known from the literature. In the case that the element a10 vanishes, this
model resembles an active Ornstein-Uhlenbeck particle: in this case x0 would behave like the position of an active particle, and x1 would be an auxiliary variable that mimics the effect of a self-propulsion mechanism, such as the flagellum. In the other case, this model resembles a model known from the literature for a cellular sensor: x0 would be some chemical concentration which is sensed, and x1 would be the state of the cellular sensor.

Yeah, so the first question that we ask is: what is the dynamics seen by a marginal observer in this case? It is no surprise that you get a non-Markovian process if you only consider x0: if you trace out x1, you get a non-Markovian equation for x0 which contains a colored noise, due to this noisy process in x1, and it also contains a force which is in fact a delay force — a force that depends on the history of x0. And it is only present if both off-diagonal elements are non-zero, so not in the case of the active Ornstein-Uhlenbeck particle and not in the case of the cellular sensor; in all other cases we have some sort of delay force. So in these situations this model can resemble a system subject to feedback control: x1 applies a time-delayed feedback force on x0.

So now, considering the thermodynamics — sorry, is there always some delay when I click? — we consider for example the heat dissipation of both variables, and we find that the heat dissipation is in general non-zero; that should be the average heat dissipation of x0 — sorry, it is written wrong here. And this is no surprise: the system reaches a non-equilibrium steady state; due to the non-reciprocal interaction we cannot reach a state of thermal equilibrium. However, there are two special cases. First of all, there is a situation where we do find pseudo-equilibrium states despite the non-reciprocity, and in fact we find it when a certain condition holds for the non-reciprocal strength as compared to the temperature gradient in the system: then we find that there is no net heat flow in the system, no irreversibility — detailed balance is fulfilled. I think this is a quite unusual state, because in a sense we find here that the two drivings, the temperature gradient and the non-reciprocity, can compensate each other, and we still obtain a state of equilibrium. I don't know if this is something that somebody else has seen in another system; I would be very interested in seeing that, because I found it very unusual. And then a second thing which is quite unusual: if the strength of the non-reciprocity is higher than the temperature gradient, we can even find heat flows from the cold to the hot side. In this case, differently from the usual scenario, you can extract energy from the cold bath, pass it through the coupling, and release it at the hot side; so if you let the system run, it acts like a refrigerator.

Next we have considered the situation where you have a marginal observer, from the viewpoint of thermodynamics: what happens if we only observe x0? So let's start from the full system. If we write down the entropy balance of that system, of course we have two contributions to the medium entropy production, one on the cold side and one on the hot side, and then there is the change of the Shannon entropy, which importantly is here the Shannon entropy of the joint system; in steady state this vanishes. Now, a marginal observer who can only see x0 would only observe this heat flow, and would observe the marginal probability distribution, from which the observer could calculate the marginal Shannon entropy. And what we found is that these quantities are connected with each other in the form of a generalized second law that looks like this: for this marginal system we can also write down an entropy balance, and it contains the
medium entropy production, it contains the Shannon entropy, and additionally it contains an information-flow term; and only if we include this do we obtain a marginal entropy production which is on average always greater than zero, or zero in equilibrium. This information-flow term is called the continuous information flow, or also the learning rate, in the literature, and it is connected to the mutual information between the two beads: in a sense it quantifies the change of the mutual information due to the dynamics of x0, and you can also interpret it as the thermodynamic cost that x0 has to pay in order to sustain the correlation with x1. Of course these generalized second laws are quite well known in thermodynamics — I am sure you have all seen them — but here I want to stress that they are less established for time-continuously operating systems. In many scenarios, like Maxwell-demon kinds of setups, you would always be in the situation that you measure the system, then immediately apply a feedback force, then wait some time and measure again — this is often also true in experimental setups — while here we operate on the system continuously in time, so at each instant in time you have both the measurement and the feedback operation. And in the reference that I will give you at the end, we have also generalized this to more than two variables, which is a bit nontrivial, because the mutual information is usually only well defined for two variables.

Yeah, and from this generalized second law we see that the heat dissipation of x0 can become negative if the information flow is negative, so if there is a net information flow from x0 to x1, which means x1 acts like a controller on x0. And here you now see the full diagram of the heat flow of x0 and the information flow from x1 to x0, in a map spanned by the two off-diagonal elements of the coupling matrix; the left side shows the heat flow. If we first focus on this vertical line in the middle, where the model resembles an active Ornstein-Uhlenbeck particle, we find that the heat flow is always positive and can never be zero or negative — and that makes perfect sense: if we have an active particle, we would always expect it to heat up its environment. We also have an information flow, because if you monitored the particle alone, you could still reconstruct what the propulsion mechanism has been doing. On this horizontal line, where x0 is like a chemical concentration sensed by a cellular sensor, we of course find zero heat flow, because being sensed doesn't bring us out of equilibrium; however, there is still an information flow, because the sensor is learning about the system. And then, in these intermediate areas, we can have both positive heat flow and negative heat flow, and this negative heat flow is always associated with negative information flow. What we found is that this negative heat flow is connected to a phenomenon known for systems subject to feedback, which is called feedback cooling, or entropy pumping.

Before I come to the end, I briefly want to mention that we also considered the case of n variables, so if you have not just one non-reciprocally coupled variable but more of them. And for the dynamics we find something interesting: while with just one variable we always found exponentially decaying memory kernels and exponentially decaying colored noise, with more than one variable — for example two — we can also have situations where the memory kernel is non-monotonic. That means, for example, a feedback force acting on the system which depends on the past of x0, but mostly at a certain characteristic time tau; and that is very common for systems subject to feedback loops, where the feedback loop takes some time. And as you can see, you can model them with these auxiliary variables that are non-
reciprocally coupled but they have to be non reciprocally coupled if you go to reciprocal coupling you would always just have exponentially decaying memory in the system and yeah at the end I also want to mention as an outlook that we are now looking at applications of this or experimental realizations so we have a collaboration with Ali Rajapur who is doing and Saeed Arab who is doing MD simulations and we are trying to realize this very simple non reciprocally coupled setup here in MD simulations and we already find that and we treat the system like a nano refrigerator so we consider how much energy we have to supply to sustain the non reciprocally coupling and how much heat we can extract from the cold side like a nano refrigerator and we see that there is already quite good results that we can match the that the theory can match the the MD simulations and it's quite cool because the MD simulations are actually below the large of our regime and we have also started to discuss with Sergio Siliverto about possibilities to realize such a simple setup in electrical circuits and to build a refrigerator on that scale yeah so this I want to summarize I hope I could show you a little bit that non reciprocal interactions are inevitably accompanied by heat and information flows in the system and that the information flows in the central part of the entry production if you consider a marginal system and you can even find heat flow from cold to the hot side and I also showed you these pseudo equilibrium states where two different drives compensate each other which I find quite interesting and yeah with this I want to thank you for your attention thank you thank you for the talk keeping on time also questions for the audience I have one question ehh ok, go ahead yeah I don't know, I'm not well I don't know if I understand correctly but you talk about a negative information flow and you say that it's something like a pump I don't know, I don't really understand that but 
can a negative information flow be understood as misinformation, or something like that, or am I totally wrong? Yeah, thank you for the question; I did not talk a lot about that. The information flow is a directional quantity... sorry, I'm trying to find the slide... yes, so the information flow is always a directional quantity: it always has a sign, and the sign shows us in which direction the information actually flows. For example, in a steady state in this two-variable setup, you would have two information flows, one going from one to zero and one going from zero to one, and their sum would always be zero, because the mutual information on average doesn't change in the steady state, since the two-point probability density is conserved. So you would actually always have one information flow being negative; it just tells us in which direction the information is flowing. The interpretation is that if you have a positive information flow from x1 to x0, for example, it means that if you measure x1, you can say more about the future of x0 than the present state of x0 tells you, and this can of course also be negative: it could just mean that you can say more about the present state of x0 by measuring x1 than you can say about the future of x0. So no, a negative information flow doesn't mean misinformation; it just tells us who is the sender of the information and who is the recipient. Thank you. Okay, just one other question, not related: what is the force field that you use when you run the molecular dynamics simulations, and what are the underlying equations, like solving Newton's equations? Yes, these are Newtonian equations of motion, and we use Nosé-Hoover thermostats. I don't know if that clarifies the question. Thank you. Okay, now a question from the audience. Hi, so many of your examples of these non-reciprocal
systems were sort of like reciprocal systems where a degree of freedom was frozen out, like for instance two walkers with one person facing the other way; it's like saying that you can't use an angular degree of freedom. So does that mean that these results have some sort of implication for the short-time dynamics of reciprocal systems, where maybe there's a slower degree of freedom, like a persistence length in an angle or something? Yeah, that's a good question. No, I think in general the results don't assume anything about the time scales, so the time scales could be similar or they could be very different; but if the time scales are very different, you will lose the effects, so the effects emerge from the fact that you have similar time scales. It's true that the example of the pedestrians is maybe not the best example; normally you would want them to operate on similar time scales in order to see these effects. Yeah. Are there other questions? May I ask a question? I'm online. Go ahead. Okay, nice talk. I was just wondering, when you introduce these positive and negative information flows, is this somehow connected to the concept of transfer entropy? Because when you include some suppressor variables to predict the target variable, generally the entropy of the target variable decreases and your information or predictability increases, and otherwise, if it decreases, you get a minus sign. So is there any connection? Yeah, thank you for this question. The transfer entropy is also a concept to quantify information flows, one which is even more established. I think unfortunately you cannot see all the references, but especially this reference here by Jordan Horowitz is very nice, because it compares these two measures next to each other, and it depends on the setup which information flow is more important. In this case it's a kind of information storage in a finite number of degrees of freedom, and here this continuous
information flow will come into play. If you have an external memory that is infinitely large, you would instead look at the transfer entropy, and that would be the measure that quantifies the thermodynamic cost of storing this information. So it really depends on the setup that you look at, and in this case it's the continuous information flow; but as I said, I would recommend looking into this beautiful reference, where you can find all these things listed together. So it really depends on what you are looking at, and also on what question you ask, whether you want an information measure that is connected with entropy production or not. Okay, thank you. Okay, thank you. Question? Okay, final question. Hi, Sara, could you go back to the slide with the non-monotonic memory kernel? Yes, let me try. Yes. So, if I understand correctly, when you look at these networks, they are very non-reciprocal, in the sense that one particle doesn't see the other one, the left one. Yes. And then you see a non-monotonicity in the memory kernel. Yeah. So if this wasn't the case, like if it was less non-reciprocal, would you also see the non-monotonicity? My question is basically: is the non-monotonicity suggesting that you have non-reciprocal interactions?
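As an aside on the point this question raises: if one integrates out a unidirectional linear chain of n auxiliary variables y_i' = -k*y_i + k*y_{i-1} driven by x0, the effective memory kernel acting back on x0 is the n-fold convolution of exponential kernels, a Gamma-shaped function k^n * tau^(n-1) * exp(-k*tau)/(n-1)!: monotonically decaying for one auxiliary variable, but non-monotonic with a peak at tau = (n-1)/k for two or more. This chain model and the rate k are illustrative assumptions, a sketch of the mechanism rather than the speaker's exact equations.

```python
import math

def chain_kernel(n, k, tau):
    """Effective memory kernel from integrating out a unidirectional chain
    of n auxiliary variables (n-fold convolution of k*exp(-k*t)):
    K(tau) = k**n * tau**(n-1) * exp(-k*tau) / (n-1)!"""
    return k**n * tau**(n - 1) * math.exp(-k * tau) / math.factorial(n - 1)

k = 2.0
taus = [0.01 * i for i in range(1, 500)]
k1 = [chain_kernel(1, k, t) for t in taus]  # one auxiliary: monotonic decay
k2 = [chain_kernel(2, k, t) for t in taus]  # two auxiliaries: non-monotonic

monotonic = all(a > b for a, b in zip(k1, k1[1:]))  # True only for n = 1
peak_tau = taus[k2.index(max(k2))]                  # near 1/k for n = 2
```

Consistent with the answer that follows, the n = 2 kernel vanishes at zero time difference and peaks at a finite delay, which is the signature of a feedback loop with a characteristic lag.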
Yeah, very good question. Just for simplicity I show these extreme cases. It's true that here we have just unidirectional coupling, which is the most non-reciprocal one. If you add the counterclockwise direction, just weaker, then you will also get a non-monotonic memory kernel, but it will not be zero at time difference zero; it will just look more like, yeah, like this. What we showed is that you need some sort of non-reciprocity to get something non-monotonic, but really, in order to get this zero offset, you need the unidirectional coupling. In the scenario of feedback control you typically want that, because if you have a feedback loop with a finite time, this is the finite time that, let's say, the measurement takes, and then the feedback controller needs some time, and later it will apply a force, and you don't want any offset at zero. But yeah, this is just a special case, and of course I should stress that this is all only true for overdamped equations: if you have inertia, then you have all kinds of oscillations anyway and the kernel is not monotonically decreasing anyway. Also, what is interesting is that the colored noise is always monotonically decreasing in these setups. Okay, thank you, let's thank the speaker again, and with that we come to the end of this part of the session; we'll come back at 11.15, okay, in 20 minutes from now, 22 minutes. So I think we should slowly get started, because otherwise we accumulate delay. For practical reasons I would suggest, actually encourage, those who are asking questions to first say his or her name, and this is true both here and especially for those who are online, because we cannot see you and sometimes it's a bit difficult. Okay, so first always say your name. Then I give the floor again to Raul for chairing the session. Okay, now the next talk is by Urna Basu. Can you share, Urna, are you there? Yeah, I'm here, should I start sharing?
Yeah, just start sharing. Okay, and you can start; your talk is 25 plus 5 minutes, 25 minutes plus 5 for questions. Yes, can you see? Yes, we can. Okay, okay, yeah, good morning everybody. I'd like to start by thanking the organizers for inviting me; it's great to be back at SISSA, virtually. So, I'm going to talk about active Brownian motion with direction reversals. This is based on joint work with my Raman Research Institute colleague Sanjib Sabhapandit and his PhD student Ion Santra, who is also in the audience today. Here is a brief outline of this talk: I'll introduce active particles very briefly first, then I'll go on to talk about the particular active motion that is the main topic here today, direction-reversing active Brownian motion. Then I'll discuss the various dynamical regimes shown by this kind of motion, and the position distribution in these different regimes, and then, if time permits, I'll very briefly mention how it behaves in the presence of a harmonic trap, and then I'll conclude. Okay, so active matter: what is active matter? It's a collection of self-propelled active agents, each of which can generate directed motion by consuming energy from the environment at an individual level. This makes this kind of motion inherently non-equilibrium in nature. Examples of active matter can be found at all length scales, both in nature and in many artificial engineered systems. In nature, the typical example that we are all familiar with is the motion of a bacterium, but at the other end of the scale there are also bird flocks and fish schools, where a single bird or fish is like an active agent. Then there are artificial systems, like micro- or nano-swimmers, or Janus particles of different kinds, which show a similar kind of motion. So these are the Janus... Sorry, we're losing you, it's not good. Hello, hello? Yes, we lost the last few seconds. Can you hear me now?
Now we can, now we can. Okay, okay, okay, yeah, I was just saying that we'll use minimal statistical models to understand this individual active-particle dynamics. There are various kinds of models for active particles, which have some internal orientation or internal states that couple with the positional degrees of freedom to give rise to this directed, active motion. Two of the most famous models are the so-called active Brownian particle and the run-and-tumble particle. For the active Brownian particle, the characteristic part of the dynamics is that there is an internal orientation which changes via rotational diffusion; examples of systems which follow this kind of motion are catalytic swimmers and various other kinds of Janus particles. On the other hand, there are the run-and-tumble particles, where the orientation changes via discrete jumps; again, the typical example is bacterial motion, like E. coli. However, the topic of today is a different kind of motion, which is not described by either of these two models: there is something called direction-reversing active motion. Here I have given some examples: the first one is a video of the motion of a certain kind of bacterium called Myxococcus xanthus, the second one is another bacterium called Pseudomonas putida. The typical motion that I am talking about is best visible here if you follow this blue trajectory: the particle is moving along a certain direction, which itself is bending a little bit, so that's like active Brownian motion, but then suddenly it reverses its direction. Okay, so it's like a direction-reversing active Brownian motion. The third one is from an experimental system, a light-tunable particle, and there are also many other kinds of bacteria which follow similar motion. The purpose of this talk is to understand theoretically what the typical features of this kind of motion are, so I'll
introduce the model we are going to use. We are interested in the motion of such a direction-reversing Brownian particle in 2D. We consider an overdamped particle which moves with a constant speed v0 in 2D; apart from the position (x, y), it also has an internal orientation theta. Theta denotes the angle that the internal orientation, this blue arrow in the schematic, makes with the x axis. This theta evolves in time, and that's how we get the active motion, but there are two different kinds of dynamics for theta: one is that it rotates slowly via a diffusion process, and the other is that theta can suddenly go to theta plus pi, which is the reversal of the motion. So this motion can be described by this set of Langevin equations: x dot is v0 sigma(t) cos theta, y dot is v0 sigma(t) sin theta. Theta is this angle, as I said; it undergoes rotational diffusion, so theta dot is proportional to eta, a white noise, with D_R the corresponding rotational diffusion constant. Then there is sigma(t), which is a dichotomous noise: it can take values plus 1 and minus 1, and it flips its sign with some rate gamma. So there are two origins of noise here: one is the intermittent reversal of sigma, which gives rise to the reversal of the direction of motion; the other is the slow diffusion of theta. Throughout this talk I will assume that at time t equal to 0 the particle starts from the origin, without any loss of generality, with a certain internal orientation theta naught and sigma equal to plus 1. Clearly this motion is different from both the ABP and the RTP, but between consecutive reversal events it behaves like an ABP. This model has been used in numerical simulations to understand certain collective properties of Myxococcus xanthus, but there is no analytical understanding beyond the position variance. So this is what we are going to discuss here.
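The Langevin equations just stated can be sketched as a minimal Euler-Maruyama integration; the function name, step size, and parameter values below are illustrative choices, not from the talk.

```python
import math
import random

def simulate_drabp(v0=1.0, dr=0.1, gamma=1.0, theta0=0.0,
                   t_max=10.0, dt=1e-3, rng=None):
    """Euler-Maruyama integration of the direction-reversing ABP:
    x' = v0*sigma*cos(theta), y' = v0*sigma*sin(theta),
    theta' = sqrt(2*D_R)*eta (white noise),
    sigma flips sign with rate gamma (dichotomous noise)."""
    rng = rng or random.Random(0)
    x = y = 0.0
    theta, sigma = theta0, 1  # start at origin with sigma = +1
    for _ in range(int(t_max / dt)):
        x += v0 * sigma * math.cos(theta) * dt
        y += v0 * sigma * math.sin(theta) * dt
        theta += math.sqrt(2 * dr * dt) * rng.gauss(0.0, 1.0)
        if rng.random() < gamma * dt:  # Poisson reversal event
            sigma = -sigma
    return x, y

# Sanity check: with no rotational diffusion and no reversals,
# the motion is purely ballistic along the initial orientation.
x, y = simulate_drabp(v0=1.0, dr=0.0, gamma=0.0, theta0=0.0, t_max=2.0)
```

Setting dr = 0 and gamma = 0 recovers a straight run at speed v0, which is a quick way to check the integrator before turning both noises back on.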
Our main observable, the one we will concentrate on, is the position distribution of this particle. Before going there, let's have a closer look at the dynamics of this direction-reversing active Brownian motion. We can recast the dynamics in terms of effective noises zeta_x and zeta_y: I have simply written zeta_x as sigma(t) cos theta(t), but theta(t) is theta naught, the initial orientation, plus phi(t), which is nothing but a Brownian motion, and sigma(t) is the dichotomous noise, as I said. Here in this plot I have shown a typical trajectory of this direction-reversing active Brownian particle. This green dot here, I am not sure if you can see, there is a green dot here which indicates the initial position, and then it moves, and the blue and red arrows indicate the present sigma state. In the inset there is a trajectory taken over a much longer duration of time, and you can see that this looks more or less like ordinary diffusion, although at smaller time scales it's something very different from ordinary diffusion, as well as very different from the ordinary active Brownian particle. Our objective here is to understand how this position distribution evolves. Clearly there can be two scenarios: one where gamma, the reversal rate, is larger than the rotational diffusion constant, and one where it is smaller. Here we show how the position distribution evolves in these two cases. The first panel is where gamma is much smaller than D_R. Here we see, let me start it again, yeah, here we see how, as time evolves, 5000 particles, all of which start with theta naught equal to pi by 4, spread out; blue arrows indicate current sigma equal to plus 1 and red arrows indicate current sigma equal to minus 1. We see that for gamma smaller than D_R, the position distribution evolves into an isotropic, circular kind of distribution quite quickly. On the other hand, for gamma
greater than D_R, it actually remains anisotropic for a very long time. So what I am going to discuss is that, depending on the two time scales D_R inverse and gamma inverse, there can be four different dynamical regimes. One is the short-time regime, where time is much smaller than both time scales, both gamma inverse and D_R inverse; there is a characteristic shape of the position distribution in that regime. Then, depending on whether gamma is larger or smaller than D_R, there is an intermediate regime, and then there is a long-time regime, when time is much larger than both time scales. We will discuss the characteristic shape of the position distribution in these four possible regimes. Just to give an idea about real systems: in Myxococcus xanthus, gamma inverse is about 100 seconds and D_R inverse is about a million seconds, whereas for Pseudomonas putida, the other bacterium which shows this kind of motion, gamma inverse is 0.13 seconds and D_R inverse is 43 seconds. So we see that there is a clear time-scale separation between these two scales, and the intermediate regime is actually quite important; we'll see that that's where the most non-trivial results actually come from. Before going into the details, let me just briefly tell you what we are going to see. We'll see that in the short-time regime, when time is much smaller than both time scales, the distribution is strongly anisotropic and non-diffusive; in the intermediate regime, when gamma is larger, it is again anisotropic and non-diffusive, with a non-trivial scaling function which we will derive; and in the two other regimes it is actually diffusive and isotropic and shows a typical Gaussian distribution. Here are some plots obtained from numerical simulation. The first column is at short times; here you see that both for gamma smaller and larger than D_R, the
qualitative shape is like this, sort of a hammer; this direction is the initial orientation. On the other hand, for t much larger than both time scales, the distribution is single-peaked, with a peak at the origin; it's very Gaussian-looking. Now, for the intermediate regime: if gamma is larger than D_R, it's some sort of anisotropic shape, you see in the heat map that the contours are like ellipses, and for gamma smaller than D_R it's again a Gaussian. This second middle panel is the most interesting regime, but before going there I'll briefly discuss all the other regimes. The first one is the short-time regime, when time is much smaller than both time scales, which means that D_R t is very small, which means that phi(t), which is proportional to the square root of D_R t, is also small. Mathematically, we can approximate cosine phi as 1 and sine phi as phi, and that gives a simplified form of the effective noise; remember, here is the effective form of the noise written down. In this regime they can be approximated by this equation, and the mean square displacement can be calculated exactly: it turns out that the mean square fluctuations in this regime, for both x and y, are proportional to t cubed. The coefficients of t cubed are different for x and y, which means that it's anisotropic, and t cubed means non-diffusive. To calculate the distribution, we take recourse to a trajectory-based approach. I'll not go into the details of the calculation, but what we do is to look at a trajectory with n reversal events, compute the position distribution for that particular trajectory, and then sum over all possible such trajectories and over all possible numbers of reversals. Because phi(t) is just a Brownian motion, we can actually calculate exactly what this gives, and the position distribution can be expressed as an infinite series, which is not very useful as it is. But what happens is that in the short-
time regime, remember, we are looking at time scales shorter than both D_R inverse and gamma inverse, so for small gamma one can actually evaluate this series perturbatively, in a systematic way, and one can get the position distribution in this regime. Here is a plot of the marginal distribution along x, which shows a very unusual shape; this kind of position distribution is not seen in either the RTP or the ABP. There's a plateau around the origin and then a Gaussian peak around x equal to v0 t. The symbols are from numerical simulation and the black lines are from the analytical calculation, keeping only the n equal to 1 and 2 terms. This plateau is actually a result of one- or two-reversal events; it's the direction reversals which give rise to this kind of plateau in the position distribution. Next I'll go to the long-time regime, where time is much larger than both time scales. From the trajectory plot we already sort of expected that here one should see a Gaussian distribution, and indeed that is what happens. One can show more rigorously that the effective noise actually emulates a white noise, but with an effective diffusion constant that depends on both D_R and gamma. The diffusive nature of the motion in this regime is clear from the fact that the averages of x squared and y squared both grow linearly with time t, with this effective diffusion constant, and the typical distribution is a Gaussian with a diffusive scaling. Again we compare this prediction with numerical simulations, which show an excellent match. Just as a quick reminder, this behavior is similar to the long-time behavior of an ordinary active Brownian particle, but with a different effective diffusion constant. Then we come to the intermediate regime which occurs when D_R is larger than gamma: again one gets diffusive behavior, but with a different effective diffusion constant, essentially the speed squared divided by D_R, and the typical distribution is Gaussian again, so again the comparison with the numerics. Yeah, now we come to the most interesting regime, when gamma is much larger than D_R and we are looking at intermediate times, so t is much larger than gamma inverse but much smaller than D_R inverse. t much larger than gamma inverse means that the average number of reversal events, which is gamma t, is large; in this regime one can approximate the dichotomous noise sigma(t) by a white noise, it becomes delta-correlated. Again, one can show this rigorously, but I am just stating it here; the strength of this noise is given by gamma inverse. On the other hand, because t is much smaller than D_R inverse, phi(t) is still small, so for cosine theta(t) and sine theta(t) we can still use the same approximation, keeping terms up to linear order in phi. Now the Langevin equations with these effective noises resemble those of a 2D diffusive particle, because it's just white noise, but the diffusivity itself is a stochastic function of time: this term here, cos theta naught minus phi(t) sine theta naught, acts like a diffusivity which is evolving with time. One can again calculate the mean square displacement exactly, and it turns out that the leading term is proportional to t, so it's like diffusive motion, but there is still anisotropy, because along x and y the coefficients are different. So we want to understand the position distribution in this regime. What one can do is adopt a path-integral approach, because again what we have is just a white noise and a Brownian motion, so one can actually perform all the path integrals exactly and arrive at the characteristic function of the 2D distribution. This is some exact expression, where kx and ky are the Fourier variables corresponding to x and y; the expression doesn't mean anything
except that it can be inverted for any value of theta naught. But let us first look at the marginal distributions. From these exact expressions we can find the marginal characteristic functions, and the point to note here is that there is no general scaling form: one cannot write a general scaling form for P(x, t) and P(y, t), which depend non-trivially on the initial orientation theta naught; they can be inverted numerically. To understand the anisotropy better, what one can do is look along the directions where the anisotropy is extremal, in some sense: the position distribution along the initial orientation, and orthogonal to the initial orientation theta naught. So this is what we are going to do. First we look at the parallel component. If you remember this figure from a few slides ago, this black dashed line indicates the direction parallel to the initial orientation, and we can get the corresponding characteristic function by taking theta naught equal to 0 and ky equal to 0. It turns out that the characteristic function is actually that of a Gaussian, an exponential of minus k squared times something, which gives rise to a diffusive scaling form for the position distribution. This means that along the initial orientation the motion is actually diffusive, and it does not depend on D_R, it depends on gamma only, and the scaling function is nothing but a Gaussian. Here you can see the scaled plot of the parallel distribution at some time t in this regime, for different values of gamma and D_R, and the perfect collapse shows you that it is indeed like that. So the direction-reversing active Brownian particle actually shows diffusive motion along the initial orientation at intermediate times. On the other hand, if we look at the orthogonal component, the characteristic function is something very different: it's of the form one over the square root of a hyperbolic cosine,
and it's actually... hello, am I audible? Hello? We can hear you, yes, yes. Okay, yeah. So the resulting scaling form is actually a ballistic scaling form: the position distribution in this regime for the orthogonal degree of freedom is a function of x perpendicular over v0 t, with some other constants which depend on gamma and D_R, and we can calculate the scaling function as well, which is given in terms of Gamma functions. This scaling form is again unique to the direction-reversing motion; we do not see it in the usual active Brownian motion, nor in the RTP, and it shows an exponential decay at the tails. Okay, so this means that for this direction-reversing motion we have ballistic motion orthogonal to the initial orientation, whereas along the direction parallel to the initial orientation there is diffusive motion, and this strange behavior is due to the presence of the reversals. So, yeah, I'll quickly summarize. How much time do I have? Five minutes. Okay, okay, yeah, I'll quickly summarize this part. I have discussed the nature of active Brownian motion with intermittent direction reversals, and we have shown that, because of the presence of two different time scales, set by the rotational diffusion constant and the reversal rate, there are actually four possible distinct dynamical regimes, each of which is characterized by a different shape of the position distribution. At short times, the motion is strongly anisotropic and non-diffusive, and the distribution has a strange shape with a plateau around the origin; at late times, it's typically diffusive. On the other hand, in one of the intermediate regimes, when the reversal rate is fast, we see a different kind of motion: there is ballistic motion along the orthogonal direction and diffusive motion along the direction parallel to the initial orientation, and we have also found the
non-trivial scaling function, with an exponential tail. The next obvious question is what happens if we put this kind of direction-reversing active Brownian particle in an external potential, and we have actually studied it in the presence of a harmonic potential. I will not go into the details, but just as a teaser: what we see is that in the gamma, D_R, nu parameter space, where nu is the stiffness constant of the trap, there emerge two new kinds of passive and active phases compared to the ordinary ABP or RTP. These two new phases arise due to the presence of the reversals, and this is in the poster by Ion Santra; for details, you can check his poster. I will just finish with some open questions. The first one is, of course, whether one can experimentally see these different dynamical regimes, and as it turns out, for both Pseudomonas putida and Myxococcus xanthus, the values of gamma and D_R are such that the intermediate regime should be accessible experimentally, and also possibly with artificial active colloids, where the time scales might be tunable. Another question is, of course, what happens when these active particles interact with each other, what kinds of collective behavior one can expect. So these are the open questions for this direction-reversing active Brownian particle, and I finish by thanking you for your attention. Okay, okay, thank you. Now we start the questions; any questions from the audience? Okay, hi. Yes, Andrea Gambassi. So I have a question: at a certain point you showed that the mean square displacement along one direction or the other grows like t cubed, as if it were super-ballistic. Yes, yes. And I was a bit surprised by that, because I was expecting that somehow ballistic growth would have been the limit, like you cannot do better than that. No, no, this feature is actually not very new: even for the ordinary active
Brownian particle, if you look at the mean square displacement along the initial orientation, that actually grows as t to the power 4, and in the orthogonal direction it is t cubed. Okay, so here, yes. Okay, thank you, thank you. Okay, I see, you are right. Any other questions from the audience? Anyone from Zoom? Yeah, I have a question. Okay, go ahead. So, I'm Bishash, and I have a question about the assumption of Gaussianity in the noise processes: how does this correspond to experimental observations of this microbial run-and-tumble motion? Which assumption of Gaussianity, where? In the theta dot equation you have taken, if I remember correctly, white Gaussian noise. Yes. So how does this correspond to the microbial environments which actually display this run-and-tumble motion? No, for run-and-tumble motion you don't have this Gaussianity. For run-and-tumble motion, what you have is that theta changes by a discrete amount: the particle goes along a straight line, and then suddenly theta goes to some theta prime, it changes by a finite discrete amount, and then so on and so on; that is tumbling. Here it's more like an active Brownian particle, where the orientation changes slowly via some diffusive, continuous process. And where is the energy source located in these particles? For the microbes, it is within them, right, the ATP production, and they have to continuously expend this. Now how do you place these inanimate and animate objects together? Okay, so this kind of motion is typical of some Janus particles or catalytic swimmers: typically Janus particles which have one surface coated with something, and then you either have a chemical reaction or you shine light on them, which provides the energy, and then they move in a certain direction. So it's coming from some kind... No, we cannot hear you. Hello, hello? Yes. Was there any other question? There was a comment in the chat
about your movies — I don't know if you saw that. Oh, okay. Yes: you have shown in the first part — these are non-interacting particles, these are done with completely non-interacting particles, each of them spreading with time, and we are just looking at the position distribution. Can you please read the question in the chat? Yes, Cesare is asking: you have shown in the first part of the talk a movie of an expanding cluster of particles colored according to the value of sigma — is this simulation done with interactions among the particles? So the answer is no, these are completely non-interacting particles, just a collection to show the distribution at one go; these are like ensembles of independent particles. Okay, thank you. Any other question? Okay, no — we thank the speaker again, and then we move to the next talk, which is also an online talk, by Giovanni Volpe from the University of Gothenburg. Giovanni, are you there? Yes, here I am. You can start sharing your screen; you have again 25 minutes. Yes — so, can you hear me and see my screen well?
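The non-interacting ensemble discussed above — independent active Brownian particles whose orientation diffuses and whose propulsion direction occasionally reverses — can be sketched in a few lines. This is a minimal illustrative simulation, not the speaker's actual code: the parameter values (v0, D_r, gamma) and the simple Euler time-stepping are assumptions chosen only to show the ballistic-to-diffusive structure of the mean square displacement.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not taken from the talk)
v0, D_r, gamma = 1.0, 0.1, 0.5       # speed, rotational diffusion, reversal rate
dt, n_steps, n_particles = 1e-3, 5000, 2000

theta = np.zeros(n_particles)        # all particles start oriented along +x
x = np.zeros(n_particles)
y = np.zeros(n_particles)
sigma = np.ones(n_particles)         # internal state: +1 forward, -1 reversed

msd_x = np.empty(n_steps)
for t in range(n_steps):
    # Poissonian direction reversals with rate gamma
    flips = rng.random(n_particles) < gamma * dt
    sigma[flips] *= -1
    # rotational diffusion of the orientation angle
    theta += np.sqrt(2 * D_r * dt) * rng.standard_normal(n_particles)
    # self-propulsion along the (possibly reversed) orientation
    x += sigma * v0 * np.cos(theta) * dt
    y += sigma * v0 * np.sin(theta) * dt
    msd_x[t] = np.mean(x**2)         # MSD along the initial orientation
```

At times short compared to both 1/gamma and 1/D_r the x-displacement grows ballistically, while at late times the motion crosses over to ordinary diffusion; fitting the local slope of msd_x on a log-log scale in different time windows exposes the intermediate regimes discussed in the talk.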
Yeah, it's okay, you can start now. Okay, perfect. So, yes, well, thank you very much for the opportunity of giving this talk; it's very nice to be virtually there, and it's a pity that obviously we cannot meet in person. What I will speak about — I will give a very brief overview of the convergence between active matter and machine learning, and especially explain how machine learning can be used for active matter. To motivate this we have to go back a few years: all the fuss about deep learning comes down to a challenge that was created a few years ago — well, many years ago, actually — the ImageNet challenge. The challenge consisted in classifying images, so to identify essentially which objects are present in each image — where there are dogs, where there are cats, and so on. If you go back like 10 years, the smallest error that you could achieve with, let's say, classical machine learning algorithms was around 25 percent, which more or less means that one out of four images is misclassified. What is interesting is that starting about 10 years ago, in the following years, we see a huge decrease in the error in this challenge: we go down to around 5 percent, and then in 2016 down to 3.4 percent, which means that almost all images get classified correctly. This was thanks — as you can see in these blue histograms, which correspond to techniques that used deep learning, deep neural networks, deep convolutional neural networks to be specific — and you see that it goes down very, very fast. What really triggered the deep learning revolution was the fact that in 2016 the machine managed to do better than humans: humans in this classification task make more or less a 5 percent error, and the machine did better than that, like five years ago. From there people got really convinced that you could do analysis of
images automatically, and, well, the gates opened and the machine learning revolution flooded essentially every field of life, and in the end it also arrived in active matter and soft matter physics. So in this talk I will try to give a very compressed outlook on machine learning in active matter, and to do that in an efficient way I will present one example from my lab where we used machine learning successfully to solve one problem, and then give a brief overview of other case studies, as well as some ideas about the opportunities and challenges of machine learning. So let's start. Well, defining active matter is not really necessary for this audience, but let's keep in mind that active matter — matter subject to an influx of energy that drives it out of equilibrium — spans a huge number of orders of magnitude in time and space: we can go from biomolecules up to large animals; I don't know, herds of elephants can be conceived as active matter. So we have a huge range of scales, and things get even more interesting when we consider active matter in complex situations, like complex environments, or in crowded situations where we have a lot of active particles interacting with each other, and here we can think of many examples drawn from robotics, biology and other fields. All of this is just to give you an idea that active matter really covers all ranges of statistical physics, on all scales. Now we move to machine learning, and again I will give a very, very brief overview. With machine learning many people will think immediately of neural networks, in particular deep neural networks, and in fact this is an important field of machine learning. We have many kinds of neural networks: convolutional neural networks, particularly useful for images; recurrent neural networks, useful for time signals; reservoir computing. All of this is supervised learning. What does supervised learning mean? It means that you have a
need — you must have input data and output data, a set of data for which you know what inputs you have and what outputs you want to obtain from those inputs, and then you train your network to do just that; essentially these are very complex, very high-dimensional fitting procedures, if you wish. But of course these are not the only supervised methods: we should not forget that decision trees and their larger version, random forests, are very powerful tools that have also been very successful, but nowadays are a bit forgotten compared with neural networks, especially in our fields. One thing that is nice to notice is that this graph, as you see, is from 2020, so roughly one year and a half ago; new things have come up since then, so now we have a lot of new techniques like graph neural networks, the concept of attention, transformers and so on — neural networks keep on spawning new lines and new architectures all the time. In another category we have semi-supervised learning, which sometimes also goes generically under the name of reinforcement learning, and here we can think of adversarial networks, reinforcement learning proper, and genetic algorithms, which are quite forgotten in most fields but are actually very powerful, in my opinion. The idea here is that you don't have a set of inputs and desired outputs, but you have an algorithm, or a device, that has to interact live with a certain environment, and then this device somehow learns how to behave in that environment in order to reach the highest fitness. This is semi-supervised because now we don't have an explicit output; we just have an environment in which we can optimize the reward. And finally we have all the unsupervised methods, which sometimes also go under the name of classical machine learning, and which include a lot of methods that probably nobody nowadays thinks of too much as machine learning, going from regressions to principal component analysis,
k-means, self-organizing maps — methods that are often used to explore data. Okay, this gives you a very quick overview of machine learning as it was one year ago; as I said already, the field is evolving very fast, so new kinds of machine learning, especially new and more powerful architectures of neural networks, are coming up on a monthly basis, so this is work in progress in a sense. But in any case, this is machine learning, and now we can see how we can apply it to active matter and soft matter physics in general, and therefore also statistical physics. I will give one example in some detail and then some very quick examples drawn from work done in my lab. The example I want to give you is about digital video microscopy. It's a very powerful tool in statistical physics to make experiments with Brownian particles, because, you know, Brownian particles jiggle around, they move, so there is a component of stochasticity, but you can also apply very controllable forces on them, so you have a deterministic force field. The interplay between deterministic and stochastic elements maps very nicely into stochastic differential equations, or Fokker-Planck equations and so on, so there is a very deep connection to statistical physics. Well, when you do an experiment, what you see is actually not a Fokker-Planck equation but a video of the particles, which looks somehow like this — without the green rings, obviously — and then you have to identify the particles, so to find the green rings. What is often done nowadays in most labs around the world is this: basically you take the image acquired from the experiment and you threshold it — what does thresholding mean? you transform it into a black-and-white image for a given threshold, looking like this one — and then you calculate the centroid of each white spot. If you do that over time, then you can have trajectories for the particles and use them
to analyze lots of their properties: their diffusion properties, anomalous diffusion, forces acting on the particles, the biological activity of cells and so on. But, I mean, it's really surprising how powerful such a simple algorithm is, if you think about it, because this centroid-extraction algorithm permits you to achieve sub-pixel resolution. This means that we can actually determine the position of a Brownian particle down to tens or hundreds of nanometers easily, and with a bit of effort down to 50 or even 20 nanometers, which is pretty impressive considering that that's a fraction of the diffraction limit. Anyway, this is in an ideal world where you have perfect images; if you go to real life, often you have images that are much more difficult to track. This is an example from experiments done in my lab. In this experiment, specifically, we wanted to study how bacteria — these bright spots, which are motile — move in a background of passive particles, these round gray things that you see all around. Our idea was to study how these bacteria can open channels that are then metastable, and to study the statistical physics of this process, which is pretty interesting. The problem, obviously, is that before we can study the statistical physics of this, we need to actually track the bacteria and the particles, and you see here that we have a good range of problems that can arise in digital video microscopy: we have blinking of the bacteria; we have non-uniform illumination, because we want a very large area; and a very noisy environment, because we want to keep the illumination low so as not to bother and photobleach the bacteria — the experiment needs to run for hours, so we need to keep the illumination low. Now, the problem here is that your brain can track these things pretty well, but think of an algorithm: what the algorithm sees is a very zoomed-in version of this, and the question is, well, is this a particle or a hole between particles? It's not
that obvious if you are an algorithm and you see a single frame — like this one, for example. So it was very difficult. We tried with the best algorithms and the best students, and after weeks of effort we got this, which is bad: it doesn't permit us to do any analysis of the video. And the worst part is that we optimized this patch, and with the same parameters of the algorithm — a standard thresholding algorithm — we could not analyze the rest of the video, the other patches; it didn't work, let alone other videos. So it was really very little to show for weeks of work. This is the point where machine learning entered: we thought, since machine learning is so good with images, maybe it can even track the particles here. So that's what we tried, and here I'll explain very quickly how it works — conceptually, a simplified version of the neural network we employed. Imagine that you have this input image and you want to find the center of this particle. What you can do is to convolve it with different filters; you get a feature map, and you can do this with many different filters — don't worry about the numbers in the filters, they are random for what we care about at the moment. Then you have a convolutional layer, where you convolve the input with many filters, and you downsample it, mainly to reduce the number of pixels you have to deal with. Then you can have a dense layer, where you combine these values into a neuron, just summing them up with some scaling factors, these omegas — again, they can be random. And finally you get the position, again combining the outputs of these neurons, weighted by some other values. Of course, since everything is random in this network, what you get at the output is a random number, not the position. Now, what you have to do in order to get the position is to backpropagate the error — that's the very powerful error backpropagation algorithm, which tells you that if, in a
neural network, what you do is this: you measure the error and then you change these weights, one by one, in order to decrease the error for a given sample, and you do this many, many times with many, many different samples — we are speaking of hundreds of thousands, millions of samples here — eventually you end up with a network that can give you the center of the particle in a very reliable way. Now, obviously you see the problem here: since this is supervised learning, we need inputs and outputs, so we need to know in advance where the position of the particle is — and if we knew that, we wouldn't really need the neural network. Well, what we did then, first of all, was to use a more complex neural network, with more convolutional layers and more dense layers, but conceptually it is very similar to what I showed you before. To train it, we used simulated images. The advantage of simulated images is that, first of all, we can have as many as we want; secondly, we know exactly the position of the center, because we simulate them, so we can use backpropagation; and finally, it's very cheap to do, while it would be very, very expensive experimentally. So we did that, and it works very well. The next question is: does it work on experimental data?
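The train-on-simulated-images idea can be sketched with a toy stand-in. Everything here is an assumption for illustration only: a linear least-squares regressor replaces the deep convolutional network of the talk, and the "microscope" images are simple Gaussian spots with additive noise. The point is just the workflow — ground-truth centers come for free from the simulation, and the model fitted purely on synthetic data localizes unseen noisy spots with sub-pixel accuracy.

```python
import numpy as np

rng = np.random.default_rng(1)
S = 16  # image side in pixels

def simulate(n):
    """Synthetic microscopy-like images: one Gaussian spot at a random
    sub-pixel position, plus additive detector noise. Because we simulate,
    the ground-truth centers are known exactly -- that is the whole trick."""
    cx = rng.uniform(4, S - 4, n)
    cy = rng.uniform(4, S - 4, n)
    yy, xx = np.mgrid[0:S, 0:S]
    imgs = np.exp(-((xx - cx[:, None, None])**2 +
                    (yy - cy[:, None, None])**2) / (2 * 1.5**2))
    imgs += 0.05 * rng.standard_normal(imgs.shape)
    return imgs.reshape(n, -1), np.stack([cx, cy], axis=1)

# "Train": fit pixel -> position weights by least squares on simulated data.
X_train, y_train = simulate(5000)
A = np.hstack([X_train, np.ones((len(X_train), 1))])   # bias column
W, *_ = np.linalg.lstsq(A, y_train, rcond=None)

# Evaluate on fresh simulated images the model has never seen.
X_test, y_test = simulate(500)
pred = np.hstack([X_test, np.ones((len(X_test), 1))]) @ W
err = np.abs(pred - y_test).mean()   # mean absolute error in pixels
```

The linear model is vastly weaker than a CNN (it cannot cope with the distortions and clutter of the real bacterial videos), but it already beats the pixel grid: the held-out error is a fraction of a pixel, which is exactly the sub-pixel behavior the talk describes for the trained network.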
Well, then we have to go to experimental data. Here we have an optically trapped particle — optically trapped means that the particle is in an optical trap, a focused laser beam which keeps the particle in position — and we measure its position with the standard algorithm and with DeepTrack, and the two agree perfectly. Not surprisingly, because this is a very good image for an optically trapped particle. Also its probability distributions agree very well, and its autocorrelation functions agree perfectly, also with theory — this fit with the dashed lines. Okay, so we know that our network trained on simulated images works perfectly for experimental images for which standard algorithms also work perfectly. Good, but not good enough. So what we did next was to keep the same particle in the same optical trap, switch off the very good LED illumination, and switch on an incandescent lamp next to it, so the image of the particle gets very bad. The standard algorithm cannot track it, but DeepTrack can still track it perfectly: here you see the tracking done by DeepTrack and by the standard algorithm, and you see the difference; and here you see that while the standard algorithm fails in getting the probability distribution and the correlation function, DeepTrack can retrieve them basically perfectly. So this made us very confident about this algorithm, and we went back to our original problem, and we could now track all the particles, as you see, very nicely — actually, we can now track the particles for the whole video, and also for other videos, with the same trained neural network. Even better, we can also track and distinguish the bacteria, as you can see here with the round circles, which track the bacteria. Okay, let me skip this — I can just tell you that this is a comparison with other methods, and you see that DeepTrack is better than all these other methods on a standardized task, but I don't really want to go into the details. If you are interested in DeepTrack,
have a look at it, because the software is completely free. We also have a second version of DeepTrack that can be used in a much broader range of applications, also microscopy applications, and again all the software is completely free and very user friendly; all of this is available online on our webpage. What I want to show you now, very briefly, is some other applications of deep learning. One other application is to quantify anomalous diffusion. Here you can have different trajectories with different anomalous diffusion, and the anomalous diffusion is obviously characterized by the exponent of the mean square displacement: you can have subdiffusion, normal diffusion and superdiffusion. The question is: how can I recognize the exponent of the anomalous diffusion given just the trajectory? How well can I do that? The answer is that it's difficult, but it can be done very efficiently with recurrent neural networks, which are another flavor of neural networks. You see that the neural network and the mean square displacement give very similar results, both on simulated data and on experimental data — all these bright spots — which is nice. But of course it's never enough to be able to reproduce things that you can do with standard techniques; it's useful to use neural networks when you can do something beyond what standard techniques can do. So in this case what we did was to show that we can go to more challenging situations: for example, we can take a trajectory sampled at non-equal times, and still the neural network is able to reconstruct the value of the exponent alpha very, very accurately; or the neural network is able to detect the switching time and the anomalous diffusion exponent in a trajectory that switches between two different anomalous diffusion behaviors — in this case you can see that we can identify both the anomalous exponent and the switching times. By the way, this anomalous diffusion
characterization is a very interesting topic; in fact, with several colleagues, some of whom were mentioned here, we organized a challenge last year, which ended last November, I think, and in this challenge the idea was to detect the anomalous diffusion exponent, recognize the underlying model and so on. You can find the results on the challenge website, and, more importantly, there will be a second challenge, which we are organizing right now and which should open probably in a few months. So if any of you is interested in participating in the anomalous diffusion challenge, stay tuned, or drop me an e-mail — it would be very nice if you want to participate. Okay, another example of something we can do with neural networks that is not so easy to do without them is this one. Imagine that you have a Brownian particle trapped in a harmonic potential that switches between two different stiffnesses, a very strong stiffness and a very weak stiffness, like here — you see: weak stiffness, strong stiffness, weak stiffness, strong stiffness, and so on, with some period. We don't know the stiffnesses and the period a priori, but we have access to one trajectory, like this one. This one is a trajectory with relatively big differences in stiffness, and of course it's also color-coded, so it's very easy for you to visually see the difference; but when the difference becomes smaller it's not so easy to visually distinguish the two cases. The question is: can you characterize the three values — the two stiffnesses and the switching time? You can do this in a very systematic way using, again, a recurrent neural network. Here you see that we can predict, over a very large range of values, the high and low stiffnesses and the switching time in a very, very accurate way. Now, this is not interesting only for a potential that switches between two different
values of the stiffness: this is interesting for any dynamical model. As long as you have a model that you can parameterize, you can use the same technique to find the best parameters for your model given a trajectory of that model. Okay, let me skip this — this is more interesting: this is a very recent work in which we tried to apply some of these ideas to the obvious topic of testing and containment in epidemic outbreaks. Here we have a simple example of the SIR model in free evolution — this is a small variation on the SIR model — where we see how the disease spreads over time: you see that the number of cases increases and then goes down, as most of the population gets infected and recovers. Now, one way to prevent disease spreading is obviously contact tracing and confinement — lockdown, or, well, confinement of the active cases — and you can see this here: the blue ones are confined, and you see that contact tracing works well at the beginning, but then you end up having some breakout infections which spread the disease further, so contact tracing alone doesn't work so well. What you can do instead is to employ a neural network to tell you which individuals to quarantine; if you do that, with the same number of quarantined individuals you can actually completely stop the disease outbreak. So this is a nice application of neural networks to a problem of obvious current interest. The advantage of using neural networks is that you can decide which individuals to quarantine without having any explicit knowledge of the parameters of the underlying outbreak — you don't need to characterize it; the neural network somehow finds out the parameters of the outbreak and decides what to do accordingly, without you having to input this knowledge explicitly in any way. You input it through the training data, but not explicitly. Okay, so this was just a very quick list of examples,
obviously, I was very limited in time, so I didn't go into the details, but if you are interested you are very welcome to read the papers or contact me. Now I want — I guess I have a few minutes? Like two, three minutes. Let's say two. That's fine, I will just give a snapshot of the opportunities and challenges of machine learning. Opportunities: this is pretty obvious — there are plenty of opportunities in data acquisition and analysis, and in feedback control. Very interesting topics that can be investigated — and I didn't mention them much because it's really ongoing research — are the systematization of active matter, gaining insight into evolution and biological active matter, and controlling swarms of robots, and also active biological systems, through some elements that can control them. And finally, the holy grail, if you wish, is to find a way to embed intelligence into microscopic active matter for real, not by controlling it from outside with a computer. So what are the challenges that are still open? Well, of course we need benchmarks: how do we benchmark machine learning models against simple approaches? To give some guidelines: first of all, we need to benchmark always; we need to prefer simpler models; and we need to analyze and select the input features — which sounds obvious to a physics audience, but from the point of view of people in machine learning is like going back to the feature engineering of the 90s. And finally, it's very critical that trained machine learning models are not employed outside the range of training: employing a machine learning model outside the range of training is like extrapolating instead of interpolating, and I guess all of you know that extrapolation is a much trickier business than interpolation. So what are the open challenges, in my opinion? First of all, how do we know that a machine learning
model is correct and robust? How do we get insight into the black box that machine learning models often are? How do we use machine learning models informed by physics, where we impose some constraints that come from physics — think of a simple one, energy conservation? And how can we benchmark alternative machine learning models against each other — how do we decide which one is the best? These are open questions that I think are really on top of the mind of people working in this area right now. And finally, last slide: there are actually lots of benefits that the fields of active matter and, more in general, statistical physics can bring to machine learning. In my opinion these advantages come down to the fact that experimental active matter and statistical physics have access to very large databases of data that are very, very controllable: we can control a very large system of colloidal particles at the microscopic level, we understand the microscopic interactions, so we can make the connection between the microscopic and macroscopic behavior of a system, and we can do this over long times, large systems, and many time scales and length scales, which is not possible for a lot of other systems in machine learning. It's obviously not possible to have full access to the dynamics of a population of social individuals interacting on Facebook — we don't really have that access — but we do have access to, and we can play with and tune, the dynamics of microscopic particles and see how this affects the macroscopic behavior of a system, which is a huge opportunity for understanding which are the best and most finely tuned machine learning algorithms and devices that can then be employed also in other fields. And with that I would like to conclude. So, I tried to give a very, very quick overview: after briefly introducing active matter and machine learning, I tried
to convince you that there are problems that can be successfully solved with machine learning in active matter, and therefore also in statistical physics in general; I gave some very brief examples; and finally I tried to give an overview of the current opportunities and challenges, also highlighting the fact that machine learning can benefit from work done in statistical physics and active matter — so it's not a one-way road. And of course, if you are interested in knowing more about this, we had a review paper last year where most of these ideas are contained — again, it's a very fast-evolving field, so some ideas are not there, but most of them are. With that I would like to thank you and leave you with this slide, with the last in-person group picture of my group and the list of our funding sources. Thank you. Thank you, Giovanni; now it's time for questions. There is one here in the audience. I have a question — okay. My name is Jared, and I just wanted to ask: there's a technique I'm familiar with that was developed while I was at Cornell, PERI, or Parameter Extraction from Reconstructing Images, for getting the positions and radii of particles in disordered microscope images. Is this one of the techniques you compared with? Because my impression was that they approach the ultimate bound, the Cramér-Rao bound, for the radii. Yes, that's actually the technique that is shown, actually. Okay, and I agree with you, that's one of the best approaches. The problem — I can tell you where that technique breaks down, and why we also chose to use a fluorescent lamp: it doesn't work well when you have gradients of light, obviously, because then the centroid of the particle gets a bit shifted. Well, I think PERI models the light field. Oh, okay — so it's a different technique then; I don't really know the technique you mentioned. It could be — what we used was the radial symmetry technique, and I don't really know if it came from Cornell, but yeah, so that's
what we used — it's a very standard technique, a very successful one, which was proposed like 10 years ago, or, no, actually eight years ago; we don't model the light field. Okay, there is another question here in the live audience. Hello, I'm Erik Aurell from Stockholm — it was a very, very nice talk — thank you very much. I'd like to ask a very general question, because you also had some very general conclusions in your slides: how would you rate using machine learning to improve the analysis and understanding of the data you already know how to acquire, versus developing better methods that give more accurate images or data, which can then be understood without machine learning tools? Thank you. Okay, well, that's a nice question — nice to meet you virtually. I mean, obviously, if you can acquire better data that require less manipulation in order to be analyzed, that's always preferable. But there are many cases in which you have constraints on the acquisition that are really out of your control, so the data quality is fixed — you cannot really choose that. The other thing that often happens — which happened to us in some projects which I didn't present — is that with machine learning we might find that you can analyze your data and extract more information, and after you have done that, you, or some other people in your group, can then manage to find a more classical way of extracting the same information. In that sense, somehow, machine learning is a kind of front runner that finds out that there is some information, and motivates people, I guess, to find a better way to extract this information in a more classical way. In that sense, of course, this is very welcome when it's possible, because it's always better to have an explicit method that is understandable instead of a machine learning method that is a black box. So I hope that answers the question. We need to move on to the next speaker — thank you again, Giovanni. Then the next speaker
is Daniel Shadrach from Tanzania — are you there, Daniel? Yes, I am here. Okay, you can start sharing your screen. Okay, good, thanks. So, can you see my screen? Yes — 25 minutes. Thanks a lot. Welcome, and I thank the organizers for this wonderful conference and also for the invitation. My talk is going to be on soft matter biophysics, and in particular I'm going to put an emphasis on computer-aided drug design. We lost you — hello, Daniel, we cannot hear you. ...once the drug has been... Sorry, Daniel, the connection went down; can you start again? We didn't hear what you said. Okay, so I begin from the very beginning. Thanks a lot. My talk is going to be on soft matter biophysics, with an emphasis on how tools like molecular dynamics for computer-aided drug design have accelerated the process of drug design. Traditional drug discovery, design and development is quite challenging, and it takes about 15 years for a drug to be approved, but thanks now to the advancement of many techniques, including computational tools and advances in physics, which have helped a lot to understand and to manipulate materials at the atomistic scale, nowadays we can complement what could not be done for many years. So the process of drug discovery in the modern approach has a pipeline in which in silico approaches, using molecular modeling, can be incorporated alongside the practical, experimental approach — before, it was not so easy to do. We can predict most of the problems, like the solubility, the toxicity, how the drug is administered and how it could be excreted, which are the major reasons for many drug failures in the market. So with molecular dynamics actually we are... Going back — sorry, can you put your presentation in full mode, Daniel? Yes. Can you put your presentation in full mode? Yes, it is in full mode. Okay, are you switching slides? Are you still?
Okay, now you are...

Yes, I am presenting in full mode.

Well, we are also seeing the summary of the slides.

No, but I just put it in full mode. Do you see full mode?

No. Really, we are not seeing it in presentation mode.

You don't see it in presentation mode?

No. But we do see the slides.

You do see the slides. Okay, now it is in presentation mode.

Okay, go ahead.

Okay, thanks. So here we are integrating Newton's equations of motion, where the force is the negative gradient of the potential that defines the system, which is the combination of the bond, bond-angle, torsion, and non-bonded interactions. By doing so, we use molecular dynamics as a tool that can help us understand how a drug binds to, and unbinds from, its receptor; it can help us understand, for example, how this small molecule unbinds from the protein, how long it will stay, and what the binding and unbinding look like. Molecular dynamics has helped a lot in understanding this, in accelerating the work, and in understanding more of the experimental data. I will just show you some of the work that we have done in the group.

First, we characterized the interaction of a small molecule called linamarin, which is this one here, and what it acts on...

I don't think we see the slide. Which slide are you on now? Which slide number?

I think it's the fourth one.

No, we see the third.

Yes, it's the fourth. Maybe it is going slower than it should.

Okay, I'm told not to use presentation mode, so it will go like this. Is this okay with you?
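The decomposition just described, force as the negative gradient of a potential built from bond, angle, torsion, and non-bonded terms, has the schematic form below in a typical classical force field. This is the generic textbook expression, not necessarily the exact functional form used in the speaker's work:

```latex
\mathbf{F}_i = -\nabla_i V, \qquad
V = \sum_{\text{bonds}} k_b (b - b_0)^2
  + \sum_{\text{angles}} k_\theta (\theta - \theta_0)^2
  + \sum_{\text{torsions}} k_\phi \bigl[ 1 + \cos(n\phi - \delta) \bigr]
  + \sum_{i<j} \left( 4\varepsilon_{ij} \left[ \left( \frac{\sigma_{ij}}{r_{ij}} \right)^{12}
  - \left( \frac{\sigma_{ij}}{r_{ij}} \right)^{6} \right]
  + \frac{q_i q_j}{4\pi\varepsilon_0 r_{ij}} \right)
```

Here the first three sums are the bonded terms (harmonic bonds and angles, periodic torsions) and the last sum collects the non-bonded Lennard-Jones and Coulomb interactions the speaker mentions.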
Now we see number four.

Yes, I'm on number four. Thanks. So, we wanted to characterize the interaction of linamarin and the linamarin–enzyme complex. Linamarin is a small molecule found in cassava, and it has recently raised interest as an anti-cancer compound, but its interaction with this enzyme remained unknown at the molecular level. Because the protein is not available in the Protein Data Bank, what we did was homology modeling: we used different modeling tools, like SWISS-MODEL, I-TASSER, and MODELLER, to model the protein, and then we compared the models of the protein. After we had performed the molecular modeling, we checked the stability, and then we performed three molecular dynamics simulations: of the apo protein and of the protein in complex. After the molecular dynamics we extracted some ensembles, did a docking calculation of the receptor, that is the enzyme, with the ligand linamarin, and the best snapshot we subjected again to molecular dynamics; that is complex number two. So the molecular dynamics work comprised three molecular systems. This is the apo, the homology-modeled protein without the ligand inside; the red is complex number one; and the blue is complex number two. We see that the molecular-dynamics structure was relatively more stable when compared to the apo structure and to the whole ensemble of structures in all the systems that we ran, showing more stability compared to the modeled one. Then we were interested in comparing the interactions and the contributions of the residues of the protein, and how they interact with the ligand linamarin. We see that there is a difference in the residues between the apo protein, complex one, and complex two, complex two being the protein from the molecular dynamics simulation. Then we can see here, for
example, this residue here has a negative contribution to the interaction, while in the other system it contributes positively and interacts with the ligand, but here it does not interact. We saw that there are a few amino acid residues in the protein which could result in unfavorable interactions; however, we concluded that there is a strong interaction between linamarin and this enzyme. That highlights how molecular dynamics has helped us understand the interaction of a small molecule.

We have a lot of natural products that have been used locally in the management and treatment of diseases; here I will share a few of them. We have this compound here, at the bottom left, extracted from this plant, Withania somnifera. We tested these compounds computationally, in silico, as antibacterial or antibiotic compounds, because they have been used for a long time in the local community to manage many bacterial infections. Some of these compounds showed promising activity in silico, and then we set out to understand how they interact with the protein. So we chose one of the best-performing compounds from the docking and ran molecular dynamics simulations to ascertain the observed stability and relate it to what we saw in the experiment. Among the residues of the protein that contributed most, we found an isoleucine residue strongly interacting with the withanolide, one of the ligands that bound most strongly in silico; from the distances, we can see that it is really interacting strongly with the different residues, as also suggested by the decomposition of the binding free energy into the contributions of each residue. So this is helping us to get an atomistic insight into how the ligand, the molecule, the drug, is acting. We then isolated some of these molecules from the plant, characterized them with GC-MS, and then we wanted to
understand, because these are available in different quantities, which of them is the most active against the various bacteria. We found that this molecule here was more active compared to the other molecules that we obtained from the extract, and the results that we obtained from the molecular dynamics simulations were in good agreement with what we observed in the experiment, where this molecule showed good inhibition and a stronger interaction compared to the others.

We have also been applying computational methods to screen for and try to identify other small molecules for the treatment and management of COVID-19. We have a lot of natural products that were previously isolated in the lab of one of my collaborators, and what we did was a similarity search. First we started with our own database of many compounds; we screened the compounds of our database against the main protease of the virus, and the best ligand, ligand number 15 here, which showed the most stable interaction, we subjected to an ensemble screening against a database of FDA-approved drugs. We found three drugs that are in clinical application, and we took these approved drugs to study their stability and investigate how they could be binding, the binding mode. We found that one of these drugs, actually this one here, which is an approved drug, was the most stable, and of course some of them have since advanced into clinical trials for the management of COVID-19.

We then wanted to investigate how these small molecules could block the virus's entry into the human cell. The virus enters the human cell by attaching to angiotensin-converting enzyme 2 (ACE2), attaching strongly through residues like histidine 34 and arginine 393, and this is the receptor-binding domain of the virus. So ideally, blocking this interaction here is the goal, preventing the virus from entering the human cell. So again
we did some molecular dynamics simulations of these molecules, and we found that molecule number 8 here worked better than the others. Then we wanted to get insight into the role of water, because water has been known for quite a long time to mediate biological processes; it also mediates the interaction of drugs and maintains the shape of the protein. We can see here, at the interface between the viral protein and the human protein ACE2, that our molecule binds in between, but the stability of this molecule is also enhanced by the presence of the water molecules that surround the small molecule and provide stabilization. Here we tried to count the number of waters available near each residue. As an example, we can see that the residues interacting with this small molecule have different amounts of water around them, and this could be influenced by the nature of the amino acid residues: hydrophobic and hydrophilic residues will have different numbers of waters and interact with them in different ways. The radial distribution function shows that the interaction with water occurs at the same distance, but the intensity of the peak signifies the difference, I mean the intensity and the amount of water that is available.

Then we were interested in understanding the dynamics of this ligand and these residues, and we did metadynamics, an enhanced sampling method that really accelerates things and has an advantage over classical molecular dynamics simulation: when the drug is in its bound form and when it is unbound, we stop the calculation; we do this several times, and then we can calculate the free energy. Then we can see the free energy for the bound and for the unbound system here. It moved, because of the internet problem; I don't know, maybe I'll try again if it keeps misbehaving. I don't know if you can see here that our molecule is at the
interface of the two proteins, the human ACE2 protein and the receptor-binding domain, and it tries to disrupt the interaction here. Then we monitored the movement of this ligand as a function of time, and we tracked one residue that was forming the hydrogen bond stabilizing the interaction between the residue and the ligand, which is colored green here. We can see that over time it is really moving; this figure here tries to show the bound and unbound states of this molecule. I'll put it here, removing presentation mode. Thanks.

Now, a compound from the neem tree has also shown great promise in managing COVID-19 by preventing virus entry. We performed several series of experiments here, and we found that this molecule, isolated from the tree, acts by disturbing the recognition of the viral receptor-binding domain by the human ACE2, thereby preventing the entry of the virus into the human cell. To achieve that, we performed different enhanced-sampling calculations as well as classical molecular dynamics simulations. We can see, for example in figure one here, that the interaction with the protein and the binding pathway are somewhat different. We used different collective variables: CV1 here is the distance from the protein, and CV2 is the unbinding distance. We can see that the molecule spends most of its time in the bound state rather than in the unbound state, so there is a large free-energy well there. Again we tried to understand which residues contribute most to the interaction between the amino acids and the ligand, and we found that residues like phenylalanine 124 and this residue here, tryptophan 141, contribute highly to this interaction and to the stabilization of the ligand.

The process of drug design is also really influenced by the flexibility of the protein, so accommodating protein flexibility in drug design is very important. We have found that if we
do accommodate the residues...

We lost you again.

...during... Accommodating the flexibility of the protein in the computational aspect is very essential. As an example, we can see here that the binding of the two compounds, compound one and compound three, differs over the different structures: consistently, compound number one ranked worse compared to compound number three, but when we used the crystal structure, compound number one ranked better than compound number three. So this tells us that some compounds in drug design act as false-positive binders; therefore, in order to eliminate the false-positive binders, we need to accommodate and involve the flexibility of the protein, and this has helped a lot.

Now I want to share probably the last two things, about molecular simulation in the discovery of heat-shock protein inhibitors, where we are again trying to understand the role of water and of conformational flexibility. We were helped to some extent in that our simulations were able to meet some experimental data which had been obtained by another group and consistently reported. We can see, for example, here in the middle, that for the heat-shock protein, pitavastatin shows the lowest binding free energy, and pitavastatin also showed a very small effective concentration against the various cell lines that were being tested for cancer. So what we were doing was, to a large extent, in close agreement with the experimental perspective.

Now, experiments have shown good promise, and here I move to nanotechnology: the use of nanoparticles can enhance the delivery process of a drug and improve the half-life of poorly soluble drugs, and computational work has been employed to understand how the nanoparticle–drug interaction can be affected by the solvent and by changes in pH, as well as the release pattern.
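The false-positive effect described above can be illustrated with a toy ranking: scoring against a single crystal structure versus aggregating over an ensemble of receptor snapshots. The scores below are invented numbers (lower is better, as with typical docking energies), purely to show the mechanics, not the speaker's actual data:

```python
# compound -> [score vs snapshot 1, snapshot 2, snapshot 3, crystal structure]
scores = {
    "compound_1": [-6.1, -5.9, -6.0, -9.5],   # looks great only vs the crystal
    "compound_3": [-8.2, -8.0, -8.4, -8.1],   # consistently good across snapshots
}

def rank_single(scores, idx):
    """Rank compounds by their score against one structure (column idx)."""
    return sorted(scores, key=lambda c: scores[c][idx])

def rank_ensemble(scores, n_snapshots=3):
    """Rank compounds by the mean score over the MD snapshots."""
    return sorted(scores, key=lambda c: sum(scores[c][:n_snapshots]) / n_snapshots)

print(rank_single(scores, idx=3))  # crystal only: compound_1 wrongly ranks first
print(rank_ensemble(scores))       # ensemble: compound_3 ranks first
```

The ensemble ranking demotes the compound whose apparent affinity exists only for one rigid conformation, which is exactly the false-positive elimination the talk attributes to accommodating protein flexibility.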
We previously synthesized a nanoparticle from a poly(amidoamine) dendrimer and tried the encapsulation of a very poorly soluble drug, this molecule here, and we found that the dendrimer we synthesized was able to encapsulate the molecule to a high degree and release it, and the release was of course pH-dependent. Then, to gain more insight into this, we isolated many of these natural products in our group, and we have tried to encapsulate them using many nanoparticles, including chitosan nanoparticles. But one of the challenges we observed is that these molecules show some chemical instability during the formulation process, and to try to understand this, we used atomistic molecular dynamics simulations to understand the interaction and the release pattern where the experiment could not give us insight. We found, for example, that when the system with this same molecule is formulated in DMSO and in water, the DMSO really accelerates the release to be very fast compared to water. This is actually similar to what was observed in the experiment, where the water-formulated system would sustain the release of this molecule. But the behavior of the nanoparticle and its size differ in different solvents: in DMSO the size is really larger compared to water. We therefore concluded from this work that DMSO would not be a good solvent for the formulation of this chemically unstable molecule into a chitosan nanoparticle.

We have then been focusing on trying to understand the conformation and the stability of these natural products in different solvents; this has also been done by the students in the group, and I will just highlight one example, the linamarin I mentioned.

You have two minutes.

Thanks. So I will conclude with this part, about the self-assembly process and the solubility of niclosamide. This drug is very poorly soluble in water, and
its solubility has been a problem, so we are trying to understand how we can address it and what causes the poor solubility of niclosamide. When it is in an aggregated form, we found that it forms antiparallel π–π interactions and parallel π–π interactions, and these interactions are the ones that result in its poorly soluble nature. Then it was interesting to find that as the aggregation of niclosamide grows into different arrangements, for example the antiparallel-displaced one, it has a higher aggregation energy but a smaller solvation energy. So we found that when it is in the antiparallel form, like this, the solubility is lower, but when it is in the parallel arrangement, like this, the solubility is higher. If we want to improve...

We cannot hear you now. Hello, Daniel, we have lost you. Hello, Daniel?

Hello? Can you hear me? Hello? Can you hear me now? Hello? Hello?

...so that the antiparallel aggregated system is the one which is less soluble in water, while the parallel system is the one which is soluble in water but less stable. So if we want to improve the solubility of niclosamide, we need to destroy the already established antiparallel stacking interactions, thereby favoring the formation of the parallel π–π interactions in the system, as we observed here: the solvation free energy would increase with decreasing antiparallel interaction. I will end my talk here; this is a picture of the group that we have been working with, and of my group. Thanks to the organizers.

Okay, thank you. Hello? Hello? No, we... can he hear you? Are there any questions? Hello, can you hear us? You cannot hear us, Daniel? Anyway, are there any questions for Daniel when we are ready? Any questions from Zoom, or maybe in the chat?
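Much of the talk above compares bound and unbound states, and aggregated arrangements, through free energies along a collective variable. For unbiased sampling these can be estimated as F(s) = -kT ln P(s); the sketch below uses made-up samples and plain histogramming, and ignores the bias reweighting a real metadynamics analysis would require, so it only shows the mechanics:

```python
import numpy as np

def free_energy_profile(cv_samples, bins=20, kT=2.494):  # kT in kJ/mol near 300 K
    """Estimate F(s) = -kT ln P(s) from unbiased samples of a collective variable."""
    counts, edges = np.histogram(cv_samples, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = counts / counts.sum()
    with np.errstate(divide="ignore"):
        F = -kT * np.log(p)      # empty bins come out as +inf (never visited)
    F -= F.min()                 # shift so the most populated state sits at 0
    return centers, F

# toy samples: a well-sampled "bound" basin near s = 0.3 (illustrative only)
rng = np.random.default_rng(0)
s = rng.normal(0.3, 0.05, size=5000)
centers, F = free_energy_profile(s)
print(float(centers[np.argmin(F)]))  # the free-energy minimum sits near s = 0.3
```

A deep, heavily populated basin (small F) corresponds to the bound or more stable aggregate state the speaker describes, and shallower regions to the unbound or less stable arrangements.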
Well, I'm afraid we have to stop here, right? I don't think we can do more. Sorry, Daniel; maybe you can hear us. We stop the session now; any questions we will send you by the chat, or maybe later.

Okay, so for both the online and the remote participants, we reconvene at two. For the local participants, the food, as I said, is provided in lunch boxes with your name tag on them, where we got the coffee. For eating, you can stay here, you can go outside, you can explore the surroundings, but please do not enter the ICTP building: stay in the garden. There are some benches, I think, if you go up the road; uphill a bit you will find that there is a nice place. But be back by two. Okay, thank you.