Please find your seats. I'm Erik Aurell from Stockholm, the chairman of the afternoon session. It's my pleasure to invite Giuseppe Gonnella, who is "Gigi" on the title slide. Please. Thank you. Good afternoon. I'm Giuseppe Gonnella, and in this talk I will summarize the results of our group on the phase diagram of active Brownian particles. As a first thing, I want to thank the organizers for the opportunity of giving this talk and for the organization of this conference. Okay. The summary of this talk is the following. We have already seen some talks today about active matter, so you already had an introduction to the topic. In this particular seminar I will focus on some general features of the phase diagram of active Brownian particles. So there will be a short introduction, and then some results about the phase diagram of disks and of elongated molecules. The main line of this talk will be to put these results in connection with what is known about the melting transition of disks, and with what is known from the Kosterlitz-Thouless-Halperin-Nelson-Young theory. So let's go through a very quick introduction. Active matter, as we have already seen, is out-of-equilibrium matter, but with a driving which depends on the local state of the particles. In this sense it is a bit different from other kinds of driven non-equilibrium matter. As we have seen in the talk by Giovanni Volpe, we can have active matter on very different scales, and these systems share some common features. What I'm going to describe is mostly related to self-propelled colloids, so at least it can be used to describe the experimental behavior of bacteria or of self-propelled colloids. So let's start from some very general observations.
I said that I want to put in contact the behavior of passive disks with the behavior of active particles. So first of all, let's have a look at a simulation of standard hard disks, interacting only through a very short-range repulsive interaction. There is a tracer particle, the yellow one, and this is the kind of motion one can observe at relatively low density. What we want to consider in this talk is an active version of this system, already introduced today and not so different from the active Ornstein-Uhlenbeck particles with interactions seen in the morning; but in the morning seminars only the single-particle behavior was considered. The particles we have in mind are like this: they have another degree of freedom, a polarization. This polarization evolves in some way, for example by rotational diffusion, and the new fact is that it affects the equation of motion of the disk. We will see the equations of motion explicitly later, but let me first show the picture: there is a force acting on the disk along this polarization axis, which, as I said, is an extra degree of freedom of the particle. Active disks of this kind have been realized in many different ways; this is a quite well-known experimental realization made by a group in Germany, where the particles move by thermophoresis. These are colloids with one half of the surface coated by carbon, and when laser light acts on the colloid there is some heating that produces a flow around the disk, pushing the disk along the red arrow. I will come back later to the full meaning of this picture, but let's look at what happens in the same simulation when I put into the equation of motion also the force coming from this polarization.
What happens is that the standard Brownian motion seen before becomes a sort of persistent Brownian motion: we have straight pieces of trajectory, persisting in the same direction, and this is an example. So the diffusive properties change, and one can study a lot of properties at low density, but the most interesting facts come when one considers what happens with these persistently moving particles at higher density. So let's come back to the same picture as before; this is taken from the paper by the German group, with colloids all pointing in the same direction. You see that if the particles all go in the same direction they tend to become stuck at certain points. Of course there are also thermal fluctuations, due to the thermal noise, that change the positions, and there is also the diffusive motion of the polarization; but if the density of these active colloids is high enough, what you observe is the formation of clusters of particles. This is a very much studied and by now quite well understood phenomenon, called motility-induced phase separation: these active colloids, even in the absence of any attractive force, undergo a phase separation between a low-density phase and a phase made of high-density clusters. This is an example of a simulation. Sorry, stop the movie. Okay. These are molecules, dumbbells actually, but for what I'm going to say it is the same. There is some problem with the movie. You see that there are clusters of particles forming, and if the movie continued you would see a more complete picture. In the particular case of this movie, since these are dumbbell molecules, they combine with each other in such a way that the clusters also rotate, and this is something also seen experimentally, but I don't want to discuss it now.
So, this is what comes from simulations, and it was quite well known before we considered these problems. The question that we considered, with friends and colleagues, was to make a connection with what happens in passive, standard disks, where the melting transition also occurs at high density. This motility-induced phase separation is observed at quite low density; when I say density I mean the packing fraction of the system, and you observe phase separation for a standard active force at packing fractions of the order of 0.2, 0.3, 0.4, depending on the parameters of the system, while the melting of disks is observed at about 0.7, and we will see the precise numbers shortly. What was completely missing in the studies of this problem was the relation with the standard transition of disks, which is something quite natural to consider: one can take a standard disk system, switch on the active force, and see what happens. This had been done little in the literature, so it was our starting point. Maybe I need to summarize some very well-known and a bit old results on the melting transition in two-dimensional systems. It was already known, in fact, from the work of Landau, from the Mermin-Wagner theorem and from other results, that you cannot have proper long-range order in two dimensions. But the ones who proposed the realistic scenario for the transition in two-dimensional systems were Halperin and Nelson, and also Young, by applying the Kosterlitz-Thouless theory for topological transitions. Following this classical scenario, at very high density you find the following.
In two dimensions, at very high density, you expect to have a solid with quasi-long-range positional order; then, if you decrease the density a bit, you have a liquid-crystal-like phase, the hexatic, with only orientational order, where the positional order is completely lost. By "lost" I mean that the correlation functions related to the positions of the particles decay exponentially, while quasi-long-range order means a power-law decay of the correlation function of the proper order parameter you are considering. Then, if you decrease the density a bit more, you go into a disordered system. This scenario has been studied a lot in simulations of many kinds, Monte Carlo and molecular dynamics, because it was not easy to confirm numerically. Only in very recent years was it clarified, especially by the group of Bernard and Krauth, a French group actually, that the first transition, between the solid and the hexatic phase, is indeed a Kosterlitz-Thouless transition, while from the hexatic to the liquid what you find in simulations is a coexistence region between the hexatic phase, with higher density, and the liquid phase, with lower density. One of the reasons why this transition was not found easily is that the interval between the coexistence densities is very small, and you need very massive and accurate simulations to establish these results. In this picture you find, on the left, a visualization of the hexatic order parameter at the boundary of the coexistence region between the hexatic and the liquid: everything red means that you have, everywhere,
orientational order; if you have many colors, like here, it means the system is disordered. In the other column you have a contour plot of the density, and if you look carefully you can see that the hexatic phase corresponds to a slightly higher density. Those are our simulations, confirming the results of the previous groups. So the standard picture for the phase behavior of disks in two dimensions is now this one, and almost everyone believes in it: there is a Kosterlitz-Thouless transition from solid to hexatic, and a first-order transition towards the liquid phase. Our point was to put these facts in the context of active particles. This is the model I mentioned before, originally introduced by Fily and Marchetti for describing the active colloids we saw earlier; i is just the index of each particle. For each particle we simulate an overdamped Langevin equation, where all this bit, complicated to read, is just the excluded-volume interaction, introduced by a very stiff potential: not Lennard-Jones, but only the repulsive part of a Lennard-Jones-like potential with a very high power (64) in the repulsive term; we did the same as other groups. Then there is the standard thermal noise, this white noise here, and the new fact in the equation of motion, with respect to standard Brownian motion with excluded-volume interactions, is the presence of this active force, whose strength is constant and whose direction, the angle, changes diffusively in the standard way. So this is, in some sense, the standard model for active Brownian particles; with respect to the Ornstein-Uhlenbeck version of the model, this one has been studied more, and they share more or less the same features.
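The dynamics just described can be sketched in a few lines. This is only an illustration, not the code of the study: the parameter values are hypothetical, and the repulsion here is a plain WCA (12-6) potential rather than the stiffer power-64 one mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, illustrative parameters
N, L = 32, 10.0              # particles, periodic box size
v0, Dt, Dr = 1.0, 0.1, 0.5   # active speed, translational / rotational diffusion
eps, sigma = 1.0, 1.0        # WCA repulsion (the study used a stiffer power)
dt, steps = 1e-4, 1000

# Start on a square grid to avoid initial overlaps
grid = int(np.ceil(np.sqrt(N)))
xs = (np.arange(grid) + 0.5) * L / grid
pos = np.array([[x, y] for x in xs for y in xs])[:N]
theta = rng.uniform(0, 2 * np.pi, size=N)

def wca_forces(pos):
    """Purely repulsive pair forces (cut at 2^(1/6) sigma), periodic box."""
    f = np.zeros_like(pos)
    rcut2 = (2 ** (1 / 6) * sigma) ** 2
    for i in range(N):
        d = pos - pos[i]                 # vectors from particle i to the others
        d -= L * np.round(d / L)         # minimum-image convention
        r2 = (d ** 2).sum(axis=1)
        r2[i] = np.inf                   # exclude self-interaction
        mask = r2 < rcut2
        s6 = (sigma ** 2 / r2[mask]) ** 3
        mag = 24 * eps * (2 * s6 ** 2 - s6) / r2[mask]
        f[i] = -(mag[:, None] * d[mask]).sum(axis=0)   # push i away from j
    return f

# Euler-Maruyama integration of the overdamped Langevin equations:
# dr_i = (v0 e_i + F_i) dt + sqrt(2 Dt dt) noise,  dtheta_i = sqrt(2 Dr dt) noise
for _ in range(steps):
    e = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    pos += dt * (v0 * e + wca_forces(pos)) \
        + np.sqrt(2 * Dt * dt) * rng.standard_normal((N, 2))
    theta += np.sqrt(2 * Dr * dt) * rng.standard_normal(N)
    pos %= L
```

The only ingredients are the excluded-volume force, the thermal noise, and the constant-magnitude active force along a diffusing orientation, exactly as in the equation on the slide.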
Actually, the Ornstein-Uhlenbeck version is easier to study analytically at low density. The parameters that we have in this model, and that we varied in the simulations, are basically two: the density, i.e. the number of particles in a fixed system size, and the intensity of the active force. Actually, we use a number called the active Péclet number, so in the following, when I show Péclet, you can think of it as the intensity of this active driving force. So this is the model for active Brownian particles, and this is the main result of our simulations: I will first show you the phase diagram and then some comments about it. Basically, what was known before our work was mainly just this part here, the part on motility-induced phase separation, where at density 0.1 and Péclet 100 you already see separation between the dilute phase and dense domains. Just to give you some idea about these numbers: the Péclet number evaluated for bacteria is of the order of a few hundred, 100, 300, 400 depending on the case, and for self-propelled colloids it is a bit less, but still of the order of a hundred. What we did was to study the other part. At Péclet zero (this is an enlargement, in the inset, of what happens at Péclet zero) we have the solid, then a very tiny hexatic region, then coexistence between liquid and hexatic, and then the liquid. Our results show that when you increase the Péclet number, this hexatic region becomes larger for a while, and the coexistence region at a certain point disappears: the coexistence region ends in a critical line. The critical line, as I anticipated; there is no time to show everything about it.
Our indication is that it is a critical Kosterlitz-Thouless-like line. So, when you increase the activity, you go from this coexistence to a more standard scenario, where you have solid, hexatic and liquid, and then at the end you arrive in the MIPS region, which is here, where there is another transition line between high and low density. The phase separation here is of liquid-gas type, actually, because on the boundaries, for example at Péclet about 100 or a bit less, you have no hexatic order: on the boundaries, I mean, what coexist are two phases without hexatic order, at higher and lower density. So this is the main result of one of our papers, and it is based on the study of the pressure, which I don't want to comment on much now. You see that at small Péclet you have a horizontal regime of the pressure against the density, and this horizontal part becomes shorter when you increase Péclet from zero to three, and then disappears. Then again, when the Péclet is quite high, like in the bottom-left picture here, you again have a horizontal part in the behavior of the pressure, which corresponds to MIPS. The other important ingredient that we used in our analysis was the behavior of the correlation functions. For example, let's comment just on the right picture here. This is the behavior at, I think, Péclet 10; yes, it is written there. You see that the yellow curves are in the solid phase, and then at a certain point they change behavior and decay very fast when you go into the hexatic phase. This is the translational correlation function, and similar things can be found for the other correlation function, describing the hexatic-liquid transition. Now, something newer that I want to show here.
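For reference, the hexatic order parameter behind these correlation functions, psi6(j) = (1/6) * sum_k exp(6i theta_jk) over the neighbours k of particle j, can be sketched as follows. This minimal version uses each particle's six nearest neighbours as a stand-in for the Voronoi neighbours normally used, so it is an illustration, not the analysis code of the talk.

```python
import numpy as np

def psi6(points):
    """Per-particle hexatic order parameter psi_6, using the 6 nearest
    neighbours of each particle as a simple proxy for Voronoi neighbours."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    out = np.zeros(n, dtype=complex)
    for j in range(n):
        d = pts - pts[j]
        r = np.hypot(d[:, 0], d[:, 1])
        nbr = np.argsort(r)[1:7]                 # 6 nearest neighbours, skip self
        theta = np.arctan2(d[nbr, 1], d[nbr, 0])  # bond angles
        out[j] = np.exp(6j * theta).mean()
    return out

# On a perfect triangular lattice every bond angle is a multiple of 60 degrees,
# so |psi_6| = 1 for particles away from the boundary.
a1, a2 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])
lattice = np.array([i * a1 + j * a2 for i in range(9) for j in range(9)])
order = psi6(lattice)
center = np.argmin(np.linalg.norm(lattice - lattice.mean(axis=0), axis=1))
print(abs(order[center]))   # close to 1 in the bulk of the lattice
```

The hexatic correlation function of the talk is then the spatial correlation of this complex field, decaying as a power law in the hexatic phase and exponentially in the liquid.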
There is no time to describe the whole story, but these topological transitions are related to two different kinds of defect-unbinding transitions: the solid-hexatic one is related to the unbinding of dislocations, the hexatic-liquid one to the unbinding of disclinations. [Chair: let me give you another two or three minutes, not more.] Yes, so let me just tell you this: the Kosterlitz-Thouless-Halperin-Nelson-Young transition refers to these localized defects, but in simulations you actually find many more extended defects, clusters of defects, which in the hexatic phase are actually the dominant ones, as you can see from the statistics in our paper. What we found is that there is a percolation transition of these defects that occurs very close to the liquid-hexatic transition. And here there was a big question, which I just mention: whether it is the percolation transition that distinguishes between a critical and a first-order liquid-hexatic transition; this was a big open problem in soft matter. In both cases we found percolation of defects, and this is another part of our work, together with the finite-size analysis that we did for studying this percolation. I don't dwell on this, but let me just show this picture, because we also studied dumbbells. These dumbbells are self-propelled along their main axis, and the rest is more or less similar to the disks, but they have a non-symmetric geometry. Since they do not have a convex shape, they aggregate more easily, because they can become jammed together, and we have studied the phase diagram of dumbbells in the same way. Here there is a different feature with respect to the disks, in the sense that there is no interruption between the coexistence region at low Péclet, corresponding to the passive limit, and the one that corresponds to motility-induced phase separation.
This was considered somewhat unexpected in the literature, because everyone considered only the motility-induced phase separation region, while in the case of dumbbells we see that there is always a coexistence region, already at small Péclet, which then extends and becomes larger when you increase the activity. These are some of our results; there is also the percolation, which we consider an important point. Thank you very much. [Chair:] We thank the speaker. We have a few minutes before the next talk; may I ask the person technically responsible to bring on the next speaker, who is an online speaker. Yes, please, do you want to speak into the microphone? [Question:] What controls the width of the blue region? Do you have a way of engineering it so as to make that phase larger? [Answer:] The width of the blue region in this diagram... this diagram is just as a function of Péclet. So your question is which other parameters we could change to modify the structure a bit. What we found, actually somewhat by chance, is that when inertial effects are important the hexatic region becomes larger, for example: when you go from overdamped to underdamped motion, what we found is that the critical line does not go towards the high part of the phase diagram. [Question:] I was asking because it is really tiny at the beginning. [Answer:] If you change the potential... I cannot fully answer you, but let me tell you that the French group has done a lot of different kinds of simulations, and that region always remains very tiny. You can change the potential, but what changes then is the coexistence region, not the width of the hexatic. [Chair:] I guess we have time for one very short question, if there is one. Otherwise we move on to the next speaker. No short question? Okay, so we move on to the next speaker.
We move on to the next speaker, who is David Dean, and who will join us over Zoom. There. I will stand up, so that you can see me, somewhere in the middle, when you have five minutes left. You have 25 minutes for the talk and five minutes for questions; you can also take less if you like. Please go ahead. [Dean:] Well, thank you for inviting me. It's a real shame that I couldn't be there, because I see the weather in Trieste is absolutely fantastic, and it's absolutely awful here in Bordeaux. I'm going to talk about a nice little problem that's related, vaguely, to active particles, and generalizes some of the questions that people ask about general active-type stochastic processes. This work was done with Satya Majumdar, who's at the Université Paris-Saclay, or Orsay as we used to know it. This was really a sort of COVID collaboration: we started talking about this problem, Satya and I, by Zoom, while he was stranded in Bangalore and couldn't get a flight back to France. He had this problem on his mind, and the other problem he had was that he was running out of beer and cigarettes because of the lockdown. So anyway, this was an interesting Zoom collaboration. Let me just explain the basic problem. The basic problem is: imagine you take this diffusion-type equation here, d^n x / dt^n = eta(t), so I take an nth derivative, a generalization of an acceleration, and I drive the process with white noise. This is a well-studied problem: white noise is the classic passive thermal noise. If I just have n equal to one here, then integrating the equation, i.e. integrating white noise, I just get standard Brownian motion. Okay. And what I'm going to do is look at this class of processes, and I'm going to assume that everything starts off at zero.
So all the derivatives, from the initial position and the first derivative up to the (n-1)th derivative, I set to zero. If I integrate this equation with n equal to two, that's what is known as the random acceleration process; the formal solution is the integral of a Brownian motion, so you're looking at the area under a Brownian motion. And obviously you can ask what happens for general n. Now, the interesting thing is that n doesn't necessarily have to be discrete. This solution here, x(t) = (1/Gamma(n)) times the integral from 0 to t of (t-s)^(n-1) eta(s) ds, was written down by the great Paul Lévy in 1953, and it generalizes the acceleration process driven by noise to non-integer n. Now the thing is, because the noise is Gaussian and you're basically integrating a Gaussian, you've got a sum of Gaussian variables, so everything stays Gaussian. So if you solve this problem for any sort of Gaussian noise, you've completely solved the problem just by computing the variance, and then you put that into your Gaussian probability distribution function. If you take this representation and compute the variance, you find that something strange happens: for n bigger than a half everything seems okay, but at n equal to a half you run into a problem, and below that you formally end up with a negative variance. So something strange is happening to this process, and it is ill-defined for n less than a half. Obviously, if we're interested in this, we'd like to understand that. Let me see. There we go. Now, when I mentioned that n can be continuous, obviously there are realizations of this. Imagine you take some sort of elastic model; this could be a Rouse model of a polymer, or just a model for an elastic interface. So if you look at this energy functional here, you have this surface-energy term.
And imagine I just pull at the origin with a sort of delta-function force, so I'm basically pulling the interface at the point x equal to zero; this is my driving force. If I just look at the response to this force — the problem is linear if I ignore temperature — the general form of the overdamped dynamics is that dh/dt is equal to some dynamical operator, depending on the type of dynamics you want to consider, acting on the functional derivative of this energy functional with respect to h. If I ignore this force term, one choice of the operator gives model A, which corresponds to the Edwards-Wilkinson equation; another choice gives dynamics which conserves the height of my model, a height-conserved model; and for two separated phases the effective dynamics gives the t-to-the-third scaling of model B dynamics. So this dynamical exponent can take several different values depending on the model you're looking at. And if you solve this in Fourier space — you take a Fourier transform and then look back at the position h(0, t) — you find exactly this sort of integral: this (t - s) to some power here comes from integrating over the Fourier modes, and the power depends on the dimension of the space, as you see up here, and on the dynamical exponent. So the Edwards-Wilkinson model, the Rouse polymer, corresponds to n equal to a half, and the conserved model corresponds to n equal to a quarter, for d equal to one. So you can realize these strange integral forms.
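For the integer case n = 2, the random acceleration process, Lévy's representation can be sanity-checked directly: with unit white noise the variance t^(2n-1) / ((2n-1) Gamma(n)^2) reduces to t^3 / 3. A minimal Monte Carlo sketch (the step counts and sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Random acceleration process: d^2 x / dt^2 = eta(t), unit white noise,
# x(0) = x'(0) = 0.  Levy's formula predicts Var x(t) = t^3 / 3 for n = 2.
T, nsteps, nsamples = 1.0, 400, 50000
dt = T / nsteps

x = np.zeros(nsamples)
v = np.zeros(nsamples)
for _ in range(nsteps):
    x += v * dt                                        # integrate the velocity
    v += np.sqrt(dt) * rng.standard_normal(nsamples)   # integrate the white noise

var_mc = x.var()
var_exact = T ** 3 / 3
print(var_mc, var_exact)   # should agree within a couple of percent
```

The same check makes the pathology at small n visible: the formula's prefactor 1/(2n-1) changes sign at n = 1/2, which is the negative-variance problem mentioned above.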
Okay, now let's ask what happens if the noise is colored. For the moment I'm just going to talk about the variance; I'll come back to the full distribution later. I take this commonly used correlation function, which decays exponentially; it is the correlation function of lots of different things: it can be generated by an Ornstein-Uhlenbeck process, or, as we're going to see, by a telegraphic noise, which is what I'll come back to. Now, if you look at this integral, rather surprisingly — normally you can only do asymptotic analysis, but if you tease Mathematica enough and help it with the integral — you find that this variance, the mean of x squared of t, is actually given by a closed formula for all times. Normally, when we look at these sorts of formulas for persistent random walks, we do the small-t and large-t expansions. At small times, what you basically find is that the variance grows like t to the 2n; remember Andrea was talking about ballistic motion, and this is the analogue of the ballistic regime: it is related to the fact that the particle hasn't yet flipped its spin. If you look at large times, then for n bigger than a half the variance grows like t to the power 2n minus one; for n exactly equal to a half you find that it grows like a log, plus a constant that you can compute as well, so it grows very, very slowly at this critical point; and for n between zero and a half you enter a localized phase, and you can show that the late-time asymptotics is given by this formula here. As usual with relaxation times, the short-time behavior is valid for t less than one over gamma, and the long-time behavior for t greater than one over gamma.
So basically, what we can say is that we've solved this problem for an Ornstein-Uhlenbeck process in equilibrium: the equilibrated Ornstein-Uhlenbeck process, suitably rescaled, has exactly this correlation function, so we've solved that problem. But now let's look at the other problem. The process I was talking about, d sigma / dt equals minus gamma sigma plus white noise, is still Gaussian; this is what we call the Ornstein-Uhlenbeck process. But let's look at this other process: let's take an equilibrated telegraphic noise. It's used as a simple model for run-and-tumble bacteria in one dimension: you can think of the state where it's plus one as the particle moving forward in space, to the right, and then it can flip and go to the left. So we look at a system which changes between these two states at a continuous-time rate gamma: with probability gamma times dt I change the sign, and the steady state is obviously that the probability of going left is equal to the probability of going right, equal to one half. A number of authors have looked at this, and you can compute the full probability distribution for this n-equals-one run-and-tumble model of a bacterium. The interesting thing you find is that there is a sort of light-cone structure: if you look at the last two terms here, with the delta functions, this one corresponds to a particle that started moving off to the right with positive velocity, and the exponential of minus gamma t is just the probability that it has never flipped; this is the corresponding term for a particle which started going to the left. And this variable rho here, you see, is a light-cone-like variable as well.
It's exactly this sort of v squared t squared minus x squared. So it's normal, because this process has a finite velocity: unlike the diffusion equation, the probability that it moves faster than its maximal speed v0 is zero. It's a process which cannot go off to infinity in finite time. Just to check things out, you can simulate the variance, and this process is very easy to simulate: what you do is just integrate the equation in terms of the flipping times, and you find this formula here. So the whole process can be generated by producing a sequence of these flipping times. And you can see here, for n equal to two, the growth of the variance; here we have the slow logarithmic growth; and here, for n equal to a quarter, which is of course less than a half, you have the saturation to a finite value. The simulated variances behave in the same way as for the colored Gaussian noise, as they should. Now let's have a look at what happens when you look at the distribution. These simulations are done at time t equal to five, starting from a symmetric initial condition: in terms of velocity, you are evolving from two delta functions at the origin, each with probability one half. You can see here in this simulation that the variance grows like t to the 2n minus one, but these extremes here are the particles that really haven't flipped a lot, and they grow like t to the power n. If you go to n equal to a half, you see the structure changing slightly: this region here will be growing very slowly.
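The flipping-time construction just mentioned can be sketched in a few lines. With flips occurring at rate gamma, the equilibrated telegraph noise has correlation exp(-2 gamma t) (the talk's gamma convention may absorb the factor of two); the sketch below generates exponential waiting times and checks that correlation. The parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

gamma = 1.0        # flip rate: the sign changes with probability gamma * dt
lag = 0.5          # time lag at which to test the correlation
nsamples = 100000

def parity_of_flips(t, rate):
    """Draw exponential waiting times until time t is passed; return
    (-1)^(number of flips), which equals sigma(0) * sigma(t)."""
    s, clock = 1, rng.exponential(1.0 / rate)
    while clock < t:
        s = -s
        clock += rng.exponential(1.0 / rate)
    return s

samples = [parity_of_flips(lag, gamma) for _ in range(nsamples)]
corr_mc = np.mean(samples)
corr_exact = np.exp(-2 * gamma * lag)   # equilibrium telegraph correlation
print(corr_mc, corr_exact)
```

The full process x(t) is then obtained by weighting these sigma values with Lévy's kernel (t - s)^(n-1) / Gamma(n) and integrating, exactly as for the Gaussian noises above.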
And this region here will actually be saturated: this region here will be going like t to the half, while this region here will be growing logarithmically. Here we go to n equal to a quarter, and basically what happens is that this central region continues to grow with t, but here the scaling is different, so there's a real difference between n bigger than a half and n less than a half. Actually, there are lots of strange things happening, which I'll come on to. Okay, so this atomic measure just comes from the fact that these are particles that have never flipped. If I call this x star of t, you can easily show — this is the maximal displacement — that x*(t) = v0 t^n / Gamma(n+1), just by integrating the equation directly. And so these peaks are coming from this term here, the atomic part of the measure, and of course they decay to zero very quickly as time becomes large. Okay, so it looks like quite a difficult process, because in principle you have all the derivatives: in order to analyze the process you would need to know x, dx/dt, and all the lower derivatives, and this is what makes something like the random acceleration problem very difficult to handle. So this is the formula; now, if I just make a change of variables, s goes to t minus s. This doesn't look as if it's done me any good: it's just changed the t minus s here to s, and the s here to t minus s. But the thing is, if I look at this process sigma(t - s), this is just the process started at t and run backwards in time, and because I'm taking sigma to be in equilibrium, it has the same statistics as the process run forward in time. Okay, so this is something that holds only for the statistics at a single time.
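The peak position x*(t) = v0 t^n / Gamma(n+1) quoted above follows from putting a never-flipping sigma = 1 into Lévy's kernel: x*(t) = (v0 / Gamma(n)) times the integral from 0 to t of (t-s)^(n-1) ds. A quick midpoint-rule check, with hypothetical values v0 = 1, n = 1/4 (the midpoint rule copes with the integrable endpoint singularity, at the cost of a percent-level error):

```python
import numpy as np
from math import gamma as G

v0, n, t = 1.0, 0.25, 3.0

# Midpoint rule in the substituted variable u = t - s:
# integral of u^(n-1) du from 0 to t, divided by Gamma(n)
M = 2_000_000
u = (np.arange(M) + 0.5) * (t / M)
quad_val = (u ** (n - 1)).sum() * (t / M) * v0 / G(n)

closed = v0 * t ** n / G(n + 1)   # = v0 t^n / (n Gamma(n))
print(quad_val, closed)
```

The quadrature slightly undercounts the singular endpoint but lands within a few percent of the closed form, confirming the t^n growth of the atomic peaks.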
And the way of seeing that is, if you want, you can look at the moment generating function, the Laplace transform of the PDF, which is given by this. If you compute the moments of x(t) and x̃(t), using time-translation invariance and time-inversion symmetry, you basically find that all the correlation functions turn out to be the same. So you can actually work with this process here. And while you couldn't take the derivative of the original process, you can take the derivative of this one, and you find a really simple process. So this equivalent process, which is statistically identical in law at a single time, obeys this equation here. This is why you can actually analyze the problem. So the key to analyzing the problem is to do a Feynman-Kac analysis; there are actually a number of different ways of solving this problem, and this is the first way we did it, the way that we published. So let me look at this generating function: I'm looking at this expectation value over all the paths which finish with σ(t) equal to plus or minus one. Okay, so Feynman-Kac sounds very complicated, but it's very easy to see what happens: all you do is work out what happens when I go from t to t plus δt. The thing is, if σ doesn't flip, which happens with probability 1 - γδt, you have all of this exponential with its integral between zero and t, and you just pick up a little bit more of the integral, between t and t plus δt, with σ equal to one. Okay, so basically you get a term like this.
This is sort of the loss term, if you will, and this is the gain term: basically it just says that the process flips with rate γ, so with probability γδt you end up in the state with a minus sign. Obviously you've also moved a little bit, so the integral has changed, but that change is of order δt, so combined with γδt it gives a δt² term. You do exactly the same trick for the functional u₋, and you end up with these two coupled equations. Now if I look at equilibrium initial conditions, so if I set t equal to zero: well, there's nothing in the integral, so this exponential term is equal to one, and the expectation value is just one half, because σ is equal to one or minus one with equal probability. So there you have your initial conditions. So now it actually looks like a very simple problem. You have lots of different ways of doing it, but a nice way is to look at the sum of these two functionals, which removes the conditioning on the final point, because you're adding the two delta functions, which gives one; and then the difference of the two functionals. Okay, so if you use that, you find there's a very simple equation for uₙ(t), and there's a very simple equation for wₙ(t). And the initial conditions, and this turns out to be quite subtle, are uₙ equal to one and wₙ equal to zero. Now, uₙ is the sum of u₊ and u₋, so it's just the Laplace transform of the probability distribution we're interested in. Okay, but now you have to be slightly careful with the initial conditions, because it looks as if du/dt should be zero, since it is proportional to w, and w(0) = 0.
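For concreteness, here is a numerical sketch of the reduced system just described; this is my reconstruction, not the published code. With f(t) = v₀ t^(n-1)/Γ(n), the sum u = u₊ + u₋ and the difference w = u₊ - u₋ are assumed to obey u' = μ f(t) w and w' = μ f(t) u - 2γw, with u(0) = 1 and w(0) = 0. For n = 1 the process is the ordinary telegraph one, and u(t) is known in closed form, which gives a check:

```python
import math

def generating_function(t, n, gamma, v0, mu, steps=2000):
    """Integrate u' = mu f w, w' = mu f u - 2 gamma w with RK4,
    where u(t) = E[exp(mu x(t))] and f(t) = v0 t^(n-1) / Gamma(n).
    (For n < 1, f diverges at t = 0 and the initial condition needs
    more care, as discussed in the talk; here we assume n >= 1.)"""
    def f(s):
        return v0 * s ** (n - 1) / math.gamma(n)

    def rhs(s, u, w):
        return mu * f(s) * w, mu * f(s) * u - 2.0 * gamma * w

    h, u, w, s = t / steps, 1.0, 0.0, 0.0
    for _ in range(steps):
        k1u, k1w = rhs(s, u, w)
        k2u, k2w = rhs(s + h / 2, u + h / 2 * k1u, w + h / 2 * k1w)
        k3u, k3w = rhs(s + h / 2, u + h / 2 * k2u, w + h / 2 * k2w)
        k4u, k4w = rhs(s + h, u + h * k3u, w + h * k3w)
        u += h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
        w += h / 6 * (k1w + 2 * k2w + 2 * k3w + k4w)
        s += h
    return u

if __name__ == "__main__":
    t, gamma, v0, mu = 2.0, 1.0, 1.0, 0.5
    num = generating_function(t, 1.0, gamma, v0, mu)
    om = math.sqrt(gamma ** 2 + (mu * v0) ** 2)   # closed form for n = 1
    exact = math.exp(-gamma * t) * (math.cosh(om * t)
                                    + gamma * math.sinh(om * t) / om)
    print(num, exact)
```

The n = 1 closed form follows from exponentiating the 2x2 flip matrix for (u₊, u₋); for general n the same ODE integration goes through, which is essentially what makes the Feynman-Kac route tractable.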
So you see that for n less than one this term here is diverging, so you actually have to be a bit careful: you have to integrate the equation for w in order to find the effective initial conditions correctly. So these are your initial conditions, and you can see the divergence at n equal to a half, so you see again this business about n equal to a half being fundamental in the problem. Now, in order to solve this problem we actually make a large-deviation ansatz. You can argue this in a slightly more systematic way, but you can motivate it like this: if I'm interested in late times, I'm going to look at the scaling variable x divided by x*(t). This variable, which I call w later, is basically something between minus one and one. Okay, and if I assume this large-deviation form and analyze it, then, making the change of variables z = x/x*, it becomes this form here, and this is the corresponding variable for w. So this Feynman-Kac functional is characterized by this function hₙ(w), which, if I go back to the calculation in terms of z, is given simply by a saddle point. So basically it means that if I can compute this function hₙ(w), I can invert this Legendre transformation and compute the large-deviation form of the probability distribution function. Okay, so now things get a bit technical, but the interesting thing is that if you insert this ansatz into the equation for uₙ, what you find is that this function gₙ is simply related to hₙ by a first-order differential equation.
You can go ahead and solve all this, and without going through the details, what you find is that hₙ(w) is given by this form here. Okay, so you just solve a quadratic equation and integrate this first-order differential equation with the quadratic term. Now, in order to solve the problem you need to determine the constant c and the signs of these two square roots. And it actually turns out that, despite the fact that n equal to a half looked to be the important point when I was looking at the variance, here you can see that n equal to one is playing an important role, and we have to consider the two cases, n greater than one and n less than one, separately. So in the case n greater than one, when we take the limit t going to infinity, we want to keep this scaling variable w finite. Because n is bigger than one, this means I need to consider the limit where μ is going to zero. And this tells me that hₙ(w), as w goes to zero, should actually be equal to one (actually, there is a mistake here on the slide), and by symmetry the derivative at the origin should vanish. From these facts you manage to find that c is equal to zero and you have to take the negative root. Okay, so you find this full form for the Legendre transform of the full large-deviation function. Now, when n is less than one, you need to consider the boundary conditions as μ goes to infinity, because if you want to consider the case n less than one, you can see here that μ will have to go to infinity to keep w of order one. So this changes the analysis, and the interesting thing is that we can no longer use the boundary conditions at w equal to zero to solve the problem.
But what we can do is look at how we expect the large-deviation function to behave at the edge, for the process which hasn't flipped: this is just the probability of not having flipped, and this is the maximal displacement. And this applies at both the left and the right edge, so you actually find that in this case the large-deviation function is equal to one minus the modulus of w. You can then solve the differential equation for hₙ, and you find... oops. Actually, excuse me, I've just opened an email that Satya sent me; I don't know what I've done to my PDF file. Anyway, I hope you can still see the formula: you actually find this different form for hₙ(w), so it changes when you go to n less than a half. And you can basically solve everything exactly and plot the solutions using Mathematica, which has a parametric plot function; that is quite useful, and Satya showed me how it works. And then you look at the asymptotics: for n greater than a half, you basically have a region where you see Gaussian behavior, so this is compatible with the variance. And indeed, when you look near z equal to one, you can see that for n between a half and two you find this singular behavior: it goes like one minus z to the power one half. But you actually find that n equal to two is also a special point, and beyond it this critical exponent near z equal to one becomes one over n, which matches the two here, of course.
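The parametric-plot trick is easy to mimic outside Mathematica: given hₙ(μ), the Legendre pair is traced parametrically as w(μ) = hₙ'(μ) and φ(w) = μ hₙ'(μ) - hₙ(μ). A tiny sketch with a toy quadratic h (my example, not the hₙ of the talk):

```python
def legendre_point(h, mu, dmu=1e-5):
    """Trace the Legendre transform of h parametrically:
    returns (w, phi) with w = h'(mu) and phi = mu h'(mu) - h(mu),
    using a central finite difference for h'."""
    dh = (h(mu + dmu) - h(mu - dmu)) / (2.0 * dmu)
    return dh, mu * dh - h(mu)

if __name__ == "__main__":
    h = lambda m: 0.5 * m * m     # toy h; its Legendre transform is w^2 / 2
    pts = [legendre_point(h, m) for m in (0.5, 1.0, 2.0)]
    print(pts)
```

Sweeping μ over a grid and collecting the (w, φ) pairs is exactly what a parametric plot does; no explicit inversion of w(μ) is ever needed.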
And for n less than a half, as z goes to zero you find this is no longer Gaussian: you find this z to the power one over n form, and Gₙ is given by this explicit form here.
You have five minutes left.
Can you still see my screen? I saw that you froze for a moment. So you can only see half of the screen, right? Okay, let me stop the share and share the screen again. That's actually much nicer. Sorry; it's nice to see you as well, and all the people I haven't seen for a while. And I've got a good suntan, because I was on holiday in Wales, which is always a good place to get a suntan. Let me see if I can get back to the presentation. Maybe you can just wrap up in words. Yeah, okay, I'm sorry about that; my computer has just done something very strange and I can no longer get out of this. So, basically, what we've shown is this: you take this very simple run-and-tumble process and you generalize it by using this run-and-tumble noise, telegraphic noise, as the basis and integrating it up; so you can be looking at the area under these sorts of processes, and we've seen that it can apply to polymers as well, and to interface models. And basically we've seen that, with respect to the Gaussian model, the region where n is between zero and a half now makes some physical sense, and it has completely different behavior at late times to the case where n is greater than a half.
And there are also a few extra little features, sort of minor phase transitions in the critical exponents, basically at n equal to one and n equal to two. So, I thank you for listening, and I'm really sorry; I don't know what I've done.
Let's thank the speaker. We are happy anyway. So, to get ready, I call the next speaker, Joanna Sułkowska. And I think we have time for one question, I guess; I'll come down with the mic.
Hello, I'm Benjamin. Thank you for the talk. I wonder how much you can generalize this to fractional driving noise, or if you can just absorb it into a different n?
Yeah, actually fractional Brownian noise is quite a bit different to this. I mean, when Lévy introduced it, when you look at the correlation function, actually one of the criticisms of the Lévy model was that it gave too much importance to what happens at early times with the noise. So perhaps there's some trick, I don't know, that allows you to do it, but I suppose, if things are Gaussian, you could probably just go ahead and calculate most things; if you're looking at Gaussian processes, then there shouldn't be any problem.
But it has strange correlations, as far as I remember.
Yeah, it has strange correlations. You'd have to put them in, but in principle I guess you can just integrate it up; it's exactly the same calculation that I did for the two-point function. But you have a different two-point function there, so you have to look and see where the variance exists or where it diverges.
No, I remember there were some calculations for the integral of Brownian motion, and when they were generalized to the integral of fractional Brownian motion they were much more difficult.
Yeah, the integral of fractional Brownian motion. Okay, I think that's good; we have to speak again. Okay, thank you. Thank you. Thanks a lot.
And I can see Joanna Sułkowska. Yes. Very good. So should we switch to Polish? I don't think so; I think it would be a kind of limited opportunity, but nice to see you after such a long time. Yes, nice to see you. So should I share my screen? I should present you first. So it is my pleasure to present the third and last speaker of this afternoon session, Joanna Sułkowska from the University of Warsaw. The floor is yours, please.
Thank you very much, and thank you for the invitation. I think my talk will be slightly less technical than the other talks, but I hope it will still be interesting; it will connect topology with statistical physics and biology. So let me share the screen. I hope you can see my screen now. Yes, probably yes. So, as I said, I will talk about the connection of statistical physics, proteins and topology. So let me just move on. Yes. So you know that knots are very common in our daily life. You can find knots easily, for instance when you go sailing: you use knots to moor your sailing boat. You can see knots in nature, where for example trees can make links. Knots are very common in art, and you can see them in the decoration of many nice old buildings. Knots are very common in chemistry, they are found in physics, and they exist in biology, in DNA. However, for a very long time it was believed that knots cannot exist in proteins.
So for example, in 1994 it was suggested that proteins are not knotted: at that time about 400 protein structures were known, and some mathematicians analyzed these polymers and found no knots. And one of the conclusions was that, from the statistical point of view, knots probably cannot be formed in proteins, because it would be too difficult for them to fold to the native state. So, just to remind you about proteins: we can also think about them as polymers. Proteins are made of 20 different amino acids; they are connected in a linear chain, and this chain folds to some unique three-dimensional structure. This unique three-dimensional structure is the form that we call native, and it is the biologically active one. We usually say that proteins are classified based on the sequence, then the secondary structures, alpha helices and beta strands, and then the three-dimensional fold. But what I would like to tell you is that you also need to take into account that a protein can be knotted, because today we know that at least seven percent of proteins possess non-trivial topology. As you can see in the figure on the left side, this is a protein, and by visual inspection it is very hard to see whether it is knotted. And as you probably know from mathematics, knots are only properly defined on a closed curve. So in this case you have two approaches: you can just pull the protein by the termini and find out whether it is knotted, or you can take a more statistical approach: you extend the termini to infinity, close them on a sphere, and then use the polynomial invariants, for example the Alexander polynomial or the HOMFLY polynomial, to find the type of the knot. However, a protein can also make something that we call a lasso.
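As an aside on how knot detection is done in practice: before closing the chain and computing a polynomial, pipelines commonly first simplify the polygonal chain with the KMT (Koniaris-Muthukumar-Taylor) reduction, which removes a vertex whenever the triangle formed with its two neighbours is not pierced by any other segment of the chain; this preserves the knot type. A minimal sketch of my own, for a closed chain:

```python
import numpy as np

def segment_hits_triangle(p, q, a, b, c, eps=1e-9):
    """Moeller-Trumbore test: does the open segment pq pierce triangle abc?"""
    d, e1, e2 = q - p, b - a, c - a
    h = np.cross(d, e2)
    det = np.dot(e1, h)
    if abs(det) < eps:                 # parallel or degenerate triangle
        return False
    s = p - a
    u = np.dot(s, h) / det
    if u < eps or u > 1.0 - eps:
        return False
    qv = np.cross(s, e1)
    v = np.dot(d, qv) / det
    if v < eps or u + v > 1.0 - eps:
        return False
    t = np.dot(e2, qv) / det
    return eps < t < 1.0 - eps         # strictly inside the segment

def kmt_reduce(points):
    """Simplify a closed polygonal chain while preserving its knot type."""
    pts = [np.asarray(p, dtype=float) for p in points]
    changed = True
    while changed and len(pts) > 3:
        changed = False
        i = 0
        while i < len(pts) and len(pts) > 3:
            n = len(pts)
            a, b, c = pts[(i - 1) % n], pts[i], pts[(i + 1) % n]
            blocked = False
            for j in range(n):
                if j in ((i - 1) % n, i):   # the two edges of the triangle
                    continue
                if segment_hits_triangle(pts[j], pts[(j + 1) % n], a, b, c):
                    blocked = True
                    break
            if not blocked:
                del pts[i]                  # triangle is empty: remove vertex
                changed = True
            else:
                i += 1
    return pts
```

A planar convex polygon (an unknot) collapses to a triangle, while a polygonal trefoil cannot be reduced below its minimal stick number; the much shorter chain is then handed to the closure and polynomial steps.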
So in this case you have a protein chain that is closed by a stable bridge, which we call a cysteine (disulfide) bridge, and then you can find that one of the tails pierces the loop. Moreover, you can find that some proteins possess something that we call links: you have chains making stable connections in such a way that two closed circles pass through each other. So, since from the statistical point of view we would expect a long polymer chain to form knots, of different types, and in the case of proteins we know that only about seven percent are knotted, you can ask why proteins do not form knots, or why knots in proteins are rare. So the question that I am asking is how to characterize knots, entanglement, and non-trivial topology in proteins. Additionally, a basic, fundamental question on the border of biology and physics is: what is the biological role of entanglement? Maybe knots were formed during evolution and maybe we will observe more of them; as you know, AlphaFold, using artificial intelligence, has recently predicted a huge number of structures, so maybe based on this we will learn that knots in proteins are common. And the question that I would like to explore with you today is how knotted, lasso and linked proteins can fold, and whether thermal fluctuations are sufficient to make knots in proteins. So, how can a protein fold? Without going into the details of protein folding with non-trivial topology: we assume this is the free-energy landscape of the protein; you start from some random configuration, the protein can make some local interactions, and it can easily find a way, in reasonable time, to the minimum of the energy. So we assume that the native contacts, those that exist in the final conformation, are sufficient to guide the protein to escape from local traps to the native state.
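For links, the simplest quantitative check is the Gauss linking number of the two closed loops; a discretized Gauss double integral already distinguishes a Hopf link from two separate circles. This is my own illustration, not the group's actual pipeline:

```python
import numpy as np

def linking_number(curve1, curve2):
    """Discrete Gauss linking integral for two closed polygonal curves:
    Lk = (1 / 4 pi) * sum_ij (t_i x t_j) . (r_i - r_j) / |r_i - r_j|^3,
    using segment midpoints r and tangent increments t."""
    def midpoints_and_tangents(c):
        c = np.asarray(c, dtype=float)
        nxt = np.roll(c, -1, axis=0)
        return 0.5 * (c + nxt), nxt - c

    r1, t1 = midpoints_and_tangents(curve1)
    r2, t2 = midpoints_and_tangents(curve2)
    diff = r1[:, None, :] - r2[None, :, :]
    dist3 = np.linalg.norm(diff, axis=-1) ** 3
    cross = np.cross(t1[:, None, :], t2[None, :, :])
    return np.sum(np.einsum("ijk,ijk->ij", cross, diff) / dist3) / (4 * np.pi)

if __name__ == "__main__":
    th = np.linspace(0, 2 * np.pi, 400, endpoint=False)
    ring = np.stack([np.cos(th), np.sin(th), 0 * th], axis=1)
    hopf = np.stack([1 + np.cos(th), 0 * th, np.sin(th)], axis=1)
    print(linking_number(ring, hopf))                          # magnitude ~1
    print(linking_number(ring, hopf + np.array([5.0, 0, 0])))  # ~0, unlinked
```

The linking number is an integer invariant, so the discretized sum landing near plus or minus one (Hopf link) or near zero (unlinked) is a robust classifier; the sign just records the relative orientation of the two loops.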
However, this funnel landscape is not so easy to understand from the point of view of knotted proteins, because in this case not all parts of the landscape are accessible to the protein: you cannot cut the protein chain just to make a knot, so the protein has to explore only some part of the landscape. So the question that I would like to ask, or am asking, is how such a protein can fold, or how a protein can make a knot. You cannot study proteins with non-trivial topology with all-atom molecular dynamics simulations, because it would take too long, but you can use another, very simple class of models, which we call structure-based models; they are based on a Lennard-Jones-type potential that mimics the native interactions. If you have questions, I am happy to talk about this later, but what I would like to tell you is the result that we found already a long time ago: we found that it is possible, that thermal fluctuations are sufficient, to fold the knotted protein; however, you do not observe random knotting.
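To make "structure-based model" a bit more concrete: in such models each native contact typically gets an attractive Lennard-Jones-type well whose minimum sits at the native distance, so the fully folded state has energy minus epsilon per contact. This is a generic Gō-like form that I am assuming for illustration; real models differ in details (10-12 wells, angular terms, and so on):

```python
import numpy as np

def native_contact_energy(coords, contacts, native_dists, eps=1.0):
    """Sum of 12-6 Lennard-Jones wells, one per native contact (i, j),
    each of depth eps with its minimum at the native distance r_nat:
    V(r) = eps * ((r_nat / r)^12 - 2 * (r_nat / r)^6)."""
    energy = 0.0
    for (i, j), r_nat in zip(contacts, native_dists):
        r = np.linalg.norm(coords[i] - coords[j])
        sr6 = (r_nat / r) ** 6
        energy += eps * (sr6 * sr6 - 2.0 * sr6)  # equals -eps at r = r_nat
    return energy
```

Evaluated at the native coordinates this gives minus epsilon times the number of contacts, which is why "fraction of native contacts formed" works as the folding reaction coordinate later in the talk.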
There is a kind of unique pathway that allows the protein to make a knot, and this pathway you can see here. You first have to make a loop, and it is not a random loop: the loop is exactly where the knot is located in the minimum of the energy, in the native state. Then you have to make something that you do in your daily life when you tie your shoelaces: you make a slipknot loop. The slipknot loop, which is what you can see in the middle, has to be pushed across the knot-forming loop; at that stage there was just a twisted loop, but now you have to move the slipknot loop through it, and when you pull, as when you tie your shoelaces correctly, you know that you make a knot. So this is how the protein makes a knot; such moves are also known from the mathematical point of view. So this is telling you that the protein can make a knot, but in a very particular way. The success rate, as you can see here, is very low, only 2%, but it is a kind of proof of concept that the native contacts are sufficient to guide the protein to fold. Probably, if you add some additional interactions, the probability increases, but then you have to know what type of interactions the protein can use to make a knot. So the message is that knotting of a random polymer is different from knotting of a protein. Now, one of the most complicated types of knots found in proteins is a knot with six crossings, the 6₁ knot, which is a twist knot. You can make a twisted loop, which you can see here, and then you can twist and twist; if you twist two times, then all you have to do is pull one of the termini across the loop. In this way you can even fold a protein that is quite long, above 200 amino acids, in the same way, because you cross the topological barrier just once. So that's one thing to think about:
the thermal fluctuations and the protein: if you have such a mechanism, where you twist, twist, twist and cross the topological barrier once, then maybe the protein can solve the knotting problem. However, there were experiments from Sophie Jackson's group in Cambridge. She was working on a similar protein, and she found that the protein can self-tie; moreover, if you add what we call chaperones, proteins that can help, they speed up this process. Additionally, she made a very nice experiment: she attached two huge proteins, or domains, to both termini of the protein, and she found that this protein can still self-tie. However, here there is a kind of problem with the interpretation of the experimental results: you have to be sure that you really untied the protein first, because based on all our knowledge up to now, if you unfold a knotted protein by high temperature, by huge thermal fluctuations, you will unfold the protein, but the knot will still be embedded inside the chain. So this is a very nice experiment, but an additional proof that the protein was untied would further support this pathway. So this is telling you that proteins can self-tie; you may not need additional help to make a knot. On the right side you can see a protein that is composed of three domains, and only the green one, the middle domain, is knotted; it is the deepest knot ever found in proteins. It is the same kind of thing that was suggested over five years ago by Sophie Jackson, but such a protein was found only recently, let's say three years ago. And now you can ask the similar question: can we fold this protein, and how can this protein fold? Then you can think that maybe, because the protein is much longer, random
knotting can happen here: the protein can make a huge loop, and then one of the termini can cross above this loop, and you will make a knot. The other possibility is the pathway that I was suggesting before: you make a twisted loop in the native position, then you drag one of the termini across, and you make a knot. However, there is also another possibility. In biology there is something called the ribosome; you can think about this as a tube which makes the protein chain, and when the protein chain is pushed through this tube, maybe there can be some special, unique conformation, and there will be a force to push the chain across the loop. Because in both of the random-knotting cases you still need some external force to push the termini across the loop, and the thermal fluctuations are probably not sufficient; and with such long termini it is hard to imagine that this mechanism would work. But we tested this with a very simple theoretical model. So we asked the question: can we make a random knot when we have such a long protein? Then you have to make two experiments: you can create shallow knots, or you can unfold the protein and start from a configuration with a deep knot; and then you can ask whether, from the shallow knot, the knot can move to the native position, or whether, when you start from the deep knot, you can fold the protein. And what we found out is that we almost never observed that, if you make a shallow knot, this knot will move along the chain, even at very high temperature, to the native position. So making shallow knots is not sufficient to lead the protein to the native knotted conformation. However, if you start with a deep knot, almost in the native position, then you can probably fold the
protein. However, there is this problem that you have to jump from the shallow knot to the deep knot, and that is something that we did not observe in any polymer model that we constructed that was similar to the protein. So then you can make another experiment: you can try to fold the protein in such a way that you have the twisted loop, and see if the thermal fluctuations are enough to push the termini across the loop; and unfortunately here we rather observe misfolding, and never correct folding. You can make yet another experiment: you can take only this green part, the one that is supposed to be knotted, cutting away the blue domain and the red domain, and see if this green part can fold. In this case we found out that, if the termini are not too long, the protein can self-tie, and you still observe the slipknot mechanism. So the thermal fluctuations are sufficient here, and we do not observe random knotting; we always observe the slipknot mechanism: you decrease the entropic contribution by making a small loop that is pushed by thermal fluctuations across the twisted loop. So what about the ribosome? This is the ribosome, the molecular machine that makes the protein chain by putting together the amino acids, and when you attach one amino acid to another there is an additional force that pushes the red chain, the nascent protein, out of the ribosome. So you have this additional force that maybe can help with the knotting. There is another way to present the ribosome: as just a tube that has some curvature at the end. In this case you can build a very simple statistical, physical model: you can construct a tube model with an additional force that comes from the formation of the nascent chain, and you can additionally simulate, with a van der Waals type of
interaction, the interaction between the protein and the surface of the ribosome. So we constructed such a model and we asked the question: can the protein self-tie on the ribosome? I hope the movie will work; yes. So this is our presentation of the model that we constructed. First we create the ribosome, and then you can see that one of the chains is going outside of the ribosome, and the green part is the part that is supposed to make a knot. What we observe here is the formation of a twisted loop on the surface of the ribosome; then you can add the interaction with the negative charge of the surface, and let's see what happens: you see that the chain is just pushed across the loop. With this, we found that this is one of the possibilities that biology can use to fold the protein. If you have a stable protein, there is enough force that this protein can dissociate from the surface of the ribosome. So we believe that this is one of the possibilities of how a protein can fold when it possesses a very deep knot. Additionally, I told you that some proteins can make a lasso; this is when the protein chain is closed by a cysteine bridge. So, to find out whether proteins can possess such a conformation: this is a protein, and as you see, by visual inspection it is quite difficult to see whether this loop, which is closed by the orange bridge, is crossed by any terminus. So what we did is construct something that we call a minimal surface; you know this from soap films and from mathematics, and you construct it by a triangulation method. We span the minimal surface on the closed protein loop, and then we check whether this minimal surface is crossed by a terminus, and by vector products we know the direction of the crossing.
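The piercing test just described reduces to segment-triangle intersections with a sign. As a toy version, with a centroid fan standing in for a true minimal surface (fine for a planar loop; this is my own simplification), one can count signed crossings of a tail through the triangulated surface; the sign compares the tail direction with the triangle normal, which is the vector-product direction test, and two crossings of the same sign are what flag supercoiling:

```python
import numpy as np

def signed_crossings(loop, tail):
    """Span a centroid-fan surface on the closed loop and count signed
    crossings of the open polyline `tail` through it: +1 along the
    triangle normal, -1 against it."""
    loop = np.asarray(loop, dtype=float)
    tail = np.asarray(tail, dtype=float)
    c = loop.mean(axis=0)
    total = 0
    for k in range(len(loop)):
        a, b = loop[k], loop[(k + 1) % len(loop)]
        n = np.cross(a - c, b - c)               # triangle (c, a, b) normal
        for s in range(len(tail) - 1):
            p, d = tail[s], tail[s + 1] - tail[s]
            denom = np.dot(d, n)
            if abs(denom) < 1e-12:
                continue                          # segment parallel to plane
            t = np.dot(c - p, n) / denom          # plane-intersection parameter
            if not (0.0 < t < 1.0):
                continue
            x = p + t * d
            # edge tests: x must lie inside triangle (c, a, b)
            if all(np.dot(np.cross(v2 - v1, x - v1), n) >= 0
                   for v1, v2 in ((c, a), (a, b), (b, c))):
                total += 1 if denom > 0 else -1
    return total
```

A tail passing once through a planar square loop gives plus or minus one depending on its direction; a net value of plus or minus two would correspond to the supercoiled lassos discussed next.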
So when you apply this very simple technique to all protein structures, you can find out that, in the case of the lasso, at least 80 percent of these proteins possess this type of topology. Moreover, you can find that there are different types of this topology. You have something that we call the simple lasso, where one of the protein tails crosses the loop; but you can also find something quite interesting, also from the statistical point of view, something that we call supercoiling. This is what you can see on the left side: in this case the tail is crossing the surface more than once, here two times, from the same direction. So this is something quite interesting, and there is an interesting question to ask: are the thermal fluctuations sufficient to make such a topology in the time in which the protein can still fold and have the correct biological function? So we did the same kind of experiment that we did before. This is the protein with the supercoiling, and then you can see that we span the minimal surface, and this minimal surface is crossed two times in the same direction. Because the cysteine bridge can be formed under oxidative conditions and can be broken under reducing conditions in the cell, we first checked whether this protein can fold, what the speed of the folding is, and what the free-energy landscape looks like without the cysteine bridge. This is what you can see here: this is the number of native contacts, the contacts that are formed in the native conformation, in the minimum of the energy, when the protein is folded. You are not expecting all the contacts to be formed; this is why the minimum is not at one but at a thermal-equilibrium value of around 0.9, and there is one barrier that you can see. You can as well use a different representation, of root-mean-square
deviation versus the number of native contacts, and you can see there is a well-formed barrier. But then you can ask an additional question: what will happen when you have non-trivial topology, when the loop is closed? In this case you can observe that almost the majority of the contacts are formed before you make the non-trivial topology; however, there is a very clear topological barrier when you have to make the additional crossing. The first crossing is not difficult, because the protein is still unfolded and you can cross easily, but to make the supercoiling you have to cross this barrier again. We found, as you can see here, that it is still possible: the thermal fluctuations are sufficient to cross this barrier. And there is also a nice observation: in a similar way as in knots, when you have to cross the topological barrier, the most efficient way to do it is by the slipknot loop. This is what you can see here on the right: first you make a small slipknot loop, and then the slipknot loop is pushed through by the thermal fluctuations. So it seems that nature developed some kind of unique mechanism to cross topological barriers. But what I think is even more interesting is to ask what the topology is giving to the protein, and here we found that you can study the fluctuations of the minimal surface. This is what you can see in blue: blue is the variance of the fluctuations of the minimal surface, and the red curve is telling you how many crossings you have. In the beginning, at the start of the folding time, you have a lot of fluctuations, you have a very low number of native contacts, and there are no crossings; but over time you cross the first time, so you make one crossing, and then, when you make the second crossing, what you can observe here is that this
change in the fluctuations of the minimized surface decreases. So we believe that, based on the fluctuations of the minimized surface, we can measure the influence of topology on the stability of the protein. On the right side it is maybe easier to see: there is a small difference between the fluctuations when you have just one crossing and when you make the second crossing, when you create the non-trivial topology, the supercoiling. The fluctuations of the minimized surface decrease when the additional crossing is formed, and this difference is bigger than in the number of native contacts, because the number of native contacts is already high when the protein is almost folded. So maybe the minimized-surface approach, or its fluctuations, will be a method to measure the influence of topology on the protein. I don't know how many minutes I still have. You have four minutes, including questions. Okay, I have two slides, so it's perfect. So this is the lasso that we analyzed before; to find the lasso we span the minimized surface, and this is how we constructed it. But sometimes the tail of the protein can be longer, and in that case you can make a second loop; when you make a second loop, you can already observe links, so we tried to find a technique to detect links in proteins. We used this method to scan again all the proteins, and we found that there are only two types of links that can be formed in proteins. So, from the statistical point of view, links are really not something that happens often in proteins. And why are they important? Because it seems that proteins with linked topology can live, or conduct their biological function, at very high temperatures, almost at the boiling temperature. In this case you can make another nice experiment to test whether the topology influences the stability of the protein: you can
construct two proteins, or one protein but with two different topologies. This one is non-trivial, and the one on the right side has exactly the same number of cysteine bridges but a trivial topology. Then you can measure the probability of unfolding the protein. When there is no bridge, this is the energy barrier to unfold the protein. When you add one cysteine bridge, the blue loop, this is the energy barrier; when you make the red one instead, there is no difference from the statistical point of view, the barrier is almost the same. However, when you have two cysteine bridges, you can see that this barrier is much higher. So now the question is: if you have exactly the same protein, with exactly the same number of cysteine bridges but a non-trivial topology, will the energy barrier be lower, the same, or higher? We found that the energy barrier is higher, and I think this is a very nice way to show that this non-trivial topology introduces stability to the protein. Such a test cannot be conducted in the case of knotted proteins, because we do not have a protein with exactly the same sequence of amino acids and the same fold but unknotted. So, to finish: today we know that there is another group of proteins with non-trivial topology, as we found that proteins can form theta-curves, so there will be a lot of study of such topologies. And if you would like to know whether a protein is knotted, possesses a lasso, or forms links, then I encourage you to use one of these servers. With this I would like to finish, and thank you for your attention. Let's thank the speaker. Well, I think we are out of time, unless the main organizers say that there is time for one question. Okay, actually I also have a question, but I will put it in an email to you. Okay, I'll put it in the mail later. Yeah, it was a great talk, thank you. Here comes the question. Thank you, and I wish to ask you: what about
entropy? You can describe the different structures, or the dynamics, from an entropic point of view; are there any entropic barriers when the polymer changes from one structure to the other? Mm-hmm, it's a very good question, but I don't have an easy answer for it. We believe that we can use a more statistical approach to show that the entropy decreases when you make the slipknot conformation. The slipknot is the unique way to make, on one hand, a stable hairpin that does not fluctuate too much, while making at least one native contact is sufficient to move this loop to the other side of the twisted loop. So the entropy decreases when you make the slipknot loop; if instead you try to push the chain through in a straight conformation, it will just get stuck in the loop, and there will be no efficient force to move it to the other side. Okay, we thank you for this explanation and answer, and we thank you again. Thank you. Are there any announcements from the organizers? I think that's all. Yep.
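As an aside on the crossing-counting technique described in the talk, the classification of lasso types can be sketched as a toy function. This is my own illustration, not the speaker's code, and the labels are a simplified rendering of the lasso notation: given the list of signed crossings of the protein tail through the minimal surface spanned on the cysteine loop, it returns a topology label.

```python
def classify_lasso(crossings):
    """Classify a lasso topology from the signed crossings (+1 or -1)
    of a protein tail through the minimal surface spanned on the
    covalent (cysteine) loop.  Toy labelling scheme:
      L0  - the tail never pierces the surface (trivial topology),
      Ln  - n crossings of mixed direction,
      LSn - supercoiling: n >= 2 crossings all from the same direction.
    """
    n = len(crossings)
    if n == 0:
        return "L0"
    if n >= 2 and len(set(crossings)) == 1:
        return "LS%d" % n  # same-direction crossings: supercoiling
    return "L%d" % n

# the supercoiled case from the slides: two crossings, same direction
print(classify_lasso([+1, +1]))  # prints "LS2"
```

The sign of each crossing would come from the orientation of the tail segment relative to the surface normal at the piercing point; here that geometric step is assumed to be done already.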
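The proposed stability measure, the variance of the minimized-surface fluctuations along a folding trajectory, can also be sketched in a few lines. This is an illustrative analysis under my own assumptions, not the authors' actual pipeline: it computes the variance of the surface area in a trailing window, so that a drop in the signal marks the point where an additional crossing locks the topology in place.

```python
import numpy as np

def sliding_variance(areas, window=5):
    """Variance of the minimal-surface area inside a trailing window,
    evaluated at every frame of a folding trajectory.  A drop in this
    signal indicates that the surface has stopped fluctuating, e.g.
    after the second crossing forms (illustrative sketch only)."""
    a = np.asarray(areas, dtype=float)
    return np.array([a[max(0, t - window + 1):t + 1].var()
                     for t in range(len(a))])

# synthetic trajectory: strongly fluctuating early frames, then a
# quiet tail once the topology is locked
trajectory = [1.0, 9.0] * 10 + [5.0] * 10
var = sliding_variance(trajectory, window=5)
print(var[10] > var[-1])  # prints "True"
```

A real analysis would feed in the minimized-surface areas computed frame by frame from the simulation, but the windowed-variance step itself is this simple.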