Just give me a second. Yes, we can. So, as I was saying, Moitresh Majumdar is going to talk about anomalous diffusion in random dynamical systems, autocorrelation functions of random maps. And this was supervised by three of our colleagues, Rainer, Yusuru, and Stephano. So Moitresh, please take it away.

Thank you. Yeah, thanks for the introduction, and I hope I'm audible. I apologize in advance for any ambient noise. So hi, my name is Moitresh Majumdar, and I worked on Project 11, which was Anomalous Diffusion in Random Dynamical Systems, over the course of the summer school. I was supervised, as Isak said, by Dr. Rainer Klages, Dr. Yusuru Sato, and Dr. Stephano Tupo. Today I'll be talking about some of my main findings from this project and some of the main things that I learned over the course of the summer school as well. I'll specifically be focusing on one particular area, autocorrelation functions of random maps, and I'll explain the terms as I walk through the presentation. But before that, a brief outline of what I aim to do. I'll begin with a very well-known model of Brownian motion, the Langevin equation. I'll talk about how the traditional stochastic version of the Langevin equation was modified to introduce a chaotic dynamical system, and how that was further modified to introduce a random dynamical system, which was the focus of the project; I'll walk through this process of modification. I'll then focus on that particular random dynamical system and the properties of it that were the focus of the project, specifically the autocorrelation function. Then I'll talk about some of the numerical simulations of the autocorrelation function that I did, and finally the results that I obtained from studying it. I'll end with an outlook on how I'll be proceeding with this project in the future.
So, to begin with, the model for Brownian motion. As we all know, Brownian motion is the seemingly random motion of a particle suspended in a fluid, and the particle only experiences forces from the molecules of the fluid it is moving in. When Langevin derived the equation that models Brownian motion, he considered the force in two parts. The first was a macroscopic force which the particle experiences as viscous drag in the fluid it is immersed in. This macroscopic force is proportional to the velocity of the particle, which here is denoted by y, the one-dimensional velocity. The viscous force has a damping constant gamma, and hence, since it is proportional to the velocity, it is minus gamma times y. The second part of the force was modeled as small fluctuations, which arise from the thermal or random fluctuations that the particle experiences as it moves through the fluid. In the original equation this was modeled as a stochastic term eta of t. Here, the stochastic term was taken to be delta-correlated Gaussian white noise: its expectation in time was zero, and it was delta-correlated in terms of the Dirac delta function. As you can see, this forms a stochastic differential equation because of the presence of the stochastic term. We will consider a very simplified case of this equation where the mass of the particle is taken to be 1, along with a few other changes. Now, the modification to this equation was suggested originally by Dr. Christian Beck in his paper in 1996, where the stochastic term eta of t was replaced by chaotic dynamics arising from a deterministic map. The eta of t term is now modeled in the form of a kick force, where the delta function assumes non-zero values only at times t equal to n tau, and at these times it is scaled by a factor of the square root of tau so that it remains finite.
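In symbols, what was just described can be summarized as follows (my reconstruction in the talk's notation, with the mass set to one; the exact normalization of the kick term follows Beck's construction and may differ from the slide):

```latex
% Langevin equation for the one-dimensional velocity y(t), mass m = 1:
\dot{y}(t) = -\gamma\, y(t) + \eta(t)
% Delta-correlated Gaussian white noise:
\langle \eta(t) \rangle = 0, \qquad
\langle \eta(t)\,\eta(t') \rangle = \delta(t - t')
% Beck's modification: the noise becomes a deterministic kick force,
% non-zero only at times t = n*tau and scaled by sqrt(tau):
\eta(t) \;\longrightarrow\; \sqrt{\tau}\,\sum_{n} x_{n+1}\,\delta(t - n\tau)
```

Here the kick amplitudes x_{n+1} are generated by the chaotic map introduced next.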
The magnitude of this kick force is given by a discrete dynamical system known as the Bernoulli shift map. This map is given by B: x_{n+1} = 2 x_n mod 1, and it is a chaotic map. A brief motivation as to why this modification was made: because the Langevin equation was reduced to a deterministic form, the entire dynamics of Brownian motion could now be described completely in terms of purely deterministic equations of motion. Essentially, a dynamical system is a rule that tells us how, if we know x at step n, we obtain x at step n plus one. After this modification, the entire dynamics of the Brownian particle can be described through these deterministic rules, because we have introduced this dynamical system. So if we integrate the original Langevin equation, we obtain dynamics in terms of x and y, where x_{n+1} is given by the Bernoulli shift map, and we will consider a simplified case where gamma tends to zero, gamma times tau tends to zero, and tau tends to one. Here tau is the time difference between subsequent impulses of the kick force, which are modeled using the dynamical system. The main takeaway from this modification is that the microscopic fluctuations, the seemingly random and fast force that the Brownian particle experiences, are now modeled using a deterministic dynamical system. Now, for the purpose of this project, the deterministic dynamical system is replaced by a random dynamical system, which is also a discrete system. So it is a map, x_{n+1} = T(x_n). This map T is a random dynamical system because it assumes the value 2 x_n mod 1 with probability p, and x_n / 2 with probability 1 minus p. Hence the coefficient of x_n changes: it is basically two different maps applied with two different probabilities.
A brief motivation as to why this map is interesting, in terms of its Lyapunov exponent. As we know, the Lyapunov exponent quantifies the rate of separation of infinitesimally close trajectories, and hence it quantifies chaos. When p is equal to one, the Lyapunov exponent is positive, and it's quite clear that the map reproduces the deterministic Bernoulli map B. This is a well-studied deterministic map, and because of the positive Lyapunov exponent it is chaotic and expanding. For the case when p is zero, it is again a deterministic map, but with a negative Lyapunov exponent. In this case it is a contracting map, and the contraction occurs towards a point attractor at x = 0, which makes sense because under the action of the map x_n / 2 every subsequent value of x_n is divided by two and we get closer to the point attractor at x = 0. However, the interesting case occurs when p is varied from zero to one, and specifically when p is very close to one half. At this point the Lyapunov exponent is zero, but we also observe intermittency, because the system experiences the effect of both maps, 2 x_n mod 1 and x_n / 2. Hence this is an interesting point where we observe intermittency in the trajectories. Now, a little bit about the invariant density of this random dynamical system, the Pelikan map, as introduced by Pelikan. As we know, the invariant density is the stationary distribution of the particles over time: if you start with an ensemble of particles and let the system evolve, at some point the distribution of particles no longer changes with time, it becomes invariant, and at that point we observe the invariant density of the map. The invariant density of this particular map is actually a step function, a piecewise constant function if you like.
The way it is characterized is as follows: consider intervals I_j between zero and one, where each interval I_j is given by (1/2^{j+1}, 1/2^j], that is (1/2, 1], (1/4, 1/2], and so on. On each interval the invariant density has a constant value a_j, which is a function of p. In this figure you can see the invariant density for the case p = 0.7, and as you can see it is constant on every interval between zero and one, with every interval given by this expression. A little bit more about the shape of the invariant density and why it's interesting. A previous summer school participant who worked on the same project, Jin Yan, in the 2019 summer school, obtained a curve representing the invariant density using midpoint interpolation. We know that the invariant density is a series of constant values over intervals, so she derived an expression for a curve which passes through these points; that expression is here on the screen, where a, b, and c are functions of p. We can see that at p equal to one the invariant density is a constant, uniform function, and as p gets closer and closer to one half it changes, and we observe some interesting behavior.
For example, at p = 0.8 the curve changes convexity. But it must be remarked that for all values of p greater than one half the invariant density is normalizable: if you take the sum over all intervals of a_j, the constant value, times the length of the interval, it sums to one, and in that sense it is normalizable. As p tends to one half, the invariant density stops being normalizable, and for values of p close to one half, specifically for p less than two thirds, the invariant density is also unbounded. It must also be remarked that for values of p less than one half the invariant density is just a delta function, because under the action of the map x_n / 2 the trajectories localize near x = 0, so you would just see a huge spike near x = 0; something similar is also observed near p equal to one half, as you'll see in the next slide. Now, a little bit about the computation that I did and how I compared it with the theoretical expression for the invariant density. There is an issue with this Pelikan map T, which I'll explain. Look at the Bernoulli map, 2 x_n mod 1: when a number is represented in terms of bits on a computing system, we see that whenever this map is applied we lose one bit of information. So repeated application of the map 2 x_n mod 1 leads to the loss of one bit at every step. I therefore had to use the GNU MPFR library for C, which allowed me to carry out simulations with an arbitrary degree of precision, up to the order of digits shown here.
So I could observe the trajectory evolve for a very long time, because if the precision is limited, then, especially for values of p close to one where the expanding map dominates, you would just see the values of x collapse to zero, because we would lose all the information in the bits. In fact, the Bernoulli map is also known as the bit-shift map, because at every step of its application we move one bit ahead and lose the information from that bit. So I used the MPFR library to carry out simulations of this map. I did it for the four values of p that you can see, 0.99, 0.8, 0.6, and 0.501, for an ensemble of 10^3 initial conditions, and then plotted the values of x_n that I obtained as a histogram. As you can see, the histogram matches very closely the theoretical value of the invariant density that I introduced previously, which shows that the simulations were pretty good. Here I scaled the constant value of the invariant density on every interval by the number of initial conditions, so that I could see how close the histogram is to the theoretical value. For all these values of p I obtained a very close match with the theoretical invariant density. These are the corresponding time series plots, and here is where you can actually observe the intermittency as p approaches one half. You can see that, under the action of both maps, a lot of time is spent by the trajectory very close to zero, especially at this value of p, but because of the chaotic map we also see chaotic behavior emerging in between. So in that sense we observe intermittency for p approaching one half. So this was the start of the numerical simulations that I performed using this arbitrary-precision package.
The goal was to look at the quantity known as the autocorrelation function. Ultimately we wanted to look at the velocity autocorrelation function of the original Langevin equation, which is related to the position autocorrelation function of the Pelikan map through the relation shown here, because of the x and y dynamics that were introduced in the beginning: the velocity autocorrelation function of the Langevin equation is related to the position autocorrelation function of the Pelikan map, the random dynamical system we are dealing with. The angle brackets here denote an average over an ensemble of initial conditions, and this position autocorrelation function is taken with respect to the invariant density of the Pelikan map: the system has reached the stationary invariant density, and then we calculate the correlation of x_k with x_0. The reason we did this is that the velocity autocorrelation function captures the decay of memory of this map with time, and that was our goal. There is also an analytical expression, from a semi-Markovian approximation, for this position autocorrelation function, explicitly in terms of p, which was again derived by Jin Yan. The first goal of this project was to check how closely the position autocorrelation function computed from this theoretical expression, the explicit formula in terms of p, matches the position autocorrelation function computed numerically using the arbitrary-precision package.
To compare across different values of p, I looked at the normalized autocorrelation function: I divided the value by the constant given by the ensemble average of x squared minus the square of the ensemble average of x. In this figure I plotted the logarithm of this normalized correlation function on the z axis, calculated for 50 values of k, with the log of k on the x axis, so essentially it's a log-log plot of the normalized position autocorrelation function, and I did this for 25 values of p. This figure mainly shows how the position autocorrelation function exhibits the expected exponential decay, known from theory, for values of p close to one, and how this changes so that we obtain the expected power-law decay in the correlation function for values of p close to one half. This change occurs in a very monotonic manner, and you can observe it very clearly in this 3D surface plot of the log-log normalized correlation function. Here are some comparisons between the theoretically and numerically calculated values of the position autocorrelation function. The red curves are the theoretical position autocorrelation function, while the blue ones were obtained from numerical simulations. As is very clearly observed, for values of p close to one the theory and the numerical simulations agreed very closely: both showed the expected exponential decay, and even the exponents and the constants in the decay law were essentially the same for theory and simulation. But as p got close to one half, there was a divergence between theory and simulation. It must be remarked that for p close to one half the theoretically calculated expression for the autocorrelation function still gives the expected power-law decay; however, that power-law decay is a lot
slower in theory as compared to the one obtained from numerical simulation. We did obtain the power-law decay from theory, which was quite remarkable, but the decay in the theoretical version was a lot slower: the exponent of the power-law decay from theory was one order of magnitude away from the one obtained in simulations. That is where the theory really diverged from the numerical simulations and the results we got from them. So, just as a summary: we numerically calculated the autocorrelation functions for the Pelikan map, the random dynamical system, using arbitrary precision. In terms of the comparison between theoretical and numerical results, for values of p very close to one the theory shows very good agreement with the numerical values of the autocorrelation function; however, as p got very close to one half, the theory still showed the expected power-law decay, but it diverged from what was obtained through the numerical simulations, and the exponents of the power law calculated from theory were one order of magnitude smaller than the exponents found through the numerical simulations. As an outlook for the future, the next step would be to compute the mean square displacement for the original Langevin dynamics driven by this random dynamical system, the Pelikan map. The expectation is that the MSD shows a transition from being linear in t, that is t^alpha with alpha equal to one, to t^alpha with alpha less than one, a sublinear growth characteristic of subdiffusion, as p in the Pelikan map is varied. These are some of the references that I consulted throughout the course of the summer school and in making this presentation. Thank you to my supervisors for their patience, their invaluable time, and wonderful discussions
over the course of this summer school, and special thanks to Jin Yan, a previous-year summer school participant, whose work I referred to very frequently for this particular project. Thank you to LML and ICTP for this wonderful opportunity, and thank you all for listening.

Thank you very much, Moitresh, for this fantastic seminar; congratulations on the work to you and your supervisors. So now we go to the questions part. Colleagues, you know the drill. Okay, very good, so Yankan has a question. Please, Yankan, go ahead.

Okay, please go to page 10. Yeah. I was quite amazed, because most of the time when I do some simulation and mathematical calculation, when there is such a big difference, something must be wrong, either in the numerical simulation or in the theoretical calculation. So how are you so confident in your results?

Yeah, actually, the theory behind these curves was calculated after taking a semi-Markovian approximation, so it's not well-established theory, and the objective was to check how good the theory is and how closely it would agree. We found that the approximations taken to obtain the theory in the first place were unfortunately not good enough to capture the autocorrelation function.

Yeah, I have seen similar behavior, I mean, when the precision is not enough, in simulations of branching processes; basically, in the critical case it's very easy to get some discrepancy between the numerics and the theory. But if we follow some careful procedure with the numerical simulation, basically there is no problem making the numerical results identical to the theoretical calculation. I think you could carefully review your code or your calculation again.

Yeah, regarding the simulations, I was quite confident that they are correct, because I was able to obtain the invariant density from my simulations, so I could confirm that I am
proceeding correctly when doing the simulations. Since the objective was to check how good the theory is, from that point onwards the simulations were trustworthy, and we could depend on them to give an accurate picture of how the system actually behaves. So that's how I proceeded with the simulations, after carrying out that sort of base-case check with the invariant density, and against the well-known results near p equal to 1, which is a very well-studied map. Because I was able to reproduce those results in the simulations, from that point forward I could trust that the simulations were correct.

Okay, perhaps I may add to this, yeah, just to explain further. Moitresh, could you perhaps go to the page where you showed the definition of the Pelikan map? Yes. If you look at the definition, the line x_{n+1} = T(x_n), you see that we are choosing, with probability p, between the map 2x, which is nicely expanding, and the contracting map x/2. Now, what our theory unfortunately does not capture is that these two maps do not commute, and we actually do not know at all how to reproduce this properly in our theory; that turns out to be a very deep problem. In our theory we assume, in what we call the semi-Markovian approximation, that for some reason these two maps commute, meaning that whether we first apply 2x and then x/2, or first x/2 and then 2x, is the same. But in fact this is not true, and therefore we do have a precise understanding of why there are these big deviations between the theory and the numerics. So this doesn't come as a big surprise, and it's not a fault of the numerics; we are confident, as Moitresh said, that the numerics are actually correct, but the theory unfortunately is not. And we don't see any easy way to cure that problem; I think it's a very deep problem and it deserves further investigation. Very good, thank you, Rainer, for that point. More
questions? If there are no more questions, shall we thank Moitresh for this fantastic work? Thank you very much, Moitresh. Excellent, so let us continue. Actually, I should stop my recording.