So now, to make it clear: what is this? It is, in some sense, a license for freedom. For example, let's take the 3 by 3 pebble game and decide to move with probability 1/2 to the right, with probability 1/4 up, with probability 1/8 to the left, and with probability 1/8 down. OK? This sounds a little strange — we want equal probabilities everywhere — but what I'm trying to explain is that you can do it. So at each point you go right, up, left, down with probabilities 1/2, 1/4, 1/8, 1/8. But with what probability should I accept such a move? The plain Metropolis algorithm says: accept with min(1, pi_b / pi_a). As the pi_b's and pi_a's are all the same, I would always accept, and that would mean I always drift to the right. So the Metropolis-Hastings algorithm tells us the following. Suppose you are on site number 4 and you want to move from 4 to 5, which is proposed with probability 1/2. Again, how do you do this? When you want to move, you draw a random number between 0 and 1. If this random number is smaller than 1/2, you go right; if it is between 1/2 and 0.75, you go up; and otherwise you go left or down. So let's say we've drawn our random number and it tells us to go right. I have to accept this move with the Metropolis-Hastings probability: the minimum of 1 and pi_b times the a priori probability of the reverse move, divided by pi_a times the a priori probability of the forward move. Since pi_b — I want to go from 4 to 5, and the probability at 5 is the same as pi_a — these cancel, so I can cut them out. Now, the a priori probability to go from a to b, from 4 to 5, is 1/2, and the a priori probability to go back from 5 to 4 is 1/8. So I try to move to the right, and I should accept this with (1/8) divided by (1/2).
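Written out as a tiny function — my own sketch, with the 3-by-3 pebble-game numbers from above plugged in; the function name is mine, not from the lecture's programs:

```python
def mh_acceptance(pi_a, pi_b, a_ab, a_ba):
    """Metropolis-Hastings acceptance probability for a proposed move a -> b.

    pi_a, pi_b : target weights of configurations a and b
    a_ab       : a priori probability of proposing the move a -> b
    a_ba       : a priori probability of proposing the reverse move b -> a
    """
    return min(1.0, (pi_b * a_ba) / (pi_a * a_ab))

# Flat distribution (pi = 1/9 on every site of the 3x3 pebble game):
# "right" is proposed with a priori probability 1/2, the reverse "left"
# with 1/8, so the move to the right is accepted with probability 1/4.
p_right = mh_acceptance(1 / 9, 1 / 9, 1 / 2, 1 / 8)
```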
I should accept it with probability 1/4. And this is the famous Metropolis-Hastings algorithm. It's not needed, of course, if you have a flat probability distribution — if you have 1/9 everywhere, it's not very useful. But it's very useful if pi changes a lot with the configuration. The optimal choice: if you can arrange things such that the a priori probability from a to b is proportional, or approximately proportional, to pi of b, then the acceptance ratio will be close to 1, and you accept moves with much higher probability than you would otherwise. So that was the first remark. The second remark was again prompted by Anders. Let me just state the three problems of Monte Carlo — the three curses, the real damnations that we live with, the essence of where it becomes interesting. One of them, and I think it was worth spending one hour on this, is: what is tau, the correlation time? I think you've already gotten some feeling that this is really important. The second is: what is the variance? This is something I haven't treated; maybe I'll treat it next time. Determining the variance can be a very, very difficult problem if you have fat-tailed distributions, Lévy distributions, and so on. And the third problem is how to do finite-size scaling. We are only slicing into this; I was only treating the problem of what tau is. So now let me get to the second part of what I wanted to explain today. It is hard disks: from classical mechanics to statistical mechanics. I want to treat molecular dynamics, I want to treat Monte Carlo, and I want to give you some examples of the new algorithms. So now, again: to break the rules, first understand them. Much of the material is from our MOOC, which will run again. Molecular dynamics, like the Markov-chain Monte Carlo method, was first applied to the hard disk model. So this is the hard disk model: disks in a box, or even with periodic boundary conditions.
And the first paper on Markov-chain Monte Carlo was the 1953 paper by Metropolis and coworkers, and it was on this model. And the first paper on molecular dynamics was by Alder and Wainwright — Bernie Alder just celebrated his 90th birthday two weeks ago, a big celebration — Alder and Wainwright, 1957, where they invented the following algorithm. What they noticed, and it is of course very easy, is that if you start from an initial condition at t = 0 — particles with positions and velocities — you can compute the next event exactly. You don't have to discretize time; you can compute the time of the next collision exactly. And this was the beginning of molecular dynamics. You don't have to do approximate calculations to check whether, after the next time step delta t, you are still without collisions. The hard disk model is like billiards without rotations: disks simply move and, after a collision, continue. If you have questions, please don't hesitate to ask — very happy to answer. So here is the algorithm, event_disks. I think it's already on my website. It is about 40 lines long, and it does a complete molecular dynamics calculation — but it can only do four disks in a square box. You give it the positions and velocities, and it computes at what time each disk will collide with a wall; and, from the positions and velocities of particle A and particle B, it computes at what time the two of them will collide. That is part one, and then comes the rest of the program: for a certain number of events, it computes the wall times — at what times would the particles collide with a wall if left by themselves — and the pair times — at what times would all the pairs collide, if they collide at all.
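The two elementary computations this program needs — the wall time and the pair time — can be sketched as follows. This is my own minimal version, not the event_disks code from the website; it assumes disks of radius sigma in the unit box:

```python
import math

def wall_time(pos, vel, sigma):
    """Time until a disk of radius sigma hits a wall of the unit box,
    for one coordinate (position pos, velocity vel)."""
    if vel > 0.0:
        return (1.0 - sigma - pos) / vel
    elif vel < 0.0:
        return (pos - sigma) / (-vel)
    return float('inf')

def pair_time(pos_a, vel_a, pos_b, vel_b, sigma):
    """Time until two disks of radius sigma collide, or inf if they miss.
    Solves |dx + dv*t| = 2*sigma for the smaller positive root."""
    dx = (pos_b[0] - pos_a[0], pos_b[1] - pos_a[1])
    dv = (vel_b[0] - vel_a[0], vel_b[1] - vel_a[1])
    dx_dv = dx[0] * dv[0] + dx[1] * dv[1]
    dv_sq = dv[0] ** 2 + dv[1] ** 2
    dx_sq = dx[0] ** 2 + dx[1] ** 2
    disc = dx_dv ** 2 - dv_sq * (dx_sq - 4.0 * sigma ** 2)
    if dx_dv < 0.0 and disc > 0.0:   # disks approach, and the roots are real
        return (-dx_dv - math.sqrt(disc)) / dv_sq
    return float('inf')
```

The next event is then simply the minimum over all wall times and all pair times, exactly as described below.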
Then it computes the minimum of the wall times and the minimum of the pair times; the smaller of the two gives the next event. It carries out that collision and loops back. So this little program is the beginning of molecular dynamics; I will show it once. Does everybody understand what I'm doing here? This is a world record of shortness for a hard disk simulation program. You'll see it running very soon, and you can play with it yourself. It is an exact calculation — there's no approximation. So now Boltzmann comes in and says: well, this is much too complicated. With hard disks, the energy is zero when there is no overlap, and — excuse me? — infinity when there is an overlap. So the Boltzmann weight e to the minus beta E is pi equal to 1 for legal configurations and pi equal to 0 for illegal ones. So Boltzmann tells us: let's do a better simulation. The better simulation is, again, direct sampling, because we were so fond of direct sampling before. What is the algorithm? Let's do a direct sampling algorithm for four disks in a box — we still have a box, because we haven't invented periodic boundary conditions yet, but I will invent them once I get out of here. Just throw the particles in randomly: particle one, particle two — pow, an overlap. Now there are big discussions about what to do. I tell you: do tabula rasa — throw everything away and start again. Some other people say that I should only take the yellow one away and try it again. But this is not good; we can think it through very quickly. What we should do is tabula rasa. Understand what I'm doing? One, two — what do I do? Tabula rasa. Everybody understands tabula rasa? One, two, three, four. And now this is a direct sample. I haven't done any averaging; I'm only interested in getting one sample. And I say: this is beautiful.
And I just marvel at this, because it has no correlation with the initial time: it is a direct sample of Boltzmann's probability distribution. So why is the other algorithm not so good — why shouldn't you just take out the yellow one? Because here it is clear that any configuration of four disks is created with the same probability; then I throw all the illegal ones away, and those that are legal are still created with the same probability. But I shouldn't retry just the last particle. What you could also propose is sequential deposition — but think about it yourself, and you'll find out that tabula rasa is what you have to do. So this gives direct_disks.py: place the first disk at a random uniform position, and then, n minus 1 times, take another random uniform position; if you have an overlap, break and start over. So now we have two algorithms. I was a little slow in the first hour — I was supposed to give you algorithms — but now, in three minutes, I already have two. This allows me to run both of them. To the left, I have a beautiful simulation of my 40-line molecular dynamics calculation, and here I have the direct sampling. OK, you see? So this is the beginning of statistical mechanics. It's not as fancy and as fascinating, maybe, as quantum systems — I was supposed to talk about quantum, but still. The thing is that now you can prove — everybody can prove — that the Boltzmann distribution is satisfied here. We did it ourselves: if the random number generator is OK, each legal configuration is equally probable. So I did this tabula rasa, and when I had a good one, I said: well, now it's time t = 1. Then I tried, tried, tried, tabula rasa; a good one, t = 2; tabula rasa. So I keep only the good ones. No correlation. I don't even think about finite-size scaling here.
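A sketch of the tabula-rasa sampler just described — my own minimal version, so names and details may differ from the direct_disks.py on the website:

```python
import random

def direct_disks(N, sigma):
    """Tabula-rasa direct sampling of N non-overlapping disks of radius
    sigma in the unit box: on ANY overlap, discard everything and restart."""
    while True:
        config = []
        for _ in range(N):
            a = (random.uniform(sigma, 1.0 - sigma),
                 random.uniform(sigma, 1.0 - sigma))
            if any((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 < (2.0 * sigma) ** 2
                   for b in config):
                break            # overlap: tabula rasa, start over
            config.append(a)
        else:
            return config        # all N disks placed legally
```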
But anyway, the really funny thing is that there are rigorous proofs — early proofs by Yakov Sinai, the Abel Prize winner last year, and more recently by Simányi — so mathematics is now able to prove that the probability distribution of this molecular dynamics simulation of four particles — not in the box; it cannot yet be proven in the box, only with periodic boundary conditions, but that's a detail — that the probability distribution for this finite system is the same as the Boltzmann distribution. It is the only interesting finite system for which you can show this; basically, you can derive statistical mechanics out of Newton's mechanics. You start with Newton's mechanics, you solve these completely complicated equations, and afterwards you ask: what is the probability of a given configuration? And isn't it really amazing that today you can prove that Newton's dynamics of particles in a box satisfies the equal-probability principle that is at the basis of statistical mechanics? All right, so now let's do the following, just to show you: why does this work, even for a finite system? The reason is chaos, and I have a really nice illustration of chaos. I started the same simulation twice — this is the simulation to the left, this is the simulation to the right. And in the initial state — I cannot really stop it; OK, here I can — the yellow disk on the right was displaced by a tiny amount, 0.0000006, to the right of the other one. OK? Oh, here I still have a little bit of French: boule means disk. You want to learn some French? Anyway, you see? And you see that this little offset plays absolutely no role at first, you see?
There's really no reason to be so precise, until — see? Up, now it's changed, and now this one has become completely different from that one. So initially they started essentially the same, see here? And after a few hundred collisions — four particles in a box — all your calculations become more or less worthless as deterministic trajectories, because the only thing you are looking at is chaos, see? Up. So now, let's do the following. The probabilities — you know I told you that all the configurations are equally probable, OK? Check: each of these configurations is equally probable. So now we ask the question: what is the probability to have a particle at position x? Are you following? Since everything is equally probable, you would say that the probability to have a particle at x must be completely flat, right? Everybody says it should be a flat distribution from sigma to 1 minus sigma. But now you actually ask this question — and since I'm videotaped, you can do it yourself: look at the configurations and make a little histogram. And what you will find is that the probability distribution looks like this — a little bit like a Batman hat, OK? So it's a real — not a mystery, a paradox. How is it possible that with a uniform probability distribution over configurations — we proved it, even fantastic mathematicians proved it — you get a probability distribution in x which looks like a Batman hat?
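If you want to do the histogram yourself, here is one way — my own sketch, combining the tabula-rasa sampler with a histogram of the x coordinates; the parameters and the bin count are arbitrary choices, not from the lecture:

```python
import random

def x_histogram(N=4, sigma=0.1, samples=2000, bins=10):
    """Tabula-rasa direct sampling of N hard disks in the unit box,
    then a histogram of all x coordinates; the counts pile up near
    the walls (the 'Batman hat')."""
    counts = [0] * bins
    done = 0
    while done < samples:
        config = []
        for _ in range(N):
            a = (random.uniform(sigma, 1.0 - sigma),
                 random.uniform(sigma, 1.0 - sigma))
            if any((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 < 4.0 * sigma ** 2
                   for b in config):
                break            # overlap: tabula rasa
            counts  # (no-op; restart the sample)
            config.append(a)
        else:
            for x, y in config:
                counts[min(bins - 1, int(x * bins))] += 1
            done += 1
    return counts
```

With enough samples, the bins closest to the walls (just inside sigma and 1 minus sigma) collect visibly more counts than the central ones.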
And this is the famous Asakura–Oosawa depletion interaction, the "fifth force" of nature, which shows that even though there are no real attractions — no Coulomb forces, no van der Waals forces, no forces at all — disks are still attracted to the walls. And what you don't see as well here is that disks are also attracted to each other. Please ask questions if you want to understand this. It comes from a famous, famous paper by Asakura and Oosawa in 1954; last year was the 60th anniversary, with a big conference at Nagoya University — both authors were still alive — to celebrate the Batman hat. It is the most important interaction in biological physics. So anyway, there's an animation, but basically: all the configurations are equally probable. This configuration is as probable as this one, as this one, as this one. And then, with a little understanding of statistics: well, this configuration has the same probability as that one, but there are many more configurations like this than there are configurations like that. So the probability to see a configuration like this is much higher than to see a configuration like that. Now let me give you a three-minute wrap-up on the liquid–solid transition. Again: Alder and Wainwright, 1962, ran molecular dynamics calculations and noticed that even though there are no interactions — no energy scale, all legal configurations are exactly as probable as all the others — you have a phase transition between a liquid and what looks like a solid. Even though at the time it was believed that the solid could not exist in this two-dimensional system.
So now you understand that we have this tabula rasa algorithm. It is a perfect algorithm, really fantastic — but try to use it for a million particles. Try to use it even for 20 particles. You place the first one, the second one... and since you have to do this tabula rasa business, if the 574th particle that you place has an overlap, you have to take out everybody and start again. Imagine the frustration: you want to place 256 particles, 255 have already been placed, and the last one just barely overlaps — tabula rasa. So this direct sampling algorithm is exponentially bad in the number of particles and exponentially bad in the density. Anyway, there's lots and lots more to be said. So now what we can do is Markov-chain Monte Carlo. Markov-chain Monte Carlo was invented for exactly this — this was its first application. Metropolis and coworkers did not think to work on the 3 by 3 pebble game; they immediately worked on the hard disk calculation. So what is the algorithm? They used detailed balance. What does detailed balance mean? You start from a configuration, you pick a disk and move it randomly within some epsilon neighborhood, and if the move is legal, you accept it. If it is rejected, like here, then the configuration at the next time step is the same as the one you had before. This makes for the following: the flow from this configuration to that one is the same as the flow from that one back to this one. The only thing you have to satisfy, with symmetric a priori probabilities, is that the probability to propose a move must be the same as the probability to propose the reverse move.
Once you have satisfied this condition, aperiodicity is very easy to prove, and you can see that this algorithm is irreducible — you can get from any configuration to any other; very easy to show. So this is a perfect algorithm on which we can discuss the important question of what tau is, because we want to set up really big calculations. We have exponential convergence — ah, on the slide "lecture" is missing an R. Anyway, we have exponential convergence and we are set. And this was the first application ever run. So here we have the third algorithm — third, fourth, fifth, I don't know; the third already. This is the Markov-chain Monte Carlo algorithm. You start with an initial configuration, with disks of radius sigma and so on. You take a random particle and move it a little bit: by a random uniform number between minus delta and delta in x, and between minus delta and delta in y. Then you check whether you have overlaps, and if you have none, the new configuration is the one you just created. This algorithm is very easy; it's on the website. And I'm encouraging you: even if you do completely different stuff, even if you work on completely different problems, whenever you have a little time, just look at what molecular dynamics actually means. Even if you don't work in molecular dynamics, run once in your life a molecular dynamics program that you really understand — and then you see what the problem is, and you won't get sleep, because you wonder what your calculation is worth if, after 100 collisions, chaos has such an influence that you no longer have a deterministic calculation. You can really lose your sleep over this — but let's lose our sleep on this here instead. So this is the Metropolis algorithm, and it is 100% guaranteed to be correct. So let's run it, and let me show you the problem of correlation times.
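One step of the hard-disk Metropolis algorithm just described can be sketched like this — my own minimal version; the program on the website may differ in details:

```python
import random

def markov_disks_step(L, sigma, delta):
    """One Metropolis step for hard disks in the unit box: pick a random
    disk, propose a symmetric displacement in [-delta, delta]^2, and
    accept only if the move stays in the box and creates no overlap."""
    a = random.choice(L)
    b = (a[0] + random.uniform(-delta, delta),
         a[1] + random.uniform(-delta, delta))
    in_box = sigma <= b[0] <= 1.0 - sigma and sigma <= b[1] <= 1.0 - sigma
    no_overlap = all((b[0] - c[0]) ** 2 + (b[1] - c[1]) ** 2 >= (2.0 * sigma) ** 2
                     for c in L if c is not a)
    if in_box and no_overlap:
        L[L.index(a)] = b   # accept; on rejection, keep the old configuration
    return L
```

Note the rejection rule: when the move is illegal, the old configuration counts again — that is exactly what makes detailed balance hold.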
So this is one initial configuration of 256 particles — a small example calculation, and the density is not very high, so the particles do have room to move around. I ran it for 25 billion iterations: initially it looks like this, and finally it looks like that. So clearly, after a few hours of Python simulation on my computer, you are still close to the initial configuration. This corresponds to the probability distribution that was still peaked on site number 8 in our 3 by 3 pebble game: this is very clearly not a sample of the equilibrium distribution. All right, so now let us improve our algorithm. This algorithm is not so good, and we want to use it for much larger systems, so let's improve it. Remember the key point of the algorithm: the disk moved with the same probability from here to there as from there back, right? This is the basis of detailed balance. But detailed balance is much too constraining, and we can use global balance algorithms instead. So that we fully understand each time step of the Metropolis algorithm: we take a random disk — one of the disks — and we move it by a random displacement that is symmetric around the disk's present position. This random choice of disk and this symmetric random displacement are what constitute the Metropolis algorithm. OK. So now what I will show you, very simply, is an algorithm that does exactly the contrary of what everybody has done. I have 256 disks, and the standard algorithm takes a new random disk each time and moves it up or down, left or right.
Instead of this, let's do exactly the contrary of what everybody else does: let's always move the same disk, and let's always move it in the same direction. OK? Completely different. Now, with one disk it's easy: just move it (here I introduce periodic boundary conditions). But let's do it with two disks. Let's spend five minutes on this problem of two disks. Here are disk one and disk two, and I always take the same disk and I always move it by the same displacement. So I take this disk, I move it to here. At the next time step, this disk moves again, over to here. Well, unfortunately, now I have a rejection. When I have this rejection, I keep the configuration exactly as it was before — but, and this is what is called the lifting, instead of moving this disk, I move the other one from now on. So the configuration is exactly the same as before, but the lifting particle, this one, is replaced by that one. OK: move, move, move, and so on. Do we get the algorithm? Very easy. We always move the same particle, and we always move it by, say, 2.1 centimeters. Now, this sounds like a really crazy algorithm. Is it correct? Of course it is correct. What is the flow out of this configuration? It is one, because this configuration must go into that configuration, see? What is the flow into this configuration? Well, it must come from this one — or from itself, if the move was rejected. So this one flows in here. And where does this one go? It must go into that one. The flow into each configuration is one, and the flow out is one: flow in equals flow out for all the configurations in our system. Fabulous. So, do you think there's a problem with this algorithm?
Well, you may have to move vertically too — but what can we prove about this algorithm, the one that only moves in the x direction? Maybe somebody wants to say it. Exactly: global balance holds. That means all the configurations that it can reach will be visited with the same probability. But at some point I have to move up as well, in order to reach all the configurations. This algorithm is programmed here — I may spend a little more time on it later. What does it do? There is a displacement — this 2.1 centimeters — which always goes to the right, but it can be a random variable: sometimes smaller, sometimes larger, a random uniform number between zero and some maximum. So we always move forward, but sometimes in direction zero, which means to the right, and sometimes in direction one, which means up. So: a completely new algorithm. It definitely breaks detailed balance, because you never move back. And it definitely satisfies global balance, definitely satisfies aperiodicity, definitely is irreducible — and so it must converge towards the equilibrium distribution. OK? Yes, of course, this can be done — and if you have a box of size one and you move by the square root of two divided by 100, it will also be OK. So this is — well, not a perfect algorithm, but a valid Markov-chain algorithm. And here I just show the pair distances: I take the two disks, and the distribution of pair distances is completely uniform in this two-dimensional space that we have, as it should be.
So basically you can play with this algorithm, and you see that there are many possibilities for new algorithms that lie completely outside the range of validity of the Metropolis algorithm. All right? The only problem is that we have only two disks, and this is not enough to make it in life, so let's get a little more complicated. Let's do it with three disks. So I use the same algorithm for three disks, and you see: I move this one, and now it can create multiple overlaps — it can overlap with this one, or it can overlap with that one. Now I have a little problem. I cannot go here, but I can of course go with some probability here, and with some probability I can go up. And now a very simple analysis: if the flow in is one, and it splits as, say, 0.5 going here and 0.5 going up — or p and 1 minus p — then the flow out of this configuration is one, but the flow into this one is only one half, or p, and into that one 1 minus p. So the beautiful algorithm that I just proposed fails for three particles, and it fails because we had this multiple overlap. Now, what we can do is very simple: instead of moving by 2.1 centimeters, we move only by a little epsilon. And if we move by epsilon, we can only ever have an overlap with one other particle. So instead of moving by a long distance, I move only by epsilon — and then I have a valid algorithm, and a very general method: a Markov-chain simulation that satisfies global balance and converges towards the equilibrium distribution.
This gives what we call the event-chain algorithm. This particle moves by epsilon, epsilon, epsilon, ... until it touches one other particle; then that one moves until it touches another particle, and so on. And again, very clearly: we never have a rejection in this algorithm — well, there is just one effective rejection, when the motion passes from one particle to the next, when you change the lifting variable. You have global balance, and you only move, say, to the right and up. So, let me show one more detail: in this lifting two-disk algorithm, how did we alternate between going right and going up? I did 10,000 iterations, each going either right or up, and inside each iteration I made 100 steps — 100 steps moving like this, with a fixed step size — and after these 100 steps I stop, choose a random uniform number again, decide whether to move in plus x or plus y, and continue. Moving 100 fixed steps translates into moving a fixed total distance. So in the event-chain algorithm we take moves that are smaller and smaller, but we make more and more of them, with a fixed total: this displacement, plus this one, plus this one, plus this one, adds up to a fixed chain length, OK? So this is the algorithm, in Python, and it really goes from here to here: you start with an initial configuration, you decide the total chain length, you decide whether to move in plus x or plus y, and then you have a starting disk — the starting disk is somewhere, OK?
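The chain move just described can be sketched like this. This is my own simplified version, not the lecture's program: it ignores walls and periodic boundary conditions, moves only in plus x, and takes the starting disk as a parameter (in the lecture it is chosen at random):

```python
import math

def event_chain_x(L, sigma, ell, start):
    """One event-chain move in the +x direction for disks of radius sigma.
    The active disk slides right until it touches the nearest disk in its
    way; that disk then becomes the active (lifted) disk. The move ends
    when the total chain length ell is used up."""
    active = start
    while True:
        x, y = L[active]
        best, target = ell, None      # free flight of at most the remaining length
        for j, (xj, yj) in enumerate(L):
            if j == active:
                continue
            dy = yj - y
            if xj > x and abs(dy) < 2.0 * sigma:
                # distance the active disk can travel before touching disk j
                gap = (xj - x) - math.sqrt(4.0 * sigma ** 2 - dy ** 2)
                if 0.0 <= gap < best:
                    best, target = gap, j
        L[active] = (x + best, y)
        ell -= best
        if target is None or ell <= 0.0:
            return L                  # chain length used up
        active = target               # lifting: pass the motion on
```

For example, with two disks of radius 0.1 at x = 0.2 and x = 0.6 on the same line and a chain length of 0.3, the first disk slides 0.2 until contact, and the second one carries the remaining 0.1.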
So the starting disk is a random choice among all the disks you have, and then, until you have used up the total chain length, you compute the nearest neighbor in the direction of motion and move up to it — or you stop once the total distance is used up, OK? Understand the algorithm? Questions? Comments? Yes, of course. It has a little problem with parallelization, because we run it serially. It is rejection-free, which means it generates these chains that go to the right, for example — several of them — but they could run into each other, and then you have to resolve the conflict. It's not perfect for parallelization, that is clear, but it is a much faster algorithm than the detailed balance algorithm. Anyway, this was the first algorithm able to really thermalize systems of up to a million particles; it runs several orders of magnitude faster than the Metropolis algorithm. It can also be used for continuous potentials. How does that work? Well, here you always see which disk will be the next one you collide with. How do you do this if you have a general potential? For a general potential, you use what I explained earlier this afternoon: a factorized version of the Metropolis algorithm, where you can have interactions with all the particles, but each of the interactions with the other particles is treated independently. So this is just an example of an algorithm that we can use. So this is what I wanted to explain this afternoon — let me summarize. We can show that the Metropolis algorithm gives exactly the same result as molecular dynamics for hard disks. This very long-standing algorithm, Markov-chain Monte Carlo, in parallelized or non-parallelized versions, has been run for decades.
The Metropolis algorithm and detailed balance have been used as a dogma for many, many decades. But using the concept of global balance, we see that there are very general algorithms — we can use this kind of algorithm for spin systems, for continuous potentials, for many, many problems. All right, so this is what I wanted to explain; maybe there are questions, and maybe we can program a little bit. OK, excuse me? Yes, of course. Yes. No — there are simulations using this algorithm with soft sphere potentials, but we haven't done simulations with non-spherical particles. Yes, of course, this can also be done. Other comments? Does everybody understand how the Markov-chain Monte Carlo algorithm is programmed, how it works? Yes? Well, I didn't prepare this for today, but what we learned is that in two dimensions you cannot have long-range positional order. That means this cannot be a real lattice. This came from the original arguments by Peierls and by Landau in the 1930s, and it was believed that this excluded the existence of a crystal in two dimensions. But then the original simulation by Alder and Wainwright showed exactly this — showed that the transition existed. This set off a long, long discussion. The point is: the fact that a crystal cannot exist does not contradict the existence of long-range orientational order. Even though these particles cannot sit on a crystal lattice, the local orientation around this disk and the orientation around that disk can be correlated over arbitrarily long distances. This is not excluded by the theorem.
And the solid (unfortunately I took out the slides because I thought I was running long), so the solid in two dimensions is characterized by long-range orientational order, so if you have this bond angle here, you find the same angle very, very far away, and by an algebraically decaying positional correlation function. The only thing that cannot exist is long-range positional order; that cannot exist, but the orientational order is long-range. So the two-dimensional solid is characterized by long-range orientational order and only algebraic positional order. And the thing that we found out is the existence of an intermediate phase, the hexatic phase, that has quasi-long-range orientational order and short-range positional order. So this whole question came up originally with the simulations of Alder and Wainwright, and it has taken a very long time to settle, exactly because of this problem of correlation times in Monte Carlo simulations, and because everybody was using the same algorithms for decades and decades. Excuse me? In this case here, temperature is not really a concept, because the weight of this system, e to the minus beta times the energy, does not depend on beta: the energy of a hard-sphere configuration is either zero, if no disks overlap, or infinite, if two disks overlap, so the weight is one or zero at any temperature. What this means (but I did not really want to teach you about hard spheres; I wanted to discuss algorithms with you and show you what possibilities there are to develop new algorithms) is that this system is the same at all temperatures: the positions are exactly the same at all temperatures. So let me show you what happens with temperature; you see it very easily here. Oops, not here. If you double all the velocities in the simulation, then the simulation simply runs twice as fast.
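The point that, for hard spheres, changing the temperature only rescales the velocities and hence the clock, leaving every configuration unchanged, can be checked in the simplest hypothetical setting: two equal-mass hard points on a line. Doubling both velocities leaves the collision point unchanged and halves the time to reach it:

```python
def collision_time(x1, v1, x2, v2):
    """Time until two hard points (x1 < x2, zero diameter) meet,
    or None if they are not approaching each other."""
    dv = v2 - v1
    if dv >= 0:
        return None
    return (x2 - x1) / (-dv)

def collide(x1, v1, x2, v2):
    """Positions of both points at the moment of contact, and that time."""
    t = collision_time(x1, v1, x2, v2)
    return x1 + v1 * t, x2 + v2 * t, t

slow = collide(0.0, 1.0, 1.0, 0.0)   # original velocities
fast = collide(0.0, 2.0, 1.0, 0.0)   # all velocities doubled
```

The sequence of configurations visited, and hence everything a position-only Monte Carlo algorithm samples, is identical; only the clock changes.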
That is all that changes: doubling all the velocities makes the simulation run twice as fast; the temperature is then four times higher, and so is the pressure. But the Monte Carlo algorithm is exactly the same, as it does not consider the velocities at all. Yes, if you have soft spheres, then it is different. A soft-sphere potential, for example, is a potential V(r) that goes like one over r to the power n; for n going to infinity, you recover hard spheres. But even for soft spheres, you have a single combined parameter of temperature and density that describes the whole system. So also for soft spheres, a single parameter describes your system. The hard-sphere system has a unique phase diagram, and the only control parameter is the density. This is what makes it so interesting: you have no freedom, no choice; there is only one system that you have to study. So two days from now, I will discuss further algorithms, in spin systems, and also different methods. I expect that in the future, algorithms that break detailed balance and satisfy global balance will become more and more important, because, as in this method, they are much less diffusive than the standard Monte Carlo algorithms. Another point we can make concerns molecular dynamics. Today there are two communities of people doing simulations: many people still do molecular dynamics, and many other people do Monte Carlo. The molecular dynamics algorithms also have many advantages compared to the Monte Carlo algorithm, just because they are not diffusive: you have conservation of momentum, even local conservation of momentum, and if a particle is moving in one direction, it will continue to move in that direction for a long time.
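The diffusivity argument can be made concrete with a toy comparison (an illustration only, not either simulation method): a symmetric random walk, like a sequence of local Monte Carlo moves, covers a distance growing only as the square root of the number of steps, while persistent, momentum-conserving motion covers a distance growing linearly:

```python
import random

def random_walk_displacement(n, rng):
    """Displacement after n diffusive unit steps (Monte Carlo-like moves)."""
    return sum(rng.choice((-1, 1)) for _ in range(n))

def ballistic_displacement(n):
    """Displacement after n persistent unit steps (molecular-dynamics-like)."""
    return n

# Mean-square displacement of the walk grows like n, versus n**2 for
# ballistic motion.
rng = random.Random(0)
n, samples = 100, 2000
msd = sum(random_walk_displacement(n, rng) ** 2 for _ in range(samples)) / samples
```

For n = 100 steps, the walk's root-mean-square displacement is only about 10 units, while ballistic motion covers 100.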
You see that molecular dynamics calculations, not just for hard spheres but for many other problems, converge much faster than the Monte Carlo algorithm, and the problem is always the diffusivity of the Monte Carlo algorithm. So we, and many other people, are working to overcome this basic limitation of the Monte Carlo algorithm. Comments, yes? Excuse me? You mean, with these algorithms, what do you mean? Yes. There are techniques for quantum systems, yes, but usually they are not Monte Carlo algorithms; it depends, but usually these are matrix-multiplication algorithms. These are much different from what I'm explaining here. They are usually not sampling algorithms in the sense I presented. The point here, again, is that we have sampling algorithms: we do not compute the integrals themselves. Then, thank you for your attention, and good evening. Okay. Thank you.