We now come to path integral Monte Carlo. We are going to work in coordinate representation, look at how observables are computed, then at specific quantities such as the energy and the superfluid fraction, and finally at the algorithm used to sample the paths. The goal of the hands-on part of this tutorial is to write a small piece of code implementing these ideas on a simple model. There are also two reference papers: one on path integrals for helium, and a second paper describing the algorithm. The method is not restricted to this kind of Hamiltonian. You can have different potentials, for instance to describe cold atoms. You can have electronic energies to describe the interaction between protons when the quantum motion of the protons is important.
You can put many-body forces, external potentials, whatever you want; these are just examples. We are going to work in coordinate representation. And so the main object of interest is the matrix element of the density matrix in coordinate representation. Of course, the analog of this in classical simulations is an integral of the observable times the Boltzmann weight. The Boltzmann weight is explicitly known given the positions of the particles. This guy is not explicitly known. But it can be made explicit by introducing auxiliary variables, which define a path, as we will see. By the way, this is a method which has been very successful. The calculation of the superfluid fraction of helium-4, a strongly interacting, highly quantum system, starting from the Hamiltonian, done by Pollock and Ceperley in 1987, was a major achievement in computational physics. And the method scales well with the number of particles, at least for bosons. Recently, I have simulated systems with 10,000 helium atoms, and Werner Krauth and collaborators have simulated a number of cold atoms of the order of one million, simply because that was the number of particles in the experiment. So it's a very powerful method, and it is, in principle, exact; we will see in which sense. So we have to deal with this object first. R is the collection of the coordinates of all the particles, and this is the matrix element between two configurations of the system, at temperature one over beta. The way we can get an explicit representation of this object is to use the Trotter breakup: e to the minus beta H is equal to e to the minus tau H to the power P, where P is beta over tau. This is obviously an identity. And this is equal to the limit, for P that goes to infinity, of something that we can explicitly represent, namely e to the minus tau times the kinetic energy, times e to the minus tau times the potential energy. For given P, namely for given tau, this is an approximation.
But if P is large enough, the approximation converges to the exact value. So there is no systematic error in this approximation, in the sense that you can make the calculation at different values of tau, extrapolate to tau equals zero, and get the exact result. And the advantage of introducing this controllable approximation is that this guy has an explicit representation in terms of the coordinates. Namely, r, e to the minus tau T e to the minus tau V, r prime is equal to the integral over dr2 of r, e to the minus tau T, r2, times r2, e to the minus tau V, r prime, just inserting a completeness relation over the coordinates. Now, the potential in our example is local in the coordinates. The matrix element of the potential is r, V, r prime equal to V of r times delta of r minus r prime. So this term is trivial: e to the minus tau V brings a delta, which eliminates the r2 integration. Now let's deal with the kinetic term. The kinetic energy is diagonal in k space. So I insert two completeness relations, over k and k prime, into the above expression: r, k, e to the minus tau times the kinetic energy operator, k prime, r prime, times e to the minus tau V of r prime. So I've done the integral over r2 using this property, and I'm now dealing with the kinetic energy term. Now the kinetic term k, T, k prime is lambda — I guess I forgot to define lambda before: it is h bar squared over 2m; we assume all particles are equal, and it's not a problem to include a mixture of different kinds of particles, this is only a notational simplification — lambda k squared, times delta of k minus k prime. So now we use this. Can you read this down there? So this is equal to an integral over k. We also use the fact that the eigenfunctions of the momentum operator are plane waves, e to the minus i k r. So we have e to the minus i k r, times e to the minus tau lambda k squared, times e to the i k r prime, and then again e to the minus tau V of r prime.
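The controllable nature of the Trotter approximation is easy to check numerically. Here is a small toy check, my own example rather than anything from the lecture: a 1D harmonic oscillator discretized on a grid (hbar = m = omega = 1), comparing the Trotter trace with the exact trace as P grows.

```python
import numpy as np

# Toy check of the Trotter breakup: discretize a 1D harmonic oscillator on a
# grid and compare Tr[(e^{-tau T} e^{-tau V})^P] with the exact Tr[e^{-beta H}].
n, box = 128, 12.0
x = np.linspace(-box / 2, box / 2, n)
dx = x[1] - x[0]

# Kinetic energy from second-order finite differences; the potential is diagonal.
off = np.diag(np.full(n - 1, 1.0), 1)
T = -0.5 * (off + off.T - 2.0 * np.eye(n)) / dx**2
V = np.diag(0.5 * x**2)

def sym_expm(A):
    """exp(A) for a symmetric matrix, via eigendecomposition."""
    w, U = np.linalg.eigh(A)
    return (U * np.exp(w)) @ U.T

beta = 1.0
Z_exact = np.trace(sym_expm(-beta * (T + V)))

def Z_trotter(p):
    """Trotter approximation of the partition function with tau = beta / p."""
    tau = beta / p
    link = sym_expm(-tau * T) @ sym_expm(-tau * V)
    return np.trace(np.linalg.matrix_power(link, p))

errors = [abs(Z_trotter(p) - Z_exact) for p in (2, 4, 8, 16)]
# The error shrinks steadily as p grows: the approximation is controllable.
```

The error decreases roughly as 1 over P squared for this splitting, which is the "no systematic error" statement in practice: compute at several values of tau and extrapolate to tau equals zero.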
So you see that this integral over k is the Fourier transform, at k times r minus r prime, of a Gaussian e to the minus tau lambda k squared. And the Fourier transform of a Gaussian is again a Gaussian. So this is equal to e to the minus, r minus r prime squared, over 4 lambda tau, times e to the minus tau V of r prime. So this is an explicit expression for the coordinate representation of this operator here. Now I can use this explicit expression for each of these factors. And I obtain this expression: rho of r0, rP; beta is equal to the integral dr1 dr2 dot dot dot drP minus 1 — I have many factors here; I insert a completeness relation between each pair of factors and integrate over the corresponding variables — of e to the minus, r0 minus r1 squared, over 4 lambda tau, times e to the minus tau V of r1 (or r0, whatever you want) — this is the first one — then many products like this, up to the last one: e to the minus, rP minus 1 minus rP squared, over 4 lambda tau, times e to the minus tau V of rP, or rP minus 1, depending on the convention I took for the first factor. Now with this guy I can go and calculate my observable. I have to take the product of these times the coordinate representation of the operator, and then take the trace. Is that clear? Is that obvious? Probably yes. Let me make the assumption, which is only to simplify the notation, that O is also local in space. The average of the observable is equal to the integral over dx, where I have introduced the notation x for the sequence r0 dot dot dot rP, of the operator O of r0, times all these products. Each factor of the product, let me call it rho 0 of r0, r1; tau: this is the part of the high temperature density matrix that corresponds to the kinetic energy alone, so it is the density matrix of non-interacting particles without any external potential — 0 means free particles — times e to the minus tau V of r1. Then I have the product of all these terms: the product over i from 1 to P of rho 0 of r i minus 1, r i; tau, times e to the minus tau V of r i. This is the numerator.
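Here is a quick numerical sketch — again my own, in 1D with lambda equal to one half — of this free-particle density matrix and of the convolution property the whole construction rests on: composing two steps of length tau reproduces one step of length 2 tau, which is exactly what inserting a completeness relation expresses.

```python
import numpy as np

# Free-particle density matrix in 1D, with lambda = hbar^2 / (2 m) = 0.5.
lam = 0.5

def rho0(r, rp, tau):
    """Normalized Gaussian: e^{-(r-r')^2 / (4 lambda tau)} / sqrt(4 pi lambda tau)."""
    return np.exp(-(r - rp) ** 2 / (4 * lam * tau)) / np.sqrt(4 * np.pi * lam * tau)

tau, r0, r2 = 0.1, 0.3, -0.4
r1 = np.linspace(-8.0, 8.0, 8001)     # grid for the intermediate coordinate
dr = r1[1] - r1[0]

# Integral over r1 of rho0(r0, r1; tau) * rho0(r1, r2; tau) ...
lhs = np.sum(rho0(r0, r1, tau) * rho0(r1, r2, tau)) * dr
# ... must equal rho0(r0, r2; 2 tau): two short-time steps compose into one.
rhs = rho0(r0, r2, 2 * tau)
```

The same semigroup property, applied P times, is what turns e to the minus beta H into the product of P high-temperature factors.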
Because I have to take the trace, I have the boundary condition that rP is equal to r0. So I have the sequence r0, r1, and so on and so forth, and then I come back to r0 at the last point. Then I integrate over r0, and this is the trace. And the denominator is the same without the factor O of r0. Let me just write zeta: it's the same thing, except you don't have this factor — this is the denominator. Now let us look at this guy. We have an integral in many dimensions. If we had 3N dimensions before, we now have 3NP dimensions, because we have all these guys to integrate over. But the integrand is something that depends on the operator that I want to calculate, times a product of Gaussians, times a product of exponentials. These are all positive terms, and this is why we chose to work in coordinate representation. In this representation we have an integral with a factor in the integrand which can be interpreted as a probability distribution. Let me call it pi of x. Pi is this product of Gaussians times exponentials. And basically that's it. You have something which you know you can calculate with Monte Carlo using a Metropolis algorithm. You have a configuration space defined by these guys. You move through these configurations with a suitable algorithm. You collect a set of K independent configurations x i, and then you estimate your observable as 1 over K times the sum over configurations i of O of r0 in configuration i. This is like a classical simulation. Instead of the Boltzmann weight you have a slightly more complicated object, but then you just sample the distribution and you get the result, with a statistical error which decreases as 1 over the square root of K if the variance of O is finite, namely if the integral of O squared times pi is finite. This is indeed a mapping of the quantum problem onto a classical one, which can be visualized in terms of paths. Let me make a few drawings. X is a path: it is the sequence r0, r1, and so on.
Let me draw a path for a system of two particles. Particle 1 at time 0 is here; particle 1 at time 1 is there. This is the first particle, and the second one is here, and so on and so forth. The system follows this trajectory in, say, imaginary time: you can interpret this density matrix at inverse temperature beta as an evolution in imaginary time. So this can be viewed as an evolution of the system in imaginary time through these steps. So this is the path. This is one possible representation of the path. There is another representation, the worldline representation, which is very useful: I draw one coordinate and time, with time going from 0 to beta. We have these discretized time steps, which define — oops, I forgot one — which define the position of the particles at each time step. The trajectories are continuous but not differentiable, and they represent the motion of the particles. Note that, as here, each particle comes back to its original position: particle 1 starts from here and goes back to the same point, and the same for particle 2. So each particle is mapped onto a polymer. This is a classical polymer, for which these factors here are the Boltzmann weight. It is a peculiar interaction: the exponent of this, call it S, is the potential energy of the polymer system. You see that the polymers have a spring between adjacent time slices — so there is a spring here — and the particles interact at equal time: r11 interacts with r21, r13 interacts with r23. But the potential doesn't mix different times. It's a kind of peculiarity: it's not a real polymer, but it's still a classical system that you can treat with the methods that you know. An important characteristic of these trajectories is the amount of space that they can travel in one time step. The modulus of x i minus x i plus 1 is of the order of the square root of 2 lambda tau, the thermal wavelength at the temperature corresponding to a single time step. Why is this?
Because otherwise, if the distance between r at one time and at the next is much larger than this variance, the Gaussian makes the statistical weight of the configuration basically zero. So the honest configurations, those with appreciable statistical weight, are the ones that satisfy this condition. And the whole polymer has a spread which is the thermal wavelength corresponding to the low temperature one over beta. So the spread of the polymer can be larger than the movement in a single time step, but it is of the order of the square root of 2 lambda beta. Let's see what happens, in terms of these paths, when we start from a high temperature and decrease the temperature. Let's again consider two particles: this is a high temperature trajectory for particle one, and this is a high temperature trajectory for particle two. At a very high temperature this number is very small. T_D is the degeneracy temperature of this quantum system; at the degeneracy temperature the paths are so spread out that they can get close to each other. You see why? Because the spread of the polymer is the square root of 2 lambda beta. If I lower the temperature, I increase the spread of these paths until they touch. This is the point at which we have to make a slight change in the formalism that we have applied above. So far our particles maintain their identity: particle one closes on the initial position of particle one, particle two closes on the initial position of particle two; they never exchange. In this situation, when the temperature is of the order of, or lower than, the degeneracy temperature — here is the picture: worldlines from 0 to beta, and this is space — there are configurations of this kind, which do contribute to the integral because they have a large statistical weight, but there is also this configuration, in which particle one ends up where particle two started.
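Written out in the notation used above, including exchanges amounts to replacing the distinguishable-particle density matrix by its symmetrized version, which for N bosons is

\[
\rho_B(R, R'; \beta) \;=\; \frac{1}{N!} \sum_{P} \rho(R, P R'; \beta),
\]

where the sum runs over the N! permutations P of the particle labels, and P R' denotes the permuted configuration.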
So this is a path with an exchange, and for a quantum system at low temperature we also have to include these configurations in our integral to get meaningful results. To do this we symmetrize the density matrix: we consider all possible permutations, and we allow each particle to close on the position of any permutation of the other particles. This is an example. This is for bosons. Do we have to make an independent simulation for each kind of permutation? No. If we set up an algorithm which changes not only the positions of the particles but also the permutation, we can sample this sum in just the same way as we sample the spatial part of the integral. So the algorithm will take care of this sum, which would otherwise be impractical; we have to devise moves that can switch between different permutation sectors. This is the only complication for bosons: everything else remains as before. This is again a product of Gaussians times exponentials, and it is positive — there are no minus signs here. We can continue; we only have to include in our algorithm something that allows particle exchanges. Let me open a parenthesis on fermions. In that case, rho of r, r prime; beta goes into 1 over N factorial times the sum over all permutations P of minus 1 to the parity of P, times rho of r, P r prime; beta. In this case the probability distribution pi of x, composed of products of Gaussians and exponentials, may now include minus signs, and I can write it this way. I can still do my calculation of the observable: O would be equal to the integral dx of O of r0 times s of x times the modulus of pi of x, divided by the integral dx of s of x times the modulus of pi of x, where s of x is the sign of pi of x — I have just divided and multiplied by the normalization, the integral dx of the modulus of pi of x. So you see that both the numerator and the denominator are suitable for a Monte Carlo calculation.
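To anticipate the difficulty that comes next: the denominator is itself the ratio of two partition functions, the fermionic one and the one associated with the positive weight, and on general grounds — a standard argument, not spelled out here — it behaves as

\[
\langle s \rangle \;=\; \frac{\int dx\, s(x)\,|\pi(x)|}{\int dx\, |\pi(x)|} \;\sim\; e^{-\beta N\,(f_F - f_B)},
\]

where f_F and f_B are the free energies per particle of the fermionic system and of the corresponding sign-free bosonic system, so the relative statistical error grows exponentially with beta times N.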
The numerator has a positive probability distribution times something that I average over, and the same is true for the denominator. The estimate of this over K independent configurations would be the sum over i of O of r0 in configuration i times the sign of x i, and the denominator is the sum over i of the sign. So what's the problem with this? I can do this; the calculation is still exact as before, apart from the time-step discretization. The problem is that this average sign is exponentially small in the number of particles and in the inverse temperature. So I cannot do many fermions, and I cannot do low temperature fermions. So I will give up on fermions. This is just because I do the integrals by Monte Carlo. And this is an example of the fermion sign problem. So no fermions, unfortunately. There are approximate fermion methods, which are neither very efficient nor fully controlled in their accuracy. So finding a useful approximation for fermion path integrals is a very hard and open problem. So let's go back to bosons. Now it's a completely classical problem and we can go and calculate properties. For instance, the energy. I could use the definition directly and compute the expectation of the Hamiltonian with all the factors, but this is slightly complicated. There is a better way to calculate the energy. The energy is also minus 1 over zeta times the derivative of zeta with respect to beta. This is the thermodynamic definition of the energy in terms of the partition function. And zeta is the integral of all these guys. And I can take the derivative on the link that I want, on the factor that I want: I change beta by changing one of the time steps in the middle. So I can do this derivative for one link only.
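Before doing the derivative explicitly, let me anticipate the result in a compact form (my notation, writing the Gaussian normalization as (4 pi lambda tau) to the power 3N over 2): the thermodynamic estimator that comes out is

\[
E \;=\; \Big\langle\, \frac{3N}{2\tau} \;-\; \frac{\left(R_{i-1}-R_{i}\right)^{2}}{4\lambda\tau^{2}} \;+\; V(R_{i}) \,\Big\rangle_{\pi},
\]

which can then be averaged over all slices i = 1, ..., P to improve the statistics.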
So I take the derivative with respect to tau of one link: e to the minus, r0 minus r1 squared, over 4 lambda tau, times e to the minus tau V of r0. Here I have to be careful: so far I have neglected the normalization of the Gaussians, because the same Gaussians appear in the numerator and in the denominator. But now I have to take the derivative of this Gaussian, and the normalization does depend on tau, so I have to include it: it is 4 pi lambda tau to the power 3N over 2. When I differentiate the normalization I get a factor 3N over 2 tau, times the exponential, times the Gaussian. When I differentiate the exponent of the Gaussian I get, with the opposite sign, r0 minus r1 squared, over 4 lambda tau squared, times the exponential times the Gaussian, and when I differentiate this last factor I get the term V of r0. So you see that by doing this calculation I get something times the original probability distribution: I do not alter the probability distribution, I keep sampling it, and this something is the estimator of the energy, which I can average over all the slices if I want — this will be i, this will be i plus 1, and there will be a summation over i from 1 to P. I can use any slice of the path to increase the statistics of my calculation. So this is the calculation of the energy, one example of the observables that one can obtain. People have calculated all sorts of structural properties, even excitation spectra through analytic continuation of this imaginary time evolution — all sorts of quantities: condensate fraction, superfluid fraction. Let me show you the superfluid fraction calculation; the energy was only one possible example, and this other example is more interesting. We can define the superfluid fraction by thinking of a cylindrical vessel which is slowly set into rotation, with the helium originally at rest. If the helium is in the normal fluid phase, then it will follow the rotation of the vessel. If it is superfluid, it
will stay at rest: it is as if it had no moment of inertia. So the superfluid can be characterized by a missing moment of inertia. The normal fraction of the liquid is 1 minus the superfluid fraction of the liquid, and this is equal to the actual moment of inertia divided by I_c, where I_c is the moment of inertia of the classical system, of the normal fluid. So you see that the superfluid fraction is the missing fraction of the moment of inertia: if the moment of inertia is 0, then rho_s will be 1; if the moment of inertia is equal to the classical moment of inertia, then rho_s will be 0. So this is the definition that we use. What we need, to compute the superfluid fraction, is the moment of inertia. The moment of inertia is defined as the derivative of the angular momentum with respect to the angular velocity, at angular velocity equal to 0. We are not in a position to simulate a rotating system, which is a time dependent problem. What we imagine to simulate is a system in a rotating frame, and the density matrix changes accordingly because of the rotating frame. So we are going to do this derivative in the rotating frame; then we set omega equal to 0 and go back to the rest frame, where we can do the calculation. So we have to use this in this derivative here. This is the trace of L times many factors which come from the Trotter breakup of this guy. When I take the derivative with respect to omega, I get a factor tau L from the derivative of the first factor; then the derivative of the second factor, e to the minus tau, H minus omega L, gives another such term, and so on and so forth: I have one of these terms for each of the factors that I have introduced in the Trotter breakup. Then I let omega go to 0. By the way, the derivative of zeta doesn't contribute in the limit of omega going to 0: there would be a term also from the denominator, but it doesn't contribute, because it contains powers of the average of the angular
momentum, and at omega equal to 0 these guys are 0. So we don't need to consider the denominator, and this guy at omega equal to 0 is equal to — you see that this is our original density matrix, and we can do the same simulation as before. We just need to calculate the effect of these guys on the short time density matrix. L in coordinate representation is a summation over all the particles of minus i h bar times the derivative with respect to theta — let's say L is the z component; for particle i, this is r i and this is theta i — and this is the derivative of this guy here in coordinate representation. So we have derivatives of this guy, which contains the Gaussian of the kinetic energy operator and the exponential of the potential operator. The contribution from the potential operator, if the potential is a pair potential with spherical symmetry, is 0: you can do the derivative and check it explicitly. So only the kinetic energy contributes. So the terms that we have to consider in doing these derivatives are of the kind: the derivative with respect to theta of e to the minus, x minus x prime squared, minus, y minus y prime squared, over 4 lambda tau. This is the factor in the Gaussian that depends on the angle theta for a given particle; the potential has zero derivative with respect to theta, and this is the only part that depends on theta. So for each link I only have to take this derivative, for each factor of L that appears here. x is equal to r cosine theta, so d by d theta of x is equal to minus y, and d by d theta of y is equal to x. So this derivative gives 2 times, x minus x prime, times y, minus 2 times, y minus y prime, times x — times the Gaussian; I am forgetting all the constant factors in front, and the signs, and so on and so forth. This is basically the object that we need: the 2xy minus 2xy terms cancel, and what is left is 2 times, x y prime minus y x prime, which is equal to twice r cross r prime, the z component. Here r — well, I should use another name — is the planar component of the position vector. So this is the result of applying the angular momentum to one
of these guys. Let me draw a picture: here is the z axis; this is r of particle i at time slice j, this at slice j plus 1, and so on and so forth. The vector product is twice this area, and you see that I have a contribution from the first time step, the second time step, the third time step, and so on: I have this area, I have this area, and if this is the projection of the path of this particle — the particle moves in imaginary time along the path — then this guy is the projected area of this particle, because I have a term for each time slice: first, second, and so on. There is one special slice, the first one, where I have L squared, so I have to take another derivative of this quantity. When I differentiate the Gaussian once more, I get the square of what I had before, and then I have another term, in which I differentiate the factor that appears in front of the Gaussian after the first derivative. So the first derivative is this factor times the Gaussian; when I differentiate the Gaussian once more I get the square of this, and when I differentiate the prefactor this guy gives a contribution — I am not sure about the sign here — plus or minus twice, x x prime plus y y prime. So now I have one of these objects, which comes from this L squared, and then for each of the other terms in this sum I have the product of two Ls, which produces something that is the square of the area, while this special term here gives the classical moment of inertia. Let me just remember that L is a summation over all the particles — I erase this — so this is a summation over all the particles, and this summation over j runs over the sequence of time slices. So I have one factor of this, I have another factor of this, because I have two Ls, and I have a term I_c, which is the summation over i and j of m times r i j perpendicular squared. So the term that comes from the L squared here is a factor which is proportional to the classical moment of inertia
and all the other guys give the square of this projected area. Everything comes from the definition of the moment of inertia: writing the angular momentum in a rotating frame, taking the derivative, and using these simple rules. Then you have to put all the factors together, and all the indices of the particles and things like that, but eventually the result is that the superfluid fraction is 2m over beta lambda, times the average of the square of the area, divided by the classical moment of inertia: rho_s over rho equals 2m times the average of A squared, over beta lambda I_c. This is the result for the superfluid fraction of this helium in the cylindrical vessel. Why is it interesting? Well, it tells you two things. One is that it doesn't matter how many helium atoms you have: this is a definition of superfluidity which doesn't require a bulk phase transition, but also manifests itself in small systems. There are experiments which see manifestations of superfluidity in helium clusters with as few as four particles, and the calculation of this quantity reproduces these experiments quantitatively. The other thing is that, because the superfluid fraction is proportional to this area, we learn something about how the paths are distributed. Imagine you have your cylinder here. At high temperature you have small paths with no exchanges, maybe a few exchanges: this guy has two particles and this guy has maybe three. These are the paths of all the particles viewed from above in the cylinder. What is the area? Each of these paths has a random direction, so these are small areas that add up incoherently, and the superfluid fraction is zero: there is no net area — this one is maybe plus, this one is minus — and you only average over small areas with opposite signs. The area here is a summation over all the particles: if this particle goes around one way and the other goes the other way, they cancel each other. How can you possibly have superfluidity? You have to have a big area, such as this one: no matter what the other particles do, this one will prevail and will give a signal of
superfluidity. How can you have such a long path? By having long exchange cycles: this guy contains many particles. The only way you can have superfluidity is through a long permutation cycle, and this you read directly from the fact that the superfluidity is proportional to the area squared, and that this is the only way you can get a big area in this case. So in this classical mapping — the mapping between quantum particles and ring polymers, but interlaced ring polymers — one gets this picture of the superfluid fraction, totally classical, because you simulate this as a classical system, which is, I think, interesting. In periodic boundary conditions — this was for a finite system — you can use the same result by imagining two very large concentric cylinders: R is the radius and d is the spacing between the walls, and the helium system is inside here. So this is, at least in this direction here, like a periodic boundary condition. But if you apply this formula for the superfluid fraction to rotations about the z axis of this guy, you see that the only paths which have a large area are these guys here: if a path goes some distance in one direction and comes back, this is a small area; the only possible big area comes from paths that go all the way around. Now, if you imagine that this is the periodic boundary of the system, then you open this up, and you see that in periodic boundary conditions the only paths which contribute to the superfluid fraction are those which, by exchanging through many particles, wind around the boundary conditions. This is a path with winding number one; this is a path with winding number zero. Only these guys contribute to the superfluid fraction in the thermodynamic limit, and this again you learn from this area estimator, in a cylindrical geometry adapted to the periodic boundary condition case. In terms of this winding number, rho_s in periodic boundary conditions becomes: rho_s over rho is equal to the average of W squared, times L squared, over 2 lambda beta N. You define the winding number W as 1 over L times the summation over i and j of, r i at slice j plus 1, minus r i at slice j. This is the definition of
the winding number. This is something that adds up all the displacements along the path: in this case you have a net displacement, in this case you have zero net displacement. So the only paths that do contribute to the superfluid fraction are those that wind around the boundary conditions, and again this is a consequence of the area estimator — the relation between winding number and area. So this was the second and last example of calculating properties; I think it is particularly interesting. Now we move to a description of the algorithm: how to simulate this, how to calculate these integrals. So, this is a path for a system in which there is a non-zero winding number. Since each particle cannot move too much in a single time step, from zero to beta this path has to include many particles. And it is important to be able to change the winding number, because we want to make statistics: we want a fluctuating quantity so that we can take its average; we don't want a configuration that gets stuck and never changes. We want to be able to change this, and it is extremely difficult with ring polymers. First of all, let me make a drawing of two particles. If we want to implement an exchange with closed polymers — these are two polymers — we have to cut them and try to reconnect them in this way. So we have to move at least two particles. But if you attempt two-particle exchanges here, nothing happens to the winding number. For instance, I can include this guy in the permutation cycle — a few particle exchanges — but the winding number didn't change; I can remove this guy, but the winding number didn't change. So the only way to change it is to propose a global move which contains more or less all the particles of this cycle. The number of particles that I have to include in a move, if I want to change the winding number, is roughly the linear
distance — the linear size of the simulation box divided by the interparticle distance — and this is a large number. So you can imagine that constructing an exchange between many particles, and reconstructing the path of all these particles to reconnect them in a different way, is a nightmare. This way of calculating the superfluid fraction of helium has been successful for, say, 100 particles, not more. With the worm algorithm, which I am going to describe now, you can calculate with 10,000 particles, whatever. And the idea is: what happens if I am allowed to cut open one of these rings? So let me draw here a path which contains perhaps too many particles. Suppose I am allowed to include in my configuration space a configuration in which this ring is broken. Now I start moving this guy. I am going to move only one particle at a time — not even two, one particle at a time — and I move this particle like this, and if I accept the move I erase this; this is now the continuous line. I accept the move. Now I can move this particle here; if I accept the move, this is the new configuration; and if I continue, I perhaps go and reconnect with this guy. So here, with a sequence of four moves of one particle each, I have killed the winding number. This is a pictorial example that should show you that it's easier to change the winding number if you are allowed to have an open polymer among the closed ones. So this is what we are going to do with the worm algorithm: we have in our configuration space not only closed paths, with exchanges and so on, but also an open path. This is a configuration of a new kind that we want to include in our sampling, because we believe that it will eventually be able to change the winding number more easily. We want to use this configuration, and what do we need in order to use it? We need the statistical weight of the configuration, because this is what enters the Monte Carlo calculation: if I happen to generate this configuration, I have to decide whether to accept or reject it. What is the statistical weight
It is not different from what we did earlier: we can calculate the statistical weight of this configuration despite the fact that it seemingly differs from what we had before. Let's think of what happens in one link. Let's take this link; a link is a pair of two consecutive, connected time slices. This link has three particles, and the contribution of this link to the statistical weight of the configuration is rho; say this is time slice j and this is j plus one, and I multiply the statistical weights of all the links. When I come to this point, we have rho(R_k, R_{k+1}; tau), where rho is a product of Gaussians times an exponential. The only difference between this guy and this guy is that this one has N particles and that one has N plus one particles. So I can add a factor for each link, and I reconstruct the statistical weight of this configuration without any problem with respect to the previous situation.

Let's be a little bit more formal. I am going to sample configurations which correspond to a generalized partition function, which is now equal to Z plus Z prime. Z is, in a sense, the old partition function; however, since we now have to be able to change the number of particles, it is not going to be the canonical partition function but the grand canonical partition function. Z corresponds to the situation where there is no open path; still, the number of particles can change, for instance this guy can grow, and I end up with a different number. When I have a configuration with closed rings only, it contributes to the canonical partition function. When I have a configuration with an open path, it contributes to the Green function, the one-particle Green function of the system. The one-particle Green function of the system is the amplitude to create a particle here at this time and to destroy it there, as a thermal average at equilibrium with the other particles. I call this point M; this is time tau times j_M, that is tau times j_i, this is position r_i and this is position r_M.
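The statement that rho for one link is "a product of a Gaussian times an exponential" is the primitive Trotter approximation introduced at the start of the lecture. A minimal sketch of such a link weight, assuming flattened 1-D coordinate arrays, a hypothetical `potential` callable, and lambda = hbar^2/2m as a free parameter:

```python
import numpy as np

def link_weight(R_j, R_j1, tau, potential, lam=0.5):
    """Statistical weight of one link in the primitive approximation.

    rho(R_j, R_{j+1}; tau) ~ free-particle Gaussian propagator times
    exp(-tau * V).  `lam` = hbar^2 / (2m); `potential` is a hypothetical
    callable returning the potential energy of a configuration.
    """
    d = R_j.size  # total number of degrees of freedom in the slice
    gauss = (4 * np.pi * lam * tau) ** (-d / 2) * np.exp(
        -np.sum((R_j1 - R_j) ** 2) / (4 * lam * tau)
    )
    # Symmetrized potential factor exp(-tau*(V(R_j)+V(R_{j+1}))/2):
    return gauss * np.exp(-tau * 0.5 * (potential(R_j) + potential(R_j1)))
```

The full path weight is then the product of `link_weight` over all links, which is exactly why one extra particle on the worm just adds one extra Gaussian factor per link.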
If I integrate over all the other variables, this is a function G(r_i, r_M; tau (j_i minus j_M)), this function here; and its contribution to the partition function is the sum of all the statistical weights, including the integration over these variables. G is the thermal average of the amplitude to create the particle at one time and to destroy it at the other time, in the other position. And this is a physical quantity: it is known as the one-particle Green function. It is interesting because at time zero it is the one-body density matrix, so it gives you a way to calculate the one-body density matrix. But apart from the name, the sum over all the statistical weights is the generalized partition function of the worm algorithm. So apart from the formalism, the argument proceeds as before, except that we have new kinds of configurations.

Now, how do we move these configurations? Oh, by the way, there is also this constant C in front. When there is an open path, that open path is called the worm. When there is no worm, the configuration contributes to the canonical partition function: this is called the Z sector. When there is a worm, the configuration contributes to the Green function: this is called the G sector.

[In answer to a question:] Yes, we could put in as many worms as we want, but for the purpose of speeding up the simulation, in the sense of changing the winding number and so on, that is just a complication and not an advantage. If you were interested in the two-body Green function, for instance if you were able to simulate fermions and wanted to see pairing, this and that, it would be extremely interesting; and there is no problem at all: you draw another line, these are the new configurations, and that's it. You only have to devise moves that move all the coordinates, switch from the Z sector to the G sector and back, and possibly change the winding number.

So let me sketch the kinds of moves of the worm algorithm. We have to be able to move the coordinates, to displace the points where the worm is, and to switch between the sector with a worm and the sector without. There are pairs of moves, one the reverse of the other, and there is one kind of move which is its own complement.
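In symbols, the objects named in this passage can be written as follows. This uses one common convention for the imaginary-time Green function; normalization and the constant multiplying the G sector vary between references:

```latex
% Generalized partition function sampled by the worm algorithm:
% closed-ring (Z-sector) plus open-worm (G-sector) configurations.
Z_W = Z + Z'

% One-particle Green function: create a particle at (r_i, \tau_i),
% destroy it at (r_M, \tau_M), thermal average at equilibrium:
G(\mathbf r_i, \mathbf r_M;\, \tau_M - \tau_i)
  = \big\langle \mathcal{T}_\tau\,
      \psi(\mathbf r_M, \tau_M)\, \psi^\dagger(\mathbf r_i, \tau_i)
    \big\rangle

% At equal times it reduces to the one-body density matrix:
G(\mathbf r_i, \mathbf r_M;\, 0)
  = n^{(1)}(\mathbf r_i, \mathbf r_M)
  = \big\langle \psi^\dagger(\mathbf r_i)\, \psi(\mathbf r_M) \big\rangle
```

The integral of all the open-configuration weights gives the G-sector contribution, which is why sampling the worm amounts to measuring G.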
Insert and remove are a pair of this kind; they move between the sectors: if there is no worm, I can attempt to insert one, and if there is a worm, I can attempt to remove it. Open and close: suppose I have only closed paths; I attempt to open one of them, and if I open it, this disappears and this becomes a worm; close means that I can close a worm back up. Then, this guy: I can continue the worm, or I can try to remove part of the worm. And swap, which is the most useful one for the superfluid fraction, works like this: this is the worm, and I have other particles; starting from this point, I look some time slices later, and I maybe connect this worm to this position. If the move is accepted, this guy disappears and this becomes the new head of the worm. Or I may do it the other way: this is the old configuration, I try to connect this with an existing particle some time later, and if the move is accepted, this guy is removed. Swap is the move which is its own reverse. This set of moves is sufficient to sample the configuration space.

So you say: you never touch this guy? Well, I just have to wait until a worm comes here and swaps with this guy; then I begin to move this guy. So, just by moving the worm, I can sooner or later touch all the world lines and change the configuration completely.

Let me take one example of the a priori proposal transition matrix and acceptance rate of these moves. Let's take swap, which is the most important one, and let's make a picture. This is a move that happens in the G sector and remains in the G sector, so there has to be a worm; here it is. There have to be other particles, so this is our initial configuration. Starting from the head of the worm, I consider the positions of all the particles m time steps later. Here m is a parameter of the algorithm: in the Metropolis algorithm you always have a size of the move; if you make very small moves you always accept but you don't get anywhere, and if you make very large moves you always reject and, again, you don't get anywhere. So this is the size of the move.
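The bookkeeping of which updates are legal in which sector can be sketched like this. The move names "insert/remove", "open/close", and "swap" follow the lecture; "advance" and "recede" are the conventional names for "continue the worm" and "remove part of the worm", and everything else (sector labels, uniform choice) is illustrative:

```python
import random

# Updates of the worm algorithm, keyed by the sector they start from.
# Z sector: closed rings only.  G sector: a worm (open path) is present.
# insert/remove and open/close are mutually reverse pairs that switch
# sector; advance/recede grow or shrink the worm; swap is its own reverse.
MOVES = {
    "Z": ["insert", "open"],
    "G": ["remove", "close", "advance", "recede", "swap"],
}

def propose_update(sector, rng=random):
    """Pick one of the updates that is legal in the current sector."""
    return rng.choice(MOVES[sector])
```

In a real code the moves need not be chosen uniformly; any fixed set of proposal probabilities works, as long as they are accounted for in the acceptance test.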
So it has to be chosen appropriately. I take all the particles m time steps later and pick one of them, with this probability: R_i is the collection of these particles at that time step, one of them is called r_zeta, and this is the probability of picking r_zeta. It is probably the closest one, but not always. I need the other symbols: rho_0(s_0, s_1; tau) times rho_0(s_1, s_2; tau), and so on up to rho_0(s_{m-1}, s_m; tau). I sample these guys; these rho_0 are Gaussians, with s_0 equal to r_i and s_m equal to r_zeta. So I do a Levy construction, which we have seen, from here to there. By the way, the Levy construction actually samples this product divided by rho_0(s_0, s_m; m tau); this is a factor that comes from the probability of the Levy construction, and we have to take it into account. But basically, you make a Brownian-bridge Gaussian random walk started from i.

Then we have to know the probability of the reverse move. (Sorry, I am late, I am trying to speed up.) If this move is accepted, I will have alpha; going back from here, the reverse move is the one in which this is the head of the worm and it tries to close onto this position. The reverse move would have sampled rho_0(t_0, t_1; tau) up to rho_0(t_{m-1}, t_m; tau), with t_0 equal to r_alpha and t_m equal to r_zeta; so this would be divided by rho_0(t_0, t_m; m tau).

The acceptance probability is the minimum between 1 and pi(x') divided by pi(x), the statistical weight of the new configuration over the statistical weight of the old configuration, multiplied by T(x', x) over T(x, x'), the probability of the reverse move over the probability of the direct move. I have no time to do all the calculations, but what you find is that this probability is a rho_0 divided by a summation, which I call sigma; and I did not write the probability of choosing r_zeta starting from r_alpha, which is similar to this one, except with alpha instead of i.
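The Levy construction mentioned here (sampling the Brownian bridge exactly, bead by bead) can be sketched in a few lines. This is a 1-D illustration with lambda = hbar^2/2m as a free parameter, not the full swap-move bookkeeping:

```python
import numpy as np

def levy_bridge(s0, sm, m, tau, lam=0.5, rng=np.random.default_rng()):
    """Levy construction: sample a Brownian bridge of m links from s0 to sm.

    Each intermediate bead is drawn from the exact free-particle
    conditional Gaussian, proportional to rho_0(prev, s; tau) *
    rho_0(s, sm; remaining * tau), so the whole bridge is sampled with
    probability (product of rho_0 link factors) / rho_0(s0, sm; m*tau).
    """
    beads = [float(s0)]
    for k in range(1, m):
        remaining = m - k                      # links left to the endpoint
        prev = beads[-1]
        mean = (remaining * prev + float(sm)) / (remaining + 1)
        var = 2.0 * lam * tau * remaining / (remaining + 1)
        beads.append(rng.normal(mean, np.sqrt(var)))
    beads.append(float(sm))
    return beads
```

The division by rho_0(s_0, s_m; m tau) mentioned in the lecture is exactly the normalization of this conditional sampling, which is why it shows up in the proposal probability of the swap move.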
Eventually, the probability of accepting the move is the minimum between 1 and e to the minus delta V, where delta V is the change in the potential between the new and the old configuration, times sigma_i divided by sigma_alpha. I did not have time to do the whole calculation, but this is, more or less, the result.

So let me just recap the idea. You define a certain kind of move; you define the rules with which you choose to propose this move; and you calculate all the ingredients that enter the formula you know, which is the Metropolis algorithm: in this formula there are the statistical weights of the new and old configurations, times the transition probabilities of making the direct and the reverse move. I apologize that I did not have time to go through the moves step by step, but, more or less, just by thinking in terms of the statistical weights of the configurations and defining the probability of accepting each move, you get the rules for each kind of move which define the worm algorithm. And on Monday we will see this algorithm at work on the phase diagram of helium-4.
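The generalized Metropolis rule invoked throughout, accept with probability min(1, pi(x') T(x' to x) / (pi(x) T(x to x'))), is easy to state in code. The function below is a generic sketch of that rule, not the specific swap acceptance with its sigma factors:

```python
import random

def metropolis_accept(pi_new, pi_old, t_reverse, t_forward, rng=random):
    """Generalized Metropolis test.

    Accept with probability min(1, [pi(x') * T(x'->x)] / [pi(x) * T(x->x')]).
    `pi_*` are statistical weights of the new and old configurations;
    `t_*` are the a priori proposal probabilities of the reverse and
    direct moves.
    """
    ratio = (pi_new * t_reverse) / (pi_old * t_forward)
    return rng.random() < min(1.0, ratio)
```

For the swap move, the weight ratio reduces to exp(-delta V) and the proposal ratio to sigma_i / sigma_alpha, because all the Gaussian link factors cancel against the Levy-construction proposal probabilities.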