 about exploring BKT physics with cold gases. Thank you, Marcello. Hello, everyone. I'm very happy to be here again after the colloquium of yesterday. The topics of my two talks will actually build on that colloquium. I will try not to go as fast as yesterday; I will try to really set the ground for this Berezinskii-Kosterlitz-Thouless physics, and explain to you why two dimensions is not the same as three: we lose long-range order, but we gain other things. So I will go relatively slowly. For those of you who are very educated in this field, you will probably not learn so much, at least today, but I hope that I will take everyone to the next level. Just before I start: I have uploaded my slides on my own website, so if you have a tablet or a computer and you want to download them, tell me if it doesn't work, but it should work in principle. So, everyone has some time to note that: phys.ens.fr, tilde dalibard, all in small letters, then Trieste dalibard 1, a PDF file. Tomorrow it will be 2. OK, so let me start, and I start by again speaking about two dimensions, which will be the topic of today. Before going to fundamental physics, I would like first to say how important it is to understand what happens in two dimensions from the applied point of view. There are many devices in our technological world which are based on two-dimensional physics, and here I give just three of them, but there are many others. The first is quantum wells, which are everywhere in electronic devices. A quantum well is nothing but a 2D gas of electrons, confined strongly by playing with semiconductors and with electric potentials. Another place where two-dimensional physics appears is in high-Tc superconductors: it is now, I think, widely accepted that this superconductivity occurs because the Cooper pairs move in a two-dimensional environment. 
And another very famous example of 2D physics is graphene, these sheets of carbon atoms, which have very interesting properties for applications. So this motivates the study of two-dimensional physics. Now there is also a fundamental motivation, which is, as I mentioned yesterday, the fact that in 2D you do not expect the same type of phase transitions as in 3D. This was understood first by Peierls in 1935. The question Peierls was asking at that time was: if we lived in a 2D world, would physical objects like crystals or magnets exist? And the answer that Peierls brought, and that we will review thoroughly in the next slides, is the following one. At non-zero temperature, there is no true crystalline order in dimension 1 or 2. The reason is that if you take a possible arrangement of a crystal, say a square lattice like that, and you look at the displacement of this atom that I label 0 here, which I call u0, and the displacement of atom j, which I call uj, and I assume that I am at non-zero temperature, then I look at the variance of the relative displacement, (uj minus u0) squared, averaged over thermal fluctuations. The result that you get in 2D, and again we will show it rigorously in the following, but I am just announcing it here, is that this variance is proportional to temperature times the log of the distance between atom 0 and atom j. This means that if I take rj large enough, the log tends to infinity, and therefore, even if I know perfectly well where atom 0 stands, I will have no information about where atom j stands within a unit cell, because this quantity here will be larger than a squared, where a is the size of the unit cell. Therefore I completely lose long-range order: knowing where one atom is sitting does not give me any information about where another atom, far enough away, is sitting. 
And this was understood, as I said, by Peierls very early, just after the beginning of quantum mechanics. It was generalized 30 years later by Mermin, Wagner, and Hohenberg to any system with short-range interactions. These gentlemen showed that at non-zero temperature you cannot have a phase transition involving the breaking of a continuous symmetry, the continuous symmetry here being simply the formation of a periodic lattice of atoms. And this, as we will see again today, also applies to the case of Bose-Einstein condensation, where the symmetry that is broken when you Bose-condense a system in 3D is that your macroscopic wave function acquires a definite phase: so it is the U(1) symmetry of the phase. However, as I explained yesterday, and as will be the topic of these two lectures, there is still room for an unconventional phase transition. This was understood by Kosterlitz and Thouless in 1973 in a famous paper called "Ordering, metastability and phase transitions in two-dimensional systems". And last year, Kosterlitz and Thouless got the Nobel Prize, shared with Duncan Haldane. So this is what we want to study. Here is the outline of today's lecture. First I would like to review the argument by Peierls. Again, it may be something that some of you already know, but I think it is good to see this kind of argument because it will play a role in the following. So I want to show that there is indeed no long-range order in 1D or 2D, taking the example of a crystal. Then we will generalize this to the case of Bose-Einstein condensation, again in 1D or 2D. In the first part, I will deal with an ideal gas, with no interactions. In the second part, which may be after the lunch break, we will deal with an interacting Bose gas. There, we will be more careful by separating what are called phase fluctuations and density fluctuations. 
And we will show that interactions actually bring a new feature with respect to the ideal system: this idea of quasi-long-range order. Not true long-range order, which is forbidden by the Mermin-Wagner theorem, but quasi-long-range order. I will finish today's lectures by discussing the case of a harmonically trapped system, and speak of something which at first sight looks very strange, namely Bose-Einstein condensation of photons. But I will try to convince you that this is indeed possible and has been seen experimentally. In tomorrow's lecture, we will learn more about the role of vortices and we will really understand the BKT transition point, and I will discuss a nice feature that occurs in two dimensions, which is not present in 3D, which is the notion of scale invariance. So let's start with today's lecture and this argument by Peierls that there is indeed no long-range order in 1D or 2D. Let's start with one dimension, where it is simplest. The argument of Peierls goes in the following way. Suppose that I have my atoms, which interact by some short-range interaction, and I model this short-range interaction by a potential u of x which looks like this: a tail which is attractive at long distance, a local minimum here at distance a, and then a short-range repulsion. If I am at temperature zero, it is clear that, classically, the atoms will form a perfect crystal, and the distance between two neighboring atoms will be equal to this little a, where a is the position of the minimum. So at zero temperature, classically, there are no fluctuations; the answer is easy. In a quantum description I would have quantum fluctuations and it would be a bit more difficult, but let's do it classically for the moment. So I expect a perfect crystal at zero temperature. What happens at non-zero temperature? 
So I will excite some fluctuations, thermal fluctuations, and these fluctuations will be characterized by the spring constant here: if I replace this minimum by a parabolic approximation, the strength of the oscillator is this kappa, which is the second derivative of u with respect to x taken at x equal a. So I will now model my chain at non-zero temperature, and the best way to do that is to use periodic boundary conditions. Instead of having a chain which runs from minus infinity to plus infinity, I put my particles on a circle: N particles on a circle of length N times a. At temperature zero, again, I have one atom every a; everything is perfect. Now, at non-zero temperature, atom one may move by a quantity u1, atom two may move by a quantity u2, and so on. If I look at the energy of this system, I have both kinetic energy and potential energy due to the interaction between the atoms. The kinetic energy is here; it is proportional to uj dot squared, where uj, again, is the displacement with respect to the equilibrium position. The potential energy comes from the distances between atom one and atom two, atom two and atom three, and so on. It would be minimum if each distance were a, and if the distance is not strictly a, it is proportional to the square of uj plus one minus uj, the displacement of atom j plus one minus that of atom j, and all this is summed over j. The hypothesis here is that I am allowed to expand the potential between the atoms, so uj plus one minus uj has to be small compared to a, but I do not assume that uj itself is small compared to a; that is, atom j may move by a lot as long as its neighbors move by nearly the same amount. Okay, so that's the hypothesis. So how does one treat that? Well, I suppose that you have all seen this in your lectures. 
You can write the equation of motion for the variable uj: uj double dot, the double time derivative, is coupled to uj itself and to the two neighbors, j plus one and j minus one. A simple way to solve this equation is to go to Fourier space, to Fourier-transform the equation. So I introduce uq with a hat, which is just the Fourier transform of uj. Here q is a wave number, which I choose in the Brillouin zone; Emmanuel has described this in his lecture, so I'm not going through it in detail, but q goes from minus pi over a to plus pi over a. In Fourier space you get normal modes: n independent equations if you have n atoms, and each equation reads uq double dot plus omega q squared uq equals zero, where the frequency of mode q is given by the modulus of sine of qa over two, q again being the wave number, times a coefficient which involves the spring constant and the mass of an atom. Again, this is something I think you have all seen once, so I'm going pretty fast, but if you have any question, as for the other speakers, don't hesitate to raise your hand. At this stage, I suppose everyone is happy. Yes? Okay. So then we look for the thermal equilibrium of this system. We have n independent harmonic oscillators, because this is the equation of a harmonic oscillator. At thermal equilibrium, a given oscillator is decoupled from the others, because they are independent; the equations are not coupled. So I know that the thermal average of uq times uq prime will be zero if q is not equal to q prime, and if I take q equal to q prime in this equation, then I have the equipartition of energy, because I am doing classical statistical physics: one half of m omega q squared times the average of uq squared equals one half kT, which tells me that the average of uq squared is proportional to kT over m omega q squared. This is the important result, which will actually remain valid in 2D or 3D. So now let's go back to the problem that Peierls had raised. 
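The equipartition result can be made concrete with a minimal numerical sketch (not from the lecture; units kappa = m = a = kBT = 1 are assumed): each normal mode of the ring carries a mean-square amplitude kT over m omega q squared, and summing the modes gives exactly the variance of uj minus u0 that we are about to study, showing its linear growth with separation in 1D.

```python
import numpy as np

# Minimal sketch (assumed units kappa = m = a = kT = 1): by equipartition each
# normal mode q of a ring of N atoms carries <|u_q|^2> = kT / (m * omega_q^2),
# with the phonon dispersion omega_q^2 = (4 * kappa / m) * sin(q * a / 2)**2.
# Summing over the modes gives the variance of u_j - u_0 exactly.
def chain_variance(j, N=4096, kT=1.0, kappa=1.0, m=1.0, a=1.0):
    q = 2 * np.pi * np.arange(1, N) / (N * a)           # ring modes, q != 0
    omega2 = (4 * kappa / m) * np.sin(q * a / 2) ** 2   # dispersion squared
    # <(u_j - u_0)^2> = (2 kT / (N m)) * sum_q (1 - cos(q j a)) / omega_q^2
    return (2 * kT / (N * m)) * np.sum((1 - np.cos(q * j * a)) / omega2)

v10, v100 = chain_variance(10), chain_variance(100)
print(v10, v100)   # the variance grows linearly with the separation j
```

For a ring this mode sum can be done in closed form, kT j (N - j) / (N kappa), so for j much smaller than N the variance grows linearly with j, which is the 1D divergence discussed next.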
Let's look at the fluctuation of the distance between two atoms. I take atom zero, which moved by a quantity u0 because of fluctuations, and atom j, which moved by a quantity uj, and I look at this quantity, uj minus u0. I plug in the Fourier expansion that I wrote before, where xj here is the position of atom j; atom zero is at position zero, so I get a one here. And I remind you that I now have to take the thermal average of this, knowing that the average of uq squared is proportional to the temperature divided by omega q squared, and this omega q is itself proportional to sine of qa over two. So I take the square of this. Well, first I look at the average value of the quantity itself, but this is zero, because all the uq are zero on average: on average the atoms stay at their equilibrium positions, and I am interested in the fluctuations. So I take the square, and I take the thermal average. The thermal averages of the uq squared come out; this is this kBT over m which is here, and I am left with a sum over all the modes of the system of sine squared of q xj over two, divided by this frequency omega q squared. This is what needs to be evaluated, and it is when evaluating this that we will see whether Peierls is correct, and how one goes from 1D to 2D to 3D. In order to evaluate it, I am going to make an approximation, which will be validated later on. I will assume that in low dimension the physics of Berezinskii-Kosterlitz-Thouless is dominated by low momenta, or if you prefer, by large wavelengths, and you will see why in a moment. So I can assume that q is small compared to pi over a, which simplifies the algebra: I can linearize the sine here, and I get simply omega q proportional to q. This just means that I am looking here at phonons, at sound waves only, and these are the excitations which dominate the physics. 
Okay, so this omega q squared which is here, I will replace by c q, squared, where c is the sound velocity. The second step is to replace this discrete sum, which is never easy to evaluate, by an integral. The integral over q runs from zero to the maximal value of q, which is pi over a. So in the integral I get this sine squared, which is not modified, and the omega q squared that I had here has been replaced by c times q, squared, so I get a q squared in the denominator. Keep this in mind, because this is really the structure that I will discuss in 1D, 2D and 3D. And the coefficient in front is kBT over something which involves the parameters of the problem, the spring constant kappa and the distance between two sites, little a. So let's try to evaluate that, if there are no questions at this stage. It's an integral, and if I replace the upper bound by infinity, I can actually calculate the integral exactly. The result that I get in 1D is again proportional to kBT, and the integral of sine squared of x over x squared, integrated from zero to infinity, is equal to pi over two. So I finally get this result here. First I can look at which modes contribute the most, and I see from this integral that the modes which contribute the most are those with q on the order of pi divided by xj. That is, if I take my two atoms, one atom at zero and one atom at j, the modes which contribute the most to the fluctuation of the distance between these two atoms are the modes whose wavelength is on the order of this distance. Since I am interested in the long-distance behavior, these modes indeed have very low q, which validates the approximation of looking only at sound waves. 
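Both claims in this step, the value pi over two of the integral and the dominance of modes with q up to about pi over xj, are easy to check numerically. A small sketch (not from the lecture; a = 1 assumed, plain Riemann sums):

```python
import numpy as np

# Check 1 (sketch): integral of sin(x)^2 / x^2 from 0 to infinity is pi / 2.
dx = 5e-4
x = np.arange(dx, 2000.0, dx)
value = np.sum(np.sin(x) ** 2 / x ** 2) * dx
print(value, np.pi / 2)                      # close to pi/2 = 1.5707...

# Check 2 (sketch): for two atoms separated by x_j, the modes with q up to
# about pi / x_j carry most of the weight of the 1D fluctuation integral.
xj = 50.0
q = np.arange(dx, np.pi, dx)
integrand = np.sin(q * xj / 2) ** 2 / q ** 2
frac = np.sum(integrand[q <= np.pi / xj]) / np.sum(integrand)
print(frac)                                  # the dominant fraction
```

The printed fraction comes out well above one half, confirming that the long-wavelength modes dominate, which is what justifies the phonon (linearized) approximation.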
And more importantly, and again this is what Peierls showed, when you look at this, you see that this function gets larger and larger as xj gets larger and larger. This is what I said before: even if I know perfectly well where atom zero is sitting, that is, even if I know perfectly the value of u0, if I take j far enough away, I will have no information about where atom j is sitting within a lattice spacing, because this whole quantity will be larger than little a squared. So I will not have a true crystal, I will not have any long-range order: no long-range order in 1D at non-zero temperature. Is that clear for everyone? Everyone? Okay, so what about 2D now? In 2D you can do exactly the same treatment. It is a bit more complicated from a purely technical point of view, because in 1D, since I had only one dimension, my atoms had to move along that direction. In 2D, an atom may move in this direction or that one, so when I look at the sound waves I have to look at the so-called polarization of the sound waves, the orientation of the motion of the atoms. So technically it is a bit more complicated, but here I am not so interested in the technical details; I just want to look at the global structure of the result, and the global structure will be exactly the same as in 1D. That is, when I calculate the variance of the displacement uj minus u0 here, I will still get something proportional to the temperature kBT. I will still get an integral over the Brillouin zone, which is now not a segment but a square, with qx and qy going between minus pi over a and pi over a. In this integral, as before, I have a sine squared of q times the distance between the two atoms I am considering, divided by q squared, because again I am looking at sound waves, and now I integrate not over a single dimension of q but over dqx and dqy, because I am in 2D. 
So the only difference at this stage is this d2q here, telling me that I have to integrate over two directions and not one. But this is enough to change the physics a lot with respect to 1D. Indeed, when you want to calculate this integral: in 1D I could calculate it exactly, in 2D I have to make some approximation. An approximation I can make, if I restrict myself to q on the order of pi over rj or larger, is to replace the sine squared by one half. I could do something more precise, but at this stage I can do that to get a rapid result. So I replace the sine squared by one half, I can get rid of it, and I am left with one over q squared times d2q, where d2q is the element of surface in the Brillouin zone. If I take polar coordinates, this is 2 pi q dq. Yes, you have a question? What is that quantity? That? So I look at the displacement of atom j with respect to its equilibrium position; I call it uj. Likewise u0 for atom zero. It is not rj, okay? It is the displacement with respect to the equilibrium position. Okay, thanks for clarifying this. And so here, because I am in 2D, I get in polar coordinates 2 pi q dq. So I am left with an integral over q: I have one over q squared times q, so one over q, and the integral of one over q is a log, okay? So I get the log of the upper bound divided by the lower bound. The upper bound is pi over a; this is the edge of the Brillouin zone, written here. Pi over rj is the smallest wave vector q for which this approximation holds. And then I am left with this log of rj over a. This is the result I was announcing at the beginning: the variance of the displacement with respect to equilibrium of j minus zero here is proportional to the temperature T. 
And it grows logarithmically with the distance rj, which means that, again, if I take rj very large, even if I know perfectly well the value of u0, the value of uj will not be defined to better than little a, so I will not know where atom j sits within a unit cell. So again I do not have long-range order: knowing where one atom in the crystal is, is not enough to know where all the other atoms are, if I look far enough. However, you already see a difference with respect to 1D. In 1D, the divergence was linear in rj; now it is only a logarithmic divergence. So the divergence is not as strong, the loss of long-range order is not as big, and because it is only logarithmic, people often speak of quasi-long-range order instead of a total absence of long-range order. And just to finish the argument, you may wonder what happens in 3D. In 3D you will have the same integral as before, so you get this kBT in front of an integral of sine squared over q squared. But now you have a volume in momentum space, d3q, and when you go to polar coordinates, d3q becomes q squared dq. The q squared that appears cancels with this one, and now I have no divergence at all at low momentum, okay? So I had a linear divergence in 1D, a logarithmic divergence in 2D, and now in 3D there is no divergence at all. I can calculate this integral completely, and I get something which is kBT over kappa, independent of rj. This tells us that in 3D we can indeed have true long-range order. If this quantity that I have here is much smaller than a squared, which again is the square of the lattice parameter, then if I know where one atom is sitting in this crystal, I know where all the other atoms are sitting, within some fluctuations which are given here, and these fluctuations are small compared to the lattice spacing, okay? 
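The whole dimension count can be summarized in one small numerical sketch (not from the lecture; a = 1 assumed): the same fluctuation integral, with the Jacobian q to the power d minus one, diverges linearly with r in 1D, logarithmically in 2D, and stays bounded in 3D.

```python
import numpy as np

# Sketch (assumed units, a = 1): the fluctuation integral
#   integral_0^pi  q^(d-1) * sin(q r / 2)^2 / q^2  dq
# grows linearly with r in 1D, logarithmically in 2D, and is bounded in 3D.
def fluct(r, d, n=200_000):
    q = np.linspace(np.pi / n, np.pi, n)
    return np.sum(q ** (d - 1) * np.sin(q * r / 2) ** 2 / q ** 2) * (q[1] - q[0])

for d in (1, 2, 3):
    print(d, fluct(100, d), fluct(1000, d))
```

Comparing r = 100 with r = 1000: in 1D the value grows by roughly a factor of ten, in 2D only by the modest amount a log allows, and in 3D it barely changes at all, which is the "no divergence" case that permits true long-range order.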
And there is a criterion, the Lindemann criterion, that this should actually be less than about 10% of the lattice spacing if we do not want the crystal to start melting. So in 3D, with this argument, I get true long-range order, but in 1D and 2D I do not. And the physical reason for that, which is hidden in this Jacobian, is the fact that in 1D or 2D you do not have enough neighbors. In 3D, if an atom wants to move out of its equilibrium position, it is going to frustrate many people, okay? People on the left and on the right, in front of me, behind me, above and below. So it is very unlikely that an atom moves a lot, because many frustrations are caused when it moves. In 1D, you have only two neighbors, one on the left and one on the right, and if you move like this, and this neighbor accepts to also move to the left, it is easy to have a big fluctuation propagating. In 2D it is marginal, and this is why we get a log in 2D, okay? So the idea I want to convey here is this lack of neighbors in low dimension, which forbids true long-range order. Yes? Ah, good question: how does this apply to graphene? There are two answers to that in the literature, and I don't think people have really settled which one is correct. The first answer is that when you apply this to graphene and you take the value of kappa, the distance rj over which you lose long-range order is very, very long, much larger than any graphene sheet that can be produced. So it is not so relevant for graphene from a practical point of view, given that graphene sheets are micrometer-sized or tens of micrometers in size. Now there is a more elaborate argument, which would apply even if we could produce an infinite graphene sheet, which is that graphene is not really flat. 
Graphene can bend, and this bending introduces a kind of long-range interaction: if I take into account the fact that the graphene sheet can bend like this, then I have an interaction between two distant atoms. Let me maybe draw my graphene sheet so you see what I mean. I'm not sure I can draw a graphene sheet easily, but do you see the graphene sheet that I'm trying to draw? Yes. So I have an interaction between these two atoms when it bends. This introduces a small component of long-range interaction in the system, because of the possibility of the graphene sheet to bend, and this is enough to restore, on purely theoretical grounds, true long-range order. So graphene is not something flat; it has this possibility to bend, it is a crumpled surface if you want. Yes? Even a small bend? Yes, a small bend is enough, because a log is very fragile: it is easy to get rid of a log if you introduce some new physics in the system. The question is not settled, and I am not sure this is relevant for any practical graphene sheet, okay? Yes, then it might be relevant, yes. Okay. So now that we have seen this example for the crystal, let's go to Bose-Einstein condensation, which is what we are interested in in this set of lectures. First of all, I would like to start with the case of an ideal gas, with no interactions. Again, these are probably things that many of you have already seen. I take particles obeying Bose statistics, and I confine them in a box at non-zero temperature. The first thing I would like to say is that irrespective of the dimension, when I take a box, or actually any kind of potential, I always get something that one can call Bose-Einstein condensation. That is, the number of particles that can be placed in the excited states of the box is always bounded. This comes from the Bose law, which I will use a lot, so I might as well write it on the board. 
So the number of particles in a state of energy E, as you know from the Bose-Einstein law, is one over the exponential of E minus mu divided by kBT, minus one, okay? This is something you have all seen. Let's suppose that the lowest energy of the system is called E0, and I set it to zero, just a convention for the energy. Then in such a formula, the one written on the board, mu always has to be smaller than E0, which again is zero here. Otherwise, if I allowed myself to take mu larger than E0, this population would become negative, which would be nonsense, okay? So in a Bose-Einstein system, by contrast with a Fermi system, the chemical potential is bounded: it cannot take any value, it must always be lower than the energy of the ground state. And therefore it is just a mathematical exercise to see that if I take the sum of all the populations but the ground state, so I exclude p equal zero from this sum, this sum is always smaller than the same sum without the mu here. And this latter quantity, for a discrete sum, is finite; I can calculate it. Therefore the number of particles I can put in the excited states at this temperature and this chemical potential mu is always bounded, which means that if I keep the same temperature and constantly add particles, mu is going to vary, but mu always stays smaller than that bound, so the number of particles in all the excited states will never exceed this value, and therefore all the extra particles have to accumulate in the ground state. So in this sense, Bose-Einstein condensation in a finite system with discrete levels is very simple, just an inequality like that, and you don't have to discuss anything more. And this is true in 1D, 2D, or 3D, okay? 
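This inequality is easy to see numerically. Here is a minimal sketch (not from the lecture; a made-up discrete spectrum for a 2D box, with an assumed level scale and kT = 1): however close mu comes to zero from below, the total excited-state population stays below its value at mu = 0.

```python
import numpy as np

# Sketch (assumed spectrum and units, kT = 1): Bose-Einstein populations of
# the excited states of a finite box with discrete levels.  The total excited
# population is bounded by its value at mu = 0, for any allowed mu < 0.
def n_excited(mu, kT=1.0, nmax=25):
    nx, ny = np.meshgrid(np.arange(1, nmax + 1), np.arange(1, nmax + 1))
    E = 0.05 * (nx ** 2 + ny ** 2 - 2.0)      # levels, ground state set to E = 0
    E = E[E > 0]                              # exclude the ground state
    return np.sum(1.0 / (np.exp((E - mu) / kT) - 1.0))

bound = n_excited(0.0)                        # maximal excited population
print(n_excited(-1.0), n_excited(-0.01), bound)   # increasing, but bounded
```

Adding more particles at fixed temperature pushes mu upward toward zero, but the excited population can never exceed the printed bound, so the surplus must go to the ground state.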
Now, the question that was not trivial in Einstein's paper, and which actually caused a lot of debate, and the notion of the thermodynamic limit was actually introduced because of this problem raised by Einstein, is what happens when the size of the box tends to infinity. If I keep the temperature constant and let the size of the box tend to infinity, keeping the same density, what happens? When you do that, the discrete structure of the levels disappears; my discrete set of states turns into a continuum. So I have to replace the discrete sum by an integral, and this integral involves the density of states, D of E, divided by the same denominator, the exponential of E over kBT minus one. And the whole question, again related to low wave vectors, low momenta, or if you prefer low energies, is: does this integral converge at E equal zero? As you have probably all seen, it depends on the shape of this density of states. If you go to 3D, having written this integral here, the density of states in a cubic box, when the size of the box tends to infinity, varies like the square root of the energy, and the denominator at low energy is typically linear in the energy. So you get an integrand which is the square root of E divided by E, so one over the square root of E, and this integral converges at zero. Therefore the number of particles, or more precisely the density, that you can put in the excited states remains bounded when you take the thermodynamic limit, and this Bose-Einstein condensation that we had for a finite-size system survives in the thermodynamic limit, because this integral converges, okay? 
So again: for a discrete system you always have BEC, and when you take the thermodynamic limit, if this integral converges, the density that you can put in the excited states is bounded, and therefore your BEC will survive. In 3D this is the case. When you go to 2D, the density of states D of E is constant, and therefore the integrand that you have to deal with now is one over E, because the denominator is proportional to E, and this integral diverges. This tells you that the Bose-Einstein condensation phenomenon does not survive when you take the thermodynamic limit: the density that you can put in the excited states can now be infinite, and therefore there is no macroscopic accumulation of particles in the ground state, okay? I wanted to make this clear for everyone. Is it okay? No questions? Okay. Still, it is interesting to see what happens when you take a Bose gas in the thermodynamic limit in two dimensions and ask where the particles go. I told you that they do not accumulate in the ground state, but still there is a kind of saturation which occurs. In order to show that, I have taken this Bose-Einstein law, which is written here. I am in a box, so I take E equal p squared over 2m, and I plot the momentum distribution in this box for various values of the phase-space density, rho times lambda squared. This is a quantity that I will use a lot: the phase-space density is the spatial density rho times lambda T squared, where this lambda T is the thermal wavelength, which I just rewrite here, h divided by the square root of 2 pi m kBT. Varying the phase-space density is a way to vary the degeneracy of the gas, to go from the classical regime to a more quantum regime. So here I have plotted, in various colors, the momentum distribution N of p in the box, as a function of the momentum p, for various phase-space densities. 
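The convergence argument above can be checked directly. A sketch (not from the lecture; kT = 1, mu = 0, arbitrary units): compute the excited-state integral down to a lower cutoff eps and watch what happens as eps shrinks.

```python
import numpy as np

# Sketch (kT = 1, mu = 0, arbitrary units): the excited-state integral
#   integral  D(E) / (exp(E) - 1)  dE
# computed down to a lower cutoff eps.  With D(E) ~ sqrt(E) (3D) it converges
# as eps -> 0; with D(E) ~ constant (2D) it diverges like -ln(eps).
def excited_integral(dos_power, eps, Emax=30.0, n=200_000):
    E = np.geomspace(eps, Emax, n)            # log-spaced grid to resolve E -> 0
    f = E ** dos_power / (np.exp(E) - 1.0)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(E))   # trapezoid rule

for eps in (1e-2, 1e-3, 1e-4):
    print(eps, excited_integral(0.5, eps), excited_integral(0.0, eps))
```

The 3D column settles to a finite value (the bounded excited density that makes condensation survive), while the 2D column keeps growing by a constant amount per decade of eps, the logarithmic divergence that kills BEC in the thermodynamic limit.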
So let's take a phase-space density of 0.1, a gas which is very weakly degenerate, essentially a Maxwell-Boltzmann gas. Then N of p is just a Maxwell distribution, a Gaussian distribution. I have taken semi-logarithmic coordinates here, so a Gaussian corresponds to a parabola like that, and this is the blue curve here. The unit that I put here is the momentum p, but rescaled to a dimensionless variable: I use p lambda T over h bar. When this is equal to one, if you square it and take the value of lambda T which is written on the board, you get p squared over 2m equal to kBT over 4 pi, so the value one here corresponds to the situation where the kinetic energy is on the order of kBT. So at low phase-space density, no surprise, this is Maxwell-Boltzmann. When I increase the phase-space density, I see a bump of particles accumulating at low momenta, momenta such that p lambda T over h bar is smaller than one, or if you prefer, kinetic energies smaller than kBT. So you see that when I put more and more particles in, I do not get a Bose-Einstein condensate, because I am in 2D; a Bose-Einstein condensate would be an accumulation of particles at p equal zero here, which is not what I see. But I see that the particles still accumulate at low momenta: the high momenta here are nearly saturated, so putting more particles in the system does not put more particles at large momenta. All the extra particles go to low momenta, not to a single state as in 3D, but to a continuum of low momenta, which is here. This is what happens for this ideal gas. And if I look at what happens at a phase-space density of 10, although this is not obvious because I am using logarithmic coordinates, most of the particles are now sitting in this low-energy part here, and by contrast very few particles are here. 
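This saturation can be reproduced in a few lines, using the known closed form for the 2D ideal Bose gas relating the phase-space density D to the chemical potential, D = minus ln of (1 minus exp(mu over kT)). A sketch (not from the lecture; units with kT = 1 and momenta scaled so that the kinetic energy is p squared over 2):

```python
import numpy as np

# Sketch for the 2D ideal Bose gas (assumed units: kT = 1, E = p^2 / 2).
# The exact 2D relation  D = rho * lambda^2 = -ln(1 - exp(mu / kT))
# fixes mu for a given phase-space density D; the occupation of each
# momentum state then follows from the Bose-Einstein law.
def occupation(p, D):
    mu = np.log(1.0 - np.exp(-D))          # chemical potential, always mu < 0
    return 1.0 / (np.exp(p ** 2 / 2 - mu) - 1.0)

for D in (0.1, 1.0, 10.0):
    print(D, occupation(0.1, D), occupation(3.0, D))
# low p: the occupation grows without bound as D increases;
# high p: the occupation saturates (bounded by its mu = 0 value)
```

As D goes from 0.1 to 10, the low-momentum occupation grows by orders of magnitude while the high-momentum occupation barely moves: all the extra particles pile up in the continuum of low momenta, not in a single condensed state.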
And if I now ask what is the shape of this momentum distribution close to p = 0, I can take this equation again and say that (E − μ)/k_BT is a small variable, so I can expand the exponential, and I get N(p) ≈ k_BT/(p²/2m − μ) — and since μ < 0, this is k_BT/(p²/2m + |μ|), a Lorentzian distribution. So in 2D, when you cool a Bose gas, in the strongly degenerate regime most of the particles accumulate in the low-momentum classes, with a momentum distribution which is this Lorentzian. Now you may ask what this means with respect to the absence of long-range order. Long-range order in a Bose gas is characterized by a quantity I introduced in the colloquium yesterday, the G1 function: how the phase is correlated between one point r and another point r′. Yesterday I think I wrote it as G1(r, r′) = ⟨ψ†(r)ψ(r′)⟩, where ψ is the field operator, but this can equivalently be written as the one-body density matrix ρ1(r, r′). These two definitions are equivalent; this can be shown easily. And this one-body correlation function — I leave this as an exercise, it is relatively easy to show — can always be calculated from the momentum distribution, because these two quantities are just related by a Fourier transform: if you want to calculate G1(r), you can take the momentum distribution N(p); N(p) is the Fourier transform of G1(r), and vice versa. Now we know that the momentum distribution is a Lorentzian, at least at low momenta. So we have to know what the Fourier transform of a Lorentzian is, and the answer is — you know what it is — an exponential, exp(−r/ℓ).
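This Fourier-transform statement is easy to verify numerically. A minimal sketch of mine — for simplicity in 1D, where the Lorentzian-to-exponential correspondence is exact (in higher dimensions the large-distance decay is exponential as well, up to an algebraic prefactor); units are arbitrary and the function name is my own.

```python
import math

def g1_from_lorentzian(r, ell, kmax=400.0, steps=200000):
    """Numerical cosine transform of the Lorentzian n(k) = 1/(k^2 + 1/ell^2):
    g1(r) = (1/pi) * integral_0^kmax cos(k r) n(k) dk.
    Analytically (in 1D) this equals (ell/2) * exp(-r/ell)."""
    h = kmax / steps
    total = 0.0
    for i in range(steps):
        k = (i + 0.5) * h                       # midpoint rule
        total += math.cos(k * r) / (k * k + 1.0 / (ell * ell)) * h
    return total / math.pi

ell = 2.0
for r in (0.0, 1.0, 2.0, 4.0):
    print(r, g1_from_lorentzian(r, ell), (ell / 2.0) * math.exp(-r / ell))
```

The numerical transform tracks (ℓ/2)e^{−r/ℓ}: a Lorentzian momentum distribution means exponential decay of the correlations, with a decay length ℓ set by the width of the Lorentzian.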
So what we get for this ideal Bose gas is that indeed Peierls' argument is correct: there is no long-range order. The G1 function decays exponentially with the distance between the two points I am considering, with a length ℓ which increases when I increase the phase-space density, but it always decays exponentially fast. Okay, how am I doing in time? Yes, I still have 15 minutes. So are there questions at this stage? Yes. So the question is: do you need Bose-Einstein condensation to get high-Tc superconductivity, or to get superfluidity? And the answer is no. This is precisely the work of Berezinskii, Kosterlitz and Thouless: you can get superfluidity without Bose-Einstein condensation, and this is what I want to show later. No, you should not be sorry — this is a very important point, thanks for pointing it out. So probably what Emmanuel had in mind was quasi-condensation, quasi-long-range order, which was enough to ensure everything he was saying. Also he had a finite-size system, so this was the answer for the graphene sheet: if you are looking at short distances, where the log has not had time to grow, then everything happens as if you had long-range order, because you are looking at points which are relatively close to each other. Okay, other questions? No? So let me now go to the case of an interacting Bose gas, which is more interesting but also more challenging, and you will see that interactions still do not violate the Mermin-Wagner theorem, but they do change a little bit the picture we got for the ideal gas, with its exponentially decreasing G1 function. In order to discuss the interacting Bose gas, one has to make some approximation — you can never treat an interacting many-body system exactly, at least not in two or three dimensions — and the approximation I am going to use is the so-called classical field approach.
This classical field approach amounts to saying that we have a true many-body wave function, which is a complicated object, but we are going to replace it by an ansatz, often called the Hartree ansatz, which says that all particles occupy the same state ψ. I write this either as |N: ψ⟩, or, if you prefer, as a many-body wave function which is the product: particle one in ψ(r1), particle two in ψ(r2), up to particle N in ψ(rN) — but where ψ itself is a fluctuating object. If ψ were the ground state of the box, this would be a Bose-Einstein condensate, okay? But here I take for this ψ a fluctuating field, and I assume that all the particles occupy this field. So it is not an exact result, it is an approximation, and the rule of the game is to find the correct probability distribution for this classical field ψ. This may sound a bit strange as an approximation, but for those of you who are familiar with electromagnetism, this is exactly what you do in classical electromagnetism. If you want to describe a field using quantum electrodynamics, you have to use a very complicated state, where you specify the number of photons with momentum k1 and polarization ε1, the number of photons with k2 and ε2, and so on — this is very hard to manipulate. But we have all learned classical electromagnetism, where we describe the electromagnetic field by a classical field — actually two classical fields, the electric field and the magnetic field. So when you are doing classical electromagnetism, you are doing exactly what I am doing here: you are saying that all the photons, so to speak, share the same wave function — it is always dangerous to speak of the wave function of a photon — or rather are described by the same field, characterized by six real numbers, Ex, Ey, Ez, Bx, By, Bz. This is what we are going to do for our matter-wave field.
We are going to assume that all the atoms share this same ψ, but this ψ is a fluctuating object. What are the limits of such an approach? Again, a good way to see this is to come back to the electromagnetic field. When we take a classical field approach, we neglect the fact that light is composed of individual photons — we neglect the corpuscular nature of light — and if you want to understand what a photon is, you have to look at, say, a spontaneous emission process, where photons are emitted one by one. So when are these spontaneous emission processes relevant? When will a classical field approach be bad? It will be bad when you do an experiment which is sensitive to the commutator between a and a†, which tells you that it is not the same to take aa† or a†a. If you look at the matrix elements of a†a and aa†, since this commutator is equal to one, the first is equal to N, where N is the number of photons in the mode (k, ε), and the second is equal to N + 1. So if I have a mode which is heavily populated, with a number of photons N per mode very large compared to one, the difference between N and N + 1 is not significant; whereas if I am doing something like spontaneous emission, into a mode which is initially in the vacuum state, there of course I cannot neglect the fact that photons are quantized. So this classical field approach will a priori be valid if I have a large number of particles per mode of the matter-wave field that I am going to introduce.
Another issue you have when using a classical field approach is the black-body radiation problem, the famous problem of the 19th century. When you do classical field physics and look at the thermal equilibrium of a classical field, you may have ultraviolet divergences, and therefore you need a cutoff: you must not consider very large frequencies, otherwise you have divergences. So here we will do that. It is not so important for us because, as we have seen, all the physics that Peierls introduced, and what will come with BKT physics, is a physics of low momenta, low frequencies — that is, large distances. So what happens at very large frequencies, beyond this ultraviolet cutoff, is not so relevant for our physics, and we can put in this cutoff. It may mean some uncertainty on the predictions, but at least the main physics will still be present, because this physics belongs to, I would say, infrared wavelengths and not ultraviolet wavelengths. So how does one handle a classical field in thermal equilibrium at finite temperature T? Here I am doing it for the simplest field I can think of, a field like the electromagnetic field — I have written it as this calligraphic E — whose energy is proportional to E² integrated over the volume L³. I am doing it in 3D, but I could do the same in 2D or 1D. The way to treat this is to expand in eigenmodes, which are all independent from each other for this very simple energy model. The state of the field is then characterized by the amplitudes E_q, the Fourier components of this expansion, and |E_q|² measures the energy in mode q.
And the energy that was written here as an integral, when I expand in Fourier space, becomes α L³ times the sum over q of |E_q|², the Fourier components squared. Now I just have to say that the probability to get a given realization of the field, that is a given set of amplitudes E_q, is given by the Boltzmann weight exp(−E/k_BT). And because the energy E is the sum of the |E_q|², I get that each |E_q|² has a thermal average proportional to k_BT. This is very similar to what I said before, but now at the level of the field and not the displacement of an atom. This k_BT per mode — equipartition — is something very characteristic of a classical field approach, something you have probably seen when you studied black-body radiation. So we will do the same for our matter-wave field. Let us see how it works. I take the energy of my gas — this is the Hamiltonian I start with. I have some kinetic energy, some trapping potential to hold the atoms, and some interaction between the atoms, u(ri − rj), with the one-half here to avoid double counting. In my classical field approach, I am going to say that the many-body wave function of the system, ψ(r1, r2, ..., rN), is simply a product state, φ(r1)φ(r2)...φ(rN), and the normalization of φ is such that the integral of |φ|² over the volume is equal to the number of particles N that I put in. With this, I get an energy which depends on the field φ, which I will again have to treat as a fluctuating object, and this energy is the sum of the terms that are here. The p_j² terms that appear here I will handle by remembering that in quantum mechanics the momentum p acts as −iℏ times the gradient on the wave function.
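The equipartition statement above — each |E_q|² averaging to k_BT divided by the stiffness of the mode — can be checked with a short Metropolis sampling. This is a sketch of mine (not from the talk), for a single complex mode amplitude with energy α|E_q|²; the function name and parameters are my own.

```python
import math, random

def mean_mode_energy(alpha, kBT, nsteps=200000, step=1.0, seed=2):
    """Metropolis sampling of one complex mode amplitude E_q with Boltzmann
    weight P(E_q) ~ exp(-alpha * |E_q|^2 / kBT).  Equipartition predicts
    <|E_q|^2> = kBT / alpha  (kBT/2 per real quadrature)."""
    rng = random.Random(seed)
    re = im = 0.0
    acc = 0.0
    for _ in range(nsteps):
        nre = re + rng.uniform(-step, step)   # propose a move
        nim = im + rng.uniform(-step, step)
        dE = alpha * ((nre * nre + nim * nim) - (re * re + im * im))
        if dE <= 0.0 or rng.random() < math.exp(-dE / kBT):
            re, im = nre, nim                 # accept
        acc += re * re + im * im
    return acc / nsteps

print(mean_mode_energy(alpha=2.0, kBT=1.5))  # close to kBT/alpha = 0.75
```

The sampled average lands on k_BT/α regardless of α: the thermal energy per classical mode is set by the temperature alone, which is the linear-in-T behavior familiar from the classical treatment of black-body radiation.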
So the p_j² terms here give me ℏ²/2m times |∇φ|², the squared gradient of the field φ. The trapping term gives V_trap(r)|φ|². And the interaction term u(ri − rj) will be treated as a contact term — it is an approximation, but a very good one for the low-energy physics of cold atoms: I treat this u(r − r′) as a Dirac delta distribution δ(r − r′) times some strength g. Then, when I plug in my ansatz with this potential, I get g/2 |φ|⁴. Here a is the scattering length that Emmanuel already mentioned. Are there questions on that? No? You all know that? OK. So let's go now to 2D, and then probably I will stop. In two dimensions, I have to freeze one direction of space, so I introduce a strong confining potential along the vertical direction, and I assume that my particles only occupy the ground state of the vertical motion. For a harmonic potential, this ground state is just a Gaussian, with a size given by the harmonic-oscillator length, which depends on the oscillation frequency of the potential along z. So the classical field I was introducing before, φ(x, y, z), is factorized: I have this χ0(z), which tells me that I have frozen the vertical direction — I know exactly the state of the particles for the vertical motion — and ψ(x, y), which is still a random field that I have to understand. When I plug this ansatz into the previous Gross-Pitaevskii energy functional, I again get a kinetic energy, which is now a kinetic energy in the x-y plane — the modulus squared of the gradient of ψ(x, y). I assume that I have no additional potential in the x-y plane, so there is no potential-energy term. And the interaction energy, which was proportional to |φ|⁴, is now proportional to |ψ|⁴.
And I have introduced here a parameter g̃. This g̃ involves the scattering length a from before: when I integrate over this χ0, with a little algebra which is not complicated, I get a divided by the harmonic-oscillator length along z, up to a numerical factor. What is quite remarkable in this expression is that I get the same ℏ²/2m in front of the kinetic energy and in front of the interaction energy, times this g̃, which is a dimensionless parameter. And this is an important point I want to make: in two dimensions, at least within this classical field approach, the interactions are characterized by a dimensionless parameter, by contrast with 3D, where interactions are characterized by a scattering length — a number that has a dimension. Here it has no dimension: in practice, it is the ratio between the scattering length and the size of the ground state of the harmonic oscillator. The fact that it is dimensionless, you will see tomorrow, plays a very important role, because this is what gives the scale invariance of the 2D formalism, by contrast with 3D, where the interactions introduce a length scale or an energy scale. Here there is no length scale, no energy scale coming with the interactions; the only thing I need to know is the value of g̃, and in standard experiments it is between, say, 0.1 and 1. This is the only thing that has to be known about the 2D Bose gas: this dimensionless parameter. OK, so what do you think? Should I stop here, or shall I take questions? I can continue, but maybe I will just show this slide so that you can have lunch peacefully. You may wonder, when I am doing this classical field approximation, whether there is any quantum feature left in my problem. I have replaced all the quantum commutation relations by c-numbers — are there any quantum features left in the system? And the answer is yes.
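To put a number on g̃: the standard quasi-2D result (the numerical factor is not spelled out above, so take it as my addition) is g̃ = √(8π) a/ℓ_z, with ℓ_z = √(ℏ/mω_z) the harmonic-oscillator length along z. The atomic species, scattering length and trap frequency below are illustrative values of mine, not numbers from the talk.

```python
import math

# Illustrative quasi-2D coupling: g_tilde = sqrt(8*pi) * a / l_z
# (standard quasi-2D result; all numbers below are example values).
hbar = 1.0545718e-34            # J*s
m = 86.909 * 1.66054e-27        # kg, mass of 87Rb
a = 5.3e-9                      # m, 3D scattering length (illustrative)
omega_z = 2 * math.pi * 2e3     # rad/s, trap frequency along z (illustrative)

l_z = math.sqrt(hbar / (m * omega_z))        # harmonic-oscillator length
g_tilde = math.sqrt(8 * math.pi) * a / l_z   # dimensionless coupling
print("l_z =", l_z, "m;  g_tilde =", g_tilde)
```

With these example numbers g̃ comes out around 0.1, at the lower end of the 0.1 to 1 range quoted above; only the dimensionless ratio a/ℓ_z enters, which is the scale invariance being emphasized.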
There is still something quantized in the problem, related precisely to the fact that we are using a classical field. When I take an assembly of particles described by this classical field ψ(r), I can write this complex object as the square root of the density of particles — |ψ(r)|² gives me the density of particles at r — times a phase factor e^{iθ(r)}. As I said yesterday, there is a velocity field associated with this phase θ, which is just the gradient of θ times ℏ/m. And the quantum feature which remains valid here is the following: if I take the integral of the gradient of θ over any closed contour, then since this field is single-valued, I have to recover the same value of the field when I close the contour, which means that θ has to come back to the same value modulo 2π. So the integral of ∇θ · dr over any closed contour is n times 2π. And therefore, in terms of velocity, the motions I can describe with this classical field approach are such that the circulation of the velocity field over any closed contour is again quantized: it is this n × 2π times the ℏ/m which is here. So there is a quantization of the circulation of the velocity which is present in this classical field approach. Even though I am doing classical physics with this classical field, I still have something quantized — the circulation of the velocity — and this is the only ingredient I need to get superfluidity. So I have not completely lost quantum physics by doing this classical field approximation. This is probably a good place to stop. I thank you for your attention, and I am ready to take any questions. Thank you.
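As a small postscript, the quantization of the circulation can be seen numerically in a few lines. This is a sketch of mine: sum the phase jumps, each wrapped into [−π, π), around a closed loop; for any single-valued field the total is forced to be n × 2π.

```python
import math

def circulation(path, phase):
    """Sum the phase differences around a closed path, each wrapped into
    [-pi, pi); for a single-valued field the total is forced to n * 2*pi."""
    total = 0.0
    for i in range(len(path)):
        x0, y0 = path[i]
        x1, y1 = path[(i + 1) % len(path)]     # wrap around to close the loop
        d = phase(x1, y1) - phase(x0, y0)
        total += (d + math.pi) % (2.0 * math.pi) - math.pi
    return total

vortex = lambda x, y: math.atan2(y, x)  # phase field of one vortex at the origin
loop = [(math.cos(2 * math.pi * k / 100), math.sin(2 * math.pi * k / 100))
        for k in range(100)]
print(circulation(loop, vortex) / (2 * math.pi))   # winding number n = 1
```

For the vortex field θ = atan2(y, x) the loop returns winding number 1; for any smooth single-valued phase it returns 0 — the circulation of the velocity can only jump in units of 2πℏ/m, never vary continuously.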