This lecture will be about large-N limits of quantum gases. I changed the title a little for the conference. It is going to be very basic: I hesitated a lot between presenting some of our latest results in detail and giving a broad introduction to the subject, and I decided it is probably better to stay very basic and make sure everybody can follow. So if you cannot follow, stop me and ask questions; this is supposed to be accessible to everyone. In this talk I will discuss what we can say about large-N limits of interacting quantum particles. Of course it is a big subject, there are plenty of questions and many people working on them, so I have to make a selection. This is just an introduction to the main questions, with a focus on Bose-Einstein condensation, but not only. I will in fact also discuss the free gases a little, the ones with no interaction at all, just to make sure you know them before you see the more advanced talks on interacting gases. So you should really see this lecture as an introduction to the more advanced talks you will hear during the week. I will try, along the way, to point forward: Marcello will talk about this, or maybe Serena will tell you about that, and so on. Anyway, in order to fix notation, I think it is best to start with the classical case. So we consider N identical particles in R^d. They evolve in R^d; perhaps they will eventually be confined to a domain or something, and for this I will later introduce an external potential. So there will be an external potential V, as well as a pair interaction W. I decide that the particles interact only in pairs.
I could also include three-body forces, four-body forces, but I will not do that. As you all know, I suppose, this system is just a Hamiltonian system on what we call the phase space, R^{dN} x R^{dN}. The variables are called (x_i, p_i), where x_i is the position of particle i and p_i is its momentum, that is, m times the velocity. As for any Hamiltonian system, there is an energy, a function of (x_1, p_1, ..., x_N, p_N): the kinetic energy, plus the potential energy, plus the interaction through W,

E = sum_{j=1}^N |p_j|^2 / (2m) + sum_{j=1}^N V(x_j) + sum_{1 <= j < k <= N} W(x_j - x_k).

I always sum over pairs, and I will assume that W is an even function; I will not write it every time, I will always assume W is even. So the dynamics is just governed by Newton's equations: x_j-dot is the derivative of the energy with respect to p_j, so x_j-dot = p_j / m, and p_j-dot = - grad_{x_j} E, that is, minus the gradient of the total potential. So you have a big coupled system of ODEs, and the energy is of course conserved. Very good. What are the questions we want to look at? We want to look at stationary states. Stationary states are just states that do not move. "Do not move" means that all the p_j must be 0, as you can see from the equations, and the gradient of the potential must be 0. So they all have p_j = 0, and they are critical points of the energy with respect to the x_i (and also with respect to the p_j, but there the only critical point is 0). The ground state is just the most stable stationary state: it is a minimum of the potential, if one exists, simply because generically a local minimum is non-degenerate, the level sets of the energy locally look like ellipsoids, and so it is very stable. And then there are Gibbs states.
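As a concrete illustration of the Newton dynamics just written, here is my own numerical sketch (not from the lecture), with a hypothetical smooth choice V(x) = |x|^2/2 and a Gaussian pair interaction W(x) = exp(-|x|^2), integrated by velocity Verlet; the conserved quantity is the energy E.

```python
import numpy as np

# Sketch (mine, with assumed potentials): Newton's equations
#   x_j' = p_j / m,   p_j' = -grad_{x_j} E,
# for V(x) = |x|^2 / 2 and W(x) = exp(-|x|^2), via velocity Verlet.

def forces(x):
    """-grad V(x_j) - sum_{k != j} grad W(x_j - x_k) for each particle j."""
    f = -x.copy()                       # -grad V for V(x) = |x|^2 / 2
    n = len(x)
    for j in range(n):
        for k in range(n):
            if k != j:
                r = x[j] - x[k]
                f[j] += 2.0 * r * np.exp(-r @ r)   # -grad_x e^{-|r|^2}
    return f

def energy(x, p, m=1.0):
    e = np.sum(p**2) / (2.0 * m) + 0.5 * np.sum(x**2)
    n = len(x)
    for j in range(n):
        for k in range(j + 1, n):
            r = x[j] - x[k]
            e += np.exp(-r @ r)
    return e

def verlet(x, p, dt, steps, m=1.0):
    f = forces(x)
    for _ in range(steps):
        p = p + 0.5 * dt * f            # half kick
        x = x + dt * p / m              # drift
        f = forces(x)
        p = p + 0.5 * dt * f            # half kick
    return x, p

rng = np.random.default_rng(0)
x0 = rng.normal(size=(4, 2))            # 4 particles in d = 2
p0 = rng.normal(size=(4, 2))
e0 = energy(x0, p0)
x1, p1 = verlet(x0, p0, dt=1e-3, steps=1000)
print(abs(energy(x1, p1) - e0))         # tiny: the energy is conserved
```

The energy drift stays of order dt^2, which is the usual behavior of a symplectic integrator on a Hamiltonian system of this kind.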
Gibbs states are a bit more complicated, because they are probabilistic. You do not specify exactly where the particles are and what the velocities are; you just give a probability for these quantities. So a Gibbs state is a probability measure on phase space of the following form: e^{-beta E}, with a normalization factor Z in front. This is just the Gibbs measure, essentially. Of course you need assumptions to make sure you can integrate this over phase space and obtain a probability measure. Very good. So this is what we want to do: we want to look at the limit N to infinity and understand what happens. You should think that W is given to you. I mean, for some specific physical systems you can sometimes tune W a little bit, but in general the interaction is a property of the particles you are looking at. If you look at two hydrogen atoms, there is no true W, because the hydrogen atom is not a fundamental particle: it is made of an electron and a proton. So W is somewhat empirical; W is given to you, more or less. And V is the one you control in your lab. V is the external potential; that is the one you modify. So when you look at N to infinity, you have to choose V so that the limit exists and these infinitely many particles can live together; this is arranged through the choice of V. Here is the typical form of W you may consider. Maybe you are looking at electrons, and in that case W is just 1/|x|, the Coulomb potential in 3D. Maybe you work in other dimensions, I don't know.
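Going back to the Gibbs measure introduced at the start of this passage, written out in full (the standard form matching the e^{-beta E}/Z just described):

```latex
d\mu_\beta(x_1,p_1,\dots,x_N,p_N)
  = \frac{e^{-\beta E(x_1,p_1,\dots,x_N,p_N)}}{Z(\beta)}\,
    dx_1\,dp_1\cdots dx_N\,dp_N,
\qquad
Z(\beta) = \int_{(\mathbb{R}^d\times\mathbb{R}^d)^N} e^{-\beta E}.
```

The assumptions on V and W mentioned above are exactly what is needed for Z(beta) to be finite.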
Maybe you only consider the gravitational force, and then it is just -1/|x|, which is much less stable and much more difficult. Otherwise, if you look at atoms, then typically the form of W is that it is very repulsive at the origin, and then at infinity it generally looks like this: if your atoms can be polarized, you get a van der Waals attraction at infinity, which typically behaves like -1/|x|^6. The strong repulsion at the origin will usually stabilize the system, compensate the attraction at infinity, and make it possible for these infinitely many particles to live together. OK. As for V, in this lecture, when I want to confine the system to a finite domain, I will just take V equal to +infinity outside the domain. It is a bit of an abuse of notation; I think it is OK. So I take V = +infinity on the complement of Omega and 0 in Omega, and this simply means that the system is confined to Omega. So that is one typical situation we consider. Otherwise you can have confining potentials: a confining potential is a potential that goes to +infinity at infinity, for example |x|^2, the harmonic potential. Or you can have what I call locally confining potentials. For me, a locally confining potential is just a potential which is negative somewhere, to attract the particles to some region, but which goes to 0 at infinity, so that at infinity the particles are completely free. These are the kinds of potentials we will consider in the lecture. Now, which large-N limits can one consider? I am still in the classical case for the moment, even though the title mentioned the quantum case.
So here is a typical and very important limit: the thermodynamic limit. The thermodynamic limit tries to construct an infinite gas, a gas living at our scale but described at the microscopic scale. So you take a domain Omega, and V is now the confining potential you see here: +infinity outside, 0 inside. And I take a very large domain, which I enlarge so as to cover the whole space. So I take Omega_L, and one way to do it is just to scale a fixed Omega; I will assume Omega has volume 1, and I scale it. And what I do is put my N particles in Omega_L. Then I can look at the ground state, or at the Gibbs state at some given temperature. Oh, I forgot to say that T is 1 over beta. And then I try to arrange that my infinite gas is locally finite; that is what I want. So what I do is fix the density rho, which is just N over the volume, that is, N / L^d, fixed. So this is a large-N limit in which I try to construct an infinite system, like your glass of water, if you like; it is of that kind. There are many, many particles, 10^23, and at the microscopic level one can describe them by a model of this kind; this is the large-N limit one would take. OK. It is a very difficult limit. I mean, there are many, many results on this limit; I am not going to describe them now. But of course the main goal is to understand what happens regarding phase transitions. If you vary the temperature, you hope that, depending on the values of rho and T, sometimes you will have a solid, sometimes a liquid, sometimes a gas, and so on. So you hope to see phase transitions. Very good.
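In symbols (a standard formulation consistent with the description above, stated here for the ground state energy), the thermodynamic limit is:

```latex
e(\rho) \;=\; \lim_{\substack{N\to\infty,\; L\to\infty \\ N/L^d \,=\, \rho}}
   \frac{E(N,\Omega_L)}{|\Omega_L|},
\qquad
\Omega_L = L\,\Omega, \quad |\Omega| = 1,
```

where E(N, Omega_L) is the ground state energy of N particles confined to Omega_L; one hopes the energy per unit volume converges to a function e(rho), and similarly for the free energy at temperature T = 1/beta.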
This is not going to be discussed today. But maybe I should mention that you have to find a way of describing this infinite gas. You can do it either via point processes, or you can use correlation functions, which already appeared yesterday in Lars's lecture. Correlation functions only involve finitely many particles at a time, so you can expect them to remain finite in your limit. OK. And today we will spend quite a bit of time on the quantum equivalent of correlation functions, which are density matrices; that is why I am mentioning them now. But I think it is time to turn to the quantum case. So, let's discuss the quantum case. Very good. In the quantum case, you know that there is the Heisenberg principle, which tells you that you cannot know positions and velocities to arbitrary precision. Or, if you like to think in terms of probabilities, there have to be constraints between the probability distribution of the momenta and the probability distribution of the positions. OK. If I look at the probability I get for the positions and the one I get for the momenta, they cannot both be equal to a delta, if you like. So there have to be constraints, and the constraint is encoded in the wave function, psi. This psi is used as a constraint to ensure that positions and velocities cannot simultaneously be known to arbitrary precision. How is that done? Well, it is done in the following way. You decide, and that is the axiom, if you like, that... Sorry, so psi is a function in L^2(R^{dN}) which is normalized. OK. So it is a function of N variables, each variable living in R^d. And the postulate is that |psi(x_1, ..., x_N)|^2 is the probability density for observing the first particle at x_1, the second at x_2, and so on. OK. And then, if you take the Fourier transform of psi and take its square, this is by definition the probability density that the first particle has momentum p_1, and so on, and so forth.
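As a one-particle, d = 1 sanity check of these axioms (my own illustration, not from the lecture): with a unitary discrete Fourier transform on a grid, |psi|^2 and |psi-hat|^2 are both probability densities, as the postulate requires.

```python
import numpy as np

# Sketch (mine): a normalized 1D wave packet and its unitary Fourier
# transform; both |psi|^2 dx and |psi_hat|^2 dp sum to 1 (Parseval).

m, dx = 1024, 0.05
x = (np.arange(m) - m / 2) * dx
psi = np.exp(-x**2 / 2 + 1j * 3 * x)            # Gaussian wave packet
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)  # normalize in L^2

# Unitary (isometric) Fourier transform on the grid; the shifts only
# fix phase/ordering conventions and do not change any magnitudes.
psi_hat = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(psi))) * dx / np.sqrt(2 * np.pi)
p = np.fft.fftshift(np.fft.fftfreq(m, d=dx)) * 2 * np.pi
dp = p[1] - p[0]

print(np.sum(np.abs(psi)**2) * dx)      # 1.0: probability density in x
print(np.sum(np.abs(psi_hat)**2) * dp)  # 1.0: probability density in p
```

This is exactly the "correct two pi's" remark below: the 1/sqrt(2 pi) makes the transform an isometry, so one probability density maps to another.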
I am working here with h-bar equal to 1, for those who know what h-bar is. And I am using a Fourier transform which is an isometry, right? Because I need this to be a probability measure whenever that one is a probability measure. So you have to put the correct two pi's. OK. That's it; that is the only thing you have to know about quantum mechanics. With these axioms you can just go on and understand everything that happens. So we can then compute the energy. Now, you see, the state of the system is no longer a point in R^{2dN}; it is a function in L^2(R^{dN}). So we went from finite to infinite dimension, and psi is now the state of our system. And you see that psi is really the link between the probability for the positions and the probability for the momenta. That is, if you like, the Heisenberg principle. So, the energy of the system in the state psi, well, it is just the classical energy integrated against the corresponding probabilities. So E(psi) is the integral of the sum of the |p_j|^2 / 2 against |psi-hat|^2, plus the integral of the sum of the V(x_j) against the probability |psi|^2 that the particles are at these x_j's, and then the same for W, against |psi|^2. Now, let me remind you that p_j psi-hat is actually the Fourier transform of -i grad_{x_j} psi. So if you use Parseval, you get the Dirichlet energy, the integral of |grad psi|^2 / 2. (There was a name for this; I have forgotten.) And the rest is unchanged. So, just as in classical mechanics the energy is linear with respect to the probability measure on phase space, here it is also linear in the probability measures, and you get a quadratic function of psi.
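Collecting the terms just described (with m = 1), the quantum energy reads, after Parseval for the kinetic term:

```latex
\mathcal{E}(\psi)
 = \int_{\mathbb{R}^{dN}} \sum_{j=1}^N \frac{|p_j|^2}{2}\,|\hat\psi(p)|^2\,dp
 + \int_{\mathbb{R}^{dN}} \Big(\sum_{j=1}^N V(x_j)
   + \sum_{1\le j<k\le N} W(x_j-x_k)\Big)\,|\psi(x)|^2\,dx
```
```latex
 \phantom{\mathcal{E}(\psi)}
 = \int_{\mathbb{R}^{dN}} \sum_{j=1}^N \frac{|\nabla_{x_j}\psi|^2}{2}\,dx
 + \int_{\mathbb{R}^{dN}} \Big(\sum_{j=1}^N V(x_j)
   + \sum_{1\le j<k\le N} W(x_j-x_k)\Big)\,|\psi|^2\,dx .
```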
This you can write as <psi, H_N psi>, so it is a quadratic form if you integrate by parts, where H_N is now the quantum Hamiltonian; maybe I write it here:

H_N = sum_{j=1}^N ( -Laplacian_{x_j} / 2 + V(x_j) ) + sum_{1 <= j < k <= N} W(x_j - x_k).

OK, and again I forgot the m; I will soon take m = 1. So you see that your energy, which used to be a function on phase space, is now an operator. This operator is self-adjoint; to make sure it is self-adjoint one has to add lots of assumptions, but it is going to be self-adjoint on L^2(R^{dN}). OK. So this operator becomes the main object of interest when you want to study the large-N limit of quantum systems. Now, people working in quantum theory know that I forgot something. For the others: did you notice that there was something strange in what I wrote somewhere here? After all, it is a lecture, so you have to participate. You see, when I say "the probability that the first particle is at x_1, the second at x_2, and so on", I am really deciding that I can know who is who. I mean, if I see somebody, I say "oh, you are the first", and then if I look at the system at a later time, I can say "oh, I know you, you were the first before". OK? But they are all identical, so if I see the system at time one and the same system at time two, I will never be able to know who has gone where, because they are all the same. So everything should be invariant under permutations of the indices, of the labels. If I put labels on my particles, it should not matter who I decide is the first and who is the second. So my model must be invariant under permutations, meaning that these two probability densities have to be completely invariant under permutations, in other words symmetric. OK? So they both have to be symmetric. But I work with psi. OK? So I now have to put a constraint on psi to make sure that these are symmetric. And quantum mechanics is a linear theory.
You have to work in a linear space. OK? So I have to find a linear subspace of L^2(R^{dN}) such that, for every psi in this subspace, these two probabilities are symmetric. And if you do the little exercise, you will see that, formulated this way, there are only two possibilities: either psi itself is symmetric, or psi is anti-symmetric. Anti-symmetric just means that you get a minus sign, like a determinant, when you flip two labels; and of course when you take the square, the minus disappears. OK? So we will never study this Hamiltonian on the whole space L^2(R^{dN}); we have to restrict to a subspace, or actually to one of two subspaces. Either the symmetric functions, meaning that if I exchange the labels, psi is completely invariant. Or the anti-symmetric functions, meaning that when I exchange labels, I get a minus; I mean, I get the signature of the permutation, and this sign disappears when I take the square. Particles of the first kind are called bosons; the second kind are called fermions. And I will denote the corresponding subspace by L^2_s((R^d)^N). Now, you see, I have to be a little bit more careful: before, I was writing R^{dN}, but I am not exchanging the coordinates inside each x_j. I am exchanging the blocks x_j, each of which itself contains d variables. That is why I now write (R^d)^N, with a little s for symmetric. I hope it is clear enough. And for fermions I am going to write a little a. Now, this symmetry constraint looks very innocent, but it is not innocent at all. Bosons and fermions typically behave very differently in some regimes, and this is something I will try to describe in the lecture. You have to imagine that fermions are typically more stable than bosons, because they hate each other, while bosons are extremely social: they love each other too much, which can sometimes lead to some kind of concentration and therefore to instabilities.
But proving this is not always easy, and people have been working on these questions for many years. So now I have to erase the board for the first time. Where is the eraser... ah, it is hidden here. Maybe I keep this; let's keep that. So how do I see that bosons love each other so much? I can actually put them all in the same state, make sure that they all do the same thing, by taking a tensor product. That is the quantum equivalent of i.i.d., if you like: tensor products are just i.i.d. quantum particles. So if I take psi(x_1, ..., x_N) = u(x_1) ... u(x_N), which is going to be denoted by u^{tensor N}(x_1, ..., x_N), where u is a function in L^2(R^d) with integral |u|^2 = 1, then for sure I have built a symmetric function. This we can call a Bose-Einstein condensate; it is extremely condensed, because all of the particles, exactly all of them, are in the same state u. So that is one typical, simple example of a symmetric function. Now, if I want to do the same for an anti-symmetric function, of course I cannot, because a tensor product is symmetric, so it cannot be anti-symmetric. The simplest anti-symmetric function is what is called a Slater determinant. So what is it? I just take a tensor product, and then I make it anti-symmetric by summing, with the proper sign, over all permutations. But then, remember, it has to be normalized, which is a little bit of a pain. And of course my N functions have to be all different, because if two are equal, I get zero when I anti-symmetrize. It is easier if you decide from the beginning that they are orthonormal, just by picking the right basis. So let me take u_1, ..., u_N, which form an orthonormal system in L^2(R^d). Then I can just take the tensor product and anti-symmetrize, by summing over all permutations with the sign and so on, and you see it is just a determinant. That is why it is called a Slater determinant.
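Here is a small numerical check of this anti-symmetrization (my own illustration, not from the lecture; the u_i below are arbitrary smooth functions, and I use the conventional 1/sqrt(n!) normalization): swapping two positions flips the sign of the Slater determinant.

```python
import math
import numpy as np

# Sketch (mine): the Slater determinant
#   psi(x_1, ..., x_n) = det[ u_i(x_j) ] / sqrt(n!)
# is anti-symmetric: exchanging two positions exchanges two columns
# of the matrix u_i(x_j), which flips the sign of the determinant.

def slater(us, xs):
    """us: list of n one-particle functions; xs: array of n positions."""
    n = len(us)
    m = np.array([[u(x) for x in xs] for u in us])  # matrix u_i(x_j)
    return np.linalg.det(m) / math.sqrt(math.factorial(n))

# three distinct (Hermite-like, not orthonormalized: irrelevant for
# the sign check) one-particle functions in d = 1
us = [lambda x: np.exp(-x**2),
      lambda x: x * np.exp(-x**2),
      lambda x: (x**2 - 0.5) * np.exp(-x**2)]

xs = np.array([0.3, -1.1, 0.7])
xs_swapped = np.array([-1.1, 0.3, 0.7])   # exchange particles 1 and 2
a = slater(us, xs)
b = slater(us, xs_swapped)
print(np.isclose(a, -b))                  # True: anti-symmetric
```

The same code also shows the remark made later in the lecture: if two of the u_i were equal, two rows of the matrix would coincide and psi would vanish identically.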
So let's take psi(x_1, ..., x_N) to be the determinant of the matrix (u_i(x_j)), and the proper normalization is to divide by the square root of N factorial. That is a typical, simple fermionic function. Very good. So what do we want to do with our quantum system now? The same as before: first, what is the dynamics? Well, it is also a Hamiltonian system. [Question (Laura):] Can you explain why, from this formula, one sees that the fermionic system is somehow more stable than the bosonic one? [Answer:] Oh, you don't see it here, but you will see it when I give you an example later. From this formula alone, no, it is not clear at all. What you definitely see is that two fermions cannot be at the same place, so they avoid each other a little bit, because psi vanishes if two x_j's are equal. But as such, this state is not more stable than that one; I would have to study a specific system. You will see in an example later that there is a huge difference between the two. Very good. A Hamiltonian system: if you are not used to it, for a Hamiltonian system you need two variables and a symplectic structure, and here it seems that we have only one variable, psi. So you may be worried. But that is because I have not insisted on one thing: psi has to be complex-valued, and that is very important. When psi is complex-valued, you do have two variables: the real part and the imaginary part. And by the way, if you compute the energy of psi_1 + i psi_2, where psi_1 and psi_2 are real, it is just E(psi_1) + E(psi_2). That is because my Hamiltonian itself is real; you can just write out the quadratic form, and the cross terms cancel. So then you write your Hamiltonian system the usual way: psi_1-dot is the gradient with respect to psi_2 of E, and psi_2-dot is minus the gradient with respect to psi_1 of E. I mean, that is the usual symplectic form.
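Spelled out (a short computation, with E(psi) = <psi, H_N psi> and m = 1), the symplectic equations just written give:

```latex
\dot\psi_1 = \nabla_{\psi_2}\mathcal{E} = 2H_N\psi_2,
\qquad
\dot\psi_2 = -\nabla_{\psi_1}\mathcal{E} = -2H_N\psi_1,
\quad\Longrightarrow\quad
i\dot\psi = i\dot\psi_1 - \dot\psi_2
          = 2H_N(\psi_1 + i\psi_2) = 2H_N\psi .
```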
And then, if you just write out what this is, you will find i psi-dot = 2 H psi, which is Schrodinger's equation. By changing the unit of time you can always remove the 2; I mean, the 2 does not really matter. So that is Schrodinger's equation: just a Hamiltonian system, but in infinite dimension. And now we have to discuss what the stationary states are, what the ground state is, and then we can start our study of the large-N limit of these systems. So what is a stationary state? A stationary state is just a state which does not move, except that here, you see, we have a model with a certain invariance: everything is completely unchanged if I multiply psi by a phase e^{i theta}. So my model is completely phase-invariant. If you like, there is a U(1) action on my model, and I want to work modulo phases, because phases do not do anything to my probabilities, they do not do anything to the energy, they do not do anything at all. So we work modulo phases, and therefore a stationary state is just something which does not move modulo a phase; if you like, it has a phase which moves. And then, if you plug that into Schrodinger's equation, you get that psi must be an eigenfunction of H. You actually get the derivative of the phase, but a function can be an eigenfunction for only one eigenvalue at a time, so the phase... OK, let me do it. You take psi(t) = e^{-i theta(t)} psi_0, if you like, where psi_0 does not move.
It depends on x. OK, and then when you plug this into Schrodinger's equation, you get theta'(t) psi_0 = H psi_0. So you deduce that psi_0 is an eigenfunction of H with eigenvalue theta'(t). But it can be only one eigenvalue at a time, so you deduce that theta'(t) is constant; call it lambda. You get H psi_0 = lambda psi_0, and then the phase is just lambda times t. So you have a phase which moves, but which does nothing to the system. Stationary states are thus just eigenfunctions of H, which, by the way, correspond exactly to critical points of the energy on the unit sphere of L^2. Now, what is the ground state? Well, the ground state is the stationary state with the lowest possible energy, so it corresponds to taking lambda as small as possible: the minimum of the spectrum of H_N, the first eigenvalue if you like, if it exists; otherwise just the minimum of the spectrum. OK. So the ground state energy, maybe I call it E(N), is just the minimum of the spectrum of H_N. Of course, remember that I have two such minima, one for bosons and one for fermions; a priori they are not the same. So that is the ground state energy, and the ground state is the corresponding psi_0. And then Gibbs states. A Gibbs state is, as usual, a little more complicated than a single state, because it is more like a probability over states, right? So it is going to be e^{-beta H_N} Z^{-1}, which, as you can now see, is an operator on L^2(R^{dN}). OK, and what we normalize now is no longer an integral, but the trace: Z is the trace of e^{-beta H_N}. OK. [Question:] Yes, so why is it necessary that the bottom of the spectrum is an eigenvalue? [Answer:] No, no, it is not necessary; you do not necessarily have a ground state. So far there was no assumption, so it is vague; it will depend on V and W. OK, so I can put assumptions to make sure that this is finite and, by the way, is indeed a minimum, but there might not exist an eigenfunction.
Exactly, yes; that will depend on V. Philosophically speaking, V really has to make sure the particles do not escape, if you want to be sure that such an eigenfunction exists. If you take V = 0, for instance, there cannot be an eigenfunction. In the confining cases, yes, there will always be an eigenfunction at the bottom of the spectrum, if W is not too crazy. OK, so that is the Gibbs state. The Gibbs state is a little bit more complicated because it is not a wave function; it is an operator acting on your Hilbert space. But I can always diagonalize it; I am using the notation with the kets and the bras. It is a self-adjoint operator, so I can diagonalize it. It will have some weights n_i, and these n_i are just e^{-beta lambda_i} divided by Z, where the lambda_i are the eigenvalues of H_N. I am assuming now that the system is confined; I mean, for a Gibbs state it has to be confined, to make sure the trace is finite. OK, so then I only have eigenvalues, and these eigenvalues have to go to infinity sufficiently fast to make sure this sum converges. So the lambda_i are the eigenvalues of H_N, and the psi_i are the stationary states. You see, it is more of a mixture; it is actually called a mixed state. It is a mixture of our stationary states, and you have to think of it as a probability over the stationary states, with weights n_i chosen in terms of the temperature, of beta. Very good. So now what we want to do is study the large-N limit of such systems. And of course, as you know, it is very hard to say anything precise at finite N, and it is even harder when N gets large. There is just no way, except for very simple situations where everything is explicit, and there are not so many explicit situations. Usually there is no way of computing, even on a computer; it is extremely hard to get a good approximation of the ground state energy, of the corresponding eigenfunction, or of the corresponding Gibbs state. It is a very hard
question, because Schrodinger's equation is a PDE in R^{dN}: the dimension is huge, and there is just no way one can easily do things to sufficiently high precision anyway. So, when looking at the limit N to infinity, we will not be able to work with psi anymore, because psi would become a function of infinitely many variables, which is no fun. As usual, we will look at correlation functions, which in the quantum case are called density matrices. That is what I would like to discuss now: after this long introduction, we will look at the properties of density matrices, which are the quantum equivalents of correlation functions. So what is a density matrix? I will define them for a pure state, for a psi; but then, when you have a mixed state, you just get the same definition by linearity. OK, so let me take a state psi. You know the correlation functions: when I have a probability measure, I just integrate out all variables but k, and then I put an N-choose-k in front, or not, depending on your convention, and you get something which describes events involving k particles at a time. So here we do the same, except that we have more freedom, because we are not working with a probability measure; we work with psi, which is a kind of square root of the measure. So gamma_k^psi, that is the k-particle density matrix. It is called a density matrix because that is the name used in physics, but it is not a matrix; it is an operator, an infinite matrix if you like. So it is a self-adjoint operator on L^2((R^d)^k), where k is the number of particles which I retain, defined as follows. Depending on your taste, I will write two equivalent definitions. This object is actually N choose k times the partial trace of the orthogonal projection onto psi; that is the more algebraic definition, if you like. Partial trace: I integrate out N - k variables. But if you do not like that, it is also the operator whose integral kernel is gamma_k^psi(x_1, ..., x_k; y_1, ..., y_k). So I have to be careful, I have lots of variables: y_1, ..., y_k. OK, so I am giving
you an integral kernel, the integral kernel of an operator on L^2((R^d)^k):

gamma_k^psi(x_1, ..., x_k; y_1, ..., y_k) = (N choose k) integral psi(x_1, ..., x_k, z_{k+1}, ..., z_N) conj(psi(y_1, ..., y_k, z_{k+1}, ..., z_N)) dz_{k+1} ... dz_N.

So it is exactly the same as a correlation function: I integrate out N - k variables, except that for the first k, I use the fact that I have a square root, so I can put two kinds of variables, the x's and the y's. This is how I get an operator instead of a function. Note that if you take all the x's equal to the y's, then you do get exactly the correlation function: gamma_k^psi(x_1, ..., x_k; x_1, ..., x_k) is the k-particle correlation function. Well, depending on your convention: some people do not put anything in front, some people put an N! / (N-k)!, and some put the k! as well. I think the N choose k is the one compatible with Fock spaces; that is the one I like the most. This correlation function you saw yesterday in Lars's lecture. Now, what is nice with the operator is that it contains everything. So if I look at the operator and I take its Fourier transform, and what I mean by Fourier transform is that I transform on both sides, so I pass to Fourier variables, then at (p_1, ..., p_k; p_1, ..., p_k) I get the correlation functions for the momenta. So you see that the density matrix, this operator, contains both the correlation functions in position and the correlation functions in momentum. I mean, as usual, one can express the terms in the energy using only gamma_1 for the one-particle part and gamma_2 for the two-particle part. So for instance, if you compute the one-particle part of the energy, since psi is either symmetric or anti-symmetric, this is actually N times the same operator acting only on the first variable, and this N is exactly the N choose 1. So you see that you can also write this in a funny way: it is the trace of (-Laplacian/2 + V) against gamma_1, everything on L^2(R^d). So any
time you have something which involves only one particle at a time, you can express it using the one-particle density matrix, and for the same reason, anything involving two particles can be expressed using the two-particle density matrix. So if you look at the interaction, it is exactly the trace of W against gamma_2, on L^2((R^d)^2). So something very natural would be to say: let's forget this psi, it is too complicated, let's only work with gamma_2. Well, that would be wonderful. OK, so here is the problem: the set of gamma_2's coming from a psi is very complicated and not known. However, if one could use only gamma_2, that would be fantastic; in physics this is usually called Coulson's challenge. At a conference, Coulson was apparently the first, or one of the first, to say that we have to find a way of using only gamma_2; and maybe, even if we cannot know exactly what the set of gamma_2's is, we could find sufficient conditions or necessary conditions and do approximations this way, directly on the two-particle density matrix. But it is not so easy. OK, yes? [Question:] To what extent is this used by chemists? I mean, the gamma_2; you make guesses for the set, which are... [Answer:] Yeah. So around 2005 or something like that, there was some excitement among chemists, and a group of chemists did some calculations. One can define the set of gamma_2's by writing actually infinitely many linear constraints; they were able to do some numerics, tested only the first five constraints, and got the best results so far for the ground state energy of some atoms. So there was some excitement, but it was still extremely costly on the computational side. So there is a group of chemists really working on putting some constraints on gamma_2 and removing all the other ones. I can find the names and give them to you; right now they do not come to my mind. Ask me again and I will send you the references; I think they are in Japan, but I am not sure. Anyway, it is a natural question, and people have worked on this problem; we also worked a little bit on this
problem, but still, it's not so easy. Let me state a lemma for you on the properties of this γk — I gave you the definition there. For any ψ, γk is a non-negative self-adjoint operator, and it is actually trace class: it has a trace. It is also Hilbert–Schmidt, so this kernel definitely makes sense in L2, which you can prove by Cauchy–Schwarz. So this operator has a finite trace — the trace is the same as integrating the diagonal of the kernel — and you get (n choose k). The trace is huge, it blows up with n, but it's like in classical mechanics: the integral is huge, and we know it is going to be huge; what we want is something which is locally finite, not finite everywhere. From this we conclude immediately that the operator norm of γk is less than (n choose k): if the trace, the sum of the eigenvalues, equals (n choose k), then the largest eigenvalue is also at most (n choose k). A natural question is to ask whether this bound is optimal: is it true that I can have an eigenvalue of order (n choose k)? And there you already see a difference between bosons and fermions. Let me start with bosons; for bosons it's quite easy. Is (n choose k) optimal as a bound on the norm — on the largest eigenvalue, if you like? For bosons, yes, and that's quite easy: if I take ψ to be a full Bose–Einstein condensate u^{⊗n} — I don't know what else to call it, all the particles are exactly in the same state u — then I can compute everything easily, because when I integrate over some variables I just remove some of the u's, and you find immediately that γk(u^{⊗n}) is just (n choose k) times the projection onto u^{⊗k}. So it is a rank-one operator: for a Bose–Einstein condensate I have just one eigenvalue, and it is the largest it can be. Now you can do
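The condensate computation — γk(u^{⊗n}) = (n choose k) times the rank-one projection onto u^{⊗k} — can be verified directly in finite dimension. A minimal sketch, not from the lecture; the dimensions m, n, k are arbitrary small values:

```python
import numpy as np
from math import comb
from functools import reduce

m, n, k = 5, 4, 2                   # one-body dimension m, n particles, k <= n
rng = np.random.default_rng(1)

u = rng.standard_normal(m) + 1j * rng.standard_normal(m)
u /= np.linalg.norm(u)

psi = reduce(np.kron, [u] * n)      # the condensate u^{(x)n}, a vector of length m^n

# gamma_k: group the first k variables vs the last n-k and trace the latter out
psi_mat = psi.reshape(m**k, m**(n - k))
gamma_k = comb(n, k) * psi_mat @ psi_mat.conj().T

evals = np.linalg.eigvalsh(gamma_k)
print(abs(evals[-1] - comb(n, k)) < 1e-9)   # True: one eigenvalue equal to n choose k
print((np.abs(evals[:-1]) < 1e-10).all())   # True: all the others vanish (rank one)
```

The reshape splits the tensor into an (x's) × (z's) matrix, so the matrix product is exactly the partial trace over the last n−k variables.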
a little exercise and show it's the only way you can have an eigenvalue equal to (n choose k): if there is just that one eigenvalue, then ψ has to be a tensor product. But one can ask more: if we have eigenvalues of order (n choose k), namely of order n^k, do they have to come from a condensate, or can I get something more complicated? This really goes back to Penrose and Onsager in 1956. They said: let's actually define Bose–Einstein condensation by this property. We look at a certain system at equilibrium, or maybe a Gibbs state of something, we compute the corresponding density matrices, and we say that there is Bose–Einstein condensation when γk has an eigenvalue of order (n choose k) — and of course we look at the limit n → ∞; (n choose k) is just n^k/k! for large n. So they define Bose–Einstein condensation by: the k-particle density matrices have an eigenvalue of order (n choose k). You have to imagine that in a normal system — in your glass of water, for instance — γk is usually of order 1: its trace is infinite, because n is infinite or almost infinite, but as an operator it is usually bounded; we will see examples later. So Bose–Einstein condensation is something exceptional — a kind of phase transition where suddenly a macroscopic number of particles occupy the same state and give an eigenvalue of order n^k. Because of this example, Penrose and Onsager decided that this is how we are going to define Bose–Einstein condensation. Now you should protest — good that you are protesting — the natural question is: if the k-particle density matrix has something of order n^k, does it come from something like that, or can I get something different? It's a very delicate question, but there is an
answer to this question, which is essentially yes: the only possibilities are the u^{⊗k}'s. That's a theorem usually called the quantum de Finetti theorem, which says the following. It is an abstract theorem, actually valid on a general Hilbert space, but I am staying in my setting of L2(R^d). You take any sequence ψn of bosonic states — bosonic, we are discussing bosons — and I look at what happens at scale n^k. (That's because I forgot to take a break — let me start over.) So: take any sequence of bosonic states on L2_s(R^d) — it's actually abstract, but I don't have to make it abstract — and I will assume that there is something at scale n^k. I have to put a bit more than just an eigenvalue assumption, because an eigenvalue is not sufficient: the eigenfunctions could do crazy things, go weakly to zero. So I will assume that the corresponding k-particle density matrix, divided by (n choose k), has a weak limit — weak-∗ in the trace class — to something I call γ^(k); this captures what there is at scale n^k. Notice that γk(ψn)/(n choose k) has trace 1 for all k, so after extracting a subsequence, by Banach–Alaoglu, I can always assume that this limit exists: if you start with any sequence ψn, you can always extract a subsequence for which you have that. So it's not a big deal to assume it converges. In most cases it will converge, but you may get zero: if you look at your glass of water you will get zero, because there is nothing at scale n^k; then the theorem is empty, but still true. However, what the theorem tells you is that if there is
something at scale n^k, then it has to be of that form — it has to be a condensate. So: there exists a probability measure P on the unit ball of L2(R^d), invariant under phases, such that the γ^(k) are all, on average, ∫ |u^{⊗k}⟩⟨u^{⊗k}| dP(u). The theorem is saying: if you do have something, if something converges at scale n^k, then it has to be a convex combination of what you would get for an exact Bose–Einstein condensate — a convex combination of condensates, if you like. [Sorry Martin, the stream crashed for about 10 minutes — could you give a quick recap?] OK, so let me summarize. I have defined the k-particle density matrix of any n-particle state, bosons or fermions. It is a self-adjoint operator which is non-negative, meaning its eigenvalues are non-negative, and its trace — the sum of the eigenvalues — is (n choose k) by definition, from which I deduce that the norm, namely the largest eigenvalue, is at most (n choose k). That's what I proved — well, there is essentially nothing to prove, it follows from the definition. Then I asked: now look at large n with k fixed — k can be anything but is fixed, and n is large — can I get an eigenvalue of this order n^k? For bosons: is this (n choose k), which is like n^k, optimal? For bosons it clearly is, because if I take a Bose–Einstein condensate and compute γk, since it is a tensor product everything simplifies, and the operator is (n choose k) times the rank-one projection onto u^{⊗k}. So I have one eigenvalue of order (n choose k) — it's even equal to it — and all the other ones are 0. Then Penrose and Onsager in 1956 decided to define Bose–Einstein condensation this way. But the natural question is: all right, if I really get something at this scale (n choose k), is it a condensate? And the answer is yes, if you allow me to use weak limits.
It's not the only way to answer the question, but if you allow me to use weak limits, the answer is yes. Take bosonic states ψn; up to subsequences, you can always assume, if you like, that γk(ψn)/(n choose k) has a weak limit — weak-∗ in the trace class, which, let me remind you, means you test against compact operators — and call this limit γ^(k). The theorem says there exists a probability measure P on the unit ball — the unit ball, not the unit sphere, because you can lose something when you take weak limits — of L2(R^d), such that the γ^(k) are given by this formula. They are all given by the same P; that's very important, there is one P which works for all k, and it tells me that γ^(k) is an average of what I would get from this example — from condensates, if you like. So if I look at weak limits, then at scale n^k, for bosons, I can only see condensates. Actually, you see it's not necessarily a pure condensate: I can have a mixture of condensates, like one half of one state plus one half of another, because of P. Typically, instead of u^{⊗n} you can take u^{⊗n/2} ⊗ v^{⊗n/2}, something like that, with v orthogonal to u; it's clear that you can get such a P. They don't all have to be in exactly the same condensate — you can have a mixture of several ones. Yes? [Question: if there is some loss, and the measure is not on the sphere, is there some interpretation of this loss?]
Well, again, in your glass of water nothing is at scale n^k, so everything goes to 0: the loss can contain everything else, like your thermodynamic system. [Jacob: this definition of Penrose and Onsager is sort of more restrictive than the usual one — the usual one only looks at k = 1, at the one-particle density matrix.] True, that's a good remark. Let me emphasize: the theorem is always true, but if the limit is non-zero for one k, then it is non-zero for all k. So you can also just restrict yourself to γ1 if you like, because up to subsequences you can always assume they all converge, and then you get this formula. And when you compute the weak limit of the one-particle density matrix — for instance you take its trace, to know whether it is non-zero or not — it is just a matter of whether P is a delta at 0 or not: either P is a delta at 0 and you have nothing, or P is not a delta at 0 and then the limit is non-zero for all k. [Question: if it goes to 0 and you normalize differently, do you get anything non-trivial?] So, if you normalize by some other sequence to get a non-trivial limit — that's then your chosen normalization. Here it's very important to look at the largest possible eigenvalue, so I have to divide by n^k; you may well have eigenvalues which diverge, but slower, and then I can't say anything. [Andreas: does it also work for mixed states?] Yes, yes — it's all the same for mixed states. I know people usually don't like mixed states very much, but it's the same — so it works for a Gibbs state or anything. A little bit of history — oh, [what is a mixed state?]
Ah, mixed states. So when I have a general Γ which is a sum Σ ni |ψi⟩⟨ψi|, with ni non-negative and Σ ni = 1 — my Gibbs state, for instance, was such an example — that's what I call a mixed state, which you should always think of as a probability over some set of states ψi. Then by definition the γk of this Γ is just Σ ni γk(ψi): it's all linear, and the same theorem is true for both. Now a little bit of history. This theorem is a quantum version of something you may have encountered already — the de Finetti–Hewitt–Savage theorem in the classical case — but for operators, for non-commuting objects. It was proved by Størmer in 1969, and then Hudson and Moody, but in the case of strong convergence: it's the same theorem, but you assume strong convergence instead of weak. You can't always assume strong convergence, even after passing to a subsequence — it's not clear you will have it — but if you do assume it, you get the same theorem, and in this case P is supported, concentrated, on the unit sphere instead of the unit ball. That's actually an equivalence: you have strong convergence if and only if P concentrates on the unit sphere — no loss of mass. So they had the same theorem, except the convergence was strong — that was a true assumption — and P was on the unit sphere. This is very similar to the classical theorem, classical in the sense of classical mechanics: de Finetti in the thirties, Hewitt and Savage in the fifties. The version with weak convergence is in our paper with Nam and Nicolas Rougerie in 2014, and there was something very similar, although not stated that way, in works of Ammari and Nier around the same time — there are several papers. So what's nice is that it really tells you that having an eigenvalue of order n^k is really Bose–Einstein condensation; it's
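The linearity of Γ ↦ γk just described means a statistical mixture of two condensates u^{⊗n} and v^{⊗n} produces exactly the de Finetti form with P = (δ_u + δ_v)/2 — a genuinely non-pure measure. A small numerical sketch of this, not from the lecture, with arbitrary dimensions:

```python
import numpy as np
from math import comb
from functools import reduce

m, n, k = 4, 3, 2
rng = np.random.default_rng(3)

def condensate_gamma_k(u):
    """gamma_k of the pure condensate u^{(x)n}: (n choose k) |u^{(x)k}><u^{(x)k}|."""
    uk = reduce(np.kron, [u] * k)
    return comb(n, k) * np.outer(uk, uk.conj())

u = rng.standard_normal(m); u /= np.linalg.norm(u)
v = rng.standard_normal(m); v /= np.linalg.norm(v)

# gamma_k is linear in the state, so the mixed state
# (1/2)|u^{(x)n}><u^{(x)n}| + (1/2)|v^{(x)n}><v^{(x)n}|
# has gamma_k equal to the P-average with P = (delta_u + delta_v)/2
gamma_k_mix = 0.5 * condensate_gamma_k(u) + 0.5 * condensate_gamma_k(v)

print(abs(np.trace(gamma_k_mix) - comb(n, k)) < 1e-9)  # True: trace still n choose k
print(np.linalg.matrix_rank(gamma_k_mix))              # 2: a mixture, not rank one
```

The rank 2 (rather than 1) is what distinguishes a mixture of condensates from a single pure condensate.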
something which is not normal, something exceptional, and of course very specific to bosons. Now this is very abstract: it doesn't really tell you whether condensation will happen or not. Later in the lecture I will give you examples where you do get a non-trivial P. Yes? [Question: can you see long-range order here — what they call off-diagonal long-range order — from this type of convergence to a condensate?] So, this is an abstract theorem; it's true in any Hilbert space. I stated it on L2(R^d) just because I didn't want to work in an abstract Hilbert space, but it is an abstract theorem — it doesn't use anything, there's no space, really. But if you apply it to a specific model, then the two will usually happen together. [Happen together, or can you derive one from the other from this formulation?] I don't know, I never really thought about it. [Jacob: off-diagonal long-range order is of course usually limited to translation-invariant systems, whereas this works for a trapped gas.] True — yes, so this is more general. Maybe a remark: this theorem is a compactness theorem, so you may not like it in a way, because it's not very quantitative. It doesn't tell you whether you are close to being a condensate or not; you have to pass to the limit. It's a kind of compactness, or rather structure, theorem: you first pass to weak limits, and then you describe the structure of the possible weak limits, but it's a bit vague — it doesn't tell you how large n has to be. There is actually a quantitative version, but the theorem can be quantitative only in finite dimension. So there exist quantitative versions, in finite dimension: you assume everything lives in a finite-dimensional subspace of L2, and the statement is the following — for any ψn there exists a measure Pn such that, when you look at γk(ψn) minus the average over Pn of
|u^{⊗k}⟩⟨u^{⊗k}|, all divided by (n choose k), in trace norm — you take the trace of the absolute value — this is less than 4kD/n. So if you work in finite dimension, you actually have an error of order 1/n, or k/n, which is not so bad; of course the dimension enters. If you study a specific situation you are never in finite dimension, but we have used this theorem many times to get quantitative estimates. The idea is to say, for instance: if you are in a trap, you essentially occupy finitely many modes, so you can use these estimates, and then you have to show that the high-energy modes are only a little bit occupied, because they cost too much energy. So one can get quantitative estimates in infinite dimension by reducing to finite dimension and using this result. The Pn here is explicit — I'm not going to give it to you. That's a result due to Christandl, König, Mitchison and Renner, and we also have a different proof of the same result. [Excuse me, the D here isn't the d from before?] Oh no — let me call it capital D, the dimension of the subspace: L2(R^d) is replaced by a space of dimension D. Thank you. Now some words about fermions before we go on. For fermions the answer is no, and there you see immediately an effect of the antisymmetry: the fermionic density matrix can never have an eigenvalue of order n^k. There is a theorem by Yang in 1962 which says that the norm of γk — meaning the largest possible eigenvalue — is less than a universal constant Ck times n^{⌊k/2⌋}: if k = 1 this is n^0, so γ1 is bounded; if k = 2 you get n; if k = 3 you get n; if k = 4 you get n²; and so on and so forth, for some universal constant Ck, unknown — we have some bounds on Ck which are very bad. There is also a famous conjecture by Yang himself about the value of this Ck, but to my knowledge there has been no progress in the direction of solving this
conjecture. So you see that for fermions you cannot have eigenvalues of order n^k. It's more complicated, but it's very intuitive: that's because they hate each other, as we said. However, what fermions could do, perhaps, is form pairs. If you think of an antisymmetric function of four variables, and you put together the first two and put together the last two, then when I exchange the pairs I get a plus, because I get many minuses and they compensate. So pairs of fermions behave a little bit like bosons — and now you understand the theorem. The idea is that the best fermions can do is form pairs, and these pairs can maybe do something a little like condensation — they can love each other a little bit more. That's the reason for the k/2 and the floor: if k = 2 you can form one pair and get n, but if k = 4 you can form only two pairs, and two pairs condensing would give you — what did I say — an n², because of the bosonic behavior. That's the best which is known for fermions. Now you could say: let's take this γk, divide by n^{⌊k/2⌋}, and try to see what the limit is — but nothing like a weak limit will work there. The fermionic density matrices have very subtle algebraic properties. For instance, take the matrix element ⟨u1∧…∧uk, γk(ψn) v1∧…∧vk⟩ — so that's a Slater determinant, which I have defined: it's just my usual tensor product, antisymmetrized, and things like that form a basis of our fermionic space. This matrix element is always at most 1, as soon as u1,…,uk are orthonormal and v1,…,vk are orthonormal. A different way of saying it is that if you work in finite dimension, then the γk's are always bounded — for fermions in finite dimension, the γk's are bounded uniformly in ψn, but the bound depends on the
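The k = 1 case of these fermionic bounds — the Pauli-principle bound ‖γ1‖ ≤ 1 — can be illustrated on a random antisymmetric two-particle state. A minimal sketch, not from the lecture; the dimension and random state are arbitrary:

```python
import numpy as np

m, n = 6, 2
rng = np.random.default_rng(2)

# a random antisymmetric (fermionic) two-particle state: psi(x, z) = -psi(z, x)
a = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
psi = a - a.T
psi /= np.linalg.norm(psi)

gamma1 = n * psi @ psi.conj().T       # one-particle density matrix, trace = n

evals = np.linalg.eigvalsh(gamma1)
print(abs(np.trace(gamma1).real - n) < 1e-9)  # True: trace = n
print(evals.max() <= 1 + 1e-9)                # True: Pauli bound, ||gamma1|| <= 1
```

For a bosonic (symmetric) state the largest eigenvalue can reach n instead of 1, which is exactly the boson/fermion contrast discussed above.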
dimension. It's very intuitive: in finite dimension, with fermions, you can never put more particles than the dimension allows. So anyway, you see fermions are much more complicated, and that's already a hint that they may have a different behavior in some large-scale limits. [Is this pairing related to Cooper pairs?] Yes, exactly — those would be the Cooper pairs which explain superconductivity. So let's go to the last part of today. The goal, of course, is to give you examples where you see Bose–Einstein condensation, but you will also have other talks on the subject — maybe not so many, but some — and for those who have seen quantum problems but never really worked with them themselves, I think you have to see the non-interacting case before you look at the interacting case. So I thought I would spend the last half hour telling you what happens for a non-interacting quantum gas. It's already full of surprises, right?
And not so easy — and I think you have to see it if you want to really understand the interacting systems. So that's Part 4, the free gases; on Thursday I will talk about interacting systems. Now we go back to what's written there — actually I can use the same blackboard. I take a domain Ω and make it very large: I take a fixed domain and dilate it to ΩL, assuming 0 is somewhere in the middle. Now I take W = 0, because the gas is non-interacting, and I take V infinite outside ΩL, which just confines the system to ΩL. Now we have to talk about self-adjoint operators, so I have to tell you the boundary condition of the Laplacian. What we have to study is HN = Σ_{j=1}^{N} (−Δ_{xj}) — you see, this is just the full Laplacian on (ΩL)^N; I write it this way because of the symmetric or antisymmetric constraints — and let me emphasize that this is the Dirichlet Laplacian on ΩL. For this Dirichlet Laplacian we know many things by scaling: its eigenvalues are just λi/L², where the λi are the Dirichlet eigenvalues on Ω, the unscaled domain we started with. So when I increase L, my spectrum becomes more and more dense — I have more and more eigenvalues — and I converge to the spectrum of the Laplacian on the whole space, which is the whole half-line. The corresponding eigenfunctions are ui,L(x) = ui(x/L)/L^{d/2}. Very good. So now, what is the spectrum of HN? It's a small exercise, and it's simple because HN is a sum; of course the answer is different for bosons and fermions. How does that go? Well, the remark is that the ui,L form an orthonormal basis of L2(ΩL), and
actually, the tensor products u_{i1},L ⊗ … ⊗ u_{iN},L form a basis of L2(ΩL)^⊗N, that is, functions u_{i1},L(x1)…u_{iN},L(xN). To get the symmetric and antisymmetric subspaces I just have to symmetrize or antisymmetrize these basis vectors. For the symmetric case I write u_{i1},L ∨ … ∨ u_{iN},L, which is 1/√(N!) times the sum over all permutations of the tensor products. You have to be a little careful: because of the sum, when two indices are identical this vector is not normalized — the bosonic basis vectors are not necessarily normalized with my definition of 1/√(N!) times the sum over all permutations, which is not fun. Fermions are better: u_{i1},L ∧ … ∧ u_{iN},L, the wedge, which is really just the Slater determinant I defined before, and that's also a basis. Of course, for fermions you are not allowed to repeat two indices, and the order counts only up to sign — otherwise you get the same function — so for fermions you have to assume, for instance, i1 < … < iN. For bosons you have more freedom: indices can be repeated without problems. When you apply HN to such a vector — HN is a sum and these are tensor products — you just get the sum of the eigenvalues: HN applied to it gives the sum of the λ_{ik}/L². So the spectrum of HN is just the set of sums of the eigenvalues, with constraints on the indices, because for fermions you can't repeat any eigenvalue. Now we can compute the ground state energy — and I will finally answer our last question, sorry it took so long. For bosons, the lowest this can be is when all the indices equal 1. So the picture is the following: here is the spectrum of your Dirichlet Laplacian, and you put all the bosons in the first state, so you get N λ1/L² — and that's the Bose–Einstein condensate: the corresponding wave function is (u1,L)^{⊗N}. You see that the energy is of order N/L². For fermions it's
different, because of the antisymmetry. Maybe some eigenvalues are degenerate, depending on the shape of Ω — that can happen. For fermions you have to fill the eigenvalues starting from the bottom, and you are not allowed to repeat anything, except in case of multiplicity. So ψ will be the Slater determinant u1,L ∧ … ∧ uN,L, and the energy will be the sum of the N lowest eigenvalues. You already see a big difference: the energy does not at all behave the same way in the two cases. You can also compute the density matrices and so on, but it's more fun to go directly to positive temperature, T = 1/β. In this case, we said that we have to look at Z^{-1} exp(−βHN), and you can actually show that Σi exp(−βλi) converges for any bounded set Ω — this follows from some Weyl estimates, it's not difficult — so you can define the Gibbs state; of course you have two Gibbs states, one for fermions and one for bosons. From these Gibbs states you can compute the γk,N with the recipe which is on the right there. So let me summarize: we take a domain, we put N particles in this domain, and now we do the thermodynamic limit — N → ∞ with N/L^d converging to some ρ, the density — and we want to know what happens. We will have constructed an infinite, non-interacting quantum gas, where you only see the effect of the kinetic energy: the only things we have put in are the Laplacian and the symmetry or antisymmetry, that's all. And I take m = 1/2, so I get rid of the 1/(2m), which I was anyway getting wrong many times before. So let me state the theorem — it's going to take some space. It's a theorem that everybody knows; where it is written is another matter — I had some difficulties locating all the parts of my theorem, but I think it's a true theorem. Let me start with fermions, and to simplify I will only tell you what is the limit of γ1, but
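The boson/fermion ground-state energies just described can be made concrete in dimension 1, where the Dirichlet eigenvalues on [0, L] are explicit: λi(L) = (πi/L)², i.e. the unit-interval eigenvalues divided by L². A short sketch, not from the lecture, with arbitrary N and L:

```python
import numpy as np

# Dirichlet eigenvalues on the interval [0, L] in d = 1: lambda_i(L) = (pi i / L)^2,
# i.e. the eigenvalues (pi i)^2 of the unit interval divided by L^2
def dirichlet_eigs(n_levels, L):
    i = np.arange(1, n_levels + 1)
    return (np.pi * i / L) ** 2

N, L = 100, 10.0
lam = dirichlet_eigs(N, L)

E_bosons = N * lam[0]     # all N particles in the lowest mode: the condensate
E_fermions = lam.sum()    # fill the N lowest levels once each (Pauli exclusion)

print(E_bosons)           # N * (pi/L)^2, of order N / L^2
print(E_fermions)         # sum of the N lowest levels, of order N^3 / L^2 in d = 1
```

With N = L² as here, the bosonic energy is exactly π², while the fermionic energy is larger by a factor of order N², illustrating the very different N-scaling of the two ground states.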
then you have a similar theorem for γ2 — I just don't want to run into complicated formulas, so I will tell you what happens for γ1. γ1,N converges weakly, in a sense I will make precise, to the operator (e^{β(−Δ−μ)} + 1)^{-1}. This is an operator on L2(R^d); it is translation-invariant, and in Fourier space it corresponds to multiplication by 1/(e^{β(k²−μ)} + 1). So it's a Fourier multiplier, describing an infinite translation-invariant gas — and that's our free Fermi gas. Here μ is the chemical potential, which you have to adjust to get the right density. Oh yes, I forgot to say: this is in the limit N → ∞ with N/L^d → ρ — we do the thermodynamic limit, we put as many particles as the volume allows at density ρ — and μ is the unique real number making this infinite translation-invariant gas have density ρ, that is, (2π)^{−d} ∫ dk / (e^{β(k²−μ)} + 1) = ρ. The other thing I have to tell you is the meaning of this limit. As I told you several times, we have infinitely many particles, so if I compute the trace or anything global it's not going to work: I have to look locally — I would say it's a strong limit, locally, something like that. One way to do it — there are many different ways — is the following: you pick any u in L2(R^d), you restrict it to ΩL (you just truncate, forgetting what's outside), you apply the operator γ1,N, you get a function in L2(ΩL), which you extend by 0 outside; and then this converges in L2 to the limiting operator applied to u, for all u in L2(R^d). So it's a kind of strong limit, if you like, and this is the same as saying that the operator, extended by 0 outside ΩL, converges strongly. So I get
strong convergence when I apply the operator to a fixed vector u, which is local in spirit: when I take u in L2, the tail far away doesn't matter very much, so I'm really looking locally, and locally I see the effect of this translation-invariant operator 1/(e^{β(k²−μ)} + 1). This tells you that your infinite Fermi gas has this kind of dispersion in the velocities, which looks a little like Maxwell–Boltzmann but not quite, because of the +1: if you forget the +1 you get the usual Boltzmann factor e^{−β(k²−μ)}, but because of the antisymmetric nature of fermions you get 1/(e^{β(k²−μ)} + 1). For large k it's the same as Boltzmann, but for small k not quite. Is that an OK result, or are there questions? No? OK, so let me repeat, it's important: I take a u which lives on R^d, I truncate it to ΩL, then I apply this operator — but this operator is only defined on ΩL, so I get a function on ΩL, which I then extend by 0 by convention. So this function is 0 outside ΩL by convention; you can put an indicator function if you like. [But the limit lives on all of R^d?] No — in the limit, L is infinite, there is no L anymore. There are many ways to state it; I don't know which is the best, I think this one is fine, but you can also put an indicator there and look at the norm of the difference. So that's fermions. You see that the limit is a translation-invariant operator which is bounded — very nice, no problem: whatever the temperature and density, I always get a bounded operator, bounded by 1, nothing blowing up, nothing crazy — which we knew from Yang anyway, since γ1 is always at most 1. And if you compute all the γk's, they will all be bounded: we know γ1 is always bounded by Yang, but if I compute γ2, I have a similar theorem — γ2 is bounded and converges to something similar with two variables, just a bit more complicated to
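The density constraint for the free Fermi gas in d = 3 — (2π)^{−3} ∫ dk / (e^{β(k²−μ)} + 1) = ρ — determines μ uniquely, since the left side is strictly increasing in μ. This can be solved numerically; a sketch under arbitrary illustrative values of β and ρ, not from the lecture:

```python
import numpy as np

def fermi_density(mu, beta, kmax=40.0, npts=200001):
    """Density of the free Fermi gas in d = 3:
    rho = (2 pi)^{-3} * int_{R^3} dk / (exp(beta (k^2 - mu)) + 1), done radially."""
    k = np.linspace(0.0, kmax, npts)
    x = np.clip(beta * (k**2 - mu), -700.0, 700.0)        # avoid exp overflow
    f = k**2 / (np.exp(x) + 1.0)
    dk = k[1] - k[0]
    radial = dk * (f[0] / 2 + f[1:-1].sum() + f[-1] / 2)  # trapezoid rule
    return 4 * np.pi * radial / (2 * np.pi) ** 3

def solve_mu(rho, beta, lo=-50.0, hi=50.0, iters=80):
    # the density is strictly increasing in mu, so bisection converges
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if fermi_density(mid, beta) < rho:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

beta, rho = 1.0, 0.1
mu = solve_mu(rho, beta)
print(abs(fermi_density(mu, beta) - rho) < 1e-8)  # True: this mu reproduces rho
```

Note that for fermions μ is allowed to be positive — unlike for the Bose gas below, where the denominator would vanish.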
write. So, it's more fun for bosons, which will be our last topic. Oh — the other thing I forgot to tell you is that the limit is better expressed in the grand canonical formalism. I wanted to explain the grand canonical formalism afterwards, but I will not have time, so I will do it on Thursday. The limit really depends on μ: if you want it to depend on ρ, you have to invert this function and compute μ in terms of ρ, which I don't want to do — you can express it in terms of special functions, it's not fun — but everything is simple if you use the chemical potential μ as your variable rather than ρ, even though ρ may sound more natural. Grand canonical is much easier. And let me emphasize that my μ is not the same as the μ Laura had yesterday — I think her μ was e^{β·(my μ)}. Anyway: bosons. Bosons are much more complicated, and there you see the effect of the symmetry, the fact that you can have Bose–Einstein condensation and so on and so forth. Let me define the critical density ρc(β) — or ρc(T), maybe — which is (2π)^{−d} ∫_{R^d} dk / (e^{βk²} − 1). This is infinite in dimensions 1 and 2, and, by scaling, equal to T^{d/2} times a universal constant — an explicit constant depending on the dimension — in dimension 3 and higher. What happens then depends on whether ρ is below or above this critical density, so maybe I write something slightly vague. The idea is that γ1,N will have a thermodynamic part, with the usual Bose gas if you like: 1/(e^{β(−Δ−μ)} − 1). Let me emphasize that for bosons there is a minus and for fermions a plus — that's the only difference — and this minus is the cause of lots of problems, of course, because the denominator can vanish. So there will be a thermodynamic part, the normal Bose gas, very nice, which is bounded when
μ is negative. But you see, I can never take μ positive here, otherwise it blows up like crazy, so I am forced to take μ negative, and when μ = 0 I get exactly this density there. That is the largest density I can reach with my Bose gas, I mean with a nice infinite Bose gas: the largest density I can reach is this critical density. If I force the system to have more particles than this density, then the excess particles will all condense into the first eigenfunction, so I will get

    γ₁ ≈ L^d (ρ − ρ_c)₊ |u_{1,L}⟩⟨u_{1,L}| + the thermodynamic part

(sorry, I do not have enough space to write the two u_{1,L}), where, and I have to put quotation marks here, there are two cases for μ: either μ is chosen so that

    (2π)^{−d} ∫ dk / (e^{β(k² − μ)} − 1) = ρ,   if ρ < ρ_c,

or μ = 0 otherwise.

So what this is telling you (and I still have to explain the quotation marks in the remaining five minutes) is that your Bose gas has two scales: in this large-scale limit you see the macroscopic scale and the microscopic scale. At the microscopic scale you get your translation-invariant Bose gas, 1/(e^{β(k² − μ)} − 1). But this guy cannot have a very high density: in dimension 3 and higher, it cannot have a density larger than this ρ_c here. So if you force a larger density by putting in too many particles, then suddenly you will have an eigenvalue of order N in γ₁, which is this term here, and all the remaining particles will be in this first eigenfunction of the Dirichlet Laplacian in Ω. That is really a macroscopic thing, because u_{1,L} really depends on Ω, on the shape of Ω, and everything, whereas this part is kind of universal and does not depend on Ω, but this one depends a lot on Ω. So you have two scales: this Bose-Einstein condensation, which corresponds to having an eigenvalue of order N, is also associated with two different scales, the scale of the condensate, which is a
macroscopic object, and the scale of the Bose gas, which is a microscopic one.

I still have to tell you what the quotation marks mean. It is a difficult problem, in fact. There will be an eigenvalue of this order, but there can be, and actually there will be, other eigenvalues which are also very large. When I look at the second eigenvalue, it also diverges and behaves badly; however, the other ones are less crazy than the first one. So there will be one huge eigenvalue of order N, and then many other ones, smaller but still diverging. And then, if I test against a nice function, these ones are going to be averaged out, in such a way that I will only see this part here. So here is one way to make the quotation marks precise: take f in L¹ ∩ L², the intersection of L¹ and L², and then the statement is that ⟨f, γ₁ f⟩ converges when L, N → ∞ with N/L^d → ρ. And now it is time to remind you that u_{1,L} is actually equal to

    u_{1,L}(x) = L^{−d/2} u₁(x/L),

so there is a cancellation. You see, as an eigenvalue it is huge, but if I test against a nice function it is going to be of order 1, because there is a cancellation between the L^d and the L^{d/2}, which appears twice here. What I will get is (ρ − ρ_c)₊ times u₁(0)²: when I test against f, this term converges to u₁(0)² times |∫ f|², plus f tested against the nice gas (I almost said the nice Fermi gas, sorry), ⟨f, 1/(e^{β(−Δ−μ)} − 1) f⟩.

So one way to summarize the situation: look at the velocity distribution of your Bose gas. Maybe I do not want to start a new graph; maybe I still have one minute. Let me assume that ρ is bigger than ρ_c(T), otherwise there is no condensate. So what is the velocity distribution of my gas? It is 1/(e^{β k²} − 1) in this case: it decays exponentially at large k, but it behaves like 1/(β k²) at the origin. This is what
this is saying. And as you can see, on top of this I have a Dirac delta at k = 0, zero momentum, because the integral of f is the Fourier transform at 0, so the coefficient is (ρ − ρ_c) u₁(0)² times |f̂(0)|². And then there is a 2π somewhere... where is the 2π? I think I have to put a (2π)^d, really. So if you look at the velocity distribution of your gas at the microscopic scale, you have a Dirac delta at k = 0, which corresponds to the condensate. And let me emphasize that the coefficient really depends on Ω, or actually it depends on the first Dirichlet eigenfunction: here it is the value at 0 that appears, because I centered Ω at 0 and scaled it, and then I took an f which was fixed. So, if you like, I am looking at what is happening close to 0: since I took an f in L¹ ∩ L², I was looking there, close to 0, and then I get this delta. Of course, if I look somewhere else, I will see all the possible values of u₁ in Ω, and they really depend on the shape of Ω and everything. So that is what I see at the microscopic scale. Very good, I have to stop. What will I do on Thursday? On Thursday I will talk a little bit about the grand canonical ensemble (it cannot hurt), and then we will do mean-field limits and dilute limits for interacting systems. Thank you.
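[Editor's note] To make the formulas in the lecture concrete, here is a small numerical sketch; it is not part of the talk. It works in d = 3, in units where the dispersion relation is k² (so ħ and the mass are absorbed), and the helper names `rho_c`, `rho_of_mu`, `mu_of_rho` are mine. With these conventions the critical density has the closed form ρ_c(T) = ζ(3/2) (T/4π)^{3/2}, which the code checks against direct quadrature of (2π)^{-3} ∫ dk/(e^{βk²} − 1); it then inverts μ ↦ ρ(μ) by bisection below ρ_c, and above ρ_c it returns μ = 0 with the excess ρ − ρ_c going into the condensate.

```python
import math

# Free Bose gas in d = 3, dispersion k^2, inverse temperature beta
# (units chosen so that hbar and the mass drop out).
# rho_c(beta) = (2 pi)^{-3} * integral over R^3 of dk / (e^{beta k^2} - 1).

def rho_c(beta=1.0):
    """Closed form rho_c = zeta(3/2) / (4 pi beta)^{3/2} in these units."""
    # zeta(3/2) from a partial sum plus an Euler-Maclaurin tail correction
    N = 100_000
    zeta_32 = sum(n ** -1.5 for n in range(1, N + 1)) + 2.0 / math.sqrt(N + 0.5)
    return zeta_32 / (4.0 * math.pi * beta) ** 1.5

def rho_of_mu(mu, beta=1.0):
    """Density of the translation-invariant Bose gas for mu < 0.

    Expanding 1/(e^{beta(k^2 - mu)} - 1) as a geometric series and doing the
    Gaussian k-integrals gives rho(mu) = sum_{n>=1} e^{n beta mu} / (4 pi n beta)^{3/2}.
    """
    total, n = 0.0, 1
    while n < 2_000_000:          # safety cap for mu very close to 0
        term = math.exp(n * beta * mu) / (4.0 * math.pi * n * beta) ** 1.5
        total += term
        if term < 1e-16 * (1.0 + total):
            break
        n += 1
    return total

def mu_of_rho(rho, beta=1.0):
    """Invert mu -> rho(mu) by bisection below rho_c; mu = 0 when condensed."""
    if rho >= rho_c(beta):
        return 0.0
    lo, hi = -50.0, -1e-9         # rho_of_mu is increasing in mu
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if rho_of_mu(mid, beta) < rho:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# 1) Closed form vs direct quadrature of (2 pi)^{-3} * 4 pi k^2 / (e^{k^2} - 1).
dk = 1e-3
quad = sum(4.0 * math.pi * (i * dk) ** 2 / math.expm1((i * dk) ** 2) * dk
           for i in range(1, 20_000)) / (2.0 * math.pi) ** 3
print(quad, rho_c())                    # both ~ 0.0586 at T = 1

# 2) Below rho_c: a genuine negative chemical potential.
rho = 0.5 * rho_c()
mu = mu_of_rho(rho)
print(mu, rho_of_mu(mu) / rho)          # mu < 0, ratio ~ 1

# 3) Above rho_c: mu sticks at 0 and the excess rho - rho_c condenses.
print(mu_of_rho(2.0 * rho_c()), 2.0 * rho_c() - rho_c())
```

The design choice mirrors the lecture's advice: everything is computed as a function of μ, and ρ is only reached by inversion, which is exactly why the grand canonical variable is the convenient one.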