of charged systems. Okay. So as I said yesterday, suppose we have a system of charges; I will take only one species of charges and show you how it generalizes. So assume you have a system with certain fixed charges — some surfaces, some objects which are charged — with charge density ρ_f(r), and then mobile ions of charge q = ze in the system. The partition function of the system is

Z = ∫ ∏_{i=1}^{N} d³r_i exp[ −(β/2) Σ_{i,j} q_i v_c(r_i − r_j) q_j − β Σ_i ∫ d³r q_i v_c(r_i − r) ρ_f(r) − (β/2) ∫ d³r d³r′ ρ_f(r) v_c(r − r′) ρ_f(r′) ],

where v_c(r) = (1/4πε)(1/r) is the Coulomb potential. The second term is the interaction of the mobile charges with the fixed objects, and the last term, which I add, is the self-energy of the fixed charged objects. We saw yesterday that if I introduce what I call the charge concentration or charge density, ρ(r) = ρ_f(r) + Σ_i q_i δ(r − r_i) — the charge density at any point r — then the partition function Z can be written as

Z = ∫ ∏_i d³r_i e^{ −(β/2) ∫ dr dr′ ρ(r) v_c(r − r′) ρ(r′) }

(it's d³r every time, of course), where ρ(r) has to be replaced by the expression above. So it's really a function of the coordinates r_i.
So the whole idea of statistical field theory in general is to go from an initial representation in terms of particles — where you represent the partition function as an integral over all the positions of the particles — to an integral over field configurations, i.e. to represent the partition function in terms of fields in space. In this case the field is the density field, as we will see, but eventually it will be something equivalent to the electrostatic potential field. So this is what we are going to try to do. As it stands, ρ is not a variable of integration; ρ is just a notation for this expression. If I want to make ρ a new variable, the way to do it correctly, as I showed you yesterday, is to make a change of variable. You have to discretize, as I showed yesterday, but I will do it now directly in the continuum. To introduce ρ as a variable, I insert the identity

1 = ∫ ∏_r dρ(r) δ( ρ(r) − ρ_f(r) − Σ_i q_i δ(r − r_i) ).

If I introduce this in here, then ρ(r) is really an integration variable. I have just inserted an identity, so I can stick it in. And in fact I will not use it as is, but I will use its Fourier representation — the exponential representation of the delta function. So this becomes ∫ Dρ Dφ of an exponential. In the continuum form — I don't do all the steps I showed yesterday of first discretizing and then taking the limit; once you get used to it, you write it directly, and you can show it's easy — it's e^{iβ ∫ dr (…)}, where φ(r) is the variable conjugate to ρ.
What I'm using is the identity

δ(x − x₀) = ∫ dk/(2π) e^{ik(x − x₀)}.

And as I said yesterday, you can put any constant here: you can put β, making βk the new variable instead of k; it doesn't matter because k is always integrated from −∞ to +∞. So I put iβφ(r) times [ρ(r) − ρ_f(r) − q Σ_i δ(r − r_i)] — the q is there because I have only one species. So here you have this delta function: I can write δ(ρ(r) − ρ₀(r)), calling ρ₀(r) the expression above, as a product over r, and use this identity, with what I called k now called φ — it's just an integration variable. So it will be ∫ ∏_r dφ(r), which I denote Dφ, and then e^{iβ(…)}. Of course, there are always some constants which I don't write, absorbed into the measure — in general infinite constants, but who cares about infinity, right? And then you have φ(r)[ρ(r) − ρ₀(r)], and in fact, because you're in the continuum, you have to put in all the a³ lattice-spacing factors, etc., and you can show that this is just ∫ dr φ(r) times that bracket. So, coming back here: the first factor is e^{iβ ∫ dr φ(r)[ρ(r) − ρ_f(r)]}, and the last term is e^{−iβq ∫ dr φ(r) Σ_i δ(r − r_i)}, which is just e^{−iβq Σ_i φ(r_i)}, right? So if I replace here, I have −iβq Σ_i φ(r_i) in the exponent.
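Not part of the lecture, but a small numerical sketch of the delta-function identity just quoted. The k-integral is regulated by a factor e^{−ηk²} so it converges; the regulated integral is a narrow Gaussian which, as η → 0, picks out the value of a test function (the test function and the values of η, x₀ are arbitrary choices for illustration):

```python
import numpy as np
from scipy.integrate import quad

# Regulated Fourier representation of the delta function:
#   delta_eta(x) = ∫ dk/(2π) e^{i k x} e^{-eta k^2} = e^{-x^2/(4 eta)} / sqrt(4 pi eta)
x, eta = 0.3, 1e-2
num, _ = quad(lambda k: np.cos(k * x) * np.exp(-eta * k**2) / (2 * np.pi),
              -np.inf, np.inf)
exact = np.exp(-x**2 / (4 * eta)) / np.sqrt(4 * np.pi * eta)
print(num, exact)   # the regulated k-integral matches the Gaussian closed form

# As eta -> 0 the Gaussian sharpens and picks out f(x0) under an integral:
f = lambda t: np.sin(3 * t) + 2.0       # arbitrary smooth test function
x0, eta2 = 0.7, 1e-4
smear, _ = quad(lambda t: f(t) * np.exp(-(t - x0)**2 / (4 * eta2))
                / np.sqrt(4 * np.pi * eta2), x0 - 1, x0 + 1, points=[x0])
print(smear, f(x0))  # close to f(x0)
```

Rescaling k → βk in the first integral changes nothing, exactly as claimed, since k runs over the whole real line.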
OK, so the partition function Z can be written as follows. I have introduced this identity, so

Z = ∫ Dρ Dφ e^{ −(β/2) ∫ dr dr′ ρ(r) v_c(r − r′) ρ(r′) + iβ ∫ dr φ(r)[ρ(r) − ρ_f(r)] } ∫ ∏_i dr_i e^{ −iβq Σ_i φ(r_i) }.

[Question] — Yes? In what? Here? — It's the second β. — Which one? When you introduce the new integration variable for the delta function, you put a β with each k. Yes, because the variable is βk, if you want. At each point — but that's a constant, infinite because it's β to the power of the number of discretization points; that's part of the measure. And there are 2π's also, which I didn't write, part of the normalization, which give an infinite constant. But as we know, Z is essentially a normalization factor when you calculate Boltzmann weights, so all these trivial normalizations cancel between numerator and denominator. So let's go back to this. The first term is the energy term, ρ v_c ρ — that's the electrostatic energy of the system. The next term is the change of variable, if you want: the representation of the delta function which enforces ρ as the new integration variable. So the field ρ is the new integration variable, and φ is the conjugate field which allows this representation of the delta function. And the last part comes from the constraint: remember that when you write it like this, ρ has to be replaced by ρ_f(r) + Σ_i q_i δ(r − r_i), so the r_i's are still present, and it's an integral over all the r_i's.
And then it turns out that once you do this change of variable, the only place where the r_i's appear is in this last factor. And since the r_i's are decoupled variables, it factorizes: ∫ dr₁ dr₂ ⋯ dr_N e^{−iβq φ(r₁)} ⋯ is just [ ∫ dr e^{−iβq φ(r)} ]^N — you just replicate it N times. Is it clear? So this is Z_N for N particles. Now, it is simpler to work not in the canonical ensemble but in the grand canonical ensemble. The grand canonical partition function, let me call it Z(λ), where λ is the fugacity of the charges, is

Z(λ) = Σ_{N=0}^{∞} (λ^N / N!) Z_N.

This is the definition of the grand partition function. Given the form of Z_N, the only place where N comes in is this factor to the power N, so you get a resummation of that quantity, and you have this nice identity:

Σ_{N=0}^{∞} (λ^N / N!) [ ∫ dr e^{−iβqφ(r)} ]^N = e^{ λ ∫ dr e^{−iβqφ(r)} }.

So the grand partition function can now be written as

Z(λ) = ∫ Dρ Dφ e^{ −(β/2) ∫ dr dr′ ρ(r) v_c(r − r′) ρ(r′) + iβ ∫ dr φ(r)[ρ(r) − ρ_f(r)] + λ ∫ dr e^{−iβqφ(r)} }.

[Questions] — This one? Yes. This one? Yes, of course. It's a function of φ. — This quantity I had to insert here, right, because it's one, so I can introduce it.
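Not in the lecture: a numerical sketch of the resummation just used. Here I1 stands in for the one-particle integral ∫ dr e^{−iβqφ(r)} (just a placeholder number here, so the series is real; lam and I1 are arbitrary test values):

```python
import math

# Resummation behind the grand partition function:
#   sum_{N=0}^inf (lambda^N / N!) * I1^N = exp(lambda * I1)
lam, I1 = 0.8, 2.5
partial = sum((lam * I1)**N / math.factorial(N) for N in range(40))
print(partial, math.exp(lam * I1))   # the two agree
```

Forty terms are far more than enough for these values; the factorial makes the series converge for any λ.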
So it is there. And of course you can interchange the integration signs as you want; then you see that the ∫ dr_i acts only on this part, and it factorizes between the particles, okay? Okay, so this expression is of course very important, and I will show you now how to simplify it further — how we can get rid of the charge density field and stay with a simple functional integral which depends only on φ. One small remark, which I forgot to mention: when you go to the grand canonical ensemble, what is fixed is not the number of particles, like here; it's the fugacity. And when you fix the fugacity, that determines the average number of particles, which is given by

⟨N⟩ = λ d/dλ log Z(λ).

You get it from here: when you take a derivative of log Z with respect to λ, you bring down N — you replace λ^N by N λ^{N−1} — and if you re-multiply by λ, you reconstruct this sum with a factor of N in front, and you divide by Z. This is standard. Of course, N has fluctuations; the average number will fluctuate, et cetera. OK, so this is where we got yesterday. Now, where do I want to go? One thing first: this object, by the way, is called a functional integral. It's an integral not over real variables like ∫ dx or something like that, but over a field: you integrate over all possible configurations of the field — the field can be flat, can be whatever you want — and here over both fields, ρ and φ. I don't know if you studied the Feynman path integral in quantum mechanics or in other contexts; the Feynman path integral is the simplest form.
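Not in the lecture: checking ⟨N⟩ = λ d/dλ log Z(λ) on the simplest possible case, Z(λ) = e^{λ z1} (an ideal gas, with z1 standing in for the one-particle partition function — both values arbitrary), where the exact answer is ⟨N⟩ = λ z1:

```python
# <N> = lambda * d/dlambda log Z(lambda), ideal-gas sanity check
z1, lam, h = 3.0, 0.5, 1e-6
logZ = lambda l: l * z1
N_avg = lam * (logZ(lam + h) - logZ(lam - h)) / (2 * h)   # central difference
print(N_avg)   # -> lam * z1 = 1.5
```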
It's the simplest form of a functional integral. So, functional integrals are not very well defined mathematically, but the way we will use them is absolutely OK. So now I want to show you how to simplify further. The important point is that the only case where you can calculate a functional integral exactly is when it is Gaussian. Gaussian means that the exponent is a second-degree form. And here you see that in the integration variable ρ the exponent is quadratic — it goes like ρ² — plus a linear term. So it's a quadratic form that you can integrate, and this is what I'm going to show you now. Any question? There is an alternative way to represent the particle system in terms of a field, using what's called the Hubbard–Stratonovich transformation. But this construction is more general, because Hubbard–Stratonovich assumes you have only two-body interactions — quadratic forms — while here it's completely general. As an exercise, I will show you the Hubbard–Stratonovich transformation. OK. So, the point is we have a quadratic term here, and I will do a small appendix on Gaussian integrals. One variable: consider ∫ dx e^{−ax²/2 + bx}. You can do it by shifting, or by your favorite method:

∫ dx e^{−ax²/2 + bx} = √(2π/a) e^{b²/(2a)}.

You find this in any textbook, and of course it assumes a > 0. Many variables — many means n. By definition,

I_n = ∫ ∏_{i=1}^{n} dx_i e^{ −(1/2) Σ_{i,j} x_i A_{ij} x_j + Σ_i b_i x_i }.

This is just the generalization of the one-variable integral to many dimensions.
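Not in the lecture: the one-variable textbook identity just quoted, checked numerically (a and b are arbitrary test values):

```python
import numpy as np
from scipy.integrate import quad

# One-variable Gaussian integral:
#   ∫ dx e^{-a x^2/2 + b x} = sqrt(2*pi/a) * exp(b^2/(2a)),  for a > 0
a, b = 2.0, 1.0
num, _ = quad(lambda x: np.exp(-0.5 * a * x**2 + b * x), -np.inf, np.inf)
closed = np.sqrt(2 * np.pi / a) * np.exp(b**2 / (2 * a))
print(num, closed)   # the two agree
```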
Now, for this integral to be meaningful, the matrix A_{ij} — the equivalent of a > 0 here — should be symmetric and positive definite. Positive definite means that all the eigenvalues of the matrix are positive, not negative. And the result is

I_n = (2π)^{n/2} / √(det A) · e^{ (1/2) Σ_{i,j} b_i (A⁻¹)_{ij} b_j }.

All this is exact — it's an identity. To go from here to here, you make a change of basis: you go to the basis which diagonalizes the matrix A_{ij}, you make a change of variable, all the integrals decouple, and you're back to the one-variable problem. Because the matrix is positive definite, the inverse A⁻¹ exists, and by definition of the inverse, for any pair i, j,

Σ_k (A⁻¹)_{ik} A_{kj} = Σ_k A_{ik} (A⁻¹)_{kj} = δ_{ij}.

That's the definition of the inverse of a matrix. Some interesting properties of this, which I want to emphasize: if I want expectation values of the variable x_i, or any number of variables, from this I_n, you see that every time I take a derivative of the integral with respect to b_i, I bring down a variable x_i. So, for instance, if I define the expectation value ⟨x_i⟩ as the integral of x_i times the Gaussian weight divided by the normalization I_n, then ⟨x_i⟩ = (1/I_n) ∂I_n/∂b_i = ∂ log I_n / ∂b_i. When I take ∂I_n/∂b_i, I bring down x_i times the Gaussian weight, and then, because it's an expectation value, you divide by I_n. So, for instance, if I want the expectation value of x_i in this case, I just take a derivative of the log of the closed-form result with respect to b_i.
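Not in the lecture: the many-variable Gaussian identity checked for n = 2 by direct numerical integration (the matrix A and vector b are arbitrary test values, with A symmetric positive definite):

```python
import numpy as np
from scipy.integrate import dblquad

# n-variable Gaussian integral, n = 2:
#   I_n = (2*pi)^{n/2}/sqrt(det A) * exp( (1/2) b^T A^{-1} b )
A = np.array([[2.0, 0.5], [0.5, 1.0]])     # symmetric, positive definite
b = np.array([0.3, -0.2])

num = dblquad(
    lambda y, x: np.exp(-0.5 * (A[0, 0]*x*x + 2*A[0, 1]*x*y + A[1, 1]*y*y)
                        + b[0]*x + b[1]*y),
    -10, 10, -10, 10)[0]

closed = (2 * np.pi) / np.sqrt(np.linalg.det(A)) \
         * np.exp(0.5 * b @ np.linalg.inv(A) @ b)
print(num, closed)   # the two agree
```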
And you see that the part of the log which depends on b is just the exponent, and taking a derivative with respect to b_i gives ⟨x_i⟩ = Σ_j (A⁻¹)_{ij} b_j. If I want higher-order correlations of these variables x_i, for instance ⟨x_i x_j⟩, then in order to bring down x_i and x_j I need two derivatives, one with respect to b_i and one with respect to b_j, and to normalize: ⟨x_i x_j⟩ = (1/I_n) ∂²I_n/∂b_i∂b_j. Taking two derivatives of I_n with respect to b_i, b_j brings down x_i x_j, and then you normalize to have an expectation value. And the interesting thing is the so-called second cumulant, which is just the connected correlation function — exercise:

⟨x_i x_j⟩ − ⟨x_i⟩⟨x_j⟩ = ∂² log I_n / ∂b_i ∂b_j.

All the so-called cumulants — the cumulant expansion — are obtained by taking derivatives of the log of the integral. So now we go to the case of continuous variables, as in the path integral or as in the functional integral here. The generalization from n variables is fairly simple; I will put it here because I don't want to erase this. So, if I have a field — using the notation from before, let me call it ρ(r) — I consider the integral

I = ∫ Dρ e^{ −(1/2) ∫ dr dr′ ρ(r) A(r − r′) ρ(r′) + ∫ dr B(r) ρ(r) }.

You see that this is really a generalization of the discrete integral to the case where the indices i are no longer discrete integer indices but continuous real variables. The clean way to do it is to write this as a discrete integral by discretizing space, use the discrete formula, and then take the continuum limit.
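Not in the lecture: the second-cumulant identity checked numerically for n = 2 — the connected correlation ⟨x_i x_j⟩ − ⟨x_i⟩⟨x_j⟩ should equal (A⁻¹)_{ij}, independently of b (A and b are the same arbitrary test values as above):

```python
import numpy as np
from scipy.integrate import dblquad

# Second cumulant of a Gaussian:  <x_i x_j> - <x_i><x_j> = (A^{-1})_{ij}
A = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([0.3, -0.2])
w = lambda x, y: np.exp(-0.5 * (A[0, 0]*x*x + 2*A[0, 1]*x*y + A[1, 1]*y*y)
                        + b[0]*x + b[1]*y)

def avg(f):
    # thermodynamic average: weighted integral over the normalization
    num = dblquad(lambda y, x: f(x, y) * w(x, y), -10, 10, -10, 10)[0]
    den = dblquad(lambda y, x: w(x, y), -10, 10, -10, 10)[0]
    return num / den

c01 = avg(lambda x, y: x * y) - avg(lambda x, y: x) * avg(lambda x, y: y)
print(c01, np.linalg.inv(A)[0, 1])   # both ≈ -0.5/1.75
```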
So, I'm not going to do that; I give you the result directly:

I = C / √(det A) · e^{ (1/2) ∫ dr dr′ B(r) A⁻¹(r − r′) B(r′) },

where C is an infinite constant, det A is a Fredholm determinant — I will explain what that is — and A⁻¹ is the inverse operator of A(r − r′). Of course, here again, A(r − r′) has to be symmetric and positive definite. The inverse operator of A is defined in the same way as in the discrete case: you just replace sums by integrals. By definition of A⁻¹,

∫ dr̄ A⁻¹(r − r̄) A(r̄ − r′) = ∫ dr̄ A(r − r̄) A⁻¹(r̄ − r′) = δ(r − r′).

This is really the equivalent of the discrete relation, yes? — Is any of this guaranteed to be well defined in the continuum? — Nothing, OK? So, the determinant of A can be defined in several ways. Of course, you can define it as the product of all the eigenvalues λ_α of A. In general, if you blindly take the product of all the eigenvalues, you get something divergent, but there are many ways to regularize it. You can also write det A = e^{Tr log A}; you can show this identity for the operator A. Anyway, what we will see is that although this determinant is in general infinite, we will use it in the partition function here, and therefore it will be just one additional infinite constant in front of the quantity we are studying. So it's not going to cause any problem — it's again a normalization factor which is infinite. That's how it goes. OK, so this is the important point.
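Not in the lecture: the identity det A = e^{Tr log A} = ∏_α λ_α, checked on a small symmetric positive definite matrix — the finite-dimensional stand-in for the Fredholm determinant (the matrix is randomly generated for illustration):

```python
import numpy as np

# det A = exp(Tr log A) = product of eigenvalues, for A symmetric positive definite
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)          # symmetric, positive definite by construction
eig = np.linalg.eigvalsh(A)
print(np.prod(eig), np.exp(np.sum(np.log(eig))), np.linalg.det(A))  # all equal
```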
And now, what I want you to do is keep this in mind — this is a small appendix, and Gaussian integrals are very important. Everything I showed about calculating expectation values in the discrete case carries over here, except that now, if you want, for instance, the expectation value of ρ(r), it's a functional derivative that you have to take with respect to B(r): the equivalent of the discrete equation would be ⟨ρ(r)⟩ = δ log I / δB(r). And I guess you must have seen these things about functional derivatives, right? When you take a functional derivative of this integral with respect to B(r), you bring down a factor of ρ(r). The point is I don't want to erase this equation — actually, okay, I can erase; I will write it here. So I can write also that ⟨N⟩ = λ d/dλ log Z(λ). Yes. So, by definition, when I write an expectation value — for instance ⟨x_i⟩ — it is to be understood as

⟨x_i⟩ = ∫ ∏_i dx_i x_i e^{ −(1/2) Σ x_i A_{ij} x_j + Σ b_i x_i } / ∫ ∏_i dx_i e^{ −(1/2) Σ x_i A_{ij} x_j + Σ b_i x_i }.

Just to make sure we understand the notation: that's the definition of an expectation value. If it's ⟨x_i x_j⟩, you put x_i x_j. You put the quantity you want to average, multiply by the Gaussian weight, and divide by the normalization. That's the definition of the expectation value, or thermodynamic average, of a quantity. So far, so good. The point of this little intermezzo is to show you that here, the exponent is quadratic in ρ — it has exactly this form, with kernel A = β v_c(r − r′) and B(r) = iβφ(r). So I can use this formula to do the ρ integral exactly.
And the result, of course, is fairly simple. So I integrate over ρ. The correspondence, looking here, is A(r − r′) = β v_c(r − r′) and B(r) = iβφ(r). So I do the ρ integral using this formula, and I get

Z(λ) = C / √(det β v_c) · ∫ Dφ e^{ −(β/2) ∫ dr dr′ φ(r) v_c⁻¹(r − r′) φ(r′) − iβ ∫ dr φ(r) ρ_f(r) + λ ∫ dr e^{−iβqφ(r)} }.

The determinant prefactor is just a constant — infinite, but I don't care, because it's a number which comes out in front — and then I stay with an integral over the field φ only. The quadratic term comes from (1/2) B A⁻¹ B: A⁻¹ is (1/β) v_c⁻¹ of r − r′, because it's an inverse — it's quite obvious — and B gives (iβ)² = −β², so altogether −(β/2) φ v_c⁻¹ φ. — Sorry? You integrate over...? — Over ρ. Rho, okay. Because ρ is quadratic: it's ρ v_c ρ plus iβφρ. I don't know if I have colored chalk — yes. So I have ρ here and ρ here, right? You see that this part is quadratic and this part is linear, so it's exactly of this form: ρ (quadratic kernel) ρ plus a linear part, which you can read off here. — Is it okay? — Yes, it has a negative sign because there is iβ. Here it is positive, but you have B and B, and B is iβφ — that's where the minus sign comes from. Okay.
So, this is the expression, and I will simplify it further by showing you what v_c⁻¹ is equal to — how you calculate it. I remind you that v_c(r) = (1/4πε)(1/r). Okay. So we know that this v_c(r) satisfies the Poisson equation: −ε ∇² v_c(r) = δ(r). And if it were r − r′, it would be −ε ∇² v_c(r − r′) = δ(r − r′), so let me write it rather like this, with the −ε on the left. Why do I want to write it like this? Because it starts looking very much like the defining relation of the inverse: you have the delta here, and an operator acting on v_c which gives the delta function, which means that this operator is going to be the functional inverse. By the way, this is called the functional inverse, and the determinant here is called a Fredholm determinant — when you have an operator kernel, it's called Fredholm. There are books about Fredholm determinants and the regularization of Fredholm determinants, and about inverses; the inverse is better defined than the Fredholm determinant. So, in order to see this as an inverse operator, I put it in this form:

∫ dr̄ [ −ε ∇²_r δ(r − r̄) ] v_c(r̄ − r′) = δ(r − r′).

This equation is right. Why is it so? Because the Laplacian of the delta function with respect to r is the same as with respect to r̄ — it's a function of the difference of the two — so this is just ∫ dr̄ [ −ε ∇²_{r̄} δ(r − r̄) ] v_c(r̄ − r′). And then I can integrate by parts: this is −ε ∫ dr̄ δ(r − r̄) ∇²_{r̄} v_c(r̄ − r′).
And when I do the delta integral, this is just −ε ∇² v_c(r − r′), which by the Poisson equation equals δ(r − r′). So this was just to show you that the functional inverse of v_c is

v_c⁻¹(r − r′) = −ε ∇² δ(r − r′).

Is it clear or is it not clear? No? Okay, I'll do it again. — Yes? Sorry, this one? Oh, this one? Yes. Yes, I can put it out. Yes, absolutely — you're absolutely right. If I write it like this, in fact, since the integral acts on r̄, I can pull the Laplacian ∇²_r out of the integral, and the second line can be written directly as ∇²_r ∫ dr̄ [−ε δ(r − r̄)] v_c(r̄ − r′), and therefore it's just −ε ∇² v_c(r − r′). Yes — that's an even simpler demonstration. Okay, so essentially the result is this one: the inverse of the Coulomb potential is just minus epsilon times the Laplacian, and that's the content of the Poisson equation. This equation implies that this differential operator is the inverse of v_c in the operator sense. Everybody agrees? No question? Okay, so then I can just go back and replace v_c⁻¹ by its expression. And by the way, this v_c(r − r′) is a symmetric operator, of course, and it is positive definite. Why is it positive definite? Because the spectrum of an operator which is a function of r − r′ is given by its Fourier components, and the Fourier transform of v_c is ṽ_c(k) = 1/(ε k²), which is strictly positive. Again, you can see this by taking the Fourier transform of the Poisson equation, which gives ε k² ṽ_c(k) = 1.
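Not in the lecture: a numerical check, in Fourier space, that −ε∇² inverts the Coulomb potential. A small screening mass μ is introduced as a regulator (v(r) = e^{−μr}/(4πεr), an assumption for the sketch; μ → 0 recovers the Coulomb case). The 3D Fourier transform reduces to a radial integral:

```python
import numpy as np
from scipy.integrate import quad

# v~(k) = (1/(eps*k)) ∫_0^inf dr sin(k r) e^{-mu r} = 1/(eps (k^2 + mu^2)),
# so eps*k^2 * v~(k) = k^2/(k^2 + mu^2) -> 1 as mu -> 0,
# i.e. -eps*Laplacian is the inverse operator of v_c.
eps, mu, k = 2.0, 0.1, 0.7
integral, _ = quad(lambda r: np.sin(k * r) * np.exp(-mu * r), 0, 200, limit=400)
vk = integral / (eps * k)
print(eps * k**2 * vk)   # ≈ k^2/(k^2 + mu^2) = 0.98, tending to 1 as mu -> 0
```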
And therefore the operator is positive definite, and the integration that I did was legal, because v_c is a positive definite operator. Okay? So now I can simplify further, and this will be the final simplification for the partition function. What I do is replace v_c⁻¹ by −ε ∇² δ(r − r′), and — now forgetting the infinite constant in front; the infinite constants are contained in the measure Dφ — I get

Z(λ) = ∫ Dφ e^{ (βε/2) ∫ dr dr′ φ(r) ∇² δ(r − r′) φ(r′) − iβ ∫ dr φ(r) ρ_f(r) + λ ∫ dr e^{−iβqφ(r)} }.

(The −(β/2) in front of φ v_c⁻¹ φ combines with the −ε from v_c⁻¹ to give +βε/2.) And this expression simplifies a little further in the following way. Since you can integrate the Laplacian by parts — you integrate by parts twice, so it acts on the other side, and then the delta function takes out one integral — I use the fact that

∫ dr dr′ φ(r) ∇² δ(r − r′) φ(r′) = ∫ dr dr′ [∇² φ(r)] δ(r − r′) φ(r′) = ∫ dr [∇² φ(r)] φ(r),

where the delta tells me that r′ = r. And usually, if φ has good properties at infinity, you can integrate by parts once more — move one of the derivatives over — and this is just −∫ dr (∇φ)², right? Everybody follows, or is everybody lost? It's difficult to appreciate.
OK, so the final result — sorry? Lost? At which level? Don't tell me at the beginning. This one, here? OK, that's a good thing. So here I have just put in v_c⁻¹; I took out the factor −ε, et cetera. I'm just evaluating this term. The important part of v_c⁻¹ is this Laplacian with respect to r of δ(r − r′) — that's what I tried to prove here. So then you know that for quantities like this, you can move the derivatives from one side to the other because there are no boundary terms, because φ is 0 at infinity. And therefore what you have is: ∫ φ ∇²ψ = +∫ (∇²φ) ψ for a full Laplacian, while each single integration by parts — moving one gradient (I forgot a gradient here) — changes the sign: ∫ φ ∇²φ = −∫ (∇φ)². Is this a problem for you, or do you understand? This is because there is no boundary term, right? In principle it's like the Gauss theorem: there is the flux of the gradient through the outside surface, but you assume that the electric field — everything — is 0 at infinity. So there is no boundary term, and if there is no boundary term, you can just move the derivatives around; every time you move one to the other side, you change the sign. So you see that a Laplacian, for instance, can act on the right or on the left with no change of sign. — Yes? — So, for a system like the charged cylinder, the way to do it in a clean way — no, no, you think you have a point, but it's not. The point is that what you need is that ∇φ should go to 0 at infinity, because the boundary term, when you go from here to here, is the surface integral of ∇φ — which in the electrostatic case is the electric field on the boundary. Yes?
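Not in the lecture: a one-dimensional numerical sketch of the integration by parts just discussed, with a field that decays at infinity so the boundary terms vanish (the Gaussian profile is an arbitrary choice):

```python
import numpy as np

# ∫ phi * phi'' dx = -∫ (phi')^2 dx  when phi and phi' vanish at the boundary
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
phi = np.exp(-x**2)                  # decays to zero well before the box edges
dphi = np.gradient(phi, dx)          # phi'
d2phi = np.gradient(dphi, dx)        # phi''
lhs = np.sum(phi * d2phi) * dx
rhs = -np.sum(dphi**2) * dx
print(lhs, rhs)   # equal up to discretization error
```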
So usually the clean way to do the cylinder — which I didn't do because it's much more complicated than what I showed yesterday — is this: you have your charged cylinder, and you put it in a cylindrical box on which you impose that the electric field is zero. So you impose ∇φ = 0, and then you can solve it. It's more complicated than what I showed yesterday, but you can solve it exactly. Actually, the first people who solved it solved it exactly in this geometry: a small cylinder of radius a and a large cylinder of radius R, with the boundary condition that the field is imposed to be zero on the large cylinder, and then you take the limit R → ∞. Then you are safe; everything is okay. — On the bigger cylinder? — Yes. It's finite — it's a finite function of R — but you see, it's never the potential itself which enters the game. When you do this integration by parts, it's the gradient of the potential which has to be zero — the electric field, not the potential. Because the value of the potential itself is defined up to a constant, the only question is whether it's finite or not, and all finite values are equivalent, so it's not a problem. — Here? — No, you just need the boundary terms, and the boundary terms involve only ∇φ. — Is this integration by parts? — Yes, this is just integration by parts. Okay, we can come back. — Yes? — Yes, it's just an analogy; I will show you why. It turns out that φ is directly related to the electrostatic potential. You are right that a priori it is just an auxiliary variable to enforce that the charge density is the charge density, but it's the conjugate field, and we will see that iφ plays the role of the electrostatic potential.
OK, so if we do all this, we come to the final form, which I will write here. The partition function Z of lambda is equal to an integral over all field configurations phi, integral d phi of e to the: once I do the integration by parts, this becomes minus beta epsilon over 2 integral d3r gradient phi squared, minus i beta integral d3r rho f of r phi of r, plus lambda integral d3r e to the minus i beta q phi of r. And this is really the important form that I will be using all the time. So this is the functional integral representation for the partition function of a charged system, of a gas of charges with charge q and fugacity lambda. Yes? So if you think of phi as being like the electrostatic potential, then epsilon gradient phi squared is the energy of the field. Yes. Plus, apart from an i, the interaction between the fixed charges and the potential. Exactly. And is there any way of interpreting the rest? The last term is the partition function of a charge in the electrostatic field i phi. In fact, yes: the first term is the electrostatic energy of the system, the second is the electrostatic interaction with the fixed charges, and the last term is, in fact, the entropy of the particles in the field phi. Let me show you one thing to come back to this analogy you mentioned before. You know that if I have fixed charges in a system, then Z of lambda is e to the minus beta F, where F is the free energy. Actually, it should be the grand potential; I don't care, I call it F or whatever. So the point is that this Z of lambda is the partition function of a system with mobile charges in the presence of a fixed charge density rho f. If I take the functional derivative of log Z of lambda with respect to rho f of r, this is, by definition, related to the electrostatic potential of the system, up to a factor of beta, right?
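Written out, the weight here is e to the minus S of phi, with S of phi equal to beta epsilon over 2 integral (grad phi)^2, plus i beta integral rho f phi, minus lambda integral e to the minus i beta q phi. Just to make the structure concrete, here is a minimal one-dimensional lattice sketch evaluating S for one trial configuration; the grid, the parameter values, and the trial functions are illustrative assumptions:

```python
import numpy as np

# Minimal 1D lattice sketch of the action S[phi] whose weight e^{-S[phi]}
# appears in Z = integral Dphi e^{-S[phi]}; all values are illustrative.
beta, eps, q, lam = 1.0, 1.0, 1.0, 0.1

x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]
rho_f = np.exp(-x**2)            # a hypothetical fixed charge density
phi = 0.3 * np.exp(-x**2 / 2)    # one trial field configuration

grad_phi = np.gradient(phi, dx)
S = ((beta * eps / 2) * np.sum(grad_phi**2) * dx      # field energy term
     + 1j * beta * np.sum(rho_f * phi) * dx           # coupling to rho_f
     - lam * np.sum(np.exp(-1j * beta * q * phi)) * dx)  # fugacity term

# S is complex: the i in the coupling makes the weight e^{-S} oscillatory
print(S)
```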
Because eventually I should get something like e to the minus beta psi of r rho f of r; let me not call it phi but psi of r, the real electrostatic potential. The rho f of r should couple directly to the electrostatic potential, and with a minus sign. So one over beta times this derivative of log Z should be minus the electrostatic potential psi of r. Does anybody agree? In terms of the free energy, which I can write like this: psi of r is delta F by delta rho f of r; the functional derivative of the free energy with respect to the fixed charge density is the potential, with a plus sign, because Z is e to the minus beta F. Agreed? Yes, with an integral. Sorry? Rho f are the fixed charges. So in other words, the electrostatic potential is given by the variation of the free energy with respect to the fixed charges. Now look here: when I take the derivative of log Z with respect to rho f, I bring down minus i beta phi. So you can see immediately that psi of r, the real electrostatic potential present in the system at point r, is the expectation value of i phi of r. Right? If I take the derivative of this with respect to rho f of r, I bring down minus i beta phi of r. Yes? Sorry? No, phi is a real field, but you see, the weight here takes values all over the place. Phi itself is a real integration variable, no question, but the weight is complex, so nothing tells you a priori whether an expectation value is real or not. Eventually, the expectation value of i phi is going to be real. OK? One last thing before I forget to do it. I did this calculation for the case of only one type of charge: one species with charge q and fugacity lambda.
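The chain of identities spoken here can be written compactly. Using the final form of the action, in which the only dependence on the fixed charges is the term $i\beta\int d^3r\,\rho_f\,\phi$:

```latex
\psi(\mathbf r)
\;=\; \frac{\delta F}{\delta \rho_f(\mathbf r)}
\;=\; -\frac{1}{\beta}\,\frac{\delta \ln Z}{\delta \rho_f(\mathbf r)}
\;=\; -\frac{1}{\beta}\,\bigl\langle -\,i\beta\,\phi(\mathbf r)\bigr\rangle
\;=\; \bigl\langle\, i\,\phi(\mathbf r)\,\bigr\rangle .
```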
Now, I didn't do it because I thought it was complicated enough like this, but if you have several species of charges, q i instead of q, with fugacities lambda i, the formula is fairly simple: instead of the single term lambda e to the minus i beta q phi here, you have a sum over i of lambda i e to the minus i beta q i phi. That's it. Yes? Different from zero? Is it only because of the first term in the exponent? No, because of this term and this one also. Why? I mean, it depends; you could have this. No, this one is not symmetric, it's linear. If it were only this term, you would be right. So, for instance, if you didn't have this, which is the vacuum case by the way, then in the vacuum, yes, the average field is zero, but the fluctuations are not zero; that's the Casimir effect and things like that. Yes. So just one comment: if you have many species of charges, it's not only one lambda, but a sum over all the species of lambda i times this. And the average number of particles of each species in the system, N i, is equal to lambda i d by d lambda i of log Z. Right? So you have to determine all the lambdas, all the fugacities of the species, to match the number of particles of each species in the system. What else did I want to say? Yes. It turns out that all the correlation functions of the real electrostatic field are given by averages of the field i phi, so there is a direct one-to-one correspondence between the electrostatic potential and the field i phi. OK? Another thing which is interesting to evaluate, and which is not trivial here, is the ionic concentrations. So let's continue over there. This is the expression in terms of only the field phi, but if I want the ion concentration, it is easier to start from the earlier expression, the one still written in terms of the charge density. So I will write c of r, and I take the case of only one ionic species. Right?
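The relation between the average particle number and the fugacity can be checked in the simplest grand-canonical toy case, a single non-interacting species where Z of lambda is the sum over N of (lambda V)^N over N factorial; the values of V and lambda below are illustrative:

```python
import numpy as np
from math import factorial

# Toy check of <N> = lambda d(ln Z)/d(lambda) for a non-interacting species,
# where Z(lam) = sum_N (lam*V)^N / N! = e^{lam*V}; V and lam are illustrative.
V, lam = 2.0, 0.7

def grand_Z(lam, nmax=80):
    return sum((lam * V) ** n / factorial(n) for n in range(nmax))

def avg_N(lam, nmax=80):
    Z = grand_Z(lam, nmax)
    return sum(n * (lam * V) ** n / factorial(n) for n in range(nmax)) / Z

# lambda * d(ln Z)/d(lambda), by a small central finite difference
h = 1e-6
dlnZ = (np.log(grand_Z(lam + h)) - np.log(grand_Z(lam - h))) / (2 * h)

print(lam * dlnZ, avg_N(lam))  # both approach lam * V = 1.4
```

In a real calculation one inverts this relation: the fugacities are tuned until lambda i d log Z by d lambda i matches the imposed particle numbers.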
So, c of r. I remind you that when I did the change of variable up there, it was rho of r equals rho f of r plus q sum over i delta of r minus r i. The ionic concentration c of r in this language is only this second term, because the first is the fixed charge. So it's one over q times rho of r minus rho f of r. And of course, since the system is fluctuating, the r i's are fluctuating, so c of r is the expectation value of this. OK? So I write that c of r is one over q times this expectation value, which I will calculate from here. It's one over Z of lambda times the integral d rho d phi of rho of r minus rho f of r, times e to the minus beta over two integral rho v c rho, in shorthand notation, plus i beta integral phi times rho minus rho f, plus lambda integral e to the minus i beta q phi. OK? So this is, by definition, c of r equals one over q times the expectation value of rho of r minus rho f of r. Now you can see that this is one over q, one over Z of lambda, integral d rho d phi, and this factor rho minus rho f is just like taking a functional derivative with respect to phi. Let me write it like this: it's e to the minus beta over two integral rho v c rho, times one over i beta, delta by delta phi of r, of e to the i beta integral dr phi of r times rho of r minus rho f of r. Right? If I take the derivative of this object with respect to phi of r, I bring down i beta times rho minus rho f. OK? Not OK? Is it OK? At least one person says OK; I need one person to say OK to continue. So now, these functional integrals: you can show that they behave exactly like normal integrals. In particular, if I have the integral of d by d phi of this, times that, I can integrate by parts. And one point is that for these functional integrals there are never boundary terms; they always go to zero because of the decay properties of the field at infinity.
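The step of trading the factor rho minus rho f for a phi-derivative is the functional version of an elementary identity for a single variable, which a zero-dimensional numerical sketch can confirm (all numbers below are illustrative):

```python
import numpy as np

# Zero-dimensional analogue of the trick above: for a single variable phi,
# (rho - rho_f) e^{i beta phi (rho - rho_f)} = (1/(i beta)) d/dphi e^{i beta phi (rho - rho_f)}
beta, drho, phi = 1.3, 0.8, 0.4   # drho plays the role of rho - rho_f

f = lambda p: np.exp(1j * beta * p * drho)

h = 1e-6
numeric = (f(phi + h) - f(phi - h)) / (2 * h)  # numerical d/dphi
lhs = drho * f(phi)                            # factor brought down directly
rhs = numeric / (1j * beta)                    # same factor via the derivative

print(abs(lhs - rhs))  # ~0
```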
So this I can integrate by parts. When I integrate by parts over phi, this becomes minus one over i beta, e to the i beta integral dr phi of r times rho of r minus rho f of r, times delta by delta phi of r of e to the lambda integral dr e to the minus i beta q phi of r. Right? This is just an integration by parts in the delta by delta phi of r. It's like u prime v going to minus u v prime: I remind you that the integral of u dv is u v at the boundary, which is zero here, minus the integral of v du. This is what I use all the time, and what I used before when I went from the form with phi Laplacian phi to the form with gradient phi squared. Here it's an integration by parts, but a functional integration by parts, with the functional derivative delta by delta phi playing the role of the prime. And now the d by d phi acts here. OK? And you see that when you take the functional derivative with respect to phi of r, you bring down lambda times minus i beta q times e to the minus i beta q phi. So you get the result: delta by delta phi of r of e to the lambda integral dr e to the minus i beta q phi of r is equal to minus i beta q lambda e to the minus i beta q phi of r, times e to the lambda integral dr e to the minus i beta q phi of r. It's like the derivative of an exponential, right? If you didn't have the integral and you took the derivative with respect to the variable phi, you would bring down lambda times minus i beta q e to the minus i beta q phi, times e to the lambda e to the minus i beta q phi. It's just the same. No, you cannot... OK, there is a big philosophical question here. All these functional integrals are used all the time, and mathematically they are not well defined.
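The derivative computed here, again stripped down to a single variable, can be verified numerically; the parameter values are illustrative:

```python
import numpy as np

# Zero-dimensional check of the derivative above:
# d/dphi exp(lam e^{-i beta q phi}) = (-i beta q) lam e^{-i beta q phi} exp(lam e^{-i beta q phi})
beta, q, lam, phi = 1.0, 1.0, 0.1, 0.3   # illustrative values

F = lambda p: np.exp(lam * np.exp(-1j * beta * q * p))

h = 1e-6
numeric = (F(phi + h) - F(phi - h)) / (2 * h)   # central finite difference
analytic = -1j * beta * q * lam * np.exp(-1j * beta * q * phi) * F(phi)

print(abs(numeric - analytic))  # ~0
```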
Mathematicians cannot define them rigorously because they are full of infinities. So the only way to give a meaning to all this is to check that, for instance, if you do perturbation theory or things like that, you get the correct results, results which you can obtain otherwise. In particular, you can show that using these kinds of rules, where you neglect everything coming from the boundary terms, gives the correct perturbation expansion and so on. That's all I can say. There are no boundary terms; if you put boundary terms in, they are completely undefined. Actually, the reason is that if you want to be able to give any kind of meaning to these objects, you have to impose that phi and the gradient of phi go to zero at infinity. That's a must; otherwise nothing makes sense, all these things are completely crazy. If you impose the boundary condition that phi and gradient phi go to zero extremely fast, then you can show that all these boundary terms just disappear. That's the best justification I can give. So anyway, in the end, once you do things correctly, you end up with the result for the c of r which is here: you have minus one over i beta times this, so the i beta goes away, the q which is here goes away, and it is just equal to lambda times the expectation value of e to the minus i beta q phi of r. Right? Because you just plug this back in, and that's what you get. In the case of many species, c i of r is equal to lambda i times the expectation value of e to the minus i beta q i phi of r. And this looks very much like the Boltzmann kind of distribution that we saw before, when we looked at the Poisson-Boltzmann equation. You remember there was no factor of i there, because, as we have seen, the electrostatic potential is really related to i phi.
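Just to preview the connection to Poisson-Boltzmann: at the mean-field (saddle-point) level, where the field is replaced by its average, phi goes to minus i psi since psi equals the expectation of i phi, and the result above reduces to the familiar Boltzmann form:

```latex
c_i(\mathbf r)
\;=\; \lambda_i\,\bigl\langle e^{-i\beta q_i \phi(\mathbf r)}\bigr\rangle
\;\longrightarrow\;
\lambda_i\, e^{-i\beta q_i\,(-i\psi(\mathbf r))}
\;=\; \lambda_i\, e^{-\beta q_i \psi(\mathbf r)} ,
```

which is the distribution entering the Poisson-Boltzmann equation, with no factor of i left over.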
And in the case of Poisson-Boltzmann, when we did the derivation, we saw that lambda i plays the role of the bulk concentration of the ions. Here, lambda i is not the bulk concentration of ions; it is really the fugacity of the charges, determined by the property above. So this is the ion concentration, and from this, of course, you can calculate everything. Correlation functions of the potential will be correlation functions of i phi, and correlation functions of the concentration will be correlation functions of e to the minus i beta q phi. OK, so I will stop here because I think it's a bit dense, maybe. Tomorrow I will show you how one can try to calculate this kind of object, how one can recover the Poisson-Boltzmann equation and Debye-Hückel theory, how one can go beyond them, and all kinds of further approximations. Thank you very much.