OK, so today we're going to make contact with localization and this morning's lecture by Francesco. I'm going to start by computing the expectation value of the two 1/2-BPS Wilson loops that I introduced yesterday, the infinite straight line and the circle. We take the representation to be the fundamental one, so we want to compute this object. We're going to start by computing it in perturbation theory, because I think it's a very nice exercise where you gain a lot of intuition. Moreover, in this very simple case of N = 4 super Yang-Mills, it's an exercise in which you see the connection between the matrix model and actual Feynman diagrams. So it's a very nice setup. OK, so how do we compute this object? We expand the exponential. The first term is 1: we get the trace of the identity, which is an N by N matrix, so we get N, which cancels against the 1/N. So the first term is 1. The next term, linear in the fields, vanishes, because these are traceless matrices. Then you have to worry about the path-ordering symbol: when you expand the exponential, the higher terms in the Taylor expansion have more integrals, and you have to order the integration variables one after the other. You can use a formula that should be familiar from a similar situation, namely when you solve the Dyson equation: the 1/n! from the Taylor expansion of the exponential, combined with the path ordering, turns the n-fold integral into an ordered one, (1/n!) P ∫ ds_1 … ds_n = ∫_{s_1 > s_2 > … > s_n} ds_1 … ds_n, where I picked one particular ordering. OK, so very good. Now let's compute this. We have 1 plus (1/N) Tr T^a T^b ∫_{s_1 > s_2} ds_1 ds_2 times some integrand, and this integrand, let me call it the combined propagator.
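As a quick sanity check of that ordering formula (my own numerical sketch, not part of the lecture): for a totally symmetric integrand, the integral over the full hypercube equals n! times the ordered integral, so the ordered region s_1 > s_2 > … > s_n occupies a 1/n! fraction of [0, T]^n. A Monte Carlo estimate with n = 3:

```python
# Monte Carlo check that the ordered region s1 > s2 > s3 fills 1/3! of the cube,
# which is what lets us trade (1/n!) P * (full integral) for the ordered integral.
import random, math

random.seed(0)
T, n, N = 2.0, 3, 200_000

hits = 0
for _ in range(N):
    s = [random.uniform(0, T) for _ in range(n)]
    if s[0] > s[1] > s[2]:          # the ordered region
        hits += 1

frac = hits / N                      # estimate of (ordered volume) / T^n
print(frac, 1 / math.factorial(n))   # both close to 1/6
```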
When I draw Feynman diagrams I'm going to denote it by a wavy line. And what is it? It is this combination of the gauge-field and scalar propagators, and I hope the notation is obvious: x_1 means x^μ(s_1) and x_2 means x^μ(s_2). So I need to compute this. I can go to Feynman gauge, where these propagators become simple. The scalar propagator goes like δ^{ab}/(4π²(x − y)²), and the vector propagator is essentially the same up to the index structure, δ_{μν} δ^{ab} over (x − y)² — this is why this gauge is nice. Using these two propagators, the combined propagator gives me δ^{ab} g²_{YM}/(4π²) times (|ẋ_1||ẋ_2| − ẋ_1·ẋ_2)/(x_1 − x_2)². This is what appears at the first non-trivial order in the expansion. OK, so now let's evaluate this combined propagator for the two contours we had yesterday. For the infinite line, x^4 = s and everything else is 0, so you have essentially 1 − 1: the combined propagator is zero. You don't have a propagator at all in this example, so you can immediately see that the expectation value of W for the line is going to be 1. It's a trivial expectation value, not very interesting, and you can argue that this remains true at any order in perturbation theory, essentially because there is no propagator, so you can evaluate it trivially. Now let's look at the circle. For the unit circle, ẋ_1·ẋ_2 = x_1·x_2 = cos(s_1 − s_2), and since x_1² = x_2² = 1, the denominator is (x_1 − x_2)² = 2 − 2 cos(s_1 − s_2). [Yes? — Yeah, wait, I'm going to come to that.] So the functional dependence on the coordinates drops out completely: the combined propagator is a constant. It's amazing that this particular combination, which is due to supersymmetry, gives a constant.
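To see that the combined propagator really is constant on the circle, here is a small numerical sketch (my own check, with unit radius and all couplings stripped off, so the constant is 1/2):

```python
# For the circle x(s) = (cos s, sin s, 0, 0), the supersymmetric combination
# (|xdot1||xdot2| - xdot1.xdot2) / (x1 - x2)^2 evaluates to the constant 1/2.
import math

def combined(s1, s2):
    x1 = (math.cos(s1), math.sin(s1))
    x2 = (math.cos(s2), math.sin(s2))
    xd1 = (-math.sin(s1), math.cos(s1))
    xd2 = (-math.sin(s2), math.cos(s2))
    dot = xd1[0] * xd2[0] + xd1[1] * xd2[1]            # = cos(s1 - s2)
    dist2 = (x1[0] - x2[0])**2 + (x1[1] - x2[1])**2    # = 2 - 2 cos(s1 - s2)
    return (1.0 - dot) / dist2                         # |xdot| = 1 on the unit circle

vals = [combined(0.3, s2) for s2 in (1.0, 2.5, 4.0, 6.0)]
print(vals)  # every entry is 0.5 up to rounding
```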
So this particular structure of the propagator gives you a constant, and then W for the circle is no longer trivial. It is 1 plus (1/N) Tr T^a T^a times ∫_0^{2π} ds_1 ∫_0^{s_1} ds_2 times this constant, which is g²/(8π²). You can evaluate this, and you get 1 + λ/8 + …. So indeed it's non-trivial, although, as Jesse was alluding to, the line and the circle are related by an inversion, which is a conformal transformation of the theory: I take the line, invert it, and I get a circle. You would expect the two to be the same, but they are not, because of an anomaly in the boundary conditions at infinity — there is an interpretation of this fact in terms of anomalies. [What's λ here?] Sorry, I should have said: λ = g²_{YM} N, also called the 't Hooft coupling. In N = 4 super Yang-Mills we start from a theory with parameters g²_{YM} and the rank of the gauge group, and we prefer to express things in terms of λ and N — you change the parameterization because this is the relevant combination in the 't Hooft limit, and in holography it is the useful parameterization. Good, so that was this very simple computation at second order. Let's go to higher orders — we have a constant propagator, after all, so we should try to compute as much as possible. So, higher loops. At the next order, order λ², we have different kinds of diagrams. Let me draw them like this: the Wilson-loop circle I denote by a circle, with propagators connecting points on it. At this order we have, for example, these two non-crossing propagators; there would also be something with a crossing, but I exclude the crossing by taking the large-N limit.
So I'm taking the limit in which only planar diagrams contribute, so I don't consider crossings. One single propagator is order λ; two propagators are order λ². But there is another possibility at this order, because we can have internal vertices — the self-energy correction to a propagator, or a three-point vertex in the middle — which are also order λ². First of all, let me note that in our case everything is UV finite: the ladder propagator is constant, which is even better than finite, so we don't worry about divergences. In a purely gauge Wilson loop you would have a linear divergence, but that linear divergence gets canceled by the scalar contribution. The diagrams with vertices, on the other hand, do have divergent pieces and you need to regularize them. You can do that in what is called the dimensional reduction scheme, which is like dimensional regularization but preserving supersymmetry. So you regularize, and then you find that at d = 4, when the spacetime dimension equals 4, the two classes of diagrams cancel against each other. This computation was done in hep-th/0003055, a famous paper by Erickson, Semenoff, and Zarembo, where they computed the expectation value of this Wilson loop. This was pre-localization, of course, and it is where the conjecture about these Wilson loops originates — namely, that a very simple matrix model computes them. In that paper they reasoned: we have this observation at order λ²; let's assume the cancellation of diagrams with internal loops or vertices continues at every order in perturbation theory. So let's assume that, for some reason mysterious at the time, only ladders contribute, and in the large-N limit only planar ladders contribute.
OK, so that's the conjecture of that paper: diagrams with vertices keep canceling. Let's see what happens. Then we only have constant propagators forming planar ladders, and we can really compute everything. We have that W is 1 plus the one-ladder diagram plus the two-ladder diagrams — order λ and order λ² — and at order λ³ we have the planar ladders again, with topologically different ways of arranging three non-crossing propagators, and so on. So at order n, what do we have? We have (g²_{YM}/(4π²))^n — the factor coming from the combined propagator, if you remember; then (1/2)^n, the constant piece; then (2π)^{2n}/(2n)!, which is just the measure of the ordered integral at order n; and (N/2)^n, from the repeated application of the normalization of the gauge-group generators. In total this gives λ^n/(4^n (2n)!). So now the expectation value of the circle is the sum from n = 0 to infinity of this factor, λ^n/(4^n (2n)!), times the number of diagrams appearing at that order. Let me call that number c_n. So it's a combinatorial problem now. With zero ladders there is one diagram, with one ladder one diagram, with two ladders there are two, with three ladders five, and the number keeps growing. We need to compute how it grows, so we need to understand what c_n is.
Then we can resum the series and get an exact result, assuming again that the vertex diagrams keep canceling at every order — a conjecture that was essentially proven by localization a few years later, in 2007. Good, so now let's count. c_n is the number of diagrams with n non-crossing ladders. You can easily set up a recursion relation for it, so let me do that pictorially. Take a generic ladder diagram with n + 1 ladders. You can always write it as one outermost ladder together with a blob containing k ladders and a blob containing n − k ladders, so the remaining total is n. The easiest way to see that this is true is to write down a few low-n cases and check. Writing the pictorial equation in formulas: c_{n+1} = sum over k from 0 to n of c_k c_{n−k}, and I need the initial step of the recursion, c_0 = 1 — with zero ladders there is exactly one contribution. Now you introduce a generating function to solve this recursion: define f(z) = sum over n from 0 to infinity of c_n z^n. You can check the recursion becomes f(z)² = (f(z) − 1)/z, with f(0) = 1. This is a quadratic equation and you can solve it: f(z) = (1 − sqrt(1 − 4z))/(2z). Of course I would have a plus and a minus branch, and I fix the minus sign so that at z = 0 the result is finite — in fact it has to be 1 — so you need to select the minus sign. Now you can expand this as an infinite series, and the coefficient of z^n is (2n)!/((n+1)! n!) — the Catalan numbers.
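The recursion and the closed form can be checked against each other in a few lines (a sketch of my own; `ladders` is just a name I chose for the recursion):

```python
# The ladder-counting recursion c_{n+1} = sum_k c_k c_{n-k}, c_0 = 1,
# reproduces the Catalan numbers (2n)!/(n!(n+1)!) read off from the
# generating function f(z) = (1 - sqrt(1 - 4z))/(2z).
import math

def ladders(nmax):
    c = [1]                                     # c_0 = 1
    for n in range(nmax):
        c.append(sum(c[k] * c[n - k] for k in range(n + 1)))
    return c

closed = [math.factorial(2 * n) // (math.factorial(n) * math.factorial(n + 1))
          for n in range(11)]
print(ladders(10))
print(closed)   # identical lists: 1, 1, 2, 5, 14, 42, ...
```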
So you immediately identify c_n with that coefficient, and the sum becomes sum over n of (2n)!/((n+1)! n!) times λ^n/(4^n (2n)!), which is sum over n of λ^n/(4^n n! (n+1)!). You plug it into Mathematica and you get (2/sqrt(λ)) I_1(sqrt(λ)) — a modified Bessel function. So now let's take stock: what did we get? A result that is valid at N = ∞, because we were only considering non-crossing ladders, but valid for any value of λ. So this is exact in λ. This was the conjecture of that paper, that the expectation value would be exactly this. And in fact we are going to see on Thursday that there is an immediate check of this result, not using localization but using holography — I don't want to anticipate too much, because I would have to explain how the check works, but we will see on Thursday that you can check the strong-coupling limit of this function using holography, and that is, of course, a piece of evidence that the conjecture is reasonable. Any questions so far? So that was pure perturbation theory and combinatorics. Now let's redo this computation using localization, and this is where I connect with today's lecture by Francesco. So, 1.5: the 1/2-BPS circle from a matrix model. There is already some intuition we can build out of the perturbative computation. First, we had an expansion in diagrams whose propagators were constant, so we suspect that the relevant quantity is going to be some constant field, some zero mode. Second, there were effectively no vertices, because they cancel, so it is probably going to be a Gaussian action.
So this would be a matrix model with a Gaussian action — a Gaussian matrix model. We suspect this, and it was conjectured already in that paper, but in order to see it rigorously we have to really use the localization machinery. [Returning to the earlier question:] yes, the inversion maps the point at infinity to a finite point, and this introduces an anomaly, like a conformal anomaly, which Drukker and Gross elucidated, and which accounts for the discrepancy between the line and the circle. OK, so what are we going to do? We put N = 4 super Yang-Mills on S⁴. We saw this morning how this works: we have a compact manifold, and there is a fermionic symmetry. The Q used for localization is the same Q such that Q(W_circle) = 0 — W_circle is annihilated by the particular Q that I use to construct the localizing action Francesco was talking about. I take my circle to be the equator of the sphere: the 1/2-BPS circle is the equator. If you remember, Q was a fermionic symmetry squaring to a bosonic symmetry, and that bosonic symmetry is precisely the isometry along the circle: Q² is essentially d/dφ, the isometry along this equator. OK, then what do we do? We saw the computation of dZ/dt: you start with an action, deform it, and you see that Z does not depend on the deformation parameter. As Francesco was saying, the same argument goes through if instead of Z you compute the expectation value of W, as long as W is also annihilated by Q. So I can do the same thing: take t to infinity, this very violent deformation, and essentially the path integral localizes.
Now, Francesco didn't do it for N = 4 super Yang-Mills this morning, but the result of localization is that you have to solve the equation QV = 0 — you solve for the localization locus of the path integral. If you do it for N = 4 super Yang-Mills, you find that QV = 0 is a sum of squares, so each square has to vanish separately, and the only thing that survives is a zero mode of the scalar appearing in the circular Wilson loop — if you remember, there was a scalar appearing there. Let me call this zero mode M, big M, like "matrix". [Question] Yes — it turns out that the BRST-like operator you use for localization is the same one that kills the Wilson loop, so the Wilson loop is invariant under it, and that is why you can redo the same analysis. OK, so let me tell you what happens. The partition function becomes Z = ∫ dM e^{−(8π²N/λ) Tr M²}. Let me tell you where this comes from: the original action has no mass term for the scalars, but when you put the theory on the sphere, on S⁴, you get a coupling of the scalars to the curvature of the sphere, which in the appropriate units is given by exactly this combination, and it is necessary to preserve supersymmetry after putting the theory on the sphere. So this is the partition function Z. Similarly, we now have to understand what insertion to put into this partition function in order to compute the Wilson loop. Remember, the Wilson loop was (1/dim R) Tr_R P exp ∮ (i A_μ ẋ^μ + |ẋ| Φ_1), with, say, the scalar Φ_1. On the localization locus the gauge field is 0, so that term drops; the scalar becomes the zero mode M, which is constant, so the integral over the loop just gives 2π.
So what I need to insert is (1/dim R) Tr_R e^{2πM} — no path ordering anymore, because the argument is constant. That's my insertion. An important comment: this morning we had a superdeterminant coming from the fluctuations around the localization locus — Francesco called the fluctuation φ-hat — giving rise to a one-loop determinant. For N = 4 super Yang-Mills, because of the large amount of supersymmetry, there is an exact cancellation between the numerator and the denominator of the superdeterminant, so it equals 1. This is why there is nothing extra here — I can write "times 1" — and the only thing that survives is the classical action. This is not always the case: for other theories, like N = 2 or N = 2*, or ABJM theory in three dimensions, this factor is not 1 and gives a contribution. OK. I want to emphasize that this analysis does not rely on picking a particular representation — it is valid for any representation. Right now let's focus on the fundamental, because we want to reproduce the result we obtained in perturbation theory for the fundamental, but you could take other representations. In fact, in exercise 4 you consider the rank-k symmetric and rank-k antisymmetric representations; essentially it's a guided exercise on how to do that. [Yes?] What does such an insertion give you? In the language we are going to use, of forces between eigenvalues, it gives extra force terms between the eigenvalues: you take whatever you get here, write it as the exponential of a log, and it acts like an additional potential for the eigenvalues. [Yes?] Yeah, so you start with this.
The scalar is in the same multiplet as the gauge field, so it is also in the adjoint: an N by N matrix, and it has eigenvalues. [Question: what about another representation?] If you want to take another representation, operatorially what you do is sum over the weights of the representation. This works for any gauge group — it doesn't need to be the SU(N) we are focusing on: Tr_R e^{2πM} is the sum over the weights w of the representation, with their degeneracies, of e^{2π w(M)}. There is a character formula that you apply, and it tells you what insertion to make. In exercise 4 we essentially insert a generating function for the symmetric and antisymmetric representations, and I tell you what the insertion is. OK, so the very nice thing about this example, which is Gaussian, is that you have a Gaussian measure and you can apply the method of orthogonal polynomials and solve these things exactly. And now this is mathematically precise: we know by localization that this insertion in this Gaussian matrix model is the right computation to do. We don't need to assume any cancellation, because this is what comes out of localization. And you can compute it exactly in N and λ — an exact result. I guide you through this computation in exercise 3 of the problem set. So: finite N, any value of N, any value of λ. For the comparison with the perturbative expansion we need large N, but you don't need to take that limit; you can be more general and keep N finite.
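Here is a small numerical illustration of that weight-sum rule (my own toy example, with made-up eigenvalues): for the rank-k antisymmetric representation, the trace is the elementary symmetric polynomial e_k of the variables e^{2πm_i}, and the generating function ∏_i (1 + t e^{2πm_i}) packages all k at once, which is the trick used in the exercise:

```python
# Weight sum for the rank-k antisymmetric representation: brute-force sum over
# k-subsets of eigenvalues vs. coefficients of the generating function
# prod_i (1 + t * exp(2 pi m_i)).
import math
from itertools import combinations

m = [0.3, -0.1, 0.25, -0.45]                 # toy eigenvalues of M
x = [math.exp(2 * math.pi * mi) for mi in m]

def trace_antisym(k):
    # sum over weights: all k-element subsets of eigenvalues
    return sum(math.prod(x[i] for i in s) for s in combinations(range(len(x)), k))

# same numbers as polynomial coefficients in t of prod_i (1 + t x_i)
coeffs = [1.0]
for xi in x:
    coeffs = [a + xi * b for a, b in zip(coeffs + [0.0], [0.0] + coeffs)]

brute = [trace_antisym(k) for k in range(len(x) + 1)]
print(brute)
print(coeffs)  # the two lists agree term by term
```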
The finite-N result is an associated Laguerre polynomial, and it's actually quite easy to obtain: just some algebraic manipulation with determinants, using orthogonal polynomials — Hermite polynomials, in this case, because the Hermite polynomials are orthogonal with respect to the Gaussian measure. So the integrals are very simple. But here I don't want to do that; I want to use another method, because it is the method you can then use in exercise 4 for the other representations: the saddle-point method, which is valid when N goes to infinity. Essentially I want to reproduce the Bessel function that we obtained in perturbation theory. Of course, orthogonal polynomials are a much nicer thing — they give you more, something exact in both N and λ — but the price you pay is that you need a very particular measure: you must have a set of polynomials orthogonal with respect to the measure in your matrix integral. With the saddle point you don't need orthogonal polynomials, but the compromise is that you have to take N to infinity. OK, so let's use the saddle-point method, and I'm going to go through the details. Step one: remember that we want to compute ∫ dM e^{−(8π²N/λ) Tr M²} Tr e^{2πM} — from now on in the fundamental representation, so I'm not writing the box. To compute this, we need to diagonalize M, and diagonalizing M works like a Faddeev-Popov procedure; you can find the details on pages 12-13 of the review paper by Marcos Mariño. It is quite standard. Before computing the expectation value of the Wilson loop, let's just deal with the action. So M is a generic N by N matrix, and I put it into diagonal form.
We get the product over eigenvalues ∏_i dm_i, then the Gaussian action e^{−(8π²N/λ) Σ_i m_i²}, and then the Faddeev-Popov determinant, which is ∏_{i<j} (m_i − m_j)², the Vandermonde determinant. In those two pages of the review you can see how to compute this determinant, but it is a standard Faddeev-Popov computation in quantum field theory. OK, now let me rewrite this as ∏_i dm_i e^{−N² S_eff(m_i)}, with the effective action S_eff = (8π²/(λN)) Σ_i m_i² − (2/N²) Σ_{i<j} log|m_i − m_j| — I'm just writing the Vandermonde as the exponential of a log, and then I get the sum over i < j. You can think of this as an effective action for N particles on a line at positions m_i: there is a common quadratic potential well, attractive for all of them, and the minus log gives a repulsive pairwise potential. [Question: in diagonalizing you go from N² variables down to N eigenvalues plus angular variables you integrate out. The loop obviously depends only on the eigenvalues — is there a situation where the insertion depends on the variables you integrated out?] I don't know of an example. I think, coming from gauge theory, the insertions you get are always of this eigenvalue type, very much like the loop.
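A quick numerical sketch of the Vandermonde statement (with made-up eigenvalues): ∏_{i<j}(m_i − m_j)² is the square of the determinant of the Vandermonde matrix V with entries V_{ij} = m_i^j:

```python
# Check prod_{i<j} (m_i - m_j)^2 == det(V)^2 for the Vandermonde matrix V_ij = m_i^j.
from itertools import combinations

def det(a):
    # Laplace expansion along the first row (fine for small matrices)
    n = len(a)
    if n == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det([row[:j] + row[j + 1:] for row in a[1:]])
               for j in range(n))

m = [0.7, -0.2, 1.3, 0.05]
V = [[mi ** j for j in range(len(m))] for mi in m]

vandermonde = 1.0
for i, j in combinations(range(len(m)), 2):
    vandermonde *= (m[i] - m[j]) ** 2

print(det(V) ** 2, vandermonde)   # equal up to float rounding
```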
Maybe these matrix models appear in other contexts too; I don't know, but I think this is what you always get. Anyway: this is an effective action with a balance between an attractive common potential and a repulsive pairwise potential, so there is going to be some equilibrium configuration. Also note the N counting: a single sum over eigenvalues is order N, which cancels the 1/N in the first term; the double sum is order N², which cancels the 1/N² in the second term. So in the N counting the effective action is order 1, with an overall N² in front. N² works like 1/ℏ: when N goes to infinity, the saddle point is what contributes to the integral. So we take N to infinity, and the saddle point is the relevant configuration we want. Good, so let's find the equilibrium configuration. [Question about the radius] There is no dependence — you can set it to 1 without loss of generality; it doesn't matter because the theory is conformal, whether on R⁴ or on S⁴. OK, so now let's get the saddle point, by setting the variation to zero: (16π²N/λ) m_i = 2 Σ_{j≠i} 1/(m_i − m_j). The index i goes from 1 to N, so you get N equations. I can rewrite this in terms of an eigenvalue distribution, ρ(m) = (1/N) Σ_{i=1}^{N} δ(m − m_i). When I take the continuum limit — the N = ∞ limit is like a continuum limit — a normalized sum (1/N) Σ_i f(m_i) becomes the integral ∫ f(m) ρ(m) dm over some interval: I assume the eigenvalues are contained in a single interval. This is called the one-cut solution.
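These N coupled saddle-point equations can also be solved numerically. Here is a toy sketch (my own; the solver, step size, and the values of N and λ are choices of mine, not from the lecture) that relaxes the eigenvalues by gradient descent on the effective action:

```python
# Relax the discrete saddle-point equations
#   (16 pi^2 N / lambda) m_i = 2 sum_{j != i} 1/(m_i - m_j)
# by gradient descent, starting from a symmetric spread-out configuration.
import math

N, lam = 12, 4.0
g = 16 * math.pi**2 * N / lam                  # coefficient of the attractive force
m = [-0.5 + i / (N - 1) for i in range(N)]     # symmetric initial guess on [-0.5, 0.5]

eta = 2e-4                                     # small step: the log repulsion is stiff
for _ in range(20_000):
    grad = [g * m[i] - 2 * sum(1 / (m[i] - m[j]) for j in range(N) if j != i)
            for i in range(N)]
    m = [mi - eta * gi for mi, gi in zip(m, grad)]

residual = max(abs(g * m[i] - 2 * sum(1 / (m[i] - m[j]) for j in range(N) if j != i))
               for i in range(N))
print(round(min(m), 3), round(max(m), 3), residual)
```

The eigenvalues settle into a symmetric band around the bottom of the well, exactly the attraction/repulsion balance described above.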
So ρ(m) is nonzero only for m in this interval, and it is normalized to 1: ∫ ρ(m) dm = 1. That's what happens in the continuum limit. So what is the saddle-point equation in the continuum? It becomes (8π²/λ) m = P ∫ ρ(m′)/(m − m′) dm′, where P is the principal value: the restriction j ≠ i in the sum means the integral must be taken as a principal value. So this is a singular integral equation, under the assumption that the eigenvalue distribution is supported on a single interval and vanishes outside it. It could be that in other cases you have two or three intervals — multi-cut solutions — but in our case the solution is going to be one-cut. So that's the saddle-point equation. Step three: solve it. One way is to introduce the so-called resolvent, which is the same integral but without the principal value: ω(m) = ∫ ρ(m′)/(m − m′) dm′. Now I can use the Sokhotski-Plemelj theorem: ∫ f(x)/(x ± iε) dx = P ∫ f(x)/x dx ∓ iπ f(0), with ε → 0 from above. So the principal value is the average of the two boundary values, and the saddle-point equation can be written as (8π²/λ) m = (1/2)[ω(m + iε) + ω(m − iε)] for m on the cut. Moreover, we have some extra information about ω. [Question about the one-cut assumption] In this case, I would say it is the natural thing to assume. I know of other cases in which it is natural for the eigenvalues to decompose into blocks, and then you have more than one cut.
Examples are certain higher representations of the Wilson loops, or correlators between Wilson loops and local operators — there you get multi-cut matrix models. But here, you either just know that one cut is the answer, or you try it and it works. You could start with the most general ansatz, and applying all this machinery you would discover that ρ is nonzero only on a single cut. [Yes?] Right — essentially all the eigenvalues would like to sit at the bottom of the well, but the pairwise repulsion spreads them out, and that gives one cut; it's in the potential. So, the properties of the resolvent: ω is analytic in the complex m plane except along the interval I, and as m goes to infinity it behaves like 1/m, due to the normalization of the eigenvalue distribution. You can then use Cauchy's theorem to derive another equation, the discontinuity equation across I, which tells you that ρ(m) = −(1/2πi)[ω(m + iε) − ω(m − iε)] — the same combination as before, but with the minus sign. So what's the strategy? We want to find the resolvent ω; once we have it, the discontinuity equation gives us the eigenvalue distribution. To find the resolvent, go back to the original, discrete saddle-point equation. There are various ways to proceed; the easiest, I think, is this: i is a free index, so there are N equations. Multiply the i-th equation by 1/(m − m_i) and sum over i.
So you multiply each equation like that, sum, and take the large-N limit — again using that theorem — and you find ω(m)² − (16π²/λ) m ω(m) + 16π²/λ = 0. So this is the equation for the resolvent: now it is algebraic, a quadratic equation for ω(m). I'm almost done. [Question] Yes, I neglected a term — there is a 1/N. Good, so this is a quadratic equation, and I can solve it: ω(m) = (8π²/λ)(m − sqrt(m² − λ/(4π²))). Again, I could have two signs, but I have to pick the minus to guarantee that ω goes like 1/m at infinity. OK, so from the saddle-point equation I got the resolvent, and if I plug the resolvent into the discontinuity equation, I find ρ. Let's see: the resolvent has two pieces — m, which is continuous across the cut, and the square root, which is not. So from the discontinuity equation, ρ(m) = (8π/λ) sqrt(λ/(4π²) − m²). This is the famous Wigner semicircle distribution: a semicircle between −sqrt(λ)/(2π) and +sqrt(λ)/(2π), and the eigenvalues are distributed in that interval. But this was just the partition function — we wanted to compute the Wilson loop. Now comes a key observation. In my notes, step 4 was using the discontinuity equation; step 5 is back to the Wilson loop. The crucial observation, which also applies to the problem-set exercise about the symmetric and antisymmetric representations, is that the insertion Tr e^{2πM} = Σ_i e^{2πm_i} is at most of order N.
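A numerical sketch checking the one-cut answer (my own cross-check): the claimed resolvent really is the Stieltjes transform of the Wigner density, and the density is normalized to 1:

```python
# For lambda = 4, check numerically (midpoint rule on the cut) that
#   omega(m) = (8 pi^2 / lambda)(m - sqrt(m^2 - lambda/(4 pi^2)))
# equals the integral of rho(m')/(m - m') for a point m outside the cut,
# with rho(m) = (8 pi / lambda) sqrt(lambda/(4 pi^2) - m^2), and that rho
# integrates to 1.
import math

lam = 4.0
a = math.sqrt(lam) / (2 * math.pi)            # edge of the cut

def rho(x):
    return (8 * math.pi / lam) * math.sqrt(max(lam / (4 * math.pi**2) - x * x, 0.0))

K = 200_000
h = 2 * a / K
xs = [-a + (k + 0.5) * h for k in range(K)]

norm = sum(rho(x) for x in xs) * h

m = 1.0                                       # a point outside [-a, a]
omega_num = sum(rho(x) / (m - x) for x in xs) * h
omega_exact = (8 * math.pi**2 / lam) * (m - math.sqrt(m * m - lam / (4 * math.pi**2)))

print(round(norm, 6), omega_num, omega_exact)
```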
The action that generated this distribution was order N², while the insertion is subleading, so it does not back-react on the saddle: we can take the Wigner semicircle distribution as a fixed background and simply evaluate the insertion in that background. There is no deformation of the eigenvalue distribution. This is also going to be true in the problem-set exercise, where you have symmetric and antisymmetric representations with a long Young tableau of order N boxes: order N is still less than order N², so you can still use the Wigner semicircle in most computations. But we are going to see that if you take a much bigger representation, with order N² boxes in the Young tableau, then of course it deforms the original eigenvalue distribution — and that is what corresponds to bubbling geometries on the holographic side; I'll mention that next time. Anyway, now we want to compute the Wilson loop, and the key observation, again, is that the insertion is subleading compared to the saddle point generated at order N², so we can use the Wigner semicircle law. So W_circle in the fundamental representation is the integral from −sqrt(λ)/(2π) to +sqrt(λ)/(2π) of ρ(m) e^{2πm} dm. You plug this into Mathematica and you get (2/sqrt(λ)) I_1(sqrt(λ)) — precisely what we got before; Mathematica does the integral for you. Let me now make two comments on this result. First, the sqrt(λ) is kind of artificial: if you expand this function, it is odd in its argument in just the right way, so the square roots of λ cancel throughout, and the expansion is 1 + λ/8 + λ²/192 + …, in integer powers of λ.
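And a final numerical cross-check (a sketch of my own, at λ = 4): the matrix-model integral of the Wigner density against e^{2πm} agrees with the resummed ladder series Σ_n λⁿ/(4ⁿ n!(n+1)!), which is (2/√λ) I₁(√λ):

```python
# Integrate the Wigner semicircle density against exp(2 pi m) by the midpoint
# rule and compare with the ladder series sum_n lambda^n / (4^n n! (n+1)!).
import math

lam = 4.0
a = math.sqrt(lam) / (2 * math.pi)

K = 100_000
h = 2 * a / K
W_integral = sum(
    (8 * math.pi / lam)
    * math.sqrt(max(lam / (4 * math.pi**2) - x * x, 0.0))
    * math.exp(2 * math.pi * x)
    for x in (-a + (k + 0.5) * h for k in range(K))
) * h

W_series = sum(lam**n / (4**n * math.factorial(n) * math.factorial(n + 1))
               for n in range(40))

print(W_integral, W_series)   # both are about 1.5906 for lambda = 4
```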
So there are no half-integer powers in the weak-coupling expansion. But if you take λ to infinity, this behaves as sqrt(2/π) λ^{−3/4} e^{sqrt(λ)}, so the dominant contribution at large λ goes like e^{sqrt(λ)}. Next time we are going to understand this behavior from holography, and I'm going to tell you how to do it. And next week, I guess, you're going to hear a few talks about this correction, the λ^{−3/4}. So thanks.