Let's start. Thank you very much for coming to the last lecture. It will be almost independent of everything we did so far: we are going to study large dimensions. So, lecture four: self-avoiding walk in dimension d larger than 4. I want to do two things. First, without justifying the assumption yet, I want to explain how from the bubble condition, which I will define in a minute, you can derive some information on the self-avoiding walk. Basically, what I will prove is that, on average, c_n is like mu_c^n: there is no correction factor. Then, in the second part, I will explain the lace expansion. I will not do everything, but I will try to give you an idea of the basic principle.

So, first section: applications of the bubble condition. Since we are going to look at the Green function a lot during this lecture, I am going to change the notation a little bit. I will call z_x the partition function of the self-avoiding walk, that is, the generating function; the parameter x will be an index. And I will define G_x(u) to be the Green function of the self-avoiding walk at the vertex u. So x is just a positive number, and G_x(u) is the sum over every self-avoiding walk gamma from 0 to u of x^{|gamma|}, where |gamma| is the length of the walk. Here u is a vertex in Z^d. I will try not to confuse x, which is a parameter, with u, which is a vertex; if I do, please tell me quickly. Very good.

So our goal is to derive information on the self-avoiding walk under the following assumption, the bubble condition. Remember that the generating function has a radius of convergence x_c, which is one over the connective constant. The bubble condition is the assumption that the sum over every u in Z^d of G_{x_c}(u)^2 is finite. So I sum the Green function of the self-avoiding walk over every vertex, but I square it first, and I assume the result is finite. Notice that the square is very important: without the square, the sum is just z(x_c), and we saw that z(x_c) is infinite. So you need to square it. This is what we call the bubble condition, and the quantity itself we will call B(x); the assumption is that B(x_c) is finite.

Just a remark. You could deduce, basically, that under this assumption G_{x_c}(u) should behave like the Green function of the simple random walk. So if the bubble condition is valid, then one expects (whether you can actually prove it is something else, but one definitely expects) that G_{x_c}(u) behaves like the Green function G_RW(u), which I define as follows: G_RW(u) is, by definition, the expectation, for a simple random walk (X_n) starting at 0, of the sum for n = 0 to infinity of the indicator that X_n = u. So it is the expected number of visits to u starting from 0. And the point is that this quantity behaves like a constant divided by |u|^{d-2}. Why do I tell you that? Because if the bubble condition is valid, then the Green function should behave like that, and plugging this back into the bubble diagram should give something finite. Now, the sum over u in Z^d of a constant over |u|^{2(d-2)} is finite if and only if d is strictly larger than 4. So what did I do in this remark? I basically stated a consistency condition: you cannot expect the bubble diagram to be finite below dimension 5. In dimension 4, there is something else happening.
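To keep the quantifiers straight, here are the bubble condition and the consistency check of the remark written out (my notation, matching the lecture):

```latex
B(x_c) \;=\; \sum_{u \in \mathbb{Z}^d} G_{x_c}(u)^2 \;<\; \infty,
\qquad\text{where}\quad
G_x(u) \;=\; \sum_{\substack{\gamma : 0 \to u \\ \gamma \text{ self-avoiding}}} x^{|\gamma|}.
% Consistency check: if G_{x_c}(u) \approx C |u|^{2-d} (random walk behaviour), then
\sum_{u \neq 0} \frac{C^2}{|u|^{2(d-2)}} < \infty
\;\Longleftrightarrow\; 2(d-2) > d
\;\Longleftrightarrow\; d > 4 .
```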
So this has the following important implication: the bubble condition is not expected to be true if d is smaller than or equal to 4. I don't say it cannot be true, because this equivalence is a little bit tricky, but definitely everybody expects that. So everything I will say assuming the bubble diagram is finite is really only relevant for d larger than 4. And notice, conversely, that if the Green function of the self-avoiding walk is smaller than constant over |u|^{d-2}, then you have the bubble condition for d larger than 4. The second part of the talk will actually be to prove that G_{x_c}(u) is smaller than constant over |u|^{d-2}, which gives the converse. But that is not the goal of the first part. The goal of the first part is really to give you one example of application; there are many of them.

So the application I have in mind is to tell you how z(x) explodes when x approaches the critical point. We already know that c_n is larger or equal to mu_c^n. That implies automatically that z(x) is larger or equal to 1/(1 - x/x_c), so it explodes at least like that. The next proposition says exactly the opposite. Proposition 4.1: if B(x_c) is finite, then there exists a constant C such that z(x) is smaller or equal to C/(1 - x/x_c) for any x < x_c. So it is exactly the converse.

And notice that this has an immediate corollary, which I am going to state like that: for any n larger or equal to 0, the sum of c_k mu_c^{-k} for k = 0 to n is smaller than constant times n; in other words, the average of the c_k mu_c^{-k} is smaller than a constant. How do you get this corollary? You just apply the proposition to x = x_c (1 - 1/n). Then z at this value of x, call it x_n, is smaller than constant times n. But for any k smaller or equal to n, (1 - 1/n)^k is bounded below by a constant, basically, right? So you get that the sum for k = 0 to n of c_k mu_c^{-k} is smaller than something like e times z(x_n), which is smaller than constant times n.

So at least on average, c_k is like constant times mu_c^k. It is only an average statement. If you would like the full statement, which you can definitely do once you know everything about the lace expansion, you need to prove something a little bit better. If you want an estimate on c_n itself, you want a Tauberian theorem: by a Tauberian theorem, one would get c_k smaller than constant times mu_c^k for every k, provided z(zeta) is smaller or equal to constant over |1 - zeta/x_c| for every complex zeta with |zeta| < x_c. You need the bound for complex values to apply the Tauberian theorem; here I am proving something for real numbers only. That is what is missing. And you can actually get it, you just have to work more, and it would go beyond what I can explain in two hours. But I think having the result for almost every n is already kind of satisfactory. At least it satisfies me, so that's something. OK, so that was the proof of the corollary.
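Spelled out, the substitution in the corollary reads (with the harmless constant 4 in place of the "something like e" of the lecture):

```latex
\sum_{k=0}^{n} c_k\,\mu_c^{-k}\Big(1-\tfrac1n\Big)^{k}
\;\le\; z(x_n) \;\le\; \frac{C}{1 - x_n/x_c} \;=\; C\,n,
\qquad x_n := x_c\Big(1-\tfrac1n\Big),
% and since (1 - 1/n)^{-k} <= (1 - 1/n)^{-n} <= 4 for k <= n and n >= 2,
\sum_{k=0}^{n} c_k\,\mu_c^{-k} \;\le\; 4\,C\,n .
```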
But let's now prove Proposition 4.1. Here we are going to do something quite classical in statistical physics: we are going to derive a differential inequality for z. So we see z as a function of x, and we wish to prove the following differential inequality: x z'(x) is larger or equal to z(x)^2 / B(x) minus z(x), for any x smaller than x_c. (The prime denotes the derivative of z(x) with respect to x.)

Maybe before I prove this differential inequality, let me explain how you derive the result from it. The only thing we are going to do is integrate it between x and x_c. Let me do it in more detail. Dividing by x z(x)^2, I can rewrite it as: z'(x)/z(x)^2 is larger or equal to 1/(x B(x)) minus 1/(x z(x)). And on the left I recognize the derivative of -1/z(x). Now, notice that B(x) is always smaller or equal to B(x_c), so I can replace B(x) by B(x_c); this is where the bubble condition enters, otherwise I might get infinity, but here I don't. Also notice that z(x) itself goes to infinity as x approaches x_c; that is, for instance, shown by the lower bound above. So I can fix x_0 such that z(x_0) is larger than, say, 2 B(x_c). Then for any x larger than x_0, the right-hand side is larger or equal to 1/(2 x B(x_c)), hence larger or equal to 1/(2 x_c B(x_c)). I am just saying: do not bother too much about the term 1/(x z(x)); since z(x) goes to infinity and the other term is basically a constant, I can absorb it.

Once you are there, you are in good shape, because you can just integrate between x and x_c. The right-hand side is a constant, so you end up with: 1/z(x) minus 1/z(x_c) is larger or equal to (x_c - x) over a constant. But 1/z(x_c) is just 0, since z(x_c) is infinite. And I end up with z(x) smaller or equal to constant divided by (x_c - x), which is exactly what I want. Formally, this is true only for x larger than x_0, but of course the only behavior we are interested in is near x_c, so this assumption is harmless.

So now we need to prove the differential inequality. How do we do that? Let's look at what x z'(x) is. z(x) is the sum over every self-avoiding walk gamma (ending anywhere) of x^{|gamma|}, so its derivative is the sum of |gamma| x^{|gamma| - 1}. I multiply by x and end up with the sum of |gamma| x^{|gamma|}, which I am going to split into two terms: the sum over every gamma of (|gamma| + 1) x^{|gamma|}, minus the sum over every gamma of x^{|gamma|}, which is just z(x). So my whole point is to prove that the first term is larger than z(x)^2 divided by B(x). Call it Q(x); we need to prove Q(x) larger or equal to z(x)^2 / B(x). So Q(x) is a sum over u of a sum over gamma going from 0 to u, and |gamma| + 1 is exactly the number of vertices of my walk.
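In symbols, the computation we just did:

```latex
x\,z'(x) \;=\; \sum_{\gamma} |\gamma|\, x^{|\gamma|}
\;=\; \underbrace{\sum_{\gamma} \big(|\gamma|+1\big)\, x^{|\gamma|}}_{=:\;Q(x)}
\;-\; \underbrace{\sum_{\gamma} x^{|\gamma|}}_{=\;z(x)} ,
% sums over self-avoiding walks gamma started at 0; |gamma|+1 = number of vertices.
% So the differential inequality reduces to the claim  Q(x) >= z(x)^2 / B(x).
```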
So instead of writing (|gamma| + 1) x^{|gamma|}, I am going to write Q(x) as the sum over u and over v of the sum over gamma from 0 to u of x^{|gamma|} times the indicator that v belongs to gamma. Right, it's the same thing: I am just counting each walk once for every vertex it visits. But now observe that the good thing with this writing is that it can be rewritten as a sum over two walks, gamma_1 going from 0 to v and gamma_2 going from v to u. So Q(x) is the sum over u and v of the sum over gamma_1 from 0 to v and gamma_2 from v to u of x^{|gamma_1| + |gamma_2|}; except these two walks need to satisfy something very special: they should intersect only at v, because together they must form a self-avoiding walk.

And here is the key of what we are going to do today; we are basically going to do this repeatedly. This is a difficult condition to handle: it says something between the first and the second walk, and it says something very non-trivial. What we are going to do is simply say: OK, this whole thing is equal to the thing without the condition, minus the thing where we force the two walks to intersect somewhere other than v. So we go by inclusion-exclusion. The point is that in high dimension, re-intersecting is something that costs, something we will be able to express. The term without the condition, if you think about it, is just z(x)^2: since there is no condition, you can sum out the second walk to get z(x), then sum out the first one to get z(x) again. And the error term, we are going to bound it using the bubble diagram. That is what is going to let us conclude.

OK, so let's start this program. First, Q(x) equals the sum over u and v, over gamma_1 from 0 to v and gamma_2 from v to u, of x^{|gamma_1| + |gamma_2|}, minus the same sum with the indicator that gamma_1 intersect gamma_2 contains a point different from v. In the first term, you sum out gamma_2 and, by translation invariance, get z(x); then you sum out gamma_1 and get z(x) again. So the first term is just z(x)^2, no difficulty there.

Now, the second term is definitely smaller than the following sum over u, v, and w in Z^d with w not equal to v. What should happen? If you have a walk from 0 to v and a walk from v to u, and the second walk does intersect the first one elsewhere, it means there is a w somewhere which is the last intersection between the two. The second walk may intersect several times, has its last intersection at w, and then must go to u without touching the first walk again. In particular, the beginning of the first walk must not intersect the end of the second walk, except at w. So I am going to say that this is like having four walks gamma_1, gamma'_1, gamma_2, gamma'_2: the sum over gamma_1 from 0 to w and gamma'_1 from w to v (the first walk, cut at w), and over gamma_2 from v to w and gamma'_2 from w to u (the second walk, cut at its last intersection w), of x^{|gamma_1| + |gamma'_1| + |gamma_2| + |gamma'_2|}.
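Displayed as a single formula, the bound we just wrote (decomposition at the last intersection point) is:

```latex
\sum_{u,v}\ \sum_{\substack{\gamma_1 : 0 \to v,\ \gamma_2 : v \to u \\ \gamma_1 \cap \gamma_2 \supsetneq \{v\}}} x^{|\gamma_1|+|\gamma_2|}
\;\le\;
\sum_{u,v}\sum_{w \neq v}\ \sum_{\substack{\gamma_1 : 0 \to w,\ \gamma_1' : w \to v \\ \gamma_2 : v \to w,\ \gamma_2' : w \to u}}
x^{|\gamma_1|+|\gamma_1'|+|\gamma_2|+|\gamma_2'|}\ \mathbf{1}\big[\gamma_1 \cap \gamma_2' = \{w\}\big],
% keeping only the constraint between gamma_1 and gamma_2' (w = last intersection).
```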
I am going slowly because we are going to redo this kind of step several times; I am pretty sure that for people who are writing, it is not that slow. Here, among all the constraints I should keep: gamma'_2 should not intersect gamma'_1 or gamma_1 except at w; gamma_2 should not intersect gamma'_2 except at w; gamma_2 is allowed to intersect gamma'_1, but not gamma_1. It is a little bit complicated. Let's drop most of these conditions and just keep the indicator that gamma_1 and gamma'_2 have an intersection which is exactly {w}. I am doing an inequality anyway. And again, keep in mind: w is the last intersection; we are decomposing with respect to the last intersection. OK.

So how do we conclude? Notice that if I fix gamma_1 and gamma'_2, what do I have? gamma'_1 and gamma_2 have no intersection condition with them at all: they are just two walks between w and v; let's say one goes from w to v and the other one is the reversal of such a walk. So if I sum over gamma'_1 and gamma_2, I end up with the Green function squared of the self-avoiding walk at v - w: each of them contributes G_x(v - w). And I am left with x^{|gamma_1| + |gamma'_2|} times the indicator that gamma_1 intersect gamma'_2 equals {w}.

This was Q(x), by the way. So Q(x) is larger or equal to z(x)^2 minus the following. The sum over v of G_x(v - w)^2 is what? It is just B(x) - 1, actually, because v must be different from w, so you remove the contribution of the walks from 0 to 0, which is just 1. And then imagine I erase all reference to v: what remains, the sum over w and u, over gamma_1 from 0 to w and gamma'_2 from w to u with gamma_1 intersect gamma'_2 = {w}, is just Q(x) again: a self-avoiding walk from 0 to u with a marked vertex w. Hence Q(x) is larger or equal to z(x)^2 minus (B(x) - 1) Q(x), and therefore Q(x) is larger or equal to z(x)^2 divided by B(x). And that is the end of the proof.

OK, so it is an inclusion-exclusion principle. At this stage, this derivation of the differential inequality does not refer at all to the bubble condition: it involves B(x), but it is true for any x smaller than x_c, in any dimension. So there are really two things: deriving the differential inequality for z(x), and a second step, which uses the bubble condition when you integrate the differential inequality between x and x_c. That was the first part of the day. Now, of course, this is interesting only if you have a way to prove the bubble condition, and that is what we are going to do in the second part, which is going to be a bit more involved. Are there questions on this part? So again: use inclusion-exclusion, and then use the bubble condition when you integrate.

So, the second part, and the biggest part of the lecture: the infrared bound via the lace expansion. The goal, let me state it as a theorem, actually. Theorem 4.3, which is due to Hara and Slade; although, since I am going to prove it only in the weakly self-avoiding case, maybe let me not attach any names. For d large enough, there exists a constant C such that G_{x_c}(u) is smaller than C times the Green function of the simple random walk; actually, we are going to prove that it is smaller than twice the Green function of the simple random walk.
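In display form, the infrared bound we are heading for:

```latex
% Theorem (Hara--Slade type; proved below for the weakly self-avoiding walk):
G_{x_c}(u) \;\le\; 2\, G_{\mathrm{RW}}(u) \qquad \text{for every } u \in \mathbb{Z}^d .
% Recall G_RW(u) ~ C_d |u|^{2-d} for d >= 3, so for d > 4 this bound
% immediately implies the bubble condition B(x_c) < infinity.
```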
And already, I must say, we are not going to prove this theorem; we are going to prove it for a simpler model. But I want to do as much as possible for this model and then explain to you what needs to be adapted. OK. And you agree that if you have that, you have the bubble condition.

So the idea is the following: not to work directly with the Green function, but with its inverse for the convolution. Let me define the following. Definition: for f and g functions from Z^d to R in l^2, define f convoluted with g at u to be the sum over v in Z^d of f(u - v) g(v), and this for every u in Z^d. And the first observation is that G_RW has an inverse, and the inverse is just the Laplacian, denoted Delta_RW and defined by: Delta_RW(u) equals 1 if u = 0, minus 1/(2d) if u is a neighbor of 0, and 0 otherwise. By inverse, I mean the following: the neutral element for the convolution is the Dirac at 0, and G_RW convoluted with Delta_RW equals delta_0. Let me write what this says, because it is going to be useful for us anyway: it is equivalent to saying that G_RW(u) = delta_0(u) + 1/(2d) times the sum over v neighbors of 0 of G_RW(u - v). It is exactly the same statement: just pass the sum to the other side and you recognize the convolution of the Green function with the Laplacian.

So, question: can we get something similar to this equation for the Green function of the self-avoiding walk? By the way, in this section I will just drop the subscript x, because otherwise the notation gets way too heavy. So: a similar relation for G(u)?

Well, let's try. So G(u); now I am really working with the Green function of the self-avoiding walk. First, if u = 0, then G(u) = 1, since the only self-avoiding walk from 0 to 0 is the empty one. So at least we have the Dirac term, good for us. And if u is not 0, then there is a first step, right? So, with the sum over v neighboring 0, I need to do one step, which costs x, and then I need the sum over every self-avoiding gamma from v to u of x^{|gamma|}; but notice that I also need these walks not to pass again through 0, so I should put the indicator that gamma does not contain 0. I just decomposed on the first step of my walk, from 0 to some neighbor v, followed by a walk from v to u. If you could remove this indicator, you would end up exactly with the random walk equation, with x instead of 1/(2d).

So what are we going to do? We remove the indicator by saying: this is equal to the unconstrained term minus the case where the walk does go back through 0. So let's write G(u) = delta_0(u) + x times the sum over v neighboring 0 of G(u - v), minus x times the sum over v neighboring 0 of the sum over gamma from v to u with gamma containing 0 of x^{|gamma|}. I am correcting my term. This last piece, with the sign included, let's call it term (1). The goal is to understand what (1) looks like, because the other terms look great, as good as we could wish for, basically. So the game is this last term. And for (1), we are going to do exactly what we did before.
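Quick aside: the random walk identity above is easy to sanity-check numerically. Here is a toy sketch (my own, not from the lecture): it computes the Green function of the simple random walk killed outside a finite box in d = 3 by fixed-point iteration, then checks the convolution identity at a few points.

```python
# Toy check of G_RW * Delta_RW = delta_0 in d = 3, on a finite box with
# zero boundary values (so the identity holds up to iteration error only).
import itertools

d, R = 3, 6
sites = set(itertools.product(range(-R, R + 1), repeat=d))

def neighbors(u):
    for i in range(d):
        for s in (1, -1):
            yield u[:i] + (u[i] + s,) + u[i + 1:]

G = {u: 0.0 for u in sites}
for _ in range(300):  # iterate G <- delta_0 + P G until roughly converged
    G = {u: (1.0 if u == (0,) * d else 0.0)
            + sum(G.get(v, 0.0) for v in neighbors(u)) / (2 * d)
         for u in sites}

# (G * Delta_RW)(u) = G(u) - (1/2d) sum_{v ~ u} G(v): ~1 at 0, ~0 elsewhere.
for u in [(0, 0, 0), (1, 0, 0), (2, 1, 0)]:
    print(u, round(G[u] - sum(G.get(v, 0.0) for v in neighbors(u)) / (2 * d), 6))
```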
OK, back to term (1). A walk contributing to it starts at 0, makes one step, and comes back through 0 before ending at u. Gluing the first step to the piece that returns to 0, what you can say is that (1) equals minus the sum over one walk gamma_1 from 0 to 0 and one walk gamma_2 from 0 to u of x^{|gamma_1| + |gamma_2|}; except that these two walks, of course, should intersect only at 0. (Here gamma_1 is a loop of length at least 1, self-avoiding apart from the coincidence of its endpoints.) So now we really have something that looks like what we had before, and we are going to do exactly the same: remove this indicator function and add back the term where they intersect at some other point. So this is equal to minus the sum over gamma_1 from 0 to 0 and gamma_2 from 0 to u of x^{|gamma_1| + |gamma_2|}, plus the sum over gamma_1 from 0 to 0 and gamma_2 from 0 to u of x^{|gamma_1| + |gamma_2|} times the indicator that gamma_1 intersect gamma_2 contains something else than 0. Let me first deal with the first piece.

So this first piece: it is equal to what I will write, and you will understand why I want to write it like that, as minus the sum over v in Z^d of pi_1(v) G(u - v), where pi_1(v) is delta_0(v) times the sum over every loop gamma from 0 to 0 as above of x^{|gamma|}. OK, there is only one term in this sum over v, namely v = 0, but you will see that it is important for understanding the structure of these objects. And I just want to encode the intersection pattern of this walk. So this is one walk, and this walk has one property. Imagine I put the indices of the walk, from 0 to |gamma|, on a line. Which are the only two indices that have the right to give, and must give, the same value? The first one and the last one. So I am going to encode this by drawing an edge between 0 and |gamma|, denoting the fact that gamma(0) and gamma(|gamma|) must be equal. That is my first term.

Now the second piece, I call it (2), and I am going to try to treat it the same way. I am going to go in three steps, because there are three new behaviors, and I will try not to go too fast. So here: if gamma_1 and gamma_2 intersect somewhere other than 0, that means there is a first intersection point v. So I am going to decompose on the first intersection. I write (2) as the sum over gamma_1 from 0 to 0, over v belonging to gamma_1, and over the two pieces of the second walk, gamma_{2,1} from 0 to v and gamma_{2,2} from v to u, of x^{|gamma_1| + |gamma_{2,1}| + |gamma_{2,2}|}. And what is the indicator function? If v is my first intersection, I need gamma_{2,1} to intersect gamma_1 exactly at 0 and at v; they cannot intersect anywhere else. But of course, when I write only that, I am forgetting something. What am I forgetting? That gamma_{2,1} and gamma_{2,2} are the two halves coming from gamma_2, hence they also must not intersect each other, except at v.
Well, now you understand that I don't want to carry that extra condition around. I am going to remove it and subtract the term where the two halves do intersect. So I write (2) as the sum over gamma_1 from 0 to 0, gamma_{2,1} from 0 to v, gamma_{2,2} from v to u, of x^{|gamma_1| + |gamma_{2,1}| + |gamma_{2,2}|}, with the indicator that gamma_1 intersect gamma_{2,1} equals {0, v}, times (1 minus the indicator that gamma_{2,1} intersect gamma_{2,2} contains something else than v). Do you agree? The intersection there should be exactly {v}; I write the corresponding indicator as 1 minus the indicator that they do intersect elsewhere. OK. So there will be two terms here: the main one, and one that will be our term (3), and I am sorry for that, but we will go through one more step for it.

But before that, I want to understand the main term. Because I could go directly to the general lace expansion, but then, I think, something would remain a little bit mysterious. It works, but OK. So this main term, I am going to write as the sum over v of pi_2(v) G(u - v). Why? Notice that gamma_{2,2} is now a walk completely independent of everything that happened before: summing it out gives the factor G(u - v). And pi_2(v) is the sum over gamma_1 from 0 to 0 and gamma_2 from 0 to v (I rename gamma_{2,1} as gamma_2) of x^{|gamma_1| + |gamma_2|} times the indicator that gamma_1 intersect gamma_2 equals {0, v}.

Again, let me express this with the diagrammatic encoding. Lay out the indices of gamma_1, and then those of gamma_2 next to them. What do we need? gamma_1 goes from 0 to 0, so the value at its first index equals the value at its last index: that is one edge. Then the endpoint of gamma_2 is v, and there is an index k along gamma_1 with gamma_1(k) = v, so the last index of gamma_2 must be attached to that index k: that is a second edge. And these are the only two intersections allowed: exactly as before, any two other indices must carry different values.

Good. Let's do one last step, because something new happens when you look at pi_3, and I think after that you get the phenomenology and we can go to the general case. So let me not erase this. I never promised it would be nice, but it is a very powerful technique, and it applies to so many models that it is extremely useful, I think, to see it once in the case of the self-avoiding walk. So, term (3), which comes with a minus sign. Again, we decompose gamma_{2,2} at its first intersection with gamma_{2,1}, which is going to be w. So I end up with minus the sum over gamma_1 from 0 to 0, gamma_{2,1} from 0 to v, gamma_{2,2,1} from v to w, and gamma_{2,2,2} from w to u, of x to the sum of the four lengths. (I wrote indices 1, 2, 3, 4 on my board; I don't really know why I did that, I guess I just panicked.) The indicators are: gamma_1 intersect gamma_{2,1} equals {0, v}, and gamma_{2,1} intersect gamma_{2,2,1} equals {v, w}. And again, there should be an additional intersection condition, which would be expanded again into a main term and a further correction; I promise I am not going to do it.
But what I wanted to write is that the main term, the analogue of the previous ones, is minus the sum over w of pi_3(w) G(u - w). By the way, I made a big mistake there earlier: there is a minus sign in front of this term. And in pi_2(v) there is no delta_0: the sum there really runs over every vertex v of the lattice. So pi_3(w) is the sum over v, over gamma_1 from 0 to 0, gamma_2 from 0 to v, and gamma_3 from v to w, of x^{|gamma_1| + |gamma_2| + |gamma_3|}, with the indicators that gamma_1 intersect gamma_2 equals {0, v} and that gamma_2 intersect gamma_3 equals {v, w}.

Why did I do this step, which doesn't look much nicer than the previous one? Let's try to encode it again. Lay out gamma_1, then gamma_2, then gamma_3. gamma_1 goes from 0 to 0, so its two endpoints must carry equal values: first edge. Then the end of gamma_2 must be one of the points of gamma_1: if this index is k, with gamma_1(k) = v, you attach the last index of gamma_2 to k: second edge. And the endpoint w of gamma_3 is attached to the index l of gamma_2 such that gamma_2(l) = w: third edge. So OK, that looks very much like before. I did this step to make you realize one thing: here I have absolutely no constraint between gamma_1 and gamma_3. Any edge between an index of gamma_1 and an index of gamma_3 could be present, in the sense that you could have an intersection between any index here and any index there. This could be an intersection, this could be an intersection, et cetera. So when I do the systematic construction, which I am going to do now, you will see that we re-sum over these edges somehow. We will get back to that.

Whoa, one hour in, and we are at half of what I wrote, which is really amazing. And there is a minus which I again forgot; thank you very much, I am writing it here, you are entirely right. So we are going to take a five-minute break, and then I will explain how you do the generic construction; it is going to be based on the encodings I described in these first three examples. Then there will be the question of showing that these pi_1, pi_2, pi_3, and so on are not too big. In some sense, interpret the pi_k as long-range steps: if you had only the nearest-neighbor term, it would be a nearest-neighbor random walk; the pi_k are like long-range steps, except that they have one property which is extremely bad for an interpretation as a Markov process, namely that they may be negative. That is going to make it a little more difficult. But morally speaking, it is kind of long-range: you are rewriting the Green function of the self-avoiding walk as the Green function of a long-range model, with the very big caveat that the weights are not all positive. So we will have to resort to analysis, instead of the random walk interpretation, to estimate the asymptotics of these things. So: break, five minutes, and we restart at 40.

So, we want to do a more systematic decomposition.
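Before the systematic construction, let me collect what the first steps gave us, with the signs as corrected above (my condensed recap):

```latex
G(u) \;=\; \delta_0(u) \;+\; x \sum_{v \sim 0} G(u-v)
      \;-\; \sum_{v} \pi_1(v)\, G(u-v) \;+\; \sum_{v} \pi_2(v)\, G(u-v) \;-\; \dots
% with
\pi_1(v) \;=\; \delta_0(v) \sum_{\substack{\gamma : 0 \to 0 \\ |\gamma| \ge 1}} x^{|\gamma|},
\qquad
\pi_2(v) \;=\; \sum_{\substack{\gamma_1 : 0 \to 0 \\ \gamma_2 : 0 \to v}} x^{|\gamma_1| + |\gamma_2|}\,
\mathbf{1}\big[\gamma_1 \cap \gamma_2 = \{0, v\}\big].
% (Each walk is self-avoiding apart from the prescribed coincidences.)
```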
For this systematic decomposition, we are going to introduce what corresponds to these diagrams, and it is going to come from the following quantity. Define K(a,b), a quantity defined for integers a smaller or equal to b and depending on the walk gamma, by: K(a,b) is the product over every a smaller or equal to s, s strictly smaller than t, t smaller or equal to b (just think of a = 0 and b = the length of the walk) of (1 + U_st(gamma)), where U_st(gamma) is -1 if gamma(s) = gamma(t), and 0 otherwise. So imagine, for instance, you want to check whether a walk is self-avoiding or not: you apply K(0, |gamma|) to the walk, which at this stage is not assumed self-avoiding. And you agree that K(0, |gamma|)(gamma) is just 1 if gamma is self-avoiding and 0 otherwise. It is just a way of writing the repulsion condition.

OK, now observe that this quantity can also be rewritten, by expanding the product, as a sum over Gamma, which I will call a graph (I will tell you what a graph is in a second), of the product over st in Gamma of U_st. So let me fix the notation here: st is just a pair (s,t) where s is strictly smaller than t; it is a short notation for an edge. And Gamma is just a collection of such edges; that is what I call a graph. It is just a notation.

Now, my goal is to rewrite G(u) in terms of these quantities. G(u) I can write as a sum over every walk gamma from 0 to u; and now really do not think of gamma as a self-avoiding walk, it is an arbitrary nearest-neighbor walk, a priori. And the summand is x^{|gamma|} K(0, |gamma|)(gamma); the K factor encodes the repulsion, the fact that the walk must be self-avoiding. So you see, if you use the expansion of K, you can write G(u) as a sum over every walk of x^{|gamma|} times the sum over every possible graph Gamma of the product of the U_st for st in Gamma. The point is that this is a huge, huge sum, a very, very big sum. So our goal is to only partially resum it: not to decompose over all graphs Gamma, but to decompose over what we call laces. Instead of decomposing on any graph, I will decompose on every lace, and for each lace L, the sum over all graphs whose lace equals L will be hidden, resummed. In order to do that, I need to tell you what the lace of a graph is, and then you are going to see that everything follows almost directly. Let me erase maybe this part here.

So, think of this as a smart way of decomposing my Green function in a systematic way. First thing: the definition of a lace. Think of a lace as a kind of backbone of the connected component of my graph. So I am going to tell you what the connected components of a graph are, and then what the backbone of a component is. Imagine I give you a graph; let me try to draw a good one. This one is fairly well chosen: let's do something that looks like that, this like that, this like that, and here; let's do that and that. OK, let's say this is your graph. I want to define the connected components of this graph. When you draw the edges as arcs over the line of indices, the connected components are the maximal families of arcs that cross each other, really cross. So in this case, how many are there?
There are four of them, and there is only one subtlety: there is one here, one here, and here there are two of them. These two edges here are not considered to be in the same connected component: the connected components are made of edges that really cross each other; nested arcs do not count. OK?

And now, for a connected component, in fact for the connected component of the origin: let's call C(Gamma) the connected component of a. I am going to define the lace of C(Gamma) as follows; it is defined recursively. It is a set of edges, a kind of minimal set of edges already encoding a lot of intersections; think of it like that. The first edge: s_0 will be a (remember, we are looking at K(a,b), and a is the first vertex), and t_0, the other endpoint of the first edge, is the farthest point connected to a; that is, t_0 is the max of the t such that (a,t) belongs to Gamma. So if you have two such edges, it is this one. Then you define recursively: t_i is the max of the t for which there exists s smaller than t_{i-1} with (s,t) in Gamma. In words, it is the largest index connected by an edge to a strictly earlier index than the previous endpoint. And s_i is the min of the s for which (s, t_i) belongs to Gamma: if several edges reach t_i, you take the one starting at the smallest s. And the lace is just the collection of these edges. OK? Good.

So now what I can always write is that K(a,b) is the sum over all possible laces L of the sum over all graphs Gamma whose lace equals L; here, let us define the lace L(Gamma) of a graph as the lace of the connected component of a in Gamma. And for a fixed lace, what do I write? The product over st in L of U_st, times the product of (1 + U_st) over the st in what I will call the set of edges compatible with L. What is a compatible edge? It is an edge st such that, if you add it to your lace, you do not change the lace. So let me give an example: this edge, for instance, would not change the lace, or this one would not change the lace. Let me not give the other, more important example right now; I am not certain it is a good one today, I will come back to it. So: the compatible edges are the ones that do not change the lace.

So now let's just use the definitions of lace and compatible edges to decompose K(a,b). Notice that there are two possibilities: either the connected component of a is a singleton, or it is not. If it is a singleton, then, when you decompose K(a,b) using graphs, you can just see that those graphs contribute K(a+1, b); that corresponds to the case where the first index is connected to nobody. And otherwise, the component of a covers an interval from a up to some s, and the rest of the graph lives on [s, b]: you get the sum for s = a+2 to b of the sum over laces on [a,s], with their resummed weight, times K(s,b). And notice that the product over st in L of U_st is just (-1) to the number of edges in L.
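The recursive definition of the lace is easy to turn into code. A small sketch (my own, for intuition, not from the lecture): given a graph as a set of edges (s, t), extract the lace of the component of a.

```python
# Sketch of the recursive lace construction: gamma is a set of edges (s, t)
# with s < t, assumed to have a connected component covering a.
def lace(gamma, a):
    """Return the lace of the component of a, as a list of edges."""
    # first edge: s_0 = a, t_0 = farthest point connected to a by one edge
    t = max(tt for (ss, tt) in gamma if ss == a)
    L = [(a, t)]
    while True:
        # t_i = max endpoint of an edge starting strictly before t_{i-1}
        candidates = [(ss, tt) for (ss, tt) in gamma if ss < t and tt > t]
        if not candidates:
            return L
        t_new = max(tt for (ss, tt) in candidates)
        # s_i = min starting point among the edges reaching t_i
        s_new = min(ss for (ss, tt) in candidates if tt == t_new)
        L.append((s_new, t_new))
        t = t_new

# example on {0,...,6}: the nested edge (2, 3) is dropped from the lace
print(lace({(0, 2), (1, 5), (2, 3), (4, 6)}, 0))  # -> [(0, 2), (1, 5), (4, 6)]
```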
And I am going to write this lace part as the sum for capital N = 1 to infinity of (-1)^N times G_N(a,s), where G_N(a,s) is this whole sum restricted to laces with N edges: you decompose over every lace with N edges, and sum, over the compatible edges, the product of the (1 + U_st). OK?

Yes? [Question: what exactly is the nature of a lace; is it a set of edges?] It is a set of edges, yes. A lace L is a set of pairs st, and the point is that these st satisfy the interlocking relations we just described; basically, they need to look like the picture. And now, sorry, I wanted to give you other examples earlier and got confused: these here are compatible edges, and these here are actually non-compatible edges. Which edges are not compatible? Those that would change the lace if you added them. If you add this edge, you do not change the lace. Why? Because this t_1 is larger than this one, so t_1 would be this point anyway, whether you add the edge or not. But if you add this other edge, you do change the lace: t_1 would become this point here, right? So these two yellow edges are the non-compatible ones.

Now notice what this formula morally says. I am taking the product over the compatible edges of (1 + U_st). So I am saying: for any compatible edge st, I cannot have gamma(s) = gamma(t); it is impossible, it is forbidden.

Yes? [Question: but if st is in the graph Gamma, how can it fail to be compatible with the lace of Gamma?] You are entirely right, and what I wrote was not quite OK: the product should be over U_st for st in Gamma minus the lace, sorry, I got confused. And then you re-sum: for a fixed lace L, summing over all graphs Gamma with L(Gamma) = L, each such Gamma is the lace plus an arbitrary set of compatible edges, so the re-summation gives exactly the product over all compatible st of (1 + U_st). That is exactly the point: you do not want to carry the whole entropy of all graphs, so you only decompose with respect to laces. Is it clear now? Good. And this resummed quantity is the one I call G_N.

OK, very good. So now, written like that, if I plug this back into the Green function, what do I get? I get G(u) = delta_0(u) + x times the sum over v neighboring 0 of G(u - v), plus the sum over v of pi(v) G(u - v), where pi(v) is the sum of the pi_N(v). And notice: there is no pi_0; the sum starts at N = 1. (When you are at the board, it is a little too easy to be close to your mistakes.) And pi_N(v) is just what you end up with when you do the summation over laces with N edges inside the Green function.
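Cleanly displayed, this is the lace expansion:

```latex
G(u) \;=\; \delta_0(u) \;+\; x \sum_{v \sim 0} G(u-v) \;+\; \sum_{v \in \mathbb{Z}^d} \pi(v)\, G(u-v),
\qquad
\pi(v) \;=\; \sum_{N=1}^{\infty} \pi_N(v),
% where (as stated next)
\pi_N(v) \;=\; (-1)^N \sum_{\gamma : 0 \to v} x^{|\gamma|}\; G_N\big(0, |\gamma|\big)(\gamma).
```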
So pi_N(v) is (-1)^N times the sum over gamma from 0 to v of x^{|gamma|} times G_N(0, |gamma|)(gamma). What you do is just plug the identity for K into G. The K(a+1, b) part gives you exactly the x times the sum over neighbors of G(u - v); the K(s, b) parts give you exactly the Green function from v to u, that is, G(u - v); and the lace parts give you this pi(v).

And notice: what is pi_1(v)? What is the only lace with one edge that you can imagine? The lace necessarily goes from the start to the end of your walk, so gamma(0) = gamma(|gamma|): these are exactly the walks from 0 to 0, and it is exactly the pi_1(v) we were looking at. What are the laces with two edges? Exactly those double diagrams. With three, you end up with the pi_3 picture. So we recover the three terms we derived by hand. And more generally, you will always be able to decompose into N pieces like that; it is just that encoding which pairs st must have gamma(s) different from gamma(t) is best seen through the systematic construction, if you re-sum the right thing.

OK. So that suggests (actually it does more than suggest: it proves) that there is a natural inverse for G with respect to convolution. Define Delta_SAW(u), which is something that depends on x, by: Delta_SAW(u) equals 1 if u = 0; minus (x + pi(u)) if u is a neighbor of 0; and minus pi(u) if the distance from u to 0 is larger than 1. Right? So, if the pi(u) were all positive, this would really look like the generator of a long-range random walk. It is not the case, but now the question becomes: how close is this thing to the Laplacian? And in order to answer that, we need to understand how fast pi decays. Maybe let me put an x here, pi_x, to really highlight that this depends on x. So, question: how close is Delta_SAW to Delta_RW? And the first thing we can try to do is to bound pi_x(u), because you can always write the expansion, but it is kind of an empty statement if you are not able to estimate it. OK? So, what do I erase?

So how do you bound pi(u)? Well, the first thing you do: |pi(u)| is smaller than the sum for N = 1 to infinity of |pi_N(u)|. Up to now, no difficulty there. [Question: does the sum start at 0 or 1? What is pi_0?] There is no pi_0; the sum starts at N = 1. So the goal is to bound pi_N(u). What is pi_N(u)? It is a lace with N edges, like in the picture. And what I do is sum over the positions attached to the lace: v_0, which is equal to 0, must be the position of this attachment, v_1 of the next one, v_2, et cetera, up to v_{N-1} there. So I can say this is smaller than the sum over v_1, v_2, ..., v_{N-1}, with v_0 = 0, of the weights of walks joining these points. And now your walk has very complicated intersection conditions, because along all the compatible edges you should have no intersection. Yes? These conditions are complicated. But I am just going to keep some of them: definitely, all the pairs of indices within each of these sub-walks are compatible.
So I at least need each of these sub-walks to be self-avoiding, et cetera, et cetera. If I write it like that, keeping only those conditions, I get a product of Green functions read off the diagram: G(v_1 - v_0)^2, then G(v_2 - v_0), then G(v_1 - v_2), then G(v_3 - v_1), et cetera, et cetera, up to G(v_{N-1} - v_{N-2}), with the factor at the very end appearing squared. OK.

Well, assume for a moment that I know that G(u) is smaller than some constant divided by |u|^{d-2}; basically, that it is bounded by the Green function of the random walk. Then this is a big sum, but it is an explicit sum, a completely elementary computation that you can all do yourselves. And what you end up with, if again you assume this bound, is that the whole sum over v_1, ..., v_{N-1} is smaller than c_1 times c_0^N, divided by |u|^{3d-6}. And that is problematic: for no u does the sum over N converge, because this factor c_0^N blows up.

So here is where we stop working with the strictly self-avoiding walk. There is a story that allows you to prove that this sum is convergent for the self-avoiding walk and d very large, but it is too complicated a story for the twenty minutes that remain. So what I am going to do is just change the model. OK? This is not good enough for convergence, and therefore we switch to the Domb-Joyce model, that is, to the weakly self-avoiding walk. You remember, the weakly self-avoiding walk was the model where you do not completely forbid intersections, you just penalize them. OK. Here is how to say it in the present language. If I do not want to penalize at all, I take U_st = 0. If I want to forbid completely, I take U_st = -1 when there is an intersection. If I put minus beta instead, then I penalize an intersection without forbidding it. That is exactly the Domb-Joyce model. So now do exactly the same study, with this U^beta_st instead of U_st. What changes? The only place that changes is here: it is not (-1) to the number of edges of the lace, it is (-beta) to the number of edges. Right? All the rest is the same, so when you take absolute values, you get beta^N times the same sum. For beta small enough, this is convergent. So for beta very small, we get convergence of the pi's, and furthermore we actually get that |pi(u)| is smaller or equal to c_2 beta times 1/|u|^{3d-6}, or something of this sort. OK?

OK. So now, what is the end of the proof? Well, the end of the proof is actually not that simple, but I can try to give you an idea of how it goes. It is completely abstract; it does not rely on the self-avoiding walk anymore. It says, basically: an operator that looks like the Laplacian, up to a small perturbation, has an inverse (which is exactly G) that has to be close to the Green function of the random walk.
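Before the lemma, to make the Domb-Joyce weights concrete, here is a brute-force toy computation (my own sketch, not from the lecture) of the weakly self-avoiding two-point function in Z^2; beta = 1 recovers the strictly self-avoiding walk and beta = 0 the simple random walk.

```python
# G_x(u) = sum over nearest-neighbour walks gamma: 0 -> u of
#          x^{|gamma|} * prod_{s<t} (1 + U^beta_st(gamma)),
# with U^beta_st = -beta if gamma(s) == gamma(t), and 0 otherwise.
from itertools import product

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def two_point(u, x, beta, N):
    """Sum the Domb-Joyce weights of all walks 0 -> u of length <= N."""
    total = 1.0 if u == (0, 0) else 0.0  # the empty walk contributes at u = 0
    for n in range(1, N + 1):
        for steps in product(STEPS, repeat=n):
            path = [(0, 0)]
            for dx, dy in steps:
                path.append((path[-1][0] + dx, path[-1][1] + dy))
            if path[-1] != u:
                continue
            w = 1.0
            for s in range(n + 1):
                for t in range(s + 1, n + 1):
                    if path[s] == path[t]:
                        w *= 1.0 - beta  # penalize each self-intersection
            total += x ** n * w
    return total

print(two_point((1, 1), x=0.1, beta=0.05, N=6))  # weakly self-avoiding
print(two_point((1, 1), x=0.1, beta=1.0, N=6))   # strictly self-avoiding
```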
So let me state the lemma; I will only sketch its proof afterwards. This is Lemma 4.4 or 4.5, something like that; we don't care anymore. I almost got the numbering right until the end. The lemma is the following. Assume d is larger or equal to 3, so strictly larger than 2. And consider an operator Delta with the following properties. First, Delta is symmetric with respect to the symmetries of Z^d; I am going to tell you where that is involved. Second, assume that the sum over x of Delta(x) is positive. And the last one: assume that |Delta(x) - Delta_RW(x)| is smaller or equal to c beta divided by |x|^{d+4}. Assume these three things. Then G, defined as Delta^{-1}, the inverse of Delta for the convolution, satisfies: G(u) (I promised I would not mix up the variables again, and of course I did) is smaller than 2 G_RW(u), for beta small enough.

So this lemma, I am going to tell you a little bit at the end how one can prove it. Let me first tell you how you prove the theorem using it. Proof of Theorem 4.3, for the weakly self-avoiding walk, of course, since we switched to this model. And notice that it is going to work for any d larger or equal to 5, and beta small enough. So here there is a small subtlety: we did get the bound on pi, but only assuming we already had a bound. We proved that pi was small only because we assumed that the Green function of the self-avoiding walk was bounded by the Green function of the simple random walk. So it seems to go round in circles, because that is what we are trying to prove, and assuming it is not the best thing we could do. But here is the trick, and it is a beautiful trick: a bootstrap argument. It says, roughly: if you are small at some point, you will never manage to get big later.

So define the following. Define f(x) to be the sup over u of the Green function of the self-avoiding walk divided by the Green function of the simple random walk, G_x(u) / G_RW(u); this is for the weakly self-avoiding walk now, so I could put a beta in the notation, but let me ignore it. Look at this function, and notice, you will agree, that for x small, say x = 1/(2d), f(x) is smaller or equal to 2. Second point: if f(x) is smaller or equal to 3, then the Green function satisfies the bound we assumed in the diagrammatic computation, up to a factor, so we get |pi_x(u)| smaller or equal to c beta / |u|^{3d-6} for a certain c. Now observe that f is continuous, let's say on the interval from 1/(2d) to x_c; and x_c, for beta small, is of course close to 1/(2d).

And now observe the following. If f(x) is smaller or equal to 3, then the pi's satisfy this bound. But if the pi's satisfy this bound, then Delta_SAW at x satisfies the assumptions of the lemma. Why? Well, the decay assumption is going to be tautological from the bound on pi, simply because 3d - 6 is larger or equal to d + 4. Indeed, |pi(u)| is smaller than c_2 beta over |u|^{d+4}, since d larger or equal to 5 implies 3d - 6 larger or equal to d + 4. Second, Delta_SAW is clearly symmetric.
And the third point: the sum is positive because the sum over u of Delta_SAW(u) is just 1 over the sum over u of the Green function (convolute G with Delta_SAW and sum everything), and this is strictly positive since the sum over u of G_x(u) is finite for any x smaller than x_c. So you have the three assumptions of your lemma. Therefore, if you have the three assumptions of your lemma, what do you deduce? You deduce that for every u, G_x(u) is in fact smaller than 2 G_RW(u). So what did I just prove? I proved that if f(x) is smaller or equal to 3, then f(x) is smaller or equal to 2.

Yes? [Question: how do you get the last assumption, the bound on |Delta_SAW - Delta_RW|?] So, Delta_SAW minus Delta_RW; and you are right that one should put the x in the notation, let's put an x like that. Recall that Delta_SAW(u) was minus (x + pi(u)) for the neighbors of 0, and minus pi(u) farther away. pi(u) itself is bounded as we said; you just need to check the neighbors separately. But x_c, when beta is small, is basically close to 1/(2d), up to a factor of order beta. So for the neighbors you also get a bound by constant times beta. You are right: the key is really the pi bound, but indeed you also need |x_c - 1/(2d)| smaller than constant times beta, which you can check very easily.

OK, so you have the three assumptions of your lemma; therefore you just proved that f(x) is smaller or equal to 2. So you have a continuous function starting at a value smaller or equal to 2, which is forbidden to take values strictly between 2 and 3: whenever it is below 3, it is in fact below 2. Therefore it remains smaller or equal to 2 all the way, and you deduce the result, with the constant 2, at x_c. OK, so this is a bootstrap argument, which was introduced by Hara and Slade, two very big names here. The lace expansion itself was introduced by Brydges and Spencer for the weakly self-avoiding walk; then it was used repeatedly by many, many authors, including Hara and Slade, the two key players, who, besides the self-avoiding walk, also treated percolation, for instance.

I will just very briefly tell you how you prove this last deconvolution lemma: you have an estimate on Delta, and you want an estimate on the inverse of Delta. OK, that is the only thing I am basically not proving for the weakly self-avoiding walk, and it is not extremely complicated to do. I was not sure I could do it orally, and I don't think that would work very well, so let me write two or three things, just keywords, for the proof of the lemma; it will take five minutes, and like that you see that I almost did everything. Proof of the lemma. Basically, the idea is to define rho to be 1/(c beta) times (Delta_SAW minus Delta^mu_RW), where Delta^mu_RW is a massive random walk Laplacian. What is this thing? It is 1 if u equals 0, minus mu if u is a neighbor of 0, and 0 otherwise. The standard Laplacian is the case mu = 1/(2d); this is a massive Laplacian. So define this, where mu is chosen such that the sum over x of rho(x) is zero.
You need to check that you can always find such a mu; it is always possible (summing the definitions, mu = (1 - sum_u Delta_SAW(u)) / (2d), and since the sum of Delta_SAW is positive, this mu will always be smaller or equal to 1/(2d)). OK? So the main claim, and that is basically the only claim necessary, is the following. Define the norm of a function f to be the maximum of two quantities: the sum over u of |f(u)|, that is, the l^1 norm, and the sup over u of |u|^d |f(u)|. So it is a maximum of two norms: the l^1 norm, and the norm obtained by taking the sup of |u|^d times |f(u)|. It controls the growth of f(u) times |u|^d, and it controls the sum of the |f(u)|. And basically, the claim is that for any rho as above, the norm of the Green function G^mu_RW convoluted with rho is small. And the only key to this claim is a very precise, delicate asymptotic estimate of the massive Green function. Once you have that, it is actually very simple to get the claim. And that is exactly where the assumptions are used: you use this very delicate asymptotic expansion of the Green function, you combine it with a Taylor expansion, basically, and, for instance, the symmetry allows you to kill the odd derivatives; that is where the symmetry of Delta is used. So you get something like that; and this claim, which I will not prove, you can find in the paper by Bolthausen, van der Hofstad, and Kozma, which is a very nice account of a simple way of getting the lace expansion, which is what I presented. So I let you imagine the complicated ones.

OK, so once you have this claim, it tells me the following. Delta_SAW minus Delta^mu_RW is just c beta times rho, so when I convolute it with G^mu_RW, the claim gives me a bound c_1 beta on the norm. But notice what this says: G^mu_RW convoluted with Delta_SAW equals delta_0 plus something of small norm, because the convolution of G^mu_RW with Delta^mu_RW is delta_0 by definition. So now, I let you check easily that this norm gives a Banach algebra structure for the convolution. Therefore, G^mu_RW * Delta_SAW is very close to the unit delta_0, hence it is invertible in this Banach algebra; and in addition, its inverse is delta_0 plus a certain element E, where E satisfies a bound of the same type, with norm of order beta.

So what is G now? G, which by definition is the inverse of Delta_SAW, can be written as (G^mu_RW * Delta_SAW)^{-1} convoluted with G^mu_RW; if you do that, the two factors cancel each other, and the inverse is the inverse for the convolution. (Actually, since we are in the lemma, there is no self-avoiding walk anywhere here; it is just the Delta that I chose. But let me keep the notation.) And this inverse, I said, is delta_0 + E. So that gives me G = G^mu_RW + E * G^mu_RW. Well, once you are here, you are done, for the following reason. If you want to bound G(u), you say: the first term is G^mu_RW(u), and this is clearly smaller than G_RW(u), because mu is smaller or equal to 1/(2d). So I end up with that.
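The chain of identities in this last step, in one display (my condensed recap of what was just said):

```latex
G \;=\; \big(G^{\mu}_{\mathrm{RW}} * \Delta\big)^{-1} * G^{\mu}_{\mathrm{RW}}
  \;=\; (\delta_0 + E) * G^{\mu}_{\mathrm{RW}}
  \;=\; G^{\mu}_{\mathrm{RW}} \;+\; E * G^{\mu}_{\mathrm{RW}},
\qquad \|E\| \;\le\; C\beta .
% Hence G(u) <= G_RW(u) + |(E * G^mu_RW)(u)|, and it remains to bound
% the convolution term.
```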
And then I need to bound the second term, the sum over v of E(v) G^mu_RW(u - v). And here you can just check that the bound on the norm of E gives you two things: the l^1 norm of E is not big, and E(u) decays much faster than 1/|u|^d everywhere, like beta/|u|^d. When you plug this back into the convolution, you get that this term is smaller or equal to constant times beta over |u|^{d-2}. OK?

So the proof of the lemma: it would be long to write it completely on the board, but it is fairly natural. You define the right norm, and then you just check the claim, which really rests on delicate asymptotics of the massive Green function. Once you have that, it is almost tautological: a few very simple lines using the Banach algebra structure to define the inverse, check that the inverse is just a perturbation of the identity, and then write G as G^mu_RW plus a small correction. Notice that at no point do you bound G by G^mu_RW: you always bound by G_RW, for the following reason. G^mu_RW decays exponentially fast when mu is strictly smaller than 1/(2d), because it is a massive random walk. So the first term may decay exponentially fast, but the convolution term does not: it always decays like constant over |u|^{d-2}. So you never get a bound by the massive Green function, but you do get a bound by constant times beta over |u|^{d-2}, and this, for beta small enough, ends up being smaller than G_RW(u). That is the end of the proof. It is a complicated proof, but I hope I gave you some idea of how it goes: you have this lace expansion step, and then you have the more analytical part, a little bit messier but more generic; there is nothing really based on the model there. The tricky point is to prove the convergence of the lace expansion; this point is difficult. I did it for the weakly self-avoiding walk; for the true self-avoiding walk there is further work to implement, actually a lot of work, to get this done.

Well, thank you for following all these lectures, and thank you.