So let's start this third lecture. As I promised on Tuesday, I will first tell you a little bit about how Nienhuis predicted this value √(2+√2). Think of it as a fourth part: Nienhuis' prediction. Nienhuis' prediction is probably best cast in a more general framework, which is not just the self-avoiding walk. The idea is to generalize the model of the self-avoiding walk by allowing loops. So you have a path from a to z plus loops, always in a finite domain Ω, and the probability of a configuration ω is proportional to x to the number of edges in ω times n to the number of loops in ω; you renormalize to get a probability measure. So this is the number of edges, and this is the number of loops. n = 0 is exactly the self-avoiding walk model, and, for people who know, n = 1 is the high-temperature expansion of the Ising model, a very classical representation of the Ising model. In this framework, two things. First, before I talk about what Nienhuis predicted, let me make a second remark: there, you can also define the parafermionic observable. You can define F(z), which indeed depends on n, x and a as well as on z, as follows: you sum over every configuration ω of x to the number of edges of ω, times n to the number of loops, times e^{-iσ W_γ(a,z)}, where W_γ(a,z) is the winding of γ from a to z and γ is the self-avoiding walk in the loop configuration (the configuration is loops plus one walk, and γ is the walk from a to z in ω). Notice that I am not renormalizing; defined like that, the observable also depends on these two parameters, and on a as well. The reason I don't renormalize is that I don't want the normalization to depend on a and z. Some people would prefer to renormalize by putting a 1/Z here, where Z is the sum of x to the number of edges times n to the number of loops over configurations ω composed only of loops. In particular, for the self-avoiding walk model this Z is just 1, so there is no renormalization anyway. OK. So you can define the parafermionic observable like that, and the key point is that for the proper choice of σ and the proper choice of x, you get exactly the same thing as we proved for the self-avoiding walk: you get this discrete holomorphicity, which was, I believe, Lemma 2.2. For σ = σ(n), a well-chosen function of n (maybe I should just tell you what it is, or maybe not), and x = x_c(n) = 1/√(2+√(2-n)), you obtain this discrete holomorphicity: you get Lemma 2.2.
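To fix notation, here is one way to write what was just described (a reconstruction of the board, so take the precise conventions as indicative): a configuration ω consists of a walk γ from a to z plus loops in Ω, with weight
\[
x^{|\omega|}\, n^{\#\mathrm{loops}(\omega)},
\]
and the parafermionic observable is
\[
F(z) \;=\; F(a,z;n,x,\sigma) \;=\; \sum_{\omega:\, a \to z} x^{|\omega|}\, n^{\#\mathrm{loops}(\omega)}\, e^{-i\sigma W_\gamma(a,z)},
\]
where |ω| is the number of edges of ω and W_γ(a,z) is the winding of the walk γ from a to z. For n = 0 the loops carry weight 0 and this reduces to the observable of the previous lecture.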
And you can apply exactly the same proof as for Lemma 2.2. The only thing that changes is that in Lemma 2.2 we had walks going like that and walks going like that, which we were pairing together; now there is a third type of walk, walks where a loop passes right where you arrive. For the self-avoiding walk these had weight 0, because n is equal to 0. If you take them into account, you no longer want to take σ = 5/8; you want some other σ, and you can work out which one. Then you do the pairings between these walks, this one and this one, to find the right x for the pairing to cancel. So it's exactly the same proof, and you obtain discrete holomorphicity for this value of x. Why do I mention that? I mention it because for n = 1 this observable F is simply the order-disorder operator (between a and z), which is an extremely classical object in the study of the Ising model. If you've never heard about it, just look up the keyword; it is an extremely classical object in the Ising model, and let's say it's not so surprising to want to work with it. So for people who were surprised by the parafermionic observable: think of it as a natural generalization to n ≠ 1 of the order-disorder variable of the Ising model. OK? That's the end of my remark. Now, this was not the way Nienhuis predicted the value. The way Nienhuis predicted the value √(2+√2) was by trying to argue that this point has to be the critical point, but it didn't rely on the observable. It relied on something completely different, namely a mapping to another model. So let me briefly explain this mapping. First, imagine you have your loop configuration. You are going to map this model of loops to oriented loops. So now you say: instead of the configurations being loop configurations with weight x to the number of edges times n to the number of loops, you orient each loop in one of the two possible directions, clockwise or counterclockwise, and you say that the weight of a configuration (let's write ω⃗ now, to say that the loops are oriented) is x to the number of edges, times μ to the number of loops oriented counterclockwise in ω⃗, times μ̄ to the number of loops oriented clockwise in ω⃗, where μ + μ̄ = n. You see that at the level of the partition function, if I sum over every loop configuration x to the number of edges times n to the number of loops, I get exactly the same as summing over every oriented loop configuration x to the number of edges times μ to the number of counterclockwise loops times μ̄ to the number of clockwise loops, simply because every loop has two possible orientations: one contributes μ, the other contributes μ̄, and μ + μ̄ = n gives the same sum. So at the level of partition functions, the partition function of this equals the partition function of that. That's the first step in Nienhuis' mappings. Before the second step, a word of caution: as for the probabilities, there is something a little bit worrying here. It's not a coupling or anything like that; it's not even a probability measure on the oriented loop configurations, since μ is a complex number. Really think of it just as an identity between sums: the sum over every configuration of the weight equals the sum over every configuration of the weight.
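Written out, the first step of the mapping is this identity between sums (no probability measure involved, since μ may be complex):
\[
\sum_{\omega} x^{|\omega|}\, n^{\#\mathrm{loops}(\omega)}
\;=\;
\sum_{\vec\omega} x^{|\vec\omega|}\, \mu^{\#\{\text{counterclockwise loops of }\vec\omega\}}\, \bar\mu^{\#\{\text{clockwise loops of }\vec\omega\}},
\qquad \mu + \bar\mu = n,
\]
simply because each unoriented loop can be oriented in exactly two ways, one orientation contributing μ and the other μ̄.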
The next step is to keep going in this chain of mappings between models and to pass to what we call a six-vertex model. What you do is start by replacing every vertex by a triangle, like that, et cetera; imagine your original graph is like that. And now, starting from this, I'm going to put orientations on the edges of my graph, in the following way. If in my original graph an oriented loop was doing like that, well, now here it looks like that, and what I do when I see that is orient the two arrows in the same direction as this guy. So if this guy arrives like that, I put these two arrows in this direction, and here the two arrows in this one. I do that every single time an edge of a loop passes; I can always do it, and I can always do it in a consistent way. Now there are cases where there is nobody around; in this case, when you look at the corresponding triangle, you choose: either you orient it like that, or you orient it in the other direction, one of the two. So again, exactly as before where every loop configuration gave several oriented configurations, here every oriented configuration gives several of what we call six-vertex configurations. The good thing with this orientation of the edges is that you can check that at every vertex of this graph there are always two incoming arrows and two outgoing arrows. How do you check that? If you have a vertex like that, you can check that you have two exiting and two entering, like that; so that's fine. For any vertex like that, I have one entering and one exiting, and whatever I do on the other side, I will also have one entering and one exiting. So whatever orientation I choose, when I start from an oriented configuration and pass to this six-vertex configuration, the only local configurations I can see are one out of six. Let me be careful: you have this one and this one, you have this one and this one, and maybe let's use two different colors for the exiting and the entering ones (these are the exiting ones), and the last ones are entering-exiting and exiting-entering. Those are the only six possible configurations you will see at every vertex of your graph, because you must have as many entering as exiting arrows. Now, when you write things like that, there is another way to account for this μ to the number of counterclockwise loops and μ̄ to the number of clockwise loops. You can check that it equals the product of x e^{iα} to the number of left turns times x e^{-iα} to the number of right turns, where μ = e^{6iα}. So if I fix μ to be e^{6iα}, the whole thing is exactly x e^{iα} to the number of left turns times x e^{-iα} to the number of right turns. Why? Because any counterclockwise loop has exactly six more turns to the left than turns to the right, and every clockwise loop has six more turns to the right than turns to the left. So the whole weight can be rewritten like that.
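The identity behind replacing the global loop weights by local turn weights is the following (again a reconstruction of the board computation): on the hexagonal lattice every turn is by ±π/3, and a counterclockwise loop ℓ has exactly six more left turns than right turns, so with μ = e^{6iα},
\[
\big(x e^{i\alpha}\big)^{\#\mathrm{left\ turns}(\ell)} \big(x e^{-i\alpha}\big)^{\#\mathrm{right\ turns}(\ell)}
\;=\; x^{|\ell|}\, e^{i\alpha(\#\mathrm{left}-\#\mathrm{right})}
\;=\; x^{|\ell|}\,\mu,
\]
and similarly x^{|ℓ|} μ̄ for a clockwise loop. Multiplying over all loops recovers the oriented-loop weights, and regrouping the turn factors vertex by vertex produces the six-vertex weights.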
And it's a fairly simple exercise to use this observation to see that, when you do these mappings, you get the same sum by summing all loop configurations with this weight as by summing every six-vertex configuration with those weights: here you put 2x cos(3α), 2x cos(3α), 1, 1, and here e^{-2iα} and e^{2iα}. I hope I didn't mess up. So take this sum, use this observation, and transform it into that; it's a very cool small exercise to check that the two sums are equal, maybe up to a global factor, maybe cos(3α) to the number of vertices or something like that, I don't remember exactly. OK, so there is a mapping, at the level of partition functions, between this loop model and this six-vertex model. Why is that good news for Nienhuis? The good news is that the six-vertex model is in turn related to another classical model of statistical physics, the Potts model. So we have a connection between the loop model and the six-vertex model, and the six-vertex model itself is related to what we call the Potts model. The idea of Nienhuis was: OK, for the Potts model I know when I am critical or not; so I am going to look at the six-vertex model for which the associated Potts model is critical. And the point is that there is one value of these weights for which you recover a critical Potts model. In fact there is only one value where you recover a Potts model at all, and you are lucky enough that it is a critical one. This value is exactly x = 1/√(2+√(2-n)). The α here is an explicit function of n; you have no choice in it. And then there is one value of x for which the six-vertex model is also associated to a critical Potts model. So x_c(n) is the only value for which the six-vertex model above is related to a Potts model, which in addition is critical. That looks like a pretty convincing argument, in my opinion, because the Potts models are very well understood now; we really understand them quite well. The only problem, which prevents me from even saying that you can probably turn this into a rigorous proof, is that the Potts model you end up with here is very, very weird. The coupling constants are minus infinity, so it's kind of a completely antiferromagnetic model, and in addition the number of colors q, depending on the values, can even be not really nice, negative say. So that's probably what blocks things: at a formal level you do map to a Potts model which is critical, but at the probabilistic level this very antiferromagnetic model with negative weights and so on looks extremely difficult to study, and proving that you are critical seems quite difficult. That's where I think trying to turn Nienhuis' prediction into a rigorous proof would be hard. Also, notice that it's not because you relate the partition function of one model to that of another that the critical behavior of one should translate into the critical behavior of the other; you need to prove something, and there is something not so easy to prove. So even there, it would probably not be extremely simple. But at least you get a glimpse (try this exercise, it's a cool exercise to see why you get the same sum), and you have an idea of the techniques that Nienhuis used, which are really orthogonal to ours, basically. OK, so that was for lecture two. Let's now start lecture three, which is going to be about the geometry of the self-avoiding walk.
So what do we know about the geometry of the self-avoiding walk in low dimension? You are going to see that the answer is: well, not much. This is going to be a lecture with more open problems than theorems, basically. OK, so the first section is about the local geometry of the self-avoiding walk and what we call Kesten's pattern theorem. The goal is to describe what happens locally along the self-avoiding walk, and if I want to tell you about Kesten's pattern theorem, maybe I should first tell you what a pattern is. A pattern, let's call it P, is a self-avoiding walk in a box Λ_r (so r is, let's call it, r(P)) such that 0 belongs to P and the walk runs from the boundary of Λ_r to the boundary of Λ_r. So what? It's a lot of words to just say that you take a box of size r, and a pattern is a walk from boundary to boundary passing through 0. The goal is going to be to say: if I give you a pattern, how many times does it typically appear in a walk? So let's define Occ_P(γ) to be the set of indices j such that the translate of γ by -γ_j, let's call it γ', satisfies γ' ∩ Λ_r = P. What does that mean? It's the set of j for which, if I stand at γ_j, I see exactly the translate of the pattern around me: around γ_j, I see the pattern. And really observe that extra edges are forbidden; they kill the pattern. You really want the intersection of γ' with Λ_r to be equal to P, so you are not allowed to have additional edges. It's information on the edges that are present, but also on those that are not: if an edge is vacant in the pattern, it needs to be vacant in the translate of the walk. And let's define N_P(γ) to be the cardinality of Occ_P(γ): the number of times you see the pattern P in your walk. The theorem, due to Kesten (this is Theorem 3.1), tells you the following: for any pattern P, there exists ε_P > 0 such that for every n, the probability, for the self-avoiding walk of length n, that N_P(γ) ≤ ε_P n is smaller than exp(-ε_P n). So it tells you that the probability of having too few patterns is exponentially small; typically you have a density of patterns, which is quite natural to expect. Just as a remark: you will see this is not a very difficult proof, and it's quite natural to expect something like this. But I really want to highlight one thing, namely that it's actually natural to expect more: that there is an α such that you see roughly αn occurrences with very large probability. So, remark, or rather open problem: prove that there exists α, depending on P, such that for every ε there exists δ with the probability of |N_P(γ) - αn| ≥ εn smaller than exp(-δn). That's even more natural to expect: there should be a typical density of patterns of a given type.
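Since the definition of an occurrence is easy to get wrong (the vacant edges matter too), here is a minimal sketch, in Python and on Z² for concreteness, of what Occ_P(γ) and N_P(γ) count; the encoding of walks as lists of vertices is of course hypothetical, just to illustrate the definition, not anything from the lecture.

# Minimal sketch on Z^2 (hypothetical encoding): a walk is a list of vertices,
# a pattern is given by the set of edges it occupies inside the box Lambda_r.

def edges(path):
    # Edge set of a nearest-neighbour path; each edge is stored as a sorted pair.
    return {tuple(sorted((path[i], path[i + 1]))) for i in range(len(path) - 1)}

def in_box(v, r):
    return max(abs(v[0]), abs(v[1])) <= r

def restrict(edge_set, r):
    # Keep only the edges whose two endpoints both lie in Lambda_r.
    return {e for e in edge_set if in_box(e[0], r) and in_box(e[1], r)}

def occurrences(walk, pattern_edges, r):
    # Occ_P(gamma): indices j such that the walk translated by -gamma_j agrees with
    # the pattern inside Lambda_r; set equality forces vacant edges to stay vacant.
    walk_edges = edges(walk)
    occ = []
    for j, (x, y) in enumerate(walk):
        translated = {((a[0] - x, a[1] - y), (b[0] - x, b[1] - y))
                      for (a, b) in walk_edges}
        if restrict(translated, r) == pattern_edges:
            occ.append(j)
    return occ

# N_P(gamma) is then len(occurrences(gamma, pattern_edges, r)),
# with pattern_edges = restrict(edges(P), r) for the pattern walk P.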
Why is it a nice open question? Because if you manage to prove that, it actually gives you an excellent way of defining a two-sided infinite walk: you would just say that, for every r, the probability that your walk around a given point coincides with the pattern P is α(P). Then you need to prove some kind of consistency, and probably, if you manage to prove the previous statement, you will also manage to prove the consistency. So that could be a first step towards the existence of the two-sided object, because it would not just be a self-avoiding walk starting at zero; it would actually be a bi-infinite walk, a two-sided infinite self-avoiding walk. So if you want to try, that's, I think, a nice problem. OK, let's turn to the proof. In the original paper the proof is a little bit difficult to follow, and in the book of Madras and Slade it's already much better, but it's still not a very simple proof. So I'm going to give you what I think is a very simple proof, based on the same idea. Of course, I'm going to hide something somewhere: you are going to see that in dimension two it works perfectly, but in dimensions three and more I will leave a lemma to be proved, which I'm pretty sure everybody can prove if they want to, but it's going to be a little bit tedious. OK, proof. Let's start by ignoring this problem, which I will come back to later, and let's just assume that for a certain pattern I already have the result. So assume that there exists ε₀ > 0 such that the probability, for the self-avoiding walk of length n, that N_{P₀}(γ) ≤ ε₀n is smaller than exp(-ε₀n), for some pattern P₀, and here I take it of size r, in Λ_r. So what I'm assuming is: you give me the result for some pattern of size r. My goal is to prove that I then have the result for any pattern of size r - 2. So let's fix P, a pattern of size r - 2, meaning that it sits in Λ_{r-2}, and let us prove the result for P. As such, of course, this is not a proof of the theorem, because it's absolutely not clear that you would have the result for even one pattern. But I think the core, the interesting part of the proof, lies in this step: if you have the result for one pattern, you have it for all of them. So, the idea (and maybe let's set c to be the size of Λ_r, to simplify the notation a little bit) is the following. We take a walk which has very few P-patterns, and the goal is to map this walk to many walks that have a lot of these patterns. But we want to do it exactly like last week: in such a way that we can reconstruct where we did the modifications. The goal is that every walk with few patterns gets mapped to many walks, in such a way that each of these image walks has few pre-images. If I can prove that, I have proved that the walks I started with were very unlikely. So it's a kind of multivalued-map principle: you construct a multivalued map, and to every walk you associate a set of walks. If the number of images per walk is much bigger than the typical number of pre-images, that means the first set had very small probability. I will make this a little bit more formal later. So we are going to create a map T from the set E, the set of walks with N_P(γ) ≤ εn (here n is the length of γ) and, in addition, with at least ε₀n patterns of type P₀.
And we map into a set of self-avoiding walks: the set of walks of length at most (1 + Cε)n, say. OK, and the map is the following. To γ I associate a certain collection of walks T_S(γ), where S is a subset of Occ_{P₀}(γ). So I take γ; there is a big set of places where the pattern P₀ occurs; I pick a certain subset S of these places, and there I change the pattern P₀ into the pattern P. So S is a subset of Occ_{P₀}(γ), and I require that the boxes γ_j + Λ_r, for j in S, are pairwise disjoint. So I pick my walk γ, I pick boxes where P₀ occurs, and T_S(γ) consists in replacing the pattern P₀ by the pattern P for every j in S: I erase the pattern P₀ there, and I put the pattern P instead. Now notice that there is, of course, a problem with changing one pattern into another: they have no reason to start and end at the same places, and they have no reason to have the same length. That is exactly why I took the pattern P₀ of size r and the pattern P of size r - 2: whatever the places where the pattern P₀ enters and exits the box, I can always move along, in the annulus between Λ_{r-2} and Λ_r, to arrive exactly at the places where the pattern P starts and ends, so that I can put the pattern P there instead. It does change the length, but we are going to see that this is not very problematic, and that is actually the reason why I allowed the length to change here. Apart from that, there is no problem with doing it. OK, so what do we want to do now? First, I need to tell you the size of the set S that I want to take: I take |S| = εn. Now let's look at T. First, what is the size of T(γ)? Well, it's exactly the number of ways of choosing the set S. In particular, take ε much, much smaller than ε₀: I need to choose εn boxes among at least ε₀n boxes, but I also need the boxes to be disjoint, so every time I pick a box, I should a priori remove all the boxes that intersect it. There, you can really check that |T(γ)| is at least the binomial coefficient "ε₀n/(2c) choose εn"; remember, c is the size of the box of size r. What you do is: among your ε₀n boxes, you first extract a family of ε₀n/(2c) disjoint boxes, and then you choose εn among them. And the important thing here is that this is much bigger than any fixed constant to the power εn, if ε is small.
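In formulas, the lower bound on the number of images that was just described reads, up to the exact constants (my reconstruction),
\[
|T(\gamma)| \;\ge\; \binom{\varepsilon_0 n/(2c)}{\varepsilon n} \;\ge\; \Big(\frac{\varepsilon_0}{2c\,\varepsilon}\Big)^{\varepsilon n},
\]
which, for ε small, is much bigger than C^{εn} for any fixed constant C.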
Now, what is the size of T^{-1}(γ'), which I define to be the set of γ such that γ' belongs to T(γ)? Be careful: we are looking at a multivalued map, so here the pre-image is defined as the set of γ whose image set contains γ'. To recover γ from γ', I just need to know where I modified the configuration: the εn places where I made the change. But how many such places can there be? These places are necessarily places where I see the pattern P in γ', right? Because when I modify, I put in the pattern P. In γ, whatever walk I started from, I had at most εn occurrences of P. When I changed these P₀-patterns into P-patterns, I created new P-patterns, but at most Cεn of them: for each replacement I create at least one, and at most as many as the number of edges in the box, or maybe in the twice-bigger box; let's not bother too much about that. So, to reconstruct, I only need to choose the εn modified places among at most Cεn occurrences of P in γ'. What I want to argue is that this number of pre-images is at most a constant to the power εn, while the number of images is more like (constant/ε) to the power εn; hence the number of images is much bigger than the number of pre-images if ε is small. OK? So what you then use is the following: call the first event E and the image set F. The probability of E is smaller than or equal to the maximum over γ' of |T^{-1}(γ')|, divided by the minimum over γ of |T(γ)|, times the probability of F. That you deduce directly from the definition of the map: just count the arrows, if you want. You have two sets, the set E and the set F; to every element of E you associate a certain number of elements of F; count the number of arrows in two ways, first from E, using the fact that each γ has at least this many images, and then from F, using the fact that each γ' has at most this many pre-images. That gives you exactly this inequality. So you have that, which means that P[E], that is |E|/c_n, is smaller than C₀^{εn} divided by (C₁/ε)^{εn}, times something like (c_n + ... + c_{n+Cεn})/c_n for the F term. And this last ratio, I remind you, is smaller than e^{√n} times μ_c^{Cεn}, up to constants: c_n is larger than μ_c^n, and each c_m for m ≤ n + Cεn is, by Hammersley-Welsh, smaller than e^{√m} times μ_c^m. Overall, you see that everything carries a power εn, but these factors are constants to the power εn while this one is (C₁/ε) to the power εn. So by choosing ε small enough, you can make the whole thing exponentially small. So by choosing ε much smaller than 1 (and than ε₀), we have the result. Really, this is a completely elementary tool, this kind of multivalued map: when you want to prove that a certain set has small probability, you construct a multivalued map, and then you fix ε small enough that the number of images you create is much larger than the number of pre-images. And notice that it doesn't use any independence or anything like that; if you have independence-type results, statements like this are extremely simple to get, but the good thing about this principle is that it doesn't require anything, basically. And it's always based on the same observation: you get constants that do not depend on ε, raised to the power εn, competing against (C₁/ε)^{εn}; the smaller the ε, the more the latter wins.
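Putting the counting together, the chain of inequalities sketched here reads, up to constants (again a reconstruction, so the exact exponents are indicative):
\[
\mathbb{P}_n[E] \;\le\; \frac{\max_{\gamma'} |T^{-1}(\gamma')|}{\min_{\gamma} |T(\gamma)|}\cdot \frac{c_n + c_{n+1} + \dots + c_{n+C\varepsilon n}}{c_n}
\;\le\; \frac{C_0^{\varepsilon n}}{(C_1/\varepsilon)^{\varepsilon n}}\cdot n\, e^{\sqrt{n+C\varepsilon n}}\, \mu_c^{C\varepsilon n},
\]
using c_m ≤ e^{√m} μ_c^m (Hammersley-Welsh, up to constants) and c_n ≥ μ_c^n. For ε small enough, the (C₁/ε)^{εn} in the denominator beats the factors C₀^{εn} and μ_c^{Cεn}, so the right-hand side is exponentially small.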
So why isn't this a complete proof? Well, the problem is that maybe your starting pattern doesn't exist, and the true enemy is the following. Maybe, typically, when you take your walk and look at its intersection with a box of size r, when you cut out what is in the box, you end up with many connected components, like that. When I was starting with P₀, when I cut here and remove everything in the box, what do I know? I know that there are only two points where the walk enters and exits, and I don't have these extra arcs. But for a typical walk these arcs may be there, and they may be very big; they may go very far. Imagine a space-filling curve: when you cut out a piece of it, you have a great chance of getting very long arcs like that. So when you want to create your pattern in the middle, in yellow (say now you take Λ_s and you want to change something in the box Λ_r inside), you need to be able to rewire everybody in such a way that you get a genuine walk containing this pattern. On the picture it seems completely straightforward: you do something like that, et cetera, you can keep going. But the point is that the bigger the number of arcs, the harder it gets, and it's actually not so clear that you can always do it. So you need a claim, a rewiring claim: whatever the dimension and r, you can find an s large enough so that, whatever these arcs are, you can always do it. Let me show you how to do it in two dimensions, and I leave it as an exercise to do it in any dimension. And these are delicate questions; for percolation-type problems, this kind of rewiring can actually have counterexamples sometimes. So the claim here is: for any r, there exists an s such that, whatever the exterior of γ is, one can rewire inside to get the pattern P in Λ_r. In dimension 2 there are several ways of doing it, and it's not very difficult. Let me try to convince you on a picture; let's make a big one, and maybe let's keep this drawing. So I'm going to do the proof in dimension 2. Here, notice that the last edges all really go like that: I really remove everything intersecting the box, including edges between two vertices on the boundary. OK. There is a simple way of seeing that you can do it, and then of seeing that you made a mistake. So let me go that way; let's first prove it with a mistake. One way is: you extend everybody by one step inside, OK? And now you go along like that and you start gluing things. When you do that, the problem is that you may disconnect pieces: you get polygons that are disconnected. But notice that you know that nobody is intersecting this region here, so what you can do is take this piece and reroute it like that: you can connect all the lone polygons by adding a detour like that. The chalk is not very visible; did I find something darker? I think I did, a dark green one somewhere, that would be even better. But you do it like that. Once you have done that, normally you end up with a single walk going from there to there. And then the next step is simply: pick two points here and go inside, and now you can clearly make whatever pattern you want in the middle. OK? So what is the mistake in what I did? Yes, the corners. It's always the corners that are problematic. The problem is, let's take an example here: that guy could also be like that.
This guy, there is not even a way of extending it at all, and this guy potentially does intersect just here. OK? So for these configurations, you need to rewire in a slightly different way; you can, for instance, decide to first do one step here and then go inside, or things like that. I let you think about it in two dimensions; it's actually quite simple to convince yourself that you can do it. In higher dimensions, you start to see that it's going to be a bit more tedious to do something like that, but you can do it nonetheless. OK? That's not actually what they were doing; I think they didn't want to enter into this type of consideration, in particular Kesten in his original paper, so he basically went around the difficulty by doing something smarter. But I do think that the core of the pattern theorem is really this multivalued map. It's really this idea that if you have too few of an object, you transform other things into this object, and you use the fact that, because you had few of these objects in the beginning, after the mapping you still don't have so many of them. So reconstructing the places where you did the change is much simpler than the number of places where you could do the change in the first place. It's really saying that to reconstruct I only need to choose εn places in a set of size Cεn, while for the transformation I could do εn changes among a much bigger set. That is really, I think, the take-home message of Kesten's pattern theorem. If you want to turn it into a full proof, you face a few difficulties with this rewiring, but that's not the core of the argument, I would say. OK, so it's a good time to take a break, and just after the break we will show how this pattern theorem implies that c_{n+2}/c_n tends to μ_c², which seems to be a very natural thing to expect. OK, let's keep going. So, corollary, due to Kesten again, in '63: c_{n+2}/c_n converges to μ_c². And just to make you realize that there is something fishy with such a statement (why isn't it obvious?), here is an open question: we do not currently know how to prove that c_{n+1}/c_n converges to μ_c. It's not an open question whether it's true; of course it converges. But we don't know how to prove it, and you are going to see very well why. So let's turn to the proof. The proof is based on the following: we are going to try to change these patterns into those patterns. The good property of these two patterns is that this one has the length of that one plus 2. And the reason why we will not manage to get c_{n+1}/c_n tending to μ_c is that, for patterns joining the same endpoints on a bipartite graph, there is no way to find two patterns whose lengths have different parity. OK, so it's going to be a bit of the same multivalued-map principle. We are going to start from walks such that (let's call this guy P and this guy Q) N_P(γ) ≥ εn and N_Q(γ) ≥ εn. By the way, I just realized: when I proved the pattern theorem, I only bounded the probability of E; the probability of having fewer than εn patterns is actually the probability of E plus the probability of having fewer than ε₀n P₀-patterns. But this second term also has exponential decay, by assumption; we used exactly that to bound it.
Sorry about that. OK, so again, it's a multivalued map, where we map a walk of length n to walks of length n + 2. How do we do that? You take γ and you map it to the set of T_i(γ) for i in Occ_P(γ), where T_i is the operation that replaces P by Q at i: T_i(γ) is the walk γ with P replaced by Q at the occurrence i. So for every i where P occurs, you may change it to Q, and you indeed go from a walk of length n to a walk of length n + 2. If you use this mapping, what you immediately get is that c_{n+2} is, up to an error term, the sum over γ of length n of N_P(γ)/(N_Q(γ) + 1). Why is that true? You have N_P(γ) choices for the image. How many pre-images does an image have? The number of its Q-patterns, which is N_Q(γ) + 1, because I created one Q-pattern when I did the replacement. And this is valid for any walk such that N_P(γ) ≥ εn and N_Q(γ) ≥ εn; by the pattern theorem, Theorem 3.1, the walks which are not of this type contribute very little, so the error is O(e^{-εn}). So I throw away all the walks which have too few patterns of type P or Q, and on the others I use this map to rewrite c_{n+2} in terms of this sum. Now do the same with c_{n+4}: imagine that instead of changing one pattern, I change two of them. Then it's exactly the same kind of sum, except that now the numerator is the number of choices of two P-patterns to change into Q-patterns, the pre-image count is the corresponding quantity with Q-patterns, and again the error term is exponentially small. So what does this imply? Well, use the fact that N_P and N_Q are larger than εn to see that the plus 1's and plus 2's are negligible, and prove the following: these two bounds together imply that c_{n+4}/c_{n+2} ≥ c_{n+2}/c_n - C/n for every n. I let you think briefly about that; you get from one to the other basically by using Cauchy-Schwarz. So the ratio whose convergence we are trying to prove does not decrease too fast. Why is this so good for us? Because assume, for instance, that c_{n+2}/c_n ≥ μ_c² + ε for a certain ε. Then, by this claim, I deduce that c_{n+2k+2}/c_{n+2k} ≥ μ_c² + ε/2 for any k ≤ δn, say, since the increments decrease by at most C/n each time. So for a very long time I stay above μ_c² + ε/2. But what does that imply? It implies that c_{n+2k}/c_n ≥ (μ_c² + ε/2)^k for any k ≤ δn. And that's not possible, because this ratio is smaller than μ_c^{2k} times e^{√(n+2k)}, maybe with a constant in front. So this is not coherent for k large enough. That's how you treat one case.
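For the record, the two bounds and the conclusion, written out (my reconstruction of the board):
\[
c_{n+2} \;\approx\; \sum_{\gamma} \frac{N_P(\gamma)}{N_Q(\gamma)+1},
\qquad
c_{n+4} \;\approx\; \sum_{\gamma} \frac{\binom{N_P(\gamma)}{2}}{\binom{N_Q(\gamma)+2}{2}},
\]
both sums over walks γ of length n with N_P(γ), N_Q(γ) ≥ εn, and both up to relative errors O(e^{-εn}); together with Cauchy-Schwarz these give
\[
\frac{c_{n+4}}{c_{n+2}} \;\ge\; \frac{c_{n+2}}{c_n} - \frac{C}{n},
\]
and the contradiction at the end is that c_{n+2k}/c_n ≥ (μ_c² + ε/2)^k for all k ≤ δn is incompatible with c_{n+2k} ≤ e^{√(n+2k)} μ_c^{n+2k} and c_n ≥ μ_c^n once k is of order δn.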
To treat the other case, if c_{n+2}/c_n ≤ μ_c² - ε, you just go the other way: use k negative, so you end up with c_{n-2k+2}/c_{n-2k} ≤ μ_c² - ε/2 for k ≤ δn, and then you apply the same reasoning: c_n/c_{n-2k} ≤ (μ_c² - ε/2)^k. But this ratio is larger than μ_c^{2k} divided by e^{√n}, and this is, again, absurd. The proof is not very difficult, but if you try to do it yourself without the help of Kesten, it's actually not so easy to see how to do it. So it's really Kesten's theorem that allows you to prove a regularity on c_{n+2}/c_n. In particular, I think that if you try to prove this corollary yourself, the first thing you would try is to say: having a certain pattern, or the pattern of length plus 2, should be roughly the same probability, there should be a good proportion of patterns of type P or Q, and maybe the ratio of the two should be related to μ_c². If you go that way, it doesn't work well. What is funny is that it's the regularity of the increments, c_{n+2}/c_n compared to c_{n+4}/c_{n+2}, which is easy to handle. OK, and I think you see why c_{n+1}/c_n tending to μ_c is a harder question: it's necessarily a global question on the walk. The modification you would have to do to go from c_n to c_{n+1} would have to be global; it would have to shift a full part of your walk, and these are extremely difficult questions: what can you shift, and how? It's unfortunately a very difficult one. Or fortunately, maybe. OK, so that was the first thing, on the local geometry. Now let's try to prove something on the global geometry, and the thing I chose is to prove sub-ballisticity. So the goal of this section is to prove something which seems fairly natural: that the expected displacement of the endpoint does not grow linearly. I'm going to prove this, and then I will give you, in the last section, a few open questions, to make you realize that results in the opposite direction, for instance proving that the walk is not space-filling, or things like that, are still completely open. OK, so this is Theorem 3.3. And in fact, what we will prove is the following: for any ε > 0, there exists δ > 0 such that the probability, for self-avoiding bridges of length n, that the displacement is at least εn decays exponentially fast, like e^{-δn}. I let you think about why this implies the other statement; the key word will not be very surprising after the first two lectures: the key word is Hammersley-Welsh. If you apply Hammersley-Welsh to a walk which goes to linear distance, necessarily the bridge you obtain has linear span, and therefore, if that is exponentially unlikely, even paying e^{√n} to go to a bridge, the exponential cost is such that you have already lost.
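Schematically, the reduction from walks to bridges that was just described is the one-line estimate (with constants I won't be careful about):
\[
\mathbb{P}_{\mathrm{walk},\,n}\big[\|\gamma_n\| \ge \varepsilon n\big] \;\le\; e^{C\sqrt{n}}\;\mathbb{P}_{\mathrm{bridge}}\big[\text{span} \ge \varepsilon' n\big] \;\le\; e^{C\sqrt{n}}\, e^{-\delta n},
\]
so the e^{C√n} price of the Hammersley-Welsh decomposition is negligible compared to the exponential gain.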
OK, and the proof of the proposition is based on two ingredients. The first one is to say: if the displacement is of order εn typically, then, looking at the irreducible bridges (remember these irreducible guys, for which we defined a law proportional to μ_c to the minus their length), the expected length of an irreducible bridge should be finite. And you will see this is not very surprising: morally speaking, you want to say that you have a density of renewal points. If in length n you have, say, δn renewal points, that means you have δn irreducible bridges, so on average these guys should have finite size. The second step is to prove that this is not the case. As a good pedagogue, I will start with the second step, because it is actually the simplest one, and the most elegant one, so that's two reasons. So, Lemma 3.5: the expected length of an irreducible bridge is infinite. And I will do two proofs. I will first work on the hexagonal lattice and use the observable, because we basically already did all the work; then I will give you the more conceptual proof, on the square lattice and on Z^d, but I will only sketch it, because it's a little bit long to write in full. So, on the hexagonal lattice, how do we do it? Remember the notation: b_t was the generating function for bridges of height exactly t. If I look at the infinite bridge, which is a concatenation of i.i.d. irreducible bridges, what is b_t? If you think about it, b_t is exactly the probability, for the infinite self-avoiding bridge, that height t is a renewal height, in the sense that you have your infinite bridge, it has its decomposition into irreducible bridges, and you have renewal heights, which are heights at which one irreducible bridge ends and the next one begins; if you want, heights at which the corresponding horizontal line is crossed only once. So b_t is exactly this probability; I let you think about it and meditate on it. But then, by renewal theory, as t tends to infinity this probability converges to 1 over the average height of an irreducible piece. So if I want to prove the claim, I need to prove that b_t tends to 0. Well, in fact we kind of already proved that b_t tends to 0, because (I don't know if you remember, maybe I even wrote what the lemma was) in Lemma 2.4 we proved something like b_t^6/t smaller than p_t. But actually, if you try to redo it, you notice that what we really used is a bound on b_t cubed: if, instead of doing a full turn, you do a half turn, you end up with b_t³ smaller than the generating function of the walks going from 0 to a point x with t ≤ x ≤ 8t. So in fact, along the way, the observable proved that b_t³ ≤ Σ_{x=t}^{8t} a_x, where a_x is the generating function of arcs going from 0 to x. You can even bound this by Σ_{x=t}^{∞} a_x, and since A = Σ_x a_x ≤ 1 by the observable, the a_x are summable and this tail tends to 0. It's a very lame proof, but it's a short proof and it's self-contained.
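In symbols, the hexagonal-lattice argument just given is (with a_x the generating function of arcs from 0 to x):
\[
b_t^{\,3} \;\le\; \sum_{x=t}^{8t} a_x \;\le\; \sum_{x \ge t} a_x \;\longrightarrow\; 0 \quad (t\to\infty),
\qquad\text{since}\qquad \sum_{x} a_x = A \le 1,
\]
and by the renewal theorem b_t converges to 1 over the expected height of an irreducible piece, so that expectation must be infinite.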
Here you really have all the tools to prove it. Now let me show you the better, nicer proof, in my opinion. How would you do it on Z^d? OK, it's going to be a proof by contradiction. Assume the expectation is finite. If it's finite and I look at the infinite bridge, what kind of information can I get? The infinite bridge is a concatenation of i.i.d. pieces, i.i.d. irreducible bridges, whose length now has finite expectation. So, just by ergodicity, in fact by the law of large numbers since I'm summing random variables with finite expectation, after n steps I'm typically at a certain height αn. So now notice that γ is an infinite walk; pick γ according to the law of the infinite self-avoiding bridge. The first thing you know is that y(γ_n)/n converges to a certain α almost surely (I'm on Z^d, but let me just draw on Z² for the intuition). The second thing you can prove is that γ is clearly reflection-symmetric in the horizontal direction, just by the irreducible pieces. So, looked at that way, it's a centered process in that direction, and I can prove that x(γ_n)/n tends to 0 almost surely: it converges to something by ergodicity, and because of the symmetry the average must be 0. From these two things it's actually easy to deduce that the walk has a density of special points: there exists δ such that |D(γ) ∩ [0, n]|/n tends to δ almost surely, where D(γ) is the set of diamond points. What I call a diamond point is a point such that the walk is included in the cones emerging from it: before it, the walk is in the downward cone, and after it, in the upward cone. Why is this true? Well, 0 is a diamond point with positive probability, simply because the walk goes at linear speed upwards while its horizontal width is negligible compared to n; by changing things a little bit at the beginning, you can make 0 a diamond point with positive probability. Then, by reflecting, if you defined the bi-infinite walk, you would see that it is also a diamond point in the reverse direction. And 0 is a renewal point; every renewal point has the same probability of being a diamond point. So this is kind of ergodic theory; you need to do something, but I'm sure that if I lock you in a room and force you to do it before you exit, you will manage. Now the question is: why is a density of diamond points something absurd? You have your walk, which, if I look at the decomposition at diamond points, looks like that: each of these points is a diamond point, and in between you have something going like that. I want to say that a density of diamond points is absurd. Why? Well, this is really something we like very much, with Alan: we want to use what we call the stick-breaking method. The idea is: pick two diamond points; really imagine something where the pieces look very much like that, and you pick two diamond points, here and here. What I'm going to do is pick two diamond points at random and stick-break the walk. What does that mean? It means that the piece in between, I rotate it by π/2: this piece here, rotate it like that, and then continue as before. The fact that I look at diamond points is what allows me to rotate by π/2 without creating intersections.
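For the record, one way to write the diamond-point definition that matches the picture (my notation; presumably the cones are the diamond-shaped ones {(x,y) : y ≥ |x|} and {(x,y) : y ≤ -|x|} in the Z² picture):
\[
j \in D(\gamma) \quad\Longleftrightarrow\quad \gamma_i \in \gamma_j + \mathcal{C}^{-}\ \ \forall i \le j
\quad\text{and}\quad \gamma_i \in \gamma_j + \mathcal{C}^{+}\ \ \forall i \ge j,
\]
which is what allows the piece between two diamond points to be rotated by π/2 without creating intersections.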
I need to be a little bit careful, because this piece has a certain width; when I stick-break, I don't want the walk to come down below the starting point, because I want a bridge. But just pick these points deep inside, not close to the beginning nor, for a walk of length n, close to the end. So pick two diamond points in the middle, far from each other, and stick-break the thing. What you end up with is a walk, still a perfectly valid walk, but its endpoint is not at all at the right height: it's not at height αn, because if I stick-break a piece of vertical extent βn, then typically the height of the new walk is (α - β)n instead of αn. So here, by a multivalued-map principle exactly like before (I picked two diamond points, I stick-break, I look at the number of pre-images, the number of images, and so on), what you end up with is that the probability of being at height close to αn after n steps cannot be close to 1: it's bounded away from 1 uniformly in n. And therefore the expectation must be infinite, otherwise you would have that almost sure behaviour. So it's not a full proof; you need to actually turn it into one. But I think the idea is clear: you use ergodicity to create this density of diamond points, and then you contradict the speed, if you want, by saying that by stick-breaking you would get a different speed with positive probability, which contradicts ergodicity; here, it contradicts the fact that y(γ_n)/n tends to α almost surely. OK, I do not claim this is a full proof, but you have basically a full proof for the hexagonal lattice, and this one you can turn into a proof. OK, let's now focus, for the end of the lecture, on the difficult part of the proof, which is the following lemma, Lemma 3.6: for every ε > 0 there exists δ > 0 such that the probability that |R(γ)| ≥ δn is larger than e^{-o(n)} times the probability that the displacement is at least εn, for infinitely many n, where R(γ) is the set of renewal points: points, or if you want edges (horizontal lines), which are crossed only once by the walk. So what we prove is a comparison: you look at the probability of having distance εn between your endpoint and your starting point, and I'm saying that if you have that, then in fact, with not-too-much-smaller probability, you have many renewal points; and the point is that you are going to get this for infinitely many n. Let me first explain why this concludes the proof. Remember, our goal was step one: if the displacement is of order εn typically, then the expected size of an irreducible bridge is finite. But I just erased it; why did I do that? Anyway, if Lemma 3.6 is proved, then what you know is that, under the assumption (for contradiction) that the probability of a displacement of εn does not decay exponentially fast, the probability, for the self-avoiding bridge of length n, of having |R(γ)| ≥ δn does not decay exponentially fast either. If that is true, then it's actually quite simple to check that the expected length of an irreducible bridge must be finite: if it were infinite, then, whatever δ, the probability of having δn renewal points would decay exponentially fast. Imagine you have a random walk whose increments have infinite mean, and you want it to be smaller than n after δn steps.
That's going to cost you really a lot; it's going to be exponentially unlikely, which contradicts the previous statement for infinitely many n, OK? So from there to the final result it's simple. Now I need to explain how I prove this lemma, and you are not going to like it. The first observation is actually simple. Define R_k(γ), what I call the k-renewals: R_k(γ) is the set of horizontal lines crossed at most k times by γ. So you look at your walk: this, for instance, is a 3-renewal; this is a 1-renewal, which is of course also a 3-renewal, and so on. My goal is to prove that the probability that R_1(γ) is large is not too small. But one thing which is completely clear, if I look at a walk which goes far, is that |R_{2/ε}(γ)| ≥ (ε/2)n. That's just a pigeonhole principle: you need many lines which are not crossed too many times, otherwise you never reach distance εn. So this is tautological; the question is how you reduce this 2/ε, which is a constant, to 1. And think of the enemy as being big N-shapes: imagine a bridge that looks like that; you have many, many 3-renewals and no renewal points. That's your enemy. Of course, you might say that such a picture is very unlikely, by a Hammersley-Welsh argument or something like that; but then think of walks that do this kind of thing at mesoscopic scales. Those are really the enemies I want to beat, and I'm going to do it by proving the following. We want to find constants δ_{2/ε}, ..., δ_1, just a finite set of constants, such that: if the probability of |R_k(γ)| ≥ δ_k n decays sub-exponentially along a subsequence of n, then the probability of |R_{k-1}(γ)| ≥ δ_{k-1} n also decays sub-exponentially along a subsequence. So I want to prove that if I have sub-exponential decay for the probability of having many k-renewals along a subsequence, I can deduce the same at level k - 1. Basically, you want to say that it's not possible to have many k-renewals and essentially no (k-1)-renewals: if you manage to have many k-renewals, you should have many (k-1)-renewals. The point is that this is actually a quite difficult claim to prove, so again I'm only going to sketch the proof, hoping to give you an idea of what is happening. OK, so imagine we are at step k, and let 𝒩 be a set of integers n such that P_n, the probability, for self-avoiding bridges of length n, of having |R_k(γ)| ≥ δ_k n, is e^{-o(n)} along 𝒩. I'm assuming sub-exponential decay along a subsequence; I pick a set of integers along which I have that, and now I work with some n in this set. First case: assume that for infinitely many n in 𝒩, the probability of |R_1(γ)| ≥ δ_1 n is at least P_n/3. Then we have nothing to do: this quantity decays sub-exponentially fast, which is exactly what I want to prove, so in this case we are done. So that's not the interesting case.
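Two of the statements from this part, written out; "decays sub-exponentially along a subsequence" is formalized here by the limsup being zero (my choice of formalization). The pigeonhole bound: if the bridge has height at least εn, it crosses at least εn distinct horizontal lines, while the total number of crossings is at most n, so
\[
\#\{\text{lines crossed more than } 2/\varepsilon \text{ times}\} \;\le\; \frac{n}{2/\varepsilon} = \frac{\varepsilon n}{2},
\qquad\text{hence}\qquad
|R_{2/\varepsilon}(\gamma)| \;\ge\; \varepsilon n - \frac{\varepsilon n}{2} = \frac{\varepsilon n}{2}
\]
(counting lines crossed at least once and at most 2/ε times). And the inductive claim:
\[
\limsup_{n\to\infty} \tfrac{1}{n}\log \mathbb{P}_n\big[|R_k(\gamma)| \ge \delta_k n\big] = 0
\;\Longrightarrow\;
\limsup_{n\to\infty} \tfrac{1}{n}\log \mathbb{P}_n\big[|R_{k-1}(\gamma)| \ge \delta_{k-1} n\big] = 0 .
\]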
Now let's look at the second case, which may be a little more interesting. Imagine that the walk makes a big capital N, like that, and say this is typical. If this is typical, how do I prove that, with not-too-bad probability, I have many renewals? Well, I'm just going to unfold my N. It costs me something, because I need to remember where to refold, but in the case of a single N it's only two places, so it costs me a factor of order 1/n². And now I have a walk with many renewals. So let me introduce the notion of zigzag. A zigzag is the following: a pair of times (zig, zag) such that y(γ_zig) is the maximum of y(γ_j) over j ≤ zag, and y(γ_zag) is the minimum of y(γ_j) over j ≥ zig. So at a zigzag, the walk locally does an N, like that. Let ZZ(γ) denote the set of zigzags, and first assume that the probability of having |R_k(γ)| ≥ δ_k n but not too many zigzags, at most ε_n·n of them, is not too small: assume there exists a sequence ε_n tending to 0 along 𝒩 such that this event has probability at least P_n/3 for infinitely many n in 𝒩. Well, in this case, unfold all the zigzags. You can always unfold a zigzag: you reflect this piece and then glue the piece after; the zigzag is like that, and you transform it into this, like that. If you unfold all the zigzags, notice that every k-renewal is mapped to a (k-1)-renewal, except if it was already a renewal to start with; but if there were many of those, you would be in the first case, so maybe let's also add the condition |R_1(γ)| ≤ δ_1 n to the event. So when you unfold all the zigzags, you transform all the k-renewals into (k-1)-renewals. And unfolding, because you have at most ε_n n zigzags (think Hammersley-Welsh if you want, or just think that the map is not one-to-one but only e^{ε_n n}-to-one), you end up with a probability at least e^{-ε_n n} P_n, or maybe e^{-2ε_n n} P_n, of having many (k-1)-renewals. So if you have only a small proportion of zigzags, you unfold all of them; it costs you something, the map is not one-to-one, but it's not too-many-to-one, so you only lose sub-exponentially, and you get exactly the conclusion you are looking for at level k - 1. So what remains, and what is the important case, is the remaining one: few renewal points and many zigzags, at least δn of them for some δ (which is the negation of finding a sequence ε_n as above), together with |R_k(γ)| ≥ δ_k n, for infinitely many n. Well, if you have that (and it's almost the end, so I'm not going to give you all the details), the idea is: if you have many zigzags and few renewal points, pick short zigzags. Because if you have many zigzags, many of them are not very big. Pick short zigzags and unfold a small proportion of them. Exactly like in the pattern theorem, I have many of these short zigzags, so I have many ways of choosing which ones to unfold. But how many ways do I have to reconstruct? Well, I know that when I unfold, I create renewal points at the image of the zag point.
So if I unfold, I'm going to create a certain number of renewal points, and if the zigzags are short, it's not going to be a huge number of renewal points. And the point is that now I just need to locate, among the renewal points, the places where I actually unfolded. But I don't have too many renewal points, because I started with very few and I only unfolded a small proportion of the zigzags. So I have few places where I could have unfolded, few places to look at when I want to refold. So exactly like in Kesten's pattern theorem, I have many places where I can unfold, basically I can choose any set of size εn inside a set of size δn, and few places where I can refold. So there are many more images than pre-images, and I end up with the fact that, in this case, by unfolding, this probability is in fact much smaller than P_n. So this case is absurd; this case cannot occur.

OK, let me borrow five more minutes for the following things. The first thing is to state an open question. So here, I agree, I only gave you the idea, I didn't write the thing down. But open question: make the theorem quantitative. I would be happy with one over log log log log log log n. This theorem is really not quantitative, because we use ergodic theory, so it's completely non-quantitative. And even if you remove the ergodic theory that we use on Z^d and you use the observable on the hexagonal lattice, because there we can circumvent the ergodic part completely, well, this lemma here is one of the rare instances of a non-ergodic proof which is still not quantitative. We don't know how to turn it into a quantitative proof, because in the second step we are really just taking the negation of "there exists δ such that I have δn zigzags for infinitely many n"; the negation is "there exists a sequence ε_n tending to 0", and I have absolutely no clue at which speed it tends to 0. So it's a funny thing: I thought it was simple, but I must say, it's maybe simple, but not for me. That's the conclusion of the thing.

Let me just finish with a few open questions on the geometry; it's really going to take five minutes. And like that, next week, no, not next week, but in ten days, we can forget about all these horrible things and see that in dimension 4, everything is working well.

So, open questions. Making the theorem quantitative would be good. Question one: what would be really huge would be to prove that the displacement of γ_n is at most n^{1-ε}, so really gaining a power. That would be very good, because it would kind of show some fractal nature of the model. Question two is a natural counterpart: can you prove that the displacement divided by n^{1/d} tends to plus infinity? A space-filling walk would have displacement of size n^{1/d}, so this asks: can you prove that the self-avoiding walk is not space filling? Which sounds completely ridiculous, right? That's an open problem. Question three would be exactly the same question but with √n: come on, it should go farther than a simple random walk. But that is even harder: in dimension 2 it is the same question, since n^{1/d} is then √n, but in higher dimension it is stronger. Let me give you the two known results that go a little bit in that direction. I mean, of course, question four would be to get n^{3/4} in dimension 2, but that seems even harder, of course.
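To summarize these questions with explicit notation (the notation is an addition for clarity, not the lecture's: write γ_n for a uniformly chosen self-avoiding walk of length n on Z^d, and ‖γ_n‖ for the distance between its starting point and its endpoint; whether one takes expectations or typical values, and the endpoint versus the radius, is deliberately left vague, as in the lecture):

```latex
% Rough statements of the four open questions (my phrasing, not the lecture's).
\begin{align*}
\text{Q1 (a power below ballistic):}\quad & \mathbb{E}\,\|\gamma_n\| \le n^{1-\varepsilon} \text{ for some } \varepsilon>0,\\
\text{Q2 (not space filling):}\quad & \mathbb{E}\,\|\gamma_n\|\,/\,n^{1/d} \longrightarrow \infty,\\
\text{Q3 (beats simple random walk):}\quad & \mathbb{E}\,\|\gamma_n\| \ge c\,\sqrt{n} \quad \text{(the same as Q2 when } d=2\text{)},\\
\text{Q4 (the predicted exponent, } d=2\text{):}\quad & \mathbb{E}\,\|\gamma_n\| = n^{3/4+o(1)}.
\end{align*}
```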
Let me just give you two theorems, because I think they illustrate very well that we don't know much. The first one is a theorem by Madras, and it says the following: if I look at the expectation of ‖γ_n‖^p, the p-th moment of the endpoint displacement, then it is at least of order n^{p/((p+1)d)}. If you put p = 1, it gives you n^{1/(2d)}. And you could think, OK, the walk is self-avoiding, so it cannot be worse than space filling, right? Well, it can, because this is just about the endpoint, not the radius of the walk: it could be that the endpoint, for some completely crazy reason, is close to the origin. So, just so you realize: we don't even know how to prove that the expected distance between 0 and the endpoint is of order n^{1/d}, the order of the radius of the walk. So here, with the n^{1/d}: asking the question for the radius instead of the endpoint is already a good question. But even for the endpoint, anything better than n^{1/(2d)} would be good, actually.

And the second theorem, which we first proved with Glazman, Manolescu and Hammond, and which Alan Hammond then improved on his own, looks exactly at the probability that the walk ends up next to the origin. The point is that the walk comes back near the origin, so that you basically create a polygon: your self-avoiding walk ends exactly at a neighbor of the origin. Well, this theorem proves that this probability decays at least like 1/√n. And I really want to highlight something beautiful about this theorem: what is this probability? It is just p_n, the number of pointed self-avoiding polygons, divided by c_n, the number of self-avoiding walks. And you see, what you really prove is that, even though you have very little information on c_n and very little information on p_n, you can still prove that the ratio of the two goes to 0 at a certain polynomial speed, which is typical of critical exponents. So this is, I think, a very, very nice theorem. And you see, these are very, very weak results; there is a lot of room for improvement. So if any of you want to do something on that, we would be very happy.

So next week, I mean on Tuesday in two weeks, we will really work on the self-avoiding walk on Z^d with d large. It's going to be completely different: you are going to see that there, things really work well, the opposite of what happens in two dimensions. Thank you very much.