OK, well, thank you very much for coming. This is going to be a four-lecture course on self-avoiding walks. Before I really dive into the subject, let me first tell you a little bit about how it is going to be organized. The first lecture, today, is mostly generalities: there are a few things we really need to discuss before we start. In particular, we need the definitions of the objects we will look at, and from these definitions there will very quickly be a quantity that we will need to study in a little more detail, called the connective constant. That is going to take us a little time. Once we have defined the connective constant, there is a very classical argument which is going to pop up several times in the lectures, and which we are going to improve in the second and maybe the third lecture. It is called the Hammersley-Welsh argument. It is a very cute one, and I hope I will convince you of that, so we will spend time discussing it. Then we will finish by discussing infinite self-avoiding walks a little: can we define a measure on infinite self-avoiding walks, how does it look, and so on. So that is the first lecture. The second lecture, next Tuesday — and some people have already seen me talk on this — will focus on the self-avoiding walk on the hexagonal lattice: a very specific lattice for which we know much more. On this lattice, the first thing we will do is compute the connective constant: we will prove that it is equal to the square root of 2 plus square root of 2. That is the thing that probably several of you have already seen, but it will be just the beginning of this second lecture; it will be very short. What I will do after that is improve Hammersley-Welsh — the first improvement in 50 years.
You are going to see that Hammersley-Welsh is a very elementary argument, but it took very long to actually improve it by even a little bit. Then I am going to give you a bit of work, in the sense that I am going to describe an argument which I like very much, and which basically explains how you could really improve Hammersley-Welsh drastically and get a result which would be extremely good for us — but conditional on a conjecture. So: a conjecture leading to what we call polynomial bounds. This will make much more sense once I have told you what Hammersley-Welsh is and what we do with it. I will also discuss conformal invariance, but this will really only be a discussion — there is no mathematical theorem — and the relation to Nienhuis's computation. So all of the second lecture will be devoted to this special case among self-avoiding walks: the self-avoiding walk on the hexagonal lattice. Then lecture three will be on Friday — Friday the 16th, so be careful, there is a slight change of plan: it is not going to be every Tuesday. On Friday, we will study the geometry of self-avoiding walks: how do they typically look? There will be two main theorems. The first step will be to study them locally — local geometry — and there the key result we are going to prove is Kesten's pattern theorem. In the second step, we will look at the global geometry, and there we will prove that they are sub-ballistic. The last lecture will be on Tuesday — but be careful, Tuesday the 27th, so there will be one jump. There we will study the self-avoiding walk on Z^d with d very, very large, because we are going to see in this last lecture that the self-avoiding walk in very high dimension in fact behaves like the simple random walk: the self-avoidance condition does not give you any strong constraint.
And so there, what we will do is discuss first what we call the bubble condition and its applications. Second — and this should scare people who already know the field, but I think we cannot do without it — we are going to discuss the lace expansion, which is the tool that enables you to prove this bubble condition. The bubble condition is a nice condition on self-avoiding walks, and you will see that you can deduce a lot of things from it in a very soft and elegant way. Proving that this condition holds in large dimension is a much more delicate matter, and that will be the subject of the lace expansion. OK, so that was just a brief overview, so that we know where we are going — I mean, I know where I am going; many of these words probably do not make any sense to you now, but they will soon. OK, so the first lecture is about these generalities, and let me first tell you what a self-avoiding walk is. So first: self-avoiding walks, bridges and polygons. By the way, what I will try to do during the lectures is to give you not so many exercises, but a lot of open questions, and I will be extremely happy if any of you manages to solve any of them. So there will be a bunch of open questions, and not so many exercises, simply because the exercises are usually quite difficult, and I am not sure I want you to lose too much time on them. But if you really want exercises, come to see me and I will find some. I should also have mentioned two things. If you want references on the self-avoiding walk, the main one is the book by Madras and Slade. I think it is the most comprehensive one, and it is very well written, so I recommend it warmly. If you want something a little more recent, with some of the recent developments, you can look at the lecture notes by Roland Bauerschmidt, Slade again, and myself. So: self-avoiding walks, bridges, and polygons. What are they?
First, in these lectures we will always work on a lattice. So G will be a lattice, meaning an infinite graph which is transitive, locally finite, and connected; V will be the set of vertices and E the set of edges. And really, let's just keep examples in mind, because we are mostly going to work on them. Think of Z^d — so Z^2 and higher dimensions. Think also of the hexagonal lattice, which I will call H, et cetera. You can also think, for instance — because those are among the only cases where we will manage to say anything exact — of the ladder, or of the d-ary tree, which I will call T_d, where the degree is d + 1; so the binary tree has degree 3. So think of these lattices. Now, the objects we will look at will be of four types. First, what we call a walk: a walk is just a sequence gamma_0, gamma_1, ..., gamma_n in V such that (gamma_i, gamma_{i+1}) is an edge for every i. So it is a sequence of neighbouring vertices. Now we say that gamma — a finite sequence or an infinite sequence, whatever — is a self-avoiding walk if it is simply a walk such that gamma_i = gamma_j implies i = j. It is a one-to-one walk: you are allowed to visit every vertex of your lattice only once. Then we introduce two other objects. A self-avoiding polygon is a walk such that gamma_0 = gamma_n — it goes back to its starting vertex — and gamma_i = gamma_j implies i = j or {i, j} = {0, n}. So a self-avoiding polygon is simply a walk going back to its original point. The only thing I am going to add is that I will look at polygons up to re-rooting and reorientation: I look at equivalence classes up to translation of the indices and global flip of the ordering. So I do not want to say which point is gamma_0.
And I do not want to say in which direction the polygon is traversed. The last object is the self-avoiding bridge in direction e — actually, maybe let's not put "in direction e"; let's ignore that. A bridge is the following: it is a walk which never goes left of its starting point and never goes right of its ending point. So it is a self-avoiding walk gamma such that gamma_0 · e_1 < gamma_i · e_1 ≤ gamma_n · e_1 for every i in {1, ..., n}. Notice just a small subtlety here: I do not allow you to come back exactly to the same height — I mean to the same hyperplane — as the starting point, but I do allow you to touch the ending one several times. This will be convenient for the decomposition later. Yes? Exactly, that is what I was going to say: this makes sense only when the lattice L is embedded in R^d, and e_1 is the first vector of the canonical basis. Very good. OK, so maybe let's give names to the different sets. The n here will be the length of my walk; notice that there are n + 1 vertices in a walk of length n, so the length is really the number of edges. The set of walks of length n I will call W_n; the set of self-avoiding walks of length n, SAW_n; polygons, SAP_n; and bridges, SAB_n. I do not want these sets to be infinite, so here I mean the self-avoiding walks of length n starting at 0, that is, gamma_0 = 0. For polygons, it is up to translation anyway, so we ignore that — or say "containing 0", if you like. And bridges start at 0 again. OK? OK. So these objects were first considered by a mathematician: the model was introduced by Orr in 1947, I think. The poor fellow actually died a few years after introducing it — no connection to self-avoiding walks there — and he did not get recognized for the introduction of this model. What he did is actually maybe not that interesting anyway.
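The four definitions above can be sketched in a few lines of Python on Z^2 — a minimal sketch, where all function names (`is_walk`, `is_saw`, `is_polygon`, `is_bridge`) are mine rather than standard notation, and walks are lists of integer pairs:

```python
def is_walk(gamma):
    """A walk: consecutive vertices of gamma must be nearest neighbours in Z^2."""
    return all(abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1
               for a, b in zip(gamma, gamma[1:]))

def is_saw(gamma):
    """Self-avoiding: gamma_i = gamma_j implies i = j."""
    return is_walk(gamma) and len(set(gamma)) == len(gamma)

def is_polygon(gamma):
    """Self-avoiding polygon: returns to the start, all other vertices distinct."""
    return (is_walk(gamma) and len(gamma) > 2 and gamma[0] == gamma[-1]
            and len(set(gamma[:-1])) == len(gamma) - 1)

def is_bridge(gamma):
    """Bridge: gamma_0.e1 < gamma_i.e1 <= gamma_n.e1 for every i >= 1."""
    x0, xn = gamma[0][0], gamma[-1][0]
    return is_saw(gamma) and all(x0 < v[0] <= xn for v in gamma[1:])
```

Note how `is_bridge` encodes the subtlety from the definition: the inequality is strict on the starting side and non-strict on the ending side.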
He counted the self-avoiding walks of very small length on Z^3 or Z^2, I don't remember — it was really enumerative. You are going to see that we are going to do things of the same sort, but that is what he did. People usually refer to another discovery: the model was rediscovered by Paul Flory, a chemist, in 1953 — actually a very famous chemist, he got the Nobel Prize. What he did was already a little more interesting. It is the following. You have a finite number of self-avoiding walks of length n, so you can define the uniform measure on this set. You take the uniform measure, and then you look at the typical distance between the starting point and the ending point: the expectation, with respect to this measure, of |gamma_n| — let's put it like that. Just so you understand what I mean: it is 1 over the size of your set, times the sum over gamma in your set of |gamma_n|, where |·| is the Euclidean norm. And what Flory predicted is the following: that this grows like n^{nu + o(1)}, where nu equals 3/4 in dimension 2 — so if you take Z^2, say; equals 0.59, something like that, on Z^3 — here the value is numerical; and equals 1/2 on Z^d with d ≥ 4. Flory's prediction actually does not include this numerical value, but it does include the 3/4 and the 1/2. So notice — maybe this is a good point to make a comparison — consider the same question, but for plain walks, not self-avoiding walks. If you look at the uniform measure on W_n — what a good idea — that is what people know as the simple random walk.
There, what you find is that this quantity always behaves like the square root of n, whatever the dimension. So what Flory predicted is that in dimension 4 and more the self-avoiding walk behaves like the simple random walk, but that in dimensions 3 and 2 it goes much farther: the typical distance between the endpoint and the origin is much bigger. This is kind of intuitive if you think of the self-repulsion in the walk, which is going to push the walk further. I will come back to this, because you are going to see that between the intuition and a mathematical theorem there is a really huge gap here. But indeed, this prediction of Flory is actually quite interesting, because it tells you that in dimensions 3 and 2 the self-avoiding walk seems to be a genuinely different model from the simple random walk — there really is something different. Funnily enough, you are going to see that Flory's prediction is quite accurate: it does indeed behave like n^{1/2} in high dimension, and like n^{3/4} in dimension 2. Only in dimension 3 did he not predict the right thing — there he actually predicted 3/5, I believe. What is kind of surprising is that the 3/4 is one of those cases where there just are not many simple numbers between 1/2 and 1, because the way he predicted the 3/4 is absolutely wrong. He said: two things are happening for the self-avoiding walk, and if you combine these two things, you get 3/4. The truth is that neither of the two things actually holds for the self-avoiding walk, and yet the two together give you exactly the right result — for completely wrong reasons. So OK, we will see. I will actually give you a heuristic for the 3/4 during the second lecture, one that we do believe is right.
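The square-root-of-n behaviour of the simple random walk is easy to see numerically. Here is a quick Monte-Carlo sketch on Z^2 (the function name and parameter choices are mine): if the mean displacement grows like sqrt(n), then quadrupling n should roughly double it.

```python
import random

def srw_displacement(n, trials=2000, seed=0):
    """Monte-Carlo estimate of E|gamma_n| for simple random walk on Z^2."""
    rng = random.Random(seed)
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    total = 0.0
    for _ in range(trials):
        x = y = 0
        for _ in range(n):
            dx, dy = rng.choice(steps)
            x += dx
            y += dy
        total += (x * x + y * y) ** 0.5   # Euclidean distance to the origin
    return total / trials
```

With these settings, `srw_displacement(400) / srw_displacement(100)` comes out close to 2, consistent with the sqrt(n) scaling.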
OK, just before I go to the second section, let me remark that most of what I am going to say — basically everything except what I will tell you about the self-avoiding walk on the hexagonal lattice — generalizes. So there is a generalization. Imagine you do the following. For gamma a walk, define Phi(gamma) to be the sum over x in L of phi(L_x(gamma)) — don't worry, I am going to tell you what these things are — where L_x(gamma) is the number of visits of gamma to x. So it is the occupation time of my walk at x, the number of times I visit. And phi is a function from the non-negative integers into R_+ such that two things hold. First, I want phi(0) = 0: basically, if the walk does not visit a vertex, that vertex does not contribute to the sum. But I also want that phi(n + k) ≥ phi(n) + phi(k). Think of Phi as a global penalization of your walk: I want the penalty for visiting a vertex n + k times to be at least the penalty for visiting it n times plus the penalty for visiting it k times. This is what we call self-repulsion for the walk: the more you visit a vertex, the worse it is for you. And then, basically, you define P_n^phi(gamma) to be 1 over a normalizing constant, times exp(−Phi(gamma)). So a walk gets larger probability if it has smaller Phi. Let me give you a few examples. The first one is phi(k) = 0 for every k. In this case, what you end up with is simply the simple random walk: there is no penalization, you do not care about the visits, everybody gets weight 1, and you get exactly 1 over the number of possible walks.
Then the second possibility is phi(k) = 0 if k ≤ 1, and infinity if k ≥ 2. In this case, what does it say? It says the penalty is infinite as soon as I have two visits somewhere. So what do I get? The self-avoiding walk. Yes? — Should that be k larger than 1, rather than larger than or equal? — Yes, sorry, you are right; in this form you get the self-avoiding walk. You could also take phi(k) = mu k, for instance. But let me mention the most common one: phi(k) = (beta/2) k(k − 1). This is called the weakly self-avoiding walk. It interpolates between the two models: beta = 0 is the simple random walk, beta = infinity is the self-avoiding walk. It is also sometimes referred to as the Domb-Joyce model. And why did I mention all of this? Because basically, if you take a model of this sort, with phi satisfying these conditions, everything I am going to say works — it actually works in a simpler way than for the strictly self-avoiding walk, which is in some sense the most difficult case. OK? So all these things were introduced to model polymers — long chains of molecules; you can think of plastic or DNA. And the goal, at least Flory's goal, was to understand how the polymer places itself in a solvent, in a liquid. OK. So that was just an introduction of the objects; now let's start the mathematics. So the second section is connective constants. Maybe one of the first questions you can ask yourself, before you really dive into the geometry of these walks, is to understand the number of such walks. From a physics point of view, it is the size of my configuration space: what is my free energy, my entropy, how do I compute this?
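Going back to the penalized measures for a moment, the Domb-Joyce weight exp(−Phi(gamma)) with phi(k) = (beta/2) k(k − 1) is easy to write down. This is a minimal sketch (the function name is mine); a self-avoiding path keeps weight 1, and each vertex visited twice costs a factor exp(−beta).

```python
from collections import Counter
from math import exp

def domb_joyce_weight(gamma, beta):
    """Unnormalised weight exp(-Phi(gamma)) with phi(k) = (beta/2) k (k-1)."""
    occupation = Counter(gamma)          # L_x(gamma): number of visits to each vertex x
    phi_total = sum(beta / 2 * k * (k - 1) for k in occupation.values())
    return exp(-phi_total)
```

At beta = 0 every walk has weight 1, recovering the simple random walk; as beta grows, only walks with few double visits keep non-negligible weight, and in the limit only self-avoiding walks survive.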
So the first question is: what is c_n, which I define from now on as the number of self-avoiding walks of length n? OK, for very small values you can compute it explicitly, just by enumerating. But I want to draw your attention to one thing: even with big computers and smart algorithms — and these are not trivial algorithms at all; this is work by the Australian teams — the best one can do on Z^2 is to enumerate c_n exactly up to c_71, and you get roughly 4.19 × 10^30. This number grows very fast, and you are very, very soon blocked — think about it: it is very hard to compute this recursively. It is very difficult to see how you would compute c_72 using c_71, simply because there are many walks of length 71 that cannot even be extended by one step. It is a highly non-Markovian process. On the hexagonal lattice one can go a little farther, because the degree is smaller, so one can compute up to length 105, and you get numbers of the same type. You can of course do it on the triangular lattice and so on — I could give you many examples. But the important point is this: one can compute the first 105 values of the sequence c_n, and if you feed them to any program recognizing known sequences or guessing formulas, you get nothing. Nothing pops out; there does not seem to be a closed formula for these values. There is one exception, which I can mention, but it is a completely trivial one: on T_d it is of course trivial to compute the number of self-avoiding walks. For the first step you have d + 1 choices, then you can never go back, so you have d choices at every subsequent step, and you never run the risk of closing a cycle. So you understand that cycles are exactly what prevents you from computing this kind of quantity.
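To get a feel for how fast c_n grows — and why exact enumeration stalls so quickly — here is a naive backtracking count on Z^2. This is only a sketch (the function name is mine), nowhere near the clever transfer-matrix algorithms the enumeration records rely on.

```python
def count_saws(n):
    """c_n on Z^2 by depth-first backtracking (feasible only for small n)."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def extend(path, visited, remaining):
        if remaining == 0:
            return 1
        x, y = path[-1]
        total = 0
        for dx, dy in steps:
            v = (x + dx, y + dy)
            if v not in visited:          # self-avoidance constraint
                visited.add(v)
                path.append(v)
                total += extend(path, visited, remaining - 1)
                path.pop()
                visited.remove(v)
        return total

    return extend([(0, 0)], {(0, 0)}, n)
```

The running time grows exponentially (roughly like mu_c^n), so this naive approach is only usable for n up to around 20, which makes the record n = 71 all the more impressive.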
So here: no exact formulas — at least none that we know of, and probably none at all. That can look like bad news, but what you can do is relax the question a little and say: OK, maybe I do not want to compute c_n exactly, but I want to compute its rate of growth. So the second question: compute the rate of growth. If you are on Z^d, for instance, you easily see that c_n ≤ 2d (2d − 1)^{n−1}: for the first step you have 2d choices, and then at most 2d − 1 choices each time. But you also see easily that c_n ≥ d^n, simply because if you only go in positive directions — give yourself a basis e_1, ..., e_d and always take steps along the oriented edges e_1, ..., e_d, never going back — then you are automatically self-avoiding at every step, and you have d choices each time. So you know that c_n grows exponentially fast, and the natural question is: what is the rate of growth, if any? That is one of the first results on the self-avoiding walk. It is a very small and easy result, but I want to stress that it is actually a deep proof, in the sense that it has been reused in many other instances. It is due to Hammersley, in 1954 — so you see, it really did not take long. The statement is that for any lattice L, the limit as n tends to infinity of c_n^{1/n} exists; it is a certain constant, and this constant is between 1 and the degree of L minus 1. Here I really want to highlight that the non-trivial fact is the existence of this limit. So let's prove it — you are going to see it is one line. If you take a walk of length n + k and you cut it after n steps, the first part of the walk is a self-avoiding walk of length n, and the second part is a translate of a self-avoiding walk of length k.
So that immediately tells you that c_{n+k} ≤ c_n c_k for any n, k ≥ 0. This, plus the fact that c_n ≤ d(d − 1)^{n−1} where d is the degree, gives you, by Fekete's subadditivity lemma, that c_n^{1/n} converges to the infimum of the c_k^{1/k}. And that is the end of the proof. So you see that we actually proved something more, namely that furthermore c_n ≥ mu_c^n for every n ≥ 1. A very simple proof, but it is going to be very useful for us. OK, if you have any questions, stop me — for now it is a little simple, but it is going to get less simple later on. Yes? — Do we have a rate of convergence? — We are going to get to that, actually. One of the beautiful things about the self-avoiding walk is that it is a very elementary model, so I will try to keep things elementary; if I do not manage, you just stop me. So, examples. mu_c(T_d) = d — that is simple; those of you who are completely bored can try to prove it. And I will come back to this later: we will see that mu_c of the hexagonal lattice equals the square root of 2 plus square root of 2 — see lecture 2. But unfortunately, apart from these cases, we basically have no other examples. You can add mu_c(Z), but that is trivial. So if you look at mu_c(Z^2), for instance, you can approximate it using numerics. For an upper bound on mu_c, you can take c_71^{1/71}: that gives a rigorous upper bound on mu_c for Z^2. And you end up with something like 2.6381585..., which is indeed smaller than 3 and larger than 2, as we had before.
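The two facts we just proved — submultiplicativity, and that every c_k^{1/k} is an upper bound on mu_c = inf_k c_k^{1/k} — can be checked on data. Below I quote the standard small-n enumeration values of c_n on Z^2 (treat the table as input data, not something this sketch derives):

```python
# Small-n enumeration values of c_n on Z^2 (quoted from the standard tables).
c = {1: 4, 2: 12, 3: 36, 4: 100, 5: 284,
     6: 780, 7: 2172, 8: 5916, 9: 16268, 10: 44100}

# Submultiplicativity c_{n+k} <= c_n * c_k: the one-line cutting argument.
for n in range(1, 10):
    for k in range(1, 11 - n):
        assert c[n + k] <= c[n] * c[k]

# Each c_k^{1/k} is an upper bound on mu_c = inf_k c_k^{1/k} (Fekete),
# and the sequence drifts down toward mu_c(Z^2) ~ 2.638.
roots = [c[n] ** (1.0 / n) for n in range(1, 11)]
```

With only ten terms the roots are still far from 2.638 (the last one is about 2.91), which illustrates how slowly c_n^{1/n} converges — hence the interest in the sharper bounds discussed next.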
And it is some number. — Do we know its digits? — Sorry? — Do we know the digits for this? — Yes; no, I did not just make them up; let's see if somebody can tell. — You said you had an upper bound, so I was not really sure. — So what is known rigorously is maybe five digits or something like that, but this is indeed numerics for the later ones. I could check, but indeed you are right. OK, so this value is approximate. And I am not going to give you Z^3 and Z^4, but of course there are estimates for them too. So the bad news is that even with this simplified question, we still do not really have exact formulas. OK, so that was for self-avoiding walks. Now, what do we get if we look at self-avoiding bridges or self-avoiding polygons? Proposition 1.2 is going to bound these other quantities. Here, just to simplify, let's think of Z^d — actually any graph embedded in R^d would work, but there is no need to make things complicated. Then for every n we have the following: b_n, which from now on denotes the number of self-avoiding bridges of length n, satisfies b_n ≤ mu_c^n; and p_n, which by definition is the number of self-avoiding polygons of length n, satisfies p_n ≤ (d − 1) mu_c^n. So we get the converse inequalities. Let's prove this. Look at the first object, bridges, and observe that we have exactly the opposite property to the one for walks: if I concatenate a bridge of length n with a bridge of length k, I end up with a bridge of length n + k. And observe that here it was important to make a strict distinction between the left and the right: there is no risk that the two pieces intersect. So that tells me b_{n+k} ≥ b_n b_k: starting from a self-avoiding bridge of length n and a self-avoiding bridge of length k, I create a self-avoiding bridge of length n + k.
And I do it in a one-to-one fashion: if I know n and k, it is one-to-one. So that immediately tells me, by Fekete (the supermultiplicative version), that the limit of b_n^{1/n} exists and equals the supremum of the b_k^{1/k}. And this is obviously at most mu_c, simply because b_k ≤ c_k and c_k^{1/k} tends to mu_c. OK? Note — and this is important — I did not claim equality here; for now it is an inequality. OK, let's look now at polygons. Here the observation is much the same; it is an epsilon more subtle, but not much. Imagine I give you a polygon of length n and another polygon of length k. I claim there is a translation — actually there may be more than one, but at least there is one — that allows me to place the second polygon just to the right of the first one, so that the rightmost edge of the first faces the leftmost edge of the second. Now, if I erase these two facing edges and replace them by the two edges joining them, I end up with a self-avoiding polygon of length n + k. OK? So if you think about it: given gamma and gamma', I define a concatenation which translates the second polygon to put it exactly so that the right side of the first is just next to the left side of the second, and then swaps the edges around the connecting square, and I end up with a self-avoiding polygon of size n + k, which I will call concat(gamma, gamma'). The point here is that this is not a one-to-one map anymore. The reason is that in dimension two it looks like a one-to-one map: there is essentially one way to translate, place them like that, and then switch. But in higher dimension, it could be that the leftmost edge is not aligned the way we drew it — it could point in another direction, for instance. Right?
So actually, you may also need to rotate your polygon to make the leftmost edge point in the right direction before you can do the swap. Now, for every concatenated polygon, it is easy to reconstruct the two pieces: there is only one place where you can cut so that one of the two parts has length k — if there were another candidate square, cutting there would not give a piece of length k. So the cutting place is determined. But once I have identified the two pieces, I still have d − 1 possible rotations of the second polygon to start with. So concat is a (d − 1)-to-1 map. Thus p_n p_k ≤ (d − 1) p_{n+k}. And now you notice that this implies that p_n / (d − 1) is supermultiplicative, so (p_n / (d − 1))^{1/n} converges to the supremum, which is the quantity we wanted, and which is also at most mu_c — hence the second bound. We are starting slow, but that is OK. Just one remark before the break. Sometimes you will see polygon counts that are not taken up to translation and reorientation — I mean reorientation of the ordering, and translation. So then be careful: you do not get p_n ≤ constant times mu_c^n. If you look at p_n^•, the number of rooted polygons with an orientation, then you should really be careful: you get p_n^• = 2n p_n, so at best p_n^• ≤ 2n (d − 1) mu_c^n. This is important if one day you end up working on self-avoiding walks: it is very easy to "prove" theorems if you start from p_n^• ≤ mu_c^n — the problem is that this is not what is true. It is actually wrong. Really wrong.
Indeed, p_n^• grows faster than 2n. So that is just a remark to prevent further disasters. And now, for the people who got bored, since it is nearly the break, I can answer the exercise: how do you prove that mu_c of the ladder is (1 + sqrt 5)/2? The simplest way is to notice that b_n is actually very simple to compute on the ladder. Why? Because b_{n+1} = b_n + b_{n−1}: if you end up at some height, whatever the way you got there, you have exactly two ways to extend. I just realized that maybe one needs to define a variant b̃_n, because in certain ending configurations you have two ways of extending, so the exact recursion needs a little care — you can easily check it, with initial conditions like b̃_0 = b̃_1 = 1. Anyway, you recognize Fibonacci. And then what you find is that c_n is clearly larger than b_n, and it cannot be much larger: on the ladder, a walk may backtrack for a certain number of steps at the beginning and again at the end, so you need to reconstruct an initial and a final piece, but you have only about n choices for each of them. So you get c_n ≤ n^2 b_n, roughly. So if b_n grows like ((1 + sqrt 5)/2)^n, then c_n also grows like that. OK, so let's make a break, like a five-minute break, and when we come back we will prove the first non-trivial result on the self-avoiding walk, this Hammersley-Welsh bound. And just one thing I notice I wrote: p_n on the ladder is actually tiny — the number of polygons of length n is really not growing at all; it is bounded by something small once you decide which translate you take.
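The ladder computation just described can be sketched in a few lines. The initial conditions b_0 = b_1 = 1 are my guess for the tilde-variant; the point is only the growth rate, which converges to the golden ratio (1 + sqrt 5)/2.

```python
def ladder_bridge_counts(n_max):
    """Counts obeying b_{n+1} = b_n + b_{n-1} (Fibonacci), as for ladder bridges.

    Initial conditions b_0 = b_1 = 1 are assumed, not derived.
    """
    b = [1, 1]
    while len(b) <= n_max:
        b.append(b[-1] + b[-2])
    return b
```

Since c_n is squeezed between b_n and roughly n^2 b_n, the ratio b_{n+1}/b_n converging to the golden ratio gives mu_c(ladder) = (1 + sqrt 5)/2.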
Something like p_{2n} ≤ n − 1, maybe — I do not know why I write this kind of thing, because I know I am going to get it slightly wrong; don't judge me. The important point is that it does not grow like ((1 + sqrt 5)/2)^n. So it is not always true that the growth rate of polygons matches the growth rate of walks — I see that, as a good PhD student, you want to prove me wrong; you will manage, don't worry, there is no problem with that. It is the worst kind of thing, because it depends whether you count the loops or the intervals between the loops; maybe it is n. Anyway. OK, so let's stop, take five minutes, and then start with the Hammersley-Welsh bound. So: you proved that b_n^{1/n} converges, that c_n^{1/n} converges, and that p_n^{1/n} converges — c_n kind of from above, p_n and b_n from below. But at this stage, we do not know whether they all converge to the same thing or not, right? So the goal of this section is to prove that, at least on Z^d, they all converge to the same quantity: they all converge to the connective constant. That would be very useful, because indeed somebody asked: what bounds do we know on c_n? We know that c_n ≥ mu_c^n, and here we have b_n. Imagine I can relate c_n to b_n in an explicit way; then I will actually get an upper bound, because at this stage the only thing I know is that c_n ≤ mu_c^n e^{o(n)}, and I have absolutely no idea what the e^{o(n)} is. So my goal in this section is to prove the following theorem — Theorem 1.3, Hammersley-Welsh, dating back to 1962. It says the following: on Z^d, there exists a constant c such that c_n ≤ e^{c sqrt n} b_n for every n ≥ 1.
Actually, let's maybe put mu_c^n here, so you get an upper bound like that; and we will see in the course of the proof that we actually relate b_n to c_n. OK, so in particular you get that c_n is smaller or equal to e^{c sqrt n} times mu_c^n. This goes back to '62, and it is still the best known result in dimension 2. Now before I go into the proof, let me tell you what is the predicted behavior. The conjecture is that c_n behaves like n^{gamma - 1 + o(1)} mu_c^n, so n to a certain power, where gamma is equal to 43/32 if d equals 2, is equal to something like 1.162 if d equals 3, and is equal to 1 if d is larger or equal to 4. So in dimension larger or equal to 4 the corrections are subpolynomial; in fact, in dimension 5 and more they will be constant, so c_n will be comparable to mu_c^n. In dimension 4 you have a logarithmic correction; I just mention it, but let's not go further than that. In dimension 3 you have some numerical value. In dimension 2, c_n should be equal to n^{11/32} mu_c^n, equal up to subpolynomial corrections. Exercise: prove it. So this is a big conjecture, I would say, on self-avoiding walks: to get this 11/32. And I will tell you a little bit more about that next week. Just a remark: here I will really prove the theorem on Z^2, but you could also do it on the hexagonal lattice. So Hammersley-Welsh works on H, and actually, same thing, you will get c_n smaller than e^{c sqrt n} mu_c^n. And the truth should also be that c_n is like n^{11/32} mu_c^n. But I really want to highlight something miraculous there: in the first case this is mu_c of Z^d, so in particular, on Z^2, it behaves like n^{11/32} mu_c(Z^2)^n.
Here, it's mu_c of the hexagonal lattice. Now mu_c of Z^2 is larger than 2, and mu_c of the hexagonal lattice, it's still written there, and it will be proved next week, is smaller than 2. So the number of self-avoiding walks on the hexagonal lattice and on the square lattice is absolutely not the same; at exponential order they have nothing to do with each other. Yet the correction term in both cases is n^{11/32}. That's a beautiful thing: it's a universal statement, this term is universal, and I will tell you a little bit more why next week. This type of universal result is exactly what we are aiming for when we do statistical physics. OK, so that was the discussion before the proof of the theorem; now let's dive into the proof. We are going to go in two steps. One step will be actually quite simple; the second one will be a little bit more subtle, and that will be the core of the proof. So the first step is to relate self-avoiding walks to half-space self-avoiding walks. So define: gamma is a half-space self-avoiding walk if simply gamma_i · e_1 is larger than 0 for every i larger or equal to 1. So it remains in the right half-space. OK? Let HSAW_n be the set of half-space self-avoiding walks of length n, and here I'm starting from 0. And let's define h_n to be the cardinality of this set of half-space walks of length n. The first lemma, Lemma 1.4, says that c_n is smaller or equal to the sum for k equals 0 to n of h_k times h_{n+1-k}. In particular, my goal will be to bound h_k, not c_n. So how do I prove that? Exactly like before, the observation is going to be quite simple: you can cut a self-avoiding walk into two half-space self-avoiding walks. So imagine you have your walk like that.
Well, define r(gamma) to be the largest S such that gamma_S · e_1 is the max of the gamma_s · e_1. So you take the farthest point on the right; there may be several, you take the last one. OK? And what you observe is the following: the piece after this point is, by definition, a half-space walk of length n minus r(gamma). And the beginning, if you go in the reverse direction, is almost a half-space walk, except that it can touch the maximal line several times here; so you just need to add one edge. OK? So if you take a walk, you do this cutting procedure: cut goes from self-avoiding walks of length n into the union for k equals 0 to n, and consists in taking gamma and cutting it. You end up with (gamma_1, gamma_2), where gamma_1 is the walk from r(gamma) to n, and gamma_2 is the walk from r(gamma) back to 0, reversed, when adding one more step before. So gamma_1 is the green walk, and for gamma_2 I do one step and then go in reverse for the second guy. Then this is a true map, OK? And it's a one-to-one map: you can reconstruct the original walk automatically. If I give you (gamma_1, gamma_2), you just remove the extra step from gamma_2, you reverse gamma_2, and you glue it to gamma_1; you get back the walk gamma. So that automatically gives you exactly the bound: the fact that cut is one-to-one gives the result. Agreed? OK. So that was the easy part. Now let's prove Theorem 1.3. Actually, let me state one more lemma; I mean, let's isolate one lemma, it's cleaner like that. So Lemma 1.5, and combining the two will give us the result.
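The cutting lemma we just saw can be sanity-checked by brute-force enumeration on Z^2 for small n. This is my own sketch, not from the lecture, and `count_walks` is a hypothetical helper:

```python
# Brute-force check of Lemma 1.4 on Z^2: c_n <= sum_k h_k * h_{n+1-k}, where
# h_n counts half-space walks (every vertex after the origin has x >= 1).
STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def count_walks(n, half_space=False):
    """Count self-avoiding walks of length n from the origin by depth-first search."""
    def rec(last, visited, remaining):
        if remaining == 0:
            return 1
        total = 0
        for dx, dy in STEPS:
            nxt = (last[0] + dx, last[1] + dy)
            if nxt in visited or (half_space and nxt[0] < 1):
                continue
            visited.add(nxt)
            total += rec(nxt, visited, remaining - 1)
            visited.remove(nxt)
        return total
    return rec((0, 0), {(0, 0)}, n)

N = 6
c = [count_walks(n) for n in range(N + 1)]
h = [count_walks(n, half_space=True) for n in range(N + 2)]
for n in range(1, N + 1):
    assert c[n] <= sum(h[k] * h[n + 1 - k] for k in range(n + 1))
print(c)  # [1, 4, 12, 36, 100, 284, 780]
```

Note that for n = 1 the bound is sharp: c_1 = 4 and h_0 h_2 + h_1 h_1 = 3 + 1 = 4.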
Lemma 1.5 is the following: h_n is smaller or equal to P_n times b_n, where P_n is the cardinality of the set of sequences n_1 smaller or equal to ... smaller or equal to n_k such that n_1 + ... + n_k is smaller or equal to n. So what is that? OK, let me write it like that: define small p_n to be the same thing but with the sum exactly equal to n. This is what is known as the number of partitions of the integer n. Any k? Yes, any k. And capital P_n is just the sum of p_k for k smaller or equal to n. OK? So what I'm saying is that in order to get from h_n to b_n, you need to pay this entropy factor, which is basically the number of partitions of an integer smaller or equal to n into integers n_1 up to n_k in increasing order. Decreasing or increasing, whatever. OK, so let's prove this lemma, and then we will see how we get the theorem. So what we are going to do is create a map from the set of half-space self-avoiding walks into the set of self-avoiding bridges. OK? So define unfold, which to gamma associates unfold(gamma), obtained as follows. We now have a half-space walk, so we take the last point which is farthest on the right, and we just unfold the part of the walk after it, meaning we take the reflection with respect to the vertical line through that point. OK? That's the first step of my unfolding operation. Now you take the new last farthest point on the right, you unfold the remainder, and you keep doing that until you end up with a bridge. In finitely many steps, you are going to end up with a bridge. OK, that's my unfolding operation, a well-defined map. Is the definition clear for everybody? Sorry, so let me define it properly. Imagine this was r_minus(gamma): the point which was the last one farthest on the left.
Here you define r_plus(gamma) to be exactly the same, but with the max. OK? And what you do, so this is r_plus(gamma): you take the mirror image of what comes after this point. The reflection is well-defined. Now you have a new walk, gamma^1; you repeat the operation, take r_plus(gamma^1), unfold what comes after, et cetera. And in a finite number of steps you will end up with a bridge; you stop at that stage. And what is going to be important for us, the important thing, is that unfold is at most P_n-to-1. Why is it true? If I give you a bridge, how many half-space walks could be mapped to this bridge? Well, in order to be able to reconstruct the walk, what do I need to know? I need to know where I unfolded, so I need to know these widths n_1, n_2, n_3, n_4. But these widths have the specific property that they are decreasing. Why? Because n_3 is basically this distance, which is definitely smaller than that one, and n_4 is smaller than n_2 for the same reason, et cetera, et cetera. So in order to reconstruct, I need the widths n_1 > n_2 > ... > n_k of the places where I unfolded. But n_1 + n_2 + ... + n_k is exactly equal to the width of my bridge, which, of course, is smaller or equal to n. Therefore the number of possible pre-images is bounded by capital P_n. The fact that I constructed this P_n-to-1 map immediately tells me that h_n is smaller than P_n times b_n. Let us conclude the proof now. Yes? Can you get the same inequality with little p instead of big P if, instead of taking the positions of the lines, you take the times at which you reach them? The times — it's a very good question. The problem, if you take the times, is that they are not decreasing anymore.
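Coming back to the lemma itself for a moment: the inequality h_n <= P_n b_n can also be verified by brute force on Z^2 for small n. Again a sketch of mine, with made-up helper names; bridges are the half-space walks satisfying 0 < x_i <= x_n:

```python
# Numerical check of Lemma 1.5 on Z^2: h_n <= P_n * b_n, where P_n is the
# number of partitions of integers <= n, and b_n counts bridges.
STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def half_space_walks(n):
    """All half-space self-avoiding walks of length n, as vertex tuples."""
    walks = []
    def rec(path, visited, remaining):
        if remaining == 0:
            walks.append(tuple(path))
            return
        x, y = path[-1]
        for dx, dy in STEPS:
            nxt = (x + dx, y + dy)
            if nxt in visited or nxt[0] < 1:
                continue
            visited.add(nxt)
            path.append(nxt)
            rec(path, visited, remaining - 1)
            path.pop()
            visited.remove(nxt)
    rec([(0, 0)], {(0, 0)}, n)
    return walks

def is_bridge(walk):
    xs = [v[0] for v in walk]
    return all(xs[0] < x <= xs[-1] for x in xs[1:])

def cumulative_partitions(n_max):
    """P_m = sum of partition numbers p_0 + ... + p_m (standard coin DP)."""
    p = [0] * (n_max + 1)
    p[0] = 1
    for part in range(1, n_max + 1):
        for m in range(part, n_max + 1):
            p[m] += p[m - part]
    return [sum(p[: m + 1]) for m in range(n_max + 1)]

N = 6
P = cumulative_partitions(N)
for n in range(1, N + 1):
    walks = half_space_walks(n)
    h_n = len(walks)
    b_n = sum(is_bridge(w) for w in walks)
    assert h_n <= P[n] * b_n
```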
You see, the width is decreasing, but the time is not: you could imagine a walk which goes far to the right like that and then comes back, ending even just at distance 1. What is the Hammersley-Welsh decomposition of that? You unfold this whole part, right? And then you are done: one unfolding is sufficient to give you a bridge. But notice that the first piece, in time, is huge compared to the second one. So you really want to use the width. Actually, it's rather good news that we use the width, because, for instance, we will see in the third lecture that you can really prove that the width of a walk of length n is typically o(n); the walk is not actually going ballistically. So it's actually a better bound. OK, so I highly recommend trying this type of thing: modulating around the proof that I'm presenting. Because you are going to see how much we are walking on a thin line when we work with self-avoiding walks, in the sense that most of these proofs, if you try to modify them a little bit, break down completely. For instance, this argument seems wasteful in so many respects; it really looks like you are doing something stupid, like you could do much, much better than that. It took 50 years to get better than that. So it's not like you can get super good bounds easily; you are going to see. OK, anyway, let's combine the things. So how do you prove the theorem? Well, you know that c_n is smaller or equal to the sum for k equals 0 to n of h_k h_{n+1-k}; this is Lemma 1.4. And by Lemma 1.5, you end up with the sum for k equals 0 to n of capital P_k capital P_{n+1-k} times b_k b_{n+1-k}. But b_k b_{n+1-k} is smaller than b_{n+1}, right? Because the concatenation of two bridges is a bridge. So let's put b_{n+1} here, because I don't want to bother with it. Of course, you would like to end up with b_n, but let's put b_{n+1}.
Sorry, I changed my mind about trying to improve it to b_n; that's exactly the type of thing you think is a good idea on the spot, and then three pages later you realize it made your life way worse. So let's leave it like that. OK, is b_{n+1} clearly smaller than a constant times b_n? No, not clearly: if you remove one step from a bridge, you don't necessarily end up with a bridge. So I will stay with b_{n+1}; we will see how I pay for this later. So what remains? Well, the only thing that remains to prove is a bound on P_n, and one can use a result which goes back a long time, to Hardy and Ramanujan, which proves that p_n is smaller than e^{pi sqrt(2n/3)(1+o(1))}. The Hardy-Ramanujan estimate is actually quite sharp; we are just going to use that roughly P_n is bounded by e^{pi sqrt(2n/3)}, so e to the order of sqrt n. If you plug this in, you end up with a bound in e^{c sqrt n}. Since maybe you didn't all see this bound, and it's a kind of simple bound to get, let me give you a proof of this p_n smaller than e^{pi sqrt(2n/3)} — not the sharpest one, of course. It has nothing to do with self-avoiding walks, but I can't resist. So it starts from the following observation. If you take f(x), the generating function of the partitions of integers, then f(x) is the product for n equals 1 to infinity of 1/(1 - x^n). This I leave to you as an exercise, for people who never saw it: this is the generating function of the partitions. And notice also that log f(x) is the sum for n equals 1 to infinity of log(1/(1 - x^n)).
This, if you expand, is the sum for n equals 1 to infinity and k equals 1 to infinity of x^{nk}/k. Am I doing something wrong? Divided by k, otherwise it's not right; I just expanded the logarithm. Then, if you exchange the n and the k, you end up with the sum over k of x^k/(k(1 - x^k)), because the sum over n becomes just a geometric series in x^k, right? OK. So let's go back there. Well, for any x smaller than 1 — and notice, by the way, that this immediately tells you that p_n is growing subexponentially fast, because the radius of convergence of f is clearly 1; so if you would just like to prove that there are not exponentially many more self-avoiding walks than bridges, this would already be sufficient; here we are going to get something more refined — for any x smaller than 1, p_n x^n is clearly smaller than f(x), so it's smaller than the exponential of log f(x). So p_n is smaller than the exponential of the sum for k equals 1 to infinity of x^k/(k(1 - x^k)), and here I'm going to put minus n log x, since p_n x^n is smaller than f(x). So this gives me that immediately. And then the only observation you need is that you want to optimize this bound over x. In order to see what the best choice is, you use two elementary bounds. First, (1 - x^k)/(1 - x) is larger than k x^{k-1}. And second, log(1/x)/(1 - x) is smaller than 1/x. These are the two things that you can get from — I don't know how you say it in English — the inegalite des accroissements finis, the mean value inequality. That would be the French touch of this talk. So when you plug these two things into the bound, the first term is going to give you a sum of 1/k^2.
So you are going to get pi^2 x/(6(1 - x)) from the first term, and when you plug in the second bound, you end up with n(1 - x)/x. Why is it good to do it like that? Because you see that here there is a competition: one term is like lambda and the other like 1/lambda, where lambda is this number (1 - x)/x, and you want the best lambda possible, the one for which this term is equal to that one, which is roughly x equal to 1 minus a constant over sqrt n. When you plug it in, you end up with the right answer. So you optimize: picking x such that pi^2 x/(6(1 - x)) equals n(1 - x)/x gives actually that p_n is smaller than the exponential of pi sqrt(2n/3), with a (1 + o(1)) here. So if you want to do better than that, you can of course go to Hardy-Ramanujan and so on, but this would be, anyway, completely useless for us. But it's kind of funny that you can get this e^{O(sqrt n)} so easily. And the simple proofs of these facts are actually not so old; I mean, it's a very simple proof, but it took some time. OK. Yes? We used a power series, but we have an infinite product right here, and we also get one right here; couldn't we use some relations between the generating functions? I'm going to get exactly to that: you are saying that this relation seems to suggest inequalities in terms of the generating functions, and it does, and we are going to use that. That's a very good point. OK. But before that — this was for b_n, so now we know that there are not many fewer bridges than walks. Let's see what we can get for p_n. So, where was I? I guess maybe I'm going to erase there. So you see, it's full of small arguments which are kind of cute, and I like it, at least. So Proposition 1.6: for every n, if I look at p_{2n} dot — so remember, the dot means the polygon is rooted and oriented — then this is larger or equal.
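The bound we just derived can be checked numerically against exact partition numbers. This is a sketch of mine, not from the lecture; the DP below is the standard coin-counting recursion, and the inequality tested is the non-asymptotic form p_n < e^{pi sqrt(2n/3)}:

```python
from math import exp, pi, sqrt

# Partition numbers p_n via the standard DP, compared with the bound
# p_n <= exp(pi * sqrt(2n/3)) derived above (not the sharp
# Hardy-Ramanujan asymptotics, just the elementary upper bound).
def partition_numbers(n_max):
    p = [0] * (n_max + 1)
    p[0] = 1
    for part in range(1, n_max + 1):
        for m in range(part, n_max + 1):
            p[m] += p[m - part]
    return p

p = partition_numbers(100)
for n in range(1, 101):
    assert p[n] < exp(pi * sqrt(2 * n / 3))
print(p[10], p[100])  # 42 190569292
```

At n = 100 the bound gives about 1.4e11 against the true value 190569292, so it is generous but of the right e^{O(sqrt n)} order.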
And maybe we should take d larger or equal to 2. It's larger or equal to b_n^2 divided by — I mean, there is a constant, let's say K — divided by K n^{d+2}. So since we know we don't have exponentially fewer bridges than walks, we also get the same for polygons. And the argument — I mean, if you try to find one yourself, it takes a little bit of time. Here I'm kind of giving you the result all the time, so you can't judge, but try to ask somebody to do it; now that you saw the proof, it's too easy for you, but try to ask a friend to prove these things. You will see, it's not so simple. So here, the idea of the construction: we feel like we should take two bridges and do something with them. So define b_n(x) to be the cardinality of the set of self-avoiding bridges of length n ending at x. I'm going to take two bridges ending at x, and I'm going to modify them in such a way that I can create a polygon out of them. So imagine this is your bridge, OK? And the idea is going to be the following. You have a direction, like that diagonal, and you pick the translate of the diagonal line in such a way that the walk never crosses it, maybe touches it several times like that, and you take the last touching point, OK? This we did several times for other directions; I'm not going to redefine it carefully. But now I have two walks: I have this walk here, and I have the rest of the walk. And notice that these walks are such that this one does not go left of the line, and that one does not go right of it. So observe here that if I pick this piece and I translate it over here, I end up with a walk from there to there. I'm just checking that I'm not doing something stupid — no, that seems to be OK. These red guys, if I put them here, give me a walk from there to there. OK?
Now imagine I take two bridges ending at x and do this cutting twice. If I take the second bridge, make the same construction reversed, and put it on top, I can construct exactly a walk from there to there above the first one. So out of two bridges ending at x, by cutting there, translating, doing the same for the other one, and reversing, I create a polygon. Voila. You see, once you have the construction, it's simple. So let's call this map glue: from pairs of self-avoiding bridges of length n ending at x into self-avoiding polygons of length 2n — OK, maybe I don't define it formally, I draw you the thing. And here, the good point is: how-many-to-one is this map? Well, in order to reconstruct, I need to know, for each one of the two bridges, where I cut. So I need to know the index j here for the first one and j prime for the second one. I mean, maybe there are smarter ways of reconstructing, but definitely if I know j and j prime, then I can reconstruct my walks. So glue is at most n^2-to-1. And, OK, I'm cheating a little bit, because here you see the pieces could touch again — there may be self-touchings; you should be a little bit careful with that. What you can do is just add one edge to be certain it doesn't intersect. But at this stage, I'm going to allow myself to be a little bit informal. OK, so what does it give me? It gives me that p_{2n} dot is larger or equal to b_n(x)^2/n^2. Actually, what you can see is that for different x, you get polygons that are really of different type, so you can even sum over x: the sum of b_n(x)^2/n^2 is a lower bound for p_n dot. Sorry? 2n dot — 2n dot, sorry, exactly. And here, if you use Cauchy-Schwarz, you can relate this to the sum over x of b_n(x), squared.
So this thing squared is smaller than the sum of b_n(x)^2 times the number of possibilities for x. Well, x has n choices in the e_1 direction, and at most 2n+1 choices in each of the d-1 other directions. So here you are going to get n times (2n+1)^{d-1}. And the sum over x of b_n(x) is just b_n; that's just the definition. So overall you get p_{2n} dot larger or equal to b_n^2 over a constant times n^{d+2}. This is going to be actually crucial for us next time, because it tells you that if b_n is close to mu_c^n, so is p_n. OK, so let's finish this lecture by mentioning the following. You see, here we have a natural law on self-avoiding walks of length n. From a statistical physics point of view, there is something a little bit disappointing with that, which is that usually you want to take n to infinity: you would like to have an infinite-volume law, a law on infinite walks. And this, we are going to see, is quite difficult. Actually, we do not know how to do it in general, but we know how to do it for bridges, and I want to explain that to you. So: infinite self-avoiding bridges and the Kesten relation. OK, so here we are. As I said, the goal here is to construct an infinite self-avoiding walk, and in order to do that, we are going to study the generating functions a little bit — so now I'm coming back to your idea. Define G(x) to be the sum for n equals 0 to infinity of c_n x^n, and define B(x) to be the sum for n equals 0 to infinity of b_n x^n. So these are the generating functions for self-avoiding walks and for self-avoiding bridges. And the main proposition, what will be important for us to prove, is the following: G and B have radius of convergence x_c, which is, by definition, 1/mu_c — that part is actually completely straightforward. Furthermore, the limit as x tends to x_c of G(x) is equal to the limit as x tends to x_c of B(x): both are equal to plus infinity. So our generating functions blow up.
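The Cauchy-Schwarz step just used can be illustrated numerically on Z^2 (so d = 2). My own sketch, with hypothetical helper names; the support bound n(2n+1) is the two-dimensional count of possible endpoints:

```python
from collections import Counter

# Sketch of the Cauchy-Schwarz step on Z^2 (d = 2):
# b_n^2 = (sum_x b_n(x))^2 <= n * (2n+1) * sum_x b_n(x)^2,
# since the endpoint x of a bridge has at most n choices in the e_1
# direction and at most 2n+1 in the remaining one.
STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def bridge_endpoints(n):
    """Counter {endpoint: number of bridges of length n ending there}."""
    ends = Counter()
    def rec(path, visited, remaining):
        if remaining == 0:
            xs = [v[0] for v in path]
            if all(0 < x <= xs[-1] for x in xs[1:]):
                ends[path[-1]] += 1
            return
        x, y = path[-1]
        for dx, dy in STEPS:
            nxt = (x + dx, y + dy)
            if nxt in visited or nxt[0] < 1:
                continue
            visited.add(nxt)
            path.append(nxt)
            rec(path, visited, remaining - 1)
            path.pop()
            visited.remove(nxt)
    rec([(0, 0)], {(0, 0)}, n)
    return ends

for n in range(1, 7):
    ends = bridge_endpoints(n)
    b_n = sum(ends.values())
    assert b_n ** 2 <= n * (2 * n + 1) * sum(v * v for v in ends.values())
```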
I mean, the generating functions of our walks blow up. Now, you may wonder what is non-trivial there. So, proof. Well, clearly the first claim is obvious: the radius of convergence is clearly 1/mu_c. Notice that it's clear now that we know that the rate of growth of b_n is the same as that of c_n: Hammersley-Welsh implies that both radii of convergence equal x_c. Now, the limit of G(x) — that is not difficult either, because G(x) is the sum of c_n x^n, but c_n is larger than mu_c^n, so this whole thing is larger than the sum for n equals 0 to infinity of (x/x_c)^n. So as soon as x tends to x_c, you get infinity. So what is going to be non-trivial is actually B(x), because b_n is smaller or equal to mu_c^n, so there it's absolutely unclear that you get what I claimed. Yet it's not that difficult. Because if you look at the relation from Lemma 1.4, what does it give in terms of generating functions? Well, Lemma 1.4 gives that G(x) is smaller or equal to (1/x) H(x)^2, where H(x) is the sum of h_n x^n, right? It's indeed a Cauchy product, so you end up with this inequality. In particular, if G goes to infinity, then so does H. Now the question is: can we get from H to B? Well, if you think about it, the decomposition from the unfolding was giving us a little bit more. Define B_T(x) to be the sum for n equals 0 to infinity of b_{n,T} x^n, where b_{n,T} is the number of bridges of length n and width T. If you restrict to bridges of width T like that, what did we get from the Hammersley-Welsh argument, from the second lemma? Well, from the second lemma, we got that H(x) is smaller or equal to the sum over k of the sum over widths T_1 > T_2 > ... > T_k of the product of the B_{T_i}(x), right?
That's exactly what we were getting from the Hammersley-Welsh proof, rewritten in terms of the generating functions; that was the claim. But this is just the product for T equals 1 to infinity of (1 + B_T(x)), which itself is smaller than the exponential of the sum of B_T(x) for T larger or equal to 1 — and this sum is B(x) minus 1, since the width-0 term is 1. So what did I just prove? I proved that H(x) is bounded by the exponential of B(x) minus 1. So this is actually more than just Hammersley-Welsh: you cannot deduce it from the result, only from the proof. But that tells you that if G(x) tends to infinity, so does H(x), and therefore so does B(x). And that's the end of the proof. But notice, for instance, that it gives you something absolutely non-trivial from the Hammersley-Welsh part. Corollary 1.8: for infinitely many n, b_n is larger or equal to mu_c^n divided by n^{1+epsilon}, because the sum of the b_n mu_c^{-n} is infinite. If you try to prove that directly, it's not something straightforward. And I'm almost done. So from these observations, I want to construct for you an infinite self-avoiding bridge, in the following fashion. There is a second corollary — maybe I just write it like that, it's a little bit nicer. So here is the last corollary of today and of this lecture. Define an irreducible bridge to be a self-avoiding bridge which cannot be cut into two self-avoiding bridges. So what does it mean, it cannot be cut? Let me give you two examples. This one is not an irreducible bridge, because it can be cut there, and I get two self-avoiding bridges. That's the simplest example; but of course, you don't need to do things that are that crazy.
But this one is an irreducible bridge, because there is no point where you can cut it in such a way that before and after the cut you have bridges. So an irreducible bridge, another way of putting it, is a bridge which crosses every line in between at least twice. That's an irreducible bridge. Now define B_irr(x) to simply be the sum over gamma irreducible bridge of x^{|gamma|}, where |gamma| is the length: the generating function for irreducible bridges. So Corollary 1.9 is giving me the following: B(x) is equal to 1/(1 - B_irr(x)); in particular, B_irr(x_c) is equal to 1. So let me prove that, and let me tell you what we do with it. It's really going to take three minutes; I'm stealing three minutes of your time. How do you prove that? Well, every bridge has a unique decomposition into irreducible bridges — really, think of these guys as your prime numbers. So that means that B(x) is the sum for k equals 0 to infinity of B_irr(x)^k, simply because B_irr(x)^k is the generating function of the bridges which have a decomposition into exactly k irreducible bridges. But this sum is just 1/(1 - B_irr(x)). Now, since B(x) is converging for every x smaller than x_c, B_irr(x) must be smaller than 1 for every x smaller than x_c. So: B(x) finite for every x smaller than x_c implies B_irr(x) smaller than 1 for every x smaller than x_c. But also, B(x) tending to plus infinity as x tends to x_c implies that B_irr(x) is increasing to 1. So by the monotone convergence theorem, B_irr(x_c) equals 1. Why is this good? It's very good because, you see, when you see a generating function of something equal to 1, as a probabilist you should feel joy: it means this is exactly the normalization of a probability measure.
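Before moving to the probability measure: the unique decomposition into irreducible bridges, i.e. the coefficient identity b_n = sum over m of i_m b_{n-m} hiding behind B(x) = 1/(1 - B_irr(x)), can be checked by enumeration on Z^2 for small n. Again a sketch of mine, with made-up helper names:

```python
# Check on Z^2 that b_n = sum_{m=1}^{n} i_m * b_{n-m}, where i_m counts
# irreducible bridges: this is the coefficient form of B(x) = 1/(1 - B_irr(x)).
STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def bridges(n):
    """All self-avoiding bridges of length n, as vertex tuples."""
    out = []
    def rec(path, visited, remaining):
        if remaining == 0:
            xs = [v[0] for v in path]
            if all(0 < x <= xs[-1] for x in xs[1:]):
                out.append(tuple(path))
            return
        x, y = path[-1]
        for dx, dy in STEPS:
            nxt = (x + dx, y + dy)
            if nxt in visited or nxt[0] < 1:
                continue
            visited.add(nxt)
            path.append(nxt)
            rec(path, visited, remaining - 1)
            path.pop()
            visited.remove(nxt)
    rec([(0, 0)], {(0, 0)}, n)
    return out

def is_bridge_piece(piece):
    # A translated sub-walk is a bridge iff 0 < x - x_0 <= x_end - x_0 inside.
    x0 = piece[0][0]
    xs = [v[0] - x0 for v in piece]
    return all(0 < x <= xs[-1] for x in xs[1:])

def is_irreducible(walk):
    n = len(walk) - 1
    for j in range(1, n):
        if is_bridge_piece(walk[: j + 1]) and is_bridge_piece(walk[j:]):
            return False
    return True

N = 6
b = [1] + [len(bridges(n)) for n in range(1, N + 1)]
i = [0] + [sum(is_irreducible(w) for w in bridges(n)) for n in range(1, N + 1)]
for n in range(1, N + 1):
    assert b[n] == sum(i[m] * b[n - m] for m in range(1, n + 1))
```

For instance b_2 = 3 on Z^2, of which 2 are irreducible, matching b_2 = i_1 b_1 + i_2 b_0 = 1 + 2.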
So this thing being equal to 1 at x equal x_c means that there is a very natural probability measure on irreducible bridges: just define P_irr(gamma) to simply be x_c^{|gamma|}. Just that. This is a probability measure on irreducible bridges: the sum over every possible element of my probability space gives me 1. But once you have a probability measure like that, what you can do is take independent copies of it and concatenate them. So now pick Gamma^1, Gamma^2, et cetera, an infinite sequence — maybe I'm going to write it with capital Gamma — of i.i.d. random variables with law P_irr. And just define Gamma to be the concatenation of Gamma^1, Gamma^2, et cetera. So you take an i.i.d. sequence of irreducible bridges, you concatenate them, and that gives you an infinite bridge. And this is actually a very nice object. So that's what we will call the infinite self-avoiding bridge measure: it's a probability measure on infinite self-avoiding bridges. Just to illustrate the fact that there is something magical there: it is a completely open question to define the natural probability measure on infinite self-avoiding walks. So, question: define a natural measure on infinite self-avoiding walks. This is open, and we will see that maybe there is a way, but it looks very complicated. Well, thank you very much for your attention.