roots. Here are the W's. These are all just constants; ignore those. The eigenvalue is just a very simple function of the roots: it's a product over the roots of 1/(1 - (1 - r^2)W). And the W's satisfy a simple polynomial equation, where this is what I was calling the constant on the previous slide. So now you can see that everything is perfectly computable. If I want to compute the eigenvalue, whose log is the free energy, all I need to do is solve this equation for this constant, look at the roots — the ones on the right of the figure — and compute that quantity there, the product over those roots. Little n is the number of paths; big N is the system size. Any other questions? Yes. Right, so for each number of particles I can compute the leading eigenvalue, and then I have to compare those leading eigenvalues across the different numbers of particles and figure out which one is largest. So there's definitely some work to be done. Yes, sorry, that should be lambda sub little n; I agree. Okay, so now I'm going to save you a lot of calculation and just display the answer. Here's the explicit formula for the surface tension. Well, I drew it upside down, sorry about that; somehow I find it easier to see from this perspective. This is really minus the surface tension, and it has a maximum somewhere in the middle. But it has an interesting feature that the previous picture doesn't have. See this black line here? On the far side of the black line it's actually just a linear function. So in fact the surface tension is piecewise analytic: it's still defined on the whole triangle, but there are two pieces.
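The recipe just described — solve the polynomial equation for the W's, take the roots, form the product, then compare the leading eigenvalues over the number of paths — can be sketched numerically. This is a toy sketch only: the actual Bethe-type equation is not written out above, so I assume a hypothetical equation W^n = c with explicit roots; only the shape of the computation is meant to match.

```python
import cmath

def bethe_roots(n, c):
    """Roots of the (hypothetical) toy equation W^n = c."""
    rho = abs(c) ** (1.0 / n)
    phi = cmath.phase(c)
    return [rho * cmath.exp(1j * (phi + 2 * cmath.pi * k) / n) for k in range(n)]

def leading_eigenvalue(n, r, c):
    """lambda_n = product over roots W of 1 / (1 - (1 - r^2) W)."""
    prod = 1.0 + 0j
    for W in bethe_roots(n, c):
        prod *= 1.0 / (1.0 - (1.0 - r * r) * W)
    return prod

# Compare the leading eigenvalues over the number of paths n and keep the largest.
r, c = 0.5, 0.3
lams = {n: abs(leading_eigenvalue(n, r, c)) for n in range(1, 8)}
n_best = max(lams, key=lams.get)
```

For this toy equation the product telescopes to 1/(1 - (1 - r^2)^n c), which makes the result easy to check by hand; the real model requires actually solving its Bethe equation.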
There's a piece where it's nice and strictly convex, and then a piece where it's actually linear. This boundary curve is a little hyperbola, which, as r tends to 1, flattens out and becomes a straight line, the black line there. So you don't see it in the r = 1 case, but you do see it for r less than one. Yeah, I'm getting to that; let me tell you what "attractive" and "repulsive" mean in a second. This is probably too complicated, but let me just show you. Remember that simple and mysterious formula involving lozenge tilings, which I'll put up again because I like it so much, so that you can compare. There's this pi s and pi t, and the third angle is of course pi times (1 - s - t). And then this was e to the x, that was e to the y, and this was 1. That's what happens in the lozenge case, the r = 1 case. And here's how you generalize it when r is less than one, at least. Here's my segment from zero to one, and here's the point 1 - r^2. Now my triangle over there becomes this big triangle here. Sorry, I rotated the angles, but don't worry too much about that. This is s theta, t theta, and (1 - s - t) theta. Theta is no longer pi; it's something less than pi. But here's a fun geometry problem for you; let me do it over here while I have all the board space. I give you three positive numbers, s, t, and 1 - s - t, which sum to one. I want you to find me a point z in the upper half plane — here, this is the point 1 - r^2 — such that the ratio of this angle to that angle to that angle is s to t to 1 - s - t. That's a funny problem, right?
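In the r = 1 lozenge case the analogous construction is explicit: find the apex z of the triangle on base [0, 1] whose angles are pi s at 0 and pi t at 1 (hence pi(1 - s - t) at z). A minimal sketch via the law of sines; for r < 1, with base [0, 1 - r^2] and total angle theta < pi, one would need a numerical solve instead.

```python
import cmath
import math

def apex(s, t):
    """Apex z of the triangle with base [0, 1], angle pi*s at 0 and pi*t at 1."""
    A = math.pi * s              # angle at 0
    B = math.pi * t              # angle at 1
    C = math.pi * (1 - s - t)    # angle at the apex z
    # Law of sines: the side from 0 to z is opposite the angle at 1,
    # and the base [0, 1] (length 1) is opposite the angle at z.
    side = math.sin(B) / math.sin(C)
    return side * cmath.exp(1j * A)

# Equal densities s = t = 1/3 give an equilateral triangle: z = (1 + i*sqrt(3))/2.
z = apex(1 / 3, 1 / 3)
```

This is the standard Kenyon-Okounkov-style parametrization for lozenges; the point of the talk's geometry problem is that no such closed form is apparent when theta < pi.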
It's not so easy, but it's elementary trigonometry. And that's how you define theta: the ratio of this angle to that angle to that one is s to t to 1 - s - t. Once you've found this point z, you intersect with the circle and you get w. And then here are x and y as functions of z and w-bar, where B is this nasty thing here. Well, B is another version of the dilogarithm function. The dilogarithm is one of these very beautiful functions which has lots of cousins, and they all have interesting symmetries. This is another one; I don't think it has a name, but it should have a name, because it's the one which occurs naturally in this problem. Okay, whatever, it doesn't matter; it's just a formula. And here, the free energy lives over here, the surface tension lives over here, and there's this nice map between the two. It looks kind of like an amoeba, if you're used to what an amoeba looks like. And here's a plot of the free energy, at least on the interior: the free energy is still linear outside this sort of partially infinite triangular region, and it looks like that inside. It comes to a little cusp at some finite point, and the flat part over here all maps to that point up there. So you don't see those phases in this particular simulation — this particular limit. Okay. So there were limit shapes in the title. Do I have time left? Fifteen minutes? Oh, plenty of time. Are there any questions so far? All right, so let me tell you about limit shapes, and let's go back to the lozenge case, because that's the case we understand the best.
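The classical dilogarithm that B is being compared to is elementary to compute: Li2(x) = sum over k >= 1 of x^k / k^2 for |x| <= 1, with Li2(1) = pi^2/6 as the best-known special value. (The variant B appearing in this problem is not written out in the talk, so it isn't implemented here.)

```python
import math

def dilog(x, terms=200000):
    """Dilogarithm Li2(x) = sum_{k>=1} x^k / k^2, valid for |x| <= 1."""
    return sum(x**k / k**2 for k in range(1, terms + 1))

# Special values to sanity-check against:
#   Li2(1)   = pi^2 / 6
#   Li2(1/2) = pi^2 / 12 - (ln 2)^2 / 2
val_at_one = dilog(1.0)
val_at_half = dilog(0.5)
```

The series converges slowly at x = 1 (error about 1/terms) and geometrically for |x| < 1, which is why the term count is generous.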
This is an old theorem of Henry Cohn, Jim Propp and myself: if you take a uniform random tiling of a particular region in the plane by these three shapes — the 60-degree rhombi of a lozenge tiling — it has some shape. Near a corner like this you see only yellow tiles; near that corner, only red tiles. And the density of tiles throughout the figure converges, as the lattice spacing tends to zero, to some non-random quantity. That's the statement of the limit shape theorem: even though you're taking a random tiling, in the limit it concentrates on a non-random shape. There's a sort of law of large numbers, and that shape is determined by minimizing the integral of the surface tension. That's why it's called the surface tension: it's the thing which controls the limit surface; the limit surface is trying to minimize it. Right, so the function h describing the limit shape for a given lozenge tiling of a region like this is the unique minimizer of this so-called surface tension integral: you integrate over the region R the surface tension, which is now a function of the gradient of h. Think of the gradient of h as (s, t); those are the s and t variables I've been using, the densities of — I forget which colors I used — blue and green tiles. And 1 - s - t is the density of the red tiles, although I guess I used slightly different colors here. All right. And because we have this nice variational principle for the limit shape, we can write down a PDE which that limit shape has to satisfy. And with Andrei Okounkov, we simplified that PDE to a PDE just for that parameter z. (I've been in France — no, I was in Cambridge last week.)
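The variational principle can be illustrated with a toy discretization: minimize the sum of sigma(grad h) over grid functions h with fixed boundary values. As a stand-in I use sigma(s, t) = s^2 + t^2 — the model's true surface tension is the piecewise-analytic function above, not this — so coordinate descent relaxes h to the discrete harmonic minimizer.

```python
def surface_tension(s, t):
    """Placeholder sigma(grad h); the model's true sigma is different."""
    return s * s + t * t

def energy(h, n):
    """Discrete surface-tension integral over forward-difference gradients."""
    e = 0.0
    for i in range(n - 1):
        for j in range(n - 1):
            s = h[i + 1][j] - h[i][j]
            t = h[i][j + 1] - h[i][j]
            e += surface_tension(s, t)
    return e

def relax(h, n, sweeps=200):
    """Coordinate descent on interior heights; boundary values are held fixed."""
    for _ in range(sweeps):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                # For sigma = s^2 + t^2 the pointwise minimizer is the average
                # of the four neighbours (discrete harmonicity).
                h[i][j] = 0.25 * (h[i-1][j] + h[i+1][j] + h[i][j-1] + h[i][j+1])
    return h

n = 12
# Linear boundary data h = i + j, interior started at zero.
h = [[float(i + j) if i in (0, n - 1) or j in (0, n - 1) else 0.0
      for j in range(n)] for i in range(n)]
e0 = energy(h, n)
relax(h, n)
e1 = energy(h, n)
```

With linear boundary data the minimizer is the linear function itself, so the relaxed interior should recover h = i + j, which is easy to verify.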
And so z is the third coordinate here, the apex of this triangle, or that triangle there. You see, if you give me two numbers, s and t, or x and y, that determines the complex number z in the upper half plane as the apex vertex of that triangle. And it turns out that if you write the PDE in terms of that function z — okay, maybe I'm hiding something a little bit under the rug here — it becomes this very simple PDE, the complex Burgers equation: the x-derivative of z plus z times the y-derivative of z equals zero, where x and y are the (tilted) coordinates in the plane here. How does that generalize in our case? This is all for the uniform lozenge tiling. How does it generalize when r is not equal to one? Well, there is an analogous limit shape theorem for the five-vertex model, which I didn't bother writing down, but here's the corollary. If you write down the PDE for that limit shape — there's some Euler-Lagrange equation which tells you that the limit shape has to satisfy a certain PDE — and you write that PDE in terms of z, this apex coordinate, then it has a very similar form: z_x plus some function times z_y equals zero. But now this function is not just z itself; it's some explicit function which I could write down, and if you ask me afterwards I will tell you what it is. The mysterious thing, I guess, or interesting thing, is that it's not actually a complex-analytic function now, and so we cannot, as far as I know, solve this using the same methods we used in the r = 1 case. In the previous case, you can solve the complex Burgers equation using complex characteristics, which means you can get explicit limit shapes.
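The complex-characteristics solution of z_x + z z_y = 0 works just as for the real Burgers equation: z is given implicitly by z = F(y - x z) for an analytic F determined by the boundary data. A sketch with an invented F (chosen only so the fixed point is explicit; it is not the boundary data of any particular tiling), checking the PDE by finite differences.

```python
def F(zeta):
    """Invented analytic boundary-data function (hypothetical choice)."""
    return 1j + zeta

def solve_z(x, y, iters=100):
    """Fixed-point iteration for the implicit solution z = F(y - x*z)."""
    z = 0j
    for _ in range(iters):
        z = F(y - x * z)  # contraction for |x| < 1 with this linear F
    return z

# Finite-difference check of z_x + z*z_y = 0 at a sample point.
x0, y0, h = 0.3, 0.7, 1e-6
z = solve_z(x0, y0)
zx = (solve_z(x0 + h, y0) - solve_z(x0 - h, y0)) / (2 * h)
zy = (solve_z(x0, y0 + h) - solve_z(x0, y0 - h)) / (2 * h)
residual = zx + z * zy
```

For this linear F the fixed point is z = (i + y)/(1 + x), so both the solution and the vanishing residual can be checked exactly. The point of the talk is that when the coefficient is not analytic in z, no analogue of F is available.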
As far as I know, we don't have the same technique for this kind of equation, although it looks very similar. So I don't have any explicit limit shapes to show you. Of course, I can always run the simulation and give you one, but I don't know how to solve this equation yet. If somebody in the audience knows how to solve these equations, I would be excited to talk to you. All right? This next point is not so important, but if you turn the equation around and think of x and y as functions of z rather than z as a function of x and y, then you can turn it into an equation for x and y, and it looks like this: the z-bar-derivative of x and the z-bar-derivative of y are related; their ratio is f(z). And the interesting thing here is that it's now a linear PDE for x and y: if I have any two solutions, I can add them and get a solution, which is something a little bit mysterious when you're thinking of these as solutions for limit shapes. You take two limit shapes on two completely different domains, you take the Minkowski sum of the domains, and you get a limit shape on the Minkowski sum. That's also true in the lozenge tiling case; it's just something we didn't realize before. All right, so here's what you've been waiting for: some simulations, maybe. Here's the r = 1 case, the random lozenge tiling case, so you've seen those before. When r is large, the paths want to have lots of corners, so they turn into these zigzag things, and they don't interact very much with each other. When r is small, however — that's the interesting case — the paths want to go straight, to have very few corners. But of course, they're trapped between the neighboring paths. And when the density is large, like down here, then you're in that phase which I called the attracting phase.
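The Minkowski-sum statement above is about limit-shape domains; as a standalone illustration of the operation itself, here is the standard construction for convex polygons: take all pairwise vertex sums and reduce to the convex hull. (This only illustrates the set operation, not the limit-shape theorem.)

```python
def cross(o, a, b):
    """2D cross product of vectors OA and OB."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def minkowski_sum(P, Q):
    """Minkowski sum of convex polygons: hull of all pairwise vertex sums."""
    return convex_hull([(p[0] + q[0], p[1] + q[1]) for p in P for q in Q])

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
tri = [(0, 0), (2, 0), (0, 2)]
S = minkowski_sum(square, tri)  # a pentagon: the square's and triangle's edges merged
```

The edge set of the sum is the union of the two polygons' edge vectors, which is the geometric counterpart of adding two solutions of the linear PDE.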
So the reason I called one phase repelling and one phase attracting is that in the repelling phase, these paths repel each other; they're like a log gas. But in the attracting phase, they don't, as far as I understand. I don't know exactly what they do, but here's the simulation. And again, you should take these simulations with a grain of salt — they're a little bit of a cheat, because I don't actually know a Markov chain that simulates these things quickly; the standard algorithm doesn't converge very rapidly, so I did my best. What? Yeah, x equals y, so the slope is one. But then I just increased the density from the upper picture to the lower, and when the density is high enough, you're in the attracting phase. And this is just my intuition — I don't have any theorem to this effect — that in the attracting phase the paths attract each other in the sense that they line up in these bands. You can kind of see the bands in the figure: there are these large regions where everybody goes parallel to each other. Okay, well, I have one more result to present if I have time. It's just a couple of slides. What about the fluctuations in the attractive phase? Which is not to say that I understand the fluctuations in the repulsive phase, but physicists will tell me that they're Gaussian free field fluctuations in the repulsive phase. In the attractive phase, something interesting happens. Here's the actual simulation, and it looks like the fluctuations are much larger than square root of log n; I think you might agree with me by looking at this. So we can go back and do a little calculation. This is a case where I did the numerical experiment on my computer, and then I found a proof, and I still don't really believe the result. I still think there's a mistake somewhere; I just haven't found it. I mean, I did prove something.
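Since the talk mentions simulating with a slowly converging standard Markov chain, here is a minimal sketch of the kind of chain one might use, reduced to a single lattice path with weight r^(number of corners): Metropolis moves that swap adjacent steps. This is my own toy reduction, not the speaker's algorithm; with many interacting paths one would also reject moves that cross a neighboring path.

```python
import random

def corners(steps):
    """Number of corners = places where consecutive steps differ."""
    return sum(1 for a, b in zip(steps, steps[1:]) if a != b)

def metropolis_step(steps, r, rng):
    """Propose swapping two adjacent steps; accept per the weight r^corners."""
    i = rng.randrange(len(steps) - 1)
    proposal = steps[:]
    proposal[i], proposal[i + 1] = proposal[i + 1], proposal[i]
    dc = corners(proposal) - corners(steps)
    # Weight ratio new/old is r^dc; for r < 1, fewer corners is favored.
    if dc <= 0 or rng.random() < r ** dc:
        return proposal
    return steps

rng = random.Random(0)
# A monotone path of 10 right-steps (0) and 10 up-steps (1); swaps preserve endpoints.
path = [0, 1] * 10
for _ in range(5000):
    path = metropolis_step(path, 0.1, rng)
low_r_corners = corners(path)
```

With r small the chain drifts toward paths with few corners (straight runs), matching the "paths want to go straight" picture; with r large it favors maximal zigzag.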
And I did compute something, and I think I computed the right thing, but I still have lingering doubts. I was hoping that I'd be at the end of the talk before now, but I still want to present it to you, and you can see whether you believe it or not. All right, it's a very simple result. I already told you about the leading eigenvector, right? In our case this function a(pi), which is not just the signature of the permutation, does have a reasonably simple form: it is the signature times some factor, which is just a product over i and j. And when you plug it into this determinant, you can manipulate it into an actual determinant times this factor. So the components of the leading eigenvector can be written as determinants of this matrix, which looks kind of like a Vandermonde matrix again, with these extra factors: 1 minus zeta-one inverse, (1 minus zeta-one inverse) squared, and so on — I hope you can see the pattern in that matrix. Using that form, it's pretty easy to compute certain components of the eigenvector. In particular — here's my system on a cylinder — suppose I have k paths and they're all adjacent to each other: a path at each of locations 1, 2, 3, up to k. Then I can actually compute the component of the eigenvector. And I can also compute the component when the paths are evenly spaced, maximally spaced apart: one at 0, one at n/k, one at 2n/k, all the way up to (k - 1)n/k. When I cram all the paths next to each other, that should be the smallest-probability event, and when I give them all the maximal space between them, that should be the largest-probability event. And when you compute the ratio of these probabilities — both of which you can — the ratio tends to one.
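The Vandermonde-like structure can be sanity-checked numerically. Below is the classical identity for a matrix with entries a_j^(i+1) — which matches the stated pattern (1 - zeta^-1), (1 - zeta^-1)^2, ... down a column, my reading of the slide — whose determinant is (product of the a_j) times the usual Vandermonde product of differences.

```python
def det(M):
    """Determinant by Gaussian elimination with partial pivoting (complex entries)."""
    n = len(M)
    A = [row[:] for row in M]
    d = 1.0 + 0j
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if abs(A[p][k]) == 0:
            return 0j
        if p != k:
            A[k], A[p] = A[p], A[k]
            d = -d
        d *= A[k][k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
    return d

def vandermonde_like(a):
    """M[i][j] = a_j^(i+1): columns carry a, a^2, a^3, ... as on the slide."""
    n = len(a)
    return [[a[j] ** (i + 1) for j in range(n)] for i in range(n)]

zetas = [1.5 + 0.5j, 2.0 - 1.0j, -0.7 + 0.2j]   # arbitrary sample values
a = [1 - 1 / z for z in zetas]
V = det(vandermonde_like(a))

# Closed form: (prod a_j) * prod_{i<j} (a_j - a_i).
expected = a[0] * a[1] * a[2]
for i in range(3):
    for j in range(i + 1, 3):
        expected *= (a[j] - a[i])
```

It is this product structure that makes the special eigenvector components (adjacent paths, evenly spaced paths) explicitly computable.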
And so, if this is really the largest-probability event and that's really the smallest-probability event, then all events have exactly the same probability, because the ratio of the largest to the smallest tends to one. And therefore the system is just the uniform measure on all configurations of k points among n sites. That is the inescapable conclusion from this calculation: in the attracting phase, particles on any given row or column are exactly uniformly located — except, of course, that they have to be distinct. So that, okay. Yes? No, no: I have an infinite-by-infinite cylinder, and I look at a given row, and I've got k particles. Then the locations of those k particles around the ring are uniform; they're just Bernoulli. Yeah, the particles are the paths: just cut this thing horizontally here and look at the locations of the vertical lines. Does it look uniform to you? Well, it's kind of hard to tell. But right, this picture — yeah, I'm integrating out everything above and below. I cannot tell that this is not uniform; maybe you can. So that's the end. Thank you. That was not part of the talk. Any questions? What's the PDE? Oh, yeah. Well, I said it was z_x equals some function of z times z_y. What's f? Well, I should really have written it as B'(z) z_x + B'(w) w_y = 0, where z and w are those two points. I didn't show you the point w, but (z - 1)(w - 1) = 1 - r^2, and B is that dilogarithm-like function. So it's the z-derivative of B: B is a non-analytic function, but it has a nice z-derivative. It's something you have to stare at for a long time to understand — not an easy function, but it is explicit. Yes, I mean, maybe that's really what's going on: you get TASEP, or some version of it, on a ring.
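The basic consequence of the claimed conclusion — the uniform measure on configurations of k particles among n ring sites — is that each site is occupied with probability exactly k/n, independent of position, which is why the marginal looks Bernoulli. A quick exhaustive check on a small ring:

```python
from itertools import combinations

def occupation_probs(n, k):
    """Marginal occupation probability of each site under uniform k-subsets of n sites."""
    counts = [0] * n
    total = 0
    for config in combinations(range(n), k):
        total += 1
        for site in config:
            counts[site] += 1
    return [c / total for c in counts]

# n = 8 sites, k = 3 particles: every site should be occupied with probability 3/8.
probs = occupation_probs(8, 3)
```

Of course uniformity of the one-point marginal is weaker than uniformity of the full measure; the talk's calculation is about the full configuration probabilities via the eigenvector components.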
But it's not exactly TASEP, because the transition probabilities are a little bit different. But maybe we should talk afterwards — let's have a chat. Any more questions?