 means that h̃_k^n(l−1, x) is actually equal to minus the sum over y starting from x_0 — OK, sorry about these long equations; there's not much I can do about it. But you see, because you have this difference and you know that it's 0 on the boundary, you can just sum from the boundary to your point and you get your function back. That's easy enough. What that means is that if this thing at level l is a polynomial of degree k − l, then this thing at level l − 1 is a polynomial of degree k − l + 1. And at l = k, it's obvious the thing is a polynomial of degree 0. So the degrees just increase by one each time you go back a level, which is exactly what was claimed. OK, so they're polynomials — that proves 2. Well, I said it proves 2, but really it proves that this guy is a polynomial of the right degree; the other one is just a convolution with shifts of it, so of course it becomes a polynomial of the same degree. That's easy to see. OK, so that proves that these guys are actually polynomials of the right degree. So now we have to prove 1, the biorthogonality, and this is really fun. There's not enough room on these boards to write these formulas. So h_k — this is a continuation of it — that's a product. OK, so I just wrote it out, and you get this sum. But now you've got R_t, and you can do the sum over the z's first. If you do the sum over the z's, you get R_t R_t^{−1}, which is just the identity, and that means z_1 = z_2. So this thing is equal to the sum over z of Q^{−l}(z, x_0(n−l)) h_k^n(0, z). But that's actually just a way of writing (Q*)^{−l} acting on h. And what does Q* do to h? It shifts it around in the l's. So this is h_k^n(l, ·), evaluated at x_0(n−l). And lo and behold, you're being told to evaluate it on the curve x_0 at level l — and that's only equal to 1 when l is k.
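The degree-raising step above (summing a degree-(k−l) polynomial from the boundary produces a polynomial of degree k−l+1) is easy to sanity-check numerically. Here is a minimal sketch in generic notation, not the lecture's:

```python
# Telescoping check: if H(x) = sum_{y=0}^{x} p(y) for a degree-d polynomial p,
# then H is a polynomial of degree d+1.  We verify this via finite differences:
# a degree-m polynomial sequence has vanishing (m+1)-th differences.

def diffs(seq, k):
    """Apply the forward difference operator k times."""
    for _ in range(k):
        seq = [b - a for a, b in zip(seq, seq[1:])]
    return seq

d = 3
p = lambda y: 2 * y**3 - y + 5        # an arbitrary degree-d polynomial

H, s = [], 0
for x in range(12):                    # H(x) = sum_{y=0}^{x} p(y)
    s += p(x)
    H.append(s)

# (d+2)-th differences of H vanish, so deg H <= d+1 ...
assert all(v == 0 for v in diffs(H, d + 2))
# ... and the (d+1)-th differences are a nonzero constant, so deg H = d+1.
assert diffs(H, d + 1)[0] != 0
```

The same telescoping argument, applied level by level from l = k down, gives exactly the degree count claimed in the proof.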
So now we've found the φ's, and that's always true. Now you can see why getting a closed-form formula for this might be a little bit tricky. You see, if you want an exact formula, you have to be able to calculate the hitting probability of a geometric random walk to that curve. If the curve is just linear — particle, hole, hole, particle, hole, hole — then it just goes up like that, and we all know how to calculate the hitting probability of a Brownian motion to a line. And it works also for geometric random walks, same tricks. These are actually beautiful, because when you jump over the line, the memoryless property of geometrics means the walk just forgets — it's perfect. So you can calculate hitting probabilities to lines. But in general this asks you to calculate the probability of hitting a staircase, so it's obviously not going to be a nice formula. So let me just write out the conclusion. K_t(n, ·) — this is the one-point kernel, and that's all we need — is R_t (this funny R_t on the sides) times Q^{−n} and G_{0,n}, where G_{0,n} is given by a probability in which τ is the first time you hit the curve between −1 and n − 1. I'm sorry, it comes out slightly awkward. But this isn't actually the whole story: this is only true below the curve; above the curve, it's some sort of analytic extension of that. So this formula actually only holds below the curve, but I want to write it everywhere, and when you try to write it everywhere, you find out it's this. Things start to look familiar. I should actually call this script S, because it's not the same S as before — it's the microscopic S. This script S is equal to e^{−t/2} times this R, written out, and this is Q, and Q̄ is an analytic extension of Q. If you like, these things are given by contour integrals, but perhaps I won't write the contour integrals down. And S̄^{epi(x_0)} is just an expectation.
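As a toy illustration of the kind of hitting probability that G_{0,n} packages, here is a minimal Monte Carlo sketch: a walk with unit up-steps minus geometric down-steps hitting a linear boundary by time n. All parameters and the boundary are illustrative choices, not the lecture's:

```python
import random

random.seed(0)

def geom(q):
    """Geometric(q) on {0, 1, 2, ...}: failures before the first success."""
    g = 0
    while random.random() > q:
        g += 1
    return g

def hit_prob_line(n=30, q=0.5, slope=0, level=3, trials=5000):
    """Monte Carlo estimate of the probability that a walk with increments
    1 - Geometric(q) crosses the line level + slope*k by time n.
    (Illustrative sketch; for a line this also has a closed form via the
    memoryless property, which is the point made above.)"""
    hits = 0
    for _ in range(trials):
        s = 0
        for k in range(1, n + 1):
            s += 1 - geom(q)
            if s >= level + slope * k:
                hits += 1
                break
    return hits / trials

p = hit_prob_line()
```

For a staircase boundary, the same simulation is just as easy, but no closed-form answer replaces it, which is exactly the obstruction described above.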
Now these are geometric random walks instead of Brownian motions of this S̄. So now it has to hit before time n. And that's the solution of TASEP. So the theorem is sort of written in different pieces here: this holds, with that kernel, for any right-finite initial data. OK? One can also write formulas for initial data that isn't right-finite, but they're a little bit more cumbersome. It wants to be written like this; I'm not sure why. And of course, if you've got some special case where you can compute the hitting time to the curve — for example, the periodic case — you just compute it: you can calculate the S^{epi} explicitly, and it's exactly the formulas people had before. OK? Yeah. That's easy. Well, because you're trying to hit the curve x_0. Particle, hole, hole, particle, hole, hole — that just means that x_0(i) is −3i. So you're trying to hit the curve −3i — I can't draw anything for the life of me. Of course, in the other case it's a little staircase, so it's not surprising you don't have an exact formula. But everything we do is soft, so the staircase case gets proved by just linking it to the other guys. For example, for the staircase you will not have a completely explicit formula here; it'll be mildly inexplicit, in terms of hitting times of the staircase. But when you pass to the KPZ limit, it doesn't care whether you had a little staircase or a straight line — they're all flat. OK? Does that make sense? I should just add a word, because I want to show you — I have 15 minutes — a little bit of the scaling. I said I would do the 1:2:3 scaling today, so I'll try to do that. While this is true, it's not actually true that these K's in this case are trace class operators, but there is a conjugation that makes them trace class. And that's enough to make sense of the Fredholm determinant, because the Fredholm determinant is invariant under conjugation.
So if you can conjugate to a trace class operator, that's fine. It means that the space may actually not be exactly L², but something a little bit different. And this R is a typo — of course, that's Z. OK, so now we want to take the 1:2:3 scaling limit of this thing, and I just want to show you how remarkably easy it is once things are written in this form. So we'll take the density to be about a half. Then we know that we should rescale: h_ε(t, x) — we know this from the first lecture — is ε^{1/2} times h(2ε^{−3/2}t, 2ε^{−1}x) plus ε^{−3/2}t. And this thing is supposed to go to the KPZ fixed point, as long as the initial functions converge to some function. So you set things up so that h_ε(0, x) — you take your TASEP height profile and produce h_ε — converges to some upper semicontinuous function in the Hausdorff-type topology I talked about. OK? Now, h_t, as I said before, is essentially built from the inverse of X_t, so it's basically the same thing as working with the inverse. And so the probability that this function is less than or equal to something just gets translated into a probability for X — OK, it's a little bit complicated, but it's not anything difficult, so it'll just take me a second to write: the probability that X at ½ε^{−3/2}t − ε^{−1}x − ½a_i + 1 is greater than 2ε^{−1}x. And that tells you the probability that h was less than these a_i's at the appropriate points. OK? You wanted to know that h_t(x_i) is less than or equal to a_i, and it just translates into this. OK, so we can call this number n, and we just have to do some rescaling of our formula. OK? Here's the formula. Let's take, for example, this guy here; we want to rescale him. So when you translate through this whole thing, you want to take a limit.
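Written out (with the conventions quoted from the first lecture; the factors of 2 are the ones stated above), the 1:2:3 rescaling is:

```latex
h_\varepsilon(t,x) \;=\; \varepsilon^{1/2}\Big[\, h\big(2\varepsilon^{-3/2}t,\; 2\varepsilon^{-1}x\big) \;+\; \varepsilon^{-3/2}t \,\Big],
```

with the standing assumption that h_ε(0, ·) converges to some upper semicontinuous function in the local Hausdorff topology.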
So you're trying to take a limit: you just take a limit of this and a limit of this, right? I'm basically going to show you how to take the limit of this one, and then you'll see the rest. OK. So when you translate, you get ε^{−1/2}s − ε^{−3/2}t — this is the microscopic s. Of course, the numbers are all crazy things; it takes a while to get them straight. OK? So that's the new t and that's the new n. And I think I dropped a term, but it doesn't matter — you'll see that it's not here. Of course, you can just write x + ε^{1/2}a. OK. Now the S has variables, and of course, when you take a limit of Fredholm determinants, you may have to shift the variables — you may have to do conjugations to get the thing to come out right — and here you have to do a little shift. So, first of all, the lattice size is this, but actually you have to shift by 2ε^{−1}x + ε^{−1/2}u, and that's allowed because of conjugation invariance. OK. All right, so we want to take a limit of this, and you may think this looks very imposing, but actually the limit is like butter. Because this is just this. And Q^{−1} — there's a factor of a fourth — so this is just e^{−ε^{−3/2}t/2} times minus ½∇⁻ plus... OK, now the Q^{−1}: Q^{−1} was 1 + 2∇⁺. And I said it has a kernel, but that kernel is actually not in L², so you don't have to worry about that. So I'm just going to write that as e to the log. And you might think, well, that's not quite allowed, because ∇⁺ is only an operator bounded by 1 — but we're on a lattice of size ε^{1/2}, so ∇⁺ is actually an operator of size ε^{1/2}. So that's OK: plus ¼ log(1 + 2∇⁺). Just remember, that's of order ε^{1/2}. And then, OK, so we've got that.
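The conjugation trick invoked here rests on the fact that det(I − K) is unchanged when K is replaced by M K M^{−1}. In finite dimensions (a stand-in for the trace class setting) this is elementary to check:

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite-dimensional stand-in for conjugation-invariance of the Fredholm
# determinant: det(I - K) = det(I - M K M^{-1}).
n = 6
K = 0.1 * rng.standard_normal((n, n))        # illustrative "kernel"
M = np.diag(np.exp(rng.standard_normal(n)))  # diagonal conjugation, a "shift"

d1 = np.linalg.det(np.eye(n) - K)
d2 = np.linalg.det(np.eye(n) - M @ K @ np.linalg.inv(M))
assert np.isclose(d1, d2)
```

This is why shifting variables and multiplying by exponential weights is harmless: it only changes the operator by a conjugation, never the determinant.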
And then actually, I want to separate out the ε^{−1}x here and write this as (Q*)^{ε^{−1}x}. OK, that's what you get — and I didn't write the variables inside. Fair enough? Now, this guy here is being evaluated at this point here, and that's actually the mean of this random walk Q*. So you won't be very surprised to learn that this thing goes to e^{x∂²}. That's OK if x is positive; if x is negative, then you have to pull some more stuff into the other guy and pay for it later. That's OK. So this part is actually very easy — it's just the standard convergence of properly rescaled random walks to Brownian motion. But what about this thing? So here's the black magic. Minus ½∇⁻, and now I'll just expand the log — and remember, we're on a lattice of size ε^{1/2}, so the grads are of size ε^{1/2}. That's 2∇⁺; this is just the expansion of the log. And the next term is going to be of order (ε^{1/2})⁴, so it's order ε². That's what's inside here. Fair enough? Now this thing is completely amazing. You've got minus ½∇⁻ plus the same ½∇⁺ — that's a Laplacian. Does everyone see that? That's a discrete Laplace operator. But ∇⁺ — oh, I dropped a term, sorry. There's a plus: plus minus ¼ times ½ of (2∇⁺)², and then the next term is cubed. Sorry, I just want to check that's correct — it really matters. OK, good, now it's correct. I hope you can all expand the log for me and just tell me if I made a mistake here; I'm pretty sure this is correct. OK. So this and this together form a Laplacian — a standard centered Laplace operator. But if you take (∇⁺)², it's actually a Laplace operator also, just shifted by 1, because it evaluates at the point two to the right, minus twice the point one to the right, plus the point itself. So you get this Laplace operator minus that Laplace operator — a difference of Laplace operators — which is a third derivative.
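The "difference of Laplacians is a third derivative" step can be checked directly with finite differences; the grid spacing and test function below are illustrative:

```python
import numpy as np

# Shifted Laplacian (nabla+)^2 minus centered Laplacian nabla+ nabla-
# is a third difference, i.e. h^3 * f''' to leading order.
h = 1e-2
x = 0.3
f = np.sin                                            # smooth test function

lap_centered = f(x + h) - 2 * f(x) + f(x - h)         # nabla+ nabla- f
lap_shifted  = f(x + 2 * h) - 2 * f(x + h) + f(x)     # (nabla+)^2 f
third = (lap_shifted - lap_centered) / h**3           # -> f'''(x) as h -> 0

# For f = sin, f''' = -cos; the error is O(h).
assert abs(third - (-np.cos(x))) < 1e-2
```

So the cancellation the speaker points out is exact at the level of difference operators: the two second-difference stencils differ by the stencil (1, −3, 3, −1), which is a discrete third derivative.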
Plus a third derivative. So the whole thing, after a little bit of computation — you've got this ε^{−3/2} against terms of size ε^{1/2} — is e^{(t/3)∂³}. Why would a third derivative come up in a probability problem? I have no idea. So you see, the S's just converged, in a few seconds, to the S operators in the continuum — the ones I wrote out over there and erased. And now the epi operator: well, the epi operator just evaluates random walk paths of these S's, and it's not too hard to imagine that the random walks converge to Brownian motions hitting the rescaled x_0 path — which turns out to be minus the height function, because x_0 is the inverse of h, and the inversion gives you the minus sign. And so that gives you the KPZ fixed point formula. So let me just say it. What you check is that the limiting guys, also under a conjugation, are trace class; and that the map from initial data — h in UC (remember, UC is the upper semicontinuous functions with this local Hausdorff topology, and it turns out they have to be bounded above by a linear function at infinity) and g in LC, which is just minus UC — to these operators, K = K^{hypo(h)}_{t/2} K^{epi(g)}_{t/2} (or it might have a minus sign on it; it doesn't matter), is actually a continuous map into trace class. And that means you can pass to the limit. So in other words, if you've set up your initial data so that h_ε(0, x) converges in UC to a limiting profile, then at time t later the whole thing converges — again, to the KPZ fixed point. OK? So it's that soft. There's one little twist to this, which I haven't gotten to, and I'll just explain it in words. We started with right-finite initial data, so when we get to the KPZ fixed point, we get a KPZ fixed point whose initial data has to be minus infinity after a certain point to the right.
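Collecting the limits just described — and read this as a sketch of the shape of the statement, since the exact signs and conjugations depend on conventions — the continuum objects are:

```latex
\mathbf{S}_{t,x} \;=\; \exp\Big\{\, x\,\partial^2 \;+\; \tfrac{t}{3}\,\partial^3 \,\Big\},
\qquad
\mathbb{P}\big(\mathfrak{h}(t,x_i) \le a_i,\ i = 1,\dots,M\big)
\;=\; \det\!\Big( \mathbf{I} \;-\; \mathbf{K}^{\mathrm{hypo}(\mathfrak{h}_0)}_{t/2}\,
\mathbf{K}^{\mathrm{epi}(\mathfrak{g})}_{t/2} \Big),
```

with h_0 in UC and g in LC as in the continuity statement: the e^{x∂²} factor comes from the random walk mean, and the e^{(t/3)∂³} factor from the difference-of-Laplacians expansion.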
So everything I've written is only proved so far for KPZ fixed point initial data which is minus infinity to the right of — well, 0, but it doesn't matter, because of translation invariance; it just has to be minus infinity to the right of some point. Does that make sense? But now, because you have exact formulas, you can check how much the data out there matters. There's a worst-case scenario, which is that you start with something growing linearly at infinity, and you compare it with the cut-off version, which is minus infinity beyond some point. And you can check that the difference, if you cut off at −l, is actually like e^{−l³}, uniformly in the limit as you take TASEP under the 1:2:3 scaling to the KPZ fixed point. So it's very easy to remove that restriction, and you get the convergence for any initial data. And so that's the difference. I'll stop there.