So the last talk is the KPZ fixed point. What we got last time is a formula for TASEP, starting from essentially any initial data; we rescaled it and got a formula for what you would see in the limit as epsilon goes to 0, rescaling the height function at time t. So we got a formula for these generalized Airy processes: what you get when you start with general initial data. What I want to do today is describe the properties of this thing. And the first thing we want to do is some sort of regularity; the first thing we want to do is just check how rough this function is. Of course, the answer is that the function is going to look locally like a Brownian motion, so it's going to be roughly as rough as a Brownian motion. And the natural thing to measure that with is Hölder norms, the Hölder-beta norm. Now we want to make it local, so we'll just look on [-M, M], where M is just some finite box, and you take the supremum over x1 not equal to x2 in [-M, M] of |h(x1) - h(x2)| over |x1 - x2| to the beta. That's the local Hölder norm. And roughly what you expect is that it's finite for any beta less than a half, like Brownian motion. So what happens is we have the following result. For any initial data h0 in UC (remember what UC is: it's the upper semicontinuous functions with the local Hausdorff topology), any t positive, any finite M, and any beta less than a half (which I forgot to write), the limit as a goes to infinity of the probability that the local Hölder norm of the solution at time t exceeds a is equal to 0. So just how would you prove such a thing? Well, the way you prove such things in general is the Kolmogorov continuity criterion. The Kolmogorov continuity theorem is our main tool for proving the regularity of a stochastic process. It's an amazing theorem, which basically tells you that the two-point function tells you the regularity of your function.
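The norm and the statement just described can be written out; a sketch in standard notation (my transcription of what is on the board):

```latex
% Local H\"older-\beta norm on the box [-M, M]:
\|h\|_{\beta,[-M,M]} \;=\; \sup_{\substack{x_1 \neq x_2 \\ x_1, x_2 \in [-M,M]}}
  \frac{|h(x_1) - h(x_2)|}{|x_1 - x_2|^{\beta}} .

% Regularity statement: for any h_0 \in UC,\ t > 0,\ M < \infty,\ \beta < 1/2,
\lim_{a \to \infty} \mathbb{P}\bigl( \|h(t,\cdot)\|_{\beta,[-M,M]} > a \bigr) \;=\; 0 .
```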
So basically, roughly, it says that if you can control the increments, if you have a bound like this on the two-point function, you get Hölder continuity. That's what the Kolmogorov continuity theorem tells you. And the amazing thing is that you just need the two-point functions of your stochastic process to tell you how regular the thing is. It's a completely amazing theorem. So I guess I'll just write out our formula again. How would you do this? Well, I want two-point functions, but I'll write it out for m-point functions, because I want to use that in a second. This is just a rewriting of our formula for h. h here is just the asymptotic limit of the TASEP height field under the 1:2:3 scaling at time t; that's all we know so far. And it's given by a Fredholm determinant. The way I wrote it before, P(h <= f) for some general function f is equal to the determinant of I minus K^hypo K^epi. But I'll write it a little more explicitly right now, because we actually want just m points. (Yeah, if you just did that... I was going to say that in a minute. Yeah, sure, let's go step by step.) So here's an extended kernel formula for the thing. This is now on the extended space, L2 of {x1, ..., xm} times R. OK, so that's the extended kernel version of our formula. As I was saying, you have extended kernel versions, and you have path integral versions; there are all these different ways to write the formula. Since we only want the height to be less than the values at m different places, you can basically just write K^epi for those m upside-down narrow wedges and compute, and you're going to get a formula like this, where this extended kernel K_t is a little bit long. But anyway, if you think about it, the formulas are about as simple as they could be. OK, so these are all heat operators, and some of the heat operators are backwards in time. But as we saw, our old K_t^hypo is full of Airy functions, which allow you to apply backwards heat kernels.
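The continuity criterion being invoked is, in one standard form (the exact exponents on the board are not visible in the transcript, so these are the textbook ones):

```latex
% Kolmogorov continuity theorem (one standard form):
% if a process (h(x))_{x} satisfies, for some p, \alpha, C > 0,
\mathbb{E}\,\bigl| h(x_1) - h(x_2) \bigr|^{p} \;\le\; C\,|x_1 - x_2|^{1+\alpha},
% then h has a continuous modification which is locally
% H\"older-\beta for every \beta < \alpha / p.
```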
So this is a legal object, and that's the extended kernel. Or you could write the thing in the path integral formulation, like this. OK, hypo. At any rate, all I'm doing is writing out the path integral version of the same formula, because we know we can go back and forth between the two. So you don't even need to know the details of this. OK, so there's a path integral version, which is now just on L2 of R. At any rate, I'm just showing you some multi-point formulas for this height function at some time t, and we only need two-point formulas. The only thing I want to point out, and it's kind of a surprise, is that if you look at the two-point version of the extended kernel formula, it's extremely hard to prove something like this. But if you look at the two-point version of the path integral formula, it's actually very easy. All you have to do is a little trace class estimate on the two-point kernel, and it comes out clean. I'm not going to do it for you, but you get an estimate of this nature, and that's how you prove that the thing is Hölder. So these path integral versions turn out to be extremely useful for things like this. For some reason, it's easier to read off this kind of information from them: the small-scale information, when points are close to each other, turns out to be much easier to read off the path integral version than off the extended kernel version. It's just the way things are. Another thing you get out of this for free, and by "for free" I mean I think everybody here can imagine doing the calculation: you have the two-point function, you plug this thing in, and you actually just start computing. I think you can also imagine that you wouldn't want to watch me do it for the next 45 minutes. But you can also imagine what happens if you now zoom in on the function: you pick some point x_m, and now you make all these increments really small.
Zoom in locally on the function, and you'll see that the thing is a Brownian motion, because the kernels just converge, as you zoom in, to a beautiful kernel which gives you Brownian motion. And it's just immediate when you do the calculation. So what you get out of this is that it's Hölder, and it's also locally Brownian. Yeah, so the sense that I'm talking about is: I fix a t, and I look at h(t, x + y) - h(t, x) as a function of y. As you zoom in and rescale, epsilon^{-1/2} times h of epsilon y converges to a Brownian motion. It's a local version of being locally Brownian. (In terms of finite dimensional distributions?) Yeah, in finite dimensional distributions; it wouldn't be hard to prove it was tight. No, no, you don't get absolute continuity from this. So what Ivan's pointing out is that there are different notions of these processes being locally Brownian, and this is perhaps the weakest of them: as you zoom in, you're going to see a Brownian motion. Another one is that if you looked on some interval, this process would actually be absolutely continuous with respect to Brownian motion. That's not something you get from these formulas. But it is something that you get from Corwin and Hammond: they can prove that the Airy_2 process is absolutely continuous with respect to Brownian motion on a finite interval, which is a stronger statement than this. So that's not known for these other processes now, and I don't think it's known for any other case. Still, I guarantee you it's true that for any initial data, this thing is, on finite intervals, absolutely continuous with respect to Brownian motion. I'll tell you afterwards how to prove it. So Ivan's question was about a property for which it's very interesting to compute the argmaxes of these things, because that's where the polymers end up.
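In symbols, the weakest of the "locally Brownian" statements discussed here is the zoom-in limit (the diffusion coefficient of the limiting Brownian motion depends on normalization conventions and is not pinned down by the transcript):

```latex
% Fix t > 0 and a point x. Then, in finite dimensional distributions in y,
\varepsilon^{-1/2}\Bigl( h(t, x + \varepsilon y) - h(t, x) \Bigr)
  \;\xrightarrow[\;\varepsilon \to 0\;]{}\; B(y),
% where B is a two-sided Brownian motion.
```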
There's an underlying polymer model, which just wants to go to the argmax, and so you'd like to know that the argmax is unique. That's something that was a problem to prove even for the Airy process a while ago, but one should be able to prove it here too, using these formulas. OK, so some things the formulas are good for, and some perhaps not. OK, so it's locally Brownian. But now there's a bigger question, and that's that we actually don't know this thing is a Markov process. Not yet, I mean, in this discussion. So how do you prove it's a Markov process? Well, of course, you could say: I took it as the limit of the TASEP height functions, and they're a Markov process; the limit of a Markov process should be a Markov process, right? Is that true? It's not true. Exercise: make yourself a counterexample. If you don't know one, you should know one. It's false. And the Markov property is basically the Chapman-Kolmogorov equations. The Chapman-Kolmogorov equations here would be P_{h0}(h_{s+t} in A) equals the integral of P_{h0}(h_s in df) times P_f(h_t in A). That would be the Markov property; that's the Chapman-Kolmogorov equations. But you might imagine this is going to be a little bit difficult to prove, because this integration is over UC. It's a nightmare, and we definitely do not have a formula for it. So that's hopeless. The only way you're going to be able to prove the Markov property of this thing at this point is to show that it's the limit of Markov processes in a nice way, and for that you need some sort of tightness. So here's a lemma. The p_epsilon(t, x, A) are Markov transition probabilities, and they're Feller. And suppose they converge to kernels p(t, x, A) which are Feller; "kernels" just meaning probability measures depending on x, but Feller. You don't know they satisfy the Markov property, but the p_epsilon are converging to them.
Then the kind of tightness you need is the following. These are all on some Polish state space S. For each t and each delta > 0, there's a compact set K_delta such that p_epsilon(t, x, K_delta complement) < delta, and p_epsilon(t, x, ·) converges to p(t, x, ·) uniformly. Then p is Markov. Yeah, yeah, weak convergence of measures. So actually, once one has this lemma (and the lemma is a good exercise if you like continuous probability), it's very easy to prove once you know what the statement is. Yeah, the A's are Borel subsets, but of course you can use a generating family, things like that. OK, let's not get into a long discussion; you can really, literally prove this in about one minute once you know the statement. So the point is that we're done, because these guys are Feller. Everybody's Feller, and the reason they're Feller is that the map from f and g to the operators K^hypo_f, K^epi_g is actually continuous in trace class, and that automatically makes these things Feller, because the determinant is a continuous function of the trace norm of the kernel. So we've just got to find our K_deltas. But the K_deltas are easy: the sets where the local Hölder norm is less than or equal to a are our K_deltas, OK? (They're all Feller because the map from the initial data and the final g, if you like, to the operators K^hypo, K^epi is a continuous map into the trace class. And you can prove this; it's relatively straightforward, after a conjugation, of course. There's a conjugation involved.) And so we've got our sets K_delta, which are just the bounded sets in Hölder norm; they turn out to be nice compact sets in UC. OK, so it's actually not hard to prove the Markov property. So in other words, finally, we have our h: h is a Feller Markov process on UC. And it's really nice.
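The lemma just stated can be summarized in one place (my transcription of the board; the uniformity is as I heard it):

```latex
% Lemma. Let p_\varepsilon(t, x, A) be Feller Markov transition probabilities
% on a Polish space S, and let p(t, x, A) be Feller kernels. Suppose:
% (i) tightness: for each t and \delta > 0 there is a compact K_\delta \subset S with
p_\varepsilon\bigl(t, x, K_\delta^{c}\bigr) \;<\; \delta ,
% (ii) convergence: p_\varepsilon(t, x, \cdot) \to p(t, x, \cdot) uniformly
% (weak convergence of measures).
% Then p satisfies the Chapman--Kolmogorov equations:
\mathbb{P}_{h_0}\bigl( h_{s+t} \in A \bigr)
  \;=\; \int \mathbb{P}_{h_0}\bigl( h_s \in \mathrm{d}f \bigr)\,
        \mathbb{P}_{f}\bigl( h_t \in A \bigr),
% i.e. p is Markov.
```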
It's integrable, in the sense that its transition probabilities are explicit in some sense. And it's scaling invariant, and the Airy processes are special self-similar solutions of this thing. So if h0 is just equal to 0, then h(t, x) is nothing but t^{1/3} Airy_1(t^{-2/3} x). If h0 is a narrow wedge at 0 (or anywhere, you can translate it around; does everyone know what I mean? It's the function which is 0 at 0 and minus infinity elsewhere, which is a function in our space), then h(t, x) = t^{1/3} Airy_2-hat(t^{-2/3} x). Remember, the hat just means you subtract the parabola from the Airy_2 process. And if h0 is this upper semicontinuous half wedge, meaning the function which is 0 to the right and minus infinity to the left, then h(t, x) = t^{1/3} Airy_{2→1}(t^{-2/3} x). And I just want to quickly describe the calculation to get that. How do we know this? Well, you actually go into the formula and check that it's exactly that. So I'm going to show you a sketch of this one a tiny bit. In the first class, we actually saw the extended kernel formula for the finite dimensional distributions of this thing, and the way you get it is: I just plug that function in and try to calculate K^hypo. Now K^hypo is built out of S_{t,0}^{hypo(h0)}. If you remember, that thing is calculated through a hitting problem, starting from some point. Let me write it out: at (u, v) it's the expectation, starting the Brownian motion at v, of S_{t - tau}(v_tau, u), where tau is the hitting time of the hypograph; the Brownian motion starts above the curve. Anyway, don't worry about that. At any rate, there's one for plus and one for minus: you go one way and you do this hypo, and you go the other way and you do this hypo.
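The three special self-similar solutions just listed, gathered in one place (a sketch of the board):

```latex
% Flat: h_0 \equiv 0
h(t, x) \;=\; t^{1/3}\, \mathcal{A}_1\!\bigl( t^{-2/3} x \bigr)

% Narrow wedge: h_0 = \mathfrak{d}_0 (0 at the origin, -\infty elsewhere)
h(t, x) \;=\; t^{1/3}\, \hat{\mathcal{A}}_2\!\bigl( t^{-2/3} x \bigr),
\qquad \hat{\mathcal{A}}_2(x) \;=\; \mathcal{A}_2(x) - x^2

% Half wedge: h_0(x) = 0 for x \ge 0, \ -\infty for x < 0
h(t, x) \;=\; t^{1/3}\, \mathcal{A}_{2 \to 1}\!\bigl( t^{-2/3} x \bigr)
```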
Now this function is particularly easy, because if you take the minus part, S_t^{hypo,-}, that's the easiest one of all: there's nothing to hit. You start above, and on that side everything is minus infinity; where are you going to hit? There's nowhere to hit, so tau is just infinity. And you can easily check from the formula for S (remember, S is that very explicit Airy-function kernel) that S_t at infinity gives 0. So this term is just 0. OK. Well, what about the other way? The other way you have to hit a line. So it's just the same thing, but for the plus part, S_{t,0}^{hypo(h0),+}: I'm starting up here, and I have to hit this line, the hypograph of this guy, right? But you just do that by the reflection principle. It's easy to calculate the time at which a Brownian motion hits a line, so you know the hitting density. If I start at y (so I wrote it at y), the hitting density in tau is y over sqrt(4 pi) tau^{3/2} times e^{-y^2 / (4 tau)}, integrated d tau from 0 to infinity, where tau is the hitting time and y is where you started. So probably some of you calculated this in a course on Brownian motion; it's just the hitting time of a Brownian motion at a line, and you compute it by the reflection principle. And now we have to integrate that against S. This part is really easy, because the boundary value b_tau is just 0. And I'm taking t equal to 1 here, just for the calculation, because you can always rescale to get rid of the t's; it's not a problem. So what you get is the integral of the hitting density against e^{(2/3) tau^3} times the Airy function of x plus tau squared, suitably shifted by y. That's the integral you get when you actually have to compute this thing.
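The hitting density quoted here is easy to sanity-check numerically. A minimal sketch, in the variance-2t convention that matches the e^{-y^2/(4 tau)} on the board; all variable names are my own, and the closed-form total probability erfc(y / sqrt(4T)) follows from the reflection principle:

```python
import numpy as np
from math import erfc, sqrt

# First-passage density of a Brownian motion started at height y > 0,
# hitting the line 0, in the variance-2t convention (heat kernel e^{-y^2/(4t)}):
#   rho(tau) = y / (sqrt(4*pi) * tau^{3/2}) * exp(-y^2 / (4*tau)).
# Its integral over [0, T] is erfc(y / sqrt(4*T)).

rng = np.random.default_rng(0)
y, T, dt, n_paths = 1.0, 2.0, 1e-3, 20_000
steps = int(T / dt)

# Euler walk: in this convention increments are N(0, 2*dt).
increments = rng.normal(0.0, sqrt(2 * dt), size=(n_paths, steps))
paths = y + np.cumsum(increments, axis=1)
p_mc = (paths.min(axis=1) <= 0.0).mean()   # fraction that hit the line by time T

p_exact = erfc(y / sqrt(4 * T))            # closed form via reflection principle

# Discretization misses excursions below 0 between grid points, so p_mc
# slightly underestimates; agreement to a few percent is all we ask.
assert abs(p_mc - p_exact) < 0.05
```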
Here's something you might want to give to your calculus students. It's completely amazing; this is, I think, the most amazing formula I've ever seen. There are books of formulas for Airy functions which don't contain this formula. It's really, really remarkable, an extremely non-trivial fact. And that flip is the reflection which Daniel was talking about yesterday. So that means the kernel, when you do the hypograph of a line, is just a reflection. And now you put the reflection and the 0 together, and all you do is write it down, and you get the kernel which I showed you guys in the first class, just directly. You just write it down. I'm not going to do it, because it takes five minutes just to write, but there's nothing difficult; it's all because of this amazing fact. (How do you prove it?) Nope, you don't do it like this. You think of the thing using the reflection principle. With the reflection principle, you have this reflected point, and so you're basically solving the heat equation with 0 along the line by taking the solution of the heat equation minus its reflected value. And what you do is you compute each piece, and what this tells you is that the reflected term stays, and the non-reflected one just goes to 0. And now you can just check: you integrate from 0 to L, and as L goes to infinity, the one that you started with just disappears. It's completely amazing. So those are just some fun facts about the fixed point. Of course, there's lots more to prove. Let me point out a couple of things which aren't really in this picture, because I don't want to give the impression that the KPZ fixed point is just the solution of everything. It's not; it's sort of the beginning of a lot of things. Random initial data: I haven't been talking about that much, for a very important reason, which is that it's not really part of this picture. This picture says: I want to rescale, and I get a Markov process. But the Markov process should start from deterministic initial data.
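The reflection-principle construction being described (solve the heat equation with zero boundary data on a line by subtracting the reflected solution) can be illustrated directly. A minimal numerical sketch in my own notation, again in the e^{-(x-y)^2/(4t)} convention; the killed kernel vanishes on the line and still satisfies the semigroup property on the half-line:

```python
import numpy as np

def heat_kernel(t, x, y):
    """Heat kernel in the variance-2t convention used in the lecture."""
    return np.exp(-(x - y) ** 2 / (4 * t)) / np.sqrt(4 * np.pi * t)

def absorbed_kernel(t, x, y):
    """Transition density killed at 0 on the half-line x, y > 0:
    the solution minus its reflection through the line (method of images)."""
    return heat_kernel(t, x, y) - heat_kernel(t, x, -y)

# The subtracted (reflected) term enforces the zero boundary condition:
assert abs(absorbed_kernel(0.7, 0.0, 1.3)) < 1e-15

# The killed kernel is still a semigroup on the half-line
# (Chapman-Kolmogorov, checked by trapezoid quadrature):
z = np.linspace(0.0, 25.0, 20001)
dz = z[1] - z[0]
f = absorbed_kernel(0.5, 1.0, z) * absorbed_kernel(0.5, z, 2.0)
lhs = dz * (f.sum() - 0.5 * (f[0] + f[-1]))
rhs = absorbed_kernel(1.0, 1.0, 2.0)
assert abs(lhs - rhs) < 1e-6
```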
But there are, of course, these amazing formulas for starting from two-sided Brownian motion, or one-sided Brownian motion. And they don't really fit into this picture, because the only way you could fit them into the picture would be to integrate over that initial data. So you get a formula, which is our fixed point formula with the expectation taken over the initial Brownian motion. But how are you going to solve that? So that's something that's kind of missing, except that we checked the case where it's half Brownian.