I minus a special kernel — sorry, it takes a second to write this thing — on an extended L² space. Here χ̄_a is the thing which, in the i-th coordinate, is the indicator function that the underlying variable — which I shouldn't call x, I'll call it u — is less than a_i. So it's a vector of indicator functions. And the kernel K_{2→1}(x₁,u₁;x₂,u₂) is going to take me the rest of the blackboard to write, so we don't want to see too many of these; in fact I was foolish to start there. I'm going to write this thing out, but don't worry that it's a bit complicated. Now, this here is the Airy function. So these are like the formulas Pierre was writing; he sort of deferred to me, saying I was going to write them all down. And then there's another term which looks pretty similar — in that term there's a plus — which I'll write out. I just want to show you one so you get the idea; I don't want to bore you with these things too many times in these talks, so I'm just going to do it brutally once. Okay, any typos? Anyone in there who says a minus sign is wrong?

Okay, what do you get from something like this? Well, first: there's a formula. These processes are specific processes whose finite-dimensional distributions are given by specific, explicit formulas. Second: the formulas are quite complicated — they're never much easier than this — and they can be written in all sorts of forms. If you open a book for the Airy₂→₁ process, you might see a different formula, and you'd have to sit there for two hours just to check that it's the same formula. Or it's even worse than that: determinants are invariant under conjugation, so you might have to conjugate the kernel to get the formula you want. So it can be a nasty business, and it's not always clear which formula is the same as which other one. But one can check.

Again, this formula is obtained by getting an exact expression for TASEP started from this initial data — we'll see tomorrow how those expressions are obtained, but it only works for basically three types of initial data; I'll show you exactly which ones — and then doing a scaling limit. You do the scaling limit of the determinantal formula you get for TASEP under this scaling, and this formula comes out.

And one can check that this Airy₂→₁ process crosses over: it goes to the Airy₂ process — I've shortened it, just calling it A₂(x) — as x → −∞, and to the Airy₁ process as x → +∞. But not exactly that: you have to shift. Actually, if you think about it, when you take the limit of the initial data under this scaling at t = 0, there's only an ε^{1/2} out front against the ε^{−1} spatial scale, so any linear piece gets steeper and steeper, and the thing converges to a wedge. That's your initial data: a wedge. And from the wedge you should see something that looks like that; that's where this Airy₂→₁ process lives. And so here you actually have to subtract — or add, sorry — the smaller of zero and x, squared: min(0,x)². That's the parabola you're seeing. So here the rescaled initial data — this guy here — is that wedge h₀. So what we have in our pockets is actually a couple of different Airy processes: there's this Airy₂ and Airy₁.
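To record the general shape these formulas take, in one display: the finite-dimensional distributions of an Airy-type process are given by extended-kernel Fredholm determinants. In one standard convention — this is a reconstruction of the format only, not of the blackboard kernel itself, which is the long expression alluded to above; one published form of the Airy₂→₁ kernel appears in Borodin–Ferrari–Sasamoto's work on that transition — it reads

$$\mathbb{P}\big(\mathcal{A}_{2\to 1}(x_j)\le a_j,\ j=1,\dots,n\big)=\det\big(I-\chi_a K^{\mathrm{ext}}_{2\to 1}\chi_a\big)_{L^2(\{x_1,\dots,x_n\}\times\mathbb{R})},$$

where $\chi_a(x_j,u)=\mathbf{1}_{u>a_j}$, and $\bar\chi_a=1-\chi_a$ is the vector of indicators used on the board.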
Then there was the one Pierre mentioned, called Airy_stat, which you get if you start with Brownian motion. Airy_stat is a funny process, because in space it's just a Brownian motion, but it's got this funny height shift at zero — with a Baik–Rains distribution — which is coupled to the rest of it. And you could also start with one kind of initial data on one side and another on the other side and get a crossover Airy process. So you could stop at zero, put stationary on one side and the wedge on the other, and you'd get an Airy₂→stat process. So there are sort of six of these guys, six basic Airy processes, all with their own crazy formulas to make you insane.

Okay, now that's the story. So it's not too hard to imagine, if you look at this, what's really going on. The limit remembers the initial data, but otherwise it's supposedly universal over all these different types of models; so really the rescaling is sending you to a universal fixed point in the space of models. We've got all our models — TASEP, KPZ, directed polymers, stochastic Burgers equations, stochastic Hamilton–Jacobi equations, or, as Pierre was mentioning, the fronts of 2D chemical reactions, so stochastic reaction–diffusion equations. And then there's this rescaling, h_ε = ε^{1/2} h(ε^{−3/2}t, ε^{−1}x), perhaps minus some huge constant. All these models, under this renormalization, are being sucked into some universal fixed point, which has special self-similar solutions — these Airy processes. And that thing is the KPZ fixed point. The KPZ universality conjecture is that all the models you believe are in the KPZ universality class will, under this rescaling, converge to this KPZ fixed point. So this is the picture of KPZ universality. It's a vague conjecture, because what's the definition of being in the universality class?

But keep in mind that the points in this space are dynamical objects. They're usually Markov processes: you have some initial data, and a Markov process telling you the mechanism of evolution. So these points are Markov processes, and they converge to this KPZ fixed point, which should itself be a Markov process. They're not all Markovian, though. The fronts in a stochastic 2D reaction–diffusion equation — if you don't know what that is, don't worry — are definitely not a Markov process, but somehow a Markov process gets recovered in the limit.

The problem with this universality conjecture is that until recently we didn't know what the KPZ fixed point is. So the subject of this lecture is that I'm going to basically tell you what the KPZ fixed point is. And the way to get it is to solve TASEP and take a limit. So this limit — I should write it in red or something — is completely understood, for at least one model. And then there are a couple of other models like TASEP: TASEP has some friends, things called PushTASEP, et cetera, which we won't get into. There are about five other models, sort of friends of TASEP, which also work.

Now, in this picture the KPZ equation looks like it has no more special a role than TASEP or all those other models, and that's true. In a sense it's not even a special model. It's just a nice one, because you're allowed to take this limit, and we all like continuum models and things like that. Though on the other hand it's bad, because it's so ill-posed.
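Written out, the rescaling being described is the 1:2:3 scaling; the model-dependent constant $C_\varepsilon$ is the "huge constant" just mentioned:

$$h_\varepsilon(t,x):=\varepsilon^{1/2}\,h\big(\varepsilon^{-3/2}t,\ \varepsilon^{-1}x\big)-C_\varepsilon t,\qquad \varepsilon\to 0.$$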
But it does play a special role, because of the fact that on small scales it just looks linear. What happens with the KPZ equation is that if you look on large scales it should go to this fixed point, but if you look on small scales it should just go to ∂_t h = ∂_x²h + ξ, this linear equation. That's another fixed point, a trivial fixed point in the universality class, called the Edwards–Wilkinson fixed point. The KPZ fixed point — just by fiat, if it exists — is invariant under the 1:2:3 scaling; the Edwards–Wilkinson fixed point is invariant under the 1:2:4 scaling. And then there's a sort of weak universality of the KPZ equation: the conjecture is that it's the unique heteroclinic orbit connecting the Edwards–Wilkinson fixed point and the KPZ fixed point. And if you start with models with adjustable non-linearities, like ASEP — in ASEP you don't only jump down, you can jump down or up, and making the rate of jumping down close to the rate of jumping up is the same thing as tuning the asymmetry small — then if you tune the asymmetry small, putting an ε^{1/2} on it, you're already in the 1:2:4 scaling. So there are models like ASEP which, if you tune the asymmetry small, will converge actually to the KPZ equation under this 1:2:4 scaling. I just want to mention that, because there's also that universality, which remains to be proved for a lot of models — though a lot of progress has been made, because Martin Hairer's methods actually allow you to do this in a lot of cases. But that's a very different universality from the real KPZ universality conjecture, which concerns this KPZ fixed point. So that's the weak universality.

So I have eight minutes, and I'm going to tell you what the KPZ fixed point is. I don't know if you'll be appalled or wanting to come back for more, but anyway, here it is. Okay, so the KPZ fixed point is a Markov process with explicit — like that — determinantal transition probabilities. Its state space is a little bit funny. Did I erase this thing? Oh shoot, okay. Its state space is a little bit funny because, although you'd think the state space would just be evolving height functions, you can see that under the KPZ scaling, this is just going to converge to what? It's going to converge to something that looks like this, right? Converges to that function. And this guy converges to that function, right? Does everyone see that? So the state space actually naturally consists of upper semi-continuous functions — for the KPZ fixed point, upper semi-continuous functions. Upper semi-continuous functions have a natural topology, because for a function to be upper semi-continuous just means that its hypograph — all the stuff lying below the graph, which I just drew there — is closed. And hypographs are closed sets in the plane, so you can take the distance between them to be the Hausdorff distance. But we only want to do it locally, so the metric — the topology on these functions — is local Hausdorff convergence. That's very natural, because if you take this initial data here — that's your initial function — then because of the lateral growth, of course it's going to grow to be like that. That's why Hausdorff is the right topology here.
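In symbols, since the hypograph description is easy to garble: for an upper semi-continuous $h:\mathbb{R}\to[-\infty,\infty)$,

$$\mathrm{hypo}(h)=\{(x,y)\in\mathbb{R}^2:\ y\le h(x)\}\ \text{is closed},\qquad h_n\to h\ \text{in UC}\iff \mathrm{hypo}(h_n)\to\mathrm{hypo}(h)\ \text{locally in Hausdorff distance}.$$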
So you start with some function f — some function f in UC; this is the probability starting from f — and now you ask the question: at time t later, what's the probability that h is less than or equal to some new function g? Okay, that means less everywhere. So there's some function g, and you just ask: is it less than that function? And this will be given by a Fredholm determinant, I minus some operator which is constructed out of f and g, which I'm going to show you. The operators have names — they're called K^{hypo(f)}_{t/2} and K^{epi(g)}_{−t/2} — and this whole determinant is being calculated on L²(ℝ). So the whole thing's being hidden inside these operators K^{hypo} and K^{epi}, which I'm going to describe to you now. They're a little bit weird.

Well, first of all, epi refers to the epigraph of g. And it's completely natural: if I ask for a function to be less than this thing, and we were talking about upper semi-continuous functions, it's completely natural that g should be lower semi-continuous. The lower semi-continuous g's are just minus the upper semi-continuous f's. And in fact, the hypo operator is just the epi operator observed from above: with ρ the reflection operator, ρf(x) = f(−x), the hypo operator is nothing but ρ K^{epi(−ρf)}_{−t} ρ*. So anyway, they're basically the same operator, but one's being looked at from above and one's being looked at from below. I could almost have written it that way, but I wouldn't have had space.

Okay, now to build K we need to write down some operators; it's just going to take a second. So we start with the following. S_{t,x} is the operator e^{x∂²} — so that's just the heat semigroup, right? ∂² is just the second derivative operator. Except there's a bit of a twist here: you were told never to run the heat semigroup backwards in time, but x ranges over all of ℝ. It's okay, though, because we're going to add on (t/3)∂³, and the ∂³ saves you from the backwards heat equation. So, believe it or not, this operator makes perfect sense, and in fact it has a simple integral kernel — well, simple enough: t^{−1/3} e^{2x³/(3t²) − zx/t} Ai(−t^{−1/3}z + t^{−4/3}x²). You might start seeing things which look familiar; if you can't read this, it's in the notes anyway, so don't even worry. The point is, it has a nice integral kernel, and amazingly, even though you're solving the heat equation backwards, the kernel is explicit and nice. It actually behaves well as x → −∞ if you look: there's an e^{x³}-type factor, and x³ → −∞, so it decays. Okay, so now I have to tell you what the Ks are.
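Before the Ks get built, here are the two formulas described so far, written as displays. This is a reconstruction from the verbal description; it agrees with the published form of the KPZ fixed point transition probabilities:

$$\mathbb{P}_f\big(h(t,x)\le g(x)\ \text{for all}\ x\in\mathbb{R}\big)=\det\big(I-K^{\mathrm{hypo}(f)}_{t/2}K^{\mathrm{epi}(g)}_{-t/2}\big)_{L^2(\mathbb{R})},$$

with building block

$$S_{t,x}=e^{x\partial^2+\frac{t}{3}\partial^3},\qquad S_{t,x}(z)=t^{-1/3}\,e^{\frac{2x^3}{3t^2}-\frac{zx}{t}}\,\mathrm{Ai}\big(-t^{-1/3}z+t^{-4/3}x^2\big).$$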
So — I'll get five minutes? One minute. I won't make it, but let me at least finish this. Okay, so I have my function g, and g is a lower semi-continuous function, so it sort of looks like — oops, not like that; wow, okay — it could look like that. There's a lower semi-continuous function: that's g. You should think of it as being identified with its epigraph. Now, what I'm going to do is split somewhere, at a place called x, and send a Brownian motion until it hits the epigraph. So x is a position on here; I'm going to send a Brownian motion from height v until it hits the epigraph, at a time called τ — τ is a time, B(τ) is a position in space. We'll send a Brownian motion the other way too; maybe it hits the epigraph over there. So we make an operator called S̄^{epi(g⁺_x)}. Here g⁺_x just means g on this side of x, and g⁻_x just means g on the other side. The operator is the expectation, for a Brownian motion starting at v, of S_{t,x−τ} evaluated at B(τ) — roughly, S̄^{epi(g⁺_x)}_{t,x}(v,·) = E_{B(0)=v}[S_{t,x−τ}(B(τ),·)]. That operator is the epi hit operator, and then there's a similar one on the other side with the same definition. You want to think of this as a hit operator: the Brownian motion hits the thing, and then we evaluate S at that point.

And then there's the thing called the Brownian scattering operator: you take I minus (one factor of the form S minus a hit operator) times (another factor of the same form). By the way, the t in S can have either sign — it makes sense as long as t is not equal to zero — and this holds for all x. So you should think of one factor as saying "I don't hit to the left of x" and the other as saying "I don't hit to the right of x." And you should read the I-minus as follows: it's an operator, but it also has a kernel in two variables, u₁ and u₂ — a particle gets sent in from infinity to u₁, has to hit the epigraph of g, and then exits to infinity at u₂. That's the Brownian scattering operator. You compute that, put it in here, and that's the KPZ fixed point formula. You can do that for any function, and that thing gives you the exact transition probabilities of the Markov process.

Although it looks like a crazy formula, in fact, if we were back in this special case, in five minutes you'd get that formula out of it. All you do is evaluate, because in this special case one side is flat and the other side is like this: hitting on one side means you don't hit at all, because there's nothing to hit, and hitting on the other side means hitting a straight line, which you can do by the reflection principle. You just compute, and out pops that formula. So although the formula looks very abstract, it readily reproduces the known formulas, okay? And then, well, of course, the big question is where this formula comes from, and that's going to be the subject of the next three lectures. I think, since we're running a little late, we'll save questions for lunch and the subsequent lunch. Let's thank Jeremy again.
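A quick supplement on the reflection-principle step invoked at the end: for a standard Brownian motion B started at v below a level a, the reflection principle gives

$$\mathbb{P}_v\Big(\sup_{s'\le s}B(s')\ge a\Big)=2\,\mathbb{P}_v\big(B(s)\ge a\big),\qquad a>v,$$

and hitting a sloped line a + bs reduces to hitting a level by a Cameron–Martin/Girsanov shift, which is why the half-flat case can be evaluated in closed form. (The Brownian motion in the lecture may carry a different diffusion constant; this is the standard normalization.)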