So lecture one will be a sort of general introduction to KPZ universality. In lecture two, I'll solve the model TASEP. In lecture three, we'll take the scaling limit of the solution, and we'll get the KPZ fixed point. And lecture four will be the story about the KPZ fixed point, what it is. That's a bit rough; these things will actually all be mixed up a little bit, and I'll even tell you a bit about what the fixed point is today. Okay. So everything I'm going to be talking about is recent joint work with Daniel Remenik and Konstantin Matetski, who are both here and may correct me at various moments. It's pretty recent, it's on the arXiv, and it's not really in finished form, so we invite you all to make comments; there may be nicer ways to write a lot of these things. Okay. So today I'll start by talking about what Pierre talked about, but maybe a little bit slower. Let's start with a simple model, just to give an idea of what people are trying to do with this KPZ universality: the Eden model. This is just for illustrative purposes. It's a model where it's simple to see what the model is, but nobody has any results whatsoever. So if you're like me and you get bored in these talks, you can just sit there and try to solve the Eden model. Okay. So the Eden model is on a lattice, and this one is two-dimensional; almost everything else we'll discuss is a one-dimensional height function, one-d random growth. We start with the origin occupied and then choose a neighboring site at random to occupy; in the next step you choose one of the new neighbors at random, et cetera. Or you can do the continuous-time version: every site neighboring the cluster gets joined onto the cluster at rate one. What you see after a long time is a cluster which looks approximately like a ball. Maybe it's not a ball. At time t, the radius would be of order t, because everything's joining at rate one. And what we're interested in is the fluctuations of the boundary, because there are universal processes which govern the boundary. So I look out in this direction, or this direction, or this direction; it's the same in every direction. And you'll see fluctuations of size t to the one-third, on a lateral scale of size t to the two-thirds. I'll try to write bigger. So just to say it again: the cluster has radius around t, and you see fluctuations of size t to the one-third, but on a spatial scale of t to the two-thirds. And there are universal processes, called Airy processes, which govern these fluctuations, and some of them occur in random matrix theory. That's the story. Now, as I said, for this model nothing is known; there's no result whatsoever. Maybe it isn't even in the KPZ universality class; there's some question what exactly the KPZ universality class is. One of the problems with a model like this is that the boundary isn't even a function, right? It could easily overhang: if I look out here, where do I look? So we restrict ourselves to things which are more like height functions.
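Before moving on, here is a minimal sketch of the Eden dynamics just described, in the continuous-time version where each empty neighboring site joins at rate one, so that the embedded jump chain simply attaches a uniformly chosen boundary site each step. The function name and the step count are illustrative choices, not anything from the lecture.

```python
import random

def eden_cluster(steps=5000):
    """Grow an Eden cluster on Z^2. Every empty site adjacent to the
    cluster joins at rate 1, so the embedded jump chain attaches a
    uniformly chosen boundary site at each step."""
    cluster = {(0, 0)}
    boundary = {(1, 0), (-1, 0), (0, 1), (0, -1)}  # empty neighbors of cluster
    for _ in range(steps):
        site = random.choice(tuple(boundary))      # uniform boundary site
        cluster.add(site)
        boundary.discard(site)
        x, y = site
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb not in cluster:
                boundary.add(nb)
    return cluster
```

Measuring how far the boundary sites sit from the origin at a sequence of times is exactly the experiment where the t to the one-third fluctuations on a t to the two-thirds lateral scale are expected, though unproven, to show up.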
So now we look at height functions, and the most famous example is of course the KPZ equation itself. The KPZ equation is for an evolving height function $h(t,x)$ on $\mathbb{R}$; so $x$ here is in $\mathbb{R}$, $t$ is greater than zero, and it's supposed to evolve according to the following not very well posed stochastic PDE:

$$\partial_t h = \tfrac12(\partial_x h)^2 + \tfrac12\,\partial_x^2 h + \xi$$

(the particular constants are just a convention). Okay, so let me explain what all these things are. The term $\tfrac12\,\partial_x^2 h$ is just the heat equation; it's a smoothing mechanism. The $\xi$ is supposed to be a space-time white noise, which says that the thing is being randomly perturbed at different positions and at different times, independently. The interesting term, the most interesting term, is the nonlinearity $\tfrac12(\partial_x h)^2$. It's a lateral growth mechanism, and that's the thing we're going to be looking at. So what is lateral growth? The idea is that you should think of the surface growing randomly, but outwards: the growth is actually in all directions at once, just like the Eden model. Think of particles coming down from the side and sticking to the surface. And of course, if the surface grows outwards like that, then the vertical growth is proportional to $\sqrt{1 + (\partial_x h)^2}$. Right? So it's a nonlinear function of the slope. That's the key thing. It could be any function; in the picture I drew, it's $\sqrt{1 + (\partial_x h)^2}$. Anyway, you've got some growth function $f(\partial_x h)$, which it's reasonable to suppose is symmetric, $f(\partial_x h) = f(-\partial_x h)$, because why should it grow more this way than that way? Not much else. But there's some magic by which, if you try to write this equation, smoothing out the noise and taking a limit, then no matter what function $f$ you start with, you get the KPZ equation: by the symmetry, $f(\partial_x h) \approx f(0) + \tfrac12 f''(0)(\partial_x h)^2$ near a flat slope, and only the square survives. Okay, so that's KPZ, some sort of basic equation. Now the model we'll be talking about a lot is called TASEP, and TASEP is a very special discretization of the KPZ equation. Now the height function, instead of being a function on $\mathbb{R}$, is a function on $\mathbb{Z}$. I'll draw it as if it's a function on $\mathbb{R}$, because otherwise it's hard to draw, but all it ever does is go up or down by 1. That's the rule. So the height function is a random walk path, which just means it goes up or down by 1 at each step. Is that fair enough? Okay. So that's $h$ at some particular time, and the dynamics is very, very simple: if you've got an up step followed by a down step, a little peak, it flips to a down step followed by an up step at rate 1. That's all that happens, and it happens independently for every one of the peaks: each local maximum jumps down by 2. That's the full dynamics, and that's TASEP. It's amazing that such a simple model could actually have all this structure, but it really is a discretization of KPZ. You can see that because the whole dynamics is just

$$dh(x) = -2\,\mathbf{1}\bigl[h\text{ has a local max at }x\bigr]\,dP_x(t),$$

where the $P_x$ are independent rate-one Poisson processes: when there's an up-down at $x$, a Poisson clock rings and you do minus 2. That's it. But this indicator is exactly equal to a discrete Laplacian plus a product of the forward and backward discrete derivatives; I did this calculation on the plane, and I won't write out every step because you know what I mean, I think.
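For the record, here is the little calculation being alluded to, written out; everything past the definitions of $\nabla^{\pm}$ and $\Delta$ is a convention-free identity.

```latex
% The full dynamics: each local max flips down by 2 at rate 1, driven by
% independent rate-1 Poisson processes P_x:
dh(x) = -2\,\mathbf{1}\bigl[h\text{ has a local max at }x\bigr]\,dP_x(t).
% Since the slopes are \pm 1, with \nabla^{\pm} the forward/backward
% differences and \Delta = \nabla^{+} - \nabla^{-} the discrete Laplacian,
\mathbf{1}\bigl[\text{local max at }x\bigr]
  = \tfrac14\bigl(1+\nabla^{-}h(x)\bigr)\bigl(1-\nabla^{+}h(x)\bigr)
  = \tfrac14\bigl(1-\Delta h(x)-\nabla^{+}h(x)\,\nabla^{-}h(x)\bigr),
% so the jump rate is a discrete Laplacian plus a discrete (\partial_x h)^2:
dh(x) = \Bigl[\tfrac12\Delta h(x)
      + \tfrac12\,\nabla^{+}h(x)\,\nabla^{-}h(x) - \tfrac12\Bigr]\,dP_x(t).
```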
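And in the same spirit as the Eden sketch earlier, a minimal Gillespie-style simulation of one jump of this dynamics; the frozen boundary and all the names are my illustrative choices.

```python
import random

def tasep_jump(h):
    """One jump of the TASEP height dynamics, Gillespie style. h is a list
    of integer heights with |h[x+1] - h[x]| = 1; the two boundary sites
    are frozen for simplicity. Each local max (up-step then down-step)
    flips down by 2 at rate 1, so the next flip happens at a uniformly
    chosen local max after an Exp(number of maxima) waiting time."""
    maxima = [x for x in range(1, len(h) - 1)
              if h[x] - h[x - 1] == 1 and h[x + 1] - h[x] == -1]
    if not maxima:
        return h, float("inf")               # nothing can move
    wait = random.expovariate(len(maxima))   # time until the next flip
    h[random.choice(maxima)] -= 2            # the peak /\ becomes a valley \/
    return h, wait
```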
So TASEP you can just think of as a very, very special discretization of KPZ. Now there are lots of other models in the class, and they don't all have to be randomly growing height functions in the usual sense. For example, there's the Hopf-Cole transformation. Pierre already said this: Hopf-Cole just means taking $h = \log Z$ and writing an equation for $Z$. If you look at the KPZ equation and write the equation for $e^h$, you can just write it; you have to assume that $\xi$ is nicer than space-time white noise, so let's just assume $\xi$ is nice. Then you get the following equation, the stochastic heat equation:

$$\partial_t Z = \tfrac12\,\partial_x^2 Z + \xi\,Z.$$

So in a sense the equation is linearized by this transformation. In fact, KPZ doesn't make good sense as written, because... well, okay, let me go back for a second. One of the key things about KPZ and TASEP is that you know the invariant measure. It turns out that if you start TASEP with a simple random walk, meaning you flip a coin at each site to determine whether you go up or whether you go down (the coin could be biased, but in our discussion let's just assume it's fair), so you've got a random walk path, then the dynamics actually preserves the random walk: what you see later is again a random walk, so that's invariant. Well, it's not exactly invariant. You can see that something's going on; the whole thing is going down. So it's invariant up to height shifts. Here too, for KPZ, Brownian motion is invariant modulo height shifts: if you start with a Brownian motion, what you'll see later is a Brownian motion, shifted. Now, if you haven't seen this before, you should be very, very surprised that you're allowed to start with Brownian motion at all, because you've got this $(\partial_x h)^2$ term, and that's the problem with KPZ. KPZ is really ill-posed, because $h$, for any positive time, locally looks like a Brownian motion, with local variance 2 in this case; that was the root 2 that Pierre saw. And so the $(\partial_x h)^2$ term is obviously illegal: you're not allowed to differentiate Brownian motion, and you're certainly not allowed to square what you get. So really there's a minus infinity hiding in there, and that minus infinity is the thing that renormalizes the term. On the other hand, the stochastic heat equation is actually in good shape. It's one of the few stochastic PDEs you can actually make sense of. You can start with space-time white noise, make sense of $Z$, and the answer is a nice continuous function. It turns out to be never equal to zero, as long as you start with something non-negative which is positive on some interval; $Z$ should be positive here. So then you can take the log of it, and that log you can just decree to be the solution of the KPZ equation. So there's a way out, and that's called the Hopf-Cole solution of KPZ. It just means: start with the stochastic heat equation, solve it (you can solve it yourself), take the log, and that's the solution of KPZ. And of course Martin Hairer has a very elaborate theory of KPZ, which in the end proves that his solution is the Hopf-Cole solution. And of course when he makes sense of the equation, there's definitely a minus infinity hiding there; that term has to be renormalized. All right, okay.
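By the way, the computation behind "you can just write it" is one line, assuming a smooth $\xi$ and the coefficient convention used above; with white noise, this chain rule step is exactly where the renormalization hides.

```latex
% With Z = e^{h}: \partial_x Z = Z\,\partial_x h and
% \partial_x^2 Z = Z\bigl(\partial_x^2 h + (\partial_x h)^2\bigr), so
\partial_t Z = Z\,\partial_t h
  = Z\Bigl(\tfrac12(\partial_x h)^2 + \tfrac12\,\partial_x^2 h + \xi\Bigr)
  = \tfrac12\,\partial_x^2 Z + \xi\,Z.
```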
Now the Hopf-Cole picture means that, again if you take a smooth $\xi$, you can rewrite $Z$ using the Feynman-Kac formula, which says that

$$Z(t,x) = \mathbb{E}_x\Bigl[e^{\int_0^t \xi(s,\,B_s)\,ds}\,Z_0(B_t)\Bigr],$$

an expectation over Brownian motion paths $B$ starting at $x$. So that's the Feynman-Kac formula for the solution, if $\xi$ were a nice enough function for it to work. Now it turns out it actually works in this case too; it has to be interpreted slightly differently, but that's okay. The main thing is this: think of $\xi$ as slightly regularized, and what you have is the free energy of paths, Brownian motion paths weighted by how much of this background field $\xi$ they pick up. And that's called a directed polymer in a random environment. So just to say it again: suppose $\xi$ is slightly nicer than white noise; you could take a lattice version of all this, for example. You freeze the $\xi$, and then you look at Brownian motion paths weighted by the weight $e^{\int \xi}$. Then $Z$ is the partition function of that model, and $\log Z$, which is KPZ, or some discrete version of KPZ, is the free energy of that model. In other words, the conclusion is that we're not only talking about random growth, but also about things like free energies of directed polymers. That's the story. And there are lots of other models. For example, if you take $u = \partial_x h$, it satisfies a noisy Burgers equation; it's easy to write down the equation it should satisfy. So you can think of these things as random fluids also. So there are a lot of different interpretations of these models. Now let's get to where we want to get to: scaling. One of the nice things about KPZ, versus TASEP or any of the other random growth models you can make, is that rescaling the equation is fairly convenient. It's hard to rescale TASEP; we do know how to rescale the equation. So we want to look on big scales. Let's take

$$h_\varepsilon(t,x) = \varepsilon^{1/2}\,h(\varepsilon^{-z}t,\,\varepsilon^{-1}x).$$

Okay, so why did I do that? I'm looking on big scales in space, and I'll just call that scale $\varepsilon^{-1}$. Then I'm looking for a time scale $\varepsilon^{-z}$ which will work. And I'm forced to take the prefactor $\varepsilon^{1/2}$: that's because at $t = 0$, if I took the invariant measure, Brownian motion, it would only rescale like that. So there's no choice but to do that, and we're just looking for the $z$ that's going to get us something interesting. Now, KPZ itself is not scaling invariant. When you do this, you get (I need to put on my glasses or I'll get it wrong)

$$\partial_t h_\varepsilon = \tfrac12\,\varepsilon^{3/2-z}(\partial_x h_\varepsilon)^2 + \tfrac12\,\varepsilon^{2-z}\,\partial_x^2 h_\varepsilon + \varepsilon^{1-z/2}\,\xi_\varepsilon.$$

So this is how the different terms scale: three halves minus $z$, two minus $z$. The $\xi_\varepsilon$ isn't exactly the same $\xi$ you started with; it's a rescaled white noise, equal in distribution. Okay. Now, the point here is that different terms in the equation scale differently: it's not a scaling-invariant equation. You might think the Laplacian and the noise scale differently, but that's kind of a cheat: if you remember your stochastic calculus, Brownian motion is supposed to scale like the square root of the other guy. So actually those two terms are scaling together, and it's the nonlinearity that scales differently. Okay. Now, you can also see that unless you take $z = 3/2$, you're in trouble as $\varepsilon$ goes to zero. Right? So for large scales, $z = 3/2$. That's the important thing.
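To make "a lattice version of all this" concrete, here is a toy directed polymer in 1+1 dimensions, with the partition function summed exactly by a transfer-matrix recursion; the Gaussian disorder, the inverse temperature beta, and all the names are my illustrative assumptions, not the lecture's setup.

```python
import math
import random

def polymer_partition_function(N=50, beta=1.0, seed=0):
    """Partition function of a toy 1+1-dimensional directed polymer:
    nearest-neighbor walk paths x_0 = 0, x_{i+1} = x_i +/- 1, each
    weighted by exp(beta * sum_i omega(i, x_i)) in a frozen i.i.d.
    Gaussian field omega, summed exactly by a transfer-matrix recursion."""
    rng = random.Random(seed)
    omega = [{x: rng.gauss(0.0, 1.0) for x in range(-i, i + 1, 2)}
             for i in range(N + 1)]
    Z = {0: 1.0}  # Z[x] = total weight of paths reaching (i, x)
    for i in range(1, N + 1):
        Z = {x: (Z.get(x - 1, 0.0) + Z.get(x + 1, 0.0))
                * math.exp(beta * omega[i][x])
             for x in range(-i, i + 1, 2)}
    return sum(Z.values())  # log of this is the free energy, cf. h = log Z
```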
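Returning to the scaling: here is the bookkeeping behind those exponents, under the same coefficient convention; the input for the noise term is the white-noise scaling $\xi(\varepsilon^{-z}t,\varepsilon^{-1}x) \stackrel{d}{=} \varepsilon^{(z+1)/2}\,\xi_\varepsilon(t,x)$, which is the "square root" remark above.

```latex
% Plug h_\varepsilon(t,x) = \varepsilon^{1/2} h(\varepsilon^{-z}t, \varepsilon^{-1}x)
% into \partial_t h = \tfrac12(\partial_x h)^2 + \tfrac12\,\partial_x^2 h + \xi:
\partial_t h_\varepsilon
  = \tfrac12\,\varepsilon^{3/2-z}\,(\partial_x h_\varepsilon)^2
  + \tfrac12\,\varepsilon^{2-z}\,\partial_x^2 h_\varepsilon
  + \varepsilon^{1-z/2}\,\xi_\varepsilon.
% Note 1 - z/2 = (2-z)/2: the noise exponent is half the Laplacian exponent,
% so those two terms live or die together. Taking z = 3/2 keeps the
% nonlinearity O(1) as \varepsilon \to 0 (large scales); taking z = 2 keeps
% the other two O(1) as \varepsilon \to \infty (small scales).
```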
That's the dynamic scaling exponent, and if you think about it for a second, $z = 3/2$ is exactly the same thing as saying that at time $t$ you see fluctuations of size $t^{1/3}$ on a spatial scale of $t^{2/3}$: the height scales as the one-third power of the time and the space scales as the two-thirds power of the time. It's exactly the same thing. That's called the 1:2:3 scaling. Okay. The other terms are different: they dominate on small scales. If you were sending $\varepsilon$ to infinity, then you would have to take $z = 2$, and then the linear term dominates: the nonlinearity drops out and you're left with the linear equation, which sends you to a Brownian motion with variance 2; you can just check. So that's why on very small scales the thing immediately looks like a Brownian motion with variance 2. And on large scales it's dominated by the nonlinearity, and that's the big question; that's where all these Airy processes come in. But the main thing to see is that the KPZ equation itself is not scaling invariant: it's going to something else under these scalings. Under the $z = 3/2$ scaling, the Laplacian and the noise are trying to drop out and you're dominated by the nonlinearity. So let me be a little bit more precise about this now. What I'm going to do is explain exactly what you see here, in special cases, in the context of TASEP. So I take the TASEP height function (this is just to remind you that we're talking about the TASEP height function as opposed to the KPZ height function), and I look at these scales: $\varepsilon^{-3/2}t$ in time and $\varepsilon^{-1}x$ in space. All these models have all sorts of finicky constants which you have to get straight so that everything matches previous results; here it's a factor of two in the time scale and a factor of two in the space scale, and there's a huge constant you have to subtract, because on these time scales the surface has gone down a long way. Okay, so let's look at that: the rescaled height function of TASEP after a very, very long time. And let's just look at this thing at time $t = 1$. It doesn't actually matter, because time is being rescaled, so $t = 1$ is as good as anything else. Okay. So the following is well known, and hopefully after three lectures you'll understand why it was well known, but I'm not going to go through the derivation now. (Yes, thanks, that's an $x$. Everything else good?) So you take the limit of this thing as $\varepsilon$ goes to zero, and it was known how to calculate it in a couple of cases. It depends on the initial data: one starts with the initial data, derives some very exact formula, and rescales the exact formula. So here's one initial data, the one Pierre talked about; we call it narrow wedge. If you take the limit there, you get something called the Airy_2 process, minus a parabola. Or you could start with a height function which is flat, going on forever in both directions; we'll call that flat, because at any time it looks flat. And you get something else, the Airy_1 process. These are processes which, if you haven't seen them in the courses yet (no? okay), I'll describe in lots of detail later. But for example, for each $x$, the Airy_2 process is distributed according to the Tracy-Widom GUE distribution: the one-point marginals are $F_{\mathrm{GUE}}$.
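In symbols, and with the caveat that the factors of two and the linear shift here follow one common normalization, so the constants shouldn't be taken too seriously, the statement being described is:

```latex
% Rescaled TASEP height function (the twos and the linear term are the
% finicky constants mentioned above):
h_\varepsilon(t,x)
  = \varepsilon^{1/2}\Bigl[h\bigl(2\varepsilon^{-3/2}t,\,2\varepsilon^{-1}x\bigr)
  + \varepsilon^{-3/2}t\Bigr].
% Known limits at t = 1 as \varepsilon \to 0, up to convention-dependent
% constants:
h_\varepsilon(1,x) \longrightarrow \mathcal{A}_2(x) - x^2   % narrow wedge
h_\varepsilon(1,x) \longrightarrow \mathcal{A}_1(x)          % flat
```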
This $F_{\mathrm{GUE}}$ is the distribution of the rescaled top eigenvalue of a GUE random matrix. The Airy_1 process has Tracy-Widom GOE one-point marginals, up to a constant scaling. The Airy_2 process actually arises in random matrices: if you take Dyson Brownian motion for the GUE and look at the top eigenvalue, it actually converges to the Airy_2 process. And you would think the same might be true here, but it's not: the Airy_1 process is not what you get if you take the top eigenvalue of Dyson Brownian motion for the GOE. So there's some link with random matrices, but the link is not complete; it's more like a bunch of coincidences. I want to write the formulas for these processes, but if I write the formulas for both of them, it will take forever. So I'm going to write the formula for a third one, which has both in it, as we'll see. So take the initial condition which is half wedge, half flat: in particle language this means all the particles are to the left of the origin (we didn't say the particle language yet), so the height function goes down to the left of the origin, and to the right it's flat, at zero. Then you get a crossover process called Airy_{2 to 1}. It's a process which looks like the Airy_2 process way over on one side and looks like the Airy_1 process way over on the other side. So it encapsulates both, which means I only have to write one formula, and I will write the formula for it. Okay, so the formulas look like this; this will just give you an idea of what a formula looks like, so don't take it terribly seriously right now. You ask for the probability that this Airy_{2 to 1} process (that's just the name of the process) sits below given values at a bunch of different positions, the finite-dimensional marginals, and they're given by Fredholm determinants. One thing people should notice: usually we think of a process as parameterized by time, but time here is called $x$ and it lives in $\mathbb{R}$. You just have to get used to that. Okay, so the answer is a Fredholm determinant; maybe I'll go into later what that is.
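Since they'll keep appearing, here is the standard definition of a Fredholm determinant; this is textbook material rather than anything special to the model.

```latex
% For a kernel K acting on L^2(X), when the series converges (e.g. K trace
% class), the Fredholm determinant is
\det(I - K)_{L^2(X)}
  = \sum_{n=0}^{\infty} \frac{(-1)^n}{n!}
    \int_{X^n} \det\bigl[K(x_i,x_j)\bigr]_{i,j=1}^{n}\,dx_1\cdots dx_n.
% The Airy marginals above have this form, with an explicit Airy-type
% kernel in place of K.
```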