of $K_t$, the correlation kernel, between the projections $\bar\chi_a$, on this extended $L^2$ space. So we have little $\ell^2(\{n_1,\dots,n_m\}\times\mathbb{Z})$; that's the set of levels times $\mathbb{Z}$. And of course, the interesting thing is the correlation kernel. It's an extended kernel, so it's got four arguments, $K_t(n_i,x_i;n_j,x_j)$, and it is

$$K_t(n_i,x_i;n_j,x_j) \;=\; -\,Q^{n_j-n_i}(x_i,x_j)\,\mathbf{1}_{n_i<n_j} \;+\; \sum_{k=1}^{n_j}\Psi^{n_i}_{n_i-k}(x_i)\,\Phi^{n_j}_{n_j-k}(x_j).$$

The first term is the kind of easy part. Now, the $\Psi$'s, you know what they are, they're just over there on the board. But let me write them out, because here $k$ is positive, and when $k$ is positive the factor that used to produce the pole at $w=1$ appears with a nonnegative power. So there's actually no pole at 1 anymore, and you can get rid of the 1: the contour only needs to circle the origin. It's just a comment. So

$$\Psi^{n}_{n-k}(x) \;=\; \frac{1}{2\pi i}\oint_{\gamma_0}\frac{dw}{w^{k}}\,(\cdots),$$

maybe written a tiny bit differently from what's over there, but it's almost the same thing. OK, great. So the $\Psi$'s are just those things.

And everything looks fine, except: what are the $\Phi$'s? That's the problem. (Yes, the $Q$ is that: you think of $Q$ as a matrix, you take that matrix to that power, and you evaluate it at $(x_i,x_j)$. And $n_j$ is bigger than $n_i$ there, so you don't have to worry about the power being negative.) OK, I'll have to erase that, I think. OK, but what about the $\Phi$'s? Maybe I'll write it here. Here's the problem: the $\Phi$'s satisfy conditions 1 and 2, and that's all. They are not explicit. They are the biorthogonal functions to the $\Psi$'s; that's what the $\Phi$'s are supposed to be, and they're supposed to be polynomials of the right degree. That's the answer. This came about because you had to invert some matrix, and you don't know how to invert it, so you encode the inversion in these functions being biorthogonal. So once again: your initial data is encoded in the $\Psi$'s. There it is, there's your initial data. Then you produce these $\Psi$ functions, you are asked to biorthogonalize them by finding the $\Phi$'s, which are polynomials of the right degree, and then you build the kernel out of them. And that kernel tells you the probabilities. And of course, I think everybody in this room can see the problem with this: how on earth are you going to do it?

In fact, the $\Phi$'s were known in one case. And actually, you can kind of see it already here. If you look at this thing, the first thing you're going to ask yourself is: maybe there's some case where that matrix is Toeplitz. And it is Toeplitz exactly when the $x$'s look like $-i$ and the $y$'s look like $-i$ as well: if $x_i = -i$ and $y_i = -i$, that's a Toeplitz matrix, and then you can start solving. So if it's Toeplitz, you're in really good luck. But of course, that's a special case: it's Toeplitz for step initial data.

Another thing to notice about the formula, and actually this is kind of interesting: big $N$ doesn't appear very explicitly in it. The formula depends only very weakly on big $N$, and for step initial data it just doesn't depend on big $N$ at all. That's because particles don't feel the particles behind them. If you're asking about the first $n$ particles, they just don't care about all those other particles back there. Now, of course, that doesn't mean you can get away with just stopping here, because the $n$'s you're interested in may not be the first couple of $n$'s; that's not where you want to look. So don't get that impression.

OK, for step initial data you actually can basically just guess the $\Phi$'s, which is what they did. And it's this:

$$\Phi^{n}_{n-k}(x) \;=\; \frac{1}{2\pi i}\oint_{\gamma_0}dw\,(\cdots),$$

and it almost looks like the same thing as the $\Psi$'s. (There's a little numerical aside on evaluating such contour integrals below.)
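An aside on actually computing with these objects: the $\Psi$'s and $\Phi$'s are contour integrals around the origin, and such integrals are easy to evaluate numerically, since the trapezoid rule on a circle is spectrally accurate for analytic integrands. A minimal sketch in Python; the integrand $e^{tw}/w^{k+1}$ below is a simplified stand-in whose exact value we know from the residue theorem, not the actual TASEP integrand, which carries the extra factors written on the board.

```python
import numpy as np
from math import factorial

def contour_integral(f, r=0.5, n=256):
    """(1 / (2*pi*i)) * integral of f(w) dw over the circle |w| = r,
    computed by the trapezoid rule (spectrally accurate here)."""
    theta = 2 * np.pi * np.arange(n) / n
    w = r * np.exp(1j * theta)
    # dw = i*w*dtheta, and the i/(2*pi) factors cancel against 1/(2*pi*i)
    return np.mean(f(w) * w)

# Sanity check on the stand-in integrand e^{tw} / w^{k+1}:
# the residue theorem gives exactly t^k / k!.
t, k = 2.0, 3
val = contour_integral(lambda w: np.exp(t * w) / w**(k + 1))
print(val.real, t**k / factorial(k))   # both ~ 1.3333...
```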
Well, to the uneducated eye, you can hardly tell the difference between these $\Phi$'s and the $\Psi$'s: the $e^{tw}$ is there, you just get that. OK. So it was known for that case, and for a couple of other cases; it turns out that any time you put the particles periodically in a box, you can get a formula like this. OK, well, that's it. That's where things stopped, around that period, 2004 to '06.

So last summer, we discovered how to get the $\Phi$'s, which is what I'm going to show you, probably next class. What I want to do now is give you a calculation which will give you some intuition about how this comes about. So first of all, we know that if we start with step initial data, then we're supposed to get the Airy$_2$ process, and that's roughly how the Airy$_2$ process was obtained. So we take the 1:2:3 scaling limit. If you remember, that's

$$h_\epsilon(t,x) \;=\; \epsilon^{1/2}\big[\,h(2\epsilon^{-3/2}t,\;2\epsilon^{-1}x) + \epsilon^{-3/2}t\,\big].$$

So we want to take that scaling limit of the $h$'s. Now, our formulas are written in terms of the $x$'s, but that's OK: $h(t,z)$ is basically the inverse function of the $x$'s, so the $x$'s and the $h$'s carry the same information. And $h(0,x)$, well, you have step initial data, so your initial condition looks like this wedge, and under the rescaling the wedge collapses down to a narrow wedge. And $h_\epsilon(1,x)$ is supposed to go to the Airy$_2$ process minus a parabola.

OK, so I won't go through the details. You have to ask what's the probability that this is less than some things, which turns into the probability that the $x$'s are greater than some complicated expression involving $\epsilon^{-3/2}$. You plug that in, and you make this sum, which can actually be done and turned into one contour integral. And then you take the limit of the kernels, by steepest descent, and you get the following formula:

$$P\big(\mathcal{A}_2(u_i)\le g_i,\ i=1,\dots,m\big) \;=\; \det\big(I - \bar\chi_g\,K^{\mathrm{ext}}_{\mathrm{Ai}}\,\bar\chi_g\big)_{L^2(\{u_1,\dots,u_m\}\times\mathbb{R})},$$

with this extended Airy kernel,

$$K^{\mathrm{ext}}_{\mathrm{Ai}}(u_1,\xi_1;u_2,\xi_2) \;=\; \int_0^\infty d\lambda\; e^{-\lambda(u_1-u_2)}\,\mathrm{Ai}(\xi_1+\lambda)\,\mathrm{Ai}(\xi_2+\lambda)$$

for $u_1\ge u_2$, and minus the corresponding integral over $(-\infty,0)$ for $u_1<u_2$. OK, so this is just another one of these awful formulas. But that's the formula for the Airy$_2$ process, and you get it by just taking a limit of this particular special solution which they found.

And yeah, that's what I said, so let me just repeat. Think of that special initial data, where this thing just looks like $-i$, or $N-i$, or something like that, and then suppose you ask for the $y$'s also to be of that special type. Then you see this matrix become Toeplitz, and you have all the Toeplitz machinery; you solve it. That's Johansson's calculation. And after the fact, once you've seen this machine, you can guess the $\Phi$'s which make it work. So if you have a finite box and you put the particles in periodically, this trick works, and you can get $\Phi$'s like that. So you can also imagine how to get the Airy$_1$ process: you put the particles periodically, every two sites, in a box, then you do the scaling, you let the box go to infinity, take the limit, and you get the Airy$_1$ process.
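To make the one-point case concrete: when $u_1=u_2$ the extended kernel reduces to the classical Airy kernel, and the one-point distribution of the Airy$_2$ process is Tracy-Widom GUE, $F_2(s)=\det(I-K_{\mathrm{Ai}})_{L^2(s,\infty)}$. Here is a sketch of how one evaluates such Fredholm determinants numerically; the quadrature order `n` and truncation length `L` are ad hoc choices, not anything from the lecture.

```python
import numpy as np
from scipy.special import airy
from numpy.polynomial.legendre import leggauss

def airy_kernel(X, Y):
    """K_Ai(x,y) = (Ai(x)Ai'(y) - Ai'(x)Ai(y)) / (x - y),
    with the limiting value Ai'(x)^2 - x Ai(x)^2 on the diagonal."""
    Ax, Apx, _, _ = airy(X)
    Ay, Apy, _, _ = airy(Y)
    with np.errstate(divide='ignore', invalid='ignore'):
        K = (Ax * Apy - Apx * Ay) / (X - Y)
    d = np.isclose(X, Y)
    K[d] = (Apx**2 - X * Ax**2)[d]
    return K

def tracy_widom_gue(s, n=60, L=12.0):
    """F_2(s) = det(I - K_Ai) on L^2(s, oo), truncated to (s, s+L)
    and discretized with Gauss-Legendre quadrature."""
    z, w = leggauss(n)
    x = s + 0.5 * L * (z + 1.0)     # nodes mapped to (s, s+L)
    w = 0.5 * L * w
    X, Y = np.meshgrid(x, x, indexing='ij')
    A = np.sqrt(np.outer(w, w)) * airy_kernel(X, Y)
    return np.linalg.det(np.eye(n) - A)

print(tracy_widom_gue(-2.0))   # one point of the Tracy-Widom GUE CDF
```

The $\sqrt{w_iw_j}$ symmetrization of the quadrature weights is what makes the discretized determinant converge rapidly to the Fredholm determinant.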
So that's why those are the only ones they could do: those were really the only $\Phi$'s you knew. But I just want to end by showing you that there's another version of this. There's a problem with these types of formulas, and the problem is that the formula's complexity depends on $n$, or maybe $m$; I wrote $m$ over there and $n$ here, it's the same thing. So we would like a formula. Remember, I wanted a formula for the probability that the Airy process is less than a function everywhere, and it's obviously going to be kind of hard to do that from this formula, because the dimension keeps increasing. So there's a better formula you can use for that. You'll see next class why you want a formula for "something less than something"; it'll become extremely apparent.

So there's a path-integral version of this. The probability is also equal to

$$\det\big(I - K_{\mathrm{Ai}} + \bar\chi_{g_0}\,e^{(x_0-x_1)H}\,\bar\chi_{g_1}\,e^{(x_1-x_2)H}\cdots\bar\chi_{g_n}\,e^{(x_n-x_0)H}\,K_{\mathrm{Ai}}\big).$$

Now, $K_{\mathrm{Ai}}$ is the original Airy kernel, which is just what you get from the extended kernel if you put $u_1=u_2$: when $u_1=u_2$, this exponential factor goes away, and you just get the Airy kernel. And then I have the $\bar\chi_{g_i}$'s (there's a reason these are the barred ones) alternating with the propagators $e^{(x_i-x_{i+1})H}$, and you keep going around to $e^{(x_n-x_0)H}$. But now the whole thing is just on $L^2(\mathbb{R})$. That's the beauty of this formula.

Now, actually, strangely, this formula predates that one, because the way I'm telling things isn't completely historically correct. This was happening at roughly the same time that Prähofer and Spohn were trying to derive the Airy$_2$ process. The way they did it was different, and they actually arrived at this formula. But then they heard about all this other stuff, and so they rewrote it in the extended-kernel form, and then everyone forgot about this formula. But this formula is the good one.

Why is it good? Oh, here: $H$. I didn't say what $H$ is. $H$ is the Airy operator (everything's called Airy here, sorry):

$$H \;=\; -\partial_x^2 + x.$$

So that's the formula. Now, one thing about this formula: you notice it's cyclic in the variables. You have $x_0-x_1$, $x_1-x_2$, et cetera, but then it loops back, and that last one is the funny one. Remember, the points are in order, so $x_0-x_1$ is negative, and $H$ contains $-\partial^2$. So $e^{(x_0-x_1)H}$ is a perfectly fair heat semigroup; it's the heat equation run in the right direction, because $x_0$ is less than $x_1$, but then there's the minus on the $\partial^2$. So that's fine. But then, at the end, something goes horribly wrong: you loop all the way back, and you have to apply the heat equation backwards for the whole time $x_n-x_0$. And that's just intrinsic to this problem. Well, of course, you were told when you were a baby that you're never allowed to solve the heat equation backwards. But as I showed you last time, if you apply the heat flow to Airy functions, it works both ways, and inside here this thing only ever acts on Airy functions. So it's fine; actually, you can just compute it. So although it looks bad, it's perfectly legal, and the thing it produces makes perfect sense as a kernel. (There's a small numerical check of this below.)

All right. So I have, like, five minutes? OK. Now, the other nice thing about this formula, and we'll mostly see this next time, I guess, is that it's kind of clear how to take a scaling limit of it. By a scaling limit, I mean: I want to know what's the probability that the Airy process is less than some function $g$.
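Here, by the way, is that check from last time in numerical form. Since $\mathrm{Ai}''(x)=x\,\mathrm{Ai}(x)$, the shifted Airy function $\mathrm{Ai}(x-\lambda)$ is an eigenfunction of $H=-\partial_x^2+x$ with eigenvalue $\lambda$, so $e^{tH}$ acts on it by the scalar $e^{t\lambda}$ for either sign of $t$, which is exactly why the backwards step is legal. A minimal sketch; the grid, the shift $\lambda$, and the finite-difference stencil are just illustrative choices.

```python
import numpy as np
from scipy.special import airy

# H = -d^2/dx^2 + x. Since Ai''(x) = x Ai(x), we get
# H Ai(x - lam) = lam * Ai(x - lam): shifted Airy functions are
# eigenfunctions of H, so e^{tH} just multiplies them by e^{t*lam},
# in either time direction.
x = np.linspace(-5.0, 5.0, 2001)
h = x[1] - x[0]
lam = 1.3                                     # arbitrary shift / eigenvalue

f = airy(x - lam)[0]                          # Ai(x - lam)
fpp = (f[2:] - 2 * f[1:-1] + f[:-2]) / h**2   # centered second difference
Hf = -fpp + x[1:-1] * f[1:-1]                 # apply H on the interior

err = np.max(np.abs(Hf - lam * f[1:-1]))
print(err)                                    # small: O(h^2) discretization error
```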
So say I'm in some box, and I want to know the probability that the Airy process stays below some function $g$ there. OK. What you do is you just put in lots of fine points, $x_0$ through $x_n$, into the box, you use this formula, and then you make your mesh size go to 0. But look at what this thing is asking. It keeps cutting off all the values higher than $g_i$ and then applying $e^{-\delta H}$, where $\delta$ is the mesh. So you cut off the high values, everything above $g_i$, and then you apply $e^{-\delta H}$. And then you cut off the high values and apply $e^{-\delta H}$ again. But what is that? All that means is that, along this box, you're solving the equation

$$\partial_t u = -Hu$$

with a Dirichlet boundary condition on the curve $g$. That's all it means. So you can just look at this equation, and you can immediately see what the limit is with a fine mesh of $x$'s. ($\bar\chi_g$ just means: cut off everything above the value $g$.) OK, so here, once again, I'll say it, closing your eyes for a second to the little problem down here, the backwards piece: you cut off all values above $g$, you solve a little bit of $\partial_t u=-Hu$, you cut off, you solve. Since $H=-\partial^2+x$, you're basically just solving a heat equation with a potential, and the potential becomes $-x$, with Dirichlet boundary condition on your function $g$. That's actually what you're doing. So that's what's beautiful about that formula.

The other thing, and this is where I'll end, is that this is a very, very, very general fact. These extended kernels pretty much always have representations like this; there's a way to go back and forth. So I'll write you the TASEP version here. [As Craig said; I defer to you, yes. Do you want to explain that?] Right: the point is that you wouldn't naturally get either side of this from TASEP itself. So the only claim I'm making is that extended-kernel formulas like this, coming from kernels like that, can very generally be rewritten like this, where you just have to say what's the thing that goes here and what's the thing that goes there. So let me show you the one for TASEP. In other words, all I'm claiming is that this formula, just like that one, can be rewritten on $\ell^2(\mathbb{Z})$ in the following way, and you have to believe me that the two things are roughly the same. Schematically,

$$\det\big(I - K_t^{(n_m)}\,Q^{n_1-n_m}\,\chi_{a_1}\,Q^{n_2-n_1}\,\chi_{a_2}\cdots Q^{n_m-n_{m-1}}\,\chi_{a_m}\big),$$

where $K_t^{(n_m)}=K_t(n_m,\cdot\,;n_m,\cdot)$ is just that $K_t$ with $n_m$ in both positions: the one-point version of the kernel. But now it's on little $\ell^2(\mathbb{Z})$, OK? So the propagator here is this $Q$ (remember what $Q$ is), and it plays the role of $e^{(x_i-x_{i+1})H}$; and $K_t$ at the one-point distribution is the analog of $K_{\mathrm{Ai}}$. And then you keep cutting off. Here you cut off below the $a$'s, so you're solving above, with a boundary condition at these $a$'s.

Now, $Q$ may not look like you're solving the heat equation or anything: $Q$ is this indicator function of $x>y$. But there's a trick. And the trick is that if you conjugate the formula (remember, the formulas are invariant up to conjugation) by $2^x$, then the $Q$ just becomes

$$Q(x,y) \;=\; 2^{y-x}\,\mathbf{1}_{y<x}.$$

That's all the result of conjugating by $2^x$, OK?
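A quick check that this conjugated $Q$ really is a transition kernel, together with a toy version of the propagate-and-kill picture; the barrier values and the starting site below are made up for illustration.

```python
import numpy as np
rng = np.random.default_rng(0)

# Conjugated propagator: Q(x, y) = 2^(y - x) for y < x. Summing over
# y < x gives sum_{j>=1} 2^(-j) = 1, so it is a transition probability:
print(sum(2.0 ** (-j) for j in range(1, 60)))     # ~ 1.0

# Toy propagate-and-kill: a walk jumping down by Geometric(1/2) steps,
# killed if it ever lands at or below the barrier. The levels play the
# role of time; the barrier values a_j are hypothetical.
a = [3, 1, -2]                 # barriers at the m = 3 levels
x = 6                          # starting site
alive = True
for a_j in a:
    x -= rng.geometric(0.5)    # one application of Q: P(jump = j) = 2^-j
    if x <= a_j:               # chi_{a_j} keeps only x > a_j
        alive = False
        break
print(alive, x)
```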
But now it is a probability transition function: these are just the transition probabilities of a geometric random walk jumping down, OK? So now this thing looks like a heat equation. And then it's cut off, heat equation, cut off, heat equation, cut off; well, the random-walk version of a heat equation, OK? But of course, once again, you've got this weird guy: he wants you to solve it backwards in time, for time $n_m-n_1$, at the end. But you can do that. So why don't I stop there? And next time I'll show you how these two pieces can be put together to solve the whole thing.

[Question.] Oh, this, this. Yeah, yeah. So this equals that, just like that one equals that one. It's just the same sort of thing. Yeah, well, OK. So first of all, I would rather do this, right? Because then it looks like Brownian motions. But actually, getting rid of that potential is very easy, because it just turns into a parabolic shift of the Brownian motions, up to some trivial factors. So after you do that, the propagator of this is just Brownian motion. So all I have is a Brownian motion path, but it gets killed whenever it goes above $g$. You see, it propagates the Brownian... well, forget that; sorry, you have to close your eyes there for a second. So the Brownian motion comes in and goes backwards in time for a while, OK? Don't worry about that. And then it gets killed when it goes above $g$, and propagates a bit forward. Just think of these spacings as $\delta$'s: I run Brownian motion for a little bit of time, killed when it goes above $g$, and as the $\delta$ goes to 0, all it is is a Brownian motion killed when it goes above $g$. So you can just take this formula and rewrite it explicitly in terms of a kernel, which is just Brownian motion starting at some point and ending at some point, killed when it goes above $g$; well, $g$ plus a parabola, OK?

Yeah, so here, what the formula tells you is that if you want to know the probability that the TASEP particles are to the right of the $a_j$'s at some points, all you do is take a geometric walk, going from here to here in the variables which are hiding inside this operator, killed when it goes below these $a$'s. And then it goes backwards in time for a while; don't worry about that. And that's what the kernel is, OK? Is that what you were asking?

Well, $\chi_a$ is the indicator function that $x$ is bigger than $a$. So here, you solve, and then you kill everything below $a$. $\chi_a$ is just the indicator function of $x>a$; if I didn't write it here, I wrote it last time, I'm sorry. And $\bar\chi_a$ is just $1-\chi_a$, yeah. So you only let things be around when $x$ is bigger than $a$, and then you propagate a bit. Does that make sense? OK, anybody else?
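Since the whole construction turns on this one mechanism, here it is as a toy computation: a plain Gaussian heat step standing in for $e^{-\delta H}$ (so no potential and no backwards piece), alternated with killing above a constant barrier standing in for $g$. Everything in it is an illustrative choice, not the actual Airy computation.

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 401)
g = 2.0                            # constant barrier standing in for g
delta, T = 0.01, 1.0               # mesh in "time" and total horizon

def heat_step(u, delta):
    """One plain heat step e^{delta d^2/dx^2}: convolve with a
    (discretely normalized) Gaussian of variance 2*delta."""
    ker = np.exp(-x**2 / (4.0 * delta))
    ker /= ker.sum()
    return np.convolve(u, ker, mode='same')

u = np.exp(-(x + 3.0)**2)          # some initial data below the barrier
for step in range(int(T / delta)):
    u[x > g] = 0.0                 # chi-bar_g: kill everything above g
    u = heat_step(u, delta)        # then propagate for time delta

# As delta -> 0 this converges to the heat equation on {x <= g} with
# an absorbing (Dirichlet) condition at the barrier; with H in place
# of -d^2/dx^2 you would also pick up the potential term.
print(u.max(), u[x > g].max())     # mass survives below, ~0 above
```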