So, okay, it's time to start our afternoon session. And our speaker is Mr. Seppäläinen, with his talk, Random Walk in Random Environment and the KPZ class. Thanks very much. And yeah, thanks again to the organizers for this great event. So this talk is coming slightly off the field, right? It's from the area of motion in a random medium, where one of the benchmark models that have been studied for a few decades is this RWRE model, Random Walk in Random Environment. And in recent years this class of models has been linked with the KPZ class that Jeremy began to discuss yesterday. So let me begin by giving brief statements of the theme of this talk, mainly for the experts, and then I'll back up, go to square one, and start with definitions. So what this talk is about: it's about (1+1)-dimensional directed random walks in correlated random environments that manifest KPZ behavior rather than the typical diffusive behavior of random walks. And there are two examples, or two sources. The first one is limits of (1+1)-dimensional directed polymer measures, so limits of a quenched directed polymer, and the exactly solvable example here is the log-gamma polymer. That piece of work is in a joint paper with my ex-student Nicos Georgiou from Sussex, Firas Rassoul-Agha from down the street in Salt Lake City, and Atilla Yilmaz from Istanbul; it's an AOP paper from 2015. And then the second case is one where you take a nearest-neighbor directed RWRE in an IID environment, but then you condition it to go off to infinity at an atypical velocity. That's a singular conditioning, so the conditioning has to be done in the sense of Doob: conditioned on an atypical velocity. And this is ongoing, not yet published work with Márton Balázs from Bristol and Firas Rassoul-Agha again; hopefully we'll get this out in the next few months.
So what is important in both cases is the expectation of universality. Jeremy touched upon this yesterday a little bit. In both cases, I think it's reasonable to expect that the results would be valid very broadly, subject to some assumptions on the weights, on moments, and maybe on decay of correlations. But as is the case for much of this KPZ class, the only existing results are for exactly solvable cases. I won't be able to cover both of these topics in this talk, so today I am going to talk about the second one, and here the exactly solvable example is the beta RWRE. All right, so that's where we're headed. I hope to describe some of the results for this beta RWRE that show how this atypical-velocity conditioning puts it into a situation where it seems to exhibit KPZ behavior. But now let me start from the very beginning and just tell you what RWRE is, for starters. So let's talk about RWRE, Random Walk in Random Environment on Z^d. The ingredients of this model are the following. We have a space of environments; let me call it capital Omega, and I'll denote the environments by lowercase omega. On this Omega space there's a probability measure. In this random-medium business there are always several probability measures floating around, so one has to be precise about the notation: this environment probability measure will be this blackboard bold P. And there's a group action on Omega, and we'll assume that this P is always at least ergodic under some group of bijections T sub x, where x ranges over the lattice. So that's the background situation: an ergodic group action on a probability space. And then each of these environments omega drives a Markov chain on Z^d. So there's a function that takes omega into a transition probability matrix, and I'll denote that by pi, with the omega upstairs. So pi of omega is a transition probability matrix on the lattice, with matrix elements pi sub x, y.
And the assumption is that this transition probability matrix respects these translations here. So the assumption is that if you shift the environment, then that's the same as translating the lattice arguments: under a shifted environment T_z of omega, the probability of jumping from x to y is the same as, under the original environment, the probability of jumping from x plus z to y plus z. So there's that structure. And then some notation: I'll write P super omega sub x for the path measure of the walk, the Markov chain on Z^d obeying the transitions specified by omega. This is the so-called quenched path measure on the path space, the space of Z^d-valued paths indexed by the non-negative integers. P super omega sub x satisfies the obvious properties. The walk is the capital X, capital X dot, the walk on Z^d, so at time 0 the initial state is x, and then to go forward in time we multiply transition probabilities, just the good old Markov chain recipe, nothing more. So the path probability that you go from x0 to x1 and to x2, et cetera, up to xn, is just a product of transition probabilities pi omega going from x_i to x_{i+1}. So that's the general RWRE model on Z^d. Anything I could clarify at this moment? It's funny to stand up here in the bright lights with no contact with you guys out there. Any questions at this point? So obviously, like we often do in probability, we construct things on canonical probability spaces. Here the natural canonical choice would be that Omega is a product space, say script P to the Z^d, where script P is the space of probability measures on Z^d, and then omega is a configuration, an assignment of these jump probability vectors on the lattice. OK. So what was I about to say?
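For readers following along, the covariance assumption and the quenched path measure just described can be transcribed as follows (my rendering of the board, in the talk's notation):

```latex
% Translation covariance of the quenched transition probabilities:
\pi^{T_z\omega}_{x,y} \;=\; \pi^{\omega}_{x+z,\,y+z},
   \qquad x,y,z\in\mathbb{Z}^d .

% Quenched path measure: start at x_0 and multiply transition
% probabilities, the good old Markov chain recipe:
P^{\omega}_{x_0}\!\left(X_1=x_1,\,X_2=x_2,\dots,X_n=x_n\right)
   \;=\; \prod_{i=0}^{n-1}\pi^{\omega}_{x_i,\,x_{i+1}} .
```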
Yeah, just to describe the canonical choice for this probability space: it's one where we build the transition probabilities into the space Omega itself. So Omega is now a configuration indexed by the lattice points, and each omega sub x is a probability measure which gives the probabilities of the jumps. Then the transition probability from x to y would be: you pick the omega sub x probability distribution, and then you take the probability of the jump from x to y. And then, as you might imagine, the most basic and popular and important thing to study is the case where these omega sub x's are IID. So the IID environment means that the omega sub x's are IID under P. All right. So that's the general RWRE model. Where is it coming from? Well, the mathematical study can be dated to a Cornell thesis by one Fred Solomon in 1972. The advisor was Frank Spitzer, of course, whom we all know for his random walk book, et cetera. So Fred Solomon wrote his PhD thesis under Frank Spitzer in 1972, and he did a very nice piece of work. He studied the one-dimensional IID case, and he delineated transience and recurrence and laws of large numbers. His thesis became a 1975 AOP paper. So that's when it all started, and now we've been studying this thing for about 40 years. But just like in these other favorite models of ours, like first-passage percolation, which is the generalization of the IID model that Jeremy mentioned yesterday, and many others, despite these decades of work we've barely even scratched the surface of the big questions.
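A minimal simulation sketch of the one-dimensional nearest-neighbor IID model that Solomon studied. The function names and the Uniform(0.2, 0.8) choice of weight distribution are my own illustrative assumptions, not from the talk.

```python
import random

def make_environment(rng):
    """IID environment on Z: omega(x) = probability of stepping right from x.
    Sites are sampled lazily in a dict, since the lattice is infinite."""
    env = {}
    def omega(x):
        if x not in env:
            env[x] = rng.uniform(0.2, 0.8)  # illustrative choice of weight law
        return env[x]
    return omega

def quenched_walk(omega, n_steps, rng):
    """Run the walk in a fixed (quenched) environment omega, started at 0."""
    x, path = 0, [0]
    for _ in range(n_steps):
        x += 1 if rng.random() < omega(x) else -1  # right with prob. omega(x)
        path.append(x)
    return path

rng = random.Random(0)
omega = make_environment(rng)
path = quenched_walk(omega, 1000, rng)
```

Solomon's 1975 criterion classifies this walk through the sign of E[log((1 − ω₀)/ω₀)]; with the symmetric uniform choice above that expectation is zero, so this particular walk is recurrent.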
So the study of this model has sort of splintered into studies of various special cases, where we assume a little more structure and then can make some progress. Just to tell you the pathetic situation of the study of this, well, it's not pathetic, but just to indicate what kinds of basic questions are open: if you take dimension d equals 2 here and make the jumps nearest-neighbor, we still don't know whether the walk is recurrent or transient. That level of question is still open. So I'm going to now drastically simplify the model. I'm going to go to two dimensions, and I'm not going to allow the walk to roam around; rather, I'm going to force it to go in particular directions, and I'll of course take the environment IID. So now I'll do a directed nearest-neighbor RWRE on Z^2. I'll allow only two steps: from each point x you can only go right or up one step. And for my environment omega, I only need one number at each point x now to determine the environment, so I'll take it to be this horizontal jump probability; let me call it omega from x to x plus e1, one for each lattice point. The complementary probability will, of course, be one minus that. And I'll take these to be IID. So now you see what's happening: as my walk marches along, it always jumps to a new level, so to speak, so it's always seeing fresh environments as it goes. So now let me calculate the averaged path probability; let me do it over there, conveniently, so we still have that in view, and let me actually take a piece of colored chalk to see if this works. The averaged distribution of the process is the one where I integrate away the omega variable. Let me write it by dropping the omega: that's what I get when I take this quenched probability here and integrate over omega against this blackboard bold P. So I come over here and take expectation.
So this blackboard bold E is the expectation under that P. Now for any fixed path I'm always forced to go forward, so these x_i's will all be distinct, and since my environment is IID, the expectation of the product is the product of expectations. We just see that this averaged process is a classical random walk that obeys the averaged transitions. This simplification allows us to do a lot with this kind of model. Under these assumptions, this walk X dot satisfies all the classical results: it has a law of large numbers, a central limit theorem, and a large deviation principle. Now, it's obvious that it has these properties under the averaged measure P, because then it's just a classical random walk. But it's also the case that it has these same properties under the quenched distribution. The first statement is immediate from classical results. The second ones need proof; well, not the law of large numbers, because an almost sure law of large numbers under P implies the same under P omega for almost every omega, but the other two results need proof. I don't need the CLT going forward, but I do want to record the law of large numbers and the LDP, so let me just write those down. The law of large numbers says that X_n over n converges to the expected velocity; let me call it c star. In terms of this notation here, it's just the pair of averaged transition probabilities. So the velocity is a vector, pointing in that northeasterly direction. And then there's a quenched LDP, a quenched large deviation principle. I need to refer to it in the sequel, so let me actually write it down. The quenched LDP says that we take the quenched probability that X_n is at the point [nc]. My c will come from the line segment between e1 and e2; let me draw a picture here. Since the admissible steps are e1 and e2, all the possible asymptotic velocities of the walk lie on this little simplex between e1 and e2.
And sometimes I might abbreviate it U. So that's where this c comes from. And the brackets [nc], well, that's a discrete approximation of n times c which is n steps away from the origin, so that this is a reasonable probability to write down. And then we take the 1 over n log limit as n goes to infinity. Sorry, running out of space here. The statement is that there is a quenched large deviation rate function, which I'll write capital I sub q, such that this limit holds for bold P almost every environment omega. So that is a theorem. And I_q is a typical kind of decent large deviation rate function: it's positive as long as you're off the law of large numbers limit, that is, I_q of c equals zero if and only if c equals c star. It's convex, but we don't really know any further regularity properties of it in this general case. Testing if anybody is still awake. All right. Even though I don't really need the CLT, so I won't bother writing it down, let me emphasize its importance, because one of the points of my talk is that once we go to the Doob-conditioned RWRE, we don't see diffusive behavior anymore. Diffusive behavior means that the walk obeys the standard kind of central limit behavior in the scale square root of n, like the first CLT you learn in undergraduate probability. OK, so that's the nearest-neighbor directed RWRE. Now let me specialize further and impose a distribution on these weights. Oh, to be honest, I should have said that there's a moment assumption for the large deviation principle, but it's not important, so let me skip that. So now I'll make those weights omega beta-distributed. So now, for the beta RWRE, omega from x to x plus e1 will be beta-distributed with some parameters alpha, beta. So fix alpha and beta, two positive reals. In case you forgot, or have never had any reason to be interested, the beta distribution looks like this.
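In symbols, the two results just recorded read as follows (my transcription of the board):

```latex
% Law of large numbers: a deterministic limiting velocity c^*,
\frac{X_n}{n}\;\longrightarrow\; c^{*}
   \;=\;\mathbb{E}\!\left[\omega_{0,e_1}\right]e_1
     +\left(1-\mathbb{E}\!\left[\omega_{0,e_1}\right]\right)e_2 .

% Quenched LDP: for c on the simplex between e_1 and e_2,
\lim_{n\to\infty}\frac{1}{n}\,\log P^{\omega}_{0}\!\left(X_n=[nc]\right)
   \;=\;-\,I_q(c)
   \qquad\text{for }\mathbb{P}\text{-almost every }\omega,

% with I_q convex and I_q(c)=0 if and only if c=c^{*}.
```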
Let's see, I hope I get this fraction right here. The density on the interval from 0 to 1 is, as it should be, a normalizing constant times x to the alpha minus 1 times 1 minus x to the beta minus 1, dx. So that's what the probability measure looks like between 0 and 1. And let me just highlight the fact that the uniform distribution is there: the case alpha equals beta equals 1 is the uniform distribution. So in some sense, if you had to say what would be the most natural model here, that one would probably rank pretty high on your list; it's a pretty natural choice. OK, so that's the beta RWRE. For a long time it seemed that there was really nothing linking these RWREs in IID environments with the KPZ class. But this changed when Guillaume Barraquand and Ivan Corwin discovered that this beta model is exactly solvable, so one can do very deep computations with it, and they found a link with KPZ. So here's their result, Barraquand and Corwin. The result says that if you take the difference between this converging log probability and the rate function, then that has a Tracy-Widom correction in the n to the 1/3 scale, which is the one for KPZ. In other words, we take the log of a quenched probability, and now I need to write a deviation event here; I'll do the following pictorially. So here's, again, the simplex of those possible velocities. I mean the uniform case now, alpha equals beta equals 1, so the velocity c star points in the diagonal direction. I want to go out here, say, and symmetrically out here, far enough away from the diagonal. So I'll write something like: X_n dot e1 is at least n times x, take the log of that quenched probability, and then, no, not minus, plus n times the rate function I_q at c, where c is this point, the point (x, 1 minus x). So that's the log probability of the deviation minus its limit; it's a plus there because we have this convention of putting a minus sign in large deviations.
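A minimal simulation sketch of the directed beta walk. Because every step moves the walk to a fresh level, it never revisits a site, so it is legitimate to draw a fresh IID Beta(alpha, beta) weight at each step; the function name is my own.

```python
import random

def beta_rwre_path(n_steps, alpha, beta, rng):
    """Directed RWRE on Z^2: at each visited site draw omega ~ Beta(alpha, beta)
    and step e1 = (1, 0) with probability omega, else e2 = (0, 1).
    Drawing weights on the fly is valid because the directed walk sees each
    site at most once (every step advances the level x1 + x2)."""
    x = (0, 0)
    path = [x]
    for _ in range(n_steps):
        omega = rng.betavariate(alpha, beta)  # fresh IID environment weight
        if rng.random() < omega:
            x = (x[0] + 1, x[1])  # horizontal step e1
        else:
            x = (x[0], x[1] + 1)  # vertical step e2
        path.append(x)
    return path

# Uniform case alpha = beta = 1, the Barraquand-Corwin setting:
path = beta_rwre_path(500, 1.0, 1.0, random.Random(1))
```

Under the averaged measure this is just the symmetric directed walk with velocity c star = (1/2, 1/2), pointing along the diagonal.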
And then divide this by some constant depending on x, times the n to the 1/3 normalization, and look at the probability of this under the probability distribution of the environment. The statement is that this converges to the GUE Tracy-Widom distribution. Oops, I forgot something important, didn't I? I need to put an inequality. So let me write that all over again; I should have been a little more careful here. So: the bold P probability that the log of the quenched probability, recentered by its large deviation limit and normalized on the n to the 1/3 scale, is less than or equal to, say, s, close the bracket; this converges to the GUE Tracy-Widom distribution. That's correct, Ivan, is it? OK, so that was the result. The hypotheses: the uniform case, and then far enough away from c star. So this tells us that something KPZ-like is going on. Then what of course intrigued Firas and me right away was the question: where is the wandering exponent, the 2/3? Maybe a word of explanation here again, or maybe a reference to Jeremy's talk yesterday. If you still recall, early on, when Jeremy exhibited what KPZ universality is about, there were two exponents floating around in his presentation. One was this 1/3 here, and the other one was the wandering exponent, or the transversal exponent, 2/3. So we sort of expect these two to go together. And the answer is, well, first of all the mystery was the following: since the walk in the IID environment, as we saw on that board that got erased, behaves just like a perfectly classical random walk with central limit theorem behavior, it's not like the walk itself can be the source of a 2/3 wandering exponent. But it is the Doob-transformed walk that exhibits the 2/3 behavior. All right, so let me now turn to describing our results, in three parts.
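Putting the corrected statement together in symbols (my transcription; sigma(x) is the x-dependent constant mentioned above, and F_GUE is the GUE Tracy-Widom distribution):

```latex
% Barraquand--Corwin: uniform weights (\alpha=\beta=1),
% x far enough from the diagonal,
\lim_{n\to\infty}\;\mathbb{P}\!\left(
   \frac{\log P^{\omega}_{0}\!\left(X_n\cdot e_1 \ge nx\right) + n\,I_q(c)}
        {\sigma(x)\,n^{1/3}}
   \;\le\; s\right)
   \;=\; F_{\mathrm{GUE}}(s),
\qquad c=(x,\,1-x).
```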