You pick a permutation uniformly at random, and then you get a random variable, which is the length of the longest increasing subsequence of your random permutation. The theorem that Baik, Deift, and Johansson proved is that for large n, after you subtract twice the square root of n and normalize by n to the power 1/6, the limit you get is called the Tracy-Widom distribution. That's the limiting distribution of the largest eigenvalue of the Gaussian Unitary Ensemble. So that's another appearance of a random matrix object in an object built out of permutations. Now, why do I mention that? Well, the original proof was of course different, but if you look at one of the modern ways to prove this theorem, maybe the shortest way, then the proof proceeds through four steps. First, you use a combinatorial algorithm called the Robinson-Schensted bijection, which maps your problem of computing longest increasing subsequences to the problem of computing the asymptotics of the ensemble of Plancherel random Young diagrams. So you reduce to Young diagrams by this Robinson-Schensted correspondence. Then you have Young diagrams with n boxes, where n was the size of your permutation, and you really don't want this discrete parameter n; what you do is called Poissonization. You replace this discrete n by a continuous parameter theta, by sampling n according to the Poisson distribution with parameter theta. Now you have an ensemble of random Young diagrams, which you treat as a determinantal point process, so you manage to compute correlations as minors of certain matrices. And finally, you analyze this determinantal point process using a double contour integral representation for its kernel. So that's how you would prove this Baik-Deift-Johansson theorem.
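As a concrete anchor for the quantity in the theorem, here is a standard O(n log n) patience-sorting computation of the longest increasing subsequence length; this is an illustrative sketch, not part of the talk:

```python
import bisect
import random

def lis_length(perm):
    """Length of the longest strictly increasing subsequence (patience sorting)."""
    piles = []  # piles[i] = smallest possible tail of an increasing subsequence of length i+1
    for x in perm:
        i = bisect.bisect_left(piles, x)
        if i == len(piles):
            piles.append(x)
        else:
            piles[i] = x
    return len(piles)

# For a uniformly random permutation of size n, lis_length is typically close
# to 2*sqrt(n), which is exactly the centering in the Baik-Deift-Johansson theorem.
n = 10_000
perm = list(range(n))
random.shuffle(perm)
print(lis_length(perm))  # typically near 2*sqrt(10000) = 200
```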
Well, on the other hand, in the setting of random sorting networks, it turns out that you can find analogues of these four steps, and that's how our proof with Mustazee Rahman actually works. All of them are somewhat different: the Robinson-Schensted step is different, the contour integrals are different. But conceptually, these are the same four steps, found after proper modifications. Okay, so what are these four steps? How does it really work? First of all, what replaces the Robinson-Schensted algorithm? There is a very nice bijection, developed by Edelman and Greene in 1987, which maps the sorting networks that we are interested in to another interesting object, called standard staircase-shape Young tableaux. First of all, what is a standard staircase-shape Young tableau? A staircase-shape Young diagram is the Young diagram of this shape: a collection of boxes in rows, where the first row has n boxes, the second row n minus 1 boxes, et cetera, down to 1 box. Here is one example. It really looks like a staircase, and that's why it's called the staircase shape. Okay, that's the Young diagram, and you fill it with the integers from 1 to N, where N is the total number of boxes, in a monotone way: the numbers grow along the rows and columns, like this one possible filling. All the numbers appear, each of them exactly once, and they are strictly increasing along rows and columns. That's what is called a standard staircase Young tableau. There are finitely many of these tableaux, and it turns out that their number is precisely the same as the number of sorting networks, as Richard Stanley proved.
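Stanley's count can be checked numerically with the hook length formula for standard Young tableaux; a small sketch (the function name is mine, not from the talk):

```python
from math import factorial

def syt_count(shape):
    """Number of standard Young tableaux of a given shape, by the hook length
    formula: N! divided by the product of all hook lengths."""
    n_boxes = sum(shape)
    hooks = 1
    for i, row_len in enumerate(shape):
        for j in range(row_len):
            arm = row_len - j - 1                          # boxes to the right in the row
            leg = sum(1 for r in shape[i + 1:] if r > j)   # boxes below in the column
            hooks *= arm + leg + 1
    return factorial(n_boxes) // hooks

# Staircase shape (n-1, n-2, ..., 1) corresponds to sorting networks on n wires.
print(syt_count((3, 2, 1)))  # 16 = number of sorting networks on 4 wires
```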
And what Edelman and Greene proved: they constructed an explicit algorithm which provides a bijection between these two objects, between sorting networks and these staircase Young tableaux. Now, there are two directions in this bijection: you can start with a sorting network, or you can start with a Young tableau. If you start with a sorting network, then the algorithm is really a version of the Robinson-Schensted-Knuth correspondence: you modify the rules of the Robinson-Schensted-Knuth correspondence a little bit, and that's what you get there. I don't want to get into that. In the opposite direction, the correspondence is a version of Schützenberger's jeu de taquin, and it is closely related to the algorithm that we saw in the limit as well. So, what are these versions? It's very easy to explain. Again, it's related to certain slidings that you do. So what we want to do: we start with a staircase Young tableau, with its filling by numbers, and what we want to get as an output is a sorting network. How do we get it? We start with our Young tableau, like this one. You locate the maximal entry of the tableau; here it was six. Now, you write down the column number of this maximal entry; it was two in this case. Since it was two, in your sorting network, on the first step, wires two and three will swap. Okay. Now, after you've located this entry, you compute the sliding path. The sliding path is a path which connects your maximum with the origin, with this corner box, and it goes to the left and up, to the left and up. How does it choose whether to go left or up? The choice is very simple. When it needs to choose between two numbers, like starting from six, where you choose between two and three as your first step, you always choose the larger number.
You keep choosing the larger numbers until you reach the origin. Okay, after you have computed the sliding path, you actually do the slide: you shift all the numbers along the path toward the border of your Young diagram. So there it was one, three, six: six disappeared, one moved here, and three moved here. And that's the new tableau that you get. In the corner you actually don't care what is there; you can just put zero, it is irrelevant for our procedure. And then you repeat. You again locate the largest entry, which is now five; the column number of five is one, so the second swap is a swap of wires one and two, and here is the swap. Then you again slide, again locate the largest entry, that's four, in column three, and that's the next swap. Then you slide and the largest entry is three, in column two, and that's your next swap. And so on, swap after swap, until all entries are used. And what Edelman and Greene proved is that this is indeed a bijection with sorting networks. A priori it is not even clear that you get a sorting network: why is this a shortest path connecting the identity to the reverse permutation? But with some combinatorics you can prove that this is true. So, that's the first step, and it reduces our problem of studying a sorting network to studying these random staircase Young tableaux. That's the object we now need to study. Okay. Now, the second step would be an analogue of the Poissonization in the Baik-Deift-Johansson theorem. What it does here is a bit different, but it still moves from a discrete setting into a continuous setting, and in this way it is similar to Poissonization. There will actually be no Poisson random variables, but I still wanted to keep the name.
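The sliding procedure just described can be transcribed directly into code. This is my own sketch of the steps as stated in the talk (record the column of the maximum, slide toward the origin choosing the larger neighbour, pad the corner with zero); treat it as illustrative rather than the canonical presentation of the Edelman-Greene algorithm:

```python
def tableau_to_network(tab):
    """Turn a standard staircase Young tableau into a swap sequence, following
    the sliding procedure described above.  tab is a list of rows, e.g.
    [[1, 2, 4], [3, 5], [6]], entries 1..N increasing along rows and columns.
    Returns the list of swap positions (1-indexed): swap c means wires c, c+1."""
    tab = [row[:] for row in tab]  # work on a copy
    n_boxes = sum(len(row) for row in tab)
    swaps = []
    for _ in range(n_boxes):
        # locate the maximal entry (zeros left behind by earlier slides are ignored)
        r, c = max(((i, j) for i, row in enumerate(tab) for j in range(len(row))),
                   key=lambda rc: tab[rc[0]][rc[1]])
        swaps.append(c + 1)  # record its 1-indexed column
        # slide from the maximum toward the origin, always stepping to the
        # larger of the left/up neighbours, shifting the values outward
        while (r, c) != (0, 0):
            left = tab[r][c - 1] if c > 0 else -1
            up = tab[r - 1][c] if r > 0 else -1
            if left >= up:
                tab[r][c] = left
                c -= 1
            else:
                tab[r][c] = up
                r -= 1
        tab[0][0] = 0  # the corner value is irrelevant from now on
    return swaps

# Example tableau of staircase shape (3, 2, 1), chosen for illustration.
print(tableau_to_network([[1, 2, 4], [3, 5], [6]]))
```

Applying the resulting swaps to the identity permutation of 4 wires does produce the reverse permutation, as the bijection promises.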
Now, what we want to do is change from this filling with integers, which is our standard staircase Young tableau, to a filling with real numbers. How do we do that? Well, we have inequalities: the numbers should be increasing along the rows and columns. First we had integers subject to these inequalities; now let's take real numbers from 0 to 1, subject to the very same inequalities, still increasing along the rows and columns, like here. Of course, if you know the real filling, then you can reconstruct the integer one: instead of looking at the numbers themselves, you look at their ranks, and you are back to your integer picture. So why is this good for us? Well, first of all, in our limit transition, if we want to find information about sorting networks near the border, then through this correspondence what we need to understand is what happens with the random Young tableau near the border, near this staircase. Now, when you switch from discrete entries to continuous entries, it turns out that the local limit we are interested in is just the same. A version of the law of large numbers shows that the limit is the same, so we don't need to worry about that. On the other hand, while the first object is hard to analyze, with no exact formulas for it, for the continuous object there are nice exact formulas, because it turns out that you can encode it as a certain determinantal point process. That appears in the next step. So what is the determinantal point process? We need, again, to make some identifications to see it. We have this filling of the table with real numbers; that is our Poissonized standard Young tableau. Now, we treat this Poissonized standard Young tableau as a growth process of Young diagrams. So how do we do that?
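The real-filling picture and the rank reconstruction can be sketched in a few lines. Here I sample iid Uniform(0,1) entries and reject until they satisfy the monotonicity inequalities (feasible only for tiny shapes); the ranks then recover a standard tableau. The function names are mine, for illustration:

```python
import random

def sample_poissonized_tableau(shape):
    """Fill the shape with iid Uniform(0,1) numbers, rejecting until the
    filling increases along rows and columns."""
    while True:
        tab = [[random.random() for _ in range(r)] for r in shape]
        rows_ok = all(row[j] < row[j + 1] for row in tab for j in range(len(row) - 1))
        cols_ok = all(tab[i][j] < tab[i + 1][j]
                      for i in range(len(shape) - 1) for j in range(shape[i + 1]))
        if rows_ok and cols_ok:
            return tab

def ranks(tab):
    """Replace each real entry by its rank among all entries, recovering
    the integer (standard tableau) picture."""
    values = sorted(v for row in tab for v in row)
    return [[values.index(v) + 1 for v in row] for row in tab]

tab = sample_poissonized_tableau((3, 2, 1))
print(ranks(tab))  # a standard staircase tableau with entries 1..6
```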
We just say that the Young diagram lambda at time t is the sub-diagram of the table spanned by the boxes whose entries are at most t. So maybe when your t is 0.02, you just get a corner box, and when t equals 1, you get the entire staircase, because all the numbers are smaller than 1. In this way, your Young diagram slowly grows in this continuous time t: the boxes are added one by one, each box at the time equal to the corresponding entry in your Young tableau. Okay, so now you have a growing family of Young diagrams. In order to see the determinantal point process, we also identify these Young diagrams with particle configurations, in a rather standard way: you rotate your Young diagram by 45 degrees, so you get this kind of picture, and then you project the border of the diagram onto a horizontal line. When you project, there are two kinds of segments, down-pointing and up-pointing, and depending on which segment you have, you either put a particle or a hole, an absence of a particle. In this way, a Young diagram is encoded by a semi-infinite particle configuration. By semi-infinite I mean that it is infinite to the left, but essentially finite to the right, just because far enough to the right you only see up-steps. Okay, now you have a particle configuration. So your Poissonized standard Young tableau is encoded by a collection of paths, which record how the particle configuration moves. You start with the so-called step initial condition, when your particles are all densely packed to the left, which corresponds to the empty Young diagram, the one which is really just a wedge: you go like that and like that. That's the empty Young diagram we start from.
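The rotate-and-project encoding is the standard "Maya diagram" of a partition: particles sit at the positions lambda_i minus i. A small sketch (not from the talk) showing the two configurations mentioned, the densely packed step condition and the alternating staircase:

```python
def maya(partition, window):
    """Maya-diagram encoding of a partition (l1 >= l2 >= ...): a particle at
    each position l_i - i, holes elsewhere.  Returns True/False for each
    integer position in `window` (pad enough empty rows to cover the window)."""
    padded = list(partition) + [0] * len(window)
    particles = {padded[i] - (i + 1) for i in range(len(padded))}
    return [p in particles for p in window]

# Empty diagram: the 'step' configuration, particles densely packed to the left.
print(maya([], range(-4, 4)))
# Staircase (3, 2, 1): particle, hole, particle, hole near the origin.
print(maya([3, 2, 1], range(-4, 4)))
```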
And then boxes are added one by one, which means in this picture that the particles jump to the right, until you finally reach your staircase, which is the diagram where you have particle, hole, particle, hole, particle, hole. That's because in the staircase the border goes up, to the right, up, to the right, up, to the right; that's the configuration you see here. Now, it turns out that this kind of object, these non-intersecting paths, is a very nice object, which can really be analyzed through determinantal point processes. Remember, this RSK-type bijection mapped us to the uniformly random Young tableau, and after we did the Poissonization it is still uniformly random; nothing changed. So, here is a theorem, again from our article with Mustazee Rahman, which says the following. Look at these non-intersecting paths; we actually don't care that at the very top you end up with this particular particle configuration. The theorem would be essentially the same for any terminal configuration; the formula would be a little bit different, but conceptually it is the same theorem. No matter what you end up with, the collection of right-jumps of your paths forms a determinantal point process, and we have a double contour integral for the kernel of this determinantal point process. It is written here. It is quite a complicated formula, but the most important thing is that it is an explicit formula. Once you have an explicit formula, no matter how complicated it is, you can start working with it, and that's what we do here. Now, this kernel can actually be found as a limit of a certain object which appeared in the literature before.
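To make the phrase "determinantal point process" concrete: the k-point correlation functions are determinants of minors of the kernel. Here is a tiny numeric illustration using the discrete sine kernel, a standard example; it is NOT the double contour integral kernel of the talk, just the same determinantal structure:

```python
import math

def sine_kernel(x, y, density=0.5):
    """Discrete sine kernel, a standard example of a determinantal kernel."""
    if x == y:
        return density
    return math.sin(math.pi * density * (x - y)) / (math.pi * (x - y))

def correlation(points, kernel=sine_kernel):
    """k-point correlation: rho(x_1..x_k) = det[ K(x_i, x_j) ]_{i,j}.
    Only 1x1 and 2x2 determinants, enough for this illustration."""
    k = len(points)
    m = [[kernel(points[i], points[j]) for j in range(k)] for i in range(k)]
    if k == 1:
        return m[0][0]
    if k == 2:
        return m[0][0] * m[1][1] - m[0][1] * m[1][0]
    raise NotImplementedError("illustration covers k <= 2")

print(correlation([0]))     # one-point function = density = 0.5
print(correlation([0, 1]))  # below 0.5 * 0.5: nearby points repel
```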
So, the kernel that we find in our determinantal point process is the limit of the kernel that Leonid Petrov, who I think is sitting somewhere here, obtained five years ago in his study of uniformly random lozenge tilings of specific domains called trapezoids. And why this relation? Well, that's because there is actually a relation between lozenge tilings and what we are doing. If you look at the lozenge tiling pictures, something like the one here at the top, you try to tile a domain with lozenges, which are rhombi of three types: this is one type, this is the second type, this is the third type. You tile some specific domain, like the one drawn here. Some people call it a sawtooth domain, some people call it a trapezoid: essentially, you have a straight line on the bottom, and on the top you have these teeth. You tile it with these lozenges, with these rhombi, and you choose one tiling uniformly at random. Then it turns out that from this model of random tilings there is a certain limit transition to our model of non-intersecting paths. The tiling model, if you want, is a fully discrete model, because the lozenges are discrete in all directions. Our paths form a semi-discrete model, because the particles still sit on a lattice in one direction, but in the vertical direction we now have continuous time. So there is a transition from discrete time to continuous time, which you can work out here. And this kind of limit transition is similar to the one that Borodin and Olshanski had four years ago, in an article called, I think, "The Young bouquet" or something like that. So there is a limit transition which works there. Okay, now why is this good for us? Well, it's good for us because it is well known that lozenge tilings are determinantal.
There is a general theory, called Kasteleyn theory, which tells you that no matter what domain you are tiling, you always get a determinantal point process if you look at the uniformly random tiling in the right way. Rick Kenyon, for example, developed many things based on this observation. But this is an abstract theorem; it only guarantees that you have a determinantal point process. We want more: we want some control over the kernel of this determinantal point process. And here the article of Leonid is very important for us, because, based on the Eynard-Mehta formulas for measures given by products of determinants, he found a double contour integral formula for this precise model of lozenge tilings. Then we can perform our limit transition to the non-intersecting paths and obtain a kernel for these non-intersecting paths; then we go back to sorting networks, take the appropriate limit there, and in the very end, after several steps and some massaging, we arrive at the limiting objects that we want. Okay, so that's a summary. What we saw: we are looking at uniformly random sorting networks of large rank, and we saw two kinds of limit results. First of all, if you just look at the spacings between two swaps on the same level, then after proper rescaling these spacings turn out to be governed by a universal distribution of random matrix theory, the Gaudin-Mehta distribution, which is the limiting bulk spacing distribution for the eigenvalues of the Gaussian Unitary Ensemble, or, as Wigner put it, a model for energy level spacings in heavy nuclei. And if you look more generally, not only at one spacing but at the whole local picture, then it can be identified with an algorithm, jeu de taquin, applied to a universal object from random matrix theory, this time the hard edge of the eigenvalues of the antisymmetric GUE corners process.
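For a feel of the Gaudin-Mehta bulk spacing law mentioned here, the standard closed-form approximation is the Wigner surmise for the GUE; a quick numeric sanity check (my own sketch, not from the talk) confirms it is a probability density with mean spacing 1:

```python
import math

def wigner_surmise_gue(s):
    """Wigner surmise for the GUE (beta = 2): a closed-form approximation to
    the Gaudin-Mehta bulk spacing density, p(s) = (32/pi^2) s^2 exp(-4 s^2 / pi)."""
    return (32 / math.pi ** 2) * s ** 2 * math.exp(-4 * s ** 2 / math.pi)

# Crude Riemann-sum check: total mass 1, mean spacing 1 (tail beyond s=10 is negligible).
ds = 1e-4
grid = [i * ds for i in range(1, 100_000)]
mass = sum(wigner_surmise_gue(s) for s in grid) * ds
mean = sum(s * wigner_surmise_gue(s) for s in grid) * ds
print(round(mass, 4), round(mean, 4))  # both close to 1
```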
And the main tools for getting there are the Edelman-Greene bijection with Young tableaux, and then some double contour integrals which we managed to understand after Poissonization. And that's it, thank you. [Audience question.] Well, it gives a partial indication. What we established, in particular, is that the limit object is the same modulo this semicircle rescaling. Now, of course, if you believe that it doesn't matter which wires pass near your point, that the local limit is completely decoupled from the wires (the analogy in random matrix theory would be that eigenvalues and eigenvectors are decoupled), then our results tell you how the spacings change: they change just as one over the square root of x times one minus x. Now, if the spacings change like that, then look at something like, let's open the picture, the topmost wire here. The topmost wire really cares only about the spacings, because after the first jump here, at the next jump it again goes down, and at the next jump it again goes down. So the spacings between the swaps determine the slope of this wire. And really, when you integrate something like one over the square root of x times one minus x, you immediately get the arcsine, and that's the indication that the sine curve appears there. Of course, for that you need to believe in this kind of decoupling, which we cannot prove rigorously, but at least it's some indication that the sine curve appears. [Another question.] Well, our results do not say too much; the original conjectures say a lot about the permutohedron. Where are these conjectures? Yes, these conjectures are implied by a more general conjecture that the same authors made, which is the following one.
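The integration step mentioned here, written out explicitly (standard calculus, not spelled out in the talk):

```latex
\int_0^{x} \frac{dt}{\sqrt{t(1-t)}}
  = \Bigl[\, 2\arcsin\sqrt{t}\, \Bigr]_0^{x}
  = 2\arcsin\sqrt{x} .
```

Inverting this relation gives $x = \sin^2(\theta/2) = \tfrac{1}{2}(1-\cos\theta)$, a sine curve in $\theta$, which is the heuristic behind the sine-curve trajectories of the wires.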
So if you embed everything in the permutohedron, that is, you treat a permutation as an n-dimensional vector, then on this permutohedron the sum of the squares of the coordinates is fixed, so the whole permutohedron lies on a sphere. And the conjecture is that the shortest paths between these two points of the permutohedron are close to great circles on this sphere: you have an n-dimensional sphere and great circles on it. There are many such circles, so some randomness remains, but the sine curves, this curve which you observe here, are just a manifestation of this more general conjecture that the shortest path is close to a great circle. So this is related to that. Now, in our asymptotic results, again, the fact that you need to rescale by this semicircle density relates to the same circle. [Question.] Well, I'm not sure what it says about the permutohedron. Yes, that's true. This circle is precisely some slice of the permutohedron, that's correct. But the swaps describe how you move locally in the permutohedron, so it is something about the local geometry, that's true. Like, when you look at the uniformly random... maybe not really uniformly random, I'm not sure how to say it. [Question.] Yes. I know about that result, but I can't say I fully understand it either; I mean, it's some miracle. The top swap? What does it mean, the top swap? Because in each vertical line there is only one swap. But that is just the same swap process; I don't fully understand, because each time there is only one swap, so the top one is not really well defined; they are all the same. [Question.] Well, there might be... I don't think that anybody has looked into that. But there might be a nice approach; I don't know, unfortunately. That's a good point.
Maybe it better explains what... Maybe, yes. I will look carefully at that, thank you.