So there are two special permutations in this Cayley graph. The first is the identity permutation, like 1, 2, 3 in the example with n equal to 3. And then there is the reverse permutation, where the order of all the letters is reversed, so you can say you made the maximum number of inversions. A sorting network, by definition, is a shortest path between the identity permutation and the reverse permutation in this Cayley graph of the symmetric group. Okay, so what does "shortest path" mean? Well, when you interchange two adjacent letters in a permutation, there are two options. Maybe they were in increasing order and became decreasing, so you increased the number of inversions; or maybe they were in decreasing order and after the swap they became increasing, so you decreased the number of inversions. Now, on a shortest path you should always be increasing the number of inversions, which means the number of inversions in your permutation just grows: from zero here, where there are no inversions because all the numbers are ordered, up to this permutation, where there is the maximum number of inversions, which is n choose 2, that is n times (n minus 1) over 2. And the number of inversions grows one by one. So here you started with zero inversions, then one inversion, two inversions, three inversions. Okay, so that is a sorting network.
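To make the definition concrete, here is a small Python sketch (the function names are mine, not from the talk) that counts inversions and checks that a sequence of adjacent swaps really is a sorting network, i.e. that every swap increases the inversion count by exactly one and the reverse permutation is reached.

```python
from itertools import combinations

def inversions(perm):
    """Count pairs of positions whose entries are out of order."""
    return sum(1 for i, j in combinations(range(len(perm)), 2)
               if perm[i] > perm[j])

def is_sorting_network(n, swaps):
    """A sorting network for S_n is a sequence of n*(n-1)/2 adjacent swaps
    taking the identity to the reverse permutation, with the inversion
    count growing by one at every step."""
    perm = list(range(1, n + 1))
    for k, pos in enumerate(swaps):          # pos in 1..n-1 swaps entries pos, pos+1
        perm[pos - 1], perm[pos] = perm[pos], perm[pos - 1]
        if inversions(perm) != k + 1:        # must gain exactly one inversion
            return False
    return perm == list(range(n, 0, -1))

# 1,2,1 is one of the two sorting networks for n = 3
print(is_sorting_network(3, [1, 2, 1]))   # True
print(is_sorting_network(3, [1, 1, 2]))   # False: the second swap undoes the first
```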
Yes. Okay, so what is an inversion in a permutation? You take two positions and look at the numbers in those positions. If they are in increasing order, there is no inversion; if they are in decreasing order, there is an inversion. And then you just count how many pairs of positions there are such that the numbers are in decreasing order. Here there are none of those, because all of them are increasing, and here that's the maximal amount. Okay. Now, geometrically you can identify these sorting networks with various things, which might explain why we use the name "sorting network". One nice representation is the so-called wiring diagram. You should think about having n wires; here n is equal to 4. At the first moment of time they are ordered increasingly, 1, 2, 3, 4, and at the last moment of time they are ordered decreasingly, so all of them are reversed. Then you want to link these by wires, and at each point of time you allow only two adjacent wires to swap. Then you draw this wiring diagram; that is just a geometric interpretation of the sorting network. That's one thing. Another thing: if you are interested in computer science or programming, then you can identify the sorting network with, well, literally a network which sorts. Because if you think about each swap, each cross on this picture, as an elementary sorting element which takes two inputs and sorts them into increasing order, then by composing all these sorting elements, no matter what input you start with, the output will be sorted in increasing order. So if you ever tried to code something like the bubble sort algorithm, then it is a particular case of such a sorting network. Okay, there are many beautiful mathematical results about these sorting networks; the first ones are already some 35 years old. Richard Stanley was interested in how many sorting
networks there are, and he found a very nice formula, a product of factorials; the formula is given there. You fix the rank of your group, you count the number of sorting networks, and that's the beautiful formula that he discovered. This sparked many various projects studying sorting networks and the beautiful mathematics around them, mostly in the area of algebraic combinatorics. People were counting different things, and there were numerous articles related to that. Just to mention some of them: people were finding bijective proofs of these kinds of enumerative results. Then there were weighted versions. Two families of symmetric polynomial objects appeared: one of them was Schubert polynomials and the other was Stanley symmetric functions. There is, for example, a nice review by Adriano Garsia from about 15 years ago, called "The Saga of Reduced Factorizations of Elements of the Symmetric Group", some 150 pages summarizing many different developments. Okay, that is combinatorics, but we are really at a random matrices school. So far I don't have any matrices; they will appear a little bit later. But let's first at least say something about random objects, and here the main interesting development happened about 10 years ago, when Omer Angel, Alexander Holroyd, Dan Romik, and Bálint Virág asked a question: okay, fix n and choose a sorting network uniformly at random. Now, how does it look when n is large?
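Going back a step, Stanley's product formula can be checked by brute force for small n: the number of sorting networks of S_n equals (n(n-1)/2)! divided by 1^(n-1) 3^(n-2) 5^(n-3) ... (2n-3)^1. A sketch (function names are mine), comparing the formula with direct enumeration:

```python
from math import factorial

def stanley_count(n):
    """Stanley's product formula for the number of sorting networks of S_n."""
    num = factorial(n * (n - 1) // 2)
    den = 1
    for k in range(1, n):                 # factor (2k-1)^(n-k)
        den *= (2 * k - 1) ** (n - k)
    return num // den

def enumerate_networks(n):
    """Count all sorting networks by depth-first search over
    inversion-increasing adjacent swaps."""
    target = list(range(n, 0, -1))
    def dfs(perm):
        if perm == target:
            return 1
        total = 0
        for i in range(n - 1):
            if perm[i] < perm[i + 1]:     # this swap increases inversions
                nxt = perm[:]
                nxt[i], nxt[i + 1] = nxt[i + 1], nxt[i]
                total += dfs(nxt)
        return total
    return dfs(list(range(1, n + 1)))

for n in range(2, 6):
    print(n, stanley_count(n), enumerate_networks(n))
# the two columns agree: 1, 2, 16, 768
```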
So that was the question they asked: how does this picture look when it is huge? Well, there are various twists. The sorting network is a complicated object, so you can clarify what exactly you mean by this question, and then the answers will be very different. Here are three ways to clarify what you mean. The first question they asked is: what if you look, in these wiring diagrams, at the trajectories of individual particles, that is, you just look at one wire. For example, this is the wire of particle number two: it starts here, goes down, down, goes up to here. So you have this huge picture of wires, and they have some trajectories. How do these trajectories look when n is large and you sample your sorting network uniformly at random? Here is a simulation which was done by Alexander Holroyd. These lines are trajectories of different particles, shown in different colors. Well, you see that something is happening here; you can see they are not entirely random, there are some patterns. And the pattern that you can see is that all these curves are somewhat different, but if you look more closely, you can realize that they all look like sine curves. They are shifted, and the amplitude can be different, but they are always sine curves, and there is a general conjecture of these four researchers that the trajectory of every particle should always be a sine curve, after a proper shift. Okay, that's one question.
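The trajectory of one wire is easy to extract from the list of swaps; this is my own minimal sketch, not code from the talk. It tracks which position a given label occupies after each swap (the conjecture says that, after rescaling, such paths look like shifted sine curves).

```python
def trajectory(n, swaps, label):
    """Positions occupied by `label` after each adjacent swap.
    `swaps` lists positions pos (1-based): swap entries pos and pos+1."""
    perm = list(range(1, n + 1))
    path = [perm.index(label) + 1]
    for pos in swaps:
        perm[pos - 1], perm[pos] = perm[pos], perm[pos - 1]
        path.append(perm.index(label) + 1)
    return path

# wire 2 in the sorting network 1,2,1,3,2,1 for n = 4
print(trajectory(4, [1, 2, 1, 3, 2, 1], 2))   # → [2, 1, 1, 2, 2, 3, 3]
```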
Oh, there is another question. Instead of looking at the trajectory of a wire, you might try to look at the permutation that you get. By the definition of a sorting network, it is a sequence of permutations, because it is a path in the Cayley graph: you start with the identity permutation here and end with the reverse permutation there. Let's look at what is happening somewhere in between. For example, you might take a look at what is happening halfway through. Altogether you do n times (n minus 1) over 2 steps, so maybe after n times (n minus 1) over 4 steps, how does your permutation look? What is it? Well, again we want to draw some pictures. So how do we draw a permutation? Well, I know one way: you can identify a permutation with its graph. For example, here you have the permutation 2, 4, 1, 3; that is because the second particle comes first, then the fourth, the first, and the third one. You can identify this permutation with the set of points (1,2), (2,4), (3,1), (4,3), just because your permutation is 2, 4, 1, 3. This is just the graph of the function which your permutation represents. And again, you can ask how this graph looks: what is the picture for a typical large sorting network? Here is another simulation, again by Alexander Holroyd. This is a large permutation, and you can see that something very special is happening here: all the points are clearly inside a circle, and that is one of their conjectures, again, that they should be inside the circle. Moreover, there is some density of points, and they have a precise conjecture for how the density should look everywhere on this picture. You can ignore the colors here.
They don't mean anything, they are just to make the picture more fun. Okay, so these two conjectures are still open. There is a more general conjecture, related to the geometry of an object called the permutahedron, which would imply these conjectures, but that conjecture is also open. There was some recent progress on a related but different model of so-called lazy sorting networks, where something of this sort was obtained by two groups of authors, but in the original setup nobody knows how to prove anything of this sort. Okay, so these are two points of view on sorting networks which lead to open problems. I will switch to the third point of view, where we have some progress and where we can actually prove something. So instead of looking at wires or at permutations, let's look at the swaps, where you swap two adjacent labels in your permutation. The swaps are just the crosses here. So let's forget about everything else and look at these swaps, treating each of them as a unit mass. You have some point process of swaps, a random point process, and you want to understand how these swaps look. Maybe the analogy with random matrices would be that in random matrix theory you have eigenvalues, you put particles at the positions of these eigenvalues, and you look at how this point process looks. Now instead of eigenvalues we have these swaps, but it is still a point process, a two-dimensional point process, and we want to understand something about it. Well, the first results about this point process of swaps are already in the original article on random sorting networks by Angel, Holroyd, Romik, and Virág; they proved two results. First of all, they proved that the point process of swaps is translation invariant in the horizontal direction: if you shift the whole picture to the right by one, then probabilistically the distribution is the same.
Nothing changed. That is just a property of the sorting network; there is a nice bijective map on sorting networks which shifts you to the right. That's one thing, stationarity. The other thing they proved is a global law of large numbers. Namely, how many swaps are there? There are n times (n minus 1) over 2, because each swap increases the number of inversions by 1. So let's put at each swap position a mass of 2 divided by n times (n minus 1). You get some probability measure, which is a random probability measure. What they proved is that as n becomes large, this random probability measure approaches a non-random probability measure. And what is this non-random probability measure, how does it look? That's the formula there, the density of this limiting object. In the horizontal direction, of course, by translation invariance we can only get the Lebesgue measure, so everything becomes uniform. The vertical direction is where the really interesting things happen: if you look at the density of the swaps in the limit, after you rescale by n, this density starts to look like a semicircle. That's the formula for the limiting density, the square root of y times (1 minus y), where y is the vertical coordinate and it changes from 0 to 1.
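The semicircle profile is visible already at small n if you enumerate all sorting networks and tally how often each swap position 1..n-1 is used; the normalized limiting density is (8/π)·sqrt(y(1-y)) on [0,1]. A brute-force sketch (helper names are mine):

```python
from collections import Counter

def all_networks(n):
    """Yield all sorting networks of S_n as lists of swap positions (1-based)."""
    target = list(range(n, 0, -1))
    def dfs(perm, word):
        if perm == target:
            yield word
            return
        for i in range(n - 1):
            if perm[i] < perm[i + 1]:     # swap must increase inversions
                nxt = perm[:]
                nxt[i], nxt[i + 1] = nxt[i + 1], nxt[i]
                yield from dfs(nxt, word + [i + 1])
    yield from dfs(list(range(1, n + 1)), [])

n = 5
counts = Counter()
for word in all_networks(n):
    counts.update(word)               # tally swap positions over all networks

total = sum(counts.values())
for pos in range(1, n):
    print(pos, counts[pos] / total)
# the profile is symmetric and highest in the middle positions,
# a discrete shadow of the semicircle density sqrt(y(1-y))
```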
So that's the global law of large numbers. Well, you can say, okay, this looks very similar to the random matrix object which is called the Wigner semicircle law. Conceptually, actually, I don't know why it would be the same object here, so I treat it more like a coincidence, because the source of the appearance of this semicircle law in this model is very different from how it appears in random matrices. So that I would treat as a coincidence. But in the next result the random matrices will appear more conceptually, not just as a coincidence. So now we switch, finally, to our results, to something more recent, not from 10 years ago but from this year. For these results, instead of looking globally at this point process of swaps, we want to look locally: how does this picture look locally, when its size is huge and you zoom in somewhere in the middle? What is happening with these swaps, what are their distributions? Here is the first theorem that we prove; it is joint with a postdoc at MIT, Mustazee Rahman. We will first speak only about the swaps which are closest to the left boundary of this picture. Since everything is translation invariant, this is not very important, but it is easier to start with. So fix some level, maybe here, the swap between two and three; that's the thing we want to understand, and let's look at the time when the first swap happens on this line. Well, how should this time scale?
Let's think a little bit. The total number of swaps is of order n squared, and the number of positions is of order n, so at each position there will be of order n swaps. Okay, and once you realize that, it is not surprising at all that after you rescale the time of the first swap by n and send n to infinity, you get some random variable, which we will call F_gap. Okay, so what is this random variable? Well, the first thing you can notice is that in this limit transition you can look at different positions: maybe you look closer to the border, somewhere here, or maybe you go closer to the middle of the picture. The random variable that you get is always the same; the only thing which changes is the rescaling, and the rescaling is exactly according to the same semicircle density that we had before. Moreover, we have an explicit formula for the probability distribution. Here is the formula: its law is given by a certain Fredholm determinant, whose kernel is given by the sum of two terms. The first looks familiar to people from random matrix theory, because it is what is called the sine kernel, sine of (u minus v) divided by (u minus v), and then there is a certain twist of this sine kernel as the second term. Before proceeding any further, let me mention that there was an independent parallel work by a group of four researchers, Angel, Dauvergne, Holroyd, and Virág, who were also looking at this problem, and they also proved the existence of the limit. However, while our methods with Mustazee are methods of integrable probability, so we work with explicit formulas and pass to the limit in these formulas, their methods are more probabilistic. Because of that they can also prove existence, but the identification of the limiting object is more complicated for them, because they have some kind of probabilistic description, and it is
hard to match such a probabilistic description with explicit formulas. Well, nevertheless, our explicit formulas might look complicated, some Fredholm determinants, and you might wonder: what is this object, where did it appear before? There are two places where the same distribution appeared before, and both of them are in the random matrix area. The first place where this distribution F appeared is when you look at the object which is called the antisymmetric GUE, for historical reasons. This is the following object. You take a matrix X which will be a 2n times 2n matrix of i.i.d. real mean-zero Gaussian random variables. Well, this matrix has complicated eigenvalues, so you want to simplify it a little bit; for that you make a skew-symmetric matrix out of it. How do you make a skew-symmetric matrix? You compute X minus X transpose, and that is a skew-symmetric matrix. Okay, now if you have a real skew-symmetric matrix, its eigenvalues come in pairs x and minus x, and all of them are purely imaginary.
This is just linear algebra. In particular, there will be a pair of these purely imaginary eigenvalues which is closest to zero. It turns out that if you look at the distribution of this random pair of eigenvalues, choose one of them, and send the size of the matrix to infinity, then after proper rescaling you get precisely the same distribution. So our distribution is the limit of the smallest eigenvalues of this antisymmetric GUE. That's the first appearance. Now, the second appearance is when you look not at antisymmetric matrices but at symmetric matrices. Start with the same X, but now, instead of making it skew-symmetric, make it symmetric, looking at the eigenvalues of X plus X transpose. It turns out that the distribution function of our random variable, the probability that it is larger than s, is precisely the limit of the gap probability for this ensemble: the limit of the probability that there are no eigenvalues in the small interval from minus s to s. That's an interesting independent question, why these two descriptions are the same, and I would really appreciate it if somebody could explain it to me, because I know this only on the level of formulas: people computed one thing, people computed another thing, and on the level of formulas you can see that they are the same. On the other hand, you would want to see something simple here, because this is just the smallest eigenvalue of one matrix and a gap probability of another matrix, so there should be a nice description; unfortunately, I don't know any simple explanation. It is the asymptotic formulas which coincide.
Yeah, but I don't know, maybe there is some nice relation at finite n as well. Now, there is a nice corollary of this, a nice reformulation of our result, so let me give you this reformulation. Recall that everything was translation invariant, so when you look at something near the origin, on the left of the picture, you really get the same results everywhere in between. Because of that, from our distribution of the first swap you can, by some massaging, get the spacing between two swaps. So if you have your random sorting network, you look at some line, like here, and you look at the spacing between two adjacent swaps somewhere. Then, as a corollary of what we had, we have the statement that asymptotically, as n goes to infinity, the spacing between two swaps of a random sorting network has the same distribution as the bulk spacing of the eigenvalues in the Gaussian Orthogonal Ensemble; that is a universal object, called the Gaudin-Mehta distribution or sometimes the exact Wigner surmise, which governs the spacing between eigenvalues in any large real symmetric matrices. Yes, exactly, that's not the same distribution: the spacing is a slightly different distribution than this distribution F_gap, but there is a simple formula for how you transition from one to the other, something like taking a derivative of the density, something very simple. It's not F_gap.
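The bulk spacing statement can be illustrated by simulation; here is my own sketch (NumPy assumed). It samples normalized nearest-neighbour spacings from the bulk of GOE spectra; their histogram is well approximated by Wigner's surmise density (π s/2)·exp(-π s²/4), which has mean 1 and variance 4/π - 1 ≈ 0.27.

```python
import numpy as np

rng = np.random.default_rng(1)

def goe_bulk_spacings(n, samples):
    """Normalized nearest-neighbour spacings near the center of GOE spectra."""
    out = []
    for _ in range(samples):
        M = rng.standard_normal((n, n))
        H = (M + M.T) / 2                  # real symmetric Gaussian matrix (GOE)
        ev = np.linalg.eigvalsh(H)
        mid = ev[n // 4 : 3 * n // 4]      # stay in the bulk of the spectrum
        gaps = np.diff(mid)
        out.extend(gaps / gaps.mean())     # unit mean spacing in each sample
    return np.array(out)

s = goe_bulk_spacings(200, 20)
print(s.mean())   # ~1 by construction
print(s.var())    # roughly comparable to the surmise value 4/pi - 1
```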
Yeah. But because F_gap was identified with a gap probability for the GOE, and that gap probability can be turned into a spacing distribution by the same procedure, you conclude that the bulk spacing in the GOE is the same as the spacing in these random sorting networks. Now, the spacings between eigenvalues of symmetric matrices are an object which many people studied, maybe starting from the work of Wigner in the middle of the 20th century. What Wigner was saying is that these spacings are a nice model for the energy level spacings in nuclei; that was Wigner's surmise, his guess for what the distribution of this spacing should be. So now, with our results, we can say that Wigner could have said a different thing: he could say that a nice model for the spacing between the energy levels is the spacings in random sorting networks, and the result is the same. Okay. Just to compare, there is another result, by Alex Rozinov, who was a graduate student at NYU at that moment, I think it is in his thesis: if, instead of looking in the middle of this picture, you look near the border, say at the bottom-most swaps here, then the scaling will be a little bit different, but you can still compute the limit, and the limiting object is simpler than what we had. You don't need any Fredholm determinants: for the first swap you get just the absolute value of a Gaussian random variable, and as the spacing near the bottom of your picture you get something which is also called the Wigner surmise. If the previous one is called the exact Wigner surmise, this one is called just the Wigner surmise, and it is the following distribution: you take a two by two matrix from the Gaussian Orthogonal Ensemble.
So, a real symmetric Gaussian matrix, and you compute the difference between its two eigenvalues; that's what you get here. So in the middle of the picture you have the spacing of infinitely large matrices, but at the very bottom you have just the spacing between the two eigenvalues of a two by two matrix. Okay, now let's go further. So far I was describing the limit of just one swap: maybe two adjacent swaps and their spacing, or maybe the leftmost swap. You can naturally ask what happens when you go further: what happens if you now look at the joint limit of multiple swaps, so that you understand how this picture looks locally. Well, again there are different twists on how exactly you can ask that question. The first way to address it is by looking at the local patterns that you observe in this picture. You draw a window, like here, and you look at all the swaps which appear inside this window; then you ignore all the empty spaces, and whatever remains is some local pattern which looks like a part of a sorting network. You can ask what kinds of patterns appear there, and we had a theorem, about eight years ago, with Omer Angel and Alexander Holroyd, which says that if a pattern is combinatorially possible, then it will actually show up in your configuration, a quadratic number of times. So if something is possible combinatorially, then it happens with high probability. This doesn't give you any quantitative control, but it gives you the general feeling that everything happens. Now, that was actually the first time I got to know about sorting networks; that was when I was an intern at Microsoft Research, so it started nine years ago. But since that time there have been many developments, and now, in our work with Mustazee Rahman, instead of just saying
that all the patterns appear, we can now do precise computations: we can compute the probabilities and also the local spacings in these patterns which appear in your sorting network. Now I am going to present the theorem which describes how this looks. Okay, so how do we think about this limit of several swaps? We need to draw a window. The window that we will be drawing will be finite in the vertical direction and of order n in the horizontal direction, and we will look at the swaps inside this finite window; that's the object we will be interested in. Afterwards, you can position your window anywhere in your picture and look at what is happening inside it. You rescale the positions of the swaps inside the window by n, just because that is the typical spacing, as we understood, and you look at the point process inside this window. Now the statement is that the limit exists, with some explicit distribution: you fix a window, you send the size of your sorting network to infinity, choosing a uniformly random one, and probabilistically you have some point process which governs the limit. Again, there is independent work by this group of four researchers who also proved existence, but again, they didn't have a nice description, and we have a description, which I will present in a second. Okay, so there is some object which governs the limit. Unfortunately this object is quite complicated, but the sorting network itself is quite complicated, so I will need to describe a combinatorial procedure which generates this object appearing inside sorting networks. The procedure has two steps. In the first step you start with an object which is very familiar to the random matrix community: a determinantal point process indexed by a number of real lines, like that. So there will be one
coordinate which is integer-valued and another coordinate which is real-valued, and you have some point process of particles on this thing. This point process is determinantal: any correlation function can be computed as a minor of a correlation kernel. And this kernel is somewhat explicit; it is given by integrals, as here. Maybe complicated integrals, but still, you can compute them. In particular, when the vertical coordinates are the same, y_i equal to y_j in this formula, it simplifies to a sum of two sines. That is the determinantal point process that we need. Now, this determinantal point process actually appeared before: Peter Forrester and Eric Nordenstam in 2009 proved that it appears when you look at something which is called the antisymmetric GUE corners process. Namely, you again take a matrix of i.i.d. Gaussian random variables, this time an infinite matrix, and you create out of it a skew-symmetric, now infinite skew-symmetric, matrix, and you start cutting corners of this skew-symmetric matrix. Those will again be skew-symmetric matrices, and you look at the eigenvalues of several of these corners: maybe you take corners of sizes 1000, 1001, 1002, you look at all the eigenvalues of these corners, and you put them on the same picture. And that is precisely what you get in the scaling limit of these eigenvalues near zero.
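The corners construction is easy to render numerically; here is my own sketch (NumPy assumed, with a finite matrix standing in for the infinite one). Since i·A is Hermitian for skew-symmetric real A, the eigenvalue sets of consecutive corners interlace, by Cauchy's interlacing theorem.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 12
X = rng.standard_normal((N, N))
A = X - X.T                                # skew-symmetric matrix

def corner_levels(A, sizes):
    """Imaginary parts of the eigenvalues of the top-left k x k corners.
    i*A is Hermitian, so eigvalsh returns them as real sorted values."""
    return {k: np.sort(np.linalg.eigvalsh(1j * A[:k, :k])) for k in sizes}

levels = corner_levels(A, [10, 11, 12])    # three consecutive corners

# Cauchy interlacing between consecutive corners:
a, b = levels[11], levels[12]              # lengths 11 and 12
print(all(b[i] - 1e-9 <= a[i] <= b[i + 1] + 1e-9 for i in range(11)))
```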
That's this point process. So you can say that level zero is, say, the matrix of size 1000, this minus one is 1000 minus one, and this is 1000 plus one; you put the eigenvalues here on these lines, and then you take the limit when this 1000 becomes very large, and that's where you encounter the same object. In that world this is called the hard edge limit of the antisymmetric GUE corners process near zero. Yes, so that is the determinantal point process; that is maybe an easy object for the random matrix community. Now we need an additional step: we need to feed this determinantal point process into a certain combinatorial procedure, which is a version of what is called jeu de taquin in combinatorics. So what is this combinatorial procedure? Well, we started with this point process, these hard-edge eigenvalues of the antisymmetric GUE corners; now we start doing something with it. These will be the blue particles: that is our input. Our output will be the brown crosses; these are the crosses which govern the limit of sorting networks. So what do you do? First, you fix some large window, just to avoid speaking about infinities. Then you locate the particle in your window which is closest to the left border; inside this window you can see that clearly this one is closest to the left border. Okay, you put a brown cross there; that will be one of the outputs of our procedure. After you have chosen this brown cross, you do a procedure which is called sliding. So what is this sliding? Well, you compute something called the sliding path. How do you do it? You start from this cross and you start moving to the right, and you look at the two adjacent lines, and you want to find a particle on one of these two adjacent lines. So you move to the right, looking around, and here is the first particle.
So you found this particle. Okay, you move to the line where this particle sits, and then you continue moving to the right, and again you look at the two adjacent lines and look for a particle there. You move, move more, here is another particle; now you move to its line. Again you continue moving to the right; you find the next particle, move, find the next particle, etc. That's the sliding path. Is it clear what the sliding path is? Okay, after you construct the sliding path, you do the sliding. The sliding is the following procedure: you look at the particles along the path, and each particle can move by plus one or minus one line, so that it still remains on the path. If you want, you are moving your particles toward the origin, toward the left, except that the path is also going up and down. So this particle will be moved down, this one moved up, this one down, this one down, this one up. That's the sliding procedure. Okay, so let's do the sliding. Here it is, and at this point you just remove the particle which became a cross, the first one that you started from; you remove it.
It is a cross now. And then you just repeat the procedure: you again locate the leftmost particle, this one, you construct its sliding path, you slide the particles along this path, you remove the particle; you again find the leftmost particle, compute the sliding path, slide the particles. After a long time all the particles will disappear and only these brown crosses will remain. Now you can notice that here the brown crosses have even coordinates in our picture, and that will always be true, due to just the combinatorics of this picture: they always have even coordinates. So you need to divide these coordinates by two, because we wanted to have integer-valued things in the end. You divide the coordinates of the crosses by two, and that is precisely the limit of the sorting networks that we are seeking. So here is the theorem: the local limit of the sorting network inside a window is the result of applying this stochastic jeu de taquin algorithm to the determinantal point process which appears at the hard edge of the eigenvalues of this antisymmetric GUE corners process. In particular, if you look carefully at what was happening in our algorithm, you can notice that if some particle was leftmost on some line, like this one was leftmost on this line and that one was leftmost on that line, then in the end you will have a cross precisely at that position, because there is no way a sliding path can go through a leftmost particle. Because of that, in our limit of sorting networks, if you look at the positions of the leftmost crosses, say the joint distribution of this spacing and that spacing, this can again be written in terms of Fredholm determinants, in terms of this determinantal point process. For more complicated distributions, you really need to understand this algorithmic procedure.
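As a toy rendering of the two-step procedure, here is a sketch of the sliding construction on a hand-made finite configuration. This is my own reading of the description above, not the exact algorithm from the paper: the tie-breaking, the finite window, and the omission of the final division by two are my simplifying assumptions.

```python
def stochastic_jdt(particles):
    """Toy version of the sliding (jeu de taquin) step described above.
    `particles` is a set of (line, x) pairs; lines are integers, x real.
    Returns the crosses (output points) in the order they are produced."""
    particles = set(particles)
    crosses = []
    while particles:
        # 1. the leftmost particle in the window becomes a cross
        p0 = min(particles, key=lambda p: (p[1], p[0]))
        crosses.append(p0)
        # 2. build the sliding path: repeatedly move right to the nearest
        #    particle on one of the two adjacent lines
        path = [p0]
        while True:
            line, x = path[-1]
            cand = [p for p in particles
                    if p[0] in (line - 1, line + 1) and p[1] > x]
            if not cand:
                break
            path.append(min(cand, key=lambda p: p[1]))
        # 3. slide: each particle on the path moves to the line of its
        #    predecessor on the path (one line up or down), and the
        #    starting particle is removed from the configuration
        for prev, cur in zip(path, path[1:]):
            particles.remove(cur)
            particles.add((prev[0], cur[1]))
        particles.remove(p0)
    return crosses

pts = {(0, 1.0), (1, 2.0), (0, 3.0), (2, 3.5), (1, 4.0)}
out = stochastic_jdt(pts)
print(len(out) == len(pts))   # one cross per input particle: True
xs = [x for _, x in out]
print(xs == sorted(xs))       # crosses come out left to right: True
```

Each round removes exactly one particle, so the procedure terminates with as many crosses as there were input particles, and since sliding never changes x-coordinates, the crosses appear from left to right.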
Okay, now let me say a couple of words about the proofs, and in order to speak about the proofs it is nice to have an analogy with previous results in this subject. There are other famous results in which objects from the study of random permutations were linked to something from random matrix theory. Maybe the most famous one is the Baik-Deift-Johansson theorem from 1999. What was this theorem about? Instead of sorting networks, which are complicated objects constructed out of permutations, they looked at longest increasing subsequences of permutations. So what is that? Well, you look at a permutation, like here; here is a permutation of nine letters, and you look at the letters which form an increasing subsequence. For example, this 3, 4, 8, 9, that's an increasing subsequence here. There may be many increasing subsequences in your permutation, but you can choose one of them which has the longest length, and you look at the length of this longest increasing subsequence. Now you sample your permutation