And this is by Peter Forrester, speaking about decomposition of measures in random matrix theory applied to number theory and integral geometry. Right, the echo is gone and I'll resume. I was going to start by thanking the organisers, particularly Alice and Ivan and, in the background, Alexi. Ten years ago I was lucky enough to be in the state of Utah for a meeting organised by the random matrix community — I think Dino had something to do with that meeting. One of the participants was Bálint Virág. Bálint was talking about his newly introduced Brownian motion model relating to the beta ensembles, and he wanted to explore beta-dependent quantities. He asked a particular question of Alexi, and Alexi passed the question on to me. So let me say something about this question. It goes back to the times of Mehta and Dyson — it's actually a theorem from their early papers, where they consider the circular ensemble with beta equals one, the circular orthogonal ensemble, with 2N points, the eigenvalues on the circle. If one integrates out every second eigenvalue, we are left with a new point process. The theorem of Mehta and Dyson — the operation they introduced there is called alt — is that the distribution of every second eigenvalue is actually equal to the eigenvalue PDF of the circular symplectic ensemble. So that's going between beta equals one and beta equals four. I was very interested in these sorts of interrelations. And what Bálint asked was: now that we've got particular interest in general beta, let's suppose we don't restrict to just the classical values here, one and four.
We take another beta, and the simplest example would be: can we find a value of beta such that if we integrate over two eigenvalues in a row and observe the third, integrate two in a row, observe the third, and so on, there is an interrelationship with a certain circular beta ensemble? We have to nominate what beta is here, and we'll have 3N eigenvalues. And the answer is, in fact, there is, and the interrelation goes between beta and four on beta. Now in this setting, what is beta going to be? I've got to get my values correct here — excuse this, it's slightly impromptu, and I won't continue, except to say that we can find a beta value for this relationship to actually hold, and if we integrate out three eigenvalues in succession we would have a 4N-to-N relation, et cetera, et cetera. So this is something that actually came out of discussions at a conference like this, and I think that's one of the great advantages of having people together and hearing about different perspectives. At a technical level, it's also of interest how one can make a contribution of this sort. Well, I have my own technical tools. My technical tools were working with Selberg-type integrals, so I knew how to actually perform integrations over these sorts of domains. These were very popular in studying models relating to Schur processes, Macdonald processes, et cetera, where we have interlacings. I knew of such integrations, and I knew that perhaps one could extend this to general values of beta, although it wasn't obvious at the time how to break away from structures that are special to beta equals one — a determinantal structure that appears there — but it was possible to get general beta.
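As an aside on those Selberg-type integrals: the simplest case can be checked directly. The sketch below (my own illustration, not from the talk) compares Selberg's gamma-product evaluation against direct quadrature in the n = 2, alpha = beta = gamma = 1 case, where the integrand reduces to (t1 - t2)^2 over the unit square.

```python
import math
from scipy.integrate import dblquad

def selberg(n, a, b, g):
    """Selberg's gamma-product evaluation of his n-dimensional integral."""
    val = 1.0
    for j in range(n):
        val *= (math.gamma(a + j * g) * math.gamma(b + j * g)
                * math.gamma(1 + (j + 1) * g)
                / (math.gamma(a + b + (n + j - 1) * g) * math.gamma(1 + g)))
    return val

# n = 2, alpha = beta = gamma = 1: the integrand is (t1 - t2)^2 on [0,1]^2,
# which integrates to 1/6, matching the gamma-product formula.
direct, _ = dblquad(lambda t1, t2: (t1 - t2) ** 2, 0, 1, 0, 1)
```

Both routes give 1/6 here; the value of the gamma-product form is that it handles arbitrary n and non-classical beta = 2*gamma where direct integration is hopeless.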
So here, this would be — actually thinking of it — beta equals six. And on this side, well, I think we go down to two-thirds, but I'll double check that afterwards; it's certainly a theorem, a theorem that holds for finite N. It's something that came out of the discussions here 10 years ago. Hopefully, as Persi Diaconis said, I will try and pose some questions about where I'm at in this present line of study, where I'm a little bit stuck on some points, and maybe I can get some advice that will help me out. So let me talk about the material for today. The material for today is a little bit ambitious — less is better, so I perhaps won't continue on to the very last dot point, but I will give an indication of why I'm interested in that very last dot point. So let me get underway. Problems in number theory and integral geometry are not necessarily mainstream at random matrix conferences. But it is a particular line we could follow if we were to take an historical perspective and ask where certain studies within random matrix theory show themselves. If one was to start off with the work of Hurwitz, one would be giving a talk of this sort. So I did start off with the work of Hurwitz, I found a collaborator in that pursuit — Persi Diaconis over there — and wanted to explore further that particular lineage, and this is what comes out of such a study. I will talk about three particular problems, concrete problems. The first will be counting matrices with integer entries and determinant equal to one. They form a group; the simplest example, the 2 by 2 case, is SL(2,Z). How many such matrices are there with a bounded norm, when that bound gets large? That's an asymptotic counting question, and it turns out it can be answered through notions of random matrix theory. The second question I'll inquire about is to do with a random lattice.
A random lattice is defined by taking a basis chosen with uniform measure — and I'll explain what uniform measure means. Take this again in just the 2 by 2 case: we're considering two linearly independent vectors, and we form the integer span of those two vectors. That gives us a lattice; we've defined the lattice as the integer span of our two given vectors. Within that lattice there'll be a smallest vector, and a second smallest linearly independent vector. The question we'd like to ask is: what are the statistical properties of that smallest vector and that second smallest vector? That is again a question that can be answered through notions in random matrix theory. The third problem I'd like to discuss is one of the statistical properties of a convex hull. Satya, sitting over there, has studied convex hulls relating to Brownian motions. What I'd like to talk about is convex hulls of points — as the concrete example, take three points in a disc uniformly at random, form a triangle, and ask about the statistical properties of that triangle. How does one actually perform calculations like that? That would be my last topic, if I have time. So let's proceed. This is the first motivating question, the asymptotic counting problem — as I say, very concrete. If we just think in the 2 by 2 case, we're asking ourselves how many matrices there are with some bounded norm. Notions in random matrix theory are relevant because of an asymptotic counting formula due to Duke, Rudnick and Sarnak, 1993, where the asymptotic count is related to a ratio, one on the volume of the fundamental domain. The volume of the fundamental domain is an arithmetic quantity that's actually known: for n equals 2 it's the zeta function at 2, pi squared on 6. The remaining question is a random matrix problem: how do we actually compute the volume, where we're choosing invariant measure?
I have to define what the invariant measure means on the space of matrices — in the n equals 2 case, SL(2,R). So that's the question; we'll be answering it as we go along. The second problem I made mention of is this random lattice problem, which is very much in the lineage of invariant measures, as we'll see. Here's a picture of what is going on: v1 and v2 are our original basis vectors, whose integer span we take. The question we would like answered is: given that v1 and v2 are chosen uniformly at random, in a sense that will be made clear in a moment, what is the distribution of the shortest basis vectors, u1 and u2? A very concrete question — you can see it. It's a question in the geometry of numbers. What about general dimensions? n equals 2 hasn't necessarily got that much prestige in our random matrix community — the large n limits are usually preferred — but still, n equals 2 is very good for drawing pictures. Here's n equals 2 again. Here is the third and final problem. This problem is not a number theory problem per se; it's a problem in integral geometry, or geometrical probability. So we've got a disc here, we're choosing three points uniformly at random, we're forming a triangle, and we're asking about the statistical distribution of the area of that triangle. Here I've denoted the region by Delta, and we consider the volume of Delta — in general dimension the volume, here the area. We could ask about all moments, but let's be happy to compute the average. Now, I saw this remarkable formula and really got interested in this topic because of it — I wanted to understand why we could get the binomial coefficients with the squares in there. This is a result of Kingman. And I just sat down trying to compute this integral in the case n equals 2. So we can try that — let's see how much time I've got left. Let's see if we can do that in the next three quarters of an hour.
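Before attempting the exact computation, the n = 2 mean can at least be estimated by simulation. The sketch below is my own addition: it samples triangles with vertices uniform in the unit disc and compares the mean area against the classical closed-form value 35/(48π) — that constant is taken from the classical literature, not stated in the talk.

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

def uniform_disc(rng, size):
    """Uniform points in the unit disc: radius sqrt(U) fixes the area bias."""
    r = np.sqrt(rng.random(size))
    t = 2 * np.pi * rng.random(size)
    return r * np.cos(t), r * np.sin(t)

x1, y1 = uniform_disc(rng, n)
x2, y2 = uniform_disc(rng, n)
x3, y3 = uniform_disc(rng, n)
# Triangle area from the cross product of two edge vectors.
area = 0.5 * np.abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
est = area.mean()
exact = 35 / (48 * math.pi)  # classical value for the unit disc (my addition)
```

With 200,000 samples the Monte Carlo estimate sits within a few parts in a thousand of 35/(48π) ≈ 0.2321; the point of Kingman's formula is that it delivers such values, and all higher moments, in closed form for any dimension.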
It's an elementary integral in principle, but Kingman's method of computation is not elementary at all. In fact, I first read about this in an old American Mathematical Monthly article, where the question was posed. It wasn't answered, except for a footnote — I think the person's name is Klee — thanking Kingman for a note saying that he knew how to do this. But he kept us all in suspense; he didn't explain how it might be done. So if I get time, I'll say a little bit about how some of our random matrix theory gives rise to an understanding of a computation of that sort. So this is moments of the volume of a convex hull of points. Okay, so the very beginning I would like to talk about is one of the beginnings of random matrix theory. We can say there are a number of different beginnings: there is this beginning of Hurwitz, which is not necessarily that high profile, the beginning in mathematical statistics due to Wishart, and a beginning in theoretical physics due to Wigner, at least. The beginning due to Hurwitz goes right back to the days when invariant theory was of interest. Hurwitz, following on from some famous work of Hilbert on a finiteness problem — a particular notion relating to invariants of higher-order polynomials — got to ask a similar question in the continuous setting, and he was led to the notion of what Persi introduced earlier as the Haar measure: the left and right invariant measure with respect to the group action. And he did a number of calculations that are of lasting interest in random matrix theory — I'd like to make that point as we go. So what's the notion here? The notion is the definition of this left and right invariant measure, introduced, as I say, by Hurwitz some time ago. Hurwitz actually gave a specification of it in terms of the volume element.
So if we're taking orthogonal matrices, the Hurwitz specification is to take this orthogonal matrix R and form R transpose dR — the meaning of dR I'll give by example over the page. How can one see that this is actually invariant? What do we mean? Well, we mean that central equation, but let's look here at left invariance. We take a fixed orthogonal matrix R naught and multiply on the left. We want to see that our nominated volume form, which we're claiming is invariant under left and right group action, is actually unchanged. It's a very simple calculation, using the fact that R naught is a member of the orthogonal group, to see that that particular volume element is unchanged. So the correct combination is R transpose dR — that's fundamental, and we can form a calculus out of that. And here is an example. We'll take a parameterization — this is what Hurwitz did as well. One could give a lecture, and Persi certainly has over the years given a number of lectures, relating to the use of Euler angles and their generalizations for the parameterization of these matrices — it's quite remarkable that it all goes through structurally. I'm just taking the 2 by 2 case. So we take the standard parameterization of a rotation matrix and use the formalism to calculate what the invariant measure is in these variables. One finds that R transpose dR is an anti-symmetric matrix. The meaning of the volume element is that we take the product of the independent differentials there — well, there's only d theta that is independent, so our volume element is d theta for a 2 by 2 orthogonal matrix. If we went to a 2 by 2 unitary matrix, we'd have four variables; if we normalize the first and second columns to have a positive real first entry, we go down to two variables, and we can do the same calculation.
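The 2 × 2 orthogonal computation just described is easy to verify numerically — a small sketch of mine checking that R^T dR is antisymmetric with d(theta) as its single independent entry, and that it is unchanged by left multiplication with a fixed rotation R0:

```python
import numpy as np

def R(theta):
    """2x2 rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def dR(theta):
    """Exact derivative of R with respect to theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[-s, -c], [c, -s]])

theta = 0.7
A = R(theta).T @ dR(theta)
# A is the antisymmetric matrix [[0, -1], [1, 0]] (times d theta):
# the only independent differential is d theta, so that is the volume element.
J = np.array([[0.0, -1.0], [1.0, 0.0]])

# Left invariance: replacing R by R0 @ R leaves R^T dR unchanged.
R0 = R(1.3)
B = (R0 @ R(theta)).T @ (R0 @ dR(theta))
```

The check that B equals A is exactly the "very simple calculation" mentioned above: the R0 transpose and R0 cancel because R0 is orthogonal.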
We can form U dagger dU and then take the product of the independent differentials. It's slightly trickier here because we do have something on the diagonal, and what's on the diagonal is repeated on the off-diagonal. In fact, what we do in practice is multiply together the independent differentials — the real and imaginary parts on the off-diagonal only — just as we did in the orthogonal case. So this is the 2 by 2 setting: a very concrete final expression here in terms of the Euler angles in both examples. To proceed for general n, one has to know how to parameterize a general orthogonal matrix, and a general unitary matrix, in terms of Euler angles. The first person to do that was actually Euler himself, in the n equals 3 case, and apparently in that paper Euler sketched how to do this for general n. Hurwitz repeated Euler's parameterization in the real case of O(n), and for U(n) he gave a generalization. He was then able to give explicit parameterizations of these volume forms for general n, and as his application he computed the volumes themselves. Remember, for our application to these asymptotic counting formulas we want to compute volumes, and here is the very first volume calculation — this is way back in 1897. Well, perhaps the next major development along this line was Weyl: in his 1939 book on the classical groups he introduced the notion of a class function, a function unchanged by conjugation by orthogonal and unitary matrices, and for that purpose he was interested in decomposing these invariant measures not in terms of Euler angles but in terms of eigenvalues and eigenvectors. In his book he showed how to do such calculations — technically they're not necessarily that difficult — and one gets this well-known decomposition of measure, which somewhat underlies what I was sketching out over there, that we have these products of differences to some power.
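One quick numerical consequence of that eigenvalue factor — my own illustration, not from the talk: for 2 × 2 Haar unitary matrices, Weyl's decomposition says the eigenvalue density is proportional to |e^{iθ1} − e^{iθ2}|², which forces E|λ1 − λ2|² = 3 (it follows from E|tr U|² = 1 for Haar measure, together with |λ1 − λ2|² + |λ1 + λ2|² = 4 on the unit circle).

```python
import numpy as np

rng = np.random.default_rng(2)

def haar_unitary(n, rng):
    """Haar-distributed U(n) matrix: QR of a complex Ginibre matrix, with
    the phases of R's diagonal absorbed into Q (Mezzadri's recipe)."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

gaps = np.empty(20_000)
for i in range(gaps.size):
    lam = np.linalg.eigvals(haar_unitary(2, rng))
    gaps[i] = abs(lam[0] - lam[1]) ** 2
mean_gap = gaps.mean()  # the |e^{i t1} - e^{i t2}|^2 density gives exactly 3
```

The sampled mean sits at 3 to within Monte Carlo error; without the diagonal-phase correction in the sampler the QR output is not Haar-distributed and the statistic drifts.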
We have a factorization into this eigenvalue part and the eigenvector part. The V here is our matrix of eigenvectors, so we see the invariant measure introduced by Hurwitz, V dagger dV, appear in this particular calculation; the V's have indeed the first entry of each column normalized to be positive. So we've actually got two lots of unitary matrices going on here: U dagger dU on the left and V dagger dV on the right. That's Weyl's calculation. In the 1940s Siegel got interested in — well, he had already been interested in the geometry of numbers — he wanted to introduce this notion of a random lattice. To introduce the notion of a random lattice, he wanted the measure on the basis vectors to be unchanged by all elements of GL(n,R); he wanted his measure to be invariant. So let's go back to — I don't even need two dimensions — can we go to the one-dimensional case and ask ourselves what measure is unchanged by the multiplicative group action? We want to replace x by alpha times x and have our measure unchanged; the answer is dx divided by x. The matrix generalization, isolated here by Siegel, is very simple: it's just dividing out by the absolute value of the determinant to the power of n — an example of which, like I just said, is n equals 1. In the study of the geometry of numbers, for these lattices that Siegel was interested in forming at random, he normalized things so that the volume of a unit cell was equal to 1. We know that the determinant of the matrix, if we're thinking of the columns as defining the vectors, gives us the volume of a cell — a parallelepiped. What one has to then do is go to a
I like to think of it as a distribution on this particular invariant measure that has a delta function when the determinant is equal to 1. Now, since the determinant is equal to 1, the factor that we saw in the denominator in that first displayed equation is no longer present, and it's replaced by this Siegel distribution. Siegel did not actually use this same formalism, but in a calculational sense it's equivalent. Siegel had some interest — and I'll mention it later in the talk — he calculated, or gave, a mean value theorem using this formalism; it'll come up later. Continuing studies of the geometry of numbers: this fellow Jack, who's well known to us from the Jack polynomials — Jack and Macdonald polynomials, et cetera — partnered with a specialist at the time in the geometry of numbers, this fellow Macbeath. Together they did a calculation, for a different purpose, which can now be plugged into that formula of Duke, Rudnick and Sarnak, because it is the actual volume where we make a truncation according to a certain norm. The norm is chosen to be measured according to the largest singular value — that's the operator norm — and they show us how to compute the volume. We know how to do the integration over the orthogonal matrices — that's Hurwitz's result; what remains is how to do the computation over the singular values, and we have to perform this particular integral. After many pages they were able to perform that integral. Their purposes were very technical: they wanted to show an equivalence of a particular limiting process under Riemann integration as opposed to Lebesgue integration — they had very technical reasons. There's a different application in mind here. But before that — if you look back at their paper, it goes on for many pages, and if I flashed over it, I didn't say it at all — we start with an n-dimensional integral, and their final result is a single-dimensional integral. So that's
the sort of thing that suits my sort of stuff — playing around with those integrations. It turns out the techniques I'm about to revise here have come back into vogue: there's activity in integrability aspects of random matrix theory for random matrix products, and one of the technical tools there is the Mellin transform. So how can we introduce the Mellin transform here in a way that allows us, in just a few lines, to compute this integral — to go from this n-dimensional integral down to a one-dimensional integral? What we can do is introduce a dummy variable t into the problem. Before, we constrained our determinant to equal one; we can now constrain our determinant to equal this free parameter t, and take a Mellin transform with respect to t. The Mellin transform undoes the delta function for us — that's fine, and we know how to take an inverse Mellin transform. So if it happens that, by undoing the delta function, we've got a multi-dimensional integral that's well structured, then we've made progress. And indeed, the integrand in this multi-dimensional integral depends on s and R, and the R dependence factorizes out. We're left with something that's very familiar and has occurred many times in random matrix theory: a particular example of the Selberg integral. The Selberg integral in turn is evaluated in terms of products of gamma functions, so we've evaluated that multi-dimensional integral. We still get left with one integral, because we have to take the inverse Mellin transform. So at the end of the calculation we get a single integral, and that's good value. The application we have in mind is large R, and the large R form of that is actually easy to compute. So we've gotten ourselves — if I have the next, yes, it's down the bottom there — an actual prediction, or corollary, or better still an application, of the knowledge of this particular volume for the invariant measure of matrices in SL(n,R).
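For n = 2 the counting prediction can be sanity-checked by brute force. The sketch below is my own, uses the Frobenius norm (the version in the original Duke–Rudnick–Sarnak paper), and only loosely approaches the asymptotic constant 6 at small R, which is why the checks are deliberately coarse.

```python
import math

def count_sl2z(R):
    """Number of [[a, b], [c, d]] in SL(2,Z) with Frobenius norm <= R."""
    R2, count = R * R, 0
    # a = 0 forces bc = -1, so (b, c) = (1, -1) or (-1, 1), with d free
    # subject to 1 + 1 + d^2 <= R^2.
    if R2 >= 2:
        dmax = math.isqrt(R2 - 2)
        count += 2 * (2 * dmax + 1)
    # a != 0: d is determined by the determinant condition ad - bc = 1.
    for a in range(-R, R + 1):
        if a == 0:
            continue
        for b in range(-R, R + 1):
            for c in range(-R, R + 1):
                num = 1 + b * c
                if num % a == 0:
                    d = num // a
                    if a * a + b * b + c * c + d * d <= R2:
                        count += 1
    return count

c15, c30 = count_sl2z(15), count_sl2z(30)
```

Doubling R roughly quadruples the count, and count/R² hovers near 6, consistent with the asymptotic count growing like 6R².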
To get a little bit better understanding of this result of Duke, Rudnick and Sarnak, one has to know a little about the geometry — let's again take the n equals 2 case. There's a very well known, famous picture up there: the geometry relating to the quotient space SL(n,R)/SL(n,Z) when n is 2 is hyperbolic geometry in the upper half plane, where each of these particular regions is identified under Möbius transformations — the mappings of SL(2,Z) themselves. Basically, each of these cells can be thought of as one of these particular matrices. So all we have to do is count up the number of cells within some region; but they're all equivalent, so we basically just work out the total area and divide by the area of a single cell. That's the way of thinking of their result, so it's a very natural result for n equals 2 — even so, it's not that easy to find an elementary derivation of it. The final expression is that the number of matrices goes like 6 times R squared — remember, R is the bound on the norm. It turns out you can look at different norms: in the original paper of Duke et al. they consider the Frobenius norm, whereas here this is the operator norm, and in n equals 2 you still get the same answer, 6 times R squared. That's the end of the first problem. The second problem — why we might be interested in exploring invariant measure following the lineage of Hurwitz — is this shortest lattice vector problem. So let's get under way with that. I've stated the problem a few times; you can think of it as a minimization problem. If we specialise to n equals 2, some sort of remarkable analogies with the diagram I put up before, relating to the upper half plane, appear. So we need an understanding of the criterion for our two vectors to be the shortest vectors that are basis vectors. The key one is this formula in the middle there: 2 times the absolute value of u dot v less than or equal to the absolute value of u squared. The other one is just saying that v is the second shortest
vector and u is the shorter one — there's an ordering on our two vectors. Considering these vectors in R2, if we align the shortest vector u along the x-axis and look at those two inequalities, those two inequalities actually give us exactly the same region as we saw for the fundamental domain of hyperbolic geometry. In fact it's not too hard to go from those inequalities — and perhaps it'll become clearer when we go to a particular coordinate system — to see that we get precisely the inequalities that define the fundamental domain: the absolute value of this z has to be bigger than 1, and the absolute value of the real part has to be — not worrying about the boundary here — less than or equal to a half. This second inequality here is precisely 2 times the absolute value of u dot v less than or equal to the absolute value of u squared, provided we parameterise u and v in a way that's going to come up on the next slide. So let's have a look at the next slide. How is our random matrix theory relevant to this problem? Well, it's relevant because we have a natural — what did I say in that previous diagram? We wanted to rotate the shortest vector to align along the x-axis. Naturally, the QR decomposition does that for us. We're taking our two basis vectors — (m11, m21) is one of them, that's the first column, then the second column — and rotating by a 2 by 2 member of SO(2); this is basically Gram–Schmidt. What we get left with is a triangular matrix, and these are the correct coordinates to use for the problem I just described. The vector we've aligned along the x-axis is (r11, 0); the second basis vector is (r12, r22), and actually r22 is 1 on r11, because we require the determinant to be equal to 1 — we've said we're normalising things so the determinant equals 1. We can generalise that: I've been, just for descriptive purposes, talking about the n equals 2 case, but this works for general n — at least this part of the calculation works for general n. What doesn't work for
general n is an easy way to describe these inequalities — we've only described the shortest lattice basis vectors for n equals 2, and it's these particular inequalities which, as I say, are strictly equivalent to the inequalities we see in the upper half plane model of hyperbolic geometry. What might one be interested in here? One might be interested in the distribution of the shortest lattice vector, because when our original basis vectors are chosen at random, our shortest lattice vector will be described by a probability density function. That calculation is relatively elementary: we're just integrating over r12 — it's the coordinate we don't observe; we observe the coordinate r11. No big deal: we have a particular domain to integrate over, and we get something we're fairly familiar with in random matrix theory — a linear repulsion. We see linear repulsion, of course, in the circular orthogonal ensemble, beta equals 1; it's very characteristic of beta equals 1. The vectors are chosen with uniform measure — that's a very important point: the original basis vectors are chosen at random with respect to the invariant measure for SL(n,R). We then have to normalise things so that the norm is less than or equal to R — that's the notion of Siegel. What does it mean to have a random lattice? It means that the measure on our basis vectors is unchanged by all group actions of SL(n,R), and that is the invariant measure. So there's a unique invariant measure; it's not normalisable, so we had to cut off, although in the case I considered, the largest singular value, that cutoff does not show itself in the final calculation — you don't require a cutoff, as it turns out. That's a point I haven't attempted to discuss here, but there is no need to have the cutoff in the final calculation; it sits in the conceptual underpinnings of the calculation. We get this linear repulsion — in a sense it's not really repulsion, because it's just the distribution of the shortest
lattice vector, but it's linear — so that's something we've seen in random matrix theory for other reasons. And what's this number, four-thirds to the power of one-quarter? That comes about from the geometry of the triangular lattice — that's the extreme case; the best packing is the triangular lattice. The distribution of the second shortest lattice vector doesn't start until s equals one. What's the significance of s equals one? That's the square lattice — that's when the shortest and the other basis vector are actually equal to each other. Now, to perhaps better illustrate the setup of this, one can do a numerical experiment. We want to choose our basis vectors by sampling from the invariant measure, where we put some bound on the operator norm of the matrices. We know how to do that: we go back to our singular value decomposition. We know how to sample uniformly at random from Haar measure — Persi gave us some details in his talk — and for n equals two one can explicitly sample from the distribution specifying the singular values; if n is not equal to two, one can do a Monte Carlo calculation. So we can easily sample uniformly at random with this particular bound. What we need to do next is use other knowledge of how one computes the shortest lattice vectors, and that's a classical algorithm due to Lagrange and Gauss, in another setting — the setting of quadratic forms. The idea behind it is very elementary: you take your original vectors, order them so that u has length less than v, and try to create this red vector — the orthogonal component of the orthogonal projection. You cannot actually create this red vector, because in our lattice we are only allowed to use integer linear combinations. So what you do is take the closest integer. This is exactly the same as minimizing the norm of v minus m times u: what value of m gives the smallest length? It's precisely this very
familiar projection quantity from the Gram–Schmidt procedure. Just repeat that algorithm — that's the way it goes. It's very closely related to the greatest common divisor algorithm, actually — you can see that, but I won't attempt it. And then you can compare the theoretical predictions against the numerics: the shortest lattice vector, the second shortest basis vector, and this third graph here, a prediction for the cosine of the angle between the two vectors. That all works very well for n equals 2. Okay, I'll now just briefly — well, one more thing. As I sort of said, n equals 2 is perhaps not very prestigious, and one can actually do calculations for general n, which I've now changed to d here. You can say a little about the general case because one has this tool, Siegel's mean value theorem, in the geometry of numbers. He says that if one wants to average a function over all lattice points, provided the lattices themselves are chosen uniformly at random, then it's just the same thing as integrating the function. Now, if one takes the function to be the indicator function of a ball of radius r, you can make some predictions relating to the shortest lattice vector. So I run through a little bit of a calculation here; at the end of the day it gives a prediction for what the small-s behaviour of the shortest lattice vector distribution should be — a very simple calculation — and it gives a prediction that this particular coefficient relates to the Riemann zeta function, because of this interesting calculation: how do you integrate, from nought to one, where you've got one on s and you're taking the integer part? The one on s comes about as a count: s here is to be thought of as the length of the shortest lattice vector, and r divided by s is the number of multiples of that shortest lattice vector you can have up to the value r; we then multiply that by the distribution of the shortest lattice vector. So that's the content. The constant c is unknown here, but
since we know the value of the left-hand side — that's just the volume of the unit ball — we can work out what c has to be. And one can superimpose that prediction for the small-s behaviour over an exact numerical calculation: numerically exact sampling and then lattice reduction. That's an interesting point in itself — lattice reduction is a non-trivial task. For d equals 2, 3 and 4 it can be done exactly in polynomial time; beyond that one only has available the approximate, what's it called, LLL algorithm — a very famous algorithm, but approximate in the sense that it can be exponentially far away from the shortest lattice vector as the dimension increases. But up to four dimensions there is a polynomial-time natural extension of the Lagrange–Gauss algorithm. If one does superimpose that prediction — here involving the Riemann zeta function evaluated at three — one finds very good agreement up to around about 0.3 or so, even though this is supposed to be just the very first term in an expansion; it's still exhibited in the numerical data, and the prediction is that that's the case for general d. In the limit d goes to infinity this problem in some sense becomes less interesting, because you get a Poisson process for all the different lengths of the basis vectors — there is some sort of decoupling. Okay, my final point, which I won't go over in too great detail, but it's of interest from the random matrix perspective because it's now making use of a different decomposition. We started off with the Euler decomposition of the orthogonal matrices, we then made use of singular value decompositions, and we made use of the QR decomposition; in this final subtopic the relevant decomposition is actually the polar decomposition. The polar decomposition is closely related to the singular value decomposition, and I have a little bit of discussion there. What we want to do is decompose our general rectangular matrix in terms of — what's
our Q? It's going to be a matrix with orthonormal columns, U times V transpose — is that correct there? I've got to get this right: U is N by n and V itself n by n, so U V transpose is N by n, I think that's correct — multiplied by a symmetric matrix. So that's the difference; it's a little manipulation of the singular value decomposition. We know how to change variables, we know what the Jacobian is for this particular decomposition.

Where is this particularly heading? Why are we interested in this subtle difference from what we saw before with the QR decomposition? We're interested in this because we have a different problem in mind — the problem from integral geometry of giving some statistical information about the distribution of the volume of a simplex. So it's this subtle, subtle difference.

OK, so let me continue on. I got interested in this problem, as I say, because I wanted to know how one could actually compute that formula of Kingman, and I was reading a lot of the literature — excuse me, the slides are advancing on their own; where are we? Yes, we're up to that slide, the next slide here — and I came across this particular paper in one of the local journals, the Bulletin of the Australian Mathematical Society. What this paper did was relate the polar decomposition, which we might come across in random matrix theory, to a key decomposition of measure in integral geometry. So I'd just like to lead towards that particular decomposition of measure — the one that Kingman at least implicitly rediscovered and made use of in his calculation.

From the polar decomposition we can write for ourselves an integration formula. Now — the point I was stumbling over before — the space of matrices used here is actually a Stiefel manifold, so it's not actually unitary matrices: we have N by n, so each of the individual columns
are mutually orthogonal, but we don't have as many columns as there are rows. So that's the Stiefel manifold — it's subtly different. And we have our positive definite matrices: the second part of the polar decomposition is a symmetric matrix — a positive definite matrix, I should say; I didn't emphasise that on the previous slide — all eigenvalues are positive. So that's our starting point.

One of Mogadish's observations is that if you use this formula twice — and this is a very subtle point, sort of a magician's trick — you get a formula that is of interest in random matrix theory, because it goes between a distribution on rectangular Ginibre matrices, small n by big N, and a distribution on n by n matrices. In this interest of having random matrix products, that has been quite useful. So it's a very subtle change, but if you examine what that formula is saying — an integration formula over rectangular matrices, an integration formula over square matrices — it's saying how they're related: they're related by this Jacobian factor. So that was already an interesting point that one can read into the workings of that paper. These sigmas here are just notation for the surface areas that come in when integrating over the Stiefel manifold.

The ingredient that goes beyond what one would do entirely from a random matrix perspective, perhaps, is this next step here, where one introduces the Grassmannian. The concept here is the space of subspaces, where each subspace is to be thought of as coming with a preferred basis. It's a conceptual challenge of sorts to get your head around, but if we do get our head around that particular way of thinking, we can introduce an invariant measure on this Grassmannian. One can, with minor manipulation of the previous
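To make the manipulation of the singular value decomposition concrete, here is a small sketch (my own notation and dimensions, supplied as an illustration): from the thin SVD A = UΣVᵀ of a tall N × n real matrix, Q = UVᵀ has orthonormal columns — a point of the Stiefel manifold — and P = VΣVᵀ is symmetric positive semi-definite, with A = QP.

```python
import numpy as np

def polar(A):
    """Polar decomposition A = Q P of a tall rectangular matrix, obtained by
    rearranging the thin SVD:  A = U S V^T = (U V^T)(V S V^T)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Q = U @ Vt                     # N x n, orthonormal columns: Q^T Q = I_n
    P = Vt.T @ (s[:, None] * Vt)   # n x n, symmetric, eigenvalues s >= 0
    return Q, P

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))    # a tall real Gaussian (Ginibre-like) matrix
Q, P = polar(A)
```

One can check numerically that Q P reproduces A, that QᵀQ is the identity, and that P is symmetric with positive eigenvalues.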
formula, deduce this decomposition of measure. This decomposition of measure is close to what is used, implicitly at least, in Kingman's work. So it's not really too different from what we've seen before; we've just written it in some different notation. We've certainly got the determinant of the matrix — I'm thinking here of all the columns as defining vectors; that's how the people in integral geometry think — we have a factor that's our determinant, and we have this new invariant measure relating to the Grassmannian, whereas before we had an invariant measure either on the Stiefel manifold or on the orthogonal matrices. So it's not too different.

There's yet one other little step that one has to do. The formula is actually due to these two authors, Blaschke and Petkantschin, and there's an affine version, and one wants to make use of this affine version. There are a few subtleties, but it's very similar: we've got this preferred basis, and there is an extra coordinate that appears — the coordinate that I've denoted by R — in a direction orthogonal to the column space of these preferred coordinates B. And that's really, if one reads Kingman's paper, the essential idea that he introduces, this orthogonal direction. If you try to do that integration I mentioned at the very beginning — what is the average area of a triangle in a disk? — I don't think it can be done elementarily without this insight that Kingman was able to bring to the problem.

How can we see that this is useful for computing volumes of simplices? Well, in our determinant we now have not the determinant of the original matrix that has columns V; we have this combination, a fixed vector subtracted off each of the others, and that's exactly the formula for the volume of a triangle, for example. What's one of the formulas for the area of a triangle? If we have our three points, denoted V1, V2, V3, in the plane, then the formula for the area relates to the determinant with, say, V3 minus V1 as a column and V2 minus V1 as a
column, and the absolute value of that determinant. So this is exactly what we want, and that's exactly what this particular decomposition of measure that appeared in the classical literature — the one that was rediscovered, at least implicitly, by Kingman — shows us. The interesting development that I try to emphasise is that this really can be viewed from a random matrix perspective, via the polar decomposition.

All right, that's about all I want to say. There are quite a few ongoing aspects of this. One ongoing aspect, in random matrix theory: I have in this talk really been talking about the beta equals one case, the real entries, but we could ask about the invariant measure on SL(n,C), or we could ask about the invariant measure on SL(n,H), where H is the quaternions. Once we've got that, we're starting to be able, perhaps, to answer some asymptotic counting questions of the sort posed by Duke et al. We need a quotient space: if we take SL(n,C), the natural quotient is perhaps by the Gaussian integers, so we're now going to do lattice reduction where our integers are Gaussian integers. Well, actually, do we have to work only with Gaussian integers? In the two-dimensional case we could perhaps work with other families of integers — the Eisenstein integers, for example. It turns out, it would seem from the calculations done to date, that any generalised integers for which the Euclidean algorithm works — where we can actually do division and get a smaller remainder — will do; that seems to be very much the mechanism.

We did calculations that relate to the distribution in the two-dimensional case; can we do those calculations, as I say, for SL(2,C)? Well, yes, we can — I have a student working on that — and we can compute, for example... what's interesting is that even computing the volume of the fundamental domain is fairly challenging there; it involves the Catalan constant, for example, as
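The triangle formula just quoted, in code form (a trivial but illustrative sketch of my own): the area is half the absolute value of the determinant built from the difference vectors, which is exactly the combination of columns appearing in the affine decomposition of measure.

```python
import numpy as np

def triangle_area(v1, v2, v3):
    """Area of the triangle with vertices v1, v2, v3 in the plane:
    half |det| of the matrix with columns v2 - v1 and v3 - v1."""
    M = np.column_stack((np.subtract(v2, v1), np.subtract(v3, v1)))
    return 0.5 * abs(np.linalg.det(M))

triangle_area((0, 0), (1, 0), (0, 1))   # the unit right triangle, area 1/2
```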
well as the Riemann zeta function. If one moves on to the quaternions it's a bit more difficult, because the Lipschitz integers do not form a Euclidean domain; you have to go to something called the Hurwitz integers, which makes it a little bit more technical.

What I haven't said, which is also very interesting about this random lattices problem, is that it is all very closely related to continued fractions. Firstly, the problem I've been talking about in this talk, the two-dimensional real case, is closely related to continued fractions. If one looks at SL(2,C) — a problem on which a fair bit of progress has already been made — the continued fractions are in the complex plane; and if you actually go up to the quaternions, the continued fractions are now themselves quaternionic. And there's quite a bit of geometry relating to all of this that I believe is a fairly rich topic. It's also quite fascinating that the line I've been taking was begun by Hurwitz in the 1890s. Hurwitz is the person who proved that the only normed real division algebras, if we require them to be also associative, are the real numbers, the complex numbers and the quaternions — so that's something, Dyson's threefold way, that's very important to us in random matrix theory. One of Hurwitz's other interests, even before his main result in the 1897 paper, was complex continued fractions. So there are three interests of Hurwitz that, well over 100 years later, are quite prominent in following this particular line of random matrix theory. Any questions?

Yes, that's very much it — exactly right, Percy. What Miles noticed was that underpinning Kingman's calculation was this particular decomposition of measure, due to Blaschke and Petkantschin, and then he was able to extend the calculation by computing all of the moments. That's exactly right. Well then — yes, once you know all the moments you can try to compute the distribution; in some cases that was actually done, and it involves — it's a little bit like the
problem in random matrix theory of asking, for finite N, what is the distribution of a determinant — it's very much in that class of problems. So, the determinant case: if we took our Gaussian problem, the GUE problem, an interpretation of what we're doing there is computing the volume of a parallelepiped that's pinned at the origin. In what I showed that Kingman did, he didn't have his simplex pinned at all — it was moving around. That's an interpretation of the average, or moment, of the determinant. Then you can ask, for finite N, what is the distribution, from knowledge of all the moments, and you can write that in terms of the Meijer G-function. And that's how things all come around, because the Meijer G-function is very prominent in another line of random matrix theory of present interest, the integrable properties of products of random matrices. So perhaps that was another reason why I was looking in this direction — I was interested in that sort of side of things.
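A tiny illustration of "moments of a determinant as average volumes" (my own example, not from the talk): for a 2 × 2 matrix of iid N(0,1) entries, the determinant ad − bc has characteristic function 1/(1 + t²), i.e. the standard Laplace law, so the mean area of the parallelogram spanned by the rows is E|det| = 1. A quick Monte Carlo check:

```python
import numpy as np

# det(G) for G a 2x2 matrix of iid N(0,1) entries is standard-Laplace
# distributed, so E|det G| = 1: the mean area of the parallelogram
# spanned by the two rows of G.
rng = np.random.default_rng(1)
G = rng.standard_normal((10**6, 2, 2))      # a million 2x2 Gaussian matrices
mean_area = np.abs(np.linalg.det(G)).mean() # batched determinants
print(mean_area)                            # close to 1
```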