Thank you for the introduction, and thanks to the organizers for the opportunity to speak here. As the title says, I'll talk about three rather distinct objects which somehow turned out to converge in rather concrete ways, and this generated, I think, some interesting new objects. The thing spinning on the slide is a limit set, which in some sense encodes some Hodge theory; I'll try to explain how these things arise. But before I talk about all these complicated objects, let me begin with one version of the main theorem, which is very short and has the advantage that it should be clear to a calculus student: you can state it in three lines. So here is the first half of the first line. You consider this power series; it has some radius of convergence. You might recognize psi_0 from a variety of classical topics; in the 19th century it was probably more recognizable than in the 21st. Then you consider psi_1, another function of a similar flavor, except that it has a logarithmic term, this log t. You should think of t as a complex variable, so log t in particular has monodromy: as you go around 0, it picks up a 2 pi i. So you have two functions, and from them you can form the Wronskian, which is basically the determinant of the matrix whose columns are the functions and their derivatives; that's an explicit expression. Then you consider yet another function, lambda of q, where q is a new variable of absolute value less than 1. The theorem is that the composition of these functions never vanishes: it is never 0. So that's a kind of mysterious-looking statement about power series, meant to convince you that at least some of these functions are not completely random.
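To make the first ingredient concrete: here is a minimal numerical sketch of my own, with the caveat that the coefficients (5n)!/(n!)^5 are my assumption for the mirror-quintic normalization of psi_0, which the talk does not write out. The ratio test then exhibits the finite radius of convergence.

```python
from math import factorial

# Assumption (mine): the mirror-quintic normalization of psi_0 has
# coefficients a_n = (5n)! / (n!)^5, so psi_0(t) = sum a_n t^n.
def a(n):
    return factorial(5 * n) // factorial(n) ** 5

# Ratio test: a_n / a_{n+1} tends to the radius of convergence, 1/5^5 = 1/3125.
ratios = [a(n) / a(n + 1) for n in range(1, 101)]
print(ratios[-1])  # close to 1/3125
```

So the series converges on a small disc around t = 0, and everything on the first slide happens inside that disc.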
So I can relatively easily explain what the function lambda is. It's a function that I think everybody here knows exists, but you might not have seen it written like this. Lambda goes from the punctured unit disk to the Riemann sphere minus 0, 1, and infinity, and it's just the covering map, the uniformization. The fundamental group of the thrice-punctured sphere is the free group on two letters; the punctured disk is a covering space, and there's a holomorphic covering map: that's lambda. You can write it in terms of classical functions. Psi_0 and psi_1 are again, in some sense, classical functions, in the sense that they're hypergeometric functions, but these particular ones came up more recently in relation to something called the mirror quintic. I'll try to explain what that is, but it will take some time to explain what all these objects mean and what this has to do with Hodge theory and Lyapunov exponents. Essentially, all the difficulty in the main theorem is contained in the statement that this function never vanishes. However, I don't know of any proof that works directly with these functions and proves non-vanishing just by looking at them. Rather, this non-vanishing statement is what proves a conjecture of Eskin, Kontsevich, Möller, and Zorich about a formula relating Lyapunov exponents, which are invariants from dynamics, to some topological invariants mu_1 and mu_2, which I'll define: they're degrees of some vector bundles, so something from topology. So I'll first try to explain what Lyapunov exponents are and how they relate to Hodge theory. This particular formula concerns one very particular local system over the sphere minus 0, 1, and infinity.
So I'll explain in a moment what the Lyapunov exponents are. This formula, the statement in the top half, is just one instance of when the formula holds; there are at least six more instances, and I'll try to explain what these instances are, the context in which this is true, and the context in which it's false. I think it will be interesting to figure out where the border is and why it happens there. But before I do that, let me explain what Lyapunov exponents are. They're a concept that goes back a long time, in some sense to the beginning of calculus. Suppose you have an ordinary differential equation: a vector that depends on time, whose evolution is given by a matrix that also depends on time, so the derivative of the vector is the matrix applied to the vector. That's just a regular linear ODE, and you're interested in what happens for a long time: if t is very large, what is the behavior of the solutions? What you typically expect (it doesn't always have to be the case, but often it is) is that the vector v(t) grows exponentially at some rate lambda_1: there's a number lambda_1 such that the typical solution grows like e^{lambda_1 t}. That's roughly the size after time t, and by roughly I really mean up to sub-exponential terms: there could be polynomial factors in front, or even something much larger, say a factor like e^{sqrt(t)}, which is still negligible with respect to this kind of asymptotics. Okay, so that's the first Lyapunov exponent; next I'll explain what the other Lyapunov exponents are.
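As an aside from me, not the speaker's computation: the definition can be tested numerically on a toy cocycle, sampling one of two fixed matrices at each time step and tracking the exponential growth rate of the vector. The two matrices below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration: at each step apply one of two fixed matrices at random,
# and estimate lambda_1 as the time-averaged log-stretch of the vector.
A = np.array([[2.0, 1.0], [1.0, 1.0]])  # arbitrary hyperbolic matrix
B = np.array([[1.0, 0.0], [1.0, 1.0]])  # arbitrary unipotent matrix

v = np.array([1.0, 0.0])
log_growth = 0.0
T = 10_000
for _ in range(T):
    v = (A if rng.random() < 0.5 else B) @ v
    n = np.linalg.norm(v)
    log_growth += np.log(n)
    v /= n  # renormalize so the vector never overflows

lambda_1 = log_growth / T
print(lambda_1)  # strictly positive for this pair of matrices
```

This is exactly the "you kind of have to run it" point made later: the number is easy to estimate by simulation but hard to read off from the matrices.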
But first, the typical situation we'll be in is this: you have some finite-volume hyperbolic surface, and g_t is going to be the geodesic flow. We're going to sample the matrix in the ODE using the geodesic flow: there's essentially a function on the whole Riemann surface, and the geodesic flow moves around the surface and picks up the matrix along the way. So we're going to solve this ODE along the geodesic flow on a finite-volume hyperbolic surface. Now, how do we sample the matrix? You start from a representation of the fundamental group of the surface into some matrix group. From this you can create a local system, which just means the following; let me draw a picture. Suppose this is your surface, maybe with some cusps, and you fix a base point, with your vector space sitting above it. Now you move along a hyperbolic geodesic and come back not far from where you started. Locally along the geodesic you identify this R^n with that R^n by parallel transport, but as you go around a loop, you apply the matrix given by the representation. So you can think of this V_rho as a vector bundle with a flat connection, or as a local system, or basically as the data of a representation of the fundamental group into some GL_n(R). This is how we're going to sample the matrices. What we're interested in is what happens when you take one random geodesic, flow along it for a very long time, and ask how the typical solution grows. So, as I said, you'll have a growth rate like this if you move a vector around.
And if you take exterior powers of the representation, you get a sequence of Lyapunov exponents determined by this formula: the growth rate for the second exterior power is given by some expression, which determines what lambda_2 is, since if you know lambda_1 and you know lambda_1 + lambda_2, you can compute lambda_2. So you get a sequence. Maybe it's a little easier to say in terms of the ODE: you have not just a vector solution but a fundamental matrix of solutions, and what you look at are essentially the eigenvalues; but not really the eigenvalues, rather what are called the singular values. That's a concept from linear algebra. For Lyapunov exponents what matters is the singular values, not the eigenvalues; for typical matrices these are comparable, and certainly in the regime we're looking at, where we take logs and divide by the time, the difference almost disappears, though not entirely. If you don't know what singular values are, just think eigenvalues. [Question.] No, v is in... so the base is two-dimensional, but the vector space could be 100-dimensional. I pick a vector v in this fiber, in some big R^n, and I move it around on the two-dimensional surface; when it comes back, I apply the monodromy matrix. So this is essentially what Lyapunov exponents are, and they're much studied in dynamics. [Question: what about the matrix A?] So the way I presented it, the matrix A seems to have disappeared, because the connection is flat and it looks like the vector is not moving.
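To illustrate the role of singular values (again a toy sketch of my own, not from the talk): for a product of random matrices, the logs of the singular values of the fundamental matrix, accumulated stably via QR renormalization, recover the whole Lyapunov spectrum. The rotation-times-stretch cocycle below is a hypothetical example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sketch: the full Lyapunov spectrum from singular values of the
# fundamental matrix.  QR renormalization keeps the product stable; the
# accumulated logs of |diag(R)| converge to (lambda_1, lambda_2).
def sample():
    # hypothetical cocycle: a random rotation followed by a fixed stretch
    theta = rng.uniform(0.0, 2.0 * np.pi)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return rot @ np.diag([2.0, 0.5])

Q = np.eye(2)
sums = np.zeros(2)
T = 5_000
for _ in range(T):
    Q, R = np.linalg.qr(sample() @ Q)
    sums += np.log(np.abs(np.diag(R)))

lam = sums / T
print(lam)  # lambda_1 > 0, and lambda_1 + lambda_2 = log|det| = 0
```

Since each factor has determinant one, the exponents must sum to zero, which the run confirms: the symmetric spectrum {lambda, -lambda} appears here for the same determinant reason as in the symplectic story below.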
But yes, there's an ODE, there's a matrix A. [Question.] No, these are not the eigenvalues of A; they really measure the long-term behavior of your differential equation, which can typically be very hard to read off. Figuring out what these numbers are can be tricky, because your ODE can expand in one direction for a while and then in another direction for another while, so you won't really know the rate of expansion just by looking at the ODE; you kind of have to run it to understand what's going on. [Question.] So I'm giving you an example of where this setup applies. The setup is that we need to sample a matrix and we need a flow, something to evolve in time: we're going to evolve in time using the hyperbolic geodesic flow, and the matrix is going to be given essentially by this local system. Does that make sense? And this really is a vector bundle: if I start with a vector here and a path given by the geodesic, I can move the vector along the geodesic, by parallel transport. So we have a representation rho, and the representation is absolutely unrelated to the hyperbolic structure: you fix some representation of the fundamental group, and that gives you a vector bundle. The representation and the geodesic flow are two completely different pieces of data. [Question.] Yes, it's by parallel transport; it's unique, coming from the flat bundle structure. Okay, so far this is all purely dynamical systems. Now, to connect it to complex geometry and algebraic geometry: suppose you have an algebraic variety. As you know, or as you might know, its cohomology, which is a purely topological object, has a (p,q)-decomposition into the different Hodge subspaces.
Now, if you have a family of algebraic varieties, you get exactly this kind of input. Suppose you have a family over the Riemann surface, coming from somewhere. Then in particular you get a representation of the fundamental group of the surface, acting on the cohomology of the fibers: instead of a fixed vector space, you have a variety over each point; you linearize it by taking cohomology, and you parallel transport the cohomology spaces instead of the varieties. So this rho is a purely topological piece of data, but the Hodge subspaces are holomorphic subbundles that really interact with this topological information and put all sorts of constraints on the representation. Okay, so how does this interact with the Lyapunov exponents? The first case that came up historically was when Zorich did some numerical experiments; Kontsevich connected this to Hodge theory, and then Giovanni Forni developed it a lot further and in particular rigorously proved some formulas that I'll show you in a moment. This is the weight-one situation, when you have a family of Riemann surfaces over, for example, some fixed base. The middle cohomology is 2g-dimensional: a Riemann surface of genus g has H^1 of dimension 2g, with a g-dimensional space of holomorphic (1,0)-forms and the complex-conjugate subspace of (0,1)-forms. In this case the Lyapunov exponents are in general some random-looking numbers, and there are 2g of them, but they have an extra symmetry, because the cohomology carries a symplectic intersection form, and a symplectic matrix has a symmetry in its eigenvalues: if x is an eigenvalue of a symplectic matrix, then 1/x is also an eigenvalue.
So once you take logs: if lambda is a Lyapunov exponent, so is minus lambda, so you really have g numbers instead of 2g. In general they're completely random-looking, but what Zorich discovered, first experimentally, was that if you take their sum, you actually get a rational number. Then there was a formula for this rational number, and it turned out to be essentially the degree of the Hodge bundle divided by the Euler characteristic of the base. [Question: the degree of what?] Of H^{1,0}: you take the top exterior power, which is a line bundle, and take its degree. For simplicity I'm always over a one-dimensional base, so the degree is unambiguous; in general one has to be a little more careful about the degree. So this is the simplest case, weight-one Hodge structures. If you have weight two, you now have three subspaces, (2,0), (1,1), and (0,2), and basically the short story is that there is a similar kind of formula, but now only for the first d exponents, out of 2d nonzero exponents in total; again it's a rational number. When d is one I did this, and then for all d Matteo Costantini and I observed that a similar kind of formula can be proved. Then the interesting part is when you have a weight-three variation of Hodge structure. I'll restrict to the simplest example, where you have four subspaces, H^{3,0}, H^{2,1}, H^{1,2}, and H^{0,3}, with dimensions one, one, one, and one. So that's a four-dimensional space. There are two Lyapunov exponents.
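In one formula, the weight-one statement just described (in the talk's normalization, over a one-dimensional base, and with the caveat about degrees that the speaker mentions) reads:

```latex
\lambda_1 + \lambda_2 + \cdots + \lambda_g
\;=\; \frac{\deg\!\left(\Lambda^{g} H^{1,0}\right)}{\chi} \;\in\; \mathbb{Q},
```

where the full spectrum is $\{\pm\lambda_1,\dots,\pm\lambda_g\}$ by the symplectic symmetry, and $\chi$ denotes the Euler characteristic of the base.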
I'll take it back: there are four Lyapunov exponents, but because of the symmetry they come as lambda_1, lambda_2, minus lambda_1, minus lambda_2. So what Eskin, Kontsevich, Möller, and Zorich proved is that in general, a priori, you only have an inequality: lambda_1 plus lambda_2 is greater than or equal to this expression in the Hodge bundles. This is essentially all you can get by using the older methods. Of course their result is much more general, but it's a general principle that Lyapunov exponents can be bounded from below by topological information. In this particular weight-three case the question arose of when one actually has equality, and this is a question not just in weight three. [Question.] Not really, but you can think of it as: mu_1 is deg H^{3,0} over chi, and so on; it's those terms, appropriately normalized. So then the following thing happened. Maxim Kontsevich ran some numerical experiments on something very concrete: there are these 14 families of representations of the fundamental group of the Riemann sphere minus three points into the symplectic group, 14 explicit representations, each given by just specifying two matrices. So you can plug them into a computer, try to compute the Lyapunov exponents numerically, and see if they come out to what the formula predicts. What he found was that in seven cases the formula holds, and in seven other cases it's a strict inequality. This was a little bit puzzling, and at some point there was a conjecture. If you look at this blue line: these 14 families really sit inside a two-parameter family that depends on two real numbers mu_1 and mu_2.
So those numbers mu_1 and mu_2 really are the coordinate axes on this picture, and for every mu_1 and mu_2 you have a representation, though not into integer matrices but into real matrices. For specific values of mu_1 and mu_2 you'll actually land in, or rather be conjugate into, something integral. In general your representation is just a real representation, but it still underlies some variation of Hodge structure, and you can run the same kind of computations and ask: is this formula true or not? At first it was conjectured that the formula holds anywhere above that blue line, but then Charles Fougeron performed much more accurate experiments. Roughly, the blue dots are where the formula is expected to hold, and the red dots are where the inequality is supposed to be strict. You might notice that around the blue line there are some blue dots, but he then ran much more accurate experiments just on that specific line, and it seems that the formula is true only at a countable set of points along it. If you look at the defect from the formula, it looks something like this: there are points where the formula seems to hold, accumulating toward the upper right corner, but there's no open set on this line where the formula is expected to be true; on the upper part it's not true. So there are these seven cases which are good and seven cases which are bad, and parallel to this whole story, people were looking at what are called thin groups: if you have a representation into an integral matrix group, you can ask whether its image has finite or infinite index inside the full integer group.
So what turned out to be the case is that the seven good cases are exactly the ones where the monodromy group has infinite index in the integer symplectic group. In the seven other cases the representation has finite index; that's called arithmetic monodromy, meaning that up to finite index you're really almost surjecting onto the integer symplectic matrices, and there you have strict inequality. This was proved independently by Jérémy Daniel and Bertrand Deroin, and I have a proof by slightly different methods. What I'd like to talk about for the rest of this talk is the equality case: what is the geometry that really forces this equality, and how does it bring together a bunch of different concepts? Before I try to explain what goes into the proof, and maybe to connect with the mini-courses, I want to explain what one really has to show. So say this is the uniformization of your Riemann surface: the unit disc upstairs, your finite-volume surface, possibly with some cusps, downstairs. To consider a typical geodesic on this surface is basically the same as to consider a big ball of hyperbolic radius R in the uniformization, and you're interested in the growth on its boundary. The place where holomorphic geometry comes in is this: when you try to compute the norm of a vector on such a boundary, or rather the average over the boundary not of the norm itself but of the log of the norm, you can use integration by parts to compare it with the integral of dd-bar of log of the norm. So using a little integration by parts, you can reduce it to understanding the behavior of some subharmonic function on larger and larger discs.
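In its simplest incarnation, this integration-by-parts step is Jensen's formula for the subharmonic function $\log\|v\|$ on a disc of radius $R$ (my paraphrase of the speaker's description):

```latex
\frac{1}{2\pi}\int_0^{2\pi} \log\left\|v\!\left(Re^{i\theta}\right)\right\| d\theta
\;=\; \log\|v(0)\|
\;+\; \frac{1}{2\pi}\int_{|z|<R} \log\frac{R}{|z|}\;\Delta\!\left(\log\|v\|\right) dA(z),
```

so the circle average is controlled by the mass of $\Delta \log\|v\|$ (equivalently, the $dd^c$ term): curvature of a bundle contributes a degree, while logarithmic poles at non-transversality points contribute extra point mass.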
So in general the subharmonic function will give you the degree of some vector bundle: its dd-bar will be the curvature of some vector bundle, if it's a line bundle. But a subharmonic function can also have logarithmic poles, and these logarithmic poles come up when certain subspaces are not transverse: when you have some non-transversality and you run this machinery, you pick up logarithmic poles. Essentially what you want to exclude is this non-transversality happening very often; in fact, the way the problem comes out, it either happens all the time or it doesn't happen at all. So there's some transversality one needs to check, and somehow in all the previous cases the transversality followed by pure linear algebra: there were some complex subspaces and some real subspaces which could just never intersect, by some reality condition. But this weight-three case is the first case where there's no a priori reason why this shouldn't happen, and there are exactly seven cases where it fails and seven cases where it holds. From now on I'll just try to explain what we're trying to avoid and why it doesn't happen, but before that I need to explain what hypergeometric equations are.
So this is the oldest one, the Gauss hypergeometric equation. It's a degree-two differential equation for a function of the variable z, and there it is. It has an explicit solution, and what's nice is that you can understand the solution in several different ways. One is as an explicit power series; I wrote the formula over there. A, B, and C are parameters, and (A)_n means A(A+1)...(A+n-1), so it's something like a factorial, but not starting at one. You also have an integral expression, like the one I wrote down there, which you can look up in a book; I don't think it often explains what's going on. And there's a more geometric understanding using the Schwarz reflection principle. What Schwarz showed, essentially, is this: it's a degree-two equation, so there are two solutions, and you can think of these two solutions as giving you a map to P^1, the Riemann sphere. If you think about where 0, 1, and infinity are mapped, they go to some points in the limit, and the real axis is mapped into the edges of a triangle; these edges are arcs of circles on the Riemann sphere, and the angles they form can be computed in terms of A, B, and C. In fact, you can analytically continue the solution using Schwarz reflection, so you have a geometric picture of what this differential equation looks like. Okay, so the general case: you can consider these hypergeometric equations for all n. It's an equation that looks like this, and it depends not just on three parameters A, B, and C but on a family of 2n parameters, alpha_i and beta_j. And for the particular case that we're
interested in, where these seven families occur, the parameters beta are all zero. What does that mean? I didn't talk about monodromy yet. This equation is really defined, as I said, on the Riemann sphere minus three points: 0, 1, and infinity. So what is the monodromy of a differential equation? In each patch away from these singular points there's always a basis of solutions; if you analytically continue these solutions around a loop, you come back to a different basis of solutions, and this transition matrix is what gives you the representation, the monodromy. For example, you can understand the Schwarz reflection picture that I had earlier as a composition of these monodromy matrices as you travel among the fundamental domains. The condition that all the beta parameters are zero means that the monodromy around zero is maximally unipotent: it's a unipotent matrix that can be conjugated to the one with ones just above the diagonal. A four-by-four matrix can be unipotent in many different ways, but maximally unipotent means it's a full Jordan block. Okay. Then the alpha parameters have to satisfy an extra symmetry: they have to be symmetric with respect to the reflection alpha goes to one minus alpha. So you really have only two effective parameters, mu_1 and mu_2, which I plotted earlier. The particular case we're interested in is what's called the mirror quintic example. This arose first in string theory, and people use it for enumerative geometry, but it's the following concrete situation: you have a family of projective threefolds, complex three-dimensional manifolds in P^4, cut out by this equation, where the x_i are the variables and t is the parameter. They have a high
degree of cohomology in the middle: I think H^3 is 204-dimensional, or 202, I forget, something like that; it's 200-something. But there's a lot of symmetry, and effectively, once you account for all the symmetry, there's only a four-dimensional subspace that's really interesting, and this four-dimensional subspace corresponds exactly to this hypergeometric equation for the values mu_1 = 1/5 and mu_2 = 2/5. So this is a very concrete object, and the functions that I had on the first slide, psi_0 and psi_1, are essentially the integrals of a holomorphic 3-form, this omega^{3,0}. There's a way to write down an explicit rational holomorphic 3-form, and there are two cycles, gamma_0 and gamma_1, which are defined exactly near this singular point: above each point around here there's a complex three-dimensional manifold, and there are homology classes gamma_0 and gamma_1 against which you integrate the 3-form. You cannot evaluate that integral explicitly, but you know that it will satisfy a differential equation of this hypergeometric type. So again: there are these abstract theories of variations of Hodge structure, but you can make them very explicit and get actual differential equations, and in this particular case they're hypergeometric equations. Now, why are these groups of infinite index in the symplectic group? This was proved by Brav and Thomas, and what they did was the following; I'll use the mirror quintic as a running example. You have two matrices, T and R, where R is a finite-order matrix, and what Brav and Thomas basically showed is that these two matrices play ping-pong: they found two T-invariant cones, C_+ and C_-. I'll just show you the picture and then explain what the cones are. So T in this picture: T is a real four-by-
four matrix; it acts on projective three-space, on P^3, and having these cones in R^4 is essentially the same as having a polyhedron in RP^3. So this is a chart in RP^3, which I took to be just R^3, with some particular plane at infinity. Then you have these two cones, C_+ and C_-, and T acts by translation, shifting to the left, while T inverse shifts to the right, and these cones are taken into themselves: if you apply T, you shift the left cone into itself; if you apply T inverse, you shift the right cone into itself. If you apply R to these two cones, and you can apply R up to four times, you get eight cones, which form this little polyhedral chain connecting the edges of the two other polyhedra. Now if you apply T, it takes these eight red polyhedra into the blue cone, and T inverse takes them back. What this basically tells you is that, group-theoretically, gamma is isomorphic to the free product of Z and Z/5, because you've exhibited this explicitly. In geometric group theory this is called a ping-pong argument: you've exhibited fundamental domains on which the group acts in a very understandable way. That's what Brav and Thomas did. There's a little amusing extra piece one can do here; I don't know whether this was known in the literature, but it turns out that these matrices also have an extra reflection structure. This is a general feature of hypergeometric equations once you impose certain conditions. In fact, you have these matrices A, B, and C, each of order two, so A squared is one, B squared is one, and C squared is one, and the monodromy matrices factor as products of these reflections. And in fact, what's
interesting is that these matrices A, B, and C are not symplectic matrices; they're really reflections in a family of Lagrangians. What you have is Z^4, or Q^4, with two transverse Lagrangian subspaces, and you get the transformation by acting by plus one on one Lagrangian and by minus one on the other. This is not going to be a symplectic transformation, but it's almost one, in the sense that the symplectic pairing of any two vectors changes sign under such a transformation. So these sit in a group which contains the symplectic group with index two. [Question: is this a feature of the free group?] Not of the free group, rather of the hypergeometric parameters; it's not always going to be true. I had these alphas and betas, and you have a similar factorization for alphas and betas satisfying certain symmetries. Here I happen to have integral matrices, but there's a real version of this picture. [Question.] No, it's a structure of the hypergeometric equation, not of the symplectic group; it's really related to the Schwarz reflection principle. Okay, so maybe I'll skip this. Okay, so now I'm getting a little closer to what I had on the first slide: we have these limit sets.
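The sign-flip claim can be checked with generic matrices. This is my own sketch, with a hypothetical symmetric matrix S parametrizing the second Lagrangian; these are not the actual monodromy matrices. The map acting as +1 on one Lagrangian and -1 on a transverse one is an involution C with C^T Omega C = -Omega.

```python
import numpy as np

rng = np.random.default_rng(5)

# In R^4 with symplectic form Omega, take two transverse Lagrangians:
# L1 = span(e1, e2) and L2 = the graph of a symmetric 2x2 matrix S
# (symmetry of S is exactly what makes the graph Lagrangian).
# Let C act as +1 on L1 and -1 on L2; in block form C = [[I, -2S], [0, -I]].
I2 = np.eye(2)
Omega = np.block([[np.zeros((2, 2)), I2], [-I2, np.zeros((2, 2))]])
S = rng.normal(size=(2, 2))
S = S + S.T                                   # symmetrize
C = np.block([[I2, -2 * S], [np.zeros((2, 2)), -I2]])

print(np.allclose(C @ C, np.eye(4)))          # an involution
print(np.allclose(C.T @ Omega @ C, -Omega))   # anti-symplectic: flips the pairing
```

So each reflection is anti-symplectic, while a product of two of them preserves Omega, which is the index-two statement from the talk.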
You have this group acting on projective three-space, and the limit set is the kind of concept that, because it's so useful, has many different definitions. One definition is that it's the smallest closed subset of projective space invariant under the group gamma. In this particular case, once you have these ping-pong polyhedra, you can check that it's a non-trivial subset, not just anything. So this is a picture of this set; I wanted to show you how it looks. Unfortunately this picture is not very representative, but it turns out to be a curve: if you look at it from this angle, it's a spiral that keeps spiraling, and if you look at it transversely, it doesn't quite match up; it has this kind of funny structure. One of the reasons this set is a little difficult to visualize is that it's not one of these conformal fractals. The typical fractals you've seen in popular culture, the Mandelbrot sets or Julia sets, are invariant under conformal transformations, so if you zoom into a piece of the fractal, it looks roughly the same as the entire fractal. But this limit set is invariant under a group of transformations that's not conformal, so in order to get the same picture you shouldn't zoom in on a little cube of equal sides near some point; rather, you have to take a parallelepiped that's very degenerate in some directions, and if you zoom in and rescale that, then you'll get a similar-looking picture. So this limit set is not invariant under a conformal group, and that's why, if you zoom into it, you'll see a different picture. But it's not because...
the self-similarity is absent; it's because the invariance is under a different kind of group, a higher-rank group. So I guess what I should say is that this theory of higher-rank groups, going away from Kleinian groups, only recently started to be studied more seriously. Some of the people who looked at this: Labourie introduced these Anosov representations, about which I'll talk in a moment, and then there are Guichard and Wienhard, and Kassel and Guéritaud, and there is a separate group of people, Kapovich, Leeb, and Porti, who are working on this. Okay, so once you have this limit set, what you can do is consider the Lagrangian Grassmannian and this kind of saturated version of the limit set. For each point in the limit set you take all the Lagrangians which are not transverse to that point: the point in the limit set is a line in R^4, and you look at all the Lagrangians which contain that line, and you call this the saturated limit set. So if you have the curve in R^4, this will give you something like a tube in the Lagrangian Grassmannian. The Lagrangian Grassmannian is three-dimensional for R^4, so it's going to be some real three-dimensional manifold, and you'll have this kind of cylinder sitting inside. Now you take the complement of this closed subset, and you get some non-trivial open subset; this is going to be the domain of discontinuity. And the fact is that gamma acts on this domain of discontinuity properly discontinuously: once you've thrown away these bad points, you actually get a properly discontinuous action. The problem is that the people in the theory of Anosov representations really study groups which have only hyperbolic elements, so the action is always kind of loxodromic: you always have non-trivial eigenvalues, and the dynamics is always uniformly hyperbolic in some sense, but
here you have unipotents: the monodromy around a cusp is always unipotent, so you have to extend a little bit the notions that they used to prove such a result; you have to mess around with this. Okay, so this so far was all topological. How does this relate to Hodge theory? If we go back to the Hodge decomposition, you take the first two subspaces in the Hodge decomposition and call that F^2; that's a traditional name in Hodge theory, essentially because this F^2 bundle is what varies holomorphically. So H^{3,0} varies holomorphically, but H^{2,1} doesn't; however, their direct sum, which you call F^2, will vary holomorphically. So then what do you do? Maybe remember what we are trying to do: we're trying to establish some kind of transversality. So what you do is the following. To each such possible element of the Hodge filtration you associate a subset of the real Grassmannian, the bad real Lagrangians. What are the bad real Lagrangians? You have this C^4 and you have a subspace F^2, which happens to be also Lagrangian, and you consider those real Lagrangians which, after you complexify, are not transverse to this complex F^2. This is a complex codimension-one condition, or real codimension-two condition, so it actually gives you a curve inside this real three-dimensional Lagrangian Grassmannian. So now you can consider, over the surface, this kind of bad bundle. What is it?
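Before the answer, an aside collecting the two constructions just described in symbols (my notation, not from the slides): the saturated limit set with its complementary domain, and the bad real Lagrangians attached to a point F^2 of the period domain:

```latex
\widehat{\Lambda} \;=\; \{\, L \in \mathrm{LGr}(\mathbb{R}^4) \;:\; \ell \subset L \text{ for some line } \ell \in \Lambda_\Gamma \,\},
\qquad
\Omega \;=\; \mathrm{LGr}(\mathbb{R}^4) \setminus \widehat{\Lambda},
\\[4pt]
\mathrm{Bad}(F^2) \;=\; \{\, L \in \mathrm{LGr}(\mathbb{R}^4) \;:\; (L \otimes \mathbb{C}) \cap F^2 \neq 0 \,\}.
```

Non-transversality with F^2 is one complex equation, hence real codimension two, which is why Bad(F^2) is a circle inside the three-dimensional Lagrangian Grassmannian.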
So the fiber over a point of the Riemann surface, over P^1 minus three points, is just a circle of Lagrangians: the circle of all Lagrangians which are bad for that specific holomorphically varying piece of the Hodge filtration. Now what you can show is that this bad bundle is actually uniformized by the domain of discontinuity that we had earlier. This group gamma, as I said, acts properly discontinuously; that is a purely topological picture on the Lagrangian Grassmannian, this open set, and once you quotient by gamma, this open set identifies with the space of bad Lagrangians, which is something that really comes from holomorphic geometry. I should say that Collier, Tholozan, and Toulisse did something similar, but again in the context of Anosov representations, so in particular this non-compactness and the cusps are absent, and it's in a slightly different language of Higgs bundles; but I think it's similar in spirit. So once you have this, you're basically done, because you get the following. Remember that around zero we had this monodromy which was maximally unipotent; in particular it had an invariant Lagrangian and an invariant line. An invariant line for this monodromy gives you a point in the limit set, and the invariant Lagrangian gives you a point which lies on the circle associated to that point of the limit set. So what you get is that this maximal unipotent Lagrangian is never bad for any of these Hodge filtrations, because this maximal unipotent Lagrangian belongs to the saturated limit set, which is the complement of the domain of discontinuity that uniformizes the bad bundle. So maybe I can state it geometrically. You have the Riemann sphere minus three points, here it is, and you have this F^2 subspace which is varying over this picture. Now near zero you fix this one Lagrangian subspace, and then you
start flatly moving it around all over the surface; it comes back to itself many, many times via non-trivial monodromy transformations, and basically the claim is that it will never intersect this holomorphic F^2 term of the filtration: it will always stay transverse. Once you have this transversality, essentially, that is what I said before: this function that I described will never vanish. There's an explicit way to express this non-vanishing: the transversality is the non-vanishing of some matrix coefficient, and you can explicitly write it out using power series, and it's kind of amusing; you can write it out explicitly, and you'll get that some rational numbers will have to have some rate of decay. There's a power series that doesn't have any zeros in the unit disk, so its inverse has no poles, so it has a very good radius of convergence, so the coefficients have to decay very quickly. But in any case, this was just something related to the Lyapunov exponents; there's a lot more one can get from this. So take G to be again the real symplectic four-dimensional group, and H to be the indefinite unitary subgroup. What I want to emphasize here is that H is not a compact subgroup: H is non-compact. However, what's true is that the discrete group gamma acts properly discontinuously on this G mod H. So gamma is a discrete subgroup of G, but H is not compact, so in general there's no reason why a discrete subgroup should act properly discontinuously; in fact there's a criterion of Benoist which tells you when this happens, and again one can check it rather explicitly in this case. And again, the names as I put down there: Guichard, Kassel, and Wienhard have some rather general theorems which are in this spirit, but again somehow they always work with purely loxodromic elements, so one has to adapt some of these proofs when you have unipotents. But what is the space G mod H?
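A numerical aside on the rate-of-growth remarks above. If, as the mirror-quintic context of the first slide suggests, psi_0 is the series with coefficients (5n)!/(n!)^5 (an assumption on my part; the talk only says the functions are hypergeometric), then the radius of convergence mentioned at the start is easy to probe with the ratio test:

```python
from math import factorial

# Coefficients a_n = (5n)! / (n!)^5 of the (assumed) mirror-quintic period psi_0.
def a(n: int) -> int:
    return factorial(5 * n) // factorial(n) ** 5

# Ratio test: a_{n+1}/a_n = (5n+1)...(5n+5)/(n+1)^5, which increases toward 5^5.
ratios = [a(n + 1) / a(n) for n in range(1, 60)]
print(ratios[-1])  # approaches 5**5 = 3125 as n grows
```

So the coefficient ratios tend to 5^5 = 3125 and the radius of convergence is 5^(-5); the decay statement in the talk concerns the reciprocal series, whose coefficients must then be correspondingly well behaved.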
It's the space of these indefinite complex Lagrangians, these F^2 terms of the Hodge filtration, so in particular it's a complex manifold. And what you have is that, basically, using these thin subgroups of the symplectic group, you can construct a complex three-dimensional manifold which in principle there's no reason should be there: you get a complex three-dimensional manifold by just taking the quotient by gamma. What's amusing about this complex three-dimensional manifold is the following. If you fix the real Lagrangian, you consider, conversely, all the F^2 terms of the Hodge filtration which are bad, which are not transverse to this one fixed real subspace. This is a complex codimension-one condition on this complex three-dimensional manifold. So what you get is that you have this threefold, and inside you have a complex divisor, and you also have a curve, and they are disjoint. The fact that these two things are disjoint is essentially equivalent to the previous theorem: when you do this parallel transport, these guys are always transverse. So now you have this threefold, with a divisor and a curve inside, and it would be interesting to understand more about the geometry of this threefold; in particular, can one build other interesting objects, for example holomorphic functions, on this complex three-manifold? All right, so let me conclude. The first takeaway I would like to make is that you can study these period maps that come from Hodge theory in this global way, by trying to understand the global geometry. Most of the time people study this by looking at the differential geometry, the infinitesimal picture, but period maps have these large-scale properties which can be interesting. So one can prove (this is one thing I didn't talk about) classical theorems in Hodge theory using some ideas
from dynamics, and in particular the advantage is that this applies in a more general setting; maybe Alex will talk about it in his mini-course. In particular, for foliations and in Teichmüller dynamics, you can use these tools to prove versions of theorems in Hodge theory which, as they were classically proved, don't apply in some of these foliated settings or in the Teichmüller setting; but using these ideas from dynamics you can implement them. Another direction is the topic of thin groups. This is something that especially people in number theory have looked at a lot; these groups have interesting asymptotic properties, and especially when they come from some kind of geometry, like for example Hodge theory, you can control their asymptotic properties quite precisely and make geometric statements about their behavior. The other thing is that there's this parallel theory of Anosov representations, and one would need to deal with the unipotents that appear: most of the time, when you have families of algebraic manifolds, there are always unipotent monodromies that you have to understand. So you would need some kind of notion of, and I don't think this is standard terminology, but you could call these, log-Anosov representations. In the Anosov picture you always have some kind of linear divergence of eigenvalues or singular values; here you only get logarithmic divergence, but you can still recover many of the conclusions of that theory, even though the assumptions are not directly satisfied, and if you allow unipotents then you can work with these objects. And finally, you get these new interesting geometric objects, like this complex three-dimensional manifold, which is really locally homogeneous, it comes from this group gamma, but what its properties are and what one can do with it is, I think, something that remains to be discussed. So I think nobody's going to complain if I end a little bit
early. Thank you. [Question about flat surfaces.] Yeah, so I didn't say much about the flat surfaces. The situation I was in for most of the talk was just the Riemann sphere minus three points with the hyperbolic metric and the geodesic flow on that, and you have an explicit representation of the fundamental group of the Riemann sphere minus three points, which is a free group on two letters. I wrote two matrices at some point; those two matrices have a lot of rather remarkable properties, in the sense that you can derive a lot of interesting objects from essentially two explicit matrices. [Question.] This one? Oh, here, yeah. So in this case, I think, maybe from the perspective of... if you take the second exterior power of this thing, then what happens is that there's F^2 and there's F^2-bar, and F^2-bar is just the other two pieces. For example, in the exterior power, F^2 will give you one complex vector and F^2-bar will give you the complex conjugate vector, and together they will give you, I guess, a negative definite subspace. There's always the question of normalizing the sign, but no, here in fact it's unambiguous: regardless of how you normalize it, it's always going to be, I think, negative definite. So you have this negative definite two-dimensional subspace with a complex structure, essentially. From the point of view of the question I was interested in, which was computing these Lyapunov exponents, this is somehow the space that you want to naturally avoid. But I think one can express it in those terms: there's a different parallel story if you replace Sp(4) by SO(2,3), which is isogenous to it, and then F^2 has an expression there which I think is quite similar to what occurs, I think, in your paper. But I guess maybe one little difference is that once you have this Hodge structure there's a little bit more extra information; but it's kind of compact
information on this second exterior power; like, you have some bundles there, but here you have a little bit more data than that. [Question: where does 14 come from?] Yeah, 14. Oh, so the 14 is not anything in particular. What happens is you have this family with two real parameters mu1 and mu2, and the values are symplectic matrices; you can write down explicitly what these matrices are, as a formula in terms of mu1 and mu2. What Doran and Morgan did is they figured out for which real parameters mu1 and mu2 this representation can be entirely conjugated into integral matrices, and they found 14 answers and showed that there are no other possibilities. You know a priori that there's a finite number, but the bounds that you get a priori are not very good, so they just computed which examples can be conjugated to integer matrices, and they found these 14 examples. But in fact, you can take hypergeometric equations of degree six and eight and so on, so there's an infinite family; in each degree there will be only finitely many such integral representations. And in fact, if you want, for example, to get algebraic numbers, there are infinitely many: if you want to conjugate not into Sp(4,Z) but into Sp(4) with some algebraic entries, then you can take Galois-conjugate representations and so on, and those are actually also very interesting. So even if you just want to stay in dimension four, there's a lot more geometry if you allow non-integer entries.
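For the curious, the "two matrices" mentioned above can presumably be realized in Levelt's normal form for hypergeometric monodromy. A sketch under that assumption, with the mirror-quintic parameters alpha = (1/5, 2/5, 3/5, 4/5) and beta = (0, 0, 0, 0), so that the monodromy at infinity is the companion matrix of x^4+x^3+x^2+x+1 and the inverse monodromy at zero is the companion matrix of (x-1)^4 (the exact matrices on the speaker's slides may differ by conjugation):

```python
import numpy as np

def companion(coeffs):
    """Companion matrix of x^4 + c3 x^3 + c2 x^2 + c1 x + c0, coeffs = [c0, c1, c2, c3]."""
    C = np.zeros((4, 4), dtype=np.int64)
    C[1:, :3] = np.eye(3, dtype=np.int64)  # subdiagonal of ones
    C[:, 3] = [-c for c in coeffs]         # last column carries the coefficients
    return C

# Levelt-style generators for the (assumed) mirror-quintic hypergeometric group:
M_inf = companion([1, 1, 1, 1])      # char poly x^4+x^3+x^2+x+1, so M_inf^5 = I
M0_inv = companion([1, -4, 6, -4])   # char poly (x-1)^4, so M0 is maximally unipotent
M0 = np.round(np.linalg.inv(M0_inv)).astype(np.int64)

# Relation M_inf * M1 * M0 = I  =>  M1 = M_inf^{-1} * M0^{-1}
M1 = np.round(np.linalg.inv(M_inf)).astype(np.int64) @ M0_inv

print(np.linalg.matrix_rank(M1 - np.eye(4)))  # prints 1: M1 is a pseudo-reflection
```

The computation confirms the structure discussed in the talk: M_inf has order five, M0 is maximally unipotent (the cusp monodromy), and the monodromy around 1 fixes a hyperplane, matching the reflection discussion earlier.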