Thank you very much to the organizers, and to you for being here. So let me start quickly with the topic of the course. The idea is to analyze, in this first lecture and probably in the next one, what happens when, instead of iterating one diffeomorphism, you iterate two of them which commute with each other. We start with the nicest situation; later, if we can, we will move on to more complicated things, probably making the acting group more complicated, but let's start with this simple case. Okay, so to introduce the objects, let me just start with one map, which I guess a lot of you are more accustomed to: a diffeomorphism f from M to M. In general M will be a compact manifold, but it could be something different. When we study these maps there is one notion, hyperbolicity, which is very useful to understand them, and the ultimate hyperbolicity assumption is the Anosov condition. So let me define it: f is Anosov if the tangent space has a splitting TM = E^s + E^u into a contracting space and an expanding space at each point, satisfying two conditions. The first is the invariance condition: D_x f(E^s_x) = E^s_{f(x)} and D_x f(E^u_x) = E^u_{f(x)}. The second is the contracting and expanding property: there are C > 0 and 0 < lambda < 1 such that ||D_x f^n(v)|| <= C lambda^n ||v|| for every v in E^s and n >= 0, and a similar estimate when you go to the past, for v in E^u. So when you go to the future you contract on the stable space, and when you go to the past you contract on the unstable one. Okay, and the ultimate example of this is the matrix A = (2 1; 1 1), which has eigenvalues lambda_plus and lambda_minus, (3 plus or minus the square root of 5) over 2, and the corresponding eigenspaces E^s and E^u, which you can compute quite easily. [The lecturer draws the two eigendirections through the unit square on the board.]
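As a quick numerical sanity check (my own sketch, not part of the lecture), one can verify the eigendata of this matrix with a few lines of Python:

```python
import numpy as np

# The standard hyperbolic automorphism ("cat map" matrix): one expanding and
# one contracting eigenvalue, (3 +/- sqrt(5)) / 2.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
evals = np.sort(np.linalg.eigvalsh(A))   # A is symmetric, so eigenvalues are real

print(evals)                                        # ~ [0.3820, 2.6180]
print(np.isclose(evals[1], (3 + np.sqrt(5)) / 2))   # True
print(np.isclose(evals.prod(), 1.0))                # True: det A = 1
```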
So these are the eigendirections: this is the unstable and this is the stable eigendirection. Okay, and since the entries of this matrix are integers, we can look at it as acting on the torus T^2, using the same letter A for the induced map. Here T^2 is R^2 mod the integer lattice: a point plus the integer lattice is mapped to a point plus the same integer lattice, so the map descends to the quotient space, which is just the torus. The stable and unstable spaces descend too: you translate them over every point of the torus and you get the decomposition you are looking for, with the contraction. You can take the lambda in the definition to be exactly this lambda_minus, the contracting eigenvalue, and 1/lambda_minus is exactly lambda_plus, so you get exactly the beautiful condition above. Now, this is a good example to understand, and it has a lot of properties. It has a lot of invariant measures, and a lot of periodic points. For example, you can show that the periodic points of A coincide exactly with the rational points of the torus. You can take that as an exercise if you want; it's a simple one, a perfect exercise. This already gives you a lot of invariant measures, because on a periodic orbit of period four, say, you can put equal weights of one quarter on each point and you have an invariant measure already. Also, the fact that the determinant of this matrix is one makes the map area preserving: it preserves area on R^2, and the natural area measure on the torus T^2 is also preserved. Okay, so this is also nice. And you can prove ergodicity of the area measure using Fourier analysis, which is very useful.
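To see the flavor of the periodic-point exercise, here is a small check (my own illustration): the map preserves the finite set of points with a fixed denominator, and it is invertible there, so every rational point is periodic.

```python
from fractions import Fraction

def cat(p):
    """One step of the torus map induced by [[2,1],[1,1]] on a point mod 1."""
    x, y = p
    return ((2 * x + y) % 1, (x + y) % 1)

# Orbit of (1/5, 2/5): denominators never grow, so the orbit is finite, hence periodic.
p0 = (Fraction(1, 5), Fraction(2, 5))
orbit = [p0]
p = cat(p0)
while p != p0:
    orbit.append(p)
    p = cat(p)
print(len(orbit))   # -> 2: this rational point has period two
```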
We are going to discuss a little of that later — not for this map, but for something a bit more general, a higher-dimensional analogue. But this map has one problem, which is the second exercise. Suppose B is another such integer matrix, and suppose the two matrices commute — as I said, the course is about commuting diffeomorphisms, so I want another commuting diffeomorphism. If B commutes with A, then there exists an integer k such that B = ±A^k. So it could be minus a power of A, but that is not much: of course an iterate of a diffeomorphism commutes with the diffeomorphism, but this is not really fun, you are not adding any new map. And indeed you have this exercise, and you can try to go further and prove a similar statement for any matrix in SL(2,Z), okay? So in dimension two, on the torus T^2, you will not have two commuting matrices unless one is essentially a power of the other, or they have a common power, or essentially have a common power. (You are right, I shouldn't exclude k = 0 if I want a correct statement; the right formulation is that they are not the identity and they have a common power — thanks.) So what we can do is just go to higher dimensions; let's go to dimension three. Now I take a matrix A in SL(3,Z), a three-by-three integer matrix with determinant one, so I can make it act on the torus and I have a diffeomorphism of the torus. Depending on what the matrix is, you will have essentially the same picture as before, but I can give a more delicate explanation of what happens, and that's what I'm going to do.
So I don't want A to be the identity matrix — the identity has very trivial dynamics to discuss. What you will always have is three numbers, the moduli of the eigenvalues — we are in dimension three, so there are three of them — and we need a small case discussion here, which is, yet again, an exercise. You may have a double eigenvalue, but then it must have modulus one if the matrix is not diagonalizable; if it is diagonalizable with a non-real eigenvalue, that eigenvalue and its complex conjugate have the same modulus, and this is still not fun enough. The nice case is when the three eigenvalues are real and different, okay? So the exercise is to formulate correctly what I'm going to write here and solve it. Let me state what the lambda_i's are: lambda_i are the moduli of the eigenvalues. Then the statement is the dichotomy: either lambda_1 > lambda_2 > lambda_3 with lambda_i different from one for all of them, or we are in the completely non-hyperbolic situation, meaning all eigenvalues have modulus one — indeed, if real, they are ±1 — which is not interesting for us. So this exercise tells you that if you really want to compute, you want three real eigenvalues, all three of modulus different from one. If you had a complex eigenvalue, its conjugate would force an equality of moduli here, so you really need real eigenvalues. And there are such matrices; you can find them, it's not too hard. Here is one example — yet one more exercise is to show that this example works — a matrix [written on the board] with determinant one, I hope, whose eigenvalues are all real, two larger than one and one between zero and one. And for the matrix B you can take B = 2·Id − A. This commutes with A trivially.
It will have determinant one and — if my computations are correct; if not, you can try to fix them and find the correct example — the property that if A^k B^l = Id with (k,l) in Z^2, then (k,l) = (0,0). So the only way a power of one equals a power of the other is the trivial way: if they have a common power, so to speak, it's because you are doing the trivial thing. In other words, I have a map rho from Z^2 to SL(3,Z) taking (k,l) to A^k B^l, and what this exercise tells you is that rho is a one-to-one homomorphism. So I have two diffeomorphisms of the torus T^3 that commute with each other. Okay, now, since A has three different eigenvalues, I have a decomposition of R^3 as E_1 + E_2 + E_3 into the corresponding eigenspaces. And since the eigenvalues are different, you get that B preserves the same splitting — it is jointly diagonalizable with A precisely because it preserves this very same splitting. So A(E_i) = E_i and B(E_i) = E_i. Let me call lambda_i(A) the eigenvalues of A and lambda_i(B) the associated eigenvalues of B. If I apply the iterate A^k B^l to a vector v_1 in the direction E_1, what I get is lambda_1(A)^k lambda_1(B)^l v_1: the vector gets multiplied by this number. That's how the group acts on these spaces. This holds for every v_1 in E_1, and the same happens with E_2 and E_3. Now, since these eigenvalues are real and different from one, you can show — this is yet another exercise — that not only is A^k B^l never the identity for (k,l) different from (0,0), but also the number lambda_i(A)^k lambda_i(B)^l can never have modulus one. We are not going to use that right now, but it will show up later, I'm sure.
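Since the transcript does not record the matrix written on the board, here is a sketch with a hypothetical example having the stated properties: the companion matrix of x^3 − 6x^2 + 9x − 1 lies in SL(3,Z) with three distinct real eigenvalues, two larger than one and one in (0,1), and B = 2·Id − A commutes with it.

```python
import numpy as np

# Hypothetical example (assumed, since the board matrix is not in the transcript):
# companion matrix of x^3 - 6x^2 + 9x - 1, an element of SL(3,Z).
A = np.array([[0, 0, 1],
              [1, 0, -9],
              [0, 1, 6]])
B = 2 * np.eye(3, dtype=int) - A      # B = 2*Id - A commutes with A trivially

assert np.array_equal(A @ B, B @ A)   # the two matrices commute
print(round(np.linalg.det(A)), round(np.linalg.det(B)))   # 1 1

eig_A = np.sort(np.linalg.eigvals(A).real)
print(eig_A)   # three real eigenvalues: one in (0,1), two larger than 1

# Injectivity of (k,l) -> A^k B^l, checked on a window of exponents:
for k in range(-3, 4):
    for l in range(-3, 4):
        if (k, l) != (0, 0):
            M = np.linalg.matrix_power(A, k) @ np.linalg.matrix_power(B, l)
            assert not np.allclose(M, np.eye(3))
print("no nontrivial relation found")
```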
Okay, so since these numbers are different from one in modulus, for each (k,l) you can group together the eigendirections whose factors have modulus larger than one — the two of them, or just one by itself if there is only one — and the ones whose factors are smaller than one, and you get the splitting E^s + E^u that you wanted. Okay, good. So this is the type of example we want to study. Well, this is the linear version; we want to study the non-linear version of it. Let us move forward and define the Lyapunov exponents in this setting. So we have this splitting of R^3 — well, you can do it in whatever dimension you want; I'm doing dimension three to make everybody's life simpler. So I have this splitting into three spaces, and I can define functionals chi_i from Z^2 to R by chi_i(k,l) = log |lambda_i(A)^k lambda_i(B)^l|: the logarithm of the absolute value of the number by which A^k B^l multiplies in the E_i direction. And this is what is understood as the Lyapunov exponent. In general, when you are lucky and you have eigenvalues, the logarithm of the eigenvalue is the Lyapunov exponent; here you have these nice eigenvalues and directions, and the logarithm of the eigenvalue is the Lyapunov exponent. But now the Lyapunov exponent is not just a number, it's a function, because I'm playing with elements of Z^2, so I really have a function. So let me draw here my R^2, and inside the R^2 sits Z^2, of course. And I have three functionals: the one corresponding to the first direction, the one corresponding to the second and the one corresponding to the third. Each is linear in (k,l), so I can naturally extend it to a linear map from R^2 to R: instead of plugging in (k,l), you put (t,s), allowing k and l to be real numbers.
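To make the functionals concrete, here is a sketch with a hypothetical commuting pair (the lecture's matrix is not in the transcript): A the companion matrix of x^3 − 6x^2 + 9x − 1 in SL(3,Z), and B = 2·Id − A, which acts by 2 − lambda_i on the same eigendirection.

```python
import numpy as np

# Hypothetical commuting pair: A in SL(3,Z), B = 2*Id - A.
A = np.array([[0, 0, 1], [1, 0, -9], [0, 1, 6]], dtype=float)
lam_A = np.sort(np.linalg.eigvals(A).real)   # eigenvalues in increasing order
lam_B = 2 - lam_A                            # eigenvalue of B on the same E_i

def chi(i, k, l):
    """Lyapunov functional chi_i(k,l) = log |lam_i(A)^k * lam_i(B)^l|, linear in (k,l)."""
    return k * np.log(abs(lam_A[i])) + l * np.log(abs(lam_B[i]))

# The sum chi_1 + chi_2 + chi_3 vanishes identically, since |det A| = |det B| = 1.
for (k, l) in [(1, 0), (0, 1), (3, -2), (-1, 4)]:
    print((k, l), round(sum(chi(i, k, l) for i in range(3)), 10))   # ~ 0.0
```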
Then I have this linear functional, and a linear functional is essentially determined by its kernel plus some normalization. The normalization will show up later, but the kernel is very important for us. So here is the kernel of chi_1, say, here is the kernel of chi_2, and here the kernel of chi_3 — the subindex is an i, sorry, and it corresponds to the direction E_i of the splitting E_1, E_2, E_3 on which you are applying your dynamics. Once you have these kernels, another very important piece of information is on which side of each kernel the functional is positive and on which it is negative. Let's say chi_1 is positive on this side and negative on this side; then I hope I can make chi_2 positive on this side and negative on that side; and then chi_3 should probably be negative here and positive there. So the very important observation is this: for every n in Z^2, what happens with chi_1(n) + chi_2(n) + chi_3(n)? This is the logarithm of the modulus of the first eigenvalue-factor plus the logarithm of the second plus the logarithm of the third, so it's the logarithm of the modulus of the product of the eigenvalue-factors. The determinant is one, so the product of the eigenvalues has modulus one; the logarithm of one is zero, so the sum of these three functionals is always zero. It means that if two of them are positive, the third has to be negative. So you cannot have the three of them positive — and I hope I drew things the right way, so that there is no point here where the three functionals are positive. Now what happens if you sit on one of these lines, on a kernel? If you are in the kernel of chi_1, it means chi_1 is zero.
Chi_1 being zero means that the corresponding factor has modulus one — these are real numbers, positive real numbers say. But that shouldn't be possible at a nonzero integer point, by the exercise above, so this line must be totally irrational. Okay, so it's not a rational line; there is no nonzero integer point on any of these kernels. This needs to be proved, but it's yet another exercise — it's not that bad, you can do it. Okay, so these kernel lines determine several cones [the lecturer draws them, extending the lines all the way across], and these cones are called Weyl chambers. Whenever you pick an element in a chamber, you are getting a lot of information — in particular, you are getting the information of which directions are unstable and which are stable. If you are in a (+,+,−) chamber, meaning the first functional is positive, the second is positive and the third is negative, then E_1 and E_2 are unstable and E_3 is the stable direction. In this way, you can look at these combinatorics of signs and distinguish what will be the unstable space and what will be the stable space for each element. Okay, what do we do with this? This is all the linear picture; now, what do we do with the non-linear picture? So let's discuss a little bit diffeomorphisms of the torus T^3, or T^n for that matter. Take a map F from T^n to T^n, a diffeomorphism — it doesn't need to be a diffeomorphism, but since we are going to work mostly with diffeomorphisms I say that; it could be a homeomorphism or just a map, it doesn't matter. Some very basic algebraic topology — covering space theory — tells you that you can lift F to a map F-tilde from R^n to R^n so that the natural diagram with the canonical projections commutes. This lift is not unique, but let's not worry about the non-uniqueness now.
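One can also see the chamber picture numerically (again with a hypothetical pair, since the lecture's matrix is not in the transcript: A the companion matrix of x^3 − 6x^2 + 9x − 1 and B = 2·Id − A). The sign vector of (chi_1, chi_2, chi_3) labels the chamber of each integer point:

```python
import numpy as np

# Hypothetical commuting pair A (in SL(3,Z)) and B = 2*Id - A, as a sketch.
A = np.array([[0, 0, 1], [1, 0, -9], [0, 1, 6]], dtype=float)
lam_A = np.sort(np.linalg.eigvals(A).real)
lam_B = 2 - lam_A
log_A, log_B = np.log(np.abs(lam_A)), np.log(np.abs(lam_B))

patterns = set()
for k in range(-10, 11):
    for l in range(-10, 11):
        if (k, l) == (0, 0):
            continue
        chi = k * log_A + l * log_B          # the vector (chi_1, chi_2, chi_3)(k,l)
        patterns.add(tuple(np.sign(chi).astype(int)))

print(sorted(patterns))
# Since chi_1 + chi_2 + chi_3 = 0, the patterns (+,+,+) and (-,-,-) never occur;
# the three kernel lines cut the plane into six chambers.
```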
You can put some conditions to make it unique, or study the non-uniqueness; it's not really a big deal. Now this lift will always be of the form F-tilde(x) = Ax + phi(x), where A is an n-by-n invertible integer matrix whose inverse is also an integer matrix — essentially, this means the determinant is plus or minus one — and phi is a Z^n-periodic function, that is, phi(x + m) = phi(x) for every m in Z^n. So you can always write the lift as a linear map plus a periodic map. Being periodic means in particular that phi is bounded, while the linear map is not bounded at all. So if you look at R^n close to infinity, you are moving by a huge linear map and then you are moving a bounded amount: the lift is a bounded perturbation of a linear map. Okay, so we always have such a lift. Now let us assume A is hyperbolic — an Anosov matrix. This means R^n can be written as E^s + E^u with A(E^s) = E^s and A(E^u) = E^u, and there are C > 0 and 0 < lambda < 1 such that the norm of A^n v is at most C lambda^n times the norm of v for v in the stable part and every n >= 0, and the norm of A^{−n} v is at most C lambda^n times the norm of v for v in the unstable part — the same thing going backwards. So the automorphism of the torus induced by the linear map is an Anosov diffeomorphism, and there is your splitting. So what can we say about F? Can we say anything about F just from these conditions, this very condition we started with? Indeed, we can say a lot. We can say that F has a fixed point — this is yet another exercise: prove that in this situation, where A is hyperbolic, F has a fixed point. In fact you will have a huge amount of periodic orbits also. And indeed you have a little bit more.
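Here is a toy illustration of the lift structure (my own sketch): a small Z^2-periodic perturbation of the cat-map matrix, checked for equivariance under deck transformations and for staying at bounded distance from the linear map.

```python
import numpy as np

# Toy lift on R^2: F~(x) = Ax + phi(x), A the cat-map matrix, phi small and periodic.
A = np.array([[2.0, 1.0], [1.0, 1.0]])
eps = 0.01

def phi(x):
    # Z^2-periodic, hence bounded
    return eps * np.array([np.sin(2 * np.pi * x[0]), np.cos(2 * np.pi * x[1])])

def F_lift(x):
    return A @ x + phi(x)

rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.uniform(-5, 5, size=2)
    m = rng.integers(-3, 4, size=2)
    # Equivariance: F~(x + m) = F~(x) + A m, so F~ descends to the torus
    assert np.allclose(F_lift(x + m), F_lift(x) + A @ m)
    # F~ stays at bounded distance (here at most eps*sqrt(2)) from the linear map
    assert np.linalg.norm(F_lift(x) - A @ x) <= eps * np.sqrt(2) + 1e-12
print("lift checks passed")
```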
This is a theorem by Franks, from the 70s or 60s — 60s, probably — which says that in this framework there exists H from T^n to T^n, continuous and homotopic to the identity, such that H ∘ F = A ∘ H. Homotopic to the identity essentially means that the matrix appearing in the lift of H is the identity matrix. The proof — well, he never stated it quite like this, and indeed it's a simple computation that we are going to do. Don't worry; the point is that the linear map is a factor of F. That's the meaning: it's a topological factor. Being homotopic to the identity automatically implies that the map is onto, besides continuous, and it is not hard to prove that in this setting. So what we are trying to discuss is whether we can make this H better than just a continuous map homotopic to the identity. Can we make it a homeomorphism? Can we make it a diffeomorphism? We have contraction and expansion, and the lift F-tilde(x) = Ax + phi(x). Let's look for H of the form identity plus a periodic map: H-tilde(x) = x + u(x), where u(x + m) = u(x) for every m in Z^n. So we plug into the equation H ∘ F = A ∘ H — I will do it maybe a little bit fast; you can redo it as an exercise in slow motion. (Id + u) ∘ F-tilde = A ∘ (Id + u) reads F-tilde + u ∘ F-tilde = A + A ∘ u, that is, A + phi + u ∘ F-tilde = A + A ∘ u. We can cancel out the linear parts A, and we get a very nice equation: phi = A ∘ u − u ∘ F-tilde. This is a functional equation, but it is linear in the variable u — A is a linear map, so the right-hand side is linear in u. So I have phi, going from R^n to R^n, and my unknown u, also going from R^n to R^n. But since R^n splits as E^s + E^u, I can split them: I look at phi_s from R^n to E^s and u_s from R^n to E^s, and the same with the unstable parts phi_u and u_u.
Okay, I will do just the stable part; the other part will be exactly the same. So I put s's everywhere: A_s means the restriction of A to the stable part. The stable component of the functional equation is phi_s = A_s u_s − u_s ∘ F-tilde. Let me compose with F-tilde inverse and rewrite the functional equation: u_s = A_s u_s ∘ F-tilde^{−1} − phi_s ∘ F-tilde^{−1} — it's the same equation. So how do I solve this equation? I can solve it explicitly. Think of the operator L(u_s) = A_s u_s ∘ F-tilde^{−1}. Composing with a map is an isometry in the C^0 topology, as soon as the map is onto, and A_s is a contraction, so L is a contraction. And what I have here is (Id − L) u_s = −phi_s ∘ F-tilde^{−1}. We know how to take the inverse of Id − L: (Id − L)^{−1} is the sum of L^k over k >= 0. So I can just write the solution explicitly: u_s = − sum over k >= 0 of A_s^k phi_s ∘ F-tilde^{−(k+1)}. As an exercise, show that this is a convergent series in the C^0 topology — the Weierstrass M-test. And you will have, analogously, u_u = sum over k >= 0 of A_u^{−(k+1)} phi_u ∘ F-tilde^k — that's where the minus one and the plus one in the exponents go. So this is the formula for u, essentially, and again you can check that this series converges. And since the function phi is periodic and F-tilde preserves the integer lattice, these sums also respect the lattice, so the functions u_s and u_u are Z^n-periodic. So there is your solution; that's the proof of the theorem. Now, how to make this H better than just a continuous map? The next theorem, which is Franks and Manning, is a really hard theorem; I'm not going to prove it. It says that if F from T^n to T^n is Anosov, then H is a homeomorphism.
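To see that these series really do the job, here is a numerical sketch (my own construction, not from the lecture): take a small periodic perturbation of the cat map, build u from the two truncated series, and check the conjugacy equation on the lift.

```python
import numpy as np

# F~(x) = Ax + phi(x), a small perturbation of the cat map on R^2.
A = np.array([[2.0, 1.0], [1.0, 1.0]])
eps = 0.01
evals, evecs = np.linalg.eigh(A)             # A symmetric: orthonormal eigenbasis
lam_s, lam_u = evals[0], evals[1]            # lam_s < 1 < lam_u
P_s = np.outer(evecs[:, 0], evecs[:, 0])     # spectral projections onto E^s, E^u
P_u = np.outer(evecs[:, 1], evecs[:, 1])

def phi(x):
    return eps * np.array([np.sin(2 * np.pi * x[0]), np.cos(2 * np.pi * x[1])])

def F(x):
    return A @ x + phi(x)

def F_inv(y, iters=60):
    # Solve Ax + phi(x) = y by the contraction x -> A^{-1}(y - phi(x)).
    x = np.linalg.solve(A, y)
    for _ in range(iters):
        x = np.linalg.solve(A, y - phi(x))
    return x

def u(x, K=40):
    # u_s = -sum_{k>=0} A_s^k phi_s(F^{-(k+1)} x),  u_u = sum_{k>=0} A_u^{-(k+1)} phi_u(F^k x)
    total = np.zeros(2)
    xb, xf = F_inv(x), x
    for k in range(K):
        total += -lam_s**k * (P_s @ phi(xb))
        total += lam_u**(-(k + 1)) * (P_u @ phi(xf))
        xb, xf = F_inv(xb), F(xf)
    return total

def H(x):
    return x + u(x)

# Check the conjugacy H o F = A o H at a few points.
for x in [np.array([0.2, 0.7]), np.array([1.3, -0.4])]:
    assert np.linalg.norm(H(F(x)) - A @ H(x)) < 1e-8
print("conjugacy verified")
```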
So the H we just built is indeed a homeomorphism: it is one-to-one. It was already continuous and onto; now it's one-to-one, and this is very important. The theorem of Franks — oops, Franks–Manning, I didn't write Manning — has one more piece, which we will not care much about: part of the statement is that if F is an Anosov diffeomorphism, then the matrix A is automatically a hyperbolic matrix. This is part of the theorem, not just an assumption. Okay, now comes the next question: can we make H a diffeomorphism? Or even just absolutely continuous, meaning it takes Lebesgue measure zero sets to Lebesgue measure zero sets? And the answer to both questions is: in most cases, no. You cannot make it even absolutely continuous, and you can see easily that you cannot make it a diffeomorphism. Just note that A(0) = 0, where 0 is the zero vector. If you look at the formula, H(F(H^{−1}(0))) = A(H(H^{−1}(0))) = A(0) = 0, so F(H^{−1}(0)) = H^{−1}(0): the point H^{−1}(0) is also a fixed point. Here I'm using that H is a homeomorphism, so H^{−1}(0) exists. And if you take the derivative of the conjugacy equation at this point, you get Q · DF(H^{−1}(0)) = A · Q, where Q is the derivative of H at H^{−1}(0). That means the two matrices DF(H^{−1}(0)) and A have to be conjugate, and this is a very strong condition: you can write down easy perturbations that kill this property. So most likely H is not smooth. Still, there is a theorem: you can do something when you have two commuting maps. This theorem, in its very final format, is due to several authors; I will just state it, without the precise hypotheses.
Let alpha be an action of Z^2 generated by two commuting diffeomorphisms, with neither being a power of the other or some such trivial thing. Assume this, and assume there is some n_0 such that alpha(n_0) is Anosov — it plays the role of the F of that theorem. Then the action is topologically conjugate to the linear one; this much comes from Franks–Manning. Now, can we say more? Yes, we can say more. Indeed there is the homomorphism rho from Z^2 into GL(3,Z) which essentially corresponds to the linear parts of the lifts of the alpha(n). And, if I want to state it correctly, there exists a subgroup Gamma of Z^2 of finite index — the quotient by the subgroup is finite — such that H ∘ alpha(n) = rho(n) ∘ H for every n in Gamma. Outside Gamma there may be affine maps which you cannot make linear; this is a subtlety, but of minor effect, and it will probably appear clearly later. Okay, so I guess time is up. The idea of the next lecture will be to prove this statement, and in the next two lectures we are going to, in a sense, apply this theorem to understand more general groups acting. Okay, that's it, thanks.