Okay, well, hello everybody, and welcome to ICTP. Of course, I'm sorry we cannot welcome you in person this time, but we welcome you online, and hopefully in the near future we'll start having in-person events again. Good morning, good afternoon; I know we have people connected from many different parts of the globe. And welcome to the school on Markov Partitions and Young Towers in Dynamics. So I'm just going to give a little introduction. Let me say that one of the motivations for the school is that in the last 20 years or so, two very powerful approaches have emerged in the study of smooth ergodic theory and hyperbolic dynamics. One of them is Markov partitions, which is a generalization of one of the very original approaches of Sinai, Ruelle and Bowen in the 1970s for uniformly hyperbolic systems. The other one is the method of Young towers, which was introduced by Lai-Sang Young at the end of the 90s. Both of these methods are very geometric; they involve some highly non-trivial geometric constructions for the systems. Both of them are very powerful, because they give very strong results, but both are also highly non-trivial to apply. They are so non-trivial that, in my experience, usually people who learn one of the methods are then sufficiently traumatized that they can't face learning the other method as well. In any case, somehow a situation has been created in which there are almost two communities. Of course, these communities interact, we work in the same areas, but there are specialists in Markov partitions and specialists in Young towers. Both of these communities have developed a lot of results. Some of these results are sometimes very similar, but obtained using two different techniques.
And, together with several colleagues, thinking about this situation, we thought it really would be time to create an event to bring together these two approaches, to try to understand what they really have in common and where they really differ. Because, of course, ultimately they study the same systems and obtain very similar results, so they are very closely connected, although they have some important differences, and one method might be better in some situations and the other technique better in other situations. So this is how the idea for this event came about. And we have decided to structure the event like this. Over these next three weeks, at a very calm pace, there are going to be two mini-courses. One of them is by Professor Yuri Lima of the Federal University of Ceará in Brazil. Yuri is going to start his lectures today and is going to talk about symbolic dynamics for non-uniformly hyperbolic systems, by which he means the Markov partition approach, let's say, for non-uniformly hyperbolic systems. He is going to give, I think, four or five lectures, probably over these next couple of weeks. Then next week the second mini-course starts, by Professor José Alves from the University of Porto, and he is going to give his lectures on Young towers in dynamics. The purpose of these two courses is really to introduce these two techniques from the beginning. I should warn, of course, that this is not meant to be a completely introductory mini-course. We will assume that everyone has a little bit of background in dynamical systems and possibly a little bit of background in ergodic theory. But I don't think either Yuri or José is going to assume any background in these particular constructions of Markov partitions and Young towers. Okay. This is going to be followed by a final week in the first week of December.
We have invited 10 very prestigious speakers who have done research applying these constructions to particular systems, and so we will finish this event with that week of talks. There are going to be two talks a day. The time slot is going to be the same, from two to four Central European time, to try to make it accessible to people also in the Americas and in Asia. So we're just going to have two talks a day for one week, and these are all going to be talks around this topic. So this is an extremely focused event, right? That is the purpose of it. We also encourage everyone to really study during these three weeks if you're following these courses; instead of running them in an intensive way during one week, we have purposefully spread them out so that you have time to study. I think Yuri and José will, at some point, certainly share their notes, I believe, and perhaps give some exercises. And you are most welcome. We haven't done it yet, but we will create a kind of working group or website so that you can post some questions and give some feedback about how it's going. We will try to have a kind of interactive experience, especially for these two mini-courses. Yuri, is this comment meant for me, the survey at the PBS? Or are you going to say something about that? Okay. So, does anyone have any questions or comments for me before we start? Okay. These talks will be visible on Zoom and also on YouTube. They will also be recorded, and we will announce soon where the recordings will be available. So you can also tell your friends and your colleagues: if they're interested, they can watch the recordings, they can register on Zoom, or they can just watch on YouTube. Okay. So I really wish everyone a very interesting and profitable experience in this course, and I pass the microphone and the video to Yuri Lima. Yuri, thank you very much. Okay.
Thank you very much, Stefano, for this introduction to the event that, together with him and José Alves, we are organizing, trying to gather everybody together even in a virtual way. I should also thank ICTP for allowing this to happen. And I believe that Stefano said everything that was important at the beginning of this mini-course. As he said, I'm going to give four or five lectures; that depends on the pace that we will have here. And in between the lectures, and during the lectures, I suggest you ask questions. This mini-course is made for you, and whenever you have questions, I like that very much, because maybe it's not only you who is having those doubts; maybe other students are having the same doubts. So don't feel shy to ask questions. All right. Okay. So I will talk about symbolic dynamics for non-uniformly hyperbolic systems, and the way that the symbolic dynamics comes into play is exactly by constructing, as Stefano said, Markov partitions. All of my lectures will be based on a survey paper that I have written, which was published last year in Ergodic Theory and Dynamical Systems. I'm sending you the arXiv link to this paper. And I will try to follow this survey, because it was written in a very methodical way: first I deal with the technique in the uniformly hyperbolic situation, which is classical, and which I expect most of you have seen before, at least the definitions and some results. Then I start to build this new theory, from the simplest situation of surface diffeomorphisms up to the most complicated ones of billiards and flows. I should say that this recent theory of Markov partitions for non-uniformly hyperbolic systems started with an influential paper of Omri Sarig, published in 2013, in which he did this construction for the case of surface diffeomorphisms.
And since his result, there have been developments, extensions, adaptations, and new tools that have been used in order to apply this line of ideas to more complicated systems, such as higher-dimensional diffeomorphisms, three-dimensional flows and higher-dimensional flows, and also some classes of systems that exhibit singularities and are very natural from a dynamical point of view, like billiards. And this is a nice point at which there is a good intersection with the mini-course of José Alves, who is going to speak about Young towers, because, as far as I understand, the first constructions of Young towers were aimed at understanding some properties of dynamical billiards. So there will be a kind of intersection, in this class of applications to billiards, between these two mini-courses. So let's start here with the notes that I have prepared. I can make these notes available to you after the lecture, so that you can follow more easily. I've chosen to display these notes at this smaller zoom level here. If you feel that the lettering is still too big, I can make it even smaller, like this; it depends on you, because like this you can see more of each page. I will try to do it like this, and if it's too small, just let me know and I will increase the zoom. So basically, this mini-course, which I plan to do in four to five lectures, has two main goals. Both goals aim at a better understanding of this class of systems which are called non-uniformly hyperbolic. The first goal is to identify exactly how this non-uniform hyperbolicity can be measured. By measurement I mean that, using some parameters, we will be able to say when a point has good non-uniform hyperbolicity, and whenever it has this good non-uniform hyperbolicity, we will be able to say something about the trajectory it defines.
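For orientation, and anticipating the definition the lecturer promises for later, the standard parameter used to measure hyperbolicity at a point is the Lyapunov exponent of a tangent vector v at x (this formula is my own addition, not part of the transcript):

```latex
\lambda(x, v) \;=\; \lim_{n \to \infty} \frac{1}{n} \log \left\lVert Df^n_x \, v \right\rVert .
```

Roughly speaking, a point is hyperbolic when all of its Lyapunov exponents are nonzero; non-uniform hyperbolicity means this holds at relevant points with no uniform control on how fast the limit is attained.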
So in this regard, we also want to identify how to measure this non-uniform hyperbolicity and what the good points are for which we can say something. And why do we do that? Because in the final stage of the mini-course, what we actually want to do is to construct Markov partitions. And for that, we will implement, in the context of non-uniform hyperbolicity, an approach due to Rufus Bowen, developed in the beginning of the 70s of the last century, in which he constructed Markov partitions for the classical class of uniformly hyperbolic systems. I should say that Markov partitions, at least in the way we understand them nowadays, started with two works in the 1960s. One of them was by Adler and Weiss, and the other one was by Sinai, and both of them treated simple situations of uniformly hyperbolic systems. Well, for Sinai it was not that simple, because he actually treated Anosov diffeomorphisms, but Adler and Weiss certainly treated a simple situation. Nevertheless, later on Bowen came up with a different approach, with which he was able to handle Axiom A diffeomorphisms, and the nice thing about his approach is that it is more suitable for generalizations; that is actually why Sarig was able to use this approach of Bowen and apply it in the non-uniformly hyperbolic situation. Okay. Well, maybe some of you have never heard some of these names, but I believe that during the lectures you will become more familiar with some of these concepts. Okay. So I would like to start by actually introducing some notation: whenever I write UH I refer to uniformly hyperbolic systems, and whenever I write NUH I refer to non-uniformly hyperbolic systems. So let's start at the beginning, which is that of examples. I would like to give you a list of examples that you should have in mind during my lectures. Some of them will be uniformly hyperbolic and others will be non-uniformly hyperbolic.
So the simplest example, let me go a little bit below here, is that of the cat map, as we usually know this map, induced by this 2 by 2 hyperbolic matrix, and this matrix induces a map on the two-torus. If you look at the action of this map on the fundamental domain of the two-torus, it does something very complicated with these pieces here. You have this piece that I numbered 1, this other 2, this other 3 and this other 4, and the way that this map on the two-torus assembles these pieces is by putting them, like some tangram, in these 4 positions. As a matter of fact, it is quite complicated to understand this geometrical representation, and what Adler and Weiss did in the 60s was exactly to look at these maps by means of what is nowadays known as Markov partitions. They understood that for these classes of maps there is a simpler system of coordinates, in this case given by the eigendirections of this hyperbolic matrix, in which you can more easily understand what the action of this matrix on the two-torus is. So this gave rise to the idea of Markov partitions, at least in this simple setting of toral automorphisms. As I told you, later, more or less at the same time, Sinai came up with a different approach that covered not only this situation of linear maps but also nonlinear ones, namely any Anosov diffeomorphism. So this is perhaps the simplest example of uniformly hyperbolic diffeomorphisms. I also want, during the mini-course, to say things about flows. Actually, flows are in some sense the original object of interest in dynamics. When Poincaré was interested in trying to understand the stability of the solar system, what he had was a system of ordinary differential equations, and this gives rise to flows. So the original aim is to try to understand the trajectories of flows, and in this regard the most classical example that we have in dynamics is when we consider geodesic flows in negative curvature.
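Before moving on to flows, the cat map just described can be sketched concretely. This is a minimal Python illustration of my own, not part of the lecture: the map on the two-torus induced by the hyperbolic matrix A = [[2, 1], [1, 1]], together with a check of its hyperbolicity and of a periodic point.

```python
import math

# Arnold's cat map on the two-torus: (x, y) -> (2x + y, x + y) mod 1,
# induced by the hyperbolic matrix A = [[2, 1], [1, 1]].
def cat_map(x, y):
    return ((2 * x + y) % 1.0, (x + y) % 1.0)

# The eigenvalues of A are lam = (3 + sqrt(5)) / 2 > 1 and 1/lam < 1:
# A expands along one eigendirection and contracts along the other,
# which is exactly uniform hyperbolicity.
lam = (3 + math.sqrt(5)) / 2
assert abs(lam * lam - (3 * lam - 1)) < 1e-9   # characteristic polynomial of A
assert abs(lam * (1 / lam) - 1.0) < 1e-12      # det A = 1, area-preserving

# Rational points are periodic; e.g. (1/2, 1/2) returns to itself in 3 steps.
q = (0.5, 0.5)
for _ in range(3):
    q = cat_map(*q)
print(q)  # (0.5, 0.5)
```

The intermediate points of that orbit are (0.5, 0.0) and (0.0, 0.5); the dyadic values make the floating-point arithmetic exact here.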
Geodesic flows have been studied since the end of the 19th century, and how do we define them? Well, let's focus on surfaces, for example. You consider this surface here, which has genus 2, and having genus 2 we know, by the uniformization theorem, that we can endow it with a metric of negative curvature. In general, you can consider any closed manifold with negative sectional curvature, and on this object you can consider the geodesic flow. So how is the geodesic flow defined? Well, one observation, about which people usually make mistakes: it is not a flow on the manifold itself, it is actually a flow on the unit tangent bundle of the manifold. In the case that the manifold has dimension 2, this bundle is going to have dimension 3; it's a 3-dimensional flow in the simplest situation. And what does the flow do, exactly? Well, it takes one unit vector and evolves this unit vector along the unique geodesic that it defines, and at time t you take the velocity of this geodesic, which is again a unit vector. So the image of the vector v under the geodesic flow at time t is exactly this new vector, tangent to the geodesic at time t. In this way, to each vector you associate a new vector. And the flow lives on the unit tangent bundle, so in our example of the surface it is naturally 3-dimensional. All right? Well, the study of these objects goes back at least to 1898, and during the 1920s there were a lot of studies of this class of systems, which was further developed by Anosov when he considered not only geodesic flows but flows in general that exhibit an important dynamical feature, just like these geodesic flows in negative curvature. And what is the dynamical feature that these geodesic flows in negative curvature exhibit? Well, they are exactly the simplest examples of uniformly hyperbolic flows.
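In symbols, the definition just described can be written compactly (my own notation, added for reference): for a closed Riemannian manifold M with negative sectional curvature, the geodesic flow is the family of maps

```latex
g^t : T^1 M \to T^1 M, \qquad g^t(v) = \dot{\gamma}_v(t),
```

where \(\gamma_v\) is the unique unit-speed geodesic with \(\dot{\gamma}_v(0) = v\). The flow property \(g^{t+s} = g^t \circ g^s\) follows from uniqueness of geodesics, and if \(\dim M = 2\) then \(\dim T^1 M = 3\), matching the 3-dimensional picture above.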
So here I have drawn a picture that people usually use to explain this uniformly hyperbolic behavior, and you should understand this disk here as the universal cover of our surface. If you are in negative curvature, the universal cover is going to be the Poincaré disk, and when we try to understand the dynamics in the quotient, sometimes we can first understand it in the fundamental domain and then project back to the quotient. In the fundamental domain, every unit vector dies at a point at infinity, on the boundary of the Poincaré disk, and it is born in the past at another point on the boundary of this Poincaré disk. And if we look at what are known as horospheres (from this vector here you draw a disk tangent to this point at infinity and passing through the base point), this disk has a lot of unit vectors pointing inwards, and all of these unit vectors have the same forward behavior as the original one. If you iterate this vector and this vector into the future, they both approach the same point at infinity, and actually the trajectories of these two vectors get exponentially close to each other as time evolves. So what I am saying is that each vector has a bunch of other vectors which are fellow companions in the future, and this tells us that you have a notion of stable manifold. In the same way, you have a notion of unstable manifold: looking at the past, you can define another tangent sphere here, and the vectors pointing outwards from the sphere, if you look at their backward iteration under the flow, have the same behavior as the original vector v. They are both born here at this point at infinity, and their backward trajectories get exponentially close to each other, so they give rise to what is known as an unstable manifold. As a matter of fact, this is a uniformly hyperbolic flow, and what is a uniformly hyperbolic flow? Well, maybe I can define it later; for now I expect
that most of you have seen the notion of uniform hyperbolicity, so we can go further. Along these lines, these are two examples, and they are the most classical examples in the uniformly hyperbolic context. What are the simplest examples in the non-uniformly hyperbolic context? Well, the simplest one is when you are in dimension 2. If you are in dimension 2 and you have positive topological entropy, then there is an inequality, called the Ruelle inequality, which relates topological entropy, metric entropy and Lyapunov exponents, and in this low-dimensional context it actually implies that for any such system there are many measures for which your system is non-uniformly hyperbolic. I will define for you today what non-uniform hyperbolicity is, but not now; just keep in mind that the simplest example of non-uniform hyperbolicity is that of surface diffeomorphisms with positive topological entropy. And in the flow situation, what is... sorry, there is a question. Yes? In the second example, you consider negative curvature? I can, for simplification of the pictures, make it constant, but the same results hold for variable negative curvature. Okay, thank you. You're welcome. Okay, so what is the simplest example of a flow that is non-uniformly hyperbolic? Well, it is the following. Instead of considering strictly negative curvature, you only assume non-positive curvature. For example, you take the original genus 2 surface that we had before and you put a cylinder in the middle, a cylinder on which the curvature is zero; then this is going to be genuinely non-uniformly hyperbolic. In some sense (it is not entirely true, but it is morally true), trajectories that live here in these two parts, whenever they are passing through these parts, they see, because of the negative curvature, some uniformly hyperbolic behavior, with expansion and contraction; but whenever they are passing through this cylinder region, you are in the flat part of your surface, so you see no divergence of orbits. So
yes, it is true that you don't have hyperbolicity that is uniform, but the hyperbolicity here occurs on average. If you measure on average how much you expand and contract, even though you do nothing here, with the contribution that you have on the left- and right-hand sides of the surface, on average you are going to have expansion and contraction, and this expansion and contraction on average is exactly the notion of non-uniform hyperbolicity. So, for the purpose of examples, whenever I say non-uniformly hyperbolic flow, you should think of this surface here. Well, as I told you, I'm also going to focus on the so-called dynamical billiards, so let me give you a bunch of examples of such billiards. What is a dynamical billiard? Let me show you two of them. Whenever you have a domain in the plane (for simplification I'm considering dimension 2), for instance this kind of diamond shape here, you can consider a dynamics which is just like the dynamics of the pool that we play. You consider a point particle here, pointing somewhere, and you allow it to evolve at unit speed: it evolves at unit speed in a straight line until it hits the boundary, and when it hits the boundary, you assume it gets reflected in a specular way, just like in pool. Then it continues at unit speed in a straight line until it hits the boundary again, then it gets reflected, then it hits the boundary again, and you continue in this way, and this defines a dynamical system. As a matter of fact, it is a dynamical system in which, to each position and direction at this position, you can associate a trajectory. So this defines a map which associates, to the position and direction that you start with, the first collision that you have with the boundary of this billiard, and this is called the collision map. The collision map is exactly the object that I told you about a few minutes ago, which has been widely studied both by the communities of
Markov partitions and also of Young towers; actually, as far as I understand, it gave rise to the notion of Young towers, in order to understand exactly these kinds of dynamics. You actually have many examples, because whenever you give me a different shape, you can consider the corresponding map. So here you have the example of this diamond-shaped situation, but you could also put it on the torus. Here you have the fundamental domain of the torus, and from this fundamental domain you consider these two disks, which are like obstacles: you just take them out of the torus, and you look at the billiard table as the complement of these obstacles. So inside this region that is not shaded here you can play your billiard dynamics. You start with the particle here, then you go in a straight direction until you hit the boundary, reflect specularly, hit the boundary here (recall we are on the torus, so this point is identified with this point here below), and then you continue, and so on. Because of the shapes that I have drawn in the picture, these are two examples of what are known as dispersing billiards, and in some sense they are uniformly hyperbolic: there is some uniform expansion and contraction in some directions. What would be examples of non-uniformly hyperbolic billiards? The simplest way to get them is by playing with the boundary of the table that we construct. For instance, one of the examples was discovered by Bunimovich, in which, instead of considering the kind of concave examples that I have drawn above, you consider a convex one. So here is perhaps the most famous example, which is called the Bunimovich stadium; well, you can understand why, right? It looks like a stadium. And here, whenever you are in this flat part, the behavior is somewhat similar to the non-positive-curvature geodesic flow: when the trajectory hits this flat part, there is basically no dynamics going on here: you
have no expansion, you have no contraction; things look, how should I say, not chaotic. But whenever you hit these parts of the boundary, then chaotic behavior starts to appear, and as a matter of fact Bunimovich showed that these stadiums are non-uniformly hyperbolic. So again, whenever I refer to results on billiards in the context of non-uniformly hyperbolic systems, you can think of the Bunimovich stadium in dimension 2. But you can also think of Bunimovich-type stadiums in higher dimension. For instance, there is this nice example here, introduced by Bunimovich and Gianluigi Del Magno, that looks like a kind of Bunimovich stadium in dimension 3. You have this parallelepiped here in the middle, which is like the flat region of the boundary of your domain, and you put some kind of caps, parabolic or circular in one direction and flat in the other direction, on one side and on the other, and then you can play billiards just like before: from every position on the boundary and every direction, you make a straight movement at unit speed until you hit the boundary, then you reflect specularly, and the trajectory continues. So this is the Bunimovich stadium in dimension 3. There are more complicated examples, for instance on the three-torus, which are just like the three-dimensional version of this guy here, oh sorry, of this guy here, sorry. You have the three-torus and you remove from it two obstacles: one of them is this sphere, and another one is another sphere, which in the fundamental domain appears as eight pieces here. So you can play pool outside these obstacles, and the behavior is somewhat similar to the behavior of this billiard here. Okay, all right. Excuse me, Yuri, excuse me, actually my question is regarding this billiard: so the uniform or non-uniform hyperbolicity depends on the shape? Because here we have the star shape, and there you have the fundamental domain of the torus, but here you have just the convex polygon. Yes, exactly, it
depends on the shape, exactly. So when you have some irregular shape, maybe it is non-uniformly hyperbolic, and with a regular shape, it depends on how regular it is? Yes; it actually depends not on the shape, or on how regular the shape is, but on the concavity of the shape, whether it is concave or convex. Okay, okay, thank you. You're welcome. So again, you can ask questions; it's going well, we have had two questions so far, very good. So, with respect to this very wide class of examples, what do we want to prove about them? The main results take these models, the non-uniformly hyperbolic ones and also the uniformly hyperbolic ones (for some of them it was not known before that this could be done), and represent them in a simpler situation; the simpler situation is what I call here the symbolic models. What are the symbolic models, and how do we represent the dynamical systems that we are considering by symbolic models? Well, the way we do it is that we construct a symbolic space Sigma for which the left shift acting on the symbolic space is an extension of our original dynamics. Being an extension means that from this new dynamics to the old one you have a map pi, which you can see as a coding map, that intertwines the dynamics of f and of sigma. The pair (Sigma, sigma) upstairs is what we call a topological Markov shift. What is a topological Markov shift? You give me a graph G, and we assume throughout these lectures that the graph is an oriented graph and that the number of vertices is countable (sometimes it's finite). Whenever you have this combinatorial object, you can consider all the paths on the graph; the set of all paths on the graph is exactly the symbolic space Sigma that we consider, and on the set of all paths on the graph you can consider a very simple dynamics, which is the left shift, in which you have a path here that is at position
zero at v0 and at position one at v1, and you shift in time: now at position zero it is at v1, at position one it is at v2, and so on. So for instance... yes? Are these functions always supposed to be invertible, or is that not important? It is very important: the first results treated the invertible situation, because the original proofs make strong use of the symmetry that you have between future and past, but the most recent results treat the non-invertible situation as well. Okay, and I expect to talk a little bit about that. The graph, the graph: here, for instance, the graph could be something like this. You have two states, 1 and 2, and you have edges here, for example like this. If you have this graph, the symbolic space it generates is the space of all Z-indexed sequences of ones and twos, so it's the full shift. It is like this that you have the symbolic space defined. But defining the symbolic space is perhaps only one important part of the construction; another very important part is to construct this map, so that this coding map allows us to relate the original dynamics f with the lifted dynamics sigma. In general, for applications, and this is something we care a lot about, we require that this coding map pi has at least these two properties: it has some regularity, namely it is Hölder continuous, and, I don't know if it's more important or less important, but just as important, it is also finite-to-one. And what is finite-to-one? Finite-to-one means the following. I am considering an extension, so for every point here in the base I can look at its preimage under pi: I take x in M and look at pi inverse of x, which is a subset of Sigma. If these sets are too big, then the complexity of the system is being represented by a system that is much more complex, because it is much bigger, so whenever this happens we cannot say many things about the
original system by understanding the one above. The ideal world is one in which we consider the system and lift it to a system whose complexity is either the same as the original one, or more or less the same, in the sense that, for instance, we preserve entropy. And to preserve entropy, note that the entropy of finite sets is zero, so whenever these sets here are finite they carry no entropy, and the entropy of the original system and the entropy of the lifted one are related; they are actually the same. And this is the ideal world, because then we are able to take an original dynamical system and lift it to one that is combinatorially simpler but has the same complexity as the original. So whenever you understand the dynamical properties of this lifted one, you can project them back to the original one, and then understand what you wanted to understand in the beginning, which was the map f. So it is extremely important to have this finite-to-one property, and, well, it is important to preserve the entropy between the original system and the symbolic space that you construct. Okay. So, about the shift space: in order to maintain compactness, can we have countably many symbols in a full shift?
That is also a good question. The symbolic space is usually non-compact, because for general nonlinear non-uniformly hyperbolic systems the symbolic space is going to have countably many states, so you lose compactness, but you still have local compactness, which is good. Okay. Excuse me, I have a question; actually I am a little bit confused: is it the space that consists of bi-infinite sequences, or only one-sided infinite sequences? They will always be bi-infinite. They will always be bi-infinite because, as Lucas asked, we will be mostly concerned in the first lectures with f's that are invertible, and if f is invertible we have to represent it by another invertible system; that is why Sigma will be bi-infinite, because of the invertibility of f. But if f is not invertible, will Sigma be one-sided? No, it will also be two-sided; it will also be two-sided, but to explain that we have to go a little bit further into the theory, and I will only explain it in the fourth or fifth lecture. Okay, thank you. You're welcome. So what are the applications that we can get out of this construction? As I told you, the idea, and this is exactly the same idea as the Young towers that José Alves is going to explain, is the following: you want to understand some dynamical properties of a system, it's hard to see them from the original geometrical point of view that you are given, so you try to see them with different lenses, with different glasses, in a way that simplifies the properties that you want to understand. The properties that we want to understand are easily understood if we consider exactly the symbolic spaces. So we construct this different representation of our system, which is combinatorially simpler, and being combinatorially simpler means that we can understand many of its dynamical properties, for example counting periodic points and other things; and as soon as we understand them in the combinatorially simpler system
we can pull it back to the original system and conclude the same properties for the original system. So this is the idea, and this construction of Markov partitions has been very successful in many applications. What are those? The first one, and as I told you it is very important here that the coding map is finite-to-one, is to say something about measures of maximal entropy. If the map f is, for instance, C-infinity, it has measures of maximal entropy, and by making use of these Markov partitions, a number of people written here were able to lift such a measure to the symbolic space. In the symbolic space we understand these measures of maximal entropy much better, so the understanding projects down to the original system, and in this way it was possible to prove in this context that the number of measures of maximal entropy is at most countable. In some specific situations, for surface diffeomorphisms that are C-infinity and transitive (well, if you have positive topological entropy, you have measures of maximal entropy), it was possible also to prove that such a measure is unique. This is a very low-dimensional result, it only works for surface diffeomorphisms, but it is a very nice result: it shows uniqueness of the measure of maximal entropy in this context. We can also say many things about the ergodic properties of such measures. Usually the result goes in this direction: the measure is going to be hyperbolic in low dimension, and we can prove that it is either Bernoulli or Bernoulli times a rotation, both in the diffeomorphism and in the flow situation. This work here treated two-dimensional diffeomorphisms and this work here treated three-dimensional flows, and there are some extensions to higher dimensions as well. Periodic points: could you tell me, just by looking at the cat map, how many periodic points of period n it has? It looks very complicated from the geometric point of view, but from the symbolic point of view it is very easy: you
just look at the number of closed loops of length n in the graph. Because of this, we can get results on periodic points. The results, obtained for example by the people listed here, also by Lima and Matheus and by Ben Ovadia, say that if you have a measure of maximal entropy (again, I am restricting myself here to the low-dimensional context), then the number of periodic points grows exponentially fast, and the exponential growth rate is given by the topological entropy. So this h here is the topological entropy of the system, and this is a kind of Margulis-like estimate. In the context of flows you have a similar estimate, you just have to divide by T, but it is still exponential, because the exponential beats the T below. Okay, so these are the classical results that had been obtained up to 2020. More recently there have been new developments, and this is nice because this is exactly the kind of question these theories aimed at from the beginning, namely decay of correlations. It was possible to prove, again in the low-dimensional situation of surface diffeomorphisms, that the unique measure of maximal entropy above actually has exponential decay of correlations; this is contained in the preprint of Buzzi, Crovisier and Sarig. Another instance of these applications is for SRB measures: Ben Ovadia has a paper in which he gives necessary and sufficient conditions for the existence of hyperbolic SRB measures. The way he does it is to use this symbolic representation of the dynamical system to build these measures in the symbolic space and then project them down to the manifold. So this is the class of applications we are interested in, and now my lecture will focus on giving some preliminaries on the methods that we will use for non-uniformly hyperbolic systems. For ease of understanding, I will first do this construction for uniformly hyperbolic ones, and not only that, I am going to assume that,
again for simplification of the presentation, we are in the low-dimensional situation, the situation of surfaces, and moreover that the map f is uniformly hyperbolic over the whole surface, so it is an Anosov diffeomorphism. Okay, what do I mean by that? I mean that I have a splitting of the tangent bundle into two subspaces, E^s and E^u, generated here by some unit vectors, and then on E^s you have uniform contraction of vectors in the future, and on E^u uniform expansion in the future, which is the same as uniform contraction in the past. So on E^s you see uniform contraction under forward iterations; on E^u you see uniform contraction under negative, backward iterations. And I assume that this surface is a Riemannian surface, so I have an inner product defined on it, okay? The idea here is the following: how can we more easily understand such Anosov diffeomorphisms? They have these invariant directions, and it looks like these invariant directions should be the ones we take as a kind of system of coordinates to understand our diffeomorphism. By system of coordinates I mean: could we introduce charts so that, with respect to these charts, we can represent our map f more easily? Well, it depends on what we mean by more easily. Here, more easily means in some hyperbolic way. Being Anosov means that you have hyperbolicity, expansion and contraction, and it would be much nicer if this f were something like a hyperbolic matrix, or a small perturbation of a hyperbolic matrix. It is exactly in this regard that we can construct what are known as Lyapunov charts, which I am going to explain to you right now: with respect to these charts, which give an atlas of my manifold, the dynamics of f will be nothing more than a hyperbolic matrix. Actually, a hyperbolic matrix plus some small error that is not influential.
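Before moving on, the periodic-point count for the cat map mentioned a moment ago can be made concrete. For the toral automorphism induced by A = [[2, 1], [1, 1]], the points of period n are the solutions of (A^n - I)x = 0 mod Z^2, so their number is |det(A^n - I)| = tr(A^n) - 2, and it grows like e^{nh}, where h = log((3+sqrt(5))/2) is the topological entropy. A small numerical sketch (my own illustration, not code from the lecture):

```python
import numpy as np

A = np.array([[2, 1], [1, 1]])  # Arnold's cat map on the 2-torus

def periodic_points(n):
    """Number of points of period n: |det(A^n - I)|."""
    An = np.linalg.matrix_power(A, n)
    return round(abs(np.linalg.det(An - np.eye(2))))

h = np.log((3 + np.sqrt(5)) / 2)  # topological entropy = log of unstable eigenvalue
for n in range(1, 10):
    # the symbolic count of closed loops of length n matches tr(A^n) - 2
    print(n, periodic_points(n), np.trace(np.linalg.matrix_power(A, n)) - 2)
# (1/n) * log(periodic_points(n)) converges to h, the Margulis-like growth rate
```

The two printed columns agree, and the logarithmic growth rate of the count approaches h, as the Margulis-like estimate predicts.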
So how do we produce this kind of simpler representation of these Anosov diffeomorphisms? First of all, we introduce a new inner product, and this new inner product is going to be more dynamically relevant. Recall that the inner product we are given is, first of all, geometrically relevant. Now we are going to use the invariant directions, the stable and unstable directions, to define a new inner product which is more suitable from the dynamical point of view. How do we do that? We fix a lambda that is still smaller than one, but weaker than the expansion and contraction that we have. Then we define the new inner product using the old one, but we consider an average, or not an average but a weighted sum, over the forward iterates of vectors, whenever these vectors are in the stable direction. Whenever you are in the stable direction, you see contraction, so these terms here tend to go to zero, and they actually dominate this factor here, which goes to infinity. The idea is that the faster the contraction, the smaller this number will be. In some sense this new inner product, as defined on the stable direction so far, allows you to measure how fast the contraction in the stable direction is. And similarly you do it for the unstable direction: there you want to do something similar, so you have to iterate the vectors negatively, so that you also see a contraction. These terms go to zero exponentially fast, so they dominate this factor here, and the faster they go to zero, the smaller these inner products are. Again, this inner product on the unstable direction measures how good the uniform contraction is when you iterate vectors negatively in the unstable direction. And you complement the definition of this inner product by making the two subbundles perpendicular, orthogonal: the inner product of v^s and v^u is zero.
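Written out, the weighted sums just described are the following (my reconstruction of the slide's formulas; lambda is the fixed constant lying between the contraction rate and 1):

```latex
\langle v, w\rangle'_x = \sum_{n\ge 0}\lambda^{-2n}\,\big\langle Df^{\,n}_x v,\; Df^{\,n}_x w\big\rangle
  \quad (v,w\in E^s_x),
\qquad
\langle v, w\rangle'_x = \sum_{n\ge 0}\lambda^{-2n}\,\big\langle Df^{-n}_x v,\; Df^{-n}_x w\big\rangle
  \quad (v,w\in E^u_x),
\qquad
\langle v^s, v^u\rangle'_x = 0 .
```

Both series converge because, say on the stable side, the uniform contraction gives a rate kappa < lambda, so the terms are dominated by (kappa/lambda)^{2n}: the decay of the iterates beats the growing weight lambda^{-2n}, exactly as described above.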
Why is this inner product nicer from a dynamical point of view? Well, for those of you who have heard about adapted metrics, that is exactly what this inner product is. It gives rise to a new metric for which the action of the derivative of the map on the stable and unstable directions is uniformly hyperbolic without any constant in front: the norm of Df applied to v^s is genuinely smaller than a constant smaller than one, lambda, times the norm of v^s, already for a single iteration into the future. And here I have written it wrongly: for one iteration into the past in the unstable direction there should be a minus one, Df^{-1} of v^u is also smaller than lambda times the norm of v^u. So with respect to this new inner product you see the contraction on the invariant bundles at every iteration, starting from the very first iteration, which is not true with the constants coming from the definition of uniform hyperbolicity. Well, another good feature of the uniformly hyperbolic situation is that, with respect to this new inner product, the unit vectors that generate the stable and unstable directions have uniformly bounded length: there is a constant L such that, for every x in M, these norms are between L^{-1} and L. Okay. This is very important when we try to understand what could go wrong when we pass from uniformly hyperbolic to non-uniformly hyperbolic models. Another thing I have not written here, but which is actually a geometric property not related to this new inner product: since the splitting is continuous, the angle that E^s and E^u make is uniformly bounded away from zero. You cannot have regions in which this angle gets arbitrarily close to zero: there is an alpha such that the angle between E^s and E^u is bigger than alpha. This is very important, because having the angle bounded away from zero means that you can always distinguish the stable and the unstable direction.
If you had a situation in which the angle goes to zero, it would mean that in the common direction to which the two subbundles are converging you have behavior that you do not control: it looks like it wants to expand, but it also looks like it wants to contract, so it is not properly defined. These are very bad regions for hyperbolicity. So whenever you have angles bounded away from zero, you can separate the stable from the unstable direction, and this is the good situation that actually happens here in the uniformly hyperbolic context. Okay. As a matter of fact, we can actually measure hyperbolicity, and in our low-dimensional context we can measure it using three parameters. They are related to what I just said about the angle and to this new inner product that I introduced. You measure how good the hyperbolicity in the stable direction is just by considering how big the unit vector is with respect to this new norm in the stable direction; it is just this sum here, and as I told you, the faster these terms go to zero, the smaller the sum. So how big or how small s(x) is tells you how fast these numbers converge to zero, that is, how good the uniform contraction in the stable direction is. The same thing for the unstable direction, but then you put a negative sign here. And to distinguish the stable and unstable directions, you also consider the angle between these two invariant directions. Using these three parameters we can, in some sense, measure how good the hyperbolicity at the point x is. Yes? So I would like to ask you something. In the beginning you fixed these vectors e^s_x and e^u_x, and they have norm equal to one with respect to the original norm, right? Yes.
So if by chance the original norm was already adapted, could it be that these numbers s(x) and u(x) are all identical to one? Yes and no: for the uniformly hyperbolic situation, I believe yes, but then this formula will no longer make sense, because the right-hand side will not be equal to one. Okay. You would have to define it with respect to this new inner product, which depends on lambda; you see that there is a lambda here, so to every lambda you can associate a new metric. Ah, I got your point. Okay, thank you. Okay. So why are these concepts good? Because they are going to allow us to represent our map like a hyperbolic matrix. But before representing the map, we have to represent the matrix, sorry, the derivative of the map, as a uniformly hyperbolic matrix. So we are actually able to diagonalize, in some sense, the derivative. Using what? Using a linear change of coordinates, which is given by this matrix C here. What does C do? It sends the canonical basis to the basis of invariant directions e^s and e^u, but it sends them with some weights, s and u. Exactly because whenever you have bad hyperbolicity, this s will be big, and you will send the unit vector to a very tiny vector; and the same thing for u. Now, why do you have to divide by s and u? Because of what happens when you do the calculations: if you are trying to represent your derivative with respect to this new class of linear transformations, you should compose C(x), then Df(x), then the inverse of C at f(x). So here is a picture. Let me make it a little smaller so that you can see: C(x) takes you from R^2 to the tangent space at x; Df(x) goes from there to the tangent space at f(x); and to come back to R^2, you compose with the inverse of C(f(x)). This is the representation of the derivative at x with respect to these linear transformations: you get this composition.
And when you write out this composition, you see that if you do not put the s and u there, the composition is just a matrix you have no control over. But if you put in the s and the u, then this composition becomes a hyperbolic matrix. It is diagonal because these directions are invariant. But even better, this entry is smaller than the lambda you fixed a priori, and this entry has inverse also smaller than lambda. So with respect to the system of coordinates given by these matrices C(x) and C(f(x)), you are seeing your derivative, which originally went from one tangent space to a different one, as a map from R^2 to R^2, and even better, it is a hyperbolic matrix. So you know exactly what happens: for instance, if you consider a square here, its image is going to be a thin rectangle like this. Okay, this is perfect, because we understand the action of the derivative in this kind of chart for the derivative. If you want to understand the whole trajectory, you just compose these maps along the trajectory: a square becomes a rectangle, which becomes a thinner rectangle, which becomes a very thin rectangle, and so on and so on. Afterwards, after doing this iteration, you can project down to the manifold by the action of one of these matrices at the respective iterate. So they are very, very useful for understanding the derivative of the map. As you know, we do not want only the derivative of the map; we want to understand the map itself. So how do these linear transformations allow us to define dynamically relevant charts on which we can better understand the original map itself? Well, before saying that, I should tell you that all of these matrices, recall, I am still in the uniformly hyperbolic situation, vary continuously.
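The diagonalizing conjugation C(f(x))^{-1} Df(x) C(x) just described can be checked in a linear toy case, where the eigendirections and weights are constant, so C(x) does not depend on x. This is my own sketch, using the cat map's matrix and the a priori constant lambda = 1/2:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])  # Df is constant for this linear map
mu, E = np.linalg.eigh(A)               # A is symmetric: mu[0] < 1 < mu[1]
C = E                                   # columns: unit stable / unstable eigendirections

# C(f(x))^{-1} Df(x) C(x); with C constant this is just C^{-1} A C
D = np.linalg.inv(C) @ A @ C

# D is diagonal: diag(mu_s, mu_u) with mu_s = (3-sqrt(5))/2, mu_u = (3+sqrt(5))/2
lam = 0.5
print(np.round(D, 10))
print(abs(D[0, 0]) < lam, abs(1.0 / D[1, 1]) < lam)  # |A-entry| < lambda, |B-entry^{-1}| < lambda
```

In the genuinely nonlinear Anosov case, C(x) varies with x and the weights s(x), u(x) enter the columns, but the conjugated derivative is still a diagonal hyperbolic matrix with the same bounds.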
Angles are uniformly far from zero. So in this class, all of these matrices and their inverses have uniform bounds on their norms: there is only a bounded distortion once you apply these C(x) and their inverses. And again, this is a crucial difference when we pass to the non-uniformly hyperbolic context. In the uniformly hyperbolic context everything is continuous, so things are usually bounded; in particular, the angles are bounded away from zero, the sizes of s(x) and u(x) that I defined above are bounded away from zero and infinity, and as a consequence these matrices have bounded norms. All right. Having called your attention to this, we go on to define what are known as the Lyapunov charts. We want now to use the C(x) to represent not only the derivative but also the map, so we should compose these C(x) with something that goes from the tangent space to the manifold, and the easiest maps that do this are exponential maps. So we compose C with the exponential map: C goes from R^2 to the tangent space at x, and composing with the exponential map, we land inside the manifold. This composition is what we call the Lyapunov chart at the point x. It is properly defined on a small neighborhood of zero; this small neighborhood just needs to be smaller than the injectivity radius of the manifold, so that the exponential map and its inverse are properly defined. What we will actually do is fix a small parameter epsilon and then take Q to be a power of epsilon. The power that appears is related to the regularity of f. Recall that I am assuming that f is C^{1+beta}: its derivative is beta-Hölder. This beta is the same beta appearing here; the reason will become clear when we present a theorem in the next minutes. So far, just picture the following: first I wanted to diagonalize my derivative, and I was able to do this using the C(x). Now I want to diagonalize my map itself.
So I compose C(x) with exponential maps. This defines charts, and with respect to these charts I can represent the action of f in a neighborhood of the point x. I consider this composition here, which represents f exactly in these charts; I need to use the chart at the point x and at the point f(x) for that. I compose with Psi_x, then with f, then with the inverse of Psi_{f(x)}. I get this map f_x, which, because of this inverse, I can only define on a small neighborhood of the origin of R^2; nevertheless, it allows us to understand the behavior of f in a neighborhood of the point x. How? Here is a theorem. The map f_x will not be linear, but it will be almost a hyperbolic matrix, in the following sense: it is a small perturbation of the hyperbolic matrix that diagonalized the derivative. It is the same hyperbolic matrix, with diagonal entries A and B, plus a small error, and this small error and its derivative vanish at the origin. Indeed, this matrix here is actually the derivative of f_x at zero; that is why the remainder has the property that it is zero and its derivative is zero there. But perhaps the most important thing is that this small error is, in some sense, irrelevant for us: we can control its C^{1+beta/2} norm. Look, f is C^{1+beta}, but if you want to approximate f by a hyperbolic matrix, you cannot make the error small in this C^{1+beta} topology; you have to diminish a little bit what you control. With this diminishment we can control the derivatives of these maps h_i, and they will be smaller than epsilon. This is still very good, because it tells us that with respect to these Lyapunov charts, our map is a small perturbation, in the C^{1+beta/2} topology, of this hyperbolic matrix. So again, whatever you see happening for the hyperbolic matrix itself, you see, somewhat distorted but with small distortion, for the map f_x.
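In symbols, the chart and the theorem just stated read roughly as follows (my reconstruction; the precise constants depend on the treatment one follows):

```latex
\Psi_x = \exp_x \circ\, C(x), \qquad
f_x = \Psi_{f(x)}^{-1}\circ f\circ \Psi_x, \qquad
f_x(v) = \begin{pmatrix} A & 0\\ 0 & B \end{pmatrix} v + h(v),
```

```latex
|A|\le \lambda,\qquad |B^{-1}|\le \lambda,\qquad
h(0)=0,\qquad Dh(0)=0,\qquad \|h\|_{C^{1+\beta/2}}<\varepsilon .
```

The diagonal part is the derivative of f_x at the origin, which is why the remainder h vanishes to first order there; the content of the theorem is the smallness of h in the weakened C^{1+beta/2} norm.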
This distortion is introduced by the presence of the small errors h_1, h_2. How do we prove this? Here is a sketch of the proof. Well, you define what h_1 and h_2 should be. Daniel, do you want to ask a question? It was me, I think. So these functions h_i, you control these norms in the sense of a function from R^2 to R^2, right? The h_i's are from R^2 to R. From R^2 to R, okay, yeah, because of the index i, okay. But each of these functions h_i actually depends on x, right? Exactly. So you have this control uniformly in x? Uniformly in x, exactly. Okay, thank you. For all x, this holds. So how do we prove it? We define what h should be: f_x minus its derivative at zero. Automatically, this satisfies the property here, and A and B satisfy these properties here, so we just need to worry about this: we need to control the beta-over-two Hölder norm of the derivative. Such a norm is controlled by comparing the derivative of f_x at two nearby points. So you differentiate f_x and compare its values at w_1 and w_2. When you do that, you can write this out, because f_x is a composition of three maps, so its derivative is a composition of three matrices: at w_1 it is the composition of these three, at w_2 of these three, where the outer ones are derivatives of exponential maps and the middle one is the derivative of f itself. Because of the regularity of the exponential map, if w_1 and w_2 are close, A_1 is going to be close to A_2, and the same for C_1 and C_2. For the middle ones, you just apply the assumption that f is C^{1+beta}: the derivative of f is beta-Hölder, so this is smaller than or equal to a constant times |w_1 - w_2| to the power beta.
And because you have this power beta here, you can split it into two parts, beta over two and beta over two. For one of them, since these points lie in the domain [-Q, Q]^2 and Q, recall, is epsilon to the power three over beta, the difference raised to the power beta over two is going to be very small, and it kills any constant that appears above. So the beta over two in the exponent kills the constants, and this quantity here, the original one we wanted, becomes smaller than or equal to whatever we want, epsilon times the difference now raised to the power beta over two only. This is a kind of trick that happens a lot in analysis: you have an exponent and you want a bound, so you give up a little bit of the exponent, and what you took off of the exponent kills all the constants that appear. Okay. In this way, we constructed charts with respect to which f is nothing but a small perturbation, in this C^{1+beta/2} norm, of hyperbolic matrices. This is perfect, because we can implement the classical idea of graph transforms. Using this, I am actually going to be able to prove to you the existence of local stable and unstable manifolds, at least in this context of C^{1+beta} diffeomorphisms. What are the graph transforms? They are objects that come from the following observation: f_x is a small perturbation of a hyperbolic matrix, so if you have the graph of a function that is almost vertical, its image is going to expand vertically and contract horizontally. Its image is going to be something like this, and you expect its steepness to be even more vertical: if it was not so vertical here, as I am trying to show in this picture, it becomes more vertical here.
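The exponent-splitting step a few lines above can be written out explicitly (my reconstruction, with C the constant coming from the chain-rule estimate and 2*sqrt(2)*Q the diameter of the chart domain):

```latex
C\,|w_1-w_2|^{\beta}
  = C\,|w_1-w_2|^{\beta/2}\,|w_1-w_2|^{\beta/2}
  \le C\,\big(2\sqrt{2}\,Q\big)^{\beta/2}\,|w_1-w_2|^{\beta/2},
\qquad Q=\varepsilon^{3/\beta} .
```

Since Q^{beta/2} = epsilon^{3/2}, the prefactor is of order epsilon^{3/2}, which is smaller than epsilon once epsilon is small: the half-exponent sacrificed is exactly what absorbs the constants.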
So you will have convergence to a certain curve appearing here, which is the set of points that stay together with the original orbit in past iterations. I know that was a complicated sentence; I am going to try to explain it. But first, just looking at this action, let me define what the graph transform is. The observation is that, as I told you, f_x makes vertical graphs more vertical. And not only that: if you take two graphs and look at their images, their distance is going to be smaller. If you take this graph here and another graph here, the images are going to be this one and something much closer than it was at the beginning. So f_x brings two almost vertical graphs closer together. These are the two properties that happen when you take almost vertical graphs and iterate them forward under f_x. And also, if you iterate almost horizontal graphs negatively, that is, if you consider f_x inverse: almost horizontal graphs are going to become even more horizontal, and the pre-images under f_x of two nearby horizontal graphs are going to be much closer together. Based on that, you can define two operators, one acting on almost vertical graphs, the other on almost horizontal graphs. You consider these two spaces, the space of almost vertical graphs and the space of almost horizontal graphs based at the point x, and you define the action of f_x on these spaces. So you actually have what we call two graph transforms. The first graph transform, the unstable one, takes an almost vertical graph at x and sends it to its image, which is an almost vertical graph at f(x). And the same thing you can do in the stable direction: you take a graph that is almost horizontal at f(x), and its image is going to be almost horizontal at x.
So this is the action of the inverse of f_x, after restricting to the small box on which we consider all the maps. The nice thing about each f_x being close to a hyperbolic matrix is that if on these spaces you put the natural norm, the sup norm on the functions that define the graphs, then the two operators F^u and F^s, the graph transforms, are going to be contractions. This is exactly that property of bringing two vertical graphs, or two horizontal graphs, closer together. And using this contraction, well, whenever we see a contraction, we want to look at fixed points. Here we cannot look directly at fixed points, because, as you see, these maps are defined on different spaces. But what we can do, if we want to understand the fixed point of F^u for the almost vertical direction, is go very far into the past and, starting there, iterate all these F^u's until we reach position zero. And the further we go, the better; so here is the picture: we go to a distant past of x, we take any almost vertical graph at position n, far in the past, and we iterate it under all the graph transforms until we arrive at position zero, which is that of x. If we iterate n times, we get one graph; if we iterate 2n times, we get a different one. And as we let n go to minus infinity, exactly the contraction property of all the maps we are applying makes this limit exist. And what is this? This is going to be a new graph here at position zero, which is going to be the local invariant manifold of the point x in the charts, because we are doing everything in R^2. Dynamically, the points of the limiting graph are all the points that stay close together in the domains of definition along x, f^{-1}(x), and so on, along the whole past of iterations. How is v_n defined?
v_n can be any graph you take in the far past, because this construction does not depend on v_n, exactly because you are applying more and more contractions. It is just like the fixed point theorem in Banach spaces: the iterates converge to the unique fixed point, and they converge from any initial condition. So here, v_n can be any almost vertical graph in the far past, okay? So why is this the unstable manifold? Because this is exactly the set of points that, when you iterate backwards, remain in these small domains for the whole past. And if they remain in the small domains for the whole past, in the manifold it means that their trajectories stay close to the trajectory of x in the past. So we not only get that these points that remain close in the whole past exist, but that they carry some differentiability structure, because in these coordinates they form an almost vertical graph: they are the graph of a function, and the graph is an almost vertical curve. Someone else asks why it is f^n and not f^{-n}: it is n going to minus infinity; you should recall that the n appearing here is negative. Okay? Okay, so this is exactly how you identify all the points that are fellow companions of x in the past. The conclusion is that locally they have a differentiable structure: they form a differentiable curve. Is f^{-1}(x) not unique? Yes, it is unique: f is invertible. Recall that we are assuming that f is invertible, so you can go all the way to the future and all the way to the past, in any way you want, from any point you want. Okay, so let us continue. Similarly, you can define the local stable manifold. What should the local stable manifold be? It should be the points that are fellow companions of the trajectory of x in the future. So I look at all the points that fall inside the domains of the charts along the whole future orbit of x.
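Here is a numerical sketch of the unstable graph transform just described, for a hypothetical toy perturbation of the hyperbolic matrix diag(1/2, 2), namely f(x, y) = (x/2 + eps*y^2, 2y). This example is my own, not from the lecture. Because the second coordinate is simply doubled, the image of the graph x = gamma(y) is again a graph, gamma'(y) = gamma(y/2)/2 + eps*(y/2)^2, and iterating this contraction from any initial graph converges to the unstable manifold, which here is computable in closed form as gamma*(y) = (2*eps/7)*y^2:

```python
import numpy as np

eps = 0.1
y = np.linspace(-1.0, 1.0, 201)
gamma = np.cos(3 * y)  # arbitrary initial graph x = gamma(y), deliberately far from the fixed point

# Unstable graph transform for f(x, y) = (x/2 + eps*y**2, 2*y):
# the image of the graph x = gamma(y) is the graph gamma(y/2)/2 + eps*(y/2)**2
for _ in range(60):
    gamma = np.interp(y / 2, y, gamma) / 2 + eps * (y / 2) ** 2

# The fixed point solves gamma(y) = gamma(y/2)/2 + eps*(y/2)**2,
# whose solution is gamma*(y) = (2*eps/7)*y**2
err = np.max(np.abs(gamma - (2 * eps / 7) * y ** 2))
print(err)  # small: the limit does not depend on the initial graph
```

This is the Banach fixed point phenomenon from the lecture in miniature: each application of the transform halves the sup-norm distance between two graphs, so the limit is the same from any starting graph.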
And how should I do that from the geometric perspective, using these graph transforms? You go into the future up to f^n(x), where n is now positive, and you consider any almost horizontal graph. Then you iterate it under the stable graph transforms, this one, the next one, down to F^s at x, and you get an almost horizontal graph at x. As n gets bigger, the images you obtain in this process converge to a curve that is almost horizontal here. This curve is the set of points that always stay close to the trajectory of x in the future, so it is exactly the local stable set of the point. The conclusion of this construction is that this local stable set, which in principle could be any set, is actually a manifold, in our case a curve, just as for the unstable manifold. What I did here is classical material from a course on uniformly hyperbolic dynamics: one of the first things one does is construct these invariant manifolds, and graph transforms are used to construct them. So this machinery of graph transforms is not new in this field; it is actually quite old. Okay. We discussed everything in low dimension, dimension two, with one stable direction and one unstable direction, but we could also do the same in higher dimensions. What is the difference in higher dimensions? Well, in higher dimensions the invariant subspaces perhaps cannot be further decomposed into invariant lines. You could have, for instance, a spiraling behavior here that is converging, but spiraling because of some complex eigenvalue. So you have to work with the whole bundle, the whole subspace, as it is; you cannot work with individual vectors defining each of the directions in these higher-dimensional subspaces. So we define C(x) on the subbundles, such that the image of the d_s-dimensional subspace, the horizontal one, is the stable one,
and the image of the d_u-dimensional subspace, the vertical one, is the unstable one. Here, let me come back, it is better: d_s is the dimension of the stable direction and d_u is the dimension of the unstable direction. And you can define C again using our s and u, except that now s and u are no longer defined as they were: they will be a sort of infimum of the one-dimensional s(x) over the vectors that you take in the stable direction. Then you can define C as before, and in this way the derivative of f at the point x is again represented by a hyperbolic matrix. It is a block matrix in which this block here is a d_s by d_s matrix that is a contraction, with norm smaller than lambda, and this block here is a d_u by d_u matrix whose inverse is a contraction. Then you continue as above: you define the Lyapunov chart, you define the representation of f in the Lyapunov charts, you show that this representation is a small perturbation of a hyperbolic matrix, then you consider almost horizontal graphs and almost vertical graphs, apply the graph transforms, and you construct invariant manifolds also in this higher-dimensional context. This gives a fairly complete picture of what we do when we want to understand locally the dynamics of uniformly hyperbolic systems. Now I will pass to our main goal, which is the non-uniformly hyperbolic systems. I should first tell you what non-uniform hyperbolicity is about. As I told you, uniform hyperbolicity is when you see contraction and expansion at every iterate of your map. Well, sometimes you do not see it at every iterate; you only see it on average. To understand whether you have this contraction and expansion on average, what you should consider is the Lyapunov exponent.
So here is the definition of the Lyapunov exponent. It measures exactly the asymptotic exponential growth rate of the derivative along some vector. And whenever this number is different from zero, it means that you have some sort of exponential expansion or exponential contraction, because, as you see, the Lyapunov exponent has this log here. So if this is positive, it means, more or less, that this quantity behaves like e^{chi n}. So whenever chi is positive, this quantity is exponentially big, and whenever chi is negative, it is exponentially small. So negative Lyapunov exponents are associated to contracting directions, and positive ones are associated to expanding directions. In this context, we have the notion of a chi-hyperbolic measure. What are these measures? Observe that we introduce measures now. Whenever you have an invariant measure, you have a famous theorem, Oseledets' theorem, which says that all of these Lyapunov exponents exist for almost every point. So we pass to the measure world, and we start talking about almost everywhere statements. Saying that an invariant measure is chi-hyperbolic means that its Lyapunov exponents are bounded away from zero: some of them are bigger than chi, and some of them are smaller than minus chi. If this happens almost everywhere, then the measure is called chi-hyperbolic. And this is the classical setting of non-uniform hyperbolicity theory. It is much more complicated than the uniform setting because you need, a priori, this extra structure of an invariant measure. So you only talk about non-uniform hyperbolicity when you are given an invariant measure. What we will do here is actually different from the classical context, because we will not work with any particular measure; we will actually be able to understand what are the good points that exhibit some sort of non-uniform hyperbolicity for some fixed parameter chi.
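As a small numerical illustration of the definition chi(x, v) = lim (1/n) log ||df^n_x v||, here is a sketch for Arnold's cat map on the two-torus; this is my own illustrative example, not a computation from the lecture. The cat map is linear, so its derivative is everywhere the constant matrix A, and for a generic vector the exponent equals the log of the largest eigenvalue of A.

```python
import numpy as np

# Lyapunov exponent of Arnold's cat map: df = A everywhere, so for a
# generic v the limit (1/n) log ||A^n v|| equals log of the largest
# eigenvalue of A, namely log((3 + sqrt(5)) / 2) ~ 0.9624.

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
v = np.array([1.0, 0.3])           # a generic tangent vector

n = 1000
chi = 0.0
for _ in range(n):
    v = A @ v
    r = np.linalg.norm(v)
    chi += np.log(r)               # accumulate the log of the growth factor
    v /= r                         # renormalize to avoid overflow
chi /= n                           # average log-growth -> Lyapunov exponent

# the average converges to the exact value at rate O(1/n)
assert abs(chi - np.log((3 + np.sqrt(5)) / 2)) < 1e-3
```

The renormalize-and-accumulate loop is the standard way to compute Lyapunov exponents numerically; the same loop applied to a nonlinear map (with A replaced by the Jacobian along the orbit) estimates the exponent at a point.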
And, with respect to this fixed parameter chi, these good points are going, in some sense, to see all the measures at the same time. So we forget the classical view of looking at a single measure, and we understand how to measure non-uniform hyperbolicity pointwise, in such a way that when we consider the set of all these points, this set is big enough to capture all the chi-hyperbolic measures at the same time. In this way, our construction makes no reference to any particular measure. It only makes reference to a notion, which we are going to introduce, of good non-uniformly hyperbolic points. And how do we do this? We are going to define, I recall to you that we are still in the two-dimensional situation, a subset of the manifold. So we are given a chi, which is fixed, and I want, in some sense, to collect the points that have non-uniform hyperbolicity at least as good as chi. I am going to call this the non-uniformly hyperbolic locus, and I am going to denote it by this symbol here. So what is this subset of M? It is the set of points x of M for which, when you look at the tangent space at x, you can find two unit vectors: this one is going to represent a stable direction, and this one is going to represent an unstable direction. These directions are transverse, so they are actually different. And in these directions you see a behavior just like that of stable and unstable directions. So look: for this stable direction, in the future you see an asymptotic contraction of rate at least minus chi, and in the past, sorry, in the past you see just an expansion. So this is exactly what a stable direction should do: it should contract in the future, and it should expand in the past. Here we require that this contraction has rate at least minus chi; we only look at points for which this contraction has rate at least minus chi. You ask the same thing for the unstable direction.
So for the unstable direction you require contraction in the past: you require that the unstable vector e^u gets contracted on average in the past, with rate at least minus chi, and that it gets expanded in the future. So this means that it is a kind of unstable direction. This, in principle, would be almost sufficient for us: we are looking at the points which have Lyapunov exponents bounded away from zero by chi, both for the stable and the unstable directions. But we require something extra. We also require that the same s and u that we defined before, when you calculate them with respect to chi, so here you should see this as lambda^{-n}, where lambda equals e^{-chi} is the contraction rate, we only look at the points for which these numbers are finite. So again, we are defining the subset of our manifold for which there are two directions that resemble, that actually are, stable and unstable directions in a non-uniformly hyperbolic way, and for which, additionally, when you calculate these two sums, they are both finite. You may have a question. Sure. In your two hypotheses, for instance, can we allow the liminf to be zero? I mean, should this liminf be uniformly bounded away from zero, or can it approach zero? It can approach zero, but it cannot be zero. As you see, this is going to be an unstable direction, so it should expand in the past. I see, but the expansion can be as slow as one wants? Okay. Yes. Yes. Thank you. Yes. So it all looks very technical, but you just list these properties and you look at the set of points x with these properties. This is our non-uniformly hyperbolic locus with parameter chi. This is one way in which you can measure hyperbolicity. Observe that you are requiring something a little bit better than having the Lyapunov exponents smaller than or equal to minus chi: you could have Lyapunov exponent equal to minus chi, as long as these sums here are finite. Okay.
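For the record, the two finiteness conditions being described can be written as follows. This is modeled on the Lima–Sarig formulation of these quantities, so the factor of √2 and the exact normalization should be taken as indicative rather than as literally what is on the slides; note that the weights e^{2χn} are exactly λ^{-2n} with λ = e^{-χ}, as said above.

```latex
\[
s(x) = \sqrt{2}\,\Bigl(\sum_{n\ge 0} e^{2\chi n}\,\bigl\|df^{\,n}_x e^s_x\bigr\|^2\Bigr)^{1/2} < \infty,
\qquad
u(x) = \sqrt{2}\,\Bigl(\sum_{n\ge 0} e^{2\chi n}\,\bigl\|df^{\,-n}_x e^u_x\bigr\|^2\Bigr)^{1/2} < \infty.
\]
```

A Lyapunov exponent strictly smaller than -χ makes each sum converge automatically; the borderline case of exponent exactly -χ is allowed precisely when the sum still happens to converge, which is the "little bit better" requirement mentioned above.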
And what is the main difficulty when you pass to this non-uniformly hyperbolic context? Well, one property that remains true is that this set, for every chi, is invariant: if x belongs to it, then f(x) belongs to it as well. But it is usually very poor from the topological point of view; it is not compact. One good feature is that, as I told you, understanding this set is sufficient to understand, at the same time, all hyperbolic measures with parameter chi, because they all live inside this non-uniformly hyperbolic set. So this is good, this is good, this is bad. And another bad property is that the parameters s, u, and alpha, the angle that we used to measure hyperbolicity in the uniformly hyperbolic situation, are no longer continuous when you look at them as functions on this set. So, for example, you could have two very nearby points x and y for which the two invariant directions at x, the s and u here, are completely unrelated to the two invariant directions at y. So you have a complete loss of continuity when you consider this non-uniformly hyperbolic context. You could ask yourselves: what happens if I change chi? Well, if you take chi prime smaller than chi, then in the set that you get you are allowing a weaker hyperbolicity, so this set is going to contain the other one. So as you let chi go to zero, you put more points inside your set, and you are able to understand more of the non-uniform hyperbolicity of the system. Okay. Nevertheless, as I told you, we will fix chi throughout the construction; chi will be a fixed parameter. So what we will do now is, just like we did for the Lyapunov charts, to consider some nicer system of coordinates defined for all points inside this subset of M. And our first step is to construct a different inner product, which is more relevant from the dynamical point of view.
So we define an inner product on this set in a very similar way to how we defined the Lyapunov inner product in the uniformly hyperbolic situation. It is the same sum, as long as here you take lambda to be e^{-chi}, which is the contraction rate that you see asymptotically. You define it forward in the stable direction, you define it backward in the unstable direction, and you declare e^s and e^u to be perpendicular to each other. And if you define this, then again, if you make the calculations, the derivative of f is diagonalized by matrices C(x), which are defined exactly as above: C(x) sends the unit vector (1,0) to e^s(x)/s(x). Now look, s(x) needs to be finite for this to make sense, otherwise I would be dividing by infinity. So I require s(x) to be finite in order to define C(x). And the image of (0,1) is e^u(x)/u(x). So in this way I define again C(x) from R^2 to T_xM, and in the same way these linear transformations diagonalize the derivative of f; the proof is the same. Then we can use these maps C(x) to introduce the charts that we are going to use to represent our system. In this context, these charts are known as Pesin charts, because they were introduced by Yakov Pesin, who was one of the great developers of the theory of non-uniform hyperbolicity in the late 70s; this theory of non-uniform hyperbolicity is actually called Pesin theory. So we do exactly the same thing: we compose C(x) with the exponential map at x. It can only be defined in a small neighborhood of x, because of injectivity radius issues. But now we are faced with a new difficulty. To recover the hyperbolicity that we saw in the uniformly hyperbolic situation, we will only have it if the Q defining the domain of the chart is very small. And this Q is totally related to the rate of non-uniform hyperbolicity at x.
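Written out, the construction being described looks as follows, modulo normalization constants (again in the Lima–Sarig style, so take the factors as indicative): the inner product sums forward along the stable direction and backward along the unstable one, and C_χ(x) normalizes the two directions.

```latex
\[
\langle v, w\rangle^s_x = 2\sum_{n\ge 0} e^{2\chi n}\,\bigl\langle df^{\,n}_x v,\; df^{\,n}_x w \bigr\rangle,
\qquad
\langle v, w\rangle^u_x = 2\sum_{n\ge 0} e^{2\chi n}\,\bigl\langle df^{\,-n}_x v,\; df^{\,-n}_x w \bigr\rangle,
\]
\[
C_\chi(x) : \mathbb{R}^2 \to T_xM,
\qquad
C_\chi(x)(1,0) = \frac{e^s_x}{s(x)},
\qquad
C_\chi(x)(0,1) = \frac{e^u_x}{u(x)},
\]
```

so that, in the new inner product, e^s_x/s(x) and e^u_x/u(x) become an orthonormal frame, exactly as in the uniformly hyperbolic case, and the derivative becomes diagonal in these coordinates.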
The worse this non-uniform hyperbolicity is, the more I have to zoom inside the neighborhood of x to recover true hyperbolicity. The reason is that, when we make the calculation in order to control the C^{1+beta/2} norm, this term here appears, which in the uniformly hyperbolic situation was uniformly bounded: recall that all the matrices C, and their inverses, had norms bounded away from zero and infinity. But now it can happen that the stable and unstable directions get arbitrarily close to each other. They are different, but they can get arbitrarily close to each other, so the norm of the inverse matrix might be very large. And as you do this calculation, this term here appears. So in order to kill this term, which before was uniformly bounded but now can be very big, you can only do it by making w1 and w2 very small, which means that you have to restrict yourself to an even smaller neighborhood of zero. So the trick here is the same: you take this exponent beta and divide it into two parts, and in one of the parts you require that w1 and w2 are very close to each other. So w1 and w2 being very close to each other will imply that this difference is very small, and this difference will kill the possibly very big norm that is appearing here. So this is how we introduce this new parameter Q(x), which now is totally related to the hyperbolicity features at x. That is why it depends on x; before it did not depend on the point, it was uniform. And how is it defined? It has to kill the possible growth of this norm, so we define it as a constant times a very large negative power of this norm. This large negative power depends on beta; it is something like 12 over beta. So the smaller beta is, the bigger this power is.
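In formulas, a choice of this kind looks as follows; the power 12/β matches what is said above, but the precise constant and exponents vary a little between presentations (Sarig's original construction also snaps this value to a discrete set of scales), so take this as a sketch.

```latex
\[
Q_{\varepsilon}(x) \;=\; \varepsilon^{3/\beta}\,\bigl\|C_\chi(x)^{-1}\bigr\|^{-12/\beta},
\]
```

so a large norm of C_χ(x)^{-1}, that is, stable and unstable directions nearly parallel or weak hyperbolicity along the orbit, forces a very small chart domain Q_ε(x).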
And when you do this and make the calculations properly, you get the same result as we had before, but now in the zoomed region. So, in this much smaller neighborhood of zero, whose size depends directly on the hyperbolicity features of the point x, f, when seen in Pesin charts, is a small perturbation of a hyperbolic matrix, and the error term is exactly as before. Because of that, we will be able to again apply the graph transform method and construct local invariant manifolds also in the non-uniformly hyperbolic situation. But this will only happen in the next lecture, because we have reached the end of this first lecture of the mini-course. So, I believe we don't have chairs, so I will be my own chair. Now I invite you to make any final question or comment that you want before we finalize the recording of today. So I have two questions, Yuri. At the end of the day, the size of the stable or unstable submanifold will depend on x; you won't be able to give a lower bound? I will not. Okay. And my second question, and I don't fully understand the implications of my own question in some sense, but in the first setting, the uniformly hyperbolic one, we created some charts which were, in some sense, centered at each point x; we had one of these good charts at every point. But now we have charts only centered at points in NUH chi. Yes. And if we collect all of these charts, do we recover something? No, you might be missing some neighborhoods in the manifold. So it is not an atlas, especially because in this context you could have a region of fixed points. At fixed points there is no dynamics going on, so our charts will not see this region. Still, they will be useful. Let us say that you want to understand, say, f to the thousand at a particular point x, which I know has some non-uniform hyperbolicity. So what do you do?
Instead of looking at f to the thousand directly, you look at the composition of f at x, with f at f(x), with f at f^2(x), and so on up to f at f^{999}(x), along the orbit of x up to this iterate. The composition of these almost hyperbolic maps will be exactly f to the thousand in a neighborhood of x. So you can locally understand the iteration of f in a neighborhood of x by composing these almost hyperbolic maps above. But if I collect all of the domains of these charts, I am recovering something which is larger than NUH of chi? Yes and no. In principle, you could. But we are going to have to refine our set: the set of points that we are going to understand is not this NUH, because we need some extra conditions on the hyperbolicity parameters. After that, we are going to reduce to a subset of the set that I defined today. And this subset we are going to understand fully, and only it: we are going to define a map that is surjective onto this set. Okay. Okay. Thank you. And is there a way to see these stable and unstable sets globally? Because in the uniformly hyperbolic setting we have global manifolds, every one of them is smooth, and there is a continuous splitting, a partition. Here everything seems like a mess. Everything is defined locally, but after you define it locally, you can use the dynamics to define it globally. So you can define the global stable manifold of x by looking at the local stable manifolds at the pre-iterates of x and then iterating them forward up to x. Will they form a partition? It will not be a partition, because the set, anyway, is usually not all of M. As I told Lucas, you could have regions of M in which you have neutral dynamics, you have the identity. For the identity nothing is going on: there is no unstable manifold, there is no stable manifold. But it will be a partition of a full measure set. Yes.
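At the level of derivatives, this composition-along-the-orbit principle is just the chain rule, and it is easy to check numerically. The map below is a hypothetical smooth example of my own, not one from the lecture; the sketch verifies that df^n at a point equals the ordered product of the derivatives of f along the orbit.

```python
import numpy as np

# df^n_x = df(f^{n-1} x) ... df(f(x)) df(x): the derivative of a long
# iterate is the ordered composition of one-step derivatives along the
# orbit, which is how f^1000 is understood locally in the charts.

def f(p):
    x, y = p
    return np.array([0.5 * x + 0.1 * np.sin(y), 2.0 * y + 0.1 * x * x])

def df(p):                          # Jacobian matrix of f at p
    x, y = p
    return np.array([[0.5, 0.1 * np.cos(y)],
                     [0.2 * x, 2.0]])

p0 = np.array([0.3, -0.2])
n = 5

# ordered product of one-step Jacobians along the orbit of p0
J = np.eye(2)
q = p0.copy()
for _ in range(n):
    J = df(q) @ J
    q = f(q)

# sanity check against a centered finite-difference Jacobian of f^n
def fn(p, k):
    for _ in range(k):
        p = f(p)
    return p

h = 1e-6
J_num = np.column_stack([(fn(p0 + h * e, n) - fn(p0 - h * e, n)) / (2 * h)
                         for e in np.eye(2)])
assert np.allclose(J, J_num, atol=1e-5)
```

In the lecture's setting, the same bookkeeping is done for the chart representations of f themselves, not only for their derivatives, with each factor an almost hyperbolic map.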
For all chi-hyperbolic measures, it will be. Any other question? Is there any example where we can see the two directions colliding? For maps, I don't know; I only know one in the flow situation. In the flow situation, let me come back here. This is a problem that I had when I was working with Carlos Matheus and Ian Melbourne recently, on a problem that considers the geodesic flow on a surface of non-positive curvature in which the curvature is zero only on a disk here. As you approach this disk, the stable and unstable directions coincide. So if you take a trajectory that gets very close to this disk, the stable and unstable directions on the left-hand side are far apart, but then they start to get very, very close. They never become the same, because they would only be the same for exactly the geodesics of this flat region, but the angle between them gets arbitrarily close to zero. Okay. Thank you. This one was nice. Thank you. What about, Katrin asked, a degenerate horseshoe in R^2 with a parabolic fixed point? Could you remind me what a parabolic fixed point is? That is where the derivative is the identity in all directions. The derivative is the identity: it is like a parabolic fixed point of a one-dimensional interval map, just a generalization of that, like a product. But you are thinking of something like a Katok map? I don't know what to say. Like a neutral fixed point? Exactly, yes. Well, in this case the stable direction does not collide with the unstable direction, right? What degenerates? It degenerates the hyperbolicity rates, but not the angle. So you have two possible degeneracies here. In the one that I drew, the angle degenerates, and automatically also the hyperbolicity rate; but the angle is a somewhat subtler situation. It is more subtle, at least I think so, at least it was in my work; some other people might have a different opinion. May I ask a question, Yuri? Yes. It is just a curiosity.
Should these hyperbolicity parameters be upper or lower semi-continuous in interesting cases? I was thinking about the rank theorem, something like that, and I was expecting at least upper semi-continuity of them. I don't know if this is a good intuition or not. I never thought about this, but usually these objects that are obtained as limits have one of the semi-continuities, right? So I believe there is a chance for at least one of the sides to occur, but I would not give you a definite answer, because I never thought about it. Great. Thank you, Yuri. Any other question or comment? Yuri, a question. In your example on billiards, there is an example where you avoid the corners. Is there a reason for that? To avoid the corners? Like a box, but with circles in the corners. No, it is just the following. That is an example where you look at the two-torus, and in the two-torus you remove two obstacles: you remove this one, for instance, and you remove this one. When you look at what it means to remove them in the fundamental domain of the two-torus, that is the picture I drew. So it means that you are removing a disk here, sorry, here, and a disk centered at this point, which is nothing but this one, drawn in the fundamental domain. So you have the billiard going on here, bouncing, hitting, and so on. And below it is much easier to understand, right? You start here, you go until you hit, and so on. Okay? Okay, thanks. Before finishing, I just want you to keep in mind two things. Always try to see first what happens in the uniformly hyperbolic situation: everything is continuous, so everything happens nicely. Then go to the non-uniformly hyperbolic world: things no longer vary continuously, but we can still mimic many of the objects that come from the uniformly hyperbolic situation in this new non-uniformly hyperbolic one. The difficulty is that now the sizes have to be better controlled.
So the sizes will depend on the point, more specifically on the trajectory of the point. For instance, as Lucas said, before, in the uniformly hyperbolic situation, the local invariant manifolds were defined with a fixed size for every point. Now everything depends on the point, and it depends in a very non-continuous way. So this is the main difficulty of the approach: we are going to have to bypass this variation in some sense. Okay? The first instance of this is in constructing the invariant manifolds, because the Q of x is different from the Q of f(x), and we have to understand how it varies. This I am going to do in the next lecture. All right? And please take a look at the survey, because the survey contains this and much more discussion; it actually contains all the calculations that I did not do today. It is there. So if you have any doubts, you can look at it and understand how to do them. And, having no more questions, I believe we can end this first lecture. The next lecture will be on Friday, at exactly the same time. Okay? So I will see you there; if you need anything, just contact