Okay. So I believe we can actually start with the presentation. So welcome everybody. Today is the third lecture of the mini course on symbolic dynamics for non-uniformly hyperbolic systems, and today I will mostly focus on the construction, on the theorem of Omri Sarig from 2013 for non-uniformly hyperbolic systems. I'm actually hearing some noise; let me mute some people. Oh, okay. So today I will focus mostly on the construction of symbolic dynamics for non-uniformly hyperbolic diffeomorphisms, and as usual we will focus on dimension two to ease the explanation of the main new ideas. In the next lecture I will explain how we adapt this technique to more difficult contexts such as billiards, flows, and perhaps non-invertible dynamical systems. I would like to start by recalling what we did in the last lecture so that we can continue. In the last lecture we used what we had done in the first lecture about invariant manifolds for uniformly hyperbolic systems. Then we understood that if you want to do the same thing in the non-uniformly hyperbolic context, the crucial idea is to restrict ourselves to points with some non-uniform hyperbolicity for which the parameter Q does not decrease to zero exponentially fast. This gave rise to the subset NUH star, on which we could apply the graph transform technique and obtain invariant manifolds for points, or rather trajectories, inside this set. So we completed the construction of invariant manifolds both in the uniformly hyperbolic context and in the non-uniformly hyperbolic one, and we also explained how to do it in more complicated situations such as Poincaré return maps of flows and billiard maps. Then we started discussing how to effectively construct Markov partitions, and in the last lecture we focused only on uniformly hyperbolic diffeomorphisms. 
So the idea was to follow Bowen's approach of pseudo-orbits. The first thing we did was to introduce pseudo-orbits for uniformly hyperbolic diffeomorphisms, and we rephrased everything about pseudo-orbits in terms of a notion that I introduced called epsilon overlap. You should recall that in the uniformly hyperbolic situation, saying that two Lyapunov charts epsilon overlap is nothing but saying that their centers x and y are close enough. So this was a sort of overkill from the point of view of notation, just to say that two points are close to each other. But today you will see that this notion makes it easier to understand what we will do, and that is the reason we represented pseudo-orbits by means of this new notion of epsilon overlap: today we will do it in the non-uniformly hyperbolic context. After introducing this notion, we understood that graph transforms can be applied not only to real trajectories but also to pseudo-orbits. Because of this we were able to develop Bowen's method, in which he constructed Markov partitions using graph transforms along pseudo-orbits. Recall that we divided the construction into three steps. The first one was to consider a sufficiently dense subset of our phase space. The second step was to construct an intermediate coding map, which had most of the properties we wanted but unfortunately was usually infinite-to-one. The third step was to pass from this infinite-to-one coding to an actual finite-to-one coding, and it was based on the Bowen–Sinai refinement: every time we saw an intersection of rectangles, we divided the rectangles using the stable and unstable manifolds, according to which parts of one rectangle intersect the other. So this is the summary of the last lecture. 
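Since pseudo-orbits will be central again today, here is a minimal sketch of the definition in code. The map `f` below is a toy linear hyperbolic map of the plane standing in for the diffeomorphism; the tolerance and the sample points are illustrative choices, not part of Bowen's construction itself.

```python
import math

def f(p):
    # Toy hyperbolic map on the plane (a stand-in for the diffeomorphism):
    # expands along the x-axis, contracts along the y-axis.
    x, y = p
    return (2.0 * x, 0.5 * y)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def is_pseudo_orbit(points, eps):
    """A finite sequence (x_0, ..., x_n) is an eps-pseudo-orbit
    if d(f(x_i), x_{i+1}) <= eps for every i."""
    return all(dist(f(points[i]), points[i + 1]) <= eps
               for i in range(len(points) - 1))

orbit = [(0.1, 0.8), (0.2, 0.4), (0.4, 0.2)]    # a genuine orbit of f
noisy = [(0.1, 0.8), (0.21, 0.4), (0.43, 0.2)]  # jumps of size about 0.01
```

A genuine orbit is an eps-pseudo-orbit for every eps, while the noisy sequence qualifies only when eps exceeds the size of the jumps.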
So today we want to be guided by the construction that we did in the second lecture and construct Markov partitions in the non-uniformly hyperbolic situation. The usual difficulties come into play. First of all, and I already told you this a few times in the previous lectures, the objects that we consider, in particular Pesin charts, which are the charts that allow us to understand hyperbolicity in this context, do not vary continuously. This was very important for us in the uniformly hyperbolic situation: whenever x was close to y, we knew automatically that the stable and unstable directions were close to one another, and so the charts associated to these two points were close to each other. Now we lose continuity; we only have measurability. So in principle we could have two points very close in the manifold for which the objects that we use to measure their hyperbolic features are very distinct. Actually, the second difficulty is totally related to, and in some sense explains, the first one: the non-uniformly hyperbolic behavior of points varies a lot. Quantitatively this means that for the stable and unstable directions, and also for the parameter Q that we introduced, there is no reason for two nearby points to have these quantities close to each other. But fortunately we at least know how to measure this non-uniformly hyperbolic behavior. We have introduced a bunch of parameters that allow us to understand exactly the non-uniformly hyperbolic behavior of each point. Recall that s and u are the parameters along the invariant directions: s measures how well the stable direction contracts in the future, u measures how well the unstable direction contracts in the past, and alpha measures how distinct the stable and unstable directions are. So alpha is just the angle between these two directions. 
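To see concretely why a small angle alpha is a problem, here is a schematic computation. We build a cartoon of the linearizing map C(x) recalled from the previous lecture, whose columns point along E^s and E^u scaled by 1/s and 1/u, and watch the norm of its inverse blow up as the angle shrinks. The formula below is only an illustration of the idea, not Sarig's exact definition, and the Frobenius norm stands in for the operator norm.

```python
import math

def chart_matrix(s, u, alpha):
    """Schematic version of the linear map C(x): its columns are unit
    vectors along E^s and E^u (separated by angle alpha), scaled by
    1/s and 1/u.  A cartoon of the actual definition."""
    es = (1.0 / s, 0.0)                              # put E^s on the x-axis
    eu = (math.cos(alpha) / u, math.sin(alpha) / u)  # E^u at angle alpha
    return ((es[0], eu[0]), (es[1], eu[1]))

def inv_frobenius_norm(m):
    """Frobenius norm of the inverse of a 2x2 matrix (comparable to the
    operator norm of C(x)^{-1}, which controls the scale Q(x))."""
    (a, b), (c, d) = m
    det = a * d - b * c
    inv = (d / det, -b / det, -c / det, a / det)
    return math.sqrt(sum(t * t for t in inv))

# The smaller the angle between E^s and E^u, the larger ||C(x)^{-1}||,
# hence the smaller the scale at which hyperbolicity can be recovered.
big_angle = inv_frobenius_norm(chart_matrix(0.5, 0.5, math.pi / 2))
tiny_angle = inv_frobenius_norm(chart_matrix(0.5, 0.5, 0.01))
```

With identical contraction rates, shrinking the angle from a right angle to 0.01 radians inflates the norm of the inverse by roughly two orders of magnitude.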
Recall that we are focusing all the time on surface diffeomorphisms. That is why we only need one value to measure the hyperbolicity along the stable direction, which has dimension one, one value to do the same for the unstable direction, and then alpha, which is just an angle. Based on these three parameters we introduced the map C(x), for which the norm of its inverse is related to the parameter capital Q, which tells us to which scale we have to reduce our Pesin chart in order to regain hyperbolicity along the trajectory of the point. Then we introduced three extra parameters which are directly related to the graph transforms. The small q is in some sense measuring how fast the big Q is possibly converging to zero; remember that the small q is an infimum of the big Qs along the trajectory of the point. This is a good scale at which to apply graph transform methods, and we actually divided this definition into two parts: one that only sees the future behavior, the small q^s, and one that only sees the past behavior, the small q^u. We understood that we could consider the stable graph transform at the scale q^s and the unstable graph transform at the scale q^u, and the good feature is that usually q^s can be bigger than the small q, so that the stable manifold we constructed was of a size potentially larger than the one given by q. In some sense, and I told you this in the last lecture, these two parameters are the largest scales at which I can consider the stable and the unstable graph transform, so we are doing everything in an almost optimal way. Okay, so these are the difficulties. Before presenting the first input that we introduce in order to bypass them, I want to mention the previous results on the question that we want to address: we want to construct Markov partitions for non-uniformly hyperbolic systems. 
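The one-sided scales can be illustrated with a small computation. The weight e^{epsilon n} below is a schematic normalization (the precise definition in Sarig's paper differs), and the two sequences of Q-values are made up for the example; the point is that positivity of q^s and q^u detects exactly subexponential decay of Q along the orbit.

```python
import math

EPS = 0.1  # the chart parameter epsilon (illustrative value)

def q_s(Q_future):
    """Schematic q^s: infimum of e^{eps*n} * Q(f^n x) over the future.
    Positive iff Q decays slower than e^{-eps*n} along the forward orbit."""
    return min(math.exp(EPS * n) * q for n, q in enumerate(Q_future))

def q_u(Q_past):
    """Schematic q^u: the same infimum over the backward orbit;
    Q_past[n] stands for Q(f^{-n} x)."""
    return min(math.exp(EPS * n) * q for n, q in enumerate(Q_past))

def q(Q_future, Q_past):
    # q is the smaller of the two one-sided scales, so q <= q^s and
    # q <= q^u: each graph transform may act at a scale larger than q.
    return min(q_s(Q_future), q_u(Q_past))

# Subexponential decay of Q (here: polynomial) keeps the scales positive...
poly = [1.0 / (n + 1) for n in range(200)]
# ...while decay faster than e^{-eps*n} drives them to zero.
expo = [math.exp(-0.5 * n) for n in range(200)]
```

Running this on the two sample orbits, the polynomially decaying Q-values give a scale q bounded away from zero, while the exponentially decaying ones give a scale that is numerically indistinguishable from zero.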
What was known before the main result of Sarig that I will mention today? Well, something was known, and it was due to Katok. It is a quite old and famous result from 1980 in which he constructed the now called Katok horseshoes. In the context of surface diffeomorphisms, what Katok proved was: if the surface diffeomorphism has positive topological entropy, then you can find actual horseshoes with finitely many legs, so from the symbolic point of view they have finitely many symbols, such that the entropy of these horseshoes can be as close to the topological entropy of the diffeomorphism as we wish. In some sense you can approximate the full topological entropy of your surface diffeomorphism using genuine uniformly hyperbolic compact subsets, which are these horseshoes with finitely many symbols. And what was the idea of Katok? Well, the machinery of Pesin theory that constructs the invariant manifolds was already present, so his idea was: what if I know that all these objects do not vary continuously, but I restrict my attention to the Pesin sets? Pesin sets are exactly subsets of your phase space on which all the quantifiers vary continuously. This is a sort of application of Lusin's theorem: if something is measurable, then it is basically continuous on sets of large measure, so you can restrict your attention to these large sets on which continuity holds, and with continuity you are in good shape to apply exactly the notion of pseudo-orbits. I think my internet is a little bit unstable. Are you hearing me? Yes, yes. Okay, good. If for some reason you don't hear me, please let me know, because otherwise I will be speaking only to myself here. Okay. So the idea of Katok was like that: things vary measurably, but on large sets they vary continuously, so I restrict attention to these large sets and apply Bowen's approach using the notion of pseudo-orbits. 
He indeed needs a more precise version of pseudo-orbits, and if you look at the Katok–Hasselblatt book, which has an appendix by Katok and Mendoza, exactly the notion of pseudo-orbit presented there is the notion that allowed him to construct these horseshoes. Okay. So this was known since 1980, and the problem is that these horseshoes can have entropy as close to the topological entropy as we want, but not necessarily can you find one horseshoe with entropy exactly equal to the topological entropy. You can approximate, but maybe you cannot make it equal, and this was exactly the new result of Sarig from 2013, at least it was published in 2013, in which he was able to construct an object, which I will still call a horseshoe, that now has countably many symbols, in some sense a horseshoe with countably many legs, but that has full topological entropy. So you relax the assumption on the number of symbols, passing from finitely many to countably many, but under this relaxation you are able to get the full topological entropy of the system. This is a very satisfactory answer from the symbolic point of view in terms of entropy theory, because you are able to construct an object which is combinatorially easier to understand and that carries the full topological entropy of your original system. So our plan today is exactly to try to understand how to prove this result of Sarig, okay? Do we have any questions so far? I don't want this mini course to be a monologue in which only I speak, so if you have questions, people that were asking questions last time like Marisa, Lucas, Leandro, you are invited to ask questions not only at the end of the lectures but also during the lectures. So, do you have any questions so far? I might have missed something, but what do you mean by full topological entropy? Is it a measure with maximal entropy? Yes. You talk about a metric property. 
Yes, exactly. So we are able to get the measure of maximal entropy and get the coding of this measure at almost every point, okay? Actually, what we do is to consider a subset, just like the NUH, and we will be able to code this whole subset, and we know that this subset carries all the entropy of the system, so we do not even need to talk about invariant measures. We can restrict ourselves to this invariant subset that exhibits some sort of non-uniform hyperbolicity. Any other question? Yuri? Yes. Why is taking a union of Katok horseshoes not a good idea, if you take the union of a sequence of Katok horseshoes? That is a good question, and I wouldn't be able to answer it right away. For instance, these Katok horseshoes are not invariant, okay. Each horseshoe is invariant, no? Well, the Pesin sets are not invariant, so the objects that you construct are not invariant, that is what I recall. Yeah, Pesin sets, you're right, Pesin sets are not invariant, but you find these Katok horseshoes that are invariant by some f^n. Yes, yes, they come back to themselves, yes. So you take the orbit of this horseshoe, you find some invariant set in fact, and you could have many intersections, right? Exactly. I never thought about it, this is just out of curiosity, but it seems that it is not just a naive thing to take the union of them; maybe, as you said, they may have intersections between different horseshoes and then you will not be able to find some coding, some global thing. Yeah, because as I told you, it is very important that this coding is finite-to-one, okay? Exactly, or at most countable-to-one; otherwise you cannot relate invariant measures, and even though the result does not focus on invariant measures, for the purpose of applications we do apply it to them, so not having this finiteness-to-one is a bad thing. We are always looking for finite-to-one codings, okay? 
Yuri, you told us that in Sarig's result you can find a measure of maximal entropy in that horseshoe, but in the union of Katok horseshoes maybe you cannot find this measure. Yeah, yeah, it can only be realized as a supremum but not as a maximum, right? Yes, yes, another difficulty of taking only the union. Good, thank you, Vilton; it's good to know that people are paying attention, so I think now I can continue satisfactorily. Okay, so this is the way of relating to Katok, but what we actually construct is a Markov partition on a set that exhibits some good non-uniform hyperbolicity, so let me introduce this set to you. We fixed the parameter chi and we looked at points with some non-uniform hyperbolicity, and later on, for the purpose of introducing graph transforms, we constructed a subset of it consisting of the points for which the capital Q does not converge exponentially fast to zero. Now I will introduce a new subset, an extra subset, which I will call NUH sharp. So now we have an even newer non-uniformly hyperbolic locus, and the idea is to introduce some recurrence into the set that we have so far. How do we introduce this recurrence? This is the set: we consider now only the points which have some non-uniform hyperbolicity and for which the capital Q does not decrease exponentially fast to zero, which means that the small q is positive along the orbit, and then we require that the small q in the future stays bounded away from zero along a subsequence, and the same thing for the past. So in some sense what we have here is a subset which is somewhat related to a union of Pesin sets. Why does the union in some sense appear here? Because you can see the set on which q is bounded away from zero as a sort of Pesin set, so asking the point to have this lim sup bigger than zero in the future and in the past means that its trajectory is visiting a single Pesin set infinitely often in the 
future and in the past. So this is the kind of recurrence that we are adding to this subset, for which we had not considered recurrence yet, and this will be the good set on which we will be able to satisfactorily find a Markov partition with a finite-to-one coding. Okay, so let me just recall to you the three subsets that we consider. We first consider the one on which we have the invariant directions E^s and E^u and on which the rates, with Lyapunov exponents bounded away from zero in terms of chi, give us the finite parameters s and u; here we are only looking at the non-uniform hyperbolicity. Then we added the extra condition that the small q is positive, which means that the capital Q decays subexponentially. And now the new input is to introduce some recurrence to some Pesin sets, where in some sense the iterates at which you return to this Pesin set are a kind of Pliss times. I believe this notion of Pliss times is nice to see as it is, because it is used a lot in the non-uniformly hyperbolic context, and even in the context of Young towers and so on. This relation, I believe, will also appear in the talks of Zealvis; actually Zealvis is an expert in using these Pliss times to understand non-uniformly hyperbolic systems. Okay, so with the introduction of this notation, now I can state the theorem that we want to discuss today. Here is the theorem; let me decrease this a little bit. The theorem of Sarig considers C^{1+beta} diffeomorphisms on surfaces, and given a parameter chi bigger than zero, he is able to construct a topological Markov shift, so this is a symbolic space with the left shift, and a Hölder continuous coding map, just like we wanted in the first lecture. So this shift is an extension of the original system, this diagram commutes, and furthermore the image of this subset is exactly NUH chi sharp; and actually, once you restrict to this subset sigma 
sharp, which I will tell you in a minute what it is, then this pi here is finite-to-one. Okay, let me just warn you of a few things. This is not the original result that Sarig proved; this is the way that we see his theorem nowadays, and this equality was studied by one of his students, Ben Ovadia. He was actually able to obtain this not only in dimension two but in any dimension, which is very important for some applications; he was able to understand what the image of this subset of the symbolic space is. And just to complete the statement, what is this subset of your symbolic space? It is basically the symbolic counterpart of the sharp subset that we introduced. Recall that Sigma is a space of sequences, and a sequence is in this Sigma sharp if you see a same symbol infinitely often in the future and a same symbol infinitely often in the past. So in some sense it is a kind of recurrence assumption as well; it is the symbolic counterpart of the more complicated set NUH sharp. And yes, sorry, so this is equation number two: it depends on chi on the right-hand side but not on the left? It depends, because we fix chi, okay? So we fix chi and then we find this topological Markov shift and this coding for which all of this holds. Okay, so the original result of Sarig actually got only this inclusion here. Or actually, let me see if it's this one... no, it's this one, okay. Yeah, so getting the other direction, this one, is actually hard, because, as you are going to see, we do not know how to directly control the parameters of the coded points. Even if you only knew this inclusion you could make some applications, but for the new applications, for example the generalized SRB measures that Ben Ovadia constructed, as I already mentioned, it is important to get the other direction as well 
but I will state Sarig's theorem like this, because it is exactly this version whose proof I want to comment on. Okay, so again: you have your surface diffeomorphism, a geometrical object, and you are interested in understanding its non-uniform hyperbolicity. You fix a threshold, and from the threshold you look at a subset of your manifold, which is the good subset where you have good non-uniform hyperbolicity together with some recurrence. What we are able to do is to construct this symbolic object such that a subset of it is mapped onto this set of good non-uniform hyperbolicity, and the coding is finite-to-one on it. So if you restrict yourself to the restriction of pi to Sigma sharp, then you have a finite-to-one map from here onto here. All the dynamical properties happening in that set can be lifted to this one, and because this set is combinatorially much simpler, we can understand what happens here and then project back down to the original subset and conclude the dynamical properties of the objects we were interested in from the beginning. Okay, so this is the idea, and this is the reason that for many applications, having this surjectivity and finiteness-to-one only on this subset is enough for us; for instance, measures of maximal entropy live here, and this is one example of application. Excuse me, Yuri? Yes? I'm sorry, the original pi, is it surjective? It is surjective onto this set, but outside this set, not necessarily. Okay. And do we use the same pi as the original? Actually we make a few changes, because the set that Sarig considers, the NUH chi, is slightly different from ours: he only considers points which have all Lyapunov exponents. He is able to obtain surjectivity onto this subset, he is able to obtain finiteness-to-one, but he's not 
able to understand what the image of this set is. Okay. So we will not be able to prove this theorem using only the same ideas that we used in the uniformly hyperbolic situation, because things here do not vary continuously; because of this, new ingredients have to be added to the proof, and I will focus on the five main ingredients that I believe are the key ideas needed to prove this result. I'm sorry, Yuri? Yes, can I ask you a question? You mentioned something about the measure of maximal entropy being included in the NUH chi, but for which chi? For any chi: if you have a measure of maximal entropy, you just take chi smaller than h, and if this measure is ergodic, then almost every Lyapunov exponent will be bigger than chi, because it will be greater than or equal to h. Right, you have Ruelle's inequality, which tells you that h_mu is less than or equal to the Lyapunov exponent, so the Lyapunov exponent will be bigger than chi. Okay, okay, thank you. So if you want to understand the measure of maximal entropy, any chi smaller than the topological entropy will do the job. Here are the five main ingredients that I want to discuss today. First, the notion of epsilon overlap in the non-uniformly hyperbolic situation; recall that in the uniformly hyperbolic one the notation we introduced was a tautology, because whenever x was close to y, psi_x was close to psi_y. Now I will add a condition in order to have this property for Pesin charts as well: this is the notion of epsilon overlap. Then comes a very important notion, that of epsilon double charts. To analyze non-uniform hyperbolicity we introduced Pesin charts, but Pesin charts will not be enough for our purposes, because in some sense they analyze the behavior in the stable and unstable directions at the same scale. The idea of these double 
charts is to introduce two different scales: one to analyze the behavior along the stable direction, another to analyze the behavior along the unstable direction. In some sense Pesin charts are single charts, and we are going to pass to this notion of double charts, with which we can separate the study of the stable direction from the study of the unstable direction. Then we are going to discuss the notion of coarse graining, which is how to reduce the study from the uncountably many charts that we a priori need in order to understand all trajectories, to only countably many of them. Recall that in the uniformly hyperbolic situation this coarse graining was very easy: we just needed to consider a finite, sufficiently dense subset of the manifold. If you have your manifold and a sufficiently dense finite subset, then we can code all trajectories using the pseudo-orbits generated by this finite set. Applying this to the non-uniformly hyperbolic situation requires the new input of coarse graining that I will discuss in a few minutes. Then more difficult questions come into play. The first one is the improvement lemma; I will not tell you what it is right now, it will become clear as we go. The last one is the inverse theorem, which is related to the following question. We constructed our coding map, so to every pseudo-orbit v we get a point in the manifold that is coded, and the inverse theorem addresses the inverse problem: if we know x, what can we say about v? In some sense we are trying to understand the inverse image of x, the fibers of the coding map pi. This is an important and necessary question in order to prove the finiteness-to-one: we have to solve an inverse problem, so we have to prove an inverse theorem. Okay, so these are the five main ingredients that in some sense do not come into play in the uniformly hyperbolic situation 
but that play a key role here in the non-uniformly hyperbolic one. So let's discuss them. Epsilon overlap: let me make a comparison again. Epsilon overlap in the uniformly hyperbolic situation was very easy, because whenever two points are nearby, their invariant directions are nearby. But in the non-uniformly hyperbolic context you can have two points very close for which the associated splittings are very different, and the consequence, from the point of view of the matrices that we introduced, is the following. Recall that C(x) maps R^2 to T_xM and C(y) maps R^2 to T_yM; these are the matrices that allow us to diagonalize the derivative of the map, and if the splittings at two nearby points are very different, then C(x) and C(y) are very different: if you subtract one from the other and compute the norm, this norm could a priori be very big, because the matrices are defined by these splittings. So how do we impose a condition ensuring that these matrices are close to each other? This is exactly the notion of overlap, for which we will need to consider an extra object associated to Pesin charts. For Pesin charts we consider psi_x, but now we need to introduce an extra parameter, the parameter at which we see the Pesin chart; it is a kind of scale for each Pesin chart. To each parameter eta I am going to associate the new object psi_x^eta, which is just the original Pesin chart at x restricted to the scale eta. Why? Because the scale is one thing that we will be interested in controlling from now on. In the uniformly hyperbolic situation the scale was uniform, because everything varied continuously, but now, even though two nearby points may have similar splittings, the scales defined by them, for example the capital Q, might be very different. So we have to adjust the scales to be nearby as well, in order to be able to define 
this notion of epsilon overlap. Epsilon overlap will actually be a property of two Pesin charts: we are going to say when two Pesin charts epsilon overlap, so it is not only a property of the centers of the charts but also of their scales. When do we say that two Pesin charts epsilon overlap? Consider two of them, one centered at x_1 with scale eta_1 and the other centered at x_2 with scale eta_2. We say that these two Pesin charts epsilon overlap, meaning in some sense that they allow us to understand the hyperbolicity at more or less the same place at more or less the same scale, if, first, the scales are comparable: this notation just means that the quotient eta_1/eta_2 is between e^{-epsilon} and e^{epsilon}, so the quotient is very close to one. And here is the more difficult assumption to hold: observe that to have epsilon overlap we require not only that the centers of the charts are close to each other, but also that the matrices C of both points are very similar, in the sense that this sum is smaller than this number, which is potentially very, very small. This is a very strong assumption: if you want epsilon overlap to hold, you should choose proper scales that are comparable to each other, and this sum here must be smaller than the product of the scales to the fourth power. So the smaller the scales are, the stronger the condition of epsilon overlap becomes: the closer the points have to be, and the more similar the matrices C of both points have to be. This is a strong assumption that does not hold merely because two points are nearby; you need more objects associated to the points x_1 and x_2 to be close to each other. Here is the picture: not only do the splittings have to be similar, but also, and I will not prove this, it is necessary that the parameters s of x_1 and x_2 are very similar, and the parameters u of x_1 and x_2 are very similar. This is exactly the condition such that, whenever it is 
satisfied, we can understand a neighborhood of x_1 and x_2 using either of the two Pesin charts, because their images cover more or less the same region of the manifold, and the distortion associated to them is almost nothing. Actually, one consequence of the definition is that the composition of psi_{x_2}^{-1} with psi_{x_1} is very close to the identity. We used this in the uniformly hyperbolic situation, but there it was guaranteed merely by requiring x_2 to be close to x_1; now we need these stronger assumptions in order for this change of coordinates to be close to the identity. In some sense, this says that if you have epsilon overlap, it makes no difference which of the two Pesin charts you use to analyze the manifold in a neighborhood of these points. And why is this good? Because once we understand how to change from one Pesin chart to another, we can understand how to analyze f while changing from one Pesin chart to another. Recall the theorem that I stated for the uniformly hyperbolic situation last time; this theorem of Sarig is exactly the analogous theorem in the non-uniformly hyperbolic context. Our idea is to analyze f using two different Pesin charts, one at x and one at y, and to do that we require not only that f(x) is close to y, but that you have a Pesin chart at f(x) that overlaps a Pesin chart at y. If this occurs, then you can understand your map f using the two Pesin charts psi_x and psi_y, and the conclusion is that you recover hyperbolicity: the representation of f in these two Pesin charts is again a small perturbation of a hyperbolic matrix, the same hyperbolic matrix that we had before, plus an error term which is controlled in the C^{1+beta/3} norm. So epsilon overlap allows us to recover the hyperbolicity that we need in order to analyze our non-uniformly hyperbolic system. Again, the proof of 
this is just observing that f_{xy} is the composition like this; we know that this factor is almost hyperbolic, and by the epsilon overlap assumption this factor is going to be close to the identity. So the proof is the same, but now the assumption that we require is stronger, to guarantee that the change of coordinates is close to the identity. Okay, any questions? I have a question: how can we guarantee the existence of these overlaps for two nearby points? Because it resembles continuity a lot, and this epsilon could possibly be very small, smaller than q for sure. Yes, it seems hard. Yes. So if we don't have overlap, then in a sense it is better, because then we don't have a kind of duplicity in the representation of our model, okay? But invariably we will have to require overlaps, and we will see this when we deal with the coarse graining. In the coarse graining we will pass from the uncountable space of all trajectories to a countable one, a reduced space of countably many charts, not Pesin charts but epsilon double charts, on which we have overlaps that allow us to consider pseudo-orbits that will code all the points that we want. So in some sense it is good when we do not have many overlaps, but we will require overlaps to occur, because it is by using the overlaps that we pass from actual orbits, which are uncountably many, to pseudo-orbits, which come from countably many objects. Did you understand, more or less? More or less, okay. So yes, it will be complicated, because nearby points might not have overlap, but we will get to it. Okay, any other question? Yuri? Yes, on the overlap condition: is the exponent four optimal or not? It is not optimal; I believe three or two is sufficient, but as you go through the proof you pass from this estimate to others and you have to lose something, so in some calculations the four becomes three in the middle of the proofs, then the three becomes two, and the two becomes one. So if you 
have a better way of organizing the calculations in the proof, perhaps you could reduce it to three or two. So it is not optimal, but it is sufficient, and we can do it with any number bigger than four: if you want to put a 10 there, we are also able to do it. Of course ε-overlap becomes harder to hold if you change four to ten, but the ideas of the proof and the techniques can be adapted. It is just a number that is sufficient for the bunch of calculations done in the papers.

All right, let's continue. The next input is a very, very important one: exactly how to separate the analysis of the stable and the unstable directions in a different way. This is the notion of ε-double charts. The motivation is as follows. In the uniformly hyperbolic situation we have no problem, because the angles associated to the splitting are uniformly bounded away from zero. But in the non-uniformly hyperbolic situation these angles can be as close to zero as you want, and if this happens it is hard to measure the hyperbolicity of the system along both directions using the same scale, because with a small angle the two directions are almost indistinguishable from one another. The solution that Sarig gave was: instead of considering the same scale to analyze hyperbolicity along these two directions at the same time, consider two different scales, one for each direction. This is the notion of an ε-double chart. Nowadays the easiest way to explain it is by recalling what we already introduced for points in the NUH* subset. Recall that NUH* is the set of points at which the small q is positive; the small q is what allows you to run graph transforms. Then we said: why don't we separate the definition of the small q into a small q^s, which only considers the infimum over forward iterates, and a small q^u, which only
considers the infimum over negative iterates. As I already told you, these introduce two different scales: the first allows you to analyze the behavior along E^s, and the second along E^u. So it looks like the right object for understanding these hyperbolicities separately is the same Pesin chart centered at the same point x, but considered at the scale q^s when you run stable graph transforms and at the scale q^u when you run unstable graph transforms. This leads to the definition of an ε-double chart. The notation is ψ_x^{p^s,p^u}: a Pesin chart centered at x with two scales, which you can view as a pair of Pesin charts. The first Pesin chart is considered at the scale p^s (let me just fix this, because it should be on top): this is the chart in which we will analyze the behavior along E^s. The other one, in which we will analyze the behavior along E^u, is the Pesin chart at the scale p^u. So by definition an ε-double chart is just a pair of Pesin charts centered at the same point; but conceptually, it is what allows us to analyze the hyperbolic behavior along E^s separately from the hyperbolic behavior along E^u, one parameter for each direction.

"p^s and p^u, or q^s and q^u: what is the relation? I'm confused, sorry."

So here, to each x you can attach any p^s and p^u, but x has intrinsic candidates, namely q^s(x) and q^u(x). These are the intrinsically defined parameters of the point, the natural scales to associate. But because we are going to have to make some approximations, we might not be able to take exactly q^s(x) and q^u(x) as the scales, and we will need to change them a little. So you have a two-parameter family, and for each point I have two natural candidates for what to put as
these parameters; but since I need to pass from actual orbits to pseudo-orbits, I will need to take approximations of them. "Good, okay, thanks."

Now that I have defined these charts, I want to define when I can pass from one chart to another by the action of f. We defined this in the uniformly hyperbolic situation: recall (let me put this in blue) that we wrote ψ_x → ψ_y if you had the proper overlaps, namely the overlap of the chart at f(x) with the chart at y and of the chart at f^{-1}(y) with the chart at x. If you had this, you could make the change of coordinates and analyze f using the two Pesin charts at x and at y. Now we do the same thing, but for the much more complicated object of ε-double charts, which is the one we use in the non-uniformly hyperbolic context. Take two of these objects: one centered at x with the two parameters p^s and p^u, and one centered at y with the two parameters q^s and q^u. When can we apply the dynamics and go from the first to the second? If we can, we write an edge from v to w. The conditions look complicated, but let me explain and try to demystify them. We require a bunch of conditions, much stronger than in the uniformly hyperbolic case. The first two are similar to the previous ones from the uniformly hyperbolic situation: an ε-overlap of the Pesin chart at f(x) with the Pesin chart at y, and an ε-overlap of the Pesin chart at f^{-1}(y) with the Pesin chart at x. The scales we require are the minima of the two parameters: the charts at f(x) and at y have to ε-overlap at the very small scale min{q^s, q^u} (let me shrink this a little so that you can see the notation). And similarly, for the pre-iterate of the Pesin chart at y, you get an overlap
with this scale here, min{p^s, p^u}. So the proximity of these two Pesin charts has to be very, very tight: that sum has to be smaller than the fourth power of p^s times the fourth power of p^u, so in total the eighth power, because it carries the parameter here and the parameter here, each to the fourth power. Well, this is not enough to pass from one double chart to the other, because we also need a relation between the scales of one and of the other, and this relation mimics the condition we needed for the graph transforms to be defined. Recall that we introduced q^s(x) and q^u(x) and showed that these quantities satisfy a recurrence; that recurrence is exactly what we require of the p^s, p^u, q^s, q^u in the transition from one double chart to another. In some sense, what does the first equality say? It guarantees that we can define the stable graph transform. Why? Because if we start with an almost horizontal graph here, defined at this scale, and we apply f^{-1}, we expect it to grow by more or less a factor e^χ, so we are sure it grows, it becomes at least of this size. But if we want to define a new graph on v, we cannot go beyond the capital Q associated to v, so we have to take the minimum of these two values; then we can define the image of the almost horizontal graph as a new almost horizontal graph at the scale p^s. So this gives a recursion between q^s and p^s that guarantees that the stable graph transform is properly defined whenever you have an edge. The same holds for the other equality, which guarantees that the unstable graph transform is well defined.

"Sorry, maybe the choice of q^s and q^u is a bad notation here? I'm a bit confused, because you already defined them."

Yes, it is a bad notation, but I don't know what the substitute would be; maybe primes, or something else. Here I should put q^s of x
and q^u of x, sorry.

"So, reading the conditions: on the left side it is p^s equal to a minimum, and below it q^u; is that correct, going from x to y?" Yes. "I see, I see, thank you. So for the stable one you start here and go here, so p^s depends on q^s; and for the unstable one you start here and go here, so q^u depends on p^u." Yes, exactly.

"So if you have a pseudo-orbit, are its parameters determined by only two values, two scales? You only have to choose two scales and then you can determine everything else? Because it seems like p^s is totally determined by q^s."

Yes: from q^s you can go to the past and define the stable parameters. "Ah, yeah, all right. And then q^u..." If you know one of them here, you determine this one; but if you want to determine everything, you have to know the stable parameters arbitrarily far in the future. If you only know the one at position zero, you can define the p^s at the negative positions, because you can come back. "Okay, I think I see, thank you. And the q^u: if you know one, you know everything in the future." That's true; but to know these ones you have to know something about the far past. This is interesting, because the recurrence condition is, in some sense, what allows us to determine all of this: it tells us that in the far past you return to a same kind of Pesin set, so at least the parameters are bounded away from zero in the far past, and you can iteratively define everything in the future from the far past.

"Okay, so this recurrence condition is what allows you to control the values of these parameters? I'm just guessing."

Yes, exactly. You should see it as follows: we introduced the intrinsic values, the candidates attached to the points, and they do satisfy this recurrence.
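To make this parameter propagation concrete, here is a minimal sketch, not from the lecture: it recovers stable scales backwards along a finite piece of pseudo-orbit from a value in the far future. The recursion p^s_n = min{e^ε · p^s_{n+1}, Q_n}, the value of EPS, and all names are assumptions modeled on the recurrence just discussed; the precise form of the relation is in Sarig's paper.

```python
import math

EPS = 0.01  # the fixed epsilon of the construction (illustrative value)

def stable_params(Q, ps_far_future):
    """Given chart sizes Q[n] ~ Q_eps(x_n) along a finite piece of a
    pseudo-orbit and a stable parameter at the far-future end, recover
    the earlier ones backwards via p^s_n = min(e^EPS * p^s_{n+1}, Q[n])."""
    ps = [0.0] * len(Q)
    ps[-1] = ps_far_future
    for n in range(len(Q) - 2, -1, -1):
        ps[n] = min(math.exp(EPS) * ps[n + 1], Q[n])
    return ps
```

The point of the sketch is the one made above: knowing the stable parameter at a single future position determines all the earlier ones, with the capital-Q sizes capping the growth.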
Yeah, and now we relax this assumption: we take values which are not necessarily the intrinsic ones we introduced, but which still satisfy the same recurrence conditions. And in some cases they will not be those values, because along the way we will have to pass to approximations; whenever we do, we will not consider the intrinsic values but approximations of them, and that is the purpose of these parameters.

"So let me ask something; my question is kind of naive, because I'm confused by the many objects lying around. When we write this edge, v → w with an ε on top of the arrow, we are actually fixing seven things: x, p^s, p^u, y, q^s, q^u, and ε." The ε is fixed once and for all. "Okay, so we are fixing six things. Then my question becomes similar to what Marisa asked. With ε fixed, can it happen that there is no choice of the other six arguments for which we can write such an arrow? I mean, trivially, if we take the point x and y = f(x), with, let's say, the canonical choices for the indices, we can fulfill the conditions and write the arrow. But if we are not working with two points on the same orbit, it could be that we are not able to write the arrow."

Exactly. This seems too much to ask for, and it might be such a strong condition that, away from a true trajectory, the definition is an empty one. For now you have to have faith, because I am not explaining the details: this condition, even though it is very strong, is still loose enough to allow many edges. This is what I
will actually say next, when we talk about the coarse graining. The coarse graining is exactly the result by which we pass to a countable family of these objects, whose edges are enough to shadow all the points we are interested in. This is kind of a miracle: even though the transition condition is very, very strong, it is still loose enough that we can pass to a countable family of these objects for which the pseudo-orbits cover everything we are interested in. "Yes, thank you." You're welcome. Actually, I have an ongoing project, I don't know for how long because the pandemic complicated our plans, in which we want to do something related to flows, and there we had a definition of edge which was, as you say, too strong to hold in general; we had to redo the definition, making it more relaxed, so that with the new definition we were able to get non-trivial edges.

Okay, but let's say that we have edges. Whenever we have edges, I can pass to the notion of generalized pseudo-orbit, which I will call an ε-GPO, an ε-generalized pseudo-orbit. What is it? It is a sequence of edges, that is, a path on a graph: a sequence of ε-double charts such that from the ε-double chart at position n there is an edge to the ε-double chart at position n+1. And as you were saying, such a thing might not exist off true orbits; but at least it exists for true orbits. You see, if you take a point in the non-uniformly hyperbolic set NUH*, for which the q^s's and q^u's are all positive, then its orbit gives an ε-GPO, a true orbit, because it satisfies the four conditions above. And you are right: the difficulty is how to pass from these ε-GPOs coming from true orbits to ε-GPOs which are not orbits. Before that, let me just
convince you that, whenever you have an edge, the graph transforms are properly defined. I already told you why: those recursive relations tell us that if you start with an almost horizontal graph and pre-iterate, you cover the new stable window at x, and vice versa for the almost vertical graphs. The difference now is that almost horizontal graphs require stronger assumptions on the functions you consider. You see, almost horizontal graphs are defined only at the scale p^s, and to say that they are almost horizontal we now require strong assumptions on the value at zero and the derivative at zero, assumptions tied to the window parameters p^s and p^u: the smaller the windows of the ε-double chart, the closer F(0) has to be to zero for the graph to count as almost horizontal. So in some sense, at places where it is very hard to see the hyperbolicity along the stable direction, you may only consider graphs which are really, really almost horizontal: bad hyperbolicity means you have to shrink these parameters, and then F(0) has to be very, very small. The same goes for F'(0), and this condition was already contained in our previous lecture. So you see, it is much more complicated to define the space of almost horizontal graphs; we have to control these graphs much more precisely, exactly because at places with bad stable hyperbolicity we might lose control of things if we do not impose these strong assumptions. You define the almost vertical graphs in a similar way, and then you have the stable graph transform, taking an almost horizontal graph at w to an almost horizontal graph at v, and the unstable graph transform, taking an almost vertical graph at v to an almost vertical graph at w. And once we have graph transforms, the conclusion, here as well, is that these
assumptions are strong enough to guarantee that these two maps are contractions. Then we can compose many contractions and define, as we did in the context of uniform hyperbolicity, stable invariant manifolds for this generalized notion of pseudo-orbits. For the stable one, what do we do? We go far into the future, take any almost horizontal graph there, pull it back to the zero double chart v_0, and take the limit; this converges, and the limit is the stable manifold of the ε-generalized pseudo-orbit. Same thing for the unstable: you go far into the past, push forward to the zero position, take the limit, and you get the unstable manifold of the ε-generalized pseudo-orbit. These are genuine Pesin invariant manifolds; of course they are not invariant manifolds of the center of the chart, but they are invariant manifolds of any of the points they contain. So the idea is that, with this notion of edge, we can also define invariant manifolds for the looser notion of generalized pseudo-orbits. And then we have again the same shadowing lemma, which tells us that each ε-generalized pseudo-orbit shadows a single point, namely the intersection of the stable and the unstable manifolds of the pseudo-orbit. Here I have drawn them at different scales: the unstable manifold is drawn at the scale p^u, so it is an almost vertical curve of size roughly p^u, and the stable one is almost horizontal, of size roughly p^s. And notice that I require both of them, at the zero position, to be very close to zero with respect to (let me come back here) the minimum of these parameters, so that an almost vertical curve always intersects an almost horizontal one. If I only required the almost horizontal one to be smaller than p^s and the almost vertical one to be smaller than p^u, then you
could have a situation in which one was very tiny and the other very big, and they did not intersect. But if you require both objects to be very close to zero with respect to the minimum of the parameters, then they will intersect, and the intersection is the unique point which, I will say, is shadowed by the pseudo-orbit. What does that mean? It means that its trajectory falls inside the images of the Pesin charts, again with respect to the minimum of the parameters, exactly because this intersection point is also very close to zero. So even though the stable and unstable graphs are defined in windows of size p^s and p^u, the requirements on how horizontal and how vertical they are refer to min{p^s, p^u}, in order to make them intersect. And now we get to the coarse graining. The coarse graining is the tool, the machinery, that allows us to pass from the uncountably many orbits that we have in the set to a countable family of double charts, for which the ε-generalized pseudo-orbits they generate shadow everything in the set. So this is a kind of discretization argument that lets us go from original orbits to pseudo-orbits that still shadow everything we want. In the uniformly hyperbolic situation we did this just by taking a sufficiently dense subset of the manifold. But, as Marisa pointed out, as Lucas pointed out, the very definition of edge is now much more complicated and harder to satisfy. So how can we do it? The idea is to take all the x's in the set, look at all the hyperbolicity parameters they define, those six quantities, and in some sense pass to a subset which is dense with respect to these hyperbolicity parameters. What do I mean? I mean that for every x in this set, I want to find a nearby y that also has a nearby C(y), a nearby capital Q(y), and nearby small q, q^s, and q^u. So now, instead of only approximating the point x, which was enough for the uniformly
hyperbolic context, I want to pass to a sufficiently dense subset with respect to all of these parameters as well. And because I am allowing non-uniform hyperbolicity, in the sense that these quantities can be arbitrarily large, I have to consider a dense subset of points that approximates all of these data; fortunately, we can take this dense subset to be countable. So the goal is to start from the uncountable set associated to all of these parameters and obtain a countable dense subset of it. How can one pass from uncountable to countable and dense? Well, there is a famous theorem in analysis, Lindelöf's theorem, which is exactly what allows you to extract countable dense subsets, and for that you need boundedness, pre-compactness. So somehow we have to bring pre-compactness into the picture in order to be able to pass to a countable dense subset. How do we do it? We list everything that we want to control. To every point x we associate the tuple Γ(x) = (x̲, C̲, q̲): here x̲ records the position of x together with its pre-image and its image; C̲ records the three matrices C at these positions, at f^{-1}(x), at x, and at f(x); and q̲ is the value of q at x. Why do I list all of these? Because in some sense I want to pass from an original transition, going from f^{-1}(x) to x to f(x), to an approximation of it, so I have to control not only the values at x but also at the neighbors of x under the dynamics, f(x) and f^{-1}(x), in order to guarantee that the approximations will actually give edges. So I want to consider all of these tuples and obtain a countable dense subset of them. Now, the positions belong to the manifold, and the manifold is compact, so they vary inside a compact space; everything is good, and we could directly
pass to a countable dense subset. The value q̲ is bounded, by one for instance, so it belongs to a compact subset of the reals, and again we could pass to a countable dense subset. But this is not so for the matrices: for matrices to belong to a bounded set, both their norms and their inverse norms have to be bounded, and the inverse norms of these matrices can be huge. What C does is take two perpendicular vectors and send them to two vectors which might make a very small angle; in the reverse direction, C^{-1} takes two almost indistinguishable vectors and opens them up to an angle of 90 degrees. So the norms of these inverse matrices can be arbitrarily big, and the tuples do not belong to a bounded subset of matrices. What do we do to control this? We divide the space into subsets on which these norms are bounded: for each choice of parameters l_{-1}, l_0, l_1, we look only at the Γ(x) for which the norm of the inverse of each of these matrices lies between e^{l_i} and e^{l_i + 1}. On each such piece everything is bounded: both the norm of C and the norm of C^{-1} are bounded, the positions are bounded, and q is bounded. So each piece has a countable dense subset, and now we are done, because the space of all tuples we wanted to approximate is the disjoint union of these sets Y_l; each Y_l is pre-compact, hence has a countable dense subset, and taking the union of these countable dense subsets as l varies, you obtain a countable dense family of these objects. Using this countable dense family, and here again it is a matter of faith, we are able to construct countably many double charts that are enough for our purposes. If you want, I can tell you exactly where this construction is contained in the survey, or I can do it in the next lecture if you are interested. But the main idea is exactly to pass to pre-compact sets in order to obtain a countable dense
subset. And then the conclusion is this theorem: for the fixed ε you can find a countable family of these objects, defined by the countable dense subset of the Γ(x)'s above, with two properties. First, the discreteness property, which plays a key role later in the construction: it tells us that for every parameter t, the number of double charts in the family whose two scales p^s, p^u are both bigger than t is finite. Recall that, for a fixed parameter t, having both p^s and p^u bigger than t is associated to Pesin sets; so in some sense the discreteness property is telling us that to cover a Pesin set we need only finitely many double charts. This relates to Ali's original question: Ali asked, why not take the union of horseshoes? In some sense that is exactly what we are doing, because to each Pesin set we attach finitely many double charts, and then we take the union of them. So that is the discreteness property. The sufficiency property says exactly that all the orbits we are interested in are shadowed by pseudo-orbits generated by this alphabet of charts. You construct this countable family of double charts abstractly; it gives rise to a graph, because you look at the edges, from each double chart you can go to another double chart; then you have the paths on the graph, and to each path on the graph there is a shadowed point. Sufficiency says that, as you look at the shadowed points, you get all of the points in this set here. Okay, Ali, is it clearer now?

"In some sense, yes, thank you. So the hard work in this theorem is in the sufficiency part, am I right? You have to prove that it is sufficient."

Yes; getting discreteness is easier: if you took the empty family of charts, you would already get discreteness. So the
difficulty is sufficiency, exactly. "Okay. And how do you do it? Will you explain sufficiency to us at some point, or should we read it?" It's up to you, but basically what you do is this: you take the points of the dense countable subset, and from each of them you have ψ_x, q^s(x), q^u(x); you just allow these q^s and q^u to change a little, to vary a little. Then, given an original trajectory, you look at the q's it defines, and you will be able to find an x_0 which approximates q(x), an x_1 which approximates q(f(x)), an x_2 which approximates q(f²(x)), and so on. That is the idea. This is very interesting, because in this new result about the uniqueness of measures of maximal entropy, the authors, Buzzi, Crovisier, and Sarig, are all the time analyzing things in compact regions, like truly hyperbolic sets, and from the symbolic point of view these are related to this condition here, p^s and p^u bigger than t. "I was going to ask: can we guarantee bounded degree, or at least finite degree, on this graph everywhere?" Exactly, we can. "For this behavior, controlling this degree..." I think it is related to the following: we show that if you have an edge from one double chart to another, then, let me put it like this, the ratios min{p^s, p^u} over the corresponding q^s, q^u are close to one. So it is a graph that has bounded degree everywhere, not just finite degree: uniformly bounded. This was actually one of the difficulties I encountered when I did the construction for non-invertible systems, the one-dimensional ones: I was not able to get both of the bounded degrees; I could get bounded outgoing degree, but not bounded incoming degree. Later, with Ermerson Araujo and Mauricio Poletti, I was able to bypass this, and the reason was that I had thought that in the non-uniformly expanding case, where you only
see positive Lyapunov exponents, one could work with only one of the parameters, the q^u. But this made it complicated to get bounded degree, so our idea was: even if the map is non-uniformly expanding, also use the q^s; with that we were able to get bounded degree again. But coming back to the discussion here: using this result of Sarig, which I call coarse graining, and I think he also calls it that, we can proceed as we did in the uniformly hyperbolic situation. We now have Σ, a graph, the left shift, and the coding map π given by the shadowing: to each ε-generalized pseudo-orbit we associate the single point of the manifold it shadows. Okay, so now we come to the next input, which I call the improvement lemma. What is the improvement lemma for? The motivation is the following: the final coding we want needs to be finite-to-one, so, as I told you, we have to solve an inverse problem. If I have this equality, if I know that v shadows x, I want in some sense to relate the hyperbolicity parameters of the charts of v to the hyperbolicity parameters of the shadowed point x, and the improvement lemma is what allows us to do that. A first natural question would be, for example: how do the angles between the invariant directions at x compare with the angles between the invariant directions at x_0? Well, analyzing the angles is easy. Why? Because the shadowed point, seen in ℝ², is defined as the intersection of an almost horizontal graph with an almost vertical graph, so the angle between these two objects is almost 90 degrees, as is the angle between the coordinate axes at x_0. And when you apply the Pesin chart to pass from ℝ² to the original manifold, the distortion is not that big, in a way that the 90-degree angle at x_0 goes to this angle here, and the almost-90-degree angle goes to almost the same thing you get at x_0. So, in some sense, the strong assumption that we imposed on these almost horizontal and almost vertical graphs guarantees to us
that, after applying the Pesin chart, the angle between the stable and unstable directions at x is very close to the angle between the stable and unstable directions at the center of the chart. Okay, but a much more difficult question would be, for example, to understand why the s-parameter of x is finite. I know that the s-parameter at x_0 is finite, because x_0 is in the non-uniformly hyperbolic set; but why is it finite for the shadowed point? I could have a nearby point with a rate of hyperbolicity slightly different from that of x_0, for which S(x_0) is finite but S(x) is infinite; this could happen. And here it is very important to assume that the v we consider is in Σ^#, the recurrent subset from the symbolic point of view, because for the recurrent sequences we can guarantee that the shadowed point has finite s- and u-parameters, and the way we do this is exactly by using the improvement lemma. So what is the improvement lemma? Here is the problem: how to compare S(x) and S(x_0). The improvement lemma says the following. Consider the ratio between the s-parameters one step ahead, which in some sense compares the stable hyperbolicity at f(x) with the stable hyperbolicity at x_1. If this ratio is big, then after you apply the graph transform, that is, apply f^{-1} and come back to x and x_0, the ratio becomes smaller. More specifically, there is a threshold: you fix something of the order √ε, and if the ratio ahead is within a factor e^{√ε} of some bound ξ, then S(x)/S(x_0) lands inside a strictly smaller interval, around ξ minus a definite positive number. So if the quotient lived in an interval like [1/10, 10], after you apply the dynamics f^{-1} the quotient of the s-parameters goes to a smaller interval, say [1/5, 5], exactly because the bound ξ diminishes to ξ minus this
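The arithmetic behind one such improvement step can be sketched as follows; this is an illustration only, not the lemma's actual statement: the additive constant c, the factor t, and the function name are assumptions standing in for the precise expressions in Sarig's paper.

```python
def improved_ratio(num: float, den: float, c: float = 2.0, t: float = 1.0) -> float:
    """One backward step of the improvement mechanism: both squared
    s-parameters satisfy a relation of the shape S(x)^2 = c + t * S(f(x))^2,
    so a common additive constant pushes their ratio toward 1."""
    return (c + t * num) / (c + t * den)
```

For instance, starting from the ratio 4/5 = 0.8, one step gives 6/7, which is strictly closer to 1; iterating such steps along many returns to the same chart is what forces the quotient of s-parameters at position zero to stay bounded.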
and why is this good? Because if you know that in the far future you see the same double chart infinitely often, then to estimate S(x)/S(x_0) at the zero position you can start with the ratio of the s-parameters at a far position and apply the improvement lemma on the way back to position zero. If you come back from here to here, from the first occurrence, you see one single improvement, at least of the order q(y)^{β/4}. If you start a bit further in the future, for instance at the second occurrence of this Pesin chart, then you see at least two improvements: one as you come back from there, passing through steps whose improvements you cannot control and might be negligible, until you pass through the same Pesin chart you started in, where you gain the same definite improvement one more time, and then you continue back, again with uncontrolled improvements, to the zero position. If you start the improvement lemma from the third occurrence of the Pesin chart, you see three improvements, and so on: you see many, many improvements, as long as you stay within the threshold. So the conclusion is that the improvement lemma, together with recurrence in the future, allows us to accumulate many improvements and then guarantee that the s-parameter of x at the zero position is very close to the s-parameter of x_0. As a consequence, we get that the s-parameter of x is finite, and moreover that this finite value is very close to S(x_0). Using this we obtain, as a corollary, that every coded point whose coding sequence is recurrent indeed belongs to the non-uniformly hyperbolic sharp subset. I told you that Sarig proved this; more or less, his ideas in dimension two
actually proved the reverse inclusion, but the reverse inclusion in higher dimensions was only proved by Ben Ovadia. Okay, so I think this is a good point to ask if you understood what the improvement lemma is and how I use it to obtain this corollary, because this is the main point of the proof. Could you repeat the improvement lemma, please? The improvement lemma says that if this ratio is big, then it improves after applying f inverse, one iterate before. Yes, and the improvement is related to how good the hyperbolicity is at the center of the chart, so if you are seeing the same chart, the same x0, then every time you pass through this chart you improve by this amount. And is this just a calculation from the definition of s(x), or something more? How do you prove this improvement lemma? Oh, that's nice. Why? Because this is the usual philosophy that along the stable direction f inverse expands, and if it expands, it improves the regularity: something that was very curved becomes less curved. Okay, but quantitatively, how is this obtained? You relate s(x) with s(f(x)); it is like two plus a constant times the original one, and the same thing for x0, more or less. So if you compare this ratio with that ratio, you see an improvement: if this ratio is k, and it is much bigger than one, then the new ratio is basically this, which is strictly smaller than k. Okay, so this is just saying that numerator and denominator become bigger while keeping more or less the same ratio. So it's like 2020 over 2021 being much closer to one than four over five? Exactly, yeah. Okay, but philosophically it is this, and this is also used in the approaches via anisotropic spaces: they make direct use of the property that along stable directions, applying the inverse dynamics improves the
regularity, while along the unstable direction, if you apply the dynamics forward, you also improve it. Okay, so if you see improvements, and if you see many definite improvements on the way as you pull things back to zero, then you have a bounded ratio at the zero position. This is a crucial step in the book, right? Right, because it is not clear at all that the s parameters at the shadowed point should be finite. You could have two nearby points whose Lyapunov exponents are strictly greater than chi and smaller than minus chi, but for which the shadowing is not strong enough to control the Lyapunov exponents of the shadowed point. So you have some sort of semicontinuity here, and I believe this is in some sense the origin of the recent work of Buzzi, Crovisier, and Sarig, in which they analyze the lack of continuity of the Lyapunov exponents. Okay, and does the same thing happen for u? The same thing happens, but if you have a ratio at position minus one which is big, then applying the dynamics forward improves it; that's why we need that condition on the past as well. Exactly: to control u at the zero position you should go to the past, see the same Pesin chart, and then iterate forward to the zero position. Yeah, good. So finally we arrive at the fifth ingredient, which is the inverse theorem, which allows us to control everything that we wanted. What does it say? It says that if you get an epsilon gpo, as Marisa said, that is recurrent, meaning you see the same symbol infinitely often in the future and the same symbol infinitely often in the past, and if you look at the shadowed point, then the hyperbolicity properties of the epsilon gpo are related to, and almost the same as, the hyperbolicity properties of the coded point. How? Position-wise, the centers of the charts are close to the iterates of the shadowed point; this is direct. The second one we already discussed with that picture: the angle at x_n is very close to the angle at the n-th
iterate of the shadowed point. The third one is what we get from the improvement lemma: the s parameter of the center of the chart is very close to the s parameter of f^n(x), and the same thing for the u parameter. And using the recurrence properties that the p's and the q's have, this allows us to actually prove that the window parameters are also very much related. So in the coarse-graining we approximated these objects so as to shadow everything that we wanted; now we are getting the reverse direction. We are saying that if you are shadowing a point with a recurrent sequence, then the only way you can do it is by choosing the parameters of the double Pesin charts very close to the parameters of the shadowed point. And this is an inverse theorem exactly because, if you give me x, then all of these are automatically defined, and the inverse theorem says that v is basically determined as well, because the parameters of v have to be almost determined. Okay, so this is telling us something about the inverse, the fibers of the coding map, the inverse image of x. This is the fifth main ingredient that is necessary to prove the theorem. Why? Well, let's do it now: let's implement the three steps of the proof from the last lecture, now in this non-uniformly hyperbolic context. Wait, sorry. Yeah, please. But you have not proved that the fiber is finite yet, have you? No, no, and it will not be; recall that pi is usually infinite-to-one. To prove that the fiber is finite, first we need to do the refinement and get an actual Markov partition. Okay, right, this is what we do now, and that's why I said that these estimates play a crucial role in implementing the Bowen-Sinai refinement. So what were the three steps that we did in the uniformly hyperbolic context? Well, we first did a coarse-graining; in the uniformly hyperbolic case it was just considering a finite, sufficiently dense subset. Now I consider the
countable family of epsilon double charts that we got in the coarse-graining, and associated to it we have the graph whose vertices are the double charts and whose edges are defined by those four conditions that we introduced. This gives rise to the symbolic space Sigma, whose elements are exactly the epsilon generalized pseudo-orbits. This completes the first step of the construction. Second step: how do we get an infinite-to-one extension? I already did it, but let me recast. We define pi just by shadowing: to each epsilon generalized pseudo-orbit we associate the intersection of the stable and unstable manifolds of this guy. This defines a map pi from Sigma to M, and we already know that the alphabet was constructed in such a way that pi is surjective onto this set; actually the image of pi restricted only to recurrent sequences is already onto. And as we know, the other inclusion was obtained by means of the improvement lemma, so we actually get equality. So now we have this extra input in which we get this surjectivity, and we understand the image of pi restricted to this recurrent set precisely as being this set here. This is not how it is in the original paper of Sarig; let me recall to you that the sets he defines require the existence of Lyapunov exponents. We do not require this, and by not requiring it we get a bigger set that actually identifies the image. Well, this pi is a coding that defines an extension of our original map, so this commutativity relation remains true for the same reasons, and as before, pi is usually infinite-to-one. So now we have to do the third step, which is the Bowen-Sinai refinement. In the uniformly hyperbolic context it was very easy. Why? Because we had this cover Z, which was a finite cover, and then we refined it. Now we no longer have a finite cover, because the alphabet here is countable, so we have countably many objects, and with countably many objects perhaps you might not be able to
refine and still obtain something countable. For example, if you take all intervals of R with rational endpoints, this is a countable family of intervals, but its refinement is not countable: every atom of the refinement is a point, so it is uncountable. So we need something else about Z in order to refine and still get something countable. Well, now Z will actually be defined not directly as the image of the zero cylinder, as it was before; we only look at the image of the zero cylinder restricted to recurrent sequences. Why? Because it is only for recurrent sequences that we understand the variation of the parameters of the charts. This restriction makes the object much more complicated, for example from a topological point of view, because this recurrent subset is neither open nor closed, so this image is usually neither open nor closed. Before, we knew it was the image of a cylinder, which is a clopen set, so from the topological point of view we could understand Z(v) in a reasonable way; now, since we have to intersect with this recurrent subset, we lose all control from the topological point of view. Yeah, but Yuri, these are still rectangles, aren't they? They are still rectangles in the sense of the notion of rectangles, exactly. And the boundaries are made of Pesin stable and unstable manifolds? No, not smoothly, because these objects are highly fractal; they are like a bunch of points. The boundary is the intersection of a Pesin stable manifold with the set itself, yeah, like in the Axiom A case when you consider, like Bowen... yeah, yeah, okay. Okay, so good: this Z is a cover of the set, but now it is only countable, so how can we refine and still preserve countability? The main property that comes from the inverse theorem is that this Z is locally finite. What do I mean by a cover being locally finite? I mean that every point belongs to only finitely many rectangles. So although the cover is countable, when you fix a point and you look at
the number of rectangles that contain this point, this number is finite. So locally everything works as with a finite cover, and a finite cover you can refine and still get something finite. So as soon as we have local finiteness, we can apply the refinement and still get a countable cover. But just to convince you, why do we have local finiteness? Well, what do I mean by this? Fix a point and see which rectangles can contain it. Assume that such a rectangle is defined by this double chart. Then we know, because the sequence that is shadowing this point is recurrent, that the parameters of this chart are close to the intrinsic parameters of the coded point. So the minimum between p^s and p^u is close to the minimum between q^s and q^u, which is q. So whenever a rectangle contains x, it has to come from this subset of double charts of the alphabet A in which you have a lower bound for the minimum of the window parameters. Well, this set is a kind of Pesin set, and it is finite, so you have finitely many choices for the rectangle. Okay, so the discreteness assumption, together with the inverse theorem, which is what allows us to relate the parameters of the charts with the parameters of the shadowed point, tells us that this cover is locally finite, and so we refine as before. The conclusion is exactly the theorem that I stated in the middle of the talk: the existence of the Markov partition, which generates this coding, which is finite-to-one if you restrict the map to the recurrent subsequences of your symbolic space, and furthermore the image is exactly this subset here. Okay, so this completes the third lecture of the mini course, and now it is time for you to ask more questions. Just to be sure, is the diagram you drew on the last slide an upper bound for the degree, or exactly the degree? Which diagram, this one over here? Yes... come back... no, no, come back one more slide. This? Yes. For this number? Well, not directly. You mean the
degree of the map pi, or of the graph? Of the graph. Oh, of the graph. Well, it is not exactly because of this, but because of the relations along an edge: if you have an edge, you have relations between p^s, p^u, q^s, and q^u, and this implies that the ratio of p^s and p^u over q^s and q^u is close to one. So the edges that come out of the first chart go only to charts whose q^s and q^u are approximately equal to p^s and p^u, and this space is finite by the discreteness assumption. Let me write it again: if p^s and p^u are close to q^s and q^u, then the space of w for which you have an edge from v to w is contained in the space of double charts for which q^s and q^u are greater than or equal to p^s and p^u times a small value, and this latter set is finite by discreteness. Okay. So I believe you can ask questions to me, or you can ask questions to Omri as well, because Omri is here, so let's take advantage of his kind participation. Hi Yuri. Hi. Will you also explain the non-invertible case in this mini course? Yes, I will; I hope to start it in the next lecture. Thank you. Then the f inverse in the non-invertible case, will you consider it along the branches? I will consider the natural extension. Thank you. So, Yuri, I have maybe a question about the last class. At the end of the class someone asked whether the elements of the Markov partition have fractal boundary, similar to what we were discussing a minute ago. Yeah, but in that situation it could happen; it had nothing to do with the Sigma sharp, because we didn't have the Sigma sharp in that situation, right? Yes, it was a matter of codimension, actually. So I don't understand, because we had this three-step construction in the last class; suppose we are dealing with one of those linear toral automorphisms
in dimension at least three, where this fractal picture is expected to happen. I don't see, in these three steps, where is the first place where, let's say, this fractal picture comes into play. It comes into play when you take this object, which is a cylinder. That is naturally fractal, and you project it to the manifold: this guy lives in a symbolic space, it is like a Cantor set, and then you project it into the manifold. So it is fractal because not all passages are allowed, something like this? Yes. Okay, thank you. The point about the boundaries that I mentioned is actually... well, it is easier if you look at the construction of Adler and Weiss, because it is more geometrical, so you are able to understand more nicely what the boundary would be. With this construction, yes, it allows you to adapt the proof to the non-uniformly hyperbolic situation, but it is more abstract: you get fractal objects, you project them, and then you lose control of what the image of these objects would be. Okay, thanks. You're welcome. Any other question? It's a silly question, I guess; it is related to something someone asked earlier. Is there a way to choose the chi to consider in these results? Is it the topological entropy? With these constructions, if chi is smaller, you are considering more measures. Yes, and in some sense considering more measures is better, because then you can code all measures with a single map. Yeah, but how can we take it sufficiently small while still being relevant to the map? Okay, so it has to be greater than... yes. If you are in dimension two and the topological entropy of f is h, and you take an ergodic measure with entropy h, a measure of maximal entropy, then you know by Ruelle's inequality... let me put it like this. Let h be the entropy of the measure, positive. Ruelle's inequality tells you that the positive Lyapunov exponent is greater than or equal to the entropy. So if you want to understand this entropy, you
just need to fix a parameter chi smaller than h, the entropy you are interested in, because then chi is smaller than h, which is smaller than or equal to the Lyapunov exponents that you see. Then mu will be supported on NuH chi, with this chi, and any measure supported on the same set will have the same properties. Yes, actually all the measures with entropy bigger than chi will be supported here. Any other question? There is no silly question. Okay, yes: how do we ensure that both Lyapunov exponents of the measure are different from zero? Why can't you have a big positive Lyapunov exponent while the other one is zero? Well, here I am in the invertible situation, so I can apply Ruelle's inequality also to f inverse. For f you have the sum of the positive Lyapunov exponents greater than or equal to the entropy, which is greater than chi, so you have at least one exponent greater than chi; but if you apply it to f inverse, the entropy is the same, so you have a Lyapunov exponent for f inverse which is greater than chi, and this means that for f you have a negative Lyapunov exponent. Yeah, exactly, in the invertible case. I saw that Leandro or Ali also wanted to ask questions. I was just going to ask whether you are going to talk about the case where we have singularities in this situation. You mean decay of correlations? No, no, singular sets. Ah yes, yes: next talk. Next talk I will start dealing with flows, billiards, and perhaps the non-invertible case. Okay, so I believe we can finish now with the questions. Thank you all for coming to this third lecture; I will see you in two days. Remember that tomorrow we have the beginning of the mini course of Jose Alves on SRB measures and Young towers. Okay, see you all tomorrow, and for my mini course in two days. Bye bye, thank you. Bye bye, thank you.
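The entropy argument discussed in the final questions can be written out as a short derivation. The notation (h_mu, lambda^+, lambda^-) is my own paraphrase of what was said in the lecture, not a formula quoted from it.

```latex
% Ruelle's inequality applied to f and to f^{-1}, for an ergodic measure mu
% with entropy h > 0 on a surface, with Lyapunov exponents lambda^+ >= lambda^-:
\[
h = h_\mu(f) \le \lambda^+
\qquad\text{(Ruelle's inequality for } f\text{)},
\]
\[
h = h_\mu(f^{-1}) \le -\lambda^-
\qquad\text{(Ruelle's inequality for } f^{-1}\text{)}.
\]
% Hence, for any chi < h, we get lambda^+ >= h > chi and lambda^- <= -h < -chi,
% so both exponents are nonzero and mu is supported on the chi-hyperbolic set
% NuH_chi; the same holds for every ergodic measure with entropy bigger than chi.
```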