Okay, so I believe we can now safely start. Welcome, everybody, to this fourth lecture of the mini course on symbolic dynamics for non-uniformly hyperbolic systems. First of all, I believe today's lecture will not be as long as the last one; the last one was perhaps the one with the most new input and structure. The goal of today is to apply exactly the construction that we did in lecture three, but now to more complicated non-uniformly hyperbolic systems. Recall that previously we carried out Sarig's 2013 construction of such Markov partitions for surface diffeomorphisms. Now we are going to try to relax the low-dimensionality assumption, going to higher dimensional manifolds, and also to more complicated dynamics. We will no longer assume that the map is a diffeomorphism: in order to analyze flows, we will assume that the map is a map with discontinuities but with bounded derivative. Then we will try to understand what happens if we want to cover the billiard situation, in which we have both discontinuities and unbounded derivative. And finally, we will discuss the most recent work, which deals in some sense also with non-invertible maps. Being non-invertible, we allow discontinuities, we allow unboundedness of the derivative, and we also allow critical points, points where the derivative is not an isomorphism. So this is the framework of today's lecture, and let's go.

Let's start by recalling the five main ingredients, at least in my perspective, that we used in order to apply Bowen's pseudo-orbit approach to the coding of non-uniformly hyperbolic surface diffeomorphisms. The first one is quantifying when two Pesin charts are almost the same, or very close: this is the notion of ε-overlap. Then we introduced a way of acting on a Pesin chart through the dynamics, passing to another Pesin chart: this is the notion of ε-double chart. Recall that we had a transition from one double chart to another if we had two overlaps, one between the Pesin chart at f(x) and the one at y, and one between the Pesin chart at f^{-1}(y) and the one at x. And we also had a kind of recurrence equation for the two parameters: p^s in terms of q^s, and also q^u in terms of p^u. The idea was the following: we are trying to define these parameters as large as possible, and defining them like that, we were able to show that the graph transform is well defined. So we could go to the notion of ε-generalized pseudo-orbit. Once we had this notion of pseudo-orbit, we could pursue Bowen's idea, which I divided into three steps. Then we discussed how to pass to a countable family of Pesin charts that is sufficient to shadow all relevant trajectories; this is what I called coarse graining. Recall that in the uniformly hyperbolic situation this is very simple: you just consider a sufficiently dense finite subset of the manifold. Now, because we are in a context where non-uniform hyperbolicity can take a long time to show up, we usually only have a countable family of such charts. But the overall idea was, again, in some sense compactness.
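Before going on, let me display schematically the transition and the recurrence I just described; the precise superscripts and constants are the ones in Sarig's paper, not necessarily the ones I write here. An edge from the double chart $v = \psi_x^{p^s,p^u}$ to $w = \psi_y^{q^s,q^u}$ requires the two overlap conditions (with chart sizes $q^s \wedge q^u$ and $p^s \wedge p^u$, respectively)

$$ \psi_{f(x)} \ \varepsilon\text{-overlaps}\ \psi_y, \qquad \psi_{f^{-1}(y)} \ \varepsilon\text{-overlaps}\ \psi_x, $$

together with the "greedy" recurrence that takes the parameters as large as possible:

$$ p^s = \min\{ e^{\varepsilon} q^s,\; Q_{\varepsilon}(x) \}, \qquad q^u = \min\{ e^{\varepsilon} p^u,\; Q_{\varepsilon}(y) \}. $$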
So we understood what all the quantifiers were that allow us to say that two Pesin charts are close, and then we just passed to a countable dense subset of these charts. This was enough to get a countable family of Pesin charts such that, considering all of the pseudo-orbits generated by them, we are able to code all points of the non-uniformly hyperbolic set with slow convergence to zero of the parameter capital Q, together with a recurrence assumption that you return to the same Pesin set infinitely often in the future and in the past. This is the idea. Then we moved to some subtler questions. The first one was the improvement lemma, which allowed us to understand why, whenever you shadow a trajectory by means of these pseudo-orbits, which all have finite s and u parameters, the shadowed point also has these finite parameters. As I told you, this looks to me like a sort of semi-continuity of Lyapunov exponents, because we have to prove in some way that the s and u parameters of the shadowed point are also finite; it is a sort of control on the proximity of these exponents to χ and -χ. And we understood how to prove this improvement lemma. Finally, we arrived at the inverse theorem, which gives us information on the inverse problem: if you give me x, what can we say about the possible ε-generalized pseudo-orbits that shadow this x? In some sense, the inverse theorem tells us about the preimage of x, which is exactly what we needed to control in order to get the finite-to-one property and then obtain the extension claimed by Sarig's main theorem.

So these were the five main ingredients, and we used them in the same way as in the uniformly hyperbolic situation. This one here gives the coarse graining; it gives step one in Bowen's construction. Once we have these notions and the notion of pseudo-orbit, we can go to step two, which is the construction of an extension. This extension is usually infinite-to-one, but nevertheless it induces a Markov cover on the manifold. And we understood that the inverse theorem implies that this Markov cover is, well, not finite, but locally finite, and that is enough for us to implement the Bowen-Sinai refinement and get the actual Markov partition that we were looking for. Okay, so this is the blueprint of the proof that I explained in the last lecture. Now, if we want to go further and treat these more complicated situations, well, ε-overlap we can define similarly, ε-double chart we can define similarly, but then we have to understand how to carry out the three remaining ingredients, coarse graining, the improvement lemma, and the inverse theorem, in these more complicated settings. We will discuss some of them for the classes of dynamical systems that I mentioned above, the more complicated ones. Once we have this, we are more or less done with the construction: steps two and three will be more or less the same, with the exception of the last situation of non-invertible dynamical systems, where we have to understand these steps better. But step one is already just a consequence of understanding how to do the coarse graining.
So in summary, as soon as we understand these three things, we will morally be able to treat steps one, two and three, and then get the Markov partition for these more complicated dynamical systems. Okay, do we have any questions on the approach and on what I want to do today? That's okay. Okay, so let's continue. I'm trying to give this talk with as many details as I can present here, in a kind of self-contained manner, but without making it too formal. So whenever you have questions, you are welcome to ask; you can ask me, or you can ask some of the experts in the audience. Omri Sarig is also here, so he can help me when I don't know how to answer. Okay.

So the first generalization that I want to treat is the dimensional generalization of Sarig's result: how to do the construction for C^{1+β} diffeomorphisms, but now in higher dimension. This is work of Snir Ben Ovadia, and I will try to explain what, to me, was one of the difficulties that came to mind when I tried to understand this problem. The coarse graining here can be done, not exactly, but similarly to how it is done in Sarig's work. The discussion that we had in the last talk can be implemented here in higher dimensions: derivatives are uniformly bounded, so you can do something similar. Of course, every time I say "something similar", you should be aware that new technicalities come into play. You have to control more objects: you are in higher dimension, so there are more objects to control. So I'm not saying with this check mark here that the proof is exactly the same; I'm saying that the line of ideas is similar, but the technicalities, which are perhaps one of the main difficulties in this business, become much more complicated. This is actually one of the reasons that papers in this field are usually very long: if you want to present all the details, you have to do all the calculations, and in some situations these calculations become very long. But nevertheless, we are able to get the coarse graining. How could we get the improvement lemma and the inverse theorem? These are the two remaining topics, or actually just the first one; I will present to you how to bypass its difficulties. The difficulty with the improvement lemma is the following. Recall that the improvement lemma said something like this: if, for the shadowed point, the ratio S(f(x))/s(x_1) is big, where x_1 is the center of the chart at position one, then you see an improvement when you pre-iterate: S(x)/s(x_0) becomes smaller. In dimension two, the stable direction is one dimensional, so we have only one parameter to measure the quality of hyperbolicity along the stable direction, the parameter S. But in higher dimension you have many parameters to measure the hyperbolicity along the stable direction. As a matter of fact, for each vector v you have one parameter S(x, v); I mentioned the definition of this in lecture two, when we introduced the non-uniformly hyperbolic locus in higher dimension. So now the problem is: if we take a vector in the stable direction of x, with which vector centered at x_0 should we compare this parameter S?
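Just to recall the parameter in question, up to the normalization conventions of lecture two, which I won't be careful about here:

$$ S(x,v) \;=\; \sqrt{2}\,\Big( \sum_{m\ge 0} e^{2m\chi}\, \|df^m v\|^2 \Big)^{1/2}, \qquad v \in E^s(x), $$

which is finite exactly when $df^m v$ decays faster than $e^{-m\chi}$. The question is which vector at $x_0$ to put in the second slot.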
So in some sense, you have to make a kind of canonical or natural identification between the stable subspace at x and the stable subspace at x_0. The way Ben Ovadia did this is as follows. Recall that, in charts, the stable manifold is an almost horizontal graph. And being a graph, you can define a projection map that takes the point (t, F(t)) in this graph and sends it to its projection (t, 0). Recall that here t is an element of the higher dimensional subspace R^{d_s}, where d_s is the stable dimension. Since you have this natural projection, you can apply its derivative and send the space tangent to this graph to a horizontal space, which is basically R^{d_s}. So the derivative sends this space, which is what I call the stable subspace seen in the charts (that's why I write H and not E^s(x)), to the horizontal; and taking this projection in charts, you land inside the subspace that is the stable subspace of x_0 seen in the charts. So using this map, you have a way of identifying the stable subspace of x with the stable subspace of x_0. Now you can pre-compose and post-compose with the respective derivatives of the Pesin charts in order to get an actual map between the invariant stable subspaces of x and x_0. You have this projection here, and you have the derivative of the Pesin chart; it is invertible, so this one can be inverted, as well as this one. If you apply this at the point x, or rather at the point that we are seeing in local charts, you send the tangent space of the graph to the stable subspace of x, and the same thing happens at x_0: you send this to this. So now you can just take this composition and define a map Θ that takes a vector in the stable space of x and sends it to a vector in the stable space of x_0, and it is canonical from the point of view of the charts that we are using to represent our map. With this identification, we can now find, for each vector at x, a companion vector at x_0 with which we can compare the S parameters: we always compare the value of S(x, v) with the value of S(x_0, Θ(v)). Once we do this, because we have good control on these chart derivatives and on the derivative of the projection (as a matter of fact, here we make strong use of how horizontal the graph is), we can prove the improvement lemma. And after we prove the improvement lemma, where in some sense the most difficult part is comparing the S parameters, we can go further and prove the inverse theorem, with the warning that, as I told you, many new technical difficulties appear. These have to be dealt with, and they are dealt with properly in Ben Ovadia's paper. So this allows us to satisfactorily implement the low dimensional ideas in higher dimension, after identifying how to compare these higher dimensional objects.
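Schematically, and suppressing base points, the identification is the composition

$$ \Theta \;=\; d\psi_{x_0}\big|_{\mathbb{R}^{d_s}\times\{0\}} \,\circ\, dp \,\circ\, (d\psi_x)^{-1}\big|_{E^s(x)} \;:\; E^s(x) \longrightarrow E^s(x_0), $$

where $p(t, F(t)) = (t,0)$ is the projection of the graph of $F$ to the horizontal. The almost-horizontality of the graph is what keeps $dp$, and hence $\Theta$, close to an isometry in the chart norms, and that closeness is what the comparison of $S(x,v)$ with $S(x_0, \Theta(v))$ exploits.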
Do you have any questions on this argument? No? Okay, so let's continue, going one step further in these generalizations, these extensions of the original construction to cover more and more classes of dynamical systems. With this we are able to go to higher dimensions, but, as I told you, we are still in the class of diffeomorphisms, with everything smooth. So the next step is how to deal with surface maps. We come back to low dimension, but now we allow discontinuities to occur, as long as the derivative of the map is bounded. This discussion is based on my work with Sarig, in which the prototype example of a map satisfying this is the Poincaré return map of a flow. If we understand how to build the Markov partition for this class of maps, we can do it for the Poincaré return map of a flow, and then go further and construct Markov partitions for the flows themselves. Since we require the Poincaré return map to be defined on a surface, we get flows on three dimensional manifolds. So this is the low dimensional flow counterpart of Sarig's original result. Let me recall the setting; I already described it in lecture two. You consider a surface, and inside the surface a closed subset, called the singular set, the set of discontinuities; and then you consider a map, defined on the complement of this set, that is locally C^{1+β} and furthermore has bounded derivative. In lecture two we understood how to construct invariant manifolds for this class of maps. We understood that we have to restrict ourselves to a subset on which the capital Q does not go exponentially fast to zero, with the change that the capital Q has to take into account not only the non-uniform hyperbolicity quality of the point, but also its proximity to the singular set. Why? Because Pesin charts and their images cannot intersect the singular set. So in some sense we have to reduce ourselves to this. And it was quite simple (not quite easy, but explained this way it becomes clearer): you just take the new Q to be the minimum of the old Q and the distance of the point to the singular set, schematically Q̃(x) = min{Q(x), d(x, 𝔖)}. This was more or less the idea. So we understood how to construct the invariant manifolds in this context.

Now there is one new difficulty that comes into play, which is how to get the coarse graining: how to pass to a countable family of Pesin charts that suffices to shadow all the relevant points. Since 𝔖 comes into play, we lose compactness, and losing compactness makes the original argument more difficult. Recall what the original argument was. For each point x in the non-uniformly hyperbolic star locus (recall that in this context the star denotes the set of points satisfying the four properties (NUH1)-(NUH4), where (NUH4) says exactly that the small q of x is positive, so you have sub-exponential convergence of the capital Q to zero), you define this tuple Γ(x), which records three positions: the position x, its preimage, and its image. It records the matrix that allows me to diagonalize the derivative, this is C(x), and the same for the first forward and backward iterates; and on top of this I also record the value of capital Q. In the compact setting, this entry here belongs to a compact space, and this entry was bounded, so we had no problems there. The difficulty was with the matrices, because they could escape to a non-compact region of the space of matrices, basically because their inverses can have very big norm. The matrices themselves are bounded; you can normalize them to have norm at most one; but their inverses can be big. Why?
Because the inverse is a linear map sending two vectors, which might make a very small angle, to two perpendicular vectors. This causes the inverse matrix to have a large norm. Now, in our present setting, we still have this problem; let me mark it with an X, we still have this difficulty. Q is okay, because it is still bounded. But here we also have the difficulty that this triple no longer belongs to a compact region. So what do we do? We aim to control this possible bad behavior as well. So instead of fixing only three parameters that control the norms of the inverses of the matrices C, we introduce three extra parameters. These are the underlined k's, which tell us how close we are to the singular set. Effectively, we consider the subset of all the Γ(x)'s for which we have the same control as before on the norms of the inverse matrices, but now with an additional control on how close these three points are to the singular set. And now we are again in a good situation, because again the set of points that we want to code (so here it should actually be a star; let me fix this right now), this set that we first aimed to code: their Γ's will be the disjoint union of these new Y's, now defined with six parameters, and nevertheless each of them is again pre-compact. Being pre-compact, they have countable dense subsets. So we take the union of this countable union of countable dense subsets and get countably many Pesin charts that allow us to code the points we are interested in. With this adaptation, we have no further philosophical problems with the other ingredients. For instance, since we are in low dimension, we have only one S parameter to control; so when we need to prove the improvement lemma, we have no problem with the identification, although new difficulties come up when we effectively try to measure these things. But we can deal with them satisfactorily, go further in the construction, and get the Markov partitions for this class of dynamical systems. So this is the result that I have with Omri Sarig, and the prototype application, again, is to flows: three dimensional flows for which the vector field that generates the flow has no zeros. One example of such flows, which I already mentioned in lecture one, is geodesic flows on manifolds. For instance, if you consider a surface of non-positive curvature different from the two torus, then its geodesic flow is one example of a flow for which we are able to construct these Markov partitions.
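Going back to the coarse graining for a moment: the six-parameter sets I have in mind look, schematically, like this (the exact discretization windows are the ones in the paper; the display below is only meant to show the shape):

$$ Y_{(a_{-1},a_0,a_1),(k_{-1},k_0,k_1)} = \Big\{ \Gamma(x) : \|C(f^i(x))^{-1}\| \le a_i, \;\; e^{-(k_i+1)} \le d(f^i(x), \mathfrak{S}) \le e^{-k_i}, \; i = -1,0,1 \Big\}. $$

Each such set is pre-compact, and the union over the countably many choices of the six parameters exhausts all the Γ(x)'s we need.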
Any questions on this kind of, not generalization, but extension of the construction?
Yuri? Yes. So I think I have a question, but I should have asked it before. When you introduced this example with flows and Poincaré sections, you drew something for us and showed: okay, here you have a region of discontinuity, some curves where the map is discontinuous. I expect that these curves propagate as we iterate the map, and I can't form a picture in my mind of how the stable or unstable manifolds fit together with these singular curves cutting through the space everywhere. Can you?
Yeah, I understand your question, and I had the same problem when I was working on this. So imagine, for instance, that here the singular set is something like this; if you iterate it forward, then the images and preimages can become something like this, right? And then you think: oh, but how can I get stable manifolds for points here, for example? They will get cut by iterates of the singular set. That's true, but the point is that if you are trying to construct the stable manifold of this point x, the effect of this curve (say this curve is the n-th iterate of the original singular set) is felt not by the stable manifold here at x, but only once you look at the n-th iterate, and then you have to compare how close things are at that n-th iterate. What I'm saying is that the effect of the iterates of the singular set has to be seen only at the respective iterate, and not all at once at the zeroth iterate. Did that become clear?
Okay, I understood. So it doesn't matter if it cuts at the zeroth iterate.
It doesn't matter. At the zeroth iterate you only have to worry about the original singular set, the one given by these curves. At the first iterate you have to worry only about the first iterate, not about 𝔖 itself: if you are at f(x), you just have to worry about how close you are there.
Okay, that helps, thanks.
Well, sorry, let me say this more carefully. When you want to understand the behavior at f(x), you compare it with 𝔖, but you localize at the exact position f(x): you look at the part of the singular set near there, and you worry about that distance, not about a distance where you pull everything back to the zeroth position. Okay?
Okay, thanks.
Yeah. Any other question? And you see, this becomes more natural once you remember what you do to construct the stable manifold: you go to f(x) and consider an almost horizontal graph in a neighborhood of it, and it is this almost horizontal graph that you pull back. It is not a single graph at the zeroth position that you deal with; you deal with almost horizontal graphs at f(x), at f²(x), and so on. Any other question, on this lecture or the previous ones, about something that was not clear? I had exactly the same problem when I started thinking about this, because I thought: oh, you have to take into account all iterates of 𝔖, and they will cut the phase space everywhere, so maybe nothing will survive. But recall that to construct these invariant manifolds, you go along the whole trajectory and consider graphs along this trajectory, not graphs centered only at the zeroth position. Okay.

So the next step is to consider still low dimensional maps, surface maps, still with discontinuities, but now allowing the derivative to be unbounded. The framework we will discuss is contained in my joint work with Carlos Matheus. What is the setting? I already told you in lecture two. Again, you take a surface with a closed singular set, and a map which is locally C^{1+β}, but now with a parameter a which controls the norm of the derivative and of the inverses of these derivatives.
And the overall idea here is that these quantities, as you approach the singular set, can get very big, or very small; they can degenerate, but only polynomially fast with respect to the distance to the singular set. This is telling you that the big distortion that you might have as you approach the singular set occurs only polynomially in the distance to the singular set. And let me recall that we impose this because we are aiming to apply the philosophy that hyperbolicity prevails: hyperbolicity happens at an exponential level, so it will prevail over any polynomial degeneracy that you have. As long as the degeneracy of the system is polynomial, we expect hyperbolicity to beat this effect. This is a philosophy that, as I told you, was perhaps first explored in this business of dynamics by Sinai, exactly in treating dispersing billiards; then it was further explored by Katok and Strelcyn, and nowadays by many others. So this is not a new idea. In lecture two we also understood how to construct the invariant manifolds in this context. Again we needed to redefine the capital Q to take this possible distortion into account: we wanted to define the capital Q so that in the window of size capital Q you have a sort of bounded, controlled distortion. And the overall conclusion was that once we restrict ourselves to the new non-uniformly hyperbolic star set, the set of points for which capital Q does not go exponentially fast to zero, hyperbolicity prevails over the degeneracy of the derivative, and so we get invariant manifolds.

So, as in the previous situation... sorry, what is this? Yes, this property here is telling us exactly that we can have an explosion of the derivative; but since it is only polynomial, we get very good control on the behavior of f around x in a ball whose radius might be very small, but only polynomially small with respect to the distance to the singular set. So again, we are placing the good control at a polynomial scale, and we hope that hyperbolicity will beat the lack of control that we have outside of this small ball. Okay. So, as in the previous situation with bounded derivative, we also have difficulties in dealing with the coarse graining. Why? Well, these are the balls on which you have good control, and you have to make sure that, in order to get countably many Pesin charts, you can take countably many of these balls covering your phase space. In the previous example this was easy, because the radii of the balls were just the distance to the singular set; those were the regions on which the map is well defined. Now we have a potentially much smaller domain, so we have to make sure that we don't use too many Pesin charts: since we are shrinking these balls, we could be forced to add many more Pesin charts. To bypass this, we need better control. So the first step is to get a cover of the region where f is defined by countably many of these balls, such that only finitely many of them are needed to cover any compact region of the space.
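Schematically, and with the exact exponents and Hölder conditions as in the paper with Matheus rather than as I write them here, the standing assumption is: there are constants $a > 1$ and $\beta \in (0,1)$ such that, for every $x$ outside the singular set $\mathfrak{S}$, the map $f$ is a $C^{1+\beta}$ diffeomorphism onto its image on the ball $B(x, d(x,\mathfrak{S})^a)$, and on this ball

$$ d(x,\mathfrak{S})^{a} \;\le\; \|df_y\|,\ \|df_y^{-1}\|^{-1} \qquad\text{and}\qquad \|df_y^{\pm 1}\| \;\le\; d(x,\mathfrak{S})^{-a}. $$

So the derivative can blow up, and the domain of good control can shrink, but both only polynomially fast in $d(x,\mathfrak{S})$.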
So what does this mean? Sorry, this should be greater than or equal to; the distance is greater than or equal to t. Let me quickly check the paper, because I think I made a mistake. Exactly: it is greater than or equal to. So for the points that are far from the singular set, you only need finitely many of these balls to cover that region. In some sense, this property is telling us the following: if you consider M_t, the set of x for which the distance of x to 𝔖 is greater than or equal to t, which is a kind of compact region of our phase space, then we only need finitely many of these balls to cover it. Okay. So this is the idea: the only non-compact behavior occurs as you approach the singular set, at least at the level of having good control on the map f. You can construct such a cover just by considering this set, which is compact, taking finitely many of these balls to cover it, and then letting t go to zero. Then you can get an even finer control on the Γ(x)'s: you now control, as before, this (so here it should be this e here; sorry, this e here should be there; okay). That is, in addition to controlling the norms of the inverse matrices and the proximity to the singular set, you also record in which of these balls, where f is very well behaved with bounded distortion, you are. So you introduce these three extra parameters, and inside each such set everything we want is well controlled. And the good feature is that, because you have only finitely many choices of balls whenever you stay far from 𝔖, you again get pre-compactness of each of these sets. And pre-compactness was again enough for us to pass to a countable dense subset. You take the union of all these countable dense subsets as you vary all of these nine parameters, and you get the countably many Pesin charts that you need to shadow all the points you are interested in.

Of course, this takes care of the coarse graining; but as soon as you go to prove, for instance, the improvement lemma, some parts of the previous proofs were strongly using that the derivative is bounded. Now this is no longer the case: the derivative can be big. So you have to bypass this difficulty with some other idea. One of the ideas is exactly to go to a smaller scale around each point, at which you have better control, maybe not of the norm itself, but of the variation of these norms; the norm itself can still be very big. So in some places we also have to consider very small windows in the charts, which kill the possibly big behavior of these maps. I'm not saying that the only difficulty in implementing the construction for this class of systems is getting this new coarse graining; there are many other places where, previously, the effect of the derivative was irrelevant because it was bounded, and where it now has to be taken into account that these quantities can go to infinity. So we have to redo things, always with the overall idea that, as long as things degenerate only polynomially fast, you can beat them by means of the hyperbolic behavior of your map. Okay. Any question on this line of extensions of the results? No? Okay, so let's continue. As you see, I'm not mentioning any applications or consequences of the existence of these Markov partitions.
I actually plan to do that in the fifth lecture, which will be more relaxed in the sense that we will only discuss applications, nice results that come from this construction. For instance, here in the billiard situation we were able to deal with dispersing billiards, for which it was not known that they have Markov partitions, and also with non-uniformly hyperbolic billiards such as the Bunimovich stadium. I will come back to this in the next lecture; it will be the lecture in which we harvest the good fruits that come from this construction. Okay.

So finally I arrive at the last line of extension that I want to discuss in this mini course, and now I allow everything to happen at once. I allow the map to be non-invertible: in the previous examples the map was always invertible, in the sense that each point had at most one preimage; now we no longer require this. We also allow the map to be defined on a higher dimensional manifold. And we also allow it to have a singular set, which again will consist of points where the map is discontinuous, points where the derivative of the map explodes, and now also points where the derivative of the map is not an isomorphism. The setting needed to cover all these possibilities will naturally be more complicated, but that is good, because with this wider setting we can at the same time cover many examples that were not treated in the literature before, like higher dimensional flows: the only previous result on flows was by myself and Sarig in dimension three. Also higher dimensional billiards, like the three dimensional billiards that I showed you in the first lecture; again, the only thing known about billiards was for two dimensional billiards, by my results. But it also allows us to consider non-invertible higher dimensional situations, for example non-uniformly expanding maps, one example being Viana maps. Okay. So the generality of this setting allows us to apply the main result on the existence of Markov partitions to any of these contexts.

Let us now go to the setting. As I told you, we are going to consider a phase space that is possibly higher dimensional. It is just a Riemannian manifold; we did the calculations assuming that this space has finite diameter. In many places we need to control the diameter in some sense, but this is just a technical feature of the proof. Why? Because, you see, we always do the analysis locally; the only place where we do something global is the coarse graining. And you may agree with me that, even with infinite diameter, it is likely that we would still be able to find countably many of these objects, right? Just as R is unbounded but has countable dense subsets. So we added this assumption just to simplify our notation, which was already very heavy, and also because all the examples that we had in mind have finite diameter; it would be very weird to allow infinite diameter without any application of it. Okay. Just observe that this Riemannian manifold can be disconnected, made of many connected components, and it can have boundary.
One example where you have a disconnected phase space is given by billiards, even in dimension two. If you consider a billiard like the Lorentz gas, the phase space of the billiard map is the union of two cylinders: one cylinder given by all possible positions and angles at this obstacle, and the other given by all possible positions and angles at the other obstacle. So it is natural to allow disconnectedness of the phase space in order to cover billiards.
This is for the billiard map, not the billiard flow.
Not the billiard flow, no. Here, maps. Okay, great.
So, as I told you, I will slightly change the notation: I will now call D the discontinuity set, the set where the map is not defined. But before going to this discontinuity set, I want to say that we also allow M, and this is aiming at an application that we haven't done so far, to have a certain degeneracy at the geometric level. The geometric structure of M is also allowed, in some sense, to degenerate as you approach the discontinuity set; but again, we only allow this degeneracy to occur at polynomial speed. So the class of manifolds that we consider satisfies, on top of those assumptions, the following: there exists a > 1 such that, if you are not on D (and here it should be a D, so let me fix this right now, otherwise I will forget), then you have a radius, at least as big as a polynomial power of the distance to D, on which the exponential map is well defined and has some regularity (recall, we use the exponential map to construct Pesin charts, and we need good control on its geometry). One of the requirements is that its derivative and the derivative of its inverse are uniformly bounded by two. Recall that the derivative of the exponential map at zero is the identity; so requiring this bound by two is saying that, at this scale, the derivative doesn't grow very much; it can only go up to two. One example of manifolds satisfying this: if you are on a compact manifold, then this radius can be taken uniformly bounded away from zero; it's just the injectivity radius. In some non-compact ones you also have this, as long as the curvature of your manifold is bounded, taking the radius in terms of the distance to the discontinuity set. But we could also have curvature going to infinity as we approach the boundary, some weird behavior like this picture, in which you see an explosion of the curvature to infinity. As long as this explosion, and the explosion of some of the derivatives of the curvature tensor (I call the curvature tensor R), occurs only polynomially fast, we still have this good control on the exponential map. So what I'm trying to say is that the geometric space, the manifold on which we consider our maps, can also have a degeneracy in its geometry, as long as the degeneracy occurs only at a polynomial scale with respect to the distance to the singular set. So again: things can go bad, as long as they go bad polynomially fast. Okay?
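Schematically, and suppressing the Hölder-type requirements that come with it, the geometric assumption reads: there is $a > 1$ such that for every $x \notin D$ the exponential map

$$ \exp_x : B(0,\, d(x,D)^a) \subset T_xM \longrightarrow M $$

is a well-defined diffeomorphism onto its image, with $\|d(\exp_x)\| \le 2$ and $\|d(\exp_x^{-1})\| \le 2$ on this ball. Since $d(\exp_x)_0 = \mathrm{Id}$, this is a bounded-distortion condition on the geometry at the polynomial scale $d(x,D)^a$.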
If you are a little bit confused by this assumption, just take M to be, let's say, a submanifold of R^n, like a compact manifold with bounded curvature. Well, it's compact, so the curvature is bounded; just assume it's compact, and then you have this for free. Okay, so outside of this discontinuity set we consider our dynamics, which is the map f.
Yuri, excuse me, Yuri.
Yes.
So here you say that the degeneracy, the non-compactness, comes from the discontinuity set. But you said you can also deal with the non-compact case, the infinite diameter case.
With infinite diameter we have not written it up, but I believe we can do the same thing.
No, I mean: your curvature can degenerate away from the discontinuity set; you can go off to infinity and then your curvature explodes.
Yeah, it can also, yeah.
So does this impose something new, some new difficulties?
Yes, we have to control the geometry as well, because the way that we analyze our map is through Pesin charts. Pesin charts have two ingredients; recall, let me write here: Pesin charts are the composition of the exponential map and of the matrix C. The matrix C is defined by the dynamics, so you have this ingredient of the dynamics coming into play, but you also have the ingredient of the geometry in which you are defining the map.
Yeah, but here you bypass this with your polynomial condition, which is written in terms of the distance to the discontinuity set. In the non-compact case you would need new conditions as well; intrinsically, some condition on the exponential map.
Yes, but it is doable again. You can imagine that in the non-compact situation the discontinuity set also contains the points at infinity.
Exactly, exactly. So you add those.
Thanks.
Okay, so we have a map, and so far I have said nothing about it. Before saying something, let us assume that this map can be differentiated at every point of the phase space where it is defined. You consider the derivative and define what we call the critical set: the set of points in the domain of f at which the derivative is not invertible. We add this critical set to the discontinuity set to define what is now our singular set: the singular set is the union of the set of points where the derivative is not invertible and the discontinuity set of our map f. The derivative is allowed to go to infinity as you approach the discontinuity set, and also as you approach the critical set. So now we will finally say what the actual regularity of f is. In addition to requiring f to be C^{1+β} (I will not spell out exactly what C^{1+β} means here), I will say something about how well f is defined; and recall, we are allowing f to be non-invertible, so we also have to say something about the inverse branches of f. Here comes a new assumption, which is the following. Whenever you take a point x that is not in the singular set, and whose image is also not in the singular set, you have an inverse branch taking f(x) back to x. Call this inverse branch, defined at least locally around f(x), g.
And the assumption is that these maps, f and its respective inverse branch, are well behaved, again at a polynomial scale with respect to the distance to the singular set. We require that there is a radius r(x), at least as big as the following minimum: you compare the distances of x and of its image to the singular set, and raise the minimum of these numbers to the power a. At this scale, both f restricted to the ball of this radius around x, and its inverse branch restricted to the ball of this radius around f(x), are diffeomorphisms onto their images, for which again you have polynomial control on the degeneracy of their derivatives: both the derivative of f and the derivative of the inverse branch lie between a small number, which is a power of the distance to the singular set, and the respective negative power. So in some sense we only consider non-invertible maps whose inverse branches are defined on big domains; big meaning at least as big as this. Of course, this can be very small, but at the scale of the distance of the points to the singular set, the domains on which the inverse branches are defined are polynomially big, in some sense. Finally, let me just complete the assumption: we also require that inside these domains we have uniform control on the β-Hölder regularity of the derivatives. This is what substitutes for the C^{1+β} assumption. Okay. Yes, who wanted to ask a question?
Hi, Puyo.
In fact, I wanted to ask: the inverse branch taking f(x) to x here, does it mean you are looking at some special branch?
I'm looking at all branches, in some sense, because as I run over all x, I reach all branches this way, right?
Okay, so any branch you may choose works here.
Yes, but the quantifiers are exactly these: if you want to define the branch around a point that is coming back here, the quantifiers are the point, its preimage, and the inverse branch that you want to consider, and the radius is defined to be at least as big as this. It's much nicer to define it like this than saying: take a point, take an inverse branch, take its preimage, and then apply this. I just take x, apply f forward, and say something about the inverse branch taking f(x) back to x. It's a notational simplification which, at least in my mind, makes things easier to understand, because I'm always, in some sense, applying f, applying the dynamics that I know I can apply, and then considering the respective inverse branch given by it. Okay, Puyo?
The point is, if you are looking at the forward orbit and choosing branches with this property that you explained, sometimes in the inverse limit maybe we cannot find some special branch.
Well, here by branch you mean a whole composition of preimages. We will be able to handle that, because we will introduce the natural extension. So far I'm just defining what an inverse branch is: it's just a local inverse of f.
Yes. Thank you.
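Gathering this into one schematic display (with the singular set $\mathfrak{S} = D \cup C$, where $C = \{x : df_x \text{ is not invertible}\}$ is the critical set, and with the exact exponents as in the paper): for $x$ with $x, f(x) \notin \mathfrak{S}$ there is

$$ r(x) \;\ge\; \min\{ d(x,\mathfrak{S}),\, d(f(x),\mathfrak{S}) \}^{a} $$

such that $f$ is a diffeomorphism of $B(x, r(x))$ onto its image, the inverse branch $g$ is a diffeomorphism of $B(f(x), r(x))$ onto its image, and on these balls

$$ \min\{ d(x,\mathfrak{S}), d(f(x),\mathfrak{S}) \}^{a} \;\le\; \|df\|,\ \|dg\| \;\le\; \min\{ d(x,\mathfrak{S}), d(f(x),\mathfrak{S}) \}^{-a}, $$

with uniform $\beta$-Hölder control of $df$ and $dg$ there.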
Well, here is the problem, right? In this context we kind of know how to deal with invertible maps. The new feature that comes into play here, which is perhaps one of the most complicated ones, is that once f is not invertible, you no longer have the symmetry between future and past that we were always exploiting. For instance, to define shadowing, we consider the intersection of a stable and an unstable manifold. And we were always proving things for one direction, say the stable direction, and getting the result for the unstable direction for free, just by the symmetry between future and past. Now we no longer have this. We have only one way of iterating the map forward, given by f; but, as Puyo was pointing out, we have many ways of pre-iterating, because each local inverse of f is one way of pre-iterating the map in a small neighborhood. And if I take into account that I can make such a choice at every pre-iteration, I might actually end up with uncountably many ways of considering these pre-iterates. This makes the analysis much more complicated, both from a philosophical point of view and from the implementation point of view.

So what do we do? The idea is: okay, let's forget about f for a moment. Because of this difficulty, we will not try to code f directly; let us try to code the natural extension of f. The natural extension of f I will define soon, okay? But for ease of explaining the result that we proved, call this natural extension the pair (M̂, f̂), where M̂ is the space and f̂ the dynamics of the natural extension. For those of you who have never heard about it, the natural extension is actually an invertible map. So we will bypass the non-invertibility of f by coding not f directly, but rather its natural extension, which is invertible; and then we can, in some sense, recover the symmetry between future and past. So here is the result; it is, as I said, in collaboration with Ermerson Araujo and Mauricio Poletti. Considering a pair of geometry and dynamics as above, and a threshold χ, there exists this non-uniformly hyperbolic locus, now marked with a sharp, inside the space of the natural extension; and there exist a countable Markov shift (Σ, σ) and a Hölder continuous map, defined on the Markov shift and taking values in the phase space of the natural extension, for which everything that we proved last lecture holds again, but now with respect to the natural extension. So you have the natural extension f̂ acting on M̂, and we can find the shift map σ and this coding map π, Hölder continuous, which is an extension in the sense that it makes this diagram commute. Furthermore, we again recover the information that the image of the set of recurrent sequences of Σ is exactly the set of suitably recurrent trajectories of f̂, in such a way that this restriction is also finite-to-one. And, as before (I wrote this especially for Leandro), the oriented graph that we construct has finite degree.
May I take advantage and ask: are you going to talk a little bit more about this natural extension?
Yes.
Great, great, because I don't know what it is. Thank you.
So, every vertex of this oriented graph has finite ingoing and outgoing degree: only finitely many edges start at it, and only finitely many edges arrive at it. This bound, which holds locally, need not be uniform; you can have arbitrarily large finite degrees in the graph. It is interesting to note this; well, I will tell you this interesting fact later. Now I will satisfy Leandro's inquiry and introduce the natural extension. What is the natural extension? Formally, it is very simple; let me go further here. It is just adding all possible pasts to your dynamics.
So instead of considering only points, you consider sequences of points such that, if you apply f to the point at position n, you get the point at position n+1. And if you think about this for a moment: what happens if f is invertible? If f is invertible, each sequence is uniquely determined by its zeroth position. But if f is not invertible, then I'm allowing myself to see all possible pre-iterations of the position x_0. So formally, you have sequences in this space for which x_{-1} is one possible preimage of x_0, x_{-2} is one possible preimage of x_{-1}, and so on. By putting all of these together, you are in some sense looking at all possible inverse branches of your map at once. Okay?
Sorry, I was going to ask: is this an infinite product of M indexed by Z? I guess it's a subset of this.
It's a subset of this, with this requirement here.
Great, thanks.
And if M is a topological space, then M̂ is also a topological space; if M is a metric space, M̂ is a metric space as well; we have natural distances on it. Okay. And how can we see the dynamics of f acting on this space? Well, it's very simple; you already see it here. The only thing you can do in order to apply f to a sequence is to shift it one unit to the left. So I define this map f̂, which is a kind of left shift on this space: the image of a sequence is the sequence shifted one unit to the left. Okay. And you can also project points of this space back to M: you have a canonical projection, which is just the projection onto the zeroth position. The nice thing is that f̂ is an actual extension of your original non-invertible map; and moreover, let me stress this, it is the smallest invertible extension of your non-invertible map, in the sense that every other invertible extension of f is an extension of f̂. So in some sense you are adding all possible pre-iterates in the most economical way, because f̂ is the smallest invertible extension of f. And the good thing about being the smallest, and I believe this was proved by Rokhlin, is that whenever you have a measure-theoretic object downstairs, for example an invariant measure, you can lift it to the natural extension. We could always project, right? You have an extension, so measures upstairs can be projected. But the nice feature of this economical way of extending invertibly is that you can also lift, and the entropy of the lifted measure is the same as the entropy of the original one. So from the point of view of ergodic theory, this f̂ is great: you have a bijection between its invariant measures and the invariant measures of the original map, and it is now invertible. So in many studies of dynamics you can sometimes apply this idea of going to the natural extension, so that things become invertible, and still recover the measure-theoretic information of your original non-invertible dynamics, exactly because of this bijection between measures. And this bijection preserves entropy, which is important as well: measures of maximal entropy downstairs lift to measures of maximal entropy upstairs. Okay, Leandro? Yes, thanks.
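In one display, with the conventions I am using here (these facts are standard; only the notation is mine):

$$ \widehat{M} = \big\{ \hat{x} = (x_n)_{n \in \mathbb{Z}} \in M^{\mathbb{Z}} : f(x_n) = x_{n+1} \ \forall n \big\}, \qquad \widehat{f}\big((x_n)_{n}\big) = (x_{n+1})_{n}, $$

with the canonical projection $\pi_0(\hat{x}) = x_0$ satisfying $\pi_0 \circ \widehat{f} = f \circ \pi_0$. The map $\widehat{f}$ is invertible (shift one unit to the right to invert), and lifting gives an entropy-preserving bijection between the $f$-invariant and the $\widehat{f}$-invariant probability measures.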
So if we aim to do the construction for the natural extension, we want to see the singular set inside it. How do we do that? Well, the singular set shows up here by appearing at each of the positions. You can lift the original singular set to M̂ as follows: take all sequences whose zeroth position lies in the singular set, and then saturate under the dynamics, so that you also allow these positions to occur at the other places of the sequence. And the next step is what? Well, we are interested in obtaining information on f from information on the derivative of f. So the natural input we have is this derivative cocycle, which, because f is not invertible, is a cocycle only over the non-negative integers. And we want to go to the invertible situation: we already did that at the level of the dynamics, but we should also do it at the level of the cocycle. So we should define an invertible cocycle over this natural extension. How do we do that? Well, for every point (and here it might seem easier to take a point whose zeroth position only avoids the singular set; no, no, I need to do it like this): for every point all of whose entries are not in the singular set, I can define a bundle over this space. The bundle I'm going to call TM̂, and it is the union, over these points, of fibers; and the fiber over x̂ is very simple: it's just the fiber over x_0 in the original manifold.
Yuri? Yes.
I have a simple question. If f, by chance, was invertible to start with, this lift of the singular set would be the union of all iterates and pre-iterates of the singular set itself.
Yes.
Which could be that nasty set that cuts the space everywhere.
Yes, it could. But when I do the construction, I will not consider the proximity to this whole nasty set. Here, because I want to introduce an invertible cocycle, I have to restrict to this smaller subspace, because I want invertibility at every entry; so I consider only sequences that do not fall into the singular set at any entry. But when I define the Pesin chart, I will only consider the proximity to the original singular set.
Okay, thank you.
Yeah. So the fiber over x̂ is just the tangent space at x_0. In this way I define a fiber bundle over this space. It's a bad space topologically, but from the measurable point of view it's a reasonable bundle, on which you can define a cocycle. So we can lift the derivative cocycle to this new bundle: since no entry of any of these sequences lies in the singular set, the derivative is always invertible, so I can pre-iterate; I can take its inverse as well. In this way I lift the cocycle, which was defined only for times greater than or equal to zero and was possibly non-invertible, to an invertible cocycle. And the price we pay is that this invertible cocycle is defined only on this nastier subset.
Excuse me: the natural extension doesn't have a differentiable structure, so this construction is, in some sense, looking at the zeroth position and pulling the derivative from there; and from the measurable point of view, that's enough?
Yes, it's enough. We already have the derivative, which is given by the original dynamics, and then we define this strange object, which is very bad even topologically, so even worse from the differentiable point of view; but it is still a measurable cocycle. Okay.
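Schematically, and with my own notation for the cocycle (the paper's may differ): over the set $\widehat{M}' = \{\hat{x} : x_n \notin \mathfrak{S} \ \forall n\}$, put

$$ T\widehat{M} \;=\; \bigcup_{\hat{x} \in \widehat{M}'} \{\hat{x}\} \times T_{x_0}M, \qquad A(\hat{x}) \;=\; df_{x_0} : T_{x_0}M \to T_{x_1}M, $$

a linear map from the fiber over $\hat{x}$ to the fiber over $\widehat{f}(\hat{x})$. Since $x_0 \notin \mathfrak{S}$, the map $A(\hat{x})$ is invertible, and the inverse direction is generated by the derivatives of the inverse branches along the past of $\hat{x}$; so $A$ generates a genuinely invertible, measurable cocycle $A^{(n)}(\hat{x})$, $n \in \mathbb{Z}$, over $\widehat{f}$.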
And we dynamicists are happy whenever we have measurable cocycles. So this is the cocycle that we consider. Just to introduce some notation: let me denote the inverse branch doing the job of taking f(x) back to x by indexing it with elements of the natural extension. Each sequence x̂ defines an inverse branch: it is exactly the inverse branch taking its entry at position one to its entry at position zero, and I call it f_{x̂}^{-1}. So it takes x_1 to x_0; in some sense, it does the opposite of what f̂ does. Okay. So, inside that nasty space on which we defined this invertible cocycle, we can now define a non-uniformly hyperbolic locus as before. You give me a χ, and I look at the set of points for which, with respect to this cocycle, you have good behavior of the s and u parameters: the set of points whose fiber decomposes into two subspaces, a stable one and an unstable one, satisfying the properties (NUH1)-(NUH3), now with respect to this invertible cocycle. All right. Observe that I'm now doing everything with respect to x̂: I write the stable space as depending on x̂, and the unstable one as well. You know that the stable space only depends on the future behavior of the dynamics, so this one actually only depends on the zeroth position; writing it as a function of x̂ is a sort of tautology, and I could just consider it as defined by x_0. But the unstable subspace genuinely depends on the whole x̂, because I have to know which inverse branches I am applying in order to understand what the unstable direction is; it depends on the whole past of x̂. Okay. So if we do this, then we see that we can start to introduce all the objects that we introduced before, now on M̂. I already introduced a cocycle and a non-uniformly hyperbolic locus, and I can continue introducing these objects, most of which will not be as regular as what we had before; but we don't care, because they at least have the regularity that we need in order to implement the blueprint that I mentioned at the beginning of this talk. So inside this set we can introduce an inner product, just as we did in dimension two and in higher dimension: for every point here we introduce this inner product, which in some sense measures how good the forward hyperbolicity is on stable vectors and how good the backward hyperbolicity is on unstable ones. Then we can try to diagonalize this derivative cocycle by introducing the linear transformation C, which, look, depends on x̂. Why? Because the unstable direction depends on x̂; so now everything depends on x̂. Once you define this, you define the Pesin chart at x̂ (what its purpose is now, I will tell you soon) by taking the exponential map at the zeroth position composed with this invertible linear transformation at x̂; so it depends on x̂. Having done that, we can also define the parameter Q(x̂) and the small q's.
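Schematically, and up to the normalization constants of the actual paper: on the stable and unstable subspaces one takes Lyapunov inner products of the form

$$ \langle u, v\rangle_{\hat{x}}^{s} = \sum_{m \ge 0} e^{2m\chi} \langle A^{(m)}(\hat{x})u,\, A^{(m)}(\hat{x})v \rangle, \qquad \langle u, v\rangle_{\hat{x}}^{u} = \sum_{m \ge 0} e^{2m\chi} \langle A^{(-m)}(\hat{x})u,\, A^{(-m)}(\hat{x})v \rangle, $$

then $C(\hat{x})$ is a linear map sending the standard splitting of $\mathbb{R}^d$ to $E^s(\hat{x}) \oplus E^u(\hat{x})$ and pulling these inner products back to the Euclidean one, and the Pesin chart is

$$ \psi_{\hat{x}} \;=\; \exp_{x_0} \circ\, C(\hat{x}). $$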
And now one question comes into play: what is the purpose of the Pesin chart now? Well, it depends on x hat. So, at a first thought, you would think: ah, this Pesin chart allows me, in some sense, to understand the hyperbolicity of f hat. That is true in some sense. As a matter of fact, these charts allow us to understand this hyperbolicity when we look at the zero position. So they only allow me to understand f as a small perturbation of a hyperbolic matrix, as well as the inverse branch taking f of x back to x. So Pesin charts are defined as depending on points of the natural extension, but when we want to understand hyperbolicity, we only consider the action of the original f, which, at the level of f hat, is the map taking the entry at position zero to the entry at position one; and similarly, its inverse will be the inverse branch taking position one back to position zero. So then you define this capital F, which again depends on x hat, and which does exactly what we did before, now with charts indexed by elements of the natural extension; and the same thing for the respective inverse branch. You form this composition, and the good thing is that these maps are, as before, small perturbations of hyperbolic matrices. So, in summary: we pass to this nasty space, which is now invertible and carries the whole past, but when we want to understand hyperbolicity, we only care about the action from the zero position to the first position. Pesin charts are defined for elements of the natural extension, but we only care about their action for the original map f and the respective inverse branch defined by it.

So then we can continue introducing the new objects, all depending on the natural extension. We introduce epsilon overlap, and having epsilon overlap, we can represent the same map f and the same inverse branch, but now with Pesin charts at different points. Whenever the Pesin chart at f hat of x hat overlaps with the Pesin chart at y hat, I can understand f through these two charts, psi x hat and psi y hat; and the same happens if you pre-iterate y hat, which gives an understanding of the inverse branch. The inverse branch taking y close to x is exactly the inverse branch defined by x and f of x. This is because the inverse branches are defined on big domains: as long as my overlap assumption forces the zero position here to be close to the zero position there, the uniqueness of the inverse branch tells us that the branch taking y one close to x zero is exactly the branch taking x one close to x zero. This is extremely important for us, because it gives, for instance, that this map here is the inverse of that one. And again, they are small perturbations of hyperbolic matrices, as before. So good, we are going in a good direction here.

So the next step is to define the double version of the chart, the double charts. Here I observe that we define them as before, but now with the center depending on a point of the natural extension.
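Concretely, the maps whose hyperbolicity the charts capture are chart transitions of the following shape; this is a hedged rendering in ad hoc notation, not a verbatim formula from the papers.

```latex
% Hedged rendering of the chart transitions (ad hoc notation). With the
% Pesin charts of the previous block, one controls
\[
F_{\hat{x}} = \Psi_{\widehat{f}(\hat{x})}^{-1} \circ f \circ \Psi_{\hat{x}},
\qquad
F_{\hat{x}}^{-1} = \Psi_{\hat{x}}^{-1} \circ f_{\hat{x}}^{-1} \circ \Psi_{\widehat{f}(\hat{x})},
\]
% and, when the chart at f_hat(x_hat) epsilon-overlaps the chart at y_hat,
% the mixed transition
\[
F_{\hat{x}\hat{y}} = \Psi_{\hat{y}}^{-1} \circ f \circ \Psi_{\hat{x}} .
\]
% The claim used in the construction is that all of these are small
% perturbations of hyperbolic block matrices on their bounded domains.
```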
And I already mentioned this in the previous lecture, but I want to emphasize it here: even in the situation where we have a map f that is non-uniformly expanding, in the sense that all of its Lyapunov exponents are positive, you still consider the s parameter. Because, you see, if you only have positive Lyapunov exponents, you only have an unstable direction, right? So you would want to consider only p u. You can do that; the calculations go through. But the problem is that, if you do the calculations only with p u, you in some sense lose track of some extra information, namely the relation p s equals the minimum of e to the epsilon times q s and Q of x. In some sense, this is telling us that you are also restricting your choice of the parameter p s depending on q s, and this is what gives you the boundedness of the degrees of the graph that you get; the precise recursions are written out just below this discussion. If in the non-uniformly expanding context you only use the p u parameter, you can implement the technique, but the graph you get may have unbounded outgoing degree. This is exactly what I did in dimension one, and it is one of the reasons why, in the paper for dimension one, I was not able to get boundedness of both the incoming and outgoing degrees. Only later, while working with Mauricio and Ermerson, did we understand that even in this context we can artificially introduce the p s parameter in order to control how many edges come out of, or into, a vertex. This artificial introduction of the parameter allows us to recover the boundedness of the degrees.

Comment from the audience: It seems natural to consider this, because if you have an endomorphism, you have a degree, and you have a kind of very strong contraction. Exactly when you consider your natural extension, you see these things: depending on the degree of your map, you have stronger and stronger contraction in the past. I mean, the map is non-uniformly expanding, so everything seems to be expanding; but as it is not invertible, you can see it as something strongly contracting, in the sense that it makes two points go to the same point. Exactly, this is the way I see the relation between non-invertible maps and their invertibilizations: you see some directions which are extremely contracting. So maybe this is really why one should take care of this parameter as well. I should have said this before going into the construction. No, it makes sense, because you are collapsing points. Exactly, that makes sense. Thank you.

So, continuing: we construct the edges as before. We have two parameters, and we can define those recursive relations as well to define the edges. So we are going in the right direction, and we can continue. Then, for any edge, you can define graph transforms. And here comes an observation: since the Pesin charts only allow us to recover hyperbolicity for the action between positions zero and one, the graphs that we consider are the graphs centered at the zero position. So, for the stable set of a double chart v: if v is centered at x hat, we only consider the almost horizontal graphs at the position x zero. In some sense, these objects are indexed by elements of the natural extension, but they are truly objects at the zero position of that element. Okay.
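For reference, here are the transition conditions in the form stated in the lecture; a hedged transcription, since the papers may present the recursions slightly differently.

```latex
% Hedged transcription of the edge conditions between double charts, as
% stated in the lecture (the papers may normalize differently). For an edge
% v -> w with v = Psi_{x_hat}^{p^s, p^u} and w = Psi_{y_hat}^{q^s, q^u},
% besides the epsilon-overlap conditions one requires
\[
p^s = \min\{\, e^{\varepsilon} q^s,\; Q(\hat{x}) \,\},
\qquad
q^u = \min\{\, e^{\varepsilon} p^u,\; Q(\hat{y}) \,\}.
\]
% The two recursions together are what bound the number of edges into and
% out of each vertex; keeping only the u-parameter drops one of the two
% constraints, and the out-degree can become unbounded.
```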
So, with the graphs centered at the zero position, we can recover the hyperbolicity features of these graph transforms. And then, whenever we have an epsilon generalized pseudo-orbit, which is now a sequence of epsilon double charts whose centers are elements of the natural extension, we can try to do shadowing. Well, what is the problem? Let me go a little bit further here. The problem is that, as Marisa said, we have no smooth structure on M hat. But we do have a smooth structure at the zero position of M hat, and the graph transforms are defined exactly at the zero position, or at position number one. So, if you recall that the graph transforms were geometrically defined, and that they were the objects providing shadowing, the best we can do up to now is to use them to define invariant manifolds at the zero position. By intersecting the stable and unstable objects that you get from the graph transforms, you can identify the zero position of the sequence that you are trying to shadow. This introduces the notion of stable and unstable manifolds for the epsilon GPOs, and it is exactly as before, because so far the graph transforms act on graphs centered at the zero position. By the usual limiting procedures, you obtain from one of them an almost horizontal graph at the zero position, and from the other an almost vertical graph at the zero position. Then you intersect them and get a single point, which is, in some sense, the zero position you were looking for.

But, you know, you want to shadow a point in the natural extension, so you should know not only the zero position but all positions. How do you do that? You now define a sort of stable and unstable set for the epsilon GPO: once the zero position is determined by the intersection of these objects, you want to recover the other positions. Recovering the positive positions is easy, because there is only one way of going to the future, by applying f. But observe that, in order to define the negative positions, you had an epsilon GPO, so you had many edges from epsilon double charts to other epsilon double charts, and to each of these edges there is a single inverse branch to consider. So you have one particular inverse branch taking the zero position of this chart's center to the zero position of the previous one; it is exactly this inverse branch. In some sense, the way in which you pre-iterate your zero position to recover the negative positions is encoded in the epsilon GPO that you fixed. Then you can uniquely define the negative positions as well, exactly by applying these respective inverse branches. To the future we had no choice; we could only apply f in order to recover position one of this x. But to the past, we know which inverse branch to consider, because it comes for free from the edge transitions between the epsilon double charts. Like this, you can define a natural shadowed point in the natural extension. And if, instead, you want to define invariant sets playing the role of the invariant manifolds, you can also do that: you take all the zero positions given by these graphs and apply f to the future, and you define the stable invariant set as the set of sequences in the natural extension whose zero position belongs to the respective invariant manifold. And you can also define the unstable invariant set.
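As a toy illustration of this bookkeeping, here is a minimal sketch in Python. All names are hypothetical: f stands for the original map, and inverse_branches for the branches read off from the edges of the epsilon GPO. This is not code from the papers, just the reconstruction rule described above.

```python
# Minimal sketch (hypothetical names): assembling a finite window of the
# shadowed point x_hat in the natural extension. The zero position x0 comes
# from intersecting the stable and unstable graphs; the future is forced
# (apply f), while the past uses the inverse branches carried by the GPO.

def shadowed_window(x0, f, inverse_branches, n_max):
    """Return the window (x_{-n_max}, ..., x_0, ..., x_{n_max}).

    x0: zero position of the shadowed point;
    f: the original (non-invertible) map;
    inverse_branches[k]: the branch carried by the GPO edge between the
        double charts at positions -(k+1) and -k, sending x_{-k} to x_{-k-1}.
    """
    future = [x0]
    for _ in range(n_max):                 # positive positions: no choice
        future.append(f(future[-1]))
    past = [x0]
    for k in range(n_max):                 # negative positions: GPO branches
        past.append(inverse_branches[k](past[-1]))
    return list(reversed(past[1:])) + future

# Toy usage: f(x) = 2x mod 1, always choosing the left inverse branch x/2.
window = shadowed_window(
    0.3,
    lambda x: (2 * x) % 1.0,
    [lambda x: x / 2.0] * 5,
    5,
)
```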
So you look at all possible zero positions given by your unstable invariant manifold, and you also know which inverse branches to consider in order to get the negative positions. So you apply these respective inverse branches, starting at each point of this almost vertical curve, and you define all the other negative positions of your points. Like this, you define a subset of the natural extension. So now we know how to shadow and, even more importantly, we know how to define these invariant sets as subsets of the natural extension. And why is this good? Because, you know, the last step of the construction is actually very abstract. You have a Markov cover, which is a family of countably many rectangles, and saying that they are rectangles and that they satisfy some Markov property is something very abstract, which no longer depends on the geometry. So we can do the same abstract nonsense with these sets: they are subsets of M hat which, in some sense, also have a Markov structure. So now we can continue with the blueprint, building countably many of these objects so that the shadowed points cover the non-uniformly hyperbolic locus that we want. Out of this shadowing procedure, we define a first coding map, which again will be infinite-to-one, but which has a natural Markov structure that, through pi, induces a Markov structure here on this abstract space. From this we obtain this guy, curly Z, which is a Markov cover here. The property of being Markov is now from the abstract point of view, no longer from the geometrical one; it is very set-theoretical, and no smoothness is needed. So again, we can use the improvement lemma together with the inverse theorem in order to prove that this Markov cover is locally finite. Then we can refine it and obtain a Markov partition, which is an abstract Markov partition of M hat. But, nevertheless, who cares? It satisfies the Markov property, so it induces a new extension, which will finally be finite-to-one. Like this, I complete the construction for the non-invertible situation as well. So the idea is to blend the geometrical objects, used to identify the zero position, with the fact that, whenever you give me a GPO, I know which inverse branches I am considering; this allows me to recover the negative positions that could not be defined before. Doing this, always looking at the zero position, I can carry out the construction and obtain a set-theoretical object for which I can again employ the same ideas, and finally get a partition with the Markov property.
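For concreteness, one common set-theoretic way of phrasing the Markov property at this stage is the following; a hedged formulation in ad hoc notation, not necessarily the exact one used in the paper.

```latex
% Hedged, set-theoretic formulation of the Markov property (ad hoc
% notation; the paper's definition may differ in detail). For rectangles
% Z, Z' of the cover and a point x_hat in Z with f_hat(x_hat) in Z':
\[
\widehat{f}\big(W^s(\hat{x}, Z)\big) \subset W^s\big(\widehat{f}(\hat{x}), Z'\big),
\qquad
\widehat{f}^{-1}\big(W^u(\widehat{f}(\hat{x}), Z')\big) \subset W^u(\hat{x}, Z),
\]
% where W^{s/u}(x_hat, Z) denote the stable/unstable fibers of Z through
% x_hat. No smoothness enters: once local finiteness is known, the
% Bowen--Sinai refinement of the cover into a partition is pure set theory.
```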
So this is all I wanted to tell you today, and now I believe it's time for more questions.

Question: I'm not sure if I understood well: did you construct a unique Markov partition, or do we get different Markov partitions depending on the branches? I'm not sure I understand what you mean by branch. I mean, for different x hat. So, I fix f, and from f I have a unique natural extension, and it is inside the natural extension that I construct a Markov partition. Yeah, but you are considering a special branch. No: f hat carries all the branches that are possible, and x hat is just an element; I consider a Markov partition that covers these elements. I am not fixing a sequence of pre-iterations. When I consider f hat, I am actually looking at all possible pre-iterations at the same time. But in order to obtain a unique Markov partition, don't you need to fix some special pre-image, some special inverse branch of f? I understand your question, but I think the context that I consider is different. I am not trying to take a particular family of inverse branches, let this family define an invertible kind of dynamics, and construct a Markov partition only for that invertible dynamics. No: what I do is consider this wider object, the natural extension, which already has all possible inverse branches in it, and on this guy I construct a Markov partition. So if you are aiming to apply this to one particular way of pre-iterating your points, you will not be able to do it directly with my result, because what I actually code is the whole natural extension. Why? Because I am not interested in one particular way of pre-iterating; I am interested, more importantly, in understanding periodic points and invariant measures, and I can always do that considering only the natural extension, which takes all possible inverse branches into account at the same time. If I made it more particular and considered only some special inverse branches, then I would not know how to do it. Okay.

Question: I have another question, about the recurrence property. How do we guarantee the recurrence property? In fact, I cannot see how such a recurrence property can happen, for points to be in this NUH. This NUH? Yes, this guy. So, you usually have a measure here, and you want to show that this measure is supported on this subset, so that, in some sense, coding this subset allows you to code the measure. What do you do? You lift the measure mu to mu hat. And for mu hat you have Oseledets' theorem; Oseledets' theorem holds for cocycles, so you don't need a differentiable structure. Just using Poincaré recurrence and Oseledets' theorem, you can show that, provided mu has some hyperbolicity properties, these hyperbolicity properties of mu lift to mu hat and guarantee that mu hat lives inside this set here. Okay.

I am talking about the V underline, I mean the charts that you use. Yes. At some point in the previous talks, in fact, you needed some recurrence property, which means that the elements, forward and backward from some moment up to infinity, are somehow the same. Yes. For the forward direction I can see this, but for the backward direction I can only see such a phenomenon for periodic points. Well, you should recall that this V is a family of double Pesin charts. Exactly. So I am just saying that there is a single double Pesin chart that repeats infinitely often in the future, and another one that repeats infinitely often in the past. For this to happen, you need your x n hat to have this property. Yes; in particular, the x n hat are going to repeat themselves. Not only that, but also the parameters. Oh, sorry, but I am looking at the x n hat: in the inverse limit space, the only elements that have such a property are the periodic points. No, no, you are confusing things. To each sequence, I have a sequence of these guys; I am just saying that there are places where I see exactly the same guy.
Ah, so this I understood: this n here is not the n-th position of a sequence; it is a sequence itself. Yes, this guy is itself an element of M hat. Okay, so you are repeating the same point. The same point of M hat, exactly. Recall that Pesin charts depend on the point of M hat. Perhaps I should have written this guy with the n inside the hat. Yeah, thank you. Welcome.

Yudi: Yes, I have two questions. I was taking a look again at Bowen's monograph, and I saw that, before he constructed his Markov partitions, he used the spectral decomposition theorem, and the partitions were adapted to these basic sets. This doesn't appear here at all; any comments on that? So the spectral decomposition is not used here, in this non-uniformly hyperbolic setting. No, and maybe Omri can answer that, because it is related to his work with Buzzi and Crovisier: if you want to go further in understanding transitivity and so on, then you have to consider the spectral decomposition. Omri, would you like to say something?

Omri: Exactly. I mean, it took ten years to do this other step. What we can do, with Jérôme Buzzi and Sylvain Crovisier, is this: when we restrict to homoclinic classes of hyperbolic periodic orbits, for these homoclinic classes we can construct a transitive symbolic coding. Now, the problem is that, in the non-uniformly hyperbolic setup, homoclinic classes of non-homoclinically-related periodic orbits are not necessarily disjoint. So this decomposition into homoclinic classes is not into pairwise disjoint sets, as in Smale's case, when you are dealing with uniformly hyperbolic diffeomorphisms. What is true is that, if the diffeomorphism is C infinity, then these different basic sets, so to speak, these different homoclinic classes, have intersections of measure zero for all ergodic measures with positive entropy, in dimension two. The existence of transitive symbolic models for homoclinic classes is also true in higher dimension; what is perhaps not true in higher dimension is the statement about the size of the intersection of different homoclinic classes. Does that answer your question? Yes, yes, I think so; at least it points to a direction where I can look. Okay, thanks.

Yudi: And I have another question; this one is very naive. We have this Markov partition in the natural extension, right? But every time I think about non-invertible expanding maps, the first example I think of is something on the interval, with branches. In that situation, we have at least a natural partition to think of, namely the portions of the interval where the branches are defined. Does it have something to do with the abstract partition you get? At first sight, I would not know how they are related; so the answer, so far, is: I don't know. Perhaps they have nothing to do with each other, because the construction goes through many abstract arguments in which we lose control of the geometry in some sense. If you want to do something in this direction, you should look more carefully at each of these steps and try to see what the geometrical explanation of each step is. Okay, but so, in general, for this natural partition of the original map, there is no reason to expect that it behaves well in any sense, in the Markov sense, let's say.
Well, if you have a one-dimensional Markov map, for which the image of each of these intervals is a union of other intervals, then you can actually model it by a non-invertible symbolic dynamical system, looking only at the sequences of forward iterates. Right, if you have a uniformly expanding one, for instance. So, in some sense, you have something non-invertible and you are able to code it using a non-invertible symbolic dynamics. That is much better; it is something I am not able to do here. I do not know how to consider, instead of this two-sided shift, a one-sided shift that codes f. And in some sense this seems difficult, because the inverse branches have some combinatorics going on that is not directly seen at the geometrical level; so it is natural to go to this invertible system. Okay, I see. All right, thanks. Welcome.

So, guys, any other question or comment?

Question: I was thinking about something similar to what Luca said. We have a partition on the natural extension, and I was wondering how much information we lose if we project it. Not that we need to do it, since there is a correspondence between the measures; it's just a curiosity. Yes, this is a question that I also got from the referee of the one-dimensional paper: I don't know what the relation between these is. I answered him: gladly, from the measure-theoretical point of view, the natural extension is enough for us. So you are asking the right questions; I am happy that it looks like you are understanding the lectures. If you have more questions, you can also ask Ermerson Araujo, who is also here and is one of the co-authors of the paper.

Okay, guys, so if no more questions come into play, thank you all for coming to this fourth lecture. I promised not to speak too much today, but I got enthusiastic and started talking all the time, so it took me almost two hours again. But I promise you that in the next lecture, for the benefit of our health, I will not go that far: I will focus mostly on applications, like concrete results, and on some other new ideas that came exactly from this work of Buzzi, Crovisier and Sarig. So thank you all for coming, and I will see you next time, on Tuesday. Thank you.