So I believe no introductions are necessary because Stefano did them properly on the first day. So now we are going to continue the mini course on the construction of Markov partitions for non-uniformly hyperbolic systems. Before saying what we need to do, let me make a quick summary of what we did in the previous lecture. Basically, we tried to understand how to measure hyperbolicity and how to get a better system of coordinates for systems exhibiting some sort of hyperbolicity. In the first context, we considered uniformly hyperbolic maps. We assumed that they are defined on a surface, and based on that, we got many simplifications for the objects that we defined. The first ones were the Lyapunov charts. We defined the Lyapunov charts and we understood that in these charts, the map f is just a small perturbation of a hyperbolic matrix. This is perfect because it allows us to understand the behavior of f along a single trajectory. Then we introduced the notion of graph transforms, in which we take, for example, an almost vertical graph at x and send it to an almost vertical graph at f(x). This was the unstable graph transform, and we could also define the stable graph transform, which takes an almost horizontal graph at f(x) and sends it back to an almost horizontal graph at x. We understood that these two operators are contractions with respect to a particular norm on the space of graphs. Based on these contraction properties, we could iterate these operators: going forward in time and then pulling back almost horizontal graphs to the zero position, we defined the local stable manifold at the point x.
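As a toy illustration of the contraction just mentioned (not the actual operator from the lecture, just the linear model f(x, y) = (Ax, By) with |A| < 1 < |B|, where a "vertical graph" is x = F(y)), the unstable graph transform and its contraction by the factor |A| can be sketched as follows; the specific graphs F1, F2 and the grid are made up for the demonstration:

```python
# Toy unstable graph transform for the linear model f(x, y) = (A*x, B*y),
# with |A| < 1 < |B|.  A "vertical graph" x = F(y) is mapped by f to the
# graph of (Gamma F)(y) = A * F(y / B); Gamma contracts the sup-distance
# between graphs on [-1, 1] by the factor |A|.
A, B = 0.4, 2.5

def gamma(F):
    """Unstable graph transform in the linear model."""
    return lambda y: A * F(y / B)

F1 = lambda y: 0.1 * y + 0.05 * y ** 2   # two sample vertical graphs
F2 = lambda y: -0.2 * y
G1, G2 = gamma(F1), gamma(F2)

ys = [i / 500.0 - 1.0 for i in range(1001)]   # grid on [-1, 1]
d_before = max(abs(F1(y) - F2(y)) for y in ys)
d_after = max(abs(G1(y) - G2(y)) for y in ys)
assert d_after <= A * d_before + 1e-12        # contraction by the factor A
```

Iterating gamma therefore has a unique fixed graph, which in the lecture's setting is exactly the local unstable manifold.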
And similarly, we could go back in time, consider an almost vertical graph at a pre-iterate and push it forward to the zero position; letting the iterate go to minus infinity, these push-forwards to the zero position converge, and this defines the local unstable manifold at the point x. So our goal is now to do the same thing in the non-uniformly hyperbolic context. Recall that non-uniform hyperbolicity means that we have some sort of asymptotic hyperbolicity: non-zero Lyapunov exponents. The first thing that we did was to try to understand what would be the set with some good non-uniform hyperbolicity. For that, we fixed a parameter chi and looked at the set of points at which you have some sort of non-uniform hyperbolicity with rate at least chi. This gave rise to the non-uniformly hyperbolic locus, which we called NUH_chi. This is a subset of the manifold, but the problem is that usually this subset is very bad from the topological point of view. For instance, it is neither open nor closed, and furthermore, all the quantities that we wish to consider inside the set do not vary continuously; in general they only vary measurably. And what are these quantities? They are the three numbers that we introduced: the parameter s, the parameter u, and the angle parameter. I said that they are the quantities that allow us to understand how good the non-uniform hyperbolicity at these points is, okay? Based on these numbers, we were able to introduce a change of coordinates for the derivative which, after composing with the exponential map, finally led us to the notion of Pesin charts. We understood that Pesin charts are the good system of coordinates for understanding non-uniformly hyperbolic systems. Why?
Because in a properly defined neighborhood of the origin (recall that we introduced the parameter Q(x), which is exactly the size of this neighborhood) we get exactly the same results that we got for uniformly hyperbolic maps. So the map represented in Pesin charts, in this window of size Q(x), is also a small perturbation of a hyperbolic matrix. Our next goal is to go further and define graph transforms and invariant manifolds in this context of non-uniform hyperbolicity. So here is our next step: first of all, to define graph transforms for points inside this non-uniformly hyperbolic locus. And what is the problem that we face as soon as we start to think about this? The problem is that the parameter Q, as you see here, depends on x. Before, in the uniformly hyperbolic context, everything was uniform, everything varied continuously. Now, if you change the point, the scale can change drastically. It can be a reasonably big number and then become a very small number, and it can change very much along the orbit of a point. If it does, this is a problem for the graph transform. I will try to explain why this is a problem with this picture. In this left picture, imagine for instance that the parameter Q at x is smaller than the parameter Q at f(x). Whenever you have this, you have problems defining the unstable graph transform. Why? Because imagine that you have here an almost vertical graph of more or less this size, because it sits in this window of size Q(x). If you iterate forward, if you apply the derivative of the map, it expands a little bit in this direction. But if Q(f(x)) is very big, then the image of the graph will not cross this window from top to bottom. So you cannot even define what the image of this unstable graph would be.
It will not be another unstable graph crossing the window from top to bottom. So there is a problem in defining the unstable graph transform if Q at x is smaller, even just a little bit smaller, than Q at f(x). Sorry, I have not got this point. Do you mean that the lengths of the images of the unstable manifolds are getting smaller, or what exactly? No, this length here increases a little bit, right? It increases roughly by the expansion that you have in that diagonal matrix with entries A and B, which is basically of the order e to the chi. But even so, if chi is very close to zero, then Q(x) times e to the chi is roughly Q(x), and Q(x) can be much smaller than Q at f(x). So this new graph, which is usually bigger than this one, in some cases might not cross this window from top to bottom. Okay, thank you. And the same thing happens if you think about the stable graph transform: if Q(x) is relatively bigger than Q at f(x), then a stable graph here, if you pre-iterate it under f, might not cross the window from left to right. So what am I saying? I am saying that if you want to define the graph transforms, you should reduce the window in which you look at the graphs, so that the variation of these windows along iterates is not very big. The solution is to define a smaller parameter, which will be the new window in which we consider the graph transforms, such that you have slow variation along orbits. Okay? Well, how do we do that? We introduce this parameter as follows: we fix once and for all a small parameter epsilon. This parameter has to satisfy a set of inequalities needed for the calculations that we make, but just imagine that it is very small.
Then for each x in the non-uniformly hyperbolic locus, we define this parameter q(x), which is smaller than or equal to the capital Q(x). It is just this infimum here: the infimum of the capital Q along the trajectory, multiplied by these numbers e to the epsilon times the absolute value of n, taken over every integer n. For n equal to zero this factor is equal to one, so this number is smaller than or equal to the old Q. But what are its properties? Before stating the good property that allows us to define the graph transforms, I am going to use this new parameter to introduce a new non-uniformly hyperbolic locus. Instead of looking at all points in the original non-uniformly hyperbolic locus, I only look at those for which this small q, as defined above, is positive. So in some sense, I am looking at the points which have some good non-uniform hyperbolicity with respect to the parameter chi, and for which, in addition, the capital Q does not decrease to zero along the orbit exponentially fast, at rate e to the minus epsilon or faster. This is the translation in words of the condition that q(x) is bigger than zero, okay? And why is small q better than big Q? Because inside the windows of the small q we can indeed define the graph transforms, both stable and unstable. Why? Because, as a matter of fact, this small q satisfies this slow variation property here (a very simple calculation that I leave to you): if you compare the small q at x with the small q at f(x), their ratio is very close to one, between e to the minus epsilon and e to the epsilon. Recall epsilon is a small number, so both the left and the right hand sides are numbers close to one.
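As a tiny numerical sketch of this definition (the Q values below are made up, standing in for chart sizes along a hypothetical periodic orbit), the reduction q(x) = inf over n of e^(epsilon|n|) Q(f^n x) and its slow-variation property can be checked directly:

```python
import math

def compute_q(Q, eps):
    """Given Q(x_i) along a periodic orbit x_0 -> x_1 -> ... -> x_{N-1} -> x_0,
    return q(x_i) = inf over all integers n of e^{eps*|n|} * Q(x_{(i+n) mod N}).
    A term with e^{eps*|n|} * min(Q) > max(Q) can never beat the n = 0 term,
    so the infimum is attained for |n| <= K below."""
    N = len(Q)
    K = math.ceil(math.log(max(Q) / min(Q)) / eps) + 1
    return [min(math.exp(eps * abs(n)) * Q[(i + n) % N]
                for n in range(-K, K + 1))
            for i in range(N)]

eps = 0.1
# Wildly varying chart sizes Q along a hypothetical periodic orbit:
Q = [1.0, 0.001, 0.5, 0.02, 0.9, 0.0005, 0.3]
q = compute_q(Q, eps)

for i in range(len(Q)):
    ratio = q[(i + 1) % len(Q)] / q[i]   # q(f(x)) / q(x)
    # slow variation: e^{-eps} <= q(f(x))/q(x) <= e^{eps}
    assert math.exp(-eps) * (1 - 1e-9) <= ratio <= math.exp(eps) * (1 + 1e-9)
    assert q[i] <= Q[i]                  # q never exceeds Q
```

Notice how the erratic Q is replaced by a q that changes by at most a factor e^epsilon per step, which is exactly what the graph transform needs.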
Because of this, if you start with an almost vertical graph here at x, in the window of size q(x), and you apply the map, then you get something whose size is bigger than q at f(x), and so it indeed crosses the domain whose size is given by q(f(x)). Okay. Any questions on this? So what did I do? Again, I reduced the window in which I look at graphs in order to have slow variation along iterates, so that almost vertical graphs, once iterated, are guaranteed to cross the domain from top to bottom. And likewise almost horizontal graphs here, if you pre-iterate them, will cross this window here from left to right. Then you can restrict and define the graph transforms properly, okay? Well, I will actually do something even better. Why? Because this way I introduced only one parameter to define both graph transforms. But now I will introduce two parameters: one that controls the window on which the stable graph transform is defined, and another that controls the window on which the unstable graph transform is defined. And why is this important in the non-uniformly hyperbolic context? Well, the behavior in the stable directions can be very different from the behavior in the unstable directions, so if we allow ourselves to measure these behaviors at separate scales, we will be able to understand each of them much better. This will play a crucial role in our next talk, next Tuesday. So I will introduce these parameters now, and they will come back in the next talks. What are these parameters that allow us to define the graph transforms at different scales? They are just the one-sided versions of the q that I just defined: the q^s, the q in the stable direction, is the same infimum, but only over iterates bigger than or equal to zero. I only look at the future, and then I define this small q^s as it is here.
And the small q^u is the same definition of q, but only looking at the past of the trajectory: I consider the same infimum, but only over negative iterates of my point x. Once we do this, what do we get? We get a recurrence property of these two quantities, which is the following. The q^s at x is basically the biggest size that we can get coming from the q^s at f(x). In which sense? Well, you take q^s at f(x), and if you iterate backwards, you expect that stable manifolds will grow at least by this factor. Then you define the window in which you see the stable graphs at the position x as the minimum between this growth and the capital Q; recall that the capital Q is the window that guarantees hyperbolicity when we consider Pesin charts, okay? So you should picture this equality by looking at this picture below. What is this picture? It is the reason we can actually define the stable graph transform at these sizes. Imagine an almost horizontal graph at the position f(x) with size q^s(f(x)). What happens if you pre-iterate it under f? We know that this almost horizontal graph gets stretched horizontally a little bit, roughly by this factor e to the chi. Then the image of this graph will be at least of size e to the chi times the original size q^s(f(x)). Because epsilon is very small, this is bigger than e to the epsilon times q^s(f(x)). So the image of this graph at position x is at least of this size, and if I restrict to this size, then I guarantee that the graph transform is well defined. This image has size bigger than q^s, because q^s is the minimum between this size and the capital Q. I take the capital Q only to guarantee that I am still in the domain in which I see a small perturbation of a hyperbolic matrix.
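In symbols, the one-sided scales and the recurrence just described should read as follows (my reconstruction of the slide, following what I believe is Sarig's convention; exact constants may differ):

```latex
q^s(x) = \inf_{n \ge 0} e^{\varepsilon n}\, Q(f^n x),
\qquad
q^u(x) = \inf_{n \le 0} e^{\varepsilon |n|}\, Q(f^n x),
\qquad
q(x) = \min\{q^s(x),\, q^u(x)\}.
```

Splitting off the n = 0 term of each infimum gives exactly the recurrences from the lecture:

```latex
q^s(x) = \min\bigl\{\, e^{\varepsilon}\, q^s(f x),\ Q(x) \,\bigr\},
\qquad
q^u(x) = \min\bigl\{\, e^{\varepsilon}\, q^u(f^{-1} x),\ Q(x) \,\bigr\}.
```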
And I take this minimum because then I guarantee that the image crosses from left to right, okay? In the same way, the unstable version of the small q satisfies this recurrence property here, which is again the justification for the unstable graph transform at x being well defined: if you take an almost vertical graph of this size and iterate it forward, it has to grow a little bit, and if you restrict yourself to this size, which is the minimum between these two values, then you have a properly defined unstable graph transform, okay? The good thing is that now, as I told you, we can define the stable and unstable graph transforms at different scales: for the stable we use q^s, and for the unstable we use q^u. In some sense these two parameters are the largest scales at which we can define the graph transforms properly, okay? So they are the best ones for this business of graph transforms. What do we do now? Now that we succeeded in defining the graph transforms, let me be more precise about what the graph transforms are in this context. The stable graph transform acts on graphs of almost horizontal maps defined on this domain. What do I mean by almost horizontal? Well, I require that the value and the derivative at zero are zero, and in order to control the almost horizontality, I require that the Hölder constant with parameter beta over two of the derivative is bounded by one half. This one half is enough to guarantee that if you take the graph of a function at f(x) satisfying these properties, its image under the graph transform will also satisfy these three properties. So you indeed get a map defined from M^s(f(x)) to M^s(x), okay? And similarly you define M^u for the unstable direction.
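A plausible way to write the space of admissible stable graphs just described (my reconstruction; the precise constants are the ones on the slide) is:

```latex
\mathcal{M}^s(x) = \Bigl\{\, F : [-q^s(x),\, q^s(x)] \to \mathbb{R} \ :\
F(0) = 0,\ \ F'(0) = 0,\ \
\mathrm{H\ddot{o}l}_{\beta/2}(F') \le \tfrac12 \,\Bigr\},
\qquad
\mathrm{H\ddot{o}l}_{\beta/2}(F') = \sup_{t \ne s}
\frac{|F'(t) - F'(s)|}{|t - s|^{\beta/2}},
```

with the stable graph transform then being a contraction from M^s(f(x)) to M^s(x), and symmetrically for the unstable spaces M^u.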
So you define again M^u, taking only the almost vertical graphs which satisfy a property like this one here, and then if you take a graph at position x with these properties, its image will be a graph at position f(x), again with these properties. This is great, because now we have changed the scales in a satisfactory way such that both the stable and unstable graph transforms are well defined, and furthermore, as before, they are contractions. Excuse me, really quick question. There is a parameter beta in the definition of those graph transforms; what was beta? So I recall that our map f is defined on a surface, of dimension two, and we assume that this map is C1 plus beta. Ah, okay, beta is the Hölder constant. So beta is the exponent of the Hölder regularity of the derivative. Ah, okay. Okay. And if you don't remember, we were able to write f_x, which is the map seen in Pesin charts; this map is close to a hyperbolic matrix, and we are able to control the beta over two Hölder constant of the derivative of these maps. Okay. Got it. Thank you very much. Okay. So now we do exactly as in the case of uniform hyperbolicity: we define local invariant manifolds using the contraction properties of these two graph transforms. We do the same as before, and the conclusion is the so-called Pesin local invariant manifolds. Okay. This gives a way of constructing the famous Pesin stable and unstable manifolds of a point. Observe that at the point x, I get for the stable direction an almost horizontal curve which has roughly the size q^s(x). This shows that in the non-uniformly hyperbolic context, the size of the local stable manifold depends directly on the quality of the hyperbolicity at the point.
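Schematically, the resulting Pesin local stable manifold is the chart image of the fixed graph of the iterated transform (again a hedged reconstruction of the statement on the slides, with Psi_x the Pesin chart at x):

```latex
V^s(x) = \Psi_x\bigl\{\, (t,\, F^s(t)) \ :\ |t| \le q^s(x) \,\bigr\},
\qquad
f\bigl(V^s(x)\bigr) \subseteq V^s\bigl(f(x)\bigr),
```

where F^s in M^s(x) is the unique fixed graph obtained by pulling back admissible graphs along the forward orbit, and similarly V^u(x) has size roughly q^u(x).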
And it can change drastically if you move x to a nearby point, because all of the quantities involved in defining both the small q and the small q^s do not vary continuously with the point. So the downside of doing the same construction in this non-uniformly hyperbolic setting is that the sizes of the local invariant manifolds depend on the quality of hyperbolicity at the point, which in general is not continuous at all. All right. Okay. So we successfully understood how to define these invariant manifolds in the non-uniformly hyperbolic context. We did this for dimension two, so the next step is to do it in higher dimensions. How do we do it in higher dimensions? This is based on the work of Ben Ovadia. The idea is to mimic in higher dimension what we did in low dimension. First of all, we have to look at the set of points that we are interested in: what would be the good space of points with some non-uniform hyperbolicity with parameter chi? We do something similar. We consider the set of points for which the tangent space at x has a decomposition into two subspaces, stable and unstable, where now the stable direction might have higher dimension. We require that for every vector in the stable direction, so here it should be for every v in E^s(x), a kind of Lyapunov exponent is smaller than or equal to minus chi if you iterate forward, and you see some kind of expansion if you iterate negatively. And for the unstable direction, the assumption is that for every v in E^u(x) you have something similar, recalling the symmetry between the stable and unstable directions here: vectors in the unstable direction should contract with rate at least minus chi in the past, and they should expand in the future. Again, recall that we are not assuming that the Lyapunov exponents exist.
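Written out, the asymptotic conditions just described should look roughly like this for nonzero vectors (my hedged transcription; the slide's exact form may differ in which limits are lim sup and lim inf):

```latex
v \in E^s(x)\setminus\{0\}:\quad
\limsup_{n \to \infty} \tfrac1n \log \| df^{\,n}_x v \| \le -\chi,
\qquad
\liminf_{n \to \infty} \tfrac1n \log \| df^{\,-n}_x v \| \ge \chi,
```

and symmetrically for v in E^u(x), with the roles of f and f^{-1} exchanged.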
We are only assuming a lim sup here and a lim inf here, and we do not require these values to be equal, which is usually the situation when you define Lyapunov exponents. So this is a weaker notion than having a well-defined Lyapunov exponent. We also had this in dimension two. Yuri, is there any example we should keep in mind when we think about this concept in high dimensions? A good example, a simple one. Well, we have many maps which are non-uniformly hyperbolic but not uniformly hyperbolic. For instance, Viana maps are well-known examples of non-uniformly expanding maps. They have two exponents, both actually bigger than zero. This is a little bit different because the map is non-invertible, but this is one nice example to have in mind. Okay, Yuri, just referring to that: the Viana maps paper has a second part, with the Viana diffeomorphisms, which are diffeomorphisms. Ah, okay, I didn't know about that. That hasn't been much explored. For instance, the existence of SRB measures for them is not yet known. I did the first part in my PhD thesis; the second part is still open. Good, and what is the context? Is it dimension two or higher? It's dimension five. Five? Because it's a solenoid times a random map. The endomorphism is an expanding map of the circle times quadratic maps, but since you want it invertible, you replace the expanding map of the circle by, I think, a solenoid, something like that. Well, passing from the expanding map to the solenoid is natural, and then from the quadratic maps to the Hénon maps. So I think it's dimension five. Okay, okay. And it's a non-uniformly hyperbolic, invertible diffeomorphism. So it will have three Lyapunov exponents smaller than zero and two bigger than zero. Yes, yes, okay. Yeah, so this is a good example, the Viana diffeomorphisms. Okay. Well, let me return here. So this is the high dimensional version.
I just assumed that you have a splitting in which the stable vectors have this kind of non-uniform hyperbolicity and the unstable ones have this one. But recall, we not only ask for that: we also ask for the finiteness of certain sums, the sums we use to measure how good the hyperbolicity in these directions is. In this case you have many sums, because for each vector v in the stable direction you can define its sum, exactly by taking the same expression here. So in this context, you have to assume that these quantities are uniformly bounded over all unit vectors in the stable direction: the supremum over all stable vectors with modulus equal to one is finite. You require that the S of x is finite, and similarly that the U of x, defined as the supremum of the capital U over unit vectors in the unstable direction, is also finite. This is something stronger than only saying that the Lyapunov exponents are smaller than or equal to minus chi or bigger than or equal to chi, okay? Obviously, at points where the Lyapunov exponents are strictly bigger than chi and strictly smaller than minus chi, these numbers will be uniformly bounded and so S and U will be finite. But with our setting, and this is important for us, we also allow points with Lyapunov exponent equal to chi for which these sums are finite, and we also allow points at which the Lyapunov exponent is not even defined. This is also important to understand, okay? All right. So once you have this notion of the non-uniformly hyperbolic locus in higher dimension, you can do everything as we did in dimension two. You can define the linear maps that allow you to diagonalize the derivative of the map.
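One plausible form of this finiteness requirement, patterned on the two-dimensional parameters s(x), u(x) (hedged: the exact normalization is the one in the lecture notes, not necessarily this one), is:

```latex
S(x) = \sup\bigl\{\, s(v) \ :\ v \in E^s(x),\ \|v\| = 1 \,\bigr\} < \infty,
\qquad
U(x) = \sup\bigl\{\, u(v) \ :\ v \in E^u(x),\ \|v\| = 1 \,\bigr\} < \infty,
```

where, for example in Sarig's two-dimensional convention, s(v)^2 = 2 \sum_{n \ge 0} e^{2\chi n} \| df^{\,n}_x v \|^2, so that finiteness of S(x) forces the whole tail of contraction rates, not just the exponent, to be controlled.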
After you compose them with the exponential maps, you get Pesin charts in this higher dimensional context, and so on and so on. You can define again q^s and q^u, after defining the capital Q, and then you define stable and unstable graph transforms. All of this gives rise to local invariant manifolds as before. All right? So with this, I more or less complete the construction of local Pesin invariant manifolds for non-uniformly hyperbolic diffeomorphisms in any dimension. We assume that everything here varies well; we assume that f is C1 plus beta. What is my next goal? Well, I told you in the beginning that the setting of these constructions of Markov partitions allows us to consider not only regular maps, diffeomorphisms, but also some classical maps in dynamics, like billiard maps or flows, in which, in the way that you study them, the derivative of the map might be discontinuous, or even worse, the derivative of the map might go to infinity. How do we adapt this machinery for maps like that, either discontinuous, or even worse, discontinuous with unbounded derivative? This is what I want to explain now. Yuri, Yuri. Yes? May I ask something? So you have these two loci, NUH chi and NUH chi star, right? And if we want the usual, let's say complete, description, at least up to the invariant manifolds, we only have it for the locus with the star, right? Exactly. So we don't have these objects for the locus without the star? No, no. And if we add the star here, do we expect to lose many points, or how is that? We expect to lose those points where the variation of the capital Q is very big, where the capital Q converges exponentially fast to zero along the trajectory, okay? But nevertheless, in terms of invariant measures, the one with the star is enough.
So if you take an invariant measure with, for instance, the exponent chi_1 bigger than chi, then it will live inside the star locus. Okay, okay, thank you. So for ergodic theory, the star is enough. Good. And don't worry, it does not stop there: I defined NUH chi and then NUH chi star, but next lecture I will also define NUH chi sharp. That will be yet another one, and it is this one that we can satisfactorily code, okay? Because so far I did not say anything about recurrence: I defined things without requiring recurrence, and in dynamics we usually only look at points that satisfy some recurrence. This sharp will come into play when we require some recurrence as well. Okay, thanks. Okay. So now I will discuss how to do this construction for maps that might have discontinuities but still have bounded derivative. What is the setting? Again I consider a surface, but now inside the surface I allow a closed subset S, called S because it represents the singular set; in this context it is just the set of discontinuities of the map that we consider. We consider a map that is not defined on S, only on the complement of S, and we require it to be C1 plus beta there. It might be discontinuous, because it loses continuity next to S, but we assume that it has bounded derivative. What is the example to have in mind? Well, before giving the example, let me tell you what new problem comes into play in this context. The problem is that we were defining Pesin charts, and Pesin charts were defined in the window Q(x): they are regions in which you see your map f as a small perturbation of a hyperbolic matrix. But now trajectories, orbits, might approach this singular set S very fast.
And whenever you have a point x very close to the singular set, of course the Pesin chart cannot go beyond the singular set, otherwise you cannot define this composition, the f_x, at these places. So necessarily, at those positions very close to the singular set, you have to reduce the size of the Pesin chart so that its image does not intersect the singular set. And then you might have a big problem, because imagine for instance that the trajectories of all points approach the singular set exponentially fast. If they do, you have to make Q exponentially small, and then perhaps the q's converge to zero exponentially fast, and everything that we did in order to define the graph transforms no longer works. In other words, the problem is that when we want to analyze NUH, which exhibits some sort of hyperbolicity, perhaps the hyperbolicity that you see in NUH does not prevail over the effect of the discontinuities. We have to take this into account: we have to restrict our study to those points whose orbits do not approach the singular set exponentially fast. This is the main problem that occurs in this business when you have a set of discontinuities for your map. What is the example to have in mind, where we actually applied the techniques to get new results? The example is flows. Imagine for example that you have a flow on a three-dimensional manifold; it can be the geodesic flow of a surface. We assume it has positive speed, which the geodesic flow satisfies. I told you that the usual way of analyzing such flows is to construct a return section and then analyze the Poincaré return map. So you can pass from a flow to a map.
If you construct a global Poincaré section, in this case of dimension two, since the manifold has dimension three, then the goal is to understand the flow by understanding the Poincaré return map of the section. Usually this section is made of many small disks: the usual way of constructing these sections is to place many small disks that capture all trajectories of the flow. The problem comes exactly because the small disks have boundaries, and the boundaries are exactly the source of the discontinuities of your return map. For example, this curve here will be a curve of discontinuities for f itself, because points on this curve go to the boundary of the next disk. If you take points a little bit to the left, you still hit this disk; but if you go a little bit to the right, you no longer hit this disk (let me put it here) and you go to another disk somewhere else in the construction. So this constitutes a curve of discontinuities for your map f, and similarly you have a set of discontinuities for the inverse of the map. So if you want to say something about the original flow, you should do the construction for this map, which naturally has discontinuities but has bounded derivative. Why bounded? Well, you just need to choose the disks so that the time it takes for points to go from one disk to the next is bounded; then the derivative of this map is uniformly bounded. So we are exactly in this context: discontinuities with bounded derivative. Okay, so this is the example to have in mind. Well, how do we deal with this situation? As I told you, you have to restrict your study to the points that do not approach the singular set exponentially fast. So we redefine the non-uniformly hyperbolic locus as the set in which you still see the non-uniform hyperbolicity.
That is a property of the derivative of the map, but you also have to impose something about the orbit itself, and it is exactly the sub-exponential convergence to S: you only look at points whose trajectory does not converge exponentially fast to S. This is the translation of that condition. So this gives rise to a subset of your manifold, as before, and it is for this subset that we can mimic the construction that we did; then we can construct invariant manifolds for points inside this subset. How do we do it? We have to redefine the Pesin charts so that they do not go beyond the singular set. We just define a new size for them by taking the old size times the distance of the point to the singular set; then, near the singular set, your Pesin chart will not go beyond it. And because you only changed it by this factor, and you are only looking at points that satisfy this sub-exponential convergence, the capital Q's that we will define in a minute will also, most of them, go to zero only sub-exponentially fast. How do we redefine the capital Q? Again, we just have to add a new term. Recall that the capital Q before was a number that made sure we saw uniform hyperbolicity at this small scale; it was a constant times a very negative power of this norm here. Now we also have to take into account that x might be very close to the singular set, so we take the old Q and take its minimum with this rho here. This rho is the term added to the definition of the capital Q, and rho is just the smallest distance that x, its pre-image and its image have from the singular set S. Okay, so like this, the capital Q is a good size for you to see the uniform hyperbolicity without going beyond the singular set.
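Concretely, the added term should be something like the following (my reconstruction of the formula on the slide; the power of rho may differ):

```latex
\rho(x) = \min\bigl\{\, d(x, \mathscr{S}),\ d(f x, \mathscr{S}),\ d(f^{-1} x, \mathscr{S}) \,\bigr\},
\qquad
Q_{\mathrm{new}}(x) = \min\bigl\{\, Q_{\mathrm{old}}(x),\ \rho(x) \,\bigr\},
```

and the sub-exponential convergence condition on orbits reads \lim_{|n| \to \infty} \tfrac{1}{|n|} \log d(f^n x, \mathscr{S}) = 0, i.e., the distance to the singular set may go to zero, but slower than every exponential.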
Once you have this, you can then redefine the NUH sharp, which is just looking at the points for which the capital Q does not go to zero exponentially fast. And then inside this set, you define the q^s and q^u and you apply the graph transforms as we did before. And the conclusion is that inside NUH star, you have local invariant manifolds. Okay, Lucas? Okay, I'm thinking about it, okay? Yeah, so it's only inside the star subset that we guarantee the existence of invariant manifolds. So what we did was just to take into account the boundary effect by introducing some terms into the definitions, so that with these new parameters we can redo the machinery that we discussed so far. Great. Well, then we can get results for flows as well. But we still cannot get results for billiards. Why? Because billiards are actually maps with discontinuities, but with unbounded derivative. The unbounded derivative introduces a new difficulty: in places where the derivative is going to infinity, you could have a huge distortion, and having bounded distortion is an important feature of our construction. So if we want to deal with this class of maps, we should make a new decrease of the parameters, a new reduction in the sizes of the Pesin charts, so that inside these new sizes you have a sort of bounded distortion. So what is the setting here? The setting is similar to the previous one. We have a surface and we have a closed subset on which our map is not defined. This set S here is the singular set. And then we consider a map defined on the complement of the singular set, which is C1 plus beta, but for which we now allow the derivative to explode. But we don't allow it to explode in any way. We only allow it to explode slower than a polynomial of the distance to the singular set.
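The condition "does not go to zero exponentially fast" can be probed numerically. Below is a finite-horizon sketch (an illustration only, not a proof, and the tolerance is ad hoc) of the criterion that the exponential growth rate of a sequence q(n) is zero rather than a negative constant.

```python
import math

def decays_subexponentially(q, n_max=2000, tol=1e-2):
    """Finite-horizon numerical check that q(n) does not go to zero
    exponentially fast, i.e. that (1/n) * log q(n) tends to 0 rather
    than to a negative constant."""
    return max((1.0 / n) * math.log(q(n)) for n in range(n_max // 2, n_max)) > -tol
```

A polynomially decaying sequence passes the check; a geometrically decaying one fails it.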
So if you are at distance epsilon from the singular set, then you require that your derivative is between epsilon to the a and epsilon to the minus a. This is exactly the situation that occurs for billiards. So the examples to have in mind are exactly those same billiards that I mentioned to you in our first lecture, okay? As you approach the singular set in these billiards, the derivatives degenerate, but they can only explode polynomially fast with respect to the distance to the singular set. So here you understand: as long as the degeneracy occurs polynomially fast with respect to the distance, we expect that the hyperbolicity will prevail over this bad behavior. It's exactly, again, the feature that I mentioned, that was developed by Sinai and later explored by Katok and Strelcyn: you might have a bad behavior going on, but as long as this bad behavior is only polynomial, the hyperbolicity that your system has will beat the bad behavior that you see as soon as you approach the singular set. So let us just try to understand how we redefine the capital Q in this setting. And now we get an ugly formula for it. Here it is. You have to take into account more things. So actually we not only get the minimum between this value and this value, but we take the minimum of the product of these two values also with this new value that is appearing here, which is C of x to the minus one. Well, why do we need this ugly formula for capital Q? Because these numbers are exactly those that we have to control when we do the construction. So it's a feature of the proof, but philosophically it's just saying that you consider a very negative power of what you had before, together with a very positive power of the distance of your point to the singular set. That's it. As long as you stay inside this small window, the bad behavior of the derivative does not influence that much. That's the idea.
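Written out, the polynomial bound on the derivative reads (with a fixed exponent a and d(x, S) the distance to the singular set):

```latex
\exists\, a \ge 1:\qquad
d(x,\mathscr{S})^{\,a} \;\le\; \|df_x\| \;\le\; d(x,\mathscr{S})^{-a}
\qquad \text{whenever } d(x,\mathscr{S}) < 1 .
```

So the derivative may degenerate or blow up as you approach S, but only at a polynomial rate in the distance.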
If you do this, and if you rerun all the machinery, well, of course you encounter some new difficulties that you have to deal with, but you can go forward and again define the local invariant manifolds for these maps, okay? So the conclusion is that in this billiards context, for points that are again in the NUH chi star, you have local invariant manifolds. So now I have completed what I wanted to tell you about the existence of local invariant manifolds for these three different contexts. Diffeomorphisms, which are the most regular ones; we did almost all of the details for this. And then we used the analogy with what we did in order to deal with the two other cases. First, maps with discontinuities but bounded derivative; the example to have in mind is return maps of flows. And second, maps with unbounded derivative; the example to have in mind is dynamical billiards. We now want to go beyond and try to understand how we employ these techniques to construct the Markov partitions for these examples. But before, I need to tell you more or less what a Markov partition is. And I need to explain to you how to do the construction of a Markov partition in the easier context of uniform hyperbolicity. Because in uniform hyperbolicity, everything varies continuously, so the problems that we encounter are not as hard as the problems when we consider non-uniformly hyperbolic systems. So the next topic will be discussing Markov partitions. There are actually many ways of making this discussion, and we will focus on the approach of Bowen that uses pseudo orbits. I believe, in fact I'm sure, that this approach is properly explained in that small book of Bowen, Equilibrium States and the Ergodic Theory of Anosov Diffeomorphisms, I think from 1975. So this approach came up in the 70s. So first of all, what is the informal definition of a Markov partition? Well, a Markov partition is just a family of sets; it's actually a partition of your space in the simplest situation.
These sets I'm going to call rectangles. And I require that these rectangles satisfy a Markov property. The Markov property is an intersection property, and I'm going to explain to you with these pictures what would be the good intersection that defines the Markov property and what would be the bad one. So the Markov property is all about the following: whenever the image of one of the rectangles intersects another rectangle, then it intersects all the way from one side to the other. Here you should imagine, contrary to what I was doing, that the horizontal direction is the unstable one and the vertical direction is the stable one. So after you iterate R, this rectangle gets stretched horizontally and contracted vertically. And the requirement is that if this image intersects S, then the intersection has to happen all the way from left to right. So here is one example of the good intersection that we require. And here is one example that does not satisfy the Markov property, because this image F of R intersected the left-hand side of the rectangle but did not go all the way to the right-hand side of this rectangle S. And why is this Markov property important? It is important because it indeed allows us to solve the problem that we want. If you have a family of rectangles satisfying the Markov property, in other words, if you have a Markov partition, then you can construct a symbolic model for your original dynamics. Recall that the symbolic model is the paths on a graph together with a coding map. So what would be the paths on a graph that we consider once we have the Markov partition? Well, it's simple. The set of vertices would be the set of rectangles, and you draw an edge from one rectangle to another if its image intersects the other. Like this, you are able to understand, from the graph point of view, all possible transitions between rectangles.
And the Markov property, this good intersection property, allows you to associate to each path on the symbolic space a unique point in the manifold, in a way that this coding map intertwines the dynamics of F and of the left shift on the symbolic space, which is exactly the goal that I mentioned in lecture one, okay? So the conclusion is that our goal now, in order to obtain what we wanted, is to construct the Markov partitions. So let's try to do it. Sorry, it's not clear to me why the Markov property is important to get the conclusion. So the Markov property is important because imagine that you have a path on the graph and you want to find a point in the manifold that is following exactly the same rectangles that the path is giving you, okay? So for instance, let me draw it in blue here. If you have an edge going from R to S and an edge going from S to T: an edge going from R to S means that you have a non-trivial intersection here, so there is a point that is in R whose image is in S. And the second edge tells you that there is a point that is in S whose image is in T. And you want to conclude that there is a point that is in R, whose image is in S, and whose second image is in T. If you don't have the Markov property, maybe this is not satisfied. For instance, if the intersection from R to S is like this and the intersection from S to T is also like this, then the intersection of F squared of R with T will be empty. So you will not have a point that follows these three rectangles. All right, thank you. But if the intersections go all the way from one side to the other, then you can always concatenate edges and find a single point that follows this trajectory. Okay, you're welcome. So now our goal is exactly to construct this family of rectangles with this good intersection property. We are going to do it exactly in the way that Bowen did. There is actually a huge literature on this construction for uniformly hyperbolic systems, which is what I want to discuss right now.
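To make the symbolic model concrete: once each rectangle is a vertex and each allowed transition an edge, admissible itineraries are exactly paths on the graph, and their number can be read off from powers of the adjacency matrix. A small sketch (the 2-by-2 matrix in the test is a made-up example, the "golden mean" graph, not one coming from the lecture):

```python
def mat_mult(A, B):
    """Multiply two square integer matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def count_admissible_words(adj, length):
    """Number of paths visiting `length` vertices on the transition graph,
    i.e. admissible words of that length in the subshift of finite type:
    the total entry sum of adj^(length-1)."""
    n = len(adj)
    M = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # identity
    for _ in range(length - 1):
        M = mat_mult(M, adj)
    return sum(sum(row) for row in M)
```

For the golden mean graph the counts are Fibonacci numbers, which is the standard sanity check.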
So Markov partitions were first constructed for uniformly hyperbolic diffeomorphisms in the late 60s by Sinai and by Adler and Weiss. Later this construction was extended by Ratner for uniformly hyperbolic flows and by Bowen for uniformly hyperbolic systems like Axiom A diffeomorphisms and so on. And Bowen also, as I told you, gave this different approach that uses pseudo orbits. So the original proofs, in some sense, each use a different approach. Adler and Weiss use a very geometrical approach; they consider hyperbolic toral automorphisms. Sinai and Ratner use the approach that was developed by Sinai, which is called the method of successive approximations. The method of successive approximations, unfortunately, is very hard to generalize. It works well for uniformly hyperbolic systems, but it works badly for non-uniformly hyperbolic ones. This is perhaps one of the reasons that Sarig, when he was working on this construction for non-uniformly hyperbolic systems, considered the approach of Bowen, which is much easier to generalize. So this is what we are going to discuss now: Bowen's approach for the construction, which uses the notion of pseudo orbits. Well, what is a pseudo orbit? I believe that everybody has seen it before. It is just a sequence of points which is almost an orbit, up to some small error delta. So if you give me a delta, I can define what a delta pseudo orbit is: it is a sequence of points that satisfies these nearest neighbor conditions. The image of x_n is delta close to x_{n+1}, and the pre-image of x_{n+1} (so here it should be x_{n+1}, let me fix it now) is delta close to x_n. If you have this, you call this sequence a pseudo orbit, and this concept plays a crucial role in the construction of Bowen.
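As a sanity check of the definition, here is the nearest-neighbor condition tested on Arnold's cat map (the choice of map and of the sup metric on the torus is mine; any hyperbolic toral automorphism would illustrate the same point):

```python
def cat_map(p):
    """Arnold's cat map (x, y) -> (2x + y, x + y) mod 1 on the 2-torus."""
    x, y = p
    return ((2 * x + y) % 1.0, (x + y) % 1.0)

def cat_map_inv(p):
    """Inverse of the cat map: (x, y) -> (x - y, 2y - x) mod 1."""
    x, y = p
    return ((x - y) % 1.0, (2 * y - x) % 1.0)

def torus_dist(p, q):
    """Sup metric on the torus, accounting for wrap-around."""
    return max(min(abs(a - b), 1 - abs(a - b)) for a, b in zip(p, q))

def is_delta_pseudo_orbit(points, delta):
    """d(f(x_n), x_{n+1}) <= delta and d(f^{-1}(x_{n+1}), x_n) <= delta for all n."""
    return all(
        torus_dist(cat_map(points[n]), points[n + 1]) <= delta
        and torus_dist(cat_map_inv(points[n + 1]), points[n]) <= delta
        for n in range(len(points) - 1)
    )
```

A true orbit is a delta pseudo orbit for every delta, and a big jump at a single time destroys the property.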
But before explaining the construction of Bowen, I'm going to rewrite this definition by introducing a notion of epsilon-overlap, which in our context of uniform hyperbolicity will be totally tautological, but which will play a crucial role for us in the next lecture, because for non-uniformly hyperbolic systems we will use this notion very strongly. So in some sense, I just want to rewrite this notion of pseudo orbits that we all know in terms of this new notion of epsilon-overlap for the Lyapunov charts that we have already introduced. Okay, so let's do that. What do I mean by an overlap of two Lyapunov charts? Well, I told you, in this context it is almost a tautology. I will say that two charts epsilon-overlap if their centers are delta close. And where is the epsilon appearing? Actually I did not write it, but the delta will be chosen as epsilon over a large constant L. This large constant is, for example, the Lipschitz constant of the map, okay? So epsilon-overlap means that the two points are in some sense epsilon close, and then the images of the Lyapunov charts are almost identical, one with respect to the other. So I'm saying that these two Lyapunov charts are overlapping in this sense: their images are overlapping. How can I use this notion in order to define a pseudo orbit? Well, it's easy. You allow a transition from one Lyapunov chart to another if you have the nearest neighbor conditions: if the Lyapunov chart at F of x overlaps with the Lyapunov chart at y, and if the Lyapunov chart at the pre-image of y overlaps with the Lyapunov chart at x. Again, in some sense this is basically saying that the distance of F of x to y is smaller than delta and the distance of F inverse of y to x is also smaller than delta. So it is nothing but the same condition that we require in the pseudo orbits that we know; I'm just rewriting it in terms of the Lyapunov charts.
Then a pseudo orbit will be just a sequence of Lyapunov charts such that you have an edge between consecutive ones. Okay? Again, seeing pseudo orbits in this way seems like we are complicating too much something that was very simple. But it is important to present it like this, because when we go to the non-uniformly hyperbolic situation we will actually consider Pesin charts, and not only Pesin charts: we will consider a double version of them, which will be called double Pesin charts, and then looking at pseudo orbits in this way will be much more convenient for us. Okay, so what do we do with this notion? Well, the idea now is exactly to use these pseudo orbits to do as we did before. Before, we were able to understand the behavior of F in the neighborhood of an orbit by means of the Lyapunov charts of its points. Now we want to do the same thing, but in an approximate way. Instead of looking at an orbit, we look at a pseudo orbit, but we still want to do the same thing: we still want to understand F in a neighborhood of the centers of these Lyapunov charts. Okay? So here's the picture. In some sense, we are considering these pseudo orbits and we want to understand the behavior of F in the neighborhoods of these points. What can we do? Well, similarly, for each edge that we have, we can try to represent F with respect to these two Lyapunov charts. So here's the theorem. If the Lyapunov chart at F of x overlaps with the Lyapunov chart at y, then I can represent F using these two Lyapunov charts: the one at x, and the one at y composed to the minus one. This gives rise to a map f_{xy}, and the conclusion is exactly the same as we had in lecture one: this map is going to be a small perturbation of a hyperbolic matrix, and we can control the error term here, but now we can control the error term only in the C1 plus beta over three norm. Recall, you should compare this with what we did for f_x. f_x was F seen in the charts given by x and F of x.
And there we were able to control the error term in the C1 plus beta over two norm. Now we are changing this chart to Psi_y inverse, and then, in order to still have a control of size epsilon, we have to decrease a little bit the exponent in which we are able to make this control. So we decrease from beta over two to beta over three in this exponent. Okay? Yuri. So this epsilon, the last one on your slide, it appeared before and it was chosen arbitrarily, right? Yes, it is a small number that, in some sense, we will choose very small, but we only know how small it is after we make the whole proof. Okay, but so... Because I have a bunch of inequalities that it has to satisfy. But the epsilon that's appearing in the pseudo orbit definition here is the same epsilon, or just... It's the same epsilon. Okay. Yeah. Okay, thank you. All right, so why does this happen under this weaker assumption? Only assuming that you have these overlaps, why is this map also a small perturbation of the same hyperbolic matrix? The reason is simple: because this map is actually a small perturbation of the previous one. So this map, if you put Psi at F of x here in the middle, is the composition of these two maps with the previous f_x that we had. f_x is a small perturbation of a hyperbolic matrix in the C1 plus beta over two norm. And this map here is actually very close to the identity, exactly because F of x is very close to y. So the overlap guarantees that the composition of these two charts is very close to the identity. So this map is a small perturbation of the previous one. And since the previous one is a perturbation of a hyperbolic matrix in the C1 plus beta over two norm, this one will be a perturbation of the same hyperbolic matrix in a slightly weaker norm. So the proof of this is very simple.
Of course, you have to do all the calculations in order to estimate the C1 plus beta over three norm of this map by looking at it as a perturbation of the map f_x, but this is the only idea that you need: to see it as a small perturbation of f_x. Okay, so now that we have these new maps, which are like hyperbolic, we can try to do the same graph transforms that we did before. Being hyperbolic, or close to hyperbolic, allows us again to see expansion in the vertical direction and contraction in the horizontal direction, so graphs will have the same behavior as they had before. So now I want to reintroduce graph transforms, but no longer for a true orbit, only for a pseudo orbit. So what do I do? Whenever I have an edge going from this Lyapunov chart to that one (remember, this just means that F of x is close to y and F inverse of y is close to x), I can define the graph transforms as I did before. Now the stable graph transform will take an almost horizontal graph at y and send it to an almost horizontal graph at x. And the unstable will send an almost vertical graph at x to an almost vertical graph at y, okay? So the picture here is exactly the same. The only difference now is that, because F of x is not exactly equal to y, even if you start with a graph here passing through the origin, its image will not pass through the origin anymore. It will be close to the origin, but not necessarily going through the origin, okay? But it will still be almost horizontal. So in some sense, we have to redefine this object, allowing f of zero to be different from zero and f prime of zero to be different from zero as well, but a similar construction can be done. And again, because of the hyperbolicity of those maps f_{xy}, we get that these two graph transforms will again be contractions. And once you see contractions, you do what a dynamicist does.
You iterate it many times. So you define the stable and unstable manifolds associated to this sequence of charts. If you give me a pseudo orbit, which in the way that I view it is a sequence of Lyapunov charts, you can associate to it the stable manifold of this pseudo orbit. What do you do? You go far into the future, you consider an almost horizontal graph, and then you pull it back to the zero position by means of all the stable graph transforms. And as you make n go to infinity, these objects converge to an almost horizontal curve at the zero position, which is the stable manifold of this pseudo orbit. What is it? It is exactly the set of points that follow the pseudo orbit under the forward iterations of F. So I was able to relax a little bit the property of having an orbit to only having a pseudo orbit, but I can still recover the notion of stable manifold, which is exactly the set of points that follow this trajectory in the future. And similarly, I can define the unstable one. For the unstable, I go far into the past, I take an almost vertical graph, and then I push it forward to the zero position by means of the unstable graph transforms. And because each of these unstable graph transforms is a contraction, I can take the limit and get that, in the limit, this converges to an almost vertical curve here, which is the set of points whose negative trajectory is close to the points x_n. Okay? I'm sorry, these invariant manifolds for the pseudo orbit in the uniformly hyperbolic case, are they actually stable and unstable manifolds for a nearby map or something like this? No, they are stable and unstable manifolds for the original map, but not for the point x_0: for any point that is in the curve. Because they actually contract if you apply it backwards. Okay?
So you are finding invariant manifolds like the ones you had before, but the difference is that these points follow the trajectory of a pseudo orbit and no longer of an original orbit; still, they are truly invariant manifolds for the original map F. Okay? Okay. Well, why do I use pseudo orbits? Because I want to shadow pseudo orbits. What do I mean by shadowing pseudo orbits? I mean that if you give me a pseudo orbit, I want to find a true orbit that is very close to the original pseudo orbit. And this is the shadowing lemma, which I stated as a theorem, and which says that every pseudo orbit is shadowed by a true orbit. That is, there is a unique x, and I can represent this unique x using the stable and unstable manifolds of the pseudo orbit, which will be the unique point that is close to the pseudo orbit at every time. What do I mean by shadowing? I mean exactly that the trajectory of x is always inside the domains, the images of all the Lyapunov charts of the points x_n. So F^n of x is close to x_n, at most by, let's say, two times square root of two times q. It's very close to x_n. Why? Well, because this point will be the intersection of this almost vertical graph with this almost horizontal graph. This intersection consists of a single point, by transversality reasons. And because it is in the almost horizontal one, it follows the whole future of the pseudo orbit; and because it is in the almost vertical one, it also follows the whole past of this pseudo orbit. So this intersection point is indeed a point that shadows the pseudo orbit. And it is actually unique, because any point that does this has to be at the same time in the stable manifold of the pseudo orbit and in the unstable manifold of the pseudo orbit. So geometrically, the shadowing lemma is very simple once you understand how to define these invariant manifolds for pseudo orbits. Okay? So this is the result.
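For a linear model one can even write the shadowing point explicitly: for the diagonal map f(u, s) = (2u, s/2) on the plane, the stable/unstable splitting is the pair of coordinate axes, and the bounded solution of the error recursion gives the shadowing orbit as a weighted sum of the pseudo-orbit errors. This is my toy sketch of the phenomenon, not the graph-transform proof from the lecture; all names are illustrative.

```python
import random

LAM_U, LAM_S = 2.0, 0.5  # expansion and contraction rates of the toy map

def f(v):
    """Toy hyperbolic map f(u, s) = (2u, s/2) on R^2."""
    return (LAM_U * v[0], LAM_S * v[1])

def make_pseudo_orbit(v0, n_steps, delta, rng):
    """A delta pseudo orbit: apply f, then commit an error of size <= delta."""
    orbit = [v0]
    for _ in range(n_steps):
        u, s = f(orbit[-1])
        orbit.append((u + rng.uniform(-delta, delta), s + rng.uniform(-delta, delta)))
    return orbit

def shadowing_point(pseudo):
    """Bounded-solution formula for this diagonal map: correct the unstable
    coordinate by the 2**-(j+1)-weighted sum of the future errors; the stable
    coordinate needs no correction at time zero (there are no past errors)."""
    errors = [pseudo[n + 1][0] - LAM_U * pseudo[n][0] for n in range(len(pseudo) - 1)]
    zu0 = sum(LAM_U ** (-(j + 1)) * e for j, e in enumerate(errors))
    return (pseudo[0][0] + zu0, pseudo[0][1])
```

Iterating f from the shadowing point then stays within roughly twice delta of every point of the pseudo orbit, matching the "unique point close at every time" statement.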
And now we arrive at the final part of the talk, which is exactly using this notion of pseudo orbits and the shadowing lemma to construct a Markov partition in the uniformly hyperbolic situation. How will I do this? For ease of understanding, I will divide the construction into three steps. First step: consider your manifold and consider a sufficiently dense set. How dense? I do not recall exactly, but it's something like delta dense. So recall that we have epsilon, and we have delta equal to epsilon over the Lipschitz constant of F. And then I consider a finite set X that is delta dense in the manifold. Using this finite set, I'm going to look at all possible pseudo orbits that this finite set has. What is this? I'm actually going to define an oriented graph, because then the pseudo orbits will be the paths on the graph. So what are the vertices of this oriented graph? All Lyapunov charts of the points that I chose. I chose finitely many points, so this set V is finite. And what are the edges? I draw an edge from one Lyapunov chart to another if I have an epsilon-overlap. I can actually put here, oops, epsilon. Okay, so if I do this, I'm again representing combinatorially all possible transitions under F from one Lyapunov chart to the next one. And trajectories on this graph, which are elements of the symbolic space, are nothing but our old notion of pseudo orbits. And what do we do with them? Well, we can use exactly the shadowing lemma to define a coding map. We are trying to find a symbolic space and a coding map that do the job for us. The problem is that this first coding map, as I'm going to explain to you in a few seconds, is usually not finite-to-one.
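Step one can be prototyped directly. Below is a hypothetical sketch: a grid on the 2-torus as the delta-dense set, Arnold's cat map as F, and an edge whenever the nearest-neighbor (overlap) conditions hold; the grid size and delta are chosen ad hoc so that the graph certainly has edges.

```python
def cat_map(p):
    x, y = p
    return ((2 * x + y) % 1.0, (x + y) % 1.0)

def cat_map_inv(p):
    x, y = p
    return ((x - y) % 1.0, (2 * y - x) % 1.0)

def torus_dist(p, q):
    """Sup metric on the torus, accounting for wrap-around."""
    return max(min(abs(a - b), 1 - abs(a - b)) for a, b in zip(p, q))

def build_transition_graph(points, delta):
    """Vertices: the chosen delta-dense points.  Edge i -> j whenever
    d(f(x_i), x_j) < delta and d(f^{-1}(x_j), x_i) < delta, mimicking the
    epsilon-overlap condition on the Lyapunov charts."""
    edges = {i: [] for i in range(len(points))}
    for i, x in enumerate(points):
        for j, y in enumerate(points):
            if torus_dist(cat_map(x), y) < delta and torus_dist(cat_map_inv(y), x) < delta:
                edges[i].append(j)
    return edges
```

Because the set is dense enough relative to delta, every vertex has at least one outgoing edge, so every pseudo orbit can be continued forever; paths on this graph are exactly the pseudo orbits of the finite set.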
And recall, we require finite-to-oneness in order to, for example, have the preservation of entropy between the original geometrical model and the lifted symbolic one. Nevertheless, this coding map plays a crucial role in the third step. So let me explain to you how we make this first coding map. We just apply the shadowing lemma: to every element of Sigma, which we understood is a pseudo orbit, you associate the unique point that shadows it. So pi of v is the intersection of the stable and unstable manifolds of this pseudo orbit v. This map pi is properly defined from the space of paths on the graph to M, and it is indeed surjective. Why is it surjective? Because if you give me any point in the manifold, since I chose the set X to be delta dense, for every iterate of x I can find an x_n which is very close to F^n of x. And being very close, these x_n give rise to a pseudo orbit. And because of the closeness... I'm sorry, Yuri, it's just a question. You're saying that this set X is delta dense, but every point of this set X has to be a point of the non-uniformly hyperbolic set or something like that? Well, so far I'm explaining the construction for Anosov diffeomorphisms. That is why the map pi is surjective. Every orbit is hyperbolic. Exactly, exactly. Okay, thank you. So today I'm only going to do the construction for this uniformly hyperbolic situation, and in the next lecture we will try to understand what would be the new difficulties that come up once you go to the non-uniformly hyperbolic context. Okay, thank you. Okay, so that's why pi is indeed surjective: the set X that we choose is delta dense in the manifold, and the whole manifold is hyperbolic. I want to code every trajectory, and given any point x, I can find points in our set X near the iterates of x. The conclusion is that, because they are very close, these x_n indeed constitute a pseudo orbit.
And because they are very close to the original trajectory of x, it is easy to see that pi of this pseudo orbit that we created is equal to the original point x. That is why I get the surjectivity: given x, I found a pseudo orbit whose image is exactly x. Okay, and the map also intertwines the original map and the left shift. You have this equality here basically because of the uniqueness of shadowing. If pi of v is x, then v shadows x, but sigma of v is another pseudo orbit, which shadows F of x. So this implies that pi of sigma of v is equal to F of x, which is F of pi of v. So pi composed with sigma is equal to F composed with pi, okay? Okay, everything would be perfect if we stopped here. Then we would have our symbolic space and we would have our coding map, but unfortunately, this coding map is usually infinite-to-one. What do I mean by infinite-to-one? There might be points whose pre-image is infinite, or even uncountable. Why? Because I have a lot of freedom in choosing these x_n's. For example, if I had two options for each x_n to choose, so if there were in our original set X two points x_n and y_n which are both very close to F^n of x, then any choice of the pseudo orbit at time n, either x_n or y_n, would give rise to a pseudo orbit which shadows x. So in some sense, if you have two options for each position, then you have two to the Z options for the pseudo orbit v, all of them satisfying this. So the pre-image of x under this map pi would be uncountable. And this is, as I told you, pretty bad for the applications that we have in mind. So this map pi is not enough for our applications, and then we have to do something else. What do we do?
We use it to construct an initial candidate for the Markov partition, which will not necessarily be a partition, because it might have overlaps. Then we do exactly the third step, which is known in the literature as the Bowen-Sinai refinement, and which allows us to go from something that is not a partition, usually just a cover, to an honest partition. Yuri. How do we, sure? Sorry, so these x_n and y_n, they were taken from that set which was delta dense, right? Exactly. But can't we choose the set to be delta dense, but not too dense? Yes, you can, but you still might have these kinds of problems in some places. In some sense, I would like to say: drop one of these points if they are too close and you don't need both. Yeah, but if you drop them, perhaps you might miss some points; perhaps your pi will no longer be surjective. So you need many of them to be surjective, but you don't need many of them to avoid being infinite-to-one. And it's hard to know what the right way of doing that is. I mean, you would have to go through some kind of geometrical justification, saying: around each point, I choose at most five points or something like that. And I believe this is very difficult to do. I think, okay. Instead, what we do is the following. The map pi, although it is infinite-to-one, is defined on a symbolic space and takes values in M. So I can use the nice symbolic structure of this space to induce here a Markov cover, some family of rectangles that already have the Markov property, which is the hardest thing to get. This family is not going to be a partition, but nevertheless, it will be a cover with the Markov property, to which I can then apply a refinement procedure.
So the idea is that although pi is not the map we are looking for, it is defined on this symbolic space, which has the Markov property automatically, and from which we can push the Markov property down to the manifold; then, using what we get in the manifold, we just refine it in order to get a Markov partition of the manifold. So this is the idea. How do we push the nice properties that the symbolic space has to the manifold? We all know that we have a very simple decomposition of a symbolic space: we can consider the cylinders at the zero position. These cylinders do have the Markov property and they form a partition of your symbolic space. So basically I'm going to look at the cylinders and I'm going to look at their images under the map pi. This defines the family Z. So what is this family Z? It is the family of the sets Z_v, and Z_v is exactly the image of the zero cylinder: the image under pi of all pseudo orbits whose zero position (here I forgot to write it: it should be v_0, the zero position of the sequence) coincides with the fixed symbol v. So what is this? This is just pi of the cylinder of v at the zero position. This is a subset of the manifold, and the family of these subsets is what I'm calling here fancy Z. Because pi is surjective, this fancy Z is a cover of M. But because pi is not necessarily injective, you can have intersections between the elements of this fancy Z. So the only thing that remains for us to do is to take this fancy Z and destroy these intersections. Since it is defined coming from the cylinders at the zero position, these sets already satisfy a very nice Markov property. So the only thing that I need to take care of is exactly destroying the intersections. If I destroy the intersections in a nice way, then the Markov property that the original family satisfies will also be held by the refinement that I will get.
So how do we destroy the intersections? How do we make the refinement without destroying the Markov property that we already have? Perhaps most of you have already seen this; it goes as follows. Whenever you see two rectangles intersecting, you can divide the rectangle into four pieces. How? One of the pieces is the set of points whose stable and unstable manifolds inside the rectangle do not intersect the blue one. That is why I have written here that this piece is (empty, empty): the intersections of the stable and the unstable manifolds of such a point with the blue rectangle are both empty. Similarly, you can define the piece which is the set of points whose stable manifold does not intersect Z' — so here it should read Z' — but whose unstable manifold does intersect it; it is intersecting here. So you get this piece. And then you get the other two pieces, E^{s,∅} and finally E^{s,u}, which is exactly the intersection of the two rectangles. By doing this carefully, after you make all the refinements for all the intersections between rectangles that you see, you are going to get a partition of your manifold which satisfies the Markov property. So the refinement of this family Z under the procedure I explained above will be a Markov partition, and then our construction is done. And so I also finish my talk today. The conclusion is that, using pseudo-orbits, we were able to carry out the construction of the Markov partition in three steps. First, consider a sufficiently dense set of points and look at the graph it defines via the notion of overlaps. Second, consider an initial coding map: to each pseudo-orbit associate the unique shadowed point. This map is very nice because it induces a kind of symbolic structure on the manifold, but it is bad because it is usually infinite-to-one. Nevertheless, I can use the Markov structure it induces on the manifold, and I just need to destroy the non-trivial intersections nicely.
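For the record, the four pieces of this refinement step can be written out as follows (a sketch; W^s(x,Z) and W^u(x,Z) denote the stable and unstable leaves of x inside the rectangle Z, and the E-superscripts follow the labels used on the board):

```latex
% Bowen–Sinai refinement of a single overlap Z \cap Z' \neq \emptyset:
% split Z according to whether the two leaves of x meet Z'.
E^{su}
  = \{x\in Z : W^s(x,Z)\cap Z'\neq\emptyset,\ W^u(x,Z)\cap Z'\neq\emptyset\},
\\
E^{s\emptyset}
  = \{x\in Z : W^s(x,Z)\cap Z'\neq\emptyset,\ W^u(x,Z)\cap Z'=\emptyset\},
\\
E^{\emptyset u}
  = \{x\in Z : W^s(x,Z)\cap Z'=\emptyset,\ W^u(x,Z)\cap Z'\neq\emptyset\},
\\
E^{\emptyset\emptyset}
  = \{x\in Z : W^s(x,Z)\cap Z'=\emptyset,\ W^u(x,Z)\cap Z'=\emptyset\}.
% Doing this for every intersecting pair, and taking the common
% refinement of all the resulting pieces, yields the Markov partition.
```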
Non-trivial intersections are destroyed nicely in this way, and the final object that I get is the Markov partition we were looking for, okay? So now I believe it is time for me to finish my talk, and I can ask if you have any other questions or comments. I have a question: at some point you said you wanted π to be Hölder. Yes. Well, this is a different business. I will also need the hyperbolicity, because what does it mean to be Hölder? In some sense, Hölder continuity on the symbolic space means the following: two sequences are close if they coincide in many symbols around zero. If they coincide in many symbols around zero, it means that I am getting the same Lyapunov charts for both sequences, and I want to conclude that the shadowed points of the one and of the other will be very close. This is indeed true, because when you look at the graph transforms, if these two pseudo-orbits coincide on a big chunk, then I am applying exactly the same graph transforms for both of them, and applying the same graph transforms makes the limiting points very close to each other. So this is the reason that you get Hölder continuity of π as well, okay? Okay, thank you. You're welcome. Well, Yuri, it is just a curiosity of mine, but I have sometimes heard things about the boundaries of these rectangles, that they would not be smooth or something. Could you comment a little bit on that? Yes, I can. Perhaps the reason is that people are misled by the simple construction of Adler and Weiss, because in Adler and Weiss, if you consider the two-torus, the Markov partition is just three rectangles, something like this. The boundary of these three rectangles is just some segments, which have zero Lebesgue measure.
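The Hölder argument just sketched can be summarized as follows (the constants θ, C, κ below are illustrative placeholders, not values from the lecture):

```latex
% Standard metric on the symbolic space: two pseudo-orbits are close
% when they agree on a large block of symbols around position zero,
d(\underline{u},\underline{v}) \;=\; \theta^{\,N}, \qquad
N = \max\{\,n\ge 0 : u_i = v_i \text{ for all } |i|\le n\,\},
\quad 0<\theta<1.
% If u_i = v_i for |i|\le n, the same Lyapunov charts, and hence the
% same graph transforms, are applied at those indices; the contraction
% of the graph transforms then gives constants C>0 and 0<\kappa<1 with
d_M\bigl(\pi(\underline{u}),\pi(\underline{v})\bigr) \;\le\; C\,\kappa^{\,n},
% which is exactly H\"older continuity of \pi in this metric.
```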
But as soon as you consider a hyperbolic matrix on the three-torus, the same construction of Adler and Weiss already forces the components of the Markov partition, which I call rectangles, to no longer be rectangles as we see them in dimension two. For example, if you assume the unstable direction to have dimension two, then the unstable boundaries will be like blocks, two-dimensional rectangles, but the stable boundaries will be a bunch of pieces of stable curves, which all together will not be smooth at all; they will be fractal-like. The transversal structure of this bunch of small pieces of stable curves is very fractal, and because of this, usually the boundary of the Markov partition will not be smooth, okay? And this happens already in the simplest situation: dimension three, linear. There is a paper of Bowen in which he considers a three-by-three hyperbolic matrix and shows that the boundary of the Markov partition is not smooth. So it will be something like this: in one direction it will be smooth, like a topological disc, but in the other direction it will be a bunch of small segments in the stable direction, and the structure they exhibit in the unstable direction is very complicated. Thank you, thank you. You're welcome. I would like to ask a question out of curiosity only: in the shift space we have some special measures, for example Markov measures. Can we make sense of Markov measures for the original maps, or something like this, using the coding? Yes, you can. But the problem is to ask whether, if you change the coding, you will get the same measure, because in this way you are defining measures downstairs by means of this extra structure together with the coding map. Is it intrinsic? Perhaps it is not intrinsic to the manifold below. The way that we usually do it is the opposite.
So we take an equilibrium measure downstairs, or some relevant measure, then we lift it to a measure upstairs, and because this lifted measure lives in the symbolic space, we can understand it better; then we project it back to the original measure μ. So we understand properties of the original measure by going up, understanding it above, and then going down. Okay, thanks. A summary of what we have done so far. In the uniformly hyperbolic situation, we understood invariant manifolds of orbits and of pseudo-orbits, and using this, we constructed Markov partitions. In the non-uniformly hyperbolic situation, so far we have only understood invariant manifolds of orbits. So in the next lecture, we are going to understand invariant manifolds of pseudo-orbits, and what it means to be a pseudo-orbit in this complicated setting, and we are going to start to understand how to construct the Markov partition. I expect to do most of the construction of the Markov partition next lecture, okay? So we are going to complete, in the non-uniformly hyperbolic situation, the picture that we have just understood for uniformly hyperbolic systems. In the argument today, we didn't use continuity of anything, right? I did use it, because, for example, I used that whenever f(x) is close to y, the composition of the Lyapunov charts is close to the identity. The reason is that the splitting you have at f(x) is very similar to the splitting you have at y whenever the points are close, so I did use this continuity. In the non-uniformly hyperbolic situation, I can have two very close points at which the splittings are totally different. So our notion of being close will have to be stronger than only requiring f(x) to be close to y; I will need something else. Well, if there are no other questions, then thank you all for coming to the second lecture of the mini course.
And I will see you all next Tuesday for the third lecture of this mini course. Thank you, Yuri, bye-bye. Thank you, Yuri. Thank you.