And we are live. OK, thank you very much. So, hello everybody. As Yuri would say, this is my freedom day: our last lecture in this course on SRB measures and Young towers. As you can see, there is a new link now with the full set of slides, for all four lectures. For today we have this fourth section on inducing schemes, and in particular the applications to non-uniformly expanding maps and partially hyperbolic attractors. I am going to start with a very general framework. The idea is that we use similar arguments when we treat the case of non-uniformly expanding maps and the case of partially hyperbolic systems whose central direction is, in some sense, mostly expanding. In many places the ideas are similar, and when I was writing my book on these topics, I realized that I was repeating the arguments. One could simply say "it's similar", but at some points there are technical issues, places where the situation is not exactly the same. So I tried to find a general framework into which I can fit both the case of non-uniformly expanding maps and the case of partially hyperbolic attractors. I will start by talking about this general framework, and then I will apply it to the two cases I have in mind. So the setting is this: we have a map f on a finite-dimensional Riemannian manifold M, and Σ is a submanifold of M, possibly equal to M. In the non-uniformly expanding case, where we are going to apply the conclusions we obtain here, Σ will in fact be equal to M. But in the diffeomorphism case, the partially hyperbolic case, Σ will be a disk in the manifold, a disk on which we have some non-uniform expansion in the central direction. Those are the two situations in which we will apply this. And we also consider m, the Lebesgue measure on the Borel sets of that submanifold Σ; it can be a submanifold with boundary if it is a disk.
I will use d_n to denote the distance on the n-th image f^n(Σ) of that submanifold, that is, the distance with respect to the Riemannian structure induced on f^n(Σ) by the Riemannian structure of M; and m_n will be the Lebesgue measure on f^n(Σ). As before, I have already introduced the letter m for Lebesgue measure on the manifold, and m with the index Σ is the restriction of Lebesgue measure; it is all set here. So let us start with some assumptions, and then I will deduce from these assumptions some interesting results. Assume that there exists a disk Δ_0, which can be a subdisk in the case where Σ is a disk, with the same dimension as Σ, such that three conditions hold. The first one is that there is a sequence of compact sets H_n in Σ such that almost every point in that disk belongs to infinitely many of these sets H_n. In practice, these are sets of points where we see some good expansion in the future. For those who know what it is, H_n will be the set of points for which n is a hyperbolic time; for those who don't, don't be scared, I will introduce them, no problem. So this is the first property: there are many moments, if you like to think in these terms, at which the point belongs to these sets. The second property is that the points in these sets have some good expansion properties, as I said, and I will now make that precise. For each point x in H_n there is a neighborhood of x, which I call V_n(x), such that f^n maps that neighborhood diffeomorphically onto a disk of radius δ_1 around the image f^n(x); there is a uniform δ_1. These are the expanding properties I mentioned: it expands, because these V_n's will shrink.
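A possible LaTeX formalization of this first assumption, using the notation of the lecture (H_n, Δ_0, and m_0 for Lebesgue measure on Δ_0); the exact phrasing on the slides may differ:

```latex
% (I1): good expansion happens at infinitely many moments, for a.e. point:
% each H_n \subset \Sigma is compact, and
m_0\Big(\Delta_0 \setminus \big\{x \in \Delta_0 :
      x \in H_n \ \text{for infinitely many } n \in \mathbb{N}\big\}\Big) \;=\; 0 .
```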
So the V_n's will be smaller and smaller neighborhoods, because of an expanding property I will state in a minute. And there are uniform constants such that when we look backwards we have uniform contraction; that is what this property says here. Note that σ is a number between 0 and 1. This is the usual property that uniformly expanding maps have, now restricted to this neighborhood of the point. The other condition is a bounded distortion property for the Jacobian restricted to that neighborhood, so for the points in the neighborhood. So points in H_n have a neighborhood on which f^n expands; in particular, applying this with k = n you have expansion, because when you look backwards you see a strong contraction by σ^n, and with bounded distortion of the Jacobian. That is our second condition. The third one: as I said, I am going to apply this to systems with non-uniform expansion in some directions, and the idea is to induce, to use the material I gave you in the previous lectures. To induce, we will use this feature that small neighborhoods of points grow to large scale. So we fix some disk of radius δ_0 with good expanding properties for points on it, and use these neighborhoods that grow to large scale; δ_1 in practice will be much bigger than the radius δ_0 of the disk. Then we use some transitivity of the system to bring that large disk of radius δ_1 back to δ_0, or close to δ_0: back to δ_0 when it is an endomorphism, close to δ_0 when it is a diffeomorphism. That is the general idea. The properties we need when we bring the image of V_n(x) close to Δ_0 are stated here; these are the properties we need in both cases. Using the transitivity, we will see in the applications that there is a uniform number of iterates after which the image of V_n comes close to the initial disk, and we can prove in the applications that we have these properties.
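A hedged sketch of how this second assumption is usually written in this setting; the constants C > 0 and 0 < σ < 1 are uniform, but the precise form on the slides may differ:

```latex
% (I2), schematically: for every x \in H_n, f^n maps V_n(x) diffeomorphically
% onto the disk of radius \delta_1 around f^n(x), with backward contraction
d_{n-k}\big(f^{\,n-k}(y),\, f^{\,n-k}(z)\big)
   \;\le\; C\,\sigma^{k}\; d_n\big(f^{\,n}(y),\, f^{\,n}(z)\big),
\qquad 0 \le k \le n,\quad y, z \in V_n(x),
% and bounded distortion of the Jacobian on V_n(x):
\Big|\log \frac{|\det Df^{\,n}(y)|}{|\det Df^{\,n}(z)|}\Big|
   \;\le\; C\, d_n\big(f^{\,n}(y),\, f^{\,n}(z)\big).
```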
So there are some uniform constants such that V_n, the neighborhood of a point in H_n, contains certain subsets; the most important one is ω_{n,l}. We also need a slightly larger neighborhood ω̃_{n,l} of it, just to make some estimates; that is its only purpose. The important sets are the ω_{n,l} without the tilde, and the property of ω_{n,l} is that it is sent onto a disk of radius δ_0. Okay, at this moment I am not going to say whether it returns or not; that will be done in the applications. What I need for the first properties I am going to deduce is just that there is a uniformly bounded number of iterates such that, after V_n grows to large scale, there is a portion inside V_n that in a finite number of iterates goes onto some disk of radius δ_0, and goes with some control. I am not saying any more that we have expansion, at least in the iterates in between; note that this l, small l, is bounded by the large L here. And for the iterates between 0 and this small l, we have some control on the distortion. The idea is that we have expansion in the first n iterates, and then we do not do big damage to that expansion, because we have this control. In particular, if the map has no critical points nor singular points where the derivative blows up, then this property is easily verified. The problem is that we want to apply this to systems with critical points, where the derivative is not an isomorphism, and with singular points; the content of this condition is especially important in those situations. And we also have a bound on the distortion; note that here the number of iterates l is bounded by the capital L. So if the map is a diffeomorphism, or at least a local diffeomorphism with no critical points nor singularities, this is trivially verified as well.
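Schematically, this third assumption might be written as follows; this is my reconstruction from the spoken description, so the exact statement on the slides may differ:

```latex
% (I3), schematically: there is L \ge 1 and there are uniform constants such
% that for each x \in H_n one finds sets
\omega_{n,l} \;\subset\; \widetilde{\omega}_{n,l} \;\subset\; V_n(x),
\qquad 0 \le l \le L,
% with f^{\,n+l} sending \omega_{n,l} onto a disk of radius \delta_0, and with
% control of the intermediate iterates f^{n+1}, \dots, f^{n+l} on
% \widetilde{\omega}_{n,l}: uniformly bounded contraction of distances and
% uniformly bounded distortion of the Jacobian of f^{\,l} on
% f^{\,n}(\widetilde{\omega}_{n,l}).
```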
So these two conditions are important, or rather we need to take some care to verify them, when we have either singular or critical points. Okay. Now some notation. In practice we will be dealing with the sets ω_{n,l}, and in some situations the indices, the time n and the time l, are not important, so we will simplify the notation; similarly for the tildes, which are small neighborhoods of them. And we will see it the other way around: we will think of the ω's, and then associated to each ω there is a V_n, so I will refer to V_n as V_n(ω). Note that V_n is by definition a neighborhood of a point in Σ, but we do not necessarily keep the reference point that we used, or will use, to build V_n: that point does not necessarily belong to ω̃_{n,l}, nor to ω_{n,l}. The idea, as I said, I repeat, is that it grows to large scale at the end and we bring it back to some region; that is what we will do in practice, and we cannot ensure any more that the reference point falls into that region. But that is not important at all. Okay, so in terms of applications, V_n will be defined using expanding properties of the system. Actually it is not the V_n; there is a mistake here. It is the ω_{n,l} that will be defined using expanding properties of the system to build the V_n, and then transitivity to bring it back to some region, which will be a disk in the endomorphism case, or an unstable disk when we are trying to build Young structures. So we will have a disk Δ_0 where we build these V_n, and then a large image comes back, not to that disk, but to some other disk that will belong to the Young structure. So the image of ω_{n,l}, and the disk Δ_0 itself, will belong to that Young structure. That is the way we will build the Young structure using this result.
But for the first conclusions, it is not important how we are going to build them; simply assume that we have these sets, okay? Now some more definitions. In many cases, typical points in the disk Δ_0 belong to the H_n's with positive frequency, and I am introducing this terminology: positive frequency means that there is some definite fraction θ such that for m_0-almost every point, so typical in this sense, with m_0 the Lebesgue measure on Δ_0, we have this. This is an asymptotic frequency in a weak sense, because I am using limsup, limsup bigger than something. And when I change limsup to liminf, I will refer to it as strong positive frequency, okay? So it is the frequency of times at which the point belongs to the respective H_n. In the latter case we can introduce this: if this liminf holds for almost every point, then for almost every point there is a moment after which this sequence satisfies this inequality. And so we can define this minimal N here; of course this N depends on the point x, but no problem, so we can define this H_θ(x), okay? This will play an important role in our estimates. It is a certain threshold: in the case of strong positive frequency, for a given point it is a threshold after which we have good estimates on the frequency with which it belongs to the good sets H_j, okay? So, our first result, of which I will give you an idea. It is a very abstract result with these abstract assumptions: assume that the three properties hold; then we can build in Δ_0... Note that in Δ_0 almost every point has these V_n, so they are neighborhoods of the points, but these neighborhoods intersect the other neighborhoods; there is no assumption saying that they are pairwise disjoint or anything like that.
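In LaTeX, the two notions of frequency just described might be stated as follows; this is my reconstruction, with H_θ(x) denoting the threshold mentioned in the lecture:

```latex
% Positive frequency (weak, with limsup) of visits to the sets H_j:
\limsup_{n \to \infty} \frac{1}{n}\,\#\{1 \le j \le n : x \in H_j\} \;\ge\; \theta
\quad \text{for } m_0\text{-a.e. } x \in \Delta_0 ;
% strong positive frequency replaces limsup by liminf, and in that case the
% threshold after which the frequency estimate holds is
H_\theta(x) \;=\; \min\big\{N \ge 1 :
   \#\{1 \le j \le n : x \in H_j\} \ge \theta\, n \ \text{for all } n \ge N\big\}.
```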
It is simply that for almost every point, actually by the expansion property, there is a fundamental system of neighborhoods of that point, and out of these neighborhoods we will build the partition. So the first conclusion is that there is a mod 0 partition of Δ_0 into domains of that type, domains ω_{n,l}. Moreover, and I think reading this statement you are already realizing my intention in setting R = n + l: in practice it will be the return time. And we have the good properties, for instance in the endomorphism case, to get a Gibbs-Markov map. Recall that I defined Gibbs-Markov very generally, very abstractly, but then I said that when we have smooth maps we can deduce the Gibbs-Markov properties, and essentially these are those properties: when we take Σ equal to the manifold, these are the properties that imply the existence of a Gibbs-Markov structure. Okay, so it is expansion in the future and control on the Jacobian of the return map. But as I said, this is also prepared to be applied in the diffeomorphism case, the partially hyperbolic case, because this Σ does not need to be the manifold; it can be a subdisk of the manifold. So this first conclusion is that out of the properties I have stated, we can build some induced map with good properties. And we will also deduce some estimates which will be important. For instance, in this first property I say nothing about the integrability of R; in fact, the integrability of R will be a consequence of the second property. So there are sets S_n contained in Δ_0, in fact they will be rings around the domains ω_{n,l}, and I will tell you in a minute how to build them, such that the sum over this sequence of sets is finite: the sum of their Lebesgue measures is finite.
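Schematically, the first conclusion on the induced map might be written as follows; this is my reconstruction from the description (the constant κ is an assumption of mine, standing for the uniform expansion rate inherited from (I2)):

```latex
% Induced return map F = f^{R} on \Delta_0, with R = n + l on each \omega_{n,l}:
F\big|_{\omega_{n,l}} \;=\; f^{\,n+l}\big|_{\omega_{n,l}} :
   \omega_{n,l} \longrightarrow \text{disk of radius } \delta_0,
% with uniform expansion, for some \kappa > 1,
d\big(F(y), F(z)\big) \;\ge\; \kappa\, d(y, z), \qquad y, z \in \omega_{n,l},
% and bounded distortion of the Jacobian of F on each element; when
% \Sigma = M these are exactly the properties giving a Gibbs-Markov structure.
```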
And this property here says that a point belonging to the good set H_n which does not belong to an element with return time smaller than this number here necessarily belongs to one of these sets S_n. What is the idea of this? It is a way of controlling things. From the sets H_n, or using the points in the sets H_n, we will build some elements of the partition. But there will be some conflicts: there will be a point with an ω_{n,l} associated to it, another point with an ω_{n,l} associated to it, and the ω_{n,l}'s associated to those points may intersect in a set of positive measure. So we cannot take both points, or both domains, into the partition; we have to get rid of one of them. These sets S_n will be a way of controlling the points in H_n that we get rid of, so there is still some control over the points that we do not use to construct the partition. And this is a key estimate: H_n intersected with {R > n + L} is contained in this S_n, whose measure we control. This second property is essentially there to deduce the integrability of the recurrence time. And the last one is an assumption... a conclusion, sorry. If typical points belong with strong positive frequency, which is the property I stated before: we know that typical points belong to infinitely many H_n's, but if they belong with strong positive frequency, then there is also a very good conclusion. See, for decay of correlations, when we use inducing schemes, we want to control the tails of the recurrence time. So we want to control, modulo this L, which is a bounded number and so completely unimportant, the measure of {R > Ln}.
And this conclusion says that there is a sequence of sets E_n with Lebesgue measure decaying exponentially fast to zero, such that the tail of recurrence times is contained in the tail of this threshold, union E_n; and this is related to the strong positive frequency. So this concerns points which have a threshold: as I said, H_θ is a certain threshold after which we have the good property, so it is the tail for that threshold, for points with strong positive frequency. And the union is with this E_n, whose Lebesgue measure decays exponentially fast to zero. Essentially this says that in the applications we have in mind, which are for polynomial, exponential, or stretched exponential tails, the decay of the recurrence times is essentially given by the decay of this {H_θ > n}, because the other part decays exponentially fast. So that is the idea of this conclusion: to induce, to prove the integrability, and to have some knowledge of the decay of the recurrence times. Okay, so now an idea of this construction. The approach here is related to... well, we use these ideas, as I said, in many situations, endomorphisms, diffeomorphisms, and in writing my book I decided to put this in an abstract setting that can be applied to all the situations we consider. The situations are mentioned here in these papers; I am a co-author of all of them, and the ideas are all based on some very nice ideas of Gouëzel, who applied this in a particular situation we have in mind, to prove decay of correlations for the Viana maps. Then we used those ideas in more general situations, and those are the ideas I am going to present now. So we are going to define inductively these objects P_n, Δ_n and S_n; the S_n's are those that have already been mentioned in the conclusions.
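In symbols, conclusions two and three might read roughly as follows; this is my reconstruction from the spoken description, with c, γ > 0 hypothetical constants standing for the exponential decay mentioned in the lecture:

```latex
% Conclusion (ii), schematically:
\sum_{n} m_0(S_n) \;<\; \infty,
\qquad
H_n \cap \{R > n + L\} \;\subset\; S_n ;
% conclusion (iii): under strong positive frequency there are sets E_n with
m_0(E_n) \;\le\; c\, e^{-\gamma n}
\quad\text{such that}\quad
\{R > L\,n\} \;\subset\; \{H_\theta > n\} \,\cup\, E_n ,
% so the tail of R is governed by the tail of the threshold H_\theta.
```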
Well, P_n is the family of elements of the partition constructed at step n; it will be an inductive construction, so these are the elements of the partition from the conclusion of the theorem that are constructed at step n. Δ_n is the region remaining after n steps: we have the elements constructed up to step n, and Δ_n is the set of points that still do not belong to an element of the partition, the part that remains; it is actually the set {R > n}. And S_n is the set I have already mentioned: it contains, as I said, the points in H_n that are not taken into the elements constructed for the partition up to time n, okay? So these are the ideas behind these objects. And a key conclusion is that a point in H_n either belongs to an element of the partition or to S_n. Okay, so let me start by telling you the main ideas of the construction of this partition. It is an inductive construction; I will describe the first step, and the other steps are quite similar. So we start with a large N_0, and we consider H_{N_0}. Well, H_{N_0} is a compact set by definition, and the points in H_{N_0} have the V_{N_0}'s by assumption, so simply by compactness we have a finite set of points such that H_{N_0} is contained in the union of a finite number of elements of the type V_{N_0}(x), because these are neighborhoods of points. So we consider the points in this finite set, and for each of them there are these neighborhoods, the ones that go onto a disk of radius δ_0, the δ_0 given in (I3), okay? So we consider these elements, and for P_{N_0}... let me see... okay, so we take P_{N_0} as a maximal family. Look at this picture: these are the red ones. So these are elements, ω's with certain indices, and the red ones are the elements of P_{N_0}.
So we take a maximal family of pairwise disjoint sets of that type contained in Δ_0. They have to be disjoint, so I take a maximal family. Why a maximal family? Because if there is some other element, associated to a point in H_{N_0}, that is not taken, it necessarily means that it intersects one that has been taken. So the idea is: we take the solid red ones. And then there are those reds which are not solid that could have been taken; but since we are taking a maximal family of pairwise disjoint sets, we cannot take them. And see, they necessarily intersect the reds that we did take, for otherwise the family would not be maximal. And we define the S_n's. The S_n's are small neighborhoods of the elements that we construct; actually not neighborhoods, because they are rings around the elements, they do not contain them. This is the reason why I have to introduce the tildes: because I want to make estimates, and if I took only the ω's themselves, I could not make estimates out of the ω's. This is to make estimates in a neighborhood of the ω's; that is why I introduced the tildes. So the S_n's are rings around the domains that I construct, and the measure of these rings will decay exponentially fast; that is why the sum is finite. We also need to take a neighborhood, in this case of the boundary of Δ_0, because I want a partition of Δ_0. For instance, this element here, this red that is not solid, I cannot take: it goes onto a disk of radius δ_0, but not necessarily staying inside Δ_0, because part of it sticks out, and that would not be a good domain for the partition of Δ_0. For that reason I also consider a neighborhood S of the complement, a neighborhood inside Δ_0.
Okay, so that is this ring here around the boundary of Δ_0. And I define S_{N_0} as the union of these domains: the rings around the domains that I take for the partition, and also the ring around the boundary. And I take Δ_{N_0} as the part that has not been used in this first step of the construction. For definiteness, we also take Δ_n and S_n equal to Δ_0 for n less than N_0; this is completely unimportant for the estimates, because the estimates are all asymptotic, and this is just a finite number of n's. Okay, in the general step we do the same thing: we take a maximal family of pairwise disjoint sets inside the part that has not been used up to time n. So now I take some n bigger than N_0, and the family is maximal without intersections; before, at the first step, the sets only had to avoid each other, because it was the only step, but now I have to make sure they do not intersect the elements constructed before, and that is what I have here in this union. Okay. And then I take Δ_n as the complement of it, and I also take the rings around the domains that I construct, exactly in the same way. But there is one thing here I want to highlight: for each element constructed at an earlier time k, I also take a neighborhood of it, of an order decaying exponentially fast in n minus k. It means that, for instance, when we go to step n + 1, around this red here I will consider an S_{n+1} which is smaller than the S_n I took here, because here the exponent index is n minus k and in the next step it is one bigger. Okay. So we define these neighborhoods not only for the elements we construct at time n, but also for the elements that we have constructed before. Okay, that is the way we construct this. And then we have some metric estimates; I am not going to do the calculations here, but we have this key property.
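The selection step just described, taking a maximal pairwise disjoint subfamily avoiding everything chosen at earlier stages, can be sketched with a toy one-dimensional model; this is my own illustration (candidate ω's modeled as open intervals), not code from the lecture:

```python
# Toy 1-D sketch of the greedy step used at each stage of the construction:
# given candidate sets omega (here, open intervals (a, b)), pick a maximal
# pairwise-disjoint subfamily that also avoids everything selected earlier.

def maximal_disjoint(candidates, taken):
    """Greedily select a maximal subfamily of `candidates` that is pairwise
    disjoint and disjoint from all intervals already in `taken`.

    Maximality means: every discarded candidate overlaps some kept interval,
    which is exactly the property used to control the discarded points."""
    def overlaps(u, v):
        return u[0] < v[1] and v[0] < u[1]  # open intervals intersect
    selected = []
    for c in candidates:
        if all(not overlaps(c, s) for s in selected + taken):
            selected.append(c)
    return selected

# Step one on Delta_0 = (0, 1): the middle candidate overlaps the first,
# so it is discarded, and the discarded one meets a kept one (maximality).
step1 = maximal_disjoint([(0.0, 0.3), (0.2, 0.5), (0.6, 0.9)], taken=[])

# A later step must also avoid the elements taken at step one.
step2 = maximal_disjoint([(0.35, 0.55), (0.1, 0.2)], taken=step1)
```

Here `step1` keeps `(0.0, 0.3)` and `(0.6, 0.9)`, and `step2` keeps only `(0.35, 0.55)`, since `(0.1, 0.2)` meets an interval taken earlier; the rings S_n around the kept elements are not modeled in this sketch.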
So this key property says that the points in H_n that we do not take at time n for the construction are necessarily in the set S_n. And also, for the measure of the sets S_n: we fix ω in P_k, and then we have the S_n's for n bigger than k, as I said before, and their Lebesgue measure decays exponentially fast. That is the reason why we can control the sum of the Lebesgue measures of these S_n's and see that it is finite. And for that reason we get that P is a partition, where P is the union of all the P_k's. Why is that? Because of the first property: if a point is not taken for the construction and belongs to H_n and to Δ_n, then it is necessarily in S_n. Since the sum of the Lebesgue measures of these S_n's is finite, the Borel-Cantelli lemma says that typical points belong to only finitely many S_n's. And since this holds, and typical points belong to only finitely many S_n's but to infinitely many H_n's, necessarily at some point the point is taken into the construction. So the fact that P is a partition mod 0 is a consequence of Lemma 4.4, the Borel-Cantelli lemma, and Lemma 4.2. Okay. We also want the integrability of the recurrence times, and so for this partition we define the recurrence time as I said, n plus the l given by the third property. And we have this property; this is because of the construction, the construction we just made gives this property. The conclusions of the first item of the theorem we are proving follow from conditions (I2) and (I3). So the first item is that the induced map has expansion and bounded distortion; it is essentially composing the expansion we have here with some estimates saying that the distances are not damaged in the last l iterates. And that is why we take N_0 very large: to have big expansion from here, which then compensates the possible lack of expansion in the last l iterates.
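In symbols, the covering argument just sketched is roughly the following; the notation is mine, following the lecture:

```latex
% Why P = \bigcup_k P_k covers \Delta_0 mod 0: the key property gives
x \in H_n \cap \Delta_{n-1}
   \;\Longrightarrow\;
x \in \Big(\textstyle\bigcup_{\omega \in P_n} \omega\Big) \cup S_n ,
% and since \sum_n m_0(S_n) < \infty, Borel-Cantelli yields
m_0\big(\limsup_n S_n\big) \;=\; 0 .
% A typical x lies in infinitely many H_n (by (I1)) but in only finitely many
% S_n, so it is eventually captured by a partition element.
```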
The second item was this estimate here; essentially it is, well, it is exactly this estimate, and it holds because of the construction. The third item is the most technical. Let me show you the items again so you can see: the third item is this one here, and it is technically very involved, so I am not even going to give an idea of its proof. This is where we use the ideas of Gouëzel; if you want, you can see the section I mention here. Okay, so the integrability of the recurrence times. I said that this condition, which corresponds to the second item of the theorem, so it corresponds to (4.8), is related to the integrability of the recurrence time. Now I am going to give you an idea of that integrability argument, which is important. This will be used to build, as I said before, either Gibbs-Markov maps or Young structures. In both cases we have a Gibbs-Markov map associated: even in the case of a Young structure we have a Gibbs-Markov map, because we have seen last time that we have a return map for the Young structure, and then we have a quotient map, which is a Gibbs-Markov map. And we consider the sequence of consecutive recurrence times, and a sequence of sets H_n*, which in practice will be of this type. Let me tell you about the two cases we have in mind for the applications. In case A, that is this footnote here, the sets H_n* will be exactly H_n, okay? So there is no mystery about them. In the case where we build Young structures it is not exactly that, but close to it; I will tell you when we consider the partially hyperbolic situations, but they will be sets of the same nature as the H_n's. And the idea is that this property, in case A, when we take H_n* equal to H_n, will be true.
It will be a consequence of the definition of the H_n's that we will consider. So this property, (4.9), will be a consequence in case A; in case B we need to introduce new sets in order to have this property. Okay, so it is a bit mysterious for a while; I will let you know later how these sets appear. But assume this property now, (4.9), and that the sequence is F-concatenated. So we have this proposition. Assume that we have a Gibbs-Markov map with respect to a certain partition and with certain recurrence times; assume that we have a sequence of sets such that typical points in Δ_0 belong to this sequence of sets with strong positive frequency, as I said before, and F-concatenated here; and assume we have the sequence S_n such that the sum of the measures is finite, and the property that we have just obtained. Then R is integrable with respect to m_0. How can we prove this? The proof is by contradiction: assume that R is not integrable with respect to m_0. Well, associated to the Gibbs-Markov map we have an invariant measure ν which is equivalent to m_0, and so if we assume R not integrable with respect to m_0, it is not integrable with respect to this invariant measure either. The advantage of the invariant measure is that we can use Birkhoff's theorem. And using Birkhoff's theorem, we can prove that this limit... Birkhoff's theorem can be used also for non-integrable functions, provided the function is non-negative, which is the case here. And so we have that this limit is equal to infinity. Well, we know, I have already mentioned, that this is true, and so from Borel-Cantelli a typical point belongs to only finitely many S_n's. So for these typical points, set s(x) as the number of times the point belongs to the S_n's. We also have, and it is not difficult to see, the integral... so we have this function s, which is non-negative again.
And using Birkhoff's theorem again, we have this, and it is not difficult to see, from the way we have defined the function, that this is precisely this. And this is summable, because it is summable with respect to m_0 and ν is equivalent to m_0 with density bounded from above and below. And now, this is the point, the only point, where I use the fact that H_n* is F-concatenated in H_n. It is to deduce, and I leave this as an exercise, that the number of j's in between two consecutive returns for which the orbit belongs to H_j* is bounded by 1 plus s(F^i(x)). The 1 accounts for the fact that it may be in one element of the partition, and the rest counts the moments at which it belongs to the S_n's. So that is the idea of this estimate. And now set R_n as the minimum of the accumulated return times r_i which are bigger than n. Using the estimate from the exercise, we can prove this: using these R_n's, we split into the iterates between consecutive returns. See that here it is from 1 to N; there it was from r_i to r_{i+1}. So we divide into the various return times that we have, and using the estimate we can say that this is bounded by this, and so this is bounded by R plus the sum of the s(F^i)'s, okay? Therefore, dividing by n, I have this estimate: this is exactly the previous estimate, and then, making some rearrangement, I factor out R_n and obtain this. Then observe that if R_n equals k, by definition we have this, and so we have this estimate here; it is a trivial consequence of that. So we have that this limit here is the limit of this, using that; from these bounds here we deduce this. But now this is equal to this, by the fact that R is not integrable with respect to ν. Well, and this is infinity.
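Schematically, the chain of estimates just described might be written as follows; this is my reconstruction from the spoken description, so the indices and the exact form of the exercise may differ from the slides:

```latex
% By Birkhoff's theorem (valid for non-negative, possibly non-integrable R),
% for \nu-a.e. x:
\frac{1}{n}\sum_{i=0}^{n-1} R(F^i(x)) \;\longrightarrow\; \int R \, d\nu = \infty,
\qquad
\frac{1}{n}\sum_{i=0}^{n-1} s(F^i(x)) \;\longrightarrow\; \int s \, d\nu < \infty .
% The exercise (using that (H_n^*) is F-concatenated in (H_n)) bounds the
% visits up to the n-th accumulated return r_n = R + R\circ F + \dots + R\circ F^{n-1}:
\#\{1 \le j \le r_n(x) : f^{\,j}(x) \in H_j^* \}
   \;\le\; n + \sum_{i=0}^{n-1}\big(1 + s(F^i(x))\big).
% Since r_n / n \to \infty, dividing by r_n gives
\frac{1}{r_n}\,\#\{1 \le j \le r_n : f^{\,j}(x) \in H_j^*\}
   \;\le\; \frac{n}{r_n}\Big(2 + \frac{1}{n}\sum_{i=0}^{n-1} s(F^i(x))\Big)
   \;\longrightarrow\; 0 ,
% contradicting the strong positive frequency, which keeps this quantity
% above \theta > 0 in the liminf.
```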
So we reach the conclusion that the limit of R_n over n is infinity. It then follows from the estimate we had before that this limit... this part is finite, and when you divide by R_n, since n over R_n goes to zero, we get that this limit is equal to zero. But this is, as I said, a frequent sequence, and a frequent sequence means typical points have positive frequency on it; so this contradicts that, because the definition is that the limsup of this is positive, while here I am proving that the limit is zero. So that is the way we prove the integrability, and it is a very nice argument. Let me tell you that this argument was used by Pinheiro in a situation of non-uniformly expanding maps; actually, I think he saw this argument in the book of de Melo and van Strien, which in turn used an argument from one-dimensional dynamics due, I think, to Keller. So in the end this argument comes from Keller, via de Melo and van Strien and Pinheiro, and we use it a lot to prove integrability in many situations. It is a very nice argument. So let us go now to the applications, first applying this to non-uniformly expanding maps: endomorphisms, possibly with critical or singular sets. So I will consider a local diffeomorphism outside a set C, which may be a set where the derivative is not an isomorphism, so a critical set, or a set of points where the derivative simply does not exist, which is what I call a singular set, okay? Or even the boundary of the manifold, if it is a manifold with boundary. So it is a bad set of points, okay? Outside it, the map is a local diffeomorphism, and we assume that this bad set of points is non-degenerate. Non-degenerate means, first of all, that its Lebesgue measure is zero, and that these three conditions hold.
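In the spirit of the Alves-Bonatti-Viana formulation, the non-degeneracy conditions near C can be paraphrased as follows; this is my paraphrase, with hypothetical constants B > 1 and β > 0, and the exact statement on the slides may differ:

```latex
% Derivative behaves like a power of the distance to the bad set \mathcal{C}:
\frac{1}{B}\, d(x, \mathcal{C})^{\beta}
   \;\le\; \frac{\|Df(x)\,v\|}{\|v\|}
   \;\le\; B\, d(x, \mathcal{C})^{-\beta},
\qquad x \notin \mathcal{C},\; v \ne 0,
% together with log-Lipschitz control of \log\|Df^{-1}\| and \log|\det Df|
% for nearby points x, y, in terms of d(x, y)\,/\,d(x, \mathcal{C})^{\beta},
% and non-singularity of f with respect to Lebesgue measure.
```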
So I won't look at them in much detail, but the first one says essentially that f is non-singular with respect to Lebesgue measure: if a set has zero Lebesgue measure, then its image and its preimage also have zero Lebesgue measure. And the two other properties essentially say that, close to the critical set, the derivative behaves like a power of the distance to it. This holds in many situations: for non-flat critical points in one-dimensional dynamics, for the critical points of Viana maps. These abstract conditions were introduced in my paper with Bonatti and Viana, in order to cover many examples in a class of non-uniformly expanding maps in any dimension. So these are the properties we need, okay? Essentially: sets of Lebesgue measure zero are preserved, and near the bad set of points — critical, singular — the derivative behaves like a power of the distance to that set. And now the non-uniform expansion. We say that the map is non-uniformly expanding on a certain set H if, for some choice of Riemannian metric on M, this condition holds. What is the condition? First look at this expression here: lambda is a positive number, and we have the log of the norm of the inverse of the derivative. If it were only this, without the average, the condition would say that this log is smaller than a negative number; removing the log, it says that the norm of the inverse of the derivative is smaller than one. That is the usual condition defining uniformly expanding maps. Here, instead, we have an average along orbits.
Instead of taking an average, the usual definition of an expanding map asks this for all x. So here we are requiring that condition only in average, and in a weak sense: not the lim sup, only the lim inf. Note that if I replace lim inf here by lim sup, I obtain a stronger condition, which will be used later as an assumption. So this assumption with lim inf is better: saying the lim inf is smaller than some negative number is weaker than saying the lim sup is smaller than that negative number. I will use the lim sup version later for other purposes, not to construct the SRB measure: I will use the lim inf version to construct the inducing scheme and the SRB measure, and for the decay-of-correlations estimates I will need the stronger condition. And since we want to consider situations with critical or singular sets, there is a price to pay, and the price is to impose a condition of slow recurrence to the critical or singular set, the bad set of points C. Slow recurrence is this condition here. It essentially says that we may go close to the critical set, but not too close too fast: in average, we have some control on how we approach the critical set. Let me say that this condition holds, for instance, for Viana maps and for quadratic maps — for many well-known one-dimensional maps. For quadratic maps it was deduced from the Benedicks–Carleson conditions; Freitas proved that it holds. It also holds for Viana maps. So it is a natural condition for maps with critical points. And there is a distance here which is not a true distance: the truncated distance, defined in this way. It is the real distance if the point is within distance r of C, and it is one if the point is not that close. And note that, since we take the log here, when it is one it contributes nothing to this average.
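In symbols — a hedged reconstruction of the two conditions being described, with notation possibly differing slightly from the slides — f is non-uniformly expanding on H if, for every x in H,

```latex
\liminf_{n\to\infty}\ \frac{1}{n}\sum_{j=0}^{n-1}\log\big\|Df(f^{j}(x))^{-1}\big\| \;<\; -\lambda,
```

and x has slow recurrence to C if, for every \(\varepsilon>0\), there is \(r>0\) such that

```latex
\limsup_{n\to\infty}\ \frac{1}{n}\sum_{j=0}^{n-1}-\log \operatorname{d}_{r}\!\big(f^{j}(x),C\big) \;<\; \varepsilon,
\qquad\text{where}\quad
\operatorname{d}_{r}(x,C)=
\begin{cases}
\operatorname{d}(x,C), & \operatorname{d}(x,C)\le r,\\
1, & \text{otherwise}.
\end{cases}
```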
So when we are at distance more than r from the set C — I'm not going to call it the critical set, because it can also be a singular set or the boundary; the non-degenerate set C — we simply do not count that moment in this condition. This is to say that we only take these averages into account when we come close to the set C, okay? And now we come to a very important concept in all these results on non-uniformly expanding maps, which is the notion of hyperbolic time. So, yes? — Can I make a quick remark? — Yes, sure, please. — I believe that this kind of slow recurrence is also related to what appeared in my minicourse. — Your measures, yes, it is. — I don't remember the name you've given it. — The adapted measures. Adapted measures, yes, it has to do with that. So if the measure is ergodic, then the limit is precisely the integral in your definition, okay? And since we take it adapted, the integral of the log of this is finite, right? — Yes, yes. — Okay, so if the measure is adapted, then this condition, condition 55, holds. Thank you, Yuri, it's a very nice remark. So if the measure is adapted, the condition holds, but for almost all points with respect to that measure, okay? — Exactly. — Because this converges to the integral of this function; and since you are taking this r here, this is essentially the tail of the integral — I'm assuming the function has poles — and this is the condition that gives the integrability. But the problem is that at this level we don't have any measure. I want to build a measure — an SRB measure. So this is a condition that I'm going to assume on a certain set H that will have positive Lebesgue measure. That will be my assumption, okay? And Lebesgue measure, unfortunately, is not invariant a priori — actually, in many cases it's not invariant at all.
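As a quick numerical sanity check of the two averages just discussed — purely illustrative, not from the book — we can use the full quadratic map f(x) = 4x(1−x), the classic one-dimensional example, with critical set C = {1/2}; the starting point 0.3 and the radius r = 0.1 are arbitrary choices of mine:

```python
import math

def f(x):
    # full quadratic map on [0, 1]; critical set C = {1/2}
    return 4.0 * x * (1.0 - x)

def dist_r(x, c=0.5, r=0.1):
    # truncated distance: the real distance inside radius r, and 1 outside
    d = abs(x - c)
    return d if d <= r else 1.0

def birkhoff_averages(x0, n):
    """Averages of log|f'| (expansion) and -log dist_r (recurrence) along an orbit."""
    x = x0
    expansion, recurrence = 0.0, 0.0
    for _ in range(n):
        # log |f'(x)| = log |4 - 8x|; clamp to avoid log(0) at the critical point
        expansion += math.log(max(abs(4.0 - 8.0 * x), 1e-300))
        recurrence += -math.log(dist_r(x))
        x = f(x)
    return expansion / n, recurrence / n

lyap, slow_rec = birkhoff_averages(0.3, 100_000)
print(lyap, slow_rec)  # expansion average should be near log 2, recurrence average small
```

The positive expansion average illustrates non-uniform expansion (the derivative vanishes at the critical point, yet the orbit average of its log stays positive), and the bounded recurrence average illustrates slow recurrence.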
So we want to build another measure, well related to Lebesgue, and these are the SRB measures. But we need to put this as an assumption. And it is, of course, related to the condition used to define the adapted measures. Okay. So let us now consider local diffeomorphisms outside a non-degenerate set. We fix some small constant, which appears here; it's completely unimportant, it only matters when we do the calculations, and that's not the case here. I sometimes include such things only to make it easier if you want to check the calculations — exactly the same quantities — so you don't have to adapt what I say here to what is there. So b here is some constant which is completely unimportant here. And we say that n is a hyperbolic time for a point if this condition holds. What is this condition? It is slightly stronger than saying that this holds for all k between zero and n. It implies in particular that, when you go from the point x to the n-th iterate, with a k-th iterate in the middle, then — this is the chain rule, this condition here — when you look backward from the n-th iterate to the k-th iterate, the derivative of the inverse is contracting, because the derivative of the inverse, by the chain rule, is precisely this product here. And this is smaller than a number smaller than one, so it is contracting. So the condition of hyperbolic time says that it contracts when you go one iterate back, two iterates back, three, down to zero: when you look backward along all iterates, you see good contraction properties. Uniformly expanding maps have this property at all points, at all moments. So I am introducing here a concept that says that for certain points, at certain moments, we have this good property of backward contraction.
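In symbols — a hedged reconstruction of the definition being described, with b and the truncation radius being the unimportant constants just mentioned: given 0 < sigma < 1, we say that n is a (sigma, delta)-hyperbolic time for x if, for all 1 ≤ k ≤ n,

```latex
\prod_{j=n-k}^{n-1}\big\|Df(f^{j}(x))^{-1}\big\| \;\le\; \sigma^{k}
\qquad\text{and}\qquad
\operatorname{d}_{\delta}\!\big(f^{n-k}(x),C\big) \;\ge\; \sigma^{bk}.
```

By the chain rule, the first inequality gives exactly the backward contraction:

```latex
\big\|Df^{k}\big(f^{n-k}(x)\big)^{-1}\big\|
\;\le\; \prod_{j=n-k}^{n-1}\big\|Df(f^{j}(x))^{-1}\big\|
\;\le\; \sigma^{k}.
```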
So, for a given point, the good moments: that's what we call hyperbolic times. This is the case, for instance, when we have a global local diffeomorphism. If there is a non-degenerate bad set of points, we again need to impose the condition of not going too close to the critical set. And using this — this is the content of the next result, first proved in my paper with Bonatti and Viana — we prove that, given lambda, if these conditions hold (and these are the conditions ensured by non-uniform expansion and slow recurrence), then for big n, for most points in a certain set, we have these two conditions. So assume that we have non-uniform expansion and slow recurrence. The conclusion of the proposition is that we have hyperbolic times with certain constants — the important point being that e to the minus lambda over four is a number smaller than one, okay? And with a good property: there is some theta zero such that the fraction of hyperbolic times up to time n is bigger than theta zero. This is a very important property of these hyperbolic times. The idea of the proof is to apply the Pliss lemma, which is merely a number-theoretical lemma due to Pliss; many people have used it, and its proof is very simple. Let me also say that we get rid of the second condition in the definition of hyperbolic time when we have a global local diffeomorphism: we simply forget that part, and the corresponding part of the conclusion, when there is no bad set of points, okay?
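The Pliss lemma just mentioned is easy to test numerically. A minimal sketch — my own naive implementation, not from the book: given reals a_1, …, a_N ≤ A with average at least c2 > c1 > 0, the lemma guarantees at least theta·N "Pliss times" n, with theta = (c2 − c1)/(A − c1), such that every backward partial sum ending at n has average at least c1:

```python
def pliss_times(a, c1):
    """Indices n (1-based) with sum(a[k:n]) >= c1 * (n - k) for all 0 <= k < n."""
    times = []
    for n in range(1, len(a) + 1):
        s, ok = 0.0, True
        for k in range(n - 1, -1, -1):  # backward partial sums ending at n
            s += a[k]
            if s < c1 * (n - k) - 1e-12:
                ok = False
                break
        if ok:
            times.append(n)
    return times

# Toy sequence: 20 "bad" moments followed by 80 "good" ones (A = 1, average c2 = 0.6)
a = [-1.0] * 20 + [1.0] * 80
c1, c2, A = 0.1, sum(a) / len(a), 1.0
theta = (c2 - c1) / (A - c1)   # Pliss lower bound on the density of Pliss times
times = pliss_times(a, c1)
print(len(times), theta * len(a))
```

Thinking of a_j as log of the expansion at the j-th iterate, the Pliss times are exactly the moments with backward contraction at uniform rate, which is how the lemma produces hyperbolic times.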
And so, if there is only the first condition, the existence of hyperbolic times is a straightforward application of the Pliss lemma. The problem is when we have the bad set of points C: then we have to use the second condition as well, and we have to use it at two moments. That's why we have these epsilon one, epsilon two — for the conclusion they are completely unimportant. The idea is to obtain the first condition with a certain frequency and then, using the second one, adjust the frequency of the second condition in such a way that the sum of the two frequencies is bigger than one. Then we can obtain points at which the two conditions defining a hyperbolic time hold simultaneously with positive frequency. That's the idea of these epsilon one, r one and epsilon two, r two: we apply the Pliss lemma at two moments in order to adjust the frequencies, okay? So: if we have these conditions, which are related to the definitions of non-uniform expansion and slow recurrence, we have hyperbolic times with good frequencies. And the next conclusion concerns points having a hyperbolic time. If a point has a hyperbolic time, then there is a neighborhood — and I guess this is familiar to you, because this is my second requirement, condition I2, from the conditions I presented you today. It's I2, not the very first: the very first, I1, was that typical points belong to infinitely many H_n's; and for the points in H_n we have these neighborhoods — that was the second condition. So now I think there is no mystery about what H_n will be in the applications: H_n will be the set of points that have n as a hyperbolic time. If that happens, then we have this conclusion. This is something we deduced in that paper of 2000.
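In symbols, the conclusion referred to here — a hedged reconstruction of the hyperbolic-time lemma from that 2000 paper, with constants renamed: there are delta_1 > 0 and a constant C > 0 such that, if n is a (sigma, delta)-hyperbolic time for x, then there is a neighborhood V_n(x) of x which f^n maps diffeomorphically onto the ball of radius delta_1 around f^n(x), and for all y, z in V_n(x) and 1 ≤ k ≤ n,

```latex
\operatorname{d}\!\big(f^{\,n-k}(y),\,f^{\,n-k}(z)\big)
\;\le\; \sigma^{k/2}\,\operatorname{d}\!\big(f^{\,n}(y),\,f^{\,n}(z)\big),
\qquad
\log\frac{\bigl|\det Df^{\,n}(y)\bigr|}{\bigl|\det Df^{\,n}(z)\bigr|}
\;\le\; C\,\operatorname{d}\!\big(f^{\,n}(y),\,f^{\,n}(z)\big).
```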
So it's a very nice feature of hyperbolic times: they come with these neighborhoods that grow to a large scale, and we have backward contraction — that's what this condition here says, uniform backward contraction — and also a bound on the distortion. These are properties that uniformly expanding maps have everywhere, at every moment; here we have these features of uniformly expanding maps at certain points, at certain moments. And using this property we can obtain a first result — a very nice one, which is this. Assume we have a local diffeomorphism outside a non-degenerate set, and assume that we have non-uniform expansion and slow recurrence to C on a set of positive Lebesgue measure. So we have the good conditions on a set of positive Lebesgue measure. Then there are transitive invariant sets — kinds of attractors; transitive, and you will see that they have an attracting property — such that for m-almost every point of H the omega-limit set is precisely one of these sets Omega_j. So the assumption is a set H of positive Lebesgue measure on which we have non-uniform expansion and slow recurrence. And then we can decompose: think of the omega-limit sets of points of H; I'm telling you there is a finite number of transitive pieces that attract almost all points of H. So this is the decomposition of the attractors for the points of H into a finite number of transitive pieces. Moreover, each of these transitive pieces contains a ball — see, I'm not assuming any topological property of H, only positive Lebesgue measure, and still, inside the Omegas, the attractors of the points of H, there is a ball. So these attractors contain balls.
And on that ball, the map is still non-uniformly expanding and has slow recurrence for almost every point of the ball. I won't say more about this theorem than the following. The idea is to use these V_n's, the neighborhoods constructed in the previous proposition. Using them, for points of H we can build smaller and smaller neighborhoods with a bigger and bigger proportion of points of H — a kind of Lebesgue density theorem. And when we look at the image, we see disks of radius delta one on which the proportion is essentially the same, because we have a uniform bounded distortion. So, from this set of positive Lebesgue measure, we can build balls of uniform radius delta one on which the proportion of images of points of H is arbitrarily close to one. Then we take accumulation points of these balls, and we get a ball on which the proportion is exactly one. So the first step of the proof of this theorem is to construct these balls, given that Lebesgue-almost every point has non-uniform expansion and slow recurrence. Then we use a decomposition argument: if two of these balls interact, they belong to the same transitive piece, and so we can decompose the set of omega-limit points into a finite number of such pieces — finite because this delta one is uniform, so the radius of the balls is uniform. This uses very nice ideas from a paper of Pinheiro on expanding measures, from 2011. Well, in Pinheiro's paper it is not stated like this. And we are still considering only the case of non-uniformly expanding maps; as we will see today, there is a similar situation in the case of partially hyperbolic attractors, and I proved this theorem with Carlos Dias, Stefano Luzzatto and Pinheiro in the case of partially hyperbolic attractors.
So this is just an adaptation of those ideas of Pinheiro to this situation; you can see it in the book. So this theorem says: assuming non-uniform expansion and slow recurrence on a set of positive Lebesgue measure, we have these transitive pieces containing balls on which we see the non-uniform expansion and slow recurrence, and these balls are contained in the transitive sets Omega. Well, so let's look at these transitive pieces now. Almost everything I'll be doing now is with respect to a transitive piece containing a ball on which we know we have non-uniform expansion and slow recurrence; after the decomposition theorem it is natural to consider this. So let Omega be a transitive set and let Sigma, inside Omega, be a ball as in the previous theorem: a ball on which we see non-uniform expansion and slow recurrence to the bad set of points — and note that the non-uniform expansion holds almost everywhere. So let H be the set of points with non-uniform expansion and slow recurrence; the measure of H is exactly the measure of Sigma. So I'm now going to present the H_n's, approaching the assumptions of our first theorem of today: introducing the H_n's and checking the assumptions I1, I2, I3, okay? For I1: the H_n's are naturally the sets of points that have n as a hyperbolic time. Note that the proposition we have just seen gives us the positive frequency of moments at which points belong to the sets H_n, okay? That is the conclusion of — not the last proposition, but the one before last — the proposition on the existence of hyperbolic times. It gives this frequency. And note that it assumes non-uniform expansion in the weak sense: we have infinitely many moments at which we see the non-uniform expansion, and also the slow recurrence.
We have infinitely many moments at which this holds, and so infinitely many moments at which, looking backward, we see a positive frequency of hyperbolic times up to that time. It may then happen that for a long while we no longer see the positive frequency; but eventually a new moment arrives at which, looking backward, we again see a frequency of order theta up to that time, and so on, okay? So there are infinitely many moments such that the frequency up to those moments is a definite fraction theta, okay? This is what the weak condition, with lim inf, gives us. So there are infinitely many moments at which these fractions are bigger than theta. In particular, if this holds, points of Sigma belong to infinitely many H_n's, and so we have the validity of the first condition, belonging to infinitely many H_n's. Well, and the second condition was the existence of the V_n's; the conditions are stated precisely in the proposition on the existence of these V_n's, which we have just seen. So that is I2. For I3, I'm not going to show you the construction in the non-uniformly expanding case, just the idea, which is to use transitivity. This is the point where it is important to be working inside a transitive set. With the V_n's — so we have some disk Delta_0 of radius delta zero inside Sigma; typical points have the V_n's infinitely many times, and in the V_n's they grow to large scale. We take this Delta_0 to be a disk of radius much smaller than delta one, where delta one is the radius of the balls that are images of the V_n's, okay? So delta one is defined a priori, and now I choose the ball Delta_0 with radius much smaller than delta one.
And then, using the transitivity, in a finite — and uniform, that's an important feature — number of iterates, I can bring any ball of radius delta one to cover the ball Delta_0, provided I take the radius delta zero sufficiently small. And so I can find, inside the domain, a subdomain satisfying the property we need, which is this one here. So condition I3 is obtained essentially by using the transitivity, okay? And so, on these transitive sets, we have the conditions needed to apply the very first theorem of today. Then we set the recurrence times as they should be, and we obtain — see, as I told you at the beginning, I'm just putting these conditions I1, I2, I3; for I3 I'm not assuming any recurrence property, it is simply a finite number of extra iterates. But now I want to build a Gibbs–Markov map, and that's why in this case I apply I3 with this extra requirement of returning to this Delta_0. And since I construct these domains omega_n with this property of returning to Delta_0, we obtain a Gibbs–Markov map. So these are exactly the conclusions of the theorem — Theorem 4.1, the first theorem of today. And note that, by the definition of hyperbolic time, this is very easy to check: it is a concatenation property, because of the property that defines hyperbolic times. If n is a hyperbolic time for x — backward contraction — and we go to the middle, then f^j(x) has n minus j as a hyperbolic time, because it is again backward contraction. So H_n is f-concatenated. And, as I told you, in the endomorphism case I use exactly the same H_n, not a different H*_n — H*_n is more general.
So in this case I can choose H*_n exactly equal to H_n. These sets H_n, as we have seen, are such that typical points belong to them with positive frequency, and they are f-concatenated. These are the conditions that ensure that this R is integrable — that was the content of Proposition 4.6. Okay. And so, applying it, we have this nice theorem. Assume we have a map which is a local diffeomorphism outside a non-degenerate set, and a transitive set. If F is non-uniformly expanding and has slow recurrence at almost every point of a ball contained in the transitive set — and such balls exist, motivated by the previous theorem — then F has a Gibbs–Markov induced map, with integrable recurrence time, defined on some sub-ball of that ball. Okay. So the conclusion is the existence of a Gibbs–Markov induced map with integrable recurrence times under very general conditions. A first version of this result was proved by Pinheiro. This is a local version — local in the sense that we restrict to transitive sets, which can actually be seen as the transitive pieces of attractors for non-uniformly expanding maps. So this uses a local construction, in some sense similar to what Pinheiro did, but he did it with a global construction on the whole manifold. This is a local version of that result, which for applications is better, because we can apply it in more situations: we don't need the global assumptions, okay?
As a consequence of the results we have seen for maps with inducing schemes in the endomorphism case, in the same situation and under the same assumptions — so if we assume that F has non-uniform expansion and slow recurrence on a certain set of points — there are these disks and then, applying the previous result, F has a unique ergodic SRB measure whose support coincides with Omega. That it has an ergodic SRB measure is an immediate consequence of having a Gibbs–Markov induced map with integrable recurrence time, plus what we know for maps with integrable recurrence times in the endomorphism case: if we have such an induced map, then the original dynamics has a measure which is ergodic and absolutely continuous with respect to Lebesgue measure, which, in this case of maps with no contracting directions, is what we call an SRB measure. The only extra point here is that the support coincides with Omega. For this we use the fact that Omega is transitive: the transitivity allows us to conclude that the support is the whole attractor, okay? So we have this very nice consequence: if in a transitive set we have non-uniform expansion and slow recurrence on a set of positive Lebesgue measure, then there is a unique ergodic SRB measure supported on the attractor. And this finishes the construction of SRB measures in the endomorphism case, for non-uniformly expanding maps. Note — I keep stressing this — that here we have the weak non-uniform expansion condition. Let me say that the existence of SRB measures for non-uniformly expanding maps in general was first proved by myself, Bonatti and Viana, but our assumption was stronger, in the sense that in that very first result we had lim sup and not lim inf. And it's interesting: to prove it with lim inf, by the idea I've just given you, we use this inducing scheme.
So we have these structures, these geometric structures, on which we control the recurrence times, and we build these induced maps with very good properties. The original idea, under the stronger assumption, was simpler: simply iterating Lebesgue measure and controlling the densities at least at some iterates, on some subsets. And that method gives much less information. Of course, to build an absolutely continuous invariant measure it was enough, and it worked very well, but it gives much less information than this new method. And it's interesting that the new method applies under a weaker assumption: we improve the original result from 2000 and, with the weaker assumption, we obtain more. So this method is, in some sense, more efficient — though what is in fact more efficient is our knowledge of these non-uniformly expanding maps; twenty years have passed, so it's natural that there are improvements. Okay, so now the next goal: decay of correlations. Unfortunately, with only these assumptions of non-uniform expansion and slow recurrence, we cannot say anything about decay of correlations in general. We need a stronger non-uniform expansion assumption. But the stronger assumption is not so strange: it is exactly the assumption I used in my paper with Bonatti and Viana. We change the lim inf to a lim sup. Since this has to be less than a negative number, asking it of the lim sup is a stronger assumption — clearly, strong non-uniform expansion (I use this notation, with an S, for strong non-uniform expansion) implies non-uniform expansion. But what is the advantage of this condition? It allows us to introduce this function here. Why? Because if the lim sup is less than minus lambda, then this holds for all points of a certain set from some moment on; and in the applications I will assume that this certain set has positive Lebesgue measure.
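In symbols — hedged, following the paper with Bonatti and Viana: f is strongly non-uniformly expanding on H if, for every x in H,

```latex
\limsup_{n\to\infty}\ \frac{1}{n}\sum_{j=0}^{n-1}\log\big\|Df(f^{j}(x))^{-1}\big\| \;<\; -\lambda .
```

Since the lim sup always dominates the lim inf, this implies the weak non-uniform expansion condition used so far.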
So this says that for the points of this set there is a moment after which these averages are always smaller than minus lambda — this is just the definition of lim sup. So for x in H we can define this E(x): the first moment after which these averages are all smaller than minus lambda. This is ensured by the fact that we have the lim sup condition: for every x in H there is a threshold after which the averages are good, in this sense, satisfying this. Of course, think for now of maps with no non-degenerate critical set — assume the map is globally a local diffeomorphism. If we have this condition, we know that for every n bigger than this capital N, for that point x, we have a positive fraction of hyperbolic times when we look back. So this gives that points belong with positive frequency to the sets H_n that we have just built. In the case where we do have a non-degenerate set, we also need to consider the other assumption, and it likewise allows us to introduce the minimum moment after which it is satisfied. So, for a point x with the two conditions, we have the conclusion for all n greater than or equal to the maximum of the two thresholds. And again — this is a technical point — as I said, for the existence of hyperbolic times we need to consider two such conditions, so technically we have two functions; and when I define this function h(x), I have to take into account the two, with r one and r two, and take the maximum of the three, including this one here above. And here I'm saying that when C is the empty set, it is exactly the one above alone. This function plays an important role in the next result I'm going to present.
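A hedged reconstruction of these threshold functions — the exact bookkeeping with r one, r two in the book may differ:

```latex
E(x)\;=\;\min\Big\{\,N\ge 1 \;:\;
\frac{1}{n}\sum_{j=0}^{n-1}\log\big\|Df(f^{j}(x))^{-1}\big\| < -\lambda
\ \text{ for all } n\ge N \Big\},
```

with analogous thresholds E_1(x), E_2(x) for the slow-recurrence averages (one for each radius r_1, r_2) when C is nonempty; then h(x) is the maximum of the three, and reduces to E(x) alone when C is empty.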
So what is the result? Assume that we have C1 plus something. Note that the "plus something" is important in the conclusions of the lemma that gives us the V_n's: it is needed to control the distortion, so we need more than just C1. In many of these things — I would say most of them — we need more than just C1; in the C1 world there are many weird phenomena, so we have to take care of them, and we usually assume more than C1. So assume a C1-plus-Hölder local diffeomorphism outside a non-degenerate set, and again a transitive piece. If we have a ball such that the dynamics is strongly non-uniformly expanding and has slow recurrence to the non-degenerate set almost everywhere in that ball — that's what this says — then there is an L (well, it's not so important, but there is an L) and a Gibbs–Markov induced map defined on a sub-ball. The conclusion on the existence of a Gibbs–Markov induced map we already know, because we deduced it under the weaker assumption, not necessarily strong — it was one of the previous theorems. So the good part, the new part, is what comes next: the map can be taken in such a way that this holds. And this — sorry, this is a conclusion — this conclusion, or something similar, appeared before, in our very first theorem of today, Theorem 4.1; there, the conclusion was here. So I was a bit confused for a moment: in our very first theorem of today, Theorem 4.1, we had this conclusion, okay? And what was H_theta? H_theta was the set of points with strong positive frequency of moments in the H_n's. And clearly the condition is this one here — the one I'm not managing to highlight; I don't understand why, but that's what happens. So the condition is this one here.
And clearly, if n is bigger than this capital H — well, this way of writing is not great notation — this is the h and H I introduced before, and it says that the H_theta introduced in Theorem 4.1 is bounded using this function h. So the conclusion of our first theorem of today gave us this; and what I'm telling you now is: applying Theorem 4.1, we obtain a Gibbs–Markov induced map — that's what we obtained before, with that conclusion — but, since this inequality holds, we obtain this further conclusion. What's the advantage of this? That this quantity here we can control in terms of the tail of the non-uniform expansion and slow recurrence conditions, okay? So knowing the tails of non-uniform expansion — that's what it means: the tail of the non-uniform expansion and slow recurrence condition determines it. So now recall — it is stated here — that these sets E_n decay exponentially fast; and to apply the results on decay of correlations, in the polynomial and stretched exponential cases, we need to control this quantity. This result says that we can control it, provided we control this; and this part is unimportant for the applications in the exponential case, or even in the worse, polynomial or stretched exponential, cases, okay? So the E_n are no problem. Okay, so this is a very good theorem. It tells us that, essentially under the assumptions of the previous theorem — with the S now, strong non-uniform expansion — we obtain the Gibbs–Markov induced map, which we already knew exists, and in addition a control of the tail of the recurrence times in terms of the tail of non-uniform expansion, okay? The idea is that of the previous theorem, just observing that there is this relation between small h, capital H and H_theta. Okay, and with this it is natural to have the following conclusion.
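In symbols, the kind of statement this yields — hedged; this is the shape of the tail estimates in the Alves–Luzzatto–Pinheiro line of results, and the constants are not the point:

```latex
m\{x\in\Delta : h(x) > n\} \le C\,n^{-\gamma}
\quad\Longrightarrow\quad
m\{x\in\Delta : R(x) > n\} \le C'\,n^{-\gamma},
```

and similarly a stretched-exponential tail \(e^{-c\,n^{\alpha}}\) for h yields a stretched-exponential tail for the recurrence time R.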
So under the same assumptions, we conclude that if the tail — think of this as the Lebesgue measure of the tail of the non-uniform expansion and slow recurrence condition — if the tail is polynomial, then the recurrence times decay polynomially fast, with the same polynomial rate; if it is stretched exponential, then the recurrence times decay stretched exponentially fast. That's a very nice conclusion, and it is a simple consequence of the previous theorem. Well, there is one difference: here we have n plus L, but it's completely unimportant, since L is a uniform constant, so it's essentially the same. So this gives the tail of the recurrence times as the conclusion. And applying the theorems of the second day on decay of correlations, we get this first conclusion. Actually there are many conclusions we could deduce; I only state this one, the most general. So assume — oops — assume the same setting as before: a transitive set, and the non-uniform expansion and slow recurrence condition almost everywhere on a ball. We already know that we have a unique SRB measure μ supported on that transitive set. And again, as in the cases we have seen before, I don't know if this measure is mixing, so there is no hope, in general, of obtaining decay of correlations for μ itself. We do know it is ergodic. The consequence is that we can decompose this measure as we did before, using previous results, into a cyclic family of measures such that μ is the average of them all, and such that — so this is the result — if the tail of the hyperbolic times, essentially, decays polynomially fast, then we have polynomial decay of correlations with respect to these measures.
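Schematically, and in my own placeholder notation (the slides' constants may differ), the tail correspondence the theorem gives is the following: the recurrence-time tail is controlled by the tail of the non-uniform expansion and slow recurrence condition, up to the uniform shift L.

```latex
% Tail set of the non-uniform expansion + slow recurrence condition (sketch):
A_n \;=\; \Big\{\, x \;:\;
  \tfrac{1}{n}\textstyle\sum_{j=0}^{n-1}\log\|Df(f^{j}(x))^{-1}\| > -c
  \ \text{ or }\
  \tfrac{1}{n}\textstyle\sum_{j=0}^{n-1}-\log\operatorname{dist}_{\delta}(f^{j}(x),\mathcal{C}) > \varepsilon
  \,\Big\};
% the theorem bounds the recurrence-time tail by this tail (up to the uniform shift L), so
m(A_n)\lesssim n^{-\alpha}
  \ \Longrightarrow\ m\{R>n\}\lesssim n^{-\alpha},
\qquad
m(A_n)\lesssim e^{-c\,n^{\tau}}
  \ \Longrightarrow\ m\{R>n\}\lesssim e^{-c'\,n^{\tau}}.
```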
But, as in the cases we have seen, each of these measures is actually a good mixing measure for a power of the original system. And the same for stretched exponential, okay? And if you go to Section 6.5, there are several corollaries where, under some extra assumptions, we can deduce that q is equal to one, and so the μ_i's coincide with the measure μ itself, and we obtain decay of correlations for μ. One of these assumptions is topological mixing: if the dynamics is more than transitive on Ω — if it is topologically mixing — then q is equal to one, and so we have decay of correlations for the unique ergodic SRB measure. Another assumption which appeared before is the ergodicity of the powers of the dynamics: if the measure μ is ergodic with respect to all powers of f, then necessarily q is one. Okay, so let's apply this now to Viana maps. Viana maps — these are the maps that Yuri has already used — are a family of maps on the cylinder. On the base, S^1, we have s ↦ ds mod 1, a uniformly expanding map. In Viana's original paper d has to be greater than or equal to 16; it has to be a strong expansion. It is actually still an open problem whether in general one can take d greater than or equal to 2. This has been improved by Buzzi, Sester and Tsujii, but only for open sets of maps in the C-infinity topology — because this is not just one map: the Viana maps form an open set of transformations, actually a small neighborhood of the map I describe here. They proved that we can replace d ≥ 16 by d ≥ 2, but only for open sets of maps in the C-infinity topology, not in the C3 topology as we are going to take here. It's very interesting, and still an open problem.
So the open problem is: in the original setting of Viana, for these Viana maps in the C3 topology, can we diminish d to d ≥ 2? Still open. On the fibers, the vertical lines, it's a quadratic map: think of a(s) as a parameter depending on the fiber you are in. This a(s) is a Morse function, so it is effectively changing with s, and it oscillates around a very good parameter for the quadratic family. So this is the original map. And one can see that there is an interval I such that the image of S^1 × I is strictly inside it, so we have an invariant region, as for the quadratic map. Essentially this is due to the quadratic map having this property, and because the parameter here is strictly smaller than 2. Then there is an attractor, also for maps near this map, since nearby maps also satisfy this condition: we may replace f̂ here by any sufficiently close map, and for this the C0 topology is enough. So we have an attractor, I call it Ω_f. And the Viana maps are the family of C3 maps in a small C2-topology neighborhood of the map described above. The C3 regularity is essentially because of the critical set: for this map, the critical set is precisely the line S^1 × {0}; when you change the map a bit, you change the critical line, and the C3 regularity is needed to make a change of variables that puts the critical line of the perturbations of f̂ back onto that line. It's just because of that. And the C2 topology is because we need C2 estimates for Viana's calculations, and a perturbation of the critical line is still a C2 submanifold. Okay, so what are the conclusions? Well, the conclusions are essentially due to the original work of Viana — where, well, it's not stated in this language.
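For reference, the original Viana skew-product has the following shape (as in Viana's 1997 paper; here α is a small constant and a_0 a Misiurewicz-type parameter of the quadratic family — I am writing it from memory, so take the constants as indicative):

```latex
\hat f\colon S^1\times\mathbb{R}\to S^1\times\mathbb{R},\qquad
\hat f(s,x) \;=\; \big(\,ds \bmod 1,\; a(s) - x^{2}\,\big),\qquad
a(s) \;=\; a_0 + \alpha\sin(2\pi s),
% with d \ge 16, \alpha small, and an interval I \subset (-2,2) such that
\hat f\big(S^1\times I\big)\ \subset\ \operatorname{int}\big(S^1\times I\big),
\qquad
\Omega_f \;=\; \bigcap_{n\ge 0} f^{n}\big(S^1\times I\big)
% for any f sufficiently C^0-close to \hat f.
```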
Well, at the time Viana proved this result, in '97, there was not yet this language of non-uniform expansion, strong non-uniform expansion and so on. What Viana actually did was prove that there is a positive Lyapunov exponent in the vertical direction almost everywhere. But since we have strong expansion in the base dynamics, in the directions complementary to the vertical one, we get for free a positive Lyapunov exponent in these other directions. And so actually one can prove that we have non-uniform expansion. Now, let me tell you that non-uniform expansion is in general stronger than having all Lyapunov exponents positive almost everywhere on a set with positive Lebesgue measure. It is effectively a stronger condition — or at least, to be precisely correct, it is not known whether all Lyapunov exponents being positive on a set with positive Lebesgue measure implies non-uniform expansion on a set with positive Lebesgue measure. That's the right way of saying it. So the strong non-uniform expansion essentially comes from Viana's work; what I did, with Vítor Araújo, was to put it into the language of non-uniform expansion — it is deduced from Viana's original estimates. Also, I say here that the dynamics on the attractor is topologically mixing, and, as we know, this is important for the uniqueness of the measure. In fact I proved with Viana, in a second work, from 2002, that it is locally eventually onto, which is more than topologically mixing: every open set, after a finite number of iterates, covers the whole attractor. So we have an open set of transformations.
So, applying the previous results, let me just derive the conclusion: there is a unique SRB measure whose support is the attractor, which I call Ω. Moreover, we can prove that its basin covers almost every point of S^1 × I, the topological basin of the attractor. The uniqueness of the measure on the attractor is a consequence of our previous results, but we can actually prove that the basin covers almost every point of S^1 × I. So it's a very good measure: it gives the conclusions of a physical measure for almost every point in S^1 × I. And we have this stretched exponential tail of hyperbolic times — this is what comes from Viana's work — and from our previous results we deduce this decay of correlations. Let me tell you that this decay of correlations was first obtained by Gouëzel; here it is a consequence of our results. Gouëzel used the same circle of ideas but a slightly different approach, in the line of what I told you about the work of Pinheiro on global constructions. Before that, with Stefano Luzzatto and Vilton Pinheiro, we had proved super-polynomial decay of correlations, and then Gouëzel improved our result. And in my work with Viana we proved that the SRB measure depends continuously on the map — that's what I was saying: the density, in the L1 norm, depends continuously on the map in this open set of maps. That's what we call statistical stability. So we have these very nice properties for the Viana maps. This example of the Viana maps has been very inspiring for many of the things we do for non-uniformly expanding maps.
It remains an interesting open problem to know whether this stretched exponential estimate above is optimal for the Viana maps, or just a limitation of the method that Viana used. Why stretched exponential? Why not exponential? That's a very nice open problem. So with this example I finish the case of non-uniformly expanding maps, and now, to finish my course, I'm going to consider partially hyperbolic attractors. The ideas and objects are similar to the non-uniformly expanding case; now we will have a strong contracting direction and another direction — we will have an invariant splitting. So that's my next goal. Assume we have a forward invariant compact set on which we have a splitting of this type. Splitting means the sub-bundles E^cs and E^cu are invariant under the derivative, and assume it is a dominated splitting. Dominated means this condition, used by many people in many results. In the beginning I am not assuming contraction or expansion in these sub-bundles, but the idea is that the behavior in E^cs is essentially contracting and the behavior in E^cu is essentially expanding, in the following sense: even if we have no contraction in the E^cs direction, or no expansion in the E^cu direction if you prefer, any non-contraction in one of them is compensated by definitely stronger contraction, or expansion, in the other, okay? So the splitting says that each direction is not necessarily doing what it does in a uniformly hyperbolic decomposition, but when it does not do what it is expected to do, the other direction compensates — that's the idea, and vice versa for the other case. And then we say that E^cs is uniformly contracting if this holds.
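The compensation idea the lecture describes is the standard formulation of domination, which can be sketched as follows (C and λ are the usual uniform constants):

```latex
% Dominated splitting over the forward invariant compact set K:
T_K M \;=\; E^{cs}\oplus E^{cu},
\qquad
Df_x\,E^{*}_{x} \;=\; E^{*}_{f(x)}\quad (\,*=cs,\,cu\,),
% and there are C>0,\ \lambda\in(0,1) such that, for every x\in K and n\ge 1,
\big\|Df^{n}|_{E^{cs}_{x}}\big\|\cdot
\big\|\big(Df^{n}|_{E^{cu}_{x}}\big)^{-1}\big\| \;\le\; C\,\lambda^{n}.
```

So if E^cs fails to contract along a stretch of orbit, the product condition forces E^cu to expand enough along that same stretch to compensate, and conversely.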
So this is the usual contracting property, and uniformly expanding if this holds — the usual uniformly expanding property. Note that if both hold, if E^cs is uniformly contracting and E^cu is uniformly expanding, this is the classical uniformly hyperbolic situation. When one of them, E^cs or E^cu, has uniform behavior, we simply drop the c. The c here stands for center-stable or center-unstable; when we drop the c it means we are in the classical situation — stable or unstable, contracting uniformly or expanding uniformly. This is just to simplify the notation. And we say that the set is partially hyperbolic — above I defined having a dominated splitting — the set K is partially hyperbolic if it has a dominated splitting for which one of the bundles has uniform behavior. I mean: for every point in the set K, either E^cs is uniformly contracting or E^cu is uniformly expanding, or both hold; it cannot be one of them at certain points and the other at other points — it is for every point, okay? Well, we say that Ω is an attractor if the usual condition holds: there is a neighborhood whose image is sent into its own interior, and the attractor is the intersection of the forward images of that neighborhood. So, according to my definition of partial hyperbolicity, we have essentially two situations. The first one — the one we are going to treat, and the one to which all the methods I've been describing can be applied — is E^s ⊕ E^cu: uniformly contracting plus center-unstable. There are results in the other situation, E^cs ⊕ E^u, and at the end of this presentation I will mention some of them; there are several results on the existence of SRB measures and decay of correlations.
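The "usual" uniform conditions and the attractor just mentioned can be written out as follows (standard definitions, with my placeholder constants C, λ and trapping region U):

```latex
% Uniform behavior of the sub-bundles:
E^{cs}\ \text{uniformly contracting:}\qquad
\big\|Df^{n}|_{E^{cs}_{x}}\big\| \;\le\; C\,\lambda^{n},
\qquad n\ge 1,
\\[4pt]
E^{cu}\ \text{uniformly expanding:}\qquad
\big\|\big(Df^{n}|_{E^{cu}_{x}}\big)^{-1}\big\| \;\le\; C\,\lambda^{n},
\qquad n\ge 1;
% and the attractor associated with a trapping region U, f(\overline U)\subset\operatorname{int}(U):
\Omega \;=\; \bigcap_{n\ge 0} f^{n}(U).
```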
But interestingly, in the dual situation — not the one I'm going to treat here — the construction of the SRB measure is much simpler, because the SRB measure essentially lives on unstable manifolds. There, essentially using a result I've already mentioned, by Pesin and Sinai, one is in the classical situation: take Lebesgue measure on an unstable disk, control the push-forwards, and we get an SRB measure. So constructing the SRB measure in that other situation, not the one we are going to consider here, is easier. But, interestingly, proving decay of correlations in that other case, where constructing the SRB measure is easier, is much more complicated; there are only some partial results. I'll try to have time at the end of this presentation to mention those results and tell you the situations in which we have decay of correlations. So let's start now with our situation. Naturally, along the center-unstable direction we are going to consider some non-uniform expansion, and again I'm going to consider a weak version to deduce existence. So we have lim inf here — and I don't know why I cannot highlight, no problem — we have lim inf here, and later, for the decay of correlations, we will consider lim sup. And again we introduce hyperbolic times. Note that this is a condition along orbits, but of course in the E^cu direction, which is natural; the condition is essentially the same as the one we considered in the endomorphism case. The hyperbolic times are essentially as in the non-uniformly expanding case, but here we don't have a critical set: we consider diffeomorphisms. Actually there are generalizations of these results to diffeomorphisms away from a set where there may be discontinuity points, but I'm not going to consider that situation.
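The lim inf condition and the hyperbolic times along E^cu can be sketched as follows (my notation; c > 0 and σ ∈ (0,1) are the placeholder constants):

```latex
% Non-uniform expansion along the center-unstable direction
% (weak, lim inf, version used to build the SRB measure):
\liminf_{n\to\infty}\ \frac{1}{n}\sum_{j=0}^{n-1}
  \log\big\|Df^{-1}|_{E^{cu}_{f^{j}(x)}}\big\| \;\le\; -c \;<\; 0;
% and n is a \sigma-hyperbolic time for x if, for every 1\le k\le n,
\prod_{j=n-k+1}^{n}\big\|Df^{-1}|_{E^{cu}_{f^{j}(x)}}\big\| \;\le\; \sigma^{k}.
```

So a hyperbolic time asks for uniform backward contraction of the cu-derivative over every final block of the orbit segment, exactly as in the endomorphism case but restricted to E^cu.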
It uses ideas similar to those for non-uniformly expanding maps with critical sets, so it can be done, but here I'll stay at the level of a global diffeomorphism. And the hyperbolic times are introduced similarly. So here, by a direct application of Pliss's lemma, we obtain the existence: whenever we have this condition, we obtain a positive frequency of hyperbolic times. This is essentially Pliss's lemma. Now we have some extra complications, due to the diffeomorphism case: we have more directions to control. The first point is that we can introduce a number such that the following is essentially uniform. This condition here is very important; it is essentially uniform continuity of Df. The uniform continuity of Df allows us to introduce a δ1 such that, whenever a point is at distance less than δ1 from x... Oh, the cones — I didn't introduce them, I forgot. The condition of partial hyperbolicity — actually, the dominated splitting alone — allows us to introduce invariant cone fields. This is as in the classical situation: to introduce the cones we don't need hyperbolicity, we just need a dominated splitting. And this is the center-unstable cone field. So naturally this quantity here appears: Df at a certain point, restricted to the E^cu direction. And note that σ2 to the minus one fourth is a number bigger than one. So I'm saying that, in a neighborhood, the derivative at any other point — this is for f(x), which is why I take f(y) — is close to the derivative at f(x), up to that factor; and not only for tangent vectors, but also for all vectors in the cone field, because the width of the cone field can be taken very small. This is just a continuity argument, and so we have this good estimate.
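The center-unstable cone field the lecture refers to has the standard shape below; its invariance follows from domination alone, as stated (a > 0 is the cone width, λ the domination constant — again my placeholder notation):

```latex
% Center-unstable cone field of width a>0:
C^{cu}_{a}(x) \;=\;
\big\{\, v = v^{cs}+v^{cu}\in E^{cs}_{x}\oplus E^{cu}_{x}
  \;:\; \|v^{cs}\| \le a\,\|v^{cu}\| \,\big\},
% invariance, a consequence of the dominated splitting (no hyperbolicity needed):
Df_{x}\,C^{cu}_{a}(x)\ \subset\ C^{cu}_{\lambda a}\big(f(x)\big).
```

Taking the width a small is what lets the continuity estimate on Df pass from tangent vectors in E^cu to all vectors in the cone, at the cost of the factor σ2^(−1/4).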
So this is the proposition giving us the good neighborhoods, the V_n's. It is similar to what we obtained before, but here we have to take cu-disks: disks whose tangent space is contained in the center-unstable cone field. For points in a cu-disk, if a point has a hyperbolic time — and here there is some complication, because in the previous case I always considered σ-hyperbolic times. I could avoid this, but I want to tell you about something very subtle, which has to do with the sets H_n and H_n^*. This is the case where we need to introduce H_n^*, and it has to do with these hyperbolic times. Note that σ-hyperbolic times are better than σ2^α-hyperbolic times for any power α smaller than one, because σ is less than σ2^(1/2) or σ2^(3/4), and σ is the rate of contraction, okay? So σ is better than σ to the three fourths, which is better than σ to the one half — that's the idea. And if we have an intermediate, σ^(3/4), hyperbolic time, then we get the good conclusion, the good neighborhoods in the disk, expanding as they should. Well, we lose something in the estimates: no longer σ^(3/4) but σ^(1/2). And we get the expansion and the good distortion control. So, modulo these details on the powers of σ, this is essentially the previous lemma of the endomorphism case, now for cu-disks. And so we can deduce, as before, the proposition. This is a theorem similar to the previous situation: if we have a partially hyperbolic set and a set H with positive measure on which we see non-uniform expansion along the center-unstable direction, then there is a finite number of transitive sets attracting almost every point of the set H.
So this condition holds. Moreover — and this is parallel to the previous result for endomorphisms — each of these attractors contains a cu-disk on which f is non-uniformly expanding along the center-unstable direction at almost every point of the disk. Actually, this result was proved first in this form: we proved it directly in the diffeomorphism case with these co-authors of mine; the other one, the endomorphism case, is the one I decided to include in the book. Okay, so this is similar to what we have seen before. So now we consider, again, one of these transitive pieces, and one of the disks that we know exists in it, where we see non-uniform expansion in the center-unstable direction at almost every point of the disk. Again we consider hyperbolic times and so on. There is this difference here — you can see the σ^(3/4)-hyperbolic times. Well, this is completely unimportant at this level; the important thing is that we have some number smaller than one, σ to some power, and we have this, and we have the concatenation property as we had before. So this is simply the definition. Actually, here it is a σ-hyperbolic time, but I define the H_n differently. Here — sorry — we are still not at the decay of correlations; this is only to build the SRB measure, so I'm taking the weak assumption. Since it's the weak assumption, we obtain the weak frequency: a lim sup bounded below by θ, okay? And I define this as follows: if we fix a σ-hyperbolic time, then we have this, and we consider H_n as the set of points which have the weaker hyperbolic time, σ to the three fourths. For this H_n we have this property, and notice that a σ-hyperbolic time implies a σ^(3/4)-hyperbolic time, because σ is smaller than σ to the three fourths.
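The weak frequency statement and the definition of the sets H_n just described can be summarized like this (a Pliss-type statement; θ > 0 depends only on the expansion constant and on σ — sketch in my notation):

```latex
% Pliss-type weak frequency of hyperbolic times: for a.e. x in the disk,
\limsup_{n\to\infty}\ \frac{1}{n}\,
  \#\big\{\,1\le k\le n \;:\; k\ \text{is a }\sigma\text{-hyperbolic time for }x\,\big\}
  \;\ge\; \theta \;>\; 0;
% the sets H_n are defined with the weaker power:
H_n \;=\; \big\{\, x \;:\; n\ \text{is a }\sigma^{3/4}\text{-hyperbolic time for }x \,\big\},
% and, since \sigma < \sigma^{3/4} < 1,
n\ \text{a }\sigma\text{-hyperbolic time}\ \Longrightarrow\ x\in H_n.
```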
And so we can use the same ideas and build a partition. The idea here is to check the conditions of our very first theorem of today. So what is the idea? We take a disk and we use these V_n's: the V_n's satisfy the first assumption of our very first theorem of today. We build these neighborhoods that grow to large scale, and then we use transitivity to bring them back. But now, in this situation, bringing them back is not as before: before, a certain part was mapped diffeomorphically onto the disk Δ0. Now, in the diffeomorphism case, coming back means coming close to Δ0 — it need not come exactly onto the disk Δ0, but to another unstable disk close to it. You take another domain, another V_n; it grows to large scale and comes back to yet another disk close to Δ0. And so we build the first generation, the very first generation. This idea is similar to the one we used on the last day for the solenoid with intermittency: we can build a Young structure using this idea of generations of unstable disks. We start with these V_n's, grow to large scale, use transitivity, bring them back, and build more. So the first generation is the disk Δ0 itself, and then we build further generations of center-unstable disks. So the idea is that one; it's similar to the construction for the solenoid with intermittency. There is only one delicate point: the integrability. So this is the conclusion: under the assumptions above, f has a set with a full Young structure. You see, a full Young structure means the disks themselves are contained in the set Λ which carries the Young structure. In general this need not be so: in general there is only a product structure, where each unstable manifold contains a subset of positive measure.
Here the full disk is contained in the set; this is a special situation because we are considering attractors. But then there is the integrability of the recurrence times. The idea is to use Theorem 4.1: in its third condition, (I3), the number of iterates was free — it doesn't matter how we obtain it, provided we have the stated properties, and then we have the conclusions. So here I'm telling you that we have to be careful when we return using transitivity, but we construct this Young tower, again through some algorithmic construction, in order to obtain the Young structure. What about the recurrence times? Well, you may say: but we have already obtained the integrability of the recurrence times under the assumptions of the theorem. Well, under extra assumptions: there we used the concatenation property, which is here, and the strong positive frequency of points in a certain set. And the concatenation property is with respect to a Gibbs–Markov map: in the place where we deduced the integrability of the recurrence times, there was a Gibbs–Markov map. You may say: but here we have a Gibbs–Markov map as well — the quotient of the return map. That's what I put here, the quotient I introduced last day, and this capital F is the Gibbs–Markov map. But the problem now is the concatenation property — and this is why I consider σ to the three fourths, σ to the one half, and then σ itself. We have the concatenation property for hyperbolic times of f: when we go from x to f^R(x), the return to the Young structure, if n > R is a hyperbolic time for x, then n − R is a hyperbolic time for f^R(x). Okay, that's what this property here says.
But we want the concatenation property not with respect to this f, but with respect to the induced dynamics, the Gibbs–Markov dynamics F — because I used properties of the Gibbs–Markov map in the proof of the proposition that gives the integrability. So we have a problem: the concatenation property is no longer literally true, because I have to replace the point f^R(x) by the point down here, on the returning unstable disk. But the point is the following: when I build the Young structure, I build it in a very small region — small, inside a neighborhood of size δ1 — in such a way that this property here holds. And so we can pass estimates on the derivative at a point to any nearby point, losing a factor σ to the minus one fourth. And that's precisely why — I'm not going to check all the details, just recalling — we used σ to the three fourths: when we lose one fourth, we go to σ to the one half, and σ to the one half appeared here as well. Okay, so that's the idea, but we need to change the set in order to have the concatenation property. And that's why I consider H_n^*, a stronger version of the hyperbolic time sets, with σ itself. With this stronger version I do have the concatenation property, because I choose the size of the stable manifolds small enough that everything stays in the good neighborhood around the point, in which we can control the derivative at any other point in terms of the derivative at x, or at iterates of x. And so we have this: this tells us that the H_n^* form a frequent sequence of sets — this follows from this — and moreover we have the concatenation property, or rather this implies the concatenation property: if you take x in H_n^*, then its image under the Gibbs–Markov map necessarily belongs to H_{n − R}, with R the recurrence time.
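In symbols, the weaker concatenation property just described reads roughly as follows (a sketch of my understanding of the lecture's bookkeeping; the exponents are as stated, but the precise statement is on the slides):

```latex
% Weaker concatenation property, in the notation above:
x\in H^{*}_{n}
\ \ \big(\text{i.e. } n \text{ is a } \sigma\text{-hyperbolic time for } x\big),
\quad R = R(x)\ \text{the recurrence time},\ n > R
\ \Longrightarrow\
F(x)\in H_{\,n-R},
% i.e. n-R is only a \sigma^{3/4}-hyperbolic time for the return F(x):
% the loss of the factor \sigma^{-1/4} comes from comparing derivatives at
% f^{R}(x) and at the nearby point F(x) on the returning unstable disk.
```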
Well, it's something like that, okay. So this is the point where we need to change the set, because with the same set, with respect to the same hyperbolic times, we cannot prove the concatenation property. Fortunately, the proof of that proposition goes through with this weaker version of the concatenation property, and with this we can prove the integrability of the recurrence times. So we have a Young structure with integrable recurrence times, and applying it we deduce the existence of SRB measures. And to finish this E^s ⊕ E^cu case: for the decay of correlations, again we need the stronger version, we need lim sup. We can again introduce this, and again, under assumptions similar to those we had before, we have a unique SRB measure whose support is the attractor. But then again, we don't know if this measure is mixing, and so we obtain a decomposition of it into a finite number of measures, and we obtain the good conclusions with respect to these measures. And again, under some topological mixing property or the ergodicity of all powers, we deduce uniqueness. There are also examples — no, I'm not going into that. So these are the results for partially hyperbolic attractors. And to finish, just two more slides on the dual case, just to mention some results; I think it's fair to mention them, they deserve to be mentioned. Any questions? A question? Oh, please. "What should I think about as examples of these partially hyperbolic systems? Does the time-one map of a flow fit into this framework?" Sorry? Examples of systems like this? Yes. In my paper with Bonatti and Viana we introduced what is called a derived-from-Anosov example. You take an Anosov diffeomorphism, you go to a saddle point of it, and you take the unstable direction of dimension greater than or equal to two.
And then you can make some damage in the unstable direction in order to create a non-uniformly expanding behavior there: you create an invariant direction which is contracting inside the previous unstable bundle of the Anosov diffeomorphism. So you create a diffeomorphism — that's what we call derived from Anosov — in such a way that the center-unstable direction is the previous unstable direction. Since it now contains some contracting behavior, it is no longer uniformly expanding; but in average, you can make a calculation: if the Jacobian along that direction is strictly bigger than one away from the saddle, and the frequency of visits to that saddle point is small, then we still have non-uniform expansion in the center-unstable direction. So what we had in mind when we proved these results were these derived-from-Anosov diffeomorphisms, okay? But we can also apply these ideas, for instance, to the Poincaré map of the geometric Lorenz flow. I think you mentioned flows — wasn't that the question? "Yes, I was mentioning time-one maps of flows." For time-one maps — well, deducing the results for the flow itself is more delicate; that involves Dolgopyat's techniques, which several people have used, and there are problems with the flow direction for the mixing. But at the level of the time-one map there is no problem: there are flows whose time-one maps are partially hyperbolic systems like this and for which these assumptions are verified. "Okay, thank you." So these are the examples I was not going to talk about — but thank you, since you asked, I've mentioned them. So let's go to the dual situation; I think it's fair to mention these results. As I said, the dual case is: we have a uniformly expanding direction plus a center-stable one, and constructing the SRB measure is essentially iterating Lebesgue measure.
So we have unstable manifolds, and the classical constructions also work in this case: you take Lebesgue measure on an unstable manifold, iterate, control the push-forwards and the densities, and you can construct ergodic SRB measures. Then Bonatti and Viana proved the following result: suppose we have an attractor for which the center-stable direction — here there is a mistake on the slide: it should be cs, not cu — is mostly contracting. So what is mostly contracting? This λ^{c+} — sorry, this should be negative, not positive — is the biggest Lyapunov exponent in the center-stable direction. We say that E^cs is mostly contracting — I don't know why I cannot highlight — if all the Lyapunov exponents in that direction are negative for a positive Lebesgue measure subset of points in any unstable disk, okay? For any unstable disk, there is a positive Lebesgue measure set of points at which we see all the Lyapunov exponents in the E^cs direction negative — so again, there is a mistake here: it is E^cs that is mostly contracting, not E^cu; and E^cu here should be E^u. It's my habit of writing E^cu — I wrote E^cu for E^cs, okay. So Bonatti and Viana, using this work of Pesin and Sinai, gave sufficient conditions for almost every point of the topological basin of the attractor to belong to the basin of one of finitely many SRB measures: there is a finite number of measures such that the topological basin of the attractor is contained in the union of their basins. And moreover, they gave a sufficient condition for uniqueness of the SRB measure: if every leaf of the unstable foliation — the global unstable manifold — is dense in the attractor, then there is only one SRB measure, okay.
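Correcting the slide's cs/cu slip, the mostly contracting condition can be written as follows (sketch, in the notation of the lecture):

```latex
% Mostly contracting center-stable direction (Bonatti--Viana setting):
\lambda^{c+}(x) \;=\;
\limsup_{n\to\infty}\frac{1}{n}\log\big\|Df^{\,n}|_{E^{cs}_{x}}\big\|,
% E^{cs} is mostly contracting if, for EVERY disk D tangent to the unstable direction,
\operatorname{Leb}_{D}\big\{\, x\in D \;:\; \lambda^{c+}(x) < 0 \,\big\} \;>\; 0,
% i.e. all Lyapunov exponents along E^{cs} are negative on a positive
% Lebesgue measure subset of every unstable disk.
```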
So the result of Bonatti and Viana is essentially a result giving conditions for finiteness and uniqueness of the SRB measures. And for the decay of correlations, well, I mention here three results, but you will see that they only partially solve the problem. So what we would like to have is: assume we have an attractor of this type with some contraction in the CS direction. It can be non-uniform contraction, similar to what I've considered, but now with F in place of F inverse; or even this weaker condition here, so maybe the stronger one can be a first attempt. So: assume some strong non-uniform contraction in the E^cs direction and deduce similar results for the decay of correlations. Well, we don't have that result, and the results we have are these. To be honest, I don't know if there are any recent results in this direction; the only ones I know for this kind of attractors are from the beginning of this century, due to Dolgopyat and to Armando Castro. You see, the first result, by Dolgopyat, is for three-dimensional manifolds, so not very general. And moreover, E^cs has to split into two subbundles, one uniformly contracting and the other one mostly contracting, in the sense I defined before. Then, in this situation, assuming the unstable leaves are dense so that the SRB measure is unique, he proves that this unique SRB measure has exponential decay of correlations. The second result is due to Armando Castro. He starts with something of the type I've just mentioned, a derived from Anosov: it starts with an Anosov map, goes to a fixed point and makes some damage there. In fact, this kind of example had been used before, by Mañé, I think also by Bowen. So many people have used this kind of derived from Anosov.
The derived map is not always the same, but the idea is always the same. And so Armando proved the decay of correlations for those derived from Anosov maps, but using in a strong way the existence of a finite Markov partition: he makes some perturbations of an Anosov map, but the perturbations have to be done inside one element of the Markov partition, in such a way that the Markov partition still exists for the perturbed system. And he uses this in a strong sense. So still a very particular situation. In the other situation Castro considered, he could forget the Markov partition, but the price to pay is that the center-stable direction is only one-dimensional. And in all these cases the conclusion is exponential decay of correlations. But you see that the cases are very, very particular. The argument of Dolgopyat does not use inducing schemes. The argument of Castro, in both works, uses these Young towers; and he does a very nice inducing scheme, because he builds the inducing scheme in the stable direction, in backward time. But he is still using Young's results that I've mentioned here. The result of Dolgopyat uses different techniques, I think standard pairs and things like that. Okay, so that's all. Freedom now, especially for you. Thank you very much. And I leave you with a lot of references. I'm going back to the very first page. This very first page has the new link of today, which goes to these full notes. Okay, so you have the link here. And thank you very much. Are there any more questions? Actually, I wanted to make a comment: you make some kind of essential use of the hyperbolic times, right? Places, returns; and if you want, say, exponential decay, you assume positive frequency in some way. Yes, we need positive frequency. Notice that I use positive frequency in two ways.
For the decay of correlations in the strong sense, it has to be that, for most points, after a certain threshold depending on the point, the positive frequency holds forever, okay? For every time. And the decay of correlations is given by the decay of the measure of those thresholds, so it is governed by the points that need more time to achieve this strong positive frequency. I also use positive frequency to build the SRB measures, and there a weaker version of positive frequency suffices: for any large N we can find an arbitrarily large n beyond it such that at that n we have positive frequency. But then maybe for a long time we don't see more n's with positive frequency; and again, in the future, eventually a new n appears with the same positive frequency. So I have positive frequency in two ways, and the first one I've described now is what I call strong positive frequency. But I use, as you say, the hyperbolic times in a very strong way to build these V_n's. They are essential in all these constructions, and the frequency of those hyperbolic times is also very important. So only assuming the presence of infinitely many hyperbolic times, without saying anything about the frequency, you can get the Young structure? Yes, but not the integrability of the recurrence time. I use the weak version of positive frequency to deduce the integrability of the recurrence time. So if you don't want integrability of the recurrence times, you don't need positive frequency of any kind; that almost every point has infinitely many hyperbolic times is enough. That's what I did here today. So the parallel I want to make is that in the symbolic dynamics construction, we also require some presence of hyperbolic times. Okay. So the set that we considered is a kind of subset of hyperbolic points which return to the same Pesin set infinitely often in the future and in the past. Okay.
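The two versions of positive frequency of hyperbolic times being contrasted here can be sketched as follows. The threshold function $N(x)$ and the constant $\theta$ are standard notation, not taken verbatim from the lecture:

```latex
% Let H(x) = { n >= 1 : n is a hyperbolic time for x } and fix theta > 0.
%
% Strong positive frequency (used for the decay of correlations):
% there is a threshold N(x) such that, for every n >= N(x),
\#\bigl( H(x) \cap \{1,\dots,n\} \bigr) \;\ge\; \theta\, n ,
% and the decay rate is governed by the speed at which
% m\{ x : N(x) > n \} goes to zero.
%
% Weak positive frequency (used to build the SRB measure and to get
% integrability of the recurrence time): the same inequality holds
% only for infinitely many values of n, rather than for all large n.
```

With neither version, only the bare existence of infinitely many hyperbolic times for almost every point, one still gets the Young structure, but not the integrability of the recurrence time.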
So we see this as a kind of... it is not literally a hyperbolic time, because a hyperbolic time is stronger, right? You have to have those contractions in the past. But it still carries some hyperbolicity at that time. Yes, some sort of almost uniform hyperbolicity. I didn't mention that result, but I think Stefano, with Pesin and Climenhaga, used that idea to build the inducing schemes. The idea of symbolic dynamics, you mean? Yes, they use these Pesin blocks. Ah, okay. Okay. So in between our approaches there is this work: the approach of Climenhaga, Luzzatto and Pesin is inducing, but it uses these Pesin blocks a lot. Okay. Still going in the direction of positive frequency. So on the symbolic side you can also have that, and I think that's exactly what Buzzi, Crovisier and Sarig are going to do next week. Ah, okay. So they have this coding, and then, to get exponential decay, they have in some sense to prove some positive frequency of good times. Okay. And both of them will tell us, maybe. Yeah, very much related to what you spoke about today. Yes. That's great. Thanks. See you next week. Have a nice weekend. So does anyone have any other question? I don't think so, perhaps. Okay, so if not, let's finish. Yeah. Bye-bye. Thank you. Thank you. So I'll see you all next week for the workshop. Yes. Exactly the same time. Bye-bye. Bye-bye.