And actually I will take the chance, hello everybody. Before we start today's activities, I would like to make a quick advertisement for a postdoctoral position that we have here at my university, UFC, which is related to the Instituto Serrapilheira. This postdoctoral position is for one year, with a possible extension for another year, for candidates in dynamics to come here and join our dynamics group. So you are all invited either to apply, if you are eligible, or to send it to other people who may be interested. I have just left the link to the MathJobs advertisement here in the chat, okay? So now I believe we can continue the activities of this school. It is a great pleasure to open the second mini-course, which will be given by José Alves from Universidade do Porto. José will talk about the other part that we want to discuss in these two mini-courses. The title of his mini-course is SRB Measures and Young Towers, and José can give you further details on the number of lectures and how he will develop his mini-course. Thank you, José.

Okay, Yuri, thank you very much. Well, good morning, good afternoon, good evening, depending on where you are listening from. First of all, I would like to thank ICTP for supporting this mini-course and all these activities, and also the co-organizers, Yuri and Stefano, maybe the real organizers; I am only helping them. So, I will talk about SRB measures and Young towers. I will start today with a brief digression through physical measures, of which SRB measures will be a particular case. Then I will use Young towers, or inducing schemes more generally, to deduce some statistical properties of those SRB measures. Inducing schemes are also the method I will use to construct SRB measures in some situations. So I will start with physical measures. I will give four lectures, in principle of one hour and thirty minutes each.
I have this link here where you can access the notes. The notes are still incomplete; this is an ongoing project, so I will make each lecture available on the day I present it. And I think for today I have more material than I will actually cover, so there will be extra material for the next class. Okay, so I will start with physical measures. The setting is the following. We consider a map; I will only be considering discrete-time dynamical systems, so a map f from a certain set into itself. The set will be M, a Riemannian manifold, and I will use a small m to denote the Lebesgue measure on that Riemannian manifold. We consider an f-invariant probability measure mu on the Borel sets, and we say that this measure is a physical measure if, for a positive Lebesgue measure set of points x, we have the following convergence: the averages of Dirac measures along the orbit of x converge to mu in the weak* topology. For those who do not know what weak* convergence is, I translate it here: for any continuous function phi, an observable, the averages of phi along the orbit converge to the integral of phi with respect to mu; note that phi evaluated at f^j(x) is precisely the integral of phi with respect to the Dirac measure at f^j(x). The basin of a physical measure is the set of points for which we have this convergence; being a physical measure means that this happens for a positive Lebesgue measure set of points. I leave a first, very simple exercise: show that if we have an attracting periodic orbit, and we take the Dirac measures along that orbit, add them up and divide by their number, then we obtain a probability measure, and that probability measure is a physical measure. It is a straightforward exercise, not difficult. Okay, just to give you a first example.
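As a small numerical illustration of this first exercise, here is a sketch of my own (the map f(x) = x/2 and the observable are illustrative choices, not from the lecture): for a map with an attracting fixed point, the Birkhoff averages of a continuous observable along any orbit in the basin converge to the value of the observable at the fixed point, that is, to its integral against the Dirac measure there.

```python
import math

def f(x):
    # toy map with an attracting fixed point at 0; every orbit is in its basin
    return x / 2.0

def birkhoff_average(phi, x, n):
    # (1/n) * sum_{j=0}^{n-1} phi(f^j(x)): the integral of phi against the
    # empirical measure (1/n) * sum of delta_{f^j(x)}
    total = 0.0
    for _ in range(n):
        total += phi(x)
        x = f(x)
    return total / n

avg = birkhoff_average(math.cos, 1.0, 1000)
print(avg)  # close to cos(0) = 1, the integral against the Dirac measure at 0
```

The same function works for any attracting periodic orbit: the averages converge to the mean of the Dirac measures along the orbit.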
But the examples we will be considering here are of a different nature. The Dirac measures I have just mentioned are singular measures, while the measures we will be considering here will, in many cases, not be. The course will be divided into two main parts: one for endomorphisms, another for diffeomorphisms. In the diffeomorphism case we will have contracting directions; in the endomorphism case we will not have contracting directions, so it is not to be expected that singular measures appear as physical measures. What we will have, in fact, are measures absolutely continuous with respect to Lebesgue measure. So first I present some general results in order to motivate these measures, and we will see that certain such measures are physical measures. I start with a proposition whose proof you can find in this reference, a recent book of mine, published by Springer about one year ago. There you can find proofs of the results I will present here, I would say most of them. But this first result, I think, makes a very instructive exercise, so I leave its proof as an exercise, with a hint. It actually has to do with something from when I was very young: when I was starting to see these things, I thought this result was a trivial consequence of the Birkhoff ergodic theorem. It is a consequence, not a difficult one, but not a trivial one. So what is the result? Consider M a compact metric space and f from M to M a Borel measurable map; by Borel measurable I mean that the preimage of any Borel set is a Borel set. If mu is an ergodic f-invariant probability measure, then the basin of mu covers mu-almost all of M.
Well, the Birkhoff ergodic theorem says that the time means converge to the space mean for every L1 function, so in particular, in this setting, for any continuous function, and this happens for mu-almost every point. But the problem is that, in principle, the mu-full measure set of points depends on the function we are considering, in particular on the continuous function we are considering. So the idea for proving this result is to use the Birkhoff ergodic theorem plus something else. The something else, which I leave here as a hint for the proof, is that C0(M), the space of continuous functions on M, has, because I am considering a compact metric space, a countable dense subset. With that, and of course with the Birkhoff ergodic theorem as well, we can manage to prove this proposition. So the basin is almost all of M, but with respect to the measure mu. For a physical measure I need at least positive measure with respect to the Lebesgue measure m. And the corollary is that if I have an ergodic invariant probability measure and I add absolute continuity (absolute continuity, when I do not say anything else, is always with respect to Lebesgue measure), then it is indeed a physical measure. This is because if the measure is absolutely continuous, a set of full mu-measure cannot have zero Lebesgue measure. So the basin has positive Lebesgue measure, not necessarily full Lebesgue measure, but that is not required by the definition of physical measure. So an ergodic, absolutely continuous invariant probability measure is a physical measure. This is the type of measure we will be obtaining in many situations, and in particular, using inducing schemes, we will be able to construct ergodic absolutely continuous measures. Just a remark: a physical measure does not need to be ergodic.
In fact, I leave this reference, which has a very nice example of a physical measure that is not ergodic. The example is a one-parameter family, and for certain parameters the maps are defined on the two-sphere S2: the transformation restricted to the equator is the identity, and the Lebesgue measure on the equator is a physical measure. Clearly it is not ergodic, since Lebesgue measure for the identity is not ergodic. So it is a nice example; but in the cases we will be considering here, we will always build the physical measures as ergodic, absolutely continuous measures. Okay, so I will now present two toy models that are good motivation for many things that are going to appear here. I start with this very, very simple example, the doubling map, defined on the unit circle as f(x) = 2x mod 1. Well, it is easy to see that f preserves the length of intervals: the preimage of an interval consists of two intervals of half the size, so the Lebesgue measure of the preimage equals that of the interval. Hence f preserves the Lebesgue measure m on S1, because the intervals generate the Borel sigma-algebra. And Lebesgue measure is obviously absolutely continuous with respect to Lebesgue measure. We can also prove that m is ergodic. I leave this as an exercise, especially for the young people who have never seen the proof, with two hints. One is to use Fourier series together with the fact that f is ergodic if and only if the observables that are constant along orbits (which is the meaning of phi composed with f being equal to phi) are necessarily constant; this is a standard characterization of ergodicity. Using Fourier series it is not difficult: you write down the Fourier expansions of phi and of phi composed with f, and you arrive at the conclusion that the only Fourier coefficient that need not be zero is the constant one.
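Before the second hint, a numerical aside of my own: since Lebesgue measure is invariant and ergodic for the doubling map, Birkhoff averages along Lebesgue-typical orbits converge to Lebesgue integrals. One caveat for anyone trying this: naive floating-point iteration of x to 2x mod 1 collapses to 0 after about 53 steps, because doubling just shifts the binary mantissa. A faithful sketch instead uses the standard identification of the doubling map with the shift on binary digits, and the fact that a Lebesgue-typical point has i.i.d. fair digits:

```python
import random

def doubling_orbit_average(phi, n, precision=40, seed=0):
    # The doubling map acts as the shift on binary digits, and a
    # Lebesgue-typical point of S^1 has i.i.d. fair digits.  Draw the digits
    # once and read off f^j(x) from digits j, j+1, ..., j+precision-1.
    rng = random.Random(seed)
    bits = [rng.randint(0, 1) for _ in range(n + precision)]
    total = 0.0
    for j in range(n):
        x = sum(bits[j + i] / 2.0 ** (i + 1) for i in range(precision))
        total += phi(x)
    return total / n

# Birkhoff average of phi(x) = x along a typical orbit: it tends to the
# Lebesgue integral of x over [0, 1), which is 1/2.
print(doubling_orbit_average(lambda x: x, 100_000))
```

The truncation to 40 binary digits introduces an error of order 2^-40, negligible next to the statistical fluctuation of the average.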
So you get the ergodicity in this way. The other hint is a more dynamical proof. It uses the fact that any interval becomes the whole of S1 in a finite number of iterates, and the fact that this transformation preserves proportions, because the derivative is constant, together with the Lebesgue density theorem. Here we use ergodicity in the simple form: any invariant set A, with f inverse of A equal to A, has either zero measure or complement of zero measure. If you assume that A has positive measure, you take a small interval on which the proportion of A is almost one; this is the Lebesgue density theorem. Then you iterate until the interval becomes the whole of S1. The proportion is preserved, so the proportion of A in S1 is almost one, and the "almost one" can be upgraded to exactly one, since the density point argument lets the proportion be taken arbitrarily close to one. That is the way it works. Okay, so these are two ideas to prove the ergodicity of Lebesgue measure for the doubling map. Hence Lebesgue measure is a physical measure for the doubling map. Well, let us go to the second toy model, which is a bit more sophisticated: the solenoid attractor. It is defined on the solid torus S1 times D, where D is the unit disk; you can see it inside C as the set of complex numbers with absolute value less than or equal to one. We can describe the transformation as a skew product map: in the base, which is S1, it is the previous doubling map, and in the disk it shrinks the disks and wraps the image twice around the solid torus. This is the usual solenoid transformation, and we have an attractor A. When you cut this attractor by a disk transverse to S1, you see a Cantor set.
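To have something concrete, one standard normalization of the solenoid map (my choice of constants; the lecture keeps it qualitative) is F(theta, z) = (2 theta mod 1, z/4 + e^{2 pi i theta}/2) on S1 times D. The sketch below iterates two points of the same transverse disk and watches the uniform contraction of the fibers:

```python
import cmath

def solenoid(theta, z):
    # one standard normalization of the solenoid map: doubling in the base,
    # contraction by 1/4 plus a theta-dependent translation in the disk fiber
    return (2 * theta) % 1.0, z / 4 + cmath.exp(2j * cmath.pi * theta) / 2

p = (0.1, 0.3 + 0.2j)
q = (0.1, -0.5 + 0.1j)  # same base angle, different point of the transverse disk
d0 = abs(p[1] - q[1])
for n in range(1, 11):
    p = solenoid(*p)
    q = solenoid(*q)
    print(n, abs(p[1] - q[1]))  # shrinks by a factor 1/4 at every step
```

Since |z/4 + e^{2 pi i theta}/2| is at most 3/4 when |z| is at most 1, the map sends the solid torus strictly inside itself, which is why the intersection of the forward images is an attractor.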
And there are some well-known facts about it; what I am going to say now is hyperbolic dynamics. A is a uniformly hyperbolic set: we have contraction in the disks transverse to S1 and expansion in the S1 direction. So for each point in the attractor we have a stable disk and an unstable disk. The stable disk is two-dimensional, transverse to S1, and the unstable disk is one-dimensional; an interval, maybe, is a better word. And, maybe not so well known but also true, there exists an ergodic invariant probability measure mu. Simply put, we can lift Lebesgue measure: this is a skew product, so we have the projection onto the first factor, and we can lift Lebesgue measure on S1 to a measure on the attractor A. I call this measure mu. We will see that mu is also a physical measure; one has to work a bit more and use some further facts about this transformation, and we will see that this is true. As I said, I need some more facts, but I am presenting this as a simple example that I guess many people, maybe most people attending this course, know, and I hope many of you know all this information; but even those who do not, don't be scared, I will present in detail all the concepts I am using now. So, some more facts about the solenoid attractor. First, A is foliated by unstable manifolds, in the sense that through any point of A there is an unstable manifold, and the unstable manifold through that point is contained in the attractor. This is a characterization of attractors: it necessarily has to be like this whenever we have an attractor. Second, consider the conditionals of the measure mu, the lift of Lebesgue measure to the attractor: we have a measure mu on the attractor, and we can consider the family of conditional measures on unstable manifolds given by the Rokhlin decomposition.
I am going to present that in detail. So we can decompose the measure, in a kind of Fubini way, into conditionals on the unstable manifolds; and for the lift of Lebesgue measure, those conditionals are absolutely continuous with respect to the conditionals of Lebesgue measure on the unstable disks. This is the second fact. The third one concerns Birkhoff averages: whenever you take points on the same stable disk (stable disks are the disks transverse to the S1 direction), the Birkhoff averages are constant on that stable disk. This is a simple fact. I am listing these facts because I am going to use them to deduce that mu is a physical measure. And the fourth fact is that the stable foliation is absolutely continuous: you consider the stable disks through points, and if you take two manifolds transverse to the stable disks, and you slide points of one transversal along the stable manifolds to the other transversal, you get an absolutely continuous transformation. As I said, I will present all these concepts in detail. Well, using these four properties, we can prove that mu, the lift of Lebesgue measure, is a physical measure. Why? First of all, mu is an ergodic measure supported on A; this comes from the fact that we are lifting an ergodic measure. Then, by Proposition 1.2, the basin of mu contains mu-almost every point. Using fact one, the property that A is foliated by unstable manifolds, we deduce that the basin contains almost every point of the unstable disks; better said, the conditional of mu on almost every unstable disk gives full weight to the basin.
So this is a consequence of the ergodicity, the fact that A is foliated by unstable manifolds, and that proposition. Then we use fact two simply to replace the conditionals of the invariant measure by the conditionals of Lebesgue measure: the conditionals are in fact equivalent in this case, so Lebesgue-almost every point in each unstable leaf belongs to the basin. But for a physical measure we need a set of points of positive three-dimensional Lebesgue measure. Since the stable foliation is absolutely continuous, we deduce that the basin necessarily contains a set of positive Lebesgue measure, and so mu is a physical measure. We also need to use the fact that Birkhoff averages are constant on stable disks, which has to do with the definition of physical measure: it is what allows us to saturate the basin along the stable disks. So these are the simple ideas behind the proof that the lift of Lebesgue measure to the solenoid is a physical measure. Let me now start to present the concepts in detail. The first one is the Rokhlin disintegration, so that it is not so mysterious; some people have heard of it but never seen the definition. So let me put the definition here. This can be done on a compact metric space: I consider the Borel sigma-algebra on that compact metric space, and a probability measure mu defined on that Borel sigma-algebra. Consider now a partition P into Borel sets, and let pi be the map assigning to each point the atom of the partition that contains it. So this is the transformation, and the idea is to introduce a measure on the partition.
And then I use this to introduce the decomposition of the initial measure with respect to the partition. The transformation pi enables us to introduce a sigma-algebra on P, defined as the push-forward of the sigma-algebra B of Borel sets on X: it consists of the subsets Q of P whose preimage under pi belongs to B. And I can also consider the push-forward of the measure, defined in the usual way: you take the preimage of a set and measure it with the measure you have on the first space. This introduces a measure on P in a somewhat natural way. With this measure on P, we can introduce a disintegration of the measure mu with respect to the partition P. So what is that? It is a family of probability measures, indexed by the elements omega of P, such that the measure mu_omega of omega is equal to one, and this holds for almost every element of the partition; and we can integrate continuous functions with respect to mu by first integrating with respect to each mu_omega and then integrating over all omega in P, okay? So this is a disintegration of the measure, and we will refer to the measures mu_omega as the conditional measures of mu with respect to the partition P. And there is this very nice result by Rokhlin, from the fifties, which says that every Borel probability measure has a disintegration with respect to any measurable partition. So this does not hold for all partitions; it holds for, I would say, reasonable partitions. What is a measurable partition? We say that the partition is measurable
if, essentially, the elements of the partition can be obtained by countably many cuts of the space, in the following sense. There is a countable sequence of Borel sets E1, E2, ... and a set of full measure; all these equalities of partitions are always up to a mu-null set of points. Any element of the partition, up to a zero mu-measure set of points, can be obtained by intersecting sets E_n^*, where each E_n^* is either E_n or its complement. It is in this sense that I say that the elements of the partition are obtained by a countable number of cuts of the whole set, okay? So these are the reasonable partitions for which we can make a Rokhlin disintegration. Okay, so I hope it is not mysterious anymore what a Rokhlin disintegration of a measure is. Okay, so in the solenoid we used the stable and unstable disks because we had a uniformly hyperbolic attractor, but stable and unstable disks can be constructed more generally for maps with non-zero Lyapunov exponents. That is what I am going to review briefly. So let f be a diffeomorphism of a smooth manifold. Given a point x of the manifold and a vector v in the tangent space, we consider the limit of (1/n) log of the norm of Df^n(x)v; I am assuming here that the limits as n goes to plus and minus infinity exist and are equal. These are, for those who know, the well-known Lyapunov exponents, and there is the theorem of Oseledets that these limits exist for mu-almost every point with respect to an invariant measure. Actually the result is more precise: there exist measurable functions and a Df-invariant splitting of the tangent space such that, for mu-almost every point, the limit is the same for every vector in each subspace of the tangent space.
And in addition, if the measure is ergodic, then the dimensions of the subspaces and the functions lambda_i are constant almost everywhere. Okay, and these lambdas are precisely the Lyapunov exponents. We will refer to the set of points for which the Lyapunov exponents are defined as the regular set, and I will denote it by R. And it is known, by Pesin's work, that if x has at least one positive Lyapunov exponent, then there is a small disk in the manifold tangent to the sum of the subspaces associated to the positive Lyapunov exponents; the assumption of at least one positive exponent is just for this sum not to be empty, okay? So you consider, for that point, the subspaces whose Lyapunov exponents are positive, and you take their sum; we have a disk tangent to that space such that for all points in the disk we have this convergence in the past. If you want, you can think of this as expansion in the future, but what actually happens is convergence in the past. That is the explanation. Can I make a question? Yes. So here, just to compare with what I did: you are not assuming that the other exponents are negative; there can be zero exponents as well, right? In practice they will be negative. Well, this is just to introduce the disks, but for the applications I have in mind they will be negative as well. Yeah. And in any case, in the end I will only consider non-zero Lyapunov exponents. Okay. So, considering f inverse instead of f, we can define the stable disks in a similar way. Okay, of course, for those points that have at least one negative Lyapunov exponent for f, or if you prefer, a positive Lyapunov exponent for f inverse. Okay. So this is just to recall the Lyapunov exponents, and to see that stable and unstable disks are defined under conditions much more general than uniform hyperbolicity. So much for Lyapunov exponents.
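To see the definition in action numerically, here is a small sketch of my own (the linear toral automorphism is my example, not one from the lecture). The top Lyapunov exponent is estimated by averaging the logarithmic growth of a tangent vector, renormalizing at each step to avoid overflow; for the derivative matrix A = [[2, 1], [1, 1]] of the cat map, the exact value is log((3 + sqrt(5))/2):

```python
import math

def top_lyapunov(matvec, v, n):
    # average of the log-growth of a tangent vector, renormalized each step;
    # converges to the largest Lyapunov exponent for a generic starting vector
    total = 0.0
    for _ in range(n):
        v = matvec(v)
        norm = math.hypot(v[0], v[1])
        total += math.log(norm)
        v = (v[0] / norm, v[1] / norm)
    return total / n

def A(v):
    # derivative of the toral automorphism (x, y) -> (2x + y, x + y) mod 1,
    # which is constant, so no orbit needs to be tracked
    return (2 * v[0] + v[1], v[0] + v[1])

lam = top_lyapunov(A, (1.0, 0.0), 500)
print(lam, math.log((3 + math.sqrt(5)) / 2))  # the two numbers should be close
```

For non-linear maps the same loop applies with matvec replaced by the derivative along the orbit; the renormalization is the scalar version of the usual QR trick.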
And so now we are prepared to define the famous SRB measures. Assume that we have a point which is regular, in the sense that the Lyapunov exponents are defined, and assume that it has at least one positive Lyapunov exponent; this is for ensuring that an unstable disk is defined at that point. We can then define the global unstable manifold in the standard way. If a point has an unstable disk, then all points in its orbit also have one, because the Lyapunov exponents are constant along the orbit, so unstable disks are defined at all points of the full orbit of the point. We consider the unstable disks through the points of the backward orbit, iterate them forward, and take the union of all these iterates; in this way we obtain the global unstable manifold. Right before defining the SRB measure, I need one more notion concerning partitions. Consider a measurable partition, as introduced before, of a certain set contained in the manifold. We say that the partition is subordinate to the unstable manifolds (f is assumed to have at least one positive Lyapunov exponent, so that the unstable manifolds are defined) if, for mu-almost every point, the element of the partition containing the point is a subset of the unstable manifold through that point. Okay, so this is the key property. And now the definition of a Sinai-Ruelle-Bowen measure, SRB measure: we say that an invariant probability measure mu is an SRB measure if, for any measurable partition subordinate to the unstable manifolds, the conditionals of the measure are absolutely continuous with respect to the conditionals of Lebesgue measure on those unstable manifolds.
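In symbols (writing gamma^u for the local Pesin disk as above, and xi(x) for the element of a partition xi containing x), the global unstable manifold just described is

```latex
W^u(x) \;=\; \bigcup_{n \ge 0} f^n\big(\gamma^u(f^{-n}(x))\big),
```

and mu is an SRB measure if, for every measurable partition xi subordinate to the unstable manifolds,

```latex
\mu_{\xi(x)} \,\ll\, \mathrm{Leb}_{W^u(x)} \quad \text{for } \mu\text{-almost every } x,
```

where Leb on W^u(x) denotes the Lebesgue (Riemannian volume) measure of the submanifold W^u(x).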
That is, the conditional measures live on the elements of the partition, which are subsets of the unstable manifolds. So this is the definition of a Sinai, or SRB, measure. Well, in the example of the solenoid attractor I also referred to the absolute continuity of the stable foliation, and so now comes that definition. We take two embedded disks in the manifold and assume that they intersect transversely a certain family of stable disks. I have this picture here: the disks are D and D prime, and they both intersect transversely some subset of stable disks. Sorry, I was testing my mic because I wanted to ask you something; okay, it works. About the previous definition of the global unstable manifold: is it allowed that these gamma-u's shrink very, very fast as you go along the orbit, or is the size fixed at some scale? Well, it depends on the dynamical system you are considering. If, for instance, the dynamical system is uniformly hyperbolic, then you can take these lengths fixed; in the future you are expanding, so the disks keep growing, and the unstable set has infinite length in many cases. But of course it depends on the concrete example: it may happen that, as you go backwards, these disks become smaller and smaller. In non-uniformly hyperbolic cases you can have that. So it depends on the case. So in the proposition you referred to before, giving the existence of these gamma-u's, there was some reference to the size. Which proposition, sorry? In the previous slide, I think. Yes, this one. So there exists a small disk gamma-u; the size of this disk can vary from point to point? Yes, and they can be arbitrarily small, depending on the point.
Again, if the system is uniformly hyperbolic, then we have a size uniformly bounded below over all points, but in general that is not the case: the size of the disk can be arbitrarily small. But they are defined for every point, or rather, with respect to an invariant measure we can ensure, by Oseledets' theorem, that almost every point is regular. But Pesin's result itself has nothing to do with measures: if the Lyapunov exponents are defined at a certain point and one of them is positive, then you have the unstable disk at that point; that is Pesin's result. Measures only serve to guarantee that almost every point has well-defined Lyapunov exponents; the result of Pesin is better in the sense that it only requires the Lyapunov exponents to be defined at the point. Okay, thank you. Just to make a comparison: the size of this gamma-u is somewhat related to that parameter q-u that I introduced; q-u is the scale at which we apply the unstable graph transform, okay? And how does R compare with NUH-chi-star? This R here just assumes that the Lyapunov exponents exist; NUH does not assume that. Okay, so is R contained in NUH-star-chi for some chi? No, because R collects all the points for which the Lyapunov exponents exist, even those with Lyapunov exponents going to zero, while we fix a chi, so we only look at points above some definite threshold. There is a threshold, yes. So there is no a priori relation: R is more general in the sense that it puts no restriction on the size of the exponents, while NUH does not require the limits to exist but imposes a threshold. Okay. But I will not be using this; it was just to motivate the definition of SRB measure. Okay. So I was referring to this absolute continuity property.
So look at the picture: you can take points in D and slide them along the stable manifolds to D prime; that is what this holonomy map does. And the stable foliation is called absolutely continuous if, for any such holonomy map, every subset of zero Lebesgue measure is sent to a subset of zero Lebesgue measure, no matter in which direction you slide. So this is the definition of absolute continuity, and the measure here is Lebesgue measure on the disks, the conditional Lebesgue measure; the absolute continuity is with respect to Lebesgue measure on the disks. And there is this nice result, again by Pesin, which says that if you consider a map whose Lyapunov exponents are non-zero with respect to an ergodic invariant probability measure, then the stable foliation is absolutely continuous. We need some differentiability, C2 differentiability, of the map. A very nice result. And so, with this, we have that every ergodic SRB measure with non-zero Lyapunov exponents is a physical measure. This is essentially the scheme I used to prove that the measure on the solenoid attractor is a physical measure: we have an ergodic SRB measure, which means that the conditionals on unstable manifolds are absolutely continuous, and using the absolute continuity of the stable foliation we deduce that it is a physical measure, exactly as we did in the solenoid attractor. Okay, so SRB measures are a specific case of physical measures, but they are measures with more geometric structure: we know that their conditionals are absolutely continuous with respect to Lebesgue measure on the unstable manifolds. And these are the physical measures that I will be constructing using inducing schemes.
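The scheme just described can be condensed into one line (same notation as before; B(mu) is the basin, m the Lebesgue measure on M):

```latex
\underbrace{\mu_{\xi(x)} \ll \mathrm{Leb}_{W^u(x)}}_{\text{SRB property}}
\;\Longrightarrow\;
\mathrm{Leb}_{W^u(x)}\big(B(\mu)\big) > 0
\;\Longrightarrow\;
m\big(B(\mu)\big) > 0 .
```

The first implication uses ergodicity and Proposition 1.2; the second uses the constancy of Birkhoff averages along stable disks together with Pesin's absolute continuity of the stable foliation.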
Well, the existence of SRB measures has been proved by many people in many situations. I will mention here just a few results which in some sense are related to what I am going to discuss in this mini-course. I divide the results into two main groups: one for endomorphisms, transformations which are not necessarily invertible, and one for invertible transformations, which I will call diffeomorphisms. And by the way, here I will always be presenting transformations which are differentiable, but many of these results also hold for systems with singularities, meaning discontinuity points or, in the endomorphism case, even critical points. Here I will always consider smooth maps; in many cases the results can be generalized, but just to stay away from many technicalities I will consider this simpler situation. So let me briefly review the results. The existence of SRB measures for uniformly expanding maps was proved by Krzyżewski and Szlenk in the 60s. In the beginning of the 80s, Jakobson proved existence for quadratic maps of the interval. In these cases there are no contracting directions, so SRB measures are simply measures absolutely continuous with respect to the volume measure on the manifold; for quadratic maps on the interval, absolutely continuous with respect to Lebesgue measure on the interval. Jakobson proved that there is, in the quadratic family, a positive measure set of parameters for which the respective quadratic map has an absolutely continuous invariant probability measure. I proved the corresponding result for a two-dimensional example introduced by Viana, the Viana maps, at the beginning of this century; actually I proved it at the end of the last century, and it appeared published at the beginning of this one. And the Viana maps are a case of non-uniform expansion.
So it's not a uniformly expanding map, but it is a map on a two-dimensional cylinder with two positive Lyapunov exponents, and I proved that that map has an SRB measure. And then we generalized that, together with Bonatti and Viana, to any n-dimensional non-uniformly expanding map. Well, with some variations in the definition of non-uniform expansion, Pinheiro improved that a bit in a paper back in 2006. And for all these maps, the first four examples, the number of positive Lyapunov exponents is always the dimension of the space we are considering. And there is this very nice example by Tsujii — still in the endomorphism case — of a two-dimensional family of maps which has one positive Lyapunov exponent and one negative Lyapunov exponent, but which has a physical measure that is absolutely continuous with respect to the two-dimensional Lebesgue measure. This is a very interesting example. One could conjecture that, for the measure to be absolutely continuous with respect to the n-dimensional Lebesgue measure, all the Lyapunov exponents should have to be positive. No, it's not the case. So there is this very nice example: the fact is that we have a strong expansion, and then there are a lot of crossings through points, and that creates some measure which is absolutely continuous with respect to the two-dimensional Lebesgue measure. So these are the examples I would like to mention in the endomorphism case. And in the diffeomorphism case, we go back to the works of Sinai, Ruelle and Bowen. So Sinai proved it first for Anosov diffeomorphisms, diffeomorphisms for which the whole manifold is a hyperbolic set. It also appears in the famous book of Bowen of 75. And for Axiom A attractors, both for flows and for diffeomorphisms, that has been proved by Ruelle, and also together with Bowen.
There are the Hénon maps, or Hénon diffeomorphisms, for which Benedicks and Young, based on previous work by Benedicks and Carleson, proved the existence of SRB measures for the Hénon attractors, which are not uniformly hyperbolic. And in the partially hyperbolic setting — so Anosov or Axiom A is what I mean by the uniformly hyperbolic setting — we have these results by Pesin and Sinai. They built some class of measures whose conditionals are absolutely continuous. So these are systems for which we have uniform expansion and some central direction, a particular case of partially hyperbolic systems. And Pesin and Sinai proved that these systems have invariant measures whose conditionals on the unstable manifolds — which exist because we have uniform expansion in E^u — are absolutely continuous with respect to Lebesgue measure. And then Bonatti and Viana introduced some conditions, essentially on the other direction, on the center-stable direction, under which we can prove that these measures obtained by Pesin and Sinai are SRB measures, in particular physical measures. And together with Bonatti and Viana, we proved the existence of SRB measures in a dual case, where we have uniform contraction and some expansion in the other direction, non-uniform expansion. And these results are from the year 2000; in that work appear both this partially hyperbolic case and the non-uniformly expanding maps. And as I said, it has been improved by Pinheiro for endomorphisms. And for the diffeomorphism case there is a result by myself, Dias, Luzzatto and Pinheiro from 2017, where we improved it in the sense that the condition of non-uniform expansion is weaker than the one here, but still not just positive Lyapunov exponents — it is slightly stronger than that. Maybe at the end of this course I can talk a bit about this non-uniform expansion.
Also in the non-uniformly hyperbolic case, there are these recent results by Climenhaga, Luzzatto and Pesin for surface diffeomorphisms, for systems with non-zero Lyapunov exponents, and a generalization by Ben Ovadia for any dimension. I have put some colors here — not this yellow, I'm removing it. The colors appear on the authors of these works and are essentially related to the method that they use. So the first method, due to Krzyżewski and Szlenk, is essentially: they take Lebesgue measure, they iterate — they consider the push-forwards of Lebesgue measure, they consider the averages — and then they study the densities of those push-forwards and control them. Any weak star accumulation point is an invariant measure, and since they have bounds for the densities of the iterates, they can also control the density of the limit measure in the weak star topology. And so they can prove that the weak star limit measure is absolutely continuous with respect to Lebesgue, and in the uniformly expanding case they can prove that it's ergodic, and that's the way they obtain the SRB measures. And this method is also used by myself with Bonatti and Viana in this result for non-uniformly expanding maps. In some sense also by Tsujii — this paper is very, very complicated, there are a lot of nice ideas, but in some sense it also uses this iteration idea — and it's used in more situations. So this is one of the methods, in this color. The other method uses Markov partitions, which was the method of Sinai, Ruelle and Bowen for the SRB measures of Anosov systems and Axiom A attractors, and is also used by Ben Ovadia, in the line of what Yuri has been telling us in his mini-course. So using these generalized Markov partitions — that's the method used by Ben Ovadia. And there's another method for building SRB measures, which is the one I will be more focused on in this course: the inducing schemes.
And in particular for obtaining more than just the SRB measures, using the Young towers. So these inducing schemes have been used, for instance, by Jakobson to build the absolutely continuous invariant measures for the quadratic maps. I also used inducing schemes for the Viana maps. And we have used the inducing schemes for generalizing my previous result with Bonatti and Viana. And it is also the idea used by Climenhaga, Luzzatto and Pesin. So you see here mixed methods. I will not say that one is better. Of course, Markov partitions and inducing schemes give more than just the existence of the SRB measure, which is not the case of global iteration — it's the weaker method in some sense. But the other ones also have their own virtues, and they can be used to deduce other statistical properties of systems. And the statistical property of systems I'll be focused on here is the decay of correlations. So what is the decay of correlations? You consider two observables on the manifold, and we define the correlation function of these two observables. And this is the notation: Cor_mu of phi and psi composed with f^n is this difference of integrals. This has to do with the correlation in probability theory, for the random variables phi and psi composed with f^n. And if you assume, for instance, the correlation going to zero, and you take for the observables characteristic functions of Borel sets, it's easy to see that we obtain the usual notion of mixing. So going to zero means: this integral gives precisely this first term, and the second integrals give this term here, so going to zero means that this is converging to this. So this is the definition. And we are interested in the rates at which this Cor_mu converges to zero: polynomial, exponential, stretched exponential. These are the cases we are going to consider.
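Written out, the correlation function being discussed is:

```latex
\operatorname{Cor}_\mu\!\big(\varphi,\psi\circ f^n\big)
  \;=\; \Big|\int \varphi\,(\psi\circ f^n)\,d\mu
        \;-\; \int \varphi\,d\mu \int \psi\,d\mu\Big| .
```

Taking φ and ψ to be characteristic functions of Borel sets A and B, the convergence of the correlations to zero becomes μ(A ∩ f^{-n}(B)) → μ(A)μ(B), which is the usual notion of mixing.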
But we cannot have these results, with these rates — polynomial, exponential and so on — for any pair of observables phi and psi. We need some regularity on the observables. And typically here, at least phi will be a Hölder continuous map. The other one depends on the type of systems we'll be considering; it can be only a bounded observable. But in the diffeomorphism case we also need it to be a Hölder continuous map. Well, some motivation for this correlation function. In some situations the measure mu is not only absolutely continuous but equivalent to Lebesgue measure. For instance, in the doubling map that was the case: the measure mu was equivalent to Lebesgue measure — actually it was Lebesgue measure. And in many situations that happens, not only in the specific case of the doubling map. So in many cases the measure is equivalent to Lebesgue measure, and then there is a density of Lebesgue measure with respect to mu. And if I call phi this density, and assume that m, the Lebesgue measure, is normalized, it's very easy to see that the correlation function can be written in this way. So what is this saying? This is essentially saying that the decay of correlations gives information on the speed — so this is the push-forward of Lebesgue measure — at which the push-forward of Lebesgue measure converges, in a strong sense, to the measure mu. So the integrals are converging to this. Of course we have to take care with the observables we are considering, but the idea is that the push-forwards, in some sense, approach the physical measure mu. Okay, so in some cases the correlation says that if we start iterating, considering push-forwards of Lebesgue measure, we are converging to the physical measure of the system, and the correlation tells us the speed at which we have that convergence. So exponential means very fast.
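The rewriting of the correlation function alluded to here can be sketched as follows: if m is normalized and φ = dm/dμ is the density of Lebesgue measure with respect to μ, then for a bounded observable ψ,

```latex
\operatorname{Cor}_\mu\!\big(\varphi,\psi\circ f^n\big)
  \;=\; \Big|\int (\psi\circ f^n)\,dm \;-\; \int \psi\,d\mu\Big|
  \;=\; \Big|\int \psi\, d\big(f^n_* m\big) \;-\; \int \psi\,d\mu\Big| ,
```

since ∫ φ dμ = m(M) = 1. So the decay of correlations measures the speed at which the push-forwards f^n_* m approach μ, tested against the observable ψ.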
And in some cases it means that, in terms of applications, we can obtain good approximations of the physical measure of the system very fast. So this is very interesting in terms of applications. So this is the statistical property we are going to consider in this course, using Young towers. But some other statistical properties can be deduced using Young towers — I wouldn't say for all of the statistical properties I'm listing here, but for many of them it is essentially using the decay of correlations that the other statistical properties are deduced by means of Young towers. The other statistical properties can be the central limit theorem, which was deduced by Young in her two famous papers in the 90s. Also, Gouëzel deduced local limit theorems and Berry-Esseen theorems for certain systems. Melbourne and Nicol deduced almost sure invariance principles using this technology. Also Melbourne and Nicol, and independently Rey-Bellet and Young, deduced large deviation results for certain systems with these Young towers. Also Young, with Demers and Wright, deduced escape rates for certain systems with holes. And myself, very recently, I used these Young towers to deduce entropy formulas for the SRB measures, but for systems with singularities — billiard systems, for instance; the Young towers can be used to deduce the entropy formula. The entropy formula is known for differentiable systems by Ruelle and Pesin, but for systems with discontinuities, for instance, we cannot apply the classical results. Still, there are many examples — billiards, for instance — which are systems with singular sets where it is known that Young towers exist, and so we use that to deduce entropy formulas. So this is a very useful technology.
So let me start now telling you what the structures are — what are the inducing schemes and what are the Young towers — and I'll divide this presentation into two main parts: the endomorphism case and the diffeomorphism case. The endomorphism case is for non-invertible maps; the diffeomorphism case is for invertible maps with stable directions, so with contracting directions. So what are the inducing schemes? Consider a map — in the beginning I'm not going to use any structure on the space M, only that it carries a measure, defined on some sigma-algebra, not necessarily the Borel one, because I'm not assuming M is a metric space. So assume that we have a certain space with a sigma-algebra and a measure there, a reference measure. Of course, in the applications I have in mind, M will be a manifold and little m will be Lebesgue measure. But this is very general. So now take a subset that I call Delta zero with finite measure. We say that a transformation from Delta zero to Delta zero is an induced map for f if the following conditions hold. First, there is a countable partition — and partitions here are always mod zero; everything related to partitions is always up to a full measure subset with respect to the reference measure. So, a countable partition of the domain of this new map — see that we have the little f from M to M, while this new map is from a subset of M into itself. And there is a function R, which essentially assigns to each element of the partition a positive integer. And we say that capital F is an induced map for f if this equality here holds: for every element of the partition, the restriction of the new map is precisely the power R of omega of the initial map. So this is the induced map. Frequently we don't change the notation.
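In symbols, with P the countable partition of Delta zero and R constant on the elements of P, the defining condition for the induced map reads:

```latex
F \colon \Delta_0 \to \Delta_0, \qquad
F|_{\omega} \;=\; f^{R(\omega)}\big|_{\omega}
\quad\text{for every } \omega \in \mathcal{P} ,
```

and one writes F = f^R, keeping in mind that R is a function, not a single positive integer.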
Well, in some sense we do change the notation, but we don't introduce the capital F; we simply denote it by f^R. But please keep in mind that R is a function, not just one positive integer. So this is only notation for the induced map. And we have this very nice result for induced maps. Assume that we have an induced map for a certain transformation, and that we have a measure, absolutely continuous with respect to the reference measure and invariant with respect to the induced map. Consider this — so what is this? We consider the measure restricted to the set of points which return after time j — this is a subset of Delta zero — and then we consider the push-forward by f^j of this restriction. You add them all and you obtain this measure here. Then we have — and these are very general facts, this is a kind of folklore theorem — that nu is an f-invariant measure. I don't say it's finite in general, but we can characterize the finiteness of nu: it's finite if and only if R is integrable with respect to the measure nu zero. And the integral of R — see that R takes only positive integer values — with respect to nu zero is precisely this series. So this is a very easy exercise on measure and integration. And so we have a way of obtaining invariant measures for the initial system, and we can characterize things just in terms of the tail — this is the tail of points which return after time j. We can characterize the finiteness of this new measure for the original system in terms of the tail of points that return. And also, if the measure nu zero is ergodic, then the new measure for the original system is ergodic.
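The measure and the finiteness criterion in this proposition can be written explicitly: starting from the F-invariant measure ν₀ on Δ₀,

```latex
\nu \;=\; \sum_{j\ge 0} f^j_*\Big(\nu_0\big|_{\{R>j\}}\Big),
\qquad
\nu(M) \;=\; \sum_{j\ge 0} \nu_0\big(\{R>j\}\big) \;=\; \int R\,d\nu_0 ,
```

so ν is f-invariant, and it is finite exactly when R is ν₀-integrable.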
And if the push-forward of Lebesgue measure — that's what this means — is absolutely continuous with respect to Lebesgue, which is a way of saying that the map f is non-singular, that it doesn't take sets with positive measure to sets with zero measure, then the measure nu is absolutely continuous with respect to the reference measure. See that the assumption that nu zero is absolutely continuous with respect to m is used only here in this last item; maybe I should remove it from there and put it here — that's the only point where I use the absolute continuity. Okay, so if nu zero is absolutely continuous with respect to m and the transformation is non-singular, then the new measure for the original system is absolutely continuous with respect to the reference measure. So morally this gives us a way of building absolutely continuous ergodic measures for the original system out of an induced map. So we have an induced map, and the idea is to induce: we have the original system, which may be very complicated, but we induce in some region, and that induced map has some good properties — for instance some hyperbolicity in some sense, some expansion. And for that map we can build good invariant measures, and then we can deduce the existence of good invariant measures for the original system. There's a proof of this kind of folklore result in my book; you can see it there. So this is a good approach, but we have to have good measures for the induced map — this absolutely continuous invariant measure for the induced map. If the induced map is very wild, there's no hope of building that, so we have to have some reasonable induced maps. That's what I'm going to introduce next. So let's look now only at the region where we are inducing and only at the induced map; for these first considerations I don't even need it to be an induced map.
So assume we have a map from Delta zero into itself, where we have a reference measure, and assume that we have a partition. We can consider the preimages of the partition, and we have these elements. Assuming that the transformation is non-singular with respect to the reference measure, then these are mod zero partitions as well — and this is not M, there's a mistake here, this should be Delta zero. And so we can consider these wedges — these are the dynamical partitions associated to P, defined in this way — and we can consider this infinite wedge, which is described in this way. So we have this family of partitions, okay? Starting with a partition, if the transformation is sufficiently reasonable with respect to the reference measure, we have these partitions. Okay. I mentioned a reasonable induced map; reasonable here is what I call a Gibbs-Markov map. So we say that it is a Gibbs-Markov map if there is a countable partition of the domain of the transformation such that the following hold. The first condition says that every domain in that partition is sent to the whole domain by F. In some sense this is related to expansion: it takes domains which may be very small — the partition can be countable, so we can have arbitrarily small domains — and they are taken by the transformation to the whole domain Delta zero. This is what I call the Markov property; Markov in general means the image of any domain is a union of domains, and here we have the full-branch property, which is a particular case of Markov. The second: the transformation is non-singular and has a Jacobian. Having a Jacobian means that if you consider a set A contained in an element of the partition, you can evaluate the measure of the image simply by integrating this function that we call the Jacobian. So this function J is what I will be calling the Jacobian.
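The dynamical partitions mentioned here are, in the usual notation:

```latex
\mathcal{P}_n \;=\; \bigvee_{j=0}^{n-1} F^{-j}\mathcal{P}
 \;=\; \Big\{\, \omega_0 \cap F^{-1}\omega_1 \cap \cdots \cap F^{-(n-1)}\omega_{n-1}
      \;:\; \omega_0,\dots,\omega_{n-1}\in\mathcal{P} \,\Big\},
\qquad
\mathcal{P}_\infty \;=\; \bigvee_{j=0}^{\infty} F^{-j}\mathcal{P} .
```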
And this in particular implies that the transformation is non-singular with respect to m — having a Jacobian is stronger than being non-singular. And here is a somewhat technical property, related to this sequence of dynamical partitions. I'm using terminology of Rokhlin — it goes back to the fifties, and Rokhlin referred to some measure spaces as being separable if there is a sequence of partitions such that the sequence generates the sigma-algebra — that's the first property here — and, second, the infinite wedge of this sequence of partitions is the partition into single points. So in some sense it is separating the points and generating the sigma-algebra. This is what he calls a separable measurable space. For him it's not related to dynamics — it's for some sequence of partitions, not this specific one — but here we have this specific sequence, which is useful for what comes next. And I also call this separable. Separation essentially has more to do with the second property here — oh, and he calls such a sequence a basis; a basis is something generating and separating. And with the second point of this G3 property we can introduce the separation time: for almost every pair of points x, y in Delta zero — for a full subset of points with respect to the measure m — we have this separation time defined. It's the minimum number of iterates needed for the two points to belong to distinct elements of the partition. Since the second property holds — when we iterate, the dynamical partitions in the end give the partition into single points — almost any two points will be separated in the future. And there is this last property, the Gibbs property, which says that the Jacobian is regular.
This is essentially — you can think of beta to the separation time as a distance: if the separation time is very big, in some sense we are saying that the points are close, and so beta to the separation time is very small. I'm not formally introducing this distance — I don't need it — but we could think in these terms. And if you think of beta to the separation time as being a distance, I'm simply saying that the log of the Jacobian is a Lipschitz map, okay? That's what it says. So this Gibbs property is telling us that the Jacobian is somewhat regular: the map is non-singular, it has a Jacobian, and the Jacobian has some regularity in these terms, okay? So these four properties characterize Gibbs-Markov maps. And you may say, well, this is very technical — in particular this separability condition, in particular the first point, that the sequence of partitions generates the sigma-algebra. But there is a situation where we can easily prove that these technical conditions are satisfied, and this is for some piecewise expanding maps. This is the kind of Gibbs-Markov map that, for instance, I constructed in some papers with Stefano Luzzatto and Pinheiro, and also with Carla Dias. So assume now that Delta zero is a manifold, possibly with boundary — a disk, for instance — and m is Lebesgue measure on the Borel sets, and assume that we have a partition which is somewhat regular, so the closures have smooth boundary. And assume that we have a map such that the restriction to each element of that partition has an extension which is a C1 diffeomorphism onto its image — an extension to the boundary of the element, which is a smooth submanifold, a C1 extension. So, first of all, F is non-singular, which is reasonable since I'm assuming it's a C1 diffeomorphism on these elements, and the Jacobian is given by the determinant of the derivative of the transformation.
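Put in formulas, the separation time and the Gibbs property just described are (my rendering of the standard conditions):

```latex
% Separation time of two points x, y in \Delta_0:
s(x,y) \;=\; \min\big\{\, n\ge 0 \;:\; F^n(x) \text{ and } F^n(y)
   \text{ lie in distinct elements of } \mathcal{P} \,\big\} ;
% Gibbs property: there are C>0 and \beta\in(0,1) such that,
% for x, y in the same element \omega\in\mathcal{P},
\Big|\log\frac{JF(x)}{JF(y)}\Big| \;\le\; C\,\beta^{\,s(F(x),F(y))} .
```

Thinking of β^{s(x,y)} as a distance, this says exactly that log JF is Lipschitz on each ω. (Some texts write β^{s(x,y)} on the right; since s(F(x),F(y)) = s(x,y) − 1 for x, y in the same ω, the two forms differ only by the constant.)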
So this, I hope, is expected: the Jacobian in this case, for smooth maps, is precisely the one from the change of variables. The second property is that the map is expanding in this sense: there is some number sigma smaller than one such that, on the elements of the partition on which we have smoothness of the transformation, the norm of the inverse of the derivative is bounded by sigma — the inverse, because I have the minus one there. This is a way of saying that it's expanding, expanding in all directions. In higher dimensions we need to impose this; it is not the same as saying that the norm of the derivative is bigger than some number bigger than one — in higher dimensions that would only say that at least one direction is expanding, not that all directions are expanding. So we need this stronger assumption. Then, if this holds, the separability property holds — so the most technical property, with those two sub-items, is satisfied. So this is a very simple criterion, and in many situations we have the smoothness on the domains, with smooth extension to the boundary of the domains, and this expansion property of the transformation. And for the third property, the regularity of the Jacobian is implied precisely by some Hölder continuity of the logarithm of the Jacobian. See that we have some uniform constant, not depending on the elements of the partition; but this is a property that only has to hold on the elements of the partition — it doesn't have to hold on the whole space Delta zero, okay? So if this holds, then the Gibbs property holds. So for these piecewise smooth maps, which in particular I'm assuming are expanding, we have a way of checking the conditions of a Gibbs-Markov map. The first one, Markov, is essentially a geometric property of the domains being sent onto the whole domain Delta zero.
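A sketch of this criterion in formulas (σ, C, η are the constants described in the lecture; the notation is mine):

```latex
% Expansion in all directions: there is \sigma \in (0,1) with
\big\|\big(DF|_{\omega}(x)\big)^{-1}\big\| \;\le\; \sigma
   \qquad \text{for every } \omega\in\mathcal{P},\; x\in\omega ;
% Regularity of the Jacobian: there are C>0, \eta\in(0,1] with
\Big|\log\big|\det DF(x)\big| - \log\big|\det DF(y)\big|\Big|
   \;\le\; C\, d(x,y)^{\eta}
   \qquad \text{for } x,y\in\omega .
```

The first condition yields the separability properties, and the Hölder control of log|det DF| yields the Gibbs property.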
So that was just a comment on a way of proving the existence of Gibbs-Markov maps. Okay. So I think I passed two pages now. The first main result I'm going to present is a result on the existence of absolutely continuous invariant probability measures. And I start by introducing a space where the density of the absolutely continuous invariant measure for the Gibbs-Markov map lives. And the space is this one. In some sense — as I told you, if you think of beta to the separation time as a distance — I'm introducing here a Hölder or Lipschitz space of observables. It is the set of observables such that this sup is finite; if you think of beta to the separation time as a distance, this is the definition of Lipschitz. And this phi beta is not necessarily a norm, but it's a semi-norm; if you put it together with the L infinity norm, then we have a norm on this space. And I will also consider the space with a plus, which is the functions which are bounded away from zero — non-negative functions uniformly bounded away from zero. And this space is good in the sense that it is relatively compact in L1; L1 is the space of integrable functions with respect to m, the reference measure on Delta zero — in many cases it will be Lebesgue measure. Let me say that this comes from a paper of mine from 99, already left in the references; all these references are at the end of this presentation. And Young says that it's relatively compact in L infinity. I tried to prove that; I did not succeed. I could prove it in L1 — I think there's some problem in that sentence of Young's. But it works with L1, and so it's okay. And the proof — well, I put the reference if you want to look at the proof, but I also leave a hint if you want to prove this as an exercise; especially for students, I think it's very didactical to try to prove some of these lemmas.
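The space in question can be sketched, in the usual notation, as:

```latex
|\varphi|_\beta
  \;=\; \operatorname*{ess\,sup}_{x\neq y}
        \frac{|\varphi(x)-\varphi(y)|}{\beta^{\,s(x,y)}},
\qquad
\mathcal{F}_\beta \;=\; \big\{\varphi\in L^\infty(m) \;:\; |\varphi|_\beta<\infty\big\},
```

with the norm given by the L-infinity norm plus the semi-norm |·|_β, and F_β with a plus denoting the functions in F_β bounded below away from zero. The lemma says that bounded subsets of F_β are relatively compact in L¹(m).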
And the proof — my proof in the book is: I looked at the Arzelà-Ascoli theorem on Wikipedia and I mimicked the proof. So that's my hint: go there, look at the proof and try to mimic it. Okay, being relatively compact means that any bounded sequence has accumulation points in the norm of L1, okay? So, just one more concept before this first main theorem: the exactness of a measure. We say that the probability measure is exact if any set in the intersection of all preimages of the sigma-algebra with positive measure necessarily has measure one. It's very easy to see that exactness implies ergodicity — it is a trivial exercise. It's not so easy to see that it implies mixing, but it's also true; you can see that, for instance, in the book of Yuri and Pollicott — not this one, not our Yuri, another Yuri. So exactness is stronger than mixing, and in particular stronger than ergodicity. And so, now the theorem: assume that we have a Gibbs-Markov map; then it has an absolutely continuous invariant probability measure — absolutely continuous always with respect to the reference measure m. And it is also unique: there is a unique absolutely continuous invariant measure. Moreover, this measure has good mixing properties, in the sense that it is exact. And the density belongs to a good space — in some sense the density is Hölder with respect to that distance — and the density is bounded away from zero and infinity, so it's bounded, in particular, okay? So these Gibbs-Markov maps have good absolutely continuous invariant probability measures: good in the sense of mixing — it's very strong mixing, exactness — and good also in the sense of the density being very regular. The proof is in my book, but let me give a brief idea. The proof uses that idea of iterating. Of course, I can't do this proof using inducing schemes, because this is the base of the inducing scheme.
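The definition of exactness being used, with 𝒜 the underlying sigma-algebra, is:

```latex
A \in \bigcap_{n\ge 0} F^{-n}\mathcal{A}
\quad\text{and}\quad
\mu(A) > 0
\;\;\Longrightarrow\;\;
\mu(A) = 1 .
```

Exactness implies mixing, which in turn implies ergodicity.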
So I have to use a different method, and it is the simple method of considering the reference measure, iterating it, considering this sequence of measures, and seeing that the densities of these measures mu n are bounded in this space. And so, by the lemma I've just presented — which says that this space is relatively compact in L1 — the sequence has accumulation points in L1. And an accumulation point is necessarily the density of an invariant measure, hence the density of an absolutely continuous invariant measure. Let me say that the first item of this separability property — the somewhat enigmatic one — I need it, and it's related to the exactness of the measure. Again, in the paper of Young of 99 she doesn't impose this first property; she only imposes that the infinite wedge is the partition into single points. But with only that property I did not succeed in proving that the measure is exact. So I impose the extra property — and, well, for a while I wondered whether the second property did not imply the first one, but since I've seen the work of Rokhlin, where he imposes the two to have a separable space, I realized that one does not imply the other, and there are examples where one does not imply the other. So I think one needs the two properties to deduce the exactness of the measure. The property is that the sequence of dynamical partitions generates the sigma-algebra. Okay, so we have this very nice result on the existence of invariant measures with good properties, both mixing and absolute continuity, for Gibbs-Markov maps. And together with the proposition that says that if we have an induced map which is Gibbs-Markov, then we deduce the existence of good measures for the original system — that's what comes next.
So assume that we have an induced Gibbs-Markov map for a certain original system — I'm not writing it here, but it's implicit. Since it's Gibbs-Markov, we know that there is a unique absolutely continuous invariant probability measure nu zero. And assume that the original map is reasonable with respect to the reference measure, and consider that same measure nu that we considered before. So I'm only translating, and adding some more information to, the previous proposition we had. So now we know that this measure is invariant and ergodic. Why ergodic? Well, because now we are in the specific case where I'm assuming Gibbs-Markov, so nu zero is in particular ergodic, and by the previous proposition ergodicity is inherited by this new measure. And it's finite if and only if the recurrence times are integrable — this is the same result as before. And the density of this new measure is bounded from below by some positive constant on Delta zero, because it is very easy to see that this measure nu is greater than or equal to nu zero on Delta zero — so let me go back: Delta zero appears in the sum when you consider j equal to zero. So the density has to be greater than or equal to the density of nu zero on Delta zero, and that density is bounded away from zero; that's why we have this. In general, when you spread the measure to the whole space, it's not necessarily true that the density is bounded away from zero. Okay. And if nu is finite — finiteness has to do with the integrability of the recurrence times — if nu of M is finite, then you normalize: that's what I'm doing, I'm dividing, if it's finite.
If you normalize the measure nu — the measure which comes, for the original map f, from the measure nu_0 for f^R — you obtain an ergodic f-invariant probability measure which is absolutely continuous with respect to the reference measure on the whole space. And notice: we cannot say in general that this is the unique ergodic absolutely continuous invariant probability measure. Because if we are inducing on a certain region that does not interact with some other region, there is no reason for the measure coming from this inducing scheme to be unique. But if I add that it gives positive weight to the set where I am inducing, then it is unique: it is the unique such measure giving positive weight to that set. That's an important and interesting piece of information, okay? This is also proved in my book. And we can apply this to the situation where the reference measure is Lebesgue. So assume now that M is a Riemannian manifold, m is the Lebesgue measure on it, and f is a transformation from M into itself which is non-singular with respect to Lebesgue measure. For instance, this holds for local diffeomorphisms, or even for maps with critical points, provided the critical points are not too degenerate. This is a very general condition for smooth maps. So assume this very specific setting: if f has an induced Gibbs-Markov map with integrable recurrence times, then f has a unique SRB measure giving positive weight to the region where we are inducing, okay? So in this specific situation — smooth maps on Riemannian manifolds, for instance — inducing schemes give us a way of building SRB measures for the original system. So this is a construction using inducing schemes only; I haven't used any Young tower so far. 
So for building the SRB measure we don't need to introduce all the technology; this is just part of it. But for studying statistical properties — in particular the decay of correlations for this SRB measure — we need to introduce the more sophisticated technology: we need to elaborate a bit more on these inducing schemes. So assume now that we have a transformation with an inducing scheme — an induced Gibbs-Markov map with integrable recurrence times — and suppose that mu is the unique ergodic f-invariant probability measure which is absolutely continuous with respect to the reference measure and gives positive weight to the region where we induce. Our next goal is to obtain estimates for the decay of correlations, for this expression here. And the general idea — I want to deduce the decay of correlations using this induced Gibbs-Markov map, where mu is the measure that comes from the inducing scheme — the general idea, very rough but not far from what happens in practice, is that the decay of correlations is governed by the decay of the tail: the measure of the set of points that return only after time n. Since we have a partition for the induced map, the measure of this set of points decays to zero, and the idea is that the rate at which it decays to zero dictates the decay of correlations. Unfortunately this statement cannot be proved in this generality; it can only be proved in specific cases, and the specific cases are polynomial, stretched exponential and exponential — those are the cases we are going to see here — and for observables with some regularity. In our situation the regularity is phi Hölder continuous and psi bounded, so psi belongs to L-infinity of m, essentially bounded. Note that the roles of phi and psi are not symmetric. 
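To fix notation, here is a sketch of the correlation expression and of the rough tail-to-decay dictionary being alluded to (as in Young's 1999 paper; the exact exponents and constants here are my reading of the standard statements, not read off the slides):

```latex
\operatorname{Cor}_\mu(\varphi,\psi\circ f^{n})
 \;=\; \Bigl|\int \varphi\,(\psi\circ f^{n})\,d\mu
        \;-\; \int \varphi\,d\mu \int \psi\,d\mu \Bigr|,
\qquad \varphi\in\mathcal{H}_\eta,\ \ \psi\in L^{\infty}(m).
% Rough dictionary (Young 1999):
%   m\{R>n\} = O(n^{-\alpha}),\ \alpha>1    =>  \operatorname{Cor} = O(n^{-\alpha+1})
%   m\{R>n\} = O(e^{-c n^{\gamma}}),\ \gamma\in(0,1)  =>  stretched exponential decay
%   m\{R>n\} = O(e^{-c n})                  =>  exponential decay
```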
In general — well, in this setting of endomorphisms — we need the observable phi to be more regular than the observable psi; we don't need Hölder continuity for psi. But when we go to diffeomorphisms, for a specific reason, we will need both to be Hölder continuous. So what is Hölder continuity? The usual notion. Assume we have a metric on M — typically M will be a manifold, but so far I don't need any Riemannian structure; a distance is enough. So this is the usual definition of Hölder continuity, with exponent eta, and I will use H_eta to denote the space of Hölder continuous functions. And what is the idea of the proofs? How will we obtain the decay of correlations for systems with measures coming from inducing schemes? Through an extension of the original system. The extension will be a pair (T, nu), where T is the dynamics and nu is a measure, while (f, mu) is the system we have, with mu coming from the induced map. So what is an extension? It's a new dynamical system: a map T from a space Delta into itself, together with a map pi from Delta to M — think of it as a projection — which is actually a semi-conjugacy, meaning f composed with pi equals pi composed with T. If you put T above and f below, with pi as the vertical arrows, the diagram commutes. And this holds in a measure-theoretical sense too: the invariant measure for the system T is projected — pushed forward — to the measure for the system f. That's what I mean by an extension. It does not need to be one-to-one, or even onto. Of course, if pi is bijective then we have what we call a conjugacy between the systems, but I don't impose that. Pi will be measurable; that is the only property I really need. 
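The commuting diagram and the correlation identity of the exercise that follows, sketched in LaTeX (standard notation, assumed):

```latex
\begin{array}{ccc}
\Delta & \xrightarrow{\;T\;} & \Delta\\[2pt]
\pi \big\downarrow & & \big\downarrow \pi\\[2pt]
M & \xrightarrow{\;f\;} & M
\end{array}
\qquad
f\circ\pi \;=\; \pi\circ T, \qquad \pi_{*}\nu \;=\; \mu,
% and then, for all observables for which both sides make sense,
% \operatorname{Cor}_\mu(\varphi, \psi\circ f^{n})
%   = \operatorname{Cor}_\nu(\varphi\circ\pi,\ (\psi\circ\pi)\circ T^{n}).
```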
And then the exercise is that if these properties hold — the semi-conjugacy, and the fact that one measure projects to the other — then the correlation term of the original system, which is what we need to study, equals the correlation term of the new system. Of course, you have to adapt the observables: the observables for the system above (think of T above and f below) are the compositions of the original observables with the projection, okay? And note that the measure mu here is the projection of nu — that's the relation between the two measures. This holds for all phi and psi for which these expressions make sense, okay? It's a simple, straightforward exercise. So, given such an extension, we can transport our problem to the new system. In addition to the space Delta and the maps T and pi, we also need a suitable space of observables, because, as I said, I'm going to consider phi in the space of Hölder continuous functions. Function spaces appear naturally in these correlation problems. So I need a space of observables for the new system, and since I'm going to consider correlations of phi composed with pi, I need a space such that this composition belongs to it, okay? So we need to build three things: the new dynamics T, the projection pi, and the new space of observables. Having these, we transport our problem to the new dynamics with the new space of observables. And the new dynamics is finally the Young tower, the tower extension. How is it defined? Consider an induced Gibbs-Markov map for an original system. Associated to it we have a partition — so far I have only used P and didn't specify the elements of the partition, but it is a countable partition by definition — and I'm going to label its elements using the index zero. 
And I consider now this new set, the tower. What is it? You have Delta_0, the set where f^R is defined, and you take copies of subsets of Delta_0 and stack them above it. I didn't say it, but this l here ranges over the natural numbers — that's implicit; it ranges from zero to the recurrence time minus one. So I have this picture here. Unfortunately I cannot highlight parts of the picture, but we have R equal to one, R equal to two, R equal to three. Above the points with R equal to one there is nothing more, because l goes from zero to R minus one, and R minus one is zero: there are no more levels above those points. But if R is equal to two, there is one more level, and for R equal to three there are more levels. In practice the elements are not as organized as in this picture; it is just an illustration of what the tower looks like. I also introduce the tower map. What does it do? Essentially it goes up: below the top level — the roof — it is a translation; we simply add one unit to the second coordinate. And when we reach the roof, it comes back to the ground via the induced Gibbs-Markov map f^R. There is one more coordinate to say that it goes to the ground level, but essentially the return to the ground is f^R. And we define the projection map pi in this way: if you take a point in the l-th level, you apply the l-th iterate of f to that point, and this lives in M, okay? So we have now defined the new dynamical system — this Young tower — and the projection. 
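As a concrete toy illustration (my own, not from the lecture): take the doubling map f(x) = 2x mod 1 and induce by first return to Delta_0 = [0, 1/2). The sketch below defines the tower map T and the projection pi exactly as just described, and checks the semi-conjugacy f ∘ pi = pi ∘ T pointwise. All names (`f`, `return_time`, `T`, `pi`) are hypothetical, chosen to match the lecture's notation.

```python
def f(x):
    """The doubling map on [0, 1)."""
    return (2.0 * x) % 1.0

def return_time(x):
    """First-return time of x in Delta_0 = [0, 1/2) under f."""
    n, y = 1, f(x)
    while y >= 0.5:
        n, y = n + 1, f(y)
    return n

def T(p):
    """Tower map: go up one level, or return to the base via f^R."""
    x, l = p
    R = return_time(x)
    if l < R - 1:
        return (x, l + 1)        # climb the tower
    y = x
    for _ in range(R):           # back to the ground level with f^R
        y = f(y)
    return (y, 0)

def pi(p):
    """Projection to the manifold: (x, l) -> f^l(x)."""
    x, l = p
    for _ in range(l):
        x = f(x)
    return x

# Semi-conjugacy check, f(pi(p)) == pi(T(p)), on a few tower points.
for x in (0.1, 0.3, 0.45):
    for l in range(return_time(x)):
        p = (x, l)
        assert f(pi(p)) == pi(T(p))
```

The equality is exact here because both sides apply the same floating-point operations in the same order.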
Well, it's not difficult to see that this pi is measurable and that we have the semi-conjugacy property, okay? This is a simple exercise that follows straightforwardly from the definitions. Let me just make a remark, which I will use later. I associated this tower construction to an induced map, but the construction can be associated to any Gibbs-Markov map: assume we have a Gibbs-Markov map F and a function R which is constant on the elements of the partition, and we can make the same construction associated to this R and this F. Of course here, when we go to the ground, we go with the map F. This is to say that the map F can be independent of R — we don't need F to be a power f^R of an original dynamics, okay? This is just a remark I will use at one very specific point. Okay, some notation. Now we can talk about the levels of the tower. Note that we have some ambiguity with the zero level: by the notation, the zero level is Delta_0 with zero second coordinate, while Delta_0 itself is the domain of the induced map; the difference is only that zero coordinate, so they are naturally identified — that's why I don't use different notation. The zero level is naturally identified with the domain of the induced map. Under this identification, the return map to the base of the tower — the ground level, the zero level — is essentially our induced map. Or, thinking differently, the induced map is also an induced map for the tower map, and it is clearly Gibbs-Markov, because we start with that assumption. And note that the l-th level is a copy of a subset of Delta_0 — actually of the set of points whose return time is bigger than l, okay? So we are taking copies not of the full Delta_0 but of subsets of Delta_0, and putting them at higher levels. 
And so we can reproduce at all levels the sigma-algebra — restricted to those subsets — and also the reference measure, and also the partition. We have a partition associated to the induced Gibbs-Markov map at the base of the tower, and we can reproduce the elements of this partition at the higher levels. Note that the set of points with return time bigger than l is a union of elements of the partition P, so we can take the restrictions of the partition to the higher levels. Collecting all these partitions, we obtain a partition of the whole tower, which I will call Q. The elements of Q will be denoted Delta_{l,i}, where l refers to the l-th level, okay? Also, we have a separation time for the Gibbs-Markov map f^R, and this enables us to introduce a separation time on the tower. What is it? Take two points in the tower and assume they are in the same level. Then we can choose the unique points at the base of the tower that are carried to them by the tower dynamics, and define the separation time exactly as the separation time we have for the Gibbs-Markov map, okay? So for points at a higher level you simply bring them down to the ground level and take the separation time associated with the original Gibbs-Markov map. Essentially, introducing it this way, we are saying that points do not separate while going up the tower; they only separate when they come down, okay? That's the idea. And I said: take two points in the same level. If they are not in the same level, you define the separation time to be zero. What matters here is having big separation times — big separation times mean points being close. 
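A short LaTeX sketch of the separation time just described (standard convention; s_0 denotes the separation time of the induced Gibbs-Markov map):

```latex
% For p = (x,\ell), q = (y,\ell) in the same level, with the unique base
% points x_0, y_0 \in \Delta_0 such that T^{\ell}(x_0,0)=p and T^{\ell}(y_0,0)=q:
s(p,q) \;=\; s_0(x_0, y_0),
\qquad
s(p,q) \;=\; 0 \ \text{ if } p \text{ and } q \text{ lie in different levels.}
% Points do not separate going up the tower, only after returning to the base.
```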
If they are in different levels, we set the separation time to zero — a very small separation time; in some sense I'm saying they are far from each other. Okay, so this is all the notation I need. And for these Young towers — sorry, to finish — we also have good invariant measures. Think of the way I've introduced the Young tower: the dynamics is completely uninteresting while going up, and coming down is essentially the induced Gibbs-Markov map. So I introduce here some spaces of functions which essentially mimic what we had before for the induced map. Now the observables are defined on the whole tower — I think I highlighted more than I wanted — so the observables are defined on the whole tower, and this is the space we considered before, built with the separation time. We introduce this constant here; in some sense I'm not using a Hölder norm, because I'm not saying this is a distance, but morally you can think of beta to the power s(x, y) as a distance. We also set the space of functions bounded away from zero by some uniform constant. And this constant will be very important when we get to the decay of correlations: it is the maximum of three constants — the Hölder, or Lipschitz, constant in this separation-time sense; the L-infinity norm of the function; and a bound for the function from below or, if you prefer, a bound from above for one over phi. Okay, so it is the max of these three constants. And for the tower we also have a unique invariant probability measure. Given the way we have defined everything, I think that is not a big surprise. Note that I'm using the same letter m for the reference measure on the space M, for its restriction to Delta_0, and also for the measure on the tower. 
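One common way to write this observable space and its constant in LaTeX (a sketch; the precise normalization on the slides may differ):

```latex
% Observables on the tower, with 0 < \beta < 1:
\mathcal{F}_\beta \;=\; \bigl\{\, \varphi:\Delta\to\mathbb{R} \;:\;
  \exists\, C>0 \ \text{with}\
  |\varphi(p)-\varphi(q)| \,\le\, C\,\beta^{\,s(p,q)}
  \ \ \forall\, p,q\in\Delta \,\bigr\},
% and, for \varphi also bounded away from zero,
C_\varphi \;=\; \max\Bigl\{\, \text{best such } C,\ \ \|\varphi\|_{\infty},\ \
  \bigl\|1/\varphi\bigr\|_{\infty} \,\Bigr\}.
```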
I didn't say it — well, it's written — but we can also lift the reference measure: since the higher levels of the tower are copies of subsets of the base level, the measure we have there passes naturally to the higher levels, and I denote it by the same letter. There is no confusion — it is always clear where we are using m, and in some sense it is the same measure. Okay, so T has a unique invariant probability measure which is absolutely continuous with respect to the reference measure, and its density is good in the sense that it lives in this space, as before. The measure is ergodic. In the Gibbs-Markov case the measure was exact; this is the only point where the statement is not exactly the same. It is not reasonable to expect these dynamics to be exact, because the iterations going up the tower are uninteresting. In particular, I put here this exercise: if the greatest common divisor of the recurrence times is bigger than one, then the measure we build is not mixing. This is stronger than saying it is not exact, because exact implies mixing. It's a simple exercise: a simple inspection of the transformation shows that if there is a common divisor of the recurrence times bigger than one, the system is not mixing. So in general we cannot expect exactness. But we have a condition: the greatest common divisor being one is a necessary condition — because if it is greater than one the measure is not mixing — and actually it is also a sufficient condition for the measure to be exact. And we have this further good property: the projection of this measure coincides with the measure we obtained from the inducing scheme — the one which is unique among measures giving positive weight to the region where we are inducing. 
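A sketch of the gcd exercise just mentioned (my phrasing of the standard argument):

```latex
% If d := \gcd\{R(x) : x \in \Delta_0\} > 1, set
A_k \;=\; \{(x,\ell)\in\Delta \;:\; \ell \equiv k \ (\mathrm{mod}\ d)\},
\qquad k = 0,\dots,d-1 .
% Every step of T increases \ell by 1 mod d: going up is \ell \mapsto \ell+1,
% and returning to the base replaces \ell = R-1 by 0, with R \equiv 0 mod d.
% Hence T(A_k) \subset A_{k+1 \bmod d}, so \nu(A_0 \cap T^{-n}A_0) = 0
% whenever d \nmid n; it cannot converge to \nu(A_0)^2 > 0, so (T,\nu)
% is not mixing.
```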
So there is a bridge here between a somewhat natural measure on the tower and the measure we obtained using only the inducing scheme. The idea is that for building the absolutely continuous measure we don't need the tower technology; using the tower technology we obtain another measure, on the tower, and I'm saying that when you project from the tower you obtain the measure we had before on the ambient manifold. Okay? This is proved in my book. The proof of the exactness is somewhat long; it uses the exactness of the Gibbs-Markov map but also the fact that the greatest common divisor equals one, and it is proved in detail in my book. Note that the existence and uniqueness follow from that corollary on obtaining measures from inducing schemes, because the uniqueness there is uniqueness among measures giving positive weight to the inducing region — and the tower also has an inducing scheme, the return to the base, whose induced map is essentially the same one we had before. So on the tower there is a unique such measure giving positive weight to the base. But then the dynamics is simply sliding up, translating, so uniqueness on the base implies uniqueness on the other levels. The uniqueness is essentially a consequence of that previous general result for measures coming from induced maps. That's the observation here. Okay. And so I will stop here. Today I've introduced the induced maps and the tower maps, and I've deduced the existence of, let me say, a natural measure for the tower map. Next day I will give you an idea of how to prove the decay of correlations for the original system: having this in mind, and building the good space of observables for which we have this property, it will be enough to deduce the decay of correlations for the tower system. That's what I will do next time. Okay, thank you very much. I'm open to questions. Okay. Thank you, Zé. 
So I have a quick question. When you say Young tower, it is exactly this tower that we just introduced? Yes, this is the Young tower. Associated to the Gibbs-Markov map. Yes, yes — that's the way she introduced it. Well, towers had been introduced before — I didn't mention it, but towers were introduced earlier by Kakutani, so the tower itself is a general concept. The Young tower refers to the use she makes of these towers: the space F that I still haven't introduced — it's introduced here — which will be useful for the decay of correlations, and the decay of correlations itself. That's what I mean by Young tower. The tower as an object was introduced long before the nineties, okay? The use of it, these functional spaces, this measure, its exactness, and most specifically the decay of correlations — that is Young's use of the tower, okay? Okay. And when you want to prove the decay of correlations, then you have to use other tools to construct this Gibbs-Markov map, right? Well, that's the application. In these first lectures, I will assume I have a Gibbs-Markov induced map. Maybe in the next class, or the one after, I will give a simple example of an application, for an intermittent map. Constructing the induced maps is one thing — so far I didn't go into that; I will, in some examples at the end. If I have time, I can give an idea of how to construct them for non-uniformly expanding maps, for instance. But one thing is building the Gibbs-Markov induced map; another thing — and that's what I'm doing first — is: assuming we have one, what can we do with it? Today we have seen that we can construct SRB measures, for instance. And next time we are going to see that, using this tower, we can deduce the decay of correlations for those SRB measures. 
And after that, I will give a simple example, for an intermittent map, where we can construct the inducing scheme, okay? And the goal — so I can give you the syllabus of the course — is then to go to diffeomorphisms, which in some sense can be reduced to this setting: there is also a notion of inducing scheme and Young tower for them, and in some sense we can reduce to these towers here and use the results we are going to prove first. And at the end, if I have time, I can give you examples of situations where we can construct the inducing schemes, okay? So those are the contents of this course. But it goes slower than expected; for instance, today I was expecting to talk for only an hour and a half, but I think these things have to go slowly, otherwise they are not easy to digest. Okay, my goal with the book was to make all this more readable than the original papers — in particular my own original papers where I constructed the Young towers. The idea was to make this technology of Young, and the way we apply it, readable. So I can look at it and... Excuse me, yeah? I might have missed it when you said it, but this pi is not invertible, is it? This pi is not invertible at all — it's not one-to-one, and not onto in general. Of course in some situations it can be, but in most situations it is not one-to-one. It's a semi-conjugacy, okay? But it has this nice picture: it semi-conjugates the two dynamics. Is it finite-to-one? It's not one-to-one — is it finite-to-one? No, no, there's no reason for that either. It is countable-to-one. And that's good: I mentioned an application to the entropy formula, and the fact that it is countable-to-one is important at that point. Well, it depends on the example. In many examples and constructions you cannot ensure what happens; of course for some points it is finite-to-one, but in general we cannot ensure it. 
In the construction there is no way of controlling it, unless it's a very specific example. Yeah, in general it's not finite-to-one. José, can you please go back to slide 23? 23, yeah. So here you were discussing that your control on the decay of correlations is for phi Hölder continuous and psi in L-infinity. Yeah. So I thought it would be natural, based on what you said before, to consider phi to be the Radon-Nikodym derivative dmu/dm. Yes? dmu/dm — that's what I said in that remark, no? Yes, but... I think it was dm/dmu. Okay, yes, then we are both wrong. But the control we have on this derivative is with respect to, let's say, a different Hölder norm, the one which uses beta... No, no — here, these considerations are all for the original system. You can think of a smooth map on a Riemannian manifold, so the Hölder notion here is the usual one; d is the distance on the space, okay? Okay, but with your construction, what type of control do you expect to have on the derivative between mu and m? Well, the problem is that we only have a good implication in one direction. To study the decay of correlations, I want to obtain rates of decay for the Hölder observable phi here, okay? And I have this implication, okay? What implication three says is that when you consider the composition with pi, it lives in a good space — this F will be the F_Delta that I've introduced for the tower — and the density of the measure for the tower lives in this good space. So in some sense you could expect the density to be Hölder. The problem is that there is no way back: this is not an equivalence. Using that space, which is morally a space of Hölder functions, you cannot say anything about the regularity of the density for the original system; it can be much wilder than Hölder. The problem is that there is no implication in the following sense. 
If you take a function in this space F — let me not highlight this — and assume it is a composition with pi, there is no reason to expect that it comes from a Hölder function on the other side. So in principle we cannot guarantee that we can choose phi to be the derivative in the previous slide. No, no — of course, I say in general; there are works — I think in one of the works I mentioned, of Demers with Young and, I believe, Wright — where they use this to deduce some regularity properties of the densities. It can be used. What I'm saying is that in general you cannot go back: I cannot assert in general that the density is Hölder continuous. In fact, it's not true in general. But in some cases it can be studied with this method, and I think that's what they do — in some sense they studied the regularity of the measure. Okay, and I also have another question, a more general one. The measures you were dealing with here were all, at the end of the day, tied to notions like SRB measures — absolutely continuous with respect to a reference measure and so on. Do these constructions have something to say about equilibrium measures? Yes, this kind of approach has also been used by many people to build equilibrium states for other potentials. The SRB measure is the equilibrium state for minus the logarithm of the Jacobian — here the logarithm of the determinant of the derivative. For those who know what equilibrium states are, the SRB measures can be described that way; it's essentially a consequence of the entropy formula, or the variational principle. But note that all of this is built on a reference measure. If you change the reference measure, these ideas can also be used, but we need some geometric properties of the reference measure; in the applications here I will be using Lebesgue measure. But yes, it can be used for other purposes. 
In some sense, you have to start redoing everything with, say, conformal measures instead of Lebesgue. Yes, and you have to have partitions adapted to a big class of reference measures. In some sense that is the idea of Markov partitions, because they are adapted to all the interesting measures. But some people did use this Young construction that way. Okay, thanks. And there are some interesting results by Pinheiro, and I myself used that result, with Edward Santana and Kredo Levada, using these inducing schemes to deduce other equilibrium states. But here — this is a first approach to these topics for many people — I don't bring in all the possible complications and generality; I try to stay as simple as possible. And the simplest, in that sense, is always considering Lebesgue measure, which is important enough, because it brings us to the SRB measures and the decay of correlations for those SRB measures. But yes, it can be used for other measures, conformal measures — we need some geometric properties of the measures. Sure, okay, thanks. You're welcome. So I believe this was a great start. Thank you, José. And you will continue next Friday, right? In two days? Yes, next Friday at the same time. At the same place? Not necessarily — some of you can go to the beach. But at the same time, yes, on Friday. Okay. And Wednesday next week and Friday next week. After this class I will have a better notion of what is reasonable to expect for this course. Okay, so see you. In the next class maybe I'll give you a syllabus for the course. Okay. The notes are already available if you want to look at them. Yeah. Okay, so thank you. Thank you again. Okay, thank you very much, Yuri. Yeah. Bye-bye. Bye-bye. Thank you all for coming. See you on Friday. Yes.