The recording, yes. Thank you. So a very warm welcome, everybody, to the fourth day of this workshop on Markov Partitions and Young Towers. And we have two very nice talks lined up. I think so far all the talks have really been very interesting. It's been, for me at least, you know, so nice to hear all the talks, so very connected to each other really. This is why we decided to have such a focused workshop. So it's a great pleasure for me to introduce the first speaker, Omri Sarig, who is really one of the pioneers of the development of Markov partitions in the non-uniformly hyperbolic setting, which is what we're focusing on. And he will continue the topics talked about by Jérôme Buzzi and Sylvain Crovisier and talk about strong positive recurrence for diffeomorphisms. Thank you very much, Omri. Okay, thank you very much. And thank you to the organizers for the initiative to hold this meeting and for giving me an opportunity to speak. Everything that I'm going to say today is joint with Jérôme Buzzi and Sylvain Crovisier. And in fact, it's a direct continuation of the two talks which they gave earlier this week. So let me remind you that Jérôme and Sylvain proved two very useful properties for general C infinity surface diffeomorphisms with positive topological entropy. Jérôme proved the property called entropy continuity of Lyapunov exponents, and Sylvain proved the property called strong positive recurrence. I will recall what these properties are later in the talk. So don't worry if you don't remember exactly what they mean. What I'm going to do today is explain to you why these properties are useful. And I'm going to do it by presenting a variety of consequences which describe the stochastic behavior of the measure of maximal entropy. So everywhere in this talk, MME means measure of maximal entropy.
Now there are going to be many, many consequences, and the best way to remember them concisely is that they all follow from a spectral gap for the transfer operator of the measure of maximal entropy. And that, in some sense, is the central result of this talk: if you have a general C infinity surface diffeomorphism with positive topological entropy, then an ergodic measure of maximal entropy has a transfer operator which acts with a spectral gap on some Banach space. And as I'm sure many of you know, this has many, many consequences. Since we're going to talk about measures of maximal entropy, let me begin by recalling some facts about them. So suppose we have a closed smooth surface and a C infinity diffeomorphism on the surface with positive topological entropy. Well, then the most fundamental result on measures of maximal entropy is that they exist. This is true in fact in any dimension, not just in dimension two. In dimension two, you also have a finiteness property: there are at most finitely many different ergodic measures of maximal entropy. And in the topologically transitive case, exactly one. Since there is only one, it has to be ergodic. It's not always mixing, but you can show that every ergodic measure of maximal entropy is always isomorphic to the product of a Bernoulli scheme and a cyclic permutation of p points. Now there is a small asterisk here, because it's meant to remind me to tell you that if you don't like the assumption of topological transitivity, then you can remove it at the price of localizing to homoclinic classes. So everything that I say today is also going to work without assuming topological transitivity for the diffeomorphism, provided you focus all attention on one homoclinic class. On this homoclinic class, there is a unique measure of maximal entropy, which is ergodic and in fact Bernoulli up to a period.
Now what we're going to do today is to go beyond the ergodic theoretic description of the measure and discuss the stochastic properties of the measure. And we're going to start with exponential decay of correlations. So the result, put briefly, is that if you have a mixing measure of maximal entropy, then it exhibits exponential decay of correlations for Hölder continuous functions. So what I'm going to do next is to explain to you what you can say when you don't have mixing. Okay, so first of all, what are Hölder continuous functions? These are real valued functions on the manifold which have finite Hölder norm. So beta, the parameter here, is the Hölder exponent. And the result is that if you have a C infinity surface diffeomorphism with positive topological entropy and you take an ergodic measure of maximal entropy, well, if it's mixing, then you have exponential decay of correlations for Hölder continuous functions. If it's not mixing, well, you can reduce to the mixing case by noting that since the measure is isomorphic to a Bernoulli scheme times a permutation of p points, if you pass to the power p of the diffeomorphism, then the measure splits into a finite collection of ergodic components for f to the power p, each of which is mixing. And for these mixing ergodic components of f to the power p, you get exponential decay of correlations for every Hölder continuous phi and psi: the covariance of psi at time zero and phi at time np decays to zero exponentially fast. So that's exponential decay of correlations. The next results I would like to present belong to what's called in probability theory invariance principles. These are results which say that a certain stochastic process looks like Brownian motion. So the stochastic process we're going to talk about is the stochastic process of Birkhoff sums. So let's think about the following procedure.
Pick an initial condition randomly with respect to the measure of maximal entropy and calculate the ergodic sums at this initial condition at time one, two, three and so on, up to time n. And now draw a graph of the results. Again, I'm fixing the initial condition and I look at the value of the ergodic sums for this initial condition at times one, two, three, up to n. So let's draw a graph of this. This is a discrete graph because we're in discrete time. So let's turn it into a continuous graph by using linear interpolation. Then what we get is a zigzag line. Okay, now let's look at this line from far away. Formally, what I want to do is scale: I want to divide the x axis by n so that I get a function on the unit interval, and I want to scale the y axis by square root of n, because that's the right scaling in this business. And then the statement is that after the scaling, or after zooming out, the random zigzag line that you get from the Birkhoff sums looks like the path of Brownian motion. Okay, now of course I have to tell you what exactly I mean by "looks like," and there are several non-equivalent ways of saying it in a precise way. We already saw one of these ways, the Donsker way, in the talk of Carlos Matheus. I'm going to state the Strassen way, what's called the strong invariance principle or the almost sure invariance principle. So I will first give you the formal statement and then I will comment on what it means, because if you see it for the first time, maybe it's not clear what it means. So here is the statement. Again, let's fix an ergodic measure of maximal entropy for a surface diffeomorphism which is C infinity, and let's fix some Hölder continuous function, and I want this function to satisfy two conditions. The first condition is I want the integral to be zero; I want it to be centered. The second condition is I want my function not to be a measurable coboundary.
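As a toy illustration of this rescaling procedure, here is a short sketch with a one-dimensional chaotic map standing in for the surface diffeomorphism; the logistic map, the observable psi(x) = x - 1/2, and the sample size are all my illustrative choices, not objects from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic(x):
    """Toy chaotic map x -> 4x(1-x); a stand-in for the diffeomorphism f."""
    return 4.0 * x * (1.0 - x)

def birkhoff_path(x0, n, psi):
    """Birkhoff sums S_0, S_1, ..., S_n at the initial condition x0."""
    S = np.zeros(n + 1)
    x = x0
    for k in range(n):
        S[k + 1] = S[k] + psi(x)
        x = logistic(x)
    return S

psi = lambda x: x - 0.5           # smooth (hence Hölder) and centered
n = 5000
u = rng.random()
x0 = np.sin(np.pi * u / 2) ** 2   # sample from the invariant (arcsine) measure
W = birkhoff_path(x0, n, psi) / np.sqrt(n)   # y-axis rescaled by sqrt(n)
# Plotting W against k/n (the x-axis rescaled by n) gives the zigzag line;
# different seeds give different Brownian-looking paths.
```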
I don't want it to be cohomologous to zero via a measurable transfer function. And let psi n denote the ergodic sum at time n. So now the formal statement is that you can construct another probability space equipped with two stochastic processes, Sn and Bt, so that Sn is equal in distribution to the process of Birkhoff sums. This means that if you look at a random graph of Sn and a random graph of psi n, you cannot tell which one generated the graph only using the methods of statistics. The joint distribution is the same. So Sn is going to be the same as our process of Birkhoff sums in distribution. The second process Bt is going to be standard Brownian motion, so a process which at time zero is equal to zero and has Gaussian independent increments. Now since these two processes are defined on the same probability space, I can ask about the distance between the two. And the statement, the important statement, is that Sn is equal to Bn up to a constant sigma, plus a negligible error, and this is true almost surely. Negligible in this business means much smaller than the square root of n, so here the power lambda is smaller than one half. So probabilists say that what we are doing here is we take the original stochastic process of Birkhoff sums, what you get from the Birkhoff sums when you choose the initial condition randomly with respect to the measure of maximal entropy. So we take this process and we couple it to Brownian motion in such a way that the distance between the two is negligible. The reason I need to pass to another probability space is that in general the Birkhoff sums are a sequence of functions on my original surface, and Brownian motion is defined on Wiener space. So I can't talk about the difference, because they are defined on different spaces.
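Schematically, the statement just made can be written as follows (my formatting of the standard ASIP conclusion, with sigma the asymptotic variance discussed below):

```latex
\text{There exist } (S_n)_{n\ge1},\ (B_t)_{t\ge0}
\text{ on some probability space such that}\quad
(S_n)_{n\ge 1} \overset{d}{=} (\psi_n)_{n\ge 1},\quad
(B_t)_{t\ge 0}\ \text{is standard Brownian motion, and}\quad
S_n \;=\; \sigma B_n + O\!\left(n^{\lambda}\right)\ \text{a.s., for some } \lambda < \tfrac12 .
```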
But what the almost sure invariance principle allows you to do is to define a copy of the original process on the sample space of Brownian motion so that you can actually write an equation relating the two. That's the idea of coupling. Intuitively, what this means is that you can define your process on a probability space which has Brownian motion, so that the random graph of your process is within negligible error from a random graph of the Wiener process, the Brownian motion. Okay, so this is the strong invariance principle, the almost sure invariance principle. And let's discuss some consequences of this. The first consequence is the central limit theorem. The almost sure invariance principle is about the distribution of the entire graph of the function. The central limit theorem is a statement about the end of the graph, okay, the value at time n. It says that the distribution of the value at time n, say this histogram, converges in distribution to the distribution of the endpoint of the Brownian path, which is precisely a Gaussian by the nature of the Wiener process. So here is the statement. Again, you have an ergodic measure of maximal entropy for a C infinity surface diffeomorphism. You have a Hölder continuous function with zero integral, and you assume that the function is not a measurable coboundary. Then the statement is that you have the central limit theorem. There exists a sigma, which is positive, so that the measure of the set of initial conditions where the Birkhoff sum after scaling is between a and b, well, that's approximately what the normal distribution tells you. There is also a formula for the asymptotic variance, this parameter that appears here. This formula is called the Green-Kubo formula. It presents the asymptotic variance as the variance of psi, which is what you would have gotten had the functions psi composed with f to the i been independent. But since they're not independent, there needs to be a correction, and this is this term.
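A quick numerical sanity check of the central limit theorem in the same toy model (again the logistic map with psi(x) = x - 1/2, my illustrative choices; for this particular observable the positive-lag correlations vanish, so the Green-Kubo correction is zero and sigma squared equals Var(psi) = 1/8):

```python
import numpy as np

rng = np.random.default_rng(1)

def logistic(x):
    return 4.0 * x * (1.0 - x)

def scaled_sum(x0, n):
    """S_n(x0)/sqrt(n) for the centered observable psi(x) = x - 1/2."""
    s, x = 0.0, x0
    for _ in range(n):
        s += x - 0.5
        x = logistic(x)
    return s / np.sqrt(n)

n, trials = 2000, 2000
u = rng.random(trials)
x0s = np.sin(np.pi * u / 2) ** 2    # initial conditions from the invariant measure
vals = np.array([scaled_sum(x, n) for x in x0s])
print(vals.mean(), vals.std())      # should be near 0 and near sqrt(1/8) ~ 0.354
```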
This is the Green-Kubo correction, which takes into account the correlations between psi and psi composed with f to the n. Another useful identity for sigma squared is this second derivative of the pressure. This is useful because then you can use the methods of thermodynamic formalism to see that sigma squared is zero if and only if psi is a measurable coboundary. This is why I need this additional condition to have sigma different from zero. You get it from this formula for the linear response. The next consequence of the almost sure invariance principle is the law of the iterated logarithm. What you see in this picture are many graphs of Birkhoff sums, started at different initial conditions. Each initial condition gives you another zigzag line. The law of the iterated logarithm says that with full probability these lines tend to stay within an envelope, depicted here in black. The law of the iterated logarithm identifies the form of this envelope, and it says that it's the graph of the square root of 2x log log x. Here is the statement. Again, you have an ergodic measure of maximal entropy for a C infinity surface diffeomorphism with positive entropy. We have a Hölder continuous function with zero integral, which is not a coboundary. The law of the iterated logarithm says that the precise rate of growth of the Birkhoff sums of psi is what's written here. It's square root of 2n log log n, up to a constant, which is this sigma that we discussed before. This is an optimal bound, because the lim sup of the quotient is equal to plus one almost surely, and the lim inf is equal to minus one almost surely. The formula in red gives you the top half of the envelope. The formula in black gives you the bottom one. You have one part of the envelope and another part of the envelope, and these are optimal. They are optimal: it's a lim sup and a lim inf. If I change the constant sigma slightly, the statements are no longer valid. They are violated.
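The two identities for the asymptotic variance that were just described read, in a standard form (the slide's exact notation may differ):

```latex
\sigma^2
\;=\; \operatorname{Var}(\psi)\;+\;2\sum_{n=1}^{\infty}\operatorname{Cov}\!\left(\psi,\ \psi\circ f^{\,n}\right)
\qquad\text{(Green--Kubo)},
\qquad
\sigma^2 \;=\; \left.\frac{d^{2}}{dt^{2}}\,P(t\psi)\right|_{t=0},
```

and sigma squared vanishes if and only if psi is a measurable coboundary.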
Why does this follow from the almost sure invariance principle? Basically, because this statement is a classical result for Brownian motion, and since our process is equal to Brownian motion up to an error which is much smaller than square root of n, well, it's also much smaller than square root of n log log n. Therefore, you get this result from classical results for Brownian motion. There are many other consequences of the almost sure invariance principle, like the arcsine law and the Strassen law of the iterated logarithm and many other results, the law of records. But instead of discussing them, I want to focus on something else. So far, I've stated the results for Hölder continuous functions psi. But as you will see in a short while, the way that we prove these results is by passing to a symbolic model. And in order to prove these results, we don't really need the function to be Hölder continuous. It is sufficient for our purposes that the coding of the function on the symbolic model is Hölder continuous. Okay, let me say this again. We don't need the function on the manifold to be Hölder continuous. It's sufficient for us that the lift of the function to the symbolic model be Hölder continuous. Now, because the symbolic model is totally disconnected, it's much easier to be continuous, or regular, or Hölder continuous on the symbolic model than on the connected manifold below. And as a result, there are many functions which are not Hölder continuous on the manifold which nevertheless have a Hölder continuous coding on the symbolic space. Specifically, certain Pesin sets have indicator functions which are coded by indicators of cylinders. So although they are not Hölder continuous on the manifold, because they take only the values zero and one, they are Hölder continuous on the symbolic model. Similarly, the geometric potential, the logarithm of the unstable Jacobian: this is not a Hölder continuous, or even globally defined, function below on the manifold.
But if you do your coding correctly, it has a Hölder continuous coding above. Okay. Now, when you apply the results I stated before to these discontinuous functions on the manifold, by applying them to the continuous codings on the symbolic model, you get consequences, Pesin theoretic consequences, which I think are interesting, and I would like to show you what they are. The proofs are immediate. Okay, there is nothing much to do to prove them. They are really special cases of the previous results, interpreted correctly, but still I would like to show them to you because I think they are interesting. So the first result relates to Pesin sets. So let me remind you that a Pesin set is a closed set where the dynamics is uniformly hyperbolic. It is specified by three constants, K, epsilon, and chi. And the uniform hyperbolicity is expressed by saying that every point in the Pesin set has a well-defined stable and unstable direction, and you have uniform contraction on stable spaces in the future and on unstable spaces in the past. Now, the parameter chi is the one that gives you the rate of exponential decay. And the parameter epsilon tells you how these bounds deteriorate when you pass from x to f to the power k of x. Okay. So chi is the parameter of contraction in the stable and unstable directions; epsilon is the control on how the bounds deteriorate when you pass from x to another point on the orbit of x. Okay. I should mention that part of the definition also is that you have a lower bound on the angle. Okay. So this is a Pesin set, and the union of all Pesin sets, you know, over all K, is a set of full measure, by the Oseledets theorem. Of course, these sets are not necessarily invariant sets. And because of this, if you start from a point in a Pesin set and you iterate it, it's possible that the point comes out, okay, and spends a very long time outside.
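The bounds being described can be written out as follows (a standard form of the (K, epsilon, chi)-Pesin set conditions; the talk's slides may normalize the constants differently):

```latex
\bigl\|Df^{\,n}\big|_{E^s(f^{k}x)}\bigr\| \le K e^{\varepsilon|k|} e^{-\chi n},
\qquad
\bigl\|Df^{-n}\big|_{E^u(f^{k}x)}\bigr\| \le K e^{\varepsilon|k|} e^{-\chi n}
\qquad (n \ge 0,\ k \in \mathbb{Z}),
```

together with a lower bound of the form \(\angle\bigl(E^s(x),E^u(x)\bigr) \ge K^{-1}\).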
Now, the time that a point in the Pesin set spends outside the Pesin sets is unbounded in principle. But we can ask, you know, how big is the time that you spend outside the Pesin set for most points? Okay. And the way to do it is to define a function, which is called the entrance time to a Pesin set, which tells you how much time you need to wait before your orbit hits the Pesin set. Okay. So this function is finite almost surely, okay, by ergodicity, but it's not bounded. And the question is, well, how big is the set of points for which this function is big? And the answer is: it's exponentially small. So if you have an ergodic measure of maximal entropy for a C infinity surface diffeomorphism, then the measure of the set of initial conditions which require more than n iterates to enter a Pesin set decays exponentially in n. Okay. How do you prove it? Very easily. You take the indicator of a Pesin set, which you choose carefully so that its indicator has a Hölder continuous coding above, and you use exponential mixing. Okay. It follows from exponential decay of correlations for a suitable choice of a Pesin set. Okay. Another question you can ask. I'm sorry? I have a question. Yes, go ahead. Does the constant theta depend on epsilon from the Pesin set? Yes. Yes, it does. Can you take epsilon to zero? It goes to one. What goes to one? Theta goes to one when epsilon goes to zero? Oh, you're asking how theta depends on epsilon. Let me think. No, I can't say anything about how it depends. The best I can say is that it tends to one. I cannot say more. Okay. So another question. Okay. Next, let's discuss the optimal Pesin constant. So what is the optimal Pesin constant? By the Oseledets theorem, for almost every point x there exists a constant Kx which satisfies these bounds. Okay. Let's look at the smallest possible constant which satisfies these bounds. It exists, and it's called the optimal Pesin constant.
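One can see the flavor of this exponential tail numerically in a toy mixing system, with a fixed interval standing in for the Pesin set; everything here (the map, the set A, the thresholds) is an illustrative assumption, not the construction from the talk:

```python
import numpy as np

rng = np.random.default_rng(2)

def logistic(x):
    return 4.0 * x * (1.0 - x)

A = (0.2, 0.8)   # a fixed "good" set standing in for a Pesin set

def entrance_time(x, nmax=200):
    """Smallest n >= 0 with f^n(x) in A (capped at nmax)."""
    for n in range(nmax):
        if A[0] <= x <= A[1]:
            return n
        x = logistic(x)
    return nmax

u = rng.random(20000)
x0s = np.sin(np.pi * u / 2) ** 2   # sample the invariant measure
times = np.array([entrance_time(x) for x in x0s])
tail = [np.mean(times > n) for n in (0, 5, 10, 15)]
print(tail)   # successive tail probabilities drop roughly geometrically
```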
And the way that I define it, it's automatically tempered in the sense of Pesin: it deteriorates in a controlled way along iterations. If you're uniformly hyperbolic, Anosov, then the optimal Pesin constant is uniformly bounded. If you're not uniformly hyperbolic, then the Pesin constant is finite almost surely, but it's not uniformly bounded. Again, you can ask how big is the set of points where the optimal Pesin constant is big? Well, the answer is it's small. Okay, it's exponentially small. So here's the result. If you look at the set of points such that the logarithm of the optimal Pesin constant is bigger than n, it decays exponentially in n. Okay. And again, it depends on epsilon and chi. What's the proof of this? Again, it's very easy, because the optimal Pesin constant is tempered: if the log of the optimal Pesin constant is big at time zero, then it will take you a very long time to return to a Pesin set where the optimal Pesin constant is, let's say, less than five, because the optimal Pesin constant varies by steps of e to the epsilon along iterates. So the event that the optimal Pesin constant is big is a subset of the event that the entrance time to some Pesin set is big. Okay. So again, the proof is very easy; it follows from previous results. Okay. Next, let me show you a couple of consequences which you obtain when you look at the geometric potential. Okay. And these consequences are related to statements on the rate of expansion in unstable directions. So let's see what happens when we simply write down the consequence of the law of the iterated logarithm. Okay. So again, let's assume that m zero is an ergodic MME of a C infinity surface diffeomorphism. And let's make the following assumption: there are two hyperbolic periodic orbits which are homoclinically related to m zero but have different top Lyapunov exponents.
This assumption is sufficient to guarantee that the geometric potential is not a measurable coboundary. Okay. This is by the Livšic theorem for non-uniformly hyperbolic dynamical systems, due to Katok-Mendoza and Pollicott. Okay. Well, in this case, the geometric potential is not a coboundary, so the law of the iterated logarithm applies, and it gives you this inequality. It says that for almost every x, for all n sufficiently big, the expansion of f to the power n in the unstable direction satisfies these bounds. Okay. Moreover, these bounds are sharp. Okay. If you slightly decrease sigma, then the upper bound will be violated infinitely many times, and the lower bound will be violated infinitely many times. Okay. So this gives you, if you want, an almost sure rate of convergence in the limit which defines the Lyapunov exponent. And it has the interesting consequence that almost surely there are infinitely many times when you see super-exponential expansion, and there are also infinitely many times where the rate of exponential expansion that you see is smaller than what it should be. Sorry, this almost everywhere is for the measure of maximal entropy? Always for the measure of maximal entropy. Everything that I'm going to say now is for the measure of maximal entropy. That's right. Again, this follows immediately by applying the law of the iterated logarithm to the geometric potential. The last result I would like to mention doesn't follow from the almost sure invariance principle; it follows from something else. It's related to the property of entropy continuity which Jérôme mentioned in his talk. It's a result on the stability of measures of maximal entropy. And before I give you the precise statement, let me tell you roughly what I'm trying to say.
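Written out, the bounds just described take the following schematic form, with lambda-u denoting the top Lyapunov exponent of m zero (my notation, reconstructing the slide):

```latex
e^{\,n\lambda^{u} - \sigma(1+o(1))\sqrt{2n\log\log n}}
\;\le\;
\bigl\|Df^{\,n}\big|_{E^u(x)}\bigr\|
\;\le\;
e^{\,n\lambda^{u} + \sigma(1+o(1))\sqrt{2n\log\log n}}
\qquad\text{for a.e. } x \text{ and all } n \text{ large,}
```

and both sides fail infinitely often if sigma is decreased, which is the sharpness statement.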
What I want to say is that if you take a measure which is not the measure of maximal entropy, but has nearly maximal entropy, okay, its entropy is nearly maximal, then I want to say that this measure is very close to the measure of maximal entropy, in a precise sense. So again, I'm going to give you a statement that says that if a measure has nearly maximal entropy, then the measure has to be close to the measure of maximal entropy. So here is how to say this. Now I'm going to assume topological transitivity, or I'm going to stick to a homoclinic class; now it's going to matter. So here's what I want to say. You can find a constant C beta so that the following inequality holds. Suppose you have an invariant measure m such that the entropy of m is epsilon close to the maximal entropy. Then I claim that the measure m is square root of epsilon close to the measure of maximal entropy, in the sense that if you evaluate m and m zero on a Hölder continuous function psi, the results are square root of epsilon close. So again, you start with any measure m, and you assume that its entropy is epsilon close to the maximum. The statement is that in this case the measure is square root of epsilon close to the measure of maximal entropy, where distance is measured by comparing the integrals of Hölder continuous test functions. So that's closeness of the measures. There is also closeness of the Lyapunov exponents. If you have a measure whose entropy is nearly the maximal entropy, then the Lyapunov exponents are close, up to square root of epsilon. Okay. Omri, are you assuming the measures are ergodic? No. No? I'm assuming that m zero is ergodic. You see, the expressions here are affine in the measure, so I can integrate them over all ergodic components. That's why I don't need the ergodicity of m. Okay. So for Anosov diffeomorphisms, these results are known. They are due to Shirali Kadyrov.
He proved them for subshifts of finite type, and they follow, by the existence of a finite Markov partition, from his results on subshifts of finite type. For countable Markov shifts which satisfy a condition called SPR, which I will mention in a short while, these results are due to Rühr and me. Okay. So there are many others. I'm sorry, Asha, your internet is not clear. What about Oseledets decompositions? The stable subspaces in your case, are they close? You say closeness of Lyapunov exponents; what about closeness of the subspaces, the Oseledets subspaces? They are also close, because in my coding the Oseledets spaces depend in a Hölder continuous way on the point in the symbolic space. So yes, they will also be close. Okay. Yes. Yes. It's also true for our coding that the Oseledets directions are Hölder continuous in the coding of the point, so it will follow. Yeah. Okay. So there are other results you can get from spectral gap. I don't have time to mention all of them, because what I want to do is use the remaining time to say something about the proof. Okay. And I'd like to emphasize that everything I have said so far is known for Anosov diffeomorphisms. Okay, so there's nothing new here for Anosov diffeomorphisms. The novelty is that I'm not assuming uniform hyperbolicity. I'm only assuming C infinity smoothness and two-dimensionality. So the question is how to obtain these consequences without assuming uniform hyperbolicity. Now, everybody that has worked with the stochastic properties of dynamical systems knows that the way to get the statements I mentioned before is to prove that the transfer operator has a spectral gap on some Banach space. Okay.
The point is, if you know that the transfer operator of the measure of maximal entropy acts with a spectral gap on some Banach space, then there are well known, highly sophisticated tools that were developed over the last 50 years which allow you to deduce everything that I said. Okay. These tools are collectively known under the name of the transfer operator method, and they were developed by many people: for example Ruelle, Parry, Pollicott, Sharp, Guivarc'h and Hardy, Rousseau-Egele, Sébastien Gouëzel (for example, for the almost sure invariance principle), and many other people. So really the question is how to get spectral gap. Okay. If you don't know exactly what I mean by spectral gap, don't worry, I'm going to explain later on. Now, in the talks, Jérôme showed that every C infinity surface diffeomorphism with positive topological entropy has the property he called entropy continuity of Lyapunov exponents. That's the property that says that if a sequence of measures converges and has entropies which converge to the maximal entropy, then the Lyapunov exponents of the measures converge to those of the measure of maximal entropy. Okay. This, Jérôme showed, always holds for C infinity surface diffeomorphisms. And then Sylvain, in his talk, showed that this property implies strong positive recurrence, a property that I will recall later on. What I'm going to do now is explain to you why strong positive recurrence implies spectral gap for the transfer operator. And once I do this, we get the proof, because, as I said before, that spectral gap implies the results I mentioned is well known: it's the transfer operator method and nothing else. So we are going to focus now on the implication written here in red, that the strong positive recurrence property introduced in Sylvain's talk implies a spectral gap for the transfer operator. Okay. So how to do this? How to show that strong positive recurrence implies spectral gap? Okay.
So let me begin with just recalling some facts on symbolic dynamics, after Yuri Lima's mini-course, where this was discussed at length; I will only say it briefly. Suppose you have a C one-plus-epsilon diffeomorphism; now we can work in any dimension. Then I proved in dimension two, and Snir Ben Ovadia proved in any dimension, that you can construct a countable Markov shift and an equivariant Hölder continuous map pi from the countable Markov shift to the manifold so that this diagram here commutes. Okay: the action of the shift on the symbolic space commutes with the action of the diffeomorphism on the surface. Okay. Now this coding map has two important technical properties: it is almost surely finite-to-one, and it is almost surely onto. Specifically, what I mean is the following. You can define a very big subset of the shift, sigma-sharp, which is huge. It contains the non-wandering set; in particular, it has full measure for all shift invariant measures. And when you restrict the coding map to this huge set, you get a finite-to-one map. Further, the image of the coding map on this huge set has full measure for all ergodic invariant hyperbolic measures whose Lyapunov exponents are bounded away from zero by chi. Now, the significance of finiteness-to-one and almost sure surjectivity is that it means that if you have an invariant measure on the shift, it projects to an invariant measure with the same entropy below. And if you have an ergodic invariant chi-hyperbolic measure below, it lifts, it has a lift to a measure above with the same entropy. Okay. So I emphasize that when you lift a measure below to a measure above, and when you project a measure above to a measure below, the entropy remains the same. That's what's crucial here, and this is because of finiteness-to-one. Okay. And that will be important for us. I stated this result for diffeomorphisms, but by now there are many, many generalizations to other dynamical systems.
These were explained in Yuri's talk. There are now results for maps with singularities, for flows, for maps in higher dimension, even for non-invertible maps. There are also improvements of the almost sure injectivity and almost sure surjectivity statements. Boyle and Buzzi have an almost surely injective, one-to-one coding when you fix the measure. Snir Ben Ovadia has a coding for which you can actually calculate the image precisely, identify the set. For us, it will be important that you can strengthen the coding in the following sense: if you only code a homoclinic class, you can get a transitive coding. You can make the countable Markov shift topologically transitive. This will be important to us. Why is it going to be important to us? Because of the following reason. Imagine that either the diffeomorphism is topologically transitive in dimension two, or that you are only looking at a homoclinic class. Then, by the strengthening I mentioned before, you can make the coding with a topologically transitive countable Markov shift. Now, why is this good? It's good because let's see what happens when we code the measure of maximal entropy. First of all, the measure of maximal entropy is unique, by topological transitivity. Let's take the measure of maximal entropy mu, and let's lift it to a measure on the countable Markov shift. Because the coding is finite-to-one, the measure above has the same entropy as the measure below. I claim that the lift is the measure of maximal entropy above. Why? First of all, it has the entropy of the measure below. Secondly, there cannot be any measure above with bigger entropy, because such a measure would project to a measure with bigger entropy below, and we started with the measure of maximal entropy. So for this reason, if you start with the measure of maximal entropy below, then the lift is the measure of maximal entropy above.
And what's nice about transitive countable Markov shifts is that we have a formula for the measure of maximal entropy above. We have an explicit formula. This formula was found for subshifts of finite type by Bill Parry and was extended to the countable-state case by Boris Gurevich. It says that the measure of maximal entropy above is a Markov measure, and there is a formula for the matrix of transition probabilities. I'm not going to repeat the formula, although it's completely explicit. I'll just say that the matrix of transition probabilities for this Markov chain is conjugate to the transition matrix of the graph, up to a constant. So, Omri, the bound is uniform, I guess, right? It's not just finite-to-one at points by some simple argument. You mean the bound on the coding? So, finite-to-one could mean for each point there's a finite pre-image. But for almost all points, it's the same, right? Right, right. Because the number of pre-images is an invariant function. That's right. Good. Okay. That's right, yeah. But for different ergodic measures, this bound could be different. For each individual ergodic measure, it's a constant. Oh, thank you, right. Yeah. Okay. So, let's summarize. If we code a diffeomorphism by a transitive countable Markov shift, then the measure of maximal entropy of the diffeomorphism is coded by a Markov chain whose transition matrix is known. It's basically conjugate, up to a constant, to the transition matrix of the graph. And now, when you have a Markov chain, you can ask: what do I need to know about this Markov chain to prove that it has exponential decay of correlations? Or that it satisfies the central limit theorem, or the almost sure invariance principle? Well, basically what you need to know is that it satisfies the spectral gap property.
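As an aside, the Parry construction is easy to see numerically in the finite-state case. The following is a minimal sketch, not from the talk: the 3-by-3 matrix T is an arbitrary toy example of a primitive 0-1 transition matrix, and the sketch builds the Markov measure whose transition matrix is T conjugated by the Perron eigenvector, up to the constant lambda, and checks that its entropy is log(lambda).

```python
import numpy as np

# Toy finite-state illustration of the Parry / Gurevich formula: the MME of a
# transitive subshift of finite type is the Markov measure obtained by
# conjugating the 0-1 transition matrix T with its Perron eigenvector.
T = np.array([[1, 1, 0],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)

eigvals, right = np.linalg.eig(T)
k = np.argmax(eigvals.real)
lam = eigvals[k].real                        # Perron eigenvalue (= 2 for this T)
v = np.abs(right[:, k].real)                 # right Perron eigenvector

eigvals2, left = np.linalg.eig(T.T)
u = np.abs(left[:, np.argmax(eigvals2.real)].real)   # left Perron eigenvector

# Parry transition probabilities P[a,b] = T[a,b] v[b] / (lam v[a]),
# stationary distribution pi[a] proportional to u[a] v[a]
Pmat = T * v[None, :] / (lam * v[:, None])
pi = u * v / (u @ v)

# the entropy of this Markov chain equals log(lam), the topological entropy
h = -sum(pi[a] * Pmat[a, b] * np.log(Pmat[a, b])
         for a in range(3) for b in range(3) if Pmat[a, b] > 0)
```

The identity h = log(lambda) is exact, not numerical luck: the log of each Parry transition probability telescopes against the stationarity of pi.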
So, you need to know that the matrix of transition probabilities, or basically the transition matrix of the graph, has a certain property called the spectral gap property, which I will now describe. So, what is the spectral gap property of the transition matrix that guarantees exponential decay of correlations, the invariance principle, and so on? Here it is. Let's begin with some notation. Suppose we have a countable directed graph, and let's assume for simplicity that every vertex has finite degree, okay, not bounded but finite. Let T denote the matrix of zeros and ones which encodes the collection of edges. So the entry at a pair (A, B) will be equal to one if there is an edge from A to B, and it will be equal to zero if there is no edge from A to B. This is a matrix of zeros and ones, and it's an infinite matrix because my graph is infinite. So now I need to be careful, because an infinite matrix can act in many different ways on many different vector spaces. Let me tell you which action I'm going to study. I'm going to study the action on the space of continuous functions on the one-sided shift. The one-sided shift is the collection of all one-sided paths on the graph, and I will look at the collection of all bounded continuous functions on this space, and I'm going to let the transition matrix act on them using the Ruelle operator action. So what does the Ruelle operator do? You take a function of one-sided paths, and you map it to the function Lf, so that Lf on a path is the sum of the values of f on all possible extensions of the path one step to the left. So if I want to calculate the action of my matrix on my function f, the value of this function at a one-sided path is given by this formula: it's the sum of the values of the function on all paths which are predecessors of the path I started from.
Why is this an action of the transition matrix? Because it's the transition matrix that tells you which extensions to the left you are allowed to make. So this is an action, and the spectral gap property is a property of this action. It says roughly that there exists a Banach space such that L acts on this Banach space with good spectral properties. Here are the spectral properties. What I want is to be able to construct a Banach space of functions on the one-sided shift so that the space is big: it contains the indicators of cylinder sets. The norm is strong: norm convergence implies uniform convergence on cylinders, on partition sets. I want my action to be bounded, to be continuous, to have finite norm. And most importantly, I want the spectrum to consist of a simple eigenvalue, e to the entropy, plus possibly a subset of a strictly smaller disk. This is called spectral gap because of the gap between the leading eigenvalue and the rest of the spectrum, the gamma here. Next, I want some other functional-analytic properties which I'm not going to use in this talk, so I just mention them to have a complete definition. I want the norm to be a Banach algebra norm. I want to have the lattice property: the absolute value should be a continuous (nonlinear) map. And I want multiplication by bounded continuous functions to preserve the space in a bounded way. So the spectral gap property of the transition matrix is that there exists a Banach space of one-sided functions which satisfies this list of requirements. And now the question is, how to know whether such a Banach space exists? Well, first let's ask: does it always exist? The answer is no. There are some infinite matrices of zeros and ones which do not act with spectral gap on any Banach space of one-sided functions.
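To make the Ruelle action concrete, here is a minimal sketch, not from the talk, for a finite graph: for a function depending only on the first symbol, f(x) = g(x0), the formula (Lf)(x) = sum over a with T[a, x0] = 1 of f(ax) collapses to multiplication of the vector g by the transpose of T. Iterating then exhibits the leading eigenvalue e to the entropy. The graph chosen is the golden-mean shift, a standard toy example.

```python
import numpy as np

# Ruelle action on functions of the first coordinate: (Lf)(x) = sum over
# predecessors a of x0 of f(a x), i.e. g -> T^t g for the vector g of values.
T = np.array([[1, 1],
              [1, 0]])   # golden-mean shift: no edge from 1 to 1

def ruelle(g):
    """Apply L to a function of the first symbol, given as a vector g."""
    return T.T @ g

g = np.array([1.0, 1.0])          # start from a cylinder-indicator-like function
for _ in range(30):               # iterate L and renormalize (power iteration)
    g = ruelle(g)
    g = g / np.linalg.norm(g)

# the growth rate of L^n approaches the leading eigenvalue e^{entropy},
# here the golden ratio
lam = np.linalg.norm(ruelle(g))
```

This is exactly the finite-dimensional shadow of the spectral picture: the simple leading eigenvalue dominates, and everything else decays at a strictly smaller rate.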
Here is a diagram which indicates some well-known examples of countable Markov shifts for which such spaces exist, these ones, and also some well-known examples of countable Markov shifts for which such spaces do not exist. For people in this conference, let me say that if you have a Young tower such that the tail of the return time function is exponential, then such a space exists. That's a result of Lai-Sang Young. But if the return time has a polynomial tail, then such a space cannot exist, because spectral gap implies exponential decay of correlations, whereas a polynomial tail implies non-exponential decay of correlations. So, there are some Young towers, sorry. So, when you say Young tower, you mean a Young tower with respect to the measure of maximal entropy having exponential tails? I mean, the tower itself is a system with a countable Markov partition. But having exponential tails is with respect to the measure of maximal entropy? That's right. That's right. Exactly. Okay. Yes. Okay. So, since there are some countable Markov shifts for which such spaces exist and some for which such spaces cannot exist, we face a problem: how to decide whether a given matrix of zeros and ones admits a Banach space with all those properties. This question is completely understood in the framework of abstract countable Markov shifts. There is a necessary and sufficient condition. Let me briefly explain it. It's best understood in terms of combinatorial features of the graph. Consider the graph associated with the countable Markov shift, and given a vertex A, let's look at the collection of all admissible paths of length n from A to A, all loops from A to A. Let's count how many such loops exist and call the result of the count Z_n. Now, let's count how many first-return loops there are at the vertex A.
So, how many paths from A to A exist which do not visit A in the middle? Let's count them and call the result of the count Z_n*. Obviously, Z_n* is less than or equal to Z_n. It turns out that the spectral gap property holds if and only if Z_n is not just larger but exponentially larger than Z_n*. So here's the result. It's the result of works by many people. If you have a topologically transitive countable Markov shift with finite Gurevich entropy, so the supremum of the metric entropies is finite, then the following three properties are equivalent. First property: the spectral gap property, so the property that there exists a Banach space with everything I listed before. Second property, what I said before: there are exponentially more loops than first-return loops at some vertex. It doesn't matter which; if this happens for one vertex, it happens for all vertices. Third property: the entropy at infinity, something which I will define in a minute, is strictly smaller than the supremum of all metric entropies. So these three properties are equivalent. A bit of history of this: the fact that one implies two is due to David Vere-Jones. The fact that two implies one is due to Van Cyr and me. The fact that two implies three is due to Gurevich and Zargaryan. The fact that three implies two is due to Sylvie Ruette. But for us, what's important is that the three properties are equivalent. So, one quick question. The classical notion of positive recurrence for the shift: is this the same as that or stronger? It's stronger. So, the way to think about it is the following. A Markov chain is positive recurrent if, when you start at the vertex A, you return to A, and the time you need to wait has finite expectation.
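The counts Z_n and Z_n* are easy to compute for a finite graph, which makes a good sanity check of the definitions. Below is a minimal sketch, not from the talk; the 3-vertex graph is an arbitrary toy example. For a finite irreducible graph the criterion holds automatically, so this only illustrates the bookkeeping; the condition has teeth only for infinite graphs.

```python
import numpy as np

# Count loops Z_n and first-return loops Z_n* at vertex A of a finite graph.
T = np.array([[1, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])
A = 0
mask = [i for i in range(len(T)) if i != A]
Tsub = T[np.ix_(mask, mask)]       # the graph with vertex A deleted

def Z(n):
    """Number of loops of length n at vertex A."""
    return int(np.linalg.matrix_power(T, n)[A, A])

def Z_star(n):
    """Number of first-return loops: length-n paths A -> A avoiding A in between."""
    if n == 1:
        return int(T[A, A])
    # first step A -> b, then n-2 steps inside the complement of A, then c -> A
    M = np.linalg.matrix_power(Tsub, n - 2)
    return int(T[A, mask] @ M @ T[mask, A])
```

For this toy graph the only first-return loops at A have lengths 1 and 3, so Z_n* vanishes for large n while Z_n grows like a power of the Perron eigenvalue, an extreme case of "exponentially more loops than first-return loops".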
A Markov chain is strongly positively recurrent if not only do you return with finite expected time, but the return time has an exponential tail. Ah, good. Thank you. That's a stronger requirement. One says that the return time is in L1; the other says much more. Okay. For Markov chains, strong positive recurrence is basically the statement that you're exponentially mixing on cylinders. Oh, good. Thank you. Not just mixing. Okay. So, we have three equivalent properties. Perhaps the third one is a bit mysterious, because it has this strange notation here, lim sup as the measure tends to infinity. So what do I mean by this? Let me explain; it's a very simple thing. Just to make my life easy, I'm going to explain it only for countable Markov shifts where every vertex has finite degree. Okay. So, I'm going to say that a sequence of measures escapes to infinity if, for every pre-compact set, eventually most of the mass is outside the compact set. So for every epsilon and every pre-compact measurable set B, there exists a time capital N after which the measure of B is less than epsilon. Most of the mass escapes every measurable pre-compact set. Okay. So that's what I mean when I say that the sequence of measures escapes to infinity. Now, what do I mean when I write down lim sup of the entropy as the measure tends to infinity? What I mean is the supremum, over all possible sequences which escape to infinity, of the lim sup of the entropies along the sequence. So I look at all sequences of measures which escape to infinity, I calculate the lim sup of the entropies along each such sequence, and then I take the supremum over all possible sequences. That's what I mean by lim sup of the entropy.
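The distinction between positive and strong positive recurrence is visible in the tail of the return time. Here is a minimal sketch, not from the talk; the stochastic matrix P is an arbitrary toy example of a finite irreducible chain, for which both properties hold automatically, and the sketch just computes P(R_A > n) and exhibits its geometric decay.

```python
import numpy as np

# Tail of the first-return time R_A to state A of a finite Markov chain.
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.4, 0.0, 0.6]])
A = 0
mask = [1, 2]
Q = P[np.ix_(mask, mask)]          # sub-stochastic matrix: transitions avoiding A

def tail(n):
    """P(R_A > n): leave A and then stay away from A for n steps in total."""
    return float(P[A, mask] @ np.linalg.matrix_power(Q, n - 1) @ np.ones(len(mask)))

tails = [tail(n) for n in range(1, 25)]
rho = max(abs(np.linalg.eigvals(Q)))   # spectral radius of Q, here 0.6 < 1
```

Positive recurrence is the summability of these tails (finite expected return time, here given by Kac's formula as 1 over the stationary mass at A); strong positive recurrence is the geometric rate, governed by the spectral radius of Q.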
And what I just told you is that the spectral gap property, the property that there exists a Banach space on which the transition matrix acts with spectral gap, with all the properties I mentioned before, is equivalent to the statement that the entropy at infinity, the lim sup of the entropy along all sequences which tend to infinity, is bounded away from the Gurevich entropy, the supremum over all metric entropies. Okay. Any questions about this? What this means? Please ask verbally; if you send me things in chat, I can't see them. Okay. So, at this point, I would like to tell you that this entropy at infinity was considered by other authors in different contexts. For example, Jérôme Buzzi and Sylvie Ruette used it to construct measures of maximal entropy for C^r interval maps. It appears in a series of papers by Godofredo Iommi, Mike Todd, and Anibal Velozo on upper semi-continuity of the entropy map on countable Markov shifts. It was used in number theory, in proofs of Duke's theorem, for instance in the famous paper by Einsiedler, Lindenstrauss, Michel, and Venkatesh. It's used in variable curvature as well, in papers by Pit and Schapira, by Schapira and Tapie, and by Riquelme and Velozo, on geodesic flows on non-compact manifolds with variable negative curvature. And I'd also like to say that if you think about the entropy at infinity as the complexity of bad things, and think of this inequality as saying that the complexity of bad things is smaller than the complexity of the system, then it's very similar to results appearing in the work of Vaughn Climenhaga and Dan Thompson on generalized specification properties of dynamical systems. It's in the same flavor. Infinity is bad; soon you will see why I think that infinity is bad. And this is basically saying that the complexity of the system close to the bad part of phase space is smaller than the full complexity of the system. That's a good way to think about this condition.
Okay, so now let me check how I'm doing for time; I'm short of time. Let's move back to diffeomorphisms on manifolds and see how to interpret the entropy at infinity on the level of the manifold. First of all, I can't use the same definition of entropy at infinity on the manifold, because the manifold itself is compact, so no sequence of measures can escape to infinity, simply because the manifold is compact. I need to use a more subtle and sophisticated notion of escape to infinity. To define this notion, I'm going to use some abstract nonsense: the notion of a bornology, which is a general way of defining escape to infinity on abstract spaces. So what is a bornology? A bornology is a collection of sets, which we call bounded, satisfying natural axioms. For example, the axioms are that every point is bounded, a finite union of bounded sets is bounded, a subset of a bounded set is bounded, and every bounded set is a subset of a measurable bounded set. Okay, so these are natural axioms. We're going to say that a sequence of measures escapes to infinity with respect to a bornology if, for every epsilon, eventually most of the mass is outside every bounded set. So for every bounded measurable set B and every epsilon, there exists a position in the sequence after which the measure of the bounded set is less than epsilon. Okay, so you saw an example, the bornology of pre-compact sets on a countable Markov shift, but now I want to define a bornology on the non-uniformly hyperbolic part of a manifold. So I'm going to look at the set of Lyapunov regular points with Lyapunov exponents bounded away from zero. That's a horrible Borel set; it has, perhaps, empty interior. On this highly non-compact set, I'm going to define a bornology by appealing to the collection of Pesin sets.
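The epsilon-N definition of escape to infinity is simple enough to check mechanically. Here is a minimal toy sketch, not from the talk: the space is the nonnegative integers, the bornology is the collection of finite sets, and a measure is represented as a dict from points to masses. All names are illustrative.

```python
# Toy check of "escape to infinity" with respect to a bornology of finite sets.

def escapes(measures, bounded_set, eps):
    """True if mu_n(B) < eps for all n past some N (checked along the given list)."""
    masses = [sum(m for x, m in mu.items() if x in bounded_set)
              for mu in measures]
    return any(all(v < eps for v in masses[N:]) for N in range(len(masses)))

# point masses marching to infinity escape every finite set ...
deltas = [{n: 1.0} for n in range(50)]
# ... but a sequence keeping half its mass at 0 does not escape
sticky = [{0: 0.5, n: 0.5} for n in range(50)]

B = set(range(10))
print(escapes(deltas, B, 0.01), escapes(sticky, B, 0.01))  # prints: True False
```

The "sticky" sequence is the kind of phenomenon the entropy at infinity must be insensitive to: only sequences whose mass fully leaves every bounded set count.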
I'm going to fix two parameters, k and epsilon, and look at the Pesin sets with these parameters, and I'm going to say that a set is bounded if it's included in a Pesin set. I remind you that the parameters k and epsilon have this meaning: k tells you the rate of contraction in the stable and unstable directions, and epsilon gives you the control on how the bounds deteriorate when you iterate the point. So if you fix k and epsilon, you get the collection of Pesin sets defined with these parameters, and I'm going to use the bornology of all subsets of these Pesin sets. Okay, so escape to infinity in the Pesin bornology means that most of the mass escapes to regions of the space where hyperbolicity is very, very bad: the hyperbolicity bounds hold only with huge constants. That's what it means. And now I'm going to talk about escape to infinity with respect to this Pesin bornology. So let me recall what the strong positive recurrence property was in Sylvain's talk. Sylvain gave several equivalent conditions; one of them was this one, or at least it's equivalent in my notation to this one: there exists a parameter k such that for all epsilon small, the entropy at infinity with respect to the Pesin bornology is strictly smaller than the topological entropy of the system. In other words, consider any sequence of measures which escapes to infinity in the Pesin bornology, in the sense that most of the mass moves to parts of the space where the Pesin constant is huge. If you have a sequence of measures like this, then its entropy has to be bounded away from the full topological entropy. This is why I said that entropy at infinity measures bad behavior: it measures the complexity of measures which are concentrated in the part of phase space where the hyperbolicity bounds are very, very bad, where they require a huge optimal Pesin constant. Okay, so now here is our situation.
Jérôme and Sylvain basically showed that if you have a general C infinity surface diffeomorphism, then the entropy at infinity is going to be smaller than the full topological entropy, if you think of entropy at infinity from the perspective of the Pesin bornology, escaping Pesin sets. And I, in this talk, recalled a result by Vere-Jones, Cyr and me, Ruette, and Gurevich and Zargaryan, which together show that the condition that the entropy at infinity with respect to the bornology of pre-compact sets is smaller than the full topological entropy is equivalent to the existence of a Banach space on which the transition matrix acts with spectral gap. Okay, so the entropy at infinity appears on the left and it appears on the right; the only difference is that it uses different bornologies. Here we talk about measures on the manifold, and we use escape to infinity in the sense of escaping Pesin sets. Here we use measures on symbolic space, and we use escape to infinity in the sense of escaping pre-compact sets in symbolic space. And the question, of course, is whether these bornologies are compatible. What we really need, to be able to pass from this condition, which we already have because of the work of Jérôme and Sylvain in their talks, to this condition, which will give a spectral gap, is the following compatibility between the bornology of pre-compact sets in symbolic space and the bornology of Pesin sets below: if you have a sequence that escapes to infinity above, in the sense that it escapes every pre-compact set, then its projection to the manifold escapes to infinity in the sense of escaping every Pesin set. That's what we want to know.
We want to know that if you have a sequence above which escapes to infinity in the sense of the bornology of pre-compact sets, then it projects to a sequence which escapes to infinity in the sense of Pesin sets. And of course, this depends on the coding. The technical lemma in this talk is that there is a coding for which this is true. There is a symbolic coding of a surface diffeomorphism, finite-to-one and almost surely surjective, which satisfies this strange set-theoretic inclusion, which I would like to ask you not to read, okay? What I would like you to read is the implication of this inclusion: if you have a sequence of measures on symbolic space which escapes to infinity above, then it projects to a sequence of measures on the manifold which escapes to infinity with respect to the Pesin bornology. That's what this inclusion implies, and that's the compatibility of bornologies that we need. So now we finally arrive at the symbolic dynamical result of the talk: if you have an SPR surface diffeomorphism, then it has a symbolic coding with spectral gap, so that the countable Markov shift has spectral gap. And the proof is as follows. You take the coding in the lemma, the coding which gives you compatible bornologies, and you check the criterion that the entropy at infinity is smaller than the full topological entropy. So here is the entropy at infinity above, on the symbolic model; this is this expression. Since projection preserves entropy, this lim sup is equal to this lim sup. Mu and the projection of mu have the same entropy because pi is finite-to-one, almost surely. Because of compatibility of bornologies, if mu tends to infinity above, then the projection of mu tends to infinity below with respect to the Pesin bornology. That's compatibility of bornologies.
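The chain of comparisons this argument runs through can be condensed into one display. This is a summary sketch in notation adapted from the talk, with h-at-infinity taken for the indicated bornology and pi the coding map:

```latex
\[
  h_\infty(\sigma)
  \;=\; \sup_{\mu_n \to \infty}\, \limsup_{n} h(\mu_n)
  \;=\; \sup_{\mu_n \to \infty}\, \limsup_{n} h(\pi_*\mu_n)
  \;\le\; h_\infty^{\mathrm{Pesin}}(f)
  \;<\; h_{\mathrm{top}}(f)
  \;=\; h_{\mathrm{Gur}}(\sigma).
\]
% First equality: definition of entropy at infinity on the shift.
% Second: entropy is preserved because the coding is a.e. finite-to-one.
% Inequality: compatibility of bornologies (escaping sequences project to
% escaping sequences).  Strict inequality: SPR, from Sylvain's talk.
% Last equality: the a.e. finite-to-one, onto coding preserves entropy.
```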
So this lim sup is smaller than this lim sup, because every sequence that contributes to this lim sup projects to a sequence which contributes to this lim sup. Now, by the SPR property from Sylvain's talk, this lim sup is less than the full topological entropy of the diffeomorphism. And because pi is finite-to-one, the topological entropy of the diffeomorphism is the Gurevich entropy of the symbolic model, because every measure lifts to a measure with the same entropy above. So you get that the entropy at infinity of the symbolic system is strictly smaller than its Gurevich entropy. And this implies spectral gap, by the result I mentioned before. So now we have closed the circle. We saw that if you have a general C infinity surface diffeomorphism, then it has entropy continuity of Lyapunov exponents. This implies strong positive recurrence. And this implies the existence of a symbolic coding whose transition matrix acts with spectral gap on a suitable Banach space. And now we apply the classical results of Bill Parry, Pollicott, Sharp, Guivarc'h and Hardy, Rousseau-Egele, Baladi, Sébastien Gouëzel, and you get exponential mixing, the CLT, the almost sure invariance principle, and basically everything else that you want. And with this, I will thank you for your attention. Okay. Thank you very much, Omri. Thank you very much for this wonderful, really wonderful, inspiring talk. Thank you. With so much, so much material; now I'm really tired. Yeah, I'm sorry about that. That's okay. Fortunately, we have the YouTube video. So are there any questions? Yeah, I just wanted to ask, can we go back to that slide where you had all the actual implications, not the diagram, but the actual theorem? Two slides back, I think. This one? That one, yeah. So your coding, you need a special kind of coding. Is the one that you produce with your Markov partitions, the way you prove the symbolic coding exists, different from this coding? It's the same.
Oh, it's the same. It's the same, it's the same. Okay, good. Thank you. Right, so in other words, just one quick one: the extension of the Bowen technique of using Markov partitions basically always produces this kind of thing. Is that correct? That's right. Okay, thank you. That's absolutely right, yeah. The reason is that in my coding, and now I speak to the people who attended Yuri's talk, a partition set in the symbolic coding corresponds to points which are shadowed by epsilon-chains with a fixed symbol. Now, if you fix the zeroth symbol, then you get information not only on the location of the point, but also on the rough direction of the Oseledets spaces, and also on the rough rates of contraction and expansion along the stable and unstable spaces. The coding contains information not just on the location of the point, but also on the hyperbolic features of the point. Oh, good. For this reason, partition sets in our coding are automatically subsets of Pesin sets. And this is the reason there is a connection between escaping cylinders in the coding and escaping Pesin sets on the manifold. Oh, okay. Yeah, thank you, Ryan. Thank you, Sheldon. I actually had that exact same question, so thank you. Okay, so it's a joint question. Yes. Ron. Yeah, I have a question about one of the applications you mentioned, Omri, in the very first section. Actually, at the very end of that section, the results section, you stated this result that if a measure has almost maximal entropy, then it's close, in terms of integrals and of Lyapunov exponents. That's right. I wonder if there's some kind of finite-time version of this. If I think about periodic orbit measures, they have zero entropy, but of course, they equidistribute. And they carry some complexity up to a certain finite time, after which you no longer see it. Is there any corresponding statement for them? That's a very, very penetrating question. You are right.
If you want to get quantitative results for equidistribution of periodic orbits, you have to do exactly what you say. You have to work not necessarily with measures of high entropy, but maybe even with atomic measures which approximate the measure of maximal entropy. And the answer is this. For subshifts of finite type, a result of Kadyrov gives you something like this. It has a finite-time version, where what you have here is not the metric entropy but, I don't know how to say this in words, not lower-case h, but capital H of a partition, okay? So this is true for subshifts of finite type. For countable Markov shifts, we don't know how to do this. Is it the case that there's some sort of clear obstacle that must be overcome, or is there an obstacle to the method of proof? And the obstacle is that Kadyrov uses a very specific Banach space where you have spectral gap. His Banach space is one where convergence in norm implies uniform convergence on the entire space, and that's very important for his proof. For countable Markov shifts, you can't have such a space. You can only require uniform convergence on cylinders; you cannot require uniform convergence on the entire space, because it's simply not true sometimes. So this is an obstacle in the method of proof, which prevents René and me from getting the finite-time version of the inequality that you mean. Maybe a more clever proof can give it, I don't know. But there is sort of a clear obstruction, thank you. Yeah, yeah, we tried and we couldn't. I think Yasha, you had a question. Yeah, I have a question. It's kind of more general, maybe a little bit philosophical. There are results about measures of maximal entropy, for example, my paper with Samuel Senti and Ke Zhang, which also shows that the measure of maximal entropy has exponential decay of correlations. And there are some others in different settings.
So on the other hand, there are a number of results that show that for smooth measures outside of uniform hyperbolicity, you should expect polynomial decay of correlations. So my question is, and in fact, if you consider the geometric potential, you connect the measure of maximal entropy to the Lebesgue measure through the parameter t in the geometric potential; then known results show that there are situations where the equilibrium measures corresponding to different t all have exponential decay of correlations, and this breaks down exactly at the Lebesgue measure. So my question is, first of all, do you believe that your result may be extended to equilibrium measures in the two-dimensional case, for C infinity diffeomorphisms, showing that they all, except t equal to one, have exponential decay of correlations, and whether for t equal to one you should basically expect polynomial decay of correlations? I mean, it's a speculative question, of course, but... No, no, it's again a very good question. I talked about the strong positive recurrence condition for the transition matrix. There is a notion of strong positive recurrence for potentials, which generalizes the notion for matrices: basically, strong positive recurrence of the zero potential, or a constant potential, corresponds to the strong positive recurrence which I discussed today. But you can also talk about strong positive recurrence of other potentials. And this property is a stable property. So it follows from what I said today that for very small t, for t close to zero, t times the geometric potential is going to be strongly positively recurrent, and its equilibrium measure will have exponential decay of correlations. However, this is a perturbative result, and it's only going to work for t very, very small. Now, when you move to t closer to one, there are counterexamples.
For example, Peyman Eslami discussed, or mentioned in his talk, an example due to Martens and Liverani of a Hamiltonian system which preserves the Lebesgue measure, is C infinity as far as I understand, and has sub-exponential decay of correlations. So this is going to be an example which shows that for t equal to one, sometimes you're not going to have the strong positive recurrence property. Now, can I explain on a heuristic level what the difference is between the constant potential and the geometric potential? Not really, except to say that the geometric potential is more complicated than the constant potential. Yeah, I mean, there is a subtle difficulty that you have to avoid, because you can't fight against it all the time. This difficulty is that the continuity, what Jérôme called the entropy continuity, was a statement about weak-star convergence on the manifold, whereas in order to use thermodynamic formalism for countable Markov shifts, you need continuity with respect to weak-star convergence in symbolic space. These are very, very different conditions. And the constant potential has the wonderful property that it's continuous both above and below, and therefore it behaves well for weak-star convergence with respect to the original topology and for weak-star convergence with respect to the symbolic topology. The geometric potential doesn't behave well for both topologies, because it's discontinuous on the manifold. It's continuous above, but it's discontinuous on the manifold. So the analogous result of entropy continuity for the geometric potential is much more difficult. Indeed, it's not true in general, precisely because the potential doesn't play well with weak-star convergence on the manifold, because it is not continuous on the manifold. I don't know if I answered. I mean, I understand what you're saying. Yeah. Okay, thank you very much, Omri. All right, thank you. So I have a couple of questions myself, Omri.
So if I understand the scheme of your proof, you start by showing first that for, let's say, a topologically transitive surface diffeomorphism, you can find a homoclinic class with high entropy on which the coding is transitive, and then you can apply the method for transitive shifts. But there could still be other homoclinic classes on the manifold, even in the transitive case or mixing case. So let me ask the following. Let's call an MME a local MME if it achieves the supremum over some homoclinic class. Do your results apply to those local MMEs as well? Yes, yes. So our results apply to measures of maximal entropy on homoclinic classes. One of the differences between the C infinity case and the C^r case is that in the C infinity case, Jérôme, Sylvain, and I proved that there are only finitely many homoclinic classes with entropy bigger than a constant. Which constant? Any constant; you take any constant, and there are only finitely many homoclinic classes with entropy bigger than that constant. But in the C^r case, you can have infinitely many homoclinic classes whose topological entropy keeps growing without ever achieving a maximum. And on each one of them you would have a local measure of maximal entropy, maybe. You may have, on some of them, a local measure of maximal entropy without the diffeomorphism itself having a global measure of maximal entropy. Okay, so here is what I'm trying to say. Can you tailor it so you find a homoclinic class which carries an SRB but also carries a local MME, so you can say that the SRB is carried by a transitive coding whose transition matrix has the spectral gap property? Assuming in advance that there is an SRB. Assuming in advance that you have an SRB. Yes. Okay, that's wonderful. But what I don't know is how to do it in such a way that the SRB is SPR. In fact, there are examples where it's not true.
Yeah, it's not always true, yeah. But I don't know a condition, what condition to put, to force it to be true. Okay, well, so I have one more heuristic question. Let's forget for a second about the particular coding that you constructed. Is it true that for whichever coding you use to code an MME, it will have to have the spectral gap property? Wow, I don't know. I would expect not, because I can think of many ways of spoiling a coding, but I don't have an example in mind. Okay, thank you very much. Okay, thank you. Vilton? Let me put in a scenario: suppose that you have an induced map, a full Markov map, maybe in a compact setting, for a smooth map. And suppose that you look at the set of all invariant probabilities that can be lifted to this induced map, okay? And suppose that this is a very good induced map: every element maps onto the full image. And the question is: suppose that you have a liftable measure that has maximal entropy among all the measures liftable to this induced map, okay? So I want to relate your condition to this kind of scenario. Look, if you have a sequence of liftable measures, maybe the sequence does not converge to a liftable one. But suppose now that for all liftable measures with big enough entropy, the limits converge to a liftable one. In this case, can you conclude the spectral gap for this induced map? Did you understand my question? I think the answer is... Okay. So you have the closure of all liftable measures, and suppose this closure is a compact set, and suppose that the measure that maximizes the entropy in this closed set is a liftable one. Yeah, so I always have this problem, because different people, when they say induced map, they mean different things. A tower, for instance.
So you see, the problem is that the induced map, if you mean the first return map to the base of the tower, is usually a system with infinite topological entropy. So everything that I said will collapse, because there are many, many measures for the induced map on the base of the tower which have maximal entropy, because the maximal entropy is infinite. Yes, but I'm looking only at the set of measures that are liftable, so that you can go down again. That's the... Still, still, I think that the induced map will have infinite topological entropy. The induced map? Also, I'd like to say another thing. You could have situations, these are the so-called null recurrent situations, or even transient situations. When you look at the induced map, it has... In order to avoid the problems of infinite topological entropy, let's talk about equilibrium measures of potentials. The pressure can be finite if the potential is unbounded from below. You could have situations where the induced map has an equilibrium measure, but when you try to lift it, you either get an infinite measure, or you get a measure whose pressure is less than what the pressure should be. And so I think that it is difficult to pass from conditions on strong positive recurrence of an induced map to conditions on strong positive recurrence of the original map, at least the way that I do it. You understand what I mean? Yes, yes. It's difficult, it's difficult to do it like that. Okay. So actually we are a bit over time, but if Agnieszka doesn't mind, we will take another few minutes, because I have a comment slash question which I would like to make. So, Omri, can you go back to the beginning, to where you talked about the optimal pacing constant, right at the beginning in the statement of results. There we go. The next slide, where you talk about the result. Here, yes, your result.
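[The lifting obstruction being described here can be made precise with the standard Kac and Abramov formulas. This is a sketch in notation of my own — F, Y, and R are not from the talk:]

```latex
% Induced system: F = f^{R} on a base Y, with return time R : Y -> N.
% Kac: a liftable f-invariant probability \mu corresponds to an
% F-invariant measure \bar\mu with \int_Y R \, d\bar\mu < \infty, via
\mu \;=\; \frac{1}{\int_Y R \, d\bar\mu}
          \sum_{n \ge 0} (f^{n})_{*}\bigl( \bar\mu|_{\{R > n\}} \bigr).
% Abramov: the entropies are related by
h_{\mu}(f) \;=\; \frac{h_{\bar\mu}(F)}{\int_Y R \, d\bar\mu}.
% If an equilibrium measure \bar\mu of the induced system has
% \int_Y R \, d\bar\mu = \infty (the "null recurrent" case), the
% projected measure is infinite and the lifting argument breaks down.
```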
So this reminds me of some old work that I did with Vilton and José Alves in the context of non-uniformly expanding maps, in which basically we proved a kind of converse, or at least we studied the converse of this slide. The definition of K was not exactly this, but morally: suppose that the tail of this pacing constant decays at a certain rate — polynomial, stretched exponential, exponential — does that imply that the system has the corresponding decay of correlations? And we proved... It's like this critical set. Well, no, the way we defined it was via the first hyperbolic time, which is not exactly this pacing constant. Almost every point has a time, the first hyperbolic time, which is basically this constant. And then you look at exactly the tail that you've got here: the measure of the set of points where this constant is bigger than M, for example, and how fast that goes to zero, just like you've got. Okay, I didn't know that; I will look it up. Yeah, I can send it to you, but this is in the non-uniformly expanding case. And the thing is, the reason why I wanted to mention it is because we were not able to settle every case. We were able to show it in the polynomial and stretched exponential cases. But we never actually managed, as far as I can remember — I don't know, this is 15 years ago, so maybe Vilton or José can correct me — to crack the exponential case, because we were doing some estimates where we lost some optimality. So I think the reason I wanted to mention it is that I think it is still an interesting open problem; if anybody here is interested, I just wanted to mention it. And also I wanted to ask you if you think, in this context — because we did it in the non-uniformly expanding case.
So of course it would be interesting to look at it in the non-uniformly hyperbolic case: can you prove some kind of converse? Suppose that you satisfy this exponential condition. Does this imply that you have exponential decay of correlations? I think the answer is yes, because of the optimal pacing constant — let me say it in the following way. Every countable Markov partition gives you a first-return Young tower: the base of the tower is going to be a partition set. And okay, what I'm saying now is not written down, but I think it's true — maybe I'm wrong. Is it recorded? Yes. It's too late to start. So I think that if you view the countable Markov partition as a Young tower over a partition set, then you can relate the tail of the optimal pacing constant to the tail of the first return time. Right, that was our approach. Yeah, I think it's true under fairly general circumstances. And I think that then you can use the, by now, well-developed techniques relating the tail of the return time of the Young tower to the rate of decay of correlations. On the other hand, this Young tower that you construct from the Markov partition does not depend on the measure, right? But my condition on the tail depends on the measure. Right, right. So do you think that what you just said might apply to the SRB measure as well? No, no, not at that level of generality, because of the counterexamples we discussed, for example, in Peyman Eslami's talk — Martens and Liverani. I mean, there are examples of C-infinity diffeomorphisms which preserve the volume and which are not exponentially mixing. No, no, no, that's not what I meant. Yes, but you can still ask this question for polynomial mixing.
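[The "well-developed techniques" alluded to here are presumably Young's tail-to-mixing-rate estimates for towers. A rough statement of that dictionary, in notation of my own (R the return time to the tower base, C_n a correlation of Hölder observables), offered as background rather than as a claim from the talk:]

```latex
% Correlations of Holder observables \varphi, \psi:
C_n(\varphi,\psi) = \Bigl| \int \varphi \,(\psi \circ f^{n})\, d\mu
  - \int \varphi \, d\mu \int \psi \, d\mu \Bigr|.
% Young-tower dictionary between the tail of R and the mixing rate:
\mu\{R > n\} = O(n^{-\alpha}),\ \alpha > 1
  \;\Longrightarrow\; C_n(\varphi,\psi) = O\bigl(n^{-(\alpha-1)}\bigr),
\qquad
\mu\{R > n\} = O(\theta^{n}),\ 0<\theta<1
  \;\Longrightarrow\; C_n(\varphi,\psi) = O(\tilde\theta^{\,n})
  \ \text{for some } \tilde\theta \in (0,1).
```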
So if the tail of the optimal pacing constant decays polynomially in that case, you know, the relation could apply for different rates, not necessarily just... I see, I see. There is a chance that it's true. I think there is a big chance that it's true. Okay, thank you. So we've gone way, way over time. I'm sorry, Agnieszka and everyone. So let's take a six-minute break and start again with the... Thank you very much, Omri, first of all. Thank you.