We can give Peyman a chance to go a little over time as well. I don't think I will go over time. Okay, I'll go under time. No problem. Nobody ever complains about that. Okay, so Peyman, it's very nice to have you here. It's a great pleasure to introduce our second and final speaker for today, Peyman Eslami from the University of Rome Tor Vergata. He will talk about mixing rates for symplectic almost Anosov maps. Thank you very much. Thanks to ICTP and the organizers for organizing such a nice event, and especially for making it available to basically everyone with an internet connection, so that they can at some point contribute to the mathematics as well, even without an affiliation. Okay, so I will be talking about mixing rates for symplectic almost Anosov maps. I chose this topic because I wanted to give a concrete class of examples where you can see some of the tools and ideas that were explained in the last couple of weeks in the courses of Yuri Lima and José Alves. What I will be talking about has components from both courses, but mostly from the one on Young structures, and I will try to use that terminology as well. So the plan is: I will tell you what almost Anosov diffeomorphisms are in general; then I will introduce the specific class of examples and the main result, which, as the title suggests, concerns mixing rates for these systems; and then I will explain some of the main ingredients of the proof. There will be some pictures, and as we go on the pictures become more sophisticated, so I hope the audience appreciates that. I took the definition of an almost Anosov diffeomorphism from a paper of Huyi Hu from 2000. Let M be a C-infinity two-dimensional compact Riemannian manifold without boundary, and let λ denote the Riemannian measure on M.
F is a C² diffeomorphism on M, and such a diffeomorphism is called almost Anosov if there exist two continuous families of cones, stable and unstable, such that, except on a finite set — and this is probably the most important part of the definition — you have the usual invariance of the cones, and the differential expands vectors in the unstable cone and contracts vectors in the stable cone. For most of this presentation, this finite set will consist of a single fixed point of F. Okay, so this is the general definition of almost Anosov diffeomorphism given by Hu in that paper in 2000. Now let me briefly mention some of the — could I ask a question, please? Yes, please. When you say subset, do you mean possibly equal, or is the closure inside the interior, or just a subset? Yes, just a finite subset. This is how it works. Yeah, so this is a copy-paste of the definition. Of course, I will consider specific examples, and in all of the specific examples there are additional constraints, and you will see that they meet the definition. I don't prove anything for general almost Anosov diffeomorphisms. Okay, thank you. Okay, so the works. I'll mention some of the works that I know of; in case I missed something, I would be happy to know more. In 1995, Hu and Young introduced a class of almost Anosov diffeomorphisms, and their goal was to prove that these examples don't admit a finite SRB measure. In their examples, the non-uniformity was only in the unstable direction. The first instance where I saw the term almost Anosov is the 2000 paper the definition is taken from, where Hu proved the existence of finite or infinite SRB measures — and both can happen. In those examples there are some additional non-degeneracy conditions, and the differential at the fixed point is the identity.
Hu and Zhang in 2019 also proved polynomial upper and lower bounds for the same class of examples, in the finite and infinite measure preserving cases. If I remember correctly, the upper and lower bounds do not have the same exponent, so they're not sharp in that sense. Later, Bruin and Terhesio proved more precise mixing rates for the same examples of Hu, in the finite and infinite measure cases. Also, for flows, limit laws were obtained by Bruin, Terhesio and Todd, I think, and also by Henk Bruin himself. Another example I'm aware of is the Katok map: in a preprint — I don't know if it is published or not — Pesin, Senti and Shahidi proved polynomial upper and lower bounds for these maps. Moreover, they show that any smooth compact connected oriented surface admits an area-preserving C^{1+} diffeomorphism with nonzero Lyapunov exponents which is Bernoulli and also has polynomial lower and upper bounds on the decay of correlations. These last two results have something in common: they take advantage of the possibility of viewing the map near the bad fixed point as the time-one map of a flow — a Hamiltonian flow in the latter case. I will comment on the importance of this later, when I consider specific examples. There is another class of examples, by Liverani and Martens, and I have separated it from the others for two reasons. The main reason is that these are exactly the examples I will consider in this talk. The other is that in these examples you cannot really view the map as the time-one map of a flow — at least that's not clear — and also the derivative at the fixed point is not the identity. So I'll mention the specific examples. These are the examples from Liverani and Martens.
I will comment a little about that before mentioning the main results and then the ingredients of the proof. So. Yes, yes, a question. Can you please go back to the slide where you showed the results by Bruin and Terhesio and by Hu? Yes, this is Bruin and Terhesio. This is Hu. So in your definition of an almost Anosov system, you only assume something about an indifferent finite set. But, if I understood correctly, this requires some non-degeneracy, something to do with being topologically conjugate to something with a Markov partition. I think at least Hu's work uses this fact, right? Yeah, people use — I will actually use a Markov partition as well. So my question is, since it's not mentioned here: can you show polynomial rates also without assuming that conjugacy, just in the general setup? No, I'm not proving anything in the general setup. I'm just proving something for a class of systems. In a few slides you will see a specific Markov partition for this specific class of maps. Okay, so your setup is one where you do have a Markov partition? Yes, exactly. But since you asked the question, I have to spoil the surprise — though I guess most people already guessed, because I mentioned that this is related to Young structures: I will construct a Young structure for this class of maps. That's the important point. Now, of course, if you know a finite Markov partition for your system to begin with, the task of constructing the Young structure is much easier. But a Markov partition is not necessary; you might be able to do it without one, though the technicalities would be more involved. I think I understand Snir's question, and Peyman's comment is that these results all have some additional technical assumptions, right? They're not results using just the definition given previously.
Yeah, yeah. As you saw, for Hu I mentioned non-degeneracy conditions, and I didn't say what these are. And as Snir mentioned, Hu also assumes, I think, the existence of a Markov partition. It is generally believed that these maps are topologically conjugate, on the torus, to the linear map, but there is no such result in the literature. So either you assume it, or you construct some Young tower without assuming a Markov partition, or you construct the partition by hand. In our case, we construct it by hand, and I will show a picture of the Markov partition in a few slides. Thank you. So before showing you the actual Markov partition, here are the specific examples that Liverani and Martens study. The map T is the almost Anosov diffeomorphism given by T(x, y) = (x + h(x) + y, h(x) + y), where h is a map of the circle; I denote the circle by T¹ — that's the notation here. The assumptions on h are: h(0) = 0, so the point (0, 0) is a fixed point of T; h'(0) = 0, which is responsible for the origin being a neutral fixed point; and h'(x) > 0 for x ≠ 0, which is responsible for the hyperbolicity away from the origin — non-uniform hyperbolicity, so to say. Conditions 2 and 3 imply in particular that 0 is a minimum of h', so h''(0) = 0 and h''' is non-negative around 0. In addition, we assume that h'''(0) is strictly positive, and we make the symmetry assumption h(−x) = −h(x). Overall, with h a C-infinity (or sufficiently smooth) function, these conditions mean you can write h(x) = bx³ plus terms of order 5, where b is a positive real number. By the way, this symmetry assumption on h also induces some symmetry in T, but I haven't written that down.
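As a concrete illustration of the hypotheses just listed, here is a minimal numerical sketch. The specific choice h(x) = x − sin(2πx)/(2π) is my own assumption, chosen only because it visibly satisfies all the stated conditions (h(0) = 0, h'(x) = 1 − cos(2πx) > 0 for x ≠ 0, oddness, and h(x) = (2π²/3)x³ + O(x⁵)); it is not necessarily the example used in the talk or the paper.

```python
import math

# A concrete h satisfying the stated hypotheses (an assumption of this sketch):
#   h(0) = 0, h'(x) = 1 - cos(2*pi*x) > 0 for x != 0, h(-x) = -h(x),
#   and h(x) = (2*pi^2/3) * x^3 + O(x^5), so b = 2*pi^2/3 > 0.
def h(x):
    return x - math.sin(2 * math.pi * x) / (2 * math.pi)

# The map T(x, y) = (x + h(x) + y, h(x) + y) on the 2-torus, with coordinates
# taken in [-1/2, 1/2) so the neutral fixed point sits at the origin.
def T(x, y):
    def wrap(t):
        return (t + 0.5) % 1.0 - 0.5
    y1 = h(x) + y
    return wrap(x + y1), wrap(y1)

# (0, 0) is fixed, and since h vanishes to third order there, an orbit
# starting nearby lingers: after 100 iterations it has barely moved.
x, y = 1e-3, 0.0
for _ in range(100):
    x, y = T(x, y)
```

Iterating a point far from the origin instead shows the fast, hyperbolic behavior coming from h'(x) > 0.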
But if you see some symmetry in the pictures, it is because of this assumption. Just a point about the motivation of Liverani and Martens: they wanted to study the simplest class of two-dimensional systems that retain, as they write, some Hamiltonian behavior and exhibit anomalous behavior, which is usually due to some weak hyperbolicity. In the specific examples I mentioned, in contrast to the other examples from the papers above, the linearization of the dynamics at the fixed point is a shear. This is the generic behavior near a neutral fixed point of a 2D symplectic map, simply because symplecticity means that the determinant is one: if you have an eigenvalue equal to one, the trace is two, and such a matrix generically is a shear, not the identity. So Liverani and Martens were motivated by physical examples. This is the simplest picture, showing the stable and unstable manifolds of the fixed point, which throughout the rest of the talk I denote by bold 0 — this is just the origin. For me, blue is always stable, and in the pictures the blue curves are decreasing as well; red is unstable. So this is the picture of the stable and unstable manifolds at the origin. Liverani and Martens in 2005 obtained quite a number of estimates for these maps — quantitative estimates on hyperbolicity, some of which I've listed here: the stable and unstable distributions are C¹ except at zero; the stable and unstable manifolds are at least C², which is all they need for their estimates; and there are invariant cones: there exist K⁺, K⁻ positive such that the unstable direction (1, u) at the point (x, y) — note that the unstable curves are always increasing — satisfies this property.
Moreover, the derivative in all directions is bounded by the reciprocal of this θ, which is related to the angle between the stable and unstable directions. They also have estimates on the regularity of the holonomy away from zero. We will use these results from the paper of Liverani and Martens. By the way, the main theorem I will present is joint work with Carlangelo Liverani. In that paper they also obtained decay of correlations, but with the rate n⁻² (log n)⁴. They used a zero-noise-limit method: you introduce some randomness into the system, which makes the correlations decay exponentially fast, but as you let the randomness go to zero the rates of course become worse; if you are able to control this up to the limit, you get decay of correlations for the limit, and this is what they obtained: n⁻² (log n)⁴. A bit earlier, in 1998, Artuso and Prampolini did numerical experiments on maps which fit into the class that Liverani and Martens studied, and their numerics suggested a faster decay rate, like n^(−2.5). This at least makes one curious to see whether that is the true rate — not because the true rate itself is so important, but to understand whether the methods used to obtain such a rate are efficient. The same zero-noise-limit method was originally used for the LSV maps to prove rates of mixing, and there it gave the right rate of decay of correlations. As you can guess, if the rate were correct here, I wouldn't be talking about this. The sharp rate is actually n⁻³, and that is the main result of this talk.
Since I already told you that the proof of this theorem uses Young structures, this also shows the efficiency of Young structures as opposed to other methods like the zero-noise limit. Of course, it leaves open the possibility that the zero-noise limit was not carried out in the optimal way, but I don't know much about that, so I cannot comment on it. So we have this theorem: for every η between zero and one, there exist constants C₁, C₂ such that for any two η-Hölder observables on the torus whose integrals are normalized to one — and which, for the lower bound, are supported away from the origin — the following estimates hold: the correlations, which were defined yesterday I think, but are also defined here in the footnote, are bounded below by C₁ n⁻³ times the norms of the observables, and bounded above by a different constant C₂ but the same rate, n⁻³, times the norms of the observables. And if one of the observables has integral zero, there is the possibility of a better rate, up to n⁻⁴. Okay. If there are no questions, I will go into the ingredients of the proof. These are the main ingredients. The main one is a Young structure. It's not necessary, but we use an initial Markov partition to make the task of constructing the Young structure easier. The second point is a first return map plus further inducing: we obtain the Young structure in two steps. First we take the first return map to some subset away from the neutral fixed point, and then we induce further to something that satisfies all the properties of a Young structure. And in this world you have to estimate precisely the tail of the return times; this is by far the most difficult part of the problem.
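In symbols, my paraphrase of the statement as described above (the footnote's definition of the correlations is the standard one) reads:

```latex
\[
\operatorname{Cor}_n(f,g) \;=\; \int f\cdot (g\circ T^n)\,d\mathrm{Leb}
\;-\;\int f\,d\mathrm{Leb}\int g\,d\mathrm{Leb},
\]
\[
C_1\, n^{-3}\,\|f\|\,\|g\| \;\le\; \operatorname{Cor}_n(f,g)
\;\le\; C_2\, n^{-3}\,\|f\|\,\|g\|,
\]
```

for η-Hölder observables f, g with integral normalized to one (the lower bound requiring supports away from the fixed point), with an improvement up to order n⁻⁴ when one observable has integral zero.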
Most of the paper — most of the proof of this theorem — boils down to estimating the tail of the return times. That's where all the geometry comes in, and that's why the Markov partition simplifies the task. In addition to the Markov partition, there is one other component we use, the so-called quasi-Hamiltonian. If you remember, early on I mentioned that two of the other papers use the fact that the map near the fixed point is the time-one map of a flow, or of a Hamiltonian flow. We don't have that here, but what takes its place is a quasi-Hamiltonian, which allows us to control the shape of the trajectories when they get near the fixed point. Another component is the dynamics on the stable manifold: a point on the stable manifold of the origin goes towards the origin, and of course it slows down, but you can control this rate of approach to the fixed point. So these are the main ingredients. If you were to do this in general, maybe you could do without the Markov partition, but the other ingredients are quite useful, and you can deploy them in general situations. Okay, so here's the Markov partition. Why is this a Markov partition? Well, if you stare at it long enough, you will see that it is. The way you construct it is that you take the stable and unstable manifolds and follow them until they close back on each other. You can argue, using the symmetry that exists in the map, that these curves really do close on each other, and you arrive at this shape. Now you can play a little game and see that the pieces sticking out of the fundamental domain of the torus can be translated by Z² back inside the square. For example, maybe I do one or two: this one you can translate back into here. Maybe this one goes there. This one goes here. Did I make a mistake? Where does this one go? It goes to the top right corner. Oh, sorry. This one goes there, right? Yeah. Okay.
And now this one goes there. Okay. So it's a Markov partition with three elements. In which sense are you saying it's a Markov partition — what is the main property of these rectangles? The stable sides, in forward time, go into the stable sides. The stable sides are blue and the unstable are red. Everything is constructed from the stable and unstable manifolds of zero, and they fill out the whole domain. In backward time, the unstable sides also go inside the unstable sides, and that's the Markov property. If I understand, excuse me, I'm sorry — from this figure it looks like the stable and unstable manifolds intersect non-transversally at the singular point. Is that correct? Exactly, exactly. That's what makes this class of examples different from the other papers I mentioned: in all of the other papers the distributions are uniformly transversal everywhere. Here they become tangent, and this makes the geometry more difficult, and hence it makes it more difficult to estimate the tail of the return times. Besides that, you need upper and lower bounds for the tail of the return times, and you need to estimate things precisely. Is this fixed point still neutral, or does the differential now have a shear effect because of the tangency? The eigenvalues of the derivative at the fixed point are one. Okay, so it is neutral, I see — or indifferent, I don't know what you call it. The eigenvalues are one, but the derivative is not the identity; that's what I was trying to say. I see. Okay. So here's a little bit more about the inducing step. We want to obtain a Young structure, and we obtain it in two steps. Now, this is maybe a very simple idea, but I think it's underestimated.
This idea of obtaining a Young structure in two steps, or in multiple steps, I think goes back to Roberto Markarian, who used it to prove decay of correlations for billiards. Maybe this was not mentioned last week in the courses, because when Alves constructed Young structures he used hyperbolic times and did it in one shot. But inducing schemes in general can be chained together: you can do them in multiple steps; you don't have to do everything in one shot. The good thing about this is that you can divide the task into two independent parts. First you get rid of the non-uniform behavior near the bad region. Then you have a uniformly hyperbolic system — it will have discontinuities, namely the boundaries of the sets that return at different times, but in any case it's uniformly hyperbolic and should have an exponential tail of return times. Then you chain them together: if the original system had a slower decay rate for the tail of the first return times, adding something exponential on top keeps the full return times comparable to the first return times. You can make these estimates easily. So this two-step procedure is, I think, very nice, and I haven't seen it used that often. Well, it remains to be seen whether it's — yeah. Excuse me, Peyman. Yeah. I just wanted to mention that this is exactly the approach that Matheus was explaining yesterday, which we used too. Yeah, there's a reason for that — well, this comes later, but at the end of this work we use a result by Bruin, Melbourne and Terhesio, which is the same one that you use. What their main theorem requires is the existence of what they call a Chernov–Markarian–Zweimüller structure, and that's exactly a two-step inducing procedure, right? So, I mean, the paper I'm talking about was published last year.
We were writing these papers at the same time, and the theorem of Bruin, Melbourne and Terhesio is the common part — that is why it's used in both works. I also want to make a further comment that in our example, at the degenerate set where the curvature is zero, we have exactly that the stable and unstable directions coincide. So it's very similar. Yeah, thank you very much. In fact, when Carlos Matheus was talking about this, I was thinking exactly the same thing, comparing the two examples. Thank you. So, I'm actually not sure whether I have understood, whether I'm familiar with these two-step procedures. But just to understand: this neighborhood O of zero, is it a piece of a refinement of the Markov partition? So — you want to induce on a set away from the fixed point, and you want it also to be some refinement of the Markov partition. Our Markov partition has three elements, and all three of them intersect at the fixed point, so you cannot choose one of those. So we refine it forwards and backwards, and then you see exactly this picture. I've drawn here the original Markov partition, but with straight lines, because it gives me more room to draw other things; in reality things are tangent here. The diamond shape in the middle is a union of four elements of the refinement of the Markov partition — this is exactly the set O. We take the first return to the part outside of O; this is the first inducing step. It's just the first return map to the complement of this diamond-shaped region. These pieces, of course, do go through O before they come back, right? I mean, it's the first return time to the complement, but the pieces pass through O on the way back. Right? So you say... I'm not sure if I completely...
Let me say what you're trying to say, I think, in a more accurate way. Now I explain what O is. Take the preimage of O: you will see this rectangular shape here; these are the points that map in one iteration into O. Now take the image of O: you will see this set; these are the points that map in one iteration out of O. Now consider T⁻¹O minus the set O — O is the diamond-shaped region — and denote this by Q; this is just this set here. Q is divided into sets Rₙ, where each Rₙ is the set of points that take n iterations to get out of O. It's exactly as shown in the picture — maybe I should zoom. This point goes into O, spends some time here, and then goes out. A similar thing happens here: points from here go here and get out in n iterations. Same here, and same here. And that's when you stop the inducing, right? I mean, that's the inducing? This is just the first return, yes — basically, the point spends all its time on the way. I see. But the complement of this is Y, right? Of course, after you get out you can spend some time outside, but eventually you come back in here and go through O again. Yes, yes. So this is just the first return map, to Y, where Y is the torus minus O, and once you've returned, the return map is a uniformly hyperbolic map, but with discontinuities. So this is what you call the first step of the inducing, and that's the important part — is it the tail of this first return time that takes most of the work to estimate? Sure. Once you have gotten rid of this part, you're left with a uniformly hyperbolic map, and then you still have to do some argument to build a Young structure for that one.
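In symbols, and up to the exact indexing convention (this is my paraphrase of the decomposition just described, not a formula from the paper):

```latex
\[
Q \;=\; T^{-1}O \setminus O, \qquad
R_n \;=\; \bigl\{\, z \in Q \;:\; T^{j} z \in O \ \text{for } 1 \le j \le n,\ \ T^{\,n+1} z \notin O \,\bigr\},
\]
```

so Q is the set of points about to enter O, and Rₙ collects those that then spend exactly n iterations inside O before exiting.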
But you've gotten rid of the non-uniformity. Sure, sure. If you want to do this in general, you have to understand what the discontinuities look like in general, and this is what we have swept under the carpet by using a Markov partition to begin with. But I should also mention that your original system is a smooth system: the discontinuities are an artifact of the method, and Young structures are not unique. If you induce somewhere else, or in a different way, maybe you can control the discontinuities in the way you like, so there is some flexibility there as well — the shape of the discontinuities is not forced on you. So I like this multi-step inducing scheme more than doing everything in one shot. I mentioned this as an idea of Markarian, but let me also mention an idea of Chernov, which is also useful: for a uniformly hyperbolic system it is enough to do estimates in one step, like the one-step expansion condition used in billiards. It allows you to consider only finitely many steps rather than the full trajectory of points. So maybe this is something related as well. Okay, anyway, let's move on. That was the first part of the inducing. Let me just say a word about the quasi-Hamiltonian. What is the quasi-Hamiltonian? It is a function H on the torus, which you can obtain by just writing this relation and doing some Taylor expansion. Using this quasi-Hamiltonian, you can control the trajectories up to some error — and by that I mean exactly the relation you see here: the function H applied to the image of a point, minus the function at that point — the difference is extremely small. This is all the control we need; maybe you could use a more complicated polynomial for the Hamiltonian and get even better control if you want, but this suffices for our needs.
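To make the "nearly conserved energy" idea concrete, here is a small numerical sketch. The explicit formula E(x, y) = y²/2 − ∫₀ˣ h is only a leading-order candidate that I obtain by viewing the map near the origin as a discretization of the flow ẋ = y, ẏ = h(x); the paper's actual quasi-Hamiltonian is a refinement of this, and the choice h(x) = x − sin(2πx)/(2π) is again my own assumption.

```python
import math

# A concrete h satisfying the talk's hypotheses (an assumption of this sketch).
def h(x):
    return x - math.sin(2 * math.pi * x) / (2 * math.pi)

# T(x, y) = (x + h(x) + y, h(x) + y); no wrapping needed near the origin.
def T(x, y):
    y1 = h(x) + y
    return x + y1, y1

# Leading-order candidate quasi-Hamiltonian: E = y^2/2 - V(x) with V = int_0^x h,
# i.e. V(x) = x^2/2 + (cos(2*pi*x) - 1)/(4*pi^2).  E is exactly conserved by the
# continuous-time flow x' = y, y' = h(x), and nearly conserved by one step of T.
def E(x, y):
    V = x * x / 2 + (math.cos(2 * math.pi * x) - 1) / (4 * math.pi ** 2)
    return y * y / 2 - V

# Relative change of E over one step of the map: small, and smaller still
# for points closer to the neutral fixed point.
def rel_drift(x, y):
    return abs(E(*T(x, y)) - E(x, y)) / abs(E(x, y))
```

Evaluating `rel_drift` at points approaching the origin shows the approximation improving, which is the sense in which the level curves of the quasi-Hamiltonian track the trajectories better and better near zero.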
This ingredient also exists in the paper of Liverani and Martens, where they initiated this study; we use it here. Together with that, there is the dynamics on the stable manifold. This is again a lemma from Liverani and Martens 2005, and basically it says that if you're on the stable manifold going to zero, the x-coordinate decays like one over n — these points go to zero like 1/n. The importance of these two ingredients, the quasi-Hamiltonian and the dynamics on the stable manifold, is that we follow the trajectories of points. If you have a point with a high return time — it's in one of these Rₙ with large n — then it's very close to the stable manifold of zero, so there you can use the dynamics on the stable manifold to control the trajectory. When it gets near zero, you can instead use the quasi-Hamiltonian to control the trajectory. Maybe I should zoom in again: here you use the dynamics on the stable manifold to approximate the trajectory as it goes in, but closer to the origin you use the quasi-Hamiltonian to approximate the trajectory near zero. There will be another, more sophisticated picture where you will see exactly what's happening. In our case we are able to prove that the tail of the first return time is of order n⁻⁵: the points that enter the region O in one step and spend n iterations there have measure of order n⁻⁵. If you then do the next inducing scheme, you get something with exponential tails. Usually when you connect two return times together you pick up log n factors, log n to some power, but there are ways to possibly get rid of them, and in our case you can. So you get the same estimate; the constant in front is of course different, but the rate is the same. So, yeah. Maybe let me say something about the second inducing scheme.
First we consider the return from Y to itself; Y is everything outside the diamond region. Then, to do the further inducing, we restrict ourselves to the set Q — Q is the union of these two sets. This will be the base of the tower now. And it's on this region that the return map is uniformly hyperbolic, with tail of return times of order n⁻⁵. So you can see here exactly what Alves was talking about in his lectures: Q is the set with a hyperbolic product structure on which you start to check the conditions of Young. Of course there are other conditions, right? But again you can check them using the estimates from Liverani–Martens 2005, and you can deal with the discontinuities thanks to the fact that everything is a Markov element — you have these Markov partitions, and all the pieces are formed by stable and unstable manifolds of zero. Okay. Once you have this estimate on the tail of the return times, to get the mixing rate — as I mentioned to Yuri, we both used the same thing — we use the result of Bruin, Melbourne and Terhesio, which requires a Chernov–Markarian–Zweimüller structure. As I said, this two-step inducing was used for billiards by Markarian, Chernov and co-authors; that's where it comes from, and the paper of Bruin, Melbourne and Terhesio is the first place it is called by this name. This is their theorem; I put it here so you can see why it's useful to us. It provides an estimate for the correlations which is much more precise than the usual upper bound, and the crucial input is the estimate on the tail of the return times. For us, μ is Lebesgue measure, because the symplectic map, as I mentioned, preserves area. And the estimate we obtained, for φ₀ equal to j, is of order j⁻⁵, right?
So if you take φ₀ greater than j, this is of order j⁻⁴; then you sum up, you lose another degree, and you get j⁻³, and this is why the decay rate you get is n⁻³. You might ask: what does a Chernov–Markarian–Zweimüller structure have to do with this result? Well, in this result there are objects that depend on this kind of structure, like this space Γ, which I haven't defined here; so the theorem requires this kind of two-step procedure. I guess I've now explained all the ingredients; it only remains to say how you obtain the most difficult part, the estimate of the tail of the return times. Maybe I can say a few words about that. This is the most sophisticated picture I could produce. Let me first — okay, so this is the big picture, in a very small neighborhood of the origin; it's a rectangular neighborhood of size of this form. If I go back to figure three, I can tell you where this rectangle is: it's something here. So the picture on the other slide is a kind of precise picture of what you see there. What is the set O? The set O is the diamond region with boundary in black, which I've just highlighted in purple — that's my set O. You can see the set Q here; maybe I'll change color: Q is here, this part and this part. And now the colorful rectangles you see are the points with different return times. So let's go into black: this one in particular is R₂, then R₃, R₄, R₅. The same here: R₂, R₃, R₄, and so on. So they have four components; this is also R₂, R₂. For example, the points in R₂ are the points that get out of O in two steps: they go here — one, two — and get out. Points in R₃ go here, here, and there. If you look closely, these sets are highlighted here: these are the images of the pieces as they exit.
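To recap the counting from a moment ago in symbols, with φ₀ the first return time and μ Lebesgue measure (the ≍ signs hide the constants):

```latex
\[
\mu(\varphi_0 = j) \;\asymp\; j^{-5}
\;\Longrightarrow\;
\mu(\varphi_0 > j) \;=\; \sum_{k > j} \mu(\varphi_0 = k) \;\asymp\; j^{-4}
\;\Longrightarrow\;
\sum_{j > n} \mu(\varphi_0 > j) \;\asymp\; n^{-3},
\]
```

and this last sum is what produces the mixing rate n⁻³ in the main theorem.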
And the points that you see in black and yellow, these are typical trajectories. So you can see that if you take a typical trajectory starting in R4, it goes here, one, two, three, four, and it gets out. The thin lines that you see in the picture are the level sets of the quasi-Hamiltonian. So there's this energy function whose level curves approximate the paths of the trajectories, and this approximation becomes better and better as you approach zero. This is what you see here in this black trajectory, for example: it lies very precisely on a level set of the quasi-Hamiltonian. So our task is to estimate the Lebesgue measure of the sets of given return time; these are basically the sets Rn. And here one of the things we use is the invariance of the Lebesgue measure. It turns out to be more convenient, instead of estimating the measure of that rectangle there, to estimate it geometrically when it's closer to the y-axis, that is, somewhere here. Well, maybe this one is better. No, this one. There. And we estimate the measure of such a rectangle by the distance between successive points of the trajectory of x: so if x spans, say, k iterations, the distance between x_k and x_{k-1}, and also the distance between the y-coordinates of the points. In order to do this, you need to be able to approximate these values. So we relate them to the quasi-Hamiltonian, so to some energy; then we relate the energy to the time the points spend in the region. And that way we relate the measure of the rectangle to the time, and that gives the tail of the return times. So either you stop here, look at the picture and are happy, or you look at the paper and read 12 pages of little lemmas and calculations. They're simple calculations once you know what you're trying to do, but it's a little bit long and involved. But this is what happens when you try to get your hands dirty and take a specific example.
And the more it's related to a physical system, or the more generic it is, the more complicated it becomes, but that's life. Oh, there's one thing that is drawn here that I'll explain, and then I finish. So here you see two parabolas drawn in light blue, these ones here and there. These parabolas divide the trajectories into two parts. Outside of the parabola, we use the dynamics on the stable manifold to approximate the trajectory; once it goes inside the parabola, we use the quasi-Hamiltonian to estimate the trajectory. And a nice property of this system is that once the trajectory first enters this region, its y-coordinate is almost constant. So it starts off following the stable manifold. Of course it cannot follow it forever, because then it would end up at zero; it needs to get out. But then once it starts to get out, it just goes in a straight line, and so the quasi-Hamiltonian controls the shape of the trajectory. You also have control on the speed, because the speed of the trajectory, if you think about its x-coordinate, is just X_{n+1} minus X_n, and for us, if you just look at the formula, that's Y_n plus h(X_n), which is just Y_{n+1}. So the speed at which you approach zero is also related to the y-coordinate and to the quasi-Hamiltonian. And putting all of this information in, you get at the end an estimate on the tail of the return times, which is this one, right there. And the theorem of Bruin, Melbourne and Terhesiu finishes the job. I should mention that all of these results on lower bounds use the work of Omri Sarig and the refinements of it by Sébastien Gouëzel. Okay, I'll finish here. Thank you very much. Thank you very much. Thank you. This is really a very nice construction, congratulations, and a very nice talk. Thank you. Are there any questions or comments for Payman? Dominic, yes. Yes. Hi.
So, the Markov partition that you construct, or that you use for this: you start with the Markov partition to create the Young structure in the course of this proof, correct? Yes. So how important is the conjugacy between the almost Anosov map that you have here and a linear Anosov diffeomorphism of the torus? We don't use it at all. I mean, this is commonly believed; if you knew that, then you would know that you have a Markov partition, but we only use the existence of a Markov partition. Since we don't know that the conjugacy exists, we just say, okay, here it is: there is a three-element Markov partition that you can use. Right, I see. The reason I ask is because there actually are results in the literature about a conjugacy between the almost Anosov map and the linear Anosov map, but with a different differential at the fixed point. It's a result of mine from a few years ago, and the differential there is the identity, not like in your maps. So I'm not sure if it would work in the same way, but it's possible that there might be a... This would be very helpful if you have such a result, because I remember seeing some results, but they didn't exactly fit; we couldn't exactly apply them. But if you have such a result, I think it would be... I mean, like I said, it's a different class of maps, so I'm not sure if it would work exactly in the same way as it does here, but I could put the arXiv link in the chat, which might be useful if anyone's interested. On the other hand, about the point of view of doing this without the Markov partition: I think this is also possible, but maybe we would have had to do double the work that way, and we would really have had to repeat a lot of results that were obtained by other people; we would have had to redo them ourselves.
So we just took the easiest route. That's great, that's great. Thank you very much. So, Jerome, you have a question. Maybe it's about the distinction between the case where the average of the observables is nonzero and the case where it is zero. In one case the decay rate is different, yes? Can you comment on this? So this is just the general type of result you get for lower bounds. I don't know, maybe Sarah can comment on this better; I don't know if this was part of his work or whether it came later with Sébastien Gouëzel, who proved this kind of result. It was Sébastien. Okay, so it was a result by Sébastien, and since I don't know all the details I cannot comment on why this happens, but it happens in general: when the average is zero, you can have a faster decay rate for those observables. Is it related to the lack of smoothness of something like that? I don't see the relation, I don't know. Another question was about this quasi-Hamiltonian. You said it has some physical origin, because it looks like some kind of miracle. No, it's not an exact Hamiltonian flow. Having the dynamics exactly represented by a flow, that would be a miracle; this is just an approximation. And it comes from the fact that h is C-infinity smooth: you just do some Taylor expansion and you obtain this function that approximates the trajectory up to this error, which was, what was it, x to the eight plus y to the fourth, something like that. So it's not such a miracle in this sense; you're not saying that trajectories exactly lie on the orbits of some Hamiltonian flow. So what do you require? What kind of condition? So here, okay, you have this formula for the map T with the function little h, and this little h is basically x cubed plus a big O of x to the fifth.
If you change this formula for h, will you still get another quasi-Hamiltonian with similar properties? Well, you can change h, and still... I don't know how you want to change h. My question is, how general is this? What do you need to have a quasi-Hamiltonian? I would say just h being C-infinity. Okay, so just the fact that you have this tangency. Yeah, and some properties: you just write down the Taylor expansion and see what terms cancel out and what remains. If the bad, lower-order terms all cancel, then you're happy, you have a good error term; if they don't, then you're not happy. Sorry, I don't want to jump in, but it's connected to this. Jerome, is your question about whether this whole construction and the results work for perturbations of this? Is that what you're asking, how much you can change? Yeah, how much I can change the precise form, and what are the assumptions that you really need. Yeah, I had the same question actually: can you remove some of the explicit... I would say, let's keep at least the first four. The quasi-Hamiltonian is obtained in Liverani-Martens 2005, in some footnote, but I think it's just writing it out: expanding H composed with T and subtracting H. So the quasi-Hamiltonian is a consequence of these; these are the assumptions, and that's it. You write down an ansatz, which is just some polynomial such that, if you apply it to T and subtract, you can cancel the lower-order terms. So the actual formal assumptions are the ones that are on this slide, only these five, that's it. Yes. But the tail will depend on something else; it will depend on... So are you saying that the tail being of this form automatically follows from these assumptions, that we will always get this n to the minus five under these assumptions? Yes, yes, exactly, that's what I'm saying. Yeah, the estimates.
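To illustrate the ansatz-and-cancel game being described, here is a toy numerical sketch (my own illustration, not the construction from the Liverani-Martens footnote): take the model map T(x, y) = (x + y + h(x), y + h(x)) with h(x) = x^3, which is area preserving, and compare the leading-order energy y^2/2 - x^4/4 of the approximating flow with a candidate polynomial carrying one correction term. The correction x^3*y/2 is an assumed ansatz chosen so that the lower-order terms of H composed with T minus H cancel; it is for illustration only.

```python
# Toy illustration (assumed model, not the paper's exact formulas):
# model map T(x, y) = (x + y + h(x), y + h(x)) with h(x) = x^3.

def T(p):
    """One step of the model intermittent symplectic map."""
    x, y = p
    hx = x ** 3                 # h(x) = x^3 + O(x^5); keep only the leading term
    return (x + y + hx, y + hx)

def det_DT(p):
    """Jacobian determinant of T: (1 + 3x^2)*1 - 1*(3x^2) = 1, area preserving."""
    x, _ = p
    return (1 + 3 * x ** 2) * 1 - 1 * (3 * x ** 2)

def H_naive(p):
    """Energy of the approximating flow x' = y, y' = x^3 (leading order only)."""
    x, y = p
    return y ** 2 / 2 - x ** 4 / 4

def H_corrected(p):
    """Same energy plus one Taylor-correction term (illustrative ansatz)."""
    x, y = p
    return H_naive(p) + x ** 3 * y / 2

p = (0.1, 0.005)                # a sample point near the neutral fixed point
print(det_DT(p))                # approximately 1: the map preserves area
print(abs(H_naive(T(p)) - H_naive(p)))          # defect of the naive energy
print(abs(H_corrected(T(p)) - H_corrected(p)))  # much smaller after correction
```

Expanding H(T(x, y)) - H(x, y) in a Taylor series, the correction term cancels the x^6 and x^2 y^2 contributions of the naive energy, leaving only higher-order terms near the fixed point; pushing the same cancellation further is how one would arrive at an error of the kind mentioned in the talk, of order x^8 + y^4.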
Yeah, as I said, it's several pages of estimates, but all you need are these: these order-five terms, this form of h, are enough for the whole result. Okay, thank you very much. You had a question? Yes. So, do you need to control bounded distortion inside these R_n's? Yes, distortion is also controlled. Okay. I hate to say this, but a lot of these quantitative estimates are in the paper of Liverani and Martens that I listed before. And distortion is easily controlled; it's not hard. Okay. So the first return to Y, isn't it already uniformly hyperbolic, because you waited until you passed through the bad region? Yes, it is, but it has discontinuities, and it doesn't have the full-branch property. Okay. So perhaps another possible approach, to avoid the existence of the Markov partition, would be to prove the Chernov axioms. Yes, if I were to do it without the Markov partition, this is what I would do. But maybe this is what you guys do. Yeah. And then one difficulty is the regularity of the unstable manifolds; you would have to say that they are C2 outside of the neighborhood. Yeah, perhaps the induced map will already have C2-regular invariant manifolds, but to be on the safe side... If you have some sort of uniform C1-plus-Lipschitz regularity, which is just what we have, then I believe you can go directly, without using the Markov partitions, and get the Young structure. Yeah, I think this would be an interesting thing to do: take the same examples, decrease the regularity, and see if you can do the same things without the Markov partition. And then you can also perturb outside of the neighborhood in any way you want. No, I think it would be very nice. I'm not sure if the effort... yeah, you might have to put a lot of effort into getting exactly the same results, but I think it's worth it. I didn't do it.
Okay, Roland. Hi, Roland, nice to see you. This is just a very naive quick question. I think you mentioned regularity of the holonomies, maybe on the next slide. I was just wondering if there's anything exotic hidden behind this phrase. No, nothing exotic. Okay, so we need to check the assumptions of Young. I think in the last condition of Young she needs absolute continuity of the holonomy, plus some product formula that you have to check. But in these examples these are consequences of the general theory; I think if you go back to the book of Mañé and look at some theorem and its proof, it's exactly the same. So by regularity of the holonomies I meant that we have this property; in the paper we comment about it, but we don't do everything from scratch. We just say that by this, this and this it follows that you have these two properties of Young, so as to obtain a Young structure. Thank you. Okay, Mattels. Yes, I have a quick little question: do you know how your rate of mixing changes when you allow h to be flatter? The tangency is stronger, and you assume that the first few terms of the Taylor series of h vanish. How does that influence the final result on mixing? No, we didn't explore that; I would say I haven't thought about it. I think it's a natural question, especially if you look at the papers by Chernov and Zhang in the billiard literature, where they have a full range of mixing rates. They have these formulas: you start with dispersing billiards, but then you make one of the obstacles acquire zero curvature, with a profile like one plus x to the power r. Then they get a rate of mixing for the billiard map of the type r plus two over r minus two, some rational function of the flatness of the obstacle, and I think it would be natural here. Okay, thank you. Yeah, it sounds like a very interesting generalization.
So, the way this project started was that Carlangelo asked me, can you obtain the better rate for this, because of this controversy between the numerical results and the result that he had obtained himself. I said, okay, yeah, let's do it. And we did it. At the beginning we were not trying to understand the whole theory surrounding this. So this is a battle between mathematical mathematical-physicists and physical mathematical-physicists. Yes, I would say that. Yeah, I tried to act like a physicist there, just like Carlangelo. Great. Okay, well, thank you everybody. Are there any final questions before we go? Okay, so thank you again to both of the speakers; it's really been a fantastic session today. Thank you. Thank you, Payman. Thank you very much. Thank you. And, okay, so everyone get on with their lunches or dinners or night times, and we will see you all again tomorrow. You can watch the recordings, obviously, but do your homework first. And we'll see you all tomorrow at the same time. Bye bye. Thank you very much. Bye.