Thank you very much, and I would also like to thank the organizers — right now that means thanking the one organizer. As mentioned, this is joint work with Dalia Terhesiu, and the topic is decay of correlations. Let me start with the general setting. We have a general dynamical system: X is a space with some sigma-algebra of measurable sets, μ is my invariant measure, and my dynamics will be denoted by f. If we speak about rates of mixing, the quantity we want to estimate is of course the correlation ∫ v · w∘fⁿ dμ − ∫ v dμ ∫ w dμ, and that is the finite measure setting. You can also ask this question in the infinite measure setting, but then it doesn't really make sense to subtract the product of the integrals: in the infinite measure setting, ∫ v · w∘fⁿ dμ will tend to 0 anyway for L¹ functions. So there the question is rather: this quantity goes to 0 — at what rate? Maybe it goes with some polynomial rate n^(−γ), and then you want to know what γ is and what constant sits in front. That is the infinite measure setting. Now, for the finite measure setting there are of course a lot of results, especially if you have a Markov structure, where studying the transfer operator is very successful. But also if you don't have a Markov structure, you can try to induce. Maybe first return inducing works, but since Lai-Sang Young's work we know that we can also do general return inducing, and it works very well. And of course it is not just Young: there are many, many people, also in this audience, who have worked in this direction — Stefano, the chairman, I know — and if you were at the talk yesterday evening by Pacifica, you will have heard more results about this.
Now, one thing about this method — and the same goes for most methods, like cone techniques à la Liverani or coupling methods — is that they tend to give you upper bounds for the decay of correlations. If you are in a setting where you are interested in exponential decay of correlations, like Pacifica yesterday, then you are perfectly happy with just having upper bounds. However, if you find yourself, for example, in a setting where you expect polynomial decay of correlations, then just having an upper bound is maybe not entirely satisfactory. So then we ask: is there also a way of getting lower bounds? Is this polynomial rate of mixing really sharp, or is the actual rate of mixing much better than you think? That is where Sarig, in 2001 I believe, and Gouëzel in 2004 — but also other papers — come in with their methods. That is operator renewal theory, and it allows us to obtain upper as well as lower bounds for this particular quantity. And I would like to add work by Melbourne and Terhesiu, which treats the infinite measure setting. But the unfortunate thing is that this method, as introduced by Sarig and extended by Gouëzel, uses first return inducing. That means it works perfectly if you already have some Markov structure to start with that you don't lose. If you don't have a Markov structure, it is precisely general returns that save you: you look at the return map to some subset of your space, and if your system is not Markov, then this first return map will usually not be Markov either. So you re-induce, and you just keep going until you return at a good time, where every branch is really onto, and you obtain a Markov partition. That works fine with general returns, but it did not work with first returns — and that is what I want to talk about. So the plan of the talk is basically to cover three things.
First, general conditions using general returns, in the finite as well as the infinite measure setting, with polynomial tails — and, let me check because I'm forgetting a thing — yes, upper and lower bounds for the mixing. That is the setup, and I hope to get to some main theorems about it. Secondly, there is a natural class of examples: non-Markov interval maps with indifferent fixed points. And, time permitting, remarks on the proof. So let's do the setup a bit more carefully. I have this map f on my dynamical space X, but I don't have any expansion there, so I want to use inducing to a subset Y of X, basically to regain expansion. These are going to be maps without contracting directions: they might be non-uniformly expanding, but I am not allowed any hyperbolic contraction — that doesn't work in our setting. So let F = f^φ : Y → Y be the induced map, and I assume it is Gibbs-Markov. First, Markov: there is a countable partition α of Y such that for every element a ∈ α, the map F : a → Y is a bijection — every branch is onto. F preserves a measure which I call μ₀, and the union of all partition elements has full μ₀-measure. Second, distortion, which is the other part of Gibbs-Markov: if p denotes the potential, so that e^p is the inverse Jacobian of F with respect to μ₀, then there exist C > 0 and θ ∈ (0,1) such that for every a ∈ α and all y, y′ ∈ a,

e^(p(y)) ≤ C μ₀(a)   and   |e^(p(y)) − e^(p(y′))| ≤ C μ₀(a) θ^(s(y,y′)),

where s(y,y′) is the separation time of y and y′: the number of iterates of F that I need for y and y′ to end up in two different elements of the Markov partition. This is the setup: I assume that I have an induced map with these properties. And then I get to three conditions.
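The separation time just described can be made concrete on a toy example. The sketch below (our own illustration, not from the talk — all function names are ours) uses the doubling map T(x) = 2x mod 1 with the two-element Markov partition {[0,1/2), [1/2,1)}; for this map the potential p ≡ −log 2 is constant, so the distortion condition holds trivially, and the point of the sketch is only the combinatorics of s(y,y′).

```python
# Toy illustration of the separation time s(y, y') for the doubling map
# T(x) = 2x mod 1 with Markov partition {[0, 1/2), [1/2, 1)}.

def doubling(x):
    """One step of the doubling map T(x) = 2x mod 1."""
    x = 2.0 * x
    return x - 1.0 if x >= 1.0 else x

def partition_index(x):
    """Which element of the Markov partition contains x (0 or 1)."""
    return 0 if x < 0.5 else 1

def separation_time(y, yp, max_iter=64):
    """Smallest n >= 0 such that T^n(y) and T^n(yp) lie in different
    partition elements; None if the orbits stay together for max_iter steps."""
    for n in range(max_iter):
        if partition_index(y) != partition_index(yp):
            return n
        y, yp = doubling(y), doubling(yp)
    return None
```

For the doubling map, s(y,y′) is just the index of the first binary digit where y and y′ disagree, which is why θ^(s(y,y′)) behaves like the distance |y − y′|.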
The first one is about the tail of this return time. Here φ is the general return time — I am explicitly not assuming it is a first return. So this condition is about the tails of φ, and there are two versions, one for the finite measure case and one for the infinite measure case. Finite: I just want that the measure of what we tend to call the big tail — where φ is bigger than n, not equal to n but bigger than n — is of order n^(−β):

μ₀(φ > n) = O(n^(−β)),   β > 1.

In the infinite measure case I have this, but I need to ask a little bit more, namely

μ₀(φ > n) = c n^(−β) + O(n^(−2β)),   β ∈ (1/2, 1).

One of the two will be assumed. The second condition is quite new in this setting, and it is what allows us to do general returns in the first place. So φ is the general — let's say good — return time, and let τ be the first return time. The induced map is basically obtained by re-inducing over the first return: the first return alone is not enough, so I have to take iterates of the first return map to reach the good return, and this re-inducing time is going to be ρ. So if ρ(y) = k, then φ(y) is precisely the k-th return time of y to Y, where k = 1 would be the first return. I need some tail condition on this ρ, and it looks as follows. Let

Z_(k,j) = { y ∈ Y : φ(y) = j and ρ(y) = k }.

Then I want that there exists a constant C such that for all n,

Σ_(j>n) Σ_(k≥1) k · μ₀(Z_(k,j)) ≤ C μ₀(φ > n).
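To see that this condition is not very demanding, here is a numerical sanity check on a toy model of our own (the specific law for μ₀(Z_(k,j)) below is hypothetical, not from the talk): if the measures decay exponentially in k and polynomially in j, the weighted double sum is dominated by the tail μ₀(φ > n), uniformly in n.

```python
# Toy model: mu(Z_{k,j}) = theta^k * j^(-(beta+1)), exponential in k,
# polynomial in j.  The ratio of the weighted sum to the tail mu(phi > n)
# is then a constant independent of n, so the condition holds.

BETA, THETA = 1.5, 0.5          # hypothetical parameters
J_MAX, K_MAX = 5000, 200        # truncation of the infinite sums

def mu_Z(k, j):
    """Toy measure of Z_{k,j} = {phi = j, rho = k}."""
    return THETA**k * j**(-(BETA + 1))

def mu_phi_tail(n):
    """Toy tail mu(phi > n) = sum over k >= 1 and j > n of mu(Z_{k,j})."""
    geom = sum(THETA**k for k in range(1, K_MAX))
    return geom * sum(j**(-(BETA + 1)) for j in range(n + 1, J_MAX))

def weighted_sum(n):
    """Left-hand side: sum_{j>n} sum_k k * mu(Z_{k,j})."""
    geom_k = sum(k * THETA**k for k in range(1, K_MAX))
    return geom_k * sum(j**(-(BETA + 1)) for j in range(n + 1, J_MAX))

# the ratio is (sum k theta^k)/(sum theta^k) = 2 for theta = 1/2, for all n
ratios = [weighted_sum(n) / mu_phi_tail(n) for n in (10, 100, 1000)]
```

The factor k in the sum costs only a constant against the exponential decay in k, which is the informal point made in a moment.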
Well, then there is one more technical condition, which I think I might as well skip, because in the examples I want to present that slightly constructional assumption is true anyway. So let's stick to these two conditions, plus a little extra assumption which I call (H2). Let me give a way of interpreting the condition on ρ. In the examples I hope to do, I know that if you fix j and let k increase, then the measures μ₀(Z_(k,j)) decrease exponentially in k. They do not decrease exponentially in j, but for fixed j they decrease exponentially in k, and I think that is a pretty normal situation. And if you look at the condition, the factor k is not asking very much against that exponential decay. Okay, the theorems — one for the finite measure setting and one for the infinite measure setting. I also need to say something about the observables v and w, but I will first write down the theorems and then say a bit more about these observables, because that class of observables is important. In the finite measure case, β > 1, and that means that the average φ̄ = ∫_Y φ dμ₀ of my general return time is finite. And in this case the statement is that the correlation has a leading term

(1/φ̄) Σ_(j>n) μ₀(φ > j) · ∫ v dμ ∫ w dμ.

This is the leading term. And then we get the error term, which is of order the maximum of two different norms that we can put on v — a weighted θ-Hölder norm and a weighted infinity norm — multiplied by a weighted infinity norm of w, and an extra factor d_n, where d_n takes values depending on β.
Here d_n is n^(−β) if β > 2, n^(−2) log n if β = 2, and n^(−(2β−2)) if β ∈ (1, 2). Now the infinite measure case — and I still owe you the regularity of my observables, which I will give in a minute. In the infinite measure case, β ∈ (1/2, 1), and there exists an integer Q, namely the — smallest, no — largest integer q such that q/(q+1) is still less than β. Then there are constants d₀, …, d_Q such that the following holds. Remember, in this case I will not subtract the product of the integrals of v and w, because that makes no sense; rather, I want to find out how fast ∫ v · w∘fⁿ dμ goes to zero. The answer is

∫ v · w∘fⁿ dμ = ( d₀ n^(β−1) + d₁ n^(2(β−1)) + ⋯ + d_Q n^((Q+1)(β−1)) ) ∫ v dμ ∫ w dμ + error,

so you work down all these fractional powers of n, up to the power (Q+1)(β−1). And then there is an error term of the same type as before, where in this case d_n = n^(−(β+1/2)). So I hope I have stated the theorems without any errors. To be clear: μ is the invariant measure of the original system, and μ₀ is the invariant measure of the induced system. So now the regularity of the observables v and w. I need to weight them in a particular way. Let

τ*(x) = 1 + min{ n ≥ 0 : fⁿ(x) ∈ Y },

where the minimum can be zero; here x can be anywhere in the space, and τ*(x) − 1 is the first entry time into the set Y that I induce on. I also fix some ε > 0. And now the weighted infinity norm of w is simply going to be

‖w‖_(∞,ε) = sup_(x∈X) |w(x)| · τ*(x)^(1+ε).
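The weight τ* and the weighted sup-norm can be illustrated on a toy example of our own (not from the talk): the doubling map T(x) = 2x mod 1 with inducing set Y = [1/2, 1). For observables supported near 0, where the entry time into Y is large, the weight blows up, which is exactly what forces w to decay there.

```python
# Toy illustration of tau*(x) = 1 + first entry time into Y = [1/2, 1)
# for the doubling map, and of the weighted sup-norm sup |w| * tau*^(1+eps).

def doubling(x):
    x = 2.0 * x
    return x - 1.0 if x >= 1.0 else x

def tau_star(x, max_iter=100):
    """1 + first entry time of x into Y = [1/2, 1)."""
    for n in range(max_iter):
        if x >= 0.5:
            return 1 + n
        x = doubling(x)
    raise RuntimeError("no entry into Y within max_iter steps")

def weighted_sup_norm(w, eps=0.1, grid_size=1000):
    """Approximate sup over a grid of |w(x)| * tau*(x)^(1+eps)."""
    xs = [(i + 0.5) / grid_size for i in range(grid_size)]
    return max(abs(w(x)) * tau_star(x) ** (1 + eps) for x in xs)
```

For the doubling map, τ*(x) grows like log₂(1/x) as x → 0, so any w with a finite weighted norm must decay (logarithmically) at 0; in the indifferent-fixed-point example below, entry times grow polynomially, which forces the polynomial decay of v and w at the fixed point mentioned in a moment.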
That is the simplest case. And I am going to do something similar for the weighted θ-Hölder norm, which in first instance is a seminorm:

|v|_(θ,ε) = sup_(a∈α) sup_(0≤i<φ(a)) sup_(y,y′∈a, y≠y′) τ*(f^i y)^(1+ε) · |v(f^i y) − v(f^i y′)| / θ^(s(y,y′)),

so I first weight and then take the Hölder quotient. Here τ* is as before — it has nothing to do with τ; shall I use another symbol? — and note that f^i y might already lie in Y, in which case the minimum is zero and the weight is 1, because I add one. To get a true norm, I just take the seminorm plus the weighted infinity norm: ‖v‖_(θ,ε) = |v|_(θ,ε) + ‖v‖_(∞,ε). That is the regularity I am imposing, and the theorems on the left board hold for all v and w in the Banach spaces with these norms. I am going to discuss an interval example with a neutral fixed point, and what this weighting basically tells me is that v and w should decrease to zero at the neutral fixed point, with a particular rate. This rate depends on ε — which can be arbitrarily close to zero — and on the parameter β. So v and w may be supported on the whole space, but they should go to zero as I approach the indifferent fixed point. This particular map looks as follows; it is very much like the Manneville-Pomeau map:

f_α(x) = x(1 + 2^α x^α)   for x ∈ [0, 1/2],

so here we have a neutral fixed point at 0 with a particular order of contact, and my β will be 1/α. Normally we just have a straight branch from 0 to 1, but I am not going to do that: I make the map non-Markov by taking a straight branch from 0 to γ. So what does that look like?
Something like γ(2x − 1) for x ∈ (1/2, 1], where γ is some fixed parameter strictly between 0 and 1. And now we have a map with a neutral fixed point and without Markov structure, for which we expect — and can compute — that this tail condition is indeed satisfied. And I am going to induce to the interval Y = (1/2, 1]. Now, this class has been looked at before by Hu and Vaienti, in a preprint which hasn't appeared yet but which you can find on the arXiv — I don't quite know the year, but in the last year or two. There they obtained similar results for v in BV; so they use the space of bounded variation, only in the finite measure setting, and with the supports of v and w both inside Y — not the general supports we allow. But the statement is much the same as this, except of course that they use a first return inducing scheme. That gives the following kind of philosophical question: we are estimating ∫ v · w∘fⁿ dμ — am I going to use the first return map and Hu and Vaienti's result? What they get is a leading term

(1/τ̄) Σ_(j>n) μ_τ(τ > j) · ∫ v dμ ∫ w dμ,

where τ̄ is the average of the first return time and μ_τ is the invariant measure of the first return map. What Hu and Vaienti do is take the first return map on this set Y, which is not full branch, but whose invariant density is still in this class BV — that is what one can do in dimension one. Plus higher order terms. What you get from the general return is instead

(1/φ̄) Σ_(j>n) μ₀(φ > j) · ∫ v dμ ∫ w dμ + higher order terms.

And both results are true.
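Here is a small sketch of this map under our reading of the branches (the exact branch formulas and parameter values are our assumptions, not verbatim from the talk): the Manneville-Pomeau-like left branch x(1 + 2^α x^α) on [0, 1/2] and a straight right branch γ(2x − 1) onto [0, γ], with first return times to Y = (1/2, 1] computed by iteration.

```python
# Sketch of the non-Markov interval map with a neutral fixed point at 0:
# f(x) = x * (1 + (2x)^alpha) on [0, 1/2], f(x) = gamma*(2x - 1) on (1/2, 1].
# Since gamma < 1, the right branch is not onto, so the map is not Markov.

def f_alpha(x, alpha=0.8, gamma=0.9):
    if x <= 0.5:
        return x * (1.0 + (2.0 * x) ** alpha)   # neutral fixed point at 0
    return gamma * (2.0 * x - 1.0)              # straight branch onto [0, gamma]

def first_return_time(x, alpha=0.8, gamma=0.9, max_iter=10**6):
    """First return time tau(x) of a point x in Y = (1/2, 1] back to Y."""
    assert x > 0.5
    y = f_alpha(x, alpha, gamma)
    for n in range(1, max_iter):
        if y > 0.5:
            return n
        y = f_alpha(y, alpha, gamma)
    return max_iter
```

Sampling many points of Y and tabulating the empirical tail of these return times is one way to probe numerically that μ(τ > n) decays like n^(−β) with β = 1/α; points mapped close to the neutral fixed point produce the long returns that dominate the tail.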
And in each formula the higher order terms should be of an order of magnitude smaller than the leading term — otherwise we would not have lower bounds; the leading term gives you the upper as well as the lower bound. So the question is: are these two leading terms the same? Well, the formulas don't look the same — similar, but definitely not the same — and the two normalizing numbers τ̄ and φ̄ are definitely not the same. So let's call the first one (1) and the second one (2), and the lemma that resolves this mystery — why do we get two formulas that look different — is the following. Of course we scale differently, but that is not the issue, because it turns out that φ̄ = ρ̄ τ̄, where ρ̄ is the average of the re-inducing time. Now, if I look at the difference of the two coefficient sequences — putting the right scaling in — it is

(1/τ̄) Σ_(j>n) ( (1/ρ̄) μ₀(φ > j) − μ_τ(τ > j) ),

and we hope that this is really so small that the difference between the two leading terms is absorbed by the further error terms. It turns out that this difference has a formula — well, maybe not that nice, but it looks like this: a sum over k ≥ 1 of integrals over the sets where φ is the (k+1)-st return and is bigger than n. Note that these are different classes of observables — BV for them, the weighted class I am using at this moment for us — but of course there is a big intersection between the two, and for functions in that big intersection you definitely don't want contradictions between what they get in their paper and what our theorem gives. And in this particular example you can indeed compute the difference.
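The identity φ̄ = ρ̄ τ̄ in the lemma has a simple probabilistic analogue which we can check exactly on a toy model of our own (not from the talk; in the dynamical setting the identity comes from ergodic-theoretic arguments, not from independence): if φ = τ₁ + ⋯ + τ_ρ with the τ_i i.i.d. and ρ independent of them, Wald's identity gives E[φ] = E[ρ]·E[τ].

```python
# Exhaustive-enumeration check of E[phi] = E[rho] * E[tau] for a toy model
# where phi is the sum of rho i.i.d. first-return times (Wald's identity).

from itertools import product

tau_dist = {1: 0.6, 2: 0.4}          # hypothetical first-return law
rho_dist = {1: 0.7, 2: 0.3}          # hypothetical re-inducing law

tau_bar = sum(t * p for t, p in tau_dist.items())
rho_bar = sum(r * p for r, p in rho_dist.items())

# E[phi]: enumerate rho = r and every tau-sequence of length r
phi_bar = 0.0
for r, pr in rho_dist.items():
    for taus in product(tau_dist, repeat=r):
        p = pr
        for t in taus:
            p *= tau_dist[t]
        phi_bar += p * sum(taus)
```

With these numbers, τ̄ = 1.4, ρ̄ = 1.3, and the enumeration gives φ̄ = 1.82 = ρ̄ τ̄ exactly.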
I mean, of course this is a bit of a cryptic formula, but it means that I am integrating over the set where φ > n and at the same time φ is the (k+1)-st return. And that quantity is really a lot smaller than either leading term, so it is absorbable in the error terms of both results. Now I can say a few things about the proof, and I think I will. How do we do that? Well, we definitely use a tower method. So there is a tower, which I call Δ; it is quite standard, so let me just draw it. Here we have Y₀ = Y × {0}, and we get the tower above it: Δ = {(y, n) : y ∈ Y, 0 ≤ n < φ(y)}. The tower map, which I am going to call f_Δ, just brings you from one level to the next, and at some point you reach a level where you can't go higher, and then you go diffeomorphically back to the base. This map preserves some measure μ_Δ. And the good return φ on this tower of course corresponds to a first return to the base — so there we have the first return which we would have liked from the start, if we could have used Sarig and Gouëzel straight away. And here we have the projection to our space X with subset Y: π(y, i) = f^i(y), and we call Y_i := f^i(Y) the image of the i-th level. And now we get the problem that if I lift the set Y, I don't only get the base — let's use some colour — I also get parts here, and maybe here, and here, higher up the tower. And then we use the ingredients of operator renewal theory. I am just going to write down some operators and comment on them. Associated to the tower map and the invariant measure of the tower map there is a transfer operator L, and we build a power series out of it: L(z) = Σ_(n≥0) Lⁿ zⁿ. And we also use operators associated to a return to the base.
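Before the operators, the tower construction itself can be made concrete on a finite toy base (our own illustration, not from the talk; the base map and return times below are hypothetical): states are pairs (y, level), the tower map climbs one level at a time and drops back to the base from the top, and the first return to the base reproduces φ exactly.

```python
# Toy tower Delta over a three-point base Y = {0, 1, 2} with return times phi.
# tower_map climbs one level per step; from the top level it applies the
# induced map F on the base, so first returns to level 0 take exactly phi steps.

phi = {0: 1, 1: 3, 2: 2}            # hypothetical return times
base_map = {0: 1, 1: 2, 2: 0}       # hypothetical induced map F on Y

states = [(y, n) for y in phi for n in range(phi[y])]   # the tower Delta

def tower_map(state):
    y, n = state
    if n + 1 < phi[y]:
        return (y, n + 1)           # climb the tower
    return (base_map[y], 0)         # top level: return to the base

def first_return_to_base(y):
    """Number of tower-map steps until (y, 0) is back at level 0."""
    state, n = tower_map((y, 0)), 1
    while state[1] != 0:
        state, n = tower_map(state), n + 1
    return n
```

This is exactly the point made on the board: on the tower, the good return φ has become a genuine first return to the base Y₀.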
So T_n is going to be 1_(Y₀) Lⁿ (1_(Y₀) ·): the n-th iterate of my transfer operator, starting at the base and conditioned to end in the base. The orbit could have been in the base in between, but at iterate n I want it in the base. And you form the power series in the same way: T(z) = Σ_n T_n zⁿ. Then we do the same thing for the first return: we not only start in the base, we also restrict to the set where the return time is exactly n, so R_n = 1_(Y₀) Lⁿ (1_({φ = n}) ·) and R(z) = Σ_n R_n zⁿ. So R_n is associated to returning at time n really for the first time, not in between. And then there is this standard relation, the renewal equation,

T(z) = (I − R(z))^(−1),

that relates these power series. Now, if I knew that φ was the first return, I could simply say these things are all we need — and that is the case if φ is a first return and the supports of v and w are both inside the set that I induce on. But that is not the situation that we have. So I just need one more board to explain what adjustment we have to make, and this is basically following techniques introduced by Gouëzel, where we have another family of operators. Let me try to explain this just by the picture. Instead of T(z) alone, I will have a decomposition of the full power series L(z) with (I − R(z))^(−1) as the core, but with extra power series on either side. Let's do the first of these. If I take any observable v and lift it to the tower, it is supported on the whole tower, not just on the base. Now what this operator B(z) does is iterate forward until I hit the base for the first time; I make a power series out of that, and that is B(z).
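The renewal equation T(z) = (I − R(z))^(−1) has a scalar analogue that can be checked numerically (our own sketch, not from the talk): on the level of coefficients it is the classical renewal convolution t_n = Σ_k r_k t_(n−k) with t₀ = 1, where r_n plays the role of the first-return probabilities.

```python
# Scalar analogue of T(z) = (I - R(z))^(-1): the coefficients t_n of
# 1/(1 - R(z)) satisfy the renewal convolution t_n = sum_k r_k * t_{n-k}.

def renewal_sequence(r, N):
    """t_0..t_N from first-return coefficients r = {k: r_k}, by convolution."""
    t = [1.0] + [0.0] * N
    for n in range(1, N + 1):
        t[n] = sum(r.get(k, 0.0) * t[n - k] for k in range(1, n + 1))
    return t

# sanity check: a geometric first-return law r_k = p(1-p)^(k-1) corresponds
# to i.i.d. coin flips, where the chance of a renewal at any n >= 1 is p.
p, N = 0.3, 50
r = {k: p * (1 - p) ** (k - 1) for k in range(1, N + 1)}
t = renewal_sequence(r, N)
```

In the operator setting the same algebra holds with T_n and R_n in place of t_n and r_n, and the asymptotics of μ₀(φ > n) — i.e. the tails of the r_k — govern how fast the t_n settle down, which is exactly what operator renewal theory exploits.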
Okay, and then I return to Y₀ a number of times, and at a certain point I am close to my final time n. In the last steps this mass spreads out over the tower again, and the operator power series associated to that is A(z). Now what is C(z)? Well, for a particular fixed n, I might be somewhere in the tower and in n steps not even return to the base; that is the stuff I collect in C(z). So I get this particular relation,

L(z) = A(z)(I − R(z))^(−1)B(z) + C(z),

which replaces the standard renewal equation. And then condition (H1) and the other assumptions allow us to do estimates on A(z), on B(z) and C(z), and on the Fourier coefficients and so on, so that we can also compute the whole operator product. That is basically what is behind trying to estimate the correlation, because after all the n-th coefficient of L(z) is precisely what you need to estimate the n-th correlation coefficient. So I think I have now exceeded your patience enough to stop. Thank you very much.