 a problem when you're sharing the iPad? Okay, okay. Alright, so I just wanted to say a few words before I pass the word to José, who will be chairing the session. Since it's our last session of this workshop, I wanted to share a few reflections of my own. Maybe it would be nice at some point to have a little discussion about this, but I think by the end of today we will all be a little bit tired. For me, it's been an extremely interesting and very valuable set of sessions. So of course, I would like to thank my co-organizers hugely, José and Yuri, who also gave the mini courses. I know that was a lot of work over the last two or three weeks, but I hope that these mini courses will remain. Of course, they're all on YouTube, so they will be, in a sense, the definitive mini courses on Young towers and Markov partitions. I feel very old saying this, but I remember when I was a student, once at a conference I heard John Hubbard say he was working on two-dimensional complex Hénon dynamics. At the beginning of the talk he said, well, there is a fierce competition here between what he himself was doing and the whole series of papers that Bedford and Smillie had written. And he said, you know, there's a competition, we're using two different methods, and Bedford and Smillie are winning hands down and really going very far. At some point during this workshop I had a little bit of that feeling: somehow these Markov partitions are really winning hands down over Young towers. They are able to capture so much information, so much dynamics; there are so many applications. Then as the week went on, I started to feel a little bit less that way.
And I really think this week has helped me to understand that the two methods are not competing methods; they really are methods that can help each other, in some sense. They have different strengths and weaknesses. It was a really valuable opportunity for me to learn a little more about all these results that use Markov partitions, and how it's really about understanding non-uniformly hyperbolic systems and using all the various tools that are adapted to particular situations. So I come out of this with a very positive feeling about how these various techniques are not as foreign to each other as they seemed at the beginning. I hope this feeling is shared by the others, and I hope anyway that it's been useful for everybody else. I'm very much looking forward now to the last talk. So I will pass the microphone to José Alves, who will present the last talk and wrap up the workshop.

Okay, thank you very much, Stefano. I would like to start by thanking my co-organizers of this meeting; it was a very great pleasure to organize all these sessions. And I would like to thank Yuri especially, because he did half of my job for this afternoon. I was supposed to be the chairman for this session, but we had internet problems, actually power problems, here in Portugal, and so I couldn't attend the beginning of the first lecture. So our next lecture is by Snir Ben Ovadia from Penn State University, and he will talk about zero-summable orbits. Snir, please.

Yes, thank you very much. While I share my screen, let me start by also expressing my appreciation for this week, which was wonderful for me, and for the organization of the week beforehand. Of course, I thank the organizers for giving me the opportunity to take part. Can you see my screen? Yes, yes. Okay, good. So let's start. Today I want to talk about zero-summable orbits.
Before we go into the technical slides, I can give an overview of the main results that we want to discuss today. First, we introduce a class of orbits which may have zero Lyapunov exponents but still exhibit sensitivity to initial conditions; we'll see what that means. The first result we mention is that we construct a countable Markov partition which induces a finite-to-one, almost everywhere coding, and, as importantly, it lifts the geometric potential with summable variations. Part of the technique used in order to construct the Markov partition is the graph transform; I will explain the difficulties with this and how they are overcome. The graph transform allows us to construct weak stable and unstable leaves: these are local manifolds which may contract in a strictly sub-exponential way. We show the absolute continuity of those foliations, and then we give a general condition for the existence of such foliations in a system. We then continue with a family of examples where we can simultaneously code, in a finite-to-one almost everywhere manner, all the invariant measures in the system, without assuming χ-hyperbolicity for some χ. Basically, I should say that this consists of two results: one is general and one is the study of a class of examples, so that part will be more specific. Then parts six and seven are again general, and they relate to the general construction and to what measures can be coded with this coding. Let's start with the framework and the definition. The framework: M is a closed d-dimensional Riemannian manifold, where closed means compact and boundaryless, d is greater than or equal to two, and f is a C^{1+β} diffeomorphism of the manifold, which means that f and its inverse are differentiable and both differentials are β-Hölder continuous for some positive β.
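To fix notation for what follows, the standing assumptions can be written out as follows. This is a reconstruction from the spoken description, stated informally (comparing differentials requires charts); the precise formulation on the slides may differ:

```latex
\[
  M \ \text{a closed (compact, boundaryless) Riemannian manifold},
  \qquad d := \dim M \ge 2,
\]
\[
  f \in \operatorname{Diff}^{1+\beta}(M) \ \text{for some } \beta > 0,
  \quad\text{i.e. } x \mapsto d_xf \ \text{and}\ x \mapsto d_xf^{-1}
  \ \text{are } \beta\text{-H\"older continuous}.
\]
```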
Now we want to introduce this notion of zero-summability. A point in the manifold is called zero-summable, or sometimes summable in short, if its tangent space splits uniquely into a direct sum of stable and unstable subspaces, H^s and H^u respectively, where for every tangent vector in the stable subspace this quantity is finite, and for every tangent vector in the unstable subspace this quantity is finite. (Oops, one second, let me see if it's working. Okay.) What can we see from this definition immediately? That every hyperbolic orbit satisfies this condition. Also, it is lean, in the sense that there is no choice in this expression, except maybe for the number two here. But as we will see next, we want this quantity to be a norm on the tangent space which is induced from an inner product, and the parallelogram law tells us that it has to be two here for this property to hold. So there's really not much choice, but I will circle back to this same definition two or three slides later to see another feature of it. Let's continue. The first step in the construction is the Oseledets–Pesin reduction. We start by constructing the Lyapunov inner product on tangent spaces of summable points. Every two tangent vectors at a summable point decompose uniquely into their stable and unstable components, and then we define a new inner product on the stable subspace according to this formula, which is well defined by the Cauchy-Schwarz inequality and the fact that x is summable. Similarly, we define an inner product on the unstable subspace by the analogous formula. These both extend to an inner product on the whole tangent space of x by taking the respective inner products of the respective components. The next step is the Lyapunov change of coordinates at these points.
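Since the slide formulas are only described verbally, here is a reconstruction of the quantities involved, in the standard form of the Lyapunov inner product. The factor 2 is the one forced by the parallelogram law, as just mentioned; the exact notation on the slides may differ:

```latex
\[
  T_xM = H^s(x) \oplus H^u(x),
\]
with the summability conditions
\[
  \sum_{m \ge 0} \big\| d_xf^{m}\,\xi \big\|^{2} < \infty
  \quad \text{for all } \xi \in H^s(x),
  \qquad
  \sum_{m \ge 0} \big\| d_xf^{-m}\,\eta \big\|^{2} < \infty
  \quad \text{for all } \eta \in H^u(x).
\]
At a summable point the Lyapunov inner products on the two subspaces are
\[
  \langle \xi_1, \xi_2 \rangle^{s}_x
    := 2 \sum_{m \ge 0}
       \big\langle d_xf^{m}\xi_1,\ d_xf^{m}\xi_2 \big\rangle,
  \qquad
  \langle \eta_1, \eta_2 \rangle^{u}_x
    := 2 \sum_{m \ge 0}
       \big\langle d_xf^{-m}\eta_1,\ d_xf^{-m}\eta_2 \big\rangle,
\]
both absolutely convergent by Cauchy-Schwarz and summability, extended to all
of $T_xM$ through the components:
\[
  \langle v, w \rangle'_x
    := \langle v^{s}, w^{s} \rangle^{s}_x
     + \langle v^{u}, w^{u} \rangle^{u}_x .
\]
```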
So the Lyapunov change of coordinates is a matrix, depending on the point, from the d-dimensional Euclidean space to the tangent space of x, such that when we consider two tangent vectors at x, their new inner product, which we just defined, equals the standard Euclidean inner product of their preimages under this map. This formula defines the map C(x) up to an orthogonal mapping of the stable and unstable factors. So we have a degree of freedom: first we choose that the stable subspace H^s(x) is mapped from the first s(x) coordinates in R^d, where s(x) is the dimension of H^s(x), and this still leaves us the freedom of choosing the map up to orthogonal mappings of those two factors. But we don't care about this, for two reasons. One, the quantity that we are truly interested in is the norm of the inverse, ||C(x)^{-1}||, which is of course not affected by composition with an orthogonal map. Secondly, this matrix, as a function of x, can be chosen measurably over the set of zero-summable points; even though it's not defined uniquely, it can be chosen measurably, so we are happy. Let's continue. The next definition to introduce is temperability. Let's start with weak temperability, or ε-weak temperability for some positive ε. We say that a zero-summable point is ε-weakly temperable if there is a function q from the orbit of the point into a certain discrete set. Now, what is this discrete set? It doesn't really matter how you choose it; it just needs to have some correspondence with the variation bounds here (we'll circle back to this in a second): it needs to be a discrete set which accumulates only at zero and refines those levels. Good. The two properties we want to be satisfied are, first, that this function is bounded by a constant over the inverse Lyapunov norm to some power 2γ, and you can immediately notice that this property of weak temperability depends on ε and on γ. Secondly, we want this variation, the
multiplicative variation along the orbit, to be bounded by e^{±ε}, that is, e^{-ε} ≤ q(f(x))/q(x) ≤ e^{ε}. That's the second condition. Okay, now don't worry about what γ is; it's just a parameter, which for now we allow to be any choice, and we will see why it appears later on. Now we have strong temperability. Strong temperability is similar, and we'll explain how it extends the notion of weak temperability. We say that a zero-summable point is strongly temperable if there is a function q from the orbit of the point into a certain discrete set, and now let's define this discrete set. First consider the map I(x) = x·e^{γx^{1/γ}}. I know it seems daunting, but trust me, it makes a lot of sense; you'll see in a second. Then we make another choice (it doesn't really matter which choice we make, but let's choose this one): a discrete set which refines the levels of I and accumulates at zero. Let me write it down: I^{-1/4} is a fourth iterative root of the inverse of I, so it exists, and now this is like a ladder of levels on which we can climb up or down. Again, we want this function q to be dominated by a constant over the inverse Lyapunov norm to the power 2γ, and we want the variation over the orbit to be bounded with respect to I instead of with respect to e^{±ε}. So let's just see for one second what that means. Without being formal at this point, when q is very small it represents bad parts of the space. So whenever we are in the bad parts of the space, when q is very small, the tightness of this estimate becomes much stronger, because q can be much smaller than ε: you fix one ε at the beginning of the construction, but q can always become arbitrarily smaller than ε. So you see that the factor which controls the closeness of those two quantities becomes much smaller, and in particular, if you substitute I(x)
equals x·e^ε, you get exactly the definition of weak temperability; it's simply replacing this function with a more restrictive one. And Pesin's Tempering Kernel Lemma tells us that every invariant probability measure which is hyperbolic is carried by the set of ε-weakly temperable points, for every ε. Okay. (Yes, question? Sure.) (Question: In I(q) and I^{-1}(q), is this an inequality or an identity?) An inequality; sorry, let me write it down explicitly: I^{-1}(q(x)) ≤ q(f(x)) ≤ I(q(x)), where the −1 means the inverse function, not one over the iterate. Okay. What I wanted to say is that we'll go back later to see why this form of the function I is quite natural. Okay, now let me give one more definition before we state the main result, and that is points which are recurrently strongly temperable. We saw what strong temperability is; if the function q can be chosen so that its limsup, forward and backward in time, is positive, then we say the point is recurrently strongly temperable. It means that the orbit returns to some fixed level set of q infinitely often in the past and in the future. And then we present the next theorem. There is a countable Markov partition R which induces a locally finite countable directed graph G and a factor map π from the induced topological Markov shift Σ into the manifold (I'm not defining those terms, because I know that the audience knows them all by now), such that: one, π is uniformly continuous on Σ, while Σ may be non-compact; two, when we restrict π to Σ^# (what is Σ^#? It is the set of all chains in which some symbol returns infinitely often in the future and another, perhaps different, symbol returns infinitely often in the past), then π restricted to this set is finite-to-one. It's a coding, we have the commutativity relation π∘σ = f∘π, and the image of Σ^# equals the Markov partition, the union of all the elements,
which equals the set of recurrently strongly temperable points. Another important property is that the geometric potential lifts with summable variations. Let me mention a result by Buzzi which says that if one wishes to code all hyperbolic measures simultaneously, allowing χ to be arbitrarily small, then one cannot hope for π to be Hölder continuous. So indeed this is not the case here, but luckily we get that the map π is uniformly continuous, and furthermore it lifts the geometric potential with summable variations, which is the threshold requirement for carrying out the thermodynamic formalism of countable Markov shifts. So we're happy. Now I will describe the main ideas of the construction, and because, again, I know the audience is expert on Markov partitions, specifically Sarig's theory, I will point out the differences from Sarig's theory and then show why those five items are sufficient in order to construct the Markov partition with property one. The first property we need is this process of coarse graining, which is done for the set of recurrently strongly temperable points: on each level set you take a sufficiently dense set (finitely dense, since it has to be a discrete set), chosen with respect to the map I, and you want the discreteness property, which basically means that whenever the minimum of p^s and p^u is greater than or equal to some positive number, the number of double charts satisfying this is finite. That's the discreteness property we want. And this object is basically a triple: a point which is recurrently strongly temperable, together with two neighborhood parameters; that's what the triple is. And I want to say that perhaps, if one seeks to code all hyperbolic measures, one could try naively to work with charts which have another parameter, let's call it χ_x, which is some discretized choice of the Lyapunov
exponent at x, and then one can also try to have an appropriate Lyapunov inner product where you put another factor here, say e^{χ_x}, so you would impose some exponential contraction in this quantity. But morally this is wrong, because a Lyapunov exponent is not a property of a point, it's a property of the tail of an orbit; and similarly it's not a property of a symbol, it's a property of the tail of a chain. So if one seeks to code all hyperbolic measures, you cannot have a symbol hold a property of the tail, because that would break the Markov property; you need to find an underlying structure which is common to all hyperbolic measures together. Okay, so now we have this refined edge condition, and I compare it with Sarig's definition of an edge. The first condition is that those quantities are small; how small doesn't really matter, but they are sufficiently small. Then we have this property, which is perhaps, if you allow me to say, the most important one; it takes work, and it allows you to define these parameters in a greedy-algorithmic way along the orbit, which in the end allows you to solve the inverse problem. In Sarig's construction you just substitute I(x) = x·e^ε; now we want to have some finer structure, so we define it similarly. And recall that the image of I here is just a discrete lattice, and when we apply it to a number we mean the closest lattice point greater than or equal to that number. So we have this definition, the greedy algorithm. When we are in bad parts of the space, it means that the window parameter changes much more slowly; the variation is smaller. We have another step which is necessary, which is the graph transform for chains which don't have uniform contraction in charts. The idea, or let's say the engine, behind Sarig's coding is Pesin theory. Pesin theory tells us that when we take a hyperbolic orbit, you can
find a change of coordinates which locally puts the action of the differential in a block-form matrix which is hyperbolic; and furthermore, not only is it hyperbolic, it is uniformly hyperbolic, up to a second-order error term; at the point itself it is uniformly hyperbolic with some fixed χ. Then one uses this uniform hyperbolicity to prove, using Hadamard-Perron methods, convergence of the graph transform. In our setup, even after we go into charts, we no longer have uniform estimates; heuristically, you could say it's non-uniform non-uniform hyperbolicity. So one needs to carry out this graph transform and show that it converges without the uniform estimates within charts, after a change of coordinates. And not only this: you want to be able to show that the C^1 convergence is fast enough, so that you end up having summable variations to lift the geometric potential. This can be done on what we call leaf cores, which are smaller portions of the leaf which still contain all codable points; that's good enough, because we only need the tangent spaces at codable points. Then the next step we need is an improved inverse problem. What is the improved inverse problem? It says that if we have two chains which are recurrent and they code the same point, then these quantities (basically what you wish to estimate is the inverse Lyapunov norm) are close to each other with respect to I. As we stressed, this means that when this norm is very big, the approximation is required to be much, much tighter than just e^{±ε}. So we are left with two difficulties which together are much harder to deal with: the fact that we have weaker estimates in charts, and simultaneously we want to get stronger bounds. But now assume that we have items one, two, three, four, five; let's see why that is enough to construct the Markov partition. It's enough for the following reason. We start with a point
which is recurrently strongly temperable, and then we have a formula which tells us that, given that x is strongly temperable, we can find window parameters p^u and p^s, so that it has a chain, using the edge condition, which shadows it. This is exactly the analogous formula from Sarig's construction when I(x) = x·e^ε; when I is another function, it works similarly, as long as you have the adapted strong-temperability condition. Okay. Then notice that this function I (for simplification I will just take γ = 1 here) is strictly increasing and always greater than x; it's expanding. What that means is that when we take the sequence of its inverse iterates, it goes to zero; it cannot have any fixed points. So a recurrent chain meets the dominating quantity infinitely often, similarly to an argument which appears in Sarig's coding, where he uses this fact, again with the appropriate I function. Now we are able to define I-maximality, which one can again compare with ε-maximality, and which is the following. Assume that we have two chains (let me remind you: two chains which code the same point, and recurrent); then we have this estimate for their centers, where the center of a chart is the point x of that chart. So we have this estimate. Then assume that this inequality holds for the parameter p^u_i for some integer i. Then we write the following equality: by definition, p^u_{i+1} equals the maximum over these two quantities, and that's the beauty of the greedy algorithm, that we have an explicit formula for the next parameter in the chain. Then we substitute the fact that p^u_i is greater than or equal to I^{-1} of the corresponding quantity; we substitute it here, so we have I composed with I^{-1}; we use this commutativity relation, and we also use the inequality that this quantity is greater than or equal to I^{-1} of that quantity; and we can take I^{-1} outside because it is
monotone, so it preserves the maximum. We are left with I^{-1} of this maximum, which by definition of the greedy algorithm is I^{-1} of the other chain's p^u_{i+1}. What we get is that once this inequality holds at one integer, it pulls itself, as a bootstrap, to all greater integers in the chain; and from the fact that it happens infinitely often in the past, it holds for every integer in the chain, and this is the inverse problem. So items one to five are enough: these properties are sufficient in order to construct the Markov cover. And once you have this Markov cover, with the properties which correspond to Sarig's coding, you can continue with the Bowen-Sinai refinement, which, as I said, is a set-theoretical refinement. It's a set-theoretical process; it doesn't matter where the cover came from, it forgets the smooth structure, and you get a refinement of the cover into a partition which induces a finite-to-one almost everywhere coding on the recurrent points; that part is the same. Of course, there's another thing here, which is the fact that for the actual inverse problem you don't want just the norms to be close to each other, you want the maps to be close to each other (which in particular means the norms), but this is highly technical, so I'm not going to go into the details. So now let's see what the engine is that allows us to carry out the graph transform. Okay, for simplicity assume for a second that we are in the two-dimensional setup, and define this function u(x), which is well defined: it doesn't depend on a choice of tangent vector, because there is only one normalized tangent vector, up to a choice of direction, in the unstable subspace. So we choose one normalized unstable tangent vector and we have this quantity. Now I'm going to show you a formula, which is true, which is the contraction in chart: when we do the change of coordinates, the contraction in chart equals exactly this quotient, which equals exactly this factor, and this is dominated by e to the
minus one over u²(x), that is, e^{-1/u²(x)}. Now, whenever u is very big, this inequality actually becomes a good approximation; and of course we're interested in dealing with the parts of the space where things are big, where u is big, so this is a good approximation. Now let's try to see how the graph transform works in a simplified setup. What is the simplified setup? It is when the differential does not contract tangent vectors on the unstable subspace. This is not always true in a non-uniformly hyperbolic system, but we want it to happen somehow on average, because in the end those backward contractions end up being summable, they go to zero; so on average the differential cannot contract unstable tangent vectors. So let's just, for simplification, assume that this truly happens at every point of the orbit. Then one can write this recursive formula for u²(x): just expand the expression here above, and you get that it equals two plus a factor times u²(f^{-1}(x)). This factor is less than or equal to one by assumption, so the whole thing is less than or equal to two plus u²(f^{-1}(x)). You divide both sides of the inequality appropriately and you get the following: one over u²∘f is greater than or equal to one over u² multiplied by the factor e^{-2/u²}. Now, why is this something we're interested in? Let me make a small drawing here on the side. Imagine that you have two admissible manifolds, and you start pulling them backwards, n steps, until you get to some neighborhood of a point. In each step they become closer to each other, and the rate at which they approach each other is the backward contraction in the unstable direction, which we saw is e^{-1/u²_n}; why n? Because this is the n-th step. Then the product of all the contractions is e to the minus sum of 1/u²_k, with k going up to n. So in order to show that the graph transform converges, you want to show that this sum goes to infinity, because when it goes to
infinity, this quantity goes to zero, and because there is a limit point, they converge. So let's see how we can show that this sum goes to infinity. You use the recursive formula: one over u²∘f^n is greater than or equal to one over u² times (taking the common factor outside) the sum of all those bounded factors. And we get that if the left-hand side were to stay finite, then the terms in the sum would have to go to zero, which would mean that the sum actually goes to infinity, a contradiction; so the sum must diverge. This is the engine which allows for the graph transform. The reason I chose to show you this simplified setup is that in it, if you rewrite this relationship, you get something of the following form: with I(x) = x·e^x, you get that one over u²∘f is greater than or equal to I^{-1} of one over u². And u², what we should have in mind, is the inverse Lyapunov norm. Okay, so I want to circle back to the definition of strong temperability. In the definition of strong temperability we have one more argument, or one more structure, sorry: this little γ. But this γ doesn't really matter, because it's factored out here with the 1/γ; it's just to give us another degree of freedom in the choice of strong temperability. The rate is exactly the same rate: requiring the function q to be strongly temperable is exactly requiring that this sequence be strongly temperable with respect to this function, the same requirement. And as we can see here, in this very ideal setup, this is sort of the minimal restriction we can have. So the restriction of strong temperability is minimal in a way, and it is the fact that strong temperability needs to hold with respect to this expression, the inverse Lyapunov norm, which comes from these sums; and as I tried
to explain, these are almost optimal in a way, because it would be conceptually wrong to try to put an exponent here: it would either obstruct the Markov structure or not allow you to code measures with small exponents. Okay. So this is the engine which allows us to carry out the graph transform, which tells us that even for weak chains one can construct weak manifolds, which is nice: they contract, but they may not contract exponentially fast. So a natural question is, are those foliations absolutely continuous with respect to holonomies? And the answer is yes, they are absolutely continuous with respect to holonomies. Then there is another general fact, about the existence of strictly weak foliations. A strictly weak foliation is a weak foliation whose contraction is strictly sub-exponential. Whenever we are in the presence of small exponents, meaning we have invariant measures whose Lyapunov exponents are not uniformly bounded away from zero, then you must also have strictly weak foliations. Let me explain; the construction is quite straightforward. You start with the fact that, given invariant probability measures with small exponents, one can construct periodic orbits which have small exponents, and if they all belong to the same homoclinic class, they can be coded transitively, due to Buzzi-Crovisier-Sarig. Once you have periodic orbits on a transitive shift space with weak exponents, one can concatenate them cleverly, in a way which ensures first that the concatenated chain is recurrent, but also that the tail property, the exponent at the tail, will be zero. So you get a chain; it must induce an unstable leaf or a stable leaf, and it will have sub-exponential contraction. Now let me make another remark to justify this setup. A very interesting setup to work in is where we have a C^∞ surface diffeomorphism, and you may assume that the pressure, over measures with positive entropy, of the
geometric potential is zero. That's a setup interesting enough in its own right to try to fully understand. If you assume, in this setup, that for all invariant probability measures the Lyapunov exponents are bounded away from zero, then the result of Buzzi-Crovisier-Sarig tells you that there is an SRB measure; so you could say that that case is solved, and this is exactly the converse of our assumption. So a motivation for this setup is that in all the cases which we don't know to be solved, where we don't know that there is an SRB measure, you do have a strictly weak foliation, and it might be interesting to understand it. (Question: Could I ask, what is a strictly weak foliation? Answer: Strictly means that the contraction is strictly sub-exponential; the Lyapunov exponent is zero. Question: Thank you very much. Answer: Sure. Because otherwise we just know contraction, but we didn't give a lower bound on the contraction, which is an upper bound on the rate.) Okay, so now let me discuss some examples. I was very happy this week to see that almost-Anosov systems are something which gets a lot of attention, so I don't need to put a lot of time into defining them; it turns out that the audience is familiar with the construction. But let me state it one more time. A definition due to Hu and Young: given the two-dimensional torus, f is a C^r diffeomorphism of the torus, where r is greater than or equal to two but is also allowed to be infinity. It is required to be topologically transitive, it is required to have an indifferent fixed point, and you want to have, everywhere on the manifold, a splitting of the tangent space such that on the stable subspace there is uniform contraction everywhere, with some k_s smaller than one; on the unstable subspace there is never contraction, and there is always expansion except at one point, the point p, and at p there
is no expansion at all. So the results of Hu and Young say that this setup is enough to show that there are no SRB measures. Now let me tell you a theorem. First, every point in this system, aside from the indifferent fixed point, is zero-summable, and our coding can code all invariant probability measures, aside from the delta measure at the indifferent fixed point, simultaneously, in a finite-to-one almost everywhere fashion, while there are measures with arbitrarily small exponents. And now there is work in progress which I want to mention. My argument is not complete for this result, and if anyone is interested I can explain why I think it should be completed, I don't know how soon, but I will mention the result, and then I will show a corollary, why I'm trying to prove this result and why it is interesting to me. Okay, so what we do know: first, there are some notions of almost-Anosov systems where you have a non-degenerate form around the indifferent fixed point, which allows for σ-finite SRB measures, and in this case we can show that we can code Lebesgue almost every point, using absolute continuity; in fact here you don't need the weak foliations, the absolute continuity of the exponential foliations is enough; but you can show that you can code Lebesgue almost every point while there is no invariant density. Now, where the argument is not complete yet is in the setup mentioned here: when you do not have a σ-finite SRB measure, we would still like to be able to code Lebesgue almost every point recurrently; in fact a weaker, one-sided statement is enough for us. In this setup it's also true that the Lyapunov exponent for Lebesgue almost every point is zero, but that is not an obstruction for the coding, as I explained. Now, there is an immediate corollary of this property. I will not explain how to derive it; it's a well-contained argument which uses properties of the symbolic space
and it is written somewhere else, but it is technical, so I will not explain it today. It says that if this property holds, then the pressure of the geometric potential is zero on Σ_0, where Σ_0 is the coding of the recurrently strongly temperable points, where we allow for zero exponents. This holds given that we can code; and while this holds, simultaneously another thing holds: for every χ greater than zero, the pressure on Σ_χ, the symbolic coding of the χ-hyperbolic measures, must be strictly less than zero.

So let me just explain why this is true. Okay, for now don't read this; let's just read this box. Why is the pressure on Σ_χ less than zero for χ greater than zero? Assume otherwise, assume that the pressure here is zero. Then you can find a sequence of measures μ_n which approximate the pressure, so their pressure goes to zero, but simultaneously the Lyapunov exponent is bounded from below, and it follows that their entropy is bounded from below. Now, the limit point must be an SRB measure, and let me show why. First of all, the Ruelle inequality gives us a bound from above by zero for the limiting measure: h(μ) + ∫φ dμ ≤ 0, where φ is the geometric potential. But you can also decompose h(μ) + ∫φ dμ into the difference of the entropies with the sequence elements, plus the difference of the integrals of the geometric potential, plus the pressure of the sequence elements. The last term goes to zero by the choice of the sequence; the middle one goes to zero because the measures converge in the weak-star topology and φ is continuous; and in C^∞ the first one is greater than or equal to zero by the upper semicontinuity of entropy. So you get a limiting measure which satisfies the entropy formula and has non-zero entropy; it must be an SRB measure. This is a contradiction to the result of Hu and Young, therefore the pressure must be strictly less than zero on every Σ_χ.

So we are not able to capture this phenomenon, where we have a sequence of measures which want to approximate the pressure (the pressure is zero, they want to approximate the pressure) but the sequence escapes every Σ_χ coding; you are not able to capture this sequence. And the reason that you are losing something important here is that when you are not able to capture the diffusing process, you are not able to capture its limit; and the limit of the process is that the geometric potential would have zero pressure and would be null recurrent on the Σ_0 space. So you truly are missing a symbolic description of this phenomenology where entropy diffuses, and it happens also with C^∞ examples. As I said, this can all be shown given the coding, though that argument is not complete yet; if anyone is interested I will explain why I think it should be true.

Now, some people might not like the following slide, but I feel obligated to show it, because I said that every point is zero-summable, so I feel obligated to explain how. The idea is the following. First you take a Taylor expansion of the action of the differential on the unstable direction at the indifferent fixed point; actually, let's call the map f⁻¹, because that is what we are interested in. The first step is to show the relationship between the Taylor expansion of f and that of its inverse. Then you notice that fⁿ(x) must go to zero, by the same argument we used before: it has no fixed point aside from zero. And then you use a nice induction argument to show the following. These constants are important; I am sorry that they may look messy, but they are crucial, and I will try to explain why. You need a bound on the rate of how slowly this goes to zero, with the exact constant: the constant is so important because you also want to control the error term and show that it goes to zero, as I will explain in a second. This constant ends up being an exponent, so you need to be able to show this bound; in fact it is not only a bound, it characterizes the rate up to the actual constant.

And what happens is that you end up showing something of the following form. Call the constant c for a second: you end up showing a bound whose factors have the form (1 - c/n), raised to a power in which the (r-1) cancels with the 1/(r-1); each factor is roughly e^(-c/n), and summing c/n up to n gives c·log n, so the product behaves roughly like n^(-c). So in order to have summability of these terms over N, you have to show that the exponent is sufficiently nice; you have to control this constant properly. And that is indeed what you get: a bound on the n-th derivative which is a constant over n^(r/(r-1)). It means that for every α greater than (r-1)/r, which is even smaller than one (in particular two is greater than this quantity), the sum is finite, and so you get that you are summable. Okay, I know it was technical, but I felt some obligation to show it.

Good, so now we can discuss codability. It may not be clear to you which measures are carried by the recurrently strongly temperable points, so let's show a condition. The sufficient condition is the following: we want the square of the inverse Lyapunov norm along the orbit to be sublinear, little o of n. And this is almost optimal: I mean, it is enough to have big O with some constant, but that constant would have to depend on our choice of γ and ε, and of course we want the construction to work for every γ and ε, so you want it to be little o. And as I said, strong temperability is also optimal in this sense, because we have very nicely behaved setups where this is exactly the boundary which appears.

So why is this condition sufficient to show coding? Consider this: assume that for some fixed δ, which we will determine later, this inequality holds for every n. In fact it holds for every large enough n, but for simplicity assume it holds for every n. Then define q_n to be exactly 1/(δ n^γ). By definition the quotient q_n/q_{n-1} equals ((n-1)/n)^γ, which is very close to e^(-γ/n). You substitute the formula for 1/q_n, you get this expression, and if you choose δ small this expression will be greater than or equal to what we need. You multiply both sides of the inequality, this term moves here, and you get exactly the defining inequality: indeed q_n is greater than or equal to e^(-ε) q_{n-1}, which is the strong temperability you want. So strong temperability is basically this property, and it is interesting to see when it holds.

Let's see one condition which implies it. One condition is that the inverse Lyapunov norm is integrable; but in fact we can use a weaker condition, which is that the difference between the square of the inverse Lyapunov norm and its image under f is integrable. And what happens in almost Anosov systems is that this difference is bounded by two. Let me remind you where we saw it: here, we assumed the setup where there is no contraction on the unstable subspace, and we got that u²(f(x)) is less than or equal to u²(x) plus two, so the difference is indeed bounded by two. So what we get is that this function is one-sided integrable; but because of the specific form of this function, one can upgrade the one-sided integrability into integrability, which is what we wanted. It is integrability in this case because u² is, up to a constant, the inverse Lyapunov norm (the s parameter is uniformly bounded), and so we get that indeed we can code every invariant measure aside from δ_p in almost Anosov systems. This is how we show that you can code them.

And I want to point out another thing: we can always lift the pressure of every Hölder continuous potential. Why is that? Because the pressure can always be approximated by uniformly hyperbolic measures, and uniformly hyperbolic measures have little q bounded from below uniformly; in particular it does not grow more than linearly, it is bounded. So you can code all periodic orbits, you can code all uniformly hyperbolic measures, and you can always lift the pressure to this symbolic space. And there are many measures which satisfy this, because the fact that you can code all periodic orbits in a Markovian structure, in a transitive way, gives you some shift space; you just pick some Markovian measure on the shift space and project it down, and the projection will be strongly temperable. It is a Markovian measure which lives on an invariant set; in fact, if it is a finite measure, it will have to be hyperbolic, and I will touch upon that later on. So these measures exist. So there is a challenge, which is to try and figure out which measures, as a general class, satisfy this property, and that is another work in progress, but not in the finalizing steps for now.
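As a side note not from the talk itself: the rate computation at the indifferent fixed point sketched above can be checked numerically on a toy local model. This is a minimal sketch under an assumed one-dimensional caricature f(x) = x - x^r near the fixed point (the function name and parameters are illustrative, not the talk's exact setup): the orbit should decay like ((r-1)n)^(-1/(r-1)) and the n-th derivative like n^(-r/(r-1)), so the sum of |Dfⁿ|^α converges exactly when α > (r-1)/r.

```python
import math

# Toy local model near an indifferent fixed point (an assumption for
# illustration only): f(x) = x - x^r, so f'(x) = 1 - r*x^(r-1), and the
# derivative at the fixed point 0 is exactly 1 (no expansion, no contraction).

def orbit_and_log_derivative(x0: float, r: int, n: int):
    """Iterate f and accumulate log (f^n)'(x0) as the sum of log f'(x_k)."""
    x, log_d = x0, 0.0
    for _ in range(n):
        log_d += math.log(1.0 - r * x ** (r - 1))
        x -= x ** r
    return x, log_d

r, n = 2, 200_000
x_n, log_d = orbit_and_log_derivative(0.3, r, n)

# Orbit decay rate: x_n * ((r-1)*n)^(1/(r-1)) -> 1, i.e. x_n ~ 1/n for r = 2.
print(x_n * ((r - 1) * n) ** (1.0 / (r - 1)))

# Derivative decay exponent: log|Df^n| / log n -> -r/(r-1), i.e. -2 for r = 2,
# so sum_n |Df^n|^alpha is finite exactly when alpha > (r-1)/r = 1/2.
print(log_d / math.log(n))
```

For r = 2 the first printed value comes out close to 1; the second approaches -r/(r-1) = -2, though slowly, since the correction decays like 1/log n. The key point matching the talk is that the summability threshold (r-1)/r is strictly below 1, which is what makes such points summable.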
Good. So the last slide I wanted to show is this nice relationship between temperability and the rate of contraction. First, define this parameter s², similar to how we defined u², and notice that the contraction after n iterates is controlled, up to a constant, by this sum of 1/s²(fᵏx). Why is it true? Because we saw what happens to the contraction after a change of coordinates; it is almost an equality. It is a bound from above, but it is very tight when s is big, and when s is not big then we are good.

Good. So what follows from this expression is that a point is hyperbolic if and only if it visits some level set of s² and u² with positive recurrence. In particular, every invariant probability measure which is carried by the zero-summable points has to be hyperbolic; the hyperbolicity arises, is generated, through the positive recurrence. But one may have orbits which still carry interesting measures, perhaps infinite, which are recurrent but not positively recurrent, and then one can get other rates of contraction.

So let me show one thing. Consider the function f(n) = e^(δn), an exponential function; it satisfies f'(n)/f(n) = δ, which is positive and uniformly bounded from below. So one can consider another function for which this quotient is not bounded from below, say it goes to zero monotonically; you have this equation, you can put any other function on the right-hand side which satisfies this property and then solve the differential equation; for example, you can find a stretched exponential rate. So if you have such a function, and you have this kind of temperability, namely that the square of the inverse Lyapunov norm is less than or equal to f(n)/f'(n), which goes to infinity because f'/f goes to zero, then you get the following: the cumulative contraction after sufficiently many steps is, since f'/f goes to zero monotonically, like taking, up to a constant, e^(-(1/c)∫ f'(t)/f(t) dt); but this integrand is the derivative of log f, so you get that it equals 1/f(n)^(1/c). In particular, when you have a sublinear rate like we wanted, you get that you must have super-polynomial contraction; and one can try to tailor even stronger rates, you can try to get stretched exponential, if you can show stronger things.

The reason this is interesting is because temperability rates are an object which is easier to study: you have level sets of a function, and one can try to use ergodic properties in order to find some asymptotic growth bounds on this function. And once you have some asymptotic growth bound, if you have quantitative information on return times, say, one can use that quantitative information on the rate of return times to get quantitative information on temperability, which ends up giving you interesting contraction rates on weakly unstable leaves. That is why I showed it. And so, if the work in progress is completed, if the argument is complete, then in particular we have this infinite SRB measure in almost Anosov systems, for which of course almost every point has Lyapunov exponent zero in the unstable direction, but you would still have super-polynomial contraction in the unstable direction.

So the last thing I wanted to say before I finish (let me stop the screen sharing) is to thank the organizers for this week, in the name of everyone in the audience. So thank you, it was wonderful.

Thank you very much. Okay, are there any questions? Okay, I have a question. So the dream is: you have a system such that Lebesgue almost every point has zero Lyapunov exponent, but maybe you can use your coding to construct an infinite measure such that almost every point has a weak stable manifold, with an absolute continuity condition, and also weak unstables on the unstable side. That is one version of the dream. And I do not want the condition in this generality, but I do want to try and figure out a sort of dichotomy, or trichotomy, between the different possible thermodynamic properties of the geometric potential: first you assume that the pressure is zero; then you can restrict to the cases when it is strictly less than zero, and then you know what the unstable dimension would be, and you will not have hope to get physical measures this way, because the dimension will be bounded away from one. But in principle, the additional power that you get by coding non-hyperbolic but zero-summable points is that maybe these points, and the amount of structure, can allow you to construct interesting infinite measures which have, on the one hand, zero exponent... Yeah, but on the other hand some type of quasi-hyperbolic structure? Exactly. So it will be a Markovian structure on the one hand, but you can capture completely a phenomenon, the diffusion of entropy, which we have examples of, and you want to capture this and its limiting process. So you end up finding the null recurrent process which happens in terms of your thermodynamic formalism, by finding a null recurrent SRB measure.

That is another question: so by coding, with one single transitive shift over the zero-summable orbits, you are able to code at once all hyperbolic measures, without any lower bound on the hyperbolicity? You can code all measures which are carried by the recurrently strongly temperable set; in almost Anosov systems this happens for every invariant probability measure. But a question which I am working on, and I think it is hard, is to see in what generality one can prove that a measure is carried by this set. I have some speculations, but I want to say that you always lift: in any case you can always code all periodic orbits which are hyperbolic, you can always lift uniformly hyperbolic measures, so you lift pressure; you always have measures which are strongly temperable, and you could always construct strictly weak foliations. And the goal is to show that for some interesting measures, that is, equilibrium measures of Hölder continuous potentials, the local product structure plus some Gibbs estimate, or a sort of Gibbs estimate, will allow you to show strong temperability, and then I hope to be able to code them as well. That is my goal. I see. Oh, wonderful, great. Any other questions? Stefano?

Yes, very nice, thank you Snir, very nice talk. So in the almost Anosov example, of course, I guess you are not able to use the full power of your construction of the stable leaves, because they are already there, right? I am not sure what I mean... In this case it is not needed. It is not needed, yes, true. So do you think it would be nice to see an example where really you do not know a priori that there is such a foliation? Or I guess it is very hard to verify? No, it is not hard, as I explained: you just need an example where the Lyapunov exponents are not bounded from below. I have not tried to construct such a system, but I am sure they exist; once you have that, you can construct periodic orbits whose Lyapunov exponents are not bounded from below, and once you have that, you can construct a null recurrent chain for which you would have to have strictly subexponential contraction. They exist: these foliations exist, intertwined with the exponentially contracting foliations. We did not see them, but nonetheless they are absolutely continuous and they exist. So you are saying that you would have a sequence of hyperbolic measures, each one with its own foliation, but then you would necessarily have some of these non-exponential foliations? Yeah, it is like a limiting process where the exponentiality goes away. And the thing is that those foliations do not carry invariant probability measures, because, as we said, an invariant probability measure would have to be positively recurrent, which implies hyperbolicity. So what do they carry? They exist geometrically, but they can carry infinite measures; that is what they can carry, and those can still be interesting, physical, and so on. Absolutely, absolutely. Okay, thank you very much. Thank you. Any other questions? Okay, if not, let's thank the speaker again. Thank you very much. And so it is time to finish this very interesting week, with a lot of interesting talks, and let's hope we can meet very soon in person at some place, preferably at the ICTP. Thank you, thank you everybody for coming, thank you everybody, take care, take care.