Thank you. First I would like to thank the organizers for the school, the conference, and the opportunity to talk. I'm going to present a joint result with Sylvain Crovisier and Omri Sarig. Okay, let's start. So, for the second time this week, you will have a short introduction to entropy. Why should we care? What we do, essentially, is that we want to count orbits in a certain way. You can do it for measures, you can do it from the point of view of topology, and it leads to a lot of interesting, fascinating things: you can use it to classify classes of dynamical systems in the probabilistic setting, in the symbolic setting, and even, as we will see, in the smooth setting. And, as was mentioned in the talk by Keith, it is especially interesting when you have systems with, say, some hyperbolicity, which because of this have a lot of invariant measures; entropy then becomes a way of selecting measures, interesting measures. So how do you count? Let me repeat, if only for the second time this week, one way to do it, which is, let's say, the Bowen, or Bowen-Dinaburg, way. You fix yourself a precision, a scale.
This is ε, a small positive number. Then you consider some time n, and you say: I will not make any difference between two points whose orbits stay ε-close during time n. Okay, that gives you a metric, and hence a notion of balls, and once you have these balls you can define covering numbers for subsets: that's just the minimum number of balls, in this sense, that you need to cover your subset. You can then extend it to measures, just by saying that now you are not really interested in covering a definite given subset, but just anything with significant measure, say measure at least one half; this was already introduced in the previous lecture. Okay, and once you have this number, you can define, for instance, the topological entropy, by saying: at a fixed scale ε, I just compute the exponential growth rate of the covering numbers I mentioned before. And if you are in a reasonable, nice setting, then you can just say: the topological entropy is what I get when I want to cover everything. Okay, and for measures, Katok's formula tells you that, at least in the ergodic case, you can essentially do the same; just, this time, instead of covering everything, you cover a subset of significant measure. Okay, so feel free to interrupt me.
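As a toy illustration of these covering and separation counts — everything here (the doubling map, the grid, the greedy construction) is an added example, not from the talk — one can estimate the growth rate of (n, ε)-separated sets for the doubling map x ↦ 2x mod 1, whose topological entropy is log 2:

```python
import math

def orbit_dist(x, y, n):
    # Bowen distance d_n for the doubling map T(x) = 2x mod 1:
    # the largest circle distance between the two orbits during n steps.
    d = 0.0
    for k in range(n):
        a, b = (2 ** k * x) % 1.0, (2 ** k * y) % 1.0
        delta = abs(a - b)
        d = max(d, min(delta, 1.0 - delta))
    return d

def separated_count(n, eps, grid=2048):
    # Greedily extract an (n, eps)-separated set from a fine grid:
    # keep a grid point if its Bowen distance to every point already
    # kept is at least eps (scan most recent first, so rejects are fast).
    chosen = []
    for i in range(grid):
        x = i / grid
        if all(orbit_dist(x, y, n) >= eps for y in reversed(chosen)):
            chosen.append(x)
    return len(chosen)

eps = 0.2
ns = list(range(2, 9))
counts = [separated_count(n, eps) for n in ns]
# log N(n) grows roughly linearly in n, with slope h_top = log 2
slope = (math.log(counts[-1]) - math.log(counts[0])) / (ns[-1] - ns[0])
```

The computed slope of log N(n) against n comes out close to log 2 ≈ 0.69 for this ε; letting ε go to zero, as in the definition, would remove the remaining bias.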
Otherwise, you are seeing these things for the second time today. Okay, one of the key facts is the variational principle, which tells you that what you see when you look at measures reflects, in some sense, all that can be seen in your system, as soon as you are in this topological, compact setting. And there is a very nice proof by Misiurewicz, which is also interesting because of the way it goes: if you try to equidistribute measures on (n, ε)-separated points — so you build some kind of equidistributed measure, and you let n go to infinity — you get something whose entropy is close to the full entropy. So this is one way to say: here is one more reason to think that measures maximizing the entropy are going to be an exciting thing. Okay, anyway, that's something we can decide for ourselves. Now, when do they exist? First, what I should say is that their existence is, in some sense, much less general than the setting of the variational principle: you need something more than continuity and compactness. One such thing, which was mentioned by Keith, is expansivity: if no two distinct orbits stay close to each other for all times, positive and negative, then this implies that the entropy function on invariant measures is upper semicontinuous, so the supremum is achieved.
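For reference, the variational principle and Katok's formula invoked above read as follows (notation added here, not from the slides):

```latex
% Variational principle (f continuous on a compact metric space):
h_{\mathrm{top}}(f) \;=\; \sup\{\, h_\mu(f) \;:\; \mu \ f\text{-invariant probability measure} \,\}.
% Katok's entropy formula (mu ergodic): with r(n,\epsilon,K) the minimal
% number of Bowen balls B_{d_n}(x,\epsilon) needed to cover K,
h_\mu(f) \;=\; \lim_{\epsilon\to 0}\, \limsup_{n\to\infty}\, \tfrac1n \log\,
  \min\{\, r(n,\epsilon,K) \;:\; \mu(K) \ge \tfrac12 \,\}.
```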
This means you have a measure of maximal entropy. The same upper semicontinuity argument applies, in fact, as soon as you have a C∞ map on a compact manifold; this is due to Newhouse, using Yomdin's theory. But, interestingly, not only is C0 not enough, but for every finite smoothness you have counterexamples: this was shown by Misiurewicz in dimension four or more, and you can also build examples, as soon as the question makes sense, in dimension two. Okay, so this was for existence, where you have this still very general theorem of Newhouse: C∞ gives upper semicontinuity, hence existence. Even if you don't really know anything about the dynamics, you have this kind of functional-analytic reason that gives you measures of maximal entropy. So now, the more difficult question is: what about their number? As was mentioned, if for instance you have a system with zero entropy, then all invariant measures are measures of maximal entropy, and you can use this to see something very easy, but still something you have to keep in mind: it's easy to have systems with uncountably many measures maximizing the entropy. What is the solution to this exercise? You just take your favorite system which has a measure of maximal entropy — say a C∞ map on some compact manifold — and you take the product with, say, the identity of the circle. Using what you know about the entropy of products and things like this, you can easily check that, while you had at least one measure of maximal entropy for f (because it was C∞), you now have uncountably many ergodic measures of maximal entropy. So you need some assumption if you hope for finiteness. What kind of condition would ensure finiteness?
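The product exercise just sketched can be written out (notation added here):

```latex
% f a C-infinity map with an MME mu; take the product with the identity:
h_{\mathrm{top}}(f \times \mathrm{id}_{S^1}) = h_{\mathrm{top}}(f),
\qquad
h_{\mu \times \nu}(f \times \mathrm{id}_{S^1}) = h_\mu(f)
\quad \text{for every } \mathrm{id}\text{-invariant } \nu.
% Taking nu = delta_theta, the Dirac mass at a point theta of the circle,
% each mu x delta_theta is ergodic and of maximal entropy:
% uncountably many ergodic MMEs.
```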
What type of system? One of the first types of systems for which it was known is subshifts of finite type. Okay, let me just point out that for existence we had this expansivity condition, and expansivity is automatically satisfied for any shift on a finite alphabet, so here again we have an example of existence. Now, among all shifts, the ones for which it is classical that they have finitely many measures of maximal entropy have a much more rigid structure, and that's the idea: usually, when you can prove finiteness, it's because you know a lot of things about your system, about your dynamics. This is the opposite of the question of existence. Okay, and it was then shown that this also holds, in some sense, if you go classically through a Markov partition: this is a consequence of, let's say, Parry's result, that this is also true in the smooth setting for uniformly hyperbolic diffeomorphisms. So hyperbolicity gives you a setting where you can hope to show that you have finitely many. For uniformly hyperbolic systems this was done in the 70s. So what about the non-uniform case? Is it true that, when you have something which is non-uniformly hyperbolic — say all measures are hyperbolic, in the sense that they have no zero Lyapunov exponent — this gives uniqueness, or finiteness, or something like this? Okay, so first, in the symbolic category, the answer is somewhat yes. Well, you have to assume something, because for subshifts of finite type you had, by the nature of things, only finitely many pieces, so you would get finiteness of measures of maximal entropy; for their generalization to infinite but countable alphabets you can of course have infinitely many: you can take one system and take countably many copies of it.
So you cannot hope to have finiteness without adding an assumption. But, for instance, Gurevich showed the result that makes sense here, which is that if you assume irreducibility, then you actually have at most one measure maximizing the entropy for this type of shift. So in the symbolic setting you have this nice, natural generalization of the result. Okay, what about the smooth setting? Actually, it's not enough. Already for smooth interval maps with finite smoothness — assuming positive entropy; I made the remark before that otherwise the question doesn't make sense — you can check, and it's not very difficult, how to build examples where you see that in finite smoothness you can still have infinitely many measures of maximal entropy. Actually, there are countably many; that is better than the general situation, but still you can have infinitely many. The thing is that here, because of Ruelle's inequality, which will appear in a second, you know that when you have positive entropy, measures with entropy close to the maximum will have a uniform lower bound on their exponents. So in some sense you do not have uniform hyperbolicity, but you have a somewhat strong form of non-uniform hyperbolicity; still, it is not enough. What do you need more? For C∞ it works.
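For reference, Ruelle's inequality in the two low-dimensional cases quoted here reads (notation added here, not from the slides):

```latex
% Interval maps (dimension one), mu ergodic:
h_\mu(f) \;\le\; \lambda^+(\mu) := \max\Big(0, \int \log|f'|\, d\mu\Big).
% Surface diffeomorphisms, mu ergodic with exponents lambda_1 >= lambda_2:
h_\mu(f) \;\le\; \max\big(0, \lambda_1(\mu)\big).
% Hence h_mu(f) >= h > 0 forces lambda_1(mu) >= h: a uniform lower
% bound on the largest Lyapunov exponent for high-entropy measures.
```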
I mean, if you take an interval map which is C∞ smooth, not only does it have measures of maximal entropy by Newhouse, but it actually has finitely many of them. And what is behind this? That's Ruelle's inequality — I'm quoting the special case in dimension one for maps, or in dimension two for diffeomorphisms of surfaces. You see that if you have a lower bound on the entropy of the measure, then you have a lower bound on the largest Lyapunov exponent. So what is the extra thing that you get when you go from finite smoothness to C∞? Well, it's what we'll see. Okay, so that's the theorem; now we want to extend it to dimension two. This was dimension one for maps, and usually we expect that, when things are nice, what holds for interval maps should hold, in an adapted form, for surface diffeomorphisms. For a long time there was no progress on this question, and then, a few years ago, Sarig succeeded in building a kind of symbolic dynamics for C^{1+α} surface diffeomorphisms with positive topological entropy, and one of the consequences of his construction is that, as for interval maps in low smoothness, the set of measures of maximal entropy is countable; it cannot be uncountable, as in the general case. Okay, and we will build on his result to show the finiteness. So this is the main theorem; I repeat, it's joint work with Sylvain and Omri. You take any compact surface and a C∞ diffeomorphism of this surface. The hyperbolicity assumption here is that the topological entropy is positive. Then — and this is the most important thing — you have only finitely many ergodic measures that maximize the entropy. And it's even nicer, in the sense that if you
assume a natural irreducibility condition on your diffeomorphism, namely topological transitivity, you really get uniqueness. Okay, so this is a result that somehow completes my PhD thesis, with a little help from my friends. So now I'm going to try to explain some things about how you prove this. Oh, sorry, I forgot: we will actually see that we prove a little bit more, and let me quote two things that we get, let's say, for the price of the previous theorem. The first one is that you can apply this to equilibrium states, and especially to Hölder-continuous small potentials. What does a small potential mean? It means that it satisfies this condition, which actually appeared somewhere in Keith's talk. The point of this condition, usually, in this business, is that you can check that it immediately implies that equilibrium measures — measures μ that maximize the sum of the entropy plus the average of the potential, the integral of the potential — will, under this condition, have positive entropy; and the positivity is not something abstract: it's the difference between the right-hand side and the left-hand side. Once we have this, we will see that the same argument that proved the theorem about measures of maximal entropy then applies to equilibrium states under this condition. Sorry — yes; what we obtain is that the intersection of the supports of two measures like this will have zero topological entropy; more than that, I don't know.
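Schematically, the small-potential condition and its consequence can be written out (this is a reconstruction in added notation, not the slide itself):

```latex
% Small potential: a Holder continuous phi with
\sup\varphi \;-\; \inf\varphi \;<\; h_{\mathrm{top}}(f).
% An equilibrium measure mu maximizes h_mu(f) + int phi dmu over
% invariant measures; testing against a measure of maximal entropy gives
h_\mu(f) \;\ge\; h_{\mathrm{top}}(f) + \inf\varphi - \int\varphi\, d\mu
\;\ge\; h_{\mathrm{top}}(f) - \big(\sup\varphi - \inf\varphi\big) \;>\; 0,
% so the gap between the two sides of the condition is exactly the
% lower bound on the entropy of mu.
```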
I don't know. The other result, which I don't really want to read out, is that we actually prove a theorem about C^r diffeomorphisms, and then everything gets a little bit more complicated: instead of the condition that the topological entropy is nonzero, you need it to be bigger than some threshold, where in the numerator you see essentially the Lipschitz constant and in the denominator you see the smoothness. This has to do with the fact that you have lots of estimates like this, first in Yomdin's theory, or when you try to compute things as in Sard's theorem, and we'll see that's how it comes about. The interest of having a C^r statement is maybe to understand better what's going on in the C∞ case, where essentially everything becomes zero. Okay, and then you get the same conclusion. Okay, so now, what is the strategy of the proof? There are, let's say, three parts. One part is to introduce a kind of spectral decomposition, which will be based on the homoclinic relationship, but between measures, not directly between points. Then we use this big theorem of Sarig that I quoted, or at least whose consequences I quoted already. This theorem is very powerful, but it's also very difficult to establish, and in particular it needs a construction which is quite abstract: it's really difficult to understand how what's going on on the manifold is reflected in what's going on in the symbolic dynamics. So here is the point of introducing these — sorry, there is a mistake on the slide: it's not "hyperbolic measure class", it's "homoclinic measure class". I mean, the measures are hyperbolic too, but the word that should be there is homoclinic measure class.
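For the C^r statement, the threshold alluded to can be written schematically as follows (this is a reconstruction in added notation, with λ(f) playing the role of the Lipschitz constant):

```latex
% Asymptotic dilation (exponential growth rate of the derivatives):
\lambda(f) \;=\; \lim_{n\to\infty} \tfrac1n \log \max\big( \mathrm{Lip}(f^n),\, \mathrm{Lip}(f^{-n}) \big).
% C^r hypothesis replacing "h_top(f) > 0":
h_{\mathrm{top}}(f) \;>\; \frac{\lambda(f)}{r}.
% As r goes to infinity the threshold tends to 0, recovering the
% C-infinity condition h_top(f) > 0.
```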
So the point is to have this notion, which is clearly — let's say canonically — defined on the surface, and to understand it on Sarig's symbolic dynamics. This is where the local version of Sarig's theorem comes into play: we will get an essentially symbolic coding for each of these homoclinic measure classes. So, finally, you have this spectral decomposition into these natural pieces; you know, by the local version of Sarig's theorem, that each piece has at most one measure of maximal entropy, and now, to conclude, you only need to show that only finitely many pieces can have entropy above some threshold. If you have all this, then you have the main theorem. Oh yes, what I should say is that this type of idea is very close to what Federico and Jana Rodríguez Hertz, Ali Tahzibi and Raúl Ures did for SRB measures on surfaces. Some of the problems are different, and the results are different too, but they also introduced this idea that you should look at a version of homoclinic classes for invariant measures, and also that in dimension two you can use planar topology, and that simple — let's say simple — figures tell you a lot of things. So it's different, but it's in the same spirit. Okay, so what are, more precisely, the ingredients? As I said, the big thing is Sarig's coding for surface diffeomorphisms. More specifically, we use the fact that he shows that, in some sense, the size of the alphabet of symbols you need to use in his coding is strongly related to how good or how bad the hyperbolicity on the surface is for a given point. I will give a slightly less mysterious statement when I go into the horrible details later. Okay, the second ingredient is this thing about planar topology, so let me make a drawing — it looks a little bit stupid, but still, one has to do it.
What is a rectangle for us? So suppose somewhere you have a hyperbolic set Λ — often it will be a horseshoe, so topologically transitive, locally maximal, and invariant. This is Λ. If you are not too sure about the other words, it's the last ones that matter most. My rectangle is just — well, sometimes a rectangle is just a rectangle — an open set whose boundary is made of pieces of leaves of the dynamical foliations of this uniformly hyperbolic horseshoe. Okay, so this is really it. And essentially, what I'm going to use is that when things intersect in a non-trivial way, their boundaries have to intersect, and then I know things. That's the second ingredient. Okay, to build these rectangles, and to show that you can cover a lot of measure with them, we will use Pesin theory; and especially, to build the relevant horseshoes linked to a hyperbolic measure, we will use Katok's horseshoe theorem. It tells you that when you have a hyperbolic measure — so, you know, with only nonzero exponents — you can find a horseshoe which has topological entropy almost equal to that of the measure, which is homoclinically related to the measure, and which is close to the measure in lots of different senses. Okay, and one of the things that is known about these horseshoes is that the collection of stable sets gives you a foliation, or technically a lamination: the leaves are C^r and they depend continuously on the point. So it's not exactly a foliation; let's call it a dynamical lamination, or dynamical foliation. These things are known to have positive transverse Hausdorff dimension, and there is even a lower bound for these things — the lower bound comes into play when you want to do the C^r case; for the C∞ case you just have
to know that the transverse dimension is always positive. Okay, so we are going to do this Jordan curve theorem thing, to show that, for instance, when you have two rectangles that intersect in a non-trivial way — one meets the other, and neither covers the other; that's a definition, let's say, of intersecting — then you see that the boundaries have to intersect, as sets. And then the question you immediately ask yourself — well, maybe not immediately, but after the next slide — is whether these intersections between smooth curves are transverse or not. The topology, of course, doesn't tell you anything about that; but that's where Sard's theorem, or an adaptation of it to dynamical foliations, where you have this kind of partial smoothness — very classical — comes into play. And here you have the other bound, which tells you that in the C∞ setting, if you have a foliation of this type, you can look at the sub-foliation defined by the leaves which are tangent to some smooth curve, and you get that the dimension of these bad guys is actually zero. When you compare this with the previous thing, you see what happens when you have an intersection between curves — well, I will come back to that later. When you are looking at two given curves, of course these curves could intersect non-transversally; but if you use the fact that these curves are actually accumulated by a whole dynamical foliation, like here, then, using these two arguments, you can show that, well, maybe not this curve, but another curve, which is just as good as the first one, will have this transverse intersection that you're hoping for.
Okay, so essentially, with point two we had topological intersection, and then from three and four you get nice transverse intersections. And finally, the finiteness. What is the extra secret ingredient, beyond hyperbolicity? It's Yomdin theory, and more precisely, Yomdin theory gives you a way to uniformly control the tail entropy, which was introduced by Misiurewicz under the name of conditional topological entropy, and which I called for some time local entropy. It is a very simple quantity, in fact, though maybe it doesn't look like it. At each scale ε, what you do is look at the complexity — that is, the topological entropy defined just by asking: how many balls do I need to cover this subset? Here, the subset you want to cover is a kind of ε-tubular neighborhood of a given orbit: the set of points staying close to that orbit at scale ε. How much entropy can you see when you keep looking at this ε-tubular neighborhood, but now with a much smaller microscope, at a much smaller scale? That's the topological entropy of this set. Okay, this is for the neighborhood of one given orbit; now you take the supremum over all orbits. This is at scale ε; as in the definition of the entropy, you should then let ε go to zero. This is the tail entropy of f. And it's called the tail entropy because one of its key properties, which is completely obvious from the definition, is that when you compute, say, the entropy of a subset, or of a measure — it doesn't matter — you get immediately from the definition the estimate that, if you stop your computation of entropy at a given scale, with a given precision, then what you can miss is bounded just by this quantity. So it gives you uniformity; it gives you a bound.
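Written out, the quantity just described is (notation added here, not from the slides):

```latex
% Points staying eps-close to the orbit of x for n steps:
B(x, n, \epsilon) \;=\; \{\, y : d(f^k y, f^k x) \le \epsilon, \ 0 \le k < n \,\}.
% Entropy seen inside these tubes at a much smaller scale delta,
% with r(n,\delta,K) the minimal number of (n,delta)-Bowen balls covering K:
h(f, \epsilon) \;=\; \lim_{\delta\to 0}\, \sup_x\, \limsup_{n\to\infty}\,
  \tfrac1n \log r\big(n, \delta, B(x,n,\epsilon)\big).
% Tail entropy:
h^*(f) \;=\; \lim_{\epsilon\to 0} h(f, \epsilon).
% The "what you can miss" estimate, at a fixed computation scale eps:
h_{\mathrm{top}}(f) \;\le\; \limsup_{n\to\infty} \tfrac1n \log r(n,\epsilon,M) \;+\; h(f,\epsilon).
```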
It's a first bound; symbolic extension entropy theory will give you much finer bounds, but this gives you a uniform bound on the non-uniformity in ε, and when you are C∞, this number goes to zero, so you get uniform convergence. Okay, that makes lots of people that I know happy, but here, for us, it's even more basic than that: it tells us that, essentially, for a given smooth map there is a scale at which, if we compute the entropy, we see almost everything; we don't need to go to arbitrarily small scales. [Question from the audience.] Ah, no, sorry — I forgot to take the limit as ε goes to zero. Thank you; it should be clear — I said it quickly, but it's worth repeating. Okay, so that's the list of things you should have ready before you start cooking. So let us start. The first ingredient is to define this homoclinic measure class — here I got it correct, it's "homoclinic". This is a very classical idea — let's say, following Viviane Baladi's definition of "classical", something that was very well known before you started your PhD. You first take two points, and you say that they are homoclinically related if the following holds. I'm not saying the points are periodic for now, so I have to assume that they have reasonable stable and unstable sets — say, what is produced by Pesin's stable manifold theorem: you look at the set of points which converge exponentially, in the future or in the past, to the orbit of the point you are considering, and I want these to be nice guys, just so that what I'm really interested in saying will make sense. Now, the homoclinic relation — which goes back maybe to Smale — is to say: I want the unstable set of one to cross the stable set of the other in a transverse way. Okay, so the picture is like this; okay, let me keep my colors.
You have — this will be — okay, let me say this is the stable of p, and this is the unstable of p, and then you can imagine the rest of the notation. Okay, so you want this. Once you have defined this for points, you extend it to subsets; when there is no ambiguity it's completely trivial — this is just a definition, anyway. It just takes care — even in the classical case of periodic orbits — of the fact that you need to pay attention to where you are on the periodic orbit. That's why you need to have these partitions: you want, essentially, a kind of synchronization, and the matching parts should be homoclinically related. Okay, so you can now apply this to horseshoes, which I mentioned before. Using the λ-lemma, you can check that, applied to horseshoes, this is really an equivalence relation, and, because everything is uniform and so on, it's enough to find two points, one in each horseshoe, which are related, to actually establish that the horseshoes themselves are related — in the sense that all pairs of points not forbidden by the periods, i.e. matching modulo the periods, will be homoclinically related, which is the definition of the subsets being related. Okay, there is something here that I haven't yet defined, so let's just keep that. As I said, the classical case, which goes back to Newhouse in the 70s, is to apply this to hyperbolic periodic orbits: of course, you know they have stable and unstable manifolds, and this is what you call the homoclinic class.
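In symbols, the relation drawn above is (a sketch in added notation):

```latex
% For points p, q with well-defined Pesin stable/unstable manifolds,
% p and q are homoclinically related when both transverse intersections
% are nonempty:
W^u(\mathrm{orb}(p)) \pitchfork W^s(\mathrm{orb}(q)) \neq \emptyset
\quad\text{and}\quad
W^u(\mathrm{orb}(q)) \pitchfork W^s(\mathrm{orb}(p)) \neq \emptyset.
```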
So there are two possible definitions; the people I speak with like the closure, and here we will also like the closure. If you have a hyperbolic periodic orbit, its homoclinic class is the closure of the set of hyperbolic periodic orbits which are homoclinically related, in this sense, to the first orbit. Okay, and it's well known that, for instance, generically you can define a spectral decomposition using these — these things appear in spectral decompositions; I don't have the time to go into this. Okay, and now what I'm really interested in is defining this for measures, in a way similar to what Federico Rodríguez Hertz and his coauthors did — in a slightly different way, but yes. So when I have two ergodic hyperbolic measures, I say they are related if they have full-measure subsets which are related in the previous sense. Okay, and now the homoclinic measure class is just the class for this relation; you can check that it's an equivalence relation on this set of measures, and that it generalizes the classical definition, in the sense that hyperbolic periodic orbits define hyperbolic measures, and the two notions then coincide. Okay, and you can check that, actually, at least in the C∞ case — don't pay too much attention — with respect to positive-entropy ergodic measures, it's essentially the same to speak of hyperbolic measures or to speak of measures carried by these homoclinic classes, which are the closures of homoclinically related periodic points. It's almost, but not exactly, the same; we don't know exactly what the difference could be. There could be
Let's say at the level of entropy zero, which is why we want to to deal with the measure stuff Okay, and one of the things we could know that one of the benefit of Going back to this more classical notion What the homo clinic class is a compact invariant set is that if you have a C infinity map the entropy function stays stays Upper semi-continuous when you restrict to the compact invariant set and therefore you still have by new house the exist the local existence of Measure that maximizes locally the entropy and this is for existence Okay Okay, then there is a local uniqueness theorem, which I'm not going to really to speak about since I have no time oops Okay, what is the Okay, this is what I want to to mention now It's just the corollary the corollary tell me it tells me that now I had this Global sorry theorem, but global with a control on hyperbolicity and Well linking the hyperbolicity for the map to how Smalls in the symbols I must use I cannot really explain more in the few minutes that remain and But here the the what we are at 10 is this now you can focus on a homo clinic measure class and They are only they can all be lifted somehow to a single piece of the Symbolic dynamics and the consequence of this is that each piece will have at most one measure of maximal entropy I mean we actually have one measure that maximum exactly one measure that maximizes the entropy among the class and this measure either it is of Entropy which is the topological entropy of the map or it's something lower And then it doesn't count as a measure of maximal entropy for the whole thing, but essentially you have here you have the local uniqueness and now the well In a zero seconds the finite multiplicity That we want to prove now can be formulated as this if you take any positive Number then the You can look at the homo clinic a measure class Which have entropy in the sense that they carry measure with entropy bigger than this positive number and This is a finite subset and 
when you combine this with the previous uniqueness theorem, you get the main theorem. How much time do I have? It's over. Okay, so that's the proof — you know all my secrets now. Let me just conclude very quickly. So I would like to say that, in some sense, this completes my twenty-year-old PhD. Now, it leaves you some problems. First: we prove uniqueness for — well, for finite smoothness we don't yet have, though we think we can get, but it's not really ready, a counterexample that would extend to really show that C∞ is necessary. Okay, I have no time to mention two and three. And then, of course, a further question, which is really open — we don't really know how to deal with it now — is this: you have this spectral decomposition; can you get more information, especially about the entropies and periods of the various pieces that appear in this spectral decomposition? What can you say about that? This would give us a classification of surface diffeomorphisms. Okay, and then, what about higher dimensions? Maybe Yuri Lima can tell us something about the case of three-dimensional flows — interval maps should be like diffeomorphisms of surfaces, which should be like flows in three dimensions — well, maybe we don't have to wait twenty years for that. And now, the bigger question is what happens for diffeomorphisms in higher dimension. We know that some extra assumption is needed; it's not clear why, and it's not clear which ones that do the job are really necessary, or whatever. Anyway, we are already three minutes over time, so thank you.