Thank you. Thank you very much to Stefano and the organizers for the invitation. I would like to discuss the thermodynamics of one-dimensional maps, and I will concentrate most of the talk on quasi-quadratic maps. These are maps which are, say, smooth with a unique generic critical point of quadratic type; I will assume they are C-infinity to simplify most of the time, and perhaps transitive on some interval. At the same time I will also consider, say, polynomial maps, or quadratic polynomial-like maps: a map defined on some domain U, compactly contained in some bigger domain which is the image of the map, in such a way that the map is holomorphic and proper. Mainly for simplicity I will assume there is a unique critical point, of quadratic type. This is the class of maps I would like to discuss — both at the same time — and I will just refer to them as quasi-quadratic maps. And I want to discuss the thermodynamic formalism, namely this geometric pressure function, defined for a parameter beta. Here I forgot to mention: the maximal invariant set of the map is the filled Julia set K, and the Julia set J is just the boundary of K; f is always transitive on this set, so we have basically a unique piece where the dynamics is more interesting. The pressure of f at beta is, as usual, the supremum over all invariant measures supported on the Julia set of the measure-theoretic entropy minus beta times the integral of log |Df|, which I will sometimes write as the Lyapunov exponent of the measure: P(beta) = sup over mu of ( h(mu) - beta * chi(mu) ), where chi(mu) is the Lyapunov exponent of mu. A measure mu is an equilibrium state, or Gibbs measure, if the supremum is attained.
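To make the definition concrete, here is a minimal numerical sketch of my own (not from the talk): for a full two-branch piecewise-linear expanding map with branch slopes s0 and s1, the geometric pressure has the closed form P(beta) = log(s0^(-beta) + s1^(-beta)), and one can check this against the variational definition by maximizing h(mu) - beta * chi(mu) over the Bernoulli measures. All names here are my own toy choices.

```python
import math

def pressure_closed_form(beta, s0, s1):
    # transfer-operator eigenvalue for a full 2-branch piecewise-linear
    # expanding map with branch slopes s0 and s1
    return math.log(s0 ** -beta + s1 ** -beta)

def pressure_variational(beta, s0, s1, n=200000):
    # sup over Bernoulli(p) measures of  h(mu) - beta * chi(mu)
    best = -math.inf
    for i in range(1, n):
        p = i / n
        h = -p * math.log(p) - (1 - p) * math.log(1 - p)   # entropy of Bernoulli(p)
        chi = p * math.log(s0) + (1 - p) * math.log(s1)    # Lyapunov exponent
        best = max(best, h - beta * chi)
    return best

beta, s0, s1 = 1.7, 2.0, 3.0
print(pressure_closed_form(beta, s0, s1))
print(pressure_variational(beta, s0, s1))   # agrees to high precision
```

At beta = 0 the pressure reduces to the topological entropy log 2, which is a quick sanity check on both formulas.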
So the motivation is that this pressure function appears naturally when studying some multifractal spectra, as well as in some large-deviation rate functions, and I want to discuss its properties. There is the theorem that Felix mentioned this morning that says, basically, that this function is very nice. Without any further assumption beyond transitivity — in fact in greater generality — the function is real analytic except at most at two points, the so-called phase transitions. So in the picture there is one line in the negative part and another line here; the horizontal parameter is beta, and the line is minus beta times some number I have not defined yet. The pressure function is some convex function lying above these two lines. One asymptotic slope is given essentially by chi-inf, the infimum of the Lyapunov exponents of invariant measures, and the other asymptotic slope is given by chi-sup, the supremum of the Lyapunov exponents. The theorem says that the worst that can happen is that the graph touches one of these lines at some point, and from there on it automatically coincides with that line; the same on the other side. So let me draw a picture of the worst case: there are parameters beta-minus and beta-plus, and outside the interval between them the function is just linear, coinciding with these two lines; in between it is real analytic. So P(f, .) is real analytic on the interval (beta-minus, beta-plus). In fact more is true: for each beta in this interval there is a pretty good understanding of equilibrium states and Gibbs measures — there is a unique equilibrium state, it depends analytically on beta, and so on. So everything is really very, very nice in this part, and we also know more or less when these transitions happen.
So for example, when there is some lack of non-uniform hyperbolicity, we know there is a phase transition at the first zero of the pressure function, which is the hyperbolic dimension. We also know there are cases where a phase transition occurs after the first zero of the pressure — there are examples like that — so we have at least a very rough understanding of when this function fails to be real analytic. Yes — beta is fixed, and for each beta you take the supremum over all measures of this number, so you are maximizing a different function for each beta. If you want, you can forget about beta in some sense: for each mu you have a line, beta maps to h(mu) - beta * chi(mu), and you are taking the supremum of all of these lines; that is why you get a convex function. It is another way to see it. And I am not saying these lines are asymptotes, only that they give the asymptotic slopes — there are examples where this is not an asymptote, and I think the same happens in the negative spectrum. No, that line — the line through zero. Right, so there are essentially three kinds of phase transitions. The one in the negative spectrum we understand very well from the work of Makarov and Smirnov; basically you can remove it — you change the potential slightly in some way, there is a trick, and you get a nice analytic function, so in some sense it is removable. In the positive part there are two types of phase transitions. The first, high-temperature type is when beta-plus is at the first zero of the pressure; this one you can characterize as a lack of expansion — it happens exactly where the infimum of the Lyapunov exponents is zero, so that this asymptotic line would in fact be horizontal.
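The "supremum of lines" picture just described can be checked numerically. This is a hypothetical toy family of (entropy, exponent) pairs of my own, only to illustrate why the pressure, as a pointwise supremum of affine functions of beta, is automatically convex:

```python
import random

random.seed(1)
# hypothetical (entropy, Lyapunov exponent) data, one pair per invariant measure
measures = [(random.uniform(0.0, 1.0), random.uniform(0.5, 2.0)) for _ in range(40)]

def P(beta):
    # for each measure mu, beta -> h(mu) - beta*chi(mu) is a line;
    # the pressure is the pointwise supremum of this family of lines
    return max(h - beta * chi for h, chi in measures)

# a pointwise supremum of affine functions is convex: midpoint test on a grid
betas = [0.05 * i for i in range(-60, 201)]
assert all(P((a + b) / 2) <= (P(a) + P(b)) / 2 + 1e-12
           for a, b in zip(betas, betas[2:]))
print("midpoint convexity holds on the grid")
```

For very large beta the maximizing line is the one with the smallest chi, so P(beta)/beta tends to minus the minimal exponent — the asymptotic-slope statement from the talk, in this finite toy model.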
The other type, when chi-inf is positive, corresponds to the Collet–Eckmann case; there are examples where you can also have a phase transition in that case, and it is a different kind of phenomenon. So this is the background, and what I would like to discuss is the next step: the case of phase transitions at infinity. From now on I will assume there are no phase transitions in the positive spectrum, so we get a nice analytic function; this implies in particular that the asymptotic slope is strictly negative, so we are in the Collet–Eckmann case. The question — phase transitions at infinity, or chaotic temperature dependence — is the problem of whether the Gibbs states converge, and that is the problem I want to discuss today. I should say, I have been using this without comment: in statistical mechanics, beta is the inverse of the temperature. Of course here it does not mean anything physical, it is just the formalism, and beta going to infinity corresponds to the temperature going to zero; the problem is to understand the limit. It would be very nice to know that there is a limit, because the limit measure would be an approximation for the low-temperature Gibbs measures. Before going to the results, and the non-results, there is a very important remark — a very simple observation that goes back to a paper of Aizenman and Lieb. As we increase beta, the Lyapunov-exponent term becomes more and more important, so the Gibbs measures will tend to concentrate on measures that minimize the Lyapunov exponent. This will be very important in what follows: any limit measure — any accumulation point as beta goes to infinity — minimizes the Lyapunov exponent. And there is a second remark about mu-beta, which I have not defined yet.
So — sorry — I am assuming there are no phase transitions, and this implies that for every positive beta there is a unique Gibbs measure at temperature one over beta, which I denote by mu-beta. The problem is to understand the behavior of this measure as beta goes to infinity — whether this last type of pathology can happen. So, the very important remark: any accumulation point of these Gibbs measures is a Lyapunov-minimizing measure. More than that — looking at the graph you can more or less see why — among the measures that minimize the Lyapunov exponent, the limit measures will tend to favor higher entropy: any limit also maximizes entropy among the Lyapunov-minimizing invariant probability measures. These two facts are really, really important in what follows — well, yes and no; we will see the situation is more complicated, especially when there is a critical point, but that is basically the idea. So the first place to look would be the measures that minimize the Lyapunov exponent — you get a compact set of measures — and among those, you look at the ones that maximize the measure-theoretic entropy. That tells you where to look. However, there are very simple cases where there are many such measures and, for some reason, the Gibbs measures favor one among them, and it is sometimes really mysterious which one is the limit. Maybe I will comment more on this later. Okay, so that is the very important remark.
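The selection principle just stated — first minimize the exponent, then maximize entropy among the minimizers — can be seen in a tiny finite caricature. The three measures below are hypothetical, labelled with made-up (entropy, exponent) pairs of my own:

```python
# hypothetical finite family of invariant measures, encoded as (entropy, exponent)
measures = {
    "a": (0.30, 1.00),   # minimizes the exponent, low entropy
    "b": (0.60, 1.00),   # minimizes the exponent, higher entropy
    "c": (0.95, 1.40),   # highest entropy, but larger exponent
}

def equilibrium(beta):
    # the equilibrium/Gibbs state maximizes h(mu) - beta * chi(mu)
    return max(measures, key=lambda m: measures[m][0] - beta * measures[m][1])

print(equilibrium(0.5))    # 'c': at high temperature, entropy dominates
print(equilibrium(50.0))   # 'b': at low temperature, the exponent-minimizing,
                           # maximal-entropy measure is selected
```

Here the zero-temperature limit is "b": it beats "a" by entropy and beats "c" because, for beta large, the exponent term outweighs any entropy advantage.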
First of all, this minimizing property is very nice because it makes a connection with ergodic optimization — the setting where you fix some function and try to maximize its integral among all invariant measures — and there is a sequence of articles on this beta-to-infinity problem, so a lot is going on there. What I want to do is extract some of those results, known in the symbolic case, to deduce results in the smooth case here. So let me discuss first the uniformly hyperbolic case, and I will actually restrict to uniformly expanding maps of the circle, just to simplify the discussion. There is a very recent result of Contreras which you can combine with a very well-known trick. Contreras's result is in a symbolic setting — there is no derivative there, he just considers an arbitrary function — and the trick is that you can realize any Hölder continuous function as the (logarithm of the) derivative of an expanding map. If you do this, you can take this very nice theorem of Contreras and get the following: for a generic circle expanding map of class C-one-plus-Lipschitz, the Gibbs measures converge to a periodic measure. He proved that in this case there is a unique Lyapunov-minimizing measure, which happens to be a periodic measure — the equidistributed measure on a periodic orbit, so entropy zero — and so the Gibbs measures must converge to it: it is the only candidate. Unfortunately, at least I do not see any way to extend this to higher regularity; the proof really, really uses the Lipschitz regularity at some point. So, for whatever it is worth, I want to state a conjecture here — conjecture, or hunch, whatever.
I would like to see what happens in the analytic case — it is a conjecture, so I can say whatever I want: for a real analytic circle expanding map, there is convergence. There is only vague evidence; let me say it very quickly for those who know. First, there is a very beautiful result of Brémont in the symbolic case: if you have any potential with finite image — it takes only a finite number of values — then you automatically have convergence. His original proof is based on some sub-analytic machinery: the equilibrium states in this case are described by a finite number of functions, he shows these have a very nice regularity at infinity, and this implies the convergence in the end. There is another, looser source of motivation: the XY model from statistical mechanics. (If you Google this, you might get a modeling agency in Colombia, but it is not that one.) In this model the states belong to a circle — it is like the symbolic case, except that instead of finitely many states you have a circle of states — and in the simplest case you have a nearest-neighbor interaction given by some continuous function on the circle. There are examples of C-infinity interactions that give divergence, but for analytic interactions you do have convergence; that is actually not very difficult, it is really straightforward to see. So that is some loose evidence that the conjecture might have a chance of being true. Sorry? The interaction is nearest-neighbor and symmetric: you pick some function U on S^1 and evaluate it on the difference of neighboring states. Convergence of the Gibbs measures, yes.
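The simplest instance of the finite-image convergence phenomenon can be computed by hand. This is my own toy illustration, not from the talk: for the full shift on two symbols and a potential depending only on the first symbol (values phi0, phi1), the equilibrium state of beta times the potential is the Bernoulli measure with weights proportional to exp(beta * phi_i), which visibly converges as beta goes to infinity (to the maximizing symbol, or to the symmetric measure in case of a tie):

```python
import math

def gibbs_bernoulli(beta, phi0, phi1):
    # full 2-shift, potential depending only on the first symbol:
    # the equilibrium state of beta*phi is Bernoulli(p) with weights e^{beta*phi_i}
    w0, w1 = math.exp(beta * phi0), math.exp(beta * phi1)
    return w0 / (w0 + w1)

# beta -> infinity: the weight of symbol 0 converges monotonically to 1
for beta in (1.0, 10.0, 100.0):
    print(gibbs_bernoulli(beta, 0.2, 0.1))
```

Of course Brémont's theorem covers arbitrary potentials with finite image, where the limit is far less obvious; this one-coordinate case is only meant to show what the Bernoulli-type explicit description looks like.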
For this model, by a classical theorem, for every beta you have a unique Gibbs measure — you take a box, take a limit, and so on. What happens here is that the Gibbs measure is very explicit: after some transformation it is just a Bernoulli measure that you can describe very explicitly in terms of the function U, and when U is analytic you can prove convergence by hand. And there is another example that you can cook up and transfer to the hyperbolic case: the example of Chazottes and Hochman. In the symbolic case they found a Lipschitz potential for which you do have divergence — a phase transition at infinity. Their potential is very concrete: they constructed a subshift very explicitly and took minus the distance function to it, and by doing some calculations you can more or less estimate the Gibbs measures and prove by hand that they oscillate. Now there is a simpler example, by Coronel and myself, which is more qualitative in nature — you have to do fewer computations — and more flexible in some sense, and by adapting it you can get the following: there is a C-infinity circle expanding map for which the Gibbs measures diverge as beta goes to infinity. A convergence result in the analytic case would really complement this: it would show that analyticity is the right regularity to have convergence. So let me give a very brief idea of the proof of this result; let me just go to the symbolic case. The idea is to use the Aizenman–Lieb observation as follows. Suppose that in your symbolic space you have two shift-invariant compact sets, X-minus and X-plus — maybe I will put it here.
So, two compact invariant sets which are disjoint, and suppose you have some function, which will be your potential, that attains its supremum exactly on the union. The minimizing principle says that all the limit measures will be supported on the union of X-minus and X-plus. But in fact there is a second part: if the topological entropy of X-plus is bigger than that of X-minus, then by the variational principle all the accumulation measures will be concentrated on X-plus. And we can revert this very easily. Take compact invariant subsets, X-one-minus contained in X-zero-minus and X-one-plus contained in X-zero-plus — being careful that these sit inside closed-open sets, so that we can add the indicator function of their union to the potential. By adding a very small proportion of this indicator function, we know that the accumulation points will all be contained in the smaller sets; and if we choose the subsets so that the entropy inequality is reversed, then as soon as epsilon changes from zero to anything positive, the equilibrium measures will be supported on the other part — now on the minus part. So if we take epsilon very, very small, for a long time the Gibbs measures will stay close to the plus part, because they tend to follow the higher entropy there; and as soon as the epsilon term gets more weight, as the temperature goes to zero, they concentrate on the minus part. In this way you can shift the low-temperature Gibbs measures from one side to the other, and if you repeat this infinitely many times you can construct a potential whose Gibbs measures oscillate between the two sides, and you get divergence in the end. Okay, but actually I think there is a better way to describe what is going on.
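The switching mechanism can be caricatured with numbers. This is a toy of my own, assuming two invariant sets with the same minimal Lyapunov exponent (so that term cancels), entropies log 3 > log 2, and an epsilon-weighted indicator penalizing the high-entropy side: the preferred side flips near beta = (h_plus - h_minus) / epsilon, so a tiny epsilon postpones the switch to very low temperature.

```python
import math

def side(beta, h_plus, h_minus, eps):
    # compare "free energies" h - beta*(chi + eps*indicator); the common
    # Lyapunov term beta*chi cancels since both sides minimize it equally
    f_plus = h_plus - beta * eps    # the eps penalty sits on the plus side
    f_minus = h_minus
    return "+" if f_plus > f_minus else "-"

h_plus, h_minus, eps = math.log(3), math.log(2), 0.01
crossover = (h_plus - h_minus) / eps   # beta at which the preferred side flips
print(side(0.5 * crossover, h_plus, h_minus, eps))   # '+': entropy still wins
print(side(2.0 * crossover, h_plus, h_minus, eps))   # '-': the eps penalty wins
```

Iterating this — shrinking the sets and alternating which side carries the penalty, with epsilons chosen so each crossover lands at a prescribed temperature — is, in caricature, how the oscillating construction produces divergence.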
And that is in the title of my talk: what is really going on, I feel, is a sensitive dependence. The example is more or less saying that if we draw beta here and think of a graph over the space of measures, we have these Gibbs measures oscillating; but what describes this better is that by a very small perturbation — as soon as you take any positive epsilon — you can have a very drastic change in the behavior of the low-temperature Gibbs measures. That is really what is going on: by perturbing the potential, or the map, slightly, you get a significant change in the behavior of the Gibbs measures. It is very much like the notion we have in dynamics, where a very small change in the initial condition can have a big effect in the end. Let me state this more precisely. There is a C-infinity circle expanding map such that the following happens: if you choose any sequence of temperatures going to zero, there is a small perturbation of the map so that the Gibbs measures of the perturbed map diverge along that sequence. So by just being a little more careful with the construction, you can get divergence along any prescribed sequence of temperatures going to zero. So — great, I have five minutes and I have just finished the introduction. What I wanted to discuss is the quasi-quadratic maps. The thing is, in these examples we are using bump functions and so on to get very complicated behavior for smooth maps, but the question remains of what happens for real analytic maps, or holomorphic maps, which are very rigid: we cannot change things locally, so the situation is much more delicate. And the deal is: since the simplest case is difficult, let us go to a more complicated case and consider these quasi-quadratic maps.
In that case we do manage to get some examples, and I hope to have some time to explain the idea behind them. There is a real analytic quasi-quadratic map f_0 having sensitive dependence of Gibbs measures — the same statement as before, except that now we are in the space of quasi-quadratic maps. I will finish by explaining the idea and maybe comparing the two results: the circle expanding case with smooth maps, and these quasi-quadratic maps. We can actually have both cases: there is such an f_0 for which we have convergence, and there is another one for which we have divergence. What the theorem says is that if you prescribe any sequence of temperatures going to zero, then by a very small perturbation of the map you can get divergence along that sequence — the behavior really is that sensitive on the map itself. Well, that is the uniformly hyperbolic case which, as I said, I think is the difficult case. Yes — I am assuming there is sensitivity, so the critical point is involved in the dynamics. This is more or less what I wanted to address in the conjecture: in the case you mention I would expect convergence, but I do not know, it seems difficult. Okay, so let me give you the idea. Very roughly, the motivation comes from an example of Hofbauer and Keller: they constructed a physical measure of a quadratic map which is supported on a repelling fixed point. Heuristically, at least, the idea is the following: they managed to construct a quadratic map in such a way that most of the mass follows the orbit of the critical point, and they ensured that most of the time the critical point is near a repelling fixed point — it lands very close to the repelling fixed point, spends a long time getting away from it, only returns afterwards, and so on.
The main point of their trick is ensuring that most of the mass follows the critical orbit. Here we manage to do the same, but at low temperature: we found a class of maps for which, at low temperatures, most of the mass is near the critical orbit, and we can ensure that most of the time the critical orbit is oscillating between one periodic cycle and a second one. In this way we get the divergence, and by being more careful we get the sensitive dependence. Let me finish by comparing the two results. As you can see, in the circle-expanding construction we have to be very careful — the potentials have to take the same value at two different places, and we had to arrange this inductively — so, at least in the way we did it, this is definitely an infinite-codimension condition; I would expect maps of the type we used to prove that theorem to be very, very rare in the space of C-infinity expanding maps, as there are many, many conditions. But here, somehow strangely, the phenomenon seems to be robust: we can actually show that such f_0 form, or contain, a codimension-two submanifold in the space of quasi-quadratic maps. Let me just say quickly that one of the codimensions comes from the combinatorics of the map, and the second one comes from the same condition as before: we need the two periodic orbits between which the critical point oscillates to have the same Lyapunov exponent.