I'd like to thank the organizers for the invitation. I've been here several times, and it's always a pleasure to come back. So I will talk about the optimization of Lyapunov exponents, and this talk is kind of unusual because, as you'll see, it has many more questions than results. But there are a few results. OK, so let's fix some extremely basic notation. I always work with a dynamics T acting on a compact metric space, and this dynamics will be at least continuous. We'll work a lot with the space of invariant probability measures, which is a compact and convex set, topologized with the usual weak-star topology. And the subset of ergodic measures, as you know, is the set of extreme points of this compact convex set. OK, so even if the main topic of the talk is the optimization of Lyapunov exponents, I will spend some time talking about what I call commutative ergodic optimization, which is the optimization of Birkhoff averages. If you want an introduction to this topic, you can look at these surveys by Oliver Jenkinson, an old one and a new one. There you can find, well, not the proofs, but a list of most of the results that are known in this field. So what's the problem here? You are given a function, usually called a potential, or maybe a performance function. And then you look at the integrals of this function: you integrate this function against invariant measures. Let's think of the dynamics T as fixed; what I say in the beginning works for any continuous map. So you look at all the possible values of the integral, and what you get is an interval that depends on the function f. So you have this beta, which we may call the maximum ergodic average, and alpha, which is the minimum ergodic average. In more geometric terms, you have this compact convex set of invariant measures.
And essentially what we are doing is taking a one-dimensional projection of it, because integration is an affine operation, right? So it's clearly some projection of your set, and you get something which is also compact and convex in the line, therefore an interval. Moreover, these extremes are always attained by extreme points of your set, which, recall, are exactly the ergodic measures. So you can always find at least one ergodic measure that projects to beta. The measures that attain the maximum are called maximizing measures, and what I'm saying here is that you can always take an ergodic one. And if for some reason you know that the maximizing measure is unique, then it is automatically ergodic. Okay. So this was stated in terms of integrals; how do you relate this with Birkhoff averages? Well, it's not difficult; if you know Birkhoff's theorem, it's kind of obvious. So let me use this notation for the Birkhoff sums, with respect to your dynamics T, and these are the Birkhoff averages. Now, if you take some x and let n go to infinity, well, maybe this x is not a typical point, so the limit doesn't exist, but then you take the lim sup. And then you take the sup with respect to all the points x, the initial conditions. This gives you the same number beta that we saw in the previous slide: the maximum ergodic average can also be defined in this more elementary way. And it's a nice exercise to show that this quantity can also be written in this other form, where I interchanged the sup and the lim. And now it's actually a limit, because of subadditivity: the sequence that you have in between here is subadditive. Well, the sequence without the n's, of course. Okay, so you have these alternative formulations in terms of Birkhoff averages. This is called ergodic optimization of Birkhoff averages.
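These definitions are easy to play with numerically. The sketch below is purely my illustration, not from the slides: it takes the doubling map as the dynamics, a sample potential cos(2*pi*x), and approximates beta(f) by the best Birkhoff average over a grid of rational initial conditions. Exact rational arithmetic is used because iterating 2x mod 1 in floating point destroys orbits after about 53 steps.

```python
from fractions import Fraction
import math

def T(x):
    # the doubling map T(x) = 2x mod 1 (exact on rationals)
    return (2 * x) % 1

def birkhoff_average(f, x, n):
    """(1/n) S_n f(x) = (1/n) * sum_{k=0}^{n-1} f(T^k x)."""
    s = 0.0
    for _ in range(n):
        s += f(float(x))
        x = T(x)
    return s / n

# A sample potential; the grid and the horizon n are arbitrary choices.
f = lambda x: math.cos(2 * math.pi * x)

# Crude approximation of beta(f) = sup_x limsup_n (1/n) S_n f(x).
beta_approx = max(birkhoff_average(f, Fraction(k, 255), 1000)
                  for k in range(255))
```

For this particular potential the fixed point at 0 is maximizing (f attains its global maximum there), so beta_approx comes out 1.0.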
So you want to find the points for which the orbit has the asymptotic average as big as possible, let's say. Let's focus on maximization, because minimization is essentially the same problem. Now, describing the orbits may be kind of messy, so maybe the more convenient way of dealing with this is trying to describe the maximizing measures. Let me call this the meta-problem, because it's extremely vague, but it is essentially the main problem in this topic. Okay, so to give you some intuition, let's start with some very general things. Fix any reasonable space of continuous functions; I will say in a moment what this means, but let's say F is just the space of all continuous functions. Then generically the maximizing measure is unique, and in particular ergodic, as we have seen. Okay, some explanations. Reasonable means that the space embeds in the continuous functions and is dense there. And generic, I think this is standard terminology, means that the property holds on a generic set: the intersection of a countable family of open and dense sets, as in Baire's theorem. So generically you have this uniqueness property, nice. Okay, now let's look at this inverse problem. The problem of ergodic optimization is: given the function and the dynamics, find the maximizing measure. But the inverse problem is: given an ergodic measure, can you find a continuous function for which this measure is maximizing, and moreover uniquely so, to make things non-trivial? The answer is yes; this was proved by Jenkinson, maybe in the early 2000s. It's not as automatic as it seems; you need some functional analysis. So for example, take the doubling map. You can find some continuous function on the interval, or on the circle, for which, let's say, the Lebesgue measure is the measure for which the integral is as big as possible. If you take any other invariant measure, you get a smaller integral, right?
Or you can take... okay, that's the Lebesgue measure, but... So a question here: can you improve the regularity of this function? Of course, if your measure is very simple, supported on a finite orbit, you just take something like this: if I have here a period-2 orbit, I can take a continuous function for which the unique maximizing measure is exactly the measure supported on this pair of points. And this function can be made C-infinity or whatever you want, right? But if your measure is something more complicated, like the Lebesgue measure, then it's not so obvious what happens. And the answer is that, in general, if your measure is more complicated, the function f cannot be too regular. So let's see why. This claim will be a consequence of something else. Let's restrict our setting to get more interesting results. Let's work in what I call the nice setting, where the dynamics is hyperbolic and the function is regular. I don't want to be extremely precise here, but hyperbolic means, let's say, something uniformly expanding, an expanding endomorphism perhaps, or an Anosov diffeomorphism, or Axiom A, something like that. And regular means at least Hölder, but maybe you want to work with even better regularity. So, in this nice setting, there is something called a maximizing set, which is an invariant compact set where all the maximizing measures live. You can prove that there is an invariant set such that a measure is maximizing if and only if its support is contained there. In the trivial example, the maximizing set would be that periodic orbit. (Sorry, is this given f?) Given f, yes. Given f and given T, you need both. Okay, and now, let's say the dynamics here is, for example, 2x mod 1.
In the circle. And remember the previous theorem, which says that any ergodic measure, for example Lebesgue, is uniquely maximizing for some function. As a consequence, you see that if I take Lebesgue measure, this function cannot be Hölder. It will be continuous, but with a bad modulus of continuity. Because if it were Hölder, then you could apply this theorem and conclude that there is a compact set where the maximizing measures live, right? But the measure has full support, so in the end we would conclude that all invariant measures were maximizing. And if you go back a little, Jenkinson's theorem says that the f he finds is uniquely maximizing. The proof of this is some kind of Hahn-Banach argument; it's not very constructive. So I consider this quite interesting. Okay, so, as I said, the previous theorem really requires this regularity, because we know it's not true for merely C^0 functions. And actually, it is a corollary of a stronger result that I won't explain, something sometimes called the Mañé lemma, or, as Jenkinson calls it, the revelation lemma. There are several versions that were discovered independently, and then later several improvements enlarging the class of hyperbolic maps that are allowed. I won't enter into detail here; basically, this Mañé lemma is a version of the Livšic theorem, which you maybe know, where you replace equalities by inequalities. It turns out to be something extremely useful in this field, but I won't give you much detail. Okay, so let me state this conjecture, or perhaps meta-conjecture, because it's something kind of flexible, and if somebody comes up with a counterexample, I can say, no, that's not what I meant. Well, this is my statement, but it's basically a reformulation of something by Hunt and Ott.
In a physics journal, 1996. They say: suppose T is chaotic, and they don't explain what this means, they give a few examples; but let's read it with the gaps and then fill the gaps, right? Suppose T is chaotic; then for typical regular functions f, the maximizing measure has low complexity. Okay, so what does this mean? Well, chaotic: I think everybody agrees that uniformly hyperbolic or uniformly expanding maps are examples of chaos. So let's think of something hyperbolic. Now, typical: we've seen a theorem where some property holds for generic functions, so that is typicality in a topological sense, but you can also try to formulate a probabilistic sense of typicality, that something holds with full probability, right? Regular: as I said before, at least Hölder, but maybe you can get better theorems if you assume more regularity. And low complexity: this also has several possibilities. It means at least zero topological entropy, but if you are more ambitious, and this is the way they wrote the conjecture, the maximizing measure should typically be supported on a periodic orbit. Okay, like this example. So typically the situation shouldn't be too different from that. So there's the conjecture again, and there are several results whose history I don't have time to explain, so let's just jump to the best result we have, by Gonzalo Contreras, who proved that if your dynamics is uniformly expanding, then for generic Lipschitz functions, the maximizing measure is supported on a periodic orbit. It's a very nice result, and actually the generic set here is also open; it's an open and dense set, something very strong. But I think the original conjecture had the spirit that typicality was meant in a probabilistic sense, so we could still hope to obtain a probabilistic version of these results.
And I tried to do that with Yiwei Zhang, and we proved a probabilistic version of this, but we needed to work with the shift and with a much more restricted class of functions. So we still have an infinite-dimensional space of functions where we can prove that, typically in a probabilistic sense, you have this kind of conclusion; but the functions we work with have very strong moduli of continuity, much stronger than Hölder or Lipschitz. So it's not as good as we wanted, and there is still something to be done here. Okay, to make things more concrete, let's look at this nice example that was discovered several times; in some sense it's the simplest example you can think of. The dynamics is the doubling map; here it's convenient to take 2x mod 2 pi. And my functions will be trigonometric polynomials of degree 1, so linear combinations of a constant, sine and cosine. Of course the constant doesn't contribute to the complexity, so essentially you can work with this one-parameter family of functions: just the cosine composed with a translation. And then what happens? It was proved by Thierry Bousch that for every choice of the parameter theta, this cosine function has a unique maximizing measure. It's for every parameter, so it's very strong. And this maximizing measure always has zero entropy. Actually it has a very precise description: it's something called a Sturmian measure. These Sturmian measures include some measures supported on periodic orbits; not all of them, some specific ones. And if you take theta randomly, then for Lebesgue-almost every theta this measure will be supported on a periodic orbit. As I said, it's not just any periodic orbit; it has some restriction: it must be Sturmian. And the result is even stronger than that.
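Bousch's picture can be checked numerically at a single parameter value. The instance below is my own worked example, not from the slides: coordinates x in [0,1), potential cos(2*pi*x - theta), and the choice theta = pi, where the symmetry theta to -theta forces the Sturmian rotation number 1/2, so the maximizing measure should be the period-2 orbit {1/3, 2/3}.

```python
from fractions import Fraction
import math

def periodic_points(p):
    """Periodic points of period dividing p for x -> 2x mod 1:
    the rationals k / (2^p - 1)."""
    q = 2**p - 1
    return [Fraction(k, q) for k in range(q)]

def orbit_average(f, x, p):
    # Birkhoff average of f over p steps of the doubling map (exact orbit)
    s, y = 0.0, x
    for _ in range(p):
        s += f(float(y))
        y = (2 * y) % 1
    return s / p

theta = math.pi                 # sample parameter (my choice)
f = lambda x: math.cos(2 * math.pi * x - theta)

# Best ergodic average over periodic orbits of period <= 12.
best = max((orbit_average(f, x, p), x)
           for p in range(1, 13) for x in periodic_points(p))
```

Scanning all periods up to 12, the best average comes out 1/2, attained on the orbit {1/3, 2/3}, matching the Sturmian prediction.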
The set of bad thetas, for which this measure is not periodic, has zero Hausdorff dimension. So the set where you don't have this periodicity is extremely thin. Okay. If you like thermodynamic formalism or multifractal analysis, you can try to play with a different kind of problem: maximizing, let's say, the entropy of the measure given the average. Here this t, of course, must be in the set of possible averages, which is the interval that we saw in the beginning. So you obtain some function, and by the previous conjectures, typically this function should vanish at the extremes, corresponding to zero entropy; and under some hypotheses this will be a nice concave graph. But I don't want to go too deeply into this kind of problem. Okay, now let's say something about Lyapunov exponents. I will begin with the top Lyapunov exponent. We still have our dynamics T, and now you replace your real-valued function f by a matrix-valued function, this capital F. Let's say these are d by d matrices; it's usually more convenient to work with invertible matrices. For some complicated reason this is called a cocycle, or maybe the pair (T, F) is called a cocycle. And then, instead of summing (it wouldn't be too interesting to sum the matrices), you multiply them, and it's more convenient to multiply them in this order. Okay, so this replaces the Birkhoff sums: you have this product. And the top Lyapunov exponent says something about the growth of this product. So take the norm of the matrix; this norm typically has exponential behavior, and the rate you obtain is this lambda 1. So lambda 1 is defined as this limit, if it exists. And as a consequence of the subadditive ergodic theorem, for any invariant measure the limit really exists for almost every point.
With respect to this measure. And we are working with continuous functions on a compact space, so in particular these things are uniformly bounded and you can integrate. This lambda 1 will be measurable, so let's look instead at this number, the integrated, or average, Lyapunov exponent. Okay, this is the number we are going to focus on. As before, you can try to maximize or minimize it, so, using notation similar to what we used before, let's call the infimum alpha and the supremum beta. But now you have a question: are this infimum and this supremum attained? Are they really a minimum and a maximum? Well, let's leave that for the next slide. This kind of thing has been studied before. It actually first appeared a long time ago, in 1960, in a more restricted situation, and it was called the joint spectral radius, by Rota and Strang. Then in the 90s lots of people started to study this again because of some applications to wavelets, and there are lots of engineering papers where you can find it. And the other quantity, or rather its exponential, corresponds to the joint spectral subradius. The more restricted situation is that of one-step cocycles, which we will see later. Okay, so, comparing with what we had before, the most basic difficulty is the function we are considering: remember that before we were just considering integrals, which are nice affine functions, and now the Lyapunov exponent is not even continuous as a function of the measure; it's only upper semi-continuous in general. Using the upper semi-continuity you can prove that the sup here is attained, but the infimum is not necessarily attained. So of course you lose the symmetry you had before: the supremum and the infimum have very different behaviors. Actually we'll focus more on this beta, because it's better behaved.
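The definition of lambda 1 is straightforward to implement; the standard trick, sketched below, is to renormalize the running product at every step and accumulate log-norms, so nothing overflows. The specific matrices and the sanity check are my own illustration.

```python
import math

def mat_mul(A, B):
    # product of 2x2 matrices stored as ((a, b), (c, d)), row-major
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

def frobenius(A):
    return math.sqrt(sum(x * x for row in A for x in row))

def top_lyapunov(matrices, symbols):
    """(1/n) log || A(T^{n-1} x) ... A(T x) A(x) || along a symbol
    sequence, renormalizing at each step to avoid overflow."""
    P = ((1.0, 0.0), (0.0, 1.0))
    log_norm = 0.0
    for s in symbols:
        P = mat_mul(matrices[s], P)      # new factor on the LEFT
        scale = frobenius(P)
        P = tuple(tuple(x / scale for x in row) for row in P)
        log_norm += math.log(scale)
    return log_norm / len(symbols)

# sanity check: the constant cocycle diag(2, 1/2) has lambda_1 = log 2
lam = top_lyapunov({0: ((2.0, 0.0), (0.0, 0.5))}, [0] * 1000)
```

Here lam comes out log 2, as it should for a constant diagonal cocycle.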
But let me show you quickly an example where the infimum is not attained. I think this example is interesting because it shows some pathologies that may appear. In my example the dynamics is the 2-shift, the one-sided shift on symbols 0 and 1. And my matrix function is... okay, so here you have your Cantor set where the shift acts, and my function A is locally constant: it has some value on this cylinder and another value on this cylinder. These values are matrices, but I'm drawing them like that. So your matrix function only depends on the symbol in the zero position; that's why I wrote it like that. And the pair of matrices that we choose are exactly these two: one of them is a hyperbolic matrix and the other is a rotation by 90 degrees. If you have any experience multiplying matrices, you know that multiplying a hyperbolic matrix by a rotation is a nasty thing; you can get into trouble. So let's see that in this example the infimum of the Lyapunov exponent is not attained. Actually the infimum is given by this value, and this is not difficult. Look at this sequence of measures... this means the following: this is a sequence of symbols where we have n repetitions of the symbol 0, then a single 1, and then you repeat this periodically. So this is just a measure supported on a periodic orbit, and it's easy to compute the Lyapunov exponent, because it's 1 over the period times the log of the modulus of the biggest eigenvalue. Then you compute the product. Actually this matrix has no real eigenvalues. Maybe there is a mistake here; maybe I forgot to divide by 2. But anyway, I think the conclusion is correct: this is what you get in the end, and you see that it's something approaching minus log 2 from above.
Essentially this matrix, or any power of it, has real eigenvalues, but then you multiply by the rotation, and now the eigenvalues are non-real, complex. But there is a price, because this one has determinant less than 1 and this one has determinant 1. This guy pushes the Lyapunov exponent down a little; that's the trick. So in the end you get these numbers approaching minus log 2. This example is nice because it also shows what I mentioned before, the discontinuity. These measures converge to the measure supported on the fixed point consisting of only 0s, where the top Lyapunov exponent is plus log 2; but the measures themselves have Lyapunov exponents converging to minus log 2. So we have a discontinuity of the Lyapunov exponent with respect to the measure. Yes. Okay, the same example also illustrates something else, and now let's conclude the proof. We have seen that this value here is minus log 2, and we found measures whose Lyapunov exponents approach minus log 2 from above. This shows that your alpha, the infimum of the possible Lyapunov exponents, is at most minus log 2. In the converse direction, you can bound the top Lyapunov exponent from below by the average of the first two Lyapunov exponents. I haven't explained what the second Lyapunov exponent is, but this quantity can be written as half the average of the log of the determinant. And, as I said, the determinant here is 1 and the determinant here is, what, one fourth, right? So the determinant is always at least one fourth, and then you get this lower bound for the Lyapunov exponent. Now, this inequality is strict unless the determinant is constantly equal to one fourth, but that only happens for the measure at the fixed point here, and in that case your Lyapunov exponent is plus log 2.
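The transcript does not reproduce the two matrices, so the pair below is my reconstruction, chosen to be consistent with everything stated: H = diag(2, 1/8) (hyperbolic, determinant 1/4, exponent +log 2 at the all-0s fixed point) and R a rotation by 90 degrees (determinant 1). With this choice the periodic measures on the words 0^n 1 have exponents decreasing to -log 2, exactly as claimed.

```python
import math, cmath

def mat_mul(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

def spectral_radius(A):
    # largest eigenvalue modulus of a 2x2 matrix, allowing complex eigenvalues
    (a, b), (c, d) = A
    tr, det = a + d, a*d - b*c
    disc = cmath.sqrt(tr*tr/4 - det)
    return max(abs(tr/2 + disc), abs(tr/2 - disc))

H = ((2.0, 0.0), (0.0, 0.125))   # hyperbolic, det = 1/4 (my reconstruction)
R = ((0.0, -1.0), (1.0, 0.0))    # rotation by 90 degrees, det = 1

def lyapunov_of_periodic(n):
    """Lyapunov exponent of the measure on the periodic orbit of the
    word 0^n 1: (1/(n+1)) * log rho(H^n R)."""
    P = R
    for _ in range(n):
        P = mat_mul(H, P)
    return math.log(spectral_radius(P)) / (n + 1)
```

For these matrices H^n R has purely imaginary eigenvalues of modulus 2^(-n), so lyapunov_of_periodic(n) equals -(n/(n+1)) log 2, a sequence decreasing to -log 2 but never reaching it.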
So one of the two inequalities is always strict; you never have equality, okay? That's the end of the proof: no measure can attain the infimum. Okay. So let's focus on the beta: let's maximize the Lyapunov exponent and forget about this problem for the moment. Now let's formulate another meta-conjecture, which is essentially what we had before, but now the cocycle version. Suppose T is chaotic, in any of these senses; then, typically in any of these senses, for regular cocycles, meaning at least Hölder, the conjecture is that the maximizing measures should have low complexity. The weakest version of the conjecture would be that they have zero topological entropy, and the strongest version would be that they are supported on a periodic orbit. But maybe, who knows, in some specific situation you can get something more precise. And there is a result that fits into this philosophy, by Michal Rams and myself, but I won't give you many details. Now, remember I mentioned this subordination principle before, right?
In the commutative situation there was this set where the maximizing measures live, and we have a version of this basic result for cocycles. As in the classical subordination principle, you need to assume that your dynamics is hyperbolic. And now let's work with what's called a fiber-bunched cocycle. I won't define this precisely, but it is at least a Hölder cocycle which, moreover, cannot be too expanding or contracting in some sense; this fiber-bunching condition is a kind of partial hyperbolicity condition for some projective dynamics. The cocycles that are locally constant on cylinders always satisfy this fiber-bunching condition, because you can tweak the constants that appear there; so this includes that easier setting. Okay, and what's the conclusion? You have this maximizing set: a compact invariant set such that a measure is maximizing for the Lyapunov exponent if and only if its support is contained there. And then there is this word here; well, actually we prove that if the dimension is 2 then we have exactly this kind of statement, and if the dimension is bigger than 2 then we need a stronger version of fiber-bunching. But after this paper was completed I sent it to a few people, and Clark Butler realized that he could improve a technical part of the proof and remove this strong bunching condition. So, using the suggestion by Clark Butler, we can remove it, but this is not written yet. And, as I said before, this subordination principle comes from something else, which is a version of the Mañé lemma for cocycles. I should also mention some works by Ian Morris, who was probably one of the first people to consider maximizing sets in this kind of setting; well, not exactly the same setting, more restricted than ours. So we also improve some results of Ian Morris. Okay, so now...
Let's do the following: let's look at the other Lyapunov exponents. So far we were talking about the top Lyapunov exponent, but now let's see all of them. (Sorry, in that example...?) Yes, that particular example has a particular situation. I believe it is far from being typical, but I don't know; I've written too many conjectures already. But maybe typically this doesn't happen, or maybe you can find some explicit condition ruling out this kind of phenomenon. (And what is the main point of the hyperbolicity, the hyperbolicity of T?) Well, for this kind of game to be interesting, you need to work with a dynamics that has lots of invariant measures. (But you have so many...) Yeah, but the thing is, this kind of stuff is already very difficult for 2x mod 1. Maybe in the future we will be able to work with more complicated dynamics. (But where is it used in the proof of the results?) Okay, so for this one, you have fiber-bunching, and as in most results that use fiber-bunching, the proof relies heavily on these holonomies, which are some kind of... (So in all these theorems, even in the first part, you were using that T is hyperbolic. What is the main ingredient that hyperbolicity provides?) Well, as I said, there is this basic thing called the Mañé lemma, which is a version of the Livšic theorem, right? And the Livšic theorem, I think, doesn't make any sense without hyperbolicity. If you inspect the proofs, you are always using hyperbolicity. I don't know if I can come up with a counterexample to show you that hyperbolicity is essential. Yeah. Well, I'm sorry, let me go on.
(In the previous results, do you know something about the entropy of the maximizing measure?) Which ones? (In these results.) No, no. There are many problems here and few answers. This result works like this: we looked for the most basic tool we have in this field, this Mañé lemma, and we tried to generalize it. And we hope that, using these kinds of results, later we will be able to prove more precise descriptions. So we started from this technical thing; but, as in Gonzalo's theorem, the Mañé lemma is the starting point of the whole consideration. No, no, I'm sorry, I don't have time; I still have lots of slides. Okay, let's go to the other Lyapunov exponents. What you do is the following: instead of taking the norm, you take the singular values. What are the singular values? If you have a matrix A, you take the image of the unit ball, which is an ellipsoid, and the singular values are the semi-axes of your ellipsoid. The first one, the biggest one, is the norm, and then you have the others. So you do the same thing, and you obtain some numbers that are called the Lyapunov exponents, ordered in a decreasing way. You obtain this sequence of numbers, and you can prove that, as before, those limits exist for almost every point with respect to any invariant probability measure. And if your measure is ergodic, then these numbers are constant. So let's work with this sequence of numbers with respect to ergodic measures. I don't want to work with non-ergodic measures, because it's awkward to take averages of, say, the second Lyapunov exponent if the measure is not ergodic, right? So let's work with ergodic measures instead; anyway, the limits that we see for ergodic measures are all the possible numbers that you can see on a set of full probability.
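A quick sketch of the definition just given: the singular values of a 2x2 matrix are the square roots of the eigenvalues of A^T A, i.e. the semi-axes of the image ellipse. The sample matrix is my own choice.

```python
import math

def singular_values(A):
    """Semi-axes of the image of the unit disk under the 2x2 matrix A:
    square roots of the eigenvalues of A^T A."""
    (a, b), (c, d) = A
    # A^T A = ((a^2 + c^2, ab + cd), (ab + cd, b^2 + d^2))
    p, q, r = a*a + c*c, a*b + c*d, b*b + d*d
    tr, det = p + r, p*r - q*q
    disc = math.sqrt(max(tr*tr/4 - det, 0.0))
    s1 = math.sqrt(tr/2 + disc)
    s2 = math.sqrt(max(tr/2 - disc, 0.0))
    return s1, s2      # s1 >= s2; s1 is the operator norm of A

A = ((3.0, 1.0), (0.0, 2.0))
s1, s2 = singular_values(A)
# s1 * s2 = |det A| = 6, and s1 is the operator norm.
# For a product of n copies of diag(2, 1/2), the Lyapunov vector
# ((1/n) log s1, (1/n) log s2) converges to (log 2, -log 2).
```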
Okay, so let's call this the Lyapunov vector of your ergodic measure, and let's look at this set, the Lyapunov spectrum, which is just the set of vectors that you obtain. The obvious remark is that these numbers are ordered, so they satisfy these inequalities, and the Lyapunov spectrum is contained in this set, which I call the positive chamber; I don't know if that's standard terminology or not. So, for example, in dimension 2, each vector is a point in R2, and you have something here which is below this line, okay? The line is called a wall. Well, maybe the spectrum touches the wall, we don't know. And the quantities we were considering before, the maximum and the minimum top Lyapunov exponent: if you just look at the first Lyapunov exponent, then this is the maximum point and this is the minimum, right? And here is maybe a nicer situation: for example, your matrices are in SL(3,R), meaning 3 by 3 with determinant 1. Then the sum of the exponents is 0, and we can restrict ourselves to this two-dimensional space; the chamber becomes this wedge, this cone here. And we'll have some subset of the chamber that may or may not touch the walls. I should mention that Sert has defined something called the joint spectrum. It's a slightly different situation, but this joint spectrum is related to this thing; he did this for more general groups and has some very nice large deviation results. If you're interested, you can check his paper. Let me just mention, since there are very few results, this nice result, which says that the measures supported on periodic orbits fill a dense subset of your Lyapunov spectrum. That is, if you are in the usual nice setting where the dynamics is hyperbolic and the cocycle is Hölder continuous.
So you can always approximate the Lyapunov vectors using measures supported on periodic orbits. And now, as always, let's formulate a meta-conjecture. Put yourself in the nice setting again; then what should we typically expect? Actually, in principle we don't know much about this set, but it should be a convex set. The boundary is fishy, I'll explain later; it's some strange boundary, so it won't be these round disks and blobs of the previous picture. And each boundary point outside the walls should be attained as the Lyapunov vector of a unique measure, and this measure should have low complexity. Now low complexity means zero topological entropy: if you have this situation, with your spectrum here, you can't hope for all these boundary points to correspond to periodic orbits, because there are only countably many periodic orbits, right? But they should have zero entropy. What else? Ah, and again the subordination principle: each of these measures should be characterized by its support; in particular, this support should be uniquely ergodic. Okay, I'll skip this; it's about some multifractal thing, but it's not important. Well, I have at least one example where this philosophy holds. It's again a one-step cocycle, this kind of locally constant cocycle over the shift, now with another pair of matrices. If you take exactly these two matrices, what you see is this picture. These are the Lyapunov vectors, and the boundary is this curve. There is a part of the boundary that touches the wall, right, and then you have this curve. And what happens is, well, maybe the picture is difficult to see, but you can see a kind of vertex here. Maybe this is a polygon, but actually it's not. This is indeed a vertex, so you can find a cone there.
You can fit your set inside some cone with origin at this point. So this is called a corner, and there are corners everywhere: there is a dense subset of corners. That is what I meant by fishy. Moreover, each point here is attained as the Lyapunov vector of a unique ergodic measure, which is Sturmian, and in particular has zero entropy. Well, this statement is an easy corollary of a study done by these authors. They were not exactly interested in this set; they worked with something else, but you can reinterpret their results and get this conclusion. So we have at least one example where this philosophy, this meta-conjecture, holds. (Sorry, this determinant makes sense, right?) Yes, because if it were one, then the sum of the Lyapunov exponents would be zero, and your set would be inside a line, so it wouldn't be very interesting. As in the previous example, the determinant plays a role here. That's basically because your group of matrices is too small; maybe you can reproduce this kind of trick in SL(3,R). Okay, so I have five minutes. Let's consider the following particular situation, where your matrices are strictly positive. Most of you, well, Ricardo Mañé is present here, so most of you must know what a dominated splitting is; what I actually need is a dominated splitting. If your cocycle has a dominated splitting, in this case into one-dimensional bundles, then your problem becomes much more tractable, because now the Lyapunov exponents are just integrals of the expansion rates along these bundles. And so all these complications, the Lyapunov exponent not being continuous, et cetera, disappear. Now the Lyapunov exponents are continuous, and the Lyapunov spectrum is just the set of integrals of a vector-valued function; it's something called a rotation set. And by domination, this set is also away from the walls.
So in this case it's very well-behaved, and then your spectrum will be something convex, et cetera, right? Actually, the important words here are dominated splitting, right? So at this moment we realize that we missed something. Before dealing with this complicated problem about Lyapunov exponents, we should have worked with this kind of thing, right — the rotation sets, the sets that you obtain by integrating a vector-valued function. So I don't have much time, but here I define this thing, and there is a basic example that's called the fish, where you have the doubling map — let's say it's easier to work with the unit circle here — and your function f is just the identity, where you identify the circle with the unit complex numbers, so f(x) = e^{2 pi i x}. Then you get some set that was first studied by Oliver Jenkinson in his thesis, and these are some points in the boundary of the set. Later it was proved by Thierry Bousch that this thing indeed has a dense set of corners — it's not a polygon — and he called it the fish, because there is a play on words in the statement; this is the title of the paper, "Le poisson n'a pas de dents". Okay, and we can formulate a meta-conjecture for these kinds of things, but for lack of time let's go back to cocycles quickly. So, as I said, it's nice when you have a dominated splitting; the situation becomes simpler. So the definition of a dominated splitting: you have two bundles that are invariant under the action of the cocycle, and the expansion on one bundle is at most a constant less than one times the expansion on the other. And in this case it means that the Lyapunov spectrum is away from a certain wall, because the strongest expansion you see in the first bundle cannot approach the weakest expansion you see in the other. So there is a gap between the Lyapunov exponents. The converse is not true, but, well, it's true under some assumptions, and maybe it's true for typical cocycles.
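As a quick numerical sketch of the fish (my own code, not from the talk): identifying the circle with the unit complex numbers, the function is f(x) = e^{2 pi i x}, and the period-n points of the doubling map are exactly x = k/(2^n - 1); averaging f along these orbits fills out the rotation set from the inside.

```python
import cmath

def orbit_average(k, n):
    """Birkhoff average of f(x) = exp(2*pi*i*x) over the period-n orbit of
    x = k/(2^n - 1) under the doubling map T(x) = 2x (mod 1)."""
    d = 2 ** n - 1
    x = k % d               # work with the numerator k, denominator d fixed
    total = 0j
    for _ in range(n):
        total += cmath.exp(2j * cmath.pi * x / d)
        x = 2 * x % d       # doubling map on numerators
    return total / n

# Averages over all periodic orbits up to period 12: plotted in the complex
# plane, these points approximate the "fish" (the rotation set of f).
points = [orbit_average(k, n) for n in range(1, 13) for k in range(2 ** n - 1)]
```

The fixed point x = 0 contributes the extreme point 1, and all averages lie in the closed unit disk since they are averages of unit vectors.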
Okay, so yes, okay. If you are very lucky and have a dominated splitting into one-dimensional bundles, then you get a very nice situation where this L(F) is automatically convex, compact, et cetera. And maybe you can prove the convexity of this guy by reducing to some subsystems where you have this simple dominated splitting, but it's far from trivial that you can do that. And so let me conclude; I have a couple of slides. I had this meta-conjecture — this was the same as I explained before — and let's add an extra piece of information. Okay, so we expect this set to touch a wall if and only if there is no dominated splitting. So typically, if you see this kind of picture where the set is away from the walls, the reason is that there is a dominated splitting here preventing the first two Lyapunov exponents from getting together, and another dominated splitting there. Well, but this is a conjecture. Furthermore, there is something else: the set touches a wall if and only if there is no dominated splitting of the corresponding index. And moreover, there is a larger convex set, called the Morse set. So if it touches a wall — let's say it's touching this wall — then there is a bigger set, an extension of the set, right? A larger convex set, called M^+(F), whose intersection with the chamber is the set you had before. And this new set is invariant under reflections. Let's see a picture. So you shouldn't see this kind of thing, because if you reflect it you get something that is not convex; so it should be maybe like this. Or, if your group is more interesting and the set touches several walls, then it must be invariant under the Weyl group, and so you get some funny blob like that. And the philosophy, the reason why you would expect this to be true, is basically that if there is some lack of domination, then you must be able to mix the Lyapunov exponents in some way.
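In symbols (my notation, not the speaker's slides), the addendum just stated might be written as follows, where L(F) is the Lyapunov spectrum of the cocycle F viewed inside the closed chamber:

```latex
% Notation (mine): L(F) = Lyapunov spectrum of the cocycle F, inside the
% closed chamber  a+ = { \lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_d },
% with wall of index i given by  W_i = { \lambda_i = \lambda_{i+1} }.

% (i) Touching a wall should detect the lack of domination:
L(F) \cap W_i \neq \varnothing
  \iff F \text{ has no dominated splitting of index } i.

% (ii) Morse set: a larger convex set, invariant under the Weyl group W
% (coordinate permutations), whose trace on the chamber is the spectrum:
M^+(F) \supseteq L(F), \qquad
\sigma\bigl(M^+(F)\bigr) = M^+(F) \ \ \forall\, \sigma \in W, \qquad
M^+(F) \cap \mathfrak{a}^+ = L(F).
```

This is only a formal restatement of the conjecture as described; the precise hypotheses are not specified in the talk.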
And the way this translates into the geometry of the set of Lyapunov exponents is that you get this kind of symmetry. And this terminology, Morse set — there is also some relation to control theory, and that's where the terminology comes from. That's it. Thank you. Maybe the construction also explains how this set moves, when F moves? You didn't... Ah, yes. There is some semicontinuity, but it's kind of tricky. There is this funny situation where — let me go back a little. Or maybe a lot. Here. So remember that I told you that this point is always attained, but this one is not. So there is a tricky thing: this one is attained, and so something here persists. So you have some kind of semicontinuity from this vertex, but not from the other. And actually there is a precise semicontinuity statement, but then you need to take into account the geometry of this group of reflections. So if you make the reflections and take some convex hull, then you will obtain something that varies semicontinuously when you change your cocycle. I don't know if this helps to answer your question. If the cocycle is dominated, both are attained? Yeah, if you have a dominated splitting, then everything is very nice, because the dominated splitting persists. There should be some subsystems with domination, and this should give you some robustness. But typically — well, that's part of the conjecture. Does it make any sense to try to optimize just the second exponent of the spectrum? You mean to... well, this is looking at the pair, because you've got the second exponent as one coordinate of the pair. Ah, the set. Say you fix something that is not attained — yeah, then you can get into the same kind of trouble again. Maybe this point is not attained. But generically it should be attained, right? Yes. I can tell you something precise here.
In this situation, right — let's say the spectrum is a blob like that — you take the reflection across the wall, and now you take the convex hull, so you get this kind of stadium shape. Then this part of the extremal points here will be attained. So this part will be attained, but for this one you don't know. So let's put it like this. Okay, so your question was to optimize, let's say to maximize, lambda 2. Well, in this picture maybe you don't have a maximizing measure for lambda 2. But there are some other linear combinations of lambda 1 and lambda 2 which are attained. But this depends on the geometry of the thing.
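The reflect-and-take-the-convex-hull picture from this answer can be sketched in a few lines of Python. The point cloud below is invented — it just plays the role of the blob in the picture — and the code is only bookkeeping for the geometry, not a computation with an actual cocycle.

```python
# Hypothetical cloud of (l1, l2) Lyapunov vectors in the chamber l1 >= l2.
cloud = [(1.0, 0.2), (0.9, 0.5), (0.8, 0.8), (0.6, 0.1)]

# Reflection across the wall l1 = l2 is just swapping coordinates; the
# convex hull of cloud + mirror is the Weyl-symmetric "stadium".
mirror = [(l2, l1) for (l1, l2) in cloud]
stadium = cloud + mirror

def support(points, c1, c2):
    """Max of the linear functional c1*l1 + c2*l2 over a point cloud;
    this equals the support function of its convex hull."""
    return max(c1 * l1 + c2 * l2 for (l1, l2) in points)
```

For instance, maximizing lambda_2 alone over the symmetrized stadium just returns the maximum of lambda_1 over the original cloud (attained on the reflected copy), while symmetric functionals like lambda_1 + lambda_2 give the same value on both sets; this is the kind of geometric dependence the answer alludes to.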