So the idea is to set up the Oseledets framework and to connect it with ergodicity. We iterate the map x_{t+1} = f(x_t). So I start with x_0, I generate x_t. Then I start from x'_0 = x_0 + delta_0, and I generate x'_t, which by definition is x_t + delta_t for some delta_t. To understand this stuff is very simple. You consider the equation x'_{t+1} = f(x'_t), that is, x_{t+1} + delta_{t+1} = f(x_t + delta_t). Now, if you assume that delta_t is small, f(x_t + delta_t) = f(x_t) + f'(x_t) delta_t + O(delta_t^2). So if you assume that delta_t is small, you simplify x_{t+1} with f(x_t), and you have delta_{t+1} = f'(x_t) delta_t. Strictly speaking, you also have the O(delta_t^2) step. The full equation is tautologically true, but this higher-order part is disturbing. So the idea is to define the Lyapunov exponent in a proper way; let me call it lambda. In one dimension there is only one lambda. The official name is the maximal characteristic exponent, but among friends, the Lyapunov exponent. It is defined as lambda = lim_{t -> infinity} (1/t) log|delta_t / delta_0|, but before this limit, I take the limit delta_0 -> 0. So I first take delta_0 small, and then the limit t -> infinity. In the picture from before: I take delta_0 very small, in such a way that the time interval over which you have exponential growth is very large. So this is the trick: first the limit delta_0 -> 0, and then t -> infinity. The mathematicians, who like to invent pompous names, consider another equation where the higher-order term is removed; they call that object the tangent vector.
Sorry, Angelo, this depends on x_t, on where you start. Yes, exactly: this can depend on x_0. I will show you that, in principle, this can depend on x_0. Thanks, I forgot to mention this. So the tangent vector z_t — in this 1D case it is not really a vector — evolves as z_{t+1} = f'(x_t) z_t. Practically, this is the previous equation where I just delete the O(delta^2) term. This means that z_t plays the role of delta_t, but a delta_t which stays small at any time. How is it possible to be small at any time? You would have to take an extremely small delta_0, and so on. With this trick it is automatically true: if you introduce the tangent vector, it is not necessary to perform the delta_0 -> 0 limit, because automatically you have performed this limit. And then you have the Lyapunov exponent, which of course depends on x_0: lambda(x_0) = lim_{t -> infinity} (1/t) log|z_t / z_0|. OK, now this depends only on x_0. This quantity has a strong relation with ergodicity. This is not obvious — at first sight no time average appears — but if you look carefully, I can show you in 20 seconds that it can be written as a time average. Why? Because log|z_t / z_0| = log|(z_t / z_{t-1})(z_{t-1} / z_{t-2}) ... (z_1 / z_0)|. It's the same, you see? The intermediate factors simplify. So I can rewrite the definition as lambda = lim_{t -> infinity} (1/t) sum_{j=0}^{t-1} log|f'(x_j)|. Now, you see, this is a time average.
So, now, remember the first part of the Birkhoff theorem: this limit, for sure, exists, and in principle it is a function of x_0, OK? Now, you can wonder: is it really a function of x_0 or not? It depends. If the system is ergodic, it is not a function of x_0. Proving that the system is ergodic is another story. Now, if the system is ergodic, then lambda is the integral of rho_inv(x) log|f'(x)| dx. If you have the invariant distribution, you have a precise result and you can compute. For example, in the case of the tent map — just a trivial exercise — |f'| = 2 everywhere, so it is constant, and lambda is immediately log 2. And you can wonder: what is the Lyapunov exponent of the logistic map (at r = 4)? It is still log 2. Why? You can perform brute force, just using this formula and performing some integral, if you are clever enough, or some trick; or you can ask a more general question. What happens if I have a system and I perform a topological conjugation — does the Lyapunov exponent remain the same or not? The answer is: yes, it remains the same. It is possible to prove, and it is not difficult; I have no time, but you can find it in our book. So the Lyapunov exponent is invariant under topological conjugation. This is very important because it means that it is not important whether you use the variable x or x cubed: it is the same. So this is an intrinsic property. It doesn't change if you change the language. The Lyapunov exponent is an intrinsic property; it is not a property which depends on the variable you are using.
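The time-average formula can be checked numerically. Here is a minimal sketch (the function name and parameter values are mine, not from the lecture): iterate the logistic map at r = 4 and average log|f'(x_j)| along the trajectory; the result should approach log 2, in agreement with the conjugation argument.

```python
import math
import random

def lyapunov_logistic(n_steps=200_000, n_transient=1_000, seed=0):
    """Time average of log|f'(x_j)| for the logistic map f(x) = 4 x (1 - x)."""
    random.seed(seed)
    x = random.random()
    for _ in range(n_transient):          # let the orbit settle on the invariant measure
        x = 4.0 * x * (1.0 - x)
    total = 0.0
    for _ in range(n_steps):
        total += math.log(abs(4.0 - 8.0 * x))  # f'(x) = 4 - 8 x
        x = 4.0 * x * (1.0 - x)
    return total / n_steps

print(lyapunov_logistic())  # close to log 2 = 0.6931...
```

Note that the tent map itself is a bad candidate for this numerical check: in binary floating point its iterates collapse onto 0 after a few dozen steps, an artifact the conjugated logistic map avoids.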
Otherwise, it would be a somehow stupid property: if something changes between x and x cubed, it is stupid, OK? This is the result, without proof. Angelo, we have a question in the chat. On Poincaré-Bendixson: as I said, it is valid only for integer dimensions. The question is, can we have an attractor with dimension between 1 and 2? OK, so it is necessary to distinguish the situation with continuous time or discrete time. With continuous time, differential equations, you need at least three dimensions; this is a consequence of the Poincaré-Bendixson theorem. On the contrary, in discrete time, one variable is enough. This is what we proved — that was the proof with the tent map. "Then, from the intuitive proof we gave, I understand 2 should be taken as something like a lower bound on the fractal dimension for chaos." OK, that is what I said before. The question was: can I have an attractor with dimension between 1 and 2? Yes — we will discuss it tomorrow. Tomorrow, for example, for the Hénon map: the dimension is between 1 and 2, and it depends on the parameter values. For the usual parameters you find in the books, the fractal dimension is something like 1.25. I know it very well because when I was young I computed this stuff many, many times, inventing some methods and so on. The Hénon map is the archetype of a two-dimensional chaotic system, and its dimension is between 1 and 2. OK, so lambda is invariant under topological conjugation. In 1D, the proof is just a direct check, but the result is valid in n dimensions. So this is the Lyapunov exponent in 1D; there is no particular difficulty, it is just this stuff. Now you wonder what happens if — let me discuss maps; for differential equations the situation is very similar — you consider a two-dimensional map, for example the Hénon map. Immediately you realize there is a terrible problem.
Why? The problem is the following. Let me write the Hénon map: x_{t+1} = 1 - a x_t^2 + y_t, y_{t+1} = b x_t. The idea is just to repeat the approach from before and write the tangent vector. Tangent vector means delta x, OK? So if I write the tangent vector (z_1, z_2), with z_1 associated to x, then z_1(t+1) = -2 a x_t z_1(t) + z_2(t), and z_2(t+1) = b z_1(t). If I write it in compact form, the vector z(t+1) = A(x_t) z(t), where A(x_t) is the matrix with rows (-2 a x_t, 1) and (b, 0). OK, I have this equation. So for the tangent vector — tangent vector means z, means delta x, repeating the argument from before — the evolution is given by applying a certain matrix to the previous time. Note the structure of the problem: in 1D we have z_{t+1} = f'(x_t) z_t; in 2D we have z(t+1) = A(x_t) z(t), where the matrix depends on x_t. This is the structure. And now you want to understand how z behaves in time, OK? So the problem is the following. Imagine that you start with a certain x_0 and a certain z_0; you compute all this stuff and then you look at lim_{t -> infinity} (1/t) log(|z_t| / |z_0|). Of course, this quantity can depend on x_0 and z_0. These limits are called Lyapunov exponents. And you see that the difference from the one-dimensional case is related to the fact that, in general, the matrices A do not commute. If the matrices commuted, the problem would be trivial: you would be back to the previous case, just diagonalize and work component by component. The interesting situation is when the matrices do not commute.
Because in the 1D case, what we have is z_t = prod_{j=0}^{t-1} f'(x_j) z_0. If you repeat the computation here, you have the same structure. But the problem is that now the product must be computed in the correct chronological order, whereas in 1D the order is not important. So you have z(t) = A(x_{t-1}) A(x_{t-2}) ... A(x_0) z(0): a product of matrices in a proper order. If the matrices commuted, the problem would be trivial. But in this case, you see, they do not commute, because each one depends on the evolution of x_t. So the computation of this stuff is absolutely not trivial. And the first point is that it is necessary to prove that the limit exists. The existence is not trivial: before, it was just a consequence of ergodicity; here too it follows from ergodicity, but the proof needs to invoke a non-trivial theorem, the theorem of Oseledets. The Oseledets theorem is the following. When you look at this quantity, consider the dependence on z_0. In principle, you could expect all possible values depending on z_0, but actually the possible results are only two in this case — we are in two dimensions; in d dimensions, d values. To state the Oseledets theorem without entering into the details: lim_{t -> infinity} (1/t) log(|z_t| / |z_0|) equals either lambda_1, which can depend on x_0, or lambda_2, which can depend on x_0 — only two possibilities, depending on z_0. And what is more: for almost all z_0, you obtain lambda_1, where lambda_2 is, let me call it, the smaller one. lambda_1 and lambda_2 are called the Lyapunov characteristic exponents.
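The non-commutativity can be seen concretely with the Hénon Jacobians. A two-line check (the parameter values a = 1.4, b = 0.3 are the standard ones; the two evaluation points are arbitrary choices of mine):

```python
import numpy as np

def henon_jacobian(x, a=1.4, b=0.3):
    """Jacobian A(x) of the Henon map (x, y) -> (1 - a x^2 + y, b x)."""
    return np.array([[-2.0 * a * x, 1.0],
                     [b,            0.0]])

A1, A2 = henon_jacobian(0.3), henon_jacobian(-0.7)
print(np.allclose(A1 @ A2, A2 @ A1))  # False: the order of the product matters
```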
This is the first Lyapunov characteristic exponent, this is the second. And so, in more than one dimension, we have the first, the second, and so on: you have as many Lyapunov exponents as degrees of freedom. And which one you obtain depends on z_0. In order to understand the trouble — this is not a proof, this is just to have the intuition — let us consider the case where A is constant, i.e., does not depend on time. Then z(t) = A^t z(0). Consider the eigenvalue problem A v = alpha v, and imagine that you find alpha_1 and alpha_2, with eigenvectors v_1 and v_2. Write z(0) = c_1 v_1 + c_2 v_2; then the evolution is z(t) = c_1 alpha_1^t v_1 + c_2 alpha_2^t v_2. Now, let me assume that |alpha_1| is larger than |alpha_2|. What happens? If c_1 is different from 0 — so z_0 has non-zero projection on the most expanding direction — the leading term is the first one, and |z_t| behaves like |alpha_1|^t. On the contrary, if c_1 = 0, you get |alpha_2|^t. So you see that lambda_1 is selected almost surely, apart from the very special situation c_1 = 0. This is the essence of the statement — no, this is not the theorem, this is just why you have this dependence on z_0. Then you can wonder: OK, but how can I compute this stuff? Is it possible to compute it in some efficient way? That is another story. For now: the fact that this limit exists is a sort of ergodic property, and it is strictly associated with the statistical mechanics of disordered systems. Let me spend the rest of this class showing you why this problem of products of random matrices is common both to chaotic systems and to disordered systems. You told me that you are familiar with the transfer matrix, so let me just fix the notation.
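The constant-matrix intuition can be sketched in a few lines (the matrix and the number of steps are my choice): for a generic z_0 with c_1 != 0, the growth rate of |A^t z_0| converges to the log of the largest eigenvalue modulus.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 0.5]])        # eigenvalues alpha_1 = 2, alpha_2 = 0.5
z = np.array([1.0, 1.0])          # generic z_0: nonzero projection on v_1
t = 40
log_growth = 0.0
for _ in range(t):
    z = A @ z
    norm = np.linalg.norm(z)
    log_growth += np.log(norm)
    z /= norm                     # rescale so the vector never overflows
print(log_growth / t)             # close to log 2: alpha_1 dominates
```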
Just for notation, let us consider a one-dimensional spin glass, OK? A tribute to Giorgio Parisi, my professor. So you have a 1D random Ising model: something like a chain with some random couplings J_n and random magnetic fields h_n. And you write the partition function, which will depend on the temperature and on all these couplings and fields: Z_N = sum over sigma_1, ..., sigma_N of exp(sum_n [beta J_n sigma_n sigma_{n+1} + beta h_n sigma_n]). And then you can just close the chain on a circle, sigma_{N+1} = sigma_1, something like that, OK? Now you note that you can write this quantity using the trick of the transfer matrix. Let me introduce T_n(sigma, sigma') = exp(beta J_n sigma sigma' + beta h_n sigma). If I introduce this, I have Z_N = sum over sigma_1, ..., sigma_N of T_1(sigma_1, sigma_2) T_2(sigma_2, sigma_3) ... T_N(sigma_N, sigma_1). So you see that you can write this as the trace of the product: Z_N = Tr(T_1 T_2 ... T_N). Now, if the couplings are constant, it is enough to compute the eigenvalues: in that case Z_N is just lambda_1^N + lambda_2^N, where lambda_1 and lambda_2 are the eigenvalues of T. And this is simple. So you see that what is really important is only lambda_1, the maximum one, because when you take the free energy per particle, -(k_B T / N) log Z_N, you have just -k_B T log lambda_1 plus smaller terms, OK? So you see that the free energy is given by the maximum eigenvalue of the matrix.
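For the constant-coupling case, the trace identity and the dominance of lambda_1 can be checked directly. A sketch (the values of beta, J, h and the chain length are arbitrary choices of mine):

```python
import numpy as np

beta, J, h = 1.0, 1.0, 0.5
spins = [1, -1]
# Transfer matrix T[s, s'] = exp(beta * (J * s * s' + h * s))
T = np.array([[np.exp(beta * (J * s * sp + h * s)) for sp in spins] for s in spins])

N = 20
Z_trace = np.trace(np.linalg.matrix_power(T, N))   # Z_N = Tr T^N
lam = sorted(np.linalg.eigvals(T).real, reverse=True)
Z_eig = lam[0] ** N + lam[1] ** N                  # lambda_1^N + lambda_2^N
f_per_spin = -np.log(lam[0]) / beta                # leading term of the free energy
print(Z_trace, Z_eig, f_per_spin)
```

Already at N = 20 the lambda_2^N term is negligible compared to lambda_1^N, which is why only the maximum eigenvalue matters for the free energy density.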
OK, this is the pure case. Now repeat the game for the random case. If you repeat the game for the random case and you are very naive, not familiar with disordered systems, what you have is f_N = -(k_B T / N) log Z_N = -(k_B T / N) log Tr(T_1 ... T_N). Of course, if you know nothing, this quantity is a function of N, a function of the temperature, and a function of all the J's and all the h's, right? So if you are completely naive, you have this stuff. Then you can wonder: what if I take this quantity and perform an average? An average over what? Over the probability of all the J's and of all the h's. I write it in a formal way, but it is clear what it means. Averaging, I obtain a number, OK? I have a number, but of course this is horrible. From the conceptual point of view it means that you have an experiment, you have a disordered system — say gold with some impurities of iron — and then you move the impurities, compute the free energy again, and perform the average over all the possible ways to place the impurities. It is ridiculous; nobody is able to do this. Then you can wonder: if I perform a unique experiment — unique experiment means a unique realization of the disorder — with N large enough, what do I get? If I take this f_N and look at the limit N -> infinity, do I obtain the disorder average or not, OK? From a physical point of view this is a very precise question, because it asks whether a single sample is representative of the ensemble. That would be enough, OK?
So the answer from the physical point of view is yes — I hope so — and from the mathematical point of view I can say yes because of the Oseledets theorem: at the end of the story, apart from the decorations, this quantity is the logarithm of a matrix product, i.e., essentially a Lyapunov exponent. When you perform this product, apart from very rare situations, you always obtain the average: it is like the law of large numbers. When you perform the sum, in principle you can obtain everything, but you almost always obtain the good value, which is the typical value. This is exactly the same, but of course it is highly non-trivial; it is a consequence of the Oseledets theorem, whose official name is the multiplicative ergodic theorem, because it is associated with a multiplication. The non-trivial part is associated with the fact that the matrices do not commute, OK? So this is the connection between the two fields. Then the next step is how you can compute this number — that is another story. You can try to compute it with some analytical effort; I spent some years on this when I was young, and I also wrote a book, which I can send to you, where there are some analytical methods and approximations, whose difficulty does not depend on the dimension of the matrices: even for 2x2 matrices the problem is difficult. It is not that the 2x2 case is simple — it is difficult, that's it. There is no systematic approach; there are some methods developed by many people — Derrida, for example, worked a lot on this stuff. But I don't want to discuss the analytical methods; I just want to discuss how to compute numerically. And then, what is the connection with the other problem?
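The self-averaging claim can be illustrated numerically: two independent disorder realizations of a long random Ising chain give nearly the same value of (1/N) log Z_N. This is my sketch, not from the lecture; the disorder distributions (binary couplings, Gaussian fields) are illustrative choices, and the running rescaling of the matrix product plays the same role as the tangent-vector rescaling discussed later.

```python
import numpy as np

def logZ_per_spin(N, beta=1.0, seed=0):
    """(1/N) log Tr(T_1 ... T_N) for one realization of the 1D random Ising chain."""
    rng = np.random.default_rng(seed)
    J = rng.choice([-1.0, 1.0], size=N)      # random couplings
    h = rng.normal(0.0, 0.5, size=N)         # random fields
    M = np.eye(2)
    log_scale = 0.0
    for n in range(N):
        T = np.array([[np.exp(beta * ( J[n] + h[n])), np.exp(beta * (-J[n] + h[n]))],
                      [np.exp(beta * (-J[n] - h[n])), np.exp(beta * ( J[n] - h[n]))]])
        M = T @ M                            # product in chronological order
        s = np.abs(M).max()
        log_scale += np.log(s)               # rescale: avoid overflow, keep the log
        M /= s
    return (log_scale + np.log(np.trace(M))) / N

print(logZ_per_spin(50_000, seed=1), logZ_per_spin(50_000, seed=2))
# the two realizations nearly coincide: self-averaging
```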
Because apparently I split the problem into time and space, but actually there is a connection: these are not two separate problems, they are two aspects of the same stuff, and there is a bridge between the two. So let me spend the last 10 minutes discussing the method due to Benettin and coworkers, which is a computational translation of the Oseledets theorem. I discuss it in the case of the Hénon map, and then you can generalize. When you have time, maybe you can play with this and write the code — it is very short, just a few lines. So: we are in 2D, OK? I write the tangent equation, z(t+1) = A(x_t) z(t), where the matrix depends on x_t. What is this matrix? Its entries are just A_{ij} = partial f_i / partial x_j, evaluated along the trajectory. That's it, OK? In the case of the Hénon map, the (1,1) entry is -2 a x_t — yes, there is a factor 2. Apparently this is an innocent structure, but it is not innocent at all. When you read about this problem of random matrices for the first time, you think: come on, I just have to manage 2x2 matrices, who cares. It is not so. OK, you have this. Then the computation of the first one, the first characteristic exponent lambda_1. It is defined in the same way: you start with a certain x_0 — let us assume the system is ergodic, otherwise the result depends on x_0 — and you start with a z_0. And you iterate. You need to iterate both, you see? In order to compute the exponent, you have to iterate the system, because you need the matrices along the trajectory. You iterate the system, you compute z(t), and then you take lim_{t -> infinity} (1/t) log(|z_t| / |z_0|), OK? And this, with probability 1, gives you lambda_1 as t goes to infinity.
This approaches lambda_1 with probability 1, in the sense that unless you are so unlucky as to choose z_0 exactly perpendicular to the most expanding direction, you obtain lambda_1. But don't worry: if you are unlucky, the computer round-off will solve your trouble. "So, if I do this, I will have |z_t| exponentially larger than |z_0|, and then maybe I will get overflow errors." Right — so there is a practical trick, and this problem is solved very simply. And this trick is a clean trick, not a dirty trick: there are two kinds of tricks, the tricks where you are cheating, but with this trick you are not cheating, OK? You observe the following. Of course |z_t| increases a lot and you expect to have trouble with the numerics, but look at this argument. You can fix a certain time tau, whose value is not important — it depends only on your computer, maybe 3, 5, 10, even 1 if you like. And you can write |z_t| / |z_0| = (|z(T_N)| / |z(T_{N-1})|)(|z(T_{N-1})| / |z(T_{N-2})|) ... , where T_n = n tau. So the logarithm becomes the sum of log(|z(T_n)| / |z(T_{n-1})|), and then I take the limit capital N -> infinity. Why is this nice? Note that if I rescale z, each factor changes, but the ratio does not change. Why? Because the dynamics of the tangent vector is linear. So this means that I can use the following clean trick: I start with z(0) of modulus 1, why not? Then I compute z(T_1), and I have alpha_1 = |z(T_1)|. Then I rescale: instead of continuing with z(T_1), I continue with z'(T_1) = z(T_1) / alpha_1.
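Putting the rescaling trick together for the Hénon map, here is a minimal sketch of the Benettin-style computation of lambda_1, with tau = 1, i.e., rescaling at every step (the function name, initial condition, and iteration counts are my choices):

```python
import numpy as np

def henon_lyap1(a=1.4, b=0.3, n_steps=100_000):
    """Largest Lyapunov exponent of the Henon map via tangent-vector rescaling."""
    x, y = 0.1, 0.1
    for _ in range(1_000):                    # relax onto the attractor
        x, y = 1.0 - a * x * x + y, b * x
    z = np.array([1.0, 0.0])                  # tangent vector, |z_0| = 1
    log_sum = 0.0
    for _ in range(n_steps):
        A = np.array([[-2.0 * a * x, 1.0],    # Jacobian along the trajectory
                      [b,            0.0]])
        z = A @ z                             # tangent dynamics z(t+1) = A(x_t) z(t)
        x, y = 1.0 - a * x * x + y, b * x     # map dynamics
        alpha = np.linalg.norm(z)
        log_sum += np.log(alpha)              # accumulate the stretching factor
        z /= alpha                            # rescale back to unit length
    return log_sum / n_steps

print(henon_lyap1())  # around 0.42 for the standard parameters
```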
In such a way z' has modulus 1; then I repeat, and I have alpha_2, and so on. And this is a clean trick, because I just use the fact that the evolution of this stuff is linear. So at the end of the story, lambda_1 = lim_{N -> infinity} (1/(N tau)) sum_n log alpha_n, OK? And this is a way to avoid numerical overflow, because you can take tau small, even 1 if you like. Then you have to decide: if you rescale too often, you waste computer time, but in this way you avoid the trouble of the overflow. Now, how can you compute the second one? The second one is more complicated, but let's start with the idea. The idea is to start with a z_1(0) and another z_2(0), perpendicular to it. So you start with these two. You know that asymptotically both will align with the most expanding direction: even if you start at random, z_2 will collapse very close to z_1; of course, it is impossible to avoid. But if now, instead of looking at the lengths, you look at the area spanned by these two vectors, you realize that this area grows with the sum of the Lyapunov exponents: Area(t) is roughly Area(0) exp[(lambda_1 + lambda_2) t]. Of course, here you have horrible problems with overflow and underflow, because the lengths diverge while the angle between the vectors goes to zero. So the trick is to perform a Gram-Schmidt orthonormalization: you repeat the same game, but not only rescaling — you also orthonormalize after a certain time. You start from two orthonormal vectors, you arrive at an almost collapsed pair, and you perform an orthonormalization, so they remain well separated, and so on; and you accumulate the areas. But the point is: if you want to compute the second Lyapunov exponent, you cannot compute it without knowing the first.
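The area/orthonormalization idea is exactly a QR decomposition at each step, with numpy's qr playing the role of Gram-Schmidt. In this sketch (my implementation, not from the lecture) the diagonal of R collects the stretching factors; as a consistency check, for the Hénon map lambda_1 + lambda_2 = log b, since |det A| = b at every point of the trajectory.

```python
import numpy as np

def henon_spectrum(a=1.4, b=0.3, n_steps=100_000):
    """Both Lyapunov exponents of the Henon map via QR reorthonormalization."""
    x, y = 0.1, 0.1
    for _ in range(1_000):                    # relax onto the attractor
        x, y = 1.0 - a * x * x + y, b * x
    Q = np.eye(2)                             # two orthonormal tangent vectors
    sums = np.zeros(2)
    for _ in range(n_steps):
        A = np.array([[-2.0 * a * x, 1.0],
                      [b,            0.0]])
        Q, R = np.linalg.qr(A @ Q)            # evolve the frame, then reorthonormalize
        sums += np.log(np.abs(np.diag(R)))    # |R_11|, |R_22| are the stretching factors
        x, y = 1.0 - a * x * x + y, b * x
    return sums / n_steps

l1, l2 = henon_spectrum()
print(l1, l2)   # roughly 0.42 and -1.62; their sum equals log(0.3)
```

The sum l1 + l2 matches log b to machine precision at any number of steps, because each QR step contributes exactly log|det A(x_t)|; only the split between the two exponents needs long-time averaging.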
So say I am interested in a system with 200,000 Lyapunov exponents and I want to compute Lyapunov exponent number 1,023. OK, you can do it, but you have to compute all the previous ones. OK, this is the method. Which is not impossible: for example, when I was younger, with Stefano Ruffo we computed some thousands of them. It was our game when we were young to compute many Lyapunov exponents. It is not particularly difficult, it is just long. OK, tomorrow I will continue to discuss this stuff. In addition, the Lyapunov exponent has some properties I forgot; I need to discuss the properties of the Lyapunov exponent. Tomorrow I'll continue. OK, thank you very much. So we continue tomorrow. Today some of you will go to CISA, and there were some questions on how to get there; I think you will shortly receive some instructions on where the bus will be, et cetera. OK, thank you very much.