So we begin by continuing with our study of maps, specifically one-dimensional maps. Recall that the kind of discrete-time dynamics we were looking at was a map of the form x_{n+1} = f(x_n), where x_0 is defined to be in some interval; we started off by defining it on the reals. The question was then to study the dynamics implied by this discrete-time evolution equation. Before I do that, let me settle a couple of questions which were asked last time. The first had to do with what happens if the map function itself changes with time, in other words if at the nth stage you have a different map function each time you iterate. This would of course be the discrete-time analog of a non-autonomous system, in which the evolution equation explicitly involves time on the right-hand side. We are not looking at such maps to start with, although one can in principle do so; for the moment we are again looking at autonomous systems, which do not have this possibility.
The second question that was asked was much deeper, and had to do with whether, in the presence of chaos (which we have not yet defined very clearly), you could have order as well, order from chaos. The comment I would like to make is that the lesson taught to us by chaotic dynamics is that simple evolution equations, simple dynamics, can have very complex solutions: very simple-looking equations can lead to very complicated evolution in time. The converse is also true: you could have a very complex system which on the average behaves in an extremely simple fashion. For instance, if you took the gas particles in this room, the motion is certainly chaotic at the classical level, highly chaotic as we will see, and yet there are very simple average laws, such as the ideal gas equation of state PV = RT, and similar laws which represent the behavior of macroscopic average quantities; under suitable circumstances these are actually extremely simple. Another example: take a piece of metal and apply a voltage to it, and you have a current; in a conductor the current is proportional to the applied voltage, which is just Ohm's law. This again is in spite of the fact that the individual charges inside, the electrons, could be doing very complicated things, and yet you have a macroscopic law which is very simple. Or Hooke's law in elasticity, or Fick's laws of diffusion, and so on. All these macroscopic laws are quite simple to write down, but they refer to average or macroscopic quantities, and they come out by averaging over a very large number of individual microscopic motions which could themselves be very complicated. There are reasons for that as well, and a little later, when we talk about invariant densities, I will come back to this and talk about thermodynamic systems and what is so special about them. The third question that was asked, again
jumping ahead a little bit, was whether you could have both regular motion and chaotic motion coexisting simultaneously in a system. The answer is yes; in general this is possible. There could be regions of phase space in a dynamical system where, if you start with an initial condition in one of these regions, the motion remains regular, non-chaotic, while in other regions the system behaves chaotically. This is entirely possible. We even talked about this in the context of Hamiltonian systems, where something very special happens when you have chaotic motion, and I will return to this a little later when we talk about chaos in greater detail. Recall that in Hamiltonian systems which are integrable, the motion is on n-dimensional tori in general, for an n-freedom system. If you took the example of n = 2, then the motion in a four-dimensional phase space is restricted to two-dimensional tori, and pictorially I could draw them as follows: perhaps you have one torus like this, and there is another initial condition for which you have a larger torus of this kind, and these tori are nested within each other. It is entirely possible that in the region in between two tori you have chaotic, irregular motion. But since these tori split the whole of the space into an inside and an outside (remember the energy surface in this case is three-dimensional in a four-dimensional phase space, and a three-dimensional space is split into an inside and an outside by an object like this torus), anything that is inside can never escape outside, and so the chaos is actually contained in the region between successive tori, if it happens. As soon as n becomes greater than or equal to 3, the situation changes: for n = 3, for instance, the phase space is six-dimensional, the energy hypersurface is five-dimensional, and the tori are three-dimensional, and these tori cannot split up this five-dimensional space into
an inside and an outside. So all the chaotic regions can actually be connected to each other, and you have what is called a stochastic web, or Arnold diffusion, which takes you from one point of this stochastic region to another, going through all of the phase space which is not occupied by the tori themselves. So this kind of behavior can happen, it can be very complex, and certainly coexistence of chaos and regular motion is possible in many systems. Now let's go back and look at some of these maps in a little greater detail. We talked about the fixed point of such a map: a fixed point x* is a root of x* = f(x*), and it is stable if |f'(x*)| is less than unity, unstable if it is greater than one, and marginally stable or indifferent if it is exactly equal to one. So these fixed points are the analogs of critical points for continuous-time flows. Now, what happens in a map if you have a pair of points a and b such that a = f(b) and b = f(a)? It is immediately clear that a and b are then fixed points of the iterated map f(f(x)), which I'll denote by f2(x). They need not be fixed points of the map itself, but they are fixed points of this first iterate. So you take the initial value a, calculate f(a), and it turns out to be some number b; then you calculate f(b), and you are back at a. This is entirely possible. What would you then call the pair a and b? It's clear that under iteration a goes to b and b goes to a, and it keeps coming back. Such an orbit is the analog of a closed orbit in continuous time: it is a periodic cycle, and such a thing is called a period-2 cycle. We're going to look at examples of this very shortly. So in this
staircase construction that we had, or the cobweb construction we had by the method of successive approximations, it would turn out that the function value corresponding to the initial value a is some b, and that corresponding to b is back to a, and they form a period-2 cycle. When would such a cycle be stable? It would be stable if the slope of this new function, the function of the function of x, is less than 1 in magnitude at the fixed point of f2(x). That is exactly when it is stable. What is the criterion for that? I'd like |d f2(x)/dx|, evaluated at x = a or at x = b, to be less than 1 in magnitude. To see what this means, suppose f(x1) = x2 and f(x2) = x1. Then by the chain rule, d f2(x)/dx at x = x1 is f'(f(x1)) f'(x1) = f'(x2) f'(x1), and if the modulus of this product is less than 1, you have a stable period-2 cycle. A moment's thought will convince you that this is right: all I've done is to write f2(x) as f(f(x)) and differentiate, so it's df(x2)/dx2 times dx2/dx1, the derivative of the outer function times the derivative of the inner one. Similarly you could have a period-p cycle: an orbit which goes from some value a1 to a2 to a3, and so on to ap, and back to a1. The set of p points a1, ..., ap forms a period-p cycle, which is stable if the magnitude of the slope of the p-th iterate of the function is less than 1 at any of these periodic points; equivalently, if the product of the slopes of f at all the points on the orbit is less than 1 in magnitude, you have a stable period-p cycle. If it's greater than 1 the cycle is unstable, and once again you could have marginal stability. Now let's draw a
picture and see when this happens and how it goes. We need to write down some kind of function which would do this, so let's specialize to the unit interval on the x axis. Let me plot f(x) versus x; we focus on just the unit interval and let f(x) also take values in the unit interval, and then we ask: what does this function look like, what are its fixed points, and so on. Let me take a very typical function which we are going to study in some detail. Here's the bisector, and suppose the function is some function with a single hump on top, a symmetric parabola for instance. Here's one fixed point and here is another, and it's immediately clear from the picture that at both of these the slope is greater than 1 in magnitude, and therefore these are unstable fixed points. On the other hand, if I iterate this function once, what would the iterate f2(x) look like? We can do this in detail, but perhaps it would do something like this, and there are now several fixed points: one here, two here, three here, and four here. What would these fixed points correspond to? This fixed point remains a fixed point here; the fixed point of a map remains a fixed point of its iterates. That's quite clear, because if f(x*) = x*, then any iterate satisfies fp(x*) = x* as well; under iteration this doesn't change at all. So a fixed point of a map is also a fixed point of all its iterates. The converse is not true in general, because in this case f2 has four fixed points while f has just two. Now it's quite evident from the picture that this was an original fixed point and it remains so, and likewise this point here, but you have two new fixed points. What would they correspond
to as far as the original map is concerned? They correspond to a period-2 cycle. The new fixed points would correspond to a period-2 cycle: it simply means that if you took this value and applied the map function it would go to that value, and if you took that value and applied the map function it would come back to this one. So the new fixed points of the iterate f2 straddle the original fixed point of the map. And it is evident from the picture, at least pictorially in this case, that all of them are unstable. But that does not have to be the case; it depends on the kind of function we are looking at. We are going to look at this in some detail, but I want you to appreciate the fact that period-p cycles are fixed points of the p-th iterate of the map which are not fixed points of any of the earlier iterates, just as this period-2 cycle is a fixed point of f2 which is not a fixed point of f itself. So now let's do some specific calculations to understand what this looks like. There are several standard maps; we should not use the phrase "standard map", because that is used in a specific context as well, but there are several, so to speak, typical, prototypical maps which are studied in one-dimensional dynamics, and which are useful to know about because they illustrate many general properties. Let's start with a few of them and go on from there. The first of these is the so-called Bernoulli shift, or the doubling map, or the binary shift, and it is as follows. You start with a number x0 in the unit interval 0 to 1, and x1 is twice x0 modulo 1. In other words, double the number and subtract 1 if the result exceeds 1. The general map function is x_{n+1} = 2 x_n modulo 1.
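Before turning to the Bernoulli map's properties, the product-of-slopes criterion for a period-2 cycle is easy to check numerically. A minimal sketch, assuming a logistic map f(x) = r x (1 - x) as the single-hump function; the specific map and the parameter value r = 3.2 are illustrative assumptions, not from the lecture:

```python
from math import sqrt

R = 3.2  # illustrative parameter; the 2-cycle happens to be stable here

def f(x, r=R):
    # logistic map: a single-hump map of the unit interval into itself
    return r * x * (1 - x)

def fprime(x, r=R):
    # slope of the map, needed for the stability criterion
    return r * (1 - 2 * x)

# For the logistic map the period-2 points are known in closed form:
# the roots of f(f(x)) = x that are not fixed points of f itself.
a = (R + 1 + sqrt((R + 1) * (R - 3))) / (2 * R)
b = (R + 1 - sqrt((R + 1) * (R - 3))) / (2 * R)

print(abs(f(a) - b) < 1e-12, abs(f(b) - a) < 1e-12)  # a <-> b: a 2-cycle

# Stability: the product of the slopes along the cycle, i.e. the slope
# of f2 at either periodic point, must be less than 1 in magnitude.
multiplier = fprime(a) * fprime(b)
print(multiplier)  # approximately 0.16 here: less than 1, a stable cycle
```

The same product-of-slopes test works for any period-p cycle: multiply f' over the p points of the orbit and compare the magnitude with 1.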
Already this map has many interesting properties, and it's called the binary shift, or the one-dimensional baker's transformation, or the Bernoulli shift, or the Bernoulli map, and so on. For convenience let me just call it the Bernoulli map; it's given by x_{n+1} = 2 x_n modulo 1. Let's try to draw a picture of this map. Here is the unit interval 0 to 1, here is x, and here is f(x) = 2x modulo 1, and as always I draw the bisector first. All the map does is double the number, so it's a straight line with slope 2. At the point one-half the map function has already reached 1, and it would go on like this up to the value 2, but we are told to subtract 1 if the number exceeds 1, and that's equivalent to taking that piece, cutting it, and putting it back down here. So I should draw it as a straight line, with slope 2 everywhere; it shouldn't look like a curved line. Is this a linear function? Would you call this a linear function? It's piecewise linear, not linear: each piece is linear. Is it continuous or discontinuous? It has a discontinuity at the point one-half. Does it have fixed points? 0 and 1 are fixed points. Are they stable or unstable? They are both unstable, because the slope is equal to 2 in magnitude at each point, and therefore these are both unstable fixed points. What happens then? Perhaps this map has stable periodic points; we should check that out. So what does the second iterate look like? What is f2(x)? It is equal to 2 squared times x modulo 1, because it's 2x modulo 1 followed by another doubling modulo 1, so it's 4x modulo 1. What does this function look like? Here's a half, here's a quarter, here's three-quarters, and the function looks like this, and this is the bisector. Where are the fixed points of this map? Well, 0 is a fixed point. So for the original map we had fixed points at 0 and 1, both unstable.
For f2, of course, 0 and 1 are again fixed points, but you also have two more. Where are these points? It's easy to guess: all you have to do is double the number and take modulo 1. What happens if we take 1/3? Where does 1/3 go when you double it? To 2/3. And where does 2/3 go when you double it? Back to 1/3, because 4/3, after you subtract 1, is back to 1/3. So it's evident that 0, 1/3, 2/3 and 1 are fixed points of this iterated map, of which 0 and 1 are already fixed points of the original map, while 1/3 and 2/3 form a period-2 cycle: this point jumps into that, that point jumps back into this, and it keeps going forever. Are these stable? Is this a stable period-2 cycle? It's definitely unstable, because the slope of f2 is 4. Are there any other period-2 cycles? Pardon me? Well, 0 and 1 are trivial period-2 cycles, because they are actually period-1 cycles; they are already fixed points. I said a period-p cycle is a set of p points which are fixed points of the p-th iterate of the map, but not of any of the earlier iterates. So there are no other period-2 cycles here, as far as we can tell. What about the next iterate? The slope would be 8, and you get a larger number of fixed points and new periodic cycles as well, but the fact is that all of them are unstable. So it's immediately clear that all fixed points of the map, as well as of all its iterates, are unstable. Then the question arises: if a point does not belong to a periodic orbit, where does it go on iteration? Where does it end up? It turns out it wanders forever on the unit interval, never leaving it, but in a completely aperiodic and irregular fashion, in fact in a chaotic fashion. So this is our first example of what chaos is.
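The 1/3 and 2/3 cycle just found can be verified in a few lines. A sketch using exact rational arithmetic; exactness matters here, because ordinary floating-point numbers are dyadic rationals, and under repeated doubling mod 1 they all collapse to 0, which would hide the cycle:

```python
from fractions import Fraction

def bernoulli(x):
    # doubling map x -> 2x mod 1, evaluated exactly on rationals
    return (2 * x) % 1

third = Fraction(1, 3)
print(bernoulli(third))             # 2/3
print(bernoulli(bernoulli(third)))  # 1/3: a period-2 cycle of the map

# The cycle is unstable: the slope of f2 along it is 2 * 2 = 4,
# since |f'| = 2 wherever the map is differentiable.
```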
We will define chaos much more precisely, but the fact is that you have an infinite number of points which sit on periodic cycles, all of which are unstable. If you start exactly at one of the points of a periodic cycle, that point will move on its periodic orbit; but for a generic starting point the iterates will uniformly and densely fill up the entire unit interval, given enough time. This is an experiment you could do with a pocket calculator: start with a random number between 0 and 1, iterate, and keep doing this, and you will see that gradually its iterates fill up the entire interval. You could then ask: what are the points that actually fall on periodic orbits? We found that 1/3 and 2/3 fall on a periodic orbit. What about the point 1/5? Where does it go? Well, 1/5 goes to 2/5, which goes to 4/5, which then goes to 8/5, the same as 3/5, which goes to 6/5, the same as 1/5. So you end up with a period-4 cycle: the point 1/5 lies on a period-4 cycle, and so on. Let's generalize this. What's the sensible way of doing it? The convenient way is to write x0 in binary notation, using the digits 0 and 1. It's a number between 0 and 1, so let me write it as 0.a0 a1 a2 a3 ..., where each ai is 0 or 1. If this is x0, what is x1? I have to double this number, in other words multiply by 2, and if the result is greater than 1, I throw away the integer part. If x0 is less than a half, then a0 is 0, because a0 is the coefficient of 1 over 2 to the power 1. If x0 is greater than or equal to a half but less than 1, then a0 is 1. When I multiply by 2, it's equivalent to taking the binary point and shifting it one place; that's all it does, because I've written the number in binary.
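Both observations above, the period-4 orbit of 1/5 and the digit-shift picture, can be sketched directly; the helper name `bits` is mine, not from the lecture:

```python
from fractions import Fraction

def bernoulli(x):
    # doubling map x -> 2x mod 1, exact on rationals
    return (2 * x) % 1

def bits(x, n):
    # first n binary digits a0 a1 ... of a number x in [0, 1)
    out = []
    for _ in range(n):
        x = 2 * x
        out.append(int(x))  # the digit that just crossed the binary point
        x = x % 1
    return out

# 1/5 -> 2/5 -> 4/5 -> 3/5 -> 1/5: a period-4 cycle
x = Fraction(1, 5)
orbit = [x]
for _ in range(4):
    x = bernoulli(x)
    orbit.append(x)
print(orbit[-1] == orbit[0])      # True: back to 1/5 after four steps

# doubling mod 1 = shifting the binary expansion one place to the left
x0 = Fraction(1, 5)               # 0.00110011... in binary
print(bits(x0, 8))                # [0, 0, 1, 1, 0, 0, 1, 1]
print(bits(bernoulli(x0), 8))     # [0, 1, 1, 0, 0, 1, 1, 0]
```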
So it's immediately clear that on doubling, 2 x0 is a0.a1 a2 a3 ..., and therefore x1 = 2 x0 mod 1 = 0.a1 a2 a3 .... This is the reason for calling it the binary shift: all you have to do is move the binary point one place to the right and erase whatever stands in front of it. Is this map invertible? In other words, x1 as a function of x0, or x_{n+1} as a function of x_n, looks like this; is it invertible? If I give you an x_n, you can find a unique x_{n+1}. But if I give you an x_{n+1}, can you find a unique x_n? No indeed, because there are two possible x_n's from which it could have emerged: regardless of what a0 is, you get the same x1. So the map is not invertible; it is non-invertible although it's piecewise linear. This non-invertibility is crucial, because it means that if I give you x_n and ask what x0 it could possibly have emerged from, there are 2 to the n possibilities, all of which lead to the same x_n. You can see that at each iteration I am losing one bit of information, because the leading digit gets erased: at the next stage a1 would get erased, and you would have just 0.a2 a3 and so on. You have no way of going back and recovering what a1 was or what a0 was.
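The two pre-images are explicit: whatever x_{n+1} is, both x_{n+1}/2 and (x_{n+1} + 1)/2 are sent to it, one from each half of the interval. A small sketch:

```python
from fractions import Fraction

def bernoulli(x):
    # doubling map x -> 2x mod 1
    return (2 * x) % 1

def preimages(y):
    # the two points the map sends to y: one in [0, 1/2), one in [1/2, 1)
    return [y / 2, (y + 1) / 2]

y = Fraction(1, 4)
for x in preimages(y):
    print(x, bernoulli(x) == y)   # 1/8 and 5/8 both map forward to 1/4

# Going back n steps there are 2**n candidate ancestors, so one bit of
# information about the initial condition is lost per iteration.
```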
So this is responsible for many of the properties of this map: it is not invertible, and moreover the number of pre-images, the number of possible x0's which lead to a particular x_n, is actually increasing exponentially with n; it's 2 to the power n. Now let's see which points lie on periodic orbits. Can we say that from here? If a point is part of a periodic orbit, it's clear that after some time this digit pattern should repeat. What's the necessary and sufficient condition for that? Well, either the binary expansion terminates at some point, after which you just get 0 0 0 forever, or it repeats: it's a periodic pattern by itself. What do you call those numbers? Rational numbers. All rational numbers between 0 and 1 would lie on periodic orbits, and all irrational numbers, for which the expansion never repeats itself, would lie on the single chaotic attractor, in a sense which will become precise in a moment; in this case the full unit interval itself is the chaotic attractor. The rational numbers are dense on the unit interval: arbitrarily close to any point of the interval you can find rational numbers. But they form a set of measure zero; the total length of all the rational numbers is 0. They are denumerable: infinite, but you can count them off, you can put them into one-to-one correspondence with the integers. Yes, that's a good point. He asks: what if you have a number where the first k digits are some specified digits, and after that a periodic pattern starts? What sort of point would that be? It would be a point which, after a certain number of iterations, falls into a periodic orbit; it is a pre-image of a periodic orbit. Would you call such
an initial point rational or irrational? Rational, absolutely. Once again it's rational; it's just that it doesn't start off right away with a periodic pattern. We will come back to this question, and I want you to think about the answer. What if I have k digits here which are completely arbitrary, and after that you have 0 1 0 1 0 1 and so on, forever? This is certainly possible. What kind of number is this, and what kind of orbit does it belong to? What would you say, is this a rational number or not? It is a rational number. Therefore the correct statement is that all rational numbers either lie on periodic orbits or on pre-images of periodic orbits, reached after a finite number of steps; in other words, after a finite number of iterations all rational numbers fall into periodic orbits. But the number of irrational numbers is infinitely larger than the number of rationals; it's uncountably infinite, you cannot enumerate all the irrational numbers between 0 and 1. The fate of those numbers is that their iterates uniformly and densely fill up the unit interval, never hitting a rational point, never settling down to any specific part of the interval, but always wandering on the unit interval, back and forth. So that's one aspect of this so-called chaotic behavior, which we still have to define precisely, and which I will do in a very short while. The other aspect is that if you start with two numbers which are close to each other, then here is an instance where the error after n iterations becomes as large as the unit interval itself, and this is exactly what I meant when I said we have exponential sensitivity to initial conditions. It's very obvious: I start with an x0, and that leads after n iterations to x_n equal to
2 to the power n times x0, modulo 1. And I start with a y0 equal to x0 plus epsilon, which goes to y_n equal to 2 to the n times (x0 plus epsilon), modulo 1. So the separation between the two is 2 to the power n times epsilon, modulo 1, and if n becomes large, it's clear this number, for arbitrarily small epsilon, can become as large as the unit interval itself. This is the statement I made, that there is exponential sensitivity to initial conditions: the separation between neighboring trajectories is in fact exponential in time. Remember that n is time, discrete time, and 2 to the power n is e to the n log 2. We'd like to make this a little more precise; we'd also like to find out how we can be sure that the iterates of these irrational points fill up the entire space; and finally we'd like to ask what's meant by exponential sensitivity. So let's take that up next, since this is at the root of chaos. What I mean by exponential sensitivity to initial conditions, pictorially, is the following. Let me do this in continuous-time dynamics, just for illustration's sake. Suppose you start with an initial point here, and this is the trajectory of this point; and you start with a neighboring point here, and here is its trajectory. I'd like to find out whether points arbitrarily close to the first point spread away from it exponentially fast or not as time goes on. So I start with this initial distance between them; after time t I find this distance, and if this is e to the power lambda t, for some lambda, times the initial distance, then I would say there is exponential sensitivity. But I want to do this carefully. How would I do this, in either discrete or continuous dynamics? If this is epsilon and this is e to the lambda t times epsilon, and I want to extract this lambda, clearly I take this distance divided by this
distance, take the log, and then divide by t in order to extract the lambda. So I define... yes? The question is: what is so special about exponential sensitivity? Well, if the separation grows as a power law in time, like the square root of time, or t itself, or t squared, or t to the power 20, whatever, then you can actually compute the difference; you can calculate the way the error amplifies in polynomial time; it's completely computable. But that's not possible if the growth is exponentially fast: no matter how small your initial error was, eventually things blow up exponentially. And we saw in our study of continuous-time dynamical systems that, except at points where the vector field vanishes, the flow is actually exponential: the solutions locally were always of the form exponentials of time multiplied by eigenvalues of some linearized matrix, and that's generic. That is precisely the sort of separation which leads to uncontrolled amplification of errors, of initial uncertainties or imprecisions. Yes, it means that there is exponential instability; however, for chaos you need something more than that: you need to have it in whole regions of phase space, not at individual points. We can certainly control what happens in most cases of regular behavior, but here it becomes uncontrolled: the error will actually grow until it fills up the system size itself, which doesn't happen in normal systems. The linearization is an approximation to a nonlinear system, yes; the reason it became exponential there was that we had first-order dynamics. But the point is: is the system exponentially unstable in whole regions of phase space, or only at isolated points? Just the fact that you have an e to the lambda t times x0 locally, when you rectify a vector field, doesn't imply chaos at all; you need many other conditions. Yes, the solution
is exponential, but in a linear system the errors do not get amplified in the uncontrolled way needed for chaos. You need nonlinearity for this sensitivity to happen; nonlinearity alone is not enough, but a plain linear system doesn't do this at all, and that's why we're going to put down conditions for chaos. You have in mind a system of the following kind: x dot = x on the real line. The solution is x(t) = e to the t times x(0). Therefore if I start with an x0 here, and here is x at time t, and I start with x0 plus epsilon, that of course goes to (x0 plus epsilon) e to the t, whereas this is x(t) = x0 e to the t, and the separation is e to the t times the original separation. This is not chaos; this just says that the solution is an exponential. It is not chaos because the phase space is unbounded. We are going to define chaos for systems in which the phase space is actually bounded, and yet you have amplification of errors such that things get mixed up in the space so badly that the error finally amplifies to the size of the phase space itself. That's not happening here: this system is completely integrable. There is a notion of exponential instability here, but it's not very interesting if it arises from unboundedness along some axis of phase space, because the system itself is integrable. You'll see what else is needed in order to produce chaos. Yes, I agree entirely, but such cases are not very interesting, because there we know the system is unbounded, and that's the reason why things become exponentially large; we're looking at systems which are not integrable. Chaotic systems are not integrable, in general; I need to qualify that, and we will come back to it in the case of discrete maps. So I want to define here
this multiplier lambda in a careful way. I want it to be a property of this trajectory: I want to see how things are thrown away from it as you go along. If you have a neighboring point here, it's thrown away; a neighboring point there, it's thrown away; another one here, thrown away; and so on, and I want to probe the rate at which this happens. So let me define it as follows. I start with an initial point x0 and a point x0 plus epsilon; that's my initial distance. If after time t the point x0 finds itself at x(t), and the point y0 finds itself at y(t), then I compute the modulus of y(t) minus x(t), divided by y0 minus x0. In doing so I've captured this factor of e to the power lambda t; I take the log, which brings down the exponent, and I divide by t, and that's supposed to give me lambda. But I need to make sure that I'm probing this particular trajectory, so I take the limit in which epsilon goes to 0; recall that y0 minus x0 is equal to epsilon. And this is supposed to be asymptotic behavior, so I take the limit as t tends to infinity of this quantity, and that's my lambda. This is the definition of the Lyapunov exponent. I want you to pay particular attention to the order in which these limits are taken. If I don't do that, then it's clear that if the phase space is bounded, this numerator is always finite; it doesn't diverge or anything like that; and then I take its log, divide by t, and as long as this is finite, the whole thing goes to 0. On the other hand, if epsilon goes to 0 first, you may have a very non-trivial lambda; this limit may exist and give you a non-trivial number, and that's the one we want to calculate. This was for flows; for maps we could write a similar thing. What would you write? Again, lambda is equal to the limit as n tends to infinity of the limit as epsilon tends to 0 of 1 over n times the log of the
separation of the nth iterates divided by epsilon; that would be my definition of the Lyapunov exponent, if it exists. Now, our map function says x(n+1) = f(x(n)), so what does this become? Because f^n(x0 + epsilon) is the nth iterate of y0, it becomes

lambda = lim (n -> infinity) lim (epsilon -> 0) (1/n) log [ |f^n(x0 + epsilon) - f^n(x0)| / epsilon ].

But what is this equal to? We can simplify it: it's very clear that if this map is assumed to be differentiable, the epsilon -> 0 limit is just the derivative of f^n at the point x0. Therefore

lambda = lim (n -> infinity) (1/n) log | d f^n(x)/dx |, evaluated at x = x0.

That, if it exists, is the Lyapunov exponent, and it's a function of the starting point — in fact it's a function of the trajectory, or the orbit, on which x0 finds itself, and it could most certainly change from one orbit to another. But what do we have here? Remember that x1 = f(x0), x2 = f(x1), and so on, with xn = f(x(n-1)). So could we not write this in a simpler form? Since f^n(x0) is just xn, the chain rule gives

d f^n(x0)/dx0 = [df/dx at x(n-1)] [df/dx at x(n-2)] ... [df/dx at x0],

and since the log of a product is the sum of the logs, this can be simplified a little more:

lambda = lim (n -> infinity) (1/n) sum from j = 0 to n-1 of log |f'(xj)|.

So it's just a time average: the long-time average of log |f'(x)| over all those points onto which the initial point x0 falls as you iterate in time over a very long period. What would this be for the Bernoulli shift that we just looked at?
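[Editorial illustration, not part of the lecture.] The time-average formula above is easy to turn into a numerical sketch. The following minimal Python example uses the logistic map f(x) = 4x(1 - x) as a stand-in, since its Lyapunov exponent is known analytically to be log 2; the map choice, starting point, and iteration counts are illustrative assumptions.

```python
import math

def lyapunov(f, df, x0, n_iter=200_000, n_transient=1_000):
    """Estimate the Lyapunov exponent of a 1-D map as the long-time
    average of log|f'(x_j)| along the orbit of x0."""
    x = x0
    for _ in range(n_transient):   # discard an initial transient
        x = f(x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(df(x)))   # accumulate log |f'(x_j)|
        x = f(x)
    return total / n_iter

# Logistic map at r = 4 (an illustrative stand-in, not the lecture's map):
# its Lyapunov exponent is known analytically to be log 2.
f = lambda x: 4.0 * x * (1.0 - x)
df = lambda x: 4.0 - 8.0 * x
lam = lyapunov(f, df, x0=0.2)
print(lam)   # close to log 2, about 0.693
```

Discarding a short transient and then averaging log |f'(x_j)| along a long orbit is exactly the time average written above.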
Could it become an integral? Yes — we should be able to convert this to an integral, but then you have to say over what, and so on. In any case, this can be computed very easily for the map we just looked at, the doubling map, the Bernoulli shift: the map was just 2x mod 1, and what's the slope? It's 2 everywhere — the slope is 2 at all points of this map. Therefore you get the sum from 0 to n-1 of log 2, which is n log 2, and you have to divide by n. So what is this equal to? It's log 2, and this system is exponentially sensitive to initial conditions. It could certainly turn out that lambda is 0 in some cases. Incidentally, what if you had a power-law separation between two trajectories — what would you get then? Suppose you had an error which increased like a power law. Remember, in our map here, xn = 2^n x0 mod 1 and yn = 2^n (x0 + epsilon) mod 1, and what we found was this amplification of the error, 2^n epsilon; we wrote 2^n as e^(n log 2), and that's how we detected this log 2 there. Suppose instead the initial imprecision epsilon went to epsilon times n^k — a power-law separation now, instead of errors amplifying exponentially. What would happen if you went through this process? Yes: when I divide by epsilon, the epsilon goes away; when I take the log, I get k log n; and then I have k log n over n, and the limit is 0. So this definition of the Lyapunov exponent is geared to finding exponential instabilities. It doesn't mean that trajectories have to stay close to each other — they could separate sub-exponentially — but if they did that, you would detect a zero Lyapunov exponent; the Lyapunov exponent would just turn out to be 0 in those cases. It could even turn out to be negative, and we are going to see instances of that.
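[Editorial illustration, not part of the lecture.] The point about the order of limits can be made concrete for the doubling map itself. The sketch below iterates 2x mod 1 in exact rational arithmetic, so binary round-off cannot contaminate the shift; the starting point 1/3 and the imprecision epsilon = 1/(3 * 2^40) are arbitrary illustrative choices. While 2^n epsilon is still small, the running estimate (1/n) log(separation/epsilon) is exactly log 2; once the separation saturates in the bounded phase space, the estimate with this fixed epsilon decays toward 0 — which is why epsilon must go to 0 before n goes to infinity.

```python
from fractions import Fraction
import math

def doubling(x):
    # Bernoulli shift (doubling map): x -> 2x mod 1, in exact rational arithmetic
    return (2 * x) % 1

x = Fraction(1, 3)                # a reference orbit
eps = Fraction(1, 3 * 2**40)      # a small, fixed initial imprecision epsilon
y = x + eps

est = {}
for n in range(1, 121):
    x, y = doubling(x), doubling(y)
    sep = abs(y - x)                  # separation after n steps
    est[n] = math.log(sep / eps) / n  # running estimate (1/n) log(sep/epsilon)

# While 2^n * eps << 1 the estimate equals log 2; after the separation
# saturates (bounded phase space), the fixed-epsilon estimate decays to 0.
print(est[30], est[120])
```

The design choice here is deliberate: floating-point doubles are binary, so the doubling map would shift any round-off error straight into the leading digits; exact fractions keep the demonstration honest.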
We will see the physical interpretation of what happens when it becomes negative, but you can actually guess: suppose I find lambda to be negative — what would you conclude? Yes, absolutely right: I'd conclude that things are converging exponentially rapidly, either falling into some periodic orbit or into some fixed point which is stable. Exactly. So I'd immediately detect stability if I had negative Lyapunov exponents; but if I have a positive Lyapunov exponent, it certainly implies chaotic behavior. We will define chaos carefully, but now the time has come for me to say what the definition is, and I will repeat this again: we want a finite phase space; we want exponential sensitivity to initial conditions, in the sense of one or more positive Lyapunov exponents; and finally, we want a dense set of unstable periodic orbits buried in this phase space — that's what throws things aside on both sides, exactly like a separatrix does. I will repeat this statement again, because we are over for today.
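[Editorial illustration, not part of the lecture.] The negative-exponent case can also be checked with the same time average. As an illustrative choice (again not the lecture's example), take the logistic map at r = 2.5: the orbit falls into the stable fixed point x* = 1 - 1/r = 0.6, where f'(x*) = r(1 - 2x*) = -0.5, so one expects lambda = log(1/2) = -log 2 < 0.

```python
import math

# Logistic map f(x) = r x (1 - x) at r = 2.5: every generic orbit converges
# to the stable fixed point x* = 0.6, where |f'(x*)| = 0.5 < 1.
r = 2.5
x = 0.2
for _ in range(1_000):            # transient: let the orbit settle
    x = r * x * (1.0 - x)

n = 10_000
total = 0.0
for _ in range(n):
    total += math.log(abs(r - 2.0 * r * x))   # log |f'(x_j)|
    x = r * x * (1.0 - x)

lam = total / n
print(lam)   # negative: exponential convergence onto the fixed point
```

A negative value here is the numerical signature of the stability just discussed: nearby trajectories collapse onto the fixed point at the exponential rate |lambda|.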