Before we begin today, let me answer a question that was raised, which had to do with sampling and ergodicity. The statement made was that if you have a time average, you could in principle convert it to an ensemble average, provided you have ergodicity in the system. This is used in practice all the time: in sampling, in experiments, in physical practice. Let me give an analogy. Suppose we have a long roll of wire, homogeneous and uniform everywhere, and we want to measure the resistance of 1 meter of this wire. There are two ways of doing this. One would be to say: I have available to me a single piece, a meter long, and I measure its resistance over and over again. Each time I make this measurement under identical conditions, I get a slightly different answer due to various fluctuations, and I take the arithmetic average of all these measurements. That would be the time average of the resistance, and it gives me a certain number. We are assuming here that the wire does not change its characteristics because of these measurements: it does not age, it does not get heated up, and so on. The other way to get an accurate or average value for a 1 meter length would be to take the long spool of wire, cut it into meter-long pieces, measure each one of those pieces, and ask what the average value is. This would be an ensemble average. An ensemble just means a collection: the whole set of these identical copies, and it is again assumed that the pieces are all identical to each other. That gives me one average value. But it may not always be practical to do this.
You may have available just 1 meter and no more, in which case, to get an accurate value, you would repeatedly measure the same specimen under hopefully identical conditions, and then assume that the time average you get by measuring the resistance of a single 1 meter piece over and over again is the same as the ensemble average. This is an illustration of ergodicity, in the sense that you have replaced a time average by an ensemble average. Now, what is implied in all this? What is implied is that there are certain sources of fluctuations which lead to a slightly different answer each time you repeat the measurement. Some random processes are operating, internal or external, we do not care, such that there are fluctuations in the answer. The assumption is that, as time goes along, the system runs through all the realizations of the random process causing the fluctuations, the same realizations which would otherwise show up as the small differences between different samples. In other words, whatever fluctuations are going on in time in one sample are already captured, at a single instant of time, across the different copies or members of the ensemble, which is why you can equate the two. And in dynamical systems, as you can see, when a system is ergodic it means that instead of computing a long time average over a trajectory, I compute, at a given instant of time, a phase space average over phase space with some given measure. That is again the same ergodicity; precisely the same thing is being done in both places. The hope is that, under suitable conditions, the random process or whatever else determines the fluctuations is such that a time average can be replaced by an ensemble average. This is at the root of all ergodicity.
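As a toy numerical illustration of the wire analogy (all numbers here are invented for the sketch), one can compare the two averages directly: repeated measurements of one specimen versus single measurements of many identical specimens. For a stationary fluctuation process the two converge to the same value.

```python
import numpy as np

rng = np.random.default_rng(42)
true_resistance = 1.70   # ohms per metre of wire; an invented value
noise = 0.05             # size of the measurement fluctuations, also invented

# "Time average": one specimen, measured 10000 times under identical conditions
time_avg = np.mean(true_resistance + noise * rng.standard_normal(10_000))

# "Ensemble average": 10000 identical specimens, each measured once
ensemble_avg = np.mean(true_resistance + noise * rng.standard_normal(10_000))

print(time_avg, ensemble_avg)   # both approach 1.70
```

For the two averages to agree, the fluctuations must be stationary and the specimen must not age, which are exactly the assumptions stated above.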
So the system is ergodic, and therefore you can replace one average by the other? Yes, absolutely. Is there any way to find out whether a system is ergodic without actually evaluating the dynamics? This is a good question, a deep question. How do we know a given system is ergodic? First of all, you have to specify the system much more precisely. Given a dynamical system, given the rules of evolution in our context, we can certainly test whether it is ergodic or not: we can run a typical trajectory through, or we can use other criteria for ergodicity. If we know something in detail about the dynamics, for instance whether it is expanding in one direction and contracting in another, we may even be able to prove ergodicity in the rigorous sense of the word. But to come to the practical question: I am assuming we have a dynamical system for which the equations are given to you. Without an explicit solution, yes, you can still do it. You do not need to be able to solve the system completely in order to show that it is ergodic. In fact, in most cases I cannot solve things explicitly, so that is not the obstacle. Proving ergodicity for a complicated dynamical system is not a trivial matter, but it can be done in principle in most cases. The next question is: how do I actually do this? It is going to involve something called coarse graining in phase space, and then looking at what happens as the system visits different parts of phase space. I will talk a little more about it a little later.
So, it involves other criteria, other quantifiers of ergodic behavior, which we have not yet considered. For example: I break up the phase space into sufficiently small cells and keep track of where the representative point is. I can do that numerically, by iterating the equations of motion, without solving them in closed form. Then, depending on the statistics of how the different parts of phase space are visited and filled up, I can decide whether the system is ergodic or not. I can also look at the statistics of recurrences to the various cells, and at how these statistics change as a function of the cell size; that gives another indication of whether the system is ergodic or not. So there are quantifiers for ergodicity. On the other hand, for a purely experimental question such as the resistance of a piece of wire, in practice it is clear that you do not have a very large sample; you certainly do not have two kilometers of wire to play with. If you did, the ideal thing to do would be to cut it into one meter pieces, measure the resistance of each one, and take the arithmetic average. Since you do not have that, you take it as an article of faith that whatever fluctuations occur in the other portions are all present in a given sample over a period of time, given enough time. Therefore you use just one sample, but you repeatedly make measurements in order to find an average value, and the hope is that the two averages are exactly the same thing if the system is ergodic. So this is the point. In fact, when you do statistical mechanics or thermodynamics, you are using ergodicity essentially, because what is going on is that you put a macroscopic system under given experimental conditions, for instance in thermal equilibrium in contact with a heat bath.
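The coarse-graining test just described can be sketched numerically. As an illustration only, the code below uses the logistic map at fully developed chaos (which reappears later in the lecture) as the dynamical system: partition the interval into cells and record the fraction of time a single trajectory spends in each cell. An ergodic trajectory visits every cell with a stable frequency.

```python
import numpy as np

def cell_visit_fractions(f, x0, n_cells=50, n_iter=200_000, n_transient=1_000):
    """Coarse-grain [0, 1] into cells; return the visit frequency of each cell."""
    counts = np.zeros(n_cells)
    x = x0
    for _ in range(n_transient):   # discard the initial transient
        x = f(x)
    for _ in range(n_iter):
        counts[min(int(x * n_cells), n_cells - 1)] += 1
        x = f(x)
    return counts / n_iter

logistic = lambda x: 4.0 * x * (1.0 - x)
fracs = cell_visit_fractions(logistic, 0.2345)   # a typical initial point
print((fracs > 0).all())   # every cell is visited, as ergodicity requires
```

One can refine this by checking that the visit fractions stabilize as n_iter grows and by repeating with different cell sizes, which is the recurrence-statistics idea mentioned above.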
And then you assume that the system is ergodic. In other words, a statistical average over a given ensemble with a certain probability distribution is enough to give you the long time averages of the system. You cannot follow the trajectories of a complicated set of interacting particles in time, but you assume that whatever information you would have got by doing the time average is already there when you do the ensemble average. Then it only remains to write down the correct measure, the correct probability distribution, and that is the task of equilibrium statistical mechanics. So once again you do precisely the same thing. Now, why are you able to do this? Again, you do not have an infinite number of copies of an ideal gas in a container. When you derive the rules of statistical mechanics you pretend you do, and you take averages over an infinite number of copies; but a single given system, the gas in this room for example, runs through all the realizations of the random processes involved, given enough time. That is the whole point. For instance, in different samples the molecules would be at different positions at the same instant of time, and the assumption is that, given enough time, the molecules of a single specimen would run through all those positions. So you certainly have to average for a long enough time that you have a satisfactory long time average, which can then be compared with the ensemble average. Since the equilibration time in systems like this is very, very short, this is actually true in most cases in practice. But it need not always be true: if the relaxation times in the system are extremely long, then this is no longer the case.
If the system is extremely sluggish, and the time scales for equilibration, for the running through of all the realizations, are much larger than the time scale on which you make measurements, then the system can get stuck in one or two preferred configurations, and the time average is no longer a true average. This happens in many systems; it is called glassy dynamics. It happens when you have extremely sluggish systems, systems with what are called very complicated or rugged energy landscapes: you do not have clear free energy minima, but local minima on all scales, and the system can get trapped somewhere and take a long time to get out. Then, of course, it is not easy to take averages in such situations. These situations do occur everywhere in physics, but for us, in the study of dynamical systems, the equations themselves specify everything: that is all the information you have, and everything has to be derived from there. So let us go back now and consider what one-dimensional maps had to tell us. We looked at the logistic map at fully developed chaos, and we discovered that it had an invariant measure, an invariant density, which was non-trivial and had an inverse-square-root shape. For the logistic map at mu equal to 4, the map was f of x equal to 4 x times (1 minus x); we had fully developed chaos; the Lyapunov exponent lambda was equal to log 2, the same as for the Bernoulli map; and the invariant density was rho of x equal to 1 over (pi square root of x times (1 minus x)). Once you have this, it is not hard to compute various physical quantities, because the average value of any function of x is simply its weighted average with this invariant density, and that is guaranteed to equal the long time average.
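Both statements, lambda equal to log 2 and the inverse-square-root density, are easy to check numerically for the logistic map at mu equal to 4. A minimal sketch: run one long trajectory, average log of the slope along it for the Lyapunov exponent, and compare low moments of the orbit with those of the analytic density (its mean is 1/2 and its variance is 1/8).

```python
import numpy as np

def logistic(x):
    return 4.0 * x * (1.0 - x)

x = 0.2345                       # a typical (non-periodic) initial point
for _ in range(1_000):           # discard transients
    x = logistic(x)

n = 200_000
lyap_sum = 0.0
samples = np.empty(n)
for i in range(n):
    samples[i] = x
    lyap_sum += np.log(abs(4.0 - 8.0 * x))   # log |f'(x)|, with f'(x) = 4 - 8x
    x = logistic(x)

lam = lyap_sum / n
print(lam)                        # close to log 2, about 0.6931
# moments of rho(x) = 1/(pi sqrt(x(1-x))): mean 1/2, variance 1/8
print(samples.mean(), samples.var())
```

The histogram of `samples` likewise reproduces the U-shaped density, piling up near 0 and 1.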
And of course you know that, apart from a set of points of measure zero, all points in the unit interval lie on chaotic orbits; these orbits wander back and forth without settling down anywhere, and they fill up the interval according to this density. Now, there are certain universal features of this map which are common to all one-hump maps of the unit interval. We will talk about some of these, but there is another phenomenon I would like to talk about today, and that is the phenomenon of intermittency. Intermittency comes in many varieties. Very roughly speaking, it is the phenomenon by which a chaotic system displays apparently periodic behavior for long intervals of time, instead of being fully chaotic, followed by bursts of chaos, which are in turn followed by stretches of laminarity, that is, regular or approximately periodic behavior of some kind. So if you look at the time series of a variable such as x, in an intermittent situation it would not show the truly chaotic zigzag motion throughout; it would show long stretches of nearly periodic behavior, and then, all of a sudden, chaotic behavior once again. Now, how does this phenomenon arise? There are several routes to intermittency, as I pointed out, but the simplest of them is the following. Let me draw pictures and show you what happens. Suppose you had a one-dimensional map; here is the bisector, and this is the map function. It is easy to see that in this case you have a stable fixed point here, and an unstable fixed point here, where the slope is greater than 1. If you start with points in between, they end up at the stable fixed point. Now suppose you varied a parameter such that the map function moves up as the parameter varies, and there comes a stage when it has a tangency with the bisector at that place.
So the slope is 1 at the point of tangency, and as you vary the parameter further, the curve moves off and detaches from the bisector. There may be other portions to this map function, but locally, in the neighborhood of the point of tangency, suppose it looks like this as you tune a parameter. Here is an example right away: consider xn plus 1 equal to mu plus xn plus xn squared near the origin. I have shifted the point of tangency to the origin. Clearly, if mu is 0, the map is xn plus xn squared: there is a point of tangency with the bisector, with slope plus 1, after which the curve takes off like xn squared. So this picture corresponds to mu equal to 0; this one to positive values of mu, where the curve has moved off the bisector completely; and this one to negative values of mu. As you cross mu equal to 0 from left to right, the picture goes from here to there. For negative mu there is no problem: this point here is a stable fixed point, and things get attracted to it. At mu equal to 0 the slope at the fixed point is exactly 1, so the fixed point is marginal: as you come in from this side, things flow into the point, but if you start on the other side, things flow out.
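We can put a number on how long an orbit lingers near the tangency. The sketch below iterates the map above for small positive mu, counting the steps needed to traverse the narrow channel between the curve and the bisector; the entry and exit points are arbitrary markers chosen for the illustration. The count grows like 1 over the square root of mu, which is the scaling estimated in the lecture.

```python
import numpy as np

def channel_steps(mu, x_in=-0.5, x_out=0.5):
    # iterate x_{n+1} = mu + x_n + x_n^2 from x_in until the orbit passes x_out;
    # the end points are arbitrary markers of the channel for this sketch
    x, n = x_in, 0
    while x < x_out:
        x = mu + x + x * x
        n += 1
    return n

for mu in (1e-2, 1e-4, 1e-6):
    n = channel_steps(mu)
    print(mu, n, n * np.sqrt(mu))   # the product tends to a constant near pi
```

As mu shrinks by a factor of 100, the passage time grows by roughly a factor of 10, so the product n times root mu settles down, in agreement with the continuum estimate.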
In this situation, for small positive mu, if I start here I take the staircase route: I go to the function, come back to the bisector, go up again, and so on. As you can see, if the curve is infinitesimally close to the bisector, there is a narrow tunnel region, and the system takes a long time to get out of this tunnel. Eventually it does, and it performs chaotic motion elsewhere; then it gets trapped in this region again for a long stretch of time, during which there is approximately regular behavior: it is not really going anywhere, it is stuck in the tunnel, and the moment it clears the tunnel it goes off, wanders around, and comes back. We can even estimate how long it takes to cross this tunnel region. This is type I intermittency, the simplest type of intermittency, and we can easily estimate the crossing time as a function of the parameter mu, which is supposed to be infinitesimally small here: mu just a little bigger than 0, while this picture was mu equal to 0 and that one was mu less than 0. So look at the map: it says xn plus 1 equal to mu plus xn plus xn squared, so it is clear that xn plus 1 minus xn equals mu plus xn squared. In this region the dynamics is essentially differential dynamics, because the map is making ever so small steps, and I can replace the difference equation by a differential equation in time: the left-hand side is just the first derivative, so dx over dt is approximately mu plus x squared in this region, which I have drawn in a very exaggerated way. The solution is obvious: it says (1 over square root of mu) tan inverse (x over square root of mu) equal to t, assuming that I start near 0 at t equal to 0 and move out. That immediately says x is like square root of mu times tan (t root mu). In other words, to reach a point x to the right of the origin you need a time related to x by this relation, and the expression blows up when t root mu hits pi over 2. So essentially, to cross the tunnel region, the crossing time T is of the order of 1 over square root of mu; that is the time it takes to get out of the tunnel, in an order-of-magnitude way. That is the reason why, if mu is infinitesimal and you get closer and closer to tangency, it takes longer and longer: the laminar stretches in the chaotic time series become longer and longer, and it looks as if the system is not chaotic at all. In reality it is, except that for long intervals of time you see essentially periodic or regular behavior. This kind of thing is seen in experiments in a variety of situations: in fluid dynamics it appears all the time, and there are many other areas, semiconductor physics, chemical reactions and so on, where different types of intermittency have been seen. The reason this is called type I, let me mention very briefly, and perhaps we will come back to it a little later, is that here the slope at the fixed point becomes 1, giving a marginal fixed point. In maps of more than one dimension, marginality appears not when a slope hits 1, but when eigenvalues of the local Jacobian matrix cross the unit circle in the eigenvalue plane. There are three ways in which eigenvalues can cross the unit circle. One of them is to cross at the value 1, which is what happens here in a one-dimensional map. They could also cross at minus 1, and then you have what is called type III intermittency, which perhaps I will come back to later. Or you could have a pair of eigenvalues crossing at complex conjugate points, which can only happen in two- or higher-dimensional maps, and then you have what is called type II
intermittency. We will try to come back to this when we do higher-dimensional maps. Right now, in one-dimensional maps, an eigenvalue, the slope, crosses the value 1, and this is type I intermittency. What you need to know is that this phenomenon is very common, it is part of chaotic dynamics, and the travel time through the tunnel region scales with the parameter mu like 1 over square root of mu. Now let us try to study this in a little more detail, in a map which we can actually solve, and see the effect of a marginal fixed point. But first, a question about the case mu less than 0: there, there is no chaos, everything gets attracted to the stable fixed point. Yes: the time I evaluated was for mu greater than 0. When mu is less than 0, the trajectory just gets stuck at the fixed point, and that is the end of it. It is not of great interest, because I want to know how the intermittent stretches scale with the parameter in a chaotic situation; in the stable region everything simply falls in. And note that the integral is not the same for negative mu: dx over (mu plus x squared) is no longer (1 over root mu) tan inverse (x over root mu) when mu is negative; it becomes logarithms instead. Of course, the tan inverse function and the log function are essentially the same by analytic continuation, but the interpretations are altogether different. Okay, let us look at a map exhibiting intermittency. Let us call it the cusp map. Again this is a map of an interval; this time, for convenience, I will take the interval from minus 1 to 1, and the map is xn plus 1 equal to f of xn equal to 1 minus twice the square root of the modulus of xn, with x0 an element of this interval. Another question: between mu equal to 3 and 4, the
logistic map has regions of values of mu where intermittency is displayed once again. But the tangency is not in the map function itself, which does not have the required slope; you will easily recognize that iterates of the map can have this behavior. So, a short digression: the logistic map itself, at a value of mu less than 4, might look like this, while iterates of the map would start looking like that, so there is plenty of opportunity for regions of that kind, near-tangencies with the bisector, to be set up, which can then lead to intermittency. It is not just the map function that determines the dynamics, but all its iterates as well. That is why the logistic map does exhibit intermittency in between, though not at mu equal to 4: at 4 it is fully developed chaos, not intermittent. Pardon me: between 1 plus root 8 and another, numerically determinable value short of 4, the map has a stable period-3 cycle, so in that window it is actually periodic, not chaotic at all. The chaos disappears and then it comes back. You could ask how this happens; we will talk about it numerically, and let me eventually show you the exact bifurcation diagram for the logistic map. This has been well studied: at certain parameter values there can be collisions between the chaotic attractor and unstable fixed points, and these lead to things called crises. There are different kinds of crises, boundary crises, interior crises and so on, and they lead to sudden changes in the nature of the attractor; that is what happens in the logistic map. And yes, when I say fully developed chaos, what I mean is the following: the entire unit interval is the attractor. That is not true unless mu is equal to 4, because the map is not onto unless mu is 4: it does not map
the unit interval onto the full unit interval, but rather onto a smaller interval, from 0 to mu over 4; unless mu is 4, you do not hit the full interval. So let us look at this cusp map and see what happens. Yes, another question: the system gets stuck, so there is stickiness, if you like. There are many dynamical systems of this kind, including some Hamiltonian systems, where you do not quite have this phenomenon but you have stickiness of some sort, and we will talk a little bit about that too. Here you see the mechanism by which the orbit gets stuck: when mu is small, it can get stuck for very, very long periods of time, but there is no doubt that it will escape eventually, and then the system becomes chaotic again. So chaotic bursts occur in the middle of laminar stretches and vice versa; that is really what intermittency is. Let us look at this map; it will fix many ideas about intermittency. It is a solvable map, in the sense that you can actually write things down explicitly. Let us first draw it. The map is from minus 1 to 1, so draw a square from (minus 1, minus 1) to (1, 1); the axes are xn and f of xn, that is, xn plus 1 as a function of xn, with the origin here and the bisector once again. From the definition, f of xn is equal to 1 minus twice the square root of minus xn for xn negative, and 1 minus twice the square root of xn for xn positive. Differentiating, the slope is f prime of x equal to 1 over the square root of minus x for x less than 0, and f prime of x equal to minus 1 over the square root of x for x greater than 0. So the slope diverges at the origin
from both sides: there is a cusp there. The slope at x equal to minus 1 is plus 1, and at x equal to plus 1 it is minus 1. So it is immediately evident that the graph rises with infinite slope at the origin, is symmetric, and falls back in this fashion; the slope here at minus 1 is exactly 1, and for any point x greater than minus 1 it increases in magnitude. So this is the region of the marginal fixed point, where the orbit can get stuck for a long period of time: it does this staircase motion, eventually climbs out, gets re-injected, and so on. By the way, the interior fixed point is unstable; the magnitude of the slope there is greater than 1, as is easy to check. Now, what would the iterates of this map look like? They would be even steeper: the first iterate, for example, would go up like this and come back. The slope at minus 1 would always remain 1 for the iterated maps, so the marginal fixed point persists, but all the periodic points are unstable, and the map is actually chaotic. You have the effect of this marginally unstable fixed point, and therefore you have the phenomenon of intermittency. In this case we can write down the Frobenius-Perron equation, so let us do that: we need the preimages, and then we compute the invariant measure. Would you like me to do it, or shall I take it that you will do it? All you have to do is write the Frobenius-Perron equation: rho of x equals the integral from minus 1 to 1 of dy times delta of (x minus f of y) times rho of y. Split it: the integral from minus 1 to 0 of dy, delta of (x minus 1 plus twice the square root of minus y), rho of y, plus the integral from 0 to 1 of dy, delta of (x minus 1 plus twice the square root of y), rho of y. Now convert these to delta functions in y: find the zeros of the arguments, divide by the magnitude of the slope at each, and you get
a functional equation for rho of x. This functional equation is fairly complicated, and the question is whether it has a solution or not; again, that is non-trivial. But the solution has been found, and the exact answer, properly normalized, is rho of x equal to (1 minus x) over 2. This is quite remarkable, because it is an explicit, linear function of x, and looks quite simple. If you sketch it, rho of x versus x from minus 1 to 1: at minus 1 it equals 1, at plus 1 it equals 0, and at 0 it is a half; it is simply a straight line. That is what the invariant measure looks like. Now, in heuristic, crude terms: the map is symmetric about the origin, so why do you think the invariant density is piled up on the left rather than on the right, rather than being spread out uniformly? It is the marginal fixed point that is doing it. It is marginally unstable, and because of the intermittency, because the system spends long stretches of time near it, remember ergodicity: the measure in any region is proportional to the fraction of time a typical trajectory spends in that region, and it is clear that a typical chaotic trajectory spends a lot of time near the marginal fixed point. That is what is reflected here. But unlike the logistic map, where the invariant density was actually unbounded at the ends with very little in the middle, here it is quite bounded: it is linear, goes down in this fashion, and the area under the curve is 1. I might add that the stickiness would by itself be sufficient to prevent you from having an invariant density of this kind: things would get stuck at the fixed point, and the only normalizable solution would be a delta function there, unless you had this infinity here.
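The exact density rho of x equal to (1 minus x) over 2 stated above can be checked against a long trajectory. Rather than building the full histogram, the sketch below does a quicker check on two statistics of the orbit: under this density the mean of x is the integral of x(1 minus x)/2 over [minus 1, 1], which is minus 1/3, and the fraction of time spent at negative x is 3/4. The slow build-up of these numbers is itself a symptom of the intermittency.

```python
import numpy as np

def cusp(x):
    # cusp map on [-1, 1]: f(x) = 1 - 2*sqrt(|x|)
    return 1.0 - 2.0 * np.sqrt(abs(x))

x = 0.3123                      # a typical initial point
for _ in range(1_000):          # discard transients
    x = cusp(x)

n = 500_000
samples = np.empty(n)
for i in range(n):
    samples[i] = x
    x = cusp(x)

# predictions from rho(x) = (1 - x)/2 on [-1, 1]:
#   mean of x is -1/3, and P(x < 0) = 3/4
print(samples.mean(), np.mean(samples < 0.0))
```

A histogram of `samples` with many bins reproduces the straight line from 1 at x equal to minus 1 down to 0 at x equal to plus 1, though it converges slowly for the reason just mentioned.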
For reasons I will not go into here, you need to have some point in the map, other than the fixed point, where the slope actually becomes infinite, infinitely sharp; then, under suitable conditions, you can have an invariant measure of this kind. What I would like you to appreciate, although I am not proving it, is that the behavior at the marginal fixed point and the infinite slope at the cusp are related to each other. Suppose, instead of the cusp, you rounded it off: you came along like this and did something smooth there. Then, for one thing, it is not an onto map, and I would like to have an onto map, minus 1 to 1 mapped onto minus 1 to 1. But more to the point, you need not then have an invariant measure of this kind at all: if you took a map like that, there is no guarantee of a normalizable invariant density; you could just end up with a Dirac measure, a delta function at the fixed point, and nothing more. Yes, the behavior will change, the invariant measure will change: there is no normalizable, non-negative solution of this kind at all, and the system gets stuck. What would happen in that case is that, instead of chaos, the map's Lyapunov exponent would drop to 0, because the effect of the stickiness is so strong that it prevents the chaos from happening. You need some place with unbounded slope in order for the map to actually be chaotic. So the statement is: there is no normalizable invariant measure which would do the job in that case. You need this density to be normalizable; it should not have a non-integrable singularity. Suppose, for argument's sake, the density went like 1 over (x plus 1): what would happen then? You cannot normalize that;
you cannot integrate it from minus 1 upwards, because it is not an integrable singularity at all. So this can happen. And you cannot decouple the two regions, because anything that comes here is bound to also go there sooner or later. We are now talking about the entire dynamics, including reinjection, not a single passage: once the trajectory gets reinjected near the marginal point, the differential approximation I made earlier is a reasonably good approximation, provided reinjection happens in a finite amount of time. The map is actually exploring the entire phase space; it is not that trajectories explore only the neighborhood of the marginal point, in which case the behavior would be trivially determinable. You need to know what is happening everywhere else: what sort of collection region you have, what sort of reinjection you have, and so on; all of them play a role. And the statement I am making, without proof, is that in order to have a normalizable density like this, you need the slope to become unbounded at some other point. I urge you to verify that this density is indeed a solution of the Frobenius-Perron equation: first convert it to a functional equation, and then verify that this is a solution. As I said, if you have a non-negative, normalizable solution, you are guaranteed by certain theorems that the solution is unique. Now, where does that get us? We have the invariant measure; we need to know whether the map is chaotic or not. So the first thing to do is to find the Lyapunov exponent. For this map, lambda is the integral from minus 1 to 1 of dx times (1 minus x) over 2 times the log of the modulus of f prime of x. The modulus of the slope is 1 over the square root of the modulus of x, because the slope was 1 over the square root of minus x for x negative, and minus 1 over the square root of x for x positive
In either case, once I take the modulus, the integrand involves just |x| to the power -1/2, so log |f'(x)| = -(1/2) log |x|. Now, (1 - x)/2 has an even part and an odd part; the odd part, multiplied by the even function log |x|, integrates to zero, so that portion goes away, and you're left with just the integral of -(1/4) log |x|. It's easy to do, and I urge you to do it: the answer will turn out to be 1/2. Notice that log |x| is negative in the region of integration, so it cancels the minus sign, and you end up with 1/2. This is certainly greater than 0, which implies chaos, but it's intermittent chaos. In fact, finding this invariant density numerically is non-trivial. You do it by taking a long time series and drawing a histogram, and the histogram takes a long time to build up, so you really have to run a trajectory for a very, very long time, and you have to leave out the initial transients, which are specific to the given initial conditions; eventually you end up with this. But here we have an analytic solution: you can easily check that it is an exact solution to the problem. So we have a chaotic map which still has intermittent behavior and which is exactly solvable; it's a paradigm, a model, like the logistic map. Now, you could make this invariant density uniform, without changing any other properties of the map, by taking this portion of it and doing exactly what we did for the Bernoulli shift as opposed to the tent map. In other words, the map function would be f(x) = 1 - 2 sqrt(-x) for x negative, but 2 sqrt(x) - 1 for x positive. That's sufficient to make all the difference, because now you have a marginal fixed point here and a marginal fixed point there too, and they compete with each other, so you don't have any right to expect an invariant density of that kind. You can write down the functional equation once again, and I urge you to do this for this
map and verify that the invariant density is in fact a constant. So for this new map the invariant density is simply uniform: the area under the curve is again 1, and rho(x) is just equal to 1/2. For this anti-symmetric map the invariant density is symmetric, whereas the earlier map was symmetric but its invariant density was not. In the logistic map, and in the tent map at the parameter value where the slope is 2, the map function was symmetric about the midpoint and the invariant density was also symmetric. Will this map still show intermittency? It will, although in a crude sense the two sticky regions have now overlapped. Once again it's easier to handle than the other map, simply because the invariant density is uniform: I don't have to weight anything with a function of x. But remember, intermittency is not strict periodicity. Once the trajectory is near a marginal fixed point (I'm drawing this in an exaggerated way) there's a long staircase behavior, which is lost in the middle region; and out here there's again a long staircase behavior, lost in the intermediate region. So the motion would look very nearly periodic, but not with constant amplitude: slowly increasing, perhaps, while you're in these regions. That's what intermittency is; it's not as if the motion is strictly periodic. Any function which is monochromatic, truly periodic, would just go on forever, from minus infinity to infinity; here the periodicity definitely stops, and if you look with infinite accuracy it's certainly not periodic. There's no single periodicity. And yes, once you have an invariant density, then the fraction of the
time that a long trajectory spends in any interval (a, b) is in fact given by the integral from a to b of rho(x) dx, for a normalized invariant density. This is a fraction: if you took a long orbit and asked how much of the time it spends in a given region, that's measured directly by the integral of the invariant density over that region, by the measure of that region. That's certainly true. So this map has interesting properties, and so does the other one, but the other one has the extra feature that the map function is symmetric, whereas this one is anti-symmetric. Now, you could ask why I chose something which went like (x + 1) plus perhaps (x + 1) squared here: if you expand the map near x = -1, the map function has a leading term like (x + 1), then a term which goes like (x + 1) squared, and so on. So let's see if we can generalize that a little, and let me shift the marginal point to the origin. Suppose your intermittent map, near the origin, behaves like f(x) = x, that's the leading term, plus some constant C times x to the power 1 + alpha; let's take out an x and write it as x(1 + C x^alpha). This is typically what happens near the origin. If alpha = 0, first of all, nothing interesting happens: it's just a linear map with slope 1 + C, greater than 1. But suppose alpha is not 0 but a positive number. Generically, if you have a function which is smoothly expandable around this point, alpha would start with 1, and then higher powers follow, as in the case of these maps, which can be written in the neighborhood of this point as (x + 1) plus a term going like (x + 1) squared, and so on. But in principle you could have any positive alpha: it doesn't have to be 1, it could be greater than 1, and it could
be even less than 1. The degree of stickiness at the origin, as you can see, is measured directly by alpha, and for such maps you can actually plot the Lyapunov exponent as a function of the parameter alpha. If you plot lambda versus alpha, you start at some value which is certainly positive, because at alpha = 0 this is a map with slope 1 + C, which is unstable, and the assumption is that the map is chaotic by its behavior elsewhere. So you start with some positive Lyapunov exponent, and as alpha increases the origin gets more and more sticky, so the Lyapunov exponent drops, until at alpha = 1 it drops to 0 and the map is no longer chaotic: some kind of phase transition takes place. On the other hand, we also know that in the cusp map, for example, the Lyapunov exponent is nonzero and positive, and that happens because of the infinite slope elsewhere: you get rid of the stickiness of this point by having that sharp spike somewhere else in the map. So those maps don't fall within the purview of this general statement. This is a technical aside, and I don't want to get too deep into it, but let me go back. You could also ask: can I construct maps of this kind where, instead of a square-root cusp up there and a square-root behavior in the map function, I have x to the power one-third, or cube roots, or any other power? The answer is yes, you can construct whole families of these maps, and whatever exponent alpha you have here, you need, roughly, a corresponding 1/alpha type of behavior in the slope up there; you need something which becomes unbounded. So the nature of the stickiness here can be related to the nature of the cusp elsewhere, so as to end up with a finite, nonzero Lyapunov exponent. In fact, you could construct an infinite family of such maps for which you have
invariant densities which are all sorts of prescribed functions of x, not necessarily linear ones. This is the inverse Frobenius-Perron problem: if you give me a smooth function as an invariant density, can I construct a map of which this is the invariant density? That problem can be solved, modulo certain qualifications, and it turns out that for a huge variety of such functions rho(x) you can actually tailor-make a map whose invariant density is the given function. This is actually done, and it turns out not to be too difficult a problem; yes, it works because of ergodicity, and what you find is the map function. It is a non-trivial problem, but it is doable: after all, the Frobenius-Perron equation is an eigenvalue problem, so in some sense you are being given the eigenvector and asked to go back and construct the kernel. The whole thing is true only for certain classes of chaotic systems, absolutely, and for one-dimensional maps, so things are very much restricted to one dimension. I do not have a direct statement about how this generalizes to higher dimensions; I am not too sure. But for one-dimensional chaotic maps a great deal is known, very much indeed; and of course we are dealing here only with chaotic systems. Now let me go on (we will come back a little later to understanding what coarse-graining in phase space is, and so on) to two-dimensional maps. Such maps exist too, and let me give you in particular an example of a very simple two-dimensional map which is invertible and yet exhibits chaos. This is in fact used as a model for Hamiltonian systems, where, as you know, things are conservative. So we will look at an area-preserving map where you have chaotic behavior: unlike the Bernoulli map or the logistic map and so on, which do not model conservative systems, now we are going to talk
about a map which models a conservative system and yet exhibits chaos. The map is called the baker's transformation, or the baker's map, because it is supposed to mimic the way in which bakers make bread from dough ("baker" is not a proper name here, it is a common noun): take the dough, stretch it, fold it back, and keep folding. This mixing is what eventually produces chaotic behavior; we saw it in the Bernoulli shift and the tent map, where you stretched and folded in one dimension. Now let's do it in two dimensions. You have two variables, xn and yn, at time n, and each of them runs between 0 and 1. So take the unit square, with xn along one side and yn along the other, and do the following manipulation on it: stretch the square by a factor of 2 in the x direction and simultaneously compress it in the y direction by a factor of 2, so that at the next stage it becomes a rectangle of width 2 and height 1/2. Then cut the right half, exactly as we did in the Bernoulli shift, and put it on top, so that everything goes back onto the unit square; that gives you (x_{n+1}, y_{n+1}). Just to see what we have done, take the square and shade one half, so that we keep track of where it goes: when I stretch, the shaded region moves out here, and when I cut and put it back, the shaded region has gone up there. Every point of the square has been mapped onto some other point of the square. It's also clear that the area has been preserved: you haven't done anything at all except stretch one side by a factor of 2 and compress the other by a factor of 2, and you put the square back onto the square. But you have mixed things up: a point here will go somewhere else, a point there will go
somewhere else and so on in this map and what is the map function look like well it says xn plus 1 equal to 2 xn modulo 1 because you're going to extract and then you're going to put things back yn plus 1 is equal to half yn you brought it down provided xn was between 0 and a half otherwise you added a half to it so you compressed it but then you cut and pasted it back so you actually added a half to it right so this was equal to this so provided xn is less than a half but it was equal to half plus half yn xn was greater than a half and the whole thing is modulo 1 everything is modulo 1 the map is invertible it's definitely invertible because I can tell you precisely we where each point came from there's no 2 to 1 business here every point has a unique pre-image if I started at some point here I stretched so it went to double this and it came to half its height on this point if I started at some point here then when I stretched it came down somewhere here and then I cut it and put it back so it went up somewhere here so certainly you can identify the pre-image of every point and the area is preserved so it mimics a conservative system on the other hand there are two Lyapunov exponents one corresponding to the stretching or contracting in the x direction and the other in the y direction so you have two Lyapunov exponents and let's call them lambda sub x and lambda sub y and these are easy to write down by inspection what would these be what would be the stretch factor in the x direction it's the original Bernoulli shift so this is log 2 and what would lambda y be log a half absolutely right and it's clear that the area must remain the same because after all under this map what's happening is that dx dy is going to dx prime dy prime this is the area element if you like and this is supposed to go like any area element expands so it goes like e to the power lambda x plus lambda y t dx dy that's the whole point about the Lyapunov exponent and it doesn't change so this 
cancels that: lambda_x + lambda_y = 0, and the map is area-preserving. But it's definitely chaotic, and things certainly get mixed up. What happens to the next iterate? Iterate once more, and it's not hard to see that the shaded region gets scrambled up a bit, into thinner strips; iterate again, and it gets even more striated, and so on, so that if you take a cross-section you end up with a sort of fractal structure. You are certainly mixing up the entire square, and this pulling, cutting, and putting back, over and over, is the baker's transformation; the fact that it has one positive Lyapunov exponent is enough to show that it has chaos. But now, I pointed out that this map is invertible, so you should really be able to recover where you came from; you should not be losing any information. Is that really true or not? In the Bernoulli shift, the way to understand the map was to write x0 = 0.a0 a1 a2 ..., in which case x1 = 0.a1 a2 a3 ...: you multiplied by 2 and threw away the integer part, which means you got rid of the knowledge of a0. What would you do for the baker's transformation? One of those digits goes into the y coordinate, and there's a clever way of seeing this, with which I'll stop. Suppose x0 = 0.a0 a1 a2 ... as before, and suppose the initial y0 = 0.b0 b1 b2 .... Then represent the pair (x0, y0) in the following strange way: put a binary point, write a0 a1 a2 ... to its right, and write the digits of y0 backwards to its left, as ... b2 b1 b0. So the pair (x0, y0) is represented by the doubly infinite string ... b2 b1 b0 . a0 a1 a2 .... In the next step the map does precisely what it did earlier, namely shift: (x1, y1) is represented by ... b2 b1 b0 a0 . a1 a2 a3 .... So if (x0, y0) is represented by this string, then (x1, y1) is represented by the shifted one: the digit a0, which was sitting to the right of the binary point, has moved
over to the other side. So you haven't lost anything; in other words, you can tell precisely where you came from, and this in fact establishes that the map is invertible. You're not losing anything at all, and yet you have chaos. So it's important to remember that chaos does not always imply shrinkage of volume and loss of information: you still have exponential sensitivity to initial conditions, and yet you can have something which is invertible in this sense, completely solvable if you like, and which still displays exponential sensitivity. I'll stop here this time, and next time we'll take up some other aspects, such as coarse-graining.
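As a footnote (mine, not the lecture's), both claims about the baker's map, invertibility and the binary-shift picture, can be checked mechanically. The sketch below uses exact rational arithmetic (Python's Fraction), so the round-trip and digit-shift identities hold exactly; the function names are mine.

```python
from fractions import Fraction as F
import random

def baker(x, y):
    """One step of the baker's map on the unit square."""
    if x < F(1, 2):
        return (2 * x, y / 2)            # left half: stretch, compress
    return (2 * x - 1, (y + 1) / 2)      # right half: ... then paste on top

def baker_inverse(x, y):
    """Explicit inverse: every point has a unique pre-image."""
    if y < F(1, 2):
        return (x / 2, 2 * y)
    return ((x + 1) / 2, 2 * y - 1)

def bits_to_fraction(bits):
    """0.b0 b1 b2 ... as an exact rational."""
    return sum(F(b, 2 ** (i + 1)) for i, b in enumerate(bits))

random.seed(1)

# Invertibility: forward then backward is the identity, exactly.
for _ in range(1000):
    x = F(random.getrandbits(30), 2 ** 30)
    y = F(random.getrandbits(30), 2 ** 30)
    assert baker_inverse(*baker(x, y)) == (x, y)
    assert baker(*baker_inverse(x, y)) == (x, y)

# Binary bookkeeping: with x0 = 0.a0 a1 a2 ... and y0 = 0.b0 b1 b2 ...,
# one step of the map shifts the string ... b2 b1 b0 . a0 a1 a2 ...
# one place: x1 = 0.a1 a2 ..., y1 = 0.a0 b0 b1 ....
a = [random.randint(0, 1) for _ in range(20)]
b = [random.randint(0, 1) for _ in range(20)]
x1, y1 = baker(bits_to_fraction(a), bits_to_fraction(b))
assert x1 == bits_to_fraction(a[1:])
assert y1 == bits_to_fraction([a[0]] + b)
```

Note also that the two stretch factors, 2 in x and 1/2 in y, multiply to 1: that is the statement lambda_x + lambda_y = log 2 + log(1/2) = 0, area preserved even though lambda_x > 0 makes the map chaotic.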