Yeah, so we are going to move on to some generalities now. Recall that in the kind of dynamical system we have been looking at, described once again by first-order coupled differential equations of this kind, we have seen that several kinds of motion are possible: there is periodic motion, there is quasi-periodic motion, there is aperiodic motion of various kinds, and then there are all kinds of complicated windings and so on that the trajectories can do. We would like to make this a little more systematic now. We are going to do this in several stages and then lead up to the notion of deterministic chaos and what is meant by chaotic dynamics in its generality. We will do this by means of several examples, several case histories and so on, but we should first try to understand a little bit about general kinds of flows in phase space.

So from now on I am going to assume that we have a phase space, and let me represent it pictorially by this little picture here. In this phase space, if I start with some initial point x0, then as time goes on you have a phase trajectory that meanders around. One of the things that has preoccupied us is what happens if I start with a set of initial conditions in a small phase-space volume element and follow the set of trajectories that each of these points goes into as time goes along; we saw that for Hamiltonian flows this volume element doesn't change at all. The reason we are interested in this is ultimately that we'd like to know the fate of trajectories which initially start off close together. We'd like to take an initial condition and another one infinitesimally close to it, let time evolve, and see what happens to these two trajectories under the evolution. For instance, if there is a little resolution error in specifying the initial condition, what would be the future that I predict in the two cases? That's a question of direct interest, and we are going to see many, many interesting things happen.

The first of these is periodicity. What's meant by periodicity? It would simply mean that if I start with a point here, it goes through a phase trajectory of this kind and perhaps comes back after some time. If you took this entire volume element, it's possible that this volume element goes over there after some time, then it's here, and it's here, and so on, and eventually returns to its original position. This would correspond to very simple periodic motion. Remember that in this phase space, in an autonomous system, phase trajectories cannot intersect themselves. So the simplest kind of motion we have is periodic. Of course, we'd like to see what happens for different initial conditions. If, for instance, you took a harmonic oscillator, then no matter what the initial condition is, the motion is periodic: for every initial condition you have periodic motion. If you plot what the trajectories of a one-dimensional oscillator do, this is what you find: if you start with an initial condition here, it traverses this trajectory; if I started with one here, it would traverse a different trajectory; and if I started with a set of initial conditions here, you are guaranteed that this set of initial conditions after some time would find itself here, then here, and so on, and the whole thing is simply periodic. For every initial condition you have periodic motion and nothing else in this problem. We saw a slightly more complicated form of periodic behavior when we considered the example of two one-dimensional oscillators in two orthogonal directions, a two-dimensional harmonic oscillator. What happened then? We discovered that if the two oscillators are uncoupled from each other, then each of them is periodic, perhaps with different periods, but the net motion itself could be described as the motion of two angles on the surface of a two-dimensional torus.
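As an aside, the phase-plane picture just described is easy to check numerically. The sketch below (the frequency, initial conditions, and patch size are my arbitrary choices, not values from the lecture) uses the exact solution of the oscillator to verify that an entire patch of initial conditions returns to itself after one period T = 2π/ω:

```python
import math

# Harmonic oscillator x'' = -w^2 x: every initial condition gives periodic
# motion, and the phase trajectory (x, p) is an ellipse traversed with
# period T = 2*pi/w.  (Illustrative sketch; w and the patch are arbitrary.)

def evolve(x0, p0, w, t):
    """Exact time evolution of a unit-mass harmonic oscillator."""
    x = x0 * math.cos(w * t) + (p0 / w) * math.sin(w * t)
    p = p0 * math.cos(w * t) - w * x0 * math.sin(w * t)
    return x, p

w = 1.7
T = 2 * math.pi / w
patch = [(1.0 + 0.01 * i, 0.5 + 0.01 * j) for i in range(3) for j in range(3)]

# After one full period, the whole patch of initial conditions returns home.
returned = [evolve(x0, p0, w, T) for x0, p0 in patch]
max_err = max(abs(x - x0) + abs(p - p0)
              for (x0, p0), (x, p) in zip(patch, returned))
print(max_err)  # essentially zero
```

Because the evolution here is (in suitable units) a rigid rotation of the phase plane, the patch is carried around its ellipse and comes back undistorted, which is exactly the simple periodicity described above.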
On the surface of this torus, we saw immediately that a slightly more complicated possibility arises: namely, if the frequencies of the two oscillators are incommensurate, in other words if the ratio of frequencies is irrational, then the motion in phase space is never periodic but rather quasi-periodic, because when you have completed a period in one direction you don't have an integer number of periods in the other direction. What kind of trajectories do you have then? You have trajectories which wind round and round on the surface of the torus and never come back to the initial starting point; eventually, as time goes on, the torus is densely covered, and a trajectory which starts off at any arbitrary point covers the entire torus densely without ever repeating itself. What would happen to a small patch of initial conditions? This patch would move along, each point on its own trajectory, and would visit every portion of this torus and cover it completely; but at the same time no periodicity occurs, and no initial condition leads to periodic motion. You can see, though, that this motion can be decomposed into two periodic motions with frequencies omega 1 and omega 2 with an irrational frequency ratio. Such motion I would call quasi-periodic. Quasi-periodic motion is the next in complexity after plain simple periodic motion, and we examined this case in some detail when we looked at the two harmonic oscillators.

When we looked at Hamiltonian systems which were integrable, we discovered the following: if you have an n-degree-of-freedom Hamiltonian system, the phase space is 2n-dimensional, and if the system is fully integrable it means you have n functionally independent constants of the motion in involution with each other. Then the motion got reduced not to the 2n-dimensional phase space, nor to the (2n-1)-dimensional energy hypersurface, but to a smaller subset of the phase space: an n-dimensional torus. So it was a generalization of this picture: you had motion such that n action variables I1 to In were constant and n angle variables theta 1 to theta n changed linearly with time. The motion occurs on an n-dimensional torus, an n-torus, and is again in general quasi-periodic, because there is no reason why the frequencies along the different cycles of this n-torus should be the same. So integrable Hamiltonian systems, if the motion is bounded, display quasi-periodic motion. We haven't talked about unbounded motion; that's a simple, trivial case we're not going to get into right now, although it does imply that the phase space is infinite in extent. We've restricted ourselves throughout to cases where the motion is bounded, so that things don't run off to infinity. In that case, for an integrable Hamiltonian system, the motion is quasi-periodic on some n-torus, and of course if you change the values of the action variables, the constants of the motion, you get a different n-torus. For a simple harmonic oscillator in one dimension, this ellipse, or circle depending on the choice of units, is like a one-dimensional torus, and if you change the initial conditions you are on another torus; in fact these one-dimensional tori laminate, or striate, the whole of this phase plane. In exactly the same way, if you have an n-dimensional integrable Hamiltonian system, every initial condition evolves into a trajectory which lies on some n-dimensional torus in the phase space, and the entire motion is regular: no matter what the initial condition is, you are guaranteed that the motion lies on some n-dimensional torus in a suitable set of action-angle variables. So that was the next in complexity. But now we have more complicated possibilities, even in the case of a Hamiltonian system, if
it's not integrable, then all one can say is the following. You have n degrees of freedom, and therefore a 2n-dimensional phase space, and on this phase space you have a Hamiltonian function H(q, p). We are guaranteed that in an autonomous Hamiltonian system the Hamiltonian is always a constant of the motion; therefore, whatever the motion may be, H equals a constant whose numerical value is decided by the initial conditions, by where you start. Substitute the initial values of the q's and p's and that gives you the numerical value of H, which is guaranteed to remain constant for all time. Now, in such a case, if the Hamiltonian system is not integrable, in other words if you can't find n constants of the motion in involution, then in the worst-case scenario you can find just one constant of the motion, namely the Hamiltonian itself. What then is the dimensionality of the subset of phase space on which a trajectory lies? 2n minus 1, absolutely right. So in general the motion gets confined to a (2n-1)-dimensional energy hypersurface, and the question arises: what kind of motion can we talk about on this energy hypersurface? It would certainly not be quasi-periodic, and it would not be periodic; typically there are no closed trajectories. What would then happen? Well, one possibility, and this is what happens typically, is the following. Imagine this to be the (2n-1)-dimensional energy hypersurface, and you start with a little patch of initial conditions. As time goes on, this patch moves there, and perhaps after some time it moves here. Remember that these are Hamiltonian flows, so the volume cannot change: the size of this little patch stays fixed as it moves around. But the patch could in general wander all over the available space, namely the energy hypersurface, and given enough time, parts of this patch would visit arbitrarily close to every point in the space. When that happens, we say that the motion is ergodic on the energy hypersurface. So this would imply a property called ergodicity; it's a very important property.

No, let me restate the statement I made. First of all, it is applicable only to Hamiltonian systems with n degrees of freedom and a 2n-dimensional phase space. In such a system, if it's integrable, for which a necessary and sufficient condition is that you have n functionally independent constants of the motion in involution with each other, then you can go to a new set of variables, the action-angle variables, in terms of which the motion is on some n-dimensional torus and not on a (2n-1)-dimensional surface; that torus is a subset of the (2n-1)-dimensional surface, and in general the motion on it is quasi-periodic. That's the first general statement. The next statement is: suppose the Hamiltonian system is not integrable. The worst-case scenario would be one where you have no constants of the motion in involution with each other except the Hamiltonian itself; there is no reason why you should find any other constant of the motion in involution. If that happens, then the motion is on the (2n-1)-dimensional energy hypersurface, and you are guaranteed it remains there because H is a constant of the motion. Then the question is: what happens to neighboring initial conditions, to a little volume element in phase space comprising a set of initial conditions? The statement I'm making is that in general such a patch would, given enough time, visit every neighborhood of the available phase space, namely the energy hypersurface corresponding to the specific value of the energy that you've chosen. That property is called ergodicity.
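To see the energy hypersurface in action, one can integrate a standard non-integrable two-degree-of-freedom example numerically and watch H stay constant along the trajectory. The Hénon-Heiles Hamiltonian below is my choice of illustration, not one the lecture has introduced, and the step size and initial condition are arbitrary:

```python
# For a non-integrable autonomous Hamiltonian system, the one constant of the
# motion you are always guaranteed is H itself, so every trajectory stays on
# the (2n-1)-dimensional energy hypersurface H = const.  A quick numerical
# check with the Henon-Heiles Hamiltonian (a standard non-integrable example).

def H(s):
    x, y, px, py = s
    return 0.5 * (px**2 + py**2 + x**2 + y**2) + x**2 * y - y**3 / 3.0

def rhs(s):
    # Hamilton's equations: qdot = dH/dp, pdot = -dH/dq.
    x, y, px, py = s
    return (px, py, -(x + 2 * x * y), -(y + x**2 - y**2))

def rk4_step(s, h):
    add = lambda a, b, c: tuple(ai + c * bi for ai, bi in zip(a, b))
    k1 = rhs(s)
    k2 = rhs(add(s, k1, h / 2))
    k3 = rhs(add(s, k2, h / 2))
    k4 = rhs(add(s, k3, h))
    return tuple(si + h / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

s = (0.0, 0.1, 0.35, 0.0)   # arbitrary bounded initial condition
E0 = H(s)
for _ in range(20000):      # integrate to t = 200
    s = rk4_step(s, 0.01)
print(abs(H(s) - E0))       # stays tiny: the trajectory never leaves the shell
```

The trajectory itself wanders irregularly in the four-dimensional phase space, but the printed energy drift confirms it is confined to the three-dimensional surface H = E0.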
A little patch of initial conditions, a little volume element, visits every neighborhood of some subset of phase space, or maybe even all of phase space; in the case of a Hamiltonian system it has to visit all parts of the energy hypersurface, for each set of initial conditions. Such a property is called ergodicity. What does that imply? It implies that even if the motion is extremely irregular on this surface, you are guaranteed that this patch visits every neighborhood and covers the space completely. This suggests immediately that if I want to compute long-time averages of physical quantities, then instead of worrying about individual trajectories, which I may not be able to trace at all because the motion is not integrable, I may be able to convert the average over time into an average over the phase space, provided I know how often every part of phase space is visited. Those parts which are visited more frequently than others I would have to weight more, and I would have to give less weight to those parts which are visited less frequently; but ergodicity implies that every part is visited. In fact a general statement can be made for Hamiltonian systems: under suitable conditions, which one can write down and specify, any initial patch of initial conditions typically visits all of the energy hypersurface with equal probability. In other words, the invariant measure on this hypersurface is uniform: every part is visited with a frequency proportional to its volume, like a fluid element which has spread out uniformly throughout the space, and therefore the weight I have to associate with different regions of the phase space is constant. We will make this more precise very shortly. So next in complexity to quasi-periodicity comes ergodicity, and this concept can be generalized to non-Hamiltonian systems as well. We'll come back and explain what ergodicity is a little more carefully; for the present, it simply means, in heuristic terms, that a given set of initial conditions typically visits the neighborhood of every point of the phase space that it is allowed to visit, given enough time. Nothing is said as yet about how long it takes on average to visit the neighborhood of a particular spot, or what the recurrence times are; none of that is specified yet. Right now the statement is just that all of the available phase space is visited by a typical set of initial conditions, and that property is ergodicity. Clearly it is more general than quasi-periodicity, which is more general than periodicity itself. And this is in fact the rule rather than the exception, because Hamiltonian systems as a rule are not integrable: for integrability you need special symmetries, otherwise you do not find those constants in involution and you cannot integrate the Hamiltonian system. So that is the next property.

But we can go a little further. We could ask: in making this visit, does the volume element necessarily retain its shape? There is no reason why that should happen either. Even if the volume element is maintained in magnitude, it does not have to preserve its shape; it can get distorted. So in the next step it could do this, and after some time it does this, and so on. Even if the volume element is preserved in magnitude, this distortion can still occur; and if the volume element is not preserved, which is what happens in dissipative flows, then even wilder possibilities arise. But independent of that, this kind of phenomenon can happen, and in fact as time goes along there is no reason at all why a set of points which starts off close together should not start putting out little filaments; perhaps it puts out tendrils of this kind, and this is the volume element after some time, still supposed to have the same area as before, and so on.
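The practical payoff of ergodicity, trading a long-time average along one trajectory for an average over the invariant measure, can be sketched with the simplest ergodic example, an irrational rotation of the circle; the rotation number, the observable, and the starting point below are arbitrary choices of mine:

```python
import math

# Ergodicity: the time average of an observable along one typical orbit
# equals its average over the invariant measure.  The irrational rotation
# x -> (x + a) mod 1 visits every subinterval of the circle with frequency
# equal to the interval's length (uniform invariant measure).

a = (math.sqrt(5) - 1) / 2      # irrational rotation number
f = lambda x: x * x             # an arbitrary observable

x, total, N = 0.1, 0.0, 200000
for _ in range(N):
    total += f(x)
    x = (x + a) % 1.0

time_avg = total / N
space_avg = 1.0 / 3.0           # integral of x^2 over [0,1), uniform measure
print(time_avg, space_avg)      # the two averages agree
```

Note that this example is ergodic but not mixing: the rotation carries any little arc around rigidly without distorting it, which is exactly the distinction drawn below.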
What is the consequence of this? It immediately means that if you start out with two initial conditions infinitesimally close to each other, it is entirely possible, and likely, that as time goes along these two points find themselves very far away from each other; in fact, they could eventually find themselves as far apart as the size of the phase space itself, diametrically opposite each other. Now, this is compatible with ergodicity, because every such distorting filament, as it goes along, also visits all points of the phase space. So this property of distortion, occurring simultaneously with ergodicity, is even more general, and it is called mixing; mixing implies ergodicity, but not the other way about, and I will make this precise. Let us now come to a slightly more precise definition of mixing. There are many different ways of defining mixing and many kinds of mixing, but for our purposes let me define it entirely in terms of a picture of this kind, by an analogy. Imagine that this phase space is an ordinary fluid, like water, contained in some volume. I take a little drop of ink, inject it at t = 0, and watch what happens as a function of time. All of you have done this experiment: after some time the ink puts out little tendrils all over, and eventually, even if you did not stir the fluid, the ink disperses throughout it. Eventually it is spread out everywhere in the fluid; it is all still there, except that it becomes so dilute that you can't see it. If it is uniformly mixed, how do you test that? You would say it is uniformly mixed if I take any reference volume here and there is as much ink in it as in every other equal reference volume. Now let's make that precise. Start with an initial set and call it A0; take a reference window, a set which I'll call B; and let the total phase space be Omega. I am going to use the symbol mu for the measure, the volume if you like, in phase space (in the fluid analogy it could equally refer to a volume in real space). After a certain amount of time, say one minute, the little droplet of ink has moved off and become like this: that's A1. After two minutes it has perhaps become like this: that's A2, and some of it could start falling into the reference window. After n such time units, the set A0 has evolved into the set An, and I can ask how much of An is inside B. That is the intersection of An with B, and it tells you how much ink there is in the reference window, how much of the initial set after time n is inside B. The measure of this, mu(An intersected with B), is the volume of that intersection. Take the ratio of this to what you started with, mu(A0), the total amount of ink if you like. If the limit of this ratio as n tends to infinity equals mu(B) divided by mu(Omega), the size of the window relative to the entire volume, then the ink is completely mixed as far as that window is concerned; and if this is true for every reference window B, then the ink is uniformly mixed. This property, stated in terms of measures in phase space, is called strong mixing, and that is what I will mean by mixing. A moment's thought will show you that mixing implies ergodicity, but ergodicity doesn't imply mixing.
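This definition of strong mixing can be tested numerically on a standard area-preserving mixing map, Arnold's cat map on the unit torus; the map is my choice of example, and the sets A0 and B and the iteration count are arbitrary:

```python
import random

# Strong mixing: if A0 evolves to An, then mu(An ∩ B)/mu(A0) -> mu(B)/mu(Omega).
# Arnold's cat map (x, y) -> (2x + y, x + y) mod 1 is area-preserving and
# mixing on the unit square; B below plays the role of the reference window.

random.seed(1)
# A0: a Monte Carlo sample of the "ink drop", the square [0, 0.1) x [0, 0.1).
drop = [(0.1 * random.random(), 0.1 * random.random()) for _ in range(40000)]

def cat(x, y):
    return (2 * x + y) % 1.0, (x + y) % 1.0

for _ in range(20):                        # let the ink disperse
    drop = [cat(x, y) for x, y in drop]

# Fraction of the drop now inside the reference window B = [0.3,0.7) x [0.2,0.8).
in_B = sum(1 for x, y in drop if 0.3 <= x < 0.7 and 0.2 <= y < 0.8)
frac = in_B / len(drop)
print(frac)   # close to mu(B)/mu(Omega) = 0.4 * 0.6 = 0.24
```

The stretching is exponential, so after only twenty iterations the initially compact drop is drawn into filaments that thread the whole square, and the fraction caught in any window is already proportional to the window's area.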
Ergodicity simply says that this little patch keeps going round and round and visits everywhere; it doesn't have to mix, it doesn't have to get distorted at all. But mixing definitely implies ergodicity, because there are parts of this little piece which would find their way everywhere, given enough time. So that's the next in complexity beyond ergodicity. We are now going beyond integrable systems and looking at things much more generally, and such systems would certainly have properties like ergodicity and sometimes even mixing. This property here is what I will use as my definition of mixing. It is a very strong property; it says something very profound about the nature of the dynamics. There are weaker forms of mixing, but this one, as you can see, is geometrically easy to explain.

Now, even this doesn't exhaust the possibilities, because the next question you would ask is the following. You start with an initial patch which after some time starts putting out little filaments and looks like this; if this is what An looks like, how rapidly do these things separate from each other? For trajectories which start off arbitrarily close at t = 0, how rapidly, as a function of time, do they separate? So now we are talking about a time-dependent quantity, namely the rate of separation, and we ask how fast this rate can be. Well, if you and I start next to each other and walk at constant speeds in two different directions, our separation increases linearly with time, because each of us has a path which is linear. If each of us accelerates uniformly, it increases quadratically with time. On the other hand, if you leave it to a process like diffusion (I put in a drop of ink, I don't stir it, and there are no thermal currents or convection currents), then a little patch of ink starts off here, after some time it is a little fuzzy thing like that, and then a bigger, fuzzier thing like that, with perhaps little tendrils all over. You could ask what, typically, the linear dimension of this inky spot is, and it typically goes like the square root of the time, because diffusion is a random process, and for such a diffusive process the separation increases like the square root of time. So: ballistic motion at constant speed gives linear growth, constant acceleration gives quadratic growth, diffusion gives square-root growth. The question is: can it go faster than that? The reality is that in systems of the kind we are considering, in phase space with nonlinear differential equations, it turns out that very typically initial conditions separate exponentially fast in time; we will see how that comes about and what its implications are. So the rate of separation can be very rapid indeed, exponentially fast, with some typical time constant. Whenever you have an exponential separation, increasing like e to the lambda t, you would like to know what this lambda is: the inverse of lambda is a characteristic time scale on which initial imprecisions amplify. You would like to find out what these constants are, and they play a profound role in dynamics in general; they are called Lyapunov exponents, and we are going to study a lot more about them. So beyond mixing comes exponential instability, exponential separation. "Exponential separation" is an imprecise way of saying it and we will make it more precise, but this would be the general rule, on the average. Of course, there could still be some very special initial conditions for which the motion is periodic, or some special set of initial conditions where the motion is quasi-periodic.
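A minimal numerical sketch of a Lyapunov exponent: for a one-dimensional map x(n+1) = f(x(n)), lambda is the time average of ln |f'(x(n))| along the orbit. The logistic map f(x) = 4x(1-x) is a standard example (my choice, not the lecture's) whose exponent is known to equal ln 2; the initial point and iteration count are arbitrary:

```python
import math

# Lyapunov exponent of a 1-d map: lambda = lim (1/N) sum ln |f'(x_n)|.
# For the logistic map f(x) = 4x(1-x), f'(x) = 4(1 - 2x), and the exponent
# is ln 2: nearby trajectories separate like e^(n ln 2), doubling each step.

x, N, acc = 0.2, 100000, 0.0
for _ in range(N):
    acc += math.log(abs(4.0 * (1.0 - 2.0 * x)))   # ln |f'(x)|
    x = 4.0 * x * (1.0 - x)

lam = acc / N
print(lam, math.log(2.0))   # the two agree: separation doubles per step on average
```

The inverse of this lambda is the characteristic time scale mentioned above: here it is about 1/ln 2, or roughly a step and a half, per doubling of any initial imprecision.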
Or the motion on such special sets could be merely ergodic, or mixing with only a power-law rather than exponential rate, and so on. But if typically, on the average in the phase space, you discover that there is exponential separation, then you have gone one step beyond mixing; and of course this exponential separation implies mixing, which implies ergodicity. So we are moving to more and more general possibilities. Finally, you could have a situation where, except for sets of measure zero in the phase space, almost all initial conditions separate exponentially and are exponentially unstable. That is global exponential instability, the next step, and it implies all the earlier ones: exponential separation essentially everywhere. So this is even more general than the previous step.

Now, what is the bad thing about this exponential separation? What does it imply? The moment you have it, you can give up the idea of computing quantities by following trajectories themselves, because initial errors amplify exponentially. The moment this happens, if there is even one positive exponent, exponential separation in even one direction in phase space, you cannot compute time averages by following trajectories anymore: the errors simply amplify too fast. Unless you have infinite precision, you cannot compute time averages of physical quantities this way in polynomial time. You are forced to take recourse to statistical methods, to distributions in phase space, and to talk about time averages being replaced by averages over distributions. That is the lesson we have to draw, and that is where the subject of chaos enters; it is what we are going to work our way towards gradually. But I want to convey to you the idea that periodic motion is the exception rather than the rule, and a little more generally, so is quasi-periodic motion. Ergodicity is very common, but even more common is mixing, and even more common than mixing is very strong mixing, exponentially fast, in phase space. That is the situation you typically have to deal with, and for it the methods of standard integration which we have for integrable problems are useless. We really must find proper methods, statistical or probabilistic methods, of handling this kind of dynamical instability. The fact that most dynamical systems have this property means that you necessarily have to find methods of computation different from the special techniques you use for integrating ordinary equations and writing down solutions explicitly. You need much more powerful methods, and this is what we are going to aim towards: to see how we can develop such methods and what we do with them. We will take this in several slow steps, but I hope it is clear that the problem we have to deal with in dynamics, even for the kind of dynamical system we are talking about, is a highly nontrivial one.

There is one more aspect of these differential equations which I didn't mention, which I will do now; then we will come back to this. Going back to a dynamical system of the form x-dot = f(x): if you looked at this system in one dimension, if x was just a one-dimensional real variable, then we saw there could be critical points which were either attractors or repellers, or higher-order critical points, and you essentially had just a line as the phase space. If you looked at the situation where x was two-dimensional, comprising two real variables x and y, this led to the phase plane as the phase space.
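The point about errors amplifying too fast can be made concrete. An initial imprecision delta grows roughly like delta times e to the lambda n, so the number of trustworthy steps grows only logarithmically as you add precision. A sketch, reusing the standard logistic map as an example (my choice; the threshold and seed are arbitrary):

```python
# Two trajectories of the chaotic logistic map, started 1e-15 apart
# (about the limit of double precision), stay together only for as long
# as ln(threshold / 1e-15) / lambda steps before becoming uncorrelated.

f = lambda x: 4.0 * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-15
steps = 0
while abs(x - y) < 0.1:      # 0.1 = arbitrary "trajectories have diverged" threshold
    x, y = f(x), f(y)
    steps += 1

print(steps)   # of order ln(0.1 / 1e-15) / ln 2, i.e. a few dozen steps
```

Fifteen digits of precision buy only a few dozen reliable iterations; each extra digit buys just a handful more, which is why one gives up trajectory-following in favor of statistical descriptions.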
Since it's an autonomous system and phase trajectories can't intersect each other, we have a rather simple classification of all the elementary critical points in terms of nodes, spiral points, centers, saddle points and so on; the coalescence of these critical points led to higher-order critical points, and if the system was degenerate you got somewhat more complicated kinds of stationary sets. Otherwise nothing much else happened, except limit cycles: you also saw that in dissipative systems you had the possibility of isolated periodic orbits, limit cycles, which were like generalizations of the point attractors, of the critical points. You actually had an isolated periodic orbit somewhere, into which trajectories fell or from which they were repelled. If you go to higher dimensions, it is easy to see that you could generalize this idea of a limit cycle: you could have a torus attractor, something on which the motion is composed of two independent periodic motions as on a torus, or a three-dimensional torus, a four-dimensional torus, higher-dimensional tori. But something else, much more interesting, happens when you have three or more variables. When the phase space has x, y and z and is three-dimensional (not a Hamiltonian system now, just some arbitrary three-dimensional system), then with three coupled nonlinear ordinary differential equations a new possibility opens up. A new kind of attractor is possible which is not a torus and not a limit cycle, but an extremely complicated curve in three dimensions which cannot intersect itself. When I draw it, it is obviously going to look as if it intersects itself, but it is some extremely complicated object of this kind, a region of phase space into which, if a trajectory falls, it never leaves; the trajectory continues in this little ball of wool, and this ball of wool is not a regular geometrical object at all. Such an object is called a strange attractor. We will see why it is called strange: it has strange geometrical properties; specifically, it has a fractal dimensionality, a dimensionality which is a fraction, not an integer. This was one of the big discoveries of dynamical systems: in a phase space of three or more dimensions you can have attractors which are very irregular geometrical objects, not like the usual limit cycles or tori, and the motion on them is not periodic anymore. The trajectory just goes on and on in a certain confined region of phase space and has a very peculiar geometrical structure. Of course you could have higher-dimensional strange attractors as well, in 4 and 5 and 6 dimensions and so on. The class of strange attractors has not been fully classified yet; it is not something about which we know everything. There are various well-known attractors, specifically in three-dimensional systems, a couple of which we will look at, but the full set of possibilities is quite wide open. Because of the possibility of strange attractors, it is very difficult in general to analyze differential equations in three or more variables. In fact the first strange attractors that were seen appeared in equations which looked almost linear: of the three equations for x-dot, y-dot and z-dot, two of them could be linear and the third could have just a quadratic nonlinearity, and that is sufficient to produce a strange attractor. Those were the initial models due to Lorenz and Rössler and several others, some of which we will write down, where you have this kind of behavior. This already tells you that if you have three or more coupled differential equations, numerical integration of these equations becomes in some sense useless when you have such chaotic behavior: following long-time trajectories becomes extremely difficult, not even computable in some sense, and you need more powerful methods, statistical or probabilistic methods, to deal with such situations. We will come back to that too. So not only do you have this kind of strange behavior in general to deal with, but it appears even for low-dimensional systems, for a dimensionality as low as three, and these are just ordinary differential equations which happen to be nonlinear. If you have partial differential equations, things get much more complicated. This is one of the reasons why the problem of turbulence is so difficult: the Navier-Stokes equation of fluid dynamics has a quadratic nonlinearity in the velocity, but it is a three-dimensional equation and a partial differential equation, and in the language of dynamical systems a partial differential equation is essentially equivalent to an infinite-dimensional dynamical system. The phase space is effectively infinite-dimensional, and therefore the possibilities are quite horrendous; that is why you have very complicated phenomena like turbulence, which are not fully understood as yet. But again I'd like to emphasize that you must appreciate the fact that the moment you have coupled nonlinear differential equations, the possibilities can become very complicated indeed: dynamics is not as easy as it seems. Okay, with this sort of preliminary introduction, let me go on and introduce some ideas which will fix these things in our minds by illustration. The problem is that solving coupled sets of differential equations is a highly nontrivial problem. One way in which you might try to solve things is to say: all right, I write a set of equations for x1 through xn.
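Before moving on, here is a minimal numerical sketch of the Lorenz model mentioned above, with its classic parameter values (the step size, initial condition, and the bounds checked at the end are my arbitrary choices). The trajectory remains confined to a bounded region yet keeps winding between two lobes without ever closing:

```python
# The Lorenz equations: a three-variable system whose only nonlinearities
# are quadratic, yet whose attractor is the classic strange attractor.
# The trajectory never closes and never leaves a bounded, folded region.

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4(s, h):
    add = lambda a, b, c: tuple(ai + c * bi for ai, bi in zip(a, b))
    k1 = lorenz(s)
    k2 = lorenz(add(s, k1, h / 2))
    k3 = lorenz(add(s, k2, h / 2))
    k4 = lorenz(add(s, k3, h))
    return tuple(si + h / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

s = (1.0, 1.0, 1.0)
for _ in range(1000):          # discard the transient onto the attractor
    s = rk4(s, 0.01)

orbit = []
for _ in range(20000):
    s = rk4(s, 0.01)
    orbit.append(s)

# Bounded forever, yet never periodic: the hallmark of a strange attractor.
print(max(abs(x) for x, _, _ in orbit), max(z for _, _, z in orbit))
```

Plotting `orbit` in three dimensions would show the familiar "ball of wool" described above; any finite-precision run of it is, of course, only a shadow of a true trajectory, for exactly the error-amplification reasons already discussed.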
Then, if I could eliminate all the variables except one of them, I would write an nth-order differential equation for x1, in principle solve it, and then find x2, x3 and so on from that. But that's not possible in general: given a set of coupled first-order differential equations for the variables x1 through xn, it's not guaranteed that you can eliminate all variables except one and write an nth-order equation for any of the xi. The converse is true: given an nth-order equation for a single variable, we can always write it as a set of n coupled first-order equations, by simply taking the variable x, then x dot, x double dot, and so on, and defining them to be independent dynamical variables x1, x2, x3, etc. But if you're given a coupled set of first-order nonlinear differential equations, there's no guarantee that you can eliminate all components except one and write an nth-order equation for it. So that approach immediately meets with failure; you have to deal with the set as it is, and we've seen the possibilities. I've already mentioned that you could have very crazy kinds of motion, and therefore we need somewhat more sophisticated techniques to handle such equations. One of the ideas that people had early on, starting with Poincaré himself, was the following: suppose you didn't look at the system as a function of the continuous time variable, but simply looked at it at discrete intervals of time. What would then happen? Well, in general you would say that the value of the system at time (n + 1) tau should depend on the value at time step n. So one would write the value of the variable at time n + 1 (let me now use a subscript, rather than a bracketed t, for the discrete time variable) as some function of what the variable was at time n, where n is a discrete time index in steps of some interval of time tau, which could be one second or one minute or whatever.
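Going back for a moment to the converse reduction stated above, it can be shown in a few lines. As an illustration of my own (not an example from the lecture), take the second-order equation x double dot = -x, rewrite it as the pair x1 dot = x2, x2 dot = -x1 with x1 = x and x2 = x dot treated as independent dynamical variables, and integrate with a simple Euler scheme; the exact solution with x(0) = 1, x dot(0) = 0 is cos(t).

```python
import math

# Second-order equation x'' = -x reduced to two coupled first-order
# equations: x1' = x2, x2' = -x1, with x1 = x and x2 = x-dot.
x1, x2 = 1.0, 0.0            # initial conditions: x(0) = 1, x-dot(0) = 0
dt = 1e-4
steps = int(math.pi / dt)    # integrate up to t = pi
for _ in range(steps):
    # Forward Euler step on the coupled pair (both updates use old values).
    x1, x2 = x1 + dt * x2, x2 - dt * x1

# The exact solution is cos(t), so x(pi) should be close to -1.
assert abs(x1 - math.cos(math.pi)) < 1e-2
```

The same recipe turns any nth-order equation into n coupled first-order equations; the point of the passage above is that the reverse direction is what fails in general.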
There are many problems where this would in fact be the way you'd analyze the system. For instance, if I give you a population of bacteria, I would look at it perhaps every minute or so; the population after 10 minutes would then be a function of the population after 9 minutes, which in turn would be some function of the population after 8 minutes, and so on, and you get an equation of this kind. Whereas the differential equation is a continuous flow in continuous time, something like this would be regarded as a discrete map. This sort of thing is called a map, and it essentially says: you give me the variable at n = 0, and I tell you what it is at n = 1, and then I iterate the map over and over again to find the value of the variable at time n. The one is a differential equation; the other is a difference equation in the discrete time n. And of course, if the variable x is one dimensional, itself a scalar, then you have a one-dimensional map, of the form x_{n+1} = f(x_n) with f some nonlinear function; this is a 1d map. The question then reduces to: what can we say about these maps? What kind of dynamics would you have in such maps, and what counterparts of critical points would you have here? What kind of equilibrium points, what kind of stability, what attracting sets would you have in such situations? These are the questions of interest, so let's look at some of the simplest maps and see what happens. We start with one-dimensional maps, and let's look first at maps which are linear, so that things become extremely simple, and take it from there. So suppose you have a one-dimensional map which says x_{n+1} is a linear function of the previous value: x_{n+1} = a x_n + b, where a and b are some constants. To start with, let's assume that the initial point x0 is some real number, and the phase space is, if you like, the whole of the real axis, the x axis.
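As a quick sketch of this linear map in action, here is a direct iteration with illustrative values a = 0.5 and b = 1 chosen by me (with |a| < 1, the iterates converge, as the cobweb picture below will make clear):

```python
# The linear 1d map x_{n+1} = a * x_n + b, with illustrative
# constants a = 0.5, b = 1 (my choice, not from the lecture).
def f(x, a=0.5, b=1.0):
    return a * x + b

x = 10.0                 # arbitrary initial condition x0
orbit = [x]
for _ in range(50):
    x = f(x)
    orbit.append(x)

# With |a| < 1 the orbit converges to b / (1 - a) = 2
# from any starting point.
assert abs(orbit[-1] - 2.0) < 1e-12
```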
It's clear how to write things down: x1 = a x0 + b; x2 = a x1 + b = a(a x0 + b) + b; and so on. So it's not hard to write down the solution in terms of x0. As before, we want to solve an initial value problem: just as there, if you give me x at time 0, I want to find x at time t, so here, if you give me x0, I want to find x at an arbitrary time n. Of course you can write this down immediately: x at time n is a^n x0 plus the remaining terms, b(1 + a + ... + a^{n-1}), a geometric series which sums to b(1 - a^n)/(1 - a) when a is not 1. But there's a much easier way of analyzing this map, and that's to plot both sides. So let's call f(x) = a x + b in this case, and simply plot it: here's x, and I plot f(x) on this side. What's f(x) here? It's just a straight line with slope a and intercept b, so it looks like this; this is a x + b. And we have a geometrical method of finding the nth iterate, which is what you do numerically in the method of successive approximations: to solve an equation of this kind, you start with some x0, calculate the function, put that back on this axis, calculate the function again, and so on. But that's equivalent to saying that I'm going to draw the 45-degree bisector (let me draw this properly), in this fashion; this is x itself. When I start with an x0, I calculate f(x0); that's x1, which I implement by going horizontally to this line, and that's x1 here. Then I calculate f(x1); that gives me x2, so this quantity is x2. I calculate x3, and so on. By this ladder construction, I go vertically from a guess value to the function, horizontally to the bisector, vertically again to the function, horizontally to the bisector, and so on.
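The closed-form solution x_n = a^n x0 + b(1 - a^n)/(1 - a) can be checked against direct iteration; the values of a, b and x0 below are arbitrary illustrative choices of mine:

```python
# Check the closed form of the linear map x_{n+1} = a x_n + b:
#   x_n = a^n x0 + b (1 + a + ... + a^(n-1)) = a^n x0 + b (1 - a^n)/(1 - a)
# for a != 1. The constants here are arbitrary illustrative values.
a, b, x0 = 0.8, 0.3, 5.0

x = x0
for n in range(1, 21):
    x = a * x + b                                   # direct iteration
    closed = a**n * x0 + b * (1 - a**n) / (1 - a)   # closed-form expression
    assert abs(x - closed) < 1e-12
```

For |a| < 1 the a^n terms die out as n grows, so the formula already shows the orbit settling onto b/(1 - a), which is exactly the intersection point the cobweb construction converges to.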
You can see from this picture that pretty soon you're going to converge on this point here. Had I started on the other side, I'd have done exactly the same thing: I start with this initial value, go up to the function, go horizontally to the bisector in this fashion, and so on, and pretty soon I converge on the same point. What's special about this point? It's the analog of the critical point, except that here I call it a fixed point, and the reason is that it's a solution of the equation f(x) = x; it's the intersection of the two curves. Evidently, if I write x_{n+1} = f(x_n) and under the map the point doesn't change at all, it's a fixed point of the map. So the roots of f(x) = x are fixed points, and quite clearly, as n tends to infinity, at very long times, a point sitting at a fixed point is not going to change at all. If this equation has solutions, those solutions are fixed points of the map; they're like the equilibrium points. In other words, under further time evolution things don't change at all, so asymptotically you'd expect that things could fall into these fixed points, as you saw here: no matter where I start, I end up at this fixed point. The location of the fixed point in this case is trivial: it simply says x = a x + b, which implies, calling it x*, that x* = b/(1 - a). Would you say this fixed point is stable or unstable? Yes, naturally I'd call it stable, because no matter where you start, you fall into it. In fact, it's a global attractor for this map: wherever you start, whatever your initial condition, any finite x0, as n tends to infinity you're going to reach the point b/(1 - a). This problem was simple because you had just one fixed point, but if you have more than one
fixed point, then the question becomes a little more interesting, and the matter is not so easy to resolve. For instance, here's the 45-degree bisector, and suppose my map function looks like that. What would happen in such an instance? If I start here, between the two, I go to the map function, I go here, and I fall in; this too is a fixed point, and so is this one. If I start up here, you discover that I fall in towards this one; but if I start here and hit this part, then the next time I go here, I hit the function further down, and I go away and disappear. Similarly, if I start anywhere here, I soon climb up and move towards that one. So you would say that this fixed point is actually unstable, because it's repelling on both sides, whereas the other one attracts on both sides. What do you think decides whether a fixed point in these one-dimensional cases is an attracting fixed point or a repelling fixed point? The slope of the curve decides this completely. So what's the criterion? Yes: the slope of the bisector is one, so it's clear that if the slope of the map at that point is less than one in magnitude, you have something that's stable and attracts, and if it's greater than one, you have something unstable. What if the slope were in the other direction? Let's see quickly what happens if the slope pointed the other way. Here's the 45-degree line, and suppose the map function in the neighborhood of this point looks like this. Then I start here at the map, I go here, I go out here, I go here, and I go off; I move away from the point. You can see that the magnitude of this slope, even though the slope is negative, is greater than one, and again it repels. On the other hand, if the magnitude of the slope were less than one, it would attract, and that's fairly straightforward to see: with a curve like that, I start here, I go there, I go there, and pretty soon I fall into this fixed point. What happens if the slope is exactly equal to one in magnitude?
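This slope criterion is easy to verify numerically. The map f(x) = x^2 below is my own example, not one from the lecture: it has a fixed point at x* = 0 with slope f'(0) = 0 (attracting) and one at x* = 1 with slope f'(1) = 2 (repelling).

```python
# The map f(x) = x^2 has two fixed points, roots of x^2 = x:
# x* = 0 with slope 0 (|slope| < 1, attracting) and
# x* = 1 with slope 2 (|slope| > 1, repelling).
def f(x):
    return x * x

# Start near the stable fixed point: the iterates collapse onto 0.
x = 0.5
for _ in range(15):
    x = f(x)
assert abs(x) < 1e-12

# Start just above the unstable fixed point: the iterates run away.
x = 1.001
for _ in range(15):
    x = f(x)
assert x > 1e6
```

A starting point even slightly off the unstable fixed point is repelled, exactly as the cobweb picture suggests.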
Well, let's take something that's exactly that: here's this slope, and the other line is also at 45 degrees. Then if you start here, you go down there, you come back, and you get into a loop: you go neither closer towards the fixed point nor further away from it. So you can't call it a stable fixed point or an unstable fixed point; it's an indifferent, or marginal, fixed point. So we have our first criterion: a fixed point is stable if the magnitude of the slope of the map there is less than unity, unstable if it's greater than unity, and marginal if it's equal to unity. The marginal cases are going to be of great interest to us in some sense, because you would like to see what happens to them under small perturbations. They play the role of centers in the earlier case, where we saw that a center was structurally unstable, in the sense that a small perturbation which took you away from pure imaginary eigenvalues could move you either into an asymptotically stable region or an unstable region. Exactly the same thing is going to be true for these marginal fixed points. It's also very clear for one-dimensional maps that if you have a stable fixed point, the next one must be unstable, and so on: things must alternate, exactly as saddles and centers did in the original potential problems, with maxima and minima alternating. Now, a lot more is going to go on in one-dimensional maps; we're going to see many more possibilities arise, periodic orbits and so on, so let me take it up from this point next time. Of course, the map that we looked at so far was linear; the function was not nonlinear. A linear map has only one root, because in solving a x + b = x you get a unique root. But if the function f(x) is nonlinear, which is the case of interest, then you get many more complicated possibilities. So we're going to take several such prototypical maps and ask what happens when you iterate them.
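As a preview of such a prototypical nonlinear map, here is the logistic map f(x) = r x(1 - x), a standard textbook example (my choice; the lecture has not introduced it yet). Now f(x) = x has two roots, x* = 0 and x* = 1 - 1/r, and the slope criterion decides the stability of each.

```python
# Logistic map f(x) = r x (1 - x). Its fixed points solve r x (1 - x) = x,
# giving x* = 0 and x* = 1 - 1/r. The slope is f'(x) = r (1 - 2x), so at
# r = 2.5: f'(0) = 2.5 (repelling) while f'(0.6) = -0.5 (attracting).
r = 2.5
f = lambda x: r * x * (1 - x)

x = 0.1                       # start near the unstable root at 0
for _ in range(100):
    x = f(x)

# The orbit is pushed away from 0 and pulled into the other fixed point.
assert abs(x - (1 - 1 / r)) < 1e-12
```

The unstable and stable fixed points here alternate along the axis, consistent with the alternation rule just stated.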
I might as well say that the solution of difference equations is much harder than the solution of differential equations. That's reflected, in some sense, in the fact that even one-dimensional maps, maps with a single scalar variable x, are sufficient to produce dynamical chaos and very complicated dynamical behavior, whereas in the case of differential equations you needed at least three coupled differential equations before you got chaos. So even 1d maps are going to produce wild kinds of dynamical behavior, which is one of the reasons we'd like to study these maps: you can draw things, you can write things out explicitly, and so on, and still have very complicated dynamical behavior. This kind of thing is called low-dimensional chaos. We'll go through this with some care and then extrapolate to higher dimensions. So let me stop here now.
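The claim that a single scalar map suffices for chaos can be sketched with the logistic map x_{n+1} = 4x(1 - x), again my own illustrative example: it shows the same sensitive dependence on initial conditions as the three-dimensional flows above.

```python
# Logistic map at r = 4, a standard one-dimensional chaotic map
# (illustrative example, not taken from the lecture).
f = lambda x: 4.0 * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-10   # two initial conditions differing in the 10th place
gap = []
for _ in range(100):
    a, b = f(a), f(b)
    gap.append(abs(a - b))

# Initially indistinguishable orbits become completely different:
assert gap[0] < 1e-8      # still essentially identical after one step
assert max(gap) > 0.1     # but eventually of order one apart
```

So a one-line difference equation already exhibits the behavior that needed three coupled differential equations in continuous time, which is why these maps are such a convenient laboratory.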