So, let us begin this course on non-linear dynamics. The title of the course is not so much selected as simply "topics in non-linear dynamics", and it will deal essentially with classical dynamics of various kinds. Now, I should say right away that the term non-linear dynamics has become popular in the physical sciences for a certain kind of study of classical dynamical systems, whereas in mathematics per se the term dynamical systems has a very specific, rigorous connotation: it is studied under something called ergodic theory. I am not going to get into those technicalities, except to use some of those results whenever they are relevant to what we are going to talk about here. So I should start by asking what exactly we mean, in the physical sciences, by a dynamical system. As far as we are concerned, anything that changes with time is a dynamical system. That is a very general definition altogether, and the time variable itself could take on either continuous values, as it normally would in Newtonian mechanics or even quantum mechanics, or discrete values. So let me start by defining a dynamical system as a set of variables which change with time according to some prescribed rule; it is essentially a set of dynamical variables evolving in time. Now, that of course is extremely general, so just about everything you can think of is a dynamical system, and we need to make things a little more specific so we know what we are talking about. First, the time variable itself: this could be either continuous time or discrete time, and let me explain what I mean by this. If you had a set of particles moving in three-dimensional space under the influence of some potential and some mutual interaction, then time is taken in Newtonian dynamics to flow inexorably from the past to the future.
It is a continuous-time system, and the system's behaviour is described by, perhaps, a set of differential equations; the technical word for it in the case of continuous time is a flow. So we study flows. In the case of discrete time, for instance in population dynamics, you might want to know what the populations of a set of mutually interacting species are, and you might sample these populations every week, every month, every year, or even every hour or every minute if it is a bacterium of some kind; the time variable is then a discrete variable as far as you are concerned. I am going to use the standard symbol t for continuous time, and for discrete time I will use a suitable variable when the time comes; such systems go under the name of maps. So we study flows and we study maps. Now, what is the set of rules we talk about, and what sort of variables are we talking about? That too has a huge diversity. For instance, if you look at the position of a particle moving in three-dimensional space, it is a set of three dynamical variables, the position coordinates, and there would also be an associated set of momentum or velocity coordinates; so you would have six continuously varying dynamical variables for this system. The dynamical variables could themselves be either continuous or discrete, and the set of dynamical variables could be discrete: a finite number of variables, or perhaps even a countably infinite number of them. On the other hand, you could have variables which take on values in a continuum. So the variety is enormous, and we need to be able to deal with each of these in its own way. I might as well mention that there is an even more general possibility, which is that you may have a continuous infinity of variables describing a dynamical system.
For instance, a simple example: take a string extending along the x axis with its ends at 0 and l, clamped at both ends, and ask for the transverse displacements of the various points on the string. Perhaps at one instant of time the string looks like this, and at another instant it looks like that; it is vibrating, for instance, in standing waves. If u represents the transverse displacement at any point at any instant of time, it is a function both of the continuous variable x and of the time, which is also continuous. So the system is now described by a continuum of dynamical variables, if you like: the dependent variable in this case is u, and there are two continuous independent variables. The dynamical equation in this case would be a partial differential equation, whereas in the normal cases that we talk about, where you have a finite number of variables which vary continuously in time, it would be a set of ordinary differential equations. You could go a little further and note that the dependent variable need not even be a scalar. If you look at Maxwell's equations for the electric and magnetic fields, those are vector-valued fields: you have an E(r, t) and a B(r, t), and a coupled set of partial differential equations, Maxwell's equations, which involve both spatial derivatives with respect to the position variable and partial derivatives with respect to time. This coupled set of equations describes the dynamics of two vector-valued fields, the electric and magnetic fields, as functions of space and time. Those are certainly fairly complicated partial differential equations, and they too constitute a dynamical system.
Now, with this huge number of possibilities, we need to make things a little more specific. I should mention that throughout this course we are going to do classical dynamics. Of course, you have quantum dynamics as well, and there you have an even more general possibility. If you recall, the Schrodinger equation for the state vector as a function of time looks like i h-bar d psi/dt = H psi: the time-dependent state vector of a system governed by a Hamiltonian, whose time evolution is determined by that Hamiltonian. Here we have a very interesting situation, where the dynamical equation is not for any direct physical dynamical variable, but rather for something abstract called the state vector, from which you can deduce the probability distributions of all the dynamical observables in the system. So that is one more possibility: you may have evolution equations which do not directly describe the evolution of dynamical variables, but rather of their probability distributions. If there is some source of randomness in the system, then you might have equations telling you the evolution of probability distributions, or of probability amplitudes as in the case of quantum mechanics. So that too is a possibility. We are not going to touch upon quantum dynamics at all in this course; I am going to restrict myself to classical dynamics, and that too classical dynamics in a very precise, particular sense. So, to make things specific, here is what we will take as our dynamical system: we will assume that it is described by a set of variables denoted by x1, x2, up to x sub n, where n will be finite for the most part, a discrete set of variables x1 to xn, each of which is a function of time, and this evolution is given by a set of differential equations which tell you how each of the x's varies.
So the equations look like this. You have x1 dot (I will put a dot for the time derivative; that is going to be my standard notation) equal to some function of all the x's, x1 up to xn, and possibly the time as well, and I prescribe such a rule for each of the x's in my set, all the way down to xn dot, for which I need another distinct function. So let me keep the notation simple by calling these functions f1 down to f sub n: x1 dot = f1(x1, x2, ..., xn, t), and so on down to xn dot = fn(x1, x2, ..., xn, t). Such a system is what I am going to call a dynamical system for most of this course. The functions f1 to fn are supposed to be prescribed, and we have a set of ordinary first-order differential equations for these variables, which are coupled to each other and which are horribly non-linear in general; that is the reason for calling this non-linear dynamics. These functions are in general fairly complicated, but we will assume for the most part that they are well-defined, nice smooth functions with sufficient numbers of derivatives and so on. This is going to be what I call a dynamical system: a finite-dimensional dynamical system, where the number of variables is n and the equations are first order in time. Now, one might ask why we focus on first-order differential equations. Well, first of all, let me spend a couple of minutes and explain very briefly why, in most of the physical sciences, we end up with differential equations to start with to describe dynamics.
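To make the abstract set of equations concrete, here is a minimal sketch of such a coupled, non-linear, first-order system written as a right-hand-side function. The Lotka-Volterra predator-prey model and its parameter values are my own illustrative choices, not an example from the lecture.

```python
def lotka_volterra_rhs(x, a=1.0, b=0.5, c=0.75, d=0.25):
    """Right-hand side of a two-variable dynamical system:
        x1_dot = a*x1 - b*x1*x2    (prey)
        x2_dot = -c*x2 + d*x1*x2   (predator)
    These play the role of the prescribed functions f_1, ..., f_n;
    the parameter values are arbitrary illustrative choices."""
    x1, x2 = x
    return (a * x1 - b * x1 * x2, -c * x2 + d * x1 * x2)
```

Note that the right-hand side is non-linear (the x1*x2 terms couple the two variables), which is precisely the generic situation being described.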
The reason is that we believe there is no action at a distance in general. Whatever happens to some variable at some position in space and at some given instant of time is going to be affected by, and in turn is going to affect, whatever happens in some infinitesimal neighbourhood of it; this disturbance can then propagate, and that is how the system as a whole evolves. This idea of locality, that it is always local influences that determine what happens at any particular point in space and time, is what leads us to differential equations in the first place. Now, why first order? Well, a first-order differential equation has this property: in the simplest case, even in a single variable, you know immediately that the solution of the differential equation is uniquely determined if you specify an initial condition. So this set of equations is believed to have a unique solution if you specify a given initial condition, namely the values of all the dynamical variables at some instant of time, and then the future is supposed to be determined by this. The point is that if you have a sufficient number of variables, if your description is complete, then you are almost guaranteed that the equations are going to be first-order differential equations.
If you took this chalk to be a point particle, for instance, and I hold it up here and tell you the instantaneous force on it, the force of gravity due to the earth, and its instantaneous position, for which I need three coordinates assuming this is a point mass, then the future is determined if and only if I also specify its initial velocity. Given the position and the instantaneous force, Newton's second law determines the acceleration of this particle. If I let go, it drops in a trajectory straight down; if I give it a little horizontal velocity, it goes into an orbit; if I give it a higher velocity, it goes into an elliptic orbit; and if I give it an even higher one, it escapes the earth on a hyperbolic orbit. So it is clear that for this point particle you need six dynamical variables: three of them are the coordinates at the given instant of time, and the other three are the momentum or velocity components. It is immediately clear that dynamics is not happening in configuration space, in real space; it is actually taking place in the full space of all the dynamical variables, all six of them in this case. This is important to recognize, because if I wrote Newton's equation down for a particle moving in one dimension as m x double dot = f(x), mass times acceleration equal to the force acting on the particle, this might appear to be second-order dynamics, but it actually is not: to solve it uniquely you need to tell me both x at time 0 and x dot at time 0. So the correct way of writing that set of equations is to write x dot equal to the momentum divided by the mass, while p dot, by Newton's second law, is equal to the force acting on the particle.
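The splitting of Newton's second-order equation into a pair of first-order equations can be sketched as follows; this is a hedged illustration, and the linear restoring force used here is my own choice of the simplest concrete f(x), not the falling chalk of the lecture.

```python
def newton_rhs(state, force, m=1.0):
    """First-order form of m * x_ddot = f(x):
        x_dot = p / m
        p_dot = f(x)
    Returns one evaluation of the coupled right-hand side."""
    x, p = state
    return (p / m, force(x))

def spring_force(x, k=1.0):
    # Illustrative force law f(x) = -k*x (an assumption, not from the lecture)
    return -k * x
```

Given initial conditions (x0, p0), these two first-order equations determine x(t) and p(t) uniquely, which is exactly the point being made above.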
So that is the set of coupled first-order differential equations: there are two dynamical variables in this case, x and p, and each of them obeys a first-order differential equation. The fact is that you get a unique solution provided you start with initial conditions x0 and p0; if these are given, then in principle I can solve these two equations to determine x and p as functions of t for all time. So I hope it is clear that in almost all the cases we can think of, if your set of variables is complete enough, the equations will be first-order equations. For that matter, even the Schrodinger equation for the state vector is a first-order differential equation in time, so this conclusion is not affected by quantum mechanics; it is still true there as well. All right, now that we have a set of equations of this kind, let us see where we can get with it. The first thing to observe is that these equations are coupled to each other, and there is no guarantee at all, given such a set of equations, that I can eliminate n minus 1 of these dependent variables and get an nth-order differential equation for any one of them. One way to tackle the set would be to say: all right, I have a coupled set of equations; let me uncouple them by eliminating n minus 1 of the variables and writing an nth-order differential equation for x1. There is no guarantee that, for an arbitrary set of functions specified on the right-hand side, you are going to be able to do this.
By the way, I should mention here that the converse is always possible. Suppose you had an equation in one variable which looked like d^n x/dt^n + a_{n-1} d^{n-1}x/dt^{n-1} + ... + a_0 x = 0. Then this nth-order equation in the single variable x can indeed be converted to a set of n differential equations, each of which is first order, by simply doing the following: define x1 = x, x2 = x dot, and so on, all the way down to xn = d^{n-1}x/dt^{n-1}. What is going to happen is that you get a set of coupled equations for the variables x1, x2, x3, and so on, which are all first order. The first n minus 1 of them are fairly simple, x1 dot = x2, x2 dot = x3, and so on, while the last equation, once you move everything else to the right-hand side, is of the form xn dot equal to a function of all the earlier dynamical variables; it is the only non-trivial one. So here is a single nth-order differential equation which can be converted to a set of n first-order coupled equations; but what I am trying to say is that the reverse, going from a coupled first-order set to a single nth-order equation, is not always possible. We are going to stick to the first-order set as our definition of a dynamical system, because clearly that is much more general. The second thing we notice about this set of equations is that there is a t explicit in these functions, which means the rules by which these dynamical variables evolve are themselves changing with time. The system is not stationary in that sense; there is an explicit t dependence, and such equations are called non-autonomous, and the system is a non-autonomous one.
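The reduction just described, from a single nth-order linear equation to n coupled first-order equations, can be sketched generically; this is my own minimal implementation, assuming constant coefficients a_0, ..., a_{n-1}.

```python
def companion_rhs(coeffs):
    """For d^n x/dt^n + a_{n-1} d^{n-1}x/dt^{n-1} + ... + a_0 x = 0,
    with coeffs = [a_0, ..., a_{n-1}], return the right-hand side of
    the equivalent first-order system in the variables
    x_1 = x, x_2 = x_dot, ..., x_n = d^{n-1}x/dt^{n-1}."""
    def f(x):
        # x_k_dot = x_{k+1} for k = 1, ..., n-1; the last equation
        # collects all the earlier dynamical variables.
        return list(x[1:]) + [-sum(a * xi for a, xi in zip(coeffs, x))]
    return f
```

For example, x double dot + x = 0 (coeffs = [1, 0]) becomes the pair x1 dot = x2, x2 dot = -x1.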
Now, non-autonomous systems can be far more complicated than autonomous systems, and in real life you are very often faced with non-autonomous systems. For instance, suppose you have an electrical circuit, described by a set of equations for the charges on various capacitors and so on, and you are pumping energy into the system; or you have a set of dynamical variables in any system, electromechanical for instance, into which you are pumping energy all the time with a time-dependent external influence or force. Then these equations are clearly non-autonomous. However, there is a little simplification that can be made from a purely technical point of view, although from the interpretational point of view it is a very different matter, and it is the following. If I were given a set of n non-autonomous equations of this kind, I could always define an x_{n+1} equal to t itself. Although time is not a dynamical variable (dynamics happens in the arena of time), formally, if I define x_{n+1} = t, then I can simply write these equations down in terms of all n plus 1 variables, with a final equation x_{n+1} dot = 1, and that looks like an autonomous system with no explicit time dependence. So this looks like an (n+1)-dimensional autonomous system. To repeat: an n-dimensional non-autonomous dynamical system can always be made to look like an (n+1)-dimensional autonomous system.
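The trick of promoting t to a dynamical variable can be sketched like this; the sinusoidally forced oscillator is my own illustrative choice of non-autonomous system, not one from the lecture.

```python
import math

def forced_rhs(state, t):
    """Non-autonomous system: x_dot = p, p_dot = -x + cos(t)."""
    x, p = state
    return (p, -x + math.cos(t))

def autonomized_rhs(state):
    """The same system with x3 = t promoted to a dynamical variable:
    the right-hand side no longer depends explicitly on time, and the
    extra equation is x3_dot = 1."""
    x, p, tau = state
    return (p, -x + math.cos(tau), 1.0)
```

The two descriptions agree evaluation by evaluation; the price is one extra phase-space dimension.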
Now, of course, the price you pay for increasing the dimensionality of a dynamical system can be quite serious, because something which is fairly simple as a two-dimensional system may become extremely complicated as a three-dimensional system. This is the reason why even an ordinary, simple dynamical system, such as a particle moving in one dimension along the x axis, which is a two-dimensional dynamical system (there is an equation for x and one for the associated momentum p), looks very different if it is non-autonomous. You have x dot = p/m, but p dot is a function of both x and t if there is a time-dependent force, and then the dynamics of this system can be much more complicated than if you did not have a t dependence. Just to give an example: without the explicit t, this system cannot display what is called dynamical chaos; put in the t, and it can display dynamical chaos immediately, because the system now has a higher-dimensional phase space. So one has to be aware of this possibility, that the dynamics can get much more complicated. But the fact remains that, from a technical point of view, you may as well discuss only autonomous systems, although if one of the dynamical variables really came from converting a non-autonomous system to an autonomous one, then the interpretation of the solutions has to be done very carefully. We will come across examples of this later on; for the moment, because of this little trick, let us assume that the systems we talk about are autonomous in this sense, and then a great deal of simplification occurs. So we will look at autonomous dynamical systems, n-dimensional in general. This is going to be my basic dynamical system for the rest of this course.
So this is the framework to which we will now restrict our attention. For the moment, till we get to discrete-time dynamics and maps, we restrict ourselves to flows of this kind, where the evolution of n time-dependent dynamical variables is governed by this set of dynamical equations with some prescribed functions on the right-hand side. Now, I should say right away that even though these are first-order differential equations, except in the simple case when the right-hand sides are all linear functions of the x's, this set is not immediately integrable. You cannot explicitly write a general solution for such a set of coupled differential equations valid for all values of time; this is not possible in general. We will see why, but let me state right here that it is not possible in general. On the other hand, we can extract a great deal of information about the system without explicitly writing down a closed-form solution for it, and that is what we are going to discuss in a large fraction of this course: exactly what information can we get? First, we need a bit of compact notation. Let me define the variable x to be (x1, x2, ..., xn), an n-dimensional column vector if you like, and similarly define a vector-valued function f = (f1, f2, ..., fn), in which case this set of equations becomes simply x dot = f(x), autonomous, with no extra t dependence. Now, what do we need for the existence of a unique solution? At the very least you need a set of initial conditions on all these variables. So we need the equations plus the initial conditions: a certain x naught, that is, x(0) = (x1(0), ..., xn(0)).
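Once x dot = f(x) and an initial point x(0) are given, the trajectory can at least be traced numerically, one small time step at a time. Here is a minimal sketch of that construction, using the simplest first-order stepping scheme; the harmonic oscillator is my own illustrative choice of f, not an example from the lecture.

```python
import math

def euler_trajectory(f, x0, dt, n_steps):
    """March forward using x(t + dt) = x(t) + f(x(t)) * dt to first
    order in dt, collecting the points of the phase trajectory."""
    traj = [tuple(x0)]
    x = list(x0)
    for _ in range(n_steps):
        dx = f(x)
        x = [xi + dxi * dt for xi, dxi in zip(x, dx)]
        traj.append(tuple(x))
    return traj

def sho_rhs(x):
    # Harmonic oscillator: x_dot = p, p_dot = -x.
    # Exact solution from (1, 0): x(t) = cos t, p(t) = -sin t.
    return (x[1], -x[0])
```

With a fine enough dt this traces the closed phase trajectory of the oscillator; such step-by-step construction is local solvability, which is quite different from writing down a closed-form solution valid for all time.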
So you have to give me an initial value of all the variables, an initial point in the space of these x's, and then in principle the task is to solve this set of differential equations and extract information on the behaviour of x as a function of t. Now, as I said, our ability to write down explicit solutions is extremely limited; in very few cases can such solutions actually be found and the system be integrated completely. But a great deal can be said even without doing so, provided we understand what is meant by the qualitative theory of differential equations, the geometrical, qualitative behaviour of the solutions of such coupled equations. For that we need to be able to do a little geometry, and we have to introduce a concept called the phase space. This is simply the space of the set of dynamical variables x1 to xn, and in our case it is n-dimensional. Just to give a physical example: if I had a single particle moving in three-dimensional space, then I need 3 position coordinates and 3 momentum coordinates, so the phase space is 6-dimensional. If I have N different particles moving, an N-particle system, this corresponds to 6 variables for each particle, so the phase space is 6N-dimensional. If you took the gas in this room, with an Avogadro number of particles, the phase space dimensionality is astronomically large, 6 times Avogadro's number if you like, and that makes for very complex dynamics indeed. In this phase space, the state of the system at any instant of time, let us say t = 0, is specified by a certain point. Then what happens next? As a function of time this point moves in the phase space. If you solve this set of equations, once you tell me x(0), I can tell you what x is at
delta t, and then x(2 delta t), and so on, and it is clear that this point moves along and describes a trajectory, called the phase trajectory. Of course, the system reached this point from some earlier instant of time through exactly the same equations; so casting things as an initial-value problem gives me half-trajectories, what happens in the future, although of course the point would have come from somewhere in the past and then continues to evolve. We are interested in the half-trajectories running from 0 to infinity unless specified otherwise; there will be cases where I would like to write down the full dynamics from minus infinity to infinity. Now, there is a lot that can be said immediately, based on this and nothing more, and one such thing is the following. The phase trajectory for a given initial condition might look like this; but if you had a slightly different initial condition, it could look like that: a different phase trajectory. So, for any small collection of initial points, I have a set of individual trajectories, which may come closer together or move further apart and so on, and I can construct what is called a phase portrait. A phase portrait tells me, more or less, what happens to all initial conditions: wherever you start, what is going to happen in the future. Well, one possibility is that a trajectory goes out and comes back and crosses itself, and one might ask: is such a thing possible for a phase trajectory? The answer is no; such a self-intersection is impossible for an autonomous system. The reason is that you could choose any point as the initial state; we could have started with the crossing point as the initial state, and then there would be two futures, which goes against the theorem that these differential equations have a unique solution. So a
self-intersection of this kind is not allowed: a phase trajectory for an autonomous system cannot intersect itself. On the other hand, if this had been a non-autonomous system, there is nothing forbidding it, because when you go out there and come back here, the rule has changed, and therefore the subsequent evolution could indeed go off in a different direction, since the rule itself depends explicitly on time. In the autonomous case this is not possible. There is only one exception, and it is this: if you start at some initial point and the system comes back to that same initial point, it thereafter continues forever in the same orbit, the reason being that this set of equations is explicitly independent of time. So if you started at this point and came back to it, the system is doomed to repeat the same path forever afterwards. And what does this mean? At this point all the dynamical variables have returned to precisely their starting values, which means the motion is periodic. So this immediately implies that a closed phase trajectory implies, and is implied by, periodic motion. That is an elementary but important conclusion, because we should certainly like to be able to identify those parts of phase space where the motion is periodic, as opposed to those where it is not. So this is an important observation: a closed phase trajectory in the full phase space means periodic motion. It is not enough for some of the variables to come back to their starting values. For instance, take a simple pendulum: if it goes out to the end of its amplitude and comes back, it has returned to its starting point as far as the position is concerned, but not as far as the velocity is concerned, because initially the velocity was directed to one side and when it comes back it is directed to the other side, to the left. So it needs to come back all the
way, and that completes a full orbit in phase space, and then you have periodic motion. So that is important to recognize. The next point is: what in principle is stopping me from solving these equations? What is stopping me at all? The answer is the following, and it is interesting and important. Let us take a look at these equations again. If I tell you x at time 0, I can tell you what x is at time delta t. That is straightforward, because all I have to do is use the value of these functions at the initial instant of time, multiplied by delta t, since these are first-order differential equations: x(delta t) = x(0) + f(x(0)) delta t, where I have just taken the differential equation and kept things to first order in the infinitesimal time delta t. So if I start at some point in phase space, a little time delta t later I am there; I know this, and if I make delta t fine enough, I should be able to draw a continuous curve and call it the phase trajectory. Once I am at the new point, I simply apply the same trick again and move forward, and by this method, like the method of isoclines, you build up a phase trajectory. So it is clear that there is no difficulty in principle with the numerical solvability of these equations; I could then go to a slightly different initial condition and play the same trick, and compute, step by infinitesimal step, what the phase trajectory looks like. So local solvability, an infinitesimal step at a time, is not difficult in principle; it is straightforward to find local solutions for almost all given initial conditions. That is not the difficult part. Now, the way to look at this is to understand that the velocity of the representative point in phase space is given by the vector field f: it gives you the local phase-space velocity. Once you specify that, local solvability is not difficult, and
what is being plotted here are really the field lines of that vector field. This local solvability is always guaranteed; in fact it is codified in a formal theorem in multivariable calculus called the rectification theorem, and it says the following. If, in the space of the x's, at any given point you give me this vector field, whose field lines look curved, I can always make a smooth change of variables from x to some y which is a function of x, such that the point x0 gets mapped onto some point y0 and, more importantly, its infinitesimal neighbourhood gets mapped in such a way that the field lines become straight lines parallel to each other. That is why it is called the rectification theorem. What it means in this language is that under this change of variables, the new set of equations in the variables y1 to yn looks very simple: y1 dot = 1 in suitable units, y2 dot = 0, all the way to yn dot = 0. And what are the solutions of this? This is simplicity itself: y1(t) = y1(0) + t, while y2, y3, up to yn are constants; they do not change with time at all. That is why the field lines run parallel to each other: it is only in the one direction that any dynamical variable changes, while all the others remain constant, and, if you like, the vector field has been rectified. So this is always possible, by a rigorous theorem which says that this equation can always be solved locally, numerically solved if you like. Now, if I want to go beyond this region, then I must start with the new point as the initial point, and in its infinitesimal neighbourhood I must apply the rectification theorem again, which would perhaps give a new vector field that looks different, and so on. So, infinitesimal interval by interval, I can now
solve this equation locally. Why is it that this does not imply that I can, once and for all, solve the set of equations and write, given some initial value x(0), an explicit solution x(t) equal to some known function of x(0) and t, valid for all t? If I were able to do this, then I would have an explicit solution for all t, and whenever I can do this I say the system is integrable. There are different kinds of integrability, but the one I will talk about here is simply in this sense: if I can write an explicit time-dependent solution valid for all t, I say the system is integrable. There are very few systems which are integrable; we will come to that. But I want you to appreciate the fact, and this is what I am going to emphasize, that local solvability does not imply integrability. The other way round is true: if you tell me that in a given system you know the solution explicitly for all t, then of course you know it locally, for infinitesimal changes in t. But going the other way is not directly possible in most cases, and the reason for it is technical. There are several possibilities; the most common ones are the following. The map functions that I write down to go from one neighbourhood to the next (by the way, I should use some other notation for them, let us call them psi, since f is already taken) could be such that my initial neighbourhood gets mapped again and again into smaller and smaller neighbourhoods, and this can peter out: the mapping may be valid in smaller and smaller regions till it shrinks to a point, in which case the method is useless after that. There is another possibility: I start at some point, continue this sequential mapping, come back to the original point, and find a different map, which
means that there is no single valuedness anymore so both these possibilities happen and you can sort of guess that in this case there must be some kind of singularity in this vector field somewhere which is preventing this from being you solve globally everywhere here so there are other possibilities as well but without going into that let me just state as a matter of fact that integrability is a very rare event in dynamical systems it does happen in some very important examples we are going to talk about criteria for integrability for instance for Hamiltonian systems and so on but it's a rare event now let me go on right here and explain why it's such a rare thing what's so difficult about this in slightly different terms what is it that we need to integrate let's go back and ask for about the meaning of integration from elementary calculus you take any of these equations and you integrate it there's an integration constant so when you integrate n such equations you are going to have n integration constants whose numerical values will be determined by the initial conditions so it's clear that there are certain constants of the motion which are determining what's happening to the phase trajectory for instance suppose it turned out that some given function f 1 not this function some other given function f 1 of all the x is equal to constant let's call it C1 during this motion I discover that this particular combination is a constant for instance a particle moving in a one dimensional potential which is a conservative system no friction or anything I know the total energy is constant the total energy is a function of the velocity as well as the position kinetic plus potential and that's a constant of the motion so this thing here is a constant of the motion what does that imply for the phase trajectory it implies that this combination has always got to be constant and this defines for you in this n dimensional phase space and n minus 1 dimensional hyper surface for 
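Before going further with constants of the motion, the point made a moment ago, that local solutions need not patch together into a solution valid for all t, can be seen in a minimal example of my own (it is not one from the lecture): dx/dt = x^2, whose solution x(t) = x(0)/(1 - x(0) t) has a movable singularity at t = 1/x(0).

```python
# Minimal illustration (my own example, not from the lecture): dx/dt = x^2.
# The equation is locally solvable everywhere, and the exact time-h flow map
# below can be applied step after step; yet the continuation breaks down in
# finite time, because the solution x(t) = x(0)/(1 - x(0)*t) has a movable
# singularity at t = 1/x(0). No single explicit expression is valid for all t.

def local_flow(x, h):
    """Exact time-h flow map of dx/dt = x^2; valid only while 1 - h*x > 0."""
    return x / (1.0 - h * x)

x, t, h = 1.0, 0.0, 0.01     # initial value x(0) = 1, step size h
while 1.0 - h * x > 1e-6:    # continue the local solution step by step
    x = local_flow(x, h)
    t += h
print(t)                     # the continuation stalls just below t = 1/x(0) = 1
```

The neighborhoods on which the local map remains valid shrink to a point as t approaches 1/x(0), which is exactly the first failure mode described above.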
For instance, in the case of the particle moving in a potential: if the potential is a well and the particle moves back and forth in one dimension, we know that the sum of the kinetic and potential energies is constant, the motion is periodic, and the phase trajectory is a closed curve. What is this curve? It is given by the equation p^2/2m + V(x) = constant, drawn in the plane whose axes are x and p, and the constant has the physical meaning of the total energy of the system. So the system is constrained, restricted if you like, to moving on this "surface", which in this case is just a curve. Every time you have a constant of the motion, the phase space available to the system is decreased by one dimension, and you have an (n - 1)-dimensional hypersurface. If I discovered a second constant of the motion, say f_2(x_1, ..., x_n) = C_2, independent of the first, meaning these are functionally independent quantities, then this too restricts the motion to some other (n - 1)-dimensional hypersurface. The representative point must move on this surface as well as the first one; in other words, it must move on the intersection of the two surfaces. But that intersection is (n - 2)-dimensional, because every time you impose one more constraint, the dimensionality of whatever is available to you drops by one. Therefore the phase trajectory itself, being a one-dimensional object, is clearly the intersection of (n - 1) such hypersurfaces. In other words, to integrate completely you need to be able to write down (n - 1) functionally independent constants of the motion; if you could do that, the mutual intersection of the corresponding (n - 1) hypersurfaces would already fix your phase trajectory. This also tells you why the phase trajectory can be exceedingly complicated: the intersections of surfaces, even if the surfaces themselves are quite smooth, can become quite intricate when you have more and more of them, and in principle the phase trajectory in an n-dimensional phase space can look exceedingly complicated, although it is not self-intersecting, for any initial condition.

So in heuristic terms you can see that you need (n - 1) independent constants of the motion, and in general you do not have them, because constants of the motion are very hard to come by. Let me anticipate what we are going to say later: the existence of a constant of the motion implies, in general, a certain symmetry of the equations. The more symmetry you have, the more likely it is that there are constants of the motion; but if you take an arbitrary, generic system, there is no reason why it should have any special symmetry, and then you may have no constants of the motion available. That is one way of looking at why integrability is very rare, unlike local solvability, which is very common.

Let me end by looking again at the example of N particles moving in space, and then we will come back to this, just to show you what happens in that case. So consider a system of N particles, for which the phase space is 6N-dimensional; I therefore need 6N - 1 independent constants of the motion to integrate it completely. The system is described by an energy function, the Hamiltonian, which looks like H = sum over i from 1 to N of p_i^2/(2 m_i) plus a potential, where m_i is the mass of the i-th particle and the potential, if there is no external force and the particles interact mutually, is in general a function of all the coordinates of the particles. That is a completely general system; we will make it more specific by saying that the interaction is pairwise, the particles interact two at a time, and depends only on the distance between the two particles, which is what happens in most cases.
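Before counting the constants of the motion for this N-particle system, the one-dimensional picture above can be checked numerically. The following sketch is mine, with an arbitrarily chosen quartic well standing in for V(x); it shows that the phase point never leaves the energy curve p^2/2m + V(x) = E.

```python
# Sketch: one particle in a 1-D potential well, V(x) = x**4/4 (an arbitrary
# choice; any well would do). The single constant of the motion
# H = p^2/(2m) + V(x) confines the phase trajectory to a closed curve in the
# (x, p) plane; numerically, the energy along the orbit never drifts away
# from its initial value.
m, dt = 1.0, 1e-3
V  = lambda x: 0.25 * x**4    # the potential
dV = lambda x: x**3           # its derivative (the force is -dV/dx)

x, p = 1.0, 0.0               # the initial condition fixes E = H(x0, p0)
E = p * p / (2 * m) + V(x)

drift = 0.0
for _ in range(20000):        # leapfrog (velocity Verlet) steps
    p -= 0.5 * dt * dV(x)
    x += dt * p / m
    p -= 0.5 * dt * dV(x)
    drift = max(drift, abs(p * p / (2 * m) + V(x) - E))
print(drift)                  # stays tiny: the point moves on the energy curve
```

With a second, functionally independent constant one would cut this curve down further; here, with n = 2 phase-space dimensions, the single constant n - 1 = 1 already fixes the trajectory, which is why the one-dimensional conservative particle is integrable.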
So if you had a particle i here and a particle j there, the distance r_ij between them is all that the interaction would depend on. Then the Hamiltonian simplifies a little, and it becomes H = sum over i from 1 to N of p_i^2/(2 m_i) plus a sum of pairwise potentials V(r_ij), each a function only of the distance between particles i and j, summed over all distinct pairs with i not equal to j, because you do not want self-interactions of the particles. That is what the total energy function, or Hamiltonian, of a conservative system of N particles interacting by pairwise central forces looks like.

Now, what is the symmetry of this Hamiltonian, and how many constants of the motion can we write down? Well, we know from elementary mechanics that the Hamiltonian itself, for this autonomous system (it does not depend explicitly on time), is a constant of the motion, and its value is the total energy of the system. So one constant of the motion is the Hamiltonian itself, which gives the total energy. What other functions of the dynamical variables can we think of that do not change in time, even though the r's and the p's all change with time? Well, the whole system is rotationally invariant: whatever axes I choose, I can orient them as I please and the Hamiltonian does not change at all. Another way of saying this is that the total angular momentum is constant in this case. So it is clear that the quantity L = sum over i from 1 to N of r_i cross p_i is a constant of the motion; the total angular momentum of the system about the origin is guaranteed to be constant. That actually gives three constants of the motion, because it is a vector.

What else is constant? There is no external force on the system, so the generalization of Newton's third law applies, and it says that the total linear momentum P = sum over i from 1 to N of p_i is a constant of the motion. That gives three more quantities, because P too is a vector. So we have 1 + 3 + 3 = 7 constants of the motion so far. What else? Well, it is clear that since there is no external force on the system, the centre of mass must either be at rest or move with the constant velocity P/M, where M = sum over i of m_i is the total mass of the system. So the centre-of-mass position R, which is of course a function of time, satisfies R(t) = R(0) + P t/M, and therefore the combination R - P t/M is a constant of the motion. That gives three more constants. They are time-dependent, but there is nothing strange about that: the explicit time dependence of the term P t/M cancels the time dependence of R(t), making the combination a constant of the motion. These, by the way, are called the Galilean constants of the motion. And for a general potential, about which you know nothing else, that is it; there are no more constants of the motion.

So the total number we have is 7 + 3 = 10 constants of the motion, whereas the number you need for integrability is 6N - 1. Ten is pathetically small compared to 6N - 1 for any reasonable N; forget about Avogadro's number, already for N equal to 3 you would need 17, so it becomes impossible. In general, then, even the three-body problem is not integrable in this sense, for this kind of Hamiltonian, never mind more complex systems. The one- and two-body problems are indeed integrable, as we can show, but the three-body problem is not. So already in Newtonian mechanics you see the seeds of complexity; in fact such a system will in general be chaotic, as we will see, and the reason is that you do not have a sufficient number of constants of the motion. These constants of the motion are going to play a fundamental role, and when we come to Hamiltonian dynamics I will explain their significance. But this, in a nutshell, is why a general dynamical system is locally solvable but not integrable: it does not have enough symmetry for the existence of a sufficient number of constants of the motion to integrate the system explicitly. We will take it from this point next time.
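As a closing check, here is a numerical sketch of my own (an arbitrary pairwise spring interaction stands in for V(r_ij); any central pair potential would do) that follows two particles and verifies that the ten classical constants of the motion, the energy H, the three components of P, the three of L, and the three of the Galilean invariant M R - P t, all stay fixed along the integrated trajectory.

```python
import numpy as np

# Sketch (my own, not from the lecture): two particles coupled by a pairwise
# central force, here a linear spring V(r) = k*(r - r0)**2 / 2, chosen only
# for simplicity. Along the trajectory, the ten classical constants of the
# motion (H; the three components each of P, L, and M*R - P*t) stay fixed.
m = np.array([1.0, 2.0])                     # masses m_i
x = np.array([[0.0, 0.0, 0.0],               # positions r_i
              [1.0, 0.0, 0.0]])
v = np.array([[0.0, 0.3, 0.0],               # velocities
              [0.1, -0.2, 0.1]])
k, r0 = 1.0, 1.0                             # spring constant, natural length

def forces():
    d = x[1] - x[0]
    r = np.linalg.norm(d)
    f = k * (r - r0) * d / r                 # central: along the pair separation
    return np.array([f, -f])                 # Newton's third law

def constants(t):
    p = m[:, None] * v
    H = 0.5 * np.sum(p**2 / m[:, None]) \
        + 0.5 * k * (np.linalg.norm(x[1] - x[0]) - r0)**2
    P = p.sum(axis=0)                        # total linear momentum (3)
    L = np.cross(x, p).sum(axis=0)           # total angular momentum (3)
    G = (m[:, None] * x).sum(axis=0) - P * t # Galilean invariant M*R - P*t (3)
    return H, P, L, G

t, dt = 0.0, 1e-3
H0, P0, L0, G0 = constants(t)
f = forces()
for _ in range(10000):                       # leapfrog / velocity Verlet
    v += 0.5 * dt * f / m[:, None]
    x += dt * v
    t += dt
    f = forces()
    v += 0.5 * dt * f / m[:, None]
H1, P1, L1, G1 = constants(t)
print(abs(H1 - H0), np.abs(P1 - P0).max(),
      np.abs(L1 - L0).max(), np.abs(G1 - G0).max())
```

Ten conserved quantities, but the phase space of even these two particles is 12-dimensional, and 6N - 1 = 11 constants would be needed before counting the special tricks that make the two-body problem integrable; for three or more particles the gap only widens.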