Now let us discuss the quiz paper that we had recently, and then we will see where that takes us; as I said last time, we will switch to a different topic right after this.

First, the quiz. We had a number of statements that were to be marked true or false. Here they go. "Any map of the unit interval that is non-invertible leads to dynamics that is chaotic" — true or false? Definitely false, because non-invertibility is not enough; you need much more than that in order to have chaos. We have numerous examples of maps which are not invertible but which are not chaotic at all. Give me an example of a non-invertible one-dimensional map that does not lead to chaos. Yes: the logistic map at a parameter value less than μ∞, for example at μ = 3.1 or so. That is certainly not chaotic — there is a stable periodic cycle which is the attractor — but it is not invertible: it goes up and comes down like a parabola. So the statement is not true.

Next: "The Lyapunov exponent of the logistic map x_{n+1} = μ x_n (1 − x_n) at μ = μ∞ = 3.5699... is equal to log 2." This value of μ is the accumulation point of the period-doubling cascade of bifurcations. The statement is false: the Lyapunov exponent there is 0, because μ∞ is the onset of chaos. A little beyond μ∞ you have chaos, as signified by a positive Lyapunov exponent. In the logistic map the Lyapunov exponent reaches log 2 only when μ hits 4 and the map leads to fully developed chaos; until then it is less than log 2.

Next: "For any chaotic attractor, the generalized dimension D0 is equal to the dimensionality of the phase space itself." For the maps we looked at, at fully developed chaos in one dimension, the attractor was the unit interval itself (except for sets of measure zero), and in that case the statement held. But it is certainly not true in general. Whenever you have chaos in a dissipative system there is, in general, shrinkage of volume, and the system sits on some attractor whose dimensionality is less than that of the phase space itself. Even in the case of the logistic map it is clear that the dimensionality of an attractor can be a fractal dimension between 0 and 1, even though the phase space is one-dimensional. So this is a false statement.

Next: "If the Lyapunov exponent of a one-dimensional map is positive, we may conclude that the dynamics is chaotic for all initial conditions." False, because we know there are initial conditions which lie on unstable periodic orbits, and for those the dynamics is not chaotic. That set of initial conditions may have zero measure, or it may not; but you certainly cannot conclude, just because the Lyapunov exponent is positive, that the dynamics is chaotic for every initial condition.

Next: "Stability analysis using a Lyapunov function enables us to decide on the stability of a critical point even in cases where linearization in the vicinity of the critical point is invalid." This is true, and it is one of the great advantages of Lyapunov's direct method: once you have a proper Lyapunov function, you can make definite statements about stability, asymptotic stability, or instability, without having to do a linear analysis at all. That is the great advantage of a Lyapunov function.
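Before moving on, the logistic-map statements above are easy to check numerically. Here is a minimal sketch (the iteration counts and the initial condition x0 = 0.3 are my own arbitrary choices) that estimates the Lyapunov exponent as the trajectory average of log |f′(x_n)|:

```python
import numpy as np

def lyapunov_logistic(mu, x0=0.3, n_transient=1000, n_iter=200000):
    """Estimate the Lyapunov exponent of x_{n+1} = mu*x_n*(1 - x_n)."""
    x = x0
    for _ in range(n_transient):                     # discard the transient
        x = mu * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):
        x = mu * x * (1.0 - x)
        total += np.log(abs(mu * (1.0 - 2.0 * x)))   # log |f'(x)|
    return total / n_iter

print(lyapunov_logistic(3.1))        # negative: a stable 2-cycle, not chaos
print(lyapunov_logistic(3.5699456))  # near 0: the onset of chaos (slow convergence)
print(lyapunov_logistic(4.0))        # near log 2 = 0.693...: fully developed chaos
```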
Next: "The logistic map undergoes a Hopf bifurcation at μ = 3." False. At μ = 3 the map undergoes a flip bifurcation from a period-1 fixed point to a period-2 cycle: the fixed point at 1 − 1/μ becomes unstable and bifurcates to a period-2 cycle, which is stable over some range of μ thereafter. That is a flip bifurcation, i.e., a period-doubling bifurcation; it is not a pitchfork bifurcation, and it is certainly not a Hopf bifurcation — this system does not have any limit cycle at all.

"The Hopf bifurcation cannot occur in a Hamiltonian system." True: a Hopf bifurcation leads to a limit cycle, which happens only in dissipative systems, not in a conservative system.

The next question: "The origin x = 0, y = 0 is a global attractor for the system ẋ = y, ẏ = x − x³ − y." This statement is false. What sort of system is this? Can we think of a physical system with this set of equations? Yes: if you regard x as a position in one dimension, then y is the velocity (in appropriate units), and the −y term is a damping force proportional in magnitude to the velocity. So it is a damped system, provided the remaining terms x − x³ can be identified as a force arising from some potential. What would that potential be? Since the force is minus the derivative of the potential, the potential is minus the integral of the force, which gives V(x) = −x²/2 + x⁴/4 in these units. And what sort of potential is this? A double-well potential. The origin is an equilibrium point, a critical point, but it is actually a saddle point; the bottoms of the two wells, at x = ±1, are two further critical points. In the undamped case those two points would be centers, with a saddle point in between. But damping is present, so they become asymptotically stable spiral points: two genuine attractors, with an unstable equilibrium point in between. So the origin is not a global attractor — nor is either well bottom. There are two attractors in the system, and depending on your initial conditions you fall into one or the other; there are two basins of attraction for the two attractors. This is just the Duffing oscillator: the unforced, double-well Duffing oscillator with linear damping.
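The two basins are easy to exhibit numerically. Below is a minimal sketch (the step size, step count, and the two sample initial conditions are my own choices) that integrates the damped double-well system with a basic RK4 stepper and shows nearby starting points settling on different attractors:

```python
import numpy as np

def settle(state, dt=0.01, n_steps=20000):
    """Integrate xdot = y, ydot = x - x**3 - y with a basic RK4 stepper."""
    f = lambda s: np.array([s[1], s[0] - s[0]**3 - s[1]])
    s = np.array(state, dtype=float)
    for _ in range(n_steps):
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        s += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return s

# Initial conditions on opposite sides of the saddle at the origin
# end up on the two different attractors:
print(settle([0.1, 0.0]))    # -> approximately (+1, 0)
print(settle([-0.1, 0.0]))   # -> approximately (-1, 0)
```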
The next question concerned the winding number of the singularity at the origin of a planar vector field, and you were asked whether that winding number equals −2. Let us check. The field is F(x, y) = ( (x² − y²)/(x² + y²)², −2xy/(x² + y²)² ). It is a planar vector field which is singular at the origin: it blows up there because of the denominators. Now, what do these components suggest? That they are the real and imaginary parts of some function of the complex variable z. Indeed, 1/z² = (x − iy)²/((x + iy)(x − iy))² = (x² − y² − 2ixy)/(x² + y²)², so the two components are precisely the real and imaginary parts of 1/z². In complex notation the field is z^(−2); if it were a dynamical system you would write ż = z^(−2). So what is the winding number? Go once around the origin in the z-plane and ask by how much the argument of the vector field changes. If z → z e^{2πi}, then z^(−2) picks up a factor e^{−4πi}: the argument changes by −4π, which is −2 times 2π. Therefore the winding number is −2.

Yes — the dipole field would correspond to z², with winding number +2, and that distinction is all-important. Here is what is going on. Put z = 1/u; then whatever happens at the origin in z happens at infinity in u, and vice versa. Because the field is 1/z², in the variable u the field is just u², whose winding number is +2. But when you map back to z = 1/u the sign changes, because a loop traversed counterclockwise around the origin, viewed from the point at infinity, is traversed in the opposite sense. Imagine, in stereographic projection, the point at infinity as the north pole and the origin as the south pole: go once around the south pole in a given direction, and viewed from the north pole the loop runs the opposite way. That is precisely what happens here: the winding number is −2 rather than +2. So this is just a small variation on the usual dipole field. Note that the field is singular at the origin: it does not vanish there, it blows up and becomes infinite.

The next question: "The damped, unforced Duffing oscillator cannot have any limit cycles" — true or false? It is true, and we proved it using the Bendixson criterion. For the damped, unforced Duffing oscillator whose equations we wrote down a little while ago, the divergence of the vector field is ∂(y)/∂x + ∂(x − x³ − y)/∂y = −1, which has a fixed sign everywhere in the plane. We showed, using Green's theorem in the plane, that as soon as the divergence has a fixed sign there can be no limit cycles at all. That is the Bendixson criterion.
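The winding-number computation above can also be done numerically: traverse a loop around the origin once and track the continuous argument of the field. A minimal sketch (the loop radius and sample count are my own choices):

```python
import numpy as np

# Go once counterclockwise around the origin and track the continuous
# argument of the field, written in complex form as F = 1/z**2.
t = np.linspace(0.0, 2.0 * np.pi, 2001)         # closed loop, endpoint included
z = 0.5 * np.exp(1j * t)                        # a circle of radius 0.5
F = 1.0 / z**2
phase = np.unwrap(np.angle(F))                  # remove the 2*pi jumps
print((phase[-1] - phase[0]) / (2.0 * np.pi))   # -2.0: the winding number
```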
Then the next question: consider the map x_{n+1} = x_n (3 − 4 x_n²), with x_0 in [−1, 1]. The statement is that this map has a stable period-3 cycle — true or false? It is false, and we can see it immediately: the map reduces to something like the Bernoulli shift, except in ternary, with slope 3. Let us plot it on [−1, 1]. It is an onto map: when x_n = 0, x_{n+1} = 0; when x_n = +1, x_{n+1} = 3 − 4 = −1, and vice versa. It is a cubic map, and it does have fixed points. The slope at the origin is 3, which is bigger than 1, so that fixed point is immediately unstable; and it is an easy matter to see that the slopes at the other fixed points (at x = ±1/√2, where the slope is 3 − 12x² = −3) are also greater than 1 in magnitude. Therefore all the fixed points are unstable. What happens if you iterate the map, taking f², f³ and so on? The graph goes up and down more and more times, and the slopes get increasingly steep as the number of iterations increases. So it is clear that this map has no stable periodic orbits at all: not only the fixed points but all the higher-period orbits are completely unstable. It is fully chaotic. And you should not imagine that the three fixed points form a period-3 cycle: they are just fixed points of the map, and the map has no periodic points which are stable.

To see that this map is actually a shift of some kind is not very difficult. All you have to do is substitute a trigonometric function: put x_n = sin θ_n. Then the identity sin 3θ = 3 sin θ − 4 sin³θ says precisely that θ_{n+1} = 3 θ_n, and therefore θ_n = 3ⁿ θ_0 — the Bernoulli shift in the variable θ. Since x runs between −1 and 1, the sine is the natural choice, and with this change of variable the map is immediately solved in closed form: x_n = sin(3ⁿ θ_0). It is the analog, for the cubic map, of what the logistic map is at μ = 4, where a trigonometric change of variables also gave us the Bernoulli shift — there a doubling, here a tripling.
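This closed-form solution is easy to verify by direct iteration. A minimal sketch (the initial angle θ_0 = 0.3 is my own arbitrary choice):

```python
import numpy as np

theta0 = 0.3                  # an arbitrary initial angle (my own choice)
x = np.sin(theta0)
for n in range(1, 8):
    x = x * (3.0 - 4.0 * x**2)               # iterate the cubic map
    print(n, x, np.sin(3.0**n * theta0))     # agrees with sin(3**n * theta0)
```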
The next question: let X(t) be a dichotomous Markov process in which X jumps randomly between two values X1 and X2, with mean residence times τ1 and τ2 in the two states; let the mean value be zero. The statement is that the autocorrelation function of the process is a decaying exponential function of t. This is a true statement: for a dichotomous Markov process, no matter what the levels are and no matter what the rates of transition are, the autocorrelation function is an exponential. The reason I said "let the mean be zero" is that the general autocorrelation function is ⟨(X(0) − ⟨X⟩)(X(t) − ⟨X⟩)⟩, and I wanted to get rid of the mean, so I defined it to be zero here. So you have a process which jumps back and forth between the two values; if the transition rates out of the two states are λ1 and λ2, and the levels are arranged so that the mean is zero, then ⟨X(0) X(t)⟩ = ⟨X²⟩ e^{−2λt}, where 2λ = λ1 + λ2. That is a general statement for a dichotomous process, and not very hard to derive. It is an example of a very simple model of a Markov process which is exponentially correlated, and it occurs in numerous applications. The characteristic time is (2λ)^{−1}, the inverse of the sum of the two rates. That is worth noting: if the mean time of stay in one state is τ1 = 1/λ1 and in the other τ2 = 1/λ2, one should not jump to the conclusion that the correlation time is τ1 + τ2. Not true — what does it come out to be in terms of the two? (2λ)^{−1} = τ1 τ2/(τ1 + τ2). So much for that.

Then the next question: a set of numbers between 0 and 1. Let S be the set of numbers x in [0, 1] whose decimal expansion x = 0.a1 a2 a3... has every digit even — each a_i is one of 0, 2, 4, 6, 8. You are asked to calculate the fractal dimension, the capacity or box-counting dimension, of the set S. What you do is simply this: take the interval [0, 1] and break it into 10 equal parts, at 0.1, 0.2, ..., 0.9. Now a1 can only be even, so x cannot lie, for instance, between 0.1 and 0.2, because then a1 would be the odd digit 1. That interval is forbidden, and so is every interval whose first digit is odd: a1 confines x to five of the ten intervals, every alternate one. Then you break each permitted interval into 10 parts, and the second decimal digit again selects five of those ten subintervals. This process is self-similar: at every stage it is exactly the same. So you have a demagnification factor ε = 1/10, and at each stage a unit of the previous stage breaks into N = 5 surviving parts, the other five having been erased. Therefore D0 = log 5 / log(1/ε) = log 5 / log 10 ≈ 0.699. That is the fractal dimension.

Of course, you could make this question a little more sophisticated by asking about various probabilities. If I did not associate equal probability measures with all these pieces, but biased them to one side or the other, I would get a multifractal, with generalized dimensions Dq different from D0. But with the uniform coarse-graining done here, continued at every stage, all the Dq's are the same as D0, nothing changes, and it is a regular fractal rather than a multifractal. (Pardon? What is a multifractal? There are many ways of defining it, but if a whole spectrum of generalized dimensions Dq, as we defined them, is associated with a set, I call it a multifractal; if all the Dq's collapse to a single value D0, then it is just a simple fractal. In that sense the set here is a regular, simple fractal.)
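The box-counting estimate for this even-digit set converges quickly in a direct numerical experiment. Below is a minimal sketch, assuming we approximate S by random points whose first 8 decimal digits are even (the sample size, depth, and random seed are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
# Sample points of S: draw the first 8 decimal digits from {0, 2, 4, 6, 8}.
digits = rng.choice([0, 2, 4, 6, 8], size=(200000, 8))
xs = (digits * 10.0 ** -np.arange(1, 9)).sum(axis=1)

for k in range(1, 6):                        # boxes of size 10**-k
    occupied = np.unique(np.floor(xs * 10**k).astype(np.int64))
    print(k, np.log(len(occupied)) / np.log(10.0**k))   # -> log 5 / log 10 = 0.699
```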
Finally we come to the last question, which was just a problem in matrix algebra. Three identical tall glasses A, B, C contain water to respective heights x0, y0, z0. The levels in A and B are first equalized by pouring water from A to B, or from B to A, depending on which has more; the levels in B and C are then similarly equalized; and then C and A. That is one complete operation, and you repeat it over and over again. It is intuitively clear that eventually the levels in all three glasses become equal, if no water is spilt, and that the common level is (x0 + y0 + z0)/3. But the question is: at what rate is this limit approached? In other words, what are the actual levels in the three glasses after n iterations of the step?

This is done in a very straightforward way. (We could randomize the problem as well, but as posed it is completely deterministic — just a set of three recursion relations.) Start with the column (x0, y0, z0). Equalize A and B: the levels become ((x0 + y0)/2, (x0 + y0)/2, z0). Then equalize B and C: B and C each become the average of (x0 + y0)/2 and z0, namely (x0 + y0 + 2z0)/4, while A stays as it is. Finally equalize C and A: each becomes the average of (x0 + y0)/2 and (x0 + y0 + 2z0)/4, which over the common denominator 8 is (3x0 + 3y0 + 2z0)/8, while B stays at (x0 + y0 + 2z0)/4. So after one complete operation the levels are A = (3x0 + 3y0 + 2z0)/8, B = (x0 + y0 + 2z0)/4, C = (3x0 + 3y0 + 2z0)/8.

What has essentially happened is that you have a column vector x = (x, y, z)ᵀ, and the operation says x(n+1) = T x(n), where the matrix is

T = | 3/8  3/8  1/4 |
    | 1/4  1/4  1/2 |
    | 3/8  3/8  1/4 |

This of course implies x(n) = Tⁿ x(0), so we essentially need to take the nth power of this matrix, and what is interesting is its limit. We are guaranteed that what survives finally is (x0 + y0 + z0)/3 in every slot: all three levels become equal, and the system tends to the uniform distribution, all three glasses holding exactly the same amount of liquid.

Now, what is noticeable about this matrix? It has zero determinant (two of its rows are identical). And it is a stochastic matrix — the material is not being lost — in the sense that each row sums to one. Each column also sums to one, so it is doubly stochastic. Is it a symmetric matrix? Not as it stands, so we cannot immediately assert that the left and right eigenvectors are the same. But notice that because the rows add up to one, the uniform vector (1, 1, 1)ᵀ is a right eigenvector, and similarly the row vector (1, 1, 1) is a left eigenvector.
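A quick numerical check of this pouring matrix (the initial levels 6, 3, 0 are a hypothetical example of my own):

```python
import numpy as np

T = np.array([[3/8, 3/8, 1/4],     # new level in A
              [1/4, 1/4, 1/2],     # new level in B
              [3/8, 3/8, 1/4]])    # new level in C

x = np.array([6.0, 3.0, 0.0])      # hypothetical initial levels (sum = 9)
for n in range(1, 7):
    x = T @ x
    print(n, x)                    # converges rapidly to (3, 3, 3)
```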
What are the eigenvalues of this matrix? Zero has to be one of them, because the determinant is 0; and 1 is an eigenvalue exactly — that is what corresponds to the equilibrium distribution. So λ1 = 0 and λ3 = 1, and the third eigenvalue is not hard to discover in this case: it is λ2 = −1/8. Now we can diagonalize the matrix and write down what Tⁿ is. The 0 just remains 0 and the 1 remains 1, but when you take the nth power, the third eigenvalue gives you (−1/8)ⁿ, and that supplies the time scale of the problem. I am not going to work out the details, but we can see that as n → ∞, x(n) → ((x0 + y0 + z0)/3) (1, 1, 1)ᵀ. And at what characteristic rate does it get there? You have (1/8)ⁿ, which can be written as e^{−n log 8}; so there are terms of the form e^{−n log 8}, and the characteristic time, in units of one complete pouring operation, is τ = 1/log 8. That is the interesting part. It is a deterministic process — a map — but it is very much like a Markov chain: it is given by a transition matrix which is a stochastic matrix, exactly like the transition matrix of a Markov chain, and it has an equilibrium distribution in the end. As I said, we could make this problem more interesting by putting random elements into it — probabilities with which you do certain operations before others — and the matter would then become a little more intricate. But as posed it is completely deterministic: you equalize A and B first, B and C next, then C and A. You could do these in various orders with various probabilities and ask similar questions, such as the probability of finding a given distribution of levels at a certain stage. That too is answerable.
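The eigenvalue analysis above, and the (−1/8)ⁿ convergence rate, can be confirmed directly. A minimal sketch:

```python
import numpy as np

T = np.array([[3/8, 3/8, 1/4],
              [1/4, 1/4, 1/2],
              [3/8, 3/8, 1/4]])

print(np.sort(np.linalg.eigvals(T)))    # [-0.125  0.  1.]

uniform = np.full((3, 3), 1.0 / 3.0)    # the limit of T**n as n -> infinity
for n in (1, 2, 4, 8):
    err = np.abs(np.linalg.matrix_power(T, n) - uniform).max()
    print(n, err, err * 8.0**n)         # err shrinks like (1/8)**n
```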
OK, so I am more or less done; I would just like to conclude. Any other questions on all that we have done so far? Yes — the question is what is meant by a correlation function, an autocorrelation function, in the simplest of cases. Let me explain that briefly again. Take the example of a scalar random variable, a random process x(t); it does not have to be a Markov process, it could be anything at all. What are the relevant physical quantities to analyze for a random process? One of them is the average value ⟨x(t)⟩ at any instant of time. The process does not have to be stationary; if it is, the average is independent of time, because you evaluate it over a distribution which is independent of time. You can generalize this and ask for the nth moment ⟨xⁿ(t)⟩, which again, for a stationary process, is independent of time. Now, you could have stationarity at the level of the mean, at the level of the autocorrelation, and so on; but let us take the definition of a stationary process in the strict sense of the word, namely that all joint probability distributions are independent of the origin of time: single-time distributions are independent of time, two-time distributions depend only on the time difference, and so on.

The next question you could ask is: what is the average value of the product of the variable at some time t1 and the same variable at a later time t2, ⟨x(t1) x(t2)⟩? This would in some sense characterize the amount of memory present in the random variable. If it is a delta function in t1 − t2, I would say the process has no memory whatsoever, like a white noise; but in physical problems this quantity genuinely depends on t1 and t2. If the variable is stationary, I would expect it to become a function only of the time difference between t1 and t2, and not of the two absolute times separately. In fact I should be a little more careful here: I should really define C(t1, t2) = ⟨(x(t1) − ⟨x(t1)⟩)(x(t2) − ⟨x(t2)⟩)⟩, subtracting the mean at each time, and this is what I call the autocorrelation function of the random variable; in general it is a function of t1 and t2. If it is a stationary random variable, certain simplifications occur: stationarity makes the mean a constant, so expanding the product, the cross terms combine and C(t1, t2) = ⟨x(t1) x(t2)⟩ − ⟨x⟩². So it is like a generalization of the variance, except that there are time arguments.

Now suppose it is a continuous variable, and let us write out explicitly what ⟨x(t1) x(t2)⟩ means. Let x1 denote the value at time t1 and x2 the value at time t2. Then ⟨x(t1) x(t2)⟩ = ∫dx1 ∫dx2 x1 x2 p2(x2, t2; x1, t1): the product, weighted by the joint probability density of having x1 at time t1 and x2 at time t2. (I use the notation in which later times stand to the left.) For t2 > t1, with t1 the earlier time, this joint density can by definition be written as p2(x2, t2; x1, t1) = p(x2, t2 | x1, t1) p1(x1, t1): a conditional density times the single-time density. Since the process is stationary, p1 is independent of t, and the conditional density depends only on the difference, p(x2, t2 | x1, t1) = p(x2, t2 − t1 | x1, 0). So we come to the conclusion that C is in fact a function of t1 − t2 alone. (And to start with, let us set the mean equal to zero, or shift the origin of the variable so that the mean vanishes.)
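The dichotomous Markov process from the quiz is a convenient concrete test of these definitions. Below is a minimal sketch, assuming a crude discrete-time simulation in which the process jumps out of its current state with probability dt/τ per step; the levels X1 = 2, X2 = −1 and residence times τ1 = 1, τ2 = 2 are my own choices, arranged so that the mean is zero:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.01, 1_000_000
X1, X2, tau1, tau2 = 2.0, -1.0, 1.0, 2.0   # (tau1*X1 + tau2*X2)/(tau1+tau2) = 0

x = np.empty(n)
state = X1
for i in range(n):                 # crude discrete-time simulation
    x[i] = state
    tau = tau1 if state == X1 else tau2
    if rng.random() < dt / tau:    # jump out with probability dt/tau per step
        state = X2 if state == X1 else X1

# Estimated autocorrelation at a few lags, against the exact result
# C(t) = <X**2> exp(-(1/tau1 + 1/tau2) t), with <X**2> = 2 here:
for k in (0, 50, 100, 200):
    est = np.mean(x[: n - k] * x[k:])
    print(k * dt, est, 2.0 * np.exp(-1.5 * k * dt))
```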
You can say a little more than that. If the process is stationary I can shift the origin of time: setting t1 to zero, ⟨x(t1) x(t2)⟩ = ⟨x(0) x(t2 − t1)⟩. But I could equally well have shifted by t2, which gives ⟨x(t1 − t2) x(0)⟩; and since these are classical variables there is no problem about commuting them, so this is also ⟨x(0) x(t1 − t2)⟩. So the correlation function is a symmetric function of the time difference, and we actually conclude that C(t1, t2) = C(|t1 − t2|): it is a function of the magnitude of the time difference alone. It gives you some idea of how rapidly the system loses memory. In the case of the dichotomous Markov process this was a single exponential, but it does not have to be so; it could be much more complicated. What we do know is that it is a function of this modulus alone.

Physically, if the average value is zero, I would expect the behavior of C(t) as a function of t (let me plot just the positive side; the negative side is the mirror image) to start at some value and decay to zero — maybe exponentially, maybe slowly, like a power law — with the memory gradually fading. How fast it fades is decided by how quickly the conditional density approaches its limit: as t2 − t1 → ∞ the process forgets the initial condition, and the limit of p(x2, t2 | x1, t1) is just p1(x2). The decay need not be monotonic — the function can do all sorts of things — but physically I would expect the correlation, on the whole, to decrease. Of course, it could also be purely periodic. Suppose the variable is not a random variable at all but the position of a simple harmonic oscillator of frequency ω, and I take averages over time. What would the correlation be? There is no ensemble averaging; I average over the actual dynamics, and if I start at some x0 I know exactly what the system does. The correlation would just be a periodic function: it would start at its initial value and oscillate, never dying down at all. But for more realistic random variables I would expect the correlation to die out, and it is a measure of how much memory there is in the system.

In fact you can extract a time scale from C(t). In the stationary case I can take the quantity ⟨x(0) x(t)⟩/⟨x²⟩, so as to make it dimensionless, and integrate over t from 0 to ∞; I call this the effective correlation time, because it has the physical dimensions of time, as you can see. If C(t) = ⟨x²⟩ e^{−λt} and I do this integral, the correlation time is just 1/λ for a single exponential; otherwise I get some effective correlation time.
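The harmonic-oscillator remark is also easy to see numerically: the time-averaged correlation of a pure sinusoid oscillates forever instead of decaying. A minimal sketch (frequency, step size, and the lags printed are my own choices; the exact time average of cos(ωt) cos(ω(t+τ)) is (1/2) cos(ωτ)):

```python
import numpy as np

omega, dt, n = 2.0, 0.01, 200_000
x = np.cos(omega * dt * np.arange(n))       # deterministic oscillator "signal"

for k in (0, 78, 157, 314):                 # lags tau = k * dt
    est = np.mean(x[: n - k] * x[k:])
    print(k * dt, est, 0.5 * np.cos(omega * k * dt))   # never decays
```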
There is another thing you can do with the correlation function: take its Fourier transform. Define S_x(ω) = ∫ from −∞ to ∞ dt e^{iωt} C(t), a function of ω; the subscript x shows that it belongs to the random variable x. This is called the power spectral density, or power spectrum for short, of the random variable x. If x is a noise, then this quantity tells you, in rough terms, the intensity — the strength — of the noise in a frequency window between ω and ω + dω. If the correlation is a delta function, the power spectrum is a constant: that is exactly why such noise is called white noise, because its power spectrum is flat. Physically that will never happen, of course; we know that things always fall off eventually. So if you plot S(ω) versus ω, in the ideal case of white noise it would be flat, but in practice it comes down at large ω.

A great deal of information is obtained by looking at the power spectrum. You could in fact start with any time series for any random variable — even a chaotic time series — and look at its power spectrum. If the system has hidden periodicities, they will be detected here: a single periodic component at frequency ω0 gives a spike, a delta function at ω0. What happens in practice is that you have a broad power spectrum with some spikes of this kind sitting on it, and at those spikes you know there are periodicities in the system. So in a complicated time series the power spectrum, which is the Fourier transform of the autocorrelation function, helps you detect hidden periodicities. If the signal is completely noisy — white noise — it has no such structure at all.

Even the way the spectrum falls off tells you something about the underlying random process: whether it goes like 1/f² or 1/f or whatever, where f is the frequency. There is a whole class of processes called 1/f noise, corresponding to a power spectrum which dies down at large ω like 1/ω to a power roughly between, say, 0.8 and 1.2. It is very ubiquitous; it appears everywhere. If the noise were of the Brownian-motion kind, the spectrum would instead go like 1/ω². So the asymptotic fall-off of S(ω) also gives you physical information. A chaotic time series, as opposed to pure noise — true randomness — behaves very differently: its power spectrum is very broadband, spread out and irregular, with no sharp periodicities; whereas complete noise is flat like white noise or falls off like 1/ω to some power, a periodic signal ends up with spikes, and a quasi-periodic signal ends up with a large number of spikes. So the power spectrum is a very practical, physical way of seeing something about the underlying dynamics, regardless of whether it is deterministic, noisy, chaotic, or a mixture of all of these.
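As a last illustration, here is a minimal sketch of periodicity detection via the power spectrum: white noise with a weak sinusoid buried in it (the amplitude 0.2, frequency ω0 = 5 rad/s, sample count, and seed are my own choices), analyzed with a simple periodogram estimate:

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n = 0.01, 2**18
t = dt * np.arange(n)
# white noise plus a weak hidden periodicity at omega0 = 5 rad/s
x = rng.normal(size=n) + 0.2 * np.sin(5.0 * t)

S = np.abs(np.fft.rfft(x - x.mean()))**2 / n       # periodogram estimate
omega = 2.0 * np.pi * np.fft.rfftfreq(n, d=dt)     # angular frequencies

print(omega[np.argmax(S)])   # close to 5.0: the spike exposes the hidden period
```

The sinusoid is invisible to the eye in the raw time series, but its spectral spike stands far above the flat noise floor — exactly the point made above about hidden periodicities.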