This is where we stopped yesterday. We have an oscillator with a Hamiltonian which has the eigenfrequency ω₀ as a parameter. The mass is 1, p is the momentum, this is the Duffing nonlinearity, and this is the drive. We talked about it yesterday, and Yaroslav today very nicely summarized the results that we had. We derived essentially these equations of motion doing the RWA, which is the conventional abbreviation for rotating wave approximation. Up to this point there is no approximation: we just switch to the rotating frame, and then we make an approximation by keeping the smoothly varying parts of Q and P, which are the biggest parts; Q and P also have smaller, fast-oscillating parts. So we keep the big parts of Q and P, we have these equations of motion, and now it is our choice how to rescale the variables. There is no unique way to do it; we can do it in several different ways. The way I used in the notes was to switch to a dimensionless time determined by the frequency detuning, δω = ω_f − ω₀. Obviously this is a bad choice if you drive your oscillator exactly on resonance, because if ω_f is equal to ω₀ it fails. I will not be considering this case; I can choose a different scaling, I'm a free man, it's just my first choice. So I switch to this dimensionless time, τ = δω·t, and now I have to choose my parameter C. I will choose my parameter C in such a way that this coefficient becomes equal to one. Now, what happens when I change to this time? I essentially divide this equation by δω. Then Q-dot, which is dQ/dt, divided by δω becomes dQ/dτ. So once I have done this, having divided the whole thing by δω, I choose C so that 3γC²/(8ω_f δω) = 1, which gives me the scale of C: C = (8ω_f δω/3γ)^(1/2). This is the typical amplitude of my oscillator, because I want my dimensionless variables Q and P to be of order one. As you see, if I write q = A cos(ω_f t + φ), then by comparing this expression and this expression I see that my amplitude squared is A² = C²(Q² + P²). Therefore, if Q and P are of order one, C is my characteristic amplitude. So essentially what I am doing is scaling my variables in such a way that their typical values are of order one; C sets the scale of the vibration amplitude. Now my Hamiltonian. I can transform it accordingly, and what will happen with it? I look at these equations, and I will now write one of them in these dimensionless variables. I will also introduce the parameter κ = Γ/δω, where Γ is the friction coefficient (not to be confused with the Duffing coefficient γ). Then look what happens, for example, with the first equation. It becomes −κQ (that is the Γ/δω), then −P, from dividing the detuning term by δω; and this coefficient, 3γC²/(8ω_f δω), becomes one, so the nonlinear term is +P(P² + Q²). And we recognize this as −κQ plus the derivative of an auxiliary Hamiltonian with respect to P, that is, ∂g/∂P. I call this auxiliary Hamiltonian g.
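To keep track, the scaling relations introduced so far are (Γ is the friction coefficient, γ the Duffing coefficient):

```latex
\begin{aligned}
\tau &= \delta\omega\, t, \qquad \delta\omega = \omega_f-\omega_0, \qquad \kappa = \Gamma/\delta\omega,\\
\frac{3\gamma C^{2}}{8\,\omega_f\,\delta\omega} &= 1
\;\;\Longrightarrow\;\;
C=\Bigl(\frac{8\,\omega_f\,\delta\omega}{3\gamma}\Bigr)^{1/2},\\
\frac{dQ}{d\tau} &= -\kappa Q+\frac{\partial g}{\partial P},\qquad
\frac{dP}{d\tau} = -\kappa P-\frac{\partial g}{\partial Q}.
\end{aligned}
```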
Essentially what happens is that g is again a scaled Hamiltonian, which I construct in the following way. I have to make a canonical transformation of this Hamiltonian H and change to the scaled variables. So I make again the canonical transformation and the scaling, and the result is some typical scaling energy times a function g(Q, P). The easiest way to see what this scaling energy is, and where the parameters come out, is to look at the quartic term. For the harmonic term we know that, because of the transformation, instead of ω₀² we have δω. We have changed to the rotating frame, and this is a non-inertial frame, so there is, if you like, a Coriolis force. You know Coriolis forces? Again, speaking of micromechanics: when you rotate your iPhone, the picture changes. How does that happen? It is one of the applications of micromechanics; Coriolis forces are important. Because of that, this term is ugly. The quartic term is good, nothing happens to it, and therefore you can see the scaling: the scale of Q is C, so my scaling energy goes as γC⁴, right? It's the q⁴ term. And then there is the factor 3/8, which comes from the averaging: cos⁴, Yaroslav was talking about it today, averages to 3/8. So E_sl = (3/8)γC⁴. With this 3/8 I know what my Hamiltonian has become, and my equations of motion for the scaled variables have the form dQ/dτ = −κQ + ∂g/∂P, with κ the scaled decay rate, and dP/dτ = −κP − ∂g/∂Q. Let's try to understand the form of this Hamiltonian g, because this is one of the miracles of nonlinear oscillators in the rotating frame. This Hamiltonian g has some terms which are very clear. This is the derivative of g over P, so let's try to guess the form of this term. Take g = (1/4)(P² + Q²)². If I differentiate this over P, I get P(P² + Q²): two times two is four, over four is one. So I have this term here, and I have the same kind of term there with the right sign, and I am done with this part. I am still left with this part, the δω·Q and δω·P, which now just become −P and +Q. Well, this is also easy: add −(1/2)(P² + Q²). If I differentiate it over P, I get the −P term, and similarly over Q I get the right term there. The only thing that is left is the force term, which I will write as −β^(1/2)·Q. The β^(1/2) is for historical reasons: this notation has been used in so many papers, by so many people, that I prefer to keep it. The force amplitude itself is a bad characteristic, because F can be positive or negative, so it is better to use the squared amplitude. The scaled squared amplitude is β = F²/(4C²ω_f²(δω)²); the two powers of δω appear because I divide by δω twice. (Substituting the value of C², this is β = 3γF²/(32ω_f³(δω)³).) So with this, I have a full description of my dynamics for P and Q, in slow time, in the rotating frame. So now I am absolutely clean: I have two equations of motion, and these equations have two parameters. We started initially with a lot of parameters: we had ω₀, we had γ, we had the field amplitude, we had the field frequency; four parameters, right? And I have reduced these dynamics to just two parameters, κ and β, which is much better and much easier to comprehend. Now I have to understand how these dynamics evolve in time.
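Before we go on, here is a minimal numerical sketch of these two equations (not from the lecture; the parameter values are illustrative ones lying in the bistable region):

```python
import numpy as np
from scipy.integrate import solve_ivp

kappa, beta = 0.3, 0.1   # illustrative values in the bistable region

def rhs(tau, y):
    """Scaled RWA equations of motion:
    dQ/dtau = -kappa*Q + dg/dP,  dP/dtau = -kappa*P - dg/dQ,
    with g = (1/4)(P^2+Q^2)^2 - (1/2)(P^2+Q^2) - sqrt(beta)*Q."""
    Q, P = y
    r2 = Q**2 + P**2
    return [-kappa*Q + P*r2 - P,
            -kappa*P - Q*r2 + Q + np.sqrt(beta)]

# Integrate from two initial conditions; printing the end points shows
# which stationary state each of them approaches.
for y0 in ([0.1, 0.0], [1.2, 0.3]):
    sol = solve_ivp(rhs, (0, 200), y0, rtol=1e-8)
    print(y0, '->', np.round(sol.y[:, -1], 4))
```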
And the first question that I want to ask you: do you think it is useful at all to have these equations? And if it is useful, tell me a single measurement where these equations can be used. Let me give you a hint. Q and P are the quadratures of the oscillator, right? This is what you measure if you are measuring quadratures, or, as you sometimes call them, the in-phase and out-of-phase components. So these are Q and P. So tell me a single experiment you can think of where these equations can be used. Go ahead. Homodyne, right? And what will you measure in homodyne? You will measure quadratures, right? And will you see the evolution in time? How will you observe, what do you do to see, that your quadratures evolve in time? Clearly, if you wait for a sufficiently long time, the system will come to a stationary state, the quadratures will be constant, and you can measure them. Can you measure the evolution in time; any idea of an experiment that would show that they actually vary in time? When you turn on the field, the system is not in the stationary state, right? It has to evolve to the stationary state, and you will see how these quadratures evolve in time, approaching the stationary state. Or you change the field, or you change the frequency of the field. So when you change parameters, this is when this time evolution becomes important, comes into play, and becomes observable. Now the question is... this looks like a complete mess. Yeah. Yeah. This is minus: it's (P² + Q²) with a minus. And if it is not with a minus, then I made a mistake, but I think I did not. I think it's correct. No, no, this is correct. This is correct. And so this will be minus, minus times minus will be plus, and this is a plus, and for dP it is minus. So this is correct. It's right. So this is something, thank you, this is something that I keep in my back pocket. The other one, not the one where I keep the mic. I do this just for convenience; it makes no difference. Of course I can put absolute values of δω and of γ everywhere. What is important for bistability is to have δω·γ > 0; this is necessary for the bistability. And I kind of imposed upon you that γ is positive, and therefore I quietly presumed that δω is also positive. I can choose δω the way I want, and as I said, this is just the choice of my scaling; it is not carved in stone. And that is connected to the minus, right? But okay, let me move on. And, yes, formally yes, and there is a slightly deeper reason. So what happens with this Hamiltonian and what happens with this evolution in time? Let me try to rewrite these equations now in explicit form. Just for P: dP/dτ = −κP + Q − Q(P² + Q²) + β^(1/2), using this expression for g. So tell me something. g is the energy, the Hamiltonian. What is the most striking difference of this Hamiltonian from the original one, besides the time dependence? This Hamiltonian is time independent, okay. But look at the original Hamiltonian: there is kinetic energy, right, and potential energy. What about this Hamiltonian? It is not a sum of a kinetic and a potential energy. Can a Hamiltonian not be a sum of kinetic and potential energy? Do you know a system where the Hamiltonian is not a sum of kinetic and potential energy? It's independent of time... but okay, a potential energy can depend on time; that's fine, it is still a potential energy.
So think of a system where the Hamiltonian is not a sum of kinetic and potential energy. You know this system very well; everybody knows it, the simplest one. How about a spin in a magnetic field? Once you have angular momentum, it is a different story: the Hamiltonian is not a sum of kinetic and potential energy. So for a spin, it is not a sum of kinetic and potential energy. For a relativistic particle, this is actually the first term in the relativistic Hamiltonian of a particle in a magnetic field; there, too, you cannot separate potential and kinetic energy. And you have to live with it. That's life. But it is also nice. And now, what I want to do... I no longer need this line. I will try to describe how to understand this motion. Except that this thing doesn't do anything. Let's try this one. Not much better. Well, as I said, by looking at the Hamiltonian alone I of course cannot tell whether I made a canonical transformation. And as I said, I did two things: a canonical transformation and a scaling. Now, the transformation that I did, I didn't want to write and will not write in classical terms, because it's a mess. But in quantum terms it is actually quite simple to write the canonical transformation from the lab frame to the rotating frame, and I will do it later on, when we will be working on quantum effects more closely. So, how do you describe the motion of a system? One of the buzzwords today... well, suppose you open the arXiv and you look for the most frequently used term. What do you think you will see? Let's vote. My vote is for topology, right? Actually, topology emerged in nonlinear dynamics well before it became a buzzword in physics. Let's see where this topology is coming from. Suppose I have just a free harmonic oscillator with a friction force. Then I know the solution of the equation of motion: q(t) = A e^(−Γt) cos(ω₀t + φ), and, if Γ is much less than ω₀, p(t) = −A ω₀ e^(−Γt) sin(ω₀t + φ). This is the solution that you know for a damped harmonic oscillator. It is pretty ugly and hard to picture, because it has the amplitude, it has the phase, and it evolves in time. So I know that q(t) depends on time something like that, and p(t) depends on time in a similar way. If I change the amplitude, this plot will change; if I change the phase, this plot will change as well. But I can do what is called a parametric plot. Now, how many of you have been using Mathematica? How many of you have seen this command, ParametricPlot? Okay, almost everybody on that side of the room, just one or two persons on this side of the room. That tells you something about probability theory. So what I can do: from this equation I can express t. This gives me p as a function of t, but I can solve it and find t as a function of p, and then find q as a function of p, or vice versa. And what I will have is the following thing. What I am plotting is called the phase plane; it has p and q as variables. And I know that as the system evolves in time, ultimately it comes to the point where p and q are both zero. But it starts somewhere here, and then p and q evolve in time. If there were no decay, what would be the relation between p and q? I will tell you: the energy is E = p²/2 + ω₀²q²/2, and the energy is constant. This is the equation of a very special curve in mathematics.
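For those who prefer it in code, the same parametric plot takes a few lines of Python (a sketch with illustrative values):

```python
import numpy as np
import matplotlib.pyplot as plt

w0, Gamma, A, phi = 1.0, 0.05, 1.0, 0.0     # illustrative; Gamma << w0
t = np.linspace(0, 80, 4000)
q = A*np.exp(-Gamma*t)*np.cos(w0*t + phi)   # q(t) of the damped oscillator
p = -A*w0*np.exp(-Gamma*t)*np.sin(w0*t + phi)

plt.plot(q, p)                  # the parametric plot: q(t) against p(t)
plt.xlabel('q'); plt.ylabel('p')
plt.show()                      # with decay: a spiral; without: an ellipse
```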
Do you remember what this curve is called? An oval... a circle... Ellipse, correct. So this would be an ellipse, and it tells you that in this case the system just moves along the ellipse as time goes. So rather than plotting those complicated functions, where there is no decay I can just draw an ellipse, and this will tell me how the system evolves in time. If I change the energy, what will happen with this ellipse? It will just become smaller or larger, right? And therefore, essentially, if I have one of these ellipses, I have all of them: I know the topology of the phase portrait. Now, if I have decay, I understand that instead of an ellipse, what happens is that the ellipse becomes a spiral. It will be spiraling, spiraling in. And again, this is my initial condition: I start from this point, and I know how it will spiral. If I have a different initial condition, I start from that point, and I know that it will still be spiraling, spiraling in. And this spiral is the phase portrait; that is how it is called, the phase portrait of the system. Now, when we are talking about a phase portrait, think about your own portrait. What do you have on your driver's license? You have your eyes, your nose, your ears; they have a particular shape. But if you are a human being, then all your portraits, and mine too, have a nose, eyes, brows, ears. So they are all in some sense similar. And therefore, if I want to characterize my system, what I have to say is what its phase portrait looks like. If I know the phase portrait, I can start from different points, from different initial conditions, but I know how the system will generally evolve in time. So let's see what the phase portrait is for the system we are dealing with now, in these variables Q and P. Again, I will be trying to compromise with the bottom line. So now, Q and P. First of all, I want to understand what the stationary states of this system are. So I have to set this equal to zero and this equal to zero. And if I want to find the phase portrait, I have to find how many eyes my system has, and maybe noses: the stationary states. And as Yaroslav told you, and I told you too, this system can have three stationary states. So let's see what happens when it has three stationary states. It has one stationary state, another stationary state, and another stationary state. These dots are the dots where Q and P do not change in time: stationary states. But now I want to understand how the system behaves, and to find that out, I have to look at the trajectories on this phase portrait. What you will find is that these trajectories are peculiar, because I can start from this point and approach this state, and start from this point and do the same. But if I start from here, I will move like that; and if I start from here, I will also move like that: I will approach a different stable state. These states which the system approaches are called stable states. Mathematicians would kill me, because they call them asymptotically stable states, but stable is good enough for a physicist. What I see now is that apparently this behavior is very different. What happens? These two kinds of trajectories have to be separated somehow. And indeed they are: there is a line here which is called the separatrix. It separates these two groups of trajectories, the ones that go to one state and the ones that go to the other. And this is the phase portrait.
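Here is a small sketch (again with illustrative parameter values) that finds the stationary states of the scaled equations numerically and classifies their stability from the linearization:

```python
import numpy as np
from scipy.optimize import fsolve

kappa, beta = 0.3, 0.1          # same illustrative bistable values as before
sb = np.sqrt(beta)

def F(y):
    Q, P = y
    r2 = Q**2 + P**2
    return [-kappa*Q + P*r2 - P, -kappa*P - Q*r2 + Q + sb]

def jac(y, h=1e-6):
    """Numerical Jacobian of F; its eigenvalues classify each fixed point."""
    J = np.zeros((2, 2))
    for j in range(2):
        e = np.zeros(2); e[j] = h
        J[:, j] = (np.array(F(y + e)) - np.array(F(y - e)))/(2*h)
    return J

roots = []                       # search from a grid of starting points
for Q0 in np.linspace(-1.5, 1.5, 7):
    for P0 in np.linspace(-1.5, 1.5, 7):
        r, _, ok, _ = fsolve(F, [Q0, P0], full_output=True)
        if ok == 1 and not any(np.allclose(r, s, atol=1e-6) for s in roots):
            roots.append(r)

for r in roots:                  # in the bistable region: three roots,
    ev = np.linalg.eigvals(jac(r))   # two stable states and one saddle
    print(np.round(r, 4), 'stable' if np.all(ev.real < 0) else 'unstable')
```

The separatrix described in the lecture is formed by the trajectories that flow into the unstable (saddle) point.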
This is what you want to know about the system; the rest are details. If you know the phase portrait and you change parameters a little bit, this phase portrait changes slightly, and you keep this topology of the system. Except... Except. I go back to what Yaroslav was saying before, and let me try again here. I said that my system has two coexisting stable states and one stationary state which is unstable. But I know that my system has two parameters: κ, the decay rate, the friction coefficient, and β, the scaled squared amplitude of the driving force. Therefore I can ask myself a question: on the plane of β and κ, where do I have three stationary solutions? And as we were talking yesterday, it is clear that for a very weak drive, that is, for very small β, my oscillator is essentially linear; the amplitude is small, the oscillator is linear, and I can have only one stationary solution. For very large β, for a very strong drive, again, as we discussed yesterday, my oscillator can have only one stationary state, one stable state. And therefore there is a region here, which you can find; actually, there is a simple trick, which I don't have time to show, that allows you to find the boundaries of the region where there are three states (see the sketch at the end of this passage). And here there is one state, and here there is one state. These lines are the lines where the topology of the phase portrait changes. And it changes in the following way. I was telling you that if I change parameters, my stationary points move around, and my phase portrait changes a little bit. But what happens if, for example, this point merges with this point? Then this whole area disappears, right? And then you have only one state left, and the topology of the phase portrait has changed. The parameter values where this happens form exactly this line. On this line, this point, which is called the saddle point, merges with this point, the small-amplitude state, and they both disappear. And this point can merge with this point, and then they also both disappear. So these are bifurcation points, or bifurcation lines. Bifurcation: 'bi' means two, and we have two states which merge together. Now I want to reproduce the plot that I made yesterday, when I was plotting a² versus F². I can now write it in terms of β, which has this form. And this is the bifurcation point; this is the amplitude of this state, this is the amplitude of that state; at the bifurcation point they merge together. This is the bifurcation point for a given frequency of the field. For a different frequency of the field, κ is different: it is this point, it is this point. This is what happens. So this is another bifurcation point, where these two states merge together. So these bifurcation points are the points where the topology of the phase portrait changes, and this is a buzzword for you in this sense. Now, it turns out that these bifurcation points are very useful, and they have been used in quantum information, in particular for what are called bifurcation amplifiers. And if I have time, which I doubt, I will tell you how they work. But this is a very important device, which was essentially one of the first devices (probably the first; correct me if I'm wrong, experts here) to measure the states of qubits in a circuit QED environment. Those measurements were based on bifurcation amplifiers, and they are still used in this area and in other areas too. So this is the picture, and it is totally done by mathematicians.
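The 'simple trick' is not spelled out in the lecture, but one can check from the stationarity conditions above that the squared amplitude r² = Q² + P² of a stationary state obeys β = r²[(r² − 1)² + κ²]. The bifurcation lines are then the extrema of β as a function of r², which exist only for κ < 1/√3. A sketch tracing the two boundaries:

```python
import numpy as np
import matplotlib.pyplot as plt

# beta(r2) = r2*((r2 - 1)^2 + kappa^2); its extrema over r2 mark the
# saddle-node bifurcations: 3*r2^2 - 4*r2 + 1 + kappa^2 = 0
kappa = np.linspace(1e-3, 1/np.sqrt(3) - 1e-6, 400)
disc = np.sqrt(1 - 3*kappa**2)
r2_small = (2 - disc)/3          # here the small-amplitude state merges away
r2_large = (2 + disc)/3          # here the large-amplitude state merges away
beta_of = lambda r2, k: r2*((r2 - 1)**2 + k**2)

plt.plot(kappa, beta_of(r2_large, kappa), label='lower boundary')
plt.plot(kappa, beta_of(r2_small, kappa), label='upper boundary')
plt.xlabel('kappa'); plt.ylabel('beta'); plt.legend()
plt.show()      # the wedge between the two curves has three stationary states
```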
Mathematicians knew all of this, I don't know, probably at the beginning of the 20th century, maybe later. So far there is not much physics. Now let's put physics into it, and let's see what it is interesting for and what we can learn from it. Well, the physics here comes from... yes, a question? Okay, if you don't ask me questions, I will be asking you questions, okay? So you decide what you prefer. What is good about this system? It is that this system is far away from thermal equilibrium. I drive my oscillator strongly. This poor guy, which without a drive was just a particle in a potential well, very simple, now with a drive can have three states, complicated dynamics. It's a very, very different thing, which we got almost for nothing. Now we understand the dynamics of this system; we want to understand what happens with fluctuations. And another buzzword that you can find, comparably less frequently but still quite frequently, is fluctuation theorems. Have you heard this? No. The Jarzynski equality? No. Okay. Well, I tell you that these are less buzzy words, but they are still used very frequently, particularly by people who study processes away from thermal equilibrium. And this system actually demonstrates that these buzzwords have a much narrower range of applicability than the people who use them claim. Yeah, I tell them to their face, so I can also tell you behind their back. But this system allows you to understand what happens when you are away from equilibrium. Because if the system is in thermal equilibrium, we know that the probability distribution is e^(−H(q,p)/kT), let me use small q and p here, divided by the partition function. So if I have the Hamiltonian of the system, I know where I will find it and with what probability. Now, I don't have this distribution for the system that I am talking about. And to understand where the distribution is coming from, I have to understand where the fluctuations are coming from. In equilibrium, fluctuations come from temperature. Here, so far, there is no temperature. Classically, temperature comes in through the noise. Now, the difference is, and we will come to it, that classically these equations are fully consistent: I can think of a classical system that would be completely described by these equations, except that the dissipation has to come from coupling to a thermal reservoir at zero temperature. Quantum mechanically, as we will see, these equations are incorrect; incorrect because they do not take into account quantum fluctuations, which come into play invariably if you have a non-equilibrium system. These quantum fluctuations have a different physical origin from thermal fluctuations, not entirely different, but they are irreducible: their intensity does not go to zero as temperature goes to zero. We will see how this comes about. But in the meantime, if I want to study fluctuations, I have to add the noise that comes from the same coupling to a thermal reservoir that leads to dissipation. So where does this dissipation come from? It comes from the fact that my oscillator is coupled to some thermal reservoir which has a lot of degrees of freedom; you can call this the back-action of the reservoir. The oscillator perturbs the reservoir, the reservoir reacts, and this reaction is friction.
But when the oscillator is coupled to the reservoir, the reservoir on its own has a lot of degrees of freedom, they perform thermal motion, and therefore the reservoir exerts a force on the oscillator. The miracle is that this force comes from the same interaction as the back-action which gives the dissipation. Now, you may be familiar with this language that relates dissipation to back-action, because my impression is that many of you have a background in quantum optics, so you should be familiar with this terminology. Not really? Where does dissipation come from? How does friction come about? If you have a particle in free space, absolutely free space, not coupled to anything, will there be any friction? No. Where does friction come from? From the environment, from the coupling to the environment. And you have the lectures where Orel in particular talks about this. You can think of Brownian friction: you have this particle, it collides with molecules, and when it does not move, on average there are collisions from the right and there are collisions from the left, and it is shaken back and forth; but if it starts moving under a regular force, there are more collisions from the right than from the left, and therefore there is a net force, and this mean force is the friction force. So this friction comes from the interaction with the environment, and you can write far more sophisticated models than that; usually this is what causes friction in nanomechanics. But friction comes from the interaction with the environment, and that same interaction leads to some random force, some noise, which I will now add here. It leads to noise terms that drive both of my quadratures, Q and P. And, as I said, there is a wonderful theorem which is called the fluctuation-dissipation theorem; you have heard this term used here several times. The fluctuation-dissipation theorem tells you that there is a relation between this decay rate and the properties of this noise. The relation is that the correlation function of the noise is k_B T (Boltzmann constant) times δ(τ − τ′). Let me make sure that I am not missing the scaling factor... I think there is a λ over ħ, something like that. Yes: times 2λ/ħ. This is from the scaling. You remember that λ is our scaled Planck constant, so this ratio is independent of the Planck constant; we are in the classical world. The correlator of the noise driving P is exactly the same, and these two noises are independent in the classical limit. So what these two noises do is the following. Suppose I have waited for some time and my system has approached a stable state, for example this stable state. If there were no noise, it would just approach the stable state and stay there. But because of the noise, it fluctuates around it. And these fluctuations have very specific features. First of all, you have heard about squeezing in the lectures before. And squeezing here means essentially that the average value of (Q − Q_st)² is not equal to the average value of (P − P_st)². That is, if I look at the distribution of my system about this stable state, it is not a circle. It is an ellipse, and moreover, it is even tilted. So squeezing is inherent to driven systems: fluctuations about your stable state are necessarily squeezed.
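This is easy to see in a simulation. A minimal Euler–Maruyama sketch (the noise intensity D stands in for the k_BT factor above and is purely illustrative): after the transient, the scatter of (Q, P) about the stable state has unequal variances and a nonzero covariance, that is, a squeezed, tilted ellipse:

```python
import numpy as np

rng = np.random.default_rng(1)
kappa, beta, D = 0.3, 0.1, 1e-4     # illustrative; D is the noise intensity
sb, dt, nsteps = np.sqrt(beta), 1e-2, 400_000
sig = np.sqrt(2*D*dt)               # noise step for <xi(t)xi(t')> = 2D*delta

Q, P = 1.2, 0.3                     # illustrative initial condition
qs, ps = [], []
for i in range(nsteps):
    r2 = Q*Q + P*P
    dQ = (-kappa*Q + P*r2 - P)*dt + sig*rng.standard_normal()
    dP = (-kappa*P - Q*r2 + Q + sb)*dt + sig*rng.standard_normal()
    Q, P = Q + dQ, P + dP
    if i > nsteps//2:               # keep the quasi-stationary second half
        qs.append(Q); ps.append(P)

dq = np.array(qs) - np.mean(qs)
dp = np.array(ps) - np.mean(ps)
print('var Q:', dq.var(), ' var P:', dp.var(), ' cov QP:', (dq*dp).mean())
```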
The second thing is... well, has anybody in this room ever measured a power spectrum, the spectral density, of a nanotube? Some people have. So what do you do when you measure the power spectrum, for example, of a harmonic oscillator? I think of my harmonic oscillator as a very tight spiral: it oscillates and weakly decays. If I measure the power spectrum (I need a letter for it; let me call it G_q(ω)), it is defined as follows. Now, you did the experiments, right? What you do is integrate the coordinate over some time: ∫ q(t) e^(iωt) dt. You use fast Fourier transforms, don't you? Okay. So this is what your fast Fourier transform does for you. Then you take the modulus squared, and then you divide by 2T. This is how you calculate the power spectrum of the coordinate q; this is the operation you do in the experiment. The theoretical way is to write it as the integral from minus infinity to infinity, ∫ dt ⟨q(t) q(0)⟩ e^(iωt), where the angular brackets indicate, for this oscillator without drive, thermal averaging. That these two are the same is called the Wiener–Khinchin theorem. So, you measure this for an oscillator; what is the shape of the power spectrum that you measure? Somebody told me that he or she measured the spectrum. You told me. So what is the power spectrum of the oscillator; what did you see in the experiment? A Lorentzian peak at frequency ω₀, right? Well, I can do the same measurement for my driven oscillator, which is close to this stable state. I will just do the measurement of what? Of q(t)? That is what I would do in the experiment, because what else do I have? But before we look at the answer, let's think about the physics of what is going on. Here, the physics was that my oscillator fluctuates in time about its equilibrium position. So if I plot q(t) as a function of time, these are oscillations, fast oscillations, with slowly changing amplitude and phase. And because of this slow variation of the amplitude and phase, the spectral peak has a finite width. If the oscillator were just oscillating at frequency ω₀, it would be a delta function; it is because it does not maintain constant amplitude and phase, because it is noisy, that the peak is broadened. So here, about this stable state, I can have a motion which is very similar to the oscillations of the oscillator: that is my capital Q(t). So Q − Q_st as a function of t looks almost like oscillations. What is the typical frequency scale of these oscillations in real time? First, in dimensionless time: what is the scale in dimensionless time? Look, there are no parameters here. So the scale in dimensionless time is 1, and therefore the scale in real time is δω. So there are these random oscillations at a frequency on the order of (not equal to) δω. And therefore, if I made the power spectrum of capital Q, it would be a Lorentzian-type peak at a frequency of order δω. This is G_Q. What will I have in the spectrum that I measure in the lab frame, the spectrum of q? Well, you remember that q in the lab frame is C(Q cos ω_f t + P sin ω_f t). So if Q is oscillating at frequency δω, at what frequency does this thing oscillate: cos(δω·t) times cos(ω_f·t), at what frequency does this product oscillate? At ω_f plus and minus δω. Exactly. Therefore, if I look at the power spectrum of my oscillator that is fluctuating about this state, what I will have is the following thing.
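Here is a sketch of that measurement applied to a simulated trajectory, for instance the arrays qs, ps from the Langevin sketch above (the frequency scales are illustrative; the lab-frame coordinate is reconstructed from the slow quadratures and fed to an FFT periodogram):

```python
import numpy as np
import matplotlib.pyplot as plt

d_omega, omega_f, C, d_tau = 0.05, 1.0, 1.0, 1e-2   # illustrative scales

def lab_spectrum(Qs, Ps):
    """Periodogram of q(t) = C*(Q cos(omega_f t) + P sin(omega_f t)),
    with Qs, Ps sampled every d_tau in scaled time (tau = d_omega*t)."""
    dt = d_tau/d_omega                    # real-time sampling step
    t = np.arange(len(Qs))*dt
    q = C*(Qs*np.cos(omega_f*t) + Ps*np.sin(omega_f*t))
    # subtract the forced vibration: the delta-function peak at omega_f
    q -= C*(np.mean(Qs)*np.cos(omega_f*t) + np.mean(Ps)*np.sin(omega_f*t))
    qf = np.fft.rfft(q)*dt
    omega = 2*np.pi*np.fft.rfftfreq(len(q), dt)
    G = np.abs(qf)**2/(2*t[-1])           # |integral|^2 / 2T
    plt.plot(omega, G)
    plt.xlim(omega_f - 4*d_omega, omega_f + 4*d_omega)
    plt.xlabel('omega'); plt.ylabel('G_q(omega)'); plt.show()

# lab_spectrum(np.array(qs), np.array(ps))  # two sidebands around omega_f
```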
So now, my G_q(ω): here is ω_f. Now, in principle there may be a delta function at ω_f, which you can always subtract; it is just the forced vibrations, which are always there. And then I will have a sideband here and a sideband here. They are not of the same height; they are equidistant from ω_f, separated from it by roughly δω, of order δω. So in my spectrum I will have two peaks. Well, this is if I am sitting in this state. Now, here is another question for you. If I am prepared in this state, will I stay there forever? Not necessarily, right? Because if I think of a particle in a double-well potential and I prepare my particle in this well, will this particle stay there forever in the presence of noise? No. It can switch, right? Has anybody seen such switching? It is hard to see. But people who are optically trapping particles have seen it and measured it, and again, in these bifurcation amplifiers people see it. So it switches from state to state. If my system is in thermal equilibrium, I can estimate the probability to switch, right? Because to switch, my system has to gain an energy equal to the barrier height ΔU. Therefore my rate of switching is proportional to exp(−ΔU/k_BT). This was discovered by Arrhenius at the end of the 19th century. Then there was a very famous paper by Kramers. Now, the name Arrhenius you have probably not heard. Has anybody heard the name? You have. Good, I am pleased to see that. Who has heard the name Kramers? Okay, many people. So Kramers actually showed how this works for a Brownian particle and calculated what is skipped here, the prefactor. So for a particle in thermal equilibrium this has been done. Here, as I said, I do not have anything that would tell me what the distribution is far away from my attractor. And actually, this driven oscillator, today a micromechanical system (these experiments have been done in micromechanics), is the only system away from thermal equilibrium where this basic feature of a non-equilibrium system, the probability to switch between the states, has been measured, carefully measured and characterized. It was done in the work by Chan in the classical limit, and it was also done, essentially in the same time interval (I don't remember to which side), in the group of Devoret and Siddiqi, who was at that time at Yale; that was about 2004 or 2005. Devoret and Siddiqi did it, I think, a little bit earlier, for a driven Josephson junction, and Chan did it for a micromechanical system. And this is the only system that I know of which is far away from equilibrium where these rare events leading to switching between the states have been measured and compared with the theory that exists for this system. People who... and this is another buzzword that you have heard: protein folding. Who has not heard this phrase? Everybody has heard it, right? We are all eating, and very energetically folding proteins. So protein folding occurs in a non-equilibrium environment, and therefore you have to develop means to calculate how this folding occurs. And there are a lot of numerical algorithms, because proteins are very hard to characterize in the lab: they are non-equilibrium, there are many factors that come into play, and you have to be very careful to reproduce results. With this micromechanical system and the Josephson junction, this has been done, and today this is the only system where it has been carefully done. This is a basic thing in systems away from thermal equilibrium, a basic insight into the statistical physics of systems away from thermal equilibrium.
And the methods that were developed to describe these processes are now used in all this biochemical and blah-blah-blah science. Now I want to switch to quantum phenomena. Any questions? Of course not. Of course not. And therefore, when I am saying that, depending on where you prepare the system, it will go to this state or to this state, I am talking about something that happens on a short time scale. If I wait for a sufficiently long time, my system will most likely be in this state or in this state; it will switch, and ultimately you will find it here much more likely than here. There is a special parameter range where these wells have equal depth, and this is the analog of a first-order phase transition. Does it sound familiar, a first-order phase transition? You have two phases and their free energies are equal; the probability to switch from one to the other is the same as the probability to switch back. This is what you deal with when you boil water to make tea or coffee. So this is a first-order phase transition. And when you have these two minima of equal depth, this is the analog of the first-order phase transition. Here, too, you can find the values of the parameters β and κ where the probabilities to be in the two states are equal, and this is the analog of the first-order phase transition. And people who work with stochastic resonance have used this, as kinetic stochastic resonance and so on. So there is a whole world based on this phenomenon. But let me switch to quantum. Yes? That's a very nice question. In this problem, no. But in nanomechanical systems, there is this onset of oscillations: when you drive a nanomechanical system, you drive it, and all of a sudden you see that, instead of a stationary state in these P and Q, what you have is what is called a limit cycle. The system starts self-oscillating. And I am not allowed to say where this happens, am I? So, Eva has seen it. And it is quite a miracle: it is beyond this simple model and remains to be understood. I think it is an outstanding observation. Now, Zyosha has seen it in a situation where you have coupled modes. So people have seen this. But in this simple model, no; excuse me, it does not happen. So, the quantum world. The quantum world. What I want to start with is yet another buzzword; this is a buzzword day. And here is the buzzword. There are two groups of people: one group who knows this name, and the other group who does not know how to read it. It is read 'Floquet': a French mathematician of the 19th century. And this has become a buzzword again. If you check on the arXiv, I think it was second after 'topological'. Probably, right? Very, very frequently used. So, Floquet states. What I did: I started with the Hamiltonian H, which was equal to... I reproduce it. And this Hamiltonian is a function of time. But it is not just a function of time; it is a periodic function of time. It is equal to H(t + 2π/ω_f): if I increment time by the period of the drive, my Hamiltonian reproduces itself. Because this Hamiltonian is a function of time, I try to solve the Schrödinger equation. What we have for a time-independent Hamiltonian is that ψ(t) = e^(−iEt/ħ) ψ(0): for an eigenfunction, the dependence on time is determined by the energy. And I have eigenfunctions, which are called the stationary states of the quantum system. This is how an atom has stationary states. And all other systems which are not driven also have stationary states.
If the Hamiltonian is independent of time, you solve the problem Hψ = Eψ, and you find the stationary states. No such luck if my Hamiltonian depends on time, obviously, right? So there are no stationary states, and it is tempting to just give up at this point. But we should not rush to give up: we still have our chance. We noticed a very important property, the property that the Hamiltonian reproduces itself over a period of time. Therefore... now, who of you knows yet another name, the Bloch theorem? The Bloch theorem. So, several people know, and several people don't. Who knows about electron transport in solids? So, electrons move through a solid, and you talk about the electron momentum. But does an electron have a momentum if it is moving in a potential which is periodic in space? It is not a free electron; it moves in a non-uniform potential. How does it have a momentum? The answer is: it does not. It does not have a momentum, but it has a quasi-momentum. And this fact that it has a quasi-momentum, which is the content of the Bloch theorem, is a consequence of the periodicity of the crystal. A crystal is: you just take an elementary cell and repeat it, repeat it, repeat it, ad infinitum. Now I look at this Hamiltonian, and what I notice is that there is an operator of time translation, which acts on a function of t so that t becomes t + t_F, and I will take t_F = 2π/ω_f; but I will work more generally: the Hamiltonian does not have to be of this particular form, it can be of any form with period t_F. So I will write t_F. Now, this operator T̂ commutes with my Hamiltonian, right? Because if I act with this operator on H(t), it becomes H(t + t_F), which is H(t); so I can as well move my operator T̂ to the right. Now, if you remember basic quantum mechanics, you know that if an operator commutes with the Hamiltonian, you can think of the eigenfunctions that they share. And in this case, I will be thinking about the eigenfunctions of the operator T̂, and these eigenfunctions are called Floquet states. They have the property that T̂_{t_F} ψ_ε(t) is identically ψ_ε(t + t_F), right? This is the definition of the operator; and, being an eigenfunction, ψ_ε(t + t_F) = e^(−iεt_F/ħ) ψ_ε(t). And you can write this in a form which is exactly the form of the Bloch theorem in solids. In solids, you have an operator that moves you from one elementary cell to another. And since the solid is infinite, when you move from one cell to another, you do not know that anything has changed. So your function has to be an eigenfunction of this translation operator, and this eigenfunction is defined in exactly the same way. And so I will be talking about these eigenfunctions. I want to make sure that I use the same notation that I am using in the lecture notes, so that the notes can be of some use. Yes, I used t_F, right? So this is what I have for my states, and I call these states Floquet states. And this quantity ε is called the quasi-energy, or the Floquet eigenvalue. Quasi-energy because, as I told you, in crystals the quasi-momentum is not really a momentum; same here: ε is not the energy of the system. The system does not have a conserved energy, because the Hamiltonian depends on time. Instead, it has quasi-energy. And the major difference, which I want to emphasize because sometimes it is hidden under the rug, is that the values of the quasi-momentum in a crystal are determined just by the boundary conditions; they are a quasi-continuous set of numbers.
The values of the quasi-energies, in contrast, are found from the Schrödinger equation, because, after all, what I have to do is plug this function into the Schrödinger equation and find this function ψ_ε. So before I do that, I want to show you, to persuade you, that I can write ψ_ε(t) in the following form: ψ_ε(t) = e^(−iεt/ħ) φ_ε(t), with φ_ε(t + t_F) = φ_ε(t). This is again something that people who have worked in condensed matter physics have seen in the Bloch theorem. And you can see that these two conditions are equivalent: if this condition holds and I increment time by t_F, what I will have here is just this factor e^(−iεt_F/ħ), and φ will not change. So this function is the eigenfunction of my operator T̂. And now, to find this function φ_ε and the value of ε, I have to plug it in: iħ ∂_t [e^(−iεt/ħ) φ_ε(t)] = H(t) e^(−iεt/ħ) φ_ε(t). I have to solve this Schrödinger equation with this boundary condition. So the periodicity of the wave function is the boundary condition for the solution that I have to find, and this will give me the values of ε. This is a nasty problem, and people have been working on it in various limits. We are in a privileged position, because we can find the quasi-energies very easily. To show this, let me remind you what we were doing in our problem. When we started the analysis, I told you that this transition to the rotating frame is a canonical transformation. And because this is a canonical transformation, I have the commutation relation [P, Q] = −iλ, where λ is the dimensionless Planck constant. Now I have a quantum mechanics for my variables P and Q. And this quantum mechanics has a Schrödinger equation in the rotating frame: iλ dψ/dτ = gψ (remember that τ = δω·t), and we had g = (1/4)(P² + Q²)² − (1/2)(P² + Q²) − β^(1/2)·Q. Is this an operator? If I write a Hamiltonian H = p²/2 + ω₀²q²/2, is this an operator? It is an operator: it does not just multiply by a function of q. What should I do to reveal its evil operator nature? If I am solving the equation iħ ∂_t ψ(q) = H ψ(q), this H is an operator, right? It does not just multiply ψ(q) by a number; it does something to ψ(q). What does it do to ψ(q)? I have to say that p is −iħ d/dq; then it becomes an operator. Now, here we have this commutation relation, which tells me that capital P is −iλ d/dQ. Again, this is the property of the canonical transformation that we have made. And therefore I have the Schrödinger equation of this form, in which, inside g, I have to use this form for P. Any pitfalls? I want to show that this operator g has eigenvalues, and I want to relate these eigenvalues to the Floquet eigenvalues, the quasi-energies. And then I want to understand what this tells me about the dynamics of the quantum oscillator. Because so far I have found the classical description; I want to find the quantum description. And the quantum description of a periodically driven system is done in terms of the Floquet states. So I have to find these states, and I have to find the Floquet eigenvalues.
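A minimal numerical sketch of this program (not spelled out in the lecture; the parameter values are illustrative): represent Q and P in a truncated Fock basis with [Q, P] = iλ, build the operator g, and diagonalize it; the quasi-energies then follow, as explained next, as ε_n = E_sl·g_n:

```python
import numpy as np

lam, beta = 0.1, 0.05   # illustrative: dimensionless Planck constant and drive
N = 80                  # Fock-basis truncation; increase to check convergence

a = np.diag(np.sqrt(np.arange(1, N)), k=1)       # annihilation operator
ad = a.conj().T
Q = np.sqrt(lam/2)*(ad + a)                      # these satisfy [Q, P] = i*lam
P = 1j*np.sqrt(lam/2)*(ad - a)

R2 = Q @ Q + P @ P                               # the operator P^2 + Q^2
g = 0.25*(R2 @ R2) - 0.5*R2 - np.sqrt(beta)*Q    # RWA Hamiltonian g(Q, P)
g = (g + g.conj().T)/2                           # symmetrize against rounding

gn = np.linalg.eigvalsh(g)                       # scaled eigenvalues g_n
print(np.round(gn[:6], 5))                       # quasi-energies: eps_n = E_sl*g_n
```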
So this is the goal of what I am doing. And what I claim is that if I solve this Schrödinger equation, g ψ_n = g_n ψ_n, these eigenvalues g_n will give me the eigenvalues ε. Which is essentially obvious. The obvious thing is that, as you remember, when I went from this Hamiltonian to g, I did two things: first, a canonical transformation, which is an exact step, nothing wrong about it; then the scaling; and then I made the rotating wave approximation. So my H has become the scaling energy E_sl times the operator g(P, Q). And therefore I understand, and I can show it more carefully, that the eigenvalues of this operator g, in which I disregarded the small fast-oscillating terms, are simply related to the Floquet eigenvalues, just by the relation ε_n = E_sl·g_n. Therefore my quantum formulation in the rotating frame immediately gives me the spectrum of the quasi-energies, of the Floquet eigenvalues. So this is a full description of the quantum dynamics, which I have obtained almost for free, just from the nature of the transformations that I have made. And what I have next is the set of eigenfunctions ψ_n, which are the Floquet eigenfunctions, because when I increment time... I have changed to the frame which oscillates as e^(iω_f t), so if I increment time by the period of the drive, nothing happens to the frame. The only thing that comes out is this spectrum, the eigenvalues g_n. Therefore I have a set of Floquet eigenstates, and I have a set of Floquet eigenvalues. And essentially this is as complete a description of a non-equilibrium quantum system as it gets. And tomorrow we will see unusual consequences of this description, which come about when we start talking about fluctuations, and somewhat unexpected things. And just to tease you, I will tell you: I know that if I have a particle in a potential well, which has energy levels, and I couple this particle to a thermal reservoir at zero temperature, what will happen to this particle after some relaxation time? In what state will I find it? In the ground state here, right? So should I expect something like that for the driven oscillator? If I couple it to a thermal reservoir at zero temperature, I will persuade you that, no, it does not go to an analog of the ground state, because quantum fluctuations are in some sense irreducible. They are always there. There is always quantum noise, and this quantum noise heats up the system. This effect is called quantum heating, and it has been seen in experiment. And there are other very weird fluctuation phenomena in this quantum system, with no analog at all in systems in thermal equilibrium. On this, thank you very much.