Time to start the second lecture. OK. The second lecturer is Professor Jensang Lee, from the KIAS School of Physics. He will be talking about stochastic thermodynamics. Please welcome him. OK. Thank you for the introduction. It's nice to meet you all in this nice weather. My lecture is about stochastic thermodynamics. During the past three decades there has been tremendous advancement in stochastic thermodynamics, and the goal of my lectures is to cover everything from the basics to the important cutting-edge theories. But the scope is very wide, so I'm not sure whether I can finish what I have prepared; I will probably run out of time in every lecture. I don't worry about that — I will simply stop when time is over. Whenever you have any question, please do not hesitate to ask me or interrupt me. OK. So my lecture is about, as I said, stochastic thermodynamics. Let me first start with the question: what is thermodynamics? Thermodynamics is the branch of physics that deals with heat, work, temperature, energy, and entropy, and their relations. From these relations we can understand important physical properties of a system. Let me give you an example. This is a heat engine we are all aware of. In this example, the engine absorbs heat Q_h from a heat bath at temperature T_h, the hotter temperature. Using this absorbed heat, it produces some work W, and the remaining energy is dissipated as heat Q_c into the colder reservoir at temperature T_c. So this is the meaning of heat, work, and temperature. Here I emphasize the important words in bold font. OK. Then what are the relations? There are several important ones. The first is the thermodynamic first law, which is nothing but energy conservation. Let's say E is the energy of the engine. Then its change is given by the absorbed heat
minus the dissipated heat minus the extracted work: that's simple energy conservation. The second relation is the so-called thermodynamic second law, which states that the entropy production should be non-negative. In this example the entropy production is given by the Clausius form, heat divided by temperature. OK, so what can we learn from these relations? Using the first law, we can write Q_c in terms of Q_h and W; plugging that into the second law and rearranging the terms, we finally get an inequality. Look at one side of it: the extracted work divided by the absorbed heat, W/Q_h, which can be interpreted as the efficiency of the engine. This efficiency is bounded by a quantity that depends only on the temperatures of the reservoirs, 1 − T_c/T_h, and this quantity is called the Carnot efficiency. The Carnot efficiency does not depend on any details of the engine; it depends only on the temperatures, so it is a kind of universal bound. From these simple relations we now know that no engine efficiency can be larger than the Carnot bound. This is what we usually do in thermodynamics. OK, let me summarize with this schematic diagram. There is a system we are interested in — you can think of it as the engine in the previous example. A thermodynamic system also needs an environment; we usually call this environment a reservoir or a bath. There can be multiple reservoirs, and each bath has its own temperature or its own chemical potential. So what is heat? Heat is energy transfer between the environment and the system. And what is work? Work is energy transfer between the system and an external agent. We have to distinguish these concepts. So intrinsically, a thermodynamic system is not a closed system; it must be an open system. And we have these relations: the first law and the second law.
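The chain of reasoning above — first law, Clausius entropy production, Carnot bound — can be written as a few lines of code. This is a minimal sketch (the function names, temperatures, and heats are my own illustrative choices, not from the lecture):

```python
def carnot_efficiency(t_hot, t_cold):
    """Carnot bound eta_C = 1 - T_c/T_h: depends only on the temperatures."""
    return 1.0 - t_cold / t_hot

def engine_efficiency(q_hot, work):
    """Efficiency eta = W/Q_h: extracted work over absorbed heat."""
    return work / q_hot

def entropy_production(q_hot, work, t_hot, t_cold):
    """Clausius entropy production Q_c/T_c - Q_h/T_h per cycle, with
    Q_c = Q_h - W fixed by the first law (cyclic engine, no net dE)."""
    return (q_hot - work) / t_cold - q_hot / t_hot
```

Any work output with non-negative entropy production automatically respects the Carnot bound; pushing the efficiency above 1 − T_c/T_h would force the entropy production negative, which the second law forbids.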
So from these relations, the thermodynamic properties are determined. This is what we usually do in thermodynamics. OK, so up to now I talked about thermodynamics. The next question is then: what is "stochastic"? In the previous example, the heat engine usually has a macroscopic scale, and at such a macroscopic scale the motion is deterministic. However, if we are interested in very small systems — mesoscopic or microscopic systems, for example Brownian particles or many biological systems such as motor proteins or RNA folding and unfolding processes — the important thing is that at this scale the motion is not deterministic. As we can see from this movie of a myosin molecular motor, the motion is not deterministic but shows very random, stochastic behavior. OK, so what is the origin of this stochasticity? Let me show you another movie. Let's say we are interested in the motion of this yellow ball, a Brownian particle. Then we can directly see the origin of the stochasticity of its motion: it originates from the interaction between the system and the environmental particles. In principle, even though it looks like stochastic motion, the motion is deterministic if we know the full Hamiltonian of the system. In practice, however, it is infeasible to keep track of all the degrees of freedom of the reservoir particles, because there are essentially infinitely many particles in the reservoir. So to describe the motion of this kind of Brownian particle, we need a phenomenological description, a phenomenological equation of motion. In my lectures I will talk about two phenomenological descriptions. The first one is the Langevin equation, which has a continuous state.
And the second one is the master equation, which describes a Markov jump process and has a discrete state. OK, clear? OK, so stochastic thermodynamics: it is thermodynamics for small stochastic systems. To do thermodynamics, the first thing we have to do is define heat and work. So in lecture one I will talk about how to define heat and work in small stochastic systems. In such small systems we also have to understand the concept of a stochastic trajectory, so in lecture one I will also talk about what a stochastic trajectory is and how to calculate its path probability. Then in lecture two I am going to talk about the definition of entropy production and the fluctuation theorem; we can regard the fluctuation theorem as a generalized thermodynamic second law. In the third and fourth lectures I will talk about two very important relations: the first is the thermodynamic uncertainty relation, and the second is the thermodynamic speed limit. Just before this lecture, Professor Joe distributed my lecture notes, so you can see the files. OK, so in lecture one today I will talk about two things: thermodynamics for Langevin dynamics and for Markov jump processes. Let me first talk about Langevin dynamics. Suppose there is a Brownian particle immersed in a reservoir. The reservoir consists of many, many reservoir particles, and it has its own temperature T. To make a phenomenological description of this motion, we basically need two ingredients. The first ingredient is dissipation. Suppose the initial velocity of the Brownian particle is v_0, and after that we do not apply any external force. Then the velocity will decay exponentially in time. This is the meaning of dissipation.
This is what we usually observe in this kind of system, and we can write it phenomenologically as a friction equation that explains the exponential decay. Here m is the mass and γ is the dissipation coefficient. We know how to solve this equation, and the exact solution tells us that in the long-time limit v goes to 0. But is that really true in such a system? No. If we observe the motion of a Brownian particle, it actually keeps moving; it does not settle into the v = 0 state. So we need another ingredient, which is random noise: we have to add a white Gaussian random noise ξ. The first property of this Gaussian noise is that its average is zero. So if we take the average of the equation of motion, it reduces to the dissipative equation above; in other words, the equation of motion is correct in the sense of average values. The second property is that the noise–noise time autocorrelation function is given by a delta function with some noise strength B. The delta function means the random noise is Markovian: for different times there is no correlation between the two noises. OK. Of course we can solve this equation exactly — this is the solution; you can do it yourself after the lecture. From the solution we can calculate the average of v², and if we take the time-to-infinity limit, this value approaches B/(γm). But we know from the equipartition theorem that in equilibrium the average of v² is given by k_B T/m — this is a well-known relation. From it we can determine the noise strength: B = γ k_B T. This relation is called the Einstein relation.
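The equipartition check that fixes B = γ k_B T can be reproduced numerically. Here is a minimal Euler–Maruyama sketch (all parameter values are my own illustrative choices, in units with m = γ = k_B T = 1):

```python
import numpy as np

def simulate_velocity(n_particles=50_000, n_steps=5_000, dt=1e-3,
                      m=1.0, gamma=1.0, kT=1.0, seed=0):
    """Euler-Maruyama integration of m dv = -gamma*v dt + noise, with the
    noise strength fixed by the Einstein relation B = gamma*kT
    (so each step's noise impulse has variance 2*B*dt)."""
    rng = np.random.default_rng(seed)
    B = gamma * kT                      # Einstein relation
    v = np.zeros(n_particles)           # all particles start at v = 0
    for _ in range(n_steps):
        noise = rng.standard_normal(n_particles)
        v += -(gamma / m) * v * dt + np.sqrt(2.0 * B * dt) / m * noise
    return v
```

Starting from v = 0, the long-time average of v² approaches the equipartition value k_B T/m precisely because the noise strength is tied to the friction by the Einstein relation; choosing B differently makes the sampled variance miss it.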
In this way we can set up the Langevin equation, which describes the motion of a Brownian particle immersed in a reservoir with temperature T. OK. Any questions? No? OK. So we have basically set up the equation of motion, the Langevin equation. The friction-plus-noise term we can regard as a phenomenological force exerted by the heat bath. Now we apply an external force. The external force can have two parts: the first part is a conservative force, and the second part is a non-conservative force. Non-conservative means it cannot be derived as the gradient of some potential. Here the potential u has two arguments, λ and x: x is the position, and λ is a time-dependent protocol. For example, if we write the potential as a harmonic potential with coefficient λ(t), it means the stiffness of the harmonic potential is time-dependent. In such a way we can make a time-dependent potential. By adding this external force to the equation of motion, we get a phenomenological description of the Brownian particle when an external force is involved. And this is a different presentation of the same thing — the stochastic differential equation form. OK, now we have set up the stochastic differential equation for this stochastic system. To go further, we need some knowledge of stochastic calculus. I will only talk about the very basics, so don't worry. OK, now let's consider a product of regular functions. Here x(t) is the position at the current time t, and f is a function evaluated also at the current time t. In the other product, the position is at the current time t, but the function f is evaluated at the next time t + dt — a different time. The question is whether the averages of these two products are the same. Intuitively we know they are, but I will show it step by step.
The next position is determined by the current position and the current velocity. Because dt is a very small number, we can expand the function f; the first product becomes the equal-time one plus a correction. We know the remaining average is an order-one quantity, so the whole correction term is of order dt, and in the dt → 0 limit it vanishes. In this limit, the product with f at the next time has the same average as the product with f at the current time. OK, this is a well-known result. Now let's make it more general. Take the function at the current time and the function evaluated at the next time, and choose one point in between them with a fraction a, where a is between 0 and 1: we weight the next position with a fraction 1 − a and the current position with a fraction a. Do you get my point? Now let's define this kind of special product, an "a-product" of f with x(t). By this definition, it is the combination of f at the next position with weight 1 − a and at the current position with weight a. But in the line above we showed those two averages are the same, so this general product is nothing but the original product. For a regular product, wherever we pick the evaluation point, the average is the same as the original one. OK, this is a well-known result. However, now consider a product that contains the noise ξ — this random, white Gaussian noise, which is a non-analytic function. When such a non-analytic noise is involved, are the products still the same? That is the next question. OK, so let's first look at the first part. Because the random noise is independent of the current value of v, this average can be factorized into two parts: the average of v and the average of ξ.
Because the ξ average is zero, this product average is zero. Then what about the second one? To calculate it, from the equation of motion we can re-express v(t + dt) in terms of the other quantities. The first term then gives simply zero, and the second term is also multiplied by the ξ average, so it is zero too. The only surviving piece is the last term, which comes from the noise itself. The point is that the noise–noise correlation function is a delta function, and here the two times are the same. So this term does not vanish: it gives an order-one value, from the definition of the noise–noise correlation. It means that the equal-time average and the next-time average are not the same when such noise is involved. This is an important point in stochastic calculus. OK, now consider the more general a-product of v with ξ. As defined, it can be written as a 1 − a fraction of the next velocity plus an a fraction of the current velocity, times ξ. The current-velocity part is simply zero, and we know the value of the next-velocity part, so the average becomes that factor times 1 − a. So in this case the general product depends on a. This is all you have to remember from this part of the lecture: when non-analytic white Gaussian noise is involved, we have to be careful when we take a product and average. OK, and there are special values of a. The case a = 1/2 is called the Stratonovich product; if I use the open-circle notation, it means the product is Stratonovich, a = 1/2. And a = 1 is called the Itô product; the filled-circle notation means the Itô product. These are the two most frequently used products in stochastic calculus. Any questions here? "I'm a bit confused about the last term." This one? "No, the 2γk_B T over m. This one? Yes."
"Certainly dt is small, but the delta function also diverges." You can think of it as just 1/dt — that's all. That is not the mathematical way but the physical way to understand the delta function. The delta function is like a very narrow box of width dt whose integral over the whole range must equal 1, so its height should be 1/dt as we make it narrower and narrower. In that way you can understand the delta function physically and intuitively. "So physically, the delta function is just a narrow Gaussian, something like that?" You can think of it that way — it's an intuitive way to understand the delta function. And if you think of it in that way, then you can physically understand the meaning of this noise calculation. "OK, thank you." OK, any other questions? OK. Anyway, this a-dependence is important in stochastic calculus, and I will now use this property. OK, so starting from the stochastic differential equation, I multiply both sides by v dt with the Stratonovich product. Let's look at the first term. Because of the Stratonovich product, v(t) should be evaluated at the middle point of the current and next velocity values. Calculating this, we get the relation that this term equals the change of mv²/2, which is nothing but the kinetic energy. So this term is the kinetic energy change. This is why we have to use the Stratonovich product: if we used another stochastic product, we would not get the kinetic energy change. OK, then let's look at the next term. Even though there is a Stratonovich product here, there is no ξ noise on either side, so the product can be changed into just a normal product. And then v dt can be written as dx.
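Before continuing the derivation, here is a quick numerical aside: the a-dependent product average discussed above can be estimated by direct sampling. A sketch under the lecture's noise convention, where the noise–noise correlation is 2γk_B T times a delta function (all parameter values are my own illustrative choices):

```python
import numpy as np

def a_product_average(a, n=400_000, dt=1e-3, m=1.0, gamma=1.0, kT=1.0, seed=1):
    """Estimate the a-product average <(1-a)*v(t+dt)*xi(t) + a*v(t)*xi(t)>
    for the underdamped Langevin equation, starting from equilibrium
    velocities. The discrete white-noise sample has variance 2*gamma*kT/dt."""
    rng = np.random.default_rng(seed)
    v = rng.normal(0.0, np.sqrt(kT / m), n)           # equilibrium: <v^2> = kT/m
    w = rng.standard_normal(n)                        # unit Gaussians for this step
    xi = np.sqrt(2.0 * gamma * kT / dt) * w           # white-noise sample
    v_next = v - (gamma / m) * v * dt + xi * dt / m   # one step, no external force
    return np.mean((1.0 - a) * v_next * xi + a * v * xi)
```

The Itô choice a = 1 gives zero, while smaller a picks up the factor (1 − a)·2γk_B T/m; in particular the Stratonovich value a = 1/2 gives γk_B T/m.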
And we know that the total derivative of u can be written by the chain rule in terms of the partial derivative with respect to x and the partial derivative with respect to λ. Using this, we can rewrite the term as the total derivative du minus the partial-λ piece, and dλ can be written as λ̇ dt, where λ̇ is the time derivative of λ. OK. "I wonder, is it OK — here in the second line, you integrate v using the Itô scheme?" Sorry, what? So here? "From the Langevin equation, m v(t+dt) minus m v(t) equals something — that is the Itô scheme. So you are using Itô calculus, right?" I didn't quite get the point. "I think you are using Itô calculus there, but when you define the kinetic energy, you are using the Stratonovich scheme. So I am wondering, is it OK to use both schemes?" OK, so this is the given stochastic differential equation. Here I multiply it by v(t) with some defined product. We can multiply with any stochastic product, because it is just multiplying both sides by something. "I mean, v(t+dt) minus v(t) — isn't that from Itô calculus?" No, no — it just denotes the difference between the current and next velocity. This is the given equation, and here we multiply both sides of the equation by the same number. We are not mixing stochastic products; the meaning of the product here is simply that we multiply both sides by the same number. That's all — it is just a number multiplication, OK? If you have more questions, we can discuss it later. OK, so by using this total derivative, we can now write this term in that way.
And the third term is the same as the previous one: even though it is written with a Stratonovich product, because there is no ξ noise involved, the Stratonovich product can be converted into just a normal product. If we move du to the left-hand side, we get the following. What does it mean? The kinetic energy change and the potential energy change: their sum gives the system energy change, so the left-hand side is the system energy change. The second term is the work done by the external force, so it can be interpreted as work. The work can be divided into two parts: the first part is the work done by the conservative force through the protocol, which is usually called the Jarzynski work; the second part is the work done by the non-conservative force. Their sum gives the total work. And the final term is the heat bath force, so its meaning is the work done by the heat bath force. We can identify this term as heat: heat is the work done by the heat bath force — a phenomenological force, of course. The important thing is that when we evaluate heat, we have to use the Stratonovich product. So this is ΔE = W + Q, the thermodynamic first law for the underdamped Langevin system: this is the work and this is the heat, and the important thing is that we have to use Stratonovich. OK. "Is there any intuitive explanation of why we must use the Stratonovich convention for heat?" Intuitive — I have not thought about it that way. What I know is that to obtain the thermodynamic first law, we have to produce the kinetic energy term here, and to produce the kinetic energy we have to use Stratonovich. That's why we need the Stratonovich product; but I have not thought about it more intuitively than that. "OK, thank you. About the third red underline —"
This one? "Yeah. You said that the Stratonovich product there is the same as the regular product. But I think v(t) is also a random variable, because it contains noise. So why can those two be the same?" OK, so there is an assumption here: that term does not have a velocity dependence. In such a case we can write it this way. If it had a velocity dependence, then it would have to stay Stratonovich. "OK, thank you." OK, no more questions? So this is the first law. Oh, OK, sorry. "So if we had started with Itô calculus, would we still have recovered the first law of thermodynamics?" Sorry, what was the last part? "You recovered the first law of thermodynamics by following the Stratonovich calculus. But if we had started with Itô calculus, we should still recover this kind of thing, because the first law of thermodynamics should still hold, I guess." So you mean: what if we multiply by v(t) as an Itô product? "Yes." If we use the Itô product here, it basically becomes a normal product, and in that case we do not obtain the kinetic energy difference. "But there would be some other consequence from the other terms, and we should still recover the first law of thermodynamics." Meaning? "Because it actually holds in every case, I think — in the case of Itô calculus it should also hold." Yes, of course it holds. The reason I use the Stratonovich product here is precisely to derive the thermodynamic first law. "Yes, that's my question. If we follow the Itô calculus, shouldn't we recover the first law of thermodynamics too? It is the underlying law, so it should still hold in that case also. They are just two different procedures, but intuitively the law should hold for both calculi. Or am I wrong?"
We can use any calculus. The point is what we want to derive — that is the reason I use Stratonovich here. If you can derive some property using Itô calculus, then you can use it; it really depends on which is more convenient for deriving what you want. "OK. So it is the more convenient way to recover the first law of thermodynamics." Yes — to derive the thermodynamic first law and to define what heat and work are; I don't know of another way to show the thermodynamic first law. OK. And now let's consider the overdamped limit, which is the limit m/γ → 0. In this limit the velocity relaxation time is very short compared to the dynamics of the position, so we can neglect the inertial term. The equation with the inertial term is called the underdamped Langevin equation, and the equation with the inertial term neglected is called the overdamped Langevin equation. The overdamped Langevin equation is the one we use for typical biological systems, where the velocity relaxation time is very short — picoseconds, something like that. Now, by moving the friction term to the left-hand side, we can write the stochastic differential equation for the overdamped Langevin dynamics. Here, let's check the stochastic product for this overdamped system. In the underdamped case, I showed you that the a-product of v with ξ depends on a. What about the overdamped Langevin equation? In the overdamped equation there is no velocity variable — the velocity has been integrated out, and we do not care about it. So we do not have that product; instead we have to check whether the a-product of x with ξ gives an a-dependent value.
By definition, this product can be written as a 1 − a fraction with the next position and an a fraction with the current position. We can easily see that the current-position term factorizes into separate averages, the x average and the ξ average, so that term vanishes; only the next-position term remains. From the equation of motion, x(t + dt) contains the ξ noise, so a ξ–ξ correlation function appears here. It is not zero but gives an order-one quantity, as we saw in the underdamped example. The only difference is the prefactor: in the overdamped limit this product also gives an a-dependent factor. OK, so let's now establish the thermodynamic first law for the overdamped Langevin system. We do the same thing as before: we multiply both sides of the equation by ẋ dt with the Stratonovich product. Let's look at the potential part first. This ẋ dt can be written as dx, and the product is a Stratonovich product. Here there is one important thing needed to understand this term. Consider the expansion of the potential evaluated at the next time: λ(t + dt) and x(t + dt) can each be written as the current value plus an increment, and we want to expand the function u. This is the leading order, and these are the first correction terms, the partial derivative with respect to λ and the partial derivative with respect to x. In the usual case we don't care about the next order; we usually keep only up to first order. However, in the overdamped Langevin system, when we expand this function we have to keep up to second order in dx. You see that there is a dx² here, and dx² contains the noise factor; as I mentioned, you can think of the delta function as 1/dt, so the whole dx² term gives an order-dt correction — the same order as the first-order terms.
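This order counting is easy to verify numerically. A minimal sketch, assuming the overdamped step dx = (f/γ) dt + √(2 k_B T dt/γ)·w with w a unit Gaussian (the parameter values are illustrative):

```python
import numpy as np

def mean_dx2_over_dt(dt, n=1_000_000, gamma=1.0, kT=1.0, f=0.0, seed=3):
    """Estimate <dx^2>/dt for a single overdamped Langevin step
    dx = (f/gamma)*dt + sqrt(2*kT*dt/gamma)*w, with w a unit Gaussian.
    The noise part of dx is of order sqrt(dt), so dx^2 is of order dt."""
    rng = np.random.default_rng(seed)
    dx = (f / gamma) * dt + np.sqrt(2.0 * kT * dt / gamma) * rng.standard_normal(n)
    return np.mean(dx**2) / dt
```

The estimate stays near 2 k_B T/γ no matter how small dt is, while the deterministic part of dx² is smaller by a further factor of dt — exactly why the Taylor expansion of u must keep the dx² term.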
So that is why we have to keep up to that order. This can be rewritten by pulling out the common factor dx, and the resulting combination can be written using the notation of the Stratonovich product. The point is this: when we expand a function with respect to x in the overdamped Langevin system, the product should be Stratonovich. It means the total derivative of u, by the chain rule, is the partial derivative with respect to λ plus the partial derivative with respect to x — but the x-derivative term must be a Stratonovich product, for the reason above. OK, this is the second thing we have to memorize. From this equation, the term can be rewritten in terms of du plus the λ-derivative term, and dλ can be written as λ̇ dt. OK, fine. Questions here? OK. Now let's move this term to the left-hand side and that term to the right-hand side. Arranging the terms, we get this equation. In the overdamped Langevin system there is no velocity, so there is no kinetic energy; the system energy corresponds to just the potential energy. So du now equals the energy change. The second term is almost the same as in the previous underdamped case: this is the work. Of course, it can be divided into two parts: the work done by the conservative force — the Jarzynski work, with the same definition as in the underdamped system — and the work done by the non-conservative force. If the non-conservative force has an x dependence, the product should be Stratonovich. Their sum is the total work. Finally, the last term can be identified as the heat, with the same definition as in the underdamped Langevin dynamics.
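The overdamped bookkeeping just assembled can be checked along a simulated trajectory. This is a sketch with my own illustrative choice of a breathing trap, u(λ, x) = λx²/2 with λ(t) = 1 + t/(2τ); the heat is accumulated once with the Stratonovich midpoint and once with the Itô (pre-step) point for comparison:

```python
import numpy as np

def overdamped_first_law(tau=1.0, dt=1e-4, gamma=1.0, kT=1.0, seed=2):
    """Overdamped Langevin particle in a breathing trap u(lam, x) = lam*x^2/2,
    lam(t) = 1 + 0.5*t/tau. By force balance the bath force equals du/dx,
    so the heat increment is (du/dx) o dx. Returns (dE, W, Q_strat, Q_ito)."""
    rng = np.random.default_rng(seed)
    lam = lambda t: 1.0 + 0.5 * t / tau
    lam_dot = 0.5 / tau
    x = 1.0
    W = q_strat = q_ito = 0.0
    for i in range(int(tau / dt)):
        t = i * dt
        dx = -lam(t) * x / gamma * dt \
             + np.sqrt(2.0 * kT * dt / gamma) * rng.standard_normal()
        x_mid = x + 0.5 * dx                   # Stratonovich midpoint
        q_strat += lam(t) * x_mid * dx         # (du/dx) o dx, midpoint rule
        q_ito += lam(t) * x * dx               # same term, pre-step point
        W += 0.5 * x_mid**2 * lam_dot * dt     # Jarzynski work: (du/dlam)*lam_dot*dt
        x += dx
    dE = 0.5 * lam(tau) * x**2 - 0.5 * lam(0.0) * 1.0
    return dE, W, q_strat, q_ito
```

With the midpoint heat, ΔE = W + Q closes to discretization accuracy; the Itô-evaluated heat misses the order-one dx² contribution and leaves an O(1) gap in the balance — the practical content of "heat must be Stratonovich".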
So to define heat, we have to use the Stratonovich product. OK, so this is the thermodynamic first law for overdamped Langevin dynamics. Are there any more questions up to this point? No? OK, then let me give you an example of how to calculate these quantities in practice. This is an example of an optical tweezers experiment in an overdamped system. There is a Brownian particle, and we apply a laser — this is the optical tweezer. The center of the optical trap is denoted by λ(t), and λ(t) moves linearly in time with some constant velocity a. Because the optical tweezer provides a harmonic potential, this is the motion of the Brownian particle. OK, then how do we calculate work? This is the definition of the work done by the conservative force, the Jarzynski work. Using the harmonic potential, we can directly calculate the work in this way. In this example there is no non-conservative force, so the work done by the non-conservative force is zero. The total work from time 0 to τ is then obtained by integrating over time, and in a simulation or experiment we have to use the summation form with discrete time. In such a way we can evaluate work. "What about the non-conservative force?" OK, I will show you that in the next example — thank you for your question. No more questions? Then this is the heat. This is the definition of heat, and from the equation of motion we can replace the bath-force term using the external force. The Stratonovich product then applies to each factor in turn. Now let's look at the first term: because we have to respect the Stratonovich product, it should be evaluated as one half of the next-time position plus the current-time position, which gives this factor. And for the second term: λ does not carry any ξ noise.
So it does not have any x dependence, and the Stratonovich product can be changed into just a normal product. In such a way you can calculate the heat — this is how we evaluate heat in experiment or simulation. OK. "About the Stratonovich product — does the choice of evaluation point change the heat?" So your question is whether writing dx with the product evaluated at the initial time gives the same result? In such a case, yes, the convention matters. OK, let's turn to the next example. It is a two-dimensional Brownian gyrator in an overdamped Langevin system. It is a two-dimensional system: this part of the force is the conservative part, from a harmonic potential, and this part is a rotational force. A rotational force cannot be derived from a potential, so it is a non-conservative force. How can we calculate work here? In this case, because there is no time-dependent protocol in the conservative force, the Jarzynski work is zero, but the non-conservative work does not vanish. Because there are two directions, x and y, we have to calculate them separately. The x-direction non-conservative work is, by definition, the x-component of the non-conservative force times dx in a Stratonovich product, and the y-direction work is the y-component times dy, again Stratonovich. In this example, y carries the ξ_y noise and x carries the ξ_x noise, and ξ_x and ξ_y are independent; so in this example, fortunately, the Stratonovich product can be converted into just a normal product. But in general we have to keep the Stratonovich product. OK, then we can also evaluate the heat in the x-direction and in the y-direction; you can read it again in my lecture notes. OK — about the experiment, that's a good question.
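Returning to the gyrator for a moment: the two work integrals can be simulated directly. Below is a sketch with one concrete, illustrative parametrization of my own — rotational force ε(−y, x) on top of the harmonic force −k(x, y), single bath at k_B T — with the Stratonovich midpoints written out explicitly:

```python
import numpy as np

def gyrator_work_rate(eps=0.5, k=1.0, gamma=1.0, kT=1.0,
                      n_traj=10_000, tau=2.0, dt=1e-3, seed=5):
    """Overdamped 2D particle with conservative force -k*(x, y) and
    non-conservative rotational force eps*(-y, x). Accumulates
    W_nc = f_x o dx + f_y o dy with midpoint (Stratonovich) force
    evaluation. In steady state the mean rate is 2*eps^2*kT/(gamma*k)."""
    rng = np.random.default_rng(seed)
    s = np.sqrt(2.0 * kT * dt / gamma)
    x = np.sqrt(kT / k) * rng.standard_normal(n_traj)   # steady-state start
    y = np.sqrt(kT / k) * rng.standard_normal(n_traj)
    W = np.zeros(n_traj)
    for _ in range(int(tau / dt)):
        dx = (-k * x - eps * y) / gamma * dt + s * rng.standard_normal(n_traj)
        dy = (-k * y + eps * x) / gamma * dt + s * rng.standard_normal(n_traj)
        # midpoint forces; independent noises make this agree with Ito here
        W += -eps * (y + 0.5 * dy) * dx + eps * (x + 0.5 * dx) * dy
        x += dx
        y += dy
    return W.mean() / tau
```

As the lecture notes, the midpoint and plain products agree here because the x-force involves only y (driven by the independent ξ_y) and vice versa; in general the midpoint rule must be kept.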
I don't know whether there is a general rule of thumb for generating such a non-conservative force. This particular force is a rotational force, so the question becomes: how can you make a rotational force? In some experiments, probably you have read the article, there is an ATP synthase motor; they attach a bead and then, using a laser, they can apply a rotational force in that biological system. So I don't know how to make this kind of non-conservative force in a general way, but in specific cases we can, for example by the virtual potential method, which can realize an arbitrary force using optical tweezers. So there are experimental methods of that kind. OK. Next question: you mentioned Ito calculus and Stratonovich calculus, but in this lecture are we using Stratonovich calculus? Then when do we use Ito calculus? Yes. In the next section I will talk about how to calculate the path probability, and there it is sometimes convenient to use Ito calculus and sometimes convenient to use Stratonovich calculus. In the next example I will show how to use Ito calculus, but you can choose whichever you like; it is just my preference. However, when we define heat or work, we have to use the Stratonovich product; when you merely calculate something, you can choose whatever is convenient for you. I got it, thank you. Another question: is there any explanation of why Stratonovich calculus is the physically plausible one? Well, it is to make the theory consistent with what we already know; we have to use Stratonovich to be consistent with our previous knowledge. And as somebody also asked: how can we understand the Stratonovich product intuitively?
Physically, I don't have a deeper justification, but mathematically we have to use it, because with it we can derive the thermodynamic second law, and together with the thermodynamic first law we can define work and heat in a consistent way. If you want to build a different thermodynamics, you can probably try, but in stochastic thermodynamics we consistently use this definition. OK, now, in the last ten minutes, I'm going to talk about the stochastic trajectory. Here I will focus only on the overdamped Langevin equation, not the underdamped Langevin equation, but it is essentially the same. Let's look at this equation of motion. Here I will change variables: I define dW as xi dt. This W is called the Wiener process, but you don't need to worry about the name. The average value of dW is zero, and the correlation function of dW is this one; you can check it from the correlation function of xi. So this is nothing but a rewriting of the previous equation of motion. OK, then here I will consider one single trajectory. I discretize time in this way; dt is of course an infinitesimal time step, the final time is tau, and the initial time is 0. From this discretization we can also discretize the position: this is the initial position, then the next-time position, then the one after that, and so on, up to the final position. I will call this whole trajectory gamma. What I want to calculate is the probability of observing one segment, one single transition. Before estimating this probability, let us first recall the meaning of a Gaussian random variable. Let's say z is a Gaussian random variable; then the probability of observing z is simply given by this Gaussian distribution. That is the meaning of a Gaussian random variable.
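(A quick numerical aside: the stated statistics of dW, mean zero and, as used below, variance 2 gamma kBT dt, can be checked directly by sampling. Parameter values are illustrative choices.)

```python
import math
import random

random.seed(4)

# Sample many Wiener increments dW and check <dW> = 0 and
# <dW^2> = 2 gamma kBT dt (illustrative parameter values).
gamma, kBT, dt, n = 1.0, 1.0, 1e-3, 200_000
sigma = math.sqrt(2 * gamma * kBT * dt)

samples = [sigma * random.gauss(0.0, 1.0) for _ in range(n)]
mean = sum(samples) / n
var = sum(w * w for w in samples) / n

print(mean, var, 2 * gamma * kBT * dt)
```

The sample mean should be close to zero and the sample second moment close to 2 gamma kBT dt, up to statistical error of order 1/sqrt(n).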
Because xi is Gaussian white noise, dW is also a Gaussian random variable, so the probability of observing a single dW is simply given by the same Gaussian distribution. Simple, right? Here we can read off the variance: from this, the variance is 2 gamma k_B T dt. So from this Gaussian distribution we now know the statistics of dW. Now, dW is given by this term minus this one, so let's plug that expression in for dW; then it becomes this one. I also substitute sigma by this quantity, here and there. And of course this dW squared is just the ordinary square, which corresponds to the Ito product here. Any questions here? OK, so finally I change the integration variable from dW to the position by using this equation of motion, so we can write it in this way. This is the probability of observing one segment transition. Then I rewrite this difference as xdot dt here, and finally we get this result: the probability of observing one segment transition. The function in this exponent is called the Onsager-Machlup function. And now let's calculate the probability of observing the whole trajectory. Each segment is determined by its own dW, and these dW's are independent of each other. It means that the whole path probability is given by the product of the segment probabilities, because each dW is an independent random variable. So by using this equation, we can now write it in this way. Actually, this is a conditional probability, which means the probability of the trajectory gamma given that the initial state is x_0. To calculate the whole probability of observing gamma, we have to multiply by the initial probability.
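The segment weight just described can be evaluated numerically. The sketch below simulates one discretized trajectory of gamma xdot = -k x + xi (the harmonic force and all parameter values are my own illustrative choices) and accumulates the unnormalized log path weight with the Onsager-Machlup integrand -(gamma xdot + k x)^2 dt / (4 gamma kBT), evaluating the force at the pre-step point (Ito convention, as in the lecture).

```python
import math
import random

random.seed(2)

# Illustrative parameters (my own choices).
k, gamma, kBT = 1.0, 1.0, 1.0
dt, n_steps = 1e-3, 5_000

x, log_weight = 1.0, 0.0
for _ in range(n_steps):
    dW = math.sqrt(2 * gamma * kBT * dt) * random.gauss(0.0, 1.0)
    dx = -k * x * dt / gamma + dW / gamma
    xdot = dx / dt
    # Onsager-Machlup term for this segment (normalization omitted)
    log_weight += -((gamma * xdot + k * x) ** 2) * dt / (4 * gamma * kBT)
    x += dx

print(log_weight)
```

Each segment contributes a non-positive term, so the unnormalized log weight is always negative; multiplying in the Gaussian normalization constants and the initial distribution would give the full path probability.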
So we have to multiply by the initial distribution here. This is the whole probability of observing one single trajectory in the overdamped Langevin system. OK. Sorry, a question about this one, the x here? Can I explain it? This is actually just an integration variable; this is nothing but a path integral. It now becomes a path-integral formalism, so you don't need to expand it; it is just the integration variable. Here I just changed variables from dW to the position x at time t_n + dt, so this really is nothing but the integration variable. This term is determined from this one; this one and this one are constants in this situation, they are linearly related, and this term is determined by the random noise. OK, so this is the probability. And here let me make one note. As you asked me, here I used the Ito product, but we can exchange this Ito product for a general alpha-product in this way. Let me show you. This square consists of the first term squared, this one; the second term squared, this one; and the cross product, this one. Here we have to use the general alpha-product. By definition, that is the force evaluated at the next position with weight 1 minus alpha plus the current position with weight alpha, times xdot. By expanding this, we can show it becomes this equation. Multiplying this one and this one gives the first term and the second term, and there is also an xdot squared dt term here. Because xdot squared contains the square of the Gaussian noise xi, the xdot squared dt term is an order-one quantity, so this term becomes this one.
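The note above about exchanging the Ito product for a general product can be illustrated numerically. For free Brownian motion, the midpoint (Stratonovich) sum of x dx telescopes exactly, at the discrete level, to the chain-rule result (x_tau^2 - x_0^2)/2, while the pre-point (Ito) sum differs by the accumulated dx^2/2 terms, which are of order one, exactly as in the xdot squared dt discussion. A minimal sketch, with all parameters my own choices:

```python
import math
import random

random.seed(3)

# Free Brownian motion (gamma * xdot = xi) with diffusion D = kBT / gamma.
D, dt, n_steps = 1.0, 1e-3, 10_000

x0 = x = 0.0
ito_sum, strat_sum = 0.0, 0.0
for _ in range(n_steps):
    dx = math.sqrt(2 * D * dt) * random.gauss(0.0, 1.0)
    ito_sum += x * dx               # pre-point (Ito) product
    strat_sum += (x + dx / 2) * dx  # midpoint (Stratonovich) product
    x += dx

# The midpoint sum obeys the ordinary chain rule exactly:
# sum (x + dx/2) dx = (x_final^2 - x_0^2) / 2, term by term.
chain_rule = (x * x - x0 * x0) / 2
print(strat_sum - chain_rule, ito_sum - chain_rule)
```

The first printed difference is at floating-point rounding level, while the second is close to -D tau, the accumulated dx^2/2 correction that distinguishes the two discretization rules.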
OK, so by substituting this into this equation, we obtain this equation: this is the first term, and this is the second term. Then we recombine these three terms into this square term. Of course, this square term is an Ito product, which means the Ito product can be expressed as the general alpha-product plus something else; there is a relation between them. By using this relation, we can convert the Ito product in this way. OK, so here is a summary of what I have talked about. In Lecture 1, I discussed how to define work and how to evaluate the path probability for the overdamped Langevin system. Even though I did not show how to evaluate the path probability for the underdamped Langevin system, it is essentially the same. There is one difference: this delta function expresses one constraint, xdot equal to v, which is part of the underdamped Langevin equations; that is why we have to put the delta function here. Everything else is essentially the same as in the overdamped case. OK, so my plan was also to cover the thermodynamics of Markov jump processes in Lecture 1, but time is already over, so I think it is the right time to wrap up my lecture today. Any questions? Yes? A question: when the system is exactly critically damped, do the two formulations, the underdamped and the overdamped, coincide? So coincide means: in the limit m goes to zero, does one approach the other? Yes, because at the boundary between overdamped and underdamped there is the critically damped case. Well, to go from the underdamped dynamics to the overdamped dynamics, we first have to integrate out the velocity variable, because it relaxes very fast.
So your question is whether, if we integrate out the velocity variable, it reduces to this overdamped path probability. Right, what I'm wondering is whether it goes over smoothly, or whether there is some subtlety in the limit, because for exact critical damping m and gamma have to be related. I have not done that calculation. I derived the path probability for each equation of motion separately, but I have not tried to first integrate out the velocity variable and check whether the remaining expression equals the overdamped path probability. Anyway, to go from this one to this one we take the limit m over gamma goes to zero, and I think that if we take that limit and first integrate out the velocity variable, it probably converges to this path probability. Thank you. Another question: in the summary slide, you wrote the heat for the overdamped dynamics with xdot in dQ, but in the underdamped case you wrote v. Are xdot and v rigorously different in the overdamped and underdamped cases? OK. In underdamped Langevin dynamics there is a velocity variable, but in overdamped Langevin dynamics there is no velocity variable, because it has been integrated out, so we have to express everything in terms of the x variable only, not the v variable. That is why we use xdot here. I understood, thank you. OK, so I think we should close the discussion for today. Dinner is in the same place, the cafeteria on the first floor. If you are staying in the hotel, there will be a bus at 6:30, so you can take the bus back to your hotel. That's it. See you tomorrow at 9:30.