Okay, welcome back. Today's lecture will be given by Professor Lee; actually, this is the last lecture of his series. Okay, please.

Thank you. So this is my last lecture. Yesterday I talked about the TUR, the thermodynamic uncertainty relation. This is a form of the TUR, and it holds for overdamped Langevin systems and Markov jump processes in the steady state; this is important. Additionally, we also showed the KUR, the kinetic uncertainty relation, for Markov jump processes, and it also holds in the steady state. So these two relations hold in the steady state. One of you asked me yesterday: what happens if the system is not in the steady state? That is a very important question. Some modification is necessary for the arbitrary-state case. This is the TUR for an arbitrary state, with an arbitrary time-dependent protocol; it holds for overdamped Langevin systems and Markov jump processes. The modification is that we put an operator in front of the average of the observable; everything else stays the same. Here tau is the final time and omega is the protocol change speed. For example, if there is no protocol, which means omega = 0, this term drops out. And in the steady state, applying this operator to the observable simply returns the observable average, so the inequality reduces to the original steady-state TUR. That is the modification. Then what about underdamped Langevin dynamics? The relations above hold for overdamped Langevin dynamics; this is the TUR for an arbitrary state and an arbitrary time-dependent protocol for underdamped Langevin dynamics. Similar to the previous case, we have to put an operator in front of the observable average.
Additionally, we have to put a constant here, I, where I is related to the initial Shannon entropy. So it becomes a little more complicated. But anyway, when we take the overdamped limit, meaning the limit where m over gamma goes to zero, we can show that this expression reduces to the steady-state TUR. In this way these TURs form a theory consistent with each other. Okay. So these are some variants of the TUR for an arbitrary state, an arbitrary protocol, and underdamped Langevin dynamics. I also prepared some material on how to apply the TUR to an experimental situation, but because of time I will skip that part and move to lecture four today.

So this is a schematic of the stochastic system we are interested in. t = 0 is the initial time and t = tau is the final time; these are the initial states and these are the final states. The initial states constitute the initial distribution, and the final states constitute the final distribution. Gamma denotes a stochastic trajectory. We are interested in an observable Theta, for example heat, work, displacement, and so on, and in the entropy production, which is a function of Gamma. Now we want to find general relations between these quantities. Yesterday we learned the TUR, which is a relation between an observable and the entropy production. As I showed you on the first page, it looks like this: this is a relative fluctuation, and this is the entropy production. As I explained yesterday, it is a kind of trade-off relation: we have to pay more thermodynamic cost to reduce fluctuations. That is the meaning of the TUR. So the TUR relates an observable to the entropy production. But what about another measurable quantity, the distribution? The distribution is also a measurable quantity, right?
So today I am going to talk about the thermodynamic speed limit, which is a relation between these distributions and the entropy production. Suppose this situation: P0 is the initial distribution of the system, any initial distribution, and this is the final distribution. We want to move the system from the initial distribution P0 to the final distribution during a finite time tau. In this situation, this inequality holds. Let me explain its meaning. Tau is the transition time from the initial distribution to the final distribution, and this is the entropy production. A bar is the mean dynamical activity, related to the number of jumps, the number of transitions, during the process. And D_TV(P, Q) is called the total variation distance; it is one of the distances between two distributions P and Q. Here the two distributions are P0 and P_tau, the initial and final distributions, so this is the distance between them. I will give the explicit definitions of these two quantities on the next slide. Now look at this inequality. For example, if we reduce the total transition time, the inequality says that the entropy production should increase. Or, if we want to reduce the entropy production, the transition time should increase. So, roughly speaking, it is also a kind of trade-off relation: we have to pay more thermodynamic cost for a faster transition. It is quite intuitive to understand, right? That is the meaning of this speed limit. I will show you the definition later; of course, this term also depends on the protocol, so it is not a constant. That is why I say it is a trade-off relation only roughly speaking. But anyway, this is a positive number. Any other questions? Okay, no? If no, then...
Conventionally we write the speed limit in this way, by moving the entropy term to the right-hand side. In the first section of lecture 4, I will show you how to derive this speed limit, and in the second section I will show you how to apply it to the problem of the finite-time Landauer bound.

So this is the speed limit, and the motivation for studying it is actually the quantum speed limit, which was introduced in 1945. The quantum speed limit concerns closed quantum systems. Say this is the initial state of a closed quantum system and this is its final state; the evolution is unitary, of course. This is the quantum speed limit: the transition time is larger than this value, where L is some kind of distance between the two states. The thing is that if we take the limit where h-bar goes to zero, which is the classical limit, you see that this term becomes zero. So in this limit the quantum speed limit becomes trivial: it only says that the transition time is non-negative, a very trivial statement. So people asked whether there is some meaningful and useful speed limit in classical systems.

To answer the question: it holds for any Hamiltonian, but for a closed quantum system, so the evolution is unitary. As far as I know this is a general bound, and I do not know of any restriction on the Hamiltonian; it holds for general unitary evolution with a given Hamiltonian in a closed quantum system. Sorry, I cannot hear you. Yes? This limit is, as we know, the classical limit, going from quantum to a continuous state space. That is why I call it the classical limit, but anyway this is a closed system. Your question is whether this limit goes to some thermodynamic system. Yes, right.
I mean, this is one simple attempt, one trial: we wanted to see what happens if we take the classical limit, whether we can obtain a meaningful speed limit in a classical situation. But if we take this kind of classical limit, we cannot find any meaningful speed limit for a classical system. And of course, I am dealing with a classical system, even though it is an open system. That is why people asked whether there is a meaningful speed limit for classical systems, and that became the motivation to study speed limits in classical thermodynamic systems. Of course, nowadays people also study quantum speed limits for open systems, open thermodynamic systems, and one can derive inequalities with a similar structure for such open quantum systems. Okay, so that is the motivation.

In this lecture we focus only on discrete-state models, that is, Markov jump processes. Of course we can also consider Langevin dynamics with continuous states, but then we have to use other distance measures, such as the so-called Wasserstein distance. I will not talk about the Wasserstein distance here; in this lecture I will only focus on Markov jump processes.

So this is the system setup. The master equation: R_nm is the transition rate from state m to state n at time t, and p_n is the probability of state n at time t. And this is the entropy production rate; we learned how to write the entropy production rate in terms of the transition rates and the probabilities. The speed limit can also be derived using other entropy productions; for example, the same inequality holds when we use the Hatano-Sasa, or non-adiabatic, entropy production. We learned what the Hatano-Sasa entropy production is yesterday.
Anyway, the inequality also holds for the Hatano-Sasa entropy production, but for simplicity, in this lecture I will only use the total entropy production rate. And this is the dynamical activity rate, which is nothing but the rate of the number of jumps; this is its definition. By integrating the dynamical activity rate from time zero to tau, we get the total number of jumps during the process, which we call the total activity. And A bar is defined as the total activity divided by the total time, so it is the mean activity; that is what A bar means here. Finally, D_TV(P, Q) is called the total variation distance, and its definition looks like this. It measures how far apart the two distributions P and Q are. You can check that this distance lies between zero and one: when the two distributions are exactly the same it is zero, and when the two distributions are completely different it is one. So it is a measure of the distance between two distributions. A different one? I will explain that later. That is a very good question: there is in fact a relation between this and the Kullback-Leibler divergence. For simplicity I will use the notation L to denote this total variation distance, so we can write the speed limit in this simple way.

This speed limit was first proposed in this form in 2018, and it was derived in 2020. But in this lecture I will introduce a general form of the speed limit. What I mean by general is that from this relation we can derive the first proposed form: if we know how to derive the general form, the earlier result follows automatically. To derive this general form, we have to start from a key relation.
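Since the slides are not reproduced here, a minimal numerical sketch may help fix the quantities just defined. The two-state rates and the protocol below are my own illustrative choices, not the lecturer's model; the sketch integrates the master equation with small Euler steps, accumulates the entropy production Sigma (in units of k_B), the total activity A, and the total variation distance L, and then evaluates the Pinsker-form speed limit tau >= 2 L^2 / (A bar * Sigma) discussed later in the lecture.

```python
import numpy as np

def tv_distance(p, q):
    return 0.5 * np.sum(np.abs(p - q))

dt, tau = 1e-4, 1.0
p = np.array([0.5, 0.5])              # initial distribution (states 0 and 1)
p0 = p.copy()
Sigma = 0.0                           # total entropy production, units of k_B
A = 0.0                               # total activity (mean number of jumps)
for step in range(int(tau / dt)):
    t = step * dt
    r01 = 1.0 + 5.0 * t               # rate for jump 1 -> 0 (illustrative protocol)
    r10 = 1.0                         # rate for jump 0 -> 1
    j01 = r01 * p[1]                  # probability flux 1 -> 0
    j10 = r10 * p[0]                  # probability flux 0 -> 1
    Sigma += (j01 - j10) * np.log(j01 / j10) * dt   # entropy-production rate
    A += (j01 + j10) * dt                           # dynamical-activity rate
    p = p + dt * np.array([j01 - j10, j10 - j01])   # Euler step of master eq.

L = tv_distance(p0, p)                # distance between p(0) and p(tau)
abar = A / tau                        # mean activity
print(tau, 2 * L**2 / (abar * Sigma)) # speed limit: first number >= second
```

The last line compares the actual duration with the bound; the inequality should hold for any protocol.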
The key relation, the key inequality, says that there exists some convex function h satisfying this inequality. I will show you how to derive it later; at this point, please accept it. Then we divide both sides of the inequality by the same factor, and we define the function g(x) = h(x)/(2x). Now look at the left term: you see it has the form of the function g, with this as the argument x. Rewriting it as g(x) and taking the inverse function of g, we obtain this inequality; finally, from this relation, we arrive at the general form. So now we know how to go from the key relation to the general form, and the remaining task is to derive the key relation itself. Any questions up to this point? About the function g: yes, you are right, it must be a monotonic function over the range of interest.

Okay. This is the definition of the total variation distance between two distributions, and it is the same as this integral. From the triangle inequality, this term is smaller than this one. Using the master equation, we plug this term in here, and the result looks like this. Oh, sorry, this is a typo; this should be deleted. Then, by the triangle inequality, this term is smaller than this one. Up to this point it is very easy: the total variation distance is smaller than this expression. Now I define the functions q and q* in this way: q_nm is related to this first term, and q*_nm is related to this second term, each divided by a-dot, the dynamical activity rate.
You see that a-dot acts as a normalization constant, so we can regard q and q* as probabilities, because they are normalized. Using this definition we can rewrite the equation in this way, and this is precisely the definition of a total variation distance: the total variation distance between q and q*. So we can rewrite this term using the total variation distance of q and q*.

Now I will use some important properties of statistical distances and the relations between them. The total variation distance, as I explained, lies between 0 and 1: it is 0 when the two distributions are exactly the same and 1 when they are completely different. And this is the Kullback-Leibler divergence, as you asked about; this is its definition. When the two distributions are exactly the same it is 0, similar to the total variation distance; however, it has no upper bound. It can be infinite; it can diverge. There is a relation between the total variation distance and the Kullback-Leibler divergence of the following form: there exists a convex function h such that h of the total variation distance is smaller than the Kullback-Leibler divergence. This is an important relation between the distance and the divergence. I will show you an explicit form of this convex function h later; for now, let us accept this inequality, which is well known in information theory. Using it and taking the inverse function of h, we can show that the total variation distance is smaller than this quantity. And because h is a convex function, its inverse is a concave function.
Now, with this definition of q and q*, we can calculate the Kullback-Leibler divergence between q and q*. It can be written in this way, and by now you are probably familiar with this expression: it is the entropy production rate divided by the dynamical activity rate. Using this, we can write the bound in this way. Now let us divide and multiply by a-dot, the same thing; because this becomes a normalized quantity, we can regard it as a probability, and this is then like an average of the h-inverse function. As I explained, h is convex and h-inverse is concave, so we use Jensen's inequality in its reversed, concave form: the average of the function is smaller than the function of the averaged argument, so this is larger than this. This is the concave version of Jensen's inequality we learned on the first day. Finally, these factors cancel out, and integrating the entropy production rate gives the total entropy production. So we have this conclusion.

Is there any question about this derivation? Yes, right: the dynamical activity at time 0 is 0, A(0) = 0. In the next line there is a-dot divided by A(tau); this is a positive number. And this quantity is probability-like, because if we integrate it out, it becomes A(tau), and A(0) = 0. The definition of A(t) is the total number of jumps during the process from time 0 to t, so when t goes to 0 the total number of jumps is 0; that is why A(0) = 0. Okay.
Any other questions? Okay. So I showed that the total variation distance is smaller than this quantity. By moving this term to the left-hand side and taking the inverse function, we finally obtain the key relation. That is how the key relation is derived.

There is one important property of this Kullback-Leibler divergence, the divergence between q and q*. You see that the indices n and m are dummy indices, so we can exchange them; after exchanging the indices, this is exactly the Kullback-Leibler divergence between q* and q. So in our case, KL(q || q*) and KL(q* || q) give the same divergence: the Kullback-Leibler divergence is symmetric. In general the Kullback-Leibler divergence is not symmetric, but for this particular q and q* we can show that it is. I will use this property in the next derivation, so please remember it.

So, since we now know how to derive the key relation, we have finally derived the general form. Now let me show you explicit forms of the convex function h. This is the relation between the total variation distance and the Kullback-Leibler divergence, and one simple example of an h function is the Pinsker function, h(x) = 2x^2. This very simple function satisfies the inequality; I will not show the derivation, but you can find it in information theory textbooks. Anyway, we are now comparing two kinds of quantities: this one is a distance bounded between 0 and 1, and this one is not a distance, but it is non-negative. Let us consider two simple distributions: the distribution P is (0, 1) and the distribution Q is (1, 0).
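To make the q, q* construction concrete, here is a small check, my own sketch with arbitrary random rates R and an arbitrary occupation p (both assumed purely for illustration), that q and q* are normalized, that KL(q || q*) equals the entropy-production rate divided by the activity rate, and that this Kullback-Leibler divergence is indeed symmetric:

```python
import numpy as np

rng = np.random.default_rng(3)
R = rng.uniform(0.5, 2.0, size=(3, 3))      # R[n, m]: rate for jump m -> n
np.fill_diagonal(R, 0.0)
p = rng.random(3)
p /= p.sum()                                # instantaneous distribution

flux = R * p[None, :]                       # flux[n, m] = R_nm * p_m
mask = ~np.eye(3, dtype=bool)               # off-diagonal entries only
adot = flux[mask].sum()                     # dynamical activity rate
q = flux[mask] / adot                       # q_nm  = R_nm p_m / adot
qstar = flux.T[mask] / adot                 # q*_nm = R_mn p_n / adot

sigma_dot = np.sum(flux[mask] * np.log(flux[mask] / flux.T[mask]))  # EP rate
kl_f = np.sum(q * np.log(q / qstar))        # KL(q || q*)
kl_r = np.sum(qstar * np.log(qstar / q))    # KL(q* || q)
print(kl_f, sigma_dot / adot, kl_r)         # all three coincide
```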
We can calculate the total variation distance for these two distributions; this is the calculation. You can check that the total variation distance equals 1, which means the two distributions are completely different. And of course we can calculate the Kullback-Leibler divergence; due to this term, it actually diverges. Putting these numbers into the inequality with the Pinsker function, we can easily check that the inequality holds, but it holds very loosely. What I mean by loose is that the number here is 2 while the number here is infinity; 2 is smaller than infinity in a very loose way, so the bound is very loose. So when the total variation distance d is close to 1, the Pinsker function gives a very loose bound, as we saw in this example. d close to 1 means, as in this example, that the two distributions are completely different, while d close to 0 means that the two distributions are very nearly the same. When the two distributions are nearly the same, the Kullback-Leibler divergence is also close to 0, and putting these numbers into the inequality, it becomes much tighter: something close to 0 is bounded by something close to 0. So Pinsker is very loose near d = 1 but becomes tight when d is close to 0. What does that mean? We are considering the Kullback-Leibler divergence of q and q*, whose definitions are given in this way: this is the forward-direction rate and this is the backward-direction rate.
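The extreme pair from the slide can be checked in a few lines; the only assumption is the usual convention 0 log 0 = 0 in the KL sum:

```python
import numpy as np

P = np.array([0.0, 1.0])
Q = np.array([1.0, 0.0])
d = 0.5 * np.abs(P - Q).sum()          # total variation distance: 1.0
with np.errstate(divide='ignore', invalid='ignore'):
    terms = np.where(P > 0, P * np.log(P / Q), 0.0)   # 0*log(0) := 0
kl = terms.sum()                        # diverges: 1 * log(1/0) = +inf
# Pinsker's bound 2*d**2 = 2 <= inf holds, but infinitely loosely.
print(d, kl)
```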
So d close to 0 means the two rates are almost equal, which means the process is nearly reversible; d close to 1 means the two rates are very different, which means the process is highly irreversible. It follows that the Pinsker function gives a very loose bound for highly irreversible processes and a tight bound only for nearly reversible processes. That is the meaning of the Pinsker bound. Do you understand my point?

To cure this looseness near d = 1, we can use other functions: for example, the Bretagnolle-Huber function, and the function proposed by Gilardoni. Let me show you these three functions. The x-axis is the total variation distance d and the y-axis is the h function; the blue curve is the Pinsker function, and these curves are the Gilardoni and Bretagnolle-Huber functions. As you can see from this plot, at d = 1 the Pinsker function is finite, whereas the Gilardoni and Bretagnolle-Huber functions diverge. Since, as we know from the example, the Kullback-Leibler divergence diverges at d = 1, these two functions provide much tighter bounds than the Pinsker bound near d = 1. But if the Kullback-Leibler divergence is symmetric, as in our special case, we can use an even tighter bound. This is a plot of that tightest bound; it is tight everywhere compared to the other functions, and as I mentioned, in our case the Kullback-Leibler divergence is symmetric, right? That is the reason we can use this tightest bound in our speed limit. Which one is tighter? Precisely speaking, as I will show you in an example, this is the tightest when the system has two states; we cannot find a tighter bound for a two-state system. But I am not sure which one is tightest for, say, a three-state or four-state model.
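The bound functions named here can be compared numerically. The formulas below are the standard textbook forms that I believe correspond to the slide: Pinsker h = 2x^2, Bretagnolle-Huber h = ln(1/(1-x^2)), Vajda h = ln((1+x)/(1-x)) - 2x/(1+x), plus the tanh-based bound h_s = 2x artanh(x), which bounds the symmetrized KL (Jeffreys) divergence; Gilardoni's refinement is omitted. A random sweep over two-state distributions checks that none of them is violated:

```python
import numpy as np

h_pinsker = lambda d: 2 * d**2
h_bh      = lambda d: np.log(1.0 / (1.0 - d**2))                  # Bretagnolle-Huber
h_vajda   = lambda d: np.log((1 + d) / (1 - d)) - 2 * d / (1 + d) # Vajda
h_sym     = lambda d: 2 * d * np.arctanh(d)    # bounds KL(P||Q) + KL(Q||P)

rng = np.random.default_rng(1)
violation = 0.0
for _ in range(2000):
    p, q = rng.uniform(0.01, 0.99, size=2)
    P, Q = np.array([p, 1 - p]), np.array([q, 1 - q])
    d = 0.5 * np.abs(P - Q).sum()
    kl = np.sum(P * np.log(P / Q))
    kl_sym = kl + np.sum(Q * np.log(Q / P))
    violation = max(violation,
                    h_pinsker(d) - kl, h_bh(d) - kl, h_vajda(d) - kl,
                    h_sym(d) - kl_sym)
print(violation)   # no bound is violated
```

Note that only the Pinsker function stays finite as d approaches 1; the others diverge there, matching the plot described in the lecture.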
People in information science are concerned with these questions; they are almost mathematicians. So this is the general form of the speed limit, and this is the definition of G. For example, if we use the Pinsker function h(x) = 2x^2, we can easily calculate G: it is simply G(x) = x, the identity function. Then we easily obtain this speed limit, which is the first proposed form I showed you on the first page. So the previous result is tight only for nearly reversible processes, but it gives a very loose bound for highly irreversible processes. However, if we use the tightest bound, which I will call the symmetric KLD bound, then the resulting speed limit is tight for all processes; it gives the tightest bound. I will show you an example of why I call it the tightest.

Okay, so I explained how to derive the general form of the speed limit; now let me show you how to derive this tightest, symmetric KLD bound. What I want to show is that, when the Kullback-Leibler divergence is symmetric, we get a bound of this form between the Kullback-Leibler divergence and the total variation distance. This is the definition of the total variation distance, squared. Now I multiply and divide by this term, the same thing, and apply the Cauchy-Schwarz inequality: this is a square, this is a square, and this is their product. Because this summation gives one and this summation also gives one, together they give the number two; that is why it becomes this expression, and this quantity is defined as the Le Cam distance. So the total variation distance is bounded by the Le Cam distance. And this is a re-expression of the Le Cam distance: we can split this term into two parts, this one and this one.
Then we divide and multiply by this quantity; this is a re-expression of the Le Cam distance. Recall how the total variation distance is defined: this term is a normalized quantity, so if we sum over it, it gives one, and we can regard it as a kind of probability. Let us denote it by p-tilde_n. We can also immediately show that the hyperbolic tangent of this function gives this value; you can check it yourself. Then, because p-tilde is a probability, this is an average of a tanh function, right? And for positive arguments tanh is a concave function, so we can use the concave version of Jensen's inequality: the average of the tanh is smaller than the tanh of the averaged argument, so this is larger than this quantity. Then, using the definition of p-tilde, we can write it in this way: the first term gives this part and the second term gives this part. Defining this as the symmetrized Kullback-Leibler divergence, we can write it in this way; in general this is different from the ordinary Kullback-Leibler divergences. Using this definition, and because this term is smaller than that term, we finally obtain this inequality. Then, taking the inverse function of tanh, we can show this relation; and we know that the inverse hyperbolic tangent can be written using the logarithm. So you see that this function is the same as this one. And if the Kullback-Leibler divergence is symmetric, then D_s equals D, so in that case we can replace D_s by D, and we finally derive this inequality. In this way we can show this bound.
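The two key steps of this derivation, the Cauchy-Schwarz step (D_TV^2 <= one half of the Le Cam distance) and the concave-Jensen step (half the Le Cam distance <= D_TV * tanh(D_s / (4 D_TV)), with D_s the Jeffreys-type symmetrized KL), can be spot-checked numerically; this is my own sketch of the chain as I understand it from the slides:

```python
import numpy as np

rng = np.random.default_rng(2)
gap1 = gap2 = 0.0
for _ in range(1000):
    P = rng.random(4); P /= P.sum()
    Q = rng.random(4); Q /= Q.sum()
    d = 0.5 * np.abs(P - Q).sum()
    lecam = np.sum((P - Q)**2 / (P + Q))          # Le Cam distance
    Ds = np.sum((P - Q) * np.log(P / Q))          # symmetrized (Jeffreys) KL
    gap1 = max(gap1, d**2 - 0.5 * lecam)                       # Cauchy-Schwarz step
    gap2 = max(gap2, 0.5 * lecam - d * np.tanh(Ds / (4 * d)))  # Jensen step
print(gap1, gap2)   # both gaps stay non-positive
```

For two-state distributions both steps hold with equality, which is consistent with the lecturer's remark that the bound is tightest for a two-state system.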
Is there any question about this derivation? Is it clear? Okay. So up to now I have talked about how to derive the general form of the speed limit. From now on, I am going to talk about how to apply this speed limit to the finite-time Landauer bound problem.

So what is the Landauer bound? It tells us that for erasing one bit of information, at least k_B T ln 2 of heat must be dissipated. It can be formulated this way: the heat must be larger than k_B T ln 2, the minimum value. This minimum bound is attained only in the quasi-static limit, that is, for a quasi-static erasure process; in that case we can approach the minimum bound. For a finite-time process, since the quasi-static limit takes infinite time, we can easily expect that some additional cost is necessary. From a series of studies it was found that this additional cost is proportional to 1 over tau, where tau is the erasing time. When tau goes to infinity, which is the quasi-static limit, we recover the minimum bound; but if tau is very small, the additional cost diverges. This is the problem of the finite-time Landauer bound.

Then what is the erasure process? Suppose there is a single one-bit memory, so it can be in two states, the one state and the zero state. In this example the system is in the zero state, so we read the memory as being in the zero state. If there are many memories, initially they read 1, 1, 0, 1, 0, something like that. Let us say that initially the probability of observing a memory in the zero state is one half, and the probability of observing it in the one state is also one half; so initially the zero and one probabilities are half and half. The erasure process is nothing but a reset-to-zero process: after erasure, the final state looks like this, all memories reset to zero.
So the probability of observing the zero state becomes one and the probability of observing the one state becomes zero. That is the meaning of the erasure process: it is nothing but a distribution change from the initial distribution to the final distribution, from the half-half distribution to the (1, 0) distribution. For this distribution change we can easily calculate the total variation distance, which is simply one half. We can also easily calculate the Shannon entropy change; this is the result of the calculation, and the answer is minus ln 2.

Now we can use our speed limit. This is the general form of the speed limit; rearranging the terms, we can show that the entropy production is larger than this value, where v is defined as L divided by the total activity. We know that the entropy production can be divided into two parts: the system entropy change and the reservoir entropy change. The system entropy change is the Shannon entropy change, and the reservoir part is the heat Q. Plugging this into the inequality and rearranging the terms, we obtain this inequality: the dissipated heat is larger than k_B T ln 2 plus something else. From this inequality we can easily identify this term as the additional cost of finite-time erasure. This is the importance of the G function, which I defined as G(x) = h(x)/(2x): it gives the additional cost. For example, if we choose the Pinsker function h(x) = 2x^2, then G is simply the identity function, so the additional cost becomes just this number, and as you can see from this factor, it is proportional to 1 over tau. This is the previously observed behavior.
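As a sketch (with k_B = T = 1 and an assumed unit mean activity, my own numbers rather than the lecturer's), the finite-time Landauer bound beta*Q >= ln 2 + 2 L G(v) can be evaluated for the two choices of G: the Pinsker choice G(v) = v, whose extra cost decays as 1/tau, and the symmetric-KLD choice G(v) = artanh(v), which diverges as v approaches 1:

```python
import numpy as np

L = 0.5                          # total variation distance for perfect erasure
dS_shannon = -np.log(2)          # Shannon entropy change of the system

def min_heat(tau, abar, g):
    """Lower bound on beta*Q from Sigma >= 2*L*g(v), with v = L/(abar*tau) < 1."""
    v = L / (abar * tau)
    return -dS_shannon + 2 * L * g(v)

g_pinsker = lambda v: v              # Pinsker: extra cost ~ 1/tau
g_symkl = lambda v: np.arctanh(v)    # symmetric KLD: diverges as v -> 1

for tau in (10.0, 1.0, 0.51):        # abar = 1 assumed; v = 0.05, 0.5, ~0.98
    print(tau, min_heat(tau, 1.0, g_pinsker), min_heat(tau, 1.0, g_symkl))
```

Both bounds approach ln 2 as tau grows, and the artanh bound is the larger (tighter) of the two for every v.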
But if we use the symmetric KLD bound, we can calculate the additional cost, and it becomes this quantity, different from the previous one; we define it this way, and it gives the tightest bound. Let me give you an example. This is a discrete one-bit model, and this is a coarse-grained bit model with a continuous state space. The discrete one-bit model has two states, the zero state and the one state. Suppose that initially the system is in the one state; by raising the energy level of the one state we can move the system to the zero state, and then we lower the energy level again. That is the erasure process for the discrete one-bit model.

But we can also play with a Langevin system via coarse-graining. Say this is a Langevin system in some symmetric double-well potential. We can coarse-grain it in this way: if the Brownian particle is in this region, we read the system as being in the one state, and if it is in this region, we read it as the zero state. In this way we coarse-grain the continuous system and use it as a bit model; that is why I call it the coarse-grained bit model. In this model, suppose the particle initially sits in the one-state region; by applying an asymmetric potential we can move the particle from the one state to the zero state, and in this way we erase the memory.

So this is the result: a calculation for the discrete one-bit model using this protocol. The y-axis is the entropy production and the x-axis is the inverse of v, with v defined as before. The reason I plot against the inverse of v is that 1 over v is proportional to the time tau; in this regime you can think of 1 over v as time. And this is a plot of the two bounds.
The orange curve is the Pinsker bound and the blue dashed curve is the symmetric KLD bound. As you see from this plot, at v = 1 the symmetric KLD bound diverges, but the Pinsker bound stays finite at that point. So let me magnify this area. Okay, this is the magnified view. The region near v = 1 is the highly irreversible region, and over here is the nearly reversible region. In the highly irreversible region the Pinsker bound becomes very loose: it stays finite while the data and the symmetric KLD bound diverge, so there the Pinsker bound is infinitely loose. But you can see that the symmetric KLD bound is really tight; it touches the data. The other three lines correspond to the other H functions, the Gilardoni, Bretagnolle–Huber (BH), and Vajda functions, and you can see they are always looser than the symmetric KLD bound. We can also find the condition for the optimal protocol, that is, the protocol whose result touches this tightest bound; this red dot is calculated from that optimal protocol, so this really is the tightest bound. The green dot is a simulation result from the coarse-grained bit model, an overdamped Langevin system with the symmetric potential. Now, I derived this speed limit from the discrete-state system, that is, from the master equation of a Markov jump process, but our speed limit is also applicable to the coarse-grained bit model with a continuous state space, and this simulation data indeed lies above our bound. So it clearly shows that the bound applies to this continuous, though coarse-grained, bit system. But you see there is a large gap here, and this gap comes from the process of coarse-graining. To reduce this gap we would have to use a different distance measure, such as a Wasserstein distance.

Okay, so this is the final page. In this final lecture I talked about the speed limit, which is a kind of trade-off between time and thermodynamic cost, and about how to derive the general form of the speed limit. From this general form we can derive the previously known speed limits, and by applying it to the problem of the finite-time Landauer bound we obtain a really tight bound. So this is all for my final lecture. Okay, that's all. Questions?

[Question, partly inaudible, about how large the additional cost can be compared to this quantity.] Yes, it can diverge. If the time is very long, then that portion will be very small, but in the highly irreversible limit, which means the process is very fast, it can diverge; it can be very large compared to this one. For our daily lives, our computers erase very fast, right? So in such a case the cost should be very much larger than k_B T log 2.

[Question about assuming a perfect final distribution.] Okay, your question is probably about this one: you mean that I calculated using this constant value. Yes, right. Actually, there is no perfect erasure; there may be some small error, and the error will depend on the time, the protocol, and so on. We can write it this way: when there is an error, we write 1 − ε, and in such a case we can also calculate the total variation distance; it will be one half plus some ε-dependent term, and the Shannon entropy change also depends on ε. But even with an error, the speed limit is generally applicable to any change of probability distribution, so it gives the same kind of bound. These data points were actually calculated from the discrete 1-bit model, and of course each point has some finite error ε, yet all the data points lie above the symmetric KLD bound. So the bound holds in any case, even with an error. These data points were calculated with a relatively long erasing time, and those with a relatively short erasing time, so the data covers almost every possibility.

[Question about discrete-time Markov chains.] The discrete-time Markov chain? I derived this from the Markov jump process, the continuous-time case. But if we can define a distribution at a certain time, then we can apply these theories to any system, yes, I think so. [Further exchange on the discrete-time case, partly inaudible.]

[Question:] In the coarse-grained bit model you showed that the bound is not tight; is there any other good coarse-graining method to get a tight bound? Okay, so if we use the relation we have derived, I think this gap cannot be avoided, because this term comes from neglecting some intra-entropy production. By intra-entropy production I mean the entropy production generated inside one coarse-grained state. Because we cannot avoid this intra-entropy production, whenever there is coarse-graining there should be some gap. If you use more states, then the gap will probably shrink, I think. Thank you.

[Question:] A simple question regarding the erasing-process example. As I see from the picture, there is always a step where the energy barrier is reset to the initial configuration; is that a necessary procedure?
No, it is not necessary. But if we start from this energy-level configuration, I think it is natural for the whole erasing process to return to the original configuration at the end; that is a natural way to think about erasure. [Follow-up:] The reason I wonder is that, in my view, this becomes an erasing process because initially both sides have the same probability of holding the particle, and that information is erased because, after that energy barrier step, there is only one side where the particle resides. But as I see it, at the last step those two possible states are reborn again, so my question is: can we really call that an erasing process? [Answer:] The erasing process is defined as losing the information about the previous state. Regardless of the initial state, if we cannot guess the previous state from the final state, then we can say the information is erased. Since, regardless of the initial state, the final state here is the zero state, we can say that this is an erasing process.
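The imperfect-erasure case discussed in the questions can be made concrete with a quick calculation of my own (not shown in the lecture): for a final distribution (1 − ε, ε), the total variation distance from the half-half initial distribution works out to 1/2 − ε, and the Shannon entropy change also picks up an ε dependence, yet the same speed limit still bounds this distribution change.

```python
import math

def total_variation(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def shannon_entropy(p):
    """Shannon entropy in nats, skipping zero-probability states."""
    return -sum(x * math.log(x) for x in p if x > 0)

def erasure_with_error(eps):
    """Erasure leaving a residual error eps < 1/2 in the one state."""
    p_init = [0.5, 0.5]
    p_final = [1.0 - eps, eps]
    L = total_variation(p_init, p_final)  # works out to 1/2 - eps
    dS = shannon_entropy(p_final) - shannon_entropy(p_init)
    return L, dS

for eps in (0.0, 0.01, 0.1):
    L, dS = erasure_with_error(eps)
    print(f"eps={eps}: L={L}, dS={dS:.4f}")
# eps = 0 recovers L = 1/2 and dS = -log 2; for eps > 0 both the
# distance and the entropy drop shrink, but the speed limit applies
# unchanged to any such distribution change.
```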