drop me an email and I will be very happy to discuss. Okay, there are many approaches to quantum thermodynamics, and I come from the quantum information community, where we use an information-theoretic approach to study thermodynamics. We are interested in the non-asymptotic behavior of quantum nanoscale systems, and this non-asymptotic thermodynamics is especially crucial for nanoscale devices, and for nanoscale information-processing units in particular. We definitely do not want to dissipate too much heat during information processing and burn up the chips. Studying thermodynamics in this regime requires tools that capture fluctuations. In the extreme case, we study only one shot of the experiment, and we want to guarantee the work cost instead of just observing its average value. This regime is sometimes referred to as the single-shot or one-shot regime. Consider an example: a worker aims to lift a box from the ground to a table, but half of the time the worker throws the box too high, and the other half of the time too low, while the average work cost exactly meets the requirement. This is definitely not what we want. People therefore invented smoothed work quantities to characterize the performance; for instance, the smoothed work guarantees that the work is at least this much, except with probability epsilon. My talk today is about the one-shot regime instead of the asymptotic thermodynamic limit. Information processing has to deal with these fluctuations, so the more accurate the information processing is, the more it costs. In this work, we want to rigorously justify this observation, and we want to explicitly derive the trade-off between cost and accuracy. Moreover, we will use our results to demonstrate a thermodynamic advantage of quantum devices in certain tasks. So our goal is to characterize the non-equilibrium cost of accurate information processing, and there are three questions to be addressed.
First, how do we quantify non-equilibrium resources? Second, how do we model an information-processing task, and how do we benchmark its accuracy? So let us begin with the mathematical model we use for information tasks. An information-processing task aims to realize a desired relation between inputs and outputs. For example, the truth table of the AND operation: all possible input-output correspondences specify the AND operation. For quantum information processing, we have quantum states as inputs and quantum states as outputs, so we can specify the information-processing task by specifying all possible input-output state transformations, here rho_x to rho'_x, where x is a label. For example, let's consider quantum cloning, which aims to output additional copies of the same unknown pure state. This is hoped for arbitrary pure states, so there are infinitely many state transformations specifying quantum cloning. And we know, by the no-cloning theorem, that there is no quantum channel that can perfectly implement this task. So the definition of an information-processing task we use here goes beyond quantum channels. Okay, then how do we evaluate any real implementation aiming to accomplish the desired task? We use an operational test. Suppose there is a machine M that produces some output. We ask how similar the real output given by the machine M is to the desired output rho'_x; the measurement gives us a score, and we use the worst-case score to quantify the accuracy of the machine M. When the desired output is pure, we choose the test observable O_x to be the projector onto rho'_x, and then the score is the fidelity; in this case F becomes the worst-case fidelity. Now, coming back: how do we quantify thermodynamic resources? We use a resource-theoretic approach. A resource theory is a mathematical framework used to quantify resources, and it has wide applicability.
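To make the worst-case benchmark concrete, here is a minimal numerical sketch. The function names, the toy depolarizing "machine", and the two-state task are my own illustrative choices, not anything from the talk; the sketch only shows the structure: score each required transformation by the fidelity of the machine's actual output against the target, then take the minimum over all inputs.

```python
import numpy as np

def fidelity(rho, sigma):
    """Uhlmann fidelity F(rho, sigma) = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    w, v = np.linalg.eigh(rho)  # eigendecomposition-based matrix square root
    sqrt_rho = v @ np.diag(np.sqrt(np.clip(w, 0, None))) @ v.conj().T
    ev = np.linalg.eigvalsh(sqrt_rho @ sigma @ sqrt_rho)
    return float(np.sum(np.sqrt(np.clip(ev, 0, None))) ** 2)

def worst_case_fidelity(machine, tasks):
    """machine: function mapping an input density matrix to an output one.
    tasks: list of (rho_x, rho_x_prime) pairs specifying the task."""
    return min(fidelity(machine(rho), target) for rho, target in tasks)

# Toy example: a fully depolarizing "machine" benchmarked against the
# identity task on the two computational basis states of a qubit.
ket0 = np.array([[1, 0], [0, 0]], dtype=complex)
ket1 = np.array([[0, 0], [0, 1]], dtype=complex)
depolarize = lambda rho: np.eye(2, dtype=complex) / 2
tasks = [(ket0, ket0), (ket1, ket1)]
print(worst_case_fidelity(depolarize, tasks))  # 0.5
```

For pure targets this reduces to the projector-based score mentioned in the talk, since the fidelity with a pure state is just the overlap with its projector.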
It specifies free states, which are states that are free to obtain, and it specifies free operations, which are operations that cost nothing to implement. Non-free operations then have to be implemented by free operations at the cost of consuming resource states. So in a resource theory, we need to answer how many resource states are needed to accomplish a non-free operation. Specifically, in this work we use the resource theory of thermodynamics, where the free states are Gibbs states at temperature T, and the free operations are Gibbs-preserving maps, or Gibbs-preserving channels, which are channels that send the Gibbs state to the Gibbs state. First of all, why do we choose the Gibbs state as the free state? Because it is the only completely passive state: no matter how many copies of the state are given, you cannot extract work from it. Then why Gibbs-preserving maps? Because otherwise a Gibbs state would be mapped to a non-Gibbs state; you could repeat the process many times and extract work from it, in fact an arbitrary amount of work, and this would lead to the collapse of the whole framework. In general, this specification gives you the most general framework for a resource theory of thermodynamics. This implies that any bound valid in this framework will also be valid in any other framework with further restrictions, so we have the widest applicability. To count the non-equilibrium cost, we also use a battery model called the information battery. The idea is that information is equivalent to energy, and the equivalence is established by Landauer's principle and the Szilard engine. This means that the work cost of a process can be counted by how many pure degenerate qubits in the battery are degraded to maximally mixed ones after the process.
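The defining property of the free operations above is easy to state numerically. The following sketch, with hypothetical helper names of my own, builds the Gibbs state of a diagonal Hamiltonian and checks the Gibbs-preserving condition for two toy channels; it is an illustration of the definition, not of the machines in the talk.

```python
import numpy as np

def gibbs_state(H, beta):
    """Gibbs state exp(-beta H)/Z for a diagonal Hamiltonian,
    given as a 1-D array of energy levels."""
    p = np.exp(-beta * np.asarray(H, dtype=float))
    return np.diag(p / p.sum())

def is_gibbs_preserving(channel, H, beta, tol=1e-9):
    """Check the defining property: the channel sends the Gibbs state to itself."""
    g = gibbs_state(H, beta)
    return bool(np.allclose(channel(g), g, atol=tol))

# Example: a qubit with energy gap E at inverse temperature beta.
E, beta = 1.0, 1.0
H = [0.0, E]
# Full thermalization (replace any input by the Gibbs state) is trivially Gibbs-preserving.
thermalize = lambda rho: gibbs_state(H, beta)
# A bit flip is NOT Gibbs-preserving for a nondegenerate Hamiltonian:
# it swaps the thermal populations.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
bit_flip = lambda rho: X @ rho @ X
print(is_gibbs_preserving(thermalize, H, beta))  # True
print(is_gibbs_preserving(bit_flip, H, beta))    # False
```

The bit-flip example illustrates the talk's point: a non-Gibbs-preserving map turns the free state into a non-equilibrium state, from which work could then be extracted repeatedly.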
So to summarize, we need to find the minimal number of pure qubits dissipated in the battery, which is regarded as the non-equilibrium cost. The machines are Gibbs-preserving, which ensures that no additional resource gets into the process. We minimize over all free operations, over all battery sizes, and over all ancillary spaces, which provide randomness, such that the machine implements the desired task with accuracy at least F. This problem can be cast as a semidefinite program, and using semidefinite-programming techniques we get our main result. Our result is a fundamental bound on the non-equilibrium cost of any possible machine that has accuracy at least F: the non-equilibrium cost is lower bounded in terms of the logarithm of this accuracy and a reverse entropy of the task, a quantity that depends only on the task, not on any specific implementation. The exact definition of the reverse entropy is quite technical and complicated, but we will see a simple case to understand it. In the special case of a completely degenerate Hamiltonian, the Gibbs state becomes the maximally mixed state. To every direct task, rho_x to rho'_x, we associate a reverse task that interchanges the inputs and outputs. Similarly to the definition of accuracy for the forward process, we can define the accuracy of the reverse process. Maximizing over all possible quantum-mechanical implementations, we obtain the highest achievable accuracy of the reverse process, and the reverse entropy is then given by the negative logarithm of this maximal accuracy of the reverse process. So our bound actually connects the forward and the backward process. Now let's see a simple example. For Landauer erasure, we want to transform the completely mixed state to the pure state zero, and the reverse task is to transform the pure state zero to the maximally mixed state.
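For the erasure example, the numbers can be worked through explicitly. This sketch assumes, as suggested by the talk's description, that the bound reads cost >= log2(F) + S_rev in bits, with S_rev = -log2(F_rev_max); the function name and the exact form of the bound are my reconstruction, so treat it as illustrative rather than as the paper's formula.

```python
import numpy as np

def erasure_cost_lower_bound(F, T=300.0):
    """Lower bound on the nonequilibrium cost of erasing one bit with
    worst-case fidelity F, assuming the bound cost >= log2(F) + S_rev
    (in bits), where S_rev = -log2(F_rev_max). For erasure, the reverse
    task (pure state -> maximally mixed state) has maximal accuracy
    F_rev_max = 1/2, so S_rev = 1 bit."""
    k_B = 1.380649e-23           # Boltzmann constant, J/K
    S_rev = 1.0                  # bits, since F_rev_max = 1/2
    cost_bits = np.log2(F) + S_rev
    cost_joules = cost_bits * k_B * T * np.log(2)  # 1 bit = kT ln 2
    return cost_bits, cost_joules

bits, joules = erasure_cost_lower_bound(1.0)   # perfect erasure
print(bits)  # 1.0  (exactly one bit, i.e. kT ln 2 -- Landauer's principle)
bits_imp, _ = erasure_cost_lower_bound(0.99)   # imperfect erasure
print(bits_imp < 1.0)  # True: slightly less than kT ln 2
```

At F = 1 the bound recovers the standard Landauer cost of kT ln 2 per bit, and for F < 1 it drops continuously below it, which is exactly the generalized statement quoted next in the talk.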
The maximal accuracy of the reverse task is calculated to be one half; substituting this into the bound, we have generalized the Landauer principle to imperfect erasure: erasing one bit of information imperfectly costs slightly less than kT ln 2. We can also use our results, our framework, to establish a thermodynamic advantage of processing information with quantum devices. For a given quantum information-processing task with quantum input and quantum output, there are two ways to implement it. One is coherent processing; the other is by classical devices that first measure the input to obtain classical information, process it, and, depending on the result, prepare the desired quantum state. This process is entanglement-breaking, and we call it incoherent processing; incoherent processing costs more energy. (Chair: just a reminder that you have entered the question time.) We derived a more stringent bound, and we proved that for quantum cloning, this particular task, the bound is strictly larger. So we establish a quantum thermodynamic advantage for cloning, that is, for replicating information. To conclude, we establish a fundamental trade-off between non-equilibrium cost and accuracy. The bound is given by what we call the reverse entropy; this bound is tight in many cases, and we also establish the aforementioned quantum advantage. With this, I conclude my talk, and thanks for listening. (Chair:) Okay, thank you very much for this very interesting talk. We will have time for one or two very quick questions. If there are questions... or the question could be posted later. I think there's a question here. (Speaker:) Yes, so we just minimize this; this is written in terms of quantum states. We put the whole problem into this framework, and it becomes a linear-algebra problem of minimizing the quantity such that these transformations are possible. So it is a kind of linear problem, and we find the results. (Chair:) Okay, thank you very much. (Speaker:) Yes, thank you.
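The measure-and-prepare strategy mentioned above can be illustrated with a toy Monte Carlo sketch. This is my own illustration, not the talk's optimal incoherent strategy or its cost bound: it shows only that measuring an unknown qubit in a fixed basis and re-preparing the observed basis state degrades the average fidelity well below 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def measure_and_prepare_fidelity(n_samples=200_000):
    """Toy incoherent 'identity' processing of a qubit: measure an unknown
    pure state in the computational basis and prepare the observed basis
    state. For an input with p = |<0|psi>|^2, the expected fidelity of the
    prepared state with the input is p^2 + (1 - p)^2; for Haar-random pure
    qubit states, p is uniform on [0, 1]."""
    p = rng.uniform(0.0, 1.0, n_samples)
    return float(np.mean(p**2 + (1 - p) ** 2))

print(measure_and_prepare_fidelity())  # close to 2/3
```

The analytic average is 2/3, far from the fidelity 1 a coherent identity channel achieves; this accuracy gap is what drives the extra thermodynamic cost of incoherent processing discussed in the talk.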
Okay, so I would suggest that we now move to the next talk, by Anthony Moulton, who will be speaking about work rates with complexity and computationally restricted thermal defects. So the floor is yours. All right, let me just share my screen.