Franklin Rodriguez, who will talk about non-equilibrium thermodynamics of quantum coherence beyond linear response. So I'll ask you, Ariane, to stop sharing your screen, and Franklin, can you try to share yours? Yes, it's coming up. Does it work? Okay, I'm seeing it now. Please go ahead. Just a second — it opened at an inconvenient point. Okay, here, now it works. Well, thanks for the introduction, and thanks to the organizers for giving me the opportunity to present my work here. The main idea of this talk is to try to answer this question: after all, are coherences useful or not for work extraction? This talk is divided into three parts. Can you see my mouse, just for reference? Yes, I can see the mouse. Thanks. So I divided this into three parts. First, I'll try to make clear what I mean by this question. Then I'll present the toolbox that I used to attack the problem. And then I'll give the result, in which, yes, coherences are useful for work extraction, but it depends on the situation. So I'll basically give a comprehensible set of rules for knowing whether coherences will be useful for that, or whether they will be a hindrance. Breaking the question into parts: first, what do I mean here by work extraction? Well, I have a concrete example, the one that I will use throughout the talk. I have laser light shining into a two-level atom, which is in an excited state, and I want to extract energy from it. There are two ways in which the quantum of energy inside this atom can go away. Either it follows the laser through stimulated emission, for instance; then, since I know the direction in which the light is going, both photons will hit a mirror, the radiation pressure will be bigger, and in the end I can lift a weight with this.
So work, for me, is any operational way, no matter how complicated, to lift a weight in the end. The counterpart to this would be heat, in which the laser goes by but the system undergoes spontaneous emission instead. Then we don't know for certain where the outgoing light is going: maybe it hits the mirror, maybe not. It will just be fluctuation, so we cannot consistently lift the weight with it. Energetic coherence, just as a recap, is the off-diagonal elements of the density matrix in the energy basis. To measure it, I use the relative entropy of coherence, which is the entropy of the state dephased with respect to energy, minus the entropy it had before. Since dephasing always increases entropy, this quantity is always non-negative. And for us it's very useful that it's an entropic quantity, so that we can relate it to other thermodynamic figures of merit such as work, which is what we're trying to see here. By useful or not, I'm really just referring to the second law of thermodynamics. The classical one is this, where work is bounded by the difference of free energy. With coherence, can we do better than this? Can we extract more work than the free energy difference? That is what we're going to talk about. The toolbox used here is mainly that of fluctuation theorems. Fluctuation theorems are a generalization of the second law to fluctuations. Many talks have already covered this, but the second law of thermodynamics is just an average statement: a statement about the average work that can be performed. Fluctuation theorems go beyond this, addressing the fluctuations of work, or any other moment. And it's usually done using the two-point measurement scheme, which I have put here. In the normal protocol I start with a thermal state, and the Hamiltonian of the system is H0. Then I measure the state, projecting it onto one of its energy eigenstates.
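As a minimal numerical sketch (mine, not from the talk) of the measure just described: the relative entropy of coherence is C(ρ) = S(Δ(ρ)) − S(ρ), where Δ dephases the state in the energy basis. The function names are my own.

```python
import numpy as np

def entropy(rho):
    # von Neumann entropy from eigenvalues, skipping numerical zeros
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return -np.sum(ev * np.log(ev))

def rel_entropy_coherence(rho):
    # C(rho) = S(dephased rho) - S(rho); dephasing keeps only the
    # diagonal in the energy basis, so C is always non-negative
    return entropy(np.diag(np.diag(rho))) - entropy(rho)

# The pure superposition (|0> + |1>)/sqrt(2) is maximally coherent
plus = np.full((2, 2), 0.5)
print(rel_entropy_coherence(plus))  # ln 2, about 0.693
```

For any state that is already diagonal in the energy basis the two entropies coincide and C vanishes, which is the sense in which a thermal state is "classical" here.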
Then I take this eigenstate i, drive the system to H tau, and measure again, obtaining j. The backward process is basically the same, where I just measure j first and i afterwards, and the Hamiltonian is driven in the opposite direction — there's a minus sign missing here, but I just run the reverse protocol, also reversing the Hamiltonian. The fluctuation theorem is a statement relating these two probability distributions, the detailed ones at least, where in the ratio this term cancels because of micro-reversibility of the evolution; here I'm assuming that it's unitary. The ratio is then just the exponential of the stochastic entropy production. I can also sum over all indices, and then I have the integrated version of this. Applying Jensen's inequality, I get back to the second law of thermodynamics. And this result doesn't care whether the evolution is classical or quantum; it will be the same. So a naive interpretation is that quantum mechanics has nothing to offer in terms of the second law. But although the evolution is quantum, the state that we used for the evolution is classical: it's just a thermal state. And with the two-point measurement scheme we cannot go further than this, because if we have a state that has coherences in the energy basis, as in Necolse's talk, the strong measurement will erase all of these off-diagonal elements. There are some ways to go around this. One is using weak measurements, and then you get the Kirkwood-Dirac quasi-probability distribution, which was Necolse's approach. My approach is a bit different. Instead of measuring the energy in any way, I will make inferences about the energy. I will instead measure the eigenstates of the system, as I show here; I will come back to this shortly. So I first measure the eigenstate of my system, let it evolve, and then measure it again.
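The two-point measurement chain just described can be checked numerically. Below is a sketch for a two-level system with toy Hamiltonians and an arbitrary drive unitary (all values are mine, for illustration): the ratio of forward and backward trajectory probabilities equals exp(σ) with σ = β(W − ΔF), and summing recovers the integrated version.

```python
import numpy as np

beta = 1.0
H0 = np.diag([0.0, 1.0])   # initial Hamiltonian (toy values)
Ht = np.diag([0.0, 1.5])   # final Hamiltonian after the drive

def gibbs(H):
    w = np.exp(-beta * np.diag(H))
    return w / w.sum()

# Any unitary works: micro-reversibility only requires that the
# backward protocol uses the adjoint evolution
th = 0.7
U = np.array([[np.cos(th), -np.sin(th)],
              [np.sin(th),  np.cos(th)]])

p0, pt = gibbs(H0), gibbs(Ht)
Z0 = np.exp(-beta * np.diag(H0)).sum()
Zt = np.exp(-beta * np.diag(Ht)).sum()
dF = -np.log(Zt / Z0) / beta   # free-energy difference

for i in range(2):
    for j in range(2):
        p_fwd = p0[i] * abs(U[j, i])**2           # measure i, drive, measure j
        p_bwd = pt[j] * abs(U.conj().T[i, j])**2  # reversed protocol
        sigma = beta * (Ht[j, j] - H0[i, i] - dF) # stochastic entropy production
        assert np.isclose(p_fwd / p_bwd, np.exp(sigma))
print("detailed fluctuation theorem holds for all trajectories")
```

Summing p_fwd · exp(−σ) over all (i, j) gives exactly 1, the integral fluctuation theorem, from which Jensen's inequality yields ⟨σ⟩ ≥ 0, i.e. the second law.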
But the eigenstates of the system have a relation to the energy basis, which is something that I measured before. For instance, I prepare the system many times and measure its energy, so I know the probability of having i given that I measured alpha before; alpha is in the eigenbasis, and eigenbasis states are always Greek letters here. With that in my pocket, I can write down the whole probability distribution here, where alpha and beta are the actually measured states, and i and j are states inferred from the information that I had before. And this looks a lot like the formalism of Bayesian networks, which is the method that we use to do this. Basically, I have all of these random variables here — x, y, a, and b — and they are all connected somehow: x leads to a with a given probability, x leads to y, and y leads to b. The point of Bayesian networks is that we make a clear distinction between the variables we have access to and the ones we do not. Here, for instance, in a typical setting, I would say that x and y are diseases. But diseases are something we cannot measure directly: we don't have a measurement device that tells me whether I have cancer or not, because the "measurement" would be the cancer itself. I always have to look at symptoms, and the symptoms are the things we can measure. By building models that relate x to a and y to b, I can make predictions about x and y just by measuring a and b. This is what we do in our setting. These are the eigenstates of the system, which, since this is a theoretical work, we say that we measure directly; but there are ways to infer the eigenstate of the system by making measurements only in bases that you can access. And these are the observed states, as I just said.
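The disease/symptom logic above is just Bayes' rule on a small network x → a, x → y, y → b. Here is a toy sketch with numbers of my own choosing: the joint distribution is the product of one conditional factor per arrow, and the hidden variable x is inferred from the observed symptoms a and b.

```python
import numpy as np

# Toy Bayesian network x -> a, x -> y, y -> b (all numbers hypothetical)
p_x = np.array([0.6, 0.4])             # P(x), hidden "disease" variable
p_a_x = np.array([[0.9, 0.2],          # P(a|x): rows a, columns x
                  [0.1, 0.8]])
p_y_x = np.array([[0.7, 0.3],          # P(y|x)
                  [0.3, 0.7]])
p_b_y = np.array([[0.8, 0.1],          # P(b|y)
                  [0.2, 0.9]])

# Joint distribution: one conditional factor per arrow of the network
joint = np.einsum('x,ax,yx,by->xyab', p_x, p_a_x, p_y_x, p_b_y)

# Infer the hidden x from the observed "symptoms" a=0, b=0
p_x_given_ab = joint[:, :, 0, 0].sum(axis=1)   # marginalize over hidden y
p_x_given_ab /= p_x_given_ab.sum()             # Bayes' rule normalization
print(p_x_given_ab)
```

In the talk's setting the hidden layer holds the inferred energy labels i, j and the observed layer holds the measured eigenbasis outcomes alpha, beta, but the bookkeeping is identical.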
The backward evolution — coming back to this — is built the standard way one constructs the probability distribution of any Bayesian network: you take the parent, assign it a probability, and then every arrow contributes a conditional probability distribution. You have about five minutes left, including questions. Okay, okay. So this is the forward process, and the backward process is basically the same; I'm just inverting this arrow here. Then I can take the ratio of these two, and I arrive at a new detailed fluctuation theorem, where I have another term here that relates to the coherences, and this other term here that relates to the athermality of the state, for when the state has no coherences but is not thermal. Then I can apply Jensen's inequality again, and this becomes the difference in the relative entropy of coherence; these are just the stochastic versions of these quantities here. So basically, if this sum here is smaller than zero, then I can extract more work than would be possible with just the free energy difference. The athermality is something that you don't need quantum mechanics to have, so I will drop it for a while; we can assume that it is very small. If the coherences are zero initially, then Delta C will always be greater than or equal to zero, so coherences generated during the drive only hurt — this is what is called inner friction in the community. So this is important: from now on we're always talking about initial coherences, because the case in which they don't exist is trivial. So, let's go to the explanation. In the unitary case, for adiabatic evolutions, we just need to notice that the energy eigenstates only acquire phase shifts. So the coherences always stay the same in the adiabatic limit; they just pick up the relative phase of the two energy levels.
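To make the chain of steps explicit, here is a hedged reconstruction of the passage from the generalized fluctuation theorem to the modified second law; the exact notation in the paper may differ.

```latex
% Generalized integral fluctuation theorem (sketch):
\left\langle e^{-\sigma - \delta C - \delta A} \right\rangle = 1
% where \delta C and \delta A are the stochastic coherence and
% athermality terms. Jensen's inequality for the convex exponential:
\langle \sigma \rangle + \Delta C + \Delta A \geq 0,
\qquad \sigma = \beta\,(w - \Delta F)
% Dropping the (classical) athermality term \Delta A and averaging:
\beta \langle W \rangle \geq \beta\,\Delta F + \Delta C
% so when \Delta C < 0, the extracted work -\langle W \rangle can
% exceed the classical bound -\Delta F.
```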
And thus, if we want Delta C to be negative, we need to go beyond the adiabatic limit; otherwise it's just zero — the coherences simply don't change, they're locked. This is the example again, with the Hamiltonian shown as red lines. In this adiabatic case you can see that the red line is exactly there, so Delta C is equal to zero. But in this case here, where the evolution is a bit faster than that, I have these regions where Delta C is negative, and then I can extract work; every point here, by the way, is at half the Rabi period, so it's always at the maximum work extraction. So for unitary evolutions it's pretty easy to know whether the coherences will be useful: the condition is simply non-adiabaticity. Here is a graph again, showing a specific point in the evolution, and you can see that the maximum work extraction — negative here means extracted — happens only when there are coherences; without coherences you trivially do worse. And the maximum work extraction goes way beyond the linear-response regime: here we plot the linear-response result with initial coherences, which is work to be published, and we can see it goes way beyond that. When the dots lie on top of the red line, it just means that beta W is equal to this bound, and hence there is no entropy production, which is expected in the unitary case. Now, in the non-unitary case, with an environment, the situation is a bit different. This is the same graph as before, but now we couple to the environment — there's this gamma here — and you can see that now the blue dots don't lie on top of the red line. So there is entropy production, and almost all of it is due to Delta C. We can see that the irreversibility here really comes from the change in the relative entropy of coherence of the system.
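A tiny sketch (my own toy model, not the talk's exact drive) of why non-adiabaticity is needed: a sudden rotation of a maximally coherent qubit can convert all of its energetic coherence into population, making Delta C negative, whereas the identity (adiabatic-limit stand-in) leaves it untouched.

```python
import numpy as np

def coherence(rho):
    # Relative entropy of coherence: S(dephased) - S(rho)
    ev = np.linalg.eigvalsh(rho); ev = ev[ev > 1e-12]
    d = np.real(np.diag(rho));    d = d[d > 1e-12]
    return -np.sum(d * np.log(d)) + np.sum(ev * np.log(ev))

# Maximally coherent qubit state |+> in the energy basis
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho0 = np.outer(plus, plus.conj())

def U(theta):
    # Non-adiabatic rotation about y by angle theta (hypothetical drive)
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]], dtype=complex)

for theta in [0.0, np.pi/8, np.pi/4]:
    rho = U(theta) @ rho0 @ U(theta).conj().T
    print(round(coherence(rho) - coherence(rho0), 3))
# Delta C reaches -ln 2 at theta = pi/4: coherence fully converted
```

At theta = 0 (coherences locked) Delta C is zero; at theta = pi/4 the state is rotated onto an energy eigenstate and Delta C = −ln 2, the most favorable case in this toy picture.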
This means that the relative entropy of coherence is going into the mutual information inside the bath. So if we want to use the coherence, we really need to make sure that the coherence time scale is much bigger than the work-extraction time scale, so that we can extract the coherences before the environment takes them. It's like the environment is another observer, also trying to extract the work, but extracting it to itself. So, to put the main message positively: here you can see simulations for different coherence time scales. If the coherence time scale is much bigger than the work-extraction time scale, then you extract work pretty much on top of the half Rabi period, as you would expect from the unitary case. But as it gets closer and closer to this time scale, you extract less and less work, and always a bit earlier than the half Rabi period, so it also becomes harder to do. Okay, at this point — okay, thank you. This is just the conclusion. Exactly. The conclusion is that Delta C negative is necessary but not sufficient: you must extract the work before the coherence decays, and the evolution must be non-adiabatic. The takeaway message is that coherences are good as long as you extract them before the environment does it first. That's it, thank you. Okay, thank you for the very nice talk. Maybe one quick question, and in the interest of time I'll just read it from the chat. Does your formalism work for quantum correlations or only for coherences? It works for quantum correlations in general. This work here is not published yet, but it's on the arXiv, and we are also putting on the arXiv the case in which you have initial correlations with the environment.
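The competition between the two time scales can be illustrated with a toy open-system simulation (mine, not the talk's model): a resonant Rabi drive in the rotating frame plus pure dephasing at rate gamma, integrated with a simple Euler step. The residual coherence after one half Rabi period shrinks as the dephasing rate approaches the drive rate.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def simulate(gamma, Omega=1.0, T=np.pi, dt=1e-3):
    # Rabi drive H = Omega*sx/2 plus pure dephasing in the energy
    # basis, Lindblad form gamma*(sz rho sz - rho), Euler-integrated
    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
    rho = np.outer(plus, plus.conj())
    H = Omega * sx / 2
    for _ in range(int(T / dt)):
        drho = -1j * (H @ rho - rho @ H) + gamma * (sz @ rho @ sz - rho)
        rho = rho + dt * drho
    return rho

for gamma in [0.0, 0.1, 1.0]:
    rho = simulate(gamma)
    # magnitude of the coherence left after one half Rabi period
    print(round(abs(rho[0, 1]), 3))
```

With gamma = 0 the coherence survives in full (0.5 for this state); as gamma grows toward the Rabi frequency the environment wins the race and almost nothing is left to extract, matching the talk's message.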
This Bayesian network framework pretty much allows you to deal with these quantum correlations in general: you always have these snapshots of where the entropic quantities are coming from and going to. This is nice. Okay, thank you very much. We don't have time now for more questions. If you could stop sharing your slides, I'll have the next speaker, Matthew Jerry, share his slides and title