So if you can un-share your screen, I will share mine and present the talk. Yes, good. My talk will be about a recent result on the non-equilibrium thermodynamics of uncertain stochastic processes. You can find this presentation at my website, slides.com, and it is related to a recent paper posted to arXiv. I just submitted a revised version that is hopefully easier to understand and has some more examples; it will appear tomorrow, and I would be glad if you went through it.

So what is the basic idea? In stochastic thermodynamics, but really in the modeling of any system, we have a system of interest, and we acknowledge that there is some uncertainty in its states. If we have a trajectory, or a set of states, we model it through the theory of probability: there is a probability distribution over the state, or over the trajectory of the system, and so on. On the other hand, the system is typically coupled to a heat bath or a particle bath, and it has some parameters, and we very often assume that we know these parameters (temperature, chemical potentials, and so on) with infinite precision. We hypothesize that we know them exactly. But in reality, an experimenter who wants to run such an experiment tries to measure the temperature, and there is some uncertainty; or they prepare the system, and of course the device is not perfect, so the temperature can slightly change; or there are external influences that we cannot control, which might, for example, change the temperature of our system. So there is an asymmetry between two very different treatments: for the system itself we acknowledge the uncertainty and do not model it deterministically, while for the bath and the system parameters we demand exact knowledge. In our case, we take the standpoint of typical experimenters, who do not know the exact value of many things: the number of heat baths, the temperatures, the chemical potentials, the energy spectrum, the control protocols, the transition rates, the initial distributions. There may be various reasons for that, but in general these uncertainties about the system and its environment are worth investigating, also from the practical point of view.

So let's start with a simple example. Let's have a system with three states, say with different energies E0, E1, E2, that evolves stochastically, so there are transition rates between the three states. We couple the system to three apparatuses, and these apparatuses affect the exact form of the transition rates. The question is: do we know the apparatus, or can we somehow say something about it? A toy version of this setup, and of the two measurement scenarios discussed next, is sketched below.
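To keep a concrete picture in mind, here is a minimal, entirely made-up numerical sketch of this setup. The energies, temperatures, and rate parametrization are all invented for illustration; only local detailed balance is respected, and the "effective"/"phenomenological" labels anticipate the two scenarios explained next.

```python
import numpy as np

# Made-up three-state system: the "apparatus" is a bath temperature that
# sets the transition rates via local detailed balance (k_B = 1).
rng = np.random.default_rng(1)
E = np.array([0.0, 1.0, 2.0])            # energies E0, E1, E2 (invented)
temps = [0.8, 1.0, 1.2]                  # three possible apparatuses (invented)

def rate_matrix(T):
    """Rates R[i, j] from state j to state i, obeying local detailed balance."""
    R = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            if i != j:
                R[i, j] = np.exp(-(E[i] - E[j]) / (2 * T))
    np.fill_diagonal(R, -R.sum(axis=0))   # columns of a generator sum to zero
    return R

def sample_final_state(T, t_f, x0=0):
    """One experimental run: Gillespie simulation, return the state at t_f."""
    R, x, t = rate_matrix(T), x0, 0.0
    while True:
        out = R[:, x].copy()
        out[x] = 0.0                      # escape rates from the current state
        t += rng.exponential(1.0 / out.sum())
        if t > t_f:
            return x
        x = rng.choice(3, p=out / out.sum())

# "Effective" scenario: the apparatus is known, so we histogram per apparatus
# and may average the per-apparatus distributions afterwards.
per_app = [np.bincount([sample_final_state(T, 1.0) for _ in range(2000)],
                       minlength=3) / 2000 for T in temps]
print("per-apparatus estimates:", per_app)
print("their average:          ", np.mean(per_app, axis=0))

# "Phenomenological" scenario: an unknown apparatus is redrawn on every run,
# so only the apparatus-averaged distribution is measurable.
phen = np.bincount([sample_final_state(rng.choice(temps), 1.0)
                    for _ in range(6000)], minlength=3) / 6000
print("phenomenological estimate:", phen)
```

Both estimates converge to the same apparatus-averaged distribution; the point is that only the first scenario also gives access to the per-apparatus conditional distributions.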
So now we want to measure the distribution at some final time, tf, and there are two possible scenarios. Either we know the apparatus, and then we can run the experiment many times: we let the experiment run, stop it at tf, measure the position of the particle, and repeat until we get a decent estimate of the probability distribution. In that case, knowing the apparatus is the same for each run, what we measure is the conditional distribution (of the energy, for example) given the apparatus. This is what we call the effective scenario: we can do many measurements for apparatus alpha 1, many for apparatus alpha 2, many for apparatus alpha 3, and for each of these we can measure the probability distribution. If we want, we can then also average over the apparatuses, weighting the per-apparatus distributions by the probability of each apparatus.

But very often we don't know the apparatus exactly, and an apparatus is effectively chosen at random each time we run the experiment: the temperature slightly changes because the sun is shining on our experiment, or some air conditioning changes it, something we don't know. In that case we can only measure, let's say, the average over all the apparatuses at once. This is what we call the phenomenological scenario.

One might think: in one case I can measure the quantity for each apparatus, in the other I cannot, but does it change the experiment? The answer is yes, it might. When we know that we are coupled to a single apparatus, we can adapt our control protocol, for example; we can adjust the experiment to the conditions we have. Let's take the simple example of a moving optical tweezer, which is a quadratic potential whose stiffness is k. Say we want to move the trap from zero to some finite value lambda f in time tf, and we choose the optimal protocol, the one that minimizes the average work. If we know the apparatus, that is, the stiffness, then we can change the protocol for each value of the stiffness. If we do not know it, then we might, for example, measure the stiffness many times and just use the average value, thinking the spread is due to the imprecision of the measurement. But the imprecision does not have to be due to the measurement; it can be due to the fact that the real stiffness changes in every run. So one can calculate the work in what we call the adapted scenario, where we can change the protocol for each stiffness, and in the unadapted one, where we have to use a single protocol for all cases. As expected, there is some extra dissipated work that we are spending in the unadapted scenario, because we cannot adapt our protocol to the environment. The question is then: what is the optimal protocol for the unadapted scenario, and how can we deal with these quantities? A small numerical sketch of this comparison follows.
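As a hedged numerical sketch of the adapted-versus-unadapted comparison: for an overdamped particle in a moving harmonic trap V(x, t) = k (x - lam(t))^2 / 2 with unit mobility, the finite-time protocol minimizing the mean work is known to be linear with jumps at both ends (the classic result of Schmiedl and Seifert), with minimal mean work lam_f^2 / (t_f + 2/k). The stiffness distribution and all numbers below are invented, and the construction is mine, not the paper's.

```python
import numpy as np

# Toy adapted-vs-unadapted comparison for a dragged overdamped harmonic
# trap V(x, t) = k (x - lam(t))^2 / 2, mobility = 1.
lam_f, t_f = 1.0, 1.0
ks = np.random.default_rng(0).uniform(0.5, 2.0, 400)   # uncertain stiffness

def w_adapted(k):
    """Minimal mean work when the protocol is tuned to the true stiffness."""
    return lam_f**2 / (t_f + 2.0 / k)

def w_unadapted(k, k0, n=1000):
    """Mean work when the protocol optimal for k0 acts on stiffness k.

    Mean work only involves the mean position <x>, which obeys
    d<x>/dt = k (lam(t) - <x>); dW = k (lam - <x>) dlam, plus the two jumps.
    """
    a = lam_f / (t_f + 2.0 / k0)         # drag speed of the k0-protocol
    dt = t_f / n
    x = 0.0
    w = 0.5 * k * (a / k0) ** 2          # initial trap jump (from <x> = 0)
    for i in range(n):
        t = (i + 0.5) * dt
        lam = a * (t + 1.0 / k0)         # k0-optimal protocol, linear part
        w += k * (lam - x) * a * dt      # work done while dragging
        x += k * (lam - x) * dt          # mean position relaxes to the trap
    lam_end = a * (t_f + 1.0 / k0)
    w += 0.5 * k * ((x - lam_f) ** 2 - (x - lam_end) ** 2)  # final jump
    return w

k0 = ks.mean()                           # heuristic: tune to the mean stiffness
adapted = np.mean([w_adapted(k) for k in ks])
unadapted = np.mean([w_unadapted(k, k0) for k in ks])
print(f"adapted   <W> = {adapted:.4f}")
print(f"unadapted <W> = {unadapted:.4f}, extra dissipation {unadapted - adapted:.4f}")
```

Running this gives a strictly larger unadapted average work. Note also that tuning the single protocol to the mean stiffness is only a heuristic; it connects to the question at the end of the talk about whether the average parameter is the optimal choice (typically it is not).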
Let's start with a simple definition: we have a set of apparatuses. We use this one generic notion, "apparatus", but it can really stand for a set of temperatures, a set of chemical potentials, whether the system is coupled to one reservoir or to multiple reservoirs, and all such details. The only assumption is that for each apparatus the system satisfies local detailed balance, so that we can interpret the quantities from the thermodynamic point of view. We then consider a probability distribution over the apparatuses, and of course the effective value of any thermodynamic quantity is simply its average over all apparatuses. The effective distribution can be calculated and fulfills an evolution equation, but unfortunately a non-Markovian one. This is the first complication in our case: the averaging over many apparatuses creates a coupling between the probability distribution of the apparatus and the probability distribution of the system, so in the effective picture the evolution is no longer Markovian.

So we can investigate the first and second laws of thermodynamics. The effective first law is as expected: the effective change in the internal energy equals the effective heat plus the effective work. The decomposition of the entropy change into an entropy production term and an entropy flow term also still works, and the expected entropy production is non-negative. The problem is the expected entropy flow: normally it equals the inverse temperature times the heat flow, but that is no longer valid, because the expected entropy flow is the average over the apparatuses of beta times Q (this is for the case of one heat bath). So there is no explicit relation between the entropy flow and the heat flow, which is the cornerstone of the second law of thermodynamics, and here the thermodynamic interpretation becomes a bit more complicated.

OK, so back to the two scenarios. In the adapted scenario, for each apparatus we choose the optimal protocol, the one minimizing the work, and then we average over apparatuses. In the unadapted scenario we unfortunately cannot do that; we have to use a single protocol. The difference between the two is then the difference between the work in the unadapted and in the adapted scenario, and this is what we can call the extra dissipated work due to the lack of knowledge of the apparatus.

Let's now focus on one very specific scenario, because one of the possible uncertainties is the uncertainty about the initial state. So say everything else is fixed and known: temperatures are known, chemical potentials are known, everything is known, except for the initial distribution. We have a set of possible initial distributions, each appearing with some probability, and there is a distribution that minimizes the expected work. For any other initial distribution, there is a formula for how much its dissipated work exceeds the minimum dissipated work achieved by the optimal distribution.
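In formulas (my schematic notation, not necessarily the paper's): writing q for the optimal initial distribution and p for any other one, with primes denoting the corresponding final distributions under the same fixed protocol, the standard mismatch-cost result reads

$$
\beta\,\big(W_{\mathrm{diss}}(p)-W_{\mathrm{diss}}(q)\big)
\;=\; D_{\mathrm{KL}}\big(p\,\|\,q\big)\;-\;D_{\mathrm{KL}}\big(p'\,\|\,q'\big),
$$

i.e. the extra dissipation is the drop in the KL divergence between the actual and the optimal distribution over the course of the process.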
This drop in the KL divergence is what is called the mismatch cost: it tells us the difference between the dissipated work for a non-optimal initial distribution and for the optimal one. And one can show a quite interesting fact: the average dissipated work, averaged over all possible initial distributions, can be calculated as a so-called Jensen-Shannon divergence, where we take the KL divergence of each distribution from the average distribution and weight these divergences by the probabilities of the distributions. So we can easily calculate the average dissipated work; it is just this Jensen-Shannon divergence.

The last thing I want to talk about is that also in this case we can formulate fluctuation theorems, and we can decompose the entropy production into two interesting terms. Let us now place ourselves in the phenomenological approach, where we cannot measure the trajectory probabilities for each particular apparatus, only the trajectory probabilities averaged over all apparatuses. We denote the trajectory probabilities as on the slide. Of course, the effective ensemble entropy production is the KL divergence between the forward trajectory probability and the backward one; this is the standard result. By using the chain rule for the KL divergence, we can show that it decomposes into two terms: the KL divergence of the apparatus-averaged forward trajectory probability with respect to the apparatus-averaged backward one, and a term where the roles of x and alpha are exchanged, involving the probability of observing the apparatus alpha given the trajectory.

So we now have three types of entropy production. One is the effective one, averaged over all apparatuses. One is the phenomenological one, the entropy production of the averaged trajectories. And one is something that we call the likelihood entropy production. If you look at the last one, it is interesting that you do not calculate the probability of the trajectory given the apparatus; you calculate the probability of the apparatus given the trajectory. You do the reverse thing: you are learning about the apparatus from observing the trajectory.

For the phenomenological entropy production, you can rewrite the KL divergence in terms of trajectory quantities, and you get a trajectory entropy production for which it is simple to show a detailed fluctuation theorem. The conclusion is that the phenomenological entropy production is a lower bound on the full effective entropy production: by observing only the trajectory probabilities averaged over all apparatuses, you can estimate a lower bound on the effective entropy production. For the second quantity, the KL divergence of P(alpha|x), you can also define a likelihood trajectory entropy production. If you look at it, it really looks like a likelihood function: you are calculating the probability of the parameters given the trajectory, so the trajectory is your data and alpha are your parameters. And again, it fulfills a detailed fluctuation theorem.
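Schematically, in notation I am introducing here (Gamma a trajectory, tilde-Gamma its time reversal, P(Γ,α) = P(α)P(Γ|α) and similarly for the backward process; this is my reading of the slides rather than the paper's exact notation): the weighted Jensen-Shannon divergence mentioned above is

$$
\mathrm{JS}_{\pi}(p_1,\dots,p_n)\;=\;\sum_i \pi_i\, D_{\mathrm{KL}}\big(p_i\,\|\,\bar p\big),
\qquad \bar p=\sum_i \pi_i\,p_i,
$$

and the chain rule for the KL divergence gives the announced decomposition of the effective entropy production:

$$
\langle\Sigma\rangle_{\mathrm{eff}}
= D\big[P(\Gamma,\alpha)\,\|\,\tilde P(\tilde\Gamma,\alpha)\big]
= \underbrace{D\big[P(\Gamma)\,\|\,\tilde P(\tilde\Gamma)\big]}_{\text{phenomenological}}
\;+\;
\underbrace{\big\langle D\big[P(\alpha\mid\Gamma)\,\|\,\tilde P(\alpha\mid\tilde\Gamma)\big]\big\rangle}_{\text{likelihood}}.
$$

Both terms are non-negative, which is exactly why the phenomenological entropy production lower-bounds the effective one. And here is an entirely made-up numerical sketch of the "learn the apparatus from the trajectory" idea, anticipating the two-state example discussed next. The energies, rates, prior range, and the example trajectory are all invented, and the construction (an equilibrium-initialized process with its time reversal) is mine; the final line is the forward-posterior average of the likelihood entropy production, which is a KL divergence and hence non-negative.

```python
import numpy as np

# Two-state system (energies 0 and E) at unknown temperature T (k_B = 1).
# Local-detailed-balance rates: r(0->1) = G exp(-E/2T), r(1->0) = G exp(+E/2T).
E, G, t_f = 1.0, 1.0, 4.0
Ts = np.linspace(0.5, 3.0, 400)          # grid for a uniform prior over T
dT = Ts[1] - Ts[0]

def log_path_likelihood(states, times, T):
    """log P(trajectory | T); states[i] is occupied from times[i] onwards,
    and the initial state is drawn from equilibrium at temperature T."""
    rate = {0: G * np.exp(-E / (2 * T)),  # escape rate out of state 0
            1: G * np.exp(+E / (2 * T))}  # escape rate out of state 1
    p0 = 1.0 / (1.0 + np.exp(-E / T))     # equilibrium probability of state 0
    ll = np.log(p0 if states[0] == 0 else 1.0 - p0)
    bounds = list(times[1:]) + [t_f]
    for s, t0, t1 in zip(states, times, bounds):
        ll -= rate[s] * (t1 - t0)         # survival factor between jumps
    for s in states[:-1]:
        ll += np.log(rate[s])             # one rate factor per actual jump
    return ll

# A made-up forward trajectory 0 -> 1 -> 0 -> 1 and its time reversal.
states, times = [0, 1, 0, 1], [0.0, 0.7, 2.1, 3.5]
rev_states = states[::-1]
rev_times = [0.0] + [t_f - t for t in times[:0:-1]]

lf = np.array([log_path_likelihood(states, times, T) for T in Ts])
lb = np.array([log_path_likelihood(rev_states, rev_times, T) for T in Ts])
post_f = np.exp(lf - lf.max()); post_f /= post_f.sum() * dT   # P(T | x)
post_b = np.exp(lb - lb.max()); post_b /= post_b.sum() * dT   # P(T | reversed x)

lam = np.log(post_f / post_b)             # likelihood "entropy production"
print("conditional average:", (post_f * lam).sum() * dT)      # >= 0
```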
What is interesting is that from the integrated fluctuation theorem you get that the average likelihood entropy production, conditioned even on a single trajectory, is non-negative. What this means is that the detailed fluctuation theorem tells us how much we learn about the parameters from observing the forward trajectory versus how much we learn from observing the backward trajectory in the backward scenario: it quantifies the difference between observing the trajectory forward and learning from it, and observing the trajectory backward and learning the parameters from it.

Here is one example of how this can be calculated. Say we have a distribution of temperatures and a simple system with two states. We observe a trajectory, which is a sample at some unknown temperature, and we can then update the probability of the temperature given the observed trajectory x. Let's say the prior distribution over temperatures is uniform from zero to three. From the observed trajectory we can calculate P(T|x): this is the probability of the temperature given the forward trajectory, this is the probability of the temperature given the backward trajectory, and from these we can calculate the likelihood entropy production, again conditioned on x. You can see that for some temperatures it takes slightly negative values, but the overall average is positive. Here it is much larger than zero; there are some temperatures where it is smaller than zero, but only in a small region.

I didn't have time to go through all of the results in the paper. There is also a section on maximum work extraction with uncertain temperatures, and one on the dynamics of the thermodynamic value of information when some thermodynamic parameters are uncertain. And this is just a very first step towards a whole new field of processes with uncertain environments: one can consider systems with uncertain energy spectra, experiments with uncertain control protocols, or extensions to time-dependent apparatuses, where you say that your temperature can change in time but you don't know how, you only have some probabilistic equation for it, and you can maybe try to calculate the trajectory probabilities of these temperatures. There is also the extension to hidden Markov models, which would be very convenient for several systems. And that's basically all from me, so thank you very much. Now it's time for questions; I will try to open the list of participants, and please raise your hand if you have a question.

Yes. Hello, thank you for the very wonderful talk. At the end you were talking about the case where the environment is not only uncertain but is itself fluctuating. I was wondering if you have thought at all about environmental parameters that are not only uncertain or fluctuating in time, but are themselves affected by the dynamics of the system.

Yeah, that would be another interesting step. Here I just want to mention that the difference from the whole great field of, let's say, time-dependent temperatures or time-dependent parameters is that we consider the parameters to be fixed, or maybe even time-dependent, but we do not know their exact value.
So in that case there would be another great interplay between the system and the environment, where you know only part of the environment, the part that is your system, and you have only some information about the rest, and there can be some feedback loop, for example. We haven't considered that; it would be a great future step. Thank you very much.

Thank you. Are there any more questions? I would have a question. So, you showed these two scenarios, right? One is the adapted scenario, where you measure, you know what the apparatus is, and then you carry out the optimal protocol for it; the other is where you only know the average, and then you do your work extraction for the average. But I was wondering: suppose you know the distribution of alpha, and then you do your experiment, to extract work for example, but you are only allowed to choose, let's say, one alpha from that distribution. Is the optimal alpha necessarily the average?

Yes, so that's the thing: what we see typically is that choosing just the average is not the optimal one. The same is true for the protocol. I didn't have time for it here, but in the paper we have a simple example of bit erasure, where you couple the system to a bath of uncertain temperature and then choose the protocol that erases your bit. It shows that the optimal protocol is not a simple average of the two protocols, nor does it correspond to the average of the temperatures or anything like that. It is really very complicated and in general very difficult to find.

I see, yes, thank you.

Yes. Are there some more questions? Maybe I can also mention that the next interesting step would be the case where you don't know the apparatus, you observe the trajectory, you try to estimate the apparatus, and then you try to adapt the protocol according to the information you gained from observing the trajectory. In this case you would have a feedback loop, and there would be a natural question: what is the trade-off between observing the trajectory and changing the protocol? This is the well-known exploration-versus-exploitation problem from reinforcement learning. That would be a great next step, exactly.

Rohit was the next one. Yeah, my question is pretty simple: I didn't understand how you got the non-Markovian feature through the averaging in the beginning, when you are averaging over the effects of all the apparatuses.

Yeah, so the point is that if you look at the transition equation: if this equation were Markovian, on the right-hand side you would have some rate matrix times the average distribution. Unfortunately, you cannot decouple the integral into a product of some rate matrix times the average distribution; the integral of a product is not the product of integrals, and that is why you get this non-Markovianity in general.
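As a tiny made-up numerical illustration of this answer: the effective distribution is p_eff(t) = sum_alpha P(alpha) p(t|alpha), so the effective propagator is the apparatus average G(t) = sum_alpha P(alpha) exp(R_alpha t). A Markovian evolution would satisfy the semigroup property G(t+s) = G(s) G(t), which the averaged propagator violates; the rates below are invented.

```python
import numpy as np
from scipy.linalg import expm

def generator(k01, k10):
    """Two-state rate matrix for d p / dt = R p (columns sum to zero)."""
    return np.array([[-k01, k10],
                     [k01, -k10]])

R = [generator(1.0, 2.0), generator(5.0, 0.5)]  # two made-up apparatuses
w = [0.5, 0.5]                                  # their probabilities

def G(t):
    """Effective propagator: apparatus-averaged matrix exponential."""
    return sum(wi * expm(Ri * t) for wi, Ri in zip(w, R))

t = s = 0.3
print(G(t + s))       # what the averaged dynamics actually does
print(G(s) @ G(t))    # what a memoryless (Markovian) process would do
```

The two printed matrices differ, which is the "integral of a product is not the product of integrals" point in matrix form.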
Okay, okay, thanks.

Okay, so the next question, by Sungela Chen.

Hi, my name is Sangela, thank you for the talk. Could you repeat: what are the apparatuses? Are they just different baths that you can couple to?

Yes, yes. We use this abstract term for any kind of parameter that you have. It can be the temperature of a bath, it can be a chemical potential, it can even be the stiffness parameter: everything that is a parameter of your experiment.

Then I'm a little confused about what the protocol is. If an apparatus is some parameter, is choosing different apparatuses, meaning choosing different parameters in some order, what you call your protocol?

In this case the protocol means changing the energy spectrum so that the system does what you want. And of course you do it in some way depending on the parameters, but if you do not know them, you have to somehow deal with all of the scenarios that might happen.

Okay, thank you.

And, okay, the last public question, by John. Yeah, thanks. If I understand correctly, the uncertainties that you are considering are all parametric, in the sense that I have a stiffness of a potential and I don't know it. Have you thought about less structured types of uncertainties? Maybe I have a potential and it's almost parabolic, but it differs a little bit in shape.

Sure. So for example you have a class of potentials, and they are parametrized. At the end of the day you can always parametrize them somehow, or at least label them if you want; this is the typical approach. In our case the label can be discrete or continuous, both work. So basically you have some set of, let's say, potentials or apparatuses, and you average over them. And this goes a little bit in the direction of a stochastic control protocol; it would be interesting to see how this whole business changes when you don't know the parameters exactly and you have some uncertainty about them.

Great, thank you. I think we are going to the next talk, so thank you again. The next speaker is Elias, from the Max Planck Institute, and