Today I'm going to be talking about energy harvesting from anisotropic fluctuations. This is joint work with Amirhossein Taghvaei, Rui Fu, Yongxin Chen, and Tryphon Georgiou. Before I start, let me say a couple of words about the title. The first part, energy harvesting, hopefully everyone knows what I mean by this: I'm going to harvest energy from a stochastic thermodynamic system. But this is not going to be your typical Carnot cycle or Stirling cycle. In this setting, we're going to be extracting work from anisotropic fluctuations. So what do I mean by anisotropic fluctuations? I mean that the different degrees of freedom of my stochastic thermodynamic system are going to be subjected to different fluctuations: different temperatures, different noise amplitudes. Now, the motivation to study this type of system is that such systems are really ubiquitous, both as energy harvesting mechanisms and in biological systems. For example, here we have the very famous Feynman ratchet. This is an autonomous engine that is able to extract work from its surroundings by lifting a weight, thanks to the asymmetry between these two wheels and the fact that the two wheels are immersed in two heat baths at different temperatures. Far from going into the details of the system, I just want to say that we can think of it as having two degrees of freedom, each of which is subjected to a different temperature, and which is therefore in this setting of anisotropic fluctuations. On the other hand, we also have a myriad of proteins that live across the membranes inside our bodies. An example is ATP synthase, the molecular motor that synthesizes ATP by using an ion concentration gradient.
So we have this membrane here in yellow; on one side of the membrane we have a high concentration of hydrogen ions, and on the other side a low concentration. Again, the system is extremely complex and not fully understood, but I just want to point to the fact that, very simplistically, we can think of it as having two subunits. The upper subunit is immersed in an environment with a lot of bombardment coming from the high concentration of hydrogen ions, while the lower subunit, which is coupled to it, is immersed in an environment with little bombardment from these particles. Therefore we can again think of this system as having two degrees of freedom subject to fluctuations of different amplitudes. But again, the scope of my talk will be simpler than this; these are very complex mechanisms. What I will be focusing on is simpler, symmetric systems, such as the Brownian gyrator. The Brownian gyrator was introduced by Filliger and Reimann in 2007. What they thought of is a single particle that is able to move in two dimensions. In one dimension it gets kicked a lot, due to a hot temperature, while in the other dimension it gets kicked much less, due to a cold temperature. The particle is kept in place by a potential, which in this case is quadratic and makes the dynamics linear. These same dynamics also model this electrical circuit: in particular, the charges on these two capacitor plates and how those charges evolve in time. The charges fluctuate because the two resistors are held at different temperatures, giving rise to Johnson-Nyquist fluctuating currents, which again place these two degrees of freedom, the two charges, in the setting of anisotropic fluctuations.
One of the reasons this system has been so widely studied, both theoretically and experimentally, is its simplicity: the linear dynamics makes it analytically solvable, so we know a lot about it. The other reason is that it displays what we call non-equilibrium steady states. We have already seen a couple of things about non-equilibrium steady states, but let me give the basics. Essentially, we take our thermodynamic system and leave it by itself; we don't change anything. It will eventually reach a steady state, a state that doesn't change in time. But this steady state is very particular, because it is a state that is not at equilibrium. In this case we have, for example, particles swirling around the origin. The swirling motion doesn't change the overall state of the system, so we're still at steady state, but it mediates heat transfer between the two heat baths, driving the system far away from equilibrium, because we have this constant entropy production in the environment. The motivation to study these non-equilibrium steady states, and the reason we are so interested in them, is that they are really a signature of life, because life by necessity operates far away from equilibrium. It is characterized by an order and a complexity that cannot be found at equilibrium, the state of maximum entropy, maximum disorder. So that is a very broad motivation for studying this type of system subject to anisotropic fluctuations. Let me now actually go to the object of my talk. I will split the talk into different parts. The first part will be basics of stochastic thermodynamics, so we'll forget about anisotropic fluctuations and simply consider a one-dimensional system with one temperature, which in particular will be overdamped.
There I will try to make sure that everyone is on the same page regarding notation. Moreover, I will try to draw the relationship between stochastic thermodynamics and a field of mathematics known as optimal mass transport. This relationship will give us a geometric view that will be extremely useful for the second part of the talk. In the second part I will actually be talking about systems subjected to anisotropic fluctuations, and in particular we'll try to extract energy from these systems. Through this geometric picture we will show that the problem of maximizing work output is really related to an isoperimetric problem, while that of maximizing efficiency is related to an isoperimetric inequality. So let's get started. Throughout, I'm going to be considering this overdamped Langevin equation. Essentially I'm modeling the position of this Brownian particle with x. Here is my Brownian particle. The particle is immersed in a heat bath of smaller particles, the red ones. These particles are at some temperature, so they jitter, and they will occasionally bump into the Brownian particle, exerting a force on it. This force is what is being modeled by this Gaussian white noise, essentially this Brownian motion. The heat bath is very viscous, so we are in the overdamped setting. This is the assumption that inertial effects are negligible on the time scales we are interested in: the velocity degree of freedom decays much faster than anything we care about, so it doesn't hold any energy, and we can think of the forces as acting directly on the position. The forces directly change the position of my particle. This will be my setup, my model for my thermodynamic system.
So the position of my particle changes due to two forces. One force comes from the thermal fluctuations, the red particles. It is proportional to dB, the increment of a Brownian motion, with amplitude the square root of 2γkBT, where γ is the friction coefficient of the heat bath, kB is the Boltzmann constant, and T is the temperature of the heat bath. On the other hand, I will also be considering a potential force, a force that derives from a potential function, the potential energy of the Brownian particle. As such, it is a function of the particle's position, but we will also take it to be a function of time, because we will assume that we have external control over this potential energy. That is typically the case in experimental setups. So we'll really think of this t as not only time, but also our control parameter. Again, this is a stochastic differential equation. As such, if we perform an experiment we get one trajectory; if we perform the same experiment again, we get a different trajectory. So instead of looking at the level of a single-particle trajectory, we can look at the statistics. The statistics, the information, is essentially contained in the probability density function ρ_t(x), which tells us the probability of finding the particle at position x at time t. Because we have this type of model, the probability density evolves according to the Fokker-Planck equation, which we have written here as a continuity equation. This means that the mass of ρ is conserved over time: the integral of ρ over x is always equal to 1, which makes it a proper probability distribution. In order to write it in continuity form, we have defined the mean velocity field v_t(x) = −(1/γ)(∇u + kBT ∇ log ρ).
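These dynamics can be sketched numerically. Below is a minimal Euler-Maruyama simulation of the overdamped Langevin equation in a static quadratic potential u(x) = kx²/2 (the parameter values are illustrative choices, not from the talk); dividing the force balance by γ gives dx = −(k/γ)x dt + √(2kBT/γ) dB, and at long times the ensemble relaxes to the Gibbs distribution with variance kBT/k.

```python
import numpy as np

def simulate_overdamped(k, gamma, kBT, x0, dt, n_steps, rng):
    """Euler-Maruyama integration of dx = -(k/gamma) x dt + sqrt(2 kBT/gamma) dB
    for an ensemble of particles in the quadratic potential u(x) = k x^2 / 2."""
    x = x0.copy()
    noise_amp = np.sqrt(2.0 * kBT / gamma * dt)
    for _ in range(n_steps):
        x += -(k / gamma) * x * dt + noise_amp * rng.standard_normal(x.shape)
    return x

rng = np.random.default_rng(0)
x0 = np.zeros(5000)                       # start the whole ensemble at the origin
x = simulate_overdamped(k=1.0, gamma=1.0, kBT=0.5, x0=x0,
                        dt=1e-3, n_steps=10_000, rng=rng)
# After many relaxation times gamma/k, the Gibbs variance is kBT / k = 0.5.
print(np.var(x))
```

The final ensemble variance should be close to 0.5, up to sampling noise and the O(dt) discretization bias of the Euler scheme.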
So again, with this very brief introduction, what I'm trying to say is that we have dynamics at two different levels: at one level the dynamics of single-particle trajectories, and at the other the ensemble dynamics. With these dynamics in hand, let me now go into the energetics, in order to actually do some thermodynamics. The energy of a single particle, as I hinted, is in this overdamped setting simply given by its potential energy, because the kinetic energy can be neglected. Let us take the differential of this energy; we get two terms: one that accounts for how u varies with respect to t, so ∂u/∂t dt, plus ∂u/∂x dx. The first term takes into account how the energy varies through the controlled degrees of freedom, in a useful way; this is why we call it the differential of work. The second term takes into account how the energy varies through the uncontrolled degrees of freedom, those we have no direct control over, and accounts for the heat. These two terms are fluctuating quantities, random variables, so they depend on each realization. But as before, we can still get a picture at the level of the ensemble simply by taking the expected value. We can take the expected value of each term and also integrate over time, and we get the work and the heat at the level of the ensemble over a time interval. These are now deterministic expressions. Doing the same for the whole expression, we get the first law at the level of the ensemble, which is simply stating energy conservation: the energy of my system only changes through work exchange with the external potential or through heat exchange with the heat bath.
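The ensemble-level first law can be checked in a concrete case: for a "breathing" quadratic potential u(x,t) = k(t)x²/2 with Gaussian statistics, the variance s = E[x²] of the overdamped dynamics obeys ds/dt = −(2k/γ)s + 2kBT/γ, and the ensemble energy, work rate, and heat rate are U = ks/2, Ẇ = (dk/dt)s/2, and Q̇ = k(ds/dt)/2. The protocol k(t) below is an arbitrary choice for illustration, a sketch rather than anything from the talk.

```python
import numpy as np

# First law at the ensemble level for a breathing quadratic potential
# u(x,t) = k(t) x^2 / 2:  Delta U = W + Q.
gamma, kBT, dt, T = 1.0, 1.0, 1e-4, 2.0
k = lambda t: 1.0 + 0.5 * np.sin(t)        # illustrative stiffness protocol
dkdt = lambda t: 0.5 * np.cos(t)

s = kBT / k(0.0)                           # start at equilibrium variance
U0 = k(0.0) * s / 2.0
W = Q = 0.0
t = 0.0
for _ in range(int(T / dt)):
    dsdt = -(2.0 * k(t) / gamma) * s + 2.0 * kBT / gamma
    W += dkdt(t) * s / 2.0 * dt            # work: energy change via the control
    Q += k(t) * dsdt / 2.0 * dt            # heat: energy change via the bath
    s += dsdt * dt
    t += dt
U1 = k(T) * s / 2.0
print(U1 - U0, W + Q)                      # the two should agree to O(dt)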
With this, let me try to derive the second law; this will be the last slide of my introduction. The second law of thermodynamics states that the entropy of an isolated system always increases: the entropy rate is always positive, or at least non-negative. My isolated system is now the combined system of the particles and the environment, so I have to take both into account. The entropy rate of the system is given by the time derivative of −kB times the integral of ρ log ρ, just the entropy of my system. On the other hand, the entropy rate of the environment is given by the classical expression, because the environment is assumed to be at equilibrium: minus the heat rate divided by the temperature. Putting these two terms together, summing them, playing around a bit and integrating over time, we get the following expression for the total entropy production over a time interval. We are going from one probability distribution to another in some time tf, so we are going through some path in what I will now call the space of thermodynamic states, the space of probability distributions. The total entropy production we can quantify as a constant times the integral over time of the expected value of the squared norm of the velocity. In particular, this is always non-negative, and this constitutes the standard second law. But of course, in the setting of stochastic thermodynamics we can do better than this: we can obtain a finite-time correction. A way to do so is through optimal mass transport. This branch of mathematics has been studied for almost 300 years, and its object is the following optimization problem: the goal is to minimize this action integral.
That is, an integral over time of the kinetic energy of particles that flow according to the continuity equation, from an initial probability (or mass) distribution to a final one. What optimal mass transport tells us is that the solution to this problem exists and is unique, and that the optimal cost is given by the square of a distance, in particular the Wasserstein-2 distance between the two endpoint distributions, ρ0 and ρtf. Now, by noting that this cost is nothing but a constant times the total entropy production, we can use optimal mass transport theory to obtain a tighter bound on entropy production. In particular, we still have the constant, and we have 1/tf times the square of the distance. If tf goes to infinity, of course, we recover the standard second law. But if tf is finite, and if ρ0 and ρtf are not exactly identical, then this term is strictly positive. So we really have a finite-time correction to the second law which is, on top of that, always achievable. Optimal mass transport also tells us the optimal path, the optimal ρt that goes from ρ0 to ρtf, and the velocity needed to achieve it, that is, the control you need to achieve it. So this framework is extremely useful for the problem of minimizing entropy production. But it is not only useful for this problem; it also gives us a way to metrize the space of thermodynamic states. Let me think of my space of thermodynamic states as the space of probability distributions. What the Wasserstein-2 distance is really telling me is that we can measure distances between two probability distributions in this space. And not only that: we can also measure lengths of given paths. This will be extremely useful in the second part of my talk.
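As a small illustration of the Wasserstein-2 distance: for one-dimensional Gaussians it has a simple closed form, and in one dimension the optimal coupling is realized by matching quantiles, so sorted samples give an empirical estimate. (The talk's systems are multivariate Gaussians, but the idea is the same; the distributions below are arbitrary examples.)

```python
import numpy as np

def w2_gauss_1d(m0, s0, m1, s1):
    """Closed-form Wasserstein-2 distance between N(m0, s0^2) and N(m1, s1^2)."""
    return np.sqrt((m1 - m0) ** 2 + (s1 - s0) ** 2)

# In 1D the optimal transport map is monotone, so sorting independent samples
# from each distribution realizes the optimal coupling.
rng = np.random.default_rng(1)
n = 200_000
a = np.sort(rng.normal(0.0, 1.0, n))
b = np.sort(rng.normal(2.0, 0.5, n))
w2_empirical = np.sqrt(np.mean((a - b) ** 2))
print(w2_gauss_1d(0.0, 1.0, 2.0, 0.5), w2_empirical)  # both ~ sqrt(4.25)
```

The empirical estimate converges to the closed-form value as the sample size grows.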
And this will really give us a geometric view that will help us solve the problem of harvesting energy from anisotropic fluctuations. So now, in the second part of my talk, let me consider an n-dimensional system, where each degree of freedom is going to be subjected to a different temperature. We are now in the setting of anisotropic fluctuations. In particular, I'll have the exact same overdamped dynamics, but now x lives in R^n and T is an n × n matrix, diagonal with unequal entries, to allow work extraction and to have these anisotropic fluctuations. So again, each degree of freedom is subjected to a different temperature. My goal in this talk will be to find the optimal control u that maximizes work extraction while driving the system over a cycle. Because we are going over a cycle, we start and end at the same state, and therefore the energy of my thermodynamic system doesn't change over a cycle. The energy change over a cycle is zero, which means, by the first law, that the work output, which is minus the work because of the sign convention we have chosen, is equal to the heat. So for us it will be more convenient to focus on quantifying the total heat over a cycle, and this is what we will do. What we did is we took the definition of the heat rate from a couple of slides ago, played around with it, and wrote it in terms of v and ρ only, without the gradient of u; that is, we expressed the gradient of u in terms of ρ and v. And we obtained these two expressions. The first expression is linear in the velocity, while the second is quadratic in the velocity. In fact, the second expression should by now be familiar, because it is nothing but the cost we were trying to minimize in the optimal mass transport problem.
And as such, as tf goes to infinity this term can always be driven to zero; therefore we will call it the dissipative heat. On the other hand, we have the term that is linear in the velocity, and because of this we can interpret it as a line integral in the space of thermodynamic states. As a line integral, it doesn't depend on how fast we go along the line, on how we parameterize it, but only on the shape of the line. This is why we'll call this term the quasi-static heat. Okay, so now let us go over a cycle; in particular, a cycle in the space of thermodynamic states. This is an infinite-dimensional manifold, and the cycle simply represents a closed curve. Now let me simplify and restrict myself to a two-dimensional submanifold. We'll look at a two-dimensional submanifold, on which we have a closed curve α, parameterized by λ1 and λ2, these two degrees of freedom, and this curve will encircle a domain that we call D. So why are we interested in this rather abstract setting? Well, because it can be useful. Here is an example, and it is an example I'm going to come back to over and over again: the setting of the Brownian gyrator. Here we essentially have a quadratic potential and linear dynamics, which makes our probability distributions Gaussian. In particular, here they will have zero mean, and as zero-mean Gaussians they can be fully represented by their covariance matrix. In this case we have a two-dimensional system, so the covariance matrix is just a 2 × 2 symmetric matrix. We are going to simplify a little and parameterize it by r and θ: θ measures the rotation of this matrix, while r measures the relative magnitude of the two eigenvalues. So again, we'll have two degrees of freedom.
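One way to realize such an (r, θ) chart is to rotate a diagonal matrix whose eigenvalue imbalance is set by r. The exact parameterization used in the talk is not spelled out here, so the map below is a plausible guess for illustration only.

```python
import numpy as np

def covariance(r, theta):
    """Hypothetical (r, theta) chart for a 2x2 covariance matrix:
    r in [0, 1) sets the eigenvalue imbalance (eigenvalues 1 +/- r),
    theta rotates the principal axes. The parameterization in the talk
    may differ; this is an illustrative stand-in."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ np.diag([1.0 + r, 1.0 - r]) @ R.T

S = covariance(0.3, np.pi / 6)
print(np.allclose(S, S.T), np.linalg.eigvalsh(S))  # symmetric, eigenvalues 0.7 and 1.3
```

Any (r, θ) with 0 ≤ r < 1 then yields a valid (symmetric, positive definite) covariance, hence a zero-mean Gaussian state.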
So yes, two degrees of freedom, r and θ. Two parameters, such that if you give me an r and a θ, I'll give you a probability distribution, the state of my thermodynamic system. Okay, so now let me zoom out; I will keep coming back to this example, but I will also, in parallel, try to give you a more abstract view of the problem. Let me write the quasi-static heat, as we had it before, as a line integral. Because we have a closed curve, it goes over the perimeter of this domain. And because we have a finite-dimensional parameterization, we can use Stokes' theorem to write this line integral as an area integral. We can write the area integral in a very general way, but let's not focus on the abstract setting; let me go back to our example. In our example, we can write the quasi-static heat as this line integral: clearly an integral over the perimeter, of something times dr plus something times dθ. Now we can explicitly use Stokes' theorem to write it as an integral over the whole domain, over the area, of some function that comes out of Stokes' theorem, which we'll call the work density, a work per unit area. Let me plot this work density here, color-coded. Each point in this plane corresponds to a different r and θ, and thus to a different state of my thermodynamic system, a different probability distribution. So what we've learned is that if we go along a particular cycle, just an example shown in red, the quasi-static heat is simply given by the integral over the area enclosed within the cycle, weighted by the work density that is color-coded here.
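The step from the line integral to the area integral is just Green's theorem on the (λ1, λ2) parameter plane, and it is easy to check numerically. The one-form Q dλ2 with Q = λ1λ2 below is a toy stand-in, not the actual quasi-static heat one-form from the talk; its induced "work density" is ∂Q/∂λ1 = λ2.

```python
import numpy as np

# Green's theorem check: the line integral of Q d(lambda2) around a closed
# curve equals the area integral of dQ/dlambda1 over the enclosed domain.
a, b, R = 1.0, 0.5, 1.0                       # circle of radius R centered at (a, b)
t = np.linspace(0.0, 2.0 * np.pi, 200_001)
l1 = a + R * np.cos(t)
l2 = b + R * np.sin(t)
integrand = l1 * l2 * R * np.cos(t)           # Q * d(lambda2)/dt
line_integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))
area_integral = b * np.pi * R ** 2            # exact integral of l2 over the disk
print(line_integral, area_integral)           # both ~ pi/2
```

The two values agree to numerical precision, which is exactly the statement that the quasi-static line integral can be traded for a weighted area integral.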
So we found that the quasi-static heat, the quasi-static work, is nothing but an area integral over a work density. Let me now look at dissipation. Dissipation, again, is nothing but the cost of the optimal mass transport problem. Because we know it is the cost of that problem, we can use this geometry to lower-bound the expression: for any given curve, we can lower-bound it by some constant, divided by tf, the final time of my transition, times the squared length of the curve, this length measured in the Wasserstein-2 sense. What this is telling me is that if you give me any curve, then through optimal mass transport I can give you the optimal parameterization, the one that minimizes the dissipation. Okay, so let us write this length in terms of my two parameters. We can write it as an integral over the perimeter of the norm of the velocity of the curve, the norm in the Wasserstein-2 sense, that is, weighted by this Wasserstein metric g_W. For our example, g_W takes the following form: a metric that is diagonal, and also very simple, because it is θ-independent, so it is rotationally symmetric. This will be extremely useful later. Again, going back to our plot and our example cycle: what we found is that, going along the cycle, the minimum dissipation is given by something proportional to the square of the length of this red curve, the length measured in the Wasserstein-2 sense. Okay, so let us put these two things together. We can write the work output as the difference between the weighted area and the length squared, where the length squared is weighted by a parameter μ, which is the ratio between a characteristic time, constant for a given system, and the period of my cycle.
Our goal now is to maximize this work output, to find the optimal cycle in the space of thermodynamic states that maximizes this difference. What we realized when we looked at this problem is that it is really related to an isoperimetric problem: that of maximizing an area integral subject to the length being fixed. The isoperimetric problem has an extremely long history. The canonical isoperimetric problem is posed in Euclidean, flat space: think of grabbing a shoelace, tying its ends to close the curve, putting it on a table, and trying to find the shape that maximizes the area within it. Playing around with it, we quickly find that the optimal curve is, of course, a circle. Here we have the exact same problem, but instead of a flat space we have a curved space, the space of thermodynamic states, and moreover we have weighted areas. So unfortunately the solutions to our problem will no longer be circles; they will be a bit more complex. What we can do is write the first-order condition for optimality, a condition that has to be satisfied if a cycle is optimal; this is the optimality condition for the problem of maximizing work output. Through this condition we can really understand the relationship between the two problems, because we can think of μ either as a parameter of the problem of maximizing work output, or as a Lagrange multiplier for the problem of maximizing area subject to fixed length; the two problems share this first-order condition. Next, since we couldn't solve this condition analytically in full generality, we went back to our Brownian gyrator example and solved it numerically.
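To make the flat-space statement concrete: the classical isoperimetric inequality says L² ≥ 4πA for any closed curve of length L enclosing area A, with equality only for the circle. A quick check (the square is an arbitrary comparison shape):

```python
import numpy as np

def isoperimetric_ratio(L, A):
    """L^2 / (4 pi A): equals 1 for a circle, exceeds 1 for any other shape."""
    return L ** 2 / (4.0 * np.pi * A)

L = 4.0
# Circle of perimeter L: area L^2 / (4 pi), so the ratio is exactly 1.
print(isoperimetric_ratio(L, L ** 2 / (4.0 * np.pi)))
# Square of the same perimeter: side 1, area 1 -> ratio 4/pi ~ 1.27.
print(isoperimetric_ratio(L, 1.0))
```

In the curved, weighted setting of the talk, the same kind of inequality holds but with a different constant and different extremal curves.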
So what we found are, here, the optimal curves that solve this isoperimetric problem for different fixed lengths, smaller and larger. They are portrayed over this map of the work density, W_qs. Of course, the optimal curves always try to pass over the point of maximum work density. Another way to understand this plot, through the relationship between the two problems, is to look not at different fixed lengths but at different fixed μ's: the higher the μ, the larger the penalty on the squared length, and therefore the smaller the cycles; the lower the μ, the longer the cycles. Again, let me be a little more explicit about how solving this isoperimetric problem actually helps us solve the problem of maximizing work output. Here we have plotted the optimal... (Sorry, Olga, just a brief comment: you have five minutes, including questions, but I guess you're finishing in around a minute. — Okay, thank you.) Okay, so I'll have to skip some slides. Here we have numerically solved the isoperimetric problem: these are the optimal areas for different fixed lengths. The solution to this problem really helps us solve the problem of maximizing work output, because the maximum work output is simply the maximum vertical distance between this optimal isoperimetric curve and the line with constant slope μ, since the work output is precisely this difference. Moreover, by solving the isoperimetric problem we also get the solution to another problem for free: that of maximizing work output at fixed efficiency. I haven't defined efficiency yet, but I will very shortly; for now, just bear with me that the points of constant efficiency lie on a line in this plot.
Therefore, the operating point for the problem of maximizing work output at fixed efficiency is simply located at this intersection, and the work output is then given by this vertical distance. Okay, so, as promised, let us now take a look at efficiency. Efficiency can be defined, for systems subject to anisotropic fluctuations, as the ratio between the work output and the quasi-static work. This makes sense because the quasi-static work is the maximum amount of work we are able to extract along any fixed curve, if we can traverse it in infinite time, that is, if we can get rid of the dissipation. Of course, this means the efficiency is always bounded by one, by the second law. Now our goal is to find a tighter bound on efficiency, a bound that takes into account the fact that we are working in finite time. To do this we'll again use our geometric tools. Let me write the efficiency in terms of my area and my length, as one minus μ times the ratio between the length squared and the weighted area. What I will do now is use isoperimetric inequalities to bound this efficiency. Isoperimetric inequalities bound the ratio between the squared length of any closed curve and the area it encloses in a given manifold. They come in many different forms, and some of them may be tighter than others depending on the physical setting, the model, the underlying space we have. I'm going to exemplify how to use them in our two-dimensional Brownian gyrator example. In that example, if you remember, we had a rotationally symmetric metric, which means that we can use a particular isoperimetric inequality. Working out the details, we get that the length squared is always lower-bounded by 2π times the enclosed area.
Now, this is an unweighted area, while for efficiency we had weighted areas. So what we have to do is multiply and divide by the unweighted area, and then use the isoperimetric inequality to bound that term. Here we define F̄, the ratio between the weighted and the unweighted area. But this still depends on the cycle we choose. Because we want the bound on efficiency to be universal, independent of the cycle, we estimate F̄ by the maximum of F, which in this case is one half. Then we get our bound on efficiency, now independent of the cycle, and it has this finite-time correction: it is one minus 4π times t_c, a characteristic time, divided by tf, the period of my cycle. So not only do we have a finite-time correction to the efficiency that is independent of the cycle, we also have a speed limit: if tf is smaller than 4π t_c, if we're going too fast, then this negative term dominates over the one, and the efficiency is negative no matter what cycle we choose; we are simply going too fast. Negative efficiency means that we cannot extract any positive work. Okay, with this, I was also going to talk a little bit about non-conservative forcing. We have shown how the problem of maximizing power output, when we have control over a non-conservative force at steady state, is really related to an impedance-matching problem in circuit theory; I'm not going to go over the details. I also want to point out that we can use the problem we solved, maximizing work output through potential forcing, to obtain design principles for autonomous engines, and this is what we did, based on the electrical embodiment of the Brownian gyrator, in another work. Again, I won't go into details.
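The bound and its speed limit are easy to tabulate; the value t_c = 0.1 below is an arbitrary illustrative choice, not a number from the talk.

```python
import numpy as np

def efficiency_bound(t_f, t_c):
    """Cycle-independent bound eta <= 1 - 4*pi*t_c/t_f derived above for the
    two-dimensional Brownian gyrator example (t_f: cycle period, t_c: the
    system's characteristic time)."""
    return 1.0 - 4.0 * np.pi * t_c / t_f

t_c = 0.1
for t_f in [1.0, 4.0 * np.pi * t_c, 10.0]:
    print(t_f, efficiency_bound(t_f, t_c))
# For t_f below the speed limit 4*pi*t_c (~1.257 here) the bound is negative:
# no cycle can extract positive work at that speed.
```

The bound crosses zero exactly at tf = 4π t_c and approaches the second-law limit of one as tf grows.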
Let me just wrap up and summarize, and hope that I've conveyed the idea that optimal mass transport can really be useful in stochastic thermodynamics, and in particular in the setting of anisotropic fluctuations. Not only because we get a neat geometric view, but also because it opens up our toolbox to other kinds of tools, like differential geometry and isoperimetric inequalities, for example. What we showed is that the problem of maximizing work output is really related to an isoperimetric problem, while that of maximizing efficiency is related to finding an isoperimetric inequality in the space of thermodynamic states. With this, we are able to obtain some design principles for autonomous engines. Let me finish with the things we haven't yet tackled. Essentially, we had to restrict ourselves to two-dimensional submanifolds, and it would be really nice if we could do this on higher- or infinite-dimensional submanifolds. Also, we had to use isoperimetric inequalities that do not account for densities over the area, so it would be really nice to have results that intrinsically take those densities into account. And with this, let me thank you for your attention. I know it's late, so I'm sorry for keeping you all so late. I also thank the team.