Thanks to the organizers for giving me the chance to speak. Just checking: is my screen visible? Yes, I think everything looks good. So, we've heard lots of really fascinating and deep theory today, and I'll close out the session with an experimental talk. You might want to put your camera on; that would be nice if you can. Yeah, sorry, I think I have to stop sharing for a second to see my screen again. Okay, looks good. Everything's fine. Great. So this is experimental work done by Tushar Saha, with theory contributions by Jannik Ehrich, Joseph Lucero, and my colleague David Sivak. What I'd like to do is first introduce a kind of modern, very simple information ratchet that can extract and store work. It started as a textbook design but has had lots of applications. The main thing I want to talk about today is what happens when the information that is gathered and acted on by such a machine is noisy. Basically, we found a kind of unexpected, at least to us, phase transition where the machine just stops working if the noise is too high. Then we had to think about why that happened and what one could do about it. And if I have some time, I want to talk about what happens when you run such an engine in a non-equilibrium environment. Okay, so to set the stage, imagine that you have a heavy mass hanging from the ceiling by a spring. It's small and Brownian, so it's fluctuating up and down. You have a demon that watches it. When the spring compresses past a certain threshold, the demon quickly raises the ceiling by just the right amount so as not to do any work on the particle. But now the equilibrium position has shifted up. And if you repeat this cycle, you're basically raising a heavy weight without apparently doing any work.
Of course, there are computation and information-processing costs involved, which are not the focus of what I want to talk about. But from the point of view of the system itself, you're raising a heavy weight without having to do any work on it locally, powered just by fluctuations from the bath that you act on at the right time. We realized this experimentally with optical tweezers. We have a horizontal laser beam; the particle now is hanging down and fluctuating, and we simply raise the beam step by step. The tweezer apparatus is fairly standard; the main thing for our purposes is that you have to build it to go fairly fast, so that you can really react quickly to fluctuations. Okay. In a little more detail, the ratcheting rule looks like this. Let lambda be the position of the horizontal laser beam, and make an observation of the particle's position. If the particle is higher than the trap center, which is where the spring is uncompressed, then we increase lambda by an amount proportional to how much the position exceeds it, provided the excursion is above some threshold; otherwise we do nothing. The resulting motion then looks like this: you have, for example, long periods where there might be down fluctuations and you have to wait, and then when the particle comes back up and eventually exceeds the threshold, you quickly raise the trap, and so on. The effect is a ratcheting up of the center of the beam and of the equilibrium position of the particle. Okay. As I said, if you want a pure information engine, where you're not putting any work in at all, which is the extreme case of such an engine, then you have to adjust this feedback gain to the proper value. And naively, here's what you would do; this is sort of a horizontal version of it.
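As a concrete illustration, here is a minimal simulation of this ratchet rule for an overdamped bead in a harmonic trap, in dimensionless units (length scaled by the equilibrium fluctuation size, time by the trap relaxation time). This is only a sketch: the parameter values are illustrative, not the experimental ones, and gravity is omitted, so it shows the ratcheting but not the full energetics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensionless overdamped dynamics: trap relaxation time = 1,
# equilibrium fluctuation size = 1. Values below are illustrative.
dt = 0.01          # time step (units of the trap relaxation time)
alpha = 2.0        # feedback gain (naive zero-work value)
threshold = 0.0    # ratchet threshold, measured from the trap center
n_steps = 200_000

x = 0.0            # bead position
lam = 0.0          # trap center (laser-beam position)

noise = np.sqrt(2 * dt) * rng.standard_normal(n_steps)
for w in noise:
    # Overdamped relaxation toward the trap center plus thermal noise
    x += -(x - lam) * dt + w
    # Feedback rule: if the bead exceeds the threshold above the trap
    # center, ratchet the trap up proportionally to the excursion
    if x - lam > threshold:
        lam += alpha * (x - lam)

print(f"trap center rose by {lam:.1f} trap lengths")
```

With alpha = 2, each ratchet event mirrors the bead through the new trap center, so no work is done by the trap at the instant of the jump, yet the trap center drifts steadily upward.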
If this is the potential and the particle fluctuates to this position, then by translating the trap by an amount such that you're not changing the energy, you do no work, but the equilibrium position is now shifted up to this position here. In the coordinate system that I've adopted, this corresponds to a feedback gain of alpha equals two. However, we can measure how much work the trap is doing on the particle: holding the particle's position fixed, we quickly change the position of the laser beam and record how much work that does. If we measure this, then the average value per unit time, the power of the trap, as a function of the feedback gain in this ratchet rule, does cross through zero, but at a value that is slightly lower than the expected value of two. It turns out there are two reasons for this. One of them is really simple to understand: there's always a delay between the time that we measure and the time that we're able to act on the measurement. The simple picture is that we observe the particle here, but by the time we're able to act, on average it has come down a little, so instead of translating all the way over here, we should on average be translating less. This turns out to be one of the reasons that the zero-work alpha is less than two. In the beginning, that's what we thought was going on, but it turns out that there's another reason, which I'll get to in a moment. Okay, so again, the picture is that we have this ratcheting up, which leads to an average rate of increase of the free energy, the gravitational potential energy being stored in a kind of work reservoir. The first task is just to understand how to operate this engine. Briefly, as you increase the sampling frequency, the bead goes up faster and faster, but then the rate saturates once you sample faster than the relaxation time of the bead in the trap.
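For a harmonic trap, the naive zero-work gain of alpha = 2 follows from a one-line calculation, written here for the ideal case with no measurement noise or delay:

```latex
% Harmonic trap potential, trap center \lambda, bead position x:
V(x - \lambda) = \tfrac{\kappa}{2}\,(x - \lambda)^2 .

% Ratchet step: \lambda \to \lambda' = \lambda + \alpha\,(x - \lambda),
% performed instantaneously, so the work is the potential-energy change:
W = V(x - \lambda') - V(x - \lambda)
  = \tfrac{\kappa}{2}\left[(1 - \alpha)^2 - 1\right](x - \lambda)^2 .

% W = 0 when \alpha = 2 (mirror the bead through the new trap center),
% or trivially when \alpha = 0 (do nothing).
```

The trivial root alpha = 0 is exactly the "don't play the game" solution that reappears below the phase transition.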
So one condition is that we want to sample as often as possible, although there's not much extra benefit once you get past a sampling frequency of one in these units. The second question is where to put the threshold. We find that we maximize the rate of work extraction by setting the threshold to zero, zero again being where the spring is at the neutral position, at the trap center. So the bead might be hanging down here, and it has to fluctuate up before you immediately react. You want to catch all of these up fluctuations, rather than, for example, waiting for occasional rare large fluctuations; that turns out not to be as good a strategy. Okay, so one can then understand this kind of engine in different operating conditions, for example by varying the stiffness and the mass. In dimensional units it looks a bit of a mess: this is the velocity at which the particle goes up, and this is the rate of free-energy extraction, of gravitational-energy extraction. But if you put everything in proper units, scaling length by the size of the equilibrium fluctuations in the trap, time by the relaxation time in the trap (the dynamics are overdamped), and energy by kT, then in fact all of these collapse onto universal curves. The different colors are different sizes of beads. As you increase the mass of the bead: if it's light, the velocity is independent of the mass, but beyond a certain value, given here, gravity starts to slow it down; the heavier it is, the slower it goes. Likewise, there turns out to be an optimal bead mass for extracting the highest rate of free-energy gain. Basically, if the bead is too light, then not much free energy gets extracted per step; and if it's too heavy, then the up fluctuations become too rare.
OK, so this is the background to what I wanted to focus on today, which is what happens to this picture if the measurements that you're making are noisy. Up until now, we've assumed that there's some position x(t) and that the x(t) we measure is the true position. But what if measurement noise is added to this true position, which of course there always is, so that schematically you have not this curve here, but a kind of band of probabilities? The first thing to do is define how much noise and signal I have. I'll define a signal-to-noise ratio where by signal I mean the size of the typical fluctuations of the bead in the trap, and by measurement noise I mean, given a fixed position of the bead, how much the measurements spread about it. They're both Gaussian distributions: the first because the potential is quadratic, the second because of the kind of measuring system that we have. The measurement standard deviation is sigma_m, and this one is sigma; I'll define the signal-to-noise ratio simply as the ratio of the standard deviations. OK. And experimentally, we can control the signal-to-noise ratio. I should say this is real measurement noise: we have a laser that detects the position of the particle by projecting an image of the scattered light onto four photodiodes, and by reducing the intensity of that laser, you reduce the signal-to-noise ratio; you increase the relative noise. OK. So this is the picture that we had before: by adjusting this feedback gain, we could make the work done by the trap go to zero. That was at a relatively high signal-to-noise ratio, and remember, we explained the shift by appealing to the delay, and I said that wasn't the full story. The reason it's not the full story is that we can repeat this at different signal-to-noise ratios, and what we see is the following: at high signal-to-noise ratios, there is a gain that makes the trap work zero.
So on average, we're not putting any work into the system. At high signal-to-noise ratios, this gain asymptotes, and the difference between the asymptotic value and two is indeed due to the delay. But now you can see there's something else going on: as the system gets noisier and noisier, we're also forced to reduce the gain to lower values. So this is the experimental point here. If we go a little higher in signal-to-noise, we get a little closer to two; but if we go lower, then we really go down, and in fact the gain goes down to what looks like zero. So the first question we asked is: what's going on here? Is this just smoothly decreasing to zero, or is there some kind of phase transition? I've already suggested that there's a phase transition, and so these hollow points, I want to claim, are truly zero and not just some small value. How do I know that? Well, we looked more carefully at the dependence of the rate of work that the trap is doing, the average trap power, as a function of feedback gain. This was the picture that we had before. What I want to suggest is that as we change the signal-to-noise ratio, this curve, which is just a piece of a curve, actually looks like this for high signal-to-noise ratios and like this for lower signal-to-noise ratios. In this case here, we're able to find a value of the feedback gain that makes the power equal to zero. Of course, there was always another possibility that we hadn't talked about: if we don't try to raise the trap at all, then we're not doing any work either, right? That's the trivial solution, but it's always available. Here we chose to operate in a condition where we could be raising the weight without doing any work; here you see that there's only one possibility, and that's to do no work. So this is what it looks like if we instead focus on this area very close to zero, at very small values of alpha.
So these gains are going very close to zero. We can do this at various signal-to-noise ratios, corresponding to the gray levels here: this is small signal-to-noise and this is high. What you can see is that there's a change in shape, and if you measure the slopes carefully, you can see that there really is a crossover, and you can pinpoint a phase-transition point. So it's a phase transition between being able to find conditions where we have a pure information engine, going up without our doing any mechanical work on it directly, and just choosing not to play the game, doing nothing and letting the bead sit there, where of course we're also not doing any work. Okay, so that explains this behavior and why we put open circles here: the gain really is zero. Then we can ask, what is the rate of free-energy gain? What we see is that at high signal-to-noise ratios, the speed at which the particle goes up, and therefore the rate of free-energy extraction from gravitational potential energy, asymptotes to the value that you would get with perfect measurements. But then it decreases, and it decreases strictly to zero here at the phase transition; beyond it, it's zero because we're not doing anything, we're not raising the trap. Okay, so that explains a little of what's going on, but not why. So why is this happening? The reason is that there's a kind of bias in the measurement of the particle's position. It's a little strange, because normally the measurement is unbiased, in the sense that if we just have a particle diffusing in space and we make a measurement, the errors are symmetric; that's what the Gaussian distribution means. But now we're adding a condition, and this condition can shift things. Let me try to explain.
There are two cases here where we see a particle that is apparently at the threshold that we've set. In one case, if this is the equilibrium position, the actual position could be way up here, and the noise fluctuation is negative, so that the net result is that we see it at threshold. In the other scenario, the thermal fluctuation was below the threshold and we had a positive noise fluctuation. Remember, positive and negative noise fluctuations are equally likely. However, the equilibrium position is here, so in terms of fluctuation probabilities, this case is much more common than this rare fluctuation. What I want to claim is that when you see a particle at threshold, on average it's actually below it, which means that there's a bias, and this bias comes from the conditioning imposed by the threshold. Because of this bias, when we reduce alpha further, we're in effect trying to compensate for it. What we find is that if the noise is too high, this alpha would have to be negative; we would have to be lowering the trap on average to deal with the bias and have zero power. But we always have the possibility of doing nothing, and then that's the better solution. So that's really what's going on with this transition. So that's the problem; I think we've understood now that the phase transition is due to this bias. Then the question is, well, what can you do about it? The solution that we came up with was to use a more sophisticated way of processing the information that we get. Up until now, we've been running this engine in the same way that we ran it with perfect measurements; that is, in the algorithm that I showed you for how to ratchet up the trap, we were just putting in the measurement in place of the true position. But the engineers will tell us that there are other things one can do. In particular, we know what the dynamics of this trap are.
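This conditional bias is easy to demonstrate numerically. In the sketch below, with illustrative values sigma = sigma_m = 1 (unit signal-to-noise ratio), the true positions that produce a measurement right at the threshold average to well below that threshold, exactly as Gaussian conditioning predicts:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative numbers: equilibrium fluctuations sigma = 1 (signal),
# measurement noise sigma_m = 1, i.e. signal-to-noise ratio of 1.
sigma, sigma_m = 1.0, 1.0
threshold = 1.0          # measured value we condition on

x = sigma * rng.standard_normal(2_000_000)      # true bead positions
y = x + sigma_m * rng.standard_normal(x.size)   # noisy measurements

# Select events where the *measurement* sits right at the threshold
at_threshold = np.abs(y - threshold) < 0.01
mean_true = x[at_threshold].mean()

# Gaussian conditioning: E[x | y] = y * sigma^2 / (sigma^2 + sigma_m^2)
print(mean_true)   # ≈ 0.5, well below the measured threshold of 1.0
```

So a bead "seen" at threshold is, on average, only halfway there at this signal-to-noise ratio; the naive feedback rule systematically over-ratchets.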
So we should be able to take advantage of them; that is to say, we've measured the spring constant of the laser trap, the relaxation time, and so forth, and we can use all of that to bring in the past history of measurements, not just the one measurement but the entire history, to estimate the actual position and remove the bias. I don't want to go into the technical details, but it's a combination of Bayes' rule plus the known dynamics, and the technical embodiment of this is something called a Kalman filter, which also uses prediction to remove the time delay. Remember, we had that separate problem of a time delay. You can remove it by recognizing that your information is from the past, so you propagate it forward one time step using the known dynamics, which govern the average drift. When you do that, say for one particular signal-to-noise ratio, we can repeat the characterization. The nice thing is that, having accounted for the bias due to the rare fluctuations, which we can now handle because we know the dynamics, and for the delay, we get a power curve that actually passes through zero at alpha equals two, just as the naive picture predicted. And if we repeat that for all the different signal-to-noise ratios, it really is two for all of them. That means we can use this estimate of the position, based on the past history, to come up with an estimate of the state at the time we act that has no delay, because we've used prediction, and no bias, because we've used these Bayesian priors. Okay. And so the result for how fast we're able to raise the weight is this blue curve here, which compares to the old one here. It's probably better to look at the difference between the two, and what's remarkable, I think, is that there's a maximum, not too surprisingly, right at the phase transition.
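For a bead relaxing in a harmonic trap, i.e. an Ornstein-Uhlenbeck process with Gaussian measurement noise, the Kalman filter reduces to a scalar predict/update recursion. Here is a minimal sketch, with illustrative (not experimental) parameter values, showing that the filtered estimate tracks the true position much better than the raw measurement:

```python
import numpy as np

rng = np.random.default_rng(2)

# Scalar Kalman filter for an Ornstein-Uhlenbeck (bead-in-trap) process.
dt, tau = 0.05, 1.0
a = np.exp(-dt / tau)          # known relaxation dynamics per step
q = 1.0 - a**2                 # process-noise variance (sigma = 1 units)
r = 1.0                        # measurement-noise variance (SNR = 1)

x, xhat, p = 0.0, 0.0, 1.0     # true state, estimate, estimate variance
err_raw, err_kf = [], []

for _ in range(50_000):
    # True dynamics plus a noisy measurement of the current position
    x = a * x + np.sqrt(q) * rng.standard_normal()
    y = x + np.sqrt(r) * rng.standard_normal()

    # Predict: propagate the old estimate forward with the known dynamics
    xhat, p = a * xhat, a**2 * p + q
    # Update: blend prediction and measurement, weighted by the Kalman gain
    k = p / (p + r)
    xhat, p = xhat + k * (y - xhat), (1 - k) * p

    err_raw.append((y - x) ** 2)
    err_kf.append((xhat - x) ** 2)

print(np.mean(err_raw), np.mean(err_kf))  # filtered error is smaller
```

The same predict step, run one time step ahead of the latest measurement, is what removes the feedback delay in the talk's scheme.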
So this represents how much better you perform using this more sophisticated way of incorporating your noisy measurements than you would the naive way, and what we see is a peak in the benefit at the phase transition. It's easy to understand: when you have a very high signal-to-noise ratio, your measurements are essentially telling you the perfect position, and then it doesn't matter whether you use the naive method or not; there's no real benefit to being sophisticated. Likewise, in this low signal-to-noise regime, where we can't really do much experimentally but can see the trend, if your measurements are just complete noise, then no amount of Bayesian trickery will get you out of that. But for intermediate signal-to-noise ratios of order one, this really is important. And that is the regime one typically likes to be in, because, although I haven't talked about information-processing costs, there is a cost for measuring more information than you actually need, and minimizing information costs tends to put you at a signal-to-noise ratio of order one anyway. But what's remarkable to me, and worth thinking about, is that even when every measurement you make is a bit noisier than the actual signal (we're slightly below one here), you're still able to get more than half of what you could get with perfect measurements. To me, this was kind of remarkable. Okay, so, I'm not sure how I'm doing on time, but if I have a couple of minutes left, I wanted to talk about a different problem. (You have a couple of minutes, then time for questions.) Yeah, I'll just sketch this very quickly. So far, I've been talking about a situation where this information engine operates in a bath at thermal equilibrium, water at thermal equilibrium. But we just heard, for example, from Ziwei that real life is out of equilibrium.
So the typical environment in the real world is out of equilibrium. This can be due to active matter, bacteria or molecular motors, or on a larger scale you have these little hexbug toys stirring things up, and at a really macroscopic scale it could also be just the wind and waves and so forth. There are all sorts of reasons that the actual environment is typically not in equilibrium. Experimentally, we can create a non-equilibrium environment in the following way. We have a resistor at room temperature, which is where we're running the experiment, and it has fluctuations; we put them through an electrical amplifier connected to electrodes in our sample cell, and the particle is charged. So the fluctuating Johnson noise, the voltage fluctuations, is amplified and produces fluctuating forces on the bead. If we turn up the gain of the amplifier, we go from the equilibrium situation, where this is the sigma I was showing before, to one where that scale of fluctuations is increased. We can define a sort of non-equilibrium diffusivity, if you like, or a kind of effective temperature; here I'm neglecting measurement noise. And maybe I'll focus on this: basically, as you increase the forcing amplitude, the amount of this noise, perhaps not too surprisingly, you get more power out. These information engines are interesting because, using exactly the same algorithm that they would use in thermal equilibrium, they automatically extract more in such an environment. And it doesn't have to be white noise; it also works when you have colored noise. So let me maybe skip over that. But one important point is that we talk about effective temperatures, and it's important to recognize that you can get huge effective temperatures in non-equilibrium systems. In our experiments, we got to something on the order of 10^4 K.
In granular media, you can get 10^17 K. And you can ask, where do these huge temperatures come from? The reason, in brief, is that these out-of-equilibrium modes are just some minute fraction of the total number of modes of the system, so it's possible to get non-equilibrium forcings that are orders of magnitude greater than what you would get from a true physical temperature, where all the modes would have to have increased energy. This leads to the possibility of running autonomous information engines in these environments, but I think I have to leave that for another day. So let me just say in summary that I've introduced a very simple but neat realization of an information engine and thought a little about what happens when your measurements are noisy. We found a surprising phase transition as a function of signal-to-noise ratio, where the machine just stops. But if you're more sophisticated in how you incorporate the information, basically putting in the past history of measurements rather than just the current measurement, you can get around this and work surprisingly well at unit signal-to-noise ratios. And I just briefly suggested that if you put all this in a non-equilibrium bath, it works even better. Finally, just a plug: I've been thinking a lot about control theory and signal processing and trying to put this into language that physicists would find easier to absorb than engineering or mathematics texts. You can read about Bayesian and Kalman filters in that work. So with that, thank you. Thank you, John. And we have time for a couple of questions. Anyone? I'll start. One thing that characterizes a lot of information engines is that you really have to fit the information you gain to how you use it, and you have here some particular way. Is it in some sense an optimal way to use the information, given that you know you have a harmonic oscillator? Yeah.
In this very simple case, yes, because we essentially have a particle moving in a harmonic potential, so linear dynamics, and the noise is all Gaussian, and then you can really prove that what we're doing is the optimal use of the information you get. But you're completely right that in general this is a hard problem: with non-Gaussian noise or nonlinear dynamics, the Bayesian filtering equations are very easy to write out in general but very hard to solve in particular cases. There's a whole branch of engineering devoted to coming up with good approximations, or approximations as good as you can get, in these more complicated situations. Peter. Yeah, great talk, thanks. I was just wondering, have you thought about the optimal non-equilibrium fluctuations for harvesting free energy in this example, subject to some constraint on those fluctuations? You could repeat the experiments but change the nature of the fluctuations, make them non-equilibrium, and then ask: what are the non-equilibrium statistics, given some constraint, that would maximize the rate at which you can raise the free energy at zero work, let's say? Yeah, I think we thought about it in the converse way: given an environment with a set of fluctuations, what would you do? But could you turn it around? Indeed, I think one could; we haven't really thought about it that way. The general comment is that this setup, with the rules that we have, is more or less designed to take advantage of white noise, and I didn't have time to dwell on it, but when you have colored noise, you get less out of it. If it's Ornstein-Uhlenbeck noise, then in some sense the adaptation would be to set the relaxation time of the bead to match that of the noise.
If it's more complicated and has a funny spectrum, I'm not sure, but I think the general idea would be that you would want a frequency response that makes the result look like white-noise fluctuations as seen through the combined system. I think it would be something like that. Thanks. One more question, maybe. Hello. First of all, thank you very much for the very nice talk. Could you expand a little on the point where you compared the Bayesian-filter method to the naive one, that one would be more computationally expensive than the other in terms of information cost? How much more expensive would you expect the Bayesian approach to be? Yeah. So we've thought some about that; I don't have a precise answer for you. But one thing to point out about this Bayesian approach is that it incorporates information from the past history of the time series, but it does so in a recursive fashion. The analogy I would make is that if you want to compute an average, you can either take n quantities, add them up, and divide by n, or you can have a recursive relation where you take the old value weighted by some factor, add the new value weighted by another, and keep going. When you do that, you're storing just one extra piece of information. So there is an additional cost, but it's likely to be moderate; it's not a cost that diverges with n. Yeah, thank you. Okay, there is one last question, from Sunjeeva. Following up on that, it seems like there's a memory cost in addition to the processing cost. Can you comment on, well, I'm not sure how to phrase this, but the relative cost of the memory versus the processing? Okay, this is really a question for Jannik Ehrich, who I think is in the audience. But, first of all, as I said, it's a finite memory; it's one additional register of information.
And, okay, I think I may take the experimentalist's fifth here and not comment too much, except to say that if you look up the paper by Jannik Ehrich, he has tried to estimate some of these costs of operating the controller in these ways. I think I had one slide on that, from his work. Yeah, this is the paper that I'm referring to: essentially, for the information power, you can set a minimal cost by looking at the change in the conditional or joint entropy between the controller and the motor element, the cargo. So it's asking, how much does the controller reduce the entropy change rate through the feedback? That's an approximate answer, but if you want the details, they're in that paper. Thank you. Okay, I suggest that we thank John and all the other speakers in the session.