I moved a bit towards the quantum side, in the Aspelmeyer team in Vienna. So, well, let me start the talk by first thanking the organizers a lot. I'm very glad to be here, and I think it's very nice that everyone can gather online like this, very convenient. As I said, the work I present today was done at the ENS de Lyon with Jorge Pereda, who was a postdoc at the time, and Sergio Ciliberto and Ludovic Bellon, who were my supervisors. Basically, we worked on information and thermodynamics, and more specifically on how to optimize information processing using underdamped systems.

Let me start with something I think you all know, but I would like to begin from the basics, which is the cost of information. I will define it as the energy spent to perform one basic operation. If you look at the phenomenology, there is a law called Koomey's law, a bit similar to Moore's law, stating that with the advances made in technology we are able to roughly double the efficiency of our computers every one and a half years, meaning we can perform more and more basic operations for the same amount of energy. Nowadays, in 2020, we were able to perform at best one basic operation per 10⁻¹⁵ joules. You could assume it will keep rising this way, but at some point we will reach a fundamental theoretical bound that was stated by Rolf Landauer, working for IBM at the time, in 1961. He stated that one bit erasure, or similarly one bit writing, will cost you at least k_B T ln 2, where k_B is the Boltzmann constant, T the temperature of the device you're using, and ln 2 the natural logarithm of two. This can seem very small, about 3×10⁻²¹ joules at room temperature, but if we keep going this way, within 40 years we will reach this fundamental bound, and it will prevent us from going any further in terms of computer efficiency.

That is why we were trying to go back to the fundamental physics behind information processing: scale down your computer, or whatever device you're using, to the smallest possible one-bit memory, one that naturally evolves at the scale of the fundamental bound stated by Landauer, which is the thermal energy at room temperature. So during this talk, I will show you how we built an underdamped one-bit memory and used it to measure the cost of a reset operation, and also of the NOT operation, which is reversible. Then I will discuss a bit the specificities and the benefits of underdamped memories, and in the end I will try to open up by showing how we can do better.

Well, let's start with how we built the one-bit memory. To encode one bit of information you need a state 0 and a state 1. The easiest way to do that is to encode it in one degree of freedom, let's call it x, in a bistable potential: if the system is stable on the left, it is state 0; on the right, it is state 1. It is better to have an energetic barrier between the two possible states high enough to prevent the bit from switching, so that the information is secure. And the potential has to be tunable, in order to perform whatever operation you want on the memory. What you have to keep in mind is that, in the end, the performance of your memory is ruled by two things. The first is the potential and the protocol you use to operate on the memory.
The second is the system used to encode the memory, the thing that is on the left or on the right. Usually, to scale down to the thermal energy level, people use microsystems ruled by stochastic dynamics. One way to do that is, for example, to use a particle in water trapped by optical tweezers: if it's in the left trap, it's 0; in the right trap, it's 1. Some teams did this already; you can see the references there. The characteristic of this system is to be overdamped, because it evolves in water, which is a very viscous fluid, so the quality factor is much lower than one.

What we did was basically to wonder: can we do the same, this time with underdamped memories? We used this system, a cantilever of the kind usually used for atomic force microscopy. It can be bent a bit, and the deflection of the tip of the cantilever along the vertical x axis encodes the information: if the cantilever is down, it is 0; up, it is 1. But this time the system is a nearly perfect harmonic oscillator, as you can see on this plot comparing the measurement and the harmonic fit. And if you look at the spectrum, there is a resonance frequency, because this time you are underdamped: you are evolving in air and not in water anymore. So there is inertia, there is speed, and there is a natural oscillation frequency of the cantilever, about 1.2 kilohertz. Here, at air pressure, the quality factor is about 10, but we are able to go much higher in quality factor by removing the air and going a bit lower in pressure.

So this is our memory. I don't want to spend too much time on experimental details, but just so it doesn't look like magic how we create the bistable potential: we apply a voltage between the cantilever and the facing sample, and doing that we are able to attract the cantilever a bit down or a bit up. It's just voltage control. Since we want a bistable potential, we use the trick of a feedback loop. At each time we measure the position x of the cantilever (we do that with an interferometric system, so we have the position with very high accuracy), and in real time we compare this position to a threshold x₀, which can be zero, for example. Depending on whether the system is on the right or on the left of the threshold, I change the voltage so that the cantilever is attracted to the right or to the left. Let me show you this video, which is actual experimental data: you can see the system evolving in one well, and whenever it crosses the threshold, I switch the force. In the end this is called a virtual, or effective, potential, because I just lie to my system and show it the correct potential each time it switches. Thanks to this feedback loop, I can have a perfect bi-quadratic potential. If you want to be convinced, this is the experimental reconstruction of the potential from the probability distribution: we obtain a very clean bi-quadratic potential.

So this is my underdamped memory, and now I will use it to figure out the thermodynamic cost of different operations: first the reset, then the NOT operation. A reset operation is also called an erasure, and we expect this fundamental lower bound.
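To make the feedback trick concrete, here is a minimal simulation sketch, assuming reduced units (m = ω₀ = k_B T = 1) and made-up parameters rather than the experimental ones: an underdamped Langevin equation where a single harmonic trap is re-centered on one side or the other of a threshold according to the measured position, producing the virtual bi-quadratic potential.

```python
import numpy as np

# Reduced units: m = omega0 = kB*T = 1. All parameter values below are
# illustrative assumptions, not the experimental ones.
Q     = 10.0           # quality factor (underdamped regime: Q >> 1)
gamma = 1.0 / Q        # damping rate, gamma = omega0 / Q
x1    = 2.0            # well centers at +/- x1 -> barrier ~ x1**2 / 2
x0    = 0.0            # feedback threshold
dt    = 2e-3
nstep = 500_000

rng  = np.random.default_rng(0)
x, v = x1, 0.0                       # start in state "1"
traj = np.empty(nstep)
kick = np.sqrt(2.0 * gamma * dt)     # thermal noise amplitude (kB*T = 1)

for i in range(nstep):
    # Feedback loop: compare x to the threshold and place the harmonic
    # trap on the corresponding side -> virtual double-well potential.
    center = x1 if x > x0 else -x1
    v += (-(x - center) - gamma * v) * dt + kick * rng.standard_normal()
    x += v * dt
    traj[i] = x

# Boltzmann inversion: the sampled density should give back the
# bi-quadratic potential, U(x) = -kB*T * ln p(x) + const.
p, edges = np.histogram(traj, bins=100, density=True)
xc = 0.5 * (edges[1:] + edges[:-1])
U  = -np.log(p + 1e-12)
print("barrier height ~", U[np.argmin(np.abs(xc))] - U.min(), "kB*T")
print("Landauer bound: ln 2 =", np.log(2), "kB*T (~2.9e-21 J at 300 K)")
```

The Boltzmann-inversion check at the end plays the same role as the probability-distribution reconstruction shown on the slide.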
Just to go back a bit to the context on this one: when you reset your memory, when you erase information, you start with two possible states, 0 or 1, two possible values of the information, and you want to reset the memory to state 0 whatever the initial state was. Doing that, you are compressing the phase space: you are shrinking the number of states available to the system from two to one. If I write the Boltzmann definition of the entropy, you can see that this comes with an entropic loss of k_B ln 2, because you divide the number of states available to the system by two, and this entropic loss bounds the average heat required to erase. So in the end, when I erase my memory, I need some work to proceed with the operation, and I release some heat into the surrounding environment. As I am doing cycles, the potential (the potential, not the memory state) is the same in the initial and final states. Therefore the average heat equals the average work, and both are bounded by the entropic loss, k_B T ln 2, T being the temperature of the surrounding environment. This inequality can be saturated when I proceed very slowly, in the quasi-static regime.

So that was the theory. Now let's see how we measure the cost of the erasure, the reset operation, for an underdamped memory. We had to design an erasure protocol; in our case, we decided on the following. I will describe it and then show you a video, because videos are always better. What we do, basically, is start with the two wells and progressively merge them together so that all the information is lost: we no longer know whether it was 0 or 1. Then we bring everything back to 0 and rebuild the second well, so that the memory ends in the same configuration. On the video you will see in yellow what happens if the initial information is a 1, and in blue what happens if it is a 0. These are experimental trajectories, and on the right you have the potential landscape. At first, the bit is secure in its well, either 0 or 1. Then we start merging the two wells and the information is progressively lost, because the system jumps from one well to the other. Once there is no information about the initial state anymore, we just bring the system back to state 0 and, in the end, reconstruct the second well. So we have a cycle: you start with the bistable potential, you end with the bistable potential, but now the only final state is state 0. This is an erasure.

Just so that we speak the same language: the first stage, when I merge the wells together, I will call compression, because we are compressing the phase space, forcing the system to go from two states to only one. Then I translate the system to state 0 and rebuild the memory; this is called stage two. As you will see in the following, the two stages have very different descriptions.

Then, how do we compute the cost? It is a thermodynamic cost, so I want to compute the work and the heat. These are stochastic thermodynamic quantities, and you can compute them when you know the trajectory, which is the case here. In blue, for example, I have one trajectory of the system: the position x jumping between the wells, then staying in one well, then going back to 0. I also have the driving of the two wells, the red and black lines here. Using some formulas (I don't want to go into the details, but it essentially amounts to differentiating the potential during the operation), I can compute the work and the heat for each trajectory.
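As a concrete illustration of this bookkeeping, here is a minimal sketch of the per-trajectory computation. The parametrization of the virtual potential by two well centers and the function names are hypothetical simplifications, and the work discretization follows the standard stochastic-energetics (Sekimoto) convention; this is not the actual analysis code.

```python
import numpy as np

def potential(x, c1, c2, k=1.0):
    # Hypothetical parametrization of the virtual bi-quadratic potential:
    # a harmonic well of stiffness k centered on c1 (x < 0) or c2 (x >= 0).
    return 0.5 * k * (x - np.where(x < 0.0, c1, c2)) ** 2

def work_and_heat(x, v, c1, c2, m=1.0, k=1.0):
    """Stochastic work and heat along one measured trajectory.

    x, v   : position samples and their time derivative (the velocity,
             which is why high position accuracy is essential)
    c1, c2 : time series of the two well centers (the driving)
    """
    # Work: energy injected by moving the potential at frozen position,
    # dW_i = U(x_i, lambda_{i+1}) - U(x_i, lambda_i)
    W = np.sum(potential(x[:-1], c1[1:], c2[1:], k)
               - potential(x[:-1], c1[:-1], c2[:-1], k))
    # Heat released to the bath, from the first law: Q = W - dE
    E = 0.5 * m * v**2 + potential(x, c1, c2, k)
    Q = W - (E[-1] - E[0])
    return W, Q

# Averaging W and Q over thousands of trajectories gives the mean cost
# of the erasure, to be compared with kB*T*ln 2.
```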
Then I average over thousands of trajectories, and I know the average cost of my erasure. This is what we did for different erasure speeds, so let me comment on this graph with you. Here we show the average work, or the average heat: they are supposed to be equal, and indeed the crosses and the circles are always very close, which is very good news at the k_B T scale. I plot this as a function of the erasure speed, τ being the characteristic time at which I merge the wells together and also operate the translation. If I go very, very slowly, I get ln 2, which is the expected Landauer bound. Let me highlight that, compared to previous overdamped studies, we have much better accuracy on the work, and we are able to compute the heat, which is very complicated because it involves a derivative of the position; it is only doable because we have the position with very high accuracy here. Then, if I go faster, I have to pay a bit more, which makes sense, since I am manipulating the system faster. And as predicted in overdamped studies, we observe much the same in the underdamped regime: a scaling in one over the characteristic time of the operation, 1/τ. And if I want to perform the erasure several orders of magnitude faster, I basically have to pay double the price. So this is the kind of measurement we were able to do, computing the cost of a reset operation.

Now I want to show you what we did to perform a NOT operation and compute its cost. Indeed, if you have one bit of information, there are only four possible operations on it: either you reset to 0, you reset to 1, you do nothing, which is called hold, or you flip the bit, which is called NOT: if I have a 0 at the initial time, I want a 1 at the final time. It is also sometimes called a bit flip, and this time it is logically reversible, because I am not compressing the phase space available to the system: I start with two possible states and end with two possible states, I just want to swap them. Here, being underdamped allows us to use the phase space and the velocity degree of freedom to perform this computation: basically, we rotate the two states in the position-and-velocity landscape. And if we do that a bit smartly, we avoid any irreversibility, meaning the operation is logically reversible, so in principle there is no entropic cost; and if there is no damping, a smart protocol can even achieve zero work on average for this operation.

Now you all want to know what the smart protocol in question is. It is the following: you start with your memory, the two wells, and when you decide to operate the bit flip, you switch to only one well and wait exactly half an oscillation at the natural frequency of your underdamped oscillator (it is underdamped, remember, so you have natural oscillations). You wait half an oscillation, so your particle on average reaches the other side, and then you recreate the memory. During that time, you just benefit from the velocity degree of freedom to obtain a bit-flip protocol.
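Just to put numbers on this protocol, here is a back-of-the-envelope sketch using the figures quoted in the talk, together with the theoretical cost π E_b / Q mentioned just below; the values are taken from the talk, not independently derived.

```python
import numpy as np

f0  = 1200.0               # natural frequency of the cantilever [Hz]
Q   = 100.0                # quality factor used for the NOT experiment
E_b = 5.0                  # barrier height, in units of kB*T

t_flip = 0.5 / f0          # duration of the flip: half a natural period
W_not  = np.pi * E_b / Q   # theoretical average work (vanishes as Q -> inf)

print(f"bit-flip duration    : {t_flip * 1e3:.2f} ms")   # about 0.42 ms
print(f"theoretical work     : {W_not:.3f} kB*T")
print(f"Landauer bound (ln 2): {np.log(2):.3f} kB*T")
```

With these numbers the flip takes about 0.42 ms, and both the predicted cost and the measured 0.44 k_B T quoted next sit below the Landauer bound.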
Again, let me show you a video of what happens if you start in 1 or if you start in 0. First you have your two possible bits, 1 or 0; now I switch, half a period passes, and you are flipped. As you can see, it is very fast, just half a period, and you switch between the yellow and the blue, the two different cases of starting in 0 or starting in 1. And we computed the work required to do that. In theory it is π over the quality factor times the energetic barrier; for an energetic barrier of 5 k_B T, for example, we measured in the experiment an average work of 0.44 k_B T. Why is it not zero? Because we did not have zero damping: we were working with a quality factor of 100, so we still had a bit of dissipation into the environment. So we are able to beat the Landauer bound, because there is no more entropic cost from irreversibility, but we still need a bit of energy to proceed, just because of the damping.

So this was for the NOT, and now I will shortly explain what, in my opinion, are the big specificities and benefits of using underdamped memories. First, as I said, we are able to perform fast NOT operations (half a period) and cheap ones. And I want to stress that this could not be done in overdamped dynamics: in Markovian dynamics you cannot swap the two states using only one degree of freedom. If you don't have the velocity degree of freedom, you cannot do it in overdamped dynamics. One thing you could do is use two spatial degrees of freedom and go around a circle in the same way, but then you have to pay the price of dragging your system around that two-dimensional circle, and you have to do it very slowly. So, to conclude on this point, it is much better in underdamped dynamics, because we can go faster and cheaper.

The second benefit is that the reset operation can also be faster and cheaper. This is again the plot you saw before, this time as a function of the characteristic erasure time, and you can see the same trend: you have to pay more if you go faster. But using underdamped dynamics, we were able to reach the minimal possible cost, the Landauer bound, several orders of magnitude faster than what was done in overdamped dynamics in previous studies. So again, a really huge benefit.

But now let's turn to the specificity; this is, I think, very important for understanding what happens. If you are underdamped, there is very little coupling to the bath, you are nearly isolated from the environment, meaning you cannot thermalize as quickly as you would in overdamped dynamics. If I write down the heat flux, it is obvious that if you are overdamped, or go very slowly, the memory is always thermalized with the environment at T₀. On the contrary, if you go fast while underdamped, there is almost no coupling, so the temperature of the memory can increase. And that is exactly what happens, just as it would for a gas: if you compress the phase space very fast, reducing the volume available to your system by a factor of two, the temperature increases. I will go a bit fast on this and just show you how we can measure the temperature: we measure the kinetic energy of our system during one erasure protocol, and this way we have access to the kinetic temperature of the system.
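A minimal sketch of that estimate, assuming equipartition for the single mode (½ m⟨v²⟩ = ½ k_B T_kin); the function name and units are illustrative:

```python
import numpy as np

def kinetic_temperature(x, dt, m=1.0, kB=1.0):
    """Kinetic temperature of the mode from a measured trajectory x(t).

    The velocity comes from differentiating the position, which is only
    reliable because x is measured interferometrically with very high
    accuracy. Equipartition then gives kB * T_kin = m * <v**2>.
    """
    v = np.gradient(x, dt)          # numerical time derivative of x
    return m * np.mean(v**2) / kB

# To follow T_kin during the protocol, average v**2 at each instant over
# many repetitions instead of over time: the ratio T_kin / T0 starts at 1,
# rises towards ~2 during a fast compression, then relaxes back to 1.
```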
And as you can see, you start at room temperature, the environment temperature, and this energy rises during the compression of the phase space, reaching twice the initial temperature, just because you compress without being able to thermalize with the bath. Then you have to wait for thermalization before you can proceed with the translation stage. So, to summarize, one key specificity of this kind of system is that if you compress very fast, you end up increasing the temperature, and instead of paying k_B T ln 2 you pay k_B T_eff ln 2 with the effective temperature. This effective temperature is bounded by a factor of two, so the maximum work required is at most twice k_B T₀ ln 2, but it has to be mentioned, because it does not happen in overdamped systems.

Now, just briefly before concluding, I will show you two ways to improve a bit the performance of this kind of underdamped memory. Basically, there are two solutions: either you optimize the memory properties, or you optimize the erasure protocol.

You have five minutes, including questions.

Okay, maybe then I will go fast on this one. I will just show you these results here, which are about optimizing the protocol. Earlier I said we just translate the system linearly, without thinking further, but there is a smarter way to do it, using the kind of protocol computed in the literature that I put on this slide: if you kick the system a bit at the beginning and at the end, you give it straight away the final velocity for the translation. This is just to show you: in red is the optimal protocol, and in blue the naive linear protocol, and you can see that the optimal protocol reduces the oscillations and reduces the overcost at the end of the translation. This is what we did, and you can see here, in red, the optimal protocol compared with, in blue, the non-optimal one, this time for the erasure cost; it really improves the performance of your memory a lot.

On that, I think I will go straight to the conclusion. Or maybe just a word on this: it is a possible way to speed up the thermalization after the temperature rise; we could apply some specific protocols right after the increase of temperature, just to thermalize more quickly. And with this, I want to thank you for your attention, and if you have any question, please go ahead.

Thank you very much, Salambô. We will take the questions from the audience; please raise your hand. First the students, I think there are a lot of questions. Please go ahead, okay.

Oh yeah, hi, very nice, I have a quick question. If I understand correctly, your potential is created by the voltage, right? And you are measuring the tip of the cantilever, and that tip is doing the motion like a bead in an optical trap, right?

Yes, it's exactly like an optical trap, but here it is the mechanical properties of the cantilever.

Yeah, but how did you do that translation thing with that cantilever here?