So before we start with the exams, we will hear an experimentalist's perspective on what Jorge has been telling us; so please, Mahesh. Thank you. This is like reliving my PhD defense, or at least a part of it. Okay, so I am doing this extempore; I haven't had time to collect my thoughts or anything. I did this experiment as part of my PhD thesis because I was extremely disappointed with the state of turbulence. It's called the graveyard of theories, the last unsolved problem in classical physics, and so on and so forth. And one of the committee members went at me. The evening before, I had sent a kind reminder to all my committee members about my thesis defense, and he had responded, "I assure you I will be there, and I'll be on fire." So I said, "Thank you for letting me know; I'll bring a fire extinguisher." Which was my way of saying: if you want to pick a fight, pick a fight with someone your size; why do you want to show your strength on a student? But it became clear as the defense went on that his beef with my thesis was that I had written an entire PhD thesis on turbulence without ever mentioning the Navier-Stokes equation. Daniel Boyanovsky, he was, and still is, a professor at Pittsburgh, and a very nice person after the defense; but until then he was a terror. He said, "How can you write a PhD thesis without the Navier-Stokes equation?" I said, that is the point of it: what has the Navier-Stokes equation taught me about turbulence? I don't learn much from it. Anyway, we were going back and forth, and at some point my advisor took me out of the room and said, "Just defend and get out of here." So, about fluctuation relations: how does an experimentalist approach them? This is partly philosophical and partly the experimentalist's dilemma, because we can't measure everything; this is the point I was making in my first slide, in my lecture on the first day.
It is important for an experimentalist to know the requirements that went into the development of the theory, the assumptions that went into it, the limits under which the theory works, if you want to construct a viable test of that theory. Just as, for a good theory, it is important to keep in mind what physical observables are experimentally measurable. We can't measure everything; computational, digital experiments can do a bit better, but only so much better. So we have gone, through his lectures, developing it from equilibrium, through linear response, to the driven nonlinear regimes, and we have arrived at the fluctuation relation. When the first fluctuation relations were discussed by the various dramatis personae that Jorge mentioned, one of the problems for the experimentalist was: this looks exciting, because for the first time in so many decades we have some important-looking result that says something non-trivial about non-equilibrium regimes, where we don't have much of a theory. But the problem was that we can't measure the entropy production rate. So what can we measure experimentally that works as a proxy for the entropy production rate? We will start with some quantity that I will call x(t). It could be anything; I will constrain it further later on. And let's say this is the system. I'm not saying what the system is; it could be anything, and there have been many systems on which the fluctuation theorems, or fluctuation relations, have been tested. We are measuring some quantity. We are putting energy into the system in some steady state, so we are forcing the system; the system is doing some work and there is some dissipation, and all of the dynamics is somehow subsumed in some signal I'm measuring as an experimentalist. I will call that signal x(t). All I get is some time series. And is the system a constant-temperature thing? No, I'm not even getting into temperature.
Okay, okay, I'm doing it in a room at constant temperature. But when I get to the end of it, and this is one of the reasons I left fluctuation relations, it doesn't give me a meaningful value for temperature. That is my problem with fluctuation relations. Now, Jorge gave this correspondence between the ensemble picture and the time-domain picture, how we can think about switching back and forth between them. Now, if I want to measure the entropy production rate, which I cannot really do in experiments, I want a proxy for it. So it has to be something related to the entropy production rate, and Jorge already gave us one such quantity: the power, F·v. So if I'm putting force into the system, I'm getting some quantity out of it. So x(t) could be power. It could be the energy dissipation rate, which is also a power. In other words, in the non-equilibrium stationary state the rate of energy injection, the power put into the system, is on average the same as the energy dissipation rate. So I could measure the fluctuations at the input end, or at the output end, or within the system. The input end is not going to make much sense, because I am by fiat fixing a constant energy injection. So what makes sense is that I measure some quantity here, or I put some probe here and measure the power at the output, on the dissipation side. Now, if I want to do the averaging, according to the protocol that was established by the fluctuation relations: if I have a time series, then I have some distribution for it. I don't know the functional form of the distribution. It could be a Gaussian; it could be something else, exponential, anything.
I'm saying nothing about it. Usually it cannot be a Gaussian; I'll come to that in a moment. Now, what the fluctuation relation says is that I take this quantity x(t), integrate it over some period tau, and call this time average x_tau = (1/tau) ∫_0^tau x(t') dt'. I can plot distributions of this time-averaged quantity; as the averaging time tau increases, as Jorge explained, the distribution is going to be increasingly peaked. So let's say this is tau small, that is tau intermediate, and this is tau large. I'm not defining what is small and what is large just yet; it has to be relative to something. So what is that something? That is the first question the experimentalist asks. Which is another way of asking: what is the range of this time window over which the averaging occurs? The fluctuation relation says that the averaging window tau should be measured against the correlation time of that signal. That means I calculate the autocorrelation, <x(t) x(t + t')>, which I'm plotting on the y-axis as a function of t'. It's going to decay in some manner to zero; the area under this curve is the duration over which the signal is losing memory of its past. So tau has to be compared with the correlation time, but it should also be many sampling times.
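As a sketch of this first practical step, estimating the correlation time from a measured time series: the Ornstein-Uhlenbeck signal below is only a synthetic stand-in for a measured power trace, with a known correlation time built in so the estimate can be checked, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the measured signal x(t): an Ornstein-Uhlenbeck
# process with a known correlation time (illustrative values throughout).
dt = 0.01          # sampling interval
tau_c_true = 0.5   # correlation time of the synthetic signal
n = 200_000
a = np.exp(-dt / tau_c_true)
noise = rng.normal(scale=np.sqrt(1.0 - a**2), size=n)
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = a * x[i - 1] + noise[i]

# Autocorrelation C(t') = <x(t) x(t + t')> of the mean-removed signal,
# normalized so that C(0) = 1.
y = x - x.mean()
nlags = int(5 * tau_c_true / dt)           # follow the decay for ~5 tau_c
C = np.array([np.mean(y[: n - k] * y[k:]) for k in range(nlags)])
C /= C[0]

# The correlation time is the area under the normalized autocorrelation.
tau_c_est = C.sum() * dt
print(f"estimated correlation time ~ {tau_c_est:.2f}")
```

The averaging window tau and the total run length are then chosen in units of this estimate.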
So you have to pick the sampling time of your signal fast enough that you have several samples within the averaging time; the more samples you have, the better, because you get to construct more and more distributions, to test this over several values of tau. Now, the trouble experimentalists ran into: there are several beautiful papers by Sergio Ciliberto, and by Narayanan Menon on granular matter. Ciliberto did experiments in turbulence, and on current-carrying conductors: a resistor with a very high resistance, which fluctuates because of changes in the air currents that momentarily cool the resistor. So there were various systems in which this was tested. Now the statement of the fluctuation relation, following from here, concerns the ratio of the probability that my time-averaged x_tau takes the value +a to the probability that x_tau takes the value -a. I will now blow up this distribution for simplicity. When I plotted my histogram and normalized it, I got several points; that means I need lots of statistics to be able to have a PDF of this quantity. So this axis is x_tau and this is the probability density function: I normalize the histogram and I have the PDF. This is zero, and let's say that is the most probable value, but that is not what we are concerned about; we are concerned about fluctuations. So I look at this quantity versus this quantity: if this is +a, then this would be -a. The ratio goes in a particular way, which Jorge proved for us in the context of the entropy production rate, not for x_tau: it goes as

P(x_tau = +a) / P(x_tau = -a) = e^{tau a sigma},

where sigma is some quantity which we call the entropy production rate. Am I right? Something is wrong. Something is wrong, because we know the left-hand side is dimensionless, and the right-hand side should also be dimensionless. So what is wrong? Yes, here.
This is coming out as the entropy rate, and since x_tau is not an entropy rate, it won't come out as such; that is the problem. Just give me a moment. So this is where I have to go back. No, so far it's correct. So I can now write this as: if I plot the left-hand side, (1/tau) ln [P(x_tau = +a) / P(x_tau = -a)], as a function of the value a, this should go as a straight line. That was one way Jorge showed us that the fluctuation relation is verified, tested, and the slope comes out to be sigma, which is interpreted as the entropy production rate. This is the way the experimental computation works, because we don't have access to the entropy rate directly. I am aware of only one experiment that was able to do it that way. Otherwise we always work with power; we always work with dissipation. So either it is F·v, or torque times the angular velocity, or I²R or V²/R for electrical power. That is the quantity that is always available to the experiment. So if the entropy production is larger, which means that I'm driving the system faster, then the distribution should be more peaked; is this right? Yes. Is there an intuition for this? I would imagine that it should be broader, because the fluctuations are larger. So you're saying, as the system size increases... No, I mean the system is driven harder. I'm saying you are measuring x.
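A minimal numerical sketch of this plotting protocol, assuming a synthetic Gaussian signal with a positive mean standing in for a measured dissipation trace (all parameter values illustrative): coarse-grain into windows, histogram the window averages, and fit the log-ratio of the tails.

```python
import numpy as np

rng = np.random.default_rng(1)

# Dissipation-like signal: positive mean plus correlated Gaussian
# fluctuations (an Ornstein-Uhlenbeck process). Illustrative values.
dt, tau_c, mu, n = 0.01, 0.5, 0.5, 400_000
a_coef = np.exp(-dt / tau_c)
xi = rng.normal(scale=np.sqrt(1.0 - a_coef**2), size=n)
f = np.empty(n)
f[0] = 0.0
for i in range(1, n):
    f[i] = a_coef * f[i - 1] + xi[i]
x = mu + f

def fr_slope(x, win):
    """Fit (1/tau) ln[P(x_tau=+a)/P(x_tau=-a)] as a straight line in a."""
    tau = win * dt
    m = (len(x) // win) * win
    xtau = x[:m].reshape(-1, win).mean(axis=1)   # non-overlapping windows
    # Symmetric bins about zero so that the +a and -a bins pair up.
    edges = np.linspace(-1.5, 1.5, 31)
    hist, _ = np.histogram(xtau, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    pos = centers > 0
    p_plus = hist[pos]
    p_minus = hist[~pos][::-1]           # mirrored bin at -a
    ok = (p_plus > 5) & (p_minus > 5)    # need counts in both tails
    a_vals = centers[pos][ok]
    lhs = np.log(p_plus[ok] / p_minus[ok]) / tau  # normalization cancels
    return np.polyfit(a_vals, lhs, 1)[0]

# Windows of 1, 2 and 4 correlation times. With the 1/tau factor included,
# the fitted slopes drift toward a common value as the window grows --
# the master-curve collapse (for this toy Gaussian signal the long-window
# limit of the slope is mu divided by the product of variance and tau_c).
slopes = [fr_slope(x, win) for win in (100, 200, 400)]
print(slopes)
```

Because counts enter only as a ratio, the histogram need not be normalized; in a real experiment the struggle is getting enough counts in the negative-a bins at all.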
Okay, and now you are driving your system with F. Now, if the entropy production is zero, then you should get that your x is more or less symmetric. Now, if your entropy production is large, I would imagine that your fluctuations in x should be larger. Yes. And that's not what you get there, because there you'll get that if sigma is large, sigma acts like tau... No: if sigma increases, then the distribution is sharper. The difference arises in the following sense. If the forcing is increasing, the fluctuations will increase and sigma will change; sigma will get steeper, the slope will increase. But under your condition I keep the forcing constant, so I'm getting a certain magnitude of fluctuations; but because I'm averaging over a different tau, I'm smoothing the fluctuations, and therefore I should get a different sigma. No: the system did not change. Yes, I only say that the dependence on sigma is a little bit strange; I would put it maybe in the denominator. And then there is also an issue of dimensions: sigma times tau times x_tau. Yes, I understand why; I did not complete this definition. This x_tau is divided by the same quantity in the limit that tau goes to infinity, so it is a dimensionless quantity.
Sorry. We have to take the full long-time average of the signal we got, which is to say this signal should have extended over several, several correlation times. So this experiment has to run very long. It has to sample very fast, so that you have several points within a correlation time, and you need several correlation times so that you can reliably build a PDF. Now the dimension part is okay, but it still does not settle your other question, and I maintain that whatever physics is there is subsumed within this sigma, which falls out of the analysis. We don't have a way to predict the value of sigma in an experiment a priori; only in very rare cases can we do that. But can one say this: you do the experiment, you measure x(t), then you compute one over tau times the logarithm of the ratio of the probabilities, and you find a straight line; then let's discuss how the slope is related to entropy production. So let me backtrack a bit. I have moved the tau back here; that is, I haven't divided by it yet. Let's say this is the left-hand side, ln [P(x_tau = +a) / P(x_tau = -a)], which I am plotting. If I were to plot this, I would get a family of straight lines, one for each tau. This could be for one correlation time, this for two, three, four correlation times. As tau changes, the slope will change, but that change in the slope is systematic, because it is coming from the factor of tau and not from sigma. Which is why we always plot it with the one over tau, so that all the curves fall on one master curve. So the sigma does not change during the coarse-graining; that change is coming from the choice of the averaging time. Now, does that answer your question? No, that is not your question. Then I haven't understood. The question is: what is the statement?
So the statement is that when you measure one over tau times the logarithm of this ratio of probabilities, you get a straight line. This is an experimental fact. Yes, that's the empirical fact. Well, that is where the experimental verification corresponds to the theoretical fact, or the theoretical prediction. Is it a theoretical prediction? Yes, with an a that is not entropy production. I have to check it; it's familiar to me, but let me check it and I'll come to you tomorrow. But if a is entropy production, then there is no doubt, or power, or so on. There was only one experiment that was designed in such a way that you would be measuring the entropy production rate. Yes, yes, but some assumptions have to enter. Yes, I have to see what exactly the assumptions are. You're right, and I don't remember those assumptions after all these years. In fact, there are two assumptions. When Gallavotti first presented this result, I don't know at what meeting, because I was not a student of physics then; this is a story I have heard from my PhD advisor. There was a lengthy discussion from the experimentalists. They said: we cannot measure this, because it was a global measurement, and if you're measuring the signal for a global system, it's a globally averaged quantity; you're never going to see the signal violating the second law momentarily. So then Gallavotti apparently went back and re-derived it, and it was called the local fluctuation theorem; you are the expert on this. The one that was explained to me by Gallavotti was the local fluctuation relation, which I then went and tested. Okay, as far as I know the local versions never hold generically; you need very special assumptions. But I understand perfectly well that people wanted something local, because global quantities fluctuate very little.
No, this is why thermodynamics is useful, unless you have a tiny, tiny system. So this is where the experimentalists come to the theorists' conference in Split, asking: on what basis can you equate the Evans-Searles-Morriss fluctuation relation with the Gallavotti-Cohen fluctuation relation, versus the Jarzynski equality? Because it is relatively easier for the experimentalists to see violation events when you are working with a microscopic system, or a system with very few degrees of freedom. As the number of degrees of freedom of the system increases, of course, the fluctuations will die down and you will have globally averaged, smooth quantities; then we are not going to see these violation events so easily. So for example, well, I remember the Bustamante experiments, where essentially x was the work done pulling DNA open. There the temperature makes sense. Okay, because there a very specific thing was measured, the work, which one knows. Yes, see, there measuring the work makes sense, because you have a handle on the temperature: the temperature you're using is the thermodynamic temperature. Take turbulence. You are driving the system hard, deep into the non-equilibrium, nonlinear regime. If I were to back-calculate and ask what temperature I get through the fluctuation relation, I'll get something nonsensical like 10^19 Kelvin; the water should have evaporated, ionized, and turned into plasma by then. It makes no sense.
So I have often asked: what have I learned from the fluctuation relation about the system I'm using it to verify, that I did not know before? From the experimental standpoint, here is what I have learned. In the first line of Anna Karenina, Leo Tolstoy writes that every happy family is happy for the same reason, but every unhappy family is unhappy in its own way. We have a very sophisticated theory that explains equilibrium statistical mechanics to us. It has been applied with such mathematical sophistication to very difficult problems like critical phenomena, phase transitions, and renormalization. We don't have any such handle for systems that are out of equilibrium. We have results when the system is nominally removed from thermal equilibrium, or in the linear response regime; but when we go deep into the non-equilibrium regime, like a turbulent flow, we have no handle. What the fluctuation relation says is: no, there is a method to the unhappiness of every family also. That is to say, even though time-reversal symmetry breaking is happening, the symmetry breaking itself is happening in a systematic manner; it's not every system to itself. There is a method to this madness. Where the fluctuation relation falls short in helping the experimentalist is when I try to pull out the analog of a temperature for this strongly driven nonlinear system.
I don't get a useful answer. That means the thermodynamic description is not so easy to push forward into the non-equilibrium regime, which of course the theorists already knew, because if they hadn't, they would have figured out the theory by now. Yes, so I'll give you one example. Please. The way you chose +a and -a differs from what we have done in class, where it was the entropy production on this axis and its negative on the other axis; in your case, you pick two different points on your distribution and say this point is +a and that point is -a. It's not very different from what Jorge explained. What Professor Kurchan explained was the analytical derivation and the physical intuition that goes with it. The experimentalist always has to discretize; we don't have anything continuous, right? So the discrete form of that is: if this is zero, then consider the left tail of the probability density function. This is what I'm calling x_tau, and this is the probability density function of x_tau. The ratio of the probability that it takes a positive value of a certain magnitude to the probability of a negative fluctuation of the same magnitude: that is the ratio we are developing here. So the only difference is that, whereas Jorge wrote sigma_tau, I'm writing x_tau equal to a particular value, and from there I'm developing it for the values a, b, c, d, and so on. Okay, all right. You said that you would come back to the Gaussian; that's a good point. The Gaussian distribution, you can show trivially, satisfies the fluctuation relation. I know some of the theorists will not be happy to hear this; I got many angry emails when I made this remark in a paper.
Let me explain why. Let's say my probability distribution goes as p(x) ∝ e^{-(x - μ)²/(2s²)}. If I ask for the ratio of the probability that this takes the value +a to the value -a, the normalization is already sitting inside it, so it cancels, and

ln [p(+a)/p(-a)] = [(a + μ)² - (a - μ)²]/(2s²) = 2μa/s²,

which is linear in a. Now, this distribution is not centered about zero; it is centered out here, and I am taking the ratio of the right and left tails of the distribution. So, a lot of effort: many initial experiments either did not get enough statistics, or they got fluctuations that were Gaussian distributed, which trivially satisfies the fluctuation relation. So there was a hunt for systems that give us strongly non-Gaussian distributions, and the first one to my recollection, and this has been many years, was an experiment by Narayanan Menon and Klebert Feitosa, published in PRL in 2004 or 2005. They took a system of granular discs and vibrated them from the boundaries, so these discs were going about like billiard balls, banging across the walls, and so on, and they were measuring the power fluctuations. When they measured the power fluctuations, they got distributions; now, I'm plotting in a log-linear scale, where a Gaussian will look like a parabola. At very short averaging times tau, maybe a couple of correlation times, the distribution was strongly non-Gaussian; but when they went to, say, 10 or 20 correlation times, the fluctuations were smoothed enough to give them a Gaussian distribution. So this was the first experiment, to my knowledge, that gave an experimental handle on verifying whether the fluctuation relation works for a non-Gaussian distribution. The other problem was, as I said, getting the negative fluctuations; you don't get enough statistics there. So how are you going to verify it?
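The Gaussian point above can be checked in a few lines: the log-ratio of the tails is exactly linear in a, with slope 2μ/s², whatever μ and s are (the values below are purely illustrative).

```python
import numpy as np

# For a Gaussian p(x) ∝ exp(-(x - mu)^2 / (2 s^2)), the normalization
# cancels in the ratio and
#   ln[p(+a)/p(-a)] = ((a + mu)^2 - (a - mu)^2) / (2 s^2) = 2 mu a / s^2,
# exactly linear in a for ANY mu and s, which is why a Gaussian signal
# satisfies the fluctuation-relation form trivially.
mu, s = 0.7, 1.3   # illustrative mean and standard deviation

def log_ratio(a):
    lp = -((a - mu) ** 2) / (2 * s**2)   # log p(+a), up to normalization
    lm = -((-a - mu) ** 2) / (2 * s**2)  # log p(-a), same normalization
    return lp - lm

a = np.linspace(0.1, 3.0, 50)
assert np.allclose(log_ratio(a), 2 * mu * a / s**2)
print("slope =", 2 * mu / s**2)
```

So a straight-line plot alone proves nothing; the non-trivial test requires a strongly non-Gaussian distribution.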
So you have to take long time series, large amounts of data, and hope that you're collecting enough statistics on the negative tail to be able to verify it. So, just to be clear, the statement you are making is that for a Gaussian the log of the ratio of these two probabilities is linear anyhow? Yes, it's trivial. Okay, and the slope is essentially 2μ divided by s². Yes, the slope is 2μ/s². So, the experiment we did: we had a large tank of water, one meter by one meter, filled with water to a depth of 30 centimeters, and we had an eight-horsepower pump that recirculated the water from the tank and reintroduced it through a bunch of sprinklers. So if you look at it from the side, or from above, a bunch of sprinklers: the water comes in through here and comes out like a jet. These sprinklers rotate, and they generate sustained turbulence in the tank from the bottom, and you have water filled to the top; the water goes out from a pipe at the bottom, through the eight-horsepower pump, and is injected back. The problem is that the turbulence intensity is a strong function of the height: as I move up, the intensity of turbulence falls. So you make measurements somewhere in between: it shouldn't be so close that you're feeling the effect of the forcing, the water that is coming out like a jet, but you should have enough churning of the water and enough intensity to get strong fluctuations. What we did was put particles that naturally float on the surface. These are hollow glass particles that are naturally buoyant, so they look like foam, or creamer in a coffee cup before it mixes, if it's bad creamer. So you're churning from below, driving it hard enough to get strong fluctuations, but not so hard that you get waves; the surface is more or less flat. At t = 0 you introduce particles and take fast camera recordings, at one or two thousand frames per second, and you track the particles to construct the velocity field. So now I have what we call the Eulerian velocity field of this whole thing. The problem with this is the following. The flow of water is incompressible, but the system of particles floating on the surface of the water forms a compressible system, because they can't follow the water molecules into the bulk. So if you naively imagine you're setting up some convection rolls, the particles will always flee the upwellings and cluster around the downwellings. So if you look at it from above, the distribution of particles will look like clusters; the measure is concentrated, if you want to talk about it in terms of measure theory. One of the assumptions that went into the Gallavotti-Cohen relation was that the dynamics in phase space follows a particular kind of distribution, known as Sinai-Ruelle-Bowen statistics, SRB statistics. This statistics is supposed to be smooth along the unstable manifold, with a fractal measure along the transverse direction. That was the starting point for me. I had no clue; I just latched on to that and asked: can I test this with the distribution of these particles in the steady state, floating on my turbulent sea? Do they have a smooth distribution along the unstable manifold, and do they have a fractal structure? Close enough: I didn't get fractal, I got multifractal.
It's still better than anything anybody has done with phase space. Now, the next point of correspondence between phase-space dynamics and the system of particles on the free surface was this. In equilibrium, if I take an ensemble of experimental systems that are in equilibrium, like cups of water in this room at 25 Celsius or whatever, and I make a measurement on all of them, each one is at a different point of phase space; but if I superimpose all of them onto the same phase space, they are uniformly distributed. But if I take an ensemble of hundreds of such experiments that are turbulent at the same Reynolds number, same driving, same forcing, identical conditions, the points are not going to be uniformly distributed; they are going to be clustered at different points. That was the point they were making, which is to say that the dynamics of phase points in phase space follows a hydrodynamic evolution of the compressible kind when the system is out of equilibrium. This is related to the divergence term in the probability continuity equation, right? So if I measure the divergence of the flow of water, I will get zero divergence, because this is incompressible flow. But if I measure the divergence of the particles, I get a non-zero divergence, because the particles are clustering; the divergence is negative. Okay, so let me call it omega.
Omega in fluid dynamics is usually used for the vorticity, but here it is the two-dimensional divergence at the free surface. Now, say that I take the distribution of these particles and call n₀ log n₀ the entropy of the system at time t = 0. I started at t = 0 with a uniform distribution of particles. I have the velocity field that I got from the experiment, and now I do a computer simulation using this experimentally obtained velocity field: I run what we call Lagrangian trajectories of surrogate particles on this velocity field. So if I look from above at the surface, and these particles are distributed in this funky way, I take one particle and ask how it reaches this cluster. Let's say I have one particle that goes to this cluster, assuming the cluster is here, and the one here will go to another cluster, something like that; these are the particle trajectories. So I'm basically following each particle trajectory, and at every instant in time I'm asking: what is the divergence of the velocity at that point, for that trajectory? If I make the correspondence between this 2D flow and phase space, this divergence is like a local entropy production rate. Using that correspondence, we then ran the machinery of the fluctuation relation. So, I told you I'm aware of only one experiment where we could make a correspondence with a two-dimensional phase space and ask whether the fluctuation relation works; this was it. It was terrifying for me, and for Gallavotti, because there are no free parameters of any kind: either it works or it doesn't. So how do you measure this divergence, given that you have the positions of the particles? To compute the gradients, think of it this way: I'm measuring the velocity field at every point in space on the surface. But you don't have particles at every point in space in the experiment.
I had particles that were uniformly distributed, and I was sampling every point in space, because the particles were initially uniform. Yes, but when they cluster, then you don't have that. So what I do with the clustered particles is this: with the velocity field obtained from experiment, I now constrain the particles to remain only at the surface. Initially I did the experiment with particles that did not have this constraint: they could pop to the surface, they could go into the bulk with every water molecule, which means they are divergence-free; it's an incompressible flow. But now I impose by fiat that I want particles that are stuck to the surface. So even though the three-dimensional flow is divergence-free, the two-dimensional system of particles behaves like a compressible system. Which is another way of saying: if I compute the three-dimensional divergence, it is zero; but if I compute the two-dimensional divergence, it is not zero. This is purely by fiat, because I said I am constraining the particles to live on the surface only. So I have the velocity field, I can compute the gradients, and from there I pull out the divergence. Okay. Once I have that: I collected data over something like 200 correlation times, what we call in turbulence large-eddy turnover times; it was many more than that, I can't remember the exact number now, but it took me something like seven months just to collect the data. Now, if I define the entropy through the local concentration of the particles, I can evolve this in time: I start at t = 0 and ask how the entropy changes with time. I can do that, or I can do it through the divergence.
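A minimal sketch of this Lagrangian bookkeeping: compute the two-dimensional divergence ∂u/∂x + ∂v/∂y on a grid, advect surrogate tracers through the field, and record the local divergence along each trajectory. The cellular velocity field below is a synthetic stand-in for the experimentally measured one, and every parameter is illustrative.

```python
import numpy as np

# Synthetic compressible 2-D surface field standing in for the measured
# velocity field (the real one came from particle tracking).
nx = ny = 128
L = 2 * np.pi
xg = np.linspace(0, L, nx, endpoint=False)
yg = np.linspace(0, L, ny, endpoint=False)
X, Y = np.meshgrid(xg, yg, indexing="ij")
u = np.sin(X) * np.cos(Y)          # has nonzero 2-D divergence
v = 0.3 * np.cos(X) * np.sin(Y)
dx = L / nx

# 2-D divergence du/dx + dv/dy by centered differences
# (one-sided at the grid edges).
div = np.gradient(u, dx, axis=0) + np.gradient(v, dx, axis=1)

def sample(field, x, y):
    """Nearest-grid-point sampling (bilinear would be better)."""
    i = np.mod(np.rint(x / dx).astype(int), nx)
    j = np.mod(np.rint(y / dx).astype(int), ny)
    return field[i, j]

# Advect tracers with forward Euler and record the divergence along
# each trajectory: the proxy for a local entropy production rate.
rng = np.random.default_rng(2)
npart, nsteps, dt = 500, 400, 0.02
x = rng.uniform(0, L, npart)
y = rng.uniform(0, L, npart)
div_along = np.zeros((nsteps, npart))
for t in range(nsteps):
    div_along[t] = sample(div, x, y)
    x = np.mod(x + dt * sample(u, x, y), L)
    y = np.mod(y + dt * sample(v, x, y), L)

# Tracers accumulate where the divergence is negative, so the mean
# divergence sampled along trajectories drifts below the area mean (~0).
print(div.mean(), div_along[-1].mean())
```

The clustering shows up as exactly the asymmetry described above: the Eulerian average of the divergence is zero, while the trajectory-sampled average is negative.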
This is directly giving me an entropy production rate. This is not related in any way to the thermodynamic entropy production rate, because, first of all, this is not a system close to thermal equilibrium where temperature is well defined; second, this is not a global measurement, I'm sampling locally; and third, I am looking at a real physical system as an instantiation of phase space, so I'm drawing a mathematical analogy. This entropy rate actually turns out to be more closely related to what we call the Kolmogorov-Sinai entropy rate; that is how the mathematics works out. I won't go into the details, but if you're curious, you can go and look it up. When we did this, we were able to plot this left-hand side and ask how the points fall. They fall on one straight line up to a certain point; then it deviates, and it deviates at a value of the divergence where my errors start blowing up, which means it is setting the limit on the tolerance of my measurement. So isn't that the point where your distribution becomes non-Gaussian? No: my distribution was strongly peaked and non-Gaussian. I don't have the connector, or I would have shown the plots, but this was the second experiment that had a strongly non-Gaussian distribution. We have to ask how many standard deviations out we are going, or rather, because we are dividing by the mean, how many mean shifts. We went six mean shifts out. It's not normally possible for experiments to go out to so many mean shifts; you need a really fat-tailed distribution, and a Gaussian wouldn't go that far. I'm sorry if I have confused you even more. Perhaps one thing that is worth discussing is these two things. One is: what are experiments testing?
Because if we derive something on the basis of laws about which we have no doubt whatsoever, one could ask: well, what exactly are you testing? Many of the experiments use very few particles in a bath, and there it is not very clear what they are testing, because nobody doubts Newton's equations or things like that. So some of the experiments are reproductions, exhibitions of a result that is not under doubt. It is not like trying to measure whether the top quark exists, where a priori you do not know if it exists; here, a priori you know in principle. So there has been a lot of experimental activity that was demonstrative more than experimental physics. No? Yes. And then the second thing is that it would be nice if you told people how disappointed you were about the usefulness of this theorem for turbulence. So, in the end, what did I learn about turbulence? I really did not learn much. I changed the Reynolds number and asked myself: does this change sigma in some way? Also, I could not understand why I get a very strongly non-Gaussian distribution, and why this whole machinery worked at all.
It was one simple-minded experimental PhD student dreaming: oh, I see these particles floating on a free surface, and they seem to correspond to what the mathematical physicists have said would be the distribution of phase points in phase space under strongly driven conditions. Who said science is done by analogy so easily? There was no reason for it to work, so it bothered me why it worked. I was hired for my postdoc at Los Alamos to work on granular systems, so I left turbulence disappointed, because I never got the feeling that I had solved something every time I did an experiment. At Los Alamos I was chatting with a good friend of mine, Colm Connaughton, who later went to Warwick mathematics. We got together; he did the simulation and I did the analysis of the data, and we finally understood why the distribution turned out to be so strongly non-Gaussian. And this is a point I forgot to mention. I said earlier that x(t), which is a proxy for the entropy rate, is rarely going to be Gaussian if it is something like dissipation or power. And yet most of the early experiments saw Gaussian fluctuations, and they were, not dismissed, but it did not settle the issue. So here is what we did. This was a simulation of turbulence in two dimensions; there is such a thing. The atmosphere is considered an approximation of a two-dimensional flow because its thickness is much smaller than its lateral extent. In experiments we do it in soap films, because the thickness of the soap film is of the order of ten microns, whereas the width and the length are of the order of a meter. So you have, let's say, a two-dimensional fluid plane, and you are somehow generating turbulence. There are two ways we generate turbulence in two-dimensional flows. One is what we call gravity-assisted soap films, where we basically take a nozzle, send down soap solution, and we
have two nylon wires that come down, hanging from a weight, the kind of weight you use in gyms for weight lifting, so the wires are under tension, and then I pull the two apart. So these are strings being pulled apart, and between them there is a soap film with the solution flowing down it. It is a quasi-two-dimensional soap film. Then you take a comb, the comb you use to comb your hair, and stick it into the film; if you stick it in from the front, you get turbulence downstream. Okay. The other way is through what we call electromagnetic forcing. What people do there is take a bunch of alternating magnets, so if one is pointing north the next points south, north, south, north, south, arranged in a square or hexagonal lattice, and you have a thin layer of salt solution, brine, floating on an even thinner layer of a dielectric fluid. Typically this dielectric fluid is the kind used in diffusion pumps in low-temperature physics. Why? Because you have to separate the metal bottom, where the magnets are, from the conducting fluid, which is the salt solution. Then you apply an AC current that keeps alternating. So if your magnetic field is pointing this way and the electric field is pointing that way, there is a Lorentz force acting some other way, and that starts churning the salt water and you get turbulence. That was the setup we simulated; we had it in the lab at Los Alamos, and we took that and did a simulation. What we measured was this. We assumed the force is Gaussian distributed; we put it in by fiat, we decided a priori that the force is Gaussian distributed, and we measured the velocity. So I evolve a particle that is going about in this 2D turbulent flow, and at every instant in time I measure it; I am sampling at equal intervals of time, but the sample points are not equally spaced, because sometimes the particle accelerates and sometimes it decelerates. So
they are not equally spaced in space. So I get two time series, f(t) and v(t), and Jorge has shown us that f dot v, the scalar product, divided by temperature would be, what? The entropy rate, right. But f dot v is power. So I have the power fluctuations, what we call Lagrangian power, since this is a Lagrangian trajectory, as we call it. So I am measuring the local power fluctuations in the 2D turbulent flow as I go along. Now, I have said that my force is normally distributed and my velocity is normally distributed, and I am plotting on a log scale so that a Gaussian looks like a parabola. If my force is normally distributed, my velocity is normally distributed, and I define p(t) as f(t) dotted with v(t) at the same time, then this is the product of two normally distributed random variables. What would that distribution be? We first figured that out and wrote a paper on its statistics. It turns out it had been worked out by a mathematician by the name of Craig, and it goes by the name of Craig's XY distribution. This distribution has some funny features. Here I call it p(t); I can equally call it z(t) = x y. First, it is very sharply peaked about zero; it is actually a logarithmically diverging cusp, so it is very strongly non-Gaussian. Second, the asymmetry, the skewness of this distribution, depends on the coefficient of instantaneous cross-correlation between the two quantities. It is possible to derive all these properties mathematically; he had shown this. So once we know we can control the asymmetry, it becomes a game for us to increase the probability of negative fluctuations relative to positive fluctuations. Why? Because even though on average I am always dumping energy into the system, so the mean of this quantity is positive,
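The key property can be checked in a few lines: for z = x y with x and y jointly Gaussian, the mean of z equals the cross-correlation coefficient, and the fraction of negative fluctuations shrinks as the correlation grows. A sketch with synthetic samples (the variable names are mine, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(1)

def product_samples(rho, n=1_000_000):
    """Draw z = x * y for jointly Gaussian x, y with correlation rho."""
    cov = [[1.0, rho], [rho, 1.0]]
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    return x * y

# E[x*y] = rho, so the sign asymmetry of the product distribution
# (Craig's XY distribution) is set by the instantaneous cross-correlation.
# The density itself is sharply peaked, with a logarithmic cusp at zero.
for rho in (0.0, 0.3, 0.6):
    z = product_samples(rho)
    print(rho, round(z.mean(), 2), round(np.mean(z < 0.0), 3))
```

Turning up rho plays the role of the experimental knob described here: at rho = 0 half of all fluctuations are negative, and the negative fraction drops steadily as the two signals become more correlated.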
this quantity can fluctuate positive and negative. And by controlling the instantaneous correlation between the two, by turning some knob, I am now able to increase the probability of viewing negative fluctuations. Because we knew the functional form of this distribution, thanks to Dr. Craig from 1936, we could actually calculate the large deviation function and predict what the entropy production rate would be, even though we were not working with an entropy rate to begin with, and even though we cannot define a temperature for this strongly driven system. And that is when I realized: oh, this is all falling out of statistics, so I am not learning anything about the physics. In a way I am learning something, as I pointed out, namely the time-reversal symmetry breaking and the degree to which it is broken, but beyond that I am not learning much more. That was the disappointing part. I'm sorry, I could have shown you the plots, but I don't have the adapter with me. Oh, we went longer than planned. Okay. I finally get it; I used to think, oh, I'm always in the info lab