...stop sharing. I will try to share now, if it works. Yes, yes, yes. So, Aljaž Godec from the Max Planck Institute for Biophysical Chemistry, and you will be talking about dimensionality in sample-path statistical mechanics. Thank you very much, Jan. Thank you also for the invitation; it was a pleasant surprise. I will apologize upfront, because I will skip a couple of slides: I thought I had 30 minutes, but I have only 20. So I invite you to ask me during the discussion about the slides I skip. Yeah, take your time; it will be extremely interesting, we are looking forward to it. Thank you very much. So, maybe I will give a little bit of background before I give you the outline of the lecture: why I chose this title, and why my talk has this specific structure. It was motivated by some recent referee reports we got, which were not negative at all, but which nevertheless raised a little bit of concern, and we were not fully able to understand them. Basically, what they were saying is that we should not phrase our results in higher dimensions, because most of the basic and fundamental physics can be learned from one-dimensional systems anyhow, and everything past that is basically just a blow-up and a complication of notation. So what I will try to do in my talk today is give some arguments that this is not necessarily the case, and highlight that one-dimensional systems, as much as we like them because we can solve and understand them in detail, are somewhat singular, and that we should not overrate the insight we get from them. This will be the outline of my talk. I am supposed to tell you something about the fundamentals, so I will start with the fundamentals of the fundamentals, then give a motivation, and then try to explain what I mean by sample-path statistical mechanics.
It is going to be strongly related to stochastic thermodynamics, but it is not going to be about thermodynamic potentials today, at least not mostly. Then I will give some facts about diffusion processes which you may know, or should know, or should just recall; these titles, the sections in black, are basically textbook material. Then I will have two sections where I present some results, or actually just highlight or briefly mention them. One is what I understand as a Green-Kubo principle for path observables, or a Green-Kubo principle for functionals, in the sense of what we can learn about the physics of dynamical fluctuations. In the final part I want to highlight that there is actually a fundamental need for spatial coarse-graining in statistical physics, and then I will finish. Okay. Since I understand that not everyone in the audience is an expert (Edgar mentioned that there are some students present), I wanted to give a little bit of a broader overview; it is going to be very superficial. What I mean is that in my universe I am describing systems that, as a whole, with all degrees of freedom, are mechanical: they obey Newton's equations of motion, or some fancy rewriting like Lagrange or Hamilton. So I have 2N degrees of freedom, and they evolve according to Newton's equations of motion. Of course, I cannot observe them all, and as it happens we often get lucky in that many of those degrees of freedom evolve much faster, so that we can effectively integrate them out and replace them with a friction term and a noise term whose amplitude scales with the square root of the friction; this acts like a thermostat. Of course, while we go from the deterministic description to the stochastic Langevin equation, we already lose some temporal resolution: we do not see anything faster than the degrees of freedom we ignored.
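The thermostat picture just described can be sketched numerically. This is my own minimal illustration, not code from the talk: the harmonic potential, all parameter values, and the Euler-Maruyama discretization are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of an underdamped Langevin thermostat in 1D:
#   m dv = (-U'(x) - gamma*v) dt + sqrt(2*gamma*kT) dW,   dx = v dt,
# with a harmonic potential U(x) = x^2/2 and illustrative parameters.
rng = np.random.default_rng(0)
gamma, kT, m, dt, n = 1.0, 1.0, 1.0, 1e-3, 1_000_000

x, v = 0.0, 0.0
v2_sum = 0.0
noise = np.sqrt(2 * gamma * kT / m * dt) * rng.standard_normal(n)
for i in range(n):
    v += (-x / m - gamma / m * v) * dt + noise[i]  # friction + noise = thermostat
    x += v * dt
    v2_sum += v * v

# Equipartition check: <v^2> should approach kT/m = 1.
print(v2_sum / n)
```

The friction and the noise amplitude are tied together (fluctuation-dissipation), which is what makes the fast, eliminated degrees of freedom act as a heat bath at temperature kT.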
Now, in addition to that, we are often also lucky in the sense that the momenta are much faster than the positions, so we can think of the momenta as relaxing infinitely fast. Then we get to overdamped Langevin equations, and this is the type of dynamics I will talk about. This is how we typically start: by assuming that this is some fundamental equation of motion. It is not; it is just fundamental on some relevant time scale and in a given setting. It is a stochastic differential equation for the positions only, and now we do not have forces anymore, but rather drifts. If the system has a stationary state, there is always a component deriving from the potential; this first part here is time-reversible, the time-reversible drift. Then we have something that drives the system out of equilibrium, and then again noise, scaled by the diffusion constant. And if we are lucky in the further sense that this potential contains many deep basins separated by high free-energy barriers, then on an even longer time scale, longer than the relaxation within these basins, we can actually obtain a Markov jump process.
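An overdamped equation with a reversible drift and an irreversible, detailed-balance-breaking drift can be integrated in a few lines. This is my own illustrative sketch, not code from the talk: I use a two-dimensional irreversible Ornstein-Uhlenbeck process, and the rotation strength a and all other parameters are arbitrary choices.

```python
import numpy as np

# Sketch of overdamped dynamics dx = F(x) dt + sqrt(2 D) dW in d = 2,
# with F = -grad U + F_irr, U(x) = |x|^2 / 2, and a divergence-free
# irreversible drift F_irr(x) = a * (-x2, x1). Parameters illustrative.
def simulate(n_steps=200_000, dt=1e-2, D=1.0, a=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros((n_steps + 1, 2))
    noise = np.sqrt(2 * D * dt) * rng.standard_normal((n_steps, 2))
    for i in range(n_steps):
        rev = -x[i]                              # time-reversible drift -grad U
        irr = a * np.array([-x[i, 1], x[i, 0]])  # breaks detailed balance
        x[i + 1] = x[i] + (rev + irr) * dt + noise[i]
    return x

traj = simulate()
# The rotation is everywhere orthogonal to grad U, so the invariant density
# stays the isotropic Gaussian: each coordinate has stationary variance D = 1.
print(traj[:, 0].var())
```

Because the irreversible part is orthogonal to the gradient of the potential, it leaves the invariant density untouched but generates a nonzero stationary current, which is exactly the situation discussed below.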
So, this Markov jump process then has M states, and the reason is that the generator of the overdamped dynamics has a gap after the first M low-lying eigenvalues; this means that the Markov jump process we observe is really just a process in the subspace spanned by the lowest-lying eigenvectors. The point I am trying to make is that as we go from the more fundamental Newton's equations of motion to the stochastic equations of motion, which we often assume as a starting point, there is necessarily a temporal resolution involved. So if I make a statement about underdamped or overdamped Langevin dynamics, I most likely also make a statement about Markov jump dynamics, but it does not work the other way around, at least not if I assume that the system as a whole is mechanical. Today I will stick to overdamped Langevin equations; I will make this more precise a little later.

Okay, so what I mean by path-based statistical mechanics is what has already been mentioned today. What I have in mind are experiments that generate trajectories of some process, which we record as a time series. Unless we have a superbly laborious PhD student or postdoc, we cannot generate a proper ensemble to do ensemble averaging; we cannot repeat the experiment 10 to 20 times or something like that. So we have a finite set of trajectories, typically of finite duration, and what we do is analyze the data by time-averaging. Because the trajectories we observe are stochastic, the time average is stochastic as well, and we are now interested in the sample-to-sample fluctuations of our time-averaged observables. What is often also involved is that experiments are, in general, not infinitely precise (this has also been mentioned a couple of times today), so there is some sort of coarse-graining, or a scale, involved in any natural experiment. Some experiments also record just a low-dimensional projection of the high-dimensional dynamics I was talking about before. This can be a one-dimensional observable, like the extension of a macromolecule observed by force spectroscopy or plasmon rulers. That really is a one-dimensional process, but you can imagine that it will not be Markovian; it will have memory, at least on reasonably long time scales. Questions like time-reversal symmetry and dissipation in the presence of such low-dimensional projections also become much more intricate and much deeper; I will not discuss that today. I just recall that in stochastic thermodynamics we assume that everything we are not observing is infinitely fast, at least as far as the full system is concerned; we assume the so-called local detailed balance paradigm. So when I talk about the dimensionality of the system, I am not referring to the dimensionality of the observable, but to the dimensionality of the full system, on top of which I can make some observation. I just wanted to clarify that.

Okay, so I will assume that I can find some overdamped stochastic representation of my full system's dynamics. It is d-dimensional in general, and it obeys this overdamped stochastic differential equation, in either form. Here F is the drift, which I assume is smooth, with bounded local variation, such that a unique solution of this differential equation exists; sigma is the noise intensity, which just scales the noise; and W(t) is a d-dimensional Wiener process. I will also assume that the drift is sufficiently confining, so that my system eventually reaches a stationary state. This stationary state has an invariant density p_s, and if it is out of equilibrium it also has an invariant current, which I denote by j_s. This invariant current has zero divergence; otherwise we could not have a stationary state. So if we are far from equilibrium, we need not only the invariant density to do the statistics, but also the invariant current. I will phrase the results in a slightly more general setting. You do not have to bother with that, but I declare it so that you will not be surprised: just for the sake of generality, my noise can be position-dependent. This changes nothing, except that I also assume the noise intensity varies sufficiently smoothly in space (otherwise you can ignore this part), and I need to take a little care about how to interpret the stochastic differential equation, but it is not really a big concern.

Okay, so as I said, we now need two things to describe the system: the invariant, or steady-state, density and the steady-state current. Solving for these is in general nearly impossible, and by nearly I mean fully impossible in general. But what we know, roughly, is the structure, or decomposition, of the drift field. If the system has a stationary state, there is always a time-reversible component of the drift, the part in blue, and something time-irreversible; whenever this part in red is nonzero, we have broken detailed balance. Here A is an antisymmetric matrix field, so the irreversible drift always has two components: one that is locally orthogonal to the gradient of the potential, and one that may also have a projection onto it. The spatial dependence of this antisymmetric field basically tells us whether the current is constant along the level sets of the invariant density, or whether it varies. The generator of the dynamics is the Fokker-Planck operator; I write it here in a form that separates the time-reversible part in blue from the time-irreversible part in red, together with its adjoint. The transition probability density G then tells us the probability density to transition from some state y to some state x in a time t, which is given by this propagator. And we know that unless the steady-state current, this irreversible component, vanishes, we do not have detailed balance, but we do have something called generalized time-reversal symmetry, or dual reversal symmetry. It basically states that the probability density to go from y to x in time t is the same as the probability density to make the reverse transition, provided we simultaneously also invert the irreversible part of the current. I will use this dual reversal symmetry later on, when I try to rationalize the Green-Kubo-type formulas. Finally, I assume the system is strongly ergodic; I assume it via a strong confinement condition, which basically says that after a finite but potentially large time, much longer than the longest relaxation time, the probability density function of the system is exponentially close to the invariant measure, irrespective of the initial condition.

Okay, so far these have all been ensemble quantities; this is what we cannot infer unless we have sufficiently many trajectories. So now the question is: what are the path-based estimators of this invariant density and current? We want to infer them from a finite number of individual trajectories, possibly even a single one, and of course we are not the first to ask this question. The answer comes in the form of the so-called empirical density and empirical current. The empirical density basically reflects the fraction of time the process x_tau spends in the vicinity of some point x, and the empirical current measures the time-averaged current through this point. The circle before dx here is just a Stratonovich differential, which should not be any concern; you can think of it as the correct way of writing an integral over the velocity when this velocity nowhere exists. And delta here is the Dirac delta function, which means we are thinking about a limit of some distribution, say a Gaussian, or anything that converges in that sense. What has been found is that these observables, when averaged over an ensemble of steady-state paths, that is, paths propagating from the steady-state distribution, on average give the steady-state density and steady-state current. This is what we would like to have. It was also found that in dimension one, when this x_tau is one-dimensional, the other moments are well behaved too, and there are actually no problems; we do not have to worry much. So we initially tried to figure out how to extend this to higher-dimensional systems, d larger than one, and we failed: we were simply not able to understand what these equations mean. We still cannot, but at least now we know why, and I will try to explain why there really is a conceptual part missing as we go from dimension one to higher dimensions.

I will now list a couple of textbook properties of diffusion in dimension one, and in dimensions above one. In dimension one, first of all, we know that overdamped diffusion processes are locally dominated by Brownian motion; this is a well-known fact. I will still assume confinement, so that my process is ergodic and settles into a stationary state. We know that, with probability one, a diffusion process in 1d hits any point starting from any initial condition; it actually hits this point infinitely often, and returns to it infinitely often, given large enough time. We know that along each individual trajectory the empirical density and current we infer converge almost surely to the steady-state density and the steady-state current. So this is all very nice. There also exists something called the correlation time: beyond a certain finite time scale, the values of these two observables at two different times are essentially uncorrelated, which is also nice, because we can use large-deviation principles and so on. But then we also know that the non-equilibrium steady state of such processes is trivial. First of all, it exists only with periodic boundary conditions; without periodic boundary conditions, a steady state without explicit time dependence in the driving cannot exist. The triviality comes from the fact that the gradient and the divergence are the same operation in 1d, so saying the current is divergence-free means it is constant; you can imagine that in higher dimensions this is not the case. Also, dynamical functionals such as the time-averaged current are trivial, because they are not really functionals: an antiderivative always exists, so they basically contain a part that depends only on the initial and final point, plus something that just reflects the number of windings, and all the path dependence sits in this winding number, which actually dominates as t goes to infinity. Now, in higher dimensions things are different. Diffusion never hits points: with probability zero it hits any given point, and with probability zero it returns to it. I will show later that these two observables, defined with a delta function, do not really seem to be mathematically well defined. There is no such thing as a correlation time. Non-equilibrium steady states are in general highly non-trivial, simply because there is a big difference between divergence and gradient, which opens up plenty of possibilities to generate non-trivial non-equilibrium steady states. And functionals are genuinely path-dependent; in general, antiderivatives do not exist. So in that sense I would really say that the d = 1 case is singular, and the insight we get from one-dimensional models, as much as we like and use them, may not be so representative. We should be forced to think about what happens in higher dimensions as well.

Okay, so now we return to this problem and try to define the empirical density and empirical current differently. Instead of putting in a delta function, we put in a smooth distribution, like a Gaussian, or any differentiable, square-integrable function that is translation-invariant, meaning the dependence on the centroid enters only as a difference, and that tends to a delta function in the limit where the parameter h, which measures the extent of the coarse-graining with this window function, as we call it, goes to zero. If all goes well in the analysis, nothing goes wrong as we take the limit h to zero; otherwise something can go wrong, as we will see. So what we are measuring here is basically the fraction of time spent in this window, averaged over the volume of the window, and the sum of the displacements through the window, where the displacement vectors span the entrance and exit points of the window and are added up along the trajectory. Okay, I will skip this side remark: one can also define this for other, more general types of currents and then prove thermodynamic uncertainty relations, but unfortunately I do not have time for that. So I will basically just illustrate what these observables look like. What I am showing you here is a two-dimensional irreversible Ornstein-Uhlenbeck process: there is a curl which drives the system, say clockwise, and I am showing you the statistics of the x-component of the current along this red line, defined with a Gaussian window. You see that initially it has a negative component, because the current goes to the left; as we move further along, it goes through zero, and as we continue it becomes positive. The blue line is the ensemble average of this component of the current, and the gray area is plus or minus one standard deviation.
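To make the window-function estimators concrete, here is a small sketch of the coarse-grained empirical current for a two-dimensional irreversible Ornstein-Uhlenbeck process with a Gaussian window. This is my own illustration, not code from the talk; the rotation strength a, the window width h, the probe point, and the (here counter-clockwise) drive are all illustrative choices.

```python
import numpy as np

# Simulate a 2D irreversible Ornstein-Uhlenbeck process
#   dx = (-x + a*(-x2, x1)) dt + sqrt(2 D) dW
# and estimate the time-averaged empirical current through a Gaussian
# window of width h centred at x0 (Stratonovich rule: the window is
# evaluated at the midpoints of the increments). Parameters illustrative.
def simulate(n=200_000, dt=1e-2, D=1.0, a=1.0, seed=2):
    rng = np.random.default_rng(seed)
    x = np.zeros((n + 1, 2))
    noise = np.sqrt(2 * D * dt) * rng.standard_normal((n, 2))
    for i in range(n):
        drift = -x[i] + a * np.array([-x[i, 1], x[i, 0]])
        x[i + 1] = x[i] + drift * dt + noise[i]
    return x

def windowed_current(traj, dt, x0, h):
    mid = 0.5 * (traj[1:] + traj[:-1])            # Stratonovich midpoints
    dx = traj[1:] - traj[:-1]                     # displacement vectors
    w = np.exp(-np.sum((mid - x0) ** 2, axis=1) / (2 * h ** 2))
    w /= 2 * np.pi * h ** 2                       # normalised Gaussian window
    T = dt * (len(traj) - 1)
    return (w[:, None] * dx).sum(axis=0) / T      # time-averaged current

traj = simulate()
j = windowed_current(traj, dt=1e-2, x0=np.array([1.0, 0.0]), h=0.3)
# With the counter-clockwise drive used here, the steady-state current
# at (1, 0) points in the +y direction, so j[1] should come out positive.
print(j)
```

Displacements are weighted by the window evaluated at the midpoint of each increment, which is the discrete counterpart of the Stratonovich integral in the definition of the empirical current.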
The gray area tells us basically how much we should worry, that is, how significant or how reliable our estimate from such a bundle of trajectories would be. Now we are interested in what happens when we take this h to zero. The first thing I can tell you is that if we define the empirical density and current with a window function, they almost surely converge, as t goes to infinity, to their steady-state versions along each individual path; the process really is strongly ergodic. The limits are the steady-state density and current integrated over the window function, and this holds for any positive h, that is, for an arbitrarily small window. The other thing we can show is that the continuity equation holds pathwise: there is a continuity equation connecting these two arbitrarily rough probability measures, with the spatial derivatives in the central position. So there is nothing spectacular there; it is just neat. But now we want to go higher: we want to look at higher moments, in particular covariances at two different points x and y, where a and b stand for either the empirical density or the empirical current. I will just show the results for an ensemble of steady-state initial conditions; the general initial condition can be found in the paper cited here. What I anticipate is the following: in statistical physics, whenever we look at variances and covariances of observables, we know that they are somehow related to time correlations, that is, time integrals of autocorrelation functions, via Green-Kubo formulas. So now we ask what the Green-Kubo analogue for a functional would be. By direct computation using stochastic calculus, one can isolate this Green-Kubo-type object: there are two integrals over time, and two integrals over space which basically localize our observables at two different points x and y, or actually in neighborhoods of size h around these points.

I will not talk about the density correlation function, because this was explained to a large extent already by Darling and Kac in the fifties, who already knew that one should not put in a delta function; I will just focus on the covariances and correlations involving the current. What pops up immediately is that these observables relate to time reversal quite naturally. In a sense, what the covariance between density and current measures is the Stratonovich increment along pinned paths, that is, paths that propagate between two fixed points in a given time: it involves the initial increment and the final increment along the time-reversed paths. These increments are basically sample-path representations of probability currents. In the case of current fluctuations, or current covariances, they are actually scalar products between these initial-point and end-point increments along pinned trajectories. And if you look at variances, when x equals y, what they measure are basically correlations between outgoing and incoming increments along loops of different lengths. So this is basically what is measured. But now the problem is that while the end-point increments are easy to calculate, because these are just probability currents, the initial ones are very hard, because the initial increment is conditioned on the end point, and this is a hard problem. There are several ways to evaluate such correlation functions, which I will skip due to time; I will just try to give you a physical intuition of how one can use dual reversal symmetry to evaluate them. So here we have an ensemble propagating from x to y; the mean path is denoted by this line, and what we are interested in is the initial-point current, conditioned on all trajectories hitting that point there. This is hard. We know that it is not the same as the final-point current of the time-reversed path ensemble, simply because the two are not the same; you see that the mean path is different, it goes the other way around. But we know that it can be inferred from the dual reversal: reversing time and inverting the irreversible part, which in this case is just the shear flow, gives this green arrow, and now we just need to flip it. You see that minus the green arrow is exactly the blue arrow, and this is how we can compute all those observables. It then turns out that these Green-Kubo formulas are really just correlations between outgoing and incoming probability currents along pinned paths. So this is basically the Green-Kubo principle here. It is slightly more complicated than that, but I cannot really go into the details; the point is that we can now relate fluctuations and correlations of these functionals to something that refers to time-reversal symmetry and its breaking. In particular, if the steady-state current vanishes, that is, if we have time-reversal symmetry, then at no time are there correlations between the current and the density; they vanish. Whenever this quantity is positive, we have broken detailed balance. So this is also the physics that we get from that. Okay, I will skip this example as well.

Now I come to the limit h to 0, which means I am taking the window to a delta function, and what we find is that the variance of both the density and the current diverges, for any time t. This is a problem, because it means that taking the limit h to 0 first and t to infinity later does not commute with taking the limit t to infinity first and h to 0 later; which means that we cannot, or must not, use a delta function in the initial definition. An analogous result holds at the level of level-2 large deviations of the empirical density. So far I considered the empirical density at a single point, but we can also consider the entire field, the empirical density everywhere, and compute it with level-2 large deviations; Hugo Touchette gave a beautiful lecture on Thursday about that, so I need not go into detail. With detailed balance, the rate function entering the large-deviation principle is given here by this integral, and one can use it to compute the L2 norm of the deviations of the empirical density from the invariant measure. These also diverge: so it does not just diverge at a single point, it also diverges if we consider the entire field. So now we are in a bit of a pickle, because we have something whose first moments converge, so we can take the limit, but whose second moment diverges. And if you think about what happens, almost surely each individual trajectory will give us a zero: we can show by numerical experiments, and also by theory, that for long enough trajectories the delta-function empirical density and current always go to zero. So we have a finite first moment, an infinite second moment, and all individual realizations zero. This might seem a bit odd, but it really is not. Here is a minimal example that explains it in terms of a Bernoulli-type observable. If I define a random variable such that with probability one over n it attains the value n, and with probability one minus one over n the value zero, then in the limit n to infinity this observable goes to zero almost surely, it has a finite mean equal to one, and it has infinite fluctuations. This just highlights what is well known in probability theory: there are very different types of convergence, in the mean, in probability, in distribution, and so on, which one now also has to consider while doing physics; it is not just something so that we can sleep better. Okay, I will skip this as well.

Now I come to the final part, which is how one can use the flexibility we gain from the possibility to coarse-grain our observables, instead of merely understanding it as a necessity: we use the thermodynamic uncertainty relation to infer a bound on the dissipation, denoted here by sigma. The thermodynamic uncertainty relation, in its vanilla steady-state form, tells us that the variance of some observed current divided by the squared mean current is always bigger than two over t times the steady-state dissipation. What we do now is choose the empirical current defined with a finite window, and consider the left-hand side of the thermodynamic uncertainty relation as a function of the coarse-graining scale. We know that as h goes to zero the variance goes to infinity, so the bound becomes poor, and as h goes to infinity the mean goes to zero, so the left-hand side again becomes poor. This means there must be a sweet spot in between, and indeed there is: there can be many local minima, but in general there is always at least one sweet spot where, just by coarse-graining the spatial resolution, we can make the thermodynamic uncertainty relation sharper. This then allows us to infer a tighter bound on the steady-state dissipation just by coarse-graining; we do not have to repeat the experiment or anything. And this brings me to my end; I am already a couple of minutes over, so I apologize. I tried to convince you that a path-based statistical physics does require coarse-graining in dimensions greater than one, but not in one; so there is one difference, quite a big one, I would say. I tried to explain, or highlight, that fluctuations of those observables encode information about time-reversal symmetry and its breaking, in terms of Green-Kubo formulas. And my main case was that although the one-dimensional case is practical and intuitive, and we like it and always use it, we should not overrate the insight we can get from one-dimensional examples only; there is much more physics in higher dimensions. As an outlook, of course, we now need to do this for underdamped dynamics and for functionals of projected, non-Markovian paths. With this I would like to thank the funding agencies for generous funding, and you for your attention. I also briefly mention that there are positions available, so if someone is interested, please drop me an email. Thank you very much. Thanks a lot, Aljaž, for the talk. I switched the chair with Jan; he had to leave.
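The coarse-graining sweep described near the end of the talk (scanning the window width h and evaluating the thermodynamic uncertainty relation) can be sketched as follows. This is my own illustration with arbitrary parameters, not the speaker's code: I use an ensemble of trajectories of a two-dimensional irreversible Ornstein-Uhlenbeck process and evaluate the TUR-inferred lower bound on the dissipation rate, sigma >= 2 <J>^2 / (T Var J), for each h.

```python
import numpy as np

# Ensemble of M trajectories of dx = (-x + a*(-x2, x1)) dt + sqrt(2 D) dW,
# vectorised over the ensemble and started in the stationary Gaussian.
M, n, dt, D, a = 200, 4000, 1e-2, 1.0, 1.0
rng = np.random.default_rng(3)
x = rng.standard_normal((M, 2))            # stationary initial conditions
traj = np.empty((n + 1, M, 2))
traj[0] = x
for i in range(n):
    drift = np.stack([-x[:, 0] - a * x[:, 1], -x[:, 1] + a * x[:, 0]], axis=1)
    x = x + drift * dt + np.sqrt(2 * D * dt) * rng.standard_normal((M, 2))
    traj[i + 1] = x

T = n * dt
mid = 0.5 * (traj[1:] + traj[:-1])         # Stratonovich midpoints
dx = traj[1:] - traj[:-1]
x0 = np.array([1.0, 0.0])                  # illustrative probe point

bounds = {}
for h in (0.05, 0.1, 0.2, 0.4, 0.8, 1.6):
    w = np.exp(-np.sum((mid - x0) ** 2, axis=2) / (2 * h ** 2))
    w /= 2 * np.pi * h ** 2                # normalised Gaussian window
    J = (w * dx[..., 1]).sum(axis=0) / T   # windowed y-current per trajectory
    # TUR: Var(J)/<J>^2 >= 2/(T*sigma)  =>  sigma >= 2 <J>^2 / (T Var(J))
    bounds[h] = 2 * J.mean() ** 2 / (T * J.var())

best_h = max(bounds, key=bounds.get)
# The inferred bound is poor for very small h (variance blows up) and for
# very large h (mean current washes out), with a sweet spot in between.
print(best_h, bounds[best_h])
```

The sweep makes the qualitative argument of the talk visible: the inferred lower bound on the dissipation is maximised at an intermediate coarse-graining scale, without repeating the experiment.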