So, Friday morning. The first talk of today is "Gradient flow structures and functional inequalities for quantum evolution equations with detailed balance" by Jan Maas.

Thanks a lot. First of all, I would like to thank the organizers for organizing this very nice event and for the invitation. What I would like to present today is joint work with Eric Carlen. I want to discuss some problems coming from quantum evolution equations, and to show that we can actually use ideas and techniques which have been very successful in the field of optimal transport, and apply them in this quantum setting.

So let me start at the beginning, with the introduction. The starting point of the story is around the year 2000, when it was discovered by Jordan, Kinderlehrer and Otto that you can look at diffusion equations in a new way, namely as gradient flows in the space of probability measures endowed with the Wasserstein metric, where the driving functional is the Boltzmann entropy. So this is the discovery of Jordan, Kinderlehrer and Otto, who showed that the Fokker-Planck equation, the PDE

  ∂t μ = Δμ + div(μ ∇V),

is the gradient flow of the relative entropy functional, in this case the functional which maps μ to the relative entropy of μ with respect to ν,

  Ent(μ | ν) = ∫ (dμ/dx) log(dμ/dx) dx + ∫ V(x) dμ(x),

so the ρ log ρ part that we saw, plus a potential energy term, where ν is the normalized measure e^{−V(x)} dx. [Audience: Jan, you may want to write slightly bigger. — Okay, I'll do that.] So it is the gradient flow in the space of probability measures on R^n endowed with the Wasserstein metric W2. So what is the Wasserstein metric?
It is the distance on the space of probability measures defined in terms of an optimal transport problem:

  W2(μ, ν)² = inf_γ ∫ |x − y|² dγ(x, y),

where you minimize over all transport plans γ, that is, all probability measures on the product space whose marginals are μ and ν. So that is the Wasserstein distance.

Now what does the statement actually mean? I am speaking about a gradient flow in the space of probability measures, which is not a Riemannian manifold, so it is not a priori obvious what it means to be a gradient flow in this setting. In fact this statement can be interpreted at different levels, but one way to do it is to say that the equation can be written in the form

  ∂t μ = −K(μ) DEnt(μ | ν),

where K(μ) is, in some sense, the inverse of a Riemannian metric, and is given by

  K(μ) ψ = −div(μ ∇ψ)

when applied to a function ψ. And what is the differential of the entropy? If you just differentiate, you get DEnt(μ | ν) = 1 + log(dμ/dx) + V(x). Now if you do the computation, which is two lines — apply K(μ) to this function — you see that this equation gives you precisely the Fokker-Planck equation. So this is the gradient flow form of the Fokker-Planck equation.

Okay, and what does this have to do with the optimal transport metric? Well, if you compute the Riemannian distance associated to this operator K, then you find the following expression.
The Riemannian distance associated to K, so to the metric defined by this formula, is given by

  d(μ, ν)² = inf ∫₀¹ ∫ |∇ψ_t|² dμ_t dt,

where I minimize over all curves (μ_t) in the space of probability measures and over all functions (ψ_t) such that the continuity equation

  ∂t μ_t + div(μ_t ∇ψ_t) = 0

holds, with the boundary conditions μ at time zero equal to μ and μ at time one equal to ν. And the point is that this expression gives you precisely the Wasserstein distance: by the Benamou-Brenier formula it equals W2(μ, ν)². So that, in a nutshell and at the formal level, is the gradient flow structure of the Fokker-Planck equation in the Wasserstein space.

What I would like to show in this talk is that there is a very similar structure in a setting of quantum mechanical problems, where we do not work in a space of probability measures but in a space of matrices — namely density matrices, which are non-negative and have trace one. So let me set up the quantum setting. Here we work in a Hilbert space H, and for the purpose of this talk I will assume that this Hilbert space is finite dimensional. Many of the essential ideas are already present in the finite dimensional setting, and everything I am going to say is rigorous there; the infinite dimensional setting is work in progress — many things go through, but it is not written yet. Okay, so we have a finite dimensional Hilbert space, and now the role of probability measures is taken over by density matrices, so I am going to define the set D.
These are the bounded operators on our Hilbert space which are positive semi-definite and have trace one. If you think of the trace as the non-commutative analog of the integral, then you see that this is really the direct analog of being a probability density.

Okay, so what is the equation that we are interested in? We are not interested in the Fokker-Planck equation, but there is an analog in the quantum setting: an equation that describes dissipative quantum systems, or open quantum systems, which are quantum systems that interact with an environment. They satisfy the Schrödinger equation, but there is interaction between the system and the environment, and we are interested in the equation for the system itself, which has a particular form. The equation we are interested in is the following Lindblad equation, which is now an equation in the space of density matrices. Let me just write the equation and then explain what is written:

  ∂t ρ = −i[H, ρ] + Σ_j ( [V_j ρ, V_j*] + [V_j, ρ V_j*] ).
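As an aside (not part of the talk), the two structural properties discussed next — trace preservation and positivity — can be observed numerically. Below is a minimal sketch, assuming a hypothetical qubit example with Hamiltonian σ_z and a single lowering jump operator; the dissipative part is coded exactly in the commutator form written above.

```python
import numpy as np

def lindblad_rhs(rho, H, Vs):
    # dρ/dt = -i[H,ρ] + Σ_j ([V_j ρ, V_j*] + [V_j, ρ V_j*])
    out = -1j * (H @ rho - rho @ H)
    for V in Vs:
        Vd = V.conj().T
        out += V @ rho @ Vd - Vd @ V @ rho + V @ rho @ Vd - rho @ Vd @ V
    return out

def evolve(rho, H, Vs, T=2.0, steps=4000):
    # classical 4th-order Runge-Kutta integration of the Lindblad equation
    dt = T / steps
    for _ in range(steps):
        k1 = lindblad_rhs(rho, H, Vs)
        k2 = lindblad_rhs(rho + 0.5 * dt * k1, H, Vs)
        k3 = lindblad_rhs(rho + 0.5 * dt * k2, H, Vs)
        k4 = lindblad_rhs(rho + dt * k3, H, Vs)
        rho = rho + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return rho

# hypothetical example: a qubit with Hamiltonian sigma_z and one jump operator
H = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
V = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)   # lowering operator
rho0 = 0.5 * np.ones((2, 2), dtype=complex)             # pure state |+><+|
rho_T = evolve(rho0, H, [V])
```

In this illustration the evolution keeps the trace equal to one, keeps ρ positive semi-definite, and drives the qubit toward the ground state.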
So this is the equation that we are interested in, and it is, at least in this finite dimensional setting, the general form of an evolution equation which is Markovian — there are no memory effects — and which has two important properties. First, the equation is trace preserving, so the trace is conserved. Second, it preserves positivity, so in particular we do not leave the space of density matrices. In fact it has an even stronger property: it is completely positive, which means that if I tensor the corresponding semigroup with the identity, then it still preserves positivity; this is a strengthening of the usual notion of positivity. So this is the equation I am interested in, and it is the general form of a Markovian quantum evolution that is trace preserving and completely positive. That is a classical theorem from the 1970s, due to Gorini, Kossakowski and Sudarshan, and to Lindblad, which holds in finite dimension and also in some infinite dimensional cases; in any case, in many infinite dimensional examples the generator that people study is exactly of this form.

Here H is the Hamiltonian of the system, a self-adjoint operator on the Hilbert space. The first term, −i[H, ρ], is the Hamiltonian part; it really corresponds to the Schrödinger equation — if only this term were present, the equation would be equivalent to the Schrödinger equation. The second part describes the dissipation, and the V_j are operators on the Hilbert space; in finite dimensions they are bounded, in infinite dimensions they might be unbounded.

Now there is a result, also from the 1970s, by Spohn (1978), which says the following. Suppose that σ is a density matrix which is invariant, or stationary, for this system, and look at the relative entropy with respect to this state. So what is the relative entropy?
It is given by

  Ent(ρ | σ) = Tr(ρ log ρ) − Tr(ρ log σ),

which is really the analog of the usual relative entropy, but now in the quantum setting, where the integral is replaced by the trace, and where expressions like ρ log ρ should be interpreted in the sense of functional calculus, or spectral theory. It was shown by Spohn that the relative entropy, with this sign convention, decreases along the evolution: the time derivative of the relative entropy is non-positive. So there is an entropy structure for this equation.

Okay, so now the natural question is whether there is a stronger property, just as in the commutative setting, where we even had the fact that the equation is the gradient flow of the entropy. The question is very natural whether, under suitable conditions, these Lindblad equations are also gradient flows of the entropy in some sense. What is the right Riemannian structure to put on density matrices so that we get an analog of this result? What is the right analog of the Wasserstein metric in this setting of density matrices?

Before coming to that, let me write down one example that we are interested in, which is sometimes called the quantum Ornstein-Uhlenbeck semigroup. Take a single operator V which satisfies the canonical commutation relation

  [V, V*] = 1,

the canonical commutation relation for just one operator. You can realize this explicitly by taking the following Hilbert space: let H be L² over R endowed with the Gaussian measure γ, and let V be the derivative d/dx. Then you can check that its adjoint in L² with respect to the Gaussian is V* = x − d/dx, and you can check that this pair satisfies the relation.
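A small aside (not in the talk): the relation [V, V*] = 1 has no finite dimensional realization — taking the trace of both sides would give 0 = dim H — but one can see it emerge in a truncated harmonic-oscillator basis, where V acts as the annihilation operator. A sketch, with the truncation size N an arbitrary illustrative choice:

```python
import numpy as np

N = 8
# Truncated annihilation operator a|n> = sqrt(n)|n-1> on span{|0>, ..., |N-1>};
# in the Gaussian realization of the talk, V = d/dx acts this way on Hermite functions.
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
comm = a @ a.conj().T - a.conj().T @ a

# [a, a*] equals the identity except in the bottom-right corner, a truncation
# artifact that keeps the trace of the commutator equal to zero.
```

So the commutation relation holds on the first N − 1 basis vectors, and the defect sits entirely in the last one.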
Then the operator that we are interested in, which I denote by L†, is

  L†(ρ) = ½ e^{β/2} ( [V, ρ V*] + [V ρ, V*] ) + ½ e^{−β/2} ( [V*, ρ V] + [V* ρ, V] ),

so an expression of the earlier type with weight e^{β/2}, plus a similar term with e^{−β/2} where V and V* are exchanged. This is one of the simplest examples of this type, although it is an infinite dimensional one. What you can check in this example is that the invariant state is explicit, and it is given by

  σ = Z^{−1} e^{−β H},  where H = V* V,

Z being a normalizing constant. In the Gaussian realization, −H is Δ − x ∂x, the classical Ornstein-Uhlenbeck operator.

Now for this operator there was a conjecture, contained in a paper by Huber, König and Vershynina that appeared on the arXiv in the summer of last year, around June 2016. They conjectured the following entropy decay along the evolution: with (P_t) the semigroup associated to this system, the relative entropy decays exponentially with a rate which I call λ_β, and they conjectured that λ_β is the hyperbolic sine of β/2. This conjecture came from the fact that they could compute the time evolution explicitly for certain states; for those states they got exactly this decay, and they conjectured that it was optimal. What you see here — as we also said many times during this week — is the decay of the entropy which would be equivalent to a logarithmic Sobolev inequality, an entropy-entropy dissipation inequality. So in some sense the conjecture is that the system satisfies a log-Sobolev inequality with this constant. And what I would like to show during this talk is that
this is true. That is what we proved with Eric Carlen, and we really proved this fact using the gradient flow structure for the Lindblad equation. So the theorem, which we posted on the arXiv shortly afterwards, in September 2016, is that the conjecture is true.

Okay, so let me now go back to the general setting of these equations, and let me try to explain what this gradient structure looks like — in particular, what is the metric which takes over the role of the Wasserstein metric for this kind of system. To do that I need to make one assumption, because also in the classical case it is not true that every Fokker-Planck equation is a gradient flow: it is only true if the drift is the gradient of a potential. So there is a symmetry condition there — the operator should be self-adjoint in L² with respect to the invariant measure — and that is exactly the assumption that we also need to make in the quantum setting. So I make the general assumption, from now on, that we not only have an invariant state, but that it satisfies the stronger property of detailed balance: σ is a density matrix satisfying detailed balance. What does that mean? Let me give the dissipative part of the evolution a name, L†, so this is L†(ρ), and now I can consider the adjoint L of L† with respect to trace duality. What I assume is that this operator L is self-adjoint with respect to the following scalar product:

  ⟨A, B⟩_σ = Tr(σ A* B).

This is the quantum analog of the fact that the generator of the Fokker-Planck equation is self-adjoint in L² with respect to the invariant measure. Actually, under this assumption I can say a little bit more about the form of the dissipative term; namely, it has the following structure. If σ
satisfies detailed balance, then I can write the operator L† in the following form:

  L†(ρ) = Σ_j e^{−ω_j/2} ( [V_j ρ, V_j*] + [V_j, ρ V_j*] ),

with coefficients e^{−ω_j/2} satisfying the following. First of all, the self-adjointness implies that the V_j come in pairs: if V_j is contained in the family, then so is its adjoint, so the collection {V_j} equals the collection {V_j*}. Moreover, I have the identity

  [V_j, log σ] = ω_j V_j.

So there is a strong structure behind this, which tells you that there is really a symmetry in the system, and there is this explicit relation between the V_j and the invariant state. This is the general form of the generators that I am going to work with, so let me erase the classical part. The question is: under this detailed balance assumption, can we write the generator over there in gradient form? Can we write our operator L† as

  L†(ρ) = −K(ρ) DEnt(ρ | σ)

for a suitable operator K(ρ) on density matrices? Let me explain how this works in the simplest possible setting, which is when the invariant state is just a multiple of the identity; if you want, this is the analog of saying that the invariant measure is just the Lebesgue measure. So assume that σ is the identity — we do not need this, but the formulas are considerably easier if we assume it. Then it turns out that we can write the generator in divergence form. The operator L†, which is now self-adjoint with respect to the usual trace duality, so L† = L, can be written as

  L = −Σ_j ∂_j† ∂_j,

where the partial derivatives are just the commutators: ∂_j(A) = [V_j, A], and ∂_j† is the adjoint, ∂_j†(A) = [V_j*, A]. So in some sense, if you view these commutators as non-commutative analogs of partial derivatives, you can really write the generator as a Laplacian.

Okay, that is useful, because it already gives you some analogy to the classical setting, but now we need to go one step further. What did we do in the classical setting to write the Fokker-Planck equation as a gradient flow? Let me write it here again for the heat equation: you write it as a continuity equation,

  ∂t ρ + div(ρ ∇ψ) = 0,  where ψ = −log ρ.

This is also what we would like to do in the quantum setting. So let me try to define the operator K as

  K(ρ) A = Σ_j ∂_j†( ρ · ∂_j A ),

where ρ · ∂_j A is now a product of matrices; this would be the direct analog of what we saw before. But now you see that in this quantum setting there is a bit more freedom, because we would like to multiply our density matrix ρ with another matrix, and of course if you multiply matrices there are different ways to do it: I can multiply from the left, I can multiply from the right, or maybe I can take some sort of mixture of the two. [In response to a question: yes, if I write it like this it is a commutator; one could write it in a different form involving anticommutators, but I think this formula is correct.] So there is a freedom here that we do not see in the classical setting, because we can multiply in different ways, and the question
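Before turning to the choice of multiplication, an aside: the divergence-form identity above, Σ_j ([V_j ρ, V_j*] + [V_j, ρ V_j*]) = −Σ_j ∂_j†∂_j(ρ) for a family {V_j} closed under adjoints, can be checked numerically. A sketch with arbitrary random matrices (σ = I case, so all ω_j = 0):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
V = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
rho = B @ B.conj().T                      # positive semi-definite
rho /= np.trace(rho).real                 # normalize to a density matrix

comm = lambda X, Y: X @ Y - Y @ X
Vs = [V, V.conj().T]                      # family closed under adjoints

# dissipative generator in the commutator (Lindblad) form ...
lind = sum(comm(Vj @ rho, Vj.conj().T) + comm(Vj, rho @ Vj.conj().T) for Vj in Vs)
# ... equals minus the non-commutative Laplacian with derivatives d_j = [V_j, .]
lap = -sum(comm(Vj.conj().T, comm(Vj, rho)) for Vj in Vs)
```

Note that the identity uses the sum over the adjoint-closed family; for a single V_j the two expressions differ.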
is: what is the right way to multiply? Now look again at the commutative setting. For the heat equation, to write it as a continuity equation we used a very simple trick, namely the chain rule: if you check that the two forms agree, what you do is compute ρ ∇(log ρ), which gives you ∇ρ, and then everything matches. Of course this is elementary, but it is exactly what does not work as easily in the non-commutative setting. So we need some sort of chain rule: we want

  ρ · ∂_j(log ρ) = ∂_j(ρ),

where this dot is some multiplication that I still need to specify, because then the algebra goes through. So now the question is: what is the right multiplication which gives me such a chain rule? Let me show you how to get that, which is a well-known thing. I need this chain rule for these commutators, so how do I get it? The point is that the commutators satisfy a Leibniz rule,

  ∂_j(AB) = A ∂_j(B) + ∂_j(A) B,

which you can check for commutators, so in this sense they behave like derivatives. In particular, by iterating that formula, I get

  ∂_j(Aⁿ) = Σ_{k=0}^{n−1} A^{n−1−k} ∂_j(A) A^k.

Now make a substitution: if I substitute A = ρ^{1/n}, I get

  ∂_j(ρ) = Σ_{k=0}^{n−1} ρ^{1−(k+1)/n} ∂_j(ρ^{1/n}) ρ^{k/n}.

So this is a formula for ∂_j(ρ) which holds for all n, obtained just from the Leibniz rule. Once you have this, you can pass to the limit n → ∞: since n ∂_j(ρ^{1/n}) → ∂_j(log ρ), you can recognize a Riemann sum, and the expression converges to the integral

  ∂_j(ρ) = ∫₀¹ ρ^{1−s} ∂_j(log ρ) ρ^s ds.

So you see that we got a chain rule, at least for the logarithm function, just from the Leibniz rule, and this is exactly what we needed. So now I can define the right multiplication to be used in this Riemannian metric, which is exactly the multiplication that gives this formula:

  ρ · B = ∫₀¹ ρ^{1−s} B ρ^s ds.

Okay, and then the result is the following. Assume that σ satisfies detailed balance. Then the Lindblad equation ∂t ρ = L†(ρ) is the gradient flow of the relative entropy Ent(· | σ) with respect to the Riemannian metric induced by this operator K. So that is the gradient flow structure, which is really the quantum analog of the Jordan-Kinderlehrer-Otto theorem that I started with. This was obtained in joint work with Eric Carlen; independently, the result was also obtained by Alexander Mielke and Markus Mittenzweig, and there is also closely related work by Chen, Georgiou and Tannenbaum. Actually, for simplicity I formulated the result in the case where σ is the identity, but the result holds true in general — the formula then looks a little bit more complicated, and I will skip that for the purpose of this talk.

So that is the gradient flow structure. Let me spend the rest of the talk trying to explain how we actually obtained the proof of the conjecture. The proof of the logarithmic Sobolev inequality that we obtained in this setting follows the strategy of a classical proof, but some interesting additional ingredients come in.
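As a numerical aside (not in the talk), the non-commutative chain rule just derived, ∫₀¹ ρ^{1−s} [V, log ρ] ρ^s ds = [V, ρ], can be verified directly. A sketch with an arbitrary random density matrix and Gauss-Legendre quadrature in s:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
rho = A @ A.conj().T + np.eye(n)          # positive definite
rho /= np.trace(rho).real                 # normalize to a density matrix
V = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

w, U = np.linalg.eigh(rho)                # functional calculus via spectral theorem
rho_pow = lambda s: (U * w**s) @ U.conj().T
log_rho = (U * np.log(w)) @ U.conj().T
comm = lambda X, Y: X @ Y - Y @ X

# Gauss-Legendre quadrature of the integral over s in [0, 1]
nodes, weights = np.polynomial.legendre.leggauss(40)
s, wq = 0.5 * (nodes + 1.0), 0.5 * weights
integral = sum(wi * rho_pow(1 - si) @ comm(V, log_rho) @ rho_pow(si)
               for si, wi in zip(s, wq))
```

The quadrature reproduces the commutator [V, ρ] to high accuracy, which is exactly the statement that the dot multiplication inverts the chain rule.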
So let me give you what is, as far as I know, probably the simplest proof of the classical log-Sobolev inequality for the Gaussian measure, and then I will explain how we adapt it to the quantum setting. Recall the classical Gaussian log-Sobolev inequality:

  H_γ(f) ≤ ½ I_γ(f),

where H_γ(f) = ∫ f log f dγ is the relative entropy with respect to the Gaussian, and I_γ(f) = ∫ |∇f|²/f dγ is the Fisher information, for probability densities f with respect to the Gaussian. Let me give you, in a few lines, the proof, which I think is due to Michel Ledoux. It goes as follows. You look at the Fisher information along the semigroup: with (P_t) the Ornstein-Uhlenbeck semigroup,

  I_γ(P_t f) = ∫ |∇P_t f|² / (P_t f) dγ.

Then you use the fact that for the Ornstein-Uhlenbeck semigroup the gradient and the semigroup commute up to a factor e^{−t}:

  ∇P_t f = e^{−t} P_t ∇f,

so this equals e^{−2t} ∫ |P_t ∇f|² / (P_t f) dγ, where P_t acts componentwise on the gradient. This is a very specific identity for the Ornstein-Uhlenbeck semigroup. The next step is to observe that you now have P_t applied to something both in the numerator and in the denominator, that P_t is given by integration against a kernel, and that the function (a, b) ↦ |a|²/b is jointly convex. So you can apply Jensen's inequality to move the semigroup outside of this convex function, and bound the expression by

  e^{−2t} ∫ P_t( |∇f|²/f ) dγ.

Finally, the Gaussian measure is invariant for the semigroup, so I can get rid of the P_t, and this is just e^{−2t} times the Fisher information:

  I_γ(P_t f) ≤ e^{−2t} I_γ(f).

So you get exponential decay of the Fisher information, and because the Fisher information is minus the derivative of the entropy along the Ornstein-Uhlenbeck flow, this gives you the same exponential decay for the relative entropy, and integration yields the log-Sobolev inequality:

  H_γ(f) = ∫₀^∞ I_γ(P_t f) dt ≤ ∫₀^∞ e^{−2t} I_γ(f) dt = ½ I_γ(f).

Okay, so that is a very short proof of the log-Sobolev inequality for the Gaussian measure, and the fact is that we can actually transfer this proof to the quantum setting. You see that two steps were essential in this computation. The first is the intertwining relation between the semigroup and the gradients, and it is just a direct computation to show that the same intertwining relation, with a different constant, also holds in our example. The second crucial step is that we could apply Jensen's inequality, exploiting the joint convexity of the function |a|²/b, and for this we also need a counterpart in the quantum setting. Fortunately there is such a thing — a beautiful matrix version of this convexity, which tells you the following. If you take a triple of matrices (A, R, S), with R and S positive definite, then the quantity

  ∫₀^∞ Tr[ A* (t + R)^{−1} A (t + S)^{−1} ] dt

is jointly convex in (A, R, S). And what does this have to do with the scalar fact?
Well, if R and S are the same, and if everything were scalar, you would get precisely that statement — so this is really a generalization of the convexity of (a, b) ↦ |a|²/b. And the reason why this particular expression shows up is that the operation which maps a matrix A to this quantity is, in some sense, the inverse of the multiplication ρ · B = ∫₀¹ ρ^{1−s} B ρ^s ds; that is why it appears in the computation. So with this analog of the convexity statement we can follow the lines of the classical proof and obtain a proof of the conjecture, with the sharp constant.

So, to summarize: there is now a very natural analog of the Wasserstein metric which really allows you to obtain these gradient flow structures, and it allows you to obtain functional inequalities — one can also get Talagrand inequalities — so I think there is quite a lot of potential for using tools from optimal transport also in this setting, which we are both exploring. I think that's it. Thank you.