Ready to start again? Okay, so it's a pleasure for me to introduce Ludwig Hruza, who is a PhD student at the École Normale Supérieure in Paris, and who will give us an introduction to free probability and tell us how it emerges from mesoscopic quantum systems, so from quantum dynamics out of equilibrium, as we will see. So please, Ludwig.

Yeah, thank you. First of all, can everyone hear me, or do I need to talk louder or less loud? Good, okay, great. So I will tell you how free probability appears in noisy mesoscopic quantum systems. Of course this title is more sketchy than what it actually is, so maybe an experimental physicist working on mesoscopic systems would not agree, but I hope it still comes close. My talk is based on a paper we wrote recently, called "Coherent fluctuations in noisy mesoscopic systems, the open quantum SSEP and free probability". We will first discuss free probability, which takes up most of the talk, and then I will show you a little bit of the physical application, because I guess most of you are more interested in the mathematical side of free probability.

So let's start with an introduction to free probability, and I'll begin by recalling what it means that two random variables are independent. It means that the joint moments factorize: the expectation value of x^n y^m equals the product of the individual moments E[x^n] and E[y^m], for all n and m. In that sense you can see independence in classical probability theory as a rule to obtain the joint moments from the individual moments. Knowing the individual moments of independent variables is enough to characterize the expectation value of any product of those variables, and this is something we would also like to do in the case of non-commuting variables.

So let's explore a little bit where this leads us if we keep looking for such a rule. Take as an example two random matrices A and B of size N, and since we know what independence means, let's take their entries to be mutually independent. Then we would like to do the same thing: obtain the joint moments from the individual moments. First we could write the expectation value of A times B, and since A and B have independent entries this factorizes. But then you write something like the expectation value of ABAB, and it is much less clear how to express this in terms of only E[A], E[A^2] and so on, and E[B], E[B^2] and so on. Of course you can write it in terms of the moments of the individual entries, but we would like to stay at the level of the matrix.

Now you could object that the plain expectation value is not a very good measure, because it just returns a matrix again. Something more reasonable is a C-valued expectation value φ, where we first take the trace of the matrix and then the expectation value, so that we end up with a complex number. Again we can ask ourselves: is the notion of independence from classical probability theory enough to recover all the joint moments, now with respect to this new functional φ, from the individual moments?
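To have both statements in front of us in formulas (this is just what is on the slides, with φ denoting the trace state):

```latex
% Classical independence as a rule for joint moments:
\[
  \mathbb{E}[x^n y^m] \;=\; \mathbb{E}[x^n]\,\mathbb{E}[y^m]
  \qquad \text{for all } n, m \ge 0 .
\]
% The C-valued expectation for N x N random matrices:
\[
  \varphi(M) \;:=\; \mathbb{E}\!\left[\tfrac{1}{N}\,\mathrm{tr}\,M\right].
\]
```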
Let's test it: take φ(AB) with A and B two-by-two matrices. You can calculate these entries, and you find that, in general, the condition that the entries of A are independent from the entries of B is not enough to ensure that φ(AB) factorizes into the individual moments. The idea of free probability theory to solve this issue is to replace the notion of independence by something called freeness, and the definition of freeness, which I will show you in a minute, is constructed precisely so that the joint moments of free variables can be obtained from the individual moments of these variables. Okay, any questions so far?

So here is the definition of freeness. We are given two non-commuting variables in some abstract algebra; I chose the letter M to be reminiscent of a matrix algebra, but for the moment think of it as an abstract algebra. We are also given a linear functional φ, which plays the role of the expectation value and maps an element of this algebra to a complex number. With respect to this functional, a and b are independent, sorry, are free, if the following holds: for all polynomials p_i and q_i that satisfy φ(p_i(a)) = 0 and φ(q_i(b)) = 0 (you can think of this a little like having centered random variables, with expectation value zero), the alternating products vanish, φ(p_1(a) q_1(b) p_2(a) q_2(b) ⋯ p_ℓ(a) q_ℓ(b)) = 0, where the p_i and q_i may all be different polynomials. If this condition is fulfilled, then a and b are free. That's the most formal part of my talk, but I thought it's nice to give this definition and show you a bit of its flavor.

Now it's not at all obvious why this condition ensures that we can recover joint moments from individual moments, so let's do some examples and explore the consequences. Take a and b free, and choose (this holds for all polynomials, so we can pick a specific one) p(a) = a − φ(a). Let's verify the condition: φ(p(a)) = φ(a) − φ(φ(a)), but φ(a) is already a C-valued number, so the second φ does nothing, and we get zero. The same works for q(b) = b − φ(b). Now use the freeness condition once; don't look at what's on the right yet. We write φ of the polynomial p(a) times the polynomial q(b), and since a and b are free we know this must equal zero. Multiplying out the brackets gives ab, then φ(a)φ(b), then the cross terms, and if we keep φ(ab) on the left-hand side and put everything else on the right-hand side, we find that φ(ab) = φ(a)φ(b). So at the most basic level, this definition ensures that we can recover the joint moment of free variables from the individual moments.
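Written out, this little computation is just:

```latex
\[
  0 \;=\; \varphi\big((a-\varphi(a))(b-\varphi(b))\big)
    \;=\; \varphi(ab) - \varphi(a)\varphi(b) - \varphi(a)\varphi(b)
          + \varphi(a)\varphi(b)
    \;=\; \varphi(ab) - \varphi(a)\varphi(b),
\]
\[
  \text{hence}\qquad \varphi(ab) \;=\; \varphi(a)\,\varphi(b).
\]
```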
Now we could make an easy change in this definition: put an exponent n here and an m there, so p(a) = a^n − φ(a^n) and q(b) = b^m − φ(b^m). These are still polynomials satisfying the centering condition, and in the end we find φ(a^n b^m) = φ(a^n) φ(b^m). So it looks like we recovered exactly the same rule as for the classical expectation value, just that we have to be careful about the non-commutativity of these elements. But that does not yet show that we can recover everything, because you saw that the problem with the classical notion of independence for matrices came up when we considered alternating products of A and B. So we also have to care about how to evaluate, for example, φ(abab). In a similar way as above, we can use the definition by inserting centered polynomials in exactly the same way, and we find that this decomposes into a sum of products of individual moments: φ(abab) = φ(a^2)φ(b)^2 + φ(a)^2 φ(b^2) − φ(a)^2 φ(b)^2 (there was a small mistake on the slide in the last term, sorry). Here you notice the curious structure that appears: it's not at all obvious why you find φ(a^2) times φ(b) squared, and then this last term with a minus sign. But this shows that the new definition is quite rich and has a lot of structure inside, and indeed it ensures that you can find all the joint moments from the individual moments of free variables. Any questions so far?

[Question: isn't φ(a^2) normally different from φ(a)^2?] Yes, and note that the exponents n and m sit inside the φ's. Also, if you take b equal to a, then a and b are no longer free. [Question about the choice of φ.] It depends which φ you choose. I showed you before that for matrices one often takes φ to be the expectation of the trace, and the trace is cyclically invariant, so usually one takes a φ with some cyclic invariance, but this really depends on the choice of φ.

Okay, then let me point out the connection to random matrices, which is maybe why free probability became so popular. It is due to Voiculescu, the founder of free probability, who first developed the subject completely abstractly in the context of operator algebras; then in 1991 he noticed that there is a connection to random matrices. In particular he found that if you take A and B from the Gaussian unitary ensemble, meaning A and B are Hermitian matrices whose entries are centered complex Gaussian variables with variance 1/N, where N is the dimension of the matrix, then as N goes to infinity, A and B become free in the sense that they satisfy this definition. Very importantly, one has to take N to infinity, otherwise it doesn't work; you saw in the example on the first slide that N = 2 is not enough. Another example: Haar-rotated matrices. Haar random unitaries are unitary matrices distributed uniformly on the unitary group. You conjugate a diagonal matrix D_A from the left and the right by such a Haar random unitary U, so A = U D_A U†, and you do the same for D_B with another, independent Haar random unitary V, so B = V D_B V†. Then in the large-N limit you find that A and B become free.
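As a quick numerical illustration (a minimal sketch of my own, assuming numpy; it is not code from the paper), one can check the mixed-moment formula φ(abab) = φ(a^2)φ(b)^2 + φ(a)^2φ(b^2) − φ(a)^2φ(b)^2 on two asymptotically free matrices built from independent GUE samples:

```python
import numpy as np

def gue(n, rng):
    """Sample an n x n GUE matrix with entry variance 1/n
    (spectrum converges to the semicircle on [-2, 2])."""
    x = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    return (x + x.conj().T) / np.sqrt(2 * n)

def phi(m):
    """Trace state: the normalized trace of a single large sample,
    standing in for E[tr(.)/n] by concentration of measure."""
    return np.trace(m).real / m.shape[0]

rng = np.random.default_rng(0)
n = 1000
# a and b are polynomials in two independent GUE matrices,
# hence asymptotically free of each other.
g1, g2 = gue(n, rng), gue(n, rng)
a, b = g1 @ g1, g2 @ g2   # phi(a) -> 1, phi(a @ a) -> 2 (Catalan numbers)

lhs = phi(a @ b @ a @ b)
rhs = (phi(a @ a) * phi(b) ** 2 + phi(a) ** 2 * phi(b @ b)
       - phi(a) ** 2 * phi(b) ** 2)
print(lhs, rhs)  # both approach 3 as n grows
```

At this size the two numbers already agree closely, while for small n they need not, which is exactly why the large-N limit matters.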
[Question: does one need the expectation value here, or does the trace alone suffice?] Yeah, I think that's true. For the moment I'm always taking φ, as before, to be the expectation value of the normalized trace, but a lot of these results hold if you drop the expectation value, because there is concentration of measure. You can see it a bit in the last example, the Haar-rotated matrices: in a sense the eigenvector structure of these matrices becomes negligible, and the only thing that matters are the eigenvalues, because the eigenvectors become so scrambled. That's just an intuition. In particular, the fact that A and B defined in that way are free allows you to calculate the spectrum, so the eigenvalues, of A + B in the large-N limit from knowing only the spectrum of A and the spectrum of B, which would not be possible if A and B were finite-dimensional. [Question: could U and V be...?] Yeah, true. I think Haar-distributed random matrices are also free from deterministic matrices, and Gaussian random matrices from the Gaussian unitary ensemble are also free from deterministic matrices, and I think this holds for many other random matrix ensembles, but I don't know all of them.

Okay, then let's continue. Since the connection to my physical problem is via the notion of cumulants, I will take a few minutes to talk about cumulants. I guess all of you know what cumulants are, but maybe what you don't know is that you can define them in the following way. The cumulants c_n, also known as connected expectation values, of a random variable with moments m_n can be implicitly defined through this formula: the n-th moment equals a sum over all partitions of the set {1, ..., n} of a product where, for each block in the partition, you multiply in the classical cumulant of order equal to the size of that block. So there is a combinatorial way to define cumulants. Since we will talk a lot about partitions and non-crossing partitions, let me make this clear. Take the set {1, 2, 3, 4}, so n = 4 here. A partition π is something that groups this set into several blocks; for example I can take {1, 2} and {3, 4}, that would be a partition. Is that clear? The B's are the blocks of this partition, so {1, 2} is a block B, and the size |B| of this block is two in this case.

Let me give you an example for n = 4. I like to represent these partitions graphically: I draw nodes on a circle, 1, 2, 3, 4, and then connect all nodes belonging to the same block. So this one corresponds to the partition consisting of a single block containing all four elements. Another example is this one, with a block containing 1 alone and a block containing 2, 3 and 4. Then you can go on: you can put 1 and 2 in a block and 3 and 4 in a block, and many other ways; you can also have each node in its own block. And the curious thing that can happen is a partition where, when you draw it graphically like this, the lines cross. This becomes important in the notion of free cumulants, which I'll come to next.
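For reference, the moment-cumulant relation just described reads, with P(n) denoting the set of all partitions of {1, ..., n}:

```latex
\[
  m_n \;=\; \sum_{\pi \,\in\, P(n)} \;\prod_{B \,\in\, \pi} c_{|B|} .
\]
% For example, for n = 3 the five partitions
% {123}, {12}{3}, {13}{2}, {23}{1}, {1}{2}{3} give
\[
  m_3 \;=\; c_3 \;+\; 3\,c_2\,c_1 \;+\; c_1^3 .
\]
```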
But first, let me give you an example of how to use this formula. If you now have slightly more general random variables x_1 to x_4, you can use this formula and write their joint moment as a sum over partitions. The classical cumulant, or connected expectation value, of all four variables corresponds to the partition with only one block, so the c_n there is c_4, the cumulant of x_1 to x_4. For the next partition, the one I drew like this, I have the expectation value of one variable alone times the connected expectation value of the other three together. Is the logic of how you apply this formula clear? Then you go on, other terms appear, and also to this last partition you can associate a term. That's how you express joint moments through joint cumulants using the sum over all partitions.

Now, free cumulants are defined in exactly the same way. The free cumulants, which I denote κ_n (you can also view κ_n as a function of n variables), of a variable a with moments m_n are defined by the same formula, just that the sum over π now runs over all non-crossing partitions. So this crossing partition here no longer appears; I take it away, and otherwise I do the same thing as before. Oops, I'm still there. I can write you an example where I evaluate the moment of a_1 to a_4 in terms of the free cumulants.

Let me illustrate why this is a good definition of free cumulants. We can write down the cumulants of a classical Gaussian random variable, which you all know how to do, because the n-th moment is x^n integrated against the Gaussian distribution, and if you evaluate this integral you find that it equals 0 if n is odd, and (n − 1)!! if n is even. And now comes the point: you can interpret this as a sum over partitions, and in particular it equals the number of pair partitions, that is, partitions where all blocks have size 2. That makes sense: if n is odd you cannot partition the set into blocks of 2, and if n is even, you fix the first element, there are n − 1 choices for its partner, then you fix the next remaining element, there are n − 3 choices for its partner, and so on. Comparing this with the moment-cumulant formula from before (let me just flash it again, namely this formula here), you see that the second-order cumulant is 1, since we only sum over pair partitions, and all higher cumulants are 0.

Now you can do the same thing with a matrix from the Gaussian unitary ensemble, and in that case we use the trace state as expectation value. You know that its eigenvalues are distributed according to the Wigner semicircle; you calculate the moments and find again 0 for n odd, and for n even a number called the Catalan number. And you can do the math to check that this Catalan number is equal to the number of non-crossing pair partitions, so pair partitions without crossings. So you see that for matrices from the Gaussian unitary ensemble, the free cumulants behave exactly in analogy with the classical cumulants of a Gaussian random variable: here you can deduce that κ_2 = 1 and that the free cumulants κ_n for n ≥ 3 are 0. So they really behave in an analogous way.
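A tiny brute-force check of these two counting statements (again my own sketch, in plain Python): the pair partitions of {1, ..., n} are counted by (n − 1)!!, matching the Gaussian moments, and the non-crossing ones by the Catalan numbers, matching the GUE trace moments.

```python
from math import comb

def pair_partitions(elems):
    """Yield all pair partitions of a sorted list with an even number of elements."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for i, partner in enumerate(rest):
        for tail in pair_partitions(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + tail

def is_non_crossing(pairs):
    """Non-crossing: no two pairs (a,b), (c,d) interleave as a < c < b < d."""
    return not any(a < c < b < d for (a, b) in pairs for (c, d) in pairs)

for n in (2, 4, 6, 8):
    pp = list(pair_partitions(list(range(n))))
    nc = [p for p in pp if is_non_crossing(p)]
    double_factorial = 1
    for k in range(n - 1, 0, -2):
        double_factorial *= k
    catalan = comb(n, n // 2) // (n // 2 + 1)
    print(n, len(pp) == double_factorial, len(nc) == catalan)
```

All four comparisons print True.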
And there are many other ways in which free cumulants are the right analog of classical cumulants. [Question: is this specific to Gaussian unitary matrices?] In the case of Gaussian unitary matrices, yes, but the construction is more general. [Question: what happens for joint moments of several variables?] I can't tell you the formula by heart. Sorry, can you speak louder? Yeah, true: I presented the moment-cumulant formula for a single variable, but in the example I showed you the moment-cumulant formula for many variables, and that is how you would generalize. Okay, so this is the end of my introduction to free probability, and I see that I'm already at 25 minutes, so now theoretically I have five minutes to talk about the physical importance. Any questions before I go on?

Okay, then let's change topic completely and look at this picture. The physical situation we're interested in is a conductor between two reservoirs at different particle densities n_a and n_b, and since these are different, a current flows from left to right. Since we want to do quantum mechanics, I take these particles to be fermions, created by a creation operator c†(x) at position x. If you look at such systems, they usually display diffusive transport, meaning that the current at position x equals minus a diffusion constant times the gradient of the particle density. This is called Fick's law (or, for heat transport, Fourier's law), and it is an empirical result. Secondly, we want to look at systems in the so-called mesoscopic regime, which is the word appearing in my title. What I mean by mesoscopic is that the length of the system is smaller than or comparable to the coherence length of the fermions traversing the conductor, where the coherence length is the length over which they are able to quantum mechanically interfere. Think of a real material: electrons scatter on impurities, and with every scattering they become less coherent, so a fermion here and a fermion very far away will not be able to interfere quantum mechanically any longer. But if the system is small enough, they have not yet scattered on enough impurities and can still interfere with each other. That is the regime where quantum mechanics is important; otherwise you could describe the situation sufficiently well by classical mechanics.

As statistical physicists, the big aim in such a setup would be to describe the probability distribution of the density profile of the fermions in this conductor: what is the density at position x, and what is the probability to observe a certain density profile in the steady state of this current-carrying conductor? Then, what is the current profile, which is related to the density by the continuity equation? And finally, we can ask for the probability distribution of the so-called coherences. Coming back here: quantum mechanically, the density is the quantum expectation value, with respect to the state of the system, of the number operator at position x, and if you replace one x by a y you get what we call the coherences. For those who know what a density matrix is, these are just the off-diagonal elements of the one-particle density matrix. If the system has lost all its coherences they are zero, and if it is coherent they are non-zero.
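In symbols, the two ingredients just mentioned are the following (the index placement in G follows the slide up to a possible transpose):

```latex
% Diffusive transport (Fick's law; Fourier's law for heat):
\[
  j(x) \;=\; -\,D\,\partial_x\, n(x) .
\]
% Density and coherences as the two-point function of the fermions:
\[
  G_{xy} \;=\; \mathrm{tr}\!\big(\rho\, c^\dagger_x c_y\big),
  \qquad G_{xx} = \text{density at } x,
  \qquad G_{xy},\ x \neq y : \text{coherences}.
\]
```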
The idea would be to obtain the probability distribution of these three quantities in a universal way, that is, from only a few system-dependent constants such as the mobility or the diffusion constant, so that we would have a theory that, at the mesoscopic scale, describes many mesoscopic systems regardless of the microscopic interactions, once we know these few system-dependent constants. At the level of the density profile and the current profile this is actually known, and the theory that does this is called macroscopic fluctuation theory. It is purely classical, because it deals only with quantities that you can also describe classically. What we are studying is how the coherences fluctuate in such a system, and that is something purely quantum mechanical. Oops. The big aim would be an analog of this macroscopic fluctuation theory that includes coherences. Any questions?

To study such a system we have a toy model, inspired by a classical stochastic process called the symmetric simple exclusion process, which we modify in order to include the effects of coherences. The toy model works like this: you have fermions that hop to neighboring sites with random amplitudes, and the amplitude is given by the increment of a Brownian motion on each bond. These Brownian motions have variance dt, so in a time step dt the amplitude to jump is given by dW_t; that's the interpretation. And since we have fermions, a particle cannot hop to the right if there is already a particle there. I'm considering spinless fermions, okay, to make things easy. The system is described by a density matrix; I'll just go over this and then come back to why free probability appears. The density matrix at time t + dt is obtained from the density matrix at time t by conjugating with this evolution operator, and basically what this represents is just noisy free fermions, where the Hamiltonian increment is a sum over bonds of the hopping operator from site i to site i + 1 multiplied by the noise, plus the Hermitian conjugate. And in order to model the boundary reservoirs we add a Lindbladian term; never mind if you haven't heard that term.

At first this problem looks a bit difficult, because the density matrix lives in a 2^n-dimensional space: each site carries two dimensions, so n sites give 2^n. But it is actually a quadratic model, and that allows us to reduce it to n dimensions by defining the matrix G_ij, which is nothing else than the matrix of all coherences between site i and site j. We can then write an equivalent evolution for this matrix G, which evolves in time with some new n-by-n Hamiltonian increment and some new boundary operator. I'm not going into the details, but what I want to show you is that this reduces the problem to the study of a random matrix whose probability distribution undergoes a time evolution and finally converges to a steady state, a steady state in which the system has reached, on average, a density profile like this, with fluctuations around it that no longer change in time. So we have a random matrix in a stochastic process with an as yet unknown stationary distribution, and we can ask ourselves: how could we use tools from free probability theory to find this steady state?
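To make the toy model concrete, here is a minimal numerical sketch of the bulk dynamics (my own cartoon, not the paper's code: the boundary Lindbladian is omitted, and the precise convention for the noisy one-body update is an assumption on my part):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n, dt, steps = 20, 1e-3, 5000

# One-body coherence matrix G_ij, started from a domain wall:
# left half of the chain filled, right half empty.
G = np.diag([1.0] * (n // 2) + [0.0] * (n - n // 2)).astype(complex)

for _ in range(steps):
    # Hamiltonian increment: a complex Brownian amplitude of variance dt
    # on each bond (i, i+1), plus the Hermitian conjugate.
    dW = (rng.standard_normal(n - 1) + 1j * rng.standard_normal(n - 1)) * np.sqrt(dt / 2)
    dh = np.zeros((n, n), dtype=complex)
    dh[np.arange(n - 1), np.arange(1, n)] = dW
    dh += dh.conj().T
    # Noisy unitary update of the coherences; exponentiating the
    # O(sqrt(dt)) increment keeps the Ito correction automatically.
    U = expm(-1j * dh)
    G = U @ G @ U.conj().T

print(np.round(np.diag(G).real, 2))  # density profile, diffusively smeared
```

Averaging the diagonal over noise realizations reproduces the diffusive spreading of the density, while the off-diagonal entries, the coherences, fluctuate around zero; their statistics is what the paper characterizes.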
And that brings me to my last slide, where I show you two examples of how free probability appears in this stochastic process. This random matrix G_ij satisfies three conditions that are sufficient to make the connection to free probability theory, and we think these three conditions hold more generally for noisy mesoscopic systems, not only for the toy model we are studying. I will go through them quickly. First, the probability distribution of the matrix elements is U(1) invariant: G can be multiplied by random phases from the left and the right at each site. This means that the expectation value of a product of matrix elements vanishes unless consecutive indices match up so that the phases cancel, so it makes sense to look at expectation values of matrix elements whose indices are arranged in a loop, and I represent such a loop by a circle with the indices placed on it. Second, these loops scale with some power of the matrix size if all indices are distinct. Third, the expectation value of a product of several such loops factorizes. And actually these three conditions pop up again in a completely different context, which Sylvia will talk to you about in the next talk, so it wasn't for nothing that I explained them, I hope.

What you can show with these three conditions is the following: if you compute the moments of this matrix G, a little like we did before with the matrix from the Gaussian unitary ensemble, taking φ to be the expectation value of the normalized trace (so 1/N times the sum that forms the trace, and then the expectation value), a sum over non-crossing partitions pops up. I will not explain the term appearing here, but I want to point out that these three conditions are enough to make all terms corresponding to crossing partitions vanish, and that this object here behaves like a free cumulant in free probability theory.

The second point where free probability pops up is the stationary measure of the stochastic process. To get there, I define a rescaled version of the expectation values of G's arranged in a loop, so that I have a quantity of order one, because the loop scales with this power of N. And now the curious thing: if you sum these loop expectations up as if they were the free cumulants of some unknown measure, so you sum over all non-crossing partitions and multiply over the blocks these connected expectation values of matrix elements arranged in a circle, you find that the result equals simply the minimum of the variables x_1 to x_n. So a huge simplification happens if you respect the non-crossing partition structure of free probability theory, and you can even realize these as the moments of an actual measure. This result was found by Philippe Biane, a mathematician with whom we collaborated.
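Schematically, with notation adapted from the slide (so take the precise prefactors as indicative, not authoritative), the statement is:

```latex
% Rescaled connected loop expectations, with x_k = i_k / N:
\[
  g_n(x_1,\dots,x_n) \;:=\; \lim_{N\to\infty} N^{\,n-1}\,
  \mathbb{E}\big[\,G_{i_1 i_2}\, G_{i_2 i_3} \cdots G_{i_n i_1}\big]^{c} .
\]
% Summing them as if they were free cumulants of an unknown measure:
\[
  \sum_{\pi \,\in\, NC(n)}\;\prod_{B \,\in\, \pi}
  g_{|B|}\big(x_k : k \in B\big)
  \;=\; \min(x_1,\dots,x_n) .
\]
```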
So I think that's the end of my talk; this slide is something else. Let me just give you a small recap. First we talked about the fact that independence needs to be generalized to freeness; then I gave you the definition of freeness and showed you some examples of why freeness seems to be a reasonable notion; then we talked about classical and free cumulants and the notion of non-crossing partitions, where I gave you this example; and then we had the excursion to a current-carrying conductor, the description of this conductor through a random matrix, and the characterization of this random matrix through something that is free-cumulant-like. And that's it, thank you very much.

[Question about the second result.] So, you saw the second point I made, which was this one here: the connected correlation functions in the steady state are the free cumulants of a measure that has moments min(x_1, ..., x_n), and this solves the steady state completely. You see what I mean? In the end, what we are interested in is understanding all the correlation functions of the matrix elements G_ij, in particular the connected correlation functions, and you get them if you sum up these correlation functions, the g's here, as if they were free cumulants; then everything simplifies considerably. So that's one way free probability helps us to find the steady state, and to get there we actually used the result I showed you before. You don't see the simplification that's possible? Yeah, maybe I can explain it to you later if you want. [Question about the factorization of loops.] The factorization of loops at leading order is something that is just a property of this measure, and a priori it has nothing to do with freeness. We haven't worked out the lower-order corrections, and we suspect that it is very difficult; there is also a branch of free probability theory that deals with such lower-order corrections, but I haven't looked into that yet. I think it's interesting.