In the following two hours, we will have a look at open systems as a tool to study some interesting physics. The interesting physics I will focus on is the non-equilibrium physics of many-body systems. It will not be a blackboard lecture; I will use the blackboard on a few occasions to really derive a few things, but otherwise I have slides, so that we can cover a bit more ground. I will not always go into every detail, so you're encouraged to stop me and ask questions; we have enough time.

So here is the plan. First there will be some introduction, just to wake us up and to give you some arguments for why the things I will be interested in are interesting. This will be non-equilibrium physics. I will say a few words about two settings in which you can study non-equilibrium physics. The first is unitary evolution: for instance, starting with some non-equilibrium state, evolving it unitarily, and looking at what happens. The other, which I will later focus on in section five on open systems, is that of steady states, where we have explicit driving, so that after a long time the state reaches a steady state, a state independent of time. To give you a specific problem to think about, I will say a few words about transport. Transport is one of the simplest non-equilibrium problems you can address, and I will tell you why it is interesting, important, very old, and why, in essence, we know very little about it. The models I will almost exclusively focus on are 1D models; I will say a few words about why this is so. Point two is more or less an extended introduction, if you want. Then in point three we will have a look at the matrix product state (MPS) ansatz, or matrix product operator (MPO) ansatz, which is a compact way to write complicated states in a large Hilbert space. This will then allow us to discuss time evolution of such states. First we will look at unitary evolution; here we will see that, basically, the best methods we have don't work, so we are sort of at a loss. Then we will go to open systems, where we really can look at the steady-state setting, and here we will see that things are better, at least sometimes. I will say a few words about the Lindblad master equation, why it is a nice setting, a few words about how to derive it, then I will show you some exact solutions, and then how one can efficiently simulate such an equation numerically. The last point, six, presents some research results obtained in this open setting. My goal is basically to cover up to point five; if we run out of time and don't cover six, I'm fine with that, you can ask me afterwards.

Okay, so let us start. Many-body physics: I'm sure many of you are familiar with it, but let me just try to make a distinction between single-particle physics and many-body physics. I will be interested in many-body physics. Single-particle physics covers systems of essentially free particles. Sometimes, when you are given a Hamiltonian, it might not look free; it might seem to describe an interacting system. For instance, the transverse-field Ising model, with a field in the transverse direction, say the x direction, and a zz interaction in the longitudinal direction, might look like an interacting model, because nearest neighbors do interact.
But then if you do a simple transformation, in this case Jordan-Wigner followed by a Fourier transform, you really decouple all the modes, so you end up with non-interacting particles, non-interacting bosons or fermions. In such systems you can transform your Hamiltonian to a non-interacting form. If you imagine a lattice system of, say, L lattice sites, then after this simple transformation you end up with a sum over L non-interacting modes, something like H = sum_j E_j n_j: the energy E_j of a single-particle mode times its occupation number n_j = c_j^dagger c_j, say for fermions. The point being that you can write your Hamiltonian as a sum of L non-interacting pieces. And then you can write any eigenstate simply by occupying these orbitals: you specify the occupation numbers. So eigenstates are really just single Slater determinants, if you think in terms of first quantization. You do have quasi-particles: they are just these eigenmodes, which is another word for single-particle states. The complexity of such a system is linear, or in the worst case polynomial, in N, the number of lattice sites (the same as my L here). I will call this simple: because it's polynomial, even if you have to do some numerics at the end, the numerics will be efficient and you will be able to do large systems.

On the other hand, the Hilbert space is a large space, and of course the majority of states in the Hilbert space do not fall into this category. A typical system is not a free system; you cannot write the Hamiltonian in such a way. In this case you have to deal with truly many-body physics, the physics of strongly interacting particles. There are no single-particle states, no quasi-particle states. Now you might object that close to the ground state there might be some quasi-particle excitations; but considering that I will mostly focus on non-equilibrium physics, which means you also populate high-energy eigenstates — in fact, I will focus on infinite-temperature non-equilibrium dynamics — there you can safely say there are no quasi-particles. So eigenstates are superpositions of many Slater determinants, there are strong correlations, and the complexity is exponential. Let's say such problems qualify as difficult ones.

Now, of course, there are many strongly interacting systems; I will mention some later in more detail. Here is, for instance, the Hubbard model. You can debate whether the two-dimensional Hubbard model is sufficient to explain high-Tc superconductivity or not, but everyone will agree that the Hubbard model, in 2D or even in 1D, is a very interesting model that harbors non-trivial physics. When you put many particles together, a many-body system in the thermodynamic limit, you can have new interesting collective behavior, and this is what we are interested in. Today we can study such models, like the Hubbard model, or the even simpler Heisenberg model that I will later focus on a little more, both analytically and numerically. For instance, the one-dimensional Hubbard model is even integrable, but this integrability should not deceive you.
It does not mean that we can calculate everything, in particular transport properties. It turns out that even for integrable systems we are not able to calculate transport properties. So integrability does not mean that we know everything about the model. Today people also know how to do very nice experiments; some of the experiments are perhaps even a little ahead of what we can do numerically or theoretically. There will be a talk about that in the following week, so I will not dwell too much on it, but in the cold-atom setting you can have a well-controlled situation where you can control the interactions, you can even put in controlled disorder or quasi-disorder, and you can study 1D, 2D, or 3D systems. So it's a very nice setting where you can engineer your model and then study it in the lab. And there are also real materials, I will mention some of them later, where these very simplistic models, like the Hubbard model or the 1D Heisenberg model that I will talk a lot about, are realized in a real piece of material.

Okay, so non-equilibrium physics. Why do we want to study it? First, a few words about what we mean by non-equilibrium. Non-equilibrium is, of course, everything that is not equilibrium. Eigenstates are stationary states of unitary evolution, so they are equilibrium, and convex combinations of eigenstates, like a Gibbs or Boltzmann state, would also be called equilibrium: thermal states. So it's clear that non-equilibrium is really a vast universe. For equilibrium physics, it might be hard to calculate things for a complicated system, but at least we know the principle, right? You take the Boltzmann distribution, you calculate the partition function, and after you have that you are done; you just have to take appropriate derivatives. Out of equilibrium, no such uniform setting exists. There is no non-equilibrium probability distribution that would let you calculate every non-equilibrium situation, because there are too many different types of non-equilibrium situations. So if there is to be any hope of some universality, some nice theoretical description, you have to limit yourself within this non-equilibrium universe to, say, the simplest situations. One can argue that one of the simplest settings is that of non-equilibrium steady states.

So what are those? A steady state just means that your state, for instance described by the density operator, is independent of time. Even so, such a state might have non-trivial, meaning non-zero, expectation values of, say, currents. So you have transport: a time-independent state, but a current nevertheless flows through your system. Now, you can think about two different approaches to that. You can look at unitary evolution. To be concrete, suppose you have some lattice system and you prepare it in an inhomogeneous state with a jump in, say, density or magnetization or energy density — a domain-wall-like step. Then you let it evolve under unitary evolution. This profile will relax in some way, and how it relaxes will depend on the transport properties.
If you wait long enough, it might happen — this depends on the system — that the profile in a certain piece of your system will look independent of time; it will not change anymore. You might call that a steady state, but it's just a profile that is independent of time. The state itself will still evolve in time, because if it did not, it would be an eigenstate. So in such a setting the state itself is not really time-independent, but the expectation values of observables might be. Yes? Okay — by evolution, you mean unitary evolution here, yes. The second setting is the one with a true steady state. If you have a Liouvillian, a master equation — what I will call an open or driven system with baths — then indeed you can have true non-equilibrium steady states. The picture to have in your head is a bath on the left and a bath on the right with different, say, chemical potentials; after a long time you will indeed reach a steady state independent of time. Does that answer your question? Other questions so far? So far I'm more or less just giving you some ideas. It will turn out — I will explain how and why — that this driven non-equilibrium steady-state setting, where you explicitly account for the baths, is somehow better behaved. In this setting you can sometimes get exact solutions, and in other cases there are good numerical methods with which you can study large systems.

All right. One non-equilibrium question you can ask is that of transport. It might seem a rather particular question — there are many others — but it turns out that this question of transport is related to many others; I will mention some, for instance relaxation. And it has a very long history. What would we like to understand? The nature of transport. Everyone knows the famous Fourier law, which says that the current is proportional to the gradient of the driving potential: for heat or energy transport, the temperature; for charge transport, the gradient of the electric potential. So the question is: given a Hamiltonian, perhaps more complicated than that one, will such a law be obeyed in the model? If such a law holds — meaning that the coefficient, called conductivity or diffusivity depending on the setting, is independent of the system size — then you would say Fourier's law is obeyed and you have normal, diffusive transport. But you will see that there are many situations, particularly in 1D systems, where such a law is not obeyed. So the basic question is: given a model, what kind of transport do I have? Diffusion, ballistic transport, or something else?

Going back to Fourier's work: this goes back to 1807. At that time the question of heat transport was one of the big problems. People tried to understand how it arises, and everyone's approach was, I guess, influenced by celestial dynamics, where you have point particles and forces between them.
Everyone was thinking about the transport problem as some complicated many-body problem: you have a solid, many particles inside it, and each one interacts with every other particle. This is obviously very complicated, so they couldn't make any progress. Fourier's breakthrough — of course it seems natural and obvious now, but only in hindsight — was to go to the continuum limit and write a PDE for it, the diffusion equation. This is really how PDEs came through the big door into physics. He also introduced what we today call the method of separation of variables. He wrote a paper and submitted it to the French Academy, and the paper was rejected over what we would today call technical, mathematical points; it was simply ahead of its time. Questions were raised about the convergence of the Fourier series, which we know is a delicate point — whether you converge absolutely or in the mean, and so on. This was not clear at the time and was clarified only decades later. Fourier later published the work on his own as a book, in 1822.

All right, so let me mention those different transport types. We write a phenomenological law — let me write it on the blackboard. We study the current J of some quantity; there is some transport coefficient, call it kappa, and the gradient I will write as a difference of potentials divided by the system length L: J = kappa * Delta U / L. Then what you can do is fix Delta U, the difference of potentials, and study how the current scales with the system length. If kappa is constant, meaning normal diffusion, normal transport, then J should scale as 1/L. But in general it is not so: in general you find that the current J scales as 1/L^gamma for some scaling exponent gamma. This gamma is what we would like to understand — for a given Hamiltonian, calculate gamma. Here are the options. If you have localization, the current will decay exponentially with system length, that's clear. If you have diffusion, it's 1/L. Ballistic transport means, in a way, something like a superconductor: there is no bulk resistivity, so you can take your system twice as long and the current will not decrease. In this case J scales as 1/L^0, so gamma is zero, and this is the fastest transport you can have in a locally interacting system (what I have in mind are locally interacting systems, not long-range ones). Besides diffusive and ballistic, you can also have anything in between, which is then called anomalous transport. You can be slower than ballistic but faster than diffusive — this is called superdiffusion, with gamma between zero and one. You can also be slower than diffusive, called subdiffusion, with gamma larger than one. Gamma going to infinity is basically localization.
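To collect the scaling classes just listed in one place, here is a summary of the blackboard discussion above:

```latex
J = \kappa\,\frac{\Delta U}{L}\,,\qquad J \sim L^{-\gamma}:\qquad
\begin{array}{ll}
\gamma = 0 & \text{ballistic}\\
0<\gamma<1 & \text{superdiffusive}\\
\gamma = 1 & \text{diffusive (Fourier's law, finite } \kappa)\\
\gamma > 1 & \text{subdiffusive}\\
J\sim e^{-L/\xi} & \text{localized } (\gamma\to\infty)
\end{array}
```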
Already a number of years ago, Bonetto, Joel Lebowitz, and Rey-Bellet, in a paper, "Fourier's Law: A Challenge to Theorists" — a paper written after a talk given at the Congress of Mathematical Physics — wrote something like: we present a selective overview of the current state of our knowledge, or more precisely of our ignorance, about Fourier's law. Even though empirically, meaning numerically, it is well tested — you can do simulations; this comment is more or less for classical systems, but it holds for quantum ones as well — nevertheless, there is at present no rigorous mathematical derivation of Fourier's law for any system with a Hamiltonian microscopic description. So it means we are not able to derive Fourier's law for a Hamiltonian system. I will give you some explicit examples and tell you what is known and what is not known.

Yes — well, okay, I don't know exactly what you mean, but no. Actually it's a good remark; perhaps I will first answer something different and then come back to it. What is important to stress here is that the transport type is uniquely defined only in the thermodynamic limit. If you have a finite system, you cannot really distinguish one behavior from another: you have a finite range of L, you can fit some dependence to it, but you cannot decide with certainty which behavior you have. So it's always meant in the thermodynamic limit: you need large systems and large times, and all these transport types are meant for L going to infinity. So yes, even as L goes to infinity you will have such behavior, and there are systems where you do. Does that answer it? Okay, very good.

Let me perhaps also say a word about why I focus on 1D. 1D is very special, that's clear, and for many reasons. One of the reasons is that in 1D you do have integrable systems; in more than 1D there are more or less no integrable systems. So in 1D you can study this interesting interplay: integrability, then you add some perturbation, you break integrability, you can have chaos and so on — the whole range of complexity. Of course, even though in 2D you might not have exact integrability, a system might behave as integrable for very long times. That's why studying 1D systems is a nice playground, quite apart from whether in nature you have 1D systems or not (actually you do): it's a nice playground because you have the whole zoology of different transport types. In integrable systems you might expect to typically find ballistic transport. If you think about what classical integrability is — you have tori, and dynamics on a torus is just some rotation — then unless your observable is very strange, it will somehow change ballistically with time. So you might expect to always find ballistic transport; this is actually not the case. Because of integrability it is perhaps more common to find all these different transport types in 1D than in 2D; in higher-dimensional systems diffusion might be more prevalent than in 1D.
On the previous slide we had that statement from 2000, but the history of transport, or the very related problem of relaxation, goes even further back. Everyone who uses statistical mechanics of course asks: when does it work, and why does it work? The prescription of statistical mechanics is: you have a distribution, like the Gibbs distribution, with which you can get equilibrium expectation values. But for a real system, I guess no one really expected the density operator to be exactly the Gibbs distribution, right? Typically you might even deal with a pure state, and then the question is: when do I get expectation values for that pure state which agree with the equilibrium ones? Or you can ask: suppose I prepare a state which is not an eigenstate, a very special state, like a domain-wall state, where clearly the expectation values might not equal the equilibrium ones; I then evolve this state in time unitarily, and I ask whether I get to equilibrium after a long time — whether I relax or not, whether I thermalize or not.

Many people were interested in that; there were some failed attempts, but let me say a few words about one very famous numerical experiment performed by Fermi, Pasta, Ulam and — the last name is usually left out — Mary Tsingou. The story is this: after the Second World War, Fermi was in Chicago; Pasta, Ulam and Tsingou were in Los Alamos, where they had a computer, a machine, a big beast, a mainframe. During the week it had to do weapons research, but on weekends they could do some serious stuff, like physics. There were just a few people who could program that beast, and one of them was Mary Tsingou; there is a surviving sketch of her program flow, some very complicated thing. Fermi was very much interested in the question of ergodicity. Twenty years earlier, Fermi had published a paper — quickly realized to be incorrect — where he tried to prove that a generic system is ergodic, which is wrong; today we know it's wrong, and I will tell you why on the next slide. So Fermi's idea was: then just try numerically; take some system and some initial state. What they took was a chain of harmonic oscillators which, in addition to the harmonic coupling, had an anharmonic coupling — anharmonically coupled oscillators, so there is nonlinearity. They started with an initial state in which only a few normal modes were populated, and let it evolve. What Fermi expected, at least if the system is ergodic, is that after a long time this very strange initial state would relax to a state where all normal modes are equally occupied, as equipartition tells you. But they did not find that: even after the longest times they could afford, they did not find equipartition. This was very surprising, and fully — meaning really mathematically — we still do not understand it. There are several things worth mentioning at this point; but first, since the experiment is easy to reproduce on a laptop today, let me show a small sketch.
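This is a minimal sketch, assuming the FPU-β quartic nonlinearity that is written out a bit later in the lecture; the values of N, beta, dt and the initial amplitude are my illustrative choices, not the original 1955 parameters.

```python
# FPU-beta chain: N masses, fixed ends, bond potential v(r) = r^2/2 + beta*r^4/4.
# All energy starts in the lowest normal mode; we watch whether it
# spreads over the other modes (equipartition) or stays put.
import numpy as np

N, beta, dt, steps = 32, 0.3, 0.05, 200_000

j = np.arange(1, N + 1)
k = np.arange(1, N + 1)
# Orthonormal normal modes of the harmonic chain with fixed ends
modes = np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * np.outer(k, j) / (N + 1))
omega = 2.0 * np.sin(np.pi * k / (2 * (N + 1)))

def force(x):
    """Force on each mass from the anharmonic nearest-neighbour bonds."""
    y = np.concatenate(([0.0], x, [0.0]))   # fixed boundary masses
    r = np.diff(y)                           # bond stretches
    f = r + beta * r**3                      # v'(r) for each bond
    return f[1:] - f[:-1]

def mode_energies(x, p):
    """Harmonic energy carried by each normal mode."""
    Q, P = modes @ x, modes @ p
    return 0.5 * (P**2 + (omega * Q)**2)

x = 4.0 * modes[0]        # populate only mode k = 1
p = np.zeros(N)

for step in range(steps + 1):
    if step % 40_000 == 0:
        E = mode_energies(x, p)
        print(f"t = {step*dt:8.1f}   E_1..E_4 = {np.round(E[:4], 4)}")
    # symplectic velocity-Verlet step
    p += 0.5 * dt * force(x)
    x += dt * p
    p += 0.5 * dt * force(x)
```

With weak nonlinearity the energy sloshes back and forth among the first few modes and recurs, instead of spreading — the FPUT observation; with stronger nonlinearity, as we will see below, equipartition does set in.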
The reason Fermi expected to see this equipartition and thermalization, this ergodicity, is the following. Poincaré, in the 1890s, responded to a challenge, a prize offered by the Swedish king: for his birthday, the king announced a prize for anyone who would solve the stability of the solar system. Is it stable or not? You have the three-body problem and you want to know whether it is stable in the long run. What Poincaré showed is that the two-body problem is integrable, so it is stable for all times. Then you add another particle, and the question is what happens. Poincaré showed that there are resonant tori that break, meaning that in the long run there are orbits which are not stable. And because resonant tori form a dense set in the space of tori, everyone, I guess, believed that — although Poincaré managed to show this only for resonant tori — this is the typical behavior: that typically every torus will break down. That is perhaps why Fermi tried to prove ergodicity of generic systems, even though today we know it is just the opposite: a generic system after a small perturbation is not ergodic. This is the statement of the KAM theorem. In 1954, Kolmogorov, without knowing about the FPUT experiment, gave the basic idea of a proof that if you have an integrable system and the perturbation is sufficiently small, then there are non-resonant tori which do survive, and the measure of those surviving tori goes to one as the perturbation goes to zero.

Yes — okay, let me make a sketch. Suppose we have a system with two degrees of freedom, so the phase space is four-dimensional. The system is Hamiltonian, so the energy surface is three-dimensional. Then we make a cut of this three-dimensional surface — this is sometimes called the Poincaré surface of section — and we have a two-dimensional cross-section, which I can plot (I cannot plot three dimensions). On this cut, the tori look like nested closed curves; in 3D you can think of a bicycle tube going around. On every torus, an orbit winds around and comes back, so for two degrees of freedom there are two frequencies: the frequency of rotation in one direction and the frequency of rotation in the other. If the ratio of these two frequencies is rational, this is a resonant torus; if irrational, a non-resonant torus. The KAM theorem then involves a statement of how strongly irrational a given frequency ratio is — this is what matters. If a torus is strongly irrational, it survives. There is a precise condition which, unless someone asks, I will not comment on. So this is the integrable picture, and after you add some perturbation, the question is what happens. The KAM theorem says that some non-resonant tori, say this one and this one, survive, while some other torus, a resonant one, breaks down. Essentially, after introducing the perturbation, an orbit in a two-degree-of-freedom system stays trapped between two surviving tori. This is basically the content of the KAM theorem.
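For reference — this is the condition I said I would not dwell on — the surviving tori are those whose frequency vectors are "strongly irrational" in the Diophantine sense: there exist constants c, tau > 0 such that

```latex
|\,\boldsymbol{k}\cdot\boldsymbol{\omega}\,| \;\ge\; \frac{c}{|\boldsymbol{k}|^{\tau}}
\qquad \text{for all integer vectors } \boldsymbol{k}\neq 0 .
```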
Any other questions? So, does that theorem explain the observation of Fermi, Pasta, Ulam and Tsingou? Well, it's not so easy and clear, because there are conditions in the theorem — the perturbation has to be sufficiently small, and so on. Also, for the solar system you might ask: does it tell us whether the solar system is stable or not? We don't really know, because the perturbations in the solar system are probably too strong, so the theorem does not apply. State-of-the-art numerics suggests that the solar system is probably not stable in the very long run; we just happen to be lucky to live short enough that we haven't noticed yet. It is true that the nonlinearity in the FPUT parameters was small, and quickly after the FPUT work, Izrailev and Chirikov did additional numerics showing that stronger nonlinearity indeed leads to equipartition. So unfortunately FPUT used a small nonlinearity — and actually the choice of the potential matters too. What they took, as I said, was an anharmonic potential: v(x) = x^2/2, the harmonic nearest-neighbor potential, plus a prefactor beta times x^4. It turns out that in the continuum limit, if you appropriately scale the lattice spacing and this beta, the partial differential equation you get for this FPU-beta model is the Korteweg-de Vries (KdV) equation, which turns out to be an integrable PDE. It supports solitons: excitations which don't change shape as they move. So, from the point of view of ergodicity, an unfortunate choice: the model happens to be close to an integrable point, and being close to an integrable point apparently matters.

All right, the final thing I will mention in this classical KAM business is that the theorem is a classical theorem — there is no quantum KAM theorem; we know some things, but there is no uniform theorem — and it holds for systems with a finite number of degrees of freedom, finite D. What happens in the thermodynamic limit, when D goes to infinity, is an open problem. People believe, because of specific results showing that some orbits are unstable and some are stable, that there is no KAM-type theorem that would cover all PDEs; it is a case-dependent situation for PDEs. So it's more complicated.

Okay, any questions so far? This was a perhaps somewhat extended introduction; now we really move to quantum physics. We are in Hilbert space, and this Hilbert space is huge. If we have a pure state and want to write it in a given basis, the number of expansion coefficients we need is exponentially large in the number of particles. Even without thinking about time evolution, this kills us — for numerics, and even for analytics, but let's say we have numerics in mind. Can we do better? Can we write the state in a more efficient manner? The answer is yes: this is the matrix product state ansatz. And it works for pure states as well as for density operators, because the only thing it requires is a linear space. So let us have a look at the case of pure states. These are all the expansion coefficients, in some basis.
If we have spin-1/2 particles, each s_i is either up or down, or zero or one, whichever you prefer: |psi> = sum over s_1...s_n of c_{s_1...s_n} |s_1 s_2 ... s_n>. Instead of listing all 2^n coefficients c, we write each coefficient as a trace of a product of matrices: c_{s_1...s_n} = Tr(A_1^{s_1} A_2^{s_2} ... A_n^{s_n}). So for each site j we have a set of matrices, namely one matrix for each basis state at this site. If you have spin-1/2 particles, with states zero and one at a given site, there is one matrix corresponding to state zero and one matrix corresponding to state one at this site — I will give you an explicit example. So if we have n sites and spin-1/2 particles, we have 2n matrices, in general complex. The expansion coefficient for a given string of s's, like 00100, is obtained by plugging in the appropriate matrix — choosing between the two — for each site, multiplying them, and taking the trace. Is that clear?

Okay. The all-important parameter is the size of those matrices; let me denote it D, so the matrices are D by D. The efficiency will crucially depend on how large these matrices are. So the question is: what is the necessary dimension D for a given state? There is a simple answer: the required D equals the number of non-zero eigenvalues of the reduced density matrix for a given bipartition. For instance, take a system of L lattice sites, with two matrices per site, and split it into two parts, A and B; say we are dealing with a pure state for now. We calculate the reduced density matrix of subsystem A, rho_A = Tr_B |psi><psi|. The number of non-zero eigenvalues of this beast equals the required size of the matrix at the cut. More precisely, there is a quantum-information way to approach this, called the Schmidt decomposition: you can write any state in Schmidt-decomposed form.

Yes — yes, exactly: the matrices are not necessarily all of the same size, very nicely observed. The shapes can depend on the site, but in such a way that you can still multiply them. We will come to how we deal with that in practice; the answer will be that we make a cut where they are sufficiently small. All right, so the Schmidt decomposition. Every state can be written in the form |psi> = sum_i sqrt(lambda_i) |phi_i^A> |phi_i^B>. Here lambda_i is an eigenvalue of the reduced density matrix, and sqrt(lambda_i) is also called a Schmidt coefficient. The states phi_i^A and phi_i^B are mutually orthogonal on subsystem A and on subsystem B respectively, and they are nothing but the eigenvectors of the reduced density matrices.

Let us have a look at an example of this Schmidt decomposition, so it becomes clear. I will pick examples easy enough that we can write the matrices down by inspection, without really doing the Schmidt decomposition. First a trivial example: suppose our state is a product state, (alpha_1 |0> + beta_1 |1>) tensor (alpha_2 |0> + beta_2 |1>), and so on. Now let's figure out what the matrices are — say, on the first site, the matrix corresponding to basis state zero.
First, what size of matrices do we need? The number of non-zero eigenvalues of the reduced density matrix — but the reduced density matrix of such a state is just a projector onto one state, so there is one eigenvalue equal to one and all the others are exactly zero. Of course: it's a product state, so the matrices can be chosen of size one by one. It's not really a matrix, just a number, but I will call it a matrix anyway and write it down. What is it? Just the coefficient: A_1^0 = (alpha_1), a one-by-one matrix. The matrix corresponding to state one on the first lattice site is just (beta_1). And you see the rule: the one-by-one matrix for state zero on site k is alpha_k, and for state one on site k it is beta_k. Indeed, if I now want an expansion coefficient, say c_{0,1,0,...}, it equals alpha_1 times beta_2 times alpha_3 and so on, which is exactly what I get by multiplying those numbers. So here I didn't gain anything.

Let us look at an entangled state where we really need matrices — not one-by-one, but larger. For instance, take the GHZ-like state |00...0> + |11...1>, with equal coefficients. First, what size of matrices will we need? The number of non-zero eigenvalues is also called the Schmidt rank. So what is the Schmidt rank of this state? Remember the Schmidt decomposition: we have to write the state as a sum of products of something on A and something on B, where the states on A are mutually orthogonal, and likewise on B. Looking at our state: it is already in Schmidt-decomposed form, right? If I make a cut somewhere — this is subsystem A, this is subsystem B — then |00...0> on A is orthogonal to |11...1> on A, and the same on B. So this is a Schmidt-decomposed form, with the two Schmidt coefficients equal to 1/sqrt(2) (the lambdas are 1/2). The Schmidt rank is two, so we will need matrices of size two by two.

What are they? Let's try to guess. I will write down a guess and then test whether it works; that's a perfectly legitimate method of getting exact solutions. For A^0, take the matrix with a one in the upper-left corner and zeros elsewhere; for A^1, the other way around, a one in the lower-right corner. Forget about the normalization 1/sqrt(2); up to normalization I take these. What do I get? For the coefficient with all zeros, I have to multiply only the A^0 matrices, and multiplying L such matrices I get a matrix with one and zero on the diagonal; its trace is one. Good, perfect — up to the square root, that's the right coefficient. The same story works for all ones: trace one, perfect. What about a mixed string, like 0,1,0,1,1? As soon as I multiply A^0 by A^1 I get the zero matrix, so all mixed coefficients vanish, which is also correct. Only those two coefficients are one. And this is how it works.
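To make the "guess and test" concrete, here is a small sketch that checks the GHZ matrices just written down; the W-state matrices from the next example can be checked with the same loop.

```python
# Verify the GHZ MPS: A^0 = diag(1,0), A^1 = diag(0,1), and
# c_{s1...sL} = Tr(A^{s1} ... A^{sL}), up to the 1/sqrt(2) normalization.
import itertools
import numpy as np

L = 4
A = {0: np.diag([1.0, 0.0]),   # matrix for local basis state |0>
     1: np.diag([0.0, 1.0])}   # matrix for local basis state |1>

for s in itertools.product([0, 1], repeat=L):
    M = np.eye(2)
    for si in s:
        M = M @ A[si]
    if np.trace(M) != 0:
        print(s, np.trace(M))   # only the all-0 and all-1 strings survive
```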
Perhaps one last example, just to show you that things are not always so obvious. Let me take what is called, in quantum information, the W state. Up to normalization — I will not bother with it — you have a single one and all other sites zero, with the one in all different places: |100...0> + |010...0> + ... + |000...1>. A uniform superposition of L such states, with the one at all L different positions. Now, looking a little at this state, it is easy to figure out that the Schmidt rank is again two, so we need matrices of size two. I will just write them down, but let me first make a comment. This state is translation invariant: if I shift it by one site, with periodic boundary conditions, I get the same state back. We would call the MPS matrices translation invariant if they were the same on all sites. The point I want to make is that the two-by-two matrices I am about to write down are not translation invariant. So let me write them down — and let me cheat, so as not to make a mistake. We can take the same matrices on all sites except one, either the first or the last; let me make the last site special. On all sites k < L: A_k^0 is the identity, and A_k^1 is sigma-plus, the Pauli raising matrix, with a one in the corner. If you had these on all sites, it's easy to see it fails: say we calculate the coefficient c_{1,0,0,...,0}; we get sigma-plus times identities, which is just sigma-plus, and the trace of sigma-plus is zero. So it doesn't work; we have to finish at the last site with some other matrix that moves the entry onto the diagonal and gives a non-zero trace. It turns out that the following works: A_L^0 can be chosen as sigma-minus (a one in the lower-left corner) and A_L^1 as the projector with a one in the upper-left corner. I said "can be chosen": the choice is not unique, those matrices are not unique. But you see, it's not translation invariant anymore.

And it's actually a kind of funny situation. You might ask: okay, this choice is not translation invariant, but perhaps I did not work hard enough to find one that is. Does a translation-invariant MPS for the W state exist? We don't know the answer — no one knows. It's a very simple-sounding problem, but apparently hard to solve. People believe — there are some arguments — that one cannot find a translation-invariant MPS of finite rank, finite rank meaning D independent of L. It seems you have to allow D to grow with L, and then you can have translation invariance. So it's a setting where not everything is obvious all the time.

All right, a final remark and then we make a short break. We have looked at states, matrix product states; you can play the same game for operators, for instance density operators. How? A density operator can again be expanded in a basis of operators, chosen however you like. For instance, for spin-1/2 systems, the Pauli matrices form a basis.
So the identity plus the three Pauli matrices form a basis, and I can write my general rho with expansion coefficients: rho = sum over i_1...i_n of c_{i_1 i_2 ... i_n} sigma^{i_1} tensor sigma^{i_2} tensor ..., with sigma^{i_1} on the first site, sigma^{i_2} on the second, and so on. Each index i_k now runs over four values. The space of operators is again just a Hilbert space; the only difference is that the local basis now has dimension four, whereas before it was two (spin up and spin down). And because the only thing we need here is linearity — just linear algebra, eigenvalues of reduced density matrices — we can do exactly the same business. Everything that works for states of local dimension d formally works for operators as well; the only thing is that the local dimension for operators is d squared. For spin-1/2 you go from two to four. So it's computationally more demanding, but the same method works.

All right, any questions about that? Yes? What do you mean by graph? Okay — like periodic boundary conditions or something. But you have a state in a given basis; there are no boundary conditions at this level. Only once you have a Hamiltonian can you speak about boundary conditions. We are now at the level of states: a linear space, a basis, and that is all; there is no connectivity or anything yet. For the Hamiltonian there will be, and if you ask me the same question for unitary evolution, I will be able to give a more substantial answer. But here there is no such concern: it's just a linear basis, and that's it. Okay, so let's make a ten-minute break and then we continue, yes? — How did you get the Schmidt rank equal to two for the W state? Ah, okay, we can try it afterwards.

Right, so we now roughly understand how this MPS or MPO procedure goes. I have said that the required D is exactly equal to the Schmidt rank, the number of non-zero eigenvalues. Sometimes you don't have access to the exact number of non-zero eigenvalues. As a rough estimate it might be good enough to calculate, for instance, the von Neumann entropy of the eigenvalues; a Rényi entropy might also be good enough. The required D will then roughly scale exponentially with the entropy, because the entropy is basically the logarithm of the number of non-zero Schmidt coefficients — the extreme Rényi entropy is exactly equal to that, as we will see. Yes? In which sense, hand-waving, why? Well, if you want an exact statement: people often prefer the von Neumann entropy, even though in practice, and even from the rigor side, it's often better to look at the Rényi entropies; they are often the more relevant ones. The Rényi entropy has an index alpha which can take any real value from zero to infinity: S_alpha = (1/(1-alpha)) log Tr(rho^alpha). The Rényi entropy with alpha = 2 is built from what is called the purity, just Tr(rho^2); and in the limit alpha -> 1 you recover the von Neumann entropy.
The Rényi entropy in the extreme limit alpha -> 0 is exactly the logarithm of the number of non-zero eigenvalues, so that is the one you would use to get the required D exactly. But in a typical situation, using the von Neumann entropy gives you the correct order of magnitude.

There is some non-uniqueness in the choice of the A's, which I already mentioned, and it's easy to see: if I insert an identity between two matrices and write it as U times U-dagger, with U some unitary matrix, I haven't changed anything in the trace, and I can do this on every site. So there is a lot of freedom; I can rotate those matrices. But there is a preferred way to fix it, and this goes to the question of how to deal with cases where the eigenvalues are not exactly zero. Suppose I want to limit myself to some finite D: how should I write the expansion in order to best approximate my state with the given D? If your criterion of best approximation is the two-norm of the difference between your state and the approximate state, then there is an optimal procedure, and this fixes the gauge freedom. How it fixes it I will not tell you — I can give you details afterwards — but it is connected with the eigenvectors of the reduced density matrix. So there is a well-defined procedure for making this optimal decomposition.

All right. Now we have a state, but we would like to evolve it in time. What methods are available? There are different renormalization-group methods, all with RG in their name. There is the very old numerical RG (NRG). What's the idea behind all of them? If we allow D, the size of the matrices, to be exponentially large, we haven't gained anything in efficiency. So we have to truncate, and the different methods differ in how they do this truncation. In NRG you truncate according to energy; this method works fine if you have a separation of energy scales. Then, for instance for ground states, there is the ordinary DMRG, the density matrix renormalization group of White. This method can be framed in modern language in this MPS/MPO form — the original formulation did not use the MPS ansatz, but it was realized, by Guifré Vidal and then many others, that you can write the original DMRG in this MPS language, and the MPS language is the natural setting. Once you do that, it is also very natural to do time evolution; the original DMRG was just for, say, ground states, but you can also do time evolution. The method is called either tDMRG, time-dependent DMRG, or TEBD, time-evolving block decimation — more or less the same thing. I will describe on the next slide how you do it.

So we have our MPS form — or MPO, it doesn't matter, it's just a linear space. What do you do? Perhaps let me write it on the blackboard. Suppose you have unitary evolution: you would like to calculate e^{-iHt}, and your H is a local Hamiltonian, say nearest-neighbor in 1D, H = sum_j h_{j,j+1}, a coupling between two nearest-neighbor sites. What you do first is split this evolution for a long time t into small steps of delta t, so you basically want to calculate e^{-iH delta t}.
If delta t is small enough, you can approximate e^{-iH delta t} as a product of two-site factors e^{-i h_{j,j+1} delta t}. You can actually do better — this is the Trotter-Suzuki decomposition — but for the sake of argument this is enough. So the basic step of time evolution is a transformation on two nearest-neighbor sites. This is crucial, and it is what is sketched in the plot: a pictorial representation of the MPS, where the blobs are the matrices, the A's; the horizontal connecting rods are the bond indices between two consecutive matrices, which are multiplied together; and the vertical yellow lines represent the physical indices, like those zeros and ones from before. Now you have a two-site transformation, some unitary U. It maps the A's of two neighboring sites to some new object. Let me write it in components: U has components U_{s_1' s_2', s_1 s_2} — the left pair of indices and the right pair of indices of a two-site transformation. I act with this on two A's, A^{s_1} and A^{s_2}, and I have to sum over s_1 and s_2; there is still a matrix multiplication inside. Writing the matrix indices as well: (A^{s_1})_{ik} (A^{s_2})_{kj}, with a summation over k, where i and j are the outer indices connected to the other matrices, which I don't write.

If I do this summation, I get an object B. Which indices are left after summing over the s's and over k? The primed physical indices s_1' and s_2', plus i and j. So it is an object that I can write as B_{(s_1' i),(s_2' j)}: I reshape it into a matrix with these grouped double indices. Now I would like to write the resulting B again in MPS form: as some new matrix with physical index s_1' times some new matrix on the neighboring site with physical index s_2', with i on the left, j on the right, and some new bond index alpha in between, over which I sum. How do I do this? One just has to do a singular value decomposition of that matrix, if you remember it from linear algebra.

Just a moment — which ones are connected? That one and that one. Yeah, you're right; I didn't bother to write this one, you're exactly right. Perhaps I can also remark that you can write the MPS either as a trace of a product of matrices or, as people sometimes do, contracted with some left vector and right vector instead of a trace. This object — MPO, call it whatever you want, it's just some big fat B. Yes, exactly. An important point I have to make here: yes, I get one B, but the bond dimension of this B will be larger.
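Here is a minimal sketch of this two-site update, including the SVD split just described; the index convention (left bond, physical, right bond) and the symmetric splitting of the singular values between the two new tensors are illustrative choices, not the only ones.

```python
# One TEBD-style update: contract two neighbouring MPS tensors,
# apply a two-site gate, split back with an SVD, truncate to Dmax.
import numpy as np

def two_site_update(A1, A2, U, Dmax):
    """A1: (Dl, d, D) and A2: (D, d, Dr) MPS tensors; U: (d*d, d*d) gate."""
    Dl, d, _ = A1.shape
    Dr = A2.shape[2]
    # theta with indices (left bond, s1, s2, right bond)
    theta = np.tensordot(A1, A2, axes=(2, 0))
    # apply the gate on the two physical indices
    theta = np.tensordot(U.reshape(d, d, d, d), theta, axes=([2, 3], [1, 2]))
    # indices are now (s1', s2', left, right); regroup as (left*s1', s2'*right)
    theta = theta.transpose(2, 0, 1, 3).reshape(Dl * d, d * Dr)
    u, s, vh = np.linalg.svd(theta, full_matrices=False)
    keep = min(Dmax, int(np.sum(s > 1e-14)))   # drop the smallest Schmidt values
    u, s, vh = u[:, :keep], s[:keep], vh[:keep, :]
    B1 = (u * np.sqrt(s)).reshape(Dl, d, keep)
    B2 = (np.sqrt(s)[:, None] * vh).reshape(keep, d, Dr)
    return B1, B2, s
```

The new shared bond has dimension min(Dmax, d times D), which is exactly the growth, and the truncation, that the next remarks are about.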
How much larger? For states of a two-level system it will be two times larger; for operators, four times. You can see it here: if the original bond dimension was D and the local Hilbert space dimension is d, the new bond dimension is d times D. And this is the crux of the problem: if I repeat this, the bond dimension multiplies at each step and grows exponentially. So I have to truncate. The way you do it: the SVD writes the matrix as a product of one unitary, a diagonal matrix of singular values, and another unitary, and the two unitaries basically become the two new matrices, the A-tildes, which are still of the larger size. Then, to keep everything under control numerically, I truncate, for instance back to the original dimension D, which means I drop the smallest Schmidt coefficients: from, say, 2D Schmidt coefficients I drop the D smallest ones and again end up with matrices of size D. The question then is how important the part I dropped is — you can ask me for details afterwards. Yes: after you do the SVD you get these singular values, and if your criterion is the best two-norm, you might remember from linear algebra that the rule is exactly to drop the smallest ones; that minimizes the two-norm of the difference. It's like doing some fitting. In physical terms, the two-norm criterion means you drop the Schmidt eigenvectors which carry the lowest probabilities, because the eigenvalues here are just probabilities. So it actually makes sense.

Okay, so now we have this tDMRG. Here is our schedule, so that we can keep track: now we come to unitary evolution. The setting we have in mind: suppose we have some inhomogeneous initial state, like a domain wall, and we want to calculate the unitary time evolution, propagating our state with U. — Yes, I will explain; it will change. You cannot. You cannot. The bottom line of this section I can tell you now: sorry, it doesn't work, almost never. So the result is that it doesn't work; your remark is in place. Here I will just explain a little why. It's not very surprising — you could have guessed it — and it is the reason why we will go to open systems, where things will be better.

The reason it doesn't work is the following. From the discussion on the previous slides, tDMRG will be efficient only if, say, the von Neumann entanglement entropy grows slowly enough, starting for instance from a simple product state. If we start with a state which already has a lot of entropy, there is no way we can do the time evolution; we have to start with a low-entropy, low-entanglement state to have at least a chance of an efficient evolution. Then you can ask: how fast can the entropy grow in the worst case? The answer is that in locally interacting systems it can grow at most linearly with time. There is a simple argument why.
At most linearly in time. Suppose we have a local Hamiltonian and we do the same Trotter decomposition into small time steps. Let me draw a picture: this is the spatial direction x, this is the time direction. We have our lattice sites with nearest-neighbor coupling, and, looking at a two-site nearest-neighbor transformation, what does it do? If I make a perturbation somewhere, this perturbation will be felt at the next time step only at the two neighboring sites, plus the site itself, because in one unit of time I can basically hop by one lattice site. Continuing this, you get a causal cone — a light cone, as it is also called — so there is a causal relation only within the light cone. This can be formalized in what is called the Lieb-Robinson bound: there is exponentially little influence outside the cone, whereas inside you can in principle build up entanglement and correlations. So if you start with a product state, a single site at some later time t can be correlated, can be entangled, only with sites inside the cone. The width of the cone is v t, where v is this velocity — perhaps 2 v t, but it doesn't matter. This means that in the worst case the von Neumann entropy grows linearly in time: S scales as v t. This is an upper bound.

Is this upper bound saturated? Yes it is, for instance in chaotic systems. You can argue that in chaotic systems you really do expect things to mix inside the cone as much as possible, and therefore to saturate the bound. And for what are called random circuits, you can show exactly that the growth is indeed v t. Random circuits — let me spend one slide on them — are a nice model of very chaotic quantum systems. Nice in the sense that you are able to calculate things analytically, because there is randomness in the circuit itself over which you can average, and then things simplify. How do they simplify? A random circuit is this: time goes to the right, and you apply two-site transformations; imagine these two-site transformations are random unitaries, drawn according to some measure. If you start with a product state, say all sites in the state zero, then after you apply enough of those gates you end up with what is called a random state, and such a random state has a lot of entanglement; in fact it saturates this bound. The way you show it is the following. You write your rho, expand it in Pauli matrices, and write down the rule for how the coefficients c_i transform after one application of a random gate. Because the gate is random, you can average over the randomness. You might think that the squared coefficients would depend on all the different combinations of coefficients, but after averaging over the randomness it turns out that the squared coefficients depend only on the squared coefficients, with just some Markov matrix in between. So now we are, with one foot, in the business you remember from the lecture on Monte Carlo methods. And very nicely, there is even more.
You can reduce this Markov matrix to a, well, simpler Markov matrix, which then somehow magically turns out to be equal to a solvable spin chain. I mean, there is no direct relation, but there is some symmetry; it turns out to be just the XY model, which you can then diagonalize. And if you remember, the convergence time of a Markov matrix is given by its gap: the difference between the largest eigenvalue, which is one, and the next largest one. You can calculate this convergence time, and in this language the inverse convergence time is precisely the entanglement speed, this v here. So you can get this v exactly: you know that, after a long time, you have such and such v for a given random circuit. An interesting remark here: you might think that picking the gate as randomly as possible, for instance a random four-by-four unitary according to the Haar measure, is as random as it gets, but it is not the fastest way to generate entanglement. The fastest way to scramble turns out to be some other gate. At first sight this might look surprising, but after giving it a little thought it is not: the Haar measure on four-by-four matrices is, in a way, biased towards gates which in the end do not entangle as much. In other words, if you take a fixed, strongly entangling two-qubit gate, like the CNOT gate, plus random one-qubit unitaries, you will be better off than taking a full random four-by-four unitary. Just a curiosity. Anyway, the point is that we always have this linear growth. So it doesn't work for chaotic systems; for random circuits it doesn't work. You might ask: does it work for integrable systems? Unfortunately, again, it doesn't. There are results using CFT, and also numerics on the transverse Ising model, showing that in this case too the entanglement entropy grows linearly in time, and therefore the required bond dimension of the MPS grows exponentially with time. Because D, remember, scales like two to the power S: if S grows linearly, D grows exponentially. Doesn't work. So you can ask yourself: does it ever work? Well, it sort of works if you really try hard, and trying hard means throwing some disorder into your system. If the system is localized, then perhaps it works, right? Localization is almost synonymous with things not spreading. And indeed, for free fermions with disordered hopping you get logarithmic growth of entanglement; for interacting particles you again get logarithmic growth, which is what is now called many-body localization (MBL). What is nice about this logarithmic growth in many-body localization is the contrast with single-particle localization, like Anderson localization: there the von Neumann entropy actually saturates, it does not grow with time at all. Whereas in many-body localized systems there is apparently no transport of energy or particles, but there is still some transport, some leaking, of entanglement if you want; only it is very slow, logarithmically slow. In this case the required bond dimension D is the exponential of log t, so it grows only polynomially in t. So yes, for localized systems you can do something, but for anything else it doesn't work.
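To collect the scaling argument in one place, here is the back-of-the-envelope summary, using the rule of thumb D(t) ~ 2^{S(t)} from the lecture:

```latex
\begin{aligned}
  S(t) &\sim v\,t          &&\Rightarrow\; D(t) \sim 2^{v t}
      && \text{chaotic and integrable systems: exponential, infeasible}\\
  S(t) &\sim \text{const.} &&\Rightarrow\; D(t) \sim \text{const.}
      && \text{Anderson localization: easy}\\
  S(t) &\sim c \log_2 t    &&\Rightarrow\; D(t) \sim 2^{c \log_2 t} = t^{c}
      && \text{many-body localization: polynomial, feasible}
\end{aligned}
```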
Okay, so this is a bit disappointing. Let us try to understand why it happens; actually, it's not so surprising. Looking at chaotic systems, for instance (and it's not much different for integrable systems), after some time you end up with a state that looks more or less generic, like a random state. Random states have a lot of entanglement; that's why the method fails. But on the other hand, this extensive entanglement in a random state is hidden in non-local degrees of freedom. In practice we usually don't care about some weird many-body correlations; what we care about are local observables. So yes, it fails, but it fails for a good reason: in doing this unitary evolution we are trying to calculate too much. We are trying to calculate the exact full state, with all its many-body correlations, which in the end we probably don't care about. This leads us to the following question. Would it perhaps be better, instead of evolving a pure state, to go to the Heisenberg picture and evolve some local observable? The two pictures are equivalent; we can either evolve psi or evolve an operator. From what I have said, it might appear that evolving a local operator would be better. Well, it turns out it is not; it's more or less the same. Evolving local operators, doing MPO in this case, the growth of entanglement in operator space is again linear in time for a generic system. There are exceptions, but for a generic system it doesn't work. Yeah, okay. Perhaps you shouldn't take these words too seriously; it's more philosophy and post-factum argument. I'm not claiming this is always so, it's just a very hand-waving idea. What is clear is that a random state is, on one hand, very entangled, but on the other hand, as I will show you on the next slide, pure states are somehow not the best way to think about this. Let me go directly to this remark, it will be easier to answer. A random state is typically a high-energy state, somewhere from the middle of the spectrum. So if you calculate the expectation value of some local observable in such a random state, you get something that looks like the equilibrium expectation value at infinite, or at least high, temperature. A random state is like an infinite-temperature state. Now you can do two things. If you are interested in infinite-temperature physics, you can take a random state and compute your expectation values, but then you have to deal with a lot of entanglement. On the other hand, if you go to statistical mechanics, you know the infinite-temperature Gibbs state is just e to the minus beta H with beta equal to zero, so it is just the identity, of course. (Say we have a finite local Hilbert space, fermions or spins, so that everything is well defined.) So the infinite-temperature equilibrium state is just the identity. And whereas a random psi is very complicated and highly entangled in the space of pure states, this identity, in the space of operators, is just a product of identities, like a separable state. If we write the identity as an MPO, we need matrices of dimension one: the coefficients of all the other Pauli matrices are just zero, zero, zero, zero.
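A tiny illustration of this contrast, with my own variable names (the system size is an arbitrary choice):

```python
import numpy as np

N = 8  # number of spin-1/2 sites

# Infinite-temperature state: rho = identity / 2^N.
# As an MPO, every site tensor has shape (bond, bond, d, d) with bond = 1:
site_tensor = (np.identity(2) / 2).reshape(1, 1, 2, 2)
mpo = [site_tensor] * N  # bond dimension 1 everywhere: a product operator

# Contrast: a random pure state of N qubits has near-maximal entanglement.
psi = np.random.randn(2**N) + 1j * np.random.randn(2**N)
psi /= np.linalg.norm(psi)
s = np.linalg.svd(psi.reshape(2**(N // 2), -1), compute_uv=False)
p = s**2
print("MPO bond dimension of the identity:", mpo[0].shape[0])
print("half-chain entropy of a random state: %.3f bits" % -np.sum(p * np.log2(p)))
```

The random state's half-chain entropy comes out close to the maximal N/2 bits, while the same infinite-temperature physics is carried by a bond-dimension-one MPO.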
So in this sense, and this is again a non-rigorous statement, it appears that pure states are, in a way, a singular concept. They correspond to a single point in phase space, whatever I mean by phase space in quantum mechanics. Density operators are better behaved; in classical mechanics they would correspond to densities in phase space, so to smooth objects. This is somehow reflected here as well. Now, this is a bit misleading, because I focused on high energies, which is the case most in favor of density matrices, right? But since later on I will be interested precisely in high-energy physics, that's fine. Okay, so let us shift our focus from pure states, a somewhat singular concept, to density operators. So now we are at point 5, open systems. All right, let's ditch unitary evolution; it never works, more or less. Let's take our system and, since we are anyway interested in transport, ask: how would an experimentalist measure transport? You take your system and couple it to reservoirs. If you take a resistor, you attach it to a battery and measure how much current flows for a given battery voltage, right? So let's do something in the same spirit: couple our system, which is then called the central system, to a bath, and study it in what is called an open-system setting. Now, there is a complicated way to do that. Let me just sketch it; my goal is to scare you into agreeing that it is horrible and you don't want to deal with it. The canonical, and I guess proper, way to approach the problem would be: take the Hamiltonian H1 of our central system, some Hamiltonian of the bath, some coupling between the central system and the bath, and unitary evolution on the total system, the total universe. You solve the evolution equation for the total density matrix and at the end take the reduced density matrix on the smaller space, because we are only interested in the central system; we care only about rho 1. I will not even write down what the resulting equation looks like, but you can imagine: tracing out the bath gives, in general, an integro-differential equation with a memory kernel, where the time derivative of your state at time t depends in principle on the whole history. Very messy. Now I will argue that, at least if you are interested in transport, dealing with this mess is sort of unnecessary. Namely, before going to the mathematics, let's think. Transport is well defined only in the thermodynamic limit, so we are dealing with a very large system, and we couple it at the boundaries. If the system is well behaved, then the precise thing you do at the boundary should not really matter for the bulk physics. For instance, if you have a chaotic system, then far away from the boundary the system will sort of self-thermalize, and the physics in the bulk will be independent of the details of the coupling. In other words, if I connect a resistor to a battery, it doesn't matter what kind of battery it is, lithium-ion or whatever; I get the same resistance out of it. So let's forget for a moment about this microscopic description, where we start with the total Hamiltonian and try to derive the reduced dynamics.
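Just to justify the word "messy": schematically, the reduced dynamics has the generic memory-kernel (Nakajima-Zwanzig) form below. This is a standard textbook expression, not an equation from the slides, and the kernel script-K is exactly the object the lecture declines to write out:

```latex
\frac{\mathrm{d}\rho_1(t)}{\mathrm{d}t}
  = -\,\mathrm{i}\,\big[H_1,\rho_1(t)\big]
  \;+\; \int_0^{t}\!\mathrm{d}s\;\mathcal{K}(t-s)\,\rho_1(s)
```

The integral over the whole history 0 <= s <= t is the memory effect; the Markovian assumption introduced next replaces it by a time-local generator.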
Let's ask ourselves: we are interested only in our central system, so what is the simplest mathematical framework that describes the evolution of the central system alone, totally forgetting about the environment? We will only be limited by the basic rules of quantum mechanics, right? So what are they? First, our generator, script L, called the Liouvillian, will be a linear generator: we preserve the linearity of evolution, just as on the total space. Makes sense. Second, L should preserve density matrices: if we start with a proper density matrix, a positive semidefinite operator, at time zero, we should also end up with a positive semidefinite operator at time t. No doubt about that. This brings us to the concept of a completely positive, trace-preserving map. Trace preservation is kind of trivial, but complete positivity is not; still, you can argue that you cannot get around it, it has to be satisfied. And on top of that we demand one more thing, point four: we want the bath to be, if you like, as memoryless as possible, which in technical terms means there are no memory effects, that it is Markovian. What does that mean mathematically? It means that if I evolve for a time t plus tau, I can split this evolution into two smaller steps; that is the Markovian condition. Then there is a very nice result, due to Lindblad for infinite-dimensional systems and to Gorini, Kossakowski and Sudarshan for finite systems: if you demand all four of these things, you can always write the evolution as an equation of a particular form, called the Lindblad equation, or sometimes the Gorini-Kossakowski-Sudarshan-Lindblad equation. And also the other way around: any equation of this form generates a Markovian, completely positive, trace-preserving map. So what do we have? It is a master equation, first order in time; the Liouvillian script L is the whole right-hand side. There is the unitary part, just the commutator with H, and then there is the dissipative part, which is written in terms of Lindblad operators L_j. These L_j can be anything; they don't need to be Hermitian, they can be arbitrary operators, and in an effective way they describe the bath. Whatever L_j you take, you get a legitimate evolution. Due to the dissipation, this superoperator script L, the total right-hand side, is non-Hermitian, which brings some technicalities. What is nice about it is that in a finite-dimensional system this generator always has a fixed point: there always exists a rho such that L acting on rho is zero, which means that the propagator, e to the L t, acting on this rho gives back rho itself. Such a rho is called the steady state. For typical choices of the L_j the steady state is unique, meaning that if you start from an arbitrary initial state, after a long time you converge to it. There are a number of books on these open-system settings, and there are ways to derive the Lindblad equation; I will not go into the details, you can ask me afterwards if you are interested. Let us look at an example. The example is for the case where the Hamiltonian is zero: we look only at the dissipative part of the Lindblad evolution.
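The equation itself is on the slide rather than in the transcript; for completeness, the standard GKSL form, which matches the description above (a unitary part plus a dissipator built from the L_j), is:

```latex
% Markovian (semigroup) condition on the propagator:
\mathrm{e}^{\mathcal{L}(t+\tau)} \;=\; \mathrm{e}^{\mathcal{L}t}\,\mathrm{e}^{\mathcal{L}\tau}

% GKSL / Lindblad master equation:
\frac{\mathrm{d}\rho}{\mathrm{d}t} \;=\; \mathcal{L}(\rho)
  \;=\; -\,\mathrm{i}\,[H,\rho]
  \;+\; \sum_j \Big( L_j\,\rho\,L_j^{\dagger}
  \;-\; \tfrac{1}{2}\,\big\{ L_j^{\dagger}L_j,\;\rho \big\} \Big)
```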
So let's take the simplest case: one spin-1/2 particle, a two-level system. And let's look at the Lindblad operator which is just proportional to sigma z. You can compactly write a spin-1/2 density matrix in the Bloch form, where r is a vector inside the Bloch ball and sigma is the vector of the three Pauli matrices; so r has three components. It is easy to check, by plugging this rho into the dissipator, that the Lindblad equation turns into a set of three equations for r, the Bloch vector; this is an exercise. What do these equations tell you? First, you see that r_z dot is zero, which means that this Lindblad operator preserves the magnetization in the z direction: the z component does not change. The x and y components, on the other hand, decay exponentially with rate gamma. Such an operator is called dephasing, because it kills the off-diagonal matrix elements: in the standard basis, sigma x and sigma y are the off-diagonal matrices. So if you start with some rho, dephasing kills the off-diagonal elements and you end up with a diagonal matrix. Then there are other operators, say sigma minus. What do they do? Looking at this term: sigma minus acting on a state where the spin is oriented up essentially flips that spin. If you do the algebra, then because of this flipping the z magnetization also decays, along with the x and y components. So if you start, for instance, with the spin oriented up, a pure state, such a Lindblad evolution will after a long time bring you to a steady state with the spin oriented down. Indeed, sigma minus tries to orient the spin in the down direction. Now you can take such Lindblad operators and study transport, and this is the last thing I will mention. Take your chain and act at the left end with two Lindblad operators, say gamma 1 times sigma minus plus gamma 2 times sigma plus on the first site, and similarly with two Lindblad operators at the right end. What do they try to do? One tries to orient your spin down at some rate, the other up at some different rate, and if you choose the rates appropriately, the two combined try to set the magnetization to a given value determined by the two gammas. You use one such driving at this end and a different driving, with different gammas, at the other end, meaning that you try to polarize the chain with one magnetization here and a different magnetization there, and then you can study transport. And you can use the tDMRG that I mentioned, in the same MPO setting. Now you might ask about the efficiency: is the entanglement large or not? It turns out that it is often low, so you can do very large systems. With other methods you can do 10 or 20 sites; here you can often do a few hundred sites. You can reach the thermodynamic limit and study transport, about which you can ask me in the break, because we have to end now. Thank you.
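For the record, the dephasing exercise works out as follows; I fix the normalization to L = sqrt(gamma/2) sigma^z so that the decay rate is exactly gamma, as stated in the lecture:

```latex
\rho = \tfrac{1}{2}\big(\mathbb{1} + \vec{r}\cdot\vec{\sigma}\big), \qquad
L = \sqrt{\gamma/2}\,\sigma^z
\;\Rightarrow\;
\dot{r}_z = 0, \qquad \dot{r}_x = -\gamma\,r_x, \qquad \dot{r}_y = -\gamma\,r_y
```

And here is a minimal brute-force sketch of the boundary-driven transport setting: exact diagonalization of the full Liouvillian for a tiny XX chain, not the MPO method of the lecture; the chain length, rates, and driving asymmetry mu are arbitrary illustrative choices:

```python
import numpy as np
from functools import reduce

# Single-site operators
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)  # sigma^-
sp = sm.conj().T                                # sigma^+
id2 = np.eye(2, dtype=complex)

N = 4  # kept tiny: the Liouvillian is a 4^N x 4^N matrix

def op_at(op, i):
    """Embed a single-site operator at site i of the N-site chain."""
    ops = [id2] * N
    ops[i] = op
    return reduce(np.kron, ops)

def two_site(a, b, i):
    ops = [id2] * N
    ops[i], ops[i + 1] = a, b
    return reduce(np.kron, ops)

# XX Hamiltonian
H = sum(two_site(sx, sx, i) + two_site(sy, sy, i) for i in range(N - 1))

# Boundary driving: try to polarize site 0 up and site N-1 down
gamma, mu = 1.0, 0.5
lindblads = [np.sqrt(gamma * (1 + mu)) * op_at(sp, 0),
             np.sqrt(gamma * (1 - mu)) * op_at(sm, 0),
             np.sqrt(gamma * (1 - mu)) * op_at(sp, N - 1),
             np.sqrt(gamma * (1 + mu)) * op_at(sm, N - 1)]

# Liouvillian in column-stacking convention: vec(A rho B) = kron(B.T, A) vec(rho)
D = 2**N
Id = np.eye(D)
Lmat = -1j * (np.kron(Id, H) - np.kron(H.T, Id))
for L in lindblads:
    LdL = L.conj().T @ L
    Lmat += (np.kron(L.conj(), L)
             - 0.5 * np.kron(Id, LdL) - 0.5 * np.kron(LdL.T, Id))

# Steady state = right eigenvector of the Liouvillian with eigenvalue zero
vals, vecs = np.linalg.eig(Lmat)
rho = vecs[:, np.argmin(np.abs(vals))].reshape(D, D, order='F')
rho = (rho + rho.conj().T) / 2  # clean up numerical noise
rho /= np.trace(rho)

# Magnetization profile <sigma^z_i> in the nonequilibrium steady state
profile = [np.trace(rho @ op_at(sz, i)).real for i in range(N)]
print("steady-state magnetization profile:", np.round(profile, 3))
```

The printed profile interpolates between positive magnetization at the driven left end and negative at the right; from such profiles and the corresponding spin current one reads off the transport behavior discussed above.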