In the other lectures, entanglement has already come up very often, so there will be many things that you have already heard, that you are already using, and many things that you are very familiar with. I am going to go very slowly: I will go back to the very roots, start from the very beginning, and then generalize things to multipartite entanglement. So I will start with chapter 0, which is the motivation. This will be very short, because you all know that entanglement is very important: we know that we can use it, and we know that we have to learn it. Then I want to separate this lecture into two parts. Part one will be about bipartite entanglement. Now, you might think: we want to learn about multipartite entanglement, and we already know everything about bipartite. I hope you will see that this is not the case; there are still some things that will be new to you. Of course, there will also be many things that you are very familiar with, and I think that this is good, because, as I said, I would like to use your knowledge to then generalize it to the multipartite case. So we will discuss things that are very special to the bipartite case, but also things that can be generalized. This part will be divided into three sections. The first one will be about not-entanglement, that is, about separability. Then we will talk about pure bipartite states. And then we will discuss how we can measure entanglement, so we will talk about entanglement measures and what are called entanglement monotones. After that, as I said, we will go to the second part of this lecture (by "this lecture" I mean the three lectures that we have on multipartite entanglement), and there we will talk about the multipartite case. I will not give you the details of that part now; we will do that later on.
But the intent is to see what we can generalize from the bipartite case and what we cannot, and where the problems are. So let us start with the motivation. When we talk about entanglement, probably many of you have a different notion of entanglement in mind. They all mean the same thing, but you associate something special to it, depending on what you are interested in, on the lectures that you have heard, and so on. But the very root of entanglement is the following picture. We have A, which is Alice, and B, which is Bob, and they are spatially separated from each other; say one is in Trieste and one is in LA. Each of them holds a quantum system. Because they are spatially separated, they can only communicate with each other classically: they can do operations on their own systems, and they can communicate classically. Alice, for instance, could apply some POVM, some generalized measurement, and send the outcome that she obtains to Bob. Then Bob can, depending on that outcome, do some operation on his system, and so on. The point is that the idea of entanglement, at its origin, is that these two parties, or more parties, are spatially separated from each other, which means that we cannot act on both systems at the same time; we cannot do that, because they are far away. This is what is called LOCC: local operations assisted by classical communication. And we know that if they have at their disposal not only LOCC but also entanglement, say a maximally entangled state such as the |Φ+⟩ state, then they can do things that they could not have done before. With this state, for instance, they can do fancy things like teleportation: Alice can teleport a state that was on her side over to Bob.
But in order to do that, you need entanglement. Without entanglement, you cannot teleport. So this is the first lesson that we learn: entanglement is a resource, and it is a resource to overcome the natural restriction to LOCC, local operations assisted by classical communication. I guess that, in one way or another, you all know that. So one motivation for asking why we want to know about entanglement, and which state is more entangled than another, is precisely that it is a resource theory: first, we want to see what is the better resource, and second, what we can do with it, so we also want to find new applications. Another important topic, and this is also what you heard this morning in the lecture by Norbert Schuch, for instance, is the following. Consider a condensed matter system, so you have many, many subsystems, and consider the system at temperature T = 0. Then the correlations contained in the whole system coincide with entanglement, because you have a pure state, and this entanglement dictates the physical behavior of your system. If we want to learn what happens to our system, it is determined by the entanglement that is contained in it. So it is clear that, for all these reasons, we have to understand entanglement better. Here we talk about bipartite entanglement; of course this can be generalized, and a condensed matter system is clearly a multipartite system. If we knew much better how to deal with multipartite entanglement, we would probably gain even more insight than we already have, and the insight already gained using entanglement theory is very big. Okay, so this was the motivation. It was probably not required at all, because you all know that studying entanglement is important.
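The teleportation protocol mentioned above can be simulated directly. The following is my own minimal numpy sketch, not part of the lecture: Alice measures her two qubits in the Bell basis, sends the classical outcome, and Bob applies the matching Pauli correction, which leaves his qubit in the input state in every branch.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Shared resource: maximally entangled state |Phi+> = (|00> + |11>)/sqrt(2)
phi_plus = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

# Bell basis for Alice's measurement, with Bob's matching corrections
bell_basis = [
    phi_plus,                                                   # |Phi+> -> I
    (np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2),   # |Psi+> -> X
    (np.kron(ket0, ket0) - np.kron(ket1, ket1)) / np.sqrt(2),   # |Phi-> -> Z
    (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2),   # |Psi-> -> ZX
]
corrections = [I2, X, Z, Z @ X]

# Unknown qubit state that Alice wants to teleport
psi = 0.6 * ket0 + 0.8j * ket1

# Three qubits: Alice's input, Alice's half of the pair, Bob's half
total = np.kron(psi, phi_plus)

probs, fidelities = [], []
for bell, corr in zip(bell_basis, corrections):
    # Alice projects her two qubits onto one Bell state (one LOCC branch)
    proj = np.kron(np.outer(bell, bell.conj()), I2)
    post = proj @ total
    p = np.vdot(post, post).real          # each outcome occurs with p = 1/4
    post /= np.sqrt(p)
    # Bob, told the classical outcome, applies his local correction
    final = np.kron(np.eye(4, dtype=complex), corr) @ post
    # Bob's qubit now carries |psi>, i.e. final = |bell> x |psi>
    probs.append(p)
    fidelities.append(abs(np.vdot(np.kron(bell, psi), final)))
```

In every branch the fidelity of Bob's qubit with the input state is 1, even though only classical bits travelled from Alice to Bob; the quantum information is carried by the shared entanglement.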
So let me now come to the first part, about bipartite entanglement. Let us go back to the very roots: what is the definition of entanglement? If I give you a pure state, when do we call it entangled? If it is not separable. And what does that mean? Exactly: |ψ⟩ is called entangled, which means not separable. I am not going to be mathematically super precise here, because otherwise I would not be able to teach anything, so I will not say "if there exists no state such that" and so on. Let us just say: |ψ⟩ is entangled if it cannot be written as a product state. Okay, so we know that. Now, of course, we can generalize this, because we do not only have pure states. What happens if we have a mixed state? When is a mixed state called entangled or separable? Exactly: ρ is called separable if it can be written as a convex combination of product states. I do this here for the bipartite case; of course this generalizes to the multipartite setting. Now, where does this definition come from? Remember that we were talking about the very roots of entanglement; this definition comes right from that picture. If I prepare a pure state with LOCC, then I will have a tensor product. And if I have a mixed state and prepare it with LOCC, then I will always end up with a separable state; I will not be able to generate anything outside the separable set. This is why these definitions are physically motivated and make perfect sense: they match exactly this picture. Now, is it difficult to decide whether a state is separable or not? The so-called separability problem is: given a state ρ, decide whether it is separable or not. Is this difficult? What do you think? A little bit louder? We have what? The Peres criterion, so the PPT criterion. Yes, but is this necessary and sufficient? Okay, so why is it difficult to decide?
The answer is that it is difficult to decide, but why? The reason is that one mixed state has infinitely many decompositions: I can write a mixed state in infinitely many different ways, because of the following theorem. If you have a state ρ and write it as ρ = Σ_i p_i |ψ_i⟩⟨ψ_i|, with i running from 1 to M, then you can also write it as ρ = Σ_j q_j |φ_j⟩⟨φ_j|, with j running from 1 to N, where M and N can be different. This is possible if and only if there exists an isometry V relating the unnormalized states: √p_i |ψ_i⟩ = Σ_j V_ij √q_j |φ_j⟩. So you can write the state in a different decomposition whenever there is an isometry between those ensembles. But there are infinitely many isometries, which means there are infinitely many ways of writing a state. What does this mean physically? It means that I can prepare my state ρ in infinitely many ways: either by generating the state |ψ_i⟩ with probability p_i, or the states |φ_j⟩ with probability q_j. And there is no information in the system about how you prepared it; no matter how you prepared it, you always end up with the same state ρ. The separability problem now asks whether, out of these infinitely many decompositions, there exists one that is a convex combination of product states. And from here you already see why this problem is very hard to solve: how do I search through infinitely many decompositions? So let me summarize by saying that there are infinitely many decompositions, and so the separability problem seems to be hard. This is not a proof, but it seems to be hard.
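This non-uniqueness can already be seen for a single qubit. The following small numpy check is my own illustration, not from the lecture: two different ensembles give the same ρ, and they are related by a unitary, which is a special case of the isometry in the theorem above.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

def mix(probs, states):
    """rho = sum_i p_i |psi_i><psi_i|"""
    return sum(p * np.outer(s, s.conj()) for p, s in zip(probs, states))

rho1 = mix([0.5, 0.5], [ket0, ket1])    # prepare |0>, |1> with p = 1/2 each
rho2 = mix([0.5, 0.5], [plus, minus])   # prepare |+>, |-> with p = 1/2 each

# Both preparations yield the same density matrix (the maximally mixed state)
same = np.allclose(rho1, rho2)

# The theorem: sqrt(p_i)|psi_i> = sum_j V_ij sqrt(q_j)|phi_j> for an isometry V.
# Here V is simply the Hadamard matrix:
V = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
ens1 = [np.sqrt(0.5) * ket0, np.sqrt(0.5) * ket1]
ens2 = [np.sqrt(0.5) * plus, np.sqrt(0.5) * minus]
related = all(
    np.allclose(ens1[i], sum(V[i, j] * ens2[j] for j in range(2)))
    for i in range(2)
)
```

No measurement on the system can tell the two preparations apart, exactly as stated above.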
But in fact, it was proven by Gurvits in 2003 that it is indeed hard: the problem lies in the complexity class called NP-hard. Is everybody familiar with complexity classes? Should I briefly recall them? Let me do that. You know that there is the complexity class P, polynomial time. A problem in P is something like multiplying numbers: something you can do in time polynomial in the input size, so multiplying two numbers is easy. The next complexity class is called NP, and NP does not mean "non-polynomial time"; it stands for nondeterministic polynomial time. These problems are maybe not solvable in polynomial time, but what they are for sure is checkable: a proposed solution can be verified in polynomial time. Who knows an example of something that is in NP? Factoring, exactly. Factoring is in NP because if I give you a number and ask for its prime factors, that might be difficult to do, but once I give you the prime factors, it is easy to multiply them and check whether they are the solution. That is why it is in NP. And one of the biggest open questions is whether P and NP are the same. Now, a nice property of the class NP is that there are so-called NP-complete problems: problems inside NP with the property that any problem in NP can be reduced to them in polynomial time. This means that if you have an algorithm that solves an NP-complete problem, then all the other problems in NP can be solved essentially as efficiently, because there is a polynomial-time reduction. An example is the 3-satisfiability problem.
And then there are problems that have the same nice property as the NP-complete ones, but that might not themselves be in NP: these are the NP-hard problems. They might lie outside NP, but they share the property that any problem in NP can be reduced to them. Now, the result of Gurvits says that the separability problem is NP-hard. This means that we are for sure not aware of any efficient solution: the problem is indeed difficult to solve, and this is the proof of that. Now, the nice thing is that the problem has nevertheless been solved as well as it can be solved, in a paper by David Pérez-García, and in another paper whose authors I forget right now; one is from 2007 and the other from 2000. So there is a solution to the separability problem. Of course, there is no efficient solution, since the problem is NP-hard, so we do not expect one, but there is a solution, and it is as good as it can be. The problem has been reduced to what is called a semidefinite program, which is something that can be solved efficiently in its size; however, for the separability problem the size of this program is exponential, which is why there is no contradiction. Let me just briefly tell you how this problem has been solved; we are not going to talk much about it. The idea is related to what we are going to discuss in a second, entanglement witnesses: one uses entanglement witnesses to find a certificate that something is entangled or not. "Entangled or not" is not super exact here, and I will come back to it in a second. Maybe it makes more sense to first talk about entanglement witnesses and then come back to this problem, so let me keep some space here. What I want to say for now is just that this problem has been solved.
But still, you hear many talks and read many papers about conditions for separability. You might wonder: why are people still working on separability if there is a solution? Any answer? Anybody working on separability here? Any guess why it is still interesting? Can you speak up? A better solution in which sense? Yes, but it cannot be much faster, because the problem is NP-hard, so the solution is as good as it can be. Yes, positive maps are used to find conditions for separability, but why are people still working on that? Well, the answer is clear. Imagine that you are doing an experiment: you create a state ρ and you want to show that it is not separable. You do not want to run something very difficult; you want a simple way of checking that it is not separable. Yes, that is one way of doing it, exactly; we will come to this. So I am just motivating why it is still very interesting to talk about separability and to find conditions for it. First of all, from an experimental point of view, you want something that allows you to certify that your state is entangled by making only very few measurements. The other thing is that for this semidefinite program, for this solution, I need to know my state ρ. But maybe I do not know my state in an experiment, because I cannot do tomography: the system is too large, or whatever. So first, I want simple conditions for certain states that I can easily check, and second, I have to take into account that in an experiment I may not even have access to the whole density matrix. This is why it makes perfect sense to continue working on the separability problem.
In particular, if you do something for an experiment, or if you consider certain subclasses of states, and things like that. Now, as many of you already said, there are some very simple conditions. One is the one you mentioned before: the PPT criterion. It says that if the Hilbert space is C²⊗C² or C²⊗C³, so two qubits or a qubit and a qutrit, then ρ is separable if and only if the partial transpose of ρ is positive semidefinite. You might recall the definition of partial transposition: it is really doing the transpose on one subsystem only. If I write ρ = Σ_ij |i⟩⟨j|_A ⊗ ρ_B^ij, where |i⟩⟨j| acts on system A and the ρ_B^ij are operators on B, then the partial transpose with respect to A is obtained by transposing the first factor: ρ^T_A = Σ_ij |j⟩⟨i|_A ⊗ ρ_B^ij. This condition is necessary and sufficient for two qubits or a qubit and a qutrit. In higher dimensions we still have that if ρ is separable, then the partial transpose is positive semidefinite, but we do not have the other direction: there it is only a necessary condition for separability, not a sufficient one. This direction can be proven very easily; it is one line. If ρ is separable, you can write it as a convex combination of product states, ρ = Σ_i p_i |e_i⟩⟨e_i| ⊗ |f_i⟩⟨f_i|. Now, what happens if I take the partial transpose? The transpose with respect to system A amounts to a complex conjugation of the first factor, so I get ρ^T_A = Σ_i p_i |e_i*⟩⟨e_i*| ⊗ |f_i⟩⟨f_i|. You might wonder: the conjugation depends on the basis in which it is done, of course, but the positivity does not. This operator is positive semidefinite in whatever basis you take the conjugation, and clearly it is positive, because it is just a mixture of projectors.
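The PPT test is easy to implement numerically. The following is my own minimal sketch (function names are mine), using the standard reshape trick for the partial transpose:

```python
import numpy as np

def partial_transpose(rho, dA, dB):
    """Transpose only subsystem A of a state on C^dA x C^dB."""
    r = rho.reshape(dA, dB, dA, dB)          # indices (i, k; j, l), i/j on A
    # swap the two A indices i <-> j, i.e. transpose the first factor
    return r.transpose(2, 1, 0, 3).reshape(dA * dB, dA * dB)

def is_ppt(rho, dA, dB, tol=1e-12):
    """True iff the partial transpose is positive semidefinite (up to tol)."""
    return np.linalg.eigvalsh(partial_transpose(rho, dA, dB)).min() >= -tol

phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho_ent = np.outer(phi_plus, phi_plus.conj())        # maximally entangled
rho_sep = np.diag([0.5, 0, 0, 0.5]).astype(complex)  # mixture of |00>, |11>
```

For |Φ+⟩ the partial transpose has eigenvalue -1/2, so the state is detected as entangled; the classical mixture of |00⟩ and |11⟩ stays PPT, exactly as the one-line proof above guarantees for separable states.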
So if ρ is separable, then the partial transpose is for sure positive semidefinite. That is a simple condition, and something one can easily check given the state ρ. Now, another powerful method are the so-called entanglement witnesses. Without entering into too much detail, the definition is simply this: a witness is a Hermitian operator W that is not positive semidefinite, but whose expectation value on any separable state is non-negative, Tr(Wρ) ≥ 0 for all separable ρ. So it is an operator whose expectation value on separable states is always non-negative; however, since it is not positive semidefinite, there exists some state for which the expectation value is negative. What such a witness does is the following. Draw the set of all states on our Hilbert space; the set of separable states sits somewhere inside it. Due to the so-called Hahn-Banach theorem, the witness defines a hyperplane separating the two sets: on one side the expectation value is negative, on the other side it is positive. This means that if I have a state on the negative side and I measure this witness, the expectation value will be negative, so the witness detects the entanglement of that state; this fact guarantees that your state is entangled. However, if the expectation value is positive, you do not know anything, because you cannot tell whether you are inside the set of separable states or in the entangled region on the positive side; in both cases it would be positive. To decide separability with certainty in this way, you would need to consider all entanglement witnesses, and again there are infinitely many. But this is a useful approach: you can design entanglement witnesses to detect certain states. For instance, if I know that my states have certain properties, I can design witnesses tailored to them.
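As an illustration (my own, using the standard witness for the maximally entangled state), the defining properties can be checked numerically: W = (1/2)·1 - |Φ+⟩⟨Φ+| is non-negative on product states, because no product state has overlap larger than 1/2 with a maximally entangled state, yet it is negative on |Φ+⟩ itself.

```python
import numpy as np

phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
P = np.outer(phi_plus, phi_plus.conj())

# Standard witness for |Phi+>: W = (1/2) I - |Phi+><Phi+|
W = 0.5 * np.eye(4) - P

def expval(W, rho):
    return np.trace(W @ rho).real

rho_ent = P                              # the Bell state itself
rho_sep = np.diag([0.5, 0, 0, 0.5])      # separable mixture of |00>, |11>

# Spot-check non-negativity on random product states |a> x |b>
rng = np.random.default_rng(0)
vals = []
for _ in range(200):
    a = rng.normal(size=2) + 1j * rng.normal(size=2); a /= np.linalg.norm(a)
    b = rng.normal(size=2) + 1j * rng.normal(size=2); b /= np.linalg.norm(b)
    ab = np.kron(a, b)
    vals.append(expval(W, np.outer(ab, ab.conj())))
```

The expectation value on |Φ+⟩ is -1/2, certifying entanglement, while every sampled product state (and the separable mixture) gives a non-negative value.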
For instance, I can design witnesses that allow me to detect exactly those states, and I can also look for witnesses that are ideal from an experimental point of view, witnesses that do not require many measurement directions, and things like that. All of this has been studied, and as far as I saw, next week in the workshop you will hear some talks about entanglement witnesses. Entanglement witnesses are also one-to-one related to the positive maps that we heard about before. Positive maps are in a sense stronger than witnesses, because one does not only check whether a trace is negative, but whether an operator, namely the output of the positive map, is positive semidefinite. But I do not want to go into these details now. What I would like to do is use this definition of entanglement witnesses to briefly explain the solution of the separability problem. The idea, very roughly, is the following. You do not ask whether ρ can be written as a convex combination of arbitrary states on A tensored with arbitrary states on B. Instead, you discretize one of the two Hilbert spaces: on A, you do not consider the whole Hilbert space, but what is called a delta-net of the Hilbert space. Pictorially, a delta-net is this: to check whether a state has such a decomposition, I would need to consider all possible states |e_i⟩ in the Hilbert space of A. A delta-net instead takes sufficiently many states, but a discrete, finite number of them: I pick points in such a way that for any other state that is not in this set of points, there is a point in the set that is close to it. That is called a delta-net.
More precisely, any state in the Hilbert space has distance at most delta to some state in this set. So now you are not asking whether ρ can be written with arbitrary |e_i⟩, but whether it can be written with the |e_i⟩ taken from the delta-net; with this, you discretize the problem. Now, where does this lead? Here is the set of all states, and here the set of separable states. Where is the set of states for which I can find such a discretized decomposition? Inside the separable set, outside it, crossing it? It is inside the separable set, exactly. Call this set C: the states admitting a decomposition where the |e_i⟩ lie in the delta-net S_delta of the Hilbert space of A and the |f_i⟩ are arbitrary states in the Hilbert space of B. This set sits inside the separable set. What you ask with the semidefinite program is: is your state inside C, or is it really outside the separable set? This is what is called the weak membership problem: you ask whether something is really inside or really outside, up to an error. And the reason why you have to allow an error is easy to explain. What does it mean that something is positive semidefinite? Imagine that I have an eigenvalue of minus 10 to the minus 17: is it positive or negative? There is an error, and the same kind of error appears here; of course, everything is always up to an epsilon, as it is for the PPT criterion too. Now, the semidefinite program does the following: it either finds a decomposition in C, or it finds a witness certifying that your state is entangled. And this is how the problem is solved. Just one more thing: the point is, of course, also that for any separable state there is a state close to it that lies in C; otherwise this would not work.
So the boundary of C is really very close to the boundary of the separable set: there is always something close. Now, this brings me to the next interesting point, because when I say that something is close, what do I mean by that mathematically? I need a distance, and the distance used here is the one induced by the trace norm, which is a very strong distance measure. Interestingly, it has been shown that if you relax this, so if you do not take the trace norm but a different norm, then the problem is no longer that difficult: there then exists a quasi-polynomial-time algorithm to solve the separability problem. This is something very interesting, in my opinion, due to Fernando Brandão, Matthias Christandl, and Jon Yard, from, I think, 2010. What they did was the following. They said: let us not use the trace norm, with the distance induced by it, but what is called an LOCC norm. Let me remark on what this means. You may know that the distance induced by the trace norm is a measure of how well you can distinguish two states; that is why it is so relevant. It is related to the optimal success probability of distinguishing two states by all possible means, where you are allowed to do whatever you want. The LOCC norm has exactly the same meaning, but the measurements are restricted to LOCC: it tells you how well you can distinguish two states via LOCC. In our context, such a norm makes perfect sense, because we are restricted to LOCC anyway.
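To make the distinguishability interpretation of the trace norm concrete, here is a small numeric sketch of my own: the Helstrom formula says that the optimal probability of correctly distinguishing two equiprobable states, with any measurement allowed, is p = (1 + D)/2, where D = (1/2)‖ρ - σ‖₁ is the trace distance.

```python
import numpy as np

def trace_distance(rho, sigma):
    """D(rho, sigma) = (1/2) || rho - sigma ||_1, via eigenvalues of the
    Hermitian difference (the trace norm is the sum of absolute eigenvalues)."""
    return 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

rho = np.diag([1.0, 0.0])                      # |0><0|
plus = np.array([1, 1]) / np.sqrt(2)
sigma = np.outer(plus, plus)                   # |+><+|

d = trace_distance(rho, sigma)
# Helstrom bound: optimal success probability of telling the two apart,
# given each with probability 1/2
p_success = 0.5 * (1 + d)
```

For |0⟩ versus |+⟩ the trace distance is 1/√2, so the best possible measurement succeeds with probability about 0.85. The LOCC norm mentioned above replaces "any measurement" by "measurements implementable with LOCC", which can only make this probability smaller.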
And what they showed is that this works if you consider this norm. To give you some details: the trace norm can be written as a maximization over all measurement operators 0 ≤ M ≤ 1, namely ‖X‖₁ = max Tr[(2M − 1)X], whereas the LOCC norm is the same expression maximized only over those M that can be realized via LOCC. So you see that it is not really a generalization but a reformulation of the trace norm, restricted to what can be done with LOCC, using the physical interpretation of the trace norm, which I think is very nice. And in my opinion this makes perfect sense, because I want to know whether my state is separable or not. If I care about separability, my systems have to be spatially separated; otherwise I could do whatever I want on the whole system and would not care so much about separability. But if they are separated, the parties can only act locally and communicate classically; this is what we had at the very beginning. So if I want to know how close my state is to a separable state, it makes perfect sense to use this LOCC norm: you want to know whether you can distinguish your state from something separable, and you are restricted to LOCC. And the nice thing, as I said, is that if you do that, then you can show that there exists a quasi-polynomial-time algorithm to solve the separability problem. So just changing the notion of distance that you use makes this difference. Question? Yes: we are in finite dimensions, so all norms are equivalent, exactly; but the point is that the equivalence constants scale with the dimension of the Hilbert space, and because of the delta-net you have an exponential scaling.
And that is where the difference comes from; this is exactly what I was trying to explain, that changing the norm from the trace norm to the LOCC norm gives you something faster. Now, when I was talking about other conditions for separability, I forgot to mention a few more. There are of course many others: there is what is called the realignment criterion, there are linear contractions, and all these kinds of things. Maybe you have heard about them; otherwise, just have a look at the references, which I should give you. There are many reviews on entanglement. One is by the Horodeckis, all four of them, from around 2009; it is called "Quantum entanglement". And there are many others: for instance, a more recent one by Karol Życzkowski, one by Jens Eisert, another by Otfried Gühne, and one by Jens Siewert. The Horodecki review covers a lot of what I am talking about here, apart of course from the newer things; it is a good reference, I think, and all these other conditions for separability can, for instance, be found there. So this was as much as I wanted to say about mixed states, because now I would like to focus on pure states. This will be chapter two: pure bipartite states. Why do I want to focus on pure states? Because I would like not only to know when a state is separable, but how entangled it is in case it is entangled. On the one hand, I want to study this resource: we have seen that entanglement theory is a resource theory, and I would like to know which state is a better resource than another.
And in doing that, I would also like to know what the applications of entanglement are; I want to focus on how we can use entanglement. For this it makes sense to look at pure states, because for pure states it is more apparent what you can do with them. A mixed state is very complicated: we have seen that even asking whether it is separable or entangled is very difficult. So if you want to see how a mixed state can be used in a certain context, or which context would make it useful, that is much more involved than asking the same question for pure states. On the other hand, pure states are in a sense more entangled, and we already have complete information about the system, so it may also be easier to use that entanglement. This is why I want to focus on pure states for basically the rest of the whole lecture. So again: entanglement theory is a resource theory. What does this mean? A resource theory needs free operations and free states. What are our free states, that is, what costs us nothing to prepare? You should always keep in mind the very basic picture of Alice and Bob far away from each other. What can we prepare without any cost? It is clear, no? Separable states, exactly: our free states are the separable states. And what are our free operations? LOCC, exactly. All local operations assisted by classical communication are free, because that is something I can do without using entanglement. So what do we learn from that? Imagine that I have a pure state |ψ⟩ which I can transform into some state |φ⟩ via LOCC; I write this for short as an arrow labeled LOCC.
What I mean by that is the following; the state could also be multipartite, but let me do it for the bipartite case for the moment. I have some state |ψ⟩ and act on it with LOCC: Alice applies some operation and communicates her result to Bob, he applies some operation depending on this result, and so on, back and forth, until they end up with some other pure state |φ⟩. That is what I mean by transforming |ψ⟩ into |φ⟩ via LOCC. Now, entanglement theory is a resource theory, as we just convinced ourselves, so whatever I do with LOCC can only make things worse: with LOCC I can only lose entanglement, I cannot create it. This means that the entanglement of |ψ⟩ has to be larger than or equal to that of |φ⟩: E(ψ) ≥ E(φ). Which entanglement? What do I mean by E? For the moment, an arbitrary entanglement measure, because entanglement, however we measure it, cannot increase under LOCC; I cannot win. As long as E is a valid entanglement measure, it can never happen that E(ψ) is smaller than E(φ), because I cannot create entanglement with LOCC. So this holds for all entanglement measures. And here you see that I am being a bit loose, because I have not yet defined what an entanglement measure is; I will do that in a second. Yes, a question: the probabilistic version leads to so-called entanglement monotones, and we will also come to this in a second. Here I just focus on really transforming |ψ⟩ into |φ⟩ via LOCC, really ending up in the pure state |φ⟩ at the end. And of course I can generalize this: instead of |ψ⟩ to |φ⟩, I consider a mixed state ρ transformed to σ. If this can be done via LOCC, then it must hold that E(ρ) ≥ E(σ), simply because LOCC cannot increase entanglement, by whatever measure we want to use; it can only decrease it or leave it the same.
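To put one concrete measure behind the symbol E, the standard entropy of entanglement of a pure bipartite state is the von Neumann entropy of the reduced state. The following is my own small sketch (the function name is mine, and this is only an illustration of comparing E values, not of an LOCC protocol): |Φ+⟩ carries one ebit, and a less entangled pure state has strictly smaller E, so no LOCC protocol can take the latter to the former.

```python
import numpy as np

def entropy_of_entanglement(psi, dA, dB):
    """E(psi) = S(rho_A) = -sum_i lambda_i log2(lambda_i), where the
    lambda_i are the squared singular values of the coefficient matrix."""
    s = np.linalg.svd(np.asarray(psi).reshape(dA, dB), compute_uv=False)
    lam = s ** 2
    lam = lam[lam > 1e-15]                 # drop zero Schmidt coefficients
    return float(-(lam * np.log2(lam)).sum())

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)        # maximally entangled
weak = np.array([np.sqrt(0.9), 0, 0, np.sqrt(0.1)])   # weakly entangled
```

Here E(Φ+) = 1 ebit, while the weakly entangled state has E ≈ 0.47, so the ordering E(Φ+) > E(weak) is consistent with the LOCC monotonicity just discussed.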
Okay, so this is the message that I want you to keep in mind: entanglement theory is a resource theory, and this relation holds even though we haven't yet given the definition of an entanglement measure and so on, okay? Now let me focus on the bipartite case. For pure bipartite states, so we are talking now about pure bipartite states, there exists a very well-known and very important decomposition that you all know, which is the Schmidt decomposition, exactly. So let me recall it, because we will see why it is so important, and you will see that this is also one of the reasons why multipartite entanglement is so difficult, because there we don't have such a decomposition, okay? So, the Schmidt decomposition. Let me just get the numbering right here, sorry. So this was A, the introduction to this chapter, and now we have B, the Schmidt decomposition. Okay, so what is it? [An answer from the audience.] Yes, that works if you have qubits; if you have higher dimensions, not exactly. The statement is that any state psi can be written, up to LUs, as a sum of positive coefficients times the states |ii>, with i running from 0 to d-1, so psi is LU equivalent to sum_i sqrt(lambda_i) |ii>. Okay, so we have d coefficients here. And what do I mean by this symbol here? By this I mean that psi is equal to some unitary on system A tensor some unitary on system B times this state here, okay? So LU stands for local unitaries, and any state is LU equivalent to such a state. And these coefficients here, these lambda_i's, are called the Schmidt coefficients, okay? They are larger than or equal to zero and they sum up to one. And what I want to use now is the following order. So I will call this vector lambda(psi), with an arrow that points downwards, the vector of Schmidt coefficients where the coefficients are sorted. Okay, let me actually start the index from one here, so i runs from 1 up to d, and lambda_1 >= lambda_2 and so on.
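As a small numerical aside (not from the lecture itself): the Schmidt decomposition of a bipartite pure state is nothing but the singular value decomposition of its coefficient matrix, so the sorted Schmidt coefficients are easy to compute. The function name below is my own choice, and I use the lecture's convention that the lambda_i sum to one, i.e. the state is sum_i sqrt(lambda_i) |ii>.

```python
import numpy as np

def schmidt_coefficients(psi, dA, dB):
    """Sorted (descending) Schmidt coefficients lambda_i of a pure
    bipartite state |psi> in C^dA (x) C^dB.

    Convention as in the lecture: |psi> ~ sum_i sqrt(lambda_i) |ii>,
    so the lambda_i are the squared singular values and sum to one."""
    # Reshape the state vector into a dA x dB coefficient matrix
    M = np.asarray(psi).reshape(dA, dB)
    # Singular values s_i of M give the Schmidt decomposition; lambda_i = s_i^2
    s = np.linalg.svd(M, compute_uv=False)
    return np.sort(s**2)[::-1]

# |Phi+> = (|00> + |11>)/sqrt(2): two equal Schmidt coefficients 1/2
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(schmidt_coefficients(phi_plus, 2, 2))  # approximately [0.5, 0.5]
```

This also works for unequal local dimensions, since `reshape(dA, dB)` just has to match how the product basis was ordered.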
Okay, so in what follows I will always consider sorted Schmidt coefficients, without loss of generality. Now, the first thing that we see from this nice Schmidt decomposition concerns LU equivalence. So when are two states LU equivalent? Say we have bipartite states psi and phi. Then psi is LU equivalent to phi if and only if... yes, by definition this means that you can write psi as U_A tensor U_B applied to phi. But when is this the case? Well, that's if and only if the Schmidt coefficients are the same, right? Because this state is LU equivalent to its Schmidt decomposition, the other one is also LU equivalent to a state in Schmidt decomposition, and so the states are LU equivalent if and only if the sorted Schmidt vectors are the same. Okay, so this means that this decomposition already tells us, basically for free, when two states are LU equivalent. Now, why is this interesting? Why do I care whether the states are LU equivalent or not? Well, let's go back to this other blackboard here. What happens if this state is LU equivalent to that state? Of course, I can go via LUs, which are LOCC, right? Because local unitaries are of course in LOCC; this is the simplest case of LOCC: Alice applies a unitary and Bob applies a unitary. So it is for sure that if psi and phi are LU equivalent, then psi can be transformed to phi via LOCC. But also the other way around, right? Because I can undo the local unitaries. So from here we see that the entanglement of psi has to be what, in relation to the entanglement of phi? The same, exactly. And this is again true for any entanglement measure, right? Because one direction tells us that the entanglement of psi is larger than or equal to that of phi, and the other direction tells us the opposite. So E(psi) = E(phi) has to hold for any entanglement measure for LU equivalent states.
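To make this LU-equivalence criterion concrete, here is a small sketch (again my own helper names, not from the lecture) that compares sorted Schmidt vectors, and checks it on |Phi+> against a locally rotated copy of itself and against a product state:

```python
import numpy as np

def schmidt_coefficients(psi, dA, dB):
    # Sorted Schmidt coefficients via the SVD of the coefficient matrix
    s = np.linalg.svd(np.asarray(psi).reshape(dA, dB), compute_uv=False)
    return np.sort(s**2)[::-1]

def lu_equivalent(psi, phi, dA, dB, tol=1e-10):
    """Two pure bipartite states are LU equivalent iff their sorted
    Schmidt coefficient vectors coincide."""
    return np.allclose(schmidt_coefficients(psi, dA, dB),
                       schmidt_coefficients(phi, dA, dB), atol=tol)

def random_unitary(d, rng):
    # QR of a complex Gaussian matrix, with phases fixed, gives a random unitary
    q, r = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

rng = np.random.default_rng(0)
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
UA, UB = random_unitary(2, rng), random_unitary(2, rng)
rotated = np.kron(UA, UB) @ phi_plus          # (U_A tensor U_B)|Phi+>

print(lu_equivalent(phi_plus, rotated, 2, 2))              # True
print(lu_equivalent(phi_plus, np.array([1., 0, 0, 0]), 2, 2))  # False: product state
```

Applying local unitaries corresponds to U_A M U_B^T on the coefficient matrix, which leaves the singular values untouched; that is exactly why the check works.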
And this is why we care, or rather, why we don't care about LU equivalence. Of course, if I have two states that are LU equivalent, they are equally useful; it's the same resource, okay? Whether I apply some local unitaries or not doesn't matter, because I can always undo them without any cost, since these are all free operations. Okay, so we actually want to get rid of these local unitaries. These are just some parameters that are very disturbing and not useful, because I can always apply them or undo them, okay? So I actually want to consider LU equivalence classes: I would like to consider a representative of an equivalence class instead of the whole class, because I don't gain anything by carrying the whole class around. Okay, so this is why it's also important to know when two states are LU equivalent, okay? Because you want to see the entanglement properties better, so you want to get rid of those parameters that are not relevant, and the LUs are definitely such parameters, okay? And from the Schmidt decomposition we see right away that this question, when are two states LU equivalent, can be easily answered: the sorted Schmidt coefficients simply have to be the same, right? Okay, so now, how can we measure entanglement? Oh, one moment, okay. So how can we measure it in the bipartite case? Any idea? Exactly. So one very nice measure, as an example, is that for a pure state the entanglement is given by the von Neumann entropy of the reduced state, E(psi) = S(rho_A) with rho_A = tr_B |psi><psi|. Okay, I guess that you all know the definition, but for completeness: S(rho) = -tr(rho log_2 rho). Okay, now, what about this measure? When is it equal to one? Let's talk about two qubits, for instance: when is this equal to one? Yeah, so what is an example of such a state? That would be the phi-plus state, exactly. So for the phi-plus state we get that E is equal to one, so it is maximal, okay? I'm asking these kinds of questions now because we would say, this is maximally entangled, right?
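Numerically, this measure is again a one-liner in terms of the Schmidt coefficients, since S(rho_A) = -sum_i lambda_i log2(lambda_i). A minimal sketch (function name mine), checked on the phi-plus state and on a product state:

```python
import numpy as np

def entanglement_entropy(psi, dA, dB):
    """Von Neumann entropy S(rho_A) of the reduced state, in bits.
    Equals -sum_i lambda_i log2(lambda_i) over the Schmidt coefficients."""
    lam = np.linalg.svd(np.asarray(psi).reshape(dA, dB), compute_uv=False) ** 2
    lam = lam[lam > 1e-12]  # drop numerical zeros; 0*log(0) is taken to be 0
    return float(-np.sum(lam * np.log2(lam)))

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
product = np.array([1.0, 0, 0, 0])

print(round(entanglement_entropy(phi_plus, 2, 2), 6))  # 1.0, maximal for two qubits
print(round(entanglement_entropy(product, 2, 2), 6))   # 0.0, no entanglement
```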
But we will see that we have to be a bit careful, because the only thing that we saw is that it maximizes one entanglement measure, okay? There might be others for which it does not attain the optimal value. So please keep this in mind. Let me just, as a very last point, say that this measure is really very nice because it has a very nice operational meaning. Does anyone know what the operational meaning of the von Neumann entropy of the reduced state is? [An answer from the audience.] Entanglement, or an invariant of what? So you mean, okay, you can write it as a function of the concurrence, which is an SLOCC-invariant polynomial, but what I wanted to know now is its operational meaning. So what does it mean physically? If this is a certain number, what does it mean physically? What it means physically is the following: suppose you have many copies of your state psi, so I consider here a pure state psi and I have n copies of it, and I apply only LOCC, okay? As always, I'm restricted to LOCC. The question is, how many maximally entangled states, maximal according to this measure, can I get? So how many states phi-plus can I obtain? Let me call this number m, and what you can show is that the limit of m over n, so the number of maximally entangled states per input copy, with n going to infinity of course, is equal to the entanglement of psi, okay? So that's very nice, and even nicer, it works the other way around too, okay? So this quantity, which for pure states coincides with the entanglement of formation, tells you how many maximally entangled states you can get in a reversible way asymptotically, okay? A very, very nice, elegant meaning of this measure. This is probably also why it's so useful, because it has such a nice meaning. Was there another question? No? Okay, so with this I have to stop now, but I will continue tomorrow, okay? Thank you.