OK, so we resume now with Professor Plenio giving us his last lecture. So it seems that I'm starting 40 minutes late, but anyway, let's move on. I want to show you another way of looking at entanglement, and then at general resource theories. So I will now tell you about another entanglement measure and its relation to statistical distinguishability, and also how we actually came to this entanglement measure. You can read about its properties in the papers on the slide, in particular Phys. Rev. A 57, 1619 (1998), where this is discussed at great length and where the connection to statistics is also discussed.

So in those days, about 20 years ago, we were wondering whether there was not another way of defining an entanglement measure. So far we had the entanglement cost and the distillable entanglement, and these are pretty nasty concepts when you talk about mixed states. The formal expressions for them are roughly as nasty as the expressions that Professor Giovannetti presented for the channel capacities: lots of suprema and limits and everything. We were not so happy with that, and so we were wondering again about the concept of correlations. You remember that in the classical setting we had the mutual information, which is the entropy of one reduced probability distribution plus the entropy of the other reduced probability distribution minus the entropy of the total probability distribution. You can of course write this down for quantum states as well: I(A:B) = S(rho_A) + S(rho_B) - S(rho_AB). In those days it was commonly said that this is a measure of total correlations. At some point we realized that it can be rewritten in a very suggestive manner, namely as S(rho_AB || rho_A ⊗ rho_B), the relative entropy between the density matrix and the tensor product of its local reductions. I haven't defined the relative entropy yet, so let me do that: S(sigma || rho) = tr[sigma log sigma - sigma log rho]. You will notice immediately that this is not symmetric, so it is not really a proper distance, but I will explain in a moment why that is and why it is nevertheless perfectly well motivated. So when you look at this, it seems that the total correlations are the relative entropy distance between the density matrix and the product of its reductions, and this product state has no correlations at all. Then the thought was, and that was a leap of faith basically: why don't we define a quantity, which later on will turn out to be an entanglement measure as well, as the minimum over sigma of S(rho || sigma), where sigma is any separable state, that is, any state that is only classically correlated. Just to remind you, this means that sigma is of the form sum_i p_i omega_i^A ⊗ eta_i^B, where the omega_i^A and eta_i^B are density matrices. The idea was: if the total correlations are quantified as the distance between a quantum state and a naturally chosen totally uncorrelated state, why not try to quantify the quantum part of the correlations by the smallest distance between your state and the set of classically correlated states?
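A minimal numerical sketch of the quantities just defined, assuming Python with numpy/scipy (the helper names are illustrative, not from any particular library): it checks that the mutual information S(rho_A) + S(rho_B) - S(rho_AB) coincides with the relative entropy S(rho_AB || rho_A ⊗ rho_B) for a noisy Bell state.

```python
import numpy as np
from scipy.linalg import logm

def entropy(rho):
    """von Neumann entropy in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def rel_entropy(sigma, rho):
    """Quantum relative entropy S(sigma||rho) = tr[sigma log sigma - sigma log rho], in bits."""
    return float(np.real(np.trace(sigma @ (logm(sigma) - logm(rho)))) / np.log(2))

def partial_trace(rho, keep):
    """Partial trace of a two-qubit state; keep = 0 (system A) or 1 (system B)."""
    r = rho.reshape(2, 2, 2, 2)
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

# Noisy Bell state: p |Phi+><Phi+| + (1-p) I/4  (full rank for 0 < p < 1)
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
p = 0.7
rho_AB = p * np.outer(phi_plus, phi_plus) + (1 - p) * np.eye(4) / 4

rho_A = partial_trace(rho_AB, keep=0)
rho_B = partial_trace(rho_AB, keep=1)

mutual_info = entropy(rho_A) + entropy(rho_B) - entropy(rho_AB)
rel_ent_form = rel_entropy(rho_AB, np.kron(rho_A, rho_B))
print(mutual_info, rel_ent_form)   # the two numbers agree
```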
So the idea was a little bit like this, if you draw it. This is the set of all states; this is the set of separable states; somewhere in here is the tensor product state rho_A ⊗ rho_B; and here is our state rho. We measure the distance from this state to this set. In that way, the idea was, we account for all the classical correlations, and whatever distance is left over must be related to quantum correlations. That was the motivation behind it. But of course it is only a motivation, and then one has to prove that this is a proper entanglement measure; I will come back to that in a moment.

Now, if I take this as a distance measure, some of you might get nervous and say this doesn't make much sense, it's not symmetric. Indeed, in general S(rho || sigma) is unequal to S(sigma || rho), in fact almost always; it is even worse, one of them can be finite while the other is infinite. So one has to spend a moment's thought on what this actually means. Does it make any sense? Well, it does, because it has something to do with statistical tests of the distinguishability of quantum states, in this case rho and sigma. But the quantum proofs are difficult, so let me show you the classical intuition for why it makes sense to choose a quantity that is not symmetric.

Let's take a coin. Normally you say heads and tails, but I'll say it has a probability p0 for the outcome 0 and a probability p1 = 1 - p0 for the outcome 1. That's just a coin you pick from your pocket. Let's make it even a bit more specific to show you really the difference: say p0 = p1 = 1/2, so this first coin is a fair coin. Then there is a second coin with q0 = 0 and q1 = 1. So the first one is a fair coin, and the second is a totally unfair, very biased coin: whenever I flip it, it will show 1.

Now I set you a task. The coins look identical from the outside. I give you one of them and say: flip it once and tell me which coin I have given you. So let's do this; there are two possibilities. Take the fair coin first. If you flip it, in half the cases you will find 0 and in half the cases you will find 1. What can you do? You flip it only once, and let's say you get a 1. The best guess you can make is that this is the unfair coin: you've thrown it only once, and you know that if you had been given the biased coin, you would surely have gotten the 1. So your best guess is that it is the unfair coin, and therefore you have a 50% probability of making the wrong inference, of guessing wrongly that I have given you the unfair coin. OK, you could say, fine, once is not good enough, let's flip it twice. Well, if you flip it twice, there is a 25% probability that you get 1 in both cases, and then you become even more certain that it is the unfair, biased coin. So there is still a 25% probability that you get it wrong. In fact, when you throw it n times, the probability that you wrongly conclude it is the unfair coin is 2 to the minus n.
If in one of those throws you get a 0, then you know it cannot have been the biased coin, it must have been the fair one, and you get it right. So the probability of wrong inference decays exponentially. In fact, I can refine this expression a little: the exponent can be written as the relative entropy between the probability distribution q and the probability distribution p, so the error probability goes as 2^(-n S(q||p)), and here S(q||p) = 1. If these were quantum states, I would say: here is one quantum state, here is the other quantum state.

Now take the other situation. I have the totally biased coin. I flip it, I find 1, and I say: ah, it must be the biased coin. I flip it again, find again 1, and for sure, there is no other possibility, I will again infer that it is the biased coin. So in this setting you will never make a mistake in your inference: when you are given this coin, your maximum likelihood estimate is always the correct one. The error probability turns out to be 2^(-n S(p||q)), and since S(p||q) is infinite, the error probability is 0. So the arguments are interchanged: one relative entropy is infinite, the other is 1. It therefore makes perfect sense that we have a quantity that behaves differently when we interchange its arguments, because there is a very well-defined statistical test, in this case of which coin you have, whose error probability is described by this quantity. So this is a perfectly acceptable distance-like quantity, and it is actually very useful, because you can now determine a lot of properties of this entanglement measure in terms of statistical tests on quantum states. The rough interpretation of the measure is: if I am given the state rho, how likely is it that in a quantum mechanical test I accidentally conclude that it is actually only a classically correlated state? This intuition can be made very rigorous, but that is really complicated. We thought of this in '97, and it took 11 years to turn it all into proper mathematical proofs; it required an extremely brilliant PhD student to go through this and turn the intuition into a mathematical setup.

OK, so I have defined this, and I said we had to check that it is an entanglement measure. But what do I actually have to check when I say this? When is a quantity, a mathematical expression, a good entanglement measure? I have not told you that yet, so this we have to agree upon. In words I have said it already, but now let's write it down: conditions on a functional E(rho) to qualify as a valid entanglement measure, or entanglement quantifier, let's say. It is actually not very many things that we ask. First: if the state is separable, then this function should be 0, so E(rho) = 0 for all separable states rho. That makes sense because those are the states that I can make by local operations and classical communication; they only have classical correlations.
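A small sketch of the two-coin example above, assuming Python/numpy (the helper kl_divergence is illustrative): it evaluates both orderings of the classical relative entropy and checks the 2^(-n) error probability for the fair coin by simulation.

```python
import numpy as np

def kl_divergence(a, b):
    """Classical relative entropy S(a||b) in bits, with 0 log 0 = 0 and a_i > 0, b_i = 0 -> inf."""
    total = 0.0
    for ai, bi in zip(a, b):
        if ai == 0:
            continue
        if bi == 0:
            return np.inf
        total += ai * np.log2(ai / bi)
    return total

p = [0.5, 0.5]   # fair coin
q = [0.0, 1.0]   # completely biased coin

print(kl_divergence(q, p))   # 1.0  -> error probability 2^(-n) when holding the fair coin
print(kl_divergence(p, q))   # inf  -> error probability 0 when holding the biased coin

# Monte-Carlo check: flip the fair coin n times; guess "biased" only if every flip shows 1.
rng = np.random.default_rng(0)
n, trials = 5, 200_000
flips = rng.integers(0, 2, size=(trials, n))
wrong = np.mean(np.all(flips == 1, axis=1))
print(wrong, 2.0 ** (-n))    # both close to 1/32
```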
So with this measure I want to quantify only the quantum part of the correlations, and therefore for these states it should be 0. That is certainly the case here: if rho is a separable state, then I can choose sigma equal to rho as a possible choice, and one can check very quickly from the definition that this expression is then 0, and we are done.

Secondly, in the original papers, people in general had a second condition, although it is actually not independent. Nevertheless I write it down because it is nice to see. If we make a local basis change, that is, a local unitary transformation U_A ⊗ U_B, then the entanglement, the correlations, should be unaffected. This is perfectly natural: if you rotate the basis in which you describe things locally, it will not affect the correlations between the two systems; you can get the same correlations in your measurement record by simply rotating all your measurement observables as well. So this should make no difference whatsoever, and you had better make sure that this is true.

But so far we have not included measurements. What we also would like to have, typically, is the following. This is the ensemble described by our entangled state rho, and now we may make local unitary operations and local measurements, local meaning acting on A or on B separately. Yeah? Sorry, again? How can we create entanglement? Oh, but these are not all the unitary operations there are in the world. We can also have non-local unitaries, and they create loads of entanglement. For example, if I have a unitary operation that maps the state |00> onto (|01> + |10>)/sqrt(2), then obviously it takes a product state and makes an entangled state out of it. But this unitary will for sure not be of the form U_A ⊗ U_B; it is not a tensor product of two unitaries. That is the distinction, and we make this distinction because, as we discussed yesterday I believe, we have the constraint that it is very easy for us to make local operations, but very difficult to make joint quantum operations between distant labs. And it goes both ways. This is an important point: here I am speaking about a local basis change, and entanglement is invariant under local basis changes, but it is not invariant under global basis changes. So let me complete that unitary: U|00> = (|01> + |10>)/sqrt(2), which is psi plus; U|01> = psi minus; U|10> = phi plus; and U|11> = phi minus. If you apply this unitary to a state, it will change the entanglement, and everything becomes different. So entanglement, in this sense, is a basis-dependent concept; it is just a basis change, but we all accept that it is quite a tricky one for current technology.

OK, wait a second. Again? What about a local invertible operation? Local invertible, well, is it trace preserving? No? Then that becomes important. It has to be a physical operation, so a unitary, or maybe local measurements and so on. If you just make a general non-trace-preserving local invertible operation, many things can potentially happen; you then have to take care of the normalization and so on. And in fact this will now show up here in some sense. So here we have our density matrix rho.
And now we perform some complicated protocol that might involve a lot of measurements and communication and so on. The end result is that afterwards we get a sub-ensemble described by a state rho_1 with probability p_1, another sub-ensemble rho_2 with probability p_2, and so on, up to maybe rho_k with probability p_k. For example, if you just make an ordinary projective measurement on one side and find either 0 or 1, then depending on the measurement outcome you get two sub-ensembles, one corresponding to outcome 0, the other to outcome 1. And of course, if you make life really complicated, you make one measurement here, then you communicate, then Bob makes a measurement, then maybe he communicates back and you make another measurement, and you can end up with many, many outcomes and many sub-ensembles. But the procedure should be local: only local operations and classical communication. Under such a setting, again, we expect that quantum correlations cannot be created and cannot be increased. So if I give you an entangled state, then on average the entanglement should not increase in this procedure. Formally: E(rho) >= sum_i p_i E(rho_i), where the rho_i are normalized states with trace 1.

In the lecture before, you learned about completely positive maps and so on. So you would have a map that takes rho to sum_i V_i rho V_i^dagger, where the V_i are linear operators. Each term V_i rho V_i^dagger is proportional to rho_i. If I just add them all together and form the resulting density matrix, then I have not done the sub-selection: I obtained a measurement outcome and afterwards forgot it. But here I don't want to do that, I want to keep the measurement outcome. That means rho_i is given by V_i rho V_i^dagger divided by the trace of V_i rho V_i^dagger, and p_i is exactly that denominator. This is very reasonable: we are only applying operations that are local operations and classical communication, so they should not increase the amount of entanglement. This is the third condition that we ask for, and that is really what you should test. When you come up with a quantity that you claim is a good quantifier of quantum correlations, then I challenge you and say: prove these three things. I should add that the operation here, of course, has to be realized locally. If you prove these three things, then I would say this is a decent entanglement quantifier. Sometimes we make a distinction between an entanglement quantifier, also called an entanglement monotone, and an entanglement measure; I kind of like to make this distinction, but it is not really relevant here. There is also a fourth condition that we sometimes ask: for rho pure, E(rho) should be the von Neumann entropy of the reduced density matrix. This is not strictly necessary; it is just an additional nice property that some measures have and others don't.
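A minimal sketch of the book-keeping in this third condition, assuming Python/numpy and a local projective measurement on one half of a maximally entangled two-qubit state (the operator names are illustrative): it forms the sub-ensembles rho_i = V_i rho V_i^dagger / p_i with p_i = tr(V_i rho V_i^dagger) and checks that summing p_i rho_i reproduces the non-selective map.

```python
import numpy as np

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(phi_plus, phi_plus)              # maximally entangled input state

P0 = np.diag([1.0, 0.0])                        # projectors |0><0|, |1><1| on Alice's qubit
P1 = np.diag([0.0, 1.0])
kraus = [np.kron(P, np.eye(2)) for P in (P0, P1)]   # local operation: acts on A only

probs, post_states = [], []
for V in kraus:
    unnorm = V @ rho @ V.conj().T
    p_i = np.real(np.trace(unnorm))             # probability of this measurement outcome
    probs.append(p_i)
    post_states.append(unnorm / p_i)            # normalized sub-ensemble rho_i

print(probs)                                    # [0.5, 0.5]
# Each rho_i is a product state here, so E(rho_i) = 0 and
# E(rho) = 1 >= sum_i p_i E(rho_i) = 0, consistent with condition three.
nonselective = sum(p * r for p, r in zip(probs, post_states))
print(np.allclose(nonselective, sum(V @ rho @ V.conj().T for V in kraus)))  # True
```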
The fourth, entropic condition is nice to have because it makes it easier to connect things to thermodynamics. But the most important conditions are the first three. So now, as an exercise, show that condition two follows from condition three; condition two is really not independent of condition three.

Right. The question now is: does this quantity satisfy all these properties? The answer is yes. Do I want to show you the proof? The answer is no, because it is really complicated; it took months to find. But these are already essentially the conditions that I will write down in a moment for general resource theories.

Now, one last thing about entanglement. We have this condition for pure states. You can wonder what the value of these entanglement measures is for mixed states: is there maybe at least an entropic lower bound? The answer is that for the entanglement cost, for the distillable entanglement and for the relative entropy of entanglement, we have E(rho_AB) >= max{0, S(rho_A) - S(rho_AB), S(rho_B) - S(rho_AB)}. That is also not so easy to see, although when you know the right trick it is not so difficult; this is in J. Phys. A 33, L193 (2000). And that is kind of nice because it shows you that for a pure state the lower bound is actually tight, and when you start mixing the state the bound starts to go down. It also shows you that when the state is very nearly pure, you are still close to the pure-state value. And it is a purely entropic bound, which can help you in some cases when you want to connect this to thermodynamic arguments. It is also true that these measures are smaller than the maximum of S(rho_A) and S(rho_B); again, there is a proof of that, and you can find it there.

So this is entanglement theory. But I have always dressed this up in terms of resources and constraints, so let's now write out very similar things, with very similar reasoning, but completely in general. Where? I allow any separable state here, and I allow you to write it in any decomposition you like, because this expression will not depend on the specific decomposition of sigma. It only depends on the density matrix sigma itself, written out as a matrix. You can decompose it in an infinity of ways, but no matter which decomposition you choose, as long as sigma is the same, the result here is the same. What can the local states be? Anything you want. The p_i should be bigger than or equal to 0, and the local states should be density matrices themselves, but apart from that you can choose whatever you like; they don't have to be orthogonal. So you mean these are the classical correlations? This one is a classically correlated state, yes, a separable state. I am just saying that these are the states I can create merely by using local operations in the laboratories and classical communication between them. Local operations don't create any correlations whatsoever; the only way you can create correlations in this setting is by classical communication.
And then these are obviously classical correlations and not quantum mechanical correlations; that is why I call it a classically correlated state. Of course it is a quantum state, but it only has classical correlations in it, at least in this paradigm of separate labs that we are using. OK. Yeah? Oh, because this map here is the result of only local operations and classical communication. So it might be that for some of these sub-ensembles the entanglement is larger, but they only appear with a certain probability. What we expect, or what we want, is that if I take the weighted average of the entanglement in each of these sub-ensembles, it should be less than before, because otherwise we would have generated entanglement by local operations and classical communication. Pardon? In that case, with a particular probability, we will generate more entanglement? Oh yes, absolutely. In fact, I showed you this morning a protocol where exactly this happened; I didn't point it out very much, but in that concentration protocol there was a small probability that we actually obtain a more entangled state. And there is no harm done. Quantum mechanics even violates energy conservation with a certain probability, but on average energy has to be conserved, and here it is similar.

OK, right. So now, general resource theories. There are different ways of dressing this up, or building it up. When I now say things like "we have a constraint", think of local operations and classical communication, for example; when I say "resource-free state", think of a separable state. Then you will see that for everything I am telling you, entanglement theory is an example, but I would like to state it a little more generally. Every one of these resource theories can be started, for example, by saying: I have a constraint, meaning that not all physical operations are actually possible. For example, I cannot exchange quantum states coherently between two laboratories. So let's say A is the set of all operations, and C is the set of allowed operations, the free operations. What could this be? LOCC we have talked about already. Or let's go, say, 120 years back, when we did not have quantum physics; we did not have lasers, we did not have coherent light available to us. In those days it was not really possible for us to create quantum coherence. All of quantum mechanics existed already, we just didn't know about it, but the only operations we could perform were classical operations, shining incoherent light onto some system and so on. We could not create quantum coherence; we could not create a coherent superposition of the ground state and an excited state of an atom. That is also a perfectly valid constraint. So that's a possibility. Then here you have the set of all states, and here a subset of states, let's call it S: those are the states that you can prepare from the maximally mixed state using the free operations.
So S is this set of states reachable from the maximally mixed state using the free operations. Actually, we have to be a little careful here, because there is one important setting in which this does not apply. I chose the maximally mixed state because it is the state that represents our least knowledge, our biggest ignorance: if we forget everything, so to speak, we are left with the maximally mixed state. This is not quite true in general, because if we forget everything but know, for example, that the mean energy has to be preserved, then we actually get a thermal state, a Gibbs state. So I should really be a bit more rigorous here, but let's stick to this definition for the moment. In the absence of any knowledge, the maximally mixed state is the best starting point, and all the states we can reach from it are the ones we can make; those are the cheap states, the free states. This set also has the property that when I apply an operation from the set of free operations to every representative of the set, it is mapped onto itself: the set does not grow. It is the set that is mapped onto itself, not each individual state. In some theories people actually ask for something stronger, like what you just said, but that is a very specific setting; what we want is only that the set is mapped onto itself. Of course, what can happen is that you start out here and are mapped into the set; but if you start inside, you will never go out.

Right. And then we typically have resources. While it is not completely clear in general, typically every state that lies outside is a resource state, because at the very least we cannot produce it for free, and so it is useful for at least one task, namely preparing the state itself. So all states outside of S are resource states; they allow you to realize quantum operations outside of the free operations C by using the resource state. That is the same as saying, in entanglement theory, that the states out here are entangled and allow you to implement operations that you cannot realize by local operations and classical communication alone. So now I have written it a bit more generally: constraints, resources. And in a way this is the starting point. Now you will ask questions such as: if I have a state out here, how useful is it? Which protocols can I realize with it? Which operations in A can I produce using a particular state outside? You can ask whether there is a maximally resourceful state, something that would sit here, from which I am able, for example, to make any other resource state using operations in C. Translated into entanglement theory, the singlet state is the maximal resource state, because from it I can make, by local operations and classical communication, any other two-qubit entangled state. So all the questions I have explained to you translate quite naturally into this setting; you just have to remember that you are not talking about entanglement anymore but about some other constraint. So what are examples of such theories? Well, OK. But first, I would say there are three very natural quantifiers of the resource content.
One is, again: I give you resource states and you try to concentrate the resource into, let's say, the maximal resource state; that is like the distillable entanglement. You can also ask how many maximally resourceful states you need to produce less resourceful states; that is like the entanglement cost, the resource cost here. And the other natural way is to ask: what is the relative entropy distance of this resource state to the set of resource-free states? That is the last measure I told you about before. These three are natural measures of resourcefulness, and one can show in fair generality that in such theories they are monotones. And of course the conditions are now exactly the same: any other quantifier you come up with should be 0 on the resource-free states, and if I realize a protocol only within the set of free operations, it should not increase, on average, the resource that I have. So if I call this quantifier C(rho), then it should satisfy C(rho) >= sum_i p_i C(rho_i), where the rho_i are generated by the set of free operations; of course, measurements might be involved, so you have probabilities. You want this property, and that is really what you have to check.

So, what theories are there? Quantum coherence, for one. In quantum information we very much started, right at the beginning, by looking at entanglement, and there was good reason for that: we were thinking about communication, where it is natural to consider correlations. But thinking about it, 100 years ago people could also have started from this coherence point of view. At the beginning of quantum mechanics, single-particle quantum mechanics, they could have asked what distinguishes classical from quantum, and the answer is the presence of coherence. So they could have said: the free operations are the operations that only happen in one basis. I have a preferred basis, it is given to me, for example the basis made up of the states |0> and |1>, or more generally |i>. States that are incoherent in this basis are simply those of the form sum_i p_i |i><i|, that is, diagonal states. So this is defined with respect to a particular basis, and in that basis the incoherent states are the diagonal ones; states that have off-diagonal elements have, at least in principle, some coherence in them. The free operations are those operations that map the set of incoherent states onto itself. Permuting basis elements, for example: OK. Measurements in this basis: also OK. Stuff like that is allowed, and the rest is not free, it costs you something. Then you can ask yourself, what is the maximally coherent state? Well, if i goes from 1 to n, the maximally coherent state is (1/sqrt(n)) sum_{i=1}^n |i>, an equally weighted superposition of all the basis states. If you write this as a density matrix, you see that it has loads and loads of off-diagonal elements, so it is natural to think that this might be the right candidate. In fact, one can show that by incoherent operations you can take this state and generate all the other states.
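For this particular resource theory the relative-entropy quantifier has a known closed form, C_r(rho) = S(diag(rho)) - S(rho): this is the relative entropy of coherence of Baumgratz et al., the paper cited at the end of the lecture. A short sketch assuming Python/numpy, with illustrative helper names, evaluating it on the maximally coherent state and on a diagonal (free) state:

```python
import numpy as np

def entropy(rho):
    """von Neumann entropy in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def rel_entropy_of_coherence(rho):
    """Relative entropy distance to the nearest diagonal (incoherent) state: S(diag(rho)) - S(rho)."""
    dephased = np.diag(np.diag(rho))            # keep only the diagonal in the fixed basis
    return entropy(dephased) - entropy(rho)

n = 4
psi = np.ones(n) / np.sqrt(n)                   # maximally coherent state (1/sqrt(n)) sum_i |i>
rho_max = np.outer(psi, psi)
print(rel_entropy_of_coherence(rho_max))        # 2.0 = log2(4)

rho_incoherent = np.diag([0.3, 0.2, 0.4, 0.1])  # diagonal state: free, so the measure is 0
print(rel_entropy_of_coherence(rho_incoherent)) # 0.0
```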
So the maximally coherent state is therefore a good choice, and you can build up here exactly the same kind of theory that you build up for entanglement. Now, this theory does not have a life completely on its own, because there is also a connection between it and entanglement theory. You can ask yourself: if I have only incoherent operations and only incoherent states, can I actually create entanglement? If I start with a state of this diagonal form, tensored with another state of that form, and I only allow myself operations that are incoherent, then it is very easy to see that I will not be able to create any entanglement whatsoever. However, if you give me some coherence in one subsystem initially, then it turns out that I can actually create entanglement using incoherent operations. So there is a connection between coherence theory and entanglement theory. Put very extremely, you could say that coherence theory is the entanglement theory of one particle, so to speak; that is a bit exaggerated, but there is a direct link. It is kind of obvious that this relationship exists, but it can be made much more formal, rigorous and quantitative by constructing a mapping that translates coherence measures into entanglement measures and vice versa. This is in a paper by Alexander Streltsov and several other authors, five of them, I think; I believe it is in PRL in 2015. Unfortunately it is not mine, so I have forgotten the exact page numbers, but with the name Alexander Streltsov you will find it for sure. And this is interesting because it allows you to prove something in one theory and then translate it to the other theory more directly. Pardon? Yes, and next week Alexander Streltsov is going to give a talk surrounding this general topic area; I am not quite sure what exactly he is going to talk about.

Now, another setting is thermodynamics. Here, I am becoming really lazy: here is the set of all states, and these are the free states. What are the free states? They are the Gibbs states. And the free operations are the operations that map the set of Gibbs states onto itself. So: free states are Gibbs states, free operations map Gibbs states to Gibbs states. In a way this is quite old, it already started in the 70s, but it got partially forgotten, I think because the mathematical techniques were not really ready. Then there were 10 or 15 years of entanglement theory, which developed all sorts of mathematical structures that can be applied here, and the subject experienced a revival. The first revival came with Dominik Janzing in 2000; if I remember right, it is in the International Journal of Theoretical Physics. I have forgotten the precise page numbers again, but he does not publish masses of papers, so you can find it relatively easily; it was around that time. It was not 100% explicit, I would say; one has to read it very carefully to understand it, but I think he understood it, and he essentially defined something like this. Later on, it was Oppenheim and Michał Horodecki who pushed this further and actually proved a lot of properties.
Then there was another paper by Horodecki, Oppenheim, Fernando Brandão and one or two other people that really started to work out the laws that determine how you can manipulate the states that lie outside here, how you can use them to extract physical work, and so on, and what constraints you suffer. The interesting thing is that they found many structures that are also known in entanglement theory, mathematical structures, and they could translate them to this setting. For very small systems this does not look quite like thermodynamics, but when you go to large systems, many copies, many levels, then all these different constraints that hem you in actually start to unify into one, which is the second law of thermodynamics. So this helps you to interpolate from the microscopic level to the macroscopic level. There is still a lot that needs to be clarified here; maybe it goes a little far for now, and I also don't know it very well. But they often talk about exact transformations, and they assume that initially your system is decoupled from the bath, from the environment, and so on. This is also not really true for these microscopic machines, microscopic thermodynamic engines. So one has to be very careful: these are beautiful mathematical results, but one also has to see how well they can be realized in a practical setting. Still, it is a very nice way of interpolating from nanothermodynamics to macroscopic thermodynamics. Well, you now have to define this with respect to some Hamiltonian and temperature and so on. Yes. But then? It changes the temperature. So? It's the average. Oh, but it maps Gibbs states to Gibbs states. Yes. Well, you can set up the mathematics so that this is free, but that is exactly my point: you then have to start thinking about what is natural as an additional constraint, what should be left out. It is the same in entanglement theory: entanglement theory is not really fully uniquely defined either. We can have LOCC, we can allow separable superoperators, we may also allow PPT operations; these are all slightly different, and then you have to decide which ones are physically reasonable. I don't want to go into this, also because I don't know so much about it, but it is of course a nice application of all this.

And there are more things. Next week you will hear a talk about how quantum superpositions may also be cast in terms of a resource theory, which may help shed some more light on the question of what is classical and what is non-classical. That talk will be given by Thomas Toeber up there; he will expand on that particular aspect and explain it probably much better than I do now with these hand-wavy remarks. This is really only meant to show you that there are other interesting things beyond entanglement theory. There are many other possible settings you can think of: you think of a constraint that is natural because your experimentalist tells you that this is possible and that is not.
Or, from a more fundamental point of view, maybe you are interested in non-locality or something else; you can try to frame that in this language too. The advantage is then that some of the general results that have been obtained for resource theories directly apply to what you are doing, and that is why this is a useful way of thinking: if you have a general structure, you only have to prove things once, provided you are careful about what you are doing.

OK, so I think with that I really should stop, because I am at 60 minutes. This could only give you a glimpse of things; normally I should have spoken the entire week about all of this and given you more details. But I have also given you some references here and there, and I really encourage you to look at them. Maybe I should say it again: the ones that are basic, but try to bring out the principles I have been talking about, are this one, and then there is also an article about the quantification of entanglement in Quant. Inf. Comp. 7, 1 (2007). That is a bit of a review article, but it also tries to bring out why we are asking these questions and what you have to watch out for. And if you want to know what had been published in entanglement theory up until 2009, with probably nothing missing, then you should look at Reviews of Modern Physics, 2009; I don't know the page number offhand, but the authors are easy to remember because they are Horodecki to the fourth tensor power: Karol, Michał, Paweł, and Ryszard Horodecki. It's a family. They wrote a review article with, I think, more than 1,000 citations by now, and in 2009 it was a pretty comprehensive view of what was known and what was not known. Of course things have developed a bit since, but these are the things I would recommend: look at them, start from there, then read more of the specialized articles that I cited along the way, and you will get a good feeling of what is going on. Maybe what I have also forgotten: here, for coherence, the reference is Baumgratz et al., PRL 2014; I don't know the exact page numbers. And here, I have shown Dominik Janzing, but I should also mention Oppenheim and Horodecki, Nature Communications, I think in 2013. Right. OK, and then everything about the resource theory of superposition you will learn from Thomas next week. Thanks.