For those of you who haven't seen this material before, I recommend that you try to solve the problems in the exercise sessions. So let's start. Quantum systems are going to be modelled by finite-dimensional Hilbert spaces; I'm going to restrict myself to finite dimensions in this course, although many of the results extend beyond that. I don't think I need to go over the definition of a Hilbert space. Notation for linear operators: L(H, H') denotes the linear operators from H to H'. Important: the adjoint of an operator S is the operator S* from H' to H which satisfies ⟨S*φ, ψ⟩ = ⟨φ, Sψ⟩ with respect to the inner products. Then there are some important classes of operators that I'm going to be considering: unitary operators, whose inverse is their adjoint; Hermitian operators, which are self-adjoint; positive operators, which play a very important role — these are the Hermitian operators with non-negative eigenvalues; and projections. Now a very quick review of bra–ket notation. I can see a ket |ψ⟩ as a linear operator from the complex numbers to the Hilbert space, and its adjoint is denoted by the bra ⟨ψ|, as you know. The inner product then arises naturally: composing a bra with a ket gives me the inner product ⟨φ|ψ⟩, and I'll also be using the outer product |φ⟩⟨ψ|. Okay, so now let's get to the spectral decomposition. Any Hermitian operator S admits an orthonormal basis of eigenvectors e_i, so that S can be written as S = Σ_i λ_i |e_i⟩⟨e_i|, where the λ_i are the eigenvalues and the e_i the eigenvectors. Again, S is positive if and only if all the eigenvalues are non-negative.
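As a small aside, the spectral decomposition and the positivity criterion just stated can be sketched numerically; this is a minimal illustration, and the particular 2×2 Hermitian matrix is made up for the example.

```python
import numpy as np

# An illustrative 2x2 Hermitian operator (made up for this example)
S = np.array([[2.0, 1.0j],
              [-1.0j, 2.0]])

# Spectral decomposition: S = sum_i lambda_i |e_i><e_i|
eigvals, eigvecs = np.linalg.eigh(S)

# Reconstruct S from its eigenvalues and eigenvectors (columns of eigvecs)
S_rebuilt = sum(lam * np.outer(v, v.conj())
                for lam, v in zip(eigvals, eigvecs.T))
assert np.allclose(S, S_rebuilt)

# S is positive if and only if all eigenvalues are non-negative
is_positive = bool(np.all(eigvals >= -1e-12))
assert is_positive  # this S has eigenvalues 1 and 3
```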
And one thing that we're going to be doing often is applying functions to positive operators, or Hermitian ones; this just means applying the function to the eigenvalues, f(S) = Σ_i f(λ_i) |e_i⟩⟨e_i|. Okay, so for things to start getting interesting for computation and information, we need multiple systems — composite systems. I'll have quantum systems that are called A, B, C, and in general I'll be calling X and Y the classical systems. Each one of these systems has an associated Hilbert space, and of course the Hilbert space for the composite system is the tensor product of the two spaces. The tensor product can be seen as the vector space which is spanned by the tensor products of vectors from the two spaces, and the inner product is defined on product vectors as the product of the inner products, and then extended by linearity. I can also define the tensor product of linear operators: if I take linear operators S and T, then the tensor product S ⊗ T, acting on product vectors, just gives the tensor product of the outputs, (S ⊗ T)(u ⊗ v) = Su ⊗ Tv, and then you extend by linearity. And then there's a natural identification that we'll be using: if I take the tensor product of the set of linear operators from H_A to H'_A with those from H_B to H'_B, this is identified with the linear operators from H_A ⊗ H_B to H'_A ⊗ H'_B. Okay, good. So this was all completely abstract; let's now start modelling quantum systems. As probably all of you know, the state of a quantum system is modelled by a density operator, which mathematically is an operator ρ on the relevant Hilbert space H that is positive, ρ ≥ 0, and has trace one as a normalization condition. I'll be denoting the set of density operators by S(H), S for states.
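The two defining conditions of a density operator, and the product rule for independently prepared systems mentioned below, can be checked mechanically; here is a small numpy sketch, where the example state |+⟩ is chosen just for illustration.

```python
import numpy as np

def is_density_operator(rho, tol=1e-12):
    """Check the two defining conditions: rho >= 0 and Tr(rho) = 1."""
    hermitian = np.allclose(rho, rho.conj().T)
    positive = hermitian and bool(np.all(np.linalg.eigvalsh(rho) >= -tol))
    unit_trace = abs(np.trace(rho) - 1.0) < 1e-9
    return positive and unit_trace

# A pure qubit state |+><+| (illustrative choice)
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus.conj())
assert is_density_operator(rho)

# Independently prepared systems: the joint state is the tensor product
rho_AB = np.kron(rho, rho)
assert is_density_operator(rho_AB)

# Scaling breaks the trace-one condition
assert not is_density_operator(2 * rho)
```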
Okay, and I'll say that a state ρ is pure if it has rank one; rank one and pure are the same thing for me. One piece of vocabulary I might be using: the maximally mixed state is just the state which is proportional to the identity — the uniform state. Okay, good. Here I'll say a little bit more about the density operator formalism. If you're used to quantum states as vectors |ψ⟩ on a Hilbert space, then the corresponding density operator is the outer product |ψ⟩⟨ψ| of the state with itself. Now, thinking operationally about how to model systems: if I have a composite system, the corresponding Hilbert space is the tensor product of the two corresponding Hilbert spaces, and if I physically prepare two systems independently — A and B, for example — then the corresponding state is just ρ_A ⊗ ρ_B. But of course, in general, if there are correlations, the state can be more general. One notational shorthand: instead of writing H_A for the Hilbert space corresponding to system A, I'll just use A both for the Hilbert space and for the label of the system. Okay, so now that I've defined the corresponding Hilbert space, what is the evolution — how does the system evolve? There are many ways in which it can evolve. An isolated evolution of system A is modelled by a unitary operator U_A acting on the Hilbert space A. And if I have a composite system AB, then after applying a unitary on A and doing nothing on B, the state is (U_A ⊗ id_B) ρ_AB (U_A ⊗ id_B)*. As you'll notice, I try, when possible, to include as a subscript the system on which the operator acts; it's not always possible without the notation getting a bit heavy, but I try to do it when it's reasonable.
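The rule "unitary on A, nothing on B" can be made concrete with a two-qubit sketch; the choice of the bit-flip unitary X and the input state |01⟩ is just for illustration.

```python
import numpy as np

# Pauli-X (bit flip) on qubit A, identity on qubit B
X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
U = np.kron(X, I2)  # U_A tensor id_B

# Joint state |01><01|: A in |0>, B in |1> (made up for illustration)
psi = np.kron(np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex))
rho_AB = np.outer(psi, psi.conj())

# Isolated evolution on A within the composite: rho -> U rho U*
rho_out = U @ rho_AB @ U.conj().T

# The flip on A maps |01> to |11>; B is untouched
psi_expected = np.kron(np.array([0, 1], dtype=complex),
                       np.array([0, 1], dtype=complex))
assert np.allclose(rho_out, np.outer(psi_expected, psi_expected.conj()))
```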
So one type of evolution is this isolated evolution given by a unitary. Another type of evolution is a measurement. A measurement is modelled by a family of operators M_x on the Hilbert space A, where x labels the outcomes of the measurement: x runs over a finite set which labels the outcomes. And of course there should be a normalization condition, and the relevant one is Σ_x M_x* M_x = id. When I do a measurement I get two things: the classical outcome and a post-measurement state. The probability of outcome x is given by p(x) = Tr(M_x ρ_A M_x*), and — writing the formula for the case where there is another system B around — p(x) = Tr((M_x ⊗ id_B) ρ_AB (M_x ⊗ id_B)*). It's easy to check that the sum of these probabilities equals one; in fact, the normalization condition was imposed exactly so that this holds. The post-measurement state is given by sandwiching ρ_AB between the operators M_x and M_x* and dividing by p(x); this is the state conditioned on seeing outcome x. You might be more used to the setting where the M_x are projectors, but in the definition I gave they can be more general: the M_x are arbitrary operators satisfying the normalization condition. With this notion of measurement you can model several things — for example, a unitary followed by a projective measurement. Yeah, maybe let me introduce the notion of a POVM here, as this will come up a lot. Sometimes I'm not interested in the post-measurement state, just in the probability of the outcome; for example, I might throw away the system completely and only see the outcome — that's the only thing I'm interested in. This would be the case, for example, when we look at state discrimination.
And in that case, you see that the probabilities only depend on the operators E_x = M_x* M_x, so I can forget the M_x themselves and just keep the E_x. This gives the definition of a POVM, or positive operator-valued measure: a POVM on A is just a family (E_x) of positive operators on A satisfying Σ_x E_x = id, and the probability of an outcome is just p(x) = Tr(E_x ρ). Okay, good. That's what I wanted to say about states. Now, quantum channels; some of you haven't seen them, so I'll try to go a little more slowly on quantum channels. A quantum channel is a general way of describing the evolution of the state of a system, general in the sense that the Hilbert space you're looking at can change: you have an input system A and an output system B, so you can think of it as an arbitrary physical operation. It could be, for example, forgetting a system, or adding a particle, or anything like that. So what should be the property satisfied by a quantum channel? It should map the set of density operators on A to the set of density operators on B. As you'll see, we'll actually require a little bit more than that, and I'll discuss why in a minute. Let me first give you the definition and then we'll comment. A quantum channel E is a linear map from the set of linear operators on A to the linear operators on B — notice that linearity is natural here, because I would like convex combinations of states to be mapped to convex combinations. Then there are two constraints we impose, corresponding to the two conditions in the definition of a state. The first one is there to ensure that positive operators stay positive.
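The POVM rule p(x) = Tr(E_x ρ) is easy to sketch; the two-outcome computational-basis POVM and the state |+⟩ below are illustrative choices.

```python
import numpy as np

# A two-outcome POVM on a qubit: E_0 = |0><0|, E_1 = |1><1|
E0 = np.array([[1, 0], [0, 0]], dtype=complex)
E1 = np.array([[0, 0], [0, 1]], dtype=complex)
assert np.allclose(E0 + E1, np.eye(2))  # normalization sum_x E_x = id

# Measured state |+><+|
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())

# p(x) = Tr(E_x rho)
p0 = np.trace(E0 @ rho).real
p1 = np.trace(E1 @ rho).real
assert np.isclose(p0, 0.5) and np.isclose(p1, 0.5)
assert np.isclose(p0 + p1, 1.0)  # probabilities sum to one
```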
But I'll ask for a little bit more. Again, let me define it and then we'll comment. I say that for any input positive operator ρ, the output should be a positive operator — but not only when I input a positive operator on A: I also take the tensor product with an arbitrary reference system R and do nothing on R. Here id_R is just the identity map on the set of linear operators on R. So I require that (E ⊗ id_R)(ρ) is positive for every positive ρ and every reference system R; this is called complete positivity. It seems like a complicated definition right now, because I have to test positivity over all possible reference spaces, but we'll see in a minute that it's actually simpler in some sense. So, to summarize: I ask that this linear map E sends positive operators to positive operators, and that this stays true even if I take the tensor product with an identity map on a reference system. The other natural requirement is that it's trace preserving: if I start with a state, which has trace one, then the output should have trace one as well. So now I want to discuss why complete positivity and not just positivity. What would a positive map be? The definition of a positive map is just that it maps positive operators to positive operators. This would be the first thing you might come up with, but it turns out that this is not very well behaved, in the sense that positivity of a map is not stable under tensor products: you can take two positive maps, tensor them together, and get something which is not positive. This is physically not acceptable, because if I have two physical quantum channels, I should also be able to apply their tensor product — that is, I look at just one of the systems and apply one physical operation, and on the other system I apply the other physical operation.
If the tensor product becomes non-physical, then this is a problem. So let me say more specifically what I mean by "not stable under tensor products": I mean that there exist positive maps E and F such that the tensor product map E ⊗ F, defined in the natural way, is not positive. Let's see a little more specifically how this can happen. Of course, if I take an input which is itself a tensor product — ρ positive and σ positive — and apply the tensor product of my two positive maps, then I get something positive by construction, because it's just a tensor product of positive operators, which is positive. And more generally, if I take convex combinations of ρ ⊗ σ, the output will also stay positive. The convex combinations of product operators are called separable operators (or separable states, if normalized): the convex hull of tensor products of positive operators. So, naturally, on separable states the tensor product of positive maps stays positive; but (E ⊗ F)(ρ) can fail to be positive if I take a state ρ on A ⊗ A' which is not separable — and these are exactly what are called the entangled states. The most typical example, which you will discuss in the problem session, is to take E to be the transpose map — you fix a basis and take the transpose — and F to be the identity map. If you choose an entangled ρ, you will find that the output is not positive. That's the idea. On the other hand, as you'll also see in the problem session, complete positivity, which I defined on the previous slide, is stable under tensor products.
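The transpose-map example just mentioned can be verified directly: applying transpose-on-A tensor identity-on-B to the maximally entangled state produces a negative eigenvalue. A small sketch of that computation:

```python
import numpy as np

# Entangled input: the maximally entangled state |phi+> = (|00> + |11>)/sqrt(2)
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(phi, phi.conj())

# (transpose tensor id): reshape rho into a 4-index tensor rho[a, b, a', b'],
# swap the A indices a <-> a', and reshape back
rho_pt = rho.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)

# The output has a negative eigenvalue, so the transpose map, while positive,
# is not completely positive
eigs = np.linalg.eigvalsh(rho_pt)
assert eigs.min() < -0.4  # the eigenvalues here are (1/2, 1/2, 1/2, -1/2)
```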
Okay, so let's look at a few simple examples of quantum channels. The first example: if I take a unitary operator U on A, the quantum channel which applies the unitary is just conjugation of T by U and U*, that is, T ↦ U T U*. It's easy to verify that this is completely positive, because, in general, when you sandwich a positive operator between an arbitrary operator and its adjoint, you get something positive. That's actually what I wrote here: in general, for an arbitrary S, the map T ↦ S T S* is completely positive. How about trace preserving? That condition just follows from the fact that U*U = id. Another important map that we'll use all the time is the partial trace map. This is the map which just forgets a system: if I have a joint state on AB and I just forget the B system, what remains is the partial trace, defined as Tr_B(ρ_AB) = Σ_i (id_A ⊗ ⟨e_i|) ρ_AB (id_A ⊗ |e_i⟩), where I fix an orthonormal basis (e_i) of the B system and sum over the sandwich with these basis vectors. Another way of seeing the partial trace is that it is the identity map on A tensored with the trace map on B. Again, the partial trace plays the role of taking the marginal of a joint probability distribution: in the special classical case — for example, if the two systems A and B are classical, i.e. diagonal in some fixed basis — the partial trace is exactly the marginal. And this is an important notation I'm using: if I have ρ_AB, a joint state of AB, I use the notation ρ_A for the partial trace over B of ρ_AB — it matches the marginal notation. Okay, so why is this a valid channel?
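The partial trace formula above can be implemented in a couple of lines by reshaping into a four-index tensor; the two test states (a product state and the Bell state) are illustrative.

```python
import numpy as np

def partial_trace_B(rho_AB, dA, dB):
    """Tr_B: sum_i (id_A ⊗ <e_i|) rho_AB (id_A ⊗ |e_i>), via tensor reshaping."""
    return rho_AB.reshape(dA, dB, dA, dB).trace(axis1=1, axis2=3)

# Product state: tracing out B returns rho_A exactly
rho_A = np.array([[0.75, 0.25], [0.25, 0.25]], dtype=complex)
rho_B = np.eye(2, dtype=complex) / 2
assert np.allclose(partial_trace_B(np.kron(rho_A, rho_B), 2, 2), rho_A)

# Bell state: the marginal of an entangled pure state is maximally mixed
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(phi, phi.conj())
assert np.allclose(partial_trace_B(rho, 2, 2), np.eye(2) / 2)
```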
Again, it's simple to see: as I said, the maps obtained by sandwiching with an operator are completely positive, and sums of completely positive maps are completely positive. Trace preservation is also fine. Okay, so now, how can I see a measurement in this formalism of quantum channels? Remember, the way we defined it is just by a set of operators M_x such that Σ_x M_x* M_x = id. Imagine I want the output of the channel to contain both the actual outcome x as well as the post-measurement state. My input space would just be A, and my output space would be X ⊗ A — X is a space whose dimension corresponds to the number of outcomes of my measurement, tensored with the same space as the input space. The X register will contain the outcome, and A will contain the post-measurement state. The corresponding quantum channel in this case has the following form: it maps an arbitrary operator T to Σ_x |x⟩⟨x| ⊗ M_x T M_x*, with the projector onto x in the X register. Again, it's simple to verify that this is completely positive and trace preserving, by the same properties as we described before. And it's easy to check that this is consistent with what we discussed earlier: remember, I told you that the probability of outcome x is given by Tr(M_x ρ M_x*), which we called p(x). If I just multiply and divide by p(x), I see that the weight of the x term is the probability of obtaining outcome x, and, conditioned on this classical register being in the state x, the post-measurement state is exactly what I see in the A register. A state on these two different spaces of this form is called a classical-quantum state.
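The measurement channel T ↦ Σ_x |x⟩⟨x| ⊗ M_x T M_x* can be sketched directly; for simplicity this uses a projective two-outcome measurement on a qubit, an illustrative special case of the general definition.

```python
import numpy as np

# Illustrative projective measurement: M_0 = |0><0|, M_1 = |1><1|
M = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]
assert np.allclose(sum(m.conj().T @ m for m in M), np.eye(2))  # normalization

def measurement_channel(T):
    """T -> sum_x |x><x| ⊗ M_x T M_x*, with the X register first."""
    out = np.zeros((4, 4), dtype=complex)
    for x, Mx in enumerate(M):
        ket_x = np.zeros((2, 1)); ket_x[x] = 1.0
        out += np.kron(ket_x @ ket_x.T, Mx @ T @ Mx.conj().T)
    return out

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())
sigma = measurement_channel(rho)

# The channel is trace preserving, and the trace of each X block gives p(x)
assert np.isclose(np.trace(sigma).real, 1.0)
assert np.isclose(np.trace(sigma[:2, :2]).real, 0.5)  # p(0) = 1/2 for |+>
```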
Specifically, a state of this form — where I fix a basis (|x⟩) of the X system and write the state as Σ_x p(x) |x⟩⟨x| ⊗ ρ_x, a sum over the probabilities of x tensored with conditional states of the A system — is what I call a CQ state. Okay, good. So now I'm going to move to how to represent a quantum channel. A channel is a linear map, so there are many ways of representing it, but there are some common ways which are useful both for testing whether a given map is a channel and for interpretation. One of them is what is called the Choi operator of a quantum channel, which will be an operator on Ā ⊗ B; another one is the Kraus representation, which is a list of operators from A to B; and the Stinespring representation is given in terms of an operator from A to B tensored with an extra environment system E. So let's start with the Choi representation. There are variants of it, but I'll take a simple fixed one, which is called the Choi operator. For that we fix a basis — so it's basis dependent — on the A system; remember, A is the input of the channel. And we consider a system Ā which has the same dimension as A; think of it as a mirror system. One way to see the construction is that I prepare a maximally entangled state between A and Ā, apply the channel to the A system, and leave the Ā system alone. Another way to see it is just to say that the set of operators |a⟩⟨a′| forms a basis of all the linear operators on A, so it's sufficient to just list all the outputs E(|a⟩⟨a′|), and the Choi operator is just a way of arranging this list of operators.
So you can see it as a big block matrix, if you want, where I put the matrix E(|a⟩⟨a′|) in the block corresponding to (a, a′). So there's the mathematical point of view, where it's just the list of the E(|a⟩⟨a′|), but it also has this physical interpretation in terms of preparing a fixed maximally entangled state between A and Ā and applying the channel to one half.

[Question from the audience.] Sorry, can you say that again? I can't hear you well. Ah, why the bar — okay, because A and Ā are two isomorphic systems, and formally the channel goes from A to B. So I take a copy of the A system, which is the Ā system; I don't act on the Ā system, and I act on the A system with the channel E. You can think of the Ā system as a copy of A. I could have written A as well, if you want, but then I would have two systems which are called the same thing.

[Question.] No, the Choi operator J is an operator from Ā ⊗ B to Ā ⊗ B — an operator from a space to itself.

[Question.] Yes, it's a matter of convention — I could have called Ā A and A Ā. It's just that I defined the channel to go from A to B; it's not a big issue, it's just a naming convention.

Okay, so notice that here I took this maximally entangled state to be unnormalized. If you normalize it by one over the dimension of A, then this becomes a state, and one sometimes talks of the Choi state of a channel. I will interchange Choi operator and Choi state; the only difference is the factor of dim(A). Okay, so let's see a few examples.
So if I take the identity channel — the channel does nothing, and B and A are the same space — then of course E does nothing, and the Choi operator is just the unnormalized maximally entangled operator Σ_{a,a′} |a⟩⟨a′| ⊗ |a⟩⟨a′|. If E is just the trace, viewed as a map to a one-dimensional output, then the Choi operator is just the identity. And if, for example, the channel just outputs some constant state σ times the trace of the input, then the corresponding Choi operator is id_Ā ⊗ σ. Okay, so one thing you might wonder is: does this capture everything? From what I said, it should be clear that it does — it contains the output of E on a basis of all the linear operators — and it does indeed capture everything; there is even an isomorphism, called the Choi–Jamiołkowski isomorphism, between a channel and its corresponding Choi operator. The Choi and Jamiołkowski versions are slightly different; here I chose the Choi one because it's a little bit simpler. And you can make this very explicit: of course, given E, it's easy to write the corresponding Choi operator — this is exactly what I defined — but I claim it's an isomorphism, so you can go the opposite way as well, and it's given by this map: from an operator J on Ā ⊗ B, I can construct a channel defined by E(T) = Tr_Ā((Tᵀ ⊗ id_B) J), where I take the transpose of T on the Ā system, tensor with the identity, multiply with J, and then take the partial trace. If you'd like to check this, I left the calculation in the lecture notes. Okay, so why is this representation useful?
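Both directions of the Choi–Jamiołkowski correspondence just described can be coded up in a few lines; this sketch checks them on the identity channel, with a made-up test operator T.

```python
import numpy as np

def choi(channel, dA, dB):
    """Unnormalized Choi operator: J = sum_{a,a'} |a><a'| ⊗ E(|a><a'|)."""
    J = np.zeros((dA * dB, dA * dB), dtype=complex)
    for a in range(dA):
        for a2 in range(dA):
            Eaa = np.zeros((dA, dA), dtype=complex)
            Eaa[a, a2] = 1.0
            J += np.kron(Eaa, channel(Eaa))
    return J

# Identity channel on a qubit: Choi operator is the unnormalized |phi+><phi+|
J_id = choi(lambda T: T, 2, 2)
phi = np.zeros(4, dtype=complex)
phi[0] = phi[3] = 1.0  # |00> + |11>, unnormalized
assert np.allclose(J_id, np.outer(phi, phi.conj()))

def channel_from_choi(J, dA, dB, T):
    """Inverse direction: E(T) = Tr_Abar((T^T ⊗ id_B) J)."""
    M = np.kron(T.T, np.eye(dB)) @ J
    return M.reshape(dA, dB, dA, dB).trace(axis1=0, axis2=2)

T = np.array([[0.3, 0.1], [0.1, 0.7]], dtype=complex)
assert np.allclose(channel_from_choi(J_id, 2, 2, T), T)  # recovers identity
```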
This is an important point: this representation is useful because it can be used to easily check whether a given linear map is a valid quantum channel or not. The way I defined it, if I give you a linear map by just specifying it in some way, how do you test whether it's a valid quantum channel? In the definition I gave, it looks very complicated: you would have to test all possible reference Hilbert spaces, tensor with the identity, and check on all positive operators that the output is positive — that looks quite complicated. But you see now that it's actually relatively simple, because of the following observation: a linear map E is completely positive if and only if the corresponding Choi operator is a positive operator. And positivity of an operator is simple to check: you just have to check that a single operator is positive. This is an important point; it says not only that checking whether a map is a quantum channel can be done efficiently, but even that you can optimize, say, a linear function over the set of all quantum channels efficiently. So this is an important property. And this is to be contrasted with mere positivity: you might think that testing positivity — just that E maps positive operators to positive operators — is a simpler condition, but it's actually much more complicated. As you'll see in the problem session, positivity of a map is related to the separability problem — testing whether a state is separable or not — and this is hard, much harder than just testing that an operator is positive. Also, there's a way of seeing very easily from the Choi operator whether the map is trace preserving or not: just take the partial trace over the B system and check that it equals the identity on Ā.
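The two Choi-based tests just stated — positivity of J for complete positivity, and Tr_B(J) = id for trace preservation — fit in one small function. As illustrative inputs, I use the Choi operator of the transpose map (which is the SWAP operator, not positive) and of the completely depolarizing channel T ↦ Tr(T) I/2.

```python
import numpy as np

def is_channel(J, dA, dB, tol=1e-10):
    """Valid channel iff J >= 0 (complete positivity) and Tr_B(J) = id (TP)."""
    cp = bool(np.all(np.linalg.eigvalsh(J) >= -tol))
    TrB = J.reshape(dA, dB, dA, dB).trace(axis1=1, axis2=3)
    tp = np.allclose(TrB, np.eye(dA), atol=tol)
    return cp and tp

# Choi operator of the transpose map on a qubit: the SWAP operator,
# which has eigenvalue -1, so the transpose map is not a channel
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
assert not is_channel(SWAP, 2, 2)

# Choi operator of the completely depolarizing channel T -> Tr(T) I/2:
# J = id_Abar ⊗ I/2, which is positive and has the right marginal
J_dep = np.kron(np.eye(2), np.eye(2) / 2)
assert is_channel(J_dep, 2, 2)
```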
I included a proof of this here. One direction: if you start with a map which is completely positive, then the fact that the Choi operator is positive is easy — it's direct, because the Choi operator is obtained by applying id ⊗ E to a positive operator, so by the definition of complete positivity it's positive. So this direction is immediate. The part which requires proof is the other direction: if the Choi operator is positive, then the corresponding map is completely positive. And again, it's not very hard: just write a decomposition of the Choi operator into rank-one terms, and by expanding things out you will be able to write E in the form E(T) = Σ_x K_x T K_x*, and we saw that all maps of this form are automatically completely positive. Again, the trace preserving aspect is also quite immediate. Okay, so now let's move on to the second representation of completely positive maps, or quantum channels: the Kraus representation. It follows immediately from the previous theorem: any completely positive map can be written in the previous form, E(T) = Σ_{x=1}^{r} K_x T K_x*, with the input sandwiched by some operators K_x from A to B — these are sometimes called Kraus operators. And you see that the number r of Kraus operators is directly related to the rank of the Choi operator; of course, again, the trace preserving condition corresponds to Σ_x K_x* K_x = id. This is sometimes also called the operator-sum representation.
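The proof step "decompose the positive Choi operator into rank-one terms and read off the K_x" is also an algorithm for extracting Kraus operators. A sketch, tested on the unitary bit-flip channel T ↦ X T X*, whose Choi operator has rank one:

```python
import numpy as np

def kraus_from_choi(J, dA, dB, tol=1e-10):
    """Eigendecompose J >= 0; each eigenvector, reshaped, gives one Kraus op."""
    vals, vecs = np.linalg.eigh(J)
    ops = []
    for lam, v in zip(vals, vecs.T):
        if lam > tol:
            # v lives on Abar ⊗ B; component v[(a, b)] = K[b, a], so
            # reshape to (dA, dB) and transpose to get K: A -> B
            ops.append(np.sqrt(lam) * v.reshape(dA, dB).T)
    return ops

# Choi operator of the unitary channel T -> X T X* for the bit flip X
X = np.array([[0, 1], [1, 0]], dtype=complex)
J = sum(np.kron(np.outer(e1, e2), X @ np.outer(e1, e2) @ X)
        for e1 in np.eye(2) for e2 in np.eye(2))

K = kraus_from_choi(J, 2, 2)
assert len(K) == 1  # rank-one Choi operator: a single Kraus operator

# The recovered Kraus operator reproduces the channel (K[0] is X up to phase)
T = np.array([[0.6, 0.2], [0.2, 0.4]], dtype=complex)
assert np.allclose(K[0] @ T @ K[0].conj().T, X @ T @ X)
assert np.allclose(K[0].conj().T @ K[0], np.eye(2))  # trace preserving
```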
Okay, so the third representation that's sometimes used is the Stinespring dilation. It says the following: any completely positive map E can be written as E(T) = Tr_E(M T M*), where I sandwich the input with an operator M which maps A not to B directly, but to B ⊗ E for some extra environment space E; and since I don't need the E system, I take a partial trace over it. In this case, E is trace preserving if and only if M is an isometry, M*M = id. And this condition is why it's called an isometry: it preserves norms, so if I start with ψ with a given norm, then Mψ has the same norm. Again, this is not complicated to check given the Kraus representation, for example. One way to do it is to take the map M — which is supposed to be an isometry if the map is trace preserving — to be M = Σ_x K_x ⊗ |x⟩, so naturally the E space is the space spanned by the |x⟩. Why does this work? It's also easy to see: if you compute M T M*, you get Σ_{x,x′} K_x T K_{x′}* ⊗ |x⟩⟨x′|, and when you take the partial trace over E, the cross terms go away — you pick up a δ_{x,x′} — and you recover Σ_x K_x T K_x*. One way of interpreting this, if you'd like, is that you can see an arbitrary quantum evolution as a unitary on a larger space: for example, if you add an extra space R, fix it in some state |0⟩, apply some unitary, and split the output into B and some environment system E, then your map from A to B corresponds to this bigger unitary on the bigger space, to which you have appended |0⟩ on R, followed by the partial trace over E. And again, it's simple to check from the previous results that such a unitary exists. Any questions?
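The construction M = Σ_x K_x ⊗ |x⟩ and the cross-term cancellation under Tr_E can be sketched concretely; the dephasing channel below, with projective Kraus operators, is an illustrative choice.

```python
import numpy as np

# Illustrative Kraus operators: the dephasing channel K_0 = |0><0|, K_1 = |1><1|
K = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]

# Stinespring isometry M = sum_x K_x ⊗ |x>, mapping A (dim 2) to B ⊗ E (dim 4)
M = sum(np.kron(Kx, np.eye(len(K), 1, -x)) for x, Kx in enumerate(K))
assert np.allclose(M.conj().T @ M, np.eye(2))  # isometry <=> trace preserving

def dilated(T):
    """E(T) = Tr_E(M T M*), with the B system first in the output."""
    out = M @ T @ M.conj().T
    return out.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

# The dilation reproduces the Kraus form sum_x K_x T K_x*
T = np.array([[0.5, 0.3], [0.3, 0.5]], dtype=complex)
assert np.allclose(dilated(T), sum(Kx @ T @ Kx.conj().T for Kx in K))
```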
Okay, so the last thing I wanted to mention is something that doesn't have much to do, at least directly, with states and channels, but it's a useful mathematical concept that we'll be using starting from tomorrow, so I thought I'd introduce it today; and those who are already familiar with this material can focus on the problems in the problem session related to this concept. This is the idea of functions that are monotone or convex in the operator sense. You all know what a monotone function and a convex function are, and in these lectures, starting tomorrow, we'll be applying functions to operators rather than to scalars a lot — as I alluded to at the beginning, when I told you that you apply the function to the eigenvalues. In this context it's useful to define the notions of operator monotone and operator convex functions. So, as you know, a function f defined on some interval I of ℝ is said to be monotone if a ≥ b implies f(a) ≥ f(b). And now I say that f is operator monotone if this holds not only when I plug in scalars, but also when I plug in operators: for any dimension d, and any Hermitian operators A and B whose spectra are included in I, where the function is defined, I require that if A ≥ B — and this is now a partial order, where A ≥ B just means that A − B is a positive operator — then f(A) ≥ f(B), again in the same semidefinite sense. The reason this concept is not trivial is that it's different from the usual concept of monotonicity; if they were the same, there would be no need to define a new thing. And you can check this by seeing that there exist functions that are monotone but not operator monotone.
The typical example is the function x², and in the exercises we'll show it: you can find operators A ≥ B ≥ 0 for which A² ≥ B² fails. But still, there are operator monotone functions, and typical examples are the log function and the square root function. And you can define exactly the same thing for convexity. So again, the function is defined on scalars, and now you say that it's operator convex if the convexity inequality is not only true for d = 1. One thing I want to stress is that the usual convexity or the usual monotonicity corresponds to just taking d = 1 — one-by-one matrices — where it becomes the usual definition; for operator convexity and operator monotonicity, I ask that the inequality holds for any dimension d. And so, yes, it's exactly the same inequality: f of a convex combination of A and B is at most the convex combination of the outputs, in the semidefinite order. And, similarly to before, there are examples of convex but not operator convex functions; but there are also functions which are operator convex, and you should work this out in the problem session. I should say that there is a nice theory of operator convex and operator monotone functions, and they are even, in some sense, fully characterized, so we know exactly which functions are operator monotone and operator convex. Okay, and I'm done for today, exactly on time, so I'll stop here.
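The x² counterexample mentioned above can be exhibited with small matrices; the pair A, B below is a standard textbook choice, and the operator square root is computed by applying the scalar square root to the eigenvalues, as described at the start of the lecture.

```python
import numpy as np

def op_sqrt(M):
    """Apply the square root to the eigenvalues of a positive operator."""
    vals, vecs = np.linalg.eigh(M)
    return vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.conj().T

# A standard counterexample: x -> x^2 is monotone on scalars,
# but not operator monotone
A = np.array([[2.0, 1.0], [1.0, 1.0]])
B = np.array([[1.0, 1.0], [1.0, 1.0]])

# A >= B in the semidefinite order: A - B = diag(1, 0) is positive
assert np.all(np.linalg.eigvalsh(A - B) >= -1e-12)

# But A^2 >= B^2 fails: A^2 - B^2 has a negative eigenvalue
assert np.linalg.eigvalsh(A @ A - B @ B).min() < 0

# The square root, by contrast, is operator monotone; for this pair indeed:
assert np.all(np.linalg.eigvalsh(op_sqrt(A) - op_sqrt(B)) >= -1e-9)
```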