OK, so let's start from where we finished yesterday. I remind you that we are discussing quantum communication. The idea is to represent the communication channel as a mapping phi that takes the input state prepared by Alice, transforms it according to the action of the noise acting on the information carrier they are using to exchange messages, and maps it into a new density matrix phi of rho, which arrives at Bob. So that's the basic structure. And as I told you yesterday, this mapping is completely characterized by being completely positive, trace preserving, and linear. Most of the time in the literature these maps are just called CPT maps, OK? This is the acronym that allows you to identify this kind of channel. As we discussed yesterday, these maps arise whenever a quantum system, say the carrier used to send information, interacts with an external environment for some finite time. OK, so in this context we are assuming that Alice is trying to communicate some, for the moment, classical message to Bob. In order to do so, she takes the classical message, which for us is going to be represented by some random variable x, and according to the value of this random variable x she prepares a proper state of the carrier that she sends through the channel. So this passage here is an encoding procedure that, given the value of the classical message, prepares the initial state of the object she's going to send. We can call this mapping a CQ mapping, from classical to quantum, OK? So this is the encoding stage of the process. On the other hand, Bob is going to receive this object here, which is the quantum state of the carrier transformed under the action of the map. From this object he would like to recover the message that Alice wants to send him, OK? In order to do so, the best thing he can do is to measure.
And the measurement will produce a classical outcome. Let's call it x prime. Of course, depending on the kind of measurement, this value can change. But the idea is that from this classical outcome he gets, he is trying to guess what was the original message of Alice, OK? This stage of the process we can call a quantum-to-classical process: you take a quantum state and you transform it into a classical message via a measurement. This is what you may identify as the decoding stage. Now, of course, they don't have control over this object here. This is the noise; it represents the noise of the communication line. They cannot do too much about that, because it is associated with the structure of the environment. But in order to improve the efficiency of the communication, that is, in order to improve the correlation between this x prime and this x, and to reduce the probability that this value here does not correspond to the value Alice has selected, they can try different encoding and decoding procedures. So they can optimize the whole process with respect to the encoding and decoding procedures. OK, but not only that: they can also try to use the channel more times. So let's say this is the first channel use, this is the second channel use. For me, a channel use simply means another quantum state that is transferred from Alice to Bob. So this is the first communication, the second communication, the third communication. You may think about this line as different pulses sent at different times through the channel. So they can try to use the channel multiple times in order to improve the probability of sending the right message from Alice to Bob. Every time, Alice is going to prepare a new state of the carrier, which is transformed according to the channel.
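To make the CPT map concrete, here is a minimal numerical sketch (my own illustration, not from the lecture) of how such a channel acts in the Kraus representation, phi(rho) = sum_i K_i rho K_i-dagger, using a simple bit-flip channel as the noise model:

```python
import numpy as np

def apply_channel(kraus_ops, rho):
    """Apply a CPT map phi(rho) = sum_i K_i rho K_i^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

# Illustrative noise model: a bit-flip channel that flips the qubit
# with probability p (assumption for the example, not from the slides)
p = 0.3
X = np.array([[0, 1], [1, 0]], dtype=complex)
I = np.eye(2, dtype=complex)
kraus = [np.sqrt(1 - p) * I, np.sqrt(p) * X]

# Trace preservation: sum_i K_i^dagger K_i must equal the identity
completeness = sum(K.conj().T @ K for K in kraus)
assert np.allclose(completeness, I)

rho = np.array([[1, 0], [0, 0]], dtype=complex)  # Alice sends |0><0|
rho_out = apply_channel(kraus, rho)              # what arrives at Bob
print(rho_out.real)  # diag(1-p, p)
```

The completeness check is exactly the trace-preserving condition mentioned above; complete positivity is automatic for any map written in this Kraus form.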
And therefore now, instead of encoding the classical message directly into the state of a single carrier, as we were considering here, what Alice can do, depending on the value she wants to send, is prepare a joint state of multiple carriers that are sent through the channel. This is a possibility that she has. Similarly, Bob receives this state that has been transformed by the action of the noise, and what he can do is try to detect the full collection of states arriving to him and perform, say, a joint measurement that is going to give him a better estimate of the value x. So this is really the situation we are considering here. Of course, as I mentioned yesterday, when you consider this kind of extended protocol with respect to the original one, there is an issue, because now, in order to send a single message, you are using many, many quantum pulses just to carry that single message. This naturally brings with it the notion of rate. The rate, a notion we introduced already last time, is defined as the number of bits that can be transferred for an assigned encoding and decoding procedure, divided by the number of channel uses. The number of channel uses is just the number of pulses you are using to encode this classical message, and the number of bits is just the number of bits stored in this random variable. So this is the definition of rate: the rate of the communication, the rate of the protocol. The protocol is simply defined by fixing this mapping here and fixing this measurement here that allows you to extract the information. For each protocol you have a rate, and also for each protocol you have an error probability. The error probability of the protocol is simply associated with the correlation between this x prime and this x.
Because these two variables need not be exactly correlated, there will necessarily be an error probability associated with a given protocol, OK? This is a function of the protocol, of course. And now the point is that we are looking at the protocols that allow us to reach, asymptotically, a vanishing value of the error probability. So you select, among all possible protocols, those for which, in the limit in which n goes to infinity (and this n here refers to the number of channel uses you allow during the communication), the error probability goes to zero. If you have a protocol that allows you to reach this kind of result, the associated rate of such a protocol is called achievable. A rate is achievable if it allows zero error probability in the long run. And now, of course, among all the possible achievable rates, you want to find the maximum, OK? This is the definition of the capacity of a channel. So the capacity of the channel, as I wrote here in the slide, is the maximum value of the rate, computed over the set of all achievable rates, that is, over the protocols that guarantee vanishing error probability, and the formal expression is just given by this limit here. This is a notion that simply derives from the classical analogue of a communication line; the notion of capacity was introduced for classical models by Shannon, and we are just using it to describe the efficiency of this quantum communication line in sending the message. Now, there are a few things you may notice. Because the capacity has been obtained by optimizing with respect to all possible encoding and decoding procedures, the capacity itself is going to be just a functional of the noise.
So the capacity of a channel, which is the maximum rate, by construction is just a functional of the noise, OK? It doesn't depend on encoding and decoding, because you have optimized with respect to those procedures. And of course, you can also say that the higher the capacity, the better the quality of your channel. So you can use the notion of capacity as a figure of merit to quantify the amount of noise introduced by a particular channel. You can order your channels, in a sense, in terms of capacity, because this is a nice figure of merit: you can say this channel is better than that other channel because it allows me to reach a higher rate of communication. So it's an important quantity. OK, so this is the formal definition of capacity. Now, you can be a little bit more specific in the construction of this functional, and you can distinguish between different situations. For instance, here we were saying, OK, we are going to use the channel many, many times, and we encode our message into a joint state of these carriers; similarly, here we detect the full collection of states that arrive in order to recover the value of x. However, these two processes can be performed in rather different ways. First of all, you can decide to select as encoding procedures those in which x is mapped into a state of the carriers which is separable. So you can ask what happens if, instead of selecting a joint state which may be entangled, you restrict yourself to separable encodings. The reason why this may be interesting is that we know preparing entangled states is difficult, so maybe we can just restrict ourselves to separable encodings.
And similarly, you can restrict yourself to the situation in which, instead of performing a joint measurement on the received state, you measure each one of the carriers independently, get some outcomes, and then use classical post-processing to reconstruct the value of x prime. This is exactly the situation I depicted in the first figure over there. So you can restrict yourself to separable encoding and local measurement, OK? [Question: do we need product states here?] No, separable, just separable, not entangled; you can even allow for classical correlations, it doesn't matter. In the end, you can prove that product states are actually those that optimize the capacity in this case, but here we just consider separable states. By doing so, the resulting value of the capacity gets a special name: we call it the capacity C_cc. The two c's refer to the fact that we are using a kind of classical encoding, because we are not allowing entanglement, and separable measurements, so no joint measurements. This is the first example of capacity you can define, and this quantity will of course be a function of phi, the map. But you can do a little bit better. For instance, you can decide to use entangled states as input states, but still try to recover the value of x by performing local measurements. We call that the capacity C_qc, meaning that we are using quantum encoding but classical decoding; by classical, again, we mean performing local measurements. Of course, you can have the opposite situation, in which you use classical encoding, that is, separable states, and you perform a joint measurement, not a local one. What is a joint measurement? A joint measurement, for instance, is a measurement that you perform by letting the carriers interfere, say, in an interferometer.
And then you measure after the interferometer, something like that; it's a joint measurement, like a Bell state measurement. This introduces a new notion of capacity: this is the C_cq capacity, classical encoding and quantum decoding. And finally, you have the best you can hope for: you encode quantumly, allowing also for entangled states, and you decode quantumly, allowing for joint measurements. This is the C_qq capacity. So you have four alternative definitions of capacity, and by construction there is a natural ordering between some of them. For instance, it's clear that C_qq, the unrestricted capacity, in which you allow for all possible strategies you can think of (entangled states here, joint measurements there), is the highest capacity you can hope for. So we know that this guy is certainly the largest. The smallest one, on the other hand, is the C_cc capacity, because there you have the strongest requirements. The other two, the C_qc and C_cq capacities, are intermediate. So we know there must be an ordering of the following form: C_cc is certainly smaller than both C_cq and C_qc, and all of them are certainly smaller than C_qq. And what people typically call C, by the way, is just this guy here: in the literature the name C is used for the unrestricted capacity C_qq. Now the issue, and this is not trivial, is that we don't know which relation holds between C_cq and C_qc: maybe there are some channels for which one is larger than the other, or the opposite; we don't know. As a matter of fact, nobody has ever been able to compute C_qc; we don't have an expression to compute it, and no example whatsoever. What we do know is how to compute C_cq, and this quantity is also known in the literature as C1, or the Holevo capacity.
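In symbols (using the subscript convention just introduced, encoding letter first, decoding letter second), the ordering reads:

```latex
C_{cc}(\Phi) \;\le\; C_{cq}(\Phi),\, C_{qc}(\Phi) \;\le\; C_{qq}(\Phi) \equiv C(\Phi),
```

while the relative ordering of $C_{cq}$ and $C_{qc}$ is, as just said, unknown.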
We have a formula for this guy. We know how to compute this quantity; this is the Holevo-Schumacher-Westmoreland theorem. And in principle we could also compute C_cc, because this is just an optimization with respect to restricted encodings and measurements, but we don't have a closed expression for it, and about the C_qc capacity we really know nothing. Yes, in principle you can compute C_cc, but optimizing with respect to all possible classical encodings and local measurements is not an easy task. [Question from the audience.] Most likely, yes: without this resource, the capacity is going to be smaller, in principle. OK, so this is very nice. We have four different definitions of classical capacity for a quantum channel, and as I mentioned before, we do have formulas that allow us to compute at least the last two, C_cq and C. The basic ingredient that allows us to compute the Holevo capacity and the C capacity is the Holevo bound. This is a fundamental bound in quantum information, which is set on what is called the accessible information. What is the accessible information? The accessible information has to do with a state discrimination problem. The problem is as follows. Suppose that somebody gives us a collection of states, something like rho_1, rho_2, rho_3, and so on and so forth; you have N possible states, and maybe they are even prepared with different probabilities: probability p_1 for the state rho_1, probability p_2 for the state rho_2, and so on. This collection of objects defines what is called an ensemble of states: a collection of states with weights. This is the starting point of the problem. Now somebody selects a given state out of this collection, out of this ensemble, this rho-question-mark object, and gives it to you.
And now you are asked to determine which element of the ensemble corresponds to the state that was given to you. We want to determine the amount of information, that is, the efficiency with which you can determine the value of rho-question-mark, given that you know the set from which the state was extracted. The maximum information that you can extract from the state that is given to you, in order to recover which state was selected, is called the accessible information of the problem. Of course, you can perform whatever measurement you wish, and you can quantify how efficiently you solve this problem in terms of the mutual information that was introduced to you by Martin a few lectures ago. So the Holevo bound is a bound that puts an upper limit on the accessible information, that is, on the efficiency with which you can solve this state discrimination problem. It is defined in terms of a functional, this chi functional, which is just a functional of the ensemble we started with. It is an entropic quantity: you take the entropy of the average state of the ensemble (you take all the possible states of the ensemble, weight them with the associated probabilities, and compute the average state, then take its von Neumann entropy), and then you subtract the average of the entropies of the individual constituents of the ensemble. This is a very well defined functional; it is just a functional of the ensemble, with no reference whatsoever to the measurement you perform on it. And the Holevo bound says that the accessible information, independently of what you do to solve the problem I gave you, so independently of the measurement you select, is always smaller than this chi.
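As a small sketch (my own illustration), the chi functional of an ensemble {p_i, rho_i} is chi = S(sum_i p_i rho_i) - sum_i p_i S(rho_i), which is straightforward to evaluate numerically:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # convention: 0 log 0 = 0
    return -np.sum(evals * np.log2(evals))

def holevo_chi(probs, states):
    """chi = S(average state) - average of the individual entropies."""
    rho_avg = sum(p * r for p, r in zip(probs, states))
    return von_neumann_entropy(rho_avg) - sum(
        p * von_neumann_entropy(r) for p, r in zip(probs, states))

# Example ensemble (my choice): two pure, non-orthogonal qubit states
# |0> and |+>, each prepared with probability 1/2
psi0 = np.array([1, 0], dtype=complex)
psi1 = np.array([1, 1], dtype=complex) / np.sqrt(2)
states = [np.outer(v, v.conj()) for v in (psi0, psi1)]
chi = holevo_chi([0.5, 0.5], states)
print(chi)  # about 0.601: strictly less than 1 bit, since the states overlap
```

Because the two states are pure, the second term vanishes and chi reduces to the entropy of the average state; the overlap between the states is exactly what keeps chi below one bit.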
It is important to notice that this bound is in general not tight. That is, there are examples of ensembles for which there is a strict gap between the accessible information and the Holevo quantity. Of course there are also examples in which the gap closes, depending on the ensemble, but in general the inequality there is a strict inequality. Nevertheless, by exploiting this result by Holevo, you can compute the capacity of the channel. The point is that, you see, when you compute the capacity, you are looking at a state discrimination process: you are trying to discriminate among a large collection of states, infinitely many in the limit, because when you compute the capacity you have to take this limit here in the number of channel uses. And in this asymptotic limit, the gap closes. That's why you can express the C_cq capacity in terms of the Holevo bound. Okay, so, I'm not giving you the proof, but by using the Holevo bound it is possible to show that C1 (I remind you that C1 is the capacity C_cq: you start classically and you detect jointly what you receive) admits a single-letter expression that allows us to compute it, okay? You have to compute the Holevo information, optimizing with respect to all possible ensembles that you inject into the channel. That's the main point. And this guy, I told you, is called the Holevo capacity of the channel, okay? It has different names: it's called C1, it's called the Holevo capacity, it's called C_cq, and it's also called the single-shot capacity. Different names for the same quantity, yeah?
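Written out, the single-letter expression just described is, in standard notation:

```latex
C_1(\Phi) \;=\; \max_{\{p_x,\,\rho_x\}} \left[\, S\!\left(\Phi\!\left(\textstyle\sum_x p_x \rho_x\right)\right) \;-\; \sum_x p_x\, S\!\left(\Phi(\rho_x)\right) \right],
```

with the maximization running over all input ensembles, exactly as stated above.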
Okay, so by the same token, you can also compute the C capacity, the C_qq, the big one. The full capacity of the channel is obtained, basically (you can prove it, and this is the content of the Holevo-Schumacher-Westmoreland theorem), as a sort of regularized version of the C1 capacity that we have seen before. So basically you take n applications of the channel, compute the Holevo capacity of this collection of n channel uses, divide by n, and take the limit for n going to infinity, okay? And this is the other formula that we know. So let me give you some examples, okay? These are trivial examples; I'm not discussing complex situations here. Let's consider the case of a noiseless channel. A noiseless channel is the best channel you can hope for: basically nothing happens during the communication. You send rho and you receive rho, yeah? So here we have an intuition of what is going to happen. Now you can use the formula we have given: when you do the optimization, you have to optimize the Holevo information with respect to a channel that doesn't have any noise, selecting all possible ensembles, and you can simply verify by yourself that in this case the capacity is just given by log D, where D is the dimension of the quantum carrier you are using for the communication. So if you use a qubit, you can just send one bit of information for each channel use, which is kind of what we naturally expect, and indeed the formula gives us the correct result. And in this particular case there is no gap between C_qq and C_cq, okay? All the capacities have the same value: for a qubit it is log 2, that is, one bit. Now, a less trivial example is the depolarizing channel. A depolarizing channel is this object here.
We introduced it also in the last lecture: it is a channel which with probability one minus p leaves the state unperturbed, but with probability p replaces it with the completely mixed state, okay? So there is some noise associated with this process, and p of course is a probability. Now, you can show that also in this case C and C1, the single-shot capacity and the full capacity, coincide, and they are given by this expression here, log D minus S_min of phi, where this quantity S_min is the minimum output entropy of the channel. So what you have to do is compute the minimum entropy you can achieve at the output of this channel. This quantity is not zero, because every time you send a message it gets mixed with the completely mixed state; the value of S_min is going to be a function of the channel, and in particular a function of p, the error probability associated with the channel. Here I'm just plotting the final result: this is the value of the capacity as a function of the error probability. As you can see, as the error probability goes to zero, the capacity becomes log D; in the case D equal to two, that is log 2, so one bit: if we are sending qubits, we can send one bit of information per channel use. If instead the error probability is maximal, that is, one, the capacity goes to zero, because whatever you send is transformed into the completely mixed state and there is no way you can communicate, okay? And in between, the capacity is a decreasing function of p. Okay, so now, as I mentioned before, at least in principle we expect a sort of ordering between these capacities, okay? We know that C can certainly be larger than the others, because the others are capacities which are restricted somehow.
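A numerical sketch of the formula just quoted, C = log D minus S_min. The step I add here (an assumption of the standard form phi(rho) = (1-p) rho + p I/D) is that a pure input state gives output eigenvalues (1-p+p/D) once and p/D with multiplicity D-1, from which S_min follows directly:

```python
import numpy as np

def depolarizing_capacity(p, d=2):
    """C = log2(d) - S_min for the depolarizing channel
    phi(rho) = (1-p) rho + p I/d. A pure input yields output
    eigenvalues (1-p+p/d) and p/d (the latter d-1 times)."""
    lam = np.array([1 - p + p / d] + [p / d] * (d - 1))
    lam = lam[lam > 1e-15]                 # convention: 0 log 0 = 0
    s_min = -np.sum(lam * np.log2(lam))    # minimum output entropy
    return np.log2(d) - s_min

print(depolarizing_capacity(0.0))  # 1.0 : noiseless qubit channel
print(depolarizing_capacity(1.0))  # 0.0 : output always maximally mixed
```

The two extreme values reproduce exactly the behavior of the plot described above, and intermediate values of p interpolate monotonically between them.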
But for a long time in the community there was an issue of whether or not these bounds were strict, because it was still possible that all these quantities just collapse and assume exactly the same value. There was a huge debate about the possibility that the full capacity and the single-shot capacity could indeed be identical. Now, if this conjecture were true (and now we know that it is not, yeah?), it would mean that in order to improve the transmission you don't need to prepare entangled states: you could achieve the same rate without using entanglement at the beginning of the communication. So this is a very important point. This conjecture was called the additivity problem of the Holevo information, and for many years we didn't know whether it was true or not. It turns out that in 2004 Peter Shor was able to show that this additivity problem of the Holevo capacity is related to other additivity problems that we face in quantum information, and here I put just a list of them. For instance, the additivity problem of the Holevo capacity is strictly related to the additivity property of the minimum output entropy of a quantum channel, and so on and so forth. And finally, in 2008, Hastings was able to disprove the conjecture for the additivity of the minimum output entropy, therefore providing a specific counterexample that falsifies all these additivity statements. So now we know, because of this result by Hastings (this is an important paper), that C and C1 in general do not agree: they are different, there is a gap. Of course it may be possible that for some special channels (and we have explicitly seen some examples of those channels) these two guys are the same, but in general you have to assume that there is a gap. How big is the gap? We don't know. Okay.
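In formulas, the conjectures mentioned here read as follows (writing C_1 for the Holevo capacity and S_min for the minimum output entropy; both equalities are now known to fail in general):

```latex
C_1(\Phi_1 \otimes \Phi_2) \;\overset{?}{=}\; C_1(\Phi_1) + C_1(\Phi_2),
\qquad
S_{\min}(\Phi_1 \otimes \Phi_2) \;\overset{?}{=}\; S_{\min}(\Phi_1) + S_{\min}(\Phi_2).
```

If the first equality held for all channels, the regularization would be unnecessary and C would equal C_1.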
In the sense that there are no examples whatsoever of channels for which we can compute C when C does not coincide with C1. For all the channels that we know and for which the capacity has been computed, there is no gap: you can compute C1, you can compute C, and the two coincide. Apart from the counterexample by Hastings; but in his counterexample, what Hastings proved is the existence of a gap, not the exact value of C. He showed: okay, C must be larger than this, and C1 must be lower than that, so there is a gap; but you don't know the exact values, even for that special channel. So how big is the gap? We don't know. Let me try to summarize the situation. Here we have the different capacities: C, C1, C_cc. We know for sure that there is a gap between C and C1. We don't know too much about the gap between C1 and C_cc; we have no examples whatsoever. And most importantly, we don't know where C_qc fits in this picture, because there is no definite ordering for this problem. Okay, so that's enough for sending classical messages through a quantum channel. So far we assumed we have a quantum channel, which in principle is an object that transfers quantum states, okay? But in the way we analyzed the communication problem, we were using this high-performance communication line just to send classical information. Of course you can do better than that: you can try to use the same communication line to send quantum states through the channel instead of classical messages. This is yet another problem you can try to solve. Okay, so what does it mean to send a quantum message through the channel? The idea is that you don't have this encoding stage here; you don't start from a classical message that you want to transfer into a quantum state. Instead, you have a quantum message. What is a quantum message?
Basically, it's a state psi that belongs to some Hilbert space of yours, yeah? And you don't know too much about it: you don't have a classical description of it, you have a single copy of it, and you want to send this guy to Bob; you want Bob to receive that state as psi prime. Now, what you can do is try to encode this quantum state into the state of the carrier. Maybe you just take the state itself and send it through the channel, but maybe this is not the best thing you can do. So what you do is encode: you transfer the message from your local memory to the state of the carrier. It's a quantum encoding. You send it through the channel, the channel is going to affect it somehow, and then, at this stage, you try to undo the encoding, okay? This undoing typically cannot be performed just by measurement: we know that if we want to preserve coherence, it makes no sense to measure the state we receive. You have to do something more clever. The best thing you can do, typically, is to take this guy, plug it into a quantum computer, and run some algorithm, a quantum error correction procedure, that extracts and reconstructs the original state Alice wanted to send you. So in analyzing the transfer of quantum messages through a quantum channel, what differs with respect to the previous analysis is this encoding stage, where you start from a generic state of a Hilbert space that has to be transferred, and also the procedure that allows you to decode: the decoding is going to be some generic quantum error correction procedure. Okay. And again, instead of considering what happens in a single use of the channel, you can analyze this problem in the multiple channel uses scenario. The idea is: I take my initial state psi, this guy here, and I am going to write it somehow into a joint state of many channel uses.
So the idea is that, as in the previous case, when you take this limit, you are considering a situation where, as you increase the number of channel uses, you may also increase the dimensionality of the Hilbert space. You fix a Hilbert space of dimension D, let's say, and then you try to send whatever is in that Hilbert space through the channel; then, as you increase the number of channel uses, you may increase the dimensionality of the Hilbert space as well, okay? This is the dimensionality of your local memory, if you want. [Question from the audience.] Yes, but here we don't consider that kind of limitation: you assume you have a Hilbert space of assigned dimension, and you want to transfer whatever is inside that memory into the channel; no limitation whatsoever on that. If you want to include a limitation here, by the way, you can include it as part of the noise: if you have some problem in the state preparation of the memory, you can treat it as a problem associated with the transmission, so you can fold this limitation into the definition of the channel. Okay, so this is the situation. Once again, as I was mentioning, you load this state into a collection of carriers, and then, instead of measuring, you perform a quantum error correction kind of process on the collection of states you receive, in order to recover the state psi prime. So once more, you can define a notion of rate, which is given now by the ratio between, not the number of bits, but the number of qubits you transfer, and the number of channel uses. What is the number of qubits? Basically, it is the logarithm of the dimensionality of the Hilbert space you are transferring: this number is given by log 2 of D, where D is the dimensionality of the memory. It quantifies how many qubits can fit in that Hilbert space.
So the rate is defined as log 2 of D divided by the number of channel uses you employ in the communication. Once more, this is the rate. And once more, of course you can do whatever you want, but in the end you would like to have this state very close to that state: there could be procedures that apparently let you send a lot of information but with a huge difference between these two guys. So we need to quantify how accurately this guy corresponds to that guy. Here we don't use the notion of error probability; instead, you may use the fidelity, the average fidelity. You want the fidelity between these guys to be as large as possible. So you introduce a notion of fidelity of the communication, yeah? And now, in the limit in which n, the number of channel uses, goes to infinity, you require the average fidelity to go to one; you want to optimize the fidelity. The average fidelity is, for quantum states, the counterpart of one minus the error probability. And this is exactly what you do: you define again a rate to be achievable if it allows you, asymptotically, to reach this kind of limit. Among all possible achievable rates, you take the highest one, and this guy is a new definition of capacity. We call this guy Q, Q of phi, as before: the new capacity is the maximum over all these rates. Of course it is different from the one we introduced before, because the encoding and decoding procedures differ from the classical problem we considered. This Q is called the quantum capacity of a quantum channel, and as in the previous case it is going to be a function of phi only, because you have optimized with respect to everything else. Okay, so C was the classical capacity of a quantum channel; Q is the quantum capacity of a quantum channel.
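A tiny numerical sketch (with illustrative numbers of my own) of the two figures of merit just defined: the rate log2(D)/n, and the fidelity of a received state with respect to a pure target state:

```python
import numpy as np

def rate(dim_hilbert, n_uses):
    """Qubits transferred per channel use: log2(D) / n."""
    return np.log2(dim_hilbert) / n_uses

def fidelity_pure(psi, rho):
    """F = <psi| rho |psi>: overlap between the target pure state
    and the (possibly mixed) state Bob actually reconstructs."""
    return np.real(psi.conj() @ rho @ psi)

# Hypothetical numbers: a 4-dimensional memory (2 qubits) sent in 3 uses
print(rate(4, 3))  # 2/3 of a qubit per channel use

psi = np.array([1, 1], dtype=complex) / np.sqrt(2)           # target |+>
rho = np.array([[0.55, 0.45], [0.45, 0.45]], dtype=complex)  # received state
print(fidelity_pure(psi, rho))  # 0.95
```

A perfect protocol would give F = 1; the achievability condition above asks that this number tend to one as n grows.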
Okay, so once more we have to compute this awful object here: you have to optimize with respect to all possible procedures and then take the limit as n goes to infinity. Apparently it's a very difficult problem. However, we do have a theorem here as well, due to Lloyd, Shor, and Devetak, which allows us to explicitly compute this quantity, okay? The theorem says that the quantum capacity, this awful object here, can be obtained as a limit. Of course you have to take the limit as n goes to infinity, as in the case of C, and the expression involves a new entropic function of the channel, which is called the coherent information of the channel, and is defined here. I'm not entering into the explicit definition of this quantity; it is yet another entropic functional, of course a function of phi, constructed in terms of differences of von Neumann entropies. It is different from the Holevo quantity: the coherent information is a new function. You take this limit and then you can compute the value of Q, okay? Now, what do we know about Q? Well, first of all, the quantum capacity of a given channel is certainly smaller than the classical capacity, and this is natural, because you can cast the transfer of classical messages through the channel within this same framework by restricting yourself to a subset of the states you send through the channel. Sending classical states simply means that you fix a basis of the space and you only want to send that specific basis, because every classical message can be associated with an element of a basis of your space. So the classical capacity can be seen as a restricted version of the quantum capacity, restricted in the sense that you only want to send a special subset of all possible states that fit in your quantum memory.
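For reference, the theorem just mentioned can be sketched as follows (standard notation assumed here, not taken from the slides: $S$ is the von Neumann entropy, $|\Psi_\rho\rangle$ a purification of the input $\rho$, and $\mathrm{id}$ the identity on the purifying system):

```latex
I_c(\rho,\Phi) = S\big(\Phi(\rho)\big)
  - S\Big((\Phi\otimes\mathrm{id})\big(|\Psi_\rho\rangle\langle\Psi_\rho|\big)\Big),
\qquad
Q(\Phi) = \lim_{n\to\infty}\frac{1}{n}\,\max_{\rho}\, I_c\big(\rho,\Phi^{\otimes n}\big).
```

Note the limit over n cannot in general be removed: the coherent information is not additive, which is exactly why the quantum capacity is so hard to compute.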
So for that reason it is less demanding to fulfill the goal of transferring classical messages than to fulfill the goal of sending quantum messages. Therefore it is natural to expect this kind of inequality: C is greater than Q, for sure. And we can prove it; it is not just a hand-waving argument, you can prove that this is really the case. Indeed, there are some extreme examples of communication lines where Q, the quantum capacity, is strictly equal to zero while C is different from zero. So there are channels which allow you to send classical messages with vanishing error probability, but which prevent you from sending any quantum message whatsoever. And these are the channels we are using every day, basically. All the communication lines we use, like cell phones, or the voice I am using to communicate with you, are communication lines of this kind: they are capable, I hope, of transferring classical information, but of course I cannot transfer quantum messages through them. So this is an example of a quantum channel with zero quantum capacity and non-trivial classical capacity, yeah? So this is a fact. Another important phenomenon associated with the quantum capacity is what is called superactivation. Okay, what is superactivation? Superactivation is a property of quantum channels which was discovered by Smith and Yard quite a few years ago, and it has to do with the fact that there are examples of quantum channels, say Phi1 and Phi2, such that if you take them independently, if you just use Phi1 as a channel to communicate quantum messages, it does not allow for the communication: it has strictly zero quantum capacity. So the quantum capacity of Phi1 is zero, and at the same time the quantum capacity of Phi2 is also zero. These two channels by themselves cannot transfer any quantum message.
However, if you use them jointly, meaning that every time you want to send a message you use both channels at the same time, sending part of the message through the first channel and part through the second, possibly entangling the uses of the two channels, then it may happen that the two of them together have nonzero quantum capacity. So it's like you have two bad channels, neither of which can be used to transfer quantum messages, but if you use them jointly they kind of correct each other. Okay. People study this kind of example because it is extremely non-trivial, and we have few examples of this fact. Okay. Yeah, the point is that, the idea is that instead of storing the quantum message in either channel alone, you store it in the coherence that connects the two. Somehow the coherence survives, and you can exploit this fact to encode your message in that part of the communication. Yeah. Okay. Okay, so that's fine, I think. So I will conclude by saying the following: we have studied how to send classical messages through a quantum channel, and we have studied how to send quantum messages through a quantum channel. But in quantum communication there is also another option, namely that we can use shared entanglement to improve the quality of a communication line. So what does that mean? Suppose you have your communication line here, and suppose that Alice and Bob, prior to the communication, have each been provided with half of an entangled state, which sits in their labs. Okay. Now, we know that entanglement by itself cannot be used to transfer information. You can summarize this fact by saying that there is nothing like a Bell telephone: you cannot send messages by simply using shared entanglement, okay?
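In symbols, the Smith–Yard superactivation phenomenon described above (the two channels written here as $\Phi_1$ and $\Phi_2$) is the statement that the quantum capacity is not additive:

```latex
Q(\Phi_1) = 0, \qquad Q(\Phi_2) = 0, \qquad \text{and yet} \qquad Q(\Phi_1 \otimes \Phi_2) > 0.
```

So evaluating each channel in isolation can be completely misleading about what the pair can do together.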
However, we know very well that entanglement is a sort of catalytic resource for communication, because there exists something like teleportation. Teleportation allows you to send a quantum state while using a purely classical channel. Similarly, you can use entanglement to increase the amount of classical information that is sent through a given noiseless quantum channel, by means of superdense coding, okay? I think these facts were probably already introduced to you by Martin. Anyway, you can ask the same question: what happens if now, instead of having a noiseless channel or a purely classical channel, I allow Alice and Bob to share entanglement while they are using the noisy quantum channel we are considering? In this case we can compute once more the classical capacity and the quantum capacity, including in all possible encoding and decoding procedures the presence of this side resource of shared entanglement. And this gives you a new definition of capacity: the entanglement-assisted capacities, CE and QE. You have CE for the classical version of the transmission line and QE for the quantum rate of the communication. They are related by a factor of two, and that factor of two is a reminder of the relation between superdense coding and teleportation: you remember that in teleportation you have to send two classical bits in order to transfer one qubit. That two is there. And we know exactly how to compute it: this formula was proven by Bennett and collaborators in 2001. So we now know exactly how to compute this object. Now, I will skip all this material, and I am just going to present an example of a quantum channel for which all these capacities have been computed. This is the lossy bosonic channel, a channel model which is very common and very practical, because it refers to a realistic example of a communication line.
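As a sketch, the result of Bennett and collaborators expresses the entanglement-assisted classical capacity as a single-letter maximization of the quantum mutual information (notation assumed here, as before: $S$ the von Neumann entropy, $|\Psi_\rho\rangle$ a purification of $\rho$), with the quantum version fixed by the factor of two just mentioned:

```latex
C_E(\Phi) = \max_{\rho}\Big[\, S(\rho) + S\big(\Phi(\rho)\big)
  - S\Big((\Phi\otimes\mathrm{id})\big(|\Psi_\rho\rangle\langle\Psi_\rho|\big)\Big) \Big],
\qquad
Q_E(\Phi) = \tfrac{1}{2}\, C_E(\Phi).
```

Remarkably, unlike C and Q, no regularization over n is needed here: a single channel use suffices.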
Basically, you have, say, an optical fiber with some loss, which is parameterized by this quantity eta. It represents the fraction of photons that survive the transmission through the communication line, so the parameter eta quantifies the amount of noise. And for this special channel we can compute all the capacities I have introduced here. So, for instance, this black line is the quantum capacity; this red line is the classical capacity, which is higher than the black line. And then you have the entanglement-assisted capacities: the classical version, which is the blue line, and the quantum-assisted version, which is this green line. So this is one of the very few examples for which we have the full picture, in a sense. Okay, so I think I can stop here, unless there are questions. Thank you very much.
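As an illustration (not from the lecture), here is a minimal sketch computing two of the curves just described for the pure-loss bosonic channel with transmissivity eta and a mean photon number constraint nbar per use. It assumes the standard published formulas, C = g(eta*nbar) and Q = max(0, g(eta*nbar) - g((1-eta)*nbar)), where g is the entropy of a thermal state; the function names are my own.

```python
import math

def g(x: float) -> float:
    """Entropy (in bits) of a thermal bosonic state with mean photon number x."""
    if x <= 0:
        return 0.0
    return (x + 1) * math.log2(x + 1) - x * math.log2(x)

def classical_capacity(eta: float, nbar: float) -> float:
    """Classical capacity C = g(eta * nbar) of the pure-loss channel,
    assuming the known formula for transmissivity eta, mean photon number nbar."""
    return g(eta * nbar)

def quantum_capacity(eta: float, nbar: float) -> float:
    """Quantum capacity Q = max(0, g(eta*nbar) - g((1-eta)*nbar)),
    assuming the known formula; note it vanishes for eta <= 1/2."""
    return max(0.0, g(eta * nbar) - g((1 - eta) * nbar))

if __name__ == "__main__":
    eta, nbar = 0.8, 1.0
    print(f"C = {classical_capacity(eta, nbar):.4f} bits per channel use")
    print(f"Q = {quantum_capacity(eta, nbar):.4f} qubits per channel use")
```

Sweeping eta from 0 to 1 with these two functions reproduces the qualitative picture on the slide: the classical-capacity curve lies above the quantum-capacity curve everywhere, and the quantum capacity is identically zero once more than half the photons are lost.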