... the difficulty lies in the fact that quantum metrology covers a whole family of subproblems: parameter estimation, detection, hypothesis testing, and so on. I will not be able to cover everything, but I will try to give you the general picture. So the outline of my presentation is this one. To begin with, I will start by briefly reviewing what a quantum measurement is. How do we formalize these processes in quantum mechanics? And then, once we have set these ideas, I will move to discuss explicitly the state discrimination problem, process discrimination, and parameter estimation. OK, so, quantum measurements. In quantum mechanics, a measurement is basically any process that extracts classical information from a quantum system. That is basically what a quantum measurement is. The most familiar variety of quantum measurement is the projective measurement: you fix an orthonormal basis of the Hilbert space of the system, and the measurement asks in which element of this basis the system is found. So basically a projective measurement, given psi, is testing whether or not this state is one of the elements of such a basis. And of course the result of the measurement is not deterministic in general, but is characterized by some probability, which is given by this expression here: you take the projection between psi and j, one of the elements of the basis, and you square it, p(j) = |<j|psi>|^2. This is basically Born's rule.
And of course the same notion can be generalized to the case in which your initial state is not a pure state but a mixed state; in that case the same relation generalizes to this form, p(j) = Tr(rho |j><j|). Now, as an example of a projective measurement, just consider the case in which the state of interest, the quantum state you are probing, is a single photon which has been prepared in one of two possible orthogonal polarization directions, and the projective measurement simply tries to determine whether this incoming single photon is vertically or horizontally polarized. What you do is put a polarizing beam splitter here and then photodetectors on the two associated output ports. If you get a click here, you project the state onto the vertical polarization component, and otherwise onto the orthogonal, horizontal one. This is a projective measurement. Now, as a matter of fact, this is a very basic notion of measurement, which is very useful, but it doesn't really capture all the possible measurements you can perform on a quantum system. Let's see what we can do beyond projective measurements. In many cases of physical interest, given a system s that you want to probe, you typically don't have direct access to s itself. What you do instead is introduce an ancillary system a, prepared in some state sigma; you couple the system of interest to the ancilla through some process, and at the end of the process you measure a. So in this representation you are using a as a sort of probe that allows you to test the state of the system s. This is very common. For instance, this is the famous Rutherford experiment, a classic experiment of quantum mechanics. The system of interest here is the target, which is a gold foil. OK.
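The two forms of Born's rule above can be checked numerically. A minimal NumPy sketch (the state and basis below are illustrative choices, not from the slides), computing the outcome probabilities both for a pure state and for the equivalent density matrix:

```python
import numpy as np

def projective_probs_pure(psi, basis):
    """Born's rule for a pure state: p(j) = |<j|psi>|^2."""
    return np.array([abs(np.vdot(j, psi))**2 for j in basis])

def projective_probs_mixed(rho, basis):
    """Born's rule for a mixed state: p(j) = Tr(rho |j><j|) = <j|rho|j>."""
    return np.array([np.real(np.vdot(j, rho @ j)) for j in basis])

# |psi> = (|0> + |1>)/sqrt(2), measured in the computational basis
psi = np.array([1.0, 1.0]) / np.sqrt(2)
basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
p_pure = projective_probs_pure(psi, basis)

# The same state written as a density matrix gives the same statistics
rho = np.outer(psi, psi.conj())
p_mixed = projective_probs_mixed(rho, basis)
```

Both computations give the 50/50 statistics of the equal superposition, and the probabilities sum to one, as they must for any complete basis.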
To probe the state of the target, you shine alpha particles on it; these are the ancillary system. Then, after the two have interacted, you detect the state of the alpha particles. So this is indeed a measurement scheme, which corresponds exactly to this representation here. And it turns out that if you look at the kind of information that this process produces about s, it is not describable as a projective measurement on s. It may be a projective measurement on a, but it is not a projective measurement on the system of interest. So this kind of process is slightly more general than the previous one, the simple direct projective measurement on the system. But it is not the full story yet. Indeed, there are other measurement processes that we would like to characterize which are not projective and are not of the kind we have just seen. For instance, we have noisy detection schemes. What is a noisy detection scheme? Basically, again, s is the system you want to probe, but now it is interacting with an external system, which I again call a. This time, though, a is not a probing system: it is the environment, which interacts with s somehow. After the interaction, you perform, say, a projective measurement on the system s. OK? So this time you still project the state of s, but you do it after it has interacted with an external environment. This is the typical scenario of a noisy detection scheme. An example is given by this representation here. Once more you have some photons, some mode of the electromagnetic field, that you want to probe with a photodetector; unfortunately, not all the photons present in this mode reach the detector, because you lose some of them through the interaction with the other modes of the electromagnetic field.
So in the end what you measure is not s itself, but s after the interaction with the environment. OK? Once more, this particular measurement process does not correspond to a direct projective measurement on s, so we need to consider this possibility as well. And finally there is a third possibility, which is as follows. Again you have the system s, and now you have an ancillary system a; again this is a probe, something that you can control. You let the two objects interact, and at the end of the interaction you measure both: you perform a projective measurement on both. OK? And once more, this process, which you can call a joint detection scheme, is not a projective measurement on s; it's something else. An example of a joint detection procedure is provided here. This is an optical field you want to detect. What you do is put a beam splitter here and mix the field with a reference mode, which is the ancillary system a, prepared, say, in some coherent state; you mix them on the beam splitter and then you perform photocounting on the two outputs. This is exactly a joint detection scheme; as a matter of fact, this is what in optics is called a homodyne detection scheme. OK? Again, this scheme cannot be represented as a direct projection on s. So it turns out that all these procedures can be explicitly represented in terms of a very powerful theoretical tool, which is called the formalism of POVMs. POVM is an acronym that stands for positive operator-valued measure. It allows you to represent all the schemes we have seen so far, the joint detection scheme, the noisy detection scheme and the indirect detection scheme, as well as the direct projective measurement, as follows. Basically, you represent each of these possible procedures by assigning to the detection scheme a collection of operators E_j.
These operators are positive semi-definite, that is, Hermitian with non-negative eigenvalues, and they have to fulfill the normalization property that I wrote here, sum_j E_j = identity. OK. So each of these measurements can be formally described by assigning such a collection of operators, and the probability of getting the outcome labeled by the index j is obtained by this formula here, which replaces the Born rule we have seen before: if rho is the state of your system, you take p(j) = Tr(rho E_j). And that's it. Now, it turns out that every possible measurement or detection scheme you can think of can be cast in this form, so there is a one-to-one correspondence between measurements and this way of representing them. OK. So let's discuss the case in which you have the system s and the ancilla a. Say the system has been initialized in rho, and the probe has been initialized in the state |0><0|_a. Now let's imagine that we couple the two through some unitary transformation U, which typically is going to be induced by some interaction Hamiltonian connecting the two for some time t, OK. Therefore, before you perform the projective measurement on the ancilla, the input state of the joint system, rho_s tensor |0><0|_a, becomes U (rho_s tensor |0><0|_a) U-dagger: the two evolve like that. At this point, let's say we perform a pure projective measurement on the ancilla a, OK, which provides us some outcome j, labeled by this index here, and let's introduce the projectors that allow you to extract such information.
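The two defining POVM properties (positive semi-definite elements summing to the identity) can be checked concretely. A NumPy sketch using the three-element "trine" POVM on a qubit, a standard textbook example rather than one from the lecture slides:

```python
import numpy as np

def trine_povm():
    """Three symmetric rank-one POVM elements (2/3)|s_k><s_k| on a qubit."""
    angles = [0.0, 2*np.pi/3, 4*np.pi/3]
    states = [np.array([np.cos(t/2), np.sin(t/2)]) for t in angles]
    # Weight 2/3 makes the three elements sum exactly to the identity
    return [(2/3) * np.outer(s, s) for s in states]

def is_valid_povm(elements, tol=1e-10):
    """Check positivity of each element and completeness of the set."""
    dim = elements[0].shape[0]
    positive = all(np.linalg.eigvalsh((E + E.conj().T) / 2).min() >= -tol
                   for E in elements)
    complete = np.allclose(sum(elements), np.eye(dim), atol=tol)
    return positive and complete

povm = trine_povm()
rho = np.array([[1.0, 0.0], [0.0, 0.0]])          # the state |0><0|
probs = [np.real(np.trace(rho @ E)) for E in povm]  # p(j) = Tr(rho E_j)
```

Note that this POVM has three outcomes on a two-dimensional system, so it cannot be a projective measurement, which is exactly the extra freedom the formalism buys you.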
These projectors are associated with a collection of vectors |j>_a, with j going from 1 to d_a, where d_a is the dimensionality of the probe: if the probe is a qubit, d_a is two, otherwise it is whatever. Therefore, the probability of getting the outcome j from our system s can now be obtained as follows. First of all, I have to compute the state of the ancilla after the interaction, because this is a local measurement on a, and the statistics of a projective measurement are obtained as p(j) = <j_a| rho'_a |j_a>, where rho'_a is the reduced density matrix of a after the interaction: rho'_a = Tr_s[ U (rho_s tensor |0><0|_a) U-dagger ]. Because this is a partial trace, it corresponds to taking a sum over k from 1 to d_s, rho'_a = sum_k <k_s| U (rho_s tensor |0><0|_a) U-dagger |k_s>, where the |k_s> are the elements of a basis for the system s. Now let's replace this object inside the formula for p(j), and what you get is p(j) = sum_k (<j_a| tensor <k_s|) U (rho_s tensor |0><0|_a) U-dagger (|j_a> tensor |k_s>). Now let's contract the ancilla ket with U and the ancilla bra with U-dagger, and you can rewrite the whole thing as p(j) = sum_k <k_s| (<j_a| U |0_a>) rho_s (<0_a| U-dagger |j_a>) |k_s>, which is just a simpler way of representing the same thing.
Now, you notice that you have a sum over k here, and the |k_s> form a basis. Originally U was an operator on s and a, but now I have contracted the ancillary degrees of freedom away, so the object <j_a| U |0_a> is just an operator on the system s, okay? I call this operator M_j, or M_s(j): it is an operator that acts on the Hilbert space of s and depends on the index j. And the other object, <0_a| U-dagger |j_a>, is clearly just its adjoint, M_j-dagger. So now we have an operator on s, then rho_s, then another operator on s, sandwiched between <k_s| and |k_s>, and the sum over k is just the trace over the degrees of freedom of s: p(j) = Tr_s[ M_j rho_s M_j-dagger ]. Now use the cyclicity of the trace to move M_j to the other side, p(j) = Tr_s[ M_j-dagger M_j rho_s ], and the operator E_j = M_j-dagger M_j is exactly the POVM element, okay? So it's not particularly difficult; I probably spent too much time deriving it, but you asked for it explicitly. These operators are certainly positive semi-definite; you should try to prove that E_j is positive and also satisfies the normalization condition. It's not particularly difficult. Anyway, we do have this formula, it's very handy, and it can be adapted to describe all these situations. Now, it turns out that in many cases of physical interest these POVM measurements are more powerful than direct projective measurements on the system itself. Depending on the problem you want to solve, say you want to extract some kind of information from the system of interest, and you ask which measurement you should choose in order to recover that information, the measurement that solves the problem efficiently is not always going to be a projective one.
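The construction M_j = <j_a|U|0_a> with E_j = M_j-dagger M_j can be verified numerically. A sketch assuming a random joint unitary on a qubit system plus qubit ancilla (an arbitrary coupling; any interaction would do for this check):

```python
import numpy as np

ds, da = 2, 2                       # system and ancilla dimensions
rng = np.random.default_rng(0)

# A random joint unitary on a (x) s, obtained via QR decomposition
X = (rng.normal(size=(da*ds, da*ds))
     + 1j * rng.normal(size=(da*ds, da*ds)))
U, _ = np.linalg.qr(X)

# With basis ordering |j_a, m_s> at index j*ds + m, reshaping exposes
# the blocks U_blocks[j, m, k, n] = <j_a, m_s| U |k_a, n_s>
U_blocks = U.reshape(da, ds, da, ds)

# Kraus operators on s alone: M_j = <j_a| U |0_a>
M = [U_blocks[j, :, 0, :] for j in range(da)]

# POVM elements E_j = M_j^dagger M_j
E = [m.conj().T @ m for m in M]

# Completeness follows from unitarity:
# sum_j E_j = <0_a| U^dagger U |0_a> = identity on s
completeness = np.allclose(sum(E), np.eye(ds))
```

This is exactly the exercise suggested above: positivity of each E_j is immediate from the form M_j-dagger M_j, and completeness drops out of U-dagger U = identity.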
Maybe it's a POVM of one of those kinds, okay? So this is a nice result, because it means we have more tools at our disposal, possibly more powerful, depending on the problem, than projective measurements alone. That means we have more ability to extract information from a quantum system than we originally suspected. However, you should always remember that when you try to recover information from a quantum system, there are also limitations. Which kind of limitations? Of course there are technological limitations, but these are kind of trivial; we are not engineers, so for us, let's say, they don't really matter. But there are more fundamental limitations, and these have to do, for instance, with uncertainty relations. We know that certain information cannot be recovered while we are probing the system to check other properties of the system; this has to do with the Heisenberg uncertainty principle, in the end. Not only that: there are other limitations that quantum mechanics imposes on us, and these have to do, for instance, with no-cloning, and with the fact that quantum states are not by themselves observable quantities. If you have a single copy of a system, that's it; you cannot really reconstruct the state. So on one side we have POVMs, which are quite powerful and extend our ability to probe a quantum system; on the other side there are fundamental limitations which are again associated with the fundamental structure of quantum mechanics. The optimization of a measurement has to face, has to put together, these two competing aspects. Okay, so the fundamental problem in quantum metrology, the mother of all the problems that you face in quantum metrology, is this one.
We have already seen it when we were discussing classical-quantum communication. It is what you may call state discrimination. The state discrimination problem is a problem in which somebody asks you to determine which state a given one is. So somebody gives you the state rho-question-mark, something we have already seen last time, and you are asked to determine whether this state is one of the states rho_1, rho_2, rho_3 that are the possible candidates. Your goal is to find out which of those possibilities it is. You can try to solve this problem by performing whatever measurement you want; in particular, you can try to solve it by devising the most general, most powerful POVM you can think of, not just projective measurements. So it is a highly non-trivial problem to find the optimal measurement that allows you to solve it efficiently. As I mentioned, we have seen exactly these kinds of issues when we were studying communication. When you study classical communication over a quantum channel, you face this kind of problem, because Alice prepares quantum states that encode classical messages, and Bob, receiving these objects, has to infer which state Alice sent. We have seen that there are figures of merit that you can use to characterize this kind of efficiency, and it turns out that the optimal measurements in this particular context are known as pretty good measurements; they were first discovered by Holevo and Schumacher in a kind of independent way. And there are two further generalizations of this idea, which use sequential decoding and bisection decoding schemes, that I contributed to discovering a few years ago.
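For concreteness, the pretty good measurement has a simple closed form: given states rho_i with priors p_i and average state rho = sum_i p_i rho_i, its elements are E_i = rho^(-1/2) p_i rho_i rho^(-1/2), with the inverse square root taken on the support of rho. A NumPy sketch (the two non-orthogonal test states are an illustrative choice):

```python
import numpy as np

def inv_sqrt(rho, tol=1e-12):
    """Pseudo-inverse square root of rho via its eigendecomposition."""
    w, V = np.linalg.eigh(rho)
    w_is = np.where(w > tol, 1.0 / np.sqrt(np.clip(w, tol, None)), 0.0)
    return (V * w_is) @ V.conj().T

def pretty_good_measurement(states, priors):
    """E_i = rho^(-1/2) p_i rho_i rho^(-1/2), rho = sum_i p_i rho_i."""
    rho = sum(p * r for p, r in zip(priors, states))
    R = inv_sqrt(rho)
    return [R @ (p * r) @ R for p, r in zip(priors, states)]

# Two non-orthogonal pure states with equal priors
v1 = np.array([1.0, 0.0])
v2 = np.array([np.cos(np.pi/8), np.sin(np.pi/8)])
states = [np.outer(v, v) for v in (v1, v2)]
povm = pretty_good_measurement(states, [0.5, 0.5])

# Probability of guessing correctly: sum_i p_i Tr(E_i rho_i)
p_success = sum(0.5 * np.real(np.trace(E @ r))
                for E, r in zip(povm, states))
```

Since the two states here span the full qubit space, the elements sum to the identity, and the success probability beats blind guessing (0.5) without reaching 1, as expected for non-orthogonal states.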
The take-home message that I want to stress here is that the optimal measurements which solve this specific problem of classical-quantum communication rely on optimal strategies, optimal POVMs, which are not projective. The measurements that allow you to reach the bound, that achieve the capacity in this special case, are in general not projective: they are truly non-projective POVMs. Okay, so let's move on. The simplest example of a state discrimination problem is this one. Here you are given two alternatives, rho_1 and rho_2, say with a priori probability 50% each. Somebody selects either rho_1 or rho_2 with probability 50% and presents such a state to you, without telling you which one it is. The question is: can you guess? Can you determine which of these two possibilities is the state that has been given to you? You can perform measurements, and you have to solve this problem. Now, you can try to identify the optimal measurement that solves this kind of problem, but you need to introduce a figure of merit in order to rank the possible answers. The typical figure of merit one considers is what is called the error probability: the probability that the answer you give does not correspond to the state that was handed to you. You say one, but I gave you two, or you say two, and I gave you one. You can compute this quantity, which is of course a function of the POVM that you have selected, and then you can try to optimize it over all possible POVMs you can think of, yeah?
It turns out that this optimization problem was solved by Helstrom quite a few years ago. You can show that by optimizing over all possible POVMs you get this minimal error probability, expressed here in a very elegant and compact form: P_err^min = (1/2) ( 1 - (1/2) ||rho_1 - rho_2||_1 ), where ||rho_1 - rho_2||_1 is the trace norm of the difference of the two states, and the quantity (1/2) ||rho_1 - rho_2||_1 is indeed the trace distance between rho_1 and rho_2. I don't know if somebody introduced the notion of trace distance in a previous lecture, but I guess so. So this theorem by Helstrom tells you that the minimal error probability in a problem of this form is given basically by one minus the trace distance between the two states, divided by two. Now, you may wonder what the optimal measurement achieving this bound is, and for this special setting it turns out that the optimal measurement is indeed a projective one, okay? But note: the minimal error probability is exactly a function of the trace norm. If you introduce different measures of distance, they are not directly related to this one; the formula singles out that particular distance. So, in some sense, if you want to see it in a different way, you can use this result to give an operational meaning to the trace distance: the trace distance characterizes the minimal error probability in discriminating two states. That's all. Yeah? So, if rho_1 is equal to rho_2, of course this term vanishes, and the minimal error probability is one half: you have 50% probability of being wrong, which is as if you were just guessing. The point is that I'm asking you to tell me whether the state is one or two, and then I choose.
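The Helstrom bound is a one-liner once you have the trace norm (the sum of singular values). A NumPy sketch for equal priors, checked on the two extreme cases just discussed, orthogonal states and identical states:

```python
import numpy as np

def trace_norm(A):
    """||A||_1 = sum of singular values of A."""
    return np.linalg.svd(A, compute_uv=False).sum()

def helstrom_error(rho1, rho2):
    """Minimal error probability for equal priors:
    P_err = (1 - (1/2)||rho1 - rho2||_1) / 2."""
    return 0.5 * (1 - 0.5 * trace_norm(rho1 - rho2))

rho1 = np.array([[1.0, 0.0], [0.0, 0.0]])   # |0><0|
rho2 = np.array([[0.0, 0.0], [0.0, 1.0]])   # |1><1|

e_orthogonal = helstrom_error(rho1, rho2)   # orthogonal: error-free
e_identical = helstrom_error(rho1, rho1)    # identical: pure guessing
```

Orthogonal states give zero error, and identical states give one half, exactly the guessing limit mentioned above.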
Okay, it's like I make a commitment: I say this one is "one" and this one is "two". Then I cheat: I give you exactly the same state either way, and ask you to guess whether I chose two or one. I just chose the two states by, in a way, cheating: these are supposed to be two different states, and I ask you to tell me whether this state is one or two, but you can't. I put a label on the same object, and I put it randomly, so there is no way you can guess which label I chose; that gives the maximum error probability. Sorry? Yeah. Okay, so let's move on. Another result, a well-established result in quantum estimation theory, which we can discuss later on if you want, is this one. You have the same problem: I give you the two possibilities, rho_1 or rho_2, and you have to guess this label here. But now I'm not giving you just a single copy of the system; I give you, say, n copies of it. So you have many, many copies. And this is good, because if you have many, many copies, in principle you can try to do state tomography: you can try to construct the classical representation of the state, and once you have that, you know exactly what the state is. Of course, in principle you need infinitely many copies, but you expect that as the number of copies increases, the minimal error probability goes down. And indeed this is exactly what happens. The minimal error probability in this case is given by the same formula I gave you before, but now applied to n copies of the two objects.
And it turns out that you can take an asymptotic expansion of this quantity, which decreases exponentially in n, with an exponent given by the quantity here: P_err^min(n) ~ exp(-n xi), with xi = -log min_{0<=s<=1} Tr[ rho_1^s rho_2^(1-s) ]. This is called the quantum Chernoff bound. The quantum Chernoff bound controls the exponential decay of this error probability in terms of this awful-looking but perfectly analytical function. Of course, there are other results I cannot cover. Okay, so a problem related to state discrimination is process discrimination. In process discrimination the issue is the following. Once more there are two alternatives, Phi_1 or Phi_2, but now Phi_1 and Phi_2 are not states but processes, like black boxes. There are two processes that can act on your system; you don't know which one is going to act, and you have to guess. So you have to discriminate between the two alternatives, but now we are talking about quantum evolutions, that is, CPT maps. Remember, the most general evolution of a quantum system is a CPT map. You have to discriminate between Phi_1 and Phi_2: I give you this black box and ask you to say whether it is Phi_1 or Phi_2. Now, process discrimination is relevant because it is a problem that occurs in many different contexts. For instance, it has to do with noise detection. Suppose that you have a quantum computer and you have prepared some gate, represented by the map Phi_2; but maybe, at some point during the evolution, something goes wrong, and instead of applying Phi_2 your quantum computer operates on your inputs with Phi_1, a deteriorated version of Phi_2: Phi_2 plus some error.
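The Chernoff exponent can be evaluated numerically with a simple grid search over s, taking matrix powers through the eigendecomposition. A sketch on two illustrative (slightly mixed, so all powers are well defined) qubit states:

```python
import numpy as np

def mat_power(rho, s):
    """rho^s via the eigendecomposition of the density matrix."""
    w, V = np.linalg.eigh(rho)
    w = np.clip(w, 0.0, None)          # clip tiny negative eigenvalues
    return (V * w**s) @ V.conj().T

def chernoff_exponent(rho1, rho2, grid=1001):
    """xi = -log min_{0<=s<=1} Tr[rho1^s rho2^(1-s)], by grid search."""
    ss = np.linspace(0.0, 1.0, grid)
    vals = [np.real(np.trace(mat_power(rho1, s) @ mat_power(rho2, 1 - s)))
            for s in ss]
    return -np.log(min(vals))

# Two distinct full-rank qubit states (an illustrative choice)
rho1 = 0.9 * np.array([[1.0, 0.0], [0.0, 0.0]]) + 0.05 * np.eye(2)
rho2 = 0.9 * np.array([[0.5, 0.5], [0.5, 0.5]]) + 0.05 * np.eye(2)

xi = chernoff_exponent(rho1, rho2)     # a strictly positive decay rate
```

At the endpoints s = 0 and s = 1 the traced quantity equals one, so distinct states give a minimum below one and hence a positive exponent, while identical states give exponent zero: no amount of copies helps there.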
And of course it is of interest for us to determine whether this noise has occurred or not; this is essentially what you do in quantum error correction. So, in the end, process discrimination is strongly related to noise detection schemes. Another instance where process discrimination occurs is this one. Suppose that you want to probe the presence or absence of a target; these are, unfortunately, kind of military applications. Say you are trying to detect the presence of an airplane, okay? This very realistic and concrete problem can be cast in terms of process discrimination, because in order to probe the presence of the airplane, what you do is shine some probing signal, say a laser beam or whatever, and then, depending on whether or not you get a reflection from the object, you have two situations. If you get a reflected signal, this corresponds to a transformation of the signal due to some map Phi_1: Phi_1 is the process that takes the signal and reflects it back. Phi_2, instead, is the case in which your signal is not reflected and you get nothing back. So once more, target detection can be cast in terms of process discrimination: you have to decide whether the channel operating on your system is Phi_1 or Phi_2. So, how do we address the process discrimination problem? Well, we can transform process discrimination into state discrimination, and that's very simple: just feed a state into the black box, and then, depending on whether the box is Phi_1 or Phi_2, you will obtain at the output two possible states, rho_1 or rho_2. Now, to answer the original question, the problem becomes again to distinguish between two states, and we know how to do that: we just measure them.
So this is a very nice way of transforming the previous problem, process discrimination, into state discrimination. But of course there is a caveat, or rather, there is more, because now you also have the freedom of choosing the initial state of the probe. Depending on the choice of this rho, you may be better or worse at discriminating rho_1 from rho_2. So the idea is to find which state is the most sensitive for discriminating between these two processes. Here there is yet another optimization stage: not only do you have to optimize over the measurement you perform to distinguish the two output states, but you may also optimize over the input state of the probe, finding the optimal probing state. Yes, there is, and it's on the next slide, okay? So, one thing you can do, instead of just preparing the state of the probe and sending it through the channel, is to prepare the state of the probe and entangle it with an ancillary system. You prepare, say, a maximally entangled state at this level, then you let this part be affected by the noise, by the process, while this other part, a sort of reference system, is not touched and hopefully remains correlated with yours. Then you measure both: you perform a joint measurement on these two objects. And now discriminating between Phi_1 and Phi_2 corresponds to discriminating between these two states, where each is the transformed version of the entangled input probe under a local noise which is either Phi_1 or Phi_2. Again, it's a state discrimination procedure, but now performed over an extended Hilbert space, not just the local one. Yeah?
Now, this is very similar to what you do when you perform an interferometric measurement to detect, say, a phase delay in an interferometer: you send in a probing signal, you mix it with a reference input, you let the two interfere, and then you measure. So this kind of process is very reminiscent of an interferometric detection scheme, where you have a probe correlated with a reference signal, you let them interact, and then you measure. Yeah? Now, it turns out that this scheme is the best you can do, and as a matter of fact it is strongly related to what is called the Choi–Jamiolkowski isomorphism. I didn't introduce it in the previous lecture, but it is a correspondence that you can establish between maps and states. The Choi–Jamiolkowski correspondence says that with each process you can associate a quantum state that fully characterizes that process, and this Choi–Jamiolkowski representation is exactly the state you get here when you send a maximally entangled state through the process. In other words, if you inject here a maximally entangled state of the system and the ancilla, after the action of the map you get a state which contains the full information about the map, in the sense that whatever the map does is written into the Choi–Jamiolkowski state. So in this way, discriminating the Choi–Jamiolkowski states corresponds to a full, complete discrimination of the maps: it characterizes exactly how far apart the two maps are. In other words, this is an intrinsic way of discriminating the Phi's, which is exactly what you were asking for: it doesn't depend on the input, it's something you don't have to optimize. OK, so as an application of this idea, let's go back to the target detection problem.
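As a sketch of the Choi–Jamiolkowski construction: send one half of a maximally entangled state |Omega> = (1/sqrt(d)) sum_k |k>|k> through the channel, acting with its Kraus operators on that factor only. The depolarizing channel used below is an illustrative choice, not a channel from the lecture slides:

```python
import numpy as np

d = 2
p = 0.3   # depolarizing probability (illustrative)

# Kraus operators of the qubit depolarizing channel
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
kraus = [np.sqrt(1 - 3*p/4) * I2] + [np.sqrt(p/4) * P for P in (X, Y, Z)]

# Maximally entangled state |Omega> on system (x) ancilla
omega = np.zeros(d*d)
for k in range(d):
    omega[k*d + k] = 1/np.sqrt(d)
Omega = np.outer(omega, omega)

# Choi state: apply the channel to the system factor, identity on the ancilla
choi = sum(np.kron(K, np.eye(d)) @ Omega @ np.kron(K, np.eye(d)).conj().T
           for K in kraus)
```

The result is a valid density matrix (unit trace, positive), and it encodes the whole channel: two channels are perfectly distinguishable exactly to the extent that their Choi states are.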
This is a protocol that we introduced a few years ago, which we call quantum illumination. It is indeed about trying to detect the presence or absence of a target. You can try to solve this problem by optimizing over all possible input signals you can shine in, OK? And here you just perform, say, photocounting. It turns out that, also in this situation, the best performance is obtained when you produce an input signal which is entangled with a reference, and then take a joint measurement between the reference state and the transmitted one. I will not enter into the details, but you can apply this idea to this problem, and you find that the error probability in determining the presence or absence of the target is improved if you use this technique. OK, so let's move on. Now I want to discuss a problem which is slightly more complicated than the state discrimination problems we have seen before, and this has to do with parameter estimation. So remember, in state discrimination the problem is always like this: there is a finite number of possibilities among which you have to choose. I ask you: is this state rho_1, or rho_2, or rho_3, or, say, rho_n? I give you a finite number of possible answers, and you have to detect which one of this discrete, finite number of possibilities was realized in the experiment you performed. In parameter estimation, basically, you enlarge the set of possible candidates to a continuous set. So now you don't have a finite collection of possibilities, but a continuous one, and this continuous collection of possibilities is basically a one-parameter family of objects, OK?
So it's a collection of states that you can parameterize with a continuous parameter x, OK? You don't label them with a discrete index, but with a continuous one. And now you are asked to determine where the state rho-question-mark sits in this family, at which value of the parameter. So basically, another way of looking at the problem is to say that you have a trajectory in your space of quantum states, parameterized by this parameter here, and you have to determine the value of the parameter. That's why it's called parameter estimation: your goal is to estimate the value of that parameter. And now it's clear that this problem is much more complicated than the previous one, because the parameter is continuous, so intrinsically there is always going to be some uncertainty in determining its specific value. OK, in order to address this problem, it is better to start from its classical version; this problem has a classical analogue that is worth considering. So in classical information theory, the same problem goes as follows. Suppose you have a random parameter x, a continuous classical random variable, which is encoded somewhere, OK? Now let's say you probe the black box that encodes the value of x, and you obtain some stochastic outcomes from your measurement. These stochastic outcomes are represented by these values here, which are also characterized by a continuous parameter. You get x1, x2, and so on and so forth: you perform many, many measurements, and you get different values of this parameter.
And let's say that these outcomes of your measurement are related to the x written in the black box through some conditional probability, given here, which fully characterizes the measurement process. So I'm just putting all of this in by hand: I don't have a POVM structure here, this is just a statistical connection between two random variables. Well, so now the question is the following. Suppose you have performed some of these measurements, and now, from these outcomes, you try to infer the value of x. So what you do, typically, is data processing: I have a collection of data points, and I have to infer what the value of x was. So you create what is called an estimator. Maybe you take the average of these quantities, or whatever; you have to process them somehow, and from the processing of these values you have to say what the value of x is. OK. This object is, of course, a function of the vector of outcomes you obtain, and there are many possibilities; let's pick one, I don't care which. Of course, the statistics of this vector is just given by the product of the probabilities of each of its components. This is simply because the value of x is always the same, and we assume that every time we probe the black box we get the same kind of statistics out of it. Well, now, you have many different possibilities in creating the estimator, and the question is: what is the best estimator? How can I best infer the value of x from my measurements? So you need a figure of merit to answer the question, and the figure of merit we consider is the root mean square error of the estimation. So what is the root mean square error? It is simply this expression here.
You take the estimator evaluated on the data points you obtained, you subtract the real value, you take the square, and then you average. So of course, if the estimator coincides with x, you get 0, zero error; otherwise you get some value. This is just a formal way of characterizing the object, and then we are going to put a lower bound on this quantity. So this is very well defined, and you ask yourself: how small can this quantity be? Now, asking this question is again a complicated problem, because you have to consider all possible estimation strategies. But nicely enough, there is a theorem, the Cramér-Rao bound, which says that no matter what you do, the root mean square error is always going to be greater than or equal to this quantity here. So this nu here is the number of measurements you have performed. And F of x is a function called the Fisher information, which is characterized only by the conditional probability of the measurement outcomes and depends on the random variable you are trying to estimate. It does not depend on the estimation procedure: it is completely independent of the estimator you choose, and indeed it provides a lower bound for all possible estimation strategies. It is just a function of the conditional probability that connects your measurement outcomes to the real value. So as the number of measurements increases, this quantity goes down, and it goes down like one over the square root of the number of measurements, for all possible x. So you get this inequality here. The nice fact about this result is that the bound itself is achievable for sufficiently large nu.
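As a small numerical sketch of the classical Cramér-Rao bound (with made-up values for the true parameter and the noise level, chosen only for illustration): for Gaussian measurements of an unknown mean with known sigma, the Fisher information is 1/sigma squared, and the sample-mean estimator saturates the bound.

```python
import numpy as np

rng = np.random.default_rng(0)
x_true = 1.3   # the value hidden in the "black box" (illustrative)
sigma = 0.5    # known noise level of each measurement (illustrative)
nu = 100       # number of measurements per experiment

# Fisher information of a Gaussian p(y|x) with known variance: F(x) = 1/sigma^2
F = 1.0 / sigma**2
cramer_rao = 1.0 / np.sqrt(nu * F)  # lower bound on the root mean square error

# Monte Carlo estimate of the RMSE of the sample-mean estimator
trials = 20000
estimates = rng.normal(x_true, sigma, size=(trials, nu)).mean(axis=1)
rmse = np.sqrt(np.mean((estimates - x_true) ** 2))

print(rmse, cramer_rao)  # the two numbers agree: the sample mean saturates the bound
```

Note the 1 over square root of nu behaviour: quadrupling the number of measurements halves the achievable error.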
So there exist optimal estimation strategies, which have to do with maximum likelihood procedures, that allow you to construct an estimator that indeed scales like that. So this is the best you can hope for: you can determine the value of x up to that precision, no better than that. But of course, you have to find the optimal estimator. OK, so now let's go back to the quantum case. In the quantum case, there is no black box. Or, to be more precise, the black box is provided by the state that encodes the random variable we want to determine. So you start from this. And now, in order to extract this information about x, which is a classical random variable, you have to perform a measurement. So let's fix a POVM; for the moment, just a generic one. You fix the POVM, you measure the system, and what you get are some classical outcomes: again c1, c2, and so on and so forth. And now the conditional probability that connects the value of x to the outcomes is just a functional of the POVM you have selected. Of course, it's also a functional of the state, but that's another matter. Well, once we have that, we are in business, because we have again transformed the quantum estimation problem into a classical estimation problem, in which we have to infer x from these classical data points. Construct an estimator and minimize with respect to all possible estimators. So in the end, what you can do is apply the previous classical result to determine the minimum value of the root mean square error associated with this specific conditional probability. So each choice of POVM gives you a lower bound via the associated Fisher information, which is a functional of the POVM. And this is exactly what I said: the root mean square error for a given POVM is lower bounded via its associated Fisher information, which is now a classical object.
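This step can be sketched in a few lines, under assumptions of my own choosing (a phase-encoded qubit family rho of x and a sigma-x-basis POVM, not the lecture's specific example): the conditional probability is p(c|x) = Tr[E_c rho(x)], and the classical Fisher information is F(x) = sum over c of p'(c|x) squared over p(c|x).

```python
import numpy as np

def fisher_information(x, dx=1e-6):
    """Classical Fisher information F(x) = sum_c p'(c|x)^2 / p(c|x) for a
    fixed POVM, with p(c|x) = Tr[E_c rho(x)] and a central finite difference
    for the derivative in x."""
    def probs(x):
        # phase-encoded qubit family: |psi(x)> = (|0> + e^{ix}|1>)/sqrt(2)
        psi = np.array([1.0, np.exp(1j * x)]) / np.sqrt(2)
        rho = np.outer(psi, psi.conj())
        plus = np.array([1.0, 1.0]) / np.sqrt(2)    # POVM: projectors onto
        minus = np.array([1.0, -1.0]) / np.sqrt(2)  # the sigma_x eigenbasis
        povm = [np.outer(plus, plus.conj()), np.outer(minus, minus.conj())]
        return np.array([np.trace(E @ rho).real for E in povm])

    p = probs(x)
    dp = (probs(x + dx) - probs(x - dx)) / (2 * dx)
    return np.sum(dp**2 / p)

print(fisher_information(0.7))  # equals 1 for every x: this POVM is optimal here
```

For this particular family and this particular POVM the Fisher information is constant in x; for a generic POVM it would not be, which is exactly why one is then led to optimize over POVMs.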
But of course, there is an issue here, and the issue is: what is the best value of this quantity you can hope for? So, find the optimal POVM. Now, if you optimize with respect to all possible POVMs, this classical bound gets replaced by this bound here, which is the quantum version of the Cramér-Rao bound, and which involves what is called the quantum Fisher information. The quantum Fisher information is basically the maximum of all possible Fisher informations, optimized with respect to all possible POVMs. And so now we have this quantum result. It says that in order to determine the value of the parameter x encoded in your system, the best precision you can achieve is given by this expression here, which again scales like one over the square root of nu, and which is a function of the quantum Fisher information. Well? Yeah, sure. So for the moment, let's say this is the space of quantum states of your system, the set of states of your quantum system, and this is a trajectory inside this set, rho of x. For the moment I don't make any assumption; it's just some curve in that space, let's say continuous, and you have to determine whether you are here, or here, or here. I don't care, OK? The full derivation still applies. No, you are looking for the best measurement that allows you to solve this problem. So the premise is the following. Your state is one of the points along this line, but we don't know which one. So I give you the state, and maybe I give you more copies of the state; you can probe it. And you have to determine where you are along this trajectory. Where is the point along the curve where the state sits? So there is this prior information about the state: it's not here, it's not here, it's on the line.
Please find it. And I give you a mathematical representation of the trajectory, so you know exactly what these states are, but you don't know which one specifically is given to you. Nu copies; you have nu copies, so you can perform several measurements. OK, so let's move on. No, you can even do that, I don't care. I give you the state, you have your quantum computer, you do whatever you want. So indeed, in order to implement the most generic POVM, typically you need a quantum computer: you have to introduce an ancilla, couple it with the system, and perform a joint measurement, say a Bell measurement, whatever. So indeed, you will need very sophisticated control to implement the most generic POVM. But no matter what you do, you are bound to have this kind of uncertainty; you cannot go below that. It's a sort of no-go theorem. OK, so now, as I told you, the quantum Fisher information is rather complicated: computing it is not simple if the trajectory is arbitrary. But there are some cases in which the quantum Fisher information is easy to compute, and the simplest example is obtained when the trajectory we are considering, that one, is generated by some Hamiltonian; not necessarily a parameter-independent Hamiltonian, it can also depend explicitly on x. So your state is evolving according to some procedure: x now plays the role of a time, and the state fulfils this dynamical equation here, where H is the generator of the dynamics. So this is a very simple kind of trajectory. Said in a different language, I'm considering the situation in which rho of x is given by a time-ordered exponential. No, sorry, let's write it like that.
Like u of x, rho of 0, u of x dagger, where u of x is the formal integral of this equation, given by the time-ordered exponential of the associated Hamiltonian between 0 and x; the time ordering comes from the fact that H is explicitly x-dependent. So let's consider this special case, which is a perfectly legitimate one. If you want an example, say you are trying to determine the time t over which the system has evolved according to the Hamiltonian. OK, so when this is the case, then we can bound the quantum Fisher information very easily. You can perform this optimization, and it turns out that you get an upper bound for the quantum Fisher information, which in some cases is tight, and the upper bound is this one: four times the variance of the (possibly x-dependent) Hamiltonian of your system. OK, so you can prove this inequality. And since an upper bound for the quantum Fisher information gives a lower bound for the minimum uncertainty, you get this inequality here. In particular, the gap closes if the Hamiltonian is not x-dependent: for a time-independent Hamiltonian, the quantum Fisher information is exactly four times the variance of the Hamiltonian. Is that clear? Well, and the variance is just given: you take the Hamiltonian, you subtract the expectation value of the Hamiltonian on the state at that value of the parameter, you take the square, and then you take the expectation value; that's the variance. OK, so now a nice side effect of all this is the following. The uncertainty is lower bounded by the quantum Cramér-Rao bound, which in turn is lower bounded by this object here, because of the inequality I've just given you. Now, if you look at this inequality, you can cast it in a very nice way: you can write that delta x, the uncertainty in determining the value of x, times delta H, the uncertainty associated with the Hamiltonian, is larger than or equal to 1 over 2 square root of nu.
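The time-independent case can be checked with a few lines (a sketch under assumptions of my own: a qubit with Hamiltonian sigma z over 2 and a simple probe state, not the lecture's specific system). For a pure state evolved by a time-independent H, the quantum Fisher information is exactly four times the variance of H, so the bound reads delta x times delta H greater than or equal to 1 over 2 square root of nu.

```python
import numpy as np

H = 0.5 * np.diag([1.0, -1.0])  # H = sigma_z / 2, a time-independent generator
nu = 400                        # number of independent probes (illustrative)

def variance(H, psi):
    """Variance <H^2> - <H>^2 of H in the pure state |psi>."""
    mean = (psi.conj() @ H @ psi).real
    mean_sq = (psi.conj() @ H @ H @ psi).real
    return mean_sq - mean**2

# For a pure state and time-independent H, the QFI is exactly 4*Var(H), so the
# quantum Cramer-Rao bound reads  dx >= 1 / sqrt(nu * 4 * Var(H)),
# i.e.  dx * dH >= 1 / (2 * sqrt(nu)).
theta = np.pi / 4
psi = np.array([np.cos(theta), np.sin(theta)])  # equal superposition: maximal Var(H)

dH = np.sqrt(variance(H, psi))
dx_min = 1.0 / (2.0 * np.sqrt(nu) * dH)
print(dH, dx_min, dx_min * dH * 2 * np.sqrt(nu))  # the last number is exactly 1
```

The equal superposition maximizes the energy spread of this qubit, which is why it is the natural probe state here: a larger delta H means a smaller achievable delta x.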
And this is a kind of generalized energy-time uncertainty relation, because this quantity is the uncertainty in the energy content of the system, and delta x is the uncertainty in time. So this is indeed a sort of energy-time inequality, the equivalent of the momentum-position Heisenberg uncertainty relation, which now has been obtained in the context of parameter estimation theory. Are you happy? OK. So in other words, the precision in determining the value of the time x depends on the spread of the energy of the system. And by the way, as I told you, this inequality becomes an identity, the two sides coincide, if the Hamiltonian is explicitly time-independent. And because the quantum Cramér-Rao bound is achievable, this inequality is also achievable when the Hamiltonian is time-independent. Yes, yes, in that sense, yes. But still, no, still: for a finite number of measurements, it's correct. OK, so, how much time do we have? Five minutes. That's fine. So I want to discuss a final topic associated with this parameter estimation problem, and this has to do with the difference between shot-noise and Heisenberg scaling in parameter estimation. And the subtitle of this section is: entanglement as a resource. We know that entanglement is a resource, and we will see in which sense, in this context, entanglement is also a resource. OK, so let's consider the following problem. We have a probe, and we have a black box that affects the probe through some unitary process represented by u of phi. Basically, the unitary transformation depends on this parameter phi that we want to determine. And let's assume that u is given simply by the exponential of some Hamiltonian times the value of the parameter phi. So phi, again, can be seen as a time, a temporal evolution. And your goal is exactly to determine the value of phi. You can choose whatever state you want here.
You can choose whatever measurement you want there. Let's try to see the result of this very simple estimation problem, which is very common in quantum optics: you typically have to perform this kind of measurement, which is equivalent to determining the delay of a signal, or a phase shift. OK, so, yeah, there are several examples. Now, in order to address this problem, we allow you to probe your black box several times, under the promise that the black box is the same: every time you interrogate the black box, it responds in exactly the same way, always adding the phase phi. So now you can perform several tests and several measurements. But of course, you can even try to send entangled resources in through these multiple uses of the black box and then perform a joint measurement, OK? So, if you wonder how you can do that: maybe you have a single black box, not n black boxes, just one, so how can you implement such a scheme? You cannot copy the box; the box is just one, so basically you can only send one pulse at a time. Can you entangle pulses that live at different times, or not? Entanglement is something that applies to the present state of reality. These two guys are entangled now, but what does it mean for this guy to be entangled with that guy two days ago? It doesn't work very well. Yep. Yeah, you can do that, but it doesn't really explain how you can send an entangled state through the same box many times. Again, there is a pulse here, and then there is a pulse I'm going to send in ten minutes; how can I entangle the two? They live at different spacetime points. So the answer is that it requires the use of quantum memories, of course. So the idea is the following.
So you have, say, two quantum memories; these are just, say, qubits in your computer. You create an entangled state of these two quantum memories now. And now you move the quantum state of this qubit into some travelling mode, say a photon that moves away but remains entangled with the other one. You send it to the box, and when it arrives at this stage, instead of measuring it, you load it into another quantum memory. So this is quantum memory 1, this is quantum memory 2, this is quantum memory 3, and this is quantum memory 4. So you create this entangled pair, you send one half through the box, and you load it here. And now you still have some kind of entanglement between the two. Then you move this guy into a photon, send it through, and load it here. And at the end of the day, you can realize this kind of configuration. So when you see this kind of picture, as a matter of fact, it implicitly assumes your ability to have memories that can be entangled and that you can process. So in other words, I wanted to stress this point because I'm not cheating here: if this is possible, it's difficult, but it's possible. Of course, if you have many copies of the black box, no problem whatsoever; but if it's just one, yeah. OK, so let's move on. And now you can perform the same experiment many, many times. Let's say that every time you prepare n photons and send them through, and so on and so forth, and you repeat the same experiment nu times. And the question is: what is the scaling you get? It turns out that, of course, you have different possibilities. For instance, you can decide not to put entanglement between the various boxes to begin with, or you can put entanglement. And also, you can perform local measurements on the black boxes, or you can perform joint measurements, meaning you have to use this machinery here.
Now, it turns out that you can prove that, as long as you don't put entanglement between the various probes, the scaling with respect to n, the number of probes you use in a single run of the experiment, is going to be like 1 over square root of n, which is a shot-noise kind of scaling. On the other hand, if you entangle the probes all together, the 1 over square root of n becomes 1 over n, and this is called the Heisenberg regime of the probing procedure. Here n is the number of pulses that you entangle together. So you pass from the shot-noise limit to the Heisenberg limit. So, unless there are questions, I finish my presentation here, and I thank you all for your attention.
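The two scalings can be checked on a toy model (a sketch under assumptions of my own: single-qubit phase probes and a parity-type measurement on the entangled GHZ state, standard textbook choices rather than the lecture's specific setup). n separable probes give Fisher information n, hence the 1 over square root of n shot-noise scaling; the GHZ state accumulates phase n times phi and gives Fisher information n squared, hence the 1 over n Heisenberg scaling.

```python
import numpy as np

def fisher_separable(n, phi, dphi=1e-6):
    """n unentangled probes: each qubit gives p(+|phi) = (1 + cos(phi))/2,
    and Fisher information is additive, so F = n (shot-noise regime)."""
    def p(phi):
        return np.array([(1 + np.cos(phi)) / 2, (1 - np.cos(phi)) / 2])
    dp = (p(phi + dphi) - p(phi - dphi)) / (2 * dphi)
    return n * np.sum(dp**2 / p(phi))

def fisher_ghz(n, phi, dphi=1e-6):
    """GHZ state of n probes accumulates phase n*phi; a parity measurement
    gives p(+|phi) = (1 + cos(n*phi))/2, so F = n^2 (Heisenberg regime)."""
    def p(phi):
        return np.array([(1 + np.cos(n * phi)) / 2, (1 - np.cos(n * phi)) / 2])
    dp = (p(phi + dphi) - p(phi - dphi)) / (2 * dphi)
    return np.sum(dp**2 / p(phi))

n, phi, nu = 10, 0.3, 100
print(1 / np.sqrt(nu * fisher_separable(n, phi)))  # shot noise: ~ 1/sqrt(nu*n)
print(1 / np.sqrt(nu * fisher_ghz(n, phi)))        # Heisenberg: ~ 1/(sqrt(nu)*n)
```

So for the same total number of pulses, entangling the n probes together improves the achievable precision by an extra factor of square root of n, which is exactly the sense in which entanglement is a resource here.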