after the entanglement of this state. And we want to prove — I promised I'd prove it — that the criterion whose violation is in general only sufficient for entanglement is also necessary in the case of two-mode Gaussian states. This actually extends to m-versus-n-mode Gaussian states as well. So we have all the ingredients somewhere on the blackboard. One is the uncertainty principle, which takes this form in terms of invariants — this is the complete expression in terms of symplectic invariants. So the determinant of sigma, it may be proven, must be greater than or equal to 1. And sigma must be positive: that has to be added separately, because these conditions, being only quadratic in the symplectic eigenvalues, can never express strict positivity — they cannot distinguish a positive sigma from a completely negative one. But it's not a great concern, as we will see. And then there's this condition, which in a less trivial way involves not only the determinant but also this other quantity, Delta, expressed in terms of the submatrices of sigma in that way. Sigma, I remind you, is the two-mode covariance matrix; and Delta also happens to be an invariant. Then we also have a lemma — well, we haven't quite proven it, but we have stated it — which says that if the determinant of the off-diagonal block sigma_AB is greater than or equal to zero, then the Gaussian state is certainly separable. Now I want to collect all of these. But first I have to translate the expression of the partial transposition, which we've given there at the Hilbert-space level, into phase space. I'm going to use that part of the board, so I don't need this any longer — that's the more quantum-optical part. So: partial transposition in phase space. What is it? How does it act on these covariance matrices? That's what we're going to see.
Let's take one mode and understand what transposition is — transposition in general. By the way, it doesn't matter which subsystem one transposes to check this positivity, and it doesn't matter which basis one takes. The reason is that changing the basis in which you transpose just corresponds to a unitary operation, which doesn't affect positivity; and transposing the other subsystem can be obtained by transposing the whole state, and a global transposition again does not affect positivity, whereas a partial one can. So we can transpose in any basis, which is convenient here, because I'll remind you of a relationship you'll be well acquainted with from quantum optics. This is the Fock basis, which I've never written down so far: the ladder operators on a single mode act with these square roots of n and n+1, which come from solving the one-dimensional harmonic oscillator — these are the ladder operators you know. The only thing I care about in this story — it would have been useful earlier, when we determined certain properties of some states, but the reason I care about it now — is that these coefficients are entirely real. So a and a-dagger are represented in the Fock basis by real matrices, which has a bearing here, because it means that a-dagger equals a-transposed. Therefore, in this basis, let's see what happens to x. x is (a + a-dagger)/sqrt(2), so x-transposed equals (a + a-dagger)/sqrt(2) — that is, x again. But p equals (a − a-dagger)/(i sqrt(2)), which means that p-transposed equals −p: a becomes a-dagger, a-dagger becomes a, and the i is left alone — I'm not Hermitian conjugating, just transposing. So p picks up a minus sign, yeah?
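Since the ladder-operator matrix elements are real, the claims x-transposed = x and p-transposed = −p can be checked directly. Here's a minimal numerical sketch (my own illustration, not from the lecture) in a truncated Fock basis — the truncation is harmless here, because transposition acts entrywise:

```python
import numpy as np

N = 20                                        # Fock-space truncation (illustrative)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # annihilation: <n-1|a|n> = sqrt(n), all real
adag = a.T                                    # a-dagger = a-transposed for real matrices

x = (a + adag) / np.sqrt(2)
p = (a - adag) / (1j * np.sqrt(2))

print(np.allclose(x.T, x))    # True: x is unchanged by transposition
print(np.allclose(p.T, -p))   # True: p picks up a minus sign
```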
So on a single mode, transposition is the linear operation that sends x into x and p into −p. On a two-mode system, then, partial transposition is represented by this matrix, which I'm going to call T — simply the matrix that swaps one sign. (I'm working at the very far end of the blackboard.) So the partially transposed covariance matrix will be sigma-tilde = T sigma T, acting by congruence that way — T-transposed and T are the same. And you can pick either quadrature; it's equivalent in terms of positivity. This leaves all the diagonal terms alone, everything alone, except the elements of one row and one column, whose signs are swapped, OK? Now, clearly we're interested in how partial transposition may affect positivity, and remember, positivity is expressed by this inequality. So what happens to the determinant of sigma under the action T sigma T? Nothing: T has determinant −1, you multiply by it twice, and by the Binet theorem the determinant stays the same. So that condition is left unaffected. And this relationship is also always true: as I mentioned before, a congruence can never affect strict positivity, as long as it's a congruence with something non-degenerate — that's Sylvester's law of inertia, as they call it. So sigma-tilde will still be strictly greater than zero. The only relationship that might detect a violation of positivity is this one, so we can just focus on it, because Delta will change. And what happens to Delta is that det sigma_A will not change, det sigma_B will not change, but the off-diagonal block sigma_AB: in that action you multiply it by the identity on one side and by sigma_z on the other, and that picks up a minus sign there. Therefore, there'll be a minus sign there.
So Delta-tilde will be equal to det sigma_A + det sigma_B − 2 det sigma_AB, which, by the way, equals the old Delta — the non-partially-transposed one — minus 4 det sigma_AB. And I think now we have all the ingredients; we need to check that PPT is "if and only if" for a Gaussian state, at the level of second moments. So let me write down a map and distinguish a few scenarios. I want a table where I ask: is rho separable or entangled, what's the sign of det sigma_AB, and is the PPT inequality det sigma + 1 >= Delta-tilde satisfied? Let's consider all the possibilities. If det sigma_AB >= 0, we know by the lemma that the state is separable, and we'd like that to be consistent with what I just mentioned — and it is, because in that case Delta-tilde is smaller than Delta, so det sigma + 1 − Delta-tilde is greater than det sigma + 1 − Delta, which is always greater than or equal to zero. So in this case the PPT inequality holds, and the state is separable. Fine. Then there's the case where the PPT inequality is violated: there's nothing to say, the state is certainly entangled, because a violation of this is a violation of PPT. And that's general — this criterion is sufficient, at the level of second moments, to detect entanglement for non-Gaussian states as well, OK? In that case, as we'd like, it will also be true that det sigma_AB < 0. So the last case to discuss, and then we're done: what if the PPT inequality is not violated, but det sigma_AB is smaller than zero?
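The two invariance statements — det sigma is unchanged, and only det sigma_AB flips sign, so Delta-tilde = Delta − 4 det sigma_AB — are purely algebraic, so they can be sanity-checked on any symmetric matrix. A small numerical sketch (my own, using mode ordering (x1, p1, x2, p2)):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
sigma = A @ A.T + np.eye(4)          # some symmetric positive matrix (physicality not needed)

T = np.diag([1.0, 1.0, 1.0, -1.0])   # partial transposition: flips p2
sigma_pt = T @ sigma @ T

def delta(s):
    """det(sigma_A) + det(sigma_B) + 2 det(sigma_AB) from the 2x2 blocks."""
    a, b, c = s[:2, :2], s[2:, 2:], s[:2, 2:]
    return np.linalg.det(a) + np.linalg.det(b) + 2 * np.linalg.det(c)

# det is invariant: det(T) = -1, applied twice
print(np.isclose(np.linalg.det(sigma_pt), np.linalg.det(sigma)))      # True
# only det(sigma_AB) flips sign, hence Delta-tilde = Delta - 4 det(sigma_AB)
det_ab = np.linalg.det(sigma[:2, 2:])
print(np.isclose(delta(sigma_pt), delta(sigma) - 4 * det_ab))         # True
```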
Now, what happens in this case: if the PPT inequality is not violated — so it is satisfied — then the partially transposed state is a physical Gaussian state, yeah? And in particular it's also a separable Gaussian state, because its off-diagonal block determinant, det sigma-tilde_AB, will be greater than zero — remember, under partial transposition that's the only thing that swaps sign, so the minus switches to a plus, OK? So if the inequality holds, the partially transposed Gaussian state is a physical, separable Gaussian state, and therefore the original state is separable too, because partial transposition preserves separability. If you partially transpose an entangled state you can detect entanglement, but if you start from a separable state and partially transpose, you're still left with a separable state. And that's very obvious: if I put a T there, it's still a state and it's still separable. [To a question:] No, no — the column that correlates with separability is this one, not this line, as you see, OK? We want to correlate with this check or no-check, yeah? So the general criterion, therefore, is just that this inequality is satisfied. There's a very nice — a much more elegant — alternative proof of this statement, fairly recent, that is done with Schur complements. But it requires a bit more matrix-analysis machinery, which would be a bit unwieldy in a tight lecture. This proof is easier to introduce — you don't need as many preliminaries. Slightly less elegant, but using these symplectic invariants you can show that this is the case. And so we've proven that PPT is necessary and sufficient for two-mode Gaussian states. Let me give an example of this, then. We could also go on and define the logarithmic negativity, but I've left that in the notes.
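Collecting the cases, the criterion for a physical two-mode Gaussian state reads: separable if and only if det sigma + 1 >= Delta-tilde. Here's a minimal sketch of that test (my own illustration; vacuum-normalized covariance matrices with ordering (x1, p1, x2, p2)):

```python
import numpy as np

def is_separable(sigma):
    """Separability test for a (physical) two-mode Gaussian covariance matrix:
    separable  iff  det(sigma) + 1 >= Delta-tilde, with
    Delta-tilde = det(sigma_A) + det(sigma_B) - 2 det(sigma_AB)."""
    a = np.linalg.det(sigma[:2, :2])
    b = np.linalg.det(sigma[2:, 2:])
    c = np.linalg.det(sigma[:2, 2:])
    delta_tilde = a + b - 2 * c
    return np.linalg.det(sigma) + 1 >= delta_tilde

# a product of two thermal states is separable
print(is_separable(np.diag([2.0, 2.0, 3.0, 3.0])))   # True

# a pure two-mode squeezed state (r = 1) is entangled
c, s = np.cosh(1.0), np.sinh(1.0)
Z, I = np.diag([1.0, -1.0]), np.eye(2)
tms = np.block([[c * I, s * Z], [s * Z, c * I]])
print(is_separable(tms))                              # False
```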
But I don't want to do that now; maybe I can instead mention something else, a bit more along the lines of quantum technologies, since these lectures should try to give as full an account as I can manage. Right — but first, before we move on, let me give an example of how to check this entanglement, and of how to describe noise and noisy processes with these Gaussian systems. There is also a formalism of Gaussian CP maps, which is incredibly efficient, and it captures very important types of noise: bilinearity means you can reproduce all interactions with an environment that is coupled linearly to the system, which is often the case in the Born–Markov approximation, when you have weak coupling and you treat the interaction with the environment perturbatively. So for open systems it's a very widely applicable formalism. And you don't have to use Gaussian states only as probes of the dynamics: you can use this formalism to build up whatever master equation, and then apply the master equation to any state. That's a really good strategy to describe these open dynamics. Anyway, noise is very easy to represent. I won't have the time to do it in full, but let me take this state, where the big n stands for noise, multiplying a state which is iconic — let me write it down explicitly, because it's good to see these things at least once. This is a two-mode squeezed state. It's the way you realize entangled states in parametric down-conversion, with these nonlinear crystals that mediate interactions between two modes of light. You've seen those pictures with the colored rings, and at certain points on those rings you have this process — that part is kind of like classical optics. So this is the prototype of an entangled state. If you let r go to infinity, this state becomes the EPR state: it's perfectly correlated in position and momentum, by the way.
So it becomes a common eigenstate of two commuting observables, combinations of x and p — it's what Einstein used in the EPR paper. For finite r you'll have different degrees of entanglement, as we will see, but the state will always be pure. If you look at its determinant, which is incredibly easy to calculate — you get cosh squared minus sinh squared, which is 1, times cosh squared minus sinh squared, which is 1 — you always get 1. So the state is always pure. But then I add this n, which represents the noise. It's a simple, expedient way — especially when one is lecturing — of saying that the state has become noisy through some process: interaction with an environment, thermal noise, whatever. So the question I want to ask is: is this state entangled or not? There are more detailed techniques: you could calculate the smallest symplectic eigenvalue of the partially transposed covariance matrix and check whether it's greater or smaller than 1 — that's a complete recipe — and you could then also calculate the logarithmic negativity, which I've left in the notes, to try to quantify this entanglement, although whether it really quantifies it is controversial, so I don't want to get into that. At this stage, let me just use the equivalent statement in terms of invariants. So here, the determinant of sigma equals n to the fourth — the squeezed-state part gives 1, and then there's the n to the fourth. And then there's Delta-tilde, which is going to be, as always, n squared times this plus that: 2 n squared times cosh squared r plus sinh squared r, where r is called the two-mode squeezing parameter. The off-diagonal determinant comes with a minus, but then there's another minus from the partial transposition, so it enters with a plus — otherwise I'd have a 1 there and it would all be trivial. Indeed, the non-partially-transposed relationship has a minus there, and it is always satisfied, because the state is always physical, as we know.
For any n greater than or equal to 1, that is — that's the constraint the physicality relationship gives us for the state to be physical, whereas r can be any real number. So this is the equation that's going to tell us whether we are entangled or not. And it's simple, because what is cosh squared r plus sinh squared r? Can you tell me? You're certainly much better at this than I am. [Audience:] It's cosh 2r, yes — well done, and I believe you because it makes complete sense. And then let's check PPT. PPT says n^4 − 2 n^2 cosh(2r) + 1 >= 0. So say we want to write down a condition for entanglement: if this is violated, then the state is entangled. Rearranging — bring this over there and divide — cosh(2r) must be greater than (n^4 + 1)/(2 n^2). That's the condition. So n quantifies the noise — it's basically the number of photons, in a sense, but not exactly, because the energy also goes up with r, obviously; it's the trace of that thing. But no worries, that's not very relevant. Essentially, the two-mode squeezing must overcome the noise: the right-hand side is an increasing function of n — there's an n^4 in the numerator — and we've defined a state which describes fairly well what's found in certain labs. These techniques are then incredibly easy to apply. We could determine how much two-mode squeezing you need in order to beat the noise, and there are many other more or less realistic scenarios one might devise and study, but it doesn't matter — I wanted just to exemplify how this works. So that's it as far as entanglement is concerned. We still have a bit of time.
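To tie the two recipes together, here's a sketch of this example (my own, assuming the board's convention sigma = n · [[cosh r · I, sinh r · Z], [sinh r · Z, cosh r · I]]): the smallest symplectic eigenvalue of the partially transposed covariance matrix comes out in closed form as n·e^(−r), and the "smallest symplectic eigenvalue < 1" test agrees with the cosh(2r) > (n^4 + 1)/(2 n^2) condition derived above:

```python
import numpy as np

def tms_noisy_cm(r, n):
    """Noisy two-mode squeezed covariance matrix, board convention:
    n * [[cosh r I, sinh r Z], [sinh r Z, cosh r I]], physical for n >= 1."""
    c, s = np.cosh(r), np.sinh(r)
    Z, I = np.diag([1.0, -1.0]), np.eye(2)
    return n * np.block([[c * I, s * Z], [s * Z, c * I]])

def min_pt_symplectic_eig(sigma):
    """Smallest symplectic eigenvalue of the partially transposed CM."""
    T = np.diag([1.0, 1.0, 1.0, -1.0])            # flips p2
    w = np.array([[0.0, 1.0], [-1.0, 0.0]])
    omega = np.block([[w, np.zeros((2, 2))], [np.zeros((2, 2)), w]])
    sigma_pt = T @ sigma @ T
    # symplectic eigenvalues = moduli of the eigenvalues of i * Omega * sigma
    return np.abs(np.linalg.eigvals(1j * omega @ sigma_pt)).min()

r, n = 1.0, 1.2
nu = min_pt_symplectic_eig(tms_noisy_cm(r, n))
print(np.isclose(nu, n * np.exp(-r)))                     # closed form: nu = n e^{-r}
entangled = np.cosh(2 * r) > (n**4 + 1) / (2 * n**2)
print(entangled, nu < 1)                                  # the two criteria agree
```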
But I know you'll be exhausted by now, and there's still another set of lectures after all these weeks, so I'll try to be fairly narrative in what follows. Right, so — we won't need any of this, and I'll just say something which goes more in the quantum-technology direction. We saw some squeezing before, as it relates to entanglement, and how a squeezing operation is described in terms of symplectics. But you know that one of the interests of these systems is obviously sensing, because these are continuous systems, which are good for analog sensing and estimation — measuring stuff. And if you go very, very small, down to the quantum level, you will in the end need quantum estimation to measure stuff — or it will be advantageous to use it. And anyway, that gives the fundamental limit to any estimation you can do, because quantum mechanics is the most fundamental theory of nature we have. Did you do this — was there a quantum estimation part in the college, or were there no specific lectures about it? OK, so I'll just go through a few of the basics, which are good to cover if you've never seen them, and I'll mention a few Gaussian results one can handle with the formalism we introduced today. I'll add those to the notes as well. Right, so the simplest — vanilla, let's say — quantum estimation problem goes as follows. Well, actually, let's start from classical estimation. [Writes: quantum Fisher information.] The problem is this: say you have a process — any general quantum process, could be noisy, or the calibration of some unitary operation in a lab — and say, to fix ideas and simplify things, that it depends on a certain parameter that you don't know. So we want to determine this parameter theta that characterizes your operation.
What one would do in a lab is write down a rho_theta — a family of quantum states, bounded trace-class operators, obtained by launching some fiducial initial probe: you set up a probe, you let it go through the Phi_theta operation, and so you define a set of states. Then the only thing you can do in quantum mechanics is apply a POVM — a measurement — by which I mean the most general possible setting: these are the effects of the POVM, a set of positive operators that resolve the identity. Are you all familiar with the POVM formalism? Yeah? OK, sure. And mu is the outcome of the measurement. Then you can sample this distribution. By the way, once you fix your measurement apparatus on a quantum system and stick with it, the problem — the system — becomes a classical one, in the sense that it puts out a classical probability distribution, OK? So you'll have a probability distribution p(mu|theta), the probability of mu conditional on the parameter theta that you want to find, which you can sample by doing the quantum measurement in a lab — it could be homodyne detection, which we mentioned before, or anything else. It's given by the trace of rho_theta times the POVM element, and that gives you the probability of getting mu conditioned on whatever theta was. And we want to find theta in order to understand what this operation — any general CP map — was, OK? But we don't need to care about these details: the formal problem is defined simply by having a set of states rho_theta. Now, once this becomes a classical system — let me just see how I write this — the solution to the problem of sampling this probability distribution is given by a quantity called the Fisher information. And by "solution", you'll see what I mean.
I mean, the Fisher information, in the quantum setting, will depend on the POVM we chose, and it will also depend on the value of the parameter theta. And it will be given by — this is a classical estimation-theory result — the sum over outcomes of p-prime squared over p, OK? By the prime I mean the derivative of the probability with respect to the parameter theta, and whenever I put a prime in the following, it refers to differentiation with respect to theta. The notation is slightly ambiguous, because there's the parameter theta and there's the true value of theta, which sometimes get under each other's feet, but we won't really delve into those specifics. And this is important — it's the solution of the problem in the sense that there is a bound, called the Cramér–Rao bound, which says that if you're happy to quantify your uncertainty by the standard deviation of the parameter — that is, the error in the lab — then this must be greater than or equal to 1 over the square root of N times the Fisher information, where N now is not noise but the number of times you repeat your measurement, how many times you're sampling. There's always this scaling with the square root of N. The Cramér–Rao bound can always be saturated within the assumptions that we're going to make, under certain regularity hypotheses that are always satisfied for us at this stage. But the good thing is that, you see, in quantum mechanics we had the choice of what to measure. So, going back to the quantum problem: we can define something called the QFI, the quantum Fisher information — call it I(theta), OK — and that is the supremum of the classical Fisher information over every possible POVM, by which I mean every possible choice of these POVM elements. That's why I kept that dependence explicit up there.
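As a classical warm-up (my own toy example, not from the lecture): for a Gaussian distribution with unknown mean theta and known standard deviation sig, the Fisher information is 1/sig^2, and the sample mean is an efficient estimator — its error saturates the Cramér–Rao bound 1/sqrt(N · I):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, sig, N, trials = 0.7, 2.0, 100, 20000

# Fisher information of the model N(theta, sig^2) with respect to the mean:
# I(theta) = E[(d/dtheta log p)^2] = 1 / sig^2, independent of theta here
fisher = 1 / sig**2

# the sample mean over N draws, repeated many times to estimate its spread
estimates = rng.normal(theta, sig, size=(trials, N)).mean(axis=1)
print(estimates.std())            # ~ 0.2: the observed error...
print(1 / np.sqrt(N * fisher))    # 0.2: ...saturates the Cramer-Rao bound
```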
So the best ever result you can get in this regard is given by the following. And as for references — there are many books or reviews I could advise you to go through, but Helstrom has a classic book on estimation theory, for instance, although it's perhaps a bit dated; or there's Wiseman's book, Quantum Measurement and Control. [To a question:] Oh, OK — so this one is known, this is the classical result, and it's related to the Cramér–Rao bound: it's the derivative of the conditional probability with respect to the parameter, squared, divided by p — which is the derivative of a logarithm, basically; that's why it turns out to be the case. Yes — and this scaling is based on the fact that you have one system, but there are ways to think around it, in the sense that in most situations this quantity will be additive. OK, true, it could be different — it depends on what the operation is. And technically this is just for a single Hilbert space; it's a Hilbert-space result. So let me define what the quantum Fisher information is. I'll give you a form: it's the trace of rho_theta-prime times something that I will call L_theta, where L_theta is defined implicitly by 2 rho_theta-prime = L_theta rho_theta + rho_theta L_theta. This is called the symmetric logarithmic derivative. It's an operator, depending on theta, that does this: essentially, if you multiply from the left and the right and average, you get the derivative with respect to the parameter. And it's a symmetrized version of this operator — you could have it non-symmetric, but you force it to be symmetric; and symmetric here means Hermitian. And that's what you want to determine. Now, there is no time for me to go through all of this.
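The implicit definition 2 rho' = L rho + rho L is a Sylvester equation, so it can be solved numerically for L. A toy sketch (my own, on the qubit family rho_theta = diag((1+theta)/2, (1−theta)/2), for which the QFI is known to be 1/(1−theta^2)); scipy's solve_sylvester solves A X + X B = Q:

```python
import numpy as np
from scipy.linalg import solve_sylvester

theta = 0.3
rho = np.diag([(1 + theta) / 2, (1 - theta) / 2])   # rho_theta
rho_prime = np.diag([0.5, -0.5])                    # d rho / d theta

# symmetric logarithmic derivative: rho L + L rho = 2 rho'
L = solve_sylvester(rho, rho, 2 * rho_prime)

qfi = np.trace(rho_prime @ L)                       # QFI = tr(rho' L)
print(np.isclose(qfi, 1 / (1 - theta**2)))          # True for this family
```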
It would be impossible. However, let me give you just a rough idea of how you would be able to do it for Gaussian states. This L is an operator that lives in Hilbert space, so you have to go to a Hilbert-space description — in particular, to the characteristic-function description, through the Fourier–Weyl relation that we just saw. Basically, you can prove everything using the characteristic function and a trick which is a classic in quantum optics, and I'd like to show you that — it's a real building block of a lot of proofs. If you take classics like Barnett and Radmore — Methods in Theoretical Quantum Optics, I think it's called — they will detail this in all its possible nuances. It's a very fundamental trick, and it goes like this. We use the characteristic representation, essentially, to try and figure out what this L is. First off, we make an ansatz: L_theta is quadratic in the canonical operators. So, using my vector of operators — let me write it in components here — it's some quadratic form in the x's and p's plus a linear part. We assume this because we're solving a Gaussian problem, on a Gaussian probe. Now we go to the Gaussian version of the problem, where rho_theta is a family of Gaussian states, parametrized by first moments that depend on theta and a matrix of second moments that also depends on theta. So now I'm just going to give you a really basic, rough idea of how this goes. We use the characteristic-function representation: the characteristic function associated to some operator O — anything regular enough to admit one — is given by this trace against a displacement operator. Here I'm cheating slightly with respect to what I told you before, because I want to absorb that Omega matrix.
It's expedient to absorb it here in a redefinition of the variables. I put this squiggle in the notes as well, so I want to maintain it on the whiteboard: the squiggled version is the one without the Omega. This is essentially a shift operator, OK? — acting on some operator O that admits a characteristic-function description. Then there's a classic trick, which is basically the Baker–Campbell–Hausdorff expansion for this algebra, and it's simple because the algebra is central, as they say. So I can write the displacement as follows — I'll tell you what this is in a second. Here I'm grouping from r all the x's, one element every two, and likewise in this vector of real variables; and then likewise I'll have the p's. So this is Baker–Campbell–Hausdorff: I pick up a phase — the exponential in the x's times the exponential in the p's times a phase factor — and I can invert the order and pick up the opposite phase. So I define these vectors x and p as the list of x's and the list of p's, and if you interleave them, you re-form the whole vector r, OK? And what I wrote here is just a version of e^(A+B) = e^A e^B e^(−[A,B]/2), which is true because this commutator, for us, is always a number — it commutes with A and B; that's what is meant when one says the algebra is central. So first-order Baker–Campbell–Hausdorff, or a consequence thereof, is the whole story. And now, here is the trick: you differentiate both expressions with respect to one variable — let's say, differentiate chi partially with respect to p-tilde_j.
Then you get two expressions: i times the trace of the exponential times the original operator O times x_j, minus the term you pick up from the phase — in this case minus i/2 times x-tilde_j chi, say — or the same thing with the operator on the other side, the trace with x_j O, plus i/2 times that same phase term. OK, so this is the crucial bit. Setting the two equal — is that easy to see? Well, yes — we get a first result: rearranging, multiplying the characteristic function by one of the variables is the same as taking the characteristic representation of a commutator. Let me write it down explicitly in this one case, because we don't have any more time. Getting rid of all the i's, the characteristic representation of O x_j − x_j O — the commutator, in this case — equals (remember that chi was the characteristic function associated with the original operator O) the original characteristic function times the conjugate variable. So if you want an expression for the commutator with x_j, its characteristic representation is the same as multiplying the original characteristic function by that variable. You can do the same for p_j by differentiating with respect to x-tilde_j, and you can also get rules for the anticommutators. In the end you get four rules which translate between the Hilbert-space and characteristic-function representations: the derivative with respect to any variable corresponds to multiplying by r_j left and right as in an anticommutator, and multiplying by a variable corresponds to the commutator, that way.
By combining all of these, you can reconstruct what happens to a Gaussian characteristic function when you feed it into the SLD equation. You express the equation in characteristic-function form, and you obtain an equation which can be solved for the quadratic matrices of the ansatz. In the end, you end up with a general recipe to systematically evaluate the single-parameter symmetric logarithmic derivative, and from it the Fisher information, for all Gaussian states. This I'll leave to the notes — it's too much for now — and there's also a simple expression for single-mode Gaussian states, which is more revealing, but it would take a little longer to go through. And I think I've taken enough of your time. So thanks for listening.