What will be in this second chapter of these bridge notes? Essentially, the Fourier-Weyl relation. We already have the phase-space description of Gaussian states, and in principle we already have a bridge to the Hilbert space: I did not really show it, but to get some of the earlier results, such as evaluating the covariances, you need to go to the Fock basis, which already lives in the Hilbert space. And, by the way, you always do need to go into the Hilbert space, even though Gaussian dynamics may be trivial. That triviality is sometimes raised as a point against these continuous-variable problems: people in the field-theoretical tradition call these theories quasi-free, because of the normal-mode decompositions that I showed, by which you can always diagonalise these Hamiltonians and solve the dynamics. That is true, but it is not the point. Solving a problem in quantum information, such as deciding whether a state is entangled or not, is not equivalent to solving some dynamics, so that objection is unfounded, essentially. These problems are still difficult, as we are going to see, and whenever you want to face them, you have to go into the Hilbert space. One way in is the Fock space that we saw; but the most sophisticated and powerful bridge between Hilbert space and phase space is the Fourier-Weyl relation, which is essentially the characteristic function. By this I mean that the displacement operators we saw already form a basis of operators: a complete orthogonal set with respect to the Hilbert-Schmidt product. Let me state the consequence for a single mode only, because stating it for many modes would be unwieldy and unnecessary at this stage: I can expand any ρ, any state, in that basis. Let us talk about a state.
It could be any bounded operator, but for a state: ρ = (1/π) ∫_ℂ d²α Tr[D(α)ρ] D(−α). Let me make sure I use a consistent convention: the 1/π normalises it, and D(α) = exp(αa† − α*a) is the displacement operator, written in terms of complex variables with the ladder operators. You are familiar with this, I think, because you saw coherent states and all that the day before yesterday. This is the same as what we wrote before with x, p and that Ω: the product in the exponent corresponds to coupling the x's and p's through the matrix Ω. Check it out: upon the identification α = (x + ip)/√2 you get the same thing, with a and a† the ladder operators. I do not want to go into those details; what this statement says is that you can expand ρ in the "basis" of the D(α)'s. I know that mathematically this is slightly questionable, but allow me the freedom to be a bit sloppy. That is what I call the Fourier-Weyl relation, and Tr[D(α)ρ] is in fact the symmetrically ordered characteristic function of the state ρ. If we then Fourier transform it, taking the complex Fourier transform, we get the Wigner function, which we already talked about, and I do not want to get into it again. But as you know, there is a whole family of these quasi-probability functions, these phase-space representations, and another one is the P function. Are you familiar with the P function, so I get a sense of what the audience knows? You are not: okay, then let me get a bit more specific. I am not going to prove the relation itself, because that would be a bit lengthy.
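As an aside (my own illustration, not part of the original notes), the Fourier-Weyl relation can be checked numerically by reconstructing a state from its characteristic function on a truncated Fock space. The truncation and the finite integration grid make everything approximate, so the tolerances are deliberately loose; all the helper names are mine.

```python
import numpy as np
from scipy.linalg import expm

dim = 40
a = np.diag(np.sqrt(np.arange(1, dim)), 1)        # truncated annihilation operator
adag = a.conj().T

def displace(alpha):
    # D(alpha) = exp(alpha a^dag - alpha* a); exact only up to Fock truncation
    return expm(alpha * adag - np.conj(alpha) * a)

rho = np.zeros((dim, dim), complex)
rho[0, 0] = 1.0                                   # the vacuum |0><0|

step = 0.25
xs = np.arange(-3.5, 3.5 + 1e-9, step)
rho_rec = np.zeros_like(rho)
for x in xs:
    for y in xs:
        D = displace(x + 1j * y)
        chi = np.trace(D @ rho)                   # symmetric characteristic function
        rho_rec += chi * D.conj().T * step**2 / np.pi   # D(-alpha) = D(alpha)^dagger

err = np.max(np.abs(rho_rec[:5, :5] - rho[:5, :5]))
```

Here the decay of χ(α) = e^{−|α|²/2} is what makes the integral converge on a modest grid; for more excited states you would need a larger grid and Fock cutoff.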
But I will give you one ingredient: the proof rests on a very interesting representation of the projector onto the vacuum, equivalent to an identity that we are going to need later and which is proven in the notes: |0⟩⟨0| = (1/π) ∫_ℂ d²α e^{−|α|²/2} D(α), where choosing D(−α) instead makes no difference; minus or plus is the same. That, together with the fact that the set of coherent states is an over-complete set, so that you can resolve the identity through it, is how you derive the relation. Let me state that other important equality as well: at the Hilbert-space level, the identity operator is 1 = (1/π) ∫_ℂ d²α |α⟩⟨α|, where |α⟩ = D(α)|0⟩ is a coherent state. So this is a résumé of quantum optics, in a sense, and I think it is the most cultural bit of all of it, something all physicists should know. If I had to pick one piece of quantum optics that everyone should know, it is not the Wigner function and all that, because that is just derivative, an elaboration; it is this, the bit that really bridges between the Hilbert space and the phase-space description through the characteristic function. Now, if we put in the Gaussian definition that we had before (this is done explicitly in the notes, though it is a bit pedantic), you get the following. The characteristic function of a Gaussian state, going back now to the real variables, where I remember the formula, should be χ(r̃) = exp(−¼ r̃ᵀΩσΩᵀr̃ + i r̄ᵀΩᵀr̃), hopefully, up to a sign in the phase. An Ω always appears, because of the way the canonical commutation relations work.
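Here is a small numerical sketch (again my addition) of that over-completeness statement: summing |α⟩⟨α| over a grid of the complex plane reproduces the identity on the low-lying Fock states.

```python
import numpy as np

dim = 30
n = np.arange(dim)
# n! for n = 0 .. dim-1, built by cumulative product
fact = np.concatenate(([1.0], np.cumprod(np.arange(1.0, dim))))

def coherent(alpha):
    # |alpha> = e^{-|alpha|^2/2} sum_n alpha^n / sqrt(n!) |n>, truncated
    return np.exp(-abs(alpha) ** 2 / 2) * alpha ** n / np.sqrt(fact)

step = 0.2
xs = np.arange(-5, 5 + 1e-9, step)
resolved = np.zeros((dim, dim), complex)
for x in xs:
    for y in xs:
        v = coherent(x + 1j * y)
        resolved += np.outer(v, v.conj()) * step ** 2 / np.pi

# the top-left block should be close to the identity
err = np.max(np.abs(resolved[:8, :8] - np.eye(8)))
```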
Here r̄ is the vector of first moments of the state, and it acts as a phase on the characteristic function; σ is the covariance matrix. So that is the characteristic function of a Gaussian state, and it relates our phase-space description, given by σ and r̄, to the Hilbert-space operator: plugging this χ into the Fourier-Weyl relation gives an alternative expression of a Gaussian state, which connects to standard quantum-optical descriptions. If you then take the Fourier transform of this, σ goes to σ⁻¹, obviously, as for any Gaussian function, and the phase becomes a shift. You get the Wigner function, which for one mode is really a Gaussian blob, centred at r̄, with a shape determined by σ, living in the phase space given by x and p, or by α through that identification. So that is a very concise summary of all this; but now I want to go into the P representation slightly more specifically. You can define other, ordered characteristic functions; in particular the normally ordered one, χ₁(α) = Tr[D(α)ρ] e^{|α|²/2} (on the board I first wrote the wrong sign in this exponent and corrected it later; the plus sign is the right one for what follows). As you notice, we are making things more difficult for ourselves: we are multiplying by a divergent factor, so we are taking a risk, and once we Fourier transform this it may behave worse than the Wigner function, which is the Fourier transform of the plain χ. I will come back to this point, because it is relevant to the entanglement of these systems; it looks like an interesting side remark, but it will be very instrumental.
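To picture the Gaussian blob concretely, here is a short sketch (my addition, under the convention σ_vacuum = 1, so that ⟨Δx²⟩ = σ_xx/2) that evaluates the Gaussian Wigner function and checks its normalisation and second moment on a grid.

```python
import numpy as np

def gaussian_wigner(X, P, rbar, sigma):
    # single-mode W(r) = exp(-(r - rbar)^T sigma^{-1} (r - rbar)) / (pi sqrt(det sigma))
    sinv = np.linalg.inv(sigma)
    dx, dp = X - rbar[0], P - rbar[1]
    quad = sinv[0, 0] * dx**2 + (sinv[0, 1] + sinv[1, 0]) * dx * dp + sinv[1, 1] * dp**2
    return np.exp(-quad) / (np.pi * np.sqrt(np.linalg.det(sigma)))

z = 0.4                                            # squeezing parameter
sigma = np.diag([np.exp(2 * z), np.exp(-2 * z)])   # squeezed-vacuum covariance matrix
rbar = np.array([1.0, -0.5])                       # first moments: just a shift

step = 0.05
grid = np.arange(-6, 6 + 1e-9, step)
X, P = np.meshgrid(grid, grid, indexing="ij")
W = gaussian_wigner(X, P, rbar, sigma)

norm = W.sum() * step**2                               # should be 1
var_x = (((X - rbar[0]) ** 2) * W).sum() * step**2     # should be sigma_xx / 2
```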
The P representation is then the Fourier transform of that; I believe the convention I normally use has no square root but a π², so P(α) = (1/π²) ∫_ℂ d²β e^{αβ* − α*β} χ₁(β). And the P representation has a very interesting property. This line was instrumental, not alone, but instrumental, in getting a Nobel Prize for Glauber: it is the renowned Glauber-Sudarshan representation, which says simply that ρ = ∫_ℂ d²α P(α) |α⟩⟨α|. That is a rather freakish equation. Coherent states are so over-complete, so far-reaching, that you can express any state as, in a sense, a diagonal mixture of projectors onto coherent states. What is the catch? Some of you will know. There is a problem, of course, in what I just said: that P(α) is usually not even a function. It is something nasty, not only negative, but possibly involving derivatives of delta functions, some distribution which is not a function. But the representation can still be defined consistently this way. And let me mention how to prove it, because this is really interesting. Write the projector as |α⟩⟨α| = D(α)|0⟩⟨0|D(−α), where D(−α) is just the inverse of that unitary. Inserting the integral representation of the vacuum projector, the displacements shift the operator inside by α − α, so by zero, but they produce a phase. You should be familiar with this: if I combine two shift operators, I get the shift operator of the sum of the shifts, times a phase, and that phase embodies the canonical commutation relations: D(α)D(β) = e^{(αβ* − α*β)/2} D(α + β). So let me write down explicitly what I have on the right-hand side, ∫_ℂ d²α P(α)|α⟩⟨α|. This is an interesting proof, so I want to show it explicitly.
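Before going on, the composition law just invoked, D(α)D(β) = e^{(αβ* − α*β)/2} D(α + β), can be verified numerically on a truncated Fock space (my addition; for small displacements the truncation error is negligible on the low-lying block).

```python
import numpy as np
from scipy.linalg import expm

dim = 60
a = np.diag(np.sqrt(np.arange(1, dim)), 1)    # truncated annihilation operator
adag = a.conj().T

def displace(alpha):
    return expm(alpha * adag - np.conj(alpha) * a)

alpha, gamma = 0.3 + 0.2j, -0.1 + 0.4j
lhs = displace(alpha) @ displace(gamma)
# the phase encoding the canonical commutation relations
phase = np.exp((alpha * np.conj(gamma) - np.conj(alpha) * gamma) / 2)
rhs = phase * displace(alpha + gamma)

err = np.max(np.abs(lhs[:10, :10] - rhs[:10, :10]))
```

Note that the exponent αβ* − α*β is purely imaginary, so the prefactor is indeed a pure phase.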
This will be (1/π) ∫_ℂ d²α P(α) times the projector on α, which I write with that integral representation in another variable, say γ: |α⟩⟨α| = (1/π) ∫_ℂ d²γ e^{−|γ|²/2} e^{αγ* − α*γ} D(γ). The phase e^{αγ* − α*γ} is the combined effect of the two displacements acting on D(γ). Why γ rather than −γ there? It is the same: the statement is invariant under that sign choice. And then, and this is really nice, the integral over α is exactly a counter Fourier transform of P, because we got P by Fourier transforming χ₁. Think about it: that is why this works. So the α integral is eaten up, and I end up with the characteristic function χ₁(γ) multiplying e^{−|γ|²/2} under the γ integral. (At this point on the board I confused myself over a sign and realised I had said something wrong earlier: of course I make more trouble for myself, not less, if I multiply χ by something divergent like e^{+|α|²/2}. That is exactly why the P function is less regular; whereas if you put the minus, e^{−|α|²/2}, you get the Q representation, which is more regular. My bad; fortunately I went through this and found the mistake, and now it all works.) So, to make sure I have learned this: under the integral I have χ₁(γ) e^{−|γ|²/2}, and what remains is (1/π) ∫_ℂ d²γ of that times D(γ): we counter Fourier transformed P and got χ₁ times that Gaussian factor.
But χ₁(γ) e^{−|γ|²/2} is just χ₀(γ), and χ₀ goes back in there: what is left is the integral of χ₀ against the basis operators, and by the Fourier-Weyl relation, which is the first thing I wrote, this is ρ. Which is exactly what we set out to prove. So that is how you prove this celebrated representation. Why am I making all this fuss about it? The reason is this (I was hoping it would be more immediately obvious, but actually it is quite clear): in going from α to the real variables you get a factor of two because of the √2 in α = (x + ip)/√2. Trust me, or try it out. Then the factor e^{|α|²/2} becomes e^{r̃ᵀr̃/4}, which is also equal to e^{¼ r̃ᵀΩᵀΩr̃}; I am writing it for a single mode, but it does not really matter. Now, for the P function of a Gaussian state to be well behaved, and by well behaved I mean not worse behaved than a delta function, it must be that σ ≥ 1. Why? Because to get χ₁ I multiply χ₀ by that factor, so in the quadratic part of the exponent σ is effectively replaced by the matrix σ − 1. And in order to be able to Fourier transform this properly, I want all the eigenvalues of σ − 1 to be positive. If some are zero, you get delta functions; but if any are worse, negative, then you cannot define a proper function like that. You can still define the transform, but it will not be a function or a delta function; it will be some other distribution. At this stage this will not tell you much, but we will see why it is very important to us. That is the one result I wanted to go through, and it is also useful to get some practice manipulating these P functions.
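In covariance-matrix terms this regularity condition is a one-line check (my addition, σ_vacuum = 1 convention): a thermal state passes it, a squeezed vacuum fails it.

```python
import numpy as np

def p_function_ok(sigma, tol=1e-12):
    # For a Gaussian state, the P function is no worse than a delta
    # function iff sigma - identity is positive semi-definite.
    return bool(np.min(np.linalg.eigvalsh(sigma - np.eye(len(sigma)))) >= -tol)

nbar = 0.5
thermal = (2 * nbar + 1) * np.eye(2)                 # classical-like: P is a Gaussian
z = 0.3
squeezed = np.diag([np.exp(2 * z), np.exp(-2 * z)])  # squeezing: P worse than a delta
```

The vacuum itself sits exactly on the boundary σ = 1, where the P function is a delta function.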
After all, this is core quantum optics. So now we can go on to discuss the entanglement of Gaussian states. First off, what is an entangled state? You probably know it, but to keep this self-contained let me repeat it. A state ρ of a composite system with Hilbert spaces H_A and H_B is separable if and only if you can write it down as a convex combination, a probabilistic mixture, of product states: ρ = Σ_j p_j ρ_{A,j} ⊗ ρ_{B,j}. These are exactly the states that can be formed from uncorrelated states through local operations and classical communication: a hypothetical Alice samples the distribution p_j, and when she gets j she calls up Bob and instructs him to create the state ρ_{B,j}, whilst she creates the state ρ_{A,j}, entirely locally. That is how you do it; that is how you correlate two coins, through this shared ignorance. And ρ is entangled, as you will probably know, if and only if it is not separable. Now, there are many tests to check entanglement, and the whole extent of that study would be beyond the scope of this lecture, but I want to mention just one test: PPT, positivity of the partial transposition. If ρ is separable, and can therefore be written that way, then ρ̃ ≥ 0, where ρ̃ stands for the partial transposition, meaning we transpose in only one of the two Hilbert spaces. Which one is irrelevant; why, I will come to that later. The point is that if you transpose a whole state, it is still a state, still positive specifically; so each transposed product term, and hence the whole mixture, remains a positive semi-definite operator. So PPT is a necessary condition for separability.
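The PPT test is easiest to see in finite dimensions first, so here is a sketch (my addition) for the two-qubit Werner family, where the partial transpose of ρ(p) = p|ψ⁻⟩⟨ψ⁻| + (1 − p)·1/4 has minimum eigenvalue (1 − 3p)/4.

```python
import numpy as np

def partial_transpose(rho, dA=2, dB=2):
    # transpose on subsystem B: rho_{ab,a'b'} -> rho_{ab',a'b}
    return rho.reshape(dA, dB, dA, dB).transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

psi_minus = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)   # singlet (|01> - |10>)/sqrt(2)

def werner(p):
    return p * np.outer(psi_minus, psi_minus) + (1 - p) * np.eye(4) / 4

# p = 1/2: minimum eigenvalue (1 - 3p)/4 = -1/8 < 0, hence entangled
min_eig = np.min(np.linalg.eigvalsh(partial_transpose(werner(0.5))))
```

For two qubits PPT is necessary and sufficient, so the p = 0.2 member, whose partial transpose stays positive, really is separable.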
And it turns out, because the two notions are mutually exclusive by definition, that its violation is sufficient for entanglement: if ρ̃ is not positive semi-definite, if it has even one negative eigenvalue, then ρ is entangled. In general this criterion is only that: necessary for separability, sufficient for entanglement, not necessary and sufficient. There are PPT entangled states, which are called bound entangled because they cannot be distilled; but that is quantum information theory proper, and here, to this extent, PPT is just a test of whether a state is entangled or not. Now, it turns out that for the most iconic type of continuous-variable entanglement, the two-mode Gaussian state, which is what you have in the lab more often than not, the test does better. Granted, nowadays in optical combs they have these millions of entangled modes, which would escape this criterion; but in all the early days, and in most of the situations we are still interested in now, two modes is the case. For instance, if you want to check whether the light mode of an optomechanical cavity is entangled with a single mechanical mode, that is still a two-mode Gaussian state, perhaps, and you can still check its entanglement through the covariance matrix. And it turns out that the condition we are going to see now is not only sufficient but also necessary for Gaussian entanglement of two modes. That is no longer true in general: it is still true for one mode versus any number of modes, and it is still true for bisymmetric states, meaning states which are symmetric under exchange of modes within the same partition, but it is no longer true for generic Gaussian states of M + N modes.
But for two modes, this turned out to be necessary and sufficient, like for two qubits, or for a qubit and a qutrit, in finite dimensions. So that is what we set out to prove. And the first thing I want to prove is a sufficient condition for separability. Is it true only for Gaussians, this one? Yes, it is a statement about Gaussian states, but within that it is very general: a Gaussian state with a covariance matrix σ ≥ 1 is always separable. Why? Because we have it already on the blackboard. Look at what happens: the relevant characteristic function would be a well-defined Gaussian, and so would its Fourier transform. Sorry about the mix-up in notation here, complex on one side and real on the other; it is just so that I do not write a wrong formula. If σ − 1 is positive semi-definite, then at worst you can get a delta function, which is obviously the P function of a coherent state, of a projector onto a coherent state, see? So the P function is well behaved, and in the multi-mode case this means you get a well-defined, positive Gaussian mixture of tensor products of coherent states: a mixture of product states. I did not write the multi-mode case explicitly, but you can see that this all carries over, and the reason everything carries over so simply from the single-mode case is that the Weyl operators are tensor products over the different modes; they do not let modes talk to each other, it is all a direct sum at the phase-space level. So the P representation immediately gives the separable decomposition: the state must be separable. Right, so that is the first step.
And besides being instrumental to what follows, this statement (which holds for any partition: however you split the modes, you always end up with the same conclusion) brings me to an underlying truth that runs quite deep: you always need squeezing to have continuous-variable entanglement. By squeezing I mean an eigenvalue of σ smaller than one. That is a very nice and powerful statement; it is very simple, but it has far-reaching implications: no squeezing, no entanglement, ever, in continuous variables, for Gaussians. (If you truncate the space and play genuinely infinite-dimensional, non-Gaussian games, you can get effects that do not abide by this. But okay.) So, cool: that is the first thing, and it holds for any number of modes. Now let me do everything quite in detail, because for the second part we consider two-mode states. As we saw in the previous lectures, we can quite safely disregard the first moments as far as entanglement is concerned, because they can be adjusted by LOCC as we like. So let us consider two-mode states, and call σ the matrix where we encode all the information about the state, in blocks: σ_A, the 2×2 covariance matrix of mode A; σ_B, the 2×2 covariance matrix of mode B; and σ_AB, which is where the correlations are. And the first remark is that you can always reduce this: there is always a set of local operations, symplectic LOCC let us call them (actually no classical communication is needed, it is local operations alone), that brings σ to the standard form, also called the Simon normal form, with blocks a·1, b·1 and diag(c₊, c₋). So how do we do this?
Well, first off, you have local operations based on the normal-mode decomposition: for a single-mode block you can always reduce to Williamson form. The normal-mode form is also called the Williamson form because these things were still being studied in the 1930s, which is quite late, considering how fundamental this stuff is: just quadratic Hamiltonians, which already appear in classical mechanics. The catch was that it is quite difficult to classify everything you can do through symplectic operations on a matrix that is not strictly positive. The strictly positive case that I discussed is the simple one. There is a very well-known, notorious appendix to Arnold's classical mechanics book where all these normal forms are described (there is even a mistake in that appendix), and that is Williamson's theorem: Williamson was interested in all the pathological cases, which we do not care about at this stage, so what the literature often calls Williamson's theorem is really about the easy case, while Williamson's own theory is about the difficult ones. Anyway, by virtue of Williamson's theorem, σ_A and σ_B can be reduced to normal form, each proportional to the identity, through local symplectics. Now, see, these normal forms are invariant under local rotations, which are also symplectic: they are the phase plates that we saw before. So through those you can apply one rotation on the left and a different one on the right of the off-diagonal block, performing a singular value decomposition that brings it to diagonal form regardless of what it is. And you even have enough freedom in these orthogonals to choose c₊ ≥ |c₋|. Okay, so it is always true that you can apply local operations (and classical communication, if you like) to achieve this normal form, with all those zeros and the four parameters which are relevant.
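The reduction just described can be spelled out as an algorithm (my sketch of the procedure, under the σ_vacuum = 1 convention and xpxp ordering; the function names are mine): single-mode Williamson on each diagonal block via S = √ν σ⁻¹ᐟ², which is symplectic because it is 2×2 with determinant one, then a singular value decomposition of the off-diagonal block implemented with proper local rotations.

```python
import numpy as np
from scipy.linalg import sqrtm

def rot(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

def local_symplectic(t1, z, t2):
    # any 2x2 symplectic (= det +1) matrix: rotation * squeezer * rotation
    return rot(t1) @ np.diag([np.exp(z), np.exp(-z)]) @ rot(t2)

def standard_form(sigma):
    """Reduce a two-mode covariance matrix [[A, C], [C^T, B]] to Simon
    standard form with blocks a*I, b*I, diag(c+, c-) by local symplectics."""
    A, B, C = sigma[:2, :2], sigma[2:, 2:], sigma[:2, 2:]
    a, b = np.sqrt(np.linalg.det(A)), np.sqrt(np.linalg.det(B))
    # single-mode Williamson step: S sigma_1 S^T = nu * I, S = sqrt(nu) sigma_1^{-1/2}
    SA = np.real(np.sqrt(a) * np.linalg.inv(sqrtm(A)))
    SB = np.real(np.sqrt(b) * np.linalg.inv(sqrtm(B)))
    C1 = SA @ C @ SB.T
    U, s, Vt = np.linalg.svd(C1)
    # keep only proper rotations; any sign lands in c-
    if np.linalg.det(U) < 0:
        U[:, 1] *= -1
    if np.linalg.det(Vt) < 0:
        Vt[1, :] *= -1
    S = np.zeros((4, 4))
    S[:2, :2] = U.T @ SA
    S[2:, 2:] = Vt @ SB
    return S @ sigma @ S.T

# scramble a two-mode squeezed state by local symplectics, then recover it
r = 0.6
ch, sh = np.cosh(2 * r), np.sinh(2 * r)
Z = np.diag([1.0, -1.0])
tms = np.block([[ch * np.eye(2), sh * Z], [sh * Z, ch * np.eye(2)]])
L = np.zeros((4, 4))
L[:2, :2] = local_symplectic(0.3, 0.25, -0.7)
L[2:, 2:] = local_symplectic(-1.1, 0.5, 0.4)
sf = standard_form(L @ tms @ L.T)
```

Since the two-mode squeezed state is already in standard form (a = b = cosh 2r, c± = ±sinh 2r), the scrambled matrix should come back to it exactly.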
And those four parameters are what carries the invariants under local symplectics; the global symplectic invariants are functions of them. Okay. Then another lemma: a two-mode Gaussian state with det σ_AB ≥ 0 is separable. This is a bit technical and I do not want to get into it: what you do is show that if det σ_AB ≥ 0, that is, if c₊ and c₋ have the same sign (or one of them vanishes), you can always define a sequence of local squeezings and global beam splitters that leads to a state with σ ≥ 1, which must then be separable by the previous lemma. The details are irrelevant here; again, they are all set out and fleshed out quite in detail in the notes. So that is an important lemma that we have. And now we are ready to go for the jugular, which is to establish that this PPT condition is necessary and sufficient for two-mode Gaussian states. Well, we need a few more lines of preparation. The first: PPT is a statement about the positivity of states, right? But if you remember, we had another, Gaussian, phase-space way of expressing positivity: σ + iΩ ≥ 0, the Heisenberg principle in covariance form. So one ingredient that we need is this, and its consequences, among which is the fact that all the symplectic eigenvalues must be greater than or equal to one. Not the orthogonal eigenvalues, eh? Those can be smaller than one, and hence there can be squeezing and some entanglement. But the symplectic eigenvalues must always be at least one, and they equal one for pure states, like the vacuum. And this can be expressed in yet another way, which we are going to do now.
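Both statements, σ + iΩ ≥ 0 and the symplectic eigenvalues being at least one, are easy to check numerically (my addition; xpxp ordering, σ_vacuum = 1):

```python
import numpy as np

def omega(n):
    # symplectic form in xpxp ordering: direct sum of [[0, 1], [-1, 0]]
    return np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))

def is_physical(sigma, tol=1e-10):
    # Heisenberg principle in covariance form: sigma + i*Omega >= 0
    n = len(sigma) // 2
    return bool(np.min(np.linalg.eigvalsh(sigma + 1j * omega(n))) >= -tol)

def symplectic_spectrum(sigma):
    # symplectic eigenvalues = moduli of the eigenvalues of i*Omega*sigma
    n = len(sigma) // 2
    ev = np.sort(np.abs(np.linalg.eigvals(1j * omega(n) @ sigma)))
    return ev[::2]                      # each value comes in a pair

z = 0.5
squeezed = np.diag([np.exp(2 * z), np.exp(-2 * z)])   # pure state: nu = 1
too_small = 0.5 * np.eye(2)                           # violates Heisenberg
```

Note how the squeezed state has an orthogonal eigenvalue well below one, yet its symplectic eigenvalue is exactly one, so it is perfectly physical.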
How do we see what that expression should be? It must be possible to express the condition in terms of quantities that are invariant under symplectic operations: you can apply a symplectic on either side of σ by congruence and the physics does not change, so the condition must depend only on symplectically invariant quantities. And for an n-mode Gaussian state there are n independent symplectic invariants, which correspond to the n symplectic eigenvalues. I mention this because there are several problems where it is convenient to use these invariants. So, specifically for a two-mode state: one invariant is very simple, it is just the determinant of σ. The determinant is always an invariant because symplectic operations, by the defining condition SΩSᵀ = Ω, form a matrix group, so they are invertible and cannot have determinant zero; and by Binet's theorem (the determinant of a product is the product of the determinants) they must have determinant either +1 or −1. It turns out they always have determinant +1, by the way. So they cannot change det σ acting by congruence, and it must be an invariant. The other invariant is a bit more exotic. In general you can determine all the invariants by constructing the characteristic polynomial whose roots give the symplectic eigenvalues (I will not go into that): its coefficients are all the invariants. But one of them is very simple to write down, and it is this quantity: Δ = det σ_A + det σ_B + 2 det σ_AB. And it is kind of interesting to see how that turns out to be an invariant. Let me see how much time I have; yes, let me do that. The reason why this is a symplectic invariant can be seen by looking at the standard form.
So any state can be brought into standard form through local operations, and local symplectic operations are still symplectic operations, so the global invariants must be expressible in terms of those four quantities. And if you evaluate Δ there, it is a² + b² + 2c₊c₋, which, if you regroup the variables, reorganising the vector so that the x's form one block and the p's form the other, is just the Hilbert-Schmidt product Tr[σ_x σ_p] between the x block and the p block. Then you can show that you can bring any matrix from the standard form to the Williamson normal form, with ν₁, ν₁, ν₂, ν₂ on the diagonal, by operations that do not change this quantity: the beam splitters that we wrote before are just rotations among these blocks, and if you mix beam splitters and local squeezers you can bring the state from standard form to normal form. And that proves that Δ is an invariant, determined by the symplectic eigenvalues. But I do not want to push further; I think this is a good place to stop and clarify what we have established, a few things. We have that the violation of PPT, of positivity of the partial transpose, is a sufficient condition for entanglement in general, so for Gaussian states too. And then we have a couple of lemmas, the most important of which says that in a two-mode Gaussian state, if det σ_AB ≥ 0, then ρ is separable.
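The two invariants, det σ and Δ, indeed recover the symplectic spectrum through 2ν±² = Δ ± √(Δ² − 4 det σ); here is a quick numerical cross-check (my addition) on a two-mode squeezed thermal state, whose symplectic eigenvalues both equal the thermal parameter.

```python
import numpy as np

def two_mode_invariants(sigma):
    A, B, C = sigma[:2, :2], sigma[2:, 2:], sigma[:2, 2:]
    Delta = np.linalg.det(A) + np.linalg.det(B) + 2 * np.linalg.det(C)
    return Delta, np.linalg.det(sigma)

def nu_from_invariants(Delta, detsig):
    # the symplectic eigenvalues solve nu^4 - Delta nu^2 + det sigma = 0
    root = np.sqrt(max(Delta**2 - 4 * detsig, 0.0))
    return np.sqrt((Delta - root) / 2), np.sqrt((Delta + root) / 2)

def omega(n):
    return np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))

def symplectic_spectrum(sigma):
    n = len(sigma) // 2
    return np.sort(np.abs(np.linalg.eigvals(1j * omega(n) @ sigma)))[::2]

# two-mode squeezed thermal state: both symplectic eigenvalues equal nbar
nbar, r = 1.5, 0.4
ch, sh = np.cosh(2 * r), np.sinh(2 * r)
Z = np.diag([1.0, -1.0])
sigma = nbar * np.block([[ch * np.eye(2), sh * Z], [sh * Z, ch * np.eye(2)]])
Delta, detsig = two_mode_invariants(sigma)
```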
And then we have a way to express positivity, σ + iΩ ≥ 0, which can be re-expressed in terms of the symplectic invariants as det σ − Δ + 1 ≥ 0. I cannot do it all here, but it is not difficult to show: you just use the fact that det σ is the product of the squared symplectic eigenvalues, ν₊²ν₋², while Δ is their sum, ν₊² + ν₋², and that each of them must be greater than or equal to one; putting everything together, (ν₊² − 1)(ν₋² − 1) ≥ 0, which is exactly this relationship. So this must always be true: we have identified two invariants, and any covariance matrix must satisfy this to be physical. Next I will come back and tell you how to describe partial transposition in phase space; then we put everything together, find the criterion, and show that it is necessary and sufficient for two-mode Gaussian states, which solves that problem. And I think that is it for this lecture. We still have one lecture in the afternoon, right? Yeah, cool. Thanks.
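As a postscript, anticipating the next lecture: in phase space, partial transposition flips the sign of det σ_AB, so Δ → Δ̃ = det σ_A + det σ_B − 2 det σ_AB, and the criterion amounts to asking whether the smallest symplectic eigenvalue ν̃₋ of the partially transposed state drops below one. A sketch (my addition, stating Simon's condition for completeness; for a two-mode squeezed state ν̃₋ = e^{−2r}):

```python
import numpy as np

def nu_tilde_minus(sigma):
    """Smallest symplectic eigenvalue of the partial transpose of a two-mode
    Gaussian state; nu_tilde_minus < 1 iff the state is entangled (Simon)."""
    A, B, C = sigma[:2, :2], sigma[2:, 2:], sigma[:2, 2:]
    # partial transposition flips the sign of det C
    Dt = np.linalg.det(A) + np.linalg.det(B) - 2 * np.linalg.det(C)
    root = np.sqrt(max(Dt**2 - 4 * np.linalg.det(sigma), 0.0))
    return np.sqrt((Dt - root) / 2)

def tms(r):
    # two-mode squeezed vacuum covariance matrix (sigma_vacuum = 1 convention)
    ch, sh = np.cosh(2 * r), np.sinh(2 * r)
    Z = np.diag([1.0, -1.0])
    return np.block([[ch * np.eye(2), sh * Z], [sh * Z, ch * np.eye(2)]])
```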