Very nice. I've never had a bell. All right. Yes, please. You're right, I can show you. What I mean is: for any anti-symmetric matrix, there exists an orthogonal one that brings it to this canonical block form. Well, I chose even dimension here because it's our case and it's easier with anti-symmetric matrices; it's true anyway, but it's just easier to state. And the way you prove it is very simple: A squared is symmetric, so you diagonalize A squared and read off those blocks. And then, through this very, very simple matrix manipulation, this algebraic property, you can prove the normal-mode decomposition in general. Yeah?

Yes. So we wanted the spectrum of this state, so that we can understand how the spectrum of any Gaussian state works. And we managed to boil it down to the spectrum of this exponential, which in turn represents a bunch of Hamiltonians given by (omega_j / 2)(x_j^2 + p_j^2). Yeah? So this is the standard harmonic oscillator, and you all know how to diagonalize it. Do I have a half? Yes, there's a half, even in the standard definition. And h-bar is always equal to one here; I didn't even talk about h-bar, but there would be an h-bar at the very first step. So you all know how to diagonalize this: it's the Fock basis, essentially. These are just standard harmonic oscillators, all equally spaced, in as many modes as you like.

Do I have an expression for this in the end? Yeah, there's nothing else to say; we just have to exponentiate. Okay? So the way it goes is: this is the definition of a Gaussian state. Yeah? I'm not really saying anything, because this is a definition, but you can also prove later, maybe not today, that these are exactly the states with a Gaussian Wigner function. Going back to your question, yes. And I'm saying something else too. Diagonalizing this state, which is just a function of H up to normalization, is the same as diagonalizing H. And diagonalizing it means you'll have, first a symplectic, so a second-order operation, then a displacement acting on top of it, and then a central Hamiltonian given by a sum (not a tensor product, sorry, a sum) of these oscillator Hamiltonians. Okay? That's what we're saying. That's the path. And therefore any Gaussian state is diagonal in the Fock basis of those normal modes. Yeah?

So we found the eigenbasis, and the spectrum is given by exponentiating. To be more specific, it has a tensor product structure, obviously: the eigenvalues are products over all modes of factors e^(-beta omega_j n_j), one factor for each choice of the occupation numbers n_j, because of the way we decided to write down these variables. Okay? Yeah? These specific exponentially decaying spectra are then a hallmark of Gaussian states. Okay? And this is something that I do in the notes as well. Let me get rid of this beta. I wanted beta for other reasons, but I can just write xi_j and absorb all the betas into the eigenfrequencies of the Hamiltonian, because it's irrelevant here.
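(As a minimal numerical sketch of that claim about anti-symmetric matrices, mine rather than the lecture's, assuming numpy and scipy: the real Schur decomposition of a real anti-symmetric matrix yields an orthogonal O and a block-diagonal T whose 2x2 diagonal blocks are [[0, lambda_j], [-lambda_j, 0]], exactly the canonical form referred to above; the lambda_j squared are minus the eigenvalues of A squared.)

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)
n = 3                                    # number of modes; the dimension is 2n
M = rng.standard_normal((2 * n, 2 * n))
A = M - M.T                              # a generic real anti-symmetric matrix

# Real Schur decomposition: A = O @ T @ O.T with O orthogonal. Because
# T = O.T @ A @ O is still anti-symmetric (and quasi-triangular), it comes out
# block diagonal with 2x2 blocks [[0, lam], [-lam, 0]]: the canonical form.
T, O = schur(A, output='real')

print(np.round(T, 4))                          # the 2x2 blocks along the diagonal
print(np.allclose(O @ O.T, np.eye(2 * n)))     # O is orthogonal
print(np.allclose(O @ T @ O.T, A))             # and it reconstructs A
```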
And then, so, that's the spectrum. We could then, in principle, determine entropies, but one thing I want to do first: obviously this will all boil down to Gaussian distributions in the end, right? They can be mapped into those. And like any Gaussian function, all of these parameters, which are, you know, the eigenfrequencies, the first moments, and the S, which in the end comes from a 2n by 2n symmetric matrix, yeah? They can be encoded in other, more operational parameters. Which is what I'm going to do now. Blah, blah, we know what symplectics are. So I don't need this anymore. So instead of describing the Gaussian state with all of those parameters that encode H, and therefore the state through exponentiation, you can use more direct, operational definitions. First moments are simply the expectation values of this vector R. And, surprise, surprise, it takes two lines to see that if you don't have any displacement, the expectation value is zero, because you have odd combinations of canonical operators acting on even states, and it's very simple to see that they're always zero. The displacement just displaces it; that's obvious. And then there's the covariance matrix, which in its symmetrized form is given by the anticommutator of the centered quadratures with their transpose. By which we mean, again, the anticommutator taken in the outer-product sense, like here: we take all possible pairs of x's and p's, anticommute them, and take the expectation value. Yeah?

So this thing, now, I'll just give you a very brief idea of how to go about it. The state rho, which I wrote before, is D_r, then S-hat, then the exponential of minus the normal-mode Hamiltonian, then S-hat dagger and D_r dagger. Did I say everything right? Yeah, with a half, I forget. All over the trace, the normalization; I don't care what it is, I mean, we could calculate it. Oh, and the product: this is a tensor product, eh? Tensor products in Hilbert space become direct sums in phase space, and this is something that we're going to see. So this is a tensor product over all j, over all the normal modes. And there's some symplectic, some representation of a symplectic acting on the Hilbert space, and then a displacement there.

Now, when you compute this sigma, if you put that rho in here, what's going to happen is that you get displaced: this D_r will just displace the operators inside the anticommutator. I can take this D_r across and let it act everywhere. So that's not going to do anything, essentially; it just removes the shift in the first moments, yeah? And this is the reason why the first line is true. So, let me write it down: this is the trace of the anticommutator against D_r dagger, S-hat, e to the minus (xi_j / 2)(x_j^2 + p_j^2) in each mode, then S-hat dagger and D_r. Okay, so then I can bring the displacement across, which means I just end up with the quadrature vectors without the shift. And then I do the same with S-hat, S-hat dagger. But the beauty of this, and this is why the formalism is so effective in this situation, is what S-hat does. Something that I didn't write explicitly down before, but that we established, is that the metaplectic correspondence means exactly this: S-hat (let me say S-hat dagger; I can pick any convention I want at this stage) acting on the whole vector R gives S R. Someone must be kind of lost, because I explained this very badly, not on purpose.
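(Collecting what was just said into formulas; this is my transcription of the board content in the conventions above, with Z the normalization and the xi_j the beta-rescaled eigenfrequencies:)

```latex
% First and second (symmetrized) moments of the quadrature vector R:
r = \langle \hat{R} \rangle = \operatorname{Tr}\!\big[\rho\,\hat{R}\big],
\qquad
\sigma = \Big\langle \big\{ \hat{R}-r,\ (\hat{R}-r)^{\mathsf T} \big\} \Big\rangle .

% The general Gaussian state being manipulated in this passage:
\rho = \frac{1}{Z}\,\hat{D}_{r}\,\hat{S}
\left( \bigotimes_{j=1}^{n}
  e^{-\frac{\xi_j}{2}\left(\hat{x}_j^{2}+\hat{p}_j^{2}\right)} \right)
\hat{S}^{\dagger}\,\hat{D}_{r}^{\dagger}.
```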
Yeah, see? So, yeah. Okay, so the question is: S-hat was the representation of a 2n by 2n matrix? Yes, exactly. This is something that I said at the beginning and kept too implicit, I know. So basically, what we showed on this very part of the blackboard, unfortunately gone now, like in the previous lecture, was that when you evolve through a quadratic Hamiltonian... So the action of this operator (there could be a minus or a plus in the exponent, I always get this wrong, but never mind), acting by conjugation on each single canonical variable in the vector, yeah? This could be U, the time evolution operator, exactly, but I call it S-hat because it's a symplectic, incidentally. Acting on each single x and p in this vector, we saw that it gives e^(Omega H) (let me forget about the time; there's a time in there, but I dispense with it, I absorb it into H) times the original vector, and I call that matrix S, because it's symplectic. Why do I do this? Because it's convenient: this S-hat is the metaplectic representation, on an infinite-dimensional Hilbert space, of this 2n by 2n transformation S, through this mapping. Yeah?

Let me state this once more. There's this quadratic Hamiltonian (there's a half in my definition; quadratic Hamiltonians are just defined this way, kind of deceptively), and I call this one S-hat, with the hat, and this one S, without the hat. No, they're not the same. One is the metaplectic mapping through this procedure. The other is a 2n by 2n matrix that says how the canonical coordinates get shuffled by this operation. That's why these operations are called linear: the new canonical operators always end up being linear combinations of the original ones, yeah?

Okay, so that's how that works, and this is a really powerful statement. This is a vector of operators. I act with the same unitary on all the elements, and the effect of this action, which is the time evolution, is what happens in the Heisenberg picture: the components get combined linearly through this other S, which is a 2n by 2n matrix, whereas S-hat is an infinite-dimensional operator, given by some exponential of x's and p's, which is very simple to write down. And see, the link is H: this H is the bridge. This H is the same as that H. It's a 2n by 2n symmetric matrix, please. This is the time to ask questions. Yeah, yeah, I could just put a t there and there; this would also change. No, okay: there, I'm just saying there exists an S that diagonalizes H. This, instead, is a general statement: through the same H, you can define an infinite-dimensional operator that acts as the time evolution operator, and a finite-dimensional matrix that also describes, compactly, this same time evolution. So this is one statement. The other statement is that there's always an S that does the diagonalization, okay? This is something that's generally good to know, like if one is a physicist or working in these areas. Yeah?

So now you see how powerful this is, because through this correspondence, the action of these on the operators, regardless of the fact that there's an anticommutator and all, is just going to be that...
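(In symbols, the statement is the following; my reconstruction, with the caveat, raised in the lecture itself, that the sign in the exponent is convention-dependent:)

```latex
% The same 2n x 2n symmetric matrix H defines both objects:
\hat{S} = e^{-\frac{i}{2}\,\hat{R}^{\mathsf T} H \hat{R}}
\quad\Longrightarrow\quad
\hat{S}^{\dagger}\,\hat{R}\,\hat{S} = S\,\hat{R},
\qquad
S = e^{\Omega H} \in \mathrm{Sp}(2n,\mathbb{R}),
% where \Omega is the symplectic form: [\hat{R}_j, \hat{R}_k] = i\,\Omega_{jk}.
```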
So let me write it down here. S-hat dagger (or transposed, on the matrix side it's the same) acts on everything, and what it gives you is S R. Well, you just have to think a little bit about this one, maybe, but then I can take it out. So without even getting my hands dirty with any of these S-hats, I take S completely out of the equation, out of everything, actually. Its entries are just numbers, not operators; the trace acts at the Hilbert-space level. So this is S times the trace of R R-transposed against the remaining state, times S-transposed. And that remaining operator is a very simple one. Yeah? Okay. So: the action of D does nothing to the second moments, it just gets rid of the shifts, obviously. That's easy. S reshuffles them, acting by congruence in the end.

So here we are defining the second moments of the state, the second-order statistical moments: the expectation values of xp + px, all second-order combinations. And as in a Gaussian distribution, we expect them to describe the whole state. They're also, like, immediately accessible: if you do quadrature measurements, you can reconstruct these parameters. Hence the interest.

So then we would just need to determine these quantities on this tensor product. It's very simple: we can do it separately for each mode, because this is a tensor product of local operators acting on each mode separately, and there are no cross combinations. And here I'm going to just give you the result, because it would take a while; it's very tedious, though also fairly straightforward. What we would need to do is evaluate the expectation values of things like 2 x_j^2, 2 p_j^2, and x_j p_j + p_j x_j on each factor, which we know how to diagonalize in the Fock basis. One would have to go to ladder operators and rewrite these things; it's all in the notes. What do you want? Do you want me to do this now, or shall I give you the result? Yeah. So the result, and you can follow it pretty easily, is that this tensor product at the Hilbert-space level transforms into a direct sum in phase space, where Gaussian states are described. This is a general property which makes these treatments very simple, in a sense. So this is going to be, always: sigma equals S, acting on the direct sum of nu_j times the 2 by 2 identity, for j from 1 to n, times S-transposed, where nu_j = (1 + e^(-xi_j)) / (1 - e^(-xi_j)), with the same xi_j as before. It comes from summing a geometric series and its derivative; there's nothing mysterious there.

So we determined the covariance matrix, and this is the covariance matrix of the most general Gaussian state. Along with the first moments, which are completely separate, it encodes all the degrees of freedom that we need to describe the state. In fact, these nu_j are the symplectic eigenvalues of the covariance matrix, and they are related to the symplectic eigenvalues of the Hamiltonian that defines the state through this relationship (this was beta times them, but never mind). And likewise, well, not likewise, but besides, the other parameters sit in S. Yeah? Okay. So essentially, we decomposed the state, and one thing that we can do now is decide...
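(A small numerical sketch of this result, mine, with made-up xi_j and, for simplicity, S equal to the identity: the nu_j computed from the xi_j come back as the symplectic eigenvalues of sigma, obtained as the moduli of the eigenvalues of i Omega sigma.)

```python
import numpy as np

def symplectic_form(n):
    # Omega for the ordering (x1, p1, ..., xn, pn): [R_j, R_k] = i * Omega_jk
    return np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))

def symplectic_eigenvalues(sigma):
    # The nu_j are the moduli of the eigenvalues of i * Omega * sigma;
    # each appears twice, so keep one representative per pair.
    n = sigma.shape[0] // 2
    ev = np.abs(np.linalg.eigvals(1j * symplectic_form(n) @ sigma))
    return np.sort(ev)[::2]

xi = np.array([0.5, 1.2])                    # rescaled eigenfrequencies (made up)
nu = (1 + np.exp(-xi)) / (1 - np.exp(-xi))   # = coth(xi_j / 2), always >= 1
sigma = np.kron(np.diag(nu), np.eye(2))      # take S = identity for simplicity
print(symplectic_eigenvalues(sigma))         # recovers nu, as it should
print(np.sort(nu))
```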
Okay, so one very interesting fact is that not all covariance matrices are physical covariance matrices, yeah? As I mentioned, this is yet another way to describe Gaussian states, and it's a very efficient one, because these two sets of parameters, the first moments and the second moments, are directly accessible in a lab, so you can describe the state through them. And if you then go to the Wigner-function representation, these parameters are exactly the covariance matrix and first moments of the Gaussian distribution that you get as the Wigner function. Okay?

But, and this is something that I mentioned before when I referred to quantum key distribution, there is of course an uncertainty principle. Now, one could express uncertainty principles in a more sophisticated way, as entropic relations, because in the end they have to do with the entropies of some distributions, in a sense. But the standard, more traditional second-moment approach, which is the first layer of uncertainty that you can ever define, is perfectly fine here, because everything is defined in terms of second moments. And this will also be very useful later on to discuss the entanglement of these states.

Yes, let me define this tau as yet another matrix. And at this stage, the state could be Gaussian or not, okay? No assumption of Gaussianity is needed here. Maybe with a 2 in front. The reason why I put a 2 is this: the product R R-transposed, taken in the outer-product sense, decomposes into anticommutator plus commutator, so twice its expectation is given by the anticommutator plus the commutator of each pair of elements. But the anticommutator part is sigma, and the commutator part is plus i Omega, because the commutator is just a constant number times the identity, and that leaves the trace of rho, which is one. Right? So tau = sigma + i Omega, taking zero first moments. (And to make multi-mode statements, this formalism is definitely the most effective.)

Now, consider an operator which I'll call O: square root of 2 times y-dagger R, where y is a complex vector, and this is a scalar product. And then, what do I want to say? The trace of rho O O-dagger must be positive, right? O O-dagger is positive, rho is positive. Yeah? And that trace is exactly y-dagger tau y: this is the matrix tau. So this statement means that the matrix tau = sigma + i Omega must be positive semi-definite, because it holds for all vectors y. Okay? And so we get the uncertainty principle: sigma + i Omega >= 0. This is really very compact. This is the uncertainty principle, then. That's the most general second-order expression. It's due, in various stages, to Robertson and Schrodinger; it's from the '30s, in slightly different forms.

And let's see whether this is actually the uncertainty principle that we all know, right? So, single-mode case, yeah? First moments are irrelevant, and we saw the role of y a second ago. The covariance matrix is given by: the anticommutator of x with x, which is 2 x squared, so that diagonal entry is 2 delta-x squared; then there's 2 delta-p squared; and then there's the expectation of xp + px off the diagonal. If the state doesn't have xp correlations, which is an assumption that's often made, say this is zero. Then sigma + i Omega is just this, yeah? So now this is a 2 by 2 matrix, and we have to check that it is positive semi-definite.
What do we check? Trace and determinant, okay? Trace is positive, nothing to say about that. So positivity is equivalent to the determinant being non-negative, which is 4 delta-x squared delta-p squared minus 1 >= 0. Yeah. Thank you. What am I doing? Yeah, I could see something was wrong but I never found it. All right. So that's your uncertainty principle, yeah? Delta-x delta-p >= h-bar over 2, once we put h-bar back. Yeah? But the matrix form is the most general one, for any number of degrees of freedom, and we found it with one line, well, two lines. Yeah? We're going to need this big time. Let's fix this idea now, and then we'll use it later on, because time-wise we're doing fairly well. Remember what we used: we used the positivity of rho here, and the canonical commutation relations. So this is a consequence of those two ingredients, and that's always the case for uncertainty principles. Yeah?

Now, there's that too, but first let me just say one more thing: a few final comments on how to manipulate Gaussian states, something a bit more optical slash technological in some sense. So, first off, there are the first moments, and it's not like the first moments can't carry information. For instance, teleportation of continuous variables was first demonstrated with first moments. They are significant, they are relevant, and you can use them, again, for quantum key distribution, of course. So it's not like we don't care about them. But first moments can be adjusted at will with displacement operators, okay? Those D's that we wrote before. But those D's, let me remind you, look like that: they are generated by Hamiltonians linear in the x's and p's. That r-bar there, don't be fooled, is nothing but a vector of numbers, and Omega is just the matrix we know. Okay? Which means that we can rewrite them as a tensor product of local terms, because, you know, operators pertaining to different modes commute, so I can just split the exponential of the sum into a product of exponentials, each one of the form r-bar_j transposed, Omega_1, R_j, in the sense that Omega_1 is the single-mode symplectic form and R_j the quadratures of mode j. So first moments can be adjusted at will through local unitaries. Hence, for instance, no entanglement property can ever depend on first moments, because I can apply a local unitary and set them as I like. So they're a bit less interesting, and we're going to drop them for the rest of the lecture, because I don't think they'll be specifically relevant. That's the first thing. And how you do this in practice: you could apply a classical current, or, and this is also the way you shift these fields, if you have a system in a cavity, like a Fabry-Perot or really any optical cavity, you feed it with a laser to change the first moments. It's a pump, typically with a strong classical field. So that's how you do it in practice. So much for first moments.

Now, second-order operations instead, the ones that implement this symplectic S that is behind these quadratic Hamiltonians: they're a bit richer, and they will let you, for instance, create entanglement and squeezing, as we are about to see. What are those? Yeah, let me start with something formal.
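(Going back to the positivity condition for a moment, here is a minimal check one can code up; my sketch, in the sigma-vacuum-equals-identity convention used above. Since sigma + i Omega is Hermitian, physicality is just a statement about its smallest eigenvalue; note that the squeezed state below saturates the bound, consistent with 4 delta-x squared delta-p squared = 1.)

```python
import numpy as np

def omega(n):
    # symplectic form for the ordering (x1, p1, ..., xn, pn)
    return np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))

def is_physical(sigma, tol=1e-9):
    # sigma + i*Omega is Hermitian, so its eigenvalues are real;
    # the uncertainty principle says the smallest must be >= 0
    n = sigma.shape[0] // 2
    return np.linalg.eigvalsh(sigma + 1j * omega(n)).min() >= -tol

print(is_physical(np.eye(2)))              # vacuum: saturates the bound -> True
print(is_physical(0.5 * np.eye(2)))        # below vacuum noise everywhere -> False
print(is_physical(np.diag([4.0, 0.25])))   # squeezed, det = 1 -> True (pure)
```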
So, singular value decomposition of a symplectic. It sounds like I'm about to say something awful, but trust me on this one. If you do a singular value decomposition of a symplectic matrix, you're going to find S = R_1 Z R_2, and let's comment on this, because in there are all the building blocks that are used in practice. So: R_1, R_2 belong to the intersection between O(2n) and the symplectic group. That is, they are both orthogonal and symplectic. So they preserve, for instance, the trace of sigma: they don't change the energy of the free Hamiltonian. They're called passive operations, and they are in fact realized on an optical table through, essentially, sequences of beam splitters and phase shifters. Phase shifters are like pieces of plastic that change the optical phase with respect to a fixed reference (for ions, you just have to wait a little bit of time, and they will keep rotating in this frame). And beam splitters are just semi-reflective mirrors that mix modes. It's easy to get: there's a simple theorem that you can prove, the very same theorem that you use to prove that you can do any unitary with two-qubit gates and single-qubit phases. Because, as it happens (I'll just mention this; it's not that important, but it is important formally if you work in this subarea), this subgroup, which is the maximal compact subgroup of the symplectic group, is isomorphic to U(n), to the unitaries. It's very easy to see and to prove. And so these operations can all be done with phase shifters, which are simply cos phi, sin phi, two-dimensional rotations on a single mode, and beam splitters. Beam splitters are slightly more sophisticated: they involve two modes, and in our notation, since maybe they'll be relevant later, I'll mention them explicitly. This is what they look like. Well, I could have found a better way, but never mind: I want to be very explicit.
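(Here is a sketch of those two passive building blocks in this notation; my implementation, assuming the mode ordering (x1, p1, x2, p2): the beam splitter applies the same rotation to the x's and to the p's, and both matrices are orthogonal as well as symplectic, i.e. S Omega S^T = Omega.)

```python
import numpy as np

W = np.array([[0.0, 1.0], [-1.0, 0.0]])   # single-mode symplectic form
OMEGA2 = np.kron(np.eye(2), W)            # two modes, ordering (x1, p1, x2, p2)

def phase_shifter(phi):
    # single-mode rotation in the (x, p) plane
    return np.array([[np.cos(phi), np.sin(phi)],
                     [-np.sin(phi), np.cos(phi)]])

def beam_splitter(theta):
    # the same rotation applied to the x's and to the p's of two modes
    R = np.array([[np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])
    return np.kron(R, np.eye(2))

P, S = phase_shifter(0.3), beam_splitter(np.pi / 4)   # pi/4: 50:50 beam splitter
print(np.allclose(P @ W @ P.T, W))                    # symplectic
print(np.allclose(S @ OMEGA2 @ S.T, OMEGA2))          # symplectic
print(np.allclose(S @ S.T, np.eye(4)))                # and orthogonal: passive
```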
So that's a beam splitter: basically, they rotate the x's and p's in the same way for the two modes. And by building this (phase shifters) on all modes and this (beam splitters) within any pair of modes, you can build any of these R_1 or R_2. Then we are left with Z, and Z is a squeezing operation. You can prove that the singular values of a symplectic come in pairs of inverses, z_j and 1/z_j, and this is a squeezing operation in the sense that if you apply this Z on a covariance matrix like this one, which is balanced, you're going to amplify the noise in one quadrature and suppress it in the other. Yeah? So that's what squeezing is, and by a squeezed state we mean a state... Oh, by the way, the vacuum state in this formalism has covariance matrix sigma equal to the identity, and it's a pure Gaussian state: a Gaussian state is pure if and only if the determinant of sigma is one. Okay. So this Z is a squeezer, and out of these pieces you can build up any symplectic. That's how you manipulate Gaussian states.

And one thing which is also very interesting, and very useful in practice, is that tensor products at the Hilbert-space level, as I already mentioned, are always direct sums for Gaussian states. So if you want to take the partial trace, for instance, and you have a big sigma with billions of modes, you just have to take the submatrix pertaining to your modes of interest. Yeah? You do a pinching. And so it means that sometimes, say you want to work out under which regime (not better specified) you get the optimal cooling in optomechanics, say you want to cool down the mechanical mode: if you are in a Gaussian regime, you can just look at the determinant of the covariance matrix of the mechanical mode and minimize it, and this will give you the optimal cooling. Now, I didn't really go through this, because it's quite a lot (though it's not that much, after all), but for a single-mode state, the determinant of the covariance matrix encodes all of the entropies. So if you want the optimal cooling, you want to minimize the determinant. More generally, you could have found the expressions for the entropies based on these parameters that we have: they all depend on the symplectic eigenvalues, because, as we saw, the symplectic eigenvalues of the covariance matrix are in the end also the ones that determine the spectrum of the state. Yeah?

So, okay, based on this introduction, maybe later I can mention a couple more things in order to relate this a little to the phase-space picture of characteristic functions and the Wigner functions that probably most of you will be very familiar with. And then I will go on and give you an example of how to study entanglement in these systems, and then we'll see how much time we've got left towards the end. Thanks a lot for standing this.
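(A final sketch, mine, putting the pieces together under the same conventions: squeeze the two-mode vacuum locally, mix on a 50:50 beam splitter, then "pinch" the 2x2 block of mode 1 as the partial trace. The global determinant stays 1, so the state stays pure, while the reduced determinant exceeds 1, so the reduction is mixed: the squeezers plus a passive operation created entanglement, as claimed above.)

```python
import numpy as np

def beam_splitter(theta):
    R = np.array([[np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])
    return np.kron(R, np.eye(2))          # ordering (x1, p1, x2, p2)

def local_squeezers(z1, z2):
    # Z = diag(z, 1/z) on each mode: singular values in pairs of inverses
    return np.diag([z1, 1.0 / z1, z2, 1.0 / z2])

sigma_vac = np.eye(4)                             # two-mode vacuum: sigma = identity
S = beam_splitter(np.pi / 4) @ local_squeezers(2.0, 0.5)
sigma = S @ sigma_vac @ S.T                       # symplectics act by congruence

reduced = sigma[:2, :2]          # partial trace over mode 2 = pinch mode 1's block
print(np.linalg.det(sigma))      # 1.0: the global state is still pure
print(np.linalg.det(reduced))    # about 4.52 > 1: the reduced state is mixed,
                                 # so the pure global state must be entangled
```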