So the material I was asked to deliver is perhaps a bit foundational for the next-to-last day, but we'll see as we go; if it's too basic, tell me. Let me say what I've prepared and then we'll go through it; it will just be a blackboard lecture. The original programme was to go through the basic formalism of continuous variables, then introduce Gaussian states, and then show something about their entanglement. These are mostly quite old results, 15 or 20 years old, but they are seminal in the sense that the techniques are far-reaching even in present-day research and you can still apply them, so they could be beneficial. Alternatively, in case we are faster than I thought, or you don't want me to go through all the details, there might be a final lecture on quantum estimation: how to calculate the Fisher information, again in Gaussian systems, since that's the formalism I'll build up in the first couple of lectures. We'll see how it goes, and you can pitch in and tell me what you prefer. Now, by continuous variables here, in a winter college (though there's no winter anymore) on quantum optics, we obviously mean a specific class of systems. (This blackboard is so big, it's amazing; I'll start from here then.) Continuous variables are the formalism, the systems, that underlie quantum optics: the electromagnetic degrees of freedom in second quantization, or any standard quantum-mechanics-101 system that behaves like them. Let me just remind myself of the notation I want to use; this is of course something you are all familiar with: [x̂_j, p̂_k] = iδ_{jk}.
There is really an identity operator on the right-hand side, but we call it a c-number, to take on some attitude. I can put the labels j, k because I'm going to talk about a finite, discrete set of pairs of canonical variables, and that is somehow the difference between quantum optics and quantum field theory, although in quantum optics too you sometimes deal with a continuum of modes that interact, for instance when you describe leakage from a cavity or from an atom. These are the degrees of freedom, and they're called continuous variables because, as opposed to discrete variables, where you have stable quantized levels as in an atom, here you have operators with continuous spectra. That's very standard. Something slightly less standard is the way I'm going to organize the formalism: I collect all of these operators in a vector of operators, r̂ = (x̂_1, p̂_1, …, x̂_n, p̂_n)ᵀ, and then I rewrite the CCR, the canonical commutation relations, as mathematicians call them. (Mathematical physicists actually tend to express them in exponentiated, Weyl form, because of domain problems with these unbounded operators, but we don't care about that at this stage.) So I'm going to express the CCR as [r̂, r̂ᵀ] = iΩ. What I mean is the outer product of a column vector and a row vector: you take all possible commutators between pairs of these operators and you end up with a matrix. Technically it is an operator-valued matrix, but every entry is proportional to the identity, so we don't care; we just have to bear it in mind.
And this matrix Ω (I can't lift the board to point at it), for n degrees of freedom, is a direct sum of 2×2 blocks, Ω = ⊕_{j=1}^n (0 1; −1 0): an antisymmetric, non-degenerate matrix. This embodies the canonical commutation relations in our formalism. I like this formalism because it's very powerful and will allow us to prove things rather quickly later on; well, it's notation more than formalism. So these are continuous variables. Why are they relevant? By the end of these two or three weeks you should be familiar with this, and I think you are, but let me repeat: there are many such degrees of freedom in quantum optics and other labs, over which we have reached a very high level of coherent control. In quantum optics these degrees of freedom are light, the electric and magnetic fields; polarization is discrete, so we will assume it constant or disregard it. But trapped ions, for instance, are also described by such a formalism, through their motional degrees of freedom, and so are atoms; then there are atomic ensembles with their pseudo-spins, and, mostly nowadays, a lot of optomechanics, which you heard about yesterday from Moriol, and all the electromechanical systems now entering the quantum regime, as well as some superconducting devices. So there are many systems that abide by this formalism, and there is interest in studying their properties in terms of quantum information, and then in solving quantum information problems for them; I'll be more specific in a bit.
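Since everything later hinges on the matrix Ω, here is a minimal numerical sketch (my own illustration, not part of the lecture) of building it as a direct sum of 2×2 blocks and checking the two properties just stated; the mode count n is an arbitrary choice:

```python
import numpy as np

n = 3                                      # number of modes (arbitrary choice)
omega1 = np.array([[0., 1.],
                   [-1., 0.]])             # single-pair block of the symplectic form
Omega = np.kron(np.eye(n), omega1)         # direct sum of n blocks, (x1,p1,...,xn,pn) ordering

antisymmetric = np.allclose(Omega.T, -Omega)
# Omega @ Omega = -identity implies det(Omega) != 0, i.e. non-degeneracy
nondegenerate = np.allclose(Omega @ Omega, -np.eye(2 * n))
```

The Kronecker product with the identity is just a compact way of writing the block-diagonal direct sum in this xpxp ordering.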
You all know that the CCR algebra cannot be represented in finite dimension. Take, for instance, the trace: in finite dimension the trace of a commutator is zero, but the trace of the identity times a non-zero number can never be zero. That's the first and most obvious incongruence that arises. But the CCR can be represented on the L² Hilbert space you are familiar with, a separable Hilbert space that admits a discrete basis, which we're going to see in a second. So the system is infinite-dimensional, and in quantum information problems (I've been very vague and generic here) the dimension of the Hilbert space is one of the main parameters that make a problem more or less difficult. Solving general problems in quantum information for continuous variables would mean solving them for essentially all systems, because you can embed any finite-dimensional Hilbert space into an infinite-dimensional one. (Up to some caveats with, say, indistinguishable particles, it's basically all systems.) So it is difficult to solve problems in full generality, and there is then a restriction, not of the Hilbert space but of the set of states, which can be adopted: the Gaussian restriction. The first thing we're going to do is introduce what Gaussian states are, and let me comment on this: we're going to see a lot of properties of such states, but why are they relevant in the first place? I'm about to make it formal, but Gaussian states are nothing but the ground and thermal states, as in Gibbs states at some temperature, of Hamiltonians which are bilinear in x̂ and p̂. That's the most general definition; it's not the most standard one, and we'll come to that.
These systems are therefore very common: they arise whenever it is legitimate to restrict the interactions to second order, and that is very common in quantum optics, because it is very difficult to have light interact at higher orders. Strong nonlinearities are something one typically wants; they've been a holy grail in many senses, but they are hard to get for photons, which is one of the reasons why photons are so clean and so effective in, for instance, quantum communication: they tend not to interact much. I'm being vague, and obviously this is not the same in solid-state systems, where the excitations are typically very dirty. So in quantum optics the restriction to bilinear Hamiltonians is often a very good one. In optomechanics too, although there is interest in creating anharmonicities (I'll come to the point of assessing the value of Gaussian states as opposed to non-Gaussian ones), even standard optomechanics can be linearized, and it's a good description. In trapped ions, given the distances between the ions, the Coulomb potential can typically also be linearized, giving a very good approximate description of the oscillation modes of the ions. All those systems are therefore described by Gaussian states, so there's quite a lot of interest. And to a theorist, the appeal of working with Gaussian states is that they're a nice playground where you can solve quite a lot of problems that are difficult to solve in general, and they are relevant in practice, with many implementations. There are a couple of caveats, obviously. One is that you shouldn't extrapolate: this is a very specific set of states with certain properties that we will see, and you shouldn't make claims that go beyond the Gaussian realm if you are starting with Gaussians.
The second caveat is a bit more controversial; it has been discussed quite a lot within the community, and I want to address it at the very beginning and then maybe come back to it later, in order to reinforce it somehow. A standard critique of Gaussian states is a very old one and goes as follows. As we will see, Gaussian states are states with a Gaussian Wigner function. You're all familiar with the Wigner function, so that's probably a part of the lecture I can skip altogether. Now, the marginals of the Wigner function, say for one mode the integral along one direction of phase space, give you the probability of measuring quadrature outcomes along the orthogonal direction: integrate over p and you get the probability distribution over x. That's a very well-known fact; it's the operational interpretation of the Wigner function, if you like. So in all problems where you're restricted to quadrature measurements, which means all Gaussian measurements (we won't go through Gaussian measurements formally, but they all boil down to homodyne detection, which is essentially measuring x's, p's, or combinations thereof), you naturally have a hidden-variable description. It is provided by the Wigner function, which for Gaussian states is a genuine probability distribution: for general quantum states the Wigner function is not a probability distribution, because it can be negative, but not so for Gaussian states. So within this model you'll never be able to prove, or violate, anything non-local.
There is no quantum non-locality here: you can never violate Bell inequalities with Gaussian states and Gaussian measurements alone. That is true, and because of it people have questioned these systems: entirely Gaussian setups can be mimicked with classical systems, and would therefore be useless to quantum technologies. However, there are a few objections to this. One is that it is still interesting to create Gaussian resources, such as entanglement, and then use some non-Gaussian key to unlock them and actually harness the quantum potential, as they say; even simple measurements like on/off detectors would be enough in principle. So there is still an interest in studying these systems and creating resources such as entanglement. The second point, the most fundamental one, is that although Gaussian statistics can be mimicked, there is still the Heisenberg principle, which fundamentally limits any quantum state, including Gaussian ones, as we're going to see, and which would not be there in any classical system. Therefore, say, QKD (quantum key distribution) security proofs remain fundamentally secure; well, up to all the problems that security proofs have, but those are typically independent of the Gaussian character. So there is still an underlying Hilbert space, which a mimicking classical system wouldn't have; that's the other element of interest. The third element, I would say, is related to estimation, and it's one of the reasons why in the eighties people went after squeezed states, other than the standard, sort of inevitable, technological progress.
And you can get squeezing with Gaussian states too. The first observation, or at least rumour has it, was that Braginsky and co-workers pointed out that in order to detect gravitational waves in the big LIGO and Virgo apparatuses it would be advantageous to use squeezed light, where you have a reduction of noise in one quadrature at the expense of the other, because of the uncertainty principle. That claim alone is enough to motivate interest, because you can do squeezing with Gaussian systems, which are easier to control and describe; we're going to see that to some extent. Of course, gravitational waves were then detected without squeezing a few years back, and now they are redoing the experiments with squeezed light at LIGO and Virgo; I can't remember which of the successful experiments used it. But this did come to be, in a sense. So, how do we introduce Gaussian states? Let me define a quadratic Hamiltonian. By the way, the way I'm going to deliver this is that the lectures are essentially drawn from a book that I wrote, which you can find anywhere. I'm following my own book not because I think it's the best (in quantum optics proper there are certainly better books) but because it's mine, obviously, so it's easier. In any case I've put an abridged version of the notes, the relevant parts, on the winter school website, so everything I say will basically be there in quite a lot of detail, maybe more detail than you need; we'll see. And I'll try to follow the notes pretty closely, lest I forget something.
So that's what I'm going to do. This introduction of Gaussian states is not orthodox, in the sense that I don't think you'll find it elsewhere in this form, but it's still the same story. Let me call a quadratic Hamiltonian an operator of the form Ĥ = ½ r̂ᵀH r̂ + r̄ᵀr̂, and let me explain what I mean. This H is a real symmetric 2n×2n matrix (you'll see why I put the ½ there in a second), r̂ is our vector of operators, and the small r̄ is just a bunch of numbers, a real 2n-dimensional vector. That's what I call a quadratic Hamiltonian; it's not strictly quadratic, it's the most general polynomial of order 2 in x̂ and p̂. And I can pick H symmetric because, if you look at the commutation relations, you see that any antisymmetric part would just give a term proportional to the identity, which only shifts the zero of the energy and is irrelevant to any dynamical, or for that matter thermodynamical, consideration; so we don't need it. Then a Gaussian state is ρ_G ∝ e^{−βĤ} for Ĥ quadratic. You see there are 2n² + n parameters in H, plus another 2n in r̄. I added the fake parameter β just because I want to capture, as limiting instances when H is kept finite and β goes to infinity, all the pure Gaussian states: the state is pure if and only if β → ∞, where you go to the ground state. Strictly speaking β is redundant, since it can be reabsorbed into H; it's one additional thing I don't really need. So these are Gaussian states, and the first thing one may want to do is the following.
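As a quick numerical illustration of the "pure if and only if β → ∞" remark (a sketch of my own, not from the lecture), take a single mode with Ĥ = n̂ in a truncated Fock basis and watch the purity Tr ρ² of the Gibbs state approach 1 as β grows; the truncation N is an arbitrary choice, large enough that the cut-off is harmless for the β values used:

```python
import numpy as np

N = 60                                   # Fock-space truncation (arbitrary, large enough)
nvec = np.arange(N)

def thermal_purity(beta):
    """Purity Tr(rho^2) of rho = e^{-beta n}/Z for a single mode with H = n."""
    w = np.exp(-beta * nvec)             # unnormalized Gibbs weights
    p = w / w.sum()                      # Fock-basis populations (rho is diagonal)
    return float(np.sum(p ** 2))

purities = [thermal_purity(b) for b in (0.5, 1.0, 5.0, 20.0)]
# purity grows monotonically with beta and tends to 1 (the ground state)
```

For the untruncated geometric distribution the purity is tanh(β/2), so one can also check the numbers against that closed form.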
Let's say we want to find the spectrum of these states, because a lot of their properties depend on the spectrum, and we'll see that Gaussian states have a very specific spectrum, scaling in a certain way, in this infinite-dimensional Hilbert space. The first thing I want to do is handle the linear term r̄ᵀr̂, which will be instructive in terms of quantum optics. If this is too basic and you are very well acquainted with Weyl operators and shifts, let me know by yelling and I'll fast-forward, and then dwell on something else later on that might be of more interest to you. Otherwise, it's easier to just state it. I define the displacement operator D̂_r = e^{i r̄ᵀΩr̂}. There's a reason why it's convenient to put an Ω there at this stage: it encodes the commutation relations; sometimes I won't use it. Then I just want to show something you probably know very well: if you apply this operator, a Weyl, or displacement, or shift operator, whatever you want to call it, to the vector r̂, you get D̂_r† r̂ D̂_r = r̂ − r̄ (careful, I do want the dagger on the left there). So we want to prove this. How? There are many ways; at least a couple I can think of. A very quick one, which I leave to you, is to take the derivatives of both sides with respect to the components r̄_j, evaluate them at zero, and show that they match, and then show that all higher derivatives match as well.
The second derivatives, and all further ones, will all be zero; everything is smooth, so there is only one solution, and even though these are operators nothing else can happen. That's one way to prove it. This means, which we basically already knew, that a shift in the linear term of the Hamiltonian can be accounted for by applying a displacement operator on the left and right of the Hamiltonian (careful with where the dagger goes, given the convention I wrote). It doesn't matter that Ĥ is a quadratic function of the operators; it's all very well behaved. So we're down to handling the purely quadratic part, and this is where things start to be a bit more interesting, because we get to introduce the symplectic group. We do it this way: instead of using the Hamiltonian to define a state through this thermal trick, let us use it to generate dynamics, in the Heisenberg picture. What you want to do is evolve the canonical operators; here I always forget a sign, so let me fix a convention and stick to it, because I keep switching. We let the vector r̂ evolve under the purely quadratic Hamiltonian Ĥ = ½ r̂ᵀHr̂ and see what happens, because we want to understand the action of this purely quadratic polynomial on the canonical operators that define our whole system, in the sense already discussed. So we write the Heisenberg equation r̂̇ = i[Ĥ, r̂], an equation we're all familiar with.
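As an aside, the displacement relation stated a moment ago can be sanity-checked numerically in a truncated Fock space. This is a sketch of my own (the truncation N, the amplitude, and the convention x̂ = (â + â†)/√2 are my assumptions; the sign of the shift depends on the convention chosen in the exponent), checking that conjugating a quadrature by a displacement shifts it by a constant:

```python
import numpy as np
from scipy.linalg import expm

N = 40                                          # Fock truncation (arbitrary)
a = np.diag(np.sqrt(np.arange(1, N)), 1)        # annihilation operator, a|n> = sqrt(n)|n-1>
x = (a + a.conj().T) / np.sqrt(2)               # quadrature, convention x = (a + a†)/sqrt(2)

alpha = 0.3                                     # real displacement amplitude (small, arbitrary)
D = expm(alpha * a.conj().T - alpha * a)        # displacement operator D(alpha)
shifted = D.conj().T @ x @ D                    # D† x D

# away from the truncation edge this equals x + sqrt(2)*alpha*identity
corner_ok = np.allclose(shifted[:10, :10],
                        (x + np.sqrt(2) * alpha * np.eye(N))[:10, :10],
                        atol=1e-6)
```

Only the low-Fock corner of the matrices is compared, since the truncation necessarily spoils the relation near the cut-off.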
And it is convenient, for once, to write this down in components; this is the only time I'll do it, I think. Writing the Hamiltonian explicitly as Ĥ = ½ Σ_{jk} H_{jk} r̂_j r̂_k (the ½ will be useful in a moment), the Heisenberg equation for a component r̂_l requires all the commutators [r̂_l, r̂_j r̂_k] = r̂_j[r̂_l, r̂_k] + [r̂_l, r̂_j]r̂_k, and these are just given by the symplectic form: [r̂_l, r̂_j] = iΩ_{lj}. Taking all the factors into account, and using the symmetry of H and the antisymmetry of Ω, the two terms turn out to be equal, so the factor of 2 cancels the ½ (that's why I put it there), and going back to vectors the equation reads r̂̇ = ΩH r̂. These are the Heisenberg equations for our operators. And now this can be solved in one line: it's a matrix equation, a matrix equation for operators, these are operators that live inside matrices and vectors, that's the formalism essentially, and the solution is just r̂(t) = e^{ΩHt} r̂(0). An exponential gives us the evolution. So this is the evolution, and now comes the part that is interesting to us.
Not that this isn't interesting too, but the interesting bit is the following. What we did, through this Hamiltonian, is evolve each component of the vector r̂ (which is still written down over there, thanks to this immense blackboard) with the same unitary evolution. When you do this, clearly the commutators between these components cannot be affected: they must be preserved. This gives us a non-trivial fact: the matrix e^{ΩHt} must still preserve the CCR. Plug it into the CCR, and now you start to see why this formalism is so effective: the CCR were satisfied at time zero, and since the matrix is made of c-numbers I can take it out of the commutator, so [e^{ΩHt}r̂, (e^{ΩHt}r̂)ᵀ] = e^{ΩHt}(iΩ)(e^{ΩHt})ᵀ, and this must still equal the initial commutator, iΩ. Getting rid of the i's, we find that this transformation, a 2n-dimensional real matrix, has to preserve Ω. Transformations that preserve Ω are called symplectic; they are exactly the linear canonical transformations of classical Hamiltonian mechanics, so one is very familiar with them. They obviously form a group, and the symplectic group is one of the three classical groups, as they're called: unitary, orthogonal and symplectic. Let me get rid of time, since t can be reabsorbed into H. If you look at the algebraic properties required for a matrix to be symplectic, you find that the group is generated by matrices of the form Ω times a symmetric matrix; this you can derive independently. So this S = e^{ΩH} belongs to what we call Sp(2n,ℝ), the real symplectic group.
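The chain "symmetric H → S = e^{ΩH} → S preserves Ω" can be checked numerically in a few lines; a sketch of my own, with a random symmetric matrix standing in for the Hamiltonian matrix:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(42)
n = 2                                           # number of modes (arbitrary)
omega1 = np.array([[0., 1.], [-1., 0.]])
Omega = np.kron(np.eye(n), omega1)              # symplectic form

A = rng.standard_normal((2 * n, 2 * n))
Hmat = (A + A.T) / 2                            # random symmetric Hamiltonian matrix
S = expm(Omega @ Hmat)                          # generated by Omega times a symmetric matrix

is_symplectic = np.allclose(S @ Omega @ S.T, Omega)
```

Note S itself is generally neither orthogonal nor symmetric; preserving Ω is the defining condition.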
So this group will clearly play a very fundamental role, and with these three lines we've already shown that you can represent it, at least its algebra, on an L² space through quadratic Hamiltonians. This mapping between infinite-dimensional operators, like Ĥ, and the finite-dimensional S is, well, really technical, and I learned these names by heart: what acts on the Hilbert space is actually a representation of the metaplectic group. You can go on Wikipedia and look up the metaplectic group: it has the same algebra, and it's the same story as SU(2) and SO(3), one being the double cover of the other. Technically, what we have is a projective representation of the symplectic group, that is, a representation up to a phase factor that depends on the operation. Whatever; to all our purposes, there is a mapping between 2n-dimensional symplectic transformations S and operators e^{iĤ} acting on the Hilbert space, and that's what we're going to use; it's one of the basic tools in this business. Let me just check I haven't forgotten something. All right. So then the first thing we can do once we have this is, oh yes: in order to find the spectrum of Gaussian states, which we want so that we can write them down in the Hilbert space and calculate entropies and other quantities relevant to quantum information in many different ways, we need one other really basic tool. It's kind of interesting because it's very well known to all of you, but maybe not in its most general form: it's essentially what you do to solve a system of coupled harmonic oscillators. You have all been taught that when you have connected springs you can solve the system by finding the normal modes, right?
Breathing modes, "Egyptian" modes, I've always liked those, and the centre of mass, and so on. But typically you've done that in a very simple case where the momenta were not coupled, and where the masses on those springs all tended to be the same; that's a simpler case. Of course, you can decompose into normal modes more general quadratic Hamiltonians, with any coupling between x̂ and p̂, as long as they are strictly positive. And here is something I completely forgot to mention: Gaussian states are those with H > 0, because otherwise the operator on the Hilbert space is not positive either, you have a Hamiltonian unbounded from below, and there is no well-defined Gibbs state. We need this for Gaussian states; we didn't need it to define the dynamics, and in that story the Hamiltonian matrix need not be positive, and in fact it isn't for squeezing. So normal-mode decomposition is just the following statement (Williamson's theorem): given any strictly positive matrix M > 0, as Stefano knows very well, there exists a symplectic S such that SMSᵀ = D, where D is a direct sum of 2×2 blocks, each proportional to the 2×2 identity: D = ⊕_{j=1}^n d_j 1_2.
And you see why these are called normal modes: when you reduce coupled springs to normal modes you usually just want to decouple them, which is slightly different, but with a further canonical transformation you could also bring the coefficients of x² and p² to the very same value, and then you get what they call the free Hamiltonian in QED: when you quantize the electromagnetic field, the Hamiltonian per mode is x² + p². Nature doesn't give any squeezing for free; those coefficients are always balanced. Right, so it's kind of interesting to see where this statement comes from, and there's a very simple proof, so let me sketch it. You take S = D^{1/2} O M^{−1/2}, where O is an orthogonal matrix to be determined. All the quantities here are well defined: M is strictly positive by hypothesis, and the d_j will all be positive too, because congruence transformations, applying a matrix on the left and its transpose on the right, cannot change the signature as long as S is non-degenerate, which is always the case here; so positivity is preserved and I can take real square roots. (When I write positive I also mean symmetric, hence diagonalizable; I forgot to say that.) And this S obviously does the job: M^{−1/2} acting on the left and right of M gives the identity, the orthogonal matrix does nothing to the identity by definition, and then I multiply by what I need to have there.
The square of D^{1/2} gives me D, right. Then I want to prove that there always exists an O such that this S is symplectic, and here I'll cut it short, because it's slightly more technical, though very easy. The matrix M^{−1/2}ΩM^{−1/2} is, because of the symmetry of M and the antisymmetry of Ω, still skew-symmetric, and it's also non-degenerate; and there's a little theorem that says there always exists an orthogonal O that puts an antisymmetric matrix into a standard form, essentially Ω times some numbers (I can't remember exactly how degeneracy is handled there), and finally D^{1/2} undoes those numbers. So there's always an O that does this, basically; that's what I'm claiming. I won't write down all the equations here, but they're in the notes that I'm going to put up. So we've proved that any strictly positive matrix can always be reduced, through a symplectic transformation, to the normal form that we'll refer to in the remainder of these lectures; and remember, symplectic transformations are unitaries at the Hilbert-space level, that's what we just said. This is very relevant to us because, in the end, everything we're going to need, all the information about the spectrum of the Gaussian state e^{−βĤ}, will be contained (other than in β) in the symplectic eigenvalues of H. The symplectic eigenvalues are these numbers d_j, one per mode, and the reason is of course that the spectrum cannot be affected by S.
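In practice one rarely constructs S explicitly: the symplectic eigenvalues d_j can be read off as the moduli of the (purely imaginary) eigenvalues of ΩM, and they are invariant under symplectic congruence. Here is a numerical sketch of my own, with a random strictly positive matrix as the input:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(7)
n = 3                                           # number of modes (arbitrary)
omega1 = np.array([[0., 1.], [-1., 0.]])
Omega = np.kron(np.eye(n), omega1)

A = rng.standard_normal((2 * n, 2 * n))
M = A @ A.T + 0.1 * np.eye(2 * n)               # random strictly positive symmetric matrix

def symplectic_eigenvalues(M, Omega):
    """Moduli of the eigenvalues of Omega @ M: they come in pairs ±i d_j, one d_j per mode."""
    ev = np.linalg.eigvals(Omega @ M)           # purely imaginary for M > 0
    d = np.sort(np.abs(ev.imag))
    return d[1::2]                              # keep one representative per pair

d0 = symplectic_eigenvalues(M, Omega)

# congruence by a random symplectic S = e^{Omega K}, K symmetric, leaves the d_j invariant
B = rng.standard_normal((2 * n, 2 * n))
S = expm(Omega @ (0.3 * (B + B.T)))             # moderate generator for numerical safety
d1 = symplectic_eigenvalues(S @ M @ S.T, Omega)
```

The invariance follows because ΩSMSᵀ is similar to ΩM whenever SᵀΩS = Ω.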
Okay, so we're stripping the state down to its essentials and decomposing its form. There is then a slightly technical bit in the notes, a bit more boring than this lecture, which is that, unfortunately, the symplectic group is such that not every element of Sp(2n,ℝ) can be written as a single exponential of a generator; sometimes you need more than one. Who cares: it's a fact one must take into account if one wants to be really technical, but I'm not going to be; I mention it and that's it, I don't want to get into any swamp based on that. So, these symplectic eigenvalues: what are they? For a Hamiltonian matrix, they are the eigenfrequencies. You can always write, and maybe better this way, since the inverse of a symplectic is obviously symplectic, this being a group, H = Sᵀ(⊕_{j=1}^n ω_j 1_2)S: any of these Hamiltonian matrices can be written as a symplectic acting left and right on a diagonal matrix with doubly degenerate eigenvalues ω_j. These are called the eigenfrequencies; they're the frequencies of the normal modes. For a general Hamiltonian, determining them is the non-trivial bit, and there are ways to find them which we're not going to care about. Going back to the Hamiltonian operator we're trying to decompose, I can now write it as Ĥ = Ŝ†(½ r̂ᵀDr̂)Ŝ, where the unitary Ŝ is associated with the S without a hat by the metaplectic mapping we described, D is that diagonal matrix, and the remaining part is very simple.
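To connect this back to the coupled springs: for two unit-mass oscillators with a position-position coupling g, the symplectic eigenvalues of the Hamiltonian matrix reproduce the classical normal-mode frequencies √(1±g). A sketch of my own; the coupling value is arbitrary (|g| < 1 keeps the matrix positive):

```python
import numpy as np

g = 0.2                                         # position-position coupling (arbitrary, |g| < 1)
# Hamiltonian matrix for (p1^2 + p2^2 + x1^2 + x2^2)/2 + g*x1*x2, ordering (x1,p1,x2,p2)
H = np.array([[1., 0., g, 0.],
              [0., 1., 0., 0.],
              [g, 0., 1., 0.],
              [0., 0., 0., 1.]])
omega1 = np.array([[0., 1.], [-1., 0.]])
Omega = np.kron(np.eye(2), omega1)

ev = np.linalg.eigvals(Omega @ H)               # eigenvalues ±i*omega_j
freqs = np.sort(np.abs(ev.imag))[1::2]          # one eigenfrequency per mode
# expected: sqrt(1-g) and sqrt(1+g), the out-of-phase / in-phase normal-mode pair
```

The same numbers follow classically from diagonalizing the potential matrix, which is the point of the correspondence.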
So essentially, say we want to know what the entropy of a Gaussian state is: what we need to do is just calculate the entropy of this exponential of decoupled modes. What's the school's policy on breaks? I think one might be useful to everyone; let's resume in five minutes, then.