Let me just announce again the timetable for the mini-project presentations for next week. It was already here up on the screen. On Monday will be teams number 4 and 6. On Tuesday will be teams number 2, 9 and 10. On Wednesday teams number 11, 14 and 3. On Thursday teams 13 and 16. And on Friday teams 17 and 18. Okay, and so now enjoy the last lecture. Thank you for coming back. Let's see, so where we left off last time was almost at the end of lecture two. We considered a collection of dissimilar masses on springs and found that their eigenvectors tell very much the same story, except that they obey a somewhat more generalized orthogonality condition and have a slightly more complicated expression for the identity, the completeness relation. And we also at this point paused to develop the notion of phase space eigenvectors: eigenvectors that live in the 2n-dimensional phase space of the system rather than the n-dimensional configuration space that the original ones lived in. The phase space eigenvector is defined as having its first n elements be the plain old configuration space eigenvector, and the next n elements are not independent of that; they're fully determined at that point. And they satisfy this orthogonality relationship, which looks a bit like this one, but it's nicer. The thing that's nice about it is that up here I have to remember that if I for some reason pick the eigenvector associated with plus 24 hertz and the eigenvector associated with minus 24 hertz, those aren't distinct eigenvectors. They're the same, and so they actually are parallel in configuration space. But if I define the phase space vector that is associated with the 24 hertz resonance and either the plus frequency, that's when sigma is plus one, or the minus frequency, when sigma is minus one, those give me different phase space vectors, because the frequency appears here. It'll change the sign of the bottom half of the vector, and at that point these 2n-dimensional phase space eigenvectors obey an orthogonality condition for all those vectors. They're no longer twins anymore, and they obey this relationship with respect to the symplectic measure S that we defined. So in some sense you don't need to worry about this at all if you've always just been using eigenvectors and remembering that there's a positive frequency and a negative frequency and taking advantage of that freedom to match your initial conditions. That's fine; this is maybe just a curiosity. But it's no longer a curiosity when we introduce damping. So now we have n oscillators with, I guess, n squared springs between them and n squared different damping coefficients. Their equation of motion, which we wrote before, is solved by the following equation here. As I mentioned, as long as K, D and M are Hermitian, which is usually the case, we get eigenvalues at plus frequency and minus frequency, both having positive imaginary parts. So I'm imagining in front of me the complex plane. Here's the frequency axis, the real part; positive frequency and negative frequency both have damping. That's important, because now when I go back to calculate the eigenvectors, remember what step we do then: we go back to this equation being equal to zero, and we plug in one of our eigenvalues here and solve for the corresponding eigenvector. Now, when we had undamped systems and this term wasn't here, it didn't make any difference that we had some negative frequencies.
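[For reference, the eigenvalue problem just described can be written out. This is an editorial sketch in one common convention, x(t) = q_n e^{-i omega_n t}; the lecture's sign conventions may differ.]

```latex
% Damped coupled oscillators, M x'' + D x' + K x = 0, with x(t) = q_n e^{-i omega_n t}:
\left( -\omega_n^{2}\, M \;-\; i\,\omega_n\, D \;+\; K \right) q_n = 0
% With D = 0, omega_n enters only squared, so +omega_n and -omega_n give the
% same q_n; the term linear in omega_n is what breaks that pairing below.
```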
Plug them in here, they get squared, and they give you exactly the same eigenvector as their positive frequency twin. But that is no longer the case when I have this term that's linear in the frequency. So now I will truly get 2n distinct configuration space eigenvectors. And that's a little bit odd if you're thinking of expanding things in terms of normal modes, because your configuration space is only n-dimensional and for some reason you're given 2n truly distinct basis vectors. It's an over-complete basis. And for that reason, since that's a little bit peculiar, at that point it's really nice to say: look, I'm going to forget about trying to squeeze all of this into n-dimensional configuration space. I'm really just going to go into 2n-dimensional phase space where, at least, I don't know that they're going to turn out to be orthogonal, but at least the prospect isn't crazy. The number of dimensions is correct. And if you follow the same kind of algebra that we did before, what you will find is that, okay, let's stick in configuration space for one more minute. We already showed that there are too many of them to be orthogonal, but let me just give you what you would find if you tried to calculate the following thing. We would like this to be the Kronecker delta, but instead what we find is that it is the following. So even if n and m are completely different, the right-hand side in general is not going to be zero, because there's some damping around. Okay, so compare this. And I guess I'm using x here and q as the same thing: the configuration space eigenvectors. So this shows, actually, two things. One is that the eigenvectors aren't orthogonal. And the other is that if I, I won't show this here, but you can show it in a few lines of algebra, if I plug in the positive frequency eigenvector here and the corresponding negative frequency eigenvector here, I won't get one here, which shows that they're not parallel anymore. In the undamped case, it was okay to have these two eigenvectors because there were always two that were parallel. But if I take those positive frequency and negative frequency versions of the same eigenmode and turn on damping, they start to do this. Okay, so what about the phase space versions of damped oscillator normal modes? It turns out that once we go into phase space, all this nice orthogonality and completeness is recovered quite naturally. It doesn't suffer from the presence of damping. And specifically, the result that I want to mention is that if I take those phase space vectors, so the phase space vector associated with the nth frequency, in either the positive or negative frequency version, and I take its dot product with another one: if I just use the symplectic matrix that's relevant for the undamped systems, I don't get orthogonality. I have to slightly tweak that symplectic matrix. So originally it was this. The dissipation matrix has to appear in the upper block. And then we get back the nice orthogonality relation. And of course, in the limit of taking the damping to zero, this D matrix goes to zero and I'm just back at the undamped case and its orthogonality relationship. Any questions? Okay, so just as a quick conclusion to lecture two: the configuration space eigenvectors, the usual eigenvectors, in damped systems aren't orthogonal by any definition. But when you move into this phase space notion of normal modes, they do form an orthogonal set.
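[An editorial aside, not from the lecture: the conclusion above is easy to check numerically. In the sketch below, the weight matrix S is the standard choice from quadratic-eigenvalue-problem theory, with the dissipation matrix in the upper block; it matches the structure described, though the paper's exact convention may differ.]

```python
import numpy as np

# Sketch: n damped coupled oscillators, M x'' + D x' + K x = 0, with
# symmetric M, D, K. Linearize into 2n-dimensional phase space, z = (x, x').
rng = np.random.default_rng(0)
n = 3
A1 = rng.standard_normal((n, n))
M = A1 @ A1.T + n * np.eye(n)            # positive-definite mass matrix
A2 = rng.standard_normal((n, n))
K = A2 @ A2.T + n * np.eye(n)            # positive-definite stiffness matrix
D = 0.2 * np.diag(rng.random(n))         # small symmetric damping matrix

A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, D)]])
lam, Z = np.linalg.eig(A)                # 2n eigenvalues and eigenvectors

# With D != 0 the 2n phase-space eigenvectors are genuinely distinct, yet
# they are orthogonal with respect to a D-modified weight matrix:
S = np.block([[D, M],
              [M, np.zeros((n, n))]])
G = Z.T @ S @ Z                          # transpose, not conjugate transpose
print(np.max(np.abs(G - np.diag(np.diag(G)))))   # ~1e-13: off-diagonals vanish
```

[In the D to 0 limit the upper block vanishes and this reduces to an undamped orthogonality relation, as stated above.]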
There's also a completeness relation that you can write in terms of these vectors, and that's in the American Journal of Physics paper that I mentioned. I will write it here. So those are sort of the details, the nuts and bolts, but I think the conclusion is that the formalism for describing even perfectly linear damped coupled oscillators still has some surprises in it. I said at the start of my last lecture that we were going to show this included non-trivial topology, and that's what we'll tackle next, for lecture three. And I also said that I would give a definition of non-trivial topology. So let me start out by giving a glib definition, and I'll give a more serious or precise one later in the lecture. You're talking about non-trivial topology anytime your physical system has some quantity that, when plotted as a function of some other quantity, like something that you could measure plotted as a function of something that you could control, returns a plot that looks like the Möbius strip. If that happens, then, yeah, topology is probably relevant. That's a topologically non-trivial object, and something that you don't usually see in plots of data or the like. So we'll start lecture number three. And the topic that I really want to talk about here is adiabatic control of oscillators. So what do I mean by that? Well, let me suppose that I have a system of two harmonic oscillators. And I'll say that in this lecture we're going to take all of this complicated stuff about phase space normal modes and all that, and we're going to leave it behind. It was hopefully a pedagogical exercise; we learned something interesting about how to describe these systems. But now we're just going to go back to regular old Hamiltonians, and this is all going to be about classical mechanics. So this is the classical Hamiltonian. And the classical Hamiltonian here is just the matrix whose eigenvalues give me the normal mode frequencies. That's its definition. So if I had two oscillators, say at frequency delta and minus delta. And I don't really want to think about the oscillators as having negative frequencies; this just means with respect to some arbitrarily chosen frequency. So this number here is maybe a megahertz plus one hertz, and this one is a megahertz minus one hertz, or something like that. So then the eigenvalues of this matrix would just be delta and minus delta. These would be the frequencies of my two oscillators. And apparently the variable is chosen such that whenever I change it, the frequency of one oscillator goes up and the frequency of the other oscillator goes down. So these could be two pendula, and I'm making one string shorter and the other longer. Just two oscillators; at this point they're uncoupled. If I want to couple them, I can do so with a constant g, and this is like putting a spring between these two pendula. They begin to talk to each other, and the normal modes will be some linear combination of the original modes. And again, if we wanted to draw a picture, they don't have to be pendula. They could be two springs, where each spring gets a bit stiffer or a bit softer as I vary delta, and the coupling between them is set by g. So the question that I want to turn to is: what happens if this Hamiltonian is made time dependent, by making the parameters that define it time dependent? Well, if these parameters are varied quickly, then this is usually what we mean by parametric driving, and that's been much discussed in the past lectures.
So if these are modulated at twice the natural frequency, you have spontaneous ringing-up effects, things like that. And that is not at all what I want to talk about. What I want to talk about is the opposite limit, where the parameters that define the Hamiltonian are varied very slowly. This is what's sometimes called the adiabatic limit. So adiabatic transport is what happens to the physical state of the system when I change the Hamiltonian very slowly. Questions? So here's a picture. If I have a Hamiltonian that depends on some parameters, here I've just chosen one parameter delta, then for each value of delta it will have some set of eigenvalues, called the spectrum. Here's the spectrum. And as I vary delta, in general these eigenvalues will change; I'm varying some parameter that appears in the matrix. And all I want you to take notice of is that the eigenvalues do something. So if H is a real Hermitian matrix and I only have one parameter to vary, then quite generally the eigenvalues will never bump into each other. We'll prove that more or less later on, but for now just take my word for it. If H is a real Hermitian Hamiltonian that depends on two parameters, and let's set that up here, this is the exact same plot just tilted out into perspective. If it now also varies with a second parameter, which we'll call g in honor of our delta and g over here, then I could also plot the eigenvalues as a function of those two parameters. And that's shown here. And as a function of two parameters, there will be isolated points at which eigenvalues meet, where they touch each other. This is called a degeneracy. And what I want to say about this is that there's a nice result, you can read about it in this book by Kato, which shows that for, let's say, at least Hermitian matrices, the eigenvalue sheets that I've drawn here can always be ordered unambiguously. What this means is that I can always say there's one sheet, there's a lowest sheet, there's a middle sheet, which is the red one, and there's a highest sheet, which is the blue one. And the only places at which that breaks down are these isolated points of degeneracy, where I can't really say which one is higher. But if I excise those points, then there's absolutely no ambiguity about which sheet I'm on. If I draw a point on one of these sheets, you can tell me which sheet it's on. So that's here, these statements made a little bit more precise. And the most important one that I want to make is the one that's in this book by Kato, which is that the functions defining these sheets, the eigenvalues omega as a function of whatever the control parameters are, can always be Taylor expanded. They're always analytic functions: they have a well-behaved Taylor expansion in terms of these variables that only involves integer powers of those variables, around any point at which I can diagonalize the Hamiltonian H. Well, if H is Hermitian, then I can diagonalize it everywhere, and this isn't even an issue. But technically, this is a caveat to that proof that will be relevant later on. So the only thing that eigenvalues of Hermitian matrices can do is either be different and wander around, or every once in a while bump into each other. So without much loss of generality, I could just look at a little region where all of those things are happening: eigenvalues wandering around or occasionally bumping into each other.
And I could just sort of zoom in there and redefine my control parameters to be little g and little delta, the same things just with a shifted origin. And then the most general real Hermitian matrix that I have is just defined in terms of these two parameters in this fashion. It has one degeneracy point, and everywhere else it is just not degenerate. And this matrix is indeed this system here. So what do I want to say about this? This isn't meant to be a lecture on the most boring results of linear algebra, though so far that's sort of what we're covering; this is supposed to be about physics. So why is it relevant? It's relevant because we have a lot of systems in which the equation of motion is such that the time evolution is governed by a Hermitian matrix H: Schrodinger's equation, Hamilton's equations for oscillators. And as we discussed, the system whose Hamiltonian is this is just this thing here. Now, there's a nice proof that if that Hamiltonian becomes time dependent, if that matrix becomes time dependent, then as long as the time dependence is sufficiently slow, the evolution of the actual state of the system will follow the evolution of whatever eigenvalue it starts in. And if that's a little bit too abstract, there's a nice way to illustrate it. If I take this system that's shown in the spectrum on the left, and make the axis not some control parameter but actually time, and at t equals zero prepare an excitation of one of the system's normal modes, like this one here, so that it is vibrating and all the other modes are still, and then as a function of time vary the Hamiltonian, then as long as I do it slowly enough, the excitation will just stay in that eigenvalue. And because we said that eigenvalues can be ordered unambiguously, there's never any confusion about how to follow that eigenvalue: just smooth evolution along whatever kind of curve or surface this is. Now, if my Hamiltonian depends on two parameters and I vary those two parameters in a circle, so g and delta are varied in a circle, then if the system is initially prepared in the blue mode, what I just told you means that all it can do is move around on the blue mode. So this is the point that I wanted to start at. If I have a Hamiltonian defined by some parameters, I prepare an excitation in it, and then I vary those parameters however I want to, but slowly, and at the end turn them back to their original settings, which means to go around some kind of loop and come back, then the system's energy has to be in the same mode that it started in. Questions about this? Let me just say, this should seem pretty boring. There's literally nothing you can do with this. You just go around in a loop and you come back to where you started; what could be more useless? It's like somebody gives you a giant box labeled the identity operator. It doesn't do anything. So if there are people here who know about the Berry phase, let me just say that all the statements I made apply to which mode the energy is in, so to the amplitude of the oscillation. The phases of the oscillation can do some funny things as the result of such an operation, but they're not going to be relevant to what we talk about at all. They don't have anything to do with what we're going to be discussing. So let me just leave that entirely to one side, not show you the little Berry phase animations, and just come back to this statement here.
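[An editorial aside, not from the lecture: a tiny numerical version of the one-knob versus two-knob claim, using the two-mode Hamiltonian above.]

```python
import numpy as np

# Minimal check: for H = [[d, g], [g, -d]] the eigenvalues are
# +/- sqrt(d**2 + g**2). Sweeping the single knob d with g fixed and
# nonzero, the two levels never meet (avoided crossing); they touch
# only at the isolated point d = g = 0, reachable with two knobs.
for g in (0.5, 0.0):
    splittings = []
    for d in np.linspace(-1.0, 1.0, 201):
        H = np.array([[d, g], [g, -d]])
        w = np.linalg.eigvalsh(H)        # real eigenvalues, sorted ascending
        splittings.append(w[1] - w[0])
    print(f"g = {g}: minimum splitting = {min(splittings):.3f}")
# g = 0.5: minimum splitting = 1.000   (gap stays open with one knob)
# g = 0.0: minimum splitting = 0.000   (degeneracy reached at d = 0)
```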
So, everything that I just told you applies whenever the eigenvalues can be Taylor expanded, which is to say whenever the Hamiltonian can be diagonalized. So whenever that happens, closed loop operations are not going to do anything interesting for you. So when can a matrix not be diagonalized? [In response to a question:] That's the Berry phase, yeah: interesting things will happen to the phases as you go around in loops, but I don't want to talk about phase at all. I only want to talk about the energy, or the amplitude. So what is a non-diagonalizable matrix? Let me remind you of a few things that we learn in quantum mechanics. One is that if the matrix is Hermitian, then it can always be diagonalized by what's called a unitary transformation, but we don't have to worry about what it's called. I can take any Hermitian matrix H and sandwich it between two operators, two matrices, well, a matrix and its adjoint, both of which are unitary, and I'll say in a minute what that means, and I will get out a diagonal matrix; that's D. And furthermore, it will have the eigenvalues, some of which may be degenerate, on the diagonal. What is a unitary matrix? It's one that just takes one coordinate system and turns it into a coordinate system which is just rotated by some angle. It's a way to define a new coordinate system which isn't stretched at all, just rotated. Okay, so this is hopefully something that you heard at some point. If we're talking about a matrix that is not Hermitian, then it is still usually possible to diagonalize it, though not by a unitary transformation. So I can take a non-Hermitian matrix, and usually I can perform what's called a similarity transformation. Here P is any invertible matrix, not just unitary, and I get out a diagonal matrix, again whose diagonal elements, some of which may be degenerate, are the eigenvalues. And unlike the unitary transformation, which corresponds to just rotating your coordinates, the similarity transformation corresponds to rotating and stretching your coordinates, which means that the new coordinates are not necessarily orthogonal to each other. So you might end up with coordinates that are definitely not at right angles; it's not just a mistake in my drawing. But let's come back to this "usually". The more general statement is that any Hamiltonian, any matrix, regardless of whether it's Hermitian or not, can be brought using a similarity transform to what's called Jordan normal form. It's named for Camille Jordan, a late 19th century mathematician, not the physicist Jordan. And Jordan normal form is sort of the equivalent of diagonal, which is also a normal form; it's just a nice way to write a matrix that reveals clearly some of its properties. It's sort of the closest you can come to diagonalizing a non-Hermitian matrix. So in general, if you give me any matrix H, I can act on it with a similarity transform and get a Jordan form matrix. And let me draw you what a Jordan matrix is. I need to make it big, because I want to illustrate a few things. First of all, it has all of the eigenvalues on the diagonal. Some of those eigenvalues may be degenerate. So d, I guess, is six-fold degenerate, f is two-fold degenerate, and g, and so on and so forth. So those are all the eigenvalues. Within any degenerate block, so here's a block in which everybody has eigenvalue d. Let me take a step back. When you're not in a degenerate block, all other elements are zero. Whenever you're within a degenerate block, the next diagonal can be either one or zero.
So whether it's one or zero will depend on some things that we'll talk about later. But for example, I might have three of the elements on the next diagonal be one and the other elements be zero. e is not degenerate, so nothing's going to happen there; it's just going to be diagonal. Then I have another degenerate block, and it could have a one here or a zero; let me just put a one. So this is a degenerate block, and you can always choose things such that the elements that have a one on the next diagonal are grouped together. And if you do that, those are called Jordan blocks. They're the only part of the matrix that isn't just regular old diagonal. Yes, everything that I haven't explicitly written is zero. So this is really close to diagonal. In fact, the only case in which it won't be perfectly diagonal is if the matrix is (a) non-Hermitian and (b) has degeneracies. Even if your matrix is non-Hermitian, if it has no degeneracies the rules that I told you will just give you a purely diagonal matrix. So it's these Jordan blocks that make something different happen, something that you don't encounter in usual linear algebra. And this means that there are two types of degeneracies, unlike in Hermitian matrices. The ones that are in a Jordan block, so if they have a one in the off-diagonal, so for example this eigenvalue is degenerate with this eigenvalue and their off-diagonal component is a one, form a kind of degeneracy that we don't encounter in Hermitian systems, and it's called in some of the literature an exceptional point, EP. Then there are situations in which you have two identical eigenvalues but their off-diagonal element is zero, and that's what we know already. That's a conventional degeneracy, and in some of the literature it is called a diabolical point, which I think is a mistake, because there's nothing obviously devilish about these things. They're just conventional degeneracies, so I think of the DP as just the conventional degeneracy point. But if you encounter it in the literature, that's what it's called. And just to clarify some of what's going on here, and to give you some nomenclature: in this made-up matrix that I drew for you, the eigenvalue f is an exceptional point of degeneracy 2, so we would sometimes call that EP2. Let me give you a conventional degeneracy of order 3: here are three g's, and I could have a matrix in which all those off-diagonals happen to be zero. In that case, g is a regular degeneracy point, a triple one, DP3. And d has a quadruple exceptional point and a double regular degeneracy point, so we might call that EP4, DP2. So there's now a family of different kinds of degeneracies. They might be purely DPs, they might be purely EPs, they might have some mixed character. But it's when there are ones that appear here that suddenly what I was telling you before in the PowerPoint slide no longer holds, because the matrix is non-diagonalizable. There's no choice of coordinates in which this thing becomes diagonal. So all that boring stuff about the boringness of the eigenvalue sheets is suddenly gone. They might not be boring, because there are choices of matrix for which it can't be diagonalized. And just to give you a sense of where we're going to go, and we can go back to this picture, it is kind of relevant to what we're talking about: in the vicinity of such an exceptional point, the eigenvalue sheets won't look like what I've drawn up there.
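[An editorial aside: sympy's Jordan decomposition makes the EP/DP distinction concrete; the matrices below are illustrative examples, not from the lecture.]

```python
from sympy import Matrix

# EP2: doubly degenerate eigenvalue 0 with a 1 on the superdiagonal.
ep2 = Matrix([[0, 1],
              [0, 0]])
P, J = ep2.jordan_form()
print(J)   # Matrix([[0, 1], [0, 0]]) -- a genuine 2x2 Jordan block,
           # so this matrix is NOT diagonalizable.

# DP2: the same doubly degenerate eigenvalue, but a conventional
# (diagonalizable) degeneracy; the off-diagonal element is zero.
dp2 = Matrix([[0, 0],
              [0, 0]])
print(dp2.jordan_form()[1])   # Matrix([[0, 0], [0, 0]]) -- already diagonal

# A non-Hermitian matrix WITHOUT degeneracies is still diagonalizable,
# just not by a unitary transformation:
m = Matrix([[1, 5],
            [0, 2]])
print(m.jordan_form()[1])     # Matrix([[1, 0], [0, 2]]) -- plain diagonal
```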
Those sheets will actually turn out to have non-trivial topology, and we will be able to use that non-trivial topology to do adiabatic operations that actually accomplish something, and whose outcome depends on some clearly defined topological quantity of the operation, like whether or not it encloses a special point. So that's the sort of interesting physics that I'm after. In terms of just making points along the way, all of this only occurs when there's dissipation. So unlike the usual situation, in which dissipation just makes your system less and less interesting the more of it there is, here's a situation in which the presence of dissipation is going to give rise to qualitatively new structures in the system. And along the way, I'm going to try to illustrate something that I hope was at least hinted at in the last lecture, which is that dissipationless systems, at least in classical mechanics, are sort of pathological mathematically. They result in you doing things that aren't quite okay, like pretending that a quadratic eigenvalue problem is actually a linear eigenvalue problem. It's equivalent to presuming that the square root of x squared is x. Often that's okay, and if you drag around the plus and minus you might be all right, but it can miss some important things. And I'll explain in a minute what I mean by pathological. Okay, so let me start out with point number one here, which is, well, okay, we're going to start with all of this all at once. So in fact, let me start with point number three. Why would I claim that the dissipationless systems are the pathological ones? In order to answer that question, let me say that if we're going to consider Hamiltonians that are not Hermitian, which is to say, if we're going to consider in our space of all possible systems those that include damping, which seems like a reasonable thing to do, then we're open to any Hamiltonian of the form, well, any two by two Hamiltonian: a, b, c, d. If I'm going to consider only Hermitian matrices, I have to impose some constraints on these numbers. But if I say no, I want anything, Hermitian, non-Hermitian, whatever, then these four numbers can be whatever. So if I ask how many such matrices there are, or in what way I can arrange them so that I can see which matrices are close to each other, which way can I lay them out: I need four complex numbers, which is the same thing as eight real numbers. So these matrices live in a space, and I'll call this one, for reasons that will become clear in a minute, S sub trace, where S means a space, that is equivalent to R8, by which I mean you need eight real numbers to define a matrix. Once you do that, you've defined the matrix; if you go anywhere else, it's a new matrix. So if you like, non-Hermitian matrices are each a point in an eight-dimensional Euclidean space. But we often don't care about the trace of a matrix. The trace of a matrix just takes whatever the spectrum was and determines its center of gravity, so to speak. And if we want to know whether levels are crossing each other or what they're doing, the one thing we don't care about is whether it's all happening down here, with levels crossing, or up here. So let me simplify matters and just always work with traceless matrices. And that means that once I specify a, I have specified d; it's no longer free. So let me just call those guys alpha and minus alpha, but leave the off-diagonal ones as b and c. These are still completely arbitrary complex numbers.
It'll definitely be traceless. And every possible matrix is a point in a space that I'll just call S, which is topologically equivalent to R6: it takes three complex numbers, six real numbers, all independent of each other. So that's R6. So now let me ask about Hermitian matrices, since that's obviously, for physics, a very important subset. What does it mean to have a Hermitian matrix? It means that alpha has to be real. So one constraint is that the imaginary part of alpha be zero, and the other constraint is that c be equal to b star, the complex conjugate. So this is three real constraints. It says that one of the real numbers defining alpha has to be something, and then it says that the two real numbers that define c are specified by the two real numbers that define b. Once I tell you what b is, I know what c is, end of story. So that means that the space of Hermitian matrices is: we started out with six dimensions, we imposed three constraints, so it's R3. And you already know this. Does this seem like it's making a connection to any discussion of Hermitian matrices? Yeah. You may have already learned in quantum mechanics that any two by two traceless Hermitian matrix can be represented as a linear combination of the three Pauli matrices, sigma x, sigma y, sigma z, which is the same thing as saying that the Hamiltonian of a spin one-half is determined by a magnetic field that points somewhere in space and has a magnitude, which is to say, lives somewhere in three-dimensional space. So you've already encountered the fact that two by two Hermitian matrices live in a three-dimensional space. Let's ask about the space of matrices which constitute a conventional degeneracy, so a DP2. Let me argue, and we'll prove this later, that its dimension is zero: there's only a single traceless matrix that has a conventional degeneracy. And let me argue it as follows. If all Hermitian matrices live in a three-dimensional space, which is just the space of magnetic field vectors, well, you know that the only way to take a spin one-half system and make its two levels degenerate is to turn off the magnetic field. So in this space, the only conventional degeneracy point is at the origin of that space, a single point where B equals zero. But if that seems a little bit too weak as reasoning, we'll discuss it later. And let me make the further observation that this diabolic point is contained within the space of Hermitian matrices. The symbol means "is a subset of". The diabolic point is obviously a Hermitian matrix, it has real eigenvalues, and the Hermitian matrices are a subspace of the full space. So hopefully this is okay. But let's now ask about the space of exceptional points. How many matrices can I find whose Jordan normal form would be this? And the answer is an awful lot. If I go to this matrix and write out its characteristic polynomial and solve for its eigenvalues, I would find that the eigenvalues are plus or minus the square root of alpha squared plus bc. So if I want my two eigenvalues to be equal to each other, I want these two possible solutions to be equal. That means the thing under the square root has to be zero. And this is one complex constraint imposed on three complex numbers. So that leaves two complex numbers left over, or four real numbers. So that means that the space of exceptional points is "four dimensional". I put it in quotes because it is not actually topologically equivalent to R4.
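[Collecting the counting in symbols; an editorial summary of the board, with H = ((alpha, b), (c, -alpha)).]

```latex
\begin{align*}
S &\cong \mathbb{R}^{6}, && \alpha,\, b,\, c \in \mathbb{C} \ \text{free (six real numbers)}\\
S_{\mathrm{Herm}} &:\ \operatorname{Im}\alpha = 0,\ \ c = b^{*}, && \text{3 real constraints} \ \Rightarrow\ \dim = 3\\
S_{\mathrm{EP}} &:\ \alpha^{2} + bc = 0, && \text{1 complex constraint} \ \Rightarrow\ \dim = 4\\
S_{\mathrm{DP}} &:\ \alpha = b = c = 0, && \dim = 0 \ \text{(a single point)}
\end{align*}
```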
It isn't simply a Euclidean space, because there are some rather complicated constraints on these three numbers. But nevertheless, the dimensionality is four. So let me now draw a picture of all of this. Here is the six-dimensional space. I can't really draw it, but this is the space of all traceless two by two matrices, and it's supposed to be six-dimensional. The set of conventional degeneracy points is just one matrix; it's one point at the origin. So let me draw it as a point. The space of all Hermitian matrices is three-dimensional, so it's what's called co-dimension three: its dimension is three less than that of the full space, which means that if I try to draw it as a line here, that would be giving it too much credit. There are a lot fewer Hermitian matrices than that. So, at least in this sort of cartoon where I've turned R6 into R3, which is sort of all I can draw, the set of Hermitian matrices is even more tenuous than a line. And I've drawn it going through the origin because of this statement here: the diabolic point is definitely a subset of the set of Hermitian matrices. Now, the exceptional points are a four-dimensional surface, so they have co-dimension two. They're two dimensions less than the full space, and that would be a line in my cartoon where R6 is R3. So the exceptional points are really a line, and they're not necessarily topologically simple, but let me pretend that they are. So here they are, the squiggly line. That's S_EP. And if you notice, I've tried to indicate that it comes arbitrarily close to the DP but doesn't include it, for the following reason. The diabolic point matrix, the only one, is this. And if I consider a matrix that's extremely close to it, arbitrarily close to it, let me consider this matrix, where epsilon is something arbitrarily small. This has a one in its Jordan normal form; you can see that sort of trivially just by pulling out the epsilon, and it's this. So that's why the set of exceptional points comes arbitrarily close to, but does not include, the diabolic point. And it is this curious pathology, the fact that the normal form of a matrix changes discontinuously as you move around in this space, that motivated a very famous mathematician slash mathematical physicist named Vladimir Arnold. You may know him as the A in KAM theory, or you may know him from his rather formal classical mechanics textbook. As a mathematician, he also solved one of Hilbert's original problems, from the famous list posed at the turn of the century. So he has some amazing stuff to his credit. And he decided to address this question of finding a normal form that wouldn't jump as you go from one point in the space to another. That's not our main concern today; the nice thing that he did was to find a systematic way of cataloging the ways in which the spectrum of a matrix can change as you move around in this space. And that gives rise to what's called the Arnold-Jordan normal form, and that's the next topic that we'll discuss. So the question that I want to ask is: suppose I pick a matrix M0 anywhere in that space, and I want to know which of the matrices nearby have a spectrum that's different from M0, a different set of eigenvalues. Now, the answer isn't all of them, for the obvious reason of counting. Let's go back, even, to two by two matrices with trace.
If I consider all two by two matrices, it takes eight real numbers to specify one of those, or let's say four complex numbers. But its spectrum is just two eigenvalues, two complex numbers. So if matrices are specified by four complex numbers and a spectrum is only specified by two, there are a lot more matrices than spectra. So there are an awful lot of matrices with the same spectrum. So the starting point that he uses was that a matrix M1 has the same spectrum, the same eigenvalues, as another matrix M0 if they are related by a similarity transform, this thing that we talked about before. You can read about this in Wikipedia; it's a result from linear algebra. And by the spectrum I just mean the list of eigenvalues, just what you would mean by spectrum in a quantum system. So without proving this, let me just put it up there. And this is the first ingredient in answering this question. The second ingredient is to say that if I only want to ask about matrices M1 that are near to M0, somehow near in that space, then I should only consider similarity transforms that are very close to the identity. That's a reasonable definition of nearness, where this little dS is small. Questions? Okay, so let's then write this out. We want to find matrices that have the same spectrum as some other matrix and that happen to be nearby. So that will happen if M1 can be written as a small similarity transform applied to M0. And since dS is small, we can Taylor expand this inverse matrix just as we would a regular fraction. If I stop at terms that are linear in this small quantity, I just have the identity minus dS. And now I just multiply this out, and I get one term which is the identity times M0 times the identity; that's obviously M0. I get another term that is dS times M0 times the identity, and then I get another term that is the identity times M0 times minus dS. And we know how to simplify those last two terms: this is just the commutator of dS with M0. And this is an important result. In plain English, what this says is that if you give me a matrix that I can write as M0 plus the commutator of M0 with anything, then the matrix that you gave me has the same spectrum as M0. This is the answer to the question here: which matrices near to M0 have the same spectrum? Those that can be written as M0 plus the commutator of M0 with anything. That sounds pretty broad, so let's see how much leeway it gives us. Maybe it will turn out to be every matrix in its vicinity. To do that, let me pick a completely arbitrary dS, because that's the freedom that I'm given. And let me work this out, not in total generality, but for a specific case that's relevant to us; this will just help make it more concrete. Let me consider as M0 a matrix which is a triple exceptional point. You can see I'm not working with traceless matrices here; we're going to take full generality. Here's a triple degeneracy, which happens to be this funny kind of degeneracy that you don't encounter in Hermitian systems. And I want to find out, as I perturb this matrix, as I go away from it a little bit, which perturbations will leave me with the same spectrum, the ones of that form there. So if I write M0 this way, and if I write as my arbitrary perturbation dS, let's just pick really anything, so nine numbers here, which could be complex. Now I just need to calculate the commutator of that arbitrary 3 by 3 matrix with M0.
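[In compact form, the statement just derived; an editorial transcription of the board algebra.]

```latex
\begin{align*}
M_{1} &= (\mathbb{1} + dS)\, M_{0}\, (\mathbb{1} + dS)^{-1}
       \;\approx\; (\mathbb{1} + dS)\, M_{0}\, (\mathbb{1} - dS)\\
      &= M_{0} + dS\,M_{0} - M_{0}\,dS + O(dS^{2})
       \;=\; M_{0} + [\,dS,\; M_{0}\,].
\end{align*}
```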
And to make that a little bit simpler, let me note that I can write M0 as just lambda times the 3 by 3 identity matrix plus another matrix that I'll call N, for a nilpotent matrix; if you know what that means, that's great, and if not, don't worry about it. N is just the matrix that has all the funny stuff in it. And the reason to do this is that everything commutes with the identity. So once we've done this, I have the commutator of dS with lambda times the identity plus the nilpotent matrix, and since dS commutes with the identity, that part is zero. So the only thing I need now is the commutator of dS with N. And all I can really do is just write it out. So what I want to calculate now is

[dS, N] \;=\; \begin{pmatrix} A & B & C \\ a & b & c \\ \alpha & \beta & \gamma \end{pmatrix} \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} \;-\; \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} A & B & C \\ a & b & c \\ \alpha & \beta & \gamma \end{pmatrix}.

And you just do a bit of multiplying, and at the end of the day, of course, that's still a 3 by 3 matrix, and it looks like the following. So let me remind you what this is: if I have a matrix of this form and I add it to my triple exceptional point, the spectrum will remain unchanged. So you could call this a trivial perturbation. Any perturbation that looks like this doesn't change the spectrum. So those are all the ones that we're not interested in; those aren't going to do anything to us. They don't make the levels move around at all. Well, how many such matrices are there? Let's see what's independent. I'm still free to pick a. I'm free to pick alpha. Once I pick a, I'm still free to pick this element by choosing an arbitrary beta. But at that point, this element is set by a combination of this element and this element. And likewise, this one is still free, this one is still free, this one is still free; but these guys are linked, and obviously this alpha is linked with this one, which is minus alpha. So in a little bit more concise notation: I can put anything I want here, here, here, here, here and here. But if I fill up those six, I've constrained everything else: this element has to be zero, this element has to be minus X21, and this element here has to be minus X11 minus X22. So dS is meant to be an arbitrary perturbation to M0. [In response to a question:] That's a good question. Oh, yeah, okay, right: if you only go a little away from the identity, you're still invertible. Okay, thank you, I had never thought of that. So, anything that looks like this is going to be trivial and not of interest. Well, what doesn't look like that? There are different ways of writing them, but one full parameterization is to consider matrices in which these six parameters are all zero, and then I put whatever I want here, any complex number, whatever I want here, and whatever I want here. So x, y, and z are all elements of the set of complex numbers. These matrices will be non-trivial, because obviously they're not of this form: if I put in zeros here, all the trivial ones would then require zeros down here, and I don't have zeros down there. And because, I mean, I don't know that this is particularly deep, but because the Jordan normal form has what looks like a shift matrix as its nilpotent part, multiplying by it just takes all these columns and shifts them over by one, and takes all these rows and shifts them up by one, and that's why that corner element gets left at zero.
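[A quick symbolic check of that multiplication; editorial code, using the row labels from the board.]

```python
from sympy import symbols, Matrix

# Commutator of an arbitrary 3x3 perturbation dS (rows: big ABC, little abc,
# alpha beta gamma) with the nilpotent shift matrix N.
A, B, C, a, b, c, al, be, ga = symbols('A B C a b c alpha beta gamma')
dS = Matrix([[A, B, C],
             [a, b, c],
             [al, be, ga]])
N = Matrix([[0, 1, 0],
            [0, 0, 1],
            [0, 0, 0]])
print(dS * N - N * dS)
# Matrix([[-a, A - b, B - c], [-alpha, a - beta, b - gamma], [0, alpha, beta]])
# Six entries can be chosen freely; the bottom-left entry is forced to zero,
# the (3,2) entry is minus the (2,1) entry, and the trace vanishes, so the
# (3,3) entry is minus the sum of the other two diagonal entries.
```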
I don't know what deeper meaning that has, but in terms of the mechanics, that's what happens. Okay, and so the point is that this is all perturbations to a matrix of that form over there. Now, this was just one special case: we just found the form of perturbations to a matrix that has this triple exceptional point degeneracy. The general rule is in the paper by Arnold, which is from 1971, which tells you something about the pace of developments in linear algebra; this is sort of a recent result. The rule is the following, and it's easiest to just draw a picture, which Arnold actually does. Suppose that I have a giant matrix, and I have a degenerate block here, so all these diagonal elements are degenerate, and within that I have some Jordan blocks: here's one that has ones on the off-diagonal, here's another little one with some ones on the off-diagonal, and the remainders have zeros. Suppose I also have one isolated eigenvalue, not degenerate. And then suppose that I have a big block which is not at all Jordan, so it's just a conventional degeneracy, and then suppose I have another one which is entirely a Jordan block. This exhausts pretty much all the possibilities. So the rule, and this is just heuristics, though it's actually derived in the paper, is that in each Jordan block you will have independent perturbation parameters along the bottom row, just like we found over there. So if this is three by three, there will be three perturbation parameters there, and there will be two perturbation parameters on the bottom row if this is a two by two. Anytime you have a conventional degeneracy, you have a free parameter on the diagonal; in fact, anytime you don't have a degeneracy, you have a free parameter on the diagonal. And you get to take any of these and then extend them in the horizontal direction; so each x is an independent perturbation, and likewise for any x that wasn't part of a Jordan block. So I'm not going to put in all of these, but you go down and, oh shoot, sorry, they never extend outside of the degenerate block; that drawing was on autopilot there. They never extend outside of the degenerate block. So if you have a non-degenerate matrix, it just gives you its own free parameter. If you have a diabolic point, then every element in that block is a free parameter, and if you have a pure exceptional point, you just have the bottom row. So a diabolic point of order n has n squared free parameters to perturb it, an exceptional point of order n has n parameters to perturb it, and a mixed case, like this one here, has somewhere in between. [In response to a question:] Complex, these are all complex. Yes, but they are all going to be different from the diabolic point. So the only way to get DP3 is to have all zeros here, and if you change any of those, you'll get a new spectrum. You know this from the magnetic field: in the three-dimensional space of magnetic fields, the only degeneracy is at zero field, the origin. If I turn on a magnetic field in any direction, I get a spectrum which is different from the degenerate one. Yes, yes, yes. So these enumerate all the ways in which you can go away from the original matrix. And this is why exceptional points are vastly more general, more common, than conventional degeneracies. The reason is, if you think about it in reverse: if you're given some three by three matrix with nine random numbers in it, in order to get to the conventional degeneracy you have to tune all nine of them. You have to have nine knobs in your experiment to get that thing to zero.
Whereas if you're given a matrix with nine arbitrary numbers and you just want to get to the exceptional point, to EP3, you only need to tune three numbers. You only need three knobs in your experiment to get to that degeneracy. And this is why I was able to make the claim that if you only have one knob, you get no degeneracies, while if you have two knobs you do get a degeneracy: there we had further restricted ourselves to real Hermitian matrices, which gives you some extra constraints. Okay, so questions about this? [In response to a question:] No, this is a matrix which cannot be written as EP3 plus the commutator of EP3 with something. So if I add this to EP3, that will give me a matrix whose spectrum is not the EP3 spectrum; and if I add a matrix of this form, I will get a spectrum that is still the EP3 spectrum. That's what I meant by a trivial perturbation, and this is the non-trivial perturbation to EP3. Well, they're the same thing. So if you tell me the... yeah, I think this is assuming that we're talking about an EP3. So here's my space of all matrices. The exceptional points live on a set; the diabolic point lives here. This assumes that I'm not working infinitesimally close to here. So this is the space of exceptional point matrices; if I'm here, I'm only looking in the vicinity of this, so there's no diabolic point in the vicinity. So if my matrix is triply degenerate, it's EP3; if I'm somewhere near this point and my matrix is triply degenerate, that matrix is EP3. So let me make this maybe a little more clear with an even simpler specific case, which is the one that's relevant to some experiments that we've done recently: the 2 by 2 case. If I have a 2 by 2 matrix, then any point with a twofold degeneracy, except for that one unique matrix, will be of this form, and its most general perturbation that will make it not EP2 is... sorry, we've been discussing in terms of trace-full matrices. So I have some eigenvalue, but it has this Jordan normal form, and it has a general perturbation, which is the bottom row. But now if I restrict myself to traceless matrices, then EP2 has the form

\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},

and the bottom-row rule hands me perturbations that have a trace. So my most general perturbation is this. So we're now entitled to ask: what can the spectrum do in the vicinity of this funny kind of degeneracy? We know that if we're restricting ourselves to real Hermitian matrices, all it can do is that cone that I showed you before. But if we're considering the space of all matrices, it's going to be rather broader. So the most general matrix near EP2 has the following form. It's just the EP2 plus the perturbation, and this has two eigenvalues, which are just plus or minus the square root of x, up to the trace part, which we're not going to keep track of: all the trace does is take the whole spectrum and slide it up or down, and that's something we don't really care about. We care about what these eigenvalues are going to do in terms of crossing each other or moving with respect to each other, but we don't care if the whole spectrum moves up and down. So the most general matrix that's near to EP2 is this; its eigenvalue spectrum depends on one complex number x, and it has a really simple dependence. So, in order to see all the things that the spectrum can do as I vary this one control knob that I have, let's take a very baby step and first constrain the control parameter x to be real.
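[A one-line symbolic check of that plus-or-minus-root-x claim, with the trace part dropped as in the lecture; editorial code.]

```python
from sympy import symbols, Matrix

# Spectrum near EP2, with the trace part (which just slides the whole
# spectrum up or down) dropped:
x = symbols('x')
M = Matrix([[0, 1],
            [x, 0]])          # EP2 plus the one non-trivial perturbation x
print(M.eigenvals())          # {-sqrt(x): 1, sqrt(x): 1}
```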
With x real, I can actually plot everything. Here's the axis of x; it's a real number, so it lives on this axis. The resulting eigenvalue is going to have a real part and an imaginary part, which I can plot on this axis here. And we know what the square roots of real numbers look like. As long as the number is positive, over here, I'm just going to get two purely real numbers; the square root of x is plus or minus root x, so here they are. But as soon as x goes negative, the eigenvalues are going to be purely imaginary, plus and minus, but in magnitude they still look like a square root; they look like this. So it's this funny thing that does a sort of half twist as it goes through the origin. Your eigenvalues started out as this and this, and then they got closer and closer as you came together, and then they popped up into the real plane and spread apart again. And this constricted subspace of all possible non-Hermitian systems in the vicinity of EP2 is very closely connected with, maybe even identical to, what sometimes goes by the name PT symmetry breaking; that's what happens as you vary this parameter through the origin. If you've heard that term, hopefully this connects something for you; if not, don't worry about it, we're not going to discuss it. But we're not keeping x to be just real. If we allow it to be what it's allowed to be, which is complex, then x is a point in a complex plane: this is the real part of x, this is the imaginary part of x. And its eigenvalues also live in a complex plane, so this is a four-dimensional space. It's hard for me to draw, but it is something that you probably studied in complex analysis or math methods for physics: the square root function of a complex number. For any input, so if I choose as an input this spot over here, this value of x, I will get two numbers, just as over here. So I could try to draw them for you; let me even try. I would get one complex number, and I would get another complex number. And don't take it too seriously: it's not directly above, it lives in a higher-dimensional space that I can't draw. But there are definitely two numbers, which is to say that for any two by two matrix I get two eigenvalues; here they are. The funny thing about this function, which you don't encounter for Hermitian matrices, is that it has what's called a branch point, a branch cut. Both eigenvalues become equal here, but the sheets, which is to say, if I was to start varying this point here and asking what these eigenvalues do as a function of that, the sheets that each one traces out, are not distinct. So they lack the property from before: they aren't orderable. If I draw a point for you, suppose I say, go up on the sheet right here, you can't say that that's on the same sheet as this point and a different sheet from this point, because there is a path on those sheets that would connect this point to this one. It's not a very nearby path; it's any path that encircles this branch point. And if you had a class on contour integration, this is sort of what makes contour integrals of funny functions a little bit tricky. But this is now relevant to us for the following reason: if I try to trace out how these eigenvalues evolve as I change the Hamiltonian, they're going to do stuff that isn't shown here, and specifically they're going to do something that I think is really cool. So suppose that I take my initial Hamiltonian, this spot here, and I start to tune these parameters, and I consider changing those parameters in some kind of circle.
Now, that's a circle that doesn't happen to enclose the branch point, and so I can say how each of these eigenvalues smoothly evolves along this circle. This one will change in some fashion, might go up and down a little bit, but it returns to itself. The other one might also vary as we go around the circle, might go up and down a little bit, but it returns to itself. And it might seem like that's pretty much all that can happen: if I start with a matrix, it gives me a spectrum, and if I change that matrix but then change it back, it has to have the same spectrum. The matrix doesn't know what I've been up to in the meantime, so the spectrum certainly has to come back to itself. But the point is that this square root function has the following property: it knows that there's more than one way to continuously deform a set of two numbers back to itself. There's the way that I drew, and then there's another way. Suppose that I take this matrix and I now move it in a loop that actually encloses this funny point here. Then, if I plot the solutions of the square root on the surfaces that are defined, this one will wander around and end up at the other one, and this one will wander around and, and this is the crucial point, without ever crossing or being equal to the other one, it will come back to the upper one. And so if I was to fill in, if I was to draw little lines between these two paths, I would find that those lines undergo a half twist. So this is the Möbius strip. What I've told you is that the eigenvalues of a two by two matrix trace out the edges of a Möbius strip if I vary that Hamiltonian in a fashion which encloses the exceptional point; and if I vary the Hamiltonian in a fashion that doesn't enclose the exceptional point, the two eigenvalues trace out a strip which is just a cylinder, a sort of trivial structure. And this is topology, because of the fourth dimension. Okay, so: two real eigenvalues, as a function of one real parameter, can't switch without crossing each other. Let me try to draw this. Two real quantities as a function of another real quantity can't switch without crossing each other: if I want this guy to end up down here and this guy to end up up here, that's it, they have to cross. But if lambda is complex valued, then I have the real part of lambda here and the imaginary part of lambda here, and this one can sort of do a spiral and end up here, and the other one, if that one's going back into the board, can come out and end up here, and they don't cross. It's just a trick of perspective; there are three dimensions, and they can swap. The reason that we don't stop there is that this isn't taking the control parameter and returning it to its original value. To do that, your control parameter needs to have an extra dimension, and that's why we get to four dimensions and have to sort of draw these cartoons. That's why I said don't take this vertical direction, the real part, too seriously.
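[An editorial numerical illustration of the two kinds of loops. The circular path x = center + radius * exp(i t) is an assumed parameterization for illustration, not the experiment's control path.]

```python
import numpy as np

def track_eigenvalues(center, radius, steps=400):
    """Follow the two eigenvalues of [[0, 1], [x, 0]] continuously as the
    control parameter x traverses the circle x = center + radius*exp(i t)."""
    x0 = center + radius
    lams = np.array([np.sqrt(x0 + 0j), -np.sqrt(x0 + 0j)])   # starting branches
    start = lams.copy()
    for t in np.linspace(0.0, 2.0 * np.pi, steps)[1:]:
        x = center + radius * np.exp(1j * t)
        new = np.linalg.eigvals(np.array([[0.0, 1.0], [x, 0.0]]))
        # Keep each branch continuous: match new eigenvalues to the nearest
        # previous ones. This works because the loop stays away from the
        # exceptional point at x = 0, where the branches would collide.
        if (abs(new[0] - lams[0]) + abs(new[1] - lams[1])
                > abs(new[1] - lams[0]) + abs(new[0] - lams[1])):
            new = new[::-1]
        lams = new
    return start, lams

# Loop that does NOT enclose x = 0: each branch returns to itself (cylinder).
print(track_eigenvalues(center=2.0, radius=0.5))
# Loop that DOES enclose x = 0: the two branches come back swapped,
# without ever crossing -- the edge of a Möbius strip.
print(track_eigenvalues(center=0.0, radius=0.5))
```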
So this is topology. I said glibly before that it's topology because it involves a drawing of the Möbius strip; here, actual physical quantities have the interesting properties of the Möbius strip. But more precisely, this is something called monodromy, and this isn't the most rigorous definition, but it's close: if I give you a set, and it's easiest if it's a discrete set, like the spectrum of a finite-dimensional matrix, how can I take this set and smush it around, as a function of some continuous variable, so that I get back the same set? One way, if I have two elements, is just to return each element to itself. But another way to do it is by a swap operation. And monodromy is a pretty well studied property of solutions to polynomials. The reason that the roots of polynomials, or equivalently these radical functions, have non-trivial monodromy is that a polynomial is an ordered set of n numbers, the coefficients: I don't get the same polynomial if I swap these two coefficients. But the roots of an nth order polynomial are an unordered set of n elements. If I just give you a bunch of roots and then I say, oh no, actually this root wasn't root one and this root wasn't root two, it was the opposite, they define the same polynomial, if I define the polynomial in terms of its roots. So root finding is an operation between an ordered set of n elements and an unordered set, and I can swap these guys and be left with the same set up here. Which is to say, if I vary this coefficient around in a circle, I definitely have to get the same spectrum of roots back, but I could do that by swapping these two. So this is well known in a certain branch of algebraic topology, but it manifests itself here in the eigenvalues of 2 by 2 matrices. So the last thing that I wanted to show you is the experiment in which we demonstrate all of this, and in particular its impact on what you can do with adiabatic operations. Our two oscillators aren't actually two distinct material objects. It's one object, a membrane, but we use two of its different normal modes: one of its normal modes has a nodal line as it's drawn up there, and the other one has a horizontal nodal line. And when there's no light shining on this thing, these are really normal modes. They have frequencies omega 1 and omega 2 and damping rates gamma 1 and gamma 2, which are actually arbitrarily small and play no role. So these are really normal modes; they're not coupled, and they're basically not even damped. It's only when we turn on the light in the cavity that they become coupled, through the light that's bouncing around in the cavity and interacting with both of these normal modes; and they become damped, because that radiation pressure can have a time lag with respect to the membrane's motion and introduce a delayed force, which is a lot like damping. So if we solve the optomechanical equations of motion, then the Hamiltonian is no longer the one in the absence of light, the one that's written over here. All I can say is that there are a bunch of extra terms; they involve the cavity susceptibility chi-c, which comes in in all of them. And the key thing to note is that even though chi-c is complex, it appears in this off-diagonal and in this off-diagonal without a complex conjugate. So it's not Hermitian. Another way of saying that: if I was to ask what the normal modes do as a function of time, they ring down; they're damped by the laser field. So it's going to be non-Hermitian, and the eigenvalues aren't going to be real. It might not be obvious that this is the most general form of a matrix that is near an exceptional point, which, remember, is this; it does not look exactly like that matrix up there. But nevertheless they are completely equivalent, and you can either show that formally, or I can just plot the eigenvalues of this matrix as I vary the two parameters that we have in the lab: laser detuning and laser power.
This is a calculation of the mechanical oscillators' two frequencies and their damping rates, here. Again, I'd like to show this all in four dimensions so you could see the funny topology, but that's hard to do, so I've just plotted the real part and the imaginary part separately. The blue cross marks the location of the exceptional point. So basically, laser detuning is playing the role of, if you like, the real part of our control parameter, and laser power the imaginary part; it's not a perfectly linear mapping, it's a nonlinear mapping, but it spans the same space. So when we do the experiment, we set the laser detuning to some value, we set the laser power to some value, and then we drive the mechanics and fit the driven response to get out two resonance frequencies and two damping rates, which are plotted here. Then we step the laser detuning, step the power, raster around, and plot it all up, and it agrees pretty well; in fact, this is a least-squares fit to the data, where the fitting parameters were sort of the usual things that we don't know exactly. So this is a complex spectrum that has the topology of an exceptional point. And I should say that this has been measured at least a dozen times before in other physical systems; anything you can build coupled damped oscillators out of, LC circuits, optical cavities, lots of things, has shown spectra with this topology. But what hadn't been shown is exploiting this to do an adiabatic operation on these topologically non-trivial sheets, to actually run around the edge of a Mobius strip and see what happens. To do that, what we would like to do is take this system and drive one of the modes, so that energy is actually in that eigenmode, and then, while it's ringing (and admittedly decaying down, but while it's ringing a certain amount), vary the laser power and laser detuning in real time. If we do it slowly, the adiabatic theorem would tell us that the system should follow. Here's the case of a control loop that didn't enclose the exceptional point: it should just do the usual adiabatic-theorem thing of coming back to itself. But if the control loop does enclose the exceptional point, then we expect that the system will smoothly follow the surface, with its Mobius-strip-like topology, and come back not at its starting point, even though you brought the parameters around by 2 pi.
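What "the system should follow" means, in the idealized adiabatic picture, is that the state stays pinned to one continuously tracked eigenvector branch as the parameters move. Here is a sketch of that bookkeeping, using the same toy H(z) as before rather than the real optomechanical matrix; z is a hypothetical stand-in for the lab's detuning-and-power plane, not the actual mapping.

```python
import numpy as np

def H(z):
    # same toy family as before; the exceptional point sits at z = 0
    return np.array([[0.0, 1.0], [z, 0.0]], dtype=complex)

def follow_mode(path, v):
    """Idealized adiabatic following: at each small parameter step,
    re-diagonalize and keep whichever eigenvector overlaps the current
    state the most, i.e. stay on the same branch of the surface."""
    v = v / np.linalg.norm(v)
    for z in path:
        _, V = np.linalg.eig(H(z))
        v = V[:, np.argmax(np.abs(V.conj().T @ v))]
    return v

theta = np.linspace(0.0, 2.0 * np.pi, 4000)
w0, V0 = np.linalg.eig(H(1.0))
start = V0[:, np.argmax(w0.real)]   # put the "energy" into one mode at z = 1

# two loops through z = 1: one encloses z = 0, one does not
swapped = follow_mode(np.exp(1j * theta), start)
stayed = follow_mode(1.3 - 0.3 * np.exp(1j * theta), start)

print(abs(start.conj() @ swapped))  # ~0: came back in the OTHER mode
print(abs(start.conj() @ stayed))   # ~1: came back in the same mode
```

Whether the final overlap lands on the same mode or the other one is decided purely by whether the path wound around z = 0, which is the topological quantity the experiment reads out.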
So here's raw data showing that. What we do is monitor the amplitude of motion of each of these two modes, this one and this one, which looks similar but is actually a different mechanical mode. At the start of the experiment we drive this mode to a large amplitude; here it is being driven to a large amplitude. Then, during this grace period, which I think is 20 milliseconds in these experiments, we carry out this loop: the drive is off, the system is just ringing down, but we do this loop. And then what we want to know is, at the end of this loop, where is the energy? In this case the energy is in the mode it started in. But if we repeat it with a loop that does actually go around the exceptional point, it looks very different. During the grace period we're doing this loop, and the data in here doesn't really mean anything, because the eigenmodes are changing. But when we come back to the starting point, we look for motion at this guy's frequency, which is how we distinguish them: this guy's frequency, if you look up there, is about 500 hertz different from this guy's frequency. So it was originally ringing at 788.0 kilohertz, and after we do this loop, the thing is ringing at 788.5 kilohertz: it's in the blue band. And then, as a function of time, well, it's a damped oscillator, so it damps. But what we're interested in is this point here: after the loop is performed, the energy is in the other mode; it's run around the circle and come up. Now, one thing that I will admit is that we have not corralled the venture capitalists and convinced them that this is going to solve some tremendously important issue, in part because this system has loss. So even though I'm doing this cool thing with the energy, I have a lot less energy at the end of the operation than at the beginning; the system is lossy, and energy is lost during this entire process. But the statement that I want to make is that whatever is left has really been transferred very efficiently around this spiral staircase. We can show that that's the case, and that it really hinges on the topology, by defining an energy transfer efficiency: what fraction of the energy that does remain, admittedly a small amount, has been transferred to the other state? If I plot that quantity, the transfer efficiency, as a function of the loop shape: here's a point where we carried out a loop like this, and how much energy was transferred? None; it didn't enclose the exceptional point. Then we just keep repeating, for loops that are gradually deformed until they do actually enclose the exceptional point, and then the energy transfer really goes to 1. And we can do the same thing by starting with a loop that doesn't enclose the exceptional point because it's short in this direction and gradually expanding it, and you can see that the energy transfer again goes to 1. So this is the experiment. It demonstrates an adiabatic operation whose outcome depends on a topological quantity: did I or did I not enclose the exceptional point? One question that I can get is, well, if it's so topological, what's all this? And that's a completely fair question, so let me remind you of the language that we use. This is a statement about adiabatic trajectories. What defines adiabaticity? It means you change your Hamiltonian slowly. Well, how slowly? The usual definition is slow compared to the difference between the eigenvalues, so if the eigenvalues get really close, you'd better be going very slowly. All of this data was taken at a fixed time for going around the loop, which means that loops that happen to go close to the exceptional point will, at some point in the course of the loop, have eigenvalues that come really close together, so they're not adiabatic. Really, the adiabatic limit holds only in this regime, where it's a terrible loop in terms of transferring but the eigenvalues are always well separated, or over here, where you haven't enclosed the EP and you've done it adiabatically. So really the statement is that over here it should be 0, and I haven't made any claim about what happens when you're not adiabatic, when you're sweeping too fast; but of course, as you'd expect, there's a smooth transition. Okay, so that's really the end. I just want to show you the animation of what we've done. On the right-hand side is the eigenvalue surface, so to speak, that I showed you, and if you vary the system around in a circle, the actual energy in the system transits the edge of a Mobius strip; but if you didn't enclose the exceptional point, it transits the edge of a cylinder.
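The efficiency-versus-loop-shape measurement has a clean counterpart in the strict adiabatic limit of the toy model: the transfer efficiency is a sharp step in the loop size, jumping from 0 to 1 exactly when the loop starts to enclose the exceptional point. Here is a self-contained sketch, with the same toy H(z) and mode-following as above; the loop family is an arbitrary choice.

```python
import numpy as np

# Same toy setup as the previous sketches: exceptional point of H(z) at z = 0.
def H(z):
    return np.array([[0.0, 1.0], [z, 0.0]], dtype=complex)

def follow_mode(path, v):
    # idealized adiabatic branch-following, as in the previous sketch
    v = v / np.linalg.norm(v)
    for z in path:
        _, V = np.linalg.eig(H(z))
        v = V[:, np.argmax(np.abs(V.conj().T @ v))]
    return v

theta = np.linspace(0.0, 2.0 * np.pi, 4000)
w0, V0 = np.linalg.eig(H(1.0))
start = V0[:, np.argmax(w0.real)]   # mode the energy starts in
other = V0[:, np.argmin(w0.real)]   # mode it may be transferred to

# Loops anchored at z = 1, with center 1 - r and radius r: they enclose the
# exceptional point at z = 0 exactly when r > 0.5, so growing r mimics
# gradually deforming the control loop until it swallows the EP.
for r in [0.30, 0.45, 0.55, 0.70, 1.00]:
    end = follow_mode(1.0 - r + r * np.exp(1j * theta), start)
    p_other = abs(other.conj() @ end) ** 2
    p_same = abs(start.conj() @ end) ** 2
    print(f"r = {r:.2f}: transfer efficiency = {p_other / (p_same + p_other):.2f}")
```

In the real experiment the step is smoothed out, and that smoothing is exactly the non-adiabaticity near the exceptional point described above; this sketch has no notion of loop speed, so it sits in the perfectly adiabatic limit.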
So there's a lot more to say about all of this; this is really kind of the tip of the iceberg, I would say. One thing to say is that this was all about the structure of eigenvalues around the simplest exceptional point, just a double degeneracy. It turns out, and this is part of our collaboration with Nick Reid, that there is much richer structure around higher-order degeneracies, and it can be very nicely enumerated with some powerful mathematics, so we're exploring that and hoping to also realize it in experiments. The other thing is that I've cheated a tiny bit, because technically the adiabatic theorem does not apply to damped systems, so this smooth evolution on eigenvalue surfaces only holds for whichever eigenvalue happens to be the most damped. Now, it turns out there's a lot of forgiveness in this failure of the adiabatic theorem; there are still a lot of situations in which you will smoothly move around on the eigenvalue surfaces. But it turns out that the failure is just as interesting, and it leads these topological operations that I was showing you here to also be non-reciprocal, in the sense that if you go around in the opposite direction, you don't get the reverse of what you got by going around in the original direction. So that's the end. Thank you very much for sticking around the extra little bit.