Today, we are delighted to have a lecture by Professor Saul Teukolsky on testing the no-hair theorem and area theorem with LIGO. Okay. Hello, everyone. Thank you to the organizers for inviting me to give this talk. So let me start. First of all, two things. First, if you have any questions, please feel free just to interrupt me. If you're online, you can unmute yourself and ask the question, or you can type it in the chat. I can't actually read the chat very easily, so if you see a message in the chat, please, whoever sees it, unmute yourself and ask the question. The second thing is, I want to acknowledge my collaborators on this project. In particular, Matt Giesler, who was a graduate student of mine at Caltech, and who made the key discovery that I'm going to talk about today. And secondly, Max Isi, who was a postdoc and led the data analysis part of this project. Here are some arXiv numbers if you want to look up papers on this work. All right, so let me start with this famous sketch done by Kip Thorne in the 1980s. This was long before LIGO was built, long before we could actually calculate the signals that LIGO might see from two black holes that are orbiting each other, emitting gravitational waves, spiraling in, and merging. You can see the process is divided up into three regimes. The first one is the inspiral. This is when the two black holes are relatively widely separated. They're drawn with these vortex lines around them to represent the fact that the black holes can be spinning and dragging spacetime around with them, and the red is the gravitational waves being emitted. So the system loses energy and spirals into tighter and tighter binding. Underneath is h; that's the letter we use to represent the amplitude of the gravitational wave, the strain in the detector, which is what LIGO measures. And you see that part of the waveform is labeled "known".
And the reason is that when the two black holes are relatively far apart, the speeds of the black holes in their orbit are small compared with the speed of light. So we can use perturbation theory: start with a Newtonian circular orbit, and then make small perturbations of that to describe the general relativistic effects. That can be worked out to various orders in perturbation theory. The second part of the signal that is known is at the very end, after the two black holes have merged. This part is called the ringdown, because it's like a bell: you strike a bell and it oscillates, and you hear the tone of the bell, but the sound waves carry away the energy of the oscillation, and so the bell rings down; it settles down to an equilibrium, quiescent state. In the same way, the black holes will settle down to a single isolated rotating black hole, and we know from general relativity that that has to be a Kerr black hole, described by the Kerr metric. That's also labeled "known", because again we can use perturbation theory, but this time you perturb the Kerr metric. In other words, you take the equilibrium black hole and make small perturbations of it, and then you can calculate the ringdown. In fact, that was the subject of my PhD thesis a long time ago, and I never dreamed that anything I did in general relativity would at any point have any applicability in the real world, but it shows you never know. And then in the middle, the merger. That's drawn as this big mess where the two black holes come together, and you can see the waveform there is some squiggle. There's no small parameter; we can't do perturbation theory. So that's labeled "supercomputer": the idea is that you solve Einstein's equations numerically to do that. Now we fast forward to 2015, to the first LIGO detection, GW150914: that tells you the year, 15, and the date, September 14.
I'm going to walk you through this, but the conclusion is that if you're thinking about alternative theories of gravity, general relativity is actually pretty good. Black holes are sort of the strongest sources that we could imagine arising in the universe today; they're the regions of the highest curvature, and if there were going to be deviations from general relativity, here's where we might expect to see them first. This figure on the right is taken from the LIGO detection paper. One of the detectors is at Hanford in the state of Washington, and you can see this red squiggle waveform; it sort of looks a little bit like Kip's drawing. On the right is the second detector, at Livingston, Louisiana. What the observers have done here is they've taken the red waveform, shifted it in time by the difference in light travel time from the source to Livingston and the source to Hanford, and superposed them. You don't have to do any fancy data analysis; just by eye you can see it's the same signal, right, within the noise. So that's how we know this is a gravitational wave: it's the strain in the detector, and it's not terrestrial. It would be a heck of a coincidence if two people independently slammed car doors in the parking lots of these observatories and produced exactly the same signal. So it's an astrophysical signal; it's gravitational waves. In the middle panel, if you look carefully, you'll see a gray waveform; that's the signal from the top panel, which has been broadened now to include the noise, the effect of the uncertainty in the detector. And the red is basically a numerical relativity waveform: our group, our collaboration, had learned how to solve Einstein's equations on big computers, and this was a model that fit the data very well. Again, you don't have to do fancy data analysis; you can just see by eye that it's a very good reproduction of the signal.
So that's how we know that what the observatory saw was two black holes spiraling together. The last panel in the bottom row is the residual: you take the best-fit numerical relativity waveform and subtract it from the signal above. And this squiggle, if you analyze it statistically, is consistent with noise; there's no discrepancy at the level of the precision of the measurement, which turns out to be about 4%. So if there are deviations from general relativity in an event like this, they're not arbitrarily big; they're small, less than 4%. We already know that from the very first detection. And these limits are only going to get tighter. In other words, as the detector sensitivity improves and we detect more and more events, there are two possibilities: either the constraints on deviations from general relativity get tighter, or we're going to see something, and maybe general relativity is not the correct theory. Now, what the LIGO experimentalists did was a consistency test. You take the early part of the waveform, the inspiral, and you fit it; well, we've now gotten fancy, you'll hear from Alessandra about things like the EOB model, effective one body, and so on. Basically you can think of it as this post-Newtonian perturbation expansion, just made much more powerful. From that you can read off the two masses, m1 and m2, and the two spins, S1 and S2, actually their vectors. You can put those as initial conditions into a numerical relativity simulation, and use numerical relativity to predict what the final mass and spin will be. Now you can go to the ringdown part of the signal and analyze that. From the perturbation theory of the Kerr metric, we know what that late-time waveform should be; in fact, you can see it by eye. If you just look at the last part of the waveform, you'll see some oscillations, but they're damped very rapidly.
It's actually an exponentially damped sinusoidal type of wave. As I'm going to talk about in more detail in a second, these are called quasi-normal modes. A normal mode of oscillation is something you learn about in freshman physics: oscillating masses on a spring. But if there's dissipation, like in our case from gravitational waves carrying off energy, then it's a damped oscillation, and we call that a quasi-normal mode. Right, and so from the frequency and damping time you can also infer the mass and spin. So there's a consistency that has to hold between those two determinations, and to within the measurement errors, general relativity passed this consistency test. But now we can take this idea of a consistency test a step further. So what is the no-hair theorem? The no-hair theorem says, basically, that a stationary black hole is a very simple object. It's described by the Kerr metric, so it only has two parameters, a mass and a spin. In principle you could also have a charge on the black hole, but for an astrophysical black hole any charge is completely negligible; it gets neutralized by the plasma of interstellar space. On the right is this diagram from way back in the early days of black hole studies, showing all these objects going in to make up the black hole. You might enjoy looking at that picture of a 1970s television set. The idea is that if all you encounter is the final black hole, then whether it was formed by throwing a whole bunch of TV sets down to make a big black hole, or whether it was a collapsing star, or baryons or antibaryons or whatever, it's still the same Kerr metric. And Wheeler, who was actually completely bald, coined this phrase: a black hole has no hair. It's a very smooth object. There's nothing discernible from the outside other than the mass M and the spin, or J, the angular momentum.
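A quasi-normal mode is just such a damped sinusoid. As a toy numerical illustration (all numbers here are invented for illustration, not real black-hole values), one can write:

```python
import numpy as np

# Toy damped sinusoid: h(t) = A * exp(-t/tau) * cos(2*pi*f*t + phi).
# In a real ringdown, f and tau encode the black hole's mass and spin;
# the values below are invented placeholders.
f = 250.0        # Hz, oscillation frequency (illustrative)
tau = 4.0e-3     # s, damping (e-folding) time (illustrative)
A, phi = 1.0, 0.0

t = np.linspace(0.0, 0.02, 2001)   # 20 ms of signal, dt = 10 microseconds
envelope = A * np.exp(-t / tau)    # amplitude decays by a factor e every tau seconds
h = envelope * np.cos(2 * np.pi * f * t + phi)
```

From a measured f and tau one can then invert the Kerr quasi-normal-mode relations to infer the mass and spin, which is the consistency test being described.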
And this is not necessarily true in alternative theories. So the question is, can we actually test this idea? About 20 years ago, there was a proposal that you could do this using the quasi-normal modes. Suppose your gravitational wave detector could measure not just that final single quasi-normal decay, but, as the theory says there should be, a superposition of many of them with different frequencies and damping times. Suppose you can measure the two least damped quasi-normal modes. Then you have two frequencies and two damping times, four numbers, but they should depend on only the mass and the magnitude of the spin, two numbers, so you have a test of general relativity. At the time this proposal was made, the authors of the paper estimated what kind of detector sensitivity you would need in order to carry out this test. And the signal-to-noise ratio was very low for the detectors they looked at, like LIGO, so the idea was that we would have to wait a long time; even LIGO at its design sensitivity, which it has not reached yet, would not be good enough. We'd have to wait for things like Cosmic Explorer and so on, or maybe even LISA, the proposed space-based detector, which is going to fly... well, the nominal date is 2034. We'll see. Okay, so this was the idea, and people have elaborated it in the meanwhile. All right, so this is the only slide of equations that I'm going to show. I just want to review for a second this idea of Kerr perturbations, to make more precise what we mean by a quasi-normal mode. If I take the Kerr metric, it has symmetries: it's time independent, and it has an axis of symmetry, the axis the black hole spins about. That means when I take the perturbation equations, I should be able to do separation of variables, at least to separate out the time dependence. So you can see here I'm working with a certain quantity.
You don't have to know the details; it's a certain component of the Weyl curvature tensor, called psi_4. You can see in the third equation that psi_4 is related to two time derivatives of h, the actual strain. So the perturbation theory is done with this quantity, and then you can predict what h, the waveform, is. All right, if you look at the expression for psi_4, you'll see the e to the minus i omega t: there's the time dependence with the frequency omega, in a superposition over omega. And you can see the e to the i m phi; that's from the symmetry in phi. But you'd have no reason to expect to be able to separate variables in r and theta, or whatever coordinates you're using for the remaining spatial dependence, since there's no particular symmetry there. The surprise, the miracle if you like, of the Kerr metric is that in fact you can separate those variables in a suitable coordinate system. Okay, and so I've called the angular functions S and the radial functions R. If this were the Schwarzschild metric, which is spherically symmetric, we would expect spherical harmonics, basically Y_lm's; for the mathematical types, it's actually spin-weighted spherical harmonics, but we're going to ignore that. You can just think of them as ordinary spherical harmonics; it's not important for this. The radial part is like quantum mechanics: the radial part, call it a wave function, satisfies a radial equation. We're using a coordinate r*, which moves the horizon from a finite radius off to minus infinity, so the domain is minus infinity to plus infinity. This parameter a is the spin parameter of the Kerr metric. In Schwarzschild, the equation just looks like the barrier penetration or scattering problems you do in quantum mechanics; in the Kerr metric it's a little more complicated. The omega can be complex and the V can be complex, but we're not going to worry about this.
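To make the separated form concrete, here is a schematic reconstruction of what such a slide typically shows (signs, prefactors, and conventions on the actual slide may differ):

```latex
% Mode decomposition of the perturbation variable (schematic):
\psi_4 \sim \sum_{\ell, m} \int d\omega \;
  e^{-i\omega t}\, e^{i m \phi}\,
  S_{\ell m}(\theta; a\omega)\, R_{\ell m}(r; \omega)

% Relation to the strain far from the source:
\psi_4 \approx \frac{\partial^2}{\partial t^2}\bigl(h_+ - i h_\times\bigr)

% Radial equation in the tortoise coordinate r_*, with the horizon pushed to
% r_* \to -\infty (a barrier-penetration form, as in quantum mechanics):
\frac{d^2 R}{d r_*^2} + \bigl[\omega^2 - V(r_*;\omega)\bigr] R = 0
```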
So the key thing is the late-time behavior of this wave function. If you look at late times there, you can see that the radial function just involves a damped e to the minus i omega (t minus r*): an outgoing wave. It's damped because the omega is complex in general: it has a real part, which is oscillatory, and an imaginary part, which gives a damped exponential. So we can talk about modes. For each l and m that you're doing a superposition of, we can talk about h_lm as a mode of the waveform. And then the n is like a radial quantum number; it comes from the fact that the radial equation has discrete solutions. The n greater than zero modes are called overtones. This is by analogy with overtones on musical instruments and things like that. It's actually a terrible name, as we'll see in a minute, and it set back the field for 20 years; I'll explain that in a second. Okay, so omega is complex; it has a real part and an imaginary part, and the imaginary part is basically related to the damping time. So for each l and m we have a superposition of these damped sinusoids, and we arrange this n, this overtone index, in order of the damping time: n equal to zero, the fundamental mode, is the least damped mode; n equal to one is the next least damped; n equal to two the next; and so on. The no-hair theorem says these frequencies and damping times depend only on the final mass and spin. Now, what happened in the field, as I'll explain, was that these overtones tended to be ignored. You find papers calling them sub-dominant and saying we're going to ignore them, and things like that. Remember, when a physicist hears the word overtone, they think of a musical overtone, where you can play the same note on a violin and a trumpet.
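The superposition just described, with complex frequencies ordered by damping time, can be sketched in a few lines; the frequencies, damping times, and amplitudes below are made-up placeholders, not real Kerr quasi-normal-mode values.

```python
import numpy as np

# Schematic (2,2)-mode ringdown model: a superposition of overtones,
#   h_22(t) = sum_n A_n * exp(-i * omega_n * t),  omega_n = 2*pi*f_n - i/tau_n,
# with n ordered by damping time: n = 0 (the fundamental) is least damped.
# All numbers below are illustrative placeholders.
f = np.array([250.0, 245.0, 240.0])        # Hz (illustrative)
tau = np.array([4.0e-3, 1.3e-3, 0.8e-3])   # s, decreasing with n (illustrative)
A = np.array([1.0 + 0.0j, 2.5 + 1.0j, 3.0 - 0.5j])  # complex amplitudes

omega = 2 * np.pi * f - 1j / tau           # complex QNM frequencies

def h22(t):
    """Ringdown model: sum over overtones of damped complex exponentials."""
    t = np.atleast_1d(t)
    return np.sum(A[None, :] * np.exp(-1j * omega[None, :] * t[:, None]), axis=1)
```

Note that the higher overtones here start with larger amplitudes but die away faster, which is the point the lecture makes below.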
It's the same fundamental pitch, but the overtones are different, and that's how you can tell the difference between a trumpet playing a particular note and a violin playing the same note. The overtones are sub-dominant; they're not as powerful in amplitude. And as we'll see, that's not true for black holes. Okay, so the first people to notice the importance of overtones were in fact two of the other people who are going to be lecturing this week, Alessandra Buonanno and Frans Pretorius, working with Greg Cook. Now, 2007 was like the stone age as far as numerical relativity was concerned. Frans Pretorius had done the first successful inspiral calculation, just one or two orbits of black holes, only in 2005. What they found in this paper was the following: they simulated an equal-mass black hole inspiral. They took the fundamental mode; the lowest detectable mode, since gravitational waves are quadrupolar, is l equal to 2, and because of the inspiral it's l equal to 2, m equal to 2. So n equal to zero is the fundamental, and they added two or three overtones. Superposing those gave a good fit to the waveform, and not only well after the merger, as you might think. Close to the merger everything is very nonlinear, remember Kip's picture with all the squiggles, and you might think you'd have to wait until the nonlinearities have died away before you saw the linear perturbation theory. But they found that even closer to the peak, it looked like the overtones represented the waveform reasonably accurately. For the experts: you have to be careful whether I'm talking about the peak of this psi_4 quantity or the peak of h, but anyway. Now, when Alessandra developed the effective-one-body picture of waveforms, which was used for the inspiral part of the waveform, she and her collaborators wanted to cover the merger and ringdown as well.
So they had this idea of taking the quasi-normal modes, which describe the late-time behavior, with the overtones, but now they had the problem of matching them to the inspiral. They wanted to make a model, so they just sort of made it match: they distorted the late part of the waveform in a certain sense to make it fit, and they introduced some pseudo-quasi-normal modes; the details are not important. The important thing was that the rest of the world, the community of gravitational-wave people, missed the significance of this property of quasi-normal modes: that they seem to be relevant early in the ringdown. The idea took hold in the community that quasi-normal modes are good for modeling the late waveform, where they work pretty well, but that h was still actually nonlinear at the peak of the amplitude; after all, the peak is expected to correlate with the nonlinear phase of the merger. So this is again from one of the very early LIGO papers on testing general relativity, from 2016. This is a plot where the y axis is the decay time and the x axis is the frequency, and they're attempting this consistency test. IMR stands for inspiral-merger-ringdown: you fit the inspiral, get the m1, m2, S1, S2, use numerical relativity to get the final mass and spin, and predict omega_220, the fundamental quasi-normal mode. That's the solid black contour; it's not a dot because there's noise and the experiment isn't perfect. Then they tried to fit a single damped sinusoid, fitting the frequency and decay time. And they found it was sensitive to when they tried to start the fit. If they did the fit at the peak, they got this green contour; you can see it's completely off. It just doesn't fit at all. Then they waited three milliseconds after the peak of the amplitude.
They got a much better fit; at least now, if you look at that sort of triangle-shaped contour, it encloses the IMR measurement, but you can see the center of that contour is biased relative to the true value. If they go to five milliseconds, it's much more centered, but now the signal is getting damped; it's much weaker relative to the noise, so the contour has grown bigger and the accuracy of the measurement is lower. The takeaway message from this paper was that trying to detect a quasi-normal mode, even a single one, was sensitive to when you started after the peak. And this discrepancy between the green fit, which was way off on the left, and the true value was ascribed to nonlinearity. So many papers appeared about, you know, when does ringdown start, when is linear perturbation theory a good model to use? Okay, so at what point does a superposition of quasi-normal modes provide a correct, accurate description of the ringdown? The answer is: actually at the peak, by including the overtones. In fact, the quasi-normal mode superposition gives a good representation of the waveform. This is what Matt Giesler found as part of his thesis. So let me show you what the argument is. At the top, I've just reproduced the expansion; I fixed l and m to be 2, 2 just for simplicity. So now we just have a superposition of these damped sinusoids over n, the overtones. What Matt did was take a numerical relativity waveform, so this is not yet real data. This means we have a very accurate solution of Einstein's equations; in principle, by spending enough money on your computer time, you can make this as accurate as you like. Then he did a least-squares fit: he started with a single n equal to zero mode and fitted it to the numerical relativity waveform, using just a standard least-squares fit, at various start times, with t equal to zero here on the bottom scale.
That means you're starting the fit at the peak, and positive means you're starting later into the ringdown. On the y axis is the mismatch; the details are not relevant, it's basically one minus the overlap integral, a normalized dot product between the waveform and the model, the superposition of overtones. A good fit means a very small mismatch, down around 10 to the minus 6 or 7. The blue curve is fitting just the fundamental; n equal to one is adding one overtone, and so on. You see that as you add more and more overtones, the fit gets better and better: the mismatch goes down. And also the optimum is reached right around zero, right around the peak. So the takeaway is: if you believe that this linear superposition is a good fit, it's suggesting that whatever nonlinearities are in the waveform, perhaps they're quite small, at least as seen in the waveform. Maybe there's strong gravity going on right where the black holes are merging, but by the time the signal gets to LIGO, in other words escapes from the strong gravity, the nonlinearity is maybe not observable anymore. At least that's the hypothesis that we can examine. All right, and here's another way of looking at it. How do we know the numerical relativity waveform is sufficiently accurate? You run your simulation at two different resolutions, grid spacings, and take the difference; that gives you an estimate of the error. So on the bottom, by eye, you can see the difference between the numerical relativity waveform and the superposition of quasi-normal modes. Their subtraction gives you the residual, and you can see that the residual is above the blue curve, where the blue curve is the difference between the numerical relativity resolutions. So that's a bound: the numerical error is below that blue curve. And so the residual is well determined, down to about 10 to the minus 4.
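A stripped-down sketch of this kind of fit (synthetic stand-in data with invented frequencies; not the actual analysis code): with the quasi-normal frequencies held fixed, the complex amplitudes enter linearly, so ordinary linear least squares recovers them, and the mismatch measures the quality of the fit.

```python
import numpy as np

# Fit complex overtone amplitudes by linear least squares, then compute the
# mismatch with the data. Frequencies and the fake "NR" signal are invented.
rng = np.random.default_rng(0)
omega = 2 * np.pi * np.array([250.0, 245.0]) - 1j / np.array([4e-3, 1.3e-3])
A_true = np.array([1.0 + 0.5j, 3.0 - 1.0j])

t = np.linspace(0.0, 0.02, 2000)
basis = np.exp(-1j * np.outer(t, omega))              # one column per overtone
data = basis @ A_true + 1e-8 * rng.standard_normal(t.size)  # stand-in for NR

def mismatch(a, b):
    """1 minus the normalized overlap between two complex time series."""
    inner = np.vdot(a, b)
    return 1.0 - abs(inner) / np.sqrt(abs(np.vdot(a, a)) * abs(np.vdot(b, b)))

A_fit, *_ = np.linalg.lstsq(basis, data, rcond=None)  # linear in the amplitudes
model = basis @ A_fit
M = mismatch(data, model)
```

Because the model is exact here up to tiny noise, the recovered amplitudes match the true ones and the mismatch is tiny; in the real study this fit is repeated for a range of start times t0.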
Here's yet another way to see what's going on. You do your fit with seven overtones; that gives you the amplitudes at t equal to zero for each overtone, because that's what you fit for. Now you plot each overtone with its exponentially damped amplitude. The blue curve is the fundamental, if you like, so it starts at some amplitude. The units here are in M; these are gravitational theorists' units, so basically, if you put the c's and G's back in, it's the light travel time across the black hole. For the 60 solar mass total mass here, 10 M is about three milliseconds. So you can see that this fundamental only becomes dominant about three milliseconds after the peak, which is exactly when LIGO was able to see it clearly; it's in the right place, and that explains why that is true. At earlier times, the overtones, far from being sub-dominant, actually have a bigger amplitude at t equal to zero. It's just that their decay times are shorter; they damp away more rapidly. But if you use them early in the waveform, they give you a good representation. So the early part is actually dominated by the overtones. Okay, so when this work got published, a very interesting thing happened. I know many of you are students, and you may still have a very idealistic view of science: science is supposed to be this completely objective enterprise, where you remove all human bias as much as you can, and you make a hypothesis and do an experiment and test it, and all this stuff that you learn in high school science. Not true. Okay, well, it's partly true. It's better than many other fields, but science is a human enterprise, so a lot of the time human emotions play a role in deciding whether to accept things or not. When this work came out, it actually was quite controversial.
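As an aside, the statement above that 10 M is about three milliseconds for a 60 solar mass system is just the geometric-units conversion GM/c^3:

```python
# Geometric units (G = c = 1): a mass M corresponds to a time G*M/c**3.
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m / s
Msun = 1.989e30    # kg

t_sun = G * Msun / c**3                   # seconds per solar mass (~4.9 microseconds)
M_total = 60.0                            # total mass in solar masses (the example above)
t_10M_ms = 10.0 * M_total * t_sun * 1e3   # "10 M" expressed in milliseconds
```

This gives about 2.96 ms, i.e. roughly three milliseconds, as quoted.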
Several people just couldn't accept this idea that a linear superposition of quasi-normal modes gave a good representation of the gravitational wave signal so early in time. So I'm going to go through some of the objections that were raised and why they're not important. The first thing was that some mathematical types knew that if you take a superposition of damped sinusoids, where you're allowed to fiddle with the frequencies and damping times, it's not a complete orthonormal set; even if you try to make it orthogonal, it's not. In fact, it's over-complete in some mathematical sense. Right. That's irrelevant; it's a red herring. For those of you who are not native English speakers, "red herring" is one of those obscure English expressions; I don't even know where it actually originated, but it means a distraction, a false trail. We don't need a complete set to fit something. Take any wave equation; remember I wrote down that thing that looked like a barrier potential for the radial wave equation. Basically, for any potential that satisfies some fall-off conditions at infinity, if you put on the appropriate boundary conditions, there are modes that correspond to quasi-normal modes. You have a Green's function solution, there's an asymptotic expansion, and it contains the quasi-normal modes. The details of exactly what the frequencies and damping times are depend on the potential, but the phenomenon of quasi-normal modes for a potential equation is ubiquitous. Right. So we have this asymptotic expansion; we know the quasi-normal modes are there, and all we do now is fit for them. We're using the existence of the underlying asymptotic expansion to justify the form of the basis functions that we're trying to fit. We don't need completeness. The resulting fit is not the same as making an expansion in a complete orthonormal set.
All right, then the next problem, which was raised by some mathematical physicists... Yeah, I have a question. So when you say fit, I just want to understand the following. Given M and J, one can check whether the linear sum of the quasi-normal modes is very close to the full nonlinear solution. Right, "close" in a least-squares sense. Yes. Yeah. So I'm not sure I understand; this is a statement that does not require a fit. For a given M and J, you know the quasi-normal modes. No, you don't know what amplitudes to use. Right, there's a complex amplitude in front of each of these modes. I see. Okay, okay, you have the modes, but you don't know... Yeah, how much of each mode you need to represent the waveform. So that's what the fit gives you. Right. And with a fundamental and seven overtones, that's eight modes, so you have 16 real numbers that you're fitting for. Okay, so some people were worried; you know the old saying, give me 16 parameters and I can fit an elephant. The idea was that, sure, if you give me 16 free parameters, I can model even the nonlinear part: that you're using the linear superposition to model the nonlinear piece of the waveform. That was the worry, and there is some legitimate concern about that, but it has nothing to do with completeness. Okay, the next... Yeah, go ahead. So, is there a reason not to include higher l modes, higher spherical harmonics? Yeah, so in fact, if you want to do this more carefully, you have to; I glossed over the distinction between spherical harmonics and the spheroidal harmonics that come from the perturbation theory. LIGO needs a complete basis, because they look at things on the sky, so they use a spherical harmonic mode decomposition. And even going from spheroidal l's to spherical l's, it turns out there's a mixing: spheroidal l equal to 2 corresponds to a superposition of spherical l equal to 2 and 3.
If you do this a little more carefully, it turns out the 3,2 fundamental mode is the most important next one. And if you had a high signal-to-noise signal, or if you wanted to fit an accurate numerical relativity waveform carefully, you would include some higher modes, but I'm simplifying things here a little bit. Yeah, that's a good question. Okay, so the next... Yeah, any more questions? I don't see any. Okay. So then, the next problem that was raised was that a mathematical analysis of these quasi-normal modes suggested that they're unstable. In other words, if you make a small perturbation, for example to the potential, such as you might have from the nonlinearities or whatever, you would change the frequencies and damping times by a large amount. There's a whole analysis; I don't want to get into the details, but you can look it up if you like. The mathematical technique that's used is called the pseudospectrum, because it's not a self-adjoint problem, because of the dissipation. So you can't treat it by standard perturbation theory; you're perturbing the perturbation theory, and you have to use the perturbation theory appropriate for a dissipative system. And so my answer to that... well, sorry, let me say something first. My answer to that was: mathematics is an experimental science, from the point of view of the physicist. Namely, suppose the mathematics you're using predicts something that's in contradiction with the experiment. And what do I mean by experiment here? I mean the numerical relativity solution of Einstein's equations. It's a full nonlinear solution; it's an experiment, if you like, except instead of using real black holes we use computer-generated black holes. And that experiment tells us that we can fit the overtones. So if you have a mathematical theory that says you shouldn't be able to do that...
Then there's something wrong with the premises of your theorem; they don't apply. Now, this is not a popular view among mathematical physicists, because for many years they reigned supreme: if a mathematical type proved a theorem, that was it. That's not true now, because people like us can do numerical solutions to very high accuracy; we're the ultimate authorities. Okay, and if your mathematical theorem doesn't apply to my simulation, I believe the onus is on you, not on me, to figure out what's wrong. You can see again this is not a particularly popular view, and I'm dramatizing it a little bit because I want to entertain you, but anyway, I think there is some truth in this. There was a paper in the last year about the elephant and the flea; the idea was that the flea is the small perturbation, and so on. And they claimed that this instability was there. We talked to them and told them there's something wrong with what you're doing. They have in fact since published a more recent paper, which came out just a little over a month ago, saying that what they found may apply extremely late in the ringdown, way beyond anything to do with observations. So basically it's not relevant. Okay. Now, another way of addressing this question: this overfitting worry actually has a germ of truth in it. Maybe there are nonlinearities, and when you do the least-squares fit, you again do an overlap integral, and maybe your basis functions are just representing the nonlinearity. They're fitting the nonlinearity with a good enough approximation that you're misleading yourself into saying that the overtones are there. So how would you test that idea? Well, what you can do is take the frequencies and damping times, which you know based on the final mass and spin, and perturb them away from their true values.
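This perturb-and-refit check can be sketched numerically (entirely synthetic: invented frequencies and a noiseless fake signal standing in for the NR waveform). The point is that the best-fit residual with the true frequencies should beat every nearby perturbed set.

```python
import numpy as np

# Fit a fake "NR" ringdown with the true QNM frequencies, then with randomly
# perturbed frequencies/damping times (up to 20%), and compare fit residuals.
# If overtones were merely absorbing nonlinearities, some perturbed set should
# do better; instead the true values should be a local optimum.
rng = np.random.default_rng(1)
f_true = np.array([250.0, 245.0, 240.0])        # Hz (invented)
tau_true = np.array([4e-3, 1.3e-3, 0.8e-3])     # s (invented)
A_true = np.array([1.0 + 0.5j, 2.0 - 1.0j, 3.0 + 0.0j])
t = np.linspace(0.0, 0.02, 2000)

def residual(f, tau, data):
    """Best-fit residual norm with fixed frequencies, free complex amplitudes."""
    basis = np.exp(-1j * np.outer(t, 2 * np.pi * f - 1j / tau))
    A, *_ = np.linalg.lstsq(basis, data, rcond=None)
    return np.linalg.norm(data - basis @ A)

basis_true = np.exp(-1j * np.outer(t, 2 * np.pi * f_true - 1j / tau_true))
data = basis_true @ A_true                      # stand-in for the NR signal

r_true = residual(f_true, tau_true, data)
r_perturbed = [
    residual(f_true * (1 + 0.2 * rng.uniform(-1, 1, 3)),
             tau_true * (1 + 0.2 * rng.uniform(-1, 1, 3)), data)
    for _ in range(20)
]
```

Here r_true sits essentially at zero while every perturbed set does worse, mirroring the blue curve lying below all the perturbed curves in the plot.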
And ask: if I use overtones that are not the correct overtones, but have some arbitrary frequencies and damping times in the neighborhood of the true values, how good a fit do I get? If the original fit was just a coincidence, then I would expect to find some nearby values that do better. Well, you can see here, without going into the details: these are up to 20% perturbations of the frequencies and damping times, and the blue curve uses the actual, original values. And everything else lies above that; it's worse. Now, it's true that if you make a huge perturbation, if you change the frequencies by a factor of two, you can occasionally get a point below the line. But in the neighborhood of the true values it's like a minimum: the true values give the best fit in their neighborhood. And that tells me, at least, that any spurious fitting of the nonlinearities is small; you're actually representing the true overtones that are there. Excuse me. Yeah. In the plot, what was the horizontal axis? So n is the number of overtones, and it goes from zero to seven: n = 0 is just the fundamental, and n = 7 includes seven overtones. And the vertical axis, I should explain, is the error in the determination of the mass and the spin. Our criterion is how well you recover the final mass and spin; that's what we're fitting for. Sorry, we fit to the waveform by varying the parameters and then see how well we do. Okay. All right, so now let's see what happens with real data. We're going to redo the measurement that the LIGO collaboration did, but here, instead of frequency and damping time, I'm going to show the dimensionless spin of the final black hole on the vertical axis, and the mass of the final black hole on the horizontal. The X is the true value, the optimum value determined from the experiment.
The black dotted contour is the IMR result: you use the full waveform, including the early inspiral part. But now we're going to do the fit starting at t = 0, at the peak. The blue contour, which is completely off to the right and in this case doesn't include the true value, is what you get if you fit just the fundamental; that recovers the LIGO result. Add one overtone, n = 1, and you can see it's quite a big contour, because you're not using the inspiral information, only the waveform from the peak on into the ringdown, but it nicely encloses the true value. Adding a second overtone doesn't do much. All it does is stretch things out: you've got another complex parameter in the fit, so the uncertainty contour just gets bigger. That's a signal that you're trying to fit too many parameters. But it certainly seems as if one overtone is actually in the data. And you can see it more carefully if you make what the experimentalists like to call a corner plot. What's being done here is we're fixing the fundamental and then varying the frequency and damping time of the first overtone; you'll see in the top right the labels delta f_1 and delta tau_1. If you look at delta f_1 in the very top plot, the width of that is roughly, I don't know, plus or minus 0.2 or something, so you're getting roughly a 20% determination of where the best fit lies. It's close to delta f_1 = 0. So it's zero to within about 20%. That's not great; this is not precision measurement. But the fact that you can do it at all is what's important. If you know about Bayes factors, you can see it's a very modest Bayes factor; it's not an overwhelming detection, but certainly it's consistent with zero. I haven't shown the plot for the amplitude of this overtone, but in the corresponding plot the amplitude is definitely bounded away from zero, so the overtone is present in the data to within some precision. This means we don't have to wait 20 more years for detectors in space to try to test the no-hair theorem. As the current LIGO detector improves and we get to the full design sensitivity of Advanced LIGO, with another event as strong as GW150914, just one, there's a very good chance we could carry out this test to maybe 5% or even better. So things are looking up for testing general relativity, at least with the no-hair theorem. I've seen a paper that came out in January by a group claiming that they reanalyzed the same LIGO data that we did and found no evidence for the overtone. That paper is wrong. Okay, I won't go into the details here; they made some errors. Okay, now another idea of what you can do with this. So, there's a question in the chat about the previous slide, maybe two slides back: why does the plot have contours rather than a single mass and spin value? Because this is real data, right? Real data has uncertainties; there's noise in the detector. So here you're doing the least-squares fit of the model to the data, but that fit is weighted by the noise, and therefore you not only get the best-fit parameter values, there's an uncertainty around them. So if you're fitting for M_final and chi_final, you get a centroid from the fit, and a spread of values around it. And these contours, I think, are 90% contours: the probability that the true value is inside the contour is roughly 90%. I can't remember whether these are 68% or 90% or what; it's whatever LIGO did. Thank you.
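As a concrete illustration of the fitting just described: once the overtone frequencies and damping times are fixed by a choice of final mass and spin, the ringdown model is linear in the mode amplitudes, so a least-squares fit reduces to solving normal equations. The sketch below is a toy, noise-free version in Python; the mode numbers are made-up placeholders rather than real Kerr values, and the real analysis weights the fit by the detector noise.

```python
import math

# Toy quasi-normal-mode parameters (illustrative placeholders, not real
# Kerr values): each mode is A * exp(-t/tau) * cos(omega*t + phi).
MODES = [
    {"omega": 1.00, "tau": 1.00},  # "fundamental": least damped
    {"omega": 0.92, "tau": 0.30},  # "overtone": similar frequency, faster decay
]

def basis(t):
    """Cosine/sine quadratures of each damped mode; with omega and tau
    fixed, the ringdown model is linear in these basis functions."""
    cols = []
    for m in MODES:
        damp = math.exp(-t / m["tau"])
        cols.append(damp * math.cos(m["omega"] * t))
        cols.append(damp * math.sin(m["omega"] * t))
    return cols

def fit_amplitudes(ts, hs):
    """Ordinary least squares via the normal equations.  (In the real
    analysis the fit is weighted by the inverse noise covariance.)"""
    k = 2 * len(MODES)
    ata = [[0.0] * k for _ in range(k)]
    atb = [0.0] * k
    for t, h in zip(ts, hs):
        row = basis(t)
        for i in range(k):
            atb[i] += row[i] * h
            for j in range(k):
                ata[i][j] += row[i] * row[j]
    # Solve the k-by-k system by Gaussian elimination with pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atb[col], atb[piv] = atb[piv], atb[col]
        for r in range(col + 1, k):
            f = ata[r][col] / ata[col][col]
            for c in range(col, k):
                ata[r][c] -= f * ata[col][c]
            atb[r] -= f * atb[col]
    x = [0.0] * k
    for r in range(k - 1, -1, -1):
        s = atb[r] - sum(ata[r][c] * x[c] for c in range(r + 1, k))
        x[r] = s / ata[r][r]
    return x

# Synthetic noise-free "ringdown" built from known coefficients,
# then recovered exactly by the fit.
true_coeffs = [1.0, 0.2, 2.5, -0.4]
ts = [0.01 * i for i in range(1000)]
hs = [sum(a * b for a, b in zip(true_coeffs, basis(t))) for t in ts]
fitted = fit_amplitudes(ts, hs)
```

With detector noise added, the same fit acquires the parameter spread that produces the contours in the plot.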
I wanted to ask why, if we include more overtones, the contour becomes larger. Is it simply because you've added another parameter? Yes: if you add another parameter, and the effect of the basis function you've added to the fit is not detectable in the data, then there's a big uncertainty in the value of that parameter. You can make that parameter almost anything you like, because that particular basis function isn't present in the data; it's swamped by the noise. So basically all you're doing is fitting the noise with that extra parameter, and therefore the uncertainty in your fit is bigger. This is an example of real overfitting: you've put too much freedom into the fitting functions. Yeah, thank you. So here's the area theorem, which was proposed by Roger Penrose and proved by Hawking, so we call it Hawking's area theorem. It says that the total area, the sum of the areas of the horizons of black holes, cannot decrease. This is the formula for the area, and you see that for a Kerr black hole, an isolated black hole, it depends on just two numbers: the mass, and chi, the dimensionless spin. I've set c and G equal to one, and chi ranges between zero and one. So the idea is very simple. When the black holes are very far apart, you treat them as two Kerr black holes. From the inspiral part of the waveform you can get the masses m1 and m2 and the spins chi1 and chi2, so you can determine A1 and A2 and get the total initial area. Then, separately, you analyze the ringdown waveform from the peak on by fitting the overtone model; that's got nothing to do with how you determine the initial masses. That gives you the final mass and spin, and you compute the final area. And you check: is it bigger than A1 plus A2? Okay.
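The check just described is simple to sketch. In G = c = 1 units the Kerr horizon area is A = 8 pi M^2 (1 + sqrt(1 - chi^2)). The masses and spins below are rough GW150914-like point estimates for illustration only, not the published posteriors, and a real test propagates the full parameter uncertainties.

```python
import math

def horizon_area(m, chi):
    """Kerr horizon area in G = c = 1 units:
    A = 8 * pi * m**2 * (1 + sqrt(1 - chi**2))."""
    return 8.0 * math.pi * m * m * (1.0 + math.sqrt(1.0 - chi * chi))

# Rough GW150914-like point estimates (masses in solar masses; the spin
# values are illustrative guesses, not measured posteriors).
a_initial = horizon_area(36.0, 0.3) + horizon_area(29.0, 0.4)
a_final = horizon_area(62.0, 0.67)

area_theorem_ok = a_final > a_initial  # True for these point estimates
```

Note that a Schwarzschild black hole (chi = 0) recovers the familiar A = 16 pi M^2.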
So Hawking, you know, when the first LIGO detection was made, one of the first questions he asked Kip Thorne, after Kip told him about the detection before it was published, was: can you test the area theorem? And at that time Kip said no. Unfortunately Hawking has since passed away, but now, it turns out, you can, using this idea. Now that we have the overtones we can do it, but it's actually a little tricky. When LIGO does parameter estimation, the least-squares fit is actually computed in frequency space, because the LIGO noise is characterized primarily in frequency space. So you Fourier transform everything and do the analysis there. But suppose you split the waveform right at the peak, and look at, for example, the right-hand piece. If I were going to handle that in the frequency domain, with a Fourier transform, you can see that at late times the signal is zero, but at t = 0 I have the maximum of the amplitude. If you think in Fourier space, everything is wrapped around, so that's a discontinuity. And if I just take the Fourier transform of data that starts at nonzero h and decays to zero, I'll get the Gibbs phenomenon, because that's the Fourier transform of a step function; that's where the Gibbs phenomenon comes from. The standard way to handle this in data analysis is to taper the waveform: you multiply the signal in the time domain by a window that damps it, so that it rises smoothly from zero. Now, you can see that makes the data analysis tricky. If you do the tapering too late, you don't damp the discontinuity enough and you get the Gibbs phenomenon. If you do it too early, then when you do your convolution you're mixing some of the inspiral signal, which is being tapered, into your ringdown.
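The tapering just described can be sketched with a one-sided cosine ramp, the same idea as the Tukey windows commonly used in gravitational-wave data analysis. This is a generic sketch, not the exact window of any pipeline, and the taper fraction is an arbitrary choice illustrating the too-late/too-early tradeoff.

```python
import math

def taper_front(signal, alpha):
    """Multiply the opening `alpha` fraction of `signal` by a half-cosine
    ramp rising smoothly from 0 to 1, leaving the rest untouched.  This
    suppresses the step discontinuity (and hence the Gibbs phenomenon)
    when the segment is Fourier transformed."""
    n = len(signal)
    ramp = max(1, int(alpha * n))
    out = list(signal)
    for i in range(ramp):
        out[i] *= 0.5 * (1.0 - math.cos(math.pi * i / ramp))
    return out

# A truncated ringdown-like segment that starts abruptly at nonzero strain:
signal = [math.exp(-0.05 * i) * math.cos(0.8 * i) for i in range(200)]
tapered = taper_front(signal, 0.1)  # taper the opening 10% of samples
```

Making `alpha` too small leaves Gibbs ringing; making it too large lets the window eat into the part of the signal you want to analyze.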
So the trick to doing this correctly is that you have to figure out how to do the analysis in the time domain, and it's a little tricky handling the noise and so on. For the experts, if you want to know about it, the key point is that you need the covariance matrix to characterize the noise, and you need to take its inverse. It's a Toeplitz matrix, and there's a fast and stable way of doing that. Okay. And what do you find? Plotted here on the vertical axis is probability density; on the horizontal axis is the fractional change in the area. Zero means no change, everything to the right is a positive area increase, where the area theorem is satisfied, and the shaded region is the violation. You can do the fit either with no overtones or with one overtone, and the area theorem is satisfied at about the two-sigma level, so within roughly 95% probability. And again, that's pretty amazing if you think about it: the area theorem, this is a proof, right, this very mathematical property relating to bundles of geodesics on these mythical event horizons and so on, and here we can check it. And as the detectors improve, we will be able to check it to even higher precision. And the area theorem may not be true if there's scalar hair on a black hole, in all kinds of weird alternatives to pure general relativity. Okay, so let me summarize. The ringdown seemingly begins at peak strain; the overtones dominate the early ringdown; and the nonlinearities in the ringdown are surprisingly small. And I qualify that: they are seemingly surprisingly small. So an active area of research now is to try to quantify this: by how much do the overtones spuriously fit any nonlinearities, and how much nonlinearity is actually there? It turns out that's actually surprisingly difficult to do. Some people have claimed you can do it by looking at whether the fitted amplitudes of those modes are constant in time. That turns out to be nonsense.
You can ask me a question about that later on if you like, but it's actually quite difficult to do. And the overtones enable a first test of the no-hair theorem, and the first test of the area theorem. Okay, I'll stop there and take any more questions. Thank you. Hi, thanks for the talk. I have actually two questions. One is: the fact that after the merger the system is seemingly linear, does that help to match the solution to the previous phase? Is there some hope to maybe match to the inspiral more accurately without simulations? So, that's a good question. There are models like the EOB model, which I think we'll hear about tomorrow; these are attempts to piece together various analytic approximations. The problem is the joint: how you go from the inspiral model into the merger. You don't want a kink or something there, which would be bad for the data analysis, and that's also the part where the amplitude of the signal is biggest, so that's where you need the most accurate determination. I think the role of the numerical simulations right now is crucial; it's the only way we really know what goes on there. The problem with the numerical simulations is that to do parameter estimation on a LIGO detection, you might have of order a million trials: you do this Monte Carlo fit in some high-dimensional parameter space to find the optimum value, and with that many trials you can't do a million numerical relativity runs; each numerical relativity simulation can take anywhere from two weeks to a month on a supercomputer, just for one case. So developing good models is crucial for the data analysis, and there are various groups working on various different models. There's EOB; there's something called Phenom, including higher-order modes, which is a complicated thing; and my group works with surrogate models, which is yet a third way of doing this. So it's an active area of research. So, two questions from Zoom. Thank you for the talk. My question is that you mentioned the analysis is done for equal-mass binary black holes, where we have a symmetry about the center of mass. So if the masses of the black holes are distinct, maybe differing by a large factor, does this analysis break? This may sound naive, but I was wondering whether, when the symmetry breaks, the results become much harder. I don't know of any way to use the symmetry directly to simplify the analysis, because if you think about it carefully, it's actually a tricky symmetry. During an orbit the black holes are radiating, and the center of mass is recoiling; there's some momentum going off. And it's only when you average over an orbit that the center of mass, by symmetry, doesn't move. So it's a little tricky to apply the symmetry microscopically during an orbit, because it's a symmetry of the whole system, not of the trajectories of the black holes; I don't even know what the trajectory of a black hole actually means in a precise, coordinate-independent way. So the answer to your question is: it's an intriguing idea, but I don't know if anyone has actually managed to use it. Thank you, sir. May I ask one more question? Sure, go ahead. Yeah.
So, we know about the black hole area theorem, how the area of a black hole can never decrease, and it's directly related to entropy, like the second law. So I was thinking about the two children of Hawking: Hawking radiation and the Hawking area theorem. When Hawking radiation occurs, some feeble amount of mass is being taken away from the black hole, and over an extended period of time this effect may become prominent, so over a finite duration of time we'll observe that the mass of the black hole has decreased. Yes, the area goes down. Yes, the area goes down, but by the theorem that cannot happen. So I found a discrepancy there. You know the answer, right? I told you the answer: if you have a theorem and it doesn't apply, there's something wrong with your premises. So apply that. Go back and ask what premises go into proving the area theorem. Surprisingly few, but a key one is an energy condition, an assumption about the positivity of a particular kind of energy. And Hawking radiation, which is a quantum mechanical, field theory process, violates that energy condition. So that's the way it is. It's obvious, right? I told you: listen to me. Hi, thanks for the beautiful talk. I just didn't quite follow why the overtones were necessary for the test of the area theorem and the no-hair theorem. If you can really do a numerical relativity calculation, why couldn't you directly check it; why do you need this? Because to do a numerical relativity calculation, you put down some initial conditions and evolve forward in time. And what theory do you use for that evolution? General relativity. So you've assumed general relativity is correct, and therefore you're guaranteed that the area theorem will hold. We're not putting in any matter fields; it's pure vacuum general relativity, and in vacuum general relativity the area theorem is true. And the other assumption that goes into the area theorem is a condition at infinity, asymptotic flatness or something like that, and we certainly assume that with the boundary conditions in the numerical simulation. So we could never find a violation with numerics. Okay. I have just one comment about the previous question: even though the classical area theorem is violated, there is a generalized second law of thermodynamics. Yes, yes. Okay. Okay. Hi, thanks for the talk. I have a question about the overtones. It looks from your plots that, as you said, the amplitude of the higher overtones is higher at t = 0. So why did you stop at n = 7 instead of keeping more and more? That was set by the level of precision of the numerical waveform; in other words, we would have had to calculate another simulation with higher accuracy. If I showed you a plot with n = 8, it's the same thing; the improvement just bottoms out. Nothing magic about it. Yeah. And relatedly, do you have any explanation, because we usually think that the fundamental is the most important, why the seventh one looks like the biggest at early times? It's just that when we first learn about overtones in undergraduate physics, the first example we see is typically waves on a string, a violin, some musical instrument. In that case the overtones are multiples of the fundamental frequency; they're higher harmonics of the fundamental, and those are the overtones, and they're produced with a smaller amplitude.
So you pluck the string and you produce the fundamental, and then the overtones are subdominant. In this case, the overtones were defined by a completely different procedure. In fact, their frequencies are typically very close to the fundamental frequency, actually slightly lower. They're just the extra solutions that you get from the radial part of the perturbation equation. And the first person who arranged and named these decided to number them by their decay times: the fundamental is the least damped, then the next least damped, and so on, ordered just by tau, the damping time. So it's a completely different physical reason to order them that way. The idea is that the fundamental is the most important if you wait long enough, because it's the least damped; the others will have damped away. The first overtone is the second most important if you wait long enough, and so on. So which is the lowest-frequency oscillating one among these? There are an infinite number of modes, and the frequencies have a very complicated dependence. If you go online and search for quasi-normal modes, you'll find nice pictures of these plotted in the complex plane, oscillation frequency versus damping time. The trajectories are complicated, not simple, but in the neighborhood of the fundamental the overtones decrease in frequency by a few percent. Thanks. Thank you for the talk. Two questions. The first one is still quite a conceptual question, I guess. I'm still somehow missing the connection between the necessity of the overtones to improve the model and how this actually connects to enabling a test of the no-hair theorem, which I think refers to the system being determined solely by its final mass and spin. So what's the essential connection here? Yes, yes. Okay, so let me try to answer that.
So the key thing is, it's a prediction of perturbation theory of the Kerr metric. The no-hair theorem says the final state, which has to be an equilibrium black hole, has to be Kerr. That's the no-hair theorem. If you then look as the system settles down, that's the ringdown phase, and a prediction of perturbation theory of the Kerr metric is that you should have these quasi-normal modes. And you can calculate them completely analytically; it's not just computers. In principle it's an analytic calculation; essentially it's just root finding. You can find all these quasi-normal modes, and for any mass and spin you can predict the frequencies and damping times of all the modes. So, you see, you fit to the ringdown, not the inspiral, as the system is settling down to the particular black hole. If you can find two quasi-normal modes in that ringdown part of the waveform, you will have measured two frequencies and two damping times. That's four numbers, but they shouldn't be independent: they should depend only on a mass and a spin. Overdetermined. If you only measured one quasi-normal mode, that would just be a measurement of the mass and spin; it wouldn't be a test. Yes, yes. Thank you very much. And the second question is, you mentioned this other paper from January being wrong. Why? So, in order to do this test, as I mentioned, you have to include the noise in the detector; that's contained in the data. If you do the analysis in the time domain, then from the noise you get the covariance matrix, and you take its inverse numerically. And to experts in the subject it's known that you have to be very careful: it's an ill-conditioned matrix, and if you don't do it carefully it's unstable, and you basically get the wrong noise weighting.
So that was one of the main issues with that paper. There are some other technical issues. But if you're unhappy with that answer, there has in the meantime, again just about a month ago, been a paper by a completely independent group who actually figured out how to do the analysis in the frequency domain, a different way of doing the analysis, and they find evidence for the overtone. Thank you very much. So again, this is not overwhelming evidence, but the amplitude is definitely bounded away from zero. Thank you. Hi, thanks for the nice talk. There was a plot, I might have misunderstood, with the relative error on the mass and spin parameters when adding more and more overtones. This one? Yes. So what's on the y-axis? So the y-axis is defined at the bottom. When you do the fit, if you look at the top equation, what's done here is you take the true overtone frequency, for the true mass and spin, and multiply it by one plus delta. So you just make an arbitrary fractional change in the frequency. Okay, so the idea is that you know what the true final mass and spin are, if you like from a numerical relativity simulation with the given initial masses and spins. We take the frequencies, and we do the fit not with the true values but with these fiddled values. And we're doing this starting at t = 0, at the peak of the waveform. So if all we were doing in the early part was actually fitting the nonlinearities, then we would expect that there's nothing special about the true values: there would be a good probability that by adjusting them slightly, we would get a better determination of the final mass and spin. Okay, and we don't.
So that's the argument for saying that using the actual values, treating them as true quasi-normal modes, is not an arbitrary fit to nonlinearities. Those quasi-normal mode amplitudes are actually present, even at t = 0. Okay, thank you. Hi, thank you for the talk, very interesting. I was wondering, clearly the situation is a bit different, but can we use these overtones in the case of neutron star mergers, and if yes, what can we say in that case? Well, neutron star mergers are much more difficult. First of all, they're more difficult numerically, because in addition to solving Einstein's equations you have to handle all the complications of nuclear equations of state and magnetic fields. But the main reason it's difficult to imagine doing a test like this is that when the two neutron stars merge, not all of the initial mass ends up in the final black hole; some of the mass is ejected. And we can't measure that very accurately or directly. It's done by a combination of things: for the famous neutron star merger that was seen multi-messenger, with optical and gamma rays and so on, you take the electromagnetic radiation and fit it to numerical models to estimate how much mass was actually lost. And it was about 5% of the mass, but the error bar on that 5% is large; the original numerical models predicted at most 2%, and it turned out to be more like 5%. And the state of the art in numerical modeling of neutron stars is just not as good, because of the matter part; the gravitational waveform can still be done reasonably accurately, but not as well as pure gravity. Also, we don't start with two black holes, so what areas would we even be looking at, in terms of the increase? Are there more questions? If not, I have a question.
Is there a good model in which these quasi-normal modes are modified? A good model? I mean, one good example. Yeah, so people are looking at alternative theories of gravity. It's very hard to make an alternative theory of gravity that is consistent; and now I'm talking about a classical theory, not even getting into quantum gravity. People have taken effective field theories, things like dynamical Chern-Simons theory and so on, which may be low-energy limits inspired by string theory or something like that, and asked: could we see deviations from general relativity? It turns out it's actually very difficult to even predict what you might see in a LIGO event from these theories, because for a lot of them, in the classical low-energy limit, the equations are mathematically ill-posed. Often, if you have a higher-curvature theory, you'll end up, for example, instead of Einstein's equations with two time derivatives of the metric, with four time derivatives. And now you have to ask the question: is this a well-posed theory? If I give you initial conditions, can you actually predict the future? A lot of these theories are just unstable when you formulate them that way. But there have been attempts to treat them in an effective field theory sense. Okounkova, who was my graduate student at Caltech a few years ago, took dynamical Chern-Simons theory in the effective field theory limit and actually worked out what the inspiral waveform would look like. And other people are now using her work to try to look in particular at the quasi-normal modes; she focused on the inspiral part of the merger. That's how people are trying to make detailed predictions. Thank you. So, there was a promise to have a group photo now, but I don't exactly know how it's going to happen.
Okay, so if no one else knows, then I guess we can have a break and then return for the student session. Let's start a bit later, maybe at four. Okay, thank you.