So, well, okay, we are live. Hello everybody, welcome to our next Latin American Women Learning Physics. My name is Alejandro and I'll be your host today. It is a pleasure for me to introduce Nicolas Yunes, today's speaker. He got his Bachelor's in Physics from Washington University in St. Louis in 2003. Then he went to Penn State University, where he got his PhD in Physics in 2008. He went on to become a research associate at Princeton University from 2008 to 2010, then became a NASA Einstein Fellow and took that Einstein Fellowship to MIT from 2010 to 2011. In 2011 he also became a Kavli Institute for Theoretical Physics Scholar and an assistant professor at Montana State University. Now he is an associate professor and the co-founder of the Extreme Gravity Institute at Montana State University. He has several awards and an outstanding career; it's not fair to summarize it with just a few things, but I'll highlight at least two. In 2015 he won the Young Scientist Prize in General Relativity and Gravitation, given by the International Society on General Relativity and Gravitation. He is currently the Vice-Chair of the Executive Committee of the Division of Gravitational Physics of the American Physical Society. He has hundreds of papers and lots of citations. He's well known for the I-Love-Q relations, universal relations in neutron stars; he also developed the ppE formalism for tests of general relativity; and he found the first rotating black hole solutions in Chern-Simons gravity. So it is a real pleasure to have Nicolas giving us this webinar. Remember that you can channel all your questions through YouTube, Twitter, or by emailing us, and at the end we will read your questions to Nicolas. So Nicolas, hello. Yep, hello.
I'm here. Okay, thank you, Alejandro. I think you forgot to mention the most important thing, which is that I am your advisor. Oh, that's true. I didn't want to say it. The biggest accomplishment. Oh, guys. Thank you for the embarrassing introduction and thank you everyone for joining. So I'm gonna tell you a little bit about one of the things that we do at the Extreme Gravity Institute at Montana State University. Alejandro asked me to speak in English so that this webinar would be accessible to everyone, so I'm gonna do that instead of speaking in Spanish. I apologize to all of my people in Latin America. In any case, let's begin with I guess the most important question of this webinar, which is: what is Montana? Montana is a state in the United States that I had to Google when I first got an offer. It's in the northwest of the United States, and that's a picture here of where we live. It's a beautiful place. Our offices are right here, I think you can see the pointer, and that's where the Extreme Gravity Institute is. We are funded by a bunch of people, and we're involved in NASA, in NSF, in LISA, in LIGO, and in pulsar timing arrays. A very active place. This is a picture of our group. Alejandro is over here. Hey Alejandro, I am over here without a tie, apologies. I am wearing a tie right now, but unfortunately the camera doesn't work, so you can't see it. In any case, this is what I do. It's a mess. I work on a lot of different things, ranging from theoretical relativity to experimental relativity, and I have a bunch of collaborators at a bunch of different universities in the United States and in Latin America, and a bunch of grad students and undergrad students. This is a bit of an out-of-date slide that I have to improve, probably next week. But in any case, that's too complicated, so let me just summarize what I do into three main topics. I study black holes and neutron stars.
I study how to model gravitational waves analytically or semi-analytically, as produced primarily by binary systems. And I also study tests of general relativity, be it with gravitational waves, with binary pulsars, or in the solar system. I actually started with solar system tests when I was at WashU with Cliff Will, who was an expert in experimental relativity. So today, I cannot cover all of these topics; I could give three or four talks on each of them separately, based on all the work that my students have done with me over the past years. Instead, I thought it would be good to give a little bit of a tutorial on what we are learning about theoretical physics, and how we learn it, from gravitational wave observations. So this talk is going to be a little bit different from other talks I've given in the past, where I essentially go pretty fast and describe the results we're obtaining; instead I'll take a step back and try to explain how we obtain those results. Let me begin by motivating a little bit why we want to test general relativity, because after all, I'm a relativist, so I was brought up, indoctrinated, into the theory of Einstein. Naturally my bias is that Einstein's theory is correct, but I still think it's important to test it, because until very recently we didn't have any evidence for the validity of Einstein's theory in the very, very extreme scenario where the gravitational interaction is very, very strong, is highly nonlinear, and is also very, very dynamical, meaning the gravitational interaction is changing during the observations that you're making. So gravitational waves are ideal to probe this extreme gravity regime. I remind you that they are wave-like perturbations of the gravitational field; they are perturbations to the metric that satisfy the wave equation very far away from the source.
Here's an animation by the RIT group of gravitational waves. I think what you're seeing here is the apparent horizons, painted as little spheres in gray, moving around each other. These are equal-mass black holes, and I think you're seeing a slice of their computational domain and the gravitational waves getting emitted. Gravitational waves are produced by accelerating masses, by the temporal variation of the multipole moments of the source. And they encode properties of the source. They encode the spin. What you're seeing here is two black holes with spin angular momentum. The previous ones were not spinning; these ones are. Just like tops precess on earth if you spin them, here you have two little tops that are coupled to each other through the gravitational interaction. So there's a coupling of orbital angular momentum and spin angular momentum, and that modulates the signal, forcing the spins to precess. All of that information is encoded in the gravitational wave train that you see below. It's sort of hard to see, because the changes are very, very subtle, so you need to do some very careful analysis to pick these differences up. But if you can model these signals accurately enough, you can in principle extract all of these features from the data. The gravitational waves then propagate at the speed of light according to general relativity, and they're very weakly interacting, as you all know, and they decay as one over r. So they go mostly unperturbed from the source to us, and we get a very clean image of the system that produced the gravitational waves. It's very different from what happens with electromagnetic radiation, which can easily get absorbed, and where the absorption properties depend on the frequency of the light. By absorbed I mean by the interstellar medium, by gas that might be in between us and the source. That doesn't happen with gravitational waves.
An important feature, as you know, is that binary systems obey Kepler's third law. So the frequency of the orbit scales as the total mass divided by the orbital separation cubed, everything to the one half. I've set here Newton's constant G and the speed of light c to one, so I'm using what we call geometric units or relativistic units; if you want, there's a G over c squared floating around here. When the binary system merges, when the two compact objects touch, the orbital separation is roughly four times the mass of either of the two objects, or twice the total mass of the system. So when you take that to the cubed power, one power of mass cancels over here, and then you take the square root, and you see that the frequency goes as one over the total mass of the system. That means that very massive binaries, so supermassive black holes merging, produce gravitational waves at much, much lower frequencies, much, much lower orbital frequencies, which are related to the gravitational wave frequencies, than binaries that are stellar mass. So if you have two objects that collide with stellar masses, with, well, one solar mass or 10 solar masses, those will be merging at roughly 500 to 1000 hertz, whereas supermassive black holes with masses of about 10 to the 5 solar masses will merge at about a millihertz, okay. That's the sound that gravitational waves would make if we could hear them, as produced by a small compact object that's spiraling into a supermassive black hole. What you're supposed to get from this is that the frequency does not remain constant during the inspiral, and that can again be seen easily from Kepler's third law: the orbital separation is decreasing, so the frequency is increasing. We call that a chirp. The amount of energy that gets radiated in the process is a few percent of the total rest mass of the system. So one day I decided to calculate how many gravitons that was, and it's a ridiculous number of gravitons that get emitted.
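As a rough numerical check of this scaling (my own sketch, not from the talk's slides), here is Kepler's third law evaluated at a separation of twice the total mass, with the gravitational wave frequency taken as twice the orbital frequency. All numerical choices are illustrative; these are order-of-magnitude estimates only.

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_SUN = 1.989e30   # kg

def merger_gw_frequency(m_total_solar):
    """Rough merger GW frequency: Kepler's third law at a separation of
    twice the total mass (in geometric units), with the quadrupole GW
    frequency equal to twice the orbital frequency."""
    M = m_total_solar * M_SUN
    r = 2 * G * M / c**2                       # separation ~ 2M at merger
    f_orb = math.sqrt(G * M / r**3) / (2 * math.pi)
    return 2 * f_orb                           # GW frequency

f_stellar = merger_gw_frequency(20)    # two ~10 solar-mass black holes
f_smbh    = merger_gw_frequency(1e5)   # a ~10^5 solar-mass merger
print(f"{f_stellar:.0f} Hz vs {f_smbh:.3f} Hz")
```

The two printed numbers scale exactly as the inverse of the total mass, which is the point of the slide: heavier binaries merge at lower frequencies.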
It's a tremendous amount of energy. It's more energetic, as we calculated at one point, than all of the light emitted by all of the stars in the universe at an instant, put together. Of course, this event takes very, very little time; the merger is less than a second in duration, and then it goes away. And of course this energy is not radiated in light, it's radiated in gravitational waves, and these objects are very, very far away, so the waves decay, and by the time they get to earth they're very, very weak. So here's an estimate of how much energy gets radiated in a merger. It's about 10 to the 46 joules. It depends a little bit on the efficiency of the energy removal from the system; I'm assuming a 1% efficiency here, which is about right for a non-spinning binary black hole merger with a total mass of about 10 solar masses. That's about 100 times the energy emitted in a supernova. Okay, so you have these gravitational waves. They're very, very dynamical, and they're being generated in this extreme gravity regime. You might wonder, okay, well, why test general relativity in this extreme gravity regime? Don't we have data already that constrains this extreme gravity region? The answer is: not really. What we can do here is plot, on the x-axis, a characteristic field strength of your test or of your observation, defined as a characteristic mass divided by a characteristic separation. And on the y-axis we put a characteristic curvature strength, which is a characteristic mass divided by a characteristic length cubed, to the one half, in units of inverse kilometers. In this plane, we can place all of the experiments that have been performed so far to test general relativity, all of the observations that have been done. In the lower left region, you have all of the tests that we would call weak field.
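The quoted 10-to-the-46-joule figure is easy to sanity-check: one percent of the rest-mass energy of a 10-solar-mass system. A two-line arithmetic check (my own, using standard constants):

```python
M_SUN = 1.989e30   # kg
c = 2.998e8        # m/s

# 1% radiative efficiency applied to a 10 solar-mass total rest mass
E = 0.01 * 10 * M_SUN * c**2
print(f"{E:.2e} J")   # order 10^46 joules, as quoted in the talk
```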
So tests due to ranging, measuring the distance from the earth to the moon with lasers, which is called lunar laser ranging; or the classic perihelion precession test with Mercury; or the measurement of precession by the LAGEOS satellites. Up until recently, the most relativistic test of GR we had was sort of smack in the middle of this diagram. These are tests that we can do with binary pulsars, so rapidly rotating neutron stars. The double pulsar, a binary where both neutron stars are pulsars and we can detect both pulses, was discovered a few years back and allows you to measure the decay rate of the orbital period, of the orbit itself. And that lies right in the middle of this diagram. Now, gravitational wave tests allow you to probe the upper right corner of the diagram. This blue line and red line, I seem to have lost my pointer for some reason, but it doesn't matter. The blue line and the red line represent the region in this diagram that gets probed by the first two gravitational wave observations made by LIGO. They're both black hole merger events, with individual black hole masses of about 20 or 30 solar masses in each case. And you see we are approaching this upper right corner, because when objects merge, the characteristic length becomes comparable to the characteristic mass of the system. So you're approaching one on the x-axis, and similarly you're approaching one on the y-axis as well. It's not quite one, because you never really reach a characteristic length equal to the mass; that would only be possible with extremely rapidly spinning black holes merging, and the black holes that have been observed to merge have not had such large spins. So it is in this sense that we can say that gravitational waves probe extreme gravity.
With the rest of the talk, I wanna tell you a little bit about how we do the analysis and the modeling of gravitational waves, how we can do the same in modified gravity, and then how we can extract inferences about fundamental physics from the gravitational waves that we have detected and will continue to detect in the coming years. Let's begin with data analysis and modeling. What I'm showing you here is the sensitivity of the instrument, the LIGO detectors, as a function of frequency. So this is the sensitivity to strain on the y-axis, and on the x-axis we have the frequency. The different lines you see here, S1, S2, S3, S4, S5, S6, are the different science runs that were done by initial LIGO. As you go from S1 to S6, the instrument is improved: they do the first science run, then they take the instrument offline so they can fix it or improve it, then they start science run two, and so on. So you see these lines, blue goes to green, green goes to red, red goes to blue and so on; the sensitivity increases as a function of time. But the problem is that a lot of the signals that we wish to detect are actually below the sensitivity of the detector. To extract them, you have to do something a bit sophisticated: you have to filter the data. And in order to filter the data, you first have to construct a good filter. You have to have a good guess of what signals are contained in the data, and you have to have the ability to model those signals accurately. Once you have those templates, those models, you can cross-correlate the templates with the data to try to pull any signal that's hidden in the data out of the noise. And the way we do this is through a calculation of this quantity called the signal-to-noise ratio, rho squared.
What I'm showing you here is rho squared, the SNR or signal-to-noise ratio. It's an integral over the Fourier transform of the data, in the frequency domain, divided by a measure of the noise of the instrument called the spectral noise density, multiplied by the Fourier transform of the template model, which is essentially a projection of the gravitational wave metric perturbation. This quantity depends not just on the frequencies of the Fourier transform, obviously, but also on the parameters that characterize the model. So if you have a model that's supposed to represent the gravitational waves produced by a binary black hole system, then that vector lambda might contain the masses of each object, the spins of each object, the distance to the binary system, the inclination angle, and so on and so forth. Those are the parameters that you're trying to find. What you do is calculate this rho squared for one set of parameters, and then you do it over and over and over again until you find a set of parameters that maximizes that signal-to-noise ratio. And hopefully, if the maximum signal-to-noise ratio is high enough, then you can make a statement about the certainty you have that what you've detected is indeed a gravitational wave. There are many other ways to make sure that what we detected is a signal, but this general idea is enough for this talk. So let me show you a little animation, a little example; I should do an analytic example first. Imagine that the signal of nature is some constant a times cosine of p f, okay? This is the Fourier transform, so it's a function of f. Great, so that's the signal. And let me construct a model that looks like this, b times cosine of m f. So the parameters of my model are b and m; that's what would be inside of my lambda vector here. Oh, my pointer came back, fantastic.
Okay, so b and m are my parameters, and that would be inside of this lambda. And a and p are what's contained in the signal. In principle, I don't know a and p, okay? So what I'm gonna do is calculate this rho squared by multiplying these two functions together: a times b times the integral of cosine p f times cosine m f. And as you know, this will be roughly zero if p is not equal to m, or roughly a b over two if p is equal to m. So what I would do in this type of scenario is vary m from some minimum to some maximum. And I would find that, as I pick an m that's different from p, zero for example, this rho squared is very close to zero. Eventually I would continue to sample in m until I get an m that's close enough to p that rho squared becomes approximately a times b over two, for some b that I had chosen. Then as I continue to increase m beyond that value of p, I would get back to zero. So you would see a peak structure: if I were to plot rho squared as a function of m, I would see rho squared spiking when m is equal to p. That's the rough idea behind matched filtering, which is what I'm describing here. So here's a way to see it visually. This is an animation that Chad Hanna made; he's a professor at Penn State now. What he's done here is he has some noise, and this black thing here is noise plus a signal that he's hidden somewhere in here. So that's like the data, right? The data is typically, well, we hope the data is a signal plus noise. Most of the time the data is just noise, but every now and then there's also a signal thrown in here somewhere. And your job is to hunt for it and figure out: where is the signal in this data stream? And obviously you can't just look at it and say, oh, well, it's right over here, because the signal is so weak that it's completely buried in the noise.
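The cosine toy example above can be reproduced in a few lines. This is my own sketch of the idea (the variable names and grid choices are arbitrary assumptions): correlate the "signal" a cos(p f) against templates b cos(m f) over a grid of m, and watch the normalized overlap spike at m = p.

```python
import numpy as np

# Toy matched filter: "data" is a*cos(p*f); correlate against templates
# b*cos(m*f) for a grid of m and look for the peak of the overlap.
a, p = 1.0, 3.0                          # signal parameters (unknown in practice)
b = 1.0                                  # template amplitude
f = np.linspace(0.0, 200.0, 100_001)     # long integration baseline

def overlap(m):
    # normalized correlation: -> a*b/2 when m == p, ~0 otherwise
    return np.trapz(a * np.cos(p * f) * b * np.cos(m * f), f) / f[-1]

m_grid = np.linspace(0.5, 6.0, 1101)
rho = np.array([overlap(m) for m in m_grid])
m_best = m_grid[np.argmax(rho)]
print(m_best, rho.max())
```

Plotting `rho` against `m_grid` shows exactly the peak structure described in the talk: near zero everywhere except a sharp spike at m = p, where the overlap approaches a b / 2.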
So what he's gonna do is calculate the signal-to-noise ratio, say in a small stretch of time here; he's gonna Fourier transform everything, of course, but he's going to compute it and then plot it right here. So at this instant in time, the SNR, the signal-to-noise ratio, is very low. Oops. And so here you go, he's calculating, he's detecting nothing, and boom. At this time, the signal-to-noise ratio spiked, okay? It spiked because he's filtering the data with a model that exactly matches the signal he put into the data stream. If he had done the same analysis but with a model that had parameters different from those in the signal, he would have found that the signal-to-noise ratio is essentially close to zero all the time. And that's the game that LIGO and Virgo and all gravitational wave instruments play. They calculate the signal-to-noise ratio for different stretches of time, for all possible sets of lambda that they can evaluate the SNR for. Of course, there's an infinite number of points that you can choose in this n-dimensional space, so you have to discretize it in some way, and there are smart Monte Carlo techniques to do that, which I'm not gonna get into. Okay, so clearly, knowing the model matters, and it matters a lot. If your model is not very accurate, if it's not a good approximation to the signals that are produced in nature, you're just essentially going to miss the gravitational waves that are in the data. That's why people have been working for 60 or 70 years now, one could argue 80 years, on constructing approximate solutions to the Einstein equations to model the gravitational waves emitted by binary systems. And the way we do it nowadays, the easiest way to understand it, is in terms of this plot: on the y-axis I'm putting log base 10 of the orbital separation divided by the mass of the system, and on the x-axis I'm putting log base 10 of the mass ratio.
On the upper left corner, we can make approximations where the mass divided by the orbital separation is much less than one and the orbital velocities are much less than one. That's the realm of post-Newtonian theory. We take the Einstein equations, expand them in these parameters, and solve them order by order in these ratios. On the lower right side of this panel, the only approximation we can make is that the mass ratio is very, very small; we call it extreme. We cannot assume that the velocity is much less than one, because it may not be. In this regime, you have to use something called black hole perturbation theory or self-force perturbation theory, where you're expanding only in this mass ratio. This might be appropriate, for example, if you're trying to understand the gravitational waves emitted by EMRIs, extreme mass ratio inspirals: small black holes, say stellar-mass black holes, falling into supermassive black holes. Okay. And eventually, if you want to understand how gravitational waves behave when comparable-mass objects merge at velocities that are high, like half the speed of light or so, then you have to take the Einstein equations in some type of 3+1 decomposition, discretize them, and solve them on a cluster of supercomputers. That's the realm of numerical relativity. Okay. So the gravitational wave models are very, very lengthy. They're a combination of numerical and analytic methods, and they have been shown to be highly accurate: the waveforms constructed in post-Newtonian theory agree with those of numerical relativity, which in turn agree with those of perturbation theory. So there are nice overlaps between all of these regions. It took more than 50 years to develop, and it has only been done so far for relatively simple orbits.
So for binary systems in quasi-circular orbits, provided their spins are not extremal and that their spins are aligned or anti-aligned with the orbital angular momentum, so that precession can be neglected. Most of the work I do is in post-Newtonian theory. I do a little bit of perturbation theory work as well, but for this talk, let me just concentrate on the post-Newtonian side of things. In post-Newtonian theory, we divide the waveform problem into two regimes: the regime where we calculate how the waves are generated, and the regime where we calculate how the waves propagate after they've left the system. Essentially, the calculation begins by figuring out what the Hamiltonian, the binding energy of the system, is. In Newtonian gravity, the binding energy is the reduced mass times the total mass divided by the orbital separation R, and then there are of course post-Newtonian corrections to these terms. It's a little bit complicated to explain exactly how this binding energy is computed. I teach a class here at the Extreme Gravity Institute that takes about a semester to go through these calculations, so I won't have time now to explain them; the point is that these quantities can be calculated by solving the Einstein equations perturbatively. Similarly, you can calculate the radiation reaction force: how much energy is being carried away by gravitational waves. The energy carried away by gravitational waves has to come from somewhere, and the only other thing that has energy in the system is the orbit. So as the gravitational waves take energy away, they are removing energy from the binary, they're removing binding energy. And the rate at which they do so is given by this formula here: minus 32 over 5 times the symmetric mass ratio squared, where the symmetric mass ratio is the reduced mass divided by the total mass, times this fraction v over c to the 10th power, okay?
And then there's an entire post-Newtonian series attached to it: one plus 1PN corrections, plus so on and so forth. Okay, in addition to getting these two quantities, you of course also get the gravitational waves that are emitted. Typically those gravitational waves can be expressed as derivatives of the multipole moments of the system. In general relativity, this starts with the second derivative of the quadrupole divided by the distance to the source, times one plus, again, a post-Newtonian series. In principle, this post-Newtonian series also includes the derivatives of the octupole, of the hexadecapole, and so on and so forth, but not the derivatives of the mass monopole or the dipole, because of conservation of mass and linear momentum: you can always go to the center of mass frame, where the dipole vanishes. And then you have to figure out, okay, the gravitational waves got generated, and I know how they got generated; I know how the objects move around each other and how they inspiral. But how do the gravitational waves then go from near the source to us on earth, many, many megaparsecs away from the source? So you have to study the propagation of gravitational waves in vacuum. If you do that in general relativity, you find that the gravitational waves satisfy the wave equation, at least in the Lorenz gauge, in the TT gauge. And because of that, we know that gravitational waves propagate with a dispersion relation that's omega equals k, which means that the phase velocity is equal to the speed of light and the group velocity is equal to the speed of light. With all of that, we can write down a model for the gravitational waves that would be detected on earth. We like to project these metric perturbations onto a plus-cross basis. So what you see here is that the model depends on the symmetric mass ratio.
Remember, this was the ratio of the reduced mass to the total mass; and on the distance to the source, the inclination angle, the total mass, and the orbital frequency and orbital phase. But you have to be careful, because the orbital frequency and the orbital phase are not constant. Remember the first slide I showed you: we had the frequency changing as a function of time. Why? Because the orbital separation was decreasing. We said the gravitational waves were chirping, that they were increasing in frequency. So similarly, you expect omega, and the integral of omega, so phi, to be changing in time. So how do you do this in modified gravity? Before I tell you that, let me just do a small calculation to compute the Fourier transform of the previous gravitational wave signal. I told you just now that the frequency and the phase depend on time. The Fourier transform is defined as usual, where f is the Fourier frequency. But if you put in the signal model that I showed you earlier, it essentially looks like some amplitude, which is a function of time, times e to the i phi of t. There's gonna be a real part of it, so there's gonna be an e to the plus i phi of t and an e to the minus i phi of t, depending on whether you have a cosine of phi or a sine of phi; that doesn't really matter. In this case, I've taken the minus sign, and you see that the argument of the exponential now combines into e to the two pi i f t minus i phi of t. Okay, so that term is sort of important, because it tells you that for any fixed frequency that you pick, for any f that you choose, there's a time at which that argument becomes stationary. Said another way: if you look at this integral, for most of the domain the integrand is gonna be oscillating wildly, very, very rapidly. And when you have an integral of a rapidly oscillating function, that tends to average to zero. But of course, it doesn't do so.
It doesn't average to zero everywhere. There's a time, which we call the stationary time, where this function is not rapidly oscillating. And that is precisely when the first derivative of the Taylor expansion of this argument about the stationary time vanishes. So what I've done here is Taylor expand this combination, two pi f t minus phi of t, which I'm calling psi, about the stationary time. The first term is that psi quantity evaluated at the stationary time, which is a constant now, of course; plus the first derivative evaluated at the stationary time, times t minus t s; plus the second derivative, times t minus t s squared. But if you are at a stationary point, then by definition the first derivative of the psi function vanishes. And what does that mean? Well, if I look at the definition of psi that I had up above, circled in red, taking a time derivative of that and setting it to zero is equivalent to saying that two pi f has to be equal to the first time derivative of the orbital phase evaluated at the stationary time. But the time derivative of the orbital phase is related to the orbital frequency. So you have on the right-hand side the orbital frequency, and on the left-hand side the Fourier frequency: at the stationary point, there's a relationship between these two frequencies, the Fourier one and the orbital one. Okay, so let's now analyze how this Taylor-expanded integral gets resolved once we use the fact that we're at the stationary point. The first derivative vanishes, of course, by definition of the stationary point. That leaves only two terms. The first term, the e to the i psi of t s, is a constant, so it can be pulled out of the integral. Great, so I'm gonna do that up above. That just leaves this third term, which depends on the second derivative, times t minus t s squared.
And you'll notice here that in going from the bottom line to the top line, I've done something slightly sneaky: I've also moved the amplitude, which is a function of time, out of the integral. The reason I've done that is because in all of these calculations one assumes that the amplitude is evolving much more slowly than the phase. If that weren't the case, then there wouldn't be rapid oscillations of the integrand that average out over the entire temporal domain except in the neighborhood of the stationary point. So this entire expansion assumes that the amplitude evolves slowly relative to the phase. I've pulled that amplitude out, and I'm just left with this integral that looks a little bit nasty. But then, if you close one eye and sort of half open the other, you realize that the integral is a Gaussian, so you can integrate it. Yay, and the result is this, essentially: you have the amplitude evaluated at the stationary time, times e to the i psi evaluated at the stationary time, times this correction to the amplitude that depends on the square root of one over the absolute value of the second derivative of the psi function evaluated at the stationary time. That looks good, because it's an analytic expression for the Fourier transform of the model. But what does it depend on? It depends on the psi function. And this funny phase, the psi function, remember, was two pi f t; it's here in the lower right corner. I've lost my pointer, darn it. So on the lower right corner, it's two pi f t minus phi of t, but you have to evaluate it at the stationary point. At the stationary point, big F, the orbital frequency, is equal to the Fourier frequency divided by two. So psi of t s, which is the same thing as psi of f, is two pi f times t evaluated at orbital frequency f over two, so evaluated at the stationary time, minus the orbital phase evaluated at the stationary time.
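The stationary-phase result above can be checked numerically on a toy signal. This is my own sketch, not from the talk: a quadratic phase phi(t) = alpha t squared with a slowly varying Gaussian envelope (alpha, T_env, and the frequency choice are all illustrative assumptions). The brute-force integral and the stationary-phase estimate agree to a few percent.

```python
import numpy as np

# Toy check of the stationary-phase approximation (SPA):
# h(t) = a(t) * exp(i*(2*pi*f*t - phi(t))), with phi(t) = alpha*t^2 and a
# Gaussian envelope that varies slowly compared to the phase.
alpha = 50.0                     # phase curvature, rad/s^2
T_env = 10.0                     # envelope timescale (slow vs. the phase)
a = lambda t: np.exp(-t**2 / (2 * T_env**2))

f = 100.0 / np.pi                # chosen so the stationary time is t_s = 2
t_s = np.pi * f / alpha          # where d/dt (2*pi*f*t - phi(t)) = 0

# Brute-force numerical integral over a wide window
t = np.linspace(-50.0, 50.0, 2_000_001)
num = np.trapz(a(t) * np.exp(1j * (2 * np.pi * f * t - alpha * t**2)), t)

# SPA magnitude: |integral| ~ a(t_s) * sqrt(2*pi / |psi''(t_s)|), psi'' = -2*alpha
spa = a(t_s) * np.sqrt(2 * np.pi / (2 * alpha))
print(abs(num), spa)
```

Away from t_s the integrand oscillates rapidly and self-cancels, so essentially all of the answer comes from the neighborhood of the stationary point, which is exactly the claim in the derivation above.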
Now, what's nice about this is that you can combine both of these terms and rewrite them as an integral that looks like so: the integral over f′ of f′ divided by f-dot, times, in parentheses, two minus f over f′. So this ψ function, the phase of the Fourier transform of the gravitational wave, really depends on the rate of change of the orbital frequency; it depends on f-dot. And you know that df/dt can be written in terms of dE/df and dE/dt by the chain rule. So if you have the binding energy, E binding, you can take its derivative with respect to the orbital frequency. And if you have the radiation-reaction force, then you also have E-dot, that is, dE/dt. So you can construct the f-dot that appears in the integral for this Fourier phase. Once you have that, you can solve the integral and find the Fourier phase of your model. Similarly, you can take a couple of derivatives and calculate the square-root term, and since you have the stationary point, you can calculate the amplitude as well. Okay, so that, in general, is called the stationary phase approximation, and it is super useful. If you like, it is also the leading-order term in the expansion of a generalized Fourier integral using the method of steepest descent, a very popular method in asymptotics. If you don't know what I'm talking about, there is a very nice book by Carl Bender and Steven Orszag that we use a lot at the Extreme Gravity Institute; it is very well explained there in purely mathematical terms. So how does this relate to modifications to gravity? Well, it relates because, clearly, the two things the wave model depends on most are the binding energy of your system and the rate at which that binding energy decreases due to the emission of gravitational waves.
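The chain-rule construction of f-dot just described can be sketched in a few lines. This is a toy version in geometric units (G = c = 1) with an assumed equal-mass Newtonian binary; the normalizations are illustrative, and the point is only that f-dot = (dE/dt)/(dE/df) reproduces the well-known f^(11/3) chirp scaling:

```python
import numpy as np

# Sketch: build f-dot from the binding energy and the energy flux.
# Geometric units (G = c = 1), equal-mass binary (assumed for simplicity).
M = 1.0      # total mass
eta = 0.25   # symmetric mass ratio for equal masses

def E(f):
    # Newtonian binding energy in terms of x = (pi*M*f)**(2/3).
    x = (np.pi * M * f) ** (2.0 / 3.0)
    return -0.5 * eta * M * x

def Edot(f):
    # Leading-order quadrupole energy flux (negative: energy is lost).
    x = (np.pi * M * f) ** (2.0 / 3.0)
    return -(32.0 / 5.0) * eta**2 * x**5

def fdot(f, h=1e-8):
    dEdf = (E(f + h) - E(f - h)) / (2 * h)  # numerical dE/df
    return Edot(f) / dEdf                   # chain rule: df/dt = (dE/dt)/(dE/df)

# The chirp scaling: fdot should grow as f**(11/3).
f1, f2 = 0.001, 0.002
exponent = np.log(fdot(f2) / fdot(f1)) / np.log(f2 / f1)
print(f"measured scaling exponent = {exponent:.4f} (expected 11/3 = {11/3:.4f})")
```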
And maybe if you have scalar fields in your modified theory, they might become wave-like and also carry energy away; if you have vector fields, the same. So you have to add up all of the energy sinks to find the total rate of change of the energy. With that, you can compute this f-dot and then calculate the modifications to the phase. The modifications to the phase are super important, because, as I showed you earlier when I was doing the matched filtering calculation, gravitational wave detectors are much more sensitive to the phase of the Fourier transform than they are to the amplitude. So getting the phase right is super important; otherwise, you might miss the signal. And if you want to test GR, getting a model for the phase is similarly the most important thing. So let me give you an example: dipole radiation. Okay, so in GR there are conservation laws that say gravitational waves do not contain a dipole term; there are no derivatives of the dipole moment on which the metric perturbation depends. And that's because, as I said earlier, you can always transform to the center of mass to make those terms go away. But in modified gravity theories, that might not be the case. In particular, if you have a scalar field that gets activated and carries energy away, if that scalar field has monopole structure far away from the source, and if it is anchored to each of the components of your binary, then as the binary moves around, these monopoles move around, and a moving monopole produces dipole radiation, just like in electrodynamics. So the rate of change of the binding energy is not just the gravitational-wave luminosity; it also has a correction from the scalar field. And because of this, more energy is radiated away from the system than would have been radiated in general relativity.
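The phase-versus-amplitude sensitivity just mentioned can be illustrated with a toy overlap calculation. This is a hypothetical sketch, not a real LIGO analysis: we compare the normalized overlap between a "signal" and two "templates", one with a 20% amplitude error and one with a slow secular phase drift:

```python
import numpy as np

# Toy matched-filter illustration: normalized overlaps.
T, n, f0 = 1.0, 100_000, 50.0       # duration, samples, signal frequency (assumed)
t = np.linspace(0.0, T, n)
signal = np.cos(2 * np.pi * f0 * t)

def overlap(a, b):
    # Normalized inner product: insensitive to overall amplitude scaling.
    return np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b))

template_amp = 0.8 * np.cos(2 * np.pi * f0 * t)              # amplitude off by 20%
template_phase = np.cos(2 * np.pi * f0 * t + np.pi * t / T)  # phase drifts by pi

print(f"overlap, wrong amplitude: {overlap(signal, template_amp):.4f}")
print(f"overlap, drifting phase : {overlap(signal, template_phase):.4f}")
```

A pure amplitude error leaves the normalized overlap at essentially 1, while a phase drift of just half a cycle over the signal nearly destroys it, which is why an accurate phase model matters so much.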
So in general relativity, the rate of change of the binding energy due to gravitational waves is given by the square of the third time derivative of the quadrupole moment, and that term goes roughly as (v/c) to the 10. But if you have dipole emission, then the rate of energy removed by dipole emission depends on the square of the second time derivative of the dipole moment, and that goes as (v/c) to the 8. So if v/c is small, this dipole term can be much larger than the quadrupole one. Because of this, dipole radiation forces the inspiral to occur faster and the gravitational waves to chirp faster in frequency. You could go through the entire SPA calculation I showed you earlier, but it suffices to show you a back-of-the-envelope version. The Fourier phase of the gravitational wave you can think of, roughly speaking, as f-dot times time squared, and f-dot is dE/dt divided by dE/df, like I said earlier. Now, when you put in what dE/dt is from the expressions above, you get the term you would get in GR: the first one, which goes as (πMf) to the minus five-thirds, the GR prediction. And then you get a correction from this dipole term that goes as (πMf) to the minus two-thirds relative to the GR one, which, if you do the calculation, you realize is larger than the general relativistic term by a factor of order (c/v) squared. And of course it depends on some coupling constants of your theory, which I've encapsulated here in this β sub θ. If you make β sub θ zero, then this term is not there and you recover the GR prediction. So the question is: let's compare a model like this to the data and let the data decide how small β sub θ has to be. And that's essentially exactly what one does.
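The back-of-the-envelope scaling comparison above is simple enough to tabulate directly. The prefactors here are set to 1, since only the relative (v/c) scaling matters:

```python
# Dipole ~ (v/c)**8 versus quadrupole ~ (v/c)**10: the ratio is 1/(v/c)**2,
# so for slow (early-inspiral) binaries even a tiny dipole coupling can
# dominate the energy loss.  Prefactors are deliberately set to 1.
for v_over_c in (0.01, 0.1, 0.5):
    quad = v_over_c ** 10
    dip = v_over_c ** 8
    print(f"v/c = {v_over_c}: dipole/quadrupole ~ {dip / quad:.1f}")
```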
But you can also have modifications in the propagation of gravitational waves. How does this come about? Consider the case of a massive graviton. Special relativity says that if you have a propagating particle with mass, then E squared is not equal to p squared; rather, E squared equals p squared plus m squared. And if you write E = ħω and p = ħk, then you can rewrite that expression into something that looks like this: v_g squared, the velocity of the graviton squared, divided by the speed of light squared, equals one minus the mass of the graviton squared divided by the frequency of the signal squared. And you can of course rewrite it in terms of the Compton wavelength if you want. But the point here is that the gravitational waves emitted closer to the merger have a frequency larger than the waves emitted much earlier in the inspiral. Why is that? Because the waves chirp: the frequency increases with time. And as the frequency increases with time, the velocity of the propagating waves gets closer and closer to the speed of light. So early in the inspiral the gravitational waves may move slowly relative to the speed of light, but then they speed up and begin to catch up with it, which produces a bunching up of the wave train with time that in principle you could detect, or constrain, of course. So again, you can do a back-of-the-envelope calculation or the full SPA calculation; in both cases you get essentially the same thing. The Fourier phase goes as f times the travel time of the graviton; that is, f times D divided by the velocity of the graviton.
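Plugging numbers into the dispersion relation just quoted makes the chirp argument concrete. This assumes a graviton mass at the rough current bound, ~1e-22 eV/c²; since m_g is tiny compared with the graviton energy E = hf, the deviation 1 − v_g/c ≈ (m_g c²/E)²/2 is far below double precision, so we expand rather than subtract:

```python
# (v_g/c)**2 = 1 - (m_g c^2 / E)**2 with E = h*f.
# Assumed graviton mass: the rough observational bound ~1e-22 eV/c^2.
h_planck = 4.135667696e-15        # Planck constant, eV*s
m_g = 1e-22                       # assumed graviton mass, eV/c^2
for f in (10.0, 100.0, 1000.0):   # gravitational-wave frequencies, Hz
    E = h_planck * f              # graviton energy, eV
    delta = 0.5 * (m_g / E) ** 2  # leading-order 1 - v_g/c for m_g << E
    print(f"f = {f:6.0f} Hz: 1 - v_g/c ~ {delta:.2e}")
```

Higher-frequency waves, emitted closer to merger, travel closer to c, which is exactly the bunching up of the wave train described above.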
And if I put in the expression I had above for the velocity of the graviton and Taylor expand, I get the usual term, fD divided by c, but then I get a correction that goes as D divided by the Compton wavelength of the graviton squared, divided by the frequency. You see an important fact here: the modification due to the mass of the graviton in the dispersion relation accumulates with the distance traveled by the gravitational wave. That makes sense, right? The longer the wave has to travel, the more time it has to experience these modifications in its propagation. I'm sort of running out of time, but very briefly: you can do this analysis more generically, and at the end of the day you will get a modification that looks like so. You get the GR expression for the Fourier transform of the gravitational wave, times an amplitude correction and a phase correction. The amplitude correction depends on parameters α and a, and the phase correction on β and b. α and β are parameters that depend on the coupling constants of the theory, the things you're trying to constrain, whereas a and b are numbers that define the type of modification you're interested in. So b, for example, would be minus seven-thirds if you're interested in constraining a dipole modification, or b would be equal to minus one if you're concerned with a modification due to a massive graviton. I'm sort of running out of time, but before you can do tests like this, you want to ask: is my data consistent with GR? What you're seeing here is the data that LIGO detected, after some filtering and some whitening; that's in gray, together with the best-fit waveform in red. So you can take the difference between these two signals to get the residuals, and then you can ask: is this residual consistent with noise? If it is consistent with noise, then ta-da, you've detected a signal that at least is consistent with GR. And the answer is yes.
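The generic modification just described can be sketched as a simple multiplicative factor. This is a minimal illustration of the parameterized form, following the frequency-power convention used in the talk (b = −7/3 for a dipole modification, b = −1 for a massive graviton); the numerical values of M, f, and β below are arbitrary:

```python
import numpy as np

# Sketch of the parameterized waveform modification:
# h(f) = h_GR(f) * (1 + alpha * x**a) * exp(1j * beta * x**b),
# with x = pi*M*f in geometric units.  alpha, beta encode the theory's
# coupling constants; a, b pick the type of modification.
def ppe_factor(f, M, alpha, a, beta, b):
    x = np.pi * M * f
    return (1.0 + alpha * x**a) * np.exp(1j * beta * x**b)

M, f = 1.0, 0.01  # arbitrary illustrative values
print("GR limit (alpha = beta = 0):", ppe_factor(f, M, 0.0, 0.0, 0.0, 0.0))
print("small dipole phase shift   :", ppe_factor(f, M, 0.0, 0.0, 1e-3, -7.0 / 3.0))
```

Setting α = β = 0 recovers GR exactly, which is what "let the data decide how small β has to be" operationalizes: the data constrain β toward zero.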
For all observations done so far, the residual has been consistent with noise to above 90% or so. With that in mind, you can do this test and ask: how well can we constrain β, in this ppE-type framework, for different b's that enter at different post-Newtonian orders? Something that enters at minus one PN order would represent dipole radiation; something that enters at zero PN order would represent Lorentz violation in the gravitational sector; parity violation would enter at two PN order, and so on. So you can ask: how consistent are the gravitational waves that LIGO and Virgo detected with general relativity, and how much do they constrain this β parameter? And you get constraints of this type. What you see here is a bunch of different curves, a slightly busy diagram. The green crosses are the constraints placed by LIGO with their first gravitational wave observation using a Bayesian framework. The red dots are an analysis that we did using a Fisher framework. The blue dots are a projection that we made in 2011, using a Bayesian framework, of how well LIGO would be able to do these tests once a gravitational wave was detected, and fortunately it was spot on. Okay. Weaker gravity, if you want, would correspond to modifications at negative post-Newtonian order. You can see those are getting constrained well by gravitational waves, but not as well as what you can do today with, for example, the double binary pulsar, PSR J0737-3039. On the other hand, things that enter at higher post-Newtonian order are constrained essentially uniquely by gravitational waves; this is the extreme gravity regime I was describing earlier. And of course, you can do multi-wavelength observations as well, with GW170817.
So that was when LIGO and Virgo detected two neutron stars colliding, and then Fermi detected gamma rays, and then pretty much every telescope that pointed in that direction eventually detected radiation at different wavelengths. But the gamma rays are particularly important, because the gravitational wave detection gives you the distance and the arrival time of the signal, assuming some velocity for the graviton. But the short gamma-ray burst plus the galaxy identification also gives you a distance, and therefore the light travel time, D divided by the speed of light, plus in principle an intrinsic time delay between the emission of the gamma rays and the emission of the gravitational waves, because gamma rays are not emitted right at the moment the two neutron stars touch; it takes a little longer for the jets to form and eject the photons. So you can combine these two observations, the one Fermi made with the one LIGO made, to place a constraint on the speed of the graviton. And doing so, LIGO placed a constraint that is really, really good, essentially saying that the speed of gravity is equal to the speed of light to one part in 10 to the 15. And that's important, because different modified theories of gravity predict modifications to the speed of gravity, and these observations are essentially ruling out these alpha-T modifications, which then has implications for a bunch of different theories. I'm saying "ruling out", although that is perhaps a little too strong, but it places very stringent constraints on quartic and quintic Galileon theories, Fab Four theories, TeVeS theories, and so on. And it's looking good, because the tests are just going to continue to improve. We are in 2018 now, with Advanced LIGO, but Advanced LIGO is already implementing changes to go to the A+ configuration in the next couple of years, which will increase the range to 140% of what it is now.
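The order of magnitude of that bound follows from the numbers in the argument above. This is a rough sketch using the approximate published figures for GW170817/GRB 170817A (distance ~40 Mpc, gamma-ray delay ~1.7 s), attributing the whole delay to a speed difference; the real analysis also accounts for the unknown intrinsic emission delay:

```python
# Back-of-the-envelope speed-of-gravity bound: |v_gw - c|/c <~ c*dt/D.
c = 2.998e8           # speed of light, m/s
Mpc = 3.086e22        # one megaparsec, m
D = 40.0 * Mpc        # approximate distance to the host galaxy
dt = 1.7              # observed gamma-ray arrival delay, s
bound = c * dt / D
print(f"|v_gw - c| / c  <~  {bound:.1e}")
```

The result is a few parts in 10^16, consistent with the "one part in 10^15" quoted in the talk.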
There are also plans to put a low-temperature system on the mirrors by the 2020s; that's going to be called Voyager, or at least that's what people are calling it now. And that will increase the range to about 200% relative to Advanced LIGO. These are what I would call moderate improvements, moderate changes in the constraints, factors of 1.5 or 2. But then in the 2030s we have Cosmic Explorer, Einstein Telescope, and LISA. Cosmic Explorer and Einstein Telescope are very, very ambitious ground-based detectors that people are trying to build in the United States and in Europe, cryogenically cooled and, at least in the case of Einstein Telescope, underground, in order to isolate them from seismic noise. LISA, on the other hand, is a mission that has been approved by the European Space Agency, with a minor partnership with NASA and other nations, to put a gravitational wave detector in space, which would open up the millihertz frequency regime that is right now unavailable to us because of seismic noise. LIGO cannot really reach below, say, 10 hertz, and neither will Voyager, because of the seismic vibrations in the ground and other types of noise. So these new instruments in the 2030s will allow for new tests, but tests that will require a lot of work: they will require new, more accurate models, and more accurate data analysis techniques as well. So you can ask: how good will constraints get in the future? What I'm plotting here is the constraint on the mass of the graviton as a function of future instrument: Advanced LIGO, A+, Voyager, Cosmic Explorer, Einstein Telescope, and then three different versions of LISA that were being proposed. The one you want to look at is N2A5; well, the configuration people settled on is between N2A2 and N2A5.
So here you see the constraints you can place: the current bound is about 10 to the minus 22 electron volts per speed of light squared, and you can see how you obtain improvements of roughly five orders of magnitude with a single event as you improve your instrument, as you improve your detector. So that's great. That begins to approach the regime comparable to a graviton mass of Hubble scale, which might be interesting if one wishes to try to explain the late-time acceleration through a massive graviton mode. So, since I'm running out of time very dangerously, let me just conclude. Second-generation detectors, the ones online right now, are already placing really interesting constraints. They can't quite constrain parity violation or Lorentz violation just yet, but I predict, today at this webinar, that within the next five years they will be able to place the first constraints on parity violation or Lorentz violation in the gravitational sector, and that's going to be really interesting. Of course, people are already beginning to think about third-generation detectors and how we can place constraints in the extreme gravity regime that are real precision tests of general relativity. This will require very accurate models that we don't quite have yet, but fortunately we have 12 to 15 years to figure them out, so people are working hard at that. And this is what I mean by working hard: there is a lot of work in theory development, in perturbative methods, and in numerical methods, to construct accurate models in modified theories of gravity, so that we can then use them with these very sensitive detectors. These detectors are going to be, in many ways, much more sensitive than the second-generation ones, and the signals they will detect are going to be much, much louder.
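The "Hubble-scale graviton mass" benchmark mentioned above is easy to estimate: it is the mass whose Compton wavelength is the Hubble length, m ~ ħH₀/c². This quick sketch assumes H₀ ≈ 70 km/s/Mpc:

```python
# Graviton mass whose Compton wavelength is the Hubble scale.
hbar = 6.582e-16                 # reduced Planck constant, eV*s
H0 = 70e3 / 3.086e22             # Hubble constant in 1/s (assumed 70 km/s/Mpc)
m_hubble = hbar * H0             # in eV/c^2
print(f"Hubble-scale graviton mass ~ {m_hubble:.1e} eV/c^2")
print(f"current bound ~ 1e-22 eV/c^2, ratio ~ {1e-22 / m_hubble:.0e}")
```

The estimate comes out around 10^-33 eV/c², so even the projected five-orders-of-magnitude improvement only begins to approach, rather than reach, that regime.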
So in order to really exploit all of the data these detectors will acquire, we really need models accurate enough that our tests are not limited by systematics, by errors in the modeling itself, but rather by the statistical or instrumental uncertainties in the data. So I had to leave a lot of things out here. I didn't really talk about constraints on Lorentz violation, or tests of black hole hair through quasinormal modes, or EMRI tests, or constraints on additional polarizations. Those will have to be left for future discussions. But I think the bottom line I want to communicate is that there is still a lot of work to be done, and a lot of work that has been done already. This is a very active area of research. Data is coming in; it has already come in, and it will continue to come in at an ever-increasing rate of detections as the instruments become more sensitive, going all the way to the 2030s with the advanced 3G detectors. And it might be very interesting, because if there is something there, we will see it. And if there's nothing there, then at the very least we'll be able to constrain and verify general relativity to amazing new levels. And I think this is summarized very nicely in the words of a famous Austrian poet: if it bleeds, we can kill it. All right, thank you. Okay, thank you very much for this super nice webinar. I think we have minus 10 minutes for questions, and I'm going to start with two that appeared on our YouTube channel. The first, apologies for the pronunciation, is from Shantanu Desai. The question is: how does the loss of angular momentum depend on the binary properties? Hi, Shantanu; I knew Shantanu from grad school. L-dot. So if you have a quasi-circular binary, then the rate of change of the binding energy, E-dot, is related to L-dot through the orbital frequency. So there is a one-to-one mapping, and these are not independent quantities.
But if you have eccentric binaries, then gravitational waves carry away E-dot and L-dot in slightly different ways. And what L-dot does, effectively, is circularize the orbit. If you want, you can think of E-dot as making the orbit tighter and more bound, so it decreases the orbital separation, whereas the loss of orbital angular momentum forces the binary to circularize. And the circularization is very, very effective, which is why for the longest time we thought we could just model gravitational waves as produced by quasi-circular binaries. But of course, there are now astrophysical studies suggesting there may be a population of eccentric binaries that are also producing gravitational waves in the LIGO band, although these waves may be weaker or more difficult to model. But certainly by the time we get to 3G detectors and LISA, we are going to need eccentric binaries, and we are going to need to model the rate of change of the orbital angular momentum as accurately as we do the energy. Okay, good. Shantanu says thank you. There is another question, but this is from a nickname, Josh Bartz KZTS, something. The question is: are post-Newtonian equations basically series calculated through long calculations that refine the original equations of general relativity? Sorry, can you repeat that question? I think you can read the question in the chat. So it's: are post-Newtonian equations basically series calculated through long calculations that refine the original equations of general relativity? Yeah, if I understand correctly, the question is whether the post-Newtonian equations are just refinements of quantities. Yeah, so a post-Newtonian calculation is very much like a loop calculation in quantum field theory. There is a perturbative expansion in powers of v over c, or in m over r_12; you can think of it as powers of v over c.
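The circularization described above can be sketched with the standard Peters (1964) orbit-averaged equations for da/dt and de/dt. This is a crude Euler integration in geometric units (G = c = 1) with arbitrary equal masses and initial conditions, just to show the eccentricity decaying as the orbit shrinks:

```python
# Peters (1964) orbit-averaged evolution of semi-major axis a and
# eccentricity e under gravitational-wave emission (G = c = 1).
def peters_rhs(a, e, m1=1.0, m2=1.0):
    m = m1 + m2
    da = -(64.0 / 5.0) * m1 * m2 * m / (a**3 * (1 - e**2) ** 3.5) \
         * (1 + (73.0 / 24.0) * e**2 + (37.0 / 96.0) * e**4)
    de = -(304.0 / 15.0) * e * m1 * m2 * m / (a**4 * (1 - e**2) ** 2.5) \
         * (1 + (121.0 / 304.0) * e**2)
    return da, de

# Crude Euler integration from arbitrary initial conditions.
a, e, dt, steps = 20.0, 0.6, 0.01, 0
while a > 5.0 and steps < 200_000:
    da, de = peters_rhs(a, e)
    a += da * dt
    e += de * dt
    steps += 1
print(f"a: 20 -> {a:.2f},  e: 0.6 -> {e:.3f}")
```

By the time the separation has shrunk by a factor of four, the eccentricity has dropped substantially, which is the very effective circularization being described.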
So to zeroth order in the post-Newtonian expansion, you just have Newtonian gravity: orbits that obey the Newtonian equations of motion. But those orbiting bodies are accelerating, because they're moving in a circle, so they generate gravitational waves. When you solve the Einstein equations perturbatively about Minkowski, you find that the metric perturbation has to satisfy a sourced wave equation, where the source is the stress-energy tensor of the binary system. And this binary system is accelerating, so it produces gravitational waves, this h_ij. And then, great: now you've calculated the gravitational waves emitted by a Newtonian binary. That's your 0PN calculation. Now you have to back-react that gravitational wave onto the trajectory. So you have to calculate how h_ij carries energy and angular momentum away from the binary, such that the acceleration of the binary gets modified, because it acquires a velocity-dependent term. So you do that calculation, and then, at sufficiently high post-Newtonian order, you begin to see how the binary inspirals due to the emission of gravitational waves. But you also have to be more careful, because the Newtonian description of the binary system is no longer sufficient: you have corrections to the binding energy due to self-interactions and due to body 1 interacting with body 2. Those also have to be taken into consideration when you construct the Hamiltonian, before you solve the Hamilton-Jacobi equations. I don't know if I answered the question. Okay, we'll see if that person replies. So, do we have any questions from our Latin American coordinators? Yes, I have one, but it's kind of a question from YouTube, and it's very naive. But anyway, first of all, thank you, Nicolas, for the webinar; it's super interesting, in fact. But I have a doubt: with current detectors, which are the best sources for measuring these GR deviations?
That is, are more massive black holes better, or is there a preferred mass range, a kind of vanilla scenario, in which you can best see the deviations? Yeah, that's a great question. It really depends strongly on what modification you are trying to observe. So, for example, suppose you are a late-time cosmology person and you want to constrain massive gravity; are you familiar with the massive gravity theories of de Rham and company? Those theories predict that the graviton should have a mass. And the modifications to the gravitational waves, in that particular case, will scale with the distance to the source, for example. So you want to observe something as far away from Earth as possible, and that typically means, if you're going to detect it, something very, very massive, because the more massive the binary, the easier it is to detect when it's really, really far away. This is in part why LISA allows you to place really good constraints on the mass of the graviton: it can see things really, really far away, at redshifts of 5 to 8, or even 10 or 20, and it can detect waves produced by binaries of supermassive black holes, 10 to the 5 or 10 to the 6 solar masses. On the other hand, if what you care about are modifications to the Einstein-Hilbert action due to, say, quadratic curvature terms in the action, like Ricci squared or Riemann squared, things like Starobinsky gravity or Chern-Simons or Einstein-dilaton-Gauss-Bonnet; typically, these are theories that people concoct when working in an effective field theory approach to quantum gravity. So if you're trying to constrain those, you want systems where the curvature is as large as possible. And I'm sure you remember that curvature scales as 1 over the mass squared of the system.
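The 1/M² curvature scaling just invoked is worth putting numbers on. In geometric units a solar mass corresponds to roughly 1.48 km, so comparing a LIGO-band stellar-mass black hole with a LISA-band supermassive one:

```python
# Horizon curvature scale ~ 1/M**2 in geometric units (M in meters).
M_sun_m = 1.48e3                       # G*M_sun/c**2, meters
for m_solar in (10.0, 1e6):
    M = m_solar * M_sun_m
    curvature = 1.0 / M**2             # characteristic curvature, 1/m**2
    print(f"M = {m_solar:>9.0f} M_sun: curvature ~ {curvature:.1e} m^-2")
```

A 10-solar-mass black hole sits at curvatures ten orders of magnitude larger than a 10^6-solar-mass one, which is why the two detector classes probe such different regimes.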
So for that type of modification, you want the smallest black hole you can find, which is a stellar-mass black hole, right? For that, you want LIGO, and you want LIGO as sensitive as possible, so in that case you're really talking about Einstein Telescope or Cosmic Explorer giving you the best constraints. Okay, thank you. Sure. Hello? I can't hear Alejandro. Sorry, I was talking while muted again. So Josh, the person who asked the post-Newtonian questions, says thank you; that was basically what he meant by his question, just an undergrad trying to understand. Okay. Roberto, do we have more time for questions? There are at least a couple of other questions on the YouTube channel; should we take them or wrap up? As you prefer; I mean, the questions are very interesting, no? Okay, so let me read another question for you, Nico, from Jacim Afnan: what parameters are used to calibrate LIGO? So, first of all, I'm not a member of the LIGO Scientific Collaboration; I just happen to know a lot about LIGO because I talk to LIGO people. I used to be a member, right, back when I was a young grad student and didn't know better, and that's in part why I know a little bit about this. So what they do, when they want to calibrate, say, the amplitude and the phase, to figure out how well you can extract the amplitude and the phase, is effectively jiggle the mirrors through hardware injections. So they know what they're putting in there, and then they try to measure how the output of the photodetectors behaves, what the output read-out looks like, and based on that, they try to reconstruct the thing they injected in the first place. And that's how the hardware calibration is done. It's not that you're calibrating parameters per se.
I mean, if you're asking about the particular parameters of the instrument, of the hardware, that get calibrated, then I don't know the answer, because I'm not an experimentalist. But if you're asking about the parameters that get fitted to the signal, the model parameters, that I can answer, and that's not a calibration issue; that's a parameter estimation issue. So you have your data, which contains a signal plus noise, and you're trying to extract the signal by filtering it with a model. And the number of parameters of the model depends on the system you want to describe. Typically, you have something like 15 parameters or so, depending on the system you're considering: the symmetric mass ratio and the chirp mass, or, essentially, the two masses of the objects; the distance to the source, the luminosity distance; the inclination and a polarization angle; two angles that localize the source in the sky; three numbers for the components of the spin angular momentum of the first body, plus three more for the components of the spin angular momentum of the other body; plus, in principle, the eccentricity. So that's 14 right there, and I may be missing one or two. And you have to fit all of these parameters simultaneously; it's a global fit. That's why the data analysis problem is quite computationally expensive: you have to search the entire parameter space. It's not that you can just search one parameter, fixing everything else, and then move on to the next parameter. And obviously, you cannot simply discretize the parameter space on a grid either, because if the parameter space is 14-dimensional and you try to put 10 points in each dimension, it takes a hell of a long time to do the comparisons. So, Jacim, you can tell us if that answered your question. Okay, and then the last question is again from Shantanu.
And he's asking: are there any predictions of gravitational wave signals from Einstein-Cartan gravity, or Einstein-Cartan-Sciama-Kibble gravity? He says he doesn't think he has seen even a single paper; or is it the same as in GR? So the question is whether there are any predictions in these particular theories, Einstein-Cartan and Einstein-Cartan-Sciama-Kibble. Yeah, I don't think so; at least I haven't seen any papers either. So typically, when you're dealing with Einstein-Cartan theory, you are dealing with theories with torsion. There are theories with torsion for which we've calculated gravitational wave modifications, but not necessarily in Einstein-Cartan-Sciama-Kibble theory. So, Shantanu, you should go ahead and calculate it; it should be fine. Okay. Okay, so a question over here. Yes, Joel. All right. So I heard elsewhere that, in principle, gravitational waves can put bounds on extra dimensions, on models where only gravity propagates in the extra dimensions. But I don't know much about that, so I was wondering if you could comment on this. Yes, of course. So the scenario is the following. Randall and Sundrum proposed some time ago that perhaps we live on a brane, a four-dimensional brane that is part of a higher-dimensional space, say a five-dimensional space. And so, if you move into the extra dimension, then you would be moving into the bulk. So what you are supposed to imagine here is a two-dimensional sheet that represents our four-dimensional brane, where we live and exist; and if you move away from this two-dimensional sheet, then you're in the bulk. In this model, gravitons can interact with the bulk, and they're really the only things that interact with the bulk. So Tanaka, well, people were trying to find solutions for black holes in these theories, and what they found is that they couldn't find black holes that were stationary; it seemed like black holes had to be time-dependent.
And the reasoning they came up with to argue that this had to be the case, apart from not being able to find the solutions, was that maybe gravitons were escaping into the bulk. So the black holes would live on our four-dimensional brane, but they would extend into the bulk, and gravitons could escape into the bulk through the black hole. And if that were the case, then the black holes could be losing mass, as viewed from our brane. Now, how rapidly they would be losing mass, how large that m-dot would be, would depend on the size of this extra dimension. And so what people did is calculate: if I have a gravitational wave emitted by a binary system of two black holes, and the black holes are losing mass, getting lighter and lighter during the inspiral, then how does that change the rate at which they inspiral? How does it change the acceleration of the binary system, and how does it change the evolution of the gravitational waves? And what they found was a modification that could indeed be constrained. So you can indeed constrain m-dot with gravitational wave observations. And if you believe in this mapping between m-dot and the size of the extra dimension, then you can place a constraint on the size of the extra dimension. Unfortunately, those constraints are many, many orders of magnitude weaker than the constraints you can place on the size of extra dimensions from experiments on Earth, which are typically millimeter-sized. However, all of that has been put a little bit into question, because recently some numerical relativists in Cambridge actually did find stationary solutions in Randall-Sundrum theories. So the need for gravitational leakage is not really there anymore, because the solutions do exist.
I mean, you can still put an agnostic constraint on the rate of change of the mass of the black holes with gravitational waves, but whether you can connect that to an extra dimension is not so well justified anymore. I see. Thank you very much. Okay. So thank you very much, Nico, for this very nice webinar. If people want to keep in touch with Nico, you can find his information on Google, and we will also post the slides of this talk on our web page, so please stay tuned. This is our second webinar of this new season, and we have lots of new things to show. In particular, we will have more talks about gravity, which it seems people are very interested in. So Marcelo Ponce, who is now at the University of Toronto, is going to talk about high-performance computing for numerical relativity and physics in general. And Leo Stein will talk about, I think, numerical relativity for tests of GR and beyond general relativity. So thank you, Nicolas, for joining us. We hope to see you again in another season soon; we would love to have you back. And thank you, everybody, for joining us, and just stay tuned for more information. Okay, well, it was my pleasure. See you, guys.