Hello, everyone, and welcome to our second web seminar of the series of the Latin American Webinars of Physics. My name is Jorge Diaz. I'm from the Karlsruhe Institute of Technology in Germany, and I will be your host today. Our speaker is Riccardo Sturani from the Instituto de Física Teórica at UNESP in Brazil, and he will talk about LIGO and the exciting discovery of gravitational waves. A little bit more about our speaker: Riccardo received his PhD from the Scuola Normale Superiore di Pisa in Italy, and after postdoctoral positions at Helsinki, Geneva, and Urbino, he moved to UNESP in Brazil, where he is currently a researcher working on the LIGO experiment. The title of Riccardo's talk today is "Observation of Gravitational Waves from a Binary Black Hole Merger." We're really glad to have him today as our speaker. And I remind all of you, all our viewers, that you can be part of the discussion by writing questions on the Q&A system of Google+. You can also reach us through Twitter with the hashtag LAWOP. So don't forget that you can ask all your questions, and Riccardo will answer at the end. So I will now hand it over to Riccardo. Hello. Thank you. So thank you for giving me this opportunity to talk about this exciting news of the gravitational wave detection, the first historic gravitational wave detection from a binary black hole merger. So I will start with an introduction about gravitational waves, what we knew about gravitational waves before the detection, and what this detection is teaching us for the future. So let's get started with a very short general relativity primer. So I'm on slide two. I guess everything is broadcasting fine. So what is the framework? The framework is general relativity, of course. And basically, this slide is what you need to know about general relativity to grasp an understanding of gravitational waves.
So we consider the metric, and we split it into something which is Minkowski plus a perturbation. This perturbation is usually small. But what is more important than its smallness is its high-frequency behavior. So for instance, on Earth, we have the gravitational field of the Earth, which in dimensionless quantities is 10 to the minus 8. And the perturbations we aim at detecting are much smaller. But they have a typical behavior in frequency, and that's what allows us to disentangle them from the background. So the usual approach is to linearize the Einstein equations, and then to get a kind of d'Alembert equation for the wave. So the d'Alembert equation seems to tell us that we have 10 wave equations for the 10 components of the metric. But we don't have 10 physical degrees of freedom, because we know that four are gauge degrees of freedom. Two are really physical and radiative degrees of freedom, and actually are the ones that radiate gravitational waves. And four are still physical, but are not radiative. I mean, this is much less exotic than it may look, because it's exactly the same thing that happens in electromagnetism. In electromagnetism one has four degrees of freedom: one is gauge, one is constrained and not radiative, and then there are the two photon helicities, which represent the radiation. So it was not immediate. People didn't realize this immediately after Einstein wrote down the famous general relativity paper. But it was Eddington in 1922 that showed, and quite ironically showed, that eight of these degrees of freedom propagate "with the speed of thought," in the sense that they're not physical radiative degrees of freedom. And only two are really radiative and propagate energy from the source, even though it took some 30 more years for the whole community to accept this fact. So here I hope you can see the cursor. So I'm following in the formulas of the slide what I'm saying.
Basically, taking out the gauge degrees of freedom, one can decompose the 10 components of the metric into one scalar, which is basically the general relativistic analog of the Newtonian potential, two degrees of freedom in a vector, one extra degree of freedom in another scalar, and two tensor degrees of freedom. One plus two plus one are the four constrained degrees of freedom, and these two are the real radiative gravitational wave degrees of freedom. And after we gauge fix, the radiative degrees of freedom actually fulfill a d'Alembert equation. The other, constrained degrees of freedom fulfill a Laplacian equation, so they are tied to the source. Here, for simplicity, I put the source equal to zero. So the constrained degrees of freedom are zero in the absence of the source. The wave is the only one that exists in vacuum, in the absence of the source. So after this short introduction, I move to the description of what gravitational waves do. So for instance, here the gravitational wave is propagating in the z direction. It's transverse, and it has two polarizations. For the plus polarization, basically the black line here corresponds to a ring of test masses: it squeezes and stretches these test masses with this pattern, and the cross polarization squeezes and stretches in this other pattern, just at 45 degrees. So this is the effect of gravitational waves as they pass by, as they propagate energy from the source to the eventual observer. OK, and these are the pictures of the detectors. So this is LIGO Hanford in the state of Washington, actually close to the city of Hanford in the northwest of the United States. This is LIGO Livingston in the state of Louisiana near the Gulf of Mexico, and these are the two detectors that actually made the detection. And this is Virgo, which is near Pisa, and which has an agreement to share data and data analysis with the two LIGOs, but it wasn't on at the time of the detection. So Virgo didn't take part.
The detector itself didn't take part in the detection, even though the Virgo collaboration took part in the detection as part of the data analysis team. So all three of these detectors are now being upgraded. They've been taking data in the past, also Virgo, even though it wasn't on at the time of the detection. It has been taking data in the past. And the new science runs are due to start, well, it's not clear, July, August, but surely before the end of the year. The last science run, the one which eventually came up with the detection, ended in January. The analysis will be completed by the end of this month. So by the end of this month, you will see the results of the full four months of analysis. The detection that was announced happened in the first month of the analysis. And then a few years from now, after 2020, the Japanese detector KAGRA and the Indian detector LIGO-India will also join the collaboration. So there will really be not only two detectors as for the first detection, but really a network of five detectors. But soon we'll be having three detectors, say by the end of the year, and then two more in four years from now. How does the detector work? So this is a simple scheme, and then I will show you an animation. Basically, there is a laser. The light goes through a beam splitter and then propagates into two orthogonal arms, and then it recombines. The idea is that once it recombines, it can have a phase shift just because the lengths it travels along the two orthogonal paths can be slightly different. And phase shifts down to 10 to the minus 8 can be measured. And the phase shift corresponds to a difference in optical path given by this formula, where delta L is the difference in optical path that the laser has gone through in the two different arms, in units of the wavelength of the laser, multiplied by N. This N is basically the number of effective bounces that the photons take, because this is not just a simple Michelson interferometer.
But it has, OK, they are not shown here, but here there are Fabry-Perot cavities. Fabry-Perot cavities basically allow the photons to go back and forth roughly a few hundred times before actually exiting and recombining with the photons coming from the other arm. And this means that a phase shift of 10 to the minus 8 can be translated into a difference of optical path of 10 to the minus 15 meters times 1 over N. So the more bounces we get, the lower we can go in measuring differences in optical path. And imagine, this is of the order of the size of the nucleus of an atom. So this may sound crazy, but bear in mind that the beam size is a few square centimeters. So what one is actually measuring is not the position of individual nuclei of atoms, but the average position of a macroscopic portion of the mirror. And now I want to show you a little animation. Let's see if I can show you. OK, the animation shows how, basically, when the gravitational wave passes by, it has this stretching and squeezing of the two orthogonal directions. So the optical path changes. If the optical path changes, then the intensity of the recombined light changes. And basically from the intensity of the recombined light, we can directly read the intensity of the gravitational wave. And we actually read it in real time. OK, so that's enough for this animation. Let's go back to the presentation. OK, but this is not the end of the story. I told you that in a few years we should have a network of five detectors. Actually, 20 years from now, we should also have a space detector. So LISA Pathfinder has been sent into space. The idea of LISA is to have an interferometer, very much of the kind I showed you in the previous slide, but in space. The advantage of being in space, well, of course there are many disadvantages to being in space, because you cannot send your favorite PhD student to go and fix the detector. But the main advantage is that you don't have environmental noise.
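As a rough sanity check of the numbers just quoted, one can invert the phase-shift relation, delta_phi = 2 pi N delta_L / lambda. The specific laser wavelength and bounce count below are typical figures I am assuming, not values given on the slide:

```python
import math

# Smallest measurable arm-length difference from the phase-shift formula:
# delta_phi = 2*pi*N*delta_L/lambda  =>  delta_L = delta_phi*lambda/(2*pi*N)
# Assumed, typical numbers (not from the slide):
wavelength = 1064e-9   # m, Nd:YAG laser wavelength used by LIGO
delta_phi = 1e-8       # smallest measurable phase shift (radians)
n_bounces = 300        # effective Fabry-Perot bounces, "a few hundred"

delta_L = delta_phi * wavelength / (2 * math.pi * n_bounces)
print(f"measurable path difference: {delta_L:.1e} m")
```

The result is of order 10^-18 m, consistent with the speaker's "10 to the minus 15 meters times 1 over N": a fraction of a nuclear radius, averaged over a macroscopic beam spot.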
So in space, the mirrors off which the photons bounce will not shake because of earthquakes, because of seismic activity, because of environmental noise. The idea is that if one could build an interferometer and send it into space, it would be sensitive to much lower frequencies. So one of the three stations that would eventually build up an interferometer in space has been sent to space, and it worked. But of course, this is only a technical mission. It's not a scientific mission, because you cannot make an interferometer with one station. But this showed that it is, in principle, possible. And so if the mission is funded, 20 years from now we could have three stations forming an interferometer arranged in a triangular shape. And this will allow us to extend the detection to lower frequencies. But let me tell you now what the sources are and what typical signals we are actually aiming at. So with LIGO, we really focus on astrophysical sources: neutron stars and stellar-mass black holes, of a few solar masses, and not supermassive black holes. Supermassive black holes are known to exist, and they should emit gravitational waves, but they are not the target of LIGO. They could be the target of LISA. Or, for that matter, cosmological production mechanisms: I will not talk about them. We are actively searching for them in LIGO, but they are not very promising to show up in LIGO. So this will not be the subject of this talk. I will concentrate on stellar-mass black holes as sources, which eventually turned out to be the first source of the detected gravitational waves. OK, just to give you a ballpark of the numbers we are talking about. So a gravitational wave from a distribution of masses is sourced by the quadrupole moment of the distribution of masses. So you have to integrate over the density weighted by two powers of the position of each mass element.
And a typical quantity that is relevant for a binary system, which as such has a quadrupole, is the mutual velocity, the relative velocity. And the relative velocity is related to the total mass and to the orbital radius by the standard Newtonian formula. So the gravitational perturbation produced by this system is of the order of the second derivative of the quadrupole. Just doing a little bit of algebra, you get that this is proportional, basically, to the Newtonian potential of the source times the relative velocity squared. But what is important is that this is modulated in time. There is a typical oscillatory behavior which is generated by the circular motion of the binary system. And this is what allows us to have the detection. Because imagine, the Newtonian potential, which is G mu over d (mu is the reduced mass here), of a system outside our galaxy would be very, very small. And the amplitude of the gravitational wave is even smaller than the Newtonian potential, by a factor v squared. Here I'm using units with the velocity of light equal to 1. But the fact that there is this high-frequency modulation will allow us to disentangle it from other sources of gravitational perturbation. And this is to give you just the ballpark of the numbers. So for masses which are typical solar masses and radii which are 10, 15 kilometers, you get a frequency for the gravitational wave of around a kilohertz. OK, don't be mistaken: from this formula, it seems that as you raise the mass, you get to higher frequency. But actually, it's the other way around. Why? Because you have a higher power of r. Since the radius of a black hole is linear in its mass, if you increase the mass, you also have to increase the radius. Because when the two black holes get so close that they touch each other, then the signal ends. So actually, when you increase the masses, you also have to increase the orbital separation. And so in the end, you get a lower frequency.
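These ballparks can be made concrete with a back-of-the-envelope estimate: h of order (G mu / (c^2 d)) times (v/c)^2, and the orbital frequency from Kepler's law. The specific systems below (two ~30 solar mass black holes at ~400 Mpc, and a neutron-star-like binary) are illustrative choices of mine, not numbers read off the slide:

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
Msun = 1.989e30    # kg
Mpc = 3.086e22     # m

def strain(mu_kg, d_m, v_over_c):
    """Ballpark amplitude: Newtonian potential of the source times v^2."""
    return G * mu_kg / (c**2 * d_m) * v_over_c**2

def orbital_freq(M_kg, r_m):
    """Kepler: f = (1/2pi) * sqrt(G*M / r^3)."""
    return math.sqrt(G * M_kg / r_m**3) / (2 * math.pi)

# GW150914-like system: reduced mass ~15 Msun, ~400 Mpc, v ~ 0.5c near merger
h = strain(15 * Msun, 400 * Mpc, 0.5)

# Neutron-star-like system: ~3 Msun total at ~30 km separation.
# The quadrupole radiation comes out at twice the orbital frequency.
f_ns = 2 * orbital_freq(3 * Msun, 30e3)
```

The strain comes out of order 10^-21 and the neutron-star frequency around a kilohertz, matching the numbers in the talk.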
So higher masses mean the signal extends down to lower frequencies. And here is also the typical velocity. So at the frequencies to which LIGO is sensitive, and I will show you this later in more detail, so from 100 hertz to one kilohertz, you really see solar-mass objects at relativistic velocities, which is something that has never been achieved by any other experiment, any other observation before. And finally, to conclude this general relativity background on gravitational waves, this is the Einstein formula for the emission. It's just the integral of the square of the derivative of the plus polarization plus the square of the derivative of the cross polarization. And if you sum the two and you substitute the virial relation, then you get the flux, which goes as v to the power 10. This nu is the dimensionless symmetric combination of the two masses, and then you have the reduced mass. So before getting into the detection by LIGO, let me spend a couple of minutes to tell you what we knew about gravitational waves before LIGO. So we knew already that gravitational waves exist and are emitted by binary systems. How? Because we know that there is a double neutron star system that emits gravitational waves. How do we know that? So we observe that a neutron star is in a binary because neutron stars which are pulsars emit a radio beam, like a lighthouse. This radio beam hits us once every cycle. And the modulation of this light beam tells us that the pulsar which is emitting it is in a binary. So most of the pulsars we observe are not in binary systems. We observe a thousand pulsars which are not in binary systems. But a bunch of them, some ten of them, are in binary systems. And we can observe that they are in binary systems just because the time of arrival of the lighthouse beam is modulated, as it should be for a binary in periodic orbital motion.
So there are a few parameters that we can observe through this modulation of the time of arrival of the light beam: basically the projection of the semi-major axis of the orbit onto the plane perpendicular to the observer, the eccentricity of the orbit, the period of the orbit, the periastron shift, the Einstein delay. And most importantly, we observe the derivative of the period of the orbit. So we observe that this system is not conservative. We observe that this system is changing its orbit in time. So from all of these parameters, and using physics at first post-Newtonian order, we can infer the mass of the primary and the mass of the companion, the masses of the two constituents of the binary system. So we really have to use GR; Newtonian physics is not enough. So from all of these observations, we can measure the two individual masses. From the measurement of the individual masses, we can plug into the Einstein formula and get the prediction for the derivative of the period. So I will show you in the next slide a comparison between the prediction of the time derivative of the period and the observation of the time derivative of the period. And the comparison is done by plotting the following quantity, basically the accumulated phase. If the system were exactly conservative, the accumulated phase would be linear in time. But since the period is varying, we can make a Taylor expansion, and we will see a quadratic behavior on top of the linear behavior in the expansion of the phase as a function of time. Let me stress once more: all of this relies on the first post-Newtonian order, so the first correction to Newtonian physics predicted by general relativity in the conservative sector. And we rely on the leading-order flux formula in the dissipative sector. So you can recognize this formula for the energy.
The energy is equal to minus one half times the symmetric mass ratio nu times the total mass times v squared; this is the Newtonian energy of a binary system. The kinetic energy is minus one half the potential energy, so for a circular orbit the total energy is minus the kinetic energy. But this formula receives corrections in general relativity at all orders in v squared. And we also have to use the flux. Using the first post-Newtonian order correction given by general relativity and the leading prediction for the flux, we can predict the derivative of the period. We compare it to the derivative of the period which is observed. And this is the beautiful result that showed, already many years ago, that binary systems do emit gravitational waves as predicted by general relativity. And this is the agreement between the prediction and the observation. And all of these binary systems have a typical velocity of 10 to the minus 3 in units of the speed of light, and they are a few hundred million years from coalescence. So we are talking about stellar-mass objects that have periods of hours and velocities of 10 to the minus 3, which is very large, but still far from the real strong-gravity regime. So this was what we knew about gravitational waves before the LIGO detection. What did we know about black holes? So let me go a bit quickly, because I see I spent a lot of time on the preliminary material. We have observed black holes in our galaxy of roughly up to 20 solar masses. We know that at the center of our galaxy we have a black hole of a few million solar masses. And we know that other galaxies, those having active galactic nuclei, can host supermassive black holes of up to 1 billion solar masses. And finally, let's get to the LIGO observation. So what I'm showing right now is the output of LIGO, the light intensity output of LIGO at the time of the detection. So 0 has been chosen as the time of the detection.
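Going back to the binary-pulsar test for a moment: the period-decay prediction just described can be reproduced numerically with the leading-order (Peters-Mathews) formula. The Hulse-Taylor parameters below are the approximate published values, quoted here for illustration:

```python
import math

G = 6.674e-11    # m^3 kg^-1 s^-2
c = 2.998e8      # m/s
Msun = 1.989e30  # kg

def period_decay(P, e, m1, m2):
    """Leading-order GR prediction for dP/dt of an eccentric binary.

    P in seconds, masses in kg, e dimensionless; returns the dimensionless dP/dt.
    """
    M = m1 + m2
    enhancement = (1 + (73/24)*e**2 + (37/96)*e**4) / (1 - e**2)**3.5
    return (-192 * math.pi / 5) * (2 * math.pi * G / P)**(5/3) \
        * m1 * m2 / M**(1/3) / c**5 * enhancement

# PSR B1913+16, the Hulse-Taylor binary pulsar (approximate published values)
Pdot = period_decay(P=27906.98, e=0.6171, m1=1.4414*Msun, m2=1.3867*Msun)
print(f"predicted dP/dt = {Pdot:.3e}")  # the observed value is about -2.40e-12
```

The leading-order prediction lands within a couple of percent of the observed decay, which is the agreement shown on the slide.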
Now, this raw output is not what has been shown in the press release, because it looks very ugly. The output is very, very noisy. And as you can see, there is noise with all frequency components. And just to help you visualize, the little blue line, this is the level of the signal. This is the level of the signal in the output of the detector. So it looks ridiculously small. So I will try now to explain to you how we actually dig this out. Well, this is a zoom. So again, on this scale, the blue is the gravitational wave signal. And red and green are the outputs of the two detectors. It's completely swamped by the noise. But as you can see, the noise contains contributions at all frequencies. So you see this oscillation on the time scale of a few tens of seconds. But at every instant, there are very wide oscillations at much higher frequencies. So what we can do is take the Fourier transform and look at how the noise looks. And as you can see, there are frequencies at which the noise is under control, and very narrow frequencies at which the noise spikes up. These are basically related to the fact that the mirrors are suspended and the suspensions have their own resonant frequencies. So whatever noise excites the resonant modes will show up very strongly, but very sharply. So it is relatively easy to chop these frequencies out and get rid of them. And as you can see, at low frequencies it's hopeless. Below 20 hertz, basically, there's too much noise. It's not even worth looking at the data below this, just because the mirrors start to oscillate in response to environmental noise. Whereas at high frequency, the limiting noise is the laser shot noise. So let's do the following. Let's go to frequency space, divide by the average noise, and then go back. By the way, this is the noise level that the two LIGOs had at the time of the detection. And this is the noise level that LIGO had five years ago, at the time of the last data taking. And this brown line is the level of the signal that has actually been detected. As you can see, the signal would have been barely detectable; basically, it would have been taken for some spurious noise with the sensitivity of five years ago, whereas it's really well above the noise level with the present sensitivity. Just to give you a comparison, this light blue line is the sensitivity of Virgo five years ago.
So just to show you one last slide about the signal: this is the signal represented in a mixed time-frequency space. Two slides ago, I showed you the signal in the time domain, then the spectrum, which is basically the power of the signal in frequency space. And this is the power of the signal at each time, basically making a Fourier transform at each time bin. As you can see, if you just take the signal, you only see the noise lines. But then if you normalize by the average noise, this is what you can see. And you can see that among the noise, there is a concentration of high power at some time. And then the concentration rises in frequency, gets to a maximum frequency, and then it ends. This is exactly what we expect from a gravitational wave from a binary. As the two bodies approach, the radius of the orbit is shrinking, the frequency is going up, and the intensity of the signal is going up. So after subtraction of the noise, or better said, whitening, we get this. But we can do better. OK, this is the output of the detector back in the time domain after the whitening. As you can see, I mean, you can see with the naked eye that there is a signal. The red and green lines are the whitened outputs of the detectors. And the black line is the best fit of our model, again whitened. I mean, the general relativity model doesn't have these bumps and kinks. These bumps and kinks just come from the whitening.
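The whitening step just described, divide by the average noise in frequency space and transform back, can be sketched on synthetic data. The 60 Hz "instrumental line" and the smoothing window below are toy choices of mine, standing in for a suspension resonance:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 4096                       # sample rate, Hz
t = np.arange(0, 8, 1/fs)

# Toy detector output: white background noise plus a loud narrow "line"
data = rng.normal(size=t.size) + 20 * np.sin(2 * np.pi * 60 * t)

def peak_to_background(x):
    """Strongest spectral peak relative to the median spectral level."""
    s = np.abs(np.fft.rfft(x))
    return s.max() / np.median(s)

# Whitening: estimate the average amplitude spectral density by smoothing
# the spectrum, divide it out, and go back to the time domain.
spec = np.fft.rfft(data)
asd = np.convolve(np.abs(spec), np.ones(64) / 64, mode="same")
white = np.fft.irfft(spec / asd, n=t.size)

print(peak_to_background(data), peak_to_background(white))
```

After whitening, the instrumental line is suppressed by more than an order of magnitude relative to the background, which is what makes the chirp visible in the time-frequency plot.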
But I mean, if you compare the whitened prediction of general relativity to the whitened output of the data, we really get good agreement. And by trying the predictions of general relativity for all kinds of masses, these are the values that we can get, with 10% accuracy. We can measure the individual masses of the binary components. We can get an estimate of the distance needed to get this level of amplitude of the signal. So now let me try to explain to you how we get to this best-fit waveform. OK, from this best-fit waveform, we can really only measure the masses and not the spins. Well, how do we get to the best-fit waveform? We get there through a mechanism that is called matched filtering. So matched filtering is something that you should be more familiar with than you may think. It's the same thing that you do when you turn the dial of a radio to catch your station. If you look randomly at the broadcast signal, you only hear noise, whereas if you get to the right frequency, then you can dig the signal out of the noise. How is this done quantitatively? In the following way. You take the output of the detector, which will be signal plus noise. And by the way, the signal is a scalar quantity in the output of the detector; the signal will be a suitable combination of the metric entries. And then we characterize the noise by its spectral density, that spectral density with those spikes that I showed you before. The idea of matched filtering is the following. We take the correlation of the output of the detector with a pre-computed signal. If this pre-computed signal is exactly the same as the signal which is in the data, when we take the product, we get a term which is always positive, because it's the square of a quantity, and a term that can oscillate. And basically, the term which is always positive can overcome this other term, even though instantaneously, at every instant of time, the noise is larger than h.
But this integral of the correlation of noise with signal can be smaller than the integral of h squared. So to give you an example, this is the raw output of the detector. And we take the correlation with a waveform; this is the waveform that actually maximizes the correlation. When we take the correlation with the best waveform, we do obtain this spike in the correlation. How do we know that that's the best? OK, let me show you another animation. Just a second, I have to load the animation. OK, yes, this is the animation. Here I let the two masses of the binary constituents vary, keeping the total mass fixed; I just vary the ratio of the two masses. And this is how it affects the waveform. As you can see, the waveform changes shape. So when you take the correlator, the output of the correlator will be very sensitive to the specific shape of the waveform. So in other words, the shape of the waveform bears the fingerprint of the masses of the progenitors. So let me go back to the main presentation then. And so basically, just by trying all possible waveforms that we can generate based on general relativity, we can check which is the one that gives the best correlation. And that, by definition, is the one that allows us to determine those parameters. So just to summarize, we observe a system where, when the orbital radius is larger than the individual sizes, we are in the perturbative region. Then we get to a distance where the two objects are basically at a separation comparable to their individual sizes. Then they merge. And then you get one final object, one final Kerr black hole, which is highly oscillating. And then it settles down by emitting gravitational waves. So we observe some six cycles of the inspiral, then we observe the merger, and then the ringdown settles down very, very quickly.
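The matched-filtering idea described a moment ago, correlating the data against a pre-computed template and looking for a spike, can be sketched with a toy chirp. All the numbers here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1024                            # sample rate, Hz
t = np.arange(0, 0.25, 1/fs)         # 0.25 s template

# Toy "chirp" template: frequency sweeping upward, as in an inspiral
template = np.sin(2 * np.pi * (30 * t + 40 * t**2))

# Detector output: noise everywhere, plus the chirp buried at sample 300
data = rng.normal(scale=1.0, size=1024)
data[300:300 + template.size] += 3 * template

# Matched filter: slide the template over the data and correlate at each lag
corr = np.correlate(data, template, mode="valid")
recovered = int(np.argmax(corr))
print(f"injected at 300, recovered at {recovered}")
```

Even though the chirp is hard to spot by eye in the noisy series, the correlation spikes at the injection point, because the signal-times-template term is always positive there while the noise-times-template term averages out.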
And you know, this process, basically in this last oscillation, emitted three solar masses in gravitational wave energy. And you can see from the green line, oops, sorry, that it actually reached a velocity of 50% of the speed of light. So what do we learn about fundamental gravity from this system? Well, we have to remember how we model the three phases. So we have an inspiral phase, where the two black holes are separated one from the other, which allows an analytical understanding in terms of the post-Newtonian approximation to general relativity, where the parameter of the expansion is the relative velocity, which is a good expansion parameter because it's smaller than 1. When the two objects merge, of course, this approximation is no longer valid, because first, we reach relativistic velocities, and second, the idea that you have two separate objects doesn't hold anymore. And then you have a final ringdown, a final object, a perturbed Kerr black hole, and that also allows a perturbative description. So the idea is to merge these three descriptions. The merging of these three descriptions is done here on the analytical side, and then the last 5, 6 orbits, which are actually crucial for this detection, are done through a numerical solution of the general relativity equations. And let me just focus, for the sake of clarity, on the inspiral part, which allows an analytical understanding. All I want to stress here is that, like in the case of the pulsar, all you need to know to understand the dynamics of the binary are the energy and flux functions. This is the energy function with post-Newtonian corrections, and this is the flux function with post-Newtonian corrections. The idea is that to do this matched filtering, and so to correlate the gravitational wave with the data, you need them to compute the phase. And you need to compute the phase with high accuracy.
So how do you compute the phase? By knowing the dynamics. The phase is the integral of the phase derivative; this is trivial. But then let's just make a change of variable: instead of integrating in t, we integrate in v, so dt becomes (dE/dv)/(dE/dt) dv. But dE/dv is exactly the derivative of the energy function, and the power emitted, dE/dt, is exactly the flux. So if you plug your knowledge of the dynamics into this phasing formula, you see that this phasing formula naturally comes as a series expansion in v. And as you can see, since we want to correlate with the data, we need the phase to be known with order-1 accuracy. And this means that these high-order corrections are important, because here we have 1 over v to the 6 in the leading behavior. So we really need the high-order corrections. So this is to stress that the phasing determination, which is crucial for doing the matched filtering, is sensitive to high-order corrections of general relativity. So it's very much different from the binary pulsar case, where you only need the first-order correction. And what is important here is that these coefficients, which are predicted by general relativity, depend on the fundamental theory, but they also depend on the individual masses and properties of the system. So they depend both on the astrophysical parameters and on the general relativity parameters. These are the so-called post-Newtonian coefficients. So let me skip this. And these post-Newtonian coefficients could indeed be measured. So these delta phis are the post-Newtonian coefficients at 0, 0.5, 1PN, 1.5PN, and they could indeed be measured to some level of accuracy. But as you can see, the level of accuracy is not great: these delta phis are of order 1, and we could measure them only to order-1 precision. Why? Because we only have one detection. And as I said, those coefficients depend both on general relativity and on the astrophysical parameters. So if we want to get better precision, we need to pile up several detections.
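At leading (Newtonian) order the phasing integral just described can be done in closed form, and a quick numerical check shows why order-1 accuracy is demanding: with E = -(1/2) nu M v^2 and the leading flux F = (32/5) nu^2 v^10 (in G = c = 1 units), one finds dphi/dv = (5/32)/(nu v^6), and the accumulated phase is hundreds of radians. The velocity range below is an illustrative choice:

```python
import numpy as np

# Leading-order inspiral phasing in G = c = 1 units.
# Energy: E = -(1/2) nu M v^2,  flux: F = (32/5) nu^2 v^10.
# Change of variable t -> v gives dphi/dv = omega*(dE/dv)/(-F) = (5/32)/(nu*v**6);
# note the 1/v^6, and that the total mass M drops out at this order.
nu = 0.25                            # symmetric mass ratio (equal masses, illustrative)
v = np.linspace(0.2, 0.5, 200001)    # from mildly to strongly relativistic

dphi_dv = (5 / 32) / (nu * v**6)
# trapezoidal integration of dphi/dv over v
phase_numeric = float(np.sum(0.5 * (dphi_dv[1:] + dphi_dv[:-1]) * np.diff(v)))

# Closed form of the same integral: phi = (v1**-5 - v2**-5) / (32*nu)
phase_exact = (0.2**-5 - 0.5**-5) / (32 * nu)
print(phase_numeric, phase_exact)
```

Hundreds of radians accumulate at leading order alone, so even a fractional-percent correction to the energy or flux shifts the phase by order 1: that is why matched filtering is sensitive to high post-Newtonian orders.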
However, as you can see, the triangles in this plot show the precision with which the binary pulsar could fix those parameters, and the binary pulsar could not say anything beyond first post-Newtonian order, just because the velocity there was 10 to the minus 3, so all post-Newtonian orders that involve higher powers of the velocity were completely undetermined. Another good test that was possible with this detection, and this is really the new thing that was never observed before, concerns the merger and the ringdown. So if you use the general relativity equations, you can predict the final mass and the final spin of the final black hole from the inspiral phase. But then you can also measure the final mass and the final spin, because you know that the perturbed Kerr black hole will have a typical decay of its emission which is related to the final mass and the final spin. And then you can check the two: the prediction from the inspiral phase, which is done through general relativity, and the measurement of the final mass and spin. Sorry, this is the measurement of the final mass and spin from the signal itself, and this is the prediction of the final mass and spin from the inspiral part. The two are consistent. So there is still room for something which is not GR, but there's absolutely no need for it so far: there's no deviation that we observe from GR. OK, and this is the result of the Monte Carlo that has been run by producing millions of waveforms to see what the probability distribution functions are for mass 1 and mass 2 of the measurement. So let me skip the rest of the details. And now I'm getting to the end of the talk. I just wanted to show you the distance reach of the LIGO detectors given the total mass of the system. So the distance reach, in luminosity distance in megaparsecs, in this observing run and in general, is this blue line, which as you can see is much better.
This is a factor of three or four better than the distance reach of the detector that took data until five years ago. And the green line is the prediction for two years from now. So this is the portion of the universe that we aim to explore with the advanced detectors. Of course, as you raise the masses of the sources, you can see out to farther distances. The signal that we saw, for instance, had a total mass of roughly 60 solar masses, and it was detected at a distance of roughly 400 megaparsecs. And from this, we can infer the rate of this kind of event. Of course, with one event, the uncertainty is large. But we can infer that the event rate is of the order of 10 to 100 events per gigaparsec cubed per year. And you have to compare this with the fact that in 1 gigaparsec cubed there are about 20 million galaxies. OK, this is the region of the sky where the signal has been localized to come from. And you have to consider that this is done through triangulation. So, as the word says, you would in principle need three detectors to pinpoint a point in the sky. With two detectors, you have a degeneracy in the position in the sky. So there are far too many galaxies to be able to know which galaxy hosted the system. OK, then, I don't have time to go into the astrophysical details of the progenitor models, but these are still very, very uncertain. The only thing I can say now is that repeated detections of gravitational waves will help to disentangle which progenitor model is right. For the moment, we really don't know whether the binary system was already a binary when the objects were stars, which then became black holes by collapse or supernovae, stayed a binary, and eventually merged; or whether they were isolated black holes that happened to encounter each other later in their life. We still don't know. And so I'm almost at the conclusion. I want to conclude by leaving you with a slide.
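As a back-of-envelope illustration of the numbers above (a rate of order 10 to 100 events per Gpc³ per year, and about 20 million galaxies per Gpc³), here is a small sketch. The 1 Gpc horizon used below is an illustrative assumption based on the high-mass reach discussed later in the Q&A, and the Euclidean sphere volume ignores cosmological corrections:

```python
import math

# Back-of-envelope: expected detections per year = rate density x surveyed volume.
# Rate numbers are from the talk; horizon_gpc is an illustrative assumption.
def expected_events_per_year(rate_per_gpc3_yr, horizon_gpc):
    volume_gpc3 = (4.0 / 3.0) * math.pi * horizon_gpc**3  # Euclidean sphere
    return rate_per_gpc3_yr * volume_gpc3

# GW150914-like sources: 10-100 / Gpc^3 / yr, reach of ~1 Gpc for high masses
low = expected_events_per_year(10, 1.0)    # a few tens per year
high = expected_events_per_year(100, 1.0)  # a few hundred per year

# Galaxies surveyed within 400 Mpc (the GW150914 distance),
# using ~20 million galaxies per Gpc^3
galaxies = 20e6 * (4.0 / 3.0) * math.pi * 0.4**3  # roughly 5 million galaxies
```

With these inputs, the expected yield is of order tens to hundreds of detections per year once the detectors reach gigaparsec distances, consistent with the "100 per year" figure mentioned in the Q&A below.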
Sorry, my computer is slow to change the slide now. OK, I want to leave you with this slide, which summarizes basically all the known facts about the gravitational wave detection. Thank you. I'm taking questions now. Thanks, Ricardo, for this very nice presentation on this fascinating topic. Let me make sure that everything is working properly. OK, so as I said, Ricardo, thanks again for the very nice presentation, and let's move on to questions. Let me first ask: anybody on the hangout, do you have any questions that you'd like to pass to Ricardo? I see that Roberto has a few questions; I don't know if you want to go directly. Yeah, I have some questions for Ricardo. Let's see. One question is: when you mentioned this horizon of 10 to the 2 megaparsecs that you can observe with gravitational waves, do you have some kind of estimate of how many events you could observe with Advanced LIGO? OK, so in the following years, how many such detections? Yes. OK, so the point is that the astrophysical estimates that we had before the detection spanned five orders of magnitude, and the top rate that was predicted was about 10 coalescences per gigaparsec cubed per year. As you can see, they were basically wrong, because now we have constrained the measured rate of these events. Of course, it still has two orders of magnitude of uncertainty, because with one event we have poor statistics. But still, we are constraining the rate of these events to something that is already at the very top of the previous astrophysical predictions. This is just to say that the astrophysical predictions were not trustworthy, but we knew that in advance, because a prediction with five orders of magnitude of uncertainty is clearly not completely trustworthy. So this is the rate. Now, how many of these can we see?
Well, for high masses, for example, we can see things out to a gigaparsec. The blue line is the sensitivity of the last run; the green line is for the future runs. So if you go out beyond a gigaparsec, the event rate can really be 100 per year for these high masses, and correspondingly smaller for low masses. There is also a question from Nicolas. I will read the question for you, Ricardo. He says: on slides 11 and 12, what are H and L? So it looks like you have labels H and L in your slides. Yes, those are Hanford and Livingston. Hanford and Livingston? OK, so the labels stand for the two detectors. Yes. OK, so that was easy. Roberto, I think you have more questions? Hi, yeah, regarding these plots, when you were explaining how the interferometer works, I was wondering: does the detector have some kind of blind spot for some directions of arrival, or does it observe the whole solid sphere, all directions? Sure, good question, yes. I forgot to put this slide in the presentation. OK, the idea is the following; let's go back to the interferometer. Suppose the wave arrives exactly perpendicular to the detector: then you have maximum sensitivity. But if the wave arrives, say, at 45 degrees here, then you're completely blind to h-plus, but not blind to h-cross. So yes, there is a modulation with the position in the sky. And yes, there are spots in the sky where you are blind, but these are very, very narrow. So basically you have a modulation within a factor of a few across the sky, making you more or less sensitive. But the spots to which you are blind are narrow; they are, say, 10% of the sky. The thing is that the two LIGO detectors are oriented in exactly the same way.
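The modulation and the blind spots just described can be made concrete with the standard antenna-pattern functions of an L-shaped interferometer (textbook formulas, not taken from the slides); a minimal sketch:

```python
import math

# Quadrupole antenna pattern of an L-shaped interferometer (standard formulas).
# theta, phi: source direction in the detector frame (theta = 0 is overhead,
# phi measured from one arm); psi: polarization angle of the wave.
def antenna_pattern(theta, phi, psi):
    a = 0.5 * (1 + math.cos(theta) ** 2) * math.cos(2 * phi)
    b = math.cos(theta) * math.sin(2 * phi)
    f_plus = a * math.cos(2 * psi) - b * math.sin(2 * psi)
    f_cross = a * math.sin(2 * psi) + b * math.cos(2 * psi)
    return f_plus, f_cross

# Overhead source: maximal response to h-plus.
fp, fc = antenna_pattern(0.0, 0.0, 0.0)

# Source in the detector plane, along the bisector of the arms:
# both polarizations project to zero -- a blind spot.
fp0, fc0 = antenna_pattern(math.pi / 2, math.pi / 4, 0.0)
```

An overhead source gives the maximal response, while a source in the detector plane along the bisector of the arms gives zero response to both polarizations: one of the narrow blind spots mentioned above, which is why two co-oriented detectors share the same blind spots.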
This is done in order to have the same sensitivity to any position in the sky, because, of course, you want to take coincidences between the two detectors. This will no longer be the case when you add new detectors: with more detectors, those blind spots in the sky will be covered by the other detectors, so as to have full coverage of the sky. OK, did I answer your question? Yeah, yeah, this is the idea: the interplay between the two interferometers should be as efficient as possible, for any direction. Yeah, so just to stress the point: if you have only two detectors, it's better to orient them in exactly the same way. If you orient them in exactly the same way, the blind spot of one will be the blind spot of the other, so anything that comes from that spot is lost. Still, it's better to do it this way, because you want to take coincidences: there is so much noise that you cannot trust a detection made by only one detector. Suppose, with two detectors, that you orient them differently, so that you have no blind spot. Then suppose the signal comes from a blind spot of one of the two: you can do nothing with that trigger, because it's a single-detector trigger. You have too much noise with just one detector; you really have to have the coincidences. So with only two detectors, it makes no sense to have different orientations, because you need the same orientation for the detectors to be able to confirm one another. With more detectors, then, it will make a lot of sense to cover the blind spots. There is a question, from Alejandro. Hello, Ricardo, thank you for the talk. So, there is a window of masses where these sources, these gravitational waves, could potentially be due to dark matter or some primordial black hole source or something like that.
Do you see LIGO or future LIGO experiments being able to distinguish whether the signal comes from a traditional black hole source or from something more exotic? Well, this is the gravity of point particles, basically. Where the size of the objects comes in is at the merger. And here, you see, the merger happens exactly at the time when Schwarzschild black holes should meet. If they are black holes, their size is fixed by their mass, so they have to merge at that point. If they're not black holes, if they're extended objects, then they would merge at a different point. But that is excluded, because this is exactly the signal that general relativity foresees. Then, regarding the formation of the black holes, that's another story. Once you have black holes, this is the signal; as to what formed them, we are open-minded. The only thing you have to keep in mind is that the mass range we are sensitive to is this one: from a few solar masses to a few hundred solar masses. So let me maybe rephrase this in a better way. What if you don't see the number of signals that you should see with the expected rate, or with the expected abundance of this type of black hole in the known universe? What can you conclude from that? Is there anything you can conclude from that? I mean, you've already observed one, this event, but now you expect to observe more. What if you don't observe that many more, or even no more? Well, let me stress this once more: the astrophysical predictions are very uncertain. We don't have a model for the formation of binaries. So basically, the astrophysical predictions were consistent with having zero detections. Now we have this lower bound for the rate. And if future data are inconsistent with it, well, then we have a problem; maybe our detection was just a very, very rare event that happened in the tail of the distribution. But I mean, we'll see.
But I mean, now we saw something in the very first week we turned on the detector with this sensitivity. So if that's the case, that it was a very rare event in the tail of the distribution, we'll learn that pretty soon. The most likely scenario is that we keep seeing things at this kind of rate. Thank you. So, Ricardo, we do have a few extra questions that people have been sending to the Q&A system in Google Plus. We have a question from Diego Restrepo. He asks: how many neutron star–neutron star merger events are expected per year? Well, again, the astrophysical predictions are smaller than this. The point is that neutron star–neutron star systems can be seen only to shorter distances, because they are less massive. The total mass of a neutron star–neutron star system is about three solar masses. So if you take three solar masses on the distance-reach plot of LIGO, you get basically 100 megaparsecs for optimal orientation. If the orientation is not optimal, you get even less. And then you have to count how many binary systems you have up to 100 megaparsecs. Well, consider the galaxy density: you have 20 million galaxies per gigaparsec cubed. So, OK, you can do the math now. We can see neutron star–neutron star binaries up to, say, a bit less than 100 megaparsecs; then you count the number of galaxies that you have there. We know that in a typical galaxy like ours, we have 10 to 100 coalescences of neutron stars per megayear. And then you get the number. So the neutron star–neutron star rates come out to be less than this rate. OK. There is actually another question from Diego. He asks about the masses of the black holes. He says: are the masses of the black holes compatible with statistical expectations? So he basically wants to know, how natural are these 30 solar masses?
Well, again — OK, let me make one point clear. I'm not a traditional astrophysicist; I am a gravitational astrophysicist. Until one or two months ago, "gravitational astrophysicist" was kind of an oxymoron. Now it's not anymore, so now we have kind of equal status with traditional astrophysicists. I mean, such masses are not very common, just because black holes of 30, 40, 50 solar masses have not been observed so far. But exactly, there could be some biases in the observations that were used to spot those lower-mass black holes. And sincerely, we run an experiment; we don't have to have prejudices about what the sources out there are. We have to detect whatever nature gives us. So frankly, the models of binary formation are poorly understood — and let me stress this, there are a few competing models; we don't know, maybe all of them are important in different galaxies, maybe one of them will be picked in the end. So what are the predictions? I mean, it's at this level. Once we make 20 detections, then we might be able to select one model over the others. But for the time being, I think we should be really, really open-minded. OK. We have another question here, from Bernardo Inseca. He refers to slide number 31. He says: on slide 31, on the source position, with just the time of arrival at two detectors, naively one would predict that you could only pick out a full ring on the sky as the source position. How was it possible to get more information about the source? Exactly, a very good question. In principle, from the time of arrival you get a ring. But you also get the amplitude. As I said, just qualitatively — I didn't put it on a slide — the sensitivity to every position in the sky is slightly different.
Since the signal is so loud, it's very unlikely that it came from a position in the sky that is still on the ring allowed by the time delay, but whose projection onto the detector would decrease its amplitude. So this is basically the intersection of the ring in the sky with the points in the sky that have maximal projection onto the detector. It is a combination of the time delay and the consistency of the amplitude. OK. Actually, Reneva has another question. He says: can LIGO detect a signal from any orientation of the plane of the binary system? For example, a plane perpendicular to the line of sight? So he's basically asking about the orientation of the plane in which the process is going on. Yes. This has to do with the very nature of gravitational waves — so, I'm trying to broadcast my face now; I don't know if I'm getting there. OK. In electromagnetism, the main emission is given by the dipole, and the dipole is related to one line, one direction. And since electromagnetic waves are transverse, you can have two directions which are both orthogonal to the dipole. In gravitational waves, the main emission is related to a quadrupole, and a quadrupole is defined by a plane — basically, by motion in the plane. And the motion in each of the two directions in the plane generates transverse waves. So you can never be orthogonal to both of the transverse waves generated by the motions in the plane. So basically, the emission is never zero: wherever you look from, the emission is never zero. It's not like dipole emission; quadrupole emission is never zero. Then what happens is that the projection of this emission onto the detector can have zero effect, just because of the symmetry of the two arms — those are the blind spots. But wherever you look at the source from, the emission itself is never zero, unlike dipole emission.
The quadrupole emission is never zero; it's modulated within a factor of two, but it's never zero. It can only get to zero once you project it onto a specific detector orientation. Sorry, this is qualitative — I didn't put the whole formula on the slide — but this is the qualitative idea. So that ends the round of questions from the audience. Let me make sure that nobody else here has a question. Yes, I have a simple question: what happens when the masses are very large? Why does the sensitivity go down? OK. When the masses are very large, the objects, which we assume are black holes, also get fatter and fatter. If they get fatter and fatter, even if they go at the speed of light, it will take longer for one to go around the other. And if it takes longer, the frequency will be smaller; and if the frequency is smaller, you hit the noise, basically. So this is the signal of this binary system. If it originated from larger masses, then for a given distance it would be higher, because, of course, mass is the source of gravity — larger mass, higher signal. But it would also move to the left, because the signal ends when the two black holes touch, and with larger masses the two centers would touch at a larger separation, hence at a smaller frequency. So if you increase the masses, you move the signal to the left and upward, and eventually you hit the low-frequency noise of the detector, which doesn't allow you to see anything anymore. Great. So, anybody else with a question? We had many interesting questions. If there are no more questions, well, we thank Ricardo again, and also all of you, all the viewers. We will meet again soon for another Latin American Webinar of Physics.
We remind you that you can check the calendar on our website at lophysics.gorepress.com. For updates, you can also look for us on Facebook and on Twitter. And don't forget to subscribe to our YouTube channel, where all these talks will be available for future viewing. OK, so thanks again, Ricardo. Thank you, everyone, and see you next time. Thank you.