[Introduction partly inaudible; the speaker is Noah, from UC Berkeley.]

More detail later this week: CMB photons are lensed by the intervening matter as they traverse the observable universe, and we can use measurements of this lensing to learn about late-time structure evolution. Now, in the weak lensing limit, this lensing is completely determined by a deflection field alpha, and I'm working in the flat-sky approximation here, so x is some 2D coordinate. Conventionally, what we measure with the CMB is not alpha but its divergence, which we call the lensing convergence.

Now is a particularly exciting time to be doing CMB lensing, because we're in a phase of rapid advances in sensitivity. In this plot here, in black, I'm showing the convergence power spectrum, along with noise curves for various surveys. The current state of the art is this curve in red, from the Atacama Cosmology Telescope, and in the next decade or two we're going to decrease these noise levels by something like an order of magnitude with CMB Stage 4.

OK, but how do we actually measure this lensing convergence? As a starting point, recall that statistical isotropy forces the two-point correlation function in Fourier space to be diagonal. Now let's imagine doing the same thing, where we average over realizations of the CMB but fix the lensing field. By fixing the lensing field, we inherently break isotropy, and so the two-point correlator we were just looking at picks up off-diagonal contributions that are proportional to the lensing field. And this is our starting point for deriving an estimator for this lensing convergence.
OK, so if I have this equation, I can divide both sides by kappa, and there we have it: I have an estimator. It's perfectly unbiased if I'm allowed to average over realizations of the CMB. But of course I can't actually do a statistical ensemble over the CMB; what I can do is spatial averages. So we're going to sum over multipoles that are separated by a common capital L, and by doing so we obtain an expression that is a generic quadratic estimator, with some weights, capital F. These weights are in principle completely arbitrary, as long as they're properly normalized, and traditionally people choose them to minimize the variance of the estimator.

OK. These estimators start to run into problems when your maps are contaminated with foregrounds that are both non-Gaussian and correlated with the lensing convergence. Specifically, what I'm thinking about here are extragalactic foregrounds: things like the cosmic infrared background, radio point sources, and the thermal and kinetic Sunyaev-Zel'dovich effects. For simplicity, let's consider the simple case of a single foreground s. These quadratic estimators are bilinear in their arguments, so if I write down some estimator for the lensing power spectrum, I can expand it as so. The first term here is the signal you're going after; that's great, that's what we want. The remaining terms are biases resulting from, in the first case, a non-zero bispectrum between the lensing field and the foregrounds, and in the second case, a non-zero trispectrum of the foregrounds. Likewise, you end up getting biases to cross-correlation measurements: say I measure my lensing map and then cross-correlate it with some galaxies; I end up producing biases due to a non-zero bispectrum between the foregrounds and the galaxies.
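The variance-minimizing choice of weights is the same inverse-variance logic used whenever independent unbiased estimates of one quantity are combined; a minimal numerical sketch of that logic (illustrative only, not the full lensing pipeline):

```python
import numpy as np

def min_variance_combine(estimates, variances):
    """Combine independent unbiased estimates of the same quantity with
    inverse-variance weights; the weights sum to one, so the result stays
    unbiased while its variance is minimized."""
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances
    w /= w.sum()
    combined = np.sum(w * np.asarray(estimates, dtype=float))
    combined_var = 1.0 / np.sum(1.0 / variances)
    return combined, combined_var

# Two noisy estimates of the same convergence mode (made-up numbers):
est, var = min_variance_combine([1.2, 0.8], [1.0, 1.0])
# equal variances -> simple average, and the combined variance halves
```

In the actual estimator, the role of the "independent estimates" is played by the off-diagonal multipole pairs summed over a common L.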
And for an ACT-like survey, it turns out that these foreground biases are quite statistically significant, and as a result they are one of the leading systematic errors in future CMB lensing measurements. So how do we get around this?

The first approach I want to talk about is the so-called bias-hardening technique, and the basic idea behind it is very simple: just as we can write down a quadratic estimator for the lensing field, we can also write down a quadratic estimator for the foregrounds. And if you have this foreground estimator, you can use it to subtract off the bias you would get in your naive lensing estimator. I'm going to skip most of the details because I don't have time, but more or less, you just take some linear combination of these naive minimum-variance estimators. The only inputs that go into deriving this bias-hardened estimator are the foreground bispectra and power spectrum, which are straightforward to evaluate if you assume some simple halo model for the foregrounds.

So how well does this actually work? In this left plot I'm showing the simulated biases that we expect for a cross-correlation measurement of CMB lensing with an LSST-like sample of galaxies. In red are the biases you get from the standard minimum-variance estimator, and in blue is what happens when you do this point-source hardening. You can see that you reduce the biases by a factor of 10, so that's quite nice. In the plot on the right, I'm showing how well you can measure the amplitude, the fixed-shape amplitude of this cross-correlation, as a function of the smallest scale that I include in my lensing reconstruction, and the color is showing the relative bias of this measurement in units of the statistical uncertainty.
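Schematically, bias hardening amounts to inverting a small response matrix between the naive estimators; a toy sketch of those mechanics, where the response values and field amplitudes are made-up numbers, not anything from the talk:

```python
import numpy as np

# Naive quadratic estimators for kappa and a point-source field s each
# respond to BOTH fields. With proper normalization the response matrix
# has unit diagonal; the off-diagonal entries are the cross-responses.
R = np.array([[1.0, 0.5],   # kappa-estimator response to (kappa, s)
              [0.2, 1.0]])  # s-estimator response to (kappa, s)

true_fields = np.array([2.0, 3.0])  # hypothetical (kappa, s) amplitudes
naive = R @ true_fields             # what the naive estimators return

# Bias hardening: invert R so each estimator responds to one field only.
hardened = np.linalg.solve(R, naive)
# hardened recovers (2.0, 3.0): the kappa estimate no longer picks up s
```

The price, as in the real estimator, is that the hardened combination generally has higher variance than the naive minimum-variance one.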
And so you can see here that the standard minimum-variance quadratic estimator, which of course has the highest signal-to-noise, becomes overwhelmed by bias at an l_max of somewhere around 2200; this is the point where the systematic error starts to dominate over the statistical one. Whereas something like a point-source-hardened estimator can go to much smaller scales while retaining a smaller bias. So at the end of the day, by including all this small-scale information, you can build an estimator that has both lower noise and lower bias than the standard QE.

OK, that's all I wanted to say about bias hardening. The other technique, which is more commonly used, is multi-frequency cleaning. Typically, CMB telescopes measure the CMB at multiple frequencies, and I'm always free to take any linear combination of these frequencies, as I would like, to obtain some combined map t-hat. I have a lot of freedom in what these weights can be. Traditionally, you choose the weights to minimize the variance of t-hat, but if you know the frequency dependence of some of these foregrounds, you can modify the weights to try to deproject them, to remove them from this map t-hat. There are some subtleties here that I'm going to skip because I don't have time.

So here are some results. On the y-axis I'm showing the bias to the amplitude of that same cross-correlation measurement, as a function of the noise. The different colors are showing different types of estimators and linear-combination weights, and the squares and triangles are showing what happens when I try to individually remove either the CIB or the tSZ. Counterintuitively, when you remove the CIB, you actually end up increasing your overall bias, and likewise for the tSZ, because you end up boosting the other's power spectrum.
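The deprojection step can be sketched as a constrained minimum-variance (ILC-style) weighting; a minimal version, where the toy covariance and spectral responses are assumptions for illustration:

```python
import numpy as np

def constrained_weights(C, A, e):
    """Weights w minimizing w^T C w subject to A^T w = e.
    C: (nfreq, nfreq) covariance between frequency channels.
    A: (nfreq, ncomp) spectral responses (columns: CMB, foregrounds...).
    e: desired response, e.g. [1, 0] to keep the CMB with unit response
       while deprojecting one foreground. Standard constrained-ILC algebra:
       w = C^{-1} A (A^T C^{-1} A)^{-1} e."""
    CiA = np.linalg.solve(C, A)
    return CiA @ np.linalg.solve(A.T @ CiA, e)

# Toy setup: 3 channels, uncorrelated unit noise, flat CMB response.
C = np.eye(3)
A = np.ones((3, 1))  # CMB is frequency-independent in these units
w = constrained_weights(C, A, np.array([1.0]))
# with no foreground constraint this reduces to plain channel averaging
```

Adding a foreground column to A and a 0 to e is exactly the deprojection move discussed above; the resulting weights pay a noise penalty, which is the trade-off the next slide quantifies.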
And so this is a bit counterintuitive, and it's kind of a worst-case scenario, because you've increased both your noise and your bias. Here I'm showing what happens for the bias-hardened estimator: again, your bias is below your noise, so that's good, but let's say you want to be more conservative and get your bias below half a sigma or so. Instead of jointly deprojecting, you can do a semi-clever trick where you just draw a line between these two different types of weights, and by walking slightly along this line, you can pay a 10-20% noise price instead of a factor of two. We'll be using these techniques for a cross-correlation analysis of DESI and ACT in the near future. Thank you.

[Chair] Questions? From Zoom? Okay, let's thank Noah again. Next is Aparajita Sen, who is online.

[Aparajita] Should I share my screen? [Chair] Yes, thank you. [Aparajita] Are my slides visible? [Chair] Yes, you can start.

[Aparajita] Okay. Hi, everyone. My name is Aparajita Sen, and I'm a PhD student at IISER Thiruvananthapuram, India. Today I will talk about forecasts for the recovery of r with CMB Bharat; I will specifically focus on thermal dust complexities and the optimum range of frequencies required for dust removal. I have done this work in collaboration with Debabrata Adak, a postdoc at IMSc Chennai, Tuhin Ghosh, and others. Before I start, let me briefly tell you what CMB Bharat is. CMB Bharat is a fourth-generation satellite mission which has been proposed by the Indian community of cosmologists to our space agency, ISRO. The proposal was made in 2018, and since then it has been under consideration at the space agency. CMB Bharat will have a multitude of science goals, such as the measurement of the neutrino masses and observations of the galactic magnetic field and the thermal dust, etc.
But one of the main key science goals is the measurement of the primordial CMB B mode: it aims to detect the CMB B mode at a tensor-to-scalar ratio of 10^-3 at a confidence level of 3 sigma. Let me just mention here that the tensor-to-scalar ratio quantifies the power of the CMB B mode and is directly related to the energy scale of inflation. CMB Bharat will observe the microwave sky over a very wide frequency range, starting from 28 GHz and going up to 850 GHz, and it will have very high resolution, from 5 down to 1.5 arcminutes across its frequency bands. It will have around 20 frequency bands, which is almost double what Planck had, and its sensitivity will also be around 10 to 30 times better than Planck's.

So what are the challenges in the detection of the B mode? The major challenge is the high level of foregrounds. As you can see from this figure, CMB Bharat aims to measure r at 10^-3, which is given by this pink dashed line here, but the total foreground, given by this black line, dominates over this r by at least 2-3 orders of magnitude. This is what makes the detection of the B mode difficult, and that is why it has not been measured yet. So measuring the B mode basically boils down to a component separation problem, and by component separation I mean separating the CMB component from all the foregrounds, such as synchrotron, thermal dust, spinning dust, etc. One of the ways to improve the performance of component separation is taking observations over a wide range of frequencies, or a very high number of frequency bands.

Now, my talk is based on these two works. One is the B-mode forecast for CMB Bharat, which has been published in MNRAS, and the second work, the manuscript for which is under preparation, is titled "Optimum range of frequency for thermal dust removal in CMB Bharat". So what are we aiming to do?
First, we test the ability of CMB Bharat to detect the primordial B mode. To make our study robust, we consider a range of foreground components, and we also account for several thermal dust and synchrotron complexities in the modeling. The next question we ask is this: we already know that frequency bands above 100 GHz are dominated by the thermal dust component, so how can we efficiently remove it? One way is to increase the frequency range of the dust observations, and we ask: what is the optimum range up to which we need to observe this component so that its removal is efficient?

Now let me briefly tell you what I mean by thermal dust complexities. Up to now, for temperature and E-mode observations, thermal dust has been very successfully modeled by a modified blackbody spectrum, given by this equation here: I_nu is the intensity of the thermal dust emission, A_d is a template for the thermal dust emissivity, the emissivity is modified by a power law in frequency with spectral index beta_d (which does not depend on frequency), and B_nu is the Planck function, which depends on the temperature of the dust component. But this modeling is not adequate for the extraction of the B mode, because it does not account for several forms of dust complexity, such as line-of-sight effects, variation in the dust composition and grain size, and the galactic magnetic field. Some of these effects lead to frequency decorrelation, which means that the observations at different frequencies cannot simply be modeled by a power law. So, to account for some of the dust complexities, we have considered three models... [Chair] Sorry to interrupt, you have one minute left.
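The modified blackbody just described can be sketched in a few lines; the pivot frequency and parameter values below are illustrative assumptions, not the talk's actual choices:

```python
import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
KB = 1.380649e-23    # Boltzmann constant [J/K]
C = 2.99792458e8     # speed of light [m/s]

def planck_bnu(nu, T):
    """Planck function B_nu(T) in SI units."""
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def dust_mbb(nu, A_d, beta_d, T_d, nu0=353e9):
    """Modified blackbody: power-law emissivity (index beta_d) times the
    Planck function at dust temperature T_d, normalized so the amplitude
    equals A_d at the (assumed) pivot frequency nu0."""
    return A_d * (nu / nu0)**beta_d * planck_bnu(nu, T_d) / planck_bnu(nu0, T_d)

# Typical-looking (assumed) parameters: beta_d ~ 1.55, T_d ~ 20 K.
i353 = dust_mbb(353e9, A_d=1.0, beta_d=1.55, T_d=20.0)  # = A_d at the pivot
```

The key property for this talk is that a single (beta_d, T_d) pair fixes the spectrum at every frequency, which is exactly what the "complexity" models break.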
So we have considered three models that take these types of complexities into account, and let me just briefly summarize our results. From our forecasts we find, first, that for a very simple dust model we are able to reach the targeted sensitivity of CMB Bharat. Second, for dust complexities such as the multi-layer dust model, or for frequency decorrelation, the parametric component separation method we used, known as Commander, is not able to recover the dust adequately, whereas the blind component separation method, which does not assume any form of dust modeling, can remove the thermal dust component adequately. And for the optimum range of frequency channels, we find that if we consider channels up to 520 GHz, as shown here for the physical dust and MKD dust models, we are able to recover a tensor-to-scalar ratio of 10^-3, and we do not actually require the frequency channels above 520 GHz, going up to 850 GHz, for this task.

To conclude, we find that this configuration can recover r = 10^-3, and we also find that thermal dust observations up to about 500 GHz are adequate for minimizing its contamination. Thanks.

[Chair] Let's thank Aparajita. Are there questions? Not from Zoom either. Thank you again.

[Audience] Hi, thank you for the talk. How do these effects lead to frequency decorrelation? What is the main reason?

[Aparajita] It happens because, when we consider different dust compositions, for example silicate or carbonaceous grains, their physical properties change over the frequency range. In short, that is how we get frequency decorrelation.
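The point about mixed compositions can be illustrated numerically: a sum of two power laws with different indices is not itself a power law, so the effective spectral index drifts with frequency, which is the decorrelation being described. A toy sketch with hypothetical amplitudes and indices:

```python
import numpy as np

def two_component_dust(nu, nu0, a1, b1, a2, b2):
    """Toy line-of-sight mixture: two emission components with different
    power-law indices (e.g. two grain populations)."""
    return a1 * (nu / nu0)**b1 + a2 * (nu / nu0)**b2

def effective_index(nu_a, nu_b, **kw):
    """Log-log slope between two frequencies; for a pure power law this
    would be identical between any pair of frequencies."""
    fa = two_component_dust(nu_a, **kw)
    fb = two_component_dust(nu_b, **kw)
    return np.log(fb / fa) / np.log(nu_b / nu_a)

kw = dict(nu0=100e9, a1=1.0, b1=1.5, a2=1.0, b2=2.5)
s_low = effective_index(100e9, 200e9, **kw)   # slope at lower frequencies
s_high = effective_index(200e9, 400e9, **kw)  # slope at higher frequencies
# s_low != s_high: no single beta_d describes all channels at once
```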
So this is one of the ways we can get frequency decorrelation. In the second case, for example in the multi-layer dust model, we can model the frequency dependence along the line of sight, for example as a function of the distance through which we are observing, and when we do that it is no longer possible to model it as just a power law: the spectral parameter beta_d that I showed becomes dependent on the frequency. That is how frequency decorrelation also comes in.

[Audience] Thank you. [Chair] Are there other questions? Thank you. Next is Aritra Kumar Gon, on E and B modes from the secondary polarization of the CMB, also online. Okay, you can share the screen; you can start.

[Aritra] Hi everyone, I am Aritra, a grad student at the Tata Institute of Fundamental Research, and I will be speaking about the secondary E and B mode polarization of the cosmic microwave background due to the peculiar transverse velocity of free electrons from the reionization and post-reionization eras.

During the epoch of reionization, free electrons were produced in abundance, and they had a peculiar velocity with respect to the CMB rest frame. Since they have a peculiar velocity, in the rest frame of the electrons the CMB is no longer isotropic; in particular, one can show that there is a quadrupolar anisotropy in the incoming CMB radiation. This is primarily due to the non-linear nature of the relativistic Doppler effect when you transform from the lab frame to the electron rest frame, and also due to the non-linear relation between temperature and intensity in the Planck spectrum. In the presence of a quadrupolar anisotropy, as you can see from this figure, Thomson scattering can generate linear polarization in the CMB radiation. This is called the polarized kinetic Sunyaev-Zel'dovich effect; it was first predicted in 1980 by Rashid Sunyaev and Yakov Zel'dovich, and other studies have since looked at different aspects of this signal. So this is a one-page summary slide of my talk: what we have done is calculate, or estimate, the angular
power spectra of the E and B modes generated by this effect, and we have shown that the signal is sensitive to the central redshift of reionization and to its duration. It is also sensitive to the velocity power spectrum, which is related to the matter power spectrum if you consider the scalar modes to be sourced by gravity. We have also shown that the frequency spectrum is not of blackbody type but carries a y-type distortion, and due to that distortion you can separate this signal from primary CMB signals, which have a blackbody spectrum, and from other kinds of SZ signals, such as the thermal SZ, which involve unpolarized radiation. And since it has a different spectral signature, it is also free from the cosmic variance of the primary CMB polarization signal.

In the graph you can see the polarization signal for some fiducial reionization parameters we have chosen; we have also plotted the primary CMB B modes at tensor-to-scalar ratios r = 10^-4 and 10^-5. This olive curve is the sensitivity curve of PRISM, a space-based experiment which has been proposed, and as you can see, some more sensitivity needs to be gained in the future to detect this kind of signal. This term here is the contribution from galaxy shot noise at low redshift.

Moving on: as I said, the spectral signature has a y-type distortion in it. This is primarily due to the mixing of photons from different directions, each with a blackbody spectrum but at different temperatures. As you know, mixing such photons produces a resultant spectrum which has a blackbody part, which is this part, plus a y-type distortion, where x is the dimensionless frequency. Due to this, you can separate the signal from other kinds of CMB signals. Moving on, how do we characterize this polarization? Of course, you use
Q ± iU, which is a spin-2 field, and with spin-2 spherical harmonics you can split the polarization signal into its harmonic coefficients. One can show that this polarization signal consists of an even-parity and an odd-parity component: the even-parity part is called the E mode and the odd-parity part the B mode. The polarization field depends on the electron number density, which in our case is just a function of time, and it is proportional to the square of the transverse velocity field. What we have done is compute the power spectra of these E and B modes, and if you bring it all together, this becomes a complicated function: this is the angular power spectrum, and it is actually a very complicated expression; you have to do numerical integrations to find the corresponding power spectra.

As I said, the signal is sensitive to the central redshift of reionization: as you increase the redshift, the total Thomson optical depth increases, which increases your signal. These blue curves are the B modes and these yellow and red curves the E modes, and they all increase as you push the central redshift of reionization to higher and higher values. Next, we have shown that the signal is also sensitive to the width of reionization: changing the width has a very negligible effect on the optical depth, but the power spectrum still decreases as the duration increases. There is a simple explanation... [Chair] Sorry to interrupt, you have one minute left. [Aritra] What happens is that as you increase the duration of reionization, the velocity fields along your line of sight become much more uncoordinated, and the polarization created by those velocity fields tends to cancel. You can see that this effect is stronger at smaller scales, because at smaller scales you have more and more such uncoordinated velocity fields along your line of
sight. The width of reionization is defined from the time when the universe was essentially not reionized to when it is 99% reionized, and delta z is the parameter that characterizes this width. This is the same thing for the E modes; you can see a similar kind of signature. Lastly, as you may have noticed, the E modes are always a bit greater than the B modes, and we wanted to know why. For that we decomposed our polarization field into scalar, vector, and tensor modes. The velocity is sourced by gravity, which is scalar, but at second order all of the scalar, vector, and tensor components are present. What you can see is that the scalar component does not contribute to the B modes, as expected, but it has a huge contribution to the E modes (sorry, this label should read E modes; sorry for the typo), and that is why we believe the E modes are a bit greater than the B modes.

I will end by repeating my conclusions: this polarization signal has a y-type distortion part, thanks to which you can differentiate it from other signals, which have either a blackbody spectrum or involve unpolarized radiation, and it is sensitive to the reionization redshift and width and to the matter and velocity power spectra. Thank you for your patience.

[Chair] Thank you. Are there questions, in person or from Zoom? Let's thank Aritra again. Next is Maria Tsedrik. Okay, please, you can start.

[Maria] Hello everybody, my name is Maria, and today I'm going to talk about interacting dark energy from a joint analysis in redshift space, a project we've done at the University of Edinburgh. Last week we all heard a fantastic talk about spectroscopic surveys and how they are going to revolutionize the field of cosmology. We also learned that there is this fantastic state-of-the-art perturbative theory, which is called the effective field theory of
large-scale structure, EFT of LSS for short, and it allows us to go to non-linear scales, and these non-linear scales contain a lot of information about gravity. So you would think: OK, we have the best model we can get, and the data is coming, so we are done, right? Not so fast, especially for extended cosmologies like the one in the title of this talk. First of all, of course, we have to implement it, but then we also have to run validation tests in order to avoid false detections. We also introduce new parameters into our parameter space, which is not small, as you will see in a moment, so we introduce extra degeneracies, and they have to be studied. And last but not least, with a lot of data we need to know what we are looking for: which observables we want to model, and which of them contain the most information to constrain gravity.

So that's the ambitious plan for this project, and our pipeline is just a classical validation-test pipeline: we have an MCMC sampler, and we feed it with priors and a Gaussian likelihood, constructed from standard-cosmology simulations, with the covariance built from 10,000 mocks. For the model we use the EFT of LSS, but with a modification for interacting dark energy. We look at the posteriors, and from these posteriors we construct two statistics: one is called the figure of merit, which basically says how well we constrain our extended parameters, and the second is called the figure of bias, which says how biased we are away from the fiducial values of the standard cosmology.

Let's very briefly go through the effective field theory of large-scale structure. If you want to construct your power spectrum in redshift space up to one-loop order, what you need is this set of parameters: the first line is the bias expansion, then you have a bunch of counterterms, and then a scale-dependent shot noise, and of course you
also have the logarithmic growth rate f, which is the key parameter for testing gravity. If you look into the formalism, you can see directly from the equations that this set of parameters is already internally degenerate: for example, b1 and f are degenerate with the amplitude, and the non-local biases are also degenerate among themselves. Even so, the power spectrum alone is not enough to constrain b_Gamma3, and to constrain all three of the leading counterterms you need all three multipoles. So we want to include the bispectrum; for the bispectrum, to stay consistent, we go only to tree level, so there are a few fewer parameters, but there are still degeneracies involving b1. However, if on top of your bispectrum monopole you add the quadrupole, that is already enough, not to break the degeneracies entirely, but to constrain b1 and f.

OK, so we want to do a joint analysis, with a lot of things going on. We then move to the growth rate, which we parameterize in our interacting model: as the name suggests, dark energy interacts with dark matter. You can think about this interaction in terms of Thomson-like scattering: there is only momentum transfer between these components, no energy transfer, so the background evolution is the same; the only thing we modify is the structure formation, and you can see there is this additional drag term in the Euler equation. The extra parameters are w, which is just the equation of state of dark energy, and xi, which is the ratio of the interaction cross-section to the dark matter particle mass. Instead of the (w, xi) parameterization, we actually use w and A, where A is just the product of the two, simply because it is easier to analyze. Now concentrate on these plots, where I have plotted the ratio of the growth rate in our model to that in the standard cosmology; the main takeaway is that we have these two forbidden
regions, and these forbidden regions just correspond to negative values of xi, which is unphysical, because xi is a ratio of two positive quantities. There are a lot of other takeaways from this plot, but we don't have time for that.

So, enough with the theory; let's go to some results. What we've done is apply our observables step by step, searching for the best figure of merit and the smallest figure of bias while varying the range of scales. First came the lower-order power spectrum multipoles, the monopole and the quadrupole. Then, at the best value of k_max, we added the hexadecapole, and you can see the hexadecapole doesn't really improve things much; it actually even makes them slightly worse. Then we added the bispectrum monopole, and we see there is an improvement, but it only kicks in at very non-linear scales. We also tried adding the hexadecapole in this case, and it actually helps, because it breaks some slight degeneracies in the counterterms. So, a takeaway: if you are doing a joint analysis with the bispectrum monopole, please don't forget to add the hexadecapole on the power spectrum side.

OK, great. Now let's look at this marginalized posterior for just the extended parameters, A and w. You can see this butterfly pattern, which corresponds to the figure you've seen before with the two forbidden regions. What is important is that with the bispectrum monopole the contours are tighter, by up to 30%; however, that requires really non-linear scales, and we need to evaluate a lot of triangles. So, can we do better?
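The two summary statistics are not written out in the talk; under common definitions (an assumption on my part, not necessarily the exact ones used in this analysis), they can be sketched as:

```python
import numpy as np

def figure_of_merit(cov):
    """Inverse volume of the posterior ellipsoid for the extended
    parameters: larger means tighter constraints."""
    return 1.0 / np.sqrt(np.linalg.det(np.asarray(cov, dtype=float)))

def figure_of_bias(mean, fiducial, cov):
    """Shift of the posterior mean from the fiducial (standard-cosmology)
    values, in units of the statistical uncertainty."""
    d = np.asarray(mean, dtype=float) - np.asarray(fiducial, dtype=float)
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

# Toy 2-parameter posterior for (A, w), made-up numbers:
cov = np.array([[0.04, 0.0],
                [0.0, 0.01]])
fom = figure_of_merit(cov)
fob = figure_of_bias([0.1, -1.05], [0.0, -1.0], cov)
```

With these definitions, "best figure of merit, smallest figure of bias" is a search for the tightest contours that remain centered on the fiducial values.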
Can we get the same improvement with the smallest number of triangles? Yes, we can. The first approach is to apply some bias relations; again, there is not a lot of time to go into details, but you can see we can achieve the same improvement going only to much more moderate scales. The second option is to add the bispectrum quadrupole on top of the bispectrum monopole, and again, at more moderate scales, like here, we already see a comparable improvement in the constraints on our extended parameters. Of course, after the validation tests we have to repeat the same analysis with real data, for example from BOSS; this has been done with the bispectrum monopole only, and now we are working to include the quadrupole. So, the message: please use the bispectrum monopole, and if you don't want to calculate a lot of triangles, just add some bias relations or the bispectrum quadrupole. We also found a very similar improvement in the wCDM scenario. However, a lot of work remains to be done, which you can see in this wonderful meme. Thank you for your attention.

[Chair] Thank you very much, Maria, nice talk. Are there questions?
[Audience] Nice talk; I'm sorry that you had to fit it into 7 minutes, because it sounds very interesting. Can you go back to the final triangle plot for the cosmological parameters, with the correlations? Or maybe you can comment: which of the other cosmological parameters would you expect the extended parameters to be correlated with?

[Maria] They definitely would be correlated. The work with real data and a full cosmological analysis was done not by me but by my colleagues, but yes, of course there will be degeneracies in parameter space: since our A controls the amplitude, it will be degenerate not only with the biases but also with the primordial amplitude. Because we can constrain the biases better, the constraints improve, but still, of course, the dark energy parameters will be degenerate with the cosmological parameters, as you can already see here.

[Audience] Thank you. And could you turn to the second slide, I think just after the introduction... yes, this slide. The energy density of Lambda here depends on the scale factor through w: how is this obtained? In the previous lectures, rho_Lambda does not depend on a.

[Maria] This is just the usual solution for how a cosmological component evolves: how the density scales depends on the relation between density and pressure, that is, on the equation of state. For a cosmological constant w = -1, and the density does not depend on a. Here this is one of the models where you allow your "cosmological constant", which is then no longer really a cosmological constant, to have an equation of state that is not -1. It is not time dependent; it is just not -1, that's it.
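The scaling being asked about follows from the continuity equation for a fluid with constant equation of state w (a standard result, spelled out here for completeness):

```latex
\dot{\rho} + 3H(1+w)\,\rho = 0
\quad\Longrightarrow\quad
\rho_{\rm DE}(a) = \rho_{\rm DE,0}\, a^{-3(1+w)}
```

so w = -1 gives a constant density (a true cosmological constant), while any other constant w gives a power-law dependence on the scale factor, which is the a-dependence shown on the slide.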
Thank you. Thanks for the talk; can you say something more about the bias relations that you use?

The ones we use are fits; they are not derived within this model. The first one, which works best, is the relation for the tidal bias. Since we are fitting everything for halos, we trust halo simulations: one of them comes entirely from separate universe simulations, and it is the expression for b2, the second local bias, but the one that fits better comes from the excursion set approach. I don't know what more I can say specifically. The bias relations are not obtained with this model; they are the bias relations from standard cosmology.

If there are no further questions, let's thank Maria. Then we have Gabriel Luz Almeida.

Let me just see how I use this. Is this the pointer? OK. So my name is Gabriel, and I'll be talking a little bit about the effective field theory approach to binary system dynamics. Gravitational waves were first detected 7 years ago, in 2015, by the LIGO detectors, and this observation provided a new window on the universe, since we now have another way of probing astrophysics. Up to now the LIGO-Virgo collaboration has detected about 90 gravitational wave events, and in all of these signals the waveforms share this morphology, which encapsulates the different phases of the binary system's evolution: first the inspiral phase, in which we can still separate the two objects, and then the merger and the ringdown. As Teukolsky was teaching us, for circular orbits we can derive these expressions for the polarizations of the gravitational waves, and one of the most important quantities in this expression is this phase here, which tracks the orbital phase evolution of the system. I am talking precisely about the inspiral phase, and one interesting thing is that to model it we need information about the conservative dynamics of the system, the energy of the system here, but we also need the dissipative part, which is the energy flux emitted by the system in gravitational waves.

One interesting characteristic of the binary system is a clear hierarchy of scales. At the orbital scale we have this relation, which is basically the virial theorem at leading order; this r is the orbital separation of the two objects, and r_s is the size of the source, so if we are talking about black holes it is just the black hole radius. We also have the gravitational wavelength, for which we can derive this relation using linearized general relativity, and if we consider a non-relativistic regime for v, the relative velocity of the bodies, we end up with this hierarchy of scales: the wavelength of the emitted gravitational wave is much larger than the orbital separation of the objects, which in turn is much greater than the radius of the objects, if we are talking about compact objects like black holes or neutron stars.

Because we have this separation of scales, it is natural to describe the physics at, for instance, the gravitational wave scale using an effective field theory, and in the same way we can have another effective field theory description at the orbital scale. To do this we use the so-called method of regions, in which we split the metric perturbation into potential modes and radiation modes: potential modes are off-shell modes whose scaling is given by this, and radiation modes are on-shell modes which scale according to this relation. Then, for instance, if we want to study the conservative dynamics of the binary system, we can first neglect the radiation modes and build an effective field theory using the Einstein-Hilbert action as the bulk action, which describes the interactions. But we also need an action for the point particles, for the black holes, which in this regime are described as point particles, since the orbital scales are much larger than the size of these objects. Then we start adding operators here, the first one being just the minimal coupling that we have in general relativity, but we can also add degrees of freedom, include finite size effects, and so on, and make the post-Newtonian expansion.

The way we implement this is by expanding the propagators that we build using Feynman rules derived from these actions. We can expand them because, as we have seen here for the potential modes, these quantities contribute at powers of v squared; this perturbative approach is the post-Newtonian approximation, and each of these terms is of higher order in v squared. Then, computing for instance these diagrams here, we obtain this effective action, which is added to the Newtonian potential and gives the first post-Newtonian correction. If we keep the radiation modes, we can instead integrate out the smaller scales to get an effective field theory for the physics at the scale lambda, much larger than the other scales. In this theory we can compute diagrams such as these ones and compute observables such as the energy flux emitted by the system. Other nonlinear effects can also be computed: we have the tail effects that we have investigated, and they are interesting because they have infrared and ultraviolet divergences, which is typical of classical field theory in general, and we could understand this in terms of renormalization group evolution. Also extremely important are these diagrams here: from their imaginary part we obtain information about the energy flux and other observables related to gravitational waves, while from their real part we obtain conservative contributions, coming from radiation reaction, to the dynamics of the system. And I finish by leaving this. Thank you.

Are there questions? I wanted to ask: in your effective action, do you also get dissipative terms? Yes, you can include dissipative terms here; I just put the most famous ones, which I guess are the most important, but yes, you can do that. To build this you have to write down all the operators consistent with the symmetries of the problem, which are diffeomorphism invariance and also worldline reparameterization invariance, so you need to find operators consistent with these symmetries; the ones shown here are the ones that start to contribute first.

I have one small question; can you go to the next slide? Aren't you missing an omega or something in the last equation, inside the integral? Here? I don't think so. I would need to check, but this would be the power emitted in gravitational waves. Yes, precisely, in energy. In general, if you are computing these nonlinear effects you can use this. Maybe next slide.

So at what order do these effects appear, and are they relevant for observations? Yes, this is something I would like to talk more about. This type of diagram here contributes to the dynamics of the system and starts to enter at the 4PN order, which is the order that is completely well understood right now. At the next order, 5PN, you also have diagrams such as these ones, but there are other, more complicated diagrams that you have to include. This is important because 5PN is the first order at which finite size effects enter the description of the binary system dynamics, so basically we need to understand better how to compute these kinds of things; studying the 5PN order is something we are currently doing.

I would just ask about the 5PN diagrams: could you tell me about the lines? What are the vertices proportional to, are the masses important, and so on, because the lines should presumably represent a charge, as in electrodynamics. So here we have two types of diagrams; I mean, we have basically two effective field theories, one that I call the near zone, which describes the conservative dynamics of the system. These two parallel lines represent the worldlines of the particles, and these connections are propagators, which are generated by the Einstein-Hilbert action plus a gauge-fixing term, as I mentioned. Sorry, do the external full lines represent something like a fermion with one index, and is the propagator like the metric, or is it a scalar propagator? The propagator is that of the full metric perturbation here, this h mu nu; the potential modes come from the Einstein-Hilbert action plus the gauge-fixing term, and here we have the coupling of gravity to these worldlines. I don't know if I answered your question. I wanted it pictorially: when we write QED, for example, we draw fermion lines and between them the photon propagator, which we write as minus eta mu nu over k squared; I guess here it is a little bit different. Yes, because in those particle physics computations we do not have worldlines the way we have here; the particles here are treated basically classically. OK.
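For the QED analogy raised in this question, the graviton propagator in harmonic (de Donder) gauge is a standard textbook result, added here for reference (it was not written out by the speaker). In 4 spacetime dimensions,

```latex
P_{\mu\nu;\alpha\beta}(k) \;=\;
\frac{i}{2\,k^{2}}
\left(
\eta_{\mu\alpha}\eta_{\nu\beta}
+ \eta_{\mu\beta}\eta_{\nu\alpha}
- \eta_{\mu\nu}\eta_{\alpha\beta}
\right)
```

so the analogue of QED's $-\eta_{\mu\nu}/k^{2}$ carries two index pairs, one for the metric perturbation $h_{\mu\nu}$ at each end of the line, rather than a single vector index.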
And the vertices, what are they proportional to? What is the interaction? These vertices here also come from the perturbative expansion of the Einstein-Hilbert action plus the gauge-fixing term, in which of course you have to expand in the graviton field to build expressions like this and compute the vertices of the theory. In the same way, in the far zone we also have these vertices, and all the ones we have here come from the Einstein-Hilbert action. We can discuss later. Thank you.

There is another question from the chat: they are asking about the significance of the instantaneity approximation. OK. Basically, to study gravitational waves we have to take a perturbative approach, because we do not have exact solutions for the problem, and this is one of the ways to do it: starting from the Newtonian potential, from Newtonian physics, we can compute corrections from general relativity, and these corrections can be organized in powers of the velocity, so these terms here are velocity squared, velocity to the fourth, and so on. It is simply a convenient way to work perturbatively with the description of the two-body dynamics. Are there other questions? OK, let's thank Gabriel again. The last speaker is Sara Libanore.

Hello everyone, and thanks to the organizers for the opportunity to present my topic here today. I am Sara Libanore; I just finished my PhD in Padova, and now I am a visiting student in Be'er Sheva, Israel. I will talk about the importance of clustering analysis in future surveys of gravitational waves. Before going to that, since I am not sure you all know where Padova or Be'er Sheva are, I wanted to show you this image: if you walk from Padova to Be'er Sheva, ICTP is on your way, so you can just go that way. The names you see in these slides are the people I am working with, and the ones highlighted in bold are the ones important for this work; not my favorites, just the ones who work on this topic with me.

In our work we asked ourselves whether the study of future gravitational wave surveys, and in particular the study of their clustering, will be an effective tool to put constraints on cosmology, astrophysics, or both, in the same fashion as is already done with galaxy surveys. In this talk I just want to give you a taste of what I mean by the words I highlighted; if you want me to be more technical, you can ask a question at the end.

Let's start by discussing clustering. The image you see here is built using mock data from a simulation. Each black dot represents a part of the dark matter content that was drawn from an initial distribution of perturbations and evolved under the effect of gravity; as you can see, gravity brings dark matter to form these large structures that in cosmology we call dark matter halos. Dark matter halos are the densest places in the universe, and therefore they are also the places where baryonic matter falls into the gravitational wells of the dark matter and forms astrophysical sources, all the blue dots you see here. These could be stars or galaxies; in my work they represent binary black hole mergers, and in particular the gravitational waves emitted by these mergers. Using simulations we can start from a dark matter field and model, assuming an astrophysical model, how these sources are distributed and how they cluster across the universe and in spacetime. But when we observe data we have to go the other way around: we observe a distribution of luminous tracers in the sky, in my case gravitational waves, and by studying their distribution and clustering we want to understand the properties of the underlying dark matter field. The parameter that links the clustering of these sources to the underlying dark matter field is called the bias.

The study of the bias was one of the main topics I focused on during my work, in the analytical model describing how these gravitational wave sources are distributed. Future gravitational wave surveys will put constraints on this bias in the same fashion that galaxy surveys do with the galaxy bias. But there is an important thing to take into account. As we discussed during the lectures these days, from the gravitational wave signal you cannot extract the redshift; the redshift is information you do not have in gravitational wave surveys. Either you assume some statistical model, or you see an electromagnetic counterpart to associate a redshift with the gravitational wave event, or you forget about the redshift and go to luminosity distance space. The luminosity distance can be measured from the gravitational wave signal, and therefore it can be used as the radial coordinate of your survey: if you map the binary black hole or binary neutron star mergers in luminosity distance space, you can rely on your gravitational wave survey alone, without needing external datasets or assumptions for the redshift. The other thing I have to underline is that all this work is based on statistics, and to perform a statistical analysis you need a lot of observations. We do not have enough with LIGO-Virgo, so we have to do forecasts for the next generation of detectors; in my case I am thinking about the Einstein Telescope. The Einstein Telescope will push the sensitivity of gravitational wave observations to a level that will allow us to observe around a million events, up to redshift 6 or even further. So with the Einstein Telescope, with Cosmic Explorer, with next generation detectors, we will be able to put constraints on both cosmological parameters and bias parameters.

Since I do not have much time, in this talk I will focus on the study of the bias, and I want to show you an application that constraints on the bias parameters can have for both cosmology and astrophysics, by speaking about formation channels. What I showed you at the beginning, with the distributions of dark matter and gravitational wave sources, referred to astrophysical binary black hole mergers. Astrophysical black holes form at the end of stellar evolution, and therefore they are found where the stars are, which is in galaxies. The bias of such astrophysical binary black hole mergers will therefore trace the distribution of galaxies, which in the standard cosmological model form inside the more massive dark matter halos; so when you measure the bias, you expect the bias of these binary black hole mergers to follow the bias of the galaxies. But cosmological models also allow primordial black holes to exist, and if they exist they can get bound in binaries, and if they get bound in binaries they can merge and produce gravitational waves. Since primordial black holes form before structure formation, their bias will be completely different from the astrophysical one, and therefore by studying the bias you open a way to disentangle astrophysical and primordial black hole mergers, which is quite useful since the signals they emit are almost the same. Let me show you this example: here I assume that the astrophysical binary black hole mergers have a linear bias that evolves with redshift, while for primordial black holes, depending on the model you assume, the bias is almost constant. A future survey of gravitational wave events will measure a mixture of the two, so the effective bias that is measured will deviate from the linear trend you expect for the astrophysical mergers alone, and how much it deviates depends on how many primordial black hole mergers you see among the gravitational wave events. So you can see that the bias can help you put constraints both on cosmology, since you are probing primordial black holes, and on astrophysics, probing the formation channels of these events. If you have any questions I will be really happy to answer, and thank you for your attention.

Thank you. At the beginning you showed this simulation; I was wondering if there is a recipe to populate the halos with the binaries. This is a very interesting question. In the case of this image, the dark matter part was taken from the EAGLE simulation, which is an N-body simulation that evolves dark matter through cosmic time, while the binary black holes were populated using MOBSE, a population synthesis code, which assumes some statistical distribution of the mergers depending on the properties of the host galaxy and, in a statistical way, allows you to populate these halos, or these galaxies, with the merger distribution. What we are doing now with Matteo Peron, whose master thesis I co-supervise, is creating a model to repopulate other simulations starting from these assumptions; we are developing a machine learning pipeline to extrapolate these properties and make them easier to use, because running the full pipeline takes weeks. These are mainly the two ways we are following for these distributions.

Just coming back to this EAGLE thing: EAGLE is a full hydro simulation and it has galaxies, so the prescriptions we are talking about, do they depend on galaxy properties or on halo properties? In the work that Michela and collaborators did, they were based on galaxies, in particular on the star formation rate, metallicity, and stellar mass of the host galaxies of the mergers. What we are doing now is trying to extrapolate the relation with respect to the dark matter halos: since, as you said, you need the bridge between the galaxies, the halos, and the binary black hole mergers, but simulating all the galaxies with full hydrodynamics is very expensive, we are trying to understand whether statistically you can derive properties that allow you to populate just the dark matter part of the simulation, which is much faster to use and to run in different cosmologies, or in boxes of different sizes. So at this point there are galaxies underneath, and the black hole mergers depend on the properties of the host galaxy, but we will soon have updates on a dark-matter-only binary merger relation.

Just another comment: at some point you mentioned in passing that there is an issue with the binary black holes because you do not have their redshifts. There is actually a technique to statistically get redshifts of individual binaries by cross correlating with the galaxies present in the same volume, the clustering redshift technique, and I guess the LIGO collaboration has a couple of papers doing this. I completely agree with you, and in fact what is mostly done is to use this cross correlation with the galaxy field. What we were trying to do is to understand whether we can use the gravitational wave surveys alone, without needing the galaxy field, in particular because with the Einstein Telescope we will go up to very high redshift, where we probably will not have information on the galaxy field. The idea was to understand whether, using luminosity distance space, and assuming we marginalize over the cosmology, because more or less we know the right values of the cosmological parameters through Planck or through galaxy surveys, we can give up the cross correlations and rely on the gravitational wave survey alone. But clearly there is the cross correlation technique, and there is also this clustering-based analysis, used both with radio sources and with gravitational waves, that compares the distribution of a survey with unknown redshifts, such as gravitational waves, with a distribution for which you do know the redshifts, like galaxies, and by looking at cross correlations in different parts of the sky associates redshifts with the gravitational wave events. But yes, I agree with you, that is one way to go.

Other questions? I have a very dumb question: at a certain point you mentioned late primordial black holes; what is the difference with early primordial black holes? So it is not early and late primordial black holes, it is early and late primordial black hole mergers, or binaries. The idea is that you form primordial black holes in the very early universe, in the radiation dominated era, and these black holes stay around and can form binaries. There are two main channels for forming binaries of primordial black holes. One is the early channel, which means the primordial black holes get bound in binaries in the early universe, during the radiation dominated era; they get bound at the beginning, and the merger is just postponed by the fact that they are immersed in a field of other black holes, or dark matter, whose tidal forces prevent the binary from collapsing as soon as it forms. These early binaries just stay around and are part of the dark matter; since they form very early and are part of the dark matter, they trace the dark matter almost one to one, so their bias can be assumed to be around one, because they cluster just like the dark matter. Late binaries, instead, form in a dynamical way: you have primordial black holes that just move around, and when they pass close to one another they can emit gravitational waves, lose energy, and form a binary dynamically. This happens if the cross section of the event is high enough, that is, if the two black holes pass next to each other with a velocity that is not too high; otherwise they just pass by and do not get bound. So this binary formation can happen where velocities are quite low, which is in the small halos, and for this reason the bias of the late primordial black hole binaries is around 0.5, which is more or less the bias of the small dark matter halos, and it is constant, since being a dynamical process it is not really redshift dependent. That is the main reason; I hope that answers your question. Yes, thank you.

Considering that this is a tracer of the large-scale structure, are there other relevant ingredients you can use, for instance higher order biases in perturbation theory, stochastic fields, and so on? OK, it depends on what you want to do. For this work we relied only on b1, the first approximation, because these future gravitational wave surveys will have very poor sky localization, so you will only see the large scales, and since you only see the large scales you will not be able to measure the higher order terms. The work that Matteo is doing now is based on simulated data, estimating the bias in the simulations, and inside the simulations you can access everything, so in that case we are also including the higher order terms of the bias expansion. If there are no further questions, let's thank Sara.
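The bias-mixing argument in the talk can be sketched with a toy calculation. This is entirely my own illustration: the function name, the linear 1+z astrophysical bias, and the assumption of a redshift-independent primordial merger fraction are not from the talk; only the constant primordial bias of about 0.5 is quoted from it.

```python
# Toy model: effective clustering bias of a gravitational wave survey that
# observes a mixture of astrophysical and primordial binary black hole mergers.

def effective_bias(z, f_pbh, b_pbh=0.5):
    """Merger-weighted bias at redshift z.

    f_pbh : fraction of observed mergers that are primordial (taken
            redshift independent here, purely for illustration).
    b_pbh : bias of late primordial binaries (~0.5 in the talk).
    """
    b_astro = 1.0 + z  # toy linear redshift evolution for the astrophysical bias
    return f_pbh * b_pbh + (1.0 - f_pbh) * b_astro

# With no primordial contamination the bias follows the astrophysical trend;
# any primordial fraction pulls the measured bias below that trend.
print(effective_bias(2.0, 0.0))   # 3.0
print(effective_bias(2.0, 0.2))   # below 3.0
```

This is the sense in which a measured deviation from the expected linear trend would signal a primordial contribution.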