two techniques to search for axions and axion-like particles. This meeting is being recorded. So, should I start over? Okay. So the QCD axion is a pseudo-Goldstone boson arising after the spontaneous breaking of the Peccei-Quinn symmetry, which was introduced to solve the strong CP problem by explaining the smallness of the neutron electric dipole moment. And just a few words about the strong CP problem: it is the problem of the absence of CP violation in the strong interactions, or simply, it questions why the vacuum angle theta is so small. If axions exist, their potential dynamically drives theta to zero and solves the strong CP problem. Axion-like particles are particles similar to the QCD axion, introduced to solve other fundamental problems using the same Peccei-Quinn method. They share the same phenomenology as the QCD axion, which is determined by the high energy scale at which the symmetry is spontaneously broken. Because of this high energy scale, all their interactions with Standard Model particles are suppressed. For this reason, plus the fact that they can form the total abundance of the dark matter, they seem to be very suitable dark matter candidates. The interaction common to axion-like particles and the QCD axion is their interaction with photons, which is determined by this coupling g between axions and photons. Most current searches parameterize the axion mass against the axion-photon coupling, and there is a bunch of experiments. We have an upper bound on the axion mass: it shouldn't be greater than a few electron volts, because in such a case axions would not form the total abundance of the dark matter. And they cannot be lighter than 10^-22 eV, because they cannot be dark matter in that case either. We have an upper bound on the coupling, coming from the CAST experiment, around a few times 10^-11 GeV^-1. And there are near-future experiments, like ALPS II and IAXO and many others, trying to scan the parameter space down to around 10^-16 GeV^-1 in the coupling. We also have very good news from the ADMX experiment: their sensitivity has already reached the QCD axion band around the micro-electron-volt scale, which makes us expect to hear very good news about axions from them within the next few years. In the first part of my work, I focused on studying axion-to-photon conversion inside the astrophysical environment of the jet of the M87 active galactic nucleus. For this interaction to happen, we require a background magnetic field, which exists within the jet of the active galactic nucleus. In this model, we consider relativistic axions. The equations are just the Klein-Gordon equation for the axion field coupled with the Maxwell equations for the electromagnetic field, and by solving this system we can obtain the equations for the photon polarization components and calculate the conversion probability (a standard textbook expression is sketched after this paragraph). Our contribution to this work is that we consider the case where there is a misalignment between our line of sight and the jet axis. As such, we use the geometry of this case to argue that the maximum conversion should happen when the misalignment angle is very close to the value of the opening angle of the jet.
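For reference, and not specific to the jet model just described, the textbook two-level mixing result for axion-photon conversion in a homogeneous transverse magnetic field has the form

$$
P_{a\to\gamma} \simeq \left(\frac{g_{a\gamma}\, B_T\, L}{2}\right)^{2} \operatorname{sinc}^{2}\!\left(\frac{qL}{2}\right),
\qquad
q \simeq \frac{m_a^{2}-\omega_{\rm pl}^{2}}{2\omega},
$$

where $B_T$ is the transverse field, $L$ the path length through the magnetized region, $\omega$ the axion energy, and $\omega_{\rm pl}$ the plasma frequency. The jet calculation in the talk generalizes this by letting $B_T$, the electron density, and hence the effective path length depend on the geometry of the line of sight.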
The reason is simply that, with such a misalignment, the axion beam travels the longest distance diagonally inside the jet. We then tried all the possible cases for the misalignment angle and the jet opening angle suggested in the literature, just to test our model. Next, we used our modified model to calculate the total energy spectrum of M87; this is just the spectrum computed for different misalignment angles, and for each case we make sure that our results match the observations. Then, for each case, we obtained our bound on the coupling. Since the most motivated value of the opening angle for M87 from the literature is four degrees, with a misalignment of something less than 20 degrees, we obtained this limit on the coupling. We then compared our model with a model proposed by Conlon and collaborators to explain the Coma cluster soft X-ray excess, based on the conversion from axions to photons. Since they use a stronger coupling than ours in the same kind of model, we argue that their model should overproduce the emission from the Coma cluster, and for a better explanation we suggest using our new constraint in this model. Yeah, sure. One minute. Very quickly, on the second part: I looked at calculating the radio emission that can be produced from axion decay. The axion can spontaneously decay, but with a lifetime larger than the current age of the universe, so we cannot count on this interaction to produce any observable signal. But since axions are bosons, they are very light and they can exist with very high occupation numbers, as mentioned in the earlier talk. They can form Bose-Einstein condensates, and those can thermalize to make axion clumps. In such a scenario, stimulated decay is possible, with an effective decay rate determined by this formula, where the term between the brackets is the enhancement factor. We then estimated the value of the enhancement factor using the contribution from the cosmic microwave background, plus the contribution from the Galactic diffuse emission for selected astrophysical targets, plus the contribution from the extragalactic background. We used that to solve the equation of motion; I'll try to summarize in half a minute. We solved the equation of motion considering those enhancement factors, and we calculated the emission and compared it to the sensitivities of MeerKAT and the SKA. Unfortunately, those fluxes are quite a bit lower, but the non-observation of these signals can instead put some bounds on the coupling. In the case of MeerKAT, they are comparable to the current CAST limit, but a little bit weaker. Fortunately, in the case of the SKA, they exceed the current CAST limit and can be comparable to the projected limits of the ALPS II and IAXO experiments, which argues that radio telescopes can play a complementary role in scanning the parameter space for axions; they can help the experiments in doing that. Well, that's all I want to say. I'll leave you with my conclusions, and thanks a lot. Okay, so a question: you discussed limits from conversion of relativistic axions into photons. Yeah, relativistic, of course. Where are these axions coming from? And how do you know their abundance? Yeah, sure, let me... am I doing something wrong? So they are relativistic axions produced by the decay of string-theory moduli, and their energy should be between 0.1 and one kilo-electron-volt.
And the value we used in our model is 0.15 kilo-electron-volts. What, am I doing something wrong? Yeah, so in this model we use relativistic axions. Yeah, but for the limit you put on it, there is a lot of uncertainty, because there is uncertainty on their abundance. I mean, yes, there are some parameters still involved, and certainly you can say the same about the magnetic field. We tried to take the most acceptable values and did our best to produce something. So there is another question: the relativistic axions are also dark radiation, so do you include the limits from N_eff on these things? Yes, but we didn't consider this in this model. I mean, you are right, there are some people considering that, but we didn't consider it in our model. Because that should also limit the population. Of course, sure. Thanks; any question from Zoom? No? Okay, so thanks, Ahmed, for your talk. We can proceed with... Oh, sorry, where is, which one is the previous one? Yeah, yeah, sorry. Hello everyone, I am Orgajit, I am from India. Today I will be talking about feebly interacting dark matter and a little bit about baryon asymmetry, and I will try to show whether there is any connection with one of the most popular mechanisms for neutrino mass generation, which is the seesaw mechanism. So let's start. As we already know from various experimental evidences, dark matter exists, it has a finite relic density, it is massive, and it is a stable object. Looking at these properties, one can think of this dark matter as a fundamental particle, and if one does, there are some unanswered questions which we still cannot answer. Like: what is the nature of the dark matter, is it a fermion or a boson? What is its interaction with the Standard Model fields, and what is its production mechanism in the early universe? We also know that there is no Standard Model particle which satisfies all the properties we know from observations, so one needs to go beyond the Standard Model of particle physics to accommodate the dark matter. There is another problem in our universe, which is that our universe is a baryon-asymmetric universe. This is generally quantified by this Y_B quantity, which is nothing but the ratio of n_B, the number density of baryons minus the number density of anti-baryons, to the entropy density. From two different experiments in two different regimes, from the CMB and from BBN, we get constraints on this Y_B which are quite similar, so it is indeed telling us that we are living in a baryon-asymmetric universe. One can now ask whether we can generate a baryon-asymmetric universe from a baryon-symmetric universe. Okay, so Sakharov gave us the conditions by which we can generate a sufficient amount of baryon asymmetry, which are C and CP violation, baryon number violation, and out-of-equilibrium decay. But we also know that within the Standard Model framework we are not able to generate the observed baryon asymmetry, so one again needs to go beyond the Standard Model. What we ask is: what is the minimal, or simplest, possibility that brings all these unknowns together? The type-I seesaw mechanism can be one such possibility. So what is the type-I seesaw?
This is a very popular mechanism for neutrino mass, where we add three right-handed neutrinos and write the most general Lagrangian along with the Standard Model particles. What happens is that after spontaneous symmetry breaking, we get a neutrino mass matrix; we diagonalize that mass matrix and get the neutrino mass. This is the formula for the neutrino mass, and you can see why it's called a seesaw: because M_N is in the denominator. So if we have a very large M_N, we can generate a small neutrino mass with this formula; something very big is determining something very small, this small neutrino mass. The same mechanism also generates a new quantity, which is the active-sterile neutrino mixing, quantified by this V, which is M_D M_N^{-1}. Now, the same type-I seesaw mechanism also helps us to generate a sufficient amount of baryon asymmetry via the leptogenesis mechanism, where, you see, I have written the same Lagrangian here. This y_nu is in general a complex matrix, so it can be a source of CP violation; this Majorana mass term is a source of lepton number violation; and the out-of-equilibrium decay of the right-handed neutrino can be shown by comparing its decay rate with the Hubble rate. In this way, we can generate a sufficient amount of lepton asymmetry, which can be converted to a baryon asymmetry via sphaleron processes. So now what we ask is this: we see that the type-I seesaw can generate the neutrino mass, and it can also generate the baryon asymmetry; can that same mechanism explain the existence of dark matter? We added three right-handed neutrinos for neutrino mass generation; can one of the right-handed neutrinos be our dark matter? Then the next issue is that dark matter should be a stable object. If, let's say, my lightest right-handed neutrino is to be the dark matter, then for it to be stable its interaction column should be exactly zero; but that is a very idealized situation, because then the dark matter cannot be produced by any Standard Model interaction. It may be produced gravitationally, but we are not including that here. So this is a problematic scenario. What we propose instead is: can this lightest right-handed neutrino be a feebly interacting massive particle, with a very small coupling of the order of 10^-10? And we go one step further and ask whether that small coupling can be explained very naturally, whether it can be connected with the smallness of the neutrino mass. So what we have done is perturb the interaction column of this lightest right-handed neutrino with small numbers, let's say epsilon one, epsilon two and epsilon three, and we want to show whether these small numbers can be connected with the small neutrino mass. Indeed we can do that. We have to use a parametrization called the Casas-Ibarra parametrization. The summary of this slide is that, using this parametrization, you can show that my small epsilons are connected with the small, lightest neutrino mass m_1, and an outcome of this is that the active-sterile neutrino mixing of this lightest neutrino will also be, one minute, okay sir, will also be proportional to this small quantity; for reference, the standard seesaw and Casas-Ibarra relations are sketched below. So now let's see what we can do. This active-sterile neutrino mixing gives us various production channels for this dark matter.
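For orientation, the standard type-I seesaw relations referred to here can be written schematically as

$$
m_\nu \simeq -\, m_D\, M_N^{-1} m_D^{T},
\qquad
V \simeq m_D\, M_N^{-1},
$$

where $m_D = y_\nu v/\sqrt{2}$ is the Dirac mass matrix and $M_N$ the right-handed Majorana mass matrix. The Casas-Ibarra parametrization then writes, up to convention-dependent factors and conjugations,

$$
m_D = i\, U \sqrt{\hat m_\nu}\; R\; \sqrt{\hat M_N},
$$

with $U$ the PMNS matrix, $\hat m_\nu$ and $\hat M_N$ the diagonal light and heavy mass matrices, and $R$ a complex orthogonal matrix; this makes explicit how perturbing one column of $y_\nu$ by small $\epsilon_i$ ties that coupling to the lightest neutrino mass $m_1$. These are the generic textbook forms; the talk's specific signs and normalizations are on the slides.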
So what we did: after spontaneous symmetry breaking, the neutrinos get mass, and if you write the gauge interactions and Yukawa interactions in this mass-diagonal basis, you will see these types of production channels pop up, and all these production channels are related to the active-sterile neutrino mixing angles. Using these production channels we can solve the Boltzmann equations, and from there we get this plot. Here on the Y axis we have plotted the number density to entropy ratio, and on the X axis we have plotted the dimensionless quantity z. What it is saying is that initially there was no dark matter, and because of W and Z decays the dark matter starts to be produced, and for a large value of z we get a saturated Y_N value. From this Y_N, at large z, we can calculate the relic abundance of N_1, and from this plot we also infer that the main production comes from W and Z decays. We also infer, with a little bit of calculation, that this dark matter relic density is independent of its mass; it only depends on the lightest active neutrino mass, and we have found that a lightest active neutrino mass of the order of 10^-12 eV will generate the sufficient amount of dark matter. The same active-sterile neutrino mixing can also help this dark matter to... one, two slides, sorry, one second, one second, it's almost over. So the same active-sterile neutrino mixing will also let the dark matter decay. There are various possible channels, but the most stringent constraint will come from the radiative decay process of this N_1, and if you calculate the decay rate, you will see that it is also related to these mixing angles. So one can search for these photons and try to find whether these photons are there in the universe; that also puts a bound on the active-sterile neutrino mixing angles. So you see the light blue region is excluded because of the non-observation of these X-rays, and this magenta line, which we have found, satisfies the correct relic abundance for our scenario. From here you can easily see that M_1 below 1 MeV is the allowed region, which can be a feebly interacting dark matter. The lower bound we have found from structure formation, which is the Tremaine-Gunn bound, at around 1 keV. So we can say that dark matter in the 1 keV to 1 MeV range can be a feebly interacting dark matter. The other two right-handed neutrinos can be used for leptogenesis, which I am not talking about. So ultimately, what I am trying to say is that the type-I seesaw itself provides the most minimal platform to explain the neutrino mass and dark matter, as well as baryogenesis. And that's all. Yeah. Sorry, and sorry for that. Any question from the audience? No, from Zoom maybe? Do these feeble interactions with W and Z not spoil the BBN constraints? No. The interaction strength is very small, so it will not affect them. No other questions? Then thank you again. So the next speaker is Aurora Vanzan. How does it work? Okay. Okay. One, two. Yeah. All right. So hi everyone, I am Aurora from Padova. And my project here, wait, wait, sorry. My project here focuses on a particular candidate for dark matter: ultralight axion fields, where by ultralight I mean in this mass range, so about 10^-19 to 10^-21 electron volts. And this, as was just discussed in the seminar earlier, naturally provides a solution for one of the puzzles of the cold dark matter model.
That is, it naturally washes out power on small scales, on scales comparable to the Jeans length of the field we are talking about. This is termed fuzzy dark matter, or sometimes also wave dark matter. This is the set of equations we were writing down yesterday; you can see that they look exactly like the cold dark matter equations except for the appearance of this term here. This term has its roots in the quantum pressure of the ultralight field, and it acts as an effective sound speed for the axion. Therefore, it washes out power on scales smaller than the Jeans length of the particle (a compact form of the linear-theory statement is sketched after this paragraph). This plot is just to appreciate the difference between various mass ranges for the axion; as you can see, the lightest particles wash out more power. So the idea is to look for this suppression on very small scales, and it is convenient to go to the dark ages, because things remain linear down to small scales, since essentially no astrophysical complications enter the game. A powerful probe in the near future will be 21-centimeter line intensity mapping. For those of you who are not familiar with line intensity mapping: the basic idea is that you select a frequency, the 21-centimeter spin-flip line of the hydrogen atom, and this gets redshifted by the expansion of the universe. Therefore, you can do tomography: it's as if you had a lot of CMB screens, each at a different redshift slice. This allows you to probe the dark ages, where we are essentially blind right now. So what I've been doing is essentially a Fisher forecast to look for this suppression at the very highest multipoles in the angular power spectrum of 21-centimeter intensity mapping. At this stage, as you can see, this assumes a very futuristic lunar radio array experiment. So what we can do is look for a smarter effect that allows us to see some imprint at lower multipoles. Let me take a step back and talk about the relative velocity between baryons and dark matter at the time of recombination. Right before recombination, baryons traveled together with photons, while cold dark matter, or dark matter in general, simply followed the geodesics and created its potential wells, because it did not talk to photons. Therefore, there was a relative velocity between the two, and at recombination, when baryons decoupled from the photon fluid, they did not immediately realize that there were gravitational potential wells. This delays the growth of structure, and this suppresses power, too, on the scales where the relative velocity effect is important. This picture just summarizes everything: the dashed lines are without taking into account the relative velocity effect, while the solid lines include it, and as you can see, there is a suppression. The black lines here are the usual cold dark matter model, while the light blue lines are the models with the ultralight axion. Now, you can imagine that at recombination the universe was divided into many patches, and in each of these regions the relative velocity had a slightly different background value. Therefore, your power spectrum gets modulated on the scales over which the relative velocity varies, and this leads, in other words, to a long-short mode coupling that enhances the power on small scales. We can use this effect to search for an enhancement at the lowest multipoles in the 21-centimeter angular power spectrum. You see that there is a relative correction at low ells due to this long-short mode coupling.
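To make the suppression statement concrete: in the standard linear-theory treatment, the density contrast of an ultralight field of mass $m_a$ obeys

$$
\ddot{\delta} + 2H\dot{\delta} + \left(\frac{\hbar^{2}k^{4}}{4 m_a^{2} a^{4}} - 4\pi G \bar{\rho}\right)\delta = 0 ,
$$

where the first term in the brackets is the quantum-pressure contribution acting like an effective sound speed. Setting the bracket to zero gives the comoving Jeans wavenumber

$$
k_J = \left(\frac{16\pi G\, \bar{\rho}\, m_a^{2}\, a^{4}}{\hbar^{2}}\right)^{1/4},
$$

so that modes with $k \gtrsim k_J$ are suppressed, and lighter particles, with smaller $m_a$, suppress power out to larger scales. This is the generic fuzzy-dark-matter result rather than the exact system on the speaker's slide.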
So this hopefully will allow us to appreciate the effect of ultralight axion fields with more realistic surveys. And, well, this is still work in progress, so I do not have results yet for the second-order effects. Do I still have a lot of time? That's strange. Okay, so, I don't know. Thank you for your attention. Are there questions? I missed it perhaps, but what masses will you be able to constrain using this? So I'm looking at the 10^-21 to 10^-19 mass range, which is a very delicate one, because Lyman-alpha constraints are closing it from one side, and then you have black hole superradiance closing it on the other side. So essentially what remains is around 10^-19 electron volts. There is, I think I forgot to mention it, a recent paper by Dalal and Kravtsov that seems to constrain the mass to be bigger than three times 10^-19 eV; this is based on stellar velocity dispersions in ultra-faint dwarf galaxies. So at least their claim is that the mass has to be bigger than three times 10^-19 eV, and it might be worth looking into that as well. Well, that would still be in the range where the effects of the axion suppression and the relative velocity suppression are more or less comparable. Okay. On the left, why did you choose to use the angular power spectrum? Well, usually in 21-centimeter studies you look at the C_ells, as you do in CMB physics, basically. I don't, maybe I didn't understand your question. I was wondering if there was a specific reason why you chose the angular power spectrum. Well, when you do the forecast, you assume that you look at redshift bins that are independent, basically, and so you sum the information over every redshift slice. But what typically happens is that the first redshift slices contain the most information. Since you're probing larger values of k: what I understand is that the 21-centimeter detections, at least the initial ones, will be used to probe large scales, so small k. So do you worry that, by going to larger k, the relation of the 21-centimeter power spectrum to the dark matter power spectrum may not be very simple? You mean since it's not linear anymore? For example, I may not be able to think of the 21-centimeter power spectrum as a simple multiple of the dark matter power spectrum anymore. Well, through the bias, you mean? You can think of it in terms of bias, but it's not clear which scales you plan to probe, because this will depend on the mass that you're trying to constrain. Yes; well, if I look at the first-order effects only, we are at multipoles of about 10^6, even larger, which is really high, so it will be the target of very futuristic experiments. But hopefully, if you account for the second-order relative velocity effects, they would leave an imprint at more realistic, smaller multipoles. Okay, so, Marta, do we have any questions from Zoom? Okay, so thank you again. Okay, thanks. So the next speaker is Beatriz Tucci. Hi everyone, I'm Beatriz Tucci. I'm currently a PhD student working with Fabian Schmidt at the Max Planck Institute for Astrophysics, but today I'll be talking about work that I developed during my undergrad and master's, which is called the spin bias of dark matter halos. And I did this in collaboration with Raul Abramo and Antonio Montero-Dorta at the University of Sao Paulo.
So here's the outline of my talk: I will first introduce what secondary bias is, then I will go over low-mass and high-mass spin bias, I will briefly comment on how we can probe spin bias in observations, and I'll give you the conclusions. We have been studying, in the large-scale structure course, the distribution of matter particles, and we know that on top of that we have the distribution of dark matter halos and galaxies. We call these objects cosmological tracers of the underlying matter distribution. Of course, these objects differ from one another in several properties, such as mass, size, age, spin, and so on. We can connect the distribution of these cosmological tracers, which are what we actually observe in the sky, to the distribution of matter with the so-called bias expansion, and we can write it by considering all the relevant operators at each order. It's very important for us to understand this connection via the bias parameters, because they encode a lot of information on the physics. It has been known for a long time that halo bias depends primarily on halo mass: basically, for the linear bias, the higher the mass of the halo, the higher its linear bias. However, it has been shown in numerical N-body simulations that, at fixed mass, the halo bias actually also depends on several secondary halo properties, such as age, concentration, spin, and so on. So in this plot, for example, if we take halos at a fixed mass and separate them into the oldest and youngest halos, we see that, oh, sorry, there is a difference in bias between them: the oldest halos here have a higher relative bias than the youngest halos, and this trend decreases with halo mass. Actually, to this day we do not have an agreement in the literature, I would say, about observational evidence of secondary bias, and we also do not have a complete analytical framework to account for all these trends. And that's where my work comes in. Perhaps the most well-studied case of secondary bias is assembly bias, which is the secondary dependence of halo clustering on the assembly history of halos, and we can parameterize the assembly history of halos with, for example, their age or their concentration. Here is a plot of a slice of a simulation where we took halos of a fixed mass and separated them into more and less concentrated. You can see that these two populations of halos populate different regions of the cosmic web, so these halos are in denser regions here, and this reflects the fact that they have different bias. Okay. But in this talk I'll be focusing on spin bias, which is the secondary dependence of halo clustering on spin, which is basically a dimensionless quantity related to the halo angular momentum. We can see in this figure that we can separate spin bias into two regimes: at the low-mass end, low-spin halos have a higher bias than high-spin halos, while the opposite is true at the high-mass end, right? So basically, I'll be trying to understand what is happening in this figure. For the low-mass spin bias, what we found is that this inversion of the trend at the low-mass end can be completely explained by a very specific population of halos called splashback halos.
So splashback halos are distinct halos at the present redshift that we are analyzing, but that have previously passed inside the virial region of another halo; they are ex-subhalos. As you can imagine, most of these halos still live near their previous hosts, so they kind of inherit the large-scale bias properties of those hosts; usually these halos have a higher bias than other halos of similar mass. Also, just to show you: we measured the spin bias at several redshifts and for several halo masses, and what we saw is that after removing this very specific population of splashback halos, which are usually around 10% of the population depending on the mass range, the low-mass inversion completely disappears. The physical reason why these splashback halos drive the low-mass spin bias inversion is that when such a halo passes inside the virial region of another halo, it suffers from tidal stripping due to tidal interactions with the host. This tidal stripping, besides decreasing the halo mass, also decreases the halo angular momentum, and one of the explanations for that is that by removing the outer particles of the halo, which have a higher angular momentum, you decrease the halo angular momentum as well. So here's just a wrap-up, right? First of all, splashback halos naturally have a higher bias because they live near these massive ex-hosts. They also have a lower spin, because when they passed inside these ex-hosts, they suffered from intense tidal stripping. And they are more important at low masses and at recent times, which is also true for this low-mass spin bias inversion that we were observing. Okay, now that we understand what is happening in this region here, we can turn around and ask ourselves why high-spin halos have a higher bias than low-spin halos at the high-mass end. And it's very convenient that we are at the high-mass end, because now we can use analytical tools for halo formation, such as the peaks formalism and excursion sets. With these tools we can, in principle, predict analytically the dependence of the halo bias on secondary properties such as the spin. What has been shown in this seminal paper here is that if you take halos at the present redshift and trace them back to the initial conditions, you usually find that they formed at peaks of the initial density field, and it has been shown that, at fixed peak height, halos with different curvatures have different bias: halos with a shallower curvature usually have a higher bias than halos with a sharper curvature. And it has been shown in this paper that, for the high-mass assembly bias, halos of lower concentration also form from shallower peaks, which would, in principle, explain why they have a higher bias than higher-concentration halos at fixed mass. It has also been shown in this paper that the angular momentum can be related to the curvature, so in principle both effects could originate from the same physical mechanism. We can also think about spin bias in the excursion set peaks formalism: basically, if we take the initial density of these halos, we can see that, for a fixed halo mass, the scatter in their initial density is correlated with their initial shear. And we also know that the initial shear is correlated with the halo angular momentum through tidal torque theory (the standard expression is sketched below).
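For reference, in tidal torque theory the protohalo angular momentum grows at linear order as

$$
L_i \;\propto\; a^{2}\dot{D}\;\epsilon_{ijk}\, T_{jl}\, I_{lk},
$$

where $T_{jl}$ is the tidal (shear) tensor, $I_{lk}$ is the inertia tensor of the protohalo patch, and $D$ is the linear growth factor; the torque vanishes when the two tensors are aligned. This is the classic linear-theory result (e.g. White 1984), and it is the sense in which the initial shear traces the final halo spin in the argument above.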
So in principle we can also use this barrier here to derive the dependence of the bias on the spin of the halos at a given mass. These are just some preliminary results, where I measured the initial curvature and initial density of halos separated by spin, and we can see that they form from shallower peaks and also have a higher initial curvature and a higher initial density. And just to show you an interesting work we did: we can actually probe the spin bias in real data, in principle, by using the kinetic Sunyaev-Zel'dovich effect to trace the halo angular momentum, in such a way that we can in principle separate halos into higher and lower spin and try to recover the signal in observations. And that's it, thank you. Questions from the audience? Sorry, I wonder, you said in principle, but is it something that can actually work, since it's 'in principle' at the end? We have measured it in IllustrisTNG, which is a hydrodynamic simulation, which we believe explains the distribution of galaxies well. For us to really have a good model for that, we would have to carefully test what the signal-to-noise ratio would be, do a more careful analysis, and then apply it to real data with clusters and so on. But the problem is that nowadays we have very few clusters with a kSZ signal, so it would only be something to do in the future. Sorry if I missed this, but there are recent papers about the spin of filaments as well. Yes. So can you tell us a little bit about the update on that, and does your work connect to it? Yes, so I think the spin of filaments can't be explained by tidal torque theory, if I'm not mistaken, from what I heard, so I think this would be a bit different from where I can use tidal torque theory. All my work has been focused on halos for now, but of course I could try to at least see in simulations how this happens for filaments and think of possible explanations, and even see whether at fixed mass filaments show a different bias, because filaments are also tracers. I have a question: is there any correlation between mass and spin? Yes, yes, there is a correlation. But actually the distribution of spin is roughly independent of mass: if you plot, for example, the mean spin as a function of mass, you see almost a constant; there is only a very small trend. So thanks again. The next speaker is Prem Vijay Velmani. Sorry about this. Okay, I'm, myself, Prem Vijay Velmani from the Inter-University Centre for Astronomy and Astrophysics. I'm going to talk about my recent work on how galaxy formation affects dark matter halos. Enough has been said about dark matter halos in the lectures and the previous talks, so I can skip through the introduction. Basically, most of the properties of dark matter halos have been studied through gravity-only simulations. However, the formation of galaxies within those halos can also strongly impact the halos, and we are interested in exactly that aspect. How we study this is primarily using hydrodynamical simulations, where the baryonic processes, the various underlying astrophysical processes like star formation, cooling and all this stuff, are incorporated as sub-grid prescriptions. For example, the EAGLE project: you can see that it simulates a cosmological volume, and if you zoom into a small patch, you can see that there are galaxies in it.
So we use such simulations, like EAGLE, and there is another suite of hydrodynamical simulations, IllustrisTNG. Alongside those simulations, we also use their corresponding gravity-only runs, where the same initial density field is evolved with all the baryonic physics switched off. We then take the halos from these corresponding simulations and match them by looking at their protohalo patches in the initial density field. After matching them, for example, here is shown one halo from IllustrisTNG alongside its matched counterpart from the gravity-only run; the same can be done in EAGLE. You can see in the left panel that the halo in the hydrodynamical run, with the full baryonic physics, is distinctly different from the same halo in the gravity-only run: we can easily note that the halo in the hydrodynamical run is more spherical, and it's also somewhat more compact, if you notice. Actually, these differences have been studied over the last two decades, but initially they were studied through individual halo simulations; now, with the availability of cosmological simulations, we can study these effects in a more statistical sense. We focus in particular on the change in the compactness of the halo, or what we call the relaxation of the dark matter content in response to galaxy formation. Physically, this has usually been understood in terms of adiabatic invariance. One of the simple models is by Blumenthal et al., where they make the idealized assumptions of spherical symmetry and perfect angular momentum conservation for the dark matter particle orbits, and also assume no shell crossing, so that if we were to look at the same halo before and after the galaxy formation, the change in the dark matter particle orbits is given by the change in the total mass enclosed by the spherical shell in which that dark matter particle sits. This gives a simple relation for the change in a particle's orbital radius, r_f/r_i, where r_f is the final radius after the galaxy formation. Here, a note on terminology: when we say after relaxation and before relaxation, it is equivalent to saying that we are looking at the halo in the hydrodynamical run, with the galaxy formation, versus in the gravity-only run, where we switch off all the baryonic physics and use only gravity. The latter is considered the unrelaxed halo, and the halo in the hydrodynamical run we consider as the relaxed halo. So this produces a simple relation for the relaxation ratio r_f/r_i in terms of the mass ratio. Here M_i(r_i) is the total mass enclosed within the shell in the gravity-only halo, and M_f(r_f) is the total mass enclosed within the corresponding shell, where the dark matter mass enclosed at r_f is the same as the dark matter enclosed in the initial shell, but the total mass has changed, because the baryons have either moved into or out of the shell. Generalizing such adiabatic relaxation models gives a relaxation relation of this form: r_f/r_i as some function of the mass ratio. But what we find is that if we try to extract such a relaxation relation from these hydrodynamical simulations, IllustrisTNG and EAGLE, there is a wide variation in the relaxation relation with mass. For example, in the first panel we show, let's look at this one, this is for halos of mass 10^12 h^-1 M_sun.
Here the individual lines correspond to single halos, and the thick line and thick markers correspond to the averaged behavior of the relaxation relation. While there is a wide variation in the relaxation relation across halos of a given mass, there is also a strong difference, if you notice the x-axis ranges, in that the average relaxation behavior also changes strongly with mass, which is more obvious in this next plot. Using the three different boxes of the IllustrisTNG simulation, we were able to extract the relaxation relation across a wide range of halo masses. Here the color denotes the mass: the darkest, black one corresponds to the cluster scale of 10^14 h^-1 M_sun halos, and the red ones are the very dwarf galaxy halos. So we see a wide range of differences in the relaxation relation, and this is consistent with the results from EAGLE. But it turns out that this is not really telling us about genuine differences in the relaxation physics with mass, because we find the following: suppose we take the relaxation relation at a fixed relaxed radius across the different halos in a given sample, say a hundred halos selected by some criterion; I fix this relaxed radius, calculate the relaxation ratio and the mass ratio for each halo, and plot the relaxation ratio as a function of the mass ratio. Then we find a very simple relation: restricted to a particular relaxed radius, this gives a linear relation for the relaxation ratio chi. So we then checked this linear model on other halos. Here we show a variety of different halos at different mass scales, and we see that at all the mass scales we considered, and at the different radii we considered, the relation between the mass ratio and the relaxation ratio is very well described by a linear relation. Now, if we take the slope and intercept of that relation, q1 being the slope and q0 being the intercept, and plot them as functions of the final relaxed radius, then we find that these two parameters, which completely capture the relaxation behavior, are much more similar across the different halo masses. This is in strong contrast to what we saw initially, where the global relaxation relation looked very different for different halo masses. We can then simplify this relation between the relaxation behavior and radius even further: if we look at this q1 as a function of r, and q0 as a function of r, and focus on halo masses less than 10^13 h^-1 M_sun, then we see that the slope q1 is monotonically increasing, in fact very linearly increasing, with respect to the log radius, and the q0 parameter, which represents the offset, is roughly constant for such low-mass halos. So we made a simplified three-parameter model to capture the relaxation relation; this model, with the three parameters q10, q11 and q0, can capture the relaxation relation for all the halos except the cluster-scale halos (a hedged summary of these relations is sketched below).
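To fix notation, and hedging on signs and conventions since this is reconstructed from the talk rather than from the paper: the Blumenthal-type adiabatic model gives

$$
r_f\, M_f(r_f) = r_i\, M_i(r_i)
\;\;\Longrightarrow\;\;
\frac{r_f}{r_i} = \frac{M_i(r_i)}{M_f(r_f)},
$$

while the measured relation at a fixed relaxed radius is linear in the mass ratio,

$$
\chi \equiv \frac{r_f}{r_i} - 1 \;\simeq\; q_0 + q_1\!\left(\frac{M_i(r_i)}{M_f(r_f)} - 1\right),
$$

with, for halos below $10^{13}\,h^{-1}M_\odot$, a slope growing linearly in log radius, $q_1(r_f) \approx q_{10} + q_{11}\ln r_f$, and a roughly constant offset $q_0$. The exact definitions are the ones on the speaker's slides.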
Now, looking at the physics of this q0 parameter, because q1 has been looked at previously in the literature, but this q0 is saying something strange: if the relaxation ratio is less than one even when the mass ratio is equal to one, what this means is that a shell relaxes even though the total mass enclosed by the shell remains the same. Equivalently, we can also think of it as: when the relaxation ratio is equal to one, that is, when there is no relaxation, the total mass enclosed has still changed. This could possibly be because the baryonic feedback has pushed some of the baryonic mass away, so that the final total mass enclosed is lower than the initial one and M_i/M_f is greater than one; but to test this further we need to study how this relaxation changes with respect to the halo properties. I will go to the final result that I want to show. This one. I'm going to the conclusion. No, I'm just going to the conclusion. So we can discuss. So please, any questions from the audience? From Zoom maybe? Then, thanks again, first of all, and we're going to have a five-minute break before the next presentation. Thank you. This is Sonja Ornella Schobesberger, please. Thank you. Yeah, welcome. Good afternoon. My name is... Ah. Ah, okay. Did I do that? No, I think so. Second try. So yeah, my name is Sonja, I'm in the first year of my PhD, and I'm happy to present to you a paper I contributed to at the beginning, titled Analytical Growth Functions for Cosmic Structures in a Lambda CDM Universe. It's going to be centered around analytical avenues and predictions for the large-scale structure of cosmic matter. A lot of the setup is actually closely related to what we learned today and in the past days, so maybe I can keep this short. Essentially, all of this is about the nonlinear, large-scale dynamics of cosmic matter and the idea that we can describe it, in the right limits, by a phase-space distribution and, taking the essential limits, treat it exactly like a fluid: in the weak-field, non-relativistic and collisionless limit, and also in a limit where we can actually describe all of the evolution analytically, namely assuming cold initial conditions. We arrive at the maybe by now familiar system of equations, the Euler-Poisson equations for cold cosmic matter. Again, the assumption here, and we have learned this already today, is the single-stream regime, so vanishing velocity dispersion and vorticity. If you linearize the system, and also this we have seen today, the solution factorizes into a temporal and a spatial part, and, just so we know what we're talking about, if we consider the appropriate Friedmann equation, where the fluid is essentially exposed to a constant dark energy component, the Lambda, then the temporal part will be given by a hypergeometric function, and the spatial part will essentially be given in terms of the initial gravitational potential. Okay, and also this we have seen today, so I will keep it short: essentially, the basic or main tool for all of this work is Lagrangian perturbation theory. The change is to switch from density and velocity to a moving coordinate system that moves along with the trajectories.
These trajectories will eventually meet at a point, which is often referred to as shell crossing, and this is where this approach has local breakdowns, so to say. The idea is to expand the displacement field in the time variable itself, exactly, and it breaks down at shell crossing. So the purpose of the paper was to find basically exact growth functions for a Lambda CDM Friedmann equation, and also to analyze different, so to say, summation techniques within LPT. Why does this matter? In this context, there is an effort in the community to forward model this fluid as far as we can with analytical tools. So this can be relevant for forward modeling, for effective field descriptions that then have many other applications, or when you consider, for example, massive neutrino cosmologies. So what do people do when they want to forward model? There are approximations: you can, for example, consider the Einstein-de Sitter Friedmann equation, solve for your displacement field, and then replace your time variable with the linear growth factor of a Lambda CDM model. This is an approximation. Another approach, and now we come to a different kind of summation technique, is where you do not order by the time variable, or powers of it, but by certain spatial kernels. If you do that, there are also approximations in the form of asymptotic considerations; a famous paper is by Bouchet et al. from 1995. So in our paper we came up with a partly new formalism, the D-time formalism, where we do not discard any couplings between Lambda and the matter itself. We expand in terms of the refined time variable D itself, we developed all-order recursion relations in order to recursively solve for the coefficients of this expansion, and then we developed a resummation technique to go, so to say, from this time expansion to the summation in spatial kernels, with corresponding growth functions. Doing this, the output is exact growth functions, exact up to arbitrary precision, at any order you want, given by expressions in terms of the refined time variable; this is one of the results you see here. These expressions also converge really fast; that's why I'm talking about exact. So, maybe later, yeah, sorry. Exactly, so we developed these, also extended this to the velocity coefficients, and calculated the one-loop power spectrum and tree-level bispectrum out of this. The results for the growth functions are that the difference between these exact predictions and the case where you turn off, so to say, the explicit appearance of Lambda is small for the growth functions, below 2%. It is up to 4% when we talk about the velocity coefficients, so it's a non-negligible effect there. When we talk about the power spectrum and bispectrum, the effect is less than 1%, but it shows a scale-dependent power suppression similar to massive neutrino cosmologies, which could be relevant in the corresponding pipelines. And, as a finish, the take-home messages: there now exist true analytical growth functions in the cold limit of the Lambda CDM universe, and they are implemented in monofonIC, an initial conditions generator (the structure of the two expansions is sketched below).
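Schematically, and hedging on the paper's precise notation, the two orderings discussed here are

$$
\boldsymbol{\psi}(\boldsymbol{q},D) \;=\; \sum_{n\geq 1} \boldsymbol{\psi}^{(n)}(\boldsymbol{q})\, D^{n}
\qquad\text{versus}\qquad
\boldsymbol{\psi}(\boldsymbol{q},t) \;=\; \sum_{n\geq 1} D_n(t)\; \tilde{\boldsymbol{\psi}}^{(n)}(\boldsymbol{q}),
$$

where the left-hand side is the D-time expansion, a Taylor series in the linear growth variable $D$ with purely spatial coefficients, and the right-hand side is the resummed form ordered by spatial kernels $\tilde{\boldsymbol{\psi}}^{(n)}$ with their own growth functions $D_n(t)$. In Einstein-de Sitter, $D_n \propto D^{n}$ exactly; in Lambda CDM the $D_n$ pick up the Lambda-matter couplings that the paper keeps.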
There is a straightforward way to apply and extend these techniques to more generic cosmologies, and we highly recommend using these growth functions when you do forward modeling, and also in the context of effective field theories and, for example, neutrino cosmologies. Thank you very much. Thank you. So, this expansion that you had, with D and so on: the coefficients psi 1, psi 2, are they the usual expressions that we calculate in perturbation theory? Exactly, so, yeah, here. So this would be the maybe standard perturbative approach, with this psi 1, psi 2. Is it the same psi that appears in this? It's not the same, exactly. So maybe, yeah, we should be careful here: these are the large ones, these are the small ones. Yeah, the notation is essential here. These are the coefficients when you sort by the time variable, so this is, so to say, the new approach; and the standard approach is, so to say, these ones, where the psis are the spatial kernels that actually arise from the Jacobian, when you think of the q-derivative of psi or combinations of it. Yeah. Hi. Okay, I don't know if I understood correctly how the corrections appear, but it seemed to me that the corrections to the perturbative theory appear after the perturbative theory breaks down, isn't it? I know. Okay, so this is an important distinction, right? The breakdown, as soon as shell crossing happens, means that none of this is applicable; it is then just rendered meaningless. This is also why I mentioned effective field theories: what you would do there is take these approaches, ideally with this formalism because it's highly precise up until any time, apply the best filtering techniques that you have, which could be in Fourier space or other spaces, and I'm working on other spaces, then filter out wherever shell crossing happens and combine this with other theories that work afterwards. This is also what was mentioned in the lecture today. So, yeah, two different things to consider. Okay, any question from Zoom? No? Okay, so thank you again. Thank you very much. Next speaker is Ivana Babic. Ah, okay, okay. Seven minutes. Okay, so hi everyone, I'm Ivana Babic. I am currently a PhD student at the Max Planck Institute for Astrophysics in Munich, and I will be telling you about this work on BAO scale inference from biased tracers using the EFT likelihood. So, baryon acoustic oscillations are visible as an oscillatory feature in the matter power spectrum: we see them as these wiggles here, and we also see them in the correlation function, where they appear as this bump. Now, since the size of the physical scale corresponding to the BAO is known, measuring its apparent size in the late-time matter distribution allows us to estimate the angular diameter distance and the Hubble parameter as functions of redshift. But before we can apply this method, we have to face two problems. The first problem is that matter evolves nonlinearly, and the second one is that we don't even observe the evolved matter density field directly: what we observe are biased tracers of this field, and the distribution of these biased tracers is affected by the highly nonlinear structure formation.
And this nonlinear structure formation makes our life difficult because it decreases the precision with which we can measure the BAO scale from galaxy clustering. Essentially, it reduces, shifts and damps down this peak in the correlation function, and it erases the wiggles at high k in the power spectrum. This is why many BAO reconstruction methods have been developed, but most of them rely on so-called backward modeling, and we wanted to see how well we can infer the BAO using the forward modeling approach. In forward modeling, we start from the initial phases, the initial conditions corresponding to the primordial fluctuations, and then we evolve them into the observable structures of today. Our main goal is to find a joint posterior for the initial density field, cosmological parameters, bias parameters, and the stochastic amplitudes. To find this posterior, we need four separate ingredients. The first thing we need is the prior on the initial conditions; in our case, we assume that these initial conditions are Gaussian, simply because this is predicted by inflation and probed by the CMB. Once we have initial conditions, we apply some forward model for matter and gravity and combine it with a bias expansion to get the density field of a tracer at the redshift of interest; in our case, we used third-order Lagrangian perturbation theory combined with a third-order Lagrangian bias expansion. Finally, we need a likelihood which allows us to compare our theoretical prediction to some data. In our case, the data came from N-body simulations: we used real-space halo catalogs, which is why in the paper all our results are expressed in terms of halos, but this can be applied to any other biased tracer just as well. The fact that we were using simulations came in handy, because it allowed us to fix the initial phases to the exact ones used to initialize the simulation in which these halos were found. So all our results are for fixed initial conditions, and our future work is going to be on varying these initial conditions; some work on this is already happening in the group. Now, we wanted to constrain the BAO scale only from the information available in the oscillatory part of the power spectrum; we didn't want to refer to the broadband part. We also wanted to be able to change the BAO scale in the initial density field, simply because this feature was imprinted onto the matter density field so early in the universe's history that it made sense to change it in the initial conditions. To be able to do this, we started with this approximation, which separates the linear power spectrum into the broadband part and the oscillatory feature, and in this oscillatory feature we introduced the parameter beta as the ratio of a rescaled BAO scale to the fiducial BAO scale. So for beta equal to one, we recover the fiducial value of the BAO scale, and changing beta changes the BAO scale without changing the broadband part; it only changes the wiggles. We then introduced this function f to be able to apply these changes directly onto the density field (a hedged sketch of this construction follows below). So essentially what we were doing is feeding all these different initial density fields into the forward model: all the cosmological parameters were the same, and the only difference was in the size of the BAO scale.
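A hedged sketch of this construction, reconstructing the notation from the description rather than from the paper itself: split the linear power spectrum as

$$
P_L(k) \simeq P_{\rm nw}(k)\,\bigl[1 + \mathcal{O}(k)\bigr],
$$

with $P_{\rm nw}$ the smooth no-wiggle broadband and $\mathcal{O}(k)$ the oscillatory part. Introducing $\beta \equiv r_s / r_s^{\rm fid}$ then amounts to rescaling the phase of the oscillation, roughly $\mathcal{O}(k) \to \mathcal{O}(\beta k)$, leaving $P_{\rm nw}$ untouched. The change can be pushed onto the initial field itself with a transfer function,

$$
\delta^{(\beta)}(\boldsymbol{k}) = f(k;\beta)\, \delta(\boldsymbol{k}),
\qquad
f(k;\beta) = \sqrt{\frac{P_L(k;\beta)}{P_L(k;\beta=1)}},
$$

so that the same initial phases are evolved through the forward model with only the BAO scale changed.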
And finally, the likelihood we were using is this EFT likelihood. The most distinctive feature of this likelihood is that it works at the level of the field, as you can see here, which means it captures all the information at once: because it works with the field, it gets the information from the power spectrum, the bispectrum, and so on. But to really be able to tell how well this likelihood works, we had to compare it to something, and it was a little bit tricky to find what to compare it to, because we fixed the initial conditions, so comparing to results already out there wouldn't really be fair; it would be like comparing apples to oranges. So what we did is take a standard power-spectrum-level likelihood, but recalculate the covariance to reflect the fact that the initial phases are fixed; this way we were able to do a very fair comparison, apples to apples. One more thing that is important to mention is that for this power-spectrum-level likelihood we didn't perform any additional reconstruction: the input was the one coming from the forward model, the same one going into the EFT likelihood. So this is a comparison at the level of the likelihood only. And finally, let's see the results. We were testing two things: we wanted to see how biased the EFT likelihood is, and we wanted to compare its one sigma to the power spectrum one. We see that this likelihood is really unbiased: the bias is actually below two percent across all the redshifts and across all the halo mass samples we were using, and if we focus only on the least massive halos, the bias is actually below one percent, which is really good. Maybe the most interesting plot is the one comparing these two likelihoods: we see that for a very small cutoff, the two likelihoods have very similar performance, which is to be expected because the data is close to Gaussian there, so they extract the same amount of information; but as the cutoff increases, the nonlinearities become more significant and the EFT likelihood outperforms the power spectrum likelihood. So finally, for the takeaway: in the case of fixed phases, with varying phases to come in the future, we see that the remaining systematic bias for the EFT likelihood is below one percent, and the improvement compared to the power spectrum likelihood is between 1.1 and 3.3 depending on the cutoff. That's it, thank you very much. What covariance matrix did you use? Sorry, I cannot hear you. What covariance matrix did you use? How did you obtain your covariance matrix? Well, do you mean for the EFT likelihood case or for the power spectrum one? Because there are two likelihoods. Okay, so in this one, sorry, how do I go back? It doesn't, yeah, it doesn't want to go anywhere. Okay, can I use this? Okay, so here, in this case, this is the covariance: it's just the power spectrum of the halo stochasticity. But in the case of the power spectrum likelihood, unfortunately, I don't have the covariance written here, but I can show you later; we have this in the paper, and I can show you how we derived it, because it's a new thing, a bit of an unusual approach. I'll find you tomorrow, thanks. Okay. Are there questions from you? Nice. For the power spectrum errors that you're comparing to, is this pre- or post-reconstruction? Sorry, can you repeat the question?
When you're comparing the errors from the field level to the power spectrum, are the power-spectrum errors from pre- or post-reconstruction? In a sense they are post-reconstruction, because the theoretical input is the one from the forward model, right? And a natural part of the forward model is, let's say, reconstruction, given the way it works. But we didn't perform any additional reconstruction in the sense of the backward modeling that people usually apply to the power spectrum. So it's the same information that goes into the EFT likelihood and the power-spectrum likelihood. The parameter beta that you introduced, that's effectively rescaling the sound horizon, is that right? Yeah, exactly. Were you trying to see whether you would recover that with your likelihood? Yes, we were trying to see how close to one we would be. The closer beta is to one, the less biased our likelihood is, because then it is recovering the fiducial value of the BAO scale. Okay, thanks. Questions from Zoom? No, okay. So thank you again. Next speaker is Marina Silvia Cagliari. What's happening here? Okay. Okay, so hello, I'm Marina Cagliari, a PhD student at the University of Milan, and my talk is going to be about augmenting redshift information in large cosmological surveys. So a redshift can be measured in two different ways: with spectroscopy, which is the more direct way to do it, or with photometry, in which case you are only able to broadly locate some characteristic features of the spectrum. Photometric redshifts are much less precise than spectroscopic ones, and they are also prone to some degeneracies, as you can see here. However, they can go much deeper than spectroscopy and can be acquired faster. So what we are trying to do is augment the photometric data using ancillary spectroscopic information, and we are working on two different methods to do so. The first one is meant to be applied to a slitless spectroscopy survey, as Euclid will be; I actually used a mock galaxy catalog from Euclid for the results I am going to show later. In a slitless spectroscopy survey, the instrument obtains a spectrum for every object in the field of view, as you can see from this image. However, the majority of the spectra will have a very low signal-to-noise ratio, so we will not be able to obtain a redshift from them. What we have been trying to do is extract some information from spectra that would otherwise be wasted. To do so, we have used an ensemble redshift method, which aims at constraining the redshift distribution of an overall group of galaxies, previously selected in photometry, using the stacked spectrum, which is basically the average of the spectra of all the galaxies in the selected color group. Here on the right you can see a plot of three different spectra at three different redshifts. Being a linear combination of them, the stacked spectrum retains information about the overall redshift distribution of these galaxies. Moreover, being an average, the stacked spectrum has smaller noise than the single spectra. So let's see some results from this method.
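Before the results, a minimal sketch of the stacking step just described (array names and the common-wavelength-grid assumption are mine, not the talk's pipeline):

```python
import numpy as np

def stack_spectra(spectra):
    """Average N noisy spectra sampled on a common wavelength grid.

    spectra : array of shape (N, n_wavelength)
    Returns the stacked spectrum. Independent noise is suppressed by
    roughly 1/sqrt(N) relative to a single spectrum, while features
    common to the ensemble, set by its redshift distribution, survive
    the averaging.
    """
    return np.mean(np.asarray(spectra), axis=0)
```

The redshift distribution of the color-selected group is then inferred by fitting this single, higher signal-to-noise stack rather than each unusable low signal-to-noise spectrum individually.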
On the left side, you can see the results for the more idealized case, where the redshift distribution is fitted in great detail, even its substructure, while in the realistic case the recovered distribution is very noisy, but we are still able to locate its position and its width and broadly identify its major peaks, which is at least something. The other method I am briefly going to talk about is meant to be applied to the parent sample of a spectroscopic survey. In this case we are using the VIPERS survey, and here you can see an example simulation of one of the VIPERS fields of view. The red dots are the galaxies for which we only have photometric information, the parent sample, while the black ones are the galaxies for which a spectrum was actually acquired. As you can see, they are not many, about 35%. So what we wanted to do was to augment the information in the photometric data by exploiting the spatial correlation of galaxies, in particular between the galaxies in the spectroscopic sample and the photometric ones. Imagine we are looking at the surface of the sky: we have a galaxy here, the black one, for which we only have photometric information, and it has some neighbors with spectroscopic information. It is very probable that at least one of these angular neighbors is also a neighbor in redshift space, so it is really correlated with this galaxy. What we have been working on is a graph neural network, a machine-learning algorithm, which we train and test on the spectroscopic sample of VIPERS: given a pair of galaxies, it classifies them as true neighbors or false neighbors. Given this information, we can do many things; one of them is to try to measure the redshifts of the galaxies, and here we chose a very simple way to do it: given a photometric galaxy, we say that its redshift is equal to the spectroscopic redshift of its most probable neighbor as identified by the graph neural network. Here you can see the result from the network compared with the result of a more standard method to measure photometric redshifts. The first thing one notices is the very prominent straight line in this plot; these are the objects which are really spatially correlated. The spread around it is typical for photometric redshifts, but it is smaller than the one you would get with the standard photometric-redshift method. We also computed the outlier fraction, that is, the objects far from the central distribution, and the graph neural network almost halves the number of outliers you would get with standard methods. Finally, the last thing we noticed is that the network actually gives slightly biased results, which is something we are currently working to improve. So thank you for your attention. Questions from the audience? Great work, very impressive. Have you considered or tried using more than three galaxies? Yeah, sure, sorry, that was an example just to make clear what is happening, but for the network we actually start by selecting 30 neighbors, and from them we try to find which one is the most probable true neighbor of the galaxy. It also depends on the depth of the survey.
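A minimal sketch of the pair-classification scheme just described, written as a plain feed-forward stand-in for the two-node graph network (the feature handling, layer sizes, and names are illustrative assumptions, not the actual architecture):

```python
import torch
import torch.nn as nn

class NeighborPairClassifier(nn.Module):
    """Scores how likely a (photometric, spectroscopic) galaxy pair
    is to be a true neighbor in redshift space."""
    def __init__(self, n_features):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),  # logit of P(true neighbor)
        )

    def forward(self, pair_features):
        return self.net(pair_features).squeeze(-1)

def assign_redshift(model, pair_features, neighbor_zspec):
    """Give the photometric galaxy the spectroscopic redshift of its
    most probable neighbor among the candidates (e.g. 30 of them)."""
    with torch.no_grad():
        scores = model(pair_features)  # shape (n_candidates,)
    best = torch.argmax(scores)
    return neighbor_zspec[best]
```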
So it depends on depth: for example, VIPERS actually has objects at redshifts higher than one, and also some higher than two. I am not showing them here, but in that case you would not expect any of the angular neighbors to be a true neighbor, and we want the network to understand that as well. Can I ask one more question? In the correlation plot you showed, it looked like galaxies with very high measured spectroscopic redshift tend to be underestimated by your approach. Yeah, in fact, that is true. Is there a reason for it? Do you have any idea why the bias goes in that direction? We are still discussing that, because this is work in progress, but we think it may be some volume effect which makes it select galaxies at lower redshift rather than higher. Yeah, I was wondering, because you mentioned that you are using the angular separation, but are you using any other features as inputs to your neural network? Yeah, sure. These are graph neural networks, so we give a graph, the simplest graph ever: two nodes, one for each galaxy. The node for the spectroscopic galaxy carries its spectroscopic redshift as a feature, and the other features are the same for both: the magnitudes in the filters that we have and the angular position. Thanks. Questions on Zoom? Thanks again, then. Next speaker is Stephanie Brokenhoff. Cool. That takes some seconds to point out here. Okay, that's good to know. Wait, just a second. I dropped the microphone, sorry. Where is it? Isn't this the microphone? Yeah. Is this okay? Okay, hi. So my name is Stephanie Brokenhoff. I'm a PhD student at the Kapteyn Astronomical Institute in the Netherlands, and today I want to introduce you to some of the state-of-the-art observational challenges we face when trying to provide observations to constrain models of reionization. Essentially, what Eleonora talked about with the tomography, this is the observational side of that and the challenges we are facing. The instrument we are using is the Low-Frequency Array, LOFAR, and what we are trying to do is observe the 21-centimeter signal originating from the epoch of reionization and the cosmic dawn. As you all know, this is a very, very high redshift signal, whereas if I take a picture with LOFAR, it looks a little bit like this: I have a lot of bright foreground sources that dominate my power spectrum. So what we do is analyze all the different contaminants and remove them, and then we get something that looks like this. It's a lot cleaner, a lot of the power has been removed, but we still have a lot of things that are brighter than both the 21-centimeter signal and the thermal noise, so we can't just integrate more data to get a better estimate of the 21-centimeter power. We get stuck with this weird problem where we have an amazing instrument, LOFAR, really one of the top radio telescopes currently operational, and a lot of data, over 3,000 hours currently on disk, ready to be analyzed, but we cannot get to the signal we are trying to observe because there is simply this excess power which we don't understand.
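A minimal sketch of why "just integrate more" fails here, assuming an idealized model in which the thermal-noise power averages down with observing time while a systematic excess floor does not (all numbers are illustrative only, not LOFAR values):

```python
import numpy as np

hours = np.array([100, 1000, 3000, 10000])
thermal_noise_power = 1.0e4 / hours   # noise power averages down ~ 1/t
systematic_floor = 5.0                # excess power: does not integrate down
signal_21cm = 1.0                     # target signal level (arbitrary units)

total = thermal_noise_power + systematic_floor
for h, p in zip(hours, total):
    reachable = "yes" if p < signal_21cm else "no"
    print(f"{h:>6} h: residual power {p:8.2f} -> 21cm reachable: {reachable}")
```

However long one integrates, the residual power never drops below the systematic floor, which is why identifying and modeling the source of the excess matters more than collecting further hours.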
So what we are trying to do right now is to make forward models of these sources of excess power and figure out which one is actually dominant, which one is preventing us from reaching the signal we are interested in. The one I am currently interested in is the ionosphere, so that is what I will be talking about. But first, let's take a quick look at what we are actually doing here. This is a timeline of the early universe, and we are interested in the epoch ranging from the dark ages until reionization is complete. Ideally we would like to do tomography, where we fully map the neutral hydrogen as a function of redshift. The SKA might be able to do this to some extent, but with the current generation of instruments that is just not possible. So people do two different things instead. First, you can look at the sky-averaged global signal; what my group is doing instead is looking at the spatial fluctuations using the power spectrum. We are doing that using the instruments LOFAR and NenuFAR, and the redshift ranges that LOFAR and NenuFAR can reach are shown in this plot. As you can see, we do not cover the full redshift range we are interested in, but if we are able to probe down to the 21-centimeter signal in this range, we will be able to meaningfully constrain the process of reionization. But as I said, we are unfortunately not quite there, because finding this very high redshift signal is a bit like finding a needle in a haystack. There are many different contaminants. As I already said, we have astrophysical foregrounds that are a factor of ten to the power five brighter than the signal we are interested in, a problem made much worse by the ionosphere. There is radio-frequency interference, both man-made and natural. There are instrumental imperfections. And if we don't watch out, we introduce extra errors with our calibration pipeline. In other words, there are a lot of issues, and we need to separate them; the one I am looking into is the ionosphere. The ionosphere is a turbulent ionized layer at the top of the Earth's atmosphere, and it causes phase shifts in incoming radiation, so our incoming wavefront is distorted and we cannot properly image anymore. If I have an array, I can consider two different pairs of antennas: on the left, two antennas close together, a short baseline; on the right, two antennas far apart, a long baseline. Now, a long baseline probes large distances in the atmosphere, because the pierce points are far apart, and if you look at what this phase screen looks like, that means I get a larger phase variance. In that sense, a long baseline suffers worse effects. However, I am also trying to calibrate as I observe, and it takes quite a while for the ionosphere to move across such a long baseline, so I have a longer correlation time over which to solve for the errors that are introduced, and I can also calibrate my long baselines better on bright point sources. So on the one hand I have a lot more trouble with a long baseline, but on the other I can solve for it much better. That introduces a rather odd dependence on baseline length, and we also treat different baselines very differently in our calibration strategy.
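As a minimal sketch of why longer baselines see a larger ionospheric phase variance, assuming pure Kolmogorov turbulence, where the phase structure function grows with pierce-point separation r as D(r) = (r / r_diff)^(5/3) for some diffractive scale r_diff (both the scaling law and the chosen numbers are textbook assumptions, not the speaker's simulation):

```python
import numpy as np

def phase_structure_function(baseline_m, r_diff_m=10e3):
    """Kolmogorov phase structure function D(r) = (r / r_diff)^(5/3):
    the mean-squared phase difference (rad^2) between two lines of
    sight whose ionospheric pierce points are separated by r."""
    return (baseline_m / r_diff_m) ** (5.0 / 3.0)

for b in [100.0, 1e3, 10e3, 100e3]:   # 100 m to 100 km baselines
    d = phase_structure_function(b)
    print(f"baseline {b/1e3:7.1f} km : phase variance ~ {d:8.3f} rad^2")
```

The monotonic growth with baseline length captures the first half of the trade-off described above; the longer correlation time that makes long baselines easier to calibrate is a separate, temporal effect.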
So we currently have no analytical way to compute the effects of the ionosphere, which is why we are doing forward simulations. This is the result of a very simple forward simulation with just a few foreground sources and some thermal noise. On the left, that is all that is in there; on the right, I have also distorted the signal with an ionosphere. I am plotting the power divided by the power of a thermal-noise realization, so what I want is for this plot to be roughly white, dancing around a value of one, because then the thermal noise dominates and I can integrate more data to get lower in power and really probe down to the sensitivity I want. If I only have thermal noise and foregrounds, I am able to do this very well. But if I also have an ionosphere, then in the bottom-right part of the plot a lot of excess power shows up. And that is also what we see in real data: this is an actual upper limit published by our group, where you see more purple in the bottom of the plot; that is where we see this excess variance. This is an indicator that ionospheric noise can really be one of the dominant sources. However, this is still very preliminary; we need to make the models we are using a lot more realistic before we can say something definitive. But I just wanted to introduce you to what we are doing here. So that brings me to my take-home message. Oh yeah, I'm exactly at seven minutes right now; I had my own timer. The message is that estimating the 21-centimeter power spectrum from the epoch of reionization and the cosmic dawn is right now very difficult, basically because the exact effects of the different contaminants we are dealing with are not fully analyzed. So we need forward simulations of these effects, and the ionosphere might be one piece of this puzzle. Thank you for your attention. You finished at exactly seven minutes. Questions from the audience? Can you go back to the slide where you were showing the power? This one, and the next one as well? Yeah, okay, both. This one? Yeah. My simple question, I know nothing about the ionosphere: you mentioned that the ionosphere varies over a longer time scale on the long baselines. What is that time scale, how long? For a short baseline it is really of the order of a few seconds, and for a longer baseline it can be several minutes. It really depends on the weather; the worst observations we just throw away because they are impossible to calibrate, but on a really good night you might have even 20 minutes, which is good. But especially on the short baselines we are not able to do the source subtraction in the way that we would like. Thank you. Other questions? So why do you have an excess of power in the lower part of the power spectrum? Yeah, thanks for your question; I think you mean this slide? Yeah. So this is basically because the main effect of the ionosphere we see is that the foreground sources are broken up and smeared out, which makes them more difficult to subtract. Typically these foreground sources are very spectrally smooth, which means they end up at low k-parallel values; spectrally smooth means the bottom of this plot, which is where we get the foregrounds. Thank you.
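A minimal sketch of the ratio diagnostic described above, assuming gridded data cubes and a plain per-cell comparison (function names, gridding, and binning choices are illustrative assumptions):

```python
import numpy as np

def power_ratio(data_cube, noise_cube):
    """Ratio of the power in a (simulated or observed) cube to the
    power of a matched thermal-noise realization, per Fourier cell.
    Values scattering around 1 mean noise dominates; coherent patches
    well above 1 flag excess power (e.g. ionospheric residuals)."""
    p_data = np.abs(np.fft.fftn(data_cube)) ** 2
    p_noise = np.abs(np.fft.fftn(noise_cube)) ** 2
    return p_data / p_noise

# Toy usage: two independent noise cubes should hover around 1.
rng = np.random.default_rng(0)
cube_a = rng.normal(size=(16, 16, 16))
cube_b = rng.normal(size=(16, 16, 16))
print(np.median(power_ratio(cube_a, cube_b)))  # ~1, broad per-cell scatter
```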
It looks like, with the ionosphere, you have more noise towards the smaller scales in the transverse direction, or is it just me? Towards this, you mean this area? Yeah. Yeah, that is actually something we expect. This is because if I take a baseline and move to a different frequency, it gets shifted in k-space; these different colors are the same pair of antennas, showing how they travel through k-space. So this is an instrumental effect that wedges low k-parallel modes up a bit at high k-perpendicular modes. It is not purely the ionosphere: it is in general something you observe with foregrounds, with excess foreground power, and the ionosphere just makes this foreground power worse, because we are not able to piece the foreground sources together to properly subtract them. Does this answer your question? Yeah, more or less. Okay. Questions on Zoom? Thanks. Thank you. So last but not least, Tiago Mergulhao will conclude the second session of student talks. No, I mean, you can turn it off. Hi. Okay. Fine. Right. Here. Okay. Excellent. So hi everyone, I'm Tiago Mergulhao. I'm a first-year PhD student at the University of Edinburgh, where I work with Florian Beutler and John Peacock, but today I'd like to share with you some results I got during my master's. The project is on the effective field theory of large-scale structure with multiple tracers; I'm just going to show some results, but if you want more details, the reference is up there. Just a brief introduction. As you saw in the summer school, we have the Lambda CDM model, and we basically learned how to evolve the primordial seeds from inflation into the evolved matter field. But when we look at the sky, we see galaxies. And one of the main goals of large-scale structure is: how can we constrain our theory given that the data looks like this? That is basically one of the main goals of large-scale structure. Knowing that this is really important, we asked ourselves the following question: how can we optimize the information we extract from large-scale structure? There are many different models in the literature and many different approaches, but two approaches that are really famous are the multi-tracer approach and the perturbation-theory approach. The perturbation-theory approach, as in the lectures, gives you the power to predict what is going on at small scales; it allows you to include more modes in your analysis, so you can unlock the information sitting at small scales. As for the multi-tracer approach, in case it is new to you, let me give a brief introduction. When you look at the sky, you see galaxies, but it happens that we have many different types of galaxies. So we have basically two options: the single-tracer approach, where you look at the galaxies and treat them all as the same, or you look at the galaxies and say, wait a minute, these galaxies are actually a bit different: some galaxies are blue, some galaxies are red; what if I treat them separately? That is basically the multi-tracer mindset. And there is a well-known result that, under some conditions, the multi-tracer approach can boost the information extracted from the large scales.
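The large-scale boost alluded to here is usually attributed to sample-variance cancellation; a one-line sketch of the standard argument, in my own notation:

```latex
\delta_A = b_A\,\delta_m + \epsilon_A , \qquad
\delta_B = b_B\,\delta_m + \epsilon_B
\;\;\Longrightarrow\;\;
\frac{\delta_A}{\delta_B} \simeq \frac{b_A}{b_B}
\quad \text{(for small stochastic noise } \epsilon_{A,B}\text{)} .
```

Because both tracers share the same realization of the matter field, the particular realization of delta_m drops out of the ratio, so parameters entering the bias ratio can be measured without cosmic variance on large scales.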
So on the one hand we have the multi-tracer approach, which improves the constraints from large scales, and on the other we have perturbation theory, which improves the constraints from small scales. You can probably guess what we did: why not put both together and have both at the same time? That is basically the motivation. So let me explain briefly how we did that. Going back to that picture: the panel on the top left shows galaxies, each point is a galaxy, and on the bottom right is a simulation of the dark matter field. It is a really complicated problem to connect the galaxy field with the matter field, but you should expect them to be connected. Imagine a region in space with a lot of dark matter: this region is going to attract a lot more matter, including baryonic matter, which means that in that region galaxy formation is boosted. So you should expect this kind of interplay between galaxies and the matter field. And over the last decades, people have tried to make all kinds of... what? Oh, I think there is something missing here; my slides were not like that. Yes, I do have my laptop, can I get it? Yes, I have an adapter here. Oh, but in that case I won't be able to use that. Anyway. Was it like that? Yes, I'm doing it right now. Got it. Share screen, desktop one. Let me close my WhatsApp so people don't bother me. Is it working? Perfect. How do I do full screen here? Ah, here. Excellent. I think I was here, right? I need to remove my sound and move myself. Okay, excellent, I think it's fine now. And I can use the pointer in the computer? Excellent. Okay, I'm sorry about that. So I was talking about how to connect the galaxy field with the matter field. There are these really dense regions that attract even more matter over time, including baryonic matter, so we should expect this kind of connection between galaxies and the matter field, and over the last decades people have been trying to figure out how this connection occurs. In the beginning, people simply used very simple relations: you said, oh, there is a linear response at large scales, basically a constant factor between the two fields. Then it got more and more complicated, until perturbation theory was developed, and now we have a systematic way to figure out which operators are relevant for the tracer field up to a given order in perturbation theory. What is really good is that the set of operators needed is fixed up to a given order in perturbation theory, which is great. I'm not going to detail the calculations, but I refer you to the seminal paper by Assassi et al. In this talk I am basically concerned with the power spectrum, which is the two-point function, and the plot on the bottom shows what it looks like: the blue points are the data, and the different lines are the nonlinear contributions of all the operators appearing here. Remember that for the power spectrum you basically multiply two density fields and take the ensemble average.
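For concreteness, a standard low-order bias expansion of this type reads as follows (a schematic sketch; the talk's exact operator basis and conventions may differ):

```latex
\delta_h(\mathbf{x}) \;=\; b_1\,\delta(\mathbf{x})
 \;+\; \frac{b_2}{2}\,\delta^2(\mathbf{x})
 \;+\; b_{K^2}\,K^2(\mathbf{x})
 \;+\; \dots \;+\; \epsilon(\mathbf{x}) ,
```

with K_ij the tidal field and epsilon the stochastic term; correlating two such fields and taking the ensemble average generates exactly the family of contributions shown as the different lines in the plot.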
And when you do this, you get many different contributions, and all of them sum up to give you a description of how galaxies cluster, okay? And how do we put multi-tracer into that? Well, we have the result for a single tracer, and if you assume that one tracer doesn't care too much about the other, you can simply apply the result twice. So we assume that the same bias expansion that holds for one tracer holds for the other as well. In that case you have the red galaxies and the orange galaxies, and we allow each tracer to have its own set of bias parameters. In the single-tracer case we have four degrees of freedom in the effective theory, but in the multi-tracer case you have twice as many. And the question is: which case is going to perform better in describing the physics and in recovering the cosmological parameters? When we did the calculations, we were a bit shocked. In that plot, I don't know if you can see it well, I think you can, the red lines are the results for the single tracer and the blue lines are the results for the multi-tracer. The multi-tracer results outperform the single-tracer results, not only for the bias parameters but also for the cosmological parameters, especially for H and omega_cdm. And it's a really good improvement, right? So the question we asked ourselves is: why is this happening? And then we realized something: wait a minute, different tracers of the large-scale structure populate different large-scale structure environments. Each tracer has its own formation history; there are different, even different astrophysical, things going on. So they should be treated differently, and we expect different tracers to have different nonlinear responses to the large-scale structure dynamics. Even perturbation theory tells you that, because it is possible to write down bias coevolution relations. I'm not going to go through the details again, but you can fix many of the EFT coefficients in terms of the linear bias, and this tells you that tracers with different linear bias will respond to the tidal field in different ways. The same is valid for the other operators. When we did this and split the population in two, we saw this happening. This is basically a correlation matrix for the data: on the left the single-tracer case, on the right the multi-tracer case. You can see that, by splitting the population in two, the two tracers accommodate the nonlinear information much better than the single-tracer case. And now that you know it, even the theory tells you this is supposed to happen, it makes sense, right? When you treat them together, you are basically smearing out, taking an average of, the nonlinear responses of two totally different objects. That is the takeaway message from my presentation. Just to conclude: multi-tracer is better than single tracer when it comes to performing a full-shape analysis of the power spectrum. I forgot to say: all these results are in real space. The multi-tracer approach is useful to break degeneracies between bias parameters, because the tracers have different nonlinear responses to the large-scale structure dynamics, and the small-scale information is better translated into separate per-tracer biases. And yeah, that's basically what I had to talk about today. Thank you.
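A minimal sketch of the parameter counting just described, at linear order only (the talk's model is the full one-loop EFT; the names, stand-in spectrum, and noise treatment here are illustrative assumptions):

```python
import numpy as np

def multi_tracer_spectra(P_lin, b1, noise):
    """Linear-order model for all auto- and cross-spectra of N tracers.

    P_lin : linear matter power spectrum, shape (n_k,)
    b1    : linear biases, one per tracer, shape (N,)
    noise : constant stochastic (shot-noise-like) amplitudes, shape (N,)

    Returns P[i, j, :] = b1[i] * b1[j] * P_lin, plus noise[i] on the
    diagonal: each tracer carries its own parameters, so N tracers
    have N biases and N noise amplitudes where a single tracer has
    one of each.
    """
    b1, noise = np.atleast_1d(b1), np.atleast_1d(noise)
    P = np.einsum("i,j,k->ijk", b1, b1, P_lin)
    for i in range(len(b1)):
        P[i, i, :] += noise[i]  # stochasticity uncorrelated across tracers
    return P

# Toy usage: two tracers, red (b1 = 2.0) and blue (b1 = 1.2).
k = np.linspace(0.01, 0.2, 50)
P_lin = 1e4 * k / (1 + (k / 0.02) ** 2)   # stand-in for a linear spectrum
P = multi_tracer_spectra(P_lin, b1=[2.0, 1.2], noise=[300.0, 800.0])
```

The off-diagonal entries are the cross-spectra between the tracers, which is exactly the correlation structure referred to in the discussion below.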
Oh, just a small comment I forgot to make: right now this is a work in progress. The follow-up work is basically to put all of this into redshift-space distortions, and we have a preliminary result, where we have the multipoles for two different tracers, and it seems to be working really well. It's a collaboration with Henrique Rubira, Rodrigo Voivodic, Florian Beutler, and John Peacock. So thank you. Questions? Any questions from the audience? From Zoom? You have a question? Where? Excuse me. Just one. As you said, when we have a collection of galaxies somewhere, you are dividing them into several types, which can sit at several kinds of redshifts, for example. Am I right? Sorry, are you saying that each tracer would be at a different redshift? Yeah, how do you divide those? Excellent, excellent. There is no definitive answer to that question, I suppose, as to what is the best way to divide them. In this case I was working with dark matter halos, and for dark matter halos you know that the mass is really important, because the mass is related to the linear bias, and the linear bias can be related to the tidal bias b_K² through the coevolution relations. So everything can be traced back to the mass. So for halos it is easy to find the physical property that is really relevant. For galaxies it is not that obvious, but what people usually do is use red galaxies and blue galaxies, LRGs and ELGs, for instance. Are these tracers related to each other? What do you mean by related? Is there any correlation between them? No, yes, yes, perfect, yes. As I showed there, when you split the population, you need to take into account the auto-correlations of each tracer and also the cross-correlation between the two. And there is a cross-correlation, as you can see there, in all the lines; I think I can use this: for instance, this one is a cross-spectrum, this one is another cross-spectrum, and so on. So yes, there is a cross-correlation between them. Thank you. Other questions? I thought you had a question, right? No? Okay, so thanks again, and thanks again to all the students who presented their work today.