I'm happy that Marko Simonović from CERN accepted to tell us about some real cosmology that one can do with the observations. Okay. Thank you. Thank you very much, Merdan, for such a brilliant introduction. So yes, the idea here is that the organizers asked me to do a small review of some of the recent progress, both in theory and in data analysis. And really, it is something that is summarized in this first slide: how to take the galaxy survey data, which measure the positions of galaxies in 3D space, and then convert them into cosmological parameters, the same way we do it for the cosmic microwave background radiation. I will focus mainly on spectroscopic galaxy surveys. As I'm sure many of you know, there are also photometric surveys; these are a bit different, but most of the things that I'm going to say will be about spectroscopic galaxy surveys. So here is the outline of this seminar, which will be in roughly three parts. At the beginning, I would like to give a little introduction and motivation: why do we do these spectroscopic galaxy surveys? Where are we currently with the data? What is upcoming? And why are we doing all that? Then, in the second part, I will review some theoretical progress in describing the dynamics of large-scale structure, and finally show some applications to the data, and then prospects for things that are coming in the next couple of years, which may be of interest to many of you. So let us begin with the motivation and context. I think it is fair to say that most of the things that we know very precisely in cosmology come from studying the small density fluctuations. Asim has already introduced some of these things.
For instance, from the cosmic microwave background radiation that we can observe on our past light cone, which is represented here with this red cone, we can look at the CMB. As you know, many of the important discoveries in cosmology in the last 20 years or so, such as precise measurements and the confirmation that dark matter exists, dark energy and inflation, all of these things were amazing, and they had a huge impact, not only for cosmology but also beyond it, for particle physics, string theory, et cetera. Already in Vafa's talk you could hear about some of these things. But the question is: can we do even more than that? Is there something else to be discovered, maybe more subtle, something that we missed in the CMB? For that, we have to look elsewhere, and so we are going to talk about galaxy surveys, the nearby universe, and how they can be used by themselves or in combination with the CMB to improve our knowledge of cosmology. Just to remind you, and this is also something that you have seen in Asim's lectures, these spectroscopic galaxy surveys are essentially big surveys which scan the sky, trying to measure positions on the sky and redshifts of typically millions of galaxies, and produce these maps. Here is an example from a rather old survey, 20 years or so old, where you can see something like over 100,000 galaxies, and you can see this nice cosmic web. So the distribution of galaxies in space is not random: it forms this cosmic web, and this is not a coincidence. In fact, these maps contain a lot of information that can be used to deduce cosmological parameters and test various models, as I'm going to show. And we have been very good at producing these maps.
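To make the map-making step concrete: given a catalog of angular positions and redshifts, each galaxy is placed at a 3D comoving position through the background expansion. Here is a minimal sketch, assuming a flat Lambda-CDM background with illustrative parameter values (the H0 and Om below are placeholders, not a fit to any data):

```python
import numpy as np

# Flat LambdaCDM background (illustrative values, not a fit):
H0 = 67.0          # Hubble constant, km/s/Mpc
Om = 0.32          # matter density fraction today
c  = 299792.458    # speed of light, km/s

def H(z):
    """Hubble rate H(z) in km/s/Mpc for flat LambdaCDM."""
    return H0 * np.sqrt(Om * (1.0 + z)**3 + (1.0 - Om))

def comoving_distance(z, n=4096):
    """chi(z) = c * integral_0^z dz'/H(z'), in Mpc (trapezoidal rule)."""
    zp = np.linspace(0.0, z, n)
    f = 1.0 / H(zp)
    return c * np.sum((f[1:] + f[:-1]) * np.diff(zp)) / 2.0

def to_cartesian(ra_deg, dec_deg, z):
    """Map (RA, Dec, redshift) to comoving Cartesian coordinates in Mpc."""
    chi = comoving_distance(z)
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return (chi * np.cos(dec) * np.cos(ra),
            chi * np.cos(dec) * np.sin(ra),
            chi * np.sin(dec))
```

The comoving distance integral is what makes the map cosmology-dependent: a different background stretches the same redshift catalog into a different 3D map, which is part of how the maps constrain parameters.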
What I'm showing here is a kind of history of these efforts. For five decades or so we have been sitting nicely on this exponential line, where the size of the spectroscopic samples, and roughly speaking the observed volume of the universe, increases by a factor of 10 more or less every 10 years. With the upcoming galaxy surveys, which are currently operating and will in fact deliver the data in a couple of years, such as DESI and, a little bit further down the line, Euclid, we are staying on this exponential line. And there is something very important happening around this time: the volumes that we are observing and the galaxy samples that we are going to have will be sufficiently large that we are going to start being competitive with the information on cosmology coming from the CMB. So this is a very important point in time. Looking further ahead, you can already think about the next decade and the surveys planned for the future; they may even surpass the constraining power of the CMB. So there is a long and important prospect for the developments in this field. So why are we making these big maps? Well, the science case is extremely broad. Apart from many relevant questions in astrophysics, what concerns cosmology I could roughly split into two different parts. The first thing I would say is that these galaxies, as I already mentioned, are not distributed randomly, and, very importantly, they remember the initial conditions. On very small scales, of course, things are scrambled by the complicated nonlinear evolution; like in this room, we don't remember very much about inflation, I guess. But on large scales this scrambling is not really happening. So when we look at the density field of galaxies, in a similar way as in the CMB, we are really seeing the imprint of the quantum mechanical processes during inflation that survive until the late universe.
For that reason, these galaxy surveys are very powerful probes of inflation, and they can answer, I think in a qualitatively new way, many of the questions that we still have and that we cannot answer with the CMB alone. The second thing is that, since gravitational interactions are basically the only relevant forces on large scales, and since we have a base model, which is Lambda CDM, and we know how the different components interact within Lambda CDM, then using gravity we can explore whether there are any extensions to Lambda CDM. And since everything gravitates, looking at the distribution of galaxies is a very sensitive probe of various extensions, simply because of the huge volume of the universe over which you are integrating. For example, the most well-known and guaranteed signal is the sum of the neutrino masses: even though the neutrino is a very light particle whose mass is very difficult to measure in a laboratory, simply using the fact that we have such a huge volume of the observable universe, we can measure it from galaxy surveys remarkably well. But we can also test many other scenarios along similar lines. For example, we can ask whether there are other light relics, for example in the dark sector, but still massive. You heard already about ultralight dark matter yesterday, and that is another thing that galaxy surveys can test very precisely. There are also classical questions such as: is there any spatial curvature, which would again have very important implications for inflation if you detect any; or what is the nature of dark energy, is it really a cosmological constant or not; and so on and so forth.
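To give a sense of why the huge survey volume helps with the neutrino mass: in linear theory there is a standard rule of thumb (quoted here as a rough illustration, not a statement from the talk) that massive neutrinos, through free streaming, suppress the small-scale matter power spectrum by a fixed fraction set by their energy density,

```latex
\frac{\Delta P}{P} \;\simeq\; -8 f_\nu ,
\qquad
f_\nu \equiv \frac{\Omega_\nu}{\Omega_m},
\qquad
\Omega_\nu h^2 \simeq \frac{\sum m_\nu}{93.14\ \mathrm{eV}} ,
```

valid for wavenumbers well above the free-streaming scale. A percent-level suppression is invisible in any single patch of sky, but it becomes measurable once averaged over the enormous number of modes that a survey volume contains.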
So here you can see a list of questions, and we can discuss them if you want, but the bottom line is that the science case for these surveys is extremely broad, and there are really ways to probe all epochs in the history of the universe, from inflation through the early universe to the late universe today. So whatever theoretical model you eventually come up with, it will have to pass the constraints from this data, and this is why it is so important. Now, you have probably heard about all these questions before, and you may wonder whether the CMB is just good enough, because the CMB can answer some of these questions to some extent. So why is doing the CMB alone not good enough? There are roughly speaking two reasons. The first reason is that in the CMB observations, even though we have a long way to go, and you will hear about it next week from Blake Sherwin, we are approaching the limit given simply by the number of pixels that we can observe in the sky on a two-dimensional surface, which basically limits our ability to measure cosmological parameters. We are approaching this limit, and we already know that we cannot answer all of these questions simply using the CMB. The second thing, which is maybe even more important, is that the CMB analysis is fantastic when you want to study the base Lambda CDM model. But if you make extensions, many large degeneracies appear, and then the constraining power of the CMB alone is not so great anymore. So here I'm showing two examples, which are not even some crazy extension of Lambda CDM. On the left panel you can see the very well-known degeneracy which appears if you allow curvature to be non-zero in the CMB analysis; you then anyhow have to use some external data sets, such as the BAO, to break the degeneracy and do the measurements. And on the right, another important extension of Lambda CDM, where you allow the neutrino masses to vary.
I will remind you that in the baseline Planck analysis the sum of the neutrino masses is fixed, even though we don't know it. So this is, I would say, the more realistic scenario that we always have to keep in mind, and in this scenario you can see that the neutrino masses and, for example, the Hubble parameter are degenerate, and again, in order to get the best possible constraints, you already at this stage have to use some external data sets such as the BAO. So the bottom line here really is that the CMB was of course great for establishing precision cosmology and for establishing Lambda CDM as our baseline cosmological model, but it is insufficient on its own for answering some of the open questions that we have and for exploring extensions of Lambda CDM. The galaxy surveys are complementary probes, and they are becoming competitive, and in combination, as I'm going to show, they become much more powerful than each one of them individually. So these are all the reasons why you are going to hear a lot about these things in the years to come, and this is going to be, I think, the topic which will dominate the discussion in cosmology for the foreseeable future. But of course, in order to use this data, one of the keys is to have a good and reliable theoretical framework for making predictions for how these galaxy maps are supposed to look and what their statistical properties are. This is needed in order to be able to extract the information that you are interested in. So for a given cosmological model and a given set of initial conditions, you have to be able to tell how your galaxy density field is supposed to evolve, and then you can compare it to the data and tell whether your cosmological model and your cosmological parameters make sense or not.
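The degeneracy breaking described above can be illustrated with a toy Fisher-matrix calculation. The matrices below are made up for two generic parameters (think curvature and the Hubble constant); this is purely a sketch of why combining probes is more powerful than either alone, not a real survey forecast:

```python
import numpy as np

def marginalized_errors(F):
    """1-sigma marginalized errors: sqrt of the diagonal of the inverse Fisher matrix."""
    return np.sqrt(np.diag(np.linalg.inv(F)))

# Toy Fisher matrices for two parameters; numbers are made up.
# Each probe alone constrains one combination tightly but leaves a
# nearly flat (degenerate) orthogonal direction.
F_cmb = np.array([[100.0,  99.0],
                  [ 99.0, 100.0]])   # degenerate along one line
F_gal = np.array([[100.0, -99.0],
                  [-99.0, 100.0]])   # degenerate along the other line

print(marginalized_errors(F_cmb))          # each probe alone: large errors
print(marginalized_errors(F_gal))
print(marginalized_errors(F_cmb + F_gal))  # combined: both degeneracies broken
```

Because independent Gaussian likelihoods multiply, their Fisher matrices simply add, and two degenerate ellipses with different orientations intersect in a much smaller region than either ellipse alone.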
So this brings me to the second part of the talk, in which I would like to say a few words about what is at the moment our leading tool, maybe even the only tool we have at our disposal, to deal with the data analysis related to spectroscopic galaxy surveys. It is something that Asim briefly mentioned in his lectures: a perturbative approach to galaxy clustering. Of course it has its own limitations, but I think it also has a lot of advantages, and let me now go through this more technical part of the talk, where I will show you some of the basic logic behind it and some basic results, and then we will see how it works in practice. I'm going to present this from the point of view of a physicist, basically focusing on the things we usually use in theoretical physics when we want to deal with some dynamical system: in particular, we are going to ask what the dynamics is, what the equations of motion are, how we can solve them perturbatively, et cetera. I will also briefly say a few words about some non-perturbative results based on symmetries and so on. Of course, this sounds very different from the story you would get from somebody who is more interested in astrophysics or in questions related to astrophysics regarding galaxy clustering, but I think it is a useful point of view and you should know about it. Okay, so let us start by first reviewing the main nonlinearities we should keep in mind when trying to describe the nonlinear evolution and galaxy clustering. I'm going to illustrate this using some simulation outputs. At very high redshifts the field is nearly Gaussian and the fluctuations are small, so you are very close to the initial conditions, and gravitational collapse didn't have time to happen and produce the nonlinear structure. Just a reminder that the variable, the degree of freedom we care about, is the
density contrast delta. You can study it either in real space or in Fourier space, and then, for example, define the power spectrum, which at early times will just be the linear power spectrum, the two-point function of Fourier modes; you can also study the higher-order n-point functions in a similar way if you want. Of course, as the universe evolves and you come to smaller redshifts, the gravitational collapse becomes significant and you start forming this complicated cosmic web, where you have voids, filaments and large dark matter halos. The first question you should ask is at which scales these nonlinearities become really large, basically what the typical scales are where the density contrast becomes of order one. You can answer this by looking at the variance of the density field, which is given here in this equation, just the integral of the power spectrum up to some smoothing scale that you are interested in. You can then ask, varying this R, which you can think of as a smoothing scale: for which R does the variance of the density field become of order one? At low redshifts this scale happens to be of the order of a few megaparsecs. Whether this is good or bad is maybe a matter of perspective, but I think the useful thing is to compare this scale to the horizon scale, which is of course much larger. This means that you have a lot of, if you want, independent pixels or Fourier modes which are in the linear or not-so-nonlinear regime, which you may have hope to describe analytically, and this number of pixels is, even by a naive estimate, much larger than the number of pixels of the CMB. This is the reason why we believe that these galaxy surveys have a higher potential in terms of constraining power for cosmological parameters. Now, the simple gravitational collapse induced by tidal fields, overdensities and so on is in some sense a true nonlinearity, but there are also other types of effects which are slightly different; they require a slightly different treatment, but they are very important, so let me also review them. Apart from the gravitational collapse, there are also large displacements. Asim has shown the Zel'dovich approximation, in which you basically get the whole cosmic web starting from homogeneous initial conditions, from a homogeneous universe, by simply displacing particles, and this is already a good approximation. So the true nonlinearities are basically a combination of these displacements and the genuine gravitational collapse in structure formation. If you want to estimate the size of these displacements, you will find that the so-called velocity dispersion, which measures how big the typical displacements are, is such that the typical displacements are of order 10-15 megaparsecs in the late universe. If these motions were random, they would scramble matter on scales of the order of 10-15 megaparsecs, but they are not random: as you remember from the Zel'dovich approximation, they are very much correlated with the initial conditions, with the initial overdensities, and therefore they are computable. This is why the true nonlinear scale, where we hit the real problem, is really the one from the previous slide, while these displacements can be treated in perturbation theory, or even beyond, non-perturbatively, as I'm going to show, and their effects can be taken into account exactly and computed, for all the purposes we have needed so far. Of course, all of what I said so far was just about dark matter, but the reality is far more complicated than that. First of all, we are not observing dark matter directly; we are rather observing only a discrete set of tracers,
such as galaxies, and this set of tracers is not even a fair representation of the underlying... Yes, sorry: LSS stands for large-scale structure, apologies, maybe I should have introduced that. So these tracers are discrete, so there will inevitably be some sampling noise related to the fact that you are probing the underlying dark matter field with a finite number of points. But these tracers are also biased: galaxies are not forming uniformly in your dark matter field, but at very special points where the overdensities are very large, and this will have important implications, for example, for the structure of the perturbative solution. Also, this galaxy formation process is very complicated, and we don't even know all the details. We can run some simulations, even hydro simulations, and we have different guesses for how important or how large the different astrophysical phenomena are when galaxies form, but I think it is fair to say that we don't really know all the details which would be needed in order to make clear predictions for what you observe. However, we believe that the formation of galaxies is local in space; in other words, the way that galaxies form in some region of space doesn't influence how another galaxy forms across the universe, and that will be important to keep in mind. Finally, there is yet another level of complication, which comes from the fact that we are observing things in redshift space; in other words, we are observing redshifts, which we want to convert to distances. This mapping, as Asim showed you, is not trivial, because it involves peculiar velocities, and therefore we inevitably introduce some distortions into our maps. These are the famous redshift-space distortions, which lead to an anisotropic two-point function. The way we usually deal with this is that, when we measure the two-point function or the power spectrum, we expand it in multipoles in the angle between the line of sight and the Fourier mode k, and usually, since the angular dependence is not very strong, it is sufficient to keep only the few lowest-order multipoles, such as the power spectrum monopole P0 or the quadrupole P2, et cetera. All right, is there another question? Because I see two things in the chat... maybe not, or it's just me... Thank you. Okay, great. So I briefly reviewed these very complicated details, and the question then is: what should we do, how should we proceed? Because it seems that things are very complex, and on small scales we don't even understand the physics of galaxy formation. So what should we do next? One direction, and this is more the way things were done historically, or maybe more intuitive from the point of view of an astrophysicist, because this is the way we discovered the universe, going from small scales and then successively observing larger and larger structures, is to first understand all the details of these astrophysical processes and small-scale things, and then make predictions for the behavior of our dynamical system on large scales. That would be one way of doing things, and there is nothing wrong with it, but you see it is very ambitious: it requires you to solve all the difficult things first and only then go to large scales and make predictions. But this is not what we usually do in physics. Actually, most of the time we think in the opposite direction, and again this is because in physics we discovered the world the opposite way: we start from large, macroscopic separations, and we learn about new phenomena by going to smaller and smaller scales. For example, think about molecular interactions and the fluid description: of course you discover fluids first, even before you know that molecules exist. You don't have to know the details of quarks and
gluons and their interactions to do nuclear physics, and eventually you don't even have to know string theory in order to talk about the Standard Model elementary particles. This intuition, that we can describe the world at large separations without knowing the details on small scales, is usually formalized in the effective field theory approach, and it is a very successful way of thinking about things, because it saves us from having to understand all the details at once in order to describe the relevant phenomena over a certain range of scales. Okay, so the question then is: can we do something along these lines for galaxies? Can we just try to describe what happens on large scales without having to know the details on small scales? This is an important shift in the point of view, which I think happened around 10 years ago, and this approach is called the effective field theory of large-scale structure. The idea here is really to think not from the point of view of an observer who lives in a small galaxy and is trying to understand all the details, but to think about seeing the universe from the outside, and to think about this galaxy field that we see in the data as some sort of strange material, made of baryons and dark matter, which are self-gravitating in the expanding universe, and then to study this material in the same way as we would study some other material, or some fluid, without knowing its microphysics. So the microphysics is not known, but what we do know is that the only long-range force relevant in the problem is gravity, and we know how to do calculations in this setup; and the formation of galaxies is local in space, therefore all non-trivial interactions involving baryons will be localized in space, and they cannot mediate long-range forces that would spoil your predictions on large scales. So this is a setup in which we do not have to know the details of the UV physics, which in this context will
be some kind of galaxy formation physics, in order to describe the long-wavelength fluctuations in this galaxy density field. Okay, now, as in any effective theory, I have to specify what the relevant degrees of freedom are: in this case it is the galaxy overdensity field delta_g. The equations of motion, if you look at the details, will be fluid-like, but with many additional non-trivial terms, and they of course include gravity, because it is the important long-range force which basically shapes the whole cosmic web. What kind of terms you are allowed to write down in these equations of motion is then dictated by the symmetries of the problem, the equivalence principle, et cetera. And there will be, as usually happens in effective theories, two expansion parameters. One is the smallness of the density fluctuations: the variance of the density field will be one small parameter, so terms in the equations of motion with more powers of the density field will be suppressed. The second expansion parameter is the derivative expansion: in other words, terms in the equations of motion which have more derivatives are also going to be suppressed, and the scale which suppresses these derivatives is the nonlinear scale, basically the scale where the fluctuations become of order one. So then how come the description is so universal, and why doesn't it depend on the details of the microphysics of galaxy formation? Well, it does depend on them, but only through a set of numbers, a handful of numbers, which multiply these terms in the equations of motion. These are called counterterms, or effective field theory parameters, et cetera, and there are only a few of them, and you don't have to know them in advance: for each galaxy formation scenario these terms will slightly change, but the point is that the description of the fluctuations will remain
unchanged: the form of the long-wavelength fluctuations you can expect is always the same, only these coefficients change, and they capture all the dependence on the UV physics. As a consequence, on scales larger than the nonlinear scale such a description will be universal, and it will apply to any galaxy formation process, or whatever you have on small scales that you don't know. This is the power of this idea. "Marko, since everyone is quiet, I'll ask a question: why is the expansion parameter delta_g and not the dark matter delta?" Yes, I'm being sloppy; it is the dark matter delta. Well, delta_g and the dark matter delta are related, at leading order, through the linear bias, which is a number of order one; and also, sometimes there will be other non-trivial scales which I'm not writing here, for example the size of the dark matter halos in which galaxies form, so for biased tracers there will be other scales in the problem, such as the Lagrangian size of the halo, et cetera. But yes, this can also be the dark matter delta, which numerically is very close to delta_g. "Sorry, but why only..." Can you repeat? Yes, so the question is why I am mentioning only the linear bias. Of course the relation is nonlinear; I'm just mentioning the linear bias as a proxy, a rough estimate of how different delta_g is from the matter delta, and this number will be roughly of order one. Of course, if you want to be very precise, you would have to do a more careful analysis; here I am just giving a rough idea of what the scales in the problem are. Okay, there is also a question in the chat asking to clarify the notation k, since only k_NL is defined. So k here is just the wave number when you do a Fourier transform, and k is inversely proportional to the length scale, so large k means you are talking about small
scales in real space, and the other way around. And k_NL, the nonlinear scale on this slide, is roughly speaking defined as the scale where the density fluctuations of dark matter become of order one. It is not a strict definition, but it is of the order of one over a few megaparsecs, so this k_NL would be something like 0.3 or 0.4 in those units. All right, so let me be a bit more specific, with one simple example where you can really convince yourself that you can construct equations of motion for these long-wavelength degrees of freedom starting from the exact equations of motion that you would solve on a computer, and where you can really see the emergence of these additional terms in the equations of motion, compared to what you usually have in mind, and how this whole effective field theory really works. This is the example of dark matter. If you want to solve dark matter on a computer, what you would do is write down the collisionless Boltzmann equation, of course with the gravitational interactions, and then just evolve the positions and velocities of all the particles; this is what we do when we run numerical simulations. Then we can ask ourselves: since we cannot solve this Boltzmann equation analytically, what is the next best thing we can do? I told you that one idea is to really look only at the long-wavelength degrees of freedom, where the fluctuations are small, with the hope that maybe there you can apply some perturbation theory. The first task then is to find the equations of motion for these small fluctuations on large scales. In order to do that, you can separate the fluctuations, or fields, in the Boltzmann equation into short-wavelength and long-wavelength degrees of freedom, and you can just average over the short-wavelength degrees of freedom. You can explicitly
do this starting from the Boltzmann equation, and what you are going to find is that you end up with the equations of motion for delta and the velocity, shown here on the right-hand side, which resemble fluid equations of motion; this is something that Asim showed in his lectures. They have the usual structure, except that now on the right-hand side of, for example, the Euler equation, you have some additional terms. These additional terms come from this procedure of averaging over the short-wavelength degrees of freedom, and they tell you, at leading order in delta and in derivatives, how the long-wavelength distribution reacts to the short-wavelength fluctuations. This term, of course, has a free parameter in front of it; you cannot predict it from this description, you have to measure it in the data. Yes? So the question is whether this extra term can be understood as an effect of multi-streaming: I cannot do perturbation theory there, and now I am seeing some consequence of that fact. I think the answer is yes: for dark matter particles this really is the correct interpretation. Its size is a bit more complicated, though, because, in some sense... yes, you can see it that way, but there is more to it: even if there were no multi-streaming, you would still be in trouble, because your density fluctuations become so large when you hit the nonlinear scale that you cannot use these equations anymore, and you have to supplement them with some nonlinear corrections. But let me maybe mention, as I promised, that this is supposed to be the unique long-distance description of any self-gravitating dark matter, whatever it is. For example, if you are talking about dark matter particles, then indeed there is essentially an effective mean free path for dark matter particles captured in dark matter halos, which is a
typical size of the halos, which is the typical length scale where multi-streaming becomes relevant, and indeed this is of the order of the nonlinear scale. But you can, for example, consider other kinds of dark matter. Yesterday we heard about ultralight dark matter, and you saw that its equation of motion is not the full collisionless Boltzmann equation but actually something different: it looks very much like a fluid equation with an additional term, this quantum pressure term, which can be very small for large, let's say, axion masses. So there you would naively have the ideal fluid equations of motion, but if you would again do this coarse-graining procedure, without any multi-streaming by the way, and focus on the long-wavelength fields, you would again come up with the same set of equations. So these counterterms have contributions not only from the fact that there is some mean free path, let me just finish, but also from the fact that you have gravitational collapse and you form halos, no matter what they are made of. Yes, well, it is a free parameter; in some sense you can decide where to put the cutoff scale, but different cutoff scales are just going to lead to different values of this coefficient, such that when you do the analysis and combine the two, there is no sensitivity to the cutoff scale. Does it also affect the continuity equation? I am being again a bit sloppy, because it would take too much time to explain, but there is a way to think about the continuity equation in terms of the momentum rather than the velocity, in which case delta and pi, the momentum, which is basically (1 + delta) times v, would be linearly related; in that case the average is trivial, and you can then rewrite everything in
terms of these variables into the computation so in principle it affects in in in general whenever you have the product of two fields and you look at the long wavelength limit this is not going to be the same as the the product of two long fields but there will be some derivative corrections of course and this affects every every term all terms where you multiply fields in the same point so the question was whether this averaging also affects the continuity and there I was yesterday thinking when I give a talk I will for sure repeat all the questions but it's always easier when you read the audience I have another question why this average what guarantees that after the average you can truncate the Boltzmann hierarchy um well I was about to so so the question was why are we guaranteed so so why this like averaging procedure guarantees that they can truncate the Boltzmann hierarchy and why am I not supposed to keep all the moments well there will be a let me let me first maybe also mention this that the collisionless when I wrote here collisionless system this already should make you very disturbed because the collisionless particles can never become a fluid formally imagine that you have a box with particles and which never collide with each other I can give some some strange initial let's say conditions where they just have velocities along the x-axis and they all move and bounce back and forth between they will never become a fluid so for things to be fluid there must be a mean free path and for the mean free path to exist the system must be collisional and the Boltzmann hierarchy can be always truncated on scales larger than mean free path okay so now why the dark matter which is a collisionless system behaves like a fluid well there are several reasons but the most important one is that in the finite age of the universe the dark matter particles cannot move very far because they're very slow so this provides effectively a mean free path and moreover they're all 
captured in dark matter halos, which in a sense provide another source of mean free path. So, in the same way as for a fluid, if you work on scales larger than the typical size of the dark matter halos, which is the mean free path of this system, you are guaranteed that you can truncate the Boltzmann hierarchy, and in fact all the higher terms that appear, for example in the Euler equation, will be suppressed by this nonlinear scale. As you approach the nonlinear scale, you start seeing that many additional terms become important, and in some sense the description breaks down; you would have to restore the full Boltzmann hierarchy. But on scales larger than the nonlinear scale, the fluid is a good approximation. Are there any other questions? Of course I am running super late; it is probably already time to stop. How much time do I have? Thirty minutes? Okay.

So again, Asim already mentioned some of these things, but you see that these are now equations of motion for delta which you can try to solve perturbatively. Since we are talking about large scales, where delta is supposed to be small, there is a sense in which linear theory provides the leading terms, and the nonlinear terms in the equations of motion provide corrections. You can compute these nonlinear corrections; I think one of the exercises you have as homework is to derive this second-order perturbation-theory kernel, which comes from these equations of motion. This is very well known, and people have been doing it for a very long time. Once you have, for example, this second-order solution, which has the form where you convolve two initial Fourier modes with some function F2, you can start calculating the nonlinear corrections to the linear power spectrum.

One such correction, for example, is the term where you correlate two second-order fields. It is represented here with these little stick diagrams, where you should imagine that time flows from bottom to top: you have two initial Fourier modes, say q1 and q2, which are convolved to produce the final Fourier mode k using the second-order solution, and then you have another second-order solution. When you correlate them, you do an average over the initial conditions at the end of the day, and this average is represented by the little green dotted lines; each expectation value of the initial conditions produces one linear power spectrum, for a Gaussian field, as Asim already explained. When everything is done, the whole contribution from this so-called 22 diagram (because you are correlating two second-order solutions) is the expression written here: P22 is an integral over linear power spectra weighted by perturbation-theory kernels. We call these loop diagrams, by analogy with quantum field theory loops, and in fact they are very close in nature to what Marco was telling you about; this is really a calculation of correlation functions rather than scattering amplitudes. Now, the full one-loop solution, the full next-to-leading-order result, has another diagram; these are actually the two diagrams that Marco already showed. One of course comes from correlating two second-order fields, as just described; and then there are terms of the same order in perturbation theory which correlate the linear solution with the third-order solution, and that is this diagram. On top of this you also have to add the correction coming from the additional term in the equations of motion, proportional to the free coefficients that we call counterterms. So the final answer for the one-
loop power spectrum is something like this. As you expect, and as you can see in this figure, these loop corrections are very small on large scales, and as you approach smaller scales they become significant. The second thing is that if you look at the form of the leading UV dependence of the one-loop diagram, which is shown here, it has exactly the same momentum structure as the counterterm. This is not a coincidence; it has to be like that for consistency, because whatever sensitivity the loops have to the very small scales, which you do not trust and which perturbation theory cannot capture well, is absorbed into this free coefficient, so that your theory makes sense on large scales. This happens at all orders in perturbation theory and for any observable. Now, when we talk about galaxies, things are of course much more complicated, and I will not go into the details, but the logic remains the same. There are additional sources of nonlinearity coming from the redshift-space-distortion mapping, and additional sources coming from the galaxy bias, which can be nonlinear, et cetera; you have more terms to worry about, but the logic is always the same. You can compute next-to-leading-order corrections, and if you do things consistently, there is always a set of additional counterterms to absorb all the UV dependence of the loop integrals. And this keeps happening for any observable, for any n-point function, at any order in perturbation theory. Are there any questions about this?

So the question is whether all these corrections come in powers of k over k_NL. Not all of them; this is a good question. There are two expansion parameters. If you look at these loop diagrams, the suppression is controlled by the variance of the density field multiplying the linear power spectrum, so that is one parameter. The counterterm, however, comes with k squared, which you can think of as k squared over k_NL squared. So there are two kinds of expansion: an expansion in k over k_NL, and an expansion in the variance of the density field. For each linear power spectrum you have to decide how these two expansions compare, and perhaps sometimes you have to keep higher orders in one or the other; it is a numerical question.

All right, so I also wanted to mention some non-perturbative results. So far I talked about perturbation theory, how to do it for this complicated material that galaxies are, and I explained the basic logic behind this EFT approach; but there are also some exact results. I will go very quickly through this, but if you want to discuss, I am here the whole next week; please ask me anytime. There are exact results that are valid even in the nonlinear regime. I will not tell you how they are derived, but essentially they relate the (n+1)-point function (and this can also be generalized), in the limit where one of the momenta, say this q, is much smaller than all the other k's — the so-called soft limit of the correlation function — to just the n-point function on the right-hand side. These are the so-called consistency relations for large-scale structure. The important point here is that the right-hand side vanishes if you have single-field inflation (this is a non-trivial statement, but it is true), if the equivalence principle holds, and — I will assume the first two points to be correct, and this is what will be important in the next couple of slides — only if there are no features in the n-point correlation
functions: only then does the right-hand side vanish. Now, remember, this is a non-perturbative result that I am showing you. However, we know that in our universe there is a very prominent feature in the power spectrum, and in all the other n-point functions as well, and this of course is the BAO peak: there is some excess probability to find two galaxies separated by around 100 megaparsecs. This feature is very important because it contains a lot of cosmological information, and we would like to understand its shape as well as possible. In linear theory this feature is rather sharp, but if you look at it in the evolved galaxy density field, which is nonlinear, the feature becomes much less prominent: there is a broadening of the BAO peak. Why is this happening? The reason is the displacements that I mentioned, which are one of the important nonlinearities. Displacements sourced by the Fourier modes shown here in purple, with wavelengths shorter than the BAO scale but longer than the width of the BAO peak, move particles in a way that is not coherent over the entire ring shown here in the black dark matter particles. They therefore distort this ring, such that on average, when you compute a two-point function, you find that the BAO feature is broadened.

So why did I mention these soft theorems? Because they provide a very simple way to understand how to deal with this broadening of the BAO peak analytically. In particular, since there is a feature in the two-point function, when you look at the n-point functions you cannot do a Taylor expansion in the soft momentum once q times the BAO scale is roughly of order one or larger, and therefore you have some enhanced terms which go like k over q — remember, the limit here is q much smaller than k. You can identify these pieces in the n-point functions; they are exact, given by this non-perturbative result, and they hold even in the nonlinear regime. They also help you identify where the information about the BAO peak sits in the higher-order n-point functions, and they motivate the so-called infrared resummation, where you resum the effects of these large displacements. Again, this can be done exactly, even for galaxies; it is not a purely perturbative statement. In such a procedure you get a formula like this one, of which some of you may have seen simpler versions: you still do perturbation theory, say to one loop, but for the feature you have to do something a bit more complicated, computing order by order in perturbation theory how it gets broadened, and whatever you do not include in the perturbative calculation you resum exactly using these displacements. This is the infrared resummation. The result is non-perturbative in the displacement fields, which are resummed exactly, but still perturbative in the density field, so the expansion parameters are the same as before. As a consequence you get results like the ones shown here: the data points are the black dots, linear theory is the solid black line, and you can see how well this one-loop infrared-resummed formula, the purple line, works — it goes through the very precise data points very well. Again, this is an extremely important step, because the BAO feature is one of the main things we are looking for in the galaxy correlation functions, and it contains a lot of information.

All right, so, what to conclude — yes, can I ask you a question? What is the tilde on P, what does it mean, P tilde? I guess it is just the infrared-resummed power spectrum; to be honest, I had not even noticed the tilde. It is really supposed to be the power spectrum where you do the one-loop computation as explained in the previous slides, but this is not enough when you have features: features get distorted by these large displacements. However, these large displacements can be computed non-perturbatively, and this is done in the infrared resummation, so you have to correct the wiggly, feature part of the power spectrum with this funny formula in the second line. And this is how it performs. If you do not understand all of these details, I think it is okay; I am around and we can discuss if you are interested. I just wanted to show this because it is a very important aspect of the nonlinear dynamics, and without it none of the analyses I am going to show would work.

The bottom line — this is a very busy slide, so do not look at all the formulas — is that everything needed for the real analysis fits almost in a single slide. Of course, in redshift space things are more complicated: all the perturbation-theory kernels now depend also on the direction of the line of sight; there are, as I said, additional terms in the perturbative kernels which have to do with the galaxy biases, which in turn depend on the microphysics of galaxy formation, et cetera; the infrared resummation in redshift space is more complicated and also depends on the line of sight; and so on and so forth. But all the formulas are really written here, and at the end of the day, using these relatively simple equations, you come to the point where, in order to make predictions at one loop, you have
to know the cosmological parameters, the standard set, and then several nuisance parameters which are not predictable within the effective theory and have to be measured from the data: the bias parameters, the amplitude of the shot noise, and several counterterms.

So far I have given you theoretical arguments and shown some equations, but the question you may ask is: how well does this work? At the end of the day I am still making approximations, still doing perturbation theory. To test it, many things have been done in the literature, but let me show you only one example, which I find curious. This is the example where we were given a very large volume of simulated data, prepared by Takahiro Nishimichi and Masahiro Takada, in the famous PT challenge, in which we did not know the cosmological parameters and we did not know what kind of prescription they used to create the galaxy density field. It was an HOD, involving of course also satellites and centrals; it is all in redshift space, and it is designed to resemble realistic data as we see it in BOSS, but the volume of the simulations was a hundred times larger than BOSS. These are extremely large-volume simulations, and as you can see on these plots, the data points are so small that you really cannot see the error bars; in the residual plot you can see that the error bars, for example for the monopole, are smaller than or comparable to 0.1 percent in almost all k-bins. The challenge was that we were supposed to use this theoretical prediction to fit the data and recover the cosmological parameters, and the question was whether we could recover the correct ones or not. On the right panel you can see the results of this blind analysis, and as you can tell, we are able to recover all the cosmological parameters and bias parameters in an unbiased way, and the error bars are rather small: all these cosmological parameters are measured with roughly one percent precision. This, I think, is an important test, because the deal was that even if you get it wrong, it has to go on the arXiv — so it is a very dangerous game to play — but I think we did have some confidence that the results would be correct. It means that we have enough precision in our computation even for much larger volumes than what is available in data today, and it therefore gives us confidence that when we analyze the current data, we are not making a theoretical mistake.

What is SN in the caption on the right? Ah, SN-bar. There was a small misunderstanding: they gave us the data with the shot noise (that is what it stands for) already subtracted, but there was a bit of confusion about what exactly they subtract, so at some point we reintroduced the shot noise with a fiducial value of zero and just marginalized over it. It is just the amplitude of the constant shot noise.

Okay, so this really means that, with this kind of theory at hand, we are in some sense entering a new era in cosmology. The reason is that this theory has been turned into very efficient codes, which are extensions of standard Boltzmann codes; I am giving three of them here, and you can look at these papers for details. These codes are able to compute all these nonlinear corrections even faster than the linear codes compute the linear power spectrum. They are very efficient, and this opens up the possibility to run MCMC analyses on this spectroscopic data; in the same way that this was very interesting when it was done for the CMB, I think it is now becoming very interesting for large-scale structure as well. There is a burst of activity where people are trying this and applying it to various examples, and I will show some of them. I also want to say that it is very satisfying to have, if you want, a unified description of the history of the density fluctuations in terms of this weakly coupled effective field theory, all the way from the Bunch-Davies vacuum, where the perturbations come from, through inflation, through the CMB, all the way to redshift zero. We have the entire evolution under control, and we can apply these effective-field-theory methods throughout the entire history of the universe to make predictions in a way that is calculable: the whole theory is weakly coupled and under control. I think it is a big step, if you want, to really complete this program and have the whole history of the universe covered.

All right, how am I doing with the time? Ten more minutes, is that okay? All right. So that was the part about the theory of large-scale structure. I tried to convince you that, even though maybe you do not understand all the details and these formulas look a bit all over the place, we do have a very good understanding of all of them, and when tested against simulations they perform very well. So how about applying it to the data? Let us do this. How do we apply this to the data? It is very straightforward; there is nothing conceptually difficult here. What we do is take the galaxy map, like the one I showed you at the very beginning, say the BOSS data, and measure the power spectrum from this galaxy catalog. This power spectrum is anisotropic, so it has a monopole, shown here with the black dots, a quadrupole, in the blue dots, et cetera. What you have to do then is take your favorite cosmological model that you want to test, say Lambda-CDM, make
predictions for all possible values of the cosmological parameters, within reason, and then compare these predictions with the data. That is all you have to do. This is called the full-shape analysis, and it is very similar to what is done for the CMB: from the shape of this power spectrum you can measure all the cosmological parameters, and crucially, this does not rely on any input from the CMB or other data sets. You can really take these maps and measure cosmological parameters without any additional input. This is very different from the standard analysis that the collaborations were doing in the past, which was a fixed-shape or fixed-template analysis: you take the cosmological parameters from Planck, the best-fit Planck cosmology, you fix the shape of the linear power spectrum, and the only thing you vary when you compare to the data is the amplitude. This is, for example, how BOSS measures f sigma 8. That approach is less powerful, because you are not allowing the shape of the linear power spectrum to vary, and you are relying on external data sets; here you do everything consistently, without using any external data sets.

As I mentioned, the ingredients are very simple, and here you just have to follow the first lecture, or the second, I forget now, that you have seen about statistics. You need the data: this is just the power spectrum monopole and quadrupole from the BOSS galaxy catalog. You need the theory, which is what we described before. Then you need a likelihood: we assume a simple Gaussian likelihood for the power spectrum, so the chi-squared is just (data minus theory) times C-inverse times (data minus theory), as simple as that, where C is the covariance matrix. The covariance matrix can be obtained in various ways: for example, you can use the mock data for the BOSS survey to measure it (in fact that one turns out to be slightly off), or you can calculate it analytically, which turns out to be the better thing to do. It is an interesting story and I could tell you more about covariances, but I am afraid Merdad would not like it, so I will not. Finally, once you have your likelihood, you can also put some priors if you want, say that these EFT coefficients are numbers of order one, which is a reasonable prior, and calculate your posterior. This posterior is for all the cosmological and nuisance parameters, and then you can, for example, marginalize over the nuisance parameters to get the posterior for the cosmology only. This is something you have already seen in the lectures about statistics, maybe not in this particular example, but the logic is the same.

So here are the results; let me comment on them. What is shown here is a triangle plot with the cosmological parameters. The red contours are what Planck measures, the best thing we have at the moment from the CMB, and the blue contours — let us focus on the blue contours — are the ones that come from the BOSS analysis. As you can see, for some cosmological parameters, such as little omega-cdm or the spectral index, the CMB is currently clearly much better. However, there are parameters, such as the Hubble parameter or omega-matter, which are measured in the BOSS data alone as well as in the CMB. There are two important things to keep in mind here. The first is that this is a very important plot to have, because it shows that, if we assume the Lambda-CDM cosmological model, the parameters we infer from the CMB are consistent with the parameters we infer from galaxy surveys, even though these two data samples are very different: they are taken at very different epochs in the history of the universe, and the systematics in the two data sets are completely different. This agreement is rather impressive. The second thing is that, as I said, for some cosmological parameters the BOSS data alone is already constraining enough to give us decent measurements, comparable to those of the CMB, in particular for the Hubble parameter, and in the light of the Hubble tension this is an interesting result.

All right, are there any questions about the procedure? Yes, please. Sorry, I did not understand; can you repeat? Okay, so the question is which priors we used on the cosmological parameters: none, they are flat priors, say omega-matter from zero to one or something like that. No, we are varying it directly; yes, it is flat on all parameters. Yes, there was another question. Okay, so the question is that, when we consider large-scale structure data, there is typically a strong degeneracy between Hubble and omega-matter, and how is it broken here. I think what you are talking about is the analysis of the BAO peak only, and this is what I was saying in the previous slide; it is a very important difference. If you have the BAO peak only, this corresponds to looking only at the wiggly part of the power spectrum, and then you need the sound horizon from the CMB in order to measure, say, the Hubble parameter; if you do not use the sound horizon, there is a very strong degeneracy between omega-matter and Hubble, and you cannot infer them from the position of the BAO peak alone. However, since we are fitting the full shape here, the information on omega-matter is not only in the BAO but also in the equality scale and in the slope of the power spectrum in the mildly nonlinear regime, in the same way that the CMB can tell you what omega-matter is, because you are using the full shape. This is the big difference. Yes?
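To make the structure of the full-shape pipeline just described concrete — a Gaussian chi-squared likelihood, flat priors, a posterior, and marginalization over a nuisance parameter — here is a minimal sketch. Everything in it is a stand-in invented for illustration: the three-bin "power spectrum", the toy model with one shape parameter `om` and one nuisance amplitude `b`, and the diagonal covariance; a real analysis uses the one-loop EFT prediction, the BOSS data vector and window, and a dedicated sampler.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: a 3-bin "power spectrum" data vector and a toy model.
# None of these numbers come from BOSS; they only mimic the pipeline shape.
k = np.array([0.05, 0.10, 0.15])

def model(om, b):
    # stand-in for the theory prediction: om controls the shape, b the amplitude
    return b * k / (1.0 + (k / om) ** 2)

om_true, b_true = 0.3, 2.0
data = model(om_true, b_true)             # noiseless mock data vector
cov = np.diag((0.02 * data) ** 2)         # 2% diagonal Gaussian errors
cinv = np.linalg.inv(cov)

def log_post(om, b):
    """Flat priors inside broad bounds; chi^2 = (d - t)^T C^{-1} (d - t)."""
    if not (0.0 < om < 1.0 and 0.0 < b < 10.0):
        return -np.inf
    r = data - model(om, b)
    return -0.5 * r @ cinv @ r

# Minimal Metropolis-Hastings sampler (real analyses use dedicated MCMC codes).
theta = np.array([0.35, 1.8])
lp = log_post(*theta)
samples = []
for _ in range(30000):
    prop = theta + rng.normal(0.0, [0.01, 0.05])   # random-walk proposal
    lp_prop = log_post(*prop)
    if np.log(rng.random()) < lp_prop - lp:        # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
samples = np.array(samples)[5000:]                 # drop burn-in
om_marginal = samples[:, 0]   # posterior for om, nuisance b marginalized over
```

The histogram of `om_marginal` is then the one-dimensional posterior for the "cosmological" parameter, with the nuisance amplitude integrated out, which is exactly the logic of the triangle plot above.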
There was another question. Sorry, I did not understand: the disadvantages, or the advantages? Okay, so in fact the disadvantage is to use the covariance matrix from the mocks rather than the analytical covariance. The question was whether it is better to use, say, the analytical covariance or the covariance matrix from the mocks. In practice people always believe simulations more than they believe calculations; this is a sad mistake. You can of course use either of the two, it does not matter, as long as you are sure it is correct. The problem with the mock data is that if the data vector is long, the covariance matrix is large, and in order to estimate the inverse of the covariance reliably you really need a lot of simulations, and these are expensive; so people usually play this dangerous game where they do not run as many simulations as they would really need. On the other hand, the analytical covariance is very difficult to compute; that is its only disadvantage. It will be correct, but it is difficult to compute, and it is difficult because there is a survey mask. If you have a periodic box there is no problem, it is not so hard, but usually there are non-trivial masks, and then it is hard to do.

I have to say, though, regarding the covariance, that even if you use a simple Gaussian covariance you will get the same answer. I think the exact form of the covariance is not so important, and the reason is that, while the data covariance in principle should impact things quite a lot, when you marginalize over the nuisance parameters you are in a sense introducing an additional contribution to the covariance which is bigger than your data covariance. I can explain this later if you like. In practice it turns out that when you look at the posteriors obtained with, say, the mock covariance, a Gaussian covariance, or an analytic covariance that includes the non-trivial connected part of the trispectrum, you get the same answer. This is for two reasons: first, the shot noise is high, so you never enter the regime where things get complicated; and second, there is this contribution to the covariance that you can think of as coming from the marginalization over the nuisance parameters — the data covariance is one thing, the covariance matrix of the marginalized posterior for your cosmological parameters is another, and it is not clear which one dominates. Maybe there are samples, and neutral hydrogen comes to mind, where the shot noise is very low and this may be an issue, but for galaxy clustering it is not a problem: you can be much more relaxed about the covariance matrix than about your theoretical model.

So the question is whether this approach can be applied to photometric surveys. I think photometric surveys are much more difficult for perturbation theory. Let me say it this way: for a photometric survey in the sense that you have some uncertainty in where your galaxy is, sure, you can apply it. But if you do lensing, then this is difficult for perturbation theory, because in lensing the observables you look at inevitably mix large and small scales, and your perturbative calculation does not apply on small scales; in galaxy clustering there is a simple way to separate the two. So it is more challenging, I think.

Okay, some questions from the chat. One is about the Hubble tension: can you comment on the values of redshift that your survey is measuring, compared to the CMB and compared to supernovae? Right, so I think that very often the Hubble tension is phrased as a tension between late-universe, or low-redshift, measurements and high-redshift, or early-universe, measurements, while in reality I do not think this is a very useful distinction. The redshift does not matter once you fix the cosmological model. The real distinction between supernovae and this type of measurement is whether you measure the Hubble constant directly, basically by definition, by looking at how fast different galaxies are moving away from you, or whether you measure it from the density fluctuations, fitting Hubble as one of the parameters of your cosmological model. Once you set the cosmological model, the entire history of the universe is fixed, so there is no real distinction between late and early universe; galaxies are just like the CMB, only nearby. The redshift does not matter: you have the light cone, which intersects the evolution of all these fluctuations from redshift infinity to redshift zero, and you are fitting these fluctuations. For example, in this plot, is omega-matter a late-universe or an early-universe parameter? It is very important for setting the transfer functions in the CMB, but at the same time it is very important for estimating the distances to the galaxies in the late universe, so I do not know — it is both. On the other hand, supernovae measure Hubble basically directly: they calibrate the luminosity of the supernovae and look at the luminosity-distance relation, fitting for a single parameter, the Hubble constant. So I think this is the real distinction. You may be confused and think that this measurement, which overlaps with supernovae in redshift, is a late-universe measurement, but it is not; properly speaking, it is an indirect measurement. We fit for Hubble by studying the history of the fluctuations throughout the entire history of the universe, while the other measurements are direct. Thank you. There is another
So there is another question, about loop computations in non-standard cosmologies: can they be done, and can they be used as a way of discriminating between those models? It depends on what "non-standard" means. For example, if you modify the early-universe cosmology, as people like to do, and I'm going to show you some examples, then you can still use the exact same codes; they are not affected. If you modify the late-universe cosmology, or introduce another long-range force which is not only gravity, you can still do loops, of course you can do the calculation, but you will have to change the equations of motion and recompute everything again from scratch. But in principle the method always applies. Okay, all right. Of course, now I'm running out of time, so let me in fact finish; I think this is the next-to-last slide. I want to show you some of these non-standard examples. Once these codes are out, you can do a lot of things with them; I'm showing only a few examples of things that have been done in the last year or two. For instance, I haven't talked about the higher-order n-point functions, but all that I said about the two-point function also applies to the three-point function: that one, too, can be computed and compared to the data. For example, you can see here the first constraints from large-scale structure on so-called single-field inflation, that is, on the type of non-Gaussianities which can arise in single-field models, which is not local non-Gaussianity but the equilateral or orthogonal shapes. This had never been done before; the only constraints we had were from the CMB, because we didn't have a reliable theory and methods to compare the bispectrum that you compute to the data. Then, in the middle plot, you can see an analysis where you modify the early-universe cosmology by adding a little bit of early dark energy.
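To make concrete what "recomputing the loops" involves when the equations of motion change: in standard perturbation theory the simplest one-loop contribution is the P22 integral built from the second-order kernel F2, and it is exactly this kernel (and the linear spectrum) that a modified late-time theory would change. The sketch below uses a made-up toy linear spectrum, not a real cosmology.

```python
import numpy as np

# Toy one-loop SPT integral P_22(k). A modified theory changes the kernel
# F2 and/or the linear power spectrum, and this integral must be redone.
# P_lin(q) = A q exp(-(q/q0)^2) is a made-up toy, not a real cosmology.
A, q0 = 1.0e4, 0.3

def P_lin(q):
    return A * q * np.exp(-(q / q0) ** 2)

def F2(q1, q2, mu12):
    # Standard Einstein-de Sitter second-order SPT kernel
    return 5.0 / 7.0 + 0.5 * mu12 * (q1 / q2 + q2 / q1) + (2.0 / 7.0) * mu12 ** 2

def P22(k, nq=400, nmu=400):
    # P_22(k) = (1 / 2 pi^2) * int dq q^2 int dmu F2(q, |k-q|)^2 P(q) P(|k-q|)
    # Crude midpoint grid; mu is clipped away from +-1 to avoid |k-q| -> 0.
    q = np.linspace(1e-3, 3.0, nq)[:, None]
    mu = np.linspace(-0.999, 0.999, nmu)[None, :]
    p = np.sqrt(k * k + q * q - 2.0 * k * q * mu)   # |k - q|
    mu_qp = (k * mu - q) / p                        # cosine between q and k - q
    integrand = q * q * F2(q, p, mu_qp) ** 2 * P_lin(q) * P_lin(p)
    dq = q[1, 0] - q[0, 0]
    dmu = mu[0, 1] - mu[0, 0]
    return integrand.sum() * dq * dmu / (2.0 * np.pi ** 2)

print(P22(0.1))
```

Swapping in a different F2 or a different P_lin is all it takes to "redo the loop" for a modified model; production codes do this with much better integration schemes, but the structure is the same.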
You may have heard of early dark energy as a way to solve the Hubble tension, and you can see that the addition of the BOSS data in this extended scenario is already very important: it basically makes the difference between your early-dark-energy theory being a viable solution, if you use only Planck and BAO data, and it not being a viable solution, if you use the full-shape analysis. So it is already important at this stage, and if you do the forecast for what DESI will do, it will really shrink these error bars dramatically. This is a typical thing that always happens: in these extended scenarios, the future galaxy surveys will be extremely powerful, either alone or in combination with the CMB. Then, for instance, you can consider ultralight axions as a fraction of the dark matter. Here we are talking about axion masses which are very light, between 10^-25 and 10^-30 electron volts, and, as you saw yesterday, they cannot be the whole dark matter, but they can be a fraction, and this is interesting to look at. For example, the red contours in the upper-left panel show what kind of constraints you can get from the CMB on this fraction of axion dark matter. If you combine this with galaxy clustering, you improve this by a factor of two already with BOSS; with DESI the improvement will be much more significant. Or, for instance, if you consider some light relics which are there in addition to neutrinos: again, people have used these BOSS analyses either to provide the first-ever constraints or to tighten significantly the leading constraints present in the literature before that. So you see, this method is not only applicable to Lambda CDM; it can really be used as a generic tool to test various extensions of Lambda CDM, which are all interesting, and with the new data that are coming, things are going to become much, much better. All right, so let me finish with this final plot,
which shows the forecasted power of a survey such as DESI or Euclid. You can basically apply the same method that I was talking about to make a prediction for what kind of cosmological constraints you will have in the future. These are mock likelihoods, both for Planck and for large-scale structure, assuming that the neutrino mass is, I think, 0.1 electron volts. The dashed lines are the Planck mock likelihoods; the red solid contours are what the DESI power spectrum will give you, with the same analysis that I was talking about for BOSS; and the blue solid contours are the combination. Just take a look at, for example, Hubble and omega_cdm: how much we are going to improve in just a couple of years, when DESI delivers the data. This kind of improvement is quite impressive even within Lambda CDM, and for the extensions it is even much better. So I think this is very exciting, because we will really be able to test Lambda CDM much more precisely than we can at the moment, and perhaps find some discrepancy or deviation, or look for particular imprints of non-cold dark matter in our universe, such as ultralight axions or these light but massive relics, or perhaps look for some new long-range forces in the dark sector, et cetera, in the data that are coming. So I'm going to stop here, and I'm sorry for being over time. I will just say that, in this sense, galaxy clustering really is the new CMB, and you will hear a lot about this in the years to come. I think it is very important for you to keep an eye on these developments, both on the theory side and on the data-analysis side, because they will be an important part of the activities in the field. I think that perturbation theory is not perfect, but it can really bring us very far, and, importantly, it will be good enough for the upcoming surveys.
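A minimal version of the forecasting exercise behind a plot like this is a Fisher matrix built from Gaussian band-power errors on P(k). The sketch below is a toy: the survey volume, number density, power-law spectrum, and the two parameters (amplitude and slope) are stand-ins, not DESI or Euclid specifications.

```python
import numpy as np

# Toy Fisher forecast for P(k) band powers. Volume, number density, and the
# two-parameter power-law model (amplitude A, slope n) are illustrative
# stand-ins, not real survey specifications.
V = 1.0e10          # survey volume, (Mpc/h)^3
nbar = 5.0e-4       # galaxy number density, (h/Mpc)^3
k = np.linspace(0.02, 0.2, 19)
dk = k[1] - k[0]

A_fid, n_fid, kp = 2.0e4, -1.0, 0.1

def P(kk, Amp=A_fid, n=n_fid):
    return Amp * (kk / kp) ** n

# Gaussian band-power variance: var = 2 (P + 1/nbar)^2 / N_modes
Nmodes = V * 4.0 * np.pi * k ** 2 * dk / (2.0 * np.pi) ** 3
var = 2.0 * (P(k) + 1.0 / nbar) ** 2 / Nmodes

# Analytic derivatives of the model at the fiducial point
dP_dA = P(k) / A_fid
dP_dn = P(k) * np.log(k / kp)

derivs = np.vstack([dP_dA, dP_dn])      # shape (2, n_bins)
F = derivs @ (derivs / var).T           # F_ij = sum_k dP_i dP_j / var
cov = np.linalg.inv(F)
sigma_A, sigma_n = np.sqrt(np.diag(cov))
print(sigma_A / A_fid, sigma_n)         # fractional error on A, absolute on n
```

Real forecasts replace the toy model with the full cosmological power spectrum and many more parameters, but the logic of the contours in the plot is exactly this: more volume means more modes, smaller variance, tighter ellipses.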
And I think that there is still a lot of room for improvement in this analysis, and also for exploring relevant new-physics scenarios which can be constrained by galaxy clustering. So with these final remarks I'll stop and take some questions. Yes, the question is whether I'll be here for the discussion sessions: absolutely yes, anytime. I'm very easy to bribe; just a spritz or a coffee and I can answer all your questions. You mentioned that the EFT of large-scale structure is universal; does this include the sound-speed parameter, I mean, for fuzzy dark matter and cold dark matter? Yes, yes. So the counterterm, this speed of sound, or the one-loop counterterm, for fuzzy dark matter, that is, the ultralight axion, and for cold dark matter would actually be very similar. It might be slightly different, because they of course differ in terms of the microphysics, but, as I said, one of the main contributions to this speed of sound really is the fact that you have gravitationally bound dark matter halos, and as long as those are not very different in the different scenarios, the typical size of these halos really provides the typical estimate of how big this counterterm is. So in some sense, since gravity is universal, it doesn't care about what the dark matter is and it will form dark matter halos in the same way, and these counterterms are to the same extent universal, because they, too, depend on gravity only. So yes, it is universal; in any example it would be a similar number. Then, have we only used the BOSS data, or also other galaxy surveys like SDSS? In fact, we have only used the BOSS data. Well, BOSS is in a sense an extension of SDSS; even though they are not fully overlapping, BOSS is much bigger in volume, so it would dominate the error anyhow. There are other data sets, like for example the quasars from eBOSS.
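The universality argument above can be turned into a back-of-the-envelope estimate. Assuming the counterterm enters the power spectrum as P_ctr(k) = -2 c_s^2 k^2 P_lin(k), with c_s^2 ~ 1/k_NL^2 set by the typical halo or nonlinear scale, its fractional size grows as (k/k_NL)^2 regardless of the dark matter microphysics. The value of k_NL and the power-law spectrum below are illustrative assumptions.

```python
import numpy as np

# Back-of-the-envelope for the speed-of-sound counterterm: assume it enters
# as P_ctr(k) = -2 c_s^2 k^2 P_lin(k) with c_s^2 ~ 1/k_NL^2 fixed by the
# typical halo / nonlinear scale. k_NL and P_lin are illustrative only.
k_NL = 0.3                  # h/Mpc, assumed nonlinear scale
cs2 = 1.0 / k_NL ** 2       # (Mpc/h)^2, order-of-magnitude estimate

def P_lin(k, A=2.0e4, n=-1.0):
    return A * (k / 0.1) ** n

k = np.array([0.05, 0.1, 0.2])
P_ctr = -2.0 * cs2 * k ** 2 * P_lin(k)
frac = np.abs(P_ctr) / P_lin(k)    # fractional correction, grows as (k/k_NL)^2
print(frac)
```

Since the estimate involves only the halo scale set by gravity, two dark matter models with similar halos give a similar number, which is the sense in which the counterterm is "universal" in the answer above.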
But those are more difficult to analyze, because they are very sparse, the shot noise is very high, and the masks are very complicated, so I'm not sure they would add much to this analysis. And there are all these galaxy lensing surveys, but for the reasons I mentioned earlier, this perturbation theory is very hard to apply to lensing. So I was talking about the spectroscopic surveys because I think that, at the moment, they are the leading probe on the large-scale-structure side for most of the parameters, and this will remain true, I think, for the foreseeable future. Okay, if there are no more questions, let's thank Marko again.