Good morning everyone. Let's start with today's lecture. First, a quick recap of what we did last time with spherical collapse; then today we'll use the physics of spherical collapse to make an estimate of the abundance of virialized objects. The quick recap: although it's a really simple model, just energy conservation, it works pretty well, because the rank order of the binding energies of particles is more or less preserved, and that is what the spherical collapse model is tracking. So even though collapse is lumpy and aspherical, it's a reasonable approximation. But there's an interesting point I haven't made, because we jumped immediately to the fully nonlinear spherical collapse model. I noted in passing that we can write the spherical collapse result in closed form; I guess we'll use this later, so I might as well write it on the board: one plus the nonlinear density is approximately 1 + delta_NL ~ (1 - delta_L/delta_c)^(-delta_c), where delta_L is the linear-theory density and delta_c, the critical linear-theory value for collapse, is about 1.686. This is linear theory at lowest order, but if I expand it in a Taylor series there will be terms of order delta squared, delta cubed, and so on; in k-space those are convolutions, and that is the mode coupling that happens because of nonlinear evolution. This expression is an approximation. We put up the exact parametric solution; if you expand that exact parametric solution in delta_L, delta_L squared, delta_L cubed and so on, you'll get coefficients.
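As a quick sanity check on the closed-form expression above (a minimal sketch, assuming the (1 - delta_L/delta_c)^(-delta_c) form written on the board): at small delta_L it reduces to linear theory, and it blows up as delta_L approaches delta_c, which is collapse.

```python
def delta_nl(delta_lin, delta_c=1.686):
    """Approximate nonlinear overdensity from the linear one, using the
    spherical-collapse-inspired form 1 + delta_NL =
    (1 - delta_L / delta_c)^(-delta_c). Diverges as delta_L -> delta_c."""
    return (1.0 - delta_lin / delta_c) ** (-delta_c) - 1.0

# Linear regime: delta_NL is close to delta_L for small delta_L.
print(delta_nl(0.01))   # close to 0.01
# Approaching collapse: the nonlinear density runs away.
print(delta_nl(1.5))    # tens, heading to infinity at delta_L = 1.686
```

Expanding this form in a Taylor series gives the delta squared, delta cubed, ... terms mentioned above; the exact coefficients come from the parametric solution.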
Those coefficients are exact, exact for the spherical collapse model. If you were to solve the full equations of motion, including asphericity, you might imagine you'd get very different answers because of tidal effects, which are completely missing from the spherical collapse approach. But if you think of the full solution as having a monopole plus a quadrupole and so on, so you decompose it into monopole, quadrupole, and so on, then the monopole terms, to all orders, are exactly spherical collapse. So although you're making a big approximation here, in one sense it's exact: you have solved the monopole exactly, and you're just missing the quadrupole and higher-order pieces. It's worth remembering that this is a much more powerful approximation than you might naively have thought, looking at the lumpy, aspherical collapse that we're modeling. The last point I want to make before we do some statistics with this physics is the following. We said that this critical number for collapse is the same for all objects that collapse today; if I want objects that collapse at redshift one, then this number is higher by the growth factor associated with redshift one. So this number is independent of the mass of the final object, and the nonlinear density is independent of the mass of the final object. Let's use that fact now. The final virial density is just the mass over the volume. But the mass was set by the initial volume: there was some patch initially, and that patch had an overdensity, but at the initial time the overdensity was of order 10^-5. It's that overdensity grown by linear theory that is of order unity; the actual initial overdensity was 10 to the
minus 5. And so the initial mass, which is (4 pi r_i^3 / 3) times the background density times (1 + delta_i), with delta_i the overdensity inside r_i: that (1 + delta_i) we can throw away, so the initial mass is just the initial volume times the comoving background density. The ratio of the initial and final sizes then gives the nonlinear density, and this is the thing we said will be of order 200 times the background density, for all objects. So if they all have the same density, we can ask: what is the typical speed with which something moves inside the virialized object? From the virial theorem, -W = 2K, so the kinetic energy scales with the potential energy, v^2 ~ GM/r. Then it's just algebra: plug in for the mass, which is proportional to the virial radius cubed times the (fixed) density, and work through to get the scaling of v^2 with mass, or of v^2 with radius. Let's do v^2 with mass: v^2 scales as M^(2/3), so the typical speeds are higher in the more massive halos. Massive clusters will be hotter: the galaxies in them will be moving faster, and the gas in those clusters will be hotter. The X-ray gas that you observe, and the kinetic and thermal Sunyaev-Zel'dovich effects, will be due to hotter gas if the mass is higher. From this you can also work out the scalings with redshift, with time, because the density is 200 times the background density; that's why I've written it in terms of H, which is a function of redshift. At fixed mass, objects at high redshift should be hotter, because H was higher in the past. So this is quite a powerful thing: the density is always the same whatever the mass, and that density is some multiple of the background density, so high-redshift objects are denser, they're hotter, everything is moving faster. This will help us when we want to model redshift-space distortions.
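To make the scalings concrete, here is a minimal sketch, assuming Delta = 200 with respect to the background matter density and v^2 = G M / r_vir; the value of Omega_m and the unit choices are illustrative assumptions, not from the lecture:

```python
import math

G = 4.301e-9  # gravitational constant in Mpc (km/s)^2 / Msun

def virial_velocity(M, H, Delta=200.0, Omega_m=0.3):
    """Typical internal speed of a halo of mass M (in Msun) at an epoch
    with Hubble rate H (in km/s/Mpc), assuming its mean density is
    Delta times the background matter density and v^2 = G M / r_vir."""
    rho_bar = Omega_m * 3.0 * H**2 / (8.0 * math.pi * G)   # Msun / Mpc^3
    r_vir = (3.0 * M / (4.0 * math.pi * Delta * rho_bar)) ** (1.0 / 3.0)
    return math.sqrt(G * M / r_vir)

# v^2 scales as M^(2/3): multiplying the mass by 8 doubles the speed.
# At fixed mass, a higher H (earlier epoch) gives a hotter object.
print(virial_velocity(1e15, 70.0))   # cluster-sized halo today, ~1000 km/s
```

The M^(2/3) and H^(2/3) scalings of v^2 fall straight out of holding the density fixed at Delta times the background.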
And in addition to redshift-space distortions, it helps us model X-ray clusters and so on. Right, so we make an estimate of that; that's what we're going to do today. As with everything, this is going to be a model, and in the model we're always going to have stuff that is virializing now. So an object like the one you're describing, that virialized at redshift 2 and then had not much happen to it: that object is something we will say accreted a tiny amount of mass from then to now, so the mass changed by a little bit, and it is newly virialized now. It's slightly artificial, but we'll come back to that. So we're set: most of the nonlinear physics will come from the virial theorem and from the fact that all objects have the same density whatever their mass. When we come to write the halo model we will make a lot of use of this. Just for completeness, I won't go through it here, but I've given you the hydrostatic equilibrium equations that you would use to convert the masses that we have into gas temperatures. We've argued that all objects should have the same density whatever their mass, but we have not yet made an estimate of how many massive objects there should be and how many low-mass objects there should be, so that's the next thing we're after. The way to make this estimate is called the excursion set approach. I will give you the simplest version of the approach and then say a few words about the more recent developments in it, but I think the simplest version is a good way to think in general about nonlinear object formation. So let's get started. First, here is a series of pictures in comoving coordinates, so the expansion has been divided out of the box. This is a box at redshift 20, and then we click forward in time, so you can see the structure from early to late. And the thing you want to notice is that
nothing moved across the box. This is cold dark matter: the speeds are all small, and so structure formation is local. The mass that made this object didn't come from very far away; it didn't come from across the box. This void simply expanded. So structure formation is local, and we want to use this fact in estimating what objects are. Now, what's an object? If we look at this, we'll say: that looks like a place where collapse happened, and that looks like a place where collapse happened. In fact, all of the little yellow dots (the picture is color-coded by density, so yellow marks the high-density regions), each little knot here, correspond to little virialized halos. And we want to make an estimate of how many big ones and how many little ones. So what does someone running the simulation do? They come here and try to count up all the particles that are in that thing. This one is dark matter only; the baryons for the most part fall where the dark matter is, and for this we're going to ignore the difference between baryons and dark matter. There might be small-scale differences, there might be feedback heating the gas and making the gas distribution a little different from the dark matter, but on scales of a megaparsec or so there's no difference. So this is a dark-matter-only simulation, just for simplicity. Now we ask: how do you find a halo in a simulation? One way you might do it is to choose a random position. I could have chosen many random positions, but here, just to illustrate, I've centered on this object. Then you ask: is the mass inside this volume 200 times the background density? And if it's not, then you say: let me go to a smaller
scale, and a smaller scale, and a smaller scale, and you keep asking: is it 200 times the background density? When it is, you say: I found an object; that's a virialized halo. Then you can do the check: you have the particles, so you can go and measure the speeds and measure the potential energy, and make sure it's virialized, make sure it's bound. That will work, and it's pretty efficient. But our goal is to model what is happening in the simulations not by doing this, but by predicting it from the initial conditions. So we want to make a prediction of this from the initial conditions. Just as there is the critical density, 200 times the background density, in the nonlinear field, in the linear field the critical value is 1.686. So let's go to the linear field and play the same game: we start with a big smoothing scale, and we shrink and we shrink and we shrink, and we ask when the smoothed density reaches 1.686. When it does, we say: this is probably the patch that will go on to make a halo. And you've seen already that the patches don't move much; they're just going to shrink. (In the initial field at redshift 20, the threshold would be 1.686 divided by the growth factor from that redshift to today.) So you find the place that has the right density, and now the question is: how often does that happen? This will happen at this point in space, and at this point in space. I want to know what fraction of positions in the Gaussian random field have the critical density on this smoothing scale, or on that smoothing scale, or on a tiny smoothing scale, and that will tell me the fraction of the mass in the universe that is bound up in small objects, or the fraction that is bound up in big, massive objects. That's the goal; that's what we're going to try to do.
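The shrinking-spheres procedure just described can be sketched in a few lines. Here it is as a bisection on the radius at which the mean enclosed density equals Delta times the background; the singular-isothermal mass profile in the demo is a made-up stand-in for counting particles in a real simulation:

```python
import math

def r_delta(enclosed_mass, rho_bar, Delta=200.0, r_lo=1e-4, r_hi=100.0):
    """Radius at which the mean density inside r equals Delta * rho_bar.
    enclosed_mass(r) is any monotonically increasing mass profile; for a
    centrally concentrated object the mean enclosed density falls with r,
    so we can bisect on the radius."""
    for _ in range(100):
        r = 0.5 * (r_lo + r_hi)
        mean_rho = enclosed_mass(r) / (4.0 / 3.0 * math.pi * r**3)
        if mean_rho > Delta * rho_bar:
            r_lo = r   # still denser than the threshold: move outward
        else:
            r_hi = r   # too dilute: shrink back in
    return 0.5 * (r_lo + r_hi)

# Toy profile: M(<r) = r (a singular isothermal sphere, in code units).
# Then the mean enclosed density is 3 / (4 pi r^2), so the Delta = 200
# radius should come out at sqrt(3 / (800 pi rho_bar)).
print(r_delta(lambda r: r, rho_bar=1.0))
```

In a simulation, enclosed_mass(r) would just be the particle count inside r times the particle mass.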
So how can we picture this? We're going to look at things as a function of smoothing scale: we take a big smoothing scale and we ask on what scale the density reaches the critical density. Notice that I started on a big scale and came down to a smaller scale; I didn't start from a small scale and grow. Why is that? If I start on a small scale, it might already have been overdense; suppose its density was 5 instead of 1.686. Then we would have said: sure, it should have collapsed. But it may be that somewhere around it the density is 3, and somewhere around that it is 1.686, and the 1.686 region is the thing that will have collapsed today. If it collapsed today, everything inside it is still inside it, and so the mass today is not the small mass in the middle but the big mass. The reason for starting big and coming down is to get the largest possible mass associated with this patch of space that will have collapsed to make a halo today. So that's the logic. Now we're going to make a plot: the y-axis is the density and the x-axis is the smoothing scale, with big scales here and small scales there. Then we ask: when I smooth the density field with a big filter, the big circles I drew before, what's the density? If the filter is big enough, a thousand megaparsecs, the density should be close to zero, because the universe is homogeneous. Then as I shrink the smoothing scale, the density should pop up and down, up and down as I decrease the smoothing scale, until finally I find the scale where it crosses 1.686. On the scale where it crosses 1.686 I say: what was the smoothing scale? That smoothing scale corresponds to a mass, and this is my prediction for the mass of the collapsed object. So if this walk crosses this barrier after a
few steps, that's a large smoothing scale, so a massive halo; if it crosses after very many steps, that's a small smoothing scale, so a low-mass halo. Of course I could keep doing the smoothing, keep plotting the density as a function of smoothing scale, but there's no point continuing for this position in space, because I've found the biggest one: it already crossed, and all the mass that would be associated with a smaller smoothing scale is contained inside this one. So this is the right mass. One reason for continuing to draw the walk is the following. Suppose I was interested in objects at high redshift compared to objects at low redshift. Let me set that up again: I drew a barrier here, and this barrier was 1.686. Now imagine I wanted to describe objects at some other time; at some other time this number will be different, different by the growth factor. And now I have a choice: do I want to make this random walk in the initial field, where the fluctuations are of order 10^-5, or can I take the initial field and multiply everything by the growth factor to redshift zero? Then I have one field, the same linear field, and I describe evolution just by saying I want a barrier of a different height for each redshift. That's what we're going to do: the barrier will be higher for high redshift and lower for low redshift, and that's why I was drawing that barrier scrolling down from high redshift to low redshift. So look at where the walk intersects the barrier as I decrease the height of the barrier. The walk will have continued over here: it crossed the high-redshift barrier somewhere here, then a lower-redshift barrier it crosses here, then as the barrier drops it crosses here, here,
here. Then when the barrier is here, the mass jumped from this value to this value. That makes sense because we're always asking when the walk first crossed the barrier; as we lower the barrier, we just trace out where the walk first crosses it as the barrier changes height. And this tells me the mass of the halo at high redshift, then at lower redshift, then lower still: the mass increases smoothly, smoothly, smoothly, and then the mass jumps. There was a merger. So suppose redshift 0 corresponds to this barrier: at redshift 0 the object had this mass. At redshift 1 the barrier height was here, and if the barrier height was here the object had this mass, a smaller smoothing scale, so a smaller mass. At redshift 2, say, the barrier height was here, so the object had an even smaller mass, and so on. So this random walk is giving me the mass of this object as a function of time: it's giving the entire merger history of this object. And this is quite powerful, because astronomers like to talk about how things merge, whether the change in mass was smooth accretion or big mergers. If the jump is big, my object hit something of the same mass, so it doubled its mass; or a small thing merged onto a big thing, in which case from the small object's point of view the mass jumped a lot; or it did what was just described, formed at high redshift with mass trickling onto it afterwards, so it accreted smoothly. So this is a picture that lets you describe those things, and it lets you do that in a statistical sense, because now we can ask how likely it was that I get a walk like this. In a Gaussian random field, if I went to some other position in space and made the random walk there, that walk would do something different.
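The barrier-lowering procedure is easy to sketch. Here is a toy version, with a hand-made walk standing in for a real smoothed density field: lowering the barrier (going to later times) can only move the first crossing to an earlier step, i.e. a larger smoothing scale and hence a larger mass, and a big jump in the step is a merger.

```python
def first_crossing(walk, barrier):
    """Step at which the walk first reaches the barrier (None if never).
    Earlier steps correspond to larger smoothing scales, so to larger
    masses."""
    for step, height in enumerate(walk):
        if height >= barrier:
            return step
    return None

def mass_history(walk, barriers):
    """First-crossing step for a sequence of barrier heights. Barriers
    ordered high to low mimic going from high redshift to low redshift;
    the crossing step can only decrease (mass can only grow)."""
    return [first_crossing(walk, b) for b in barriers]

# A toy walk: density versus shrinking smoothing scale at one position.
walk = [0.2, 0.5, 1.2, 0.9, 1.8, 2.5, 2.1, 3.4]
print(mass_history(walk, [3.0, 2.0, 1.0]))   # steps [7, 5, 2]
```

Reading the output right to left gives the merger history: the same object crosses the low-redshift (low) barrier at an earlier step, i.e. with a bigger mass.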
So if I make a quick sketch: at one point in space the walk does something like this, at another point it does something like that, and somewhere else maybe it does this. Gaussian statistics means that if I take a fixed smoothing scale (this axis is density, that axis is smoothing scale) and ask for the distribution of heights these walks reach, ignoring the barrier, it will be a Gaussian distribution: most land here, some land there. But when I allow many more steps, choosing this smaller smoothing scale, I get a much broader Gaussian distribution; here it was a narrow Gaussian distribution. So, and maybe I should keep this, this is explicitly the Gaussian statement: at very large scales a very narrow Gaussian, at very small scales a very broad Gaussian. All we want now is the statistics: given that the distribution on any scale is Gaussian, what is the probability that I cross a barrier of this height? If I keep the barrier at one height, the white walk crossed here, so this tells me one halo of this mass, actually a particle in a halo of this mass, at redshift zero; the blue one is a particle in a halo of slightly bigger mass at redshift zero. But if I take this halo at redshift zero and ask what its mass will be in the future, the barrier will be lower, so I follow the walk to where it crosses the lower barrier. For its mass in the past, it was here: the walk will continue and cross the higher barrier somewhere here. So for one object I look at one walk; for the statistics of many objects at one time, I keep one barrier and look at the statistics of many walks.
So I should think of each walk as a random position in space, around which I made the walk by asking what the density is as a function of smoothing scale. Does that make sense? And that's why, if I choose random positions in space in a Gaussian random field, I get Gaussian statistics on any smoothing scale. That was the beautiful thing about Gaussian statistics: the pdf has the same form on all smoothing scales. So now we're going to do that problem. We're not going to do the evolution-of-mass problem; we're going to do, at a fixed time, what is the distribution of masses. I was just motivating what you can do with this picture, and then we can come back and ask how to do the evolution problem. So, for a fixed time, we want the distribution of masses. Our goal will be to ask: what is the fraction of walks that cross a barrier of fixed height? You can already see this is a kind of general problem that may already have been solved in other fields, because this is Brownian motion hitting a barrier; it's a pretty rich problem. But let's put it completely in the language that is useful for us, and for that we have to do one small thing. So far I described the walk with the x-axis as smoothing scale, from big smoothing scale to small smoothing scale, and we said we can convert from smoothing scale to mass, because the radius gives me the mass. Now, we've established that the distribution on any smoothing scale should be a Gaussian, broad or narrow, and the width of the Gaussian is sigma squared for that smoothing filter. This was the expression we had for the variance before; it tells us the width of the Gaussian as I change the smoothing
scale, and R cubed, times the background density, gives the mass M. Because there is a monotonic relation between smoothing scale and variance, instead of plotting smoothing scale on this axis I will plot the variance. I'll use S to denote it: it plays the role of a smoothing scale, but it's the variance. The variance is small on large scales and big on small scales. Why is that useful? Because we know the distribution will be a Gaussian on each smoothing scale, and in principle how the width of the Gaussian depends on smoothing scale depends on the power spectrum. By plotting things against this variable rather than that one, we have removed the dependence on the power spectrum: we can study the problem for all power spectra with one plot. So when we solve the first-crossing distribution, we will solve it in the variables delta and S = sigma squared; then we will convert from sigma squared to mass, and that conversion will depend on the power spectrum. This is a powerful approach because it means you can solve the problem for arbitrary power spectra, so for arbitrary cosmological models. Well, not quite: if I change the cosmological model I change the growth factor, so I change not just the power spectrum but the growth with redshift, which lives on this axis. But changing the growth factor just changes the mapping between barrier height and redshift; the barrier itself I don't change. So this picture, this solution, will be valid for arbitrary cosmology, arbitrary growth factor, arbitrary power spectrum: a big simplification. Then we will convert from sigma squared to mass, and from barrier height to redshift, using the cosmology-dependent growth factor; maybe what I should put here is something like delta_c(z) = delta_c / D(z), with a growth factor D.
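The conversion from smoothing scale to variance can be sketched directly from the usual integral S(R) = sigma^2(R) = (1 / 2 pi^2) times the integral of dk k^2 P(k) W(kR)^2. The power-law P(k) in the demo is illustrative, not a real cosmological spectrum; for a scale-free spectrum P ~ k^n, sigma^2 scales as R^-(n+3), which is the monotonic scale-to-variance mapping used above.

```python
import math

def W_tophat(x):
    """Fourier transform of a spherical top-hat filter, x = k R."""
    if x < 1e-6:
        return 1.0
    return 3.0 * (math.sin(x) - x * math.cos(x)) / x**3

def sigma2(R, P, kmin=1e-4, kmax=1e3, n=20000):
    """Variance of the density field smoothed on scale R:
    sigma^2(R) = (1 / 2 pi^2) Int dk k^2 P(k) W(kR)^2, evaluated with a
    crude log-spaced trapezoid rule. P is any power-spectrum function."""
    dx = (math.log(kmax) - math.log(kmin)) / (n - 1)
    total = 0.0
    for i in range(n):
        k = math.exp(math.log(kmin) + i * dx)
        f = k**3 * P(k) * W_tophat(k * R) ** 2   # integrand in d(ln k)
        total += f * (0.5 if i in (0, n - 1) else 1.0)
    return total * dx / (2.0 * math.pi**2)

# Scale-free check: with P(k) = k^-2, sigma^2 scales as 1/R, so doubling
# R should roughly halve the variance.
print(sigma2(2.0, lambda k: k**-2) / sigma2(1.0, lambda k: k**-2))  # ~0.5
```

Inverting sigma^2(R), together with M proportional to R cubed, gives the S-to-mass dictionary for whatever power spectrum you feed in.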
The growth factor depends on cosmology. So let's sketch the solution. It goes like this: there's the barrier at the critical density, and we want to know how many walks have crossed the barrier for the first time on scale S. Now, it's a complicated problem, because the walks are pretty jagged, so it's useful to first study a simple case in which the walks are completely smooth. Where I've drawn them, I've drawn them so that you should imagine the distribution of walks is the same Gaussian distribution of heights as before, just with the walks totally smooth: many walks like this, fewer like this, and very few like this. Now, every walk that is above the barrier on this smoothing scale must have crossed it at some earlier, larger smoothing scale, because all these walks crossed at some earlier point. So this integral, which is just the integral over a Gaussian, the tail of the Gaussian, is the same as the integral over all the walks that first crossed at some scale less than this S. That's what I've written here: we want the integral over all the walks that first cross on scale s, times the probability that they're above the threshold on this smoothing scale given that they first crossed on scale s, and this should equal the tail of the Gaussian. Think of the tail of the Gaussian as the left-hand side, and we know the left-hand side: it's like an error function. It equals all the walks that crossed here and are still above, plus those that crossed here and are still above, plus those that crossed here and are still above. This factor is the fraction that first crossed at some s, and that factor is the fraction that are still above, given that they crossed
for the first time at s. Now this is a kind of clumsy thing to write, because it's obvious from the picture that if smooth walks crossed here, they are still above: that conditional probability is one. So I did more work than I needed to, but I did it because I want to set up the problem for when the walks are jagged; we will use the same logic. When this probability is one, the equation is simple to solve: the tail of the Gaussian equals the integral from zero to S of f(s) ds, so I just take the derivative with respect to S. It's the derivative of an error function, which I can do, and that gives me an estimate of the mass function. So that's one simple case. The other simple case is when the walks are not completely smooth but take completely random steps, so they go up and down like I drew here rather than moving smoothly. What would that correspond to? Remember that in a Gaussian field the k-modes are all independent of each other. Now, when I smooth the field and ask what the density is, I didn't specify how I'm smoothing, and I think we all had in mind that I count everything inside a sphere. But there's no particular reason why I couldn't smooth with a Gaussian instead of a top hat, or with some other shape: I can use any filter I want. We use the top hat because when we did the spherical collapse model, it was a top-hat object that was collapsing. But just to play the game, to see how this depends on the filter, suppose the filter lets in one k-mode at a time. If it's one k-mode at a time, it's sharp in k-space, not in real space. Then
because the k-modes are all independent, each of these steps will be completely independent: as I change the smoothing scale I let in one more k-mode, one more k-mode, one more k-mode, and the walk goes up or down depending on the sign of the mode I added. So those walks take completely uncorrelated steps; they are completely jagged. That means that if a completely jagged walk arrived at the barrier, and I want to calculate the probability that it is still above the barrier on a smaller smoothing scale, then the walk takes completely random steps up and down, up and down, until it gets to that smoothing scale, and because the steps are completely random, half of the walks end up above and half end up below. So for this problem the conditional probability is simply one half. Reality is something in between: not completely smooth and not completely jagged, and this probability is complicated for the in-between case, but the two extremes are easy. For completely smooth walks the probability was one; for completely uncorrelated steps it is one half. If you then take the derivative of the error function, you get this term, and the two solutions differ by just a factor of two. Some of you will know the literature and will have heard of the Press-Schechter approach: the famous factor of two in the Press-Schechter answer is essentially this factor. Press and Schechter solved the problem for completely smooth walks, and Bond et al. in 1991 solved it for completely jagged walks. But the case we're most interested in is walks that are somewhere in between.
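The jagged (sharp k-space filter) case is easy to check numerically: with completely uncorrelated steps, the fraction of walks that have first crossed by variance S is twice the Gaussian tail, erfc(delta_c / sqrt(2S)). A rough Monte Carlo sketch follows; the walk and step counts are arbitrary choices, and the discrete walk slightly undercounts crossings that happen between steps.

```python
import math, random

def crossed_fraction_mc(delta_c, S, n_walks=10000, n_steps=400, seed=3):
    """Monte Carlo fraction of sharp-k walks (independent Gaussian steps,
    each adding variance S / n_steps) that reach delta_c at any point up
    to total variance S."""
    rng = random.Random(seed)
    step_sigma = math.sqrt(S / n_steps)
    hits = 0
    for _ in range(n_walks):
        height = 0.0
        for _ in range(n_steps):
            height += rng.gauss(0.0, step_sigma)
            if height >= delta_c:
                hits += 1
                break
    return hits / n_walks

def crossed_fraction_exact(delta_c, S):
    """Uncorrelated-steps answer: twice the Gaussian tail above delta_c."""
    return math.erfc(delta_c / math.sqrt(2.0 * S))

print(crossed_fraction_exact(1.686, 1.0))   # about 0.09
print(crossed_fraction_mc(1.686, 1.0))      # close to the line above
```

The factor of two is visible here: the raw Gaussian tail above the barrier at S is only erfc(...)/2, but every walk sitting at the barrier is equally likely to wander back below, so the first crossers are twice as numerous as the walks still above.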
So what to do for something in between? We can try to write out the probability that the walk has height delta_1 on scale 1, delta_2 on scale 2, up to delta_n on scale n, and then on scale n+1 it has height delta_c or bigger, and so on. But you can see this is going to be painful: we can write the joint multivariate Gaussian, but it's a painful calculation. To solve the problem this way we would have to integrate this multivariate thing over all possibilities, imposing the condition that on all the first n steps the walk was below the barrier and then it was above. You can try to write that, but it's not efficient. It's more efficient to say: instead of thinking of the walk as the height after many, many steps, think of it as the height on one scale plus the derivatives, because I can describe a curve by a value and its derivatives rather than by the full set of heights; any function can be thought of that way. The virtue of doing this is that because the walks have correlations between the steps, imposing that the walk has a certain height on a certain scale already constrains its height on many other scales, so by working in the basis of delta and its derivatives on some scale, rather than in the basis of all the heights, you efficiently take care of a bunch of those correlations. If you do that, then what you want to say is: the fraction of walks that first cross on scale S is given by the joint probability that the walk has height delta on scale S and derivative delta-prime, for all walks where the height is between delta_c and something to do with the slope. Let me write this a little more clearly. What I really want is that delta on scale S should be bigger than delta_c, and delta on scale S minus dS, the previous step, should be less than delta_c. I also want all the earlier ones to be less than
delta_c, but I'm not going to impose that constraint now; I'm just going to see how well I can do if I add only this one condition, because my goal was to reduce this big multivariate-Gaussian problem to fewer dimensions, and I'm trying to do that by putting a constraint on just one other scale. We'll see where we go. Now, delta on scale S minus dS I can write as delta(S) minus dS times d(delta)/dS; that's what this expression means. So I want delta(S) to be bigger than delta_c, but delta(S) minus dS times the slope to be less than delta_c, which means delta(S) is less than delta_c plus something involving the slope; that's the condition written up there. In other words, I want the walk to cross the barrier going upwards, which means I want the derivative to be positive. So we just work with these two variables, and we do the integral over them with these constraints: this one positive, and this one between delta_c and delta_c plus the slope term. Then it's just math: you write out a 2D Gaussian, not a huge infinite-dimensional one. So it's a simple approximation, and we can ask how well it works. The other nice thing is that the logic is pretty general: here we've done it for Gaussian statistics, but the pdf could be non-Gaussian, so you can apply this to non-Gaussian fields. Also, we've done it for a barrier of constant height, but the barrier could have some shape, and you can work this out when the barrier depends on S. I mention that because yesterday someone asked about ellipsoidal collapse, about triaxial collapse. A crude approximation to triaxial collapse is that the critical density to form an object is higher for low-mass halos, which means a crude approximation for ellipsoidal collapse is to say that the barrier associated with making a halo at a
fixed time has a shape that looks more like this, and you can use the same logic to solve that problem. So that's the solution. It has the piece that we had before, the derivative of the error function, plus a correction, and the correction is what imposes that the walks cross the barrier going upwards. You can imagine there would be more corrections if I corrected for many more previous crossings, but this is the first one. Let's think a little about this correction: it matters only when the parameter nu starts getting big, so maybe I should say what nu is. Nu is the combination of the critical density delta_c over the square root of the variance on that scale. The reason for using that variable (let me put up the solution while we discuss this) is that this is the solution of the random-walk problem in these variables. If I now specify my cosmology, I have specified the growth factor, so I can convert this into a function of redshift; and if I specify the power spectrum, I have specified the conversion from this variable to mass. So I now have a prediction that lets me translate from this to halo abundances. It looks like this: f(nu) d nu is the fraction of random walks that first cross on smoothing scale s. Each random walk is associated with a position in space; you can think of each random walk as associated with a particle in the initial conditions. So this is the fraction of mass that is in halos of mass m at this redshift, for the power spectrum that converts this variable to that mass. This is a fraction of mass, so to get the number density of objects, note that each halo of this mass has
a mass m that it contributes, so this fraction of mass, times the background density, divided by m, gives the number density. This is the fraction of mass that is in halos of this mass, as a fraction of the total mass, so there's an implicit assumption that this integral gives the background density, because the assumption is that all mass is in halos of some mass. That is the ansatz for converting the first-crossing distribution into an approximation for the halo mass function. Think a little about whether it's true that all the walks are going to cross, because if they don't, then not all mass is in halos and I won't get this integral. In cold dark matter models the walks are very jagged; the variance diverges at very small scales in CDM, so all the walks cross the barrier and the assumption is good. If you're interested in warm dark matter models, not all the walks are going to cross; talk to me about that, I won't go into it here. So this is one approximation, and we've been through the simplifications that make it useful: you can translate to arbitrary cosmologies and arbitrary power spectra. We'll come back to this. Now let's go one step further and do the mergers of objects; we'll try to handle different times. This is a picture of the particles that were in a halo at redshift zero in a simulation, and we asked where those particles were at some earlier time, I think redshift two. You can see that the particles that make up the halo were in a bigger patch, and in that bigger patch you can identify all the objects that are 200 times the background density at that time. Those are the little circles there. They look like they overlap because each is a 3D sphere that has
just been projected; they don't really overlap. So the big object at redshift zero was in many pieces at redshift two. If you were interested in the gravitational-wave problem, where we care about the mergers of binary black holes and would like to estimate the expected stochastic background, this is how you do it; we'll be going through this calculation now, because it's the mergers of these massive galaxy halos at redshift two, redshift one, that make the gravitational-wave signal, and then the waves propagate to us. So what's the toy model? The toy model is that you have a bunch of smaller spheres that are shrinking to make your final object. Here's the picture: you have lots of smaller things that are going to fall together, and the model says that at redshift one thousand this was a patch that we identified as having linear density 1.686 divided by the growth factor. Inside this patch there's substructure, and here we've drawn that substructure: say, all the smaller pieces that are at double the density. That corresponds to saying these pieces should all have collapsed at some higher redshift, and as they collapse they are also moving together, because the big patch is collapsing too; by redshift zero it should collapse. So I have this collapse happening with everything moving together, sort of like you saw in the movie, and the result is that associated with this patch is a merger history. Initially you had lots of small pieces, the twos and the four, and as the patch shrinks these pieces start to merge with each other, and the things that are close to each other merge with each other first.
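As a toy illustration of how this random-walk bookkeeping can be done numerically, here is a minimal sketch. It assumes sharp-k smoothing, for which the walk steps are independent and the first crossing of a flat barrier has a simple closed form (the inverse-Gaussian distribution); the conditional version used for merger histories is then the same formula with a shifted origin. The function names are mine, not from the lecture.

```python
import math

def f_first_cross(delta_c, s):
    # First-crossing density for a walk with independent steps (sharp-k filter):
    #   f(s) = delta_c / sqrt(2*pi*s**3) * exp(-delta_c**2 / (2*s)),
    # which integrates to 1 over s in (0, infinity).
    return delta_c / math.sqrt(2.0 * math.pi * s ** 3) * math.exp(-delta_c ** 2 / (2.0 * s))

def f_conditional(delta_1, s_1, delta_0, s_0):
    # Conditional crossing for the merger problem: given that the walk passes
    # through (delta_0, s_0), the final halo, its first crossing of the higher
    # barrier delta_1 > delta_0 at s_1 > s_0, the progenitor, is the same
    # random-walk problem with the origin shifted to (delta_0, s_0).
    return f_first_cross(delta_1 - delta_0, s_1 - s_0)
```

That the unconditional distribution integrates to one is the statement made later in the lecture that in CDM every walk eventually crosses the barrier, so all mass is in halos of some mass; the same holds for the conditional distribution, which says that all of a halo's mass was in some progenitor at the earlier time.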
these pieces will collapse, then they will merge with each other, then they will merge together before they merge with this one, and then this side will merge with that side. So you can see that associated with the initial substructure is the merger-history tree of the patch. This works because it's CDM: nothing is moving far, so if pieces are close initially they will merge with each other first; it's not that this piece will somehow merge with that distant one before it merges with its neighbor. That's why we can use the initial Gaussian statistics to calculate everything later. So here is another picture of what this calculation is going to do, again the merger history, and we're going to get a lot of information from the initial Gaussian statistics. This was showing you the pieces that merge with each other, and we used this before to say this merges, then that merges, to make the final object. If we wanted to do this problem, we would say that associated with a higher redshift is a higher barrier, and now I want to know: given that I had a walk that crossed on this scale, so it has some mass at redshift zero, what is the mass at redshift two? Instead of solving the problem of walks crossing a barrier of height delta_c, I now want to solve: given that the walk had a big mass at redshift zero, what was the smaller mass at redshift z? It's the same random-walk problem, just with one extra conditional distribution for the Gaussian, so it has the same kind of solution, just with a different Gaussian that you write down. The solution is the same one as before, the term that's the derivative of an error function, plus a correction, and that's it. So now you can do this, you
know, one redshift, then another redshift, and so on. So you can describe how likely it is that an object of 10^15 solar masses had a piece that was 10^14 solar masses at redshift one. You can ask in particular what fraction of its mass was in pieces of 10^10 at redshift one, or at redshift ten; how the mass is partitioned into smaller pieces at each redshift. How likely is it that the 10^15 solar mass object was 9.9 x 10^14 solar masses at redshift two? Very unlikely. How likely is it that an object of mass 10^11 today was 9.9 x 10^10 at redshift two? Very likely. So you can quantify that kind of thing, and these are very useful tools for getting a handle on what structure was like at earlier times. There's one final thing I want to show you that you can do with this construction. The thing I want to model now is the fact that the field starts out Gaussian and then evolves, and we said it's going to become some non-Gaussian field; can we understand the shape of that non-Gaussian distribution? The idea is to go back to the spherical evolution mapping, the one I wrote on the board, which says the nonlinear density is related to the linear density, and to rearrange it to write the linear density in terms of the nonlinear density; it's just a rearranging. Now, the linear density divided by the spherical-collapse value is exactly my y-axis. The mass is a monotonic function of the radius, and the radius of the smoothing scale, so the mass is really like s. That means if I fix the volume today, I will get a curve of this quantity as a function of s. So for a fixed volume there will be a curve, and let's look at what that looks like.
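The rearranging just described can be written out explicitly. Inverting the approximate spherical-collapse mapping from the board, 1 + delta_NL = (1 - delta_lin/delta_c)^(-delta_c), gives delta_lin = delta_c * (1 - (1 + delta_NL)^(-1/delta_c)), and fixing the volume V ties the nonlinear density to the enclosed mass through 1 + delta_NL = M / (rho_bar * V). A minimal sketch of the resulting barrier curve (function names are mine; mass is measured in units of rho_bar * V):

```python
def delta_lin(delta_nl, delta_c=1.686):
    # Invert the spherical-collapse approximation
    #   1 + delta_nl = (1 - delta_lin/delta_c)**(-delta_c)
    # to get the linear density from the nonlinear one.
    return delta_c * (1.0 - (1.0 + delta_nl) ** (-1.0 / delta_c))

def barrier(mass, mean_mass, delta_c=1.686):
    # Barrier height for a cell of fixed Eulerian volume V, where
    # mean_mass = rho_bar * V so that 1 + delta_nl = mass / mean_mass.
    return delta_lin(mass / mean_mass - 1.0, delta_c)
```

The curve crosses zero at M = rho_bar * V, rises toward delta_c at large mass (large smoothing scale), and plunges downward at small mass, which is exactly the shape of the volume-dependent barriers described in what follows.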
So here's the random-walk space, smoothing scale versus density, and if I pick a volume I get a curve that looks like this. You can kind of read off why it should look that way. Let me write the mass of a volume over here. When this linear value is zero, the nonlinear overdensity is zero, and the mass is the comoving background density times the volume; so the curve crosses zero where M equals the background density times V. If I fixed my smoothing scale to be 10 megaparsecs, then this contains a mass which is the background density times a 10-megaparsec volume. So suppose this is the 10-megaparsec smoothing scale: on smaller smoothing scales the curve has a smaller value, and on bigger smoothing scales a bigger value. How can we see that? Increasing the mass means I go this way, to bigger smoothing scales and smaller variance. Increasing the mass is like a bigger nonlinear density, and a bigger nonlinear density means a bigger linear density, but the linear density can't get bigger than delta_c, or the nonlinear density, one minus the ratio raised to a negative power, would be diverging. So when the nonlinear density is infinite, the linear density is delta_c; infinite mass means a humongous smoothing scale, so on very large smoothing scales the curve goes to delta_c. For a 10-megaparsec late-time volume, then, I have a curve which goes from delta_c down through this scale and keeps going down. If I changed my volume and made this curve not for 10 megaparsecs but for 100, then the mass where the overdensity is zero is bigger, so the zero crossing is at a bigger smoothing scale; that's the curve for a bigger volume, and it still has to go up to delta_c and then keep going down. So these are moving barriers whose shape depends on the volume, and now the problem of a random walk that crosses this barrier is telling me the
fraction of mass that is in cells of size 10 megaparsecs today, or the fraction of mass that is in cells of 100 megaparsecs today. From this you can see that the random-walk problem we solved before, with a constant barrier, is just the limit in which this volume goes to zero: when the volume is zero the density diverges, and a diverging density means the barrier sits at delta_c, which is why it's the horizontal line, the same value for all smoothing scales. So in this picture the random walk, which before we thought of as evolving in time by making the horizontal barrier drop, so that we could read off mass as a function of redshift, we can also think of this way: this is my halo that at redshift zero has this mass, but surrounding it on the 10-megaparsec scale the mass is this, and on the 100-megaparsec scale the mass is that. So I have the density profile around it today. The density profile around it at redshift 1000 is something we can calculate from Gaussian statistics, and the density profile around it at redshift zero, in the nonlinear field, I can calculate from the first crossings: given that the walk crossed the pink barrier here, when did it cross the green one, when did it cross the white one, and so on. It's the same calculation: the evolution with time is the density profile with environment. That's a key insight, the idea that time evolution is the same as a density profile, because the profile tells you how the mass will arrive. If the profile is falling steeply, there's not much left to come; if the mass profile is very shallow, there's still stuff out there to be accreted. So you can work out what the accretion history will be from the density profile. I've described the density profiles from these sets of curves, but I could simply
ask: what's the first-crossing distribution of the green barrier? That's the pdf of the mass in cells of size 10 megaparsecs, or of the mass in cells of size 100 megaparsecs. You can see that as I go to 100 megaparsecs, 200 megaparsecs, the barrier is becoming steeper, and if it were perfectly steep the answer would be Gaussian; the fact that it's tilted is what makes the pdf non-Gaussian, because the first-crossing distribution of a tilted barrier is slightly different from the first-crossing distribution of a vertical line. On large smoothing scales most walks are between here and here; the distribution is narrow, so it doesn't matter whether my barrier looks like this or like that, they all cross in the same place anyway, and the pdf will be close to Gaussian. But on small smoothing scales the difference between the tilted and the vertical barrier is big, and the distribution will be very non-Gaussian, very different from the Gaussian. So this lets you calculate those things; it lets you calculate the profiles, and there's a very close connection between environment and evolution. If there's one message from this, that's the one you really want to remember: there's a very close connection between environment and evolution. So that was the model for the pdf, for the evolution from the Gaussian to the non-Gaussian field. Now let's talk a little more about the correlation between environment and halo masses, because the direction I want to go is to start setting up the clustering of halos: not just the mass function but the spatial distribution of the halos. You can see that we should have some way of estimating that from the picture I had before, of the big round patch with small pieces in it, because when I described that patch we asked about redshift zero, where they're all together; but I
could have drawn the same picture and said: at redshift zero they're not all together, but one Hubble time from now they will all be together, because the linear density of this patch is not 1.686 but half of that. So it hasn't collapsed today, but it will in the future; it has shrunk, but not all the way, and that means the pieces inside it are like the mass function today. Those are the halos today that still exist, that have not yet merged. Before, we said this is redshift zero and these were the pieces in the initial conditions; now we're going to say this is some time in the future and these are the pieces at redshift zero. And we can do the same statistics problem: what is the distribution of pieces in cells of some size today, say cells of size 50 megaparsecs? If I know that the 50-megaparsec region is twice the background density, then what is the mix of halos in it? Suppose there's one 50-megaparsec patch which is overdense, and elsewhere in the universe another 50-megaparsec patch which is underdense. What is the mix of halos in the overdense patch and in the underdense patch? The simplest guess would be that the overdense patch has more mass, and more mass means I can make more halos, so one possibility is that the distribution of halo masses has the same shape and I just have more of everything in the more massive patch and fewer of everything in the less dense patch. That's one possibility, but you can see from this picture that it won't be right: if your walk has already arrived up here, it only has a little bit further to go to cross delta_c, and that means the halos in dense regions are all massive. In underdense regions the walk has to get all the way back up to here, so it will take
many more steps before it reaches the barrier, and those halos will all be low mass. So the mass function in dense regions and in underdense regions will be different: the mass function given the large-scale density will not just equal the average over everything with more of everything or less of everything; there will actually be some dependence on the density, some function of the density here. What function? Well, we know we should have more massive halos in dense regions, so if I were to call that a bias, I expect the bias to be a number that grows with mass, because if the region is dense I want lots of massive halos. And we know how to write this problem exactly: before, I conditioned on the high-redshift piece of the low-redshift object, but the condition can instead be the large-scale environment, with the halo living inside that large-scale environment. So this m0 corresponds to some volume. Suppose my volume is big, say the 50 megaparsecs I mentioned: then I know the associated density is small, so it's close to the linear density, and this number is small, which means the constraint is just a small perturbation. So I can expand this function in a Taylor series in that value, and in that expansion the leading-order term will be the density, then there will be density squared, density cubed, and so on: the 50-megaparsec density squared, cubed, and so on. The leading coefficient we call the linear bias factor, and there will be a quadratic bias factor and a cubic bias factor and so on. You'll be hearing a lot about these bias factors next week, but next week there will be three parameters about which there is zero knowledge, and here you have a handle for actually calculating them. Now, it's a model that
has come from idealizations like spherical collapse, so it's not going to work at one percent precision, but it does work at ten percent precision. It's very good for understanding the origin of bias, and it's good for quantifying it to about ten percent. The other nice thing about this picture, before I stop, is the following. I said there's this close connection between environment and evolution, but you can do something even nicer, because this random walk doesn't have far to go. Remember that in the evolution picture, how much the barrier has changed with redshift is something to do with the growth factor: this is one growth factor and this is a different growth factor. So you can think of an underdense region as having a different effective cosmology, in which the growth factor grows differently than in the dense region. And you should be able to do that: by Birkhoff's theorem, a dense region should be like a dense universe and an underdense region should be like an underdense universe, and this construction does that exactly. You never had to think about it, but if you work out this difference, convert the difference in delta_c values into a growth factor, and ask what effective cosmology is associated with that growth factor, it is the one you would have calculated directly. Because it's a higher density, Omega_matter is higher; because it's a higher density, that patch is shrinking, and because the patch is shrinking it has a different Hubble constant. The energy density in Lambda is constant in space, the same whether I'm in a dense region or an underdense region, but the Hubble constant is different in the high-density compared to
the low-density universe, and so Omega_Lambda is different. Omega_matter plus Omega_Lambda in a dense region gives a curved, closed universe; Omega_matter plus Omega_Lambda in an underdense region gives an open geometry. So they have their own growth factors, and this construction gets that exactly right: it's built into the description, and it's a powerful approach. So, I'm out of time and should stop there. I haven't shown you plots of how well this kind of thing works; there are some in the slides that are online. I haven't talked at all about recent improvements or the ways this doesn't work; if you're interested, come talk to me. The other experts in the room are Marcello Musso and Aseem Paranjape, who have done a lot of work on this, and Farnik Nikakhtar has done some work on the random walk problem as well. So thank you, and enjoy.