Okay, so I'm just going to continue on. In any case, it's good that I do a little résumé of yesterday. My overall plan was to give a brief introduction, a little overview of the problem of structure formation in cosmology. Here I'm not talking about my own work; on many points it's obviously my own perspective, but really I'm just reviewing very standard material. I didn't put many references in my slides; I'll try to complete them with some references for people who are interested. The last part is something closer to a specific model I've worked on, which I think is interesting, but we'll see how much time is left for that. So the main points yesterday: I told you what the problem of large-scale structure formation is. It can be summarized in this picture. This is the early universe, the young universe, with small density fluctuations; and this is a picture of the universe today, with very large fluctuations, these large structures. That is the problem of large-scale structure formation. And what I tried to explain was that essentially all of the non-trivial part of this problem can be treated in the Newtonian limit: it is then just a purely self-gravitating system. That is a very good approximation, basically because the universe is dominated by matter that is non-relativistic, and because the gravitational potentials remain weak. I spent some time explaining what this Newtonian limit is.
I said it's a little bit subtle, because we're talking about a very specific system: an infinite system. The universe is modeled as an infinite system in this context. All of this was aimed at explaining what the actual equations of motion are for this self-gravitating system. So you have so-called physical coordinates: the position of a particle, so if you have two galaxies, this is the physical distance between them. And then we define a co-moving variable, which just takes out the expansion of the universe. For a matter-dominated universe the expansion is given by the equations I wrote down. Here the mean density of the universe is a function of time: as the universe expands, length scales grow as a, and the mass density goes as 1/a³. And you can show that in a matter-dominated universe this gives you a(t) growing as t to the 2/3. That's not actually going to be important for anything I do, but in case I didn't say it clearly enough yesterday: this function a(t) is just the expansion of the universe, given by a simple function of time. We can use a or t as the time variable, and in practice we'll usually use a instead of t. The equations I wrote down are equations in co-moving coordinates. You have an inertial term, a kind of damping term that comes from the expansion of the universe, and then on the right-hand side you have what I've written as the regularized force: the Newtonian force with the contribution of the mean density subtracted off.
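To fix notation, here is a minimal sketch of the equations just described, in their standard form (with the sum understood with the mean-density contribution subtracted off, the regularization just mentioned):

```latex
% Matter-dominated (Einstein-de Sitter) background:
\bar{\rho}(t) \propto a^{-3}, \qquad a(t) \propto t^{2/3}.
% Equation of motion for particle i in co-moving coordinates
% \mathbf{x} = \mathbf{r}/a(t):
\ddot{\mathbf{x}}_i + 2\,\frac{\dot a}{a}\,\dot{\mathbf{x}}_i
  = -\,\frac{G}{a^{3}} \sum_{j \neq i} m_j\,
    \frac{\mathbf{x}_i - \mathbf{x}_j}{|\mathbf{x}_i - \mathbf{x}_j|^{3}}
    \bigg|_{\mathrm{reg}}
```

The second term on the left is the damping term from the expansion; the right-hand side is the regularized Newtonian force in co-moving coordinates.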
I said that, as for the one-component plasma, what you're doing is solving basically the same equations, just with this extra damping term and with a change of sign: it's exactly the same kind of regularization as is induced in a one-component plasma by the background. In cosmology we're interested, in principle, in the Vlasov-Newton limit, a continuum limit. So the equations we would really like to solve are for the phase-space density, like this. They correspond in an obvious way: you see the damping term and the regularized gravitational force, the regularization coming from the mean density. Okay? So what are the initial conditions? That's the next thing I talked about. The initial conditions for the typical cold dark matter scenario, which is the standard scenario in cosmology, are specified for the density field: rho(x, t_i) = rho_0 [1 + delta(x, t_i)], the density fluctuation at the beginning of the simulation. And this is specified as a realization of a Gaussian stochastic process: it is specified by the variance of the modes in Fourier space, which is basically just the power spectrum, or the structure factor, okay? That completely specifies your initial condition. A cosmologist needs to do some relativistic cosmology, to go beyond Newton, in order to calculate this spectrum. But by the time the problem becomes Newtonian, you just have some input spectrum, okay? So those are your initial conditions.
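To make "a realization of a Gaussian stochastic process specified by its power spectrum" concrete, here is a minimal sketch in one dimension. The power law and the normalization convention are hypothetical stand-ins for a real cosmological input spectrum:

```python
import numpy as np

# Sketch: a realization of Gaussian initial conditions on a 1-d periodic grid,
# completely specified by an input power spectrum (here a hypothetical power
# law P(k) ~ k; normalization conventions are glossed over).  Each Fourier
# mode is an independent Gaussian whose variance is set by P(k).
def gaussian_field(n=256, boxsize=1.0, index=1.0, seed=0):
    rng = np.random.default_rng(seed)
    k = 2 * np.pi * np.fft.rfftfreq(n, d=boxsize / n)
    pk = np.zeros_like(k)
    pk[1:] = k[1:] ** index                       # variance of each mode; no k=0 power
    modes = (rng.normal(size=k.size) + 1j * rng.normal(size=k.size)) * np.sqrt(pk / 2)
    return np.fft.irfft(modes, n=n)

delta = gaussian_field()
print(delta.size, abs(delta.mean()) < 1e-8)  # a zero-mean fluctuation field
```

Because the k = 0 mode is set to zero, the field has zero mean by construction, exactly as required for a density *fluctuation* about the mean density.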
One of the other things I said that's important about these initial conditions is what happens if you coarse-grain the density field on some scale R. Suppose we take a sphere and integrate delta over it, and we calculate the variance of this quantity: the variance of density fluctuations in a sphere. This is a monotonically decreasing function of scale in these initial conditions, and in any cosmological initial condition that will be true, okay? So those are our initial conditions; we have our equations. And I said a little bit about dynamics. The essential starting point is so-called linear theory: when the density fluctuations are small, they are just amplified, delta_k(t) = [a(t)/a(t_i)] delta_k(t_i). The density fluctuations just get amplified, and that amplification obviously amplifies the variance as well. So what happens is that this linear amplification drives you into a regime where linear theory is no longer valid, and the time of non-linearity is going to depend on the scale. At any given time, there is a scale at which the system is going non-linear, and that scale increases in time. That's what's called hierarchical structure formation. What's really non-trivial, to repeat it again, is that it says that when a certain scale goes non-linear depends only on the initial density fluctuation at that scale; it doesn't depend on smaller scales. So structures are going to evolve and go non-linear in a way that's completely determined already by this linear theory.
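The coarse-grained variance just described can be sketched numerically. The input spectrum here is a hypothetical power law with a small-scale cutoff, standing in for a cosmological one; the monotonic decrease with R is the point being checked:

```python
import numpy as np

# Sketch: the variance of the density fluctuation coarse-grained in spheres of
# radius R,  sigma^2(R) = (1/2 pi^2) \int dk k^2 P(k) W^2(kR),  with the usual
# top-hat window W.  P(k) is a hypothetical power law with a cutoff.
def trap(y, x):  # simple trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def sigma2(R, n_index=1.0, kmax=100.0, nk=20000):
    k = np.linspace(1e-4, kmax, nk)
    x = k * R
    W = 3 * (np.sin(x) - x * np.cos(x)) / x**3    # top-hat window in k-space
    Pk = k**n_index * np.exp(-k / (0.5 * kmax))   # cutoff keeps the integral finite
    return trap(k**2 * Pk * W**2, k) / (2 * np.pi**2)

radii = [0.5, 1.0, 2.0, 4.0]
vals = [sigma2(R) for R in radii]
print(all(a > b for a, b in zip(vals, vals[1:])))  # monotonically decreasing in R
```

For this spectrum the variance falls off as roughly R^-(n+3), so larger spheres always have smaller relative fluctuations, exactly the monotonicity stated in the text.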
I didn't demonstrate that, and I don't think anybody has really demonstrated it rigorously, but it's definitely true, as seen in numerical simulations and so on. Okay, for today I'm going to go on and ask: what happens in this strongly non-linear regime? What happens when you go into a regime where you cannot treat the problem in a linear manner, or even perturbatively at all? There is quite a lot of analytical work in the literature on perturbation theory and on improving perturbation theory, but you always go into a regime where all of that breaks down, and what I'm interested in talking about is the really completely non-linear regime. There are, in fact, very few analytical results; we have very few analytical guidelines, even for simple cases. That's one of the limitations in the field: you can do a very big numerical simulation, but you really don't have very much control over it, and you therefore have to be very careful about trying to understand eventual numerical problems, or problems related to the discretization. But there are a few simple guideline ideas about the non-linear regime, and that's the first thing I'm going to talk about. Then I'm going to talk about numerical simulations, not in any great detail; if people want, I can also give more references on that. It's a huge field, and I'm just going to try to outline the qualitative results: how cosmologists describe and characterize the non-linear regime, the kind of results they get, and the way they understand them.
And then I'm going to talk about some open issues, where I'll try to make some connections to problems that are more general in long-range interactions. Okay, so before I go on, does anybody have any burning questions? Okay. So the simplest model you can think of for trying to understand non-linear structure formation beyond the linear regime is an incredibly obvious and simple model. You might think it's so obvious and simple that it can't have anything to do with reality; in fact, it turns out to be a model with tremendous relevance, and it is almost the main guideline people really have in understanding the non-linear regime. So let me explain this model: the so-called spherical collapse model. I don't know who first wrote it down; it's very old. In the spherical collapse model, you consider a single over-density sitting in an otherwise homogeneous universe. The background density is rho = rho_0, and somewhere you have a single over-density, starting at some time. It's a top-hat over-density: a uniform over-density with a certain radius. This is all in co-moving coordinates, so it has a certain co-moving radius. Using Gauss's theorem, one can quite trivially write down the equations of motion and get the evolution. It's really like a slightly over-dense universe inside the average universe: the density is just a little bit higher. So what happens? Initially the sphere, if I consider its physical size, follows the expansion; and then, at some point, it so-called turns around.
It stops moving in physical coordinates, and it collapses down. You can write the solution, an exact solution, in parametric form, which I've written here. The very nice thing about the solution is that you can write the radius of the ball with respect to its asymptotic radius: r_0 is the radius you would get in the asymptotic past, as the density goes back to the mean, so r goes to r_0 as a goes to 0, as you go backward in time. Using mass conservation, rho_0 r_0³ = rho r³, you can write delta, the full density contrast at any time: delta = rho/rho_0 - 1 = (r_0/r)³ - 1. So you can write the full density as a function of theta. But more usefully, you can write it as a function of the linear amplitude. What's the linear amplitude? It's just the initial amplitude delta_0 amplified by linear theory. Remember, the linear amplitude is a monotonically growing function, so you can use it as your time variable if you like, and write the non-linear amplitude, the real density of the collapsing system, as a function of the linear amplitude. And what you find is that when the density is low, you get linear behavior. Maybe it's not very easy to see: this is the non-linear versus the linear density. The red line corresponds to linear theory, and the green line is the actual increase in density of the system. It's going faster than linear: non-linear evolution is more efficient, it leads to a much more efficient collapse of the system. And what happens is that you have a singularity at a finite time.
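For reference, the parametric solution described here can be written in its standard form:

```latex
% Parametric solution for the over-dense sphere (collapse at \theta = 2\pi):
r = A\,(1 - \cos\theta), \qquad t = B\,(\theta - \sin\theta), \qquad A^{3} = G M B^{2}.
% Non-linear density contrast, from mass conservation \rho r^{3} = \rho_0 r_0^{3}:
1 + \delta = \frac{9\,(\theta - \sin\theta)^{2}}{2\,(1 - \cos\theta)^{3}},
% and the corresponding linearly extrapolated amplitude (small-\theta expansion):
\delta_{\mathrm{lin}} = \frac{3}{20}\,\bigl[\,6\,(\theta - \sin\theta)\,\bigr]^{2/3}.
```

Eliminating theta between the last two expressions gives the non-linear density as a function of the linear one, which is exactly the green-versus-red comparison on the slide.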
So in the solution, the system just collapses down to infinite density, and you can say that that singularity will happen at the time when the linear density has a certain amplitude. You can calculate that if the fluctuation just went on evolving linearly, then at the moment of collapse it would have an amplitude of 1.68: when the linearly extrapolated density has reached 1.68, the actual sphere will have collapsed, rather than just having been amplified to 1.68. This very simple model can actually be used to say: once a region inside this universe collapses, what does it do? This is the guideline: in a time which you can estimate from the model, it collapses. Well, in the model it collapses to a singularity. Why? Because you've taken a perfectly uniform density. In reality, your system is going to have some density fluctuations in it, and those regularize the singularity: it's not actually going to collapse to be singular. Just to illustrate, let me go back a second. Consider an isolated cloud: from the time the solution turns around, when it is exactly at rest in physical coordinates, it's already quite dense compared to the background, so we can treat it as a system that's isolated in space. Here we've put in some fluctuations, just Poissonian fluctuations, and we follow what happens; this is a numerical solution of a collapsing cloud. What happens is that instead of collapsing to a singularity, it stops at a finite radius and reaches a quasi-stationary state. This is the physics of an isolated system, in the Vlasov limit in principle at least; whether there are non-Vlasov effects in the numerical simulations is another question.
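The collapse threshold quoted here follows directly from the parametric solution, evaluating the linearly extrapolated contrast at theta = 2 pi:

```python
import math

# The linearly extrapolated density contrast at the moment of collapse
# (theta = 2*pi in the parametric solution):
#   delta_c = (3/20) * (12*pi)**(2/3)
delta_c = (3.0 / 20.0) * (12.0 * math.pi) ** (2.0 / 3.0)
print(delta_c)  # ~ 1.686
```

This is the origin of the number 1.68 used throughout the rest of the lecture.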
So basically you have the collapse and virialization of the system, and it virializes very, very efficiently. This is the virial ratio of the system, and what you see is that with incredible efficiency it virializes and becomes a quasi-stationary state. [In response to a question:] What I actually showed is without the damping term; but to an excellent approximation, in physical coordinates, there is no damping term. You see, the damping term is in the co-moving coordinates, but I can change back to physical coordinates, and when I've got an over-dense system like that, it behaves like an isolated system in those coordinates, without damping: a pure, isolated, self-gravitating system, to a very good approximation, right? And that really is the connection between the cosmological simulations and isolated simulations: such a region behaves more or less like an isolated object, okay? [To another question:] I don't want to get too fixed on that; it was just to show an example. It's not that it's going to look exactly like that: what exactly happens will be very dependent on the initial conditions. To a first approximation: if it were completely uniform, it would be singular; the singularity is regularized by the fluctuations I've put in. Now, what are the right fluctuations to put in, and how will they affect the final state? That's an interesting and relevant discussion, but not one I want to get into for the moment. So, this spherical collapse model: in this really naive model, the matter density actually becomes singular.
What you can assume is that instead of going singular, you get a virialized structure, which seems to be indeed what happens if you put in some fluctuations. If you assume energy conservation, which may or may not be a good assumption for the isolated system, you can do a simple three-line calculation of the density of the collapsing system when it has virialized. It starts from some size, collapses, and when it virializes it will be about a sixth of its initial size, with an over-density of about 200. That number 200 appears a lot in cosmology: if you go and look at the cosmological literature, it's a figure that appears a lot, because of this model and because it seems to be really pertinent. It's supposed to represent the density, compared to the mean density of the universe, of an object that has just virialized. And this all comes from this very, very simple model. Let me just keep to time, and mention some other things you can do analytically. There is the so-called Press-Schechter formalism, introduced by Press and Schechter, which exploits the spherical collapse model in the following way. The spherical collapse model tells you that a spherical region will virialize when its linearly extrapolated density contrast reaches some value, 1.68. So if I take my initial field and consider a spherical region, I can say that region is expected to have collapsed and virialized when the density contrast has reached that value. What Press and Schechter said was: the initial conditions, the power spectrum, allow us to calculate the statistics of the initial density field.
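The "three-line calculation" alluded to gives, in the standard Einstein-de Sitter case, a sketch like the following; the specific numbers below are the textbook result under the stated energy-conservation and virial-theorem assumptions:

```python
import math

# Sketch of the standard estimate: assuming energy conservation and the virial
# theorem, the sphere settles at half its turnaround radius, and its mean
# density relative to the background at the collapse time (Einstein-de Sitter
# case) works out to 18*pi^2 ~ 178, the origin of the ubiquitous
# "over-density of about 200".
delta_vir = 18 * math.pi**2                     # ~ 178, usually rounded to 200
r_ratio = (1 + delta_vir) ** (-1.0 / 3.0)       # final size / initial co-moving size
print(round(delta_vir), round(r_ratio, 2))      # ~ 178 and ~ 1/6
```

Note that (1 + 178)^(-1/3) is about 0.18, which is where the "about a sixth of its initial size" in the text comes from.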
If I throw down a sphere in the initial density field, I have the statistics of the fluctuations, so at any given time I can calculate the probability that the density fluctuation in a certain region has reached this value. And I can just use the initial statistics of the density field to predict how many collapsed structures I will have of any given mass. It's not a very long calculation, and one can use this simple reasoning to construct what's called the halo mass function, the mass function of virialized objects, often written n(M, z): the number density of virialized objects of mass M at redshift z, which is related to the scale factor, as a function of time. So this is an analytical calculation one can do, and these all inform the non-linear regime. Okay, I'll just say one or two words about a few other cases in which you can get some analytical prediction for the non-linear regime. One of them is when you put in a power-law initial condition: instead of a cosmological initial condition, which I said looks something like that, you just put in a simple power spectrum, P(k) proportional to k to the power n, okay? The fact that there's no characteristic scale in the power spectrum allows you, through a simple dimensional-analysis argument, to infer a property called self-similarity, which says that the temporal evolution of the correlations, of the density fluctuations in the system, has to be equivalent to just a rescaling of the length scales. That should be true when you take a simple power law, and it's one analytical prediction you have for a certain case of the non-linear regime.
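The Press-Schechter reasoning just described can be sketched numerically. The power-law form of sigma(M) below is a hypothetical stand-in for the variance computed from a real spectrum; the formula itself, including the famous factor of 2, is the standard PS ansatz, and its built-in consistency property is that all the mass ends up in halos of some mass:

```python
import numpy as np

# Sketch of the Press-Schechter mass function,
#   dn/dM = sqrt(2/pi) (rho0/M^2) nu |dln sigma/dln M| exp(-nu^2/2),
# with nu = delta_c / sigma(M), assuming a hypothetical power-law variance
# sigma(M) = (M/Mstar)^(-alpha).  Check: integrating M * dn/dM over all masses
# should return the mean density rho0 (every mass element lives in some halo).
delta_c, rho0, Mstar, alpha = 1.686, 1.0, 1.0, 0.5

def dndM(M):
    nu = delta_c * (M / Mstar) ** alpha          # nu = delta_c / sigma(M)
    return np.sqrt(2 / np.pi) * (rho0 / M**2) * nu * alpha * np.exp(-nu**2 / 2)

M = np.logspace(-8, 4, 200001)
y = M * dndM(M)
total_mass = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(M)))
print(abs(total_mass - rho0) < 0.01)  # mass is conserved across the mass function
```

The exponential cut-off at large M, where sigma is small compared to delta_c, is what produces the characteristic steep high-mass tail seen in the halo mass functions shown later.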
Another approach you can take: these clumps collapse, small clumps first, big clumps next. After a clump virializes, what does it do? The simplest thing you could say is: maybe it behaves as if it sees nobody else; it doesn't see the rest of the mass distribution. In that case, it's just going to remain virialized, and virialized in physical coordinates, which means that in co-moving coordinates it will shrink, because of the rescaling, okay? That's called the stable clustering hypothesis, and it can give you predictions. But these are very limiting cases; it's clearly not an approximation you expect to be valid for too long. Clearly these clumps are not going to behave as if they're completely independent: you've got clumps inside clumps, and they're going to interact with one another. But those are some of the analytical approaches that can be used; I have a couple of slides on them if you want to look at them. Okay, so: numerical simulations of cosmological structure formation. What people do in practice, and have been doing for the last 40 years at least, is put this on a computer. A cosmologist would ideally like to solve the Vlasov-Poisson equations, but that's a six-dimensional system in three dimensions: it corresponds to solving for the dependence on six variables, and numerically, if you want to get anything useful for cosmology, it implies a huge numerical cost. Why do you need resolution? Because we've seen that structures collapse: you have a coupling of the fluctuations over a range of scales that grows in time.
And if you're going to really follow and resolve those scales, you need incredible resolution in six dimensions. It's not very difficult to convince yourself that a full Vlasov-Newton simulation is pretty well impossible. Although I do want to mention that nowadays, with the growth of computer power, people are really beginning to look at this again: there has been interesting work by a Japanese group, and in particular by the group of Stéphane Colombi in Paris. They've done some really interesting work on isolated systems for the moment, but in principle it could be extended to the cosmological case; I think that's the aim. So some people are beginning to explore that. But what one does in practice is solve the N-body problem. Just as many people here are familiar with: when you simulate a long-range system, you don't solve the Vlasov dynamics, you simulate an N-body system, and you're assuming that you're probing a regime in which the results don't depend on the number of particles. The particles in the N-body simulations of cosmologists are numerical particles, okay? They're discretizations of the mass density, not physical astrophysical objects. In fact their masses, because of limited resolution, are typically enormous: nowadays they can get down to 10 to the 6 or 10 to the 7 solar masses or so for a reasonably high-resolution simulation, but they're still big clumps even on an astrophysical scale, whereas the real matter particle, dark matter, is a microscopic particle with a mass of a GeV or something, okay? Then there's softening, a whole issue in itself that I'll only mention briefly: you soften the potential at small scales.
Essentially, the way I understand it, the best way of understanding it is that you do it for numerical reasons: your numerical cost grows enormously if you don't soften the potential. [In response to questions:] You see, a(t) is just a fixed function, the fixed background expansion. It's not coupled, that's a really good point; it enters only through the mean density, and once you have the mean density at the initial time, it's a fixed function. In principle you could ask questions about that back-reaction, but at sufficiently large scales the universe remains uniform: at large scales you're just taking out the background. So a(t) is an input function from the cosmology. The mass of the particles? The mass there is the mass of the N-body particle; it's not a physical mass. If you do a cosmological simulation it has some nominal physical value, I don't know, it could be 10 to the 8 solar masses or something; it's a question of resolution, it's fixed by the numerical resolution. And, as I said, they're not physical particles. And clearly you can't resolve structures that contain less than some minimum number of particles; in fact you have to ask the question: how many particles do I need to be able to resolve, say, a galaxy? With that few particles, no, you can't really resolve a galaxy. These are purely self-gravitating particles.
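The softening just mentioned is, in many cosmological codes, of the Plummer type or a spline variant of it; here is a minimal sketch of the Plummer form (the parameter values are illustrative, not taken from any particular code):

```python
import numpy as np

# Sketch of Plummer-type force softening, as commonly used in cosmological
# N-body codes: below a softening length eps the 1/r^2 attraction is smoothed
# out, so close two-body encounters no longer force tiny time steps.
# eps and the particle mass m are numerical parameters, not physical ones.
def softened_force(r, G=1.0, m=1.0, eps=0.01):
    return G * m * r / (r**2 + eps**2) ** 1.5

r = np.array([1e-4, 1e-2, 1.0])
f = softened_force(r)
# far outside eps the force is Newtonian; deep inside eps it goes to zero
# linearly instead of diverging as 1/r^2
print(f[0] < f[1], abs(f[2] - 1.0) < 1e-2)
```

The design point is exactly the one made in the text: the softening is there for numerical cost, and anything that depends on eps (like anything that depends on the particle mass) is a resolution effect, not physics.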
For the moment, to be clear, I haven't made galaxies, and in fact I'm not going to make galaxies; I'll say a little bit about that, but this is just gravitating matter, okay? There's nothing else. And how I distribute the mass doesn't really matter: it's gravity, so nothing physical should depend on the mass of the particles. Anything that depends on the mass of the particle is a resolution effect, because it depends on the number of particles, right? Is that okay? So, okay, initial conditions. We want to represent a continuous density field as well as possible with N points, and the way people have come up with doing this is by taking a grid, which represents the uniform universe, and displacing particles off the grid. There's a simple algorithm which, for a given density field, tells you how to do the displacements: you can invert to get the density field you want, over some range of scales, and you put in the velocities given by the growing linear mode, so the velocity is just a function of position, related to the local gravitational field. So that's how you set up the initial conditions. Now, you have to imagine this is an infinite universe, right? What we actually do is simulate an infinite number of copies of the box, and the force is the force calculated in that infinite universe. The size of the box obviously should not matter; that's numerical as well. Everything we see should not depend on the size of the box, okay? And of course one can check that, and it is true.
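The grid-plus-displacements algorithm described here is, in standard practice, the Zel'dovich prescription; a minimal one-dimensional sketch (single small-amplitude mode, illustrative parameters) looks like this:

```python
import numpy as np

# Sketch of grid + displacement initial conditions in 1-d (the Zel'dovich
# prescription): particles start on a uniform grid and are displaced by a
# field psi derived from the desired density fluctuation, with velocities
# proportional to the same displacement (the growing linear mode).
n, boxsize = 64, 1.0
q = (np.arange(n) + 0.5) * boxsize / n        # unperturbed grid positions
k1 = 2 * np.pi / boxsize
delta0 = 0.01                                  # one small-amplitude mode
psi = -(delta0 / k1) * np.sin(k1 * q)          # delta = -d(psi)/dq to linear order
x = q + psi                                    # displaced positions
v = psi                                        # growing mode: v proportional to psi
# check: to linear order the resulting density contrast is delta0 * cos(k1*q)
delta_est = -np.gradient(psi, q)
print(np.allclose(delta_est, delta0 * np.cos(k1 * q), atol=1e-3))
```

Inverting this relation, from a target delta field to the displacement field psi, is the "simple algorithm" mentioned in the text; in three dimensions it is done mode by mode in Fourier space.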
So, the size of simulations: this is from 2005, from one of the big simulation groups, and this was a map of the state of the art up to 2010; people were doing 10 billion particles, and I've heard recently of people aiming for trillion-particle simulations. The actual simulations I'm showing here are small simulations. I showed you this picture yesterday: that's an initial condition with a certain power-law power spectrum, and what you see is the growth, the development of these structures: smaller structures, and then larger structures. As for the dependence on the size of the box: so long as those structures are relatively small compared to the size of the box, the size of the box doesn't matter. If the box were 10 times larger, this region would look the same; none of this depends on the size of the box. So this is a completely out-of-equilibrium problem: you have this continually developing structure, and it's got absolutely nothing to do with equilibrium. It's supposed to be representative of a class of Vlasov dynamics where everything clearly depends strongly on the initial conditions. These pictures are in co-moving coordinates; if you wanted to convert to physical coordinates, you would just multiply by the factor a(t). All the information about the background density goes into this a(t), which tells you how to rescale to physical coordinates. On top of that, if you didn't displace the lattice, everything would stay the same: in co-moving coordinates the particles would just follow the Hubble flow, okay? So, okay, that's just to zoom in on a smaller region; you can see that in a smaller region you have structures.
So structures of various different sizes form. Sorry, that's a bit hard, maybe impossible, to see: that's just for a different power spectrum. What I wanted to illustrate is that this one is closer to a cosmological power spectrum: you get more large-scale structure, and you can see this kind of filamentary structure. That's due to the fact that when you go to n = -2 there's more long-wavelength power, and that leads to a more visible filamentary structure. This is an example, going back 10 years, of a big paper where they did what were at the time the very largest simulations. What you see at the bottom of this picture is the final configuration at one gigaparsec, a huge scale: it's basically still uniform in density, with only small initial fluctuations, and then they show a zoom-in, so you're seeing this hierarchical structure that has formed: the smallest structures will in principle have formed first, then bigger ones, then bigger ones, while at the biggest scale you still just have small density fluctuations. The colors here are density, okay? Yellow is very dense, red is less dense, and so on. Okay, I won't bother with that slide; from left to right it's just a schematic picture, in a simple simulation, of the development of the correlation function, which is also related to the development of structure at larger and larger scales; it monotonically increases, and you can calculate the scale of non-linearity. To go back to the validity of linear theory: this is from that big paper, for a cold dark matter cosmology, and this is the power spectrum, or rather k³ times P(k), okay?
That combination is what's called the dimensionless power spectrum. The gray lines correspond to linear amplification. This curve here is early times, more or less just after the beginning of the simulation. If the fluctuations were amplified linearly, they would follow the gray lines; what you see instead is a deviation from the gray line, that is, nonlinearity, developing at smaller and smaller k, which corresponds to larger and larger scales. So linear theory remains valid at small k, i.e. large scales, while nonlinearity develops at small scales. To summarize, at the risk of being repetitive: linear theory describes the evolution at sufficiently large scales; the scale of nonlinearity grows monotonically; and in the nonlinear regime the flow of power is from large to small scales. You do see this collapse going on, in which fluctuations go from large scales to smaller scales, as predicted by the spherical collapse model: you start with something at one scale and end up with a structure at a much smaller scale. So how do we describe this nonlinear regime? I'm not going to try to be complete; I'm just going to try to give you a picture of the way I think most cosmologists think about it. One of the things people noticed, or checked, early on in simulations was to go and take these clumps.
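The quantity plotted, and the linear amplification the gray lines represent, can be sketched as follows. Function names are mine; the 2 pi squared normalization is the standard convention for the dimensionless spectrum, and the D(a) proportional to a growth law holds in the matter-dominated case discussed above.

```python
import math

def dimensionless_power(k, P):
    """Dimensionless power spectrum Delta^2(k) = k^3 P(k) / (2 pi^2):
    the contribution to the density variance per logarithmic interval in k."""
    return k**3 * P / (2.0 * math.pi**2)

def linear_amplified_power(P_initial, a, a_initial):
    """Linear theory in a matter-dominated universe: the growing mode
    scales as D(a) ~ a, so the power at every k grows as (a / a_i)^2.
    Deviation of the measured spectrum from this is the signature of
    nonlinearity, first visible at large k (small scales)."""
    return (a / a_initial) ** 2 * P_initial
```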
You can identify clumps by eye; you isolate them using the criterion, which comes from the spherical collapse model, that their mean density is about 200 times the mean. You take out the biggest clumps in the simulation, look at their distribution, and find that it is very crudely fit by the Press-Schechter formula; you can then improve that model and get reasonably good fits in a somewhat phenomenological manner. What's plotted is the function n(M), essentially the number of clumps as a function of mass, at different times: as time goes on you get bigger clumps, with a certain distribution. The blue dotted lines are the very simple Press-Schechter formalism that comes from the spherical collapse model, and the other curve is an improved, more phenomenological model, basically fitted to simulations but which can also be partly justified on theoretical grounds. So how do people describe the nonlinear regime? I said already that you have this distribution of masses. Now, if you've seen talks in cosmology, you've heard people talk about dark matter halos. What are these halos? They are basically the clumps you see in simulations, and physically they are supposed to be approximately virialized finite systems. The idea, again based on the spherical collapse model, is that a region has undergone collapse and has virialized to some good approximation. What people found in simulations is that the smaller-scale structure appears to be reasonably efficiently wiped out, so that the biggest clumps are relatively smooth objects; but these are numerical results.
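As a sketch of the Press-Schechter idea just mentioned: the standard multiplicity function is f(sigma) = sqrt(2/pi) (delta_c / sigma) exp(-delta_c^2 / 2 sigma^2), where sigma(M) is the rms linear fluctuation on mass scale M and delta_c is approximately 1.686, the linear collapse threshold from the spherical collapse model. The function name is mine.

```python
import math

DELTA_C = 1.686  # linear-theory collapse threshold from the spherical collapse model

def press_schechter_multiplicity(sigma, delta_c=DELTA_C):
    """Press-Schechter multiplicity function f(sigma): the fraction of
    mass in collapsed halos per logarithmic interval of sigma.  A halo
    mass M enters only through sigma(M), the rms linear fluctuation
    smoothed on the scale enclosing mass M."""
    nu = delta_c / sigma
    return math.sqrt(2.0 / math.pi) * nu * math.exp(-0.5 * nu**2)
```

The exponential cutoff at small sigma (large mass) is what produces the steep high-mass end of the mass function seen in the figure.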
So you have hierarchical formation of clumps, but there is some sort of merging process by which the big clumps that collapse do actually smooth out and form these virialized objects. What people then do is take these clumps out of simulations. They have an algorithm, which I won't describe in detail, a simple algorithm that selects the clumps; you could even do it by eye, and initially people did, but now there are algorithms. They take out the clumps, find the minimum of the gravitational potential, and study the density profiles and properties of those clumps. And they find apparently universal properties: the claim is that the profiles are independent of cosmology and initial conditions. Again, these are all numerical results. So different simulations select clumps and study their density profiles and various other properties, and one of the most commonly used profiles is the so-called NFW profile, after Navarro, Frenk and White. Their paper, apparently one of the most cited papers in physics, is a numerical fit to cosmological simulations. What does the profile say? The density goes roughly as r to the minus one at small radii and r to the minus three at large radii. Since then these fits have been refined; there are a lot of numerical issues, and other profiles have been fitted, but this nevertheless remains the reference profile people use. It is purely numerical: for the moment there is, in my view, no convincing physical explanation of where it really comes from.
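The NFW profile just described can be written down directly. This is a minimal sketch; the parameter names rho_s and r_s follow the usual convention for the characteristic density and scale radius.

```python
def nfw_density(r, rho_s, r_s):
    """NFW profile rho(r) = rho_s / [(r/r_s) (1 + r/r_s)^2]:
    the log-slope is ~ -1 well inside the scale radius r_s and
    ~ -3 well outside it, bending smoothly at r ~ r_s."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)
```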
These halos are characterized by two parameters: the mass of the object and what's called its concentration. The concentration is the ratio of its size, what was called its virial radius, to the radius at which the profile bends. So you have a density profile that in log-log looks like r to the minus one and then r to the minus three; there is a characteristic radius, the scale radius, where it bends, and a cutoff where you truncate the object at a finite size, and the ratio of the two is called the concentration. People then devote enormous numerical resources to studying how the concentration and mass of these objects depend on cosmology and what the relation is between concentration and mass. Okay, so let me go on to halo models, the analytical tool people use to describe the nonlinear regime. The idea is that you describe the nonlinear density field as a superposition, a sum, of these halos: you model it as a collection of halos. It's an approximate description, and then you can go and calculate things. The ingredients are these: you put in a profile; you put in the mass function of the halos, which you measure from simulations and fit with some simple functional form; and in practice people usually also input what they call a mass-concentration relation, an empirical relation between the concentration and the mass of these objects, since to a first approximation the concentration is a deterministic function of the mass.
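For an NFW profile, truncating at the virial radius r_vir = c * r_s gives a closed-form enclosed mass, which is how the two halo parameters (mass, concentration) map onto the two profile parameters (rho_s, r_s). A sketch of this standard relation:

```python
import math

def nfw_mass(c, rho_s, r_s):
    """Mass of an NFW halo truncated at r_vir = c * r_s:
    M = 4 pi rho_s r_s^3 [ln(1 + c) - c / (1 + c)].
    Given a mass-concentration relation c(M), this fixes
    rho_s and r_s, so (M, c) fully specifies the halo."""
    return 4.0 * math.pi * rho_s * r_s**3 * (math.log(1.0 + c) - c / (1.0 + c))
```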
Okay, and then you also need the correlation properties of these halos. Then, just to give an idea of what you do: you take your profile, with u(r|M) the mass profile normalized to unity, and you can calculate a two-point correlation function of the density. Different terms appear. There is the diagonal, so-called one-halo term, where the two points you select lie in the same halo, and the off-diagonal two-halo term, where the points lie in different halos. So the two-point correlation, for example, can be written as a function of known objects: the halo mass function and the halo profiles. The one-halo part describes the strongly nonlinear regime, and the two-halo term describes the larger scales. So this is basically a purely phenomenological model. It comes from simulations, from the observation that simulations can be approximately described as a collection of clumps. You can use that to construct a model with some parameters in it and measure those parameters from simulations. Why would you want to do that? Because you can then use these semi-analytical models, analytical expressions with a few unknown parameters, to determine observational quantities.
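The one-halo term just described can be sketched as a sum over mass bins. This is a minimal sketch, not the lecture's formula verbatim: n_of_M and u_profile stand in for the measured mass function and the Fourier transform of the normalized profile (u tends to 1 as k tends to 0), and the unit-width mass binning is purely illustrative of the integral over the mass function.

```python
def one_halo_power(k, masses, n_of_M, u_profile, rho_mean):
    """One-halo term of the halo-model power spectrum,
    P_1h(k) = sum over mass bins of n(M) (M / rho_mean)^2 |u(k|M)|^2,
    i.e. pairs of points inside the same halo.  This piece dominates
    at small scales (large k); the two-halo term, which also needs the
    halo correlations, takes over at large scales."""
    return sum(n_of_M(M) * (M / rho_mean) ** 2 * u_profile(k, M) ** 2
               for M in masses)
```

For instance, with a trivial mass function and point-like halos (u = 1), two halos of mass 1 and 2 in mean density 1 give P_1h = 1 + 4 = 5.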
If you want to work out how the matter field is going to lens objects that are far away, then with an analytical expression for the correlation properties of the density you can actually write something down. So it basically allows you to summarize and describe phenomenologically the nonlinear structure formation. As for galaxies: you can then use these models, again in a phenomenological way. The idea is that these halos are the sites for galaxy formation: they're overdense, there's a gravitational potential well, and that's where galaxies are going to form. The way this is constructed is that you assume a certain probability of having a galaxy of a certain type inside a halo, and then you do some statistics and work out the statistical properties of the galaxies. You put in lots of parameters and basically fit to the data. So it's a very phenomenological approach, but that's basically what is done at the moment for the nonlinear regime. Okay, I'm not going to get to my 1D models, so let me finish with some comments on open issues. How is the nonlinear clustering best characterized? One can use N-point correlation functions, but maybe there are other ways to describe these correlated structures that haven't been used. The dominant tool at the moment is this phenomenological one, but I don't think it's a closed issue as to what the right way is. Is the halo model even the right way to describe the matter density? Is it sufficiently accurate? And there is the issue of how the nonlinear clustering depends on the initial conditions and the cosmology.
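The "probability of having a galaxy inside a halo" idea can be sketched with a toy occupation function. All the parameter values here (M_min, M_1, alpha) are hypothetical placeholders, not fitted numbers; in practice such parameters are the ones fitted to galaxy data.

```python
def mean_galaxies_per_halo(M, M_min=1e12, M_1=1e13, alpha=1.0):
    """Toy halo-occupation sketch (hypothetical parameters): halos above
    a threshold mass M_min host one central galaxy, plus a power-law
    mean number of satellites (M / M_1)^alpha.  Convolving this with
    the halo model gives the galaxy correlation statistics."""
    if M < M_min:
        return 0.0
    return 1.0 + (M / M_1) ** alpha
```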
That's one of the major questions cosmologists are interested in: if I can measure, through galaxy correlations, the underlying matter density fluctuations, can I infer information about the cosmological initial conditions and the history of the expansion, basically the function a(t)? You can put in different functions a(t), and that changes what happens. Or is the inner structure of these halos something universal that has nothing to do with cosmology, something to do just with gravity? There is a whole literature of people trying to use statistical mechanics approaches to understand the structure of these halos; is it just a matter of statistical mechanics and gravity? Those questions are, I would say, completely open. I want to finish by commenting on one more point. Halos have the problem that they are rather poorly defined objects, and the approximation that they are smooth is actually problematic. Increasing resolution in simulations has revealed more and more substructure, and it's not really clear whether it's reasonable to assume they are really smooth structures, or whether that is just a limitation of the simulations. So it's unclear what this universality means. Somebody asked a question yesterday about this: I showed you the simulation of a lattice, and you could see the lattice structure in the initial condition. Clearly that's not physical, and nothing should depend on the number of particles. So the resolution issue in N-body simulations is: how accurately does this finite-N representation reproduce the underlying continuum physical model? What are the finite-N effects?
If you take a finite, long-range system, even a gravitational system, and let it evolve into a virialized equilibrium, there are going to be finite-N effects, and they are believed to act on time scales that grow with N, i.e. on very long time scales. For a finite system the problem is really just: up to what time can I trust my simulation? Up to what time will it represent well the phase-space density and its evolution? In cosmological N-body simulations it's more complicated: the real question is, at a given time, above what scale can I trust my results? So it's a question of both scale and time, right from the beginning of the simulation. You would really like to know the resolution as a function of time. What do I mean by resolution? The scale above which the simulation approximates well the continuum model. If you go to sufficiently large scales, you have a sufficient number of particles at any such scale, and it's reasonable to expect convergence to the continuum model. But the practical question is: what is the resolution scale? And it's very unclear; I don't think it's a resolved issue. You actually introduce two non-physical parameters: clearly there is the mean interparticle distance, and you also introduce the softening in the force. These are numerical length scales introduced into the simulation, and you really want to work out how the resolution depends on those scales and on the model. Maybe I don't really have time, so just to finish:
The reason the answer is clearly not settled in cosmological simulations is that, in practice, one usually assumes the resolution scale at the end of the simulation to be even smaller than the initial interparticle distance. You might say: just take enough particles, take your smoothing length sufficiently large, and you should be okay. We are not in that regime at all, not even close. There are qualitative arguments that justify making much more optimistic assumptions, but the assumptions that are made are very optimistic. So I would say this is a very open question, and I think there is an interesting literature on it. Maybe I'll just finish by showing something that Bruno actually did, to illustrate the existence of these finite-N effects. These are four configurations: a simple cubic lattice, a body-centered cubic lattice, a face-centered cubic lattice, and a disordered configuration. On top of each we've put exactly the same density fluctuation: the same cosmological model with a slightly different discretization, but the same number of particles. When we evolve them, this is a simple test: nothing physical should be different. There you evolve to a = 8, and at a distance, with the light in the room, they probably look absolutely identical; you need to go closer to start seeing differences, and even then only at very small scales. Remember, the lattice spacing was barely visible in the initial condition.
Going further forward, they still look very similar at large scales: the result there is not sensitive to the small-scale discretization. But if you zoom in, now on about four or five lattice spacings, you can see quite visible differences at that scale. This is the same realization of the density field, exactly the same initial cosmological condition; it's not even a different realization. And the differences are at scales where, in cosmological simulations, the assumption often made is that the resolution extends well below, so that all the clustering you can measure there is unaffected by the discretization. Anyway, this is just to give you an illustration: this is a tiny thing you change, and you can see that there are differences. Obviously this is at fixed particle number, so it doesn't address how the results depend on the number of particles, but there is an open question about resolution. Why is it so important? It may be that halos cannot be well described as smooth objects, and that there is some sort of hierarchical structure all the way down. It's really not clear; this is a more personal point of view, but I think there is an open question about the nature of the clustering in the nonlinear regime. And I think one of the interesting things happening is that people looking directly at the Vlasov-Poisson equations may, in time, help to resolve some of these questions. Okay, so I'll stop; I didn't do any 1D, but too bad.