[The session chair introduces the lecture; this part of the recording is garbled. The recoverable gist: having covered the more analytic descriptions so far, the next lecture, by Stefano Borgani, is on N-body and hydrodynamical simulations. Some fiddling with the pointer follows.] So there is a pointer, right? Here. OK. Good. OK. Sorry for the glitch. Good afternoon everybody. My name is Stefano Borgani. I'm here in Trieste, at the Department of Physics of the University and at the Observatory. So my duty today will be to entertain you with a discussion on N-body and hydrodynamical simulations. OK. I'm not sure whether in the hour and a half that we have I will be able to go through all the technical details. [Several sentences are garbled here; the speaker outlines the plan: first the N-body techniques, and then the hydrodynamical methods,] which are of course Lagrangian and Eulerian, and we will try to see the virtues and the limitations of these two methods. And then I will try at least to cover something about the applications to the formation of cosmic structures. One thing is about including the astrophysics of galaxy formation, but eventually I am afraid I won't have time to cover this; the other is cosmology with galaxy clusters and how you can use simulations to help in this process. This is one of several examples in which you can use simulations to extract cosmological information. [Garbled passage; the gist: you can test the standard cosmological model against alternatives such as modifications of gravity, but to extract this cosmological information you need to keep the astrophysical processes under control, and these become more and more important as you go down to the scales where galaxies form. We cannot describe from first principles all the microscopic processes, star formation, supernovae, AGN feedback and so on;] they always happen in our simulations below the resolved scales. So this should be kept in mind. OK, let's get started with N-body simulations. A couple of papers here that you can use for reference. This is actually a very nice review by Volker Springel; it's a pretty extended review that is a very nice reading. [The rest of this passage is garbled; it mentions the references and the transition to N-body codes in the cosmological context.]
[The beginning of this passage is garbled. The recoverable gist: in an N-body simulation you sample the underlying, in principle continuous, matter distribution with a finite number of particles, so it is essentially a Monte Carlo sampling of the phase-space distribution with N particles, and each particle] obeys the equations of motion. Therefore, if you can compute the acceleration, you can compute the velocity of the particle, and use these two equations to update the velocity and to update the position of the particle. Alright, so how to do it? The first N-body code that you can write is a simple do loop. It's two nested do loops, as simple as this. What you do is to compute, for each particle, the contribution to the force acting on it from all the other particles. [Garbled passage; the gist, as far as it can be recovered: the interaction is softened at small separations with a parameter epsilon, so that the potential is not exactly Newtonian below the softening scale. There are two reasons for this; the first is that these particles are not physical particles but only a sampling of the underlying smooth density field, so you want to suppress spurious strong two-body encounters.] The second thing is that if epsilon goes to zero, when the separation between a pair of particles goes to zero, then we have an infinite acceleration. And therefore, if we want to keep accuracy fixed in the integration of the orbits, we are forced to choose infinitely small time stepping with which to advance our system, and therefore the computational cost diverges. But also this method here, even regularizing with this epsilon here, which is usually chosen to be, say, between one tenth and one fifth of the mean interparticle separation, still needs something like N-squared operations. So this method here, which is the first one introduced by Sverre Aarseth back in the 60s, is extremely expensive. So there are two possible solutions to make this feasible. The first one is to resort to special-purpose hardware. So what you can do is to devise or construct or instruct some processor to be very stupid and be able to do just one operation, which is one over r squared, or one over r if you're interested in the potential. This is what the GPUs do, the standard GPUs that you guys use when you play with some video game. This is one possibility. The other possibility is to resort to faster integration methods that provide approximations to the exact computation provided by the N-squared summation. Of course, this is always an approximation. So whenever you have a faster method to integrate an N-body simulation, you must always be aware that you are introducing approximations. So you had better keep control of the uncertainties introduced by the approximation.
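Before moving on, to make the brute-force scheme concrete, here is a minimal sketch of the two nested do loops with a softened interaction (this is an illustration, not code from the lecture; the Plummer-like form of the softening and the function name are assumptions, one common choice among several):

```python
import numpy as np

def direct_summation_accel(pos, mass, eps, G=1.0):
    """Naive O(N^2) gravitational acceleration with Plummer-type softening.

    pos  : (N, 3) array of particle positions
    mass : (N,) array of particle masses
    eps  : softening length, regularizing close encounters
    """
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):                      # outer loop: target particle
        for j in range(n):                  # inner loop: all other particles
            if i == j:
                continue
            dx = pos[j] - pos[i]            # separation vector
            r2 = np.dot(dx, dx) + eps**2    # softened squared distance
            acc[i] += G * mass[j] * dx / r2**1.5
    return acc
```

The cost is manifestly of order N squared, which is what motivates the faster, approximate methods discussed next.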
So let's start with the first of such approximations, which is the particle-mesh code. In the particle-mesh code, what you do basically is: you have the set of your N particles and you compute the force on the grid points of a mesh, and then you use the force computed on this mesh to displace the particles. So at first sight you may think that this is quite a complex procedure. It actually turns out to be quite simple and, especially, extremely fast. So let's try to see which are the steps to write a particle-mesh code. The first thing, suppose that you start from a particle configuration: the first thing that you do is to assign the charge, the mass, or if you want the density, on the mesh. So you have particles with mass m_i located at the positions x_i and you want to compute the density field at the positions of the mesh x_m. So basically what you do is you sum the contribution of all particles, each weighted by some kernel function, which is an interpolation function. This is the first step. The second step is to solve the Poisson equation in Fourier space. So basically what you do is to write your potential in terms of the integral, the convolution, say, between your density field and the Green's function of the Laplacian. So what you can conveniently do is to solve this in Fourier space, and therefore you just take the Fourier transform of rho, of your density field, and of the Green's function, which is fixed, so you can compute it once at the beginning of your simulation and it remains fixed forever. And then you compute the Fourier transform of your potential and then you transform back to compute the potential in configuration space. This is the second step. Once you have done this, on the grid you compute the gradient of the potential and therefore you compute the force. And this is a simple finite-difference operation: you apply a finite-difference operator to your potential and you compute the components of the force on the grid. This is the third step. The fourth step is to interpolate back the force that you computed on the grid points to the force at the particle positions. And what you do is simply to apply the same convolution method that we used for assigning the charge on the mesh, using exactly the same weighting function, otherwise you don't conserve momentum and energy. So at this point we have the force computed at the particle positions and we can at this point advance the particles, update velocities and positions.
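As an illustration of the first four steps, here is a minimal sketch (again, not the lecture's code): it uses the simplest, nearest-grid-point assignment, whereas production codes typically use cloud-in-cell or higher-order kernels, and it assumes a periodic box:

```python
import numpy as np

def pm_accelerations(pos, mass, boxsize, ngrid, G=1.0):
    """Minimal particle-mesh force computation (steps 1-4 described above),
    using nearest-grid-point mass assignment for simplicity."""
    h = boxsize / ngrid
    # 1) mass assignment: deposit each particle on its nearest grid cell
    idx = np.floor(pos / h).astype(int) % ngrid
    rho = np.zeros((ngrid,) * 3)
    np.add.at(rho, (idx[:, 0], idx[:, 1], idx[:, 2]), mass)
    rho /= h**3
    # 2) Poisson equation in Fourier space: phi_k = -4 pi G rho_k / k^2
    k = 2.0 * np.pi * np.fft.fftfreq(ngrid, d=h)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                       # avoid division by zero at k = 0
    phi_k = -4.0 * np.pi * G * np.fft.fftn(rho) / k2
    phi_k[0, 0, 0] = 0.0                    # remove the mean (k = 0) mode
    phi = np.real(np.fft.ifftn(phi_k))
    # 3) force on the grid: finite-difference gradient of the potential
    acc_grid = [-g for g in np.gradient(phi, h)]
    # 4) interpolate back to the particles with the same (NGP) scheme
    return np.stack([a[idx[:, 0], idx[:, 1], idx[:, 2]] for a in acc_grid], axis=1)
```

Here the continuum Green's function is used for brevity; real codes often match it to the finite-difference operator and deconvolve the assignment kernel.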
So the fifth step is to update particle positions and velocities. Basically what you need to do is to use a scheme to integrate the simple second law of dynamics, acceleration equals force. This is apparently simple; it's actually tricky. This is one of the most delicate parts of the game, because you need to be accurate in the way in which you update things. First of all, you need to choose a time step. The time step should be small enough that you are accurate, that you make a small error in updating forces, positions and velocities. At the same time you don't want to overdo it, otherwise you are burning CPU time without any reason. So there are many different criteria for advancing particles here, using a scheme which is called leapfrog, in which you can either kick-drift-kick or drift-kick-drift. What does it mean? It means that you can choose, from the position assigned at x_n, which is the nth time step, to compute the velocity at n plus one-half, so half a time step advanced in time, computing the force at the position of the particle at step n and using the velocity at time step n. Fair enough. Then what you do, you use this velocity to advance the position of the particle to the (n+1)th time step. At this point you update the velocity to the same time, to make particle positions and velocities aligned in the time sequence. So you understand why this is called leapfrog, because you make a jump to the (n+1)th time step. The alternative is to do just the opposite: first drift the particle position by half a time step, then kick with the computation of the velocity, and then drift again, bringing the particle position to the (n+1)th time step. The reason why this is very fast is because when you go forth and back in Fourier space, you can do this with the fast Fourier transform. The fast Fourier transform is computationally very efficient, you can find parallelized versions, and so on and so forth. In this case the computational cost doesn't go like N squared, but goes like the number of particles plus the number of grid points times the logarithm of the number of grid points, which is faster. Of course there is a price to pay. [Question from the audience.] No, there are different numerical schemes of interpolation. Basically it depends on the trade-off between the cost of the interpolation operation and the accuracy of the interpolation operation. It can be a linear interpolation, it can be quadratic, it can be particle-in-cell, cloud-in-cell, and so on and so forth. This is not adaptive, usually, in N-body simulations, because if you make it adaptive then you have to modify also the Poisson equation. In a standard particle-mesh simulation the grid is fixed. Usually the interpolation is done on a few neighboring cells: when you want to compute the density at one grid point, you just use the particles in the 27 surrounding cells, more or less; it depends a little bit on the accuracy of the scheme of interpolation. It is not driven by any physical consideration except conservation of quantities and accuracy. As I said, as I was about to say, the price to pay for this scheme here is that the resolution is now bound to the grid size. You can't resolve things on scales smaller than the grid size. Just a second. Of course you can do an adaptive mesh, but this is going to be too complicated to cover here. There are a number of codes, of course, doing adaptive mesh refinement, in which you refine the grid locally according to the density. Yes? Right. Sure. Suppose that you want to compute the updated position and you compute the position from the old position plus velocity times time step. Which velocity? Is it the velocity before or after the kick? One first approximation is to use the velocity at half of the time step, just because you think you give a better approximation of what's going on in between. This makes the integration more symplectic, if you want; it's more time reversible, in a sense. Yes, of course, but then you increase your computational cost. There is always a trade-off between the accuracy that you want to reach and the computational cost. If you use a finer time stepping, of course you increase your computational cost, and you are still not guaranteed, if you don't use the half time step, that your system is time reversible. Yes? That would increase your computational cost by a factor of two.
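For reference, a minimal kick-drift-kick step as described above might look as follows (an illustrative sketch; `accel_func` is a placeholder standing for whatever force computation, PM or tree, is being used):

```python
def leapfrog_kdk(pos, vel, accel_func, dt):
    """One kick-drift-kick leapfrog step.

    accel_func(pos) returns the acceleration at the given positions,
    e.g. from the PM force computation sketched above.
    """
    acc = accel_func(pos)
    vel_half = vel + 0.5 * dt * acc          # kick: velocity to the half step
    pos_new = pos + dt * vel_half            # drift: position to the full step
    acc_new = accel_func(pos_new)
    vel_new = vel_half + 0.5 * dt * acc_new  # kick: velocity to the full step
    return pos_new, vel_new
```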
So the rule of the game is to reach a given level of accuracy by spending the least possible CPU time. So suppose that here I say that delta t is what I can afford with my computer. Then how can I increase the accuracy? Where accuracy doesn't only mean how accurately I'm integrating the orbit, but also making considerations like time reversibility. This system is Hamiltonian, after all. All right. So another alternative is based on tree codes, in which you basically couple the virtues of an average, mean-field approximation, like the particle mesh, with a direct integration. What you do in this tree code, basically, is that if you have a group of particles which is far away from your target particle here, you can be tempted to treat this group of particles as just one macro-particle, whose position is the center of mass of this group of particles here, and the distance is the distance from that center of mass. Of course, the accuracy here is related to the angle or, if you prefer, the size of this group here to be considered as a single macro-particle. You understand that if I put this group of particles closer to my target particle, then I will increase the error that I make in the computation of the force, more and more, if the size of this structure here is kept constant. So basically what this tree code does is to fix a precision regulated by the value of the critical opening angle. So the opening angle is this one. So if the size s of the group is larger than the distance times the critical opening angle, then I should increase the accuracy: I cannot treat this group of particles as just one macro-particle. In this case I have N log N operations, but the problem here is that there is a fairly large prefactor in front of N log N. And the second possible limitation is that in this case what I need to do is to construct and store the structure of a fairly large hierarchical tree. And I will make it clear to you what this means. So suppose that you start with a particle distribution. The first thing that you need to do in this tree code is to make a recursive tree construction, in which you divide your computational box, in this case in four parts; in three dimensions it would be an oct-tree, so every time you split your computational domain in eight parts, and you keep going and going until you have in each of these small squares either one or zero particles. So you end up with this, and these small squares in which you have just one particle you call the leaves of the tree. Then you have the branches of the tree, this one, this one, this one, and this is the main branch of the tree. All right. So once you have done this recursive operation, what you do for each target particle is decide whether a group of particles can be treated as just one, so you are happy to stop at a branch of the tree, or whether you need to go on, opening the tree down to the leaves. This is what you need to decide. Let me make it clear with an example. So again, if the angle subtended by the group, its size over the distance, is larger than the critical opening angle, then I need to open the node and therefore go downward in the tree to smaller branches; otherwise I stop and I consider the group of particles as just one single macro-particle. All right.
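Here is a minimal sketch of the tree walk with the opening criterion just described (illustrative only: the octree construction is omitted, the node attributes `mass`, `com`, `size`, `children` are assumptions, and only the monopole of each node is used, while real codes often include higher multipoles):

```python
import numpy as np

def tree_accel(node, target, theta, eps, G=1.0):
    """Walk a prebuilt octree and accumulate the acceleration on `target`.

    Each node is assumed to carry: node.mass (total mass), node.com (centre of
    mass), node.size (side length of its cube), node.children (empty for a leaf).
    The node is treated as a single macro-particle whenever the angle it
    subtends, size / distance, is below the critical opening angle theta.
    """
    dx = node.com - target
    dist = np.sqrt(np.dot(dx, dx) + eps**2)
    if not node.children or node.size / dist < theta:
        # far enough away (or a leaf): use the monopole of the whole node
        return G * node.mass * dx / dist**3
    # otherwise open the node and descend towards the leaves
    acc = np.zeros(3)
    for child in node.children:
        if child.mass > 0.0:
            acc += tree_accel(child, target, theta, eps, G)
    return acc
```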
So suppose that you have a particle distribution like this; this is the hierarchical tree that you would construct in this case. And suppose that now I want to compute the structure of the tree that I need to open for the particle located at the origin of coordinates, like here. You see that distant portions of the computational volume are treated by stopping at the branches of the tree, while as I go closer and closer I need to open the tree and walk it down to the leaves of the smaller cells. So this is the philosophy of this method: extremely fast and accurate, but you need a lot of memory to store the structure of the tree. All right. Then there are hybrids, hybrids in which you try to combine the virtues of the different codes. So one of the hybrids is the TreePM: you use the tree code to compute the short-range force and the particle mesh for the long-range part, so in principle the philosophy is that you split your potential, computed in Fourier space, into a long-range potential and a short-range potential. One thing that you have to be careful about in doing this game is that it may happen that a particle that is close to the target particle gives a contribution that is counted twice: once when you assign the charge on the mesh for the PM part, and once through the tree code. You have to be careful, therefore, not to double count the contribution of these particles, and this is usually done by filtering your potential, cutting it usually with a sort of Gaussian, like here. So this is the long-range force, and you integrate the PM part using this force here instead of the standard 1 over r, and this instead is the potential that you use for the short-range force. Combining the two, the 1 over r potential is well matched by the long-range part, which is this one that you compute with the particle mesh, plus the short-range part, which is this one computed with the tree code. So this is good, because you can be very accurate in the computation of the short-range force and you are no longer bound to the lowest resolution set by the mesh size in the PM part. Alright, and then there is a number of hybrids like the P3M, in which for nearby particles you do particle-particle, so direct integration, instead of a tree code, and then PM again; so particle-particle plus particle-mesh, that's why P3M. Or you can do something even more complicated, like adaptive P3M, in which you have the P3M with a PM part which is adaptive, so with a grid that is locally refined, or the adaptive TreePM. So there are a number of varieties: you can increase the complexity of your code and try to take advantage of the different techniques depending on the regime in which you are working. Alright. So, these days I should say that the codes that are most used are adaptive TreePM codes; this is the philosophy of the codes that are mostly used. There are also completely adaptive PM codes, which are also used by a community, especially people working with Eulerian hydrodynamics. So this is the record today: it's a one-trillion-particle simulation done by Skillman et al. from the Dark Sky simulation set, and this is just to give you a flavor of what you can do these days. This is a picture showing the past light cone, the density on the past light cone between redshift 0.9 and 1, for a concordance Lambda CDM model. So this is the size of the computational effort that you can do these days with these N-body codes.
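For reference, the long-range/short-range splitting of the potential mentioned above for TreePM codes is often written (this is the Gadget-style split, quoted here as an illustration rather than taken from the slides) as

$$
\phi_{\mathbf k}^{\rm long} = \phi_{\mathbf k}\, e^{-k^2 r_s^2},
\qquad
\phi^{\rm short}(r) = -\frac{G\,m}{r}\,\mathrm{erfc}\!\left(\frac{r}{2 r_s}\right),
$$

so that the two pieces add up to the full 1/r potential, with r_s the scale of the split: the PM part handles the long-range piece and the tree the short-range one.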
Alright, let's move to hydrodynamical methods. OK, a couple of references here. This is Monaghan, which is a standard reference for Lagrangian hydrodynamics; Rosswog, which is also for Lagrangian hydrodynamics; this is a review that I wrote with Klaus Dolag, where we cover both Lagrangian and Eulerian methods and eventually something about N-body simulations; and this is again the same nice review that I mentioned before, where there is a very nice explanation of Lagrangian and Eulerian methods and also hybrid methods. Alright, so what is numerical hydrodynamics? Very simply, it is meant to follow the formation and evolution of baryonic structures inside the potential wells of the evolving dark matter density field. There are two classes, Lagrangian and Eulerian. As you well know, the Eulerian methods are based on computing quantities at fixed locations in space and computing the fluxes of gas and energy between neighboring cells. In the Lagrangian methods, instead, you follow the fluid elements along their orbits and you update the thermodynamic quantities as the fluid element moves along the orbit. OK, standard definitions of Lagrangian and Eulerian hydrodynamics. Alright, so what do we do? In hydrodynamics we have state functions, which are basically the density, the three components of the velocity field, the pressure and the internal energy of my fluid element, and what I do is to integrate this set of equations here: the continuity equation; the Euler equation, in which I have, you know, basically the total acceleration given by the pressure force plus the gravitational force; and then I have the energy conservation equation, which is basically the first law of thermodynamics in which I set dq equal to zero; and closing this system of equations is the equation of state, which is the relation between density, pressure and internal energy. And in all this notation, the total derivative, which is the Lagrangian derivative, is as usual given by the partial time derivative plus the advection term. OK, this is the notation. Standard fluid dynamics, nothing fancy. So the heart of Eulerian methods is the solution of the so-called Riemann problem. I won't say much about Riemann solvers, because that by itself would probably take two or more lectures. So what is the Riemann problem? The Riemann problem is basically an initial value problem for a hyperbolic system in which you have the state variables given by u and the fluxes given by f. So it's a sort of continuity equation for a generic quantity u with a given flux f, and this is an initial value problem for two piecewise constant states at t equals zero. So basically you have a piecewise constant state here and a piecewise constant state here, and the problem is to compute the fluxes and the evolution of my state variable u, given by the computation of the flux across this interface. So the state variables are usually specified on the left part and on the right part. And this is an example of a Riemann problem in which you have constant pressure and density and the velocity set equal to zero both on the left and on the right part. In this case we have the so-called Sod shock tube. It's one of the simplest cases, in which you have, in fact, an analytic solution.
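In this notation, the one-dimensional Euler equations written as such a hyperbolic system are (a standard form, given here for reference)

$$
\partial_t U + \partial_x F(U) = 0,\qquad
U = \begin{pmatrix}\rho\\ \rho v\\ \rho e\end{pmatrix},\qquad
F(U) = \begin{pmatrix}\rho v\\ \rho v^2 + P\\ (\rho e + P)\,v\end{pmatrix},
$$

with ρe the total (internal plus kinetic) energy density and P given by the equation of state; the usual Sod initial data are (ρ, P, v) = (1, 1, 0) on the left of the interface and (0.125, 0.1, 0) on the right.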
And this is the time evolution of this Sod shock problem, in which, as you allow your system to evolve, you have the propagation of a shock, which is here; you have the creation of a contact discontinuity here, across which the pressure is constant; and you have a rarefaction fan, a rarefaction wave, which bears memory of the fact that this discontinuity here is propagating. So in general, whatever Eulerian code you have, what the Eulerian code is meant to do is to take two fluid elements, which you can decide to locate at your best convenience, and solve the Riemann problem: find an exact or iterative solution for this problem of computing the fluxes f across the interface. All right. So this is what you do, again. This is your hyperbolic system with state variables u, fluxes f, and the equation of state. You specify your variables inside your cells and you define the average state within a cell, which is just the volume average of your quantity, which can be the density, the velocity or the total energy of your fluid element; this is the average value within the cell. OK. So what you need to do is to compute this quantity here: the integral, between x at i minus one-half and x at i plus one-half, so between the boundaries of the cell, and across a time step, of the partial derivative of u with respect to t plus the partial derivative of f with respect to x (this is of course the one-dimensional case), set equal to zero. This gives you the conservation law of your system. So again, here we have our state variables specified within these cells and we need to compute the fluxes across the boundaries. All right. So if we start from this and we make the time discretization for the evolution of u and the x discretization for the gradient, for the divergence of f, we have the integral within the boundaries of your cell of your quantity u at two different time steps, plus the integral across the time step of the value of the fluxes computed at the interfaces. So this is basically the continuity equation: the variation of u is given by the flux f across the boundaries of your fluid element. OK. So again, if you define this cell-averaged quantity, this equation here can be recast in this form here, and basically at this point the rule of the game is computing this quantity f, the fluxes across the boundaries at i plus one-half and i minus one-half, and using these to update u at step n to u at step n plus one. The so-called Godunov scheme is a scheme in which you basically make the ansatz that the fluxes here are computed as the solution of the Riemann problem in which the state variables are those at the nth time step in the two (or more) neighboring cells. So you solve your Riemann problem, you have the flux across the boundaries, you include the flux across the boundaries in this equation here for the state variables. OK. As simple as this. Apparently simple; there are a huge number of complications here, related especially to the Riemann solver, but in principle this is the general philosophy.
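Written out, the cell-averaged update that this procedure leads to is (for reference, in the one-dimensional notation above)

$$
\bar U_i^{\,n+1} = \bar U_i^{\,n} - \frac{\Delta t}{\Delta x}\left(F_{i+1/2} - F_{i-1/2}\right),
$$

where the interface fluxes F at i plus one-half and i minus one-half are obtained, in the Godunov scheme, from the solution of the Riemann problem between the neighbouring cell states at time step n.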
One delicate point. OK, so the general philosophy can be summarized in this slide here. Basically your evolution is done in three steps: Reconstruct, Evolve, Average, the R-E-A scheme, or REA scheme, as it is commonly called. So the first phase is the reconstruction. In the reconstruction you start from the cell-averaged state, so u_n within each of these cells, and you compute the run of this quantity within the cell; the aim of this is to compute the quantities at the interfaces of the cells, so that you specify the left and right states of the state variables that you should use to solve your Riemann problem. Then you Evolve, meaning that you solve the Riemann problem and you compute the fluxes. Once you have computed the fluxes, you Average: what this means is that you compute again the cell-averaged quantity within each cell, once you account for the incoming and the outgoing fluxes that you computed in the Evolve part, and that's it. And you cycle. So these are just two examples of how you can reconstruct the state variables within your cell. In a first approximation you can simply assume that they are constant, so within the cell the quantity is simply constant. If you want to be more accurate, you try to do a linear interpolation between a given number of neighboring cells. There are further complications here because, like in this example here for instance, if you do this linear interpolation you may end up with situations in which the state variable here, after the interpolation, is more extreme than the values that you have in your system. And this creates unpleasant features like unphysical oscillations of the solution and things like that. And what people usually do, with some black magic I would say, is impose some slope limiters, in such a way that the slope for the assignment of the left and right states inside the cell cannot be arbitrary: it should be limited in such a way that you never exceed the extrema. All right. Again, there is a whole industry of methods for doing the reconstruction, and these methods are meant to be stable, non-oscillatory, to preserve monotonicity, to use more grid points to increase the accuracy in the reconstruction, and so on and so forth. And this is the structure of this business, which is really the core of the problem of Eulerian methods: to be sure that you keep under control the diffusivity of your solution of the Riemann problem and the unpleasant numerical effects related to the reconstruction. All right.
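As an example of the slope limiting mentioned above, here is a minimal sketch of a piecewise-linear reconstruction with the minmod limiter (one common choice among many; an illustration, not the lecture's code):

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller slope if a and b agree in sign, zero otherwise."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_interface_states(u):
    """Piecewise-linear reconstruction of the interface states from the cell
    averages u, with the minmod slope limiter to avoid creating new extrema."""
    slope = minmod(u[1:-1] - u[:-2], u[2:] - u[1:-1])  # limited slope in each interior cell
    u_left = u[1:-1] + 0.5 * slope    # state just left of interface i+1/2
    u_right = u[1:-1] - 0.5 * slope   # state just right of interface i-1/2
    return u_left, u_right
```

The limiter returns a zero slope at local extrema, which is what prevents the reconstructed interface values from exceeding the neighbouring cell averages.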
So, how much time do I have? OK. So there are different kinds of Eulerian codes. Eulerian codes in principle have the same problem as the PM, the particle-mesh code, because in principle you are bound to the size of the grid on which you are solving your Riemann problem. In practice there are a number of tricks, or more than tricks, numerical techniques, with which you can increase the resolution, for instance adaptive mesh refinement, in which you refine the mesh according to something; it can be density, but density may not be the only criterion with which you want to refine. For instance, if you are interested in resolving shocks accurately, then what you want to do is to refine whenever you have a jump in pressure or a jump in entropy. So you can use different state quantities to define the criterion for the refinement. Or you can have a moving mesh, like in the Arepo code, which is one of the codes which use a moving mesh: the mesh is not fixed, it doesn't have a fixed geometry, and the mesh itself can move with the flow, in a sort of Lagrangian way along with the fluid. And this is very nice because it allows you to couple, say, the good features of the Eulerian scheme with the good features of the Lagrangian scheme. And these are nice figures in which you see the case in which the mesh is given by the Voronoi tessellation of your particle distribution. You make a Voronoi tessellation, and in this case you need to solve the Riemann problem on the surfaces of these Voronoi polyhedra. It's more complicated, but you can do that. And there are also meshless codes, like the Godunov SPH, which is my favorite; the only problem is that I never managed to make it work, but I mean, I'm still trying. It is very nice because basically it's a completely Lagrangian code in which you solve the Riemann problem between each pair of particles. If you're interested I can tell you more. So this is about Eulerian schemes. Let's go to smoothed particle hydrodynamics. Smoothed particle hydrodynamics is one of the possible incarnations of Lagrangian schemes. So in Lagrangian schemes, basically, what you do is sample your fluid with points, with particles, and again the hydrodynamic quantities are carried by each fluid element and you follow them along the trajectories; the particles move under the Euler equations, of course, and they use smoothed quantities, I will show you what I mean, and these quantities are re-evaluated at each time step. So let's see what this means in practice. We have a generic field F, which is a continuous function in your domain, and the thing is that you want to compute the interpolated quantity at a given position r, knowing its distribution at some sampling points r prime, with an accuracy, a resolution, which is given by this interpolating kernel W, whose resolution is given by h. So basically this is a sort of filter: the interpolating function that you use to assign the value of a continuous function which is sampled at discrete points in space. So this is an interpolating kernel and h gives you the smoothing length, the coarse-graining scale for your hydrodynamics. This kernel needs to have good features. The first feature is that when you take the limit for h going to zero, you want to recover the original function; so in the limit for h going to zero, this had better go to a Dirac delta function. The other thing that you want from your kernel is that it conserves mass, therefore the total integral over the whole volume should be unity. All right. Then we discretize. So we said that the idea is that we want to sample our fields with points, with particles. So what do we do? First we take this interpolating function here, multiply and divide by rho, by the density. And then we say: OK, I'm not doing an integral, I'm simply sampling my continuous fluid with particles, which are labeled by this b label here. So what we say is that the value of the field interpolated at the position r is simply given by my kernel function multiplied by the value of the field F computed at the position of the particle b, weighted by the volume, by d3r, which is mass divided by density. Fair enough. If for F I take the density, this is my density estimator. OK. So the interpolation of the values carried by each particle is done using the interpolating kernel. It's quite simple. All right.
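In formulas, the discretized interpolant and the density estimator just described read

$$
F(\mathbf r) \simeq \sum_b \frac{m_b}{\rho_b}\, F_b\, W(\mathbf r - \mathbf r_b, h),
\qquad
\rho_a = \sum_b m_b\, W(\mathbf r_a - \mathbf r_b, h),
$$

where the sum runs over the particles b, W is the interpolating kernel and h the smoothing length.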
So which kernel? This kernel should have a few properties. The first property is that it should conserve angular momentum; therefore, in a first approximation, it is better if you use a purely radial kernel. For computational reasons you prefer to have a compact support. Why a compact support? If the support is not compact, it means summing over all the particles of your system. OK. However, for hydrodynamics we know that, unlike gravity, it is a short-range interaction, and therefore you may be happy enough to take a kernel with a compact support. OK. You don't want to go and weight particles which are far away from the target particle. So one popular kernel expression is this one, which first of all is compact: the kernel vanishes whenever the separation is two times the kernel coarse-graining scale, and its first and second order derivatives are continuous. So it is a well-behaved kernel that you can use for this business. Again, if you go into the literature you find a plethora of these kernel functions of different orders, depending again on the trade-off between the computational cost and the accuracy that you want to reach. The Gaussian kernel is very useful for analytic computations, but the Gaussian kernel has a non-compact support, so in principle it is much more costly than any compact kernel. All right. So this is the interpolating function. Now, a few tricks. First trick: differentiation. If I have to compute a gradient, it can be computed in this way: a sum over the particles of mass over density, times the value of the function at the position of the particle b, times the gradient of my kernel. Unfortunately this doesn't go to zero even if f is constant. So what you need to do is to use a simple trick, in which your function f here, labeled by a, is multiplied by an external function phi, and what you do in this case is that the gradient is 1 over phi computed at the position of the target particle a, times the sum over b of m_b phi_b over rho_b times this quantity, the first piece here, minus this second piece here, which is given by this second term. And this goes manifestly to zero if f is constant, this time. It's a trick, but you have to play these tricks with this kind of code. So if phi is equal to 1 you have this scheme for computing the gradient; if phi is equal to rho you have the density-weighted gradient, and you have this other expression. And very simply, at this point you can compute the continuity equation. So d rho over dt, which is equal to minus rho times the divergence of the velocity field, in the SPH jargon is computed with this expression here, where v_ab is the difference in velocity between particle a and particle b. So this is the first equation of hydrodynamics that we can translate into SPH jargon. The second is the Euler equation, so the momentum equation. In the momentum equation you have dv over dt equal to minus the gradient of the pressure over rho. Again, even in this case, if you do a brute-force differentiation of this, this is the expression. However, if you compute the force acting on the particle a exerted by the particle b using this simple expression here, this is what you would get; if vice versa you compute the force acting on the particle b exerted by the particle a, you get this expression here.
Now, if the pressure is not constant, so p_a is different from p_b, you have that you don't conserve momentum: you violate the action-reaction principle, which you don't like. So you have to do something about this, and again it is a trick. What you do in this case is to compute the gradient of p over rho by writing it as the gradient of (p over rho) plus p times the gradient of rho over rho squared; you do the SPH translation of this, and you end up with an expression for the momentum equation which is completely symmetric in a and b, and therefore you conserve momentum. It looks like we are playing dirty tricks here; that is actually what happens. So this is accurate and conserves momentum, which is what you want to have. So we have the continuity equation, we have the momentum equation; what we still need to integrate is the energy equation. This is the energy equation, the variation of the internal energy, and this is the corresponding flux. Again, also in this case you can play with some algebra here, and what you end up with is the expression for the Lagrangian derivative, for the variation of the internal energy of the particle a, which is given by this expression here, where v_ab is again the velocity difference between particles a and b, and you can figure out that this is the result of simple computations. Then you have the equation of state, and that's it. So you have three momentum equations, one continuity equation, one energy equation and one equation of state: six equations for the six unknown quantities, and we can integrate. All right. You can do something more refined, like integrating directly the entropy, and therefore enforcing the conservation of the entropy at the particle level in case you don't have any variation of entropy associated with dissipation of energy. And, more importantly, you can make the code locally adaptive, in which the choice of the smoothing length is dictated by the local value of the density. This is what you want to do if you want to take full advantage of the Lagrangian nature of the code, because what you want is to have better resolution wherever the density is higher: wherever rho_i is higher, I take a smaller h_i. And if you do that, you can figure out that if you run your Lagrangian derivation of the fluid equations and so on, putting in the Lagrangian constraint imposed by this condition here, you end up with equations which are completely analogous to what we saw before, except that everything must be multiplied by a correction factor f_i that accounts for the density dependence of the kernel, to make your code fully adaptive. OK.
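For reference, the symmetrized equations that this manipulation leads to, in their standard form (quoted here as an illustration, without the viscous terms introduced below), are

$$
\frac{d\rho_a}{dt} = \sum_b m_b\, \mathbf v_{ab}\cdot\nabla_a W_{ab},\qquad
\frac{d\mathbf v_a}{dt} = -\sum_b m_b\left(\frac{P_a}{\rho_a^2}+\frac{P_b}{\rho_b^2}\right)\nabla_a W_{ab},\qquad
\frac{du_a}{dt} = \frac{P_a}{\rho_a^2}\sum_b m_b\, \mathbf v_{ab}\cdot\nabla_a W_{ab},
$$

with v_ab = v_a − v_b and W_ab = W(r_a − r_b, h).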
So, does it work? No, it doesn't. This is the solution of the shock tube that I showed you before, so the solution of the simplest Riemann problem for which we have an analytic solution: as I said, rarefaction wave, contact discontinuity and shock. This is the run of the density, this is the run of the pressure, this is the run of the velocity; this is the analytic solution and this is the SPH solution with the equations that I showed you before. And as you can see there is a huge noise here. First of all you notice two things: if you look carefully, the position of the shock is not correctly recovered, it's actually a little bit anticipated, and there is a huge noise in the velocity. Basically what happens is that you have a pile-up of momentum on small scales and therefore you have a lot of noise in the particle velocities. And this happens because these poor SPH particles are not dissipative: in the absence of any dissipation term they just pile up, they keep accumulating kinetic energy, and they don't know what to do with this kinetic energy. So it doesn't work. So what to do to make it work? What we need to do is basically to convert this mechanical energy, which is spuriously accumulated, into thermal energy: we want to capture the shock better. To do this, what people usually do is to modify by hand the correct equations, by introducing an artificial viscosity contribution to the pressure. This artificial viscosity has, in general, this expression here, with a term of bulk viscosity and another, quadratic viscosity, which is called the von Neumann-Richtmyer viscosity. People worked it out, actually, studying the first atomic bomb experiments, trying to understand what the viscosity and what the fallout would be; this is the origin of this von Neumann-Richtmyer viscosity. So what is the rationale? The rationale is that basically the relevant quantity is the jump in velocity between two fluid elements, which is this quantity here: L, where L is the separation between two fluid elements, times the divergence of the velocity field; and this is the sound speed of your medium. So you have this linear and this quadratic contribution. So this means that if I use an Euler equation with a viscous term, in which I add this quantity here to the pressure, in the SPH jargon it is like adding to the momentum equation this Pi_ab, where Pi_ab is again a viscosity term which contains a bulk term and a second-order term; this second-order part here is this one, and this mu_ab basically is again the velocity jump that we mentioned before. And one thing that you want in your system is that the viscosity is turned on only when you have a convergent flow, because that is the case in which you have shocks; if you have a divergent flow you don't care, you don't want to introduce viscosity where you don't need it. So whenever you have a convergent flow, v_ab dot r_ab negative, then you turn on this viscosity. And then you have these numerical factors here, alpha, beta, and an epsilon here to regularize the velocity jump, and there is nothing you can do other than fixing these numbers by running controlled experiments. There is no first principle telling you the values of these quantities; it is again a matter of running controlled experiments and trying to reproduce analytic solutions as accurately as possible. So once you add this term to the pressure, this is the modified expression for the momentum equation and the modified expression for the energy conservation equation.
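The artificial viscosity term just described is, in its standard Monaghan form (quoted for reference; the exact variant differs between codes),

$$
\Pi_{ab} =
\begin{cases}
\dfrac{-\alpha\,\bar c_{ab}\,\mu_{ab} + \beta\,\mu_{ab}^2}{\bar\rho_{ab}}, & \mathbf v_{ab}\cdot\mathbf r_{ab} < 0,\\[6pt]
0, & \mathbf v_{ab}\cdot\mathbf r_{ab} \ge 0,
\end{cases}
\qquad
\mu_{ab} = \frac{h\,\mathbf v_{ab}\cdot\mathbf r_{ab}}{|\mathbf r_{ab}|^2 + \epsilon h^2},
$$

where the barred c and rho are the pair-averaged sound speed and density: the alpha term is the bulk (linear) viscosity, the beta term the quadratic von Neumann-Richtmyer one, and the epsilon h squared in the denominator regularizes the velocity jump. Pi_ab is then added alongside the pressure terms in the momentum and energy equations.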
A further complication, just to make you aware of the things that you have to be careful of: suppose that you have a shear flow. In a shear flow, of course, you don't have shocks, so you don't want to turn on the viscosity; however, if you use the viscosity expression that we wrote before, with this mu_ab, in a shear flow what you have is that the scalar product between the separation vector and the difference of velocities is different from zero, they are not orthogonal, and that is not what you want. So there is again a switch. In this case you compute a quantity f_a and you multiply your viscous pressure by this quantity f_a, which has these two terms here: the divergence of the velocity field and the rotational component of the velocity field. So whenever you have a pure shear flow this factor goes to zero, and if you have a purely compressional flow instead it is this other quantity that goes to zero and you recover the previous expression for the viscosity. So this is to prevent the appearance of spurious viscosity in cases where you want no viscosity. Suppose that you have a rotating disc, a shearing rotating disc, and you don't use this Balsara switch: what you would have is a spurious transfer of momentum, just due to the presence of the viscosity. Alright, and then you can do fancy things: since you are not happy with having this artificial viscosity, you would like to get rid of it away from shocks, and therefore you write down a dynamical equation for the viscosity coefficient, in which you have a source term, which is again given by minus the divergence of the velocity field, and a decay time over which the viscosity goes to zero when you are away from the shock. So you can resort to this to optimize the performance of your code. You run the Sod shock tube, and in this case you get a much, much better agreement: first of all you notice that the velocity noise is much reduced, and you capture the shock perfectly here. But the problem is: look at this here. The pressure should be constant across the contact discontinuity; instead, where you have the discontinuity in density, and therefore in energy, you have a blip in pressure. And about this blip in pressure you could say, who cares, it is a small feature in the solution; it is actually causing most of the drawbacks of SPH. This blip in pressure is caused by a spurious pressure force that arises at the discontinuity, and this spurious pressure force causes a sort of surface tension. Once you have this surface tension, the consequence is that you don't mix the two sides of the discontinuity, so whenever you have some hydrodynamical instability developing, this surface tension force prevents the development of the hydrodynamical instability. And the reason for this is, again, that there is no diffusion of energy across the discontinuity, because we are preserving entropy at the particle level: this is what we impose on the code, and the code very nicely implements it. It is an unwanted... an unpleasant feature: it is wanted, but unpleasant. OK, this time it should start... otherwise... sorry, sorry, sorry. I don't understand why this doesn't start by itself; it has to be started by hand... no, it doesn't want to start this way, right?
Alright. What this movie was meant to show is the development of the Kelvin-Helmholtz instabilities, and for whatever reason it doesn't play. So, Kelvin-Helmholtz instabilities: these are hydrodynamical instabilities, fluid-dynamical instabilities, at the interface between two fluids moving with respect to each other in a shear flow. OK. So if you add a velocity perturbation along the y axis, with a sinusoid, you have the development of these curls, over a time scale which can be computed in terms of the local densities, the velocity difference and the wavelength of the velocity perturbation. OK. So this was the comparison between an Eulerian code, where you correctly develop these instabilities, and an SPH code, in which you prevent the development of these instabilities. I'm very sorry, because the movie is very instructive; anyway, the movie would show that you don't develop these instabilities. OK, sorry about that; I can show it to you later, offline, if anyone is interested. Alright. But we know that this happens, OK? We know that in nature these Kelvin-Helmholtz instabilities happen in the real world, so we know that there is a problem. What is the cure for this problem? The cure is basically to allow for diffusion of energy across the contact discontinuity, which we identified as the problem causing the blip in the shock tube. To allow for this, again, we have to include a switch: SPH is full of switches, so every problem that you have in SPH you cure with a switch. The good thing is that there is a switch, meaning that since it is a switch that you introduce by hand, you can move the switch according to what you want to achieve. In the Eulerian codes there is no such switch, so you would say, oh nice, I don't need tricks. The problem is that in Eulerian codes you have a diffusivity of your solution of the Riemann problem which is something that you cannot fully control: you can control it by changing the Riemann solver, by changing the reconstruction method, by changing the slope limiter, but it's not something that you put in by hand and can regulate time by time. So there are pros and cons in this business. So in SPH, what you can decide to do is to introduce a dissipation term for a conserved scalar quantity. Given a scalar quantity, you can always write a sort of continuity equation in which the variation of your scalar quantity is given by the flux of the scalar quantity: you have the scalar quantity A at the positions of particles a and b, and then you have, for instance, a coefficient alpha that tells you how much diffusion you want, and this signal velocity here, which is basically the sound speed; so depending on the sound speed you diffuse quantities, with a degree of diffusivity which is set by a dimensionless parameter here. If you do this to diffuse momentum, you recognize that this equation here is nothing but the artificial viscosity that we introduced before. So artificial viscosity is nothing but a way to diffuse momentum, which is what we want to do, because we said that in SPH, when we approach a shock, there is a pile-up of momentum accumulating in the particles that we want to get rid of, and the way we get rid of it is to introduce artificial viscosity.
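Schematically, the dissipation term for a conserved scalar quantity A described here can be written (following, e.g., the form popularized by Price; the exact signal velocities and normalizations vary between implementations) as

$$
\left.\frac{dA_a}{dt}\right|_{\rm diss} = \sum_b \frac{m_b}{\bar\rho_{ab}}\,\alpha_A\, v_{\rm sig}\,(A_a - A_b)\,\hat{\mathbf r}_{ab}\cdot\nabla_a W_{ab},
$$

so that applied to the momentum one recovers an artificial viscosity, and applied to the internal energy one gets the artificial thermal diffusion discussed next.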
If I do the same for the internal energy, what I have is a thermal energy diffusion. It's not Spitzer-like thermal conduction, it's not thermal conduction: thermal conduction is related to the mean free path of electrons in a magnetized plasma. This is something that I introduce by hand, to get rid of the pile-up of energy on small scales. You can think of SPH as producing a turbulent cascade that transfers energy from the large eddies to the small eddies, and once you reach the small eddies the energy remains there: the code is not diffusive at all and you don't know what to do with this energy. That's why you create this pile-up and the surface tension, and this is a way of getting rid of this piling up of energy on the small scales. Then there are tricks in which you define which is the signal velocity for the artificial viscosity and which is the signal velocity for the internal energy, in which you regulate this by the difference in pressure, and so on and so forth. And again, you run the same test as before, and what you get this time is this red curve. This red curve here is the behavior of the pressure across the discontinuity: the solution of the standard SPH is the blue one, the green one on the bottom is the PPM, so the Eulerian solution, and the red is the one using the artificial diffusion of the thermal energy. So you get what you wanted to obtain. This is good. Alright, again I cannot show you the movie, but if you make a comparison between the standard SPH and the new SPH in which you introduce this scheme here, in this case you have the correct development of the curl features, the rolls, as people say in jargon, produced by the Kelvin-Helmholtz instabilities. You exactly reproduce what you get in Eulerian codes. So basically you recover what you hope to recover once you include this artificial diffusion. Alright, to sum up, Eulerian versus Lagrangian. The advantages of the Lagrangian scheme are that you have better resolution where needed, in high density regions, and this is natural because in any Lagrangian scheme wherever you have more density you have more particles, and therefore it is intrinsically adaptive; it can be easily coupled to N-body codes, because it's a particle-based method; and it's intrinsically Galilean invariant. Actually, Galilean invariance is broken in Eulerian codes because of errors that you make in the advection of the fluid elements: it is not explicitly included in the code, it's something that you have to correct for. The SPH disadvantages are that it always has a low-order accuracy in the treatment of contact discontinuities, because we have this smooth kernel: whenever we have a sharp discontinuity, either in density or in energy or in entropy or in whatever, the best you can do is to recover the discontinuity with a kernel, and the kernel is smooth, so you have a smooth representation of a sharp discontinuity. There is a lot of subsonic velocity noise, which we can try to cure, again, with switches; there is poor shock resolution, for the same reason as this first point here; and there is a difficulty in following hydrodynamic instabilities. Each of these problems can be cured, but again with switches that we need to introduce ad hoc in the code. The advantages of Eulerian codes: sharp discontinuities are very nicely recovered, with one fluid element you recover the discontinuity; hydrodynamic instabilities are generally nicely followed, for several characteristic times. The disadvantages: again, it's not manifestly Galilean invariant, because if you add an advection term, a bulk velocity term, the Riemann solver doesn't know that you are adding a bulk velocity;
basically there is a preference for special directions: if you have a Cartesian grid there is of course a preference for x, y and z, and you can overcome this problem by using unstructured meshes or moving meshes. Adaptive resolution is not trivial, believe me. And then the degree of diffusivity: here you would like to have a diffusivity that you don't have, while here you have a diffusivity related to the Riemann solver, but it's something that is difficult to control, something that you have to be very careful about. So if you want just one keyword here: the problem on one side is that we need to include switches every time, modifications of the equations that we want to integrate, while on the other side you have to control what you do numerically. Alright. Then there is of course a huge industry of comparisons between the codes, because given the pros and cons, at this point it's better that we make a decision on which code to use depending on the problem that we need to solve. This is from an old comparison paper by O'Shea et al. in 2005, in which they ran a large cosmological box with gas, followed with a Lagrangian code and with an Eulerian code, and you make a comparison and you say, OK, it's nice, it's not that bad. But then if you look carefully and check, you can see that there are small-scale features that are different in the two codes: you have the impression that in the Eulerian code you have a higher density here, in this overdense region which is a galaxy cluster, than here. There are small differences that can be appreciated, and actually, if you want to trace the origin of these differences, the differences are not negligible. There are control tests that people run to appreciate these differences. For instance, the first one is the so-called cold blob test: you have a blob which is cold, moving in a medium which is hotter, in pressure equilibrium, and what happens to this cold blob is that you have a supersonic flow in front of the blob, with a shock front, a bow shock here, and then you have a subsonic flow between the shock and the blob. What you expect physically is that, as this blob is moving against a hot wind, the blob is progressively destroyed: at the head by Rayleigh-Taylor instabilities, and then, once you have stripped gas, you have a mixing caused by Kelvin-Helmholtz instabilities. This is what you expect. These are simulations using a variety of SPH and grid codes, and what people noticed is that in the SPH codes this blob is rather persistent against the disruption caused by the wind: you see that this blob here in SPH is flattened by the wind, but it's never destroyed. In Eulerian codes, instead, the blob is initially flattened and then you have hydrodynamical instabilities developing and completely destroying the blob after a few characteristic time scales. If you make a zoom-in of the particle distribution at the head of the blob, what you notice is that the particle distribution in Lagrangian codes, in SPH, has this distribution; if you further zoom in, you notice that this is the blob and this is the hot wind, and you see that there is a sort of discretization here, and this is exactly the effect of the surface tension caused by the spurious force arising from the lack of diffusivity in SPH. And it is this surface tension that prevents the development of the instabilities at the head of the blob and therefore prevents the disruption of the blob itself. But then again, if you allow for artificial thermal diffusion, this is what happens:
This is the SPH with artificial thermal diffusion and without, and you clearly see that, as time goes by, in this case you completely destroy the blob, much like you do in Eulerian codes, while in the standard SPH you preserve the blob; you preserve the identity of the blob, if not its shape. All right, that is another movie that doesn't run, I won't even try. How much time am I left with, like 20 minutes? Then I will say something more. This is another historical plot, if you want, coming from a rather old paper by Frenk et al. in 1999, the so-called Santa Barbara cluster comparison project. In this project, what people did at that time was to collect a bunch of simulation codes from different groups, Lagrangian and Eulerian, and simulate the formation, in a fully cosmological environment, of a massive galaxy cluster in an Einstein-de Sitter universe, as was standard at that time. This is simply a density map of what people got. One interesting feature of that comparison was the following; OK, sorry. This instead is the same cluster simulated nowadays, at much better resolution, with the standard SPH and with the new SPH. I ask you to concentrate on the temperature map: you see there is a merger going on, with a shock front, a bow shock, here and here. This is standard SPH, this is SPH with diffusion, and you clearly see that in the SPH with diffusion you have a much less complex thermal structure of the intracluster medium. So all these tricks that I am describing actually have precise observational consequences, and one has to be extremely careful whenever we compare simulations with observations, because we must be sure which features are related to the physics we want to describe and which are related to the numerical methods, which features are spurious and which are genuine physical features. Again, the bow shock is very nicely recovered, but the thermal structure is completely different here. All right, this is an old plot from the original Santa Barbara comparison project: a plot of the entropy versus the radius, the entropy profile of this cluster. People working in X-rays usually define the entropy as the ratio between the temperature and the number density of electrons (in this case the gas density) to the two-thirds power, K = T / n_e^{2/3}; strictly speaking this is not the entropy, it is the adiabat, but let's call it entropy. What people recognized at that time is that in all the Eulerian codes there is the creation of an entropy core, a flattening in the central region, which is not seen in the Lagrangian codes. Running the same cluster nowadays, the green is still the solution that we would get with the standard SPH, and the red is the solution of the SPH using thermal diffusion. One thing I want to make clear is that we did not adjust any parameter to make the red go above the green: the choice of the parameters was dictated only by removing the pressure blip in the shock tube test. We just adjusted that, and then we said: let's simulate the structure in a cosmological environment. So it is pretty nice; at least in this case we have convergence in the answer between Eulerian and Lagrangian codes. But why do we have this difference in the first place? Well, you can understand it from the previous plot. Suppose that you are in standard SPH: in the standard SPH we preserve the entropy at the particle level.
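In the entropy-conserving formulation this statement is quite literal: each particle carries an entropic function that stays constant except where artificial viscosity (shocks) or an explicit diffusion term acts on it. Schematically:

```latex
% Entropic function carried by each SPH particle (gamma = adiabatic index):
A_i \;\equiv\; \frac{P_i}{\rho_i^{\gamma}},
\qquad
\frac{\mathrm{d}A_i}{\mathrm{d}t} = 0
\quad \text{away from shocks and explicit diffusion terms.}
```

So without some added mixing, low-entropy gas keeps its low A_i forever, which is the key to the argument that follows.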
Therefore, suppose that you have a merger coming in with a low virial temperature into the hot atmosphere of my cluster. This merger brings low-entropy gas: since it has a low virial temperature, it has low entropy. What the code does is to let this low-entropy gas sink to the center: if you have low-entropy gas entering the cluster and no mixing of entropy, the low-entropy gas slowly sinks to the center, and therefore I keep the entropy level at the center low. If I have mixing, this does not happen anymore: I destroy my merging blobs, I diffuse the gas, I thermalize the stripped gas from my merging blobs, and I destroy this low level of entropy at the center. I no longer have any sinking of low-entropy gas, because my low-entropy gas is completely thermalized, completely diffused by thermal diffusion in the hot atmosphere of the main object.

Application to the formation of cosmic structures. Again, this is a movie that doesn't go, and this is another movie that doesn't go, at least I wanted to show you. So what does a cosmological simulation do? A cosmological simulation basically does something quite simple: you take initial conditions from the cosmic microwave background and you evolve cosmic structures by solving the N-body problem and the hydrodynamics. One thing that needs to be clear, as I said at the beginning, is that these cosmological simulations cover a huge dynamic range: we want to describe from the hundreds of megaparsecs relevant for cosmology down to the parsec scales where we have star formation, supernova explosions, chemical enrichment, black hole formation and so on and so forth. So we are talking about something like eight or more decades in dynamic range, something that is not affordable, at least today. On top of that you want to describe a variety of complex astrophysical processes, like star formation or supernova explosions, and there is no possibility that we can describe them explicitly in a simulation.

All right, first problem: how do we generate initial conditions in a cosmological simulation? The problem is to generate a distribution of particles with positions and velocities representing a realization of a Gaussian random field, assuming that we want to start from Gaussian initial conditions, with a given power spectrum that is completely specified by CMBFAST, CLASS, whatever: you pick your model and generate your power spectrum at the redshift that you want. How do we do it? Well, if we have a Gaussian random field, then we know that the Fourier transform of the density fluctuation field, delta_k, should be such that the distribution of the moduli is a Rayleigh distribution and the distribution of the phases is random, because by the central limit theorem you want to produce a Gaussian random field. So you have delta_k = |delta_k| e^{i theta_k}, and these are the distributions of moduli and phases of the Fourier transform of my Gaussian random field. Then what I want to do is simply to sample this distribution: I generate a Gaussian delta_k on a grid in Fourier space. For each value of k on the grid I generate a delta_k by throwing two random numbers, R1 and R2, and taking |delta_k| = [-2 P(k) ln R1]^{1/2} (the exact numerical factor depends on your Fourier conventions), and this ensures me that I am sampling
out of a Rayleigh distribution; and then the second condition, theta_k = 2 pi R2, ensures me that the phases of this complex number delta_k are random between 0 and 2 pi. So I just generate two random sequences of numbers and, using an analytic or tabulated expression for my P(k), I generate the value of delta_k on the grid. Then what do I do? I simply Fourier transform back to configuration space, and I compute, through the Poisson equation, the potential on the grid at the Lagrangian positions q. At this point what I do is to use the trick of the Zeldovich approximation to shift my particles: in step 3 I compute the linear-theory displacement and velocity field on the grid by simply taking the gradient of the potential I just computed, multiplied by the linear growth factor, at the Lagrangian coordinates on the grid, and then I move the particles according to the Zeldovich approximation. Basically this gives me the possibility of generating particle positions that are a representation of a realization of my power spectrum, for Gaussian initial conditions. In principle this can be applied to both dark matter and gas; actually there is a complication for particles with non-negligible thermal velocities. Suppose that you have neutrinos: then the story is different, because this procedure gives you only the velocity induced by gravity, by the gravitational instability, and on top of that you have to include the thermal (or whatever) velocity of the particles; in the case of neutrinos, on top of that you have to sample the Fermi-Dirac distribution. Then I have to make a decision on the redshift at which to generate these initial conditions, and one of the golden rules is that you want this redshift to be large enough that the growth factor D(z) is small enough to have small displacements, to make sure that you are very far from the shell-crossing regime, because you want to generate the initial conditions in a regime where the Zeldovich approximation is correct. Of course, also in this case you can use a number of improvements and refinements, like generating the initial conditions on an unstructured, glass-like particle distribution rather than on a regular grid, or using second-order Lagrangian perturbation theory instead of the Zeldovich approximation, but the bottom line is that you use some Lagrangian perturbation theory to generate the initial conditions for your simulations. This is what the initial conditions look like: in this particular case, this is a portion of the initial conditions generated for a hydrodynamic simulation, in which the white points are the dark matter particles and the red ones are the baryons; you see that the baryons are offset by half a grid spacing in this case. If you try to defocus this image, you can recognize a sort of large-scale pattern in the displacements, and this pattern is exactly the effect of the power spectrum that you are generating: it is not a uniform distribution, it is not a random distribution, it is not even a distribution on a regular grid, it is something with a pattern, with large-scale features, given by the long-wavelength modes of the power spectrum.
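As an illustration of the recipe just described (Rayleigh-distributed moduli with random phases on a Fourier grid, followed by Zeldovich displacements), here is a minimal sketch in Python. Everything in it is a simplified assumption of mine: a toy power spectrum is passed in as a function, normalizations and peculiar velocities are glossed over, and real initial-condition generators are far more careful (proper mode conjugation for a real field, glass pre-initial conditions, 2LPT, and so on).

```python
import numpy as np

def gaussian_ics(ngrid, boxsize, pk_func, growth_factor, seed=0):
    """Zeldovich initial conditions on a regular grid (schematic sketch)."""
    rng = np.random.default_rng(seed)
    kfreq = 2.0 * np.pi * np.fft.fftfreq(ngrid, d=boxsize / ngrid)
    kx, ky, kz = np.meshgrid(kfreq, kfreq, kfreq, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                       # avoid division by zero for the k = 0 mode

    # Rayleigh-distributed moduli and uniform random phases, as in the lecture
    # (the factor of 2 in front of P(k), if any, depends on Fourier conventions).
    r1 = 1.0 - rng.uniform(size=k2.shape)   # in (0, 1], safe for the logarithm
    r2 = rng.uniform(size=k2.shape)
    delta_k = np.sqrt(-pk_func(np.sqrt(k2)) * np.log(r1)) * np.exp(2j * np.pi * r2)
    delta_k[0, 0, 0] = 0.0                  # zero mean fluctuation

    # Zeldovich displacement field: psi_k = i k delta_k / k^2, back to real space.
    psi = [np.fft.ifftn(1j * ki * delta_k / k2).real for ki in (kx, ky, kz)]

    # Displace particles from their Lagrangian grid positions q by D(z) * psi(q).
    cell = boxsize / ngrid
    q = np.indices((ngrid, ngrid, ngrid)) * cell
    pos = [(qi + growth_factor * p) % boxsize for qi, p in zip(q, psi)]
    return np.stack(pos, axis=-1)           # shape (ngrid, ngrid, ngrid, 3)

# Toy usage with a made-up power-law spectrum, purely for illustration:
particles = gaussian_ics(64, 100.0, lambda k: 10.0 * k**-2.0, growth_factor=0.02)
```

The Zeldovich velocities would follow from the same displacement field, scaled by the appropriate factor of the growth rate, which I omit here.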
And believe it or not, if you then compute the power spectrum from these initial conditions: at the bottom the black curve is the linear power spectrum that you get from CMBFAST or whatever, and on top the points are the power spectrum computed from the particle distribution that I just showed you. As you can see, you actually do a good job in generating a particle distribution whose power spectrum is, modulo some small differences, exactly the same as the power spectrum that you used to generate the Zeldovich displacements. If you look carefully, you clearly see that at small k's you do not have a perfect overlap between the points and the curve, and this is sampling variance: as you go to small k's you have only a few Fourier modes that you can accommodate in your computational volume, so you sample the underlying distribution with just a few modes, and the consequence is that you are not accurate in sampling it and you make small mistakes. As you go to higher and higher k's, you accommodate more and more Fourier modes, so you have a better and better representation of your power spectrum; at large k's the representation is much nicer. This is cosmic variance, basically. And the difference between this and this is the evolution after some time: the black curve is the linear evolution, the points are the evolution from the N-body simulation, and you get what you hope to get. On large scales linear evolution is preserved; on small scales you have non-linearities developing, and therefore you have deviations from the linear power spectrum, which take place on smaller and smaller scales as evolution goes by. So it works.
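The sampling-variance point above can be made quantitative with a simple mode count; the following is a back-of-the-envelope estimate of my own, not a formula from the lecture:

```latex
% Number of independent Fourier modes in a shell of width \Delta k,
% for a periodic box of volume V = L^3 (the 1/2 comes from \delta_{-k} = \delta_k^*):
N_{\rm modes}(k) \;\approx\; \frac{1}{2}\,\frac{4\pi k^2\,\Delta k}{(2\pi/L)^3}
  \;=\; \frac{V\,k^2\,\Delta k}{4\pi^2},
\qquad
\frac{\sigma_P}{P} \;\sim\; \frac{1}{\sqrt{N_{\rm modes}(k)}}.
```

So the fractional scatter of the measured power spectrum is large at small k, where only a handful of modes fit in the box, and shrinks as k grows, just as in the plot.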
I think I am almost done; I will skip all the rest of my talk, which is not a small number of slides. I just want to show you one thing before I move to the conclusions. This is just an example of all the complex astrophysical processes taking place on small scales: suppose that you want to describe star formation in these simulations. What is star formation? Star formation is basically gas cooling down by radiative losses, by radiative cooling; eventually, due to the lack of pressure, this gas becomes denser and denser, and eventually it becomes so dense and so cold that it starts forming stars. Again, star formation is a process taking place on scales that we have no hope to resolve, so basically what people do is to say: we have multiple populations of particles, a dark matter population, a star population and a gas population. Stars are collisionless and therefore feel gravity, only gravity, much like the dark matter particles. Gas undergoes a number of astrophysical phenomena, like radiative cooling and star formation once some density and temperature criteria are met; gas undergoes gravity of course, so it is coupled to dark matter and stars, but it also undergoes hydrodynamics, and you have to describe this whole system. Of course what happens is that stars explode as supernovae, and the supernovae hit back the gas in a feedback process; and of course you can have some UV background, like the one that Andrea Ferrara described this morning, which you have to include in your simulations if you want a realistic description of the cosmological evolution of your baryons, and which also affects the gas, and so on and so forth. At this point I will skip everything else, because I do not want to bother you; as you can see I have many, many more slides, since this was actually meant for two lectures, for which I do not have the time. So let me give just one final message, what you need to bring home in my view. First of all, numerical N-body and hydrodynamical simulations represent, in my personal view, the ideal framework to capture the complexity of cosmic structure formation: cosmic structure formation is a complex process, and this is in my view the best way we have today to describe in detail what happens in galaxy formation. But there are a number of buts. The first one is that an exact numerical hydrodynamical method simply does not exist; there is no such thing, it is not something we have, and we have to be aware of this. Second, always test and compare different methods to understand their range of validity and their limitations; in this game of simulations, never get tired of testing, because you never know whether a given feature is generated by physics or by numerical artifacts. Keep in mind that the astrophysical processes are not self-consistently described: there are phenomenological recipes that we use to implement star formation and feedback, and whenever you hear somebody saying that we describe galaxy formation in a self-consistent way, this is plain wrong. What we can do is to include a phenomenological description of the physics. For instance, for star formation we can use something that reproduces a Schmidt-Kennicutt law, for those of you familiar with these things; you can impose that you reproduce simple observations, and then ask your simulations whether they can reproduce more than that. This is what we can do, and it is not an easy business: these days we have a hard time producing realistic galaxies, especially disc galaxies. So always keep in mind that there is a range of astrophysical processes that are described as sub-resolution effective models. As for galaxy clusters, which I talked about as an example of the application of simulations, these simulations can help to calibrate the cosmological applications of clusters, but again the recommendation is to use them with a grain of salt, for the reasons that I just told you.
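To give a concrete flavour of what such a phenomenological, sub-resolution star-formation recipe can look like in practice, here is a minimal sketch; the thresholds, the star-formation timescale and the stochastic conversion rule are all illustrative assumptions of mine, not the recipe of any particular code.

```python
import numpy as np

def stochastic_star_formation(rho, temp, dt, rng,
                              rho_thresh=0.1, temp_thresh=1.0e4, t_sf=2.0e9):
    """Flag which gas particles turn into stars in this timestep (schematic).

    rho, temp : arrays of gas densities [cm^-3] and temperatures [K]
    dt, t_sf  : timestep and star-formation timescale [yr]
    All threshold values are illustrative placeholders, not calibrated numbers.
    """
    # Star formation is allowed only where the gas is dense and cold enough.
    eligible = (rho > rho_thresh) & (temp < temp_thresh)

    # Convert eligible gas particles stochastically, so that on average the
    # star-formation rate behaves like rho_gas / t_sf (a Schmidt-like law).
    p_form = 1.0 - np.exp(-dt / t_sf)
    return eligible & (rng.uniform(size=rho.shape) < p_form)

# Toy usage:
rng = np.random.default_rng(42)
rho = np.array([0.01, 0.5, 2.0]); temp = np.array([1e6, 5e3, 8e3])
new_stars = stochastic_star_formation(rho, temp, dt=1.0e7, rng=rng)
```

Real implementations add much more on top of this (multiphase gas models, feedback energy injection, chemical enrichment), but the basic flavour is the one above: thresholds plus a stochastic conversion tuned to reproduce simple observations.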
We just want to be sure what is the physical effect and what is the numerical effect. So, just one final, sort of provocative question: should we really aim to include in simulations all the physical processes we can think of? Is this really the business? Or, said in other words, as some of my colleagues say: shall we include so many astrophysical processes that we are in a position to define a best-fitting model with simulations? Well, my personal opinion is that if we have a simulation that includes all the astrophysics, then it would be as difficult to interpret as the real data, so I do not know how much we are gaining, except that along with the astrophysical processes we are also including a lot of numerical artifacts. So I think we have to be careful, and especially refrain from thinking of numerical simulations as producing a best-fitting model. In these numerical simulations we have a number of parameters: there are parameters related to the hydrodynamics, but there is an even larger set of parameters related to the astrophysics, the star formation rate, the feedback efficiency, how much energy you put in kinetic feedback and how much in thermal feedback, how fast the black hole is allowed to accrete, how you distribute the mechanical energy released by the black hole in AGN feedback. There is a huge number of parameters, and there is no hope that we can do something like a best fit of these parameters against observations; this is not what we want to do. We want to understand whether we are getting the global picture of galaxy formation. I do not think we have to best-fit the data when we use a simulation: there is not a single Lagrangian that we are simulating, there is no such thing as a direct numerical simulation, a direct numerical solution of galaxy formation, that we can use as a benchmark for interpreting observations. So try to be conservative; maybe I am too conservative. OK, thank you.