Welcome back. The posters will also stay up for the lunch break, so you can continue the discussion during lunch. And now we have the second lecture by Gus Evrard about computational cosmology. Thank you, Pella. Welcome back, everyone. Today we'll open the box and admit that the universe is made up of more than just collisionless dark matter. So what we'll talk about, here's a rough outline of where we'll go. We'll first motivate adding a second fluid to model the baryonic behavior explicitly, by showing how N-body techniques have been used, and essentially misused, to understand structure on small scales associated with galaxies. There are limits to what you can do with an N-body simulation, and we'll go there. Then we'll talk about what you need to do galaxy formation properly, and I should say here that my emphasis is going to be more or less on galaxy formation, although there are a lot of roots and tendrils that come out of, say, forming the Milky Way, associated with the Lyman-alpha forest at high redshift, which I know some of you are actually working on, or with the first generation of stars that might form, or with 21-centimeter observations at redshifts of 10 or 20 to come. So there's quite a bit of baryonic physics on scales that I won't focus on, but we have one lecture, so there's only so much we can do, and galaxies are important. We live in one. So understanding how we got here in the Milky Way is where I'm going to be focused today, and then, as I mentioned, tomorrow we'll talk about collections of galaxies and clusters and move up to the high end of the mass scale. As a result, I'll be focusing more or less on the lower-redshift universe, and I won't be talking about the first generation of stars that might form at, say, redshift of 30 or 50, although that's an extremely interesting topic.
The good news, of course, is that by focusing on low redshift, we have lots and lots of observational data. So one punchline of all of this is that the simulation community is struggling to match all of that complexity. We have a ridiculous amount of observational data on galaxies, and there's no way we're going to match all of it with current technologies. But I'll show you that things are actually getting quite reasonable in terms of their fidelity for low-order properties, making disc galaxies that look like the Milky Way, which was still a challenge as recently as four or five years ago. That's now achieved with a fairly high degree of fidelity, as I'll show you in some movies from simulations by Phil Hopkins and his group. Okay, so then I'll go through some of the methods that are in use, talk about some results with cooling and star formation focused on forming a Milky Way-like galaxy, take a brief look inside the Enzo code just to give you a sense of the gears and wheels inside such a code, and then finish by looking at some recent important papers that bring together multiple codes with the same physics and same initial conditions and compare results. That's a verification stage that these codes need to undergo, and that work is important and ongoing. And then we'll try to summarize all of this and set up for tomorrow. Okay, so the goal, with the focus today, is to make things that look like this in your computer. These are Hubble Space Telescope Treasury images of nearby galaxies, and you can see a lot of interesting phenomenology and features.
This guy, for example, is presumably the result of a head-on merger, a merger happening along the line of sight, where the stars form a kind of shockwave, gravitationally induced by a high-velocity encounter with a satellite that merged essentially in its core. That's really cool. So that's a transient phenomenon. Other transient phenomena are things like this. And then, of course, you have dust lanes and beautiful grand-design spirals, and then you also have the rest of the Hubble sequence, though there aren't that many of those shown here, because balls of stars aren't that exciting for people to look at. Well, here's the Sombrero galaxy. That's got a big bulge to it, but it's not quite an elliptical galaxy. But, you know, there are elliptical galaxies, there are spiral galaxies, there are irregulars, there's the whole Hubble sequence to try to figure out. So let's try to motivate modeling two fluids explicitly by taking a look at some relatively recent work pushing N-body simulations essentially to the limit. I mentioned last time that there are subhalos within halos, and that halos form through hierarchical merging. Well, when you store results from your simulation, store snapshots, you can go in and do halo finding at each snapshot. By having identifications of every particle, you can essentially do Lagrangian tracing: for a structure present at redshift 10, where is it at redshift 2? Where are all of those particles? We can trace that. And so you can trace the evolution of the merger history of a particular halo and generate diagrams like this. These are diagrams where the location along the x-axis is not meaningful; I think these are sometimes called dendrograms, but it's basically a hierarchical tree, where time runs upward on the graph. And for example, this is the largest halo to form at low redshift.
All of these halos are in a FoF, a friends-of-friends group, but they're identified using spherical overdensity as a secondary step. So the FoF group contains this halo and a bunch of other halos over here at redshift 0. Now, how did this halo come to be? Well, it has a main progenitor track that you follow backward in time along the green route here, but as you move the clock forward in time, there are mergers, right? A little halo forms here, merges into a larger halo here, which hangs around until it finally merges with the main progenitor here and ends up as part of the main halo at redshift 0. Okay, so we can analyze the merger histories, generate these so-called merger trees, and use them to assign starlight to the substructures in the halo at redshift 0. That's the game. People have done this for a while. This was a relatively recent work by Gabriella De Lucia in 2006, but back in... Sorry, that's on the next slide; what I'm going to talk about is from 1999. So this has led to the notion of what's called subhalo abundance matching, where you look at that merger tree and you identify all progenitors. You tag each progenitor with its maximum circular velocity, which is just the square root of GM(<r)/r, maximized over the profile as a function of radius. I didn't talk about that much yesterday, but you have a density profile. You can integrate that density profile to get an enclosed mass, divide by radius, take the square root, and you get a circular velocity curve. The peak of that circular velocity curve is a tag that you can put on each one of these subhalos. And that peak circular velocity, essentially through empirical observations, relates to the luminosity of systems. We know from low-redshift observations of dwarf and normal galaxies that there are scaling relations between circular velocity and luminosity, and you're taking advantage of that empirical information to assign a luminosity to a subhalo given its peak circular velocity.
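To make that bookkeeping concrete, here is a toy sketch of the two steps: computing the peak circular velocity from an enclosed-mass profile, and rank-ordering subhalos against a luminosity sample. This is an illustrative toy with my own function names and unit choices, not the actual pipeline used in the papers discussed here.

```python
import numpy as np

def v_max(r, m_enclosed, G=4.30091e-6):
    """Peak circular velocity of a halo.

    r          : radii in kpc (1-D array)
    m_enclosed : enclosed mass M(<r) in Msun at each radius
    G          : gravitational constant in kpc (km/s)^2 / Msun
    Returns Vmax in km/s, the maximum of sqrt(G M(<r) / r).
    """
    v_circ = np.sqrt(G * m_enclosed / r)
    return v_circ.max()

def abundance_match(vmax_halos, luminosities):
    """Rank-order matching: brightest galaxy -> highest-Vmax subhalo.

    vmax_halos   : Vmax for each (sub)halo
    luminosities : a luminosity sample of the same length
    Returns the luminosity assigned to each halo, preserving input order.
    """
    order = np.argsort(vmax_halos)[::-1]      # halo indices, fastest first
    lum_sorted = np.sort(luminosities)[::-1]  # luminosities, brightest first
    assigned = np.empty_like(lum_sorted)
    assigned[order] = lum_sorted
    return assigned
```

In practice the matching is done on abundances (number densities) with scatter, but the rank-ordering above is the core of the idea.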
When you do that... So this technique has been around for about a decade, and here's an example of modeling galaxies in the Sloan Digital Sky Survey by Charlie Conroy and collaborators from 2006. All they did was take an N-body simulation, do this kind of tagging, and assign luminosities to the subhalos at low redshift, appropriate for Sloan. The Sloan Digital Sky Survey contains galaxies out to about redshift 0.1. So they would take a snapshot of a simulation at redshift 0.1, do this assignment, and then measure properties. So they can measure things like the angular correlation function, the excess probability of finding a galaxy at an angle theta away from a given galaxy. And you can do that for galaxies defined above given luminosity limits. What's shown here are luminosity limits from the bright end, visual magnitudes brighter than minus 21, down to the faint end, minus 18. The data from the simulations are the solid lines; the actual Sloan data are the circles. So these are not fits. The prediction is the solid line from the simulations, and the actual measurements in the data are the points. And you can see that it's like magic. It just works. It works perfectly to match the correlation function. The thin dotted line here is the dark matter correlation function, which is the same in all panels. What you can see is that the brightest galaxies are more strongly correlated, the same way that high peaks in a Gaussian random density field are more strongly correlated. So that makes sense. On the other hand, low-amplitude peaks are less strongly correlated in general. So that method seemed to work extremely well. And it makes a prediction.
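As an aside on how such a measurement works, here is a toy angular-correlation estimator. This uses the simple "natural" estimator w(theta) = DD/RR - 1 with brute-force pair counts; real survey analyses use more robust estimators such as Landy-Szalay, survey masks, and tree-based pair counting. Function names are my own.

```python
import numpy as np

def w_theta(ra_d, dec_d, ra_r, dec_r, bins):
    """Natural estimator w(theta) = DD/RR - 1 (toy version).

    ra_*, dec_* : coordinates in radians for the data (d) and a random
                  catalogue (r) covering the same footprint
    bins        : angular bin edges in radians
    Brute-force O(N^2) pair counts; fine for a sketch, not for SDSS.
    """
    def pair_angles(ra, dec):
        # angular separation via the spherical law of cosines
        cos_t = (np.sin(dec)[:, None] * np.sin(dec)[None, :]
                 + np.cos(dec)[:, None] * np.cos(dec)[None, :]
                 * np.cos(ra[:, None] - ra[None, :]))
        iu = np.triu_indices(len(ra), k=1)        # unique pairs only
        return np.arccos(np.clip(cos_t[iu], -1.0, 1.0))

    dd, _ = np.histogram(pair_angles(ra_d, dec_d), bins=bins)
    rr, _ = np.histogram(pair_angles(ra_r, dec_r), bins=bins)
    n_d, n_r = len(ra_d), len(ra_r)
    # normalise pair counts before forming the ratio
    dd_norm = dd / (n_d * (n_d - 1) / 2)
    rr_norm = rr / (n_r * (n_r - 1) / 2)
    return dd_norm / np.maximum(rr_norm, 1e-30) - 1.0
```

A quick sanity check: feeding the same catalogue in as both data and randoms gives w(theta) = 0 in every populated bin, i.e. no excess clustering relative to itself.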
Here is essentially the halo occupation distribution, which I'll briefly mention as we go on, which is just the mean number of galaxies above these luminosity limits, that's the different lines, as a function of halo mass at a fixed redshift of 0.1. What you can see is that at the bright end, you need a pretty high-mass halo to form a single galaxy. And then as you go up in mass, you have more and more of them. These are groups of galaxies now, and then clusters would be out here at 10 to the 14, 10 to the 15. So here is the cluster mass scale. As you go down in luminosity, you find that you have of order 100 galaxies in these halos at low redshift. Yes, question? The question is why use an angular correlation versus a three-dimensional spatial correlation. Because the angular correlation function you can apply to photometric samples where you don't know the redshift. You need to know the redshift in order to do the 3D version. And the 3D version actually has been done now for the brighter spectroscopic sample. This approach was from early in the Sloan survey, 2005, 2006, so it was easier to do back then. And also you can push it to fainter magnitudes. You can go down to minus 18, whereas the Sloan spectroscopic limit, if I remember correctly, is something like minus 20, so it wouldn't take you to minus 18. You can only do the bright part of the correlation function in three dimensions. Thank you. So this is essentially now a prediction coming out of the model, which says that there's a minimum mass scale you need to reach in order to form a galaxy of a certain brightness. And then as you go to higher and higher masses, you just accrete more and more of these galaxies of that luminosity. So that approach was applied to understand the correlation function of brighter galaxies, but it was pushed maybe a little too hard to understand the structure of satellite galaxies around the Milky Way.
So these satellite galaxies have been known. I mentioned the LMC and SMC, the Large and Small Magellanic Clouds, but there are other satellite galaxies known around the Milky Way. And in 1999, there weren't that many known. So what I'm showing you here is the cumulative number as a function of essentially luminosity, but again phrased in terms of circular velocity, for these satellite galaxies around the Milky Way and also Andromeda, because Andromeda is our nearest neighbor, so you can also see these systems there. And these are very, very faint systems. These might contain something on the order of 10 to the 7 stars. They're actually very, very small. The observations are the points. The predictions from simulations of the day were here. And as you can see, as we go down to very faint systems, circular velocities around 30 kilometers per second or lower, the expectations from the simulations using this subhalo abundance matching approach were that we should have seen many more, like a factor of five more, satellite galaxies around the Milky Way and Andromeda than are observed. So that was a problem. And the question was, where are all the missing satellite galaxies? That became a crisis for Lambda CDM, if you will, and there's a whole literature that has been published since that time on solving this crisis. There was a related problem called the too-big-to-fail problem. Michael Boylan-Kolchin published a paper in 2011, shown here as a graph of the maximum circular velocity versus radius, the size of the system. The Milky Way systems follow this gray band here. And the point was that at a given size, the systems in the simulations were too dense. They have too high a circular velocity at a given size compared to observations. So these were issues that had been around for a while and troubled people, and there was some concern that maybe Lambda CDM on very small scales might be wrong.
But as we'll see at the end of the lecture, it turns out that baryon physics helps explain these problems. We were pushing the N-body simulation, pushing the subhalo abundance matching technique in particular, and relying on results from a single-fluid simulation, when in fact you needed multi-fluid physics in order to get the right answer. Okay, so let's talk about galaxy formation in a cosmological context now. Here's a graph that I've been showing for a while. It's like a flow chart for galaxy formation, the essential ingredients. You start with quantum noise in the early universe. Inflation gives you wiggles. Those wiggles are small amplitude, and they set up everything. So I love to say everything in the universe is amplified noise. It explains a lot. Donald Trump explains a lot. It's all amplified quantum noise. And then, of course, we amplify by gravity, and that's captured well by N-body simulations. So you go from 10 to the minus 5 to the overdensities in the core of our galaxy. Obviously, in this room we're something like 10 to the 27 times more dense than the universe on average. We achieve very high density. And the way we got to that level of density contrast is that the baryons have this ability to separate themselves from the dark matter through radiative cooling. So you fall into this box here where baryons get separated from the dark matter. The dark matter is collisionless. The standard WIMP model doesn't interact with radiation, so it thermalizes through random motions, as we saw yesterday, but can't cool. That energy is trapped in it, and your system is, in principle, equilibrated and will hang around for the rest of the age of the universe. So you've got this halo, but inside that halo, the baryons can shock heat. They can heat up. But the medium typically is optically thin, and baryons do interact with the radiation field.
So every time an electron scatters off a proton, a little photon gets generated in bremsstrahlung, and it can escape the system. And that escaping photon carries away energy. It's a little bit of energy, yeah, but there are a lot of electrons and a lot of time, and you just wait, and the gas can cool. That means it will lose its pressure support and shrink down into the bottom of the potential well. And here I have what used to be a black box representing star formation and black hole formation and feedback. As you'll see today, it's becoming grayer. It's not quite white, not quite completely transparent and understood, but it used to be very opaque, and at least we're getting better at handling it. And then, this is a little faint, but at the end you get a little galaxy sitting at the bottom of the dark matter halo. All right, now this picture has been in place for almost as long as I've been on the planet, and certainly longer than most of you have been on the planet. It all goes back to a paper which, if you read it today, makes you scratch your head a little bit. Some of the graphs in it are not all that illuminating. But the abstract is perfect. The abstract is exactly what the model is today: essentially, the entire luminous content of galaxies results from the cooling and fragmentation of residual gas within the transient potential wells provided by the dark matter. Boom. That's it. That's what it is. And these guys put that together back in the day, and they're still with us, still working on the problem. All right, now, that's a nice theoretical framework, but what does it mean to get real? What does it mean to actually make those HST pictures synthetically, in a computer? For real.
Well, we have a laundry list of the physics that we need. What does that laundry list look like? There's a lot of stuff on here. There are entire subfields of astrophysics devoted to a single bullet point here, hundreds of people around the planet working on it. So this is a demonstration of a complicated problem. But there are a lot of complicated problems on the planet that we are working on, including global warming and including feeding the world's population. And we just have to have smart people like you working on it, and we'll solve it. Take it one step at a time. So, one step at a time: we start with gravity, and we can do that with the N-body simulation methods that I talked about yesterday. Now we put in gas dynamics. We'll explore that a little bit today. We know that there are magnetic fields, but let's ignore them to begin with and just think about hydrodynamic methods. We have them. So let's embed hydrodynamics inside N-body models and bring in a second fluid to represent the baryons. Now, this requires a wide dynamic range, and in particular, the baryons are going to end up being stirred up, and there's surely a cascade of turbulence that can happen in nonlinear regions like halos. Sometimes the cores of halos will be cooler than the outer parts or vice versa, and thermal conduction can move heat from hot zones to cool zones. Now if you have a magnetic field, that's going to steer the electrons wherever the magnetic field wants to point them. So tangled magnetic fields can shut off thermal conduction, and we don't know exactly how much thermal conduction there is on a galactic scale yet. That's an open problem in the field, but you can put it in, parameterize it, and try to figure it out as you go along. Then there are interactions of baryons with the radiation field. As I mentioned, optically thin gas can radiate, which allows baryons to lose pressure support and cool.
On the other hand, if you have a strong radiation source and you're a baryon nearby, you can heat up. So sometimes you cool, sometimes you heat. Now, if you have plasma that can cool, maybe it will form a giant molecular cloud and actually start forming a group of stars, star clusters, or star pairs, or whatever. So we need to think about how we can model that given realistic resolution in a cosmological simulation, about a kiloparsec. I'll show you some simulations today that are really pushing to much higher resolution, something on the order of 10 parsecs rather than 1 kiloparsec. And that's very helpful, right, because molecular clouds are about parsec scale, and you're almost getting to the point where you can see individual star clusters form in a cosmological setting, which is awesome. At any rate, even in that environment, you need some rules for converting gas into stars. And if you're forming black holes, you need to seed them somehow and do all that stuff. We'll talk more about that tomorrow. When you form a star cluster, you need to understand the population of stars that forms. What's their initial mass function, the frequency of stars as a function of stellar mass? Then you can go talk to the people who do stellar evolution and understand how that population of stars will evolve over time. What will the optical colors expected from that population be if we observe this system? What are the supernova rates from this system? What are the yields of metals and cosmic rays that come out of the supernovae? There are lots of things that you in principle need to put in from that, including the fact that at the high-mass end of this initial mass function, you will have supernovae that go off. If you're forming black holes, compact systems, you can drive jets from accretion discs surrounding those black holes. How do we do that? Metal production, turbulence, blah, blah, blah.
Finally, if you're really going to small scales, cooling below about 10,000 degrees requires molecules. Getting down to the scale at which molecular clouds operate, you're talking about, say, hundreds of degrees or even tens of degrees Kelvin, and cooling down to that scale requires molecular chemistry. That will affect radiative opacities and all this kind of stuff. There are a lot of different ingredients here. Different populations within the community will take different approaches to model smaller-scale or larger-scale systems as needed. So there's one laundry list. I won't talk about everything here, but I just wanted to get it up there. It's also important to remember that it's not necessarily complete. Let's take a look at the equations we're trying to solve. I wish I could point you to a more recent review of hydro simulation techniques in cosmology, but there really hasn't been one. But Bertschinger in 1998 put together a review where he talked about this, and here are the basic equations from that paper. One has Euler's equation and the equation of continuity. So you have to conserve mass. Your acceleration involves both a Hubble term and gravity, but now we also have pressure gradients. And if we're going to think in an Eulerian sense, we have advection terms as well. Right? So those are the basic equations, and then we have to think about the energy or entropy of the gas. So u is an internal energy, s is an entropy, and basically in terms of internal energy, there's going to be PdV work that looks like this. And then there's going to be the possibility for both heating and cooling to occur. Heating is going to be either from shocks or radiative; radiative cooling is how you lose internal energy. In terms of capturing shocks, you need to capture shocks to heat the gas in the first place. And at a planar shock, you have these conditions for conserving mass, momentum, and energy.
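Reconstructing the equations on the slide: this is my paraphrase of the standard comoving-coordinate fluid equations, with scale factor a, peculiar velocity v, heating rate Gamma, and cooling rate Lambda, followed by the shock jump condition referenced just above. The notation here loosely follows Bertschinger's review but is not copied from it.

```latex
% continuity (mass conservation, comoving coordinates)
\frac{\partial\rho}{\partial t} + \frac{1}{a}\nabla\cdot(\rho\mathbf{v}) = 0

% Euler equation: advection, Hubble drag, pressure gradient, gravity
\frac{\partial\mathbf{v}}{\partial t}
  + \frac{1}{a}(\mathbf{v}\cdot\nabla)\mathbf{v}
  + \frac{\dot a}{a}\,\mathbf{v}
  = -\frac{1}{a\rho}\nabla p - \frac{1}{a}\nabla\phi

% internal energy u: PdV work, plus heating (shocks, radiation) minus radiative cooling
\frac{\mathrm{d}u}{\mathrm{d}t}
  = -\frac{p}{\rho}\,\frac{1}{a}\nabla\cdot\mathbf{v} + \Gamma - \Lambda

% planar shock: Rankine--Hugoniot density jump for a polytropic gas with
% upstream Mach number M_1; the strong-shock (cold-infall) limit is finite
\frac{\rho_2}{\rho_1} = \frac{(\gamma+1)M_1^2}{(\gamma-1)M_1^2 + 2}
  \;\xrightarrow{\,M_1\to\infty\,}\; \frac{\gamma+1}{\gamma-1} = 4
  \quad \text{for } \gamma = \tfrac{5}{3}
```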
The jump relations for a polytropic gas look like this. In the limit of cold infall, when the upstream material is relatively cold, there's a limit on the boost in density that occurs at the location of the shock, (gamma plus one) over (gamma minus one), which is equal to four for a monatomic gas. So you do go up in density, and you can go up arbitrarily high in temperature, but the density increase across the shock is limited to about a factor of four. Now, there are various approaches numerically to handle that, and you can go to the Wikipedia page to understand the various techniques. These techniques have been in place since the fifties, probably associated with modeling the flight of supersonic aircraft, or modeling the behavior of Sputnik coming back into the Earth's atmosphere, because you're going to generate shocks, moving supersonically at the top of the atmosphere. One way is to use finite-difference schemes that try to capture this behavior essentially directly in your numerical method. Another way, if you don't take that approach, is to introduce an artificial viscosity, which just increases the pressure locally in regions where the flow is convergent. So you say: I'm going to raise the pressure in order to prevent interpenetration of two streams of gas, and that PdV work will heat the gas as well as keep the two streams of gas from interpenetrating. This is the approach that's typically used in SPH, which I'll talk about in a minute, whereas the other approaches are more conventional in Eulerian techniques. I just said Eulerian, Lagrangian; what do I mean? Well, what I mean is the following.
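In symbols, the distinction being set up here is the usual material-derivative identity: the total (Lagrangian) rate of change of a quantity q along the flow equals the partial (Eulerian) rate of change at a fixed point plus an advection term.

```latex
% Lagrangian (follow the fluid element) vs. Eulerian (watch from a fixed cell):
\frac{\mathrm{D}q}{\mathrm{D}t}
  \equiv \frac{\partial q}{\partial t} + \mathbf{v}\cdot\nabla q
```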
If you use the method of characteristics, as it's called, in the fluid equations, and follow the streamlines of a flow, imagine looking at a stream: you throw a leaf in it, and the leaf flows down the stream. The Lagrangian approach is to follow the leaf, become the leaf, and write total derivatives of quantities at the position of the leaf. As opposed to: I sit by the stream, I put a cell down, and I watch the leaf go by. That's this part, because the time rate of change of any quantity can involve both the production term over here on the right-hand side of the equation, but also things can just leave my box because they're advecting through it. So this takes care of the advection naturally. It's a total derivative versus a partial derivative. So there are two extremes. The Eulerian codes take this approach, Lagrangian methods take that one, and there are newer techniques, called moving mesh, that try to have the best of both worlds. I'll say a little bit about them in the next couple of slides. In fact, here's a very busy slide that summarizes the character of these different approaches: what their advantages and disadvantages are, with some examples of codes out there that take each approach. The first historically, and I'll go into this a little because yours truly was the guy who put this together, was to use Lagrangian particle techniques. Then came fixed Eulerian meshes, that is to say, take a cubic region, put down a finite mesh of N cells on a side, and that's what you have; solve your hydrodynamics in those cells. The Eulerian adaptive mesh uses that approach but in a hierarchical fashion. So you start with what's called a root grid of a given size, but within a root grid cell you can insert new grid cells at higher resolution and keep going hierarchically as you need.
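The hierarchical refinement idea can be caricatured in a few lines. The criterion below is my own simplification for illustration, not Enzo's or any real code's actual rule: one extra level of refinement per factor-of-8 in overdensity, which mimics roughly Lagrangian (fixed mass per cell) refinement when each level halves the cell size in 3-D.

```python
import numpy as np

def refinement_level(overdensity, factor=8.0, max_level=6):
    """Toy AMR criterion: one extra refinement level per factor-of-`factor`
    in overdensity, capped at max_level.

    With 2x finer cells per level in 3-D, each level reduces cell volume
    by 8, so factor=8 keeps roughly constant mass per cell. A real AMR
    code evaluates criteria like this cell by cell as the run proceeds.
    """
    if overdensity <= 1.0:
        return 0                          # underdense: stay on the root grid
    level = int(np.floor(np.log(overdensity) / np.log(factor)))
    return min(level, max_level)
```

So a mildly overdense filament stays near the root grid, while a collapsed halo core hits the level cap set by the user.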
And then finally, these newer approaches might use something like a Voronoi tessellation to define cells that advect with the flow. So you might use particles as tracers of the flow, and as the particles move around, you define local volumes surrounding each particle and write your hydrodynamic equations in those volumes. So there are various advantages. This is fast and cheap. This is more expensive. These, I think, may be best of breed, but that remains to be seen. The disadvantage here is that for Lagrangian particle techniques, the grid, as it were, is the particles, and they can deform arbitrarily. So you can't easily do analytic estimates of the error. Normally what you're thinking about in doing difference equations is: I'm going to take a partial differential equation and approximate it with a difference equation of some order. I use three cells, or five cells, or seven cells. That gives me some order, and then my error is the next-order term that I'm missing, and you can estimate that error when you do the calculation. It's harder to estimate that error analytically in SPH. So that's one of the advantages of AMR, that you have that ability. These moving-mesh codes also seem to have that ability a little more than SPH. You also have to capture shocks using artificial viscosity in particle techniques, and that introduces unwanted features that I'll show you some examples of in a few slides. And there are various examples. Gadget, of course, is a workhorse; we'll see a lot of Gadget today. There's also a competing code called Gasoline out there. There are various AMR codes: ART, Enzo, and we'll dive a little bit into Enzo today. RAMSES is a European one; Romain Teyssier in France has built a nice AMR code base in RAMSES.
Flash is a more general-purpose code that really grew out of, more like, you know, the bomb-making people, if you will, but it can also be used for cosmological applications. And then there are the newer codes AREPO and Gizmo; we'll see a little bit of Gizmo today, and tomorrow we'll see a little from AREPO. Okay, let me take you back in time to when I was approximately your age. I was a postdoc. I did my first postdoc, a very brief stint, at Princeton University working with Jerry Ostriker. Jerry wanted to do an AMR code. I thought that AMR was really complicated, and it is. So I decided, no, I'm going to do SPH. I left. I went to Cambridge and started working with George Efstathiou and other people. So, working with George Efstathiou, I took P3M, that code I described yesterday, the particle-particle/particle-mesh code, and I just embedded a second set of particles to represent the baryons and had them follow the hydrodynamic equations that I just wrote down for you. That I did in 1988. Just to give you a sense of what 1988 was like: it was big hair, baggy jeans. Here's the NeXT. Who remembers NeXT? The NeXT computer. Anybody ever operate a NeXT? Yeah. So the legacy is such that Steve Jobs went off and formed NeXT during his hiatus, his years in the desert. There were a lot of great ideas there that eventually made their way into Apple OS X. One of them was a Unix operating system, which is what OS X is. Before that, Apple had its own operating system; I forget what it was called. It was just kind of like Windows. But now OS X is a real operating system. How many have seen Beetlejuice? Yeah. And the Ford Taurus was like the new thing. Wow. Streamlined. Probably modeled with hydrodynamics, and I'm not kidding. So anyway, as I mentioned yesterday, this whole cyberinfrastructure thing: manufacturing has been completely transformed by CFD, by computational CAD and CFD approaches.
You might think people in the auto industry just make clay models of everything. That still happens, but only for aesthetic purposes. Now, with virtual reality, you can drive your car around Chicago in virtual reality at far less cost than actually building a prototype and taking it out to Chicago. Okay. From that paper, here are the equations. This gives you a sense of what you do in these Lagrangian methods. You have a collection of particles. You need a density, because various things depend on density. What do you do? Well, you just use a local kernel density estimate. Every gas particle has a mass Mg. This kernel is like a Gaussian function. You just sum up a set of Gaussians surrounding each particle; weight every particle with a Gaussian. It has various good properties. In order to achieve high resolution, this H is the scale of the Gaussian. Let's let that scale vary depending on conditions. Over time, as you step forward and the densities start to increase, I'm going to vary H in a way that allows me to capture higher density. An increase in density will decrease H and allow me to resolve finer and finer features as the calculation goes on. I have to stop at some point; I can't allow H to go to 0. But again, we have a gravitational softening scale which sets the minimum scale of the calculation, so that's fine; we stop around there. Then we need a pressure gradient term. It helps to write that pressure gradient term this way and use this trick, integrating by parts, so that you transform the gradient of the quantity into an integral of the quantity with the gradient of the kernel. Cool. Basically, the pressure gradient forces look like: take P over rho squared, symmetrize it, and then weight each of my particle pairs locally with the gradient of my kernel.
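Putting the density estimate and the symmetrized pressure force together, here is a minimal SPH sketch. It assumes a fixed smoothing length and a Gaussian kernel for simplicity (real codes vary h per particle, as just described, and usually use compact-support kernels); function names and the equation-of-state callable are my own.

```python
import numpy as np

def gaussian_kernel(r, h):
    """3-D Gaussian smoothing kernel W(r, h), normalised to unit integral."""
    return np.exp(-(r / h) ** 2) / (np.pi ** 1.5 * h ** 3)

def grad_kernel(dx, r, h):
    """Gradient of W with respect to the separation dx (shape N x 3)."""
    return -2.0 * dx / h ** 2 * gaussian_kernel(r, h)[:, None]

def sph_density_and_accel(pos, mass, pressure_of, h):
    """Kernel density estimate plus symmetrized pressure acceleration.

    pos         : particle positions, shape (N, 3)
    mass        : particle masses, shape (N,)
    pressure_of : callable rho -> P (e.g. isothermal P = c_s^2 * rho)
    h           : fixed smoothing length (a simplification; real codes adapt h)
    """
    n = len(pos)
    rho = np.zeros(n)
    for i in range(n):
        dx = pos[i] - pos
        r = np.linalg.norm(dx, axis=1)
        rho[i] = np.sum(mass * gaussian_kernel(r, h))  # includes self-term
    P = pressure_of(rho)
    acc = np.zeros_like(pos)
    for i in range(n):
        dx = pos[i] - pos
        r = np.linalg.norm(dx, axis=1)
        gw = grad_kernel(dx, r, h)                     # zero for the self-pair
        # symmetrized (P_i/rho_i^2 + P_j/rho_j^2) form: pairwise forces are
        # equal and opposite, so total momentum is conserved
        coef = mass * (P[i] / rho[i] ** 2 + P / rho ** 2)
        acc[i] = -np.sum(coef[:, None] * gw, axis=0)
    return rho, acc
```

The payoff of the symmetrization is easy to check numerically: summing mass times acceleration over all particles gives zero net momentum change to machine precision.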
And the symmetry is important here, because symmetrizing the pair interactions conserves global momentum, and you do want to do that. Then there's an energy equation, where the PdV work is handled this way: we've got the kernel gradient dotted with the local velocity field, so I can tell whether the flow is expanding or converging. I'm going to drop the thermal energy if the flow is expanding, and I'm going to raise the thermal energy if the flow is converging. Okay. Some simple tests. And I have to say — you know, you're young, right? As you go on in your career, there's always this ideal that you want to have something named after you. You want to have a particle. If you have a particle, that's the coolest, right? I have a test. This is part of what's called the Evrard test. It's the collapse of an initial density profile going as 1/r, with static initial conditions. So let's just let it collapse on itself. This is just part of the test that I did in 1988. And here I'm showing you the distribution of gas particles at some later time in the calculation. It's evolved; it has started to collapse. You can see that it has started to collapse because here are the dark matter particles: the ones that started life on the left-hand side have already streamed through the core and come out the right side, and vice versa. You can also see the initial grid setup. It was set up on a grid, and therefore you've got this artificial coherence in here, but still, that is what it is, back in the day. But the gas, you'll see — the dark matter particles are streaming through at very high velocity; the gas particles are not. They have stopped interpenetrating, as they should, with some pressure here and here.
But that's what the artificial viscosity is doing: preventing the gas from free-streaming through the center of your spherical perturbation, whereas the dark matter happily does. Let me jump forward to 1994 to show you some science calculations that came out. Here is a paper that I wrote with Frank Summers and Marc Davis while a postdoc at Berkeley, using a Cray Y-MP back in the day. It's a 2 × 64³ particle ensemble — dark matter plus gas — modeling a 16 megaparsec region, evolved to a redshift of one. The gas particle mass here is about 10^8 solar masses, which is kind of not crazy; it's not until recently that the particle mass for the gas has been pushed to much lower values. And the softening was about 10 kiloparsecs. Here I'm showing you the full volume, 16 megaparsecs; we're going to zoom in on this later. At a redshift of three you see the filaments. Here's the dark matter, here's the baryons. On large scales they trace each other. On small scales, here's the dark matter in halos only, and then here's the baryons in what we called galaxy-like objects, the really high-density cold stuff. We allowed radiative cooling to happen in this calculation, and the baryons sank down to the bottom. At a redshift of three, zooming in on that inner box, we see, again, that the galaxies are much more compact compared to the dark matter halos. And then going forward to a redshift of one, the end of the calculation, we essentially have a group-scale halo here with multiple galaxies in it, and another smaller group over here with multiple galaxies in it. This was actually the first paper to try to measure what's called the halo occupation distribution: the average number of galaxies in a halo of mass M. You can see here is the baryonic mass of these galaxy-like objects as a function of halo mass.
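The halo occupation measurement just described — counting the galaxies hosted by each halo and averaging that count in bins of halo mass — is straightforward to sketch. This is an illustrative helper, not the pipeline from the 1994 paper; the function name, inputs, and binning choices are mine.

```python
import numpy as np

def halo_occupation(halo_mass, galaxy_host, mass_bins):
    """Mean number of galaxies per halo as a function of halo mass.

    halo_mass   : array of halo masses, one per halo (solar masses)
    galaxy_host : array of host-halo indices, one per galaxy
    mass_bins   : halo-mass bin edges
    Returns the mean occupation in each mass bin.
    """
    n_halos = len(halo_mass)
    # count galaxies assigned to each halo
    n_gal = np.bincount(galaxy_host, minlength=n_halos)
    # assign each halo to a mass bin, then average occupation per bin
    which_bin = np.digitize(halo_mass, mass_bins)
    mean_n = np.zeros(len(mass_bins) - 1)
    for b in range(1, len(mass_bins)):
        in_bin = which_bin == b
        if in_bin.any():
            mean_n[b - 1] = n_gal[in_bin].mean()
    return mean_n
```

With, say, two 10^12 solar-mass halos hosting one galaxy each and one 10^13 halo hosting three, the low-mass bin averages to 1 and the high-mass bin to 3 — the qualitative behavior described on the slide.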
Mostly, at low mass, you have one galaxy per halo, and then at about a few times 10^12 solar masses you start breaking out into multiple galaxies per halo. And here's that number as a function of halo mass. And lastly — this is the end of the ancient history now — what was surprising was something we didn't anticipate. It came up in post-processing while I was visualizing the results of the simulation: I noticed that there were occasionally these thin disks. Here's a larger structure, but here's a little galaxy; we blow it up, and blow it up again, and here's the velocity field. You formed a disk. So it was the first simulation that naturally formed a disk galaxy in a cosmological environment. All right, let's move forward to more or less the present day, and I'm going to tell you a little bit about more modern recipes for cooling and star formation. Now, as I said, only very recently have we been able to perform calculations in a cosmological setting with resolutions that really push down to where the giant molecular clouds are forming stars. Most of the time you're dealing with spatial resolutions that are larger than this, and sometimes much larger than this. So you have to have what are called subgrid recipes, rules for deciding how you convert plasma into stars and then what happens when the stars evolve and push back. These rules are heuristic; there are no exact solutions for anything. They're mainly motivated by being quote-unquote physically reasonable, and they're empirically tuned, meaning you have some free parameters and you try to match something in the observations to tune your parameters. So for example, in Gasoline, in a paper by Stinson et al. 2006, here are the rules for forming stars.
Gas must be colder than 15,000 Kelvin, must be denser than 0.1 particles per cc, must be overdense enough to be part of a virialized structure — that's almost guaranteed when you apply this density condition — and it must be part of a converging flow, which again is almost guaranteed when you apply this density condition. So those are the rules. You write down these rules, you parameterize them, and you put them in. There are multiple parameters, but the very high resolution zoom simulations that I'll talk about in a minute can avoid some of these parameterizations. Then when you form a star, you have to form a star particle. That star particle again might have a mass of, say, 10^6 or 10^7 solar masses, so it represents a whole collection of stars, the same way that a dark matter particle represents a whole part of the phase space of dark matter. And then you can tag it with things like its formation time and the local gas metallicity, and later you can group these star particles together to say what's in one galaxy versus what's in another. Now, one of the challenges here has always been: how do we set these parameters? As I mentioned, you're often tuning to low-redshift empirical data. Here's an example. There's a parameter here called c_star, which is the efficiency: what fraction of my gas in this region should I convert into stars?
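The Stinson et al. style rules just listed — cold, dense, converging flow — plus the c_star efficiency parameter can be sketched as follows. The threshold values are the ones quoted above; everything else is an illustrative simplification and not Gasoline's actual implementation: the function names and star-particle tags are mine, and the conversion rate dm = c_star · m_gas · Δt / t_dyn is a simplified stand-in for the probabilistic scheme a real code uses.

```python
# Illustrative thresholds in the spirit of Stinson et al. (2006); real codes
# also apply virialization and Jeans criteria on top of these.
T_MAX = 1.5e4   # K: gas must be colder than this
N_MIN = 0.1     # particles per cm^3: gas must be denser than this
C_STAR = 0.05   # efficiency parameter (empirically tuned)

def eligible_for_star_formation(temperature, number_density, div_v):
    """Apply the rules: cold, dense, and in a converging flow (div v < 0)."""
    return (temperature < T_MAX) and (number_density > N_MIN) and (div_v < 0.0)

def spawn_star_particle(gas_mass, dt, t_dyn, t_now, metallicity):
    """Convert a fraction of the gas into a star particle over one time step,
    dm = C_STAR * m_gas * dt / t_dyn, tagging it with its formation time and
    the local gas metallicity so stellar populations can be tracked later."""
    dm = C_STAR * gas_mass * dt / t_dyn
    star = {"mass": dm, "t_form": t_now, "Z": metallicity}
    return star, gas_mass - dm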
If you set it too low, what happens is you don't quite match the observed — sorry, let me take a step back. Kennicutt: in 1998, Rob Kennicutt looked at local observations of galaxies and wrote down what's called the Kennicutt law. It's a Schmidt law for star formation: the star formation rate in a spiral disk scales as some power of the gas mass surface density in that disk. So it's just a power-law relation between how much mass I have per unit area in my disk and how many stars form per unit area in my disk. That's given by this line here, because this is star formation rate in units of solar masses per square kiloparsec per year versus gas surface density in units of solar masses per square parsec. What I'm aiming for is this line, and what the simulations produced with c_star = 0.1 was this. So they go back — let me just ramp that up by a factor of 5 — and voilà, at least at the high surface density end I'm doing pretty well. The difference in star formation rate is appreciable: for the lower setting, here's my star formation rate as a function of time; for the higher setting it's actually high and then starts to drift down a little lower. Furthermore, not only do you have these kinds of parameter dependencies, but there are also resolution issues. Here, from that paper, Stinson et al. show what happens when the gas disk is modeled with 11,000, 55,000, and 275,000 particles. The star formation rate is kind of high at low resolution, lowers a little bit at higher resolution, and then gets episodic — completely changes character — at very high resolution. They claim it was because one condition wasn't being satisfied at very high resolution: there was a Jeans condition associated with whether the gas can pressure-support itself, and at very high resolution some of these local patches were able to pressure-support themselves and weren't satisfying the Jeans condition. So basically their solution was to shut off the Jeans condition. Yes — the question is whether the stars become a collisionless species and follow N-body. Yes: when you form a star particle, it becomes a collisionless species and it just evolves in an N-body manner due to gravity. But there's a clock that records how long since that packet of stars was formed, and at some point — after 10 or 100 million years — there will be supernovae going off, Type II supernovae first, Type Ia supernovae later, and that will affect the local gas. That is taken care of in the sense that there are source terms written into the equations that look for star particles and ask: are you ready to go supernova? And if you are, then feedback happens. The exact details vary depending on the code — it's IMF dependent, metallicity dependent, potentially all sorts of stuff. Yeah. Okay, so as I mentioned, the very high resolution studies, some of which I'll show movies of in a few minutes — well, your question was well posed, because we just talked about this: under an assumed IMF you can take your stellar populations and write source terms that give back both momentum and mass into the surrounding gas. And at very high resolution, not only do you have supernova feedback, but massive stars act on small scales — those beautiful pictures of the Horsehead Nebula and other beautiful HST images come about because of radiation pressure onto dust in giant molecular cloud complexes. So you can move gas around just through the radiation pressure of stellar populations. But you need very high resolution to be able to see that effect, and that's happened relatively recently; I'll show you work by Andrew Wetzel and the FIRE team in a couple of slides. And their claim here, which
I boxed, is maybe a little bit excessive, in the sense that — in italics, their italics — "no tuning of parameters." Well, what's happened is the subgrid approach with parameters has been replaced with: let's apply somebody else's code that does all of this for us. So what you're doing is talking to Claus Leitherer, who you may know, and his collaborators, who built the code Starburst99. Starburst99 tells you what you get back in terms of photon fields — radiation fields — from a population of stars of a certain age with a certain IMF, and you can just embed that into your calculation. And yes, there are no free parameters, but go back to Claus and ask: okay, this is version 7.0; what will version 11.0 look like in six years? What's wrong with version 7.0? What do we need to do to improve it? You're never done. I'm not saying this is bad; I'm saying this is where the field is moving. It's moving into more coupled approaches, taking astrophysical components and embedding them literally into the calculation rather than writing a subgrid parameterization that just involves a couple of numbers that you tune. Okay, taking a step back, let's think about a Milky Way calculation. For a long time there were difficulties forming a galaxy that looked like the Milky Way, in the sense of having a relatively small bulge and an extended thin disk. What was often happening was you'd get a big fat bulge and a small disk. So there was an angular momentum problem: basically, you were losing too much angular momentum in the gas. Well, the claim made in this paper by Guedes et al. — here's another named simulation, the Eris simulation — was that what you need to do is concentrate the effects of star formation and feedback into higher density regions. So what they did was take that 0.1 per cubic centimeter density threshold and ramp it up by a factor of 50, so that instead of
like having a popcorn popper of stars going off all over your disk, you'd really have only the densest regions forming stars. Those dense regions would be particularly confined to the center, which then drove the gas out to larger radius, which reduced the amount of low angular momentum gas you had and conserved more of the angular momentum in the system overall as a function of time, allowing the gas to drizzle back onto the disk later and form a big fat disk. So, seeing is believing, quote-unquote: here's their disk galaxy, here it is viewed edge-on, here it is viewed face-on, and this is a synthetic optical image — a composite of, let's see, i, V, and far-UV bands, an RGB color image, showing you that old stars will be red, new stars will be blue, etc. So it is a thin disk, and in fact it's got almost no bulge to it at all. So this paper said, okay, this is important: going to very high resolution and confining star formation to the very high-density regions. What they do see is a little bit of a bulge. Here's surface brightness as a function of radius for that system at the end of the calculation: there is a little baby bulge in the core, which is hard to see in the image that I showed you previously, followed by an exponential disk with a scale length of about 2.5 kiloparsecs. So it's kind of a baby Milky Way — Milky Way scale; this scale length is maybe a little larger than 2.5 kiloparsecs. Their dark matter looks like NFW, and then they get a hot gas component, which we now know exists in spiral galaxies and emits weakly in X-rays. Now, this is the last slide that I'll show from this paper, but one interesting question that people have talked about — and I won't talk a lot about it, although I'll mention a little more tomorrow — is the question of missing baryons. Where are all the baryons in the
universe? When you do an accounting of baryons and stars in the universe, you only get about ten percent of the global baryon fraction. We know the global baryon fraction well from cosmological nucleosynthesis, which I'm sure you heard about from Professor Ryden. And when you do some accounting to try to figure out where those baryons are, they're not all in stars. Then you can look for gaseous phases, and you realize, well, we're not getting it all in the gas either. But there's a lot of volume out there: galaxies occupy about one millionth of the volume — the bright parts of galaxies occupy less than one millionth of the volume of the universe. So you've got all of this space out there that you can fill with very dilute gas at, say, a temperature a little under 10^5 Kelvin, and there are very few ways to detect that, either through absorption or emission lines, especially if it's very tenuous. So that's probably where all the baryons are. But what you can see in their calculations is that the baryon fraction in their galaxy is large when they use the canonical lower density threshold, but when they go to the higher density threshold there's more action to drive baryons out of the protohalos and the final halo, so they get a reduced baryon fraction within the halo at redshift zero. Okay, now I'm going to show you some movies. In the last couple of years, what I've noticed happening quite a bit is more of a Sony-motion-pictures approach to simulations: we've done a simulation, watch this movie, it's great. Which is fine — it's capturing the imagination, and again, there's a lot of insight to be had from watching movies; that's why I'm going to show you some. This FIRE project, led by Phil Hopkins, now at Caltech, hosted at Northwestern University, has some really interesting stuff, and I recommend that you go take a look at it. The technology has let me down: Keynote
decided not to play my embedded movies yesterday, and I haven't solved that. So what I'm going to have to do is get out of the presentation, and then we're going to go over here — where are you, there I am — and play. I want to play this one first. Here is a 50 megaparsec region, and we're going to see the gas temperature. Play for me. I have to escape back out and start playing first. What we're seeing here, inside this 50 megaparsec physical region: low temperature gas is kind of white, actually magenta; green is intermediate gas of maybe 10^4 to 10^5 K; and red is greater than 10^5 K. And what you can see is the action of this supernova feedback happening episodically. This is an extremely high resolution simulation, so the subgrid prescriptions have been replaced by the Leitherer et al. 1999 Starburst99-type prescriptions for both stellar and supernova feedback. I'll just let the clock run; we're seeing the redshift change up there — a redshift of 1. As I mentioned yesterday, this is a simulation of something like a Milky Way-size galaxy at the end; by the way, we'll be seeing this galaxy face-on. So — wow, there was just a local episode that drove out a lot of gas. There's also dust in this calculation: each cell contains information about a lot of gas properties, including chemistry and including information about a dust population. And we see some later accretion of material coming in, which disturbs the disk; nothing major in terms of mergers happens, although we'll see at the very end there's kind of an interesting system that floats in from the right-hand side. Now we can see this looks pretty disky; it's self-gravitating in a relatively static potential. There's a little interaction with an accreting satellite there at low redshift. I have a couple more movies that I will show in a couple of days. We're not seeing the dark matter; the dark matter would fill out to the edges of the room here — the dark
matter halo. We're just looking at the very inner part of the galaxy. Let us now take a look at the stars — hard time seeing where I am — now we're going to watch the stars in the same system. Of course there are no stars early on, and then stars start to form. What you're seeing are synthetic images that include the screening of dust; occasionally you'll see dust lanes pop up, like there. The dark regions occur because our line of sight to this galaxy is shrouded by intervening gas and dust. And you see again that there's a lot of merging early on, which then settles down into the disc. It's not as if you form a disc galaxy and the angular momentum vector is just constant in time; it does dance around due to interactions and mergers. But after about a redshift of one the angular momentum vector is pretty well established; it's coming essentially out of the page at us. Again, that big interaction — that big star formation event that drove gas out of the core at around a redshift of 0.9 or so — also had the effect of driving a lot of dust out of the core, and you saw that in the dust lanes. Yes — the simulation size? It's one of these zoom simulations. The whole size of the simulation, if I remember correctly, is something on the order of 100 megaparsecs, or many tens of megaparsecs, but the region that we're looking at is more like a few megaparsecs on a side; the high resolution region is just enough to resolve the galaxy itself. And of course that interaction is a tidal interaction that pulls off some stars, and that's the kind of interaction that creates stellar streams in the Milky Way halo that the Sloan Digital Sky Survey has observed and that will be observed further by the Dark Energy Survey. Okay, one last movie, which goes back to the gas, but now what we're going to do is zoom out to a larger scale. What's nice about this part of the movie is that these are physical coordinates: you're looking at a movie with a fixed physical
size of 200 kiloparsecs on a side now, and you can see this is the high resolution zone at a redshift of 30 — you're seeing it in its entirety. When I turn the movie on you see the Hubble expansion initially; that's what's going on physically. And here it gives you a little more sense of the filamentary network, which again corresponds to the movie that I showed yesterday from the Aquarius simulation, where at high redshift you see this filamentary cosmic web that then condenses to form the final system at the end. Here again you're seeing that condensation, and that whole 200 kiloparsec region by a redshift of 2 is already pretty much filled with hot gas. Right now, here's an accretion event — the effect of that starburst center coming up very soon — there we go. You see that starburst event: it does drive material to large radius, maybe 100 kpc, but it doesn't unbind that gas. In a smaller galaxy, an event like that could completely unbind the gas, and that will take us back, at the very end of today's lecture, to the missing satellites problem. There's a little satellite galaxy down below, which is having its own convulsions, and now we see how it gets accreted via tidal streams; ultimately, if we continued to move the clock forward, it would completely merge with the parent galaxy, and you see it's on its way. Okay, yes, question: can simulations make barred galaxies? The answer is yes, and those simulations are kind of old, in the sense that people have been doing isolated galaxy simulations to model in particular the spiral structure and nonlinear interactions that happen in disc galaxies for probably 20 or 30 years. But it's only recently that they've emerged in a cosmological setting like these kinds of calculations. Any other questions? Okay, let's go back — you know, to the movies that were supposed to play and didn't. And finally, here's a picture of that simulation I just showed you. These are images looking face-on
and edge-on, with dust in the upper panel; if you ignore dust, this is what you would see in the lower panel. You can see the obscuring effects of dust, and it's a beautiful picture, right? In the sense that I showed you in slide one — I said we want to make these guys — well, at this level of resolution, we've kind of done it. It's pretty cool. But we don't have all the morphologies. And one thing to remember is that this is a ten-million-hour, or at least a few-million-hour, calculation on a major supercomputer. Those kinds of simulations don't come cheap; we're talking about something on the order of $10,000 in power and cooling alone just to do that simulation. So creating millions of these galaxies is going to be a challenge for these simulations. Let us take a look at Enzo. We're at the one hour mark, and I'm going to try to keep to time today because I know lunch is happening and I want to have lunch with you guys, but let me take you inside the belly of the beast, as it were. Enzo is interesting in that it is the one truly open source code out there in computational cosmology. There are codes available — Gadget is available — but it's not been officially open sourced. Enzo is hosted, I think on GitHub, I can't remember which repository, but it's hosted in a repository where you can pull it down, create a new branch, and feed back to the community. It's a real, honest-to-God, open source code base with real development practices. Associated with that open sourcing of the code was a paper from 2014 or 2015, shown here; all the collaborators involved in Enzo are listed as authors on this paper. It is a long paper with a lot of detail; if you really want to know, go there. We'll walk through a little bit of section 2 of this paper. Section 2.2 in particular lists the 12 different components that are involved in Enzo — just read through them. You've got to deal with your mesh, so you have to do adaptive mesh refinement. That's
shown graphically here in 2 dimensions: if you have a root grid of 4x4, then you want to adapt locally as the calculation evolves, refine a single cell into multiple subcells, and do that refinement hierarchically — that's shown in the red down there. Then you're going to solve the hydrodynamics equations in an Eulerian way, so you have to have some kind of solver to do that; there are multiple different solver techniques available within Enzo, and we'll see a little detail on that later. You have to have gravity, obviously, along with pushing a collisionless component of particles that represent the dark matter. Then, as I mentioned, you have cooling and chemistry. You also have the ability to turn on a homogeneous radiation background, because if you're doing a cosmological simulation — we still are not entirely sure how reionization occurs at redshifts of order 8 or 10 — you'd like to have a calculation that resolves the sources of ionizing radiation, but if you don't have that resolution, you can instead just impose an ionizing radiation field of your own design. That's available. And then you can do radiative transport with two different approaches, plus heat conduction, star formation and feedback, and then there's a whole section that talks about how time stepping is done. All right. This is a distributed parallel code that uses Message Passing Interface (MPI) prescriptions to handle communication among processors. On a processor you might handle the grid zones that are shown in black, but in order to do your finite difference calculations you need the surrounding region as well, because in order to do a pressure gradient here you need to finite-difference from both this side and that side. So you import from other processors what are called ghost zones in order to do that calculation, and you have to handle all of that communication in real time as the calculation evolves. As you go forward in time, the root grid sets a
global time step, but as you go down the hierarchy you typically have shorter and shorter time steps associated with the higher resolution zones. What's shown here are factors of two, but factors of two are not a constraint in Enzo; for graphical purposes we can just think of it that way. So first you solve at the root level, then you go down and solve the next level, do two steps at the lowest level, come back up to do the intermediate level, go back down, do two more, and then come back up to the root. That's the order in which you'd go. Here is a partial list, from table one, of the parameters involved inside the code. If you pull down the code there's going to be some read-me and also some includes, if you will, that define structures and parameters in the code. Here they are. You can't read them — that's kind of purposeful — because I just want to highlight this little section up there, star formation and feedback, to remind you of the things that we've already talked about: a minimum particle mass, a Jeans mass in a cell, the star formation efficiency parameter, a mass ejection fraction, and so on — all sorts of stuff that you now have control of. And you, as the user, as the simulator, need to make choices. The cooling function looks like this, and when you turn on an ionizing radiation field, the cooling function actually becomes a heating function at low temperatures. What's shown here are cooling functions of different styles; again, you can flip switches to go between using, say, Cloudy versus Sarazin & White 1987, which are shown in blue and red respectively. They're close but not identical. And then below about a few times 10^4 K, when you have an ionizing background, these little dips represent a change in sign — this is logarithmic — below this temperature you are actually heating the gas, above it you're cooling the gas. Finally, this is the last slide on Enzo: here's a unit test. Enzo comes with a test package; if you were to
pull it down in development, then in order to make sure that you haven't screwed up the code there are a number of unit tests that you can perform. One of them is very simple and is shown here: it's the propagation of a single sound wave. You put a low amplitude pressure-density perturbation into, let's say, the left hand side of the box and let it propagate with its velocity taking it to the right. The gray region shows you one level of refinement. Initially there were a hundred grid cells on the unit interval, positions 0 to 1, so we're just looking at a small region here, but you go from the root level description to one level higher as you cross from here to here. So this test is asking the question: can we propagate a sine wave without introducing any funny features as it goes through the refined region and pops out the other side? The refined region is confined to 0.25 to 0.75, and this shows you the beginning and end of the propagation using three different methods for propagating the wave, three different hydrodynamic solvers: PPM (piecewise parabolic method), the original ZEUS method, and the MUSCL scheme that I had on a previous graph. And this is kind of a challenge. You look at this and say, great — ZEUS probably does the best job. The blue is what you expect; the blue is the analytic solution, since propagating a sine wave is an analytic thing, sin(kx − ωt). But the problem is that what's done here is to push the wavelength to be very short: ten cells per wavelength, which means a half wavelength is just five cells. That's pretty challenging numerically, right? Five resolution elements in a half wavelength. And so ZEUS does a pretty good job, except that it introduces these oscillations as you propagate. So although it preserves the amplitude — it's not very diffusive — it's also not all that stable, in the sense that it introduces more features than you originally had. On the other hand, PPM and MUSCL don't introduce
oscillations — they're more stable — but they're more diffusive: they lose amplitude. So these are the kinds of choices that one has to make. But again, remember this is a barely resolved feature; if we just used a hundred zones per wavelength instead of ten, these solutions would look much, much better. So it's one of several unit tests that you can do with this code, and again, this is kind of the future: it's a stable code base that we can build on as we go forward as a community. All right, I want to spend some time on code comparisons, and then we'll finish by going back to that small-scale Lambda CDM problem with a couple of slides. Okay. Like I said, you can build a code that conserves conserved quantities, but beyond that it's a little hard to tell when you're doing things right. One of the things you can do, though, is at least verification: we can take code A and code B, put in the same initial conditions, put in the same physics, and ask, do we get the same answer? That's what's being done in these code comparisons — kind of cross verification. Here's an early version of this done in 1999. You can see my code, P3M-SPH, up here, and this is the early version of Enzo from Greg Bryan; just a few levels of refinement were used in this calculation. There were other grid codes here and here that were kind of coarser resolution; otherwise the rest are SPH, except for these two, which are early moving-mesh codes that never survived — they were interesting in their day but weren't quite as powerful and as ready as AREPO and GIZMO are now. Anyway, this is the dark matter, so you see similar structure. One thing you'll notice is that there are substructures which are not always in the same place. Here, for example, is a substructure that in this solution is over here; it's crossed over to the other side. That's just because we're working in a highly nonlinear environment: if you make small errors in the initial state they can
get amplified the way that chaotic systems do. So a small initial change in the linear or quasi-linear phase of the evolution can lead to this sub-halo being over here versus over here at the end of the day. Don't worry about those features so much; we're going to look at statistically averaged properties in a minute. That's the dark matter. Here's the gas. The first thing you notice is that the gas is rounder, because the gas has an isotropic pressure tensor, whereas the dark matter is supporting itself by an anisotropic velocity distribution. So, yes, it should be rounder. But you also see that there are different levels of core density: some of them are very high, some are very low. This is a very low-resolution calculation, so it didn't achieve the high density shown in red and white on this diagram. Also, some of these calculations are noisier; here's a very noisy calculation. And then finally, the temperature map: similar kinds of temperatures. Again, the hot gas fills the inner megaparsec or so, and temperatures vary somewhat. But instead of temperature, I'm going to show you density and entropy profiles. The entropy is a sort of pseudo-entropy, where one takes the temperature and divides by rho to the gamma minus one; the log of that is the entropy plotted on the right-hand side. So what's shown here, and it's a little hard to see, is multiple different solutions, with the code names over here. Here is the gas density profile, and here's the gas entropy profile. One thing you'll notice, which we'll see again in the next slide with a very recent code comparison paper, is that the AMR codes, the grid codes, have this kind of constant-entropy core, whereas the SPH solutions continue down in entropy all the way. And notice the virial radius, or r200, for this system is about 2 megaparsecs, so we're going down to about one tenth of r200 in these calculations. We're going to move
forward now from 1999 to 2016, when a similar exercise was done, called the nIFTy cluster comparison (Sembolini et al. 2016). Here again are some pictures of the calculation at the end. Here is the gas density as a function of radius; r200 is shown by the dotted line. We're pushing much further in with bigger computers and higher resolution, so we can resolve the core structure. And what you see is that those small deviations that were visible in 1999 simply got amplified as you go to smaller and smaller scales. In particular, the grid codes, RAMSES and ART, which are the black lines, have a kind of limited central density, almost a constant-density core if you will, whereas some of the SPH codes continue to rise in density as you go to the core. That corresponds also to a decline in entropy. It turns out that those solutions are very likely wrong, so we've discovered something in this code comparison: the right answers are probably here. And the reason you can say that is because, sorry, let me back up for a second, some of these blue lines are Gadget with a new SPH method that's designed to better handle gas mixing in subsonic turbulent environments. Here are some of the test cases with which people developed these new SPH methods, from a paper by Beck et al. just last year. This is a time evolution, where time goes down. It's a periodic region where you just have a cold blob in pressure equilibrium with the surrounding gas, which is not shown; it's the black. The cold blob is propagating here, and since the box is periodic, it pops back into the field. Under the old scheme, what happened is that the blob would get deformed, but most of the material wouldn't mix with the surrounding low-density medium. It would kind of be preserved and shielded because of the way the pressure is handled at the interface, for various technical reasons that you can read the paper to understand if you really care. The new versions allow this blob to be shredded. Now, the important thing to realize here is that this is pure
hydrodynamics. There's no gravity; this is not a cosmological experiment. This is an experiment that you can do in a lab, and that's why it's important: because you can validate the code this way. Now, remember, there's verification and validation in the world of computer science. Verification is, well, I have ways in which I can make arguments for the consistency and correctness of my solution. Validation is harder: validation is, I can do an experiment that tells me the right answer, and I can check my code and see if it got that right answer. So, for example, I can light a cigarette, or a joint, and let the smoke rise up, and what you'll see is that as the smoke rises it will develop Rayleigh-Taylor instabilities; it will form curls. You've seen that; I don't need to demonstrate it, not this time of the morning. So this is what you expect, and you can do these kinds of calculations where you have a rarefied medium in pressure equilibrium, so a little higher temperature or lower density, moving relative to a substrate, and there's an interface, a contact discontinuity. Right at that contact discontinuity, the flow is unstable to these Kelvin-Helmholtz rolls, and again, this is stuff you can do in a lab, so nature tells you what the solution is; you don't make it up. The old scheme used to not develop these rolls and not mix; the new schemes do. And this mixing is important: this is the kind of thing that happens when a sub-halo falls in; its edges are going to get shredded like this, so you need to be able to do this. The new SPH methods do it well, and they do it as well as the grid codes do. And then let me just show you the other graphs from this paper, which show you again that the new versions of Gadget produce an entropy core the same way that the AMR codes do. So basically, the old codes need to be retired and the new codes promoted, and that means we've learned something. It's taken some twenty-odd years, but we've learned something. All right, now another code comparison,
relatively recent, Scannapieco et al. 2012, is a little more depressing, in the sense that this one was simply: everybody, here are some initial conditions, put your full physics, quote-unquote, in, and form a galaxy. And here are the answers. You can see, from a distance, they look kind of similar, but look at the size of this disk compared to the size of this disk: that's an order-unity difference. All right, I'm going to show you more detail that will get you even more depressed. Here are different codes; what's shown is the circular velocity curve of the galaxy. They do form a disk at the end, and you're looking at the circular velocity of the gas and stars as a function of radius from the center of the galaxy. The Milky Way is shown in gray, so you're kind of shooting for that. Most of these codes produce very high central concentrations, which lead to high circular velocities; that's true for a lot of them. Some don't; some do better than others, quote-unquote. What I'm going to show you next are some global properties, and the global property shown over here is stellar mass versus halo mass as a function of time. The time evolution is shown by a line, and the final redshift-zero behavior is shown by the symbol. Sadly, what you see is that there's a dynamic range here of almost an order of magnitude in the final stellar mass of the galactic disk. There's also a dynamic range of 50% in just the mass of the halo, so a lot of deviation. Here is the circular velocity as a function of stellar mass for all of these different simulations; observations are the little points down here. The conspiracy is such that the simulations lie on either side of the observations; none of them quite match what's going on in the observations. So this is a challenge, but I should say that these codes were all run before the FIRE simulations that I showed you, so we're on a good path to do a much better job of this in the next few
years. All right, so here's a little pre-summary, and then I'll have just a few slides to go back to the small-scale structure problem, and we'll finish up. These code comparisons are essential for assessing the level of theoretical uncertainty in all of what we're trying to do here. When you don't have fancy physics, when you just have shocks, we've already learned that we should retire the old SPH implementations and promote the new SPH implementations; that's important. The full-physics comparison demonstrates that we're nowhere near ready to claim that we've solved the galaxy formation problem with any of these methods yet. But there's a question as we go forward: what do we do? We want to demonstrate convergence, i.e., change your numerical resolution and demonstrate that your solution doesn't change very much, if at all. Different codes with the same recipes for star formation and feedback should take the same initial conditions and produce the same results, so that code comparison should be done at some point in the future, and it should be done more carefully, and hopefully we'll see better convergence among different treatments. Uniqueness will always be a challenge, i.e.,
if we think about that list of parameters that I showed you for star formation and feedback in ENZO, they're not independent, in the sense that changing the star formation efficiency a little will change the solution, but then I have another parameter that correlates with it which I can shift to adjust back. All these parameters have complicated, correlated properties, which means we're dealing with a very complex nonlinear system with a set of controls that are very strongly coupled. And how do we validate? Again, we can't: I can take a blob of cream and drop it in my coffee and watch it, but I can't literally form a galaxy in the lab and watch it form. So how do we validate? Well, we validate against the sky, and that means, and I'll talk a little bit more about this tomorrow, that we have to start pushing forward and promoting the making of synthetic imaging, like we've seen from some of these simulations, and really asking: what kind of photons would I receive from that system, and do they look like the photons I receive on the sky? Okay, let's finish up by going back to the small-scale crisis with baryon feedback. So, a couple of recent papers: Zolotov et al., and then I'll show you the FIRE results on the next slide. Essentially, the original small-scale crisis problems were associated with taking pure N-body simulations and tagging stellar properties onto sub-halos. Well, now, with baryon physics, you can actually see what's going on in the sub-halos, and what goes on in some of them is that they'll accrete a lot of baryons, which will initially pull in a lot of dark matter gravitationally; but then when the stars explode and drive the baryons back out, that also unbinds the dark matter associated with the core of the system. And so what's shown over here is a density profile of what are called the most luminous and gas-free satellites at redshift zero in this calculation, using the Gasoline code. If you just
have dark matter only, you get density profiles that look like these blue lines; when you include the full physics, you get density profiles that look like the black lines. So the densities are reduced substantially, look, this is an order of magnitude: densities in the core can be reduced substantially, and that brings down your circular velocity substantially as well. So it addresses both the too-big-to-fail problem and the missing-satellites problem, that is, the count of small systems in the Milky Way and Andromeda; their new simulations with baryon physics are now in reasonable agreement. Okay, and then, next-to-last slide: the same thing for the FIRE simulations. The missing-satellites issue goes away, again for the Milky Way and Andromeda, in what's called the Latte simulation, shown here. And then, it's very hard to see, but there are some thin lines to show you what you'd get with dark matter only, and that's shown here, so you can see how far off you were with dark matter only, going from yellow to blue. What would be nice, and this paper touches on it a little bit, is to get beyond the attitude of "I'm trying to tune my parameters to match observations" and move forward into "okay, I've tuned parameters to match these observations, but now I'm going to predict something else, something I didn't tune for." You can see here, well, here it was kind of tuned, if you will, to reproduce this circular velocity as a function of stellar mass; observations are stars, simulations are open circles, and those are in reasonable agreement. But there was no real tuning of the metallicity; it kind of came in for free, put in with the Starburst99 model (Leitherer et al. 1999), and you can see that the observations and simulations actually agree reasonably well in metallicity as a function of stellar mass. That's moving more towards the predictive side, and as I said, there's data galore out there about the local galaxy population. So the more we push harder on our models, okay, we've tuned over here, but now I'm going to look in this space where I haven't tuned, and
see how well I'm doing; the more we push in that direction, and the more success we have in this new direction, that's where this field is heading. Okay, so, last slide, just to summarize what we went over today. In general, I would say that the field is in a somewhat adolescent phase, you know, growing pains. N-body methods are mature; they're stable and not going to change very much over the next few decades. Hydro methods in a cosmological setting are very young; they'll continue to evolve over the next few decades, and that means there are contributions to be made by folks like you in this room. Code comparisons are important to help move the field forward. Ultimately, what ends up happening is you do simulations with a lot of complicated physics, a lot of parameters, a lot of things going on, but at the end of the day we want to come up with a story. That story might be something like: we need warm dark matter in the universe, because small-scale structure just doesn't work with cold Lambda CDM. We want a narrative; we want to understand something about the world around us. And the hope is that this small-scale structure problem, that is to say, we look out in our galaxy, we see these small satellites, they have properties that we can measure well and will continue to measure even better as we go forward, that those properties really do require the effect of baryon feedback and star formation, and that this coupling between dark matter and gas is very important. One way to address that will be to use gravitational lensing on very small scales, including microlensing, to assess the graininess of the dark matter structure in galaxies. So going forward, as we develop techniques to resolve that kind of structure in galaxies, that's again where simulations can make predictions now for what you should see, and it would be
nice to actually make predictions that are then gone out and verified observationally, as opposed to getting the observations first and then working hard to match them in simulations. Okay, so I think that's all I wanted to cover today. Happy to take questions, and then we'll talk tomorrow about large scales and galaxy clusters.
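As an aside, for anyone who wants to reproduce the diagnostic used in the cluster comparisons above, the pseudo-entropy profile (temperature divided by rho to the gamma minus one, plotted in the log) takes only a few lines to compute. This is a minimal sketch, not code from any of the simulations discussed: it assumes you already have 1-D arrays of radii, temperatures, and densities for the gas elements, and the function name and binning scheme are my own choices for illustration.

```python
import numpy as np

def pseudo_entropy_profile(r, temperature, density, gamma=5.0 / 3.0, nbins=20):
    """Radially binned pseudo-entropy S = T / rho**(gamma - 1).

    r, temperature, density: 1-D arrays for the gas elements
    (r conveniently in units of r200).  Returns the geometric
    bin centers and log10 of the mean pseudo-entropy per bin.
    """
    s = temperature / density ** (gamma - 1.0)        # pseudo-entropy per element
    edges = np.logspace(np.log10(r.min()), np.log10(r.max()), nbins + 1)
    # Bin index for each element; clip so the outermost element
    # lands in the last bin rather than falling off the edge.
    idx = np.clip(np.digitize(r, edges) - 1, 0, nbins - 1)
    centers = np.sqrt(edges[:-1] * edges[1:])         # geometric mean of bin edges
    profile = np.array([s[idx == i].mean() if np.any(idx == i) else np.nan
                        for i in range(nbins)])
    return centers, np.log10(profile)
```

A quick sanity check of the sketch: if you feed it a power-law density profile with the temperature chosen so that T is proportional to rho to the gamma minus one, the returned log profile is flat at all radii, i.e., constant pseudo-entropy by construction.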