So good morning. We are back to the problem of cosmic reionization that we started yesterday. This is the second lecture, in which we will concentrate on the basic physics of reionization. We will also spend some time discussing the possible sources of reionization and the specific signatures we can look for in order to understand what is driving reionization, that is, what is providing the power in terms of ionizing photons and heating of the intergalactic medium. Before I start, a brief recap of what we saw yesterday. Yesterday we spent the first lecture discussing the general properties of the intergalactic medium. Remember that cosmic reionization is the process that brings the intergalactic medium, that is, the diffuse matter, from the neutral state in which it was left after recombination at the last scattering surface of the CMB into an ionized state. We have seen that a large fraction of cosmic baryons reside not in galaxies but in the intergalactic medium, more than 90% at redshift about four or five. Then this gas is gradually heated, both by photoionization heating (remember, we derived how radiation is absorbed by neutral hydrogen, which gets ionized and heated) and by shocks due to structure formation at later times. This second process is not directly related to reionization, because it has more to do with structure formation, but it produces the same ionization and heating that photoionization produces. At the end of the lecture, in addition to discussing the general properties of the distribution of the gas in terms of the column density of the absorbers and their temperature, we also introduced the standard way to study reionization, which is based on quasar absorption-line spectra.
These spectra are interpreted in terms of the so-called Gunn-Peterson opacity: the opacity due to scattering of UV photons by intervening neutral hydrogen atoms, which scatter the photons away from the line of sight via the resonant Lyman-alpha transition. We have seen that the Gunn-Peterson opacity evolves with redshift: after a smooth growth, mostly driven by the increase of the mean density of the cosmic gas, at some point it steepens around redshift 6 or so. That may or may not signal the end of reionization, the interface where the gas starts to become completely ionized from the neutral state it had before. So this is a hypothesis so far. Now we want to put this on more theoretical grounds and try to understand from basic principles what we expect in terms of reionization and all the properties that characterize this process. Let's start very simple. Studying reionization is equivalent to understanding how ionization fronts produced by radiation from sources propagate into the surrounding medium. This is a standard problem, not only in cosmology; it has been studied in a number of contexts, not least the HII regions, the ionized regions around massive stars. So the theory is well settled. The difference is that, when applying the theory of ionization fronts to cosmology, we have to keep in mind that the universe is expanding and therefore the density is changing with time, which introduces an extra term in the equation. So let's see what the basic equation says. Suppose you have a single source embedded in an otherwise homogeneous background of gas, and let us work first in physical coordinates, as we would in non-cosmological situations. We have V_P, the proper volume that is ionized, that contains the ionized gas.
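As a quick numerical illustration of the point above: for a uniform IGM the Gunn-Peterson optical depth scales as tau_GP proportional to x_HI (1+z)^(3/2). A minimal sketch, assuming the normalization commonly quoted in the literature and illustrative cosmological parameters:

```python
# Sketch: Gunn-Peterson optical depth of a uniform IGM, using the
# commonly quoted approximation
#   tau_GP ~ 1.8e5 * h^-1 * Omega_m^-1/2 * (Omega_b h^2 / 0.02)
#            * ((1+z)/7)^1.5 * x_HI,
# where x_HI is the neutral hydrogen fraction. The cosmological
# parameters below are illustrative assumptions.

def tau_gp(z, x_HI, h=0.7, omega_m=0.3, omega_b=0.045):
    """Approximate Gunn-Peterson optical depth at redshift z."""
    return (1.8e5 / h / omega_m**0.5
            * (omega_b * h**2 / 0.02)
            * ((1.0 + z) / 7.0)**1.5
            * x_HI)

# Even a tiny neutral fraction gives a large opacity at z ~ 6, which is
# why saturated absorption only sets a weak lower bound on x_HI:
print(tau_gp(6.0, 1e-4))   # of order tens
print(tau_gp(6.0, 1.0))    # enormous: a fully neutral IGM is utterly opaque
```

This is why the steepening of the opacity at z ~ 6 is suggestive but, by itself, not proof that reionization ends there: the absorption saturates long before the gas is substantially neutral.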
Sometimes this is called an ionized bubble or an HII region; there are different names, but they all mean the same thing. This volume grows with time as the central source continues to inject ionizing photons. I remind you that these have to be photons with energy larger than 1 Rydberg, that is, 13.6 electron volts. So this volume is expanding, becoming larger. The first term on the left-hand side expresses the fractional change in the volume as a function of time, and you see that the Hubble term enters just to express the fact that the universe is also expanding; in some sense this is an extra term that would not appear in a static universe. n_H here is the mean density of hydrogen atoms in the volume. Then we have a source and a sink term. The source term is the rate at which the central source produces photons and injects them into the surrounding gas; N-dot-gamma (we will see what its expression is in a moment) is just the production rate of ionizing photons. Then we have an extra term, a sink, which works against the expansion of the ionized volume. It is the product of three factors. First, alpha_B, the recombination coefficient in the so-called case B. Case B recombination excludes recombinations that go directly to the ground level, because such a recombination would emit another ionizing photon that would be absorbed anyway; so you exclude the ground level, and case B sums the recombination coefficient over all levels from 2 to infinity. Second, V_P, the volume we have just defined, in proper units. And third, a term which expresses the mean square of the hydrogen density. Why do we care about that? Well, this is related to what is called the clumping factor.
The clumping factor tries to include in the equation the fact that you may have inhomogeneities in the gas: the gas is not perfectly smooth and homogeneous but somewhat inhomogeneous. You condense all this information into one quantity, the clumping factor, which is nothing else than the mean square of the density field divided by the square of the mean density. It gives you an idea of the clumpiness of the gas; for a perfectly smooth gas the clumping factor would be equal to 1. Now we can take the next step and move to more convenient coordinates, the comoving coordinates, in which we factor out the expansion of the universe. As a result, we now turn to V, the comoving volume, and the expansion term has dropped out; at the same time an a-cubed appears here, so in a way the expansion is still there, but the scale factor now appears explicitly in the evolution of the density. And we have n_H0, the comoving mean density, or if you prefer, the density of hydrogen at redshift 0. This is a relatively simple differential equation that can be solved essentially analytically, so we can find the solution for the evolution of the volume as a function of time. You simply find an integral that depends on an exponential, and the exponential contains the evolution of the clumping factor and of the scale factor as a function of time. Because as the universe expands the density decreases, and therefore the recombination rate also decreases; remember that the recombination timescale is inversely proportional to the density. So as the universe expands, the density drops and recombination becomes less and less efficient. But in principle you have a formal solution.
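To see the balance between the source and sink terms concretely, here is a minimal Euler sketch of the bubble equation in the static limit (H = 0), whose equilibrium is the classical Stromgren volume. The source rate and gas density below are illustrative assumptions, not values from the lecture:

```python
# Sketch: growth of an ionized bubble in the static limit (H = 0),
#   dV/dt = Ndot_gamma / n_H  -  C * alpha_B * n_H * V.
# The equilibrium (dV/dt = 0) is the Stromgren volume
#   V_S = Ndot_gamma / (C * alpha_B * n_H^2).
# Source rate and density are illustrative assumptions.

ALPHA_B = 2.6e-13    # cm^3/s, case-B recombination coefficient at ~10^4 K
N_DOT   = 1e50       # ionizing photons/s from the central source
N_H     = 1e-3       # cm^-3, mean hydrogen number density
C       = 1.0        # clumping factor (perfectly smooth gas)

def evolve_volume(t_end, dt=1e12):
    """Euler-integrate the bubble volume; times in s, volume in cm^3."""
    V, t = 0.0, 0.0
    while t < t_end:
        V += (N_DOT / N_H - C * ALPHA_B * N_H * V) * dt
        t += dt
    return V

V_stromgren = N_DOT / (C * ALPHA_B * N_H**2)   # photons balance recombinations
V_late = evolve_volume(t_end=1e16)             # a few recombination times later
print(V_late / V_stromgren)                    # approaches 1 from below
```

The recombination time 1/(alpha_B n_H) here is about 4e15 s, so after 1e16 s the volume has nearly saturated; in the cosmological version the expanding universe keeps lowering n_H, so the bubble keeps growing instead of stalling.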
So you can describe exactly how the volume of the ionized bubble evolves with time in a very precise manner. This is the basic theory of an expanding HII region in a cosmological context. But let's take the next step. When studying reionization, what we are really interested in is the collective effect of many of these point sources: not one single point source, but the cosmological effect on large scales, in volumes that contain thousands, if not hundreds of thousands, of sources. So it is better to describe the process in a similar but somewhat different way, a statistical approach that asks: given a cosmic volume taken large enough to be representative of the whole universe, what is the fraction of that volume that is filled with HII regions, with ionized gas? This defines the quantity Q, the ratio of the sum of all the HII regions around the individual sources divided by the volume they are sampling. This is called the volume filling factor. Sometimes it is also interchangeably called the porosity factor; strictly speaking the two are not exactly the same, but they are almost the same. Anyway, let's call it the filling factor: the fraction of the volume that is filled with ionized gas. Initially the filling factor will be 0, because there is no ionized gas around, and at the end of reionization Q will be equal to 1, or essentially almost 1, because all the volume will be filled with ionized gas. Now we need to say something about the source term of ionizing photons. In this statistical approach we are dealing with volumes that contain many, many sources, and these sources could be galaxies, could be quasars, could even be more exotic sources that we will discuss later.
We need to specify the production rate of photons within the volume that we are studying. What is often done, at least at the zeroth level, is to assume that the mean density of ionizing photons is proportional to the mean density of baryons in that volume, with a coupling coefficient that is the product of two quantities: n_ion, which I'll discuss in a second, and the fraction of the baryons that have collapsed into nonlinear structures. What you want to say is that the number of photons produced in a given volume is proportional to the density of baryons in that volume; so, for example, an overdense region will produce more photons than an underdense region. But at the same time, inside that volume, this proportionality only holds for matter that is in collapsed structures, because we know that stars, quasars and in general the most natural ionization sources are produced in regions of very high density, where the gas has collapsed to form bound structures like stars, or black holes surrounded by accretion disks. We account for that by including f_coll, the fraction of the baryons that have collapsed into nonlinear structures. This you can compute, for example, from standard Press-Schechter theory; it is a simple calculation that one can do as a function of redshift. Then there is another factor, n_ion, which is the number of photons available per baryon that goes into these collapsed structures. So there are three steps: how many baryons I have in the volume, how many of them have collapsed, and how many photons I get for each baryon in these collapsed structures. The last one, of course, is where the specific properties of the sources enter.
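The collapsed fraction mentioned above is the one ingredient that is straightforward to compute. A minimal sketch of the Press-Schechter expression, assuming an Einstein-de Sitter growth factor D(z) = 1/(1+z) (a good approximation at high redshift) and an illustrative value for the density-field variance at the minimum source mass:

```python
import math

# Sketch: Press-Schechter collapsed fraction,
#   f_coll(z) = erfc( delta_c / (sqrt(2) * sigma(M_min) * D(z)) ),
# with D(z) = 1/(1+z) (Einstein-de Sitter growth) and sigma_min an
# illustrative assumption for the variance at the minimum source mass.

DELTA_C = 1.686   # linear-theory collapse threshold

def f_coll(z, sigma_min=3.0):
    growth = 1.0 / (1.0 + z)
    return math.erfc(DELTA_C / (math.sqrt(2.0) * sigma_min * growth))

# The collapsed fraction rises steeply toward lower redshift:
for z in (15, 10, 6):
    print(z, f_coll(z))
```

The steep rise with decreasing redshift is what drives the photon production term in the filling-factor equation below.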
Because in order to specify n_ion, I need at this point to specify: am I dealing with stars? Am I dealing with quasars? If I'm dealing with stars, what type of stars am I considering? So this is where all the astrophysics comes in. This is the simplest possible model; I'm trying to be very pedagogical here. It is the simplest model of reionization that we can set up, but it gives reasonable results, as I'll show you in a second. This n_ion is itself the product of two factors: the number of photons that I produce per baryon that goes into stars (this is the case of stars, but it could be quasars, for example), times another factor which says that not all the photons produced inside a collapsed structure can get out of it and into the intergalactic medium. Remember that more than 95% of the baryons are in the intergalactic medium, so if you want to achieve reionization, you need to ionize this 95% of matter which is outside collapsed structures. The escape fraction tells you what fraction of the photons produced inside collapsed structures can make it out into the intergalactic medium, which is where we want reionization to happen. We will discuss during this lecture in a bit more detail all the intricacies of these quantities, because your final result will depend dramatically on the assumptions you make on them, so it is important to understand them. But for the time being they are just numbers for us: we assume that somebody is giving us those numbers and we can use them right away as free parameters. And so this is now the same equation that we had before, but instead of the volume of the previous slide, we now write it in terms of the volume filling factor, so the evolution of the filling factor.
This is proportional to the n_ion term times the evolution of the collapsed fraction; this, again, is the production term, and this is the usual sink term once again. You may wonder where the 0.76 comes from: it is just the hydrogen mass fraction, because we are dealing only with hydrogen atoms, but there are also helium atoms making up the mass. Q again has a simple solution, found very easily by solving that equation exactly as we did before. So what do we get? Well, here is what I call the "hello world" reionization model: the simplest reionization model you can build, which is actually not totally crazy, it makes sense. Let's try to understand it from the physical point of view. What I'm showing here is the evolution of the filling factor which, as I said before, goes from zero to one; one means that we have achieved full reionization of the IGM, and the horizontal axis is redshift, of course. I plot several curves with varying clumping factor; remember, the clumping factor expresses the inhomogeneities. I have four values, going from 0 to 30. You may wonder why I'm considering C = 0, which doesn't seem to make sense: C = 0 is essentially equivalent to neglecting the sink term, neglecting recombinations completely; it's like having only the source term with no recombinations. So C = 0 would drop this term. It is maybe an extreme case, but just to understand the physics it makes sense to consider it. So you see, going from 0 to 1 to 10 to 30: first of all, the larger the clumping factor, the larger the amount of inhomogeneity in the gas; and the more inhomogeneous the gas, the later reionization occurs.
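The behavior of these curves can be reproduced with a toy integration of the filling-factor equation. In the sketch below the collapsed fraction, n_ion, and the cosmological numbers are illustrative assumptions, not the values behind the lecture's figure; the point is only the qualitative trend with clumping:

```python
import math

# Sketch of the 'hello world' filling-factor model,
#   dQ/dt = n_ion * df_coll/dt  -  C * alpha_B * n_H(z) * Q,
# integrated from high redshift downward. All parameter values are
# illustrative assumptions.

ALPHA_B = 2.6e-13          # cm^3/s, case-B recombination at ~10^4 K
N_H0    = 2e-7             # cm^-3, comoving hydrogen number density
H0      = 2.2e-18          # s^-1, roughly 70 km/s/Mpc
OMEGA_M = 0.3

def f_coll(z):
    """Toy Press-Schechter-like collapsed fraction."""
    return math.erfc(1.686 * (1 + z) / (math.sqrt(2) * 3.0))

def reionization_redshift(n_ion=4000.0, C=1.0, z_start=20.0, dz=0.01):
    """Redshift at which Q first reaches 1, or None if not reached by z=2."""
    Q, z = 0.0, z_start
    while z > 2.0:
        # dt/dz for a matter-dominated universe
        dtdz = 1.0 / (H0 * math.sqrt(OMEGA_M) * (1 + z) ** 2.5)
        dQ = (n_ion * (f_coll(z - dz) - f_coll(z))            # source
              - C * ALPHA_B * N_H0 * (1 + z) ** 3 * Q * dtdz * dz)  # sink
        Q = min(Q + dQ, 1.0)
        if Q >= 1.0:
            return z
        z -= dz
    return None

# Larger clumping wastes photons on recombinations and delays reionization:
print(reionization_redshift(C=0))
print(reionization_redshift(C=30))
```

Running this shows the same qualitative behavior as the slide: switching off recombinations (C = 0) ends reionization earliest, and increasing C pushes the end to lower redshift.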
This is easily understood, because the number of recombinations in the gas depends on the square of the density. If you increase the density locally, in inhomogeneities, these inhomogeneities soak up a lot of photons, which are only used to repeatedly ionize and recombine the gas. So you waste a lot of photons on these inhomogeneities, and that delays the process of reionization. In fact, you can see that we go from a reionization that would occur at redshift 15 for C = 0 to one that would occur at redshift six or so for a clumping factor of 30. Now, I've drawn a line here at redshift 5.8, which is the minimum redshift by which we know reionization must have ended: by the time you hit this line, Q must be at one. This is what the data and the Gunn-Peterson opacity that we saw yesterday are telling us: by redshift 5.8, the gas has to be fully ionized. So any solution which has not reached one by then must be excluded, and this one here is barely excluded. [Question from the audience] Yes, your colleague is asking whether the clumping factor of the IGM may change with time. This is certainly true, because the more collapsed structures you produce, the more you obviously increase the inhomogeneity of the gas. Unfortunately, the clumping factor is an unknown factor right now. We have some bounds, and we have some guidance from numerical simulations, which seem to indicate that a reasonable value is of the order of three, and that it is not varying that much; maybe it could vary between three and five, but it is not evolving strongly. Of course, the real problem is that even with the best numerical simulations this is a problem of scale, because you can always have more and more inhomogeneities on scales that you are not resolving.
So it is very hard to compute it exactly, but we see a convergence with increasing numerical resolution in the simulations: there is a flattening, it does not increase dramatically as you continue to go to smaller scales, and it seems to stabilize around three to five. But here we are starting to approach the key point that I want to make in this lecture: cosmic reionization has a number of free parameters that we do not know, and this is a classical example. The clumping factor is very difficult for us to understand or even to model, even with the most perfect supercomputer we could imagine. The only way to fix these parameters, and the only way a reionization model can be believed, is by constraining them with the available data: we have a number of free parameters, but we also have a lot of very good data, and by matching theory with data in a kind of semi-analytic or phenomenological approach, you can hope to fix parameters, like the clumping factor, on which we have little handle. Any theoretical model of reionization has to satisfy a number of constraints coming from the data; only through that kind of operation may you trust what you are finding, otherwise we are running a little bit into unknown territory. Now, as I said, this is the simplest model that we can think of. But as you can imagine, if you want to go one step beyond it, you need to do something a little more sophisticated, in particular because the topology of reionization is so complex and inhomogeneities in the density field are present. So essentially the only way to make detailed models of reionization is to do radiative transfer calculations, and these have to be numerical. Why do they have to be numerical?
Well, if you think about it for a second, what we want to know is the intensity of the radiation field produced by the combination of a large number of sources: at each position you want to know exactly what radiation field an atom is seeing as a result of all the flux it receives from the surrounding sources. The problem is difficult because this intensity J is a seven-dimensional function: it depends on time, three spatial coordinates, two angular coordinates and frequency. With seven variables it is a very complex problem to solve. But formally, this is the radiative transfer equation that you would need to solve. Again, there is a close parallel with the non-cosmological radiative transfer equation that you may have seen during your courses. The evolution of the specific intensity as a function of time in an expanding universe has two more terms with respect to the standard two of classical radiative transfer. The first describes the effect of redshifting of radiation: as a source emits a photon, the photon travels while the universe expands, so the photon gets redshifted, as we know from standard cosmology; and also, because of expansion, the energy density of the photons is diluted, because the energy is now distributed over a larger volume. Then we have more microphysical phenomena, like the absorption of radiation, which is described by the absorption coefficient kappa_nu, the absorption per unit path length of the radiation. We also need to take care of the fact that the gas itself emits radiation with a given emissivity epsilon, which depends on the physical processes taking place at that particular position.
There is again a formal solution to this equation, which is this one, but it is not of much use because it is an implicit solution that we cannot use directly. We can, however, define the optical depth, which is the absorption coefficient integrated along a path, kappa_nu c dt, together with the standard expression for the frequency redshifting of photons. Now, we can sometimes make a simple approximation, called the local approximation, which holds whenever the mean free path of a photon is much smaller than the Hubble radius. Remember that kappa_nu is the inverse of the mean free path of the ionizing photons, and H/c is the inverse of the Hubble radius. So whenever the Hubble radius is much larger than the mean free path, we can use a simple approximation, which also assumes a steady state, in which J_nu is simply the ratio between the emissivity and the absorption coefficient. Now, in order to make use of this equation, as I said, we need to implement the radiative transfer equation in numerical simulations, and people have worked hard in the last decade to find efficient methods to couple the radiative transfer equation to the hydrodynamics of the gas, where the gas is evolving, structures are forming, galaxies are forming, stars are forming and dying. We need to couple these two problems, and this is a terribly difficult problem; many people have tried different strategies. This topic alone would require a course of its own, so I cannot cover all the possible strategies in detail, but let me give you at least a flavor of what people are trying. There are several families of strategies used to couple the solution of the radiative transfer equation to the hydrodynamics, and the first two families use the idea of ray tracing.
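The two quantities just introduced, the optical depth and the local approximation, can be sketched in a few lines. The cell values and path lengths below are illustrative assumptions:

```python
import math

# Sketch: discretized optical depth along a ray, tau = sum(kappa_i * dl),
# the pure-absorption attenuation I = I0 * exp(-tau), and the 'local'
# steady-state approximation J_nu = epsilon_nu / kappa_nu, valid when the
# mean free path 1/kappa is much smaller than the Hubble radius c/H.
# Cell values and sizes are illustrative assumptions.

def optical_depth(kappa_cells, dl):
    """Integrate kappa over cells of equal path length dl (cm)."""
    return sum(k * dl for k in kappa_cells)

def attenuated_intensity(I0, kappa_cells, dl):
    """Formal pure-absorption solution of the transfer equation."""
    return I0 * math.exp(-optical_depth(kappa_cells, dl))

def local_J(epsilon_nu, kappa_nu):
    """Local approximation: emission balances absorption."""
    return epsilon_nu / kappa_nu

cells = [1e-22, 5e-21, 1e-22]        # kappa in cm^-1; middle cell is dense
tau = optical_depth(cells, dl=3.1e20)
print(tau)                           # order unity, dominated by the dense cell
print(attenuated_intensity(1.0, cells, dl=3.1e20))
```

Note how a single dense cell dominates the total opacity, which is the discrete analogue of the clumping argument made earlier for recombinations.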
What you do is this: you have a source, and you solve the radiative transfer equation in one dimension along each ray; then you change ray and build up the 3D structure by shooting rays in all directions. This comes in different flavors, depending on whether the speed of light is assumed to be infinite or not, which corresponds in some sense to a steady-state or non-steady-state solution; these are the so-called long and short characteristics. But the basic idea is that you shoot rays and reduce the dimensionality of the equation to 1D. Then there is another family of solutions that works by taking moments of the previous equation, exactly as we do in hydrodynamics when we build the mass conservation and momentum conservation equations. You take moments of the radiative transfer equation that I showed you before and define a number of moments, usually up to second order. But this needs a closure relation, because each equation involves a moment of the next order, so you cannot close the system unless you make an assumption. This closure assumption can be made, for example, in terms of the Eddington tensor, or by assuming that the diffusion is capped at a threshold, so it is limited; there are different approximations to close this in-principle infinite set of equations that at some point you truncate. Then there are less standard approaches that work with Fourier transforms and structured grids. And another family, which I particularly like because we developed it in our group, is a statistical method based on a Monte Carlo approach: similar to ray tracing, but not exactly. You treat the radiation field and its interaction with the atoms in terms of statistical sampling of physical distribution functions, to obtain the actual radiation field intensity. So how do we know which one
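The elementary step of such a Monte Carlo scheme can be sketched very compactly: a photon packet's optical depth to its next interaction is drawn from the exponential distribution p(tau) = exp(-tau) and converted to a physical path via the local absorption coefficient. Everything here is a toy illustration, not any specific code's implementation:

```python
import math
import random

# Sketch: the elementary step of Monte Carlo radiative transfer.
# The optical depth to the next interaction is drawn from
# p(tau) = exp(-tau) via inverse-transform sampling, then converted
# to a path length with the local absorption coefficient.

random.seed(42)

def sample_path_length(kappa):
    """Draw tau = -ln(1 - u), u uniform in [0, 1); return l = tau / kappa."""
    tau = -math.log(1.0 - random.random())
    return tau / kappa

KAPPA = 1.0   # absorption coefficient, inverse length (toy units)

paths = [sample_path_length(KAPPA) for _ in range(100_000)]
mean_free_path = sum(paths) / len(paths)
print(mean_free_path)   # converges to 1/KAPPA for many packets
```

The statistical noise falls off as one over the square root of the number of packets, which is the characteristic cost, and strength, of the Monte Carlo family.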
is better? It's very hard, because in order to know the strengths and drawbacks of each single implementation, essentially what you can do is compare these different strategies among themselves. So there have been code-comparison benchmarks: a project that aimed at comparing all these different codes; I believe 11 codes participated in this effort, and it has been a huge effort. What we did was to select test problems and see how these 11 codes perform on each of them. The first one we selected is the point-source problem that I started from: remember, you have a point source, sometimes called the Strömgren sphere problem. So these are three different codes that are given the following setup: you have a point source sitting here, with a uniform homogeneous gas around it. The simplest possible problem, but in 3D, which already makes it a little more complex, at least numerically. These are probably the three codes that perform the best; I don't remember the names, but you can see that there are differences. For example, this code here produces all this fluffy structure, and look also at the contours. These maps show the ionization fraction: where you see pink the gas is neutral, where you see black it is totally ionized, and there are intermediate stages. Clearly there are differences, and this is what we are trying to understand. This type of experiment is important for us to get a feeling for how the radiative transfer codes are performing; otherwise we would just be waving in the dark without knowing which one is correct. The second test we performed has instead a dense clump sitting here, with the radiation coming from the left side of the grid, and we were looking at the
shadow produced by the clump. You have this radiation, and in principle you should recover the shape of the shadow behind the clump; that tells us a lot about the accuracy of the method. Again, you see that there are differences: this one always performs a little strangely, and there are also differences if you look between these two, but overall I would say we are not doing badly. Then we increased the complexity one step further and took a real cosmological density field, with fluctuations, inhomogeneities, everything: a realistic situation. Now we see that on average the three maps look pretty much the same, but in the details there are still differences. You see that these two codes now perform much more similarly to each other than to that one, while before it was the opposite. That tells us that, depending on the physics implemented in the code and the numerical strategy it uses, you may get somewhat different results. If you want to do precision studies this may be a bit of a problem, but the only way to make progress is to truly compare and exchange information. So if we are happy with this, more or less, 10% level of agreement (or rather of disagreement), we can proceed and say: okay, there are details to settle, but we more or less know how to do radiative transfer, also in a cosmological context. So let's be brave and go for the real problem: reionization. Now we are going to see what reionization is expected to look like in the most sophisticated numerical simulations we have available. To explain this process, in the next slides I will use this type of display: there will be four panels, and in each panel I'll show a different quantity. These are simulations performed within a comoving box of four megaparsecs, which is relatively
small. But as we will see later, the problem with reionization is that radiation is produced even by the tiniest galaxies; actually, the most important galaxies for reionization are the small ones, and we will come to that point later. So you want to resolve them in your simulation: if you take a box which is too large, which would be more representative, then because of limited numerical resolution you are missing these faint sources and you may be completely wrong. So if we want to solve the problem exactly, we are limited in some sense to relatively small volumes. Anyway, this is pedagogical, just to see what is going on. In this panel I show the neutral fraction: pink means totally neutral, black means the gas has been ionized. Here we see the UV background that we introduced yesterday; remember, the intensity of the UV background is measured at 1 Rydberg (912 Angstrom) in units of 10^-21 erg per second per square centimeter per hertz per steradian, which is the standard normalization, and we see it as a function of redshift. Here is the gas density in the box, and this is the gas temperature, which ranges from 10^3 to 10^5 K. In the center you have the scale factor, which is just the inverse of 1 + z. So let's start. Initially, at redshift around 12, you see first of all in the density panel that there is a collapsed structure here, this red blob where the density is high, which is among the first to form stars in this volume. In this case we are not including any source other than stars, no quasars or anything else; it is a very vanilla model of reionization. So these are the stars, and you see that they are starting to produce an ionized bubble which is more or less spherical, as we said at the very beginning: a round structure that becomes larger and larger. You
note that the green part here means that inside the bubble the gas has reached 10^4 K, which is dictated by the photoheating rate that we discussed yesterday. At the same time, the UV background intensity has already increased to 10^-4 in these units at this point; and this is an average over the box, so clearly inside the bubble the UV background is much larger, which is what we were discussing yesterday. There are large fluctuations: here it is much larger, there it is much lower, almost zero, and the mean is that value. This is something to keep in mind. As we proceed, you see that other sources start to produce ionizing photons; even around small galaxies you see tiny bubbles, and the bubbles start to merge with each other, so the topology becomes much more complex. Remember that these are 2D cuts through a 3D structure; in 3D it can become very complex. [Question from the audience] Yes, that's a very good point, and you're right. In fact you see that this bubble is almost certainly driven by this complex here, but it grows in that direction. Why? Look at this blue part here: the density is very low, so the ionization front proceeds into the voids, where it is easier, the speed is higher, because there are fewer recombinations in the gas to contrast the expansion. On the other side, you see there is a dense filament here that is probably blocking the expansion; that is why the bubble becomes asymmetric. A very good point. So this process continues; you see the UV background steadily increasing, now we are around redshift 8 or so, and the gas starts to be heated to temperatures of 10^4 K in the ionized regions, or even a little higher in the densest of these regions. We continue, and now you start to see two things. The first is that there is a switch
in the topology. While at the beginning we had isolated ionized regions within a neutral hydrogen sea, now we are reversing the situation: we have neutral islands, you see the pink islands, embedded in the ionized gas, because on average we have now moved beyond the 50% ionization level. So we have neutral regions embedded in otherwise ionized gas, and you see an indication here that the UV background increases; that is because the mean free path of the ionizing photons starts to become comparable to the size of the box, so every point is seeing a little bit of radiation from more or less everywhere. And so there is a sharp increase: you see how steeply the UV background rises. Now the situation has become even more dramatic: the gas has turned almost completely ionized. You see that blue corresponds to 10^-4, so only one atom in 10,000 is still neutral on average; in these regions 99.99% of the gas is now ionized. The temperature has become more or less constant at 10^4 K, with slight variations. And so the process continues: the neutral islands shrink, and they are now confined only to the densest regions, where the recombination rate, which is higher there, can keep a little bit of neutral hydrogen; but even these are heated up by the increasing intensity of the radiation field, because now each point essentially sees all the sources in the box. Finally, in the last stage you are left with these filaments here, which are also ionized to one part in 10,000 but retain a little bit of neutral gas. These are the filaments that yesterday we saw as what we called the absorbers in the spectra of distant quasars; remember that
we were looking at the spectrum of a quasar with all those lines, so these are exactly what produces the absorption lines in the quasar spectra that we have seen. At this point reionization is completed. Keep in mind that there will always be a little bit of neutral hydrogen hanging around, and of course there will be hydrogen in galaxies: even a galaxy like the Milky Way, our own galaxy, retains a lot of neutral hydrogen, but this is because the mean density in the galaxy is a million times higher than the cosmic mean, so there is no way the UV background can compete with recombination there, and the gas remains neutral. So in 3D: what we have seen were just 2D slices through the box, but in 3D you can imagine at this point what the topology could be. You have these tiny bubbles that grow hierarchically, because they grow by merging, so they merge with each other and form larger and larger complexes, and eventually all the gas is fully ionized. Now, as I already said, we can go back for a second to our picture of the absorption lines in the quasar spectra; having understood exactly how reionization proceeds, we can interpret the spectra of the quasars almost directly. Consider that you're looking along the light cone at a volume that is being reionized, like the one we have just seen, with redshift increasing towards the right. If you have a source like a quasar that is sitting here, in the already ionized gas, then what we see is the Lyman-alpha forest: this is the Lyman limit, and these are all the absorption-line features imprinted by those filaments in which we have a little bit of neutral hydrogen left. Now, if you move your quasar to higher redshift, then what you see is that the line of sight passes through regions where
the gas is already ionized, within these bubbles, or passes through the neutral islands, and these two types of situation imprint different kinds of features on the spectrum. In this case we would again have Lyman-alpha-forest-type absorption, but we also have neutral patches here that block the flux completely, so the flux is totally absorbed, and we get what is called patchy absorption: gaps intermixed with transmissivity windows where the flux can be transmitted. And of course, if you put your source at sufficiently high redshift, then you have what is called the Gunn–Peterson trough: it's totally black, the spectrum becomes completely obscured. We can also go back to the figures that I showed yesterday, because now it becomes clearer what is happening. In this high-redshift quasar you see that here we are in the regime where the spectrum is almost completely black, we have a Gunn–Peterson trough, but depending on the direction you can have regions with transmission or absorption, because different lines of sight can pass through a region which is still patchy, or through another which is fully ionized; for example, if you pass here you can go up to this redshift while always staying within the ionized gas. So there is also variability in the value of the Gunn–Peterson optical depth as a function of the direction in the sky, or of the source that you are considering, depending on whether the source is embedded in neutral or ionized gas. Now, this is what is happening in terms of the global topological properties of reionization, but we have not yet touched upon what is really producing the ionizing photons that are required; so far we have only assumed that there are photons produced by some type of source, typically in the collapsed structures of the universe. Now, the source list, just to tell you briefly: we don't know what the answer is yet, we have
some ideas, we have feelings and also hypotheses, but to tell you the truth, as we speak there is no complete consensus on what the sources of reionization are, and that definitely requires more work. The most natural choice would be stars: stars are the most natural way to produce photons, and in particular ionizing photons. But stars come in different flavors. We know that initially the gas in the universe was left by Big Bang nucleosynthesis in its primordial composition, made only of hydrogen and helium, so there are reasons to think that the stars that form out of this pristine gas have properties very different from the ones that form today; in fact the differences are so important that people gave different names to these two types of stars. Present-day stars are usually termed Population II stars, meaning they are already enriched with heavy elements, which change their properties dramatically, as we'll see in a second, with respect to Population III stars, which refer to the first stars to form, stars that form with almost primordial composition, out of hydrogen and helium. Now, this is only the first difference. Not only can a star form with or without heavy elements, which makes it a Pop II or a Pop III star, but for Pop III stars there are also theories predicting that the first stars, if you form a star out of this primordial-composition gas, could be much more massive than the stars we see today. So Pop III stars can also come in a massive flavor or a normal flavor. There are reasons to explain why this could be; I'm not sure we have time to discuss them in this lecture, but certainly it's an open possibility. Now, how does this affect reionization? Remember, first of all, that the two populations may be coeval, because in some parts of the universe you may still have some primordial-composition gas out of which
Population III stars can form, while somewhere else, in the dense regions where, as we have seen before, reionization starts, the stars that are produced there start to pollute the gas with heavy elements, and therefore you would form Pop II stars. So at the same time in the universe you can form both, and the question is in what proportion you are producing Pop III versus Pop II stars at each redshift. This is something we don't know. But the key point, perhaps even more important, is that a key quantity we need in our reionization simulations is how many photons are produced per baryon that we put into a given type of star. These are the numbers we get for three cases. For a standard present-day Population II star with a Salpeter initial mass function, if you know what that is, a power-law distribution of stellar masses that is commonly observed in our galaxy and in the local universe: for each baryon that you put into such a star you get four thousand ionizing photons; the nuclear reactions act in such a way that they produce four thousand ionizing photons per baryon. For Population III stars with the same mass distribution, the same Salpeter initial mass function, you instead get something like thirty thousand. At zeroth order this is because, in a Population III star, owing to the absence of metals the star becomes hotter; remember that metals provide the cooling function that we discussed yesterday. In the absence of metals the star is hotter, and because it's hotter it produces more ionizing UV radiation, more ionizing photons. So there is a factor of more than ten if you use Population III stars instead of Pop II stars, and even more if the stars are massive, as predicted by several studies; if, instead of ten solar masses, they
have masses of typically one hundred solar masses, then you get an incredible number of photons: one hundred thousand, so twenty-five times more photons than from the stars we see today. Obviously Population III stars would be the best possible ionizers you can think of, but the question is for how long they live, for how long their production can continue while the universe is getting ionized. So you see already from this what variety of choices you have and what the implications of those choices are. The second type of source is quasars. Quasars, differently from stars, do not get their radiative output from nuclear energy; they use gravitational accretion onto a compact object, typically a black hole: the gas gets hot and radiation is emitted. Now, the problem with quasars, at least the quasars that we know and observe, is that these are very rare objects. For example at redshift six, the quasars that I showed you before, from which we get those nice spectra, are very rare, and by rare I mean there is typically one such quasar per cubic gigaparsec. So the number density of these quasars is very small at redshift six, and even worse, it drops as you go to higher redshift. The quasars that we know are probably too rare, or if you prefer, they come too late on the scene, when reionization has already happened. However, as we were discussing yesterday, they could be important for the other type of reionization, helium reionization, which occurred later; as you know, the ionization potential of helium (He II) is four times that of hydrogen, so it's four Rydbergs, and that requires sources with a hard spectrum, with a lot of high-energy photons, like quasars. So quasars are probably not that important for the reionization of hydrogen, but they could be important for the reionization of helium. Now, supernovae: as you know, the massive stars, at the end of their lives,
they explode, and by exploding they produce something similar to the bubbles that we saw before: bubbles of shock-heated gas in which the gas reaches temperatures of 10^6 K or so, one million degrees, so it's very hot and of course ionized. In principle, theoretically, you could think that reionization occurred because the bubbles produced by supernovae fill the volume. However, if you think about it for a second, this hypothesis is not very suitable, because the sizes of the bubbles driven by shocks are much smaller than the sizes of the bubbles produced by ionizing radiation. The reason is that if you want to shock-heat the gas you have to move around large amounts of gas, and that is very energy-consuming; at the end of the day, the volume you can affect with supernova explosions is much smaller than the one you can affect with radiation. In addition, there's another problem with the supernova hypothesis, which has to do with the distortions of the cosmic microwave background produced by this hot gas. If you produce these million-degree bubbles around essentially every galaxy, then the CMB photons that pass through these bubbles on their way towards us exchange energy with the hot gas: the CMB photons acquire energy from the thermal energy of the gas. This is called the thermal Sunyaev–Zel'dovich effect, and it would produce distortions in the CMB spectrum that are not observed so far, or at least on which we have upper limits. So there is a limit on reionizing the universe in that way. To solve this problem we can go to more exotic, but equally interesting, hypotheses. There are several theories predicting that dark matter particles may annihilate or decay into a number of products, in particular high-energy
particles that could produce an ionization shower. We will come back to this point tomorrow while discussing the 21-centimeter line, because the 21-centimeter signal is actually one of the most promising ways to study this type of process, dark matter annihilation, and therefore to study the nature of dark matter; we'll see that tomorrow. In terms of reionization, in practice you can have a little bit of ionization added; there is an upper bound, and of course it depends on many details, but typically most models that have studied this tend to say that the electron-scattering optical depth you produce, that is, the number of free electrons you produce, is very limited: you cannot get to full reionization purely with dark matter without violating a number of other constraints. But certainly, if dark matter is there and is producing this type of process, then it will be possible to use it to infer something about the nature of dark matter. Then there's another class of objects, called mini-quasars. The physics here is similar to that of quasars, but as I was saying before, the quasars that we observe are very rare, and keep in mind that these redshift-six quasars are already very massive: the quasars we are talking about have black hole masses of 10^9 solar masses. So we think they must have had ancestors and progenitors, keeping the hierarchical merging framework in mind; there must be other quasars that are very faint, which we cannot see yet, but which could produce radiation. Mini-quasars are powered by black holes of 10^4 to 10^6 solar masses, so not as massive as a billion-solar-mass quasar, but if they are present in sufficient numbers they could produce a lot of extra radiation, enough to produce substantial ionization. In this class of sources I also implicitly include even more complex systems, like the high-mass X-ray
binaries, which are binary systems containing a black hole or a neutron star in various combinations, and which also produce X-rays. There is a limit on the amount of X-rays that you can produce, set by the X-ray background: you don't want to produce so many X-rays that you exceed the X-ray background that is observed. So there are constraints on that too, and if you satisfy that limit, depending on the spectrum you assume, you are bound, I have to say, to provide only about 3 photons per baryon in the IGM in ten Salpeter times, the Salpeter time being the characteristic time a black hole takes to double its mass by accretion. So there are only 3 photons per baryon, and as we have seen before, if the clumping factor is substantial this may not be enough, because you also have to beat recombination: it is usually not enough to give one photon per baryon, because recombination brings the gas back to neutral and then you have to give it another photon. So 3 photons per baryon is the bare limit once you include recombinations. And finally, the very last possible source of ionizing photons, as we were also discussing yesterday, is structure formation: we have these virialization shocks that shock-heat the gas as structures form, and the gas is then collisionally ionized. Again, this comes a little too late, at redshifts below 5 or 6, so it can matter for ionization only at those late times. This is bremsstrahlung emission from the hot plasma created by structure formation. The nice thing is that in this case you don't have to care about the escape fraction we were discussing before, because it is automatically equal to 1: this hot gas is optically thin, so the radiation produced can flow away almost directly. So, as I said before, there are many uncertainties; you see from this list alone that we don't know what the sources are, and even if you take the simplest hypothesis, that it is stars, there are
many options available, and therefore it's very hard to pin down the problem purely from first theoretical principles. So we need to stay constantly in contact with the data and the experiments to fix these numbers. The first thing to consider is where we get the information that we need. Suppose you have developed your best fiducial model for reionization: how do you know whether that model is correct or not? You have to make predictions and compare them with the data. So what are the data? The primary data we can use is the cosmic microwave background. Why do we care about the cosmic microwave background? Because the CMB is strongly affected by reionization in several ways; there are at least three ways in which it is affected, there are others too, but these are the basic ones that are mostly used. The first effect of changing the reionization history, that is, the way in which reionization proceeds as a function of time, is to damp the primary anisotropies of the CMB on all scales. We will go through these points in detail, but let me just list them. First, you have a damping of the primary anisotropies of the CMB on all scales. Second, you create new anisotropies that would not be there if reionization did not occur; these are typically found on small scales, at large multipole numbers, and for this reason they are called secondary anisotropies, because they are not in the primary spectrum but are produced as the photons travel towards us. This is the effect called patchy-reionization anisotropies. Third, reionization also affects the large-scale polarization signal of the CMB, and this is because of an interaction: a free electron produced by reionization scatters the CMB radiation, and because there is a quadrupole anisotropy in the CMB,
that induces a polarization signal in the CMB which is a clear signature of reionization. The key quantity when we discuss the link between reionization and the CMB is of course the abundance of free electrons. Reionization is the machine that creates free electrons, and all the effects we can measure on the CMB depend on the optical depth of this free-electron layer, which of course depends on the reionization history: there is a layer of free electrons between us and the CMB, and its thickness depends on reionization, so the earlier you start reionization, the thicker this layer is. This is quantified by the quantity called the electron-scattering optical depth, indicated as tau_e, as a function of the reionization redshift. Now you may ask me, what is the reionization redshift? This is a tricky question, and there is a lot of debate, and also, I should say, a little bit of confusion in the literature, because the optical depth you measure should in principle be integrated from the beginning of reionization, the moment at which you start to form free electrons, so from the earliest redshift at which ionization is produced. However, people doing CMB work, when they model the effects of reionization on the CMB, often don't want to deal with all the intricacies of reionization, so what they usually do is assume that reionization is a step function. This is a very crude assumption given what we have seen before; we have seen that reionization is a very gradual and extended process. But for the interpretation of CMB data, people often use a reionization redshift as the position of a step function where the gas turns from neutral to ionized. Anyway, whatever you choose for that redshift, the physical choice will be
to model the optical depth as essentially the product of the number density of free electrons, which is a function of redshift if you assume that reionization occurs in a gradual manner, times the Thomson cross section for the scattering of photons by electrons, integrated along the path of the photons: tau_e is the integral of n_e(z) times sigma_T times c dt, written in terms of redshift, where c dt is the radial path length of the photons. This is a quantity that you can write in a more useful, transparent way. The formula can be written with the various cosmological parameters, Omega_baryon and Omega_matter, which enter essentially through H, then we have the critical density, and these are the helium abundances by mass and by number with respect to hydrogen: by mass this would be 0.24 and by number about 8%, so this is how we correct for helium, as usual. And so we have a behavior that goes like (1+z)^(3/2). So the optical depth that can be derived from CMB experiments has a very simple expression: if you put in all the numbers and the parameters, you get that with reionization at redshift 7 you would expect an optical depth of the order of 5%, and it increases with a power of 1.5, or 3/2. The higher the redshift at which you start reionization, the higher the optical depth you get, and therefore the stronger the effect of reionization on the CMB. Remember that I told you there are these three effects: the damping of the anisotropies, the secondary anisotropies, and the large-scale polarization signal. So this is an example of what happens to the CMB as you fix all the parameters and change only the value of tau_e, the electron-scattering optical depth. The points are the data points here, and the red curve is what you would expect if, in addition to the primary anisotropies, you add this
layer of free electrons that is damping the anisotropies. You see that we are going here from a value of zero to very large values, of order two, so this would be a very strong reionization. By the way, the previous formula gives us a rough rule of thumb that I personally use: the reionization redshift is essentially the value of tau multiplied by 100, so 5% corresponds roughly to redshift 5 or 6, and 0.2 would be 20; if you used a tau of 2 here, that would be an extremely high redshift of reionization. Now, you see that in principle we can derive the value of tau, and therefore learn about reionization, from the study of the CMB. Of course this plot shows an idealized situation in which the optical depth is very large; because we are actually dealing with values that are much smaller than what we see here, the effect is buried within the measurement errors, so it's very hard to use the damping as a precise measurement of tau_e, and in fact right now we can only get very loose upper limits, which tell us that reionization occurred below redshift 20 or so. So in principle it's possible, but this is not the best way to do it. The best way to do it with the CMB is probably the second and third methods. The second method is the production of secondary anisotropies in the CMB due to the patchy morphology of reionization, made of all these bubbles, and the effect is called the kinetic Sunyaev–Zel'dovich effect. Essentially it again has to do with the fact that the photons of the CMB interact with electrons that have a velocity due to the peculiar motion of the gas, dictated by structure formation. I'll come back to this point more formally in a second, but let's first see qualitatively what is happening. This is a busy figure, so let me go through it slowly. This is the power spectrum of the CMB as a function of the multipole moment.
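To make the optical-depth scaling and the rule of thumb concrete, here is a minimal numerical sketch. It assumes instantaneous reionization at redshift z_r and a matter-dominated high-redshift universe (so tau_e grows as (1+z_r)^(3/2), as in the formula above); the cosmological parameter values (H0 = 70 km/s/Mpc, Omega_m = 0.3, Omega_b h^2 = 0.022, Y = 0.24) are my assumptions, not numbers quoted in the lecture:

```python
import math

# Electron-scattering optical depth tau_e = sigma_T * integral of n_e c dt
# for instantaneous reionization at z_r, in the matter-dominated
# approximation, so tau_e grows roughly as (1+z_r)^(3/2).
# All parameter values below are assumed, not taken from the lecture.
sigma_T = 6.652e-25                    # Thomson cross section [cm^2]
c = 2.998e10                           # speed of light [cm/s]
H0 = 70.0 * 1.0e5 / 3.086e24           # 70 km/s/Mpc expressed in s^-1
Om, Obh2, Y = 0.3, 0.022, 0.24         # matter density, baryon density, He mass fraction

n_H0 = (1 - Y) * Obh2 * 1.123e-5       # mean hydrogen number density today [cm^-3]
n_e0 = n_H0 * (1 + Y / (4 * (1 - Y)))  # add singly ionized helium (~8% more electrons)

def tau_e(z_r):
    """Optical depth for a universe fully ionized from z=0 back to z_r."""
    prefac = 2 * c * sigma_T * n_e0 / (3 * H0 * math.sqrt(Om))
    return prefac * ((1 + z_r)**1.5 - 1)

print(tau_e(7))               # ~0.05, consistent with the "about 5% by z=7" above
print(round(100 * tau_e(7)))  # rule of thumb: z_reion ~ 100 * tau_e gives ~5
```

Note how the rough rule of thumb from the lecture comes out: 100 times the optical depth for reionization at z = 7 lands around 5, consistent with "5% is roughly 5 or 6".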
Note that we are dealing here with scales much smaller than the ones we were looking at before: we are talking about multipole values of order 1,000 to 10,000, so this is the small-scale part of the CMB spectrum. As we know, the primary spectrum, which is the red line, drops because of the process known as Silk damping, the suppression of small-scale anisotropies due, essentially, to photon diffusion; so the primary spectrum is decreasing. Then we have data points here, measurements made by the Atacama Cosmology Telescope (ACT) at two different frequencies, 148 GHz and 218 GHz. You see that the measured points follow the CMB, but then they diverge and go up. This extra signal has nothing to do with the CMB; it is produced by galaxies, in particular infrared sources emitting at these frequencies. If you take the power spectrum of this galaxy distribution you get a spectrum like that, made of a shot-noise component, which is a power law here, and a clustering component that lies underneath the CMB. Then we have radio sources here, galaxies emitting in the radio, and finally we have the signal that we are after: the signal produced by reionization, with all the free electrons that combine to produce a signal which is the sum of the thermal and the kinetic Sunyaev–Zel'dovich effects. So if we want to dig out this signal, the signal is there in the CMB; you see that without this foreground contamination from the infrared galaxies we would have a very clean window here, because the CMB has dropped down, and we would be left only with the signal produced by patchy reionization. It would be a perfect experiment to perform. So how do we compute the power spectrum due to the kinetic
Sunyaev–Zel'dovich effect? Well, it's very simple: you essentially have to take the Fourier transform of the temperature fluctuation induced by electron scattering in a given direction, and the calculation is similar to what we did before for the electron scattering that enters in this exponential here. You can compute delta T, the fluctuation induced by the coupling between the electrons and the CMB photons, taking the component of the peculiar velocity of the gas along the line of sight through which you're looking at the fluctuation; you transform that and you get the power spectrum. So can we look at that? Yes, I'm almost done. We can look at this, but we have to keep in mind that when we compute the Sunyaev–Zel'dovich effect another point comes into play: galaxies not only emit UV radiation, they also emit X-rays, as I'll show in a second. These X-rays are something we also need to include in our calculation, because they dramatically change the morphology of the reionization history, as I'll show you. The simulation I showed before did not include X-rays, but if you want to compute the kinetic Sunyaev–Zel'dovich effect it's very important to include them, for reasons I'm going to tell you in one second. There is a relation between the X-ray luminosity and the star formation rate; you take this for granted, it is obtained at redshift zero, and with some prescription you assume that it also works at high redshift. Depending on how many of these X-rays you have in your simulation, things change. So this is again a reionization simulation; now we have a light cone, and these are 700 megaparsecs, very large-scale simulations, shown as a function of redshift. White here is neutral, black is ionized, and this is essentially the case that I showed you
before, in which X-rays are zero or negligible, which is the fiducial case. But because we don't know how many X-rays are present, remember that I told you they could come from quasars or high-mass X-ray binaries, if you allow as many X-rays as permitted by the soft X-ray background that I was discussing before, then you see that the morphology of reionization goes from very grainy to very smooth; in this case it's very smooth. When you compute the kinetic signal, the different smoothness of the reionization morphology has a very strong impact on the power spectrum. This is again the power spectrum of the kinetic Sunyaev–Zel'dovich effect at multipoles of 10^3 to 10^4. The fiducial case, the one we have been discussing so far, is the black curve, but if you add X-rays you depress this power spectrum, and you can get down to the purple line. This is exactly because X-rays tend to smooth out the morphology and the topology of reionization, so it becomes a much smoother process, much less dependent on position, and for that reason the amplitude of the power spectrum becomes lower. And actually we can constrain this; there is something funny here, because the actual data, if you assume that there is no contribution from the infrared sources that contaminate the signal, would be very close to the extreme case, would favor a model in which there are a lot of X-rays. Of course, the signal coming from the sources, I mean all those sources I was telling you about before, contaminates the measurement, and we are not sure we can subtract them well enough; if there is a contribution from the sources in the data, then the data point goes up here, and even the fiducial model is allowed. So this is a problem with the data: the model predictions are very clear, but what the data are telling us is still a
little bit uncertain. So this second method is a very good way to study reionization, and for the third way to use the CMB I have to defer to tomorrow. In addition to the CMB, tomorrow we will also learn how to use 21-centimeter observations to do a very detailed job on the study of reionization. I'll stop here for today.
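As a closing footnote on the kinetic Sunyaev–Zel'dovich calculation sketched above: the temperature fluctuation whose Fourier transform gives the power spectrum is usually written as the following line-of-sight integral. This is the standard form in the literature, supplied here because the lecture described it only in words:

```latex
\frac{\Delta T}{T}(\hat{n}) \;=\; -\,\frac{\sigma_T}{c}\int \mathrm{d}l \;
    e^{-\tau(l)}\, n_e(l)\,\big(\mathbf{v}\cdot\hat{n}\big)
```

Here $n_e$ is the free-electron density, $\mathbf{v}$ the peculiar velocity of the gas, and $e^{-\tau}$ the suppression by the electron-scattering optical depth; only the velocity component along the line of sight $\hat{n}$ contributes, exactly as stated in the lecture.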