Okay, this is the third lecture about the CMB, by Raphael Flauger. There wasn't much time between the lecture this morning and now, so you haven't had a chance to look at anything or think about anything, but maybe I should ask if there are questions about the previous lecture. Some of this material, in particular the part toward the end, we'll look at again, but if you have questions about anything else, feel free to ask.

So in the previous lecture we talked for some time about the effects of our motion with respect to the cosmic microwave background, and after that we started talking about the primary anisotropies in the CMB. The goal was to eventually calculate the angular power spectrum of these primary anisotropies. We said that we start the calculation at a time when we know the CMB has a blackbody spectrum, but not at temperatures so high that they complicate your life, so typically at a temperature below 10^9 Kelvin, when electrons and positrons have annihilated and the universe is filled with electrons, some protons, some helium nuclei, some neutrinos, some photons, and then some cold dark matter and maybe dark energy. The question was how to derive a system of equations that describes the universe at temperatures of 10^9 Kelvin and below. We started by focusing on the photons, and we'll continue with that, since we weren't done: first the toy model, and then the equations of motion for the photons. So this is the primary anisotropies, continued. This will eventually get us to a place where we know, at least in principle, how to compute the angular power spectrum, so we'll be able to generate these curves,
l(l+1)C_l/2π plotted against l. Then in the second part of the talk I'll start discussing the measurements: we'll try to understand what all the data points actually are and how they're derived, and we'll see how we derive parameter constraints from them, so I'll talk about likelihoods and so on.

Okay, so now let's go back to where we ended the last lecture and briefly review. We had a toy model of free massless particles in flat space, and we'll get rid of these assumptions one by one: we'll eventually go to curved space, do perturbation theory in general relativity, and make the particles interacting. For now, back to the toy model. We saw that it's convenient to describe the system in terms of the phase space density, which satisfies a simple collisionless Boltzmann equation. If you have a detector and you're trying to understand how to measure the temperature perturbations in the system, it's convenient to define this quantity, which is essentially the density contrast: you can think of it as δρ/ρ, or the contribution to δρ/ρ from photons with momenta in direction p̂. It's really just δT/T, up to a factor of 4, measured at position x in some direction n̂, which is minus p̂. We then saw that it satisfies a very simple differential equation that it inherits from the collisionless Boltzmann equation, and that because of translational invariance it's convenient to look for Fourier solutions of this form. We briefly discussed the physics of this equation, why the q·p̂ appears and what it means. Then we saw that what we eventually want to compute for the cosmic microwave background are
these angular power spectra, or the multipole coefficients, and for that it's convenient to decompose this quantity in terms of Legendre polynomials. You end up with these coefficients, transfer functions if you like, and in terms of these transfer functions the multipole coefficients take this simple form, where the α are some stochastic parameters that encode the initial conditions; we'll say a little more about them. The equations are rotationally invariant, so you can always find solutions that depend only on the magnitude of q, but the initial conditions, just like the solutions that Enrico was writing down, will have some piece that keeps track of the directional dependence; in my case it's the initial conditions. They're normalized in such a way that ⟨α(q) α*(q′)⟩ = (2π)³ δ³(q − q′), so I'm assuming that my system is at least statistically isotropic. From there you can compute the angular power spectra just by taking the ensemble average of the a_lm's: ⟨a_lm a*_l′m′⟩ = C_l^TT δ_ll′ δ_mm′, which is the angular power spectrum we're interested in. If you compute this using that ensemble average, you see that the angular power spectrum is a very simple function of these transfer functions: you're really just integrating the square of the transfer functions over the momenta, which is why they're convenient. And from what we've done so far, these quantities satisfy a coupled system of ordinary differential equations; similar equations, with T replaced by P, apply for polarization, which I haven't talked about yet. So are there questions about the various steps? There's a lot of algebra and lots of new quantities, but hopefully it's somewhat clear why we're introducing them and what we've introduced.
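As a quick numerical sanity check of the decomposition being described, here is a small toy sketch (my own illustration, not part of the lecture materials) verifying the standard plane-wave expansion e^{ixμ} = Σ_l (2l+1) i^l j_l(x) P_l(μ) that underlies the Legendre decomposition of the Fourier modes:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_legendre, spherical_jn

def legendre_coefficient(l, x):
    """Project exp(i*x*mu) onto P_l(mu):
    c_l = (2l+1)/2 * integral_{-1}^{1} exp(i*x*mu) P_l(mu) dmu."""
    re, _ = quad(lambda mu: np.cos(x * mu) * eval_legendre(l, mu), -1.0, 1.0)
    im, _ = quad(lambda mu: np.sin(x * mu) * eval_legendre(l, mu), -1.0, 1.0)
    return 0.5 * (2 * l + 1) * (re + 1j * im)

# Rayleigh expansion: the coefficient of P_l in exp(i*x*mu) is (2l+1) * i^l * j_l(x),
# which is exactly the combination that appears when you expand the Fourier modes.
x = 2.0
for l in range(6):
    expected = (2 * l + 1) * 1j**l * spherical_jn(l, x)
    assert abs(legendre_coefficient(l, x) - expected) < 1e-8
```

The multipole transfer functions are essentially bookkeeping for these projections, one coefficient per l.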
Does it make sense? If you have questions about it, now is maybe a good time to ask. If it's clear, we'll keep going and move beyond our little toy model. The first way we'll generalize it is by allowing for interactions between the particles, and this is perhaps easiest to see from the Boltzmann equation. You might know that the phase space density of our massless particles satisfies this equation, and then you will have a collision term on the other side. If we're interested in the contribution of photons with momentum direction p̂ that are coming toward us, there will be two contributions: one where photons get scattered out of the line of sight, and one where photons get scattered into the line of sight. If you run this through the derivations that we did, you find that they show up on the right-hand side in this way. It's not so surprising to see this first term; it's the one that corresponds to scattering things out of the line of sight. The term that scatters things into the line of sight is an integral over different angles, so it will be a functional of various of these multipoles, and exactly which multipoles appear depends on the interactions. Here I'm assuming an interaction similar to Compton scattering, in which case the monopole Δ₀ and the quadrupole Δ₂ appear, but in principle these are the collision terms. Even with these collision terms turned on, what's nice about this equation is that you can write a formal solution for it: you can still solve the homogeneous equation, and then write down a formal solution for the inhomogeneous equation. I call it formal because the right-hand side involves the very things you're trying to solve for, so this is clearly not immediately useful, but you'll see that it will be useful, because it's only the lowest multipoles
that actually appear in the collision terms, so we don't have to solve the full hierarchy we had on the previous slide; in practice we can solve for the lowest few multipoles accurately and then use this line-of-sight solution to get the full function. We'll see that in slightly more detail, but this is referred to as the line-of-sight integral because you're essentially solving the equation by integrating along the line of sight. We'll see it again for the photons in a little more detail, and we'll see what the hierarchy is and how it truncates. For now, if this makes sense, I'll move on and include the general relativistic effects from the perturbations in the metric. Including the perturbations is relatively simple, and everything works as before for the phase space density. The only thing you now have to be a little careful about, so that you're not picking up metrics, inverse metrics, determinants of the metric and so on, is where you put your indices: for the coordinates we use upper indices, for the momenta lower indices. The phase space density is still of this form, and you can derive equations of motion for it the same way we did before: you compute the partial derivative with respect to time, and acting on this piece it gives a contribution that looks similar to before, except that now dx^i/dt is p^i/p^0, which is what the momentum is for particles in general relativity. For the momenta, the time derivatives are given by the geodesic equations we had in one of the earlier lectures. Then you put it together: this is the partial time derivative, this is the piece from the action on the coordinates of the particles, and you see that it resembles the piece we had before, p^k divided by the energy, which is still just the direction of the momentum,
and then this term looks slightly more complicated, but it's just because the particle has to move along geodesics. That's the collisionless part of the Boltzmann equation; in general there will be collisions, in the same way we discussed: some photons get scattered out of our line of sight, some get scattered into it. Does that make sense? Okay. If you then go through all the same gymnastics, so you again introduce δ_T as an integral over p³ dp of the perturbation to the phase space density, Fourier transform δ_T(x, p̂), and expand it in terms of Legendre polynomials, then going through exactly the same steps you get a Boltzmann hierarchy for the photons that looks like this. You should be able to recognize most of the pieces, because we've seen them before. In our toy model we had exactly this piece, and it's still there for photons in a perturbed FRW universe. Here we see the scattering contribution of the collision term, and then there are additional pieces from the collisions involving ω, the rate at which scattering occurs: the number density of free electrons times the Thomson cross section times the speed of light, which I didn't put on the slide. And then you see these pieces, the metric perturbations that we introduced, which Enrico also introduced: we said δg_ij = a² (2A δ_ij + ∂_i ∂_j B + vector and tensor pieces). These are the metric perturbations that appear, and they're the ones that tell the photons how to move along geodesics. Then there's a similar equation for the polarization; this is the piece that keeps track of the polarization of the photons, and again there are collision terms. And one interesting thing
you see is about the polarization. Maybe let's take one step back and look at this system of equations at very early times, when the collision rate is high. I'll say it again in more detail in the next lecture, but let's look at the limit where the collision rate is large, and let's look at the polarization first. When ω is large, it exponentially suppresses the polarization: the solution to this equation exponentially damps the polarization multipoles. You see that there's a factor of one half here, so the suppression is partially canceled for Δ_P0 and Δ_P2, but overall there's always some exponential suppression of polarization, and the only thing that generates polarization is the temperature quadrupole, if you want. These are the terms that eventually, when scattering becomes inefficient, lead to the generation of polarization; early on there's no polarization in the plasma, and you only generate it through these terms, in particular the temperature quadrupole. It's intuitively clear why the temperature quadrupole generates polarization. If you look at the configuration we had with our wave crests, say there's a hot part over here and a hot part over here, and in between it's cooler. If I'm looking at this region, and there are photons coming in from the hotter part of the plasma scattering into my line of sight, then you know that the polarization of the photon I'm observing is predominantly transverse to the plane in which the scattering occurs: you're scattering in this plane, and you polarize the photon in this direction. Similarly, for the photons that scatter from this other direction, you will see them polarized in that direction. And because it's hotter, you have some amount of intensity; now you also have
photons that scatter from the cooler parts. A photon that scatters from below will be polarized in this direction, whereas the ones that scattered from the sides were polarized in that direction, and because the energy density in the photons is smaller here than it is there, you end up with a net polarization: there's more polarization in this direction than in that direction. Does that make sense? This is for the quadrupole; if it's too much hand-waving I can try to draw a cartoon on the board, but essentially what I'm saying is: if I look face-on at a density pattern that looks like this, and there's an electron here, the intensity of the radiation I'm observing polarized in this direction is larger, because the source is hotter, than the intensity I'm observing in the other direction, so some net polarization is generated from a quadrupole. That's not the case for a monopole or a dipole. For the monopole it's certainly not true, because the radiation from everywhere is the same, so the monopole doesn't generate polarization. It's also not true for a dipole: in that situation there's some additional intensity from here, but you're exactly making up for it on the other side, so again there's no net polarization, and you end up with unpolarized radiation. So it's really the quadrupole that generates the polarization in this case. We'll look at this system of equations in a little more detail tomorrow in the last lecture, but for now I just wanted to show you the equations; then we'll see the rest of the system, we'll know which system of equations we're trying to solve with the Boltzmann codes, and tomorrow we'll try to look at analytic solutions. The components that should be familiar to you from the stress tensor are: δ_T0, which is just the density
contrast at that place, the angle-averaged piece; the dipole, which is the velocity potential; and then in principle there is also the anisotropic stress, δ_T2. At early times, as I said and as we'll see in more detail tomorrow, Compton scattering is very efficient: you have a scattering rate that's much larger than Hubble, and as we said it drives to zero all the polarization pieces, and also all the temperature multipoles with l greater than or equal to two. What is left is the system of equations of hydrodynamics: you just end up with energy conservation and momentum conservation in the photon plasma, and this is why you can actually make some progress in finding analytic solutions. In that regime, if you're only interested in the temperature perturbations, you can solve that system of equations and just assume that you describe the fluid hydrodynamically all the way until last scattering, and then track it from last scattering; but the codes solve the full system of coupled equations. There's a similar system of equations for the neutrinos if you treat them as massless; if you treat them as massive it gets a little more complicated, so here I'm just writing the system for massless neutrinos. You see they're just like our massless particles in the toy example, except for the appearance of the metric perturbations, which tell them how to move along geodesics. For the baryons the equations are very simple: you just have energy conservation, which takes this form, and momentum conservation coupling the baryons and the photons. For the dark matter, again, you just have energy conservation. This is in synchronous gauge, and there's nothing to be said about momentum conservation, because in synchronous gauge we got rid of the two scalar perturbations in the metric, in
δg_00 and δg_0i, and there's an additional residual gauge redundancy in synchronous gauge which allows you to gauge-fix the velocity potential of the cold dark matter to zero. So that's just zero, which really means that all the velocity potentials you're writing for the baryons and so on are measured with respect to the dark matter. So the only equation you have to care about for the dark matter is energy conservation. For the metric perturbations it's somewhat up to you; there's some choice of which equations you like, and you write two linearly independent equations that you can use to solve the system. This is the system of equations that's in the codes. For example, if you now want to look at some applications, say what happens if the dark matter decays into dark radiation, then you might want a term here on the right-hand side that tells you the dark matter decays into something else, and you might want to include another copy of the Boltzmann equations for the dark radiation, which would look a lot like the neutrinos, unless you're trying to keep track of its polarization. I don't know why you would keep track of the polarization of the dark radiation, but anyway: if you want to do modifications, you modify these equations in the codes and just run them. These are the kinds of equations that are solved in the codes, and then the question is: what are the initial conditions for the equations we're trying to solve? If you go back in time: today, obviously, all the modes we're observing are by definition inside the horizon; we can't measure things that are larger than the horizon, or than Hubble. But if you extrapolate the modes we observe today backwards, they grow, because q/a obviously
redshifts like 1/a, which means it grows like 1/a going backwards. At the same time, Hubble always goes like 1/t: in the radiation-dominated era the scale factor grows like t^(1/2), so Hubble goes like 1/a²; in the matter-dominated epoch a goes like t^(2/3), so Hubble decays like 1/a^(3/2). In both cases, as you go backwards in time, Hubble grows more rapidly than the momenta grow, and so, going back in time, all the modes we observe today were outside the horizon. By that I mean that the physical momentum q/a is much less than Hubble if you go back to early enough times. This is nice in a sense, because the system of equations there becomes very simple and you can actually work out the initial conditions analytically. As I already said, the system of equations at early times really reduces to that of hydrodynamics, and you can look for solutions of this form. In more detail, if you want, you can look at them in terms of the quantity that Enrico also defined, this curly ℛ, in the same notation as his: ℛ = A/2 + H δu, except I'm using the velocity potential, and the velocity potential for a single scalar field is δu = −δφ/φ̇, where δφ is what Enrico called the perturbation. So you see that you get the same quantity that Enrico had.
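The scaling argument above, that going backwards in time Hubble grows faster than the physical momentum, so every mode we observe today was once outside the horizon, can be sketched in a few lines. This is my own toy illustration with arbitrary normalizations, radiation era only:

```python
import numpy as np

# Toy scalings in radiation domination: a ~ t^(1/2), so H ~ a^(-2),
# while a fixed comoving momentum q has physical size q/a ~ a^(-1).
# Going back in time (a -> 0), the ratio (q/a)/H ~ a -> 0:
# every mode ends up far outside the horizon at early enough times.
a = np.logspace(0, -6, 7)     # scale factor, stepping back in time
q = 1.0                       # arbitrary comoving momentum (units dropped)
H_rad = a**-2                 # radiation-era Hubble rate, arbitrary normalization
ratio = (q / a) / H_rad       # physical momentum over Hubble; equals q*a here

assert np.all(np.diff(ratio) < 0)      # shrinks monotonically into the past
assert ratio[-1] < 1e-5 * ratio[0]     # mode is far outside the horizon early on
```

The matter era works the same way with H ~ a^(-3/2): the ratio then scales like a^(1/2), which still vanishes as a goes to zero.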
For now this is really just the initial condition of my system of equations below 10^9 Kelvin. I'm not yet talking about inflation; I'm just trying to work out what the initial conditions are for my system of equations below 10^9 Kelvin. And you find, as Enrico also explained, that there is a solution for which ℛ becomes constant outside the horizon, and in terms of this constant you can analytically work out the various quantities that appear in our system of equations. With those initial conditions you can run the system forward. You can in principle implement it in Mathematica, if you feel like doing the exercise, but it will be very slow, so people have dedicated codes, and these codes really solve exactly the system of equations I wrote down for you; once you know the Δ_Tl, you know what the angular power spectra are. So there's the code called CAMB, and there's another code called CLASS. CAMB is written in Fortran and is still used very widely; CLASS is written in C. So it depends a little on what you like better, but in principle you can use both of these codes to compute the angular power spectra, and if you have models of new physics that you're trying to study, let's say interactions between the baryons and the dark matter, you can modify them, compute the angular power spectra, and compare to the CMB data. So far we've done eV physics, as Enrico was pointing out in his first lecture, and now the question is how, or why, we claim that we can learn something about the very early universe from it. We're trying to understand why the initial conditions we're imposing here have anything to do with the very early universe. Here what I'm showing you is the formula for the angular power spectrum; these were the transfer functions we had earlier, and I'm
just breaking them up into two factors, because there are really two physical contributions to the transfer functions. On the one hand, there's the physics of recombination, if you want: these are the source functions, the baryon-photon plasma and so on. Then there's another piece that just does the projection from the plane waves onto the sphere, and this depends on the geometry: in a spatially flat universe the special functions that appear are the spherical Bessel functions; if you wanted to do the computation in an open or closed universe, it would be slightly different. So there's this piece, the late-time evolution, and then there are the initial conditions, and they factorize in this nice way at linear order in perturbation theory. So far these are the initial conditions for the equations below 10^9 Kelvin. I didn't go through it, and we can maybe look at it in the next lecture, but you can really find the solution analytically, and you see that the system of equations I wrote down has five solutions that don't decay. There are also some that decay, which you don't care so much about, because they become subdominant compared to the ones that don't. You have one adiabatic solution and four that are called isocurvature solutions. I won't say much about the isocurvature perturbations, because experimentally only the adiabatic solution seems to be excited, and that's the solution for which ℛ is constant. The fact that ℛ is constant is very helpful, because it allows you to extrapolate backward in time: no matter how early you go, this quantity ℛ is constant. Now, the only caveat is that the system of equations, for which I said there was a solution with constant ℛ when the modes are outside the horizon, will eventually break down if I go to high enough temperatures, where I have electrons and positrons in thermal
equilibrium, and then eventually the quark-gluon plasma and so on. But it turns out that essentially for general matter content there is always a solution for which ℛ is constant. This is the theorem that Enrico was referring to and sketched for you; it's true for very general matter content, and so in principle you can extrapolate backwards even through epochs for which you don't really know what the physics is. On the one hand this is good, because you can go back in time; on the other hand it's not so good, because these super-Hubble perturbations are not something you can generate causally. So what you need, in this picture, is something that eventually makes q/a grow more rapidly than Hubble as you go back in time: you want the time derivative of (q/a) divided by the magnitude of H to be less than zero, and there are two ways to get it. The first way is an expanding universe, H positive: then, since q is positive, the condition is just that d/dt of 1/ȧ is less than zero, which is the same as saying that ä is positive. So you have accelerated expansion, which we typically call inflation. The other option is to have a negative Hubble rate, in which case you have a decelerating contraction. So these are the two ways out: you can inflate, or you can try to bounce. The picture for the inflationary paradigm that Enrico was describing then looks like this. I'm drawing it in a slightly different way; Enrico drew the same picture with somewhat different things on the axes, for example for him it was 1/k and the scale factor wasn't in there, but it's conceptually the same. The idea is that you have the quantum fluctuations he was describing generating the perturbations deep
inside the horizon. You have these fluctuations in your clock field, and as the universe expands, the perturbations get stretched and eventually freeze out (you saw it in Enrico's mode functions), and in the single-field models you approach a solution where ℛ is constant. If you approach that solution, it remains constant essentially no matter what happens to the matter in between. So it doesn't matter what happens at reheating, which is good, because we don't really know what happened at reheating; it's this conservation law that guarantees we can still use this to extrapolate to lower energies. You don't know how, let's say, the dark matter decoupled; there's lots of potentially unknown physics in between. But we don't care, because we have this conservation law, at least in the single-field models. It's true more generally, I guess: whenever you get into the mode where ℛ is constant, no matter whether it was single-field inflation or any other process you can think of, you stay in that solution, because it's a solution of the equations of motion. The two cases where we know this is an attractor are single-field inflation, just because there are only two degrees of freedom, so there are always two adiabatic modes, one constant and one decaying, and you have to excite the constant one; and a phase of thermal equilibrium without any conserved charges. So in those scenarios, at least, the anisotropies that we see in the CMB directly tell us about the inflationary dynamics, because they allow you to compute this quantity, the curly ℛ, which you can compute during inflation and which is conserved all the way to temperatures below 10^9 Kelvin, where you use it as the initial condition for the system of equations that I was writing. Does that make sense? And this is just what Enrico wrote down at the end of
his lecture, even in the same conventions, which is good, except that here I have δ, the fractional rate of change of Ḣ, whereas Enrico wrote it in terms of η, the fractional rate of change of ε. This is the prediction for the primordial power spectrum for the scalars in inflation, and the additional prediction of the single-field slow-roll models is that the three-point function, and non-Gaussianity in general, is too small to be observed. So these are the predictions, and you can feed them into the code, compute the angular power spectrum, and this is what gives you the red line. We'll see a little more analytically what you can do and what the physics of these equations is, but for now I just wanted to show you the equations and show you what the codes are. If any of you have used them and have problems with them, I guess you can ask, but in principle this is how you would generate the red line: you run one of these codes with your favorite initial conditions, maybe you modify some of the equations if you want to study some new physics, and this gives you the prediction. The next part will be to understand how we actually make the data points, how we actually do the CMB measurement. Okay, and before we talk about measuring the CMB, let's just look at some of the maps, because it's quite obvious that the CMB experiments are not really measuring just the CMB: they measure all kinds of things in the sky. They are experiments that map the sky at a range of frequencies. For COBE DMR, for example, it was at 31.5, 53, and 90 GHz, and there you have fairly noisy maps in which you mostly see the galaxy. As you go to WMAP: WMAP mapped the sky at five frequencies over nine years, from 2001 to 2010, and you see the K band at 23 GHz, and then the Ka band, Q band, V band, and W band. And you see that a lot of what you're seeing is not the CMB: you see a fair amount of primary anisotropies, but you
also see a fair amount of other stuff, in this case synchrotron emission. At 100 GHz you're starting to see some dust, but there are other things in the maps, and we'll talk a little about them, but not too much. Here are the maps from the Planck satellite. Planck mapped the sky at nine frequencies between 30 GHz and 857 GHz; it's really broken up into two experiments with two different technologies: the Planck LFI measurement, from 30 to 70 GHz, and the Planck HFI measurement, from 100 GHz to 857 GHz. LFI stands for low frequency instrument, HFI for high frequency instrument. At low frequencies the detectors were radiometers, similar to the technology WMAP used; at high frequencies they were bolometers. These are the maps, and you see that as you go up in frequency you start to see a lot of dust, for example at 217 GHz. You can make this map look much nicer, but I'm just showing you on the same scale how dusty it is, then at 353 GHz, and then the scales change for the 545 GHz and 857 GHz maps. So in addition to the CMB there's a lot of stuff in the maps that you actually have to understand, at least at some level, and take out if you want to learn about the CMB. There's the emission from dust grains in our galaxy, there's synchrotron emission from electrons in the magnetic field of our galaxy, and then there are also some things that are actually interesting, and some people find these interesting in their own right. In principle you can study the physics of dust grains, how they align with the magnetic field, what their composition is, the size distribution of these things; for the CMB measurements they're really just a nuisance. But there are some other things that are still interesting in cosmology, where you can learn about the late universe. I mean, there's some information about reionization, because some of the photons that come to us
from the last scattering surface re-scatter off electrons after the universe becomes reionized: when the first stars form, they reionize the universe, and some of the photons scatter again. Then there's also the thermal Sunyaev-Zel'dovich effect, which I'll briefly show, there's the kinetic Sunyaev-Zel'dovich effect, and there's lensing, which I'll briefly say a few words about. So there are a number of things in the maps. Where is the dust mostly coming from? The dust I'm talking about here, or what I call dust, is all dust from our galaxy, and obviously if you look at the maps, most of the emission is in the galactic plane, but even at high latitudes there is some emission from galactic dust. So there's basically no region where you don't see any galactic dust; essentially over the full sky there's some emission from thermal dust. There is dust even in the solar system, though maybe it gets pushed out. Dust is really a somewhat broad term: there are the polycyclic aromatic hydrocarbons, which are maybe at the lower end of the spectrum in terms of size, so these are just large molecules, and then it goes up to, typically, micron-size dust grains. They could be silicates, they could have some iron inclusions, they could be made from all kinds of things, but typically they're made from carbon or from silicates, and they're just leftover material from stars that exploded and so on, material that's left out there. They also have different dust grain geometries. I don't know if we want to talk too much about dust, but the reason it's a nuisance is that it also emits polarized radiation, and this is because it actually aligns with magnetic fields. Exactly how that happens is, I think, not completely understood, but one of the ideas at least is that you have some dust grains with an irregular shape, and they get hit
by light coming from stars, which makes them spin, and if you have a spinning particle you induce a dipole moment, and the dipole moment makes it precess around the magnetic field. Eventually it loses some energy and aligns further and further with the magnetic field, and then you have these dust grains that are aligned with the magnetic field, and they predominantly absorb and emit along their longer direction. What you see here is all galactic foregrounds, all dust; there are some parts where you see a little bit of yellow shining through, and this is where the CMB is. So there's some CMB at 353 GHz, but it's mostly dust that you see at 353 GHz. Okay, so now the first effect I wanted to say at least one or two words about, though not too much either: the thermal Sunyaev-Zel'dovich effect. This arises from the scattering of the photons off hot electrons in clusters. So if you have a cluster somewhere, and you have the CMB that's emitted, and you're observing it, there will be an effect where some of the CMB photons scatter out of the line of sight, so some of the photons you would have seen are removed from the line of sight, and there will also be an effect where some photons scatter into the line of sight, and they scatter off hot electrons in the gas. So there are electrons at some temperature, and this leads to a spectral distortion: you're upscattering the photons, and you can compute it, and it looks like this. It's proportional to the number density of electrons and their temperature. So what you can do is make a map of what is called the Compton y parameter, which tells you about the hot gas between us and the surface of last scattering, and this effect you can see in the maps. So as I said, there's this shape, and this is what's shown here: this is the function you saw on the previous slide. At low frequencies, what you expect, if you're looking in the
direction of a cluster, is that the CMB in that direction actually looks colder, because you see this deficit here. You see it at 44 GHz, 70 GHz, 143 GHz, and you see how it becomes colder and colder, and then at 217 GHz you expect to see nothing: the thermal Sunyaev-Zel'dovich effect predicts a zero there. This is why a lot of CMB experiments actually have a frequency in that range; for example, Planck has a 217 GHz band, and ACT and so on have 220 GHz channels. This is because, to actually measure the thermal Sunyaev-Zel'dovich effect, you want to have one channel where you don't see it and then other channels where you do see something, so you can measure the spectrum. You see that you need at least two of them, because otherwise you couldn't tell if it's just a CMB fluctuation, but if you have one channel where you don't expect the SZ effect and then channels where you do see a deficit, or decrement, in the temperature, you can actually extract it. For Planck, what's nice is that you also have channels above 217 GHz, where the clusters show up as hotter rather than colder. So this is how it shows up in the maps: you really see by eye the decrement and then the increment. Now, from the ground it's difficult to do these measurements, because you only really have four atmospheric windows, so you can measure at around 40 GHz, 90 GHz, 150 GHz, and 220 GHz; the higher frequencies you can only do either from space or from balloons. But this is a nice example of the SZ effect, and you can use it: in your maps you look for these kinds of point sources, with a decrement at low frequencies and an increment at high frequencies, and you can map out all the clusters. So this is a map of the SZ clusters in the Planck maps, and you see that you're detecting a fair number of them, and in principle you can do cosmology with it. You can look at the number counts, which will be very sensitive to sigma-8 and so on.
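The shape of the tSZ distortion and its null can be checked numerically. This is a sketch under the non-relativistic approximation (my own illustration, not from the lecture): the fractional temperature change is ΔT/T = y·f(x) with f(x) = x·coth(x/2) − 4 and x = hν/(k_B·T_CMB), and the zero of f sits near 217 GHz:

```python
import numpy as np

T_CMB = 2.725                                  # CMB temperature in K
h_over_kB = 6.62607015e-34 / 1.380649e-23      # h / k_B, in K * s

def tsz_spectral(nu_ghz):
    """Non-relativistic thermal SZ spectral function f(x) = x*coth(x/2) - 4,
    with x = h*nu / (k_B * T_CMB); DeltaT/T = y * f(x)."""
    x = h_over_kB * nu_ghz * 1e9 / T_CMB
    return x / np.tanh(x / 2.0) - 4.0

# bisect for the null frequency: decrement below it, increment above it
lo, hi = 100.0, 400.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if tsz_spectral(mid) < 0 else (lo, mid)
nu_null = 0.5 * (lo + hi)                      # lands close to 217 GHz
```

This is why a decrement at 143 GHz plus an increment at 353 GHz is such a clean cluster signature.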
Or you can make a Compton y map, so not just the clusters, but really a map of the Compton y parameter that I showed earlier, and then you can try to measure its power spectrum, so in principle you can also do cosmology with the normalization of the thermal SZ effect. All these small-scale measurements will become more and more prominent, because there will be lots of experiments over the next few years that measure the CMB at high resolution, so you can do a lot of tSZ and kSZ science. You can also do a lot of interesting things with the lensing. If you imagine that the CMB is emitted at some fixed redshift, which to a good approximation is what's happening, then if you have a lens somewhere, maybe not along the line of sight but just next to your line of sight, you actually don't see the temperature of the primary anisotropies exactly in that direction; you see the unlensed CMB shifted by the derivative of the lensing potential. Intuitively this should be clear: if you have a cluster, the light will be deflected by the cluster. This has a number of effects on the cosmic microwave background. One of them is that it washes out, well, maybe not washes out, but somewhat smooths the peaks in the angular power spectrum, and this is something that's included in all the analyses. So let me briefly describe what the effect is. Let's imagine you're trying to measure the angular power spectrum in some direction of the sky, and there's some lens that magnifies the CMB. The primary CMB anisotropies have a power spectrum, so this is l(l+1)/2π times C_l plotted against l. If your primary anisotropy, the theory input, has some power spectrum that looks like this, and you have a lens that magnifies the CMB, it means that you're shifting everything to somewhat larger angular scales, or lower l, so you get something that will look like
this; this is obviously an extreme version. And then you might have a place where you demagnify the CMB, so you're moving it to higher l. These are what you would get if you were to imagine measuring the spectrum in small patches, and what you're doing with the full-sky measurement is averaging over all these patches. So you see that if you average these curves, you fill in the minima a little bit and lower the maxima a little bit; this is the basic effect of the peak smearing from lensing. In addition, the lensing leads to three-point correlations, so there is some non-Gaussianity in the CMB maps, just not the kind we're usually interested in for inflation. There are three-point correlations in the CMB because the integrated Sachs-Wolfe effect, which arises from the time variation of the gravitational potentials, and the lensing, which arises from the spatial derivatives, are correlated through the equations of motion, and this is something that's always subtracted in the three-point analysis of Planck, so this effect is taken out, or taken into account. Then at four points you also have a non-trivial four-point function that you can use to measure the lensing power spectrum. The four-point function has this schematic form: it's proportional to the temperature power spectrum squared times the lensing power spectrum, and you can use this to measure the lensing power spectrum. You've probably seen the plots from Planck; it's detected at very high significance, so depending on exactly how you count, it's detected at around 40 sigma. You see the different Planck measurements, 2013 and 2015, and the ACT and SPT measurements. There are still some issues in the measurements, so usually it's cut off somewhere here, because some of the null tests fail at high l, but it's a very nice measurement of the lensing power spectrum. So you can really generate a map of the matter between us and the surface of last scattering, and this map of the lensing potential is what I'm showing here.
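The peak-smearing argument can be illustrated with a toy spectrum (entirely my own illustration; the functional form and the 2% magnification are arbitrary assumptions): average a patch whose features are shifted to lower l with one shifted to higher l, and the minima fill in while the maxima come down:

```python
import numpy as np

def toy_dl(l):
    """Toy 'acoustic' spectrum: damped oscillations, purely illustrative."""
    l = np.asarray(l, dtype=float)
    return (1.0 + 0.8 * np.cos(l * np.pi / 300.0)) * np.exp(-l / 1500.0)

ell = np.arange(2, 2000)
unlensed = toy_dl(ell)
magnified = toy_dl(ell * 1.02)      # features appear at lower l in this patch
demagnified = toy_dl(ell * 0.98)    # features appear at higher l in this patch
averaged = 0.5 * (magnified + demagnified)   # crude stand-in for the sky average
```

Comparing `averaged` with `unlensed` at a trough (l = 300 for this toy) and at a peak (l = 600) shows the smoothing: troughs come up, peaks come down.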
The gray is obviously just the galaxy being masked, because there's too much dust to do the reconstruction, but you see that you now have a, let's call it COBE-like, map of the matter between us and the surface of last scattering, and this will get much better with future experiments; a number of ground-based experiments will make very nice lensing maps. Okay, so this is what I wanted to say about the things that are not primary anisotropies, and now I'll say a few words about actually measuring the angular power spectrum. [Audience question] Yeah, I'm not sure exactly if there's anything specifically interesting about the point at l equals 100, but I can try to look at it; I haven't looked at that point. It's not a highly significant difference, but it's maybe worth checking what actually happened there; I haven't looked into it, so I don't know. I should check, but at least it doesn't fail the null tests; down here everything looks under control. Where the problems still are in the lensing measurement is out here: there are some null tests that fail, for example you see a curl, which you should not see from lensing, out at high l. So I think the low l is under control, but it's worth maybe checking this point. I don't remember, so maybe one should ask someone who worked on it. Any more questions? [Audience question] Ah, so CAMB doesn't really generate maps, it just generates angular power spectra. You tell CAMB the cosmology you would like to compute the angular power spectrum for, so you tell it what Omega_b is, what Omega of cold dark matter is, what the various parameters are, H_0, tau, n_s, and the amplitude of the scalar perturbations, something like this, and then it computes for you the angular power spectra. So it gives you this plot: it computes the temperature anisotropies, or the angular power
spectrum of temperature anisotropies, and also the polarization; it doesn't really generate maps. These maps are not measurements; they are maps generated from a given set of C_l's, and if you want to do that, you should use something called HEALPix, which I didn't want to say much about, but if you're interested I can tell you where to find it and how to use it. CAMB really only generates angular power spectra, not maps. [Audience question] The ground-based experiments will be very good at measuring the clusters eventually, because they have higher resolution than Planck, so they will actually do much better than Planck did for all the tSZ and kSZ; this is something that will get much better over the next few years with Advanced ACT, SPT, and so on. They do have fewer frequency bands. I'm not sure the frequency bands will be so problematic for the tSZ, but they will be problematic for things I will talk about on Thursday, I think, for the search for primordial gravitational waves: if you're trying to look for B modes, you really have to understand the foregrounds well. I don't think right now it's holding up the tSZ and kSZ measurements, because they typically do them with two frequency bands right now, and Advanced ACT, for example, will have five; for CMB Stage 4 it's not yet clear what it will be, but right now we're typically discussing eight frequencies. So I think this kind of thing will be done very well from the ground. [Audience question] Voids, you mean, something that's not captured? As I said, there are some parts where you magnify and some parts where you de-magnify the CMB, so this is in principle captured, unless you somehow have something in mind that's beyond Lambda-CDM, but the kind of thing that is in Lambda-CDM, voids or overdensities, is included in the computations. More questions? Okay, so then let's talk a little bit about the measurement of the
power spectra and the Planck analysis. Here I'm just showing you simulated maps, so that we have an ideal measurement: there's the full-sky temperature map, and then the full-sky Q and U maps. What you do, as we said before, is compute your a_lm's from the map; this is also something you can do in HEALPix, and if you want to do it in a practical way, you would do it in HEALPix. Then, given the a_lm's, you can compute the angular power spectra in this way, and similarly from Q and U you can extract the E-mode and B-mode coefficients a^E_lm and a^B_lm, and compute all the cross spectra and power spectra in polarization from the maps. Now the question is, given such an ideal measurement, how do we go about estimating the cosmological parameters in your favorite model? What you would like to know is the probability that the universe is described by your model with some parameters theta, given the data. The theta here just stands for whatever parameters you need to describe your model: for Lambda-CDM you could use these 6 parameters; if you have some other model, theta stands for whatever its parameters are. The data D stands for the data: for the CMB it could be either the a_lm's directly or the angular power spectra; if you have galaxy surveys, it could be the matter power spectrum or galaxy power spectrum, and so on. What you're interested in is really this probability, but it's not something that you can easily compute directly. What you can do is use Bayes' theorem to relate it to the probability of finding the data given your parameters. This is something that you can easily compute: given a theory, for any set of parameters you should be able to tell me how likely it is to find a certain angular power spectrum. Then you divide by the probability for the data, which you typically don't know; this is just what you need for Bayes' theorem. And then this quantity, the probability of finding the values of the parameters you're using, is referred to as the prior.
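The first step mentioned above, going from the a_lm's of a map to an estimated spectrum, is just the estimator Ĉ_l = (1/(2l+1)) Σ_m |a_lm|². A toy sketch with simulated Gaussian a_lm's (my own illustration; full-sky and noise-free, so it sidesteps everything a real analysis has to deal with):

```python
import numpy as np

rng = np.random.default_rng(0)

def hat_cl(ell, C_ell):
    """Simulate the 2l+1 Gaussian a_lm for one multipole (a_l0 real, the
    m > 0 modes complex) and return hat_C_l = sum_m |a_lm|^2 / (2l+1)."""
    a0 = rng.normal(scale=np.sqrt(C_ell))                  # m = 0
    re = rng.normal(scale=np.sqrt(C_ell / 2), size=ell)    # m > 0, real parts
    im = rng.normal(scale=np.sqrt(C_ell / 2), size=ell)    # m > 0, imag parts
    return (a0**2 + 2.0 * np.sum(re**2 + im**2)) / (2 * ell + 1)

# the estimator is unbiased: averaged over realizations it returns C_l
est = np.mean([hat_cl(500, 1.0) for _ in range(200)])
```

On a real pixelized map you would let HEALPix do this step for you rather than handling the a_lm's by hand.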
Because you usually don't know the denominator, you define a likelihood, which is just given by the probability for the data given the theory. This is the likelihood, and you should be able to compute it for any model you have. As a warm-up, we'll look at just the temperature anisotropies, and then we'll add back the polarization. For the temperature anisotropies, what we said was that the ensemble average of a_lm times a*_l'm' gives you the angular power spectrum times delta_ll' delta_mm', and these are Gaussian random variables, so the probability of finding a given set of a_lm's can be written in this way. This is exactly a Gaussian probability distribution, and this factor is just there to properly normalize it; you can convince yourself that you get back these correlations for the a_lm's from this probability. So in this case you can write down the exact likelihood. This is for a given l, and the different l's are statistically independent, so the likelihood is just the product of the contributions from the individual l's. The exact likelihood in this case looks like this, or, if you want to write the likelihood in terms of the C_l's instead of the a_lm's, you can just rewrite it: you move this factor up into the exponential by turning it into a log, and you get something like this. There's a change of variables involved, because there you were computing the probability for the a_lm's and here the probability for the C_l's. But these are the likelihoods, either for the a_lm's or for the C_l's, and this is something you can now compute, because using CAMB you know how to compute the theory C_l's, so you can compute the likelihood, the probability of finding the a_lm's or the C_l's. This is for temperature; if you want to go on and include polarization, it's very simple.
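For a single multipole, this exact C_l likelihood is, up to an irrelevant constant, −2 ln L = (2l+1)(Ĉ_l/C_l + ln C_l), and it peaks at C_l = Ĉ_l. A quick numerical check (my own sketch; the numbers are arbitrary):

```python
import numpy as np

def neg2_log_like(C_l, C_hat, ell):
    """-2 ln L for a single multipole, up to a constant: Gaussian in
    the a_lm, hence non-Gaussian in C_l."""
    return (2 * ell + 1) * (C_hat / C_l + np.log(C_l))

ell, C_hat = 100, 1.3                  # arbitrary illustrative values
grid = np.linspace(0.5, 3.0, 2501)     # trial theory values of C_l
C_best = grid[np.argmin(neg2_log_like(grid, C_hat, ell))]
```

Setting the derivative with respect to C_l to zero reproduces the same thing analytically: the maximum-likelihood theory spectrum equals the measured Ĉ_l.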
You just define a vector that includes the temperature multipoles, E modes, and B modes, and then you can define the correlation of a with a-dagger as C_l delta_ll' delta_mm'. This still has to hold because of rotational invariance, and the C_l is now the matrix of the various angular power spectra, so you have TT, TE, EE, and BB, and you can still easily write down the exact likelihood: it's still Gaussian, just of this form, where the a_lm's are these vectors. I don't know, does that make sense? [Audience question] Yeah, so CAMB computes the various power spectra: it computes TT, TE, EE, everything you want, BB, and it also computes phi-phi and phi-E, where phi is the lensing potential. So it computes all of those for you, and you can use them in the likelihood. I didn't include the lensing here; in principle you could fold the lensing into this as well, because you're measuring other modes. But these are the exact likelihoods you would write down if you include polarization, and this is for the ideal measurement. If you have a real measurement, then obviously there's noise, you have finite resolution, the maps are pixelized, which you have to take into account, there are pixel window functions, you have to mask the galaxy because of the galactic plane; there are a lot of things you have to take into account. A lot of this, we'll see, is easy in pixel space. So one thing that's useful to point out, related to what I was starting to say: the likelihoods we were writing down are Gaussian in the a_lm's, but not in the C_l's, and so it's easy to incorporate all these effects in map space, where the likelihood is Gaussian and exact. It's very easy to do in map space: you just look at the correlations between different pixels, and you know what their covariance is. These are the angular correlation functions, and this is the noise
covariance matrix, which you know for the experiment. So in principle, in pixel space you know this, and it's easy to write down the probability distribution for the temperature in a given pixel; it just looks like this, and it's easy to generalize to polarization, in the same way we did before. So you can write this down and extend it to polarization. The problem is that if you wanted to do this for Planck, in practice you have 50 million pixels, so the covariance matrices, the matrices you have to invert, are 50 million by 50 million, and in practice this isn't feasible. So you have to find other ways, and what is done is to cut the likelihood into different pieces: you use a likelihood in map space for low l, and then at higher l you use approximations for the likelihood for the C_l's, what is called a pseudo-C_l likelihood. One of the approximations you can make, and this is what Planck actually used in their analysis, is to just say, well, we'll approximate the C_l's as Gaussian. This isn't really true; they're distributed according to a chi-squared distribution with 2l+1 degrees of freedom, but you know that at large l, at least, because of the central limit theorem, this will approach a Gaussian distribution, so at large l this is okay. WMAP added a log-normal piece to correct for the non-Gaussianities in the likelihood; Planck didn't, but most of the information is really coming from very high l, where these effects presumably don't matter all that much. The covariance matrix here is evaluated for some fiducial cosmology, so you're not varying it; typically you start with some cosmology that you think is roughly correct, then you do the measurement and iterate, recomputing the covariance matrix a few times until you have something consistent. And the nice thing about this approximation and the covariance matrix is that you can compute it analytically, even if you mask the galaxy and have noise.
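You can see both statements, the chi-squared distribution with 2l+1 degrees of freedom and the approach to a Gaussian at large l, in a quick simulation (my own sketch, not part of any actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_hat_cl(ell, C_l, n):
    """Draws of hat_C_l = C_l * chi^2_{2l+1} / (2l+1), the distribution
    of the power spectrum estimator for Gaussian a_lm."""
    dof = 2 * ell + 1
    return C_l * rng.chisquare(dof, size=n) / dof

low = sample_hat_cl(5, 1.0, 200000)     # visibly skewed at low l
high = sample_hat_cl(500, 1.0, 200000)  # close to Gaussian at high l

# cosmic variance: Var(hat_C_l) = 2 C_l^2 / (2l+1), so it shrinks with l
```

At l = 5 the variance is 2/11 of C_l squared and the distribution is clearly skewed; at l = 500 it has shrunk by two orders of magnitude and the Gaussian approximation is quite good.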
I'll briefly show you some of the things that go into it, although I'm not sure it's a good time in the evening at 5pm. What you have are masked sky maps, so you're really multiplying the maps by some weight function. For WMAP the weight was either 0 or 1: 0 for the pixels you throw out, 1 for the ones you keep. For Planck it's slightly more complicated: the mask is apodized, which means that it's actually a function that smoothly goes between 0 and 1, and you have to take this kind of weighting into account. The maps that you're using, the masked maps, are weighted maps, the measured maps multiplied by some weights, and you can compute the multipole coefficients from these maps in just the same way. What I'm writing here, where Omega_i is the small area of a pixel, is effectively a discretized version of the integral we had before, of Y*_lm times Delta T of n-hat; these are the pseudo a_lm's, and from them you can compute the pseudo power spectra. These are not the power spectra you're interested in, but they are related to the power spectra you're interested in by what is called a mode coupling matrix, and then these are pixel window functions and this is a beam window function from the experiment. So there are a number of other pieces that go in, and then there's some noise. In any case, I'm just trying to show you that in principle it is straightforward to compute this and to extract the power spectra by inverting the mode coupling matrix. If you then want to compute the covariance matrix, you can, given what I just showed you on the previous slide, compute it analytically, and it looks like this. It's a bit of a mess, but you can easily implement it; these are just the various pieces that go in, and if anyone is interested I'm happy to talk about it more.
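As a cartoon of the pseudo-C_l idea (a one-dimensional toy of my own, not the actual curved-sky machinery): masking multiplies the map by weights, which biases the raw spectrum low; for a flat (white) spectrum the mode-coupling correction collapses to dividing by the mean squared weight, the analogue of an f_sky factor:

```python
import numpy as np

rng = np.random.default_rng(2)

n = 4096
signal = rng.normal(size=n)            # toy "sky" with a flat (white) spectrum
w = np.zeros(n)
w[: n // 2] = 1.0                      # binary mask keeping half the "sky"
f_sky = np.mean(w**2)                  # = 0.5 here

def mean_power(x):
    """Mean power per Fourier mode (Parseval-normalized)."""
    return np.mean(np.abs(np.fft.fft(x)) ** 2) / len(x)

pseudo = mean_power(w * signal)        # biased low by the mask
corrected = pseudo / f_sky             # crude mode-coupling correction
# for a non-flat spectrum you would invert the full mode-coupling matrix
```

For a real, colored spectrum on the sphere, with apodization, pixel and beam windows, the scalar 1/f_sky becomes the full mode coupling matrix you have to invert.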
It's just to give you an idea of what these things look like. So this is what's used in practice: an analytic approximation to the covariance matrix, and because these approximations fail in certain cases, you then use simulations to improve them, but it's based on this covariance matrix. So what you do, as I said before, is use hybrid likelihoods: a pixel-based likelihood on large angular scales, which is exact but very lengthy to evaluate, so you use it only there, and on small scales the pseudo-C_l likelihood we just described. You stitch them together at around l of 32 or 50, depending on the exact analysis; different analyses stitch them together at slightly different places. So now the question is, once you have a likelihood, how do you extract the cosmological parameters? One way you could do it is to just evaluate the likelihood function on a grid: you have a six-dimensional space, you could break it up into a grid and evaluate it, but it would take you 10 to the 6 evaluations even if you just want 10 points in each direction, which isn't really enough, so you would have to evaluate the likelihood a very large number of times. Typically this is not what you do, and in particular the problem is that there are lots of foreground parameters: in Planck, for example, you typically have an additional 20 to 25 parameters that you have to marginalize over. So what you typically use is Markov chain Monte Carlo. This is again something you can download; I think the most common one is still CosmoMC, and I'm showing you the plot from its web page here, so if you type it in you can find it and play with it. This is the piece that evaluates the likelihood and gives you back the parameter constraints. What it does is use, typically, Metropolis-Hastings sampling, which chooses some starting point in your parameter space and computes the
likelihood there. So you have your parameter space, let's just draw it two-dimensional, and you pick some random starting point; then you pick another random point and compute the likelihood ratio. If the new point is more likely, you move there; if it's less likely, you move there with probability epsilon, where epsilon is this likelihood ratio. So this is like a thermal process, like an e to the minus E over kT, some Boltzmann suppression, and there's some temperature associated with it that you can in principle adjust, but it just jumps around and samples the distribution. This is repeated until you have enough points, and if you run it for long enough, you can use it to derive the usual contours that people show: 68% of the time you will be in some region, and 95% of the time in this larger region, and so on, so you can draw contours based on the points you've sampled from the distribution. I wanted to say a few words about how the Planck angular power spectrum was measured, but maybe I'll end here. It's over, right? I'm out of time. But I have more stuff to say anyway, so I'll just continue in the next lecture. Let me end here.
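The sampling loop just described fits in a few lines. Here is a minimal Metropolis-Hastings sketch (my own illustration; the toy Gaussian "posterior", the step size, and the burn-in are all arbitrary assumptions, not CosmoMC's actual settings):

```python
import numpy as np

rng = np.random.default_rng(3)

def log_like(theta):
    """Toy 2D 'posterior': a unit Gaussian centered at (1, -2),
    standing in for the real CMB likelihood."""
    return -0.5 * np.sum((theta - np.array([1.0, -2.0])) ** 2)

def metropolis_hastings(n_steps, step=0.8):
    theta = np.zeros(2)                    # arbitrary starting point
    lp = log_like(theta)
    chain = []
    for _ in range(n_steps):
        prop = theta + rng.normal(scale=step, size=2)   # symmetric proposal
        lp_prop = log_like(prop)
        # accept if more likely; otherwise with probability = likelihood ratio
        if rng.uniform() < np.exp(min(lp_prop - lp, 0.0)):
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

chain = metropolis_hastings(20000)[2000:]  # drop burn-in
mean = chain.mean(axis=0)                  # should land near (1, -2)
```

The 68% and 95% contours people show are then just the regions containing the corresponding fractions of the chain samples.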