through it in case there are questions, so just like in the last lectures, you should feel free to ask me questions whenever something isn't clear.

Planck: I think most of you know about Planck. It was launched on May 14, 2009, and it's about the size of a car. It observed starting in August 2009. As we said, there are two instruments on Planck, the High Frequency Instrument and the Low Frequency Instrument. HFI ended observations after five surveys, in January 2012, and LFI kept observing for another year and a half, until August 2013. Since then the measurement is basically done, but there's still a lot of work to be done in analyzing the data. The first data release was in March 2013, for the nominal mission data, which means the first 15.5 months of the mission, and the first full-mission release was in February 2015. There will be another release and another set of papers, supposedly this year, though maybe it will be early 2017, we'll see; so far the one thing that's been consistent is that the deadlines have always shifted.

So this is the experiment, and in the previous talk we already saw the maps at the various frequencies. Now I'll describe for you the measurement of the angular power spectrum the way it was done by the Planck collaboration. Obviously there's more to be done than measuring the power spectrum: you can measure the bispectrum, you can measure SZ clusters, you can measure a lot of things. I'll focus just on the power spectrum, because that's where the constraints on the ΛCDM parameters we've been discussing come from.

Just like we discussed in the last lecture, the likelihood is a hybrid. There's a pixel-space likelihood, which is a nice likelihood because we know the CMB is Gaussian to a very good approximation, so it's essentially exact. But it's expensive to compute at the resolution of Planck, because you would have covariance matrices that are 50 million by 50 million, and that's just for temperature. So instead you use the pixel-space likelihood only at low ℓ, and then, in the case of Planck, a fiducial Gaussian approximation at high ℓ. This is the part we saw yesterday. By fiducial I just mean that it depends on some fiducial model: you start with a fiducial model that you think is close to the real data and then you can iterate, so compute the covariance matrix, extract parameters, feed them back in, and eventually it should be stable.

The 2013 and 2015 analyses are here on the two sides, because there were small changes. In the first round of analysis, the low-ℓ polarization data that was used was not Planck's own polarization data but still came from WMAP; in 2015 Planck used its own polarization data from the Low Frequency Instrument. You see here that the fraction of the sky used by Planck was substantially smaller than the fraction used in the WMAP likelihood, and this was, for the most part, because of the contamination by dust. For the low-ℓ temperature data, not too much changed between 2013 and 2015: it's some pixel-based likelihood at low ℓ, with slightly larger sky coverage in 2015 than in 2013. I'm not describing in detail what Commander is, but conceptually it's one of the likelihoods that we discussed yesterday.
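To make the high-ℓ piece concrete, here is a minimal sketch of a fiducial Gaussian likelihood; this is not the actual Planck code, and the function names and the simple full-sky covariance are illustrative assumptions:

```python
import numpy as np

def gaussian_highl_loglike(cl_theory, cl_hat, cov_fid):
    """ln L up to a constant: Gaussian in the measured C_l's,
    with a covariance fixed at a fiducial model."""
    r = cl_hat - cl_theory
    return -0.5 * r @ np.linalg.solve(cov_fid, r)

def fiducial_cov(cl_fid, nl, fsky=1.0):
    """Toy covariance for an ideal (full-sky, symmetric-beam) survey;
    the real analysis includes masking, beams, and foregrounds."""
    ells = np.arange(len(cl_fid))
    var = 2.0 * (cl_fid + nl) ** 2 / ((2 * ells + 1) * fsky)
    return np.diag(var)
```

The point of the "fiducial" structure is visible here: cov_fid is built once from a fiducial spectrum and held fixed, or iterated a few times, while cl_theory varies in the parameter fit.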
[Question about the dust.] Yeah, it's the same dust on the sky, but there was no measurement of the dust. If you look at the frequency coverage of WMAP and Planck: WMAP has five frequency bands, from 23 GHz to 94 GHz, and they're all polarized. For Planck, as you saw in the maps, you have data from 30 GHz to 857 GHz, and only the highest two channels are not polarized, so the highest polarized frequency is at 353 GHz. We haven't said much about the dust, but the dust intensity grows as you go to higher frequencies. So while you can't really measure the dust well with WMAP, because you're missing the high-frequency information, you can make very good dust maps with Planck. So yes, they see the same dust, but WMAP modeled the dust: there was some assumption about roughly what the dust should be, and Planck saw in its maps that the dust is slightly higher than the model WMAP was assuming. This is also responsible, for example, for the shift in the optical depth between the two measurements, which is coming from these low-ℓ likelihoods. From WMAP you have something like τ = 0.089 plus or minus something like 0.013 (this depends a bit on exactly what temperature data you combine it with), and the Planck low-ℓ likelihood gives you something like 0.075, with slightly larger error bars because the sky fraction used was smaller. One of the reasons for this downward shift was the dust modeling: some of the optical depth in the WMAP measurement is really from dust. There really is some dust that you have to remove, and you can measure it with Planck but you couldn't measure it with WMAP. So you should probably think of the WMAP number as a measurement that's biased to a slightly higher value. I don't know if that helps, but okay.
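Since the rising dust spectrum is the key point here, a small sketch of the commonly used modified-blackbody scaling; the spectral index β = 1.6 and dust temperature 20 K below are typical illustrative values, not the Planck fit:

```python
import numpy as np

H_OVER_K = 4.799e-11  # h/k_B in K/Hz

def dust_intensity(nu_ghz, beta=1.6, t_dust=20.0):
    """Modified blackbody: I_nu proportional to nu^beta * B_nu(T_dust)."""
    nu = nu_ghz * 1e9
    x = H_OVER_K * nu / t_dust
    b_nu = nu**3 / np.expm1(x)   # Planck function up to constants
    return nu**beta * b_nu

for nu in [94, 143, 217, 353, 857]:
    print(nu, dust_intensity(nu) / dust_intensity(94))
```

Running this shows the dust brightness growing by roughly three orders of magnitude between 94 and 857 GHz, which is why Planck's high-frequency channels make good dust maps while WMAP's bands barely see the dust.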
So then for the high-ℓ likelihood: at the level of discussion right now, both the 2013 and 2015 versions are of this form, fiducial Gaussian approximations. The high-ℓ likelihood was called CamSpec in 2013, by the Cambridge group, and then it was switched to Plik, by the Paris group. At the level we're discussing they're essentially the same, and CamSpec is also still around in 2015, but the main results are quoted for Plik. These likelihoods are based on only three frequency channels: of the nine channels from 30 to 857 GHz, Planck used only the frequencies from 100 GHz to 217 GHz for the measurement of the angular power spectrum. And you don't use the same ℓ range for all of them: at 100 GHz Planck used the spectra out to about ℓ = 1200, at 143 GHz out to about 2000, and the 143×217 and 217×217 spectra were used out to ℓ = 2500. The reason is that the detectors have slightly different sensitivities and slightly different beam sizes, and hence different resolutions: 100 GHz has a beam of about 10 arcminutes, 143 GHz about 7 arcminutes, and 217 GHz about 5 arcminutes. So at 100 GHz the noise comes in before it does in the others; the smaller the beam, the smaller the scales you can resolve, and that explains this trend. One of the differences between CamSpec and Plik is where you put the lower end of the ℓ range. If people are interested we can discuss the differences, but for now they're really the same type of likelihood, based on the 100, 143, and 217 GHz data.

Then, as we said, you cannot use the full sky, just because there's a lot of foreground emission, partly from our galaxy. This is the galactic mask, and if you look at the image closely you see that the edge is not sharp but smooth: this is what I meant by apodization, which is why we had those window functions in our formulas. Then you see there are a bunch of point sources that have to be masked, and some contributions from CO emission. And as you could also see in the maps on the previous slide, the dust emission is stronger at 217 GHz, so the mask removes more of the sky at 217 GHz. We can discuss in more detail exactly how you make these masks; in practice they're made from the higher-frequency data. You might make them, for example, from the 857 GHz map: smooth that map, threshold it at some level, and everything that emits more than the threshold you cut out. That's your mask, and then you have to decide how you apodize it. For the point sources you typically have some detection threshold, maybe 5σ or 7σ, and those holes are also apodized, even though it's a little bit hard to see.

So you have the masks, but even though you're masking, there's still galactic emission at high latitudes, and if you don't correct for it, it will bias your measurements. So what Planck did was to model the diffuse galactic emission, or rather measure the power in the diffuse galactic emission, again from the higher-frequency maps, and extrapolate down, because there's still galactic emission even if you're masking a fair amount of the sky.
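A toy version of the threshold-and-apodize recipe just described, in healpy; the file name, smoothing scales, and the 60% threshold are all made-up illustrative choices, not the Planck pipeline values:

```python
import healpy as hp
import numpy as np

# Toy mask-making: smooth a high-frequency map, threshold it,
# and keep the quiet part of the sky.
m857 = hp.read_map("planck_857ghz.fits")            # hypothetical filename
m857_sm = hp.smoothing(m857, fwhm=np.radians(1.0))  # 1-degree smoothing

threshold = np.percentile(m857_sm, 60)   # keep the cleanest ~60% of pixels
mask = (m857_sm < threshold).astype(float)

# Apodize by smoothing the binary mask so the edge is not sharp
mask_apod = hp.smoothing(mask, fwhm=np.radians(2.0))
mask_apod = np.clip(mask_apod, 0.0, 1.0)
```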
This part we already discussed: you have an analytic fiducial Gaussian approximation for the likelihood; this was the expression I showed briefly in the previous lecture. As you saw in the formula, you also have to know the noise of the experiment. In the case of Planck, what is typically done is that you take maps from, say, the first half of the mission and the second half of the mission and take the difference: the CMB and all the emission on the sky should be the same in both, so the difference should be the noise, and you can measure the noise power spectra that way. I'm showing them here; you probably don't care so much about the noise, but you do need it if you want to do the analysis. The green is the measurement of the noise power spectrum; then you take some model with a few parameters and fit it to the noise, and this gives you the Planck noise model. I'm also showing the white-noise levels that are delivered together with the maps. Obviously at 100 GHz the noise is far from white; the other channels are a little bit better, at least on small scales, but there's always some noise in there that's not white, and you just have to take it into account if you want proper error bars.

Once you have your covariance matrix and your likelihood, you run Markov chain Monte Carlo, and if you run it for ΛCDM, these are the parameters you get out. So that's roughly how the analysis works, from a somewhat coarse-grained point of view. This table may look not too different from what we had with, say, WMAP. What's really new with Planck, from the information on small scales, is that you're not just getting good constraints on ΛCDM; you're also getting very nice constraints on departures from ΛCDM. And you see that ΛCDM provides a good fit, in the sense that the data always pushes you back onto the ΛCDM model even if you allow for departures from it. For example, you might want to allow for a departure from a cosmological constant by letting the equation of state vary, or you might allow the running of the spectral index to vary, or the helium abundance, the effective number of relativistic degrees of freedom, the neutrino mass, the curvature. If you run the analysis, you're always consistent with ΛCDM: it's maybe hard to see from the back, but there are thin dashed lines in here that indicate the ΛCDM values, and you're always consistent with them within, let's say, one sigma for all the parameters. This is something you couldn't really have done with just WMAP, because you didn't have enough information on small scales. So even though the parameters didn't change very much, we now have much more confidence that ΛCDM actually describes the data very well.

That's basically what I wanted to say about the Planck likelihood. We've now seen how you would derive the red curve using the codes, and you should also have an idea of how you would modify them: you just add the equations of your new physics to the codes and run them, so you can in principle generate these curves. And you've seen a little bit of how you derive the data points. You now also understand what this dashed line in the plot means: it indicates where the low-ℓ likelihood and the high-ℓ likelihood are stitched together; this is where the transition occurs. You see that at low ℓ the error bars are clearly non-Gaussian (that's the low-ℓ pixel-space likelihood), and then at high ℓ you have the pseudo-Cℓ likelihood. This plot is maybe still a little bit misleading, because it isn't quite how the analysis I described works: you had measurements at different frequencies, and at different frequencies you have different foregrounds. What's done here is that a best-fit fiducial foreground model is subtracted at each frequency and the spectra are then co-added to make this one measurement; you would otherwise have different foreground models for different frequencies, and this is what happens when you take them out.

So now, unless there are questions about this part, I'll go back and say a little bit more about the contributions, what you're actually looking at here in the angular power spectra. [Question about how the noise maps are made: do you difference the start and the end of the mission?] Not the start and the end, but you can take, for example, the first half and the second half, or you can take surveys. Let's say for simplicity that in the first year you make a map of the full sky (it's not quite true, there are always some missing pixels and whatnot), but let's say you make one map from the first year and one map from the second year.
Most of the things on the sky, certainly the CMB, but also the emission from dust, synchrotron, and so on, should still be the same between the two. When you take the difference, there are some things you can actually see that are not just noise: for example zodiacal light, the scattering of light from particles in the solar system; how much of it you see, and where you see it, depends on the time, so in principle you can see that in some of these difference maps. But for the most part the things you're interested in are time independent. So you take the first year minus the second year, which should only be noise, and you take the power spectrum of that map; that's one way to measure it. To get the properties exactly right you really take half-mission differences, because the full mission is the average of the first and second halves: you take half of the first minus half of the second, and from that you compute the power spectrum, and that has the same statistical properties as the noise in the experiment. Strictly, this gives you the uncorrelated noise; in principle there's also correlated noise in the experiment, because all the detectors are sitting on the same spacecraft (the cooler is shaking, so some noise is introduced that's correlated between them), and there are in principle ways to extract that from the auto-spectra as well. But the simple version is: you take year one minus year two, or half-mission one minus half-mission two, and take the power spectrum to get the noise.
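A minimal sketch of that half-difference noise estimate in healpy (hypothetical file names; a real estimate would also apply a mask and correct for the beam):

```python
import healpy as hp

# Half-mission maps (hypothetical file names)
half1 = hp.read_map("hfi_143ghz_halfmission1.fits")
half2 = hp.read_map("hfi_143ghz_halfmission2.fits")

# Signal (CMB + foregrounds) cancels in the half-difference;
# the factor 1/2 keeps the noise level of the full-mission average.
diff = 0.5 * (half1 - half2)

# The power spectrum of the difference map estimates the noise N_ell
nl = hp.anafast(diff, lmax=2500)
```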
More questions about Planck? [Question about cosmic rays.] Yeah, there's a lot to be said about the effects of cosmic rays on the Planck data, in the sense that there were a lot more cosmic-ray hits than were anticipated, I think that's fair to say. Something like 98% of the time-ordered data are affected by cosmic rays, and you fit templates to the glitches. I think there's no overall degradation of the satellite, if that's what you're asking, but cosmic rays definitely have to be taken into account: 98% of the data were affected, and roughly 30% were thrown out because of cosmic-ray contamination. So it's an important effect, but it's not a large systematic when you take year one minus year two or something like that.

[Question about sky coverage.] Well, I would say Planck did cover the full sky, but maybe not in the individual surveys. The way the scan strategy works is that you have the Sun, the Earth, and then L2 somewhere out there, and the satellite just scans and scans as it moves around, so in half a year it roughly covers the full sky. Not quite: there are missing pixels in the individual survey maps, but once you take more than a single survey you do fill in all the pixels. It really depends on the maps you're using: if you take the full-mission map, there aren't 3% of the pixels missing; it definitely covers the full sky once you co-add the maps. There were some glitches early on in LFI from cosmic-ray activity that you can even see by eye; in the 70 GHz survey map, for example, there's a strip that goes across, which is where LFI was shut down for a while. So there are some missing pixels in some of the maps, but Planck did cover the full sky. You can't necessarily use the full sky, because a lot of it is contaminated by foregrounds, but it did eventually cover it. More questions?

If not, then let me say a little bit more about the temperature anisotropies and their computation. Just to remind you, in case you don't remember all the formulae we had in the previous lecture: the temperature perturbation was related to the quantity we introduced from the phase-space density, which is essentially something like a density contrast. Let me just write the definition again:

  δ(x, p̂, t) = (1/Ī) ∫ [p³ dp / (2π)³] δf(x, p, p̂, t),

so one over the average intensity times an integral over the momentum magnitude of the perturbation in the phase-space density. It satisfies the Boltzmann equation that it simply inherits from the phase-space density; this was this equation. If we're interested in the temperature perturbation at some point, say our own location, I'll call that point zero: the background cosmology is isotropic and homogeneous, so we can put ourselves anywhere we want, in particular at the origin. This was the phase-space density for the photons, and the momentum of the photon points opposite to the direction you're looking in, p̂ = −n̂. So this was basically the definition of how we get the temperature perturbations from this quantity.

We had an equation of motion for this quantity that was translationally invariant, so we looked for an ansatz of this form: we did a Fourier transform and expanded in Legendre polynomials. This was just some gymnastics that eventually led us to the Boltzmann hierarchy, which is convenient because those are exactly the quantities you want for the angular power spectrum. It looked like this. This part, you might remember, is just what we had in our toy example of free, non-interacting massless particles, and on the right-hand side there are a number of contributions: a contribution from scattering of particles out of the line of sight, contributions from scattering of particles into the line of sight, the contributions that tell you that the photons are propagating not in a flat FRW universe but in one that's slightly perturbed, and then this one, as we'll see, is the Doppler effect from the motion of the electrons. And we had a similar Boltzmann equation for polarization.

For what we'll discuss next, let's undo the last step: let's not look at the Boltzmann hierarchy but at the equation of motion satisfied by this quantity. From the previous slide it should look familiar; let me not call it relatively simple, but it is this equation. You have this part, which just describes free propagation; this part, again from scattering out of the line of sight; these pieces, which describe scattering into the line of sight, where this was the source function; and then the Doppler piece and the perturbations to the geometry. What you can do with this system of equations is write down a formal solution. We touched on this already, but maybe the solution isn't obvious to everyone, so let me briefly sketch where the line-of-sight integration comes from; I don't have it on the slides. If you look at this equation, it's an ordinary differential equation.
Schematically it's of the form

  y′(x) + p(x) y(x) = q(x),

where p stands for these contributions, this one and this one, and q is the remaining stuff. This is a differential equation you've probably solved many times. One of the standard ways to solve it is to first look at the homogeneous equation, q = 0: then the derivative of the log of y is −p, which means

  y(x) = y(x₀) exp(−∫_{x₀}^{x} dx′ p(x′)).

That's the solution of the homogeneous equation. The standard way to find the solution of the inhomogeneous equation is then to promote the constant to a function of x, so you look for an ansatz

  y(x) = a(x) exp(−∫_{x₀}^{x} dx′ p(x′)).

If you now take y′, there's one piece where the derivative acts on a(x), and a second piece where it acts on the exponential, which just gives −p times y; by construction that part solves the homogeneous equation, so what remains must equal q:

  a′(x) exp(−∫_{x₀}^{x} dx′ p(x′)) = q(x).

Now all you have to do is integrate once to get a(x):

  a(x) = a(x₀) + ∫_{x₀}^{x} dx′ q(x′) exp(+∫_{x₀}^{x′} dx″ p(x″)).

Plugging this back into the ansatz, the solution of the inhomogeneous equation is

  y(x) = ∫_{x₀}^{x} dx′ q(x′) exp(−∫_{x′}^{x} dx″ p(x″)),

plus the homogeneous piece. So you can solve the equation this way, and this is what I'm writing on the next slide. This is the solution, and you recognize the various pieces: this piece and this piece are what I called p, and you find them in the exponential here, while the rest of the stuff is essentially what I called q. It's called line-of-sight integration because the derivative here is a derivative along the line of sight; the qμ is q dotted into n̂. Now if you look at the solution, you see why it's useful. It's a formal solution at first, because the quantity you're interested in also appears under the integral in various places, but only the ℓ = 0 and ℓ = 2 multipoles appear: you need Δ_T0 accurately, and the source, which consisted of Δ_P0, Δ_T2, and Δ_P2.
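As a quick numerical sanity check of this integrating-factor solution (a generic ODE, nothing CMB-specific, with p playing the role of the scattering terms and q the source):

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

# Check y' + p(x) y = q(x) against the formal line-of-sight-style solution
p = lambda x: 2.0 + np.sin(x)      # stand-in for the scattering rate
q = lambda x: np.exp(-0.5 * x)     # stand-in for the source term

x0, x1 = 0.0, 5.0

# Direct numerical integration with y(x0) = 0
num = solve_ivp(lambda x, y: q(x) - p(x) * y, (x0, x1), [0.0],
                rtol=1e-10, atol=1e-12).y[0, -1]

# Formal solution: y(x1) = int_{x0}^{x1} dx' q(x') exp(-int_{x'}^{x1} p)
inner = lambda xp: q(xp) * np.exp(-quad(p, xp, x1)[0])
formal = quad(inner, x0, x1)[0]

print(num, formal)   # the two agree to numerical precision
```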
So you solve the Boltzmann hierarchy truncated at a relatively low ℓ, high enough that you can compute those few multipoles with high precision, and then you can compute all the other Δ_ℓ using this line-of-sight integral. That's what makes it simple, and this was a big breakthrough: before, people were solving the full hierarchy, which is computationally very expensive; this is what the codes use now, and it's very simple.

It's also nice because you see that there are really two different kinds of contributions to the temperature perturbation. If you look at this formula, there's one set of contributions that comes with the factor ω e^{−∫ω}, and this is the probability of last scattering: all the first pieces are proportional to the probability of last scattering, and the second piece is not. Here I'm writing it out in more detail. This is the first contribution, proportional to the probability of last scattering, so all these contributions come from the time of last scattering; we'll discuss them in more detail. One thing you can do, just out of curiosity, is see what the power spectrum looks like if these are the only effects you include. You see you get a fair amount of the spectrum correct: there's something you more or less get right on large angular scales, which we'll talk about more, and on the small angular scales you also get it roughly correct; some pieces are missing, and we'll discuss them, but this is what happens if you only keep that contribution.

In that contribution there are two terms. The first one combines the intrinsic temperature perturbations at the surface of last scattering with the gravitational potentials, which correspond to the redshifting. The reason I'm grouping them together is that they are not independently gauge invariant: people often talk about them separately, but they're not separately gauge invariant, so I group them. And this here is the Doppler effect, and maybe it's clear why it's called that. This is the velocity potential of the baryons, or equally of the photons, which is very close to it, at least at early times. The qμ, if you remember the definition, is minus q dotted into n̂, where n̂ is the direction you're looking in and q is the wave vector; q times δu is really the velocity of the plasma. So if the velocity of the plasma is along the line of sight you get a Doppler effect, and if it's perpendicular you don't. Again, this is the gauge-invariant combination in the variables I've been using. If you plot the two terms independently, you see that what you're looking at, both on large and on small scales, is predominantly the intrinsic temperature perturbations and the gravitational redshifting. This is also something Enrico told you; here you see it in a more quantitative way. The Doppler effect is there, but it's subdominant. We'll describe both contributions, but the dominant one is really the temperature perturbations.
In addition, the second term was the one that didn't involve the probability of last scattering, and it generates contributions at all times, even where there are no free electrons. In other words, while the previous term only picks up contributions from the surface of last scattering (and, at late times, from reionization), this term can generate contributions at all times. This is what it looks like: it's again relatively small; there's a contribution here at ℓ of around 100 to 200, and then there's another contribution at very large angular scales. One thing you can convince yourself of, which is maybe easiest in Newtonian gauge, where this quantity is essentially the Newtonian potential, is that the potential doesn't evolve in a matter-dominated universe: its time derivative is zero during matter domination. So you understand where the two contributions on the previous slide come from. On the one hand, during recombination the radiation is not yet completely subdominant. If you look at the evolution of the universe: in the beginning, as we know from the discussion of nucleosynthesis, the universe was radiation dominated; radiation redshifts like 1/a⁴ while dark matter redshifts like 1/a³, so at some point matter and radiation are equally important, and if you go even later the radiation is negligible. But matter-radiation equality isn't far enough in the past of last scattering for the radiation to be completely negligible, so there's a contribution here. On the other hand, you also know that at very late times the gravitational potentials evolve again because dark energy becomes important, and that's what you're seeing here. So these are the two contributions, the early one, when radiation isn't completely negligible, and the late one from dark energy, and you can break them up into the early integrated Sachs-Wolfe contribution and the late one.

So now we've seen the main contributions to the temperature anisotropies in the cosmic microwave background: the Sachs-Wolfe effect, the Doppler effect, and the early and late integrated Sachs-Wolfe contributions. Here I'm showing you all the contributions that arise from the time of recombination together, in blue, and the late integrated Sachs-Wolfe contribution in green, and you see the latter is only really important on very large scales. You also notice that the full spectrum, not just the contribution from recombination, the black curve, actually lies below what I'm calling Sachs-Wolfe plus Doppler plus early integrated Sachs-Wolfe. This is coming from reionization: once the first stars form and reionize the universe, the medium has some optical depth, some finite probability for the photons that come to us from the last-scattering surface to scatter again. The probability for a photon to not scatter (let's put it that way) is e^{−τ}, and since we're looking at the power spectrum, you expect an e^{−2τ} suppression from the optical depth on small scales. In addition, I didn't highlight it here, but we'll see it later in the polarization spectra, there's a contribution from reionization on large angular scales.
So e^{−τ} was the probability for a photon to not scatter after recombination, or after what we call the last-scattering surface, and you see this suppression in the angular power spectra. This also means that on small scales you're never really directly measuring the amplitude of the primordial fluctuations; you're only ever measuring a combination. Enrico introduced the amplitude A_s, and on small scales you're really measuring A_s e^{−2τ}, which is why this combination is often quoted in the papers.

Okay, are there any questions? [Question.] At large ℓ, that is on small angular scales, it's really an overall suppression by e^{−2τ}. This may look funny, but this is just what it looks like when you multiply by some number: the absolute difference is smaller where the power is smaller, but the fractional suppression is the same everywhere. [Question about whether τ is degenerate with other parameters.] Yeah, it's degenerate with a number of other parameters; in fact essentially all parameters are degenerate at some level with the optical depth, which makes it a bit of a nuisance. So n_s, to some extent, is degenerate with τ. Something that's more crucial for future missions, which I'll talk about in a little while, is that it's for example also degenerate with the neutrino mass. That means that if we actually want to measure the neutrino mass, which is the claim for the future CMB missions, we need a measurement of the optical depth at the level that was promised in the Planck Blue Book but that, it looks like, we won't quite get from Planck. So the optical depth is degenerate with a number of parameters, and it's an important parameter to pin down.
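Just to put numbers on the suppression, using the two values of τ quoted earlier:

```python
import numpy as np

# Small-scale power is suppressed by e^(-2 tau)
for tau in (0.089, 0.075):
    print(f"tau = {tau:.3f} -> e^(-2 tau) = {np.exp(-2 * tau):.3f}")
```

A shift of Δτ ≈ 0.014 changes the small-scale power by about 3 percent, which is exactly the kind of change that gets soaked up by the amplitude in the combination A_s e^{−2τ}.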
More questions? Okay. So that was the pictorial way of looking at the different contributions to the spectrum; now I want to give you a little bit more intuition for what's going on in terms of equations, so that you have them in mind. Obviously I'm going to discuss the largest contributions: in principle you can understand all of them, but we'll focus on understanding a little better the ones that actually dominate. Again, we saw that the dominant contribution to the angular power spectra really came from the period of recombination, and the late-time effects are very small, so I'll focus on this part of the integral. This is the probability of last scattering, and as a first approximation I'll set it to a delta function, so I'm assuming that all photons last scatter at the same time. This is something we already assumed earlier at some point, and we saw it didn't make much of a difference; it will make a difference for some of the modes, and I'll come back to that, but for now let's assume it and fix it up later. If you assume that, the integral completely collapses and you get simpler equations: the integral over time collapses and you just evaluate everything at the time at which last scattering occurs. In addition, I'm also ignoring the contribution from Δ_P0, Δ_T2, and Δ_P2, which are suppressed compared to the temperature perturbations.

So, in the approximation we're using, this is the contribution to the temperature perturbations, and it's interesting to take it and compute the multipole coefficients, to go from ΔT to the multipole coefficients. This is easy to do, again a one-line calculation that I didn't put on the slides. The only thing you have to remember is the expansion of a plane wave in terms of spherical harmonics:

  e^{i q·x} = 4π Σ_{ℓ,m} i^ℓ j_ℓ(qx) Y*_{ℓm}(q̂) Y_{ℓm}(n̂),

where j_ℓ is a spherical Bessel function of the product of the magnitudes of the two vectors. You plug this in here, and integrating against a spherical harmonic to get the multipole coefficient is very simple because it just collapses the sum, and you get this expression containing the spherical Bessel functions. The spherical Bessel functions are the reason I went through the exercise of computing these multipole coefficients: they have the property that the integral is dominated where the argument of the Bessel function is of order ℓ, and this approximation gets better and better as you go to large ℓ. So you see that there's a one-to-one correspondence between the multipoles you were looking at in the previous plots and the wavenumbers, ℓ ≈ q r_L, at least on small scales, at high ℓ.

This is useful to know because we understand the behavior of the perturbations as a function of momentum, and it tells us that the behavior of the solutions will be very different depending on whether the physical momentum of the mode at last scattering is small or large compared to the Hubble rate. If it's small compared to Hubble, the mode is outside the horizon; we started with adiabatic initial conditions, so at the time of last scattering it is still frozen and we're really just looking at the initial conditions. The temperature perturbations for modes that enter the horizon after last scattering give us a fairly direct measure of the primordial power spectrum. The modes whose momentum at last scattering is large compared to Hubble entered the horizon earlier and have started to oscillate, and for them we'll have to do a lot more work. The first class is very simple, because those modes are frozen outside the horizon and, for the most part, we're just looking at their primordial values.

One interesting question you should ask is where this transition happens: you want to know where this quantity is one. If it's less than one the modes are frozen, if it's larger than one they're oscillating. If you use the relation we had before, ℓ ≈ q r_L, and substitute it in, the transition is at ℓ ≈ r_L times the Hubble rate at last scattering, and if you evaluate that for our cosmology it's around 60. So you learn that on large angular scales, ℓ below about 60, the modes you're looking at are frozen. Or at least the contribution to the angular power spectra that was generated during recombination comes from frozen modes; there's some small contribution from the late-time integrated Sachs-Wolfe effect, but for the most part you're looking at the primordial values.
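A quick numerical check of the ℓ ≈ q r_L correspondence, i.e. that j_ℓ(x) peaks near x ≈ ℓ (just scipy, nothing Planck-specific):

```python
import numpy as np
from scipy.special import spherical_jn

# The spherical Bessel function j_l(x) rises sharply near x ~ l,
# which is what maps wavenumber q to multipole l ~ q * r_L.
for ell in (10, 100, 500):
    x = np.linspace(1, 2 * ell + 50, 20000)
    x_peak = x[np.argmax(spherical_jn(ell, x))]
    print(ell, x_peak / ell)   # ratio is close to (slightly above) 1
```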
Then for the modes with ℓ greater than about 60, the modes are inside the horizon and they're oscillating. So let's look at those two cases one at a time.

For the frozen long modes it's very simple, as I said, because they're frozen, and the multipole coefficients become very simple for modes frozen outside the horizon. This is sometimes called the Sachs-Wolfe approximation: the transfer function is just a spherical Bessel function, and you can compute the angular power spectrum analytically. In this case it's very simple, and you get

  ℓ(ℓ+1) C_ℓ / 2π = (T̄² / 25) Δ²,

the mean temperature of the CMB squared, divided by 25, times the amplitude of the primordial power spectrum. This is called the Sachs-Wolfe plateau, and it's also the motivation for plotting this particular quantity: it's just constant at low ℓ. You can in principle evaluate this for a general power law if you want (I didn't write down the formula because it's not super illuminating), but for ΛCDM we're close to scale invariant, so it's roughly given by this.
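As a ballpark number, here's the plateau formula evaluated with representative inputs (T̄ ≈ 2.725×10⁶ μK and Δ² ≈ 2×10⁻⁹ are round illustrative values; the e^{−2τ} suppression and the ISW contributions are ignored):

```python
# Sachs-Wolfe plateau: l(l+1) C_l / (2 pi) ~ Tbar^2 * Delta^2 / 25
T_CMB_uK = 2.725e6      # mean CMB temperature in micro-Kelvin
DELTA2 = 2.0e-9         # ballpark primordial amplitude

plateau = T_CMB_uK**2 * DELTA2 / 25.0
print(f"plateau ~ {plateau:.0f} uK^2")
```

This gives a few hundred μK², the right order of magnitude for the observed low-ℓ power.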
For the short modes it takes a little more work. I wanted to go through it, but I realized it actually takes a bit too much time to do carefully. The modes one can study analytically fairly easily are the ones that not only enter the horizon before last scattering but enter while the universe was still radiation dominated, which happens a little before last scattering; these are the modes with ℓ much larger than, say, 140. These modes enter the horizon well before last scattering, so there's still a large number of free electrons around, and one way to find analytic solutions of the system of equations is to expand in the momentum of the mode divided by the scattering rate: ω, remember, is the density of free electrons times the Thomson cross section. In the limit where this ratio goes to zero, scattering is very rapid and you have a perfect fluid. Eventually this breaks down, but you can actually do very well if you treat it as an expansion and go order by order.

At leading order, as I said, you get the equations of hydrodynamics, and the solutions are just sound waves, the sound waves that Charlie was also telling you about. For baryon acoustic oscillations it's convenient to think about them in real space: you look at an initial overdensity and see what happens, and you see a sound wave propagating outward. Here I'm looking at them instead as standing waves, which oscillate more slowly if the wavelength is long and more rapidly if the wavelength is short. You can actually work out the solution analytically, and it's of this form, where the quantity that appears here and here is called the baryon loading:

  R = (ρ_b + p_b) / (ρ_γ + p_γ) = 3ρ_b / 4ρ_γ,

since the baryon pressure is essentially zero and ρ_γ + p_γ = (4/3)ρ_γ; that's where the three-quarters comes from, and you don't have to remember it. This quantity here is the comoving sound horizon at the surface of last scattering, which is what appears in the argument, and this is the matter transfer function that you know from other places. So I didn't have time to derive it, but I wanted to at least show you that the solutions look relatively simple: the physics is that a plasma supports sound waves, and that's what you're seeing on small scales in the cosmic microwave background.

Now, I said we assumed that all the photons last scatter at the same time, and that this introduces some errors. The modes that are outside the horizon don't really care; they're not evolving, so it doesn't matter to them whether last scattering is instantaneous or spread out over time. But the modes that are rapidly oscillating get averaged out if last scattering is not instantaneous, and this leads to a damping effect that's sometimes called Landau damping. In addition, as I said, the tight-coupling expansion is not perfect and eventually breaks down; at leading order this introduces viscosity, which leads to Silk damping. These are effects you can include, and you get a damping factor, so instead of pure standing waves you actually have damped waves.

So this is what the Sachs-Wolfe contribution, and in principle the Doppler contribution, look like: you can solve the equations of motion for the density perturbation and the velocity perturbation, and you get an expression of this form, with the contribution from the Sachs-Wolfe effect (the density perturbations and the gravitational redshifting) and the Doppler contribution. The reason I think this formula is useful, even if it's still somewhat long, is that it tells you about some of the dependencies on the cosmological parameters. First, notice again that because of the spherical Bessel functions the integral is dominated where q r_L is of order ℓ. This means the peak positions are set by the quantity usually called θ, the ratio of the sound horizon at last scattering to the distance to the surface of last scattering; this is, for example, a sensitive probe of curvature, which is why the peak positions are usually used to measure curvature. Second, the baryon loading is proportional to Ω_b, and you see that you have a constant plus an oscillatory piece: if something oscillates around an offset, the even and odd peaks will have different power. This tells you that the relative heights of the even and odd peaks in the CMB power spectra are a sensitive probe of the baryon abundance. And the damping scale is also a very sensitive probe of the composition of the plasma; for example, it allows you to probe the helium abundance and so on. So hopefully this gives you some idea of how the angular power spectra depend on the cosmological parameters and why you can actually use them to measure those parameters.
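To visualize the structure just described, an oscillation about an offset set by the baryon loading with a damping envelope, here's a cartoon plot; all the constants (R, the sound horizon, the damping scale) are made up for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

# Cartoon of the acoustic solution: oscillation about an offset set by
# the baryon loading R, multiplied by a damping envelope.
q = np.linspace(0.01, 1.0, 500)   # wavenumber in arbitrary units
R = 0.6                            # baryon loading (illustrative value)
r_s = 20.0                         # sound horizon (arbitrary units)
q_d = 0.5                          # damping scale (made up)

delta = (np.cos(q * r_s) - R) * np.exp(-(q / q_d) ** 2)

plt.plot(q, delta)
plt.axhline(-R, ls="--", label="offset from baryon loading")
plt.xlabel("q (arbitrary units)"); plt.ylabel("transfer (cartoon)")
plt.legend(); plt.show()
```

Squaring this to get a power spectrum makes the even/odd peak asymmetry explicit: peaks where the cosine is −1 reach (1+R)², while peaks where it is +1 only reach (1−R)².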
I thought I would just put up this slide with the various contributions and see if you have questions about this part; if not, I'll move on and talk about the B-modes. It's really a change of topic, so if you have questions about this, maybe ask them now. Okay, if there are no questions, let's move on and talk about the B-mode search.

You can write down a very analogous set of equations for the tensor perturbations; here they are, and you have the same kind of physics: there's scattering, and then there are the effects from the gravitational waves. The source term is now more complicated, and because you intrinsically have a quadrupole, it's very easy to generate polarization. Here I'm showing you plots of the contributions of the tensors to the various power spectra. I left out TE, not for a good reason, just because I wanted space to write something here. For TT you see there's a plateau at ℓ below about 100; this is what was used by WMAP and Planck to constrain the contribution of tensors to the angular power spectra. And then here are the angular power spectra of the contribution of primordial gravitational waves to the polarization, the E-mode power spectrum and the B-mode power spectrum. You see two features: this contribution here is coming from reionization, photons that scatter again, which gives you correlations on large scales, and here you have the contribution from recombination. So this is the reionization bump and this one is what people call the recombination bump, and different experiments target different parts of the spectrum.

As Enrico explained to you, the power spectrum of primordial gravitational waves is of this form: the amplitude is really just set by the expansion rate of the universe during inflation. So a measurement of these primordial B-modes directly gives you a measurement of the Hubble rate during inflation, and then, if you use the Friedmann equation, it gives you a way to measure the energy scale of inflation. As Enrico explained, we really don't know what that energy scale is; if we measure B-modes, we'll finally know the energy scale of the process. You've also seen this many times: this is the tensor-to-scalar ratio r. I'm putting r = 10⁻² here because it's a theoretically interesting number and it's also something one can experimentally reach in the near future. The one-quarter power is a little bit sad, because it tells you that even very good upper limits on r don't push the energy scale down very far: to gain one order of magnitude in the energy scale you have to push r down by four orders of magnitude, which is very difficult. Now, r of order 10⁻² is interesting because, as Enrico also explained, it tells you that the inflaton must have moved over a super-Planckian distance in field space, and that is something that's difficult to control in an effective field theory.

I won't go into much detail, but the basic idea is the following. Consider some low-energy effective field theory with a single scalar field, and ask: should you trust the theory if you move the field over a distance large compared to the cutoff? Let's work in flat space, forgetting about the expanding universe for a second, so a flat-space effective field theory with a single light degree of freedom, at energies low enough that you can ignore the higher-derivative terms. Then all you have to specify is the potential.
You have some renormalizable interactions, and then you have the non-renormalizable interactions that are suppressed by the cutoff of the theory, typically with order-one Wilson coefficients that you don't know. If you're interested in field motions that are much smaller than the cutoff, you can ignore these terms; at the same time, you see that for field excursions comparable to the cutoff you would have to know them, and if you don't know them, you certainly can't trust the theory in that regime. But you might say: well, maybe someone can give me these coefficients and then I can use the theory. The statement is that in general this still isn't the right way to think about it, because the field doesn't only have self-couplings; it also couples to some of the heavy degrees of freedom that you integrated out, and if you move it over a large distance in field space, the masses of those degrees of freedom may change and get pushed below the cutoff, so the effective field theory changes completely. I'm trying to show that pictorially over here: you might have a theory where the χ direction is heavy, so you integrate it out and get an effective single-field description for φ with a potential where you can compute all the coefficients; but if you move φ far enough, the system effectively becomes two-dimensional and you cannot really trust the single-field description. The cutoff here is typically below the Planck scale, so a field range that's super-Planckian is interesting precisely because it probes the UV of the theory.

The solution that has typically been proposed is to assume that the inflaton is a field with a shift symmetry, which means it only has derivative couplings to the heavy degrees of freedom, so the vacuum expectation value of the inflaton doesn't affect their masses. Of course, an exact shift symmetry would give you de Sitter space, which you don't want, so you have to break it in some controlled way. The simplest examples, if you want, are Linde's chaotic inflation with a simple mass term, or the natural inflation models of Freese, Frieman, and Olinto. In field theory you can just postulate these kinds of symmetries, but it's unclear whether theories of quantum gravity respect them, and so a detection of primordial gravitational waves in the CMB might actually teach you something about quantum gravity, which is one of the reasons we think it's exciting.

And there was a short period of excitement when we thought this had actually been detected. This is the measurement from the BICEP2 experiment: there were E-mode maps that were consistent with what you expect from the simulations, and B-mode maps that were not consistent with what you expect from the simulations, so there was clearly an excess in the B-mode maps. There are two obvious questions. One is whether what you're seeing is just a systematic. This measurement is very difficult. The way the polarization measurement is done, at least for this experiment, is that you have a detector that's sensitive to polarization in the x direction and one that's sensitive to polarization in the y direction, and you take the difference; you then want to be able to distinguish between a hundred million photons and a hundred million and one photons in the difference. So you have to have very good calibrations, or just really observe a lot of photons. And there are a lot of other things you have to worry about.
If you have a slight miscalibration between the two detectors, it will show up as polarization. Another thing: if your detectors point in slightly different directions, you pick up a gradient of the temperature field. If the temperature were the same everywhere that would be okay, but we know there are temperature anisotropies on the sky, so you pick up a systematic from there. And there are also effects that pick up the second derivative of the temperature field, from differences in the shapes of the beams of the different detectors, and so on. So you see, there's some expansion here and there are lots of possible systematics, and one might have worried that the signal was a systematic; it doesn't look like it was. The second question was whether it's foregrounds, and I think everyone knows that, unfortunately, what was seen in this map was foregrounds.

The maps are still useful, though, because they're the deepest maps we have. These maps were at 87 nK·degree, and if you include the Keck measurements it's around 50 nK·degree. If you compare this to the Planck noise, the Planck noise is a few μK·degree, so much higher. The instantaneous sensitivities of the experiments are comparable, but Planck is looking at the full sky, while here you're looking at a small patch of the sky, so you can make much deeper maps. If you want a feeling for what this means: a square degree is roughly your fingernail at arm's length, and the standard deviation of the noise is around 87 nK for a pixel of that size. So if you measure a pixel of that size on the sky, you can measure its temperature to roughly 80 nK. They're remarkable measurements, and very difficult ones. But it turned out that the foregrounds were relatively large.

I don't want to go into the details of the foreground estimates made at the time. These were foreground models that we made after the announcement, with David Spergel, Colin Hill, and Aurélien. My goal at the time was really to understand what the measurement said about the stringy models of inflation I was working on; I just wanted to understand what it said about them. The likelihood from BICEP didn't have any foreground model in it, so I wanted to make a likelihood that had one. Eventually it became clear that you don't really have to make a likelihood anymore, because the foreground is as large as the signal, and this was confirmed by the Planck measurement. On the one hand I was happy, because it told me that the naive estimate we made was correct; on the other hand it's very sad, because you know you're looking at dust.

Once you know there's more dust, you can correct for it and use these measurements to put constraints on the tensor-to-scalar ratio, and what they've done by now is rule out one of our old all-time favorite models of inflation: the m²φ² chaotic inflation model is disfavored now, and it looks like a lot of other models will be ruled out soon. One of the things I briefly wanted to explain: in this part it looks like the progress is somewhat gradual. We had constraints of r < 0.13 at 95% confidence for a long time, and it looks like we just pushed them down by a small amount. But this isn't really the right way to think about it, because the constraints are derived in two ways. As I said, for WMAP and Planck they were derived from the contribution of the tensors to the temperature data on large angular scales.
This was the plot I showed you earlier: if you plot ℓ(ℓ+1)Cℓ/2π for the TT spectrum of the tensors, you have this plateau and then it goes down with the Silk damping. WMAP and Planck were really sensitive to the contribution of the tensors to the temperature data, and this is saturated: you cannot really do better, because we've measured those multipoles to the cosmic-variance limit. But there's the additional contribution to the B-mode polarization, and the two likelihoods are essentially independent, so you can make a likelihood separately for the temperature data and for the B-mode data. Typically we look at the diagonal of that, assuming the same tensor amplitude contributes to the B-modes and to the temperature, which is obviously the physical thing to do, but it's interesting to break it up, because it tells you which of the data sets is actually more constraining. If you do that exercise, you see that before the BICEP2 measurement the upper limits from temperature were the ones from Planck and WMAP, at 0.13 or 0.12 for a long time, while the constraint from B-mode polarization was r < 0.7 at 95% confidence from BICEP1, so the combined constraint was completely dominated by the temperature data. What has changed with BICEP2 is that now the constraints just happen to be comparable from the B-modes and from the temperature. But the B-mode constraints will improve very rapidly: if you just go one year further and include the Keck data, you already see that the limit shrank by a significant amount.

There's a really fairly big effort, in the US especially, to push down the upper limits, or to detect primordial gravitational waves in the CMB. I don't have time to talk about all the experiments, but there's a large number of them ongoing: there's the BICEP/Keck Array set of experiments, there's SPTpol, there's Advanced ACT, there's ABS, the Atacama B-mode search, in the Atacama Desert, there's CLASS, there's POLARBEAR, which will become the Simons Array; there's a number of them, and I'm sure I've forgotten some. The Simons Observatory is something that was just announced: it's an expansion of the platforms in the Atacama Desert, so ACTPol and POLARBEAR will eventually join forces; this was a 40-million-dollar contribution from Jim Simons. There's a balloon experiment, SPIDER, which flew recently; eventually there should be an announcement of what they've seen. They had one flight with 90 and 150 GHz, and they will have another flight.

If you look out further, there are also a number of planned experiments. CMB Stage 4 I put here as more than five years out, but the planning is fairly active (there was a call yesterday; we're actually actively talking about it and writing the science book for it), and it looks like it might well go ahead. And if you think an experiment like that will go ahead, then five years is actually pretty early, because you still have to figure out what the experiment should even look like. The goal for CMB Stage 4 is, on the one hand, to measure primordial gravitational waves, or put upper limits on them, and on the other hand to measure the sum of the neutrino masses, the effective number of relativistic degrees of freedom, and so on. This would be an experiment that covers 70 percent of the sky at one-arcminute resolution, at roughly a noise level of 1 μK·arcmin, which is significantly lower than the maps I showed you.
There's a fairly big effort in the US to push down the upper limits, or to detect primordial gravitational waves in the CMB. I don't have time to talk about all the experiments, but there's a large number of them ongoing: there's the BICEP/Keck Array set of experiments, there's SPT-3G at the South Pole, there's Advanced ACT, there's ABS, the Atacama B-mode Search, in the Atacama desert, there's CLASS, there's POLARBEAR, which will become the Simons Array, and there are a number of others I'm sure I've forgotten. The Simons Observatory is something that was just announced; it's an expansion of the platforms in the Atacama desert, so ACTPol and POLARBEAR will eventually join forces, and this came with a 40 million dollar contribution from Jim Simons. There's also a balloon experiment, SPIDER, which flew recently; eventually there should be an announcement of what they've seen. They had one flight with 90 and 150 gigahertz, and they will have another flight.

If you look out further, there are also a number of future experiments. CMB Stage 4 I put here as more than five years out, but the planning is fairly active: there was a call yesterday, we're actually talking about it and writing the science book for it, and it looks like it might well go ahead. And if you think an experiment like that will go ahead, then five years is actually pretty early, because you still have to figure out what the experiment should even look like. The goal for CMB Stage 4 is, on the one hand, to measure primordial gravitational waves or put upper limits on them, and on the other hand to measure the sum of the neutrino masses, the effective number of relativistic degrees of freedom, and so on. So this would be an experiment that should cover 70 percent of the sky at roughly one arcminute resolution, at a noise level of roughly one micro-Kelvin arcminute, which is significantly lower than the maps I showed you.

Then there are potentially also satellite experiments. LiteBIRD is a Japanese mission; there may be, or there will be, a contribution from the US, and there may also be a contribution from Europe. Then there's PIXIE, which I already mentioned because it would be a measurement going after the spectral distortions again for the first time in 25 or so years. It's difficult to do, because the spectrum you're trying to extract is shown in blue here, and the foregrounds in the cleanest one percent of the sky are shown in orange, so you really have to dig into the foregrounds.

There's also lensing. I didn't really talk about it much, except in the context of the temperature data, but for B-modes there's lensing as well: deflections of photons generate B-modes from the existing E-modes, and this is something you have to remove. In principle, if you measure the E-modes precisely enough, it's something you can remove, and here I'm showing you what you can do; I could tell you exactly what resolution and so on I used, but it's not so interesting. So you can remove a fair amount of the lensing, but you have to remove a number of things to actually get to measure the primordial B-modes. You can also see why some people are interested in going after the reionization bump: it sticks out above the lensing more than the recombination bump does. That doesn't really make it easier, though, because the foregrounds are very difficult on large angular scales, so we'll just have to see what the best way is. Predicting how well this can do is relatively difficult, because currently the foreground models are very poor. I'm just illustrating that here in this little histogram, and then I'm almost done: here the Planck data is shown in orange, and here are the various models, and you see that the models just don't look like the real sky at all. What I'm doing is taking a patch of the sky and plotting a histogram of the polarization fraction: in each pixel you compute the polarization fraction, and then you just make a histogram over the pixels, and you get these histograms, and you see that the models don't look like the real sky at all. So it's a little bit difficult, and there's a lot of work going into understanding that better. (A minimal sketch of the histogram exercise itself appears at the end of this section.)

Obviously, once you do these experiments, as I already said, you're not just trying to measure primordial gravitational waves; you're trying to measure a number of other things, like the neutrino masses, and you're trying to constrain the growth of structure and dark energy. This one is interesting if you're interested in fundamental physics: it's a constraint on the effective number of relativistic degrees of freedom. What's interesting about it is that the error bars for CMB Stage 4 are currently predicted to be about 2 times 10 to the minus 2, and that's roughly the contribution you would get from a single degree of freedom that was in thermal equilibrium with the Standard Model at some point and then decoupled, assuming just Standard Model degrees of freedom. If it's the supersymmetric Standard Model, you would double those, and then this goes down by roughly a factor of two. So it's not completely model independent, but if it's just the Standard Model plus one relativistic degree of freedom that was in thermal equilibrium with the Standard Model and then decoupled, this is the kind of thing you might be able to see with CMB Stage 4, so you can even do beyond-the-Standard-Model physics with it, in principle.
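For reference, the standard textbook formula behind that statement, which wasn't written out in the talk, gives the contribution of one bosonic species with $g$ internal states that decoupled when the effective number of entropy degrees of freedom was $g_{*S}(T_{\rm dec})$:

$$
\Delta N_{\rm eff} \;=\; \frac{4}{7}\, g \left( \frac{43/4}{g_{*S}(T_{\rm dec})} \right)^{4/3} .
$$

With $g = 1$ and the full Standard Model value $g_{*S} = 106.75$ this gives $\Delta N_{\rm eff} \approx 0.027$, which is the single-degree-of-freedom level of about $2\times 10^{-2}$ mentioned above; doubling the particle content roughly doubles $g_{*S}$ and lowers this by $2^{4/3} \approx 2.5$, which is the "factor of two" in the rough sense used above.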
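And since the polarization-fraction histograms came up a moment ago, here is the minimal sketch promised above, again mine rather than anything from the collaborations; it assumes you already have Stokes I, Q, U maps of a patch as arrays, and the synthetic maps here are stand-ins for real data.

```python
# Histogram of per-pixel polarization fractions p = sqrt(Q^2 + U^2) / I
# over a patch of sky, as described above. The maps below are synthetic
# stand-ins; with real data you would load I, Q, U maps of a patch instead.
import numpy as np
import matplotlib.pyplot as plt

def polarization_fraction(I, Q, U):
    """Per-pixel polarization fraction from Stokes parameters."""
    return np.sqrt(Q**2 + U**2) / I

rng = np.random.default_rng(0)
npix = 10_000
I = np.abs(rng.normal(100.0, 10.0, npix))  # synthetic intensity map
Q = rng.normal(0.0, 5.0, npix)             # synthetic Stokes Q
U = rng.normal(0.0, 5.0, npix)             # synthetic Stokes U

p = polarization_fraction(I, Q, U)
plt.hist(p, bins=50, histtype="step")
plt.xlabel("polarization fraction p")
plt.ylabel("number of pixels")
plt.title("per-pixel polarization fraction (synthetic patch)")
plt.show()
```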
There's a lot more to be said about the CMB, obviously. I didn't have a chance, for example, to talk about measurements of the bispectrum, and I didn't have much time to talk about the secondary anisotropies, which are interesting for other aspects of cosmology, in particular in combination with the large-scale structure surveys. But I hope you learned something about the CMB and know a little bit more about it than before. The CMB has been a very fun field to work in and has provided us with lots of information about the early universe for 51 years now, and it will continue to do that for at least another decade, probably more than that. Maybe we'll detect gravitational waves and measure neutrino masses and so on. Well, hopefully; for the neutrino masses, again, one has to figure out what tau is and combine the CMB with large-scale structure surveys, and this is where the large-scale structure surveys will really play an important role, in combination with the CMB. All in all, I think the next decade should be very interesting in cosmology, and I'll just end there and say thanks.