This lecture could be subtitled No More Dirt. We're actually going to have results today, a few. And here is the outline. I'll be talking first about the BICEP2 instrument and its data and the BICEP2 claim of the detection of inflationary B modes. And then I'll talk a little bit about work that followed that claim, joint work between the Planck team and the BICEP team. Before I go any further, are there members of the BICEP team in the audience? So I'm going to do my best to represent it fairly and to say some very nice things about the strength of the experiment. Then I'll talk a little bit about future B-mode searches. And then, because you heard the nice lectures on gravitational lensing of galaxies, I thought I'd talk a little bit about gravitational lensing of the CMB, its advantages and disadvantages. And we'll see that there is a lensing B mode in addition to the inflationary B modes. So before BICEP2, this was the observational situation. A large number of groups, as I mentioned earlier, were trying to detect B modes, both those that arise from inflation (and you recall that the inflationary B modes rise up and then drop away at L of order 100 or so) and also the lensing B modes, which I'll talk about at the end of today's lecture. Every one of these points, as you can see, is an upper limit. In the range of L, or multipole moment, of interest for inflation, the experiments before BICEP2 were already probing the 0.1 microkelvin level. And again, let me remind you that 0.1 microkelvin can be compared to 3 kelvin for the CMB, which can be compared to 200 kelvin or so for your instrument. So these are very sensitive experiments and not easy to do. And roughly speaking, the limits set on R were of order 0.3 or something of that sort. So this is all before the BICEP announcement in March of 2014. The BICEP2 team is a large team consisting, among others, of researchers from Berkeley, Caltech, and Harvard. It is, as you can see here, located at the South Pole.
The decision on the part of the BICEP team was to do a very deep integration on a fairly small chunk of the sky, a small solid angle. The notion was to look at a solid angle large enough that they could make statistical statements about the B modes, which, as you recall, have a characteristic scale of a degree or so. But they didn't want to cover the entire sky. They wanted to concentrate their sensitivity on a relatively small region. They also worked at a single frequency. And you recall from the lecture I gave yesterday that that is, in a sense, a warning flag. Since the BICEP team made observations only at a single frequency, they had to use external data to measure and try to control the foregrounds. They could not do it internally, as it were. They did have some 100 gigahertz data as a check, at slightly longer wavelength. There was very strong and very careful attention paid to the control of instrumental errors and systematics. You recall that if you're looking at the quality of an experiment, you should check to see how many levels of modulation are included. Many were included for the BICEP experiment. For instance, they could rotate their entire apparatus. At the South Pole, the sky, of course, rotates around you once a day. So they had that control as well. And switching and so on and so forth. So lots of attention to systematics. And I'll show you some of their null tests that show that instrumental effects are not responsible for the B-mode signal they detected. The actual detecting devices were 512 polarized bolometers, like the ones that I showed you in my first lecture. They also employed a half-wave plate, which provided an additional level of modulation. That is, you're looking at a particular chunk of the sky with the apparatus in a particular configuration. You can change the polarization sensitivity by moving a half-wave plate ahead of or on top of the detector. So lots and lots of different modulations, as I said.
Here is the reference for the result, March of 2014. And here is their site, very near to the South Pole. Here is the horn, which defines the size of the beam on the sky, that is, the solid angle to which they're instantaneously sensitive. And then they can move the horn around a little bit, wait for the sky to move, and map out a region. And this is the ground screen to prevent radiation from this 250 kelvin background from reaching the horn antenna. And here's the camera, the detecting device. There's the half-wave plate. You'll notice that to prevent emission from the half-wave plate itself, they cool it to 4 kelvin. Lenses, focal plane unit down here and so on, and the whole thing is refrigerated. Even at the South Pole, you need a refrigerator, because you try to keep the bolometers cool, at about a tenth of a kelvin, to maximize their sensitivity. So lots and lots of care for sensitivity and systematics. And since I keep talking about the importance of null tests, here are some BICEP null tests. Take the data and chop it into two halves, let's say the data recorded on even-numbered days and the data recorded on odd-numbered days, as an example. Sum them, and you see the fluctuations in the CMB. Difference them, and you see nothing. So the null test is passed. That is, there's no spurious introduction of temperature fluctuations from the instrument. And the same is true for large-scale polarization signals. Here's the Stokes Q parameter, Stokes U parameter, summed and differenced. All you see in these two images is noise. So again, careful attention to checking the null tests. And they also looked at the null signals. You recall that I pointed out that some cross-signals like EB and TB, where you take the temperature fluctuations and cross-correlate them with the B-mode signals, should be null. And they are. Observations up here, tests where you inject a null signal down here. The null tests were passed. And that leads me to the following statement.
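To make the jackknife idea concrete, here is a minimal sketch in Python. The signal and noise levels are made-up numbers, not BICEP's: the point is only that a common sky signal survives the sum of two data halves and cancels in their difference, leaving pure noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy jackknife: a fixed sky signal observed twice (say, even days and odd
# days), each time with independent instrument noise.  All levels are
# illustrative, not BICEP's actual numbers.
npix = 10_000
sky = rng.normal(0.0, 70.0, npix)          # CMB-like sky signal
map_even = sky + rng.normal(0.0, 5.0, npix)
map_odd = sky + rng.normal(0.0, 5.0, npix)

sum_map = 0.5 * (map_even + map_odd)       # common signal survives
diff_map = 0.5 * (map_even - map_odd)      # signal cancels, noise remains

print(np.std(sum_map))    # ~70: the sky fluctuations
print(np.std(diff_map))   # ~3.5: pure noise, so the null test passes
```

If an instrumental effect differed between the two halves, it would survive the difference map, which is exactly what the test is designed to catch.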
There was a lot of care to make sure that the polarized B-mode signals they detected were real. They weren't instrumental. There's something on the sky that has a B-mode signal detected by BICEP. And here are the essential results. These are the power spectra for temperature and polarization fluctuations, plotted in terms of temperature squared. So you can see the units here, microkelvin squared, versus L, or multipole moment. And there are a whole bunch of them here. But the ones to focus on are TT. The line here, the solid line, is the accepted standard cosmological model. And there are the BICEP observations, in pretty good agreement. The same is true for the E-mode signal. Good agreement with the known spectrum. And also with the TE cross-spectrum, which is non-zero, not null. So the temperature and E-mode signals that BICEP observed were consistent with what we know from other experiments and standard cosmology. As already pointed out, the TB and EB signals are null, as they should be. You notice a very different set of units here. And then finally, here are the BB results. The B-mode detection. There are two sets of points shown here. One is yet another null test, the blue points. And the black points are the measurements. So it is on the basis of these few data points that the claim of the detection of inflationary B modes was made. The dashed curve here is the prediction of a B-mode signal for R equals 0.2. And this rapidly rising curve is the lensed B modes, which I'll discuss later. So the inflationary B modes are down here. And there are several data points that suggest the detection of inflationary B modes. And here is their B-mode picture. This is a map of the microwave background with hot spots and cold spots indicated by the color bars. And then imposed on it is the decomposition of the polarization into the B mode. So this is not exactly a picture of polarization. It is instead the B-mode signal as a function of position on the sky.
Now, you recall when I introduced the B modes, I pointed out that they have this curl-like dependence. I probably forgot to say that the curl-like dependence has the opposite sign for cold spots and hot spots. So now let's look at this curl-like pattern, or, I'm not sure that people are familiar with the word, a pinwheel, a little thing that blows around in the wind. No? Some of you are, some of you aren't. There's a pinwheel pattern here with a certain handedness around this hot spot. Here's a cold spot, and the handedness is the opposite. So not only are they seeing B modes, they're seeing B modes with the correct behavior, in a sense, with respect to the microwave background. The other point I want to make is that the strength of the signal really is determined by these two chunks of the sky. And one of you yesterday asked a question about whether there could be some other astrophysical effect that might be dominant. If you want to find an astrophysical problem, you might want to look at declination minus 52, right ascension zero, and declination minus 62, right ascension (I'm trying to do this quickly) about 350 degrees, something like that, or 23 hours, 10 minutes. Is there anything funny going on there? But this is the result. There clearly is some sort of B-mode signal that's dominated by these two blobs, and the power spectrum makes sense. Back to the power spectrum again. Not only is it there, but the angular dependence, the way the B-mode signal changes with angular scale, is consistent with the models. Sorry, this didn't reproduce very well, but this dashed curve again is the predicted B-mode signal for R equals 0.2. It has a certain angular dependence. It rises up and then falls away again at L of 100 or 200. And these points sort of march along the predicted path. So everything looked quite solid and convincing, and I want to give full credit to the BICEP team. They worked very hard to make sure that instrumental effects and so on were not causing problems.
However, as I already mentioned, they needed to depend on external data to remove foregrounds, to do the job of peeling away galactic emission in particular. And because of the frequency they elected to use, there was at least the possibility that both galactic synchrotron and galactic dust emission could cause problems. The dust emission at 150 gigahertz is a little stronger than synchrotron, but the polarization of synchrotron emission is higher than that of dust. So there was a possibility of both signals contaminating the result. And again, the BICEP team did a number of things to make sure that foregrounds were not the cause of the problem. First, they worked in a region of the sky well away from the galactic plane, as I've already pointed out. They could not, because of their location at the South Pole, easily reach the South Galactic Pole, but they observed in a region that they believed from other data was fairly free of dust emission, and certainly pretty far away from the galactic plane. They were also able to demonstrate, and I'll show you that that's correct, that the synchrotron emission at these high latitudes was not a problem. It was subdominant to the dust emission. So what they did was to use the little bits of publicly available Planck data that they could get their hands on in March, and some Archeops data and other external data sets, to make models, many models, boom, boom, boom, boom, of the dust power spectrum. Here's their B-mode detection. Here are their models for dust emission. You notice that the dust emission, as I showed you in yesterday's lecture, tends to have a fairly flat power spectrum, not strongly dependent on L. Different models, all subdominant to their detection. Okay, in a sense, unfortunately for the BICEP team, they didn't have available the final Planck results. They were coming, and they emerged shortly after the BICEP claim. Okay, so BICEP came up with a claim of R equals 0.2. The chalk seems to have vanished.
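The reason multi-frequency data settle this is that dust and synchrotron scale very differently with frequency. Here is a rough sketch using commonly quoted spectral parameters (a modified blackbody with spectral index about 1.6 and temperature about 19.6 K for dust, a power law with index about minus 3 in antenna temperature for synchrotron; these numbers are illustrative, not taken from the lecture) of how each foreground changes between 150 GHz and Planck's 353 GHz channel:

```python
import numpy as np

h_pl = 6.626e-34   # Planck constant [J s]
k_B = 1.381e-23    # Boltzmann constant [J/K]

def dust_antenna_temp(nu_ghz, beta_d=1.6, T_d=19.6):
    # Modified-blackbody dust intensity, converted to antenna temperature
    # (antenna temperature scales as intensity / nu^2)
    nu = nu_ghz * 1e9
    b_nu = nu**3 / np.expm1(h_pl * nu / (k_B * T_d))  # Planck function, up to constants
    return nu**beta_d * b_nu / nu**2

def synch_antenna_temp(nu_ghz, beta_s=-3.0):
    # Synchrotron: a steeply falling power law in antenna temperature
    return (nu_ghz * 1e9)**beta_s

# How each foreground scales from BICEP2's 150 GHz to Planck's 353 GHz channel
dust_ratio = dust_antenna_temp(353.0) / dust_antenna_temp(150.0)
synch_ratio = synch_antenna_temp(353.0) / synch_antenna_temp(150.0)
print(dust_ratio)    # ~3: dust is much brighter at 353 GHz
print(synch_ratio)   # ~0.08: synchrotron has faded away there
```

So a 353 GHz channel acts essentially as a dust monitor, and a low-frequency channel as a synchrotron monitor, which is exactly the leverage a single-frequency experiment lacks.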
Meanwhile, using entirely different methods, not based on the B modes, Planck had derived an upper limit on R, and I'll go through how that upper limit was determined and then demonstrate that it was in substantial conflict with the claim by BICEP. So let me put in parentheses here that the BICEP claim was based on the B modes; Planck's limit was not, it was based instead on temperature. So let's look briefly at the physics of the Planck upper limit on R. And I want to emphasize that these are two entirely different methods. One relies on the detection of the B modes, which can only be produced by tensor fluctuations, inflation, and so on. And the other is quite different. Okay, if tensor perturbations are present in the early universe, as we believe is the case for many models of inflation, they produce gravitational waves. And the gravitational waves also produce a small temperature component, which adds to the signal that's produced by standard scalar fluctuations. What I'm trying to make is the following point. If we make a plot of the power spectrum induced by scalar perturbations, it has this familiar shape, microkelvin squared versus L down here. But the gravitational wave perturbations also produce a small addition to this. If that addition were independent of L, all it would do would be to add a little bit to this curve, or bump it up slightly. But remember that the gravitational wave perturbations are strongly damped away on small angular scales. So the gravitational wave contribution does not have a flat dependence on L. It falls off rapidly. So if there are tensors, if there are gravitational waves, what we get is a small addition to the spectrum at low L, but not elsewhere. And the sensitivity of Planck across this range of L was adequate to look for this small bump, this increase at low L, in the temperature fluctuations. We didn't use polarization for this test at all. So the tensor part appears only at low L.
There's a small increase, a bump, and Planck looked for that. However, the amplitude of this bump is model-dependent. And furthermore, the measurement of the bump depends on precise knowledge of other properties of the TT spectrum. For instance, I could get the same increase here by simply tilting the TT spectrum slightly. So the Planck results, which depend on this effect, are very strongly model-dependent. The B-mode signal is the clean one; we really want to get that. But at least Planck could set plausible upper limits, assuming a reasonable model, and the upper limits that we set and published in 2013 were these. So there was an immediate conflict. Yes, Val? Well, just what I've been describing. For instance, let's imagine a scenario in which the inflationary signal really is zero. Can we make an apparent inflationary signal, in other words, can we raise this part of the spectrum? We can do that, to some approximation, by building in a bigger tilt, okay? So the tilt of the spectrum, n_s minus one, to speak technically, is degenerate with this R detection and so on. There are other things you can do. So it's model-dependent, but we know the model reasonably well. Hence, with reasonable conviction, indeed 95% confidence, this was the Planck preliminary limit. Well, the two are clearly in tension. What do we do about that? Well, first you can look at the consequences of this tension. What I've done here is to show the Planck upper limit. There it is, less than 0.11. And, crudely, this is just something I drew on my own, the BICEP result. There's a clear conflict here. They're placed on a plot of R, the tensor-to-scalar ratio, versus the tilt of the spectrum. And you recall, I hope, from the first lecture that in many theories of inflation, there is a link between R and the tilt of the spectrum. And here are a bunch of different potential kinds of inflationary models. You can pick the one you like.
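The degeneracy between a tensor bump and a tilt can be illustrated with a toy calculation. The spectra below are schematic stand-ins, not real CMB spectra: add a low-L tensor bump to a tilted scalar spectrum, then ask how well a pure-scalar model with a free tilt and amplitude can absorb it.

```python
import numpy as np

ell = np.arange(2, 500, dtype=float)

def scalar_cl(ns, amp=1.0):
    # Toy, nearly scale-invariant scalar spectrum, tilted about a pivot at ell = 60
    return amp * (ell / 60.0)**(ns - 1.0)

def tensor_cl(r):
    # Toy tensor template: flat at low ell, damped away above ell ~ 100
    return 0.01 * r * np.exp(-(ell / 100.0)**2)

# "Data": a tilted scalar spectrum plus a tensor bump with r = 0.1
target = scalar_cl(0.96) + tensor_cl(0.1)

def fit_rms(ns):
    # Best-fit amplitude for this tilt, then the residual it leaves behind
    f = scalar_cl(ns)
    amp = np.sum(target * f) / np.sum(f * f)
    return np.sqrt(np.mean((target - amp * f)**2))

ns_grid = np.linspace(0.90, 1.00, 2001)
rms = np.array([fit_rms(ns) for ns in ns_grid])
best_ns = ns_grid[np.argmin(rms)]
bump_rms = np.sqrt(np.mean(tensor_cl(0.1)**2))

print(best_ns)               # drifts redward of 0.96: the tilt mimics the bump
print(rms.min() / bump_rms)  # the free tilt absorbs part of the tensor signal
```

The best-fit tilt comes out slightly redder than the input, and the residual left after fitting is smaller than the bump itself; that shrinking residual is exactly the degeneracy between R and n_s described above.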
They trace out either a point in this graph or a trajectory in this graph. But it's pretty clear that the models down here that Planck favors are going to be very different from the models up here that BICEP favors. So there's a tension here. And here are some of the consequences of that. I don't intend to go into the details of the individual inflationary models, but Planck favors concave, low-to-moderate-R classes of inflationary models. It rules out convex ones, power-law inflation, and so on. And if we look at BICEP, it's almost exactly the opposite. It favors high R, favors this kind of inflationary model, rules out concave ones, and so on. Low-scale SUSY, Starobinsky models, all ruled out by BICEP, but perfectly acceptable to Planck. So there's a real tension going on here. As soon as the BICEP result came out, there were immediately concerns raised about contamination by dust. Are the models adopted by the BICEP2 team reasonable? The questions arose both within the Planck collaboration, where we had data, and outside the Planck collaboration, where people were suspicious of how low these models' predictions of the dust contamination were. About four months later, Planck finally emerged with its images of the contamination by dust. Remember that these plots, which I showed yesterday, show the amount of contamination by dust in terms of R. That's the logarithmic scale here, running from R equals 0.01 to R equals 10. This is the BICEP region. It's right about in the middle of this color bar, roughly speaking. So the amount of residual dust contamination, after the subtraction of Planck's best dust model, was of order 0.3 in R. In other words, very close to what BICEP was seeing. What that means is that these models were underestimates. The real dust contamination is higher. And that's shown in the following image. What's shown here is a plot of the B modes. This did not come through. Can anyone see color there? It must be very faint.
This is a plot of the B-mode signal, assuming R equals 0.2, in other words, assuming the BICEP result. And imposed on it, in broad bands of L, are the contributions from dust, as determined by Planck. So what you see here is that it's certainly plausible that the entire signal they saw was produced by remnant dust. Now, I'd like to insert a little detour here, going back many slides. Yes, to this one. If the signal was induced by dust, the signal had to be produced largely in these two regions of the sky. Planck doesn't have the sensitivity, combined with angular resolution, to look particularly at those two regions. But if something funny is going on in these two regions, if dust happens to be particularly emissive and polarized in those regions, it could explain the results we see. So again, I think it would be interesting for somebody in the audience to write down these coordinates and look to see if there's anything interesting going on in terms of dust emission there. Now I'll go back to the main burden of the talk. So this is the problem. Dust can explain this signal. So in terms of the correct way to proceed scientifically, the clever thing to do would be for the BICEP team and the Planck team to work together, to combine the sensitivity of BICEP with the multi-frequency coverage of Planck. And I'm happy to say that that's exactly what happened. There was very good and very amicable collaboration between the two teams to produce a joint paper, in which we made use of all of the Planck frequency observations and we made use of BICEP's higher sensitivity at one frequency. And here are some of the results. The paper is now out, incidentally, in Physical Review Letters; it's been out some months now. We use the fact that BICEP is less noisy and that Planck controls the foregrounds better. And what we've got here are some of the results. So let me talk about a couple of particular issues, and then at the end I will show you the results.
First, remember that Planck also can control the synchrotron emission, because we've got low-frequency channels where the synchrotron emission is dominant. So we can ask the following question. If we change the model of synchrotron emission in the BICEP field, does that alter the conclusions? And here is the answer. There are two curves here, one including synchrotron and one not, as I recall, and there's essentially no change in the preferred value of R. So synchrotron is present but is not dominant. The other issue, which I've already raised before, is that Planck's model of polarized emission from dust in the galaxy produces a result that I find a little bit anomalous, namely that the B-mode signal is much weaker than the E-mode signal. That's still a residual worry, even for this set of results, at least in my mind. Okay, finally, the BICEP team has a new instrument, the Keck Array, which by the time of this joint paper had begun to produce results. And what I want to show here is that the tension in values of R is, in a sense, really internal to the BICEP team, and is less between BICEP and Planck than between BICEP and their new instrument, Keck. Again, I can't see the colored curves here because I've got a bad angle, but I can over here, okay? And what I hope these demonstrate is the following: if you look at the agreement of the results between Planck and Keck, you get reasonable agreement, but if you look at the agreement between Planck and BICEP2, the agreement is not so good. So the tension, in a sense, is internal to that team. But what we can do is to put the Keck data, the BICEP2 data, and the Planck data together in a variety of ways. For instance, we can take the BICEP data and run it through the Planck analysis pipeline. We can run the Planck data through the BICEP analysis, and so on, to produce a joint result. And it is shown here. This is the likelihood of a value of R, as a function of R.
You can see that the chance that R is as large as 0.2 in this combined data set is really very low. And indeed, the formal limit, combining the BICEP and Keck data with Planck, the limit using the B modes now, this is a different method than the limit I quoted here, is R less than 0.13. And I would claim that a value of R less than 0.11 or 0.13 is the sort of state of the art for limits on the tensor-to-scalar ratio R. It is interesting that the likelihood, shown here, peaks up at R of 0.06. But the chance that it exceeds that value is quite substantial, and for that reason we prefer to report an upper limit rather than a detection. However, the fact that this likelihood peaks at R of 0.06 means that intrinsically the sensitivity of Planck, particularly combined with BICEP and Keck, is capable of detecting values of R below 0.1. So if R happens to be, let's say, 0.03 or 0.05, the final Planck analysis looking for the B modes may provide a detection instead of just an upper limit. But as I said yesterday, in order to do that we've got to beat down and control better the dust emission. It all goes back to this problem, which I keep showing, of dust emission. In order to reach down to, let's say, R of order 0.1, which would be here, we have to do a better job of controlling the dust over much of the sky. Okay, so I'll leave up the final results here, R less than 0.13 or R less than 0.11, and then re-emphasize that the new joint results are also based on the B-mode detections, like the initial claim. So this is BICEP2 plus Planck, R less than or equal to 0.13, again at 95% confidence. Okay, needless to say, many other experimental groups, in addition to Planck, are now pursuing as rapidly as possible better measurements of the B modes. And by better measurements I mean both better instruments and better control of the foregrounds. But this is where we stand at the moment.
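The way an upper limit is read off a likelihood curve like this can be sketched as follows. The likelihood shape here is a made-up toy peaking at R = 0.06, not the actual joint likelihood: normalize it over R greater than or equal to zero, and find where the cumulative probability reaches 95%.

```python
import numpy as np

r = np.linspace(0.0, 0.5, 5001)
dr = r[1] - r[0]

# Toy likelihood for r: it peaks at r = 0.06 but is broad, so only an
# upper limit is warranted (the width 0.05 is a made-up number).
like = np.exp(-0.5 * ((r - 0.06) / 0.05)**2)

# Normalize over r >= 0 and find the 95% point of the cumulative probability
cdf = np.cumsum(like) * dr
cdf /= cdf[-1]
r95 = r[np.searchsorted(cdf, 0.95)]
print(round(r95, 3))  # ~0.15 for these toy numbers
```

Notice that a likelihood can peak away from zero and still yield only an upper limit: what matters is how much probability lies above the peak, not the location of the peak itself.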
Then, in the remaining few minutes, I thought I'd make a brief excursion to try to link some of the things I've been saying with some of the things that you learned from Bhuvnesh Jain's lectures in the last couple of days. There are two other B-mode signals that are cosmological, that is, they're not foregrounds and they're not instrumental. They're the lensing B modes, produced by gravitational lensing, and also B modes produced at the epoch of reionization. And I'll run through the physics of both of those and show the current status of the observations. Okay, this requires a brief detour, and that is a detour to consider the possibility of gravitational lensing of the CMB. I'm going to draw a cartoon of the situation that Bhuvnesh Jain showed you the other day. You have galaxies here. You have intervening matter, which distorts the wave fronts, causes deflections and magnification by gravitational effects, and you have some sort of detector here. Many galaxies, and you make some sort of statistical average, which he described. The situation with the CMB is rather different. We have a surface with structure on it at a known and fixed redshift. We have the intervening matter, which distorts the field, and we have a detector here. The advantages of using the CMB are that we know the redshift, and furthermore, it's the same for all sources. The disadvantage, in a sense, of using the optical measurement of galaxies is that the sources are at different redshifts, and those have to be measured. And therefore the intervening path length is different. So it's a more complicated problem to unfold. So there are certain advantages in using the CMB, but I should also list some disadvantages. Galaxies can be defined as little hard round dots, okay? And they're stretched and magnified by the action of gravitational lensing. The CMB fluctuations are much softer, or floppier, than the hard images of galaxies.
So it's a little bit harder to use them to detect the deflection, magnification, shear, and so on of the intervening matter. In addition, the fluctuations in the microwave background are typically larger in angular scale than the kinds of deflection angles that gravitational lensing produces. I seem to remember from Bhuvnesh Jain's lecture that he was talking about angular scales typically between one and ten arcminutes, okay? Planck's data alone show that the RMS value for the deflection is two-point-something arcminutes. That's small compared to the size of the main temperature fluctuations in the CMB. So it's not an ideal source, but it's at a well-defined source distance. So there are advantages to both. While I have this slide up, let me also make the fairly obvious point that if you start with a temperature fluctuation that has circular symmetry, you'll recall from his lecture that the first-order effect is to stretch that into an ellipse, okay? A circle has a certain characteristic angular scale, so a certain value of multipole moment. If you stretch it, you're mixing modes. You're adding lower-L and higher-L modes, okay? So keep that in mind. And as I'll show in a moment, the effect is to smooth over the temperature power spectrum at high L. So let me do that next and go back to this cartoon. This is the power spectrum of the CMB, and I want to look at various features of it. This will be familiar material to many of you, so I'll just run over it quickly for those for whom it's not. Why does the temperature power spectrum have these various features? Okay. First, this flat region out here at low L, which means large angular scale, is just a remnant of the Harrison-Zeldovich scale-invariant spectrum that you expect from inflation. Next, remember that if inflation is to end, it can't be exactly Harrison-Zeldovich. There's got to be a slight tilt, a few percent tilt to the red, tilting the spectrum a little bit. So there's a slight tilt.
If I really plotted Harrison-Zeldovich all the way out, it would look like that. Not quite flat in L. This structure here is the Sakharov oscillations, sound waves on the surface of last scattering, resonant sound waves, which amplify the magnitude of perturbations. And the fact that the signal decreases, the so-called Silk damping, has to do with the diffusion of the photons out of the Sakharov oscillations. So there's a sort of envelope that multiplies a signal that otherwise would look like that. So that's a very quick cartoon version of why the power spectrum looks the way it does. Let me point out that what Silk damping does is to decrease the amplitude of the power spectrum. It damps it. What gravitational lensing does is instead to smooth it over. You're mixing modes at different L, so you're mixing low points and high points in this plot. So the effect is to smear, or to decrease the contrast, because of mode mixing. And that's a detectable effect. And Planck, among other instruments, has detected it. Okay, I'm gonna put this up. This is an advertisement for a nice place to go to read about these things. In addition to the references that he provided, you might want to look at a nice primer or introduction to gravitational lensing of the CMB on Wayne Hu's website. And here is the reference. And I'll put that reference and others up at the very end, so that you've got them in front of you. Now, what this shows, and you already know this from Bhuvnesh Jain's lecture, is that the gravitational lensing effect depends on the gradient of the potential, of the projected potential. And that, in turn, depends on how matter is distributed between you and the source. And let me remind you that the CMB is a source at a known redshift. So all of the CMB goes through the same amount of intervening matter. So, if you have a model for cosmology, and we do, you can make predictions of what the power spectrum of this deflection field should be.
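The difference between damping and lensing smoothing can be shown with a toy spectrum (all the numbers below are schematic): model the mode mixing as a convolution in L, and check that the peak-to-trough contrast drops while the broadband power is almost unchanged.

```python
import numpy as np

ell = np.arange(2, 3000, dtype=float)

# Toy "acoustic" spectrum: oscillations under a damping-like envelope
cl = (1.0 + 0.4 * np.cos(ell / 50.0)) * np.exp(-(ell / 1500.0)**2)

# Model lensing's mode mixing as a convolution over neighbouring multipoles,
# with a made-up kernel width of 60 in ell
sigma = 60.0
kl = np.arange(-300.0, 301.0)
kernel = np.exp(-0.5 * (kl / sigma)**2)
kernel /= kernel.sum()
cl_lensed = np.convolve(cl, kernel, mode="same")

# Compare, well away from the array edges: the peak-to-trough contrast
# drops, but the broadband power survives (unlike damping)
band = slice(400, 1600)

def contrast(x):
    return x[band].max() - x[band].min()

print(contrast(cl), contrast(cl_lensed))        # contrast is reduced
print(cl[band].mean(), cl_lensed[band].mean())  # means nearly equal
```

Damping multiplies the spectrum by an envelope, so it lowers the power; smoothing redistributes power between neighbouring multipoles, so the peaks and troughs blur into each other while the mean survives. That distinction is what makes the lensing effect separately detectable.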
And here are the Planck predictions. You'll notice that there's some model variance. It depends, sorry, this is a mess, it depends slightly on the matter density. But in general, you can make a pretty accurate prediction of what the gravitational lensing potential should be. If you can make that prediction, and it works for the observed fluctuations in temperature, you can also make that model for the observed lensed B modes that I'll describe in a moment. So, does it work? Well, here's that same model, the midpoint of that set of models. And superimposed on it are experimental or observational results from Planck, the South Pole Telescope, the Atacama Cosmology Telescope, boom, boom, boom, right along the predicted value. What that tells us is that we can have confidence that we can reconstruct the statistical properties of this intervening matter. Exactly the problem that was raised by Bhuvnesh Jain. We can do it with galaxies, we can do it with the CMB. The difference is that the CMB can do it only on fairly large angular scales. And here is a map of the integrated matter distribution between us and a redshift of a thousand. It's fuzzy because we can only do it at low resolution, but this is essentially a photograph of all the matter in the universe, along each given line of sight. The little holes here, remember, are masking. We get rid of strong foreground emission. So there's the distribution of matter and, just to go back for a second, here's the power spectrum. Theory is the bold line; observations by many groups are the points. Now, in the previous two lectures, I kept talking about how important it is to do checks. So, I'm presenting some Planck results here. Did we do these checks? Yes, we did. Here, for instance, is a measurement of consistency. All of these measurements are different ways of chopping up the Planck data. One frequency at a time, different methods of subtracting the foreground. Do they produce consistent results? Yes. So, what we're seeing is real.
And we also did many, many null tests. I won't go over these in gory detail, but you can see here that there was substantial attention paid to, for instance, the TB cross test. It should be null; it is null. So, we are taking the kind of careful steps that I kept asking experiments to take in the previous lectures. Okay, now I want to switch gears from gravitational lensing in general to gravitational production of B modes. And the trick here is that as E-mode polarization passes through this deflection field, E modes originating here, passing through this deflection field, some of it gets converted into B modes. It's another kind of mode mixing. Instead of going from one L to another, you're going from E mode to B mode. And here, again pinched from Wayne Hu's nice website, are some actual pictures, vastly overstated or magnified. But what we have here is the CMB: temperature, E-mode polarization. And then we plop down in front of it a single blob, a big massive collection of mass. What does it do? Well, this is what it does to temperature. This is what it does to the E modes. It distorts them. It mixes modes, as I've already said, okay? What does it do to the B modes? It produces B modes, a non-zero B-mode signal, when there was no initial B mode present. So, lensing basically takes those nice, radially symmetric E-mode patterns and introduces some curl-like behavior, so it generates B modes. But remember, we know the power spectrum of the E modes, and we also know how much stuff there is between us and the surface of last scattering. We know the power spectrum of the deflection field, so we can accurately predict what the lensed B-mode signal should look like, and here it is. We know the effect of intervening matter, and we know what the E-mode input spectrum is, so we can make predictions, and here they are. This is the predicted B-mode signal as a function of, sorry, I've made a mistake. This is not the right graph.
This is the predicted temperature spectrum, which tells us the effect of intervening matter, but since we know that, we can also predict the B-mode lensing signal exactly, and I'll show you that next. Here it is, mixed in with the results. This red line is derived from the E-mode power spectrum, which we know, and the deflection field, which we know, so I claim that this is an exact and unalterable prediction of the theory. Do the experimental results agree? And here are some experimental results from BICEP2 and from POLARBEAR, which you saw in the lecture, I believe it was on Wednesday, by Poletti. POLARBEAR has higher angular resolution than BICEP2 and was specifically designed to detect the lensing B-modes. BICEP2 has adequate angular resolution to begin to detect things at L of order 200 or 300 and also saw this same trend. So the claim is that the lensing B-modes have been moderately well detected. And of course, these are the BICEP2 points and this is R equals 0.2, the inflationary modes. That's why I have this slide up here. If we want to detect the inflationary modes accurately at levels that are much below 0.2, so this curve moves down, we have the problem that the lensing B-modes are going to get in the way. But since we know this curve exactly, we can delens the images and get rid of that effect. It's yet another foreground. This stuff, B-modes produced by gravitational lensing of the E-mode signal, can be subtracted. Could lensing explain the BICEP2 points themselves? Probably not, okay? You'll notice that they're a little high, okay? And remember, this is a log scale. So when I say they're a little high, they're a factor of two high. So my guess is that dust is not only explaining the inflationary B-mode claim, but is also responsible for these points being a little high. Good eye. I can't put myself into the mindset of the BICEP team except to say that they did their very best to model the dust and they just got it wrong, okay? Nature wasn't kind to them. 
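The delensing idea mentioned above can be put in simple quantitative form. As a sketch (my own illustration, not from the lecture): if you build a template of the lensing B-modes that correlates with the true lensing B-modes at level rho, subtracting it leaves a residual power of (1 - rho^2) times the lensing power. The function name and the example numbers here are mine.

```python
def residual_lensing_bb(c_lens, rho):
    """Residual lensing B-mode power after subtracting a template.

    c_lens : lensing B-mode band power (arbitrary units, e.g. uK^2)
    rho    : correlation coefficient between template and true lensing B-modes

    Standard forecast relation: residual = (1 - rho**2) * c_lens.
    """
    if not 0.0 <= rho <= 1.0:
        raise ValueError("correlation coefficient must lie in [0, 1]")
    return (1.0 - rho**2) * c_lens

# A template that is 70% correlated removes about half the lensing power:
print(residual_lensing_bb(1.0, 0.7))  # roughly 0.51
# No template at all leaves the full lensing foreground in place:
print(residual_lensing_bb(1.0, 0.0))  # 1.0
```

The point of the sketch is that delensing only helps to the extent that the template is well correlated with the true deflection field, which is why high-fidelity lensing reconstruction matters for pushing below R of 0.2.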
You can be less charitable and say that they really wanted to detect the inflationary B-modes. So do I, right? So once they saw the inflationary signal, they were happy to go with it. And little concerns like these points being a little high compared to the lensing signal didn't stand in the way, okay? But let me make the point that although this is still rough, we have detected, we being the community, have detected the lensing B-modes, and they're roughly the right amplitude. There's nothing seriously wrong or askew in the detection. Okay, I have just a few minutes. So let me end with yet another kind of cosmological B-mode. You learned in previous lectures that the universe was initially ionized. It settled out at a redshift of about 1,000 to be neutral, but then was re-ionized by early stars, early galaxies, and early quasars, at redshifts of order six or ten or something like that. Once the universe was re-ionized, there were again free electrons. And if those free electrons saw an anisotropic radiation field, they could once again reintroduce polarization. So there's a second source of polarization in the CMB, produced by the free electrons which re-emerge at re-ionization. Oh yes, I still have a whole half hour. I've been talking too fast. We'll go to coffee early or there'll be questions. So these electrons again allow Thomson scattering. However, there are two issues that have to be taken into account. One is that the characteristic scale of polarization now becomes the horizon scale at a redshift of order eight, instead of a redshift of order 1,000. So the angular scales are different. The values of L are smaller because the angular scales are bigger. So we're going to see this effect of the re-ionization B-modes only at very low L, below, roughly speaking, 20. It's this bump here. In addition, the amount of polarization you get, B-mode or E-mode, it doesn't matter, depends on the optical depth. 
That is, even though the universe is completely re-ionized, and we know it is from earlier lectures, the optical depth to Thomson scattering can be quite small. And indeed, that's been measured by Planck and other experiments, and the optical depth is of the order of 0.1 or below. So intrinsically, any polarization produced by re-ionization is going to be weaker than the polarization induced at a redshift of 1,000. So these bumps will be lower in amplitude, weaker than the inflationary E-modes and B-modes. The variation shown here is for a cosmological or inflationary value of R of 0.01 and depends on the optical depth. But they're certainly substantially weaker. We now know what the optical depth is. Here's a reference to it: 0.08 is a little lower than was assumed for these drawings. So, what hope do we have of seeing these re-ionization B-modes? I want to be fairly pessimistic about that. The characteristic scale, as I've said, is tens of degrees. L less than 20 corresponds to angles bigger than roughly 10 degrees. It's experimentally very awkward. It's essentially impossible to make these observations beneath the atmosphere because atmospheric variations are just too dominant. So a space experiment would be required. The galactic foregrounds are a bigger problem on large angular scales. And in addition to fighting the galaxy, if you want to see re-ionization B-modes with a characteristic scale of 10 degrees, you've obviously got to look at a large chunk of the sky to get a statistically significant sample. For instance, if you want 10 patches of roughly 10 degrees on a side, you have to look at 1,000 square degrees of the sky. And the amplitude is tiny. Even worse, as I pointed out, it scales with tau, the optical depth of this re-ionization, the optical depth to Thomson scattering, and the Planck results are favoring a smaller value of that parameter than has normally been the case. 
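The arithmetic behind those two statements is quick to check. As a minimal sketch (my own helper functions, using the usual rule of thumb that multipole L corresponds to an angle of roughly 180 degrees over L):

```python
def multipole_to_degrees(ell):
    """Approximate angular scale, in degrees, associated with multipole ell.

    Rule of thumb: theta ~ 180 deg / ell.
    """
    return 180.0 / ell

def survey_area_deg2(n_patches, patch_side_deg):
    """Sky area needed for n independent square patches of a given side."""
    return n_patches * patch_side_deg**2

# L < 20 corresponds to angular scales bigger than roughly 10 degrees:
print(multipole_to_degrees(20))    # 9.0 degrees
# 10 patches, each roughly 10 degrees on a side, as quoted in the lecture:
print(survey_area_deg2(10, 10.0))  # 1000.0 square degrees
```

Since the full sky is about 41,000 square degrees, a 1,000-square-degree survey is only a few percent of the sky, yet it is already an enormous field for an experiment fighting foregrounds on these scales.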
So it looks a little pessimistic. In any case, what we're really interested in is not so much these re-ionization B-modes, but the inflationary ones. This peak here at around 100 in L, that's what we're really looking for. And for those, we really do need to wait on additional experimental results. As I've said before, there are many groups out hunting for the B-modes, in a variety of ways: both single-frequency observations, which will make use of external data sets to control foregrounds, and multi-frequency observations, on a variety of angular scales. Most of these groups are now claiming sensitivity that will reach something like 0.01 in R. Whether that's achievable or not, I think, is going to depend primarily on their ability to control foregrounds. So the problem is yesterday's lecture, not Tuesday's lecture. Okay, at least I hope that you now know how to read the papers that will emerge from these groups, making claims of B-mode detections or better upper limits on B-modes, with a much more critical eye. And I'm going to end by doing two things. One is to thank you for your attention, for patiently listening to an incredible amount of experimental and foreground detail, but that's not my fault. I was told by Paolo, Ravi, and others that I had to give such a talk, so I did. And also for good questions. And now I'll invite questions, and what I'll do is leave up a list of useful references for those of you who are interested in copying them down. Okay, so we're going to go to coffee early unless there are lots of questions.