I'll begin by asking whether people in the back can hear. It's all right? Good. I want to thank the organizers of the school for inviting me back to Trieste and to ICTP. I was last here, I think, about 30 years ago, talking about the cosmic microwave background, and indeed that's what I'll be talking about today, but with a rather specific focus. I'll just start by mentioning that the cosmic microwave background has been particularly successful in turning cosmology into a precision science. Many of the quantities you've heard about in previous lectures, such as the baryon density, matter density, dark energy density and so on, are well determined by studies of the CMB, and I'll be looking at one other use of the microwave background: to provide information about inflation. However, this is going to be a talk by an experimentalist, or an observer, for theorists. At least I hope it will appeal to theorists. I want to emphasize primarily the instrumental and foreground challenges to looking for polarized fluctuations in the microwave background. As a sort of lead quote for the talk I'm giving today, I want to quote Heraclitus, who said something along the lines that nature likes to hide herself, or often is hidden, and indeed we will discover that's true. The Greek scholars in the audience will notice that I missed some accent marks. This is an outline of what I hope to do in the three lectures I'm giving. The first lecture today is to review a few features of the B-mode signals, which I'll define and describe later, and in particular those features of the B-mode signals that influence how we make our observations. Then I'll talk about various experimental details and results, in particular instrumental challenges and ways to defeat them. In other words, how do you make your apparatus behave in such a way that you get correct results?
In the second lecture on Thursday, and note that that will now be Thursday afternoon, I'll be talking about astrophysical foregrounds, and particularly galactic dust, which turns out to be a contaminant of the very signal we're looking for. And again, taking this experimental approach, the notion is to describe the foregrounds and then to describe means of getting rid of the foregrounds so that you can see the underlying signal. And in the third lecture on Friday morning I'll be talking about a claimed detection of the B-modes by the BICEP team, some problems that have arisen in the interpretation of that, and where we stand observationally in terms of limits on B-modes. I want to emphasize this point in particular down here: do interrupt to ask questions. I hope I've arranged things so that there's time during the talk for you to interrupt and ask questions. Furthermore, if there are additional questions, I will remain after my talk is finished; if you have questions in general about the CMB, the Planck satellite mission or other things, come down and join me informally after my talk. Here's a nice sketch of essentially what I propose to do. What this shows is the Planck image of the microwave sky, completely and totally dominated in this case by the galaxy. Up here you can see, in regions where the galaxy is less strong, a little bit of the fluctuation in the microwave background peeking through. What we need to do as observers is to strip away all of this stuff to leave the cosmic microwave background with its fluctuations. In this picture, as in all others, the presentation is in galactic coordinates: galactic plane running along here, North Galactic Pole, South Galactic Pole. Now, most talks you've heard about the microwave background emphasize the microwave background itself. My talks will primarily be about the dirt, the stuff that gets in the way of the science we're actually interested in.
Hence the short title of my talks: dirt, rather than B-modes. I'll be talking about problems more than results. I trust people are familiar with this picture. This happens to be the Planck image of fluctuations in the microwave background, with a false color here: slightly warmer regions of the sky, slightly cooler regions of the sky. It's important to recognize that the difference in temperature you're seeing there is of the order of tens or perhaps a hundred microkelvin, whereas the temperature of the CMB itself is about three kelvin, and the temperature of the apparatus that made this picture is of the order of 40 or 50 kelvin. So we're looking many orders of magnitude below the temperature of the devices we're using. I'm going to assume general familiarity with this picture, and let me just remind you that, in a sense, it's a picture or a representation of the distribution of matter at a particular time in the history of the universe. Earlier in the history of the universe, the microwave background photons were strongly scattered. At a particular instant in time, about 380,000 years into the history of the universe, the universe became transparent as the free electrons disappeared, and it's that surface that we see here. So this is, in a sense, a map of the distribution of matter at a particular early time in the history of the universe. Next, I want to make the point that if the fluctuations you've just seen in this little image here are Gaussian in their statistics, then all of the information contained in this map can be represented by a power spectrum. You heard earlier today of the introduction of a power spectrum and its connection to the two-point correlation function. Here is the power spectrum of the fluctuations shown in this map. It's a plot of the square of temperature variations, expressed as you see here in microkelvin squared, as a function of angular scale.
And since all the interesting features occur at relatively small angular scales, it's normal to plot it instead as a function of multipole moment, the multipole moment increasing from the dipole up to angular scales here of the order of arc minutes. So you'll see many representations of the power spectrum. The power spectrum shown here is for temperature fluctuations. And as I go along, I'll be giving you references to the papers from which these various results are derived. Now let me turn for a moment to some features of the B-mode signal and consequences for searches for just that signal. Primordial B-modes are produced only by tensor fluctuations. Think for a moment about the second lecture this morning. It was pointed out that if you have gravitational lensing by a scalar perturbation, such as a point mass, the stretching is radially symmetric. Indeed, what I've drawn here is the picture of the little ellipses in the case of a void. A void or a point mass is a scalar perturbation, right? Another way of looking at this is that the pattern I've drawn here is consistent with a divergence, a "div" signal. Okay? If you have tensor perturbations in the early universe, the tensor perturbations can introduce a different kind of symmetry, and I'll show you that in just a moment. The tensor perturbations are associated with gravitational waves, which in turn are a generic prediction of many, but not all, theories of inflation. So notice the chain there: the B-modes, which I'll be talking about, are linked to tensor perturbations, linked to gravitational waves, linked to inflation. Okay? So the B-modes are one of the indicators of inflation. There are a couple of others that I will mention, but not deal with in much detail. Again, in some theories, but not all theories, the perturbations in the scalar field are non-Gaussian.
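As a rough rule of thumb, and it is only a rule of thumb, a multipole moment ℓ probes an angular scale of about 180 degrees divided by ℓ. A few lines of Python make the correspondence between multipole and angular scale concrete; the function name and the sample values are mine, purely for illustration:

```python
# Rule-of-thumb conversion between multipole moment ell and angular
# scale on the sky: theta ~ 180 degrees / ell. Illustrative only.

def ell_to_degrees(ell):
    """Approximate angular scale in degrees probed by multipole ell."""
    return 180.0 / ell

# Low multipoles probe tens of degrees; ell ~ 10000 corresponds to the
# arcminute scales mentioned above.
for ell in (2, 180, 1000, 10000):
    print(f"ell = {ell:5d}  ->  theta ~ {ell_to_degrees(ell):.3f} deg")
```

So ℓ of a few hundred corresponds to the degree scales where the primordial B-mode signal is expected to peak.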
And again, it's possible in some inflationary models to produce scalar perturbations which are isocurvature, not adiabatic. Just a little digression on that difference. In the case of adiabatic fluctuations, you take the material contents of the universe, which at this early time are primarily matter and radiation, with a little bit of dark energy, and you compress them all equally. In other words, it's an adiabatic compression. In the case of isocurvature fluctuations, you keep the curvature constant. So if you're compressing the matter, you must uncompress the radiation to maintain constant curvature. The next point I want to make is that the predicted amplitude of the gravity waves, and hence the amplitude of any B-modes that come from them, is highly model-dependent. It depends very much on your particular theory of inflation. And there is a theorem, which I would like named after me, which goes as follows: whenever there are N theorists, there are N plus 2 theories of inflation. So there are a very large number of theories. There's no way in which a single experimental result is going to isolate the single correct theory. What we can do, and will do by the end of these three lectures, is to get rid of some theories, to pare away the set of theories of inflation. So I'll be emphasizing this point again and again: the amplitude of the gravity waves, and therefore of the B-mode signals that emerge from them, is highly model-dependent. And of course you can turn this argument the other way: if you can measure the amplitude of the gravity waves or the B-mode signals, then you can say something about the theories of inflation. Next, gravitational waves decay on small scales, smaller than the horizon scale at the moment of last scattering. The last scattering occurred when the universe was roughly 380,000 years old, so there is associated with that a very characteristic physical scale. On scales smaller than that, the gravity waves are damped away.
And that scale, through the angular diameter distance, corresponds to about one degree. So on small scales you don't see these gravity wave signals; you don't see B-modes. And furthermore, the polarization grows weaker as you go to larger and larger scales. So in a very crude way, I can draw an approximate power spectrum for the expected gravity wave or B-mode signal. At small angular scales it drops off rather rapidly; it rises up and then goes back down as a function of L. Now what I've drawn here is of course a cartoon, not a real power spectrum, but I want to emphasize that this general form is fixed by the physics of cosmology. So the angular behavior, the power spectrum of the B-modes, is pretty well constrained. What isn't constrained is the amplitude. So keep that in mind as we turn to the observations. Now I want to talk about the pattern of polarization in the microwave background. You'll recall in that picture I showed that there are regions of slightly higher temperature and regions of slightly lower temperature. And around those regions of higher and lower temperature, polarization patterns can be established. They come in two flavors. One are the so-called E-modes. The name E-mode is derived from analogy to an electric field, which, you'll recall, has a non-zero divergence. And these are produced, as I've already said, by scalar fluctuations, regions of higher or lower density. In contrast, the B-modes are curl-like, hence named after a field with a curl component, namely the magnetic or B field. And these are produced by tensor perturbations. Parenthetically, tensor perturbations also produce some E-mode signal, but it's sub-dominant. The primordial B-mode signal, however, is produced only by tensor fluctuations. So if you see B-mode fluctuations, it's a strong indication that you've got gravitational waves present, which are producing tensor perturbations. And I want to emphasize that these patterns are not global.
That is, when I talk about the polarization of the microwave background, I'm not talking about polarization rods or vectors that run across the whole sky. Nor am I talking about a point polarization. Think for a moment about a polarized radio source with a polarization direction given, let's say horizontally, at a given point. Neither of these properties, global polarization or point polarization, is what I'm talking about. Instead, it's a pattern of polarization around the hot and cool spots in the CMB. To give an example, here's what the pattern of polarization vectors would look like, in a simulation, around a cold spot in the CMB and around a hot spot in the CMB. And you can see the characteristic radial symmetry of those patterns. Now, if there were B-modes, the symmetry would be very different: there'd be a handedness to it, a curl-like pattern. For those of you who are a little less familiar with the microwave background than others, let me just interject here how it is that polarization is produced in the first place. How can it be that scattering by a free electron, Thomson scattering, produces polarization? The answer lies in the fact that the electrons witness, or see, an anisotropic radiation field. So if you have scatterers in the form of free electrons, and at that time of last scattering you also have an anisotropic radiation field, polarization can be produced. For instance, here is incoming radiation, unpolarized but strong. It scatters from the electron; here is scattering along this axis. At 90 degrees you have, again, unpolarized radiation, but less strong. The scattered radiation along that direction is weaker, and you end up with linear polarization. So I've already said that this is not what we see; I'll skip over these slides and turn instead to talk about the power spectrum that you expect, particularly from E-modes.
These, remember, are associated with scalar perturbations, and we have a map of those scalar perturbations. I've already shown it: this is the map of the temperature fluctuations in the microwave background. Since we know the amplitude of those scalar perturbations, we can predict exactly the amplitude and the power spectrum of the E-modes. There's no free parameter. And as an instance of that, what I want to show here are the measurements made by the Planck satellite. Again, this is a power spectrum, the measurements in blue. The red curve is not fitted to the points. The red curve, instead, is the theoretical prediction for the EE polarization drawn from the temperature fluctuations. So the cosmological model uniquely predicts what the E-mode polarization signal should be, and the agreement is excellent. And, parenthetically, that tells us that we understand the mechanism of polarization at the surface of last scattering. In other words, the physics is well understood, because we get such good agreement between theory and experiment. You can also play the following trick. Go back to this image and select regions that are hot and regions that are cold. And just as you heard this morning, where lots of images of galaxies are stacked to look for gravitational lensing, you can stack lots of images of hot spots in the CMB, and lots of images of cold spots in the CMB, and look to see if you see the characteristic divergence-like signal. One of these sets of patterns are simulations derived from the Planck cosmology, and the other are the observations of these stacked points. So we're seeing the E-modes; they have the proper amplitude, they have the proper orientation, and so on. The top row happens to be simulations, the bottom row happens to be measurements. Next, I want to talk for a moment about what are called cross spectra. Remember that the power spectrum is an expression of the two-point correlation function, a transform of it.
It is also measured in units of microkelvin squared. So it is possible to take cross power spectra: E cross B, E cross T, B cross T, and so on. And here I want to set a small homework assignment. I want to make the following claim and ask you to think it through. First, E cross B must, by some symmetry argument, be zero. I'll come back to the footnote in just a moment. Likewise, since the E-modes are determined entirely by the scalar fluctuations, the same must be true for T cross B. It follows that if you can make power spectra of the EB and TB crosses, they provide a good null test. Theory tells you that they're supposed to be zero. If your experiment shows that they're non-zero, it's probably the case that your experiment is screwed up, that you've made a mistake of some sort. Now, that's not quite true, and that's where the footnote comes in. There are some inflation models, as I've mentioned, that predict non-Gaussian statistics, and these allow some mixing of the E and B modes. If you allow some mixing, there's a non-zero cross correlation. So I won't be talking about those at all. Instead, what I'll do is to treat EB and TB measurements as a good test of the cleanness or consistency of the experimental results. Now, as I've already said, the B-modes... Yes, there is a question. Sorry, I'm going to ask you to use the microphone so that everyone can hear your question. And then that also gives me time to think about the answer. Meanwhile, remember your homework assignment: see if you can come up with a nice argument, based on symmetry, that E cross B must be zero. Okay, your question. So if there is some kind of parity violation, of course a small effect, is it in principle possible to measure that? Because you'd somehow have to remove all the observational effects that can actually contribute to the parity-odd correlations. I'm not going to answer that question now.
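For the homework assignment, here is one standard way the symmetry argument can be sketched (a sketch, not the lecturer's own derivation): under a parity flip of the sky, T and E transform as scalars while B changes sign, so in a parity-respecting cosmology the ensemble averages must satisfy

```latex
% Under parity:  T \to T,\quad E \to E,\quad B \to -B.
% A parity-invariant ensemble therefore requires
C_\ell^{EB} \;=\; \bigl\langle a^{E}_{\ell m}\, a^{B\,*}_{\ell m} \bigr\rangle
\;=\; -\,C_\ell^{EB}
\quad\Longrightarrow\quad C_\ell^{EB} = 0,
\qquad\text{and similarly}\quad C_\ell^{TB} = 0 .
```

Any parity-violating physics that correlates E with B evades this argument, which is exactly the loophole raised in the question.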
Wait and hear about all the experimental difficulties that we have to face, okay? And then I'll come back and try to answer it. The question is basically: if you see a real E cross B signal, a parity-violating signal or something of the sort, could you use that to say something about cosmological issues, parity non-conservation and so on? The problem at the moment is that the observations won't allow that. But in principle, it's there. And when I get to that point, there is a reference that I will point out to you. Another question: in the last Planck paper, there was some disclaimer about the E-modes having what's called a leakage; does that have to do with this? I'll talk about that toward the end of this lecture, so just be patient. That comes up. Okay, back to the B-modes. The E-modes work. So now what about the B-modes? The amplitude is strongly model-dependent, as I've already said. And it's conventional to characterize the amplitude of the B-modes by a ratio, little r, of the tensor mode amplitude to the scalar mode amplitude. For virtually all theories of inflation, the value of this parameter is typically less than one. And it can be a lot less than one, but it's not generally more than one. Which means that the B-mode signals are intrinsically weaker than the E-modes. And when I talk about experimental limits on the B-modes, I'll be talking in terms of this quantity r. Next, another brief detour. There are in fact two classes of cosmic B-modes, one of which I won't talk much about at all. So far I've been talking about the primordial B-modes, which are produced by tensor perturbations and visible only on scales greater than the horizon scale at the time of last scattering. And I'll be focusing on these almost exclusively. But there is another kind of B-mode that's produced by gravitational lensing. It's basically a distortion by gravitational lensing.
It's too bad that the figures that Bhuvnesh Jain had up earlier got erased. You can imagine a bundle of rays passing through the matter on the way from the source plane to us. It's possible to take a divergence-like signal, distort it, and produce a small B-like signal. So this is produced by gravitational lensing, by all the matter between us and the last scattering surface. It is worth noting, however, that the distortion of the E-mode signal depends on just two things. One is the initial amplitude of the E-mode signal, which we know exactly. The other is the deflection field, essentially the same kind of argument that was being made this morning about the gravitational deflection produced by intervening matter. So if you've got some sense, from your theory of cosmology or of structure formation, about what the intervening matter is doing, either from optical observations or from the CMB itself, you can predict the amplitude of the lensing B-modes. They also occur at a rather different angular scale: they typically peak up at angular scales of around 1 to 10 arc minutes, since, just as you heard this morning, the gravitational effect of the intervening matter sits at a fraction of a degree. Okay, I won't be talking much about those, but I will mention them in the final lecture. Okay, now back to a few remarks about values of r. This has to do with the primordial fluctuations; r is the ratio of the tensor to the scalar amplitudes. This is certainly not a full analysis; I trust other lecturers will handle this or have handled this. I want to make the claim that r less than 1 is generally favored, but there's no lower limit. So if you're designing an experiment to go after the B-modes, you'd better design it so that you can reach a value of r of, let's say, a tenth or better. Next, I want to connect the value of r to some other pieces of physics. And again, I won't do this in detail, because I'll be talking about the experiments rather than the results.
But r is related to the energy scale involved in inflation, as many of you will know, and that relation is approximately given here: the energy, divided by a characteristic energy of 2 times 10 to the 16 GeV, relates to the value of r. Then in some models, and I should emphasize some models but not all, there's a link between r and the tilt in the power spectrum of the microwave background. And there again, let me go to the board for just a minute. I'm going to draw a cartoon of the power spectrum of scalar or temperature fluctuations in the CMB. It rises up, there's a peak, and it does that, right? The tilt refers to the following: this entire graph can be tilted to have more power at low L, large angular scales, or vice versa. And that tilt is represented by n sub s minus 1. If n sub s, the spectral index of the scalar fluctuations, is exactly 1, then there's no tilt. In fact, we expect the number to be slightly less than 1. And the departure from unity, in other words the amplitude of this small quantity n sub s minus 1, has to do with the number of e-folds in the inflation. And again, in some models but not all, there's a direct link between the tilt and r. Here's one case. You'll notice that if I fix r, I've fixed n sub s. Alternatively, if I fix n sub s, I must have the right value of r. Here's a way that the results are frequently presented, and indeed I'll be showing this in the third lecture. Here is r, and here is n sub s, slightly less than 1. If this model is correct, there's a unique trajectory or path connecting the value of r and the value of n sub s. And of course the question is: does that particular trajectory lie anywhere near the experimental results? If it doesn't, then this particular theory is out. That's how the tests are done. Finally, the power spectra of the polarized signals. Can I do better than this cartoon to represent the B-mode power spectrum?
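Taking the relation quoted above at face value — roughly r ≈ (E / 2×10¹⁶ GeV)⁴, so E ≈ 2×10¹⁶ GeV × r^(1/4); the exact prefactor varies between references — a short Python sketch shows how weakly the inferred inflation energy scale depends on r. The function name and the fourth-power form are my assumptions for illustration:

```python
# Inflation energy scale implied by a tensor-to-scalar ratio r, using
# the approximate relation quoted in the lecture:
#   E ~ 2e16 GeV * r**0.25
# A sketch only; the prefactor and exact power vary between references.

E_CHAR_GEV = 2.0e16  # characteristic energy scale, GeV

def inflation_energy_gev(r):
    return E_CHAR_GEV * r ** 0.25

for r in (1.0, 0.1, 0.01):
    print(f"r = {r:5.2f}  ->  E ~ {inflation_energy_gev(r):.2e} GeV")
```

Because of the fourth root, dropping r by a factor of 10 lowers the implied energy scale by only about a factor of 1.8, which is why even upper limits on r are informative about the energy scale.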
First, I remind you that the E-mode power spectrum is predicted exactly and agrees beautifully with the cosmology derived from the temperature measurements of the CMB. Here are the B-mode predictions. There's a characteristic shape, which I tried to indicate in my cartoon. But we don't know the amplitude: the shape is preserved, the amplitude isn't. It depends on the value of r. Hence what I've shown here are two possibilities. I believe those are for r = 0.1 and r = 0.01; I'm not certain of the numbers, but they show that there's a range of amplitude possible. And then here are the gravitationally lensed B-modes, which I won't be talking much about. They peak up at a much larger value of L, a much smaller angular scale; again, 1 to 10 arc minutes would be reasonable. But it's the primordial B-modes that I will be interested in. It's also worth noting that this picture is shown with a linear scale and this power spectrum is shown with a logarithmic scale: temperature fluctuations, the EE modes, and the B-modes lying below them. Now I'm going to shift gears and talk about the consequences of the little bit of descriptive material I've presented for the design of experiments to detect the B-modes. And this is where I become a complete experimentalist: I'm going to be talking about equipment, observational techniques, and so on. But I think it's useful for those of you who are theorists to know how difficult these experiments are and how careful you have to be in interpreting the results. Okay. If r is small, let's say of order one-tenth, the polarization amplitude, the amplitude of these little polarization rods, is less than a microkelvin. And again, the equipment we use to make these measurements has a characteristic thermal temperature of 50 kelvin if it's in space, 300 kelvin if it's on the ground. So we're talking about a huge dynamic range. You certainly need high sensitivity to detect signals of the order of a microkelvin. In addition, we face a fundamental problem.
The detectors we use are getting very close to a fundamental quantum limit; you can't do better than that. So if you have a single detector, you're limited not by the mechanics of the detector, how it's designed or what have you, but fundamentally by quantum physics. So the only way to increase sensitivity is to use many detectors. Most B-mode searches now use of the order of a thousand detectors to lower the resulting noise. Next, there's a frequency sweet spot, a range of frequencies that's best for B-mode observations. In principle, you could make observations at any frequency you choose, but there are certain constraints that make it favorable to use frequencies around 100 gigahertz, wavelengths of the order of millimeters. First, to avoid the atmosphere if you're making observations from the ground. And second, to avoid foregrounds, a subject that I'll be talking about on Thursday. So typically the observations I'll be describing are made within a factor of two or so of 100 gigahertz, at wavelengths of a few millimeters. That being said, the combination of the need for sensitivity and the frequency tends to push you to using bolometric detectors; I'll be talking a little bit about them in a moment. Next, if you're interested in the B-modes that come from the primordial tensor fluctuations, you're interested in scales, as indicated here, of around a degree or so. And it turns out that that's an experimentally rather awkward scale. Think for a moment about a large radio telescope: the diameter of the telescope is far, far larger than the wavelength used, so the angular resolution is typically very small. Then think about building a simple piece of equipment in your laboratory at home, say a cone to collect microwaves. There, the size of the horn or cone compared to the wavelength of light can be rather small, right? But what you need to see about one degree is a ratio of about 60 to one, and it just turns out that's rather awkward.
For instance, if you're working at, say, a couple of millimeters, you need a structure that's roughly this size, and that's awkward. Okay. Furthermore, if you're interested in fluctuations of about a degree in size, maybe a little bigger or a little smaller, and you want to see whether there's a characteristic pattern of them across the sky, then with patches of a degree or so you want to cover a large fraction of the sky in order to get a statistically significant number of independent samples a degree in size. So you've got to have wide sky coverage as well as high sensitivity. Okay, now some discussion of the instruments we actually use. The first thing you've got to do is to define your beam on the sky: what chunk of the sky are you going to look at? And that's done by some aperture, either a horn, as shown here for the BICEP experiment, or a primary mirror, as done here for the Planck experiment. The diameter of the horn or the diameter of the mirror determines the angular size of the piece of sky you're looking at. If you're using a bolometer, a device which is sensitive to energy no matter what its wavelength, you also have to use filters to keep energy from creeping in outside of the frequency range you're interested in. I've already said that you need arrays of detectors to improve the sensitivity. They come in two classes: bolometers, which I've already mentioned, and radiometers, which I'll talk about in a moment. And both of these can be made in arrays. You also need some form of fast switching, and I'll be talking about that in some detail. You cannot simply stare at the sky, at the microwave background, and make measurements, because while you're staring, your equipment is likely to change, so the incoming signal will appear to vary.
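The awkward 60-to-1 ratio mentioned a moment ago comes straight from the diffraction limit, θ ≈ λ/D. A small sketch (function name and exact numbers are mine, for illustration; real beams carry an order-unity prefactor depending on aperture illumination) shows why a one-degree beam at millimeter wavelengths needs a roughly 10-centimeter aperture:

```python
import math

# Diffraction-limited beam size: theta ~ lambda / D (radians), ignoring
# the order-unity prefactor that depends on the aperture illumination.
# To resolve ~1 degree at millimetre wavelengths you need D/lambda ~ 60.

def aperture_for_beam(wavelength_m, beam_deg):
    """Aperture diameter D (metres) giving a beam of beam_deg degrees."""
    theta_rad = math.radians(beam_deg)
    return wavelength_m / theta_rad

D = aperture_for_beam(2e-3, 1.0)  # 2 mm wavelength, 1 degree beam
print(f"D ~ {D * 100:.0f} cm, D/lambda ~ {D / 2e-3:.0f}")
```

That size is too big for a simple laboratory horn and far smaller than a radio telescope, which is exactly the awkwardness the lecture describes.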
So what you need is some method of comparing the sky with something else rapidly, or comparing two pieces of the sky — some method of doing a fast switch — so that you're making a comparative measurement rather than an absolute measurement, and you're making it on a time scale that's short compared to any changes in your instrument. I'll be talking about examples of this. Finally, remember that we're looking for microkelvin signals. Here is the ACT telescope sitting in the desert in Chile. It's sitting on ground which is essentially perfectly emissive at 300 kelvin, or 270 kelvin. You've got to keep radiation from the ground from diffracting around the edges of the mirrors and causing a signal, and you do that by using what are called ground screens. This structure, which is what catches your eye, is not the telescope; the telescope is the little thing in here. This huge structure is designed specifically to keep radiation from the ground from entering the telescope. So all of those features are needed. Yes, a question: the emissivity of metal is very low compared to the emissivity of dirt. Right, let me go back. The idea, basically, is that what the instrument sees is either emission from the ground screen, which is much smaller because the emissivity of metal at microwave wavelengths is small, or radiation coming from the sky that bounces off it. So it's basically replacing the warm ground with reflected skylight, which is cold. Now some more gory detail on detectors, starting with bolometers. They measure the total energy deposited, and the bolometer is this entire structure here. It's designed in the following clever way, first worked out by Paul Richards at Berkeley. These sort of spider-web patterns here are metallic, and they're designed to have spacings that are smaller than the wavelength of light that you're interested in.
So to an incoming photon of long wavelength, I'll just indicate it schematically here, let's say two millimeters, this thing looks like a reflective or absorptive sheet — continuous, okay? Any radiation that's incident on this device causes it to warm up, and then you have a little thermometer here in the center; I'll talk about what we typically use for thermometers. Why do this? The answer is that the cross-section of this thin spider web for cosmic rays is much reduced. You could, in principle, have a solid surface here that catches the radiation you're interested in, but that would also catch a far larger flux of cosmic rays, and cosmic rays, particularly in space experiments, turn out to be a substantial problem. Okay, so what do we use here to measure temperature? A microwave photon comes in, or many of them come in, causing a slight increase in temperature. How do we measure that? That's typically done with what are called transition edge sensors. It's a superconducting device whose resistance, as a function of temperature, looks something like this, and again, this is a cartoon: very low resistance, essentially none, until a certain temperature, and then a steep rise up to some non-zero resistance. Yes? Sorry? I should not have said reflect; I should have said absorb. Yes, it's designed to absorb. Right: resistance as a function of temperature. You'll notice that there's a very substantial change in resistance for a very small change in temperature. And by using a feedback mechanism, you keep the device sitting right here: as it warms up, increasing temperature, the resistance would go up, except that you apply a bias to pull it back down again. That biasing signal is what you measure. So you're making use of the large dR/dT at the transition edge of a superconductor.
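The steepness being exploited can be seen in a toy model of the transition edge. The numbers below (normal resistance, transition temperature, width) are purely illustrative, not taken from any real device, and the tanh shape is just a convenient cartoon of the superconducting transition:

```python
import math

# Toy model of a transition-edge sensor (TES): resistance rises steeply
# from ~0 to its normal value R_N over a narrow width around T_C.
# All numbers are illustrative, not from any real device.

R_N = 0.1      # normal-state resistance, ohms
T_C = 0.5      # transition temperature, kelvin
WIDTH = 0.001  # transition width, kelvin

def tes_resistance(T):
    return 0.5 * R_N * (1.0 + math.tanh((T - T_C) / WIDTH))

def dR_dT(T, h=1e-6):
    """Numerical slope of the R(T) curve."""
    return (tes_resistance(T + h) - tes_resistance(T - h)) / (2 * h)

# On the transition edge the slope is enormous; well above or below it,
# the slope is essentially zero. That contrast is what the bias feedback
# loop described above exploits.
print(dR_dT(T_C), dR_dT(T_C + 0.05))
```

With a millikelvin-wide transition, a microkelvin-scale temperature change still produces a measurable resistance change, which is the whole point of biasing the device onto the edge.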
Finally, as an example of the sensitivity, Advanced ACTPol, the camera that's now on the Atacama Cosmology Telescope, has an overall sensitivity of about 5 micro-kelvins per root second. It's an odd-looking unit, but what it means is the following. If I integrate for one second, the typical RMS noise is about 5 micro-kelvins. If I integrate for 10^4 seconds, which is about 3 hours, the typical noise is 0.05 micro-kelvins. In other words, a reasonable amount of integration time, of order hours or days per point on the sky, is enough to measure the B-mode signals. We can get down low enough. So fundamentally, the sensitivity is okay if you've got 1,000 receivers and you've got many days of observation. The other technique used is radiometers. They work in a slightly different way, but I want to emphasize one feature of radiometers, and that is what's called the 1/f knee. I've displayed here frequency in hertz and basically a measure of noise. At high frequency, radiometers, like bolometers, have what's called white noise. The level of noise is independent of frequency. But as you go to lower and lower frequencies (this is a frequency of 0.01 hertz, so a time of about 100 seconds), the noise becomes much larger. If you try to integrate the signal for up to 100 seconds, the noise will dominate. So you beat this by working only at high frequencies, above about 1 hertz, that is, on time scales less than a second or so, where the device behaves in a white-noise fashion. So this is an example of why it is that you need to do fast switching. You can't simply sit and integrate at a particular point of the sky because the noise will kill you. You have to make comparative measurements on time scales, for this particular device, of a second or less. So now let me summarize where we stood in terms of B-mode searches before March of last year. In other words, say 15 months ago or so. And I show this slide for two reasons. 
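The integration-time arithmetic above is just white-noise averaging, and it is worth making concrete. A short Python sketch, using the quoted sensitivity of about 5 micro-kelvin root-seconds:

```python
import math

def rms_noise_uK(net_uK_rts, t_seconds):
    """White-noise RMS in micro-kelvin after integrating for t_seconds,
    given a sensitivity (NET) in micro-kelvin * sqrt(second).
    RMS falls as the square root of the integration time."""
    return net_uK_rts / math.sqrt(t_seconds)

print(rms_noise_uK(5.0, 1.0))   # 5.0 uK after one second
print(rms_noise_uK(5.0, 1e4))   # 0.05 uK after ~3 hours
```

This is why a thousand detectors and days of integration are enough: the white-noise level keeps dropping as the root of the time, provided the 1/f noise discussed next is kept out of the band.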
One is that you'll notice that we are achieving, at values of L between 10 and 100, in other words right about where we need it, sensitivities of the order of a tenth of a micro-kelvin. Things get a little worse as you go to higher multipoles, that is, smaller angular scales. But we have the sensitivity necessary to detect reasonable values of r. The other point is that there are a large number of experiments involved in the search for B modes, as shown by the color scaling here. The ones that are perhaps most interesting are the BICEP points. Okay. It may be that the BICEP team didn't like what I said. They're going to like even less what I say on Friday. So BICEP is one to look at. But some of the other points here are comparable. QUaD, for instance, is a large angular scale, low L experiment getting good results down here. Okay. I like to draw an analogy between this slide, which shows the hunt for B modes, and the situation that applied before 1990, when a variety of ground-based and space-based experiments were looking to find fluctuations in temperature in the microwave background. Increasingly sensitive upper limits were being established, but until the launch of the COBE satellite and the announcement of fluctuations in the CMB in the early 1990s, there'd been no detection. And in terms of the B modes, we're about in that same place 25 years later. Tight upper limits, but no firm detections as yet. All right. Now, I was asked earlier whether a small E cross B signal could tell us something about parity violations or models of inflation and so on. I want in the next half hour or so to run through some of the instrumental systematics that get in the way of that, and then talk a little bit about ways that we can beat the instrumental systematics. 
And to give you some hope that I'll get through this in half an hour, what I'm going to end with is a list of things that I hope you theorists will look at when you read experimental papers about the B modes, things to look for that will give you some trust that the experiment has been done right. Okay. So there are several categories here. The first are instrumental effects that bother both measurements of temperature fluctuations and measurements of polarization. For instance, one of them is the 1/f noise. If you don't make a comparative measurement at a rapid enough cadence, you're introducing additional noise into whatever you measure, whether it's temperature or polarization. There's a rather similar thing when we come to the bolometers in terms of their time response. The time response of bolometers is long. That is, once they've been heated up, they take a long time to cool back down. That will turn out to be a problem. And so will cosmic ray hits, which deliver energy to the bolometer but not from the microwave background, so you don't want that. The second category of instrumental problems is instrumental polarization. You imagine you have a purely unpolarized signal coming into your instrument, but things in the instrument, such as reflections off metallic surfaces, introduce polarization that you didn't know about or didn't expect. I'll be talking a little bit about some of these. There's an easy fix, and that is a polarization calibrator that is outside your instrument, and there happens to be one. And finally, down in the realm of dirt and nastiness that gets in the way of clean detection of the B modes are various mismatch effects which, among other things, can mix the E and B modes in your instrument, which, of course, you don't want. Okay, so 1/f noise and glitches. Anything that builds a long time scale into your detector can cause problems. I've already talked about the 1/f noise. 
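The point about comparative measurements beating slow drifts can be shown in a toy simulation. Everything here is made up: a sinusoidal instrumental drift stands in for the 1/f noise, the "sky" is a constant half micro-kelvin, and the switching is an idealized chop between the sky and a stable reference on alternate samples.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000
t = np.arange(n) * 0.01                        # 100 s of data at 100 Hz

drift = 50.0 * np.sin(2 * np.pi * t / 60.0)    # slow instrumental drift (uK)
white = rng.normal(0.0, 1.0, n)                # 1 uK white detector noise
sky = 0.5                                      # constant sky signal (uK)

# Staring at the sky: the drift completely dominates the timestream.
raw = sky + drift + white

# Fast switching: look at the sky on even samples, at a stable reference
# on odd samples, through the same drifting instrument, then difference
# adjacent samples. The drift barely changes in 10 ms, so it cancels.
ref = 0.0
chopped = np.where(np.arange(n) % 2 == 0, sky, ref) + drift + white
diff = chopped[0::2] - chopped[1::2]

print(raw.std())    # tens of uK: set by the drift
print(diff.mean())  # close to the 0.5 uK sky signal
```

The comparative measurement recovers the half-micro-kelvin signal underneath a drift a hundred times larger, which is the whole logic of chopping, scanning, and modulation in these experiments.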
Here is the response of a bolometer measured in time, not frequency, but time. And what you can see here is that if something delivers a bundle of energy, let's say a cosmic ray strike, to a bolometer, it takes of the order of a second or so to recover. Consequently, if you happen to have cosmic ray strikes every second, your bolometer is not doing you much good. And that has turned out to be a substantial problem for the High Frequency Instrument aboard the Planck satellite. There are enough cosmic ray hits that you lose data. And here's an example of some raw data from the HFI instrument. The black points are the actual measurements. The red are fits to remove this glitch pattern from the data. And down here, this little block diagram shows you the chunks of data that have to be eliminated because you can't get a good fit. So you can see that a substantial chunk of the data is being lost in this particular stretch of time, about half. Overall, the data loss is about 15% because of cosmic ray hits. And that's using those spider-web bolometers. If we'd used a solid surface, the cosmic ray hits would be much more frequent, a much bigger problem. Now, instrumental polarization. You all know from your electricity and magnetism class that if you have an unpolarized wave bouncing off a metallic surface, it can become polarized. So any reflecting element in your device can introduce polarization. So can imperfections in whatever device you use to restrict the response to one polarization. Imagine, for instance, the simplest case of a waveguide with a rectangular cross-section. It takes one polarization and not the other. If that is imperfect, you can introduce an instrumental polarization. I won't bother with those much because we do have a way to get around it. And that is to have out there in space an entity, a source, which is itself intrinsically polarized. And the best example of that is the Crab Nebula. 
What I show here is a map of the Crab Nebula made at 90 gigahertz, with the little rods indicating the polarization. So there's a source with substantial signal that's linearly polarized in the sky. We know the direction of polarization. We know pretty well the amplitude of polarization. And you can use that to check whether your instrument is responding correctly and to correct for the polarization introduced by reflection and so on. It's not perfect. First, there is some frequency dependence in the polarization. This map was made at 90 gigahertz. Is the polarization as strong at, let's say, 200 gigahertz or 50 gigahertz as it is at 90? We think we know the answer to that reasonably well. The other is a question of resolution. You'll notice that there's some change in both the amplitude and the direction of the polarization vectors across this map. Planck has a huge beam, so it sees the whole thing. That's not a particular problem. But if your instrument has a tighter beam, so higher resolution, and you're only seeing some fraction of this source, the polarization directions will be a little bit different. And again, to the theorists, you may be getting bored with this, but I'm emphasizing all the experimental difficulties faced by the people who are trying to measure B modes. So when you read in an abstract, "we have measured B modes at r equals 0.23," all of this has to be taken into account before that number is derived. Here's an example. This is from the Low Frequency Instrument on Planck. We need to know what the angle of polarization is in an absolute sense. And we do that by measuring the Crab Nebula. But we can't do it with absolute precision. For the roughly dozen different detectors of the LFI, the errors in that measurement range from maybe a degree up to 10 degrees or so. In other words, we simply don't know the angle of polarization of our detectors exactly. What effect does that have on the detection of the B modes and E modes? That's answered by simulation. 
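Why does an angle error of a few degrees matter? A global miscalibration of the polarization angle by an angle psi rotates the measured (Q, U) by 2 psi, and the standard small-rotation result is that a fraction sin^2(2 psi) of the E-mode power leaks into B. A minimal sketch of that scaling:

```python
import math

def spurious_bb_fraction(angle_error_deg):
    """Fraction of E-mode power leaking into B for a global
    polarization-angle miscalibration psi: sin^2(2*psi)."""
    psi = math.radians(angle_error_deg)
    return math.sin(2.0 * psi) ** 2

# The LFI-like range of angle errors quoted in the talk:
for err in (1.0, 5.0, 10.0):
    print(err, spurious_bb_fraction(err))
```

Since the E-mode power is orders of magnitude larger than the B-mode power for small r, even the roughly 0.1% leakage from a 1-degree error is not automatically negligible, and a 10-degree error leaks over 10% of the E power, which is exactly why the simulated B-mode error curves in the next slide approach the signal.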
Here is the E-mode signal, which you recall is predicted exactly. And underneath it are estimates, simulations, of the effect of this uncertainty on our ability to measure the E modes. And you'll notice that it's a couple of orders of magnitude below the E-mode signal. In other words, for the detection of E modes, the fact that we don't know the instrumental polarization exactly is not a problem. But now let's look at the B modes. Here we've got only one order of magnitude. And you'll notice that as we go to higher L, smaller angular scale, the imprecision in our knowledge of the instrumental polarization becomes as large as the B-mode signal. I suspect that this B-mode amplitude is drawn for maybe r = 0.1 or something of the sort. So just this one feature, the fact that the instrument polarization is not perfectly known, can cause you problems in measurement of the B modes. Now the so-called mismatch effects. First, beam ellipticity. If the beam on the sky, formed by the primary reflector or the horn, whatever you're using, is intrinsically elliptical, you're mixing different angular scales. You're mixing the angular scale associated with the small axis of the ellipse with the angular scale associated with the large axis of the ellipse. You could reasonably ask, why design a piece of equipment that has elliptical beams? Why don't you design a piece of equipment that has circular beams? Then you don't have this problem. The problem there is the following. If you need a thousand detectors in order to get the sensitivity, not all thousand of the detectors can be on the optical axis of your instrument. Some of them have to be off axis. And indeed, that is a substantial effect in the Planck focal plane. Particularly for the LFI, the low-frequency instruments: they're far removed from the optical axis. If a detector is far removed from the optical axis, it's in effect not seeing a circularly symmetric primary. 
So just by diffraction theory, you can tell that the beam is going to be elliptical. And they are. Here's an example. Is it recorded what frequency this is? Oh, it's 70 gigahertz. That's not a circle. So we're mixing different Ls. And you can see that if you mix different Ls, you're going to be mixing up the signal here. And that's a problem with temperature measurements as well as polarization measurements, but it's worse for polarization. It can be fixed. If you know what your beams look like, and this is a map of the beam shown in false color, you can in a sense resymmetrize the signal. So it can be corrected. Then there's a different kind of mismatch that requires a little bit of explanation. For the polarized detectors on Planck and many other ground-based experiments, there are basically two detectors that are fed by the same optical beam. One is polarized in this direction, the other orthogonal or at 45 degrees. So there are two different polarizations being measured by the same instrument, the same optical path. All right. Suppose that as a function of frequency the sensitivity of those two detectors differs slightly. Here is one. I'll draw its response as a function of frequency. We think we're seeing a narrow chunk of the frequency spectrum. Now let me draw the other one in the same pair using dashed lines. It might look like that. Then you look at the Crab Nebula, which has a particular spectral behavior, to check the relative polarizations of these two detectors. Fine. But then you go to the cosmic microwave background, which has a different spectrum. You calibrate using a spectrum like this. Then you go to the CMB, which has a spectrum running up like that. Pretty clearly, your polarization calibration was appropriate for the Crab, but it's not appropriate for the CMB spectrum. This is called band mismatch. 
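The band mismatch effect is easy to demonstrate numerically. In this sketch the two Gaussian bandpasses, the band centers and widths, and the spectral indices are all invented for illustration; the Crab-like source is a falling power law and the CMB stand-in is a rising one.

```python
import numpy as np

nu = np.linspace(80.0, 120.0, 4001)   # frequency grid in GHz, a "100 GHz" channel

# Hypothetical bandpasses for the two detectors in a pair: same nominal
# band, slightly different centers and widths (the band mismatch).
def gaussian_band(center, width):
    return np.exp(-0.5 * ((nu - center) / width) ** 2)

band_a = gaussian_band(100.0, 8.0)
band_b = gaussian_band(101.5, 7.5)

def band_average(band, spectrum):
    """Response of a detector with this bandpass to this spectrum."""
    return (band * spectrum).sum() / band.sum()

crab = nu ** -0.3   # falling power law, roughly Crab-like (illustrative)
cmb = nu ** 2.0     # rising spectrum standing in for the CMB across the band

# The relative gain of the pair depends on which source illuminates it:
ratio_crab = band_average(band_a, crab) / band_average(band_b, crab)
ratio_cmb = band_average(band_a, cmb) / band_average(band_b, cmb)
print(ratio_crab, ratio_cmb)
```

The two ratios differ at the percent level: a pair calibrated against the Crab is miscalibrated by that amount when it looks at the CMB, and any miscalibration between the two detectors of a pair shows up directly as spurious polarization.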
Worse than that, in addition to this, the two beams of the two polarizations may not be seeing exactly the same chunk of the sky. So all of those things have to be modeled and dealt with. Again, can you see why it is that it's going to be difficult to look for parity violations? First we've got to find the damn B modes. In addition to all these problems, there's another that I should mention, and that is the leakage of temperature into polarization. I think I had a question about that earlier. If the polarization of your instrument is not perfect, then temperature fluctuations can leak into the polarization. Here's an example. It's not the only one. Go back to this diagram here. If you've calibrated your instrument using the Crab Nebula, but you're looking at a source with a different spectrum, and these two bands do not agree, you'll be getting some leakage from the CMB temperature into polarization. Again, if you know the properties of your instrument, you can correct for this. But here, again, is an example of how this particular effect produces an error power spectrum. This is simply a measurement of the error in the Planck HFI instruments at three different frequencies. What we've got here, compared to a BB signal, and I think again this is for r = 0.1, is the error introduced by this single instrumental uncertainty or imperfection, namely the leakage of temperature into polarization. At 217 GHz, it's way below the B-mode signal, but at 100 GHz, you're looking at a contamination that is of order the amplitude of the B-mode signal at high L, and still 10% of it at low L. So this temperature-to-polarization leakage can be a serious problem. Again, correctable to some degree. I've already covered that enough. I'll mention a couple of problems that affect ground-based experiments in particular. 
One, I'll start from the bottom, I've already mentioned, and that is that radiation from the ground, in this case, this is BICEP at the South Pole, and the ground is about 250 kelvin, can diffract into this horn and cause problems; hence the BICEP team, like the ACT team, uses ground shields around the optical elements. The second is the atmosphere. The atmosphere is intrinsically fluctuating. There's enough emission from water vapor and other components in the Earth's atmosphere to swamp the CMB signal at these frequencies. So typical fluctuations, as you can see here, are of the order of many milli-kelvin, and we're trying to make micro-kelvin-type observations. So we've got in some way to get rid of the atmosphere, and there are a variety of ways of doing that. In addition, sidelobe pickup from the ground can introduce unwanted polarization. So you can get a polarized signal coming from sidelobe pickup. How do we get rid of these various effects? Let's talk about eliminating some of them. How far down can we push them? First, in terms of designing your instrument, there are a few obvious things. One is to make the instrument sensitive enough, and that means cooling your detectors to very low temperatures; we're now down at the order of a few tenths of a kelvin. So all of the experiments whose results I'll be talking about have a cryogenic element. You use filters to reduce unwanted signals. You use ground screens and so on. Perhaps less obvious techniques for reducing some of the many signals that I've been talking about include symmetrical design. You would like, for instance, to have all of your detectors right on the optical axis, but you can't do that. You do the best you can. You want to minimize the number of reflections. And then there's one other that's rather nice. If you're looking for polarized signals, it's nice to be able to do the following. Take the sky, which has, let's say, a B-mode or an E-mode signal in it. 
Take the sky and rotate it by some angle. Now you can't actually do that, but you can rotate the angle of your instrument so that you can observe the same chunk of sky with your polarized detectors running this way, and a little bit later with the polarized detectors running that way. And here's an example of how that's done. This is the ACT telescope. There's a chunk of the sky, a strip of the sky, a few degrees in size, and shown here in these yellow lines are the scans made by the instrument. The ACT telescope is built to scan like this. As a piece of the sky is rising up through the beam of the telescope, it's cut at a certain angle. When that same piece of sky comes back five, six, seven hours later, it's cut at a different angle. Okay? If on this scan and on this scan you get the same polarization, you know it's in the sky. Here's a question for you. Why is it necessary to make the switching back and forth, as I've drawn it here, only in azimuth for the ACT telescope? Why can't we switch like this? Somebody answer. This is my way of making sure at least some of the audience is awake. Volunteer? Why switch like this? Yes. Say it loudly. You have the same atmosphere. Well, not quite the same atmosphere, but think of the atmosphere as a thin shell going around the Earth, okay? If you were to switch in elevation, you would be looking through less of the atmosphere as you look higher in the sky. So you switch in azimuth. Good. Okay, now another method to reduce instrumental errors, the kind of thing that I've been talking about all along here. You can call it redundancy or switching or modulation. And typically this is done with many different layers, different ways of making the measurement a comparative one rather than trying to measure an absolute signal. For instance, the easiest example is the LFI radiometers on Planck. You have a detecting device here, which I'll just draw as a square box; attached to it is a horn which receives radiation coming in from the sky, the CMB. 
And you also have a secondary horn which looks at a perfectly absorptive and emissive surface at a known temperature. And what happens is you simply switch back and forth between these two inputs very rapidly, at kilohertz-type frequencies. So you're constantly comparing the temperature from the sky with the temperature from a known source. The instrument can vary in time. This reference temperature can vary in time, but it's unlikely to vary in time faster than a thousandth of a second. So rapid switching. In the case of the WMAP experiment, the previous satellite that did such beautiful work, the comparison was done in a different way. Look at this part of the sky, look at that part of the sky, back and forth like this, and make comparative measurements. So you're always comparing two chunks of the sky. Yes, the instrument may be varying slightly in temperature, but that subtracts out in these differential measurements. How well does that work? Here's an example. This is the noise as a function of frequency from an experiment called the Atacama B-mode Search. Not BICEP, but a different experiment. Before and after subtraction, basically. So you have a large amount of noise. You get rid of most of it by simply taking the difference. If you've got a bunch of detectors, you can play another trick too. If you've got a bunch of detectors looking up at the sky, they will all see the same atmosphere at the same time. Right? So what you can do is subtract the average of all the detectors as a function of time; this is called common mode subtraction. In other words, there are ways of getting rid of the noise by averaging and by comparing. Okay. A new trick that's being used now by a couple of instruments is, in addition to everything else, having a rapidly rotating polarizer plate in front of the device that receives the radiation. So in this cartoon, something running across here that modulates the polarization angle at a high speed. 
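Common mode subtraction can be sketched in a few lines. This is a deliberately simplified toy: the "atmosphere" is a random walk seen identically by every detector, each detector adds its own white noise, and there is no sky signal at all. In a real pipeline you must be careful that the subtraction does not also remove sky signal, which it would if the sky looked the same to every detector.

```python
import numpy as np

rng = np.random.default_rng(1)
n_det, n_samp = 100, 2000

# Every detector sees the same slow atmospheric drift at the same time...
atmosphere = 50.0 * np.cumsum(rng.normal(size=n_samp)) / np.sqrt(n_samp)
# ...plus its own independent white noise (1 uK rms).
noise = rng.normal(0.0, 1.0, size=(n_det, n_samp))
data = atmosphere[None, :] + noise

# Common mode subtraction: remove the per-sample mean over all detectors.
cleaned = data - data.mean(axis=0, keepdims=True)

print(data[0].std())     # dominated by the atmospheric drift
print(cleaned[0].std())  # back down near the 1 uK detector noise
```

Because the atmospheric term is identical across the array, the per-sample average estimates it almost perfectly, and subtracting it leaves each detector with essentially its own white noise.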
So you don't have to compare polarization measurements made when the source is rising to polarization measurements made when the source is setting. You can make comparative measurements in a fraction of a second with a rapidly rotating polarizer. And again, the purpose of this is to get rid of long-term drifts and changes in your instrument of the various kinds that I've been talking about. Another trick, used by the BICEP people, is simply to take the whole apparatus and rotate it 90 degrees every once in a while. Any instrumental effects will appear to shift by 90 degrees. So all of these tricks are used in order to get around the various problems that I've been introducing. Here is the rotating polarizer that's placed on the Atacama Cosmology Telescope. Finally, you can use null tests, and they come in a variety of forms. First, if you've got a bunch of data, you can split it up in various ways. For instance, I'll talk about the Planck satellite. The Planck satellite simply looks at the sky and scans essentially a great circle many times. And then it adjusts slightly and scans another great circle. If you're scanning a circle many times, let's say 20 times, you can take the little map made from 10 of those scans and compare it to the little map made from the next 10 scans. Half-ring sets. If you subtract them, any real signal should go away, right? Because they're looking at the same part of the sky. And what you're left with is noise. So you can make what are called half-ring null tests. If the signal doesn't go away, you've got a problem. You can also compare survey by survey. Planck scanned the whole sky every six months. So six months after you did a particular ring, covering a particular part of the sky, Planck is covering it again. You subtract the two maps. Okay? Does it work? So there are a variety of ways of checking to see that what's supposed to be a null signal is in fact a null signal. 
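The logic of the half-ring null test is simple enough to write down directly. In this toy sketch the sky, the noise levels, and the pixel count are all invented; the point is only that the half-sum keeps the sky while the half-difference cancels it exactly and leaves pure noise.

```python
import numpy as np

rng = np.random.default_rng(2)
npix = 10000

sky = rng.normal(0.0, 100.0, npix)      # a fixed sky (uK), same in both halves
noise1 = rng.normal(0.0, 3.0, npix)     # noise in the first 10 scans of a ring
noise2 = rng.normal(0.0, 3.0, npix)     # noise in the next 10 scans

map1 = sky + noise1                     # half-ring map A
map2 = sky + noise2                     # half-ring map B

summed = 0.5 * (map1 + map2)            # the signal map
null = 0.5 * (map1 - map2)              # sky cancels: pure noise

print(summed.std())   # ~100 uK, dominated by the sky
print(null.std())     # ~2.1 uK, i.e. noise reduced by sqrt(2), no sky
```

If the null map shows anything beyond the expected noise level, something in the instrument changed between the two halves, which is exactly the failure mode the Planck example on the next slide illustrates.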
And here's an example of a Planck result that failed. What we've got here, in solid lines, not very well reproduced, is the TT signal, and up here, the EE signal. In red are the differences between the two halves of the ring, the same scan made minutes later. In black, I think I've got this right, are the survey differences. We came back six months later; do we get the same thing? In the case of the T modes, the answer is yes, they agree very nicely. There's a little noise, but basically you see the same thing when you come back six months later. In the case of the polarization, for this detector, we don't. We don't know why, but the fact that it behaves differently tells us that something is wrong. This is an example of using a null test to constrain your data or to get rid of bad data. In the Planck papers that are coming out now, this detector at 70 GHz was not used for just this reason. It failed this experimental test. Yes, probably L. Let's see, can we go back one? Yes, L. Sorry, it got wiped out. That is an interesting point. You'll notice that where this test failed was at low L, at big angles. Experimentally, those are often the most difficult, the large angles. There's some contamination that we don't understand, some instrumental effect we don't understand. The prudent thing is to drop the data from this particular detector, which we did. Here's another example of a null test. In this case, they're making null maps. This is the BICEP data, as it happens, that I'll be talking about on Friday. Here is what you get if you measure the temperatures and you take half the data and add it to the other half. Over here is what you get if you take half the data and subtract it from the other half. You ought to see a zero signal, or uniform signal, and in this false color picture, orange is null. You can do the same with one axis of polarization, Q, and the other axis of polarization, U. 
You can see here a characteristic pattern associated with the Q polarization and the U polarization; subtract them, and all you see is noise. So I show this as an example of a test that has worked well. When I come back and talk about the BICEP results: they were very careful to make this kind of null test. They're not seeing something systematic introduced by their instrument. This is a test of how well the instrument behaves. Here's an example from Planck. Again, map-based. Here's the sum of observations, the first 10 scans of a ring plus the next 10 scans of the same ring, and down here is exactly the same thing except with a difference. Notice the change in scale from 500 to 5 micro-kelvin. This is noise, slightly lower here near the ecliptic poles where the satellite scans more frequently. But it's pure noise here, nothing systematic, while up here you can clearly see a big signal, which happens to be the galaxy, which is the subject of the lecture tomorrow. Okay, more examples of null signals. Same thing here. This is BICEP2, shown in a different way. This is the TB signal, which I asked you to convince yourself should be null, and the EB signal. Here are the observed signals. And here are the results you get by dividing the data up, okay? Both of these sets should be null. This one by definition, because you've divided the data up; this one, is it null or not? Is there a difference between what you see here and what you see here? The error bars are slightly different, but there's no indication of something systematic in the BICEP polarization data. In other words, we're not seeing an E cross B signal and we're not seeing a T cross B signal. This is the next-to-the-last slide. Here's an example of the analysis of systematic effects, all the instrumental systematics that we could think of, for a particular Planck frequency, for T, E, and B. The signal we seek is the heavy dark black line. Temperature; E-mode polarization, known exactly; B-mode, drawn here again for r = 0.1, okay? 
And then all these colored lines are various systematics. None of them causes any problem with temperature. They are all many orders of magnitude below; this happens to be the system noise, okay? E modes: here some of these systematics are comparable in size to the E-mode fluctuations, and the E-mode fluctuations are below the system noise. It's not a problem because you can integrate, but system noise is beginning to be a problem. And here, for r = 0.1, are the B-mode signals, and you'll notice that several of the systematics I've been describing, whatever is orange and brown and purple, I think mostly calibration problems, are comparable to the B-mode signal we're trying to see. So with the E modes we're okay; with the B modes we're still fighting systematic problems. A few useful references. I'll put this back up in just a moment for you to look at. Unlike the case for gravitational lensing or the 21-centimeter observations that you heard about earlier, there's no single reference that does the dirty work that I've been doing today. That's because most people aren't interested in it. That may include many of you, but here we are. A sort of general paper on CMB research, including instrumental problems, is a task force report; Jamie Bock was the lead author. Many of you may be familiar with Wayne Hu's sort of educational website, and there's an out-of-date book that I wrote some 10 years ago, something like that, that goes into some of the details. A bunch of the Planck papers, and here's the reference for the BICEP paper. I'll come back to this and leave it up when it's time for coffee, but I want to end on the following slide. Theorists, when you read an experimental B-mode paper, do the following. It's hard, but read the instruments and systematics papers. I'll use Planck as an example. 
All of you who are interested in the CMB will go off and read the paper on cosmological parameters, because you all want to know what H-naught is: 67.3. You all want to know what the baryon density is: 0.0224, and so on. Please also look at the papers on instrument and systematics and get a sense of how the various effects I've been describing influence things. Look, as you read, for many layers of modulation, various kinds of switching, and the use of null tests or other means to get rid of, by subtraction, the various kinds of systematics I've been describing. How well do the authors know their polarization calibration? Look carefully at the null tests. If the paper doesn't publish their E cross B and their T cross B spectra, be suspicious. Those ought to be published and they ought to be null, excepting, of course, the possibility of a real cosmic signal. And look for the modeling or simulation of the residual effects, like that slide I just showed you for Planck. In other words, get the authors of the paper to tell you what their instrument is doing to things. Okay, I'll go back and simply leave this up, and I will stay here for the first part of coffee to answer all the questions. Thank you.