Okay, so welcome, is it on? Yes? Okay, so welcome everyone. It's really a pleasure to welcome today Professor Michael Peskin from the SLAC National Accelerator Laboratory at Stanford. He's giving a colloquium today; the title is "The Quest for 30 TeV: the next milestone in elementary particle physics." Michael is of course a theoretical elementary particle physicist who received his PhD from Cornell University in 1978. He held postdoctoral appointments at Harvard and Cornell, and then he joined the faculty at SLAC in 1982. A theme that runs through much of his work is the use of high-precision measurements at the highest-energy lepton and hadron colliders to search for new interactions beyond the particle physics standard model, and this is actually the topic of today's talk: how we can go beyond the standard model using next-generation machines, beyond what is currently available at the LHC. So let's welcome Professor Peskin for his colloquium. Thank you very much. Is this on now? Is that better? Okay, very good. So thank you very much for coming, and I hope you find this talk interesting. It's one that, I must say, I don't really have the expertise to give, but I thought it was important to put it together. It's a subject about which there's a lot of uncertainty. So in this talk there will be a speculative part, and then there'll be an even more speculative part, and then we'll come to the end. The first thing I wanted to do is to set the stage by telling you a little about the way I see the current situation in elementary particle physics. I consider it a mixture of really good news and confusion. The good news is we have something called the standard model of particle physics.
It's a non-abelian gauge theory, a Yang-Mills theory, which is to say a theory where the basic forces come from the exchange of spin-one particles. And those theories are very constrained. They're controlled by a symmetry group, and we know what that symmetry group is: it's the group SU(3) × SU(2) × U(1). The theory is a straightforward generalization of Maxwell's equations; the only difference is you have more than one photon. There's the gluon, the photon, the W bosons, and the Z, and they interact with each other in a way which is given by the group structure. And that theory describes, with increasingly precise tests, all of the interactions that particles obey at the nuclear and subnuclear scale that we've discovered so far: the strong, the weak, and the electromagnetic interactions. So let me just give you a couple of examples of the successes of this theory that I find particularly impressive. The W+, W-, and Z are the bosons that mediate the weak interactions. In particular, the Z is a neutral particle, so you can produce it in the collision of electrons and positrons. It appears there as a resonance, and you can measure the resonance line shape very precisely. Those of you who do atomic physics know that resonance line shapes can be subtle; they can include a lot of interesting effects. Actually, the resonance line shape of the Z includes imprints from all three of the interactions: strong, weak, and electromagnetic. What I'm showing you is the measurement by the OPAL experiment at CERN, done in the early 1990s, of the resonance shape, and in a black line, the theoretical prediction. The data is given by these red dots; if you really squint, you can see the error bars on those measurements. And it's in perfect agreement with the theory, once one has accounted, to a sufficient level of calculation, for all the subtle electromagnetic effects that go into this observable. Here's another example, this one more modern, from the LHC.
So what you see here are the cross sections for a very large number of processes in proton-proton collisions. If you have sharp eyes, you can read across the bottom the production cross sections for jets, for photons, for single Ws, for pairs of Ws, sometimes for triples of gauge bosons. As you go down a column, it's the production with one, two, three, in some cases four gluons. The dots are the experimental measurements and the gray lines are the theoretical predictions. And this is all without extra parameters; this is just the standard model. One can, starting from a parametrized model of the proton structure, put that in the front end and ask for all these cross sections to come out the back end. And indeed, the patterns are reproduced and the numbers are reproduced within the experimental errors. It's really a very impressive success. On the other hand, the standard model has some difficulties. They're not difficulties in the sense that it predicts things that are in contradiction with experiment; they're conceptual difficulties. There are questions that we would like to answer about nature that the standard model is just absolutely incapable of answering. And so how do we get beyond where we are? How do we fill in the missing ingredients? Probably the most puzzling aspect of the standard model is the fact that it relies on spontaneous symmetry breaking. Whereas the gauge symmetry SU(3) × SU(2) × U(1) seems to be an exact symmetry of the equations of motion, it's not respected by the ground state of the theory. And that has important consequences for how the theory actually looks experimentally, and some of those things are written here. If the world were really SU(2) × U(1) symmetric, we would have zero mass in this theory for all the quarks and leptons. We would have zero mass for all the spin-one bosons, including the Z, which I just showed you has a mass of about 90 GeV.
There'd be no difference in behavior between the photon, the W, and the Z boson. Is this misbehaving? No, there we go. So I wanted to give you a little explanation of why, in this theory, the fermions, the quarks and leptons, have no mass. And actually it's an easy thing to see. One characteristic of the weak interactions, which is very well established, really the starting point for our modern theory of the weak interactions, is the idea of parity violation: the fact that the weak interactions don't look the same in a mirror and in reality. The way that's manifested is actually, you might say, a maximal amount of parity violation. When an electron is ejected in beta decay at relativistic velocity, it's always spinning in the left-handed sense and never in the right-handed sense. And in the equations, the left- and right-handed electrons correspond to independent fields, one of which has the SU(2) quantum numbers and the other of which has zero SU(2) quantum numbers. So that sounds strange, but it doesn't seem like there's anything wrong with it. But there is something which is terribly wrong with it, which is shown on this slide. Let's start with a left-handed electron over here, and we watch it move and spin with that orientation. However, the world is relativistically invariant, so we can run fast enough, or maybe I can't run that fast, but one of you can, to realize a situation like this where the electron is at rest. Now it's in an odd state: it's sitting there, and of course it's still spinning in the same direction; that's the consequence of the Lorentz transformation. And if your friend could run even faster, he'd see the electron looking like this, but now it would be a right-handed electron. In fact, you can't have a mass term consistent with Lorentz invariance unless the left- and right-handed particles are connected by Lorentz transformations, that is, mixed with each other quantum mechanically.
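To make that statement concrete, here is the one-line textbook version of the argument (this equation is standard material, not on the speaker's slide): a Dirac mass term necessarily couples the two chiralities,

```latex
% A mass term ties the left- and right-handed fields together:
\mathcal{L}_{\rm mass} \;=\; -\,m\left(\bar{\psi}_L \psi_R + \bar{\psi}_R \psi_L\right)
```

Since the left-handed field carries SU(2) quantum numbers and the right-handed field does not, this term by itself is forbidden by the gauge symmetry; in the standard model it can arise only from a Yukawa coupling to the Higgs field, which becomes a mass once the Higgs acquires its vacuum value, exactly as the talk goes on to describe.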
But in the standard model, this guy has non-zero SU(2) quantum numbers and this guy has zero SU(2) quantum numbers. So something has to form the bridge, and what forms the bridge is the breaking of this symmetry. Something has to sit down and pick a direction in the SU(2) group that allows this state to mix with that state quantum mechanically. And the way we imagine that within the standard model is by using the thing called the Higgs field. It's an extra scalar field that we add to the model. For that field, we have a potential energy which has multiple vacuum states; to minimize the energy, you fall down into one of these, but each one is a separate reality. And every time the field falls down in some particular direction, that breaks the rotational symmetry and then allows all the laws that I showed on the previous slide to be violated in just the way they're violated in nature. Now, in the standard model we postulate this Higgs field and we assume a potential of this form, but we really don't know where it comes from or why the potential has the form that we described. Maybe this clicker is not being very reliable. Can I just push something here? There we go, okay. I just want to note that this Higgs field, we really know it exists. If you quantize the Higgs field, you get a Higgs particle, and that particle was discovered at the LHC. Here's a beautiful example of a Higgs decay to two muons and two electrons. These are all charged particles, so you can measure them and add up the invariant mass. These are the muons here, little stubs here, but you can see in the other view the electrons as charged particles making electromagnetic depositions in the detector. You can add up the total mass of that system, and in this event it comes out at about the right value. And in fact, you can collect events and see that they peak at about 125 GeV, indicating the presence of a resonance, which is the Higgs boson.
We've actually learned a lot about this particle at the LHC. If the Higgs boson is really essential to making the mass of all the quarks and leptons, then what you should find is a regularity where the coupling of the Higgs boson to a given species of quark or lepton is proportional to its mass. That would be the solid line on this diagram from the ATLAS collaboration. And as you see, the experimental data follows that line extremely well. The bottom panel blows up the error bars to show that there are maybe 10 or 20% experimental errors around this prediction, but it's really extremely well satisfied that the heavier the particle, the stronger its coupling to the Higgs boson. Now, spontaneous symmetry breaking is a very common phenomenon in nature, as far as we can see from the study of condensed matter physics. There's a very wide variety of systems in condensed matter physics that illustrate spontaneous symmetry breaking. I've listed a few of them here. Magnets, which have a preferred direction of the electron spins. Superconductors and superfluids, for which the superconducting medium is essentially a condensate of bosons that corresponds to a particular direction in the phase of the Schrödinger wave function. Crystals. Liquid crystals, illustrated here, where the molecules line up in a certain orientation. Binary alloys. There are many examples. What's fascinating about condensed matter physics is that in all of these examples, there's actually a physics explanation for why the symmetry is spontaneously broken. You have Hund's rule for magnetism, you have Cooper pairing for superconductivity, et cetera. There's a very beautiful story about each of those systems which explains why the ground state is asymmetric even though the basic equations are symmetric. But the problem with the Higgs field is that we don't have any explanation like that.
And in fact, no explanation seems to be possible within the confines of the standard model. In the standard model, we write some simple polynomial equation. We assume that this input parameter mu squared is negative. We then get a minimum displaced from the origin at phi equals zero, but we don't understand why. And in fact, the parameter mu squared is not computable within the standard model. People have been trying for decades, but let's just say all of the papers that claim to do that have been rejected by vigilant referees after errors have been found. Many physics explanations of the parameter mu squared being negative have been proposed, but they always involve new ingredients that are beyond the standard model. If we look at the known Higgs mass and turn that into a value of mu squared, we get a mass scale of about 100 GeV. And so naively, what you would expect is that there are some particles with masses of 100 or a few hundred GeV, and those interact with the Higgs field and create its potential through that interaction. And you can try to make that work within the standard model; there are particles with masses of a few hundred GeV. Actually, the most interesting one is the top quark, which has a mass of about 170 GeV. But if you take the top quark-Higgs interaction, which is now known from experiment to be right on that line that I showed you, and you compute, for example, this Feynman diagram, you find a funny result. It depends on the top quark coupling to the Higgs boson and some numerical factors you can easily obtain, and then it turns out that the diagram is ultraviolet divergent. It's infinite unless you assume some cutoff in the theory, and, not knowing what that cutoff is, it can be as large as you like. Interestingly, the answer is negative, although there are other, positive radiative corrections that you could call on to counterbalance it.
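The arithmetic behind that "mass scale of 100 GeV" is short. With the standard-model Higgs potential (the numbers below are filled in from the known Higgs mass of 125 GeV; the speaker quotes only the result):

```latex
\begin{aligned}
V(\varphi) &= \mu^2\,\varphi^\dagger\varphi + \lambda\,(\varphi^\dagger\varphi)^2,
\qquad \mu^2 < 0, \\[2pt]
\varphi^\dagger\varphi\big|_{\min} &= -\frac{\mu^2}{2\lambda} \equiv \frac{v^2}{2},
\qquad m_H^2 = 2\lambda v^2 = -2\mu^2, \\[2pt]
|\mu| &= \frac{m_H}{\sqrt{2}} \approx \frac{125\ \mathrm{GeV}}{\sqrt{2}}
\approx 88\ \mathrm{GeV},
\end{aligned}
```

which is the ~100 GeV scale quoted in the talk.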
Any counterbalancing within the standard model itself seems to be just a very arbitrary choice. Now, here are the ways that theorists over the years have tried to explain the spontaneous symmetry breaking. The first idea was something called technicolor, where you take the strong interactions and make a copy of them at an energy of one TeV. A TeV, which will come up a lot in this talk, is 1,000 GeV, or roughly 10 times the mass of the Higgs boson. You have new techniquarks, they form Cooper pairs, and there's a way that those particles can indeed, through this mechanism, provide an expectation value for the Higgs field. There's a concept called supersymmetry that you've probably heard a lot about in this room. The top quark has a spin-zero supersymmetric partner, and it turns out that if you include that in the calculation I referred to previously, it cancels the ultraviolet divergence, or at least the leading part of it, and what's left over is a negative contribution, which actually drives the symmetry breaking that we want. There's another idea called Higgs as a Goldstone boson, where you have new strong interactions, not relatively close to us at one TeV, but a little further away at about 10 TeV. There you form a multiplet of Goldstone bosons through the symmetry breaking; the whole Higgs multiplet is composed of those, and then through corrections you find a non-zero potential generated for these particles, which again often gives a finite negative value of mu squared. So this is really cool. There are lots of possibilities, and in all these cases new particles are predicted. So we can go and look for those new particles and hope to find them. And I think probably the height of the excitement for finding these particles occurred just before the turn-on of the LHC. But as you probably know, it's been very frustrating.
One has looked for these particles very intensely at the LHC, and they haven't appeared. Technicolor actually was particularly easy to exclude experimentally. It makes significant corrections already at lower energy, but the most important thing is that it requires the Higgs mass to be at least roughly 10 times the known mass of the Higgs boson. And so when that particle was discovered, this whole class of theories really had to be thrown in the wastebasket. The other two theories I've described are still hanging on, but both of them expect top quark partners with masses of a few hundred GeV. So let me just show you some slides from searches for these particles. This shows the excluded regions from searches for the top squark, the supersymmetric partner of the top quark, in various channels. Please notice that the exclusions go up to almost a TeV now, unless somehow there's some special situation where, for example, the top squark and the particles it decays into are almost totally degenerate. This region in orange is the region where the top squark has a mass approximately equal to the top quark's. A lot of special analyses were done to explore that region, but unfortunately, nothing is there. Here is the situation relevant to the case of Higgs as a Goldstone boson. Those models always have an extra top quark, a so-called vector-like particle. It has some definite signatures; it has a variety of different ways that it can decay. So what's done here is that there are three possible decay channels, and you scan over all possible branching ratios. And as you see, the limits on that particle go from about 1.3 to 1.4 TeV, depending upon how it decays, but there are always limits; that particle hasn't yet appeared.
If you believe the first thing that I told you about electroweak symmetry breaking, that there's a physical mechanism generated by new particles beyond the standard model, and you accept the second thing, that the simplest models predict particles that have now been excluded by the LHC, you want to look at more complicated scenarios. It becomes a question for theorists: how can I push the observable consequences of that new physics out of the range of the LHC? And recently there have been a lot of papers, at least interesting to particle theorists, that try to do this. There's something called Dirac gauginos. I've put forward a model called competing forces, which I must say doesn't have much to recommend it. There's an idea called neutral naturalness, in which these top partners actually have zero color quantum numbers; they have quantum numbers under some other groups. In all cases, we can still hope to see signals; there are still windows as we increase the data sample of the LHC. But in all cases, we don't get near the final answer. There's really been a big change, I think, in the way theoretical physicists approach the question of the final theory after these LHC experiments. Some time ago, before the LHC, you would see pictures like this. This is a very cool cartoon that I found in Japan, on some display posters in the room where they do the experiments on next-generation linear collider technology. It's hard to see, isn't it? Let me tell you what's there. This is the Z resonance, and here is the W boson and the top quark, and then supersymmetry is here, by its placement at a few hundred GeV. And then there's a guy on top of the supersymmetry mountain who is looking across the Grand Desert and discovering the land of unification and gravity very far away. And the idea is that by doing experiments at the energies that we know how to access with accelerators, we could actually learn everything.
And maybe a more sober version of that is given in the title of Steven Weinberg's book, Dreams of a Final Theory. He really believed that if we built, in this case, the Superconducting Super Collider, we would find all of the evidence, all of the particles and interactions that we would need to put the whole story together and have a complete picture of particle physics. And I think today there's probably nobody in the theory community who believes this anymore. We might find the first hint of what's beyond the standard model at the LHC; actually, there are other possibilities that are relatively near in time scale. But to really understand the nature of this new physics, it seems very unlikely that that's within our capabilities. We really have to go a lot farther. So I think the question of the next milestone is really coming up and becoming a very interesting question for particle theorists. And if you think about perturbative generation of the Higgs mass, the thing I was just talking about, one way to imagine this, the way it occurs in these Higgs-as-Goldstone models, is that you start with the Higgs potential; that is generated by some light new particles, which are in turn generated by new interactions. And from here to here, maybe there's a factor of the weak alpha divided by four pi; that's a factor of about one over 30. To get to the Higgs mass, you have another factor of one over 30. All these calculations, though, go by mass squared, so that's roughly a factor of 30 in mass from here to here. If you think also about flavor, another thing that I haven't talked about: as many people have noted, flavor in the standard model is very constrained. If you introduce new ingredients, they have the possibility of creating many new flavor-dependent interactions, for example the decay mu → e gamma, which is now anxiously searched for, or contributions to K-long/K-short and other mixing problems.
These are highly constrained, but you can escape the constraints by assuming that those new interactions are at a very high mass scale. And by putting together arguments like this, you would say that where we really want to go is somewhere in particle interactions not at one TeV, where the LHC gets us, but maybe a factor of 10 or more higher, at 10 to 100 TeV. For this talk, I'm just going to pick the number of 30 TeV, which I think is illustrative of this kind of viewpoint. So in the remainder of the talk, what I'd like to do is to discuss how we're going to get there. And let me just tell you that this is a totally non-trivial question. Always before in the history of particle physics, we've been able to do a little more, have one new idea, or maybe build something that's a factor of five bigger than the things that we've built before. But we're now in a situation where that's not possible. We obviously can't build something a factor of five bigger. The LHC, which was built at a cost of something like $5 billion, the way CERN accounts their projects, almost didn't get built. And the comparable project in the US, the Superconducting Super Collider, really didn't get built, because it was eventually deemed too expensive. We have to control the scale of these machines, but on the other hand, we have to figure out how to make them more powerful. And there, too, we're running into boundaries. We're at the boundaries of our current technologies, and we have to figure out new technologies that will get us farther. Right now there are ideas about how to do that, but they're still in development, some of them still very speculative. And that's what I'd like to tell you about now.
I'd like to discuss the prospects for proton-proton, muon, electron, and photon colliders that might eventually get us to this point of being able to access energies of tens of TeV, which might then be the right place to really understand the physics that's missing from the standard model. One thing that I should say before I begin is that a quantity called luminosity is just as important as having high energy. Luminosity is the brilliance of an accelerator. It comes in units of per centimeter squared per second, where a centimeter squared is a cross-section unit. The cross section tells you, if you bring two particles together, what's the probability that they interact; it's essentially the area that one presents to another for interaction. Those cross sections become extremely small, because we're interested in interactions that are taking place at very short distances. And per second gives you the events per unit cross section per second. You need a certain number of events to see anything, and so here are some estimates. Just to give you an idea, the best-performing colliders at the moment, the B factories and the LHC, have luminosities that are a few times 10 to the 34 per centimeter squared per second. The relevance of that will become clear in a moment. If I look at a very simple process in high-energy physics, electron-positron annihilation to a new species, the cross section for that process is governed by the parameter called R, and it has this value if you calculate it in quantum electrodynamics: 100 femtobarns, a very small unit of area, divided by the square of the center-of-mass energy in TeV. In other words, 10 to the five events per year for a luminosity of this scale, divided by the energy in TeV squared. If you go to 30 TeV, this becomes a factor of 10 to the minus three. And then you have extremely difficult requirements both on the luminosity and on the energy.
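The event-rate scaling just quoted can be sketched in a few lines. This is a back-of-envelope estimate, not anything from the speaker's slides: the luminosity and the "10^7-second year" are illustrative assumed values, and they land within an order of magnitude of the numbers quoted in the talk.

```python
# Back-of-envelope annihilation event rate, using the scaling from the talk:
# sigma ~ 100 fb / (E_cm in TeV)^2.  Assumed (illustrative) inputs: a luminosity
# of 1e34 cm^-2 s^-1 and an operational year of 1e7 seconds.

FB_TO_CM2 = 1e-39          # 1 femtobarn expressed in cm^2
SECONDS_PER_YEAR = 1e7     # typical accelerator operating year (assumption)

def events_per_year(e_cm_tev, luminosity_cm2_s=1e34):
    """Events per year for sigma = 100 fb / E^2 at the given luminosity."""
    sigma_cm2 = 100.0 / e_cm_tev**2 * FB_TO_CM2
    return sigma_cm2 * luminosity_cm2_s * SECONDS_PER_YEAR

print(events_per_year(1.0))    # ~1e4 events/year at 1 TeV with L = 1e34
print(events_per_year(30.0))   # ~11 events/year at 30 TeV
```

The second number is the whole point: at 30 TeV the rate drops by the quoted factor of about 10^-3, which is why luminosities of order 10^36 come up in the next paragraph.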
If you're at a proton collider, the counterpart of R is maybe a factor of 10 larger, because you can use the strong interactions to produce particles. But that doesn't get you out of this business. You still have to be at luminosities of the 10 to the 35 type, beyond what accelerators produce at the moment. And for electron or muon colliders, probably 10 to the 36 is the absolute minimum to find out what's really going on there. So these are things we have to keep in mind as we discuss accelerators. For each kind of accelerator, there'll be a slide like this, which begins with the unfortunate facts of life, and then we have to figure out how we can overcome them. For proton colliders, the unfortunate facts of life are, first of all, that the bending power of magnets is such that to keep protons at TeV energies bending in a circle, you need magnetic fields of many tesla, and if we're talking about tens of TeV, a radius of tens of kilometers. This is for the really active parts of the magnet lattice, and there are a lot of connections that you have to add. Just for reference, at the LHC, we circulate 6.5 TeV protons; we have eight-tesla bending fields, and the whole thing is in a 27-kilometer ring. So if you want to go much higher than that, either you need magnetic fields much greater than eight tesla, or a ring that's much larger than 27 kilometers, which is already kind of at the limit of what society lets us build. Secondly, the proton is not an elementary particle. The quarks and gluons inside the proton have energies which are a tenth to two tenths the energy of the proton. So you could have protons colliding at 100 TeV, but still not be at the 30 TeV scale that I've set as my goal here. And finally, there's something called pileup. The proton-proton total cross section remains roughly constant as you increase the energy, while the point cross section, the cross section for producing new particles, drops like one over E squared.
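The relation between beam energy, field, and ring size follows from the standard bending formula p[GeV/c] ≈ 0.3 B[T] ρ[m]. A quick sketch (the formula is textbook accelerator physics, not from the slides; it gives only the "active" dipole radius, so a real ring is longer once straight sections and other magnets are added):

```python
# Bending radius needed to hold a proton of momentum p (in TeV) on a circle
# in a dipole field B (in tesla), from p[GeV/c] = 0.3 * B[T] * rho[m].

def bending_radius_km(p_tev, b_tesla):
    """Radius of curvature in the dipoles, in km."""
    p_gev = 1000.0 * p_tev
    return p_gev / (0.3 * b_tesla) / 1000.0

print(bending_radius_km(6.5, 8.0))    # ~2.7 km: LHC beams (27 km ring overall)
print(bending_radius_km(50.0, 16.0))  # ~10.4 km: 100 TeV c.m. with 16 T magnets
```

The second line shows why the 100 TeV design discussed below needs both a roughly 100-kilometer ring and 16-tesla magnets.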
And so the backgrounds become more and more complicated as you go to higher energies. Just a few examples of these. Here is a slide from the recent Future Circular Collider physics report, which gives the reach in terms of supersymmetric particles like the gluino. The red is an exclusion limit; we're not interested in exclusion, only discovery. The discovery reach for relatively heavy gluinos goes up to about eight TeV; for gluino pair production, I think it's 10 TeV on another slide that I could have shown you, for a hundred-TeV collider. So there's a factor of five or 10 that you're missing, going from the composite proton level to the level of the things that are actually colliding to produce these events. Here is a picture that's posted on the ATLAS web pages. This is an amazing event. It's a little complicated, but not really out of bounds of what people actually observe today. The protons, when you collide them, collide in bunches that go through each other, and when you make the bunches sufficiently small and sufficiently intense, you have multiple proton collisions in the process of producing one interesting event. So here, the yellow lines are muons from a quark-antiquark annihilation to two muons at very high energy. And everything else is ignorable garbage from, basically, the big proton bags hitting each other, blowing up, and spilling out thousands of relatively low-energy and uninteresting particles. This is today, and, actually amazingly, the pattern recognition codes that are used in the ATLAS and CMS experiments at the LHC can deal with this level of pileup. At the high-luminosity LHC, it'll be roughly 10 times greater, and at the machines we're talking about, hundreds of times more complex than this picture. Here is probably the closest approach, though, to really designing a machine that can get close to this 30 TeV goal.
This is about a 10 to 20 TeV facility for producing quark-quark and quark-gluon collisions. It's called the Future Circular Collider, the hadron-hadron version, and if you look this up, you'll find long and interesting reports on the arXiv and the CERN website. It's a 100-kilometer ring with 16-tesla magnets, and we'll come back to that, which produces 100 TeV in the proton center of mass. The estimated cost is 24 billion Swiss francs; that's five times the cost of the LHC. So it might be that the hardest part of this is trying to get someone to pay for it. But that may be true of all the other things I'm going to show you too, so we shouldn't laugh too much. To keep the size within reasonable limits, and here we're really talking about 100 kilometers as a reasonable limit, you have to have higher magnetic fields. And this is also a big problem, because the current magnet technology, which is used at the LHC, really runs out at the LHC itself. In fact, the LHC was designed to have 10-tesla magnets, but industry couldn't produce them at a reasonable price, and so one scaled back to eight tesla, limiting the energy to 14 TeV instead of the original 20 TeV that was planned. To get beyond this, you have to actually change the superconductor that you use to make the magnets. There's a relatively established technology for magnets made from niobium-titanium, but to get to higher fields, you have to use a more brittle and harder-to-work material, niobium-tin (Nb3Sn). There, 12-tesla magnets have already been constructed, and 16 tesla is foreseen. Unfortunately, the current cost is five times the cost of comparable LHC magnets, so again, this becomes an issue. And what's really fascinating to me, if you take a really long-term view, is the question of high-Tc materials. For high-Tc materials, the current costs are really enormous with respect to LHC magnets, but you can get much further in magnetic field, and maybe really much further in terms of the physics reach.
Here's a complicated diagram from the U.S. National High Magnetic Field Laboratory. It's fun; if you have these slides offline, you can study it. This axis is the critical current; this is the magnetic field, and as you raise the magnetic field, the critical current goes down. It actually plummets. This is niobium-titanium, this blue line here. This is niobium-tin, and you see it again cuts out at a solenoidal magnetic field of 25 tesla. If you build accelerator magnets, you have to basically produce dipoles of these materials, and that cuts the field strength by about a factor of two. So something like this would be a 14-tesla field in an accelerator dipole magnet. What's very interesting are these curves that seem to go on forever, which have funny names like REBCO and YBCO; these are the high-temperature superconducting materials. For some reason, they support much larger critical currents, much more stable Abrikosov flux tubes, and so it's worth exploring them a little. Here's kind of a limit in terms of a solenoid; remember, there's this factor of two, so this would correspond to a 16-tesla accelerator dipole magnet. This magnet is actually operating at the U.S. National High Magnetic Field Laboratory: a 32-tesla superconducting solenoid. If you look at the structure of it here, this is all cryogenic; it runs at liquid-helium temperature, 4.2 kelvin. What's inside is this YBCO material, and then outside, successively, the niobium-tin and then the niobium-titanium as a kind of booster. And this 32 tesla was, at the time this was written, I think a year ago, a record. Here are some more recent studies there. And this tells you, I guess, a little more about these high-Tc materials. They're crazy. I remember when these came out in the 80s; they were very finicky to make. They're ceramic structures, so you can't bend them without breaking them unless you're extremely careful.
And the way they're produced, actually, is as tapes. So the yellow thing here is REBCO, which is rare-earth barium copper oxide, where the rare earth may be, for example, yttrium. And there's a certain stoichiometric ratio that you need to get into the regime of superconductivity. Those of you who are in condensed matter physics know all about this. But then you decorate it with various metals, which have the purpose, first of all, of allowing the thing to be more flexible. The high-Tc layer is only about a micron thick, and that's actually enough to carry the current that supports the big magnetic fields. The metal layers both make it more flexible and allow it to quench in a natural way, such that it doesn't completely disrupt its structure in the quench. And so there's a lot of research now going on in these materials. The question is whether in 20 years we'll be producing these industrially at low cost. We're very far from that now, but don't laugh, it could really happen. And then we'll be in business to make a really nice proton accelerator. One last point. The synchrotron radiation from protons in the LHC is small, but when you go to the FCC, the synchrotron radiation becomes sizable. It's actually 2.5 megawatts per beam over the accelerator. And so you need to design the magnets with special cooling channels like this one, called beam screens, that sit inside the magnet, by the way decreasing the aperture for the protons, and pick up that synchrotron radiation before it can drive the superconducting magnets normal. Synchrotron radiation increases like the energy to the fourth power. And so already the 100 kilometer, 100 TeV machine is very close to the limit that's possible for a circular proton collider. But maybe this step is the one that we need. Okay, muon colliders. The unfortunate facts of life here are that muons are unstable, with a lifetime of two microseconds at rest.
If you get them up to a GeV, you get 20 microseconds to do something with them before they disappear. If, of course, you get them to TeV energies, they live a long time, but you have to do a lot of things to them to get there. And concomitant with that is the observation that muons are difficult to produce in the small phase space needed for acceleration. So muons are not stable. You can't shovel them out of the ground. You have to produce them in an accelerator. It's hard to produce them in a small phase space, so you have to collect them, gather them, and eventually reduce their phase space. We would say cool them. And cooling the muons down, that's a problem. Now, muons are great for one thing, which is that compared to electrons, if you can get them accelerated, they're much easier to work with. An electron, when it goes around a ring, radiates synchrotron radiation. In the highest energy electron accelerator built so far, which is LEP2 at CERN, you had a 100 GeV electron beam circulating in a 27 kilometer tunnel and radiating roughly 2% of its energy every time it went around. So it's almost not worth having a synchrotron. Every 50 times it goes around, which is a small fraction of a second, you have to replace its entire energy. As you go to higher energies with the same radius, as I said before, synchrotron radiation goes up like E to the fourth, and this just becomes totally untenable. It's thought that by going to a 100 kilometer circumference, you can have electrons circulating with a center of mass energy of 350 GeV or somewhere around there, but that really seems to be the hard limit. For muons, synchrotron radiation is smaller by a factor of 10 to the minus four, so there's a lot more lever arm to play with. You can imagine that if you can put muons into a synchrotron, you can circulate them, accelerate them, and do experiments at tens of TeV.
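The lifetime numbers quoted here are just relativistic time dilation, tau = gamma * tau0 with gamma = E/m. A quick sketch in Python; the 5 TeV point is my own illustrative choice, not a number from the talk:

```python
# Muon lifetime grows with energy as tau = gamma * tau0, gamma = E / m_mu.
M_MU  = 0.1057   # muon mass in GeV
TAU_0 = 2.197e-6 # lifetime at rest in seconds (~2 microseconds)

def muon_lifetime(energy_gev):
    """Dilated muon lifetime in the lab frame, in seconds."""
    return (energy_gev / M_MU) * TAU_0

print(muon_lifetime(1.0))     # ~2.1e-5 s: the "20 microseconds" at 1 GeV
print(muon_lifetime(5000.0))  # ~0.1 s for a 5 TeV beam: many turns available
```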
The muon, of course, is an elementary lepton just like the electron, so it's a very interesting particle. Its interactions are the fundamental interactions. You don't have to take it apart into pieces like the proton. And actually, people have already thought about muons in the LHC tunnel at 14 TeV in the center of mass. Now the problem with muons is that, as I said before, they're unstable, so you have to produce them, and once you've produced them, you have to reduce their phase space so that you actually have a beam going in a certain direction with a relatively small divergence that you can put into an accelerator. One idea for that is called ionization cooling, and it's illustrated rather simply here. You have your muon going at a random angle. You basically have protons on a target, you produce muons all over phase space, you collect them, or a fraction of them, with some magnets, and they're going at random angles. Then you put them through an absorber, which shortens the momentum vector, as you see here. Then you accelerate them, which extends the momentum vector, but only in the z direction. And now you've actually reduced the phase space. You've turned something with a divergence like this into a beam of muons with a divergence like that, and if you did it a few tens more times, you could meet the criterion for injecting these muons into an accelerator. So the kind of plan that you would use is something like this. You start out with your muons from the target. This plot shows what's called the longitudinal and transverse emittance, a measurement of the phase space. You start up here, you collect the muons, you cool them by ionization cooling as I've described, and then you have something which has a very small emittance in the transverse direction, but it still has a large energy spread.
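The absorber-plus-RF step described above can be put into a toy model: the absorber shrinks the whole momentum vector, the RF cavity restores only the longitudinal part, so the divergence shrinks every pass. The 5% momentum loss per cell is an illustrative assumption, not a real design number:

```python
# Toy model of one ionization-cooling cell.  The absorber shrinks the
# whole momentum vector (transverse and longitudinal alike); the RF
# cavity then restores only the longitudinal component.
def cooling_cell(px, pz, keep=0.95):
    px *= keep          # absorber reduces the transverse momentum...
    pz *= keep          # ...and the longitudinal momentum equally
    pz /= keep          # RF re-acceleration acts along z only
    return px, pz

px, pz = 0.1, 1.0       # start with a 100 mrad divergence
for _ in range(45):     # "a few tens more times", as in the talk
    px, pz = cooling_cell(px, pz)
print(px / pz)          # divergence reduced by 0.95**45, roughly a factor 10
```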
Then people have the idea that if you put a muon through an absorber in a corkscrew orbit, you can transfer the phase space from transverse to longitudinal, and then you climb back up here, and then you have a sufficient condition to put something into a multi-TeV collider. I think on the next slide there's a diagram of this. Yeah, so a facility would look like this. A very intense proton source creates a huge number of muons here, you collect some fraction of them, you have these various stages of ionization cooling, and eventually you accelerate them to high energy. Ionization cooling turns out to be really difficult. It has now been demonstrated in an experiment called MICE at Oxford. What you see here: MICE had a relatively modest goal of making a 10% reduction in the muon phase space, and they've achieved that, unfortunately not for the whole beam, the system is rather lossy, but for the central part of the beam. You can see here that the difference between the input brown and the output green curves shows the effect of ionization cooling, and they're very happy that this agrees with their simulations. But this is a 10% reduction, and as you saw on the previous slide, you need a factor of 10 to the fifth or 10 to the sixth in overall phase space reduction in order for this to be a practical method of making an accelerator. Recently there's another idea, which is very cool, from the Frascati group, and that is to have a 45 GeV positron beam that collides with electrons at rest in a target, which would produce muon pairs very close to threshold. The energy dependence of the cross section has a big peak at threshold, so you produce a lot of muons. They're naturally in an extremely small phase space. Unfortunately, that peak is not very large, and so you don't really get enough muons from one collision to be able to run an accelerator at high luminosity.
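To put that gap in perspective: if MICE-style 10% reductions could simply be compounded (my simplifying idealization; real cooling channels do not scale this naively), the quoted 10^5 to 10^6 overall reduction would take on the order of a hundred such cells:

```python
import math

# Number of 10%-reduction cooling cells needed, assuming each cell
# multiplies the phase-space volume by 0.9 (an idealization).
def cells_needed(total_reduction, per_cell=0.9):
    return math.ceil(math.log(total_reduction) / math.log(1 / per_cell))

print(cells_needed(1e5))  # 110 cells for a factor of 10^5
print(cells_needed(1e6))  # 132 cells for a factor of 10^6
```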
Roughly you would need 100 production collisions, and then you would have to pile those muons on top of each other, hopefully without increasing their phase space, to produce a luminosity of, they say, 10 to the 34 at TeV energies, which would naturally be 10 to the 35 at 10 TeV. So this is the beginning of an idea. I think there's as yet no accelerator test of this method. Here's the sketch of a facility. Now muons also have another problem, which is that when you try to actually observe events with muons, you have to worry about the fact that muons are decaying as they go around the accelerator. Here, from this paper, is a diagram of the collision region, and these yellow things are tungsten shields that shield the detector from the decay products of the muons. Roughly, on every pass, 10 to the fifth muons are decaying and depositing their ionization products into the detector. You can shield that by about a factor of a thousand, and that gets you down to levels that people think you can begin to deal with. What's interesting is that there's a time structure to those backgrounds. There are large amounts of neutrons that come in rather slowly, and if you could have inside your detector precision timing elements, as well as what we think of in high energy physics as the usual detector elements, then maybe you can deal with that. So actually right now, for the LHC, people are developing what they call four-dimensional tracking, and eventually we'll also have four-dimensional calorimetry, picosecond time stamps on calorimeter hits, and hopefully that will allow you to deal with this problem. Now muons also have one more very unique hazard, and that is that the neutrinos that they emit are sufficiently many and sufficiently energetic that you have to worry about irradiating the population.
The dose in millisieverts per year is given by a formula like this, where R is the radius at which the muons are circulating underground; they're emitting neutrinos horizontally. Where those come out, there's a radiation dose from the neutrinos hitting the bodies of ordinary people. Legally that should be limited to one millisievert per year, and that's actually a challenge. For the LHC tunnel, it was estimated that you're basically at that limit at a luminosity of about 10 to the 35. So all these things have to be thought about, but there's an opening here for some interesting project with muon colliders reaching energies of about 10 times those we can reach today. Okay, finally, electron colliders. The unfortunate facts of life are the following. First of all, due to synchrotron radiation, as I've already explained, you can't have a circular collider. It has to be a linear collider, which is to say two linear accelerators pointed at each other, with electrons and positrons, let's say, coming down those accelerators and colliding, and once they collide, it's just garbage. You have to throw them away, and so you continually have to replenish the stream of energy that you're shooting from one end to the other. The second thing is that for linear colliders, the luminosity formula looks like this: the power in the beam divided by the cross-sectional area of the two beams that are colliding, and this is a constraint. I'll talk about this on the next slide. And finally, there's the problem that the electron-positron beam-beam interaction becomes extreme above 3 TeV. So let's talk about all these things briefly. For linear colliders, as I've said, this is the scaling law for luminosity as a function of the beam power and the cross-sectional area.
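Taking this scaling law at face value, L proportional to P_beam / (E_cm * A) with A the beam cross-sectional area (a rough form; real luminosity formulas have more structure), one can ask how much the collision spot must shrink going from an ILC-class machine to a 30 TeV, 10^36 collider at fixed beam power:

```python
import math

# L ~ P / (E * A): at fixed beam power, raising both the energy and
# the luminosity forces the beam cross-section A to shrink.
E1, L1 = 0.5, 1e34   # TeV and cm^-2 s^-1: ILC-class parameters
E2, L2 = 30.0, 1e36  # the 30 TeV target discussed in the talk

area_factor = (L2 / L1) * (E2 / E1)   # required reduction in the area A
size_factor = math.sqrt(area_factor)  # reduction in linear spot size
print(size_factor)   # ~77: a tens-of-nm spot becomes a fraction of a nm
```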
For the next generation linear collider, the International Linear Collider, which hopefully will take place before these machines are built, the scaling is that a reasonable luminosity of 10 to the 34 corresponds to 10 megawatt beams and, as you see, cross-sections of some tens of square nanometers. Scaling to 30 TeV at 10 to the 36, we can keep the beam power constant, which we pretty much have to do, but the spot size goes down to a fraction of a nanometer. There are actually designs that realize these parameters for the next generation ILC and CLIC machines. So maybe this is actually possible, but it's extremely challenging. And one more thing I should say is that this 10 megawatts is the power in the beam, but you don't get the power directly into the beam; you have to plug something into your electrical grid, do some transformations, and eventually transform that into a beam. And the efficiency of that transformation is going to be absolutely crucial. If you can't make that an efficient process, you won't be able to afford the power bill. Here is a view of the design of the next generation International Linear Collider. You see the footprint is actually quite large. There are 22 kilometers of accelerating structure to get to 500 GeV; the accelerating gradients are 31 MV per meter. And that's limited by the fact that you're using superconducting RF cavities to basically make it more efficient to transfer wall plug power into beam power. This number has to change if we're ever going to get to tens of TeV. So what we have to do is think about genuinely new methods for accelerating electrons that can realize a much larger number. I think a good goal is something like a GeV per meter. And that's a big challenge. There are two interesting ways that have been proposed to do that. The first is dielectric laser acceleration. So basically, if you look at a laser field, a laser field actually has GeV per meter accelerating fields oscillating in the intense light.
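A field of that size corresponds, through the vacuum energy density u = eps0 E^2 / 2, to roughly a joule per cubic centimeter of light; here is that arithmetic (peak field only, ignoring field geometry and structure factors):

```python
import math

# Peak electric field implied by an optical energy density of 1 J/cm^3:
# u = eps0 * E^2 / 2  =>  E = sqrt(2 * u / eps0).
EPS0 = 8.854e-12          # vacuum permittivity, F/m
u = 1.0e6                 # 1 J/cm^3 expressed in J/m^3

E_peak = math.sqrt(2 * u / EPS0)
print(E_peak / 1e9)       # ~0.5 GV/m, i.e. of order a GeV per meter
```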
What you need is about one joule per centimeter cubed at optical wavelengths. And so the question is, can you actually harness that into an accelerating field? There's a group of people around the world, Byer is one of my colleagues at Stanford, interested in trying to do this by turning the transverse fields of optical light into longitudinal accelerating fields by interacting them with some dielectric structure. So here are some cool things. This was actually built and operated at Stanford. You basically fabricate micron-sized channels in silicon by etching, with an appropriate dielectric. You then illuminate this from the side with a laser. The transverse fields create longitudinal accelerating fields in that little micron-sized hole that you made here. Byer calls this the accelerator on a chip. And then you try to put electrons down that hole and accelerate them. And indeed they've observed GeV per meter accelerating fields, although frankly it's a little hard to get the electrons down that hole with high efficiency. These people imagine a setup with ten parallel channels and three times ten to the fourth electrons per bunch, and one gets reasonable beam parameters. Although definitely there are questions. A laser is a very inefficient way to transfer wall plug power into optical power. Typical efficiencies are below one percent now, and that has to become much larger if this is to work. And then of course you always have to ask, if you put a 10 megawatt beam down this tiny and carefully machined device, whether you're going to destroy it. And I'm not an engineer, so I'm not going to say yes, but people ask that question. An alternative is something called plasma wakefield acceleration. There you make an accelerating channel which is genuinely ephemeral. You have a dilute plasma in a cell. You put something into that cell.
Either an electron beam, a proton beam, or a laser beam that basically carves a small channel in it, ejecting the electrons from that channel and creating short-lived but very intense electromagnetic fields. Then you can put a so-called witness beam behind the main perturbation, and it will be accelerated by these fields. And actually this method naturally produces very high gradients. I think the record, I'm not sure if it's been reproduced, is something like 150 GeV per meter, which is, well, you know, I work at SLAC, which is a three kilometer long accelerator, so this is three times SLAC in a meter. This is a very ambitious business. Here again are some graphics of what a plasma wakefield accelerator looks like. You create a pulse which carves a hole, and then you carry charge in the hole that you've made. Very recently there have been some very neat experiments where you use beams from the SPS at CERN to drive a plasma wakefield accelerator. This is an experiment called AWAKE at CERN. And as you see in this setup, proton beams are so stable that you can produce very well defined accelerated electron beams. I think in this setup it's still 200 MeV per meter, but they have the ability to go higher as the program progresses. So this is the picture that was on the cover. You can do this with lasers. There's a big laser facility called BELLA at the Lawrence Berkeley Laboratory that has produced something like 4 GeV electron beams, laser driven, with this technique. Maybe I should skip this. Here is a facility that was proposed by these people, and maybe I should say another part of the vision is the reuse of the LHC tunnel. I love this figure, so I'm going to take the time to show it. Here's the ILC at 500 GeV. You replace a very small part, a fraction of a kilometer, with plasma wakefield stages: you double the energy. You then replace everything: you get to 22 TeV.
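The 22 TeV figure for a fully plasma-based machine in the LHC tunnel is consistent with the GeV-per-meter goal stated earlier. A back-of-envelope check (my own arithmetic, assuming the two linacs simply split the 27 km tunnel, with no allowance for injection or final-focus sections):

```python
# Average gradient implied by two plasma linacs filling the 27 km
# LHC tunnel and reaching 22 TeV in the center of mass.
tunnel_m = 27.0e3
linac_m  = tunnel_m / 2        # one linac per beam
beam_gev = 22.0e3 / 2          # 11 TeV per beam

gradient = beam_gev / linac_m  # average gradient in GeV per meter
print(gradient)                # ~0.8 GeV/m, close to the GeV-per-meter goal
```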
Now we're in business for the kind of physics that I want to do. And now there's just one more thing. Oh, and of course here are the issues. The energy efficiency, which is a very ill-understood problem. You have to have something that's reproducible shot to shot, even though you're dealing with a plasma and you're creating the accelerating medium shot to shot. There are beam instabilities that you have to control, as always with accelerators. The most difficult part, probably, is that a single stage of plasma is a meter or at most 10 meters long, and it accelerates you by only tens of GeV. But what you really want is to get to 30 TeV, which means you need a very large number of stages. Here I've assumed 5 GeV per stage; that's 3,000 stages. And then you have to preserve the small emittance of the bunch, which is natural for one stage, through the next one and the next one, without totally messing up the very small spot, remember, subnanometer size, that you have to make at the collision point. Oh, and there's one more thing: one really doesn't know how to accelerate positrons. The plasma accelerator is not matter-antimatter symmetric. It's understood how to accelerate electrons; there's been a demonstration of positron acceleration, but the method is probably the opposite of robust. And that's a problem here. I had some comments about photon colliders; I think I'll just skip them and go on to the very end. So all four of these approaches, four includes the things I wanted to say about photon colliders, require much effort to reach 30 TeV in the parton-parton center of mass. We need new conceptual developments, and also in many cases I've pointed to places where, although we know how to do it in principle, the engineering is just not sufficient to really make a proposal. And it's going to be decades of effort to make any of these facilities work. But I think as high-energy physicists, we have to go there.
We have to develop the technology that's going to get us to this next milestone and allow us to do experiments there. So let me just show you this last slide. This is the front page of the New York Times in March 1953. When I show pictures of Einstein, I usually don't show the old Einstein, but this is the old Einstein, still working on the theory of the unified field. And Einstein was probably much cleverer than, well, I don't want to impugn you guys, but certainly much cleverer than I am. And he tried for decades to create a theory of everything out of gravity and electromagnetism. He hadn't been reading the papers saying that there are these other interactions of nature which are equally fundamental and which all have to fit into this picture. And in the first half of this talk, I tried to convince you that there are still more fundamental interactions, responsible, for example, for the mass of the Higgs boson, that are out there and undiscovered. So if we're ever going to find the true unified picture of nature, we've got to know what they are. We're going to have to do these experiments. We're going to have to reach these high energies, and we've got to figure out how that's technologically possible. This is a big challenge for particle physics, but we have to face it. And those of you who are young in the room, I really encourage you to think about these problems. So thank you very much. Thank you, Michael, for this beautiful overview of where we should be going in particle physics. Questions or comments? Well, maybe I'll start with my own question. I was wondering whether, in the case of superconducting magnets, the bottleneck is the cost of building the magnets, or the refrigeration, the fact that you have to go to low temperatures, in terms of... Well, all those things are true. To get the high fields, you can actually read that off one of my slides. The fact that it's high-temperature superconductivity doesn't matter.
To get a high field, you have to go to liquid helium temperatures. So you still need the kind of cryogenic system you have for the LHC: 27 kilometers of refrigerated helium circulating in that thing. That's not a bottleneck. We know how to do that. Maybe it's going to be expensive to do it at a 100 kilometer scale. I remember when it was hard to really make these materials, because they have a very delicate stoichiometry, but I think that problem is solved now in the laboratory. It's beginning to be solved industrially. There are companies that are producing some of this high-Tc tape, and they seem to know what they're doing. The big problems then come in how you actually work with the cable or tape to make the kind of fields that you need for dipoles. So what you need is, what should I say, current going up here and down here, so that you have a perfect vertical bending field inside, over a region of order a centimeter. And that's really difficult to do with ordinary superconducting wire. It's really not known how to do that with these REBCO tapes, and that's a technology that has to be developed. Just how do you wind the magnets to get the field you want? And of course the cost is extraordinary, so one has to, with industry, lower that cost by a large factor. Now, since I've been interested in linear colliders, let's say since about 1990, people have, with engineering in cooperation with industry, lowered the cost of superconducting RF cavities by a factor of 10. So this kind of thing can be done, but it just takes a lot of experience and careful engineering. And we're just asking the question now. We'll see how far in the future it takes to do that. Questions? Yes. At the beginning you showed the dream of Weinberg. Why do you have some doubts? Well, in the 1990s and the 2000s, many people believed in this desert.
So you could imagine that at the LHC you would discover the whole spectrum of low energy supersymmetry. And then you'd have all the data that you needed to understand that theory. And we spent a lot of time on: how do you measure this supersymmetry parameter? How do you measure that supersymmetry parameter? Maybe if you had an e+e- collider, you could measure this other one that's hard to measure at the LHC. And then you could put the theory together and extrapolate to the grand unification scale, because the connection would be logarithmic renormalization group running. So you really could put together a nice picture of the unified theory at 10 to the 16 GeV. Now, in supersymmetry, we know that the lightest particles are in the TeV region, and it seems like there's not much chance that we'll find all of them. Some could be at 10 TeV. If you think that supersymmetry is highly excluded now and the alternative is some kind of composite Higgs model, we need to go to these energies to find the strong interaction sector of the composite Higgs theory. And so, you know, what seemed like it could have been so easy 20 years ago, I think we're discovering is not going to be that easy. We just have to rethink the whole problem. And when we rethink the whole problem, we also have to ask how we're going to get there with experiments. Anyone else? Please. So in this article you cite, there is a very interesting sentence: one serious difficulty is still to be solved. At that time, of course, Einstein didn't know about all the developments that were going to happen in the 60s and 70s and so on. So my question is, how does this connect to our current status? What is, according to you, the most important difficulty to be solved? Because what I get from the introduction of your talk is that, for you, the spontaneous symmetry breaking has no explanation...
As I tried to make the point in the talk, I think that the reality of spontaneous symmetry breaking in the weak interactions implies that there are new forces of nature, not included in the standard model, that are part of the big picture. Something I left out that maybe I should have mentioned is that, of course, dark matter and dark energy are also ingredients that are not in the standard model, but it's not clear we're going to learn about them by going to very high energy. For the Higgs problem, though, it's the only place to look. So we have to look there. Yes. Of these options, you mean, as a person who's totally ignorant, you want me to choose one? Do you have some money to invest? On which option would you spend your money? Oh, I'd definitely lose it. I mean, the thing that I'm personally interested in, and I guess this has a lot to do with where I sit: I work a lot with the people at SLAC who work on these advanced electron accelerators. So that's the thing that's very interesting to me. But other people are passionate about these other approaches. In fact, it's funny. If you read the literature on this subject, in a typical paper on how you get to very high energy, the first page is trashing all the other approaches. So I think it's important to give a talk that's more balanced, where actually nobody knows how to do it. I have read many times on the internet that you strongly favor linear accelerators. So is the presence of synchrotron radiation in circular accelerators the only reason why you favor linear accelerators, or is there some other reason? Well, so now you're asking me about the... not this generation, but really the next generation of accelerators. So that's a whole other issue that... I mean, in some sense, it's the subject of a whole other colloquium than the one I've given here.
I think recently there's been a lot of talk in the high-energy physics community about what the next accelerator should be, either at CERN or elsewhere in the world, in Japan or China or in the U.S. It seems pretty clear to me that the next thing we have to do is really study the Higgs boson with high precision. So we need an e+e- collider at roughly 250 GeV. There are solutions both linear and circular. I think what's become clear in this year's European strategy study is that they have equivalent performance. So in some sense, for physics, it doesn't matter which one you build. It's just a question of which one is cheaper and which one somebody is going to pay for. And that's a big, contentious issue this year in the high-energy physics community. I've been thinking about linear e+e- colliders since the 90s, so I guess I'm one of the people who helps those people defend their turf. But frankly, I'd be happy to see these experiments no matter how they're done. It's just that we actually have to find the money and some willing person who will allow us to build these things. Coming to the money: CERN is primarily a European effort. What is the future looking like? Is it going to be a truly international effort, including the U.S., Europe, and China perhaps? Well, I think the LHC is already an international effort. There's a large contribution from the U.S., there's a large contribution from Japan. The high luminosity LHC also has large contributions from outside Europe. So there are several branches that are possible. One is that CERN will be the only place where people do energy-frontier high-energy physics, and everyone will contribute to a project there. And I think the FCC vision that I showed here has that point of view. To get the money, you're going to have to ask everyone in the world to make a contribution, not only to the construction but also to the operation of that accelerator.
The countries in Asia that are now very prosperous and want to be at the forefront of science, in particular Japan and China, have their own ambitions for the next generation accelerator. And if they have the money and CERN doesn't have the money, I'm happy to go anywhere in the world to study the Higgs boson. But the answer to the question is not up to me. I encourage everyone, but you have to find national political reasons why this becomes an important project, and that's different in every country. Before we actually close the colloquium and thank the speaker, let me remind you of the rules of the game. Everybody is now encouraged to go outside, where we have refreshments. The students are kindly encouraged to come closer to the speaker; they will stay with the speaker for a few minutes asking questions, and then of course we'll make sure there will be food available for you and the students after the session. So let me...