OK, so welcome to the second lecture. What I'm going to discuss is the following. We had discussed the outline on Tuesday: I want to discuss baryogenesis and inflation, and we started with electroweak baryogenesis. In fact, that took a little longer than I expected, because we had all this discussion, which I think was quite useful, on sphaleron processes and what happens there, because that is really important for baryogenesis in many ways. And I think it is also an interesting complement to what you learn about axions and instantons in QCD. So that is, I think, an important problem in quantum field theory, and I tried to discuss some aspects of it. Today we come to the second part, which is leptogenesis. Then I will very briefly discuss other models (there will not be much time for this), and then we start with inflation and hopefully get roughly to the middle of that. So this is the plan. Now, I should say there are many models; there is a huge literature on baryogenesis. That was a point I tried to make last time. It is one number, and there are, I don't know, a few thousand papers, I guess. So what does that mean, and in such a lecture, how do you pick models? I think what is important, and let me just say it again, is that one does not just invent a theory to explain one number, because that is always possible. Rather, one should have something which is well motivated for other reasons and which gives you the baryon asymmetry as, I would say, a byproduct, not something invented in order to explain this number. The other thing which I think is important is that in order to make progress, we really have to check things by experiment and link them to observables in the lab and to cosmological and astrophysical observables. That is why I emphasized these two things. Electroweak baryogenesis is closely related to the Higgs sector.
And I think there will be a very good chance to really test it at the LHC. Hopefully, within a few years, we will know much more about this topic, what is true and what is not, just as a consequence of the LHC data. For leptogenesis, the connection is to neutrinos, which I think is also very interesting. There, in fact, the success of leptogenesis during the more than a decade now is due to the fact that it fits very well together with neutrinos. So we'll talk a little bit about that, and then we'll move on to inflation. Now, again, let me start for leptogenesis with a few key references and some reviews. There is the mechanism suggested by Fukugita and Yanagida almost 30 years ago. So while electroweak baryogenesis is in some sense a comparably old topic, this one is a little bit younger, because during the first 10 years almost nobody paid attention to it; the topic was not very popular, and the main focus was on electroweak baryogenesis. That then changed: when it became clear that the Higgs is heavier and electroweak baryogenesis has more difficulty, attention shifted to this. So that's the first paper really on this topic. Then I mention the following references for this reason: these key references are not necessarily always the most original or most important or best papers. Of course, they should be interesting papers, but they are the ones which started some direction and which are important if you really want to get deep into the subject. So I think they are good references for some reading, from which you can move on. Lazarides and Shafi, in fact, I think were the first who proposed what goes under the name of non-thermal leptogenesis. What that is is the following. It is again related to heavy Majorana neutrinos, as we will see a little bit. However, the production is not thermal, as it is in the standard treatment of early-universe thermodynamics.
Instead, the production comes from decays, out of decays of fields, in particular the inflaton field. Then there is a very special direction, on which one can have different opinions, which goes under the name of resonant leptogenesis. That is something which you get if you have mass degeneracies among the heavy neutrinos, as we will see. It is then a possibility to lower the scale of B minus L breaking and to see heavy Majorana neutrinos maybe even at the LHC. This group and a number of other people really pushed this field over the years, so that a number of consequences were derived which can be checked. And then this work, in some sense, I would say marks a certain endpoint of the simplest version of thermal leptogenesis, where you can see quantitatively how well it works: various errors have been estimated, and a full calculation with all the ingredients which were needed was done. And then there are reviews. Let me mention in particular the last two, because they cover material for which I will not have that much time in this lecture, namely the important flavor effects which you have in leptogenesis. So I'll come to that. Now, as a motivation, let me start from grand unification. As you know, the standard model is very well embedded into a grand unified group, SU(5) or SO(10), and then the sequence continues to the exceptional groups E6, E7, E8. I think that is a very promising and interesting route and has a high probability, in my opinion, of being correct. So the standard model particles, the left-handed quarks, the up quarks, the electron, the down quarks, the lepton doublet, fit remarkably into SU(5) representations. And then in addition you may have, or we may need (that is the basis of these lectures), a right-handed neutrino. Actually, maybe it is interesting to remind you, well, you may not know because you are too young for that.
But for many years it was said that one of the successful predictions of SU(5) unification was that neutrinos are massless. Because, you see, all these objects of the standard model that we know fit into these two representations. This is an additional state, a singlet state, which we will need. If this is not there, and you just write down the renormalizable SU(5) Lagrangian, it has as a consequence that neutrinos are massless. However, if you extend the group, you also get these guys, the right-handed neutrinos, which are important. You can then write down the Yukawa couplings for these objects, which, in SU(5) notation, I just sketch here. You couple two of those objects to one Higgs. You couple such an object and this object to another Higgs. You couple this and the singlet to this; this is what, in the end, will generate the neutrino masses. And then you can have, in addition, something which does not involve the Higgs, which is just a mass term in SU(5), a mass term for the right-handed neutrinos. That is, in fact, very important, because you can see already here that the expectation values of these Higgs fields are given by the electroweak scale. So they will generate mass terms of the order of, say, 100 GeV times some Yukawa coupling. But this mass term is independent of that and can generically be much bigger. So this is the starting point, and it was, in fact, I think part of the basis for the discovery of leptogenesis, namely the realization that if the smallness of the neutrino masses is due to the existence of right-handed neutrinos, then these right-handed neutrinos can also do very interesting things cosmologically. In particular, they can decay and generate a baryon asymmetry.
I mean, if you have such a mass here for the right-handed neutrinos, which is much bigger than the mass you generate by electroweak symmetry breaking, which would be something like a Yukawa coupling times the vacuum expectation value at the Fermi scale, then you get a mass matrix for the neutrinos, say a six by six mass matrix, which essentially looks like this: you have a zero entry here, you have here m_D, you have here m_D, and you have here this big thing, M, all of them being three by three mass matrices. If you diagonalize that, you find typically eigenvalues of order M (this is just without indices now, say in the simplest case of just a two by two matrix), and you find other eigenvalues, m_D squared over M, which give you the mass matrix for the light neutrinos. Capital M being big makes this small, and this is the famous seesaw mechanism, which I guess was discussed in detail, with some of its applications, in the lectures by Smirnov. Now, if you do that, you get another thing which is important. That is the following. You see, here you have one lepton and one antilepton, the left-handed neutrino and the right-handed anti-neutrino, and here you have two neutrinos. So such a mass term violates lepton number, and that is very important. It violates lepton number, so this is a Majorana mass term, and therefore, if you diagonalize the whole mass matrix, you will get, although this part is a Dirac mass matrix, altogether six Majorana neutrinos as mass eigenstates. Three of them are heavy and three of them are light, and the light ones do the usual physics with neutrino oscillations and so on, which Alexei Smirnov discussed. Now, let's do a very naive estimate.
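Before that estimate, the seesaw diagonalization just described can be checked in a few lines. The mass values here are illustrative assumptions (the hierarchy is kept modest so that floating-point round-off does not swamp the light eigenvalue), and the grand-unification numbers are those used in the estimate that follows:

```python
import math

# Toy 2x2 seesaw mass matrix [[0, mD], [mD, M]] with M >> mD.
mD = 100.0   # Dirac mass, GeV (electroweak-scale assumption)
M = 1e8      # Majorana mass, GeV (kept modest: for realistic M ~ 1e10-1e15
             # the light eigenvalue should be taken analytically as mD^2/M)

# Eigenvalues of [[0, mD], [mD, M]] solve lambda^2 - M*lambda - mD^2 = 0
disc = math.sqrt(M**2 + 4 * mD**2)
lam_heavy = (M + disc) / 2    # ~ M
lam_light = (M - disc) / 2    # ~ -mD^2/M; the sign is absorbed by a Majorana phase

print(lam_heavy)              # ~1e8 GeV, essentially M
print(abs(lam_light))         # ~1e-4 GeV = mD^2 / M

# With the GUT-scale numbers of the lecture, the same seesaw formula gives
v, M_GUT = 100.0, 1e15        # GeV
print(v**2 / M_GUT * 1e9)     # ~0.01 eV
```

The point of the toy diagonalization is just that one eigenvalue stays at M while the other is pushed down to m_D squared over M.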
So, if you take, say, this seesaw formula, which I wrote here on the board again, and you insert, say, for this Dirac mass 100 GeV, and for the big mass you take the grand unification mass, then you get a mass here which is about 0.01 electron volts. And this, I think, is very remarkable. It tells you that if you take the two scales for which we have evidence, one scale which corresponds to electroweak symmetry breaking, and another scale for which we have evidence from the unification of gauge couplings, and you take this ratio, you really get something which is in the range where the neutrino masses are. So far, we know two mass differences. The square roots of the mass-squared differences which you see in atmospheric and solar neutrino oscillations are 0.05 eV and 0.009 eV, so you see that with this estimate you are really hitting the right order of magnitude. Now, this is of course not a proof; it is just an argument. However, it is, I think, very suggestive. You can have different situations where you avoid this; that happens in resonant leptogenesis, which I will also discuss. Now, to continue, you have to calculate the CP violation which you have in the decays of, say, the lightest of these heavy neutrinos. We will assume, following the typical mass patterns which we see in the standard model, that these right-handed neutrinos are also hierarchical. That is not necessary, but I think it is a simpler starting point. Then what is important for leptogenesis will be just the lightest one of these heavy Majorana neutrinos. The CP asymmetry is the difference of the decay rates into lepton and anti-lepton, divided by the sum. And for that you can derive a nice formula where the CP asymmetry is given just in terms of neutrino masses. This here is the matrix of Yukawa couplings.
This is the light-neutrino mass matrix, and here you have the mass of the lightest of the heavy neutrinos. What is very important, and raises a couple of delicate questions, is that this formula comes about through the interference of the tree-level term and two quantum corrections, the vertex correction and the self-energy correction. In fact, the complete, correct result was first obtained here, I think many years ago, by Covi, Roulet and Vissani; at the time, it was an important result. Now, before we go into the details, let us, as we tried to do for electroweak baryogenesis, make an estimate of what kind of baryon asymmetry we can expect. What will happen is the following: you will have, say, a decay. I start from the heavy neutrino N, then I have here some blob, which involves the tree-level term and the quantum corrections, and I go, say, to lepton and Higgs. That will generate a difference in lepton number, but also in B minus L. So this is the quantity which we discussed on Tuesday. In the decays of these objects you generate an asymmetry in B minus L, and once you have that, it is not affected by sphaleron processes; it just is what it is. So you generate that early, and then this gives you B and L according to the formulae which I gave you on Tuesday. So, given this asymmetry in B minus L, you get B and L, related by a sphaleron factor, and this is what you see here. This is the CP asymmetry which enters in the end in the baryon asymmetry, and we come to that later. Now, if you want in the end to calculate the baryon asymmetry, the question is: how big is the CP asymmetry?
If you start from this hierarchical picture, and in this formula for the CP asymmetry which I showed you, you insert just the largest eigenvalues for the heavy-neutrino mass matrix and for the light-neutrino mass matrix, and you take the mass of the lightest of the heavy ones, and this is the expectation value of the Higgs field, then this here is, roughly, the loop factor. In fact, this is close to an upper bound on the CP asymmetry which was derived by Davidson and Ibarra. Then for this factor you get something like 0.1, and then you can use the seesaw formula again and replace m3 over v squared by one over M3. So you get a CP asymmetry which is a loop factor times the hierarchy of the heavy Majorana neutrinos. Now, if this hierarchy is like the hierarchies which we see in the quark and lepton mass matrices of the standard model (that is a model-dependent statement), then you will get something between 10 to the minus four and 10 to the minus five for the mass ratio, so a CP asymmetry of 10 to the minus five to 10 to the minus six; that is what you will have here. Then you have this sphaleron factor, and then you have to account for the fact that the asymmetry in B will not change, but if you calculate the ratio eta B, which is the asymmetry divided by the photon density, the photon number will change. Because in the beginning, when you generate this lepton or B minus L asymmetry, you have all the particles of the standard model in roughly equal numbers, because you are at very high temperature. And then, as you go to lower temperatures, in particular below the electroweak phase transition and the QCD phase transition, all this stuff annihilates away and essentially goes into photons. So the photon number density gets enhanced by a big factor compared to the original heavy-neutrino number density, which is almost a factor of a hundred here.
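The chain of factors just described can be put together as a one-line order-of-magnitude arithmetic. Every input below is an assumption or a round number from the discussion above, not a fitted value:

```python
import math

# Order-of-magnitude chain for the baryon-to-photon ratio.
loop = 1.0 / (8 * math.pi)   # generic one-loop factor
hierarchy = 1e-4             # M1/M3, taken to be quark/lepton-like
eps = loop * hierarchy       # CP asymmetry per decay, ~4e-6

c_sph = 28.0 / 79.0          # standard-model sphaleron factor, B = c_sph*(B-L)
dilution = 1.0 / 100.0       # photon-number enhancement after annihilations
kappa = 0.1                  # efficiency factor from the Boltzmann equations

eta_B = c_sph * dilution * kappa * eps
print(eta_B)                 # ~1e-9, the observed order of magnitude
```

The value of kappa here anticipates the thermodynamic discussion below; anything between 0.1 and 0.01 keeps the result in the right ballpark.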
And then there is a factor which I will now talk a little bit about: for that you have to solve the so-called Boltzmann equations, and putting those factors together, you get an asymmetry indeed of the right order of magnitude. Now, I spent a little bit of time on this just to illustrate that, starting from a GUT picture, you can understand rather naively where the asymmetry comes from, and the biggest factors which move you down to this very small number really come from neutrino physics. It is the structure of the GUT mass matrices; that is what gives you this factor. In the end, all the thermodynamics is in this factor here (this other factor is trivial), and that is the factor which, as I will show you, varies if you do the calculation between say 10 to the minus one and 10 to the minus two, so there is a relatively small uncertainty. So this is one big difference compared to electroweak baryogenesis. And the other is that, whereas in electroweak baryogenesis B minus L is always conserved during the process and it is just the dynamics of the bubble wall which makes the transition, here in leptogenesis you really have intrinsic lepton number violation, and you see it in the decays of the heavy Majorana neutrinos. Which masses, M1 and M3? These are typically the masses of the heavy neutrinos N1 and N3, of the first and the third family; you will have three right-handed neutrinos if you have three families. This is just the mass ratio, not the masses individually. So typically in these models what you get is that M3 is about, say, Lambda GUT, which you might take to be 10 to the 15 or 10 to the 16 GeV from unification, and then you go down to M2, which is smaller, and then finally to M1; and if this is about 10 to the minus five times M3, you end up at a mass of about 10 to the 10 GeV. That is the picture. So if you take, say, the ratio of the top mass to the up-quark mass, that is in fact about 10 to the five.
No, no, the unification is in the gauge coupling constants, not in these Yukawa matrices; there are some models where you can also study Yukawa coupling unification, but this is not what is relevant here. So that is how you come to these numbers. Yeah. Well, of course, if you generate all the baryons another way, that is an important question. In principle, of course, you can have a number of sources for the baryon asymmetry, and if you have a strong electroweak phase transition which generates a big asymmetry, then that is what matters in the end. However, as we saw, that is difficult; so suppose there is small or no asymmetry generation at the electroweak phase transition, then you can do it by something like this. And what is special about this mechanism, as we will discuss a little, is that these right-handed neutrinos, if they are in thermal equilibrium in the very early universe, can also wash out an asymmetry which was generated in other ways. So these right-handed neutrinos essentially act a little bit like a vacuum cleaner. They first erase the baryon asymmetry, because they violate lepton number, and together with the sphaleron processes that can, if the chemical potential for the leptons is zero, wipe out everything. And then, in their decays, you generate the lepton asymmetry. Anyway, so this would be one explanation. Now you also have to do the thermodynamics, and for that you have to consider the processes in the plasma. These are the simplest graphs: just the decays of the heavy neutrino into lepton and Higgs, and then you have here lepton-number-violating processes, and here also, and scatterings with the top and so on.
I should say, in fact, the first person who analyzed this with Boltzmann equations in more depth was Markus Luty, who is now here talking about supersymmetry, but he did early and very interesting work on this. Then, over many years, groups worked on this, and there are some definite results in the simplest case, where you sum over the lepton flavors in the final state. I should say, in principle, in these decays you can have an index, say alpha here, alpha being one, two, three, for the neutrinos, and you can also have a flavor dependence here: you can go to leptons of the first, second or third family. What is done here is that you talk first about the total asymmetry: you generate a lepton asymmetry and then sum over the lepton flavors in the final state. That gives you a considerable simplification, which in the end is not quite justified for thermodynamic reasons, but it gives you the first and simplest results. You can then describe the system thermodynamically by a number density of these right-handed neutrinos, which has a term here. This is a Boltzmann equation. I should say this variable z, which you will also see on the next slide, is a good variable to replace time in this process. It is the mass of the lightest of the heavy neutrinos divided by the temperature. So as the temperature decreases, z increases, and there is a factor by means of which you can relate that to time; decreasing temperature corresponds to increasing z. Yeah, sorry: phi is the full Higgs doublet. Because we are at high scales, above the electroweak phase transition, you have to consider just the Higgs doublet as a massless scalar doublet. So you have two Boltzmann equations: one for the number density of these heavy neutrinos, one for B minus L. And you see this: both have here a factor which is the number density minus the equilibrium number density.
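The qualitative content of these two equations can be seen in a toy integration. The rate functions below are schematic stand-ins for the real thermally averaged rates, and eps and K are illustrative values; only the qualitative behavior (equilibrium tracking, out-of-equilibrium decay, washout) is meant to be captured:

```python
import math

# Toy Boltzmann system in z = M1/T:
#   dN_N/dz     = -D(z) (N_N - N_eq)
#   dN_{B-L}/dz = +eps D(z) (N_N - N_eq) - W(z) N_{B-L}
eps = 1e-6                # CP asymmetry per decay (assumption)
K = 10.0                  # washout strength, ~ m_tilde/m_star (strong washout)

def n_eq(z):
    # equilibrium abundance: ~1 while relativistic, Boltzmann-suppressed later
    return min(1.0, z**1.5 * math.exp(1.0 - z))

z, dz = 1.0, 1e-3
N_N, N_BL = 1.0, 0.0      # start in equilibrium, with no initial asymmetry
while z < 20.0:
    D = K * z                      # decay rate over Hubble (schematic)
    W = 0.5 * K * z * n_eq(z)      # washout by inverse decays (schematic)
    dN = -D * (N_N - n_eq(z))      # dN_N/dz
    dBL = -eps * dN - W * N_BL     # each net decay sources eps, washout erases
    N_N += dN * dz
    N_BL += dBL * dz
    z += dz

print(N_N)     # heavy neutrinos have decayed away
print(N_BL)    # surviving asymmetry: eps times an efficiency factor below one
```

Plain Euler stepping is enough here because the toy rates are mild; the real equations use the full thermally averaged reaction densities and need a proper stiff solver.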
So you see that once they are in equilibrium, this number is just constant, and then this is also constant, and all that happens is that an existing B minus L asymmetry is erased by the washout terms; that is what you get from this term. However, if you get a departure from thermal equilibrium here, that means this term is different from zero, then it becomes a source term in here, and, proportional to epsilon, it generates an asymmetry. This is the usual out-of-equilibrium decay scenario, combined here with washout. That can be worked out in gory detail. You then have to discuss all the decay rates and compare them with the Hubble parameter, as you learned in the lectures by Subir Sarkar. I will show that on the next slide. A particularly useful quantity in this discussion is an effective neutrino mass, defined like this, which is almost like the seesaw formula, just with a different contraction of indices. And that has to be larger than a so-called equilibrium mass, which is about 10 to the minus three electron volts; this is the statement that the decay width of the heavy neutrino, Gamma, is larger than the Hubble parameter, which gives you that this m tilde is larger than m star. So this is nothing but that condition, but it is useful to formulate it in terms of neutrino masses, in order to compare it with neutrino physics. And what you then get is an interesting picture. I showed you that the total baryon asymmetry was proportional to this one factor, the so-called efficiency factor, and this is determined by the thermodynamics. To get it, you have to solve these Boltzmann equations and calculate. You then find that if you go below this mass of 10 to the minus three electron volts, this efficiency factor has a rather big uncertainty, by many orders of magnitude, depending on the initial state you have.
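The value of this equilibrium mass can be checked directly. The prefactor below is the one quoted in the thermal-leptogenesis literature for the condition Gamma larger than H at T equal to M1; I take it as given here:

```python
import math

# "Equilibrium neutrino mass" m* separating weak from strong washout:
#   m* = (16 pi^{5/2} sqrt(g*) / (3 sqrt(5))) * v^2 / M_Pl
g_star = 106.75    # relativistic degrees of freedom of the standard model
v = 174.0          # GeV, electroweak vev in the seesaw normalization
M_Pl = 1.22e19     # GeV, Planck mass

m_star_eV = (16 * math.pi**2.5 * math.sqrt(g_star) / (3 * math.sqrt(5))
             * v**2 / M_Pl * 1e9)   # converted from GeV to eV
print(m_star_eV)   # ~1e-3 eV, as quoted above
```

So the boundary between the weak and strong washout regimes is set by nothing more than the electroweak scale, the Planck mass and the number of degrees of freedom.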
So, for instance, it depends on whether you start with an equilibrium number density for the right-handed neutrinos, or whether you start with zero and generate the heavy neutrinos from the bath; it depends on what kind of scattering processes you have, and so on. If you make this mass a little bigger, which means you go to the so-called strong washout regime, then you see that all these lines more or less merge, and you get a firm prediction for the baryon asymmetry, given this particular neutrino mass. So in this sense the thermodynamics of this process becomes simple, and that makes leptogenesis a very robust thing: the baryon asymmetry which you calculate here in terms of neutrino properties is really rather well determined. Here I show you a comparison of the various rates with the Hubble parameter. The Hubble parameter goes like this, and here you see a washout rate; the baryon asymmetry, if you work it out, gets fixed around here, as the washout becomes unimportant and the decay becomes important. You can then generate pictures like this, where you start either from an equilibrium number density, or you generate the heavy neutrino abundance; at some point they decay, and you generate an asymmetry which is essentially a function of the light neutrino masses. So the picture which you get is in some sense very simple. You have a big thermal bath of all the standard model particles. They have gauge interactions, and that is important: therefore you have good thermal equilibrium for those. Then you have this heavy neutrino, which is coupled only by small Yukawa interactions, and therefore very weakly. So it is like somebody who walks through this plasma a little bit randomly, very weakly coupled.
And the small CP violation of this object, in the decays and also the inverse decays and so on, then gives you, if you solve these Boltzmann equations, the asymmetry, and you can use the whole thing to derive constraints on neutrino masses, to see where this is consistent. From the washout processes you get an upper bound of about 0.1 electron volts on the light neutrino masses, a lower bound on the heavy neutrino mass, which is indicated here, and you get altogether a window of neutrino masses which is preferred by leptogenesis, where these masses are between 10 to the minus three and 0.1 electron volts. I should say that this was derived under the assumption that you can sum over the lepton flavors in the final state. That makes things easier, and then you can get these results almost analytically. If you include the flavor dependence, the story becomes more complicated, and it is difficult to quote exact bounds. In the report by Davidson, Nardi and Nir, they conclude that these bounds are relaxed by about an order of magnitude, but that is a difficult story. Anyway, I just want to emphasize the importance of these flavor effects; the details are complicated. So that is the naive picture, sometimes called vanilla leptogenesis, because it just works, or it fits. Actually, I should say the following. As I said, the original idea of Fukugita and Yanagida was not really pursued for a long time, because people were not that interested in leptogenesis. Then, when electroweak baryogenesis began to look more difficult, people started to look at this, and then neutrino oscillations were discovered. And the fact that what you get from neutrino oscillations fits well together with this gave a boost to this picture of leptogenesis. I should say this is not trivial, because, as I showed you, you get roughly this upper bound of about 0.1 electron volts.
At the time when much of this work was done, it was still debated whether you could have Majorana neutrino masses of, say, a few electron volts. If that were true, this whole thing would be in very bad shape, essentially excluded. So the absolute neutrino mass scale, not the mass differences but the absolute neutrino masses, because they control the lepton number violation, plays in this business about the role which the Higgs mass plays in electroweak baryogenesis. And it just turned out that, because the neutrinos are so light, it all fits very well together. Actually, I should say one thing. If you work on this stuff and you see the various rates which contribute to the Boltzmann equations for this process, the washout processes, decays, inverse decays, scatterings with the top, Higgs processes, scatterings with gauge bosons and so on, at some point you start to wonder why it works at all. Because on the one hand, you need lepton number violation in order to produce an asymmetry in B minus L in the beginning. On the other hand, if you have too much lepton number violation, whatever you generate will be washed out, and that is controlled by the size of the neutrino masses. So this only works because you have lepton number violation, but not too much, and you have it in such a way that all the rates which you can calculate here conspire in the end to give you altogether a nice picture. So this is a little bit miraculous; a priori there is no reason for this to work. It is like electroweak baryogenesis: if the Higgs mass had been, say, 30 GeV, then electroweak baryogenesis would have worked perfectly. Nevertheless, at the moment we know just the mass differences between the light neutrinos. We do not really know whether the heavy neutrinos exist, and if they exist, how heavy they are. Within the GUT picture which I described, you have an idea of where the masses are.
However, we do not know whether that is true, and it could be that these masses are in fact at the TeV range. This is something which, as I said, has been pursued over the years by Pilaftsis and collaborators, and this is a formula which I copied from one of their papers. So suppose now these heavy neutrinos become close to each other in mass. Then you cannot just take the lightest of them; you have to take all of them into account. So you have to take the effect of the three families here, and the three families here, and then you get CP asymmetries which are matrices in these family indices. If you work out these CP asymmetries, you find that they depend on the differences of these masses; and here are the decay widths of those objects. So making these mass differences small enhances the CP asymmetries. How that works exactly is still debated; this here is very recent work. I mean, it is complicated, and people are working on it, but the formula will be roughly of this type. This is something which you can then calculate in such a model, and I just show you one explicit example. This is again this time variable z, equal to mass over temperature, and here you see first the lepton asymmetry: first very big, and then it converges here to this value. So these explicit calculations show that you can generate a baryon asymmetry, a lepton asymmetry, in these models. However, it comes at the price of really tuning the mass differences rather accurately: you need very tiny mass differences compared to the sum of the masses. In this table the authors list that. For instance, one of the mass differences here is about 10 to the minus nine: delta m over m, I think for m2 and m3, two of the masses, is 10 to the minus nine.
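The shape of the enhancement can be sketched as follows. This is only the commonly quoted regulator structure of the self-energy contribution, written in terms of a mass splitting and a decay width; as said above, the exact form is still debated:

```python
# Schematic resonant enhancement factor of the CP asymmetry:
#   x*y / (x^2 + y^2),  x = mass splitting, y = decay width,
# which is maximal (= 1/2) when the splitting is of the order of the width.
def resonance_factor(splitting, width):
    return (splitting * width) / (splitting**2 + width**2)

print(resonance_factor(1.0, 1.0))     # 0.5, the maximal enhancement
print(resonance_factor(100.0, 1.0))   # suppressed for large splitting
print(resonance_factor(0.01, 1.0))    # and also for exact degeneracy
```

This is why the scenario requires tuning the splitting down to the width, but not below it: the factor vanishes both for large splittings and in the exactly degenerate limit.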
So you can say, well, that is an enormous fine-tuning, and you have to construct a model where you can really understand how you get such a pattern of neutrino masses from a more fundamental theory. So this is, I think, the question you have to ask: how natural are these mass matrices? But there are people who construct flavor models where, due to some symmetries, maybe you can have that. Yeah? So, the CP violation is in the phases of the neutrino mass matrix? No, no, they are of course not completely degenerate, but they are very close in mass. This game takes place when the mass difference is of the order of the decay width, okay? And then the CP violation is there. No, of course that was taken into account; that is all done correctly, I think. Now, if you have such a scenario, then you have some nice predictions for the LHC, and I think it is very good to push these models to the point where you can see that. That has been done here. For instance, imagine you have these heavy neutrinos now with masses of a few hundred GeV, and you have additional gauge bosons, charged W bosons and a Z prime, as frequently discussed in LHC physics. Then you can produce such heavy neutrinos, which then decay. Now, if you have such vertices here, you get via quantum corrections all kinds of processes where you have to be careful. For instance, you get the process mu to e gamma, and in order to be consistent with that, the coupling which you have at this vertex has to satisfy an upper bound. And then your model has to be consistent with the neutrino masses which you see. So, for instance, from the solar neutrino data you get a band where this coupling should be, around 10 to the minus six. So you get a very small coupling. But then, what is interesting, once this coupling is small, the width of these N's is small, so they live a long time.
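The connection between a tiny coupling and a macroscopic decay length can be sketched numerically. The two-body width formula and the coupling values below are illustrative assumptions; the actual widths in these models go through the gauge couplings and mixings discussed above:

```python
import math

HBARC = 1.973e-16   # hbar*c in GeV * m

def ctau_m(y, M_GeV):
    """Proper decay length c*tau for a schematic two-body width
    Gamma = y^2 * M / (8 pi)."""
    gamma_GeV = y**2 * M_GeV / (8 * math.pi)
    return HBARC / gamma_GeV

# c*tau scales as 1/y^2: tiny couplings give macroscopic decay lengths
print(ctau_m(1e-7, 200.0))   # a few millimeters
print(ctau_m(1e-8, 200.0))   # tens of centimeters
```

In this crude parametrization, millimeter-to-meter decay lengths correspond to effective couplings of order 10 to the minus seven to 10 to the minus eight for a few-hundred-GeV mass; the precise map to the couplings of a given model depends on the decay channels.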
So when you produce them at the LHC, they decay with a finite, macroscopic decay length: displaced vertices, which is one of the things people are looking for at the LHC. The decay length can range from a millimeter to a meter, so that is in principle quite an interesting signature. If you want something which is consistent from the point of view of baryogenesis, and with all the constraints from electroweak precision observables and lepton-flavor-violating processes, then you are led to really specific predictions for the LHC, in this case these displaced vertices. These authors recently gave a long discussion of the LHC phenomenology of this. Now, let me mention one more thing. Leptogenesis, as I tried to explain, is in principle a simple picture. And because the thermodynamics is simple, because it is close to equilibrium, you may think it possible to make it a real theory. A real theory in the sense that, after all, it's a quantum field theory, so one would think: given the quantum field theory with certain parameters in the Lagrangian, and, say, standard cosmology, I should be able to compute the baryon asymmetry with an error bar, like a QCD calculation where you say the production cross-section of this or that process is this number up to, say, a 20% error. Now, that requires that you do not just solve Boltzmann equations for leptogenesis, but that you really solve the full non-equilibrium problem in quantum field theory. That is, in principle, an interesting problem, close to activities in condensed-matter physics. A couple of years ago some work on that started, and it is ongoing, but it has been making significant progress. The reason you can have some hope here is the following: these heavy neutrinos are, thermodynamically, just one degree of freedom in a big system.
You have, say, roughly 100 degrees of freedom for the standard model, and then one additional degree of freedom for this heavy neutrino. So whatever this neutrino does, it doesn't change the thermodynamics of the system, and you can neglect the back-reaction. That is a very important point, and it makes this significantly simpler than, for instance, non-equilibrium studies in heavy-ion collisions. The other thing is that the Yukawa couplings of these heavy neutrinos to the particles in the thermal bath are weak, so there you can do a perturbative expansion. It still remains complicated enough, but it is doable. And the formalism for that has been developed: the Schwinger-Keldysh formalism. There you study Green's functions on a complex time contour; instead of the usual Schwinger-Dyson equation, you have something like a Schwinger-Dyson equation on the contour. Instead of the usual Green's function with Feynman boundary conditions, you now have two Green's functions: one, the so-called spectral function, which contains information about the spectrum of the system, and another, the statistical propagator, which contains information about the initial conditions. That gives you a coupled system of equations, the so-called Kadanoff-Baym equations, which you can solve systematically. It is not done fully yet, but in, I would say, very good approximations, and you can really see how this work converges. And here is one example, by Garny and collaborators, of what they calculated: they made an application of this formalism to resonant leptogenesis, and you can compare the enhancement which you get, say, in naive calculations of resonant leptogenesis with the one you get from these Kadanoff-Baym equations. If you just use Boltzmann equations, you get something like this for the maximal CP asymmetry.
Essentially, R is defined as the ratio of this CP asymmetry to a reference CP asymmetry. You get an enhancement here which involves this mass difference, and by tuning parameters it can become very big. Interestingly enough, if you do the proper calculation, then, due to various effects, the sign changes: you get a plus here. Of course, the minus sign here is somehow unphysical, and you can understand where it comes from, but doing the proper calculation in this formalism also fixes such problems, and you can get reliable results also for resonant leptogenesis. So let me now summarize leptogenesis. I think thermal leptogenesis is simple in the sense that the basic picture is really very simple, and that, I think, is what makes it successful: you can understand the order of magnitude qualitatively, and you can systematically improve the quality of the calculation. So it is on the way, I think, to becoming a real theory, and it fits very well together with what we know about neutrino masses. We'll see. What is very important in this business is to determine the absolute neutrino mass scale. So far we know only mass differences; direct laboratory experiments have a rather weak upper limit on the neutrino mass, which is, I think, around an electron volt, and KATRIN is hoping for something like 0.2 to 0.3 electron volts. Cosmology, though not in a model-independent way, may be able to do much better, and that, in this business, is very important. So it would be very nice to get evidence for, say, a smallest neutrino mass of the order of 0.01 electron volts. That would be my favorite value: it would be right in the window you saw, between 10 to the minus 3 and 0.1 eV, and it would give you a sum of neutrino masses of about 0.07 electron volts, something like this.
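The arithmetic behind that last number is a one-liner. A quick check, using approximate measured mass-squared splittings and assuming normal ordering:

```python
import math

# Approximate measured splittings in eV^2 (normal ordering assumed)
DM2_SOL = 7.5e-5   # m2^2 - m1^2, solar splitting
DM2_ATM = 2.5e-3   # m3^2 - m1^2, atmospheric splitting

def mass_spectrum(m_lightest):
    """Return (m1, m2, m3) in eV for a given lightest mass, normal ordering."""
    m1 = m_lightest
    m2 = math.sqrt(m1**2 + DM2_SOL)
    m3 = math.sqrt(m1**2 + DM2_ATM)
    return m1, m2, m3

m1, m2, m3 = mass_spectrum(0.01)   # the "favorite value" of 0.01 eV
print(f"m1 = {m1:.3f}, m2 = {m2:.3f}, m3 = {m3:.3f} eV, sum = {m1 + m2 + m3:.3f} eV")
```

With m₁ = 0.01 eV this gives a sum of about 0.074 eV, the "about 0.07" quoted above, comfortably below current cosmological bounds but potentially within reach of future large-scale-structure determinations.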
Of course it would not prove this picture of leptogenesis, but it would be, I think, very strong further support. And from various analyses, large-scale structure in particular, we can hope in the coming years for a determination of the sum of the neutrino masses, and therefore of the absolute neutrino mass scale. That would be, I think, very important in this respect. The whole question of flavor effects remains important, and further work needs to be done. In particular, there is a lot of work which went on, not in a model-independent way, but within theoretically motivated models like SO(10) and so on, and I think that is very valuable. Resonant leptogenesis is a possibility and can be tested at the LHC, but of course the question is how you understand the rather degenerate heavy-neutrino masses which you need in order to make it work. I had no time to discuss non-thermal leptogenesis, but it is also a generic and important possibility. Non-thermal means you generate the initial heavy-neutrino abundance not thermally, but, say, from decays of the inflaton or of other heavy scalar particles. And there has been significant progress toward the full QFT treatment, so I think this is on a good way. Other models: I have only this one page on other models, first of all because time is short, but also because I think in these cases it is not so closely related to experiment. The Affleck-Dine mechanism, related to flat directions, in particular in the supersymmetric standard model, is a generically very interesting possibility. So far we see no supersymmetry, and there are also a number of open questions there, so we will see what happens to that. What has also been discussed quite a bit in the literature is the decay of heavy moduli fields. It is also a possibility, but generically it involves a number of parameters. An interesting aspect is that you can relate it to dark matter.
There is something called cold baryogenesis, which happens after the electroweak phase transition. You can have baryogenesis in connection with the strong CP problem and the QCD axion; there is a recent paper on that by Servant. You can have baryogenesis just from Hawking radiation; I just saw the title of the paper, I haven't had time yet to read it, but maybe it is interesting. If you go to SPIRES and type "leptogenesis", sorry, "baryogenesis", it goes on and on and on. There are many, many papers, so you can make interesting models. I think what is important to make progress is to really embed this into extensions of the standard model and to have a link to other phenomena like dark matter and inflation. We will discuss a little bit of that later. So I am now at the end of baryogenesis, and before I start with inflation, maybe I should ask whether there are some questions; otherwise we can have that later. [Question.] Sure, it's related, but I think in this particular case of leptogenesis I'm not really... Well, I know there have been discussions on that, but I must say I don't know what the latest status is. Clearly, in principle, it is an important topic in this connection, because it is a baryon-number-violating process. [Question.] You mean here, in Affleck-Dine? Well, the question is, once you have a flat direction, how is it stable against quantum corrections? I think in principle you can try to protect it with a shift symmetry. Yeah, that's true. Right now I'm not aware of... probably it has been studied, I think. But it's true, you could try a shift symmetry. Once you do that, you have to be careful, I think, because with such a flat direction, say a modulus field like this, it is dangerous to take just one modulus field with an ad hoc potential for it.
The question is how that fits into a broader picture, because otherwise there is usually a lot of freedom in constructing such models, and in the end you get just one number, the baryon asymmetry. Anyway, maybe because there are so few experimental constraints, this whole subject is really a nice playground for theorists: you can try your ideas and see whether you can produce the right baryon asymmetry. Okay, so maybe I stop here, and now I come to the second part, which is inflation. Of course there will not be that much time to discuss inflation, but I think I should discuss some aspects, because there is no other lecture on it, and also because of some connections to baryogenesis and a little bit to dark matter. Now, I said that for baryogenesis you have just one number. For inflation, at the moment, we essentially also have just one number, which is the scalar spectral index; I will come to that. This number comes from the latest Planck data, which came out in 2015, a few months ago. For the tensor-to-scalar ratio, to which we will also come, we don't know what it is; so far there is an upper bound of about 0.1. So where do these numbers come from? Again, just one or two numbers, and probably even more papers than on baryogenesis, maybe 10,000 more. Now, I have listed here of course not all the important historical papers, but some which will be important to us later. Starobinsky and his model, I think, is an important one; then the discussion of the horizon problem by Guth; and chaotic inflation, introduced by Linde, is also a standard model, and even if the simplest version is probably not going to work, it is still interesting. Now, this plot you have seen many times, I'm sure; I will show it twice during this lecture, just as an advertisement of course.
These are the Planck data from 2013 and 2015; I didn't even see such a plot in the newer paper, but there is no visible difference, because it is already at such an enormous precision. The point now is this: here it looks like there is big structure, but we should not forget that this structure is just a correction, a few times 10 to the minus 6, to something which is exactly flat. And so the first question in connection with inflation is: how can the CMB be so isotropic? I think this is an important point, so I will make a little effort and try to explain the horizon problem. I guess some of you will be very familiar with it, although I must confess that, even though I have taught it a couple of times in class, from time to time I still get confused thinking about it. So I think it is worthwhile to go carefully through this argument. I should say that I follow, not completely, the discussion in some lecture notes by Daniel Baumann: the TASI lectures he gave a couple of years ago, which he updated, so it is a version of 2012; the reference is on the slides. They discuss these things in a very clear way. So I will spend a few slides on this. Now, we know we have the expanding universe, as described by Subir Sarkar, and I think everybody knows the Robertson-Walker metric, for a universe which is either closed, flat, or open, depending on whether k here is 0, plus 1, or minus 1.
Now, it is convenient to change coordinates: instead of r you use this chi, defined in this way, and instead of t you use tau, the so-called conformal time. That of course you can do, and in these variables the metric now looks essentially as in Minkowski space. If you write Minkowski space in polar coordinates, you have the time, you have the radial distance, here the rescaled radial distance chi, and then you have the angular part, d theta squared plus sine squared theta d phi squared, times a function which depended on r and now depends on chi; this is the function here, and we will essentially always use k equals 0 in the following. Why is this important? Let me say it already; we will come back to it a couple of times. It is important because, if you use these variables, the light cone looks as in Minkowski space, since the metric is just an overall rescaling. That means light propagates at 45 degrees: if this is the chi axis and this is the conformal time, light goes like this or like this; this is the forward light cone, and this is the backward light cone. Very important, particularly if you want to discuss which regions are causally connected and where the horizon problem comes from. Now, an important notion, which I think Subir Sarkar discussed, is the particle horizon. The particle horizon is defined in this way, as an integral from some initial time to a time t of dt' over a(t'). If you look at the metric, this is just the distance light can travel in such a time. One defines a comoving distance by dividing out the scale factor: a physical distance is the scale factor times the comoving distance, so to get the physical distance here I would have to multiply by a(t). This particle horizon, sometimes also called the past horizon, a name I like even more, has to be
distinguished from the event horizon, or future horizon; that will not be important for us, so I don't even mention it here. The particle horizon is what will be important, and it is defined like this: it gives you the region from which information can have reached the observer until now. One should think of the expanding universe, as you learned, as a balloon which blows up, with the radius as a function of time given by the scale factor. The distance between points is then determined by two effects. One is the distance on the manifold, say on a sphere, which is independent of the physical size of the sphere; the other is the scale factor: the physical distance in centimeters depends, of course, on the scale factor, and therefore on the radius of the sphere. It is really important for this discussion to clearly distinguish these two things, and therefore I think it is good to use these coordinates. Now, the time evolution of this cosmic scale factor is governed by the Friedmann equation, which you all know; here I have used the Planck mass, and H is the Hubble parameter, as in Subir Sarkar's talk, the usual stuff you are all familiar with. And then, as you know, how the energy density scales with the scale factor depends on the equation of state. If you have just non-relativistic (dark) matter, the space just expands, so the density is just volume-suppressed: omega is 0 and rho goes like a to the minus 3. If you have radiation, it is not just diluted, it is also redshifted, so you have a to the minus 4. And if you have vacuum energy, say a cosmological constant, whatever that is, you have omega equals minus 1, and you have the miracle that the energy density stays constant despite the fact that the universe expands. Now, what is important is to introduce another quantity, the so-called comoving Hubble radius. You
know the Hubble radius: the inverse Hubble parameter today gives us the size of the observable universe, which is about 10 to the 28 centimeters. It is good to also normalize that to the scale factor: take 1 over H, the Hubble radius, and divide it by a. This is the so-called comoving Hubble radius, the analog of the comoving horizon. From these equations you can easily derive that 1 over aH is the same quantity today times (a over a0) to the power one half of (1 plus 3 omega). This is simply something you get by combining the equations above, but it is a very useful relation. It means that, depending on the equation of state, say for omega equals 0 or omega equals one third, this comoving Hubble radius increases with a, whereas for omega equals minus 1 it decreases with a. This is a crucial feature. Now we can calculate the comoving horizon by just using the formula: if I take the formula for the horizon and rewrite it in terms of this comoving quantity, I get that the comoving horizon (the formula will come again later) is the integral of da over a times 1 over aH. It is a strange way of writing these things, but it is useful: you integrate from some initial value to the final value, and you see that the comoving horizon is the integral of the comoving Hubble radius with a logarithmic measure over a. You can easily do the integral and you get the following. The comoving horizon as a function of a has this prefactor, the comoving Hubble radius today, and then it grows as the square root of a if you are matter-dominated, it grows linearly in a for radiation, and if you are vacuum-dominated it approaches a constant, while the comoving Hubble radius decreases. Now, for matter and radiation, as I just said, this thing always
grows. Now make what is sometimes called a natural assumption, or what one assumed, say, before inflation. The age of the universe is about 14 billion years, and if I go back to the time of recombination, that is about 380,000 years after the big bang; one assumes that the total horizon now is bigger than the horizon at that particular time. That seems a natural assumption, because the equation of state from recombination until today, as you also learned, is about omega equals 0. From that you get that the comoving horizon today, divided by the comoving horizon at the time of recombination, is given by this ratio of scale factors to the power one half, which is by definition essentially the square root of the redshift, and the redshift, we know, is about a factor of a thousand. Therefore it means that the region of the manifold which we see today consisted, at the time of recombination, of about 10 to the 5 causally disconnected regions, and the question is: how can that be? Now, there is a very nice illustration of this puzzle, which you find in these lecture notes by Baumann, and it is the following. Here you have the conformal time, and here you have chi, this radius. Here is the observer, and the CMB photons reach him from the sphere of last scattering, from the time of recombination when the photons decoupled: he gets them from this direction, from this direction, from this direction. This radius is sort of the size of the manifold, if you wish: this circle is a circle on the spatial manifold, from where these photons start their propagation to the observer. Now, as we said, we assumed that up to here it is almost the full particle horizon for the observer, but a little bit is left, namely from the CMB time, the time of recombination, back to the big bang singularity. If you just take radiation and matter as
the equation of state and you solve the Friedmann equation, then you get a singularity; that would be here, what you would define as time equal to zero. And in this conformal time, this interval is small compared to this one. That means: if you receive light from this point here, then the backward light cone of this point is given by this, and you see it is completely disconnected from the backward light cone of this other point, which is here. Because they are completely causally disconnected, these two points can never have been in causal contact in the past. So you may ask: if this is the case, how is it then possible that the temperature you see in these different directions is the same at the level of 10 to the minus 6? This is the horizon problem. The other puzzles of the same type, the flatness problem and so on, are basically all the same. So this is the horizon problem, which I think is a real problem if you have just ordinary matter and radiation. Now, what inflation proposes as a solution is the following. During the phase where the universe was dominated by matter and radiation, I always had this increasing comoving Hubble radius, which is plotted here as a function of a on a logarithmic scale: this is for radiation, this is for matter, this is maybe again a matter-dominated phase, and so on. What I have to do in order to solve the horizon problem, if I look at this integral, is simply to make the contribution up to the scale factor at the CMB bigger. If I make it big enough, then I can solve the horizon problem, and that is done by smoothly matching this period with an increasing comoving Hubble sphere to an earlier one with a shrinking comoving Hubble sphere. If you can do that, you can manipulate the integral in such a way that the horizon problem goes away; I will show you the plot again. Now, how can you get a shrinking Hubble sphere? You need the comoving Hubble radius to decrease, which means that the second
derivative of the scale factor has to be positive. If you now look at the Friedmann equations, where the second derivative is proportional to minus (rho plus 3p), you see that this means p is smaller than minus one third of rho. Now, this is impossible for ordinary matter and radiation; however, it is possible, for instance, for a cosmological constant, where p equals minus rho. So if, before this radiation-dominated phase started, you had a vacuum-dominated phase, then you can have a period with a shrinking Hubble sphere, and during that period, as you easily check, the scale factor blows up exponentially from some initial value. But note that the initial value of the scale factor appears here: that means you introduce a dependence on the initial conditions. If you do that, then you get this nice picture, again taken from Baumann's lectures. We made it up to here so far, where you have the horizon problem: different points at the CMB have past light cones which don't intersect. But if you now have this earlier period with a shrinking Hubble sphere, then you can increase the causally connected region more and more, until eventually you get the intersection, and then you don't have any horizon problem. This is what is done. I think what this illustrates is the following: if you just take the CMB, which we know is so well established, and we don't have to talk about anything else, then with only matter and radiation there simply is a big puzzle; you just cannot understand the enormous isotropy of the CMB. However, including a phase where the energy density was dominated not by ordinary matter or radiation but, say, by vacuum energy, you get the possibility to modify the structure of these comoving horizons in such a way that the puzzle can be solved. Of course, you still have the dependence on the initial value of this Hubble parameter; where was it, here. It is instructive to look at this equation: suppose
you have an observer who could somehow watch everything from the very beginning. He would start at the initial time, where the scale factor is a_i; at that time the horizon is 0, because if you put a_i here, this is 0. Then, as a increases, this moves to a constant: as long as you are vacuum-dominated, you approach a constant, which is essentially the fixed horizon you have in de Sitter space. That remains until, after the inflationary phase, you exit via some phase-transition process and move to a phase which is radiation- and then matter-dominated. In that case this comoving horizon, after staying roughly constant for a long time, starts to increase during radiation and matter domination, until it reaches, say, the value which we have today. But what we don't know, for instance, is how big this value here is, or how big the value today is relative to this one, a_0 over a_i. That means we don't know how big the causally connected patch of the manifold which describes us really is: it could be close to the comoving horizon we see now, or it could be much bigger. We don't know. That is related to the question of how many e-folds of growth of the scale factor you have, and that in turn is related to the question of initial conditions for inflation. So maybe that is a good point to stop.
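Several numbers flew by in this last stretch: the a^((1+3w)/2) scaling of the comoving Hubble radius, the roughly 10^5 causally disconnected patches, and the number of e-folds. Here is a small numerical sketch tying them together, in units with a0 = H0 = 1. The inflation-scale inputs (an energy scale of 10^16 GeV, instant reheating at 10^15 GeV) are illustrative assumptions, not measured values:

```python
import math

# --- Scaling of the comoving Hubble radius: 1/(aH) = (a/a0)**((1+3w)/2) ---
def comoving_hubble_radius(a, w):
    """Comoving Hubble radius in units with a0 = H0 = 1."""
    return a ** ((1.0 + 3.0 * w) / 2.0)

def comoving_horizon(a, w, a_i=1e-8, steps=50000):
    """Comoving horizon tau = integral of (1/(aH)) d(ln a), numerically."""
    la_i, la = math.log(a_i), math.log(a)
    d = (la - la_i) / steps
    return sum(comoving_hubble_radius(math.exp(la_i + (k + 0.5) * d), w) * d
               for k in range(steps))

# Matter (w=0): horizon grows like sqrt(a); radiation (w=1/3): like a;
# vacuum (w=-1): it saturates at a constant ~ 1/a_i.
print(comoving_horizon(1.0, 0.0) / comoving_horizon(0.25, 0.0))   # ~ sqrt(4) = 2
print(comoving_horizon(1.0, -1.0, a_i=1e-4))                      # ~ 1/a_i = 1e4

# --- Horizon problem: patch count at recombination (matter-dominated, w=0) ---
z_rec = 1100.0
ratio = (1.0 + z_rec) ** 0.5          # d_hor(today) / d_hor(recombination)
print(f"disconnected patches ~ {ratio**3:.1e}")

# --- Minimum e-folds so that 1/(a_i H_inf) >= 1/(a0 H0) today ---
M_PL, E_INF, T_REH = 2.4e18, 1.0e16, 1.0e15    # GeV, illustrative assumptions
T0, H0 = 2.35e-13, 1.44e-42                    # CMB temperature and H today, GeV
H_inf = E_INF**2 / (math.sqrt(3.0) * M_PL)     # Friedmann: H^2 = rho / (3 Mpl^2)
N_min = math.log((T0 / T_REH) * H_inf / H0)    # using a_end/a0 ~ T0/T_reh
print(f"minimum e-folds N ~ {N_min:.0f}")
```

The crude volume count gives a few times 10^4 patches, i.e. roughly the 10^5 quoted above, and the e-fold condition lands near the standard N of around 60 for GUT-scale inflation; with a lower inflation scale the required number of e-folds shrinks, which is exactly the initial-condition dependence just discussed.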