I'm going to keep writing on the board. OK, now we can start the lecture; I got absorbed in the writing. So this is the last lecture. I'm going to summarize what we have learned so far and then discuss a couple of advanced topics, mostly primordial non-Gaussianities. Here is a summary of what we did so far. We discussed the fact that inflation is needed to make sense of the universe we observe, and, as an added bonus, it seems to produce perturbations from quantum fluctuations that agree very well with what we observe. We studied a toy model, if you want: single-field slow-roll inflation, just a canonically normalized scalar field with a flat potential, rolling slowly down. And we discussed that there is only one variable we need to care about as far as scalars are concerned: the gauge-invariant curvature perturbation on comoving hypersurfaces, R (or its sibling zeta, but we focused on R). In our conventions, the basic information about R is its two-point correlation function, which in Fourier space can be written as the power spectrum P_R(k). We computed that power spectrum last time: it went like 1/k^3. That was as advertised, because of the isometries of de Sitter space, in particular the dilation symmetry, the equivalent of time translations in Minkowski. But most importantly, we computed the amplitude and the fact that there is a slight deviation from the 1/k^3 behavior. The amplitude was given by the energy scale, by the Hubble parameter during inflation, and then there was this interesting slow-roll factor of 1/epsilon. So this is the amplitude; this is the theoretical prediction. Then we computed the same thing for tensor modes, that is, for primordial gravitational waves, also produced by quantum fluctuations. The two are closely related, because both behave like a scalar field.
But this one had an epsilon in front of the action, so you get the epsilon out in the final formula, and there are some factors of four floating around. That's what the theorists do. On the observational side: the tensor spectrum has not been measured, but this is how it is parametrized observationally. The scalar spectrum has been measured, and people observe that it does behave approximately like 1/k^3. So a reasonable parametrization is to put an amplitude A_s in front of it (the s stands for scalar) and then allow for a small deviation from a perfect 1/k^3; that small deviation parameter is typically called n_s - 1. Something similar is done for the tensors. It just so happens that people find it convenient to define the tensor spectrum not by introducing another amplitude A_t, but by writing the same amplitude A_s as up here times r, which is called the tensor-to-scalar ratio. It is usually quoted at some particular scale, but that's a little detail we don't need to concern ourselves with. So r is the amplitude of the tensors divided by the amplitude of the scalars. That's the observational side. But we know what both amplitudes are theoretically, so we can just take the ratio, and the theoretical prediction is r = 16 epsilon. The other thing we computed last time: if you take d log of the power spectrum in d log k, what comes out is n_s - 1, and on the theory side you can actually compute it. I remind you that there is some k dependence hidden in the arguments of epsilon and H; it is due to the fact that different modes leave the Hubble radius during inflation at different times. We did that calculation and found n_s - 1 = -2 epsilon - eta. So these are pretty much all the predictions. OK, so the amplitude of the scalars is measured; it's this number.
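Since all the slow-roll predictions are now collected in one place, here is a minimal numerical sketch of them, in the conventions used in this lecture (n_s - 1 = -2 epsilon - eta, r = 16 epsilon, n_t = -2 epsilon). The epsilon and eta values below are purely illustrative, not measured numbers.

```python
def slow_roll_predictions(epsilon, eta):
    """Slow-roll predictions in the conventions of this lecture:
    n_s - 1 = -2*epsilon - eta,  r = 16*epsilon,  n_t = -2*epsilon."""
    ns_minus_1 = -2.0 * epsilon - eta
    r = 16.0 * epsilon
    n_t = -2.0 * epsilon
    return ns_minus_1, r, n_t

# Illustrative slow-roll parameters (not measured values):
ns_minus_1, r, n_t = slow_roll_predictions(epsilon=0.002, eta=0.03)
```

Note that r = -8 n_t comes out automatically, which is the consistency relation discussed below.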
The number that makes our life easy. It's a small number, about 10^-9, and that means that primordial perturbations are small, which means we can actually compute them: if we weren't able to use perturbation theory, none of these calculations would be possible. It also means we can use analytical tools for the CMB and for large-scale structure. In fact, the part of large-scale structure that we understand well is the part under perturbative control. So this number is what makes our life very easy, because it's much smaller than one. Very good. The tilt is also measured; it's about 0.96 or so (I don't have the latest digits on this number, but something like this). So there is a deviation from exact scale invariance at 7 or 8 sigma, depending on what parameters you include; it's really measured. And we like this; inflation likes this, because it says the tilt should be proportional to something we assume to be small, and this is indeed small, so that works out well. Finally, there is the tensor-to-scalar ratio. This one we have not measured, so there are only upper bounds, and the most recent upper bound, from the combination of Planck and BICEP for example, is r < 0.07. James Bond: easy to remember. This is pretty much the observational status. Of course, since we haven't seen tensors, we also have not seen what their tilt would be; but if we had seen it, the prediction, if I'm not mistaken, is n_t = -2 epsilon. OK. So there are a couple of interesting comments to make about what is pretty much the basic phenomenology of inflation confronted with experiment. One comment concerns counting: how many quantities are there? There is Hubble and there is epsilon; the Planck mass we know, because we measure Newton's constant. So there's Hubble, epsilon, and then there is eta.
So there are three numbers around that we don't know. But I told you only two hard numbers that we actually measure: the amplitude and the tilt. The tensor-to-scalar ratio is not measured; it's just an upper bound. So I have three numbers to fix but only two measurements, and clearly I cannot fix them all. The first consequence is that there is still some indeterminacy, in particular in the scale at which inflation took place: the amplitude measures H^2 divided by epsilon, and since I don't know how small epsilon is, I don't know how big H is. So the scale at which inflation took place is uncertain by something like ten orders of magnitude; it's perhaps one of the most uncertain scales in physics. And what's remarkable about inflation is that, unlike other processes (Raphael was discussing nucleosynthesis: if you change the scale of nucleosynthesis by a factor of two, things are screwed up), if you change the scale of inflation by ten orders of magnitude, the predictions are more or less the same. There is some approximate conformal symmetry at the heart of inflation, and that is what makes the predictions so robust. So that was statement number one: we don't know H during inflation. If someone asks you at what energy density inflation took place, the answer is that we don't know. In terms of Hubble, it could be as high as something like 10^12 GeV, or as low as TeV scales. That's comment number one. Comment number two: clearly, if we measure the tensors, we finally measure epsilon, and then we fix the scale of inflation; tensors would fix H. That's one of the many reasons, perhaps not the main one, why people are really trying to measure these tensor modes, which have not been seen yet. Seeing them would tell you the scale at which inflation took place, because it would measure epsilon and therefore fix Hubble.
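To make the huge uncertainty in the inflationary scale concrete, here is a hedged sketch. It assumes one common normalization convention, A_s = H^2 / (8 pi^2 epsilon M_Pl^2) (prefactor conventions vary), with the approximate measured amplitude; since only the combination H^2/epsilon is fixed by data, wildly different choices of epsilon give wildly different H.

```python
import math

M_PL_GEV = 2.4e18   # reduced Planck mass in GeV
A_S = 2.1e-9        # approximate measured scalar amplitude

def hubble_during_inflation(epsilon):
    """H in GeV, assuming A_s = H^2 / (8 * pi^2 * epsilon * M_Pl^2).
    Only H^2/epsilon is measured, so H itself floats with epsilon."""
    return math.sqrt(A_S * 8.0 * math.pi ** 2 * epsilon) * M_PL_GEV

h_high = hubble_during_inflation(4e-3)   # epsilon near the current r bound: ~1e13-1e14 GeV
h_low = hubble_during_inflation(1e-23)   # a tiny epsilon: a few TeV, equally allowed
```

The two choices of epsilon here are illustrative endpoints, spanning roughly ten orders of magnitude in H.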
Another interesting comment: if we keep measuring things, we still have three quantities, epsilon, eta, and Hubble, but we could eventually measure four numbers. So you might imagine there's going to be a relation among those four numbers. The simplest way to write that relation is n_t = -r/8: n_t is -2 epsilon and r is 16 epsilon, so they are related by a factor of 8. In some sense, you can consider this the single-field slow-roll consistency relation. If you break the assumption of single field or of slow roll, that relation is broken, so it would be a smoking gun. Of course, that first requires measuring r, which has been hard so far; we don't know how large it is. And after you measure it, you should also measure the tensor tilt, which is much harder than just measuring r. So that's perhaps for the future; people will try, but it may be prohibitively hard. Perhaps Raphael will say something about that. And finally, a last comment that maybe I already made. From the two numbers we measure, the amplitude and the tilt: you see the tilt deviation is about 3%, so 2 epsilon plus eta is about 3%. But epsilon is smaller than about half a percent, if I put the 16 from r = 16 epsilon on the other side of the bound. So I already know that epsilon is really smaller than eta. And I'm going to be bold and say: unless you read in the newspaper that tensor modes have been discovered, this bound is going to get better and better, because people are trying to measure r, and if they don't see it, the bound improves. That means epsilon gets smaller and smaller, and that means we are really moving toward a regime in which there is a clear hierarchy between epsilon and eta. This is motivated by experiment today; we already know it, yet standard textbooks perhaps never mention it.
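The arithmetic of this hierarchy argument can be written out in two lines, using the approximate numbers quoted above (a rough tilt of about -0.035 and the r < 0.07 bound; both values are approximate):

```python
ns_minus_1 = -0.035   # measured tilt deviation, roughly 3-4%
r_bound = 0.07        # current upper bound on the tensor-to-scalar ratio

epsilon_max = r_bound / 16.0               # r = 16*epsilon -> epsilon below half a percent
eta_min = -ns_minus_1 - 2.0 * epsilon_max  # n_s - 1 = -2*epsilon - eta
hierarchy = eta_min / epsilon_max          # eta must exceed epsilon several times over
```

With these inputs, epsilon_max is about 0.004 while eta_min is about 0.026, so eta is at least roughly six times epsilon, which is the hierarchy being described.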
In the textbooks, epsilon and eta are both taken to be of the same order, because in typical models they are. But I think one of the things that happened in the last few years is that there is a new hierarchy in the game, and that has important physical consequences that perhaps I'll discuss a little bit. OK. Oh yes, maybe the last thing I wanted to mention is the Lyth bound. For that we go back to a formula we had before, for how long inflation lasts, but we're going to reinterpret it in a different way. The number of e-foldings is the integral of H dt, but I can play my games and use the chain rule to write it as an integral in d phi. And this quantity here, it's good to remember, is the square root of 2 epsilon, because phi-dot squared equals minus 2 H-dot times M_Pl squared. So I can rewrite this by saying that the integral in dN of sqrt(2 epsilon) equals the integral of d phi over M_Pl, and I can give that integral a name, Delta phi, the displacement of the inflaton during inflation. So I get this nifty formula: the integral in dN of sqrt(2 epsilon) is Delta phi over M_Pl. The easiest way to think about an integral is to draw the area under a curve, and here that curve is epsilon as a function of N. This is N; this point, for example, would correspond to the CMB scales or the large-scale-structure scales that we measure; and this is the end of inflation, where, by definition, epsilon is 1. So this is, roughly speaking, the naive expectation for what the function looks like, and the integral is just the area under it. Now, I don't know the area, because I don't know the function: we have many models and they give different predictions. But one thing I can tell you, or at least what is very likely, is that epsilon is going to grow, because it has to get to 1 by the end of inflation.
So at least the area will be given by this part here. One estimate I can make (sorry, this should be over M_Pl to get the units right) is that Delta phi over M_Pl is going to be bigger than sqrt(2 epsilon), evaluated at the beginning, around N = 60, times the number of e-foldings, which is approximately 60. And I can rewrite this formula in a more evocative way: by doing some magic with multiplying and dividing, it becomes something like the square root of r divided by 0.01. So that's an interesting statement. It tells me that if I were to measure r anywhere close to 0.01 (and 0.01 is not a hard number: I made some pretty rough estimates of this integral, so maybe it's 0.001; it's right within a factor), that would tell me something about how much Delta phi moved. Now, if Delta phi moves by more than M_Pl, in principle there is no obvious problem naively: the energy density could still be very sub-Planckian and the physics well behaved. But it would put in question the use of a low-energy effective action as we are using it. What we are doing is really the Landau-Ginzburg description of superconductivity, but for inflation: we say there is some order parameter, we call it a scalar field, and it's the one that controls the energy density during inflation; at some point that goes to zero and inflation stops. It's a very effective Lagrangian that we are writing down; we don't derive it from any fundamental physics. When phi moves over a distance larger than M_Pl, which is probably the cutoff of the theory (because when you go to the Planck scale, gravity becomes strongly coupled and you don't trust your theory anymore), then you might question whether you can do this.
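The Lyth-bound estimate just sketched can be written out numerically. This assumes, as in the rough estimate above, that epsilon stays near its initial value over N of about 60 e-foldings; that assumption is exactly the crude part of the argument.

```python
import math

def inflaton_displacement(r, n_efolds=60):
    """Rough lower bound Delta phi / M_Pl >~ N * sqrt(2*epsilon), with r = 16*epsilon."""
    epsilon = r / 16.0
    return n_efolds * math.sqrt(2.0 * epsilon)

# For r near 0.01 the displacement is already of order the Planck scale
# (roughly 2 * sqrt(r / 0.01) in Planck units), while tiny r keeps it sub-Planckian:
dphi_large_r = inflaton_displacement(0.01)
dphi_small_r = inflaton_displacement(1e-6)
```

So an observable r at the 0.01 level pushes Delta phi past M_Pl, which is the statement that motivates the effective-field-theory worry in the text.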
And people in string theory have been hoping that this is the case, because then you really need a fundamental theory to answer some questions about this problem. I will not say too much about this, but it's good to know that seeing r would not just be exciting because we would see evidence of perturbatively quantized gravity, confirm the existence of inflation, and measure the energy density during inflation; it would also require a new theoretical paradigm. Yes? Yes, that's the cutoff, in the sense that at least the gravitational part of the action is going to be strongly coupled there. It could be that it happens even earlier, and then this bound becomes even tighter; this is just a conservative way of phrasing the bound. Now, the question is: what is the problem when Delta phi is of order M_Pl? It's not going to be a problem of the energy density being too large. It's a more subtle problem about defining effective field theories, and it's perhaps not something I want to go into. So I'm going to give it to you as a statement, and I'm happy to discuss with you why this would be a problem. I wouldn't say it's controversial; I believe the statement, but it would take us too far from where we want to go today. Some other questions? Yes. OK, so the question was about the vacuum that we chose in this calculation. I have two comments about what you said. The first is that the vacuum we chose is the same vacuum that you and I would choose in this room if we were to do some quantum mechanics experiment, the vacuum we choose for our cell phones: the Minkowski vacuum. And as long as you are at distances much shorter than the Hubble scale during inflation, that's the vacuum. Even if the background is not exactly de Sitter but a slight deviation from de Sitter, that argument goes through in exactly the same way.
So you would still expect the Bunch-Davies vacuum as the least bad choice for your vacuum. In that sense, it seems to me that it's stable: even if it is quasi-de Sitter and not exactly de Sitter, the argument goes through the same. Let me mention one other thing, perhaps for the experts. This number, the tilt, doesn't really measure a deviation from de Sitter, but rather the fact that the inflaton has a mass. We think we haven't actually seen a deviation from de Sitter yet; we know it exists, but we haven't seen it. Measuring r absolutely would be a deviation from de Sitter. OK, very good. I thought I should at least write two formulas about the connection with observations. In some sense Raphael has discussed this for a long time, and Shirley will start discussing the connection with large-scale structure, the connection with observations. But I just thought I'd write down one formula and add some prose to it. The formula is one that Raphael also wrote down: the C_ell is given by an integral of P_R against a transfer function squared. So this formula tells me that something I observe, the C_ell that Raphael has been discussing, is related to the P_R that I just predicted by doing perturbative quantum gravity in de Sitter space (which sounds like a very frontier kind of calculation) through an integral that I am able to do, with a function whose form I know. Raphael somehow told you that you just solve these five or ten linear differential equations on the computer, and that gives you this function, so you know this integral. So there is this cool thing: the object that you, as a quantum field theorist, would really like to know is related to something you observe in a very specific way. And notice how simple this relation is; it's just linear: if you multiply P_R by two, the C_ell's are twice as large. So there is really a tight connection between all of this, between what you might have thought is more speculative physics and something we measure.
So that's as far as the CMB goes, and I'll not say anything more; Raphael really covered it in nice detail. The other thing is the relation to large-scale structure. What's that relation? Well, you measure how matter is distributed around the universe, and presumably you'd like to write that as some background plus some perturbations; then, to make predictions, you're obliged to make predictions about the perturbations. On large scales, those perturbations are also related to this same R in a very simple way; in fact, everything is related to this adiabatic mode. And I just thought I'd write down a formula, not really because the coefficients in the formula are important, but because writing it down lets you appreciate how tight the connection is. There is some other transfer function: if you measure the density of matter in the universe and you look at the density contrast, then up to a function that you know very well, it's again just R itself. So this is yet another way in which we are actually measuring these quantum fluctuations from inflation. And in fact, perhaps for the experts (probably Shirley will discuss this in more detail), if I were to plot the power spectrum of delta, the power spectrum of matter, I get this nice dome shape as a function of log k. The power spectrum of delta is just this transfer quantity squared times the power spectrum of R. And you can understand the features of the matter power spectrum, the linear behavior on large scales and the decaying behavior on small scales, as consequences of scale invariance. The relation between delta and R carries a factor of k^2 times a transfer function that is constant on large scales, so squaring it gives k^4; the power spectrum of R is 1/k^3; so on large scales the matter power spectrum has to be linear in k.
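That large-scale scaling can be checked schematically, with every normalization set to one (purely illustrative): the k^2 factor relating delta to R squares to k^4, and against the 1/k^3 curvature spectrum the result is linear in k.

```python
def p_curvature(k, amplitude=1.0):
    """Scale-invariant curvature spectrum, P_R ~ A / k^3 (normalization illustrative)."""
    return amplitude / k ** 3

def p_matter_large_scale(k):
    """Large-scale matter power: (k^2 * constant transfer)^2 * P_R -> linear in k."""
    transfer = 1.0                       # transfer function ~ constant on large scales
    return (k ** 2 * transfer) ** 2 * p_curvature(k)

# Doubling k doubles the large-scale matter power, i.e. P_delta is linear in k:
ratio = p_matter_large_scale(2.0) / p_matter_large_scale(1.0)
```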
So yet again we see that the things we observe on large scales in the universe, either in the CMB or in large-scale structure, tell us about this primordial power spectrum. You will hear much more about that in the next couple of days, but these statements are well established by different probes. OK, this was a bit fast, but maybe some questions. Yes, that is a Hubble; yes, it's conformal Hubble, probably, in some notation; well, let's say Hubble. Since I'm putting tildes around things, you can figure out the factors of a needed to make it correct. The idea of all of this is: known function times R. Maybe I could have just written "known function times R". Something else? So, I mentioned before this remarkable fact about our universe: everywhere we look, it seems to start out with exactly the same fractional density in every species. I called that the adiabatic mode, and I told you that it is always a solution. And we are happy, because it comes about nicely in inflation; the statement, in cosmology in general, is that the fractional variations in every species are the same, as long as you write them the appropriate way, on large scales, as they re-enter the Hubble radius. OK, so this is the statement, but how do we test it? The way people have tested it most, in the CMB but also a bit in large-scale structure, is to take the difference between pairs of species: delta rho_i over rho-dot_i minus the same quantity for species j. Actually, let's write it like this; the actual normalization is perhaps different. You take the difference, and if you need any of these to be nonzero at very large scales in the initial conditions of your code, then you say: well, I see a deviation from adiabaticity. And this (i, j) can be many things: it could be dark matter and photons, baryons and photons, neutrinos and photons, dark matter and baryons, and so on and so forth.
So there are many ways in which you can break the adiabaticity condition; there isn't just one. And everything that breaks it I'm going to call isocurvature, which is perhaps not the best name; maybe it should be called entropy perturbation, but OK, it's the same. And people have tested that. The obvious way to test it is to give these modes some power in the initial conditions of your code and compare with observations, and the quantity you want to know is the power spectrum of these S's as a function of k. If you measure anything which is nonzero, then you see a deviation from adiabaticity. People usually write it down with a parameter alpha; formally alpha could be of order one, but it's actually very small: it has to be smaller than of order 10^-4. It is of course much smaller than one; it has not been detected, so there are strong upper bounds. So this statement about adiabaticity is valid at the sub-per-mille level, under certain assumptions about the cross-correlation of the isocurvature with the adiabatic mode that I'm going to leave as a technicality. But this adiabaticity property is really precisely measured: sub-per-mille, I would say. I thought maybe I'd spend one minute telling you where this adiabaticity comes from; I told you last time that we have at least two paradigms to explain it. It is remarkable, perhaps as remarkable as the scale invariance. For scale invariance we have a nice answer: the isometries of de Sitter space. Do we have a nice answer for adiabaticity? Actually, we have two nice answers, and we still don't know which one it is. One is the one we discussed in more detail here: single-field inflation. How does the argument go in single-field inflation?
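As a sketch of the test just described (the normalization conventions differ, as flagged above): form the pairwise difference of delta rho_i / rho-dot_i and check whether it vanishes. The toy numbers here are illustrative, chosen so every species shares the same ratio.

```python
def isocurvature(delta_rho, rho_dot, i, j):
    """S_ij ~ delta_rho_i / rho_dot_i - delta_rho_j / rho_dot_j (up to conventions).
    Zero for every pair of species means purely adiabatic initial conditions."""
    return delta_rho[i] / rho_dot[i] - delta_rho[j] / rho_dot[j]

# Toy numbers with a common ratio for all species -> adiabatic, all S_ij = 0:
delta_rho = {"cdm": 4.0, "photons": 3.0, "neutrinos": 1.5}
rho_dot = {"cdm": 8.0, "photons": 6.0, "neutrinos": 3.0}
s_cdm_photons = isocurvature(delta_rho, rho_dot, "cdm", "photons")  # 0.0
```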
Well, I know that after solving the constraints and fixing my gauge, there is only one scalar degree of freedom, which for example I can call R. Then I can use this Weinberg theorem that tells me there is always a solution in which R is constant. So there is no surprise that 14 billion years later I measure the adiabatic mode: that was the only mode around during inflation, and then it was conserved. This is a pretty clean answer: the universe has always been adiabatic, from the get-go if you want, from during inflation. But that's not the only answer. Another answer is multi-field inflation, and I thought I should mention it because this is a contrast of two paradigms that we might be able to decide between in the next 10 or 20 years. That might be one of the things your generation figures out, so maybe it's nice to know that it's something to be figured out. In multi-field inflation things are more complicated: there is R and there are all kinds of S_ij's, all kinds of isocurvature perturbations. In principle I can produce them, because in the early universe I have a lot of fields, and as they decay into Standard Model particles they can decay in very different ways, so they can generate all kinds of primordial perturbations early on. So then how does the universe become adiabatic from there? The answer is thermalization. Thermalization means that the perturbation in the number density of every species is going to be proportional to dn/dT times delta T: something like the derivative of a Boltzmann factor.
So every species will be occupied according to some equilibrium distribution, a function that depends on whether they are bosons or fermions, but there is some e^(-E/T). And if thermalization does take place, and there are no conserved charges, then every species has the same temperature and no chemical potentials. Then the perturbations in every species, in the number density (or, rewritten, in the energy density), must be related to the perturbation in temperature, because everything is just a function of temperature; and if they all have the same temperature, yet again you get the adiabatic mode. So this is paradigm number two, and that's also something that Raphael explained yesterday. Yes? What quantum corrections? I don't know about higher-order quantum corrections, sorry. This is related to the fact that you have a lot of fields, and every one of them has different perturbations; so there is no reason why the universe starts out adiabatic in the multi-field case, while in single field it just doesn't have a choice. The difference between these two might be something that we can test. So I think it's good to keep in mind what the good questions to ask are; sometimes having the right questions is better than having the answers. OK. Sorry, some other questions? I know this is a little sketchier than the rest of the discussion, but I thought it's nice that you get to hear some of these points discussed at least qualitatively, as opposed to in full detail. Yes: yes, I need to assume that those interactions are strong enough that every species is in thermal equilibrium with the rest. Suppose you have some conserved charge that says that whatever number of dark matter particles you have at the end of inflation is never going to change, because there is no active process that can change it; then this argument falls down, and that model of multi-field inflation without thermalization is ruled out. So this paradigm requires a little bit more.
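A minimal sketch of the thermalization argument: if each equilibrium number density is a function of one common temperature, a shared delta T / T gives every species with the same scaling the same fractional perturbation. The n proportional to T^3 scaling assumed below is the standard relativistic, zero-chemical-potential case.

```python
def frac_number_perturbation(dlnn_dlnT, dT_over_T):
    """delta n / n = (d ln n / d ln T) * (delta T / T) for an equilibrium species."""
    return dlnn_dlnT * dT_over_T

# Relativistic species with no chemical potential have n ~ T^3, so d ln n / d ln T = 3.
# One common temperature perturbation -> identical fractional perturbations,
# which is the adiabatic mode:
dT_over_T = 1e-5
dn_photons = frac_number_perturbation(3.0, dT_over_T)
dn_neutrinos = frac_number_perturbation(3.0, dT_over_T)
```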
Perhaps you can say it teaches you a little bit more, because it really puts constraints on your theory, while single field doesn't: there you can easily have non-thermal dark matter, because the initial perturbations were the same to begin with. So that's a good point; perhaps multi-field connects in a nicer way to particle physics. Of course, it's not up to us to decide which one of the two it is. Very good. Other questions? There are no other questions. Then I thought I'd talk about non-Gaussianity, in particular primordial non-Gaussianity; but everything I'm saying is pretty much primordial, so I won't keep writing the word. Now, it's pretty much impossible to draw a perfect Gaussian on the board, so I guess my drawing is already non-Gaussian, but OK. This is my drawing of a Gaussian probability density function for a certain random variable x. I'm not doing field theory yet, just reminding everyone what a Gaussian is: that's what a Gaussian is. And it has, in some sense, the good property to hang on to when we generalize to field theory: the n-point functions of x vanish if n is odd and are fixed by the two-point function if n is even, with some numerical factor. So if it is Gaussian, the only thing you need to know is the width, which is given by sigma, good old sigma, and that fixes the variance, the average of x squared, which is sigma squared (sigma is the root mean square). And that fixes everything you can ask about this random variable, because odd correlators are zero and even ones are just given by powers of sigma. So Gaussians are very simple objects.
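The "everything is fixed by sigma" statement can be made concrete: the central moments of a Gaussian vanish for odd n and equal (n-1)!! times sigma^n for even n. A small sketch:

```python
import math

def gaussian_moment(n, sigma):
    """Central moments of a Gaussian random variable:
    0 for odd n, (n-1)!! * sigma**n for even n."""
    if n % 2 == 1:
        return 0.0
    double_factorial = math.prod(range(n - 1, 0, -2))  # (n-1)(n-3)...1
    return double_factorial * sigma ** n

# <x^2> = sigma^2, <x^3> = 0, <x^4> = 3 sigma^4, <x^6> = 15 sigma^6, ...
```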
In our universe, the simplest thing you could do is this: take the temperature map that Raphael was showing you, take out all the galaxies and foregrounds, and for every point ask what delta T over T is and put it in a histogram: how many pixels give each value? (This is a really skewed drawing, I know.) The point is that this histogram, within the error bars (eventually you run out of rare events), really looks like a Gaussian, as far as you can tell with the best data analysis you have. Literally, if you just take delta T, forget where it comes from on the sky, and shove it into one of these bins, it looks Gaussian. That, for me, is the most practical way to think about Gaussianity. Now we will go one step higher in the level of abstraction and think about Gaussianity from a more theoretical point of view. Actually, the histogram is not the most precise test of Gaussianity you can do; you can do much more sophisticated things. But just to tell you: Gaussianity, or if you want non-Gaussianity, is tested at the sub-per-mille level. So this is yet another property that seems to be very well represented in the data. OK, so in the whole discussion from now on, when I say non-Gaussianity, I mean a small effect, for observational reasons. OK. In our game, as you might have guessed, the role of x is played by the perturbations. Perturbations are small; we just saw they are of order 10^-5. So I don't want to put too many of them in a correlator, otherwise the correlator becomes tiny: I'm going to take the smallest n that can detect non-Gaussianity. And n = 1 is silly, because I always normalize the average of the perturbation to be zero; n = 2 is just the power spectrum, which a Gaussian already has; so the smallest n is n = 3. So I'm going to consider, in particular, three-point functions.
Of course, I could consider the four-point function, the five-point function, and so on, and any deviation from the Gaussian behavior tells me that there is non-Gaussianity; the simplest is to consider the three-point function. OK, we already know a lot about this function. We know that there is some conventional factor; we know that there is momentum conservation. Maybe one of you wants to venture a guess: here I have three momenta, each a three-dimensional vector, so nine variables. Given all the symmetries we have discussed, how many variables should this correlator actually depend on? At least you should say a number smaller than nine, because there are at most nine. Five? Five is a good start; let's see what we actually get. We start with nine, and we have a delta function. I always write the little 3 upstairs, just to remind you that it is a three-dimensional delta function, so really three conditions: that takes out three things, and nine is down to six. Something else: rotational invariance. I cannot write a bare vector k; I can only write k1 dot k1, k1 dot k2, or k2 dot k2 (k3 I don't use anymore, because it's just minus the sum of the other two). Rotational invariance clearly has three generators, so it takes out another three things. So this gives me three, and I can choose those three in a couple of different ways. I can take them to be the three sides: I know that these three vectors need to sum to zero, so they form a triangle, and the lengths of the sides fix the triangle. I don't care about the orientation, because of isotropy, because of rotational invariance. That's one option; another option is to take two angles and one side. Either way, there are three quantities that fix a triangle. And then you could say: what about scale invariance?
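The counting above can be illustrated: momentum conservation fixes k3 = -(k1 + k2), so the three vectors close into a triangle, and after using rotational invariance only the three side lengths survive. A quick sketch:

```python
import math

def triangle_sides(k1, k2):
    """Given 3-vectors k1, k2, momentum conservation fixes k3 = -(k1 + k2);
    after rotational invariance, only the three side lengths remain."""
    k3 = tuple(-(a + b) for a, b in zip(k1, k2))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return norm(k1), norm(k2), norm(k3)

# Two orthogonal unit vectors give a right triangle with sides 1, 1, sqrt(2):
sides = triangle_sides((1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```

So of the original nine components, only these three numbers enter the bispectrum, and scale invariance (discussed next) removes one more, leaving two ratios.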
After all, I told you that scale invariance was a consequence of the isometries of de Sitter space. So it doesn't matter which correlator I'm computing — I could compute the 37-point function, and de Sitter still has the same isometries — so I expect scale invariance to constrain every n-point function. In particular, what scale invariance tells me is that if I multiply each of these vectors by a lambda, the whole thing should scale in a specific way, and that specific way is given by the dimension of this object, which is minus six. So if I have scale invariance as well, I can subtract another variable and go down to two, because of course I'm imposing more symmetry. So eventually this object, which started as such a big beast, ended up being something I can draw on the board — just two dimensions — which is very convenient. Yes? Exactly, there are going to be deviations from this. So this is only approximately scale invariant, and deviations from scale invariance are important. When people actually do this for real, they do account for small deviations. You could even have some interesting inflationary model in which, for some reason, the bispectrum is even more non-scale-invariant than the power spectrum — that's also a possibility. So when people do this seriously, they of course account for all these possibilities. But here I'm going to stick to toy models, the simplest possible setting in which you can understand something, and actually I would say the most motivated models do, to some extent, satisfy this property. But that's right: if you have some deviations here, you expect a different scaling — very good. Deviations here, just to say it for everyone, are again going to be proportional to slow-roll parameters: percent-level deviations from this number six. Okay. In terms of notation, I've introduced some notation in my three-point function besides the obvious delta function.
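The dimension-counting statement — rescale every momentum by lambda and the correlator scales as lambda to the minus six — can be checked directly on a bispectrum built from products of scale-invariant power spectra (this is the local shape the lecture introduces shortly; any template with the same overall dimension would do):

```python
import numpy as np

def P(k, A=1.0):
    # Scale-invariant power spectrum, P(k) ~ 1/k^3 (amplitude A arbitrary here).
    return A / k**3

def B_local(k1, k2, k3, fNL=1.0):
    # Local-shape bispectrum: 2 fNL [P(k1)P(k2) + P(k2)P(k3) + P(k3)P(k1)].
    return 2 * fNL * (P(k1) * P(k2) + P(k2) * P(k3) + P(k3) * P(k1))

# Scale invariance: rescaling every momentum by lambda multiplies B by
# lambda^-6, matching the dimension counting in the lecture.
lam = 3.0
k = (1.0, 1.2, 0.8)
ratio = B_local(lam * k[0], lam * k[1], lam * k[2]) / B_local(*k)
print(ratio, lam**-6)  # the two numbers agree
```

Each power spectrum contributes a factor lambda to the minus three, and the bispectrum is a sum of products of two of them, hence minus six in total.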
I'm going to call the overall size of this whole thing f_NL — that is just how we parametrize the size of this three-point function. When we compute it in Fourier space, it has another name, the bispectrum, by analogy with the power spectrum. It has an overall size, and then a part that really depends on the shape of the triangle — the overall size doesn't — so sometimes I'm going to refer to that remaining function as the shape. In fact, perhaps more sharply: since I know the overall scaling is fixed, I can always write B as some shape function depending only on ratios of momenta — the only things invariant under rescaling — divided by some K to the sixth. Once I fix the scaling, the only nontrivial information is the dependence on the ratios, and I'll use that in a second to make some nice drawings on the board. Okay, but as I said before, it's always good to have a toy model. Actually, this toy model ended up becoming much more relevant than you could have expected from its simplicity — it's not just a toy model, it's really a realistic one — but it's a very good toy model precisely because it's as simple as it could be. What is the simplest way to write something non-Gaussian? Well, given that the perturbation is a small quantity, the simplest way is to take a Gaussian and add its square: Gaussian plus Gaussian squared. This is clearly not Gaussian. Okay, there is the annoying property that while the average of a Gaussian centered at zero is zero, the average of a Gaussian squared is not zero, so I'm going to subtract that out so it doesn't bother me. But it's the same thing. This is the simplest possible model, and it has a name; the name derives from the fact that we are multiplying the field by itself, without any derivatives, at the same point.
So this is called the local model, or local non-Gaussianity, and despite its simplicity it's actually a very relevant model for what is produced by the dynamics of inflation — we will get to that. So here I'm doing some phenomenology of what you might expect, and in a second I'll compute all of these things from inflation. Yes? Very good, yeah. So it's important to fix the normalization of B, and it is usually fixed in such a way that the shape evaluated at (1, 1) equals six. Yeah, that's a good point: you need to fix the normalization, and in fact this particular way of normalizing is not very meaningful and results in some perhaps confusing features of f_NL, but we are not going to discuss that — if you have questions, you can ask me afterwards. I do need to fix it in some way. Very good. Okay, so far I was always discussing the bispectrum, because as I told you, the homogeneity of the background pushes me to consider Fourier-space quantities. But here I gave you a model in real space, so you need to transform it. It's just a matter of doing some Fourier transforms — actually, it's a great exercise. I'm going to write down the result of this Fourier transform. I'll call it B^local(K1, K2, K3). As promised, it only depends on the norms of the vectors, not on the full vectors anymore. And, up to some numerical factors, it is of course proportional to f_NL: if I take f_NL to zero, this is a Gaussian field, so the final result at leading order should be proportional to f_NL. And then it is the power spectrum of K1 times the power spectrum of K2, plus two permutations — the same with 2 and 3, and the same with 3 and 1. Maybe I should have said that the field commutes with itself, so I can freely reorder these factors. So the final result has one other symmetry: it's permutation invariant. It should be the same if you switch K1, K2, K3.
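The real-space local model can be simulated in a couple of lines. This is a toy sketch — the f_NL value here is hugely exaggerated compared to any realistic model, purely so the skewness is visible in a modest sample — showing that the Gaussian-plus-Gaussian-squared construction (with the variance subtracted, so the mean stays zero) indeed produces a skewed, non-Gaussian distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

# Gaussian field samples with the observed amplitude ~1e-5.
Rg = rng.normal(0.0, 1e-5, size=2_000_000)

fNL = 1e4  # deliberately exaggerated so the skewness is visible in a toy sample

# Local model: R = Rg + fNL (Rg^2 - <Rg^2>); subtracting the variance keeps <R> = 0.
R = Rg + fNL * (Rg**2 - np.mean(Rg**2))

skew = np.mean(R**3) / np.mean(R**2) ** 1.5
print(f"mean ~ {R.mean():.2e}, skewness ~ {skew:.3f}")  # nonzero skewness = non-Gaussian
```

At leading order the skewness comes out to roughly 6 f_NL times the Gaussian amplitude, which is exactly the "f_NL times R" scaling discussed later in the lecture.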
That's actually — okay, that's not so deep, but it's cool, because I think you can prove that it's also parity invariant using that. Try to put a minus sign on all the momenta, do a rotation, permute them, and convince yourself that it should also be invariant under parity. Okay, every time I would have to write these permutations, I'm not going to write them out, but you know they are there. Okay, so when is this quantity the largest? The problem now is that we have a function of many variables, and it's going to have different sizes in different places of this (K1, K2, K3) space, so things can get tricky. But if you stare at this for a while, you can convince yourself that it's going to be largest when one of the three momenta is much smaller than the other two. The reason is that the power spectrum goes like one over K cubed, so when one momentum is very small, that factor becomes very large. So this is where it peaks. We call these configurations squeezed, and now I'll tell you why. Okay, let's try to visualize it — I think it's easier. We'll use something that Paolo proposed a long time ago, which is plotting this as a function of two variables. In principle, we're going to plot (K1 K2 K3) squared times B — I multiply by this sixth power of momentum so that the combination is automatically scale invariant. I can rewrite that weight as K1 to the sixth times (K2 over K1) squared times (K3 over K1) squared, times the shape, so I get the shape S(1, K2/K1, K3/K1), and I just plot it as a function of K2 over K1 and K3 over K1. And this is going to be our plot. Then you can convince yourself that, since the momenta have to close into a triangle, some values are not allowed by the triangle inequality: we have to stay inside this region, where this is 1, this is 1/2, this is 0, and this is 1, this is 1. So, assuming scale invariance, all the information about primordial non-Gaussianity can be plotted here as a contour plot. There are three interesting limits.
In this limit here, you can see that K3 is going to zero — maybe I should have said that it's K3 going to zero; I can always do a permutation. So that limit is when one momentum is much smaller than the others. In this limit, that's where local peaks: local would be very large here, and not very large elsewhere. I could consider another shape, which peaks here instead. Here you see K1 equals K2, because both ratios are one, and K3 equals K1, so this is an equilateral triangle. And I'm going to call that shape equilateral: it's going to be large there and not so large elsewhere. And so on and so forth. So I can do a lot of phenomenology like this, and you can play around trying to think of other shapes. Now, what does inflation actually predict? It depends on which inflation, but we're going to study the simplest possible case, single-field slow-roll inflation, and we will see what the result is — I'm not going to tell you in advance. So let's do single-field slow-roll inflation. Where can non-Gaussianity come from? Non-Gaussianity is another word for interaction, no? When you have a harmonic oscillator, the wavefunction is a Gaussian; when you make it anharmonic, it's non-Gaussian. So we look at interactions in our action. Remember, our action was something like this. Interactions can come from V — it can be a nonlinear function, from the scalar sector itself — or they can come from gravity: clearly here there is g times (d phi) squared, so that's an interaction, cubic; anything cubic is an interaction. So we could also have non-Gaussianity from gravity. Actually, we don't know which of the two is the larger effect in single field; it depends on the parameters. How do we compute it? First of all, what we did last time was compute just the quadratic action, which is fine for computing the two-point function, but clearly not for the three-point function.
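The two limits just described — squeezed versus equilateral — can be made concrete by evaluating the local template on the scale-invariant plot variables (k2/k1, k3/k1). A minimal sketch, with the power spectrum normalization set to one:

```python
def P(k):
    return 1.0 / k**3  # scale-invariant power spectrum (arbitrary normalization)

def shape(x2, x3):
    # Dimensionless shape S(1, x2, x3) = (k1 k2 k3)^2 * B(k1, k2, k3) at k1 = 1,
    # for the local template B = 2 [P(k1)P(k2) + 2 perms], with fNL set to 1.
    k1, k2, k3 = 1.0, x2, x3
    B = 2 * (P(k1) * P(k2) + P(k2) * P(k3) + P(k3) * P(k1))
    return (k1 * k2 * k3) ** 2 * B

equilateral = shape(1.0, 1.0)   # k1 = k2 = k3: the (1, 1) corner of the plot
squeezed = shape(1.0, 0.01)     # k3 << k1 = k2: the squeezed corner
print(equilateral, squeezed)    # the local shape is far larger when squeezed
```

Note that shape(1, 1) comes out to exactly 6, matching the normalization convention mentioned earlier, while the squeezed corner blows up as the soft momentum goes to zero.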
We need to keep expanding this to cubic order, quartic order, and so on. The cubic order is sufficient to compute the tree-level bispectrum, of course, because you need a three-point interaction, so that is enough of an expansion: we need to take this action and expand it to cubic order. That means we have to choose a gauge, as we said last time, now at second order, and we need to solve the constraints — you remember we had these constraints — now to second order. So in principle they will look something like this, and as you go to higher and higher order, you have to keep solving them. After you solve them, you plug them back in and keep everything which is cubic, and that you call S3. Very good. Now that gives you the cubic action, and now you need to compute the correlator. I've always used this expectation-value symbol schematically — what do I actually mean? Since many of us learned quantum field theory from particle physicists, it's a little bit ambiguous what I mean by this expectation value: on what state? When we do it in cosmology, that state is the vacuum of the interacting theory, not the vacuum of the free theory. That is what I mean by the symbol. What is the vacuum of the interacting theory? Well, if I compute it in the interaction picture, it's the time-ordered exponential of i times the integral of the interaction Hamiltonian, acting on the vacuum of the free theory. Just to make this simpler, I'm going to call this U: U is the evolution operator due to the interactions. In the interaction picture, the operator in the middle evolves according to the free Hamiltonian, which is good because we already solved for R under the free part of the Hamiltonian — that's what we did last time, with the quadratic action. So R is okay; now we pretty much just need to take care of evolving the state.
So this thing is sometimes called a correlator rather than an amplitude. It's not an amplitude; amplitude is the name usually reserved for in-out objects, from the in vacuum of the free theory to the out vacuum of the free theory. This is different. Okay, so what I mean by the symbol R cubed is really ⟨0| U-inverse R-cubed U |0⟩. This is the full thing to all orders, but we don't want to include arbitrarily high orders in the interactions — that gets complicated — so we work in perturbation theory. The leading term is obtained by expanding U to first order in the interaction, and this was first really spelled out by Maldacena in a seminal paper in 2003 or so: it's minus i times the integral over the time t-prime of the expectation value — now in the free vacuum, because we are working perturbatively — of the commutator of R cubed with the interaction part of the Hamiltonian. The commutator appears because one term comes as H R and one as R H: they come in both orders. So this is the formula. In principle, if you have a lot of time, you can now compute the three-point function: I gave you the action, I told you all the steps, and this is the calculation you have to do. And Maldacena, for the first time, did this calculation right, and here is what he found in single-field slow roll. He did the calculation only to leading order in the slow-roll parameters — only epsilon and eta; in principle there are higher orders. He found a term which is local, which really looks literally like this simple-minded model here, with exactly that shape — very large in squeezed configurations, when one of the momenta is much smaller than the others — and he found the coefficient. And then he found something else, and that something else is large in the other type of configuration, the equilateral one. That's the final result. So let me comment on the interesting properties of this result. Property number one: it is small. The coefficients are all slow-roll suppressed. So let's see — what do I mean by small?
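The leading-order in-in formula described above — the expansion of U to first order in the interaction — can be written out explicitly (this is the standard form; H_int is the interaction Hamiltonian and t' the integration time, as in the lecture):

```latex
\langle \mathcal{R}_{\mathbf{k}_1}\mathcal{R}_{\mathbf{k}_2}\mathcal{R}_{\mathbf{k}_3}\rangle
\;=\; -\,i \int_{-\infty}^{t} \mathrm{d}t'\,
\big\langle 0\big|\big[\mathcal{R}_{\mathbf{k}_1}(t)\,\mathcal{R}_{\mathbf{k}_2}(t)\,\mathcal{R}_{\mathbf{k}_3}(t)\,,\; H_{\mathrm{int}}(t')\big]\big|0\big\rangle .
```

The commutator is exactly the "once H R and once R H" structure mentioned above, and the state on both sides is the free vacuum, since we are working perturbatively.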
Small with respect to what? It's always good to attach some more thought to the word small. Well, clearly from that formula you can see that f_NL — the thing that multiplies the shape — is going to be of order epsilon or eta, and therefore much smaller than one. Is that small? Perhaps in more physical terms, we would like to know: if I were to draw a Gaussian, how large would the deviation from the Gaussian shape be? What does this f_NL parameter really tell me? I gave you the formula before — ah, here it is. You can see f_NL multiplies the R squared. So perhaps a better way of writing that is to say that R is equal to a Gaussian times (one plus f_NL times R-Gaussian). I just collected an R in front to make it clear that the non-Gaussian part is really proportional not to f_NL itself, but to f_NL times R. And R we know: it's approximately the amplitude given by the power spectrum, so it's 10 to the minus 5. So this thing is going to be of order epsilon or eta times 10 to the minus 5 — much smaller, actually, than 10 to the minus 5. So the prediction of single-field slow roll is that the non-Gaussianity is several orders of magnitude smaller than the current bound. Really, really tiny. In some sense, single-field slow roll is very successful at predicting the Gaussianity that we observe. Where does this Gaussianity come from? Why is it so Gaussian? Well, you can just believe the calculation, but since I haven't shown you the calculation, that's not so convincing. Very heuristically, there is some sense in which gravitational interactions are weak, and the interactions due to the inflaton have to do with the derivatives of the potential. For example, consider the leading interaction of this type. Yeah.
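The arithmetic behind "much smaller than 10 to the minus 5" is worth making explicit. A trivial sketch, where epsilon = 0.01 is just an illustrative slow-roll value (not a measured number):

```python
# Rough size of the non-Gaussian correction in single-field slow roll:
# the fractional deviation from Gaussianity is ~ fNL * R, with fNL ~ epsilon, eta.
epsilon = 0.01          # illustrative slow-roll parameter, chosen for this sketch
R_amplitude = 1e-5      # observed amplitude of curvature perturbations

fNL = epsilon           # fNL is slow-roll suppressed, O(epsilon, eta)
fractional_nG = fNL * R_amplitude
print(f"fractional non-Gaussianity ~ {fractional_nG:.0e}")
```

So the fractional deviation from a Gaussian is of order 10 to the minus 7 here — several orders of magnitude below the sub-per-mill level to which Gaussianity is currently tested.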
The leading interaction would be V triple prime — that's where I get three fields together. But I told you that this potential has to be very flat, which means all of its derivatives cannot be too large. So if I draw this — this is a typical drawing; maybe I'm coming down from here, maybe from there; this is phi and this is V of phi — this is what the potential looks like. Slow roll is always constraining these derivatives to be small in a very specific sense. Maybe you remember that epsilon_V, built from V prime over V, has to be small; eta_V, built from V double prime over V, has to be small; and I told you the next one has to be small too. All the derivatives have to be small in this slow-roll sense. And in some sense that is the origin of the Gaussianity: in single field there is only one direction, and that direction has to be very flat, because I want the background to be quasi-de Sitter. So it's hard to get large interactions out of it, because interactions would require this potential to be bumpy. That is the way I think about this result. So: pretty much unobservably small. There is a second point — oh, maybe I should organize my comments. Comment number one was that it's small, hard to measure. Comment number two, on this quantity: remember that epsilon is much smaller than eta, so actually this B is dominated by the term which has an eta; the others are smaller. I told you epsilon is at most a third of eta, and maybe in the future the bound will become much, much stronger. So the real term you have to focus on is the local one. Now, interestingly enough, if you look at the coefficient of the local part — and I told you epsilon can be neglected — this coefficient was exactly n_s minus 1. The fact that the coefficient of this local part is n_s minus 1 is not a coincidence: it's a consequence of a soft theorem, sometimes called a consistency relation — same thing, not to be confused with other consistency relations.
And it's a consequence of the equivalence principle — it follows from it. I don't have time to prove that; originally I wanted to, but I think it's too advanced. But it should be known that this coefficient is fixed: in single-field slow roll you cannot mess with it, this is what it is. And since this is the only term that survives, this is the one we have to deal with. Even more, for the experts: this quantity is not locally observable. In particular, for any local observer — like for the CMB, or for the large-scale structure — the locally observable quantity is zero times B^local. This is a little bit more advanced, so I'm not going to explain it too much, but the fact that it's a consequence of the equivalence principle already suggests that it is probably some kind of gauge artifact locally. Okay, so the basic upshot was that this thing is small. So can we get something that we can actually compute that is not small, and that people can measure? There are other inflationary models. This result here I proved for single-field slow-roll inflation, canonically normalized. So clearly, if I drop any of these assumptions, I have a hope of producing something sizeable: either it's not single field, or it's not slow roll, or it's not canonically normalized. So I thought I would discuss some of the options — let's say A and B. Actually there are quite a few options, and this has been a huge field in the past 10 to 15 years. One option, of course, is multiple fields. Intuitively, if I have more than one field, all the directions in which I'm not rolling are not constrained by the slow-roll conditions, so they can be arbitrarily rough, and they can plausibly give me large interactions. Another option is non slow roll: for example, the potential could have some bumps, or some huge sudden drops.
Okay, so you can go crazy and draw all kinds of things like this, and surely if you do something as violent as that, you're going to get some non-Gaussianity. Another option, which is quite interesting, is non-canonical. Non-canonical means that the Lagrangian, instead of being just (d phi) squared, is some arbitrary function of it. Perhaps my notation is confusing: this arbitrary function is usually called P, and since I was calling (d phi) squared X, these models are called P of X models. P here is not the power spectrum, it's just some arbitrary function. I'm going to comment on those in what follows. And then there is another long list; another thing that has been mentioned is that perhaps we can change the vacuum. We started with the Bunch-Davies vacuum; if you start instead from some excited state with interactions built in, you are definitely going to get a signal. I'm not a big fan of this, because the whole reason we do inflation is that we are trying to explain the primordial perturbations, which are the initial conditions of our equations. If we just write down some other initial condition earlier on, we might as well not do inflation at all: write your initial condition at the beginning, at 10 to the 9 Kelvin, and forget about inflation. So I don't like this option, but some people do, and that's fine — everyone investigates all possibilities. But I'm not going to discuss that; I'm going to discuss the couple that I like. So: multi-field. As I argued, that's a generic possibility. I haven't given you the right tools to compute non-Gaussianity in multi-field models, but what I want to tell you is that it is hard. The main reason it's hard is that, as I argued before, you don't actually reach the adiabatic perturbations during inflation — you reach them after inflation, during the phase of thermalization. So if you really want to compute the non-Gaussianity in multi-field models, you need to compute it after thermalization, okay?
The technical phrase is: when the adiabatic attractor is reached. You need to compute them once the adiabatic mode is reached, clearly, because before that things are still evolving and they are not yet what we observe. So the calculation is much harder — it's not just an inflationary calculation, you need to go beyond that. That's hard, so people make some simplifying assumptions, and that's okay; I think this is interesting, but it's hard, and it's very hard to make generic statements. One statement that I'm going to try to make generically — it's clearly not exactly true, because it's so generic, but perhaps it's a tendency — is that all of these models produce order-one, local-type non-Gaussianity. You can find exceptions, but this is something you could try to argue holds in generality. Yes? Yes, small non-Gaussianity — very good. The question was about starting with a specific model, say Starobinsky; Maldacena's result doesn't care: as long as it's single-field slow roll, the non-Gaussianity is small. Another question? No — then I'm going to just finish this thought. Perhaps in the future we improve the bounds, and if we are able to exclude order-one local non-Gaussianity, then we learn that these multi-field models are ruled out. Perhaps not — but that's one thing I thought I would mention. And then the final option, which I find nice and is probably best discussed in the effective-field-theory context, is the possibility of P of X models. The story with P of X is really the action of a superfluid — that's just a name. The main idea is that we are filling the universe with some scalar field and then looking at perturbations on top. That clearly breaks Lorentz invariance, because now there is a preferred reference frame.
And when you break Lorentz invariance — like in this table, or in any condensed-matter system — there is no reason why everything should move at the speed of light. Things move at the speed of sound of the medium that breaks Lorentz invariance. And this is what these P of X models say — or, from the EFT point of view: there is no reason why, once Lorentz symmetry is broken, the speed of propagation of the perturbations should remain the speed of light. In fact, more generically, you should expect an action of this type. This is very similar, if you remember, to the quadratic part of the action I wrote before, except that now the speed is not c, which I was setting to one, but c_s, and c_s can be different from one. In particular, we are going to consider the case in which it is smaller than one. It's actually very easy to compute c_s in these P of X models: it's just given in terms of the first and second derivatives of P with respect to X. This is what you would expect every time you break Lorentz invariance with some condensate: things are not going to move at the speed of light. This is interesting, because if you then compute the three-point function — well, it's a long computation — you find a cubic action with approximately this structure: the integral of d^4 x, a cubed, epsilon, R-dot squared times R, over c_s squared, and then there are many other terms. So c_s appears downstairs, and if you take c_s much smaller than one, this indeed can give you a large non-Gaussianity: it can make this three-point interaction large. What you should really do is take proper ratios and do the calculation carefully, but this is the correct intuition. And in fact, what these models give you is non-Gaussianity that cannot be of the local type, because of the theorem above: the equivalence principle is still valid, and these are still single-field models.
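For reference, the statement "c_s is just given by derivatives of P" is, I believe, the standard P(X) result (up to sign conventions for X, which here is built from (d phi) squared):

```latex
c_s^2 \;=\; \frac{P_{,X}}{P_{,X} + 2\,X\,P_{,XX}} ,
\qquad
P_{,X} \equiv \frac{\partial P}{\partial X},\quad
P_{,XX} \equiv \frac{\partial^2 P}{\partial X^2}.
```

For a canonical kinetic term, P is linear in X, so P_XX vanishes and c_s equals one; any genuine nonlinearity in P drives c_s away from the speed of light.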
So local non-Gaussianity is still small, but they can give you equilateral non-Gaussianity, and it has to be of this form: the (1 minus c_s squared) upstairs comes from the fact that when c_s is one you recover the full de Sitter isometries, so you cannot have equilateral non-Gaussianity, and the 1 over c_s squared is the enhancement we just saw. So these are some of the models that might give you an interesting signal. Perhaps to finish: the current bounds, as I told you, do not show any non-Gaussianity, but the specific value of the bound depends on the shape. So there is a bound on f_NL local — approximately this; I actually don't remember the number right now. Do you guys know what it is for equilateral? It's around 40 — there are bounds like this. So that's the current status. There are models; the simplest model produces tiny non-Gaussianity, so if inflation is single-field slow roll, we are not going to measure anything in the coming years. But if there is any deviation from that, this is a great way to probe it. Okay, so this ends my series of lectures. I have given the school secretariat all the handwritten notes of my lectures, in case you missed some equation, so they should be on the website. I have started to put them in LaTeX, so the first lecture and a half, two lectures, with a lot more detail, are also going to be up there. And I'm hoping to write up a review of this whole story, maybe in a month. Thank you very much for the many interesting questions and for the comments you gave me after the lectures. I really enjoyed it — enjoy the rest of the school.