Hello everybody and welcome to the Latin American webinar of Physics Today. We have a very interesting talk; this is webinar number 108 of this webinar cycle, which we started more or less in 2015, so we have a lot of webinars under our belt, let's say. This time it is going to be a very interesting webinar because it is going to be about primordial black holes. The speaker is Julián Rey, who is a PhD student at the Universidad Autónoma de Madrid in Spain. He obtained his bachelor's degree in Venezuela, at the Universidad Simón Bolívar, and now he is in Madrid doing his PhD. The title of his talk is "Primordial black holes in an early matter dominated era and stochastic inflation". Don't forget that you can watch the webinars on our YouTube channel and also ask questions to Julián using the chat in the live transmission; otherwise you can comment on the video once it is finished. So Julián, welcome, and you can start whenever you want. Can you see the screen? Yes? So, thank you for the introduction and thank you very much for having me. What I am going to talk about today is the work that I have been doing for my PhD thesis, which is about primordial black holes. The talk is based on the three papers that you can see at the bottom of the slide, and at the end of the presentation there is a full list of references. Let me start by saying that primordial black holes are very relevant dark matter candidates today. What makes them interesting, in my opinion, is the fact that they do not require any physics beyond inflation, and there is still a very large window of masses that remains viable, meaning that it remains unconstrained by experiments, of about five orders of magnitude. The lower end of this mass range comes from a Hawking radiation bound, a bound on the radiation that these black holes would produce as they evaporate.
And the higher end of this mass range comes from microlensing experiments. What also makes these black holes very interesting is the fact that they have observable signatures, namely the gravitational waves and gravitational lensing they produce, which could be probed within the next decade. What we want to do in this talk is determine the effects that two things have on the abundance of black holes. The first one is the equation of state of the universe at the time at which the black holes form; we are going to assume that they form either during a radiation era or during an early matter dominated era. The second thing we want to determine is the effect of the formalism of stochastic inflation on their abundance, and I will tell you more later about what stochastic inflation actually is. We are going to explore these aspects in the context of three different inflationary models, two of which are numerical and one of which is analytical. Let me tell you a little bit about dark matter first. Any dark matter candidate should be dark, meaning it does not interact electromagnetically. The second thing it should satisfy is that it should be weakly interacting, and the third is that it should be cold, and black holes satisfy all these criteria. There are many ideas for what dark matter could be: for example WIMPs, axions, sterile neutrinos, and MACHOs, and by MACHOs here I mean neutron stars and brown dwarfs. It is important to distinguish these, because people often lump primordial black holes together with these MACHOs. But as I will show you in a moment, the primordial black holes that we are going to be considering are very, very small, at least in comparison to neutron stars and brown dwarfs. They are compact objects, but it is not exactly right to lump them together.
The oldest surviving dark matter candidates are actually these primordial black holes, because they were first proposed in the 1970s. So let me tell you what they are. Primordial black holes are black holes that form in the early universe by some mechanism different from the usual stellar collapse. There are many ways to get primordial black holes, for instance from the collapse of vacuum bubbles or from topological defects, but the mechanism that we are interested in today is how to get them from a given model of inflation. This is the plot with the constraints on the black holes. On the vertical axis I am plotting the fraction of dark matter that could be in the form of primordial black holes, so this is a number between zero and one, and on the horizontal axis I am plotting the mass of the primordial black holes. You see that on the left of the plot there is a window that is still open, meaning that it is not constrained by experiments. Below 10^-16 solar masses or so you can see the constraints labeled EGγ; these are the constraints coming from extragalactic gamma rays. This is a constraint because we do not observe the radiation that very, very small black holes would produce if they evaporated, so we know that the black holes cannot be that small. Above 10^-11 solar masses, more or less, we have the microlensing constraints, and in between, this window is completely unconstrained. But this is a region that could be probed within the next 10 years or so, either by femtolensing experiments or by gravitational wave interferometers like LISA. For comparison, on the right side you can see a little tag that says LIGO, and this is where the masses of the black holes that LIGO has measured would be.
So you can see that the black holes that we are considering are at least 10 orders of magnitude smaller than what LIGO has observed; they are very, very small. So how do we get primordial black holes from inflation? If we want the black holes to form, what we need is to have very large density fluctuations at a scale that is comparable to the Hubble horizon. We define this number δ, which is called the comoving density contrast, and this is the ratio between the density perturbation and the background energy density. We are going to assume that these fluctuations are produced during inflation, as is usually done. We are also going to assume that transitions between different eras are instantaneous, and this plot illustrates the situation. On the vertical axis, the function that I am plotting is the evolution of the Hubble parameter as a function of time, and the horizontal axis is time, or temperature, however you want to see it. We have plotted the different eras of the history of the universe. On the far left we have inflation, and right after inflation ends we have this EMD, which is an early matter dominated era. This is something that we are going to conjecture here; it is not something that has been observed. After this early matter dominated era we have a radiation era, and after this radiation era we have the standard matter dominated era that occurs after big bang nucleosynthesis. Then at the very right of the plot we have the Λ dominated era, which is the cosmological constant era that we live in today. Every energy density fluctuation has a certain scale associated to it, and because we usually work with the Fourier transform, it is better to think about the wave number.
The scale leaves the horizon as inflation goes on, and by "leaves the horizon" what I mean is that the scale associated to each fluctuation is at some point above the curve. If you look at the black line that I have drawn here, the one that corresponds to a given 1/k, you can see that we say the scale is outside the horizon on the left of the plot, where it is above the colored region, and eventually the scale re-enters the horizon. In this example that happens during radiation domination, and it is only after the scale re-enters the horizon that it is going to induce the collapse. So the fluctuation is produced during inflation, the scale associated to it leaves the horizon, and once it re-enters the horizon, the fluctuation is going to tell the matter inside the horizon that it has to collapse into a black hole. This occurs long after inflation has ended. So how do we estimate the mass and the abundance of these black holes in a radiation era? The assumption that we usually make is that the mass of the black holes is proportional to the energy density contained within a Hubble patch at the time at which the black holes form. This is what is written in the first equation here, and we put a proportionality constant in front, which we call γ. Because of causality this γ should be less than one, because the energy density that collapses cannot be bigger than what is contained within the Hubble patch at that moment. This γ basically measures the efficiency of the collapse, and from simulations we know that it is about 0.2. Then, if we play around with this equation a little bit, we find after a short calculation that the mass of the black holes is actually inversely proportional to the square of the wavenumber k of the fluctuation that is collapsing.
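To make the scaling concrete, here is a minimal sketch of the horizon-mass estimate just described: M ≈ γ × (energy inside a Hubble patch at re-entry), which during radiation domination gives M ∝ γ/k². The reference mass and scale below are placeholder units, not values from the talk.

```python
import math

def pbh_mass(k, gamma=0.2, k_ref=1.0, M_ref=1.0):
    """Horizon-mass estimate for a PBH formed when the scale k
    re-enters the horizon during radiation domination.

    M = gamma * M_ref * (k_ref / k)**2, so smaller scales (larger k)
    give lighter black holes. M_ref and k_ref are illustrative units.
    """
    return gamma * M_ref * (k_ref / k) ** 2

# Doubling the wavenumber makes the black hole four times lighter.
ratio = pbh_mass(1.0) / pbh_mass(2.0)
```

This is why pushing the enhancement of the power spectrum to very large k is what produces the very light, unconstrained black holes.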
If we want to produce very small black holes, which is what we want given the plot that I showed you earlier with the constraints, since the unconstrained masses are the small ones, then we need the fluctuations that are collapsing to have a very large k. The next thing we can estimate is the abundance of these black holes, or, in other words, the fraction of dark matter that is in the form of black holes. This is a number denoted by f_PBH, the one that you can see in the last equation here. The important thing is that it is proportional to this function β; β tells you what fraction of the energy density is collapsing into black holes. If you assume that the fluctuations are Gaussian, then this β is basically a Gaussian integral, and you can think of it as going roughly as the exponential of minus one over the power spectrum of the fluctuations. Let me tell you what the power spectrum is: the power spectrum is a quantity that tells us how these fluctuations are distributed across different scales. It is a quantity that is computed from a model of inflation; if you have a model of inflation, you can compute the power spectrum. And it is actually measured by CMB experiments. So one might wonder: is the value that we have measured in CMB experiments enough to produce enough primordial black holes to explain all of dark matter? It turns out that it is not. The value that we have measured for the power spectrum is about 10^-9 at CMB scales, but the one that we need to produce enough primordial black holes to explain all of the dark matter is about 10^-2 or so.
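As a sketch of this Gaussian estimate: β is the tail of a Gaussian above a critical density contrast δ_c, so β ≈ erfc(δ_c/√(2σ²)) up to an O(1) convention factor, with the variance σ² proportional to the power spectrum 𝒫. The threshold value below is a commonly quoted radiation-era number, used here purely for illustration.

```python
import math

def beta(P, delta_c=0.45):
    """Fraction of horizon patches whose (Gaussian) density contrast
    exceeds the collapse threshold delta_c; variance sigma^2 ~ P."""
    return math.erfc(delta_c / math.sqrt(2.0 * P))

# The abundance is exponentially sensitive to the power spectrum:
b_cmb  = beta(1e-9)   # CMB-scale amplitude: utterly negligible
b_peak = beta(1e-2)   # enhanced peak: small but cosmologically relevant
```

The exponential sensitivity is the point: raising 𝒫 from 10^-9 to 10^-2 changes β from effectively zero to a cosmologically relevant number.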
But this is not a problem because, as I told you before, the mass of these black holes decreases as the wavenumber of the fluctuation grows. Since we want very small black holes, because these are the ones that are unconstrained, we want the wavenumber of the fluctuation to be large. In other words, we need the enhancement of the power spectrum to occur at scales which are much, much smaller than the ones that we observe in the CMB, meaning at a k that is much larger than the ones we observe. An example of the power spectrum is shown on the right-hand side. Before explaining the plot, let me first say how we can get a large power spectrum from a given model of inflation. In any model of inflation, the power spectrum is inversely proportional to the speed of the inflaton, at least in the slow-roll approximation, which we will actually need to break, but this is enough to understand the intuitive idea. So the power spectrum is inversely proportional to the speed of the inflaton, and what you want, to get a large power spectrum, is for the inflaton to be very, very slow. What we need is to have some kind of feature in the potential that is going to slow down the inflaton as it rolls down, for instance an inflection point or a small hill. The kind of potential that we are considering is shown in the left plot here. You see that at very small field values, on the left of the plot, we have the part that corresponds to the fluctuations that we observe in the CMB; then, later on, the scales become smaller and smaller, so larger k. We have an inflection point in the potential, which is the place at which the primordial black holes are going to be produced, and then at the end of inflation we have reheating. The power spectrum corresponding to this potential is shown on the right side.
You can see on the left side of the power spectrum the value that we measure at the CMB, which is 10^-9, and then eventually, at large k, we have the peak that corresponds to the inflection point that you see on the left. The simplest model that we could consider to obtain these primordial black holes is simply to take a scalar field and couple it to gravity, as you can see in this action. The model that we are going to consider now, which is one of the numerical models I mentioned in the beginning, is simply a scalar field coupled to the Ricci scalar via this term ξφ²R. Then what you can do is get rid of this coupling by redefining the fields: if you take this action and you redefine the metric and the inflaton field, you can get rid of the coupling to R, which is what is shown in the action at the bottom of the page. The price that you pay for getting rid of this coupling, which you see is no longer in the Lagrangian, is that your potential is now going to be divided by this factor Ω⁴. This Ω comes from redefining the field and is defined here in the middle of the page, in the first of the three equations, and you see that it is proportional to φ. So the price for this redefinition of the field is simply that now your potential is going to be divided by a quartic polynomial. But the simplest potential that we could have chosen in the first place was also a polynomial, so the result is a ratio of two polynomials. You see that the new potential, as a function of h, which is the new field that appears after you redefine the fields in the Lagrangian, is now a quotient of two quartic polynomials.
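As a toy version of this construction (the coefficients and the coupling below are made-up numbers, not the ones from the talk, and I am taking Ω² = 1 + ξh² as a simple stand-in): dividing a quartic polynomial by Ω⁴ gives a quotient of quartics that tends to the constant a₄/ξ² at large field values.

```python
def einstein_frame_potential(h, a2=1.0, a3=-0.5, a4=0.1, xi=0.1):
    """Quartic Jordan-frame potential divided by Omega^4,
    with Omega^2 = 1 + xi*h^2 (all coefficients illustrative)."""
    V_jordan = a2 * h**2 + a3 * h**3 + a4 * h**4
    omega2 = 1.0 + xi * h**2
    return V_jordan / omega2**2

# At large field values the potential flattens to the constant a4 / xi^2.
plateau = einstein_frame_potential(1e6)
```

This large-field plateau is the feature that will matter for fitting the CMB data.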
And this is going to give us what we want because, as long as the parameters satisfy a certain relationship among themselves, we are going to get an inflection point in the potential. The inflection point is shown in the blue dashed curve here; this is the potential, and the horizontal axis is simply the field value normalized to the Planck mass. You see that when h is around one in Planck units, this blue curve, the potential, has an inflection point, so around one Planck mass is where we are going to produce these black holes. The red curve here is simply the speed of the field, and you see that it goes to zero when the field is near the inflection point, which is what is going to give us the enhancement of the power spectrum. One might wonder whether we actually needed to do this field redefinition to get the inflection point, because in the numerator we already have a polynomial that could have an inflection point if we choose the coefficients correctly. What the denominator is actually doing is that, at large field values, it makes the potential constant: you see that the blue curve, as h becomes bigger, tends to a constant. This is going to help a lot when fitting the CMB data, so the reason that we need to include this denominator is that it makes it easier for us to fit the CMB data at large field values. Now, the intuitive idea that I gave you before, where the power spectrum was simply one over the speed of the inflaton, is actually not entirely correct. This is only valid in what is known as the slow-roll approximation, and it turns out that when the field is rolling very, very slowly,
we actually break the slow-roll approximation and we are in a regime known as ultra slow roll, and this picture where the power spectrum goes simply as one over the speed of the inflaton breaks down. To be able to compute the power spectrum, what you need to do now is to take the differential equation shown at the top here, which is known as the Mukhanov-Sasaki equation, and solve it, usually numerically. The power spectrum can then be obtained simply by squaring the solution that you obtain. This equation is obtained simply by considering perturbation theory. In the plot that I have on the left, you can see the result that we would obtain for the power spectrum using the slow-roll approximation, in the red curve, and by solving this differential equation, in the black curve. You see that after the slow-roll phase ends, near the end of the plot, where you are in ultra slow roll, one is actually underestimating the power spectrum by many orders of magnitude if one uses the slow-roll approximation. So it is important to solve this equation numerically to get an accurate power spectrum. The main issue that we run into with this polynomial model is actually adjusting the tilt of the power spectrum at CMB scales, which is also known as the spectral index. The spectral index is simply the slope of the power spectrum at a given scale, and the problem is that in trying to adjust the spectral index we run into trouble with the bounds coming from the Hawking radiation of the black holes. Let me elaborate on this. Let me first say that the spectral index that we get from this polynomial model is about 0.95, but the spectral index measured by Planck, for instance, in CMB experiments, is 0.965. This is a measurement that is done assuming only ΛCDM.
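As an aside on the numerical step mentioned above, here is a minimal sketch of integrating a mode equation, not for the actual model of the talk but for exact de Sitter, u'' + (k² − 2/τ²)u = 0, where the Bunch-Davies solution is known in closed form, |u|² = (1 + 1/(kτ)²)/(2k), so the numerics can be checked against it.

```python
import cmath
import math

def ms_rhs(tau, u, v, k):
    # u' = v,  v' = -(k^2 - 2/tau^2) u : mode equation in exact de Sitter
    return v, -(k * k - 2.0 / (tau * tau)) * u

def evolve_mode(k=1.0, tau0=-50.0, tau1=-0.05, n=20_000):
    """RK4 integration of the mode function from deep inside the
    horizon (tau0) to well outside it (tau1)."""
    dt = (tau1 - tau0) / n
    # Bunch-Davies mode function and its derivative at tau0
    u = (1 - 1j / (k * tau0)) * cmath.exp(-1j * k * tau0) / math.sqrt(2 * k)
    v = ((1j / (k * tau0**2)) - 1j * k * (1 - 1j / (k * tau0))) \
        * cmath.exp(-1j * k * tau0) / math.sqrt(2 * k)
    tau = tau0
    for _ in range(n):
        k1u, k1v = ms_rhs(tau, u, v, k)
        k2u, k2v = ms_rhs(tau + dt/2, u + dt/2*k1u, v + dt/2*k1v, k)
        k3u, k3v = ms_rhs(tau + dt/2, u + dt/2*k2u, v + dt/2*k2v, k)
        k4u, k4v = ms_rhs(tau + dt, u + dt*k3u, v + dt*k3v, k)
        u += dt/6 * (k1u + 2*k2u + 2*k3u + k4u)
        v += dt/6 * (k1v + 2*k2v + 2*k3v + k4v)
        tau += dt
    return abs(u) ** 2

# Squaring the late-time solution gives the (unnormalized) power spectrum,
# to be compared with the closed-form |u|^2 = (1 + 1/(k tau)^2) / (2k).
```

In the real calculation the effective-mass term z''/z replaces 2/τ² and carries the ultra-slow-roll dynamics, which is exactly why the numerical solution departs from the slow-roll estimate.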
But there is a bit of a difference, and the problem is that if we want to increase the spectral index, that is, if we want to fit the tilt of the power spectrum better at CMB scales, what we have to do is change the position of the feature in the potential. But if we do this, we are also going to shift the position of the peak in the power spectrum, and if we change the position of the peak, then we are changing the masses of the black holes. This is why trying to adjust the spectral index at CMB scales results in a problem with adjusting the masses of the black holes, and we run into problems with the evaporation bounds. There are two solutions to this problem, both of which are well motivated. The first one is that we can extend ΛCDM by adding very reasonable extensions, like the neutrino masses, or varying the effective number of relativistic degrees of freedom, or just allowing the spectral index to run. If you add these things to ΛCDM (the Planck collaboration has already performed this analysis), what you find is that the new spectral index is about 0.95, which is very close to what we find. So this is one possible solution to the problem: just extending ΛCDM. Another solution, if you want to leave ΛCDM as it is and you do not want to add these things, is to add higher-dimensional operators to the potential. These higher-dimensional operators are expected anyway, because you are dealing with a model that is non-renormalizable, right, because the inflaton is coupled to the Ricci scalar. If you add these higher-dimensional operators, then what is going to happen is that they are going to change the potential at very large field values, and then you are going to be able to fit the CMB data better.
At low field values, which is where the inflection point is, and where you produce the black holes, the potential is going to be basically the same, because these terms do not have an effect there. So the effect of these terms is to change the potential at the CMB scales but leave it the same at the inflection point. And because the potential changes at the CMB scales, if you have these terms, then you have a few more parameters that you can use to fit the CMB data. This increases the spectral index, solving the problem. In the left plot on this page, what I am plotting is once again the abundance of these black holes on the vertical axis and the masses of the black holes on the horizontal axis. The colored regions with black lines going across are the experimental constraints that I showed you before: the red one is the one from Hawking evaporation and the orange one is the one coming from microlensing experiments. The two curves here, the green one and the blue one, are the two solutions that I have just mentioned. The green one corresponds to simply using a fourth-order polynomial and extending ΛCDM, and the blue one corresponds to adding higher-dimensional operators to the potential. With both of them we can get a fraction of dark matter in the form of black holes of one, so we can basically explain all of dark matter with black holes. Something important, and this is not just for this model but is a generic feature of all of these models of inflection point inflation, is that we get a very large gravitational wave signal. It is important to emphasize that the signal is not coming from the mergers of the black holes or anything like that; it is actually due only to the large power spectrum that we have.
If you expand Einstein's equations to second order and you look at the equation of motion of the tensor modes, which are the ones that correspond to gravitational waves: at first order, these are not sourced by any other perturbation, they evolve by themselves, but if you expand the equations to second order, what you get is that the second order tensor modes are now sourced by the first order scalar perturbations squared. Because we have a very large power spectrum, we have a peak in the scalar perturbations, and this equation of motion is then going to lead to a peak in the second order tensor modes. You can calculate this peak, and it turns out that for the black holes that we are interested in, it falls exactly in the frequency range that LISA is going to measure. So this is a model that could be probed within the next 10 or 20 years, once these experiments are active. The next thing I want to tell you about is the formalism of stochastic inflation. In the formalism of stochastic inflation, what happens is that we allow quantum fluctuations of the field to backreact on the classical trajectory. These quantum fluctuations are then going to modify the background evolution of the field. The equation of motion that we have for the field in slow roll is what is shown here in this gray box. The first term, the one involving the derivative of the potential, is the one that you would obtain if you were not considering the backreaction of the quantum fluctuations: if you are in slow roll and you do not consider the backreaction, what you obtain is simply that the derivative of φ is equal to this first term, the derivative of the potential. Then, if you consider the quantum fluctuations, you obtain a second term.
You can think of it as a stochastic noise term: at every point in time, this term ξ, which is a stochastic noise, is going to give a kick to the inflaton, and this is why it modifies the background trajectory of the field. If you want to check how big this effect is, whether or not it has an effect, what you can do, because this ξ is basically just noise drawn from a normalized probability distribution, is divide the coefficients of the two terms and check whether the result is bigger or smaller than one, to estimate whether or not the noise matters. If you do this, what you find is that for the stochastic noise not to have any effect on the trajectory, the condition that you have to satisfy is that the power spectrum has to be much smaller than one. This is an estimate that is only valid in slow roll, and as I told you before, we actually have to break slow roll, so it is not a very good estimate, but it is enough to understand intuitively what is happening. For the power spectrum that we measure at the CMB, which is around 10^-9 or so, this does not matter; you do not have to take into account the stochastic noise, because 10^-9 is much, much smaller than one. But because we are now enhancing the power spectrum in order to produce the black holes, and the power spectrum that we need is of order 10^-2, it is a lot closer to one, and it becomes important to understand whether or not this has an effect. What you do to take these quantum fluctuations into account, operationally, is that you take the inflaton field and split it into a coarse-grained part and a perturbation. The first term in this equation, φ̄, represents the coarse-grained field, and the second term represents the perturbation.
You can see that the perturbation is actually multiplied by this function W. This is a window function that basically separates small scales from large scales. This kσ is a cutoff that is going to tell you which of the scales are classical and super-horizon, and which of the scales are quantum and sub-horizon. Then, once you have this splitting into the coarse-grained field and the perturbation, you can plug it back into the equations of motion, and you find that the fields now satisfy what are known as the Langevin equations, which are the two shown at the bottom. These are the equations for the field and its conjugate momentum, and each one has its own stochastic noise term. These are the equations that you have to solve, and solving these equations numerically is very difficult, so what we are going to do is develop an analytical formalism that is going to allow us to estimate the effect of this stochastic noise. Because we have quantized the perturbation, in principle you have to think of the perturbation and the noise as operators, but it turns out that the commutator of the noise and the perturbations actually vanishes on small scales, so instead of thinking of them as quantum operators you can think of them as classical stochastic variables. They are classical because the commutator vanishes, but even though they are classical you cannot think of them as a single number, because they are probability distributions, so you have to think of them as stochastic variables. And it turns out that if we choose a window function that is a Heaviside step function, which is the simplest possible window function that we could choose, then the noise that is generated is Gaussian, which simplifies the equations a lot.
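To give a feel for this kind of equation, here is a toy single-variable version of the slow-roll Langevin equation, dφ = −V'(φ)/(3H²) dN + (H/2π)√dN ξ with Gaussian ξ, rather than the coupled field-momentum system of the talk; the potential and parameter values are made up. For a free field the ensemble variance of φ grows as (H/2π)² N, which the sketch reproduces.

```python
import math
import random

def langevin_realization(n_efolds=10.0, steps=500, H=1e-2,
                         dV=lambda phi: 0.0, phi0=0.0, rng=random):
    """One realization of the toy slow-roll Langevin equation:
    dphi = -V'(phi)/(3 H^2) dN + (H / 2 pi) sqrt(dN) xi,
    with xi a unit Gaussian kick at each step (illustrative units)."""
    dN = n_efolds / steps
    phi = phi0
    for _ in range(steps):
        drift = -dV(phi) / (3.0 * H * H) * dN
        kick = (H / (2.0 * math.pi)) * math.sqrt(dN) * rng.gauss(0.0, 1.0)
        phi += drift + kick
    return phi

# Free field: the variance across realizations should grow as (H/2pi)^2 * N.
rng = random.Random(1)
samples = [langevin_realization(rng=rng) for _ in range(2000)]
```

Each realization is one possible stochastic history of the coarse-grained field; statistical moments are then computed over the ensemble, which is the role played by the n-point functions in the analytical treatment.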
So the window function has to be chosen by hand; this is not something that comes directly from the model. You have to choose by hand what you consider classical modes and what you consider quantum modes, and if you choose the Heaviside step function, then you find that the noise is Gaussian. Now, because the perturbations are stochastic variables, their evolution is described simply by their statistical moments, that is, by their n-point functions. In particular, we are interested in the two-point functions, which you can compute by solving this set of three coupled differential equations. You see that each one of these equations is sourced by this term D, which is the one related to the noise, because D is defined as this linear combination of Θs, and these Θs are precisely the correlation functions of the stochastic noise. Once you solve these differential equations, if you manage to solve them, and you find the two-point functions of the perturbations, then you can find the power spectrum, because you can show that the power spectrum is given by the sum of these three terms. The first term, this D_φφ, the one that is shown in black here, is what you would obtain if you were not considering the stochastic formalism; the result that I was showing you before is the one obtained simply by considering this first term. The two terms that are shown in red are the additional contributions from the stochastic formalism. What we want to check is whether or not the sum of these two terms vanishes. If we find that it vanishes, then what we get is that the power spectrum is the same one that we had before, and it is not affected by the stochastic formalism.
The way that we are going to do this is by developing an analytical model that is going to allow us to calculate explicit expressions for the stochastic noise. This model is actually quite generic, in that all of the models of inflection point inflation that produce primordial black holes behave in more or less the same way, so the model that we are going to develop here is quite generic for all models of this type. The model that we developed is a three-region model, as illustrated in the plot shown on the left; what I am plotting here is the second slow-roll parameter η. Basically, η is a parameter that controls the acceleration of the inflaton as it rolls down the potential. At early times it is zero, because the field is in slow roll, so this slow-roll parameter has to be small, and we simply take it to be zero. Then eventually it grows to a very large value, bigger than three, and at the end of the trajectory of the inflaton, η decreases back to a negative value. The blue dashed curve that I am showing here is what you would obtain for η in a numerical model; think, for instance, of the polynomial model that I showed you before: it is what you would obtain if you were considering the actual numerical evolution of the inflaton. The simplest way that we can model this evolution is simply by fitting η with three step functions, one after the other. Despite the fact that this is a very crude approximation, it turns out that it actually reproduces the results of the numerical model quite well. With this simple parameterization of η we can reconstruct everything that we need: the potential, the trajectory of the inflaton, the first slow-roll parameter and, most importantly, the power spectrum.
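Schematically, the three-region model is just a piecewise-constant η(N); the e-fold boundaries and plateau values below are placeholders for illustration, not the fitted values from the talk.

```python
def eta_three_region(N, N1=20.0, N2=27.0, eta_usr=3.5, eta_end=-0.5):
    """Piecewise-constant second slow-roll parameter:
    slow roll (eta = 0), then an ultra-slow-roll-like plateau
    (eta > 3), then a final region with negative eta.
    All boundary and plateau values are illustrative."""
    if N < N1:
        return 0.0           # region 1: slow roll
    elif N < N2:
        return eta_usr       # region 2: eta bigger than three
    else:
        return eta_end       # region 3: end of trajectory
```

Because η is constant in each region, the mode equation has simple analytical solutions region by region, which is what makes the explicit noise expressions tractable.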
And because the parameterization is simply a constant eta in three different regions, solving the Mukhanov-Sasaki equation analytically is very easy. The biggest advantage of this approach is that we can now find explicit analytical expressions for the noise. The expressions we find for Theta (Theta, remember, is the correlation function of the stochastic noise) are shown in this table. The actual expressions are not that important right now; what's important is that by developing this analytical model we're able to find explicit expressions for the noise. It turns out, you can show, that by using these expressions for the noise you can solve for the statistical moments of the perturbations, and you can show that the two additional terms that appear in the power spectrum due to this new formalism actually vanish. So what we find, basically, is that the stochastic formalism does not have an effect on the power spectrum. This is the same thing I told you before: the power spectrum is the sum of these three terms, the sum of the last two vanishes, as we can show analytically, and the power spectrum is then given only by the first term, which is what you would obtain doing the calculation classically. What's important is that these terms vanish only in the sigma-going-to-zero limit. So let me show you the cutoff once again. If you remember, the field is split into a coarse-grained part and a perturbation, and what you consider classical versus what you consider quantum modes is basically determined by the cutoff that you choose inside the window function. The cutoff we choose is simply sigma times a times H, because you can show that if sigma is small enough, then this cutoff accurately separates the quantum modes from the classical ones. 
What's important here is that you're only going to be able to separate the modes accurately if you choose sigma to be small enough. If you don't choose sigma small enough, then you're going to get a different power spectrum, but this is only because you haven't chosen the cutoff correctly. On the right side, the cutoff is plotted for different values of sigma, which are these red lines, and the black line basically tells you when the quantum modes become classical. You see that, for instance, if you choose sigma equal to ten to the minus two, then as long as the field is in slow roll, this is a good choice. Once you have an ultra-slow-roll phase, the modes take a little bit longer to become classical, which is why the black line has this nasty dependence on the number of e-folds. Basically, what you want is for the entirety of the black line to be above the red line that you're choosing. So, in this example, the minimum sigma that you would need to choose to accurately separate the quantum modes from the classical ones is ten to the minus six. And you see indeed that the power spectrum, shown on the left, obtained for each value of sigma (the spectra shown in red) tends to the green curve as sigma becomes smaller and smaller. The reason is that the green curve is the correct result, the one you would obtain classically. So the stochastic inflation formalism does not have an effect on the power spectrum, as long as you choose the cutoff correctly. And the last thing that I want to tell you about is what happens if the black holes collapse during a matter-dominated era instead of a radiation-dominated era. What we have been considering so far is the collapse of black holes during a radiation era. 
If they formed during an early matter-dominated era, then we actually need to estimate the masses and the abundances of the black holes again. As I showed you way back in the beginning, the mass of the black holes, if they are produced in a radiation era, is proportional to k to the minus two, where k is the scale of the fluctuation. We can compute the mass if they form during an early matter-dominated era instead, and what we find is that it is now inversely proportional to the third power of k. This is not very important; it is basically almost the same dependence as before. What is important is that the mass is now also proportional to the temperature at which the transition between this early matter-dominated era and the radiation era occurs. Similarly, you can do the same calculation for the abundance. You remember that if the black holes form during a radiation era, the abundance is proportional to this function beta, which is the fraction of the energy density that collapses into black holes; this is what is shown here as beta sub RD. You would not expect the fraction of the energy density that collapses into black holes to be the same when the universe is dominated by radiation as when it is dominated by matter; you would expect it to change, and this is indeed what happens. The abundance for black holes formed in a matter-dominated era is now going to be proportional to a different beta function, which we denote by beta sub MD, and once again we find that it depends on the temperature at which the transition between matter and radiation occurs. So there are two key points. The first is that these expressions now depend on the transition temperature. The second is that the fraction of collapsing energy density is given by a different function. 
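In schematic form (overall prefactors omitted, and with notation of my own since the slide is not reproduced here), the scalings just described are:

```latex
M_{\rm RD}(k) \;\propto\; k^{-2},
\qquad
M_{\rm eMD}(k) \;\propto\; T_{\rm m}\, k^{-3},
```

where $T_{\rm m}$ is the temperature of the transition from the early matter-dominated era to radiation domination; the exact prefactors and the precise way $T_{\rm m}$ enters are as given in the papers the talk is based on.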
As I said before, beta is the fraction of energy density that collapses into black holes, and the dependence of this function on the power spectrum is very different depending on which era the black holes form in. The radiation-era function is the first of the two equations shown in the middle. You can obtain this function from the Press-Schechter formalism by postulating that a given density fluctuation only collapses if it is above some critical threshold, which is why this integral runs from some threshold delta c to infinity. This assumes that the probability distribution for the density contrast delta is Gaussian. On the other hand, if these black holes form during a matter-dominated era, what you find is the second function here, which was derived in the papers cited on the slide. This function takes into account the non-sphericity and the angular momentum of the collapsing cloud. If you were to estimate naively, without doing any math, what would happen if the black holes form during a matter-dominated era, you would expect the collapse to be a lot easier, because now there is no radiation pressure opposing the collapse. So if you were to use the first formula and simply take the limit of the equation of state going to zero, what you would find is that all of the matter in the universe collapses into black holes, because there is no radiation to stop it. But this vastly overestimates the result. What really happens is that whether the collapsing cloud is spherical or not has an effect, and if it is spinning, the angular momentum also inhibits the collapse. 
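The radiation-era collapse fraction just described, a Gaussian integrated above a threshold, reduces to a complementary error function. A minimal sketch (the threshold value delta_c = 0.45 is a commonly quoted benchmark, used here only as an example):

```python
import math

def beta_radiation(sigma_delta, delta_c=0.45):
    """Press-Schechter-style collapse fraction in radiation domination:
    the probability that a Gaussian density contrast with standard
    deviation sigma_delta exceeds the collapse threshold delta_c,
    i.e. the integral of the Gaussian from delta_c to infinity."""
    return 0.5 * math.erfc(delta_c / (math.sqrt(2.0) * sigma_delta))
```

Because the threshold sits far out on the Gaussian tail for realistic sigma_delta, this beta is exponentially sensitive to the height of the power spectrum, which is the suppression the matter-era formula will turn out to lack.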
So even though the equation of state is zero, you need to be careful about the calculation, and if you take these effects into account you find that, even though the collapse is easier than in radiation domination, not everything in the universe collapses into black holes. Collapse during this early matter domination has two very big advantages. The first is that, because collapse is easier during matter domination, the power spectrum that you need to explain all of dark matter in the form of black holes is actually smaller: in the radiation case you need about ten to the minus two, but in the matter case you only need around ten to the minus four. The second advantage is that, because beta is different, the abundance of the black holes is now a lot less sensitive to small changes in the height of the power spectrum. Let me elaborate on these two points with these plots. In the plot that I show on the left side, the vertical axis is the transition temperature between matter and radiation, and the horizontal axis is the scale of the fluctuations that are collapsing. The two shaded regions are forbidden. The blue region is forbidden because, for those choices of scale and transition temperature, the scale actually re-enters the horizon during radiation domination. This is not what we want; we want the scale to re-enter the horizon during this early matter-dominated era, and for that to happen you have to be below the blue region, in the white region on the right side of the plot. The orange lines that run diagonally across the plot are the masses of the black holes that you would produce for each choice of temperature and scale, and you see that smaller masses are toward the right side. 
And if you recall from the plot that I showed you ages ago, the smaller masses are constrained by the bounds coming from the Hawking radiation of the black holes, and this is what is shown in the red region. The red region corresponds to the Hawking radiation bounds; it corresponds to small masses and is therefore also forbidden. So the region that is allowed is the one shown in this white triangle, and you can see that the best-case scenario occurs at the top right of the triangle, where the power spectrum, given by this horizontal dashed line, is smallest. In the best-case scenario, the transition temperature that you would need to produce enough black holes to explain all of dark matter is around ten to the five or ten to the six GeV. So we're talking here about very, very low reheating temperatures, very low transition temperatures. The key point, through all of this, is simply that if you have an early matter-dominated era that ends when the universe has a temperature of ten to the five or ten to the six GeV, then you only need a power spectrum of ten to the minus four to produce the black holes, which is an improvement over the radiation case. In the plot on the right side, what I'm showing is the sensitivity of the abundance to small changes in the height of the power spectrum. The purple curve shows the sensitivity of the abundance if the black holes formed during a radiation era, and the blue curve shows it if they formed during a matter era. The important point here is that the blue region is a lot bigger than the purple region. What I'm depicting is a one-order-of-magnitude change in the abundance as I change the parameters inside the beta function. You see that if I change the power spectrum just a little bit along the horizontal axis, 
I already reach a one-order-of-magnitude change in the purple, radiation case, which is why the purple region is so small; but to reach the same amount of change in the blue case, I have to change the power spectrum by a lot. The point here is that the abundance is a lot less sensitive to small changes in the height of the power spectrum, and this is good because it means that you have to tune the parameters in your model less to get the same amount of black holes. As a way of implementing this mechanism, you can consider this axion-monodromy-inspired potential. This potential features a bunch of oscillations, because of the cosine term, and is quadratic near the minimum. Why is this important? Because once the inflaton reaches the minimum of the potential, you can show that if the minimum is quadratic enough, the inflaton, as it oscillates around the minimum, behaves as matter. So once inflation ends and the inflaton starts oscillating near the minimum of the potential, you're going to have a long epoch of matter domination, as long as reheating is perturbative, meaning that no preheating occurs; but this is going to depend, of course, on the coupling of the inflaton to the other particles in your model. Regarding the original motivation that we had, well, let me first tell you about the parameters in the potential. This p parameter shown here controls the slope of the potential at large field values, so p is the main thing that determines how well you're going to fit the CMB data, because it is what changes the potential at large field values. At low field values, as you can see in this plot, the p parameter doesn't really have an effect; what does have an effect is this kappa parameter. Kappa, which multiplies the cosine, changes the depth of each one of these minima. 
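To make the roles of the two parameters concrete, here is a schematic stand-in for a potential of this type. The functional form, the normalizations, and the default values below are illustrative choices of mine; the actual potential used in the papers may differ in its details:

```python
import numpy as np

def monodromy_potential(phi, p=1.0, kappa=0.1, f=1.0):
    """Schematic axion-monodromy-inspired potential: a power-law term
    |phi|**p that sets the slope at large field values (the 'p'
    parameter of the talk, controlling the CMB fit) plus oscillations
    kappa * cos(phi / f) whose amplitude kappa sets the depth of the
    local minima (and hence the size of the power spectrum peak)."""
    phi = np.asarray(phi, dtype=float)
    return np.abs(phi) ** p + kappa * np.cos(phi / f)
```

At large phi the power-law term dominates, so p shapes the CMB-scale potential; at small phi the cosine modulation dominates, so kappa shapes the near-minimum structure, exactly the division of labor described in the talk.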
And this depth is what you want to change in order to obtain the large enhancement of the power spectrum. So basically the kappa parameter determines how big your power spectrum is, and the p parameter determines how well you can fit the CMB data. The last thing I wanted to mention here is the motivation we had to choose this potential: we thought that maybe, by having many, many oscillations in the potential, we could reduce the tuning of the parameters of the model. The reasoning was that if you have several minima, then even if the inflaton does not produce the black holes in one of the minima, it would produce them once it reaches the next one. It turns out that this is not the case, because the abundance is so sensitive to the depth of each one of these minima that you basically just have to choose one of the minima and tune it to get the correct amount of black holes. So actually you don't need to use a potential this complicated to implement the matter-dominated scenario; as long as you can get an early matter-dominated era via some other mechanism, you could also get the black holes from, for instance, the same polynomial model that we had before. Nevertheless, we're going to choose this potential, and the mechanism is illustrated in the examples that I show here. On the left side you can see the power spectra that we get when all of the parameters in the potential are equal between each example except for kappa, which, as I said, is the parameter that controls the depth of each minimum and therefore controls the size of the power spectrum. The highest of these power spectra, which is around ten to the minus two, is the one that corresponds to producing the black holes during a radiation era, and the two smaller spectra correspond to producing them during an early matter-dominated era. 
The abundances corresponding to each of these three examples are shown on the right side. Once again, the power spectrum that you need to produce the black holes during radiation domination is a lot bigger than what you would need to produce them during matter domination. You can see that the abundance corresponding to the radiation case is actually a lot narrower than the one corresponding to the matter case, and the reason is precisely that the beta function that determines the abundance is different: in the matter case you don't have the exponential dependence that you had in the radiation case, and that exponential dependence was actually suppressing the abundance. Because you lose that suppression, the distributions are a little wider. This is a problem, because if they are too wide, you run into problems with the bounds coming from Hawking evaporation, which are shown in red. The way to overcome this problem is to change the parameter I that appears in the beta function, which is related to the angular momentum of the collapsing cloud. Let's go back quickly to the beta function. You see that in the second beta function you have this parameter I, which, as I said, is related to the angular momentum of the collapsing cloud. If you estimate it, you find that it should be a parameter of order one. So if you play around with this parameter, what you're basically doing is changing the angular momentum of the cloud that is collapsing. 
What you find is that you get a bit of additional suppression of the abundance, and the price that you have to pay, of course, is that the power spectrum now has to be a little bit bigger to account for this; but by changing this parameter a little you can evade the evaporation bounds. So the conclusion here is that the main advantage of the matter-dominated scenario is also its biggest drawback: the lack of suppression makes it more difficult to evade the evaporation bounds. And with this I conclude. The first thing I want to say is that the simplest potential that can produce primordial black holes is still viable. By the simplest potential I mean the polynomial potential that I showed you at the beginning: it is viable provided you either extend Lambda-CDM or you add higher-dimensional operators to the potential, and both of these are natural solutions. The second thing is that if all of dark matter is in the form of black holes, then we should be able to detect the corresponding gravitational wave signal with LISA and DECIGO, if they form during radiation domination. The third thing is that we have shown analytically that, at leading order at least, the stochastic inflation formalism does not affect the power spectrum, even in the presence of an ultra-slow-roll phase. Note that this holds only at leading order; it does not mean that the stochastic inflation formalism has no effect at all on the abundance of the black holes, but at least within the approximations that we made, we find that it doesn't. You would have to solve the Langevin equations completely numerically to find the full effect of the formalism, but within the approximations that we made, it does not have an effect. The next thing is that primordial black hole formation in an early matter-dominated era has two big advantages. 
One is that, because of the lack of radiation pressure, you need a smaller enhancement of the power spectrum to produce the black holes. The second is that the parameters in your potential now need to be less tuned, precisely because the beta function is different and is a lot less sensitive to small changes in these parameters. And the final thing is that the drawback of this matter-dominated scenario is that this lack of suppression makes it a lot more difficult to evade the evaporation bounds. That is everything I wanted to talk to you about, so thank you very much, and thanks again for having me. Thank you very much, Julián. Just before we start with the question round: for the people following on the YouTube channel, you can ask questions via the YouTube chat. To begin, we are going to take some questions from the audience here at the LAWPh webinar. I guess Alejandro Cardenas wants to address a question to Julián. I'm Alberto, and thank you, Julián, for this very interesting and well-presented talk with a lot of details. I'm not an expert on this, but I'm curious about your second conclusion here, about detectability. I believe what you showed to address the detectability is the gravitational wave energy density: it seems you compute its dimensions and then check against the noise curve to assess that it will be above the noise curve of LISA. So my question is about how this would actually be detectable, in the sense that, I believe, what we typically do is compute the gravitational waves and then do matched filtering or something similar to extract the signal. Can you please comment on whether these cosmological signals actually allow this? 
Yes, so this matched filtering you're talking about is what you would use if you were detecting, for instance, a burst of signal, right, if you had a merger or something like that. But what I'm talking about here is a stochastic background. You would not see a burst; rather, what you would find is additional noise in the mirrors that you do not know how to account for. It's actually the same thing that happened when we found the CMB: there was this additional noise that nobody knew where it was coming from, and it turned out to be this stochastic radiation in the background. It's exactly the same here: this would just show up as stochastic noise in the movement of the mirrors. Do you know if there are any works on whether, for example for LISA, this can actually be extracted, in the sense that LISA will have a lot of continuous sources, like, I don't know, white dwarf populations? This is different: in LIGO, for example, it's more or less what you just mentioned, you have a burst and then you can detect it because you can fit the waveform; in LISA it's different because all the sources are sort of turned on at the same time. So do you know whether, even though here, for example, it looks like it's way above the noise curve, it would be detectable in that sense, because you can really model this noise and then extract it? Yeah, so this noise curve precisely gives you that information: as long as you have a curve above this noise curve, it means you can account for it, because what the noise curve is showing you is every source of noise that we know about, and this is actually what is modeled when you draw this curve. So yeah, as long as you find anything that is above this curve, 
then you know that this is a source of noise that you have not accounted for, and I guess, once you rule out everything, like astrophysical sources of noise, for instance, which is what is modeled in this curve, the first thing that you would think of is a stochastic background, right. Okay, so it's detectable as long as it is above this noise curve. But, for example, if you think about the first gravitational wave detection with LIGO: that could have been observed with LISA, but then you see that it just leaves the LISA band and goes into the LIGO band. Even in those cases I believe detection is actually quite challenging, even though the signal is above the noise curve, because, as I said, there are a lot of other sources. So my question is whether somebody has actually performed an analysis showing that you can indeed model, for instance, white dwarfs and, I don't know, supermassive black hole mergers, LIGO-type sources that would be seen first at LIGO and then in LISA, and then extract this stochastic background. My understanding is that this noise curve actually already accounts for all of these different noise sources. I may be wrong, but my understanding is that this red curve, for instance, that I show here as LISA, already takes all of those noise sources into account. Maybe I'm wrong, I don't know; this is my understanding. Okay, thank you. Okay, let's see if there are some questions from the people on YouTube, if they want to ask Julián. Meanwhile, we can start with some other questions here from the audience in the Zoom session. Okay, thank you for the nice talk. Can you please go back to your slide, I think it was 16, where you talk about the formation of PBHs in the matter era? Yes. Okay, first one question: does Tm correspond to the reheating temperature, or, you say transition temperature, but what is that exactly? Let me show you the plot I had before. 
There's a plot with the different eras, so you can see that Tm is the temperature that corresponds to the change between the early matter-dominated era and the radiation era. I call it the reheating temperature sometimes because the easiest way you can get an early matter-dominated era is if you have perturbative reheating: if the inflaton is oscillating at the minimum of the potential and reheating is perturbative, then this early matter-dominated era comes from reheating. But the models I'm showing you here do not depend on that, so Tm is basically the transition temperature, regardless of whatever mechanism you used to get this early matter-dominated era. Okay, but yeah, it's the onset of radiation, right? Yes, yes. Thanks. And if you go back to that slide, 16: maybe it's my intuition that is miserably failing, but you say that the PBH abundance only grows during radiation. I was expecting the opposite, right? I mean, in matter there's no pressure, so I was expecting the PBH abundance to grow even faster than in radiation. No, so what happens in the matter era is that this is when they form, so you're correct that it's easier for them to form in a matter era than in a radiation era. But once they have formed, once you already have all of your black holes in the universe, the black holes behave as matter, right; this is why we can think of them as dark matter, they behave as a matter fluid. The energy density of the total population of black holes redshifts as the scale factor to the power minus three, because it's matter, but radiation redshifts as a to the minus four. If you take the ratio, you find that radiation redshifts more quickly than the black holes, and this is why the relative energy density actually grows during the radiation era. 
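The redshift argument just given can be sketched in one line: since the PBH density scales as a**-3 and radiation as a**-4, their ratio grows linearly with the scale factor (the function name and normalization at formation are mine, for illustration):

```python
def pbh_to_radiation_ratio(a, a_form, beta):
    """During radiation domination, rho_PBH ~ a**-3 (matter) while
    rho_rad ~ a**-4, so the fractional PBH abundance grows linearly
    with the scale factor from its value beta at formation:
    ratio(a) = beta * (a / a_form)."""
    return beta * (a / a_form)
```

This is why even a tiny collapse fraction beta at formation can grow into all of dark matter by matter-radiation equality, and why no growth occurs if the universe stays matter dominated, as the next answer explains.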
So if the universe was always matter dominated, for instance, you would not have a growth of the abundance, right, because the ratio would always be the same. This is why it grows during radiation. So the proportion of energy stored in PBHs is growing. Correct. It's not that a single PBH is accreting or something like that. No, no, no. Exactly. Okay, thanks. Okay, are there other questions for the moment? From YouTube, RV says that it was a very nice talk; he thanks you, Julián. I don't know if there are other questions, but I have a very small question about DECIGO, if you can comment on it, because I didn't know it existed, so maybe it's my ignorance. Most of the sensitivity of DECIGO is in the region in which your model is producing a lot of signal. Yeah, so I don't know much about the experiment, so I can't say a lot. Its frequency range is a little bit to the right of LISA, but there's some overlap between both, as you can see here. This is an experiment that is planned; it's not something that's constructed yet, same as LISA. There's not much more I can say; it's an experiment that we do not have yet, but if it gets constructed, it would be good for probing this kind of model. So it's sort of complementary to LISA in the same frequency range. Okay, just a quick question: is DECIGO ground based? No, I may be wrong, but I think it's space based as well, just like LISA. Okay. Another question, also a naive question: when you were talking about stochastic inflation, it looks very similar, not the physics behind it, but just the formalism, to when people try to solve stochastic differential equations in other fields; is it the same kind of thing? Yeah, it is. 
These actually are stochastic differential equations. If you want to solve them completely numerically, to get a full solution, you have to use stochastic methods. That's exactly what happens: at each step in time the field gets a stochastic kick, so you have to solve them many, many times and build a probability distribution of the solutions. So yeah, they are stochastic equations. Okay. Yeah, because when I looked at the formula it seemed very similar; in other fields they also appear, anywhere there is randomness. Yeah. So, I don't know if we have more questions from the audience. Okay, I guess Oscar Zapata wants to ask a question; please, Oscar. Thank you for the nice presentation. I would like to ask Julián about the constraints, namely the femtolensing constraints; could you comment on them? I have heard that there is some tension in that kind of constraint. So, well, the real constraints that we have come from microlensing. The femtolensing constraints are the ones shown in pink here, and this is not an actual constraint; this is just a projection of future constraints we could have from femtolensing. The microlensing constraints are actual constraints, the ones around ten to the minus eleven solar masses from the Subaru experiment. I don't know the story of these constraints very well, but you're right: as you can see here, there is a vertical line cutting these constraints below ten to the minus eleven. I think the tension you were referring to is the fact that before, this wasn't a vertical line; this was a full constraint that went down to around the middle of this open window. 
But then I think what happened is that people realized there were some astrophysical uncertainties in these constraints, so they had to revisit the original estimation and cut off this side because of those astrophysical uncertainties. I think the actual constraints that we have now are more or less these; I don't know if there are updated plots, but as far as I know these are the ones, and they end at around ten to the minus eleven. The sharp cut here is precisely because of the uncertainties that were not taken into account before. I don't know if that answers the question. Okay. Maybe there is another question for Julián? Let me check quickly. Since we are okay, and also due to time, we have to close here. So let me invite everybody, first of all, to thank Julián for his very interesting talk; indeed, I learned a lot from it. For the rest of the people, you know that you can follow the LAWPh YouTube channel; you can subscribe to be updated with all the latest webinars that we are programming during this year, 2021. The next webinar is going to be in around two more weeks; it is going to be webinar 109, with Christoph Dermis from the University of Torino, who will talk about neutrinos, more or less; that is going to be the topic. I invite everybody to join the next LAWPh webinar, and see you next time. Thanks again, Julián, for your nice webinar. Thank you very much for having me. And we'll see each other next time here at LAWPh. Bye.