OK, we are on air. Hello, everyone, and welcome to this series of Latin American webinars on physics. My name is Nicolás Bernal, from the Universidad Antonio Nariño in Bogotá, Colombia, and I will be your host today. Our speaker today is Mauro Valli from INFN Rome, who will talk about dark matter self-interactions in Milky Way dwarf galaxies. He did his PhD at SISSA, he's now doing a postdoc at INFN Rome, and in the fall he's moving to the US, to UC Irvine, right? Yes. And so we are super glad to have Mauro as our speaker today. And I remind you guys that you can be part of the discussion by writing questions and comments via our YouTube live chat system. And OK, I can hand you over to Mauro. OK, that's great. Thanks a lot for this nice presentation. Let me just share my screen. So can you hear me well? Yeah, perfect. OK. So hi, guys. This is going to be a presentation on work that I've been doing in collaboration with Hai-Bo Yu. It is about dark matter self-interactions in Milky Way dwarfs. And for a nice review of a subset of the topics that I'm going to talk about, you can look at this arXiv reference here. So let me just start setting the stage for what we are going to talk about, with the cold dark matter paradigm. As you know, a pressureless component that contributes about one fourth of the total energy budget of our current universe essentially explains a lot of data concerning observations at typically large cosmological scales. And this component can be put in a nice one-to-one correspondence with a cold and collisionless particle beyond the Standard Model. Now, state-of-the-art cold dark matter simulations, supplemented with dedicated recipes for the baryonic physics that you need to include, are able to provide an accurate snapshot of the morphologies of the objects that we are able to observe today. So that's pretty impressive. Now, let me just take a step back.
Let me switch off baryonic physics for one second. If you do that, you end up with a universal prediction from these simulations, which is the well-known Navarro-Frenk-White (NFW) profile here, basically characterized by two parameters and some correlation between them, which is the mass-concentration relation that you can get from these N-body simulations. Now, you take this profile, with this scatter here, and you try to fit the data at small scales, that is, at the scale of galaxies. And you end up with objects whose rotation curves seem to not support this behavior here predicted by CDM in these simulations, that is, a cusp; they prefer a core instead. So this is the so-called core-versus-cusp problem that you have probably heard about. Now, you should start to shout at me (even if you are online, so maybe you cannot do that) and say that I cannot switch off baryonic feedback. That's true. Let me sketch a cartoon of what happens here. Essentially, energetic events like supernova feedback can redistribute the dark matter distribution in the galaxy. So we are not really allowed to leave baryonic physics out of our predictions. However, these predictions actually depend on a lot of things and are particularly complicated to take into account properly in the simulations; depending on the specific recipe that you adopt, you may end up with different results. Still, the core-versus-cusp problem may be just related to that, and so we might live our lives without worrying too much anymore about this problem. However, there is another interesting problem that came up more recently, and I have to thank Torsten Bringmann for this nice slide, which I shamelessly stole from him. That is the diversity problem.
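To make the cusp concrete, here is a minimal Python sketch of the NFW profile and its (analytic) enclosed mass; these are the standard formulas, with units left arbitrary:

```python
import numpy as np

def rho_nfw(r, rho_s, r_s):
    """NFW density profile: rho_s / [(r/r_s) * (1 + r/r_s)^2].
    Cuspy (rho ~ 1/r) at small radii, falling as 1/r^3 at large radii."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def m_nfw(r, rho_s, r_s):
    """Mass enclosed within radius r for the NFW profile (closed form)."""
    x = r / r_s
    return 4.0 * np.pi * rho_s * r_s**3 * (np.log(1.0 + x) - x / (1.0 + x))
```

The two parameters (a scale density and a scale radius) are exactly the ones tied together by the mass-concentration relation mentioned above.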
So basically the same Navarro, Frenk and White today recognize that there might be a problem anyway when you try to explain a multitude of kinematic data related to a multitude of small-scale observables, that is, the galaxies for which we can essentially measure kinematic properties. And as you can see here, in particular, there are galaxies that probe similar asymptotic circular velocities but a very different kinematic structure in the data. And so if you tune your recipe of baryonic feedback to take into account some subset of the data, you probably fail to explain other ones. And that's the problem of trying to accommodate, in cold dark matter, this diversity that we observe. Then we can basically focus on what is going to be the main subject of my talk, that is, dwarf spheroidal galaxies, which are particularly interesting objects. They are satellites of the Milky Way, orbiting around it fairly close to us. And the most important thing that characterizes these astrophysical objects is their very large mass-to-light ratio, one of the largest that we know. This means that they are more than 99% dark matter dominated, and this essentially translates into the fact that they are ideal dark matter astro-laboratories. Now, already counting the number of these satellites becomes kind of an issue, in the sense that around the 70s we knew of only about 10 of them, and today we have a factor of five or more than that. However, if you take CDM-only predictions, you end up with at least another order of magnitude more satellites that you should have detected or are going to detect. And that's exactly the problem. Whether there really is a missing-satellite problem, we don't know, because it might just be an observational limit together with some other effect that may still come from baryonic physics. For a recent discussion, you may look at this paper here.
And I think that it's fair to say that it might not be a really serious problem, but it's still interesting to investigate. But there is something more compelling, I would say, beyond counting the number of satellites, and this is essentially related to the knowledge of the mass profile in these objects. So let me go very quickly through how you do that. You may start by assuming a condition of dynamical equilibrium for these objects, for which you can write a Boltzmann equation for the phase-space distribution of the stars, which are the tracers of the gravitational potential. The gravitational potential is basically testing the mass profile, the density profile, of the dark matter. Now you impose spherical symmetry and take moments of this equation, to end up with the famous spherical Jeans equation, which is this differential equation here. So what does it contain? First of all, you have the stellar density of the stars, which can be measured along the line of sight: you actually measure, through photometry, surface brightnesses, and then you can connect them via this projection formula. So that is essentially known, not an unknown of the game. What are the quantities that we want to determine? Well, this equation should be solved, possibly, for the radial component of the stellar velocity dispersion. So this is the variable you would like to solve for. And together with it, there is this guy here, which is the orbital anisotropy. This is a measure of our ignorance about the full three-dimensional motion of the stars in the system: indeed, what we can measure are just line-of-sight projections of the motion of the stars in these objects. And then our theoretical prediction is not really directly on this guy, but it is on this guy. So to make a long story short, the bottom line is that you have these spectroscopic data, which are the analogue of the rotation-curve data that I was showing you before for other galaxies.
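In the standard textbook notation (with $\nu(r)$ the 3D stellar density, $\sigma_r$ the radial velocity dispersion, $\beta(r)$ the orbital anisotropy, $M(r)$ the enclosed mass, and $\Sigma(R)$ the surface brightness), the two equations just described read:

```latex
% spherical Jeans equation for the tracer stars
\frac{d\left(\nu\,\sigma_r^2\right)}{dr}
  + \frac{2\,\beta(r)}{r}\,\nu\,\sigma_r^2
  = -\,\nu(r)\,\frac{G\,M(r)}{r^2}

% line-of-sight projection compared against the spectroscopic data
\sigma_{\rm los}^2(R)\,\Sigma(R)
  = 2\int_R^{\infty}\left(1-\beta(r)\,\frac{R^2}{r^2}\right)
    \frac{\nu(r)\,\sigma_r^2(r)\,r}{\sqrt{r^2-R^2}}\,dr
```

The second line is the projection formula mentioned above: the data constrain $\sigma_{\rm los}$, while the theory works with $\sigma_r$ and $\beta$.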
And the theoretical prediction is a complicated function of two things: the orbital anisotropy function, and this mass profile, which comes from the fact that you have this Poisson equation here and you integrate it to get the potential of the system, which is essentially in one-to-one correspondence with the dark matter. So now you understand that if you want to learn something from these data about two unknown functions, it would be a very tough job. And indeed, if you ask me, core or cusp in a dwarf spheroidal? The answer is: who knows. However, there is still an interesting quantity that we can look at that is very meaningful and does not depend on this degeneracy problem, and that is the mass at the half-light radius. Indeed, several studies have shown, numerically and also more formally, that you can essentially shrink your uncertainty about the mass profile of systems like Carina, one of the best-measured dwarf spheroidal galaxies. Essentially, this band of uncertainty is mainly driven by this unknown orbital anisotropy function, and it shrinks here, at the half-light radius, because the dependence on this assumption is minimized. And actually, there is also a nice formula that you can write for this enclosed mass that depends only on structural and kinematic parameters, basically parameters directly related to observations. So why am I saying all this? Because from the half-light mass you can also compute, if you want, the corresponding circular velocity, and you can put all these points in a plot. And you can now compare with CDM simulations, CDM-only simulations actually. The rotation curves that you get from CDM are many, as you would expect on the basis of what I told you before, and all these rotation curves correspond to subhalos that you predict. Now, some of these curves indeed intersect these points, which means that you have a possible match between a subhalo that you predict in the simulation and what you observe.
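A common estimator of this kind is the Wolf et al. (2010) formula, $M(r_{1/2}) \simeq 3\langle\sigma_{\rm los}^2\rangle r_{1/2}/G$ with $r_{1/2} \simeq (4/3)R_e$; the talk does not specify which estimator is used, so take this as an assumption. A minimal Python sketch:

```python
import numpy as np

G = 4.30091e-6  # Newton's constant in kpc * (km/s)^2 / Msun

def wolf_mass(sigma_los, r_half_proj):
    """Wolf et al. (2010)-style mass within the deprojected half-light
    radius r_1/2 ~ (4/3) R_e, roughly independent of the unknown
    orbital anisotropy.  sigma_los in km/s, R_e in kpc; returns Msun."""
    r_half = 4.0 / 3.0 * r_half_proj
    return 3.0 * sigma_los**2 * r_half / G

def v_circ(mass, r):
    """Circular velocity sqrt(G M / r) in km/s, for M in Msun, r in kpc."""
    return np.sqrt(G * mass / r)
```

Note that `v_circ` evaluated at $r_{1/2}$ reduces to $\sqrt{3}\,\sigma_{\rm los}$, so each dwarf contributes one well-determined point to the comparison plot.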
If you follow this interpretation, you also end up with a lot of subhalos that are more massive than anything you observe. So now the question is: subhalos that are so big are, so to speak, too big to fail from the point of view of detection, because you should explain why such massive subhalos did not trigger star formation, and so appear faint from our perspective as observers compared with the ones you have observed. This is the too-big-to-fail problem. And it's very interesting because it may have important dependencies not only on baryonic physics, but also on what you assume for the mass of the Milky Way in your simulation, and on the tidal effects that you should somehow take into account. But still, you can see that it is, I would say, a well-defined problem from this point of view: even for the cold dark matter paradigm, it's not really easy to tune several factors here to explain all this. Now, core versus cusp, diversity, too big to fail: is this a sign of something beyond the cold dark matter paradigm? Well, we would like to say yes, of course; this would be exciting for the community of dark matter researchers and astroparticle researchers. And one of the compelling ideas that have been around since 2000 is the self-interacting dark matter (SIDM) paradigm. It's a very interesting proposal because it allows a natural explanation, first of all, for the formation of density cores. Indeed, dark matter self-interactions mean essentially that dark matter particles in high-density regions can talk with each other, and so they allow for a redistribution, first of all, of their velocity dispersion. It is no longer peaked like this, as shown in this figure, like it is in CDM; it is redistributed, in the sense that you allow for a larger dispersion in the very dense regions for these particles.
And so you have heat transport, a heat gradient essentially, and a redistribution of the dark matter particles, and you allow for the formation of a core. This core is in a region that is thermalized, because these particles can talk with each other; so you also have a thermalization region. Now, the warning is that, of course, there are upper limits on the scattering cross section when you consider an actual dark matter candidate. The most interesting bounds probably come from clusters, and indeed many merging-cluster analyses would agree on this rough bound. But there are also even stronger bounds coming from the stellar kinematics of the brightest central galaxies. You may look at this review for an extensive discussion of this paradigm, these bounds, and other things. But I just want to show you the money plot of this discussion, which I think is this one. It's a wonderful idea that came to the mind of these gentlemen fairly recently. The idea is that, okay, maybe cluster bounds are strong, and so in principle this proposal looks a bit crazy, but let's look at the data. Let's try to use the systems that we have at our disposal a bit like particle accelerators: each of these systems, depending on its size, is probing a different energy regime. Using this idea, one may show that even if cluster data favor 0.1 centimeters squared per gram for the cross section, which is still an allowed number, from dwarfs to spirals you instead find values more around one centimeter squared per gram. And so you can essentially read off that there is likely, with good statistical significance, a velocity dependence in this averaged self-scattering cross section per unit mass of the dark matter particle. Following this, these authors proposed a very simple model that has been widely tested against SIDM simulations.
This model is derived on physical grounds, so I think it's very nice to describe it. Essentially, you start with this condition: if the halo contains SIDM particles, there must be a threshold radius r1 such that, well inside it, you scatter at least once within the dynamical time (and the dynamical time is essentially the age of the system), while in the outer region of the halo you essentially don't scatter anymore. On the basis of this, you can distinguish an inner region, where the thermalization I was talking about is actually the regime you're probing, and where you can describe the SIDM halo just like an isothermal gas. In the other region, where the scattering rate times the dynamical time drops below one, you essentially want to recover all the cold dark matter features that you like and that you know work very well at large scales. And so you do this matching procedure at r1 with an NFW. So now, why is this proposal so intriguing, and why is this simple modeling so nice? Because, you see, you now have a mechanism, first of all, to explain cores, but you actually have something more than that. This equation here is a hydrostatic equilibrium equation for an isothermal gas, and it involves the total gravitational potential of the system. Total gravitational potential of the system means that when you solve the equation to get the SIDM density profile, this SIDM density profile will be tightly connected not only with the dark matter distribution that is there, but also with the baryonic distribution: in this gravitational potential there must also be the contribution that comes from baryons.
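A minimal sketch of solving that inner isothermal equation, assuming baryons can be neglected (the dark-matter-dominated limit; the dimensionless form below follows from rescaling radii by $r_0 = \sigma_0/\sqrt{4\pi G\rho_0}$):

```python
import numpy as np
from scipy.integrate import solve_ivp

def isothermal_profile(x_max=10.0, n=200):
    """Solve the dimensionless isothermal hydrostatic equation
        (1/x^2) d/dx (x^2 dh/dx) = -exp(h),  h(0) = 0, h'(0) = 0,
    where rho(r) = rho_0 * exp(h) and x = r / r_0.
    Returns (x, rho/rho_0): a cored profile in the thermalized region."""
    def rhs(x, y):
        h, dh = y
        return [dh, -np.exp(h) - (2.0 / x) * dh]

    x0 = 1e-6  # start slightly off center, using the series h ~ -x^2/6
    sol = solve_ivp(rhs, [x0, x_max], [-x0**2 / 6.0, -x0 / 3.0],
                    t_eval=np.linspace(x0, x_max, n), rtol=1e-8, atol=1e-10)
    return sol.t, np.exp(sol.y[0])
```

In the full model this cored solution is then matched onto an NFW at r1; the matching step itself is omitted in this sketch, and including baryons would simply add their potential to the source term.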
Now, the contribution that comes from baryons seems to be exactly, if you look at the diversity problem, the reason why SIDM may be so successful with respect to CDM today. As you can see here, you have completely different kinematic structures for two different objects that are, however, probing the same maximum circular velocity asymptotically. In the inner part, the difference is that in one object the distribution of gas and stars is important, while in the other it is negligible. That is why the CDM-only prediction, from this point of view, would always be the same, an NFW, while the SIDM prediction depends also on the distribution of gas and stars that you put in. And indeed, with the true SIDM profile including the baryonic influence you are able to sit on the data, while, if you look carefully at all these lines, the isothermal profile without the baryonic influence would not be able to match the data. So, with the fact that dark matter self-interactions thermalize the inner halo, you have an automatic correlation of dark matter with the baryon distribution, and you solve this problem elegantly. What about Milky Way dwarfs now? This is the work that I've been recently doing with Hai-Bo Yu on the basis of this nice semi-analytical halo model. So, just to recap: you solve the hydrostatic equilibrium equation that I was showing you before, in a case where you can neglect baryons, and you are allowed to in dwarfs, because, as I told you, in dwarfs the mass-to-light ratio is very large, and the distribution of stars can actually be shown to be always subdominant, even at small radii. Then you end up with this simple differential equation to solve, characterized by two parameters. One is the one-dimensional velocity dispersion that characterizes the Gaussian law of the isothermal SIDM distribution, and then you also have a normalization, the central density.
And then, you also have this parameter r1, or equivalently the cross section, that you fix or vary in your study, and you are then allowed to match this isothermal solution to the NFW. So this is the dark side of the story in the study that I'm going to present to you now. The bright side is the one of the stars. For the stars, things are much simpler, in the sense that we have a simple modeling, the Plummer model, which can provide a good fit of the surface brightness data. And then, for the real monster of the game when you study dwarfs, which is the orbital anisotropy, I will use the most general parameterization you can find in the literature for this guy here, which involves four parameters. Most importantly, I will now show you a fit to dwarf data that is related to only a subset of all the dwarfs that we measure today. I will study just the eight classical dwarf spheroidals, and this is because they are the ones for which we have the better kinematic data. For all the other ones it's a bit harder to make statements, while for these ones it is feasible. On top of the observational information that I will exploit now, let me just pause for one second on the fact that, in the SIDM framework, I want to exploit at large scales what pure cold dark matter simulations suggest to me. So essentially, at large scales I want to recover cold dark matter results within the SIDM framework. And to do that, we can exploit the mass-concentration relation that is predicted in cold dark matter N-body simulations. This can be done in the following way: you basically pick up the N-body simulation without baryonic physics and you read out the mass-concentration relation.
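The two stellar ingredients just mentioned, the Plummer light profile and a four-parameter anisotropy of the Baes-van-Hese type (interpolating between an inner and an outer value; I am assuming this is the "most general parameterization" meant here), can be sketched as:

```python
import numpy as np

def beta_aniso(r, beta0, beta_inf, r_b, eta):
    """Four-parameter orbital anisotropy: beta0 in the center, beta_inf
    at large radii, with transition radius r_b and sharpness eta."""
    t = (r / r_b) ** eta
    return (beta0 + beta_inf * t) / (1.0 + t)

def plummer_nu(r, L, a):
    """3D Plummer stellar density, total luminosity L, scale radius a."""
    return 3.0 * L / (4.0 * np.pi * a**3) * (1.0 + (r / a) ** 2) ** (-2.5)

def plummer_sigma(R, L, a):
    """Projected Plummer surface brightness, fit to the photometry."""
    return L * a**2 / (np.pi * (a**2 + R**2) ** 2)
```

A handy property of the Plummer model is that the scale radius `a` is exactly the projected half-light radius, which connects directly to the half-light mass estimator discussed earlier.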
This mass-concentration relation, given the matching condition at r1, will then be transferred to rho-0 and sigma-0, which are otherwise free parameters that you don't know a priori. So after all, once you have this mass-concentration relation, which essentially means staying on this gray band here, your only real parameter to be determined from the data will essentially be r1. Of course, this band is pretty large and there is a lot of scatter, so there is also room for these guys to move. So let me show the fit; let me tentatively fit the data according to what I've just told you. The cold dark matter fit here is also forced to follow the same band. Now, I have already told you that there is a bit of a problem, so you would not expect an NFW to fit the data well once you are restricted to stay in this band, as you can see from this point and this band here. And indeed, for most of the objects, the cold dark matter fit, which features two parameters from the NFW plus four parameters from the orbital anisotropy, essentially fails in most of the systems that I have been studying. However, when you go for the SIDM proposal and you restrict yourself to a minimal orbital anisotropy parameterization, like a constant (so you have three parameters from the SIDM model, and on top of them an orbital anisotropy that you pick, for simplicity, just as a constant), you really improve the fit quite a lot, but not enough. Indeed, you have to go for the most general parameterization that I was showing you before to get a wonderful fit of the data. Now, this wonderful fit of the data also allows you to respect the N-body constraint. It means that what we expect to see now is that we match the large scales, where SIDM recovers the standard cosmology, and then at the smallest scales we are able to account for these points. So let's verify this.
So, in order to do this, I performed a Bayesian analysis with this seven-parameter model on these kinematic data, also taking into account some theory conditions about the positivity of the stellar phase-space distribution, and other technical things that you can ask me about later if you want to know. And the result is pretty nice, as you can see. You can essentially match the points that you wanted to match; this you knew before, because you were fitting all the data, so you were expecting this. And this is a cross-check you can also do: within the 90 percent probability region you are exactly on the prediction of cold dark matter that you would like to respect. So that's a good signal. Now, the last thing is: what is the SIDM cross section that you extract from this exercise for each of these objects? To do that, one should again go back to the condition that the rate of scattering is almost equal to the inverse of the age of the system, which can be written in this way here. And with the help of this simplification, you end up estimating sigma over m as this quantity, within the Maxwellian approximation. We are allowed to use the Maxwellian approximation for the mean velocity because, in the thermalized region, we would expect the velocity distribution to be close to Maxwellian. And now the point is: okay, with this formula, from my fit one can try to get the range of sigma over m. The last step, which is maybe a little bit delicate and worth mentioning, is that you however need to marginalize over the age of the system. We have chosen, in a kind of conservative way, a large window of ages in gigayears. And then you're done: you can get the cross section from the data. So this is the money plot of the talk, probably.
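The cross-section estimate and the age marginalization just described can be sketched as follows; the age window and the example numbers are illustrative assumptions, not the values used in the actual analysis:

```python
import numpy as np

MSUN_G = 1.989e33   # grams per solar mass
KPC_CM = 3.086e21   # cm per kpc
GYR_S = 3.156e16    # seconds per Gyr

def sigma_over_m(rho1, sigma0, t_age):
    """Estimate sigma/m (cm^2/g) from the rate condition
        rho(r1) * <v> * (sigma/m) * t_age ~ 1,
    with the Maxwellian mean relative speed <v> = (4/sqrt(pi)) sigma0.
    Inputs: rho(r1) in Msun/kpc^3, sigma0 in km/s, t_age in Gyr."""
    rho_cgs = rho1 * MSUN_G / KPC_CM**3
    v_cgs = 4.0 / np.sqrt(np.pi) * sigma0 * 1.0e5
    return 1.0 / (rho_cgs * v_cgs * t_age * GYR_S)

def marginalize_age(rho1, sigma0, t_min=8.0, t_max=13.0, n=100_000, seed=0):
    """Marginalize over the unknown halo age with a flat prior on
    [t_min, t_max] Gyr (an illustrative window), returning samples."""
    t = np.random.default_rng(seed).uniform(t_min, t_max, n)
    return sigma_over_m(rho1, sigma0, t)
```

For instance, rho(r1) of order 1e7 Msun/kpc^3, sigma0 around 15 km/s, and an age of 10 Gyr give sigma/m of order 1 cm^2/g, the ballpark quoted in the talk.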
The cross section that you read off, or better, the posterior density function of the cross section per unit mass in the SIDM model, from the dwarf data, with the cosmological prior that I discussed before, shows that you basically span a few orders of magnitude. And the distributions are not peaked on a single value, so that's not really a good sign from the point of view of the best expectation we may have for SIDM. However, there are essentially, I would say, three groups: one probing a cross section around 0.3, one in agreement with the range one to three, and then actually two of these objects that really prefer sizable cross sections. Now, at the 68% probability region you can see that, for these distributions, six out of eight objects are in agreement with a range from 0.3 to three. Considering that there may be some sort of systematics in what we have been doing, I think this is not at all a bad result. More interestingly, this range of cross sections is in agreement with the ballpark that other people have found studying other kiloparsec-sized systems, like the dwarfs that I was discussing with you before here. So this 0.5 to three remains a good benchmark for dwarf spheroidals. However, as we may have guessed from the beginning by looking at SIDM simulations, SIDM per se cannot really be the full solution to the too-big-to-fail problem. There are outliers that may be connected to environmental effects and systematics that are difficult to take into account in a minimal study like the one I have been showing you. So I don't think this plot is a killing plot; actually, I think it's a supporting plot, after all, for the SIDM proposal, but it probably needs some more ingredients. First of all, it probably needs a test, and this test may be provided by field galaxies.
So field galaxies are interesting objects because essentially they don't come with these caveats about the influence of external tides from a host galaxy. What I want to say is that here there might have been problems with the Milky Way, which to some extent may have influenced this result through its tidal field. With field galaxies you would be on much more solid ground in trying to assess the range of cross sections for a possible SIDM solution to the small-scale structure problems. There are simulations that already show this might be possible, even though here too you may have objects that are peculiar, because these are not really on the same footing as the other objects here; but the situation might be better than the one depicted in the Milky Way case. So a different study is needed on this side, probably also with the machinery that I've shown you. And for my particle physics friends (after all, I'm very interested in model building as well), it would be interesting to consider more seriously not SIDM per se (this is the last message I want to give you today), but SIDM plus this idea of late kinetic decoupling, which was originally proposed in these papers years ago to explain the too-big-to-fail and the missing-satellite problems in one shot. So essentially, if you take a minimal model with a dark photon, in that model you also have another species, F, a dark radiation species, and if you play well with the parameter space you essentially end up with the possibility of having kinetic decoupling of this chi-F scattering at around one keV.
Now, one keV, according to the latest data, especially from the Lyman-alpha forest, seems to not favor a solution of the missing-satellite problem. If you look at this plot in this paper here, you can see that the number of satellites for a kinetic decoupling temperature greater than one keV is very similar to the one that you get in CDM-only simulations; this paper shows that essentially you can just reduce it by about 10 or 30%. However, very interestingly, you are doing very well on the side of the too-big-to-fail problem instead, because this late kinetic decoupling gives you a further mechanism of suppression of the matter power spectrum. And that's the plot here. So a combination of this late kinetic decoupling, together with the fact that you're able to have cores with the self-interacting dark matter proposal, may be the natural solution for this kind of problem, from the point of view of a particle physicist who doesn't want to rely on other possible astrophysical explanations. Of course, this modeling needs much more investigation, and I think that actually this is ruled out if you look at the new Lyman-alpha data; but then, here the authors were using a self-scattering cross section of 0.1 centimeters squared per gram, so basically this result was driven by this low kinetic decoupling temperature. So it may be interesting to study this picture with a better interplay, where you have a larger cross section and a larger decoupling temperature, maybe in a range that is viable. And with this I really just want to recap; I think my time essentially finished 10 minutes ago, so sorry for the little delay, and now it's time for your questions. Thank you. Okay, thank you very much. You can hear me, right? Yeah, so let me remind you... Okay, thanks. You can ask questions to Mauro, guys, via the YouTube live chat system. So there is this chat open there for you. I don't know if you have questions here from the audience. Yeah, I have a question for Mauro.
Sure. Yeah, very nice talk, but I was wondering: in your analysis, did you include at some point velocity-dependent cross sections? Or do you have an idea of how this type of velocity-dependent cross section could change the outcome of the analysis that you did? So, basically, in the modeling that I showed you... so let me just, sorry, share the screen again. These are very interesting questions. In this modeling here, you basically read off a cross section and a mean velocity when you study the... so let me just pick up the right slide. So, basically, you see, your question is: can you include some velocity dependence here? Of course, you could do it. What I'm using here is a simplified assumption: I'm just factorizing in this way, so as to read off a velocity and a cross section here, and the mean velocity, after all, is connected to this velocity dispersion. So the answer to your question is that one could (I didn't do that, but one could) study directly some particle cross section as a function of velocity that you can predict, I don't know, with some model. You basically write down your Lagrangian, you compute your velocity-averaged cross section, and you can directly fit this expression, more or less. So yeah, it can be done, and it may be an interesting thing to do. Yeah, in fact, I was thinking about the case of a particle model that can give you a non-trivial dependence on the velocity. Yeah, nice, nice answer. Great, are there more questions? I have one, actually. Okay, sure, okay, Mauro. So, for your fit you're using this set of around eight dwarf galaxies, I think, but also, at the very beginning, you said that we have data from around 50 or so. What happens if you take the whole set for your fit? Ah, it would be nice to know; I should do it. Now, the problem is that... sorry, let me show you the screen again.
So the problem is, and this is just to give you an idea, this is the quality of the data that we're looking at now with my eight galaxies, and these are the ones you would like to study as well. So you see, it's certainly interesting to study ultra-faint dwarfs and include them in the study that I've done, but unfortunately the quality of these data is not really there; there are several problems. First of all, you really don't know if some of these objects are in the so-called dynamical equilibrium needed to include them in the study. And even if you assume that, they will probably not be so constraining. These are actually two examples of the best ultra-faints that we have, probably, together with another one that is about at the same level. So the answer to your question is that it can be done, but I don't think it would change the result a lot. Okay, thanks. So Camille, you had a question, I think? Yeah, hi Mauro, can you hear me? Yeah, yeah, sure. Okay, thank you for the nice talk. I want to ask again about the velocity dependence of the cross section. Can you go back to your slide 24, please? Yes, thanks, because I don't think I answered completely. So let me go to 24. Yeah. Okay, yeah. Yeah, okay. So, what would happen if you plot the best fit for sigma over m as a function of the mean velocity in each galaxy? Okay. Because in principle, if they have different velocity dispersions, you should expect different cross sections, right? Yeah, that's a great question. So part of the answer to your and the previous question is that these galaxies are supposed to roughly probe the same velocity. This is the reason why I'm using this ansatz here and trying to factorize things. In other words, when you look at this paper here, okay, where they read off some velocity dependence, they have really used different objects that do not belong to the same class; these are completely distinct objects that are probing really different velocities.
And at that point, playing the same game that I've played, you are able to fit different cross-sections. Now, you would like to do the same thing within a single class of objects, and this would not really be possible, essentially. As a cross-check of this fact, you can look at the velocities that I get. Unfortunately, I don't have a table, but if you do a similar plot of the velocities, you can see that the mean velocity that I'm getting is very consistent for all of these objects, and it goes from 30 to 50 kilometers per second, essentially. So there is not really a velocity dependence here, unfortunately, but you are still able, to some extent, to fit a functional form in the velocity if you want. You could do that. But the velocity dependence would span a range that goes from 30 to 50 kilometers per second; it's not going to be something that goes from 50 to 1,000. I hope I answered your question. Yeah, you did, but then I have another one. Yeah, yeah, sure. So can you elaborate more on how you model the age of the dwarf galaxies? Because, I mean, this is also a crucial variable. Yeah, I'm doing the most naive thing that you can do in a Bayesian analysis. You don't know a parameter, so you assign a flat prior: you pick the largest range that you can find in the literature and basically marginalize over it. Okay. So it might be that part of this difference may essentially be explained by this systematic, but take into account that this marginalization effect is in all of these histograms. So probably the effect is very small, after all. In other words, the spread of these distributions already includes this uncertainty. Okay, thank you. Thanks for the nice talk. Okay. But I want to say one very interesting thing, which is that this picture may change completely if you use something that goes in this direction here.
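[Editor's note: the "assign a flat prior and marginalize" step for the unknown halo age can be sketched with a toy example. The likelihood below is invented for illustration, assuming only that the observable core size grows with sigma times age; the numbers and the scaling are hypothetical, not the analysis from the talk.]

```python
import math

def likelihood(sigma, t, core_obs=1.0, err=0.2):
    # Toy Gaussian likelihood: hypothetical scaling where the
    # predicted core size (kpc) is sigma * t for age t in Gyr.
    chi = (sigma * t - core_obs) / err
    return math.exp(-0.5 * chi * chi)

def marginal_likelihood(sigma, t_min=5.0, t_max=13.0, n=1000):
    # Flat prior on the age t over the widest range taken from the
    # literature: integrate the likelihood over t and normalize by
    # the prior width (midpoint rule).
    dt = (t_max - t_min) / n
    total = 0.0
    for i in range(n):
        t = t_min + (i + 0.5) * dt
        total += likelihood(sigma, t) * dt
    return total / (t_max - t_min)

# The age uncertainty broadens the resulting distribution in sigma,
# so the quoted spread already folds in this systematic:
for s in (0.05, 0.1, 0.2):
    print(s, marginal_likelihood(s))
```

The point made in the answer above is visible here: the marginalized curve in sigma is wider than the fixed-age one, so the posterior histograms already absorb the age systematic.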
So that would be the interesting thing to see now: by changing the assumptions a bit, you may be able, with this SIDM plus kinetic decoupling scenario, to essentially remove this tension. And this is something that I hope to be able to tell you about soon. Okay, thanks. There's another question in the chat, by Neil Bandas, going in the same direction, I think, about this slide 24. He's asking about your final posterior PDF plot: in that plot, the peaks span a range of orders of magnitude. Should we think of that as being due to the diversity of the different dwarf galaxies? Ah, this is a very nice observation. That is another possible interpretation of the plot, absolutely. If you want, it might be due to the fact that there are environmental effects that cannot be neglected. I have tried to neglect them, and so I ended up with different cross-sections. Okay, yes, thanks. So I have another question. When you show the profile, the dark matter profile that you're fitting... can you go back, please, to this? Yeah, sure, sure. Sorry, I'm just switching off and switching on. Okay. So, what is the dark matter profile? The problem is this one, right? The one with the... Ah, sorry, I don't remember which slide, sorry. The dark matter profile... I guess it was around slide number 24, maybe. Ah, sorry, okay. Maybe this one? You had a parametrization of the profile. The parametrization of the profile is here, in principle. But this one goes back to this one. Maybe it's this one that you wanted to know? No? No, well, that's not it. I remember a profile with some betas and this R1, I think, as well. Okay, it was the most general one, the one with beta and beta-infinity. Yeah, you just passed it, I guess. Ah, this one? Yes, exactly. I think it was the previous slide, the one before. That one. Okay, so this beta of R... Ah, sorry, okay. So I was confused about whether that's the dark matter profile, or what this beta of R is. Oh, very good question. Thanks for asking.
So this is why I was putting in these funny pictures. This is the dark matter part; this is the stellar part. So this beta is related to the stars, not to the dark matter. Actually, in SIDM... To the stars, you say? Yes, yes. Because in the modeling, in the Jeans analysis, to do this fit... sorry, I have to go back here. You can see that this equation is an equation for the motion of the stars in the system, and the stars are tracing the gravitational potential that is dominated by the dark matter. So, by studying the stellar motion, you are able to understand what is happening to the dark matter. And this beta comes in here, you see? I see. While the beta of the SIDM is very... it should be trivial to predict. Since it's a Maxwell-Boltzmann distribution, you expect that it's essentially zero; this beta must be zero. In the SIDM simulations that people have done, they have shown that essentially the self-interactions isotropize the phase-space distribution, and you end up with an anisotropy that is vanishing. Okay? Okay, thanks. So, are there more questions? Yeah, I have one that is quite fast. So, with this whole analysis, you have self-interacting dark matter and you're changing the shape of the inner part of your dark matter profile. My question is how this affects other types of observables, like, for instance, gamma-ray fluxes, because by changing the inner part, you're changing the part that is the brightest in the case of gamma rays or neutrinos and so on. So, could an observation of gamma rays be affected by this type of effect, by changing the inner core of the dark matter distribution, something like that? Well, this is a very interesting question. It's something that is on my to-do list, to try to study possible indirect signatures. But this is a bit more of a model-dependent question, I guess.
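[Editor's note: the spherical Jeans equation mentioned here — stars as tracers of the dark-matter potential, with a stellar anisotropy beta — can be sketched for the constant-beta case, where it has a closed-form solution. The Plummer tracer and NFW halo, and all parameter values, are hypothetical stand-ins, not the profiles fitted in the talk.]

```python
import math

G = 4.30e-6  # Newton's constant in kpc (km/s)^2 / Msun

def nu_plummer(r, a=0.5):
    # Stellar (tracer) density: Plummer profile with scale a (kpc).
    return (1.0 + (r / a) ** 2) ** -2.5

def mass_nfw(r, rho_s=1.0e7, r_s=1.0):
    # Enclosed dark-matter mass (Msun) for an NFW halo.
    x = r / r_s
    return 4.0 * math.pi * rho_s * r_s ** 3 * (math.log(1.0 + x) - x / (1.0 + x))

def sigma_r2(r, beta=0.0, r_max=50.0, n=5000):
    # Jeans equation with constant stellar anisotropy beta:
    # sigma_r^2(r) = 1/(nu r^{2 beta}) * Int_r^inf nu(s) s^{2 beta} G M(s)/s^2 ds
    # (midpoint rule, truncated at r_max where the tracer has died off).
    ds = (r_max - r) / n
    total = 0.0
    for i in range(n):
        s = r + (i + 0.5) * ds
        total += nu_plummer(s) * s ** (2 * beta) * G * mass_nfw(s) / s ** 2 * ds
    return total / (nu_plummer(r) * r ** (2 * beta))

# Radial velocity dispersion of the stars at 0.5 kpc, in km/s;
# beta enters only through the stellar side of the equation:
print(sigma_r2(0.5, beta=0.0) ** 0.5)
print(sigma_r2(0.5, beta=0.5) ** 0.5)
```

This makes the degeneracy discussed later concrete: different (beta, M(r)) pairs can produce similar line-of-sight dispersions, which is why beta must be marginalized over in the fit.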
Of course, you may assume that, with a simple modeling, you end up with a J-factor computation for these galaxies within this model, and then you can study signals. But I don't see at the moment smoking-gun signatures along these directions, only possible constraints. So what people have done, for example Thorsten Bringmann and company, I'm sure you know, is basically killing simple models like these ones, exactly along the lines of what you suggest, right? And so, yeah, the interface with indirect searches is interesting, and certainly results like those are not considering the profile that I have derived. So, to be fair, one should redo that kind of search using the SIDM profile. But I think that, after all, for the aim of these kinds of studies, if you pick up, for example, the result from Fermi for the benchmark scenario, it's fine. It's going to be a cored profile, more or less on the same level of signal-to-noise ratio as the one that I have obtained from my fit, I guess. But I should do it to prove it. Okay, thanks. Okay, are there more questions? This will be the last one. So I have one, just out of curiosity. At the very beginning, you talked about the too-big-to-fail problem; can you please go back to that slide? Sure. This I know what it is. So here, right? Exactly. So I've seen this plot several times, but, okay, the lines correspond to N-body simulations and the points to observations, right? Exactly, exactly. So why does each dwarf correspond to a dot, sorry, and not to a line? Yeah, very nice question. So basically, from the observational point of view, each dwarf corresponds to just one point because of this problem: there is a degeneracy if you want to predict the whole line from the observations, and so there would be huge bands here, okay? So what people do is pick up only the best-determined point of this curve. I see.
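[Editor's note: the J-factor computation mentioned in the answer above — the quantity controlling annihilation signals from a dwarf — can be sketched in the point-source approximation. The cored profile, distances, and units below are hypothetical illustrations (Msun^2/kpc^5 rather than the customary GeV^2/cm^5), meant only to show why a core suppresses the signal relative to a cusp.]

```python
import math

def rho_cored(r, rho0=1.0e7, r_c=0.3, r_s=1.0):
    # Hypothetical cored halo (Msun/kpc^3): NFW-like outside,
    # flattened inside a core radius r_c, as SIDM fits tend to prefer.
    x = max(r, r_c) / r_s
    return rho0 / (x * (1.0 + x) ** 2)

def j_factor(rho, d=80.0, r_max=2.0, n=4000):
    # Point-like approximation for a distant dwarf at distance d (kpc):
    # J ~ (1/d^2) * Int 4 pi r^2 rho(r)^2 dr over the halo volume.
    dr = r_max / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        total += 4.0 * math.pi * r * r * rho(r) ** 2 * dr
    return total / d ** 2

# Shrinking the core recovers a cuspy profile; since J weights rho^2,
# the cusp yields the larger annihilation J-factor:
cusp = lambda r: rho_cored(r, r_c=1e-4)
print(j_factor(rho_cored), j_factor(cusp))
```

This is the sense in which indirect-search limits derived for cuspy benchmark profiles would need to be redone with the SIDM cored profile, as the speaker notes.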
So this point is the one that minimizes this bloody degeneracy with the stellar orbital anisotropy. Now, simulations do not have this problem: whatever they get, they get for the whole subhalo, so they show you the entire line here, okay? And then you have the problem that you predict many more massive satellites here than what you observe. Is it clear now? Yes, yes, okay, thanks. You're welcome. So I think there are no more questions. So we would like to thank you, Mauro, for this super nice talk, and all of you for attending. And let me remind you that we'll meet again for another Latin American webinar on physics two weeks from now. We'll have a talk by Professor Ferdinand Lofi, from Montana State University, two weeks from now, on February the 14th. So I hope to see you in two weeks. And thank you very much, Mauro. Bye, guys. Thank you all. It was really a pleasure. Thanks. Ciao. Ciao.