So, hello everybody, my name is Roberto Lineros and I'm going to be the host of this webinar. Before we start, let me remind you that we have a different configuration this time. If you want to ask questions of the speaker, you have to go to the Q&A in the YouTube live chat. If you are following this stream on YouTube, you will see a small chat on the right part of your screen. There you can write a question, and afterwards I will put it to Nestor. We also have a WordPress page where you can find all the previous webinars we have done. You can watch them again and leave comments and questions for the speakers, and I will try to contact them in order to answer those questions. So let's get to the main event of today. The speaker is Nestor Mirabal. He is an NPP senior fellow at NASA Goddard Space Flight Center. Before that he obtained a PhD at Columbia University, and afterwards he did a postdoc at the Universidad Complutense. The title of his talk is "Identifying Dark Matter Subhalos with Fermi." Before Nestor's talk, I just want to let you know that we have some important invited guests in the Hangout session: Luke, Miguel Angel, Roberto Ruiz, Adelostri, and Roberto Munoz. They will also be participating in the discussion after the webinar. So if Nestor is ready, maybe you can start. Okay. Thank you, Roberto, and thank you so much for the invitation. I think this is a really great initiative. I was born and raised in Venezuela, although I've lived most of my life away, and I think doing this for Latin America is a great idea. So my talk today will be about using machine learning to find dark matter subhalo candidates with Fermi. 
The outline of the talk has three parts: an introduction, then subhalo searches using machine learning, and then I will briefly mention where we go from here. The level of the talk might seem low at times for some people. I apologize for that, but I didn't know what the level of background knowledge would be, so I'll start from the very beginning and move on from there. So let's start with what dark matter is, and a brief history of why we suspect there's something we're missing in particle physics. Already in 1933, Zwicky had realized that the virial mass of the Coma cluster was much larger than the mass measured from the galaxies in the cluster. He called this dark matter because he didn't understand where the mass was coming from. That idea was left largely untouched for many years, until rotation curves of spiral galaxies started to show a weird behavior. I'm showing one here on the right. The radial velocities of stars in the outer regions of spiral galaxies, instead of falling off as expected for Keplerian orbits, stay almost flat, which means that even outside the disk, where the visible mass drops, there seems to be additional mass we're not detecting. That is consistent with what Zwicky found. On the lower left I'm showing the Bullet Cluster: the bright yellow is the X-ray emission, and the contours show where most of the mass is. So the gas is not tracing most of the mass in this image. All of this comes to the same conclusion: we're missing some mass that we're not detecting directly, in the visible or even in other wavelengths. This is what we call dark matter. From the very beginning, there have been many ideas to explain what dark matter is, most of them related to particles missing from the standard model. 
There are so many theories that I wouldn't have time to cover them even in a few sessions of this seminar. For this particular talk, I will only refer to WIMPs, weakly interacting massive particles. These are the prototype for cold dark matter, and that's where most of the searches I will describe are centered. There are many other options, but I will not touch on those today. So if we take this cold dark matter to very large scales, it seems to reproduce what we see in the universe, these very large structures. If we evolve this in time, we see that structures start forming and take shape, acquiring more density and more substructure, but the scale here is huge. If we go to the present day, some of the brightest dots here are the brightest clusters of galaxies and the biggest galaxies we know of. For this talk, though, we want to zoom in to one of these regions, down to kiloparsec scales, to a galactic scale, something like our galaxy. What happens around our galaxy in terms of dark matter? If we zoom in to kiloparsec scales, this is from the Aquarius project, we see that these numerical simulations of dark matter reproduce what we measure in spiral galaxies. There is this kind of dark halo that we're not detecting in the optical, but it is there in the simulations, this fuzzy halo that we see in the picture. I've plotted the scale of the galaxy in the inset here, and one of the predictions at the kiloparsec scale is that there should be a lot of substructure around galaxies like ours: hundreds or even thousands of these things we call subhalos, because they're part of a much larger halo of dark matter. 
The brightest clumps correspond to what we call dwarf galaxies; we have detected many of those. The faint ones are probably what we refer to as ultra-faint galaxies, things with very few stars. But we have measured only tens of those, while we expect hundreds or even thousands, and many of them will probably not have any stars at all. So how do we find them? That is the purpose of this project: we want to try to find these subhalos using Fermi. This is a very small-scale piece of what Fermi is doing in the universe, looking for dark matter across this huge structure, but we'll be mainly looking for things very nearby in this picture. So what is Fermi? The Fermi Large Area Telescope is a pair-conversion telescope. It has a huge field of view of 2.4 steradians, it covers the whole sky every three hours, it's in survey mode all the time, and it has almost a square meter of effective area. One key thing that will pop up later is the resolution: the point-spread function is only on the order of a few tenths of a degree at best. That is the best resolution we have achieved in gamma rays, but it's not comparable to other wavelengths, and this is a problem, and maybe an opportunity, as we will see later. So why is the Fermi LAT well suited for this type of search? 
Because we think we can look for gamma rays from WIMP annihilation. The fact that we haven't detected dark matter doesn't mean it's not there; we just have to find it. The key equation is the one presented here. The first term is particle physics: we assume that WIMP annihilation produces standard model particles, which then decay into gamma rays, creating the highly curved spectra shown on the lower left, very distinctive spectra that we can look for. The scaling factor, how bright or how faint this signal will be, goes as the inverse square of the distance to the object and as the square of the dark matter density, basically how many dark matter particles there are to annihilate. This is the J factor: the higher the J factor, the brighter the flux in the instrument. And we expect these WIMPs to sit mainly in the sweet spot for the Fermi LAT, roughly the 10 GeV to 1 TeV mass range, so this is very well matched to the energies covered by the Fermi LAT. It's a very feasible search, and it makes complete sense to do it. So if we go back to this picture, what we're showing here is just the dark matter particles simulated over many years. If we want to see what this map would look like if the particles were annihilating, it would look something like what I have in this next slide. The brightest spot would be this big halo that presumably creates the flat rotation curves in galaxies, but there would also be a huge collection of point sources around us that should pop up quite easily. The biggest points here should be nearby dwarf galaxies, but there should be many, many of these small or nearby subhalos that could be detected as well. So how does this compare to the actual view from Fermi? 
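[Editor's note: for reference, the annihilation flux described in this part of the talk is usually written as below. This is the standard form for self-conjugate dark matter; normalization conventions vary between papers.]

```latex
\frac{d\Phi_\gamma}{dE}
  = \underbrace{\frac{\langle\sigma v\rangle}{8\pi\, m_\chi^{2}}\,
     \frac{dN_\gamma}{dE}}_{\text{particle physics}}
  \times
  \underbrace{\int_{\Delta\Omega}\int_{\text{l.o.s.}}
     \rho_\chi^{2}(l,\Omega)\, dl\, d\Omega}_{J\ \text{factor}}
```

For a point-like subhalo at distance $d$, the J factor reduces to roughly $(1/d^{2})\int \rho_\chi^{2}\, dV$, which gives the distance-squared and density-squared scaling mentioned in the talk.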
Well, it sort of resembles that, but it's more complicated. With Fermi we see many other things apart from dark matter. This map is based on, I think, seven years of Fermi data. The disk of the galaxy is quite clear, and then we see many of these point sources. You might say, well, there's your dark matter, but not so fast: these are mainly black holes and neutron stars that also emit gamma rays. Most of the point sources you see in this map are actually black holes, active galactic nuclei, jets of active galactic nuclei; they are the large majority of the sources that make up the gamma-ray sky. So in a way, we have to deconstruct this map into different slices and try to find the dark matter component. The diffuse emission is mainly gas and particles in the galaxy that create this glow; then there are all these black holes and neutron stars around us; and there's an isotropic component from unresolved sources very far away. If we can model these three slices, we'll hopefully be left with the dark matter component. So is there even room for these subhalos in the Fermi data? It turns out that there is. We've only identified about two thirds of the point sources we see here, so about one third of all point sources detected by Fermi don't have a counterpart. We call these unassociated: we haven't identified the source that creates the gamma rays. This is a huge discovery space that we want to explore. So why are these things unassociated? Again, we go back to what I said before. The point-spread function of Fermi is a few tenths of a degree in the best case. Picture the moon: the moon subtends about half a degree on the sky, so the localization for Fermi is usually comparable to what the moon subtends in the sky. 
So it's a huge area on the sky. Inside this blue circle I'm showing here, if you think of the moon, there would be hundreds or thousands of craters. If you're looking for the one crater responsible for something, you would have to distinguish between hundreds or thousands of craters, and that's very hard. In our case, we're looking for the one source that is creating the gamma-ray emission. On the background you can see a typical optical sky. I'm plotting different circles here for Fermi, for different event selections, but even with the best event selections and the brightest sources, the error ellipse is still quite large. So there are many possible candidates that could produce the gamma-ray emission inside the circle, and that's why, for many sources, it's not obvious which one is making the emission. In this case, we suspect this one, probably a neutron star. Again, it's a big problem, because with this technique of measuring gamma rays it's very hard to localize a source on the sky. In other wavelengths the localization is tiny; you wouldn't even see a circle, because it's at the arc-second scale, so it's usually not a problem there. Gamma rays have the worst localization of any wavelength, and this is a big problem. But it's an opportunity at the same time: many of these things, or some of them, could be the dark matter subhalos we're looking for. People have explored this discovery area, these unassociated sources, and mainly what has been done so far is to look for the highly curved spectra I showed before and find objects that are reasonably fit by these theoretical expectations. Here's a good example from Bertoni et al. that shows this type of fit to one of the sources. So what we asked was quite simple: can we do this in a different way? 
Can we train a computer to look for these candidates directly using the Fermi source catalog? This is mainly the work I will describe today: a systematic search for subhalos using machine learning. By machine learning, I'll go with a very basic definition from Arthur Samuel in 1959, one of the pioneers of the field: the field of study that gives computers the ability to learn without being explicitly programmed. What that means for the rest of the talk is that I'm only going to discuss machine learning algorithms based on decision trees. A decision tree is very similar to how we make decisions in everyday life: it's just if-then nodes, like the ones we use every day to process things. Human decisions are very basic; we can weigh two or three variables at a time. Is this spectrum highly curved? Yes or no, and we go on from there. A computer can handle many more variables at once: it can process hundreds or even thousands of variables, because it can construct these if-then networks very easily and come up with decisions. Basically, what one has to give the computer is the algorithm for constructing these networks, how they are weighted, and how they make decisions. Other than that, we don't program the computer at all. That's what machine learning is. I will show a better walkthrough of this a bit later. So where do we learn from? One of the things realized early on is that the neutron stars that dominate the catalog, we have about 200 neutron stars, or pulsars, have very highly curved spectra, very similar to dark matter at a few tens of GeV. We could train our computer with theoretical models, but actually we have real objects that are almost identical to those models and have gone through our detector. 
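[Editor's note: a single node of such a decision tree is nothing more than an if-then test on one catalog feature. Here is a toy illustration; the feature names and thresholds are hypothetical, not the actual Fermi catalog columns or learned cuts.]

```python
def classify(source):
    """Toy hand-written two-level decision tree for a gamma-ray source.

    `source` is a dict with two hypothetical features:
      - 'curvature':   spectral curvature significance (pulsars/subhalos curve)
      - 'variability': variability index (pulsars are steady, blazars flare)

    Real trees are learned from labeled data, not written by hand; this only
    shows the if-then structure the talk describes.
    """
    if source['curvature'] > 4.0:          # highly curved spectrum?
        if source['variability'] < 20.0:   # steady source?
            return 'pulsar-like'           # candidate pulsar or subhalo
        return 'flaring-curved'            # curved but variable: unusual
    return 'AGN-like'                      # power-law spectrum: likely blazar


print(classify({'curvature': 6.1, 'variability': 5.0}))   # pulsar-like
print(classify({'curvature': 1.2, 'variability': 80.0}))  # AGN-like
```

An ensemble method such as random forest simply builds many such trees on random subsets of the features and sources, then takes a majority vote.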
So we have characterized about 200 of these things, and maybe we can just use them directly to train the algorithms to find more objects that look like that, with very highly curved spectra. That is what we have done. Where do we get the information? From the Fermi catalogs, one of the main outputs of the LAT collaboration. There are 75 columns per source, and in the latest catalog there are more than 3,000 sources, with all the spectral information, the variability of the source, the location, lots and lots of information that has already been produced by the collaboration. We just use this table directly to look for subhalos, without doing any fitting. So this is the basic structure of a machine learning analysis. We take a data set, in this case the almost 200 pulsars; these are the highly curved sources. The things we don't want are the black holes, which tend to have power-law spectra, so they're spectrally very distinct, while dark matter would most likely resemble something highly curved. We take two thirds of all these objects to train the model, using two different algorithms: random forest and extreme gradient boosting. These are tree-based algorithms, and they produce more or less the same results. With the one third of cases we left out, we test how well the model does: we build the network of decision trees with two thirds, then run the remaining third, for which we have labels, through the network to see how well we do. Typically, we misclassify less than 3% of all cases; the misidentification rate is very low. We then apply the model to about 1,000 unassociated sources in the catalog, and in this huge data set we find 14 of these highly curved candidates at latitudes higher than 20 degrees. So what does this mean in terms of dark matter? We have to put it in context; on its own, this number doesn't mean much. 
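[Editor's note: the training-and-testing workflow just described can be sketched as follows. This is a dependency-free toy: a synthetic one-dimensional "curvature" feature stands in for the 75 catalog columns, and a single learned threshold (a decision stump) stands in for the full random forest and XGBoost ensembles used in the actual analysis. All numbers here are illustrative.]

```python
import random

random.seed(42)

# Synthetic training set: label 1 = curved spectrum (pulsar-like),
# label 0 = power law (AGN-like). The real inputs would be the ~200
# cataloged pulsars and the associated AGN from the Fermi LAT catalog.
curved    = [(random.gauss(5.0, 1.0), 1) for _ in range(200)]
power_law = [(random.gauss(1.5, 1.0), 0) for _ in range(600)]
data = curved + power_law
random.shuffle(data)

# Two thirds to train, one third held out to test, as in the talk.
split = 2 * len(data) // 3
train, test = data[:split], data[split:]

def fit_stump(samples):
    """Learn the curvature threshold that minimizes training errors."""
    best_t, best_err = None, len(samples) + 1
    for t, _ in samples:  # candidate thresholds at observed feature values
        err = sum((x > t) != (y == 1) for x, y in samples)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

threshold = fit_stump(train)
errors = sum((x > threshold) != (y == 1) for x, y in test)
print(f"held-out misclassification rate: {errors / len(test):.1%}")
```

The held-out error rate plays the role of the "less than 3% misidentified" figure quoted in the talk; with the overlapping synthetic distributions used here it lands at a few percent. The trained model would then be applied to the unassociated sources to score candidates.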
If these 14 objects are dark matter subhalos, we can see how that translates into an annihilation cross section. This plot shows the predicted number of subhalos as a function of annihilation cross section, from Bertoni et al. 2015, for dark matter annihilating into b quarks, and the black dots mark this number 14. The prediction is how many subhalos an observer at our location in the galaxy would detect, as a function of annihilation cross section. And this number 14 is more or less what you expect for the thermal cross section of WIMPs. It's really encouraging, because it's approaching the prediction that the thermal cross section for WIMPs should be around 3 times 10 to the minus 26 cubic centimeters per second. So this is promising. But is this the only explanation? We will talk about that in the next slide. Just to place these limits in context with what has been done with Fermi: Fermi can look for dark matter in many other ways, in dwarf satellite galaxies, in the galactic center, in the halo, in clusters of galaxies. All these curves come from looking for dark matter in many, many other places, and the thick green curve is what we obtain with this work. Around 50 GeV they all converge, more or less, and below 30 GeV you start reaching even below the thermal cross section. So many methods are converging on these upper limits; we're not detecting dark matter down to very faint limits. The faintest here is from dwarf galaxies, and it's very promising that the limit from subhalos is consistent with it. What else could it be? We did some simulations of these neutron stars to see how many more we would expect in the catalog that we haven't detected, running a simulation over 13 billion years, the age of the galaxy. 
We find from the simulations that we expect at least 10 to 20 additional millisecond pulsars in the catalog that we haven't identified, with very similar, highly curved spectra. Since we're looking for things very similar to dark matter annihilation at a few tens of GeV, they will look very much like these detected neutron stars, so these candidates could be pulsars as well. It turns out that from the list we made of 34 objects above 5 degrees from the galactic plane, so far eight have been shown to be pulsars. People are working on many of the others, but sometimes they're very faint, and pulsation, finding the pulsation from the neutron star as it spins, is the only way to distinguish the two. So this work needs to continue. Some of these things could be dark matter, but we still don't know; we still need to work on that. So where do we go from here? I'm showing a projection of the sensitivity that Fermi will attain in 15 years of operations, and below 200 GeV this is quite impressive: it will explore regions well below the thermal cross section. This will be fantastic. Again, different methods should converge and find similar results, and hopefully in 15 years this won't be a limit but could well be a detection. We hope for that, and we keep trying. Above 200 GeV, I think the main driver of the future will be the Cherenkov Telescope Array. This is a ground-based array being built with a combination of large, medium, and small-sized telescopes, using the imaging atmospheric Cherenkov technique: any gamma ray impacting the atmosphere produces a cascade of Cherenkov photons that can be detected by the array. This is quite powerful above 200 GeV, pushing the reach up to around 300 TeV. And it will be operated with one site in the north and one in the south. 
It will beat the current similar instruments by a factor of 10; we expect it to be a factor of 10 more sensitive than current ground-based telescopes, covering the range above 200 GeV. The main search area for subhalos will be the key project to image a quarter of the sky. The huge advantage of Fermi is that it covers the whole sky every three hours, which is amazing, but no instrument on the ground can do that. So the best CTA can do is slowly but surely cover one fourth of the sky, probably to about five milliCrab in sensitivity. This area will include the northern Fermi bubble, the structure above the galactic plane, and it will cover Virgo, Coma, and Cen A, among other things. And maybe in this imaging we'll find things that don't have a counterpart, which could be due to dark matter, a direct signature of subhalos. Putting this together: in green we have Fermi in 10 years, in red CTA north, in magenta CTA south, and in blue HAWC. So there will be almost no gap in the search for WIMPs from tens of GeV up to 300 TeV or more, and this will let us explore everything that is there in the sky. Adding the sensitivities, this is the projection for Fermi in 15 years, and the dotted red lines are mainly the CTA projections. You see that CTA will also go below the thermal cross section above 200 GeV. This will be key for seeing subhalos, or dark matter in other parts of the sky, or neither. As for the timeline, construction has begun, especially at the northern site, and hopefully in five to six years we will be getting the first science verification data. And the hope is that Fermi will continue to operate while CTA is functioning, because that is the ideal scenario: Fermi and CTA working together. 
So just to summarize what I've gone through so far: the Fermi mission is playing a key role in the hunt for dark matter. Machine learning algorithms are very good at pinning down promising targets. For dark matter, the number of these highly curved candidates matches the prediction for a thermal cross section, but unfortunately the numbers are also consistent with what you expect from undiscovered millisecond pulsars in the Fermi catalog. The only way to tell these apart is pulsation, finding the pulsation from the object, and work is ongoing on that. For the future, Fermi and CTA operating together with HAWC might finally solve the dark matter mystery. I will stop there and open the floor for questions, if you have any. Thank you. Thank you very much, Nestor, that was a very interesting talk. So, sorry, OK. Just to tell the audience, before we start with questions from the people in the Hangout session: you can ask questions in the Q&A in the YouTube live chat. If you are following this on YouTube, it's easy to find; if you are following the transmission on the WordPress page, you can click on the link below to reach the YouTube live chat. So maybe we can start with questions from the participants of the Hangout session. Does someone have a question? Yes, I have one question. Can you hear me? Yes, we can hear you. OK, I think everything that you have shown is based on dark matter annihilation to b b-bar, right? So if you change the channel to W plus W minus, does it change significantly, or does everything remain basically the same? Let me go back, if I can find this plot here. Mainly we've tried, again, using the pulsars as a proxy for dark matter, and a pulsar is most like b b-bar, I would say. But as you see in this plot, many of the channels have this very distinct curvature. You're right that for W's it might make a slight difference. 
But if the decay channel has this highly curved structure, then you would pick it up regardless of what the channel is. We haven't tried other, flatter channels. Basically this concentrates on the highly curved decays, so b b-bar and tau are the main ones. OK, thank you. So I have a question, can you hear me? Yes. OK, I'm Roberto Muñoz. I'm an astronomer, but I've been working on the Virgo and Fornax clusters, looking for dwarf galaxies. Sure. And for sure those dwarf galaxies are quite small, too small, let's say, to be detected by Fermi, but I think the opportunity there is CTA, OK? So I have a question related to the machine learning model you are using. All the columns, all the features you are using were measured on the data; let's say you have a standard pipeline to measure quantities from the spectra you get from Fermi, and then you train the model. Has anyone tried a different approach, let's say computing different features from the ones people usually compute? Well, let me see if I can go to this one. For instance, this is a typical Fermi spectrum. I don't know if you can see the screen now. Yes. OK. So unfortunately, the number of photons we have is not the same as in the optical. We call this a spectrum, but we wish we had something like an optical spectrum; this is very far from that. Unless the source is very bright, it's very hard to get a lot of structure out of the sources. The bins are quite large, and there is scatter; you can see that this source probably doesn't have much structure. But this is the best we can do. I think it's worth looking into small features, but finding significance in this type of spectra will be difficult to prove. It's worth looking into, though, and people have done it in the past. People have looked for lines, and so far lines are the main such search. 
But lines have not resulted in anything positive or significant at this point. OK. And right now, are you mostly focusing on dark matter annihilation processes that could occur in the Milky Way, or could it be any extragalactic source? It could be any source; the problem is distance. Even M31, to Fermi, is only a few photons. This is a challenge; we wish we had a larger effective area, but we don't. In terms of clusters, I think what has been going on with clusters and all these galaxies being found there is very interesting, but one problem is that we also expect gamma rays from the intracluster medium, so detecting a galaxy in a cluster will be a huge challenge too. The main challenge is distance: the easiest things to detect are nearby and massive. As I said, even M31 looks like a fuzzy thing to us, so the closer the better. If you could find a cluster next to us, that would be amazing. OK, thanks. OK, I don't know, are there more questions? There is one from me now, please. OK, I have one more question. You used random forest and XGBoost on features that are engineered by humans from the raw Fermi data, but the raw data are basically two-dimensional counts, right? So did you also try something like a deep learning network, or a convolutional network, on these data, so that it can figure out the important features itself and go on from there? I think we haven't; I think that's the next challenge. That's something I would like to do, but again, the difficulty is that dealing with very few photons is very hard. The production of the catalogs is an amazing undertaking; even extracting all the features for one source is very time consuming. The ideal thing would be to feed the whole sky to an algorithm, but I think it will take more time to do that. 
But I think that's the next thing: to try to do it blindly, unsupervised. Yeah, OK, thanks. Does someone else have another question? Yeah, I may have a question. Please. It's also related to this differentiation between pulsars and dark matter, Nestor. Given all the somewhat negative answers you have given us in the last few minutes about this, can you actually foresee any way to proceed in the near future? Are there any ways you can think of to do better with this differentiation? And related to this: with CTA you don't have this problem in principle, because pulsars are basically not expected at those high energies, right? So in that regard, CTA would be an ideal instrument, at least to get rid of all this contamination from pulsars, right? Can you comment a bit further? You're right. With CTA, the only concern, and Alberto Domínguez is probably the one to ask about that, would be AGN with these weird EBL features; I'm not sure if the EBL can create highly curved objects, but it might be possible, and that could be a problem for CTA if things like that are there. If I can comment a bit there: in that case, I think you cannot expect a sharp bump, because the EBL cutoff happens over a much wider energy range, maybe a factor of 10 or so in energy. So that would be a possible way to tell an EBL-induced feature from a real source feature. But you're probably right there too. And again, yes, pulsars won't be a problem for CTA. For Fermi, unfortunately, well, with your own expertise, you know what spectra we're dealing with. So I think pulsation is the only way, and what we can do is select these candidates and hand them to the pulsar people. 
There are many people running these pulsar searches, the Einstein@Home consortium and many others, who use huge computing power to look for pulsations, because you're dealing with very few photons here, so it's very hard to find the pulsation. I think at the end of the day there will be many sources left without pulsations, and people will have to look for counterparts in other wavelengths. The sources that are really mute, that don't show anything, we will have to hunt down to see what they are. Whatever is left at the end of the day might be the most interesting, but it will take time to get there. That's the only way I see, because as Fermi pushes to fainter limits, it will find many, many more candidates, but it will be harder and harder to find pulsations. So what I would do is go from brighter to fainter and keep ruling out sources slowly. Whatever is left at the end could be it. But it's a lot of work. It's a lot of work, I fully agree. As a person working in the field like you, I'm also worried about whether we can actually use this approach to claim any kind of discovery. That's a completely different discussion, I would say. Even if you are left with just one single source that matches pretty well what you expect from dark matter annihilation and so on, still, how can you use that object to really claim any kind of discovery? So I think, yeah, as you know, we would have to find it someplace else too. It has to be in the galactic center, it has to be in dwarf galaxies; if we don't find it there, then there is something inconsistent in the model at some point. So we have to find candidates, and to make a claim, you really need detections in at least two different types of sources, I would say. 
I mean, that would be my minimum requirement to claim such a thing: to find it in subhalos and the galactic center, or subhalos and dwarf galaxies, or subhalos and clusters of galaxies. At least find two distinct populations where you find the same signature. But we have to sit down, talk, and figure out what to do. OK, so are there other questions here from the audience? Maybe we can start from the Q&A. There is one question from Andres Perez. First of all, he acknowledges your nice talk, Nestor. And the question is whether the number of subhalo candidates is compatible with the predictions of the cold dark matter simulations. Yeah, so given the time constraints, I don't think I did a good job explaining where this figure comes from. The way this prediction, this predicted number of sources, is made is basically by placing an observer inside these cold dark matter simulations and asking: if this observer had Fermi in his hands, how many subhalos would he be able to detect in these simulations? So this is where these numbers come from. The numbers, again, are consistent with cold dark matter simulations because they are coming from cold dark matter simulations. Maybe there's some sort of weird bias here, because we're using these simulations to interpret the observations, but it's the only tool we have so far. So yes, the numbers are fully consistent, because these predictions come from the best cold dark matter simulations of Milky Way size galaxies that we have. OK, so to continue a little bit with this question: for the cold-dark-matter-only simulations, people usually have this argument that they overestimate the number of subhalos.
So this is kind of the optimistic point of view for the subhalo search, because in other simulations that include, for instance, baryonic feedback, you're going to have fewer subhalos, or it depends on the cosmology and on how they treat the problem. You're right. I don't know if this predicted number of sources deals with that problem directly, because what this plot tells you is basically how many nearby sources you have. Because of resolution, you're right: maybe what this is saying is that we're not resolving things nearby very well, so you don't see that huge excess in this plot. You do see the excess when you move further out, but that's not represented in this plot, because you wouldn't detect that source; it would be too far away. So for the cold dark matter simulations, resolution and computing power will be key. That's also another challenge that is left over to do. If our smallest resolved mass is 10^5 solar masses, we don't really know much below that; the resolution is not good enough to make these predictions. So there is a huge challenge ahead to improve the simulations, and it will be very difficult, because this takes a lot of computing power. I would like to ask a question. Yes. Well, regarding the machine learning part of your talk and the future: do you think this kind of technology will be used in dark matter searches not only with Fermi, but in other fields in which we have a lot of data and can look for dark matter there? Do you think those methods of machine learning or artificial intelligence will be applied there, or will it be necessary to apply them in this field? Well, for sure. If you know what you're looking for, if you have a training set, if you can give the algorithm something to train on, you can basically train on anything.
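The "observer inside the simulation" prediction discussed a moment ago can be illustrated with a toy Monte Carlo: scatter subhalos around a fiducial observer, assign luminosities, compute fluxes, and count how many exceed a detector sensitivity. Everything here, the subhalo count, the luminosity function, the flux limit, is an illustrative placeholder, not an actual simulation or Fermi-LAT value.

```python
# Toy sketch of a mock-observer subhalo detection count.
# All numbers are illustrative placeholders.
import math
import random

random.seed(1)

N_SUBHALOS = 20000        # subhalos drawn within R_MAX (illustrative)
R_MAX_KPC = 250.0         # radius of the mock halo
L_MIN, L_MAX = 1.0, 1e4   # luminosity range, arbitrary units, dN/dL ~ L^-2
FLUX_LIMIT = 1e-4         # mock point-source sensitivity, same units

def sample_luminosity():
    # Inverse-transform sampling for a dN/dL proportional to L^-2 power law.
    u = random.random()
    return 1.0 / (1.0 / L_MIN - u * (1.0 / L_MIN - 1.0 / L_MAX))

detected = 0
for _ in range(N_SUBHALOS):
    # Uniform-in-volume distance from the observer (toy choice; real
    # analyses follow the simulated subhalo distribution instead).
    d = R_MAX_KPC * random.random() ** (1.0 / 3.0)
    flux = sample_luminosity() / (4.0 * math.pi * d ** 2)
    if flux > FLUX_LIMIT:
        detected += 1

print(f"{detected} of {N_SUBHALOS} mock subhalos above the flux limit")
```

Only the nearby or intrinsically bright subhalos pass the flux cut, which is why, as discussed above, the prediction is sensitive to how well the simulation resolves the nearby, low-mass population.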
So I think this is actually not an easy machine learning problem, because there's not a lot of data here; these algorithms are designed for even bigger samples. So maybe when much bigger surveys come online, if we have an idea of things to look for, it will be interesting to apply these ideas there, because these techniques succeed even better when there's more data and more things to look for. This is actually quite small in comparison. If you have huge data sets, that is the ideal place to apply machine learning algorithms, and if someone has an idea of what they want to look for, it is very easy to implement. So yeah. OK. So I guess there are no more questions here. I have just another question about the machine learning method, in some sense. Given that you train the algorithm with pulsars, the algorithm is able to find new pulsars, let's say. But then how confident can you be in saying that a candidate source is not a pulsar? Because at the end, there is always the problem of statistics in a machine learning problem. I mean, if you have a million samples of pulsars, I guess the confidence level for a source being a pulsar or not is going to be high. But if you have few, maybe there is a misidentification. So yeah, what we do, the standard way of measuring this, is to use the whole data set and leave out a fraction of it to test how well the algorithm works. And in our case, what we did was to require that two algorithms, which are completely different, have to agree on the classification as well. The other thing we have done is to go back to previous catalogs and apply the methods to see how well they perform. And actually, they're quite good: when these things have been applied to previous Fermi catalogs, they find most of the pulsars that have been found since then. It works.
I mean, the only thing is that, yeah, we might be finding only pulsars and not dark matter. That's a disappointing result from a dark matter perspective. But I think the algorithms are quite reliable, because we're still dealing with very bright sources. As Fermi pushes on and on to fainter sources, I think the misidentifications will increase, and it will be harder and harder to do this, because there won't be as much data as for bright sources. But we're quite confident that these things are well predicted, based on what we have measured from the testing. And maybe, as you mentioned, if you use the approach of starting from theoretical predictions of pulsar spectra, this method can also work in some way. Maybe the results are worse, but at least it could help to identify pulsars that are not in the set of pulsars that you gave to the algorithm. Yeah, that's right. So, I don't know, maybe you would know this: have these methods also been applied, for instance, to optical telescopes or radio telescopes, which also have very large catalogs of sources? I mean, outside of dark matter. Yeah, yeah, yeah. These methods are starting to be used in many fields in astrophysics. Galaxy classification, for instance, is a huge field. And in transient searches, people are also looking for very specific things. For huge surveys like LSST, this will be key, because you have millions of objects and you have to identify galaxies. People have been doing this for the past decade or even more. So these methods are slowly drifting into astrophysics; people are using them for many things. Planet transits, too; I think there are applications even there. So, yeah, the methods are coming. That's good news. Yeah, so I guess, maybe, if there are some other questions... if not, because in the Q&A there are no more questions.
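The validation scheme described above, holding out a test fraction and accepting a classification only when two independent algorithms agree, can be sketched in a minimal way. The features (spectral curvature, variability), class separations, and the two classifiers are illustrative stand-ins, not the actual Fermi-LAT catalog features or algorithms.

```python
# Toy sketch: hold-out testing plus two-classifier agreement.
# Features and class distributions are illustrative, not real LAT data.
import random

random.seed(42)

def make_source(label):
    # Mock pulsars (PSR): curved spectra, low variability.
    # Mock active galaxies (AGN): flatter spectra, high variability.
    if label == "PSR":
        return (random.gauss(0.8, 0.15), random.gauss(0.2, 0.10), label)
    return (random.gauss(0.3, 0.15), random.gauss(0.7, 0.15), label)

sources = [make_source("PSR") for _ in range(200)] + \
          [make_source("AGN") for _ in range(200)]
random.shuffle(sources)
split = int(0.7 * len(sources))          # leave 30% out for testing
train, test = sources[:split], sources[split:]

def mean(xs):
    return sum(xs) / len(xs)

# Classifier A: threshold on curvature, halfway between class means.
cut = 0.5 * (mean([c for c, v, l in train if l == "PSR"]) +
             mean([c for c, v, l in train if l == "AGN"]))

def clf_a(c, v):
    return "PSR" if c > cut else "AGN"

# Classifier B: nearest class centroid in the (curvature, variability) plane.
centroids = {lab: (mean([c for c, v, l in train if l == lab]),
                   mean([v for c, v, l in train if l == lab]))
             for lab in ("PSR", "AGN")}

def clf_b(c, v):
    dist = {lab: (c - cc) ** 2 + (v - cv) ** 2
            for lab, (cc, cv) in centroids.items()}
    return min(dist, key=dist.get)

# Accept a label only when both independent classifiers agree;
# disagreements stay unclassified.
agreed = []
for c, v, l in test:
    pred_a, pred_b = clf_a(c, v), clf_b(c, v)
    if pred_a == pred_b:
        agreed.append((l, pred_a))

accuracy = sum(truth == pred for truth, pred in agreed) / len(agreed)
print(f"classified {len(agreed)}/{len(test)} held-out sources, "
      f"accuracy {accuracy:.2f}")
```

The agreement requirement trades completeness for purity: some held-out sources stay unclassified, but the accepted classifications are more trustworthy, which mirrors the point made above about pushing to fainter sources where misidentifications increase.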
Maybe here, I don't know if someone else has any question. If not, I guess we can thank Nestor for this very nice talk; it was very enlightening to learn about what is being done with machine learning and Fermi in the quest for dark matter. So I guess we can close this webinar for today, and just let me show this. And just to tell the people following us on YouTube: you can subscribe or leave a comment, and if you want to get the latest news about the next webinar, you can check our Facebook page or our WordPress page. So this was the last webinar of season three. Next we're going to start with season four, and let's see what the future of this webinar series is, and keep learning about what people are doing in physics and astroparticle physics. So see you next time. Okay, thank you.