Let's get started. OK, I think we are live. So hello, everyone, and welcome back to the Latin American Webinars on Physics. I'm Joel Jones from the PUCP in Peru, and I will be your host today. Today we have Jose Zurita, a speaker who is a junior group leader at IFIC in Valencia. Jose carried out his PhD in Buenos Aires and then held postdoc positions in Zurich, Mainz, and Karlsruhe. For this webinar, which is the 136th one of our series, Jose will give us an overview of searches for long-lived particles, and we are very happy to have him as a speaker today. Now, before we begin, let me remind everyone that you can be part of the discussion by writing questions and comments via the YouTube live chat, and the questions will be passed over to Jose at the end. OK, so we're all yours.

OK, thank you very much for the nice introduction. So let me share my screen. I have too many windows open. Uh-huh, I don't see my keynote, where is it... I had it last time. Give me one second, I'm having technical difficulties. This is the problem with asking me not to share the slides at the beginning. Yes, this was my strong motivation. So let me see if I can get this back on track. OK, perfect, I think it's there. I don't know if everybody can see it, and there should be no green bar whatsoever. Yes? Everything good. Perfect. So I want to thank you not only for the nice invitation, but also for running this series of webinars. I did my PhD in Latin America, so I'm very well aware of the fact that we get the meal, but we get it cold. That's why I will try to contribute to this effort and bring you an overview of what is hot right now in the realm of long-lived particles, or, to use a catchier buzzword, the lifetime frontier. This is a topic that touches upon many different aspects of high-energy physics, so I won't be able to do justice to many of these branches or research lines.
But of course, you can just email me if you want more references to the literature, or at least I can try to guide you if you want to know more about this particular topic, or if you just want to give me feedback or comments. I'm open, and in any of the many papers I will cite you can find my email address.

OK, so I'm a collider physicist. What do I want to do? I want to disprove the standard model. It's a beautiful theory, it works fantastically well, but it has a bunch of issues and problems, things that make us slightly uncomfortable and don't let us sleep calmly at night. And 10 or 15 years ago, people said, OK, most of these new physics problems are going to be solved by new physics at the TeV scale. The flagship was the hierarchy problem, and we said there's no way this can evade the LHC. Now, the hard truth is that nothing really significant is coming out of the LHC data, and one has to take a small break, breathe deep, and ask: what's going on? What are the possible logical explanations for this apparent lack of new physics? Well, this would take us hours of therapy, but we need to admit that maybe the new physics just doesn't like colliders. Maybe we need to look directly for axions by pointing at the sun, or go to something like JUNO or Hyper-K, and neutrinos will reveal it to us without any help from colliders. The second option is, well, maybe we screwed up at some point in the calculation, in the estimation of where the effect should be. Maybe it wasn't at the TeV scale; maybe it was at 10 TeV, maybe at 100 TeV. So maybe we need more energy, or maybe we need to do things more precisely and try to find deviations, as people at Belle II do. Or the third option, which is the one I'm going to focus on mostly today, is that the new physics is there. It's not that it hates us; it's more like it's playing hard to get. This is what I mean when I say it works in stealth mode.
So it's there, but there are some limitations, mostly of the LHC, that don't allow us to see it directly as we would have expected. And this can be as low as, let's say, 100 GeV to one TeV; there could still be plenty of new physics there that just isn't in the best shape for us to grasp from the data. These difficulties also bring with them a bunch of opportunities, a very fertile territory to look for new physics, both from a theoretical and an experimental perspective. So here is my plan for today. First, I will try to convince you that LLPs are something sound, not some sort of last-minute resource that theorists came up with because we are not finding new physics. Of course, you can always invent a new particle that we know doesn't exist and write some popular article in The Guardian, but this is not what I want to do: I believe in the particles I'm postulating. Then I'm going to briefly touch upon the dedicated detectors. I'm not an experimentalist, but if you follow the arXiv you might have seen that proposals for dedicated LLP detectors are appearing more or less once a month, and I will try to tell you what this idea is about and why there are so many. And then, looking at this webinar series and the size of the keywords on the web page, I realized that there's a very strong preference for dark matter. So I'm going to focus my three big examples of LLPs on aspects of dark sectors. I'm going to talk about wino dark matter. I'm going to talk about FIMP dark matter, since one of the organizers, Nicolas, is one of the best-known world experts on FIMP dark matter, so one needs to rise to the occasion, and he can answer questions on that aspect. And I'm also going to talk about a relatively novel topic, or maybe it's not so novel but it has received an extra boost in the past years, which is dark showers, which have to do with having a strongly interacting dark sector.
In these models the dark sector's interactions look more or less like QCD; at least QCD is our guide. Okay, so let me introduce long-lived particles, the motivation and the challenges. For the purpose of this talk, I'm going to define a long-lived particle as a new physics state that travels a macroscopic distance inside the detector. It's a very broad definition, and it's important that I say BSM, new physics, because if you look at the diagram on the left, which I am hopefully pointing at with my pointer, you can see the lifetime in meters on the y-axis versus the mass of standard model states, not only fundamental but also composite particles like the B mesons, the pions, et cetera. And you see that the standard model alone spans a very large range of lifetimes. There are very good reasons for that: there are theoretical mechanisms that make these particles long-lived. But we want to look for particles that are not the ones we know of. Now, one option is to say, OK, a UV model can give you these particles, and you can say, I want to look into models that solve, say, electroweak baryogenesis, dark matter, or other problems of the standard model. There are very concrete models where LLPs exist. Or you can take an approach that is closer to the experimental perspective, which is: OK, I need to look for these particles. There are a lot of very interesting signatures, because the LHC was designed for collisions coming from here, the interaction point, with particles being produced that don't live long, particles that are what we call prompt. Here, in contrast, depending on how long the particle lives, which kind of particle it is, what its standard model quantum numbers are, and what it decays to, you get this very nice sort of pictorial diagram of the signatures over here.
Not only are out-of-time decays not pictured, but the full breadth of dark showers is also absent, because when this diagram was made back in 2017, dark showers were just emerging as a subfield within long-lived particles. OK, so essentially, if you want the theory motivations and to loop over different models and see why they give you LLP signals, I recommend this publication, and if you prefer the signature-based approach, I would recommend this other one. In both cases you can find more information about these things. So what I want to do now is take stock and say: if in the standard model there are long-lived particles, I can scrutinize some of them, see why they are long-lived, and then see whether those mechanisms exist in BSM. I'm not trying to claim that BSM should be like the standard model; all I'm saying is that the mechanisms that make particles long-lived can apply to BSM theories as well. And I will also try to tell you, in each of these cases, what the LHC limitation is that doesn't allow us to be fully sensitive to these scenarios. So if I just ask people, OK, tell me a particle in the standard model that lives long enough, I think 80% of people will answer the muon. The muon lives long because the W decays off-shell, and then the ratio of the muon mass to the W mass, which sets the energy of the whole process, or if you want, the Fermi constant, makes the width very small. Now, if you redo this calculation in the standard model but assume it could be BSM, so that I replace the electron, the muon and the coupling by BSM parameters, you will see that what you have is a light mass over a heavy mediator mass to the fourth power. That is what gives you this very small width, this very large lifetime. And here I can just put in some numbers for concrete theories where you will get something like a centimeter.
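To make the scaling just described explicit, here is the standard textbook formula for the muon width (this equation is not on the slides; I am reconstructing it from the spoken description):

```latex
\Gamma_\mu \;=\; \frac{G_F^2\, m_\mu^5}{192\pi^3}
\;\sim\; g^4 \left(\frac{m_\mu}{m_W}\right)^{4} m_\mu ,
\qquad
c\tau_\mu \;=\; \frac{\hbar c}{\Gamma_\mu} \;\approx\; 659\ \mathrm{m}.
```

Replacing $(g, m_\mu, m_W)$ by BSM couplings and masses, $\Gamma_X \sim g_{\rm BSM}^4 \,(m_X/M)^4\, m_X$: a heavy mediator $M$ gives a small width and hence a macroscopic decay length.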
Now, another thing that can happen, and what happens to us, luckily, in the sense that the neutron decays to the proton and not vice versa, is that the spectrum is very compressed. What makes the neutron lifetime large is the fact that the mass difference between the neutron and the proton, divided by either the proton or the neutron mass, is a very small number, and to the fourth power it is an even smaller number. So the width is small and the lifetime is large. And this is something that can happen, for instance, in supersymmetry, or even in minimal models of dark matter, which give rise to compressed spectra for theoretical reasons that I won't describe here; it is a very normal situation. Now, the other thing that could have happened in the standard model, though it did not quite happen in the template I'm going to give, is that you could have a very small coupling, for whatever reason. This could be a coupling that must be small for phenomenological reasons: neutrino masses are very small, so you measure that the coupling is small, and then you can either think about why it is small or just live with the fact that it's small. Or it could happen that, for instance, some symmetry got broken, and this soft breaking of the symmetry generated a coupling that goes to zero when the symmetry is restored, so the coupling is sort of fated to be small. The example I like here is to think of the Z decaying into neutrinos: if the coupling were small enough, something like 10 to the minus 6, then its square is 10 to the minus 12, and you can easily get something like a centimeter. This is not the case in the standard model, but if I took the standard model and changed the axial and vector couplings of the Z, I could probably make the Z long-lived; it just doesn't happen to be so. And the point is that each of these mechanisms, a hierarchy of masses, a compressed spectrum, or a very small coupling, is tied to an LHC limitation.
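The "a coupling of 10 to the minus 6 gives roughly a centimeter" arithmetic can be checked with a quick numerical sketch. This assumes, purely for illustration, a generic two-body width Γ = g²m/(8π); it is my choice of template, not the specific model on the slides:

```python
import math

HBARC_M_GEV = 1.973269804e-16  # hbar*c in meters * GeV

def ctau_m(g: float, mass_gev: float) -> float:
    """Proper decay length for a generic two-body decay with width
    Gamma = g^2 * m / (8*pi)."""
    width_gev = g**2 * mass_gev / (8.0 * math.pi)
    return HBARC_M_GEV / width_gev

# g ~ 1e-6, so g^2 ~ 1e-12, for a GeV-scale particle:
print(ctau_m(1e-6, 1.0))  # ~5e-3 m, i.e. the millimeter-to-centimeter ballpark
```

Note the scaling cτ ∝ 1/g²: shrinking the coupling by one order of magnitude lengthens the decay path by two.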
If you have a very large energy scale, you are not directly producing this particle; it's outside of your reach, and then you need to deal with indirect effects. I mean, you are not able to produce it, as we say, on-shell. Now, what if the spectrum is compressed? There's no problem producing the spectrum; the problem with a compressed spectrum is that the decay products tend to have very small energy and, moreover, very small transverse momentum. And the ruler used to gauge the quality of an object, an electron, a muon, a jet at a collider, is tied to its transverse momentum. If you don't have enough energy, you don't have enough transverse momentum, and then you fall below the reconstruction thresholds. This is quite a challenge, and people have become quite inventive in trying to overcome it. And the last one is sort of obvious: if you have a very small coupling and you're using it somewhere in your production, then your rate is very low, and that's it. Either you deal with the fact that this coupling is doomed to be small, or you try to test your model in another way. I just want to point out, because when I started focusing more on LLPs people would say, ah, you are probably doing displaced vertices, right? But the richness of LLPs goes way beyond that. Sure, in this diagram there are displaced leptons, displaced vertices, displaced jets, but there's also stuff like heavy stable charged particles, disappearing tracks, out-of-time decays, and dark showers. These are not just a displaced tau, a displaced jet, or a displaced muon, but something slightly more complex, which is more challenging, but also a lot of fun to work on. So what are the hot topics, the discussions, the challenges, the things that are happening in the LLP world? The problem, as I already stated, is that none of the general-purpose detectors, and I'm thinking of ATLAS and CMS mostly, were built with long-lived particles in mind.
They were built to catch particles that are prompt, gushing out from the interaction point. So, issue number one, a huge issue: triggers. At the LHC you get bunches of protons clumped together, colliding every 25 nanoseconds, with the next bunch coming right behind. There's no way all the information that comes out of these collisions can be recorded; there's no humanly possible way to record it. So what the general-purpose detectors do is say, OK, we're going to look for interesting effects. For instance, if I'm looking for dark matter: if there's a lot of missing energy, I'm going to save the event, but if there's too little, I'm not interested. Sometimes the triggers work for the model with LLPs that you have in mind, and sometimes the triggers are highly suboptimal for what you are trying to achieve. This is why the LLP working group, and I'm going to tell you about the LLP working group in two or three slides, wrote this document, essentially trying to explore and exploit trigger opportunities that could be deployed in the next LHC run, aimed at LLPs. Next, you are dealing with objects that you were not expecting to deal with, so you need to change your analysis strategies. This is something that happens mostly within the collaborations; it's not so much something that affects us theorists, but there is certainly a lot of creativity, inventiveness, and thinking out of the box to deal with this stuff. Because you wanted a muon that has a track pointing to the primary vertex; but what if something decays to a muon only after the tracker? Then you only have information from the calorimeters or from the muon chambers. So you lose an important component of what you call a muon. It's a different object, period. Even though it is a muon, because it is displaced, from the detector perspective it's a different object.
Now, something that is a very hot topic in almost every LHC search is the topic of reinterpretation. An LHC search is done with a concrete model in mind, and the collaborations normally present limits in that model; if you're lucky, they present limits in some model-independent, or proto-model-independent, or pseudo-model-independent way. Now, what do you want to have? You want the flexibility to take one LHC search and apply it to a model that is not the one the search was designed for, and still get results. This is certainly quite complicated, for several reasons I already pointed out, and there's a lot of ongoing effort in the reinterpretation forum, which was mostly dealing with prompt signals but is now moving to tackle the issues with LLPs. Then there's this whole new chapter, which I hope to have enough time to discuss, which is dark showers. This is a sort of new kind of signature that wasn't systematically explored before in the searches. And there's a bunch of new experiments, placed slightly outside of the LHC detectors to catch long-lived particles, that are not only being proposed but also being approved. So even if you say, OK, I don't care about these things, they are becoming official experiments with the same status as ATLAS, CMS, LHCb and all the others. So, just a very quick propaganda slide. There are two important bodies I want to tell you about. One is the LLP working group. The idea of the LLP working group is to establish a dialogue between the experimental collaborations and the theory community, provide recommendations, help communicate how to reinterpret a given search, discuss possible new searches, et cetera. There are conveners from the different experiments and from the theory community, and as you can see, Giovanna Cottin, who is based in Chile, is one of the theory conveners, together with me and Nishita Desai.
And of course, if you're interested, please check the web page; the link hopefully should be clickable, and if not, you can just type it into a browser. There is a mailing list where we post updates on the activities we are running, and so on. The second body is the Long-Lived Particle Community, which has been operating since 2015, organizing workshops where people come and discuss this stuff. There was a white paper, the one with the signature-based approach I mentioned; there's a lot we have done there, and there's the link. This has also been the incubator and the source of many initiatives: collaborative papers, more dedicated workshops, and other initiatives like the LLP recasting repository, which I'm going to discuss in a few slides. They all came up, grew, flourished, and thrived under the LLP Community umbrella. And maybe next year we will move from a twice-a-year workshop to a yearly workshop, according to a poll that is being processed right now; nonetheless, I hope to see many of you there. So after this advertisement is done, let me tell you a little about the dedicated detectors. I'm more or less at the 20-minute mark, so I'll try to speed up. Here I have one example, which is MATHUSLA. MATHUSLA is a very big detector, something like 100 meters by 100 meters by 25 meters high, so think of a football stadium or an IKEA shop, that is trying to catch the particles that come from here, which could be either ATLAS or CMS. The idea is incredibly simple. The LHC can produce a lot of particles, because it has a very large energy reach, up to 14 TeV; it's one of our best sources of long-lived particles. And if these particles are neutral, get out of the detector, and decay afterwards, the only thing we need to do is put a detector somewhere to catch them. How, with which technology, how large, where: that depends on what your physics case is. But this is the idea behind these many detectors. Now, why build them?
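The "put a detector somewhere to catch them" logic can be quantified with the standard exponential decay law. A minimal sketch; the distances and boost below are placeholder numbers chosen for illustration, not parameters quoted in the talk:

```python
import math

def decay_prob(l1_m: float, l2_m: float, ctau_m: float, beta_gamma: float) -> float:
    """Probability that an LLP decays between distances l1 and l2 from the
    interaction point, given proper decay length ctau and lab-frame boost
    beta*gamma."""
    lab_length = beta_gamma * ctau_m  # mean decay length in the lab frame
    return math.exp(-l1_m / lab_length) - math.exp(-l2_m / lab_length)

# A detector shell roughly 100-125 m away (MATHUSLA-like scale, illustrative),
# for a particle with ctau = 10 m and a modest boost:
print(decay_prob(100.0, 125.0, ctau_m=10.0, beta_gamma=3.0))  # ~0.02
```

This is why placement matters: the same formula shows that a particle with cτ far above or far below the detector distance contributes almost nothing to the rate.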
A general-purpose detector is supposed to be robust. It's supposed to be, with quotation marks, model-independent, to catch as many new physics signatures as possible. And it's composed of many systems: tracker, calorimeters, muon chambers, so you have a lot of handles on the particles that you see. But it wasn't designed for LLPs; certainly there are many difficulties. On the contrary, a smaller detector, which, I didn't put this in the slide, is also way cheaper than ATLAS, CMS or LHCb, can target a particular case. If you put it in some particular place with the goal of reducing or even eliminating the backgrounds, you don't need a trigger. And if you say, I want to look for long-lived particles that decay into photons, then you build a detector that excels at photons, and that's it. Now, something else that is important: this long-lived particle that you produce could be one GeV or it could be one TeV. If it's too light, it will have very small pT, so it will mostly go in the forward direction; if it's much heavier, it will go mostly in the transverse direction. So you need to put detectors on both sides. A very nice summary was made by my colleague Matthew Citron from CMS. What he did here is classify the detectors by distance to the interaction point, going from meters to hundreds of meters, and by the kind of mass range being targeted, MeV, GeV or TeV; and then, in blue, whether you look directly for the long-lived particle or, in red, you look at the decay products of the long-lived particle. You see there are many names; some of these are actually official experiments at the LHC, and some others are in a very advanced stage of becoming an official experiment. Good. So what are these detectors good for?
These detectors are good for light, standard-model-neutral new physics, because if you want light new physics, say at the GeV scale, it should have evaded LEP, beam-dump searches, searches for EDMs, flavor factories, et cetera. And a very simple mechanism to make those constraints mostly irrelevant is to say: I have a portal. I have new physics over here in the Lagrangian, I have the standard model on the other side, and I have a term that connects the new physics with the standard model. Depending on whether this new particle is a scalar, a vector, a fermion or a pseudoscalar, you get a Higgs-portal scalar; a dark photon, which can be the dark matter mediator; a heavy neutral lepton or sterile neutrino, which is connected with neutrino masses; or an axion-like particle, which is obviously connected with the strong CP problem. And if you look at these different models, what these different detectors do is produce very busy plots like this one, where you have many lines: some of the lines are beam-dump experiments; some of the lines, for instance over here, come from other physics, like BBN or supernovae; here you have CHARM, which I understand is a beam-dump experiment; then you have LHC experiments, or these proposed experiments like CODEX-b or MATHUSLA. And you can do this depending on the decay products, whether the particle decays visibly or invisibly, or for heavy neutral leptons; you get your own very busy plots where many, many different experiments from different communities, particle physics, astroparticle physics, et cetera, come together. And if you look at these plots, the extra lines that are not shaded, because they are proposals, tend to cover more parameter space, which is what morally justifies the design and deployment of these detectors. Good. Now, as I tried to tell you, LLPs are almost everywhere, and there are many things I'm not going to touch upon.
I leave you here a probably incomplete and not fully updated list; you can probably add your favorite topic to it, and maybe you will add a paper at some point. As I said before, I'm going to focus on dark matter, OK? Good, I'm more or less at the 25-minute mark, so I'm roughly on time. I'm going to start with compressed winos, which essentially manifest mostly as disappearing tracks. And I'm going to tell you about these two pheno papers that I wrote, and also one ATLAS paper that we strongly exploited. There's a lot of theoretical blah blah here, but the idea is fairly simple. If you want to have dark matter and you want to build it in a minimal way, the only standard model charge dark matter can have is SU(2) left. It cannot have color, because that would destroy things: structure formation would be totally different. And it cannot have electric charge, because then we wouldn't be calling it dark. So those options are vetoed. What people do is take different multiplets, like triplets or doublets, and then play with the spins, it could be a vector or a fermion, and try to get dark matter out of this. This normally goes under the paradigm of minimal dark matter, and when you have these electroweak multiplets, what happens is that you want one of the components to have quantum numbers such that it is electrically neutral, and that's your dark matter candidate. But if you have more than a singlet, if you have a doublet, triplet, fourplet, et cetera, you start having electrically charged states. And those electrically charged states are slightly heavier than the neutral one, but not by much.
The splitting is of the order of a hundred MeV or so, which is already enough to make them long-lived. And this is very robust: it is a one-loop electroweak correction, and it's pretty much independent of the mass. If your dark matter is above roughly 100 GeV, there's almost no effect of the dark matter mass itself, and this mass splitting is essentially a function of the spin and the SU(2) charge of your multiplet. So what people were doing is looking at this charged particle being produced, being long-lived, and then decaying into the neutral one plus, normally, a pion, because this is what the mass difference allows. But the pion is something like 100 MeV, and you have states that could be at one TeV. So you have a particle at one TeV plus 100 MeV that decays into a particle at one TeV, and there's about 100 MeV left for the pion. The pion is totally soft, lost; nobody sees the pion, although it could be seen in some other environments. What you see is something that looks like a charged track that all of a sudden, boom, disappears, and what continues is the dark matter, which is invisible to the detector. This is why it's called a disappearing track. The relevant variable here is the pT of this charged state, which is what we are plotting here for the different models. And our point is that the red and black lines, the ones that map directly onto Higgsino and wino dark matter in supersymmetry, are the benchmarks ATLAS was using; but if you take two other examples, say a scalar doublet or a vector triplet, the pT distribution is totally different. Then you cannot directly reinterpret the results as they come from ATLAS's paper, because they are based on those two models, and while ATLAS actually presents one plot for each model, kinematics-wise they are essentially the same. This is because what matters most here is the spin, rather than the actual quantum numbers, whether a doublet or a triplet.
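To see just how soft the pion is, one can plug a typical wino-like splitting into two-body decay kinematics. A sketch; the 165 MeV splitting is a standard ballpark number I am using for illustration, not a value quoted from the slides:

```python
import math

def kallen(a: float, b: float, c: float) -> float:
    """Kallen triangle function, used in two-body decay kinematics."""
    return a * a + b * b + c * c - 2.0 * (a * b + b * c + c * a)

def pion_momentum_gev(m_chi_gev: float, dm_gev: float,
                      m_pi_gev: float = 0.13957) -> float:
    """Pion momentum in the rest frame of the decaying charged state,
    chi+ -> chi0 pi+, with neutral mass m_chi and mass splitting dm."""
    m_parent = m_chi_gev + dm_gev
    lam = kallen(m_parent**2, m_chi_gev**2, m_pi_gev**2)
    return math.sqrt(lam) / (2.0 * m_parent)

# A 1 TeV state with a 165 MeV splitting: the pion carries only ~90 MeV,
# far below typical track reconstruction thresholds.
print(pion_momentum_gev(1000.0, 0.165))
```

Note that the pion momentum is essentially set by the splitting alone, independent of whether the parent weighs 500 GeV or 2 TeV, which is exactly why the signature is the same across the mass range.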
So then ATLAS provides this kind of information for a given mass and a given lifetime: an efficiency map. It tells you, for a particle that you generated with Monte Carlo at particle level, how it translates into a disappearing-track object. If you take that information, what you can try to do is reproduce the ATLAS curve, and then reapply those constraints to the two new models; the green and orange lines are models that were not considered by ATLAS, and by reinterpreting this disappearing-track result we obtained the strongest constraints on these models to date in some region of lifetime. And here you have the limits from the missing-energy searches. So this sort of belly shape is new territory that we excluded simply by reinterpreting this search. But then you can tell me: well, for dark matter, for these minimal models, I really only care if I hit the thermal target, which is around 1.1 TeV for Higgsinos. Can you do it? The LHC cannot do it. But this can be done, for instance, at future colliders, for both the Higgsino and the wino. These two upper plots are results from a paper that I wrote with Rodolfo Capdevilla, Federico Meloni and Rosa Simoniello, on looking for disappearing tracks at a muon collider, which is a total nightmare, because the muons decay and the background is huge, absolutely huge. When people tell you a lepton collider is clean, they are not thinking of a muon collider; they are thinking of an electron-positron collider. And after struggling a lot with these backgrounds and designing a strategy to remove them, we are actually very, very competitive with a 100 TeV hadron collider, which obviously we knew, before we started the study, could do it. Of course, it's a preliminary study with a lot of room for improvement, but the thermal targets will come within reach of future colliders. So LLPs are also an important physics case for future colliders.
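The recasting workflow built around such efficiency maps can be sketched schematically. Everything below is hypothetical: the map values and the cross section are invented numbers, standing in for what one would actually read from the auxiliary material of a search:

```python
# Schematic recasting step: fold a signal prediction with a published
# per-(mass, lifetime) efficiency map. The numbers are made up for
# illustration only.
efficiency_map = {
    # (mass in GeV, ctau in ns) -> event-level selection efficiency
    (700, 0.2): 0.012,
    (700, 1.0): 0.055,
    (1100, 1.0): 0.060,
}

def expected_signal(mass_gev: int, ctau_ns: float,
                    cross_section_fb: float, lumi_ifb: float) -> float:
    """Expected signal yield: sigma x integrated luminosity x efficiency."""
    eff = efficiency_map[(mass_gev, ctau_ns)]
    return cross_section_fb * lumi_ifb * eff

# 2 fb signal at 139/fb with a 5.5% efficiency -> ~15 expected events
print(expected_signal(700, 1.0, 2.0, 139.0))
```

Comparing this yield against the published observed and expected event counts is what turns a single dedicated search into a constraint on a model the collaboration never considered.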
There is also an initiative I wanted to showcase: this reinterpretation of the ATLAS disappearing-track search fed into something that, with some other colleagues, was put forward, mostly by them, not by me, which is the LLP recasting repository. The idea is that if someone takes one of these LLP papers and makes a reinterpretation, writing some code that does it, or even just designing a procedure and writing a couple of pages, they can share the results, because they can be useful for the community. And indeed, for this other paper I was showing you on the disappearing tracks, the entry over here that says "new", I took this screenshot when we uploaded it, contains the code and sample events that encapsulate our procedure in full. And for some other searches, similar efforts were put there. Of course, it's not mandatory, but we believe it's a nice resource for those who want to reinterpret searches within their own models. OK, now I'm going to talk about another paper, about Feebly Interacting Massive Particles, or FIMP dark matter. I don't know how much you know about FIMP dark matter, and you should probably be asking Nicolas and not me, but nonetheless, let me try to explain it. In the normal freeze-out mechanism, dark matter is in equilibrium with the photons and with all the particles in equilibrium: standard model goes to dark matter, dark matter goes to standard model, boom. Everything works fine until the universe decides to do two things: expand and cool down. When it expands, yes, the standard model particles produce dark matter particles, but the dark matter particles drift far apart, and they can hardly find each other to annihilate back into the standard model. So the annihilation from the dark sector to the standard model is blocked.
When things cool down, and assuming, for simplicity, that dark matter is heavier than the standard model particles, at some point the standard model particles don't have enough energy to produce dark matter; temperature is energy. So that's it, the production stops, and this is how you get the black line. This is a very simplified picture, but I hope it works. Now, freeze-in takes a slightly different approach. In freeze-in you are never in thermal equilibrium: you have a very small coupling that prevents you from getting there, and your dark matter behaves very much like when you sweep dust under the rug, OK? You start with a certain number, which could even be zero, and slowly, slowly, slowly you start accumulating and aggregating it, until the process becomes Boltzmann-suppressed and cannot continue, and then you're frozen in at whatever value you had. With a bit of luck, there will be some choice of parameters that gives you the right relic abundance. So what is relevant for colliders? What is relevant is to think of a model, I should have put up the Feynman diagram, with a Yukawa-like structure, where you have a parent particle that here I'm calling F, a new dark matter particle that I'm calling s, a scalar, and a standard model fermion. It's a very simple vertex. There are many constraints you can put on this, driven by the idea that this coupling, due to the freeze-in nature of needing to stay out of equilibrium, needs to be small. Small couplings, LLPs: this, at least, is what should come out of this talk. So we moved forward with that idea, and you see that you can get the lifetime as a function of the relevant parameters; and, more interestingly, in this very simple model of two new particles, you get an integral that runs from the reheating temperature down to the temperature today.
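Schematically, the freeze-in picture just described corresponds to the standard Boltzmann equation (written here in its textbook form, reconstructed from the spoken description rather than copied from the slides):

```latex
\frac{dn_s}{dt} + 3 H n_s = C_{\text{prod}}
\quad\Longrightarrow\quad
Y_s(T_0) \;\propto\; \int_{T_0}^{T_{\mathrm{RH}}} \frac{dT}{T}\;
\frac{C_{\text{prod}}(T)}{H(T)\, s(T)} \;\propto\; y^2 .
```

The accumulated yield $Y_s$ grows with the square of the feeble coupling $y$ and depends on the reheating temperature $T_{\mathrm{RH}}$ through the upper limit of the integral; the same small $y$ makes the parent width $\Gamma_F \propto y^2$ tiny, so the collider signature is precisely a long-lived $F$.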
So there's a certain sensitivity to the reheating temperature, and if you reheat way below the weak scale, then this would essentially rule out electroweak baryogenesis and leptogenesis in their more vanilla forms. So what is the idea? There are many, many searches here that we reinterpreted in terms of lifetime and mass. And one of the strongest messages of this plot, which I like a lot, is that you cannot rely on one single long-lived particle search. They target different lifetimes. If you follow the envelope, each of the searches is contributing somewhere, and the middle regime is the most challenging one; there could easily be something there. Look, you say the LHC runs at 14 TeV, but here I'm probing something like, not even 300, 200, 80 GeV or so, where I'm putting the pointer, and this middle regime, between having a very long-lived particle and a very prompt particle, is certainly a blind spot of the searches. Now, you can draw different curves in this plot depending on the parameters, including the reheating temperature. And then you can take a look into the future: extrapolate these searches to the HL-LHC under some wild, or pseudo-reasonable, assumptions. You still see that you will not get much farther than 400 GeV or so. The dashed lines are the extrapolations, the solid ones the current limits; they don't move much because we were already mostly at the end of the run. But this means you have a shot, with the green one and the red one, at catching this other line, where the reheating temperature becomes 160 GeV. So you can start doing some cosmology with long-lived particles at a collider. Now let me get into the world of dark showers. In terms of dark showers, I'm not going to focus on dark matter itself; I'm going to focus more on the dark sector.
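To see why coverage depends so strongly on the lifetime, here is a toy estimate of where the decays happen for three choices of cτ; the boost and detector size are made-up illustrative numbers:

```python
import math

def decay_fractions(ctau_m, boost=3.0, r_min=0.001, r_max=10.0):
    """Fractions of LLPs decaying promptly (closer than r_min, in meters),
    at an observable displacement inside a detector of radius r_max, or
    escaping it.  The decay length is exponential with mean boost * ctau."""
    d = boost * ctau_m  # mean lab-frame decay length
    prompt = 1.0 - math.exp(-r_min / d)
    inside = math.exp(-r_min / d) - math.exp(-r_max / d)
    escaped = math.exp(-r_max / d)
    return prompt, inside, escaped

for ctau in (1e-5, 1e-2, 1e3):  # very prompt, intermediate, very long-lived
    p, i, e = decay_fractions(ctau)
    print(f"ctau = {ctau:g} m: prompt {p:.2f}, displaced {i:.2f}, escaped {e:.2f}")
```

Prompt searches, displaced searches, and missing-energy searches each pick up a different one of these three fractions, which is why following the envelope across lifetimes takes several analyses.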
And the question that leads to dark showers is: okay, what if the dark sector where dark matter lives is not weakly interacting, but strongly interacting? Our guidance there is QCD. And this is something that goes back to 2006 under the name of Hidden Valley models; this is a seminal paper by Matt Strassler and Kathryn Zurek. What you do is very simple. You add new matter fields, which we call dark quarks, and new gauge fields, dark gluons, and you can couple both to the standard model through a portal as I defined before. So, okay, I have a certain number of colors and a certain number of flavors, and at some scale this theory may confine, because I'm really thinking of a QCD-like regime. Of course, depending on the dark NF and the dark NC, you will or will not have confinement; you need to compute the beta function, et cetera, like you do in QCD, and check that you have asymptotic freedom and confinement, fine. Now, from a purely phenomenological, practical perspective, what should I care about? The dark quark mass, the confinement scale, and the energy. And essentially, in this very broad picture of strongly interacting sectors, there are three regimes. The regime I want to focus on is where the dark quark masses are below the confinement scale, and the confinement scale is much smaller than the energy. This means I can produce many particles, yes, because I have enough energy to produce many of them, and the dark quark sector is such that it will confine, like everything below the bottom quark does in the standard model. So there will be dark pions, dark kaons, dark Bs, dark D mesons out there. Now, the other option is that your confinement scale is too high. Maybe you make two or three of these particles at most, and then you are not looking at a dark shower anymore, you're looking at a resonance, and you can apply standard techniques.
So this regime is interesting, there is a dark bound state you will form, but it's certainly not as striking as what I'm going to tell you about. And then there's the other option that I'm not going to cover, which is super interesting but probably deserves a separate seminar: quirks. This is like having the top quark, where the confinement scale is below your quark mass. You have these quarks out there, okay? They don't hadronize; they form a bound state that is very weakly bound, and the phenomenology is super interesting, but again it doesn't really belong in this talk. If someone wants, I can point to the literature on that topic. So, as I said before, dark showers: strongly interacting sector, and I'm going to take my lore from standard model QCD. What do I know about QCD? I only need two properties here. When the energy goes to infinity, the coupling goes to zero, and vice versa; that's what we call asymptotic freedom. And the coupling blows up when your energy approaches what is called the Landau pole, the confinement scale. Now, the problem, or difficulty, or obstacle, is that most people doing new physics at colliders are used to being able to compute observables from the Lagrangian parameters. And here we cannot. We cannot compute the hadron masses or anything like that, because the theory is strongly interacting, and you need non-perturbative methods like the lattice. But you can make some statements from the infrared perspective. If you have, for instance, two flavors, then in QCD you know that you get baryons, the neutron and the proton, at around one GeV, so 1,000 MeV. You get vector resonances, rho, omega, at more or less 700-something MeV. Lambda QCD is of the order of 300 MeV. And below Lambda QCD, you get the pions, okay?
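The two QCD properties just quoted can be made concrete with the standard one-loop formulas; the numbers below are illustrative dark-sector choices, not parameters from the talk:

```python
import math

def b0(nc, nf):
    """One-loop beta-function coefficient of an SU(nc) gauge theory
    with nf quark flavors: b0 = (11*nc - 2*nf) / 3."""
    return (11.0 * nc - 2.0 * nf) / 3.0

def alpha(q, alpha_ref, q_ref, nc, nf):
    """One-loop running coupling:
    1/alpha(q) = 1/alpha_ref + b0/(2 pi) * ln(q / q_ref)."""
    return 1.0 / (1.0 / alpha_ref + b0(nc, nf) / (2.0 * math.pi) * math.log(q / q_ref))

def landau_pole(alpha_ref, q_ref, nc, nf):
    """Scale where the one-loop coupling blows up: the confinement-scale
    estimate, Lambda = q_ref * exp(-2 pi / (b0 * alpha_ref))."""
    return q_ref * math.exp(-2.0 * math.pi / (b0(nc, nf) * alpha_ref))

# A QCD-like dark sector: alpha = 0.1 at a reference scale of 1000 (GeV, say),
# with 3 dark colors and 3 dark flavors.
lam = landau_pole(0.1, 1000.0, nc=3, nf=3)
print("Landau pole near", lam)
print("coupling at 10x the pole:", alpha(10 * lam, 0.1, 1000.0, 3, 3))
```

Asymptotic freedom is the statement that `alpha` shrinks as `q` grows, and the Landau pole marks where the perturbative running stops making sense and confinement takes over.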
So, since the pions are the lowest-lying hadronic states, this is what I'm going to think about: my dark sector generates a lot of dark pions. Okay, in the standard model, look how funny this is: the charged pion has a cτ of almost eight meters, while the neutral pion's is tens of nanometers, many orders of magnitude smaller. It's quite striking, because these two guys are almost the same particle; in principle there's only a small correction, pi plus is slightly heavier than pi zero, and yet the lifetimes are completely different. So, QCD at a hadron collider is very complex; it has many different ingredients, but what is interesting for us to know is the parton shower: once you have produced your so-called hard process, which is what you draw when you draw a Feynman diagram, there is emission of gluons and quarks, quarks mix into gluons and so on, and this parton shower is perturbative; it's under control and can be computed. What cannot really be computed is hadronization, where you turn your quarks into hadrons, into mesons. Good, so in view of that, your dark shower can be characterized in the following way. You have a production mechanism, which is going to be a Feynman diagram; you have the shower and hadronization part; and then you have the decay. And the real variable you should care about is the product of the fine structure constant of the strong sector times the number of colors. If this is small, as in QCD, you actually form jets, dark jets; this value is something like 0.3 in standard model QCD. If this number is large, okay, then you don't get dark jets: instead of being focused in a couple of structures, the energy gets split more isotropically in the detector, and it gives rise to what are called soft unclustered energy patterns, or SUEPs for short.
Now, the other variable I need to care about is whether my dark sector is such that my dark pions are long-lived or not. If the dark pions are not long-lived, you get what are called semi-visible jets. If they are long-lived, then you fall into the realm of emerging jets. So what do you normally do? You take a production mechanism, which, to be fair, you don't need to take too seriously, because it's just a trick to have some signal at the LHC. You can take a Z prime, which can give you semi-visible jets, or you can take a t-channel model, this other one, which gives rise to emerging jets. Now, just to see how new this all is: there's only one Monte Carlo on the market that can simulate dark showers, which is Pythia 8. Normally, for any LHC process you have a very large choice of Monte Carlos, general-purpose ones like MadGraph or ones specifically targeting particular processes. Here there's only one tool, which is a limitation, but it also tells you how new the subfield is. Okay, so for semi-visible jets, what happens is that you produce dark pions, but some of these dark pions don't decay visibly, so they add to the missing energy component of the jet, where normally you would have neutrinos. So what the CMS search does is essentially count how many invisible particles you have in the jet, normalized to the total number of particles. This is the invisible fraction of the jet. If you have zero invisible fraction, you have a resonance; if you have a fully invisible jet, it's like a dark matter search. And essentially what semi-visible jets do for you, if you look at the plot on the right, is interpolate between the r equals zero and r equals one regimes, where a dijet resonance search or a standard contact-interaction-plus-missing-energy search will win, while the dedicated search wins everywhere in the middle.
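The invisible-fraction variable just described is easy to sketch; everything here (the toy jet, the per-hadron probability) is illustrative:

```python
import random

def invisible_fraction(constituents):
    """r_inv of a jet: the number of invisible (stable dark) constituents
    divided by the total number of constituents."""
    invis = sum(1 for c in constituents if c == "dark")
    return invis / len(constituents)

# Toy jet: each dark hadron is stable/invisible with probability r_inv_true.
random.seed(1)
r_inv_true = 0.3
jet = ["dark" if random.random() < r_inv_true else "visible" for _ in range(1000)]
print(invisible_fraction(jet))  # fluctuates around r_inv_true
# r_inv -> 0 looks like a dijet resonance, r_inv -> 1 like pure missing
# energy; the semi-visible jet search targets the region in between.
```

The same quantity is what interpolates between the dijet-resonance axis and the missing-energy axis in the plot described above.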
And this fraction is a number that depends on exactly which dark sector you put in there. Now, there's one CMS search that does a fairly simple selection, then takes these variables, and boom, directly into a BDT, and gets a result. This is the very first experimental result for semi-visible jets at the LHC, and if you look at the arXiv date, it's December 2021. So again: very novel, very fertile, very young subfield; there's still room for you to jump in. Now, emerging jets: you produce your dark hadrons, normally your dark pions, but they are long-lived. Okay, so normally you have something like displaced vertices, where one long-lived particle travels a distance and decays into jets. Here, you have pions, and you can have many of them, because depending on how many flavors you have, you have NF squared minus one pions; with three flavors, that's eight kinds of pions. So you don't have one single long-lived particle that decays, but a multitude of them: not one, not two, but eight, ten, decaying at different positions. Okay, and then the jet is not coming from one single point; you cannot form a single vertex. It's really a bunch of dark hadrons that travel some distance and then decay back to the standard model. This is why it's a very exciting, very peculiar signature. And what you can see in this pheno paper is that there's a certain interplay between the standard searches and the search dedicated to this signature. Here you have a four-jet search that works very well when the signature is mostly prompt; one millimeter is not a huge displacement in collider terms. You also have the missing energy search, because if these guys are very, very, very long-lived and most of them decay outside your detector, then it looks exactly like a missing energy signature. And in between, this green curve is what the new dedicated search wins for you in terms of masses.
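A toy picture of why an emerging jet has no single vertex: sample the decay positions of the dark pions in one shower (the flavor count and the decay length here are illustrative choices, not from a specific model):

```python
import random

def decay_positions(n_flavors, mean_length_m):
    """Sample lab-frame decay distances for the dark pions of one shower.
    With n_flavors dark flavors there are n_flavors**2 - 1 kinds of pions;
    as a cartoon we take one pion of each kind per shower, each with an
    exponentially distributed decay length of the given mean."""
    n_pions = n_flavors ** 2 - 1
    return [random.expovariate(1.0 / mean_length_m) for _ in range(n_pions)]

random.seed(7)
# Three dark flavors -> eight dark pions, mean decay length of 5 cm:
positions = decay_positions(n_flavors=3, mean_length_m=0.05)
print(len(positions), "displaced decays at distinct positions (m):")
print([round(r, 3) for r in sorted(positions)])
```

Every element of the list is a separate displaced decay, so the "jet" is smeared over many points along its flight direction instead of pointing back to one vertex.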
And it's a lot: you move from 400 GeV here all the way up to 1.5 TeV in mass over here by doing this search. So CMS followed this paper by Schwaller, Stolarski, and Weiler from 2015, and again did a very similar approach: some selection, plus criteria to tag and identify the emerging jets against normal jets using displacement variables. It's quite involved, but at the end of the day they produced the very first result for emerging jets. This is slightly old, it's from 2018, but it's the only emerging-jet result to date from the LHC. I know there is more data in store, including at LHCb, and this is being analyzed. But once again, this just shows you how novel the field is. Okay, let me see. And finally, there are these SUEPs. SUEPs are very special because they go into a regime you're not used to: you have a very, very strong coupling, and the radiation is fairly isotropic in the rest frame of the parent particle. So you produce something, and if you look at the event display, which I stole from a talk by Strassler, you have activity all over the place. It's sort of a Christmas tree for your detector, but this activity tends to be very soft, because what you are really doing here is splitting the energy of the collision into all these different energy deposits. Okay, so this is a gigantic nightmare: how to trigger, how to analyze, how to reconstruct. This goes very, very deep into detector expertise, so it's not something within reach of every experimentalist; only people very well versed in incredibly soft radiation can try to tackle these analyses. And this is why there is only pheno work and no experimental analysis on these signatures yet. Besides that, it's essentially almost impossible to calculate, because this is the very, very strongly coupled regime.
But it's a signature, it's a long-lived signature, it's the signature of strongly interacting dark sectors, and it's quite interesting. Okay, I've reached my conclusions, probably slightly over time, but let me try to summarize what I said. I hope that by the end of this talk you will all sit back in your chairs and agree that LLPs are really well motivated. Many models have them. This is not a last resort of theorists just trying to sell their crappy theories or my book; rather, really serious models have these long-lived particles built in, mechanisms that produce long-lived particles and give you very striking phenomenology, in particular at colliders. Now, current experiments, proposed detectors, and future colliders can all do LLPs, and they are all using them as an important part of their physics case. Roughly speaking, there are three classes of signatures. You can have neutral long-lived particles. They could be fairly light, because they evade most constraints; they don't have standard model charges, which is what I mean by neutral. And these are mostly being targeted by the dedicated detectors. Of course, the LHC people want to find them, but if they are just too long-lived, the LHC will not see them. Or you can have a long-lived particle charged under the standard model, the second category. There you have all sorts of signatures, like the disappearing tracks I was mentioning, or displaced leptons, or heavy stable charged particles, the signature I didn't go into in detail but which we used in the freeze-in plot. These come with their own challenges and their own merits, but since these particles are long-lived and charged, if they are produced at the LHC, we will see them, because they interact with the detectors. So this is a part where the dedicated detectors are not needed.
And the last class of signatures is these strongly interacting dark sectors, the dark showers, which, if you want, are standard-model-neutral from that perspective, but give rise, or can give rise, to very peculiar signatures, with emerging jets or multiple displaced vertices that will not happen in a typical neutral LLP theory where you produce maybe one or two long-lived particles. In dark showers, due to the parton shower, once you start radiating, a dark gluon splits into a pair of dark quarks and so on, you can have more than two, more than three, more than four: multiple displaced vertices, multiple emerging jets, multiple semi-visible jets, or a combination of these signatures; maybe you can have one semi-visible jet and one emerging jet. These are incredibly striking, and they depend strongly on whether the theory really looks like QCD or not, which we don't know, and on the lifetime of the dark hadrons, which is also a parameter that at the moment we treat as unknown, because even if in a given model you could compute it, you would need a very complex calculation, like a lattice calculation, just to get the lifetime of the dark pion. So the phenomenological exploration, and there is very rich territory to explore, is essentially based on making a very simple model with a dark pion mass and a dark pion lifetime, rolling along, and seeing where the searches take you. And as I tried to point out, dark showers are very novel, very fertile territory; this is why I wanted to bring them up in this overview. And well, I hope you have all enjoyed it, and finally, a message that I stole from one of my colleagues. Thank you very much. Thank you very much, Jose, for this very, very nice, very informative talk. So let me remind everyone again that all YouTube viewers can ask questions through the live chat, and they will be passed on to Jose as soon as I get them. Okay, no questions yet.
So let's see if there are any questions from the Zoom audience. We're open for questions. Okay, oh, Roberto, yeah, go ahead. Yes, I have a question for Jose. Very nice talk; it was great to see all the possible outcomes that could be probed with the different search strategies. So I was wondering, because you mentioned that the only Monte Carlo software program making an effort to get these signatures is Pythia: could you comment a little more on that? Because when people try to adapt Pythia to new signatures, it's kind of very hard. So is this an effort from within Pythia, or just other people who started to tweak Pythia for this purpose? Okay, so let me tell you a bit about the sociological component of this. Yes, as you said, Pythia is a Monte Carlo where you essentially have to hard-code the model, so it's maybe not the tool you would first use for BSM. What happened is that around 2011, 2012, these Hidden Valley models were out there, but the whole LLP thing exploded much later. And the main person behind Pythia at the moment, Torbjörn Sjöstrand, and I'm probably mispronouncing it, took a student and said, hey, why don't you implement a Hidden Valley model in Pythia? The module is actually called Hidden Valley, and only later did people start saying, oh, but these are dark showers. The student did that, and then the student left the field. And then the whole LLP fever started, yes, maybe fever isn't the happiest word, but the whole effort started up again, and people looked back at what tools they had, and there was this module that was just sitting there, nobody used it, and it essentially allows you to play exactly these kinds of games: change the dark pion mass, change the lifetime. It has some limitations, of course; you cannot do everything.
It has actually been updated recently in the context of Snowmass; there's a new version of Pythia with new dark-sector machinery, and there are people heavily involved in doing that. I understand that the other main players in the game, and here it's not just any Monte Carlo but really the showering programs, so Herwig and Sherpa, haven't caught up at the moment. I imagine they have plans, and at some point they will become more or less on par. Some people inside Herwig have certain plugins to run, but it's more sort of home-brewed stuff. So Pythia is the one that is available and robust. The issue is not so much the diagram, I mean, you can do that in MadGraph, you can do it in Pythia itself, but the showering, right? Say I have a dark quark pair, okay, now a dark gluon is emitted, now this dark gluon goes to two dark quarks, now this dark quark hadronizes into two dark pions and one dark kaon, yes? This part is the part that is really hard. And essentially Pythia only does theories that are SU(N), with certain limitations, but at the level of the very rudimentary explorations we are doing, that is more than enough. If I wanted to do, I don't know, an E6 shower and connect it to strings, that would be fantastic, but Pythia doesn't do E6; it just does SU(N), it wouldn't even do SO(N), okay? So this is something that works at the moment. Obviously it's not ideal to have only one tool that does the simulation, because you also want to assess systematic effects, and the point about Pythia versus Herwig is that the way they approach the parton shower is conceptually different; they use different models, and the hadronization is also different. So it would actually be desirable to have more than one; it's just not there.
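For concreteness, the Hidden Valley module is steered through ordinary Pythia 8 settings. Below is an illustrative fragment of what such a command file might look like; the setting names follow the Pythia 8 Hidden Valley documentation, but their availability and defaults are version-dependent, so treat this as a sketch to be checked against your Pythia release rather than a working card:

```text
! Illustrative Hidden Valley setup (verify names against your Pythia 8 version)
HiddenValley:ffbar2Zv = on        ! produce a Z' decaying to dark quarks
HiddenValley:Ngauge = 3           ! number of dark colors, SU(3)
HiddenValley:nFlav = 3            ! number of dark flavors
HiddenValley:FSR = on             ! dark-sector parton shower
HiddenValley:fragment = on        ! dark hadronization
HiddenValley:alphaOrder = 1       ! running dark coupling
HiddenValley:Lambda = 10.         ! dark confinement scale (GeV)
4900111:tau0 = 50.                ! dark pion proper lifetime, c*tau in mm
```

This is exactly the kind of game described above: the dark pion mass and lifetime are free knobs you turn by hand, and the module does the SU(N) shower and fragmentation in between.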
But if you see, this goes back to some point in 2011, 2012; it was implemented, it sat there for some time, and now people are actively using it. But in principle, you could use MadGraph with a more complex model, as long as everything decays into dark quarks and dark gluons, such that Pythia does the shower. You need Pythia for the shower; the real problem is the shower. Yes, and MadGraph cannot do the shower. Yeah, but that's true also for the standard model. Exactly. Now, for the SUEP scene, I think Strassler has a toy generator for SUEPs, but that's his very own thing, and it's not public or anything. So there's still a lot of effort needed, or a lot of room, not only for people doing BSM and so on, but also for people developing parton shower codes who could jump in, right? So it's a real opportunity. I hope that at some point the competitors will catch up, develop their own products, and this competition will only be beneficial. But at the moment, I think we're all running Pythia 8. Okay, thanks. Great, let's see, nothing on YouTube yet. So let me ask one question now that we're on this topic. Okay, we have our SU(N) dark sector. Apart from most likely giving us a dark matter candidate, what other problems could such a sector address? Are there works on how a dark sector could help with baryogenesis, or what are the other motivations? So I didn't want to go into those details, but for instance, if you think of the framework of asymmetric dark matter, where people try to connect the baryon asymmetry with the dark matter asymmetry, at least the setups I know of work with a strongly interacting dark sector, right? So that would be one possible motivation: if you say, well, I'm going to take asymmetric dark matter seriously, fine, you need a strongly interacting dark sector.
Then it's also worth looking at how strongly interacting dark sectors could give you strong phase transitions in the early universe, and they could even source gravitational waves. There's a very nice paper by Pedro Schwaller on the topic, for instance, in PRL. So this is another aspect of having a strongly interacting dark sector that you could connect to. And of course, gravitational waves are quite trendy, and more data is bound to come, so I think there's a very interesting connection between these kinds of sectors and that. I don't know much about the baryogenesis part because I'm not an expert, but since you can do asymmetric dark matter, I imagine you can also use them for that. I mean, this is not something new, right? People have used these kinds of setups. Now, I haven't seen much work making a proper link between, let's say, a dark shower search and a baryogenesis model, precisely because there are many limitations. How would you compute it? Okay, this is the lifetime and the mass: how do you translate that? It's not like in the freeze-in paper I was showing you, where, let me go here, yeah, I know this line is a dark matter mass of 12 keV, reheating temperature of 10 to the 10 GeV. I cannot do that in a dark shower model; I don't even know how to compute the lifetime of the dark pion given the SU(N) and the number of flavors. But certainly, if you only talk about motivation, these strongly interacting sectors can do much more, and I should have mentioned that. For now, the idea is, I mean, I didn't talk about the relic, right? As a dark sector, it might look like QCD, because we know QCD, and then we roll along. Yeah, yeah. Within that you have a palette of options that is essentially unexplored. I mean, very little, very limited results from CMS as yet.
So the revival is also very recent; I mean, we're talking 2015. I didn't mention this, but RPV SUSY gives you long-lived particles, and RPV SUSY has been around for 30 years, probably, or more, right? While dark showers are something fairly, fairly novel. So I hope it becomes, I mean, my own bias, I'm working on these topics, but I think it has the potential to become an important topic, also because it doesn't fall into the previous classification of one or two neutral long-lived particles, or one or two charged ones; it's really a different phenomenon. Yeah, it's so different, yeah, completely. And speaking about that, specifically about the SUEPs you were discussing, and the triggering: what's being discussed right now regarding how to trigger on these kinds of signatures? You can take a Z. Sure. You take a Z that decays to leptons, and then whatever comes with it goes to SUEPs. So you generate it with an associated standard model particle, and that does the trigger. That, I think, is the strategy; I haven't looked at it in great detail, but that's the strategy. Because I don't think you have a way to say: I just got 1,000 hits of charged tracks with pT of 1 GeV in my detector, can you please save this collision? No, it just won't happen. So it requires a different approach, essentially. And I think right now, also if you go back to the original Strassler paper on hidden valleys, that's pretty much what they are doing. Because if it's fully hadronic and soft, the standard model is there with a much, much higher rate, and there's not much you can do. But if you look at this paper by Simon, Simone, Michele, and Dean over here, the one where they actually call this a soft bomb, I guess the referee didn't like the term, so they moved to this funky acronym. They discussed how these things could be done. So it's not hopeless, but it's incredibly challenging.
And also, sorry, this puts you in a situation where you really need to understand a strongly coupled regime. Some of these papers even bring in elements from string theory, or the AdS/CFT conjecture; if you really go to a very conformal regime of the SUEPs, you could try to use some of those tools to make estimations. It really goes to the frontier of calculability; I mean, this is hardcore QFT. I removed most of the hardcore QFT elements from my talk, but it's quite hardcore. I see. So, any other questions? Nothing from YouTube so far? We're very close to our time limit, so let me ask one final question regarding the comparison of the dedicated detectors. Could you go back to that slide? Yeah, yeah, exactly. So there you have, right, the distance from the interaction point, and you have the LLP mass range, but you have a bit of degeneracy there: you have ANUBIS and CODEX-b sharing the same range, and MATHUSLA is right on the border between 10 and 100 meters. So what are the advantages of each kind? It's a very complex world, because all of these experiments are fighting for recognition, and if they are actually approved, I would argue that each of them has some peculiarity that makes it worth it. Some issues are more practical. For instance, milliQan, which is here, has essentially been integrated recently as a subdetector of CMS, because it's using one of the CMS interaction points; so there's no need for a separate review, it just moved to being a subcomponent. Now, MATHUSLA, CODEX-b, and ANUBIS are probably the, I mean, very strong proposals. MATHUSLA, for instance, is a football field on top of CERN; you can look at the display I stole from Emma here, this huge thing over here, this is an aerial view. Now, this will be costly, of course; it will take a lot of money, and you might say, oh, this is only doing LLPs or whatever.
Now, the point is that if you look at the MATHUSLA physics case and the different MATHUSLA documents, there's also a program to do cosmic ray physics, because the area you're covering is, in effective terms, comparable to dedicated cosmic ray experiments. The effective area, I guess. Yeah, yeah, effective area: if you sum up the number of particles collected over the area, that's what I mean, right? Obviously, the area covered by an air shower is much larger in terms of distance, but the flux you're capturing is almost similar. Now, I'm part of the MATHUSLA collaboration, but cosmic ray physics is not my area, so I cannot really say how strong that program is, but I'm saying, for those who know, the studies are there: how well you can measure things like the shower depth, X-max, and there are many studies. And as far as I know, none of the other experiments can do a cosmic ray physics program. Of course, you put it in the detector proposal and say, okay, this is something extra we can do. Now ANUBIS, for instance: to go down to the ATLAS cavern you need to take an elevator in the access shaft, but you normally only use the elevator when you need to go down because something broke, so during the shutdowns and so on. So what the ANUBIS people say is: why don't we put a detector in this shaft that is hardly ever used, okay? This is, if you want, the cheapest proposal, in terms of just going there, dropping a box, and leaving. Many of the proposals are actually similar in spirit, but let me just tell you about this one, ANUBIS. Now, ANUBIS is getting criticism because the other proposals shield themselves: MATHUSLA is shielded by all the rock between the LHC and the surface.
CODEX-b, which is something that will be put next to LHCb, is going to have additional shielding to remove kaons, pions, et cetera, that are produced in the collisions, but ANUBIS is not adding any shielding, and they are making claims that they can handle that. So the question is how effectively they can. I think most of these experiments, when they get to the next stage, run a sort of prototype version first, and right now the point is to make a convincing physics case for how they are going to operate. And since they have different shielding situations, no shielding, just the rock that shields you, or additional shielding here and there, this changes what you can do. And, I mean, FASER is like MATHUSLA but way smaller, and it's already approved, so if you were essentially competing with FASER, you cannot really do that. Now, for instance, as far as I understand, FASER doesn't have much in the way of tracking, while something MATHUSLA has is five tracker layers with a spacing of one meter. So if you get an LLP that decays hadronically, you will get many tracks, and if the LLP decays into, say, a muon or a tau, you will get fewer tracks. So with MATHUSLA, if you get a signal, you can do some identification, while with FASER you are essentially doing a counting experiment, yes? And I think FASER is not measuring energy either. In an ideal MATHUSLA, I mean, it would be fantastic to have, you know, a full detector that can give you all the information. So it's a very complex game, and each of the detectors has pros and cons. This diagram is a good way to compare them, but again, each of them has its own subtleties. Okay. And then the question is, who will win? It's a combination of many different factors, and of course you have to make the best physics case out of all these things, and, if you are running a demonstrator, show that the demonstrator is effectively working and that your expectations were not wrong.
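The trade-off between near and far detectors can be sketched with the standard exponential decay-in-volume probability; the distances and boost below are rough stand-ins, not the proposals' real geometries:

```python
import math

def p_decay_in_volume(l1_m, l2_m, ctau_m, boost):
    """Probability that an LLP flying toward a detector spanning lab-frame
    distances [l1, l2] from the interaction point decays inside it, for an
    exponential decay law with mean length boost * ctau."""
    d = boost * ctau_m
    return math.exp(-l1_m / d) - math.exp(-l2_m / d)

# Illustrative geometries: a surface detector ~100-130 m away versus a
# cavern-scale detector ~10-35 m away, both with a nominal boost of 3.
for ctau in (1.0, 100.0, 10000.0):
    far = p_decay_in_volume(100.0, 130.0, ctau, boost=3.0)
    near = p_decay_in_volume(10.0, 35.0, ctau, boost=3.0)
    print(f"ctau = {ctau:g} m: near {near:.2e}, far {far:.2e}")
```

Running this shows why the proposals are not simply redundant: the shorter-cτ part of parameter space favors the nearer volume, while at very long lifetimes the acceptances flatten out and shielding, background, and cost become the deciding factors.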
That's what I think the game is. Now, many detectors are actually operating: MAPP, SND@LHC, FASER. So in the next one or two years, all these other proposals should converge on whether they go forward or not. Now, a proposal I didn't discuss is the proposal to do a Forward Physics Facility at CERN, a facility that would host many of these experiments. This is something fairly new, in the context of Snowmass. You can probably just look it up on INSPIRE, FPF, Forward Physics Facility, and there's a very large amount of documentation on the physics case. So if you want to know why, I haven't looked into it in detail, I don't know the advantage of that versus other forward experiments, but in contrast, MATHUSLA and all these other things are transverse; they are really trying to look for particles with somewhat heavier masses, more moderately boosted particles, whereas the forward facility sits essentially almost along the beam. And if you look at SND@LHC, I think they also look very forward, something like an eta of eight, which is almost an angle of zero. ATLAS and CMS go up to an eta of 2.5, LHCb goes from two to five, and then SND@LHC is at a different angle, so they serve different physics purposes as well. It's a very hard decision, to be fair, how to weigh all this. Hopefully I don't have to do that. The most I can do is join MATHUSLA, say MATHUSLA would be great to look for this, and look forward to that. Great, okay. So we've already passed the time, so I think it's a good moment to wrap up the meeting. Thank you very much, Jose, for this very nice, very clear, very comprehensive webinar. And we'll see everybody next on the 23rd of November, when we will have Lina Nesim. So thank you very much for your attention, and we'll see you next time. Take care.