Okay, I think we are live. All right, hello everyone and welcome once again to the Latin American Webinars on Physics. I'm Joel Jones from the PUCP in Peru and I will be your host today. This is webinar 130 and we're having Giovanna Cottin as a speaker. Originally from Chile, Giovanna did her PhD at Cambridge, UK, which was then followed by two postdoc positions: the first one at National Taiwan University in Taipei, and the second one at the Pontificia Universidad Católica de Chile. She is now an assistant professor at Universidad Adolfo Ibáñez, as well as a young researcher at the new Millennium Institute for Subatomic Physics at the High Energy Frontier (SAPHIR), both in Chile. Today, Giovanna will tell us the latest news regarding long-lived fermions at the LHC and at future experiments. We are very happy to have her as a speaker today. Before we begin, as usual, let me remind the viewers that you can ask questions and make comments via the YouTube live chat, and these questions will be passed on to Giovanna at the end of her talk. So, well, Giovanna, we're all yours.

Thank you all, and the organizers, for this nice invitation. I will comment on some proposals for long-lived particles based on recent work that I've done. I will share my screen so we can start the presentation. Let me know if you can see all right. Everything good? Okay, so I would first like to give a brief overview of long-lived particles and why we are looking for them, and then focus on two different frameworks that predict long-lived fermions: frameworks involving heavy neutral leptons and light neutralinos. This is work all done in collaboration with these great people. There are three articles that you can already find online (these are published articles) and one work in progress that I would also like to comment on, which is done in collaboration with Juan Carlos Helo from Chile, our student Fabián Hernández, and other collaborators. So let's start. This will be the outline of the talk: like I mentioned, a brief introduction on long-lived particles beyond the standard model and why we want them. Then I will focus on the phenomenology at the Large Hadron Collider of models predicting heavy neutral leptons, and I would like to discuss briefly two scenarios: the minimal HNL model, as well as a model within effective field theory, basically extending the standard model effective field theory with right-handed neutrinos. Then, towards the end of the talk, I would like to tell you about the collider phenomenology at the LHC of an R-parity-violating SUSY model that predicts another type of long-lived fermion, a long-lived neutralino, which can decay. And I wanted to finish the talk by highlighting that not only the LHC can catch long-lived particles: atmospheric neutrino detectors such as Super-Kamiokande also have something to say in constraining models predicting long-lived light neutralinos. So let's start. Long-lived particles are basically defined as particles that can travel macroscopic distances before decaying inside a particle detector. Depending on your detector capabilities and geometrical acceptance, this can range from order of microns to several meters, even kilometers, if you have such a large, far detector. Now, our standard model of particle physics has plenty of long-lived particles, as I'm trying to show in this image made for our community white paper.
You can see the proper decay distance of several standard model states as a function of their mass. You can see that we have quite a spread here. We have very short-lived particles such as the Z or the Higgs boson, which are produced and then immediately decay to other states. On the other hand, we have very stable particles such as the proton or the electron, whose stability is basically protected by a symmetry. And we have a lot of things in the middle: long-lived particles such as the muon, for example. The muon is long-lived because it decays through a heavy mediator, the W boson. And so on and so forth. You can find several theoretical reasons, such as the presence of conserved or approximately conserved symmetries, small or feeble couplings, heavy mediators, hierarchies of mass scales, or small phase space. These are reasons why a particle would have a very small total decay width, which in turn gives it a large proper decay distance. So one can say: look, the standard model has plenty of long-lived states, so why not a more exotic sector beyond the standard model which shares the same structure? There is no reason why this BSM sector should not have long-lived particles. In fact, the theoretical motivations to search for long-lived particles are the same reasons why we're looking for BSM physics in general. We want to understand what dark matter is; perhaps you want to explain baryogenesis within your model, or neutrino mass generation, or perhaps naturalness. So the motivations are transversal, and within BSM frameworks one can classify, very broadly, five classes of theories predicting these long-lived states. This is work that we've done in the community, in the long-lived particle white paper, where we classify these models according to common signatures: SUSY-type models, Higgs portal models, dark matter models, gauge portals, or heavy neutral leptons. Now, all of these models will have a common phenomenology within each category, and several classes of theories can be embedded in these five circles, but the common thing is that they share similar types of signatures. So this is where I live; this is what I like to do: think about the phenomenology of these different models, about the LLP search strategies, ongoing or still to be proposed, that can identify or cover gaps in corners of the parameter space of these models, and perhaps reinterpret a long-lived particle search result from the LHC to see how it can constrain new models predicting long-lived fermions or long-lived particles. Long-lived fermions, in particular, are what I am focusing on at the moment. This phenomenology relies quite a bit on what the experiments have to say, because they are the ones actually hunting for the long-lived particles in the data and presenting the experimental results. What I would basically like to do is understand to what extent the experiments are not yet covering all the possible regions or corners of parameter space of new physics models predicting these long-lived particles. I leave you here two references.
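As a worked example of the heavy-mediator point just made, take the muon: its width is suppressed by four powers of the W mass through the Fermi constant,

    \Gamma_\mu \simeq \frac{G_F^2 m_\mu^5}{192\pi^3} \approx 3.0\times 10^{-19}\ \mathrm{GeV}
    \qquad\Rightarrow\qquad
    c\tau_\mu = \frac{\hbar c}{\Gamma_\mu} \approx 659\ \mathrm{m},

so a tiny total width translates directly into a macroscopic proper decay distance.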
Coming back to those two references: one is the MATHUSLA physics case, which has a very comprehensive theoretical review of models, basically addressing top-down motivations to search for long-lived particles. The other is the LLP community white paper, where you can see the details of the classification into simplified models according to common signatures, as well as many other issues in LLP phenomenology. These are two complementary, very interesting and comprehensive reports on long-lived particle physics and the state of the art. Now, concerning the interplay between phenomenology and experiment: okay, we've been looking for new physics at the LHC, but so far no new fundamental particle has been seen after the discovery of the Higgs boson. So one can ask why. Where is the new physics hiding? One reason might be that it's hiding in exotic decay patterns that can arise from long-lived particle signatures. So let me go back to this image. If you think about LHC-type collider searches in terms of proper decay distances, a long-lived particle might sit in between here, where all the long-lived mesons lie: a few microns up to several meters. Now, the majority of the LHC searches rely on the fact that particles are produced and then promptly decay to other standard objects, so most of the experimental work is done at one of the two extremes. The other extreme would be that your particle is totally stable on detector timescales: it traverses the entire detector before decaying, or it doesn't decay at all and shows up as missing energy, for instance. So there is still, I believe, more that can be done within this collider LLP region, basically to carry out a systematic program in the same way that has been done for prompt or detector-stable searches. And this is what the community has been trying to do: map this new frontier in collider physics, considering the lifetime of the particle. Now, of course, if you want to understand how many decays of this long-lived particle you will have inside your detector, you need the proper decay distance; the decay position follows an exponential distribution (I spell out the formula below). Here I show you this nice slide from Heather Russell, based on the ATLAS geometry. For a beta-gamma-c-tau factor of 1.5 meters or so, you can see that most of the decays happen in the calorimeters or the muon systems, but you also get decays in other parts of your detector, around 25% in the inner trackers of the LHC experiments. You know that ATLAS and CMS are detectors shaped like an onion, with many detector subsystems, so you can have decays anywhere across your volume, and you can maybe think of placing a far detector, to the far right, to catch longer travel distances of your long-lived particle, that is, higher lifetimes. In this talk I will comment specifically on one displaced vertex search strategy, so I will focus on decays inside the inner trackers of the LHC main experiments. But what I wanted to highlight is that depending on where your particle actually decays, this can give rise to many different exotic signatures. And it does not only depend on where the particle decays in terms of travel distance; it also depends on the mass of the particle, the energy, the charge, the color charge, et cetera, all the quantum numbers as well as the lifetime. So things are more complicated than what happens with prompt searches.
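Explicitly, for a particle with boost \beta\gamma and proper lifetime \tau, the fraction of decays occurring between distances L_1 and L_2 from the production point is

    P(L_1, L_2) = e^{-L_1/(\beta\gamma c\tau)} - e^{-L_2/(\beta\gamma c\tau)} .

As an illustration with the numbers from these slides: for \beta\gamma c\tau = 1.5 m, a fiducial region between 4 mm and 300 mm (the displaced vertex acceptance quoted later in the talk) captures e^{-0.004/1.5} - e^{-0.3/1.5} \approx 0.18 of the decays, roughly 18%.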
In prompt searches, such as the diagram I'm showing on the right, particles are produced at this star, where the proton-proton LHC collision happens, and then everything decays very promptly, so you can reconstruct standard objects inside the experiment, such as prompt muons, prompt electrons, missing transverse momentum or jets. But what happens if you have a macroscopic lifetime? Well, a plethora of new signatures can appear. You can have displaced vertices inside the inner trackers of your LHC detectors. If your long-lived particle travels more than that, like a meter, then you might see a decay in the calorimeters. If your particle has color, then you can see a displaced jet or an emerging jet. If the decays happen farther away, you can have displaced vertices in the muon systems, and also more complicated signatures, and perhaps signatures we still haven't thought of. This is what makes long-lived particle searches, I believe, even more difficult for experiments than standard prompt searches or stable-particle searches, because the main LHC detectors, ATLAS for example, were not really designed to be efficient at reconstructing, say, charged-particle tracks produced farther and farther away from the proton-proton interaction point. So these searches are more challenging experimentally, but nonetheless it is very fun and interesting to see what our detectors can do. So, as I said before, I will focus on a particular long-lived particle search strategy based on displaced vertices. And let's talk about long-lived fermions. First I wanted to comment on heavy neutral leptons. Heavy neutral leptons (HNLs) arise in mechanisms that are able to explain why the neutrinos of the standard model have mass and why those masses are so small. There are several things that we know. We know that neutrino oscillations happen, therefore some of the neutrinos must have a mass, but we don't know much about the underlying mechanism behind the mass generation involving these heavy neutral leptons: we don't know if there is a seesaw mechanism, or which seesaw, etc. We also don't know in which specific beyond-the-standard-model framework of neutrino mass generation this seesaw mechanism can be embedded. Maybe there are new interactions of heavy neutral leptons beyond the standard interactions. We don't know the nature of the HNL, whether it is Dirac or Majorana, and we don't know the mass scale. So I would like to comment on the minimal HNL model. This framework predicts heavy neutral leptons which can mix with the standard model neutrinos, and you can realize this in many beyond-the-standard-model theories. Within this mixing, the important point is that the mixing squared of this HNL N goes as the ratio of the small neutrino mass to the Majorana mass scale; this is a standard type-I seesaw model, and I make the scaling explicit below. Now, if you compute the total decay width of this HNL, this guy can naturally be a long-lived particle for small mixings and, let's say, GeV-scale masses. This is what I'm trying to show in this plot on the right, where we have the mixing squared as a function of the HNL mass, below the W boson mass. The different lines represent different values of the proper decay distance, from 4 millimeters up to 10 meters.
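To make the seesaw scaling explicit (these are the standard type-I relations, quoted as rough scalings rather than the exact expressions used in the simulations):

    |V_{\ell N}|^2 \sim \frac{m_\nu}{M_N}, \qquad
    \Gamma_N \propto G_F^2\, |V_{\ell N}|^2\, M_N^5 .

For example, m_\nu \approx 0.05 eV and M_N = 10 GeV give |V_{\ell N}|^2 \sim 5\times 10^{-12}. The width is doubly suppressed, by the tiny mixing and by the off-shell W/Z propagators hidden in G_F, which is what pushes c\tau = \hbar c/\Gamma_N up to the millimeter-to-meter lines drawn in the plot.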
So this means that if your HNL in this sort of minimal model has a mass below the W boson mass, it can decay to leptons and jets somewhere in, for example, the tracker region of ATLAS, with a macroscopic displaced vertex which you can reconstruct from the charged decay products. And here you see that the long lifetime comes from the small mixings and also because the decay is mediated by an off-shell W boson or Z boson, if you're thinking of charged- or neutral-current interactions. One thing that I would like to comment: since the particular specifics of the seesaw mechanism are very model dependent, the phenomenological approach from here on is to treat the mixing and the mass of the HNL as two independent parameters for all numerical simulation purposes. Okay. So this is the phenomenological approach. Now, let me comment on some works in the minimal HNL model, on how you can catch this guy, this long-lived fermion, with displaced vertices. Back in 2018, we proposed a search strategy based on one of the ATLAS SUSY searches, but optimized for a low-mass HNL. We are thinking of HNLs being produced from the decays of W bosons at the LHC, which means that their masses lie between, roughly, the B meson mass and the W mass, so our kinematic mass range is between five and eighty GeV or so. This guy then decays, via charged or neutral currents, to displaced objects, to displaced tracks, where a track means any charged particle after hadronization. This strategy, like I said, is able to efficiently reconstruct vertices in the ATLAS inner detector; the fiducial acceptance of this analysis starts from four millimeters out to 300 millimeters or so. The analysis proceeds by selecting high-quality displaced tracks, which means you place specific cuts on the rapidity of the track, on the transverse impact parameter of the track, which must be larger than what you would cut on for prompt searches, and also a high pT. After you collect all the tracks, you reconstruct the displaced vertex: you require an invariant mass larger than 10 GeV or so, and a minimum number of tracks, at least three (a schematic of this selection is sketched below). Then you can basically design a background-free search, because nothing in the standard model will give you a high-mass, high-track-multiplicity displaced vertex above the B meson mass. Building up on previous works, this year our new article was published, where we optimized this strategy and applied it to several theoretical models predicting HNLs. Also, I forgot to mention that there are some cuts on the transverse and longitudinal distance of the displaced vertex as well, to be inside the ATLAS fiducial inner tracker region. So these are our results, our sensitivity estimates for the minimal HNL model. I'm showing here mixing in the electron sector only.
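Before the numbers, here is a minimal sketch of the vertex selection just described. The fiducial range, track multiplicity, and vertex mass cut are the values quoted in the talk; the pT, |eta| and impact parameter thresholds, the function names, and the data layout are my own illustrative assumptions, and tracks are assigned the pion mass as mentioned later in the Q&A.

    import math

    M_PION = 0.1396  # GeV; every track is assigned the pion mass hypothesis

    def good_track(pt, eta, d0):
        """High-quality displaced track: high pT, central, and with a transverse
        impact parameter larger than a prompt-search cut (thresholds assumed)."""
        return pt > 1.0 and abs(eta) < 2.5 and abs(d0) > 2.0  # GeV, -, mm

    def dv_mass(tracks):
        """Invariant mass of the vertex from its tracks (px, py, pz in GeV),
        each taken as a pion."""
        e = sum(math.sqrt(px**2 + py**2 + pz**2 + M_PION**2)
                for (px, py, pz) in tracks)
        px, py, pz = (sum(c) for c in zip(*tracks))
        return math.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

    def select_dv(r_xy_mm, tracks):
        """Fiducial displaced vertex: 4 mm < r < 300 mm, at least three
        good tracks, and invariant mass above 10 GeV."""
        return (4.0 < r_xy_mm < 300.0
                and len(tracks) >= 3
                and dv_mass(tracks) > 10.0)

    # Example: three tracks from a decay at a transverse radius of 20 mm
    tracks = [(5.0, 1.0, 2.0), (-3.0, 4.0, 0.5), (1.0, -6.0, 1.0)]
    print(select_dv(20.0, tracks))  # passes: m_DV ~ 16 GeV with 3 tracks

It is the combination of the mass and multiplicity cuts that makes the search essentially background-free, which is what the sensitivity estimates below rely on.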
Our projections for different values of the luminosity, 3000 and 300 inverse femtobarns, are shown in red and green respectively, and in the shaded gray area you can see the current experimental constraints. Now, something that I learned only recently: there are two new ATLAS and CMS results for channels with a very similar signature. They look for displaced leptons, leptons reconstructed as electrons or muons coming from a common origin, so this is also a displaced vertex search strategy, and they focus on the same sort of process, HNLs produced from the decays of W's. They have provided new limits with about 139 inverse femtobarns for ATLAS and 138 inverse femtobarns for CMS. These are not included in the shaded region, but you can still see that most of the parameter space that these two new searches are able to exclude is already contained in previous versions of this analysis with lower luminosity. The CMS search might extend this shaded region at lower masses, below five GeV, as you can see in the plot on the far right. And this is interesting because they do something very different: they do not require that the displaced vertex starts from some minimum displacement, but instead do a very complicated and detailed background estimation in the region below five GeV. So it's quite interesting to see these very recent new results. One thing to comment is that our strategy is also optimal to constrain mixing in the tau sector, and this is still a standing gap in coverage for the experiments. Tau leptons are trickier to deal with experimentally: you have reduced sensitivity and reduced efficiency due to the difficulties in reconstructing tau leptons as opposed to electrons and muons. Our multi-track displaced vertex search strategy, as it relies on tracks, can also provide constraints in the tau sector, which is far less constrained experimentally, as you can see from the darker area. I would also like to mention that there was a recent review released on the arXiv for the Snowmass initiative in the United States, which very comprehensively highlights the present and future status of heavy neutral leptons, including lifetime frontier experiments, several models, and different kinematic regions such as the ones I presented here. I just leave you the reference there. Now, one can wonder: okay, what happens beyond this minimal HNL model? In fact, there are similar LLP displaced vertex search strategies that have been proposed, or are currently being proposed, to be able to constrain different mixing patterns, different combinations, different mass regions, different production mechanisms, and even extensions of the gauge symmetry of the standard model. In left-right symmetric models, for example, you can also have displaced vertices coming from, let's say, the decays of a right-handed W. There are also models in which the HNL can be pair-produced via, for example, a Z', which can happen if you extend the standard model by a new U(1)' symmetry, etc. So there are many possibilities where you can have predictions of one or even two displaced vertices, and sometimes without even a prompt object that you can trigger on, as in the minimal case. So here I leave you some of the works that have been done.
To sort of tidy up the signatures, one can think: okay, maybe there is a systematic way in which I can study such non-minimal HNL frameworks. And one can apply effective field theories. So I would like to show you a few examples of recent work with my collaborators in this direction. Basically, you can think of extending the standard model, in the same language as the minimal model that I showed you before, but with non-renormalizable operators which are suppressed by a new physics scale lambda. We have been studying these dimension-six four-fermion operators in two cases: one set of operators, one table, corresponds to operators with a single HNL, and we also study operators with pairs of HNLs. I leave you the references here. You can see that not all of these operators will be relevant for LHC phenomenology, so we focus on the first three lines, basically. And just to give you an idea of how expectations or sensitivities can change with respect to the minimal scenario, I wanted to comment on these four-fermion operators with pairs of HNLs. Here, what you can see is that the production is no longer dominated by the small mixing of the HNL with the standard model neutrinos, but by the operator. The HNL can still decay with a displaced vertex in the same way as in the minimal case, through the mixing with the light or active neutrinos. So we propose a strategy similar to the one that I described before for the minimal case. And also, since there are now several proposed far detectors where you can search for long-lived particles, we also study what these proposed far detectors can say. There are two different approaches: for ATLAS we do a full simulation of the HNL decay, while for far detectors what one can do is compute decay probabilities for the simulated HNL inside the fiducial volume. So let me comment a little bit on what the different proposals are and what all these far detectors are. I borrowed a slide from Larry Lee at the Large Hadron Collider Physics conference last year, because he showed this very comprehensive image of where all these far detectors are; perhaps you've heard of them. So we have different proposed and new experiments. The ones in green are experiments that are approved to be built. For example, FASER will be taking data, I think, in Run 3; MoEDAL and MAPP are lifetime frontier experiments that are already approved as well. The others are not yet approved. You can see, for instance, in the upper right corner, the distances at which these experiments are placed. FASER is a very forward detector, cylindrical in shape, more or less 480 meters away from the ATLAS interaction point. If you consider MATHUSLA, for example, MATHUSLA is a much bigger proposed surface detector, let's say 100 meters on top of ATLAS or CMS. So basically you would be able to catch long-lived particles that are produced in the primary proton-proton collision at the LHC, travel 100 meters, and then are caught with the surface detector. There is also, for example, CODEX-b, which is proposed to be built as an extension of the LHCb experiment.
So, this cube box here. There is also, for example, AL3X, which is proposed to be built near the ALICE experiment. MAPP is proposed as a subdetector of the MoEDAL experiment, which is also near LHCb and designed to catch neutral particles. As I said, there is also milliQan, etc. There is a plethora of new proposals. ANUBIS I am missing; I will comment on this as well. It is also one of the cylindrically shaped proposed far detector experiments, to be built in one of the service shafts just above ATLAS or CMS, where you can place tracking layers and also catch long-lived particles. So you see that there are very different geometrical acceptances, very different sizes and volumes for these experiments, and this of course translates into different sensitivities for your particular model and into what each of these experiments can do. I will talk about HNLs: with the far detectors you can go lower in mass and lower in mixing, because you will have larger displacements. And just to give you a qualitative idea of what limits the sensitivity you will see on the next slide: on the one hand, the mass reach is limited by the cross section of your HNL, of course; then, on one of the two diagonal lines, toward the right, your HNL starts to become prompt, and that limits the sensitivity of each of the experiments; on the other hand, along the bottom line, the decays start to happen very far away, outside the detector. This is a very cartoonish way of representing the important geometrical parameters for each of the far detectors. We consider these six setups. Of course this builds on earlier work, and you can find all the probability formulas in these references. Basically, what you do is simulate the probability that each HNL decays inside the acceptance of each of these experiments (a sketch is below); the acceptance is parameterized by all these parameters, and in black, for example, you can see the LHC interaction point, which for these experiments can be ATLAS, for example, or LHCb. So I will show you the estimates for the dimension-six four-fermion operators with pairs of HNLs. We have two cases here, both Dirac and Majorana HNLs, and this choice is for a fixed value of the new physics scale, fixed to 1 TeV. Here I'm showing operators with first-generation quarks; that's what the (1,1) means. So this is basically an intermediate, not the most conservative, case for the prospects. There are different colored lines for each of the experiments that I mentioned, including the ATLAS projection in brown, at different stages of luminosity. So you can see that in this effective framework, if you consider that the production is dominated by the operator, this basically enhances the cross section, and you can see how the mixings that you can probe go down as far as 10 to the minus 22, well below the type-I seesaw expectations. Within the lines, the overlaps and complementarities of the different far detector experiments, as well as ATLAS, can be understood: the highest mass reach for the HNL, for example, is obtained with ATLAS.
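To be concrete about the decay-in-volume probability mentioned above, here is a sketch under assumed numbers; the exact parameterizations are in the references on the slide, and the detector depth and boost in the example are placeholders of mine (only the 480 m FASER distance is from the talk).

    import math

    def decay_in_volume_prob(beta_gamma, ctau_m, l_near_m, l_far_m):
        """Probability that a particle with boost beta*gamma and proper decay
        length ctau decays between distances l_near and l_far from its
        production point, measured along its flight direction (meters)."""
        lam = beta_gamma * ctau_m  # lab-frame mean decay length
        return math.exp(-l_near_m / lam) - math.exp(-l_far_m / lam)

    # Example: an HNL with beta*gamma = 1000 and ctau = 1 m crossing a
    # FASER-like detector ~480 m downstream with an assumed ~1.5 m depth.
    p = decay_in_volume_prob(beta_gamma=1000.0, ctau_m=1.0,
                             l_near_m=480.0, l_far_m=481.5)
    print(f"decay probability ~ {p:.2e}")  # ~9e-4 per HNL crossing the volume

The signal estimate then folds this per-particle probability with the geometric acceptance of each detector, which is what distinguishes the six setups.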
And MATHUSLA basically dominates at the lowest mixings, because it is so far away and has such a large volume: you can catch many HNL decays there, and that's the yellow line leading the sensitivity prospects. Let me now briefly comment on a different type of long-lived fermion, though not so heavy. I wanted to talk a little about a long-lived neutralino within an R-parity-violating SUSY framework, both at the LHC and beyond. It's interesting because the same signature that I presented to you before in the context of the minimal HNL model can be realized in a SUSY framework once you break R-parity through a non-zero lambda-prime coupling. Here we're thinking about production of a neutralino through a slepton, and the small coupling and the lightness of the neutralino give rise to a macroscopic lifetime. So if you replace the HNL with the neutralino and the W with the slepton, you can see the same sort of process. The same signature can therefore be applied, and we can see what we can do to constrain light neutralinos, which are far less constrained experimentally at the LHC than, for example, squarks or gluinos, and also how you can probe this lambda-prime-111 coupling. I wanted to show you this because it's very preliminary work by my student Fabián Hernández from the Universidad de La Serena; he is a master's student, and this is his work in progress for his thesis. You can see that with the displaced vertex search strategy in the ATLAS inner tracker, neutralino masses as light as, let's say, 50 to 200 GeV can be probed with a displaced vertex. In this set of plots you have different choices for the slepton mass, from 1 to 5 TeV. And this is interesting because if you think about the other current experimental constraints on lambda-prime-111, we find that displaced vertices can probe much lower values of this coupling. Okay, so what can happen beyond the LHC, with even lighter neutralinos? This is work that, I must confess, is very much out of my comfort zone; it's very new, the idea of thinking about how ongoing atmospheric neutrino detectors can constrain long-lived particles. There is a seminal work done in 2020 by these people, one of whom is my collaborator Víctor Muñoz. Basically, in that article they provide a pipeline and some benchmark models for what happens when long-lived particles are produced from the decays of mesons coming from cosmic-ray air showers, and how those signatures can be captured by, for example, the Super-Kamiokande detector. So, in our work from last year, we focus on two benchmark scenarios: benchmark one, where we focus on production of neutralinos coming from D mesons, and benchmark two, from kaons. The signature would basically be a shower-like final state. These are what Super-Kamiokande calls e-like signature events, which can come from electromagnetic or hadronic showers, as opposed to the mu-like signatures in Super-K. So these are the two basic benchmarks that we study. And basically, you can compute meson production from the cascade equations of cosmic-ray physics, and the neutralino can be produced from the decays of the mesons which are generated in this cosmic-ray air shower.
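Schematically (this is my notation, condensing the rate estimate described next; the exact expressions are in the paper), the expected number of e-like events folds the neutralino flux from meson decays in the shower with the decay probability inside the detector:

    N_{\rm sig} \simeq T \int dE\, d\Omega\; \frac{d\Phi_\chi}{dE\, d\Omega}\, A(\Omega)\, P_{\rm dec}(E)\,\epsilon,
    \qquad
    P_{\rm dec}(E) = e^{-L/(\beta\gamma c\tau)}\left(1 - e^{-\Delta L/(\beta\gamma c\tau)}\right),

where T is the exposure, A(\Omega) the geometric acceptance of the cylindrical detector model, L the distance from the production point in the atmosphere to the detector, \Delta L the path length crossed inside it, and \epsilon the e-like selection efficiency.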
Now, an e-like signal in Super-K will originate from an electromagnetic shower in the detector, so we can compute the neutralino event rate in the detector, which basically takes into account the geometrical acceptance and the probability of the neutralino decaying inside the fiducial volume. In the same way as I showed for the far detectors, here are the relevant geometrical parameters. For Super-K you need to consider, for example, the distance traveled by the neutralino from where it is produced in the atmosphere before it can reach Super-K, which we model as a cylinder. We then analyze the Super-Kamiokande data and calculate the number of expected events for these e-like signals. And just to show you the results in these two benchmarks, which represent production from D mesons and production from kaons: I think it's very interesting. Benchmark one we chose on purpose, because there has already been a lot of research done in the context of light neutralinos at collider and beam-dump experiments, so we wanted to see what Super-Kamiokande can do and how it competes with those lifetime frontier experiments. For benchmark one, the plot shows exclusions on the product of branching ratios, the branching ratio of the D meson to the neutralino times the branching ratio of the neutralino decay to the shower-like signature, and there Super-Kamiokande cannot do as well. But on the other hand, if we go to the second benchmark, production from kaons, which are more abundantly produced than D mesons in the atmosphere, here is where we find Super-Kamiokande does better and explores a region of parameter space that no other lifetime frontier experiment has probed in the past. So I believe this is very interesting: with Super-Kamiokande this search can probe a wide range of lifetimes, as you can see on the x axis, with the sensitivity peaking at a c-tau of the order of one kilometer, so much, much longer lived than in the collider studies I presented. Okay, so I'm reaching the end of the talk. I basically wanted to give you a snapshot of different works on the phenomenology of long-lived particles, of what is currently being done at the LHC and proposed far detectors, and what they can say about specific models predicting long-lived fermions. I focused on two scenarios, HNL models and light neutralinos, and these models lie in regions where we don't know the underlying mechanism behind neutrino mass generation; we don't know this unknown physics. And if there is one take-home message that I want to convey to you, it is that the lifetime frontier is a transversal line of exploration across many different experiments. So I think this is why it's worth continuing to invest in this comprehensive long-lived particle program at colliders and even at other experimental facilities, such as atmospheric neutrino detectors, which can also perhaps provide the discovery of long-lived particles. Thank you.

Excellent. Thank you very much, Giovanna, for such an interesting and very clear talk. So we can now move towards the questions part of the talk. I have a couple of questions, but okay, I'm going to let Nicolás ask the first question. Thank you. Thank you, Giovanna, for a nice talk. So I was a bit surprised by the results on your slide, I think it was 16, the result from ATLAS and CMS. I've never seen this before. Let me share again. Can you see this? Exactly. Oh, sorry, I shared the PDF.
Yeah, I shared the PDF, so let me share my Google presentation. Sorry, I lost the zoom. Okay, here it is. You see this, right? This one or this one? I think it was 16. 16. Yes. Six. That one. Why are such limits so narrow? Because in previous plots the limits were more like triangular, more broken. Oh, but perhaps it's because here it's a log scale on the y axis, as opposed to the minimal case. Maybe it was in a previous slide. Yes, I think there were some official results from CMS and ATLAS, but maybe it was your previous slide, sorry. So I also show you the sensitivity reach in the minimal model, which looks like this. Exactly. Oh, sorry, sorry. Yes, this one, slide 10. Yes. Exactly. So for instance, if you compare with the plot on the left, this is quite different, and it is also closing on the right, the CMS one compared to the one on the left. It's quite different. Why are the ATLAS and CMS ones like that? I see. So, unfortunately, if you go through these links, the ATLAS result was from March 12, so there is no paper nor conference note yet that I can read. My feeling is that the analysis is different. One of the differences that I know of is that the CMS search does not require the displaced vertices to start from, let's say, four millimeters, which was done in the previous ATLAS search, so they are basically not limited by an initial displacement. That's perhaps why it looks different: from one GeV to five GeV, they can probe that region. Now, I don't know exactly what they did between the one GeV and five GeV region in the new result, since I cannot read the paper yet, but the main difference between our proposal and these two other ATLAS and CMS searches is that their signature requires explicit leptons in the final state: you must have two leptons coming from a common displaced origin. In our case, we don't care if the track is a lepton or a hadron coming from the hadronization of one of the partons. So it's a different strategy, and you see that the shapes will be very different. But to be honest, I don't know why the ATLAS curve does this bumpy thing at four GeV. I haven't been able to read the analysis yet; I haven't found it. I have the impression that it's like two zones that are merged, or something. Right, this is what I think: they do something for some kinematic region and then perhaps do something different elsewhere. But still, they are basing their search on displaced leptons coming from a common origin, and triggering on prompt objects, the prompt lepton associated with the W boson. That's the main pipeline for their search. But yeah, this is very nice; that's all I can say. We'll keep an eye on how these searches evolve. Did they show the paper, or not yet? I checked yesterday and I didn't see it. So, in this analysis, since it says Majorana, I guess they also use same-sign leptons. Well, in your case you don't care about that. I don't care, I just look for tracks, but they usually present their results in categories of same-sign or opposite-sign leptons, based on charge. Since we're already talking about the tracks, I'd like to ask you a little bit about that, because in principle you need a minimum number of tracks in order to reconstruct the displaced vertex. I remember that when you have HNLs that are lighter than one GeV, usually, you know, you have to calculate the decay into, let's say, one electron and one charged pion, or something like this.
So, once you get past one GeV, you have the decay into several hadrons, right? So the question is, how do you deal with that? Because, of course, you calculate the three-body decay and then that will hadronize. So how many tracks do you actually get when doing this analysis? Because these heavy neutral leptons are not that heavy, right? So, just a brief comment: in these works, the production only comes from the W's, so we are not considering meson production, which can be important at one GeV. Okay. So our results are conservative in that sense. Now, depending on how background-free you want your search to be, you place the cut somewhere in the plane of displaced vertex invariant mass and number of tracks. This is something that we take basically from the experiment: the experiment presented efficiencies in these two variables, and you could see that starting from three tracks and five GeV you were in a signal region where basically no background was expected to be found. Okay. Yeah, so this is basically what drove our choice of the cuts, to be as optimized as possible while still being a background-free search, because we did not want to deal with instrumental backgrounds on top of the standard model backgrounds. The standard model backgrounds you can get rid of with the fiducial cut and the mass cut, but there are other, more complicated sources of backgrounds that are purely instrumental. Yeah. And these efficiencies already encapsulate those material effects. Okay. And so, basically, if I remember correctly from the simulation, you could see up to tens of tracks, 20 tracks, depending on the mass points that you have. But once you see a few events, then you just run things, you know, and look at efficiencies, and don't look at individual tracks. Yeah, but that's the order of magnitude, tens; perhaps hundreds could also be possible. Okay. Interesting. Because for lighter ones, I would expect, you know, just two tracks. The lighter you go, of course, the more this starts to become an issue, right? But still, you can assume that all the tracks have the pion mass; this is the hypothesis in this analysis. Yeah, usually three, two or three, but the events passing the cut have about three, that's more or less it. Thank you. Let's see if we have any questions from the audience. We have a comment. Well, yeah, we have a question from Guillermo Gambini. I guess that I missed it. I guess he was referring to our discussion on the ATLAS and CMS results, asking if the shape could be related to the reconstruction efficiencies. And it could be. It could be: you see, below five GeV you start to have all these mesons, right, the kaons, the D's. I don't know, perhaps they are excluding some of these windows specifically within that region. Yeah, I think it could happen because of the different strategies, which translates into how efficient the searches are in that particular mass region. Okay. We usually have a bit of a lag, so let's see if he answers. Do we have any other questions from the audience? Yeah, just a question for Giovanna. Yeah, and very nice talk, Giovanna, by the way.
So I was wondering, did you explore any kinematic considerations in the case where your long-lived particle has a different spin than one half, and instead of a right-handed neutrino it's a kind of boson, or spin three halves? Could that be challenging for the detection of the displaced vertex? Nice question. Off the top of my head, I would say yes, it could be, because, like I commented in the beginning, the particular quantum numbers of your LLP are going to drive the type of signature. But to be honest, I haven't thought about LLPs beyond spin one half. There are several works that you can check in our white paper, within the model categories. Now, of course, the efficiencies will also depend on the particle mass, the HNL mass, and of course your acceptance within the detector. I could imagine some sort of energy distributions, or maybe you can construct some angular variables that perhaps would help you discriminate between different spins. I don't know; I haven't really thought much about it, but I know that there are works targeting those models with these searches. A nice review would be the MATHUSLA physics case or the LLP white paper, so you can check them. Yeah. Oh, thank you. Yeah. So Guillermo Gambini says thanks. Any other questions? Okay, I'll ask another one. Could you go back to your results with the dimension-six operators? Yeah, one sec. The famous slide 16 of Nicolás. I don't know why every time I try to go back to this slide show it fills my entire screen. There you go. This one? I think, yes, there. Right, so here. So first, in this kind of analysis, what sort of signature are you looking for? Are you again looking for many tracks, or maybe the two same-sign leptons? Since you mentioned it, it makes me think about it. Yeah, okay. So let's say that for the decay part, the signature is the same as the one from the minimal HNL scenario, so it's a track-based search. But since, as you see here, I'm showing you plots with pairs of HNLs, you don't really have the prompt lepton anymore, as you can see in this cartoon drawing that I showed here. So it's a bit different, and we do this trick that we are forced to do due to the lack of dedicated displaced triggers that would be able to catch the signature right away. Basically, there was this ATLAS SUSY search where you find displaced electrons that don't have the isolation cuts that prompt leptons, prompt electrons, do. And what you can do in Monte Carlo, because in Monte Carlo you know everything that you're simulating, is identify the Monte Carlo truth ID of that electron, and basically truth-match the lepton they would apply their trigger to, so that it corresponds to the one coming from the displaced vertex. And then you require the same sort of high-mass, multi-track displaced vertex selection. But this is the trigger, and we consider it justified by these displaced-electron data samples that could be available in the ATLAS data: basically ATLAS, in that search, relies on photon triggers, triggers in which you can find displaced electrons.
So yeah, we do this trick, we truth-match the index, and then we can have such a signature; we propose such a way to constrain the EFT models with pair-produced signals. Because then the lepton that you're triggering on is not prompt. Exactly. That's what you're saying, that you can trigger on non-prompt electrons. We're saying that ATLAS could. That it could, okay. That it could if they looked at these data sets. Right. We know they have them, because they use them for displaced leptons in a SUSY search, so I know they have those samples. So basically we're saying: okay, maybe you can start looking at these data samples even more, because they are important for non-minimal HNL models. Right. Okay. Okay. And so, your results are of course valid for a particular lambda scale, right? You said 1 TeV, okay, 1 TeV squared. I see. Yes, yes, this was just an optimistic choice. If you look at our paper, we have several scenarios. Once you start increasing this lambda, of course, this region starts to shrink more and more. Right, because the cross section from these operators scales as lambda to the minus four. So, up to about 13 TeV, ATLAS can still say something, depending on the operator. I think for this particular operator, and we have different choices in the paper, the ATLAS limit is like 15 TeV or so. But yeah, this is like the bulk of what the LHC could be able to probe; it's not the actual limit. And here, what's the difference between Dirac and Majorana? I mean, how does that affect the analysis? Okay, so actually, in our work we find that the main difference is the mass reach, because the production cross sections for the Majorana and Dirac cases differ. You have an enhancement for the, sorry, for the Dirac case: you have a higher cross section, so there's a little bit of a better mass reach. But I would say that this strategy would not be able to distinguish the two cases. And probably there is some destructive interference going on in the Majorana case. Correct, correct. This difference is noticeable for higher HNL masses; for lower HNL masses it's the same, but then the Dirac one goes higher. Yeah, we explain that in our paper, because we also found that; we have some analytic expressions as well. Okay, yeah. So, I don't know if there are any other questions from the audience; let me have a look at the chat. Okay, so no questions on the chat so far. Nobody else from the audience. Okay, so I think that's it. Yeah, we're past the hour already, so I understand. Okay, sorry for taking all the time for the questions. Okay, so then that's it. Thank you very much, Giovanna, for such a very nice talk. And we'll see everybody at the next webinar, which I think is not in two weeks but in four, right? So anyway, we'll see you around. Thank you so much for the invitation. Bye bye, everybody.