Okay, here we go. Hello everyone and welcome to our 48th web seminar of the series of Latin American webinars in physics. My name is Joel Jones from the PUCP in Peru and I'll be your host today. Our speaker is Werner Porod, who is a professor at the University of Würzburg. He will talk about the interplay between SUSY and neutrino physics at the LHC. Werner received his PhD from the University of Vienna and has carried out postdocs at DESY, Valencia, Vienna, and Zürich, and he's probably best known for being the author of SPheno, among other pieces of useful code. Werner's talk today is titled "Supersymmetry, Neutrinos and the LHC", and we're very happy to have him as our speaker today. But before we start, we'd like to remind you that after the talk, you can be part of the discussion by writing questions and comments via the YouTube live chat system. I'd also like to remind you, hopefully you can see this, to check our WordPress, Facebook, and Google Plus pages. If you want to write one down, you should write the first one, because we have links to all of the other pages there. So remember, it's lowphysics.wordpress.com, okay? After this, I'll hand you over to Werner. So, they're all yours then.

Thank you very much for the introduction. It's a pleasure to speak here. Before we start, I want to say that this is the first time I do this type of presentation, and I know the questions should be at the end. However, I also know that if nobody interrupts me, I have a tendency to be very fast. So please, if I get too fast, please interrupt me. Now let me switch to the slides. As Joel has said, I'm going to speak about supersymmetry, neutrinos, and the LHC. I have mainly two topics. The first one is the implications of the Higgs discovery and of the beyond-the-standard-model searches at the LHC for model building. And then I will focus on a special class of supersymmetric models called natural SUSY.
I will briefly discuss the MSSM context, tell you why I'm not that happy about this class of models, and how the problem that I see there can be solved in an extended model. Well, let me start with, for sure, well-known results, namely Higgs physics, which was the last part of the standard model which needed to be discovered. The Higgs mass has been measured with a very high precision, to 125.1 plus minus 0.2 GeV. You see there was an update which, within uncertainties, is consistent with the previous measurement from Run 1. What's also important, besides the mass measurement shown here, is the measurement of the relative signal strengths. The mu parameters that you see here on the axis are the ratios of the measurements over the standard model predictions. So for example, mu gamma gamma is the production of a Higgs boson which decays further into gamma gamma, divided by the standard model prediction. In a perfect world, if the standard model were correct — let's assume that for a moment — then everything should be on this dotted line here. As you can see from the error bars and uncertainties, within one sigma there's more or less consistency between the standard model predictions and the measurements. This hasn't changed too much since; the central values are even getting somewhat closer to the standard model prediction than in this picture, but essentially the picture remains the same. Now, if one looks at the value of the Higgs mass, this has been measured to 125 GeV. In supersymmetric models, in contrast to the standard model, the Higgs mass is a prediction. In particular, in the MSSM it is essentially bounded at tree level by the Z mass, and one then needs large radiative corrections to get it to larger values. This can be accommodated thanks to the large top Yukawa coupling, but what we have to look at is actually not the mass, but the mass squared.
If one looks at what 125 GeV squared means, that's the Z mass squared plus approximately (86 GeV) squared, which means we have corrections of order 80 to 90%. Remember, the Z mass as an upper bound is the tree-level value that you get in the MSSM, which means you need huge corrections. We can do that because the top Yukawa coupling is order one, and therefore we have large corrections. However, as we will see, it drives us into a part of parameter space which is somewhat uncomfortable. At the same time, the LHC has not only looked for the Higgs, but also for physics beyond the standard model, and the outcome can unfortunately be summarized on this slide: a lot of exclusions, but no signal. So what does this mean for model building, particularly for supersymmetry? Well, before coming to that, let's first ask the question: do the bounds imply that supersymmetry has to be really heavy? Shown here are two examples, one from ATLAS, one from CMS. The first one is squark production, where the squark decays into a quark and a chargino, and the chargino decays further into a W and the lightest neutralino. That's one out of many searches, but the picture for squarks is about the same in the others. And what you see here is the exclusion in the plane of the lightest neutralino mass versus the squark mass, where the chargino mass is assumed to be the average of the squark mass and the neutralino mass. With these kinematic assumptions, they find exclusions up to order 1.2 TeV if the neutralino is rather light, below order 200 GeV. But as described here, where the neutralino mass comes close to the squark mass, the squarks can go down to, let's say, order 450 GeV and still be consistent with all experimental observations. So the exclusions do not necessarily imply that all squarks have to be very heavy. However, that would imply that we have a very compressed spectrum. And that's actually rather difficult to obtain in simple SUSY models like GMSB and so forth and so on.
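As a quick numeric cross-check of the numbers just quoted (my own back-of-the-envelope sketch, not from the slides):

```python
import math

# How big is the correction needed to lift the tree-level MSSM Higgs
# mass bound (~ m_Z) up to the measured 125 GeV?
m_h = 125.0   # measured Higgs mass, GeV
m_z = 91.19   # Z boson mass, GeV

delta_sq = m_h**2 - m_z**2        # required shift of m_h^2, GeV^2
delta = math.sqrt(delta_sq)       # ~ 86 GeV, as quoted in the talk
relative = delta_sq / m_z**2      # ~ 0.88, i.e. an O(80-90%) correction

print(delta, relative)
```

This is exactly the point being made: the loop corrections have to supply almost as much of m_h squared as the tree level does.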
So there you would need, if this were to be true, a non-minimal form of supersymmetry breaking. Similar results hold if you go to gluino production; you can even find bounds of up to 1.8, 1.9 TeV, depending on the exact channel. And as you can see, close to the kinematic limit you have rather soft jets — that's the reason why you can get to low masses here: simply, the cuts on the jet energy go below what the experiments want to have. So you still could get rather light gluinos, let's say order 700 to 800 GeV. That's not excluded by experiment, but generically you find rather strong bounds. Now, what does this combination imply for model building? As I said before, we have a Higgs mass of 125 GeV, so we need a rather large loop contribution to obtain it. And this implies that the scalar partners of the top quarks, the so-called stops, have to be rather heavy, and/or we need an additional large left-right mixing in the stops. Just to remind you: in all simple supersymmetric models, for the left- and right-handed components of the top quark you have supersymmetric partners, and they mix among each other. That mixing is caused by a so-called trilinear coupling, which is denoted by A_top, or if you go up to the high scale, by A_0. If you look at different high-scale SUSY scenarios, you see, for example, in so-called gauge mediation — that is, SUSY breaking is transferred from the hidden sector to the visible sector via gauge interactions — that in the minimal model you hardly have any left-right mixing. So you definitely need rather heavy stops. And indeed people found in 2012 that in that case the stops should be heavier than about 10 TeV. The reason is that with small left-right mixing, the main contribution goes, roughly speaking, like the log of the stop mass squared over the top mass squared. Of course, that's the minimal model. You can go to more complicated models, based on this paper by Meade, Seiberg and Shih.
In general gauge-mediated SUSY breaking you can get additional terms. If you do that — then these people here, Knapen and Redigolo, did a recent study last year where they really scanned over different variants of these types of models — they found that they do get, because they get additional left-right mixing, stop masses down to about the sbottom masses, if the so-called messenger scale, the scale at which the SUSY breaking is transferred, is large. In the minimal model it is of order 100 TeV; in this class of models, to get such small stop and sbottom masses, you have to go up to 10 to the 15 GeV, so close to the GUT scale. I should add here that the stop and the sbottom are, in this class of models, the lightest strongly-interacting supersymmetric particles. So at 10 TeV this would be beyond the reach of the 14 TeV LHC; you would really need the 100 TeV collider that is already being discussed, and so on. Now, if you go to unified models like the CMSSM or the NUHM, which means non-universal Higgs masses: what you assume there is that you have universal scalar masses characterized by this parameter m_0. In the CMSSM, this would be the same for the scalar partners of the fermions and for the Higgs bosons. In the non-universal Higgs mass models, this mass parameter would only be for the sfermions, and there would be a different scalar mass parameter for the Higgs sector. Independent of the class of models, you find approximately that the value of the trilinear coupling, which governs the left-right mixing for the stops, has to be about twice m_0. This has been worked out by several people. Recently, a global-fit group did a fit taking into account all constraints from low and high energy physics, which means B physics, low-energy observables, and the LHC data.
And you find — okay, one can fit the CMSSM, you can do that — that with this relation here, the squarks and the gluino would be about 2 TeV, so we might see a glimpse of them by the end of next year; the slepton mass would be about 600 GeV — that's for the right sleptons; the left sleptons are actually heavier, about 700 GeV — and the lightest neutralino would be of order 450 GeV. You can even go to more generic high-scale models, but independent of the parametrization, you usually find that we need a large trilinear parameter, which is of order — and a negative sign is preferred — between one and three times the maximum of the gaugino mass parameter M_1/2 or one of the squark mass parameters of the scalar top sector, at the GUT scale, which then gets RG-evolved down to the low scale. The negative sign is preferred because you then get an additional contribution which comes in negatively with the M_1/2 parameter. A nice summary of this class of models was already worked out in 2012 by Perlman and Kulkarni. So you see here: a light stop, which means order one to two TeV, which is still accessible at the LHC, needs a large trilinear coupling. But that's actually somewhat problematic. The reason being: we have many scalars in SUSY models. It's not as easy as in the standard model, where we just have the usual Higgs potential; you have a much more complicated potential. Now, when one does SUSY studies, one usually chooses a parameter set such that one calculates mu and the corresponding soft SUSY-breaking parameter B to obtain correct electroweak symmetry breaking, including the correct Z mass and so on. This approach clearly engineers a minimum of the potential that's consistent with all the data, no question about that. But it does not exclude additional other minima, which are potentially lower lying and which break charge and/or color.
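For orientation, there is a classic tree-level rule of thumb for avoiding a charge- and color-breaking minimum in the stop direction (a rough necessary condition only; the numerical analysis described next is far more complete):

```latex
\left|A_t\right|^2 \;\lesssim\; 3\left(m_{Q_3}^2 + m_{U_3}^2 + m_{H_u}^2 + \mu^2\right)
```

This makes explicit why the large trilinear couplings preferred by the Higgs mass fits are dangerous: A_t enters quadratically against the soft masses.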
So what we did in a study was to really scan over the parameter space, calculate all possible extrema of the potential at tree level — using some dedicated mathematical tools — and then include the loop level, the one-loop effective potential, starting from the tree-level extrema and going to the one-loop corrected extrema, and checking the tunneling rates to the lower-lying minima. What you see in these plots are different parameter combinations: the first plot is the A_0, m_0 plane, fixing M_1/2 to one TeV and tan beta to 10. In the second one, we fixed M_1/2, the gaugino mass parameter, and the scalar mass parameter m_0 to one TeV, and show this in the tan beta, A_0 plane. The lines that you see here are lines of constant Higgs mass. The color coding is such that in the green area, the desired minimum is the global minimum. In the blue and red areas, it actually turns out that a color- or charge-breaking minimum is lower lying than the one that you would like to have. Blue means it's more or less metastable; metastable means we took here about a 10% probability that it has survived until today, and red means you're below the 10%. Of course, 10% is a rather arbitrary number; you can change that. You see also here, by the dots, that there's a lot of numerics involved, so there's some fluctuation. So you should not take any single point here too seriously — whether it's metastable or unstable around the 10%, that's not that important. The important thing here is: if you look at the range where we have 125 GeV for the Higgs mass, we can be in the stable region, but a large part — and in this plot it's even more pronounced — is actually in an unstable region. We should add here that we took at most three VEVs for the sfermions, because the mathematics to get all the tree-level minima is quite involved, and you find a huge number of them, which grows, roughly speaking, like three to the power of the number of VEVs that you allow.
So in addition to the two Higgs VEVs we have up to five VEVs, and thus an upper limit on the number of extrema of three to the power five, and that's quite a bit. So what you should take from here is: if you have large A parameters, as in the best-fit point from before, there is a real danger that you have color- or charge-breaking minima. This should be taken into account as a constraint. Of course, this was the MSSM with mSUGRA-like boundary conditions. We have to do this in a more general context, but it turns out that for generic high-scale models, as presented before, this stays true; for GMSB, where you don't have large left-right mixing parameters, there is no such minima problem. Now you can go to the general MSSM; in part of the parameter space you have these color- or charge-breaking minima issues, but of course you have much more freedom in the parameters. The general MSSM means you define every parameter at the electroweak scale, taking into account the flavor constraints, which more or less kill large flavor-mixing parameters. So what's usually assumed is that everything is like in the standard model — the only sources of flavor mixing that you have are the CKM and the PMNS matrices — and the soft SUSY-breaking parameters do not induce additional flavor mixing. Of course, the generic signatures in such scenarios are well known: multi-leptons, multi-jets, missing energy — the SUSY searches à la mSUGRA. However, what really matters here is the kinematics. The mSUGRA models usually have a large hierarchy: squarks much heavier than sleptons, and those significantly heavier than charginos and neutralinos, giving rise to hard jets, hard leptons, and quite a lot of missing transverse energy. In the generic MSSM, you do have the possibility to get a compressed spectrum, and as we've seen in the exclusion plots, that means you can get down to small masses — a few hundred GeV for squarks — consistent with all the data.
Now, a subclass of the generic MSSM is what people term natural supersymmetry. You see here the references which introduced it. The basic idea is: let's keep only those supersymmetric partners light that you need to get a natural Higgs boson, which means you do not have huge cancellations between different sectors contributing to the Higgs mass. In practice this means that the stop, the sbottom — which is the SUSY partner of the bottom quark — the gluino, and the lightest neutralinos should be considered. The lightest neutralinos and charginos should be higgsino-like, because in this class of models the so-called mu parameter should be of order M_Z, let's say up to order a hundred GeV. Now, if we put all the gaugino mass parameters to, let's say, one, two, three TeV, this implies that the higgsinos are relatively degenerate. Depending on the ratio of the gaugino mass parameters over the mu parameter at the electroweak scale, you find a mass splitting between the chargino and the neutralinos of between a few hundred MeV and five to ten GeV. Of course, if it goes below a few hundred MeV or so, what you produce in the cascade decays are usually soft pions, which you do not see. This implies that actually all three of those particles — the neutralino one, neutralino two, and the chargino — are essentially invisible at the LHC. If you would go even further down, let's say really close to a hundred MeV for the mass difference between the chargino and the neutralino, what you would find is a long-lived chargino, and then you get, in addition to the direct searches, constraints from searches for metastable charged particles. And then you actually get a bound of order a few hundred GeV already. However, if the mass difference is such that the decay is prompt, then the bound is currently of order one hundred GeV for those neutralinos and charginos, consistent with the considerations concerning the naturalness of the Higgs boson.
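The near-degeneracy of the higgsino states can be illustrated with the standard tree-level chargino and neutralino mass matrices; the parameter values below are my own illustrative choices, not the speaker's:

```python
import numpy as np

# Heavy gauginos, small mu: the lightest chargino and the two lightest
# neutralinos come out nearly degenerate around |mu|.
mz, mw, sw2 = 91.19, 80.38, 0.231
sw, cw = np.sqrt(sw2), np.sqrt(1 - sw2)
M1, M2, mu, tb = 2000.0, 2000.0, 150.0, 10.0
beta = np.arctan(tb)
sb, cb = np.sin(beta), np.cos(beta)

# Tree-level neutralino mass matrix in the (bino, wino, higgsino_d, higgsino_u) basis
Mn = np.array([
    [M1,        0.0,       -mz*sw*cb,  mz*sw*sb],
    [0.0,       M2,         mz*cw*cb, -mz*cw*sb],
    [-mz*sw*cb, mz*cw*cb,   0.0,      -mu],
    [ mz*sw*sb, -mz*cw*sb, -mu,        0.0]])
# Tree-level chargino mass matrix
Mc = np.array([[M2,               np.sqrt(2)*mw*sb],
               [np.sqrt(2)*mw*cb, mu]])

mn = np.sort(np.abs(np.linalg.eigvalsh(Mn)))        # neutralino masses
mc = np.sort(np.linalg.svd(Mc, compute_uv=False))   # chargino masses
print(mc[0] - mn[0])   # chargino-neutralino splitting, of order a GeV here
```

Pushing M1 and M2 up further shrinks the splitting toward the few-hundred-MeV regime discussed in the talk.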
Now, in this class of models, essentially what you have are final states containing top quarks, bottom quarks, and Ws, because in the strongly-interacting sector you have the gluino, the light stop — sorry, the light sbottom — which decay further into top quark or bottom quark plus neutralino or chargino, or the gluino decaying into top top or bottom bottom plus the higgsinos. Now, the small mass differences from before imply that in the higgsino sector, essentially the charginos and neutralinos are all invisible. So if you see something like a top or bottom plus missing energy, in this class of scenarios you do not know whether you have produced a stop or a sbottom. In particular, since the branching ratios depend on the detailed nature of the stop and the sbottom, you hardly have a handle on it. One more word concerning the higgsino mass parameter. Natural supersymmetry implies that the mu parameter in the superpotential should be of order M_Z, up to a few hundred GeV. However, you can construct models that have a soft SUSY-breaking mass parameter for the higgsinos. That's possible because they come in a vector-like representation of SU(2). So the higgsino mass is actually the mu parameter plus this soft SUSY-breaking parameter. So the generic statement found in the literature, that natural supersymmetry implies light higgsinos, might not be true if you have, in addition to this mu term, a soft SUSY-breaking parameter for the higgsinos. This is just a word of caution. Now, what's the problem with this type of scenario? If we have higgsinos as the lightest supersymmetric particles, then they have a very large annihilation cross section, which means that if you look at the dark matter aspect, those guys in this mass range of a hundred GeV cannot be the dark matter of the universe. So in this class of scenarios, you would need some additional source for the dark matter.
And that's a feature which I do not like, and therefore I'm not sure whether this type of scenario should really be called a natural supersymmetric scenario. So what is the current situation if you look now, for example, at stop searches? The sbottom searches are rather similar. Again, what you see here is an example of ATLAS and CMS searches, and you see that the different searches go up to one TeV for the stop mass if the neutralino is relatively light. What they look at, to be more precise, is the stop decaying into a top quark, on-shell or off-shell, and the lightest neutralino, for both ATLAS and CMS. There's some area here which is not really covered, where the stop mass is approximately the top mass plus the neutralino mass, because there you have a huge background from top pair production. So there's a small gap in these plots, and a light stop would still be possible there, but it's getting smaller and smaller as the experimental techniques are refined. Now let's assume we really had such a light stop, of the order of the top mass. We would have gluino masses of order, let's say, two, three TeV. And from the Higgs mass, naturalness would mean that mu squared should also be of order M_Z squared. Now, if you look at the RGE running, you would get a large negative contribution from the large gluino mass: when running up, the stop mass parameter decreases, because the top Yukawa contribution, which is positive, is going to be smaller than the gluino contribution. And this might imply that at an intermediate scale the soft mass parameter for the right stops could turn negative, implying potentially the need for new physics at an intermediate scale before the GUT scale. Which would be quite interesting. But that's only a side remark. Now, as I said before, this type of scenario lacks a dark matter candidate.
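This running argument can be caricatured numerically. The gluino and top-Yukawa coefficients below follow the standard one-loop MSSM RGE for the right-stop soft mass; freezing the couplings, dropping the electroweak terms, and the input values are my own simplifications, so this is a qualitative toy only:

```python
import math

# Toy upward running of m_U3^2: light stop, heavy gluino.
g3, yt = 1.05, 0.9            # strong and top-Yukawa couplings, frozen
M3 = 2500.0                   # heavy gluino mass parameter, GeV
m2 = 500.0**2                 # light right-stop soft mass^2 at the TeV scale
mQ2 = mHu2 = 500.0**2         # other soft masses entering X_t (toy choice)

t, t_end, dt = math.log(1e3), math.log(1e16), 0.01
loop = 1.0 / (16 * math.pi**2)
scale_neg = None
while t < t_end:
    Xt = yt**2 * (mHu2 + mQ2 + m2)                    # Yukawa term, pushes m2 up
    dm2 = loop * (4 * Xt - (32.0/3.0) * g3**2 * M3**2)  # gluino term, pushes m2 down
    m2 += dm2 * dt
    t += dt
    if m2 < 0 and scale_neg is None:
        scale_neg = math.exp(t)   # scale (GeV) where m_U3^2 turns negative
print(scale_neg)
```

With these toy inputs the parameter already turns negative not far above the input scale, illustrating how hard it is to keep a light stop with a multi-TeV gluino.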
Which brings me to the second part of this talk, namely extended models. What I have in mind in particular are models with extra gauge groups. They are motivated because they have additional D-terms contributing to the Higgs mass at tree level. For example, if you have an extra U(1)_chi, as you would get from SO(10) — this would be the additional U(1) in SO(10) which is orthogonal to the standard model group — you get a contribution which is analogous to the electroweak one. The coupling g_chi here is approximately of the size of the hypercharge coupling, which boosts the tree-level mass up to the order of 100 GeV instead of the Z mass. Depending on the breaking of this symmetry, you could also explain why R-parity is conserved, because this will depend on the breaking of the U(1)_B-L: you have either conserved R-parity or spontaneously broken R-parity. In this class of models you can also easily explain neutrino masses, because the SO(10) representations do contain the right-handed neutrinos. And there you can have either the usual seesaw mechanisms or the inverse seesaw. Most interestingly, if the mass scale of the right-handed neutrinos is at the electroweak scale, you have the possibility to get the right sneutrinos — the supersymmetric partners of the right-handed neutrinos — or another exotic state as the dark matter candidate. And this might be interesting in combination with natural SUSY, which will occur in a modified form now. For those who are not very familiar with neutrino physics, let me briefly summarize on this slide the idea of the inverse seesaw mechanism. In this basis you have the left-handed neutrino, the right-handed neutrino, and some additional fermion S carrying lepton number. These here would be the ordinary Yukawa couplings, and these would be Yukawa couplings coming from a different Higgs boson, between the right-handed neutrinos and this additional fermion.
This mu_S here would be a lepton-number-violating mass term, which is assumed to be small. Now, this would of course be three-by-three per generation in the most general case; let's go for the moment to the limit of only one generation. If this mu_S is zero, then of course these entries here would be zero, and you can immediately solve this system: you find that you have a zero mass eigenvalue and a Dirac pair. In this way, even if the Yukawa couplings here are large — as large as the top Yukawa coupling — you get in this class of models light neutrinos together with a relatively heavy, more or less Dirac-like pair. So you would have a light neutrino, once you assume mu_S to be non-zero, plus a quasi-Dirac pair which is relatively heavy, which means a few hundred GeV or so. Of course, the sneutrino sector would be much more complicated; I just note this complexity and don't go into the details here. The other sleptons, or the sfermions in general, get some additional D-term contributions; I just give you the example of the sleptons. So you have the usual left-slepton part here, the right-slepton part here, and the left-right mixing part here. Usually in these parts you have the M_Z D-term contribution, but now, from the additional Z-prime, you get extra contributions of this form here. And this has quite some impact on the masses. So what I show here is an example, fixing some high-scale parameters which are still consistent with current constraints on the Z-prime. And then you see the spectrum, fixing the soft SUSY-breaking parameters at the high scale, as a function of the Z-prime mass. The lightest sneutrino is actually more or less a right sneutrino, so it's a potential dark matter candidate, besides the lightest neutralino, which in this context would be more or less bino-like.
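The one-generation inverse-seesaw limit just described can be checked numerically; the mass matrix below uses the basis (nu_L, nu_R^c, S) from the slide, with illustrative numbers of my own:

```python
import numpy as np

# Inverse seesaw, one generation: Dirac entry m_D, lepton-number-conserving
# entry M_R, and a small lepton-number-violating entry mu_S.
mD, MR, muS = 100.0, 1000.0, 1e-6   # GeV; mu_S ~ 1 keV
M = np.array([[0.0, mD,  0.0],
              [mD,  0.0, MR],
              [0.0, MR,  muS]])
masses = np.sort(np.abs(np.linalg.eigvalsh(M)))

m_light = masses[0]                         # light neutrino, eV scale here
m_formula = mD**2 * muS / (mD**2 + MR**2)   # approximate analytic result
print(m_light, m_formula, masses[1], masses[2])
# The two heavy states form a quasi-Dirac pair near sqrt(mD^2 + MR^2).
```

Note that the light mass is linear in mu_S, so even a large, top-like Yukawa (large m_D) is compatible with sub-eV neutrinos, exactly as stated in the talk.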
So here you would have the usual hierarchy: left sleptons, then right sleptons, somewhat heavier than the tau sneutrino; and then you would have, in addition, the light right sneutrino. Or, if you go to this part here with rather heavy Z-primes, you actually get an inverted spectrum between the left sleptons and the right sleptons — the opposite of what you usually would expect in a CMSSM-like context. And you would have the tau sneutrino in between the right sleptons and the left sleptons, and the LSP would again be the right sneutrino. Whether this would be a good dark matter candidate or not depends on the details of the parameters in the sneutrino mass matrix and has to be really investigated in detail. So I do not claim that over the whole line every point corresponds to a good dark matter prediction, or let's say explanation, but some part of it does — I think, if I remember correctly, this part here around 2.8 to 2.9 TeV. Now, the current limit on the Z-prime mass in this model is actually close to 2.2 TeV. It's somewhat smaller than the usual limits because in this class of models you have an effect called gauge kinetic mixing, which changes the cross section for the Z-prime as well as the branching ratios. As I said, the squark sector shows a similar dependence: you get a somewhat larger hierarchy, but that's not too different from what you usually have in SUSY models. Now, in this class of models you have relatively light right-handed neutrinos, so they might appear in the cascade decays of the SUSY particles. So we should have a look at their decays. They decay either into a W and a lepton, or a Z and a neutrino, or into a Higgs boson — usually the lightest one — plus a neutrino. Roughly speaking, what one finds, if you look at the branching ratios into W and charged lepton versus Z neutrino versus Higgs neutrino, is a splitting of one half to one quarter to one quarter.
The reason being that the former are governed by the Goldstone components of the gauge bosons, and the latter by the Higgs boson. You can play in the parameter space of this class of models, and one finds there are a couple of points where you get SUSY decays of these states, but they are never large — at most a few percent. So we really have essentially standard-model-like decays for this class of scenarios. But we have a lot of changes in the SUSY cascades; I'm just depicting here the most prominent ones. Usually what you have, in the CMSSM or in gauge mediation, is that the right squarks decay straight into a jet plus the lightest neutralino, giving rise to a very hard jet plus missing energy. And several of the bounds on SUSY come from such searches with very hard jets plus missing energy. Now, what you see here are benchmark points created for this type of models. And essentially what we find is that there are scenarios where the squark decays into quark and neutralino as usual, but the neutralino is not stable: it decays further into a right-handed neutrino and a light neutrino. And then the right-handed neutrino decays further into a lepton plus a W, or a neutrino plus a Z boson, so you have a lot of additional visible stuff and leptons compared to the usual signature of one jet. Or you can have here the decay into one of the heavier neutralinos, which would be in this case the gaugino of the extra U(1), and this would decay further, again into leptons, Ws, and so on. Similarly, the scenarios where the right sneutrino is the LSP have a lot of additional leptons, in particular if the sleptons are lighter than the neutralino, as we have seen in the plot before. So what you find is that you have enhanced multiplicities, in particular for leptons and jets, but leptons are much more important now compared to the CMSSM.
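To summarize the heavy-neutrino branching-ratio discussion above in a formula: by the Goldstone equivalence theorem, for a heavy neutrino N well above the electroweak scale one expects, schematically,

```latex
\Gamma\!\left(N \to \ell^{\mp} W^{\pm}\right) \; : \; \Gamma\!\left(N \to \nu Z\right) \; : \; \Gamma\!\left(N \to \nu h\right) \;\approx\; 2 : 1 : 1
```

i.e. the one-half to one-quarter to one-quarter pattern quoted in the talk, with corrections from the finite W, Z, and Higgs masses.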
So what we did was to take a natural-SUSY-like scenario — essentially all the stuff that I've shown before, with light higgsinos — assume that the stops and gluinos are just above what can currently be seen, but take the sleptons light, motivated by the fact that the new D-terms can reduce the slepton masses. What we had in the superpotential was just the MSSM part, plus a mass term which corresponds to a Majorana mass term for the right-handed neutrinos, plus Yukawa terms. And then you can express the Yukawas in terms of the PMNS matrix and the Casas-Ibarra parametrization. There, as I've shown here, cosines and sines of complex angles appear. And they imply that in the Yukawas you can have hyperbolic cosines, which can be large. The hyperbolic cosine implies that you can easily gain an order of magnitude or two in the size of your Yukawa couplings compared to the usual case. That's what we assumed here. We take the lightest right-handed neutrino to be of order keV, so it can potentially serve as a dark matter candidate in addition to the sneutrinos. And M_5 and M_6 can be of order a few hundred GeV; we assumed actually 120 GeV as an example. Now, as I said before, we assume that the sleptons could be light. So what one sees here in the plot — I hope it's visible — are the cross sections of the different channels. The largest one would be the Drell-Yan process producing, for example, the left slepton together with the left sneutrino; then we have the left sleptons produced pair-wise, charged or neutral, and the right sleptons would be the least important channel. So you can immediately see, by looking at the orders of magnitude, that the most important constraint comes from the Drell-Yan process producing one sneutrino together with one left slepton.
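The enhancement from complex Casas-Ibarra angles is easy to see numerically; the angle below is an arbitrary choice of mine, not a value from the talk:

```python
import cmath

# For a complex angle z = x + i*y, |cos z| and |sin z| grow like
# cosh(y) ~ e^y / 2, so the Yukawas built from them can be enhanced
# by an order of magnitude or more.
z = 0.3 + 3.0j
c = abs(cmath.cos(z))
s = abs(cmath.sin(z))
print(c, s)   # both of order cosh(3) ~ 10
```

An imaginary part of 3 already gives roughly a factor-of-ten enhancement, matching the one-to-two orders of magnitude mentioned in the talk for somewhat larger arguments.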
Now, the dominant decays are actually either into a lepton and the lightest neutralino, or into a sneutrino and a chargino. And depending on whether the neutralino one is the LSP or not, the cascade would stop there. If the sneutrino is lighter than the lightest neutralino, the neutralino would decay further into a neutrino and the sneutrino, so you would have missing energy here. And here you would get, for example, the chargino decaying into a lepton and a sneutrino, so a lepton plus missing energy. And similarly for the left sneutrinos. So what we looked at first was the scenario where we really had the neutralino LSP, so we stopped the cascades at the neutralino level. And what you see here in this plot is the left slepton mass parameter versus the right slepton mass parameter; we fixed mu to be about 120 GeV and tan beta equal to 10. And we used the 8 TeV data plus the 13.9 inverse femtobarn data sample at 13 TeV within the CheckMATE package. So what you do is you put in your model points, the Monte Carlo samples are generated, the detector parametrization is done, and then it is compared to the available ATLAS and CMS analyses inside this package. The most important ones for our consideration are the two-leptons-plus-missing-energy searches. Now, the package itself gives you one parameter called R, which tells you: if R is larger than one, the point is excluded; if it's smaller than one, it's allowed. But of course you should take into account statistical uncertainties and cross-section uncertainties. So that's what we did. We said, okay, let's take a 10% margin on this R value: if it's above 1.1, then a point is excluded — those are the red points here; if it's below 0.9, then it's allowed; and everything in between should be studied more carefully, and that's what we would call ambiguous.
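The margin logic described here can be written down directly. CheckMATE's r value compares the predicted signal to the 95% CL limit, so r above one means excluded; the 10% band is the choice made in the talk:

```python
def classify(r, margin=0.1):
    """Classify a CheckMATE-style r value with a safety margin for
    statistical and cross-section uncertainties: clearly above 1 is
    excluded, clearly below 1 is allowed, in between needs more study."""
    if r > 1.0 + margin:
        return "excluded"
    if r < 1.0 - margin:
        return "allowed"
    return "ambiguous"

print(classify(1.2), classify(0.8), classify(1.05))
```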
So we see here for the slepton masses: if they are, let us say, 400 GeV or below, that is excluded by the data; if they are heavier, they are allowed. The right sleptons, which have a significantly smaller production cross section, come into the game as well, but their bound is only, roughly speaking, of order 200 GeV. Now of course this is at a certain point, 13.9 inverse femtobarns; the data set available now is of order 35 to 36 inverse femtobarns, so these points here will start becoming red. Looking at the exclusion, my guess would be that one can push the bound on the right sleptons beyond 200 GeV taking into account the full set of data available. Of course, the mu parameter here is small, so the question is what would change if you increase mu; that is shown here in the right plot. The blue line here, that is the diagonal, taking m_L equal to m_E and going along this line, and that is the uncertainty band here. Then we took the green line for mu equal to 150 GeV, the red line for 200, and the black line for 250. You see, as the bound goes up, this part here decreases: we have less phase space, which means we have very soft leptons, therefore the analyses are not sensitive; then comes the part where they are sensitive; and then here, roughly speaking, we get to the same bound as long as there is sufficient phase space. Now, if mu gets above 250, we no longer have sufficient phase space for hard leptons at a significantly large cross section, so you would not get any bound anymore. So the sleptons can go down to, let us say, order 270 GeV and would not be seen with the current and foreseen data.
Of course, more interesting is actually the case where the R-sneutrino is the LSP. Then we get an additional constraint from the direct production of two charginos, which decay further into a lepton and the sneutrino. That is shown here: what you see is the sneutrino mass versus the mu parameter, which essentially is the mass of the chargino. You see that with the data set available we can exclude, roughly speaking, chargino masses up to order 270 to 280 GeV if the sneutrino is light. Of course, if the sneutrino is also of order 200 GeV, we do not have any bounds, because the leptons are not sufficiently energetic to pass the cuts of the analyses. So essentially you see we have here a band where we have relatively soft leptons and therefore are not sensitive; the color coding is again red excluded, green ambiguous, blue allowed. So we then chose the sneutrino mass in this band here; that is the plot on this side here, where mu is chosen to be the sneutrino mass plus 25 GeV, and this should of course be smaller than the slepton masses, where we set the left slepton mass parameter equal to the right slepton mass parameter. So you can see here: if there is sufficient phase space, we can get up to order 550 to 600 GeV for the slepton mass parameters. If we now turn to this part here, the bound is essentially given by the cross section, where we do not have sufficient statistics, and this part here again is the hardness of the leptons, which gives us the boundary. Now going to this part here: we can fix, for example, the mu parameter to 400 GeV, which implies that here the sleptons only have three-body decays, or the two-body decay, for example, of a left slepton into a W and the R-sneutrino. But this most likely has a small branching ratio, because you have a relatively small mixing into the sneutrino sector. So essentially what you have here are three-body decays mediated by virtual higgsinos.
And then what you find is that you actually have a lot of Higgs bosons, Ws and Zs in the final state, plus missing energy. And it turns out that the analyses implemented in the CheckMATE package, and there are quite a lot of them, are not sensitive in this region here. So there are parts where the sleptons can go down to 200 GeV and no analysis would be sensitive in this class of models. Above here you have two-body decays and similar constraints as before. Below here, in parts of the parameter space, we have sufficiently energetic leptons from the decays, or boosted Ws, so the analyses are again sensitive. This brings me to my conclusions. What you have seen is that the measured Higgs mass, in combination with the non-observation of new physics, is consistent with a relatively heavy spectrum in the SUSY sector. In unified models it requires relatively large A_0 parameters, so there is the danger of charge- and color-breaking minima, which should be looked at. In the general MSSM there is the possibility of compressed spectra, where squarks and gluinos around a TeV would still be allowed. In particular, in the class that I have discussed, natural SUSY, we have only those states light which contribute at the loop level to the Higgs mass; but this has the disadvantage that in the minimal version I cannot explain dark matter. This motivated me to go to extended gauge groups. There we also have neutrino physics automatically included; you can easily get a Higgs mass of 125 GeV without that large loop corrections, so that the charge- and color-breaking minima are less important. And then you can have this R-sneutrino LSP, whose wave function is in part divided between the sneutrino states; this would be compatible with dark matter relic density considerations, but because of its composition it avoids the dark matter constraints from the direct searches.
And then we have seen, as one example, what the bounds on the SUSY particles are if you have natural SUSY; that is given here. Thank you very much. Okay, thank you very much for this very interesting talk. Let's see. Okay, so I'm wondering if there are any questions from the audience? Yes. Oh, there we are, Dio, please go ahead. Hi there. How are you? I'm fine, how are you doing? One question about the last model with right-handed neutrinos. What happens with the staus? Because in principle you can have the staus lighter than the first generations of sleptons. Is there an exclusion there? Actually, for the staus the exclusions are much less important than for the first generations. The reason being that the taus are much harder to detect at the LHC due to the hadronic decay modes. Just today I talked with somebody from CMS, and they are currently finalizing their analysis. He told me that they get roughly half of the range for the staus compared to the first generations of sleptons. So for the type of considerations that we did here, the staus are much less important than selectrons and smuons. Even if we took the staus lighter than the left sleptons of the first generations, the picture would not change much. Okay, I don't know if there are any other questions from the audience. Well, I do have one question, which is about the D-terms given by these extra U(1) symmetries. So you said that you could enhance the tree-level Higgs mass to about 100 GeV. Yes. Right, so what gives you that upper bound on the tree-level contribution? When you look at the MSSM, the tree-level mass matrix is essentially not only the D-terms; you can express it through the Z mass plus the mass of the pseudoscalar Higgs boson. Right.
So in these extended gauge models we have more or less the same structure, but in addition to the usual D-terms of the SU(2) and the U(1) hypercharge, we have those of this extra U(1), which in this case we call U(1)_X. Then of course you can express the usual SU(2) and hypercharge contributions through the Z mass, the M_Z squared cosine squared 2 beta term, plus the contribution that comes from the additional gauge boson of the additional gauge group. Right. For the exact formula, I can give you the paper later on; I do not have it written down here. Well, but from that plot I was wondering, because you can always make the vev higher. No, no, the vev that appears here is the SU(2) vev. Ah. Oh, it is not the vev of the additional sector also? Oh, I see. But then I do not understand it. The reason is that what you have is a mixing of the Z and the Z prime, and this gives you this additional contribution here. I see. That was Oscar, by the way. Okay, I don't know if it works. But essentially what you have is an off-diagonal term which goes like the SU(2) vev times the vev of the extended sector, let us call it v_R; then you can make an approximate seesaw diagonalization and get this term squared divided by the mass squared of the extra gauge boson, which in this case goes like v_R squared, so you are left with the electroweak vev squared here. Okay. I have a question. So in these models, have you looked into models with an additional singlet that can give you an additional tree-level contribution to the Higgs mass and also give you additional neutralinos? No, we did not include an additional Higgs singlet in this class of models. But it is a curious feature that in the extended Higgs sector we do get an additional Higgs boson, which comes from the Higgs field giving the vev to the Z prime, and which is relatively light, meaning of order 100 GeV. Oh, okay. But it is not the singlet that you have in the usual NMSSM.
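Schematically, the seesaw diagonalization just described can be sketched as follows (a generic two-state sketch under the assumptions stated above, not the exact expressions of the paper):

```latex
% 2x2 mass-squared matrix with an off-diagonal entry ~ v * v_R:
\mathcal{M}^2 \;\simeq\;
\begin{pmatrix}
  M_Z^2     & \Delta^2 \\
  \Delta^2  & M_{Z'}^2
\end{pmatrix},
\qquad
\Delta^2 \,\sim\, g\, g_X\, v\, v_R ,
\qquad
M_{Z'}^2 \,\sim\, g_X^2\, v_R^2 .
% Approximate seesaw diagonalization: the shift of the light state is
%   (Delta^2)^2 / M_{Z'}^2  ~  (g g_X v v_R)^2 / (g_X^2 v_R^2)  =  g^2 v^2 ,
% i.e. the v_R dependence cancels and only the SU(2) vev survives.
```

This is why only the electroweak vev appears in the extra tree-level contribution, even though the heavy sector carries the large vev v_R.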
This would be similar to something you might know from models with an anomalous U(1), where an NMSSM-like Higgs singlet is charged under this anomalous U(1); that is a similar situation. And in these models you have an additional gauge boson, right? And it has to have a mass. Have you looked into how small this mass can be? Taking the bounds until last spring, roughly speaking you can go down to, let us say, 2.2 TeV. That is roughly speaking the bound. 2.2 TeV or GeV? TeV. GeV, okay. No, TeV, tera-electronvolt. Okay. Because, I mean, I guess you can also have supersymmetric gauge kinetic mixing of this new gauge boson. There is gauge kinetic mixing between this Z prime and the usual U(1) of hypercharge. This is actually important for the phenomenology, because it alters the couplings of the Z prime to the quarks and leptons. Yes. You see, even if you make the assumption that you do not have any gauge kinetic mixing at the high scale, you get a sizable mixing at the electroweak scale, and this actually changes the cross section by up to 40 or 50 percent. What about the other end of the spectrum? Can you make this dark photon really, really light? It is not a dark photon. Sorry, this U(1) gauge boson. No, you cannot make it light; it couples to the Standard Model fermions. Oh, okay. So the usual searches apply, but they get modified because of this gauge kinetic mixing. Okay. Thank you. You're welcome. Well, maybe, I am Oscar, maybe I can ask something also. Hi, Werner. Well, about this U(1)_X: I see that this U(1)_X is apparently a combination of U(1)_R and U(1)_(B-L), no? It is the combination orthogonal to the hypercharge. Mm-hmm. Orthogonal to the hypercharge. And so I assume that the Higgs doublet is not charged under this U(1)_X. That must be true, because if not, the vev that we have above could be the vev breaking U(1)_X, I would assume. Yes. So then, why do you have this one-fourth gauge coupling factor? That I do not understand.
You said it is because of the mixing; I would expect some kind of small mixing between the gauge bosons. Yes, but the mixing is really the electroweak vev divided by v_R. To answer this one: it is actually a mixing in the Higgs sector, and it is really a remnant of making this type of approximate seesaw formula in the Higgs sector that you get all of this from. I see, I see. So it is similar to the seesaw formula with the two vevs, I guess. With the two vevs, yes. Okay, and another question: how does it affect the other Higgs bosons? I must admit we never looked into that, so I cannot answer this. Well, probably it will not affect them much, but I do not know; it could be interesting. By the way, have you heard that there is a possible excess at 440? I had heard some rumors, but did not take them too seriously, I have to admit. Well, I don't know; I saw that it is published in a note, but they don't make a big fuss of it. It is in a paper. Okay, I did not know that it is in a paper. Yeah, I have the reference somewhere; I can send it later. It could be interesting for these things. Okay, that's it. Any other questions? Yes, I have a brief question first. You showed the diagram where you have the color- and charge-breaking minima. I assume that if you go to the NMSSM, those constraints get totally relaxed, right? Actually not; let me remember, I think it was a Korean group who looked into that. There are parts of the parameter space where it does get relaxed, but in other parts there are really still strong constraints. Okay, thanks. If you find the reference, it would be nice if you send it. I can do that. You are Federico, aren't you? Yes, yes, hi. Hi. Okay, and I have another brief question; I hope I state it correctly.
When you have your right-handed neutrinos, you enhance the Yukawa couplings, basically through a mixing with a hyperbolic angle, right? Now, this is giving you, with some seesaw mechanism, the masses of the neutrinos at tree level. At some point you will need radiative corrections to those mass terms, right? So how far can you go before that breaks down? We do not know yet; that is actually a project we plan to work on. So I cannot give you the answer now. Right, thanks. So how far did you actually go in your analysis? I didn't see that. What we took here was 20 GeV for the right-handed neutrinos. And, can you remind me, the number that we took for gamma_56, was it 8? Can this be? Yes, it was about 8, so that there would not be any issues with lepton flavor violation. So the Yukawa couplings here are still of order 10 to the minus 4. All right, so they are still pretty small. Yeah, I do not think that radiative corrections are really an important issue in this part of the parameter space we looked at here. Okay, any other questions, maybe? Okay, let me finish with one question, still on the D-term contributions. I was wondering what happens with the loop contributions to the Higgs mass, because the stops are also getting extra D-terms and things like this. So I was wondering, how do the loop corrections change? Let us go this way here. What you see here is essentially that the D-terms depend on this M_Z prime; where are the D-terms? So we have essentially the same D-terms as above here, but the pre-factor changes. I think for the right stop it is one over four, but I would have to look it up. You see, it depends on the state in question, because you already have a mass term there. I should say here that the gauge coupling was fixed to 0.94. It turns out you cannot go too much higher if you want to have M_Z prime up to 5 or 6 TeV and avoid tachyonic states. Okay.
Of course, the D-term contribution is then not that large; it does not get to order one, rather of order 0.1 or below. Okay. So, any last questions? Let me check the YouTube channel if there are any other... No, nobody from the live chat. All right, so there are no other questions. Oscar has sent the link to the paper that he talked about, so maybe we can discuss that offline with a bit more time. Okay, if that's it, I'd like to thank you, Werner, for giving this very good talk. Thanks. And sorry it took so long; I just realized that it was more than 40 minutes. Oh no, it's perfect, because the whole thing lasted exactly one hour. So, great. Anyway, before we log out, we'd like to send our virtual support to our friends and colleagues and everybody in Mexico, who are having a very hard time right now. So please hang in there. See you all. Thank you. See you around. Thank you.