Okay, I think we can start. Hello, everybody. We are here to talk about physics. My name is Roberto Naderos, I am from the Instituto Físico de la Valencia, and I will be the host of this webinar. Today we are going to have a very interesting webinar about phenomenology and physics, but before we begin, let me announce that you can ask questions through the Q&A part of the streaming, and also on Twitter with the hashtag, as you can see here. This is important because last year we received only a few questions; this time we have a lot, and you can also find here the information about the document and the website where we centralize all the information and announcements. The speaker of this webinar is a postdoctoral researcher at Carleton; before that he did a PhD in physics at the University of Notre Dame in the United States, and after that a postdoc at TRIUMF in Vancouver. The title of his talk is "Compressing the Inert Doublet Model". So, if you're ready, Alejandro, you can start. How are you? — Very good, thank you for the opportunity. I'm glad this is being done for people who want to share their work and cannot travel much. I'm looking forward to the talk and I hope you all enjoy it. Let me put the talk up and share the screen. Okay, can you see it now? — No, for the moment we don't see the presentation, only your webcam. — Okay, sorry. It says I'm sharing the screen, so... Ah, there we go. — Okay, now we see it. — Okay, so, like Roberto said, I'll be talking today about one of the most studied models in phenomenology to date.
It's been very well studied, but I'm going to look at a very particular limit of this model that gives interesting phenomenology and that has escaped a lot of the direct detection efforts applied to it. This is an interesting limit because we will be able to probe it with future colliders, and maybe with Run 2 of the LHC at very high luminosity. The work was done with my colleagues in Vancouver, Nikita Blinov, Jonathan Kozaczuk and David Morrissey, and it's based on a paper that is almost in Physical Review; we just have to finish some revisions. Let me give you a little outline of what I'm going to talk about. First, I'm going to go over the inert doublet model, which probably most of you already know. Then I will tell you how you can naturally get small mass splittings between the exotic scalars of this model. Then I will go over some basic bounds that need to be addressed in order to establish the viability of the model, electroweak and Higgs bounds in particular. Then I will tell you whether this model provides a good dark matter candidate consistent with direct dark matter constraints and also indirect dark matter constraints. Then I will tell you about the current and projected LHC constraints, and what a future higher-energy collider can do to search for this model. So, let me start with a little review of the inert doublet model. It's basically an extension of the standard model that contains an additional electroweak complex scalar Higgs doublet. The important feature of this model is that the new Higgs doublet is parity odd under a discrete Z2 symmetry. If you want the Z2 symmetry to remain unbroken, then you have to arrange the parameters so that only H1, the standard model Higgs doublet, gets a VEV. This is very important because it means there is no mixing between the two Higgs doublets.
So the phenomenology of the Higgs that we observed in 2012, and that we are still confirming, remains pretty much standard model like; it doesn't get modified much, though I will tell you about ways it could be modified in small amounts. This symmetry is also relevant because, if it's unbroken, you don't have direct couplings of the new exotic scalar doublet to fermions. That's important because most of the phenomenology will depend on how the exotic scalars couple to the standard model Higgs and to gauge bosons. The last nice feature of this model is that, because the Z2 symmetry remains unbroken by construction, the lightest scalar that is not electromagnetically charged, whether it's H or A, the CP even or the CP odd scalar, will be a good dark matter candidate because it's cosmologically stable. This model has been very well studied. It was first proposed by Deshpande and Ernest Ma in the 70s, and since then people have studied many different aspects of it. Barbieri, Hall and Rychkov studied whether this model can improve the naturalness of the standard model; this was before we found a 126 GeV Higgs. People have also used this model to investigate deviations in the properties of the Higgs boson. The model also gives you very nice signatures with missing energy, so people have studied that aspect. And the model can give you a strong first-order phase transition and induce electroweak baryogenesis, which can address the fact that we see more baryons than antibaryons in the universe today. With that said, I'm going to study a very particular limit of this model, which is still very interesting because it has escaped a lot of bounds.
So, basically, the inert two Higgs doublet model is characterized by a scalar potential that is invariant under the transformation H2 → −H2, a discrete Z2 parity. The most general such potential is given by the following equation, and as you can see, you have additional couplings that do not appear in the standard model: lambda2, lambda3, lambda4 and lambda5. In this model, because the Z2 parity is unbroken, the standard model Higgs doublet and the exotic Higgs doublet do not mix, so the mass of the standard model Higgs is just as in the standard model: 4 lambda1 times the VEV squared, where v is 174 GeV. The exotic scalars, one CP even, one CP odd, and a pair of charged scalars, have masses that depend on mu2 and on the mixed couplings lambda3, lambda4 and lambda5. What I'm going to discuss today is whether there's a natural way to make these three scalars degenerate or close to degenerate. The first thing you should note is that if you extend the Z2 symmetry by a global U(1) acting on H2, then you can naturally set lambda5 equal to zero, and that automatically leads to degenerate CP even and CP odd scalars. This is basically the idea of a Peccei-Quinn symmetry. It's also relevant, as I will tell you in another slide, because this symmetry is non-anomalous. Another option is to extend the Z2 by a global SU(2) acting only on H2. This symmetry gives you lambda4 and lambda5 equal to zero, which leads to all three exotic scalars being degenerate: mH, mA and mH± will have the same mass. In the limit of small lambda4 and lambda5, the splitting between the neutral scalars is just proportional to lambda5, so as lambda5 goes to zero, the splitting goes to zero.
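As a quick numerical illustration of the spectrum just described, here is a minimal sketch in the convention the talk uses (v ≈ 174 GeV, so m_h² = 4λ1v²). The function name and the overall sign conventions for the λ's are my assumptions; conventions differ by factors of 2 in the literature.

```python
import math

V = 174.0  # Higgs VEV in GeV, in the convention used in the talk

def idm_masses(mu2, lam3, lam4, lam5):
    """Tree-level masses (GeV) of the exotic scalars H (CP even),
    A (CP odd) and H+ in the inert doublet model.  Sign conventions
    for the lambdas are illustrative."""
    mH2 = mu2**2 + (lam3 + lam4 + lam5) * V**2   # CP even
    mA2 = mu2**2 + (lam3 + lam4 - lam5) * V**2   # CP odd: differs only by lambda5
    mHc2 = mu2**2 + lam3 * V**2                  # charged: lambda4 + lambda5 splitting
    return math.sqrt(mH2), math.sqrt(mA2), math.sqrt(mHc2)

# lambda5 = 0 (the U(1) limit) makes H and A exactly degenerate;
# lambda4 = lambda5 = 0 (the SU(2) limit) makes all three degenerate.
mH, mA, mHc = idm_masses(100.0, 0.1, 0.0, 0.0)
```

One can see directly that turning on lambda5 alone splits H from A, while lambda4 plus lambda5 controls the charged-neutral splitting.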
And the splitting between the charged component and the CP even one is proportional to the sum of lambda4 and lambda5. So if only one of lambda4 or lambda5 goes to zero, that's not enough to render this splitting small; but with lambda5 set to zero, lambda4 could still be small, so there are ways to make this splitting small in a natural way by imposing an SU(2) global symmetry. However, like I said, a vanishing lambda5 is technically natural because this global U(1) acting on H2 commutes with all the gauge symmetries of the standard model, so this U(1) is non-anomalous. That means the running of lambda5 is proportional to lambda5 itself: if lambda5 vanishes at some high scale, it will vanish at all scales below that. However, lambda4 and lambda5 cannot be consistently set to zero together, because the global SU(2) is broken when you gauge the standard model gauge symmetry, SU(2) cross U(1). Let me show you how that works. The running of lambda4 contains a piece proportional to g squared times g prime squared. You can understand this because the global SU(2) can be seen as an extension of the custodial symmetry of the standard model, and this has been well studied: we know that custodial SU(2) is broken by hypercharge. The same thing happens with this global SU(2). So lambda4, which goes to zero when you impose the exact global symmetry at a high scale, is regenerated at lower scales because its running is proportional to g squared times g prime squared, the SU(2)L and U(1)Y couplings. And the running of lambda5, as I already said, is just proportional to lambda5. So lambda5 can be set to zero in a very natural way and it will remain zero, but lambda4 cannot be set to zero and remain zero.
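The structure of the running just described can be sketched numerically. This is a schematic toy integration, not the full one-loop system: it keeps only the two terms the talk highlights (a pure gauge piece in the beta function of lambda4, and a beta function for lambda5 proportional to lambda5 itself), with illustrative coefficients and gauge couplings frozen at their electroweak values.

```python
import math

def run_couplings(lam4_high, lam5_high, mu_high=1e16, mu_low=91.0, steps=10000):
    """Schematic one-loop running from mu_high down to mu_low (GeV).
    beta(lambda5) ~ lambda5 (the global U(1) is non-anomalous), while
    beta(lambda4) contains a pure gauge piece ~ 3 g^2 g'^2 that
    regenerates lambda4 even if it vanishes at mu_high.
    Coefficients are illustrative, gauge couplings frozen."""
    g2, gp2 = 0.65**2, 0.36**2   # SU(2)_L and U(1)_Y couplings squared
    lam4, lam5 = lam4_high, lam5_high
    dt = (math.log(mu_low) - math.log(mu_high)) / steps  # negative: running down
    for _ in range(steps):
        lam4 += dt * (3.0 * g2 * gp2 + 4.0 * lam4**2) / (16.0 * math.pi**2)
        lam5 += dt * (4.0 * lam5 * lam4) / (16.0 * math.pi**2)
    return lam4, lam5

# lambda4 = lambda5 = 0 at 10^16 GeV: lambda5 stays zero,
# but lambda4 comes back at the electroweak scale.
lam4_ew, lam5_ew = run_couplings(0.0, 0.0)
```

Even in this toy version, the regenerated lambda4 at the electroweak scale is of order a few percent, which times v² is a splitting of order a GeV, in line with the talk's conclusion that charged splittings near or above 1 GeV are the natural ones.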
So, that has very important implications, because one principle of model building is to not generate more fine-tuning than you already have. For example, in the plot on the right, the y-axis shows lambda4 at some high input scale as a function of the logarithm of that input scale. I've chosen lambda2, lambda3 and lambda5 at the mass of the Z, the electroweak scale, to be 0.1, 0.5 and 0, and a CP even exotic scalar mass of 100 GeV. The dashed lines correspond to a charged mass splitting of 1 GeV, and the coloring indicates larger splittings. Basically, what this plot tells you is that if you want lambda4 to vanish at some high input scale and remain close to zero at the electroweak scale, you need to fine-tune your model very much at the input scale, or have new physics appear very close to the electroweak scale. So, in order to avoid a large degree of fine-tuning in this model, you want to stick to values of the charged mass splitting close to or above 1 GeV; that keeps the model only mildly fine-tuned. Now, the mass splitting between the two neutral scalars, the CP even and the CP odd, could be set to zero, though as we will see later on there are limits on how small it can be. But for all purposes you can set these two scalars to be degenerate and you will have a technically natural theory. So, in the remainder of the talk I will discuss the phenomenology of a compressed spectrum: the two neutral scalars very close to degenerate and the charged mass splitting above 1 GeV. In particular, I'm going to focus on charged mass splittings between 1 and 5 GeV.
And I'm going to give you results for masses of these scalars from the lowest value that is allowed, let's say zero for now, up to 500 GeV, though to probe these guys at colliders we really focus on masses around 100 GeV. Now, a degenerate spectrum can render some of these particles very long-lived: the lightest one will be completely stable, and the next two lightest could be long-lived. That can have implications for big bang nucleosynthesis, and it has implications at colliders, because these guys will be produced and live for some amount of time. That time could be large enough that they escape the detector and appear just as missing energy, or small enough to leave a displaced vertex at the LHC that you can actually reconstruct from the decay products. There's been a lot of work on displaced vertices in the last two or three years, and the two LHC detectors are getting much better at reconstructing secondary and tertiary vertices. So it's very interesting to look at long-lived particles. I'm not going to focus much on that, but I will show you a plot of how long-lived these guys can be. This plot shows the lifetime of these scalars as a function of the mass splitting, taking H, the CP even exotic scalar, to be the lightest particle without loss of generality, so A is the next lightest. The red line depicts how A decays to H and a Z boson, with the Z decaying to fermions. Depending on the mass splitting, you can have an A particle that is very long-lived; but if you want it to be prompt at the collider level, you want a lifetime, a c-tau, on the order of one millimeter or below.
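The scaling behind that lifetime plot can be sketched with one line of arithmetic: A → H f fbar through an off-shell Z is a three-body decay through a heavy propagator, so the width grows like the fifth power of the splitting and c·tau falls like the inverse fifth power. The normalization below (c·tau ≈ 1 mm at a splitting of ≈ 0.5 GeV) is an assumption read off the plot described in the talk, not a computed value.

```python
def ctau_mm(delta_gev, ctau_ref_mm=1.0, delta_ref_gev=0.5):
    """Decay length of A -> H f fbar via an off-shell Z.
    Gamma ~ delta^5 for a three-body decay through a heavy propagator,
    so c*tau ~ 1/delta^5.  The reference point (1 mm at 0.5 GeV) is
    an assumption taken from the plot in the talk."""
    return ctau_ref_mm * (delta_ref_gev / delta_gev) ** 5

# halving the splitting makes the decay length 2^5 = 32 times longer
```

This makes clear why the prompt/displaced boundary sits so sharply around splittings of a few hundred MeV.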
And for those values of the lifetime, you want neutral mass splittings on the order of 500 MeV and above. For the charged mass splitting, as I've already said, focusing on values below 1 GeV renders the model fine-tuned. So by construction, imposing very little fine-tuning, the H plus will naturally be prompt: it will decay within the detector. So I'm not going to focus on long-lived particles, but as I will show later, there is phase space where A could be long-lived, which could have interesting implications, and we can study that in future work. So what are some of the bounds or constraints on how large the splittings can be? The most important basic bound, and I call it basic because it's based on data taken over years at LEP and now the LHC, is how large the contribution to the oblique parameters can be. In particular, this model can contribute significantly to the T parameter. The dependence on the masses themselves is only logarithmic and not so large, but the contribution to delta T is proportional to the mass splittings, so in the degenerate limit the contribution to delta T is small. But just for the sake of discussion, what would happen in a two Higgs doublet model with a large standard model Higgs mass? You would need a sizable delta T, because as you increase the Higgs mass, the electroweak fit pushes you toward a larger delta T. So this model would actually have been very useful: if instead of the Higgs we found, we had discovered a heavier Higgs, say 500 GeV, this model could fix the fit because it can give a positive delta T contribution. But in our case, in the compressed limit, delta T is small enough that it doesn't really constrain the model.
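The statement that delta T vanishes in the degenerate limit can be checked explicitly with the standard one-loop expression for the inert doublet contribution (the Barbieri-Hall-Rychkov form); this is a minimal sketch with the fine-structure constant and VEV values as stated assumptions.

```python
import math

ALPHA_EM = 1.0 / 128.0   # fine-structure constant near the Z pole (assumption)
V_EW = 246.0             # electroweak VEV in GeV (246 GeV convention here)

def F(x, y):
    """Auxiliary function of squared masses; vanishes when x = y."""
    if abs(x - y) < 1e-12:
        return 0.0
    return 0.5 * (x + y) - x * y / (x - y) * math.log(x / y)

def delta_T(mH, mA, mHc):
    """One-loop inert doublet contribution to the T parameter; masses in GeV.
    Built from splittings only: it vanishes exactly in the degenerate limit."""
    return (F(mHc**2, mA**2) + F(mHc**2, mH**2) - F(mA**2, mH**2)) \
        / (16.0 * math.pi**2 * ALPHA_EM * V_EW**2)
```

For a compressed spectrum like (100, 100.2, 101) GeV the result is tiny, far below electroweak-fit sensitivity, while a 100 GeV charged-neutral splitting gives a positive contribution of order 0.1, which is the effect the speaker says could have rescued a heavy-Higgs fit.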
The second basic bound is the Z decaying to the new exotics: by CP, the Z decays to a CP even and a CP odd scalar, Z → HA. People have studied this very well. There is a decay process that was measured well at LEP, the Z decaying into two fermions and two neutrinos, which is basically the standard model background to Z → HA, and that background amounts to roughly three events. So, in order to be consistent with these bounds, which predict a very small standard model background, you would like to stay above the kinematic threshold: you want the sum of the masses of the exotic CP even and CP odd scalars to be above the mass of the Z. You can arrange the masses so that the sum is less, but that imposes a certain hierarchy on the two masses; you could call that a prediction, but in order to have more accessible parameter space, we choose the sum to be greater than mZ and go from there. So from now on we focus our analysis on mH plus mA above the mass of the Z. The third basic bound I want to discuss is Higgs decays to exotics. Now that we've measured the Higgs at 126 GeV, we are in the business of measuring its branching ratios precisely, but there is still large room for new physics, because you can arrange couplings in such a way that the measured properties of the Higgs do not change. But requiring consistency with the invisible decay width of the Higgs, or with how much the Higgs is allowed to decay non-standardly, puts strong constraints on the combination of couplings lambda3 plus lambda4 plus lambda5.
Now, this is assuming the standard model Higgs only decays to two CP even exotics; if there is phase space for the Higgs to decay to two CP odd or two charged exotics, that puts constraints on the other combinations of parameters. For example, if h → AA is allowed, you get a constraint on lambda3 plus lambda4 minus lambda5, over 2, and if h → H plus H minus is allowed, you get a constraint on lambda3, because the coupling between the charged component and the standard model Higgs is proportional to lambda3. If you set mH to be 60 GeV or 10 GeV, then the relevant combination has to be less than roughly 0.012 or 0.07. This is an easily avoidable constraint in the compressed limit, so we don't worry much about it. But as I will show later, lambda3 is very much like lambda4 in that its running is driven by gauge couplings. Lambda3 can be small, but not too small: if it's zero at some high energy scale, it will be regenerated at low energy, mainly by gauge couplings, in the limit that all the other couplings are small. So lambda3 cannot naturally be too small. You can see this better when you look at the Higgs decaying to two photons through a loop of charged Higgses; that puts a bound on lambda3, which needs to be less than or about 1. So these are the basic bounds I wanted to discuss, and what I want you to take from this slide is that lambda3 cannot be that big, otherwise you start running into problems with Higgs precision measurements. Now let me tell you a little bit about the dark matter story of the compressed region. Without loss of generality, we assume the CP even guy is the lightest one, and it will be a good dark matter candidate because it's stable.
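To see where numbers like 0.012 come from, here is a hedged sketch of the invisible-width constraint. It uses a common normalization for the h → HH partial width, Gamma = lamL² v²/(8π m_h)·β; conventions for lamL differ by factors of 2 in the literature, and the SM total width and the assumed invisible branching-ratio limit are my inputs, not the talk's.

```python
import math

V_EW = 246.0        # GeV, electroweak VEV
MH_SM = 126.0       # GeV, Higgs mass quoted in the talk
GAMMA_SM = 4.07e-3  # GeV, approximate SM Higgs total width

def br_invisible(lamL, mH):
    """Branching ratio of h -> HH for a dark scalar of mass mH (GeV),
    with Higgs-portal coupling lamL.  Normalization of the partial
    width is one common convention; factors of 2 vary."""
    if 2.0 * mH >= MH_SM:
        return 0.0  # decay kinematically closed
    beta = math.sqrt(1.0 - (2.0 * mH / MH_SM) ** 2)  # phase-space factor
    gamma_inv = lamL**2 * V_EW**2 / (8.0 * math.pi * MH_SM) * beta
    return gamma_inv / (GAMMA_SM + gamma_inv)
```

With mH = 60 GeV, a coupling of about 0.015 already gives an invisible branching ratio above 20%, so per-cent-level couplings are indeed the ballpark of the constraint the speaker quotes.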
But the thing about the degenerate limit is that you have a lot of room for co-annihilations, and those suppress the relic abundance in this model. So it actually works against you: in places where you might get the right relic abundance, co-annihilations will suppress it. Co-annihilations play an important role, and I will show you where you can still have a good dark matter candidate. As I already said, it's natural to consider a small neutral splitting because lambda5 is naturally zero, and we keep delta plus-minus above 1 GeV. Now, the way we calculate the dark matter abundance is we vary delta plus-minus and also lambdaL, which controls the coupling of HH annihilating to standard model particles through the Higgs portal; it's basically lambda3 plus lambda4 plus lambda5, over 2. We vary those two parameters and we keep the neutral mass splitting just slightly below 1 GeV. If lambda4 and lambda5 are small enough, then lambdaL is basically lambda3 over 2, and as I already told you, Higgs physics says lambda3 cannot be that big. So when you look at these two plots, the curves with lambdaL close to 1 sit in the region where the Higgs can decay to exotic scalars, that is, exotic masses below mh over 2, and there you want lambdaL below 1, let's say. But elsewhere, lambda3 could be large enough that you can get the right relic abundance. There are a couple of things I want to show you in these plots. First, where co-annihilations with the charged Higgs are important: that's the blue line on the left. When the charged splitting delta plus-minus is small, say 0.1 GeV, co-annihilation with H plus has a very large cross section, and that strongly suppresses the relic abundance. The dashed line is the correct relic abundance as measured by Planck.
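Why the charged splitting controls the strength of co-annihilation can be seen from the standard Griest-Seckel weighting: a heavier partner contributes to the effective annihilation cross section with a Boltzmann factor exp(−x·Δ), where Δ is the fractional splitting and x = m/T ≈ 25 at freeze-out. A minimal sketch:

```python
import math

def coann_weight(delta, x=25.0, g=1.0):
    """Griest-Seckel weight for a co-annihilating partner with fractional
    mass splitting delta = (m_i - m_H)/m_H at freeze-out (x = m/T ~ 25).
    Heavier states are Boltzmann suppressed by exp(-x*delta)."""
    return g * (1.0 + delta) ** 1.5 * math.exp(-x * delta)

# fractional splittings for mH = 100 GeV:
w_compressed = coann_weight(1.0 / 100.0)   # 1 GeV charged splitting
w_split = coann_weight(50.0 / 100.0)       # 50 GeV charged splitting
```

For a 1 GeV splitting at mH = 100 GeV the charged state participates almost fully (weight near 1), while at a 50 GeV splitting it effectively decouples, which is why the compressed limit is precisely where co-annihilation suppresses the relic abundance.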
There are also regions where you have annihilation through a resonance: you have co-annihilation of H with A, and this is very important in the region where mH is equal to the mass of the Z over 2. There all curves collapse, they drop, and you do not get the right relic abundance. The same happens at the Higgs funnel, when the sum of the exotic scalar masses is equal to the mass of the Higgs. Then, above masses of 100 GeV, the annihilation is mostly into gauge bosons: two dark matter particles annihilate, through the standard model Higgs, into gauge bosons with one or both of them off shell. So you have two-, three- and four-body final states, and all of them have been implemented. You end up getting the right relic abundance for masses close to 500 GeV. So unfortunately, we cannot get the full relic abundance for masses below 100 GeV and above mZ over 2, but that's just a feature of the degenerate limit, where co-annihilations are very efficient at suppressing the relic abundance. That's not to say the model fails: it doesn't need to account for all the dark matter, but if it were required to account for all of it, it would fail in that window. The other aspect we need to study is dark matter direct detection, basically spin-independent scattering. Here are the two plots I want to show you, and I want to point out that the mass splitting between the two neutral exotics cannot be below 200 keV; so now we have a lower bound on how small that splitting can be. This is because there is a very large inelastic scattering off the nucleus, where the exotic H scatters into the pseudoscalar A. This has also been very well studied, and it's very large.
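The 200 keV scale has a simple kinematic reading: the Z-exchange scattering H + N → A + N is inelastic, and halo dark matter only carries enough kinetic energy in the reduced-mass frame to supply a splitting up to roughly half the reduced mass times v². A minimal estimate (xenon target and maximum halo velocity are my assumed inputs):

```python
def delta_max_kev(m_dm_gev, m_nucleus_gev=122.0, v_max_kms=780.0):
    """Largest neutral splitting (keV) for which inelastic scattering
    H + N -> A + N is kinematically open.  v_max ~ galactic escape
    velocity plus Earth's velocity; m_nucleus defaults to xenon (~122 GeV).
    Both numbers are rough assumed inputs."""
    c_kms = 299792.458
    mu = m_dm_gev * m_nucleus_gev / (m_dm_gev + m_nucleus_gev)  # reduced mass, GeV
    return 0.5 * mu * (v_max_kms / c_kms) ** 2 * 1e6  # GeV -> keV
```

For mH ~ 100 GeV this comes out just under 200 keV, so splittings above that shut off the dangerous inelastic channel entirely, consistent with the lower bound quoted in the talk.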
So in order for it not to be too large, you have to have a mass splitting above at least 200 keV. As you can see, you are basically okay with direct detection constraints in the region above mZ over 2, with splittings above 200 keV, if your lambdaL is not too large, because lambdaL controls the standard model Higgs coupling to the exotics. To be consistent with LUX, you need lambdaL roughly below 0.1. These curves have already been rescaled by the amount of dark matter the model produces, so they've been properly normalized. Okay, so now let me tell you about collider limits. The first collider limits that need to be addressed are the ones from LEP. LEP ran for a long time and searched for Higgses, standard model or not, very thoroughly. The dominant production modes of these scalars are e plus e minus going to HA through a Z, and e plus e minus going to H plus H minus, through a photon or a Z. What we did is fully recast limits from DELPHI, which we used because they provide the most information regarding acceptances and 95% confidence limits. We fully recast their searches for the lightest and next-to-lightest neutralinos, and we also used work by Lundström, Gustafsson and Edsjö, who did a very in-depth recast of LEP limits. We find that there is no improvement over the Z decay bound for neutral splittings delta0 less than 8 GeV. So if our neutral mass splittings are below 8 GeV, LEP cannot really constrain this model at all; in the degenerate limit we are interested in, LEP cannot rule out any masses of H and A. In order to put bounds on H plus H minus production, we actually had to look at the experimental papers, because there was no complete reinterpretation of LEP searches for chi1 plus chi1 minus.
So what we did is use the most complete LEP search, by OPAL, and here is the paper, which looks for H plus and H minus in a very general way. Actually, this search is not sensitive to the compressed regime, but we still did it to see, for any charged mass splitting, what LEP has to say about this model. We found the constraints to be weaker than those for supersymmetry, because the production cross sections for scalars are somewhat smaller. (I think I'm running out of time, right? — You are okay with the time. — Oh, okay.) Then we also recast an OPAL search in the compressed region. This search looks for missing energy plus soft charged tracks from the decays of the gauge bosons, so basically soft charged tracks from the decays of a W. In order to get constraints out of this, we used the efficiencies quoted for nearly degenerate charginos, because it's really hard to simulate the LEP detectors nowadays. We can do a really good job simulating more modern detectors, but there is a real lack of tools for simulating LEP detectors; I'm sure there are people who have done it, but we find that using the quoted chargino efficiencies does quite a good job. In the end, we find no constraints, because the production cross sections are much smaller: no constraints for charged mass splittings below 5 GeV. At the lower mass end we start getting close to an exclusion, but other than that, we find no constraints. We then looked at whether Run 1 of the LHC has constrained this model at all. Unfortunately for the LHC, its lepton searches place very large pT requirements on the leptons, which yields a very small acceptance and basically suppresses the signal.
But there is another channel which does not need high-pT objects, except for a high-pT jet coming from either initial or final state radiation: monojet searches. We find that the current monojet searches are not constraining at all. So what we do is look at whether a monojet search at the 14 TeV LHC can begin to probe the degenerate window. We basically scaled up all the cuts that ATLAS and CMS have used in their monojet searches, and we simulate all the production modes of these scalars. We veto leptons with pT greater than 7 GeV, so the search is very sensitive to the degenerate region, where the leptons from the decay products of the gauge bosons are very soft, and we require missing energy above 1 TeV and no more than two jets. As you can see in the plot on the right, using 3,000 inverse femtobarns of data, a 14 TeV LHC can begin to have sensitivity for masses up to roughly 70 GeV, and that's for exclusion; there's not enough data to make a discovery. This is also highly dependent on how you treat the systematic uncertainties on the background: we varied them between 1% and 5%, with the solid black line being 3%. When the systematics are very small you can get a large signal over the square root of the background, but these are probably very optimistic expectations for the background, so we will really have to wait and see what the LHC does. So that's basically the LHC. What I want to mention now is that there could be improved sensitivity if, instead of requiring just a monojet, you demand an additional soft lepton from the H plus or the A decay, from the gauge boson. You can actually do that.
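The strong dependence on the background systematics mentioned above is easy to see with a toy significance formula: once the systematic uncertainty term (a fixed fraction of the background) dominates the statistical one, the reach saturates. The event counts below are purely illustrative, not numbers from the analysis.

```python
import math

def significance(s, b, sys_frac):
    """Toy signal significance S / sqrt(B + (sys_frac*B)^2).  For the
    huge backgrounds of a monojet search, the systematic term dominates,
    which is why the reach depends so strongly on the 1-5% assumption."""
    return s / math.sqrt(b + (sys_frac * b) ** 2)

# purely illustrative counts for a monojet-style selection:
s, b = 500.0, 1.0e5
z_1pct = significance(s, b, 0.01)
z_5pct = significance(s, b, 0.05)
```

With these toy numbers, going from 5% to 1% systematics improves the significance by nearly a factor of five, mirroring the spread between the curves the speaker describes.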
And also, for small mass splittings, another way to look for these guys in the very degenerate window of neutral splittings is that the A → H Z* decay will be displaced, as I showed you; and the c-tau there should be greater than one millimeter (I said less than earlier; that was a mistake). People have also looked at displaced decays in a very general way, and it would be good to recast those results for the inert doublet model, which we didn't do. But a naive expectation is that both scenarios will be difficult, because the leptons from the Z decay, or from the decays of an H plus or an A, in either the prompt or the displaced scenario, will probably be really soft. There have been studies on how soft these decay products need to be in order to be probed at the LHC, and you can take a look at them. What we find is that with Run 1 of the LHC you really cannot increase the sensitivity by requiring soft leptons; the acceptance just goes down too much. So let me tell you a little bit about future colliders, which is where you can actually expect to probe these models. I showed you that a 14 TeV LHC can do it, but you need a lot of luminosity and very small systematics. A 100 TeV proton collider has been shown to surpass the current LHC sensitivity to electroweakinos, and because these models are very similar in their phenomenology and final-state products, you can expect an enhanced sensitivity to the inert doublet model as well. What I show here is that with a monojet search at a 100 TeV proton-proton collider, again varying the systematics between 1 and 5%, you can start excluding masses all the way up to almost 200 GeV, and you can have a 5 sigma discovery already at 160 GeV.
This uses a very large missing energy cut of 5 TeV and a lot of luminosity, and I don't know how fast luminosity can be accumulated at a 100 TeV collider, but people are studying a 100 TeV collider in depth right now and it's something that might happen in the future, so it's a good thing to keep an eye on. But again, we need to be careful about the systematics and see what the experimentalists can actually do; 5% might be too optimistic an assumption for now. Another thing we can do is look at an International Linear Collider, an e plus e minus collider. People have already looked at models similar to the inert doublet model and made projections. The authors, Choi, Han, Malowski, Obieki and Wang, studied a very particular limit of the MSSM where you have a doublet analogous to the inert one: the left-handed slepton doublet. If you arrange your couplings and add additional D-terms so that the mass splitting comes out right, the sneutrino can be lighter than the selectron or smuon, whichever you want as your lightest state. Then you can make the following one-to-one correspondence: the sneutrino is basically the linear combination (H plus i A) over square root of 2, and the charged slepton, l tilde, is basically your H plus-minus state. In that setup, this doublet also has only direct couplings to gauge bosons, which is what you have in the inert doublet model. What these authors found is that you can have an exclusion of up to 160 GeV, or a discovery reach up to 140 GeV, for H plus H minus and HA production, using 500 inverse femtobarns at a center of mass energy of 500 GeV.
Now, this projection uses polarized beams, so again we will have to see whether an ILC is actually going to happen and what kind of beams it will use, and more work needs to be done once we know exactly where we're heading: a very high-energy proton-proton collider, or a lower-energy, high-luminosity, very clean e+ e- collider. So again, what I want to say is that all these models, and all these different limits of these models, which are natural in a way, are very good motivation to study, because they give momentum to the question of whether these high-energy colliders might become a reality after the LHC stops taking data. So let me summarize. I've shown you that mass degeneracies in the inert doublet model can arise in the presence of approximate global continuous symmetries, and some of these symmetries are actually very well motivated, such as a Peccei-Quinn symmetry in the case of the scalar and the pseudoscalar, or an SU(2) which is part of an enhanced custodial symmetry of a beyond-the-standard-model scenario like the two Higgs doublet model. And we know custodial symmetry is a very good approximate symmetry of the standard model, because that's what we observe, so any model of physics beyond the standard model should have some custodial protection to keep the gauge boson masses from receiving large, unphysical corrections. So these limits are very, very well motivated. I've shown you that light, electroweakly charged scalars are compatible with electroweak precision data, Higgs physics and dark matter experiments. So even though you might not be able to produce the full relic abundance that we observe today, it's clear that coannihilations help you avoid producing too much dark matter.
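The coannihilation remark can be made quantitative with the standard thermal-WIMP rule of thumb, Ωh² ≈ 3×10⁻²⁷ cm³ s⁻¹ / ⟨σv⟩: a larger effective (co)annihilation cross section means a smaller relic abundance. A back-of-the-envelope sketch, where the normalization is the usual textbook approximation rather than a result from the talk:

```python
# Back-of-the-envelope thermal relic abundance for a WIMP:
#   Omega h^2 ~ 3e-27 cm^3/s / <sigma v>.
# Coannihilation with nearly degenerate states raises the effective
# <sigma v>, which lowers the relic abundance (underproduction is safe;
# overproduction is excluded).
def omega_h2(sigma_v_cm3_s):
    return 3e-27 / sigma_v_cm3_s

# The canonical thermal cross section ~3e-26 cm^3/s gives Omega h^2 ~ 0.1,
# close to the observed dark matter density.
print(omega_h2(3e-26))  # ~0.1
```

This is only an order-of-magnitude estimate; a real analysis of the degenerate window solves the coupled Boltzmann equations with all coannihilation channels included.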
So whatever model of new physics is out there, if we do observe dark matter in future experiments, then we know that this inert doublet model has to be supplemented by more new physics in order to account for the full dark matter abundance. And maybe I want to make a side comment: people have studied the inert doublet model, or two Higgs doublet models in general, with additional Higgs multiplets in order to address the diphoton excess, and some of these extensions can maybe help you achieve the right relic abundance of dark matter. We will have to wait and see this summer whether or not the diphoton excess is real, and then you can actually extend these nice, well-motivated models to account for this new physics. So again, if the diphoton excess is real, we should expect more new physics than just a singlet scalar or pseudoscalar: maybe vector-like fermions, a dark matter candidate, and so on. So these are very exciting times. I have also shown you that we found no direct constraints on masses above mZ/2 for charged-neutral mass splittings below 5 GeV, and this is because the Z decay width bound is a very strong constraint, so you want to stay above mZ/2. Current LHC data is not sensitive, a projected 14 TeV LHC will need a lot of luminosity and very small systematics, and future-generation colliders do appear to be more promising. So I guess one direction of research should be proposing well-motivated search strategies to look for these particles. We have recast a study of a model that basically maps onto the inert doublet model, but it is not the same, it's a supersymmetric model, so we are now working on a complete analysis of the inert doublet model itself for future colliders. With that, I want to thank the organizers for the opportunity to present in the Latin American webinars on physics, and I look forward to
hearing some of your work in the future. Okay, Alejandro, thank you very much; it was a very interesting talk, in fact. Thank you, thank you. So I think it's time for the people participating in this Hangout session to ask questions if they want. I just want to tell the people following the streaming that you can ask questions through the Q&A system in Google Plus, or through Twitter as well. And if you are watching this video on YouTube after the webinar, you can also leave comments and ask questions that we can try to pass on to the speaker later. So, for the moment, who has questions for Alejandro from the Hangout session? I do have one. Yes, please. Alejandro, when you talked about the dark matter you showed this plot; can you go back to... let's see, the relic one. This one, yes. So in the right panel, the greenish curve, the one corresponding to lambda L equal to 1... Sorry about that, can you repeat that? So in the right panel you have this greenish curve, the one corresponding to lambda L equal to 1. Yes. Okay, you talked about these two funnels, but there's like a fourth one at 100-and-something. Is that annihilation into a pair of Higgses? What is that? Yeah, it should be, actually. It only shows up for this line and not for the other two, just because for those the coupling, the lambda L, is too small; lambda L is basically the coupling constant that controls the annihilation. So that line has an additional funnel, which you can see more clearly when the coupling constant is much larger. Yeah, let's see, this is the one that's a little bit above 100 GeV, right? Yes, it could be. Yeah, that's the Higgs funnel, right?
Yeah, annihilation into a pair of Higgses. Yes, that's it, into a pair of Higgses, yeah. And in the next slide, when you talk about direct detection... okay, no, the next one. Okay. Yeah, so again, in the right panel you have these boxes. What are those? Ah, sorry, I didn't comment on that, but like I mentioned in the beginning, even though we're looking at the degenerate window of this model all the way up to 1000 GeV masses, the LHC with the energy we have now is more likely to probe lower mass scales, so we were focusing really strongly on masses above mZ/2 and below 100 GeV. But again, if you really want to account for all the dark matter, you will need much larger masses. So it's just a box to show you that with a small enough lambda L you can be consistent with direct dark matter detection limits in that window. Okay, so is there anyone else who has a question? Because I also want to ask mine. In the beginning, when you were talking about how you can promote your Z2 to a U(1) or SU(2) symmetry to make the model more interesting: is it possible to do the opposite, to start from an SU(2) custodial symmetry, then break it, and finish with a Z2 that protects the dark matter from decaying? I don't know if you can comment on that. Yes, yes, you can do exactly the opposite. You can think of this U(1) symmetry and SU(2) symmetry as parent symmetries that break and leave you a remnant Z2 symmetry which is unbroken. In my opinion, that would be a more natural way to embed a model like this: you start with a larger global structure that is well motivated by the physics we see as a whole, and then you arrange for it to break while leaving this Z2 unbroken. So basically you just arrange the couplings so that H2 doesn't get a VEV. Does that answer the question?
Yeah, yes. But I have another one along the same lines. In this case, because the model is an inert dark matter doublet, this degeneracy is very interesting for neutrino physics, because there is a class of models called scotogenic in which you have the same setup: a Z2 symmetry that stabilizes the dark matter, plus right-handed neutrinos, and the dark sector gives mass to the active neutrinos through loops. It's not a tree-level process, it's a loop. And if I remember well, with this kind of degeneracy you can make some correlation with neutrino masses. I don't know if you have considered this. I am aware of two Higgs doublet models that do some sort of radiative seesaw, right? Yeah. I am aware of that, but I haven't actually looked at it. So, one of the problems with one-loop radiative seesaws, in my opinion, is... okay, I've worked on three-loop radiative seesaws; with three loops you already have a very strong suppression and you get a small mass. But one of the problems, I think, with one-loop radiative seesaws is that predicting a sufficiently small neutrino mass is actually pretty hard. But I've never looked at the two Higgs doublet model in the degenerate window and its implications for neutrino masses, so that would be an interesting thing to look at, and I will look at it. So, scotogenic, you call it, right?
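For reference, the one-loop neutrino mass in Ma's scotogenic model takes the standard form below (quoted from the literature, not derived in the talk). Note that it vanishes in the degenerate limit m_H → m_A, which is exactly the destructive interference between the CP-even and CP-odd scalars that comes up later in this discussion:

```latex
(\mathcal{M}_\nu)_{ij} \;=\; \sum_k \frac{y_{ik}\, y_{jk}\, M_k}{32\pi^2}
\left[ \frac{m_H^2}{m_H^2 - M_k^2}\,\ln\frac{m_H^2}{M_k^2}
     \;-\; \frac{m_A^2}{m_A^2 - M_k^2}\,\ln\frac{m_A^2}{M_k^2} \right],
```

where the M_k are the right-handed neutrino masses and m_H, m_A are the neutral inert scalar masses, so a small H-A mass splitting gives an extra suppression of the neutrino mass at fixed Yukawa couplings.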
Yeah, scotogenic. It was one of the first such proposals; it was a proposal by Ernest Ma. Okay, everything comes from Ernest Ma at this point. Yeah, I will see, you know, whether there will be some interference from the two particles in the loop, maybe, and that will... I don't know, I will look at it, yeah, thank you. Sorry, may I add something on this point? I mean, we have looked at these kinds of models, and you can have degenerate or non-degenerate scalars and it doesn't change much; you would change the parameters, but you can compensate by adjusting the Yukawa couplings. So basically you can have degenerate or non-degenerate scalars, and I don't think there is a significant difference between the two cases. Are you talking about the neutrino physics now, or about the model? Right, in the scotogenic models, when you have degenerate or non-degenerate scalars, I don't think the degeneracy constrains the neutrino physics. Yeah, I mean, I have to say that maybe it does, but again, I haven't looked at these models. But just as with B physics and kaon physics, when you have new physics altering some of these decays, and the new particles appear at the same scale, they tend to interfere with each other. So if you have two particles that are really degenerate and close in mass, they can contribute at the same level, and I don't know whether their contributions will interfere constructively or destructively. But again, I don't know; maybe you're right that it has nothing to do with the size of the coupling. I will have to look at it in more detail. But maybe you're right, yes. Yeah, in fact, just to make another comment (in fact, Diego is here, also part of this session), because I remember an article saying that in this kind of model, like the scotogenic ones, you can have a coannihilation effect that
is, instead of destructive, the opposite: instead of reducing the size of the annihilation cross section, coannihilation can enhance it. So instead of ending up with more dark matter, you reduce it, the opposite of the effect you have in your model. I was trying to figure out whether maybe you can play both effects against each other, and instead of having a very suppressed cross section you can raise or lower it. All these models need saving in some way; the idea, in my opinion, with these scotogenic models is that you want to save them in a way that predicts other things, like neutrino masses and so on. So yeah, I'm going to take a look at it, and then I'll probably bother you a bit and see if you have any comments. I don't know... let me just check the Q&A to see if we have some questions; give me a second. And, in principle, no, we don't have more questions there. I do have a little comment: if any of the listeners want to discuss this model or anything else, feel free to write me an email. Yes, we are going to put in the description of the YouTube video how to contact you. Okay, great. I'd prefer that they send us a private message that we can pass on, in order to avoid spam or something like that. Thank you so much. Another question, just out of curiosity: you were saying in the beginning that displaced vertices could be a very interesting strategy to look for this kind of model with high degeneracy. What is the status of these searches? Okay, so I worked on this about a year ago, and the status is that around the middle of last year ATLAS released a very, very in-depth search for displaced vertices; I mean, they covered all of the spectrum. So I think that search will put really strong constraints on this model if the displacements are between, let's say, one millimeter, or maybe a little bit above that, and 20
millimeters. I think if you have displacements in that range, this model is pretty much ruled out by that ATLAS search. The status of displaced vertices is that they have done a really good job of ruling out models that predict displaced vertices, to the point where, in my opinion, if the stop were just very slightly long-lived, suppose that's true, you could use a displaced vertex search together with existing prompt searches and pretty much rule out all of the parameter space with enough luminosity. I mean, you don't even have room left to arrange the neutralino mass so as to escape detection. So we should really, how do you call it, keep an eye on this next generation of displaced vertex searches, because they're doing a really good job, and in my opinion, when the displacements are between one and 10 millimeters, they complement prompt searches, so they actually help each other out. Okay, I have a paper where I discuss this in real detail. As long as the b-tagging efficiency is large enough, and as long as we can reconstruct vertices well enough, I think those two kinds of searches complement each other really well in the one to 10 millimeter range, and they're going to start hurting models big time, including this one. Okay, thank you. I mean, this is a very clear signature, right? So, say you produce an A and an H: the H is missing energy, the A will decay to an H and a Z, and the Z will decay to fermions. If the A is displaced, you just have to look for two soft fermions and be able to reconstruct a secondary vertex, right? And that search has been done. I don't know how soft or how hard the leptons need to be, or how hard the tracks need to be, because you're really looking at tracks, but that search has been done. So it's all about recasting, and I think it will constrain this model very much. So in principle it's going to be a very good thing. So we have a
question from the Q&A, from Diego, Diego Restrepo. He's asking whether there is any constraint from long-lived charged tracks at the LHC. Right, yeah, I guess so. Okay, so there have been searches for long-lived charged tracks, and there are constraints. So here you would have the H+ being long-lived; we didn't consider that, but I think if the H+ is long-lived enough, it will be ruled out, because there are constraints on charged tracks. I think both ATLAS and CMS did this search. Now, it's really hard to recast these things, because you need a model, you need to modify your Pythia, you need to do a bit of work, but I think if you naively apply the constraints from the LHC, you will rule this model out. And I want to mention that these long-lived charged track searches that have been done also tag other objects: you need a trigger in order to trigger on these events, and in most cases these are muons. But yeah, we didn't consider those constraints, because, like I said, a charged mass splitting below one GeV was unnatural in our framework, so we considered only charged mass splittings above one GeV. But if Diego wants a nice review of all the displaced vertex and charged track searches that have been done, he can just look at my paper on long-lived colored scalars. I'm just doing my own marketing here, but look at the introduction; I don't expect you to read the rest. In the introduction I have a summary of every single such search done to date with the 8 TeV LHC, so yes, there are constraints on charged tracks. Okay, hello, can I add something? Maybe Alejandro was right: when I looked at the formulas, in the scotogenic models you have a destructive interference between the A and the H, the CP-even and CP-odd scalars, so when the masses are degenerate, the mass-generating term goes to zero, and so basically you need
larger Yukawa couplings as the mass splitting goes to zero. That's nice at the beginning, but at a certain point they become too large and you run into other kinds of constraints; basically, when the Yukawa couplings reach values of order one, you get lepton flavor violating constraints. But I think maybe it's a nice motivation to have them kind of degenerate, I don't know, of the order of sub-GeV, because then you basically have another suppression, which is stronger, which is what you wanted from the beginning, right? Yes, okay. Yeah, I'm going to look at this. Do you have a reference for the main scotogenic model with these mass formulas? Well, I mean, the formulas are written in many places; I can send them to you afterwards, if you don't mind. Thank you very much. All right, thanks for looking into that. Very good. So, I guess, if we don't have more questions, it's time to stop the broadcast. First of all, I want to thank Alejandro for giving this webinar; it was very interesting, and with a lot of debate. Perfect, so I guess it's time to stop here. To the people following us: don't forget to subscribe to the YouTube channel, where you can see all the previous webinars. I hope to see all of you again in the next webinar. See you next time. Thank you.