The measure of unnaturalness of the Higgs mass, which is in turn the electroweak symmetry breaking scale, is the degree to which it is unpredictable. And you can understand this if you take, as we do when thinking about composite Higgs, a reductionist viewpoint on nature: the Higgs mass is a low-energy effective parameter which is derived from a true theory, fixed by that true theory's own parameters, and it comes as an integral over different energy modes. You may have contributions that stem from tree-level diagrams, you may have loop contributions, and in composite Higgs you have only loop contributions, so you really have these integrals here. And then the usual argument, if you want, a poor man's version of the Wilsonian idea, is that you can split the integral into low-energy and high-energy mode contributions, and the ones at low energies, below the Standard Model cutoff, well, those you expect to be able to predict, because you do know the virtual particles that run in the loop within the Standard Model: there is a top with its own coupling, and if you just focus on this part, you find what we usually call the quadratic divergence contribution to the Higgs mass.
The other piece we cannot compute, we can only estimate, and if the first one is very large, it means that the second one has to be almost equal and opposite to the first to give the Higgs mass of 125 GeV we see. This is fine-tuning, and it is given by the usual formula, which for a 125 GeV Higgs tells us that, absent any tuning, the Standard Model cutoff should be around half a TeV, okay? So that's the usual argument, and the usual implication of the usual argument is that if this degree of cancellation, or fine-tuning, is, let's say, 10 digits, well, it doesn't matter whether your theory formally predicts the Higgs mass: a composite Higgs with all that tuning would still formally predict the Higgs mass, but we would have no chance of actually computing it starting from the UV theory, because with a 10-digit cancellation going on between these two pieces, you would need an accuracy of at least 11 digits on each of them just to have an order-of-magnitude estimate of what is on the left-hand side. So that's a quantitative measure of how unpredictable the Higgs mass is, and of course, if the tuning is very large, then it may well be that the Higgs mass, at that point, is a fundamental input parameter, as it is in the Standard Model, or it has some other origin, which could be anthropic, or, as we recently saw, relaxion-like; but in any case, it's an important piece of information to know about the Higgs mass, and this we can do by studying where the cutoff of the Standard Model is, and I mean, we can do it experimentally. So, I was saying it's quantitative, and that's contrary to what many people say: if you have a partially unnatural theory with a tuning of 10, well, we will just not care about that; after doing a lot of work, we will be able to overcome this one-digit cancellation and make a prediction for the Higgs mass, and perhaps 40 years from now we will even forget that there was this little fine-tuning going around; while if the tuning is 1000, that's probably not okay.
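As a numerical illustration of this argument (a sketch, not from the talk; the overall loop coefficient and the definition of the tuning are convention-dependent), one can invert the quadratic-divergence estimate to find the Standard Model cutoff allowed by a given amount of tuning:

```python
import math

MH = 125.0  # observed Higgs mass, GeV
YT = 0.94   # approximate top Yukawa coupling

def tuning(cutoff_gev: float) -> float:
    """Delta = |delta_mh^2| / (mh^2 / 2), one common convention."""
    delta_mh2 = 3 * YT**2 / (8 * math.pi**2) * cutoff_gev**2
    return delta_mh2 / (MH**2 / 2)

def cutoff_for_tuning(delta: float) -> float:
    """Standard Model cutoff (GeV) at which the tuning reaches `delta`."""
    return MH * math.sqrt(delta * 4 * math.pi**2 / (3 * YT**2))

print(round(cutoff_for_tuning(1.0)))  # ~482 GeV: "around half a TeV"
print(round(tuning(2000.0)))          # a 2 TeV cutoff already implies tuning ~17
```

The point of the exercise is only the scaling: the cutoff grows with the square root of the tuning, so a 10-digit cancellation pushes it by five orders of magnitude.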
So the LHC, its future high-luminosity run, and perhaps even beyond are for sure an opportunity for testing naturalness, and even a certain degree of unnaturalness at the LHC is interesting to establish, even if what is found is the unnaturalness option. Let's go to composite Higgs. Why is this... okay, I hope. Yeah, it works, okay. Let's talk about composite Higgs. Composite Higgs is the idea that you can generate the Higgs as a bound state, okay. The confinement scale of the composite sector that delivers this bound state is called m_star, and it is TeV or multi-TeV scale; if you want, one over m_star is the size of the Higgs, okay. The hierarchy between the high scale, which we can take to be the GUT scale, for instance, and this one is generated, like it is in QCD, by dimensional transmutation: there is slow running close to a fixed point of a new composite sector. And the composite sector, well, first of all it contains the Higgs, which emerges as a Goldstone boson, and that's why it is much lighter than this m_star, which is, again, a few TeV, and this is due to a global symmetry group G that gets spontaneously broken by the composite sector. The composite sector is characterized by two parameters: one I already told you, it's the mass m_star, and the other is the typical coupling g_star among composite-sector resonances, and we know that theoretically there is, well, no reason why this coupling should be the maximal one of 4 pi; large-N theories, for instance, display the possibility of having this coupling freely ranging from zero to 4 pi, even though, well, we will take it large-ish, let's say above the couplings of the other sector that is going to appear here. And the other important parameter is f, okay. So f is the order parameter of the spontaneous symmetry breaking, and it need not coincide with m_star; actually, the ratio of the two gives you g_star, okay.
And the other sector is, well, the Standard Model sector, let's say, aside from the Higgs, of course, which lives here, and it contains the gauge fields, including actually also the gluons, okay, and the fermions, and this sector is weakly interacting, or at least more weakly interacting than this one. As an analogy, you may think of QED, okay, the QED photon and electron, coupled to the QCD Lagrangian in the infrared; that's exactly the way in which things behave. The way in which this is coupled, I mean, in which the interaction really proceeds, is rather straightforward for the gauge degrees of freedom: they are gauge particles, so they interact by gauge interactions, so there is, say, W_mu times the current, a current that lives partially also in this composite sector here, and this gives an interaction. While for the fermions there is the analogous thing, which is called partial compositeness, and it's a linear interaction between these elementary fermionic degrees of freedom and some composite operator. That's the way in which you can understand how things may work, especially for the generation of the Yukawa coupling of the top. The other important thing about these interactions is, of course, that, as you see, they break the global symmetry: the global symmetry is exact from the viewpoint of the composite sector, so the Higgs is exactly massless there; it gets some mass which is dictated by these couplings, which of course are also the same ones that make the Higgs interact with the gauge fields and the fermions, and so generate the masses after electroweak symmetry breaking, okay? So that's the generic picture, and the only last parameter you need to understand the phenomenology is xi, which is the ratio between the electroweak symmetry breaking scale squared, (246 GeV) squared, so that's fixed, and f squared, where f is, again, the order parameter for the breaking of G to H; and this parameter, we knew before the start of the LHC that it had to be somewhat below one, okay?
It can be shown that it actually ranges from zero to one, okay, given that it is the sine squared of something, but we knew already from electroweak precision tests that it had better be around or below 0.2 or 0.1, okay? And that for us is a source of fine-tuning, because in most models where, of course, you compute the Higgs potential, or at least you compute the form of this potential by knowing precisely how this structure is made, you find a minimum that generically points, say, at a random value between zero and one, and so if you want 0.1, there is some tuning you have to do in the Higgs potential, okay? This doesn't mean, to me, that we have yet explored all the possibilities: there are actual examples, even though they do not quite work, in which this xi could be naturally small, in which case, well, this would not be true; but for the rest, given our current understanding of the theory, small xi means large tuning, okay? Okay, here we go to the signatures that we have to discuss: there is, of course, the modification of the Higgs couplings, okay? The production of resonances, which can be electroweak-charged spin-1 vectors, and there can be fermionic top-partner states, so let's go through all of them.
Higgs couplings are particularly interesting because they are universally predicted. This assumes that you have only one Higgs, okay, which is not necessarily a good assumption, and so we should of course check at the LHC for the presence of more Higgses; and if you do have more Higgses, then these results, if you want, mix up, okay, with the effects of the other Higgses that mix with this guy. But if you have only one, there is only one coset, say, one symmetry-breaking pattern you can consider, which is the minimal one, SO(5) broken to SO(4), in which case you are able, in any model, independently of any detail, to predict the deviations, for instance, of the Higgs coupling to vectors, which is given by the Standard Model result times a deviation, the square root of one minus xi, which is just, I mean, the same cos(v/f) that Gustavo was talking about before; so that's universal in all models where the Higgs is a Goldstone boson. Fermion couplings are less sharply predicted, because they depend on one extra model-building ambiguity, which has to do with the representation of some fermionic operator, but still there is a discrete set of choices that you can explore, and they define trajectories in the experimental plane of kappa_V, the vector coupling modification, versus kappa_f, which you can compare with data. That's the first, say, the simplest exercise that we can do, and we show that basically, as far as CMS is concerned, there is not much constraint: xi could reach even 0.2 or more. ATLAS is more constraining, because it points in the wrong direction, but still 0.1 or 0.15 is allowed, and again, this is more or less what we expected already from electroweak precision tests, so for the time being, current measurements of Higgs coupling modifications are not particularly new with respect to the bounds on this scenario.
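For concreteness, the universal vector modifier and the two standard choices of fermion embedding (usually called MCHM4 and MCHM5; a sketch, not code from the talk) can be evaluated directly as functions of xi:

```python
import math

# Coupling modifiers of the minimal SO(5)/SO(4) composite Higgs,
# as a function of xi = v^2/f^2.  kappa_V is model-independent;
# kappa_f depends on the fermion embedding.

def kappa_V(xi: float) -> float:
    return math.sqrt(1.0 - xi)

def kappa_f_mchm4(xi: float) -> float:
    return math.sqrt(1.0 - xi)

def kappa_f_mchm5(xi: float) -> float:
    return (1.0 - 2.0 * xi) / math.sqrt(1.0 - xi)

for xi in (0.1, 0.2):
    print(f"xi={xi}: kV={kappa_V(xi):.3f}, kF(MCHM5)={kappa_f_mchm5(xi):.3f}")
```

So xi = 0.1 means a roughly 5% deviation in the vector couplings and, in MCHM5, about 16% in the fermion couplings, which is why precision coupling measurements translate directly into a reach on xi.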
The expected final reach of the LHC, which of course means assuming that all the ellipses remain centered on the Standard Model, is around xi equal to 0.1, and this we will know in some years, whether there is something or there is just the Standard Model. Spin-1 vector resonances are also very important, because they are a very direct manifestation, and they are very direct for the reason that in these theories we expect to find particles corresponding to all the operators that we have in the theory; in particular, the composite sector necessarily has an operator which is the global current, and even forgetting about custodial SO(4), there would be the global current associated with the Standard Model SU(2)-left group. And the same is true, actually, also in other previous attempts, like Technicolor: I mean, you always have this kind of operator, which leads to this kind of vector triplet particle. They can be searched for at the LHC, and their phenomenology is described by basically two parameters, which are their mass m_star and their coupling g_star, that I also mentioned before; and you see that f is connected with g_star and m_star, and so xi, which was v squared over f squared, the level of tuning of the theory if you want, can also be extracted from these two parameters.
The phenomenology is quite peculiar, because it's true that g_star is the intrinsic resonance coupling, so a large g_star makes the V, for instance, interact strongly with other composite objects, like the Higgs and the longitudinal components of the W and Z bosons, so the vertices of V to longitudinal W, Z and Higgs are enhanced by g_star; while, instead, the couplings to light quarks and leptons are suppressed at large g_star, for a reason which is very similar to, say, rho-photon mixing in QCD: the V interacts with the W only through a mixing, and this mixing is weighted by the coupling of the W divided by the coupling of the V, and this produces the reduction of these vertices. So, these particles tend to be difficult to see at the LHC, because the dominant production mechanism is Drell-Yan, and it is reduced at large g_star. And furthermore, given that at large g_star the decay of V to vector bosons grows, while the vertex to light quarks and leptons decreases, you do not expect strong bounds from dilepton final-state searches, because most of the branching fraction goes into diboson final states. And that's shown in this plot. So, you see that for an ordinary weakly coupled vector, the search that dominates is, in this case, a leptonic one, okay, in particular lepton plus neutrino, and you set a limit which can be around 3 TeV; however, it's not a generic, robust limit on vectors, and in particular not on the interesting ones, which have a coupling that, if I had to take the analogy, say, with QCD, should be strongish, and we expect it to be, say, in the 3 to 4 region, okay, for sure not down here.
So, as far as these models are concerned, as you see, diboson final-state searches, which are these blue contours here, are much more important; and furthermore, we have to pay attention that if you go too high, say, with the coupling, you are really not seeing anything yet at the LHC. And also notice that in this region here, well, all these searches are competitive, both dibosons and dileptons. So, the first run of the LHC, well, again, didn't tell us much, also because there are electroweak precision test constraints that tell us that, even if you relax them a little bit, at most you can live around this region, okay; but, well, going to this kind of mass is very unlikely, okay, because these particles contribute to the electroweak precision observables. Okay, this is just to compare with the case in which you have, for instance, another triplet, but one coming from a gauge theory, okay, just to show that in that case, of course, you don't have the mechanism by which the branching ratios to dileptons are reduced with respect to dibosons, and so dileptons are much more powerful in absolute reach, and they dominate all the way, and in particular, they exclude everything at least up to around 2 TeV, okay. So, the message is that composite strong vectors have weak bounds, and for this reason they have to be searched for more extensively at the next run, where, of course, the situation will improve.
So, assuming, of course, that nothing is found, these are the exclusion contours that you may find at the LHC with 300 inverse femtobarns, here in between; you see there is a huge improvement, not only in the mass direction, okay, you can test higher masses, but also in the coupling direction, which for us is very important. And you can also appreciate that the high-luminosity LHC, which is sometimes claimed to be basically useless, is not so useless in this context: it's true, it doesn't gain much on this axis, on the resonance mass, but as far as weak couplings are concerned (weak coupling, remember, sorry, this is the strong coupling direction, so weak coupling means small production rate), of course the high luminosity helps quite a bit. And on the same plane you can also superimpose the limit on xi, because I told you that xi is related to g_star and m_star, so all of what is above here will be excluded by the LHC run, down to xi around 0.1, okay, which of course will only arrive probably many years from now, because it will need a lot of luminosity and a lot of improvement in the analyses; and this is the high-luminosity projection of how much better it could do, and so all this region is covered by that; and then other precision machines like the ILC or TLEP and CLIC clearly measure the Higgs couplings better, and so they exclude more.
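The mapping from the resonance plane (m_star, g_star) to xi, used to superimpose these limits, follows from f = m_star/g_star, so xi = (g_star v/m_star) squared (a back-of-the-envelope sketch, not from the talk):

```python
# xi = v^2/f^2 with f = m_star / g_star, i.e. xi = (g_star * v / m_star)^2
V_EW = 0.246  # electroweak scale in TeV

def xi_from_resonance(m_star_tev: float, g_star: float) -> float:
    return (g_star * V_EW / m_star_tev) ** 2

# Example: a 2.5 TeV vector with g_star = 3 corresponds to xi ~ 0.087,
# right around the xi = 0.1 frontier discussed in the talk.
print(xi_from_resonance(2.5, 3.0))
```

This is why lines of constant xi are straight lines through the origin in the (m_star, g_star) plane, and why improving the coupling reach at fixed mass directly tightens the xi limit.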
Of course, it's an obvious exercise to generalize this to future colliders; for instance, at a 100 TeV collider you just get the same picture, but enormously enlarged in the mass reach and also in the coupling reach, and well, here there's not much to say: it's completely obvious that with results like this, which extend the reach in mass deep into the 10 to 20 TeV range, okay, basically independently of the coupling, and/or also using, say, indirect information coming from TLEP-like colliders, I think there is no doubt that this would settle the naturalness hypothesis, I mean, would say the final word on naturalness, in favor of unnaturalness. The question is whether the LHC itself will be enough to say this final word on naturalness. Okay, well, perhaps the most important source of constraints will come, it's not yet there, but it will come, from another kind of particle, the top partners, and the way this works is like the stops in supersymmetry. You have that the potential of the Goldstone boson is due to the breaking of the Goldstone symmetry, so it has to feel these insertions of the elementary couplings, and the ones which are biggest are those associated with the top quark; so the diagrams you have to compute, when you compute this potential, are something like an insertion of the coupling of the top quark, which is a linear coupling, by partial compositeness, to its own operator, which means to its own composite resonance particle, which is the top partner; and, well, you can count how many insertions you have and expand the potential.
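Schematically, this insertion counting organizes the Goldstone potential as (a sketch in the usual SO(5)/SO(4) parametrization; $a$, $b$ are model-dependent order-one coefficients, and $g_E$ is the elementary-composite coupling of the top):

```latex
V(h) \;\simeq\; \alpha \,\sin^2\!\frac{h}{f} \;+\; \beta \,\sin^4\!\frac{h}{f},
\qquad
\alpha \sim \frac{N_c\, m_*^4}{16\pi^2}\,a\,\frac{g_E^2}{g_*^2},
\qquad
\beta \sim \frac{N_c\, m_*^4}{16\pi^2}\,b\,\frac{g_E^4}{g_*^4},
```

with the minimum at $\xi = \sin^2(\langle h\rangle/f) = -\alpha/(2\beta)$. Since generically $|\alpha|/\beta \sim (g_*/g_E)^2 \gtrsim 1$, obtaining $\xi \approx 0.1$ requires cancelling the $\sin^2$ coefficient well below its natural size, which is exactly the tuning discussed earlier, and the two powers of $g_E$ in $\alpha$ versus four in $\beta$ are the "first term" and "second term" referred to in the next paragraph.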
Okay, the second term will become relevant in a couple of slides; for the moment I focus on the first one, which is of course the dominant one, and it gives a contribution to the Higgs mass which has precisely the same form that I, well, estimated on the previous slide, with the cutoff replaced by m_star, so roughly (m_star over 400 GeV) squared, were it not for the fact that what appears here is not directly the top Yukawa, but the elementary-composite coupling g_E; but, well, it can be shown that this g_E is necessarily larger than, or in some models at most equal to, y_top, okay? So this is the usual formula, and with it comes the usual interpretation that says that the fine-tuning is at least the Standard Model cutoff over roughly 400 GeV, squared, where the Standard Model cutoff is identified with the mass of the particles that give rise to the Higgs potential in the most significant way. So in supersymmetry we identify it with the stop; in composite Higgs, with the top partners. And the top partners are massive fermions, okay, of spin one half; they carry QCD color, because they have to mix with the top; they have suitable, well-defined electroweak charges, okay, and they also have a specific phenomenology, so we can say a lot about these particles and how they behave, or would behave, at the LHC. But the first thing we have to notice is that they have to be light, basically because of this formula; this is one way to appreciate that they have to be light. This plot shows the masses of the charge-5/3 and charge-2/3 states, in TeV, in a low-tuning model. And, well, these black dots are the points you want to look at, because they have the right, light Higgs mass. And you see that, in this theory, no matter what you do, at least one of these two states has to be below around 1.5 TeV. And that's general, okay: it's obtained by a scan in one model, but, as I argued before, it can be proven on general grounds.
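Using the numbers just quoted (the 400 GeV normalization is the one mentioned in the talk; the precise value is convention-dependent), the link between the lightest top-partner mass and the minimal tuning is a one-liner:

```python
# Minimal tuning implied by the lightest top partner, Delta >~ (m_T / 400 GeV)^2,
# with the 400 GeV normalization quoted in the talk (convention-dependent).

def min_tuning(m_partner_tev: float) -> float:
    """Minimal fine-tuning for a lightest top partner of mass m (TeV)."""
    return (m_partner_tev * 1000.0 / 400.0) ** 2

def max_partner_mass(delta: float) -> float:
    """Heaviest lightest-partner mass (TeV) compatible with tuning `delta`."""
    return 0.4 * delta**0.5

print(round(min_tuning(1.5)))          # a 1.5 TeV partner: tuning of about 14
print(round(max_partner_mass(10), 2))  # ~10% tuning: partner below ~1.26 TeV
```

This is the quantitative sense in which "low tuning" pins the lightest partner below roughly 1.5 TeV, while the vector resonances, which enter the potential less directly, can sit at 3 or 4 TeV.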
While in the same kind of theory you may still well have 3 or 4 TeV vector resonances, because they are less involved in the tuning and in the Higgs mass. If you do more tuning, you can decouple everything: this is xi equal to 0.1, but still you can never reach, say, 2 TeV; I mean, you cannot go above 2 TeV with both of these partners. So there is a clear direction, because these particles are colored, so they are easily produced at the LHC; there are also other production mechanisms, single production, that can be exploited, so there is a well-defined program of testing this axis here, of testing these models. And, if you want, in a simplified version of the model, which is made of only two parameters, we can put in all the available searches, okay; there is some reinterpretation going on here, so don't take this too seriously, but we are quite confident. This is the current situation at xi equal to 0.2: there is still space, but, I mean, we have only started scratching this possibility. Then we have xi equal to 0.1, which is much more open. And here is what the 13 TeV run can do with 20 inverse femtobarns and small xi, right, because the big one is already excluded if nothing is found, and you are left with this narrow strip here. This is what we can do with a hundred inverse femtobarns: we exclude 0.1 and we go down to 0.05, and so on and so forth. So, as we heard already, colored-particle searches are the most dangerous for these kinds of models at the LHC. We have also heard already (yes, I'll take just two more minutes), you've already heard from the previous talk, that model builders are already at work to escape possible future bounds, and in composite Higgs there is one simple way to do this, which actually defines a UV-complete theory that may be too complicated to be believed, or too crappy, but still, I mean, it is there.
And the idea is that you can engineer some selection rules that realize the tuning cancellation: remember that the first term in the Higgs potential was quadratic in the coupling; you can cancel it away by structures and symmetries, and you are left only with the quartic-order terms, which leads, of course, to a smaller Higgs mass and, in particular, what is important, to a contribution to the Higgs mass, so to a tuning scale, that is no longer the resonance scale as before, but the elementary coupling squared times f squared. So you disentangle the tuning from the resonance scale, and this can lead to predictions like resonances in the 5 to 10 TeV range for all the Standard Model-charged resonances, still with a tuning of around 10, and only completely neutral light states, which, I mean, could have (but this still has to be checked) invisible signatures at the LHC, and we may need future colliders. I think in the last three minutes I can leave you with a message of hope, concerning the ATLAS excess, which suggests that maybe we don't need all this. So, the ATLAS excess, which you probably know about, is an excess in the dijet invariant mass distribution, where the two jets are tagged as hadronically decaying electroweak bosons; you can discriminate a little bit between W and Z, and so this is the selection that corresponds to one W and one Z. You see it shows a sort of a little peak that then goes down, okay? So, I mean, it may look like a resonance. The local statistical significance at the peak, I think, is around 3.4 sigma, but with the look-elsewhere effect it becomes 2.8. And the same bump is seen in correlated channels, which are not independent at all, but they are not even completely correlated, in the sense that only 20% of the events in the excess fill all three histograms, okay?
But still, again, they are not independent, because the W or Z selections used to tag the jets are rather loose, everything can fit in; but still you see more or less the same behavior. So this fits with the very same triplet models I spoke about at the beginning, which I advocated as a generic and robust signature of composite Higgs, but really of any model which has a strongly coupled sector involved in electroweak symmetry breaking. And this is the region where it fits, okay? Notice that even though the peak of the excess is, say, at 2 TeV here, it's at 1.9 TeV on this other plot; remember that there are systematics, okay, so you can move points here and there, and also remember that the bins are a hundred GeV wide. So it's not obvious at all that this is a 2 TeV resonance. So we tried to consider these values: 1.8, 1.9, 2. We determined the only parameter in this theory that controls the production of this resonance, this triplet, which is made of a charged and a neutral state and decays to WZ or WW, which is what is relevant in this channel, all controlled by the same number, and we tried to see which values of the parameter reproduce this excess; and, well, we also cross-checked that it reproduces the excesses in these other bins here. This is interesting because this point of around 2 TeV, let's say, and a coupling around 3 was one we knew already: it was close to the expected sensitivity in several other channels, okay? So, for instance, there is CMS dijet. Actually, say, privately, people from CMS were the first to advocate the existence of a little excess. Their analysis actually doesn't have such a big excess, but what's important is that the signal hypothesis, with the one-sigma band, is, of course, not excluded, and it may also explain this little fluctuation.
It may also explain this other little fluctuation in the leptonic channel, with ZZ; it has to be seen whether this is enough to produce that, but I find that sometimes it's surprising what, say, a weak signal can do in terms of jumping above the exclusion. There is another one, which is HW by CMS, that also displays a bump, and there is also a dilepton search that displays a bump, but dileptons are more model-dependent, so, well, I will not talk about that. Some others do not fluctuate, okay? Some others do not fluctuate, the most dangerous one being WW by, what is this, by ATLAS, and maybe this is just how it should look, okay? Perhaps a weak signal should show up exactly in this way: searches with a comparable reach, some of them must fluctuate two and a half or three sigma up, some others should fluctuate less, and some others should not fluctuate at all, okay? I think it is worth studying it more, waiting for the new data that will tell us more about this excess. So, I will just read, I mean, I think the conclusions are just that composite Higgs is a playground for a naturalness test, and I outlined the most important signatures. Top partners are the best probes, and model builders, indeed, are trying to avoid possible future constraints, which matters for the question of whether the LHC will have the final word on naturalness; and the ATLAS excess, to me, is just a suggestion that, well, after all, there is natural, plausible new physics that could show up at 13 TeV, and so we should look for it and hope for the best. But still, I mean, remember: natural or unnatural the physics will be, the LHC will tell us, and this will be important information, okay? Thank you.

You can shout, perhaps.

Sorry for the question, I am very new to this field, but what's wrong with dimensional regularization? Using it, one would not have these quadratic divergences.
I mean, you see, when I compute the mass formula for the Higgs in my composite Higgs model, I don't have to do any renormalization, because it's a finite formula, okay? It's a perfectly finite formula, and so all these issues, I mean, if you look at the model that gives the physical origin to the Higgs mass, really disappear; and in this formula you recognize exactly that there is one piece that goes back to the Standard Model, because the theory has to reduce to the Standard Model, and another piece, and that these two pieces cancel. Okay, if you want, it's a very instructive pedagogical example.

Regarding your explanation of the ATLAS excess: I presume your SU(2)-left triplet has a rather large coupling to quarks?

No, no, not exactly. Well, large enough to be produced, okay?

And doesn't that lead to problems with Z decays?

With the what?

Z decays, to hadrons.

Z decays? Well, Z boson decays, you mean precision electroweak physics of the Z? Ah, sure, sure, well. So this was, more or less, so this sits at the boundary of what is reasonable given the electroweak precision tests. And this is shown also here: the dashed line in this plot, I mean, you see there are two lines, okay? The continuous line is the strict electroweak precision test, and the other one is the relaxed electroweak precision test, assuming some other contributions that arise in this theory, which is complicated, right? There are top partners floating around. And so 2 TeV sits a little bit to the left of the strict electroweak precision line, but still, you see, it's not dramatic, okay? Of course, the philosophy here is that, I mean, if I saw these particles, I would be sure that there are other contributions that can change the electroweak precision fit, okay? But again, it's not that the electroweak precision limit is at 3.5, no, okay, but it's close to that. No, because if the particle is there, the other states have to make the fit better.

Just a quick one.
On the ATLAS excess again: would you imagine that there will be strong couplings to t-tbar?

Sure, sure, sure, that's a, sorry, yes, that's a very good point. Now, this depends also on the model. I mean, the fact that we haven't seen top partners, we will have to see how this ends up, right? But assuming the structure of the theory is the one we believe, which means that it has partial compositeness, then there is another decay channel of these particles, which is the decay to tops or to top partners, which is enhanced with respect to what is assumed here. As far as the bounds are concerned, they are not strong enough: t-tbar final states, or tb, are not competitive. As far as the interpretation of the excess is concerned, you lower all the branching ratios a little bit because you have another channel, but this, well, you absorb into the couplings and it doesn't matter.

Another question or remark? Okay, then we thank the speaker again.