OK. Can you hear me? Yes. So I am presenting Higgs results on behalf of ATLAS and CMS. Briefly, I will describe the data sample collected by ATLAS and CMS, then how Standard Model Higgs boson production works at the LHC. I will discuss the boson channels that allow us, for example, to measure the mass, the width, the CP properties and the spin. Then I will talk a bit about the fermionic decay channels, and then about cross-sections, branching ratios, couplings, and a bit about prospects.

OK. Basically all these measurements are based on the Run 1 dataset that was collected between 2010 and 2012. The total luminosity is basically 6 inverse femtobarns collected in 2011 at 7 TeV centre-of-mass energy, and 23 inverse femtobarns collected at 8 TeV. This is the delivered luminosity; the luminosity recorded by each experiment is a bit lower, of course, due to detector and trigger efficiencies.

Before going to the Higgs, these are other measurements of Standard Model cross-sections done at the LHC. You can see that we go from the total cross-section to jets; we have W and Z with and without jets; then you have tt-bar here, for example, here you have the Higgs, and here are basically the dibosons. These are all measurements that have been done, in this case by ATLAS (you have something similar from CMS) and compared to theoretical predictions. This shows, over a very wide range of processes and cross-sections, how well we understand the Standard Model. So we have a very nice picture of all the processes that are happening, which is good for understanding backgrounds.

OK. So going to Higgs boson production: it occurs basically through these different processes. The highest cross-section is from gluon-gluon fusion, which is this one. Then you have what is called vector boson fusion, which is this one; it is characterized by having two jets with a rapidity gap in addition to the Higgs in the detector.
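To give a feel for what these cross-sections and luminosities translate into, here is a minimal back-of-the-envelope sketch; the ~20 pb gluon-fusion cross-section and 20 inverse femtobarns of luminosity are round illustrative numbers, not the official values.

```python
# Rough event-yield estimate: N = sigma x integrated luminosity.
# The numbers below are round, illustrative values, not official results.

def expected_events(sigma_pb: float, lumi_inv_fb: float) -> float:
    """Expected number of produced events for a cross-section in pb
    and an integrated luminosity in fb^-1 (1 pb = 1000 fb)."""
    return sigma_pb * 1000.0 * lumi_inv_fb

# e.g. a ~20 pb gluon-fusion cross-section with ~20 fb^-1 of 8 TeV data:
print(expected_events(20.0, 20.0))  # -> 400000.0 Higgs bosons produced
```

This is before branching ratios, acceptance and efficiency, which is why even hundreds of thousands of produced Higgs bosons yield only a handful of clean events in some channels.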
Then you have associated production of a Higgs and a W; the cross-section is shown here, and it can be a W or a Z, of course. And finally ttH production, which has a low cross-section. Typically the uncertainties on the theoretical predictions are a bit higher for the gluon-gluon fusion process, coming basically from the QCD scale variations and from the PDFs; at least as regards the QCD scale, the uncertainty is smaller for the vector boson fusion process. Also, this 8% for gluon-gluon fusion is probably going down with newer calculations.

OK. Then on top of production there are decays. This mass of 125 GeV is very interesting because more or less all the possible decays have a significant branching ratio. The highest branching ratio is to bb-bar, which is a very difficult channel to detect. Following that we have WW and ZZ; if you look here you have tau tau, and if you go farther down you have gamma gamma, which is an important one. The width expected for the Higgs boson at this mass is very small, about 4 MeV, so below any experimental resolution.

So the first channel I would like to present is the Higgs into four leptons. Basically this is the channel with a very good signal-over-background ratio; the main background is Standard Model four-lepton production. It is a simple analysis: basically you calculate the invariant mass of the two lepton pairs. What is important experimentally is to have good lepton identification and also good momentum resolution to resolve the peak. This is the measured four-lepton mass spectrum, from ATLAS in this case. Here you see essentially this peak, which is basically from the Z decaying into four leptons; then at high mass you have the opening of the phase space for on-shell ZZ production, which is dominant in this region; and here you have the peak from the production of the Higgs going into four leptons. This is basically a very nice signal over a small background.
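The four-lepton invariant mass mentioned above is just the norm of the summed four-momenta. A minimal sketch; the lepton four-vectors are made-up numbers chosen to land at 125 GeV, not values from the analysis:

```python
import math

def invariant_mass(particles):
    """Invariant mass of a list of (E, px, py, pz) four-vectors, in GeV."""
    E  = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# Four massless leptons with zero net momentum and 125 GeV total energy
# (illustrative kinematics only):
leptons = [(40.0, 40.0, 0.0, 0.0), (40.0, -40.0, 0.0, 0.0),
           (22.5, 0.0, 22.5, 0.0), (22.5, 0.0, -22.5, 0.0)]
print(invariant_mass(leptons))  # -> 125.0
```

The momentum resolution enters directly here: any smearing of the lepton momenta broadens the reconstructed peak.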
Using this channel, here I show the exclusion range produced by CMS: basically a Higgs in the range 114 to 832 GeV is excluded, except in this small region around 125 GeV. And you can try to evaluate the signal strength, to check whether the number of Higgs bosons you observe is compatible with the Standard Model. The important point is that this depends on the mass at which you look, so you can make a plane of signal strength versus mass, and depending on the mass you choose you get a different signal strength. This is to be kept in mind for all the cases where I show a signal strength. In this case, for ATLAS, what is observed is 1.44, which means that the observed cross-section over the Standard Model cross-section is 1.44 with this error; so it is well compatible with one. And the same for CMS: very well compatible with one.

OK. So the other important channel is H to gamma gamma. Here, again, one expects a peak in the diphoton invariant mass, and there is a background here. This shows the diphoton invariant mass spectrum from ATLAS. Here you see mainly the backgrounds: this is jet plus photon, which includes jets misidentified as photons, and this is real gamma gamma production. You can see that basically you need good photon identification to reject jets, and of course there is this background that cannot be directly reduced, because it is real Standard Model gamma gamma production, so you have to try to find a peak on top of a smooth background. What is important is to have good photon energy resolution, but also angular resolution, to get the opening angle at which the two photons are produced. This is done with two slightly different approaches by CMS and ATLAS: kinematic methods are used to select the hard-scattering interaction among the many interactions occurring in the bunch crossing, while ATLAS also exploits the longitudinal segmentation of the calorimeter to point toward the correct vertex.
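For two massless photons the invariant mass reduces to m^2 = 2 E1 E2 (1 - cos theta), which is why both the energy resolution and the opening-angle (vertex) determination matter. A sketch with illustrative numbers:

```python
import math

def diphoton_mass(e1: float, e2: float, opening_angle: float) -> float:
    """Invariant mass of two massless photons: m^2 = 2*E1*E2*(1 - cos(theta))."""
    return math.sqrt(2.0 * e1 * e2 * (1.0 - math.cos(opening_angle)))

# Two back-to-back 62.5 GeV photons give m = 125 GeV (illustrative numbers):
print(diphoton_mass(62.5, 62.5, math.pi))  # -> 125.0
# A small error on the opening angle shifts the reconstructed mass,
# hence the effort on picking the correct primary vertex:
print(diphoton_mass(62.5, 62.5, math.pi - 0.02))
```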
Another important point, which is used in this analysis but also in the previous one, and is particularly important in H to gamma gamma, is categorization. So you don't simply do an invariant mass spectrum; you try to classify events into categories based on the expected signal over background, the kinematics, and photon identification variables. Then you can do plots like this, in which you weight each event by its S/(S+B) and search for a peak. These are the two plots from CMS and ATLAS; you can clearly see the peaks there. Both have a local significance above five sigma, and if you look at the signal strength, the data over the Standard Model rate, you see they are very compatible with unity.

The other diboson channel is H to WW. This is very sensitive from 130 to 200 GeV. In comparison with the previous channels it has the problem that you cannot have a complete reconstruction of the final state, because the neutrinos escape. You have a quite large background from Standard Model WW production, but also from tt-bar and Drell-Yan. What one can do to select the signal is exploit the characteristic angular correlation that we have in Higgs production: the Higgs is a scalar, so the spins of the two Ws have to be back to back, and due to the V-minus-A decay of the Ws the two leptons tend to go in the same direction.
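Because the neutrinos escape, the WW analysis cannot reconstruct an invariant mass and instead uses a transverse mass built from the dilepton system and the missing transverse momentum. A common definition is m_T^2 = (E_T^ll + E_T^miss)^2 - |p_T^ll + p_T^miss|^2, with E_T^ll = sqrt(m_ll^2 + |p_T^ll|^2); a sketch with made-up inputs:

```python
import math

def transverse_mass(m_ll, ptll, met):
    """Transverse mass of the dilepton + missing-ET system.
    m_ll: dilepton invariant mass [GeV]
    ptll: (px, py) of the dilepton system [GeV]
    met:  (px, py) of the missing transverse momentum [GeV]
    """
    et_ll = math.sqrt(m_ll**2 + ptll[0]**2 + ptll[1]**2)
    et_miss = math.hypot(met[0], met[1])
    sum_px, sum_py = ptll[0] + met[0], ptll[1] + met[1]
    mt2 = (et_ll + et_miss)**2 - (sum_px**2 + sum_py**2)
    return math.sqrt(max(mt2, 0.0))

# Illustrative event: 50 GeV dilepton mass, dilepton pT and MET back to back.
print(transverse_mass(50.0, (30.0, 0.0), (-30.0, 0.0)))
```

The signal accumulates at lower transverse mass than continuum WW, which is what the shape fit exploits.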
So the analysis requires two opposite-sign leptons and missing ET, then categories based on zero, one and two jets, then topological cuts on the dilepton mass, pT and delta-phi. What is very important is that you have to use dedicated background samples, control samples, to estimate the background, because you don't have a clear peak, so you need a good estimate of the background from data. When you put all this together you get a plot like this: this is the transverse mass of the system; this distribution is the Higgs signal and this is basically the other WW processes. When you subtract the background you have this kind of signal. If you look at the plot of signal strength versus mass, you can see that these measurements don't give a very strong constraint on the mass (you get this kind of shape), so what is done is you take the best mass from the other measurements, for example 125 GeV, and you look at the signal strength at that point. These are the signal strengths from CMS and ATLAS: again very compatible with one, and both experiments have a significance between 4 and 5 sigma.

So once you have measured all three of these processes, what the two collaborations did is try to get a precise measurement of the mass. What is important is that the response of the detectors has to be calibrated, and this was done to very good accuracy, at the per-mille level, exploiting known dimuon and dielectron resonances from other Standard Model processes. This, for example, is the mass scale for muons from ATLAS as a function of pT, where you see the Z; this is the mass spectrum from CMS, compared to Monte Carlo. So you see the calibrations are very good, and what is left is that the final error is basically statistically dominated. Here you have the results: from H to gamma gamma, ATLAS and CMS, and from four leptons, ATLAS and CMS. At the end they are basically compatible; there is some fluctuation at the 2 sigma level. And when you combine everything you get this mass, which is 125.09 GeV, with
something like 0.2 GeV statistical and 0.11 GeV scale (systematic) uncertainty.

OK, so on top of the mass you can of course try to measure the width. As we said, the expected width is 4 MeV, while the experimental resolution on these peaks is of order 1 to 2 GeV, so what can be done directly is only to set upper limits. At the moment the upper limits are 5 GeV for H to gamma gamma and 2.6 GeV for H to four leptons, and when you combine you go below 2 GeV.

OK, what you can also do is use a trick: exploit off-shell production to measure the width. Basically, when you take the Breit-Wigner of a resonance, you integrate what is in the peak and you look at what is in the tail. The cross-section in the peak depends on the width, while the cross-section in the tail does not, so you can measure the ratio of production in the peak region to the tail region and get a limit on the width in this way. And here you see that in the tail you actually have different processes that contribute: apart from the Higgs production there is the ZZ production, with some interference, and on top there is the qq-bar production of two Zs that also contributes. So the analysis is based on ZZ and WW production, and the results are shown here. Basically, OK, this explains what I was saying, and these are the limits that are obtained: the limit on the ratio of the Higgs width over the Standard Model width is at the level of 5, and very similar results are obtained by both CMS and ATLAS. In both cases these are a bit lower than what would be expected, and of course this depends a lot on the assumption of what the cross-sections are for the other processes, like qq-bar to two bosons, so it is a somewhat indirect measurement.

OK, so what can also be done with this sample is to study the spin and CP properties. Studying the kinematics and angular distributions of the decays in these three processes, one can measure J^PC, or at least test hypotheses that are alternative to the Standard Model 0-plus-plus. So the trick is, for example, you take the
Higgs into two Zs into four leptons. You have many angular distributions that you can use; looking at each one by itself you don't see very much ability to distinguish, for example, the Standard Model from the 0-minus hypothesis, but when you take many of them together in a multivariate discriminant you start to see some difference that can be distinguished experimentally. Then you combine the three channels and you get very good limits. So in practice the result is that everything is consistent with the Standard Model hypothesis. Spin 1 is of course excluded by the fact that you observe the decay into two photons; the 0-minus and 2-plus hypotheses are all excluded at more than 95% confidence level. This is shown in this table: these are the different hypotheses, and in particular for the tensor hypothesis you have to choose different models with different couplings to the quarks, for example, because these change the distributions.

OK, on top of excluding particles with a completely different J^PC, you can also try to check whether there are contributions to the Higgs couplings that are different from what you expect from the Standard Model. So you can try to write a generic Lagrangian for a scalar particle: here you have the couplings that correspond to the Standard Model Higgs, but then you have other couplings that are still CP-even but are not the Standard Model one, and then you have the CP-odd couplings. What is done, for example, is to try to put limits on these extra couplings. These are plots, for example from ATLAS, for some particular coupling like this one, and this is a summary from CMS that looks at the different channels; this A2 and A3 correspond basically to these particular couplings here. And you see that in the end, when you combine all the channels, this green line is what is allowed, and it is basically very consistent with the Standard Model.

OK, now we go to the search for the decays of the Higgs boson into fermions. The most promising channel is H to tau tau, so there is of course a
large background, particularly from Z to tau tau, and also from the dibosons; in this case we have a low cross-section and a large background. So in this search there is a combination of all the different possible channels, from tau tau going to leptons but also hadronic taus, and very crucial for all these measurements is a good identification of hadronic taus. This is, for example, a plot from CMS that shows the hadronic taus, where you see the visible mass of the tau and how well this is reproduced by the Standard Model expectation; so these experiments really have good identification for taus.

OK, so these different production modes are combined using signal over background in categories, as explained before. The other point is that, to reconstruct the mass in the case of taus, you have escaping neutrinos, so you need some method that exploits the kinematics and the missing energy to reconstruct the mass as well as possible. This is an example of the reconstructed tau tau mass for a cut-based selection, in which you see basically here the Z to tau tau and, on top of this, this excess; and when you subtract the background you have this kind of thing, some excess here that is broadly compatible with a Higgs expectation. These are different mass hypotheses for the Higgs. So the results, when you do a full multivariate analysis and combine the different channels of CMS, where you see the results for the different channels and the combination: you have these combined results that are compatible with the Standard Model, with precisions that are still low; I mean, the significances are at the three to, in this case, four sigma level, so it is basically an observation at the three sigma level.

So if you go then to H to bb-bar: this is a channel where we have a very large cross-section but also huge backgrounds from QCD, so what is done is that one has to use special production channels, like associated production with a boson or vector boson fusion, to increase the signal over background. And there have
been many cross-checks, in which one also tries to look at the very similar process of VZ production with the Z decaying into bb-bar. Here we see, for example, the invariant mass of the bb-bar pair from ATLAS, where you see in grey the VZ production and in red what we expect from the Higgs; this is the associated production. This is the vector boson fusion case from CMS, where you see this bump over a very large background. So again what is done is a multivariate analysis, and you combine different decay modes and different channels, and you end up with a combined result: basically a signal at the 2 sigma level for the associated production, consistent with the Standard Model, for CMS and ATLAS, and there is also a result from VBF from CMS. So this is what has been observed.

Now, all these things can be used to measure also cross-sections and compare to QCD predictions. This is a combined result from ATLAS where the cross-section has been measured and combined between four leptons and two photons, and this is compared to theoretical calculations. This is the standard one used in most comparisons, and this is a quite recent calculation done at the N3LO level, which shows that when you go to this number of loops you have a very small uncertainty from the QCD scale; the measured cross-section is a bit higher but still very compatible with the expectation. If we look at the differential cross-section, in this case as a function of the pT of the Higgs, you still see very good agreement between data and the different calculations.

OK, now the other important channel is ttH production. This is important basically because the coupling of the Higgs to tt-bar matters: it is already accessed in the loops of gg to H and H to gamma gamma, but of course in the loops you can have other contributions, so it is interesting to look at tt-bar at tree level. Here the cross-section is really small, but the advantage is that the signal over background is quite good.
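Several times here, per-channel signal strengths are combined into a single result. The real combinations use full likelihoods with correlated systematics; as a rough back-of-the-envelope sketch, an inverse-variance weighted average with hypothetical inputs:

```python
import math

# Naive inverse-variance combination of independent signal-strength
# measurements (mu, sigma). Real combinations use full likelihoods with
# correlated systematics; this is only an illustrative sketch.
def combine(measurements):
    weights = [1.0 / sigma**2 for _, sigma in measurements]
    mu = sum(w * m for w, (m, _) in zip(weights, measurements)) / sum(weights)
    sigma = 1.0 / math.sqrt(sum(weights))
    return mu, sigma

# Two hypothetical channels, mu = 1.4 +- 0.4 and mu = 0.9 +- 0.3:
mu, sigma = combine([(1.4, 0.4), (0.9, 0.3)])
print(f"combined mu = {mu:.2f} +- {sigma:.2f}")  # -> combined mu = 1.08 +- 0.24
```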
So what is done is again combining all kinds of decays. These are results from two leptons and three leptons, and here you also have, for example, the channels with hadronic taus; this is the combination from ATLAS, and this is something similar from CMS that also includes the bb-bar and gamma gamma results. At the end, the result from CMS is 2.8 plus or minus 1, so let's say about 2 sigma higher than what is expected in the Standard Model; the ATLAS results are compatible within one sigma with the Standard Model and with the CMS result. So this is something maybe interesting to look at in Run 2.

So once one has measured all these different channels, what can be done is to try to put together all these measurements and fit couplings and so on. What is done is that, for each of the measured channels, the data are classified based on the production mode, using selections dedicated to enhancing the signal from a particular production mode. These are basically the inputs that go into the combination, and when you do the combination you have basically these results. In this case, for ATLAS, you have the signal strength for the different decay channels; this is the combination in which you look at the decays, and you see all of this is compatible with the Standard Model. If you do an overall combined mu, you get this value that is very close to one from both experiments.

OK, what is a bit more interesting is to analyze these data in some phenomenological framework, let's say, that is called the kappa framework. Basically each Standard Model coupling is parameterized with a kappa factor that is one in the case of the Standard Model but can be different; one puts together all the measurements and measures these different kappa values. For example, if one looks at gg to H going to gamma gamma, one measures this combination of factors; each of the processes measures a different combination of these factors, from which one extracts the couplings to the different Standard Model particles, basically. So what
is important is that, if you assume that only Standard Model particles enter in the loops, then there are relations that connect the photon and gluon couplings to the other kappas. This is the example of H to gamma gamma: in particular, in this case you see that there is a negative interference between the top loop and the W loop that also allows you to measure the sign of these couplings. So these are the results on these kappa factors, these couplings, under the assumption of no beyond-the-Standard-Model decays and no beyond-the-Standard-Model particles in the loops. You see that everything is compatible with the Standard Model, and the level of precision ranges from about 15% to 20% or 30%.

OK, and you can also take these results and put them in this nice, let's say at least propagandistic, plot where you put the coupling strength as a function of the particle mass, and you see very well that the coupling strength is proportional to the particle mass, as expected for the Standard Model Higgs.

Then of course you can try to fix groups of these kappa factors: for example, you can assume that all the couplings to vector bosons have the same scaling and all the couplings to fermions have the same scaling, a kappa-V, kappa-F plane, let's say; this is basically the difference between Yukawa and gauge couplings. Assuming that there are no beyond-the-Standard-Model contributions to the width and to the loops, you can get these exclusion limits, where you have limits from all the different processes; this ellipse is the limit contour from ATLAS here and from CMS here, this is the best combination, this is the Standard Model point, and here is the Standard Model point in the case of ATLAS; both are within one sigma. If you project this onto one of the two axes, you have something like 6% precision on the vector coupling and 14% on the fermion coupling.

OK, so these measurements can be used in interpretations of models beyond the Standard Model. For example, if you
take the two-Higgs-doublet models, where you have basically the Higgs plus extra heavier Higgses, the pseudoscalar A and the charged Higgs. You can see that basically there is a relation between the parameters of the model, which are tan beta and alpha, the mixing angle between the two scalar Higgses, and the parameters that we have just constrained. So just from this measurement of the different kappa-V, kappa-u, you can set limits in the plane of tan beta versus cos(beta minus alpha) for this model, and you see that at low tan beta more or less everything is excluded except the Standard Model, while at large tan beta you have this region that is still allowed. You can also interpret this in the framework of the minimal supersymmetric Standard Model, which is a particular case of the two-Higgs-doublet model, and here, for example, you have the limit on tan beta versus the mass of the pseudoscalar A: you see that for large tan beta you basically have a limit of about 400 GeV. This is basically the plot that was shown in the previous talk.

This exclusion in kappa-F, kappa-V can also be interpreted in terms of composite Higgs models. In the case of these particular models you can basically relate xi to the couplings that have been measured, and here you have, for different values of xi, how you move in this direction in the plane; basically the allowed region is around 0.01. It is true that in the case of ATLAS we have a fluctuation in the opposite direction, while in the case of CMS this is more compatible; anyway, we will probably see in the next run.

Now of course one can also use these measurements to check the case of the Higgs decaying into invisible particles. Basically one can try to sum up all the observed decays and see if there is anything somehow left. Technically, what is done is that one takes the fit that I described before and keeps free the couplings related to the loops, because in this case we allow for non-Standard-Model
particles in the loops, and one also adds a new parameter, which is the branching ratio to invisible. You can fit these parameters and get limits on the invisible branching ratio that are about 30%.

One can do even more, and look directly at invisible decays. This is done, for example, by looking at a Z or a W that recoils against something missing, so against missing ET: for example, the analysis of an invisible Higgs recoiling against a Z, and this is the analysis in the vector boson fusion production mode looking at missing transverse energy, from CMS. These are the limits that you can obtain from these direct searches, which are a bit less stringent than the ones shown before; for the VBF limits they are again around 30% or 40%, let's say, you can read the numbers yourself. CMS combined the direct and indirect limits and obtained again something like 30%.

These limits on the invisible decays can have many interpretations, of course, depending on your favourite model. One is dark matter: there are these Higgs-portal models that predict a coupling of the dark particles to the Higgs, so in these models you can set limits on dark matter. So this is the dark matter mass, and these are the limits under different assumptions of a vector, fermion or scalar dark matter particle; you see that these limits are very competitive with direct-search limits for dark matter production. Of course these are only valid below one half of the Higgs mass, because you need the Higgs to be able to decay into the dark matter states.

So, just an overview of the next LHC years. In LHC Run 2 this year we will collect probably about 8 inverse femtobarns at 13 TeV centre-of-mass energy. Basically this will allow us to confirm the Run 1 results and will also give some sensitivity to high-mass Higgs states beyond the Standard Model. This is particularly true because here you can see the ratio of parton luminosities between 13 and 8 TeV: if you stay at the Higgs mass you have a factor of 2, so generally Higgs
cross-sections are a factor of 2 higher at 13 TeV, while if you go to 1 TeV you gain a factor of 10 in cross-section. For Run 2 we expect to collect about 100 inverse femtobarns by 2018, so we will have all these channels like tau tau and bb, and we will probably also observe ttH at at least 3 sigma; we can improve all the mass and coupling measurements. At the end of Run 2 the expectation is to have couplings measured at the level of 10%, and of course with the high-luminosity LHC we will also be able to look at very rare decays like H to mu mu and Z photon, measure the couplings at the few percent level, probably have an observation of double Higgs production, and of course do all kinds of searches beyond the Standard Model.

This is a projection of the precision that we can obtain on the couplings at the end of Run 2 and at the end of the high-luminosity LHC; I don't go into the details because I have basically said it already. The only thing I would like to stress is that Run 2 has started: we have already started to collect data, about 8 inverse picobarns so far, and we have very nice signals. This is a pi-zero signal from CMS; this is the dimuon mass spectrum from ATLAS, where you see all the Standard Model resonances; and this also shows that we already have very good calibration and very good data to Monte Carlo agreement, so this is very promising.

OK, these are the conclusions, which I don't know if I should go through, because they are basically repetitive. There is a very clear Higgs signal at a mass of 125.09 GeV; the spin 1, spin 2 and pseudoscalar hypotheses are basically excluded; we have signals for fermion decays, at the 3 sigma level for tau tau and the 2 to 3 sigma level for bb-bar; and we did all kinds of fits using the cross-section times branching ratio for the different channels, and everything is consistent with the Standard Model, but this also allows us to constrain the width and already to constrain some models beyond the Standard Model. So Run 2 has just started, and we will probably get better mass and coupling
measurements. Thanks.

[Question] On the slide just before, you had some data from CMS, so that's good news: it means they are seeing something. You know probably that there was this problem with the magnetic field. For pi-zeros you don't need the magnetic field, you just measure photons, probably you measure them even better than with the magnetic field, but is there any news about the problem?

[Answer] OK, I don't know, this is not official, but I think they say that they will probably ramp soon and this will work.

[Questioner] It's already working.

[Chair] OK, you know better, so we have the good news. Good, perfect. Any question in the meantime? So everybody is happy. Thank you again.